
HDP Developer: Enterprise Apache Spark I

MHM computer a.s., Prague
Price excl. VAT *
* Companies not registered for VAT and private persons will be invoiced including VAT of 21%. Please provide us with your company VAT number to receive the invoice excluding VAT.

Course information

This course is designed as an entry point for developers who need to create applications to analyze Big Data stored in Apache Hadoop using Spark. Topics include: an overview of the Hortonworks Data Platform (HDP), including HDFS and YARN; using Spark Core APIs for interactive data exploration; Spark SQL and DataFrame operations; Spark Streaming and DStream operations; data visualization, reporting, and collaboration; performance monitoring and tuning; building and deploying Spark applications; and an introduction to the Spark Machine Learning Library (MLlib).

Target Audience

Software engineers who want to develop in-memory, time-sensitive, and highly iterative applications in an enterprise HDP environment.


Prerequisites

Students should be familiar with programming principles and have previous experience in software development using either Python or Scala. Previous experience with data streaming, SQL, and HDP is also helpful, but not required.


Format

50% Lecture/Discussion

50% Hands-on Labs

Course Objectives:

  • Describe Hadoop, HDFS, YARN, and the HDP ecosystem
  • Describe Spark use cases
  • Explore and manipulate data using Zeppelin
  • Explore and manipulate data using a Spark REPL
  • Explain the purpose and function of RDDs
  • Employ functional programming practices
  • Perform Spark transformations and actions
  • Work with Pair RDDs
  • Perform Spark queries using Spark SQL and DataFrames
  • Use Spark Streaming stateless and window transformations
  • Visualize data, generate reports, and collaborate using Zeppelin
  • Monitor Spark applications using Spark History Server
  • Apply general application optimization guidelines and tips
  • Use data caching to increase performance of applications
  • Build and package Spark applications
  • Deploy applications to the cluster using YARN
  • Understand the purpose of Spark MLlib
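
Several of the objectives above hinge on the distinction between lazy transformations and eager actions, and on Pair RDDs. As a rough plain-Python sketch of those ideas (generator expressions standing in for lazy RDDs; no Spark installation assumed, and the sample data is invented for illustration):

```python
from collections import Counter

lines = ["spark runs on yarn", "spark caches data", "hdfs stores data"]

# "Transformations" are lazy -- nothing is computed yet. These generator
# expressions play the role of rdd.flatMap(...) and rdd.filter(...):
words = (w for line in lines for w in line.split())
kept = (w for w in words if w != "on")

# Pair-RDD style (word, 1) pairs, then a reduceByKey-like aggregation:
pairs = ((w, 1) for w in kept)
counts = Counter()
for word, one in pairs:   # this loop acts as the "action" that
    counts[word] += one   # finally forces evaluation of the pipeline

print(dict(counts))
```

In Spark itself the same pipeline stays distributed across the cluster and is only executed when an action such as `count()` or `collect()` is called.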

Lab Content:

  • Use common HDFS commands
  • Use a REPL to program in Spark
  • Use Zeppelin to program in Spark
  • Perform RDD transformations and actions
  • Perform Pair RDD transformations and actions
  • Utilize Spark SQL
  • Perform stateless transformations using Spark Streaming
  • Perform window-based transformations
  • Use Zeppelin for data visualization and reporting
  • Monitor applications using Spark History Server
  • Cache and persist data
  • Configure checkpointing, broadcast variables, and executors
  • Build and submit a Spark application to YARN
  • Run Spark MLlib applications
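
The final lab items cover packaging an application and submitting it to YARN; a typical `spark-submit` invocation looks roughly like the following (the jar name, class name, and paths are hypothetical placeholders, not taken from the course materials):

```shell
# Submit a packaged Spark application to a YARN cluster.
# my-spark-app.jar, com.example.MyApp, and the paths are placeholders.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  --num-executors 4 \
  --executor-memory 2g \
  my-spark-app.jar input/path output/path
```

`--deploy-mode cluster` runs the driver inside the YARN cluster; `client` mode would keep the driver on the submitting machine.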

For more information, download the data sheet.
