Data Engineering using Spark Structured API
What you'll learn
- Apache Spark Foundations and Spark Architecture
- Data Engineering and Data Processing in Spark
- Working with Data Sources and Sinks
- Working with DataFrames, Datasets, and Spark SQL
- Using IntelliJ IDEA for Spark Development and Debugging
- Unit Testing, Managing Application Logs and Cluster Deployment
Requirements
- Working Knowledge of the Scala Programming Language
- A Recent 64-bit Windows/Mac/Linux Machine with 8 GB RAM
This course does not require any prior knowledge of Apache Spark or Hadoop. We explain Spark architecture and the fundamental concepts in enough depth to help you get up to speed and grasp the content of this course.
About the Course
I created the Apache Spark 3 - Spark Programming in Scala for Beginners course to help you understand Spark programming and apply that knowledge to build data engineering solutions. The course is example-driven and structured like a working session: we take a live-coding approach and explain all the needed concepts along the way.
Who should take this Course?
I designed this course for software engineers who want to develop data engineering pipelines and applications using Apache Spark. It is also intended for data architects and data engineers who are responsible for designing and building their organization's data-centric infrastructure. A third group is managers and architects who do not work on the Spark implementation directly but work with the people who implement Apache Spark at the ground level.
Spark Version used in the Course
This course uses Apache Spark 3.x. I have tested all the source code and examples used in this course on the Apache Spark 3.0.0 open-source distribution.
Who this course is for:
- Software Engineers and Architects who want to design and develop Big Data engineering projects using Apache Spark
- Programmers and developers who aspire to grow and learn Data Engineering using Apache Spark