Practical Apache Spark, 1st ed.
Using the Scala API

Authors: Subhashini Chellappan, Dharanitharan Ganesan

Language: English

Approximate price: 63.29 €

In Print (Delivery period: 15 days).

Publication date:
Format: Print on demand
Work with Apache Spark using Scala to deploy and set up single-node, multi-node, and high-availability clusters. This book discusses the various components of Spark, such as Spark Core, DataFrames, Datasets and SQL, Spark Streaming, Spark MLlib, and R on Spark, with the help of practical code snippets for each topic. Practical Apache Spark also covers the integration of Apache Spark with Kafka, with examples. You'll follow a learn-by-doing approach: learn the concepts, practice the code snippets in Scala, and complete the assignments to gain overall exposure.

On completion, you'll have knowledge of the functional programming aspects of Scala and hands-on expertise in the various Spark components. You'll also become familiar with machine learning algorithms and their real-time use.
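
As a taste of that Scala style, here is a minimal, hypothetical sketch, not taken from the book, of the functional programming features the first chapter covers: immutable collections, higher-order functions, and function composition.

  // A minimal, hypothetical sketch (not from the book) of the functional style
  // the chapters build on: immutable collections, higher-order functions, and
  // function composition.
  object FunctionalBasics extends App {
    val prices = List(12.5, 7.0, 43.25, 3.1)

    // Pass behaviour as a value: withTax is a function from Double to Double
    val withTax: Double => Double = p => p * 1.2

    // map, filter, and sum are higher-order operations on an immutable List
    val taxedTotal = prices.map(withTax).filter(_ > 10.0).sum

    // andThen composes two functions into a new one
    val taxedAndRounded = withTax.andThen(p => math.round(p * 100) / 100.0)

    println(s"Taxed total over 10.0: $taxedTotal")
    println(s"Taxed, rounded price for 7.0: ${taxedAndRounded(7.0)}")
  }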

What You Will Learn
  • Discover the functional programming features of Scala
  • Understand the complete architecture of Spark and its components
  • Integrate Apache Spark with Hive and Kafka 
  • Use Spark SQL, DataFrames, and Datasets to process data using traditional SQL queries (see the sketch after this list)
  • Work with different machine learning concepts and libraries using Spark's MLlib packages
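
The following is a minimal sketch, not from the book itself, of the kind of Spark SQL and DataFrame work described above. It assumes a local Spark installation; the file name employees.json and the query are illustrative placeholders.

  // A minimal sketch, assuming a local Spark installation, of querying a
  // DataFrame with a traditional SQL statement. The file name employees.json
  // and the query are illustrative placeholders, not examples from the book.
  import org.apache.spark.sql.SparkSession

  object SqlOnDataFrames {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("SqlOnDataFrames")
        .master("local[*]")   // single-node mode, convenient for practice
        .getOrCreate()

      // Load a JSON file into a DataFrame and register it as a temporary view
      val employees = spark.read.json("employees.json")
      employees.createOrReplaceTempView("employees")

      // Query the view with plain SQL and print the result
      spark.sql("SELECT name, salary FROM employees WHERE salary > 50000").show()

      spark.stop()
    }
  }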

Who This Book Is For

Developers and professionals who deal with batch and stream data processing. 


Chapter 1: Scala - Functional Programming Aspects

Chapter 2: Single and Multi-Node Cluster Setup

Chapter 3: Introduction to Apache Spark and Spark Core

Chapter 4: Spark SQL, DataFrames, and Datasets

Chapter 5: Introduction to Spark Streaming

Chapter 6: Spark Structured Streaming

Chapter 7: Spark Streaming with Kafka

Chapter 8: Spark Machine Learning Library

Chapter 9: Working with SparkR

Chapter 10: Spark - Real-Time Use Case


Subhashini Chellappan is a technology enthusiast with expertise in the big data and cloud space. She has rich experience in both academia and the software industry. Her areas of interest and expertise are centered on business intelligence, big data analytics and cloud computing.

Dharanitharan Ganesan is a senior analyst with five years of experience in IT. He has extensive exposure to and experience with big data technologies: Apache Hadoop, Apache Spark, and various Hadoop ecosystem components. He has a proven track record of improving efficiency and productivity through the automation of routine and administrative functions in business intelligence and big data technologies. His areas of interest and expertise are centered on machine learning algorithms, statistical modelling, and predictive analysis.



Contains extensive coverage of machine learning algorithms with real-time code implementation using Spark MLlib

Explains the SparkR real-time module with code implementation

Covers Spark Streaming and examples of Spark integration with other big data components such as Kafka
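
As an illustration of that Kafka integration, here is a hedged sketch of consuming a topic with Spark Structured Streaming; the broker address, topic name, and console sink are assumptions for demonstration, and the spark-sql-kafka-0-10 connector must be on the classpath.

  // A hedged sketch of consuming a Kafka topic with Spark Structured Streaming.
  // The broker address localhost:9092 and topic name events are placeholders;
  // the job also needs the spark-sql-kafka-0-10 connector on its classpath.
  import org.apache.spark.sql.SparkSession

  object KafkaStreamSketch {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("KafkaStreamSketch")
        .master("local[*]")
        .getOrCreate()

      // Each Kafka record arrives with binary key and value columns
      val messages = spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", "events")
        .load()
        .selectExpr("CAST(value AS STRING) AS message")

      // Print each micro-batch to the console until the job is stopped
      val query = messages.writeStream
        .format("console")
        .outputMode("append")
        .start()

      query.awaitTermination()
    }
  }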