Google Cloud Platform for Data Engineering: From Beginner to Data Engineer using Google Cloud Platform

By Alasdair Gilchrist

  • Release Date: 2019-10-22
  • Genre: Computers

Description

Google Cloud Platform for Data Engineering is designed to take the beginner on a journey to becoming a competent and certified GCP data engineer. The book is therefore split into three parts. The first part covers the fundamental concepts of data engineering and data analysis from a platform- and technology-neutral perspective; reading part 1 will bring a beginner up to speed with the generic concepts, terms and technologies we use in data engineering.

The second part is a high-level but comprehensive introduction to all the concepts, components, tools and services available to us within the Google Cloud Platform. Completing this section will give the newcomer to GCP and data engineering a solid foundation in the architecture and capabilities of the platform.

Part 3 is where we delve into the moderate to advanced techniques that data engineers need to know and be able to carry out. By this time the raw beginner who started the journey at the beginning of part 1 will be a knowledgeable, albeit inexperienced, data engineer. By the conclusion of part 3, however, they will have gained the advanced knowledge of data engineering techniques and practices on GCP to pass not only the certification exam but also most interviews and practical tests with confidence. In short, part 3 will provide the prospective data engineer with detailed knowledge of setting up and configuring Dataproc, GCP's version of the Spark/Hadoop ecosystem for big data. They will also learn how to build and test streaming and batch data pipelines using Pub/Sub, Dataflow and BigQuery. Furthermore, they will learn how to integrate the ML and AI Platform components and APIs, and how to connect data analysis and visualisation tools such as Datalab, Data Studio and AI Platform Notebooks, amongst others. They will also know how to build and train a TensorFlow DNN using the APIs and Keras and optimise it to run on large public data sets, and how to provision and use Kubeflow and Kubeflow Pipelines within Google Kubernetes Engine to run container workloads, as well as how to take advantage of serverless technologies such as Cloud Run and Cloud Functions to build transparent and seamless data processing platforms.

The best part of the book, though, is its compartmentalised design, which means that anyone from beginner to intermediate can join the book at whatever point they feel comfortable.