Implementing a Lakehouse with Microsoft Fabric (DP-601)
This course is designed to build your foundational skills in data engineering on Microsoft Fabric, focusing on the Lakehouse concept.
More Information:
- Modality: Virtual
- Provider: Microsoft
- Difficulty: Intermediate
- Duration: 1 Day
Course Information
If you enroll in this course at the listed price, you receive a free Official Exam Voucher for the DP-601T00 exam. The voucher is not included if you enroll through the Master Subscription; however, you can request to purchase the Official Exam Voucher separately.
About this Course:
This course is designed to build your foundational skills in data engineering on Microsoft Fabric, focusing on the Lakehouse concept. You will explore the powerful capabilities of Apache Spark for distributed data processing, along with essential techniques for efficient data management, versioning, and reliability using Delta Lake tables. You will also learn data ingestion and orchestration with Dataflows Gen2 and Data Factory pipelines. The course combines lectures and hands-on exercises that prepare you to work with lakehouses in Microsoft Fabric.
Audience:
The primary audience for this course is data professionals who are familiar with data modeling, extraction, and analytics. It is designed for professionals who are interested in gaining knowledge about Lakehouse architecture, the Microsoft Fabric platform, and how to enable end-to-end analytics using these technologies.
Course Objectives:
- Describe end-to-end analytics in Microsoft Fabric
- Create a lakehouse
- Ingest data into files and tables in a lakehouse
- Query lakehouse tables with SQL
- Configure Spark in a Microsoft Fabric workspace
- Identify suitable scenarios for Spark notebooks and Spark jobs
- Use Spark dataframes to analyze and transform data
- Use Spark SQL to query data in tables and views
- Visualize data in a Spark notebook
- Understand Delta Lake and delta tables in Microsoft Fabric
- Create and manage delta tables using Spark
- Use Spark to query and transform data in delta tables
- Use delta tables with Spark structured streaming
- Describe pipeline capabilities in Microsoft Fabric
- Use the Copy Data activity in a pipeline
- Create pipelines based on predefined templates
- Run and monitor pipelines
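To give a flavor of the objectives above, querying a lakehouse table with Spark SQL from a Fabric notebook cell might look like the following minimal sketch (the `sales` table and its columns are hypothetical examples, not part of the course materials):

```sql
-- Aggregate a (hypothetical) delta table named 'sales' in the lakehouse
SELECT item,
       SUM(quantity) AS total_quantity
FROM sales
GROUP BY item
ORDER BY total_quantity DESC;
```

In the hands-on exercises, queries like this run against delta tables that you create and populate earlier in the course, either in a Spark notebook using the `%%sql` magic or through the lakehouse SQL analytics endpoint.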
Prerequisites:
You should be familiar with basic data concepts and terminology.