The extract, transform, and load (ETL) phase is the most difficult, time-consuming, and labor-intensive part of the data warehouse development life cycle. A solid ETL system is reliable, accurate, and fast.

Joy Mundy, co-author with Ralph Kimball of The Data Warehouse Lifecycle Toolkit (Second Edition) and The Kimball Group Reader (Second Edition), shows you how a properly designed ETL system extracts the data from the source systems, enforces data quality and consistency standards, conforms the data so that separate sources can be used together, and finally delivers the data in a presentation-ready format.

She will lead you through two days of learning how to:

  • Choose the appropriate architecture for your ETL system
  • Plan and design your ETL system
  • Build the suite of ETL processes
  • Build a comprehensive data cleaning subsystem
  • Tune the overall ETL process for optimum performance
  • Determine the role of Big Data in your DW architecture
  • And much more

This is not a code-oriented implementation course; it is a vendor-neutral architecture course for the designer who must keep a broad perspective. The course is organized around the Kimball Method’s 34 necessary ETL subsystems, each of which is developed in detail.

Why attend

This two-day course helps you understand all the factors necessary for effectively designing the back room ETL system of your DW/BI environment. It focuses on the critical processes within the ETL system that are often overlooked. Even if you don’t have an immediate need for every ETL subsystem on our list, it is likely that you will over time. By the end of this course, you will understand how your data warehouse ETL system can be built to anticipate these potential requirements.

Who should attend

This course is designed for those responsible for building the back room ETL system of a data warehouse environment, including ETL architects, ETL designers and developers, and data warehouse operational staff.


Since dimensional models are the ultimate ETL deliverables, some familiarity with the basic principles of dimensional modeling is necessary.




Understanding The Requirements

  • Business needs
  • Compliance requirements
  • Data profiling strategy
  • Data integration
  • Data latency
  • Archiving, lineage, and impact
  • Available IT skills and licenses
  • Coding vs. tool
  • Hadoop vs. RDBMS
  • The restaurant analogy
  • Defining the project
  • ETL development process flow

Extract The Data

Numbered items refer to the 34 subsystems taught in this course

  • (1) Profile data
  • Create the source-to-target map
  • (3) Extract data
  • (2) Capture changed data
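
Change data capture (subsystem 2) identifies which source rows are new or modified since the last extract. As a minimal illustrative sketch (not course material), one common technique hashes each source row and compares it against the hashes retained from the previous run; the key and field names below are hypothetical:

```python
# Sketch of hash-based change data capture. Assumes a hypothetical
# "customer_id" natural key; real systems persist the hash table between runs.
import hashlib

def row_hash(row):
    """Stable hash over a row's values, used to detect changed rows."""
    payload = "|".join(str(row[k]) for k in sorted(row))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def detect_changes(previous, current_rows):
    """Split the current extract into new, changed, and unchanged keys.

    `previous` maps natural key -> row hash from the prior extract.
    """
    new, changed, unchanged = [], [], []
    for row in current_rows:
        key = row["customer_id"]              # hypothetical natural key
        h = row_hash(row)
        if key not in previous:
            new.append(key)
        elif previous[key] != h:
            changed.append(key)
        else:
            unchanged.append(key)
    return {"new": new, "changed": changed, "unchanged": unchanged}
```

Hashing avoids comparing every column individually, at the cost of maintaining the hash store; other CDC options the course area covers include source timestamps, database logs, and triggers.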

Clean and Conform

  • (4) Data cleansing system and data quality screens
  • (5) Track error events
  • (6) Audit dimension, compliance tracking
  • (7, 8) Deduplicate and conform dimension data
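
A data quality screen (subsystem 4) is a test applied to incoming data, with failures recorded as error events (subsystem 5) rather than silently discarded. The following hedged sketch shows the idea only; the screen definitions and field names are hypothetical:

```python
# Sketch of data quality screens feeding an error event table.
# Screens and field names ("customer_id", "age") are hypothetical examples.
def screen_not_null(row, field):
    return row.get(field) is not None

def screen_in_range(row, field, lo, hi):
    v = row.get(field)
    return v is not None and lo <= v <= hi

def run_screens(rows):
    """Apply every screen to every row; collect an error event per failure."""
    screens = [
        ("customer_id not null", lambda r: screen_not_null(r, "customer_id")),
        ("age in 0..120",        lambda r: screen_in_range(r, "age", 0, 120)),
    ]
    error_events = []
    for i, row in enumerate(rows):
        for name, check in screens:
            if not check(row):
                error_events.append({"row": i, "screen": name})
    return error_events
```

Keeping the error events as data, rather than log text, is what makes downstream audit dimensions and compliance tracking possible.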

Deliver Dimension Tables

  • (10) Generate surrogate keys
  • (9) Manage slowly changing dimensions
  • (12) Populate mini-dimensions
  • (15) Create and manage bridge tables
  • (11) Manage hierarchies
  • (12) Maintain special dimensions
  • (17) Distribute dimension data
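
Managing slowly changing dimensions (subsystem 9) is central to dimension delivery. As an illustrative sketch only, a Type 2 change expires the current dimension row and inserts a new version under a fresh surrogate key; the in-memory "table" and column names below are hypothetical simplifications:

```python
# Sketch of SCD Type 2 processing on an in-memory dimension "table".
# Column names ("customer_key", "city", etc.) are hypothetical examples.
from datetime import date

def apply_scd2(dim_rows, incoming, today, next_key):
    """Expire the current row for a changed member and append a new version."""
    for row in dim_rows:
        if row["customer_id"] == incoming["customer_id"] and row["is_current"]:
            if row["city"] != incoming["city"]:     # tracked attribute changed
                row["is_current"] = False
                row["row_end_date"] = today         # close out old version
                dim_rows.append({
                    "customer_key": next_key,       # new surrogate key
                    "customer_id": incoming["customer_id"],
                    "city": incoming["city"],
                    "row_start_date": today,
                    "row_end_date": None,
                    "is_current": True,
                })
            return dim_rows
    return dim_rows
```

The surrogate key on each version is what lets fact rows keep pointing at the historically correct attribute values.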

Deliver Fact Tables

  • (13) Maintain transactional fact tables
  • (13) Maintain periodic snapshots
  • (13) Maintain accumulating snapshots
  • (13) Calculate derived or consolidated fact tables
  • (14) Look up fact table surrogate keys
  • (16) Handle late arriving data
  • (18) Deliver fact data
  • (19) Update performance aggregations
  • (20, 21) Deliver data to OLAP and other downstream consumers
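
The surrogate key pipeline (subsystem 14) replaces each natural key on an incoming fact row with the current surrogate key from the corresponding dimension. A minimal sketch, with hypothetical names, assuming the lookups have already been loaded into memory:

```python
# Sketch of a surrogate key pipeline step for one fact row.
# Lookup dicts map natural key -> current surrogate key; names are
# hypothetical, and real pipelines also handle late-arriving members.
def build_fact_row(source_row, customer_lookup, product_lookup):
    """Swap natural keys for surrogate keys; fail loudly on missing members."""
    try:
        return {
            "customer_key": customer_lookup[source_row["customer_id"]],
            "product_key": product_lookup[source_row["product_id"]],
            "sales_amount": source_row["amount"],
        }
    except KeyError as missing:
        raise ValueError(f"No dimension member for natural key {missing}")
```

Failing loudly here, instead of loading a fact row with a dangling key, is one of the design trade-offs the late-arriving-data subsystem addresses.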

Manage The ETL Process

  • (22) Schedule jobs and handle exceptions
  • (23, 24) Backup, recovery, and restartability
  • (25, 26) Version control and migration
  • (27) Monitor workflow
  • (28, 30-32) Miscellaneous management topics
  • (33) Compliance
  • (29, 34) Metadata, lineage and dependency


Joy Mundy

Joy Mundy has worked with business managers and IT professionals to prioritize, justify, and implement large-scale business intelligence and data warehousing systems since 1992. She leverages these consulting experiences when teaching DW/BI courses. She co-authored, with Ralph Kimball and other members of Kimball Group, many of the popular “Toolkit” books including The Data Warehouse Lifecycle Toolkit (Second Edition) and The Kimball Group Reader (Second Edition).

Joy began her career as a financial analyst, but soon decided that she enjoyed working with a wide variety of data. She learned the fundamentals of data warehousing by building a system at Stanford University, and then started a data warehouse consultancy in 1994. She worked at WebTV and Microsoft’s SQL Server product development team for a few years before returning to consulting with Kimball Group in 2004, until Kimball Group’s dissolution in 2016. Joy is now semi-retired, but loves teaching and the occasional consulting engagement. She graduated from Tufts University with a BS in Economics, and from Stanford University with an MS in Engineering-Economic Systems.


Can’t find the course date on our schedule? Does the existing course date not suit you?

Let us know when and where you'd like to attend and we'll notify you when sessions become available.


The fee for this two-day course is EUR 1.450 per person. This includes two days of instruction, lunch, morning/afternoon snacks, and course materials.

We offer the following discounts:

  • 10% discount for groups of 2 or more students from the same company registering at the same time.
  • 20% discount for groups of 5 or more students from the same company registering at the same time.

Note: Groups that register at a discounted rate must retain the minimum group size or the discount will be revoked. Discounts cannot be combined.

Copyright ©2020 quest for knowledge