Challenge Labs Collection - Exam DP-203: Data Engineering on Microsoft Azure
(LODS-CL-COL-DP203-1)
Challenge Labs are goal-oriented, short-duration, scenario-based hands-on exercises designed to extend and expand your learning experience, and an excellent way to provide additional skills development and assessment. As new challenge labs are released, you will gain automatic access to them for the duration of your subscription.
Target audience
- IT Professionals who are looking to keep up to date on features and functionality for a particular technology
- IT Professionals who are responsible for supporting multiple technologies
- IT Professional/Developers who are preparing for vendor certification exams
Course outline
Getting Started
- Getting Started with Azure Secure Data and Applications [Getting Started]
- In this lab, you will configure security for an Azure web app, virtual machine, database, and storage account. First, you will configure Application Insights for a web app, and then you will configure security for an Azure virtual machine. Next, you will create an Azure SQL database that uses Azure Defender for SQL, and then you will configure security for a storage account by using a shared access signature (SAS) key. Finally, you will configure security for an Azure Cosmos DB account. Note: Once you begin the challenge lab, you will not be able to pause, save, or exit and then return to your challenge lab. Please ensure that you have set aside enough time to complete the challenge lab before you start.
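Securing a storage account with a SAS, as in this lab, can also be scripted. Below is a minimal sketch using the azure-storage-blob Python SDK; the account name, container name, and key are hypothetical placeholders.

```python
# Sketch: generate a read-only, time-limited SAS for a blob container
# (azure-storage-blob SDK; account/container names are placeholders).
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

sas_token = generate_container_sas(
    account_name="mystorageacct",          # hypothetical account
    container_name="labdata",              # hypothetical container
    account_key="<account-key>",           # never hard-code keys in real projects
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)
print(f"https://mystorageacct.blob.core.windows.net/labdata?{sas_token}")
```

Appending the token to the container URL grants only the listed permissions until the expiry time, which is the access pattern the lab exercises.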
- Getting Started with Data Processing and Security [Getting Started]
- In this lab, you will implement data security for a dedicated SQL pool by using Azure Synapse Analytics, and then you will implement data security for an Azure Data Lake Storage Gen2 account. Next, you will develop a batch processing solution by using an Azure data factory, and then you will implement data security for an event hub. Finally, you will create an Apache Spark job in a pipeline by using an Azure Synapse Analytics workspace. Note: Once you begin the challenge lab, you will not be able to pause, save, or exit and then return to your challenge lab. Please ensure that you have set aside enough time to complete the challenge lab before you start.
- Getting Started with Data Storage, Monitoring, and Optimization [Getting Started]
- In this lab, you will optimize data distribution for a dedicated SQL pool by using Azure Synapse Analytics, and then you will optimize a hierarchical namespace for an Azure Data Lake Storage Gen2 account. Next, you will optimize a batch processing solution by using an Azure data factory, and then you will monitor streaming events for an event hub. Finally, you will monitor an Apache Spark job by using an Azure Synapse Analytics workspace. Note: Once you begin the challenge lab, you will not be able to pause, save, or exit and then return to your challenge lab. Please ensure that you have set aside enough time to complete the challenge lab before you start.
- Getting Started with Designing an Azure Data Solution [Getting Started]
- In this lab, you will design an Azure data solution. First, you will design an Azure Cosmos DB account, and then you will design an Azure SQL database and an Azure Synapse Analytics SQL pool. Next, you will design an Azure Data Lake Storage solution, and then you will design an event streaming solution by using an Azure Stream Analytics job and an Azure event hub. Finally, you will design a data pipeline in an Azure data factory. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Getting Started with Implementing an Azure Data Solution [Getting Started]
- In this lab, you will implement an Azure data solution. First, you will provision an Azure Cosmos DB account, and then you will provision an Azure SQL database. Next, you will enable Azure Data Lake Storage, and then you will implement event streaming by using an Azure event hub. Finally, you will create an Azure data factory, and then you will create a pipeline and a copy data activity in the Azure data factory. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
Guided
- Configure a Hierarchical Namespace for Azure Data Lake [Guided]
- In this challenge, you will configure a hierarchical namespace for Azure Data Lake. First, you will deploy a storage account that uses Azure Data Lake Storage. Next, you will design a hierarchical namespace for a data lake, and then you will create a container for blob data by using a hierarchical namespace. Finally, you will run performance benchmarks to optimize and ingest data for a data lake. Note: Once you begin the challenge lab, you will not be able to pause, save, or exit and then return to your challenge lab. Please ensure that you have set aside enough time to complete the challenge lab before you start.
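For reference, creating the container and directory hierarchy can be scripted once the account exists. A minimal sketch with the azure-storage-filedatalake SDK, assuming a hypothetical ADLS Gen2 account on which the hierarchical namespace is already enabled:

```python
# Sketch: create a file system (container) and a directory hierarchy in an
# ADLS Gen2 account (azure-storage-filedatalake SDK; names are placeholders).
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://mydatalake.dfs.core.windows.net",  # hypothetical account
    credential=DefaultAzureCredential(),
)
fs = service.create_file_system("raw")   # container with hierarchical-namespace semantics
fs.create_directory("sales/2024/01")     # true directories, not flat name prefixes
fs.create_directory("sales/2024/02")
```

With the hierarchical namespace enabled, renames and ACL changes apply to directories as single operations, which is what the lab's benchmarks measure against flat blob naming.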
- Create an Apache Spark Job in Synapse Studio [Guided]
- In this challenge, you will create an Apache Spark job in Synapse Studio. First, you will create an Apache Spark pool in an Azure Synapse workspace. Next, you will design a stream processing solution. Finally, you will create an Apache Spark job, and then you will add an Apache Spark job to a Synapse pipeline. Note: Once you begin the challenge, you will not be able to pause, save, or exit and then return to your challenge. Please ensure that you have set aside enough time to complete the challenge before you start.
- Deploy Azure Synapse Analytics [Guided]
- In this challenge, you will provision a new Azure Synapse Analytics SQL pool. First, you will deploy a Synapse SQL pool. Next, you will configure the SQL pool. Finally, you will test the data access. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Design an Azure Cosmos DB Account for High Availability [Guided]
- In this challenge, you will provision a new Azure Cosmos DB account that is designed for high availability. First, you will deploy an Azure Cosmos DB account. Next, you will create a container in a new database, and then you will add items to the container. Finally, you will enable high availability, and then you will test the configuration. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
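The database, container, and item steps map directly onto the azure-cosmos SDK. A minimal sketch with placeholder endpoint, key, and names:

```python
# Sketch: create a database, a partitioned container, and an item with the
# azure-cosmos SDK (endpoint, key, and names are placeholders).
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://mycosmos.documents.azure.com:443/", credential="<key>")
db = client.create_database_if_not_exists("retail")
container = db.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/customerId"),  # spreads load across partitions
)
container.upsert_item({"id": "1", "customerId": "c-42", "total": 19.99})
```

The high-availability settings themselves (adding read regions, enabling multi-region writes) are account-level management-plane configuration rather than part of this data-plane SDK.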
- Design an Azure SQL Database for Disaster Recovery [Guided]
- In this challenge, you will provision a new Azure SQL database that is enabled for automatic failover to support disaster recovery. First, you will deploy an Azure SQL database as the primary replica. Next, you will create a failover group that includes a secondary replica in another region. Finally, you will test a failover operation. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
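The failover group step can also be automated with the azure-mgmt-sql SDK. This is a hedged sketch rather than the lab's prescribed method: resource names and IDs are placeholders, and exact model names can vary across SDK versions.

```python
# Sketch: create an auto-failover group pairing a primary and a secondary
# logical server (azure-mgmt-sql; names, IDs, and models are assumptions).
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import (
    FailoverGroup, FailoverGroupReadWriteEndpoint, PartnerInfo,
)

client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")
poller = client.failover_groups.begin_create_or_update(
    resource_group_name="rg-dp203",
    server_name="sql-primary-eastus",            # hypothetical primary server
    failover_group_name="fog-sales",
    parameters=FailoverGroup(
        read_write_endpoint=FailoverGroupReadWriteEndpoint(
            failover_policy="Automatic",
            failover_with_data_loss_grace_period_minutes=60,
        ),
        partner_servers=[PartnerInfo(id="<secondary-server-resource-id>")],
        databases=["<primary-database-resource-id>"],
    ),
)
print(poller.result().name)
```

After creation, applications connect through the failover group listener endpoint, so a failover does not require a connection-string change.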
- Design an Azure Synapse Analytics SQL Pool [Guided]
- In this challenge, you will provision a new Azure Synapse Analytics SQL pool that uses workload management. First, you will create a Synapse SQL pool. Next, you will design the SQL pool for performance by using workload management. Finally, you will verify the workload management design. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Design Role-Based Access Control (RBAC) for Azure Data Lake Storage [Guided]
- In this challenge, you will design an Azure Data Lake Storage solution that uses role-based access control (RBAC). First, you will deploy a storage account that uses an Azure Data Lake Storage hierarchical namespace. Next, you will design a container for blob data. Finally, you will design RBAC for the data lake. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Enable Auditing for Azure Synapse Analytics [Guided]
- In this challenge, you will enable auditing for Azure Synapse Analytics. First, you will deploy a dedicated SQL pool in Azure Synapse Analytics. Next, you will design a data auditing strategy. Finally, you will implement auditing, and then you will test auditing of data access. Note: Once you begin the challenge, you will not be able to pause, save, or return to your challenge. Please ensure that you have set aside enough time to complete the challenge before you start.
- Enable Azure Data Lake Storage [Guided]
- In this challenge, you will configure Azure Data Lake Storage. First, you will deploy a storage account that uses an Azure Data Lake Storage hierarchical namespace. Next, you will configure a container for blob data. Finally, you will upload data into the data lake. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Enable Azure SQL Database Auditing [Guided]
- In this challenge, you will provision a new Azure SQL database that uses auditing for data security. First, you will deploy an Azure SQL database. Next, you will enable auditing at the server level, and then you will enable auditing at the database level. Finally, you will test the security access, and then you will view the audit log. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Enable Dynamic Data Masking for Azure Synapse Analytics [Guided]
- In this challenge, you will enable dynamic data masking for Azure Synapse Analytics. First, you will design a data security strategy. Next, you will access a dedicated SQL pool in Azure Synapse Analytics. Finally, you will implement and test dynamic data masking. Note: Once you begin the challenge, you will not be able to pause, save, or exit and then return to your challenge. Please ensure that you have set aside enough time to complete the challenge before you start.
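Dynamic data masking in a dedicated SQL pool is applied with T-SQL. A minimal sketch that runs the statements through pyodbc; the connection string, table, and column names are hypothetical.

```python
# Sketch: add masking rules to columns in a dedicated SQL pool via T-SQL
# (pyodbc; server, database, credentials, and names are placeholders).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;DATABASE=salesdw;"
    "UID=sqladminuser;PWD=<password>;Encrypt=yes;"
)
cur = conn.cursor()
# Expose only the first character of the email address to non-privileged users.
cur.execute("ALTER TABLE dbo.DimCustomer ALTER COLUMN Email "
            "ADD MASKED WITH (FUNCTION = 'email()');")
# Expose only the last four digits of the phone number.
cur.execute("ALTER TABLE dbo.DimCustomer ALTER COLUMN Phone "
            "ADD MASKED WITH (FUNCTION = 'partial(0,\"XXX-XXX-\",4)');")
conn.commit()
```

Users who hold the UNMASK permission continue to see the original values, which is how the lab's masked/unmasked test comparison works.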
- Enable Geo-Replication of an Azure SQL Database [Guided]
- You are a database administrator. You need to provision a new Azure SQL database that is geo-replicated for high availability and load balancing. In this challenge, you will first deploy an Azure SQL database as the primary replica. Next, you will create a geo-replicated database in another region as the secondary replica. Finally, you will test a forced failover operation. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Enable Partitioning for Azure Synapse Analytics [Guided]
- In this challenge, you will enable partitioning for Azure Synapse Analytics. First, you will design a data partitioning strategy, and then you will access a dedicated SQL pool in Azure Synapse Analytics. Next, you will implement data partitioning, and then you will run a query to verify that the partitions were created. Finally, you will switch a partition from a staging table. Note: Once you begin the challenge, you will not be able to pause, save, or exit and then return to your challenge. Please ensure that you have set aside enough time to complete the challenge before you start.
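Both partition creation and the partition switch are T-SQL operations in a dedicated SQL pool. A sketch under assumed table names; the staging table must match the target's schema, distribution, and partition boundaries for the switch to succeed.

```python
# Sketch: create a date-partitioned, hash-distributed fact table and switch in
# a loaded staging partition (pyodbc; all names and boundaries are placeholders).
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 18 for SQL Server};"
                      "SERVER=myworkspace.sql.azuresynapse.net;DATABASE=salesdw;"
                      "UID=sqladminuser;PWD=<password>;Encrypt=yes;")
cur = conn.cursor()
cur.execute("""
CREATE TABLE dbo.FactSales
(
    SaleKey BIGINT NOT NULL, CustomerKey INT NOT NULL,
    OrderDateKey INT NOT NULL, Amount DECIMAL(18,2)
)
WITH (
    DISTRIBUTION = HASH(CustomerKey),
    CLUSTERED COLUMNSTORE INDEX,
    PARTITION (OrderDateKey RANGE RIGHT FOR VALUES (20240101, 20240201, 20240301))
);
""")
# Metadata-only operation: moves the staged rows into the target instantly.
cur.execute("ALTER TABLE dbo.FactSales_Staging SWITCH PARTITION 2 "
            "TO dbo.FactSales PARTITION 2;")
conn.commit()
```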
- Enable Security for Azure Data Lake [Guided]
- In this challenge, you will enable security for Azure Data Lake. First, you will deploy a storage account that uses Azure Data Lake Storage. Next, you will create a container for blob data by using a hierarchical namespace, and then you will design access control list (ACL) and role-based access control (RBAC) security for the data lake. Finally, you will enable ACL and RBAC security for the data lake, and then you will test the secure solution. Note: Once you begin the challenge, you will not be able to pause, save, or exit and then return to your challenge. Please ensure that you have set aside enough time to complete the challenge before you start.
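The ACL half of this design can be scripted with the azure-storage-filedatalake SDK; the RBAC half (for example, assigning Storage Blob Data Reader) is a separate management-plane role assignment. A sketch with placeholder account, directory, and principal object ID:

```python
# Sketch: set POSIX-style ACLs on a data lake directory, including a
# named-user entry for one Azure AD principal (all identifiers are placeholders).
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://mydatalake.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
dir_client = service.get_file_system_client("raw").get_directory_client("sales")
# Owner rwx, group r-x, others none, plus read/execute for one principal.
dir_client.set_access_control(
    acl="user::rwx,group::r-x,other::---,"
        "user:00000000-0000-0000-0000-000000000000:r-x"  # placeholder object ID
)
```

Testing access as in the lab then comes down to authenticating as that principal and attempting reads inside and outside the permitted directory.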
- Enable Table Distribution for Azure Synapse Analytics [Guided]
- In this challenge, you will enable table distribution for Azure Synapse Analytics. First, you will deploy a dedicated SQL pool in Azure Synapse Analytics. Next, you will design a table distribution strategy. Finally, you will create a table distribution solution, and then you will verify data access. Note: Once you begin the challenge lab, you will not be able to pause, save, or exit and then return to your challenge lab. Please ensure that you have set aside enough time to complete the challenge lab before you start.
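The distribution strategy is declared per table in the CREATE TABLE statement. A T-SQL sketch executed through pyodbc, with hypothetical table names, contrasting the three options:

```python
# Sketch: hash, replicated, and round-robin tables in a dedicated SQL pool
# (pyodbc; connection details and table names are placeholders).
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 18 for SQL Server};"
                      "SERVER=myworkspace.sql.azuresynapse.net;DATABASE=salesdw;"
                      "UID=sqladminuser;PWD=<password>;Encrypt=yes;")
cur = conn.cursor()
# Large fact table: hash-distribute on a high-cardinality join key.
cur.execute("CREATE TABLE dbo.FactSales (SaleKey BIGINT, CustomerKey INT, Amount DECIMAL(18,2)) "
            "WITH (DISTRIBUTION = HASH(CustomerKey), CLUSTERED COLUMNSTORE INDEX);")
# Small dimension table: replicate to every compute node to avoid data movement.
cur.execute("CREATE TABLE dbo.DimRegion (RegionKey INT, Name NVARCHAR(50)) "
            "WITH (DISTRIBUTION = REPLICATE, HEAP);")
# Staging table: round-robin spreads loads evenly with no key to choose.
cur.execute("CREATE TABLE dbo.StageSales (SaleKey BIGINT, CustomerKey INT, Amount DECIMAL(18,2)) "
            "WITH (DISTRIBUTION = ROUND_ROBIN, HEAP);")
conn.commit()
```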
- Implement Azure Policy [Guided]
- In this challenge, you will implement Azure Policy. First, you will view a policy definition. Next, you will assign the policy. Finally, you will test the Azure policy. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Implement Dynamic Data Masking by Using Azure SQL Database [Guided]
- In this challenge, you will provision a new Azure SQL database that uses dynamic data masking for data security. First, you will deploy an Azure SQL database. Next, you will apply dynamic data masking to the email and account number columns. Finally, you will test the data security access. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Implement Self-Service Password Reset [Guided]
- In this challenge, you will implement self-service password reset (SSPR) for selected users in a new Azure Active Directory (Azure AD) tenant. First, you will create a new tenant, and then you will configure a group of users who will be permitted to use SSPR. Next, you will implement SSPR for the Azure AD group. Finally, you will test SSPR. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Install and Configure Azure AD Connect [Guided]
- In this challenge, you will configure synchronization of your Active Directory Domain Services (AD DS) user accounts to a new Azure Active Directory (Azure AD) tenant. First, you will create a new Azure AD tenant. Next, you will configure a global administrator account to use for Azure AD synchronization. Finally, you will install and configure Azure AD Connect. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Integrate a Cosmos DB SQL API Database [Guided]
- In this challenge, you will write code to access an Azure storage account with a private container, upload files to the account, generate a SAS token for the container, and test private access to the account. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Manage an Azure Event Hub [Guided]
- In this challenge, you will create an Event Hubs namespace, and then you will design a stream processing solution. Next, you will create an event hub for event streaming, and then you will configure security for the event hub. Note: Once you begin the challenge, you will not be able to pause, save, or exit and then return to your challenge. Please ensure that you have set aside enough time to complete the challenge before you start.
- Monitor an Apache Spark Job in Synapse Studio [Guided]
- In this challenge, you will monitor an Apache Spark job in Synapse Studio. First, you will create an Apache Spark pool in a Synapse Analytics workspace, and then you will design a stream processing solution. Next, you will create an Apache Spark job in Synapse Studio, and then you will publish the job. Finally, you will troubleshoot a failed Apache Spark job, and then you will monitor the Apache Spark job. Note: Once you begin the challenge lab, you will not be able to pause, save, or exit and then return to your challenge lab. Please ensure that you have set aside enough time to complete the challenge lab before you start.
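A job of the kind you publish and monitor here is typically a short PySpark script. An illustrative sketch; the storage paths and column names are placeholders, and in a Synapse notebook the spark session is created for you:

```python
# Sketch: a small PySpark batch job that aggregates raw sales into a curated
# zone (paths/columns are placeholders).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales-rollup").getOrCreate()

sales = spark.read.parquet("abfss://raw@mydatalake.dfs.core.windows.net/sales/2024/")
daily = (sales
         .groupBy("storeId", F.to_date("orderTimestamp").alias("orderDate"))
         .agg(F.sum("amount").alias("dailyTotal")))
daily.write.mode("overwrite").parquet(
    "abfss://curated@mydatalake.dfs.core.windows.net/sales-daily/")
```

Each run surfaces in the Synapse Studio Monitor hub under Apache Spark applications, where you can inspect stages, executors, and logs for the troubleshooting step.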
- Monitor Data in an Azure Data Factory Pipeline [Guided]
- In this challenge, you will monitor a new Azure data factory that supports a data pipeline. First, you will create a storage account to store diagnostic settings, and then you will design a monitoring solution for data storage and data processing. Next, you will deploy an Azure data factory, and then you will create a data pipeline. Finally, you will publish and monitor the data pipeline. Note: Once you begin the challenge lab, you will not be able to pause, save, or exit and then return to your challenge lab. Please ensure that you have set aside enough time to complete the challenge lab before you start.
- Monitor Stream Events by Using Event Hub [Guided]
- In this challenge, you will monitor an event hub that supports event streaming. First, you will create a storage account for storing diagnostics, and then you will design a monitoring solution for stream processing. Next, you will create an event hub for event streaming, and then you will configure an event grid subscription. Finally, you will monitor stream events for the event hub. Note: Once you begin the challenge lab, you will not be able to pause, save, or exit and then return to your challenge lab. Please ensure that you have set aside enough time to complete the challenge lab before you start.
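Routing the namespace's logs and metrics to the storage account is done with a diagnostic setting. A hedged sketch using the azure-mgmt-monitor SDK; the resource IDs are placeholders, and the exact model surface may differ between SDK versions.

```python
# Sketch: send Event Hubs namespace logs/metrics to a storage account via a
# diagnostic setting (azure-mgmt-monitor; IDs and names are placeholders).
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    DiagnosticSettingsResource, LogSettings, MetricSettings, RetentionPolicy,
)

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")
namespace_id = ("/subscriptions/<sub>/resourceGroups/rg-dp203/providers/"
                "Microsoft.EventHub/namespaces/ehns-dp203")
client.diagnostic_settings.create_or_update(
    resource_uri=namespace_id,
    name="diag-to-storage",
    parameters=DiagnosticSettingsResource(
        storage_account_id="<storage-account-resource-id>",
        logs=[LogSettings(category="OperationalLogs", enabled=True,
                          retention_policy=RetentionPolicy(enabled=False, days=0))],
        metrics=[MetricSettings(category="AllMetrics", enabled=True,
                                retention_policy=RetentionPolicy(enabled=False, days=0))],
    ),
)
```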
- Provision an Azure Cosmos DB Account [Guided]
- You are a database administrator. You need to provision a new Azure Cosmos DB account that has restricted access. First, you will deploy an Azure Cosmos DB account. Next, you will create a container in a new database, and then you will add items to the container. Finally, you will restrict access by using role-based access control (RBAC), and then you will test the configuration. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Provision Azure Data Factory [Guided]
- In this challenge, you will provision a new Azure data factory instance that supports a data pipeline. First, you will deploy an Azure data factory. Next, you will create an Azure SQL database that will hold the data output from a pipeline, and then you will author a copy data activity in a data pipeline. Finally, you will test the data pipeline. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
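The copy data activity can also be authored programmatically. A sketch using the azure-mgmt-datafactory SDK that assumes the factory and the two datasets named below already exist; all resource names are placeholders.

```python
# Sketch: define a pipeline with one copy activity in an existing data factory
# (azure-mgmt-datafactory; factory, dataset, and pipeline names are placeholders).
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureSqlSink, BlobSource, CopyActivity, DatasetReference, PipelineResource,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
copy_step = CopyActivity(
    name="CopyBlobToSql",
    inputs=[DatasetReference(type="DatasetReference", reference_name="SourceBlobDataset")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="SinkSqlDataset")],
    source=BlobSource(),    # read from the blob dataset
    sink=AzureSqlSink(),    # write into the Azure SQL dataset
)
client.pipelines.create_or_update(
    "rg-dp203", "adf-dp203", "pl-copy-sales",
    PipelineResource(activities=[copy_step]),
)
```

Triggering a debug or published run of the pipeline then exercises the same copy path the lab tests interactively.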
- Stream Events by Using Azure Event Hubs [Guided]
- In this challenge, you will provision a new event hub that supports event streaming. First, you will deploy an Event Hubs namespace. Next, you will create an event hub for event streaming. Finally, you will configure security for the event hub. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
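Once the event hub exists, streaming events into it takes only a few lines with the azure-eventhub SDK. A minimal sketch with a placeholder connection string and hub name:

```python
# Sketch: publish a small batch of JSON events to an event hub
# (azure-eventhub SDK; connection string and hub name are placeholders).
import json

from azure.eventhub import EventData, EventHubProducerClient

producer = EventHubProducerClient.from_connection_string(
    conn_str="<namespace-connection-string>",
    eventhub_name="telemetry",  # hypothetical hub
)
with producer:
    batch = producer.create_batch()  # respects the hub's max batch size
    for reading in ({"deviceId": "d1", "temp": 21.5},
                    {"deviceId": "d2", "temp": 19.8}):
        batch.add(EventData(json.dumps(reading)))
    producer.send_batch(batch)
```

Scoping the connection string to a send-only shared access policy, rather than the namespace root key, matches the security-configuration step of this lab.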
- Transform Data by Using Azure Data Factory [Guided]
- In this challenge, you will provision a new Azure data factory that supports a data pipeline that will transform data. First, you will design a batch processing solution, and then you will add directories to a storage account that uses a Data Lake Storage hierarchical namespace. Next, you will deploy an Azure data factory, and then you will create a data pipeline. Finally, you will author a copy data activity that will transform data into a blob data file, and then you will test and publish the data pipeline. Note: Once you begin the challenge lab, you will not be able to pause, save, or exit and then return to your challenge lab. Please ensure that you have set aside enough time to complete the challenge lab before you start.
Advanced
- Can You Design an Azure Synapse Analytics Solution? [Advanced]
- In this challenge, you will design a new Azure Synapse Analytics SQL pool. First, you will deploy a Synapse SQL pool on a new logical server. Next, you will design the SQL pool for performance by using workload management, and then you will query the data in the data warehouse tables. Finally, you will monitor and audit the data access. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Can You Develop a Batch Processing Solution by Using Azure Data Factory and Data Lake? [Advanced]
- In this challenge, you will provision a new Azure data factory that supports a data pipeline that will transform data. First, you will create a storage account that uses a Data Lake Storage hierarchical namespace, and then you will design a batch processing solution. Next, you will deploy an Azure data factory, and then you will create a data pipeline that will transform data into a blob data file. Finally, you will enable access control list (ACL) and role-based access control (RBAC) security for the data lake. Note: Once you begin the challenge lab, you will not be able to pause, save, or exit and then return to your challenge lab. Please ensure that you have set aside enough time to complete the challenge lab before you start.
- Can You Implement an Azure SQL Database by Using Secure Distribution? [Advanced]
- You are a database administrator. You need to provision a new Azure SQL database that is geo-replicated for high availability and that uses dynamic data masking for security. In this challenge, you will first deploy an Azure SQL database as the primary database. Next, you will apply dynamic data masking to appropriate columns in the database, and then you will create a geo-replicated database in another region as the secondary database. Finally, you will test the data security access of the geo-replicated database. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Can You Implement Azure Data Protection? [Advanced]
- In this challenge, you will implement Azure data protection. First, you will enable backups of an Azure virtual machine. Next, you will create a virtual machine unmanaged disk snapshot. Finally, you will migrate a virtual machine that has unmanaged disks to use managed disks, and then you will create a virtual machine managed disk snapshot. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Can You Implement Data Movement with Azure Data Factory? [Advanced]
- In this challenge, you will provision a new Azure data factory that supports a data pipeline. First, you will deploy an Azure data factory. Next, you will create a storage account that uses a Data Lake Storage hierarchical namespace, and then you will create a data pipeline. Finally, you will author a copy data activity that will send data output through the data pipeline, and then you will test the data pipeline. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Can You Implement Data Security for Azure Synapse Analytics? [Advanced]
- In this challenge, you will implement data security for Azure Synapse Analytics. First, you will deploy a dedicated SQL pool in Azure Synapse Analytics. Next, you will design a security strategy for data privacy. Finally, you will implement dynamic data masking and auditing, and then you will test the secure data access solution. Note: Once you begin the challenge, you will not be able to pause, save, or return to your challenge. Please ensure that you have set aside enough time to complete the challenge before you start.
- Can You Manage Apache Spark Jobs in a Pipeline by Using Azure Synapse Analytics? [Advanced]
- In this challenge, you will manage Apache Spark jobs in a pipeline by using Synapse Studio. First, you will create an Apache Spark pool in an Azure Synapse workspace, and then you will design a stream processing solution. Next, you will create an Event Hubs namespace, and then you will create an event hub for event streaming. Finally, you will create an Apache Spark job in Synapse Studio, and then you will add an Apache Spark job to a Synapse pipeline. Note: Once you begin the challenge lab, you will not be able to pause, save, or exit and then return to your challenge lab. Please ensure that you have set aside enough time to complete the challenge lab before you start.
- Can You Optimize a Batch Processing Solution by Using Azure Data Factory and Azure Data Lake? [Advanced]
- In this challenge, you will optimize an Azure data factory that supports a data pipeline to transform data. First, you will create a storage account that uses a Data Lake Storage hierarchical namespace, and then you will design a batch processing solution. Next, you will run a performance benchmark to optimize and ingest data for a data lake. Finally, you will deploy an Azure data factory, and then you will create a data pipeline that will transform data into blob data files. Note: Once you begin the challenge lab, you will not be able to pause, save, or exit and then return to your challenge lab. Please ensure that you have set aside enough time to complete the challenge lab before you start.
- Can You Optimize and Monitor Azure Synapse Analytics? [Advanced]
- In this challenge, you will optimize and monitor a new Azure Synapse Analytics SQL pool. First, you will deploy a Synapse SQL pool. Next, you will optimize the SQL pool, and then you will query the data in the data warehouse tables. Finally, you will monitor the data access. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Can You Troubleshoot Apache Spark Jobs in a Pipeline by Using Azure Synapse Analytics? [Advanced]
- In this challenge, you will troubleshoot Apache Spark jobs in a pipeline by using Synapse Studio. First, you will create an Apache Spark pool in a Synapse Analytics workspace, and then you will design a stream processing solution. Next, you will create an Azure Event Hubs namespace and an event hub for event streaming, and then you will create an event subscription by using Azure Event Grid. Finally, you will create and troubleshoot an Apache Spark job in Synapse Studio, and then you will add an Apache Spark job to a Synapse pipeline. Note: Once you begin the challenge lab, you will not be able to pause, save, or exit and return to your challenge lab. Please ensure that you have set aside enough time to complete the challenge lab before you start.
Expert
- Can You Create, Manage, and Monitor Azure Virtual Machines and Storage? [Expert]
- In this challenge, you will implement, manage, and monitor Azure storage and virtual machines. First, you will create and manage an Azure storage account, and then you will configure monitoring for an Azure storage account. Next, you will deploy an Azure virtual machine. Finally, you will configure monitoring for an Azure virtual machine. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Can You Design a Data Movement Solution by Using Azure Data Factory? [Expert]
- In this challenge, you will design a new Azure data factory instance that supports data pipelines. First, you will create a storage account that uses an Azure Data Lake Storage hierarchical namespace, and then you will create an Azure SQL database and an Azure Synapse Analytics SQL pool. Next, you will deploy an Azure data factory, and then you will author multiple data pipelines in a data movement solution. Finally, you will debug and publish the data pipelines. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.
- Can You Develop a Batch Processing Solution by Using Azure Data Factory, Azure Synapse, and Azure Data Lake? [Expert]
- In this challenge, you will develop a batch processing solution by using Azure Data Factory, Azure Synapse, and Azure Data Lake. First, you will create a storage account that uses a Data Lake Storage hierarchical namespace, and then you will design a batch processing solution. Next, you will deploy an Azure data factory, and then you will create a data pipeline that will transform data into a blob data file. Finally, you will create a dedicated SQL pool that uses SQL auditing by using Synapse Studio, and then you will create a second data pipeline to populate the dedicated SQL pool. Note: Once you begin the challenge lab, you will not be able to pause, save, or exit and then return to your challenge lab. Please ensure that you have set aside enough time to complete the challenge lab before you start.
- Can You Optimize a Batch Processing Solution by Using Azure Data Factory, Azure Synapse Analytics, and Azure Data Lake? [Expert]
- In this challenge, you will optimize a batch processing solution by using Azure Data Factory, Azure Synapse Analytics, and Azure Data Lake. First, you will create a storage account that uses a Data Lake Storage hierarchical namespace, and then you will design a batch processing solution. Next, you will run a performance benchmark to optimize and ingest data for a data lake by using the AzCopy utility, and then you will deploy an Azure data factory that contains a data pipeline to transform data into a blob data file. Finally, you will create a data pipeline in Azure Synapse Studio to populate a new dedicated SQL pool, and then you will create a data pipeline to run a performance benchmark by using a data factory. Note: Once you begin the challenge lab, you will not be able to pause, save, or return to your challenge lab. Please ensure that you have set aside enough time to complete the challenge lab before you start.
- Can You Populate Azure Synapse Analytics by Using Azure Data Factory? [Expert]
- In this challenge, you will populate an Azure Synapse Analytics SQL pool by using an Azure data factory data pipeline. First, you will create a storage account that uses an Azure Data Lake Storage hierarchical namespace, and then you will create an Azure Synapse Analytics SQL pool. Next, you will deploy an Azure data factory, and then you will create a data pipeline. Finally, you will author copy data activities that will send data output through the data pipeline, and then you will test the data pipeline. Note: Once you begin a challenge you will not be able to pause, save, or return to your progress. Please ensure you have set aside enough time to complete the challenge before you start.