Author Archives: belugaboba

3. Notebooks with Python – part 2

In this post, we will create a third notebook to prep and transform the wta_matches CSV files. Matches Notebook: Launch the Databricks portal and create a new cluster, as shown in the previous post. Name the cluster matchesNotebook. Recall from the previous posts on ADF that we have ingested the wta_matches files (53 in total) in… Continue reading “3. Notebooks with Python – part 2”
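To preview what this notebook works with, here is a minimal PySpark sketch for reading the ingested wta_matches files from the raw layer of the data lake. The /mnt/raw/wta_matches mount path and the read options are illustrative assumptions, not the exact code from the post.

```python
# Minimal sketch (assumed mount path): read all 53 wta_matches CSV files
# from the raw layer into a single Spark DataFrame.
matches_df = (
    spark.read                        # `spark` is predefined in Databricks notebooks
    .option("header", "true")         # each file has a header row
    .option("inferSchema", "true")    # let Spark guess the column types
    .csv("/mnt/raw/wta_matches/wta_matches_*.csv")  # hypothetical path
)

print(matches_df.count())             # quick sanity check on the row count
```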
2. Notebooks with Python – part 1
Now that we have provisioned a Databricks workspace and have created a Spark cluster, it is time to get spinning by writing our first notebook. A notebook is a collection of cells. These cells are run to execute code, to render formatted text, or to display graphical visualizations. A Databricks notebook cell can execute Python,… Continue reading “2. Notebooks with Python – part 1”
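To make the idea of a cell concrete, here is the kind of minimal Python cell you might run first. The sample data and the use of the notebook's built-in display() helper are illustrative assumptions, not cells taken from the post.

```python
# A first Python cell: plain Python runs as-is in a Databricks notebook.
print("Hello, WTA Insights!")

# A second cell could build a tiny DataFrame and render it as a table or chart
# with display(), which Databricks notebooks provide out of the box.
sample_df = spark.createDataFrame(
    [("Ashleigh Barty", 1), ("Simona Halep", 2)],
    ["player", "ranking"],
)
display(sample_df)
```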
1. Creating a Databricks service, workspace and cluster
The next step in our WTA Insights data journey is to cleanse and transform the tennis_wta files that we have ingested into our data lake. The plan is to use Databricks to prep the CSV files and then store them back on the data lake, in the cleansed layer, ready for Power BI to consume… Continue reading “1. Creating a Databricks service, workspace and cluster”
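The sketch below shows the shape of that raw-to-cleansed flow in PySpark: read from the raw container, apply a cleansing step, and write the result to the cleansed layer. The abfss URLs, storage account name and the dropDuplicates step are assumptions for illustration only.

```python
# Sketch only (assumed account/container names): read the raw CSV files,
# apply a simple cleansing step, and write the result to the cleansed layer.
raw_path = "abfss://raw@wtainsightsstorage.dfs.core.windows.net/tennis_wta/"
cleansed_path = "abfss://cleansed@wtainsightsstorage.dfs.core.windows.net/tennis_wta/"

raw_df = (
    spark.read
    .option("header", "true")
    .csv(raw_path + "wta_matches_*.csv")
)

(raw_df
    .dropDuplicates()                 # placeholder cleansing step
    .write
    .mode("overwrite")
    .option("header", "true")
    .csv(cleansed_path + "wta_matches/"))   # ready for Power BI to consume
```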
8. ADF schedule triggers
So far, we have executed the pipelines in Debug mode or have run them once using the Trigger now option. To automate future loads of CSV files, we will now look at how to schedule pipeline executions using schedule triggers. There are three types of triggers in ADF: Schedule – runs pipelines periodically (every hour,… Continue reading “8. ADF schedule triggers”
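As a rough illustration of what a schedule trigger amounts to outside the UI, the sketch below defines an hourly trigger with the azure-mgmt-datafactory Python SDK. The subscription, resource group, factory, pipeline and trigger names are placeholders, and the post itself configures the trigger through the ADF UI rather than the SDK.

```python
# Sketch only: the post uses the ADF UI; the SDK call below is an alternative.
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineReference,
    ScheduleTrigger,
    ScheduleTriggerRecurrence,
    TriggerPipelineReference,
    TriggerResource,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

trigger = ScheduleTrigger(
    recurrence=ScheduleTriggerRecurrence(
        frequency="Hour",                                    # run every hour
        interval=1,
        start_time=datetime(2021, 1, 1, tzinfo=timezone.utc),
        time_zone="UTC",
    ),
    pipelines=[
        TriggerPipelineReference(
            pipeline_reference=PipelineReference(reference_name="CopyMatchesPipeline")
        )
    ],
)

client.triggers.create_or_update(
    "wta-insights-rg",            # placeholder resource group
    "wta-insights-adf",           # placeholder data factory name
    "HourlyTrigger",
    TriggerResource(properties=trigger),
)
```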
7. ADF integration with GitHub
Recall that when we provisioned the Azure Data Factory resource, we chose to configure Git later. In this post, I will show you how to configure source control from the Azure Data Factory UI. The pipelines, as well as all the code, scripts and files associated with the WTA Insights project, are available on my GitHub… Continue reading “7. ADF integration with GitHub”
6. Parameters, variables and loops – part 2
Let’s build a third pipeline to copy the wta_matches files. Matches Pipeline: Take a moment to explore the matches files on GitHub. The files follow a consistent naming convention: wta_matches_yyyy.csv, where yyyy represents the year of the WTA season and is a value between 1968 and 2020. Explore the files in raw view. Note that… Continue reading “6. Parameters, variables and loops – part 2”
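To see what the pipeline's loop has to cover, the short Python sketch below enumerates the file names implied by that naming convention. The repository owner and base raw.githubusercontent.com URL are assumptions rather than details quoted from the post.

```python
# Sketch: list every wta_matches_yyyy.csv the pipeline's loop must copy.
# The base URL is an assumption about the raw GitHub view of the tennis_wta repo.
base_url = "https://raw.githubusercontent.com/JeffSackmann/tennis_wta/master"

match_files = [f"wta_matches_{year}.csv" for year in range(1968, 2021)]

print(len(match_files))                  # 53 files, one per season
print(f"{base_url}/{match_files[0]}")    # full URL of the first file
```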
5. Parameters, variables and loops – part 1
In the previous post, we created a simple pipeline that fetches wta_players.csv over HTTP (GitHub) and stores it in our data lake. We are now going to build another pipeline that fetches the ranking files. Rankings Pipeline: Take a moment to explore the ranking files on GitHub. As of the date of this… Continue reading “5. Parameters, variables and loops – part 1”
4. Our first pipeline
Let’s start by taking baby steps. Our first pipeline will copy wta_players.csv from GitHub to our data lake. Then we will take some bigger steps: we will learn to implement more complex logic in our pipelines and make use of parameters, variables and loops. A second pipeline will fetch the wta_rankings csv… Continue reading “4. Our first pipeline”
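For readers curious what that first copy does under the hood, here is a rough Python sketch that downloads wta_players.csv and uploads it to a data lake container. The source URL, storage account and container names are assumptions, and the post itself performs this step with an ADF Copy activity rather than with code.

```python
# Sketch only: the ADF Copy activity in the post does this declaratively.
# URL, account and container names below are placeholders/assumptions.
import requests
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

source_url = (
    "https://raw.githubusercontent.com/JeffSackmann/tennis_wta/master/wta_players.csv"
)
response = requests.get(source_url)
response.raise_for_status()                      # fail fast on a bad download

service = DataLakeServiceClient(
    account_url="https://wtainsightsstorage.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
file_client = service.get_file_system_client("raw").get_file_client(
    "wta_players/wta_players.csv"
)
file_client.upload_data(response.content, overwrite=True)   # lands in the raw layer
```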
3. Datasets
Now that we have defined the connection information needed by ADF to connect to GitHub and to our ADLS by creating two linked services, the next step is to tell ADF what data to use from within the data sources. For this, we need to create datasets. Datasets identify data within the linked data stores,… Continue reading “3. Datasets”
2. Linked services
On the Azure portal, go to the newly created data factory and click on the Author & Monitor tile. This will launch the Azure Data Factory user interface in a separate tab. On the Let’s get started page, click the expand button in the top-left corner to expand the left sidebar. There are 4… Continue reading “2. Linked services”