3. Notebooks with Python – part 2

In this post, we will create a third notebook to prep and transform the wta_matches CSV files. Matches Notebook: Launch the Databricks portal and create a new notebook, as shown in the previous post. Name the notebook matchesNotebook. Recall from the previous posts on ADF that we have ingested the wta_matches files (53 in total) in…
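As a hedged preview of what that notebook might contain, here is a minimal PySpark sketch that reads the raw matches files and writes a cleansed copy back to the lake. The mount point, folder names and the tourney_date cleanup step are assumptions for illustration, not the post's actual code; spark is the session that Databricks provides in every notebook.

```python
from pyspark.sql import functions as F

# Read all 53 seasonal files at once; the /mnt/datalake paths are assumed
# mount points, not necessarily the ones used in the series.
matches = (spark.read
           .option("header", "true")
           .option("inferSchema", "true")
           .csv("/mnt/datalake/raw/wta_matches_*.csv"))

# Example cleanup: parse the yyyyMMdd tourney_date column into a real date
# and drop rows that are entirely empty.
matches = (matches
           .withColumn("tourney_date",
                       F.to_date(F.col("tourney_date").cast("string"), "yyyyMMdd"))
           .dropna(how="all"))

# Store the result back in the cleansed layer for Power BI to pick up.
(matches.write
 .mode("overwrite")
 .option("header", "true")
 .csv("/mnt/datalake/cleansed/wta_matches"))
```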

2. Notebooks with Python – part 1

Now that we have provisioned a Databricks workspace and created a Spark cluster, it is time to get spinning by writing our first notebook. A notebook is a collection of cells. These cells are run to execute code, to render formatted text, or to display graphical visualizations. A Databricks notebook cell can execute Python, …
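To make "a collection of cells" concrete, here is what the first couple of cells might look like. The sample rows are illustrative, and spark and display are provided by the Databricks notebook environment rather than imported.

```python
# Cell 1: confirm the notebook is attached to a running cluster.
print(f"Spark version: {spark.version}")

# Cell 2: build a tiny DataFrame and render it; display() is the
# Databricks-specific way to show tables and visualizations.
df = spark.createDataFrame(
    [("Ashleigh Barty", 1), ("Simona Halep", 2)],  # illustrative sample rows
    ["player", "ranking"],
)
display(df)
```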

1. Creating a Databricks service, workspace and cluster

The next step in our WTA Insights data journey is to cleanse and transform the tennis_wta files that we have ingested in our data lake. The plan is to use Databricks to prep the CSV files and then store them back on the data lake, in the cleansed layer, ready for Power BI to consume…
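The post walks through the Azure portal UI; as an equivalent sketch, the cluster could also be created programmatically through the Databricks Clusters REST API. The workspace URL, token, cluster name, runtime version and node type below are placeholders, not the values used in the series.

```python
import requests

workspace_url = "https://<your-workspace>.azuredatabricks.net"  # placeholder
token = "<personal-access-token>"                               # placeholder

payload = {
    "cluster_name": "wta-cleansing-cluster",  # assumed name for illustration
    "spark_version": "7.3.x-scala2.12",       # example runtime; pick a current one
    "node_type_id": "Standard_DS3_v2",
    "num_workers": 1,
    "autotermination_minutes": 30,            # stop the cluster when idle
}

resp = requests.post(
    f"{workspace_url}/api/2.0/clusters/create",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
resp.raise_for_status()
print(resp.json()["cluster_id"])
```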

7. ADF integration with GitHub

Recall that, when we provisioned the Azure Data Factory resource, we chose to configure Git later. In this post, I will show you how to configure source control from the Azure Data Factory UI. The pipelines, as well as all the code, scripts and files associated with the WTA Insights project, are available on my GitHub…
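The post does this through the ADF UI; for reference, the same repo association can be sketched with the azure-mgmt-datafactory Python SDK. All resource names, the region and the repository details below are placeholders, and the exact SDK surface may vary between package versions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    FactoryGitHubConfiguration,
    FactoryRepoUpdate,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

repo_update = FactoryRepoUpdate(
    factory_resource_id=(
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
        "/providers/Microsoft.DataFactory/factories/<factory-name>"
    ),
    repo_configuration=FactoryGitHubConfiguration(
        account_name="<github-account>",   # placeholder GitHub account
        repository_name="<repository>",    # placeholder repo name
        collaboration_branch="main",
        root_folder="/",
    ),
)

# The repo configuration is applied at the factory's Azure region.
client.factories.configure_factory_repo("<region>", repo_update)
```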

6. Parameters, variables and loops – part 2

Let’s build a third pipeline to copy the wta_matches files. Matches Pipeline: Take a moment to explore the matches files on GitHub. The files follow a consistent naming convention: wta_matches_yyyy.csv, where yyyy represents the year of the WTA season and is a value between 1968 and 2020. Explore the files in raw view. Note that…
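The naming convention is what makes a parameterised loop possible: generating all the file names is a one-liner. The base URL below assumes the files' raw view on GitHub and is illustrative.

```python
# wta_matches_yyyy.csv for every WTA season from 1968 to 2020 inclusive.
base_url = "https://raw.githubusercontent.com/JeffSackmann/tennis_wta/master"  # assumed repo

file_names = [f"wta_matches_{year}.csv" for year in range(1968, 2021)]

print(len(file_names))                # 53, matching the count ingested earlier
print(f"{base_url}/{file_names[0]}")  # .../wta_matches_1968.csv
```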

5. Parameters, variables and loops – part 1

In the previous post, we created a simple pipeline that fetches wta_players.csv from HTTP (GitHub) and stores it in our data lake. We are now going to build another pipeline that fetches the ranking files. Rankings Pipeline: Take a moment to explore the ranking files on GitHub. As of the date of this…
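In plain Python, that first pipeline boils down to an HTTP GET and a write. The URL below assumes Jeff Sackmann's tennis_wta repository, which the file names suggest but the excerpt does not name explicitly.

```python
import requests

# Assumed raw-file URL on GitHub; replace with the source used in the series.
url = "https://raw.githubusercontent.com/JeffSackmann/tennis_wta/master/wta_players.csv"

resp = requests.get(url, timeout=30)
resp.raise_for_status()

# The ADF pipeline lands this in the data lake; here we just write it locally.
with open("wta_players.csv", "wb") as f:
    f.write(resp.content)
```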