Table of contents
- Step 1: Inspecting the CSV Files in the Data Lake: Your First Step to Data Optimization
- Step 2: Configuring the Data Flow Sources: Pointing to the Customer CSV Files
- Step 3: Adding a Join Transformation: An Inner Join on Customer ID, the Common Field
- Step 4: Configuring the Sink: A JSON Dataset in the Data Lake
- Step 5: Integrating the Data Flow into a Pipeline: Saving JSON Output to the ADLS Join_example Folder
- Step 6: Pipeline Execution Success: Ensuring Smooth Data Transfer
- Step 7: Data Flow Success: Confirming Effective Data Transformation
- Step 8: Verifying the JSON File in the Data Lake in Azure
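
Before the step-by-step walkthrough, here is a minimal sketch in pandas of the transformation the Data Flow performs end to end: read the two customer CSVs, inner-join them on the common Customer ID field, and sink the result as JSON. The file names, the `CustomerID` column name, and the output path are assumptions for illustration; the steps below build the same logic in the Azure Data Factory UI rather than in code.

```python
# A conceptually equivalent sketch of the Data Flow, using pandas.
# File names, the "CustomerID" column, and the output path are
# assumptions for illustration; substitute your own.
import pandas as pd

# Step 2 equivalent: read the two customer CSV source files.
customers_a = pd.read_csv("customer_1.csv")
customers_b = pd.read_csv("customer_2.csv")

# Step 3 equivalent: inner join on Customer ID, keeping only the
# rows that appear in both files.
joined = customers_a.merge(customers_b, on="CustomerID", how="inner")

# Steps 4-5 equivalent: write the joined result as line-delimited
# JSON, mirroring a JSON sink in the Join_example folder.
joined.to_json("Join_example/joined_customers.json", orient="records", lines=True)
```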