Configure Azure Data Factory as a Data Source in NCC Portal

This article describes how to connect Azure Data Factory (ADF) pipelines to Microsoft Fabric using the NCC Portal. Follow these steps to set up and manage ADF data sources and process their data through the Landing, Bronze, and Silver layers.

Prerequisites

Before you start, gather the following details (a sketch for verifying them follows this list):

  • Subscription ID
  • Resource Group Name
  • Data Factory Name
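
If you want to verify these values up front, a minimal Python sketch can look them up (assuming the azure-identity and azure-mgmt-datafactory packages; all three identifiers below are placeholders):

```python
# Minimal sketch: confirm the Data Factory details before creating the
# connection. Assumes azure-identity and azure-mgmt-datafactory are
# installed; replace the placeholder identifiers with your own values.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

subscription_id = "<subscription-id>"
resource_group = "<resource-group-name>"
factory_name = "<data-factory-name>"

client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# Raises ResourceNotFoundError if any of the three values is wrong.
factory = client.factories.get(resource_group, factory_name)
print(factory.name, factory.location)
```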

Connect Azure Data Factory to NCC Portal

To establish a connection between Azure Data Factory and Microsoft Fabric:

  1. Open Manage Connections and Gateways in Fabric.
  2. Select + New to create a new connection.
  3. If required, create a private endpoint.
  4. Name your connection using the CON_NCC prefix. This ensures visibility and manageability in NCC Portal.
  5. Choose Azure Data Factory as the connection type. Enter the Subscription ID, Resource Group Name, Data Factory Name, and select the appropriate Authentication method.
  6. Follow the instructions in Connect from Azure Data Factory to Microsoft Fabric to complete the setup.

TIP
Using the CON_NCC naming convention helps you quickly identify and manage connections in the NCC Portal.

Add a Data Source

  1. In NCC Portal, navigate to Tenant Settings > Data Sources.
  2. Select Add DataSource.
  3. Complete the following fields:
| Field | Description | Example/Default Value |
| --- | --- | --- |
| Name | DataSource name in NCC | |
| Data Source Type | Type of data source | ADF |
| Namespace | Prefix for storing data in Lakehouses | |
| Code | Identifier for pipelines | ADF |
| Description | Description of DataSource | |
| Connection | Name of the Connection in Fabric | Set in previous step |
| Environment | NCC environment for the data source | Development |

Create a Landing Zone Entity

  1. Go to Landing Zone Entities.
  2. Select New Entity.
  3. Fill in the required details:
| Field | Description | Example/Default Value |
| --- | --- | --- |
| Pipeline | Not used | |
| Data Source | Data Source for connection | Set in previous step |
| Source schema | Enter ADF | ADF |
| Source name | Identifier for extracted data | ADF |
| Incremental | Extract data incrementally | False |
| Has encrypted columns | Check if table has sensitive data | False |
| Entity value | Optional. Entity values reference | |
| Lake house | Lakehouse for storing data | LH_Data_Landingzone |
| File path | File path for data storage | Filled automatically |
| File name | File name for data storage | Filled automatically |
| File type | Expected file type | E.g. Json, Csv, Parquet, Xlsx, Txt, Xml |

Ensure your ADF pipeline writes its data to LH_Data_Landingzone using the configured file path and name, so it can be processed further in the Bronze and Silver layers.
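
A Copy activity in ADF typically performs this write. Purely to illustrate the target location convention, the following Python sketch writes a file into the Lakehouse Files area over OneLake's ADLS Gen2-compatible endpoint (assuming the azure-identity and azure-storage-file-datalake packages; the workspace name, file path, and local file are placeholders):

```python
# Sketch of the OneLake target location; in practice the ADF Copy
# activity writes this file. Assumes azure-identity and
# azure-storage-file-datalake; placeholders in angle brackets.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)

# In OneLake, the "file system" is the Fabric workspace, and the path
# starts at the lakehouse item.
fs = service.get_file_system_client("<workspace-name>")
file = fs.get_file_client(
    "LH_Data_Landingzone.Lakehouse/Files/<file-path>/<file-name>.csv"
)

with open("<local-file>.csv", "rb") as data:
    file.upload_data(data, overwrite=True)
```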

TIP

  • See the data encryption documentation to learn how to apply encryption to your sensitive data.

Create a Bronze Zone Entity

  1. Go to Bronze Zone Entities.
  2. Select New Entity.
  3. Enter the following information:
| Field | Description | Example/Default Value |
| --- | --- | --- |
| Pipeline | Orchestrator pipeline for parsing | PL_BRZ_COMMAND |
| Landing zone entity | Landing zone entity to be parsed | Set in previous step |
| Entity value | Optional. Entity values reference | |
| Column mappings | Optional. Column mapping info | |
| Lake house | Lakehouse for storing data | LH_Bronze_Layer |
| Schema | Schema for storing data | dbo |
| Name | Table name for storing data | Filled automatically |
| Primary keys | Unique identifier fields (case sensitive) | |
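
NCC's PL_BRZ_COMMAND pipeline performs the Bronze load itself; the sketch below is only an illustration of why primary key names must match exactly, including case. It assumes a Fabric notebook (where spark is predefined) with LH_Bronze_Layer as the default lakehouse, and a hypothetical staging_total_sales view of freshly parsed rows:

```python
# Illustration only: an upsert keyed on the entity's primary key.
# NCC's own Bronze load logic is internal to PL_BRZ_COMMAND.
# Assumes a Fabric notebook (spark predefined), LH_Bronze_Layer as the
# default lakehouse, and a hypothetical staging_total_sales view.
from delta.tables import DeltaTable

new_rows = spark.table("staging_total_sales")
target = DeltaTable.forName(spark, "dbo.total_sales")

(
    target.alias("t")
    .merge(new_rows.alias("s"), "t.id = s.id")  # primary key: id (case sensitive)
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```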

Create a Silver Zone Entity

  1. Go to Silver Zone Entities.
  2. Select New Entity.
  3. Provide the following details:
| Field | Description | Example/Default Value |
| --- | --- | --- |
| Pipeline | Orchestrator pipeline for parsing | PL_SLV_COMMAND |
| Bronze layer entity | Bronze layer entity to be parsed | Set in previous step |
| Entity value | Optional. Entity values reference | |
| Lake house | Lakehouse for storing data | LH_Silver_Layer |
| Schema | Schema for storing data | dbo |
| Name | Table name for storing data | Filled automatically |
| Columns to exclude | Comma-separated columns to exclude (case sensitive) | |
| Columns to exclude from history | Comma-separated columns to exclude from the compare (case sensitive) | |

Example configuration

Suppose your organization, InSpark, has an ADF pipeline that extracts a CSV file that is not accessible from Fabric pipelines. The pipeline is available through a Fabric connection named NCC_ADF_SALES. The ADF pipeline is called PL_Get_InSpark and expects a FileName parameter (total_sales.csv). Here’s how you would configure the data source and entities:

Data Source

| Field | Value |
| --- | --- |
| Name | ADF_InSpark |
| Data Source Type | ADF |
| Namespace | InSpark_Sales |
| Code | ADF |
| Description | ADF connection to InSpark_Sales |
| Connection | NCC_ADF_SALES |
| Environment | Development |

Landing Zone Entity

| Field | Value |
| --- | --- |
| Pipeline | PL_LDZ_COPY_FROM_ADF |
| Data Source | ADF_InSpark |
| Source schema | ADF |
| Source name | ADF |
| Incremental | False |
| Entity value | See example below |
| Lake house | LH_Data_Landingzone |
| File path | InSpark_Sales |
| File name | total_sales |
| File type | Csv |

Example Entity value:

| Name | Value |
| --- | --- |
| CustomParametersJSON | {"FileName": "total_sales.csv"} |
| PipelineName | PL_Get_InSpark |
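
Conceptually, PipelineName and CustomParametersJSON tell the landing-zone pipeline which ADF pipeline to trigger and which parameters to pass. How NCC invokes ADF internally is not documented here, but the equivalent call through the ADF management SDK would look roughly like this (assuming azure-identity and azure-mgmt-datafactory; the placeholders are the values gathered in Prerequisites):

```python
# Rough equivalent of what the entity values amount to: trigger
# PL_Get_InSpark with the FileName parameter. Assumes azure-identity
# and azure-mgmt-datafactory; placeholders in angle brackets.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

run = client.pipelines.create_run(
    resource_group_name="<resource-group-name>",
    factory_name="<data-factory-name>",
    pipeline_name="PL_Get_InSpark",
    parameters={"FileName": "total_sales.csv"},
)
print(run.run_id)
```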

Bronze Zone Entity

| Field | Value |
| --- | --- |
| Pipeline | PL_BRZ_COMMAND |
| Landing zone entity | InSpark_Sales/total_sales |
| Entity value | See example below |
| Column mappings | |
| Lake house | LH_Bronze_Layer |
| Schema | dbo |
| Name | total_sales |
| Primary keys | id |

Example Entity value:

| Name | Value |
| --- | --- |
| ColumnDelimiter | , |
| CompressionType | none |
| Encoding | UTF-8 |
| EscapeCharacter | \ |
| FirstRowIsHeader | 1 |
| RowDelimiter | \r\n |
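
These values describe how the landed CSV should be parsed. The Bronze pipeline's parsing logic is internal to NCC; purely to show what the settings correspond to, here is a PySpark read with equivalent options (assuming a Fabric notebook where spark is predefined and LH_Data_Landingzone is the default lakehouse):

```python
# Illustration of the parse settings above as Spark CSV options.
# Assumes a Fabric notebook (spark predefined) with LH_Data_Landingzone
# as the default lakehouse.
df = (
    spark.read
    .option("header", True)        # FirstRowIsHeader = 1
    .option("sep", ",")            # ColumnDelimiter
    .option("encoding", "UTF-8")   # Encoding
    .option("escape", "\\")        # EscapeCharacter
    .option("lineSep", "\r\n")     # RowDelimiter
    .csv("Files/InSpark_Sales/total_sales.csv")
)
df.show(5)
```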

Silver Zone Entity

| Field | Value |
| --- | --- |
| Pipeline | PL_SLV_COMMAND |
| Bronze layer entity | dbo.total_sales |
| Entity value | |
| Lake house | LH_Silver_Layer |
| Schema | dbo |
| Name | total_sales |
| Columns to exclude | |
| Columns to exclude from history | |

Next steps