
Configure OData Feed Connections in the NCC Portal

This article describes how to set up and manage OData feed data sources in the NCC Portal.

Prerequisites

Before you begin, gather the following details:

  • The OData feed URL
  • The access type: Anonymous or Basic
  • Username (Basic access only)
  • Password (Basic access only)
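
Before you create the connection in Fabric, you can check that the feed is reachable with these details. The following is a minimal sketch using Python and the requests library; the URL and account shown are placeholders, not values from your environment.

```python
import requests
from requests.auth import HTTPBasicAuth

feed_url = "https://example.com/odata/v4/sales"  # placeholder OData feed URL
username = "svc_ncc_reader"                      # Basic access only
password = "<password>"                          # Basic access only

# Request the OData service document; pass auth=None for Anonymous access.
response = requests.get(
    feed_url,
    auth=HTTPBasicAuth(username, password),
    headers={"Accept": "application/json"},
    timeout=30,
)
response.raise_for_status()

# The service document lists the entity sets (tables) the feed exposes.
for entity_set in response.json().get("value", []):
    print(entity_set.get("name"))
```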

Warning

OData feeds can be slow. If you experience performance issues, consider using the REST API connector instead.
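
If you stay with the OData connector, server-side query options can reduce how much data each request transfers. The following is a minimal sketch, assuming an OData v4 feed that supports $select, $filter, and $top; the URL and column names are placeholders.

```python
import requests

entity_url = "https://example.com/odata/v4/sales/total_sales"  # placeholder entity set URL

# Let the service project, filter, and limit the data before sending it.
params = {
    "$select": "id,order_date,amount",      # hypothetical columns
    "$filter": "order_date ge 2024-01-01",  # hypothetical filter on an Edm.Date column
    "$top": 1000,                           # cap the number of rows per request
}
response = requests.get(entity_url, params=params, timeout=60)
response.raise_for_status()
print(f"Received {len(response.json().get('value', []))} rows")
```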

Step 1: Connect the OData feed to Fabric

To connect an OData feed to your Fabric tenant:

  1. Visit Connect to OData Feed and follow the instructions to create a new connection.
  2. If required, create a private endpoint.
  3. Name your connection using the CON_NCC prefix. This naming convention helps ensure visibility and manageability within the NCC Portal.
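
To confirm that the naming convention is applied consistently, you can list the existing connections in your tenant. The following is a minimal sketch; it assumes the Fabric REST API List Connections endpoint (GET /v1/connections) and a valid Microsoft Entra access token with permission to call the Fabric API.

```python
import requests

token = "<access-token>"  # assumed: a Microsoft Entra token for the Fabric API

response = requests.get(
    "https://api.fabric.microsoft.com/v1/connections",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
response.raise_for_status()

# Keep only the connections that follow the CON_NCC naming convention.
ncc_connections = [
    c.get("displayName")
    for c in response.json().get("value", [])
    if c.get("displayName", "").startswith("CON_NCC")
]
print(ncc_connections)
```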

Step 2: Add a Data Source in NCC Portal

  1. In NCC Portal, go to Tenant Settings > Data Sources.
  2. Select Add DataSource.
  3. Complete the fields as described:
Field | Description | Example/Default Value
Name | Name of the source | SQL database
Data Source Type | Type of data source | ODATA
Namespace | Prefix for storing data in Lakehouses | InSpark_Sales
Code | Identifier for pipelines | ODATA
Description | Description of data source | ODATA Feed connection to InSpark_Sales
Connection | Name of the connection in Fabric | NCC_ODATA_SALES
Environment | NCC environment for the data source | Development
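
To illustrate how the Namespace and Code fields surface elsewhere, the sketch below derives the landing-zone folder and the copy-pipeline name from the example configuration later in this article; the exact naming pattern the NCC Portal uses is an assumption.

```python
# Hypothetical illustration of the naming convention; the pattern is inferred
# from the example configuration in this article, not from NCC Portal internals.
namespace = "InSpark_Sales"  # prefix for storing data in lakehouses
code = "ODATA_01"            # identifier for pipelines
source_name = "total_sales"

landing_folder = f"{namespace}/{source_name}"  # e.g. InSpark_Sales/total_sales
copy_pipeline = f"PL_LDZ_COPY_FROM_{code}"     # e.g. PL_LDZ_COPY_FROM_ODATA_01

print(landing_folder)
print(copy_pipeline)
```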

Step 3: Create a Landing Zone Entity

  1. Navigate to Landing Zone Entities.
  2. Select New Entity.
  3. Fill in the required details:
Field | Description | Example/Default Value
Pipeline | Not used | PL_LDZ_COPY_FROM_ODATA_01
Data Source | Data source for connection | InSpark_Sales
Source schema | Schema name in ODATA Feed | dbo
Source name | Table or view name in ODATA Feed | total_sales
Entity value | Optional. Entity values reference |
Lake house | Lakehouse for storing data | LH_Data_Landingzone
File path | File path for data storage | Filled automatically
File name | File name for data storage | Filled automatically
File type | Expected file type | Parquet
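
After the first copy run you can confirm that the file landed where expected. The following is a minimal sketch for a Fabric notebook attached to LH_Data_Landingzone; the relative path is an assumption based on the File path and File name fields above.

```python
from pyspark.sql import SparkSession

# In a Fabric notebook the `spark` session already exists; the builder call
# only keeps this sketch self-contained.
spark = SparkSession.builder.getOrCreate()

# Assumed relative path under the lakehouse Files area.
df = spark.read.parquet("Files/InSpark_Sales/total_sales")
df.printSchema()
print(f"Rows landed: {df.count()}")
```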

TIP

  • See how to configure the source incrementally.
  • See how to apply data encryption to your sensitive data.

Step 4: Create a Bronze Zone Entity

  1. Go to Bronze Zone Entities.
  2. Select New Entity.
  3. Enter the following information:
Field | Description | Example/Default Value
Pipeline | Orchestrator pipeline for parsing | PL_BRZ_COMMAND
Landing zone entity | Landing zone entity to be parsed | InSpark_Sales/total_sales
Entity value | Optional. Entity values reference |
Column mappings | Optional. Column mapping info |
Lake house | Lakehouse for storing data | LH_Bronze_Layer
Schema | Schema for storing data | dbo
Name | Table name for storing data | Filled automatically
Primary keys | Unique identifier fields (Case sensitive) | id
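
Because Primary keys is case sensitive and drives how records are matched, it is worth checking that the configured key is actually unique in the bronze table. The following is a minimal sketch for a Fabric notebook whose default lakehouse is LH_Bronze_Layer; the table and key names follow the example configuration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # predefined as `spark` in Fabric notebooks

# Any key value that occurs more than once indicates a duplicate.
duplicates = spark.sql("""
    SELECT id, COUNT(*) AS occurrences
    FROM dbo.total_sales
    GROUP BY id
    HAVING COUNT(*) > 1
""")
duplicates.show()
```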

Step 5: Create a Silver Zone Entity

  1. Go to Silver Zone Entities.
  2. Select New Entity.
  3. Provide the following details:
Field | Description | Example/Default Value
Pipeline | Orchestrator pipeline for parsing | PL_SLV_COMMAND
Bronze layer entity | Bronze layer entity to be parsed | dbo.total_sales
Entity value | Optional. Entity values reference |
Lake house | Lakehouse for storing data | LH_Silver_Layer
Schema | Schema for storing data | dbo
Name | Table name for storing data | Filled automatically
Columns to exclude | Comma-separated columns to exclude (Case sensitive) |
Columns to exclude from history | Comma-separated columns to exclude from compare (Case sensitive) |
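
If you fill in Columns to exclude, you can verify after a run that those columns did not reach the silver table. The following is a minimal sketch for a Fabric notebook whose default lakehouse is LH_Silver_Layer; the excluded column names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # predefined as `spark` in Fabric notebooks

# Hypothetical excluded columns; use the exact, case-sensitive names you configured.
excluded = {"credit_card_number", "internal_notes"}

silver_columns = set(spark.table("dbo.total_sales").columns)
leaked = excluded & silver_columns
print("Excluded columns found in silver:", leaked or "none")
```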

Example configuration

The company InSpark has an OData feed connection in Fabric called NCC_SQL_SALES to the database InSpark_Sales. This database contains a table total_sales in the dbo schema that the company wants to extract. The configuration is as follows:

Data Source

Field | Value
Name | InSpark_Sales
Data Source Type | ODATA
Namespace | InSpark_Sales
Code | ODATA_01
Description | ODATA Feed connection to InSpark_Sales
Connection | NCC_SQL_SALES
Environment | Development

Landing Zone Entity

Field | Value
Pipeline | PL_LDZ_COPY_FROM_ODATA_01
Data Source | InSpark_Sales
Source schema | dbo
Source name | total_sales
Incremental | False
Entity value |
Lake house | LH_Data_Landingzone
File path | InSpark_Sales
File name | total_sales
File type | Parquet

Bronze Zone Entity

Field | Value
Pipeline | PL_BRZ_COMMAND
Landing zone entity | InSpark_Sales/total_sales
Entity value |
Column mappings |
Lake house | LH_Bronze_Layer
Schema | dbo
Name | total_sales
Primary keys | id

Silver Zone Entity

Field | Value
Pipeline | PL_SLV_COMMAND
Bronze layer entity | dbo.total_sales
Entity value |
Lake house | LH_Silver_Layer
Schema | dbo
Name | total_sales
Columns to exclude |
Columns to exclude from history |
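
With all four entities in place, a quick row-count comparison shows whether the example data moved end to end. The following is a minimal sketch for a Fabric notebook whose default lakehouse is LH_Data_Landingzone; the cross-lakehouse table references assume that the bronze and silver lakehouses live in the same workspace.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # predefined as `spark` in Fabric notebooks

# Landing zone: the Parquet file copied from the OData feed.
landing_count = spark.read.parquet("Files/InSpark_Sales/total_sales").count()

# Bronze and silver: Delta tables referenced as <lakehouse>.<schema>.<table>.
bronze_count = spark.table("LH_Bronze_Layer.dbo.total_sales").count()
silver_count = spark.table("LH_Silver_Layer.dbo.total_sales").count()

print(f"landing={landing_count}, bronze={bronze_count}, silver={silver_count}")
```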

Next steps