
Data Compare Mapping

Map datasets and columns


Last updated 4 months ago

Source to Target Mappings

There are three options for mapping datasets and columns from source to target.

  1. When the table names and column names are the same between source and target, they are mapped automatically.

  2. The user can explicitly edit the mapping by deleting entries or performing custom mapping. Steps to perform custom mapping are shown in the video below.

  3. Upload a dataset/column mapping file.

The user can upload a custom mapping file. This option is useful in the following scenarios:

  • when the dataset names or column names are completely different.

  • when the user wants to map only certain datasets or columns.

The format of the CSV file is:

source_table_name,target_table_name,source_column_name,target_column_name,key_column
The CSV file should contain the following headers:

  1. source_table_name: The name of the table in the source database.

  2. target_table_name: The name of the table in the target database where the data is to be migrated or integrated.

  3. source_column_name: The name of the column in the source table.

  4. target_column_name: The name of the column in the target table that corresponds to the source column.

  5. key_column: A boolean flag denoting whether the source column is a key column (primary key, foreign key, etc.). The value must be true or false.

The mapping file is case sensitive. Ensure the table names and column names use the same case as in the database.
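As a sketch of what a mapping file looks like, the snippet below writes one with Python's `csv` module. The table and column names (`orders_src`, `ORDER_ID`, etc.) are hypothetical examples, not names from the product:

```python
import csv

# Hypothetical source/target tables and columns, for illustration only.
rows = [
    # source_table,  target_table,  source_column, target_column, key_column
    ("orders_src", "orders_tgt", "ORDER_ID",   "order_id",   "true"),
    ("orders_src", "orders_tgt", "ORDER_DATE", "order_date", "false"),
]

with open("mapping.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # Header row must match the five expected column names exactly.
    writer.writerow(["source_table_name", "target_table_name",
                     "source_column_name", "target_column_name", "key_column"])
    writer.writerows(rows)
```

Note that the names are written exactly as they appear in the database, since the mapping file is case sensitive.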

SQL Transformation

  • SQL Transformation: This feature applies an SQL function to any of the Source and Target columns. It is helpful, for instance, when a value in a column from one data source is written in uppercase while the corresponding column in the other data source holds it in lowercase: an SQL Transformation can convert both columns to uppercase before comparison. To apply one:

    • Click the icon under the SQL column.

    • Provide the SQL function in the text field and click Done, or...

    • Slide the toggle and add predefined rules.
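The snippet below is a minimal sketch, not Vexdata's implementation, of why such a transformation matters: without it a cell-by-cell compare reports a mismatch on case alone, while applying `UPPER()` to both sides makes the values match.

```python
import sqlite3

# In-memory tables standing in for a source and a target dataset.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source_t (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE target_t (id INTEGER, name TEXT)")
conn.execute("INSERT INTO source_t VALUES (1, 'ALICE')")  # uppercase in source
conn.execute("INSERT INTO target_t VALUES (1, 'alice')")  # lowercase in target

# Raw compare: the values differ because of case (returns 0 = false).
raw = conn.execute(
    "SELECT s.name = t.name FROM source_t s JOIN target_t t ON s.id = t.id"
).fetchone()[0]

# With UPPER() applied to both sides, the values match (returns 1 = true).
transformed = conn.execute(
    "SELECT UPPER(s.name) = UPPER(t.name) "
    "FROM source_t s JOIN target_t t ON s.id = t.id"
).fetchone()[0]

print(raw, transformed)
```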

Tolerance

  • Tolerance: Users can apply a Tolerance to numeric (e.g., long, double, float, short, int), date, and timestamp column data types. Tolerance cannot be added to a key column.

    • Click the icon under Tolerance.

    • Select the Operator and type in the tolerance Value.

    • Click Done.
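As a minimal sketch (not Vexdata's implementation) of how a numeric tolerance behaves in a compare: two cells are treated as matching when their difference stays within the tolerance value.

```python
def within_tolerance(source_value: float, target_value: float,
                     tolerance: float) -> bool:
    """Return True when the two cell values differ by at most `tolerance`."""
    return abs(source_value - target_value) <= tolerance

# A 0.05 tolerance accepts a 0.02 rounding difference but rejects 0.10.
print(within_tolerance(100.00, 100.02, 0.05))  # True
print(within_tolerance(100.00, 100.10, 0.05))  # False
```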
