7.4 Prepare Capture Process

The “prepare” Capture Process command is used to create a new Capture Process.

The “prepare” command does not itself cause the Database to start capturing data from the configured objects; capturing begins only when the specified Capture Process is run (see chapters 7.15.1 Run any Capture or Apply Processes and 8.2.2.4 Processes Monitoring and Maintenance for the details).

The id parameter is mandatory for the “prepare” command; all other parameters are optional. All parameters except id can later be changed using the “alter” Capture Process command (see chapters 7.5 Alter Capture Process and 8.2.2.3 Modify, Delete and Validate Processes). The name parameter can be altered only if it was not provided earlier. If another Capture Process already has the same id or the same name, the “prepare” command will fail.

7.4.1 Prepare Capture Process for MS SQL Server Database

REST API:

  • Endpoint: https://repstance_url/configure/process
  • Method: POST
  • Header:
    • Content-Type: application/json
    • X-Token: token
  • Body:
    {
      "command":"prepare",
      "parameters":[
         ["process","capture"],
         ["id","captureID"],
         ["dbtype","mssql"],
         ["name","captureName"],
         ["server","databaseHost"],
         ["port","databasePort"],
         ["user","username"],
         ["password","password"],
         ["dbname","databaseName"],
         ["autostart","{0|1}"],
         ["debuglevel","{0-15}"],
         ["ddlinclude","objectsMask"],
         ["ddlexclude","objectsMask"],
         ["dmlinclude","objectsMask"],
         ["dmlexclude","objectsMask"],
         ["loadinclude","objectsMask"],
         ["loadexclude","objectsMask"],
         ["map=mapID","mappingClause"],
         ["skipapply","skipapplyMask"]
      ]
    }

Server response:

  • HTTP Status – status of the command, possible codes are:
    • 200 – if no error occurs
    • 422 – if error occurs
  • Body:
    {
     "Status":"{Failed|Completed}",
     "Message":"The command execution details"
    }
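Assuming network access to the Repstance endpoint and a valid X-Token, the request above can be assembled with Python's standard library. This is a sketch only; the URL, token, and all connection values below are placeholders:

```python
import json
import urllib.request

def build_prepare_request(base_url, token, parameters):
    """Build the POST request carrying the 'prepare' command body shown above."""
    body = json.dumps({"command": "prepare", "parameters": parameters}).encode()
    return urllib.request.Request(
        url=base_url + "/configure/process",
        data=body,
        method="POST",
        headers={"Content-Type": "application/json", "X-Token": token},
    )

# Placeholder values -- substitute your Repstance URL, token and connection details.
req = build_prepare_request(
    "https://repstance_url",
    "token",
    [["process", "capture"], ["id", "1"], ["dbtype", "mssql"],
     ["server", "databaseHost"], ["port", "1433"], ["user", "username"],
     ["password", "password"], ["dbname", "databaseName"], ["autostart", "1"]],
)
# Sending it would be: urllib.request.urlopen(req); the response body carries
# the Status ("Completed" or "Failed") and Message fields described below.
print(req.get_method(), req.full_url)
```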

CLI Syntax:

repcli prepare process=capture id=captureID \
dbtype=mssql name=captureName \
server=databaseHost port=databasePort \
user=username password=password \
dbname=databaseName autostart={0|1} debuglevel={0-15} \
ddlinclude=objectsMask ddlexclude=objectsMask \
dmlinclude=objectsMask dmlexclude=objectsMask \
loadinclude=objectsMask loadexclude=objectsMask \
map=mapID,mappingClause \
skipapply=skipapplyMask

The input parameters are:

  • process – The only appropriate value is capture
  • id – The Capture Process id
  • dbtype – The appropriate value is mssql
  • name – Name of the Capture Process
  • server – Host name or IP address of the database server
  • port – Database port
  • dbname – Database name
  • user – Database User name
  • password – Database User password
  • autostart – Determines if the Process must be run automatically, the possible values are:
    • 0 – do not run the Process automatically (default value)
    • 1 – to run the Process automatically
  • debuglevel – The level of debugging, possible values are 0-15, the default value is 0
  • ddlinclude – Mask of the DDL objects to be captured (see chapter 7.4.4 Capture Objects Specification for the details)
  • ddlexclude – Mask of the DDL objects to be skipped by the Capture Process (see chapter 7.4.4 Capture Objects Specification for the details)
  • dmlinclude – Mask of the DML objects to be captured (see chapter 7.4.4 Capture Objects Specification for the details)
  • dmlexclude – Mask of the DML objects to be skipped by the Capture Process (see chapter 7.4.4 Capture Objects Specification for the details)
  • loadinclude – Mask of the objects to be included into the Initial Load (see chapter 7.4.5 Initial Load for the details)
  • loadexclude – Mask of the objects to be skipped during Initial Load (see chapter 7.4.5 Initial Load for the details)
  • map – The set of the parameters to determine objects’ transformation (see chapter 7.4.6 Objects Mapping and Possible Transforms for the details)
  • skipapply – Defines whether the Capture Process extracts or ignores data that was produced by Apply Processes as part of the “loop-back” control function. The Apply Processes to be skipped are specified as a comma-separated list of ids. Use the “all” value to skip all Apply Processes and the “none” value to skip none of them. The default value is “all”.
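The skipapply semantics can be sketched as follows; this illustrates the documented rules only, not Repstance's internal implementation:

```python
def should_skip(skipapply_mask, apply_id):
    """Return True if loop-back data produced by the given Apply Process id
    should be ignored by the Capture Process (per the documented semantics)."""
    if skipapply_mask == "all":   # default: ignore data from every Apply Process
        return True
    if skipapply_mask == "none":  # extract loop-back data from all Apply Processes
        return False
    skipped = {s.strip() for s in skipapply_mask.split(",")}
    return str(apply_id) in skipped

print(should_skip("all", 3))   # True
print(should_skip("none", 3))  # False
print(should_skip("1,2", 2))   # True
print(should_skip("1,2", 3))   # False
```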

7.4.2 Prepare Capture Process for Oracle Database

REST API:

  • Endpoint: https://repstance_url/configure/process
  • Method: POST
  • Header:
    • Content-Type: application/json
    • X-Token: token
  • Body:
    {
      "command": "prepare",
      "parameters": [
        ["process","capture"],
        ["id","captureID"],
        ["dbtype","oracle"],
        ["name","captureName"],
        ["connectiontype","tns|ezconnect"],
        ["tnsname","tns_alias"],
        ["server","host_name"],
        ["port","port_number"],
        ["servicename","service_or_SID"],
        ["dbname","databaseName"],
        ["user","username"],
        ["password","password"],
        ["autostart","{0|1}"],
        ["debuglevel","{0-15}"],
        ["ddlinclude","objectsMask"],
        ["ddlexclude","objectsMask"],
        ["dmlinclude","objectsMask"],
        ["dmlexclude","objectsMask"],
        ["loadinclude","objectsMask"],
        ["loadexclude","objectsMask"],
        ["map=mapID","mappingClause"],
        ["skipapply","skipapplyMask"],
        ["loadconsistent","{0|1}"],
        ["directlogmode","0|1|2"],
        ["asmuser","asm_user"],
        ["asmpassword","asm_password"],
        ["asmservicename","asm_service_or_SID"],
        ["minearchonly","0|1"],
        ["locallogdir","path_to_log_folder"],
        ["poolingdelay","pooling_delay_in_sec"],
        ["preview","0|1"]
      ]
    }

Server response:

  • HTTP Status – status of the command, the possible codes are:
    • 200 – if no error occurs
    • 422 – if error occurs
  • Body:
    {
    "Status":"{Failed|Completed}",
    "Message":"The command execution details"
    }

CLI Syntax:

repcli prepare process=capture id=captureID \
dbtype=oracle name=captureName \
connectiontype=tns|ezconnect tnsname=tns_alias \
server=databaseHost port=databasePort \
servicename=service_or_SID \
user=username password=password \
dbname=databaseName autostart={0|1} debuglevel={0-15} \
ddlinclude=objectsMask ddlexclude=objectsMask \
dmlinclude=objectsMask dmlexclude=objectsMask \
loadinclude=objectsMask loadexclude=objectsMask \
loadconsistent={0|1} map=mapID,mappingClause \
skipapply=skipapplyMask directlogmode={0|1|2} \
asmuser=asm_username asmpassword=asm_password \
asmservicename=asm_service_name \
minearchonly={0|1} locallogdir=path_to_log_directory \
poolingdelay={0-99999} preview={0|1}

The input parameters are:

  • process – The only appropriate value is capture
  • id – Capture Process id
  • dbtype – Type of RDBMS, the only possible value is oracle
  • name – Name of the Capture Process
  • connectiontype – Specifies the Oracle connection method, the possible values are:
    • tns – Local Naming Method to be used
    • ezconnect – EZCONNECT to be used
  • tnsname – Name of the TNS alias, valid only if connectiontype=tns
  • server – Host name or IP address of the database server, valid only if connectiontype=ezconnect
  • port – Database port number, valid only if connectiontype=ezconnect
  • servicename – Database service name or SID, valid only if connectiontype=ezconnect
  • dbname – Name of either the container or the pluggable database. Valid only if database=source, and only for Oracle versions 12c-19c, excluding RDS instances
  • user – Database User name
  • password – Database User password
  • autostart – Determines if the Process must be run automatically, the possible values are:
    • 0 – do not run the Process automatically (default value)
    • 1 – to run the Process automatically
  • debuglevel – Level of debugging, possible values are 0-15, default value is 0
  • ddlinclude – Mask of the DDL objects to be captured (see the 7.4.4 Capture Objects Specification chapter for the details)
  • ddlexclude – Mask of the DDL objects to be skipped by the Capture Process (see the 7.4.4 Capture Objects Specification chapter for the details)
  • dmlinclude – Mask of the DML objects to be captured (see the 7.4.4 Capture Objects Specification chapter for the details)
  • dmlexclude – Mask of the DML objects to be skipped by the Capture Process (see the 7.4.4 Capture Objects Specification chapter for the details)
  • loadinclude – Mask of the objects to be included into the Initial Load (see chapter 7.4.5 Initial Load for the details)
  • loadexclude – Mask of the objects to be skipped during Initial Load (see chapter 7.4.5 Initial Load for the details)
  • loadconsistent – Determines if all the tables included into Initial Load must be consistent to a single point in time. The possible values are:
    • 0 – do not export all the tables at the same SCN (default)
    • 1 – export all the tables at the SCN
  • map – The set of the parameters to determine objects’ transformation (see 7.4.6 Objects Mapping and Possible Transforms chapter for the details)
  • skipapply – Defines whether the Capture Process extracts or ignores data that was produced by Apply Processes as part of the “loop-back” control function. The Apply Processes to be skipped are specified as a comma-separated list of ids. Use the “all” value to skip all Apply Processes and the “none” value to skip none of them. The default value is “all”.
  • directlogmode – Used to define Data Capturing Method (see chapter 7.4.3 Overview of Data Capturing Methods for the details). The possible values are:
    • 0 – use LogMiner method to capture the data (default)
    • 1 – use Direct Log Mining method to capture the data. If the “locallogdir” parameter is in use the Capture Process will process the Redo Logs from the local directory, otherwise the Capture Process will read the Logs from the Source Database as Oracle directory objects
    • 2 – use Direct Log Mining for Redo Logs located on the ASM device
  • asmuser – ASM user name
  • asmpassword – ASM user password
  • asmservicename – ASM service name or SID
  • minearchonly – Determines if the Capture Process must process Archived Redo Logs only. The possible values are:
    • 0 – process Online and Archived Redo Logs (default)
    • 1 – process Archived Redo Logs only
  • locallogdir – Path to the local directory to process archived logs from (see chapter 7.4.3 Overview of Data Capturing Methods for the details)
  • poolingdelay – Specifies the number of seconds the Capture Process waits after each pass of gathering database changes
  • preview – This parameter is used to validate list of the tables to be included into the replication along with the transformation rules. The possible values are:
    • 0 – do not use “preview” mode (default)
    • 1 – do not apply the configuration to the Capture Process but provide the list of the tables to be included into the replication along with the transformation rules.

7.4.3 Overview of Data Capturing Methods for Oracle Database

The Capture Process can extract Oracle database changes either by using the “LogMiner” functionality or by reading them directly from the “Redo” or “Archived Redo” logs (Direct Log Mining).

LogMiner is a built-in Oracle facility that analyzes Redo Logs and extracts the database changes. The LogMiner method can be considered in the following cases:

  • A small number of changes (fewer than 5000 DML operations per second) needs to be replicated
  • Extra CPU load (up to 15% at peak) is acceptable for the Source Database
  • Replication of LOB columns containing more than 4000 bytes of data is not required
  • Size of the changes to be extracted is less than 5% of the Redo log size
  • Clustered tables need to be replicated
  • Access to Redo/Archived Logs is not possible to configure

In Direct Log Mining mode the Capture Process retrieves the Redo Logs and uses them to extract the database changes.

The Direct Log Mining method fits most databases best and is used to meet the following replication requirements:

  • High replication speed
  • Minimal impact on the Source Database
  • Support for LOB columns
  • Replication of Clustered Tables is not required
  • Replication of Partitioned Index-Organized Tables is not required

Capture Process configuration varies depending on the Redo Log location. Redo Logs located either on a file system or on an ASM device are supported. The Capture Process can also be configured to extract the changes from Redo Log files that are available in the Repstance instance.

File System Location

For the Redo Logs located on the file system the “ONLINELOG_DIR” and “ARCHIVELOG_DIR” directories should be created (see chapter 7.1.2 Prepare Oracle Database as Source Database for the details). The “directlogmode” parameter of the Capture Process should be set to “1”.

ASM Device

Capture Process requires ASM connection to be configured to access Redo Logs. The following parameters should be provided:

  • asmuser
  • asmpassword
  • asmservicename

The “directlogmode” parameter should be set to “2”.

Note – The ASM User has to have “sysdba” role assigned.

Repstance Local File System

Capture Process can be configured to extract the changes from Redo Logs delivered into the Repstance instance by any external process. This configuration may be used to reduce the load on Oracle Net Services or when access to the Redo Logs through the Oracle Instance is hard to implement. To configure this, the “locallogdir” parameter should be provided and the “directlogmode” parameter should be set to “1”.
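As a summary sketch, the three Redo Log locations described above correspond to the following parameter combinations. All paths and credential values here are placeholders, not a working configuration:

```python
def direct_log_params(location):
    """Map each documented Redo Log location to its 'prepare' parameters.
    Paths and ASM credentials below are placeholders."""
    if location == "filesystem":  # Redo Logs on the database file system
        return [["directlogmode", "1"]]
    if location == "asm":         # Redo Logs on an ASM device (ASM user needs sysdba)
        return [["directlogmode", "2"],
                ["asmuser", "asm_user"],
                ["asmpassword", "asm_password"],
                ["asmservicename", "asm_service_or_SID"]]
    if location == "local":       # Redo Logs delivered into the Repstance instance
        return [["directlogmode", "1"],
                ["locallogdir", "/path/to/log/folder"]]
    raise ValueError("unknown location: " + location)

print(direct_log_params("filesystem"))  # [['directlogmode', '1']]
```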

7.4.4 Capture Objects Specification

Repstance allows flexibility in determining the set of tables to be captured. The tables for DML and DDL operations are specified by different parameters.

The following parameters are used for DML replication:

  • dmlinclude – list of the tables to be included into the DML replication
  • dmlexclude – list of the tables to be excluded from the DML replication

The following parameters are used for DDL replication:

  • ddlinclude – list of the tables to be included into the DDL replication
  • ddlexclude – list of the tables to be excluded from the DDL replication

Note – These criteria are mutually exclusive. A table will only be replicated if its name matches the “include” criteria and does not match the “exclude” criteria.

The table is specified in schema_name.table_name format. To specify a list of tables, separate each table with a comma. The % symbol may be used to match any number of characters, in both the schema_name and table_name parts.

The following examples show various ways of defining tables:

  • Customers and employees tables, which are in the dbo schema:
dbo.customers,dbo.employees
  • Any tables in the dbo schema:
dbo.%
  • Any tables whose names start with rep or tmp:
%.rep%,%.tmp%
  • Any tables in the report schema whose names end with #:
report.%#
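The mask semantics above can be sketched by translating % into a regular-expression wildcard. This is an illustration of the documented matching rules, not the product's own matcher:

```python
import re

def matches(mask_list, table_name):
    """Return True if schema.table matches any comma-separated mask;
    % matches any run of characters."""
    for mask in mask_list.split(","):
        # Split on % so every literal part is regex-escaped safely.
        pattern = ".*".join(re.escape(part) for part in mask.split("%"))
        if re.fullmatch(pattern, table_name):
            return True
    return False

print(matches("dbo.customers,dbo.employees", "dbo.employees"))  # True
print(matches("dbo.%", "dbo.orders"))                           # True
print(matches("%.rep%,%.tmp%", "hr.report1"))                   # True
print(matches("report.%#", "report.sales#"))                    # True
print(matches("dbo.%", "hr.orders"))                            # False
```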

The following examples show how to use the DML/DDL parameters:

  • To capture DML changes for all tables in dbo schema except where tables start with rep:
dmlinclude=dbo.%
dmlexclude=dbo.rep%
  • To capture DML changes for all tables in the cap1, cap2 and cap3 schemas except the report1 table from the cap1 schema:
dmlinclude=cap1.%,cap2.%,cap3.%
dmlexclude=cap1.report1
  • To capture DML and DDL operations for any tables in the dbo schema:
dmlinclude=dbo.%
ddlinclude=dbo.%
  • To capture only DDL operations for objects in the report schema:
ddlinclude=report.%

These parameters also influence newly created tables. The following shows the capture behavior for newly created tables:

  Matches DML criteria   Matches DDL criteria   Capture Behavior
  true                   true                   The CREATE statement is captured. Any further DML statements will be captured as well
  true                   false                  The CREATE statement is not captured. No DML statements will be captured
  false                  true                   The CREATE statement is captured. DML statements are not captured; however, any further DDL statements are captured
  false                  false                  The CREATE statement is not captured. Neither DML nor DDL statements will be captured
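The behavior above reduces to a simple rule, sketched here: DDL capture follows the DDL criteria, and DML is captured only when the CREATE statement itself was captured:

```python
def create_behavior(matches_dml, matches_ddl):
    """Return (ddl_captured, dml_captured) for a newly created table."""
    ddl_captured = matches_ddl                   # CREATE and later DDL follow the DDL criteria
    dml_captured = matches_dml and matches_ddl   # DML needs the CREATE to be captured too
    return ddl_captured, dml_captured

print(create_behavior(True, True))    # (True, True)
print(create_behavior(True, False))   # (False, False)
print(create_behavior(False, True))   # (True, False)
print(create_behavior(False, False))  # (False, False)
```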

7.4.5 Initial Load

Repstance has built-in functionality to load objects’ data and start the replication process from the timestamp at which the data was created. Initial Loading is performed by the Capture Process at the “Run” stage (see chapter 7.15.1 Run any Capture or Apply Processes for the details).

This functionality is used when it is necessary to synchronize data between the Source and Target Databases before starting the replication process. The Initial Load parameters can be configured to either clean up or preserve data in the Target Database before loading from the Source Database. Two further options are to create table in the Target Database if it doesn’t exist and recreate this table in the Target Database if it does exist. 

The following parameters are used to configure the objects for Initial Loading:

loadinclude – list of the tables to be included into the Initial Load

loadexclude – list of the tables to be excluded from the Initial Load

Note – These parameters MUST be used in conjunction with the dmlinclude/dmlexclude object definitions, otherwise the Capture Process will skip them.

The format to be specified takes the following form:

schema_name1.table_name1:[loadOption1],schema_name2.table_name2:[loadOption2],...

To define a list of tables, separate each table mask with a comma. The % symbol may be used to match any number of characters, in both the schema_name and table_name parts.

The loadOption is an optional parameter; it defines the action, detailed below, to perform on the data before insertion into the Target Database, and can have the following values:

  • A – preserve the data (the DEFAULT value)
  • T – use a truncate statement to clean up the data
  • D – use a delete statement to clean up the data
  • C – create the table if it doesn’t exist
  • R – recreate the table if it already exists
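A sketch of parsing the loadinclude/loadexclude format above, illustrative only; the default option A is applied when the :loadOption suffix is omitted:

```python
def parse_load_masks(spec):
    """Split a loadinclude/loadexclude value into (table_mask, load_option)
    pairs; the load option defaults to 'A' (preserve data) when omitted."""
    result = []
    for item in spec.split(","):
        mask, _, option = item.strip().partition(":")
        result.append((mask, option or "A"))
    return result

print(parse_load_masks("dbo.emp:T,dbo.emp_audit_trail:T"))
# [('dbo.emp', 'T'), ('dbo.emp_audit_trail', 'T')]
print(parse_load_masks("dbo.customer"))
# [('dbo.customer', 'A')]
```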

Note – Once the Initial Load has been completed by the Capture Process, the values of the loadinclude and loadexclude parameters will be removed automatically. The details of the Initial Load criteria can be found in the Capture log files.

The following examples show various ways of defining the Initial Loading parameters:

To reload data for the emp and emp_audit_trail tables in the dbo schema, truncating these tables before insertion:

loadinclude=dbo.emp:T,dbo.emp_audit_trail:T 

To reload data for tables starting with the rep_ prefix in the dbo schema, deleting the existing data before insertion:

loadinclude=dbo.rep_%:D

To reload all tables starting with the emp prefix except the employee_archive table:

loadinclude=dbo.emp%:T
loadexclude=dbo.employee_archive

To reload the customer table, creating it if it doesn’t exist in the Target Database:

loadinclude=dbo.customer:C

7.4.6 Objects Mapping and Possible Transforms

Transforms are primarily used where there is a need to reformat statements. Transformation can only be configured for a Capture Process; applying the transformed data is inherent in the Apply Process, so it is neither possible nor necessary to configure a Transform for an Apply Process.

Repstance supports transformation rules, which can be applied to the following objects:

  • schema name
  • table name
  • column name
  • “identity” attribute (valid only for SQL Server DB)
  • “primary key” attribute
  • data type
  • data type length
  • data

Transformation rules can be triggered based on:

  • schema name mask
  • table name mask
  • column name mask
  • “identity” attribute (valid only for SQL Server DB)
  • reference “identity” column (valid only for SQL Server DB)
  • “primary key” attribute
  • data type mask
  • data type length

Transformation rules can be used to exclude the following objects:

  • table
  • column
  • “identity” attribute (valid only for SQL Server DB)
  • reference “identity” column (valid only for SQL Server DB)
  • “primary key” attribute

Transformation rules can be used to add the following objects:

  • column
  • “identity” attribute (valid only for SQL Server DB)
  • “primary key” attribute

Transformation rules can be used to change the following objects:

  • schema name
  • table name
  • column name
  • “identity” attribute (valid only for SQL Server DB)
  • “primary key” attribute
  • data type
  • data type length
  • data

Note – Data can be changed to the following:

  • constant value
  • original value of another column from this table
  • result of SQL Server function execution. The original values of any columns from this table can be passed to the Function as parameters. The Function must exist in the Target Database, where the Apply Process is to be run.

There are normally three separate parts that comprise a Transformation. They are:

  • id – a unique identifier for each Transform; it determines the order in which the Transforms are executed by the Process
  • rule – specifies the object to be transformed and specifies how it is to be transformed
  • description – a meaningful description of this Transformation

The syntax used is:

map=id,rule=(CaptureCriteria:TransformCriteria),description="desc"

where:

CaptureCriteria=schema_mask.table_mask.column_mask.[attr_spec]

attr_spec=PK=pk;TYPE=type;ID=id;LEN=len

The CaptureCriteria consists of:

  • schema_mask – the mask is used to specify schema name
  • table_mask – the mask is used to specify table name
  • column_mask – the mask is used to specify column name

The attr_spec is:

  • PK – the specification of the PK attribute, it can accept the following values:
    • N – to determine a column without PK
    • U – to include the PK column
  • ID – determines if the column should or should not be handled as an “identity_insert” column, the possible values are:
    • 0 – to process as a “non identity_insert” column
    • 1 – to process as an “identity_insert” column
    • 2 – to process as a reference “identity” column
  • TYPE – name of the data type
  • LEN – length of the data type, if applicable

TransformCriteria=schema_mask.table_mask.column_mask.[attr_spec]

attr_spec=PK=PK;TYPE=type;ID=id;LEN=len;DATA="data_specification"

data_specification=DATA="SQL_function_or_operation"

The TransformCriteria consists of:

  • schema_mask – the mask is used to specify schema name
  • table_mask – the mask is used to specify table name
  • column_mask – the mask is used to specify column name

The attr_spec is:

  • PK – the specification of the PK attribute, it can accept the following values:
    • N – to determine a column without PK
    • U – to include the PK column
  • ID – determines if the column should or should not be handled as an “identity_insert” column, the possible values are:
    • 0 – to process as a “non identity_insert” column
    • 1 – to process as an “identity_insert” column
    • 2 – to process as a reference “identity” column
  • TYPE – name of the data type
  • LEN – length of the data type, if applicable
  • DATA – the data that will result from the transformation applied to the captured data. This can be:
    • a predefined value in the format "predefined_value"
    • any operation based on the values of this column. To use the values found in the column, the % symbol is used
    • any operation based on the values of any columns in this table. To use the value of any column, the format "C:number_of_column" is used. The table name can be passed as a parameter as well, in the format "T:0".

To exclude columns the following syntax is used:

schema_mask.table_mask.NULL

where:

  • schema_mask – the mask is used to specify schema name
  • table_mask – the mask is used to specify table name

To include any columns that were not represented in the source table the following syntax is used:

schema_mask.table_mask.column_name.[TYPE=type;LEN=len;PK=pk;ID=id;DATA="data_specification"]

where:

  • schema_mask – the mask is used to specify schema name
  • table_mask – the mask is used to specify table name
  • column_name – column name (this must be included and cannot use the % symbol)
  • TYPE – data type definition of the column to be created (this must be included and cannot use the % symbol)
  • LEN – length of the data type, where necessary, must be specified here
  • PK – the specification of the PK attribute. It accepts the following values:
    • N – to determine a column without PK
    • U – to include the PK column
  • ID – determines if the column should or should not be handled as an “identity_insert” column. The possible values are:
    • 0 – to process as a “non identity_insert” column
    • 1 – to process as an “identity_insert” column
  • DATA – the data that will result from the transformation applied to the captured data. This can be:
    • a predefined value in the format "predefined_value"
    • any operation based on the values of this column. To use the values found in the column, the % symbol is used
    • any operation based on the values of any columns in this table. To use the value of any column, the format "C:number_of_column" is used. The table name can be passed as a parameter as well, in the format "T:0".

The following examples show how to use the Transformation Rules:

  • For all tables located in the dbo schema change the schema name to report:
map=1,rule=(dbo.%:report.%),description="sample 1"
  • Rename dbo.emp table to report.employees table:
map=2,rule=(dbo.emp:report.employees),description="sample 2"
  • Exclude info column in the dbo.customer table from the replication:
map=3,rule=(dbo.customer.info:dbo.customer.NULL),description="sample 3"
  • For any tables in the dbo schema having name starting with rep_ replace the prefix to the report_:
map=4,rule=(dbo.rep_%:dbo.report_%),description="sample 4"
  • For all tables in the dbo schema having ident column change the column name to id:
map=5,rule=(dbo.%.ident:dbo.%.id),description="sample 5"
  • Exclude from the replication any columns having name id and defined as primary key:
map=6,rule=(dbo.%.id.[PK=U]:dbo.%.id.NULL),description="sample 6"
  • For any tables having column id of int type handle this column as primary key. Any update and delete statements will be built to use id column as primary key or the part of composite primary key:
map=7,rule=(dbo.%.id.[TYPE=int]:dbo.%.id.[PK=U]),description="sample 7"
  • Convert the value of the desc column in the branch table to uppercase:
map=8,rule=(dbo.branch.desc:dbo.branch.desc.[DATA=\"upper(%)\"])
  • Add a new column named tag of the nvarchar(100) type to the customer table, containing the predefined value rep:
map=9,rule=(dbo.customer.NULL:dbo.customer.tag.[TYPE=nvarchar; LEN=100;DATA=\"'rep'\"])
  • Add a new column named name of the nvarchar(255) type to the person table, containing the concatenation of the firstname and lastname columns (in the table, firstname is at position 2 and lastname at position 3):
map=10,rule=(dbo.person.NULL:dbo.person.name.[TYPE=nvarchar; LEN=255;DATA=\"C:2+C:3\"])
  • For any tables in the dbo schema having a name starting with rep_ and a column id defined as primary key and identity_insert: rename the table into the report schema without the rep_ prefix and use the custom function report.generateID to reproduce the value of this column. The column will still be used as a primary key, but not as identity_insert:
map=11,rule=(dbo.rep_%.id.[PK=U;ID=1]:report.%.id.[PK=U;ID=0; DATA=\"report.generateID()\"])

7.4.7 Transformation Rules and Triggering Order

It is imperative to understand that the order in which the rules are applied can significantly impact the results. The following logic is used to process rule order: the Capture Process checks the configured rules in order and applies the first rule whose criteria the found objects match. Once an object has matched a rule, any subsequent rules are ignored for that object.

This first-match behavior means the rules should be ordered from most specific to most general: the rule affecting the least amount of data should come first, and the rule affecting the largest amount of data should come last. In this way the transform requirements will be applied in the desired way.

The following example demonstrates this logic. Assume you need to rename all tables in the dbo schema whose names start with emp into the hr schema, keeping the original table names. A second requirement is to rename the ident column to id for any table in the dbo schema. An employee table containing an ident column matches both rules. The examples below demonstrate the replication behavior depending on the order in which the rules are defined:

  • In this case you can’t expect the ident column to be renamed to id together with the dbo schema being renamed to hr. The dbo schema for the employee table will be changed to hr, but the ident column will not be renamed:
map=1,rule=(dbo.emp%:hr.emp%),description="sample 1"
map=2,rule=(dbo.%.ident:dbo.%.id),description="sample 2"
  • In this case the dbo schema will not be renamed to hr together with the ident column being renamed to id. The dbo schema for the employee table will not be changed, but the ident column will be renamed to id:
map=1,rule=(dbo.%.ident:dbo.%.id),description="sample 3"
map=2,rule=(dbo.emp%:hr.emp%),description="sample 4"
  • In this case the dbo schema for the employee table will be changed and the ident column will be renamed to id:
map=1,rule=(dbo.emp%.ident:hr.emp%.id)
map=2,rule=(dbo.emp%:hr.emp%)
map=3,rule=(dbo.%.ident:dbo.%.id)
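The first-match precedence described above can be sketched as follows. Rules are (capture mask, transform mask) pairs checked in id order, and the winner is the first rule whose capture mask matches; this is an illustration of the documented ordering, not Repstance internals:

```python
import re

def mask_matches(mask, name):
    # % matches any run of characters; literal parts are regex-escaped.
    pattern = ".*".join(re.escape(part) for part in mask.split("%"))
    return re.fullmatch(pattern, name) is not None

def first_matching_rule(name, rules):
    """Rules are evaluated in id order; the first capture mask that matches
    wins, and all later rules are ignored for that object."""
    for rule_id, capture_mask, transform_mask in rules:
        if mask_matches(capture_mask, name):
            return rule_id, transform_mask
    return None

# The third example above: the most specific rule is defined first.
rules = [(1, "dbo.emp%.ident", "hr.emp%.id"),
         (2, "dbo.emp%",       "hr.emp%"),
         (3, "dbo.%.ident",    "dbo.%.id")]

print(first_matching_rule("dbo.employee.ident", rules))  # rule 1 wins
print(first_matching_rule("dbo.employee", rules))        # rule 2 wins
print(first_matching_rule("dbo.customer.ident", rules))  # rule 3 wins
```

Note that "dbo.employee.ident" also satisfies rule 2's mask, but rule 1 is checked first and takes precedence, which is exactly why the most specific rule must carry the lowest id.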