Advanced Parameters

This section covers advanced configuration options for FastTransfer.

Load Mode

The Load Mode parameter --loadmode determines how data is loaded into the target table.

Available values:

  • Append (default): Inserts data into the target table without removing existing data
  • Truncate: Truncates (deletes all rows from) the target table before inserting new data

./FastTransfer \
...
--loadmode "Append" \
...

Syntax:

  • Short form: -L "Append|Truncate"
  • Long form: --loadmode "Append|Truncate"

Truncate Warning

Using Truncate mode will permanently delete all existing data in the target table before loading new data. Ensure you have proper backups and understand the impact before using this mode in production.

Examples

Append Mode (incremental load):

./FastTransfer \
--sourceconnectiontype pgsql \
--sourceserver "pg.example.com" \
--sourcedatabase "sales" \
--sourcetable "daily_transactions" \
--targetconnectiontype mssql \
--targetserver "sqlserver.example.com" \
--targetdatabase "warehouse" \
--targettable "transactions" \
--loadmode Append

Truncate Mode (full refresh):

./FastTransfer \
--sourceconnectiontype pgsql \
--sourceserver "pg.example.com" \
--sourcedatabase "sales" \
--sourcetable "product_catalog" \
--targetconnectiontype mssql \
--targetserver "sqlserver.example.com" \
--targetdatabase "warehouse" \
--targettable "products" \
--loadmode Truncate
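
After a full refresh it is worth checking that the target received the expected number of rows. A minimal sketch, one statement per server, using the tables from the example above (the example does not specify --targetschema, so the dbo schema is an assumption):

-- On the PostgreSQL source
SELECT COUNT(*) FROM product_catalog;

-- On the SQL Server target (dbo schema assumed)
SELECT COUNT(*) FROM dbo.products;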

Mapping Method

The Mapping Method parameter --mapmethod determines how columns are mapped between source and target tables.

Available values:

  • Position (default): Maps columns by their ordinal position (1st to 1st, 2nd to 2nd, etc.)
  • Name: Maps columns by their names

./FastTransfer \
...
--mapmethod "Name" \
...

Syntax:

  • Short form: -N "Position|Name"
  • Long form: --mapmethod "Position|Name"

Position Mapping

When using Position mapping (default), columns are matched by their position in the SELECT list and target table:

  • Source column 1 → Target column 1
  • Source column 2 → Target column 2
  • And so on...

Requirements:

  • Source and target must have the same number of columns
  • Column names do not need to match
  • Order matters

Example:

-- Source table
CREATE TABLE source_customers (
  id INT,
  name VARCHAR(100),
  email VARCHAR(255)
);

-- Target table (different column names, same order)
CREATE TABLE target_customers (
  customer_id INT,
  customer_name VARCHAR(100),
  customer_email VARCHAR(255)
);

./FastTransfer \
--sourcetable "source_customers" \
--targettable "target_customers" \
--mapmethod Position # This is the default

Name Mapping

When using Name mapping, columns are matched by their names:

Requirements:

  • Column names must match between source and target
  • Column order does not matter
  • Target can have additional columns not present in source

Example:

-- Source table
CREATE TABLE source_orders (
  order_id INT,
  customer_id INT,
  total_amount DECIMAL(10,2)
);

-- Target table (different order, extra column)
CREATE TABLE target_orders (
  order_id INT,
  order_date DATETIME DEFAULT GETDATE(), -- Extra column with default
  total_amount DECIMAL(10,2),
  customer_id INT
);

./FastTransfer \
--sourcetable "source_orders" \
--targettable "target_orders" \
--mapmethod Name

Batch Size

The Batch Size parameter --batchsize controls the number of rows inserted in each batch operation.

./FastTransfer \
...
--batchsize 10000 \
...

Syntax:

  • Short form: -B <number>
  • Long form: --batchsize <number>

Default behavior:

  • If not specified, FastTransfer uses an optimal batch size based on the target database type
  • Typical defaults range from 1,000 to 10,000 rows per batch

Performance Tuning

  • Larger batch sizes (10,000-50,000): Better for high-throughput scenarios with stable networks
  • Smaller batch sizes (1,000-5,000): Better for unreliable networks or when memory is limited
  • Test different batch sizes to find the best value for your specific scenario (see the sketch below)
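
One practical way to choose a value is to time the same transfer at several batch sizes. A minimal bash sketch, reusing the append example from above; transactions_benchmark is a hypothetical scratch table, and the servers are the placeholder values used throughout this page:

for size in 1000 5000 10000 50000; do
  echo "Testing --batchsize $size"
  # Truncate on every run so each timing starts from an empty table
  time ./FastTransfer \
    --sourceconnectiontype pgsql \
    --sourceserver "pg.example.com" \
    --sourcedatabase "sales" \
    --sourcetable "daily_transactions" \
    --targetconnectiontype mssql \
    --targetserver "sqlserver.example.com" \
    --targetdatabase "warehouse" \
    --targettable "transactions_benchmark" \
    --loadmode Truncate \
    --batchsize "$size"
done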

Batch Size by Target Type

Different target databases have different optimal batch sizes:

SQL Server:

--batchsize 5000  # Good balance for most scenarios

PostgreSQL:

--batchsize 10000  # PostgreSQL handles larger batches well

MySQL:

--batchsize 1000  # MySQL prefers smaller batches

Oracle:

--batchsize 5000  # Moderate batch size for Oracle

Use Work Tables

The Use Work Tables parameter --useworktables enables the use of temporary work tables for staging data before final insertion.

./FastTransfer \
...
--useworktables \
...

Syntax:

  • Short form: -W
  • Long form: --useworktables

How Work Tables Work

When enabled, FastTransfer:

  1. Creates a temporary staging table in the target database
  2. Loads data into the staging table using bulk operations
  3. Transfers data from staging to the target table in a single transaction
  4. Drops the staging table

This approach can improve performance and reliability for large transfers; the general pattern is sketched below.
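
Conceptually, the staging pattern corresponds to the T-SQL below. This is only an illustration of the technique; the table names are hypothetical, and the actual statements FastTransfer issues are internal and may differ:

-- 1. Create a work table with the same structure as the target (hypothetical names)
SELECT * INTO work_orders FROM orders WHERE 1 = 0;

-- 2. Bulk load the incoming rows into work_orders (handled by the bulk writer)

-- 3. Move the staged rows into the target in a single transaction
BEGIN TRANSACTION;
INSERT INTO orders SELECT * FROM work_orders;
COMMIT;

-- 4. Drop the work table
DROP TABLE work_orders;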

Benefits:

  • Better transaction control
  • Ability to validate data before final insert
  • Can be faster for very large datasets
  • Reduces locking on the target table

Considerations:

  • Requires additional temporary storage space
  • User needs permission to create temporary tables
  • May not be beneficial for small datasets

Example with Work Tables

./FastTransfer \
--sourceconnectiontype pgsql \
--sourceserver "pg.example.com" \
--sourcedatabase "operational" \
--sourcetable "large_dataset" \
--targetconnectiontype mssql \
--targetserver "sqlserver.example.com" \
--targetdatabase "warehouse" \
--targettable "large_dataset_copy" \
--useworktables \
--batchsize 10000 \
--degree 4

Run ID

The Run ID parameter --runid is a unique identifier for the current transfer operation. It can be used for tracking, logging, and monitoring purposes.

Information

If not specified, FastTransfer will generate a unique Run ID automatically.

Tips

If possible, inject the RunId provided by the caller (e.g., SSIS, Airflow, or any scheduler that supports a run identifier). This greatly simplifies log correlation and troubleshooting; a sketch follows the syntax below.

./FastTransfer \
...
--runid "transfer-2024-01-09-001" \
...

Syntax:

  • Long form only: --runid "<unique_identifier>"
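
As an example of injecting a caller-provided identifier, a wrapper script can read the run id from an environment variable and fall back to a timestamp. A minimal bash sketch; SCHEDULER_RUN_ID is a placeholder for whatever variable your orchestrator exposes (Airflow, for instance, makes context values such as the run id available to task processes via AIRFLOW_CTX_* environment variables):

# Use the scheduler's run id when present, otherwise a timestamp (placeholder variable name)
RUN_ID="${SCHEDULER_RUN_ID:-manual-$(date +%Y%m%d_%H%M%S)}"

./FastTransfer \
--sourceconnectiontype pgsql \
--sourceserver "pg.example.com" \
--sourcedatabase "sales" \
--sourcetable "daily_transactions" \
--targetconnectiontype mssql \
--targetserver "sqlserver.example.com" \
--targetdatabase "warehouse" \
--targettable "transactions" \
--runid "$RUN_ID"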

Log Level

The Log Level parameter --loglevel controls the verbosity of logging output during transfer operations.

Available values:

  • Information (default): Standard logging with key operation details
  • Debug: Verbose logging with detailed diagnostic information

./FastTransfer \
...
--loglevel "Debug" \
...

Syntax:

  • Long form only: --loglevel "<Information|Debug>"

Note

If a settings file is specified with --settingsfile, the log level defined in the settings file takes precedence unless overridden by the command-line parameter.

Information

See Logging Overview for detailed information about logging configuration and output.

Settings File

The Settings File parameter --settingsfile allows you to store all FastTransfer parameters in a configuration file, making it easier to manage complex transfers and reuse configurations.

./FastTransfer \
...
--settingsfile "/path/to/config/transfer-config.json" \
...

Syntax:

  • Long form only: --settingsfile "<path_to_json_file>"

Information

See Logging Overview for examples of settings file configuration.

Complete Example

Here's a complete example using advanced parameters:

.\FastTransfer.exe `
--sourceconnectiontype "pgsql" `
--sourceserver "pgserver.example.com" `
--sourceuser "pguser" `
--sourcepassword "pgpass" `
--sourcedatabase "sales" `
--sourceschema "public" `
--sourcetable "orders" `
--targetconnectiontype "mssql" `
--targetserver "sqlserver.example.com\PROD" `
--targetuser "sqluser" `
--targetpassword "sqlpass" `
--targetdatabase "warehouse" `
--targetschema "staging" `
--targettable "orders_staging" `
--loadmode "Truncate" `
--mapmethod "Name" `
--batchsize 10000 `
--useworktables `
--degree 4 `
--method "Random" `
--loglevel "Debug" `
--runid "pg_to_sql_orders_$(Get-Date -Format 'yyyyMMdd_HHmmss')" `
--settingsfile "C:\config\transfer-config.json"

This example:

  • Transfers data from PostgreSQL to SQL Server
  • Truncates the target table before loading
  • Maps columns by name (allows different column orders)
  • Uses 10,000 rows per batch
  • Enables work tables for staging
  • Uses 4 parallel threads with Random method for partitioning
  • Enables Debug logging for detailed diagnostics
  • Uses a dynamic Run ID with timestamp
  • Loads additional configuration from a settings file