
Other Databases

Examples of transferring data to and from ClickHouse, SAP HANA, Netezza, Teradata, and DuckDB databases.

ClickHouse

Transfer data to and from a ClickHouse database.

PostgreSQL to ClickHouse

Transfer data from PostgreSQL to ClickHouse using the bulk loader.

Single-threaded:

.\FastTransfer.exe `
--sourceconnectiontype "pgsql" `
--sourceserver "localhost:15432" `
--sourcedatabase "tpch" `
--sourceuser "FastUser" `
--sourcepassword "FastPassword" `
--sourceschema "tpch_10" `
--query "select * from tpch_10.orders limit 100000" `
--targetconnectiontype "clickhousebulk" `
--targetserver "localhost:8123" `
--targetdatabase "default" `
--targetuser "default" `
--targetpassword "YourPassword" `
--targetschema "default" `
--targettable "orders" `
--parallelmethod "None" `
--loadmode "Append" `
--batchsize 10000

Parallel with Ctid:

.\FastTransfer.exe `
--sourceconnectiontype "pgsql" `
--sourceserver "localhost:15432" `
--sourcedatabase "tpch" `
--sourceuser "FastUser" `
--sourcepassword "FastPassword" `
--sourceschema "tpch_10" `
--sourcetable "orders" `
--targetconnectiontype "clickhousebulk" `
--targetserver "localhost:8123" `
--targetdatabase "default" `
--targetuser "default" `
--targetpassword "YourPassword" `
--targetschema "default" `
--targettable "orders" `
--parallelmethod "Ctid" `
--degree 15 `
--loadmode "Truncate" `
--batchsize 1000000

ClickHouse to PostgreSQL

Transfer data from ClickHouse to PostgreSQL with various parallelization methods.

Single-threaded:

.\FastTransfer.exe `
--targetconnectiontype "pgcopy" `
--targetserver "localhost:15432" `
--targetdatabase "tpch" `
--targetuser "FastUser" `
--targetpassword "FastPassword" `
--targetschema "tpch_10" `
--targettable "orders_empty" `
--sourceconnectiontype "clickhouse" `
--sourceserver "localhost:8123" `
--sourcedatabase "default" `
--sourceuser "default" `
--sourcepassword "YourPassword" `
--sourceschema "default" `
--query "select * from default.orders limit 100000" `
--parallelmethod "None" `
--loadmode "Append" `
--batchsize 10000

Parallel with RangeId:

.\FastTransfer.exe `
--targetconnectiontype "pgcopy" `
--targetserver "localhost:15432" `
--targetdatabase "tpch" `
--targetuser "FastUser" `
--targetpassword "FastPassword" `
--targetschema "tpch_10" `
--targettable "orders_empty" `
--sourceconnectiontype "clickhouse" `
--sourceserver "localhost:8123" `
--sourcedatabase "default" `
--sourceuser "default" `
--sourcepassword "YourPassword" `
--sourceschema "default" `
--sourcetable "orders" `
--parallelmethod "RangeId" `
--distributeKeyColumn "O_ORDERKEY" `
--degree 15 `
--loadmode "Truncate" `
--batchsize 100000

Parallel with Ntile:

.\FastTransfer.exe `
--targetconnectiontype "pgcopy" `
--targetserver "localhost:15432" `
--targetdatabase "tpch" `
--targetuser "FastUser" `
--targetpassword "FastPassword" `
--targetschema "tpch_10" `
--targettable "orders_empty" `
--sourceconnectiontype "clickhouse" `
--sourceserver "localhost:8123" `
--sourcedatabase "default" `
--sourceuser "default" `
--sourcepassword "YourPassword" `
--sourceschema "default" `
--sourcetable "orders" `
--parallelmethod "Ntile" `
--distributeKeyColumn "O_ORDERKEY" `
--degree 15 `
--loadmode "Truncate" `
--batchsize 100000

Parallel with Random:

.\FastTransfer.exe `
--targetconnectiontype "pgcopy" `
--targetserver "localhost:15432" `
--targetdatabase "tpch" `
--targetuser "FastUser" `
--targetpassword "FastPassword" `
--targetschema "tpch_10" `
--targettable "orders_empty" `
--sourceconnectiontype "clickhouse" `
--sourceserver "localhost:8123" `
--sourcedatabase "default" `
--sourceuser "default" `
--sourcepassword "YourPassword" `
--sourceschema "default" `
--sourcetable "orders" `
--parallelmethod "Random" `
--distributeKeyColumn "O_ORDERKEY" `
--degree 15 `
--loadmode "Truncate" `
--batchsize 100000

Parallel with DataDriven:

.\FastTransfer.exe `
--targetconnectiontype "pgcopy" `
--targetserver "localhost:15432" `
--targetdatabase "tpch" `
--targetuser "FastUser" `
--targetpassword "FastPassword" `
--targetschema "tpch_10" `
--targettable "orders_empty" `
--sourceconnectiontype "clickhouse" `
--sourceserver "localhost:8123" `
--sourcedatabase "default" `
--sourceuser "default" `
--sourcepassword "YourPassword" `
--sourceschema "default" `
--sourcetable "orders" `
--parallelmethod "DataDriven" `
--distributeKeyColumn "year(o_orderdate)" `
--degree 15 `
--loadmode "Truncate" `
--batchsize 100000

ClickHouse Connection Types
  • clickhouse: Standard connection for reading from ClickHouse
  • clickhousebulk: High-performance bulk loader for writing to ClickHouse
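
After a bulk load completes, a quick row count on the target is a simple sanity check. A minimal sketch in ClickHouse SQL, assuming the default.orders table from the examples above:

SELECT count() FROM default.orders;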

SAP HANA

Transfer data to and from an SAP HANA database.

PostgreSQL to SAP HANA

Transfer data from PostgreSQL to SAP HANA using the HANA bulk loader.

Single-threaded:

.\FastTransfer.exe `
--sourceconnectiontype "pgsql" `
--sourceserver "localhost:15432" `
--sourcedatabase "tpch" `
--sourceuser "FastUser" `
--sourcepassword "FastPassword" `
--sourceschema "tpch_10" `
--sourcetable "orders" `
--query "select * from tpch_10.orders limit 100000" `
--targetconnectiontype "hanabulk" `
--targetserver "hxehost:39015" `
--targetdatabase "" `
--targetuser "SYSTEM" `
--targetpassword "YourPassword" `
--targetschema "TPCH" `
--targettable "ORDERS" `
--parallelmethod "None" `
--loadmode "Append" `
--batchsize 5000

Parallel with Ctid:

.\FastTransfer.exe `
--sourceconnectiontype "pgsql" `
--sourceserver "localhost:15432" `
--sourcedatabase "tpch" `
--sourceuser "FastUser" `
--sourcepassword "FastPassword" `
--sourceschema "tpch_10" `
--sourcetable "orders" `
--targetconnectiontype "hanabulk" `
--targetserver "hxehost:39015" `
--targetdatabase "" `
--targetuser "SYSTEM" `
--targetpassword "YourPassword" `
--targetschema "TPCH" `
--targettable "ORDERS" `
--parallelmethod "Ctid" `
--degree 4 `
--loadmode "Truncate" `
--batchsize 5000

SAP HANA Batch Size

For SAP HANA, use batch sizes between 5,000 and 10,000 rows for optimal performance.

SAP HANA to PostgreSQL

Transfer data from SAP HANA to PostgreSQL.

.\FastTransfer.exe `
--targetconnectiontype "pgsql" `
--targetserver "localhost:15432" `
--targetdatabase "tpch_import" `
--targetuser "FastUser" `
--targetpassword "FastPassword" `
--targetschema "tpch_10" `
--targettable "orders" `
--sourceconnectiontype "hana" `
--sourceserver "hxehost:39015" `
--sourcedatabase "" `
--sourceuser "SYSTEM" `
--sourcepassword "YourPassword" `
--sourceschema "TPCH" `
--sourcetable "ORDERS" `
--parallelmethod "Random" `
--distributeKeyColumn "o_orderkey" `
--degree 12 `
--loadmode "Truncate" `
--batchsize 5000

SAP HANA Connection Types
  • hana: Standard connection for reading from SAP HANA
  • hanabulk: Bulk loader for writing to SAP HANA
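
As with ClickHouse, a row count on the target verifies the load. A minimal sketch in SAP HANA SQL, assuming the TPCH.ORDERS table from the examples above:

SELECT COUNT(*) FROM TPCH.ORDERS;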

Netezza

Transfer data from Netezza using the DATASLICEID column for optimal parallelism.

Netezza to MSSQL

Transfer from Netezza to SQL Server using the DataDriven method on DATASLICEID:

.\FastTransfer.exe `
--sourceconnectiontype "nzsql" `
--sourceserver "NZSOURCESYS" `
--sourcedatabase "TPCH" `
--sourceuser "TPCH" `
--sourcepassword "TPCH" `
--sourceschema "tpch_10" `
--sourcetable "orders" `
--targetconnectiontype "msbulk" `
--targetserver "localhost" `
--targetdatabase "tpch_test" `
--targettrusted `
--targetschema "dbo" `
--targettable "orders" `
--parallelmethod "DataDriven" `
--distributeKeyColumn "DATASLICEID" `
--degree 6 `
--loadmode "Truncate"

Netezza DATASLICEID

The DATASLICEID column in Netezza identifies the data slice on which each row is physically stored. Using it as the distribution key aligns parallel extraction with Netezza's own data placement, giving each worker a well-balanced, non-overlapping set of slices to read.
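
Before picking a degree of parallelism, it can help to see how rows spread across slices. A minimal sketch in Netezza SQL, assuming the tpch_10.orders source table from the example above:

SELECT DATASLICEID, COUNT(*) AS row_count
FROM tpch_10.orders
GROUP BY DATASLICEID
ORDER BY DATASLICEID;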

Teradata

Transfer data to and from a Teradata database.

PostgreSQL to Teradata

Transfer data from PostgreSQL to Teradata:

.\FastTransfer.exe `
--sourceconnectiontype "pgsql" `
--sourceserver "localhost:15432" `
--sourcedatabase "tpch" `
--sourceuser "FastUser" `
--sourcepassword "FastPassword" `
--sourceschema "tpch_10" `
--sourcetable "orders" `
--targetconnectiontype "teradata" `
--targetserver "192.168.239.129:1025" `
--targetdatabase "TPCH" `
--targetuser "UTPCH" `
--targetpassword "UTPCH" `
--targetschema "TPCH" `
--targettable "orders" `
--parallelmethod "Ctid" `
--degree 15 `
--loadmode "Truncate" `
--batchsize 10000

Teradata to MSSQL

Transfer data from Teradata to SQL Server:

.\FastTransfer.exe `
--sourceconnectiontype "teradata" `
--sourceserver "192.168.239.129:1025" `
--sourcedatabase "TPCH" `
--sourceuser "UTPCH" `
--sourcepassword "UTPCH" `
--sourceschema "TPCH" `
--sourcetable "orders_15M" `
--targetconnectiontype "msbulk" `
--targetserver "localhost" `
--targetdatabase "tpch10_collation_bin2" `
--targetuser "FastUser" `
--targetpassword "FastPassword" `
--targetschema "dbo" `
--targettable "orders_15M_copy" `
--parallelmethod "RangeId" `
--distributeKeyColumn "o_orderkey" `
--degree 12 `
--loadmode "Truncate"

DuckDB

Transfer data to and from DuckDB database files or in-memory databases.

MSSQL to DuckDB File

Transfer from SQL Server to a DuckDB database file:

.\FastTransfer.exe `
--sourceconnectiontype "mssql" `
--sourceserver "localhost" `
--sourcedatabase "tpch10_collation_bin2" `
--sourceschema "dbo" `
--sourcetable "orders_1M" `
--sourcetrusted `
--targetconnectiontype "duckdb" `
--targetserver "D:\OpenData\TPCH\data\10\duckdb\tpch10_insert.duckdb" `
--targetdatabase "tpch10_insert" `
--targetschema "tpch10" `
--targettable "orders" `
--parallelmethod "None" `
--loadmode "Truncate"

DuckDB File to PostgreSQL

Transfer from a DuckDB database file to PostgreSQL:

.\FastTransfer.exe `
--sourceconnectiontype "duckdbstream" `
--sourceserver "D:\OpenData\wikidata\dbpedia_14.duckdb" `
--sourceschema "main" `
--sourcetable "dbpedia_14" `
--query "SELECT label, title, content, split, content_word_count, chunk, content_n_chunks, chunk_index, chunk_word_count, chunk_embedding::text FROM dbpedia_14" `
--targetconnectiontype "pgcopy" `
--targetserver "localhost:35432" `
--targetuser "postgres" `
--targetpassword "YourPassword" `
--targetdatabase "wikipedia" `
--targetschema "public" `
--targettable "dbpedia_14" `
--parallelmethod "Ntile" `
--distributeKeyColumn "chunk_index" `
--degree 14 `
--loadmode "Truncate"

DuckDB Stream - CSV Files

Process CSV files with DuckDB's in-memory engine and load them into SQL Server:

Single CSV file:

.\FastTransfer.exe `
--sourceconnectiontype "duckdbstream" `
--sourceserver ":memory:" `
--query "SELECT * from read_csv('D:\DATA\arborescence.csv') rcsv" `
--targetconnectiontype "msbulk" `
--targetserver "localhost" `
--targettrusted `
--targetdatabase "Edilians" `
--targetschema "dbo" `
--targettable "arborescence" `
--parallelmethod "None" `
--loadmode "Truncate"

Multiple CSV files with parallel processing:

.\FastTransfer.exe `
--sourceconnectiontype "duckdbstream" `
--sourceserver ":memory:" `
--query "SELECT * exclude filename FROM read_csv('D:\temp\FastBCP_lineitem_Extract\lineitem*.CSV', filename=true)" `
--targetconnectiontype "msbulk" `
--targetserver "localhost" `
--targettrusted `
--targetdatabase "tpch_test" `
--targetschema "dbo" `
--targettable "lineitem" `
--parallelmethod "DataDriven" `
--distributeKeyColumn "filename" `
--datadrivenquery "select file from glob('D:\temp\FastBCP_lineitem_Extract\*.CSV')" `
--loadmode "Truncate" `
--degree 16

DuckDB Connection Types
  • duckdb: Direct connection to a DuckDB database file
  • duckdbstream: Streaming connection for in-memory processing or file-based operations
  • Use :memory: as the server value for in-memory processing
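
To verify a file-based load, open the DuckDB file (for example in the duckdb shell) and count the rows. A minimal sketch, assuming the tpch10_insert.duckdb file and tpch10.orders table from the example above:

SELECT count(*) FROM tpch10.orders;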

Tip: Build your command with the Wizard.
