When a manual sync completes successfully, the following dialog appears.


If you encounter any errors, the sections below may help.

Check Log

After a sync, you can check the uploaded compressed file on your PC.

  1. Open a new Windows Explorer window and enter %LocalAppData%\biPost

  2. Open the corresponding ZIP file and review its contents.

It can also be useful, after the Sync Completed dialog appears and before you press OK, to open %LocalAppData%\biPost and check the files that were just created.
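The check above can be scripted. The following Python sketch lists the contents of the most recently modified ZIP in that folder (the folder name is the one biPost uses; the function name is illustrative):

```python
import os
import zipfile

def newest_zip_contents(folder):
    """Return the file names inside the most recently modified ZIP in *folder*."""
    zips = [os.path.join(folder, name) for name in os.listdir(folder)
            if name.lower().endswith(".zip")]
    if not zips:
        return []
    latest = max(zips, key=os.path.getmtime)
    with zipfile.ZipFile(latest) as zf:
        return zf.namelist()

# Example:
# newest_zip_contents(os.path.expandvars(r"%LocalAppData%\biPost"))
```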

If you need to check which files were loaded to Aurora, check the aurora_s3_load_history table.

customData.json syntax error

Misspelled fields or tables in customData.json may appear as:


Date format inside post.json

If the error String was not recognized as a valid DateTime appears immediately after launching biPost.exe, try the following:

Open the post.json file and delete all datetime strings found inside the double quotation marks (" "). Do not delete the quotation marks themselves.
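If you prefer to clear the datetime strings programmatically, a sketch like the following can do it. The datetime pattern and the assumption that post.json is ordinary JSON are guesses; adjust them to what your file actually contains:

```python
import json
import re

# Matches strings like "2017-09-13 10:22:05" or "2017-09-13". The pattern is an
# assumption; extend it to cover the datetime formats you see in post.json.
DATETIME_RE = re.compile(
    r"\d{1,4}[-/]\d{1,2}[-/]\d{1,4}([ T]\d{1,2}:\d{2}(:\d{2})?)?")

def blank_datetimes(value):
    """Recursively replace datetime-looking strings with '' (keeping the quotes)."""
    if isinstance(value, dict):
        return {k: blank_datetimes(v) for k, v in value.items()}
    if isinstance(value, list):
        return [blank_datetimes(v) for v in value]
    if isinstance(value, str) and DATETIME_RE.fullmatch(value.strip()):
        return ""
    return value

def reset_post_json(path):
    """Rewrite *path* with every datetime string blanked out."""
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)
    with open(path, "w", encoding="utf-8") as f:
        json.dump(blank_datetimes(data), f, indent=2)
```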


No information to Sync

If the No information to Sync message appears, verify that customData.json is configured to send at least one table.
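As a quick sanity check, something like this can count the tables configured to send. The top-level "tables" key is a hypothetical layout used only for illustration; match it to your actual customData.json:

```python
import json

def tables_configured(path):
    """Return the number of tables customData.json is set to send.

    Assumes a top-level "tables" list -- a hypothetical layout; adapt the
    key to the real structure of your customData.json.
    """
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    return len(data.get("tables", []))

# If tables_configured(...) returns 0, the sync has nothing to send.
```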

Elapsed time to sync and load to Aurora

When a new sync is initiated in biPost.exe, it may take anywhere from a few seconds to a few minutes to extract, compress, and upload the data to S3. While this is happening, no messages or icons indicate that biPost.exe is working, and the window title may show (Not responding). This is normal.

If you open Windows Task Manager, you will likely see that biPost.exe *32 is running and consuming a considerable amount of CPU.

Once the compressed package is uploaded to S3, it may take 1 to 3 minutes to process and load your data into MySQL.

If you need to check which tables were loaded, check the aurora_s3_load_history table.

Upload Limit

Depending on the number of columns in each table and the amount of data in each row, a large amount of data sent in a single sync may fail to load into your Aurora DB.

We have tested different scenarios, and a limit of 100,000 rows per single sync works reliably.

We recommend using Recursive Sync for large tables that have a datetime field available.
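The idea behind Recursive Sync can be sketched as follows: send the table in batches below the tested 100,000-row limit, using the datetime field as a watermark so each pass picks up where the previous one stopped. The names here are illustrative, not biPost's actual implementation:

```python
BATCH_LIMIT = 100_000  # tested upper bound for a single sync

def next_batch(rows, last_synced, limit=BATCH_LIMIT):
    """Return up to *limit* rows newer than *last_synced*, ordered by datetime."""
    pending = sorted((r for r in rows if r["updated_at"] > last_synced),
                     key=lambda r: r["updated_at"])
    return pending[:limit]

def recursive_sync(rows, send, last_synced=""):
    """Repeatedly send batches, advancing the watermark, until nothing remains."""
    while True:
        batch = next_batch(rows, last_synced)
        if not batch:
            break
        send(batch)
        last_synced = batch[-1]["updated_at"]
```

The watermark ("updated_at" here) must be a column that only moves forward, which is why a datetime field is required.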

Special Characters

Some special characters in char and varchar fields are not supported and are therefore removed by biPost.exe. For example:

  • Line breaks (Enter)
  • Parentheses ( )

On SQL Server, all special characters in strings of 100 characters or more are removed, leaving only letters and numbers.
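To predict how a value will arrive in Aurora, the removal rules above can be approximated like this (an approximation for setting expectations, not biPost.exe's actual code):

```python
import re

def sanitize(value):
    """Approximate the described cleanup of char/varchar values."""
    if len(value) >= 100:
        # Long strings: keep only letters and digits.
        return re.sub(r"[^A-Za-z0-9]", "", value)
    # Shorter strings: drop line breaks and parentheses, as listed above.
    return re.sub(r"[\r\n()]", "", value)
```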

Schema Limitations

  • NULL values in char and varchar columns are converted to '' on MySQL.
  • NULL values in float, money and int columns are converted to 0 (zero) on MySQL.
  • NULL values in datetime columns are converted to 0000-00-00 00:00:00 on MySQL.
  • The bit datatype is converted to VARCHAR(1) on MySQL.
  • Tables without a PRIMARY KEY will not load data into Aurora.
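The NULL conversions above can be summarized in a small lookup, handy for predicting what a SQL Server NULL becomes in MySQL:

```python
# MySQL replacement value per source datatype, per the list above.
NULL_DEFAULTS = {
    "char": "",
    "varchar": "",
    "float": 0,
    "money": 0,
    "int": 0,
    "datetime": "0000-00-00 00:00:00",
}

def convert_null(datatype, value):
    """Return the MySQL value for *value*, replacing NULL (None) per datatype."""
    if value is None:
        return NULL_DEFAULTS.get(datatype, value)
    return value
```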

No Internet Connection

If there is no internet connection available, biPost will show the following message:


Firewall Restrictions

If your internet connection is behind a firewall, you may see errors such as:

  • The remote name could not be resolved.
  • A WebException with status SendFailure was thrown.
  • A WebException with status NameResolutionFailure was thrown.
  • Error making request with Error Code ExpectationFailed and Http Status Code ExpectationFailed.


Grant the firewall access to Amazon S3:

Create a policy that allows traffic to:



AWS can use multiple IP addresses for the S3 service, so if the IPs above don't work, check the AWS Public IP Address Ranges documentation and look for:

  "region": "GLOBAL",
  "service": "AMAZON"


  "region": "us-east-1",
  "service": "AMAZON"

AppData\Local folder

Sometimes it is necessary to manually delete the contents of the \AppData\Local\biPost folder.

Open a new Windows Explorer window, enter %LocalAppData%\biPost, select all files, and delete them.

************** Exception Text **************
System.IO.IOException: The process cannot access the file '' because it is being used by another process.
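When biPost.exe is closed and you want to script this cleanup, a sketch like the following clears the folder (illustrative; if any process still holds the files, you will hit the IOException above):

```python
import os
import shutil

def clear_folder(folder):
    """Delete every file and subfolder inside *folder*, keeping the folder itself.
    Close biPost.exe first, or the files may be locked by the running process."""
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if os.path.isdir(path):
            shutil.rmtree(path)
        else:
            os.remove(path)

# Example: clear_folder(os.path.expandvars(r"%LocalAppData%\biPost"))
```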

MySQL schemas are created but no data is loaded

Two things might be causing this problem:

1. RDS instance cannot reach S3 bucket.

When we look at our CloudWatch logs, we see Unable to initialize S3Stream. In that case, do the following:

  • Check that your IAM Policy granting access to S3 correctly uses the S3 bucket ARN we provided. Also double-check the policy document (JSON).

  • Check that the IAM Role has the above IAM Policy attached. Copy the Role ARN to a notepad for the next steps.

  • Go to RDS Parameter Groups, select the cluster group, and click Compare Parameters; the IAM Role ARN (the one you just copied to a notepad) should appear among the parameters shown.

  • Go to RDS Clusters and check that the IAM Role is listed and active for your cluster.

If you still experience this error after these checks, see Manually debugging S3Stream.

2. The name of your destination database must be all lower case.

When we look at our CloudWatch logs, we see:

SequelizeConnectionError: ER_BAD_DB_ERROR: Unknown database

Double check that your DB name is all lower case.

Manually debugging S3Stream

In this section we use the syntax and some of the parameters explained in the AWS documentation for LOAD DATA FROM S3.

The goal is to see whether an error appears while importing data directly from S3 into MySQL/Aurora.

Using MySQL Workbench (or any client of your preference), open a connection to your MySQL instance, preferably with the root account.


LOAD DATA FROM S3 's3://{my-bucket-name}/{my-36-digit-service-number}/process_{my-36-digit-service-number}{YYYY_M_D-of-yesterday}_post/{my-example-table}.csv'
REPLACE INTO TABLE `{my-database-name}`.`{my-example-table}`

Replace the values in curly brackets with your own data.


  • my-bucket-name: Name of your dedicated bucket, assigned by Factor BI.
  • my-36-digit-service-number: Number assigned to your service.
  • YYYY_M_D-of-yesterday: Yesterday's date in the America/Mexico_City timezone.
  • my-example-table: Pick any of the tables that you are synchronizing.
  • my-database-name: Name of your destination DB on MySQL.
  • {column_1},{column_2},...,{column_n}: List all the columns in the same order as they appear in the .CSV file created on AppData\Local folder.


LOAD DATA FROM S3 's3://bipostdata-f0123abc4567/a1bcd23e-4fa5-67b8-cd9e-f0123abc4567/process_a1bcd23e-4fa5-67b8-cd9e-f0123abc45672017_9_13_post/CLIE.csv'
REPLACE INTO TABLE `mytestdb`.`clie`
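The statement can also be assembled programmatically. This sketch rebuilds the example above from its parts (the uppercase .csv name and lowercase table name follow the example shown; verify both against your own files):

```python
def load_from_s3_sql(bucket, service_number, date_str, table, database):
    """Build the LOAD DATA FROM S3 statement shown above from its parts.

    *date_str* is yesterday's date formatted YYYY_M_D (America/Mexico_City).
    Assumes the CSV name is the table name in upper case, as in the example.
    """
    key = (f"{service_number}/process_{service_number}"
           f"{date_str}_post/{table.upper()}.csv")
    return (f"LOAD DATA FROM S3 's3://{bucket}/{key}'\n"
            f"REPLACE INTO TABLE `{database}`.`{table.lower()}`")
```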


LOAD FROM S3 privileges

The Aurora user that executes LOAD DATA FROM S3 requires the following privilege:

GRANT LOAD FROM S3 ON *.* TO 'your-user-name';

By default this privilege is granted to the Master Username defined when you created your Aurora instance.

If you are using a different user and the privilege is not set, the following error appears:

Access denied; you need (at least one of) the LOAD FROM S3 privilege(s) for this operation

The only way to see this error is by executing LOAD DATA FROM S3 manually.

If your MySQL user already has this privilege and you see the following error, try these steps.

Access denied for user 'your-user-name'@'' (using password: YES)