Very slow syncing performance with Postgres #1067
Unanswered
ajscilingo asked this question in Q&A
Replies: 2 comments 7 replies
-
PostgreSQL is really performant. Can you share something with me so I can reproduce your scenario?
6 replies
-
Hi, we just ran an additional test. Our previous test went through our VPN, which appears to have created quite a dramatic bottleneck: when we tested DotMim.Sync with the same data set on a LAN, the sync took only 10 seconds rather than 8 minutes!
1 reply
-
Hi, we've been developing a proof of concept using DotMim.Sync to keep our databases in sync. For smaller tables and schemas it has worked as expected within reasonable time constraints, but for larger tables and schemas we've seen a massive performance hit.

Case in point: we're trying to sync just one table, which has two foreign keys to lookup tables. The lookup tables have no foreign keys of their own, contain 61 rows and 6 rows of data respectively, and each has a UUID primary key. The table we're trying to sync has 17 columns, one of which is a UUID primary key, and 10,056 rows of data. For the purposes of this test we've set the SyncOptions property DisableConstraintsOnApplyChanges to true to avoid issues with foreign key constraints.

On average, syncing only this one table takes about 7 minutes! We've also experimented with the BatchSize property, but that yielded only about a 30-second gain at best. We've also tried inserting the data manually, using pgAdmin's export feature to produce a SQL script, and all 10,056 rows are inserted in microseconds. Is this expected performance from DotMim.Sync?
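(For illustration, a minimal DotMim.Sync configuration matching the options described above might look like the sketch below. The connection strings, table names, and BatchSize value are placeholders, not the asker's actual setup; the provider class assumes the Dotmim.Sync.PostgreSql package.)

```csharp
using Dotmim.Sync;
using Dotmim.Sync.PostgreSql;

// Placeholder connection strings for the server and client databases
var serverProvider = new NpgsqlSyncProvider("Host=server;Database=app;Username=sync;Password=secret");
var clientProvider = new NpgsqlSyncProvider("Host=localhost;Database=app;Username=sync;Password=secret");

// Placeholder table names: the main table plus its two lookup tables
var setup = new SyncSetup("main_table", "lookup_a", "lookup_b");

var options = new SyncOptions
{
    // Skip foreign key constraint checks while applying changes, as described above
    DisableConstraintsOnApplyChanges = true,
    // Batch size tuning, as experimented with above (value illustrative)
    BatchSize = 2000
};

var agent = new SyncAgent(clientProvider, serverProvider, options);
var result = await agent.SynchronizeAsync(setup);
Console.WriteLine(result);
```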