NEWSEQUENTIALID on SQL Server and DotMim.Sync #574
Replies: 3 comments 6 replies
-
The issue is less the GUIDs themselves than the clustered index SQL Server creates on the primary key by default. There are mainly two downsides of using GUIDs as primary keys: (1) they are larger than integer keys, so indexes take up more space, and (2) non-sequential GUIDs fragment the clustered index.
Now, why is that (2) you may ask? The reason is that the clustered index determines where rows are stored on disk. So if the GUIDs are not added sequentially, SQL Server regularly has to split and re-arrange index pages, which degrades insert speed (due to higher disk usage). To overcome this, sequential GUIDs may be used. In that case, there is an interesting library which mimics the way MongoDB creates sequential ids (MongoDB is a good role model, as it is a database that supports clustering and sharding out of the box). So check out this library:
Disclaimer: I have not used it yet, but the foundation looks rock solid.
I hope that helped!
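The library linked above isn't shown here, but the idea it is built on is easy to sketch. Below is a minimal, assumed implementation of a COMB-style sequential GUID generator in C# (the `SequentialGuid` class and its method are invented for this example; this is not the library referenced above). It keeps 10 random bytes for global uniqueness and writes a 48-bit UTC timestamp into the last 6 bytes, which SQL Server treats as the most significant part of a uniqueidentifier, so client-generated keys still arrive in roughly ascending order.

```csharp
using System;
using System.Security.Cryptography;

public static class SequentialGuid
{
    public static Guid NewSequentialGuid()
    {
        // 10 random bytes keep the value globally unique across clients.
        var randomBytes = new byte[10];
        RandomNumberGenerator.Fill(randomBytes);

        // Milliseconds since the Unix epoch; only the low 48 bits are kept.
        long timestamp = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
        var timestampBytes = BitConverter.GetBytes(timestamp);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(timestampBytes); // most significant byte first

        // SQL Server sorts uniqueidentifier values primarily by the last 6 bytes,
        // so the big-endian timestamp goes there; random bytes fill the rest.
        var guidBytes = new byte[16];
        Array.Copy(randomBytes, 0, guidBytes, 0, 10);
        Array.Copy(timestampBytes, 2, guidBytes, 10, 6);

        return new Guid(guidBytes);
    }
}
```

Generated this way, GUIDs created offline on different clients stay globally unique while still landing near the end of the clustered index when they are uploaded.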
-
:o :o :o !!!!! You know this framework has not even reached the ... You must be crazy :D (And I love it !!!)
Anyway, on your server side: on SQL Server, you need to reorganize your indexes on a weekly or daily basis. To mitigate the fragmentation, you also need to create your tables with an appropriate fill factor on the clustered index, to leave some free space in each data page and give them less opportunity to fragment. It has an impact on SELECT speed and disk usage, but it's always worth it. Using an incremental value (like an identity column or NEWSEQUENTIALID()), new rows are always appended at the end of the index, so fragmentation is far less of a concern.
When I was an instructor, I used to tell my students that a good fill factor for any table with GUIDs could be around 70% ~ 80%.
That being said, as suggested by @gentledepp, using a sequential GUID generator is a good option.
By the way, where are the bottlenecks in your case? On the clients? On the server?
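As a concrete illustration of that maintenance routine, here is a minimal sketch (the table name, index name, and connection string are made up for the example) that rebuilds a clustered primary key with an 80% fill factor and then reorganizes it, run from .NET with Microsoft.Data.SqlClient. The same two ALTER INDEX statements could just as well be scheduled as a SQL Agent job.

```csharp
using Microsoft.Data.SqlClient;

// Hypothetical table/index names and connection string; adjust to your schema.
const string connectionString =
    "Server=.;Database=MyDb;Integrated Security=true;TrustServerCertificate=true";

using var connection = new SqlConnection(connectionString);
connection.Open();

// One-time (or after heavy fragmentation): rebuild the clustered index and
// leave 20% free space in each page (fill factor 80%).
using (var rebuild = new SqlCommand(
    "ALTER INDEX [PK_Customer] ON [dbo].[Customer] REBUILD WITH (FILLFACTOR = 80);",
    connection))
{
    rebuild.ExecuteNonQuery();
}

// Recurring (daily/weekly): reorganize to defragment in place.
using (var reorganize = new SqlCommand(
    "ALTER INDEX [PK_Customer] ON [dbo].[Customer] REORGANIZE;",
    connection))
{
    reorganize.ExecuteNonQuery();
}
```

REORGANIZE is an online, relatively cheap operation suited to a recurring schedule; note that the fill factor itself is only re-applied by a REBUILD.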
-
With 5000 users, would UtcTicks offer a sufficiently robust and performant solution? All inserts would be sequential and the likelihood of collisions is infinitesimal. I assume I've missed something obvious.
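For what that idea looks like in practice, here is a hypothetical sketch of a client-generated key based on DateTimeOffset.UtcTicks (the `TickKey` class and its method are invented for the example). Ticks are easy to keep monotonic on a single device, but two different clients can still produce the same tick, so in practice a per-client component (device id or client sequence number) is usually combined with it, for instance in a composite key.

```csharp
using System;
using System.Threading;

public static class TickKey
{
    private static long _lastTicks;

    // Returns a strictly increasing 64-bit value on this client, based on UTC ticks.
    public static long Next()
    {
        while (true)
        {
            long now = DateTimeOffset.UtcNow.UtcTicks;
            long last = Interlocked.Read(ref _lastTicks);
            long candidate = Math.Max(now, last + 1); // never hand out the same tick twice locally
            if (Interlocked.CompareExchange(ref _lastTicks, candidate, last) == last)
                return candidate;
        }
    }
}
```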
-
First off, thanks for a great framework for syncing; I haven't found anything out there as feature-rich.
Just thought I'd see if anyone has any thoughts on the following design decision:
I started using the sync framework with 2 scopes: one to pull (download) all required data and one to push (upload) data from the clients to the server (around 4-5k clients).
I'm using GUIDs for primary keys; the keys are generated on the client, and at certain intervals all the data added on the client is synced to the server.
A client's data is specific to that client, so they're only ever adding records that they'll have access to in the future.
That way they can work offline and make numerous inserts in their app (mobile client) without having to wait for server responses, and when the sync is called it uploads their data.
Because I'm generating GUIDs on the client for my primary key, I can't take advantage of SQL Server's NEWSEQUENTIALID() function, which apparently causes less page fragmentation and results in faster access via the primary key.
I'm a little concerned that as the system grows with more records I'd have issues with speed and server resources. Is this an overreaction?
I'm contemplating using web API calls for all the client inserts/updates and then mainly using the sync framework for downloading data. In that case I could have an identity column, add the record on the server, and return it to the client to insert locally.
Problem is, I'd lose my ability to work offline if the client needs to add a record. Also, the app captures a lot of data, and making all those web API calls results in a slower user experience, so it seems I'd lose a lot of goodness by going the web API route. I'd still have a scope for pulling data, but would probably only use it for reinitialising a client with their data if they change phones/tablets etc...
Has anyone experienced any server database problems with GUIDs as primary keys, e.g. speed and space?
Is there any way to take advantage of NEWSEQUENTIALID() when using DotMim.Sync?
Thanks in advance