
[Merged by Bors] - sync2: add sqlstore #6405

Closed · wants to merge 20 commits into develop from sync2/sqlstore

Conversation

ivan4th
Contributor

@ivan4th ivan4th commented Oct 23, 2024


Motivation

The database-aware sync implementation is rather complex, and splitting
it into parts might help the review process.

Description

The sqlstore package provides a simple sequence-based interface to the
tables being synchronized. It is used by the FPTree data structure as
the database layer and doesn't do range fingerprinting by itself.

SyncedTable and SyncedTableSnapshot provide methods that wrap the
necessary SQL operations. The sql/expr package was added to facilitate
SQL generation.

The sql/expr package is also used by the Bloom filters (#6332), which
are to be re-benchmarked and updated later. The idea is to use this
simple AST-based SQL generator in other places in the code where we
generate SQL. The newly introduced rqlite dependency will also be used
to convert schema SQL to a "canonical" form during schema drift
detection, and to properly filter out comments in the migration
scripts.
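To give a feel for what an AST-based SQL generator looks like, here is a minimal hedged sketch in Go. All names here (`Expr`, `Ident`, `Bind`, `BinOp`, `Select`) are hypothetical and are not the actual sql/expr API:

```go
// Illustrative sketch of an AST-based SQL expression builder.
package main

import (
	"fmt"
	"strings"
)

// Expr is a node in a small SQL expression AST.
type Expr interface {
	SQL() string
}

// Ident is a column or table identifier.
type Ident string

func (id Ident) SQL() string { return string(id) }

// Bind is a positional SQL parameter placeholder.
type Bind struct{}

func (Bind) SQL() string { return "?" }

// BinOp combines two expressions with a binary operator.
type BinOp struct {
	Op   string
	L, R Expr
}

func (b BinOp) SQL() string {
	return fmt.Sprintf("(%s %s %s)", b.L.SQL(), b.Op, b.R.SQL())
}

// Select renders a minimal SELECT statement from AST parts.
func Select(cols []Expr, table string, where Expr) string {
	var names []string
	for _, c := range cols {
		names = append(names, c.SQL())
	}
	q := fmt.Sprintf("SELECT %s FROM %s", strings.Join(names, ", "), table)
	if where != nil {
		q += " WHERE " + where.SQL()
	}
	return q
}

func main() {
	where := BinOp{Op: "AND",
		L: BinOp{Op: ">=", L: Ident("id"), R: Bind{}},
		R: BinOp{Op: "<", L: Ident("id"), R: Bind{}},
	}
	fmt.Println(Select([]Expr{Ident("id")}, "atxs", where))
	// prints: SELECT id FROM atxs WHERE ((id >= ?) AND (id < ?))
}
```

Generating SQL from an AST rather than by string concatenation makes it easier to compose conditions programmatically and keep placeholders consistent.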

#6358 needs to be merged before this one.

Given that, once the recent-item sync is done (if it's needed at all), the
range set reconciliation algorithm no longer depends on newly received
items being added to the set, we can save memory by not adding the
received items during reconciliation.

During real sync, the received items will be sent to the respective
handlers, and after the corresponding data are fetched and validated,
they will be added to the database, without the need to add them to the
cloned OrderedSets that are used to sync against particular peers.
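As a rough illustration of the sequence-based interface idea, here is a hedged in-memory sketch: IDs are iterated in ascending order starting from a given key, wrapping around at the end of the table. The `IDStore` interface and `memStore` type are hypothetical stand-ins, not the actual sqlstore API (which is SQL-backed):

```go
// Illustrative sketch of a sequence-based ID store with wraparound.
package main

import (
	"bytes"
	"fmt"
	"sort"
)

type KeyBytes []byte

// IDStore yields IDs in ascending order starting from the first ID >= from,
// wrapping around to the smallest ID after the end of the table.
type IDStore interface {
	From(from KeyBytes, yield func(KeyBytes) bool)
}

// memStore is an in-memory stand-in for the SQL-backed store.
// ids must be sorted in ascending order.
type memStore struct{ ids []KeyBytes }

func (s *memStore) From(from KeyBytes, yield func(KeyBytes) bool) {
	// find the first ID >= from
	start := sort.Search(len(s.ids), func(i int) bool {
		return bytes.Compare(s.ids[i], from) >= 0
	})
	// yield every ID exactly once, wrapping around the end
	for i := 0; i < len(s.ids); i++ {
		if !yield(s.ids[(start+i)%len(s.ids)]) {
			return
		}
	}
}

func main() {
	s := &memStore{ids: []KeyBytes{{1}, {3}, {5}}}
	var got []byte
	s.From(KeyBytes{4}, func(k KeyBytes) bool {
		got = append(got, k[0])
		return true
	})
	fmt.Println(got) // [5 1 3]
}
```

The wraparound matters because range set reconciliation treats the key space as a ring; an iterator starting mid-table must continue from the smallest key after reaching the largest one.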

codecov bot commented Oct 23, 2024

Codecov Report

Attention: Patch coverage is 80.65693% with 106 lines in your changes missing coverage. Please review.

Project coverage is 79.8%. Comparing base (8f93588) to head (3750cb7).
Report is 27 commits behind head on develop.

Files with missing lines Patch % Lines
sync2/rangesync/combine_seqs.go 74.8% 27 Missing and 9 partials ⚠️
sync2/sqlstore/dbseq.go 80.1% 15 Missing and 8 partials ⚠️
sync2/sqlstore/sqlidstore.go 41.9% 14 Missing and 4 partials ⚠️
sync2/sqlstore/syncedtable.go 91.8% 8 Missing and 4 partials ⚠️
sql/expr/expr.go 87.6% 8 Missing and 2 partials ⚠️
sync2/sqlstore/testdb.go 69.5% 6 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff            @@
##           develop   #6405     +/-   ##
=========================================
- Coverage     79.9%   79.8%   -0.1%     
=========================================
  Files          335     341      +6     
  Lines        43656   44204    +548     
=========================================
+ Hits         34892   35296    +404     
- Misses        6800    6915    +115     
- Partials      1964    1993     +29     

This adds multi-peer synchronization support.
When the local set differs too much from the remote sets,
"torrent-style" "split sync" is attempted which splits the set into
subranges and syncs each sub-range against a separate peer.
Otherwise, the full sync is done, syncing the whole set against
each of the synchronization peers.
Full sync is also done after each split sync run.
The local set can be considered synchronized after the specified
number of full syncs has happened.
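The subrange assignment behind "split sync" can be sketched roughly as follows. This is an illustrative toy over a 64-bit key space with hypothetical names; the real implementation works on full-size IDs and handles uneven splits:

```go
// Illustrative sketch of split-sync subrange assignment: the key space is
// divided into equal subranges and each is synced against a different peer.
package main

import "fmt"

type subrange struct {
	from, to uint64 // half-open [from, to); to == 0 means wrap to the start
	peer     int
}

// splitRange assigns n equal subranges of the 64-bit key space to n peers.
func splitRange(n int) []subrange {
	step := ^uint64(0)/uint64(n) + 1
	out := make([]subrange, n)
	for i := 0; i < n; i++ {
		from := uint64(i) * step
		to := from + step // overflows to 0 for the last subrange
		out[i] = subrange{from: from, to: to, peer: i}
	}
	return out
}

func main() {
	for _, r := range splitRange(4) {
		fmt.Printf("peer %d: [%#016x, %#016x)\n", r.peer, r.from, r.to)
	}
}
```

Each peer then runs ordinary range reconciliation restricted to its subrange, which is what makes the approach "torrent-style": the work of catching up is spread across peers.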

The approach is loosely based on [SREP: Out-Of-Band Sync of
Transaction Pools for Large-Scale
Blockchains](https://people.bu.edu/staro/2023-ICBC-Novak.pdf) paper by
Novak Boškov, Sevval Simsek, Ari Trachtenberg, and David Starobinski.
Resolved review threads: sql/expr/expr.go, sync2/rangesync/combine_seqs.go (two threads)
type dbIDKey struct {
	//nolint:unused
	id string
	//nolint:unused
	chunkSize int
}
Contributor


are these linter annotations still needed?

Contributor Author

Yes, the linter appears to be silly enough to mark fields that are never accessed directly (as in foo.bar) as unused, even though it's often useful to have such fields in structs serving as map keys, etc.
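For context, here is a small runnable example of the pattern under discussion: a struct used as a map key whose fields are set in composite literals and compared by the map itself, but never read directly, which is what trips the linter. It also shows the point raised below that a different chunkSize yields a distinct key:

```go
// Struct-as-map-key example; dbIDKey fields come from the snippet above.
package main

import "fmt"

type dbIDKey struct {
	id        string
	chunkSize int
}

func main() {
	cache := map[dbIDKey][]string{}
	cache[dbIDKey{id: "abc", chunkSize: 16}] = []string{"chunk-data"}

	// Lookup compares all fields of the key struct; the fields themselves
	// are never accessed directly, so the linter flags them as unused.
	v, ok := cache[dbIDKey{id: "abc", chunkSize: 16}]
	fmt.Println(ok, v)

	// A different chunkSize is a different key, so this misses.
	_, miss := cache[dbIDKey{id: "abc", chunkSize: 32}]
	fmt.Println(miss)
}
```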

Member

@fasmat fasmat Oct 31, 2024

I'm not a big fan of structs as keys, and nolint indicates an issue (although not necessarily the correct one):

Is chunkSize really part of the key and not part of the value object? If I look up the same ID with a different chunkSize, should I expect not to find it in the cache, or should I find it and then check that the chunkSize is the one I expect?

Member

Also, if chunkSize can change dynamically, doesn't this lead to the same values (by ID) being stored multiple times in the cache with different chunkSizes?

Contributor Author

After a bit more benchmarking, I've removed the LRU cache altogether. It was more useful with an earlier FPTree iteration that did not do node-aligned range splits, but now it just doesn't improve performance at all.

// LRU cache for ID chunks.
type lru = simplelru.LRU[dbIDKey, []rangesync.KeyBytes]

const lruCacheSize = 1024 * 1024
Contributor

It might be good to have a comment here noting how much memory this would translate into in total when the LRU cache is full.

Contributor Author

It's somewhat hard to say exactly what the size of the cache will be, as it stores chunks of different sizes.
I'll add a limit on the size of each cached chunk and make the cache size configurable.

Contributor Author

Removed the LRU cache (see above)

Resolved review thread: sync2/sqlstore/dbseq.go
// if the chunk size was reduced due to a short chunk before wraparound, we need
// to extend it back
if cap(s.chunk) < s.chunkSize {
	s.chunk = make([]rangesync.KeyBytes, s.chunkSize)
}
Contributor

I'm not sure this is actually needed. Unless you're passing the actual slice somewhere, there's no real need to allocate again. You can just over-allocate the first time and use the clear() builtin to reuse that slice.

Contributor Author

This is needed because the chunk size is dynamic. As the SQL iterator progresses, it loads larger and larger chunks by applying larger LIMIT values. This is due to the usage pattern of FPTree, which may request just a few items from some iterators and a lot of items from others.
The comment was somewhat misplaced because it applies to the case where the capacity is enough.
We still need to reallocate the chunk if chunkSize grew. Allocating max-sized chunks from the start for each iterator created might be wasteful.
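The resizing logic discussed in this thread can be sketched as follows. `dbSeq` and `growChunk` are hypothetical names for illustration, based on the snippet quoted above: the chunk doubles up to a maximum, is reallocated only when capacity is insufficient, and is re-sliced and cleared otherwise:

```go
// Sketch of dynamic chunk growth for a DB-backed iterator (requires Go 1.21+
// for the min and clear builtins).
package main

import "fmt"

type KeyBytes []byte

type dbSeq struct {
	chunk        []KeyBytes
	chunkSize    int
	maxChunkSize int
}

// growChunk prepares s.chunk for the next, larger load.
func (s *dbSeq) growChunk() {
	s.chunkSize = min(s.chunkSize*2, s.maxChunkSize)
	if cap(s.chunk) < s.chunkSize {
		// capacity too small: reallocate
		s.chunk = make([]KeyBytes, s.chunkSize)
	} else {
		// enough capacity: re-slice and clear the reused entries
		s.chunk = s.chunk[:s.chunkSize]
		clear(s.chunk)
	}
}

func main() {
	s := &dbSeq{chunk: make([]KeyBytes, 4), chunkSize: 4, maxChunkSize: 16}
	for i := 0; i < 3; i++ {
		s.growChunk()
		fmt.Println(s.chunkSize, cap(s.chunk))
	}
	// chunkSize grows to 8, then 16, then stays capped at maxChunkSize
}
```

This way, iterators that only ever need a few items stay cheap, while long-running ones amortize the cost of larger LIMIT queries.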

Resolved review threads: sync2/sqlstore/dbseq.go (outdated), sync2/sqlstore/sqlidstore.go (two threads)
s.chunkSize = min(s.chunkSize*2, s.maxChunkSize)
switch {
case err != nil || ierr != nil:
return errors.Join(ierr, err)
Member

Returning both errors doesn't make sense to me. I think ierr should take precedence, since the db request was interrupted because of it and err isn't interesting in this case, i.e.

case ierr != nil:
    return ierr
case err != nil:
    return err
case n == 0:
    // unchanged
...

Contributor Author

Fixed, thanks

@spacemesh-bors spacemesh-bors bot changed the base branch from sync2/multipeer to develop November 2, 2024 21:41
@ivan4th
Contributor Author

ivan4th commented Nov 3, 2024

bors merge

spacemesh-bors bot pushed a commit that referenced this pull request Nov 3, 2024
-------

## Motivation

The database-aware sync implementation is rather complex and splitting
it in parts might help the review process.
@spacemesh-bors

spacemesh-bors bot commented Nov 3, 2024

Pull request successfully merged into develop.

Build succeeded.

@spacemesh-bors spacemesh-bors bot changed the title sync2: add sqlstore [Merged by Bors] - sync2: add sqlstore Nov 3, 2024
@spacemesh-bors spacemesh-bors bot closed this Nov 3, 2024
@spacemesh-bors spacemesh-bors bot deleted the sync2/sqlstore branch November 3, 2024 06:23