- Drop protocol will launch without a token for the first several months. During this period, the points program will be active: it will be used to distribute a portion of the initial supply of the DROP token. To earn points, users will need to perform certain actions, such as providing liquidity, staking, etc.
- A user will get points for a defined timeframe
- Referral program points are only distributed if the referrer has passed the KYC procedure
This crawler collects the Drop points data from various sources and stores it in a database. The data is used to calculate the points for the users and is then provided to the smart contract, both to display the points on the front-end and to distribute the DROP tokens.
The crawler is designed to be modular and extensible. It consists of several components (a sketch of a pluggable source follows the list):
- Crawler - the main component that retrieves the data from the source
- REST API - the component that provides the data to other services/clients (a tRPC server)
- Complementary SubQuery indexers - for some sources it is not possible to get the data directly, so we use SubQuery indexers to extract it from the chain
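For illustration, here is a minimal sketch of what a pluggable source could look like under this design; `Source` and `BalanceRow` are illustrative names, not the repo's actual types.

```ts
// Illustrative shape of a pluggable data source; not the repo's actual types.
type BalanceRow = { address: string; asset: string; balance: string };

interface Source {
  protocolId: string;
  // fetch all tracked balances at a given chain height
  getBalances(height: number): Promise<BalanceRow[]>;
}
```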
Supported sources:

- Neutron native token balance
- Neutron Astroport LP token balance
- Neutron Astroport LP token balance (staked)
- Neutron Mars balance used as collateral
- Osmosis network native token balance *
- Osmosis Astroport LP token balance
- Osmosis Astroport LP token balance (staked)
- Osmosis Mars balance used as collateral
- Kujira native token balance
- Secret Network native token balance

\* - is integrated with the help of a SubQuery indexer (a sketch of such a query follows)
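For the sources marked with `*`, the crawler reads from a SubQuery indexer's GraphQL endpoint instead of the chain itself. A rough sketch of such a query is below; the endpoint URL and the `balances` entity with its filter are placeholders, not the actual indexer schema.

```ts
// Hypothetical sketch of querying a SubQuery indexer over GraphQL. The
// endpoint and the entity/field names are placeholders; the real schema is
// defined by the complementary SubQuery projects.
const SUBQUERY_URL = "https://example.com/graphql"; // placeholder endpoint

async function fetchIndexedBalances(height: number) {
  const res = await fetch(SUBQUERY_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      // placeholder query; the entity and filter depend on the indexer
      query: `{ balances(filter: { height: { equalTo: "${height}" } }) {
        nodes { address denom amount }
      } }`,
    }),
  });
  const { data } = await res.json();
  return data.balances.nodes as { address: string; denom: string; amount: string }[];
}
```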
This crawler is intended to run several times a day. The action plan is as follows (a minimal sketch of the per-task loop follows the list):

- Get heights from the chains store
- Create tasks out of these heights
- For each task:
  - Update the task status to `running`
  - Get the data from the source
  - Store the data in the database
  - Update the task status to `ready`
  - In case of an error, update the task status to `fail`
- Aggregate the data for the users and store it in the database
- Update `referral_balance` for the users who passed KYC and referred other users
- Update the task statuses to `processed`
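Here is a minimal sketch of that per-task loop, assuming bun:sqlite and hypothetical names for the tables and the fetch helper (`tasks`, `user_data`, `fetchSource`); the real pipeline is driven by the `bun run crawl` commands described below.

```ts
import { Database } from "bun:sqlite";

const db = new Database("./data.db");

type Task = { batchId: number; protocolId: string; height: number };

// hypothetical helper that pulls balances from one source at a fixed height
declare function fetchSource(
  task: Task,
): Promise<{ address: string; asset: string; balance: string }[]>;

// `tasks` and `user_data` are assumed table names matching the schema below
const setStatus = db.prepare(
  "UPDATE tasks SET status = ? WHERE batch_id = ? AND protocol_id = ?",
);
const insertRow = db.prepare(
  "INSERT INTO user_data (batch_id, address, protocol_id, height, asset, balance) VALUES (?, ?, ?, ?, ?, ?)",
);

async function runTask(task: Task) {
  setStatus.run("running", task.batchId, task.protocolId);
  try {
    // get the data from the source and store it in the database
    for (const row of await fetchSource(task)) {
      insertRow.run(
        task.batchId,
        row.address,
        task.protocolId,
        task.height,
        row.asset,
        row.balance,
      );
    }
    setStatus.run("ready", task.batchId, task.protocolId);
  } catch {
    setStatus.run("fail", task.batchId, task.protocolId);
  }
}
```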
We use SQLite3 as the database. The schema is as follows (a DDL sketch follows the tables):
Batches:

| Column   | Type    | Constraints               |
| -------- | ------- | ------------------------- |
| batch_id | INTEGER | PRIMARY KEY AUTOINCREMENT |
| ts       | INTEGER |                           |

Asset prices:

| Column   | Type    | Constraints                           |
| -------- | ------- | ------------------------------------- |
| asset_id | TEXT    |                                       |
| batch_id | INTEGER | PRIMARY KEY (batch_id DESC, asset_id) |
| price    | NUMERIC |                                       |
| ts       | INTEGER |                                       |

Tasks:

| Column      | Type    | Constraints                              |
| ----------- | ------- | ---------------------------------------- |
| protocol_id | TEXT    |                                          |
| batch_id    | INTEGER | PRIMARY KEY (batch_id DESC, protocol_id) |
| height      | INTEGER |                                          |
| status      | TEXT    |                                          |
| jitter      | NUMERIC |                                          |
| ts          | INTEGER |                                          |

User balances:

| Column      | Type    | Constraints                                       |
| ----------- | ------- | ------------------------------------------------- |
| batch_id    | INTEGER | PRIMARY KEY (batch_id DESC, address, protocol_id) |
| address     | TEXT    |                                                   |
| protocol_id | TEXT    |                                                   |
| height      | INTEGER |                                                   |
| asset       | TEXT    |                                                   |
| balance     | NUMERIC |                                                   |

Users who passed KYC:

| Column  | Type    | Constraints |
| ------- | ------- | ----------- |
| address | TEXT    | PRIMARY KEY |
| ts      | INTEGER |             |

Referrals:

| Column  | Type | Constraints |
| ------- | ---- | ----------- |
| address | TEXT | PRIMARY KEY |
| referal | TEXT |             |

User points:

| Column            | Type    | Constraints                                    |
| ----------------- | ------- | ---------------------------------------------- |
| batch_id          | INTEGER | PRIMARY KEY (batch_id DESC, address, asset_id) |
| address           | TEXT    |                                                |
| asset_id          | TEXT    |                                                |
| points            | NUMERIC |                                                |
| referal_points_l1 | NUMERIC |                                                |
| referal_points_l2 | NUMERIC |                                                |

Schedules:

| Column      | Type    | Constraints               |
| ----------- | ------- | ------------------------- |
| schedule_id | INTEGER | PRIMARY KEY AUTOINCREMENT |
| protocol_id | INTEGER |                           |
| asset_id    | TEXT    |                           |
| multiplier  | REAL    |                           |
| start       | INTEGER |                           |
| end         | INTEGER |                           |
| enabled     | BOOLEAN |                           |
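As a sketch, a few of the tables above could be created with bun:sqlite as follows; the table names (`batches`, `tasks`, `user_points`) are guesses, since the schema only lists the columns.

```ts
import { Database } from "bun:sqlite";

const db = new Database("./data.db");

// Table names are assumptions; the schema above only defines the columns.
const ddl = [
  `CREATE TABLE IF NOT EXISTS batches (
     batch_id INTEGER PRIMARY KEY AUTOINCREMENT,
     ts       INTEGER
   )`,
  `CREATE TABLE IF NOT EXISTS tasks (
     protocol_id TEXT,
     batch_id    INTEGER,
     height      INTEGER,
     status      TEXT,
     jitter      NUMERIC,
     ts          INTEGER,
     PRIMARY KEY (batch_id DESC, protocol_id)
   )`,
  `CREATE TABLE IF NOT EXISTS user_points (
     batch_id          INTEGER,
     address           TEXT,
     asset_id          TEXT,
     points            NUMERIC,
     referal_points_l1 NUMERIC,
     referal_points_l2 NUMERIC,
     PRIMARY KEY (batch_id DESC, address, asset_id)
   )`,
];

for (const stmt of ddl) db.run(stmt);
```

The `batch_id DESC` in the composite keys orders the backing index newest-batch-first, which presumably keeps "latest batch" lookups cheap.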
- install bun (you can use rtx, asdf, etc. or install it manually)
- run `bun install`
- define `.env` (or just copy `env.sample` to `.env` and adjust it)
- run `bun run crawl --help` to get the list of available commands
- run `bun run crawl <command> --help` to get the list of available options for the command
- to get into the DB you can use any SQLite3 client and connect to the `./data.db` file (see the sketch below for reading it from a script)
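You can also read the same file from a script with bun:sqlite; `user_points` below is a guessed table name, so adjust it to whatever the actual schema uses.

```ts
import { Database } from "bun:sqlite";

// open the crawler's database read-only
const db = new Database("./data.db", { readonly: true });

// `user_points` is a guess at the points table's name; adjust as needed
const top = db
  .query("SELECT address, asset_id, points FROM user_points ORDER BY points DESC LIMIT 10")
  .all();

console.table(top);
```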