Distributed stateful services inspired by Orleans

This crate provides a framework for scalable, distributed, and stateful services based on message passing between objects.

Most of your application code will be written in the form of `ServiceObject`s and `Message`s:
```rust
use async_trait::async_trait;
use rio_rs::prelude::*;
use serde::{Deserialize, Serialize};
use std::sync::Arc;

#[derive(TypeName, Message, Deserialize, Serialize)]
pub struct HelloMessage {
    pub name: String,
}

#[derive(TypeName, Message, Deserialize, Serialize)]
pub struct HelloResponse {}

#[derive(TypeName, WithId, Default)]
pub struct HelloWorldService {
    pub id: String,
}

#[async_trait]
impl Handler<HelloMessage> for HelloWorldService {
    type Returns = HelloResponse;
    async fn handle(
        &mut self,
        message: HelloMessage,
        _app_data: Arc<AppData>,
    ) -> Result<Self::Returns, HandlerError> {
        println!("Hello {}", message.name);
        Ok(HelloResponse {})
    }
}
```
To run your application, you need to spin up your servers with the `Server` struct:
TODO: Include example of other databases
```rust
use rio_rs::prelude::*;
use rio_rs::cluster::storage::sql::SqlMembersStorage;
use rio_rs::object_placement::sql::SqlObjectPlacementProvider;

#[tokio::main]
async fn main() {
    let addr = "0.0.0.0:5000";

    // Configure types on the server's registry
    let mut registry = Registry::new();
    registry.add_type::<HelloWorldService>();
    registry.add_handler::<HelloWorldService, HelloMessage>();

    // Configure the cluster membership provider
    let pool = SqlMembersStorage::pool()
        .connect("sqlite::memory:")
        .await
        .expect("Membership database connection failure");
    let members_storage = SqlMembersStorage::new(pool);
    let membership_provider_config = PeerToPeerClusterConfig::default();
    let membership_provider =
        PeerToPeerClusterProvider::new(members_storage, membership_provider_config);

    // Configure the object placement
    let pool = SqlMembersStorage::pool()
        .connect("sqlite::memory:")
        .await
        .expect("Object placement database connection failure");
    let object_placement_provider = SqlObjectPlacementProvider::new(pool);

    // Create the server object
    let mut server = Server::new(
        addr.to_string(),
        registry,
        membership_provider,
        object_placement_provider,
    );
    server.prepare().await;
    let listener = server.bind().await.expect("Bind");

    // Run the server
    // server.run(listener).await;
}
```
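As the TODO above notes, examples for other databases are still missing. As a hedged sketch: since `SqlMembersStorage::pool()` takes a connection URL, switching databases should mostly be a matter of changing that URL (provided the corresponding backend feature is enabled). The PostgreSQL URL, host, and credentials below are hypothetical placeholders, not a tested configuration:

```rust
use rio_rs::cluster::storage::sql::SqlMembersStorage;

// Sketch only: build the membership storage against a PostgreSQL database
// instead of in-memory SQLite. The URL below is a placeholder; adjust the
// host, database name, and credentials to your environment.
async fn postgres_members_storage() -> SqlMembersStorage {
    let pool = SqlMembersStorage::pool()
        .connect("postgres://rio:rio@localhost:5432/rio_membership")
        .await
        .expect("Membership database connection failure");
    SqlMembersStorage::new(pool)
}
```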
Communicating with the cluster is just a matter of sending serialized known messages over TCP. The [client] module provides an easy way of achieving this:
```rust
use rio_rs::prelude::*;
use rio_rs::cluster::storage::sql::SqlMembersStorage;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Member storage configuration (Rendezvous)
    let pool = SqlMembersStorage::pool()
        .connect("sqlite::memory:")
        .await?;
    let members_storage = SqlMembersStorage::new(pool);
    # members_storage.prepare().await;

    // Create the client
    let mut client = ClientBuilder::new()
        .members_storage(members_storage)
        .build()?;

    let payload = HelloMessage { name: "Client".to_string() };
    let response: HelloResponse = client
        .send(
            "HelloWorldService".to_string(),
            "any-string-id".to_string(),
            &payload,
        )
        .await?;
    // response is a `HelloResponse {}`
    Ok(())
}
```
There are a few things that must be done before v0.1.0:
- Naive server/client protocol
- Basic cluster support
- Basic placement support
- Object self shutdown
- Naive object persistence
- Public API renaming
- Reduce Boxed objects
- Create a Server builder
- Remove need to use `add_static_fn(FromId::from_id)` -> Removed in favour of `Registry::add_type`
- Support service background task
- Pub/sub
- Examples covering most use cases
  - Background async task on a service
  - Background blocking task on a service (see black-jack)
  - Pub/sub (see black-jack)
- Re-organize workspace
- Support ephemeral port
- Remove the need for an `Option<T>` value for `managed_state` attributes (as long as it has a `Default`)
- Support 'typed' message/response on client (TODO define what this means)
- `ServiceObject::send` shouldn't need a type for the member storage
- Handle panics on message handling
- Error and panic handling on life cycle hooks (probably kill the object)
- Create a test or example to show re-allocation when a server dies
- SQLite support for SQL backends
- PostgreSQL support for SQL backends
- Redis support for members storage
- Redis support for state backend (loader and saver)
- Redis support for object placement
- Remove the need for two types of concurrent hashmap (papaya and dashmap)
- Client doesn't need access to the cluster backend if we implement an HTTP API
- [~] Allow `ServiceObject` trait without state persistence
- Create server from config
- Bypass clustering for self messages
- [~] Bypass networking for local messages
- Move all the client to use tower
- Remove the need to pass the StateSaver to `ObjectStateManager::save_state`
- Include registry configuration in Server builder
- Create a getting started tutorial
  - Cargo init
  - Add deps (rio-rs, tokio, async_trait, serde, sqlx - optional)
  - Write a server
  - Write a client
  - Add service and messages
  - Cargo run --bin server
  - Cargo run --bin client
  - Life cycle
    - Life cycle depends on app_data (StateLoader + StateSaver)
  - Cargo test?
- MySQL support for SQL backends
- Add more extensive tests to client/server integration
- Increase public API test coverage
- Client/server keep alive
- Reduce static lifetimes
- 100% documentation of public API
- Placement strategies (nodes work with different sets of trait objects)
- [~] Dockerized examples
- Add pgsql jsonb support
- Add all SQL storage behind a feature flag (sqlite, mysql, pgsql, etc)
- Supervision
- Ephemeral objects (aka regular - local - actors)
- Remove magic numbers
- Object TTL
- Matrix test with different backends
- Replace prints with logging
- Code of conduct
- Metrics and Tracing
- Deny allocations based on system resources