- Clone the repository

```
git clone https://github.com/rngcntr/janusbench.git && cd janusbench
```
- Build the project

```
mvn clean install -DskipTests
```

Optionally, you can create an executable shell script which runs `janusbench`:

```
cat res/stub.sh target/janusbench-0.1.0-jar-with-dependencies.jar > target/janusbench && chmod +x target/janusbench
```
- Show available storage/index backends and benchmarks

```
janusbench list storage
janusbench list index
janusbench list benchmark
```
- Run example benchmarks

```
janusbench run -s cassandra -i elasticsearch IndexedEdgeExistenceOnSupernode
```
- Storage (one required):
- cassandra
- scylla
- berkeleyje
- hbase
- yugabyte
- foundationdb
- inmemory
- Index (optional):
- elasticsearch
- solr
- lucene
| | cassandra | scylla | berkeleyje | hbase | yugabyte | foundationdb | inmemory |
|---|---|---|---|---|---|---|---|
| none | ✔ | ✔ | ✔ | ✔ | ✔ | (✔) | ✔ |
| elasticsearch | ✔ | ✔ | ✔ | ✔ | ✔ | (✔) | ✘ |
| solr | ✔ | ✔ | ✔ | ✔ | ✔ | (✔) | ✘ |
| lucene | ✔ | ✔ | ✔ | ✔ | ✔ | (✔) | ✘ |
✔ functional
(✔) runnable but not suitable for production environments
✘ incompatible by design
A lot of services and composed configurations are already available within janusbench.
All of the supported services, which include storage and index backends, are defined and managed by Dockerfiles in their own subdirectory of `docker/services`.
At the moment, new backends need to be registered within the corresponding backend classes `Storage.java` and `Index.java`.
By using these services, more complex scenarios can be built using Docker Compose.
The necessary Docker Compose files are located in `docker/configurations` and use a consistent naming scheme, where `storage` and `index` refer to the names of backend services: `janusgraph<-storage><-index>?.yml`
The files themselves use the normal Docker Compose format and define combinations of services that can be used when running benchmarks.
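Under this scheme, file names would look like the following (illustrative examples, assuming the `cassandra` and `elasticsearch` services; the optional index suffix is simply omitted when no index backend is used):

```
janusgraph-cassandra.yml                 # storage backend only
janusgraph-cassandra-elasticsearch.yml   # storage plus index backend
```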
In janusbench, there are two kinds of benchmarks:
Micro Benchmarks serve the purpose of being easy to create and configure, even without recompiling the benchmark suite. Once configured via a YAML file in `conf/defaults`, Micro Benchmarks are available from the `janusbench` CLI.
Micro Benchmarks only support a limited set of features. If more sophisticated methods (e.g. statistical distributions) are required, it is preferable to write the benchmark in Java, which provides access to the full feature set of the language.
Helper Benchmarks - located within the package `de.rngcntr.janusbench.benchmark.helper` - are very basic tasks like adding vertices and edges or testing the existence of such elements. This kind of benchmark is not meant to be called by the user directly. Instead, it represents a key task whose runtime is measured by a composed benchmark (see section below).
All benchmarks in the scope of this package need to be public and extend the abstract class Benchmark.
The inherited methods `buildUp()` and `tearDown()` can be used to manage local data structures which support the actual performance-measured steps taken in `performAction()`.
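As a sketch of this lifecycle, consider the following. Note that the minimal `Benchmark` base class below is only an assumption for illustration; the real class in janusbench has more responsibilities, such as timing and graph access, and its signatures may differ. The `AddVerticesSketch` class and its fields are likewise hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for the abstract Benchmark class (assumption for
// illustration; the real janusbench class measures runtime and holds a
// graph connection).
abstract class Benchmark {
    public void buildUp() {}
    public abstract void performAction();
    public void tearDown() {}

    public final void run() {
        buildUp();        // prepare local data structures
        performAction();  // the step whose runtime would be measured
        tearDown();       // clean up afterwards
    }
}

// A hypothetical helper benchmark: buildUp() prepares the data,
// performAction() is the measured step, tearDown() releases the data.
public class AddVerticesSketch extends Benchmark {
    private List<String> vertexNames;
    public int added = 0;

    @Override
    public void buildUp() {
        vertexNames = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            vertexNames.add("v" + i);
        }
    }

    @Override
    public void performAction() {
        // A real helper benchmark would add vertices via the graph API;
        // here we only count the prepared names.
        added = vertexNames.size();
    }

    @Override
    public void tearDown() {
        vertexNames = null;
    }

    public static void main(String[] args) {
        AddVerticesSketch b = new AddVerticesSketch();
        b.run();
        System.out.println("added " + b.added + " vertices");
    }
}
```

The point of the split is that only `performAction()` is on the measured path, so setup and cleanup costs do not distort the benchmark result.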
Simple Benchmarks are located within the package de.rngcntr.janusbench.benchmark.composed.
Composed benchmarks within this package need to be public and extend the class ComposedBenchmark.
As for all benchmarks, the `buildUp()`, `performAction()` and `tearDown()` methods are called within the lifecycle of each benchmark.
For composed benchmarks, the `buildUp()` method can be used to assemble them from various simple benchmarks. In order to do so, one can instantiate different simple benchmarks, group them into logical units using `ComposedBenchmark`s, and add them to the final composed benchmark by calling `addComponent()`.
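A minimal sketch of this assembly pattern, assuming simplified stand-ins for `Benchmark` and `ComposedBenchmark` (the real janusbench classes differ, and `SupernodeSketch`/`AddVertexSketch` are invented names for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the abstract Benchmark class (assumption).
abstract class Benchmark {
    public void buildUp() {}
    public abstract void performAction();
    public void tearDown() {}
    public final void run() { buildUp(); performAction(); tearDown(); }
}

// Simplified stand-in for ComposedBenchmark: it runs its components in
// order, which is where their runtimes would be measured.
class ComposedBenchmark extends Benchmark {
    private final List<Benchmark> components = new ArrayList<>();

    public void addComponent(Benchmark component) {
        components.add(component);
    }

    @Override
    public void performAction() {
        for (Benchmark component : components) {
            component.run();
        }
    }
}

// A hypothetical simple benchmark used as a building block.
class AddVertexSketch extends Benchmark {
    static int actions = 0;
    @Override public void performAction() { actions++; }
}

public class SupernodeSketch extends ComposedBenchmark {
    @Override
    public void buildUp() {
        // Assemble the composed benchmark: instantiate simple benchmarks,
        // group them into a logical unit, and register it via addComponent().
        ComposedBenchmark vertexPhase = new ComposedBenchmark();
        vertexPhase.addComponent(new AddVertexSketch());
        vertexPhase.addComponent(new AddVertexSketch());
        addComponent(vertexPhase);
    }

    public static void main(String[] args) {
        new SupernodeSketch().run();
        System.out.println("component actions: " + AddVertexSketch.actions);
    }
}
```

Grouping components into intermediate `ComposedBenchmark` units keeps related steps together, so a whole phase can be timed or reused as one building block.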
Composed benchmarks can use parameters, e.g. to control their number of iterations. In order to keep the CLI clean, these parameters are parsed from a file which contains default values. These files are parsed at runtime using SnakeYAML and will only be able to set public, non-static fields of your benchmark classes.
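As a sketch, such a defaults file pairs each YAML key with a public, non-static field of the same name in the benchmark class (the key and field names here are invented for illustration):

```yaml
# hypothetical defaults file for a composed benchmark
iterations: 1000
```

SnakeYAML would then map the `iterations` key onto a field declared as `public int iterations;` in the benchmark class; private or static fields would not be populated.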