The Sink connector sinks data from Kafka into ClickHouse. The connector is tested with the following converters:
- JsonConverter
- AvroConverter (using Apicurio Schema Registry or Confluent Schema Registry)
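
As a rough illustration (not taken from this repository's documentation), a sink instance could be registered with either converter through the standard Kafka Connect REST API. The connector class, connector name, topic, and schema registry host below are assumptions and may not match this connector's actual configuration keys.

```bash
# Sketch only. Assumes a Kafka Connect worker on localhost:8083; the connector class
# and the connector/topic names are guesses, not the connector's documented values.
# For plain JSON payloads, org.apache.kafka.connect.json.JsonConverter can be used instead.
curl -X POST http://localhost:8083/connectors \
  -H "Content-Type: application/json" \
  -d '{
    "name": "clickhouse-sink",
    "config": {
      "connector.class": "com.altinity.clickhouse.sink.connector.ClickHouseSinkConnector",
      "topics": "SERVER.test.employees",
      "key.converter": "io.confluent.connect.avro.AvroConverter",
      "key.converter.schema.registry.url": "http://schemaregistry:8081",
      "value.converter": "io.confluent.connect.avro.AvroConverter",
      "value.converter.schema.registry.url": "http://schemaregistry:8081"
    }
  }'
```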

The connector supports the following features:

- Inserts, updates, and deletes using ReplacingMergeTree/CollapsingMergeTree table engines (see the table sketch after this list)
- Deduplication logic to de-duplicate records from the Kafka topic (based on the primary key)
- Exactly-once semantics
- Bulk inserts to ClickHouse
- Storage of Kafka metadata
- Kafka topic to ClickHouse table mapping, for the case where a MySQL table needs to map to a differently named ClickHouse table
- Storage of raw data in JSON (for auditing purposes)
- Monitoring dashboard (Grafana/Prometheus) to track lag
- Kafka offset management in ClickHouse
- Increased parallelism (customizable thread pool for JDBC connections)
- MySQL source (via Debezium)
- PostgreSQL source (via Debezium; testing in progress)
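
As a sketch of how ReplacingMergeTree supports the update/delete and deduplication model, the target table can carry a version column that the engine uses to keep only the latest row per sorting key. The table layout, column names (`_version`, `is_deleted`), and types below are assumptions for illustration, not the exact layout this connector requires.

```bash
# Sketch only: a possible target table on ReplacingMergeTree. The _version / is_deleted
# columns are assumed names; background merges keep the row with the highest _version
# per emp_no, which gives update/delete and deduplication semantics for replayed records.
clickhouse-client --query "
  CREATE TABLE IF NOT EXISTS employees
  (
      emp_no     Int32,
      first_name String,
      last_name  String,
      _version   UInt64,
      is_deleted UInt8
  )
  ENGINE = ReplacingMergeTree(_version)
  ORDER BY emp_no
"
```

Queries can add `FINAL` (or filter on the delete marker) to see fully collapsed results before background merges complete.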

Docker image for the Sink connector: `altinity/clickhouse-sink-connector:latest`
https://hub.docker.com/r/altinity/clickhouse-sink-connector

To start the connector and its dependencies with Docker Compose:

```bash
cd deploy/docker
./start-docker-compose.sh
```
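
Once the stack is up, one quick sanity check is the Kafka Connect REST API; the `localhost:8083` address and the connector name are assumptions about how the compose file exposes the Connect worker.

```bash
# Assumes the compose stack publishes the Kafka Connect worker on localhost:8083.
curl -s http://localhost:8083/connectors
# Status of a specific connector (the name "clickhouse-sink" is an assumed example):
curl -s http://localhost:8083/connectors/clickhouse-sink/status
```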

For detailed instructions, see Setup.