
Documentation: Nakadi Platform Management #1064

wsalembi opened this issue Jun 13, 2019 · 21 comments

@wsalembi

I'm looking for any documentation that provides more information or recommendations on managing the Nakadi platform in a production environment.

  • High-availability setup
  • Capacity management
  • Backup/restore strategies
  • Release management (upgrade strategy)

Is there any community forum to discuss the Nakadi platform?

@adyach
Member

adyach commented Jun 13, 2019

Hi @wsalembi, thank you for your interest in Nakadi!

There are no open-source documents on how to operate Nakadi, and no forums to discuss it, but the team is happy to collaborate to clarify any questions you have, and maybe to open source (at least partially) what we have on the topic. Feel free to ask in this issue or to file another one. I will cover your points in this issue.

  • High-availability setup
    Nakadi's availability is determined by the availability of the underlying Apache Kafka cluster, which you can configure to match your requirements. Nakadi publishes events in batches. To preserve ordering within a partition, Nakadi does not retry publishing messages to Kafka: if even one event in a batch fails to be published, the whole publishing request is rejected.

    Our team hosts Kafka on AWS across 3 different availability zones, with a replication factor of 3 (2 in-sync replicas) in rack-aware mode, so it is possible to lose one AZ completely (a hedged config sketch follows this list).
    There are also SLOs defined for Nakadi; if you need more on that topic, let me know.

  • Capacity management
    Nakadi itself is a stateless application, so scaling it in and out is not a problem, especially if you run in a cloud environment as we do on AWS. Nakadi is CPU-bound due to the processing of events (schema validation, compression, and so on), which is why it scales out very well based on CPU usage.

    Nakadi is backed by Apache Kafka, which, as a stateful application, requires effort to scale up in advance. Although Kafka is network-bound, the main concern is disk utilisation: a machine can be exchanged for a bigger one (more network, more CPU) in a couple of minutes if you use some kind of attachable/detachable volumes to avoid data replication, but once you have to extend disk capacity you have to introduce new nodes into the cluster, which requires rebalancing the cluster to distribute the load. We keep ~40% of free disk space per broker so we can absorb unexpected load growth. One of our inventions is the Kafka supervisor bubuku, which helps to move data between brokers in different ways.

    Apache Zookeeper is used to maintain distributed acknowledgement of consumed events (Subscriptions API). The usual setup for Nakadi is a 3-node Zookeeper ensemble, which scales vertically only, by replacing machines with more powerful ones. It is over-provisioned on CPU quite a lot; keeping its usage around 40%-60% works well for us.

    Postgres is used as the EventTypes metadata store. The hottest paths are cached, so it does not really require capacity planning, since its CPU and disk usage are low. The current cluster runs with 1 master and 2 replicas.

  • Backup/restore strategies
    This should be covered by your Apache Kafka and Apache Zookeeper setup.

  • Release management (upgrade strategy)
    Nakadi can be rolled out in several different ways: switching traffic between stacks, or blue-green deployment. Since it is a stateless application, there is nothing special about upgrading it.

    The Apache Kafka cluster is upgraded using bubuku, following the upgrade recommendations in the Apache Kafka documentation.
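
As a concrete illustration of the replication settings described in the high-availability point above, this is roughly how a topic with those guarantees could be created with Kafka's Java AdminClient. It is a hedged sketch, not Nakadi's actual provisioning code: the bootstrap address and topic name are made up, and the rack placement itself depends on each broker setting broker.rack to its availability zone.

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // hypothetical bootstrap address; point this at your own cluster
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.example.org:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // replication factor 3: one replica per AZ, assuming each broker
            // advertises its AZ via broker.rack so placement is rack-aware
            NewTopic topic = new NewTopic("example.event-type", 8, (short) 3)
                    // writes succeed only while at least 2 replicas are in sync
                    .configs(Map.of("min.insync.replicas", "2"));
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```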

@lenalebt

lenalebt commented Jul 3, 2019

Thanks for your post here, it helped me better understand it.

Since I know Zalando is running quite a few services in k8s (besides the vintage DC, and maybe still some STUPS services around?), I wonder if you have experience running Nakadi (maybe together with Kafka?) in k8s, or together with a managed Kafka such as Amazon MSK.

Besides that, are there others running Nakadi you know of outside of Zalando? I am happy to read names or numbers, but a simple "yes/no" would help as well.

@adyach
Member

adyach commented Jul 17, 2019

@lenalebt unfortunately, the answer to your questions is no, for now.

@lenalebt

Too bad! But thanks for the update :).

@mateimicu

I am also interested in this subject.

@lenalebt We are now trying to run Nakadi in EKS, using MSK (and the Zookeeper it provides).

@lmontrieux
Contributor

@mateimicu that's very interesting, how is it working for you? We'd love to know more about your efforts, and any issues you may be running into!

@mateimicu

@lmontrieux it works, we can use it. The problem we had was ensuring backups for it, i.e. keeping all of Nakadi's data stores (MSK, Zookeeper, DB) in sync.

We are now trying to understand what we should back up and how to do it. We want to use it for a mission-critical system, so we can't afford to lose messages.

Also, we need to be able to do disaster recovery and migrations (maybe even keep a hot-replica in another AWS region).

@lmontrieux
Contributor

We keep zookeeper and DB backups separately - the two are not synchronized, but that should not be an issue. In the extremely unlikely event that we would lose both the entire zookeeper ensemble and the database (and all its replicas) at the same time, we can recover from the backups and start again from there.

Regarding Kafka, we have replication on 3 brokers, each in a different availability zone, and ack=all to make sure that Nakadi does not confirm publishing until the data is on all brokers. The Kafka brokers use EBS volumes, so the risks of data loss are very, very small. But we also have an application that consumes all data from all Nakadi event types and persists it to S3 for our data lake, so we have another copy in case everything goes south.
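
To make the ack=all behaviour concrete, here is a minimal producer sketch. This only illustrates the Kafka setting (acks=all) that gives the guarantee described above; it is not Nakadi's internal publishing code, and the bootstrap address and topic name are made up.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AckAllPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka.example.org:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks=all: the broker confirms a write only after all in-sync
        // replicas have it, so a confirmed event survives losing one broker/AZ
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("example.event-type", "key", "{\"example\":true}"));
            producer.flush(); // block until the send is acknowledged or fails
        }
    }
}
```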

@lmontrieux
Contributor

re: keeping a hot-replica in another AWS region: we don't do it, but I guess you could use Kafka MM to replicate to another cluster, or even an application that reads from Nakadi and writes to another Nakadi. I see potential issues around offsets and timelines, though.

@eupestov

Thanks for sharing @lmontrieux

If the DB and ZK backups are not synchronised, you will probably need to do some 'magic' after you restore from the backups, to deal with the data which is stored in both, e.g. subscriptions and event types?

Is it possible to create a subscription which will get all the events, even as new event types get registered? Or do you update that 'sink' subscription explicitly with every new event type?

@lmontrieux
Contributor

Yes, you'd probably have to do some cleanup, as there could be subscriptions in the DB that aren't in Zookeeper, or the other way around. But since subscription or event type creation are relatively rare events, we take it as an acceptable risk that, in the case of a complete meltdown, we'll have to clear up a few subscriptions or event types.

It isn't possible to create a subscription which gets all events from all event types, for 2 reasons:

  • the event types in a subscription are fixed, so you'd have to re-create it every time a new event type is created
  • there should be a reasonable maximum number of partitions to which a subscription is registered; Nakadi will not be able to handle too many. We set our maximum at 100 partitions per subscription

To archive everything, we wrote a (not yet open source, unfortunately) application that periodically lists all the event types and then creates subscriptions to read events from them; a hypothetical sketch of the pattern follows.
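
Since that application is not open source, here is a hypothetical sketch of the pattern it describes, using Nakadi's public API (GET /event-types to list event types, POST /subscriptions to create a subscription). The host name and owning_application are made up, authentication is omitted, and a real implementation would parse the JSON responses with a proper library.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ArchiverBootstrap {
    private static final String NAKADI = "https://nakadi.example.org"; // made-up host

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // 1) periodically list all event types
        HttpResponse<String> eventTypes = client.send(
                HttpRequest.newBuilder(URI.create(NAKADI + "/event-types")).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(eventTypes.body()); // extract the "name" fields from here

        // 2) create a subscription per (group of) event types, reading from the start
        String body = """
                {"owning_application": "archiver",
                 "event_types": ["example.event-type"],
                 "read_from": "begin"}""";
        HttpResponse<String> created = client.send(
                HttpRequest.newBuilder(URI.create(NAKADI + "/subscriptions"))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(body))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(created.statusCode() + " " + created.body());
    }
}
```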

@mateimicu

Hi @lmontrieux,

I see a line mentioning SLO monitoring support in the documentation, but I can't find more information. Is there another place I should look?

@lmontrieux
Contributor

Hi @mateimicu

For SLO monitoring, we use Nakadi itself. Basically, Nakadi produces its own access log to an event type, nakadi.access.log. It also logs some stats in nakadi.batch.published and nakadi.data.streamed.
We use these for our monitoring, including SLOs.
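
As a hedged sketch of what a consumer of those event types could look like over the Subscription API: the host and subscription id are placeholders, the status_code field is an assumption about the access-log schema, and a real consumer would also have to commit cursors to keep consuming.

```java
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AccessLogMonitor {
    public static void main(String[] args) throws Exception {
        // hypothetical subscription on nakadi.access.log, created beforehand
        URI stream = URI.create(
                "https://nakadi.example.org/subscriptions/SUBSCRIPTION_ID/events");

        HttpResponse<InputStream> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(stream).GET().build(),
                HttpResponse.BodyHandlers.ofInputStream());

        // the stream delivers one JSON batch per line; counting 5xx responses
        // gives a crude availability signal to feed into SLO dashboards
        try (BufferedReader in = new BufferedReader(new InputStreamReader(resp.body()))) {
            in.lines()
              .filter(batch -> batch.contains("\"status_code\":5"))
              .forEach(batch -> System.out.println("5xx seen: " + batch));
        }
    }
}
```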

@lmontrieux
Contributor

I think we need to come up with proper admin documentation - that would be the right place to answer these questions, and also to highlight configuration options.

@mateimicu

@lmontrieux sounds like a good idea.

Another question: who creates nakadi.access.log and the other event types? I tried to enable the kpi_collection and audit_log_collection features but got the following error:

Error occurred while publishing events to nakadi.data.streamed, EventType "nakadi.data.streamed" does not exist.

@mateimicu

I think I figured this out; just starting a new process worked :).

@lmontrieux
Contributor

:)

@eupestov

eupestov commented Nov 8, 2019

If you do not mind, I'll ask here instead of starting a new thread:

We have two timelines for event type X:
A (original): e1, e2, e3, ... eN
B (new): eN+1, ... eM

What should happen if we create a new subscription for X with read_from=begin? Should it start streaming from e1 or eN+1? It looks like it starts from eN+1 in my case, which is not what I expected.

@lmontrieux
Contributor

It should start at the oldest available offset of the oldest available timeline, if you read from begin.

@eupestov

eupestov commented Nov 8, 2019

Thanks. In my case the retention time for the event type was set to -1, and because of that the old topic was cleaned up almost immediately, due to:

final Date cleanupDate = new Date(System.currentTimeMillis() + retentionTime);

Is this a bug or a feature? =)

@lmontrieux
Contributor

Looks like a bug. Could you please open an issue (and maybe a PR if you feel like it)?
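
For context: assuming retentionTime is in milliseconds, a value of -1 makes that expression resolve to a timestamp 1 ms in the past, so the old timeline becomes eligible for cleanup immediately. A hedged sketch of one possible guard, assuming a negative value is meant to behave like Kafka's "unlimited retention" (this is not the actual patch):

```java
// schedule cleanup only when a real retention period is configured;
// a negative retentionTime (e.g. -1) is treated as "keep forever"
final Date cleanupDate = retentionTime >= 0
        ? new Date(System.currentTimeMillis() + retentionTime)
        : null; // no cleanup date: the old timeline is never cleaned up
```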
