Data migration and Migrating from 3.x to 4.x in Kubernetes #1722

Open · henokgetachew wants to merge 2 commits into base: main

Conversation

henokgetachew (Contributor)

Description

This guide answers:

  • How do you use couchdb-migration in Kubernetes environments?
  • How do you migrate a 3.x instance to 4.x without using docker-compose as an intermediary?

License

The software is provided under AGPL-3.0. Contributions to this project are accepted under the same license.

@mrjones-plip left a comment:

Nice work! We're in strong need of more k8s docs - yay!

I've made some suggestions. Since this is specific to Medic, we should move this to the Medic section so it's a peer of the existing EKS docs.

content/en/hosting/4.x/data-migration-k8s.md: resolved comment (outdated)
relatedContent: >
---

The hosting architecture differs entirely between CHT-Core 3.x and CHT-Core 4.x. When both versions are running in Kubernetes, migrating data requires specific steps using the `couchdb-migration` tool. This tool interfaces with CouchDB to update shard maps and database metadata.
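As a quick illustration of the metadata involved (a sketch only; the admin credentials and the `medic` database name below are placeholders), the node list and shard map the tool rewrites can be inspected directly through CouchDB's HTTP API:

```shell
# List the CouchDB nodes currently known to the cluster
curl -s http://admin:password@localhost:5984/_membership

# Show which nodes hold the shard ranges of a given database (here: medic)
curl -s http://admin:password@localhost:5984/medic/_shards
```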
@mrjones-plip:

Can you add a link to the CouchDB Migration tool? That way folks can read up on it before using it.


## Migration Steps

### 1. Initial Setup
@mrjones-plip:

Please update all the numbers to use native markdown enumeration like this:

Suggested change
### 1. Initial Setup
1. Initial Setup

content/en/hosting/4.x/data-migration-k8s.md: resolved comments (outdated)

### 5. Clone the 3.x Data Volume

First, identify the volume ID from your 3.x data:
@mrjones-plip:

We should remind them that they need to exit the medic-os container before proceeding.
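A minimal sketch of that reminder (assuming the reader is still in the container shell from the earlier step):

```shell
# Leave the medic-os container shell before working with volumes
exit

# Back on your workstation, confirm kubectl still points at the right namespace
kubectl get pods -n $NAMESPACE
```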

Comment on lines +91 to +96
# Get the PVC name
kubectl get pvc -n $NAMESPACE

# Get the volume ID from the PVC
VOLUME_ID=$(kubectl get pvc <your-3x-pvc-name> -n $NAMESPACE -o jsonpath='{.spec.volumeName}')
EBS_VOLUME_ID=$(kubectl get pv $VOLUME_ID -o jsonpath='{.spec.awsElasticBlockStore.volumeID}' | cut -d'/' -f4)
@mrjones-plip:

I think because this section is so long, it's worth having them manually set <your-3x-pvc-name> as an env var. Let's break this out into its own step that finds the PVC name and sets the three env vars; the next, larger section can then just be copied and pasted blindly.
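Something like the following could be that standalone step (a sketch; the PVC name is still supplied by the reader, and the jsonpath expressions are the ones from the block above):

```shell
# Set these once; every later command can then be pasted unchanged
export NAMESPACE=<your-namespace>
export PVC_NAME=<your-3x-pvc-name>

# Derive the Kubernetes volume name and the underlying EBS volume ID from the PVC
export VOLUME_ID=$(kubectl get pvc $PVC_NAME -n $NAMESPACE -o jsonpath='{.spec.volumeName}')
export EBS_VOLUME_ID=$(kubectl get pv $VOLUME_ID -o jsonpath='{.spec.awsElasticBlockStore.volumeID}' | cut -d'/' -f4)
```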

content/en/hosting/4.x/data-migration-k8s.md: resolved comment (outdated)

Create a values.yaml file using the volume ID from the previous step:

@mrjones-plip:

Two suggestions: let's use the updated template here, and let's also simplify it for single node. We can then link to the larger yaml file and say something like "use this larger one for multi-node".

Combining these two suggestions will make this yaml file a lot less complex to edit and mean fewer changes need to be made to it.
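One possible shape for that simplification (a sketch; the template filename is hypothetical, and it assumes the single-node template references $EBS_VOLUME_ID as a placeholder):

```shell
# Render a simplified single-node values.yaml from a template,
# substituting only the EBS volume ID captured earlier
export EBS_VOLUME_ID=<ebs-volume-id-from-previous-step>
envsubst < values-single-node.template.yaml > values.yaml
```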

Comment on lines +213 to +237
First verify CouchDB is running properly:
```shell
# Check pod status
kubectl get pods -n $NAMESPACE

# For single node check CouchDB is up
kubectl exec -it -n $NAMESPACE $(kubectl get pod -n $NAMESPACE -l cht.service=couchdb -o name) -- \
curl -s http://localhost:5984/_up

# For clustered setup (check all nodes)
kubectl exec -it -n $NAMESPACE $(kubectl get pod -n $NAMESPACE -l cht.service=couchdb-1 -o name) -- \
curl -s http://localhost:5984/_up
```

Access the new CouchDB pod based on your deployment type.

For single node:
```shell
kubectl exec -it -n $NAMESPACE $(kubectl get pod -n $NAMESPACE -l cht.service=couchdb -o name) -- bash
```

For clustered setup (always use couchdb-1):
```shell
kubectl exec -it -n $NAMESPACE $(kubectl get pod -n $NAMESPACE -l cht.service=couchdb-1 -o name) -- bash
```
@mrjones-plip:

Love how we've set $NAMESPACE so all these commands work with just copy and paste - nice work!!

@mrjones-plip left a comment:

Oops! I submitted my prior feedback as a "comment" as opposed to "request changes".
