Data migration and Migrating from 3.x to 4.x in Kubernetes #1722
base: main
Conversation
Nice work! We're in strong need of more k8s docs - yay!
I've made some suggestions. Since this is specific to Medic, we should move this to the Medic section so it's a peer of the existing EKS docs.
relatedContent: >
---
The hosting architecture differs entirely between CHT-Core 3.x and CHT-Core 4.x. When both versions are running in Kubernetes, migrating data requires specific steps using the `couchdb-migration` tool. This tool interfaces with CouchDB to update shard maps and database metadata.
Can you add a link to the CouchDB Migration tool? That way folks can read up on it before using it.
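For background on what "updating shard maps" means here: CouchDB stores each database's shard placement in its internal `_dbs` database, mapping shard ranges to Erlang node names. After a migration the old node name must be rewritten to the new one. A minimal, self-contained illustration of that rename (the sample document and node names are assumed; the real tool does this through the CouchDB API, not `sed`):

```shell
# Simplified shard-map fragment like those stored in CouchDB's internal _dbs
# database (sample document assumed). The couchdb-migration tool rewrites the
# old node name to the new one via the CouchDB API; sed here only illustrates
# the rename.
SHARD_MAP='{"by_node":{"couchdb@couchdb-1.old-host":["00000000-ffffffff"]}}'
UPDATED=$(echo "$SHARD_MAP" | sed 's/couchdb-1\.old-host/couchdb-1.new-host/')
echo "$UPDATED"
```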
## Migration Steps

### 1. Initial Setup
Please update all the numbers to use native markdown enumeration like this:
### 1. Initial Setup
1. Initial Setup
### 5. Clone the 3.x Data Volume

First, identify the volume ID from your 3.x data:
We should remind them that they need to exit the medic-os container before proceeding.
```shell
# Get the PVC name
kubectl get pvc -n $NAMESPACE

# Get the volume ID from the PVC
VOLUME_ID=$(kubectl get pvc <your-3x-pvc-name> -n $NAMESPACE -o jsonpath='{.spec.volumeName}')
EBS_VOLUME_ID=$(kubectl get pv $VOLUME_ID -o jsonpath='{.spec.awsElasticBlockStore.volumeID}' | cut -d'/' -f4)
```
I think because this section is so long, they're likely having to manually set `<your-3x-pvc-name>` as an env var. Let's break this out into its own step to find the PVC name and set the three env vars. The next larger section can then just be copied and pasted blindly.
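As a side note on the `cut -d'/' -f4` in the snippet above: the PV's `awsElasticBlockStore.volumeID` field typically has the form `aws://<availability-zone>/<volume-id>`, so the fourth `/`-separated field is the bare EBS volume ID. A self-contained sketch with an assumed sample value:

```shell
# Sample volumeID as returned by the jsonpath query above (value assumed);
# splitting on '/' yields: "aws:", "", "<az>", "<vol-id>" - field 4 is the ID.
SAMPLE_VOLUME_ID="aws://us-east-1a/vol-0123456789abcdef0"
EBS_VOLUME_ID=$(echo "$SAMPLE_VOLUME_ID" | cut -d'/' -f4)
echo "$EBS_VOLUME_ID"
```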
Create a values.yaml file using the volume ID from the previous step:
Two suggestions: let's use the updated template here, and let's also simplify it for single node. We can then link to the larger yaml file and say something like "use this larger one for multi-node".
Combining these two suggestions will make this yaml file a lot less complex to edit, and mean fewer changes need to be made to it.
First verify CouchDB is running properly:
```shell
# Check pod status
kubectl get pods -n $NAMESPACE

# For single node check CouchDB is up
kubectl exec -it -n $NAMESPACE $(kubectl get pod -n $NAMESPACE -l cht.service=couchdb -o name) -- \
  curl -s http://localhost:5984/_up

# For clustered setup (check all nodes)
kubectl exec -it -n $NAMESPACE $(kubectl get pod -n $NAMESPACE -l cht.service=couchdb-1 -o name) -- \
  curl -s http://localhost:5984/_up
```
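A healthy CouchDB node answers `GET /_up` with a small JSON status document, `{"status":"ok"}`. A sketch of checking that response in a script; the sample value stands in for the output of the curl commands above:

```shell
# A healthy node returns {"status":"ok"} from /_up (sample response assumed).
RESPONSE='{"status":"ok"}'
if echo "$RESPONSE" | grep -q '"status":"ok"'; then
  STATUS="up"
else
  STATUS="down"
fi
echo "CouchDB is $STATUS"
```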
Access the new CouchDB pod based on your deployment type.

For single node:
```shell
kubectl exec -it -n $NAMESPACE $(kubectl get pod -n $NAMESPACE -l cht.service=couchdb -o name) -- bash
```

For clustered setup (always use couchdb-1):
```shell
kubectl exec -it -n $NAMESPACE $(kubectl get pod -n $NAMESPACE -l cht.service=couchdb-1 -o name) -- bash
```
Love how we've set $NAMESPACE so all these commands work with just copy and paste - nice work!!
oops! prior feedback I submitted as "comment" as opposed to "request changes"
Co-authored-by: mrjones <[email protected]>
Description
This guide answers:
License
The software is provided under AGPL-3.0. Contributions to this project are accepted under the same license.