Merge pull request #22 from prashantkalkar/node_level_data_disk_config
Added capability to allow data disk config override at node level.
prashantkalkar authored May 6, 2024
2 parents 1d52113 + 1236b41 commit df0eb2f
Showing 4 changed files with 44 additions and 22 deletions.
19 changes: 19 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,24 @@
## Unreleased

Upgrade notes:
* Renamed the `data_volume` variable to `default_data_volume`. The data disk config now represents the default disk configuration,
  which can be overridden at the node level if required.
  Change existing code from:
```terraform
data_volume = {
# ...
}
```
to the new code:
```terraform
default_data_volume = {
# ...
}
```
Breaking Changes:
* Renamed the `data_volume` variable to `default_data_volume`. The new configuration represents the default data disk config used by all nodes.
  It can be overridden at the node level by selectively providing different values for individual attributes. (See the input variable documentation.)
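A per-node override can be expressed like this (node IDs, IPs, subnet ID, and disk values are illustrative, not recommendations):

```terraform
nodes = [
  {
    node_id        = "node-1"
    node_ip        = "10.0.1.10"
    node_subnet_id = "subnet-0abc"
    # Override only the attributes that should differ from default_data_volume;
    # unspecified attributes fall back to the defaults.
    node_data_disk = {
      size_in_gibs = 100
      type         = "gp3"
    }
  }
]
```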

## v0.5.0

Upgrade notes:
22 changes: 11 additions & 11 deletions README.md
@@ -17,13 +17,13 @@ Note: Currently only Amazon Linux based AMI is supported. The script is written
module "cluster" {
source = "git::[email protected]:prashantkalkar/stateful_application_module.git?ref=<version-git-tag>"
app_name = "cluster-test-setup"
node_image = "<ami_id>"
node_key_name = "my-keypair"
nodes = [
{
node_ip = "<InstanceIPToBeAllocated>"
node_id = "<NodeId>" # should be unique
node_subnet_id = "<subnet_id>"
},
{
node_ip = "<InstanceIPToBeAllocated>"
@@ -36,24 +36,24 @@ module "cluster" {
node_subnet_id = "<subnet_id>"
}
]
node_files = [
{
node_id = "<NodeId>" # should be unique
node_files_toupload = [filebase64("${path.module}/config_file.cfg")]
},
{
node_id = "<NodeId>"
node_files_toupload = [filebase64("${path.module}/config_file.cfg")]
},
{
node_id = "<NodeId>"
node_files_toupload = [filebase64("${path.module}/config_file.cfg")]
}
]
node_config_script = filebase64("${path.module}/node_config_script.sh")
security_groups = [aws_security_group.cluster_sg.id]
instance_type = "<node_instance_type>"
default_data_volume = {
file_system_type = "xfs"
mount_path = "/mydata"
mount_path_owner_user = "ec2-user"
@@ -189,15 +189,15 @@ https://aws.amazon.com/premiumsupport/knowledge-center/ec2-ubuntu-secondary-netw
| <a name="input_asg_inservice_timeout_in_mins"></a> [asg\_inservice\_timeout\_in\_mins](#input\_asg\_inservice\_timeout\_in\_mins) | Timeout in mins which will be used by the rolling update script to wait for instances to be InService for an ASG | `number` | `10` | no |
| <a name="input_asg_lifecycle_hook_heartbeat_timeout"></a> [asg\_lifecycle\_hook\_heartbeat\_timeout](#input\_asg\_lifecycle\_hook\_heartbeat\_timeout) | Timeout for ASG initial lifecycle hook. This is used only during ASG creation, subsequent value changes are not handled by terraform (has to be updated manually) | `number` | `3600` | no |
| <a name="input_command_timeout_seconds"></a> [command\_timeout\_seconds](#input\_command\_timeout\_seconds) | The timeout that will be used by the userdata script to retry commands on failure. Keep it higher to allow manual recovery | `number` | `1800` | no |
| <a name="input_default_data_volume"></a> [default\_data\_volume](#input\_default\_data\_volume) | This is default data volume configuration. This can be selectively overridden at node config level<br> device\_name = "Device name for additional Data volume, select name as per https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html"<br> type = "EBS volume type e.g. gp2, gp3 etc"<br> iops = "Only valid for type gp3"<br> throughput\_mib\_per\_sec = "only valid for type gp3"<br> mount\_path = "path where to mount the data volume"<br> file\_system\_type = "File system to use to format the volume. eg. ext4 or xfs. This is used only initial time. Later changes will be ignored"<br> mount\_params = "Parameters to be used while mounting the volume eg. noatime etc. Optional, empty if not provided"<br> mount\_path\_owner\_user = "OS user that should own volume mount path will be used for chown"<br> mount\_path\_owner\_group = "OS group that should own the volume mount path, will be used for chown" | <pre>object({<br> device_name = optional(string, "/dev/sdf")<br> size_in_gibs = number<br> type = string<br> iops = optional(number)<br> throughput_mib_per_sec = optional(number)<br> mount_path = string<br> file_system_type = string<br> mount_params = optional(list(string), [])<br> mount_path_owner_user = string<br> mount_path_owner_group = string<br> tags = optional(map(string), {})<br> })</pre> | n/a | yes |
| <a name="input_http_put_response_hop_limit"></a> [http\_put\_response\_hop\_limit](#input\_http\_put\_response\_hop\_limit) | n/a | `number` | `1` | no |
| <a name="input_instance_type"></a> [instance\_type](#input\_instance\_type) | n/a | `string` | n/a | yes |
| <a name="input_jq_download_url"></a> [jq\_download\_url](#input\_jq\_download\_url) | n/a | `string` | `"https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64"` | no |
| <a name="input_node_config_script"></a> [node\_config\_script](#input\_node\_config\_script) | Base64 encoded node configuration shell script.<br> Must include configure\_cluster\_node and wait\_for\_healthy\_cluster function. Check documentation for more details about the contract | `string` | n/a | yes |
| <a name="input_node_files"></a> [node\_files](#input\_node\_files) | node\_id = node identifier (this is not a index and need not in any specific ordered).<br> node\_files\_toupload = list of file to be uploaded per node. These can be cluster config files etc.<br> node\_files\_toupload.contents = Base64 encoded contents of the file to be uploaded on the node.<br> node\_files\_toupload.destination = File destination on the node. This will be the file path and name on the node. The file ownership should be changed by node\_config\_script. | <pre>set(object({<br> node_id = string<br> node_files_toupload = optional(list(object({<br> contents = string<br> destination = string<br> })), [])<br> }))</pre> | n/a | yes |
| <a name="input_node_image"></a> [node\_image](#input\_node\_image) | n/a | `string` | n/a | yes |
| <a name="input_node_key_name"></a> [node\_key\_name](#input\_node\_key\_name) | n/a | `string` | n/a | yes |
| <a name="input_nodes"></a> [nodes](#input\_nodes) | node\_id = node identifier (this is not a index and need not in any specific ordered).<br> node\_ip = IP address of the cluster node. This should be available within the subnet.<br> node\_image = image for node of the cluster node.<br> node\_subnet\_id = Id of the subnet where node should be created.<br> node\_data\_disk = override the default data disk configuration for the node. (follow the same schema of data disk). | <pre>set(object({<br> node_id = string<br> node_ip = string<br> node_image = optional(string)<br> node_subnet_id = string<br> node_data_disk = optional(map(string), {})<br> }))</pre> | n/a | yes |
| <a name="input_root_volume"></a> [root\_volume](#input\_root\_volume) | n/a | <pre>object({<br> device_name = string<br> size_in_gibs = number<br> type = string<br> })</pre> | <pre>{<br> "device_name": "/dev/xvda",<br> "size_in_gibs": 16,<br> "type": "gp3"<br>}</pre> | no |
| <a name="input_security_groups"></a> [security\_groups](#input\_security\_groups) | n/a | `list(string)` | n/a | yes |

2 changes: 1 addition & 1 deletion main.tf
@@ -7,7 +7,7 @@ module "cluster_nodes" {
for_each = local.node_id_to_node_map
source = "./modules/node-module"
app_name = var.app_name
data_volume = merge(var.default_data_volume, each.value.node_data_disk)
node_id = each.key
node_instance_profile_id = aws_iam_instance_profile.node_instance_profile.id
node_ip = each.value.node_ip
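The `merge` call gives per-node attributes precedence over the defaults. A sketch of the semantics (values are illustrative):

```terraform
# merge() returns a map/object with keys from both arguments;
# on conflict, the later argument (the node override) wins.
locals {
  example = merge(
    { size_in_gibs = 50, type = "gp2" }, # defaults
    { type = "gp3" }                     # node override
  )
  # local.example == { size_in_gibs = 50, type = "gp3" }
}
```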
23 changes: 13 additions & 10 deletions variables.tf
@@ -4,12 +4,14 @@ variable "nodes" {
node_ip = string
node_image = optional(string)
node_subnet_id = string
node_data_disk = optional(map(string), {})
}))
description = <<EOT
node_id = node identifier (this is not an index and need not be in any specific order).
node_ip = IP address of the cluster node. This should be available within the subnet.
node_image = image for the cluster node.
node_subnet_id = Id of the subnet where the node should be created.
node_data_disk = override of the default data disk configuration for the node (follows the same schema as the data disk).
EOT
}

@@ -29,7 +31,7 @@ variable "node_files" {
EOT
}

variable "default_data_volume" {
type = object({
device_name = optional(string, "/dev/sdf")
size_in_gibs = number
@@ -44,15 +46,16 @@ variable "data_volume" {
tags = optional(map(string), {})
})
description = <<EOT
This is the default data volume configuration. It can be selectively overridden at the node config level.
device_name = "Device name for additional Data volume, select name as per https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html"
type = "EBS volume type e.g. gp2, gp3 etc"
iops = "Only valid for type gp3"
throughput_mib_per_sec = "only valid for type gp3"
mount_path = "path where to mount the data volume"
file_system_type = "File system to use to format the volume. eg. ext4 or xfs. This is used only initial time. Later changes will be ignored"
mount_params = "Parameters to be used while mounting the volume eg. noatime etc. Optional, empty if not provided"
mount_path_owner_user = "OS user that should own volume mount path will be used for chown"
mount_path_owner_group = "OS group that should own the volume mount path, will be used for chown"
EOT
}
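As an illustration, a minimal `default_data_volume` value could look like the following (all values are placeholders, not recommendations):

```terraform
default_data_volume = {
  size_in_gibs           = 50
  type                   = "gp3"
  iops                   = 3000
  throughput_mib_per_sec = 125
  mount_path             = "/mydata"
  file_system_type       = "xfs"
  mount_params           = ["noatime"]
  mount_path_owner_user  = "ec2-user"
  mount_path_owner_group = "ec2-user"
}
```

`device_name` and `tags` are optional and default to `"/dev/sdf"` and `{}` respectively.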

