Grafana Dashboard for AWS ParallelCluster

This is a sample solution based on Grafana for monitoring various components of an HPC cluster built with AWS ParallelCluster. There are seven dashboards that can be used as they are or customized to your needs.

  • ParallelCluster Summary - this is the main dashboard that shows general monitoring info and metrics for the whole cluster. It includes Slurm metrics and storage performance metrics.
  • HeadNode Details - this dashboard shows detailed metrics for the HeadNode, including CPU, memory, network, and storage usage.
  • Compute Node List - this dashboard shows the list of the available compute nodes. Each entry is a link to a more detailed page.
  • Compute Node Details - similar to the HeadNode Details dashboard, this shows the same metrics for the compute nodes.
  • GPU Nodes Details - this dashboard shows GPU-related metrics collected using the nvidia-dcgm container.
  • Cluster Logs - this dashboard shows all the logs of your HPC cluster. The logs are pushed by AWS ParallelCluster to Amazon CloudWatch Logs and reported here.
  • Cluster Costs (beta / in development) - this dashboard shows the costs associated with the AWS services used by your cluster. It includes: EC2, EBS, FSx, S3, EFS.

Quickstart

Create a cluster using AWS ParallelCluster and include the following configuration:

AWS ParallelCluster 3.x

Update your cluster's config by adding the following snippet to the HeadNode and Scheduling sections:

CustomActions:
  OnNodeConfigured:
    Script: https://raw.githubusercontent.com/aws-samples/aws-parallelcluster-monitoring/main/post-install.sh
    Args:
      - v0.9
Iam:
  AdditionalIamPolicies:
    - Policy: arn:aws:iam::aws:policy/CloudWatchFullAccess
    - Policy: arn:aws:iam::aws:policy/AWSPriceListServiceFullAccess
    - Policy: arn:aws:iam::aws:policy/AmazonSSMFullAccess
    - Policy: arn:aws:iam::aws:policy/AWSCloudFormationReadOnlyAccess
Tags:
  - Key: 'Grafana'
    Value: 'true'

See the complete example config: pcluster.yaml.
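Once the snippet is merged into your configuration file, the cluster can be created with the standard ParallelCluster 3.x CLI. A minimal sketch (the cluster name and config path below are placeholders, not values from this project):

```shell
# Create the cluster from the updated configuration
# (cluster name and file path are illustrative)
pcluster create-cluster \
  --cluster-name grafana-demo \
  --cluster-configuration pcluster.yaml

# Check creation progress; wait for CREATE_COMPLETE
pcluster describe-cluster --cluster-name grafana-demo
```

The post-install script runs on each node as part of OnNodeConfigured, so the monitoring stack is available as soon as the HeadNode finishes configuring.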

AWS ParallelCluster

AWS ParallelCluster is an AWS-supported open source cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters in the AWS cloud. It automatically sets up the required compute resources and a shared filesystem, and in the 3.x series offers a choice of batch schedulers: Slurm and AWS Batch.

Solution components

This project is built with the following components:

  • Grafana is an open-source platform for monitoring and observability. Grafana allows you to query, visualize, alert on, and understand your metrics, as well as create, explore, and share dashboards, fostering a data-driven culture.
  • Prometheus is an open-source systems and service monitoring project from the Cloud Native Computing Foundation. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.
  • The Prometheus Pushgateway is an open-source tool that allows ephemeral and batch jobs to expose their metrics to Prometheus.
  • Nginx is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server.
  • Prometheus-Slurm-Exporter is a Prometheus collector and exporter for metrics extracted from the Slurm resource scheduling system.
  • Node_exporter is a Prometheus exporter for hardware and OS metrics exposed by *NIX kernels, written in Go with pluggable metric collectors.

Note: while most components are released under the Apache 2.0 license, Prometheus-Slurm-Exporter is licensed under GPLv3. You need to be aware of this and accept the license terms before installing that component.
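To illustrate how these components fit together, Prometheus periodically scrapes the exporters over HTTP and Grafana queries Prometheus for the dashboards. A minimal, illustrative prometheus.yml fragment is sketched below; the job names are assumptions, and the ports are the defaults of each exporter (9100 for node_exporter, 8080 for prometheus-slurm-exporter), which this project's actual configuration may override:

```yaml
# Illustrative scrape configuration (not this project's actual file)
scrape_configs:
  - job_name: 'node_exporter'       # hardware/OS metrics from each node
    static_configs:
      - targets: ['localhost:9100'] # node_exporter default port
  - job_name: 'slurm_exporter'      # scheduler metrics from the HeadNode
    static_configs:
      - targets: ['localhost:8080'] # prometheus-slurm-exporter default port
```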

Example Dashboards

Cluster Overview

ParallelCluster

HeadNode Dashboard

Head Node

ComputeNodes Dashboard

Compute Node List

Logs

Logs

Cluster Cost

Costs

Detailed setup

  1. Create a Security Group that allows you to access the HeadNode on ports 80 and 443. In the following example the security group is opened to 0.0.0.0/0; we strongly advise restricting it further. More information on how to create your security groups can be found here.
read -p "Please enter the vpc id of your cluster: " vpc_id
echo -e "creating a security group with $vpc_id..."
security_group=$(aws ec2 create-security-group --group-name grafana-sg --description "Open HTTP/HTTPS ports" --vpc-id ${vpc_id} --output text)
aws ec2 authorize-security-group-ingress --group-id ${security_group} --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id ${security_group} --protocol tcp --port 80 --cidr 0.0.0.0/0
  2. Create a cluster with the post-install script post-install.sh, the Security Group you created above as an AdditionalSecurityGroup on the HeadNode, and a few additional IAM policies. You can find a complete AWS ParallelCluster template here. Please note that, at the moment, the installation script has only been tested on Amazon Linux 2.
CustomActions:
  OnNodeConfigured:
    Script: https://raw.githubusercontent.com/aws-samples/aws-parallelcluster-monitoring/main/post-install.sh
    Args:
      - v0.9
Iam:
  AdditionalIamPolicies:
    - Policy: arn:aws:iam::aws:policy/CloudWatchFullAccess
    - Policy: arn:aws:iam::aws:policy/AWSPriceListServiceFullAccess
    - Policy: arn:aws:iam::aws:policy/AmazonSSMFullAccess
    - Policy: arn:aws:iam::aws:policy/AWSCloudFormationReadOnlyAccess
Tags:
  - Key: 'Grafana'
    Value: 'true'
  3. Connect to https://headnode_public_ip or http://headnode_public_ip (all HTTP connections are automatically redirected to HTTPS) and authenticate with the default Grafana password. A landing page will be presented to you with links to the Prometheus database service and the Grafana dashboards.
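You can verify that Nginx is up and redirecting before opening a browser. A quick sketch using curl (the IP below is a placeholder; -k skips verification of the self-signed certificate):

```shell
# Replace with your HeadNode's public IP
headnode_public_ip=203.0.113.10

# The plain-HTTP endpoint should answer with a redirect (301/302) to HTTPS
curl -skI "http://${headnode_public_ip}" | head -n 1

# The HTTPS endpoint should answer with a 200-series status once the stack is up
curl -skI "https://${headnode_public_ip}" | head -n 1
```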

Login Screen

Note: because the compute nodes continuously push metrics to the HeadNode, network traffic on the HeadNode is higher than usual. If you expect to run a large-scale cluster (hundreds of instances), we recommend choosing a slightly larger instance type for the HeadNode than you originally planned.

Security

See CONTRIBUTING for more information.

License

This library is licensed under the MIT-0 License. See the LICENSE file.