From c8c754c0c750c606cb755c1d867ea10dd8ea4bc5 Mon Sep 17 00:00:00 2001 From: edp-bot Date: Mon, 18 Dec 2023 10:57:09 +0000 Subject: [PATCH] Update documentation --- 404.html | 2 +- .../aws-deployment-diagram/index.html | 2 +- .../aws-reference-architecture/index.html | 2 +- developer-guide/edp-workflow/index.html | 4 +- developer-guide/index.html | 2 +- .../kubernetes-deployment/index.html | 2 +- developer-guide/local-development/index.html | 2 +- .../mk-docs-development/index.html | 2 +- .../reference-architecture/index.html | 2 +- .../reference-cicd-pipeline/index.html | 2 +- developer-guide/telemetry/index.html | 26 ++ faq/index.html | 2 +- features/index.html | 2 +- getting-started/index.html | 2 +- glossary/index.html | 2 +- index.html | 2 +- operator-guide/add-jenkins-agent/index.html | 2 +- operator-guide/add-ons-overview/index.html | 2 +- .../add-other-code-language/index.html | 2 +- .../add-security-scanner/index.html | 2 +- operator-guide/argocd-integration/index.html | 2 +- .../artifacts-verification/index.html | 2 +- .../aws-marketplace-install/index.html | 2 +- operator-guide/capsule/index.html | 2 +- .../configure-keycloak-oidc-eks/index.html | 2 +- .../index.html | 2 +- operator-guide/delete-edp/index.html | 2 +- .../delete-jenkins-job-provision/index.html | 2 +- operator-guide/dependency-track/index.html | 2 +- operator-guide/deploy-aws-eks/index.html | 2 +- operator-guide/deploy-okd-4.10/index.html | 2 +- operator-guide/deploy-okd/index.html | 2 +- operator-guide/ebs-csi-driver/index.html | 2 +- operator-guide/edp-access-model/index.html | 2 +- operator-guide/edp-kiosk-usage/index.html | 2 +- .../eks-oidc-integration/index.html | 2 +- operator-guide/enable-irsa/index.html | 2 +- .../index.html | 2 +- .../github-debug-webhooks/index.html | 2 +- operator-guide/github-integration/index.html | 2 +- .../gitlab-debug-webhooks/index.html | 2 +- operator-guide/gitlab-integration/index.html | 2 +- operator-guide/harbor-oidc/index.html | 2 +- operator-guide/headlamp-oidc/index.html | 2 +- .../import-strategy-jenkins/index.html | 2 +- .../import-strategy-tekton/index.html | 2 +- operator-guide/import-strategy/index.html | 2 +- operator-guide/index.html | 2 +- operator-guide/install-argocd/index.html | 2 +- operator-guide/install-defectdojo/index.html | 2 +- operator-guide/install-edp/index.html | 2 +- .../index.html | 2 +- operator-guide/install-harbor/index.html | 2 +- .../install-ingress-nginx/index.html | 2 +- operator-guide/install-keycloak/index.html | 2 +- operator-guide/install-kiosk/index.html | 2 +- operator-guide/install-loki/index.html | 2 +- .../install-reportportal/index.html | 2 +- operator-guide/install-tekton/index.html | 2 +- operator-guide/install-velero/index.html | 2 +- .../install-via-helmfile/index.html | 2 +- .../jira-gerrit-integration/index.html | 2 +- operator-guide/jira-integration/index.html | 2 +- operator-guide/kaniko-irsa/index.html | 2 +- operator-guide/kibana-ilm-rollover/index.html | 2 +- .../kubernetes-cluster-settings/index.html | 2 +- .../logsight-integration/index.html | 2 +- operator-guide/loki-irsa/index.html | 2 +- .../manage-custom-certificate/index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- .../index.html | 2 +- operator-guide/multitenant-logging/index.html | 2 +- .../namespace-management/index.html | 2 +- operator-guide/nexus-sonatype/index.html | 2 +- .../notification-msteams/index.html | 2 +- operator-guide/oauth2-proxy/index.html | 2 +- .../openshift-cluster-settings/index.html | 2 +- 
operator-guide/overview-devsecops/index.html | 2 +- .../index.html | 2 +- .../overview-multi-tenancy/index.html | 2 +- operator-guide/overview-sast/index.html | 2 +- operator-guide/perf-integration/index.html | 2 +- operator-guide/prerequisites/index.html | 2 +- .../index.html | 2 +- .../reportportal-keycloak/index.html | 2 +- .../restore-edp-with-velero/index.html | 2 +- operator-guide/sast-scaner-semgrep/index.html | 2 +- .../schedule-pods-restart/index.html | 2 +- operator-guide/sonarqube/index.html | 2 +- operator-guide/ssl-automation-okd/index.html | 2 +- operator-guide/tekton-monitoring/index.html | 2 +- operator-guide/tekton-overview/index.html | 2 +- operator-guide/upgrade-edp-2.10/index.html | 2 +- operator-guide/upgrade-edp-2.11/index.html | 2 +- operator-guide/upgrade-edp-2.12/index.html | 2 +- operator-guide/upgrade-edp-2.8/index.html | 2 +- operator-guide/upgrade-edp-2.9/index.html | 2 +- operator-guide/upgrade-edp-3.0/index.html | 2 +- operator-guide/upgrade-edp-3.1/index.html | 2 +- operator-guide/upgrade-edp-3.2/index.html | 2 +- operator-guide/upgrade-edp-3.3/index.html | 2 +- operator-guide/upgrade-edp-3.4/index.html | 2 +- operator-guide/upgrade-edp-3.5/index.html | 2 +- operator-guide/upgrade-edp-3.6/index.html | 2 +- .../upgrade-keycloak-19.0/index.html | 2 +- operator-guide/vcs/index.html | 2 +- operator-guide/velero-irsa/index.html | 2 +- .../waf-tf-configuration/index.html | 2 +- overview/index.html | 2 +- pricing/index.html | 2 +- roadmap/index.html | 2 +- search/search_index.json | 2 +- sitemap.xml | 319 +++++++++--------- sitemap.xml.gz | Bin 1562 -> 1569 bytes supported-versions/index.html | 2 +- use-cases/application-scaffolding/index.html | 2 +- use-cases/autotest-as-quality-gate/index.html | 2 +- use-cases/external-secrets/index.html | 2 +- use-cases/index.html | 2 +- use-cases/tekton-custom-pipelines/index.html | 2 +- user-guide/add-application/index.html | 2 +- user-guide/add-autotest/index.html | 2 +- user-guide/add-cd-pipeline/index.html | 2 +- user-guide/add-cluster/index.html | 2 +- .../add-custom-global-pipeline-lib/index.html | 2 +- user-guide/add-git-server/index.html | 2 +- user-guide/add-infrastructure/index.html | 2 +- user-guide/add-library/index.html | 2 +- user-guide/add-marketplace/index.html | 2 +- user-guide/add-quality-gate/index.html | 2 +- user-guide/application/index.html | 2 +- user-guide/autotest/index.html | 2 +- user-guide/build-pipeline/index.html | 2 +- user-guide/cd-pipeline-details/index.html | 2 +- user-guide/ci-pipeline-details/index.html | 2 +- user-guide/cicd-overview/index.html | 2 +- user-guide/cluster/index.html | 2 +- user-guide/code-review-pipeline/index.html | 2 +- user-guide/components/index.html | 2 +- user-guide/container-stages/index.html | 2 +- user-guide/copy-shared-secrets/index.html | 2 +- user-guide/customize-cd-pipeline/index.html | 2 +- user-guide/customize-ci-pipeline/index.html | 2 +- user-guide/dockerfile-stages/index.html | 2 +- user-guide/ecr-to-docker-stages/index.html | 2 +- user-guide/git-server-overview/index.html | 2 +- user-guide/gitops/index.html | 2 +- user-guide/helm-release-deletion/index.html | 2 +- user-guide/helm-stages/index.html | 2 +- user-guide/index.html | 2 +- user-guide/infrastructure/index.html | 2 +- user-guide/library/index.html | 2 +- user-guide/manage-branches/index.html | 2 +- user-guide/manage-environments/index.html | 2 +- user-guide/marketplace/index.html | 2 +- user-guide/opa-stages/index.html | 2 +- user-guide/pipeline-framework/index.html | 2 +- 
user-guide/pipeline-stages/index.html | 2 +- user-guide/prepare-for-release/index.html | 2 +- user-guide/semi-auto-deploy/index.html | 2 +- user-guide/terraform-stages/index.html | 2 +- 162 files changed, 348 insertions(+), 317 deletions(-) create mode 100644 developer-guide/telemetry/index.html diff --git a/404.html b/404.html index ed0075461..480e36dd3 100644 --- a/404.html +++ b/404.html @@ -1 +1 @@ - EPAM Delivery Platform

404 - Not found

\ No newline at end of file diff --git a/developer-guide/aws-deployment-diagram/index.html b/developer-guide/aws-deployment-diagram/index.html index b26531d16..833d08a41 100644 --- a/developer-guide/aws-deployment-diagram/index.html +++ b/developer-guide/aws-deployment-diagram/index.html @@ -1 +1 @@ - EDP Deployment on AWS - EPAM Delivery Platform

EDP Deployment on AWS⚓︎

This document describes the EPAM Delivery Platform (EDP) deployment architecture on AWS. It utilizes various AWS services such as Amazon Elastic Kubernetes Service (EKS), Amazon EC2, Amazon Route 53, and others to build and deploy software in a repeatable, automated way.

Overview⚓︎

The EDP deployment architecture consists of two AWS accounts: Shared and Explorer. The Shared account hosts shared services, while the Explorer account runs the development team workload and EDP services. Both accounts have an AWS EKS cluster deployed in multiple Availability Zones (AZs). The EKS cluster runs the EDP Services, development team workload, and shared services in the case of the Shared account.

EPAM Delivery Platform Deployment Diagram on AWS
EPAM Delivery Platform Deployment Diagram on AWS

Key Components⚓︎

  1. AWS Elastic Kubernetes Service (EKS): A managed Kubernetes service used to run the EDP Services, development team workload, and shared services. EKS provides easy deployment and management of Kubernetes clusters.
  2. Amazon EC2: Instances running within private subnets that serve as nodes for the EKS cluster. Autoscaling Groups are used to deploy these instances, allowing for scalability based on demand.
  3. Amazon Route 53: A DNS web service that manages external and internal DNS records for the EDP deployment. It enables easy access to resources using user-friendly domain names.
  4. AWS Application Load Balancer (ALB): Used for managing ingress traffic into the EDP deployment. Depending on requirements, ALBs can be configured as internal or external load balancers.
  5. AWS WAF: Web Application Firewall service used to protect external ALBs from common web exploits by filtering malicious requests.
  6. AWS Certificate Manager (ACM): A service that provisions, manages, and deploys SSL/TLS certificates for use with AWS services. ACM is used to manage SSL certificates for secure communication within the EDP deployment.
  7. AWS Elastic Container Registry (ECR): A fully-managed Docker container registry that stores and manages Docker images. ECR provides a secure and scalable solution for storing container images used in the EDP deployment.
  8. AWS Systems Manager Parameter Store: Used to securely store and manage secrets required by various components of the EDP deployment. Parameter Store protects sensitive information such as API keys, database credentials, and other secrets.
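
As an illustration of how these components fit together from a workstation, the cluster and the container registry can be reached with the AWS CLI as sketched below; the cluster name, region, and account ID are placeholders rather than values defined by this document:

  # Configure kubectl for the EKS cluster (placeholder name and region)
  aws eks update-kubeconfig --name edp-cluster --region eu-central-1

  # Authenticate Docker against the ECR registry (placeholder account ID)
  aws ecr get-login-password --region eu-central-1 | \
    docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-central-1.amazonaws.com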

High Availability and Fault Tolerance⚓︎

The EKS cluster is deployed across multiple AZs to ensure high availability and fault tolerance. This allows for automatic failover in case of an AZ outage or instance failure. Autoscaling Groups automatically adjust the number of EC2 instances based on demand, ensuring scalability while maintaining availability.
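
A quick way to confirm the multi-AZ layout from inside the cluster is to list the worker nodes together with the standard topology label (a sketch, assuming kubectl is already configured for the cluster):

  # Show each node and the Availability Zone it runs in
  kubectl get nodes -L topology.kubernetes.io/zone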

Design Considerations⚓︎

Reliability⚓︎

  • Using multiple AZs ensures high availability and fault tolerance for the EKS cluster.
  • Autoscaling Groups enable automatic scaling of EC2 instances based on demand, providing reliability during peak loads.
  • Multiple NAT gateways are deployed in each AZ to ensure reliable outbound internet connectivity.

Performance Efficiency⚓︎

  • Utilizing AWS EKS allows for efficient management of Kubernetes clusters without the need for manual configuration or maintenance.
  • Spot instances can be utilized alongside on-demand instances within the EKS cluster to optimize costs while maintaining performance requirements.
  • Amazon Route 53 enables efficient DNS resolution by managing external and internal DNS records.

Security⚓︎

  • External ALBs are protected using AWS WAF, which filters out malicious traffic and protects against common web exploits.
  • ACM is used to provision SSL/TLS certificates, ensuring secure communication within the EDP deployment.
  • Secrets required by various components are securely stored and managed using the AWS Systems Manager Parameter Store.

Cost Optimization⚓︎

  • Utilizing spot and on-demand instances within the EKS cluster can significantly reduce costs while maintaining performance requirements.
  • Autoscaling Groups allow for automatic scaling of EC2 instances based on demand, ensuring optimal resource utilization and cost efficiency.

Conclusion⚓︎

The EPAM Delivery Platform (EDP) deployment architecture on AWS follows best practices and patterns from the Well-Architected Framework. By leveraging AWS services such as EKS, EC2, Route 53, ALB, WAF, ACM, and Parameter Store, the EDP provides a robust and scalable CI/CD system that enables developers to deploy and manage infrastructure and applications quickly. The architecture ensures high availability, fault tolerance, reliability, performance efficiency, security, and cost optimization for the EDP deployment.

\ No newline at end of file diff --git a/developer-guide/aws-reference-architecture/index.html b/developer-guide/aws-reference-architecture/index.html index 471839230..af22b17a0 100644 --- a/developer-guide/aws-reference-architecture/index.html +++ b/developer-guide/aws-reference-architecture/index.html @@ -1 +1 @@ - EDP Reference Architecture on AWS - EPAM Delivery Platform

EDP Reference Architecture on AWS⚓︎

The reference architecture of the EPAM Delivery Platform (EDP) on AWS is designed to provide a robust and scalable CI/CD system for developing and deploying software in a repeatable and automated manner. The architecture leverages AWS Managed Services to enable developers to quickly deploy and manage infrastructure and applications. EDP recommends following the best practices and patterns from the Well-Architected Framework, the AWS Architecture Center, and the EKS Best Practices Guide.

Architecture Details⚓︎

The AWS Cloud comprises three accounts: Production, Shared, and Development.

Note

AWS Account management is out of scope for this document.

Each account serves specific purposes:

  • The Production account is used to host production workloads. The Production account serves as the final destination for deploying business applications. It maintains a separate ECR registry to store Docker images for production-level applications. The environment is designed to be highly resilient and scalable, leveraging the EPAM Delivery Platform's CI/CD pipeline to ensure consistent and automated deployments. With proper access control and separation from development environments, the Production account provides a stable and secure environment for running mission-critical applications.
  • The Development account is dedicated to development workload and lower environments. This account hosts the EDP itself, running on AWS EKS. It provides developers an isolated environment to build, test, and deploy their applications in lower environments, ensuring separation from production workloads. Developers can connect to the AWS Cloud using a VPN, enforcing secure access.
  • The Shared account holds shared services that are accessible to all accounts within the organization. These services include SonarQube, Nexus, and Keycloak, which are deployed in Kubernetes Clusters managed by AWS Elastic Kubernetes Service (EKS). The shared services leverage AWS RDS, AWS EFS, and AWS ALB/NLB. The deployment of the shared services is automated using the Kubernetes cluster-addons approach with GitOps and Argo CD.

EPAM Delivery Platform Reference Architecture on AWS
EPAM Delivery Platform Reference Architecture on AWS

Infrastructure as Code⚓︎

Infrastructure as Code (IaC) is a key principle in the EPAM Delivery Platform architecture. Terraform is the IaC tool to provision and manage all services in each account. AWS S3 and AWS DynamoDB serve as the backend for Terraform state, ensuring consistency and reliability in the deployment process. This approach enables the architecture to be version-controlled and allows for easy replication and reproducibility of environments.
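
A minimal sketch of initializing Terraform against such a backend is shown below; the bucket, key, and lock table names are illustrative assumptions, not values defined by this document:

  # Initialize Terraform with an S3 state backend and DynamoDB state locking
  terraform init \
    -backend-config="bucket=edp-terraform-state" \
    -backend-config="key=shared/terraform.tfstate" \
    -backend-config="region=eu-central-1" \
    -backend-config="dynamodb_table=edp-terraform-lock"

  # Review and apply the changes
  terraform plan
  terraform apply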

Docker Registry⚓︎

The architecture utilizes AWS Elastic Container Registry (ECR) as a Docker Registry for container image management. ECR offers a secure, scalable, and reliable solution for storing and managing container images. It integrates seamlessly with other AWS services and provides a highly available and durable storage solution for containers in the CI/CD pipeline.

IAM Roles for Service Accounts (IRSA)⚓︎

The EPAM Delivery Platform implements IAM Roles for Service Accounts (IRSA) to provide secure access to AWS services from Kubernetes Clusters. This feature enables fine-grained access control with individual Kubernetes pods assuming specific IAM roles for authenticated access to AWS resources. IRSA eliminates the need for managing and distributing access keys within the cluster, significantly enhancing security and reducing operational complexity.
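
In practice, IRSA comes down to annotating a Kubernetes service account with the IAM role it may assume; the namespace, service account, and role ARN below are placeholders for illustration only:

  # Bind an IAM role to a service account so its pods can call AWS APIs
  kubectl annotate serviceaccount edp-builder \
    eks.amazonaws.com/role-arn=arn:aws:iam::123456789012:role/edp-builder-role \
    --namespace edp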

SSL Certificates⚓︎

The architecture uses AWS Certificate Manager (ACM) to provide SSL certificates that secure communication between services. ACM eliminates the need to manually manage SSL/TLS certificates, automating the renewal and deployment process. The EDP ensures secure and encrypted traffic within its environment by leveraging ACM.

AWS WAF⚓︎

The architecture's external Application Load Balancer (ALB) endpoint is protected by the AWS Web Application Firewall (WAF). WAF protects against common web exploits and ensures the security and availability of the applications hosted within the EDP. It offers regular rule updates and easy integration with other AWS services.

Parameter Store and Secrets Manager⚓︎

The architecture leverages the AWS Systems Manager Parameter Store and Secrets Manager to securely store and manage all secrets and parameters utilized within the EKS clusters. Parameter Store holds general configuration information, such as database connection strings and API keys, while Secrets Manager securely stores sensitive information, such as passwords and access tokens. By centralizing secrets management, the architecture ensures proper access control and reduces the risk of unauthorized access.
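
For example, a pipeline step could read a non-sensitive parameter and a secret with the AWS CLI as sketched below; the parameter and secret names are hypothetical:

  # Read a configuration value from Parameter Store
  aws ssm get-parameter --name /edp/database/connection-string \
    --query 'Parameter.Value' --output text

  # Read a sensitive value from Secrets Manager
  aws secretsmanager get-secret-value --secret-id edp/database/password \
    --query 'SecretString' --output text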

Summary⚓︎

The reference architecture of the EPAM Delivery Platform on AWS provides a comprehensive and scalable environment for building and deploying software applications. With a strong focus on automation, security, and best practices, this architecture enables developers to leverage the full potential of AWS services while following industry-standard DevOps practices.

\ No newline at end of file diff --git a/developer-guide/edp-workflow/index.html b/developer-guide/edp-workflow/index.html index b1d73ec3f..45ce9172e 100644 --- a/developer-guide/edp-workflow/index.html +++ b/developer-guide/edp-workflow/index.html @@ -1,7 +1,7 @@ - EDP Project Rules. Working Process - EPAM Delivery Platform

EDP Project Rules. Working Process⚓︎

This page contains the details on the project rules and working process for the EDP team and contributors. Explore the main points about working with Gerrit and following the main commit flow, as well as the details about commit types and messages, below.

Project Rules⚓︎

Before starting the development, please check the project rules:

  1. It is highly recommended to become familiar with the Gerrit flow. For details, please refer to the Gerrit official documentation and pay attention to the main points:

    a. Voting in Gerrit.

    b. Resolution of Merge Conflict.

    c. Comments resolution.

    d. One Jira task should have one Merge Request (MR). If there are many changes within one MR, add the next patch set to the open MR by selecting the Amend commit check box (a command-line sketch of this flow is shown after this list).

  2. Only the Assignee is responsible for the MR merge and Jira task status.

  3. Every MR should be merged in a timely manner.

  4. Log time to the Jira ticket.
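
The command-line equivalent of the Amend commit flow from rule 1 usually looks like the sketch below; the remote and branch names are assumptions and depend on the repository configuration:

  # Add a new patch set to the open Gerrit change
  git add .
  git commit --amend --no-edit
  git push origin HEAD:refs/for/master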

Working Process⚓︎

With EDP, the main workflow is based on getting a Jira task and creating a Merge Request according to the rules described below.

Workflow

Get Jira task → implement and verify the results yourself → create Merge Request (MR) → send for review → resolve comments/add changes, ask colleagues for the final review → track the MR merge → verify the results yourself → change the status in the Jira ticket to CODE COMPLETE or RESOLVED → share the necessary links with a QA specialist in the QA Verification channel → the QA specialist closes the Jira task after verification → the Jira task should be CLOSED.

Commit Flow

  1. Get a task in the Jira/GitHub dashboard. Please be aware of the following points:

    a. Every task has a reporter who can provide more details in case something is not clear.

    b. The responsible person for the task and code implementation is the assignee who tracks the following:

    • Actual Jira task status.
    • Time logging.
    • Add comments, attach necessary files.
    • In comments, add a link that refers to the merged MR (optional, if not related to many repositories).
    • Code review and the final merge.
    • MS Teams chats - ping other colleagues, answer questions, etc.
    • Verification by a QA specialist.
    • Bug fixing.

    c. Pay attention to the task Status, which differs between entities; the workflow will help you see the whole task processing:

    View Jira workflow
    View Jira workflow

    d. There are several entities that are used on the EDP project: Story, Improvement, Task, Bug.

    a. Every task has a reporter who can provide more details in case something is not clear.

    b. The responsible person for the task and code implementation is the assignee who tracks the following:

    • Actual GitHub task status.
    • Add comments, attach necessary files.
    • In comments, add a link that refers to the merged MR (optional, if not related to many repositories).
    • Code review and the final merge.
    • MS Teams chats - ping other colleagues, answer questions, etc.
    • Verification by a QA specialist.
    • Bug fixing.

    c. If you create the task yourself, make sure it is populated completely. See an example below:

    GitHub issue
    GitHub issue

  2. Implement the feature, improvement, or fix, and check the results on your own. If it is impossible to check the results of your work before the merge, verify everything later.

  3. Create a Merge Request; for details, please refer to the Code Review Process.

  4. When committing, use the pattern: commit type: Commit message (#GitHub ticket number).

    a. commit type:

    feat: (new feature for the user, not a new feature for build script)

    fix: (bug fix for the user, not a fix to a build script)

    docs: (changes to the documentation)

    style: (formatting, missing semicolons, etc.; no production code change)

    refactor: (refactoring production code, e.g. renaming a variable)

    test: (adding missing tests, refactoring tests; no production code change)

    chore: (updating grunt tasks etc.; no production code change)

    !: (added to other commit types to mark breaking changes) For example:

    feat!: Add ingress links column into Applications table on stage page (#77)
       
       BREAKING CHANGE: Ingress links column has been added into the Applications table on the stage details page
       

      b. Commit message:

      • brief, for example:

        fix: Remove secretKey duplication from registry secrets (#63)

        or

      • descriptive, for example:

        feat: Provide the ability to configure hadolint check (#88)
         
        * Add configuration files .hadolint.yaml and .hadolint.yml to stash

        Note

        It is mandatory to start a commit message from a capital letter.

      c. GitHub tickets are typically identified using a number preceded by the # sign and enclosed in parentheses.

    Note

    Make sure there is a descriptive commit message for a breaking change Merge Request. For example:

    feat!: Add ingress links column into Applications table on stage page (#77)

    BREAKING CHANGE: Ingress links column has been added into the Applications table on the stage details page

    Note

    If a Merge Request contains both new functionality and breaking changes, make sure the functionality description is placed before the breaking changes. For example:

    feat!: Update Gerrit to improve access

    • Implement Developers group creation process
    • Align group permissions

    BREAKING CHANGES: Update Gerrit config according to groups
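
    Putting the pattern together, such a breaking-change commit could be created from the command line as follows; the commit message text is taken from the example above:

      git commit \
        -m "feat!: Update Gerrit to improve access" \
        -m "BREAKING CHANGES: Update Gerrit config according to groups"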

\ No newline at end of file diff --git a/developer-guide/index.html b/developer-guide/index.html index e94d01612..418e5de98 100644 --- a/developer-guide/index.html +++ b/developer-guide/index.html @@ -1 +1 @@ - Overview - EPAM Delivery Platform

Overview⚓︎

The EPAM Delivery Platform (EDP) Developer Guide serves as a comprehensive technical resource specifically designed for developers. It offers detailed insights into expanding the functionalities of EDP. This section focuses on explaining the development approach and fundamental architectural blueprints that form the basis of the platform's ecosystem.

Within these pages, you'll find architectural diagrams, component schemas, and deployment strategies essential for grasping the structural elements of EDP. These technical illustrations serve as references, providing a detailed understanding of component interactions and deployment methodologies. Understanding the architecture of EDP and integrating third-party solutions into its established framework enables the creation of efficient, scalable, and customizable solutions within the EPAM Delivery Platform.

\ No newline at end of file diff --git a/developer-guide/kubernetes-deployment/index.html b/developer-guide/kubernetes-deployment/index.html index ed719f9df..3fb01c5ba 100644 --- a/developer-guide/kubernetes-deployment/index.html +++ b/developer-guide/kubernetes-deployment/index.html @@ -1 +1 @@ - Kubernetes Deployment - EPAM Delivery Platform

Kubernetes Deployment⚓︎

This section provides a comprehensive overview of the EDP deployment approach on a Kubernetes cluster. EDP is designed and functions based on a set of key guiding principles:

  • Operator Pattern Approach: The operator pattern is used for deployment and configuration, ensuring that the platform aligns with Kubernetes-native methodologies (see the schema below).
  • Loose Coupling: EDP comprises several loosely coupled operators responsible for different parts of the platform. These operators can be deployed independently, enabling the most straightforward platform customization and delivery approach.

    Kubernetes Operator
    Kubernetes Operator

The following deployment diagram illustrates the platform's core components, which provide the minimum functional capabilities required for the platform operation: build, push, deploy, and run applications. The platform relies on several mandatory dependencies:

  • Ingress: An ingress controller responsible for routing traffic to the platform.
  • Tekton Stack: Includes Tekton pipelines, triggers, dashboard, chains, etc.
  • ArgoCD: Responsible for GitOps deployment.
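
Before installing the platform, the presence of these mandatory dependencies can be verified with standard tooling; the namespaces below are the typical installation defaults and may differ in a customized cluster:

  # Confirm an ingress controller class is registered
  kubectl get ingressclass

  # Confirm the Tekton and Argo CD control planes are running
  kubectl get pods -n tekton-pipelines
  kubectl get pods -n argocd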

EPAM Delivery Platform Deployment Diagram
EPAM Delivery Platform Deployment Diagram

  • Codebase Operator: Responsible for managing git repositories, versioning, and branching. It also implements the Jira integration controller.
  • CD Pipeline Operator: Manages Continuous Delivery (CD) pipelines and CD stages (a CD stage is an abstraction of a Kubernetes Namespace). The operator acts as the bridge between the artifacts and the deployment tools, like Argo CD. It defines the CD pipeline structure and artifact promotion logic, and triggers the pipeline execution.
  • Tekton Pipelines: Manages Tekton pipelines and processes events (EventListener, Interceptor) from Version Control Systems. The pipelines are integrated with external tools like SonarQube, Nexus, etc.
  • EDP Portal: This is the User Interface (UI) component, built on top of Headlamp.
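
Because each operator extends the Kubernetes API with its own custom resources, a running installation can be inspected with standard commands; this sketch assumes the API groups of the EDP custom resources contain the platform name:

  # List the custom resource types registered by the EDP operators
  kubectl api-resources | grep -i edp

  # Show the corresponding CustomResourceDefinitions
  kubectl get crd | grep -i edp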

Business applications are deployed on the platform using the CD Pipeline Operator and Argo CD. By default, the CD Pipeline Operator uses Argo CD as a deployment tool. However, it can be replaced with any other tool, like FluxCD, Spinnaker, etc. The target environment for the application deployment is a Kubernetes cluster where EDP is deployed, but it can be any other Kubernetes cluster.

\ No newline at end of file diff --git a/developer-guide/local-development/index.html b/developer-guide/local-development/index.html index 0e70448ce..a427ce0d8 100644 --- a/developer-guide/local-development/index.html +++ b/developer-guide/local-development/index.html @@ -1,4 +1,4 @@ - Operator Development - EPAM Delivery Platform

Operator Development⚓︎

This page is intended for developers and shares the details on how to set up the local environment and start coding in Go for the EPAM Delivery Platform.

Prerequisites⚓︎

  • Git is installed;
  • One of our repositories where you would like to contribute is cloned locally;
  • Local Kubernetes cluster (Kind is recommended) is installed;
  • Helm is installed;
  • Any IDE (GoLand is used here as an example) is installed;
  • GoLang stable version is installed.

Note

Make sure the GOPATH and GOROOT environment variables are added to PATH.
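
The prerequisites and environment variables can be verified from a terminal; a short sketch of the usual checks:

  go version            # Go toolchain
  go env GOPATH GOROOT  # Go environment variables
  kind version          # local Kubernetes cluster tooling
  helm version
  git --version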

Environment Setup⚓︎

Set up your environment by following the steps below.

Set Up Your IDE⚓︎

We recommend using GoLand and enabling the Kubernetes plugin. Before installing plugins, make sure to save your work because the IDE may require a restart.

Set Up Your Operator⚓︎

To set up the cloned operator, follow the three steps below:

  1. Configure the Go Build option. Open the folder in GoLand, click the add_config_button button, and select the Go Build option:

    Add configuration
    Add configuration

  2. Fill in the variables in the Configuration tab:

    • In the Files field, indicate the path to the main.go file;
    • In the Working directory field, indicate the path to the operator;
    • In the Environment field, specify the namespace to watch by setting the WATCH_NAMESPACE variable. It should equal default, but it can be any other value if required by the cluster specification.
    • In the Environment field, also specify the platform type by setting PLATFORM_TYPE. It should equal either kubernetes or openshift.

    Build config
    Build config

  3. Check cluster connectivity and variables. Local development implies working within local Kubernetes clusters. Kind (Kubernetes in Docker) is recommended, so set up this or another environment before running the code.
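
    For example, a local Kind cluster and the variables from step 2 can be prepared as follows; the cluster name is illustrative:

    # Create a local cluster and verify connectivity
    kind create cluster --name edp-dev
    kubectl cluster-info --context kind-edp-dev

    # Variables the operator reads at startup (see step 2)
    export WATCH_NAMESPACE=default
    export PLATFORM_TYPE=kubernetes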

Pre-commit Activities⚓︎

Before making a commit and sending a pull request, take precautionary measures to avoid breaking other parts of the code.

Testing and Linting⚓︎

Testing and linting must be run before every single commit, with no exceptions. The instructions for the commands below are written here.

It is mandatory to run test and lint to make sure the code passes the tests and meets acceptance criteria. Most operators are covered by tests so just run them by issuing the commands "make test" and "make lint":

  make test

The command "make test" should give the output similar to the following:

Tests directory for one of the operators
"make test" command

  make lint
 

The command "make lint" should give the output similar to the following:

Tests directory for one of the operators
"make lint" command

Observe Auto-Generated Docs, API and Manifests⚓︎

The commands below are especially essential when making changes to the API. The code is not acceptable if these commands fail.

  • Generate documentation in the .MD file format so the developer can read it:

    make api-docs
     

    The command "make api-docs" should give the output similar to the following:

"make api-docs" command with the file contents
"make api-docs" command with the file contents