Terraform solution accompanying F5 DevCentral article Bolt-on Auth with NGINX and F5 Distributed Cloud.
This solution makes modest use of automation for the deployment. However, there are several preparatory steps that need to occur manually. The tools required are:
- Docker Desktop
- Terraform
- A GitHub account
- An Azure account (we will be using Microsoft Entra ID (formerly Azure Active Directory) as our OIDC Identity Provider)
- Microsoft Entra ID configured with user accounts
- Git
- An F5 Distributed Cloud account
To enable OpenID Connect-based single sign-on, we will use NGINX Plus. We will create a container image of NGINX Plus, complete with the auth_jwt and njs modules. The following is a high-level overview of how this solution works:
We will make use of a GitHub repository containing the Dockerfile we will use to build the container image for this solution, as it has been pre-hardened to run in an unprivileged mode in vk8s.
Since NGINX Plus is commercial software, we will need to host the image we build in a private container registry. In this walkthrough, I will be using the GitHub Container Registry (ghcr). Here are the steps:
1. Clone the repository:

   ```
   git clone https://github.com/f5devcentral/nginx-unprivileged-f5xc
   ```
2. If you already have an NGINX Plus license, download the crt and key files from the MyF5 portal. New to NGINX Plus? You can request a free 30-day trial, which includes a crt and key.
3. Copy the nginx-repo.crt and nginx-repo.key files to the root of the cloned repo. Do not commit these files to Git!
4. Run the Docker build command:

   ```
   export DOCKER_DEFAULT_PLATFORM=linux/amd64
   sudo DOCKER_BUILDKIT=1 docker build --no-cache --secret id=nginx-key,src=nginx-repo.key --secret id=nginx-crt,src=nginx-repo.crt -t nginx-oidc .
   ```
5. If the image build was successful, log in to ghcr using a developer token, tag the image, and push it to the registry:

   ```
   export GITHUB_USER=<your GitHub account>
   export GITHUB_TOKEN=<your GitHub developer token>
   echo $GITHUB_TOKEN | docker login ghcr.io -u $GITHUB_USER --password-stdin
   docker tag nginx-oidc ghcr.io/$GITHUB_USER/nginx-oidc
   docker push ghcr.io/$GITHUB_USER/nginx-oidc
   ```
6. In your browser, navigate to https://github.com/<your-github-account>?tab=packages to ensure that the image has been published. Verify that the image is private!
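As an optional sanity check from the CLI, you can confirm the pushed image is retrievable while you are still logged in to ghcr (this assumes the default "latest" tag produced by the tag and push commands above):

```
docker pull ghcr.io/$GITHUB_USER/nginx-oidc:latest
```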
In this solution, we will be leveraging the authorization code flow, as depicted in the following diagram:
To do this, we need to configure Microsoft Entra ID to assume the role of Identity Provider (IDP).
1. Log into the Azure Portal and select "Microsoft Entra ID":
2. Click "App registrations", then "New registration":
3. Give the application a name. This is the name that will appear in the Microsoft authorization request dialog when we test the solution later on. Select "Web" for the Redirect URI type, and specify a redirect URI for this application. The URI must exactly match the hostname and port of the F5 Distributed Cloud HTTP Load Balancer that will be created when we use Terraform to deploy in a later step. The format needs to be:

   ```
   https://your-fqdn-here:443/_codexch
   ```
4. Click "Register".
5. Click "Certificates & secrets", then "New client secret":
6. In the resulting dialog, enter any value for the description and choose an appropriate expiration date. Click "Add":
7. The new client secret is created. Copy the secret's "Value" and keep it somewhere safe for a later step. Do this now - you will not be able to retrieve it later.
8. Click "Overview". Locate and copy the Application (client) ID and Directory (tenant) ID values for use in a later step:
Earlier I mentioned that we would be configuring the cluster and deploying the application and NGINX Plus proxy. We will use Terraform to do this, with the help of another GitHub repository.
1. In the F5 Distributed Cloud console, create a new application namespace. Application namespaces provide logical separation between applications for enhanced security. This new namespace will be used for the Sentence app and the NGINX Plus OIDC auth service. For example, "demo-nginx-auth".
2. In the F5 Distributed Cloud console, create a Service Credential that has the admin role for this newly created namespace.
3. Clone this repository:

   ```
   git clone https://github.com/f5devcentral/terraform-nginx-auth.git
   ```
4. Open the cloned repository in an editor such as VS Code. We will be editing multiple files there.
5. Make a copy of the terraform.tfvars.example file and name it terraform.tfvars.
6. In your terraform.tfvars file, make the following updates:
   - Update lines 1-2 with your F5 Distributed Cloud tenant information. Both values can be found in the Tenant ID field of the Distributed Cloud console's Administration -> Tenant Overview page.
     Example: Tenant ID: <tenant>-<tenant_suffix>
   - Update line 3 with the namespace name that you created in step 1.
   - Update lines 5-8 with the GitHub container registry information from the "Building the NGINX Plus Container Image" section of this guide.
   - Update lines 10 and 13. Change "example.com" to a custom domain name you own that has been delegated to F5 Distributed Cloud. See Domain Delegation for details on how to set this up in your tenant.
     NOTE: The value on line 13 must match the fqdn you used in step 3 of the "Create the Azure OIDC Identity Provider" section of this guide. If you need to, you can change the URL you set in that step using the Azure Portal.
   - By default, the Sentence app will deploy to a Kubernetes cluster in the Dallas, Texas (U.S.) region, and the NGINX Plus OIDC auth service to a Kubernetes cluster in Seattle, Washington (U.S.). If you would like to change these, update lines 11 and 14 in your terraform.tfvars using values from this list.
7. To secure the Sentence app and API, NGINX Plus needs to be configured with the variables required in the OIDC authorization flow. For details about this part of the solution, see the nginx-openid-connect repo. We will use Terraform variables to inject the secret values into the NGINX Plus config that will be installed in Kubernetes (an illustrative snippet showing where these values end up appears after this list).
   NOTE: You will be populating the file below with sensitive information. Do not commit this file to a public repository. Ideally, these values should be stored in a secure enclave to prevent unintended disclosure.
8. In your terraform.tfvars file, make the following updates:
   - On line 16, replace <Azure Tenant (directory) ID> with the value from Step 8 in the "Create the Azure OIDC Identity Provider" section earlier in this guide.
   - On line 17, replace <Azure Application (client) ID> with the value from Step 8 in the "Create the Azure OIDC Identity Provider" section earlier in this guide.
   - On line 18, replace <Azure Client Secret Value> with the value from Step 7 in the "Create the Azure OIDC Identity Provider" section earlier in this guide.
   - Optionally, for extra security, you can update the random phrase on line 19 that is used as the OIDC HMAC key, ensuring the nonce values used in the authorization flow are unpredictable.
9. Save the file.
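To give a sense of where these terraform.tfvars values end up, here is an illustrative sketch of the kind of NGINX Plus OIDC configuration they populate. The variable names follow the nginx-openid-connect reference implementation and the endpoints shown are the standard Microsoft Entra ID v2.0 endpoints; the Terraform in this solution templates the actual config for you, so treat this purely as orientation:

```
# Illustration only - the solution's Terraform injects these values into the
# NGINX Plus config; variable names follow the nginx-openid-connect reference.
map $host $oidc_authz_endpoint {
    default "https://login.microsoftonline.com/<Azure Tenant (directory) ID>/oauth2/v2.0/authorize";
}
map $host $oidc_token_endpoint {
    default "https://login.microsoftonline.com/<Azure Tenant (directory) ID>/oauth2/v2.0/token";
}
map $host $oidc_jwt_keyfile {
    default "https://login.microsoftonline.com/<Azure Tenant (directory) ID>/discovery/v2.0/keys";
}
map $host $oidc_client {
    default "<Azure Application (client) ID>";
}
map $host $oidc_client_secret {
    default "<Azure Client Secret Value>";
}
map $host $oidc_hmac_key {
    default "<random phrase from line 19>";
}
```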
To programmatically interact with F5 Distributed Cloud, you must create an API certificate in PKCS#12 (.p12) format.
1. Follow the procedure to create an API Certificate for the Service Credential you created earlier.
2. Download the API certificate's .p12 file.
3. In the shell you will execute Terraform in, set the following environment variables:

   ```
   export VES_P12_PASSWORD=<your .p12 file password>
   export VOLT_API_P12_FILE=<full path to your .p12 file>
   ```
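If you want an optional sanity check that the certificate file and password are correct before running Terraform, you can inspect the bundle with openssl (on OpenSSL 3.x you may need to add the -legacy flag if the bundle uses older PKCS#12 encryption):

```
openssl pkcs12 -in "$VOLT_API_P12_FILE" -passin env:VES_P12_PASSWORD -info -nokeys
```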
We will run Terraform plan and apply in two phases: one to create the F5 Distributed Cloud vk8s clusters and return their respective kubeconfig files, and a second to install the actual resources to the Kubernetes clusters.
1. In the same shell as above, run Terraform init to download the providers needed for the solution and prepare for the next steps:

   ```
   terraform init
   ```
2. Run Terraform plan to see a list of all the resources that will be created in the cluster-creation phase:

   ```
   terraform plan -target=module.xc-re-vk8s-kubeconfig
   ```
3. Scroll through the plan output and note the resources that will be created:
   - A Virtual Kubernetes cluster, deployed in 2 different regions
   - A kubeconfig file used to deploy the Sentence app and NGINX Plus OIDC manifests to the newly created clusters
4. If you are satisfied with the list of resources to be created and there were no errors reported by the previous step, run Terraform apply:

   ```
   terraform apply -target=module.xc-re-vk8s-kubeconfig
   ```

   NOTE: When prompted to confirm the apply step, type "yes".
5. Now that the clusters have been provisioned, run a Terraform plan to review the remainder of the solution to be deployed:

   ```
   terraform plan
   ```
6. Scroll through the plan output and note the resources that will be created:
   - 2 load balancer and origin pool pairs
   - Kubernetes manifests for deploying all the needed resources into the clusters (ConfigMaps, Secrets, Deployments, Services)

   NOTE: Terraform will report the existence of the items that were created in step 4. Not to worry - Terraform is already aware of these objects and will not unnecessarily re-create them.
7. If you are satisfied with the list of resources to be created and there were no errors reported by the previous step, run Terraform apply:

   ```
   terraform apply
   ```

   NOTE: When prompted to confirm the apply step, type "yes".
8. If there are no errors, the entire solution is now deployed.

Now, test the solution:
1. In your browser, navigate to the NGINX Plus OIDC auth site. Hint: this is the host you entered into the terraform.tfvars file, line 13.
2. You should be redirected to the Microsoft login screens, which request your authorization for the application to use your ID to authenticate to the Sentence app.
3. Once authenticated, you should be allowed to access the Sentence app:
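You can also confirm the OIDC redirect from the command line; an unauthenticated request to the protected host should return a 302 response whose Location header points at login.microsoftonline.com (replace the hostname with the value from line 13 of your terraform.tfvars):

```
curl -sI https://your-fqdn-here/ | grep -i -e '^HTTP' -e '^location'
```

Finally, verify the deployment in the F5 Distributed Cloud console: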
1. Log into the F5 Distributed Cloud console.
2. Navigate to "Distributed Apps".
3. In the top-left menu, select the namespace you created earlier.
4. Click "Applications" -> "Virtual K8s".
5. Note that a cluster has been created and is showing a "Ready" status:
6. Click on the cluster link. Here you can examine the Services, Deployment objects, and Pods that were deployed to the cluster.
7. Click "Load Balancers" -> "HTTP Load Balancers". Click on each of the Load Balancers that were created and examine the metrics available. Note that the deployed applications are healthy.
A benefit of using Terraform is that removing all the resources it created in F5 Distributed Cloud is just as easy as creating them. For this, run terraform destroy and type "yes" when prompted:

```
terraform destroy
```
NOTE: Remember to delete the API Credential and namespace you created in the F5 Distributed Cloud console if you no longer need them.