Microservice and API-layer design patterns are among the most essential patterns any product can adopt to evolve toward a more robust design. It is interesting to look at the different approaches, from plain REST to aggregated APIs with GraphQL as a backend-for-frontend (BFF), to serverless, cloud, and messaging.
Serverless architecture is a current trend in the software industry that allows you to run applications on cost-effective, ephemeral services. With GraphQL, you can reduce multiple service calls to a single endpoint. Using these two technologies together, we can build a cost-effective and efficient API.
In this article series, I will guide you through how to integrate a GraphQL API with Azure Functions (Part I), and how to secure this API using OAuth2 authentication (Part II).
Below is the high-level diagram of what we are going to cover in this article,
I presume you have already worked with Azure technologies and GraphQL, so I will not go into a detailed explanation of every term in each section.
Prerequisites
Basic knowledge of Azure technologies, Azure CLI commands and GraphQL.
We have the option to write an Azure Function in different languages such as C#, Node.js, Python etc. I will use Node.js in this article.
First, let’s create our Azure Functions project. Open your console and run the commands below:
func init graphql-functions --worker-runtime node
cd graphql-functions
Then, run the following command to generate an HTTP Trigger template:
func new --template "Http Trigger" --name graphql
This will generate two pre-built files inside the “graphql” folder: function.json and index.js. We’ll be writing our business logic inside index.js.
Now open the function project using VS Code or your favorite IDE and edit function.json. Here we need to change the name of the HTTP output binding to "$return", as follows:
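After the edit, function.json should look something like this — a sketch based on the default HTTP Trigger template, so your generated authLevel and methods may differ slightly:

```json
{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get", "post"]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
```

The "$return" name tells the Functions runtime to use the handler's return value as the HTTP response, which is what the Apollo integration expects.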
Let’s install Apollo server integration for Azure Functions using npm:
npm install --save apollo-server-azure-functions
Now open the “graphql/index.js” file. We are going to write our logic in this file. First, let’s write a simple GraphQL schema with a “Hello World” response.
Replace the existing code with the following:
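A minimal sketch of what index.js can contain — the schema and greeting text are placeholders, and the wiring follows the apollo-server-azure-functions pattern of exporting a generated handler:

```javascript
// graphql/index.js — minimal Apollo server exposed as an Azure Functions handler.
const { ApolloServer, gql } = require('apollo-server-azure-functions');

// A schema with a single query returning a string.
const typeDefs = gql`
  type Query {
    hello: String
  }
`;

// Resolver returning a static "Hello World" response.
const resolvers = {
  Query: {
    hello: () => 'Hello World!',
  },
};

const server = new ApolloServer({ typeDefs, resolvers });

// Export the handler that the Azure Functions runtime will invoke.
module.exports = server.createHandler();
```

Then start the function locally: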
func host start
You should see similar output in your console.
Copy the GraphQL endpoint URL from the console output and paste it into your browser. You should see the GraphQL Playground, as follows:
Alright, now we have integrated the Azure Function with GraphQL and can start writing our actual business logic. For this article, I will be using Cosmos DB.
Set up Cosmos DB in Azure
I have already set up a simple Cosmos DB in my Azure subscription.
NOTE: It is better to keep all your resources (Cosmos DB, Function App) in one resource group, so it’s easier to do the cleanup at the end.
The following is the structure of a document. You may add some dummy records with this structure to your Cosmos container.
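The document itself isn't shown in this version of the article; an assumed shape, matching the user fields (id, first name, last name, email) described later, would be:

```json
{
  "id": "1",
  "firstName": "John",
  "lastName": "Doe",
  "email": "john.doe@example.com"
}
```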
Note: Since this is just a demo, I am adding the connection string in the code. The recommended way is to store the connection string in a config file or in Key Vault.
Now, let’s create the connection to our Cosmos DB.
const client = new CosmosClient(connectionString);
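A fuller sketch of that wiring with the @azure/cosmos SDK — the connection string placeholder and the database/container names are assumptions, so substitute your own:

```javascript
const { CosmosClient } = require('@azure/cosmos');

// Demo only — see the note above about using a config file or Key Vault instead.
const connectionString = '<your-cosmos-connection-string>';

const client = new CosmosClient(connectionString);
const container = client.database('users-db').container('users');
```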
Then replace the existing GraphQL schema with the following schema:
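A sketch of such a schema in SDL — Apollo Server accepts a plain string (or a gql-tagged template), and the field names are assumed from the document structure above:

```javascript
// User type mirroring the Cosmos document, plus two queries:
// one to fetch a user by id, and one to list all users.
const typeDefs = `
  type User {
    id: ID
    firstName: String
    lastName: String
    email: String
  }

  type Query {
    user(id: ID!): User
    users: [User]
  }
`;

module.exports = typeDefs;
```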
As you can see, we have defined our User schema with id, first name, last name and email, which is similar to our Cosmos document structure. We have also introduced two queries: one to get a user by ID, and another to retrieve all the users in our DB.
Then, let’s replace our resolver method with the following:
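A sketch of what those resolvers might look like. To keep the query logic testable, the Cosmos container is injected through a small factory; the SQL and the container wiring are assumptions, not the article's exact code:

```javascript
// Factory so the resolvers can be wired to any Cosmos-like container.
function makeResolvers(container) {
  return {
    Query: {
      // Fetch a single user by id with a parameterized query.
      user: async (_, { id }) => {
        const querySpec = {
          query: 'SELECT * FROM c WHERE c.id = @id',
          parameters: [{ name: '@id', value: id }],
        };
        const { resources } = await container.items.query(querySpec).fetchAll();
        return resources[0];
      },
      // Return every user document in the container.
      users: async () => {
        const { resources } = await container.items
          .query('SELECT * FROM c')
          .fetchAll();
        return resources;
      },
    },
  };
}

module.exports = makeResolvers;
```

In index.js you would then build the resolvers from the real container, e.g. `const resolvers = makeResolvers(client.database('users-db').container('users'));`, and pass typeDefs and resolvers to ApolloServer as before.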
Following on from my previous blog post covering SSL Termination and NGINX, in this post we will expand our deployment to also include user authentication of a new web app.
As with every article in this series this has been driven by customer use cases. In this instance the customer desired having a development web application on a public domain but ring fenced to only allow access to authenticated users from Azure AD.
Before we dive deeper into the use case and implementation, it's important to understand the various components if you are unfamiliar with them. I will speak about the benefits of certain technologies as I go, but it is worth taking a quick look at these links as a level set if you need it:
As mentioned, the customer was looking to add authentication to their development applications. There are many other reasons you may want to add user authentication to your application, for example any application that wants to serve differing content or features to users based on an associated property. More information on scopes and permissions in the Microsoft Identity Platform can be found here.
The customer in this instance wanted their production code base and development code base to be as similar as possible, so using the Microsoft Authentication Library was not an option.
This got me thinking as to the developer overhead that goes into implementing authentication at the application level. I started speaking to some developers I know and found a common theme summarized eloquently by one developer who stated "I just want to spend time developing value add features, why the **** do I have to care about authentication".
Although extreme, it is a fair point. Authentication, in my opinion, is an infrastructure and security concern. Shifting your authentication outside of your application to middleware has two clear benefits:
Developer Overhead - Developers can spend less time concerned about implementing authentication and more time working on features that add value to your business and customers.
Application Workload - Offloading the authentication to dedicated middleware decreases the processing required by the compute that is hosting your application.
That being said, this isn't suitable for every application or every business. It could still be the case that developers remain responsible for authentication once it is shifted outside of the application, in which event it can actually increase developer workload.
This blog post details the design decisions and implementation for a deployment using an OAuth2 reverse proxy to handle user authentication for your microservices. I currently have this running on https://www.owain.online/.
I have even setup a test user so feel free to click on the link and login with the following credentials to try out the authentication.
Email address: testuser@owain.online
Password: Th4nk54th3bL0g
DAPR
It is worth noting that OAuth2 Proxy was not the first authentication middleware I worked with when creating this demo and blog. Initially I used DAPR's OAuth2 Middleware Component. This component has a lot of promise and utilizes a sidecar architecture to deploy its authentication component. This is brilliant as it allows you to configure your authentication individually for each microservice. Unfortunately, however, DAPR's authentication component is still in alpha and has some unexpected behavior. I have raised a relevant issue here; hopefully this component will be something to revisit in the future.
Implementation
I will be using Azure Key Vault to store some of the sensitive data used in this blog. I will do a quick walkthrough below of how this is configured and set up for AKS. This is not essential to setting up OAuth2 in AKS and will add unnecessary complexity if you are building this as a POC/demo. Feel free to skip this part if it is not relevant for you.
I will also be building on the architecture created in my original blog post. Completing that first is not essential for this implementation but will provide context.
I have changed the application being deployed from my boring API to quite an exciting web app developed by Mark Harrison, a Senior Specialist here at Microsoft UK.
We will be deploying a web application with an OAuth2 reverse proxy ensuring that only authenticated users are able to access the web app. We will also be deploying an API for the web app, which won't use the OAuth2 proxy and will only be accessible from inside the cluster. Take a look at the pods in the architecture at the top of the page for more clarity.
As we are going to be authenticating users accessing our application using Azure Active Directory, we will need to create an application registration in our directory.
These registrations can be considered the definition of the application and are the objects that describe the application to Azure AD. To create our app registration we will go through the portal into Azure Active Directory.
When registering the application there is some information we need to provide, all of which can be changed once the app registration is created. The Redirect URI is important to highlight here as it is where users will be redirected once authentication is finished. We will pass through a callback URL which will be used later when we configure our OAuth2 Proxy.
Update the redirect URI to suit your domain and protocol.
https://<yourdomain>/oauth2/callback
Once your application is created take a note of the Application (client) ID on the overview page. We will need this later.
Within our application registration we will also need to create a client secret to be used to identify our application. We could also use a certificate for higher assurance however for this example a secret will suffice.
Take a note of your client secret when it is created as it will only be viewable once (you can create a new secret if you lose it).
This is all the setup required on the application registration for now however it is worth highlighting some additional features. This app registration allows you to create custom branding for your login to provide an integrated experience with the rest of your application.
API permissions are also important to be aware of. By default, the Microsoft Graph is added for this application to enable retrieval of basic user data when signed in. Additional permissions can be added if they are required by the application; however, the user will need to consent to the application using this data when they first log in.
Azure Key Vault
In this scenario we will use Azure Key Vault to secure our secrets when being used by our AKS applications.
In this demo we will be using secure cookies, and as a result we will need to create a cookie secret. We can create a cookie secret with the following command using OpenSSL.
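The command itself is missing from this version of the post; a common way to generate a URL-safe, base64-encoded 32-byte secret with OpenSSL is:

```shell
# Generate 32 random bytes, base64-encode them, and make the result
# URL-safe by swapping '+' and '/' for '-' and '_'.
cookie_secret=$(openssl rand -base64 32 | tr -- '+/' '-_')
echo "$cookie_secret"
```

This populates the $cookie_secret variable used when setting the Key Vault secret below.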
Now we must set the secrets in Key Vault. To do this we will use the Azure CLI to save some time. This can also be done through the portal. First ensure that you are logged in and are in the correct subscription. Then run:
az keyvault secret set --vault-name "aks-zero-trust-kv" --name "oauth2-proxy-client-id" --value "<Application (Client) ID>"
az keyvault secret set --vault-name "aks-zero-trust-kv" --name "oauth2-proxy-client-secret" --value "<Client Secret>"
az keyvault secret set --vault-name "aks-zero-trust-kv" --name "oauth2-proxy-cookie-secret" --value $cookie_secret
If we then check our Key Vault we should see our secrets:
Azure Kubernetes Service
We now need to deploy our applications and components into our Kubernetes cluster.
Azure Key Vault Integration
To start with, we will need to enable the Azure Secret Store CSI Driver on our cluster if we did not enable it at creation. We can do so with the following command:
az aks enable-addons --addons azure-keyvault-secrets-provider --name myAKSCluster --resource-group myResourceGroup
We can then verify the install by running:
kubectl get pods -n kube-system -l 'app in (secrets-store-csi-driver,secrets-store-provider-azure)'
Now that our driver is running on our cluster we need to decide how we are going to allow our AKS cluster to access our Key Vault. We have a number of options for doing that:
A user-assigned or system-assigned managed identity
In this deployment we will use user-assigned managed identities although very soon workload identity will be GA. I would encourage you to take some time to take a look at the difference between the user assigned managed identity we will use today and workload identity.
In the future I will edit these deployment files and blog to also show how to leverage workload identity.
To create a managed identity for our cluster we can use the following command:
az aks update -g <resource-group> -n <cluster-name> --enable-managed-identity
We can then query the client ID of the identity created for us.
az aks show -g <resource-group> -n <cluster-name> --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv
We then need to add that client ID to our key vault with the appropriate permissions:
# set policy to access keys in your key vault
az keyvault set-policy -n <keyvault-name> --key-permissions get --spn <identity-client-id>
# set policy to access secrets in your key vault
az keyvault set-policy -n <keyvault-name> --secret-permissions get --spn <identity-client-id>
# set policy to access certs in your key vault
az keyvault set-policy -n <keyvault-name> --certificate-permissions get --spn <identity-client-id>
Feel free to check the permissions have been set in your Key Vault through the portal.
We must now create a SecretProviderClass. The SecretProviderClass will access our Key Vault using the managed identity we have just created. As OAuth2 Proxy also requires our secrets to be passed through as environment variables, we will include some secret objects in this file. Two important things to note here:
Kubernetes Secrets - These Key Vault secrets are still fundamentally being passed through as Kubernetes secrets in this instance, as we are passing them as environment variables. This may not be secure enough for some environments. We could alternatively mount these secrets to the pod and reference the mount point; however, this still has its risks. We do still benefit from being able to rotate, update and disable secrets from Azure Key Vault.
Secret Syncing - The great thing about the SecretProviderClass is that however you are accessing the secrets, you can continually synchronize them with the version in your Key Vault. The Kubernetes secret is created when the volume is mounted to the pod, at which point the latest version from the Key Vault is used. That being said, the SecretProviderClass does not restart application pods that are already running.
The secret provider class requires the Managed Identity client ID and the tenant ID of your key vault (Github Link):
apiVersion: v1
kind: Namespace # We are splitting our app & API across namespaces for later usage.
metadata:
  labels:
    app.kubernetes.io/name: colors
  name: colors-web
---
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: azure-aks-zero-trust-user-msi # needs to be unique per namespace
  namespace: colors-web
spec:
  provider: azure
  secretObjects: # secretObjects defines the desired state of synced K8s secret objects
  - secretName: client-id
    type: opaque
    data:
    - objectName: oauth2-proxy-client-id
      key: oauth2_proxy_client_id
  - secretName: client-secret
    type: opaque
    data:
    - objectName: oauth2-proxy-client-secret
      key: oauth2_proxy_client_secret
  - secretName: cookie-secret
    type: opaque
    data:
    - objectName: oauth2-proxy-cookie-secret
      key: oauth2_proxy_cookie_secret
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "true"
    userAssignedIdentityID: <Managed Identity Client ID>
    keyvaultName: aks-zero-trust-kv # Set to the name of your key vault
    cloudName: "" # [OPTIONAL for Azure] if not provided, the Azure environment defaults to AzurePublicCloud
    objects: |
      array:
        - |
          objectName: oauth2-proxy-client-id
          objectType: secret # object types: secret, key, or cert
          objectVersion: "" # [OPTIONAL] object versions, default to latest if empty
        - |
          objectName: oauth2-proxy-client-secret
          objectType: secret # object types: secret, key, or cert
          objectVersion: "" # [OPTIONAL] object versions, default to latest if empty
        - |
          objectName: oauth2-proxy-cookie-secret
          objectType: secret # object types: secret, key, or cert
          objectVersion: "" # [OPTIONAL] object versions, default to latest if empty
    tenantId: <Your tenant ID> # The tenant ID of your key vault
We then must apply the secret provider class:
kubectl apply -f secretproviderclass.yaml
It is worth listing the secrets at this point and noting that the objects defined are not yet created. This is because, as mentioned, they are created when the application requires them.
Application Deployment
We now need to deploy our application. We will be deploying two applications in this example: one web app and one API. First, let's deploy the API with the following manifest (Github Link):
Notice that up to this point there is no mention of OAuth2. We are not passing any information to the application, and the application has no user authentication built in.
We now need to create two components: our OAuth2 container to handle the authentication, and our ingress resource. First we will deploy the OAuth2 application.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    application: colors-service-oauth2-proxy
  name: colors-service-oauth2-proxy-deployment
  namespace: colors-web
spec:
  replicas: 1
  selector:
    matchLabels:
      application: colors-service-oauth2-proxy
  template:
    metadata:
      labels:
        application: colors-service-oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=oidc
        - --azure-tenant=<Azure tenant ID> # Azure AD OAuth2 Proxy application Tenant ID
        - --pass-access-token=true
        - --cookie-name=_proxycookie
        - --upstream=<Redirect URL>
        - --cookie-csrf-per-request=true
        - --cookie-csrf-expire=5m # Avoid unauthorized csrf cookie errors.
        - --email-domain=* # Email domains allowed to use the proxy
        - --http-address=0.0.0.0:4180
        - --oidc-issuer-url=https://login.microsoftonline.com/<Tenant ID>/v2.0
        - --user-id-claim=oid
        name: colors-service-oauth2-proxy
        image: quay.io/oauth2-proxy/oauth2-proxy:v7.4.0
        imagePullPolicy: Always
        volumeMounts:
        - name: secrets-store01-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
        env:
        - name: OAUTH2_PROXY_CLIENT_ID # keep this name - it's required to be defined like this by OAuth2 Proxy
          valueFrom:
            secretKeyRef:
              name: client-id
              key: oauth2_proxy_client_id
        - name: OAUTH2_PROXY_CLIENT_SECRET # keep this name - it's required to be defined like this by OAuth2 Proxy
          valueFrom:
            secretKeyRef:
              name: client-secret
              key: oauth2_proxy_client_secret
        - name: OAUTH2_PROXY_COOKIE_SECRET # keep this name - it's required to be defined like this by OAuth2 Proxy
          valueFrom:
            secretKeyRef:
              name: cookie-secret
              key: oauth2_proxy_cookie_secret
        ports:
        - containerPort: 4180
          protocol: TCP
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi
      volumes:
      - name: secrets-store01-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "azure-aks-zero-trust-user-msi"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    application: colors-service-oauth2-proxy
  name: colors-service-oauth2-proxy-svc
  namespace: colors-web
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    application: colors-service-oauth2-proxy
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "2000m"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "32k"
  name: colors-service-oauth2-proxy-ingress
  namespace: colors-web
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /oauth2
        pathType: Prefix
        backend:
          service:
            name: colors-service-oauth2-proxy-svc
            port:
              number: 4180
In this deployment we are referencing the secret objects included in the SecretProviderClass, which will be created once we apply this manifest. We can also see that we are using the secrets volume: the volume specifies the CSI driver and the SecretProviderClass, and the volume mount then mounts the secrets into the pod. This allows the deployment to create the secrets.
Once you have added the IDs for your specific deployment, we can apply it:
kubectl apply -f oauth2proxy.yaml
If we look for our secrets we should now see the secrets have been created.
REDMOND+owaino@DESKTOP-9V6KSRB MINGW64 ~/Documents/Azure-Demo-Projects/AKS-Zero-Trust (main)
$ kubectl get secrets
NAME            TYPE                DATA   AGE
client-id       opaque              1      2s
client-secret   opaque              1      2s
cookie-secret   opaque              1      2s
tls-secret      kubernetes.io/tls   2      6d17h
Now we will deploy the ingress. If you are following on from the previous post, we will replace the existing ingress. If you are not, please install the NGINX Ingress Controller on your cluster.
The auth-url annotation indicates the URL that requests will be forwarded to when they hit this ingress; here we are using the /oauth2 endpoint that our OAuth2 proxy stands up. The auth-signin annotation points to the starting URL of the authentication flow and passes our callback URL. Finally, the auth-response-headers annotation allows us to specify which values from the authorization we would like to forward for use by the application.
These are the basic required annotations for NGINX external authentication, but I would encourage you to take a look at the broader set, some of which are very powerful.
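The web app ingress manifest itself didn't survive into this version of the post; a sketch of what ingress-srv.yaml could look like, with the web app service name, port, domain and forwarded headers as assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # Each request is checked against the OAuth2 Proxy auth endpoint.
    nginx.ingress.kubernetes.io/auth-url: "https://<yourdomain>/oauth2/auth"
    # Unauthenticated users are sent into the login flow, carrying the
    # original request URI for the post-login redirect.
    nginx.ingress.kubernetes.io/auth-signin: "https://<yourdomain>/oauth2/start?rd=$escaped_request_uri"
    # Forward selected authorization values on to the application.
    nginx.ingress.kubernetes.io/auth-response-headers: "x-auth-request-user, x-auth-request-email"
  name: colors-web-ingress
  namespace: colors-web
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: colors-web-svc   # assumed name of the web app service
            port:
              number: 80           # assumed service port
```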
Now that we understand the auth-related annotations, we can apply our ingress:
kubectl apply -f ingress-srv.yaml
We can now check that our pods are deployed and running as expected:
REDMOND+owaino@DESKTOP-9V6KSRB MINGW64 ~/Documents/Azure-Demo-Projects/AKS-Zero-Trust (main)
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
colors-service-oauth2-proxy-deployment-78ffb756f5-sd4v5 1/1 Running 0 16m
colors-web-depl-554b54449c-ntflf 1/1 Running 2 (5d18h ago) 6d17h
REDMOND+owaino@DESKTOP-9V6KSRB MINGW64 ~/Documents/Azure-Demo-Projects/AKS-Zero-Trust (main)
$ kubectl get pods -n colors-api
NAME READY STATUS RESTARTS AGE
colors-api-depl-79c887f867-vhgg9 1/1 Running 0 13m
Providing you see no errors, you should now be able to head to the domain or IP address you have been using for this deployment and, on the route you specified, be greeted by an Azure AD login screen.
Once we authenticate with a user in our Azure AD directory, we are greeted by Mark's great colors application.
We have now managed to ring fence our web application with Azure AD authentication without having to make any code changes and with Azure Key Vault integration!
Finally, it's time to configure our application to see what it does and also to highlight that we can now access our internal API without publicly exposing it.
To configure the application we require the FQDN of the API service we deployed earlier. As Kubernetes by default does not restrict traffic between pods or namespaces, we can specify the service name of our API for internal calls. As our API service isn't exposed to the internet, we need to un-tick the box so that our calls are made from the pods running our application and not from the client.
We will also need to include the route to the API, which in this case is /colors/random, but you can also take a look at the other options available here.
To get the FQDN of our service we can execute the following commands:
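The commands are missing from this version of the post; on a default cluster, Kubernetes DNS gives every service a predictable FQDN, so with assumed names the lookup goes like this:

```shell
# On a live cluster, confirm the API service name first:
#   kubectl get service -n colors-api
# Kubernetes DNS then resolves the service at:
#   <service-name>.<namespace>.svc.cluster.local
service="colors-api-svc"      # assumed service name from the API manifest
namespace="colors-api"
fqdn="${service}.${namespace}.svc.cluster.local"
echo "$fqdn"
```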
It's worth taking another look at the architecture as a refresher on what has been implemented, including the SSL termination if you are following on from the first blog!
The use of an OAuth2 reverse proxy has enabled us to authenticate at the ingress level. Although this application doesn't do anything with the headers that are forwarded on to it, you could easily now set feature flags or serve unique user content based on the authentication information passed through in the headers.
In my next blog post I will take a look at Network Policies and Open Service Mesh to examine how we can leverage different features to restrict network traffic, enable mTLS and manage observability.