Deploying Azure Functions on custom “hardware” with Docker and Azure Kubernetes Service

Hector Andres Mejia Vallejo
7 min read · May 11, 2021


Today, applications need to be deployed quickly, with as little setup as possible, and at scale. When every environment requires a different setup for our application to run, and development time is limited, Docker and Kubernetes kick in.

The perfect combination

Docker is an open-source application containerization technology with which we can “package” our application into a plug-and-play solution. It dramatically reduces application setup, because there is no need to configure dependencies for varying hardware, operating systems, etc. Docker containers are built with their own lightweight OS layer and all dependencies preinstalled, using a recipe that we define ourselves just once (the Dockerfile). Essentially, we can run the app anywhere.

On the other hand, Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services. It provides a framework to run distributed systems resiliently.

Cool!

Now to business. I assume that you already have an Azure Function with all its business logic ready to be deployed, but for some reason you need custom specifications (say, for AI or other compute-intensive workloads), and you also need this Azure Function to scale automatically.

Basically, we are building a Docker image for our function, pushing it to Docker Hub, setting up an Azure Kubernetes cluster, and deploying our app at scale.

Let’s go!

Table of contents

Install Azure CLI and log in
About the Azure Function
Install kubectl
Install Docker
Enable cluster monitoring on Azure
Set up an Azure Kubernetes Service (AKS) cluster
Deploy KEDA to your Kubernetes cluster
Get the necessary environment variables for our application container
Build the Docker image and push to Docker Hub
Deploy the Azure Function to Kubernetes
Watch your deployments
Installing the Kubernetes Dashboard
Deleting a deployment
Epilogue

Install Azure CLI and log in

See: Install the Azure CLI | Microsoft Docs. Once installed, log in to Azure using the CLI:

az login -u <username> -p <password>

About the Azure Function

I am assuming that you have an Azure Function project ready to be deployed on custom infrastructure. If not, this command will scaffold a new Azure Function project for you:

func init MyFunctionProj --worker-runtime python --docker
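The --docker flag generates a Dockerfile in the project root. A rough sketch of what it contains (the exact base image tag depends on your Core Tools and Python versions) looks something like this:

```dockerfile
# Base image with the Azure Functions Python runtime
FROM mcr.microsoft.com/azure-functions/python:3.0-python3.8

ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true

# Install the function's dependencies, then copy the project in
COPY requirements.txt /
RUN pip install -r /requirements.txt
COPY . /home/site/wwwroot
```

You can extend this file freely, for example to install system packages or AI runtime libraries your workload needs.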

Install kubectl

kubectl will serve as the interface between our machine and our future cluster for monitoring, setup, etc. Follow these steps (they are pretty easy): Install Tools | Kubernetes

Install Docker

Run the following:

sudo apt-get update
sudo apt-get install \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg \
  lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

or visit: Get Docker | Docker Documentation. Once installed, log in to Docker Hub:

docker login -u <username> -p <password>

Enable cluster monitoring on Azure

Check whether the required resource providers are registered:

az provider show -n Microsoft.OperationsManagement -o table
az provider show -n Microsoft.OperationalInsights -o table

If either provider is not registered, register it:

az provider register --namespace Microsoft.OperationsManagement
az provider register --namespace Microsoft.OperationalInsights

Set up an Azure Kubernetes Service (AKS) cluster

The juicy stuff! This is going to be our cluster, with custom specifications for our containerized function. You can check virtual machine sizes here: VM sizes — Azure Virtual Machines | Microsoft Docs. Just keep in mind the overall cost of the node pools inside your Kubernetes cluster. Run this in your terminal:

az aks create --resource-group <myResourceGroup> --name <myAKSCluster> --node-count <nodeCount> --enable-addons monitoring --generate-ssh-keys  --node-vm-size <selectedNodeSize>

Once done, connect to your cluster:

az account set --subscription <subscription>
az aks get-credentials --resource-group <myResourceGroup> --name <myAKSCluster>

You should now be able to interact with your cluster using kubectl. As a sanity check:

kubectl cluster-info

Deploy KEDA to your Kubernetes cluster

KEDA is an event-driven autoscaler for Kubernetes. With it, we can run Azure Functions on Kubernetes and scale containers based on the number of events to be processed. To deploy it to our cluster, simply execute:

kubectl apply -f https://github.com/kedacore/keda/releases/download/v2.2.0/keda-2.2.0.yaml

or visit: KEDA | Deploying KEDA for more info.

Get the necessary environment variables for our application container

More often than not, your application will need environment variables. In that case we should create a file called local.settings.json in the root of the function directory; it will supply all the necessary environment variables to our cluster.
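Inside the function code, these settings surface as ordinary environment variables. A minimal sketch in Python (STORAGE_CONNECTIONSTRING_ENV_NAME is the setting name referenced later by the KEDA ScaledObject in this article):

```python
import os

# Settings from local.settings.json (when running locally) or from the
# Kubernetes secret created at deploy time are exposed to the function
# as environment variables, so they can be read with os.environ.
connection_string = os.environ.get("STORAGE_CONNECTIONSTRING_ENV_NAME", "")
```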

For instance, I have created an Azure Function that listens to an Azure queue for incoming data and executes an object detection model. My local.settings.json looks something like this:
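A minimal sketch of such a file (the connection string values, and any setting names beyond the standard Functions ones, are placeholders for my project):

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobsStorage": "<storageAccountConnectionString>",
    "STORAGE_CONNECTIONSTRING_ENV_NAME": "<queueStorageConnectionString>"
  }
}
```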

You can also see this article for more information about local.settings.json and other azure function details: Work with Azure Functions Core Tools | Microsoft Docs

Build the Docker image and push to Docker Hub

Now we have to package our application. We should have a Dockerfile specifying everything necessary to set up our app. After that, just build and push to Docker Hub:

cd path/to/azfunction
docker build --tag <dockerHubUsername>/<imageName>:latest .
docker push <dockerHubUsername>/<imageName>:latest

Deploy the Azure Function to Kubernetes

Once our image is on Docker Hub, we can deploy our application to our Kubernetes cluster. When the deployment is complete, the Docker container (with our function) will live inside a Kubernetes pod. Multiple pods with the same Azure Function can be instantiated by setting the replica count manually, or by scaling automatically given a minimum and a maximum. Run this command to generate a .yaml file for our deployment:

func kubernetes deploy --name afpdqueue-rtv --image-name datasciencedev/afpdqueue-rtv2 --python --dry-run > deploy.yaml

If we were to run this command without the --dry-run option, the image would be deployed to the cluster immediately. However, we first need to make sure it is sending the right parameters for the ScaledObject, which handles autoscaling. The structure should look like this:

---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: <deploymentName>
  namespace: default
  labels: {}
spec:
  scaleTargetRef:
    name: <deploymentName>
  triggers:
  - type: azure-queue
    metadata:
      direction: in
      queueName: videoqueue
      connectionFromEnv: STORAGE_CONNECTIONSTRING_ENV_NAME
      queueLength: "5" # Optional. Default: 5
  minReplicaCount: 1   # Optional. Default: 0
  maxReplicaCount: 5   # Optional. Default: 100
---

For more information, see KEDA | Scaling Deployments to learn how these parameters affect the deployment.

After everything is set, execute:

kubectl apply -f deploy.yaml

Watch your deployments

Once the application is deployed, you can see what is going on. You can list what is deployed in a certain namespace, or across all namespaces:

kubectl get deployments --all-namespaces=true

You can also find the pods associated with your deployment, if one or more are running:

kubectl get pods

With that command you should see output like this:
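A typical listing looks something like this (the pod name and hash below are illustrative, based on the deployment name used earlier):

```
NAME                             READY   STATUS    RESTARTS   AGE
afpdqueue-rtv-6c8f5d4b9c-x2x7l   1/1     Running   0          2m
```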

Your pods' names will start with the name of your deployment plus a unique hash. You can view the logs of any of those pods by running:

kubectl logs <nameOfPod>

Installing the Kubernetes Dashboard

The Kubernetes dashboard

Of course, everything is easier with a GUI! We can have a visual interface to monitor our cluster and deployments instead of using the terminal. Configure the Kubernetes Dashboard with the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

Then create a service account:

kubectl create serviceaccount <nameForYourServiceAccount>
kubectl create clusterrolebinding <nameForYourServiceAccount> --clusterrole=cluster-admin --serviceaccount=default:<nameForYourServiceAccount>

Now we need to copy the service account secret to get our token for the dashboard.

kubectl describe serviceaccount <nameForYourServiceAccount>

The command above will result in the following output:
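The output looks something like this (the secret name suffix is illustrative):

```
Name:                <nameForYourServiceAccount>
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   <nameForYourServiceAccount>-token-abcde
Tokens:              <nameForYourServiceAccount>-token-abcde
Events:              <none>
```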

Copy the secret name under Mountable secrets, and execute:

kubectl describe secret <serviceAccountSecret>

You should see something like:
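The secret description ends with the token itself (truncated here; sizes and names are illustrative):

```
Name:         <nameForYourServiceAccount>-token-abcde
Namespace:    default
Type:         kubernetes.io/service-account-token

Data
====
ca.crt:     1765 bytes
namespace:  7 bytes
token:      eyJhbGciOiJSUzI1NiIs...
```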

Now copy the entire token. In another terminal, run:

kubectl proxy

This will start a local web server proxying the dashboard. Navigate to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login and paste the entire token on the login page.

Deleting a deployment

If for some reason you need to delete your deployment, simply run:

kubectl delete deploy <name-of-function-deployment>
kubectl delete ScaledObject <name-of-function-deployment>
kubectl delete secret <name-of-function-deployment>

Epilogue

This is my first blog entry, written to share some of the knowledge I am gaining as my career progresses, so bear with me :D

Although the commands shown come from a specific project, my best guess is that you can still make use of them, or at least part of them. All of this is a compilation of documentation from the various technologies involved; putting it all together took some time and a learning curve. I hope it is useful for you, especially if you are using Azure Functions and need custom hardware to deal with heavy workloads.

Thank you for taking the time to read this article!

See you!


Written by Hector Andres Mejia Vallejo

Ecuadorian. Studied CS, now working as a data engineer and doing a masters in data science. Passionate for AI and discovering stories in the data!