🧨 Ignite The Tour: London Highlights – Data and Containers in Azure


Many of us don’t get the opportunity to travel to the States for Microsoft’s biggest public convention: Ignite. Fortunately, over the last few years Microsoft has been taking the show on the road, visiting countries all over the globe for a two-day highlight tour. It also has the benefit of being a good few months after the initial announcements, so there is a bit more depth and understanding on all of the topics up for discussion. There’s content for IT pros, engineers, admins and developers, and did I mention it’s free to attend!? I headed to London with some of my risual colleagues to #LearnItAll (or at least as much as we could cram in!).

At these kinds of events I like to have a good mix of different sessions. This time I focussed on finding out what’s new in IT operations (rather than DevOps), containers and data, as well as the latest announcements on new Azure and Office 365 technologies like Azure Arc and Project Cortex. Originally I was planning to live-blog this, then I decided on a summary blog, but it turned out there was so much content that this article was left in draft and I forgot about it! So I’m publishing the Data and Container sessions now and hope to write up the rest in the future!

Data Options in Azure

Step one in looking into any of Azure’s many data options is to understand what matters to you. Have a strategy for your data and its storage to make sure you leverage the right tool for the job. Here are a few questions worth asking:

  • What kind of data do you have?
    • Structured (e.g. rows and columns) – Try Azure SQL Database
    • Unstructured (videos, documents, images etc.) – Use Azure BLOB storage
    • Semi-structured (may have a nested hierarchy, but the underlying data is unstructured – e.g. JSON files) – Cosmos DB could be right for you
  • How much data do you have?
    • Volume – how much data is there, and how much is added each year?
    • Velocity – how quickly does data come in
    • Variety – categorise the different data types you expect to have
  • How could Azure help over traditional on-prem solutions?
    • Provision new services quickly
    • Pay as you go
    • Limit impact of new services on existing environments
    • Upgrade legacy operations
    • Give devs more control
    • Dynamic scaling of storage
    • Fine-grained governance
    • Secret management with Azure Key Vault
    • Free extended security updates for SQL Server 2008 when migrated to Azure
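The decision flow above can be sketched as a small lookup. This is a hypothetical helper illustrating the session’s advice, not any official Azure API – the categories and service names are taken straight from the bullets:

```python
# Hypothetical helper mapping a data shape to the Azure service the
# session suggested. Names and categories come from the post above,
# not from an Azure SDK.

def suggest_azure_service(data_shape: str) -> str:
    """Return a suggested Azure data service for a given data shape."""
    suggestions = {
        "structured": "Azure SQL Database",    # rows and columns
        "unstructured": "Azure Blob Storage",  # videos, documents, images
        "semi-structured": "Azure Cosmos DB",  # nested hierarchy, e.g. JSON
    }
    try:
        return suggestions[data_shape.lower()]
    except KeyError:
        raise ValueError(f"Unknown data shape: {data_shape!r}")

print(suggest_azure_service("semi-structured"))  # Azure Cosmos DB
```

In practice the decision is rarely this clean (most estates hold all three shapes), which is why the volume/velocity/variety questions matter too.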

Modernizing with Containers

Containers in Azure make your apps more efficient than a traditional virtual machine approach. With VMs there is an OS to manage, startup times can be slow and there is a lot to monitor and maintain. You can build containers in your own on-prem data centre, but this can become very expensive, has limited scale and comes with a number of compliance and regulatory hurdles to clear before you can even get started. So why go for containers in the cloud? For a start, they are more efficient and don’t need the high initial investment you would require on-prem, and developers can run the same code on their own machines that runs in the cloud, because every environment looks the same.

The speaker recommended building container images from the empty “scratch” base image and did his programming in the open-source Go language. Scratch is the emptiest base you can get; other Linux images, e.g. Debian or Ubuntu, are built on top of it. If you’re not comfortable with Go (and a lot of sysadmins may not be), you can instead use Docker along with a Dockerfile (think of it as a batch script) to set up your environment. When you have a container image ready, store it in a Docker registry so it can be shared and re-used across (or outside of) your organisation.
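A minimal sketch of that approach might look like the Dockerfile below: a multi-stage build that compiles a Go binary and copies just that binary into the empty scratch image. The file and module names here are my own placeholders, not the speaker’s code:

```dockerfile
# Build stage: compile a static Go binary using the official Go image
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./main.go

# Final stage: "scratch" contains nothing at all except what we copy in
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The resulting image contains only the binary, which keeps it tiny and shrinks the attack surface compared with shipping a full distro.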

Azure helps with the management of containers in a few key ways. Azure Container Registry (ACR) is Microsoft’s implementation of a Docker registry. It can be private or public, and it can use geo-replication to keep a copy of the registry in the same region as your resources – a key differentiator over AWS or GCP. Azure Container Instances (ACI) takes away some of the management overhead you would get with a Kubernetes-based solution; containerising an app and learning Kubernetes at the same time is a lot to take on when starting out. ACI gets you up and running quickly, even if the eventual goal is to move to Kubernetes. The speaker gave a great demo of migrating from VMs to ACI: containerise the app, upload it to the registry, then deploy it to ACI. He has the code hosted on GitHub [see the session link] if you want to give it a try yourself. A few tips:
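That containerise → push → deploy flow can be sketched with the Azure CLI roughly as below. All resource and image names are placeholders, and a private registry deployment will also need credentials passed to ACI – treat this as an outline of the demo’s shape rather than its exact commands:

```
# Create a resource group and a container registry (names are placeholders)
az group create --name demo-rg --location uksouth
az acr create --resource-group demo-rg --name demoregistry --sku Standard

# Build the image locally and push it to ACR
az acr login --name demoregistry
docker build -t demoregistry.azurecr.io/myapp:v1 .
docker push demoregistry.azurecr.io/myapp:v1

# Deploy the image to Azure Container Instances
az container create --resource-group demo-rg --name myapp \
  --image demoregistry.azurecr.io/myapp:v1 --ports 80
```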

  • If a VM has something like a SQL database or MongoDB on it, don’t try to containerise them. Instead use one of the Azure PaaS solutions. This lets you concentrate on your app, not the underlying services
  • Resist the temptation to put secrets or environment variables in your Dockerfiles (which could be widely visible). Instead, use App Settings inside the Azure portal to store them more securely
  • The more you automate, the more you can concentrate on innovation. e.g. automate the patching and use tools like Microsoft Teams or Logic Apps if a manual approval is needed
  • Start secure and stay secure. Scan for code vulnerabilities automatically as part of your CI/CD process
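On the secrets tip: App Settings (and ACI environment variables) surface to the running app as ordinary environment variables, so the code reads them at runtime instead of baking them into the image. A minimal sketch, where `DB_CONNECTION_STRING` is a hypothetical setting name of my own:

```python
import os

def get_db_connection_string() -> str:
    """Read a connection string from the environment at runtime.

    In Azure this would be populated from App Settings; nothing
    sensitive ever appears in the Dockerfile or the image layers.
    """
    value = os.environ.get("DB_CONNECTION_STRING")
    if value is None:
        raise RuntimeError(
            "DB_CONNECTION_STRING is not set; configure it in "
            "App Settings, not in the image"
        )
    return value

# Simulate what the platform would inject (illustration only)
os.environ["DB_CONNECTION_STRING"] = "Server=example;Database=demo"
print(get_db_connection_string())
```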

Containers vs Serverless

  • Presented by Mark Allan
  • Session BRK30097

After only just getting to grips with containers, you may now be hearing a lot of buzz around serverless and how that is the way forward for a microservice-based architecture. Microservices started out with containers: each microservice is wrapped, with all of its dependencies, in its own container. You may start out with Azure Container Instances, but Kubernetes is the common tool to manage and monitor containers at scale. In Azure, that typically means the Azure Kubernetes Service (AKS).

  • Master nodes hold the management components (AKS manages these for you – they are very complex to run manually)
  • The master nodes communicate with the worker nodes via the kubelet agent on each node
  • Users access a worker node via a proxy inside the node
    • They then get routed to the pods in the worker node, each of which may contain one or more containers

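To make the node/pod/container layering concrete, here is a minimal sketch of a Kubernetes Deployment that AKS would schedule onto the worker nodes as pods. The app and image names are placeholders of my own:

```yaml
# Minimal Deployment sketch: Kubernetes spreads two pod replicas
# across the worker nodes, each pod running one container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2                  # two pods across the worker nodes
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: demoregistry.azurecr.io/myapp:v1   # placeholder image
          ports:
            - containerPort: 80
```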
Serverless computing removes the need for you to worry about any of the underlying infrastructure needed for containers. It allows developers to concentrate on just writing the app code rather than any infrastructure orchestration. Azure does this with Functions:

  • These are event driven
  • Open source runtime
  • Low management, low cost, highly flexible
The pros and cons slide from Mark Allan’s session summed up the trade-off between the two:

  • Kubernetes: fixed(ish) cost, limited elasticity, run anywhere, run anything, code it yourself, complex infrastructure
  • Serverless functions: consumption-based cost, enormous elasticity, cloud lock-in, restricted platform, glue code provided, trivial infrastructure

Each approach has its pros and cons, but you can get the best of both worlds. The standard approach is to pair containers with functions. However, a newer way is to actually run serverless functions inside containers. Kubernetes Event-driven Autoscaling (KEDA), developed by Microsoft and Red Hat, is a way to do this, and is a new attempt to solve the complexity introduced with Kubernetes. Alongside it sits the Open Application Model (OAM), which separates the developer’s code from the application config and from the infrastructure. It’s still a work in progress, but it does work. If you want to have a look at it, Rudr is an OAM implementation for Kubernetes: a YAML file holds the application architecture, and Rudr builds the Kubernetes environment from it.
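As a rough sketch of the KEDA side of this, a ScaledObject tells KEDA how to scale an existing Deployment in response to events. The names, queue and thresholds below are all placeholders, assuming an Azure Storage queue as the event source:

```yaml
# Hedged sketch of a KEDA ScaledObject: scale the "myapp" Deployment
# based on Azure Storage queue depth, down to zero when idle.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: myapp-scaler
spec:
  scaleTargetRef:
    name: myapp              # the Deployment KEDA scales
  minReplicaCount: 0         # serverless-style scale-to-zero
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: orders    # placeholder queue
        queueLength: "5"     # target messages per replica
```

The scale-to-zero behaviour is what brings the consumption-style economics of functions to a container you still fully control.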

The Open Application Model (OAM) slide broke the model down into three roles:

  • Application developer – creates reusable components that run code inside containers (component config)
  • Application operator – creates the application config that groups components into applications, adds operational traits (e.g. TLS, autoscaling, load balancing) and deploys
  • Infrastructure operator – optionally configures the runtime environment as needed (environment config)

My thoughts from both of my container sessions were that this is still a very dev/app-architect-orientated solution, with a steep learning curve if you are more infrastructure focussed. OAM may help with this as it has a specific role for the infrastructure operator. Technical lock-in could be an issue for a serverless approach, but the benefits may outweigh the “complexity tax” that containers introduce. Something I’ll definitely be keeping my eye on and getting more practical experience of.


So that’s part one of a (maybe) ongoing series, as there is still a lot to talk about!
