
Azure Container Instances (ACI), which let you create Linux and Windows containers without having to manage the virtual machines they run on, are now generally available.

ACI brings serverless principles to containerized applications. Serverless computing, pioneered by Amazon's Lambda and found on Azure as Functions, is designed to defer all system management (physical and virtual machine deployment and patching) and load-based scaling decisions to the platform provider. Developers just write their application code; they no longer have to care about spinning up virtual machines, updating operating systems, cutting over to new hardware, or anything else.

Traditional container deployments require virtual machines to run on. With ACI's serverless containers, the management of those virtual machines goes away. ACI containers can be deployed using Microsoft's own Azure interface, or with Kubernetes, without needing any VMs to be spun up first. Containers are billed per second for the processor and memory they use: $0.000012 per CPU-second and $0.000004 per GB-second of memory.
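To make the per-second billing concrete, here is a minimal sketch of the arithmetic using the rates quoted above; the container size and runtime chosen are hypothetical illustration values, not figures from Microsoft.

```python
# Rough cost estimate for ACI's per-second billing, using the rates quoted above.
CPU_RATE_PER_SECOND = 0.000012     # USD per CPU-second
MEMORY_RATE_PER_SECOND = 0.000004  # USD per GB-second of memory

def aci_cost(cpus: float, memory_gb: float, seconds: float) -> float:
    """Estimate the cost of running a container instance for a given duration."""
    return seconds * (cpus * CPU_RATE_PER_SECOND + memory_gb * MEMORY_RATE_PER_SECOND)

# Example: a 1-CPU, 1.5 GB container running for one hour (3,600 seconds).
cost = aci_cost(cpus=1, memory_gb=1.5, seconds=3600)
print(f"Estimated cost: ${cost:.4f}")  # -> Estimated cost: $0.0648
```

At these rates, short-lived or bursty workloads cost fractions of a cent, which is the economic argument for paying per second of actual use rather than for an always-on virtual machine.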

Typically, multiple containers are run within a single virtual machine. This can make containers undesirable for multitenant workloads, because the isolation between containers sharing a virtual machine is imperfect. The containers within ACI are unusual, as they're isolated from one another using a hypervisor (making them similar to Hyper-V Containers within Windows). The use of a hypervisor provides much stronger separation between containers.

When ACI was in preview, the Kubernetes support was provided with an experimental bridge between Kubernetes and the ACI interface. This has grown into a broader project called Virtual Kubelet, which allows Kubernetes to manage containers not just on ACI but also on other serverless container platforms such as Hyper.sh and Amazon's Fargate.
