Big Three in Cloud Prompts ARM to Rethink Software

British chip design company ARM has seen rough days in the last few years: a merger with Nvidia fell through, and plans for an IPO were put on hold as semiconductor companies got caught in the middle of countries battling over chip supremacy.

Despite the setbacks, ARM remains afloat, driven by unprecedented success in smartphones while slowly working its way into the server market. ARM was once considered dead on arrival in servers, but the narrative has changed now that the big three cloud providers are putting ARM processor designs in cloud native environments.

ARM is now trying to shore up its software stack, a weakness that held back its adoption in server environments for more than a decade. At the recent ARM DevSummit conference, the company outlined its software development efforts for cloud native, server and embedded computing environments over the next two years.

Design Methodologies, Developer Experiences

The core focus during keynotes by ARM executives was on application design methodologies and developer experiences in the changing computing landscape.

ARM licenses its processor designs to customers, which then build those designs into physical chips. Some notable licensees include Apple, Amazon, Samsung, Qualcomm and Google.

ARM dominates software in mobile and edge devices. The Android OS revolves around ARM, and embedded applications are tuned to run on power-efficient edge devices with ARM processors. But it is a different story in servers with the codebase largely tilted to the x86 architecture.

ARM is now undertaking what company executives called a fundamental shift in the way it approaches software development: giving software control over selecting the hardware on which an application should be processed.

“As software developers, we’re used to starting with a problem and figuring the rest out from there. But we’re flipping how things have traditionally worked. The software chooses the characteristics, not the hardware,” said Mark Hambleton, vice president of open source software at ARM, during a keynote.

The software-defined approach is already being taken up by chip companies like Intel and Nvidia, which provide a generous serving of software alongside their chips. Intel’s OneAPI and Nvidia’s CUDA are tuned to take advantage of each company’s own silicon. But ARM can’t be biased in its software development tools, as its processors are licensed by many companies.

“We’re moving away from defining the hardware and then shoehorning the software in and working around whatever compromises that dictates. This reflects the radical shift taking place in the ARM ecosystem,” Hambleton said.

Open Source, Off-the-Shelf

ARM’s software approach is largely based on open source standards, with off-the-shelf operating systems offering rich middleware and APIs, which are enabled by standard base firmware with good upstream support for the peripherals built into the platforms.

But the open standards approach has its complexities — the bulk of the application development effort lands in the hands of its customers. ARM is now pushing more tools through its partners to make it easier for developers to write cloud native code.

“Our Works on ARM program is expanding to include all the major cloud providers. For software developers, this means easy, open access to outstanding ARM-based cloud services that they can trust will run their applications,” Hambleton said.

Amazon’s AWS offers virtual machines on its ARM-based chip called Graviton. Google, Microsoft and Oracle are offering VMs based on Ampere Computing’s ARM-based Altra and Altra Max processors, which have up to 128 cores for cloud native applications. HPE is also using Ampere’s chip in its ProLiant RL300 Gen11 flagship server for hybrid cloud environments.

Ampere is one of the few success stories of ARM server chips and maintains a running list of more than 100 applications that are supported on the virtual machines of Google, Microsoft and Oracle and bare-metal providers like Equinix. The software support list includes the NGINX web server, Memcached, Apache Cassandra, MySQL and Hadoop.

The Linux Foundation’s CNCF (Cloud Native Computing Foundation) also works with ARM to bring Kubernetes systems to the edge.

The Works on ARM program supports more than 100 open source projects, including the AlmaLinux and Alpine operating systems, virtualization tools such as KVM and the Xen Project, databases including MariaDB, and the Ruby and Python programming languages. Developers can use those tools to write and test applications before deploying them in the cloud.

The program uses a DevOps-style iterative pipeline that relies on quick improvements and deployment of code. The cloud-based development tools target ARM’s Neoverse platform, which is designed for cloud native and high-performance applications.

The program could kickstart high-end application development for ARM servers, which has historically been a struggle. ARM servers emerged more than a decade ago as a possible alternative to x86 chips from Intel and AMD, but the processors were not considered powerful enough for conventional database, ERP or other big-iron applications.

Early ARM servers from companies like Calxeda and AppliedMicro were designed for the LAMP (Linux, Apache, MySQL, PHP/Perl/Python) stack running on internal servers. But ARM has now found a sweet spot in cloud applications with its low-power processors providing a nimble way to scale up web applications without the overhead of power-hungry x86 processors.

Microsoft Azure launched its first Ampere ARM-based virtual machines in April, saying they provide “50% better price-performance than comparable x86-based VMs.” Google Cloud also launched ARM virtual machines for the first time this year with the Tau T2A offering.

Code That Just Works

But like Intel and AMD, ARM wants to make coding easy so developers can write code that just works regardless of the hardware.

“The team is identifying ways to further abstract complexity to ensure a seamless developer experience. These include a common tool chain across all our IP and frameworks for programming, debugging, and analyzing across CPU, GPU and NPU,” said Gary Campbell, executive vice president of central engineering at ARM, during a keynote.

Developers can write code that is portable across multiple hardware environments, which will reduce the cost and labor involved in rewriting code for different environments.

“That’s a big challenge for software developers. We’re spending too much time doing the same things over and over, leading to too much cost. The bulk of the software running on that device was custom-made for that device. This meant no opportunity to scale,” Hambleton said.

The chip designer is working with partners on “extensions” tailored to specific workloads. Some of the partnerships are in areas that include 5G systems, machine learning and edge applications. Much of the software development will target the company’s ARMv9 architecture, which will define next-generation server chips.

“Specialized processing is the new standard bearer that will enable us to move beyond the faster, better, cheaper track of general-purpose computing. That is why it’s a central design principle for the rolling program of … substantial extensions to the ARMv9 architecture that we’ll be deploying over the next few years,” ARM’s Campbell said.

Cloud Native, HPC

Cloud native and high-performance computing environments usually have graphics processors to accelerate AI. Nvidia is matching up its homegrown ARM-based CPU called Grace with its latest H100 GPU based on the Hopper architecture. ARM is also putting extensions in its chip designs to accelerate artificial intelligence and math applications.

One such extension for cloud native processors is called Scalable Vector Extension, which will “drive a significant uplift in cloud to edge performance efficiency,” Campbell said.

SVE2 will be a key extension in future Neoverse chips. The Neoverse roadmap includes V-series processors in 2023 and 2024 for high-performance systems, followed by a processor design called Poseidon in 2025. The N-series chips through 2025 will focus on power efficiency, and E-series chips will have fast data throughput, which is important for 5G networks and artificial intelligence applications.

ARM also announced the availability of the Windows Dev Kit 2023, also known as Project Volterra, which is native ARM development hardware for Windows 11. Today, developers mostly port x86 applications by hand to run on ARM editions of the OS.

Microsoft has launched previews of its ARM-native toolchains, including Visual Studio 2022 and .Net 7, which will be released in the coming weeks. Developers can write artificial intelligence applications for laptops with Qualcomm’s ARM-based CX chips, which also include neural processors.
