Arm HQ in Cambridge

Arm chips for mobile and IoT devices have supported secure enclaves – a concept known as confidential computing – for years. The technology, called the Arm TrustZone trusted execution environment, has been available for mobile devices since 2004 and in Arm's M-class IoT designs since 2014. The chip designer is now bringing confidential computing to its data center-class chip designs.

The Armv9 architecture, launched in March, features Arm CCA (Confidential Compute Architecture).

Since Arm, based in Cambridge, UK, licenses its designs out to various chipmakers, the release will help democratize confidential computing in data centers, Mark Knight, director of architecture products at Arm, said. Server-chip giants Intel, AMD, and IBM each have their own secure enclave technology for data centers.

Arm CCA builds on the original Arm TrustZone technology, Knight told DCK, to extend the principle of a hardware-based secure processing environment to a wider range of workloads.

“Arm CCA takes the kind of high-trust secure enclaves that have previously been accessible to only device manufacturers and operating system vendors and opens secure computing to all developers and all data center workloads,” he said.

TrustZone is primarily accessible to silicon vendors and OEMs, he said. Applications are usually written to run in specialized environments and are usually smaller in size.

With Arm CCA, any developer can take advantage of confidential computing by running their application in the secure enclave. It can run on any standard operating system, including Linux and Windows.

What Is Confidential Computing?

Data is routinely encrypted when it’s sent over the internet or stored in databases or backups, but it usually must be decrypted for an application to do anything with it. That offers a window of opportunity for attackers.

“While protecting data at rest and data in transit are long-established techniques, protecting data while it’s actively being processed has remained a harder challenge,” said Knight.

This is the problem confidential computing aims to solve. It works by isolating a workload in a secure enclave on a chip. This is a common approach to protecting payment information on mobile devices, for example. Other processes running on the same chip can’t peer over the enclave walls, preventing them from listening in on sensitive operations or peering into active memory.
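The gap between protecting data at rest and protecting data in use can be sketched with a toy example (illustrative only – the XOR "cipher" below is not real cryptography, and real deployments rely on hardware memory isolation rather than application-level code):

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" for illustration only -- not real cryptography.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(16)

# Data at rest: stored encrypted, unreadable without the key.
record = b"card=4111111111111111"
stored = xor_cipher(record, key)
assert stored != record

# Data in use: the application must decrypt into ordinary process
# memory before it can compute on the value...
plaintext = xor_cipher(stored, key)
digits = plaintext.split(b"=")[1]

# ...and during this window the plaintext is visible to anything able
# to read the process's memory (a compromised OS, hypervisor, or
# co-tenant). Confidential computing closes the window by performing
# this step inside a hardware-isolated enclave instead.
print(digits.decode())
```

In an enclave-based design, the decryption and computation happen inside the isolated region, so even a privileged observer on the same machine only ever sees ciphertext.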

AWS, Google Cloud, Microsoft Azure, and IBM Cloud all offer secure enclaves to their customers using confidential computing features in Intel and AMD processors. IBM also has its own proprietary confidential computing technology in the IBM Z chips.

“All the major cloud providers are rolling this stuff out,” said Mike Bursell, chief security architect at Red Hat, speaking at Arm’s Vision Day conference in March.


“I think it’s going to take a while: some process changes, a lot of engineering as well, a lot of work for me and everyone involved in engineering,” Bursell said. “But that’s where I want to be going – that it becomes transparent because it’s everywhere. It’s the right thing to do.”

Confidential computing can be enabled in any data center using either Intel's SGX (Software Guard Extensions) technology or the secure enclaves supported by AMD Epyc 2 server chips.

At first, the big difference between Intel’s and AMD’s approaches was the size of the secure enclave. Google said that its AMD Epyc 2-powered enclaves could go up to 896 gigabytes, while Intel SGX originally topped out at 128 megabytes. But in October Intel announced that the secure enclaves in its 3rd generation Intel Xeon Scalable processors (“Ice Lake”) would hold up to 1 terabyte. The new Intel chips launched in April.

There is no specific limitation on the size of the Arm CCA secure enclaves, said Knight. “It’s simply down to ensuring the platform has sufficient memory and processing resources to support the required concurrent workloads.”

Why Enterprises Use Secure Enclaves in Data Centers

According to a February survey by Pulse, sponsored by Arm, 33 percent of enterprises already use confidential computing.

The biggest use case for it was protecting data from platform administrators and service providers, chosen by 59 percent of respondents. Another 40 percent said it can prevent platform software, such as hypervisors, from accessing data, while 36 percent said it could offer protections in multi-tenant or multi-user environments.

For example, enterprises using IaaS platforms need to trust their providers not to access their data, said Knight. Their data hosted on these platforms might be accessible to systems and processes that they can’t easily audit.

With confidential computing technologies like Arm CCA, enterprises no longer have to rely on the provider's promise not to access their data: the provider cannot access data inside the secure enclave at all.

This benefits enterprises, but also the cloud and hosting providers themselves. “Arm CCA can help to reduce the risk that data center staff and systems are exposed to sensitive data,” he said.

The technology will appeal to CIOs, CISOs, and data center operations and security staff, said Dion Hinchcliffe, VP and principal analyst at Constellation Research.

“The promise here is to create such a secure operating mode that cybersecurity threats can be substantially reduced,” he told DCK. “This has the potential to cut cybersecurity and breach management costs considerably over the long term.”

Secure Enclaves’ Interoperability Challenge

Since secure enclaves are hardware-specific, converting workloads to work on the technology can be tricky.

There are no widely accepted standards for confidential computing yet, according to Gartner analyst Steve Riley.

“Google offers a framework called Asylo,” he told DCK. “Microsoft offers a framework called Open Enclave. Fortanix offers a framework called Runtime Encryption. To varying degrees, these attempt to reduce or eliminate application-specific coding to work with enclaves. But none is a declared standard, and it’s unlikely that one will ever emerge as a de facto standard.”

IBM and AMD address this issue by protecting entire virtual machines. This means the integration happens at the hypervisor level but introduces a potential area of risk should the hypervisor become compromised.

For Intel SGX, companies either need to rewrite their application to take advantage of secure enclaves or use a third-party technology like Fortanix.

Arm CCA also involves a change to the underlying hardware, said Knight. The new features can be accessed with the Realm Management Extension – “realms” being Arm’s terminology for secure enclaves. “But the architecture has been carefully designed so that existing software workloads can be migrated to platforms that support the Realm Management Extension with minimum effort.”

Arm has chosen to go with the virtualized approach, said Knight. “A typical workload running within a realm would be a Linux or Windows virtual machine or a container.”

Arm plans to collaborate with the industry on developing standards, he said, including working with the Confidential Compute Consortium. “We will be presenting more technical details this year.”

Arm in the Data Center

Arm doesn’t make its own chips. Instead, it licenses its RISC-based technology to manufacturers. The chips’ low cost and high energy efficiency made them appealing for use in phones and other smart devices, but Arm has yet to see widespread uptake in the data center market.

Arm chips power every iPhone and Android smartphone and most tablets. Apple’s newest M1 chips that power the latest MacBooks and iPads are Arm-based. About 100 billion Arm chips have shipped in the last five years, according to Arm.

But there are signs of momentum growing for Arm in the data center. Microsoft is reportedly working on an Arm processor design for its cloud platform. In 2018, AWS began offering EC2 A1 instances powered by its Arm-based Graviton processors. Both Microsoft Azure and Oracle Cloud have been sampling Arm server chips by Ampere for their platforms. Alibaba and Tencent are also using Arm chips, and Oracle said it plans to offer Arm-powered cloud computing services this year.

Nvidia, whose $40 billion bid to acquire Arm is currently moving through far-from-certain international regulatory approval processes, said last month that it was working on an Arm-based data center CPU, called “Grace,” for the most demanding AI workloads.

According to IDC, Arm’s server market share is tiny but growing. In the fourth quarter of 2020, the number of servers running on the Arm architecture was up 345 percent compared with a year earlier.

Last summer, for the first time ever, an Arm-powered system was named the world’s fastest supercomputer.

Arm-based servers could reduce upfront data center infrastructure costs by 30 to 60 percent, lower ongoing infrastructure costs by 15 to 35 percent, and lower total cloud infrastructure costs by up to 80 percent, Forrester said last year.

According to Forrester consultant Jan ten Sythoff, Arm processors have a smaller footprint, allowing for a higher density of cores per server, resulting in a lower number of total servers required. Arm servers are also more power efficient and require less cooling.
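Forrester's ranges translate into dollar terms as follows (the baseline figures below are hypothetical, chosen only to show the arithmetic, not taken from the Forrester report):

```python
# Hypothetical baseline costs -- illustrative numbers, not from Forrester.
upfront_baseline = 1_000_000   # upfront data center infrastructure ($)
ongoing_baseline = 400_000     # ongoing infrastructure, per year ($)

# Forrester's reported savings ranges for Arm-based servers:
upfront_savings = (0.30, 0.60)   # 30-60% lower upfront costs
ongoing_savings = (0.15, 0.35)   # 15-35% lower ongoing costs

for label, base, (low, high) in [
    ("upfront", upfront_baseline, upfront_savings),
    ("ongoing/yr", ongoing_baseline, ongoing_savings),
]:
    # Higher savings percentage means a lower remaining cost.
    print(f"{label}: ${base * (1 - high):,.0f} to ${base * (1 - low):,.0f}")
```

On these assumed baselines, a $1M upfront spend would fall to between $400,000 and $700,000, and $400,000 a year in ongoing costs to between $260,000 and $340,000.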

Meanwhile, Arm is investing heavily in supporting AI workloads and edge computing use cases.

“With Armv9-A, Arm will continue to focus on performance,” said Arm’s Knight, “helping to further increase power efficiency and compute density within the data center.”
