10 votes

Google unveils custom Arm-based chips, following similar efforts at rivals Amazon and Microsoft

1 comment

  1. skybrian

    From the article:

    Google is trying to make cloud computing more affordable with a custom-built Arm-based server chip. At its Cloud Next conference in Las Vegas on Tuesday, the company said the new processor will become available later in 2024.

    With the new Arm-based chip, Google is playing catch-up with rivals such as Amazon and Microsoft, which have been employing a similar strategy for years. The tech giants compete fiercely in the growing market for cloud infrastructure, where organizations rent out resources in faraway data centers and pay based on usage.

    Market leader Amazon Web Services introduced its Graviton Arm chip in 2018. "Almost all their services are already ported and optimized on the Arm ecosystem," Chirag Dekate, an analyst at technology industry researcher Gartner, told CNBC in an interview. Graviton has picked up business from Datadog, Elastic, Snowflake and Sprinklr, among others.

    Google has used Arm-based server computers for internal purposes to run YouTube advertising, the BigTable and Spanner databases, and the BigQuery data analytics tool. The company will gradually move them over to the cloud-based Arm instances, which are named Axion, when they become available, a spokesperson said.

    Datadog and Elastic plan to adopt Axion, along with OpenX and Snap, the spokesperson said.

    Broader use of chips drawing on Arm's architecture might lead to lower carbon emissions for certain workloads. Virtual slices of physical servers containing the Axion chips deliver 60% more energy efficiency than comparable VMs based on the x86 model, Google cloud chief Thomas Kurian wrote in a blog post. Arm chips, which are popular in smartphones, offer a shorter set of instructions than x86 chips, which are commonly found in PCs.

    This isn’t so unusual these days, but I remember being very impressed by company presentations about how the Google hardware folks were building their own data centers, stocking them with custom board-level server designs, and building their own network switches. It all seemed like very deep tech at the time. They were playing the CPU vendors off each other to get deep discounts, threatening to switch CPU architectures, and fixing all the portability bugs in their software so they could actually run it and make the threat credible.
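    For a sense of what those portability bugs look like (a hypothetical illustration, not anything from Google's actual codebase): one classic x86-to-Arm gotcha is that plain `char` is signed on x86 Linux but unsigned on Arm Linux, so code that stashes negative values or EOF in a `char` silently changes behavior across the two architectures.

    ```c
    #include <stdio.h>

    int main(void) {
        /* Plain `char` signedness is implementation-defined:
           signed on x86 Linux, unsigned on Arm Linux. */
        char c = (char)0xFF; /* -1 on x86, 255 on Arm */
        if (c < 0)
            printf("plain char is signed here (typical on x86)\n");
        else
            printf("plain char is unsigned here (typical on Arm)\n");

        /* Portable fix: spell out the signedness you actually need. */
        signed char sc = (signed char)0xFF;
        printf("signed char value: %d\n", sc); /* -1 on both architectures */
        return 0;
    }
    ```

    Compiling with `-Wall` and the compiler's `-fsigned-char`/`-funsigned-char` flags is one way to flush out code that depends on the default, which is exactly the kind of cleanup that makes "we could switch architectures" a credible negotiating position.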

    I guess nowadays they’re probably doing similar negotiations with fabs? Anything to cut costs.

    I’m sure the efficiency improvements will continue, but not enough to keep up with increased usage. I wonder how bad the AI space crunch is?

    4 votes