6 votes

A look inside Apple’s A13 Bionic chip and what it tells us about the future of mobile technology

2 comments

  1. Akir

    I really hate this kind of article. It doesn't look 'inside' the chip; it doesn't even open past the first page of the (imaginary) whitepapers where it gives you the quick overview. Basically everything about these chips is wrapped in mystery as far as I'm aware. Nobody seems to be able to give a satisfactory answer for what exactly the "neural engine" actually does or how it works. They talk about how vertical integration helps, but when they go into examples they talk about how it accelerates text-to-speech - a technology that has literally been around for decades!

    Seriously, if someone publishes an article that says "Product X is the best" without explaining why, it's not journalism. Journalism requires research. This is just a regurgitation of Apple's marketing.

    3 votes
    1. onyxleopard

      > Nobody seems to be able to give a satisfactory answer for what exactly the "neural engine" actually does or how it works.

      I’m no expert in semiconductors, but as far as I can tell, the “neural engine” comprises processors that are not general purpose (as CPUs are), but silicon designed specifically to do the operations that come up constantly when doing inference (i.e., making predictions) with deep neural networks. That mostly means multiplying large matrices together, many times over. So, while you wouldn’t use the neural engine’s cores for all compute, you would use them for things like text-to-speech models; searching for events in the natural language of users’ emails and automatically populating suggestions in their calendars; all sorts of models behind Siri; predictions about travel destinations in Maps; facial recognition; automated video stabilization; compositing multiple camera exposures for Night Mode; accidental touch rejection; Apple Pencil input; handwriting recognition; language models for typing correction/prediction; and probably thousands of other statistical models that make the user experience better. Basically, as ML engineers find more and more functions to fit (i.e., models to train), it becomes more and more valuable to have dedicated hardware that runs those models quickly and offloads the work from the general-purpose CPUs.
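
      For a concrete sense of what that offloading looks like from the developer side, here's a minimal sketch in Swift. It isn't Apple's actual pipeline, and the model file name is hypothetical; the real knob is Core ML's MLModelConfiguration.computeUnits, which lets the framework schedule eligible layers on the Neural Engine:

      ```swift
      import CoreML

      // Minimal sketch: ask Core ML to use any available compute unit,
      // including the Neural Engine, when running a compiled model.
      let config = MLModelConfiguration()
      config.computeUnits = .all  // CPU, GPU, and Neural Engine are all candidates

      do {
          // "SpeechModel.mlmodelc" is a hypothetical compiled model, used here
          // only to show where a text-to-speech network would plug in.
          let url = URL(fileURLWithPath: "SpeechModel.mlmodelc")
          let model = try MLModel(contentsOf: url, configuration: config)
          print(model.modelDescription)
      } catch {
          print("Failed to load model: \(error)")
      }
      ```

      Notably, which layers actually land on the Neural Engine is decided by Core ML at load time; developers never target it directly, which is part of why its internals stay so opaque.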

      3 votes