5 votes

We're building computers wrong

7 comments

  1. [3]
    Tum
    It's a shit title: the video is about using analogue computers for neural network applications.

    12 votes
    1. cfabbro

      If you ever feel like the original title to something you're submitting is garbage, feel free to editorialize it in order to make it better/more accurate/less clickbaity/etc. We're not overly strict about titles being submitted as originally written here on Tildes. And if you would like to change this one now, please let me know what you would like it to be and I can edit it for you.

      10 votes
    2. gco

      Agreed. Although Derek has been transparent about why that is the case (he usually starts with a more normal-sounding title and changes it to something clickbaity to drive views; same for thumbnails), I'm still not completely comfortable with it. I'm likely in the minority in thinking that the original titles are good enough, but the subsequent ones are clearly not aimed at me.

      2 votes
  2. mtset

    This is interesting to me. First off, it's worth calling out that the device he's showing off with the Lorenz system is THAT (The Analog Thing), which is specifically a teaching/learning device; it's extremely cool and I suggest anyone interested in this topic look into it.

    The chip-scale devices he's demonstrating are very interesting; however, I am somewhat skeptical of their long-term utility. Digital NAND storage is useful because there is a large charge window within which the 1 and 0 bits are still recognized; significant charge leakage is acceptable over time without data loss. This is not remotely true in the analog use case, where the stored charge *is* the weight. It would mean that, say, a security camera using Mythic's chips would get worse at recognizing people and objects over time, and more so if it were deployed in a hot environment.
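    To make the degradation argument concrete, here's a toy sketch (not Mythic's actual cell behaviour, and all values are made up): a small linear classifier whose stored weights accumulate Gaussian drift, standing in for charge leakage on analog cells.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear classifier standing in for an analog inference chip.
    # Weights and inputs are hypothetical; the point is only the drift effect.
    w_true = rng.normal(size=(4, 16))          # pristine "programmed" weights
    x = rng.normal(size=(1000, 16))            # random input vectors
    labels = np.argmax(x @ w_true.T, axis=1)   # ground truth from pristine weights

    def accuracy(w):
        # Fraction of inputs still classified as the pristine chip would.
        return float(np.mean(np.argmax(x @ w.T, axis=1) == labels))

    # Model charge leakage as additive Gaussian drift on the stored weights.
    for sigma in (0.0, 0.1, 0.3, 0.6):
        w_drifted = w_true + rng.normal(scale=sigma, size=w_true.shape)
        print(f"drift sigma={sigma}: accuracy={accuracy(w_drifted):.3f}")
    ```

    A digital chip with the same drift on its storage would be fine until bits actually flip; here, every bit of drift immediately shows up in the output.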

    I think Mythic's chips are good for the same use cases an FPGA fills for digital circuits: situations where a) performance per watt is critical and b) the model needs to be updated over time. Were I engineering a system using these chips, I'd store the weights digitally, in some highly redundant (or large-feature-size) NAND on another chip. Once a model is sufficiently refined, and if the product run was large enough to justify it, I'd want to burn the resistances into silicon using, say, depletion-mode MOSFETs (which would be smaller and yet more power-efficient than misusing NAND cells). It's interesting to me that Derek didn't mention this; it would have taken maybe a few sentences at the end, though admittedly it's not something Mythic offers.

    I fear that Derek was motivated to talk about this not just because he finds it genuinely interesting, but because Mythic gave him a bucket of money to shill their chips. That's fine, but he really should disclose that.

    EDIT: With a little more effort, I've found a concept called the Electronically Programmable Analog Circuit, or EPAC. Because ePac, a packaging standard, and EPAC, the European Processor Accelerator Chip, a RISC-V implementation, are cluttering up the namespace, it can be hard to find literature on the topic, but here is a pre-2000 paper and a 2019 DigiKey (huge electronics supplier) article on the state of the industry, and why it's not as useful as people thought it might be. These techniques are far more flexible, and much more interesting for dynamic systems modelling, but also not something offered by the company being covered here. As /u/vektor points out, this is a very, very limited technology, in that it can only really compete when executing specific kinds of neural networks, and in that it's not nearly as flexible as other topologies. It does have a lot of advantages, but the lack of breadth in this video makes me cringe a little.

    12 votes
  3. [2]
    vektor

    I think this could be big, but for now I'm hugely sceptical: What is demonstrated so far is only a subset of current neural network models (dense layers), and it's only demonstrated for the feed forward case. This could be big if they could backprop on, say, a CNN or Transformer architecture. The computational structure they demonstrate is just full matrix multiplication, with a hard-coded matrix. Hardly impressive.
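    For anyone wondering what that computation actually is, this is the digital equivalent of what a crossbar with programmed conductances evaluates: one fixed matrix-vector product per layer, feed forward only. The weight values below are made up for illustration.

    ```python
    import numpy as np

    # Each output of an analog crossbar is a dot product of the input
    # voltages with one row of a fixed weight matrix W. These numbers
    # are hypothetical, chosen only to show the shape of the computation.
    W = np.array([[0.2, -0.5, 1.0],
                  [0.7,  0.1, -0.3]])
    b = np.array([0.1, -0.2])

    def dense_forward(x):
        # One feed-forward dense layer: multiply-accumulate, then a
        # nonlinearity (ReLU here). No gradients, no weight updates.
        return np.maximum(W @ x + b, 0.0)

    print(dense_forward(np.array([1.0, 0.0, 2.0])))  # -> [2.3 0. ]
    ```

    Everything beyond this one fixed multiply-accumulate (convolutions, attention, and especially backprop's weight updates) is exactly the part the demo doesn't cover.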

    3 votes
    1. Tum

      Yeah, I've not seen them used for training neural networks: consumer devices that use neural networks run the already-trained network (which is feed forward), so these chips could deliver significant energy savings for (for instance) mobile devices. Current chip fabrication is also heading towards more 'system on a chip' design, making it easy to add this to future devices.

      That being said, I agree it would be awesome if they could implement back-prop or CNN using this analogue approach.

      1 vote
  4. yellow

    I looked into stuff like this years ago for a college assignment. One of the papers I found was this one. The idea is to use an analog neural network to process an image directly from a camera chip without converting it to digital.

    2 votes