6 votes

A Domain-Specific Architecture for Deep Neural Networks

Tags: tpu, dsa

2 comments

  1. PennyOaken
    "Compared to CPUs and GPUs, the single-threaded TPU has none of the sophisticated microarchitectural features that consume transistors and energy to improve the average case, but not the 99th percentile case; that is, there are no caches, branch prediction, out-of-order execution, multiprocessing, speculative prefetching, address coalescing, multithreading, context switching, and so forth. Minimalism is a virtue of domain-specific processors."

    These are the sexiest few sentences I've read in months.

    1 vote
  2. Atvelonis
    I recently watched a lecture by David Patterson titled "A New Golden Age for Computer Architecture" that touches upon a number of topics related to instruction sets and other architectural elements, including reduced instruction set computer (RISC) architectures, Google's tensor processing unit (TPU), and domain-specific architectures (DSAs) in general. I'm relatively familiar with these concepts at a high level, but was interested in the technical references to DSAs he made throughout the talk. The paper by Norman P. Jouppi, Cliff Young, Nishant Patil, and David Patterson I've linked above describes the nature and function of DSAs as they can be applied to deep neural networks (DNNs). Microarchitecture is one of the more interesting areas of computer science to me; I don't work in the field, and I can't claim to have a great deal of technical knowledge in this regard, but I'm always interested to learn more about how it's continuing to develop.
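The determinism the quoted passage celebrates comes from the TPU's systolic matrix unit, where every multiply-accumulate fires at a statically known clock cycle decided by the operand shapes alone. Here is a toy Python sketch (my own names, not from the paper) of that weight-stationary schedule: PE(k, n) permanently holds one weight, activations arrive skewed by one cycle per hop, and the whole product drains after a fixed M + K + N - 2 cycles, with nothing data-dependent to cache, predict, or speculate on.

```python
def systolic_matmul(A, B):
    """Toy weight-stationary systolic schedule: PE(k, n) holds weight
    B[k][n]; activation A[m][k] reaches it at cycle t = m + k + n,
    so every multiply-accumulate fires at a statically known time."""
    M, K, N = len(A), len(B), len(B[0])
    C = [[0] * N for _ in range(M)]
    last_cycle = (M - 1) + (K - 1) + (N - 1)  # fixed, data-independent latency
    for t in range(last_cycle + 1):           # one iteration = one clock cycle
        for k in range(K):                    # each PE performs at most
            for n in range(N):                #   one MAC per cycle
                m = t - k - n                 # activation row at PE(k, n) now
                if 0 <= m < M:
                    C[m][n] += A[m][k] * B[k][n]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # → [[19, 22], [43, 50]]
```

A real MXU pipelines this across a 256x256 grid in silicon; the point of the sketch is only that the latency falls out of the shapes, which is exactly why the average-case machinery listed in the quote buys nothing here.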