1 vote

PS5 leak exposes laziness in PC tech channels

13 comments

  1. [7]
    cptcobalt
    Link

    This video was a bit ranty, and assumes you have previous knowledge of the PS5 leak. Well, I didn't—so I did some research. Basically, there don't seem to be any credible, substantiated leaks. Nothing that passes my sniff test. These could be true, but also possibly aren't:

    • A dev kit spec leak which seems to be old: 8-core Zen 2 @ 3.2 GHz boost, 12.6-14.2 TF Navi GPU, 24 GB GDDR6 RAM.
    • AMD Gonzalo: a codename decoded from found CPU specs, pointing to an 8-core Zen 2 @ 3.2 GHz boost. I've seen some references claiming the string also points to Navi, but the tweet that decodes it is too contested for me to believe it.
    • Other rumors without substantive evidence point to the idea that this'll have a Navi GPU, but I can't find anything real (e.g. non-fanboy commentary) about this.

    Boy, it's hard separating the evidence from the fanboy commentary and guesses. I might've missed things, but I'm also not convinced there's much out there about the PS5.

    6 votes
    1. [5]
      WinterCharm
      (edited)
      Link Parent

      Thanks for providing those links and context. And yeah, the video is definitely ranty, which is why I put it in the tag ahead of the title. Just wanted to warn people before they jumped in :)

      People do not realize just how much Nvidia is sandbagging their own lineup right now. In the past, each GPU generation was a 50-80% leap in performance, even more if they moved to a new node. Once Nvidia got ahead of AMD they REALLY slowed down, and now they're sitting on top eating profit. The 2080Ti is barely 30% faster. It should be the next "2080", but their performance leap was SO good that they bumped their entire lineup "up the chain" and kept it on the 12nm node. The 2080Ti probably costs around $400 to produce, and the cards are making $800 in profit. (12nm wafers are cheap; even at a massive die size, yields are decent enough.)

      What I mean is that the 1660Ti being marketed as an x60 card is really a 1650Ti. And because it's on 12nm, it's SO cheap to produce: tiny die area, tiny card, old node, and massive profit margins for Nvidia. And yet, Nvidia has priced it at the x60 price.

      I'm amazed that people forgot that the 980Ti is about the same speed as a 1070. This pattern has been repeating since the 580Ti and 670, the 680Ti and 770, and so on. It's not that unusual, and the leaps get bigger with a die shrink. The other thing is that GPU memory typically doubled every few years: we went from 1GB 750Tis to 4GB 1050Tis to the 6GB (instead of 8) 1660Ti...

      Also, 20-24GB of memory isn't outlandish. Computers today that are capable of 4K gaming have 16GB of RAM and 8GB of GPU memory (for a total of 24GB). Keep in mind that consoles share the x86 architecture now, and their chips share a single memory pool... 20-24GB isn't surprising or outlandish if you account for console optimization being able to save you some GPU memory.

      The 2070 should be about the same speed as a 1080Ti. It's not... it's behind the 1080Ti, because Nvidia put it there, and people are eating this up... and the price increase this generation has been insane because they're marketing it as some miracle of engineering. The reality is that it's a node behind. Holy hell, their architecture is good, but there's no way it's worth that much. It doesn't match the context of performance being delivered by companies at the very head of silicon architecture on 7nm.

      Just to be clear, by context I mean "expected performance increases by everyone else", not raw performance. For example, Apple's 12nm to 7nm push gave them 89% better performance over just 2 years. They've been doing this since 2010, when they launched their first A4 chip, delivering 80-100% performance jumps between generations like clockwork. Use that to put into context how weak Nvidia's "we worked so hard to make it 30% faster, and now we're charging you $1200" argument looks.
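
      As a rough illustration of how those per-generation gains compound, here's a tiny sketch. The rates are just the figures quoted above (80-100% per generation for Apple versus the 2080Ti's ~30%), not measured benchmarks, and the generation count is arbitrary:

      ```python
      # Compounding sketch: the per-generation rates are the comment's figures, not benchmarks.
      for label, gain in [("~30% per gen", 0.30), ("80% per gen", 0.80), ("100% per gen", 1.00)]:
          total = (1 + gain) ** 4  # compounded over 4 hypothetical generations
          print(f"{label}: {total:.1f}x faster after 4 generations")
      ```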

      These cards are so overpriced right now and there will be a reckoning when the next consoles come out and are capable of 4k 60Hz at $500. People are going to be in denial until they see it. Then they'll be angry... (stages of grief).

      I would encourage anyone to stay away from the RTX lineup like the cancer it is, unless you're comfortable burning money. If you have that kind of money and don't mind, that's fine. But I would not buy it thinking that you're going to stay ahead of the next-gen consoles.

      3 votes
      1. [4]
        babypuncher
        Link Parent

        To be fair, much of Nvidia's cost in their RTX chips is in the RT cores. There is enough extra silicon there to call it an upgrade; they just chose to use it to add new features that currently offer little value instead of increasing performance.

        1. [3]
          WinterCharm
          Link Parent

          You're right that the RT cores add value, but the real value is in the Tensor cores. In the professional space they're used for AI workloads.

          The die size has gone up, but it doesn't justify the cost increase. 12nm is ½ the price it was before because the process has matured so much, and it was already just a 16nm++ process (16, 14, 12 are just 16, 16+ and 16++).

          Let's do some math:

          The 2080Ti die measures 31 mm x 25 mm, or 775 mm² (source).

          The wafer diameter typically used for such large dies is 450mm, which has been commercially available since 2017 (source).

          At 31mm x 25mm on a 450mm-diameter wafer, you get about 170 dies per wafer (using this handy calculator). Now, let's assume poor yields even though 12nm silicon is mature: at 50% yield (which is piss poor for 16nm++; I'd expect 75%+ at this die size), that leaves Nvidia with ~85 dies per wafer that they can actually use.

          The 2019 price for a 450mm wafer is $3.00 per square inch (source). At 255 square inches per 450mm wafer, that's $765 per wafer to produce, plus markup, manufacturing, etc. Silicon manufacturing is a premium business and the margins are huge, so let's go with a 25x markup to account for all the manufacturing, packaging, and so on: $19,125 for our estimated 85 usable 2080Ti dies, or $225 per die. Add about $100 of VRAM (GDDR6, 8GB) and the raw cost of each card is around $325. Add the PCB and power delivery components and you're looking at another $50. Normally, we'd expect those cards to sell at $700, with a 100% markup (typical of Nvidia).

          Even after overestimating the cost at every step, there's still no way it comes anywhere close to justifying a $1200 price tag.
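
          For anyone who wants to check the arithmetic, here's a quick sketch of the same back-of-envelope estimate. Every input (450mm wafer at $3.00 per square inch, 170 gross dies, 50% yield, 25x markup, $100 of VRAM, $50 of board components) is an assumption from this comment, not a published Nvidia or fab figure:

          ```python
          # Back-of-envelope 2080Ti cost estimate, reproducing the numbers above.
          # Every input is an assumption from the comment, not a published figure.
          gross_dies_per_wafer = 170                 # 31mm x 25mm dies on a 450mm wafer (die calculator)
          yield_rate = 0.50                          # deliberately pessimistic for a mature 12nm process
          usable_dies = int(gross_dies_per_wafer * yield_rate)       # ~85

          wafer_price = 3.00 * 255                   # $3.00/sq in * ~255 sq in = $765 per raw wafer
          all_in_wafer_cost = wafer_price * 25       # 25x markup for fabbing, packaging, etc.

          cost_per_die = all_in_wafer_cost / usable_dies              # ~$225
          card_cost = cost_per_die + 100 + 50        # + GDDR6 VRAM + PCB/power delivery

          print(f"usable dies per wafer: {usable_dies}")
          print(f"cost per die:          ${cost_per_die:.0f}")
          print(f"raw card cost:         ${card_cost:.0f}")
          print(f"price at 100% markup:  ${card_cost * 2:.0f} (vs. a $1200 retail price)")
          ```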

          2 votes
          1. [2]
            babypuncher
            Link Parent

            If Nvidia's profit margin really is that astronomical on these RTX chips then Navi should have no trouble competing with them.

            I'm still a little skeptical though, because the brand new Radeon VII is priced and performs right alongside the RTX 2080, but lacks any equivalent to the RT and Tensor cores. If the margins on these cards really were that high then AMD could have easily lopped $150 off the Radeon VII and absolutely destroyed the 2080 in $/performance comparisons. Either AMD's chip design is many years behind, or they are paying a lot more for fabrication than Nvidia.

            2 votes
            1. WinterCharm
              Link Parent

              It’s on a newer process (lower yields) and HBM2 is really expensive, and they put 16GB of HBM2 on this thing.

              It’s a cut-down MI50 that generally retails for much more.

              In compute it’s better than a 2080Ti, assuming you don’t need the fixed-function Tensor cores.

              The Radeon VII is a server binning reject being sold basically at cost to demonstrate that AMD has some product in the upper range.

              New nodes are significantly more expensive. Don’t underestimate how much that cost scales out.

              At the same time, the only reason Nvidia is able to take the piss here is because AMD is not competitive. The company is only now starting to make money again after the long winter of shitty AMD CPUs and then shitty AMD GPUs, as all the R&D money went into Ryzen... Navi will be their first decent GPU for consumers in a long while. And it’ll still target the midrange this year. It won’t be until 2020 that AMD will target the high end.

              1 vote
    2. babypuncher
      Link Parent

      3.2 GHz Zen 2 cores should be an absolutely massive upgrade over the wimpy Jaguar cores in the current consoles. The bump in IPC alone is pretty big.

  2. [6]
    WinterCharm
    Link

    By the time the PS5 comes out in 2020, it will have a 7nm GPU capable of 4K Ultra settings at 60fps and 1440p ultra at 144fps, and still cost $400... Progress in tech is going to happen. So many people have been duped by the silly narrative, pushed by Nvidia and swallowed hook, line, and sinker, that 12nm is hard and the 2080Ti is some miracle of engineering.

    Nvidia's 2080Ti is a wildly overpriced GPU that is really a $600 card. We have seen this EXACT pattern play out before: the PS4 was promised to deliver 1080p Ultra at 60fps when the best PC graphics card from 2 years earlier was barely capable of 1080p Ultra. And yet, two years later, we saw top-tier performance from the PS4.

    If you doubt the progress in silicon, look at where Apple is with their SoCs. Arguably, they absolutely LEAD the silicon engineering race -- first to deploy 7nm AT SCALE, with SPEC2006 performance touching that of an 8700K. If you look at their progress, with each CPU being 40-50% faster than the one before it YEAR AFTER YEAR, you see the true scale of progress...

    3 votes
    1. [2]
      babypuncher
      Link Parent

      By the time the PS5 comes out, in 2020, it will have a 7nm GPU that will be capable of 4K Ultra settings at 60fps, and 1440p ultra at 144fps and still cost $400

      You're assuming developers don't decide to use that extra horsepower for shinier graphics. A PS4 Pro could probably play games at 4k/144fps if they looked like Quake 3.

      1 vote
      1. WinterCharm
        Link Parent

        1440p ultra is for VR, and 4k60 is the limit of what most TVs can do today; there's not much point in pushing for something most people's TVs won't even be able to handle.

        Also, it's undeniable that beautiful visuals really sell a game.

        1 vote
    2. [3]
      Kraetos
      Link Parent

      Ok, so where's the AMD card delivering 2080 Ti performance at $600?

      1. [2]
        WinterCharm
        Link Parent

        It’ll be out in 2020, as the so-called “big Navi”, and in the PS5.

        1. Kraetos
          Link Parent

          So Nvidia has an 18-24 month lead on AMD here and will be close to shipping second-gen RTX cards by the time these things ship. That's a long time to have no competition, and it doesn't really support the idea that this is “easy” or that Nvidia is “sandbagging.”

          1 vote