Ever since Intel announced Xe, its next-generation graphics architecture, there’s been speculation and discussion about what kind of GPUs the company will bring to market. Ice Lake’s Gen 11 graphics are an important step along this path, delivering a substantial performance uplift over earlier integrated GPUs. Rumors have suggested that Intel’s next-generation 10nm CPU, codenamed Tiger Lake, will ship with 96 EUs, and a new bit of information suggests at least one model of Intel’s upcoming dGPU will feature that many EUs as well.
Hot Hardware spotted the data via Twitter user Komachi, who found the following document on the Eurasian Economic Commission (EEC) website:
The “96EU” remark implies that this is a Tiger Lake-style configuration, while “Alpha” points to hardware that is still very early in development. I’m not going to speculate too much on how much performance Intel gains by moving from 64 to 96 EUs, though we’d expect that kind of shift to boost performance by 1.5x on paper, assuming no other significant bottlenecks in the design prevent it from scaling. Since we’d be dealing with a standalone card, we can probably assume clocks equal to or higher than an Ice Lake laptop part. With eight FP32 ALU lanes per EU, this works out to a 768 shader-core configuration (though AMD, Nvidia, and Intel GPUs all perform a different amount of work per core, so GPUs can’t be compared directly on core count).
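To put rough numbers on that, here’s a minimal back-of-the-envelope sketch. The 1.1GHz figure matches the Iris Plus G7’s rated boost clock; the discrete card’s clock is unknown, so the same value is assumed purely to isolate the effect of the extra EUs:

```python
# Paper FP32 throughput for Gen 11-style GPUs.
# Each EU has 8 FP32 ALU lanes; an FMA counts as 2 ops per lane per clock.
def fp32_gflops(eus: int, clock_ghz: float, lanes_per_eu: int = 8) -> float:
    return eus * lanes_per_eu * 2 * clock_ghz

ice_lake_g7 = fp32_gflops(64, 1.1)  # ~1,126 GFLOPS (Ice Lake Iris Plus, 64 EU)
dgpu_96eu = fp32_gflops(96, 1.1)    # 96 EU at the same assumed clock

print(f"64 EU @ 1.1GHz: {ice_lake_g7:.0f} GFLOPS")
print(f"96 EU @ 1.1GHz: {dgpu_96eu:.0f} GFLOPS")
print(f"Paper scaling:  {dgpu_96eu / ice_lake_g7:.2f}x")  # 1.50x
```

If the discrete part clocks higher than a 15W laptop chip, as seems likely, the real gap would be larger than the 1.5x paper figure.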
This might not sound very exciting at first, since laptops aren’t exactly known for high-end performance, but pairing a low-end GPU with its own dedicated memory bandwidth can pay huge dividends compared to using a local iGPU. There are two different spots in the historical record we can consult on this.
First, let’s look back at some data from 2014 comparing the graphics of AMD’s Kaveri-based APU (the A10-7850K) against the low-end Radeon R7 250. There’s a specific reason to revisit this configuration: the A10-7850K used a 512:32:8 layout (shaders, texture units, ROPs), while the R7 250 used a 384:24:8 build. Clock speed balanced this out: the A10-7850K ran at 720MHz, while the R7 250 ran at 1GHz. The two GPUs came out nearly identical in overall processing capability, and the A10-7850K held an advantage in texture units. The only advantage the R7 250 had was substantially more memory bandwidth, approximately 74GB/s of dedicated bandwidth versus ~34.1GB/s of shared.
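A quick sketch of the shader math, using the figures above, shows why the two parts were so close on paper (GCN stream processors execute one FMA, or two FLOPs, per clock):

```python
# Peak FP32 throughput for a GCN-era GPU: shaders x 2 ops (FMA) x clock.
def fp32_gflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz

a10_7850k = fp32_gflops(512, 0.72)  # 512 shaders at 720MHz -> ~737 GFLOPS
r7_250 = fp32_gflops(384, 1.0)      # 384 shaders at 1GHz   ->  768 GFLOPS

print(f"A10-7850K iGPU: {a10_7850k:.0f} GFLOPS, ~34.1GB/s shared DDR3")
print(f"Radeon R7 250:  {r7_250:.0f} GFLOPS, ~74GB/s dedicated GDDR5")
```

With compute throughput nearly matched, any performance gap between the two largely reflects the memory bandwidth difference.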
TechSpot compared the two solutions at the time. Here are a couple of indicative results; I’ve outlined the two specific results we’re comparing in blue boxes.
Performance in Metro: Last Light improved by ~40 percent when moving from integrated to discrete, courtesy of the R7 250’s 128-bit GDDR5 interface, which offered far more bandwidth than the shared DDR3 memory the Kaveri APU had to work with.
The performance impact of additional memory bandwidth is even larger here. At 1280×800, the integrated GPU isn’t too far behind the dGPU, but once you increase the resolution, the APU’s performance drops off sharply.
This is useful comparison data, but it’s also rather old. I can’t find any truly comprehensive written reviews of the RX 550 versus the 3400G or 2400G, and the few game results I’ve found tend to imply parity between the two solutions. This YouTuber claims somewhat different results, with a few tests showing a larger gap between the two cards, as shown below:
Results like this appear to be the exception rather than the rule, however; the RX 550 and Ryzen 5 3400G are much closer in performance than the old R7 250 and A10-7850K were.
The 3400G is a 704:44:16 part with a 1.4GHz GPU clock and 51.2GB/s of shared memory bandwidth, while the RX 550 is a 640:40:16 part at 1.18GHz with 112GB/s of dedicated memory bandwidth. In this case, there’s less of a performance difference between the two, though the RX 550 still delivers more real-world performance overall. The Ryzen iGPU is positioned much more strongly relative to the RX 550 than in our previous comparison, where the R7 250 demolished the A10-7850K despite leading only in memory bandwidth.
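Running the same paper math on these figures (as specced above) shows why the bandwidth column is the one to watch: on raw shader throughput the 3400G’s Vega 11 is actually ahead, so the RX 550’s real advantage is its dedicated memory pool:

```python
# Peak FP32 throughput (shaders x 2 ops per clock x clock), plus memory bandwidth.
def fp32_gflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz

ryzen_3400g = fp32_gflops(704, 1.4)  # ~1,971 GFLOPS, 51.2GB/s shared DDR4
rx_550 = fp32_gflops(640, 1.18)      # ~1,510 GFLOPS, 112GB/s dedicated GDDR5

print(f"Ryzen 5 3400G (Vega 11): {ryzen_3400g:.0f} GFLOPS, 51.2GB/s shared")
print(f"Radeon RX 550:           {rx_550:.0f} GFLOPS, 112GB/s dedicated")
```

That the discrete card is almost never slower despite trailing on paper compute underscores how much a dedicated memory pool is worth at the low end.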
There are a lot of reasons for the potential difference. AMD’s entire CPU architecture is different, as is its APU interconnect. Ryzen’s Vega-derived graphics cores are still based on GCN, but on a later, more efficient version of the architecture. Memory bandwidth on the Ryzen platform is also intrinsically higher than what Kaveri had to work with, boosting the iGPU’s comparative performance.
The takeaway is this: even if Intel launches a low-end card with a configuration similar to its iGPU, overall dGPU performance is very likely to be higher, but we can’t really judge by how much. There’s no way to perform this kind of comparison with an Nvidia card, and we have older AMD results that point in one direction and newer results that imply a smaller gap for some configurations. Still, the RX 550 is almost never slower than the 3400G, despite running at lower clocks.

If Intel’s goal is to challenge in the low-end and midrange markets before making a play for the high end, bringing a lower-tier part to market first makes sense. Intel may be looking for a chip that lets it challenge Nvidia’s lower-end parts in laptops and the occasional desktop more than it wants to bring a huge-die product to market for gamers. Every dollar of OEM laptop spend dedicated to a non-Intel GPU is a dollar of profit Intel isn’t capturing. Intel’s CEO, Bob Swan, has openly stated that he intends to focus on holding 30 percent market share across a huge range of markets rather than laser-focusing on 90 percent market share in the CPU space. Taking more space in critical consumer markets is key to doing that.
Now Read:
- Gearbox CEO Randy Pitchford, Xbox’s Phil Spencer Are Fighting Over Moore’s Law
- How Much Does It Matter That PCs Are Faster Than Consoles?
- AMD and Intel Go Head to Head in the Surface Laptop 3