- AMD EPYC "Turin" with Zen5 cores rumored to feature maximum TDP of 600W - VideoCardz.com
-
Watercooled servers about to become mainstream I guess. Either that or the entire empty space in the server is gonna be heatsink
ID: hie9ldz Funny you say that, I was just reading an article about more companies moving to fully immersed servers in mineral oil or whatever non-conductive liquid they use.
ID: hiea15x Yeah, it saves power wasted on moving air and lets processors run 10-20% faster. That's a more competitive service.
ID: hiea1t4 I read that as well a while back, and kinda put 2 and 2 together. It's what makes the most sense in terms of cooling huge banks of servers, plus it also dampens vibrations, meaning servers likely won't need to spend so much on huge enterprise SSDs and can stick to even larger-capacity HDDs, since less heat and vibration should benefit HDDs more.
ID: hiefizuIs there any reason for the servers to NOT be a solid blob of metal heatsinks? Who needs a case, just have a massive heatsink that acts as protection.
I'm being somewhat serious here.
ID: hiefw6g The bottleneck is the die-to-IHS contact area.
So having a gigantic heatsink wouldn't bring meaningful changes past a certain threshold.
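A rough way to see why a bigger heatsink stops helping is to treat the cooling stack as thermal resistances in series. This is a minimal sketch with made-up ballpark numbers, not measured values:

```python
# Illustrative series thermal-resistance model of a CPU cooling stack.
# All resistance values are made-up ballpark figures, not measurements.
R_DIE_TO_IHS = 0.10    # K/W, die -> TIM/solder -> IHS (limited by contact area)
R_IHS_TO_SINK = 0.05   # K/W, IHS -> heatsink base

def die_temp_rise(power_w: float, r_sink_to_air: float) -> float:
    """Die temperature rise over ambient for a given heatsink-to-air resistance."""
    return power_w * (R_DIE_TO_IHS + R_IHS_TO_SINK + r_sink_to_air)

# Halving the heatsink's resistance over and over helps less and less,
# because the fixed die-to-IHS term starts to dominate.
for r_sink in (0.20, 0.10, 0.05, 0.025):
    print(f"R_sink={r_sink:.3f} K/W -> dT = {die_temp_rise(600, r_sink):.0f} K at 600 W")
```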
ID: hiejbl5 One part is what the other user stated; the other part I can think of is grounding. The main use of a case is ease of handling and grounding, and I don't think you should ground anything through a CPU, where any change on the millivolt scale can create issues.
ID: hif5hun We had a meeting with our server vendor, Cisco, and they said water-cooling is coming soon.
ID: hifbx2n Not surprising. 95% of their server line is INTC ; - )
ID: hifrpyg Looks like they're going to have to convert their rooms into freezers. IT is going to need freezer gear before going in.
ID: hig0fcl Undersea servers.
ID: hig4dxv Microsoft actually did something like this.
-
Milan-X is not even announced yet, let alone Zen 4, let alone Zen 5.
This seems like really pointless speculation for what is a 2023 or 2024 product.
-
How seriously can you take a "leak" that shows zero information other than a hypothetical power draw?
ID: hieotpi Not very. I wouldn't be shocked if we saw a 400+ watt part in the next few years, but personally I struggle to imagine them "changing the game" with 600W+ when a lot of the world only has 1440-watt power budgets anyway.
ID: hif5m7k Well, it fits in the same chassis used today. We do 2x280W now, so 1x600W isn't that hard as long as that 600W isn't literally double the density at the source.
I looked at a 240W 1U Epyc the other day; the heatsink isn't all that large. It just has reliable, consistent airflow over a large surface area.
ID: hif951c Then you need to look at Tesla's Dojo compute module. One slide showed 10,000 amps going into that module. Of course, that is bleeding edge.
ID: hifiqzk Who only has 1440W? Is this a 110V thing I'm too privileged to know about? I would say 2200-2860W is the norm? (10-13A)
ID: hif8itz A lot of the world has 1440W? Is that a power budget from another country?
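For context on where those numbers come from, a quick sanity check, assuming a 120V/15A circuit derated to 80% (the North American case) versus a 220V circuit at 10-13A:

```python
# Rough per-circuit power budgets behind the 1440 W and 2200-2860 W figures above.
# Assumes a single branch circuit; real racks typically use higher-amperage PDUs.

def circuit_budget_w(volts: float, amps: float, derate: float = 1.0) -> float:
    """Continuous power budget for a branch circuit, optionally derated."""
    return volts * amps * derate

print(circuit_budget_w(120, 15, derate=0.8))  # 1440 W (80% continuous-load derating)
print(circuit_budget_w(220, 10))              # 2200 W
print(circuit_budget_w(220, 13))              # 2860 W
```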
ID: hifpq4t The source regularly leaks information about AMD products. You can scroll back through their history to validate their previous claims (although you'll have to scroll pretty far to find anything you can actually confirm), but they have accurately leaked things like Milan and Ryzen 5000 mobile parts' core counts, base clocks, TDPs, and boost clocks well before official release of that information.
-
I think it will be some special use-case SKU rather than a regular top-of-the-line product...
-
Didn't a company just recently reject Intel Ice Lake for their servers because the TDP was too high? I can't imagine that many companies looking to upgrade are going to be happy about this.
ID: hiepnx0 That's Cloudflare, and that ain't what they said. They said that to achieve the same performance as Milan, Intel's Ice Lake needs a lot more power and it'd blow their power budget.
ID: hieo6ng It's 256 cores, so that's a little over a watt per thread. You can't just look at the TDP in isolation.
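Checking that figure, assuming the rumored 256 cores keep 2-way SMT (512 threads) at a 600W TDP:

```python
# Back-of-the-envelope power per core and per thread for the rumored part.
TDP_W = 600
CORES = 256
THREADS = CORES * 2  # assuming SMT stays at two threads per core

print(f"{TDP_W / CORES:.2f} W per core")      # ~2.34 W
print(f"{TDP_W / THREADS:.2f} W per thread")  # ~1.17 W, i.e. "a little over a watt"
```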
ID: hif3a1p I recall seeing a quad-CPU Xeon server on eBay.
It had dual-core Netburst chips, each with around a 150W TDP.
Even a Ryzen 1600 would s*** on that server for almost all CPU workloads.
ID: hif79gf Things aren't often rejected due to TDP, but instead due to performance/W.
Generally, a single rack has a max power budget. After accounting for misc stuff like networking, that leaves a maximum number of watts per RU (rack unit); a rough sketch of the math follows after this comment.
So the question becomes something like:
How much performance can I get with 700W per U?
And third-gen Epyc, in most scenarios, gives more than Ice Lake. AVX-512 workloads are one well-known case where Ice Lake wins, but there are a few other niche cases as well.
So Intel has to compete where the max power per rack is not a primary concern, where they sell on reputation or ignorance, or with bigger discounts. Or in cases where the CPU part is not much of a factor -- fully loaded with RAM, lots of storage, networking, and/or GPUs, the CPU portion of the power may not be very large. Also, their PCIe4 implementation is slightly faster, which might matter in some rare cases.
The recent 3rd gen Epyc systems I have used have surprisingly good idle and full load power, and in the full load case, tweaking the BIOS to lower the max power doesn't hurt performance very much, so it is easy to target the max power that a rack can consume and then run final tweaks to tune it.
Intel might have lower idle CPU power, but once you have enough SSDs or HDDs plus misc other stuff 'on' all the time, most of the idle power isn't from the CPU anyway, and the entire point of having such servers is that they are used, not idle.
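A rough sketch of that budgeting exercise; the rack cap, overhead, and usable-RU numbers below are made up for illustration and are not from the comment above:

```python
# Toy rack power-budget math: watts available per rack unit, and whether a
# given node configuration fits. All numbers are illustrative assumptions.

RACK_BUDGET_W = 30_000   # assumed per-rack power cap
OVERHEAD_W = 1_500       # switches, management, misc
USABLE_RU = 40           # 42U rack minus space for networking gear

per_ru_budget = (RACK_BUDGET_W - OVERHEAD_W) / USABLE_RU
print(f"~{per_ru_budget:.0f} W available per 1U node")  # in the ballpark of 700 W/U

# Does a given node configuration fit the per-RU budget?
for label, node_w in [("2 x 280 W sockets (today)", 2 * 280),
                      ("1 x 600 W socket (rumored)", 600)]:
    verdict = "fits" if node_w <= per_ru_budget else "over budget"
    print(f"{label}: {node_w} W -> {verdict}")
```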
ID: hifklfy Efficiency is the problem, not max power. It could consume a shitload of power in absolute terms, but if it's efficient, that's fine. Perf/W is one of the ultimate metrics, and if there's a significant difference, the better one is likely going to win.
-
Tbh, as long as perf/watt and perf/$ go up, nobody really cares how physical cores are split up. In a way, the current Epyc parts are kind of "8-socket" but done right.
Power density increase is inevitable too.
-
Power efficiency going down the drain with each release from AMD and Nvidia. Apple on the other hand 🙂
ID: hien2g5 How many server processors does Apple make?
ID: hiehtqg 256 cores at 600W indicates a very significant improvement in efficiency.
-
I really can't see it without some crazy exotic cooling.
That's the low setting on a heater, coming from how many mm² of die space?
ID: hifw3zo
ID: hifwp6g Density is king in the server space as well; by spacing out the dies you introduce latency.
-
will monolithic dies ever return?
ID: hifmidu They haven't gone anywhere yet; it depends on the application. APUs and SoCs are still monolithic, for example (including Apple's recent 57B-transistor SoC and the consoles), although eventually, as SoIC/CoWoS/Foveros come down in price, they could keep reducing the use of monolithic dies.
-
Not really news, since the 3990X can do this if you unlock its power limits. It's extremely power-limited at its stock 280W.
-
Hello from Turin
-
Literally take with a grain of salt, but if true, it’s gonna be a monster for sure
-
30x power efficiency by 2025, they said?
ID: hieca5j Efficiency is a function of performance as well. And Turin is a monster.
ID: hiejn89 Beast of Turin, indeed.
ID: hiefanp They're quadrupling the number of cores compared to Zen3 EPYC while only going a little over double the TDP. Suppose clocks and IPC stay exactly the same, that's almost cutting in half the energy consumption for the same performance per core. Granted, that's a rather crude and simplistic comparison, and not entirely exact (as there are other components besides the cores themselves), but it gets the point across.
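The crude comparison spelled out, assuming a top 64-core Zen 3 EPYC at 280W against the rumored 256 cores at 600W:

```python
# Per-core power behind the "almost half the energy per core" claim.
# Assumes 64-core Zen 3 EPYC at 280 W vs the rumored 256 cores at 600 W.

zen3_w_per_core = 280 / 64     # ~4.38 W per core
turin_w_per_core = 600 / 256   # ~2.34 W per core

reduction = 1 - turin_w_per_core / zen3_w_per_core
print(f"{zen3_w_per_core:.2f} W -> {turin_w_per_core:.2f} W per core "
      f"({reduction:.0%} lower, assuming identical clocks and IPC)")
```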
ID: hieca4q It can still be efficient, just much, much more dense too.
ID: hieurvw Higher IPC and more cores with only a 50% TDP bump would go toward that goal.
-
Moar cores!
-
After this launch, AMD will be the next Intel, and Intel will be the next IBM. Intel thought they would be the only one going higher-wattage.
ID: hig3sjl … look at the core counts; it's not shocking that that many cores require more power.
Source: https://www.reddit.com/r/Amd/comments/qhp6mu/amd_epyc_turin_with_zen5_cores_rumored_to_feature/