- AMD Instinct MI250X with MCM GPU to feature 110 Compute Units, 128GB HBM2e memory, and 500W TDP - VideoCardz.com
Almost 50 TFLOPS in double precision, that's crazy!
now i really hope ROCm finally gets better.... those cards sound wonderful for the AI software i develop.ID: hhwbi7h
and they are developing SYCL for Vulkan for MI200ID: hhz3t1i
Agreed, though for OpenCL workloads, we might get that working properly via Mesa first.
I assume this isn't meant for gamingID: hhuuxsv
I assume LTT is trying to game on it anywaysID: hhuvuhz
If you don't mind me asking, what is it meant for?ID: hhvnyd9
But could it be forced to run gamesID: hhvpvwo
Not with that attitudeID: hhwqwtt
Can it run Crysis?ID: hhws3ml
Yeah it can /s
220 CUs would fit the performance figures perfectly. (Assuming that performance in FP64 is half the figure mentioned.)
(220 CUs make a lot more sense considering that the MI100 has 120 CUs.)ID: hhwhk1m
Not to mention the power figure of 500W. You can't get to 500W at 1.7GHz with just 110 CUs.ID: hhyqvqh
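A quick sanity check of the 220-CU theory. This assumes GCN-style CUs with 64 shaders each, FMA counted as 2 FLOPs per cycle, full-rate FP64, and the 1.7GHz clock mentioned above; all of these are assumptions, since the actual CU configuration isn't known.

```python
# Rough peak FP64 estimate for the rumored MI250X.
# Assumptions: 64 shaders/CU (GCN convention), FMA = 2 FLOPs/cycle,
# FP64 at full rate, 1.7 GHz clock.
def peak_tflops(cus, shaders_per_cu=64, flops_per_cycle=2, clock_ghz=1.7):
    return cus * shaders_per_cu * flops_per_cycle * clock_ghz / 1000

print(peak_tflops(220))  # ~47.9 TFLOPS -- matches "almost 50 TFLOPS"
print(peak_tflops(110))  # ~23.9 TFLOPS -- only fits if each CU were twice as wide
```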
Just goes to show how much improvement AMD's made to their physical design through these 3 uArchs on 7nm. Vega 20's double precision output was 7.373TFLOPs at 300W. MI250X is just under 7x that at two-thirds more power.ID: hhwvssa
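Working that comparison out, using the ~47.9 TFLOPS FP64 figure implied by the rumor (an estimate, not a confirmed spec):

```python
# Perf/W comparison: Vega 20 vs the rumored MI250X.
vega20_tflops, vega20_w = 7.373, 300
mi250x_tflops, mi250x_w = 47.9, 500   # estimated from rumored specs

print(mi250x_tflops / vega20_tflops)   # ~6.5x raw FP64 throughput
print((mi250x_tflops / mi250x_w) /
      (vega20_tflops / vega20_w))      # ~3.9x FP64 per watt
```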
This is CDNA 2.0, the first ground-up server chip design. CDNA 1 was a GCN 5.0 evolution. We don't know the configuration of a CU, so a CDNA 2 CU could be way more powerful than a CDNA 1 CU.ID: hhylhkn
From the other figures, the only other way would be for one CDNA 2 CU to be exactly twice as powerful as one CDNA 1 CU. This would seem somewhat strange. It would also imply 55 CUs per chiplet, which again is a strange number.
It's not completely out of the question that AMD changed what a CU means, but I lean towards mistrusting the rumour.
Cool beansID: hhv4i2w
500W. Burned beans.ID: hhv67bq
Just need more beans then.ID: hhvg77p
I wonder if CDNA2 is still based on GCN, like CDNA is.ID: hhuu79h
It is.ID: hhw3jjs
Why are they still using GCN? That's weird.ID: hhw99uc
I wonder if CDNA2 is still based on GCN, like CDNA is.
CDNA 2 is based on CDNA 1, which is based on GCN 5, which...
Will it msrp?
What’s the eth hash rate lolID: hhvdxb4
128GB HBM2e divides into eight 16GB stacks. The JEDEC standard for HBM2e is 307GB/s/stack, Samsung HBM2e is 410GB/s/stack, SK Hynix HBM2e is 460GB/s/stack. So memory bandwidth (generally the limiting factor in Ethereum mining) is somewhere between 2456GB/s and 3680GB/s.
307GB/s/stack * 8 stacks = 2456GB/s
460GB/s/stack * 8 stacks = 3680GB/s
Let's use a Radeon VII as the other reference point - it's a GCN based chip (as these likely are) with HBM and has 1024GB/s memory bandwidth, with a TDP of 295W. It's also still a fairly popular mining card, clocking in at around 93MH/s at 200W according to whattomine.
For JEDEC standard HBM2e:
2456 GB/s / 1024 GB/s = X MH/s / 93 MH/s (GB/s cancel out; multiply through by 93 MH/s)
X = 2456 * 93 / 1024 ≈ 223 MH/s
For Samsung HBM2e:
3280 GB/s / 1024 GB/s = X MH/s / 93 MH/s
X = 3280 * 93 / 1024 ≈ 298 MH/s
For SK Hynix HBM2e:
3680 GB/s / 1024 GB/s = X MH/s / 93 MH/s
X = 3680 * 93 / 1024 ≈ 334 MH/s
Power: the Radeon VII has a 295W TDP but mines at around 200W. Assuming the same scaling, 500W TDP -> ~339W. Let's just say 350W to be more conservative.
TL;DR: Somewhere between 223MH/s and 334MH/s at around 350W, easily making it one of the most efficient mining cards out there. According to whattomine that's about $16 - $24 in pre-tax profit per day.ID: hhvkvik
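The estimate above can be sketched end to end. All inputs (per-stack bandwidths, the Radeon VII baseline, the power-scaling assumption) come from the comment and are rumor-based, not measured figures.

```python
# Hashrate estimate by linearly scaling memory bandwidth against a
# Radeon VII baseline (1024 GB/s -> ~93 MH/s, 295 W TDP, ~200 W mining).
BASE_BW, BASE_HASH, BASE_TDP, BASE_MINE_W = 1024, 93, 295, 200

stacks = {"JEDEC": 307, "Samsung": 410, "SK Hynix": 460}  # GB/s per stack
for vendor, bw_per_stack in stacks.items():
    total_bw = bw_per_stack * 8             # eight 16 GB stacks
    mh = total_bw / BASE_BW * BASE_HASH     # bandwidth-limited scaling
    print(f"{vendor}: {total_bw} GB/s -> ~{mh:.0f} MH/s")

# Mining power draw, assuming the same TDP-to-mining-draw ratio as Radeon VII
print(f"~{500 * BASE_MINE_W / BASE_TDP:.0f} W")  # ~339 W, call it 350 W
```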
Would only take two years to break even at that rate...ID: hhus3mm
Doesn't matter, MSRP will be $2000 and you won't be able to find it under $5000. /s
EDIT: Clearly I wasn't trying to name actual prices, I just picked a number out of my ass that sounded expensive and then multiplied it. Get over it.ID: hhuuik5
Lol. This is Instinct MI. Its MSRP will be closer to $10,000ID: hhv4zx3
You'll be lucky to get it under 10k lolID: hhuu2f2
It would be a steal for both of these prices
I wonder how many fps it would get in Crysis at 15360x8640 resolution.ID: hhv51pm
Literally 0ID: hhv728v
It has no video outID: hhv35ut
Theoretically something, even with the lack of video out. There's a setting in Windows you can turn on, if you have a CPU with integrated graphics, that lets you use the integrated GPU for light tasks like browsing and the discrete GPU for heavy tasks like gaming.
With that kind of TDP it’s definitely going to hog at least 4-5 expansion slotsID: hhv65dn
Probably water-cooled in server racks would be my guess; that's a lot of heat to remove from the area.ID: hhvq14p
Nah, either water cooled or just LOTS of airflow from some good ol deltasID: hhvq6w9
If it's using deltas, I figure it will need a massive heatsink. I'm just going off the trend I've noticed of GPU heatsinks getting larger every couple of yearsID: hhwrkm0
It doesn't use a standard PCIe card. It's based around the OCP Accelerator Module (OAM).
I'd buy it
what would something like this be used for, is this amd's version of quadro from nvidiaID: hhy6tep
An equivalent to Nvidia’s Tesla line.
I wonder what AMD's doing about their solution stack for reinforcement learning. No use releasing a powerful GPU without accompanying software for writing code.
Why does it have the same numbers for single precision and double precision? Isn't single usually faster than double?
Missing something key here... why is their Instinct line so incredibly expensive compared to Nvidia's?
Wouldn't it make sense to push the product at a loss to increase adoption amongst data scientists and ML/AI engineers?
I badly want an MI50 but gd... for that price I could get two PNY Nvidia A16s...
MI? Weird that it has the same brand as some of the most untrusted Chinese smartphones. Does AMD explain this?
I'm not sure why you think Aircraft Manufacturing and Design would have anything to say about this. /s
Since they have nothing to do with that company, I would assume, no, AMD does not have an explanation for this.
You’re a moron