Given how absurdly expensive the RTX 3080 is, I've started looking for alternatives. Found this post on getting ROCm to work with TensorFlow on Ubuntu. Has anyone seen deep-learning benchmarks of RX 6000 series cards vs. RTX 3000?
It's very close to done (link). As others have pointed out, OpenCL over ROCr has been working for months; it was just parts of the HIP stack that were missing.
Curious about this topic. This subreddit is generally all about games, but what's the general state of AMD cards on the popular ML frameworks?
Same as the gaming performance for OpenCL, but Nvidia is miles ahead in anything that supports CUDA.
On applications with a HIP backend, AMD is competitive with CUDA. My Radeon VII ran GPT-2 training with PyTorch at around 85% of the speed of the Titan V machine I used at work.
Of the major frameworks, only PyTorch has official ROCm support, but it works only with Polaris and Vega GPUs, and performance is really subpar.
ROCm support is rather limited:
No RDNA support yet.
Your best chances of getting it to work are with a Radeon Vega GPU or the MI100.
They even went and removed my RX 480 from the support list in 4.0, though it doesn't seem to have really worked well in older versions either: it runs, but eventually fails.
And even then, installing it is a pain in the ass. Your best chance of getting it to actually work is the ROCm Docker image with PyTorch (or TensorFlow?) already compiled in it.
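For reference, a minimal sketch of using that prebuilt container, based on the device-passthrough flags from AMD's ROCm docs (the image tag is an example; pick one matching your ROCm version from Docker Hub):

```shell
# Pull AMD's prebuilt PyTorch-on-ROCm image.
docker pull rocm/pytorch

# Run it with the GPU passed through: /dev/kfd is the ROCm compute
# interface, /dev/dri holds the render nodes; the video group grants
# access to both.
docker run -it \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --ipc=host \
  rocm/pytorch
```

This sidesteps building the whole HIP/MIOpen stack yourself, which is where most of the install pain comes from.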
Oh, and about the RTX 3080: you'd want more memory, so you'd really want a 3090 or a Quadro with at least 16 GB... talking about absurd prices...
It's awful. Still no Navi support (soon™ for over a year now), performance is awful (a Radeon VII performs worse than my 2060 Super), and getting it to work in the first place was really cumbersome for me. I gave up on my old RX 480 and went over to the green side because ROCm was unusable for anything serious.
Unless you go with some weird framework, such as PlaidML or other OpenCL-based ones, and accept the awful performance, lack of community and lack of many pre-trained models, you're stuck with Nvidia, sadly.
It doesn't work for Navi cards yet.
Did you use any of the popular ML libraries (PyTorch, TensorFlow) at all? Neither works with OpenCL.
I tried compiling PyTorch to work with my 6900 XT. The compilation log shows no binaries compiled for gfx1030 (Navi 21), and the moment you try to put tensors on the GPU you get errors.
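If you want to see what a given setup actually supports before hitting those runtime errors, a rough sketch (assumes a working ROCm install with `rocminfo` on the PATH and a ROCm build of PyTorch):

```shell
# List the GPU ISA targets ROCm actually sees, e.g. gfx906 for a
# Radeon VII or gfx1030 for Navi 21.
rocminfo | grep -i gfx

# Ask PyTorch whether it was built against HIP and can see a device.
# torch.version.hip is None on CUDA/CPU builds; ROCm builds reuse the
# torch.cuda API, so is_available() is the same call as on Nvidia.
python3 -c "import torch; print(torch.version.hip, torch.cuda.is_available())"
```

If `rocminfo` shows your card but PyTorch shipped no kernels for that gfx target, you get exactly the behavior described above: tensors fail the moment they touch the GPU.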
AMD employees agree with me:
The ROCm components up to the OpenCL compiler are running on RDNA 1/2 already. Remaining work is primarily on the math libraries, including MIOpen.
Without the math libraries, no ML framework runs on Navi.
Only the OpenCL part works currently; the HIP part is needed for tensorflow-rocm.
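For what it's worth, AMD publishes that TensorFlow build as a separate pip package; a sketch of trying it (assumes a supported GPU and a working ROCm/HIP stack underneath, which is the whole problem on Navi):

```shell
# AMD's ROCm fork of TensorFlow ships as its own package,
# not as a backend of stock tensorflow.
pip install tensorflow-rocm

# If the HIP stack and math libraries are in place, this lists the
# GPU; on unsupported cards it comes back as an empty list.
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```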
I tried using ROCm with the TF Object Detection API and had little success. It was such a headache that I found Google Colab easier to work with.
Vega seems to be best supported, and that's largely because the MI50 and MI25 are both Vega-derived cards with annoying modifications.
As far as I know - and I may well be wrong here - you won't see more than half-hearted RX 6000 support, because AMD will only really be supporting the CDNA-based MI100 for these kinds of workloads; if consumer or workstation cards do get support, it'll be an afterthought.
I think what AMD are aiming for is replacing everything with a full open-source software stack, which is a massive undertaking.
It's still janky and a pain to use, as others have said, but it's getting better all the time.
Really hope this works out for you. This CUDA monoculture is probably holding back multiple scientific fields right now.
What's the matter? I thought Nvidia was quite supportive?
It does, but fanboys gotta fanboy.
No, Nvidia drops binaries, and that's it... they may be stable, but there is no *support*, except occasionally from an interested developer. Zero collaboration on improvements; that's Nvidia's modus operandi on everything.
CUDA support is excellent for deep learning, big data, statistics, mathematics, simulations, etc. AMD might not catch up within the next few years, since Nvidia is light years ahead in this regard.
Why would it be holding back scientific fields?
Well, many scientific supercomputers have Radeon or CDNA-based accelerators...
What happens when so many projects have shackled themselves to CUDA-only development, and you then try to run them on, for instance, a Radeon-based supercomputer?
It's great; I use it with a Vega Frontier. I initially chose it because my workload benefits heavily from FP16, so the advantage over an NV card was much bigger at the time.
I use my private server for solving complex analysis, numerical methods, and image processing problems. Hence, no deep learning here.
However, I would still suggest an Nvidia GPU. That's because both Intel's and Nvidia's stacks are just too good to pass up when it comes to anything related to computer engineering.
That's because both Intel's and Nvidia's stacks are just too good to pass up when it comes to anything related to computer engineering.
I thought Intel's oneAPI was cross-platform by design and supported by AMD.