Deep Learning options on Radeon RX 6800

1 : Anonymous2021/04/16 10:42 ID: ms0jry

So I plan to classify land usage in satellite images using a CNN. The thing is, I have an RX 6800, and as far as I can tell from my research, DL on Radeon is not quite a thing yet. In the current market I won't be able to change to Nvidia (and even if cards were available, I don't have the money to buy another one), so I need to get it to work.

The goal is to get TensorFlow working on the 6800. From my research, I have the following options:

ROCm, but it seems Big Navi isn't officially supported (though it can apparently be made to work, if I believe this article), and I'd need to set up Linux to use it
PlaidML, but this would limit me to Keras and not true TensorFlow
TensorFlow with DirectML, with the drawback that it doesn't use TF 2.x

I am sort of new to DL, only did a couple easy beginners exercises in university, so I am currently somewhat stuck at setting up the basics.

I hope someone can help me with this task or can recommend an entirely different solution. Cheers!

Update: Thank you all for the suggestions & help, you are amazing! I will test whether I can get the 6800 running in ROCm with some workaround, and if not I will try DirectML and see if I can live with the processing times when I get it to work (there's a dude on YT who has compared processing times). The last option will be some cloud service, but let's wait and see. I will update this thread when I have something to report.

2 : Anonymous2021/04/16 12:09 ID: guprd4g

Your best bet is probably to use Linux and to try this out:

PyTorch for AMD ROCm™ Platform now available as Python package | PyTorch
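If that pans out, a quick sanity check, assuming a ROCm build of PyTorch is installed (the ROCm wheels expose the GPU through the regular torch.cuda API, so nothing below is ROCm-specific):

```python
import torch

# ROCm builds of PyTorch surface HIP devices through the usual CUDA
# API, so the standard checks work unchanged.
print(torch.cuda.is_available())          # True if the GPU is visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. the Radeon device name
    x = torch.randn(1024, 1024, device="cuda")
    print((x @ x).sum().item())           # run a matmul on the GPU
```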

ID: gupxl7r

Thanks, I'll check that out

Edit: ...when support for Navi21 is officially implemented

ID: guq3szx

Good luck waiting for that. I waited for support for my 5700 XT for over a year until I switched to a 3080.

ID: guqe6eq

Currently it is unclear if they even want to support current and future RDNA products in ROCm. Their main focus for ROCm is the CDNA products (especially the ones for supercomputers like Frontier, I would guess).

ID: guqkh58

I have used both a Vega 56 and an RX 580 for ML training.

I used ROCm for the Vega 56 and DirectML for the RX 580.

I would say this: while the speed of DirectML is honestly a bit trash, you are not doing anything commercial, so just leave your computer training overnight or something. It will work out. ROCm does not even support Navi, and it still gives me nightmares when I try to install it.

3 : Anonymous2021/04/16 12:36 ID: gupu2nd

The short answer is: for the time being, forget about TensorFlow or PyTorch. ROCm doesn't support any Navi cards yet, but on their GitHub they mentioned support will come this year.

TensorFlow DirectML's performance is abysmal, by the way. You will have to install GNU/Linux even once ROCm ends up supporting Navi cards.

ID: gupzrcz

TensorFlow DirectML's performance is abysmal, by the way

But at least it would still be faster than a CPU, I guess?

ID: guq1t6x

It is, but I really wouldn't rely on it. MS doesn't seem interested in supporting it; they still don't have TF 2.x or any PyTorch version at all. The graph-based code that you'll have to write for TF 1.x will require a serious amount of effort to port to the eager-execution paradigm in 2.x, if you ever need to port it later on.
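To make that porting cost concrete, here's a toy sketch (plain TensorFlow, nothing DirectML-specific) of the same computation in the 1.x graph style versus the 2.x eager style:

```python
import tensorflow as tf

# TF 1.x graph style, which tensorflow-directml ties you to: declare
# placeholders and ops up front, then execute the graph in a Session.
x = tf.placeholder(tf.float32, shape=(None, 3))
y = tf.reduce_sum(tf.square(x), axis=1)
with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))

# The TF 2.x eager equivalent (would run under TF 2.x, not 1.15):
# ops execute immediately, with no Session or feed_dict plumbing.
#   x2 = tf.constant([[1.0, 2.0, 3.0]])
#   print(tf.reduce_sum(tf.square(x2), axis=1))
```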

4 : Anonymous2021/04/16 12:05 ID: gupqzki

Ditch the "I don't want Linux" attitude if you want to do ML.

ROCm has a Jupyter notebook Docker image with everything baked in. That should work for you.
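For example, once you're inside the container (I believe the image is published as rocm/tensorflow on Docker Hub, but treat that name as an assumption), a quick check that TensorFlow actually sees the card:

```python
from tensorflow.python.client import device_lib

# List every device TensorFlow can use; a working ROCm container
# should show a GPU entry alongside the CPU.
for dev in device_lib.list_local_devices():
    print(dev.device_type, dev.name)
```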

ID: gupxen6

It's not that I don't WANT Linux, I just have never used it and thus have 0 experience.

(Edit: spelling)

ID: gupztw8

I do not see "0 experience", I see "room for growth" 🙂

ID: gupxjju

If you want any kind of career in ML, now is as good a time as any to install Ubuntu and start fucking around with it. 🙂

Source: am Data Scientist.

ID: gupxlq2

It's worth learning and there are so many useful tutorials out there to get you started!

ID: guqgntc

Even setting up CUDA ML applications is kinda cancer on Windows. I have to brute-force package management commands to get stuff working. I'm just trying stuff for memes, not actually training AI, but I seriously considered trying Linux for ML stuff.

ID: guqh9k8

It isn't too bad once you have the basics. Installing the ROCm drivers is pretty easy: just run a script that installs the kernel modules, then reboot. After that, convolve away.

ID: guqlyot

Even getting Nvidia ML to work on Linux is not a walk in the park. If you're a beginner and just want to get your feet wet, so to speak, perhaps using one of the cloud providers may be the better option.

Here is one that seems to offer decent pricing, but I am not affiliated with them and I've never used them myself, so do some more research than I did:

ID: guqsmpb

+1 for Linux for ML. I wish I hadn't wasted time on Windows looking up alternate Python packages and other dependencies, all with worse performance.

You can dual boot, and Linux can read NTFS filesystems, so you don't need separate storage altogether. Though you need to be a bit careful about file permissions and inodes when using the same files on both OSes.

5 : Anonymous2021/04/16 11:54 ID: guppypz

I'd be hitting up

6 : Anonymous2021/04/16 12:17 ID: gups6e6

Check out

There's some good information there, but depending on drivers it can be difficult.

It's not extremely busy, but there's good info. Unfortunately, deep learning with AMD isn't as easy as with Nvidia, and it can be very frustrating.

ID: gupy1kw

Unfortunately, deep learning with AMD isn't as easy as with Nvidia, and it can be very frustrating.

Sadly, I realised that much about 2 minutes after I started googling.

ID: gupyhts

It's doable, you just have to be willing to put in some effort to set it all up. I've never used the Docker image that the other person mentioned, but that might be worth a shot, too.

Don't give up right away. If you're using TensorFlow, then you know half the battle is troubleshooting, and the time spent actually running what you need/want is minor compared to setup. If you're determined, you can do it.

7 : Anonymous2021/04/16 12:16 ID: gups4gi

Well, first things first: machine learning libraries have predominantly supported NVIDIA's CUDA software layer, and so far adoption of ROCm has been slow. However, there is one library which now has wheels with ROCm support: PyTorch. But it's still in beta and only on Linux (which IMO is really the better OS for your work). Moreover, there is no Navi 2 support in ROCm yet, so you're out of luck there; you'd have to wait for that. Numba is also a nice Python library if you wanted to build something from scratch, but again you'd run into the same problem when it comes to GPU compute (ROCm support).

If you're really desperate, PyOpenCL would be an alternative, though I honestly haven't worked with it. Perhaps acquire an RTX 2000-series GPU? Even an RTX 2060 is an option.
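For what it's worth, a minimal PyOpenCL sketch just to check that the 6800 shows up as an OpenCL device (assumes `pip install pyopencl` plus AMD's OpenCL runtime; I haven't run this on Big Navi myself):

```python
import pyopencl as cl

# Enumerate OpenCL platforms and their devices; an RX 6800 should
# appear under the AMD platform if the runtime is installed.
for platform in cl.get_platforms():
    print("Platform:", platform.name)
    for device in platform.get_devices():
        print("  Device:", device.name)
```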

Another option is to create the initial model on your current computer and do some training with a small dataset. Then do the majority of your training on a cloud platform such as Google Cloud / Microsoft Azure / Colab Pro. Obviously not the ideal workflow, but it's an alternative. Do look into what hardware they use and which libraries are supported. I am biased towards PyTorch, but in the end just use what's available.
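As a concrete starting point for the land-use task, a minimal tf.keras CNN sketch that runs unchanged on a local CPU and on a cloud GPU (the 64x64 RGB tiles and 10 classes are placeholder assumptions, not anything from this thread):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Toy CNN for patch-based land-use classification.
model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),  # 10 land-use classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_x, train_y, epochs=5)  # same call locally or in the cloud
```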

Handy links:

Convolutional Neural Networks (CNN) - Deep Learning Wizard
Start via Cloud Partners | PyTorch
Numba: A High Performance Python Compiler (pydata.org)

ID: gupxub6

Thank you very much for the detailed answer, I'll be sure to read up on what you're suggesting.

8 : Anonymous2021/04/16 12:44 ID: gupuwmp

I recently bought a used GTX Titan X (Maxwell) from eBay. Prices are 320-400 euros.

If you can build a cooling solution and get the drivers working, you can get a K80 for less than 200 euros on eBay.

If you are new to ML, you will have enough troubles; don't add hardware incompatibility to them.

9 : Anonymous2021/04/16 14:57 ID: guqbqt1

I don't think I saw any mention of using AWS for this. Do all your dev work on the CPU, and when you have it all sorted out, use AWS to get Nvidia GPU instances that you pay for by the hour to do the real work. If they have spot instances for GPUs, even better price-wise.

ID: guqkz2n

Honestly this seems to be a reasonable option considering the alternatives

ID: guqmbk2

I'd second this. At some point you could spend more time trying to get the environment working than actually making progress on the project, and the AWS instances are much faster to work with anyway.

Google Colab also has free cloud compute resources which probably beat out a CPU for development!

10 : Anonymous2021/04/16 15:57 ID: guqk597

There's a Tencent-developed open-source CNN library that runs on pretty much anything, as it's using Vulkan. It's called ncnn; you might want to take a look.
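Roughly, inference with the Vulkan backend looks like this via ncnn's Python bindings (a sketch based on ncnn's samples; the model files and blob names here are hypothetical):

```python
import numpy as np
import ncnn

net = ncnn.Net()
net.opt.use_vulkan_compute = True   # run on the GPU via Vulkan
net.load_param("model.param")       # hypothetical exported ncnn model
net.load_model("model.bin")

ex = net.create_extractor()
ex.input("data", ncnn.Mat(np.random.rand(3, 64, 64).astype(np.float32)))
ret, out = ex.extract("output")     # "data"/"output" are placeholder blob names
print(np.array(out).shape)
```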

ID: guqxldo

I am pretty happy with this one. Vulkan really is a blessing.

11 : Anonymous2021/04/16 11:56 ID: gupq6y5

exists if you’re looking to swap to nvidia

ID: guptufh

He’d be swapping to decent drivers at the same time.

ID: gupwi6s

On Linux? A number of people will contest that...

ID: gupxy3j

Lmao, Nvidia Linux drivers are abysmal. I could write a fucking book about all the things they do wrong and all the ways they have been the biggest force holding linux graphics back.

12 : Anonymous2021/04/16 11:54 ID: guppzon

Maybe trading the 6800 for an Nvidia card is an option, since u seem like u gonna need DL in the future.

13 : Anonymous2021/04/16 12:58 ID: gupwh52

I'll probably get shot down for posting this here, but if you don't want to mess around too much with Linux and/or headaches in the short term, it might not be a bad idea to get a couple of (much cheaper) Tesla K80s to hold down the fort while the driver issues / lack of support get sorted out. And... yep, you guessed it, mine crypto while you wait. I'm seeing K80s at $250 on Amazon right now; at a rate of $6-$10/day mining, you'd recoup the cost of a K80 in 1-2 months. Plus they may come in handy for lower-workload jobs later down the road. I'm probably going to get murdered for mentioning mining here, but that's the economic reality at the moment.

Other options: if what you're working on is interesting (in Amazon's eyes, lol), they are known to give out $1000 credits in their cloud services via the AWS Activate program, many of which are optimized for the sole purpose of DL. The credit honestly doesn't last long, but it could definitely save you a few months in fees.

There are also more affordable GPU-for-DL rental options like gpu.land, although I have never used them so I can't vouch for them -- just something I saw on PH. Ironically, they don't allow crypto mining (not that anyone would want to mine on the cloud), but they only took crypto as payment last time I checked. ¯\_(ツ)_/¯ edit: PH = Product Hunt, not... the other PH. You pervs.

As for me, I'm currently using a mix of AWS (where I was awarded credit) and Google Colab Pro for my training. $1000 credit from AWS, $300 credit from Google, but in order to actually gain access to the GPU-enabled services, you're going to have to talk to CS and explain what you plan to do.

ID: gupyegx

I'll have to see if this is an option for me, since I live in Germany. Btw, electricity prices here are so high that mining is dead anyway.

ID: guq7riw

Just a small side note: nobody is going to use a K80 for mining. Your profit estimate is high by a factor of about 30-40x.

They'll return ~$8-10/month at a pace of about 2 MH/s, roughly twelve times slower than an RX 480, at twice the power draw.

14 : Anonymous2021/04/16 15:00 ID: guqc5bg

I use DirectML on my 6900 XT. For recurrent-style neural networks it runs twice as fast as my 2080 Ti, but for convolutional ones it runs half as fast.

15 : Anonymous2021/04/16 15:17 ID: guqeiah

Honestly, just get Colab Pro. Way simpler. It will take more time than a local machine, but it's way better than the hassle of Linux drivers.

Also, if you really need it, non-2.x TF is more than fine for most workloads. AFAIK TF 2.x mainly brought QoL improvements rather than any huge perf improvements.

16 : Anonymous2021/04/16 20:13 ID: gurivue

Take this from someone with zero experience developing for ML, but in my experience, primarily with waifu2x, NCNN-Vulkan works super fast on AMD cards, and that's also the case for RealSR and Flowframes, which both use the NCNN-Vulkan framework as well.

I remember seeing benchmarks comparing realsr-ncnn-vulkan performance across multiple GPUs, and the 5700 XT was able to beat even the 2080 Ti, if I recall correctly. And with waifu2x, from my own tests waifu2x-ncnn-vulkan is able to outperform waifu2x-caffe running cuDNN on an Nvidia card, something no other port I tried was able to achieve, so you may want to check it out.

17 : Anonymous2021/04/16 13:17 ID: gupymnw

Depending on your model, you might be able to get away with just training & classifying on a CPU. You really don't need a GPU until you start training HUGE models.

First rule of optimization: measurement. Find out if your workload NEEDS a GPU at all before wasting a bunch of time making it work.
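In that spirit, a quick-and-dirty way to time it before committing either way (a sketch with stand-in data; swap in your real model and tiles):

```python
import time
import numpy as np
import tensorflow as tf

# Stand-in data: 1,000 random 64x64 RGB "tiles" with 10 fake classes.
x = np.random.rand(1000, 64, 64, 3).astype("float32")
y = np.random.randint(0, 10, size=1000)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

start = time.time()
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
print(f"One epoch took {time.time() - start:.1f}s")
# Scale by your real dataset size and epoch count to estimate total time.
```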

18 : Anonymous2021/04/16 14:29 ID: guq7xta

The best way I found to do it, if you're on a Windows 10 box, is the following.

1) Enable WSL

2) Install Ubuntu 20 (or whatever your favorite flavor of Linux is)

3) Install JupyterLab or whatever you want to use.

It was actually a pretty painless process. Took about 20 minutes.

ID: gurbzjn

Unlike CUDA, you can't run ROCm in WSL.

ID: gurg31n

WOW! Really?! Well then my advice was totally trash!

Thank you very much for that comment, you've probably saved people hours of headache!

Have a great day!

19 : Anonymous2021/04/16 12:56 ID: gupw848

Unfortunately Nvidia is the best choice here. I always used AMD, but getting anything to work is always a problem... while a lot of scripts work with CUDA straight away. AMD is also slower in terms of computation.

Hopefully the support for AMD will get better in the future, that is what I hope! AMD forever!

ID: gupzni8

I would give you an affirming shout of "Team Red!", but I got tired of waiting and bought a scalped Nvidia this gen. Still AMD on the CPU side, at least!

ID: guqnfqc

CUDA is not worth supporting the ethical and moral villainy that is novideo. Team AMD, team red for life.

ID: gur3rlk

So if using CUDA could make your work more effective, you shouldn't do it, right? According to you.

20 : Anonymous2021/04/16 17:25 ID: guqwddm

It might be worth checking out HIPIFY, which automatically converts CUDA code into vendor-neutral HIP code that can run on GPUs from either vendor. Disclaimer: I have never used it and have no idea how well it works.

21 : Anonymous2021/04/16 13:38 ID: guq16u5

The best solution for you is:

Step 1: Buy a Nvidia GPU

Step 2: Code in TensorFlow

Step 3: ......

Step 4: Profit?

23 : Anonymous2021/04/16 14:50 ID: guqatvf

Intel, oddly enough, has some options these days, and I hear they are scalable, using the same code base for Intel processors, iGPUs and their upcoming desktop graphics cards.

24 : Anonymous2021/04/16 15:07 ID: guqd459

Try PlaidML. Since you are new to deep learning, Keras will work just fine! In fact, TensorFlow 2.0's own examples actively leverage the tf.keras API anyway.
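For reference, getting started is roughly this (assuming `pip install plaidml-keras` and a one-time `plaidml-setup` to pick the GPU; this uses standalone Keras, not tf.keras):

```python
# Route Keras through the PlaidML backend instead of TensorFlow.
import plaidml.keras
plaidml.keras.install_backend()

import keras
from keras import layers

# Any standard Keras model now runs on PlaidML's OpenCL backend.
model = keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```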

25 : Anonymous2021/04/16 15:32 ID: guqgle7

Yeah, try PlaidML. I took an ML course at university last year and used it to train my neural networks. Besides some bugs, it works well enough.

26 : Anonymous2021/04/16 15:57 ID: guqk18z

As someone who uses PyTorch and has an AMD card, my answer is: use one of the "free" GPU providers (Kaggle, Google Colab, etc.). The ROCm platform is just bad and will make you waste a lot of time.

27 : Anonymous2021/04/16 16:22 ID: guqnkrt

Go to

and trade it for a 30-series card.

28 : Anonymous2021/04/16 17:01 ID: guqt55j

Question - did you try trading your GPU for an RTX 3070? I wanted an RX 6800, offered my RTX 3070 in trade on Facebook Marketplace, and someone did the trade as they are similar in value. My brother is offering an RTX 3070 right now for an RX 6800 to mimic what I did, so there definitely are people out there willing to do this.

29 : Anonymous2021/04/16 17:43 ID: guqyxui

No, honestly, I try to avoid getting rid of my 6800. That would be the last resort if nothing else works, since I love the card (it's amazing in gaming) and I enjoy having a full AMD build again after almost 10 years. (Side note: it also looks amazing in my current setup, even if that argument shouldn't be as important as performance, obviously.)

30 : Anonymous2021/04/16 18:17 ID: gur3kti

I totally agree, I much prefer the 6800 as well. I was just proposing it as an option if the need truly demands it, and as a way to save money.

31 : Anonymous2021/04/16 17:12 ID: guquo17

ROCm if you want to use your card, but I'd recommend just buying an Nvidia GPU, or using a cloud provider with Nvidia GPUs, like AWS, Azure, or GCP.

32 : Anonymous2021/04/16 17:49 ID: guqzqkv

Why not use Xeon processors with Deep Learning Boost?

33 : Anonymous2021/04/16 19:23 ID: gurcajs

Either buy an NVIDIA GPU, or if you have an Intel CPU with an iGPU, then oneAPI is already so far ahead of ROCm that it's simply laughable.

By the time their discrete GPUs launch, oneAPI should be quite close to the CUDA ecosystem and might even match it as far as core support goes. Unlike ROCm, it has also actually been adopted, because it's cross-platform.

34 : Anonymous2021/04/16 21:21 ID: gurrwv9

35 : Anonymous2021/04/16 14:26 ID: guq7icx

The best course of action is to sell your 6800 at scalper prices and buy either a Turing card or an entry-level Ampere (3060/Ti-ish) GPU. If you're serious about DL, Nvidia is the only option. Yeah, you can jump through a lot of hoops to get things working on AMD, but long term it won't be sustainable. Trust me, you will have a lot of problems on the software side of ML if you are new to it. You DO NOT WANT hardware pains on top of that.

36 : Anonymous2021/04/16 14:36 ID: guq8v74

Use one of the cloud services, then buy an Nvidia card when they're back in stock.

Source: https://www.reddit.com/r/Amd/comments/ms0jry/deep_learning_options_on_radeon_rx_6800/
