i3-7100 vs FX-8350 – Something’s Definitely Not Right With AMD FX Benchmarks or Why Cores Matter (RA Tech)

1 : Anonymous2021/07/01 17:34 ID: obque8
i3-7100 vs FX-8350 - Something's Definitely Not Right With AMD FX Benchmarks or Why Cores Matter (RA Tech)
2 : Anonymous2021/07/01 22:02 ID: h3qfvuj

As an FX-6300 user, I remember watching either an HU or GN video and thinking that they got lower results in some games than I expected.

3 : Anonymous2021/07/01 18:31 ID: h3pmxoq

These are interesting results to be sure. I do wonder whether OS level improvements have anything to do with how the FX CPUs are performing today compared to older results.


Do you guys have anything to comment regarding this? RA Tech's points seem pretty convincing. Of course I'm not claiming that your results are wrong; they could come down to a difference in testing methodology. I'm just curious what you think about this.

ID: h3pohdt

Can't speak for anyone else, but what I can say is that I was one of those in the comments section for GN/HUB videos saying they were getting results WELL BELOW what I got.

Still subbed to both, but my results match up with RA Tech's and not GN/HUB's 🙁

ID: h3pobae

I believe the problem is where they test. Where and how you test a game is as important as the game itself. CPU and GPU loads are never uniform in any game, ever.

Wizzard from TechPowerUp actually tests CPUs in the FOREST area of White Orchard for The Witcher 3. That's right: he uses the tutorial area, not Novigrad or Beauclair, to test them; the second-smallest area in the game and the most remote, NPC-free one.

Hell, even WHERE you begin testing matters. I start from outside the city, in the morning, when the AI routines restart themselves. So my load is heavier (I am not the video maker; I never owned an FX, since it's slow in STALKER) than even what the dude in the video did.

This, BTW, is also why reviewers underestimate VRAM usage in games. Reviewers test cutscenes or a one-minute lightweight scene and say "well, it doesn't use lots of VRAM". Meanwhile, here I was in Doom Eternal playing the end-game levels of the title and seeing my old RTX 2080 or a friend's RTX 3070 stutter every 20-30 seconds, because the levels are big and have lots of enemies. So where it matters most, the game actually wants more VRAM. Even if 8GB is fine for Hell on Earth, a small tutorial level with 5 enemy types, it is just not fine for the actual game. So that is one example where a reviewer (TechPowerUp) fails hard because of where and how they test.
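The point about short test scenes understating VRAM can be made concrete with a trivial log analysis. This is a minimal Python sketch with invented numbers; `nvidia-smi`'s `memory.used` query is a real way to collect such samples, but everything below is illustrative:

```python
# Sketch: why the peak VRAM of a heavy scene matters more than the
# average of a short, light test run. Sample values are made up.
from statistics import mean

# Hypothetical memory.used samples (MiB), logged once per second, e.g. via
# `nvidia-smi --query-gpu=memory.used --format=csv,noheader --loop=1`
tutorial_run = [5100, 5200, 5150, 5180, 5210]   # light, 1-minute scene
endgame_run  = [6900, 7600, 7900, 8100, 8050]   # dense late-game level

for name, samples in [("tutorial", tutorial_run), ("endgame", endgame_run)]:
    print(f"{name}: avg {mean(samples):.0f} MiB, peak {max(samples)} MiB")
```

A reviewer quoting only the tutorial numbers would conclude 8GB is plenty; the end-game peak tells a different story.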

This is my issue with reviews. Digital Foundry had the right idea: play some of the game and find demanding, truly CPU-heavy scenes to test CPUs in, and GPU-heavy ones for the GPU. But EVEN they then resort to light scenes, canned benchmarks (Far Cry) or loading screens/cutscenes (Metro, Crysis 3) to test GPUs and CPUs, so even they don't do as they say they should.

And preemptively, in case GN or HU attack me: I am not the video maker. But I do play games to completion on hard difficulty modes, so I do know games. And the video strikes me as pretty much correct.

ID: h3qio6o

A great example is Fallout 4 performance. I located the most intensive area in the game due to draw calls, but it's never used in reviews. Just Sanctuary Hills and Red Rocket, the least complex areas in the game. Here's the thread on Anandtech where I got others to benchmark their systems with that specific save at the top of Corvega:

None of the big names bother talking about draw calls, which is unfortunate, as they're the biggest CPU performance drain in damn near every game with dense scenes.
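A back-of-envelope model of that claim: if each draw call costs the CPU a roughly fixed amount of time, a dense scene's CPU frame time (and thus its fps cap) degrades linearly with call count. The per-call and per-frame costs and call counts below are assumptions for illustration, not measurements:

```python
# Toy model: CPU cost of submitting draw calls grows linearly with call
# count, so a draw-call-dense area hammers the CPU far harder than a
# sparse one. All costs here are illustrative.
PER_DRAW_CALL_US = 10      # assumed CPU cost per draw call (microseconds)
OTHER_CPU_WORK_US = 4000   # assumed AI/physics/etc. work per frame

def cpu_frame_time_ms(draw_calls):
    return (OTHER_CPU_WORK_US + draw_calls * PER_DRAW_CALL_US) / 1000

for scene, calls in [("sparse area", 800), ("dense area", 4000)]:
    t = cpu_frame_time_ms(calls)
    print(f"{scene}: ~{t:.0f} ms CPU per frame -> cap ~{1000 / t:.0f} fps")
```

Under these (made-up) numbers, a 5x jump in draw calls drops the CPU-side fps cap from roughly 83 to roughly 23, which is why test location dominates CPU results.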

ID: h3q3q9n

I can understand your complaint about using canned benchmarks or cutscenes like DF does, but at the end of the day, they are repeatable so you can be assured that the data is truly 1:1 and not tainted by any sort of load variance between runs. If it's a choice between testing the absolute worst stress points versus testing in a scenario where the load is guaranteed to remain the same, IMO they are correct to choose the latter.

From what I can tell in my own testing compared to DF's, their CPU tests are typically conducted in instances where the CPU tends to become the limiting component, even when they are benchmarks (contrasted with, for example, HUB, who in my experience allows too much GPU-limited data into their CPU benchmarks).
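The repeatability argument can be made concrete with the coefficient of variation (stdev/mean) across repeated runs; a lower CV means run-to-run differences are less likely to swamp the effect being measured. The fps numbers here are invented for illustration:

```python
# Sketch: one reason canned benchmarks are defensible -- run-to-run
# variance. Compute the coefficient of variation of average-fps results.
# The fps values are invented for illustration.
from statistics import mean, stdev

canned_runs   = [101.2, 100.8, 101.0, 100.9, 101.1]  # built-in benchmark
gameplay_runs = [96.0, 104.5, 89.3, 101.8, 93.4]     # manual city run

def cv_percent(runs):
    return 100 * stdev(runs) / mean(runs)

print(f"canned:   {cv_percent(canned_runs):.2f}% CV")
print(f"gameplay: {cv_percent(gameplay_runs):.2f}% CV")
```

With variance like the manual run's, a few percent of real CPU difference can disappear into the noise, which is the trade-off against testing worst-case scenes.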

ID: h3pzjq4

I do wonder whether OS level improvements have anything to do with how the FX CPUs are performing today compared to older results.

It's an interesting idea, and one that could perhaps be investigated with a Windows 7 install. However, the quote from JayzTwoCents was from 2017, right? That's quite a way back, so I'd guess that scheduler changes from the past few years aren't the major factor here.

ID: h3qpght

Tek Syndicate got a lot of crap for reporting their experience with the FX-8350 back on Jan 10, 2013.

Proper testing methodology has been around for a long, long time. I own both an i3-7100 and an FX-8320e (and a few other CPUs), and I spend a lot of time with Massive's Snowdrop engine.

ID: h3qqyb6

They didn't test the same CPU, or in the same places. Maybe not at all. No dual core with a ~33% uplift from hyperthreading (effectively a 3-core CPU) can compete in a multithreaded game.

Also, the FX is largely under 60fps here. Not the definition of great.

4 : Anonymous2021/07/01 23:06 ID: h3qnz8y

I had a Thuban at 4.2GHz and replaced that setup with an FX platform, including DDR3. All I can say is the FX was better in all aspects, mainly due to the clock advantage and the larger DDR3 memory bandwidth it provided. DDR3 on an FX maxes out at an average of 1866MHz; I ran my FX at a constant 4.8GHz with a 300MHz FSB and 2400MHz DDR3. That alone was good for 761 Cinebench R15 points. When I replaced it with a 2700X, everything was just faster and more consistent. The FX wasn't a bad platform; it was released at a time when games were still mainly dependent on single-core performance rather than dual-core or multi-core. In anything that can utilize multiple cores, the FX will shine; period. And it's a free ride, knowing you buy a 3.5GHz base CPU and can overclock it all the way to 5GHz. It provides lots of fun for countless hours, really. And they did give a multiplier-unlocked CPU while Intel's were locked.

For the value? FX was a lot better. Just don't play games or work in apps that depend on a single core alone. If you want to squeeze out more performance, aim for the CPU/NB, since the L3 cache is indirectly controlled by that as well. Run the CPU/NB at 2800MHz or so and you'll notice an instant minimum-framerate boost in games. Just know that not all CPUs can even pass 2700MHz; yours has to be pretty golden for that. And the FSB is a great contributor to higher performance: increase it from the stock 200MHz to 300MHz, fiddle around with multipliers, and it will be faster than a stock 5GHz model, just due to the faster FSB and memory.
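For reference, the clock arithmetic behind that overclock, sketched in Python. The multiplier and memory ratio are assumed values, chosen only so the math matches the 300MHz / 4.8GHz / DDR3-2400 figures described above:

```python
# On AM3+, core clock = reference clock ("FSB") x CPU multiplier, and
# DDR3's effective transfer rate is 2x the actual memory clock.
# Multiplier/ratio values below are assumptions matching the post.
def core_mhz(ref_mhz, cpu_mult):
    return ref_mhz * cpu_mult

def ddr3_mts(ref_mhz, mem_ratio):
    return ref_mhz * mem_ratio * 2   # double data rate

print(core_mhz(300, 16))   # 4800 MHz core, i.e. the 4.8GHz above
print(ddr3_mts(300, 4))    # 2400 MT/s, i.e. DDR3-2400
```

This is also why raising the reference clock speeds up everything at once: core, memory, and CPU/NB all scale with it.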

Power at 5GHz is a serious issue, as it translates directly into a lot of heat. A 240mm rad with 4 fans in a push-pull config isn't enough to cool such a 220W-consuming CPU. That's why I kept it at 4.8GHz for years; that was the sweet spot of the chip. The power difference for 200MHz more was just insane.

5 : Anonymous2021/07/01 18:29 ID: h3pmp6f

The i3 probably has around 100% higher IPC. Comparing these two based on core counts is silly.
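A crude way to frame the IPC-vs-cores trade-off: single-thread speed is roughly clock x IPC, while multi-thread throughput also depends on how many threads are usable. The IPC ratio and SMT scaling below are rough assumptions (the ~2x IPC claim above, plus the "effectively 3 cores" estimate for a 2C/4T chip made elsewhere in this thread), not measurements:

```python
# Back-of-envelope throughput model; all factors are assumptions.
def single_thread(clock_ghz, ipc):
    return clock_ghz * ipc

# smt_scale: rough fraction of "full cores" the thread count behaves like.
i3_7100 = {"clock": 3.9, "ipc": 2.0, "threads": 4, "smt_scale": 0.66}
fx_8350 = {"clock": 4.0, "ipc": 1.0, "threads": 8, "smt_scale": 1.0}

for name, c in [("i3-7100", i3_7100), ("FX-8350", fx_8350)]:
    st = single_thread(c["clock"], c["ipc"])
    mt = st * c["threads"] * c["smt_scale"]   # crude upper bound
    print(f"{name}: 1T ~{st:.1f}, all-threads ~{mt:.1f} (arbitrary units)")
```

Under these assumptions the i3 wins any lightly threaded load while the FX can pull ahead when all threads are actually loaded, which is exactly why the two camps in this thread disagree.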

ID: h3qkr9n

Thing is, cores are more important than sheer IPC if games and the OS ask for them.

Windows 10 literally asks for a 1C/2T CPU. And guess what: the 7100 has 2 cores and 4 threads, which means half of its resources are reserved for the OS alone, even if it had 100% better IPC.

The majority of games are optimized for 2C/4T, so where does the OS go? To Narnia, right? Which makes a shit ton of stuttering happen, because 2C/4T is all the cores and threads the 7100 has.

All while the FX-8350 was an 8C/8T (or, for the USA, a 4C/8T) CPU, and if we look at how the 7100 fares, whoa, all of a sudden the 8350 has breathing room:

The OS will take 1C/2T.

That leaves 3C/6T for games, which will take 2C/4T.

There is still 1C/2T left over for a browser or Discord, which is why it can handle the load without dropouts or heavy stuttering.

This is why the minimum today is 6C/12T: the OS takes just enough cores and threads that you can still shove in a game and a side task without hiccups, especially if you optimize with Process Lasso.

This is not a CPU issue; this is the Windows 10 task scheduler and all the programs out there catching up to the scalability narrative AMD wants, which caught Intel with its pants down, because now everything up to 8th gen is basically useless since the OS will just eat 2C/4T and the only thing you can do is run the game. An example of this:

My friend's Core i3-8100 and my old Q9550 are excellent examples of this: we could play Fortnite with no issues only if we closed Discord and killed as many background programs as possible,

while now my R7 3800X can chug along with Fortnite, Discord, the OS, many background programs and even Mozilla with ease.

But I use Process Lasso to reserve CCX2, the better half of the CPU, for games only, while the weaker CCX1 handles the OS, browser, etc. That actually fixed small stutters and brought 10-20 more fps.

This is why it's better to upgrade to a 6+ core, 6+ thread CPU. And if you cannot upgrade to new stuff now, check whether the old stuff you have supports such CPUs, because what you gain from just having an extra 2 or 4 (even locked) cores is more than you think.
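For what it's worth, the core-partitioning trick Process Lasso does on Windows can be roughly approximated on Linux with the standard library. This sketch pins the current process to an example core set; the core numbers are placeholders, so check your own topology (e.g. which cores belong to which CCX) before pinning anything:

```python
# Linux-only sketch of Process Lasso-style affinity: restrict a process
# to a chosen set of cores so other cores stay free for the OS/apps.
import os

GAME_CORES = {0, 1}                      # example core set for the game
available = os.sched_getaffinity(0)      # cores this process may run on
target = (GAME_CORES & available) or available  # fall back if absent
os.sched_setaffinity(0, target)          # pin the current process
print(sorted(os.sched_getaffinity(0)))
```

The same calls work on any child PID instead of `0`, which is how you would pin a running game from a helper script.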

Edit: and the people upvoting the comment I replied to above show how little they know about the basics of PCs, and how bad statements and hideous double-downs were taken as truth, because what GN and HUB told is a lie. Which is why I will say this:


ID: h3qqmau

There are videos on YT that pit the FX-8350 against the Ryzen 3 2200G and i3-8100. While it's slightly slower (except in some games, like the modern Assassin's Creed titles), the quad cores are at 100% usage while the FX hardly passes 60-70%, leaving headroom for the OS.

ID: h3qn0om

I think a better question is: why even compare an octa-core processor to a 2-core CPU with hyperthreading? There are plenty of Skylake derivatives that don't have such a core deficit. Piledriver was never competition for any i3 from any Intel generation, ever. A 2600K/3770K is the proper comparison point.

