
PSA: PC Minimum Supported Specs Changes Coming in 2023


[DE]Danielle


On 2022-09-08 at 9:09 AM, MillbrookWest said:

SSE and AVX are both SIMD instruction sets (it's in the name: Streaming SIMD Extensions). AVX, however, is workload-specific due to its larger register size, so it depends on how you push data through the system. You need to push a lot of data through the system to fill up the AVX registers (or, put another way, you need to push around really big chunks of data). SSE uses smaller registers, so they're easier to fill in most cases (and so work fine for games).
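To make the register-width point concrete, here's a minimal sketch in C using the x86 intrinsics from immintrin.h - the function names and the assumption that n is a multiple of the vector width are mine, not anything from this thread:

```c
#include <immintrin.h>

/* SSE: 128-bit registers, so 4 floats per operation.
   Assumes n is a multiple of 4. */
void add_sse(float *dst, const float *a, const float *b, int n) {
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
    }
}

/* AVX: 256-bit registers, so 8 floats per operation.
   Assumes n is a multiple of 8. */
void add_avx(float *dst, const float *a, const float *b, int n) {
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(dst + i, _mm256_add_ps(va, vb));
    }
}
```

Same loop, same data; the AVX version just consumes twice as many elements per iteration - which is exactly why you need big, regular chunks of data before the wider registers pay off.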

But that being said, if performance is the goal and you are pushing lots of data, you're better off taking the read/copy hit and having the GPU run the numbers, since GPUs are designed to do it. There's a reason AVX hit a roadblock at 512 bits (the only use I personally know of is copying memory). At that scale, just let the GPU run the numbers. GPUs blow CPUs out of the water for SIMD, but it depends on what you're parallelizing.

There are absolutely gaming workloads that benefit from using AVX over taking the latency penalty to do it on the GPU. It's true that once upon a time this was less the case because using AVX instructions would cause the CPU to lower its clock rate, but that has not been the case for some time. Moreover, there's a lot more going on in AVX than just extra-wide SIMD, especially in AVX-512. AVX can do things, a lot of things, that you simply cannot do on a GPU. This is an extremely naive and ill-informed viewpoint.

To be honest, your statement about AVX-512 hitting a wall is really strange, because it doesn't make any sense. It gives me the impression you don't really know what you're talking about and are instead just parroting points from other websites. AVX-512 has barely even appeared in consumer-facing processors; Intel doesn't enable it in its lower-end processors because it takes up an extravagant amount of die area that can otherwise be left dark to improve thermal density. The fact that AVX hasn't progressed beyond 512-bit width has nothing to do with "hitting a wall" or anything like that - it's because the benefits of AVX-512 primarily come in the form of non-SIMD instructions and the extra-big registers. If it were as pointless as you imply, AMD wouldn't be adding it to its next generation cores.
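As one concrete example of those non-SIMD extras: AVX-512 adds dedicated mask registers, which let you predicate individual lanes without a separate compare-and-blend dance. A minimal sketch, assuming a CPU with AVX-512F and a compiler flag like -mavx512f (the function name and the multiple-of-16 length assumption are mine):

```c
#include <immintrin.h>

/* Clamp negative elements to zero (a ReLU, essentially) using a
   mask register instead of a branch or blend. Assumes n is a
   multiple of 16. */
void relu_avx512(float *x, int n) {
    const __m512 zero = _mm512_setzero_ps();
    for (int i = 0; i < n; i += 16) {
        __m512 v = _mm512_loadu_ps(x + i);
        /* k gets a 1 bit for every lane where v < 0 */
        __mmask16 k = _mm512_cmp_ps_mask(v, zero, _CMP_LT_OQ);
        /* overwrite only the masked lanes with zero */
        v = _mm512_mask_mov_ps(v, k, zero);
        _mm512_storeu_ps(x + i, v);
    }
}
```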


On 2022-09-08 at 3:56 AM, taiiat said:

AVX is tricky - to get an actual performance benefit you have to convert the vast majority, or even all, of your Code to run on AVX. due to the stock nature of basically all CPUs being to throttle when an AVX instruction passes through, passing an AVX Instruction from time to time will actually reduce overall performance, rather than raise it.

now, that stock behavior is certainly horrific, but out-of-the-box configuration is how 99.9% of Users will be configured, of course, so for better or for worse we're stuck assuming that is the behavior on all CPUs.
also doesn't help that the Instructions perform more work concurrently but also consume more Power and as a result run hotter - and low-end systems, the main subject here, are the most prone to Thermal problems from passing large amounts of AVX Instructions.

it also behaves much differently from traditional instructions, so there is a bit of setup cost any time you want to call some while already running traditional Instructions.

You clearly have not used AVX instructions in your own code. Please allow me to inform you on this topic with which you are not familiar.

First of all, the "AVX lowers clocks" thing is both out of date and drastically exaggerated. The most that any consumer CPU ever dropped for AVX by default is four clock bins - that's 400 MHz. The performance benefit of using AVX in places where software benefits from it far outstrips this small drop in clock rate on older consumer CPUs, which typically run at 3 GHz or more.

Moreover, the "you shouldn't use AVX unless all of your code can use it" meme comes specifically from the arena of high-performance computing, and it is no longer the best guidance even there. For the very specific cases that benefit the most from AVX2 and AVX-512 instructions, there is no better solution, especially since modern processors don't have to drop clock rates to use AVX instructions.

All of this meme about AVX being high-power and causing you to lose performance by dropping the clock rate originates with early Xeon processors, which ran at relatively low clock rates and had to drop them considerably to make use of AVX instructions, because power delivery had not yet caught up with the high current demands of modern processors. AVX instructions ran hot on those chips because they were fabricated on a much older process and packed relatively many cores onto a single die for the day.

The reality on modern processors is that you do not drop clocks whenever you use AVX, and it does not cause excessive heating unless you are running specific instructions in a tight loop, like a Linpack stress test or benchmark. Depending on your software, it is possible to see a 2-5x speed-up for certain operations when using AVX instructions (as much as 25x for AVX-512!).
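For a sense of where that kind of speed-up comes from, here's a rough sketch (the function names and the multiple-of-8 length assumption are mine) comparing a scalar sum with an AVX sum that consumes eight floats per iteration:

```c
#include <immintrin.h>

/* Scalar reference: one float per loop iteration. */
float sum_scalar(const float *x, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += x[i];
    return s;
}

/* AVX version: eight floats per iteration; assumes n is a multiple
   of 8. Note the summation order differs from the scalar loop, so
   the result can differ in the last few bits. */
float sum_avx(const float *x, int n) {
    __m256 acc = _mm256_setzero_ps();
    for (int i = 0; i < n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(x + i));
    float lanes[8];
    _mm256_storeu_ps(lanes, acc);  /* spill the 8 partial sums */
    float s = 0.0f;
    for (int j = 0; j < 8; j++)
        s += lanes[j];
    return s;
}
```

In practice memory bandwidth, not the arithmetic, often caps the gain - which is part of why the observed speed-up varies so much by workload.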


49 minutes ago, auxy said:

It's true that once upon a time this was less the case because using AVX instructions would cause the CPU to lower its clock rate, but that has not been the case for some time.

 

36 minutes ago, auxy said:

You clearly have not used AVX instructions in your own code. Please allow me to inform you on this topic with which you are not familiar.

it's clearly not me that isn't familiar. an AVX offset is still the default behavior on newer Platforms - plus, even if it were not (though it is), unless you expect your Software to only be run on the newest Hardware, that still leaves the vast majority of Hardware that supports the feature behaving exactly that way.

further explaining what I already outlined.

38 minutes ago, auxy said:

All of this meme about AVX being high-power and causing you to lose performance by dropping the clock rate originates with early Xeon processors

and it still persists today because it's a factor that should be considered, due to the largest demographic having pretty close to the bare minimum for Cooling, or any of the other options available.

it doesn't mean that AVX isn't useful. it means that it's not just something you can do for free, without thinking, if your Software runs on a wide range of Hardware and you care about the experience on all of it.
AVX and AVX2 are great, and making use of them in Software can offer some pretty nice benefits. but maximizing performance will mean determining which Hardware probably should or shouldn't be running them, and Hardware that probably shouldn't will still need the traditional route for everything.

it's beneficial, but also added complexity.


6 hours ago, auxy said:

There are absolutely gaming workloads that benefit from using AVX over taking the latency penalty to do it on the GPU.

That's basically what I said, no? "Specific workloads" means there are cases for it, just not in great number... hence, SSE is "good enough" for the data you'd be pushing around, so fine enough for a minimum...

6 hours ago, auxy said:

Moreover, there's a lot more going on in AVX than just extra-wide SIMD, especially in AVX-512.

This is mostly true for 512, but again, the use case is specific. AVX is more of a nicer SSE with wider registers, at least up to 512.

6 hours ago, auxy said:

AVX can do things, a lot of things, that you simply cannot do on a GPU. This is an extremely naive and ill-informed viewpoint.

Never said otherwise - merely pointed out the trade-off if you want an appreciable performance gain. The read/copy hit is significant in itself. But the point still stands: if you are running SIMD instructions and need registers that wide, you might want to run it on the GPU - preferably all of it on the GPU, to sidestep the issues you mention (and I mention).

6 hours ago, auxy said:

To be honest, your statement about AVX-512 hitting a wall is really strange, because it doesn't make any sense. It gives me the impression you don't really know what you're talking about and are instead just parroting points from other websites.

If you've got examples, I'd like to hear them. As I say, I've only ever seen benefit in copying memory for servers. We don't push data that wide, so I've never personally seen any reference to work that needs it. I hear the PS3 emulation team uses AVX-512 (though I wouldn't be surprised, given the PS3 architecture, if their use case was similar).

6 hours ago, auxy said:

AVX-512 has barely even appeared in consumer-facing processors

I included the reference to the 512 set as a note on its near-futility - registers that size, and the space they physically take up (which you note as well). Not that they should be used in consumer-facing software, for this reason.

6 hours ago, auxy said:

AMD wouldn't be adding it to its next generation cores.

AI, because everything nowadays is AI. Matrix math is a bunch of small ints, and the large register size means a lot gets done in one go (but, as I said above, GPUs blow the CPU away for this workload). This is for apps like Matlab (and others) that don't accelerate with a GPU. Though I hear they might just be two 256-bit registers? AMD gets its work done a little differently, IIRC.

6 hours ago, auxy said:

You clearly have not used AVX instructions in your own code. Please allow me to inform you on this topic with which you are not familiar.

This wasn't directed at me, but just to note that I have. In C, no less. Like I said above, AVX is a nicer instruction set to use than SSE, but fundamentally it's of little consequence. If you're using a 256-bit register (for example), you'll always be using a 256-bit register, even when you don't have the workload for it. Like I said, if you aren't working with really large chunks of data it doesn't really make sense, and if your modus operandi is performance, then get it running on the GPU.
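For what it's worth, the small-data point shows up directly in code: any buffer that isn't a multiple of the vector width still needs a scalar tail loop, so short buffers spend most of their time outside the vector path. A quick sketch (the name and structure are mine):

```c
#include <immintrin.h>

/* Scale a float array in place. The AVX loop handles 8 elements at
   a time; whatever is left over falls through to a scalar tail. */
void scale_avx(float *x, int n, float k) {
    const __m256 vk = _mm256_set1_ps(k);
    int i = 0;
    for (; i + 8 <= n; i += 8)
        _mm256_storeu_ps(x + i, _mm256_mul_ps(_mm256_loadu_ps(x + i), vk));
    for (; i < n; i++)  /* scalar tail for the remainder */
        x[i] *= k;
}
```

With n = 10, the vector loop runs once and the tail runs twice - hardly worth the setup.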

EDIT:

Out of curiosity, I googled whether there was a use for AVX-512 in video games and ended up spending a not-insignificant amount of time watching Ultimate Epic Battle Simulator 2 videos on YouTube (I used to do things like this with Total War: Rome when I was a kid). Anyway, the relevant part:


https://steamcommunity.com/app/1468720/discussions/0/3274689919157957914/

From one of the developers, in reference to AVX-512:

Quote

UEBS2 uses pure GPU for this. The AI itself actually runs on the GPU, along with instancing and bone animation. This would simply be impossible on the CPU. Trust us, we have tried! At the beginning of development we actually had this running on the CPU. On an 8-core CPU we could barely get the AI to even see each other in time to react when charging at each other. It took around 1 second just to process what LOD to render for 1 million troops.

 

 

Edited by MillbrookWest

Quote

From one of the developers, in reference to AVX-512:

oh well yeah, AVX-512 is untenable for Video Games. it's too hard to use, and the Hardware requirements are of course also way too high anyways - not enough CPUs actually support it. and yet again, raised Thermals from the CPU doing more work in less time, impacting the lower-specced users (the majority of a Customer base for a Video Game that isn't VR) the most.
if you relied on either of these to make your game, well, it better be a pro bono charity case, because you'll never make any Money on it - because nobody can play your game.
so even if in their experiments 512 was performant enough, it wouldn't have mattered anyways, because they couldn't actually rely on it in play since nobody's Hardware supports it - a whole lotta work for nothing.

 

anyways, it's all not relevant for this game - requiring AVX at all complicates System Requirements a lot, so any reliance on it forces you to Branch the game, and that's just not practical. hence why, of the games I know that will make use of AVX in any capacity (which is only AVX and AVX2), it's not a requirement - which again means that the game has to Branch and function on Hardware that does and does not support AVX, without either having terrible performance.

can't require something that you can't ensure will actually run okay on all of the Clients - you have to assume that they're running the default Configuration for all of their Hardware, with the stock Cooler or a very inexpensive aftermarket one, as most of them are.


8 hours ago, taiiat said:

oh well yeah, AVX-512 is untenable for Video Games. it's too hard to use, and the Hardware requirements are of course also way too high anyways - not enough CPUs actually support it. and yet again, raised Thermals from the CPU doing more work in less time, impacting the lower-specced

anyways, it's all not relevant for this game - requiring AVX at all complicates System Requirements a lot, so any reliance on it forces you to Branch the game, and that's just not practical.

Difficulty-wise it functionally would make no difference, at least between SSE and AVX (and no big issues for AVX -> AVX-512). They're both SIMD, so you'll already have a buffer/dataset you've prepared (whatever that looks like), and you would merely make a call to whatever function you use to process that information - you wouldn't need a different codebase, for example. Meaning you can build in a fallback, as sketched below. Outside of that, the compiler would work its magic (or not - who knows?).
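For illustration, a fallback along those lines might look like this in C - a sketch using the GCC/Clang builtin __builtin_cpu_supports (MSVC would need __cpuid instead); the process_* functions are hypothetical placeholders, not anything Warframe actually has:

```c
void process_sse(float *buf, int n);   /* hypothetical SSE4 path */
void process_avx2(float *buf, int n);  /* hypothetical AVX2 path */

/* Function pointer chosen once at startup; every call site stays
   the same regardless of which path was picked. */
static void (*process)(float *buf, int n);

void init_dispatch(void) {
    __builtin_cpu_init();  /* prime the CPU-feature data on GCC */
    if (__builtin_cpu_supports("avx2"))
        process = process_avx2;
    else
        process = process_sse;
}
```

Same dataset, same call sites; only the function behind the pointer changes.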

But principally, you'd need something that can be vectorized, and enough of it to need to push through registers that wide. Maybe DE can give insight into what they use SSE for in a gaming context? @[DE]Danielle Because I honestly don't know; updating the camera? At the very least it would be an interesting look behind the curtain.

But otherwise, yeah - if SSE poses a problem, then AVX poses more of a problem, since it was introduced much later, so hardware support is even lower. With that being said, AVX does obsolete SSE.

Edited by MillbrookWest

Intel i7-8750H says it supports SSE4.1, SSE4.2

 

Does this mean it doesn't support SSE, SSE2, SSE3, SSSE3, and does not supporting those four mean I won't be able to play Warframe?

 

It's a bit of a technobabble announcement that will go over the heads of people like me; you might want to give more specific details next time.


Quote

Holy hell, this is incredible! 🤣 I can't believe you weren't already using SSE4 as a baseline! To say nothing of SSSE3, which predates the Evolution Engine itself!

Honestly at this point in time if you're going to bother to change your target, I really think you should be moving up to at least AVX. AVX can offer a huge performance benefit on the PS4 and Xbox One if your application can make use of it; I'm sure you're probably using it there already? AVX is supported on processors all the way back to Sandy Bridge (2nd-gen Core family) and Bulldozer (AMD FX series); surely chips older than this barely run the game as it is. 

Looking through the thread, I guess some of you are still playing on machines that are more than a decade old. You know, kudos to you, I suppose, for keeping your old PCs in service and out of the landfills, but damn, y'all. You could have bought a new bad-ass gaming PC with the extra money you've spent on power keeping those old inefficient systems running all these years! 😆 

To nitpick: AVX was not supported on Pentium or Celeron processors until 11th gen. The G4560 and G3258 were somewhat popular, great-value Pentiums. (I believe one of them had Hyper-Threading and the other was overclockable.)


9 hours ago, RoninFive said:

Intel i7-8750H says it supports SSE4.1, SSE4.2

 

Does this mean it doesn't support SSE, SSE2, SSE3, SSSE3, and does not supporting those four mean I won't be able to play Warframe?

 

It's a bit of a technobabble announcement that will go over the heads of people like me; you might want to give more specific details next time.

It means that it supports everything up to these two. I have an i5-7300HQ and it supports SSE4.1/4.2, so you're good.
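If anyone would rather ask the CPU directly than dig through spec sheets, here's a small sketch that compiles with GCC or Clang on x86 (the builtin reads CPUID under the hood):

```c
#include <stdio.h>

int main(void) {
    printf("SSSE3:  %s\n", __builtin_cpu_supports("ssse3")  ? "yes" : "no");
    printf("SSE4.1: %s\n", __builtin_cpu_supports("sse4.1") ? "yes" : "no");
    printf("SSE4.2: %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
    return 0;
}
```

If all three print "yes", the upcoming requirement is met.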


6 minutes ago, Uggymoo said:

that's great for those who can afford an upgrade, but I can't. thanks for the kick in the nuts

So I looked into it more, and my CPU does support SSE, SSE2 and SSE3, but I still get the message that my CPU isn't supported.

 AMD Phenom II X4 955 - HDX955FBK4DGI (cpu-world.com) 

or is it because it's not an Intel CPU???

You get the message because your CPU doesn't support SSE4.1/4.2.

Edited by Myscho

In 2023 (specific date TBD), we will be changing our minimum specs to require the following CPU features: SSSE3, SSE4.1 and SSE4.2*. 

*Your CPU must support SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2.

it's in the first post - their words, not mine

7 minutes ago, Myscho said:

You get the message because your CPU doesn't support SSE4.1/4.2.

Edited by Uggymoo

4 minutes ago, Uggymoo said:

In 2023 (specific date TBD), we will be changing our minimum specs to require the following CPU features: SSSE3, SSE4.1 and SSE4.2*. 

*Your CPU must support SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2.

it's in the first post - their words, not mine

The answer remains the same: your CPU doesn't support SSE4.1 and 4.2, hence the warning message.

You need support for all of them, and yours ends at SSE3.

Edited by Myscho

3 minutes ago, Myscho said:

The answer remains the same: your CPU doesn't support SSE4.1 and 4.2, hence the warning message

the first post says, and I quote, "*Your CPU must support SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2." - that's what the asterisk (*) is for.

 


Just now, Uggymoo said:

the first post says, and I quote, "*Your CPU must support SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2." - that's what the asterisk (*) is for.

 

Don't you get it? Yes, that's right, but again: if you don't have them all = unsupported CPU.


Hi, my name is Ericsson (Bagualado). I play the game on low settings; I can do the main content on a very old PC, and I have no money to replace it any time soon. My config is an E5700 (2 cores at 3.0 GHz), a PC YES 610 video card, and 8 GB of RAM. I can play at 20-30 fps on normal maps; it's only bad in Fortuna, Deimos and, I think, Eidolon - I sit at 7 fps in Fortuna. Please don't block weak computers, DE.

Thanks for your attention.

Edited by Bagualado
without thanking those who read for their attention

Honestly, this looks like laziness on the part of the DE developers. Look at Path of Exile, for example: they have three client versions - DX11, DX12 (still in beta), and even a Vulkan client. So why can't DE have a "low end" client for Warframe? Hell, why not transition to Vulkan instead?

Edited by SaintLucifer

1 hour ago, Dexxie said:

yay, I won't be able to play since
**amd phenom II**
2k hours and I've gotta find new parts somehow
just bought this pc lmao

You bought a Phenom II in 2022? Daamn, what a waste of money.

Edited by Myscho

14 minutes ago, Phaynipe said:

I still can't figure out whether my Intel Pentium G4560 processor supports it or not. Or will it not be possible to play on it? Help! 🙏🤷‍♂️

 

Quote

Instruction Set Extensions | Intel® SSE4.1, Intel® SSE4.2

what you have to support is SSE4.2, and you support that.

