
GeForce NOW Help, please


Nerf_SupYo

Recommended Posts

Greetings. I used to run Warframe at 1080p on my GTX 1070 with an ancient Core i7-4770 and saw the frame rate swing wildly from the low 50s up to the hundreds (no matter how low I set the gfx options, it was never stable, fps limiters or not).

Now I'm running Nvidia's RTX 3080 Now service (i.e. a cloud PC with a 3080 and a Threadripper CPU), and while it can* look better and get better frame rates, they're no more stable, which baffles me completely. You couldn't ask for much more horsepower than that, and it still swings down to the 50s.

I posted this on the Nvidia forums and got zero responses; does anyone here have any thoughts? It kinda sucks having it be so unstable like that. A flat 120 fps would be dandy.


10 hours ago, Nerf_SupYo said:

Greetings. I used to run Warframe at 1080p on my GTX 1070 with an ancient Core i7-4770 and saw the frame rate swing wildly from the low 50s up to the hundreds (no matter how low I set the gfx options, it was never stable, fps limiters or not).

Now I'm running Nvidia's RTX 3080 Now service (i.e. a cloud PC with a 3080 and a Threadripper CPU), and while it can* look better and get better frame rates, they're no more stable, which baffles me completely. You couldn't ask for much more horsepower than that, and it still swings down to the 50s.

I posted this on the Nvidia forums and got zero responses; does anyone here have any thoughts? It kinda sucks having it be so unstable like that. A flat 120 fps would be dandy.

Warframe is predominantly limited by single-core performance. Threadripper's all-core performance is in another world compared to the 4770, but its single-core performance ranges from 'modest' to merely 'better', depending on the SKU.


11 hours ago, MillbrookWest said:

Warframe is predominantly limited by single-core performance. Threadripper's all-core performance is in another world compared to the 4770, but its single-core performance ranges from 'modest' to merely 'better', depending on the SKU.

Are you able to get a stable fps with (whatever your system happens to be)?


23 hours ago, Nerf_SupYo said:

Are you able to get a stable fps with (whatever your system happens to be)?

I don't often check framerates. I know I have enough headroom to allow SSAA and not really notice. The last time I did check was a long while ago on a Kuva run, and I was at ~120-85 (maybe). This was with a 5800X, but only a three-man squad (I cap at 120 tho, so obviously it wouldn't go higher).

How does GeForce NOW run its service? Do you get all of a Threadripper (I would doubt it), or are you given an allocation? If it's virtualized, you'll likely lose some perf to the hypervisor. On top of that, depending on how/if they allocate cores, you also contend with (what I assume are) the 3000 gen's architecture limitations as well.

EDIT:

Figured I'd see what perf looks like in general now. From what I can see, I'm basically almost always GPU bound at 5120x2160 when I remove the framerate cap, but:

worksheet-spreadsheetu9fdo.png

This was a cambion drift vault run.


On 2022-08-31 at 10:04 PM, MillbrookWest said:

I don't often check framerates. I know I have enough headroom to allow SSAA and not really notice. The last time I did check was a long while ago on a Kuva run, and I was at ~120-85 (maybe). This was with a 5800X, but only a three-man squad (I cap at 120 tho, so obviously it wouldn't go higher).

How does GeForce NOW run its service? Do you get all of a Threadripper (I would doubt it), or are you given an allocation? If it's virtualized, you'll likely lose some perf to the hypervisor. On top of that, depending on how/if they allocate cores, you also contend with (what I assume are) the 3000 gen's architecture limitations as well.

EDIT:

Figured I'd see what perf looks like in general now. From what I can see, I'm basically almost always GPU bound at 5120x2160 when I remove the framerate cap, but:

worksheet-spreadsheetu9fdo.png

This was a cambion drift vault run.

If (and that's an if, I'm no expert) I'm understanding what I'm seeing there, it looks like you're getting roughly the same as I am: some very good fps, some very average, and everything in between. (Is that about right?)

For GeForce NOW (aka GFN) there are 3 tiers: the free tier, where you wait in a queue till a machine is ready (it's a server rack, but they are actual individual GPU/CPU setups, not virtualized, afaik); the paid tier, where you get the same hardware but no waiting in line; and the "3080 Now" tier that I use, which has no wait with the discrete 3080 and Threadripper, and for supported games it is definitely sweet. Dying Light 2 or Cyberpunk 2077 look and play incredibly well at 4K given reasonable settings, and with fairly stable frame rates, which is why I was so surprised to see no improvement in Warframe, a famously well-optimized game.


16 hours ago, Nerf_SupYo said:

If (and that's an if, I'm no expert) I'm understanding what I'm seeing there, it looks like you're getting roughly the same as I am: some very good fps, some very average, and everything in between. (Is that about right?)

With boxplots, 50% of the data points are within the box and 25% in each whisker (or, read another way, I spent 50% of my game time with that output; the average is the point inside the box). The smaller the box and whiskers, the more stable the performance. Warframe has large fluctuations, it seems.

Since the graph above was 100% GPU bound, I figured I'd run it again at 1080p and see what my numbers look like, this time on a Kuva survival, since that's the last time I remember my own numbers:

worksheet_warframecpug7c0r.png

It is better, but from the line graph you can see that the largest swings seem to happen when the GPU isn't fully loaded. Regardless, it ranges from ~240 down to ~100 at times.
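The boxplot reading above can be reproduced with a few lines of stock Python. This is a minimal sketch using hypothetical per-second fps samples (the actual capture data isn't in the thread): the quartiles bound the box, the min/max bound the whiskers, and a tight spread means stable performance.

```python
import statistics

# Hypothetical per-second fps samples from a run (not the poster's real data).
fps_samples = [100, 115, 130, 145, 160, 175, 190, 205, 220, 240]

# quantiles(n=4) returns the three quartile cut points: [Q1, median, Q3].
q1, median, q3 = statistics.quantiles(fps_samples, n=4)
mean = statistics.mean(fps_samples)

print(f"box spans {q1:.0f}-{q3:.0f} fps (middle 50% of samples)")
print(f"median {median:.0f}, mean {mean:.0f}")
print(f"whiskers reach {min(fps_samples)}-{max(fps_samples)} fps")
```

The wider the box (Q3 minus Q1) relative to the median, the more time was spent far from the typical frame rate, which is exactly the instability being described.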

16 hours ago, Nerf_SupYo said:

For GeForce NOW (aka GFN) there are 3 tiers: the free tier, where you wait in a queue till a machine is ready (it's a server rack, but they are actual individual GPU/CPU setups, not virtualized, afaik); the paid tier, where you get the same hardware but no waiting in line; and the "3080 Now" tier that I use, which has no wait with the discrete 3080 and Threadripper, and for supported games it is definitely sweet. Dying Light 2 or Cyberpunk 2077 look and play incredibly well at 4K given reasonable settings, and with fairly stable frame rates, which is why I was so surprised to see no improvement in Warframe, a famously well-optimized game.

If you have access to Task Manager, you can try adjusting the core affinity of the Warframe process so that it stays within the same CCD/CCX (I forget which one is which). On Windows alone this nets better perf due to the way the Windows scheduler works (nothing magical tho, the scheduler is 'good enough'). I think I recall the 3000-gen Zen CPUs also benefitting? (Might need to double-check that.)
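For anyone unfamiliar with how affinity is expressed: Task Manager's "Set affinity" dialog and the `start /affinity <hexmask>` command both take a bitmask where bit N allows the process on logical core N. A minimal sketch of building that mask (the 8-core grouping is a hypothetical example; the actual core-to-CCX mapping varies by SKU):

```python
def affinity_mask(cores):
    """Return a bitmask with one bit set per allowed logical core."""
    mask = 0
    for core in cores:
        mask |= 1 << core  # bit N = logical core N is allowed
    return mask

# Hypothetical: pin to the first 8 logical cores (one CCX's worth on some parts).
first_group = affinity_mask(range(8))
print(hex(first_group))  # 0xff
```

On Windows, that hex value would be used as `start /affinity ff Warframe.x64.exe`; ticking the same boxes in Task Manager's affinity dialog sets the identical mask. As the post notes, cloud services like GFN don't expose this level of control to the user.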

Also, what frequency do they let the CPUs run up to? I remember they boost pretty high, but for data centres power consumption tends to be a very tangible cost. Boost might not be as aggressive, if they even let it boost at all.

Outside of that, someone who also runs a Threadripper might need to chime in, as I'm unsure what Warframe should perform like.


4 hours ago, MillbrookWest said:

With boxplots, 50% of the data points are within the box and 25% in each whisker (or, read another way, I spent 50% of my game time with that output; the average is the point inside the box). The smaller the box and whiskers, the more stable the performance. Warframe has large fluctuations, it seems.

Since the graph above was 100% GPU bound, I figured I'd run it again at 1080p and see what my numbers look like, this time on a Kuva survival, since that's the last time I remember my own numbers:

It is better, but from the line graph you can see that the largest swings seem to happen when the GPU isn't fully loaded. Regardless, it ranges from ~240 down to ~100 at times.

If you have access to Task Manager, you can try adjusting the core affinity of the Warframe process so that it stays within the same CCD/CCX (I forget which one is which). On Windows alone this nets better perf due to the way the Windows scheduler works (nothing magical tho, the scheduler is 'good enough'). I think I recall the 3000-gen Zen CPUs also benefitting? (Might need to double-check that.)

Also, what frequency do they let the CPUs run up to? I remember they boost pretty high, but for data centres power consumption tends to be a very tangible cost. Boost might not be as aggressive, if they even let it boost at all.

Outside of that, someone who also runs a Threadripper might need to chime in, as I'm unsure what Warframe should perform like.

Thanks for all your thought and input here. I know it isn't a huge problem, but it has bugged me over the years: every time a huge explosion of gfx glitter goes off, you're in jello territory, and even with all that turned off it still happens anyway. I did play the Xbox Series X version for a while and don't recall this problem coming up, but I know they can put a much heavier hand on console versions than on our 5,001 different PC builds. Unfortunately, the level of detail GFN users get into how the rig is actually running is very limited, and we definitely don't get to control anything as deep as core affinities. Ah well, thanks anyway.


Archived

This topic is now archived and is closed to further replies.
