
My GPU runs much hotter at higher framerates, only with this game


Apav

Hi guys,

So while I'm playing other modern and demanding games, my GPU reaches a certain temperature and stays there, no matter what I set the max framerate to. That range is 78-85C depending on the game, but with most games I peak at about 80C on average. With Warframe, however, my GPU runs much hotter, and how hot depends on the framerate I'm getting. When I'm getting 100+ fps (which I do a lot of the time), the hottest I've seen my GPU reach is 93C! Reducing the max framerate setting does reduce the heat in most cases, so I can confirm the heat is directly related to the FPS I'm getting. For me, a PC gamer for over a decade, that is unheard of. I've also taken all the precautionary steps (cleaned all the dust out of my PC, updated to the latest drivers, made sure nothing else was hogging my PC's resources while playing, etc.). I found someone else experiencing the exact same issue here, so I know I'm not alone.

I want to play without an fps limit so I can take advantage of my 120Hz monitor. But with these temps I can't really play the game above 60 fps at all, unless I want my fans to get really loud and my GPU (and my room) to get really hot. My GPU is old enough as it is; I don't want to shorten its lifespan further. Warframe is the only game I have ever experienced this issue with, so this is definitely not normal, and I hope there is a fix. Though I am fearful this is just how Warframe is coded and there is nothing I can do.

Below are my temperature averages for each max framerate setting, as well as the FPS I get when it's set to no limit.

Orbiter average:

30 fps: 65C
40 fps: 70C
50 fps: 80C
60 fps: 80C
72 fps: 87C
120/144/200/no limit: 93C

Average fps with no limit: 95-110fps

Mission average:

30 fps: 64C
40 fps: 66C
50 fps: 69C
60 fps: 79C
72 fps: 80C
120/144/200/no limit: 93C

Average fps with no limit for most demanding (PoE): 60-70fps
Average fps with no limit for least demanding (Hydron): 100-120fps

Dojo average:

30 fps: 80C
40 fps: 80C
50 fps: 85C
60 fps: 89C
72 fps: 88C
120/144/200/no limit: 88C

Average fps with no limit: 50-70fps

Relay average:

30 fps: 67C
40 fps: 71C
50 fps: 80C
60 fps: 80C
72 fps: 83C
120/144/200/no limit: 87C

Average fps with no limit: 80-90fps

 

My specs:

i7 7700k at stock

GTX 770 2GB

Playing on max settings at 1440p

Running game on a Samsung 830 SSD

16GB DDR4 RAM at 3200MHz

Windows 10 Pro

Edited by Apav

23 hours ago, Apav said:

For me, a PC gamer for over a decade, that is unheard of.

This should be pretty much expected if you run without a frame limit - you are essentially allowing the GPU to run as close to 100% as it can, whenever it can (and this is made worse when you couple that with a high resolution). In other words, you are sustaining a high current load across the GPU die for prolonged periods.
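To illustrate why a cap helps: a frame limiter just inserts idle time at the end of each frame instead of letting the next one start immediately. This is a generic sketch under assumed numbers (a 60 fps target and a sleep standing in for the real frame work), not how Warframe's limiter is actually implemented:

```cpp
#include <chrono>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    // Hypothetical 60 fps cap: ~16.7 ms per frame.
    const auto frameBudget = std::chrono::microseconds(1'000'000 / 60);

    for (int frame = 0; frame < 600; ++frame) {
        const auto frameStart = clock::now();

        // Stand-in for the work of simulating and rendering one frame.
        std::this_thread::sleep_for(std::chrono::milliseconds(3));

        // With the cap, whatever is left of the frame budget is spent idle,
        // so the hardware is not pushed toward 100% for the whole frame.
        std::this_thread::sleep_until(frameStart + frameBudget);
    }
}
```

Without that last wait, the next frame starts immediately and the GPU stays loaded back-to-back, which is exactly the uncapped behaviour described above.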

NOTE: 100% in an app like Afterburner is a bit of a vague metric. Each GPU is made up of distinct blocks which take on the work given to them, and you can't really measure the load on each with accuracy. The figure is accurate insofar as it is a general representation of the work the GPU is doing, but it does not show how the load is distributed internally. My 100% above means a GPU that is actually getting close to 100% internally.

As an example: a CPU is made up of distinct blocks (ALU, FPU, CU, etc.), which may not all be used at the same time. A GPU is also built from blocks that may not all be used at the same time (this is the TL;DR version for GPUs). If you have ever used the popular prime95 software, you'll have seen it has a few tests you can run across your CPU. Each test runs the CPU at 100%, but each test also puts out a different amount of heat. The one that puts out the most heat is using more of the circuitry on the CPU over time.

The picture for GPUs is similar, though GPUs and CPUs work in different ways (parallel vs serial). Most titles nowadays use deferred rendering. This is a method which takes the lighting pass and moves it to the end of the render pipeline. This is a key distinction, because inherent to forward rendering is the fact that you are basically performing a lighting pass across all the geometry in your scene relative to the number of lights you have. Deferred rendering instead scales with your resolution relative to the number of lights. You can see that forward rendering can cause you to lean on the circuitry (ROPs, TMUs, shaders, etc.) more - and Warframe uses forward rendering.
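To make that scaling concrete, here's a minimal toy cost model - not Warframe's actual renderer; the pixel, light, and overdraw counts are my own illustrative assumptions, with overdraw standing in for "all the geometry in the scene". It just counts per-light shading work: forward lighting touches every rasterized fragment, while a deferred pass lights each screen pixel once after the geometry has been written out:

```cpp
#include <cstdio>

int main() {
    // Toy cost model with made-up numbers; not a measurement of Warframe
    // or of any real GPU.
    const long long pixels   = 2560LL * 1440LL; // 1440p render target
    const long long lights   = 16;              // active dynamic lights
    const long long overdraw = 4;               // assumed avg. layers of geometry per pixel

    // Forward shading: every rasterized fragment (including overdrawn ones)
    // is lit against every light while the geometry is drawn.
    const long long forwardOps  = pixels * overdraw * lights;

    // Deferred shading: geometry goes into a G-buffer first with no lighting,
    // then each screen pixel is lit once per light.
    const long long deferredOps = pixels * lights;

    std::printf("forward  ~ %lld lighting evaluations per frame\n", forwardOps);
    std::printf("deferred ~ %lld lighting evaluations per frame\n", deferredOps);
}
```

That is per-frame work, so an uncapped framerate simply multiplies it by however many frames the GPU can push each second - which is why the frame cap changes the heat output so much.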

If you can find another modern forward-rendered title (modern shader work would be nice too), bump up the resolution and run it at a high framerate, you will likely see the exact same thing happen... You can't really fault the devs for this. The GPU is a dumb processor; it just does what it was designed to do.

EDIT:
Another example may be the AVX-512 instruction set. Relative to heat output, 100% with this instruction set is not the same as with 128- or 256-bit vectors. AVX-512 puts out significantly more heat from the same piece of silicon. That being said, it's not the devs'/designers' fault for wanting to use 512 over 256.

Edited by MillbrookWest
