
MillbrookWest

PC Member
  • Content Count

    1,595
  • Joined

  • Last visited

Community Reputation

1,745

About MillbrookWest

  • Rank
    Gold Hunter

Recent Profile Visitors

13,134 profile views
  1. You should drop your EE.log into a support ticket. I think DE also ask you to say something in chat when you have an issue, so it's easier for them to find the rough location in the log. Not too sure if they work global chat the same way, but you can just ask some generic question.
  2. That's normal. If it's sitting at 90+ and not going down, then that isn't. This is mostly just compiling shaders for the level, which generally makes the CPU hurt while it's doing it. Most modern games get this done when you boot them up for the first time; Warframe does it per planet - or biome (whichever description best matches all the places). A rough sketch of the idea is below. For example: Mirror's Edge on start-up: [screenshot] Mirror's Edge in-game: [screenshot]
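A minimal sketch of that compile-and-cache idea (not Warframe's actual code; the region and shader names are made up for illustration): the expensive compile only happens the first time you enter a region, which is why the CPU spike doesn't repeat on later visits.

```python
# Hypothetical sketch: compile shaders once per region and cache the result,
# so the CPU spike only happens on first entry to that planet/biome.
import time

_shader_cache: dict[tuple[str, str], bytes] = {}

def compile_shader(region: str, shader_name: str) -> bytes:
    """Stand-in for an expensive, CPU-heavy shader compile."""
    time.sleep(0.01)  # pretend this burns CPU for a while
    return f"{region}:{shader_name}".encode()

def load_region(region: str, shaders: list[str]) -> None:
    for name in shaders:
        key = (region, name)
        if key not in _shader_cache:            # only compile on first visit
            _shader_cache[key] = compile_shader(region, name)

load_region("Plains of Eidolon", ["terrain", "water", "foliage"])  # CPU spike
load_region("Plains of Eidolon", ["terrain", "water", "foliage"])  # cached, cheap
```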
  3. For those forcing AF through their drivers (like I do, because I'm not entirely sure what's up with the built-in version), it seems the results are stored in memory for every texture that gets sampled. In the Plains, just doing a quick run around the map, you end up at ~4.8-5GB (a rough back-of-the-envelope estimate of where a number like that comes from is sketched below). Could end up higher; I wasn't looking to be exhaustive. This is the same with Fortuna, and the same again with Deimos. Doing the final bounty, I generally end up at ~5.5GB of VRAM usage: [screenshot] This doesn't seem to be an issue if you use the built-in AF solution (at least in my case), and the usage seems mor…
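A back-of-the-envelope estimate of texture memory, assuming block-compressed textures with full mip chains (a full chain adds roughly one third on top of the base level). The texture counts and sizes are made up for illustration, not pulled from Warframe's asset data:

```python
# Rough VRAM estimate for a set of textures with full mip chains.
def texture_bytes(width: int, height: int, bytes_per_pixel: float, mips: bool = True) -> float:
    base = width * height * bytes_per_pixel
    return base * (4 / 3) if mips else base    # full mip chain adds ~1/3

# 1 byte/px approximates BC7/DXT5-style block compression (8 bits per pixel)
textures = [(4096, 4096, 1.0)] * 100 + [(2048, 2048, 1.0)] * 400

total_gb = sum(texture_bytes(w, h, bpp) for w, h, bpp in textures) / 1024**3
print(f"~{total_gb:.1f} GB")    # ~4.2 GB for this made-up set
```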
  4. This was addressed at the end of the post you quoted. A bigger squad = more game logic -> more points for the AI to check, more work on the network thread (probably part of IO) to synchronize sessions, etc. That means more CPU work, principally. As far as the GPU is concerned, it doesn't care whether it's rendering your squad-mate or a specter; they're functionally identical to the GPU. For posterity, though, I've provided examples with a squad below. However, if you care about the numbers you post, you should be open to improving them. These could have been done by yourself, instea…
  5. The point was in relation to this opening statement: As I said, the GPU is just an add-in card. It accelerates the work it's given; if it isn't given work, it does nothing. Ergo, throwing a higher-tier GPU at it does nothing to improve perf. As I said in my other posts, there is a bottleneck elsewhere. Observe (I'll put the disclaimer here so it's not missed: this is just a small part of how CPUs work): by forcing Windows to not context switch as much between cores, you get a boost in performance (4c vs. 12T) - a sketch of how to pin the process like that is below. Forced to 4 cores: [screenshot] No limit, 12 threads: [screenshot] I g…
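A minimal sketch of pinning the game to a fixed set of cores, assuming psutil is installed and the process is named "Warframe.x64.exe" (adjust to whatever your task manager shows). It does the same thing as Task Manager's "Set affinity":

```python
# Pin the game process to the first four logical cores so the scheduler
# can't bounce it around as much between cores.
import psutil

TARGET = "Warframe.x64.exe"   # assumed process name - check your task manager
CORES = [0, 1, 2, 3]          # first four logical cores

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == TARGET:
        proc.cpu_affinity(CORES)   # equivalent to Task Manager's "Set affinity"
        print(f"Pinned PID {proc.pid} to cores {CORES}")
```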
  6. See the note on the renderer. Also note the Windows task scheduler: of all the OSes, Windows comes out bottom for the 'smartness' [read: it's old] of the algorithm used (documented all over the internet). Ideally, you don't want to switch cores unless the core is under load -> see context switching.
  7. There are many parts to a GPU, just as there are many parts to a CPU, beyond the aggregated 100% you might read from some software; that metric merely gives an impression of work done. General rule of thumb: if the GPU isn't maxed out with v-sync off, then there is a bottleneck somewhere else (since all a GPU is is an add-in accelerator card - but similar rules apply within the GPU itself); a quick sketch of that check is below. OpenGL was faster than DX9. No one used OpenGL because there was next to no support for it (I believe even Carmack has gone on record to make this point). By and large, there is a similar trend…
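A quick sketch of that rule of thumb. The utilisation samples would come from whatever monitoring tool you use (MSI Afterburner, nvidia-smi, etc.); the 90% threshold is an arbitrary illustration, not a hard number:

```python
# Rule of thumb: with v-sync off, a GPU that isn't near 100% utilisation
# is being starved by something else (CPU, IO, bandwidth, ...).
def likely_bottleneck(gpu_util_samples: list[float], vsync_off: bool = True) -> str:
    if not vsync_off:
        return "inconclusive: v-sync caps the frame rate, so the GPU idles by design"
    avg = sum(gpu_util_samples) / len(gpu_util_samples)
    if avg >= 90:
        return "GPU-bound: a faster GPU (or lower settings) should help"
    return "not GPU-bound: look at the CPU, IO, memory bandwidth, etc."

print(likely_bottleneck([55, 60, 52, 58]))   # -> not GPU-bound: look at the CPU, ...
```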
  8. If the GPU usage is low, then the GPU isn't the limiting factor; CPU, IO, bandwidth, etc. could be what limits perf. A DX12 version is confirmed to be in the works. However, all it will do is let them push more work to the GPU, since it's a graphics API. If there are other issues as above, it does diddly squat; ditto for Vulkan. Game logic, for example, is not typically influenced in any way by how fast your GPU is.