
Can Someone Explain How GDDR5 VRAM And Bandwidth Work?


Cemges


So I am confused about cards having 128/192/256/384-bit memory interfaces and the effect of VRAM amount on bandwidth. Take the Nvidia GTX 870M: it comes in two configurations, with 3 and 6 GB of GDDR5 VRAM at 2500 MHz, and both have a 192-bit memory interface. How do these two have the exact same bandwidth?

The problem is that people talk about this GPU being around as powerful as, say, a GTX 660 Ti, but not performing as well due to low memory bandwidth. They say that because of the 192-bit interface it cannot use all the memory of the 6 GB version for games, yet that memory can be used for other tasks? I am confused. Can someone tell me how memory interface, memory clock, and memory amount affect bandwidth, and how bandwidth is calculated? Does extra memory really have no visible effect on bandwidth, making it basically useless past a certain amount?

Edited by Cemges

I can explain things.

 

GDDR5 stands for Graphics Double Data Rate type 5 Synchronous Graphics Random Access Memory; it's the fifth generation of graphics DDR (Double Data Rate) memory and is the standard for graphics cards because it can be clocked very fast and offers high bandwidth.

 

VRAM means Video Random Access Memory in this case; it's basically the onboard memory which the GPU uses to store data and instructions for calculations. Typically you want higher VRAM if you're displaying 3D models with extremely high resolution textures, shadow maps, etc.

 

Bandwidth is, in layman's terms, the pipe which data travels through. The higher the bandwidth, the wider the pipe, meaning more data can be sent through. The bandwidth value measures how much data can be transferred per second; the higher the value, the faster texture and calculation data can be streamed between the VRAM and the GPU. It is determined by the memory clock and the #-bit interface between the memory and the GPU (assuming the memory's own maximum bandwidth doesn't exceed what the GPU itself can handle).

 

 

Can someone tell me how memory interface, memory clock, and memory amount affect bandwidth, and how bandwidth is calculated? Does extra memory really have no visible effect on bandwidth, making it basically useless past a certain amount?

Bandwidth isn't affected by the size of the VRAM, but rather by how fast the VRAM is clocked and how many connections are made to the GPU (the #-bit interface). While some VRAM might support higher bandwidth, it's possible that two GPUs have the same memory bandwidth because that is the maximum bandwidth the GPU itself can handle. Now, just because the amount of memory doesn't affect the bandwidth doesn't mean extra memory is useless--in fact, more VRAM means more space for the GPU to store/cache graphics data without having to purge old data and re-fetch it from your normal RAM or HDD. Too little VRAM will hurt performance because the GPU has to communicate with the CPU much more frequently to fetch data.
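To make the calculation concrete, here's a minimal Python sketch. The function name is mine, and it assumes the "2500 MHz" GDDR5 figure from the question is the DDR data clock (which transfers twice per cycle, giving 5000 MT/s effective); note that the VRAM amount never appears in the formula:

```python
# Memory bandwidth depends only on transfer rate and bus width --
# the amount of VRAM (3 GB vs 6 GB) never enters the formula.
def memory_bandwidth_gb_s(transfers_per_sec, bus_width_bits):
    """GB/s = transfers per second * bus width in bytes / 1e9."""
    return transfers_per_sec * (bus_width_bits / 8) / 1e9

# Assumption: 2500 MHz is the GDDR5 data clock -> 5000 MT/s effective.
rate = 2500e6 * 2
print(memory_bandwidth_gb_s(rate, 192))  # -> 120.0 GB/s, for both the
                                         #    3 GB and the 6 GB model
```

This matches why both GTX 870M configurations show identical bandwidth: clock and bus width are the same, and capacity simply isn't a term in the equation.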

 

 

 

 

Also, as an add-on note, the reason the GTX 870M is considered "about as powerful" as a GTX 660 Ti is that the GTX 870M is a mobile GPU for laptops (hence the M), while the GTX 660 Ti is a desktop GPU with a tweaked design for higher performance (hence the Ti). Mobile GPUs are designed to fit smaller chipsets and boards, meaning less space for the data pipes (wires) connecting the GPU and VRAM, which results in lower bandwidth from a lower #-bit interface. Paired with the fact that mobile GPUs can't benefit from large heatsinks like desktop cards, this means both the VRAM and the GPU have to be clocked somewhat slower than their desktop counterparts.

 

So although the GTX 870M is ahead of the GTX 660 Ti in generation and iteration, it's actually about on par with (if not slightly inferior to) its desktop counterpart because it's a laptop GPU. Currently there is no desktop model of the GTX 8xx cards (the 800 series is a mobile chipset), but if you were to compare a GTX 760 to a GTX 660, the 760 would blow it out of the water.


Wow, thanks for the detailed answer. But in that case, depending on how games use the VRAM, it might make up for some of the lack of bandwidth, but say, not for MSAA, right? And in that case the only way to improve bandwidth on a GPU is basically to overclock it, which isn't the safest thing on a laptop...


Wow, thanks for the detailed answer. But in that case, depending on how games use the VRAM, it might make up for some of the lack of bandwidth, but say, not for MSAA, right? And in that case the only way to improve bandwidth on a GPU is basically to overclock it, which isn't the safest thing on a laptop...

In a way yes and in a way no.

 

For a game that doesn't use super high resolution textures, shadow maps, or other things that need lots of VRAM (e.g. Minecraft), lots of VRAM doesn't really matter. But games like MMOs with fairly fancy graphics need quite a bit of VRAM for all the character textures on screen, as well as shadows, etc. It's not really "how" they use it, more of an "if" they use it. All games use VRAM, but some need more than others. Typically, the more active the game is (MMOs are visually very active: lots of character movement, loading in and out of areas, changing scenes, and high-res textures), the more VRAM bandwidth and capacity you want.

 

As for overclocking, it depends which is currently clocked faster, as well as the maximum bandwidth the GPU can support. Some GPU calculations/instructions require 2-4 operands, all of which should ideally be loaded from VRAM in the time it takes the GPU to execute a single clock cycle (or two). Because VRAM is DDR, it can execute load/store operations on both the rising and the falling edges of the clock signal; this effectively doubles the rate at which data is loaded/stored (DDR-1600 runs at 800 MHz but is "effectively" 1600 MHz because it transfers twice per clock cycle). Most GPUs' VRAM bandwidth is enormous, so to increase bandwidth you can usually get away with just overclocking your VRAM (which you should only do in 100 MHz increments, for stability reasons; memory clocks really like multiples of 100).
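The "effective" clock arithmetic above can be sketched like this (a toy illustration; the function name is mine, and it assumes the usual conventions that plain DDR transfers twice per clock while GDDR5 transfers four times per command clock):

```python
# "Effective" transfer rate = base clock * transfers per clock cycle.
def effective_mt_s(base_clock_mhz, transfers_per_cycle):
    return base_clock_mhz * transfers_per_cycle

print(effective_mt_s(800, 2))   # DDR-1600: 800 MHz clock -> 1600 MT/s
print(effective_mt_s(1250, 4))  # GDDR5: 1250 MHz command clock -> 5000 MT/s
```

This is why a modest-looking base clock can correspond to a much higher advertised figure: the multiplier comes from how many transfers happen per cycle, not from the clock alone.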

 

The majority of GPUs now see a memory clock of about 2-3x the GPU clock, which is really about the ideal range as-is.


This topic is now closed to further replies.
