  • 0

How Will This Run Warframe?


Codesco

Question

Recommended Posts

  • 0

That's an LGA1151 board, so he can upgrade that chip to a 4 GHz i7 later if he chooses to do so.

The other upgrades I listed will dramatically increase performance for a reasonable price difference (although the switch to an SSD will leave you with drastically less space for the price).


  • 0
You also made several incorrect assumptions about loading assets into RAM and how WarFrame utilizes system RAM and GPU RAM.

 

Such as?

Also, if your entire case against adding an SSD is that the decrease in loading and streaming times is not worth the money, then you could have stated that clearly, and I can agree to an extent.

 

I repeat something so that people will stop parroting marketing terms as fact.  Hyperthreading is artificial PR garbage that hurts gaming more than it helps.  It is far better to look for non-gimmicky ways to improve gaming.

 

This was stated in response to your hyperthreading blurb.  HyperThreading is useless for gaming.
See how that works?  Do you understand it yet?  HyperThreading is useless for gaming.
So when you post about having more cores you should be sure to mention that HyperThreading is useless for gaming.
Sometimes it requires repetition in order to get people to understand a point being made.

 

Apparently you like attacking and exaggerating things to absurd levels. Apparently I've been "parroting marketing terms" and "shilling so hard for a corporation", but I did no such thing.

There are many ways to convey a thought. You could have been civil and had a moderate discussion, but it looks like you prefer attacking and making random accusations for reasons I can't comprehend.

 

I did express my stance on the matter, and you are not addressing the point. I even said that hyperthreading on 4-core/8-thread and higher parts is indeed useless for gaming, but in the particular case of 2-core/4-thread parts it can help relieve gross compatibility issues.

 

Now, you could start to discuss in an educated and civil manner; otherwise, I'm not here to be your punching bag.

 

This is absolutely false; if you were involved in the DirectX programming ecosystem or actually knew what OpenGL/Vulkan/Khronos/etc. were up to, then you'd know that the DX12 hype is not going to pan out as well as people are hoping.  What DX12 games have you tested that would lead you to conclude that WarFrame would benefit from a DX12-exclusive build?

 

Well, there are a lot of benchmarks around that show that command throughput is vastly increased on multicore systems:

http://www.gamespot.com/forums/pc-mac-linux-society-1000004/3dmark-api-overhead-feature-test-early-dx12-perfor-31936719/

http://www.anandtech.com/show/9112/exploring-dx12-3dmark-api-overhead-feature-test/3

 

How do I know that Warframe has an API throughput issue?

https://forums.warframe.com/index.php?/topic/511105-where-is-the-bottleneck/

 

Would it increase performance 10x in all games, no exception? Obviously not; there is more to a game than driver overhead. I expect a more conservative increase in performance for the edge case of many objects flying around the scene, relays, and GPU usage below 90%.


  • 0

Probably 'cause neither I nor anybody else I know who has these cards knows what this issue even is.  Your post is the first I've heard of this existing.

 

It is so difficult to search for 'Nvidia GTX 970 lawsuit', isn't it?

 


 

An SSD is not a good use of the topic poster's (the OP's) limited budget.  So constantly recommending an SSD is not only counterproductive but shows blind ignorance on your part of those who don't have the budget for such a luxury when building a new system.

 

As I've said before, it has been shown objectively that putting more money into the CPU and GPU provides a significantly better return on investment than spending it on an SSD.  SSDs are generally better suited to those who have more disposable income to spend.  You continue to ignore that, and that is why I keep repeating myself.

 

HyperThreading is useless for gaming.  I repeat myself because apparently it doesn't register with you yet.

It doesn't matter if it is dual-core or quad-core or whatever.  HyperThreading does not provide a significant boost in gaming performance compared to non-hyperthreaded processors.  The OP's limited budget is better spent on a higher-quality processor that is objectively better in performance and benchmarks than the hyperthreaded 'burst clock' nonsense of that poor-quality i3 that was initially chosen.

 

Lemme respond to those 'benchmarks'.  3DMark is a useful gauge of general performance but artificial benchmarks ALONE are not a reason to ever promote something.  You need real-world data from real systems and real games actively running.

In the past, 3DMark has been 'cheesed' by drivers from both Nvidia and AMD/ATI who were caught with their pants down when it came to benchmark-specific optimizations.  So I take these 'results' with critical skepticism.

 

The benchmarks you linked are useless and only serve to parrot the PR and marketing of DirectX 12, which is being pushed heavily by Microsoft to promote more Win10 installs because reasons.  When we get actual game benchmarks instead of artificial ones, then feel free to link them.

 

Also, early benchmarks from 3DMark reported by two different websites are not automatically enough information to make an informed decision.  It is always wise to have a variety of benchmarks, from both artificial software like 3DMark and real-world gaming benchmarks from the games themselves.

 

Windows 10 has a host of issues and problems to deal with.  Forced updates that render computers unbootable, embedded and default-enabled spyware and keylogging, counterintuitive design, and mandatory advertising everywhere: it is no small wonder that there's been such opposition to the forced, no-choice upgrades that have been pushed against the will of many people.

 

I'm not a fan of Win10 for many reasons, but the primary one is that it is deliberately anti-consumer and anti-freedom in the most disgusting ways possible.  Creating a Win10-exclusive build for an API that hasn't been fully released yet would be a waste of dev time and effort that could be far better spent on optimizing the game for the existing renderers that we have.  If they'd gone OpenGL then we wouldn't even be in this DX9/DX10/DX11 mess, but that's something else.

 

If you look at performance prior to the 18.4 patch, things ran really well.  But for whatever reasons, the graphical changes in recent hotfixes have messed up performance for many systems (even on Win10).

 

Your fanatical demeanor in promoting HyperThreading and DX12 leads me to label you a shill and a corporate mouthpiece.  That isn't an attack; it is right there in your actual posts.  You are promoting these technologies without noting their downsides and the potential pitfalls of utilizing either or both of them.  You blatantly ignore that the OP has a limited budget and may not even be on Windows 10 yet.

Edited by FreshNinja007

  • 0

-snip-

 

Well, in the same post you are quoting, I clearly say that I agree an SSD might not be worth the money, and I made clear in my first post that it is a better idea to spend more on the CPU, so that's a net waste of two exaggerated paragraphs on your part.

Together with all your other references to my apparent ignorance of the OP being on a budget.

In what spacetime did I say that hyperthreading would provide a significant boost in performance?

Because, you know, I never did :3

I only mentioned compatibility, and thus potential stuttering/freezing.

OpenGL has always been a single-threaded API; that means the API can only be executed using the power of one of your four cores.

Vulkan is a multi-threaded API: now it can use (almost) all four of your cores to execute the API, driving a linear increase in performance for the specific issue of driver overhead.

The difference will be noticeable in that edge case I described; by how much depends on how well it gets implemented by the respective devs.

I did say that there is more to a game than driver overhead, too, and thus acknowledged that.
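The "only the driver overhead scales" caveat above can be sketched with Amdahl's law. This is purely an illustration: the 40% driver/API share of a frame is an assumed number, not a measured Warframe figure.

```python
# Amdahl's-law sketch: if only the driver/API slice of a frame
# parallelizes across cores, the overall frame-time speedup is modest.
# The 40% overhead fraction below is an assumption for illustration.

def frame_speedup(overhead_fraction: float, cores: int) -> float:
    """Overall frame-time speedup when only the driver/API portion
    of the frame scales linearly across `cores`."""
    serial = 1.0 - overhead_fraction      # game logic, GPU-bound work, etc.
    parallel = overhead_fraction / cores  # API submission spread over cores
    return 1.0 / (serial + parallel)

if __name__ == "__main__":
    # Even with a generous 40% driver share, 4 cores give ~1.43x,
    # nowhere near the 10x seen in synthetic draw-call benchmarks.
    for cores in (1, 2, 4):
        print(cores, round(frame_speedup(0.4, cores), 2))
```

This is consistent with both sides of the argument: the gain is real for draw-call-bound edge cases but far smaller than synthetic API-overhead tests suggest.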

You ramble about benchmarks, but if you knew a thing or two, you could see the potential of such an architecture to solve the aforementioned issue.

DX12 does the same thing, but restricted to Win10, because Microsoft is Microsoft.

Rambling about Win10, yadda yadda, whatever.

Basically, your strategy is to exaggerate, put things in my mouth that I didn't say, accuse me on the grounds of those artificial exaggerations, miss important points through tunnel vision, and ignore my questions entirely.

 

I will thus stop reading and replying to your posts, as it's a waste of time to repeat my stance on everything to you every single time.

 

One day you will realize that there are ways to communicate that don't involve mindless accusations out of thin air.

Have a nice day, 'Sir'.

Edited by dadaddadada

  • 0


 

OpenGL has come a long way and is not bound to being single-threaded.  In fact, OpenGL led the way in features and hardware capabilities (tessellation being a big one) long before DirectX could catch up to it.

 

There is a huge difference between the 'potential of such architecture' and actual real-world results.  You have still failed to provide such real-world results to back up your claims about DX12 superiority or about it being the 'magic bullet' to solve issues in WarFrame performance.

 

If you want to ignore the very real issues with Windows 10, then that is your choice, but please do not expect others to completely ignore the dangers of using an operating system that is deliberately anti-consumer and requires too many compromises to be viable for a stable and useful gaming experience.  Not everyone has problems, but that doesn't mean those who do should be ignored or have their claims dismissed.

 

If your response to criticism is to 'take your ball and go home', then that is your prerogative.  It may be a good idea to read up on things before offering 'advice' that misleads and misinforms those who don't have current, updated knowledge of things like bottlenecks and Win10 shenanigans.


  • 0

The topic is derailed.

 

nVidia wins the lawsuit. Read and learn a few new things (from a year ago):

http://wccftech.com/nvidia-geforce-gtx-970-memory-issue-fully-explained/

 

And I would take DX12 over OpenGL. A real-world result by Square Enix (from a year ago):

 

That isn't a 'real world' result at all.  That video is a tech demo.  It isn't interactive or playable.  It is therefore completely useless when it comes to actionable real-world tangible benchmarkable results.  Please stop wasting my time with that.

 

Btw, Nvidia hasn't 'won' anything.  That link is to a technology blog run by enthusiasts.  Neither of the people writing the GTX 970 articles has notable credentials or experience in the kind of proper technical analysis and longform pieces that such an issue would require.

 

Now, this link that you provided that 'fully explained' the memory issue is complete horse manure.  It is nothing but obfuscated PR damage control by Nvidia to purposefully mislead and misinform consumers so that they can mitigate any potential legal liability.  The data was already proven incorrect by real-world results as posted on both the Nvidia forums and the Guru3D forums.

 

What is even worse is that this piece from the same website seems to be an amateur opinion piece masquerading as useful information.  Making wild claims that performance issues are not from the graphics card but are end-user related or possibly even due to pirated copies of the game (what!? seriously!?) makes me question why such a person would be given the task of writing that kind of article.  And of course it is to generate controversy for views.  That's how most of those blogs work now.

 

Now the actual class-action lawsuit is a thing, and the 'proof' is public record, already submitted to the court.  From what I have looked into, that lawsuit is still ongoing because reasons.  The short version is that the publicly posted specifications did not match the actual card's specifications once it was released.  On top of that, the 3.5 GB and 0.5 GB memory separation/allocation issues were also not fully disclosed.  And of course all this violates numerous federal and state laws regarding deceptive advertising and all that.

 

People can purchase whatever they like, but putting money down on a GTX 970 despite the fact that it was falsely advertised (and continues to be falsely advertised to this day) as a fully-capable 4 GB card is just being a terrible consumer.  Terrible for the buyer and terrible for the market to support such deception.

 

Since the OP was building a new computer, I explicitly advised against choosing a GTX 970 due to these issues.  There are plenty of better GPUs out there that don't have the same issues and even 'older' cards like the GTX 960 would be a better choice.

 

Consumers that reward such garbage business practices are messing things up for themselves and making the market worse for everyone.  There's a reason why so many big-budget 'AAA' games are now sold as incomplete and buggy hogwash (Assassin's Creed Unity, Duke Nukem Forever, Aliens: Colonial Marines, WatchDogs) with heavily hyped-up preorders, their publishers doing whatever they can to remove content from the game and resell it piecemeal as preorder bonuses and DLC.  This happens because stupid people kept buying into it and continue to do so.  They don't understand that they have the alternative of supporting better games and companies that DON'T do that type of thing.

 

Ah well, another round of Chrono Trigger is always fun.

Edited by FreshNinja007

  • 0

Re-railing the topic: Codesco, those parts should be able to run the game A-okay at 60 frames a second. Yes, an i3 does have significantly less performance, but it is a 6th generation dual-core Intel CPU with hyper threading.

 

As long as you're not trying to multitask, and you've unparked your CPU cores (unparking your cores can make a MASSIVE difference in performance and potentially eliminate many CPU-related issues such as stuttering), you should have no problems running Warframe at the highest settings at 60 frames a second.

 

Good luck on your PC build!


  • 0
On top of that, the 3.5 GB and 0.5 GB memory separation/allocation issues were also not fully disclosed.  And of course all this violates numerous federal and state laws regarding deceptive advertising and all that.

Prebuilt computers come with partitioned drives and a total memory amount that may actually be spread across more than one stick; would those get hit for false advertising too?  Not to mention integrated chips that use some system memory as "video memory"; are the companies selling those machines going to get hit for false advertising too?

 

Mind quoting the federal laws?

(Since state ones vary too much since we could all easily be in separate states.)

 

People can purchase whatever they like, but putting money down on a GTX 970 despite the fact that it was falsely advertised (and continues to be falsely advertised to this day) as a fully-capable 4 GB card is just being a terrible consumer.

Again, neither I nor anybody else I know with this card was aware of this issue, so unless time machines and news sites from the future exist and we all simply ignored them when picking ours up, that's an unreasonable accusation for you to make.

 

If I PM you my address, will you send me a replacement card?  Since you seem so intent on us not using this one.

 

There's a reason why so many big-budget 'AAA' games are now sold as incomplete and buggy hogwash

[...]

with heavily hyped-up preorders and doing whatever they can to remove content from the game to resell it piecemeal for preorders and DLC.

... that is unrelated to the memory partitioning on a specific model of video card or picking specific parts for OP's machine.

Edited by Rydian

  • 0

-snip-

[ off topic ]

To be as constructive as possible:

 

What you saw in the Square Enix tech demo was the real-time rendering performance of the engine and the DX12 interface. There is no difference whether the character is moved by pre-made code or by an input method; moving a character is insignificant compared to what you saw in that video from early 2015. I would suggest studying Epic Games and checking out the DX12 interface and the utilities and input methods they have. You can even play as Darth Vader with U14.10 and DX12. Google it.

 

And just to give you a glimpse of simplified input methods:

 

And simplified character movements:

 

DX12 is here whether you like it or not.

 

About nVidia:

 

When I go and buy a Samsung EVO 250 SSD, I see on the front cover: 250 GB. Then I read the manual: 250 GB. Then I install it into my computer and, voila, I get 238 GB. So, what do you think I do first?

1) Sue Samsung, cause a ruckus in forums, and tell people not to buy the EVO because they lie

2) Enjoy the product and play games with short loading times

 

To use the whole memory capacity of the GTX 970 (both memory segments: 3.5 GB + 0.5 GB) you need to run games at 4K and ultra settings. The performance impact compared to the GTX 980 is minimal: 1% - 5%. 4K gaming is not recommended on either of the cards. And the GTX 970 is not on par with the GTX 980; they are different products which use different technologies, and they have different price tags. Yes, nVidia has already won the lawsuit.
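The memory-split argument can be made concrete with a back-of-envelope bandwidth model. Hedged sketch: the ~196 GB/s (fast 3.5 GB segment) and ~28 GB/s (slow 0.5 GB segment) figures are the numbers commonly cited in third-party coverage of the controversy, not vendor-confirmed specs, and real access patterns are never uniform.

```python
# Back-of-envelope model of the GTX 970's segmented memory.
# Bandwidth figures are assumed from commonly cited third-party analysis.

FAST_GBPS = 196.0   # assumed bandwidth of the 3.5 GB segment
SLOW_GBPS = 28.0    # assumed bandwidth of the 0.5 GB segment

def effective_bandwidth(working_set_gb: float) -> float:
    """Harmonic-mean bandwidth if accesses are spread uniformly over the
    working set and the slow segment is only touched past 3.5 GB."""
    if working_set_gb <= 3.5:
        return FAST_GBPS
    slow_frac = (working_set_gb - 3.5) / working_set_gb
    fast_frac = 1.0 - slow_frac
    return 1.0 / (fast_frac / FAST_GBPS + slow_frac / SLOW_GBPS)

if __name__ == "__main__":
    print(effective_bandwidth(3.0))  # stays in the fast segment
    print(effective_bandwidth(4.0))  # uniform access over all 4 GB
```

Under these assumptions, a workload that uniformly touches all 4 GB would see roughly half the fast-segment bandwidth, which is why the impact shows up mainly at 4K/ultra working sets.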

[ / offtopic ]

Edited by carnaga

  • 0

*snip*

 

A prerendered non-interactive demo is not useful for real-world benchmarking regardless of what you claim.  I'm not going to argue with you on this point, especially since non-representative 'vertical slice' garbage goes on constantly in the video games industry as it is.  The only benchmarks worth a damn are the actual released game products that we can install on our computers and test on our own terms.  We cannot trust the results of that Square-Enix demo or any other type of tech demo until we can do the benchmarks on our own computers.  If it was a downloadable tech demo with source code, that would be another story.  The chances of that happening are slim to none.

 

DX12 is still early enough that it has around 5-7 released titles that actually use it.  When we get more higher-profile games making DX12 builds then we can do proper benchmarks and make comparisons.  We need the equivalent of a 'Crysis' and 'Unreal Tournament' for DX12.  Artificial benchmarks are NICE as a side by side comparison to provide additional data in ADDITION to actual real-world game benchmarks.

 

Every single hard drive sold since at least the year 2000 has had a blurb on the box, in fine print, stating that formatting in the OS reduces available storage and that hard drive manufacturers calculate storage differently from how the OS calculates it.  So your lawsuit would get thrown back in your face with a laugh and a dismissal from any competent judge.
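The discrepancy both posters are describing is the decimal-vs-binary units convention: manufacturers count a "250 GB" drive as 250 * 10^9 bytes, while the OS reports capacity in binary GiB (2^30 bytes). A minimal sketch of the conversion:

```python
# Convert a manufacturer's decimal-GB capacity to the binary GiB
# figure an OS typically reports (before filesystem overhead).

def advertised_to_os_gib(advertised_gb: float) -> float:
    """Decimal gigabytes (10^9 bytes) expressed in binary GiB (2^30 bytes)."""
    return advertised_gb * 10**9 / 2**30

if __name__ == "__main__":
    print(round(advertised_to_os_gib(250), 1))   # a nominal 250 GB drive
```

For a nominal 250 GB drive this comes out to about 232.8 GiB, so the gap is a units convention, not missing hardware.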

 

The Nvidia situation is different: there is evidence that Nvidia misrepresented the technical specifications of their product and withheld crucial information required to make an informed consumer decision.  That's what the lawsuit is about.

 

To go back to the hard drive example, if I buy a hard drive advertised as '7200 RPM' and it turns out to only run at 5400 RPM because reasons, then I could demand a refund/compensation for the error on their part.
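The RPM example above is quantifiable: average rotational latency is the time for half a revolution, so a drive advertised at 7200 RPM that actually spins at 5400 RPM is measurably slower. A quick sketch:

```python
# Average rotational latency of a spinning disk: the head waits,
# on average, half a revolution for the target sector.

def avg_rotational_latency_ms(rpm: int) -> float:
    """Average rotational latency in milliseconds (half a revolution)."""
    seconds_per_rev = 60.0 / rpm
    return seconds_per_rev / 2 * 1000

if __name__ == "__main__":
    print(round(avg_rotational_latency_ms(7200), 2))  # ~4.17 ms
    print(round(avg_rotational_latency_ms(5400), 2))  # ~5.56 ms
```

That ~1.4 ms difference per random access is exactly the kind of concrete, advertised-spec shortfall a refund claim would rest on.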

Edited by FreshNinja007

  • 0

* snip *

FYI, we will soon get an additional game where you will be able to play and run real-world benchmarks on a proper game product aimed at consumers, with DX12 :)

What I think of benchmarks: 1) you can benchmark a game for yourself, to see whether your rig can run it, or 2) compare how different hardware performs (Tom's Hardware etc.). The reason we want to benchmark games is to gather accurate and comparable data. You cannot gather accurate and comparable data across different system setups if you control the camera and the character via gameplay: each run is different, giving dispersion in the gathered data, and the weakest link is the player itself. This is why people who really know how to benchmark accurately use games that can actually pull it off: Metro Last Light / Redux, third-party mods for Crysis 3, DAI, Total War and so on, not to mention 3DMark and other benchmarking tools.
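The dispersion point can be shown in a few lines of Python: a scripted camera path produces tightly clustered results run to run, while hand-played runs scatter. The fps samples below are invented for illustration, not real measurements.

```python
# Compare run-to-run spread of a scripted benchmark loop vs. manual play.
# All fps values are made-up illustrative samples.

from statistics import mean, stdev

scripted_fps = [61.2, 60.8, 61.0, 61.1, 60.9]   # same camera path each run
manual_fps   = [55.0, 63.5, 58.2, 66.1, 52.4]   # player-controlled runs

def summarize(samples):
    """Return (mean, standard deviation), each rounded to one decimal."""
    return round(mean(samples), 1), round(stdev(samples), 1)

if __name__ == "__main__":
    print("scripted:", summarize(scripted_fps))
    print("manual:  ", summarize(manual_fps))
```

With similar means, the scripted runs have a standard deviation more than an order of magnitude smaller, which is why repeatable built-in benchmark scenes are preferred for cross-system comparisons.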
 
For example, this is a bad benchmark (excluding the power consumption part, which was what the benchmark was actually about), where the tester runs these "loops" manually.
[benchmark screenshot]
 

