
Installer Performance


Krang2013

Reinstalling Warframe from local files and getting only 30 MB/s, with the CPU at 25%. Sometimes the speed drops to 17 MB/s during installation. I see it using 4 cores at around 70-80% each while the rest sit at 5-10%. Something isn't threaded enough; it should easily be able to push 100 MB/s without burning through 5 entire cores.

You might want to reconsider the I/O/compression library, because it's painfully slow even with 4 threads. 30 MB/s is roughly 240 Mbit/s, so this is basically a quarter of a gigabit connection on a 4 GHz CPU.
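
To be clear about what I mean by "threaded enough", here's a rough sketch (hypothetical Python, obviously not the launcher's actual code) of a two-stage pipeline where one thread keeps the network busy while another decompresses and writes, so neither stage stalls the other:

```python
# Hypothetical sketch: overlap the download and the decompress/write stages so
# neither single-threaded step leaves the other idle. Not the launcher's real code.
import queue
import threading
import zlib

CHUNK_SIZE = 4 * 1024 * 1024                        # 4 MiB of dummy payload per chunk
payload = bytes(range(256)) * (CHUNK_SIZE // 256)
remote_chunks = [zlib.compress(payload, 6)] * 32    # stand-in for the content server

work = queue.Queue(maxsize=4)                       # small buffer between the two stages

def downloader():
    # Stage 1: fetch compressed chunks (simulated here) and hand them off.
    for chunk in remote_chunks:
        work.put(chunk)
    work.put(None)                                  # sentinel: no more data

def installer():
    # Stage 2: decompress and "install" while the next chunk is still downloading.
    total = 0
    while True:
        chunk = work.get()
        if chunk is None:
            break
        total += len(zlib.decompress(chunk))
    print(f"installed {total / 2**20:.0f} MiB")

threads = [threading.Thread(target=downloader), threading.Thread(target=installer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The point is just that a slow single-threaded stage should never leave the download or the disk sitting idle.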

That's not even the largest problem I see. So, I'm adding a picture to illustrate. 

[Image: wf_installer3.png]

The first issue is the first segment in the picture: the installer is using 4 cores very inefficiently. The second segment is downloading and installing updates.

The third segment is after it finishes: the installer is still using 2 cores at 60-80%. How can that possibly be? That is a TON of CPU time just to display this:

[Image: wf_installer4.png]

 

I hope you will consider these suggestions. I don't think the engine developers would be too satisfied knowing the launcher does this. The application is branded "Evolution Engine"; that's the only reason I mention them.

 

It's amazing how chromium.dll is chewing up so many resources just sliding images back and forth. I didn't check, but it sounds a lot like Flash... and you wouldn't do that to us, right?

 

-edit-

I did some research on decompression speed, which is a minor part of my topic. It would seem the developers of compression software have decided to optimize only for storage and not for retrieval. A lot of work has gone into making compression multi-threaded, but decompression in almost every tool is single-threaded, which is very strange and lazy given that some of the companies using these algorithms make tons of money. I found a good example for those interested:

[Image: compression_benchmarks.png]

Source: https://catchchallenger.first-world.info/wiki/Quick_Benchmark:_Gzip_vs_Bzip2_vs_LZMA_vs_XZ_vs_LZ4_vs_LZO
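
To illustrate why single-threaded decompression is a packaging choice rather than something inherent to the algorithm, here's a small sketch using only Python's standard zlib (not whatever format the installer actually uses): if the archive is split into independently compressed chunks, decompression parallelizes trivially.

```python
# Sketch: decompressing independently compressed chunks in parallel.
# Standard library only; CPython's zlib releases the GIL on large buffers,
# so plain threads are enough to keep several cores busy here.
import time
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNKS = 64
payload = bytes(range(256)) * (4 * 1024 * 1024 // 256)      # ~4 MiB per chunk
compressed = [zlib.compress(payload, 9)] * CHUNKS           # stand-in for the archive

def run(workers):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(zlib.decompress, compressed))
    secs = time.perf_counter() - start
    mib = CHUNKS * len(payload) / 2**20
    print(f"{workers} thread(s): {mib / secs:,.0f} MiB/s")

run(1)   # baseline: what a single-threaded unpacker gets
run(4)   # same data, same algorithm, four workers
```

Same data, same algorithm; the only difference is that the chunks don't depend on each other, so a pool of workers can chew through them.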

 

I tested LZ4, and where I got 30 MB/s before, I now get 1 GB/s+. It's a shock when you compress a 20 GB file at lightning speed only for it to take half an hour to decompress; with any plain LZ-family algorithm the same file decompresses in a matter of seconds.

Even though their compression ratio is only around 35%, compared to a 15-20% average for the others, the last two on the chart (LZ4 and LZO) are orders of magnitude faster to decompress.
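
If anyone wants to reproduce the kind of gap I'm talking about, here's roughly how to measure it in Python. This assumes the third-party lz4 package (pip install lz4); the exact ratios and speeds depend entirely on the data and hardware, so treat the output as a shape, not a promise.

```python
# Rough throughput comparison: LZMA (high ratio, slow to unpack) vs LZ4
# (lower ratio, fast to unpack). Numbers depend heavily on the input data.
import lzma
import time

import lz4.frame  # third-party: pip install lz4

# Dummy compressible payload (~7 MiB); real game assets will behave differently.
data = (b"Evolution Engine asset data " * 4096) * 64

def bench(name, compress, decompress):
    blob = compress(data)
    start = time.perf_counter()
    for _ in range(20):
        decompress(blob)
    secs = time.perf_counter() - start
    ratio = len(blob) / len(data)
    speed = 20 * len(data) / secs / 2**20
    print(f"{name:5s}  ratio {ratio:6.2%}  decompress {speed:8,.0f} MiB/s")

bench("lzma", lzma.compress, lzma.decompress)
bench("lz4", lz4.frame.compress, lz4.frame.decompress)
```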

Well, I guess that answers one of my own questions. You guys don't really have a say in the matter, since it's probably part of some easy-bake API.

Edited by Krang2013