[DE]Drew

Dedicated Conclave Servers


Oh sorry, I didn't post an update: the multi-instance problem ("platform not enabled") should be fixed now. Could you send me a log from the single instance that takes a long time to start?


@maciejs Can you add some additional lines to the log so it's possible to reconstruct the information shown on the end-of-game screen? Kills/deaths can probably already be worked out, but Oro, ranking/winner and score can't, from what I've heard.
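Once those lines exist, pulling the end-of-match summary out of the server log could be as simple as the sketch below. To be clear, the "MatchResult:" line format in it is purely hypothetical; the real field names will depend on whatever the build actually writes, so the pattern would need adjusting once it's out.

import re
from pathlib import Path

# Hypothetical line format; the real one depends on what the build writes, e.g.:
#   MatchResult: player=SomeTenno kills=7 deaths=3 score=25 oro=12 rank=1
LINE = re.compile(
    r"MatchResult: player=(?P<player>\S+) kills=(?P<kills>\d+) deaths=(?P<deaths>\d+) "
    r"score=(?P<score>\d+) oro=(?P<oro>\d+) rank=(?P<rank>\d+)"
)

def parse_match_results(log_path):
    """Collect one dict per end-of-match summary line found in the server log."""
    results = []
    for line in Path(log_path).read_text(errors="ignore").splitlines():
        m = LINE.search(line)
        if m:
            row = m.groupdict()
            results.append({k: v if k == "player" else int(v) for k, v in row.items()})
    return results

if __name__ == "__main__":
    for row in parse_match_results("LunaroClan.log"):
        print(row)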


OK, next build should have some more info in the log after every match.


Wow, I'm glad to see there's still someone looking after this amazing feature after a couple of years.

 

Any chance we can host our clan Dojo in the future?
My friends and I are tired of watching each other glitch in and out of existence (as well as walking on air instead of jumping), and it's difficult to guide new members to our labs without resorting to voice chat. I can only guess that this would be fixed by a more powerful server.


Multi-instance may be broken again.

I tried to start a server while logged into Warframe (the server is on the same computer for testing purposes) and got the dreaded H.Misc.cache error.

Using this batch:

rem %type% picks which settings section from DS.cfg this instance uses
set exec=
set type=LunaroClan

rem -allowmultiple and -instance:2 are there so several copies can run side by side;
rem the log for this instance is written to %type%.log
%exec%Warframe.x64.exe -log:%type%.log -cluster:private -allowmultiple ^
-applet:/Lotus/Types/Game/DedicatedServer /Lotus/Types/GameRules/DefaultDedicatedServerSettings ^
-instance:2 ^
-settings:%type% ^
-dx10:1 -dx11:0
pause

 

Using this DS.cfg:

+nowarning
+version=5

[LotusDedicatedServerConfig,/EE/Types/Base/Config]

[LotusDedicatedServerConfig,/Lotus/Types/GameRules/LotusDedicatedServerConfig]
App.DedicatedServerFrameRate=60

[60fpsLotusDedicatedServerConfig,/Lotus/Types/GameRules/LotusDedicatedServerConfig]
App.DedicatedServerFrameRate=60

[120fpsLotusDedicatedServerConfig,/Lotus/Types/GameRules/LotusDedicatedServerConfig]
App.DedicatedServerFrameRate=120

[LotusDedicatedServerSettings,/Lotus/Types/Game/DedicatedServerSettings]
version=8
starChart=/Lotus/Types/Game/SolarMap/OriginSolarMapRedux
gameConfig=/Lotus/Types/GameRules/LotusDedicatedServerGameConfig
eloRating=2
missionIdToNode={
    {
        id=SB_Title
        node=PvpNode11
    },
    {
        id=TDM_Title
        node=PvpNode9
    },
    {
        id=DM_Title
        node=PvpNode10
    },
    {
        id=CTF_Title
        node=PvpNode0
    },
    {
        id=DM_Alt_Title
        node=PvpNode14
    },
    {
        id=TDM_Alt_Title
        node=PvpNode13
    },
    {
        id=VT_Title
        node=PvpNode15
    }
}
levelOverrides={
    {
        missionTag=PvpNode0
        levels=/Lotus/Types/GameRules/PVPCTFLevels
    },
    {
        missionTag=PvpNode10
        levels=/Lotus/Types/GameRules/PVPDMLevels
    },
    {
        missionTag=PvpNode9
        levels=/Lotus/Types/GameRules/PVPDMLevels
    },
    {
        missionTag=PvpNode11
        levels=/Lotus/Types/GameRules/PVPSBLevels
    },
    {
        missionTag=PvpNode13
        levels=/Lotus/Types/GameRules/PVPDMLevels
    },
    {
        missionTag=PvpNode14
        levels=/Lotus/Types/GameRules/PVPDMLevels
    },
    {
        missionTag=PvpNode15
        levels=/Lotus/Types/GameRules/PVPVoidTearLevels
    }
}
PVPmodes={
    PVPMODE_CAPTURETHEFLAG=CTF_Title
    PVPMODE_CAPTURETHEFLAG_Alternative=PVPMODE_CAPTURETHEFLAG_ALTERNATIVE
    PVPMODE_TEAMDEATHMATCH=TDM_Title
    PVPMODE_TEAMDEATHMATCH_Alternative=PVPMODE_TEAMDEATHMATCH_ALTERNATIVE
    PVPMODE_DEATHMATCH=DM_Title
    PVPMODE_DEATHMATCH_Alternative=PVPMODE_DEATHMATCH_ALTERNATIVE
    PVPMODE_SPEEDBALL=SB_Title
    PVPMODE_VOIDTEAR=VT_Title
    PVPMODE_VOIDTEAR_Alternative=PVPMODE_VOIDTEAR_ALTERNATIVE
    PVPMODE_DEATHMATCH_ALTERNATIVE=DM_Alt_Title
    PVPMODE_TEAMDEATHMATCH_ALTERNATIVE=TDM_Alt_Title
    PVPMODE_CAPTURETHEFLAG_ALTERNATIVE=CTF_Alt_Title
    PVPMODE_VOIDTEAR_ALTERNATIVE=VT_Alt_Title
}
matchmakingRegionOverride=EUROPE

[Lunaro,LotusDedicatedServerSettings]
missionId=SB_Title
motd=Lunaro FFA, WARNING, this is a test dedicated server, expect sudden d/c. If you get lag, please /w NoSpax or PM in Forums.
enableVoice=1
clanOnly=0
highBandwidth=0
eloRating=2

[LunaroClan,Lunaro]
motd=Lunaro ClanOnly
enableVoice=1
clanOnly=1

[DeathMatch,LotusDedicatedServerSettings]
missionId=DM_Title
motd=DM FFA, WARNING, this is a test dedicated server, expect sudden d/c.
enableVoice=0
clanOnly=0
highBandwidth=0
eloRating=0

[VoidTear,LotusDedicatedServerSettings]
missionId=VT_Title
motd=Void Tear Mode?
enableVoice=1
clanOnly=0
highBandwidth=1
eloRating=2

 

No crash, just an error followed by a server shutdown. Do I have the wrong settings, maybe? It has been years since I last used dedicated servers, and I want to help with debugging.


Yes, the problem is that you're running the game at the same time, and the two fight for access to the cache. Multiple copies of the dedicated server are smart enough to avoid it, but that comes with a minor performance penalty, so the game itself is a bit less careful.


Not sure if this is the right thread to ask, and note that I'm not a network engineer, but here goes anyway.

 

Recently, I've hosted some games and noticed that a few specific players had a large number of dropped packets (5-10%), which caused rubberbanding (example gfycat clip).

Interestingly, it did not seem to be caused by CPU/GPU, ping or bandwidth issues. We ran a few tests on that (these things are hard to pin down, of course), but one of the players with the problem consistently had the lowest ping, and it also seemed to happen when he was the only one on the server. The symptoms in that case weren't rubberbanding, of course, but increased load times and, again, entries in the logs.

I know you're using the UDP protocol for these connections, and after reading up on the subject, I figured it might be because of lost/dropped packets due to fragmentation; maybe some routers along the way drop fragmented UDP packets by default. So what I've tried now is to force an MTU on my interface like this, with different values for the MTU:

> netsh int ipv4 set subinterface "my Internet-facing interface name" mtu=1300 store=persistent

Looking at the logs, I think this might have resolved the issue for one person, but since different people may have different per-connection MTUs, it's hard to find one setting that works for everyone.

I can figure out an acceptable MTU for specific people by doing ping tests with varying frame sizes, provided they allow pinging (a method I got from here, section "The Diagnosis and Fix"; note that the example parameters in that blog are wrong, it's -f -l *size*, not -l -f *size*), e.g.:

> ping ***.***.***.*** -f -l 1272
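To avoid doing that sweep by hand, here's a small sketch that binary-searches the payload size with ping -f -l and reports the largest one that still gets through unfragmented; adding the 28 bytes of IPv4 + ICMP headers then gives the path MTU. The target address is a placeholder, and it obviously only works for people who answer pings.

import subprocess

def largest_unfragmented_payload(host, low=1200, high=1472):
    """Binary-search the largest ICMP payload that gets through with DF set (Windows ping -f -l)."""
    best = None
    while low <= high:
        size = (low + high) // 2
        # -n 1: one echo request, -f: don't fragment, -l: payload size in bytes
        proc = subprocess.run(["ping", host, "-n", "1", "-f", "-l", str(size)],
                              capture_output=True, text=True)
        # Treat a fragmentation complaint or a failed reply as "too big".
        ok = proc.returncode == 0 and "needs to be fragmented" not in proc.stdout
        if ok:
            best = size
            low = size + 1
        else:
            high = size - 1
    return best

if __name__ == "__main__":
    payload = largest_unfragmented_payload("203.0.113.1")  # placeholder address
    if payload is None:
        print("No size got through; the host may not answer pings.")
    else:
        # 20 bytes IPv4 header + 8 bytes ICMP header on top of the payload
        print(f"Largest unfragmented payload: {payload} bytes, estimated path MTU: {payload + 28} bytes")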

 

So that's how far I've come already, now for the questions:

  • Is there a way to investigate, monitor, debug or report these connection issues?
  • Is it possible to set the MTU on the connections themselves instead, i.e. just for Warframe instead of globally on my interface?

 

Naturally, when I'm not hosting but still have a fixed, lower-than-normal MTU set on my interface, I might be running into problems myself as a client.

 

 

P.S.: From what I've read, the minimum MTU on the Internet should be 576 bytes, i.e. any connection must support that without fragmenting packets. But if I set mine that low, no one can connect to any games I'm hosting anymore (not even PvE). Warframe doesn't like that at all.

Edited by Kontrollo
added example clip

Could you send me a log from a session that suffers from this problem? I don't think hand-tweaking a per-connection MTU is a sustainable solution here. Sadly, this is also mostly beyond the scope of application control. We do have a congestion control system that tries to make sure we don't send too much data if the client can't handle it. Our internal MTU starts at around 1300 bytes (Ethernet's theoretical guaranteed max is 1500). Forcing it to a significantly lower value can cause issues; however, we rarely send packets that big, since we simply don't generate that much data in a frame, and we can't wait too long for the buffer to fill as that would affect ping.
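To illustrate the general idea (this is not Warframe's actual networking code, just a sketch of why a conservative budget helps): if every UDP datagram stays under roughly 1300 bytes of payload, it fits comfortably inside a single 1500-byte Ethernet frame (max UDP payload 1472), so the IP layer never has to fragment and one lost fragment can't take a whole update with it.

import socket

PAYLOAD_BUDGET = 1300  # well under the 1472-byte max UDP payload on a 1500-byte Ethernet MTU

def send_update(sock, addr, blob: bytes):
    """Split a state update into datagrams that each fit within the payload budget."""
    for offset in range(0, len(blob), PAYLOAD_BUDGET):
        sock.sendto(blob[offset:offset + PAYLOAD_BUDGET], addr)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_update(sock, ("127.0.0.1", 4950), b"\x00" * 5000)  # placeholder address and port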


Thanks a lot for the reply. I've been reading my post again, and I can imagine it looks as if there's definitely a problem with the MTU and how it's being handled.

That's not really the case; it's all just speculation on my part.

I'm unable to create a good test environment here. I also know networking is hard, especially because it's UDP: it makes basically zero guarantees, I can't use any dev tools, and the game itself is a black box to me. So there's no way for me to know how Warframe deals with problems at the UDP layer, except for what little I can gather from the logs.

Also, I haven't tried any capture/inspection software like Wireshark, because it might raise some third-party-tool red flags.

The questions are mainly about whether there's a way for me (or you) to get more insight into these problematic packet losses, or to mitigate them.
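For now, the closest I can get from outside the game is watching OS-level interface counters. Below is a small sketch using the third-party psutil package (the interface name is a placeholder); it only sees NIC-level receive errors and drops, not Warframe's per-connection stats, but a counter that climbs during a match at least points at my end of the connection.

import time
import psutil  # third-party package: pip install psutil

def watch_interface(nic_name, interval=5.0):
    """Print received-packet, error and drop deltas for one interface every few seconds."""
    prev = psutil.net_io_counters(pernic=True)[nic_name]
    while True:
        time.sleep(interval)
        cur = psutil.net_io_counters(pernic=True)[nic_name]
        print(f"{nic_name}: +{cur.packets_recv - prev.packets_recv} pkts in, "
              f"+{cur.errin - prev.errin} rx errors, +{cur.dropin - prev.dropin} rx drops")
        prev = cur

if __name__ == "__main__":
    watch_interface("Ethernet")  # placeholder interface name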

 

Btw, I sent some logs and some more information earlier through a ticket and PMs.

Edited by Kontrollo
typo


Since you're rather knowledgeable about this, can you tell me what tweaks I can use to make myself a decent server if I do this?

