Nekomian

PC Member
  • Posts: 1,906
  • Joined
Reputation: 974
5 Followers
Recent Profile Visitors: 2,460 profile views
  1. I'd reread the FAQ regarding this: you now fit the criteria of only being able to trade with other cross-platform-save-enabled players. If another PSN player doesn't have it enabled, you're not able to trade with them (otherwise it would allow offloading console-specific trades to PC via cross-save progression).
  2. I get the intent behind this, but it only further segments the matchmaking pool arbitrarily. Does it make a difference whether someone gets Revenant from a random loadout assignment or brings one of their own accord, when the end result is the same? Both make your experience "easier", but one gives extra rewards to the other player that don't affect you whatsoever. The entirety of the missions can also be done by summoning your Necramech, bypassing almost all personal modifiers and gear loadouts - only really requiring a good build for guard mode and pressing Storm Shroud occasionally if using Voidrig.
Enforcing stricter matchmaking requirements based on research point count won't make the missions any easier or harder, because that's already determined by the random loadouts - we have many frames and abilities that provide ample buffs in squad play and will show up in people's weekly choices regardless. Unless you ban specific frames or weapons, there are always going to be players who get good rolls and can bring equipment that trivializes certain missions; limiting matchmaking to similar research point counts won't resolve people feeling "carried" in missions or normalize the perceived difficulty.
The way it's currently implemented seems intended to me - if a player decides to bring their own equipment regardless of loadout choices, they get fewer rewards, hence "personal" modifiers. If they wanted a mode where everyone had the exact same challenge, they would've released it as such, without any personal choice and only the mission modifiers. All of the mission-specific modifiers still apply and create challenge, and the individual modifiers can be chosen by the player for their own additional rewards. This isn't similar to taxi-ing a player to Profit-Taker or something like that - this is a player interacting with the system as it released, which gives them the choice to make the mission more or less "difficult" for varying rewards.
It sounds like you want challenging content with no random personal modifiers or loadouts, since these will always influence the perceived / relative difficulty of the mission depending on the equipment and modifiers provided. If you want the relative difficulty to be consistent between players and for the mission, they'd all have to have similar loadouts and no personal restrictions that only apply to one specific individual in the squad (i.e. a reduced-duration modifier doesn't affect a frame that doesn't care about duration stats). That might be a fun challenge (oops, all Excals! or something), but it's not what this current iteration of DA/EDA is intended to provide, and it wouldn't change significantly even with modifier count pooling restrictions.
  3. It 100% is - if it were some simple task, it would definitely already be solved. To elaborate, there really isn't such a thing as a "stable" connection when it comes to networking - things are constantly in flux, with dynamic routing on backend servers happening on the fly. There are also many different kinds of ISPs and infrastructure configs: one could reassign you an IP every hour and break your connection to the host / other clients, and another could simply not allow UDP punch-through for certain IPs or clients (due to firewalls or NAT types). These things aren't particularly noticeable when just web browsing, because browsing doesn't require persistent connections and can quickly recover if there's an issue, but with P2P the game has to re-negotiate, retrieve a new IP, and go through the forwarding and tunneling process again. Personal network equipment could also suddenly close the in-use ports due to a firmware bug, or any number of other things could go wrong.
Most people like to point the finger at the game, but the fact is it establishes direct IP connections between host and clients; the majority of the disconnects or connectivity issues at that point are going to be the result of something in the connection pipeline (or on the user's end), and the game can't do anything about that. It can try to gracefully recover (caching user IDs and session IDs and attempting to rejoin or reform sessions) and implement a hybrid approach (on-demand relays for hosting sessions as they transition, or if they can't migrate host, so it doesn't disconnect all players), but handling every single condition between networks is very time consuming, and spinning up on-demand hosting is costly (they already do this for relays, and those tend to die really hard during events like Tennocon due to the sheer number of people online).
There are still many things to improve upon, but the network pipeline itself is very streamlined in terms of what it does - send traffic on a port, listen for traffic on a port, decompress packets and interpret them, and vice versa for sending them out. The majority of the issues in my experience are with game logic - i.e. an ability not working properly on hosts vs clients, progress not migrating or syncing properly, etc. - and not the connection itself.
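To illustrate the "graceful recovery" idea, here's a rough Python sketch of what caching the session / user IDs and retrying a rejoin with backoff could look like. The names (rejoin_session, recover_connection) are invented for the example and aren't anything from the actual game:
```python
import random
import time

# Hypothetical sketch only: rejoin_session() stands in for whatever the real
# re-negotiation handshake would be (new IP, NAT punch-through, etc.) and is
# not an actual Warframe API.
def rejoin_session(session_id: str, user_id: str) -> bool:
    return random.random() < 0.3  # pretend the rejoin succeeds ~30% of the time

def recover_connection(session_id: str, user_id: str, max_attempts: int = 5) -> bool:
    """Retry the rejoin with exponential backoff instead of dropping the player."""
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        print(f"rejoin attempt {attempt} for session {session_id}...")
        if rejoin_session(session_id, user_id):
            print("rejoined, resyncing mission state")
            return True
        time.sleep(delay)
        delay = min(delay * 2, 8.0)  # back off so we don't hammer the host
    print("could not recover - fall back to host migration or solo")
    return False

recover_connection(session_id="cached-session-id", user_id="cached-user-id")
```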
  4. My stance on this is unchanged from the hundreds of other times it's been posted - I understand the accessibility concerns, but toggles that affect gameplay like this cause too much confusion and ambiguity, and add tech debt. The abilities in question should just be changed or reworked in some capacity if they're disruptive.
  5. Usually issues like these are account specific - there may have been some issues with quest progression flags on your account not setting properly. You'd have to reach out to support to get it resolved: https://digitalextremes.zendesk.com/hc/en-us
  6. It might just be getting overwhelmed with rendering all the cosmetics and net data that hubs cause - this happens on any console, and on PC too. My system often hitches as player data and cosmetics are loaded in, and then smooths out once they are. From a quick search, it looks like the resolution on Switch is dynamic:
Docked: lowest 540p, highest 720p, but it sits in the middle most of the time.
Portable: lowest 432p, dynamic, so it can go up to 720p in low-activity areas.
It seems to drop to 540p internal rendering mostly in hubs while docked, likely due to all the visuals. Does the issue also happen if you're just charging the device via USB-C, or could you try a different power adapter for the dock (rated for the same wattage, 39W / 15V - 2.6A)? I just want to confirm it's not any external factor, since I'm not seeing other players report this, and it's important to isolate factors completely when troubleshooting.
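For what it's worth, dynamic resolution in general tends to work something like the sketch below: the render scale drops when frame times blow past the budget (e.g. a crowded hub) and creeps back up when they recover. The thresholds and step sizes are made up, and this isn't how the Switch build actually decides anything - it's just to show why busy hubs would pull it down toward 540p:
```python
# Thresholds and step sizes below are invented for the sketch.
FRAME_BUDGET_MS = 33.3            # ~30fps target
MIN_SCALE, MAX_SCALE = 0.75, 1.0  # roughly a 540p..720p internal range while docked

def adjust_render_scale(scale: float, frame_time_ms: float) -> float:
    if frame_time_ms > FRAME_BUDGET_MS * 1.1:    # over budget: drop resolution
        scale -= 0.05
    elif frame_time_ms < FRAME_BUDGET_MS * 0.9:  # comfortably under: recover
        scale += 0.02
    return max(MIN_SCALE, min(MAX_SCALE, scale))

# A busy hub (heavy frames) drags the scale down; an empty area lets it climb back.
scale = 1.0
for frame_time in [30, 31, 45, 48, 50, 42, 36, 30, 28, 27]:
    scale = adjust_render_scale(scale, frame_time)
    print(f"frame {frame_time}ms -> render scale {scale:.2f}")
```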
  7. Given that the majority of the responses were blaming LoS checks, mixed with "just another reason to get rid of it" and "congrats DE not only is LoS killed all interest in the new frame [but] is messing up people's systems", it very much reads like that. I gave an explanation as to why it likely would not be raycast checks either, and the response was "I'm assuming foes killed by Tragedy still affect your performance". Sometimes it's better to just report bugs as they're seen and provide reproduction steps, rather than tell a dev "this is the exact issue" and make an assumption about it - they have their own testing and tools to confirm what it is and isn't, using the provided reproduction. Otherwise, they could end up chasing bugs from an incorrect angle based on those assumptions.
  8. I don't think these were code / cert updates, and there are no NSW-specific notes between these updates, so I'd find it very difficult to believe script changes could somehow adjust the fan speed of the hardware when docked. I've also not seen any other report of this - normally, issues this impactful get mass reported by multiple players. What made me think it was the dock is that this new area might be a bit taxing with GI lighting, though I'm not sure how it's implemented on Switch (or if it even is, though the patch notes seem to indicate it's there). If the dock is for some reason not able to supply enough power for what the Switch is asking for in that area, or there's some voltage issue at the higher-demand clock speeds, it could cause problems (i.e. the fan not receiving power for some reason). The device would also force itself into sleep mode and show a warning if it was overheating, from what I can tell. A friend who plays on Switch told me they've had no issues with the Sanctum hub or Entrati labs tilesets as you described, and they've been playing for hours. I had them test yesterday too, standing in a busy hub for a few minutes while their console was docked - nothing; the fans in their system worked fine the whole time, with no weird visual artifacting or thermal problems.
  9. Host- or client-specific hard toggles only cause matchmaking pool division, and don't resolve the issues of disconnects or graceful degradation of connections. Anyone, including people with horrendous connection dropouts, could now "force host", and players who claim "but my internet is fine" can still suffer intermittent connectivity issues to other clients, or even have restrictive UDP tunneling rules limiting which clients can join them. The existing system works well enough without being too restrictive, letting players host if no other sessions are available (or if their join limits are very restrictive). The cut-off for mission joins could be stricter for very quick ones like Capture, but that's a whole separate adjustment. It would be better if the migration itself didn't cause 10s+ interruptions to gameplay (and disable buffs, or worse, cause progstops), and things seamlessly resolved while you continue to play. A hybrid approach (like Destiny 2 implemented) and better fallbacks to quickly make anyone host / determine the best host for everyone would be a much nicer long-term solution than a toggle that players could misuse. Especially with cross-platform play, the choice of host can change the mission experience quite a bit. Granted, this is a lot easier said than done, but it's going to continue to be an issue as users with vastly different devices and connection types (via mobile) start to play.
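As a rough illustration of the "determine the best host for everyone" part, something like the sketch below could rank candidates by their worst-case round-trip time to the other peers, so nobody gets stuck with a terrible connection. The names and RTT numbers are invented for the example; this isn't how the game actually picks hosts:
```python
from typing import Dict

def pick_best_host(rtt_ms: Dict[str, Dict[str, float]]) -> str:
    """rtt_ms[candidate][peer] = measured round trip in milliseconds."""
    def worst_case(candidate: str) -> float:
        return max(rtt_ms[candidate].values())
    # Minimize the worst latency any single peer would see to that host.
    return min(rtt_ms, key=worst_case)

# Invented measurements; in practice these would come from ping probes
# exchanged during session negotiation.
peers = {
    "playerA": {"playerB": 40, "playerC": 180, "playerD": 60},
    "playerB": {"playerA": 45, "playerC": 90, "playerD": 70},
    "playerC": {"playerA": 190, "playerB": 95, "playerD": 220},
    "playerD": {"playerA": 65, "playerB": 75, "playerC": 210},
}
print("best host candidate:", pick_best_host(peers))  # playerB - lowest worst-case RTT
```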
  10. A bit confused by this - the game just sends workloads / instructions to the CPU and graphics APIs; it shouldn't have any way to control fan speed or account for thermals. That should be managed by the Switch's firmware instead. I'd make sure you're using a sufficient power supply for the dock, that the dock isn't malfunctioning and its firmware is updated, and I'd try disconnecting the Joy-Cons while docked if they're connected too (I read that the right Joy-Con can apparently sometimes cause a short, for whatever reason). You could also try a different HDMI cable in case it's having issues, but it sounds like a dock-related problem if everything works fine in handheld mode.
  11. Just tested - two SP survival solo runs, one locked at 30fps (using the in-game frame limiter) and the other at 165fps. Both had around 500 kills in 5 minutes, 100% life support the whole time, and 30+ enemies spawned at any given moment. I stood in the same tileset the whole time, using the same abilities and weapons. Log events were also both around 2000, though that matters little because it depends heavily on the events that occur during the mission (even just a worldState refresh can add around 200 lines), and the logging is likely on an entirely separate thread to prevent any locking or waits (since you would not want a game to pause everything to finish writing / flushing a log to disk). I'm not arguing that there isn't some spawn rate disparity between systems / consoles (especially last-gen consoles vs PC) - it's very present and should be resolved somehow - I just don't think it's tied to FPS at all. Some things might be tangentially tied to FPS by proxy of requiring rendering threads to complete, but the majority of the time the difference should be negligible. I'd find it very difficult to believe spawning game logic for an enemy cap is tied to something as variable as FPS, when on some systems you can just look down at the ground and gain 30+ fps from occlusion culling.
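On the logging point - the usual pattern for keeping log volume off the game thread looks roughly like this: writes go into a queue that a dedicated logger thread drains, so gameplay code never waits on a disk flush. This is a generic sketch, not the engine's actual logger:
```python
import queue
import threading

log_queue = queue.Queue()

def logger_worker(path: str) -> None:
    """Runs on its own thread; the only place that touches the disk."""
    with open(path, "a", buffering=1) as f:  # line-buffered
        while True:
            line = log_queue.get()
            if line is None:  # sentinel: shut down cleanly
                break
            f.write(line + "\n")

def log(msg: str) -> None:
    log_queue.put(msg)  # returns immediately - the game thread never blocks on I/O

worker = threading.Thread(target=logger_worker, args=("ee_sketch.log",), daemon=True)
worker.start()

# e.g. a worldState refresh dumping a couple hundred lines costs the game thread
# almost nothing - it just enqueues them.
for i in range(2000):
    log(f"event {i}")
log_queue.put(None)
worker.join()
```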
  12. It shouldn't - in-mission sessions have a 7.5s timeout, so something is disconnecting you from the host permanently, not intermittently, to the point where it can't recover. It's possible your network (or ISP) reassigns IPs very quickly and/or often and the host can't receive your new one, or something is closing the UDP ports traffic is communicating through, but I honestly can't say without more troubleshooting or info.
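For reference, a liveness check along those lines is pretty simple - track when you last heard from each peer and only drop them after a sustained silence. The 7.5s figure is the in-mission timeout mentioned above; the rest of this sketch is made up:
```python
import time

SESSION_TIMEOUT_S = 7.5  # the in-mission timeout mentioned above
last_heard = {}          # peer id -> timestamp of last received packet

def on_packet(peer: str) -> None:
    last_heard[peer] = time.monotonic()

def drop_timed_out_peers() -> list:
    now = time.monotonic()
    dropped = [p for p, t in last_heard.items() if now - t > SESSION_TIMEOUT_S]
    for p in dropped:
        del last_heard[p]  # brief hiccups recover; only sustained silence ejects a peer
    return dropped

on_packet("client1")
on_packet("client2")
print(drop_timed_out_peers())  # [] - both peers are still fresh
```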
  13. I see this shared around a lot, but it gets used to back up wildly incorrect statements quite a bit. CPU cycles and routines are "tied to frame rate" insofar as they may have to wait on certain calls / tasks to continue (in the example, if I had to guess, Mesa's ability is likely waiting on rendered checks / GPU pipeline progress, so it runs a bit "faster" when it doesn't have to wait as long - whether that's a timing delay with a frame in progress rather than reusing prior rendered checks, or a dependency on some thread completing before the ability script checks run again), but the difference should be negligible in casual play / for the majority of players. It also does not work the other way around; game logic is not necessarily dependent on FPS, and it can even run while frames are in the process of being rendered or sitting in a buffer on the way to the monitor (the video demonstrates this, with 95 hits occurring in 30 frames, meaning some were calculated independently of any single frame). This all happens so quickly that even at 15fps the game can function just fine, albeit it would look choppy and bad from a visual perspective.
Most games decouple from frame timings as much as possible by using tick rates (described in Hz, but honestly it's just the ms between packets sent), and Warframe probably negotiates these rates between hosts and clients on session join or some early connectivity step. I'm sure there are still some things dependent on FPS to some extent, but it's more likely they just run faster or slower because their logic requires the object they're applied to to be rendered, or some prerequisite graphics thread to complete its tasks before continuing. I'm not sure to what extent the tick rate is lowered, but even at low amounts it's not contributing to (or harming) enemy spawns, and neither would FPS, as that's a game logic determination (syncing all enemy spawns / positions across host and client should not take more than a few communications, on the order of maybe hundreds of milliseconds depending on latency).
They likely have set targets of enemies to spawn per client to avoid taxing the hardware excessively (this logic could be tied to FPS, but not dependent on it - i.e. "when avg frames < 30, don't spawn more entities; when between 30 and 60, spawn up to X amount; etc." - but I'm a bit doubtful of this approach), so we end up with situations where, if the host is on iOS or something, spawns appear lower / inconsistent compared with other platforms. There could be a situation where game logic is being halted by rendering routines for whatever reason, but it should still complete fast enough that it wouldn't have any significant impact on the spawn logic itself running. It's way more likely that the spawning logic needs to be tweaked, not that lower FPS is somehow impacting it (if you look at ESO as an example, enemies all spawn in at once after map wiping, within milliseconds of one another - regardless of FPS).
There are a few ways to approach resolving this, but the best would be to establish some sort of baseline across all platforms and meet it on all of them. This could mean optimizing lower-end platforms to handle large amounts of entity spawns (reducing the time game logic takes to run, reducing graphical fidelity to avoid severe FPS drops, etc.) or lowering higher-end platforms to create an even playing field for spawn rates.
There's no right or wrong way to sync platform spawns either, though obviously players would want more spawns, since this is a looter shooter. Each way has trade-offs and takes time, and I'm sure the devs are more aware of this than anyone, since they'd be the ones coding it.
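To make the decoupling concrete, here's a minimal fixed-timestep sketch: the simulation advances at a constant tick rate no matter how fast frames render, so a 30fps client and a 165fps client run the same number of logic updates per second. The spawn_budget bucketing mirrors the hypothetical "cap by average FPS" logic above - none of this is Warframe's actual code, and all the numbers are invented:
```python
TICK_RATE_HZ = 20             # made-up tick rate for the sketch
TICK_DT = 1.0 / TICK_RATE_HZ

def spawn_budget(avg_fps: float) -> int:
    """The hypothetical FPS-bucketed spawn cap from the post - values are invented."""
    if avg_fps < 30:
        return 0              # struggling hardware: hold off on new entities
    if avg_fps < 60:
        return 20
    return 40

def run(seconds: float, frame_time_s: float) -> int:
    """Count simulation ticks over `seconds` of wall time at a fake frame time."""
    ticks = 0
    accumulator = 0.0
    elapsed = 0.0
    while elapsed < seconds:
        elapsed += frame_time_s        # pretend this frame took frame_time_s to draw
        accumulator += frame_time_s
        while accumulator >= TICK_DT:  # logic ticks catch up independently of FPS
            ticks += 1
            accumulator -= TICK_DT
    return ticks

print(run(5.0, 1 / 30), run(5.0, 1 / 165))  # ~100 ticks each - same logic rate
print(spawn_budget(25), spawn_budget(45), spawn_budget(144))  # 0 20 40
```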
  14. Do you have dynamic resolution on (or set to auto) in settings, perhaps? In my experience it often randomly decides to lower the resolution of everything even if my system can handle it, so I just keep it off. It matches what you're describing here, to me at least.
  15. @krys715 Other users have reported this too; it seems to stem from spamming operator / drifter transference over an extended period of time. Not sure how or why, but I can't seem to replicate it myself either.