I have a client/server architecture implemented, where all state changes are sent to the server, validated, and broadcast to all connected clients. This works rather well, but the system does not currently maintain synchronization between the clients' instances of the game.
If there happened to be a 5-second lag between the server and a particular client, that client would receive the state change 5 seconds after the rest of the clients, leaving it with a game state that is out of sync. I've been searching for various ways to implement a synchronization system between the clients, but haven't found much so far.
I'm new to network programming, and not naive enough to think I can invent a working system myself without dedicating a serious amount of time to it. The idea I've had, however, is to keep some kind of time system, so that each state change is tied to a specific timestamp in the game. That way, when a client receives a state change, it knows exactly at which point in the game the change happened and can compensate for the lag. The problem with this method is that during those n seconds of lag the game would have continued on the client side, so the client would have to roll back in time to apply the state change, which would definitely get messy.
So I'm looking for papers discussing the subject, or algorithms that solve it. Perhaps my whole design of how the multiplayer system works is flawed, in the sense that a client's game instance shouldn't update unless notification is received from the server? Right now the clients just update themselves in their game loop, assuming that no state has changed.
The basic approach to this is something called Dead Reckoning, and a quite nice article about it can be found here. Basically, it is a prediction algorithm: entity positions are guessed at for the time in between server updates.
There are more advanced methodologies that build on this concept, but it is a good starting point.
Also, a description of how this is handled in the Source engine (Valve's engine for Half-Life 2) can be found here. The principle is basically the same - until the server tells you otherwise, use a prediction algorithm to move the entity along its expected path - but that article covers the effect this has on trying to shoot something in more depth.
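As a minimal illustration of the idea (a sketch with invented names, not the article's or the engine's actual code):

```csharp
// Minimal dead-reckoning sketch: between server snapshots, extrapolate each
// entity from its last known position and velocity; snap back when the
// server corrects us. All names here are placeholders.
public class EntityState
{
    public float X, Y;          // last position reported by the server
    public float VelX, VelY;    // last velocity reported by the server
    public double ServerTime;   // server timestamp of that report
}

public static class DeadReckoning
{
    // Guess where the entity is "now", given the last authoritative state.
    public static void Predict(EntityState s, double now, out float x, out float y)
    {
        double dt = now - s.ServerTime;     // time elapsed since the last update
        x = s.X + (float)(s.VelX * dt);     // simple linear extrapolation
        y = s.Y + (float)(s.VelY * dt);
    }

    // When a new server update arrives, replace the prediction with the truth.
    public static void Correct(EntityState s, float x, float y, float vx, float vy, double serverTime)
    {
        s.X = x; s.Y = y;
        s.VelX = vx; s.VelY = vy;
        s.ServerTime = serverTime;
    }
}
```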
The best resources I've found in this area are these two articles from Valve Software:
Latency Compensating Methods in Client/Server In-game Protocol Design and Optimization
Source Multiplayer Networking
There will never be a way to guarantee perfect synchronisation across multiple viewpoints in real time - the laws of physics make it impossible. If the sun exploded now, how could you guarantee that observers on Alpha Centauri see the supernova at the same time as we would on Earth? Information takes time to travel.
Therefore, your choices are either to model everything accurately, with latency that may differ from viewer to viewer (which is what you have currently), or to model things inaccurately, without latency and broadly synchronised across viewers (which is where prediction/dead reckoning/extrapolation come in). Slower games like real-time strategy tend to go the first route; faster games go the second route.
In particular, you should never assume that the time a message takes to travel will be constant. This means that merely sending start and stop messages to move entities will never suffice under either model. You need to send periodic updates of the actual state (typically several times a second for faster games) so that the recipient can correct the errors in its predictions and interpolations.
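For example, when such a periodic snapshot arrives, the client can blend its predicted position toward the authoritative one over a few frames rather than snapping, which hides small correction errors (a sketch with invented names):

```csharp
// Sketch of smoothing a correction: blend the locally predicted position
// toward the latest authoritative server position instead of snapping.
public static class Smoothing
{
    // predicted:     where the client currently thinks the entity is
    // authoritative: where the latest server snapshot says it is
    // rate:          fraction of the error removed per call (e.g. 0.1f per frame)
    public static float BlendToward(float predicted, float authoritative, float rate)
    {
        return predicted + (authoritative - predicted) * rate;
    }
}

// Usage each frame, assuming hypothetical fields on a local entity:
// entity.X = Smoothing.BlendToward(entity.X, snapshot.X, 0.1f);
// entity.Y = Smoothing.BlendToward(entity.Y, snapshot.Y, 0.1f);
```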
If the client sees events happening at the rate the server feeds them to him, which is the normal way to do it (I've worked with the protocols of Ultima Online, KalOnline and a little bit of World of Warcraft), then this momentary 5-second delay would just make him receive those 5 seconds of events all at once and see them pass really fast or near instantly, just as other players would see him "walking" really fast for a short distance if his output is delayed too. After that, everything flows normally again. Actually, except for graphics and physics normalization, I can't see any special need to make it synchronize properly; it just synchronizes itself.
If you have ever played Valve games on two nearby computers, you would notice they don't care much about minor details like "the exact place where you died" or "where your dead body's gibs flew to". That is all up to the client side and totally affected by latency, but it is irrelevant.
After all, lagged players must accept their condition, or close their damn eMule.
Your best option is to send the changes back to the client from the future, so that they arrive at the client at the same point in time as they do for the other clients that don't have lag problems.
Related
I've got quite an abstract question. I'm working on a project that requires constant device communication. I'm integrating multiple devices onto an external processing unit with a touchpanel to execute certain methods. E.g. the "start videocall" button on the touchpanel activates a relay, turns a display-device, camera-device and microphone-device on, etc.
On the flip side, I'm also trying to monitor these devices. What status do they currently have? Are they enabled or disabled? What input is the display-device currently on?
So far, I've come up with two solutions to prevent a bottleneck in the communication where I'm constantly polling the on-state and input-state of the display-device (i.e. every two to five seconds, to keep an accurate and up-to-date status).
Make use of threading so I can enqueue the different commands and execute them asynchronously. By also reading the responses asynchronously, all communication should be nicely spaced out, but I'd have a very "busy" communication line, taking its toll on the processing unit.
With the help of events, have the display-device notify the processor of its changed status. This would take a lot of stress off the communication line, but I feel like it is very easily disrupted. If the device doesn't raise its events correctly (or the events are missed), the monitored state no longer corresponds to the actual state.
I'm curious whether there are other ways of going about this issue. As of now, I'm leaning towards the second one because it stresses the processing unit a whole lot less; I just feel like I'd have to build in a lot of safeguards to prevent an inaccurate representation of the actual device states.
The project runs in C# on .NET 3.5.
Polling works, but it isn't fun or optimal. Reactive is best, but as you've mentioned, there may be a hiccup in ensuring you're still listening to the device and not just standing by for nothing. In this situation it makes sense to combine both approaches: poll when you're waiting or haven't heard a response in a while, and fall back to just listening while the device keeps reporting good info, skipping the polling.
That said, you shouldn't worry too much about taxing the unit with polling on various threads. This sounds like a purpose-built device, so as long as you're not running it hot or stressing it to the max all the time, using your resources is perfectly fine.
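A rough shape for that hybrid might be (a sketch; IDevice and its members are placeholders for your own device layer):

```csharp
using System;

// Sketch: rely on device events while they arrive, and fall back to a poll
// only when the device has gone quiet for too long.
public interface IDevice
{
    event EventHandler StatusChanged;
    string QueryStatus();               // blocking status poll
}

public class DeviceMonitor
{
    private readonly IDevice _device;
    private readonly System.Threading.Timer _watchdog; // kept alive so it isn't collected
    private DateTime _lastHeard = DateTime.UtcNow;
    public string LastKnownStatus { get; private set; }

    public DeviceMonitor(IDevice device, TimeSpan silenceBeforePoll)
    {
        _device = device;
        _device.StatusChanged += (s, e) =>
        {
            _lastHeard = DateTime.UtcNow;            // event path: cheap, preferred
            LastKnownStatus = _device.QueryStatus();
        };
        // Watchdog: only poll when no event has been seen for a while.
        _watchdog = new System.Threading.Timer(_ =>
        {
            if (DateTime.UtcNow - _lastHeard > silenceBeforePoll)
            {
                LastKnownStatus = _device.QueryStatus();
                _lastHeard = DateTime.UtcNow;
            }
        }, null, silenceBeforePoll, silenceBeforePoll);
    }
}
```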
I have a game where I have to get data from the server (through a REST web service with JSON), but the problem is I don't know when the data will be available on the server. So I decided to hit the server after a specific interval, or even on request every frame of the game. But this is certainly not a correct, scalable or efficient approach - hammering the server is obviously not the right choice.
Now my question is: how do I know that the data has arrived at the server, so that I can use it to run my game? Or how should I direct the back-end team to design the server so that it responds efficiently?
Remember that on the server side I have Python, while the client side is C# with the Unity game engine.
It is clearly difficult to provide an answer with so few details. The TL;DR is that it depends on what game you are developing. However, polling is very inefficient, for at least three reasons:
First, as you have already pointed out, it is inefficient because you generate additional workload when there is no need.
Second, it requires TCP - server-generated updates can be sent using UDP instead, with some pros and cons (like the potential loss of packets due to the lack of ACKs).
Third, you may get updates too late, particularly in the case of multiplayer games. Imagine that the last update happened right after the previous poll, and you poll every 5 seconds: the status could already be stale by the time you see it.
The long and the short of it is that if you are developing a turn-based game, polling could be alright. If you are developing a real-time game (as the use of Unity3D would suggest), then server-generated updates, ideally over UDP, are in my opinion the way to go.
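As a rough sketch of what the receiving side of such server-pushed updates could look like on the client (plain .NET sockets, not a Unity-specific API; the port and payload format are assumptions):

```csharp
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using System.Threading;

// Sketch: listen for server-pushed UDP datagrams on a background thread
// and hand them to the game loop through a queue.
public class UpdateListener
{
    private readonly UdpClient _udp = new UdpClient(40000); // assumed port
    private readonly Queue<byte[]> _pending = new Queue<byte[]>();
    private readonly object _lock = new object();

    public void Start()
    {
        new Thread(() =>
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                byte[] data = _udp.Receive(ref remote);   // blocks until a packet arrives
                lock (_lock) _pending.Enqueue(data);
            }
        }) { IsBackground = true }.Start();
    }

    // Call from the game loop (e.g. Update()) to drain received packets.
    public bool TryDequeue(out byte[] data)
    {
        lock (_lock)
        {
            if (_pending.Count > 0) { data = _pending.Dequeue(); return true; }
        }
        data = null;
        return false;
    }
}
```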
Hope that helps and good luck with your project.
I have a Windows Phone game that requires multiplayer support. The multiplayer is similar to Wordament's: everyone plays the same game; the client gets the game initially, then each player plays it on his own without any interaction with the others, and when the game ends, the results from everyone are collected and displayed. The difference is that in my application the game doesn't end after a specified period of time, but rather when one of the clients signals it. So, when someone completes the game (reaches a goal), all the others have to be notified that someone won.
My initial thought is to poll the server every, let's say, 5 seconds to see if the game state has changed. When a client completes the game, it sends a request with that info, and all the other clients, upon their next poll, will get the new status. This, IMO, is the simplest and most convenient solution, because all I need is one byte of data telling me whether the game is over or not.
Real time (as in millisecond accuracy) is not critical. As you might have noticed in the previous paragraph, a 5-second delay is acceptable.
However, I am asking you, experts, whether a duplex channel would be more appropriate for this scenario. I found solutions like Pusher which provide a two-way channel, but it seems to me that such a solution is very complex and expensive (we have a very limited budget).
I'll share my current knowledge.
Pull (poll)
Simple to implement, widely used.
Examples: Facebook.com, TeamCity web interface, .NET Client for QPID Message Broker
Push
Take a look at this article
Performance of HTTP polling duplex server-side channel in Microsoft Silverlight 3
What I've noticed for myself: it needs extra effort to configure, and there can be issues with scalability and performance.
The only scenario I can think of where it pays off is the constant exchange of large amounts of data.
Example: massively multiplayer online games (a huge number of events, where notification time is extremely critical)
Get changes on demand
Typical for business desktop applications.
Examples: TFS (refreshing grids of tasks and bugs, getting the locked-file status on check-out)
Conclusion: polling fits your task ideally
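For what it's worth, the polling variant really is only a handful of lines on the phone (a sketch; the status URL and the "1"/"0" response convention are assumptions):

```csharp
using System;
using System.Net;
using System.Windows.Threading;

// Sketch: poll a tiny "is the game over?" endpoint every 5 seconds and
// raise an event when the server reports that someone has won.
public class GameOverPoller
{
    private readonly DispatcherTimer _timer = new DispatcherTimer();

    public event Action GameEnded;

    public void Start(string statusUrl)
    {
        _timer.Interval = TimeSpan.FromSeconds(5);
        _timer.Tick += (s, e) =>
        {
            var client = new WebClient();
            client.DownloadStringCompleted += (sender, args) =>
            {
                // "1" means the game is over (assumed convention).
                if (!args.Cancelled && args.Error == null && args.Result == "1")
                {
                    _timer.Stop();
                    if (GameEnded != null) GameEnded();
                }
            };
            client.DownloadStringAsync(new Uri(statusUrl));
        };
        _timer.Start();
    }
}
```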
I'm trying to create a website similar to BidCactus and LanceLivre.
The specific part I'm having trouble with is the seconds aspect of the timer.
When an auction starts, a timer of 15 seconds starts counting down, and every time a person bids, the timer is reset and the price of the item is increased by $0.01.
I've tried using SignalR for this bit, and while it does work well during trial runs in the office, it's just not good enough for real-world usage where seconds count. I would get HTTP 503 errors when too many users were bidding and idling on the site.
How can I make the timer on the client's end show the correct remaining time?
Would HTTP GETting that information with AJAX every second allow me to properly display the remaining time? That's a request every second!
And not only that: when a user makes that GET request, I calculate the remaining seconds, but by the time the user sees the response, that value is no longer accurate, as a second or more might pass between processing and returning. Do you see my conundrum?
Any suggestions on how to approach this problem?
There are a few problems with the solution you described:
It is extremely wasteful. There is already a fairly high accuracy clock built into every computer on the Internet.
The Internet always has latency. By the time the packet reaches the client, it will be old.
The Internet is a variable-latency network, so the time-update packets you receive could be a second or more behind for one packet and as little as 20 ms behind for another.
It takes complicated algorithms to deal with #2 and #3.
If you actually need second-level accuracy
There is existing Internet-standard software that solves it - the Network Time Protocol.
Use a real NTP client (not the one built into Windows - it only guarantees accuracy to within a couple of seconds) to synchronize your server with national-standard NTP servers, and build a real NTP client into your application. Sync the time on your server regularly, and sync the time on the client regularly (possibly each time they log in/connect? maybe every hour?). Then simply use the system clock for time calculations.
Don't try to sync the client's system time - they may not have access to do so, and certainly not from the browser. Instead, you can get a reference time relative to the system time, and simply add the difference as an offset on client-side calculations.
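A crude sketch of that offset idea (not a full NTP client; getServerUtcTicks stands in for a hypothetical call that returns the server's current UTC time in ticks):

```csharp
using System;

// Sketch: estimate the difference between server time and local time,
// then apply that offset to all client-side time calculations without
// ever touching the system clock.
public class ClockOffset
{
    public TimeSpan Offset { get; private set; }

    public void Estimate(Func<long> getServerUtcTicks)
    {
        DateTime before = DateTime.UtcNow;
        long serverTicks = getServerUtcTicks();        // round trip to the server
        DateTime after = DateTime.UtcNow;

        // Assume the server's answer corresponds to the midpoint of the round trip.
        DateTime midpoint = before + TimeSpan.FromTicks((after - before).Ticks / 2);
        Offset = new DateTime(serverTicks, DateTimeKind.Utc) - midpoint;
    }

    // "Server now" as seen from the client.
    public DateTime ServerNow()
    {
        return DateTime.UtcNow + Offset;
    }
}
```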
If you don't actually need second-level accuracy
You might not really need to guarantee accuracy to within a second.
If you make this decision, you can simplify things a bit. Simply transmit a relative finish time to the client for each auction, rather than an absolute time. Re-request it on the client side every so often (e.g. every minute). Their global system time may be out of sync, but the second hand on their clock should tick down seconds pretty accurately.
If you want to make this a little more slick, you could try to determine the (relative) latency of each call to the server. Keep track of how much time has passed between calls to the server and of the time-left value from the previous call, compare the two, and base your new countdown on whichever is smaller.
I'd be careful when engineering such a solution, though. If you get the calculations wrong, or are dealing with inaccurate system clocks, you could break your whole syncing model, or unintentionally cause the client to prefer the highest-latency call. Make sure you account for all cases if you write the "slick" version of this code :)
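As a sketch of how the simpler "relative finish time" variant might look on the client (the transport, the field names, and the half-round-trip latency guess are assumptions):

```csharp
using System;

// Sketch: the server sends "seconds remaining" for the auction; the client
// converts it to a local deadline and ticks down from its own clock,
// optionally shaving off half the measured round trip as a latency guess.
public class AuctionCountdown
{
    private DateTime _localDeadline;

    // Call whenever a fresh "seconds remaining" value arrives from the server.
    public void OnServerUpdate(double secondsRemaining, TimeSpan roundTrip)
    {
        double latencyGuess = roundTrip.TotalSeconds / 2.0;   // rough one-way latency
        _localDeadline = DateTime.UtcNow.AddSeconds(secondsRemaining - latencyGuess);
    }

    // Call every UI tick to display the timer.
    public TimeSpan Remaining()
    {
        TimeSpan left = _localDeadline - DateTime.UtcNow;
        return left > TimeSpan.Zero ? left : TimeSpan.Zero;
    }
}
```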
One way to get really good real-time communication is to open a connection from the browser to a special TCP/IP socket server that you write and run on the server. This is how a lot of chat packages on the web work.
Duplex sockets allow you to push data both directions. Because the connection is already open, you can send quite a bit of very fast data across.
In the past, you needed to use Adobe Flash to accomplish this. I'm not sure whether browsers have advanced enough to handle it without a plugin (WebSockets, perhaps?).
Another approach worth looking at is long polling. In concept, a connection is made to the server that just doesn't die, which gives you the opportunity on the server to trickle bits of real-time data down to the clients.
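A long-poll client loop is not much code either (a sketch meant to run on a background thread; the endpoint and the server's hold/timeout behaviour are assumptions):

```csharp
using System;
using System.IO;
using System.Net;

// Sketch of a long-poll loop: the request is held open by the server until
// something happens (or it times out), then we immediately re-issue it.
public class LongPoller
{
    public void Run(string url, Action<string> onMessage)
    {
        while (true)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Timeout = 70000;   // slightly longer than the server's hold time
                using (var response = request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    onMessage(reader.ReadToEnd());   // server answered: process, then loop
                }
            }
            catch (WebException)
            {
                // Timed out or dropped: back off briefly, then reconnect.
                System.Threading.Thread.Sleep(1000);
            }
        }
    }
}
```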
Just some pointers. I have written web software using JavaScript <-> Flash <-> Python/PHP, and was pleased with how it worked.
Good luck.
Suppose you were forced to use TCP sockets rather than UDP sockets (e.g. something that Silverlight insists on). Would it be possible to create a multiplayer game that involves sending real-time positional updates to up to, say, eight players, so that each player could accurately see every other player in real time, even though UDP would be the better protocol to use? Given the option, would you go as far as selecting a different technology (e.g. Java) simply to gain UDP support?
Thanks,
Nick
As long as a few milliseconds aren't important, I see no reason to use UDP.
To receive UDP packets, you must have a public IP address.
To receive UDP packets, you need to be able to listen on a port. Not all frameworks in all environments can do this, often for security reasons and such.
As you describe Silverlight as a target platform, we can anticipate that this won't always be the case for your players.
Use TCP.
As an alternative to Silverlight, you might look at Haxe (or Flash).
(From the comments, there is mention of STUN and stuff; that's an interesting if difficult angle to pursue.)
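If you do end up on TCP, one small thing that helps with frequent positional updates is disabling Nagle's algorithm so small packets go out immediately (a plain .NET sketch rather than Silverlight's restricted socket API; the 8-byte payload format is an assumption):

```csharp
using System.Net.Sockets;

// Sketch: plain TCP client for small, frequent position updates.
// Disabling Nagle (NoDelay) trades a little bandwidth for lower latency.
public static class GameConnection
{
    public static TcpClient Connect(string host, int port)
    {
        var client = new TcpClient(host, port);
        client.NoDelay = true;   // send small packets immediately instead of coalescing
        return client;
    }

    public static void SendPosition(TcpClient client, float x, float y)
    {
        // Trivial fixed-size payload: 8 bytes, two floats (format is an assumption).
        byte[] payload = new byte[8];
        System.BitConverter.GetBytes(x).CopyTo(payload, 0);
        System.BitConverter.GetBytes(y).CopyTo(payload, 4);
        client.GetStream().Write(payload, 0, payload.Length);
    }
}
```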
It depends on how fast a "real time" you are looking at. For example, if you try to make a space battle where everyone is close together but moving at high speed, then you may find that the extra milliseconds make a difference; but if you are doing something like an auto racing game, then it won't make any difference, so TCP is fine.
So, try it, get some numbers and decide if it is acceptable.
The bigger problem will be the difference in bandwidth: if one person is playing over a really slow connection and everyone else is on very fast connections, then that slower player will be a problem. You may need to scale the updates to the slowest connection, and you may find that the TCP/UDP question is not much of a concern, because the difference in connection speeds is a far bigger problem.
So, test with various connection speeds, with differing numbers of users, each with their own connection speeds, and see if, as one user, the game is still enjoyable.
UPDATE
It is not bandwidth that will be the concern, but latency, as was pointed out in a comment. I picked the wrong term: several players might be able to respond quickly and be closer to real time, while one user may be much slower - perhaps on a congested network, a slow computer, or whatever - and may only send updates every 1000 ms, whereas everyone else sends them every 100 ms.
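One crude way to express that on the server side is to derive each client's update interval from its measured round-trip time (a sketch; the thresholds are arbitrary):

```csharp
using System;

// Sketch: pick a per-client update interval from its measured round-trip time,
// so a congested client gets fewer updates instead of a growing backlog.
public static class UpdateRate
{
    public static TimeSpan IntervalFor(TimeSpan roundTrip)
    {
        if (roundTrip < TimeSpan.FromMilliseconds(100))
            return TimeSpan.FromMilliseconds(100);   // fast link: 10 updates/sec
        if (roundTrip < TimeSpan.FromMilliseconds(400))
            return TimeSpan.FromMilliseconds(250);   // average link: 4 updates/sec
        return TimeSpan.FromSeconds(1);              // slow link: 1 update/sec
    }
}
```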