I have a Windows Phone game that requires support for multiplayer. The multiplayer is similar to the one in Wordament: everyone plays the same game; the client gets the game initially, then each player plays it on their own without any interaction with the others, and when the game ends, the results from everyone are collected and displayed. The difference is that in my application the game doesn't end after a specified period of time but rather when one of the clients signals it. So, when someone completes the game (reaches a goal), all the others have to be notified that someone won.
My initial thought is to poll the server every, let's say, 5 seconds to see if the game state has changed. When a client completes the game, it sends a request with that info, and all the other clients, upon their next poll, will get the new status. This, IMO, is the simplest and most convenient solution because all I need is one byte of data to tell me whether the game is over or not.
Real time (as in millisecond accuracy) is not critical. As you might have noticed in the previous paragraph, a 5-second delay is acceptable.
However, I am asking you, experts, whether a duplex channel would be more appropriate for this scenario. I found solutions like Pusher which provide a two-way channel, but it seems to me that such a solution is overly complex and expensive (we have a very limited budget).
I'll share my current knowledge.
Pull (Poll)
Simple to implement, widely used.
Examples: Facebook.com, TeamCity web interface, .NET Client for QPID Message Broker
Push
Take a look at this article
Performance of HTTP polling duplex server-side channel in Microsoft Silverlight 3
What I've noticed for myself: it needs extra effort for configuration, and there are possible issues with scalability and performance
The only scenario I can think of is the exchange of large amounts of data on a constant basis
Example: massively multiplayer online games (huge number of events, notification time is extremely critical)
Get changes on demand
Typical for business desktop applications.
Examples: TFS (refreshing grids of tasks and bugs, getting locked-file status on check-out)
Conclusion: polling fits your task ideally
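To illustrate, here is a minimal client-side polling sketch in C#; the status URL and the one-byte "0"/"1" response format are assumptions for illustration, not part of your actual API:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class GameOverPoller
{
    // Hypothetical endpoint that returns "1" once someone has reached the goal, "0" otherwise.
    private const string StatusUrl = "https://example.com/api/game/42/status";
    private static readonly HttpClient Http = new HttpClient();

    public static async Task PollUntilGameOverAsync()
    {
        while (true)
        {
            string status = await Http.GetStringAsync(StatusUrl);
            if (status == "1")
            {
                // Someone completed the game; stop polling and show the results screen.
                Console.WriteLine("Game over - fetch and display everyone's results.");
                return;
            }
            // Poll roughly every 5 seconds, matching the acceptable delay.
            await Task.Delay(TimeSpan.FromSeconds(5));
        }
    }
}
```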
Related
I've got quite an abstract question. I'm working on a project that requires constant device communication. I'm integrating multiple devices onto an external processing unit with a touchpanel to execute certain methods. I.e. the "start videocall" button on the touchpanel activates a relay, turns a display-device, camera-device and microphone-device on, etc.
On the flipside, I'm also trying to monitor these devices. What status do they currently have? Are they enabled or disabled? What input is the display device currently on?
So far, I've come up with two solutions to prevent a bottleneck in the communication where I'm constantly polling (i.e. every two to five seconds, to keep an accurate and up-to-date status) the on-state and input-state of the display-device.
Make use of threading so I can enqueue the different commands and execute them asynchronously. By also reading the responses asynchronously, all communication should be nicely spaced out, but I'd have a very "busy" communication line, taking its toll on the processing unit.
With the help of events, have the display-device notify the processor of its changed status. This would take a lot of stress off the communication line, but I feel like this is very easily disrupted. If the device doesn't raise its events correctly (or the events are missed), the monitored state does not correspond with the actual state.
I'm curious if there are other ways of going about this issue. As of now, I'm leaning towards the second one because it stresses the processing unit a whole lot less; I just feel like I should be building in a lot of safeguards to prevent an inaccurate representation of the actual device states.
The project runs in C# on .Net 3.5.
Polling works, but it isn't fun or optimal. Reactive is best, but as you've mentioned, there may be a hiccup in ensuring you're still listening to the device and not just standing by for nothing. In this situation it makes sense to combine both approaches: poll when you're waiting or haven't heard a response in a while, and listen when your polling returns good info, pausing the polling.
That said, you shouldn't worry about taxing the unit too much with polling on various threads. This sounds like a purpose-built device, so as long as you're not running it hot or stressing it to the max all the time, using your resources is perfectly fine.
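A rough sketch of that hybrid in C# might look like the following; IDisplayDevice, StatusChanged and QueryStatus are hypothetical names standing in for whatever your device layer actually exposes:

```csharp
using System;
using System.Timers;

// Hybrid approach: trust the device's StatusChanged events while they keep arriving,
// and fall back to an explicit poll if nothing has been heard for a while.
class DeviceMonitor
{
    private readonly IDisplayDevice _device;
    private readonly Timer _watchdog;
    private DateTime _lastHeardFrom = DateTime.MinValue;

    public DeviceMonitor(IDisplayDevice device)
    {
        _device = device;
        _device.StatusChanged += (s, e) =>
        {
            _lastHeardFrom = DateTime.UtcNow;   // event path: cheap, no polling needed
            UpdateState(e.PowerState, e.Input);
        };

        // The watchdog ticks every 5 seconds but only actually polls when events have gone quiet.
        _watchdog = new Timer(5000);
        _watchdog.Elapsed += (s, e) =>
        {
            if (DateTime.UtcNow - _lastHeardFrom > TimeSpan.FromSeconds(15))
            {
                var status = _device.QueryStatus();   // fallback poll
                _lastHeardFrom = DateTime.UtcNow;
                UpdateState(status.PowerState, status.Input);
            }
        };
        _watchdog.Start();
    }

    private void UpdateState(bool powerState, string input)
    {
        Console.WriteLine("Display on: {0}, input: {1}", powerState, input);
    }
}

interface IDisplayDevice
{
    event EventHandler<DeviceStatus> StatusChanged;
    DeviceStatus QueryStatus();
}

class DeviceStatus : EventArgs
{
    public bool PowerState { get; set; }
    public string Input { get; set; }
}
```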
I have a game where I have to get data from the server (through a REST web service with JSON), but the problem is I don't know when the data will be available on the server. So, I decided to use a method that hits the server after a specific interval, or on request on every frame of the game. But this is certainly not the right, scalable, or efficient approach. Obviously, hammering is not the right choice.
Now my question is: how do I know that the data has arrived at the server so that I can use it to run my game? Or how should I direct the back-end team to design the server so that it responds efficiently?
Remember that on the server side I have Python, while the client side is C# with the Unity game engine.
It is clearly difficult to provide an answer with so few details. TL;DR is that it depends on what game you are developing. However, polling is very inefficient for at least three reasons:
First, as you have already pointed out, you generate additional workload when there is no need.
Second, it requires TCP - server-generated updates can be sent using UDP instead, with some pros and cons (like potential loss of packets due to the lack of ACKs).
Third, you may get the updates too late, particularly in the case of multiplayer games. Imagine that the last update happened right after the previous poll, and you poll every 5 seconds: the status could already be stale.
The long and the short of it is that if you are developing a turn-based game, polling could be alright. If you are developing (as the use of Unity3D would suggest) a real-time game, then server-generated updates, ideally using UDP, are in my opinion the way to go.
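As a minimal sketch of the client side of such a push model, using a plain UdpClient - the port number and the payload format are assumptions, not something your back-end team has defined:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

// Client that waits for server-pushed updates over UDP instead of polling a REST endpoint.
class UpdateListener
{
    public static void StartListening()
    {
        var thread = new Thread(() =>
        {
            using (var udp = new UdpClient(9050))           // assumed port
            {
                var remote = new IPEndPoint(IPAddress.Any, 0);
                while (true)
                {
                    byte[] data = udp.Receive(ref remote);  // blocks until the server pushes
                    string json = Encoding.UTF8.GetString(data);
                    // Hand the update to the game loop (e.g. via a thread-safe queue)
                    // instead of hammering the server every frame.
                    Console.WriteLine("Server update: " + json);
                }
            }
        });
        thread.IsBackground = true;
        thread.Start();
    }
}
```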
Hope that helps and good luck with your project.
[I'm not sure whether to post this in stackoverflow or serverfault, but since this is a C# development project, I'll stick with stackoverflow...]
We've got a multi-tiered application that is exhibiting poor performance at unpredictable times of the day, and we're trying to track down the cause(s). It's particularly difficult to fix because we can't reproduce it on our development environment - it's a sporadic problem on our production servers only.
The architecture is as follows: Load balanced front end web servers (IIS) running an MVC application (C#). A home-grown service bus, implemented with MSMQ running in domain-integration mode. Five 'worker pool' servers, running our Windows Service, which responds to requests placed on the bus. Back end SQL Server 2012 database, mirrored and replicated.
All servers have high-spec hardware, running Windows Server 2012, latest releases, latest Windows updates. Everything bang up to date.
When a user hits an action in the MVC app, the controller itself is very thin. Pretty much all it does is put a request message on the bus (sends an MSMQ message) and awaits the reply.
One of the servers in the worker pool picks up the message, works out what to do and then performs queries on the SQL Server back end and does other grunt work. The result is then placed back on the bus for the MVC app to pick back up using the Correlation ID.
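For context, the round trip looks roughly like this sketch (queue paths, timeout and message bodies here are simplified placeholders, not our actual code):

```csharp
using System;
using System.Messaging;

// Request/reply over MSMQ using the correlation ID, as described above.
class BusClient
{
    public static string SendAndAwaitReply(string requestBody)
    {
        using (var requestQueue = new MessageQueue(@".\private$\requests"))
        using (var replyQueue = new MessageQueue(@".\private$\replies"))
        {
            var request = new Message(requestBody)
            {
                ResponseQueue = replyQueue,
                Formatter = new XmlMessageFormatter(new[] { typeof(string) })
            };
            requestQueue.Send(request);

            // The worker copies the request's Id into the reply's CorrelationId,
            // so the MVC app picks up exactly its own reply.
            Message reply = replyQueue.ReceiveByCorrelationId(
                request.Id, TimeSpan.FromSeconds(30));
            reply.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            return (string)reply.Body;
        }
    }
}
```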
It's a nice architecture to work with in respect to the simplicity of each individual component. As demand increases, we can simply add more servers to the worker pool and all is normally well. It also allows us to hot-swap code in the middle tier. Most of the time, the solution performs extremely well.
However, as stated we do have these moments where performance is a problem. It's proving difficult to track down at which point(s) in the architecture the bottleneck is.
What we have attempted to do is send a request down the bus and roundtrip it back to the MVC app with a whole suite of timings and metrics embedded in the message. At each stop on the route, a timestamp and other metrics are added to the message. Then when the MVC app receives the reply, we can screen dump the timestamps and metrics and try to determine which part of the process is causing the issue.
However, we soon realised that we cannot rely on the Windows time as an accurate measure, due to the fact that many of our processes are down to the 5-100ms level and a message can go through 5 servers (and back again). We cannot synchronize the time across the servers to that resolution. MS article: http://support.microsoft.com/kb/939322/en-us
To compound the problem, each time we send a request, we can't predict which particular worker pool server will handle the message.
What is the best way to get an accurate, coordinated and synchronized time that is accurate to the 5ms level? If we have to call out to an external (web)service at each step, this would add extra time to the process, and how can we guarantee that each call takes the same amount of time on each server? Even a small amount of latency in an external call on one server would skew the results and give us a false positive.
Hope I have explained our predicament and look forward to your help.
Update
I've just found this: http://www.pool.ntp.org/en/use.html, which might be promising. Perhaps a scheduled job every x hours to keep the time synchronised could get me to the sub-5 ms resolution I need. Comments or experience?
Update 2
FWIW, we've found the cause of the performance issue. It occurs when the software tests whether a queue has been created before it opens it, so it was essentially looking up the queue twice, which is fairly expensive. With that removed, the issue has gone away.
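For anyone hitting the same thing, the fix boils down to not calling MessageQueue.Exists (a notoriously expensive call) before every open. A simplified sketch of one way to avoid the double lookup - not our exact code, and the queue-creation handling is illustrative:

```csharp
using System;
using System.Messaging;

static class QueueHelper
{
    // Open a queue without paying for MessageQueue.Exists on every call.
    public static MessageQueue OpenQueue(string path)
    {
        var queue = new MessageQueue(path);
        try
        {
            // Touch the queue so a missing queue fails now instead of on first use.
            queue.Peek(TimeSpan.Zero);
        }
        catch (MessageQueueException ex)
        {
            if (ex.MessageQueueErrorCode == MessageQueueErrorCode.QueueNotFound)
            {
                MessageQueue.Create(path);
            }
            else if (ex.MessageQueueErrorCode != MessageQueueErrorCode.IOTimeout)
            {
                throw;   // an empty queue just times out on Peek, which is fine
            }
        }
        return queue;
    }
}
```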
Try using the Performance Monitor that's part of Windows itself. Create a Data Collector Set on each of the servers and select the metrics you want to monitor; something like Request Execution Time would be a good one.
Here's a tutorial for Data Collector Sets: https://www.youtube.com/watch?v=591kfPROYbs
Hopefully this will give you a start on troubleshooting the problem.
I'm trying to create a website similar to BidCactus and LanceLivre.
The specific part I'm having trouble with is the seconds aspect of the timer.
When an auction starts, a timer of 15 seconds starts counting down, and every time a person bids, the timer is reset and the price of the item is increased by $0.01.
I've tried using SignalR for this bit, and while it does work well during trial runs in the office, it's just not good enough for real-world usage where seconds count. I would get HTTP 503 errors when too many users were bidding and idling on the site.
How can I make the timer on the client's end show the correct remaining time?
Would HTTP GETting that information with AJAX every second allow me to properly display the remaining time? That's a request every second!
And not only that, but when a user issues that GET, I calculate the remaining seconds, but by the time the user sees the response, that value is no longer accurate, as a second or more might pass between processing and returning. Do you see my conundrum?
Any suggestions on how to approach this problem?
There are a couple problems with the solution you described:
It is extremely wasteful. There is already a fairly high accuracy clock built into every computer on the Internet.
The Internet always has latency. By the time the packet reaches the client, it will be old.
The Internet is a variable-latency network, so the time-update packets you get could be a second or more behind for one packet, and as little as 20 ms behind for another.
It takes complicated algorithms to deal with #2 and #3.
If you actually need second-level accuracy
There is existing Internet-standard software that solves this: the Network Time Protocol.
Use a real NTP client (not the one built into Windows - it only guarantees it will be accurate to within a couple seconds) to synchronize your server with national standard NTP servers, and build a real NTP client into your application. Sync the time on your server regularly, and sync the time on the client regularly (possibly each time they log in/connect? Maybe every hour?). Then simply use the system clock for time calculations.
Don't try to sync the client's system time - they may not have access to do so, and certainly not from the browser. Instead, you can get a reference time relative to the system time, and simply add the difference as an offset on client-side calculations.
If you don't actually need second-level accuracy
You might not really need to guarantee accuracy to within a second.
If you make this decision, you can simplify things a bit. Simply transmit a relative finish time to the client for each auction, rather than an absolute time. Re-request it on the client side every so often (e.g. every minute). Their global system time may be out of sync, but the second-hand on their clock should pretty accurately tick down seconds.
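As a sketch of what that could look like on the server (ASP.NET MVC here purely for illustration; the controller, route and auction lookup are assumed names, not your actual code):

```csharp
using System;
using System.Web.Mvc;

// The server reports seconds remaining rather than an absolute end time,
// so the client's clock never has to agree with the server's.
public class AuctionController : Controller
{
    public JsonResult TimeLeft(int auctionId)
    {
        DateTime endsAtUtc = GetAuctionEndUtc(auctionId);   // hypothetical lookup
        double secondsLeft = Math.Max(0d, (endsAtUtc - DateTime.UtcNow).TotalSeconds);
        return Json(new { secondsLeft }, JsonRequestBehavior.AllowGet);
    }

    private DateTime GetAuctionEndUtc(int auctionId)
    {
        // Placeholder: in reality this would come from the database / bid logic.
        return DateTime.UtcNow.AddSeconds(15);
    }
}
```

The client simply adds secondsLeft to its own local clock, counts down from there, and re-requests every so often to correct drift.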
If you want to make this a little more slick, you could try to determine the (relative) latency for each call to the server. Keep track of how much time has passed between calls to the server, and the time-left value from the previous call. Compare them, take whichever is smaller, and base your new remaining time on that.
I'd be careful when engineering such a solution, though. If you get the calculations wrong, or are dealing with inaccurate system clocks, you could break your whole syncing model, or unintentionally cause the client to prefer the highest-latency call. Make sure you account for all cases if you write the "slick" version of this code :)
One way to get really good real-time communication is to open a connection from the browser to a special TCP/IP socket server that you write yourself. This is how a lot of chat packages on the web work.
Duplex sockets allow you to push data both directions. Because the connection is already open, you can send quite a bit of very fast data across.
In the past, you needed to use Adobe Flash to accomplish this. I'm not sure if browsers have advanced enough to handle this without a plugin (e.g. WebSockets?).
Another approach worth looking at is long polling. In concept, a connection is made to the server that just doesn't die, and it gives you the opportunity on the server to trickle bits of realtime data down to the clients.
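A minimal long-polling client loop might look like this sketch in C# - the URL and the two-minute timeout are assumptions, and the server side would hold each request open until it has something to report:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Each request is held open by the server until data is available (or it times out),
// then the client immediately reconnects.
class LongPollClient
{
    private static readonly HttpClient Http = new HttpClient
    {
        Timeout = TimeSpan.FromMinutes(2)
    };

    public static async Task RunAsync()
    {
        while (true)
        {
            try
            {
                string update = await Http.GetStringAsync("https://example.com/poll");
                Console.WriteLine("Update: " + update);
            }
            catch (TaskCanceledException)
            {
                // Request timed out with nothing to report; just reconnect.
            }
        }
    }
}
```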
Just some pointers. I have written web software using JavaScript <-> Flash <-> Python/PHP, and was pleased with how it worked.
Good luck.
I have a server/client architecture implemented, where all state changes are sent to the server, validated, and broadcast to all connected clients. This works rather well, but the system does not maintain synchronization between the client instances of the game as of now.
If there happened to be a 5-second lag between the server and a particular client, then he would receive the state change 5 seconds after the rest of the clients, thus leaving him with a game state that is out of sync. I've been searching for various ways to implement a synchronization system between the clients but haven't found much so far.
I'm new to network programming, and not so naive as to think that I can invent a working system myself without dedicating a severe amount of time to it. The idea I've been having, however, is to keep some kind of time system, so each state change would be tied to a specific timestamp in the game. That way, when a client received a state change, it would know exactly in which period of the game the change happened and would in turn be able to compensate for the lag. The problem with this method is that during those n seconds of lag the game would have continued on the client side, and thus the client would have to roll back in time to apply the state change, which definitely would get messy.
So I'm looking for papers discussing the subject, or algorithms that solve it. Perhaps my whole design of how the multiplayer system works is flawed, in the sense that a client's game instance shouldn't update unless notification is received from the server? Right now the clients just update themselves in their game loop, assuming that no states have changed.
The basic approach to this is something called Dead Reckoning, and a quite nice article about it can be found here. Basically it is a prediction algorithm: entities' positions are guessed at for the times between server updates.
There are more advanced methodologies that build on this concept, but it is a good starting point.
Also, a description of how this is handled in the Source engine (Valve's engine for Half-Life 2) can be found here. The principle is basically the same - until the server tells you otherwise, use a prediction algorithm to move the entity along an expected path - but this article covers in more depth the effect this has on trying to shoot something.
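A bare-bones dead reckoning sketch, for illustration only (the field names and the tiny 2D vector type are made up, not part of any of the linked articles):

```csharp
using System;

// Between server updates, guess an entity's position by extrapolating
// from the last known position and velocity.
struct Vector2
{
    public float X, Y;
    public Vector2(float x, float y) { X = x; Y = y; }
}

class RemoteEntity
{
    public Vector2 LastKnownPosition;
    public Vector2 LastKnownVelocity;   // units per second, from the last server update
    public DateTime LastUpdateUtc;

    // Called every frame on the client to draw the entity somewhere plausible.
    public Vector2 PredictPosition(DateTime nowUtc)
    {
        float dt = (float)(nowUtc - LastUpdateUtc).TotalSeconds;
        return new Vector2(
            LastKnownPosition.X + LastKnownVelocity.X * dt,
            LastKnownPosition.Y + LastKnownVelocity.Y * dt);
    }

    // Called whenever a fresh server update arrives.
    public void ApplyServerUpdate(Vector2 position, Vector2 velocity, DateTime timestampUtc)
    {
        LastKnownPosition = position;
        LastKnownVelocity = velocity;
        LastUpdateUtc = timestampUtc;
    }
}
```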
The best resources I've found in this area are these two articles from Valve Software:
Latency Compensating Methods in Client/Server In-game Protocol Design and Optimization
Source Multiplayer Networking
There will never be a way to guarantee perfect synchronisation across multiple viewpoints in real time - the laws of physics make it impossible. If the sun exploded now, how could you guarantee that observers on Alpha Centauri see the supernova at the same time as we would on Earth? Information takes time to travel.
Therefore, your choices are to either model everything accurately with latency that may differ from viewer to viewer (which is what you have currently), or model it inaccurately without latency and broadly synchronised across viewers (which is where prediction/dead reckoning/extrapolation come in). Slower games like real-time strategy tend to go the first route; faster games go the second route.
In particular, you should never assume that the time a message takes to travel will be constant. This means that merely sending start and stop messages to move entities will never suffice under either model. You need to send periodic updates of the actual state (typically several times a second for faster games) so that the recipient can correct the error in its predictions and interpolations.
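As a rough illustration of that correction step (the smoothing rate and field names are arbitrary, and real games use more sophisticated interpolation): instead of snapping to the server's value when an authoritative update arrives, blend the displayed value toward it over a few frames so the correction isn't visible as a jump.

```csharp
using System;

// Blends a client-side predicted value toward the latest authoritative server value.
class PredictedValue
{
    public float Displayed;     // what the client is currently showing
    public float Authoritative; // the latest value received from the server

    public void OnServerUpdate(float serverValue)
    {
        Authoritative = serverValue;
    }

    // Call once per frame; moves the displayed value a fraction of the way
    // toward the authoritative one (simple exponential smoothing).
    public void Update(float deltaSeconds)
    {
        const float correctionRate = 10f;   // higher = snappier correction
        float t = Math.Min(1f, correctionRate * deltaSeconds);
        Displayed += (Authoritative - Displayed) * t;
    }
}
```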
If the client sees events happening at the rate the server is feeding them, which is the normal way to do it (I've worked with the protocols of Ultima Online, KalOnline and a little bit of World of Warcraft), then this momentary 5-second delay would just make him receive those 5 seconds of events all at once and see them pass really fast or near-instantly, just as other players would see him "walking" really fast for a short distance if his outputs are delayed too. After that, everything flows normally again. Actually, except for graphics and physics normalization, I can't see any special need to make it synchronize properly; it just synchronizes itself.
If you have ever played Valve games on two nearby computers, you would notice they don't care much about minor details like "the exact place where you died" or "where your dead body's gibs flew to". It is all up to the client side and totally affected by latency, but this is irrelevant.
After all, lagged players must accept their condition, or close their damn eMule.
Your best option is to send the changes back to the client from the future, thereby arriving at the client at the same point in time as they do for other clients that do not have lag problems.