I'm currently working on a game with a client and server, and am trying to figure out a way to measure the time between a client sending a packet and the server receiving it (so I can check where the enemies were at that point).
I attempted sending
DateTime.Now.Subtract(DateTime.MinValue.AddYears(1969)).TotalMilliseconds
with the client, then check that same value on the server when it receives the packet and subtract the two. The issue with this is that time zones could completely break it if the client and server are in different time zones. It also didn't seem very accurate.
Is there a "proper" way to do this?
Well, sending the epoch time will not account for leap seconds, but time zones will not be a problem if you use DateTime.UtcNow and do all processing in UTC. This method would still allow users to manipulate that number, since it is based off of the computer's time setting. There is no real "proper" way to handle this; look at how many games have latency issues. This occurs for both client-side and server-side processing.
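To illustrate, here is a minimal sketch of the time-zone-independent version of what the question attempted. It is written in Python for brevity; the same idea ports directly to C# with DateTime.UtcNow. The function names are illustrative, and the delay estimate is only meaningful if both clocks are synchronized (e.g. via NTP):

```python
import time

def utc_epoch_ms() -> int:
    """Milliseconds since the Unix epoch (1970-01-01 UTC).

    time.time() is already UTC-based, so this value does not
    depend on the machine's local time zone setting.
    """
    return int(time.time() * 1000)

def one_way_delay_ms(sent_at_ms: int, received_at_ms: int) -> int:
    """Naive one-way delay: server receive time minus client send
    time. Only meaningful if both clocks are synchronized; a user
    can still skew it by changing their system clock."""
    return received_at_ms - sent_at_ms
```

The client would send utc_epoch_ms() in the packet, and the server would call utc_epoch_ms() on arrival and subtract.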
The other issue with this method, depending on the type of game, is that a user's reactions depend on events in real time. So if you rewind time for a calculation, the result could have affected other players' actions.
For an example of complex handling: I believe EVE Online slows down "game time" during large fights.
I've run into an issue which I'm struggling to decide the best way to solve. Perhaps my software architecture needs to change?
I have a cron job which hits a method on my website every 10 seconds, and that method then makes a call to an external API each time. However, the API is rate limited to x requests per minute and y requests per day.
Currently I'm exceeding the API limits and need to control this in the website method somehow. I've thought about storing state in a file, but that seems hacky; similarly with a database, as I don't currently use one for this project.
I've tried this package: https://github.com/David-Desmaisons/RateLimiter but alas it doesn't work in my scenario; I think it would work if I made the requests in a single loop, as in his examples. I noticed it has a persistent timer (PersistentCountByIntervalAwaitableConstraint), but there is no documentation or examples for it (I emailed him in case). I've done a lot of googling around and can only find examples of server-side rate limiting, which is the other way around: the server limiting the client, not the client limiting its requests to the server.
How can I solve my issue without changing the cron jobs? What does everyone think the best solution is?
Assuming that you don't want to change the clients generating the load, there is no choice but to implement rate limiting on the server.
Since an ASP.NET application can be restarted at any time, the state used for that rate-limiting must be persisted somewhere. You can choose any data store you like for that.
In this case you have two limits: one per minute and one per day. If you simply apply two separate rate limiters, you will end up with the daily limit being used up fairly quickly; after that, there will be no further access for the rest of the day. That is likely undesirable.
It seems better to only apply the daily limit because it is more restrictive. A simple solution would be to calculate how far apart requests must be to meet the daily limit. Then, you store the date of the last request. Any new incoming request is immediately failed if not enough time has passed.
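A minimal sketch of that last idea, in Python for brevity. The class name is made up, and the plain dict stands in for whatever persistent store you choose (file, database row, cache entry); it is not a specific library's API:

```python
import time

class DailyRateLimiter:
    """Spaces requests evenly so a daily quota is never exceeded.

    'store' is assumed to be persisted somewhere that survives an
    application restart; here a dict stands in for it.
    """
    def __init__(self, per_day: int, store: dict, now=time.time):
        self.min_interval = 86_400 / per_day  # seconds required between requests
        self.store = store
        self.now = now

    def try_acquire(self) -> bool:
        """Immediately fail the request if not enough time has passed
        since the last one, as described above."""
        last = self.store.get("last_request", 0.0)
        t = self.now()
        if t - last < self.min_interval:
            return False
        self.store["last_request"] = t
        return True
```

With per_day=86400 the minimum spacing works out to one request per second; the injected clock is just there to make the behaviour easy to test.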
Let me know if this helps you.
There is a server that publishes some XML data for GET requests every 5 seconds. The URL is simple and does not change, e.g. www.XXX.com/fetch-data. The data is published in a loop every 5 seconds precisely, and IS NOT guaranteed to be unique every time (though it does change quite often). Apart from that, I can also fetch XML at www.XXX.com/fetch-time, which returns the server time in Unix time format. Unfortunately, the fetch-time resolution is just in seconds.
What I need is a way to synchronize my client code so that it fetches the data AS SOON AS POSSIBLE after it is published. If I just naively fetch in a loop every 5 seconds, then if I get really unlucky my loop might start right before the server loop ends, so I will basically always end up with 5-second-old data. I need a mechanism to get the server and client loops in tandem. Also, I need to compensate for lag (ping), so that the fetch request is actually sent a little before the server publishes new data.
The server code is proprietary and can't be changed, so all the hard stuff must be done by client. Also, there are many other questions about high-precision time measurements and sleep functions, so you can abstract from those and take them as granted. Any help with the algorithm would be much appreciated.
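One way to sketch the phase-detection part of this, in Python. Everything here is an assumption, not a known API: fetch stands in for the GET request, the probe interval is arbitrary, and a real client would add a timeout and re-anchor periodically to correct for clock drift:

```python
import time

FETCH_PERIOD = 5.0  # the server publishes every 5 seconds

def find_publish_phase(fetch, probe_interval=0.2, now=time.time):
    """Poll rapidly until the payload changes; the instant of change
    approximates the server's publish moment and anchors the phase.
    'fetch' is assumed to return the current XML payload as bytes."""
    last = fetch()
    while True:
        current = fetch()
        if current != last:
            return now()
        last = current
        time.sleep(probe_interval)

def next_fetch_at(anchor, rtt, now):
    """Next fetch time: the first publish boundary after 'now',
    pulled earlier by half the measured round-trip time so the
    request is in flight when the server publishes."""
    k = int((now - anchor) // FETCH_PERIOD) + 1
    return anchor + k * FETCH_PERIOD - rtt / 2
```

After anchoring, the client sleeps until next_fetch_at(...) and fetches once per period; since the payload is not guaranteed unique, the anchor is only as good as the last observed change.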
I'm writing an application in C# that allows people to track the amount of time they spend on tasks. It can be used by a single person to track their own personal time, but it will also be able to work in, for example, a company that wants to track the amount of time spent on some project.
The data being stored by this program is pretty simple - a collection of all the tasks and each "block" of time that was spent on it (including date, start/stop time, and length of time spent).
For the multiuser functionality, my plan was to have a single server that the clients send updates of the tracked time to. I don't think the clients will need a continuous connection, as the updates would typically be pretty far apart.
Additionally, as both the server and the client will store a copy of the data, either of them can ask for a copy from the other if there's a data loss on either. Femaref has informed me that this is a poor idea, so I've removed it.
So, my question is, how should I approach this? I've seen some C# client/server tutorials, but those seem to be geared towards continuous connections.
Your best bet is to track the data separately. First, allow users to track their own time and store that in a local db (you can use something like csharp-sqlite); then, when the user connects, sync whatever data you want to keep on the server.
For data that you want to track server side, you just want the app to sign in and say it's starting a task, then sign out when it's stopping a task (and have the server side hit the db functions). You'll want to keep the user data and the server data separate, so you know what you can trust and what the implications are of using each.
Obviously, you're going to want to handle situations where a task goes on longer than expected, for example when someone forgets to say they're done with a task (say, their computer just crashes). You can do this by having your app report that it's still working on a task every so often.
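A rough server-side sketch of that heartbeat idea, in Python for brevity; the class name, timeout value, and method names are all made up for illustration:

```python
import time

HEARTBEAT_TIMEOUT = 120  # seconds of silence before a task is auto-closed

class TaskTracker:
    """Clients ping periodically while a task runs; a crashed client
    simply stops pinging, and the task gets closed with an end time
    of its last heartbeat."""
    def __init__(self, now=time.time):
        self.open_tasks = {}  # task_id -> timestamp of last heartbeat
        self.now = now

    def heartbeat(self, task_id):
        self.open_tasks[task_id] = self.now()

    def reap_stale(self):
        """Close tasks whose client went silent.
        Returns a list of (task_id, end_time) pairs."""
        t = self.now()
        stale = [(tid, last) for tid, last in self.open_tasks.items()
                 if t - last > HEARTBEAT_TIMEOUT]
        for tid, _ in stale:
            del self.open_tasks[tid]
        return stale
```

The server would run reap_stale() on a timer and write the returned end times to the db, flagging them for management review since they are only approximate.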
The best way I have found to get around issues caused by trusting people's input is to tie into something like your local AD or LDAP and allow management control (because in the end they are the ones who sort out any messes that come from people having the wrong hours); that's all handled server side. If you don't have AD or LDAP, you might have to consider implementing some kind of RSA key mechanism for authentication and authority chains.
For talking to the server-side process from the client, I suggest something like SOAP (SOAP using C#). That way you can move your server language to whatever makes you feel all warm and fuzzy.
This is a bit of a broad question, so it's hard to cover everything, but it should give you some leads in the right direction.
I'm trying to create a website similar to BidCactus and LanceLivre.
The specific part I'm having trouble with is the seconds aspect of the timer.
When an auction starts, a 15-second timer starts counting down, and every time a person bids, the timer is reset and the price of the item is increased by $0.01.
I've tried using SignalR for this bit, and while it works well during trial runs in the office, it's just not good enough for real-world usage where seconds count. I would get HTTP 503 errors when too many users were bidding and idling on the site.
How can I make the timer on the client's end show the correct remaining time?
Would HTTP GETting that information with AJAX every second allow me to properly display the remaining time? That's a request every second!
And not only that: when a user issues that GET, I calculate the remaining seconds, but by the time the user sees the response, that value is already stale, as a second or more might pass between processing and rendering. Do you see my conundrum?
Any suggestions on how to approach this problem?
There are a couple problems with the solution you described:
1. It is extremely wasteful. There is already a fairly high-accuracy clock built into every computer on the Internet.
2. The Internet always has latency. By the time the packet reaches the client, it will be old.
3. The Internet is a variable-latency network, so the time-update packets could be a second or more behind for one packet and as little as 20 ms behind for another.
It takes complicated algorithms to deal with #2 and #3.
If you actually need second-level accuracy
There is existing Internet-standard software that solves it - the Network Time Protocol.
Use a real NTP client (not the one built into Windows - it only guarantees it will be accurate to within a couple seconds) to synchronize your server with national standard NTP servers, and build a real NTP client into your application. Sync the time on your server regularly, and sync the time on the client regularly (possibly each time they log in/connect? Maybe every hour?). Then simply use the system clock for time calculations.
Don't try to sync the client's system time - they may not have access to do so, and certainly not from the browser. Instead, you can get a reference time relative to the system time, and simply add the difference as an offset on client-side calculations.
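That offset technique can be sketched like this, in Python for brevity. This is the simplified single-sample version of the calculation NTP performs over many samples, and it assumes network latency is roughly symmetric:

```python
def clock_offset_ms(t0, server_time, t1):
    """Estimate (server clock - client clock) from one round trip.

    t0          client time when the request was sent
    server_time the server's timestamp in the response
    t1          client time when the response arrived

    The server's timestamp is compared against the midpoint of the
    round trip, which is where it 'should' fall if latency is
    symmetric in both directions.
    """
    midpoint = (t0 + t1) / 2
    return server_time - midpoint

def server_now_ms(client_now, offset):
    """Client-side estimate of the server clock: never touch the
    system clock, just add the measured offset."""
    return client_now + offset
```

In practice you would take several samples, discard the ones with the largest round-trip times, and average the rest, which is essentially what a real NTP client does.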
If you don't actually need second-level accuracy
You might not really need to guarantee accuracy to within a second.
If you make this decision, you can simplify things a bit. Simply transmit a relative finish time to the client for each auction, rather than an absolute time. Re-request it on the client side every so often (e.g. every minute). Their global system time may be out of sync, but the second-hand on their clock should pretty accurately tick down seconds.
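A minimal sketch of that relative-countdown idea, in Python for brevity; the class name is illustrative. Using a monotonic clock on the client side means the countdown keeps ticking correctly even if the user's wall clock is wrong or changes:

```python
import time

class AuctionCountdown:
    """Client-side countdown driven by a *relative* time-left value.

    The server only ever sends 'seconds remaining'; the client's
    monotonic clock ticks it down locally between refreshes.
    """
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.deadline = None

    def update(self, seconds_left):
        """Called whenever the server reports time left
        (e.g. on the periodic re-request)."""
        self.deadline = self.clock() + seconds_left

    def remaining(self):
        """Seconds left according to the local clock, never negative."""
        return max(0.0, self.deadline - self.clock())
```

Each server response simply calls update(), which also absorbs timer resets caused by new bids.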
If you want to make this a little more slick, you could try to determine the (relative) latency of each call to the server. Keep track of how much time has passed between calls to the server and of the time-left value from the previous call, and compare them; then take whichever remaining time is smaller and base your new countdown on it.
I'd be careful when engineering such a solution, though. If you get the calculations wrong, or are dealing with inaccurate system clocks, you could break your whole syncing model, or unintentionally cause the client to prefer the highest-latency call. Make sure you account for all cases if you write the "slick" version of this code :)
One way to get really good real-time communication is to open a connection from the browser to a special tcp/ip socket server that you write on the server. This is how a lot of chat packages on the web work.
Duplex sockets allow you to push data both directions. Because the connection is already open, you can send quite a bit of very fast data across.
In the past, you needed to use Adobe Flash to accomplish this. I'm not sure if browsers have advanced enough to handle this without a plugin (e.g. WebSockets?)
Another approach worth looking at is long polling. In concept, a connection is made to the server that just doesn't die, and it gives you the opportunity on the server to trickle bits of realtime data down to the clients.
Just some pointers. I have written web software using JavaScript <-> Flash <-> Python/PHP, and was pleased with how it worked.
Good luck.
I have a client/server architecture implemented, where all state changes are sent to the server, validated, and broadcast to all connected clients. This works rather well, but the system does not currently maintain synchronization between the client instances of the game.
If there happened to be a 5-second lag between the server and a particular client, then he would receive the state change 5 seconds after the rest of the clients, leaving him with a game state that is out of sync. I've been searching for various ways to implement a synchronization system between the clients but haven't found much so far.
I'm new to network programming, and not so naive as to think that I can invent a working system myself without dedicating a serious amount of time to it. The idea I've been having, however, is to keep some kind of time system, so each state change would be tied to a specific timestamp in the game. That way, when a client received a state change, it would know exactly in which period of the game the change happened, and would in turn be able to compensate for the lag. The problem with this method is that during those n seconds of lag the game would have continued on the client side, so the client would have to roll back in time to apply the state change, which would definitely get messy.
So I'm looking for papers discussing the subject, or algorithms that solve it. Perhaps my whole design of the multiplayer system is flawed, in the sense that a client's game instance shouldn't update unless notification is received from the server? Right now the clients just update themselves in their game loop, assuming that no states have changed.
The basic approach to this is something called dead reckoning, and a quite nice article about it can be found here. Basically, it is a prediction algorithm that estimates entities' positions for the times between server updates.
There are more advanced methodologies that build on this concept, but it is a good starting point.
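The core of linear dead reckoning fits in a few lines. Here is a sketch in Python for brevity; representing positions and velocities as (x, y) tuples is just for illustration:

```python
def dead_reckon(pos, vel, last_update_time, now):
    """Linear dead reckoning: extrapolate an entity's position from
    its last known position and velocity, covering the gap until the
    next authoritative server update corrects it.

    pos, vel -- (x, y) tuples from the last server update
    """
    dt = now - last_update_time
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
```

The more advanced variants mentioned above add acceleration terms and smooth blending when the correction arrives, instead of snapping the entity to the new position.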
Also, a description of how this is handled in the Source engine (Valve's engine for Half-Life 2) can be found here. The principle is basically the same: until the server tells you otherwise, use a prediction algorithm to move the entity along an expected path. That article also covers, in more depth, the effect this has on trying to shoot something.
The best resources I've found in this area are these two articles from Valve Software:
Latency Compensating Methods in Client/Server In-game Protocol Design and Optimization
Source Multiplayer Networking
There will never be a way to guarantee perfect synchronisation across multiple viewpoints in real time - the laws of physics make it impossible. If the sun exploded now, how could you guarantee that observers on Alpha Centauri see the supernova at the same time as we would on Earth? Information takes time to travel.
Therefore, your choices are either to model everything accurately, with latency that may differ from viewer to viewer (which is what you have currently), or to model it inaccurately, without latency and broadly synchronised across viewers (which is where prediction/dead reckoning/extrapolation come in). Slower games like real-time strategy tend to take the first route; faster games take the second.
In particular, you should never assume that the time a message takes to travel will be constant. This means that merely sending start and stop messages to move entities will never suffice under either model. You need to send periodic updates of the actual state (typically several times a second for faster games) so that the recipient can correct the error in its predictions and interpolations.
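As a small illustration of that correction step, here is one common way to fold an authoritative snapshot into a local prediction without snapping, sketched in Python; the blend factor is an arbitrary example value, not something prescribed by any particular engine:

```python
def correct_toward(predicted, authoritative, factor=0.2):
    """Pull the locally predicted position a fraction of the way
    toward the server's authoritative snapshot each frame, so small
    prediction errors are smoothed out rather than causing a
    visible snap.

    predicted, authoritative -- same-length coordinate tuples
    factor -- fraction of the error removed per correction step
    """
    return tuple(p + (a - p) * factor
                 for p, a in zip(predicted, authoritative))
```

Applied every frame after a snapshot arrives, this converges on the server's state; a large factor corrects faster but looks jerkier.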
If the client sees events at the rate the server feeds them, which is the normal way to do it (I've worked with the protocols of Ultima Online, KalOnline, and a little bit of World of Warcraft), then a momentary 5-second delay would just make him receive those 5 seconds of events all at once and see them pass really fast or near-instantly, and other players would see him "walking" really fast for a short distance if his outputs were delayed too. After that, everything flows normally again. Actually, except for graphics and physics normalization, I can't see any special need to make it synchronize properly; it synchronizes itself.
If you have ever played Valve games on two nearby computers, you will have noticed they don't care much about minor details like "the exact place where you died" or "where your dead body's gibs flew to". That is all up to the client side and totally affected by latency, but it is irrelevant.
After all, lagged players must accept their condition, or close their damn eMule.
Your best option is to send the changes to the client from the future, so that they arrive at the client at the same point in time as they do for the clients that do not have lag problems.