I have a turn-based game in which two players may play against each other.
It's written in C# and uses XNA 4.0.
Currently multiplayer is implemented with TCP/IP. It works pretty nicely, but only if the players are within the same network and one of them knows the IP of the other.
So the question is: how should I implement online play for this game? Is TCP a reasonable way to connect two random players from opposite sides of the world without them having to deal with IP addresses and ports (or any other such technical details)?
To make this problem more challenging, I have no server for hosting the game-matching service. (Well, I have access to a virtual web server which I could use for sharing the IPs.)
To list questions:
Does .NET offer a better choice of communication method than TCP?
What would be the best way to deal with NATs in my case?
Is there a cheap way of getting my own server and running the TCP game-matching service there?
TCP vs UDP.
TCP is a bit slower than UDP but far more reliable.
Since your game is turn-based, it will probably send minimal amounts of data between the client and server, and it is not really latency dependent, so I would say you might as well go for TCP. A minimal turn exchange could look like the sketch below.
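A sketch of the host side, assuming line-based text messages; the port (7777) and the "MOVE e2e4" payload are made-up examples, not from the question:

    using System.IO;
    using System.Net;
    using System.Net.Sockets;

    // One player hosts and waits; the other connects with TcpClient.
    var listener = new TcpListener(IPAddress.Any, 7777); // example port
    listener.Start();
    using (TcpClient opponent = listener.AcceptTcpClient())
    using (var reader = new StreamReader(opponent.GetStream()))
    using (var writer = new StreamWriter(opponent.GetStream()) { AutoFlush = true })
    {
        writer.WriteLine("MOVE e2e4");    // send this turn (example payload)
        string reply = reader.ReadLine(); // block until the opponent's turn arrives
    }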
To make this problem more challenging, I have no server for hosting the game matching service. (Well, I have an access to a virtual web server which I could use for sharing the IPs.)
If you are going to provide your players with a server browser or similar, you will need a centralized server; a web server with a script/application built for this would do just fine.
Is there a cheap way of getting my own server and running the TCP game-matching service there?
A web server or similar host would do just fine and is usually cheap. What you want is:
A function for a server to add itself to the list.
A function for a client to retrieve the current list of servers.
Doing web requests from C# is no problem at all; the requests could look something like:
http://www.example.com/addToServerList.php?name=MyEpicServer&ip=213.0.0.ABC (adds this server to the list)
http://www.example.com/getOnlineServers.php (returns list of all the servers)
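For instance, hitting those (hypothetical) endpoints from C# could look like this; the placeholder IP and the one-server-per-line response format are assumptions:

    using System;
    using System.Net;

    var web = new WebClient();

    // Register this host with the matchmaking list (the IP is a placeholder).
    web.DownloadString(
        "http://www.example.com/addToServerList.php?name=MyEpicServer&ip=203.0.113.7");

    // Fetch the current list; assume one server entry per line.
    string[] servers = web
        .DownloadString("http://www.example.com/getOnlineServers.php")
        .Split(new[] { '\n' }, StringSplitOptions.RemoveEmptyEntries);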
You need to specify what kind of load and latency are expected and tolerated.
The general answer is:
For real time games - UDP.
For scrabble-like-games - TCP.
Use your server to share IPs, as you said.
Minecraft uses TCP. It's good for traffic that must be transmitted and received AND can be queued a little.
UDP has only one-way error checking: the receiving side checks each packet and drops the bad ones, but the sender never hears about it. This dates from older, slower Ethernet technology, where a round trip to verify every packet was too slow.
TCP is a very reliable protocol with a handshake, so the sending side knows whether the data was transmitted successfully. But because of the round trips, it puts a lot more overhead and lag on the transmission.
TCP also delivers packets in order, which UDP does not.
Some games don't mind losing packets (for example, "streaming" data where objects move around and will be updated by the next round of packets anyway). There you can use UDP. But if it is critical to get all the data, rather go with TCP; otherwise you will spend a lot of time writing code to make sure that all the data is transmitted successfully.
Networks are quick enough these days, and with the internet being TCP/IP, I recommend TCP, except if you really need very low-latency traffic.
This website gives a good summary:
http://www.diffen.com/difference/TCP_vs_UDP
NAT: should not be a problem as long as your Time To Live (TTL) is big enough. Each hop along the way (including each NAT) decrements the TTL by one; when it reaches 0, the packet is dropped.
I have a game where I have to get data from the server (through a REST web service with JSON), but the problem is I don't know when the data will be available on the server. So I decided to hit the server after a specific interval, or even on every frame of the game. But this is clearly not a right, scalable, or efficient approach; hammering the server is not the right choice.
Now my question is: how do I know that data has arrived at the server, so that I can use it to run my game? Or how should I direct the back-end team to design the server so that it can notify clients efficiently?
Remember that on the server side I have Python, while the client side is C# with the Unity game engine.
It is clearly difficult to provide an answer with so few details. TL;DR: it depends on what game you are developing. However, polling is very inefficient, for at least three reasons:
As you have already pointed out, it is inefficient because you generate additional workload when there is no need.
It requires TCP; server-generated updates can be sent using UDP instead, with some pros and cons (like potential loss of packets due to the lack of ACKs).
You may get updates too late, particularly in multiplayer games. Imagine that the last update happened right after the previous poll and you poll every 5 seconds: the status could already be stale.
The long and the short of it is that if you are developing a turn-based game, polling could be alright. If you are developing (as the use of Unity3D would suggest) a real-time game, then server-generated updates, ideally over UDP, are in my opinion the way to go.
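On the client, receiving server-pushed updates over UDP can be as small as a background receive loop; this is only a sketch, and the port is an arbitrary example:

    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    var udp = new UdpClient(9000); // example port
    new Thread(() =>
    {
        var remote = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            byte[] packet = udp.Receive(ref remote); // blocks until the server pushes
            // hand the payload to the game loop, e.g. via a thread-safe queue
        }
    }) { IsBackground = true }.Start();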
Hope that helps and good luck with your project.
Is there any way (preferably in C#) to regularly measure connection-layer latency (round trip) without changing the application protocol and without creating a separate dedicated connection - e.g. using a SYN-ACK trick similar to what tcping does, but without closing/opening the connection?
I'm connecting to the servers via a given ASCII-based protocol (and always using TCP_NODELAY). The servers send me a large number of discrete messages, and I regularly send a 'heartbeat' payload (but there is no response payload to the heartbeat).
I cannot change the protocol and in many cases I also cannot create more than one physical connection to the server.
Keep in mind that TCP does windowing, so this could cause issues when trying to implement an elegant SEQ/ACK solution (you would want sequencing, not synchronization).
[EDIT: Snipped a very overcomplicated and confusing explanation.]
I'd have to say the best way is a simple stopwatch method: start a timer, make the thinnest request or poll you can, and measure the time until the response. If that query really is the lightest you can make it, that should give you the minimum amount of time you can reasonably expect to wait, which is sometimes more valuable than the ping (which can be misleading).
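A minimal sketch of that stopwatch approach; SendThinRequest and WaitForReply are hypothetical stand-ins for the lightest round trip your protocol allows:

    using System.Diagnostics;

    var sw = Stopwatch.StartNew();
    SendThinRequest();                         // hypothetical: lightest possible query
    WaitForReply();                            // hypothetical: block for its response
    sw.Stop();
    long roundTripMs = sw.ElapsedMilliseconds; // application-level round trip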
If you really, absolutely need just the network time to the machine and back, just use an ICMP ping.
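For that, the built-in Ping class does the job; the host name here is just an example:

    using System;
    using System.Net.NetworkInformation;

    using (var ping = new Ping())
    {
        PingReply reply = ping.Send("server.example.com"); // example host
        if (reply.Status == IPStatus.Success)
            Console.WriteLine("RTT: " + reply.RoundtripTime + " ms");
    }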
I'm developing a two-part product. One part is a C# app on a single-board computer running Windows Embedded Standard. The other piece is an embedded device running an RTOS with a TCP/IP stack. The two devices need to communicate via an Ethernet cable. It's a point-to-point connection; nothing is connected to any outside network.
I'm a bit of a network-programming novice, so I'm looking for best practices. I'm making the assumption that TCP sockets are the most straightforward approach. I figured I'd get some advice before writing any code and getting lost.
The one thing that is guaranteed is that the embedded device will maintain a static IP address of 169.254.1.1. That's about all I know. So what do I need to know to get these two connected? I know there are issues with subnets... but that's where my knowledge falls short. Which should be the client, which the server, port numbers, etc.?
A little more info per request: the two ends are going to exchange a pretty simple binary protocol. That part is already defined and working over an RS-232 link, but the RS-232 port is going away. I want to use TCP to basically carry this information, plus give me all the good stuff like retries and error checking. Either end of the system can initiate a transfer.
You might want to look into some introduction to socket programming, like http://beej.us/guide/bgnet.
The Windows node will have to configure a network interface in the same subnet as the RTOS device and choose a (different) IP address from that subnet.
Which device is the server and waits for a connection depends on your application, and maybe also on the boot order of the two nodes. For example, if the RTOS device has booted first, it might wait for a TCP connection until the other side is ready.
Of course, you'll need some protocol inside the TCP stream. UDP might also be an option if you don't need the features TCP provides; it might also reduce the memory footprint on the RTOS side.
If the two sides can reboot (or crash :-) independently of each other without some hardware watchdog taking note, be sure that the other side recognizes the reboot (e.g. with the TCP keepalive feature), so that you can re-establish the connection.
With TCP you'll get the reliable connection, but depending on your application, you might discover that the Nagle algorithm keeps your message latency higher than what the RS-232 setup had. Use the NoDelay (TCP_NODELAY) option on the socket to avoid this, as in the sketch below.
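A sketch of the C# side, assuming the Windows node acts as the client; the port is an arbitrary example, and only the device address and the NoDelay/keepalive advice come from the answer above:

    using System.Net.Sockets;

    var client = new TcpClient();
    client.NoDelay = true; // disable Nagle to keep latency close to the RS-232 setup
    client.Client.SetSocketOption(SocketOptionLevel.Socket,
                                  SocketOptionName.KeepAlive, true); // detect reboots
    client.Connect("169.254.1.1", 5000); // device address from the question; example port
    NetworkStream stream = client.GetStream();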
I very much recommend against doing TCP yourself. It is nice but has many hard-to-discover pitfalls for machine-to-machine interfaces (e.g. Nagling, keepalive configuration, testing the packetizing state machine).
We are in 2012. You can use ZeroMQ and similar [feel free to edit with more and good IP messaging libraries].
I'm currently working on a real-time C# online multiplayer game. The aim is to have a client/server connection using the UDP protocol. So far I've used UDP for players' movements and TCP for events (a player shooting, a player losing life), because I need to be sure such data will arrive at all players connected to the server. I know that UDP is said to be 'unreliable' and some packets may be lost. But I've read everywhere never to mix TCP and UDP because it can affect the connection.
The main question is how should I organize my network?
UDP is connectionless, so how should I keep track of who is who? Should I save the IP addresses of the clients in a list?
Should I use TCP for important events or use UDP? If I need to use UDP, how can I make sure that data will not be lost?
By using both TCP and UDP, I need to save each player's IP in one list (for UDP) and the connected TcpClient in another list (for TCP). How could I change that to be more effective?
Connections have improved a lot since early game development. In the past the speed advantages of UDP made it a very desirable protocol, enough to balance out the reliability issues. However, as networks have improved, the reasons to shy away from TCP have dissipated.
I would advise picking one of the two protocols and going with it, mostly because it will simplify your network layer and make it easier to debug network issues. When I have to pick between TCP and UDP, I base the decision more on how I want my networking logic to flow.
With a UDP-based system you do need to do a bit more bookkeeping yourself, but not really enough for it to factor into the decision; see the sketch after this paragraph. A UDP game flows more like independent cells that all happen to share the same world. You don't want a lot of reactive logic (after he does this, I do that); if something is dropped or forgotten, the game keeps going smoothly.
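A minimal sketch of that bookkeeping, keyed on the sender's endpoint; PlayerState and the port are hypothetical placeholders:

    using System.Collections.Generic;
    using System.Net;
    using System.Net.Sockets;

    var players = new Dictionary<IPEndPoint, PlayerState>();
    var udp = new UdpClient(9000); // example port

    var sender = new IPEndPoint(IPAddress.Any, 0);
    byte[] data = udp.Receive(ref sender);   // 'sender' now holds the client's address
    if (!players.ContainsKey(sender))
        players[sender] = new PlayerState(); // first packet from this client: register it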
TCP will give you much more control. Depending on the API it can involve a bit more setup, but it's worth the effort. TCP lets you work with a networked partner much like you would work with another thread on the same CPU. There is overhead with everything you do, but it sounds like you already have it working, so you might as well stick with it.
I generally tend towards UDP myself, because it's ingrained, I think. Also, whenever dealing with networking you have to plan for the unexpected, the lost or delayed packet, and UDP helps drive that message home. If you break that rule you will notice right away with UDP; you might not with TCP.
I need to build a server that accepts client connections at a very high frequency and load (each user will send a request every 0.5 seconds and should get a response in under 800 ms; I should be able to support thousands of users on one server). The assumption is that the SQL Server is finely tuned and will not pose a problem (an assumption that, of course, might not be true).
I'm looking to write a non-blocking server to accomplish this. My back end is a SQL Server sitting on another machine. It doesn't have to be updated live, so I think I can cache most of the data in memory and dump it to the DB every 10-20 seconds.
Should I write the server in C# (which is more compatible with SQL Server)? Maybe Python with Tornado? What should my considerations be when writing a high-performance server?
EDIT: (added more info)
The Application is a game server.
I don't really know the actual traffic - but this is the prognosis and the server should support it and scale well.
It's hosted "in the cloud" in a Datacenter.
Language doesn't really matter. Performance does. (a Web service can be exposed on the SQL Server to allow other languages than .NET)
The connections are very frequent but small (very little data is returned and little computations are necessary).
It should hold most of the data in the memory for fastest performance.
Any thoughts will be much appreciated :)
Thanks
Okay, if you REALLY need high performance, don't go for C# but for C/C++; it's obvious.
In any case, the fastest way to do server programming (as far as I know) is to use IOCP (I/O Completion Ports). That's what I used when I made an MMORPG server emulator, and it performed faster than the official C++ select-based servers.
Here's a very complete introduction to IOCP in C#:
http://www.codeproject.com/KB/IP/socketasynceventargs.aspx
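As a taste of the pattern that article describes, here is a hedged sketch of a SocketAsyncEventArgs accept loop (IOCP-backed on Windows); the class name, port, and backlog are illustrative:

    using System.Net;
    using System.Net.Sockets;

    class AsyncAcceptServer
    {
        private readonly Socket _listener =
            new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

        public void Start()
        {
            _listener.Bind(new IPEndPoint(IPAddress.Any, 9000)); // example port
            _listener.Listen(100);                               // example backlog

            var args = new SocketAsyncEventArgs();
            args.Completed += (s, e) => OnAccept(e);
            if (!_listener.AcceptAsync(args)) // false: completed synchronously
                OnAccept(args);
        }

        private void OnAccept(SocketAsyncEventArgs e)
        {
            Socket client = e.AcceptSocket; // hand 'client' to a receive loop here
            e.AcceptSocket = null;          // clear so the args object can be reused
            if (!_listener.AcceptAsync(e))  // post the next accept immediately
                OnAccept(e);
        }
    }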
Good luck!
Use the programming language that you know best. It's a lot more expensive to hunt down performance issues in a large application that you do not fully understand.
It's a lot cheaper to buy more hardware.
People will say C++, because garbage collection in .NET could kill your latency. You could avoid garbage collection, though, if you were clever about reusing existing managed objects.
Edit: your assumption about SQL Server is probably wrong. You need to store your state in memory for random access. If you need to persist changes, journal them to the filesystem and consolidate them with the database infrequently, along the lines of the sketch below.
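A hedged sketch of that write-behind idea; GameState and FlushToDatabase are hypothetical placeholders, and the 15-second period just matches the question's 10-20 s figure:

    using System.Collections.Concurrent;
    using System.Threading;

    var state = new ConcurrentDictionary<int, GameState>(); // all reads/writes hit memory

    var flushTimer = new Timer(_ =>
    {
        foreach (var entry in state)
            FlushToDatabase(entry.Key, entry.Value); // batch these in a real server
    }, null, 15000, 15000);                          // flush every ~15 s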
Edit 2: you will have a lot of different threads talking to the same data. In order to avoid blocking and deadlocks, learn about lock-free programming (Interlocked.CompareExchange etc.).
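For example, a lock-free maximum update using a compare-and-swap loop (the class and field names are illustrative):

    using System.Threading;

    class Scoreboard
    {
        private int _highScore;

        public void UpdateHighScore(int candidate)
        {
            while (true)
            {
                int seen = _highScore;         // a stale read simply retries
                if (candidate <= seen) return; // no update needed
                if (Interlocked.CompareExchange(ref _highScore, candidate, seen) == seen)
                    return;                    // swap succeeded
            }
        }
    }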
I was part of a project that included very high-performance server code, which actually included the ability to respond with a TCP packet within 12 milliseconds or so.
We used C# and I must agree with jgauffin - a language that you know is much more important than just about anything.
Two tips:
Writing to the console (especially in color) can really slow things down.
If it's important for the server to be fast on the first requests, you might want to use a pre-JIT compiler (see Ngen.exe) to avoid JIT compilation while serving them.