I'm trying to better understand the C# Socket classes. I'm working with a TcpClient object.
I'm using it in a Unity (game development) project, so I cannot block the main thread.
I would like to use the methods in their Begin/End form, for example BeginConnect() and EndConnect().
I'm doing this:
public class TcpTest : MonoBehaviour
{
    private TcpClient m_TcpClient = null;

    private void Start()
    {
        m_TcpClient = new TcpClient();
        m_TcpClient.BeginConnect("127.0.0.1", 40000, OnConnectAsyncCallback, null);
    }

    private void OnConnectAsyncCallback(IAsyncResult i_AsyncResult)
    {
        m_TcpClient.EndConnect(i_AsyncResult);
        MyAsyncDebug.Log("Connect callback received.");
    }
}
I'm using Hercules as a socket debugging tool. I started a TCP server with it and everything was fine.
Then I stopped the TCP server in Hercules, re-ran my code, and expected an error or an exception. Nothing: the code just starts BeginConnect() and the AsyncCallback is never called.
I've also tried starting the server after the BeginConnect method was called, but that does not work either.
How can I tell whether BeginConnect has failed (maybe the destination is unreachable, or for any other reason)?
I cannot understand how they are designed (I've already read the MSDN docs).
Thank you and sorry for the newbie question!
TcpClient does not have a property to set a connection timeout. There are properties for send and receive timeouts, but not for the connection timeout. One common way of applying a connection timeout is to wait on the IAsyncResult returned by BeginConnect, or to use ConnectAsync:
IAsyncResult ar = client.BeginConnect("host", port, null, null);
var success = ar.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(60));
OR
var client = new TcpClient();
// note: Wait(60) would wait 60 milliseconds, so pass a TimeSpan for 60 seconds
if (!client.ConnectAsync("host", port).Wait(TimeSpan.FromSeconds(60)))
    client.Close(); // timed out
Once the timeout expires you have to close the client. In your original code the callback should eventually get called, but only once the OS gives up on the connection attempt, which can take a while.
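Applied to the Unity code in the question, a minimal sketch might look like the following (the 10-second timeout and the exception handling are assumptions, not requirements of TcpClient). Closing the client on timeout makes the pending EndConnect throw inside the callback, so the callback has to catch that:
private TcpClient m_TcpClient;

private void Connect()
{
    m_TcpClient = new TcpClient();
    IAsyncResult ar = m_TcpClient.BeginConnect("127.0.0.1", 40000, OnConnectAsyncCallback, null);

    // Wait at most 10 seconds; do this on a worker thread (or poll ar.IsCompleted
    // from Update()) so Unity's main thread is not blocked.
    if (!ar.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(10)))
    {
        m_TcpClient.Close(); // give up; this makes EndConnect throw in the callback
    }
}

private void OnConnectAsyncCallback(IAsyncResult i_AsyncResult)
{
    try
    {
        m_TcpClient.EndConnect(i_AsyncResult);
        MyAsyncDebug.Log("Connect callback received.");
    }
    catch (SocketException e)
    {
        MyAsyncDebug.Log("Connect failed: " + e.Message); // e.g. connection refused
    }
    catch (ObjectDisposedException)
    {
        MyAsyncDebug.Log("Connect attempt was cancelled (client closed).");
    }
}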
Related
I am trying to make an async connection to a server using TcpClient.BeginConnect, but am encountering some difficulties. This is my first time using TCP, so please bear with me.
The connection itself works fine when the server is running; I can send and receive messages without problems. However, when I stop the server and try to connect to it, TcpClient.BeginConnect pretends it is actually connected to a server without returning an error, until I try to actually send data, which obviously fails.
When I use TcpClient.Connect() instead, it returns "A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond." when no connection is established after a few seconds, letting me know the connection failed.
Is there a way to get this same behaviour with TcpClient.BeginConnect? Or am I doing something wrong myself?
I looked around and found "C# BeginConnect callback is fired when not connected", which is somewhat similar; the answer there was that EndConnect had to be called in the callback before the socket becomes usable, but I'm already doing that.
My code:
public static void OpenTcpASyncConnection()
{
    if (client == null)
    {
        client = new TcpClient();
        IAsyncResult connection = client.BeginConnect(serverIp, serverPort, new AsyncCallback(ASyncCallBack), client);
        bool success = connection.AsyncWaitHandle.WaitOne(); // returns true
        if (!success)
        {
            client.Close();
            client.EndConnect(connection);
            throw new Exception("TcpConnection::Failed to connect.");
        }
        else
        {
            Debug.LogFormat("TcpConnection::Connecting to {0} succeeded", serverIp);
        }
    }
    else
    {
        Debug.Log("TcpConnection::Client already exists");
    }
}

public static void ASyncCallBack(IAsyncResult ar)
{
    Debug.Log("Pre EndConnect");
    client.EndConnect(ar);
    Debug.Log("Post EndConnect"); // this never gets called?
}
The boolean success is true even if the server is offline (or does this always return true as long as the operation finishes?), so I assume it thinks it is actually connected, and the Debug.Log after client.EndConnect(ar) never gets called. Not a single error gets returned.
In summary: am I forgetting something or doing something wrong, or is this expected behaviour?
Edit: the language is C# with the .NET 3.5 framework. It is meant for a Unity application, though I'm not inheriting from MonoBehaviour for this. If you require any additional information, I will try to provide it.
Kind regards and thanks for your time,
Remy.
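One likely explanation, inferred from the code shown rather than confirmed: AsyncWaitHandle.WaitOne() with no timeout signals when the operation completes, whether or not it succeeded, so success is true even for a failed attempt; and EndConnect throws a SocketException when the connect failed, which is why the "Post EndConnect" log never runs (the exception is lost on the worker thread). A minimal sketch of a callback that makes the failure visible:
public static void ASyncCallBack(IAsyncResult ar)
{
    try
    {
        client.EndConnect(ar); // throws SocketException if the connect attempt failed
        Debug.Log("Post EndConnect");
    }
    catch (SocketException e)
    {
        Debug.LogFormat("Connect failed: {0}", e.Message);
    }
}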
I am experimenting with client-server applications using the System.Net namespace in C#. I am currently using the following TcpListener code to listen for incoming connections:
TcpListener listener = new TcpListener(IPAddress.Any, 62126);
List<Connection> ClientConnections = new List<Connection>();

while (true)
{
    listener.Start();
    while (true)
    {
        if (listener.Pending())
        {
            ClientConnections.Add(new Connection(listener.AcceptTcpClient()));
            break;
        }
    }
}
(Where Connection is a class that takes the accepted TcpClient via public Connection(TcpClient client) { ... } and maintains the connection on a separate thread.)
Do I need to invoke listener.Start() every time an incoming connection is accepted or is that unnecessary?
You're busy-waiting while no pending connection requests exist. This is not necessary: AcceptTcpClient blocks until a client connects, so just delete that if. Make sure you understand why it is not necessary.
I do not understand why there are two nested loops. You only need one. Call Start only once.
I can tell that you have not read the documentation. Quite dangerous. You are capable of answering these questions yourself.
No. Start needs to be called only once. Remove the outer while loop; a sketch of the result follows.
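Putting those suggestions together, a minimal sketch of the simplified loop (keeping the Connection class from the question):
TcpListener listener = new TcpListener(IPAddress.Any, 62126);
List<Connection> ClientConnections = new List<Connection>();

listener.Start(); // start listening once

while (true)
{
    // AcceptTcpClient blocks until a client connects, so no Pending() polling is needed
    ClientConnections.Add(new Connection(listener.AcceptTcpClient()));
}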
Preface:
I've been stumped on this for a while now and am not having much luck finding what I need.
I have a C# (.NET 3.5) service. One thread acts as an asynchronous listener for incoming TCP connections. When data comes in, I spawn off a new worker thread to handle the data and send an acknowledgement back.
A second thread in the same service sends commands out. Until today it would gather information from the database, build a new socket, connect, then ship the command, and I use Socket.Receive to block and wait for a response (or until a timeout occurs).
Everything had been working great until a new client needed to send data to us so fast (5-10 second intervals) that we could no longer open a new socket to get a command through. So I started checking, when a command needs to be sent, whether the listener thread currently has that client connected; if it does, I use that socket instead of creating a new one.
Issue:
I'm at the point where I can send my command back on the same socket the listener receives on, but when the client sends data back as the response, Socket.Receive has to fire twice before the data lands where I want it: the first time it ends up in my listener class, and only the second time in my command class, where I actually want it.
Question:
Is there some option or something I need to do before calling my Socket.Receive method to ensure the data gets to the correct place?
In my listener class I have a list of "CSocketPacket" objects:
public class CSocketPacket
{
    public CSocketPacket(System.Net.Sockets.Socket socket)
    {
        thisSocket = socket;
        this.IpAddress =
            ((System.Net.IPEndPoint)socket.RemoteEndPoint).Address.ToString();
    }

    public System.Net.Sockets.Socket thisSocket;
    public byte[] dataBuffer = new byte[BUFFER_SIZE];
    public string IpAddress; // use this to search for the socket
}
Then, when I send a command, I create a new TCP socket object:
client = new Socket(
    AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
IPEndPoint ep = new IPEndPoint(
    IPAddress.Parse(Strings.Trim(ipAddress)), port);
IPEndPoint LocalIp = new IPEndPoint(IPAddress.Parse(
    System.Configuration.ConfigurationManager.AppSettings["SourceIP"]), port);
Then I look in my listener class's list to see if that socket is connected:
if (listener.SocketExists(ipAddress))
{
    // set the client socket in this class to the
    // instance of the socket from the listener class
    SocketIndex = listener.FindSocketInList(ipAddress);
    if (SocketIndex != -1)
    {
        // might need to figure out how to avoid copying the socket
        // to a new variable ???
        client = listener.ConnectedSockets[SocketIndex].thisSocket;
        SocketBeingReUsed = true;
    }
}
else
{
    // try to connect to the client
    client.Connect(ep);
}
Finally, I go through my steps of sending and receiving:
if (client.Connected)
{
    if (client.Poll(1000, SelectMode.SelectWrite))
    {
        int sentAmount = Send(ref client);
        client.ReceiveTimeout = 90000; // 90 seconds
        returnData = ReceiveData(ref client, sentAmount);
    }
}
Everything works up to the point in my ReceiveData(ref client, sentAmount) method where I call Socket.Receive(data, total, Socket.ReceiveBufferSize, SocketFlags.None).
I've been using a tool called Hercules to test sending/receiving packets across two machines on my home network.
Does anyone have any ideas about what I can do to solve this? I apologize for such a lengthy question, but I want to give as much info as possible without pasting my entire project. I'm open to any suggestions.
Disclaimer: I wrote this code approximately 3 years ago, so I'm probably doing things I shouldn't be, I'm sure. :P
Thanks to all who read this.
Sincerely,
Chris
OK, so now I'm following along! Given what you've said in the comments above, the way to solve the problem is to have a single class/thread that reads from the socket (which is the correct way to read from sockets anyway) and then coordinates which class gets the data. I think it might work a little like the Command Design Pattern, as in the sketch below.
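A minimal sketch of that idea (the handler interface and registration scheme here are assumptions for illustration, not anything from the question's code): one dedicated thread owns all Receive calls on the socket, and other classes register for the data they care about.
public interface IDataHandler
{
    void HandleData(byte[] data, int length);
}

public class SocketReader
{
    private readonly Socket m_Socket;
    private volatile IDataHandler m_Handler; // e.g. the listener class or the command class

    public SocketReader(Socket socket) { m_Socket = socket; }

    // Swap in whichever class should see the next response
    public void SetHandler(IDataHandler handler) { m_Handler = handler; }

    // Run this on one dedicated thread; nothing else ever calls Receive on the socket
    public void ReadLoop()
    {
        byte[] buffer = new byte[8192];
        while (true)
        {
            int read = m_Socket.Receive(buffer, 0, buffer.Length, SocketFlags.None);
            if (read == 0) break; // remote side closed the connection
            IDataHandler handler = m_Handler;
            if (handler != null) handler.HandleData(buffer, read);
        }
    }
}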
Say I have a TcpClient that I accept from a TcpServer in C#, and for some reason it keeps defaulting to blocking being off. Is there some other force that can change how the socket's blocking mode is set? Say, is it affected by the remote connection at all?
I know I set blocking to false a few builds back, but I changed it, and even introduced the TcpClient instead of a straight Socket. I haven't specifically changed blocking back to true; I just commented the blocking = false line out. Is that something that persists, maybe with the endpoint?
I don't know, though; it just seemed that as I was programming one day my sockets became unruly without any real change in their code.
public void GetAcceptedSocket(TcpClient s)
{
    try
    {
        sock = s;
        IPEndPoint remoteIpEndPoint = s.Client.RemoteEndPoint as IPEndPoint;
        strHost = remoteIpEndPoint.Address.ToString();
        intPort = remoteIpEndPoint.Port;
        ipEnd = remoteIpEndPoint;
        sock.ReceiveTimeout = 10000;
        boolConnected = true;
        intLastPing = 0;
        LastPingSent = DateTime.Now;
        LastPingRecieved = DateTime.Now;
        handleConnect(strHost, intPort);
        oThread = new Thread(new ThreadStart(this.run));
        oThread.Start();
    }
    catch (Exception e)
    {
        handleError(e, "Connect method: " + e.Message);
    }
}
"Like say is it affected by the remote connection at all?"
Nope.
It would have been better if you had shown some code where you create the Socket or TcpClient on the server side. I cannot find a TcpServer class in C#; are you talking about TcpListener?
If you are, please make sure that you set the Socket's Blocking property to true if you use AcceptSocket to create a new socket. If you call AcceptTcpClient, you don't need to worry about blocking or non-blocking mode, as TcpClient is always in blocking mode.
"The TcpClient class provides simple methods for connecting, sending, and receiving stream data over a network in synchronous blocking mode." (from MSDN)
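For the AcceptSocket case, a minimal sketch of that suggestion (the listener setup here is assumed, not from the question):
TcpListener listener = new TcpListener(IPAddress.Any, 12345);
listener.Start();

// AcceptSocket returns a raw Socket, so make its blocking mode explicit
Socket sock = listener.AcceptSocket();
sock.Blocking = true;

// AcceptTcpClient needs no such step: a TcpClient is always in blocking mode
TcpClient client = listener.AcceptTcpClient();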
There really is no answer. I've never duplicated this, and after a few months and a fresh Windows install, I picked the code up and didn't have the problem anymore. Very weird, though; I definitely couldn't find a programmatic reason for it happening. Like I said, after a few months it just kinda worked?
They should really have an 'Unanswerable' option :p.
It's probably easiest to just watch this video: http://screencast.com/t/OWE1OWVkO
As you can see, there is a delay between a connection being initiated (via telnet or Firefox) and my program first getting word of it.
Here's the code that waits for the connection:
public IDLServer(System.Net.IPAddress addr, int port)
{
    Listener = new TcpListener(addr, port);
    Listener.Server.NoDelay = true; // I added this just for testing, it has no impact
    Listener.Start();
    ConnectionThread = new Thread(ConnectionListener);
    ConnectionThread.Start();
}

private void ConnectionListener()
{
    while (Running)
    {
        while (Listener.Pending() == false) { System.Threading.Thread.Sleep(1); } // this is the part with the lag
        Console.WriteLine("Client available"); // from this point on everything runs perfectly fast
        TcpClient cl = Listener.AcceptTcpClient();
        Thread proct = new Thread(new ParameterizedThreadStart(InstanceHandler));
        proct.Start(cl);
    }
}
I've tried a couple of different things. Could it be because I'm using TcpClient/Listener instead of a raw Socket object? I know that's not mandatory TCP overhead, and I've tried running everything in the same thread, etc.
Maybe it's some kind of DNS resolution issue? Are you using an IP address to access your server's host, or some name which is resolved by your DNS? The code ParmesanCodice gave should work with no delay unless there's something wrong on the client/network side.
Try adding the following line to your windows\system32\drivers\etc\hosts file:
127.0.0.1 localhost
It may solve your problem. Or just connect as 127.0.0.1:85.
You should consider accepting your clients asynchronously; this will most likely remove the lag you are seeing.
I've modified your code slightly:
public IDLServer(System.Net.IPAddress addr, int port)
{
    Listener = new TcpListener(addr, port);
    Listener.Start();
    // Use the BeginXXXX pattern to accept clients asynchronously
    Listener.BeginAcceptTcpClient(this.OnAcceptConnection, Listener);
}

private void OnAcceptConnection(IAsyncResult asyn)
{
    // Get the listener that handles the client request.
    TcpListener listener = (TcpListener)asyn.AsyncState;

    // Get the newly connected TcpClient
    TcpClient client = listener.EndAcceptTcpClient(asyn);

    // Start the client work
    Thread proct = new Thread(new ParameterizedThreadStart(InstanceHandler));
    proct.Start(client);

    // Issue another accept; only do this if you want to handle multiple clients
    listener.BeginAcceptTcpClient(this.OnAcceptConnection, listener);
}
Doesn't the debugger add overhead? I had issues like this when I was building my MMO server; I can't remember how I got around it now.
I think this has something to do with resource allocation in services. I use the approach suggested by ParmesanCodice (well, a similar one at least), and during testing I found that the first 5 to 10 connections were rubbish, but after that the service seems to hammer out new connections like there's no tomorrow...
Maybe it's a socket thing in the framework.
Have you tried a load test?
Throw, say, 1000 connections at it and see what happens; it should get faster after handling each one. A sketch of such a test follows.
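A quick-and-dirty sketch of that load test (the host and port are assumptions; point it at wherever the server is listening):
// Hypothetical load test: open and close 1000 connections in a row,
// timing each connect.
for (int i = 0; i < 1000; i++)
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    using (var client = new TcpClient())
    {
        client.Connect("127.0.0.1", 85); // assumed host/port
    }
    Console.WriteLine("Connection {0}: {1} ms", i, sw.ElapsedMilliseconds);
}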
You could avoid the entire Listener.Pending while loop. AcceptTcpClient() is a blocking call, so you could just let your code run and block on that. I don't know why that loop would take 1 second (instead of 1 millisecond), but since you indicate that is where the lag is, you can get rid of it.
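A minimal sketch of the listener loop without the Pending() polling, keeping the names from the question:
private void ConnectionListener()
{
    while (Running)
    {
        // AcceptTcpClient blocks until a client connects, so no Pending()/Sleep polling is needed
        TcpClient cl = Listener.AcceptTcpClient();
        Console.WriteLine("Client available");
        Thread proct = new Thread(new ParameterizedThreadStart(InstanceHandler));
        proct.Start(cl);
    }
}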