I have multiple devices connected to a TCP/IP port and I want to read all of them through sockets in .NET. How can I do this? Previously I had a single device connected and it worked fine, but now I have multiple devices. Can anybody help me with listening for multiple socket connections?
This is not a complete answer, but it should point you in the right direction. You can make a call like
Socket socketForClient = tcpListener.Accept();
for every connecting client. You can keep an array (or list) of Socket objects that you process/update as new connections come in or get closed.
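A minimal sketch of that idea, assuming a thread per device, a made-up HandleDevice handler, and an illustrative port number:

using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class MultiDeviceListener
{
    static readonly List<Socket> clients = new List<Socket>();

    static void Main()
    {
        var tcpListener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        tcpListener.Bind(new IPEndPoint(IPAddress.Any, 9000)); // port 9000 is just an example
        tcpListener.Listen(10);

        while (true)
        {
            Socket socketForClient = tcpListener.Accept();      // one Accept per connecting device
            lock (clients) clients.Add(socketForClient);
            new Thread(() => HandleDevice(socketForClient)).Start();
        }
    }

    static void HandleDevice(Socket device)
    {
        var buffer = new byte[1024];
        int read;
        while ((read = device.Receive(buffer)) > 0)
        {
            // process buffer[0..read) for this device
        }
        lock (clients) clients.Remove(device);
        device.Close();
    }
}

The lock around the list is there because the accept loop and the per-device threads touch it concurrently.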
You will want to create an asynchronous TCP listener. Read up here: MSDN Socket Class
First you set up your listener:
// 'sock' is the listening socket (not declared in the original answer);
// create it with new Socket(...) before calling StartListen.
private static System.Net.Sockets.Socket sock;

private static System.Threading.ManualResetEvent connectDone =
    new System.Threading.ManualResetEvent(false);

void StartListen(IPEndPoint serverEP, int numDevices)
{
    sock.Bind(serverEP);
    sock.Listen(numDevices); // basically sit here and wait for a client to request a connection

    /*
     * A while statement is not required here because the AcceptConnection()
     * callback instructs the socket to BeginAccept() again...
     */
    connectDone.Reset();
    sock.BeginAccept(new AsyncCallback(AcceptConnection), sock);
    connectDone.WaitOne();
}
In some examples you might see the BeginAccept(...) method inside a while(true) block, but you don't need that with the asynchronous pattern; I think using while(true) there is improper. You then accept connections asynchronously:
void AcceptConnection(IAsyncResult asyncRes)
{
    connectDone.Set();
    System.Net.Sockets.Socket s = sock.EndAccept(asyncRes);
    byte[] messageBuffer = new byte[bufferSize]; // bufferSize is a constant defined elsewhere

    /*
     * Tell the socket to begin receiving from the caller.
     */
    s.BeginReceive(messageBuffer, 0, messageBuffer.Length,
        System.Net.Sockets.SocketFlags.None, new AsyncCallback(Receive), s);

    /*
     * Tell the channel to go back to accepting callers.
     */
    connectDone.Reset();
    sock.BeginAccept(new AsyncCallback(AcceptConnection), sock);
    connectDone.WaitOne();
}
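The Receive callback referenced above is not shown in the answer. A minimal sketch, assuming messageBuffer is promoted to a field (in the original it is a local, so a real implementation would pass a small state object holding both the socket and its buffer):

void Receive(IAsyncResult asyncRes)
{
    System.Net.Sockets.Socket s = (System.Net.Sockets.Socket)asyncRes.AsyncState;
    int bytesRead = s.EndReceive(asyncRes);
    if (bytesRead == 0)
    {
        s.Close();          // the client closed the connection
        return;
    }

    // ... process messageBuffer[0 .. bytesRead) here ...

    // Post another receive so we keep reading from this client.
    s.BeginReceive(messageBuffer, 0, messageBuffer.Length,
        System.Net.Sockets.SocketFlags.None, new AsyncCallback(Receive), s);
}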
Usually, once you work through a couple of the asynchronous exercises, you will get the hang of the .BeginXxx/.EndXxx methods and the AsyncCallback, and of how it all fits together. Read through the MSDN reference I gave you and this should give you a pretty good start.
Usually, for a multi-connection application, the server listens on a specific port and, after receiving a connection request, Accept hands back a new socket dedicated to that client while the original socket keeps listening for the next device.
I use C# to create a TCP server to connect to the iOS devices. However, I've found that it can only accept one iOS device at a time. I couldn't figure out what the problem is. Can anyone have a look and see what the problem is?
IPAddress ipadr = IPAddress.Parse(localIP);
System.Net.IPEndPoint EndPoint = new System.Net.IPEndPoint(ipadr, 8060);
newsock.Bind(EndPoint);
newsock.Listen(10);

client = newsock.Accept();
IPEndPoint clientip = (IPEndPoint)client.RemoteEndPoint;

while (true)
{
    if (!isDisConnected)
    {
        data = new byte[1024];
        recv = client.Receive(data);
        if (recv == 0)
            break;
        string receivedText = Encoding.ASCII.GetString(data, 0, recv);
    }
}

client.Close();
newsock.Close();
There are two kinds of sockets: The socket that you use to listen (it is never connected) and the sockets that correspond to connections (each socket represents one connection).
Accept returns you a connected socket to the client that was just accepted. Each call to Accept accepts a new, independent client.
If you want to handle more than one client at a time (which is almost always required) you must ensure that a call to Accept is pending at all times so that new clients can be accepted.
A simple model to achieve this is to accept in a loop forever and start a thread for each client that you accepted:
while (true) {
    var clientSocket = listeningSocket.Accept();
    Task.Factory.StartNew(() => HandleClient(clientSocket));
}
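HandleClient is not defined in that answer; a minimal sketch of such a handler, which would sit in the same class (the 4 KB buffer size is arbitrary):

static void HandleClient(Socket clientSocket)
{
    var buffer = new byte[4096];
    try
    {
        int read;
        while ((read = clientSocket.Receive(buffer)) > 0)
        {
            // process buffer[0..read) for this client
        }
    }
    finally
    {
        clientSocket.Close();   // Receive returned 0 or threw: the client is gone
    }
}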
Take a look at AcceptAsync. Each accept operation allows one connection, so you have to call Accept again. AcceptAsync works asynchronously and avoids the difficulties of having to create delegates or threads.
The general model is:
1. An accept operation completes.
2. Hand off the AcceptSocket to code that will receive data asynchronously from that socket.
3. Call Accept again to listen for more clients.
The same principle also works if you want to do synchronous receives.
Check out this question: Server design using SocketAsyncEventArgs
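A rough, hedged sketch of that accept-again pattern using AcceptSocket with SocketAsyncEventArgs; the port, backlog, and class name are illustrative, not taken from the linked question:

using System;
using System.Net;
using System.Net.Sockets;

class AcceptLoop
{
    private readonly Socket _listener =
        new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

    public void Start(int port)
    {
        _listener.Bind(new IPEndPoint(IPAddress.Any, port));
        _listener.Listen(100);                       // example backlog

        var args = new SocketAsyncEventArgs();
        args.Completed += OnAcceptCompleted;
        StartAccept(args);
    }

    private void StartAccept(SocketAsyncEventArgs args)
    {
        args.AcceptSocket = null;                    // must be cleared before reuse
        if (!_listener.AcceptAsync(args))            // false means it completed synchronously
            OnAcceptCompleted(_listener, args);
    }

    private void OnAcceptCompleted(object sender, SocketAsyncEventArgs args)
    {
        // real code should check args.SocketError before using the socket
        Socket client = args.AcceptSocket;           // hand this off to your receive code
        // ... begin receiving on 'client' here ...
        StartAccept(args);                           // immediately accept the next client
    }
}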
preface:
I've been stumped on this for a while now and am not having much luck finding what I need.
I have a C# (.NET 3.5) service. One thread acts as an asynchronous listener for incoming TCP connections. When data comes in, I spawn a new worker thread to handle the data and send an acknowledgement back.
A second thread in the same service sends commands out. Until today it would gather information from the database, build a new socket, connect, ship the command, and then use Socket.Receive to block and wait for a response (or until a timeout occurs).
Everything had been working great until a new client needed to send data to us so fast (5-10 second intervals) that we could no longer open a new socket to get a command through. So I started checking, when a command needs to be sent, whether the listener thread already has that client connected; if it does, I use that socket instead of creating a new one.
Issue:
I'm at the point where I can send my command back on the same socket the listener receives on, but when the client sends the response back, Socket.Receive has to fire twice before the data ends up where I want it: the first time it lands in my listener class, and only the second time in my command class, where I actually want it.
Question:
Is there some option or something I need to do before calling Socket.Receive to ensure the data gets to the correct place?
In my listener class I have a list of CSocketPacket objects:
public class CSocketPacket
{
    public CSocketPacket(System.Net.Sockets.Socket socket)
    {
        thisSocket = socket;
        this.IpAddress =
            ((System.Net.IPEndPoint)socket.RemoteEndPoint).Address.ToString();
    }

    public System.Net.Sockets.Socket thisSocket;
    public byte[] dataBuffer = new byte[BUFFER_SIZE];
    public string IpAddress; // Use this to search for the socket
}
Then when I send a command I create a new TCP socket object:
client = new Socket(
    AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
IPEndPoint ep = new IPEndPoint(
    IPAddress.Parse(Strings.Trim(ipAddress)), port);
IPEndPoint LocalIp = new IPEndPoint(IPAddress.Parse(
    System.Configuration.ConfigurationManager.AppSettings["SourceIP"]), port);
Then I look in my listener class's list to see if that socket is connected:
if (listener.SocketExists(ipAddress))
{
    // set the client socket in this class to the
    // instance of the socket from the listener class
    SocketIndex = listener.FindSocketInList(ipAddress);
    if (SocketIndex != -1)
    {
        // might need to figure out how to avoid copying the socket
        // to a new variable ???
        client = listener.ConnectedSockets[SocketIndex].thisSocket;
        SocketBeingReUsed = true;
    }
}
else
{
    // try to connect to the client
    client.Connect(ep);
}
Finally, I go through my steps of sending and receiving:
if (client.Connected)
{
    if (client.Poll(1000, SelectMode.SelectWrite))
    {
        int sentAmount = Send(ref client);
        client.ReceiveTimeout = 90000; // 90 seconds
        returnData = ReceiveData(ref client, sentAmount);
    }
}
Everything works up to the point in my ReceiveData(ref client, sentAmount) method where I call the Socket.Receive(data, total, Socket.ReceiveBufferSize, SocketFlags.None) method.
I've been using a tool called Hercules to test sending/receiving packets across two machines on my home network.
Does anyone have any ideas about what I can do to solve this? I apologize for such a lengthy question, but I want to give as much info as possible without pasting my entire project. I'm open to any suggestions.
Disclaimer: I wrote this code approximately 3 years ago, so I'm probably doing things I shouldn't be, I'm sure :P
Thanks to all who read this.
Sincerely,
Chris
OK, so now I'm following along! Given what you've said in the comments above, the way to solve the problem is to have a single class/thread that reads from the socket (which is the correct way to read from sockets anyway) and then coordinates which class gets the data. I think it might work a little like the Command design pattern.
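A hedged sketch of that single-reader idea (not the Command pattern itself, just the dispatch part); ResponseArrived and DataArrived are made-up hooks, not part of the asker's project, and the point is simply that only one place ever calls Receive on the shared socket:

using System;
using System.Net.Sockets;

class SocketReader
{
    private readonly Socket _socket;
    private readonly byte[] _buffer = new byte[4096];

    // Set by the command class while it is waiting for a reply; otherwise null.
    public Action<byte[], int> ResponseArrived;
    // Default handler for unsolicited data from the device.
    public Action<byte[], int> DataArrived;

    public SocketReader(Socket socket) { _socket = socket; }

    public void Start()
    {
        _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, OnReceive, null);
    }

    private void OnReceive(IAsyncResult ar)
    {
        int read = _socket.EndReceive(ar);
        if (read == 0)
        {
            _socket.Close();                          // remote side closed the connection
            return;
        }

        var pending = ResponseArrived;
        if (pending != null) pending(_buffer, read);  // a command is waiting: route to it
        else if (DataArrived != null) DataArrived(_buffer, read);

        Start();                                      // keep exactly one Receive pending
    }
}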
I am writing an application in C# that needs to handle incoming connections, and I've never done server-side programming before. That leads me to the following questions:
1. What are the pros and cons of a high vs. low backlog? Why shouldn't we set the backlog to a huge number?
2. If I call Socket.Listen(10), after 10 Accept()s do I have to call Listen() again? Or do I have to call Listen() after every Accept()?
3. If I set my backlog to 0 and, hypothetically, two people want to connect to my server at the same time, what would happen? (I am calling Socket.Select in a loop and checking the readability of the listening socket; after I handle the first connection, would the second connection be successful on the next iteration if I called Listen() again?)
Thanks in advance.
The listen backlog is, as Pieter said, a queue which is used by the operating system to store connections that have been accepted by the TCP stack but not, yet, by your program. Conceptually, when a client connects it's placed in this queue until your Accept() code removes it and hands it to your program.
As such, the listen backlog is a tuning parameter that can be used to help your server handle peaks in concurrent connection attempts. Note that this is concerned with peaks in concurrent connection attempts and is in no way related to the maximum number of concurrent connections that your server can maintain. For example, if you have a server which receives 10 new connections per second, it's unlikely that tuning the listen backlog will have any effect, even if these connections are long lived and your server is supporting 10,000 concurrent connections (assuming your server isn't maxing out the CPU serving the existing connections!). However, if a server occasionally experiences short periods when it is accepting 1000 new connections per second, you can probably prevent some connections from being rejected by tuning the listen backlog to provide a larger queue and therefore give your server more time to call Accept() for each connection.
As for pros and cons, well the pros are that you can handle peaks in concurrent connection attempts better and the corresponding con is that the operating system needs to allocate more space for the listen backlog queue because it is larger. So it's a performance vs resources trade off.
Personally I make the listen backlog something that can be externally tuned via a config file.
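For example (a small sketch; "ListenBacklog" is a made-up App.config key and listeningSocket stands for your already-bound listening socket):

int backlog;
string configured = System.Configuration.ConfigurationManager.AppSettings["ListenBacklog"];
if (!int.TryParse(configured, out backlog))
    backlog = 100;                       // fall back to a default when the key is missing or invalid
listeningSocket.Listen(backlog);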
How and when you call listen and accept depends upon the style of sockets code that you're using. With synchronous code you'd call Listen() once with a value, say 10, for your listen backlog and then loop calling Accept(). The call to listen sets up the end point that your clients can connect to and conceptually creates the listen backlog queue of the size specified. Calling Accept() removes a pending connection from the listen backlog queue, sets up a socket for application use and passes it to your code as a newly established connection. If the time taken by your code to call Accept(), handle the new connection, and loop round to call Accept() again is longer than the gap between concurrent connection attempts then you'll start to accumulate entries in the listen backlog queue.
With asynchronous sockets it can be a little different, if you're using async accepts you will listen once, as before and then post several (again configurable) async accepts. As each one of these completes you handle the new connection and post a new async accept. In this way you have a listen backlog queue and a pending accept 'queue' and so you can accept connections faster (what's more the async accepts are handled on thread pool threads so you don't have a single tight accept loop). This is, usually, more scalable and gives you two points to tune to handle more concurrent connection attempts.
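One way to sketch that "listen once, post several async accepts" shape, here with Begin/EndAccept and illustrative numbers (port, backlog, and pending-accept count are all examples, not recommendations):

using System;
using System.Net;
using System.Net.Sockets;

class MultiAcceptServer
{
    const int PendingAccepts = 5;                  // example value; make this configurable
    static Socket listener;

    static void Main()
    {
        listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, 9000));   // example port
        listener.Listen(100);                      // example backlog

        for (int i = 0; i < PendingAccepts; i++)   // several accepts outstanding at once
            listener.BeginAccept(OnAccept, null);

        Console.ReadKey();
    }

    static void OnAccept(IAsyncResult ar)
    {
        Socket client = listener.EndAccept(ar);
        listener.BeginAccept(OnAccept, null);      // re-post so the pending-accept count stays constant
        // hand 'client' off to the receive code here
    }
}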
What the backlog does is provide a queue of clients that have connected to the server but that you haven't processed yet.
This concerns the time between when the client actually connects to the server and the time you Accept or EndAccept the client.
If accepting a client takes a long time, it is possible that the backlog becomes full and new client connections will be rejected until you have had time to process clients from the queue.
Concerning your questions:
1. I don't have information on that. If the default number does not pose any problems (no rejected client connections), leave it at its default. If you see many errors when new clients want to connect, increase the number. However, this will probably be because you take too much time accepting a new client; you should solve that issue before increasing the backlog.
2. No, this is handled by the system. The normal mechanism of accepting clients takes care of this.
3. See my earlier explanation.
Try this program and you will see what the backlog is good for.
using System;
using System.Net;
using System.Net.Sockets;

/*
 This program creates a TCP server socket. Then a large number of clients try to connect to it.
 The server counts connected clients. The number of successfully connected clients depends on the BACKLOG_SIZE parameter.
*/

namespace BacklogTest
{
    class Program
    {
        private const int BACKLOG_SIZE = 0; // <<< Change this to 10, 20 ... 100 and see what happens!!!!
        private const int PORT = 12345;
        private const int maxClients = 100;

        private static Socket serverSocket;
        private static int clientCounter = 0;

        private static void AcceptCallback(IAsyncResult ar)
        {
            // Get the socket that handles the client request
            Socket listener = (Socket)ar.AsyncState;
            listener.EndAccept(ar);

            ++clientCounter;
            Console.WriteLine("Connected clients count: " + clientCounter.ToString() + " of " + maxClients.ToString());

            // do some other work
            for (int i = 0; i < 100000; ++i)
            {
            }

            listener.BeginAccept(AcceptCallback, listener);
        }

        private static void StartServer()
        {
            // Establish the local endpoint for the socket
            IPEndPoint localEndPoint = new IPEndPoint(IPAddress.Any, PORT);

            // Create a TCP/IP socket
            serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

            // Bind the socket to the local endpoint and listen
            serverSocket.Bind(localEndPoint);
            serverSocket.Listen(BACKLOG_SIZE);
            serverSocket.BeginAccept(AcceptCallback, serverSocket);
        }

        static void Main(string[] args)
        {
            StartServer();

            // Clients connect to the server.
            for (int i = 0; i < 100; ++i)
            {
                IPAddress ipAddress = IPAddress.Parse("127.0.0.1");
                IPEndPoint remoteEP = new IPEndPoint(ipAddress, PORT);

                // Create a TCP/IP socket and connect to the server
                Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                client.BeginConnect(remoteEP, null, null);
            }

            Console.ReadKey();
        }
    }
}
I read the source code of two C# chat programs and I see a problem:
One of them uses the Socket class:
private void StartToListen(object sender, DoWorkEventArgs e)
{
    this.listenerSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    this.listenerSocket.Bind(new IPEndPoint(this.serverIP, this.serverPort));
    this.listenerSocket.Listen(200);
    while (true)
        this.CreateNewClientManager(this.listenerSocket.Accept());
}
And the other one uses the TcpListener class:
server = new TcpListener(portNumber);
logger.Info("Server starts");
while (true)
{
    server.Start();
    if (server.Pending())
    {
        TcpClient connection = server.AcceptTcpClient();
        logger.Info("Connection made");
        BackForth BF = new BackForth(connection);
    }
}
Please help me choose between them. Should I use the Socket class or the TcpListener class? Is a socket connection TCP or UDP? Thanks.
UDP is connectionless, but can have a fake connection enforced at both ends on the socket objects. TCP is a stream protocol (what you send will be received in chunks on the other end), and it additionally creates an endpoint socket for each accepted connection (the main listening socket is left untouched and keeps listening). UDP uses datagrams, chunks of data which are received whole on the other side (unless the size is bigger than the MTU, but that's a different story).
It looks to me like these two pieces of code are both using TCP, so since the underlying protocol is the same, they should be completely compatible with each other. It looks as if you should use the second bit of code since it's higher level, but only the server can really use it; the client needs a different bit of code since it doesn't listen, it connects. If you can find the 'connecting' code at the same level of abstraction, use that.
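For what it's worth, the second snippet calls server.Start() inside the loop and only accepts when Pending() happens to be true. A more conventional shape of the TcpListener pattern (with an illustrative port; the worker hand-off is only sketched in a comment) looks something like this:

using System.Net;
using System.Net.Sockets;

class ChatServer
{
    static void Main()
    {
        var server = new TcpListener(IPAddress.Any, 9000);   // example port
        server.Start();                                      // start listening once, outside the loop

        while (true)
        {
            TcpClient connection = server.AcceptTcpClient(); // blocks until a client connects
            // hand 'connection' off to a worker (e.g. new BackForth(connection))
            // so this loop can immediately accept the next client
        }
    }
}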
I am using C# sockets in asynchronous mode.
From a server point of view, I need to serve only one connection in my application. Once one client is connected, I would like to refuse any further connection requests.
The server also only serves that single client; when the communication is done, the server has to be restarted.
But from what I have read on the topic, it is not possible to cancel BeginAccept.
I would like some ideas on how to get around this situation.
Normally, in the BeginAccept async callback you would call BeginAccept again so that another connection can be accepted. However, you can omit this step if you do not want to allow another connection. So that the connection is refused in a timely manner, consider also closing the listening socket in the callback. The accepted Socket will remain open in this case for you to use even though the listening socket is closed.
class SocketTest
{
    private Socket m_Listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

    public void Test()
    {
        m_Listener.Bind(new IPEndPoint(IPAddress.Loopback, 8888));
        m_Listener.Listen(16);
        m_Listener.BeginAccept(AcceptCallback, null);
    }

    private void AcceptCallback(IAsyncResult ar)
    {
        Socket s = m_Listener.EndAccept(ar);
        m_Listener.Close();
        /* Use s here. */
    }
}