I have a block of code that "listens" on a TCP port and just sends a string back no matter what is sent. The issue is that the client side is just testing whether the port is active and then disconnecting, at which point an error is thrown:
Cannot access a disposed object
Object name: 'System.Net.Sockets.NetworkStream'
I think the issue is that this code runs on a thread, and when the connection closes the while loop references a disposed object... how should I prevent the error from firing when the client closes the connection?
//Create listener to accept client connections
TcpClient tcpClient = (TcpClient)client;
NetworkStream clientStream = tcpClient.GetStream();
byte[] rcvBuffer = new byte[BUFSIZE]; // Receive buffer
int bytesRcvd; // Received byte count
while (ServiceRunning) // Run forever, accepting and servicing connections
{
try
{
// Receive until client closes connection, indicated by 0 return value
int totalBytesEchoed = 0;
//I THINK THIS IS WHERE THE PROBLEM IS .. THE CLIENTSTREAM.READ???
while (((bytesRcvd = clientStream.Read(rcvBuffer, 0, rcvBuffer.Length)) > 0) && (ServiceRunning))
{
clientStream.Write(responseBytes, 0, responseBytes.Length);
WriteEventToWindowsLog("GSSResponderService", "Received "+System.Text.Encoding.UTF8.GetString(rcvBuffer), System.Diagnostics.EventLogEntryType.Information);
totalBytesEchoed += bytesRcvd;
}
WriteEventToWindowsLog("GSSResponderService", "Responded to " + totalBytesEchoed.ToString() + " bytes.", System.Diagnostics.EventLogEntryType.Information);
// Close the stream and socket. We are done with this client!
clientStream.Close();
tcpClient.Close();
}
catch (Exception e)
{
//THIS IS GETTING TRIGGERED WHEN A CONNECTION IS LOST
WriteEventToWindowsLog("GSSResponderService", "Error:" + e.Message, System.Diagnostics.EventLogEntryType.Error);
clientStream.Close();
tcpClient.Close();
break;
}
}
}
According to MSDN, the Read method of the NetworkStream class throws an IOException when the underlying Socket is closed, and an ObjectDisposedException when the NetworkStream is closed or there is a failure reading from the network. The same exceptions are thrown by the Write method.
Therefore it should be enough to catch these two exception types and take the appropriate action in the exception handlers.
TcpClient tcpClient = (TcpClient)client;
NetworkStream clientStream = tcpClient.GetStream();
byte[] rcvBuffer = new byte[BUFSIZE]; // Receive buffer
int bytesRcvd; // Received byte count
while (ServiceRunning) // Run forever, accepting and servicing connections
{
try
{
// Receive until client closes connection, indicated by 0 return value
int totalBytesEchoed = 0;
try
{
while (((bytesRcvd = clientStream.Read(rcvBuffer, 0, rcvBuffer.Length)) > 0) && (ServiceRunning))
{
clientStream.Write(responseBytes, 0, responseBytes.Length);
WriteEventToWindowsLog("GSSResponderService", "Received "+System.Text.Encoding.UTF8.GetString(rcvBuffer), System.Diagnostics.EventLogEntryType.Information);
totalBytesEchoed += bytesRcvd;
}
}
catch(IOException)
{
//HERE GOES CODE TO HANDLE CLIENT DISCONNECTION
}
catch(ObjectDisposedException)
{
//HERE GOES CODE TO HANDLE CLIENT DISCONNECTION
}
WriteEventToWindowsLog("GSSResponderService", "Responded to " + totalBytesEchoed.ToString() + " bytes.", System.Diagnostics.EventLogEntryType.Information);
// Close the stream and socket. We are done with this client!
clientStream.Close();
tcpClient.Close();
}
catch (Exception e)
{
WriteEventToWindowsLog("GSSResponderService", "Error:" + e.Message, System.Diagnostics.EventLogEntryType.Error);
clientStream.Close();
tcpClient.Close();
break;
}
}
}
Related
I've got an exception (System.Net.Sockets.SocketException) being thrown on what I believe is the second call to System.Net.Sockets.Socket.BeginReceive, and I do not understand why. I'm trying to receive data and append it to a per-client MemoryStream object as it arrives. This results in the received event never being fired and some data never being appended to the stream.
static void EndReceive(IAsyncResult ar)
{
// our client socket
var client = (Client)ar.AsyncState;
// amount of bytes received in this call (usually 1024 for buffer size)
var received = client.socket.EndReceive(ar);
if (received > 0)
{
// append to memorystream
client.stream.Write(client.buffer, 0, received);
}
else
{
// raise event done
ClientSent(client, client.stream.ToArray());
// clear out the memorystream for a new transfer
client.stream.SetLength(0L);
}
try
{
// continue receiving
client.socket.BeginReceive(client.buffer, 0, client.buffer.Length, SocketFlags.None, (AsyncCallback)EndReceive, client);
}
catch (SocketException socketException)
{
// error thrown every single time:
// System.Net.Sockets.SocketException: 'A request to send or receive data was disallowed because the socket
// is not connected and (when sending on a datagram socket using a sendto call) no address was supplied'
return;
}
catch (ObjectDisposedException disposedException)
{
return;
}
}
I don't know what caused this particular error, but after some tinkering I got the EndReceive method working correctly. Here's my new design:
static void EndReceive(IAsyncResult ar)
{
var client = (Client)ar.AsyncState;
var received = 0;
try
{
received = client.socket.EndReceive(ar);
}
catch (SocketException ex)
{
client.socket.BeginDisconnect(false, (AsyncCallback)EndDisconnect, client);
}
if (received > 0)
{
client.stream.Write(client.buffer, 0, received);
if (client.socket.Available == 0)
{
var sent = client.stream.ToArray();
client.stream.SetLength(0L);
Console.WriteLine("Client sent " + sent.Length + " bytes of data");
}
}
try
{
client.socket.BeginReceive(client.buffer, 0, client.buffer.Length, SocketFlags.None, (AsyncCallback)EndReceive, client);
}
catch (SocketException socketException)
{
return;
}
catch (ObjectDisposedException disposedException)
{
return;
}
}
I am currently trying to write an app for Android with Xamarin where I want to create and destroy sockets to the same device, and then repeat that process over again. I have written both client and server code. I am having a problem doing that, since the app always crashes on the server side when it tries to read data from the client a second time.
What I mean is that it is always successful the first time around, but the second time it always crashes. We figured out that the problem was on the client side, though, because once we started keeping the sockets open and reusing them instead of closing them and recreating a new one when needed, it worked as intended and did not crash. Here is the code we ended up using:
[SERVER]
public class BluetoothSocketListener {
private BluetoothScanner _scanner;
private BluetoothServerSocket serverSocket;
private string TAG = "Socket Listener: ";
private Thread listenThread;
public BluetoothSocketListener(BluetoothScanner scanner, UUID uuid) {
_scanner = scanner;
BluetoothServerSocket tmp = null;
try {
tmp = scanner.Adapter.ListenUsingInsecureRfcommWithServiceRecord("AGHApp", uuid);
} catch(Exception e) {
Console.WriteLine(TAG + "Listen failed, exception: " + e);
}
serverSocket = tmp;
listenThread = new Thread(new ThreadStart(StartListening));
listenThread.Start();
}
private void StartListening() {
Console.WriteLine(TAG + "Listening...");
BluetoothSocket socket = null;
while(_scanner.Running){
try {
socket = serverSocket.Accept();
}catch(Exception e) {
Console.WriteLine(TAG + "Accept failed: " + e);
break;
}
if (socket != null) {
lock (this) {
ReadData(socket.InputStream);
socket.Close();
}
}
}
serverSocket.Close();
}
private void ReadData(Stream stream) {
// Check to see if this NetworkStream is readable.
if(stream.CanRead){
byte[] streamData = new byte[1024];
StringBuilder completeMsg = new StringBuilder();
int bytesRead = 0;
// Incoming message may be larger or smaller than the buffer size.
do{
bytesRead = stream.Read(streamData, 0, 1);
completeMsg.AppendFormat("{0}", Encoding.ASCII.GetString(streamData, 0, bytesRead));
}
while(stream.IsDataAvailable());
// Print out the received message to the console.
Console.WriteLine("Message : " + completeMsg);
}
else{
Console.WriteLine("Cannot read from stream");
}
}
}
[CLIENT]
private void SendData(BluetoothDevice device, string msg){
Console.WriteLine(TAG + "Finding socket");
BluetoothSocket socket = null;
if(sockets.ContainsKey(device.Address)) {
socket = sockets[device.Address];
}
else {
socket = device.CreateInsecureRfcommSocketToServiceRecord(_uuid);
socket.Connect();
sockets.Add(device.Address, socket);
}
Console.WriteLine(TAG + "Socket connected, writing to socket");
byte[] bMsg = Encoding.ASCII.GetBytes(msg);
socket.OutputStream.Write(bMsg, 0, bMsg.Length);
socket.OutputStream.Close();
}
As can be seen, I never actually close the sockets on the client side after I send the message. This is not the problem, though; if it is necessary, I can easily do it in some other function.
What I would like is to create and close the socket every time I want to send a message, since I only want to send something every 15 minutes and the device might have moved and no longer be available. It is also not necessary to keep track of the devices: fire and forget. This is what we started with, and we would like to have something similar to this as well:
private void SendData(BluetoothDevice device, string msg){
BluetoothSocket socket = device.CreateInsecureRfcommSocketToServiceRecord(_uuid);
socket.Connect();
Console.WriteLine(TAG + "Socket connected, writing to socket");
byte[] bMsg = Encoding.ASCII.GetBytes(msg);
socket.OutputStream.Write(bMsg, 0, bMsg.Length);
socket.OutputStream.Close();
socket.Close();
Console.WriteLine(TAG + "Socket closed");
}
Something to notice is that the server actually closes the socket after it receives the message. Why does that even work? And why can't I close the socket on the client side? Am I missing something integral here?
The exception is a Java.IO exception, and the message reads: bt socket closed, read -1
Would really appreciate some help!
So I am writing a simple client-server application. It should send a packet, then wait to receive a packet, then send one, etc. The problem is, it receives the first packet, but when I start the TcpListener in the second iteration, it gives me this error:
No connection could be made because the target machine actively refused it 127.0.0.1:13
private void listenForConnections()
{
bool packetReceived = false;
listener = new TcpListener(IPAddress, port);
listener.Start();
while (!packetReceived)
{
try
{
client = listener.AcceptTcpClient();
listener.Stop();
networkStream = client.GetStream();
byte[] message = new byte[1024];
networkStream.Read(message, 0, message.Length);
networkStream.Close();
string strMessage = Encoding.UTF8.GetString(message);
packetReceived = true;
MessageBox.Show("received message: " + strMessage);
client.Close();
}
catch (Exception ee)
{
thListen.Join();
}
}
}
private void sendPacket(object pClient)
{
string message = "test message;
try
{
client = (TcpClient)pClient;
client.Connect(IPAddress, port);
networkStream = client.GetStream();
byte[] strMessage = Encoding.UTF8.GetBytes(message);
networkStream.Write(strMessage, 0, strMessage.Length);
networkStream.Close();
client.Close();
}
catch (Exception ex)
{
MessageBox.Show(ex.ToString());
}
}
Create the client / networkstream once. Store them in a property until you are finished sending and receiving. Then close and dispose. Do not stop / close the connection between each iteration.
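A minimal sketch of that idea, assuming the IPAddress, port and message values from the question (the helper method names OpenConnection, SendMessage and CloseConnection are hypothetical):
private TcpClient client;
private NetworkStream networkStream;

private void OpenConnection()
{
    // connect once and keep the client/stream in fields
    client = new TcpClient();
    client.Connect(IPAddress, port);
    networkStream = client.GetStream();
}

private void SendMessage(string message)
{
    // reuse the same stream for every send; no Close() between iterations
    byte[] data = Encoding.UTF8.GetBytes(message);
    networkStream.Write(data, 0, data.Length);
}

private void CloseConnection()
{
    // dispose only when the whole exchange is finished
    networkStream.Close();
    client.Close();
}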
Move the
listener.Stop();
outside the while loop.
EDIT: to explain why
The reason it works the first time but fails on the second iteration is that after the first client is accepted by client = listener.AcceptTcpClient(), the next line calls listener.Stop(), which stops listening for connections. Any subsequent call to listener.AcceptTcpClient() will then throw an InvalidOperationException. Moving listener.Stop() outside the while loop means listening only stops once the loop exits.
Looking at it again, packetReceived is set to true in the first iteration as well, so it's going to exit the while loop after the first client anyway. Is this the intended behaviour?
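For reference, a rough sketch of the receive loop with Stop() moved out, based on the code in the question:
listener = new TcpListener(IPAddress, port);
listener.Start();
while (!packetReceived)
{
    // keep accepting on the same listener; do not stop it inside the loop
    client = listener.AcceptTcpClient();
    networkStream = client.GetStream();
    byte[] message = new byte[1024];
    networkStream.Read(message, 0, message.Length);
    networkStream.Close();
    string strMessage = Encoding.UTF8.GetString(message);
    packetReceived = true;
    MessageBox.Show("received message: " + strMessage);
    client.Close();
}
// stop listening only after the loop has exited
listener.Stop();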
Recently I ran into some strange behaviour with the .NET synchronous Receive method. I needed to write an application that has nodes which communicate with each other by sending/receiving data. Each server has a synchronous receive loop; after receiving a serialized class it deserializes and processes it. After that it asynchronously sends this serialized class to some chosen nodes (using AsynchSendTo).
The MSDN clearly says that:
"If you are using a connection-oriented Socket, the Receive method
will read as much data as is available, up to the size of the buffer.
If the remote host shuts down the Socket connection with the Shutdown
method, and all available data has been received, the Receive method
will complete immediately and return zero bytes."
In my case that's not true. There are some random cases where Receive doesn't block and returns 0 bytes right away after establishing the connection (a non-deterministic situation). I'm 100% sure that the sender was sending at least 1000 bytes. One more funny fact: when putting a Sleep(500) before the receive, everything works just fine. Below is the receiving code:
_listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
try
{
_listener.Bind(_serverEndpoint);
_listener.Listen(Int32.MaxValue);
while (true)
{
Console.WriteLine("Waiting for connection...");
Socket handler = _listener.Accept();
int totalBytes = 0;
int bytesRec;
var bytes = new byte[DATAGRAM_BUFFER];
do
{
//Thread.Sleep(500);
bytesRec = handler.Receive(bytes, totalBytes, handler.Available, SocketFlags.None);
totalBytes += bytesRec;
} while (bytesRec > 0);
handler.Shutdown(SocketShutdown.Both);
handler.Close();
}
}
catch (SocketException e)
{
Console.WriteLine(e);
}
Also the sending part:
public void AsynchSendTo(Datagram datagram, IPEndPoint recipient)
{
byte[] byteDatagram = SerializeDatagram(datagram);
try
{
var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
socket.BeginConnect(recipient, ConnectCallback, new StateObject(byteDatagram, byteDatagram.Length, socket));
}
catch (SocketException e)
{
Console.WriteLine(e);
}
}
public void ConnectCallback(IAsyncResult result)
{
try
{
var stateObject = (StateObject)result.AsyncState;
var socket = stateObject.Socket;
socket.EndConnect(result);
socket.BeginSend(stateObject.Data, 0, stateObject.Data.Length, 0, new AsyncCallback(SendCallback), socket);
}
catch (Exception ex)
{
Console.WriteLine("catched!" + ex.ToString());
}
}
public void SendCallback(IAsyncResult result)
{
try
{
var client = (Socket)result.AsyncState;
client.EndSend(result);
client.Shutdown(SocketShutdown.Both);
client.Close();
}
catch (Exception ex)
{
Console.WriteLine(ex);
}
}
class StateObject
{
public Byte[] Data { get; set; }
public int Size;
public Socket Socket;
}
My question: am I using the synchronous Receive in the wrong way? Why doesn't it block even though there is data to receive?
You're shooting yourself in the foot.
bytesRec = handler.Receive(bytes, totalBytes, handler.Available, SocketFlags.None);
At the very beginning of the connection, Available will be 0, forcing Receive to return immediately with 0 bytes. Instead, you should specify the number of bytes that are still free in your buffer (e.g. bytes.Length - totalBytes); then it will also block.
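For example, the receive loop from the question could be changed along these lines so that the call blocks until data arrives or the sender shuts down the connection:
do
{
    // request everything that still fits in the buffer instead of handler.Available
    bytesRec = handler.Receive(bytes, totalBytes, bytes.Length - totalBytes, SocketFlags.None);
    totalBytes += bytesRec;
} while (bytesRec > 0);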
You may have a concurrency problem here. After you accept a connection, you jump straight into the receive. The sender process may not have had enough time to reach its call to send, so handler.Available is 0 and the Receive returns.
This is also why the "bug" does not occur when you add the sleep of 500 ms.
(C#) I have created a socket listener application which waits for a client to connect. My requirement is that the application should listen for up to 30 seconds only; after that it should throw an error saying "Request Timed Out". Any suggestions?
try
{
Byte[] bytes = new Byte[256];
String data = null;
// Enter the listening loop.
while (true)
{
ReceiveTimer.Stop();
logger.Log("Waiting for a connection... ");
// Perform a blocking call to accept requests.
// You could also use server.AcceptSocket() here.
TcpClient client = server.AcceptTcpClient();
logger.Log("Connected!");
data = null;
// Get a stream object for reading and writing
NetworkStream stream = client.GetStream();
int i;
// Loop to receive all the data sent by the client.
while ((i = stream.Read(bytes, 0, bytes.Length)) != 0)
{
// Translate data bytes to a ASCII string.
data = System.Text.Encoding.ASCII.GetString(bytes, 0, i);
logger.Log("Received: {0}", data);
// Process the data sent by the client.
data = data.ToUpper();
byte[] msg = System.Text.Encoding.ASCII.GetBytes(data);
// Send back a response.
stream.Write(msg, 0, msg.Length);
logger.Log("Sent: {0}", data);
}
// Shutdown and end connection
client.Close();
logger.Log("Shutdown and end connection");
}
}
catch (SocketException ex)
{
logger.Log("SocketException: " + ex);
}
I would think that the ReceiveTimeout property being set to 30 seconds would take care of that for you. Have you tried that yet?
client.ReceiveTimeout = 30000;
The property defaults to 0 (no timeout), so you would want to set it. When the timeout elapses, the read throws an IOException; you could catch that and throw an exception of your choosing to bubble up through the application.
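A rough sketch of how that could fit into the loop from the question (note that ReceiveTimeout applies to Read on an accepted client, not to the AcceptTcpClient call itself):
TcpClient client = server.AcceptTcpClient();
client.ReceiveTimeout = 30000; // 30-second timeout for reads on this connection
NetworkStream stream = client.GetStream();
try
{
    int i = stream.Read(bytes, 0, bytes.Length); // throws IOException once the timeout elapses
    // ... handle the received data as before ...
}
catch (IOException)
{
    // surface a friendlier error, as suggested above
    throw new TimeoutException("Request Timed Out");
}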
Why would you want to limit the time that the server is listening? A server should be listening the whole time it is running.
Anyway, you can create a task (thread) that listens and is cancelled after the timeout.
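One possible sketch of that approach, reusing the server and logger objects from the question and waiting on the accept with a 30-second timeout:
server.Start();
Task<TcpClient> acceptTask = server.AcceptTcpClientAsync();
if (acceptTask.Wait(TimeSpan.FromSeconds(30)))
{
    TcpClient client = acceptTask.Result;
    // ... handle the connection as in the original loop ...
}
else
{
    server.Stop(); // abandons the pending accept
    logger.Log("Request Timed Out");
}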