I have a function (please see the code below) that reads some data from the web. The problem with this function is that sometimes it returns quickly, but at other times it waits indefinitely. I've heard that threads can help me wait for a definite period of time and then return.
Can you please tell me how to make a thread wait for 'x' seconds and return if no activity is recorded? My function also returns a string as its result; is it possible to capture that value while using a thread?
private string ReadMessage(SslStream sslStream)
{
    // Read the message sent by the server.
    // The end of the message is signaled using the
    // "<EOF>" marker.
    byte[] buffer = new byte[2048];
    StringBuilder messageData = new StringBuilder();
    int bytes = -1;
    try
    {
        bytes = sslStream.Read(buffer, 0, buffer.Length);
        // Use the Decoder class to convert from bytes to ASCII
        // in case a character spans two buffers.
        Decoder decoder = Encoding.ASCII.GetDecoder();
        char[] chars = new char[decoder.GetCharCount(buffer, 0, bytes)];
        decoder.GetChars(buffer, 0, bytes, chars, 0);
        messageData.Append(chars);
        // Check for EOF.
    }
    catch (Exception ex)
    {
        throw;
    }
    return messageData.ToString();
}
In response to Andre Calil's comment:
I need to read and write some values to an SSL server. For every write operation the server sends a response, and ReadMessage is responsible for reading the incoming message. I've found situations where ReadMessage (specifically sslStream.Read(buffer, 0, buffer.Length)) waits forever. To combat this problem, I considered threads that can wait for 'x' seconds and return after that. The following code demonstrates how ReadMessage is used:
byte[] message = Encoding.UTF8.GetBytes(inputmsg);
// Send the message to the server.
sslStream.Write(message);
sslStream.Flush();
// Read the server's response.
outputmsg = ReadMessage(sslStream);
// Console.WriteLine("Server says: {0}", outputmsg);
// Close the client connection.
client.Close();
You can't (sanely) make a second thread interrupt the one you're executing this code from. Use a read timeout instead:
private string ReadMessage(SslStream sslStream)
{
    // Set a timeout here or when creating the stream.
    sslStream.ReadTimeout = 20 * 1000;
    // …
    try
    {
        bytes = sslStream.Read(…);
    }
    catch (IOException)
    {
        // A timeout occurred; handle it.
    }
}
As an aside, the following construct is pointless:
try
{
    // some code
}
catch (Exception ex)
{
    throw;
}
If all you're doing is rethrowing, you don't need the try..catch block at all.
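Putting it together, a minimal sketch of the questioner's method with a timeout might look like the following. The 20-second value is arbitrary, and note that ReadTimeout on an SslStream only works when the underlying stream (e.g. a NetworkStream) supports timeouts:
private string ReadMessage(SslStream sslStream)
{
    sslStream.ReadTimeout = 20 * 1000; // milliseconds
    byte[] buffer = new byte[2048];
    StringBuilder messageData = new StringBuilder();
    try
    {
        int bytes = sslStream.Read(buffer, 0, buffer.Length);
        messageData.Append(Encoding.ASCII.GetString(buffer, 0, bytes));
    }
    catch (IOException)
    {
        // The read timed out: return what we have (nothing here), retry,
        // or rethrow, depending on what the protocol requires.
    }
    return messageData.ToString();
}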
You can set the ReadTimeout on the SslStream so that the call to Read will time out after a specified amount of time.
If you don't want to block the main thread, use an asynchronous pattern.
Without knowing exactly what you are trying to achieve, it sounds like you want to read data from an SSL stream that may take quite a while to respond, without blocking your UI/main thread.
You could consider doing your read asynchronously instead, using BeginRead.
With that approach, you define a callback method that is invoked each time a read completes and data has been placed into the specified buffer.
Just sleeping (whether using Thread.Sleep or by setting ReadTimeout on the SslStream) will block the thread this code is running on.
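For illustration, here is a rough sketch of that approach; the sslStream field and the ProcessMessage method are hypothetical:
private readonly byte[] readBuffer = new byte[2048];

private void StartRead()
{
    sslStream.BeginRead(readBuffer, 0, readBuffer.Length, OnReadComplete, null);
}

private void OnReadComplete(IAsyncResult ar)
{
    int bytes = sslStream.EndRead(ar);
    if (bytes == 0)
        return; // the server closed the stream
    ProcessMessage(Encoding.ASCII.GetString(readBuffer, 0, bytes)); // hypothetical handler
    StartRead(); // keep listening for the next response
}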
Design it to be asynchronous by putting ReadMessage in its own thread waiting for the answer. Once the answer arrives, raise an event back to the main code to handle the output.
Related
This is specifically a question about what goes on in the background communication of a NetworkStream consuming raw data over TCP. The TcpClient connection communicates directly with a hardware device on the network. Every so often, at random times, the NetworkStream appears to hiccup, which is best described by what I observe in debug mode. I have a read timeout set on the stream. When everything is working as expected, stepping over Stream.Read causes it to sit there and wait up to the full timeout period for incoming data. When not, only a small portion of the data comes through; the TcpClient still shows as open and connected, but Stream.Read no longer waits for the timeout period. It immediately steps over to the next line, no data is received, and no data will ever come through until everything is disposed of and a new connection is reestablished.
The question is: in this specific scenario, what state is the NetworkStream in at this point, what causes it, and why is the TcpClient connection still in a seemingly open and valid state? What is going on in the background? No errors are thrown and caught; is the stream silently failing in the background? What is the difference between the states of TcpClient and NetworkStream?
private TcpClient Client;
private NetworkStream Stream;

Client = new TcpClient();
var result = Client.BeginConnect(IPAddress, Port, null, null);
var success = result.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(2));
Client.EndConnect(result);
Stream = Client.GetStream();

try
{
    while (Client.Connected)
    {
        bool flag = true;
        StringBuilder sb = new StringBuilder();
        while (!IsCompleteRecord(sb.ToString()) && Client.Connected)
        {
            string response = "";
            byte[] data = new byte[512];
            Stream.ReadTimeout = 60000;
            try
            {
                int recv = Stream.Read(data, 0, data.Length);
                response = Encoding.ASCII.GetString(data, 0, recv);
            }
            catch (Exception ex)
            {
            }
            sb.Append(response);
        }
        string rec = sb.ToString();
        // send off data
        Stream.Flush();
    }
}
catch (Exception ex)
{
}
You are not properly testing for the peer closing its end of the connection.
From the documentation for NetworkStream.Read: https://msdn.microsoft.com/en-us/library/system.net.sockets.networkstream.read%28v=vs.110%29.aspx
This method reads data into the buffer parameter and returns the number of bytes successfully read. If no data is available for reading, the Read method returns 0. The Read operation reads as much data as is available, up to the number of bytes specified by the size parameter. If the remote host shuts down the connection, and all available data has been received, the Read method completes immediately and return zero bytes.
You are simply calling Stream.Read without interpreting the fact that you might have received 0 bytes, which means the peer closed its end of the connection. This is called a half close; it will not send you anything anymore. At that point you should also close your end of the socket.
There is an example available here:
https://msdn.microsoft.com/en-us/library/bew39x2a(v=vs.110).aspx
// Read data from the remote device.
int bytesRead = client.EndReceive(ar);
if (bytesRead > 0)
{
    // There might be more data, so store the data received so far.
    state.sb.Append(Encoding.ASCII.GetString(state.buffer, 0, bytesRead));
    // Get the rest of the data.
    client.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0,
        new AsyncCallback(ReceiveCallback), state);
}
else
{
    // All the data has arrived; put it in response.
    if (state.sb.Length > 1)
    {
        response = state.sb.ToString();
    }
    // Signal that all bytes have been received.
    receiveDone.Set(); // note that the event is set here
}
and in the main code block it is waiting for receiveDone:
receiveDone.WaitOne();
// Write the response to the console.
Console.WriteLine("Response received : {0}", response);
// Release the socket.
client.Shutdown(SocketShutdown.Both);
client.Close();
Conclusion: check for a reception of 0 bytes and close your end of the socket, because that is what the other end has done.
A timeout, on the other hand, is reported with an exception. You are not really doing anything with the timeout because your catch block is empty; you just silently continue trying to receive.
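A sketch of the inner read, adapted from the loop in the question, with both conditions handled (this sits inside the inner while loop):
try
{
    int recv = Stream.Read(data, 0, data.Length);
    if (recv == 0)
    {
        // The peer performed a half close: close our end too and stop reading.
        Stream.Close();
        Client.Close();
        break;
    }
    sb.Append(Encoding.ASCII.GetString(data, 0, recv));
}
catch (IOException)
{
    // The 60-second ReadTimeout elapsed: decide whether to retry or give up
    // instead of silently looping.
}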
@Philip has already answered the question.
I just want to add that I recommend the use of SysInternals TcpView, which is basically a GUI for netstat and lets you easily check the status of all network connections of your computer.
About detecting the connection state in your program, see here on SO.
I'm using the asynchronous method BeginSend and I need some sort of a timeout mechanism. What I've implemented works fine for connect and receive timeouts, but I have a problem with the BeginSend callback. Even a timeout of 25 seconds is often not enough and gets exceeded. That seems very strange to me and points towards a different cause.
public void Send(String data)
{
    if (client.Connected)
    {
        // Convert the string data to byte data using ASCII encoding.
        byte[] byteData = Encoding.ASCII.GetBytes(data);
        client.NoDelay = true;
        // Begin sending the data to the remote device.
        IAsyncResult res = client.BeginSend(byteData, 0, byteData.Length, 0,
            new AsyncCallback(SendCallback), client);
        if (!res.IsCompleted)
        {
            sendTimer = new System.Threading.Timer(SendTimeoutCallback, null, 10000, Timeout.Infinite);
        }
    }
    else MessageBox.Show("No connection to target! Send");
}

private void SendCallback(IAsyncResult ar)
{
    if (Interlocked.CompareExchange(ref sendTimeoutflag, 1, 0) != 0)
    {
        // The flag was set elsewhere, so return immediately.
        return;
    }
    sendTimeoutflag = 0; // needs to be reset back to 0 for the next send
    // We set the flag to 1, indicating the send was completed.
    if (sendTimer != null)
    {
        // Stop the timer from firing.
        sendTimer.Dispose();
    }
    try
    {
        // Retrieve the socket from the state object.
        Socket client = (Socket)ar.AsyncState;
        // Complete sending the data to the remote device.
        int bytesSent = client.EndSend(ar);
        ef.updateUI("Sent " + bytesSent.ToString() + " bytes to server." + "\n");
    }
    catch (Exception e)
    {
        MessageBox.Show(e.ToString());
    }
}

private void SendTimeoutCallback(object obj)
{
    if (Interlocked.CompareExchange(ref sendTimeoutflag, 2, 0) != 0)
    {
        // The flag was set elsewhere, so return immediately.
        return;
    }
    // We set the flag to 2, indicating a timeout was hit.
    sendTimer.Dispose();
    client.Close(); // closing the Socket cancels the async operation
    MessageBox.Show("Connection to the target has been lost! SendTimeoutCallback");
}
I've tested timeout values up to 30 seconds. A value of 30 seconds has proved to be the only one that never times out, but that seems like overkill, and I believe there's a different underlying cause. Any ideas as to why this could be happening?
Unfortunately, there's not enough code to completely diagnose this. You don't even show the declaration of sendTimeoutflag. The example isn't self-contained, so there's no way to test it. And you're not clear about exactly what happens (e.g. do you just get the timeout, do you complete a send and still get a timeout, does something else happen?).
That said, I see at least one serious bug in the code: your use of the sendTimeoutflag. The SendCallback() method sets this flag to 1, but then immediately sets it back to 0 (this time without the protection of Interlocked.CompareExchange()). Only after setting the value back to 0 does it dispose the timer.
This means that even when you successfully complete the callback, the timeout timer is nearly guaranteed to have no idea and to close the client object anyway.
You can fix this specific issue by moving the assignment sendTimeoutflag = 0; to a point after you've actually completed the send operation, e.g. at the end of the callback method. And even then only if you take steps to ensure that the timer callback cannot execute past that point (e.g. wait for the timer's dispose to complete).
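For illustration, one possible restructuring of SendCallback along those lines; this is a sketch only, it assumes sendTimeoutflag is an int field, and it does not address the timer-disposal race just mentioned:
private void SendCallback(IAsyncResult ar)
{
    if (Interlocked.CompareExchange(ref sendTimeoutflag, 1, 0) != 0)
    {
        return; // the timeout callback won the race; the socket is closed
    }
    if (sendTimer != null)
    {
        sendTimer.Dispose(); // stop the timer from firing
    }
    try
    {
        Socket client = (Socket)ar.AsyncState;
        int bytesSent = client.EndSend(ar);
        ef.updateUI("Sent " + bytesSent.ToString() + " bytes to server." + "\n");
    }
    catch (Exception e)
    {
        MessageBox.Show(e.ToString());
    }
    finally
    {
        // Only now, after the send has fully completed, allow the next cycle.
        Interlocked.Exchange(ref sendTimeoutflag, 0);
    }
}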
Note that even having fixed that specific issue, you may still have other bugs. Frankly, it's not clear why you want a timeout in the first place. Nor is it clear why you want to use lock-free code to implement your timeout logic. More conventional locking (i.e. Monitor-based with the lock statement) would be easier to implement correctly and would likely not impose a noticeable performance penalty.
And I agree with the suggestion that you would be better served by using the async/await pattern instead of explicitly dealing with callback methods (though of course that would mean using a higher-level I/O object, since Socket doesn't support async/await).
NetworkStream stream = socket.GetStream();
if (stream.CanRead)
{
    while (true)
    {
        int i = stream.Read(buf, 0, 1024);
        result += Encoding.ASCII.GetString(buf, 0, i);
    }
}
The code above was designed to retrieve a message from a TcpClient while running on a separate thread. The Read method works fine until it is supposed to return -1 to indicate there is nothing left to read; instead, it just terminates the thread it is running on without any apparent reason. Tracing each step with the debugger shows that it simply stops running right after that line.
I also tried encapsulating it in a try ... catch, without much success.
What could be causing this?
EDIT: I tried
NetworkStream stream = socket.GetStream();
if (stream.CanRead)
{
    while (true)
    {
        int i = stream.Read(buf, 0, 1024);
        if (i == 0)
        {
            break;
        }
        result += Encoding.ASCII.GetString(buf, 0, i);
    }
}
thanks to @JonSkeet, but the problem is still there. The thread terminates at that read line.
EDIT2: I fixed the code like this and it worked.
while (stream.DataAvailable)
{
    int i = stream.Read(buf, 0, 1024);
    result += Encoding.ASCII.GetString(buf, 0, i);
}
I think the problem was simple, I just didn't think thoroughly enough. Thanks everyone for taking a look at this!
No, Stream.Read returns 0 when there's nothing to read, not -1:
Return value
The total number of bytes read into the buffer. This can be less than the number of bytes requested if that many bytes are not currently available, or zero (0) if the end of the stream has been reached.
My guess is that actually, no exception is being thrown and the thread isn't being aborted - but it's just looping forever. You should be able to see this if you step through in the debugger. Whatever's happening, your "happy" termination condition will never be hit...
Since you're trying to read ASCII characters from a stream, take a look at the following as a potentially simpler way to do it:
public IEnumerable<string> ReadLines(Stream stream)
{
    using (StreamReader reader = new StreamReader(stream, Encoding.ASCII))
    {
        while (!reader.EndOfStream)
            yield return reader.ReadLine();
    }
}
While this may not be exactly what you want, the salient points are:
Use a StreamReader to do all the hard work for you
Use a while loop with !reader.EndOfStream to loop through the stream
You can still use reader.Read(buffer, 0, 1024) if you'd prefer to read chunks into a buffer and append to result. Just note that these will be char[] chunks rather than byte[] chunks, which is likely what you want when building a string. For instance, a chunked variant might look roughly like the sketch below.
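using (var reader = new StreamReader(stream, Encoding.ASCII))
{
    var buffer = new char[1024];
    var result = new StringBuilder();
    int read;
    // Read returns 0 at the end of the stream, so this loop terminates cleanly.
    while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
    {
        result.Append(buffer, 0, read);
    }
}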
It looks to me like it is simply blocking, i.e. waiting on the end of the stream. For it to return a non-positive number, the stream must be closed, i.e. the caller has not only sent data but has closed their outbound socket. Otherwise, the system cannot distinguish between "waiting for a packet to arrive" and "the end of the stream".
If the caller is sending one message only, they should close their outbound socket after sending (they can keep their inbound socket open for a reply).
If the caller is sending multiple messages, then you must use a framing approach to read individual sub-messages. In the case of a text-based protocol this usually means "hunt the newline".
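As a sketch of the newline-framing idea, assuming ASCII text, '\n' terminators, and a hypothetical HandleMessage method:
var pending = new StringBuilder();
var buf = new byte[1024];
int n;
while ((n = stream.Read(buf, 0, buf.Length)) != 0)
{
    pending.Append(Encoding.ASCII.GetString(buf, 0, n));
    int newline;
    // Extract every complete line accumulated so far.
    while ((newline = pending.ToString().IndexOf('\n')) >= 0)
    {
        string message = pending.ToString(0, newline);
        pending.Remove(0, newline + 1);
        HandleMessage(message); // hypothetical per-message handler
    }
}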
I have a client/server infrastructure. At present they use a TcpClient and TcpListener to send and receive data between all the clients and the server.
What I currently do is: when data is received (on its own thread), it is put in a queue for another thread to process, in order to free the socket so it is ready and open to receive new data.
// Enter the listening loop.
while (true)
{
    Debug.WriteLine("Waiting for a connection... ");
    // Perform a blocking call to accept requests.
    using (client = server.AcceptTcpClient())
    {
        data = new List<byte>();
        // Get a stream object for reading and writing.
        using (NetworkStream stream = client.GetStream())
        {
            // Loop to receive all the data sent by the client.
            int length;
            while ((length = stream.Read(bytes, 0, bytes.Length)) != 0)
            {
                var copy = new byte[length];
                Array.Copy(bytes, 0, copy, 0, length);
                data.AddRange(copy);
            }
        }
    }
    receivedQueue.Add(data);
}
However, I wanted to find out if there is a better way to do this. For example, if there are 10 clients and they all want to send data to the server at the same time, one will get through while all the others will fail. Or if one client has a slow connection and hogs the socket, all other communication will halt.
Is there not some way to be able to receive data from all clients at the same time and add the received data in the queue for processing when it has finished downloading?
So here is an answer that will get you started - which is more beginner level than my blog post.
.NET has an async pattern that revolves around a Begin*/End* pair of calls, for instance BeginReceive and EndReceive. They nearly always have a non-async counterpart (in this case Receive) and achieve exactly the same goal.
The most important thing to remember is that the socket ones do more than just make the call async: they expose something called IOCP (I/O Completion Ports; Linux/Mono has these too, but I forget the name), which is extremely important to use on a server. The crux of what IOCP does is that your application doesn't consume a thread while it waits for data.
How to Use The Begin/End Pattern
Every Begin* method has exactly two more arguments in comparison to its non-async counterpart. The first is an AsyncCallback, the second is an object. They mean, respectively, "here is a method to call when you are done" and "here is some data I need inside that method." The method that gets called always has the same signature; inside it you call the End* counterpart to get what would have been the result had you done it synchronously. So for example:
private void BeginReceiveBuffer()
{
    var buffer = new byte[4096];
    _socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None,
        EndReceiveBuffer, buffer);
}

private void EndReceiveBuffer(IAsyncResult state)
{
    var buffer = (byte[])state.AsyncState; // This is the last parameter.
    var length = _socket.EndReceive(state); // This is the return value of the method call.
    DataReceived(buffer, 0, length); // Do something with the data.
}
What happens here is that .NET starts waiting for data from the socket; as soon as it gets data, it calls EndReceiveBuffer and passes the "custom data" (in this case buffer) through to it via state.AsyncState. When you call EndReceive it will give you back the length of the data that was received (or throw an exception if something failed).
Better Pattern for Sockets
This form gives you central error handling; it can be used anywhere the async pattern wraps a stream-like "thing" (e.g. TCP arrives in the order it was sent, so it can be seen as a Stream object).
private Socket _socket;
private ArraySegment<byte> _buffer;

public void StartReceive()
{
    ReceiveAsyncLoop(null);
}

// Note that this method is not guaranteed (in fact
// unlikely) to remain on a single thread across
// async invocations.
private void ReceiveAsyncLoop(IAsyncResult result)
{
    try
    {
        // result is null only on the first call, via StartReceive().
        if (result != null)
        {
            int numberOfBytesRead = _socket.EndReceive(result);
            if (numberOfBytesRead == 0)
            {
                OnDisconnected(null); // 'null' being the exception: the client disconnected normally in this case.
                return;
            }
            var newSegment = new ArraySegment<byte>(_buffer.Array, _buffer.Offset, numberOfBytesRead);
            // This method needs its own error handling. Don't let it throw exceptions unless you
            // want to disconnect the client.
            OnDataReceived(newSegment);
        }
        // Because of this method call, it's as though we are creating a 'while' loop;
        // this is usually called an async loop, but you can think of it the same way.
        _socket.BeginReceive(_buffer.Array, _buffer.Offset, _buffer.Count, SocketFlags.None, ReceiveAsyncLoop, null);
    }
    catch (Exception ex)
    {
        // Socket error handling here.
    }
}
Accepting Multiple Connections
What you generally do is write a class that contains your socket etc. (as well as your async loop) and create one for each client. So for instance:
public class InboundConnection
{
    private Socket _socket;
    private ArraySegment<byte> _buffer;

    public InboundConnection(Socket clientSocket)
    {
        _socket = clientSocket;
        _buffer = new ArraySegment<byte>(new byte[4096], 0, 4096);
        StartReceive(); // Start the read async loop.
    }

    private void StartReceive() ...
    private void ReceiveAsyncLoop() ...
    private void OnDataReceived() ...
}
Each client connection should be tracked by your server class (so that you can disconnect them cleanly when the server shuts down, as well as search/look them up).
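A sketch of such tracking, assuming _listener is a Socket field that is already bound and listening:
private readonly List<InboundConnection> _connections = new List<InboundConnection>();

public void StartAccepting()
{
    _listener.BeginAccept(AcceptCallback, null);
}

private void AcceptCallback(IAsyncResult result)
{
    Socket clientSocket = _listener.EndAccept(result);
    lock (_connections)
    {
        _connections.Add(new InboundConnection(clientSocket));
    }
    _listener.BeginAccept(AcceptCallback, null); // accept the next client
}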
You should use asynchronous socket programming to achieve this. Take a look at the example provided by MSDN.
You should use an asynchronous method of reading the data; for example:
// Enter the listening loop.
while (true)
{
    Debug.WriteLine("Waiting for a connection... ");
    client = server.AcceptTcpClient();
    ThreadPool.QueueUserWorkItem(new WaitCallback(HandleTcp), client);
}

private void HandleTcp(object tcpClientObject)
{
    TcpClient client = (TcpClient)tcpClientObject;
    data = new List<byte>();
    // Get a stream object for reading and writing.
    using (NetworkStream stream = client.GetStream())
    {
        // Loop to receive all the data sent by the client.
        int length;
        while ((length = stream.Read(bytes, 0, bytes.Length)) != 0)
        {
            var copy = new byte[length];
            Array.Copy(bytes, 0, copy, 0, length);
            data.AddRange(copy);
        }
    }
    receivedQueue.Add(data);
}
Also, you should consider using an AutoResetEvent or ManualResetEvent to be notified when new data is added to the collection, so the thread that handles the data knows when data has been received; and if you are using .NET 4.0, you're better off using BlockingCollection<T> instead of a Queue.
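A rough sketch of the BlockingCollection<T> version (System.Collections.Concurrent, .NET 4.0+):
var receivedQueue = new BlockingCollection<List<byte>>();

// Receive thread: Add never needs a separate reset event.
receivedQueue.Add(data);

// Processing thread: GetConsumingEnumerable blocks until an item is available.
foreach (var message in receivedQueue.GetConsumingEnumerable())
{
    // process message
}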
What I usually do is use a thread pool with several threads.
Upon each new connection, I run the connection handling (in your case, everything you do in the using block) on one of the threads from the pool.
That way you achieve both performance, since you allow several simultaneously accepted connections, and you also limit the resources (threads, etc.) you allocate for handling incoming connections.
You have a nice example here
Good Luck
I'm developing a server application that asynchronously accepts TCP connections (BeginAccept/EndAccept) and receives data (BeginReceive/EndReceive). The protocol requires an ACK to be sent whenever the EOM character is found, before the client will send the next message. The accept and receive are working, but the sending app is not receiving the ACK (which is sent synchronously).
private void _receiveTransfer(IAsyncResult result)
{
    SocketState state = result.AsyncState as SocketState;
    int bytesReceived = state.Socket.EndReceive(result);
    if (bytesReceived == 0)
    {
        state.Socket.Close();
        return;
    }
    state.Offset += bytesReceived;
    state.Stream.Write(state.Buffer, 0, bytesReceived);
    if (state.Buffer[bytesReceived - 1] == 13)
    {
        // process message
        Messages.IMessage message = null;
        try
        {
            var value = state.Stream.ToArray();
            // do some work
            var completed = true;
            if (completed)
            {
                // send positive ACK
                var ackMessage = string.Format(ack, message.TimeStamp.ToString("yyyyMMddhhmm"), message.MessageType, message.Id, "AA", message.Id);
                var buffer = ASCIIEncoding.ASCII.GetBytes(ackMessage);
                int bytesSent = state.Socket.Send(buffer, 0, buffer.Length, SocketFlags.None);
            }
            else
            {
                // send rejected ACK
                var ackMessage = string.Format(ack, message.TimeStamp.ToString("yyyyMMddhhmm"), message.MessageType, message.Id, "AR", message.Id);
                state.Socket.Send(ASCIIEncoding.ASCII.GetBytes(ackMessage));
            }
        }
        catch (Exception e)
        {
            // log exception
            // send error ACK
            if (message != null)
            {
                var ackMessage = string.Format(ack, DateTime.Now.ToString("yyyyMMddhhmm"), message.MessageType, message.Id, "AE", message.Id);
                state.Socket.Send(ASCIIEncoding.ASCII.GetBytes(ackMessage));
            }
        }
    }
    state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length, SocketFlags.None, new AsyncCallback(_receiveTransfer), state);
}
The state.Socket.Send returns the correct number of bytes but the data isn't received until the socket is disposed.
Suggestions are appreciated.
You shouldn't do anything synchronous from async completion routines. Under load you can end up hijacking all the I/O completion threads from the thread pool and severely hurt performance, up to and including complete I/O deadlock. So don't send ACKs synchronously from the async callback.
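For example, the positive ACK could be sent with BeginSend instead; a sketch using the question's SocketState:
state.Socket.BeginSend(buffer, 0, buffer.Length, SocketFlags.None,
    ar =>
    {
        try
        {
            int bytesSent = ((Socket)ar.AsyncState).EndSend(ar);
            // optionally log bytesSent
        }
        catch (SocketException)
        {
            // log and drop the connection on a failed send
        }
    },
    state.Socket);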
Protocols and formats that use preambles are easier to manage than those that use terminators, i.e. write the length of the message in a fixed-size message header, as opposed to detecting a terminator (byte 13 in your case). Of course, this applies only if the protocol is under your control to start with.
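If the protocol were yours to define, a length preamble could look roughly like this (a hypothetical wire format: a 4-byte big-endian length header followed by the payload):
// Sender side: write the header, then the payload.
static void SendFrame(Socket socket, string text)
{
    byte[] payload = Encoding.ASCII.GetBytes(text);
    byte[] header = BitConverter.GetBytes(IPAddress.HostToNetworkOrder(payload.Length));
    socket.Send(header);
    socket.Send(payload);
}

// Receiver side: read the 4-byte header, then exactly 'length' payload bytes.
static string ReceiveFrame(Socket socket)
{
    int length = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(ReadExactly(socket, 4), 0));
    return Encoding.ASCII.GetString(ReadExactly(socket, length));
}

// Loop until exactly 'count' bytes have arrived; 0 from Receive means the peer closed.
static byte[] ReadExactly(Socket socket, int count)
{
    var buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (read == 0) throw new EndOfStreamException("Peer closed the connection.");
        offset += read;
    }
    return buffer;
}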
As for your question, you didn't specify whether the same code you posted is also on the client side.
How long are you giving it? The network stack can buffer, and that could delay transmission. From MSDN:
To increase network efficiency, the underlying system may delay transmission until a significant amount of outgoing data is collected. A successful completion of the Send method means that the underlying system has had room to buffer your data for a network send.
You might want to try flushing using the IOControl method.
Edit:
Actually, the IOControl flush will kill the buffer. You may want to check out the Two Generals Problem to see whether your protocol has some inherent issues.
Try setting the TCP_NODELAY socket option.
Have you set the NoDelay property on the socket to true? When set to false (the default), data is buffered for up to 200 milliseconds before it's sent. The reason is to reduce network traffic by limiting the number of packets that are sent. Setting NoDelay to true will force the data to be sent sooner.
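In code, either form works:
// Via the property:
socket.NoDelay = true;
// Or via the option API:
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);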