C#: Using Socket.Receive(bytes) takes about 3 minutes

I am using .NET 6.0, and recently int numBytes = client.Receive(bytes); has been taking about 3 minutes.
The Socket variable is called client.
This issue was not occurring 3 days ago.
The full code that I am using is:
string data = "";
byte[] bytes = new byte[2048];
client = httpServer.Accept();
// Read inbound connection data
while (true)
{
    int numBytes = client.Receive(bytes); // Taking about 3 minutes here
    data += Encoding.ASCII.GetString(bytes, 0, numBytes);
    if (data.IndexOf("\r\n") > -1 || data == "")
    {
        break;
    }
}
The timing is also not consistent. Sometimes (rarely) it is instant, and other times it can take anywhere from 3 minutes to an hour.
I have attempted the following:
Restarting my computer
Changing networks
Turning off the firewall
Attempting on a different computer
Attempting on a different computer with the firewall off
Using a wired and wireless connection
However none of these worked and instead resulted in the same issue.
What I expect to happen, and what used to happen, is that execution continues through the code normally instead of hanging on one line for a long time.

You could use the client.Poll() method to check if data is available to be read from the socket before calling client.Receive().
If client.Poll() returns false, it means that there is no data available to be read and you can handle that situation accordingly.
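A minimal sketch of that approach, based on the code above (the one-second poll interval is an arbitrary choice, not something from the original post):
// Poll before receiving so the thread is not stuck inside Receive indefinitely.
// Assumes 'client' is the Socket returned by httpServer.Accept() above.
string data = "";
byte[] bytes = new byte[2048];
while (true)
{
    // Wait up to 1 second (1,000,000 microseconds) for readable data.
    if (!client.Poll(1_000_000, SelectMode.SelectRead))
    {
        // No data arrived within the timeout; retry, log, or give up here.
        continue;
    }
    int numBytes = client.Receive(bytes);
    if (numBytes == 0)
    {
        break; // The remote side closed the connection.
    }
    data += Encoding.ASCII.GetString(bytes, 0, numBytes);
    if (data.IndexOf("\r\n") > -1)
    {
        break;
    }
}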

Related

Socket is not receiving all the bytes C# .NET [duplicate]

This question already has an answer here:
How can we find that all bytes receive from sockets?
(1 answer)
Closed 7 years ago.
I have a problem at work with sockets. I have a client that is supposed to send a screenshot to the server. The problem is that the server does not receive all the bytes of the array sent by the client; it is consistently 255 bytes short (I checked several times). As a result, on the server side I cannot convert the byte array back into an image.
Client sends data to the server:
byte[] bytesforSend = ConvertBitmapToByteArray(GetScreenImage());
client.Send(bytesforSend, bytesforSend.Length, 0);
Server receives data from the client:
int lenght = cl.socket.Receive(bytes);
Perhaps this is all very easy to solve, but I am working with sockets for the first time and I don't understand why this happens.
Let me paste here the code that you have written in a comment:
List<byte[]> recievingBytes = new List<byte[]>();
List<int> lenghts = new List<int>();
int lenght;
do
{
    lenght = cl.socket.Receive(bytes);
    recievingBytes.Add(bytes);
    lenghts.Add(lenght);
} while (lenght != 0);
The most likely problem here (I saw a similar issue in a commercial library that communicated with a camera) is that you are assuming that all the data will reach its destination at the same time, but this may not be true depending on the network conditions or on how the client is actually sending the data.
Assume, for example, that the client sends a 2048-byte block of data in four 512-byte TCP segments. The first three arrive immediately, but due to some networking issue the last segment is lost and needs to be retransmitted. In the meantime you have already executed the while (lenght != 0) check and ended the loop. After that, the last 512-byte piece arrives, but you have missed it.
What you need to do is replace while (lenght != 0) with something like while (IDontHaveAllTheDataThatIExpect && !timeout). This assumes, of course, that you either know beforehand how much data you will receive or that you can somehow detect the end of the data.
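As a rough sketch of that idea, assuming the client first sends the image length as a 4-byte prefix (the prefix is an assumption; it is not part of the original client code):
// Read a 4-byte length prefix, then keep calling Receive until that many bytes arrive.
// The client would send BitConverter.GetBytes(bytesforSend.Length) before the image data.
byte[] lengthPrefix = new byte[4];
int got = 0;
while (got < 4)
{
    int n = cl.socket.Receive(lengthPrefix, got, 4 - got, SocketFlags.None);
    if (n == 0) throw new SocketException(); // connection closed early
    got += n;
}
int expected = BitConverter.ToInt32(lengthPrefix, 0);

byte[] image = new byte[expected];
int received = 0;
while (received < expected)
{
    int n = cl.socket.Receive(image, received, expected - received, SocketFlags.None);
    if (n == 0) throw new SocketException(); // connection closed before all data arrived
    received += n;
}
// 'image' now holds the complete screenshot bytes.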

Receiving MSMQ Transactions As a Group

I'm currently having to send files over the size of 4MB through several servers using MSMQ. The files are initially sent in chunks, like so:
using (MessageQueueTransaction oTransaction = new MessageQueueTransaction())
{
    // Begin the transaction
    oTransaction.Begin();
    // Start reading the file
    using (FileStream oFile = File.OpenRead(PhysicalPath))
    {
        // Bytes read
        int iBytesRead;
        // Buffer for the file itself
        var bBuffer = new byte[iMaxChunkSize];
        // Read the file, a block at a time
        while ((iBytesRead = oFile.Read(bBuffer, 0, bBuffer.Length)) > 0)
        {
            // Get the right length
            byte[] bBody = new byte[iBytesRead];
            Array.Copy(bBuffer, bBody, iBytesRead);
            // New message
            System.Messaging.Message oMessage = new System.Messaging.Message();
            // Set the label
            oMessage.Label = "TEST";
            // Set the body
            oMessage.BodyStream = new MemoryStream(bBody);
            // Log
            iByteCount = iByteCount + bBody.Length;
            Log("Sending data (" + iByteCount + " bytes sent)", EventLogEntryType.Information);
            // Transactional?
            oQueue.Send(oMessage, oTransaction);
        }
    }
    // Commit
    oTransaction.Commit();
}
These messages are sent from Machine A to Machine B, and then forwarded to Machine C. However, I've noticed that the PeekCompleted event on Machine B is triggered before all messages are sent.
For example, a test run just now showed 8 messages sent, which were processed on Machine B in groups of 1, 1 and then 6.
I presume this is due to the transactional part ensuring the messages arrive in exactly the right order, but not guaranteeing that they are all collected at exactly the same time.
The worry I have is that when Machine B passes the messages to Machine C, these now count as 3 separate transactions, and I'm unsure as to whether the transactions themselves are delivered in the correct order (for example, 1 then 6 then 1).
My question is, is it possible to receive messages using PeekCompleted by transaction (meaning, all 8 messages are collected first), and pass them on so Machine C gets all 8 messages together? Even in a system where multiple transactions are being sent at the same time?
Or are the transactions themselves guaranteed to arrive in the correct order?
I think I missed this when looking at the topic:
https://msdn.microsoft.com/en-us/library/ms811055.aspx
These messages will either be sent together, in the order they were sent, or not at all. In addition, consecutive transactions initiated from the same machine to the same queue will arrive in the order they were committed relative to each other. [...]
So, no matter how diluted the transactions get, the order will never be affected.
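If you still want Machine B to forward a whole batch as one unit, a sketch along these lines may help; the queue paths and the 100 ms receive timeout are placeholders, not the original configuration:
// Receive whatever is currently in the local queue and re-send it to Machine C
// inside a single transaction, so the forwarded batch is committed as one unit.
using (var oTransaction = new MessageQueueTransaction())
{
    oTransaction.Begin();
    var inbound = new MessageQueue(@".\private$\inbound"); // placeholder path
    var outbound = new MessageQueue(@"FormatName:DIRECT=OS:machineC\private$\target"); // placeholder path
    while (true)
    {
        System.Messaging.Message oMessage;
        try
        {
            // Short timeout: stop once the local queue is (currently) empty.
            oMessage = inbound.Receive(TimeSpan.FromMilliseconds(100), oTransaction);
        }
        catch (MessageQueueException)
        {
            break; // timeout, nothing more to forward right now
        }
        outbound.Send(oMessage, oTransaction);
    }
    oTransaction.Commit();
}
This does not by itself guarantee that all 8 chunks have already arrived before forwarding starts, but as the quoted documentation says, their relative order is preserved regardless.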

C# / GTK# Serial Port Read Issue

So I'm trying to read real-time data from the serial port object in C# / Gtk#. I have a product which talks over RF to the computer, and every time it gets a command it sends back an ACK. If I use AccessPort and auto-send a command every 500 ms, I get my ACK. I've run AccessPort for hours and been able to match every single command to an ACK, so I know this is physically working.
I wrote a small program in C# / Gtk# that sends data out of the serial port every X ms and has a delegated thread which reads the serial port for any bytes that come back. My problem is that no matter how I write the serial reading method, it never actually captures all the bytes that I know are there.
So far this is the "lightest" code I have:
private void readSerial()
{
    byte readByte = 0x00;
    Gtk.Application.Invoke(delegate {
        try
        {
            readByte = (byte)serialPort.ReadByte();
            Console.WriteLine(readByte.ToString("X2"));
        }
        catch (System.ArgumentException sae)
        {
            Console.WriteLine(sae.Message);
        }
    });
}
I have assigned that method to a thread in the main function:
writeThread = new Thread (writeSerial);
readThread = new Thread (readSerial);
And I start it when a connect button is pressed, along with the writeThread. The writeThread is working fine as I can see the product execute the correct instruction every X ms ( currently I'm testing at 500ms). The ACK should arrive at the computer every X ms + 35 ms * module ID, so if my end product has a module id of 2 the response would be delayed by 70ms and hence the computer should see it at 570ms or X + 70ms.
Is there a better way to do this? Am I doing something boneheadedly wrong?
Some other code I've played with read 0x0E bytes from the serial port and stored them in a buffer; it also missed a lot of the bytes I know are coming back.
Can anyone offer some help? I do know the readSerial method is actually starting, as I do see a 0x00 pop out on the console, which is correct since 0x00 bytes are dispersed among the data I'm looking for.
Figured it out!
I'm not sure what the exact issue was, but when I removed the delegation and just used a while (true) loop inside that method it worked fine.
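For reference, a minimal sketch of what the read thread can look like without the Invoke wrapper (the TimeoutException handling is an assumption about how the port is configured):
// Dedicated read thread: loop forever and block in ReadByte, instead of wrapping
// a single read in Gtk.Application.Invoke. Only marshal to the UI thread when a
// widget actually needs updating.
private void readSerial()
{
    while (true)
    {
        try
        {
            int value = serialPort.ReadByte(); // blocks until a byte arrives
            if (value < 0)
                break; // end of stream
            Console.WriteLine(((byte)value).ToString("X2"));
        }
        catch (TimeoutException)
        {
            // No data within ReadTimeout; keep waiting.
        }
    }
}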

TcpListener: Detecting a client disconnect as opposed to a client not sending any data for a while

I was looking how to detect a 'client disconnect' when using a TcpListener.
All the answers seem to be similar to this one:
TcpListener: How can I detect a client disconnect?
Basically, read from the stream and if Read() returns 0 the client has disconnected.
But that assumes a client disconnects after every single stream of data it sends.
We operate in environments where the TCP connect/disconnect overhead is both slow and expensive.
We establish a connection and then we send a number of requests.
Pseudocode:
client.Connect();
client.GetStatus();
client.DoSomething();
client.DoSomethingElse();
client.AndSoOn();
client.Disconnect();
Each call between Connect and Disconnect() sends a stream of data to the server.
The server knows how to analyze and process the streams.
If I let the TcpListener read in a loop without ever disconnecting, it reads and handles all the messages, but after the client disconnects the server has no way of knowing that, and it will never release the client and accept new ones.
var read = client.GetStream().Read(buffer, 0, buffer.Length);
if (read > 0)
{
//Process
}
If I let the TcpListener drop the client when read == 0, it accepts the first stream of data only to drop the client immediately after.
Of course this means new clients can connect.
There is no artificial delay between the calls,
but in terms of computer time the time between two calls is 'huge' of course,
so there will always be a time when read == 0 even though that does not mean
the client has or should be disconnected.
var read = client.GetStream().Read(buffer, 0, buffer.Length);
if (read > 0)
{
//Process
}
else
{
break; //Always executed as soon as the first stream of data has been received
}
So I'm wondering... is there a better way to detect if the client has disconnected?
You could get the underlying socket using the NetworkStream.Socket property and use its Receive method for reading.
Unlike NetworkStream.Read, Socket.Receive will block until data has been received, and it will only return zero if the remote host shuts down the TCP connection.
UPDATE: #jrh's comment is correct that NetworkStream.Socket is a protected property and cannot be accessed in this context. In order to get the client Socket, you could use the TcpListener.AcceptSocket method which returns the Socket object corresponding to the newly established connection.
Eren's answer solved the problem for me. In case anybody else is facing the same issue
here's some 'sample' code using the Socket.Receive method:
private void AcceptClientAndProcess()
{
    try
    {
        client = server.Accept();
        client.ReceiveTimeout = 20000;
    }
    catch
    {
        return;
    }

    while (true)
    {
        byte[] buffer = new byte[client.ReceiveBufferSize];
        int read = 0;
        try
        {
            read = client.Receive(buffer);
        }
        catch
        {
            break;
        }
        if (read > 0)
        {
            //Handle data
        }
        else
        {
            break;
        }
    }

    if (client != null)
        client.Close(5000);
}
You call AcceptClientAndProcess() in a loop somewhere.
The following line:
read = client.Receive(buffer);
will block until either
Data is received, (read > 0) in which case you can handle it
The connection has been closed properly (read = 0)
The connection has been closed abruptly (An exception is thrown)
Either of the last two situations indicates that the client is no longer connected.
The try catch around the Socket.Accept() method is also required
as it may fail if the client connection is closed abruptly during the connect phase.
Note that I did specify a 20-second timeout for the read operation.
The documentation for NetworkStream.Read does not reflect this, but in my experience, 'NetworkStream.Read' blocks if the port is still open and no data is available, but returns 0 if the port has been closed.
I ran into this problem from the other side, in that NetworkStream.Read does not immediately return 0 if no data is currently available. You have to use NetworkStream.DataAvailable to find out if NetworkStream.Read can read data right now.
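For completeness, a small sketch of that check, assuming stream is the client.GetStream() instance from the question:
// Only call Read when data is known to be buffered; an empty stream here does
// not mean the client has disconnected.
if (stream.DataAvailable)
{
    int read = stream.Read(buffer, 0, buffer.Length);
    if (read > 0)
    {
        // Process the data
    }
}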

SerialPort.BytesToRead always evaluated to 0 even though it isn't

I am interfacing a scale to a computer through a serial port. Inside a loop, I keep reading until SerialPort.BytesToRead reaches 0. However, the loop exits even though BytesToRead is not actually 0. I can't post a screenshot as I am a new user, but stepping through in the debugger I can see that BytesToRead is in fact not 0.
This results in my data not being read entirely. I have tried different expressions such as _port.BytesToRead > 0, but the result is the same. Even assigning the value of BytesToRead to a variable gives 0. Without the loop, ReadExisting doesn't return all the data sent from the scale, so I don't really have a choice. ReadLine doesn't work either. So why is BytesToRead always 0?
private void PortDataReceived(object sender, SerialDataReceivedEventArgs e)
{
    var input = string.Empty;
    // Reads the data one by one until it reaches the end
    do
    {
        input += _port.ReadExisting();
    } while (_port.BytesToRead != 0);
    _scaleConfig = GenerateConfig(input);
    if (ObjectReceived != null)
        ObjectReceived(this, _scaleConfig);
}
My boss figured it out. Here's the code.
private void PortDataReceived2(object sender, SerialDataReceivedEventArgs e)
{
    var bytesToRead = _port.BytesToRead;
    _portDataReceived.Append(_port.ReadExisting());
    // Buffer wasn't full. We are at the end of the transmission.
    if (bytesToRead < _port.DataBits)
    {
        //Console.WriteLine(string.Format("Final Data received: {0}", _portDataReceived));
        IScalePropertiesBuilder scaleReading = null;
        scaleReading = GenerateConfig(_portDataReceived.ToString());
        _portDataReceived.Clear();
        if (ObjectReceived != null)
        {
            ObjectReceived(this, scaleReading);
        }
    }
}
Your original code was strange because I don't see any way for it to know whether the buffer is empty because your code has emptied it, or because the device hasn't sent anything yet. (This seems to be a fundamental problem in your design: you want to read until you get all the bytes, but only after you have read all the bytes can you figure out how many there should have been.)
Your later code is even stranger, because DataBits is the serial configuration for number of bits per byte (between 5 and 8 inclusive)--only in RS232 can a byte be less than 8 bits.
That said, I have been seeing very strange behavior around BytesToRead. It appears to me that it is almost completely unreliable and must get updated only after it would be useful. There is a note on MSDN about it being inconsistent, but it doesn't include the case of it being inexplicably 0, which I have seen as well.
Perhaps when you are running under the debugger things go slowly enough that there are in fact bytes to read, but when you run without the debugger, with no breakpoints slowing things down, the loop exits before the device on the serial port has had time to send the data. Most likely ReadExisting reads all the data currently on the port, and the loop then exits immediately because no new data has arrived yet. To alleviate the problem you could put a small wait (perhaps with Thread.Sleep()) between reading the data and checking BytesToRead for more. But really, you should be looking at the data you are reading in order to determine when you have in fact read everything you need for whatever it is you are trying to receive.
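As a sketch of that last suggestion, assuming the scale terminates each reading with a newline character (the terminator is an assumption about the device's protocol, not something from the question):
// Accumulate incoming characters and only process a reading once the terminator
// has been seen, instead of relying on BytesToRead.
private readonly StringBuilder _buffer = new StringBuilder();

private void PortDataReceived(object sender, SerialDataReceivedEventArgs e)
{
    _buffer.Append(_port.ReadExisting());
    string buffered = _buffer.ToString();
    int terminator = buffered.IndexOf('\n');
    if (terminator >= 0)
    {
        string reading = buffered.Substring(0, terminator);
        // Keep any characters that belong to the next reading.
        _buffer.Clear();
        _buffer.Append(buffered.Substring(terminator + 1));

        var scaleReading = GenerateConfig(reading);
        if (ObjectReceived != null)
            ObjectReceived(this, scaleReading);
    }
}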
