ReadAllBytes method not finishing - network stream - C#

I have this ReadAllBytes method which is supposed to read a certain number of bytes from a NetworkStream:
private void ReadAllBytes(byte[] buffer, int length)
{
    if (buffer.Length != length)
        throw new Exception("ReadAllBytes: buffer length and requested length must match");

    Stream stm = m_client.GetStream();

    // Start reading
    int offset = 0;
    int remaining = length;
    while (remaining > 0)
    {
        int read = stm.Read(buffer, offset, remaining);
        if (read <= 0)
            throw new EndOfStreamException(
                String.Format("ReadAllBytes: End of stream reached with {0} bytes left to read", remaining));
        remaining -= read;
        offset += read;
    }
}
The thing is, it works most of the time, but sometimes when the program enters this function it never returns and seems to run forever. I found this out using logging. I use it like this:
public TcpClient m_client = new TcpClient();
m_client.Connect(IP,port);
ReadAllBytes(lengthArray, 2);
Can someone help me figure out what the problem is? I think it is related to timeouts, but how can I be sure, and how do I fix it?
Could it be related to how I am disposing of this class?
I also don't get any exceptions.

That method seems fine. How certain are you that the stream can deliver as many bytes as you think it should? Is the sender guaranteed to send the right amount?
The bug is that the data you expect is not there. A timeout does not fix that; it is a way to break out of the read so the app does not hang forever if the network goes down (and so that bugs are not catastrophic). Fix the bug, but also keep a timeout just in case.
Also, I don't see that bug occurring when I am using the existing C++ project.
This means the bug is in your C# code. It's not a C# thing; Read can't work any other way in principle. What do you think Read is supposed to do when there is no data? It can either wait or fail; there are no other choices.
The fact that you have called Read when no data is coming is your bug/problem/fault, not Read's fault.
On "should introduce a timeout": yes, but I'd prefer to get an exception instead of an endless wait.
Yes, a timeout is always required when it comes to network communication in case the network goes down (or in case you have a bug and don't want to hang forever). Always have a timeout.
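As a minimal sketch of that (reusing the m_client and lengthArray names from the question; the 5000 ms value is only an example), a receive timeout turns the endless wait into an exception you can handle:

m_client.Connect(IP, port);
m_client.ReceiveTimeout = 5000;   // milliseconds; example value only

try
{
    ReadAllBytes(lengthArray, 2);
}
catch (IOException ex)
{
    // A Read that exceeds the receive timeout throws an IOException wrapping
    // a SocketException (SocketError.TimedOut) instead of blocking forever.
    Console.WriteLine("ReadAllBytes failed or timed out: " + ex.Message);
}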

Related

UDP Client uses too much memory when sending multiple small parts of a huge array

I'm capturing the desktop with the Desktop Duplication API, encoding it in H.265 and sending it in parts via UDP (I can't use TCP because I need latency as low as possible). I'm doing all of this in C# and Visual Studio, and the memory usage goes through the roof as soon as I uncomment the udpClient.Send().
With it commented out, everything works great (including the frame capture, the splitting, etc.); as soon as I send, usage reaches the 2 GB mark in less than 10 seconds and just keeps ramping up until it crashes. Also, none of the data is lost, as my server receives everything, so my packet management seems to be fine.
int offset = 0;
int packetSize = 200;
for (int i = 0; i < clone.Length / packetSize; i++)
{
    int diff = clone.Length - offset;
    if (diff > packetSize)
        Array.ConstrainedCopy(clone, offset, subBuffer, 0, packetSize);
    else
        Array.ConstrainedCopy(clone, offset, subBuffer, 0, diff);

    // Broadcast the current chunk; a full packetSize is sent every time,
    // even for the final, possibly smaller, chunk.
    udpClient.Send(subBuffer, packetSize, "255.255.255.255", 9009);
    offset += packetSize;
}
I'm at this stage just experimenting with the splitting and everything, and as stated none of that part creates any issues (I know it could be made better). It's just the udpclient.Send() that makes everything go wrong. Any idea on what could cause this and how I could somehow force some memory management with the send?
After a lot of trial and error "in the dark", I managed to solve it by doing my own socket management.
It turns out the socket lets you issue as many Send calls as you like, but it doesn't cope well with that: it keeps trying to send even when it's "clogged", and starts leaking memory, if that makes sense. I ended up sending asynchronously and waiting until I knew the bytes had really been sent before attempting another send. Here's the code for it; it works perfectly, no memory leaks, 0 packet loss. I started from the simple socket implementation I took from here, then added the custom management part.
// Written from the send callback (a thread-pool thread) and read from the
// capture thread, so the flag is marked volatile.
private volatile bool isAvailable = true;

public void Send(byte[] data)
{
    if (!isAvailable)
        return;                 // previous send still in flight; skip this call

    isAvailable = false;
    _socket.BeginSend(data, 0, data.Length, SocketFlags.None, (ar) =>
    {
        _socket.EndSend(ar);
        isAvailable = true;
    }, null);                   // the original passed a 'state' object; none is needed here
}

SerialPort.BytesToRead always evaluated to 0 even though it isn't

I am interfacing a scale with a computer through a serial port. Inside a loop I check SerialPort.BytesToRead, waiting for it to reach 0. However, the loop exits even though BytesToRead is not 0. I can't post a screenshot as I am a new user, but stepping through in debug I can see that BytesToRead is in fact not 0.
This results in my data not being read entirely. I have tried different expressions such as _port.BytesToRead > 0, but the result is the same. Even assigning the value of BytesToRead to a variable gives 0. Without the loop, ReadExisting doesn't return all the data sent from the scale, so I don't really have a choice. ReadLine doesn't work either. So why is BytesToRead always 0?
private void PortDataReceived(object sender, SerialDataReceivedEventArgs e)
{
    var input = string.Empty;
    // Reads the data piece by piece until it reaches the end
    do
    {
        input += _port.ReadExisting();
    } while (_port.BytesToRead != 0);

    _scaleConfig = GenerateConfig(input);
    if (ObjectReceived != null)
        ObjectReceived(this, _scaleConfig);
}
My boss figured it out. Here's the code.
private void PortDataReceived2(object sender, SerialDataReceivedEventArgs e)
{
    var bytesToRead = _port.BytesToRead;
    _portDataReceived.Append(_port.ReadExisting());

    // Buffer wasn't full. We are at the end of the transmission.
    if (bytesToRead < _port.DataBits)
    {
        //Console.WriteLine(string.Format("Final Data received: {0}", _portDataReceived));
        IScalePropertiesBuilder scaleReading = null;
        scaleReading = GenerateConfig(_portDataReceived.ToString());
        _portDataReceived.Clear();

        if (ObjectReceived != null)
        {
            ObjectReceived(this, scaleReading);
        }
    }
}
Your original code was strange because I don't see any way for it to know whether the buffer is empty because your code has emptied it, or because the device hasn't sent anything yet. (This seems to be a fundamental problem in your design: you want to read until you have all the bytes, but only after you have read all the bytes can you figure out how many there should have been.)
Your later code is even stranger, because DataBits is the serial configuration for the number of bits per byte (between 5 and 8 inclusive)--only in RS-232 can a byte be fewer than 8 bits.
That said, I have been seeing very strange behaviour around BytesToRead. It appears to be almost completely unreliable, seemingly updated only after it would have been useful. There is a note on MSDN about it being inconsistent, but it doesn't cover the case of it being inexplicably 0, which I have seen as well.
Perhaps when you are running under the debugger things go slowly enough that there are in fact bytes to read, but without the debugger (and therefore without breakpoints) the loop exits before the device on the serial port has had time to send the data. Most likely ReadExisting reads everything currently on the port, and the loop then exits immediately because no new data has arrived yet. To alleviate that, you could put a small wait (perhaps with Thread.Sleep()) between reading the data and checking BytesToRead to see whether there is more. Really, though, you should be looking at the data you are reading to determine when you have in fact received everything you need.
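A rough sketch of that last suggestion, assuming the scale terminates each reading with a newline (purely an assumption; check the scale's manual) and reusing the _portDataReceived StringBuilder, GenerateConfig and ObjectReceived from the code above:

private void PortDataReceived(object sender, SerialDataReceivedEventArgs e)
{
    // Append whatever has arrived; a single DataReceived event may carry only
    // a fragment of the scale's message.
    _portDataReceived.Append(_port.ReadExisting());

    // Act only once the terminator the device uses (assumed here to be '\n')
    // has been seen, instead of guessing from BytesToRead.
    string buffered = _portDataReceived.ToString();
    int end = buffered.IndexOf('\n');
    if (end < 0)
        return;                                              // message not complete yet

    string message = buffered.Substring(0, end);
    _portDataReceived.Clear();
    _portDataReceived.Append(buffered.Substring(end + 1));   // keep any trailing partial message

    IScalePropertiesBuilder scaleReading = GenerateConfig(message);
    if (ObjectReceived != null)
        ObjectReceived(this, scaleReading);
}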

Problem with sockets and OutOfMemory error

I have a huge problem. I am trying to create an app that has two parts, a server side and a client side, which have to communicate somehow and exchange objects. I decided to use sockets because I'm not familiar with WCF, and I can test both parts on the same computer (just have them listen at 127.0.0.1).
Now, when I try to send a "custom" serializable object from the client, I get an "OutOfMemory" exception on the server side! I have read about sockets and ways to send/receive objects, and I have tried some code I found on the net, but no luck. I have no idea what's wrong with my code.
Here's my code:
This is the test class, defined in the code on both sides:
[Serializable]
class MyClass
{
    public string msg = "default";
}
Client-side sending code (works fine):
private void cmdSendData_Click(object sender, System.EventArgs e)
{
    try
    {
        MyClass test = new MyClass();
        NetworkStream ns = new NetworkStream(m_socWorker); // m_socWorker is the socket
        BinaryWriter bw = new BinaryWriter(ns);

        MemoryStream ms = new MemoryStream();
        BinaryFormatter bf = new BinaryFormatter();
        bf.Serialize(ms, test);

        bw.Write(ms.ToArray());
        MessageBox.Show("Length is: " + ms.ToArray().Length); // length is 152!
        ns.Close();
    }
    catch (System.Net.Sockets.SocketException se)
    {
        MessageBox.Show(se.Message);
    }
}
Server-side code (the one that cause problems):
public void OnDataReceived(IAsyncResult asyn)
{
    try
    {
        CSocketPacket theSockId = (CSocketPacket)asyn.AsyncState;
        NetworkStream ns = new NetworkStream(m_socWorker);

        byte[] buffer = new byte[1024];
        ns.Read(buffer, 0, buffer.Length);

        BinaryFormatter bin = new BinaryFormatter();
        MemoryStream mem = new MemoryStream(buffer.Length);
        mem.Write(buffer, 0, buffer.Length);
        mem.Seek(0, SeekOrigin.Begin);

        MyClass tst = (MyClass)bin.Deserialize(mem); // ERROR IS THROWN HERE!
        MessageBox.Show(tst.msg);

        theSockId.thisSocket.EndReceive(asyn);
        WaitForData(m_socWorker);
    }
    catch (ObjectDisposedException)
    {
        System.Diagnostics.Debugger.Log(0, "1", "\nOnDataReceived: Socket has been closed\n");
    }
    catch (SocketException se)
    {
        MessageBox.Show(se.Message);
    }
}
So, I get the exception when I try to deserialize, and I have no idea what's wrong.
I threatened my code, "if you continue causing problems I'll report you to the StackOverflow guys", so here I am :)
There is some very odd code there that:
assumes we read 1024 bytes without checking
copies the 1024 buffer
assumes the serialized data is 1024 bytes, no more no less
deserializes from that
IMO, that is where your error is: you should be reading the correct number of bytes from the stream (usually in a loop). Generally, you would loop, checking the return value from Read, until either you have read the amount of data you wanted or you hit EOF (return <= 0).
Or better, use serializers that do this for you. For example, protobuf-net has SerializeWithLengthPrefix and DeserializeWithLengthPrefix, which handle all the length issues for you.
Since you mention "custom" serialization - if you are implementing ISerializable it is also possible that the problem is in there - but we can't see that without code. Besides, the current buffer/stream is so broken (sorry, but it is) that I doubt it is getting that far anyway.
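To make the "read the correct number of bytes" point concrete, here is a rough sketch of manual length-prefix framing (BinaryFormatter is kept only because the question uses it; the helper names are invented): the sender writes a 4-byte length followed by the payload, and the receiver loops until it has exactly that many bytes before deserializing.

// Sender: serialize, then write "length, then payload".
static void SendObject(NetworkStream ns, object obj)
{
    var ms = new MemoryStream();
    new BinaryFormatter().Serialize(ms, obj);
    byte[] payload = ms.ToArray();

    ns.Write(BitConverter.GetBytes(payload.Length), 0, 4);  // 4-byte length prefix
    ns.Write(payload, 0, payload.Length);
}

// Receiver: read exactly the advertised number of bytes before deserializing.
static object ReceiveObject(NetworkStream ns)
{
    byte[] lengthBytes = ReadExactly(ns, 4);
    int length = BitConverter.ToInt32(lengthBytes, 0);

    byte[] payload = ReadExactly(ns, length);
    return new BinaryFormatter().Deserialize(new MemoryStream(payload));
}

// Loop until 'count' bytes have arrived, or fail if the stream ends early.
static byte[] ReadExactly(Stream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read <= 0)
            throw new EndOfStreamException("Stream ended before the full message arrived");
        offset += read;
    }
    return buffer;
}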
First, while I'm not certain if this is the cause of your issue directly, you have a serious issue with your reading logic.
You create a 1024 byte buffer and read into it without checking to see how much was actually read; if the incoming buffer only has 56 bytes, you'll only read 56 bytes (unless you use a blocking read on the socket itself, which could time out). Likewise, your buffer could have 2000 bytes in it, which means you'd have 976 bytes left in the buffer that won't get processed until you receive more data. That could be an entire object, and the client could be waiting on a response to it before it sends any more.
You then take that buffer and copy it again into a MemoryStream. Rather than doing this, just take the overload of the MemoryStream constructor that takes a buffer. You won't be copying the data that way.
You call EndReceive after you've processed the incoming data; while this may not actually cause an error, it's not in the spirit of the Begin/End old-style async pattern. You should call EndXXX at the beginning of your callback to get the result.
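Roughly, the callback shape that last point suggests looks like this (the dataBuffer field and ProcessReceived helper are hypothetical; thisSocket comes from the question's CSocketPacket):

public void OnDataReceived(IAsyncResult asyn)
{
    CSocketPacket state = (CSocketPacket)asyn.AsyncState;

    // Complete the pending receive first; the return value is the number of
    // bytes actually written into the receive buffer, which may be fewer than
    // the buffer's length.
    int bytesRead = state.thisSocket.EndReceive(asyn);

    // Hand exactly bytesRead bytes to whatever accumulates and parses messages,
    // then post the next receive.
    ProcessReceived(state.dataBuffer, bytesRead);   // hypothetical buffer field and helper
    WaitForData(state.thisSocket);
}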
I realize that this is not a direct answer to your question, but you really need to reconsider your decision not to use WCF.
I was in the same boat as you a couple of months ago; I had not used WCF before, and I hadn't bothered to look at how things work in it. It was a very large black box to me, and I had done socket-based communication on other platforms, so it was a known quantity. Looking back, my choice to take the plunge into WCF was the best decision I could have made. Once you've wrapped your head around some of the concepts (bindings, contracts, and how to use the various attributes), development of the service is simple and you don't have to spend your time writing plumbing.
NetTcpBinding provides a TCP-based binding that can support long-lived connections and sessions (which is how I'm using it), and even takes care of keep-alive messages to keep the connection open via Reliable Sessions. It's as simple as turning on a flag. If you need something more interoperable (meaning cross-platform), you can write your own binding that does this and keep your code as-is.
Look at some of the TCP WCF examples; it won't take you terribly long to get something up and running, and once you've reached that point, modification is as simple as adding a function to your interface, then a corresponding function on your service class.
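For a sense of how little plumbing that involves, here is a bare-bones self-hosting sketch (the contract, service and address are invented for illustration; ReliableSession is the flag mentioned above):

[ServiceContract]
public interface IMessageService
{
    [OperationContract]
    string Echo(string msg);
}

public class MessageService : IMessageService
{
    public string Echo(string msg) { return "echo: " + msg; }
}

public static class TcpHost
{
    public static void Run()
    {
        // Self-host over TCP; ReliableSession keeps a long-lived connection
        // alive with keep-alives, as described above.
        var binding = new NetTcpBinding();
        binding.ReliableSession.Enabled = true;

        var host = new ServiceHost(typeof(MessageService),
                                   new Uri("net.tcp://localhost:9000/messages"));
        host.AddServiceEndpoint(typeof(IMessageService), binding, "");
        host.Open();

        Console.WriteLine("Service listening; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}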

C# - TcpClient - Detecting end of stream?

I am trying to interface an ancient network camera to my computer and I am stuck at a very fundamental problem -- detecting the end of stream.
I am using TcpClient to communicate with the camera and I can actually see it transmitting the command data, no problems here.
List<int> incoming = new List<int>();
TcpClient clientSocket = new TcpClient();
clientSocket.Connect(txtHost.Text, Int32.Parse(txtPort.Text));
NetworkStream serverStream = clientSocket.GetStream();
serverStream.Flush();
byte[] command = System.Text.Encoding.ASCII.GetBytes("i640*480M");
serverStream.Write(command, 0, command.Length);
Reading back the response is where the problem begins though. I initially thought something simple like the following bit of code would have worked:
while (serverStream.DataAvailable)
{
    incoming.Add(serverStream.ReadByte());
}
But it didn't work, so I had a go at another version, this time utilising the return value of ReadByte(). The documentation states:
Reads a byte from the stream and advances the position within the stream by one byte, or returns -1 if at the end of the stream.
so I thought I could implement something along the lines of:
Boolean run = true;
int rec;
while (run)
{
    rec = serverStream.ReadByte();
    if (rec == -1)
    {
        run = false;
        //b = (byte)'X';
    }
    else
    {
        incoming.Add(rec);
    }
}
Nope, still doesn't work. I can actually see data coming in, and after a certain point (which is not always the same, otherwise I could simply read that many bytes every time) I start getting 0 as the value for the rest of the elements, and it doesn't halt until I manually stop execution.
So my question is, am I missing something fundamental here? How can I detect the end of the stream?
Many thanks,
H.
What you're missing is how you're thinking of a TCP data stream. It is an open connection, like an open phone line: someone on the other end may or may not be talking (DataAvailable), and just because they paused to take a breath (DataAvailable == false) it doesn't mean they're actually DONE with their current statement. A moment later they could start talking again (DataAvailable == true).
You need some kind of defined rules for the communication protocol ABOVE TCP, which is really just a transport layer. For instance, perhaps the camera sends a special character sequence when its current image transmission is complete, so you need to examine every character sent, determine whether that sequence has arrived, and then act appropriately.
Well, you can't exactly detect EOS on a network connection (unless the other party drops the connection). Usually the protocol itself contains something to signal that the message is complete (sometimes a newline, for example). So you read the stream into a buffer and extract complete messages by applying those strategies.
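For instance, if the camera ends each image with a known byte sequence (purely an assumption here; check the camera's protocol documentation), the read side looks roughly like this:

// Sketch: read until a protocol-defined terminator is seen, not until
// DataAvailable drops or ReadByte returns -1. The terminator bytes must be
// whatever the camera actually sends.
static byte[] ReadUntilTerminator(NetworkStream stream, byte[] terminator)
{
    var received = new List<byte>();
    var buffer = new byte[4096];

    while (true)
    {
        int read = stream.Read(buffer, 0, buffer.Length);   // blocks until data arrives or the peer closes
        if (read <= 0)
            throw new EndOfStreamException("Connection closed before the terminator arrived");

        for (int i = 0; i < read; i++)
            received.Add(buffer[i]);

        if (EndsWith(received, terminator))
            return received.ToArray();
    }
}

// True if the accumulated data ends with the given byte sequence.
static bool EndsWith(List<byte> data, byte[] suffix)
{
    if (data.Count < suffix.Length)
        return false;
    for (int i = 0; i < suffix.Length; i++)
        if (data[data.Count - suffix.Length + i] != suffix[i])
            return false;
    return true;
}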

C# Filestream not blocking until write/read operation complete

I'm trying to write a class that will copy a file from one location to another and report progress. The problem that I'm having is that when the application is run, the progress will shoot from 0 to 100% instantly, but the file is still copying in the background.
public void Copy(string sourceFile, string destinationFile)
{
    _stopWatch.Start();
    _sourceStream = new FileStream(srcName, FileMode.Open);
    _destinationStream = new FileStream(destName, FileMode.CreateNew);
    read();
    // On a 500 MB file, execution will reach here in about a second.
}
private void read()
{
    int i = _sourceStream.Read(_buffer, 0, bufferSize);
    _completedBytes += i;
    if (i != 0)
    {
        _destinationStream.Write(_buffer, 0, i);
        TriggerProgressUpdate();
        read();
    }
}
private void TriggerProgressUpdate()
{
    if (OnCopyProgress != null)
    {
        CopyProgressEventArgs arg = new CopyProgressEventArgs();
        arg.CompleteBytes = _completedBytes;
        if (_totalBytes == 0)
            _totalBytes = new FileInfo(srcName).Length;
        arg.TotalBytes = _totalBytes;
        OnCopyProgress(this, arg);
    }
}
What seems to be happening is that FileStream is merely queuing the operations in the OS, instead of blocking until the read or write is complete.
Is there any way to disable this functionality without causing a huge performance loss?
PS: I am using test source and destination variables; that's why they don't match the arguments.
Thanks
Craig
I don't think it can be queuing the read operations... after all, you've got a byte array, it will have some data in after the Read call - that data had better be correct. It's probably only the write operations which are being buffered.
You could try calling Flush on the output stream periodically... I don't know quite how far the Flush will go in terms of the various levels of caching, but it may well wait until the data has actually been written. EDIT: If you know it's a FileStream, you can call Flush(true) which will wait until the data has actually been written to disk.
Note that you shouldn't do this too often, or performance will suffer significantly. You'll need to balance the granularity of progress accuracy with the performance penalty for taking more control instead of letting the OS optimize the disk access.
I'm concerned about your use of recursion here - on a very large file you may well blow up with a stack overflow for no good reason. (The CLR can sometimes optimize tail-recursive methods, but not always). I suggest you use a loop instead. That would also be more readable, IMO:
public void Copy()
{
    int bytesRead;
    while ((bytesRead = _sourceStream.Read(_buffer, 0, _buffer.Length)) > 0)
    {
        _destinationStream.Write(_buffer, 0, bytesRead);
        _completedBytes += bytesRead;
        TriggerProgressUpdate();
        if (someAppropriateCondition)
        {
            _destinationStream.Flush();
        }
    }
}
I hope you're disposing of the streams somewhere, by the way. Personally I try to avoid having disposable member variables if at all possible. Is there any reason you can't just use local variables in a using statement?
After investigating, I found that passing FileOptions.WriteThrough to the FileStream constructor disables write caching, which makes my progress report correctly. It does, however, take a performance hit: the copy takes 13 seconds in Windows and 20 seconds in my application. I'm going to try to optimise the code and adjust the buffer size to see if I can speed things up a bit.
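For reference, that option goes through the FileStream constructor overload that takes FileOptions; a sketch using the destination stream from the code above (the 4096 buffer size is just an example value):

// Open the destination with write-through so Write calls reflect actual disk
// progress rather than cache fills.
_destinationStream = new FileStream(destName,
                                    FileMode.CreateNew,
                                    FileAccess.Write,
                                    FileShare.None,
                                    4096,
                                    FileOptions.WriteThrough);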
