I'm using SslStream as a server to test my app, but I have issues reading from the stream. I'm using the following code:
while (true)
{
int read = sslStream.Read(buffer, 0, buffer.Length);
string bufferString = System.Text.Encoding.Default.GetString(buffer);
// Check for End?
if (bufferString.IndexOf("\n\r\n", System.StringComparison.Ordinal) != -1)
{
break;
}
}
The problem is that the first loop returns:
G\0\0\0\0\0
and the second run returns:
ET /whateverman
while the result should be
GET /whateverman
What is the issue and is there a better way to read from an SslStream?
The result is exactly as expected (and not directly related to SslStream) - you are converting bytes that you did not read.
If you want to manually read strings from a Stream, you must respect the result of the Read call, which tells you how many bytes were actually read from the stream.
string partialString = System.Text.Encoding.Default.GetString(buffer, 0, read);
And then don't forget to concatenate the strings.
Note: using some sort of reader (StreamReader/BinaryReader) may be better approach.
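A minimal sketch of that approach, accumulating only the bytes each Read call actually returned (the StringBuilder, the helper name, and the `\r\n\r\n` end-of-headers check are illustrative, not from the original code):

```csharp
using System;
using System.IO;
using System.Text;

class ReadUntilDelimiterExample
{
    // Accumulate only the bytes that Read actually returned; keep
    // reading until the end-of-headers marker appears.
    public static string ReadRequest(Stream stream)
    {
        var buffer = new byte[4096];
        var sb = new StringBuilder();
        int read;
        while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            // Convert only 'read' bytes, never the whole buffer.
            sb.Append(Encoding.ASCII.GetString(buffer, 0, read));
            if (sb.ToString().Contains("\r\n\r\n"))
                break;
        }
        return sb.ToString();
    }

    static void Main()
    {
        var ms = new MemoryStream(
            Encoding.ASCII.GetBytes("GET /whateverman HTTP/1.1\r\n\r\n"));
        Console.WriteLine(ReadRequest(ms));
    }
}
```

This way a "G" followed by nulls can never appear: only the bytes reported by Read are ever converted to text.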
public static void RemoteDesktopFunction()
{
Task.Run(async() =>
{
while (!ClientSession.noConnection && data != "§Close§")
{
byte[] frameBytes = ScreenShotToByteArray();
byte[] buffer = new byte[900];
using (MemoryStream byteStream = new MemoryStream())
{
await byteStream.WriteAsync(frameBytes, 0, frameBytes.Length);
byteStream.Seek(0, SeekOrigin.Begin);
for (int i = 0; i <= frameBytes.Length; i+= buffer.Length)
{
await byteStream.ReadAsync(buffer, i, buffer.Length);
await ClientSession.SendData(Encoding.UTF8.GetString(buffer).Trim('\0')+ "§RemoteDesktop§");
}
await ClientSession.SendData("§RemoteDesktopFrameDone§§RemoteDesktop§");
};
}
});
}
I'm trying to add a remote desktop function to my program by passing chunks of bytes that are read from the byte stream. frameBytes.Length is about 20,000 bytes in the debugger, and the chunk is 900 bytes. I expected it to read through and send chunks of data from the frameBytes array to a network stream. But it got stuck on:
await byteStream.ReadAsync(buffer, i, buffer.Length);
On the second pass through the loop...
What could cause the issue?
There is no obvious reason why this code should hang on ReadAsync. But an obvious problem is that you are not using the return value that tells you how many bytes were actually read. So the last 'chunk' will likely have stale bytes left over from the previous chunk at the end.
Note that there is really no reason to use async variants to read/write 900 bytes from/to a memory stream. Async is mostly meant to hide IO latency, and writing to memory is not an IO operation.
If the goal is to chunk a byte array you can just use the overload of GetString that takes a span.
var chunk = frameBytes.AsSpan().Slice(i, Math.Min(900, frameBytes.Length - i));
At least on any modern C# version; on older versions you can just use Buffer.BlockCopy. Either way, there is no need for a memory stream.
All this assumes your actual idea is sound. I know little about RDP, but it seems odd to convert an array of more or less random data to a string as if it were UTF8-encoded. Normally when sending binary data over a text protocol you would encode it as a base64 string, or possibly prefix it with a command that includes the length. I'm also not sure what the purpose of sending it in chunks is - what is the client supposed to do with 900 bytes of screenshot? But again, I know little about RDP.
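A sketch of that suggestion: slice the frame into fixed-size chunks and base64-encode each one so it can travel safely over a text protocol. The helper name and chunk size are illustrative, and the original SendData call is omitted so the snippet stands alone:

```csharp
using System;
using System.Collections.Generic;

class ChunkExample
{
    // Split a frame into fixed-size chunks, base64-encoding each so it
    // is safe to send over a text-based protocol.
    public static IEnumerable<string> ChunkAsBase64(byte[] frameBytes, int chunkSize)
    {
        for (int i = 0; i < frameBytes.Length; i += chunkSize)
        {
            int count = Math.Min(chunkSize, frameBytes.Length - i);
            yield return Convert.ToBase64String(frameBytes, i, count);
        }
    }

    static void Main()
    {
        var frame = new byte[2000];
        // A 2000-byte frame with 900-byte chunks yields three chunks.
        foreach (var chunk in ChunkAsBase64(frame, 900))
            Console.WriteLine(chunk.Length);
    }
}
```

The receiver would base64-decode each chunk and concatenate the bytes; no MemoryStream or async calls are needed.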
I'm writing a screen mirroring app in WPF. My original code sends a bitmap over TCP from a server to a client. The original code works fine, but closes and recreates the TCP connection every time it sends a frame. This results in 30 socket opens and closes per second, which I assume isn't the ideal way to do it.
So I tried to rewrite it to reuse the stream each time it sends the data, but the stream starts to spit out wrong data after a while.
public void SendStream(byte[] byteArray)
{
/*
_client = IsServer ? _server.AcceptTcpClient() : new TcpClient(IP.ToString(), Port);
using (var clientStream = _client.GetStream())
{
var comp = Compress(byteArray);
clientStream.Write(comp, 0, comp.Length);
}
*/
var comp = Compress(byteArray);
_stream.Write(BitConverter.GetBytes(comp.Length), 0, 4);
_stream.Write(comp, 0, comp.Length);
}
public byte[] ReceiveStream()
{
/*
_client = IsServer ? _server.AcceptTcpClient() : new TcpClient(IP.ToString(), Port);
var stream = _client.GetStream();
return Decompress(stream);
*/
var lengthByte = new byte[4];
_stream.Read(lengthByte, 0, 4);
var length = BitConverter.ToInt32(lengthByte, 0);
var data = new byte[length];
_stream.Read(data, 0, length);
return Decompress(new MemoryStream(data));
}
Compress and Decompress function are just wrappers around the built in DeflateStream.
I have checked that the sent comp.Length and received length are the same when the error happens.
Any ideas on what's going on? Thanks. It always throws an exception after at least a few frames, never on the first one (at least in what I've tried so far).
(It seems to happen faster when the bitmaps are larger in size i.e. when the compression algorithm doesn't do as much cause the screen is more complicated. Not 100% sure though)
Try doing the following:
int receivedBytesCount = _stream.Read(data, 0, length);
The length variable you pass to the Read method is a maximum. The Read method may actually read fewer bytes than length. It returns the number of bytes it actually read. This will happen when your data is fragmented into TCP packets.
You need to keep calling Read until you receive enough bytes and combine everything to get the full frame. You will need to adjust the offset in order to avoid overwriting the buffer. In the code you posted it is hardcoded to 0.
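A sketch of that loop, advancing the offset by each partial read until the full frame has arrived (the helper name is illustrative):

```csharp
using System;
using System.IO;

class ReadExactlyExample
{
    // Keep calling Read, advancing the offset by the number of bytes
    // each call returned, until 'count' bytes have arrived.
    public static void ReadExactly(Stream stream, byte[] buffer, int count)
    {
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException("Connection closed mid-frame.");
            offset += read;
        }
    }

    static void Main()
    {
        var ms = new MemoryStream(new byte[] { 1, 2, 3, 4, 5 });
        var buffer = new byte[5];
        ReadExactly(ms, buffer, buffer.Length);
        Console.WriteLine(buffer[4]); // 5
    }
}
```

In the posted code, this would replace the single `_stream.Read(data, 0, length)` call after the length prefix has been read.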
I need to create an application that communicates with existing software over TCP/IP, where both my application and the other one use the port number specified below.
private void frmScan_Load(object sender, EventArgs e)
{
clientSocket.Connect("100.100.100.30", 76545);
}
public void msg(string mesg)
{
textBox1.Text = textBox1.Text + Environment.NewLine + " >> " + mesg;
}
private void cmdSCANok_Click(object sender, EventArgs e)
{
msg("Client Started");
NetworkStream serverStream = clientSocket.GetStream();
byte[] outStream = Encoding.ASCII.GetBytes("PCK|SCAN|5025066840471");
serverStream.Write(outStream, 0, outStream.Length);
serverStream.Flush();
byte[] inStream = new byte[10025];
serverStream.Read(inStream, 0, (int)clientSocket.ReceiveBufferSize);
string returndata = Encoding.ASCII.GetString(inStream, 0, inStream.Length);
msg("Data from Server : " + returndata);
}
What happens is, the program I am communicating with has some in-built language where it will understand the code that I send, and it will return data according to the code that I have sent. So in the code above, I sent three bits of code: ("PCK|SCAN|5025066840471") which will find a specific item in the database. When it runs, I get an error on the line:
serverStream.Read(inStream, 0, (int)clientSocket.ReceiveBufferSize);
the error shows the following:
"Specified argument was out of the range of valid values.
Parameter name: size"
I followed the tutorial I saw on this website: http://csharp.net-informations.com/communications/csharp-client-socket.htm - But I did slightly different. So instead of putting
string returndata = Encoding.ASCII.GetString(inStream);
I wrote:
string returndata = Encoding.ASCII.GetString(inStream, 0, inStream.Length);
I am extremely confused about why I am getting these problems, and to be honest I don't understand much of what the code is doing. I just have a rough idea, but not enough to troubleshoot this. Can someone help, please?
Much appreciated!
PS: I am programming for Windows CE (portable device) on Visual Studio 2010.
Your code is a great example of how not to do TCP communication. I've seen this code copied over and over many times, and I'd be very happy to point you to a good tutorial on TCP - too bad I haven't seen one yet :)
Let me point out some errors first:
TCP doesn't guarantee you the packet arrives as one bunch of bytes. So (theoretically) the Write operation could result in a split, requiring two reads on the other side. Sending data without headers over TCP is a very bad idea - the receiving side has no idea how much it has to read. So you've got two options - either write the length of the whole bunch of data before the data itself, or use a control character to end the "packet"
The first point should also clarify that your reading is wrong as well. It may take more than a single read operation to read the whole "command", or a single read operation might give you two commands at once!
You're reading ReceiveBufferSize bytes into a 10025 long buffer. ReceiveBufferSize might be bigger than your buffer. Don't do that - read a max count of inStream.Length. If you were coding in C++, this would be a great example of a buffer overflow.
As you're converting the data to a string, you're expecting the whole buffer is full. That's most likely not the case. Instead, you have to store the return value of the read call - it tells you how many bytes were actually read. Otherwise, you're reading garbage, and basically having another buffer overflow.
So a much better (though still far from perfect) implementation would be like this:
NetworkStream serverStream = clientSocket.GetStream();
byte[] outStream = Encoding.ASCII.GetBytes("PCK|SCAN|5025066840471");
// It would be much nicer to send a terminator or data length first,
// but if your server doesn't expect that, you're out of luck.
serverStream.Write(outStream, 0, outStream.Length);
// When using magic numbers, at least use nice ones :)
byte[] inStream = new byte[4096];
// This will read at most inStream.Length bytes - it can be less, and it
// doesn't tell us how much data there is left for reading.
int bytesRead = serverStream.Read(inStream, 0, inStream.Length);
// Only convert bytesRead bytes - the rest is garbage
string returndata = Encoding.ASCII.GetString(inStream, 0, bytesRead);
Oh, and I have to recommend this essay on TCP protocol design.
It talks about many of the misconceptions about TCP, most importantly see the Message Framing part.
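Applying the message-framing advice to the write side might look like the sketch below. WriteFrame is an illustrative name, and the 4-byte length prefix (in the machine's native byte order, via BitConverter) is one possible convention - the server in the question would have to expect it:

```csharp
using System;
using System.IO;
using System.Text;

class FramedWriteExample
{
    // Prefix each message with its length as 4 bytes so the receiver
    // knows exactly how much to read. BitConverter uses the machine's
    // native byte order; both ends must agree on it.
    public static void WriteFrame(Stream stream, string message)
    {
        byte[] payload = Encoding.ASCII.GetBytes(message);
        stream.Write(BitConverter.GetBytes(payload.Length), 0, 4);
        stream.Write(payload, 0, payload.Length);
    }

    static void Main()
    {
        var ms = new MemoryStream();
        WriteFrame(ms, "PCK|SCAN|5025066840471");
        Console.WriteLine(ms.Length); // 4-byte prefix + 22-byte payload = 26
    }
}
```
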
The NetworkStream.Read method has the following check inside:
if (size < 0 || size > (buffer.Length - offset))
    throw new ArgumentOutOfRangeException("size");
In your case:
size = clientSocket.ReceiveBufferSize
offset = 0
buffer = inStream
The error you received means that clientSocket.ReceiveBufferSize > inStream.Length. In other words, you are asking Read to write more bytes into inStream than it can hold. Try the following code instead:
...
var count = serverStream.Read(inStream, 0, inStream.Length);
string returndata = Encoding.ASCII.GetString(inStream, 0, count);
See also an example here.
In my static class I have this:
static readonly ConcurrentDictionary<string, object> cache = new ConcurrentDictionary<string, object>();
In thread #1 I do this:
cache.TryAdd (stringFromSomewhere, newlyCreatedObject);
Console.WriteLine(stringFromSomewhere); // Outputs "abc"
Couple of seconds after Thread #1, in Thread #2:
if(cache.ContainsKey(stringFromSomewhereElse))
Console.WriteLine("Yes, it exists.");
else
Console.WriteLine("This did not exist: " + stringFromSomewhereElse);
It outputs "This did not exist: abc"
Then in Thread #3 couple of seconds after the Thread #2:
foreach(var kvp in cache)
{
Console.WriteLine("string: " + kvp.Key);
if(cache.ContainsKey(kvp.Key))
Console.WriteLine("Yes, it exists.");
else
Console.WriteLine("This did not exist: " + kvp.Key);
}
I get the output "string: abc" and "Yes, it exists."
In Thread #1 I create the string using MD5 like this:
Convert.ToBase64String (md5.ComputeHash(Encoding.ASCII.GetBytes(value)))
And in Thread #2 I get the string from a byte stream, where the string was written to using UTF8 Encoding, and then read to string from bytes using UTF8 Encoding again.
In thread #3 I get the string by looping through the ConcurrentDictionary.
What am I missing here? To the best of my knowledge, Thread #2 should behave just like Thread #3 does.
I have two possibilities, which both are in my opinion long shots:
Is this some kind of synchronizing problem I am not aware of?
Or is the string different somehow? When I output it to the console, it does not differ.
Anyone got any other ideas, or solutions?
EDIT:
I write the data to the stream like this:
string data = System.Web.HttpUtility.UrlEncode(theString);
byte[] buffer = Encoding.UTF8.GetBytes (data);
NetworkStream stream = client.GetStream(); // TcpClient client;
stream.Write (buffer, 0, buffer.Length);
Then I read the data from the stream like this:
string data = "";
NetworkStream stream = client.GetStream(); // TcpClient client;
byte[] bytes = new byte[4096];
do {
int i = stream.Read (bytes, 0, bytes.Length);
data += Encoding.UTF8.GetString (bytes, 0, i);
} while(stream.DataAvailable);
string theString = HttpUtility.UrlDecode(data);
If you're writing bytes to a network stream and then reading them back, you have to either precede the data with a value that says how many bytes follow, or you need an end-of-data marker. The way your code is written, it's quite possible that the receive code is only picking up part of the data.
Imagine, for example, that your key is "HelloWorld". Your send code sends the string out. The receive code sees the "Hello" part in the buffer, grabs it, checks to see if more data is available, and it's not because the network transport thread hasn't finished copying it to the buffer.
So you get only part of the string.
Another thing that can happen is that you read too much. That can happen if you write two strings out to the network stream and your reader reads both of them as if they're a single string.
To do it right, you should do either this:
int dataLength = buffer.Length;
byte[] lengthBuff = BitConverter.GetBytes(dataLength);
stream.Write(lengthBuff, 0, lengthBuff.Length); // write length
stream.Write(buffer, 0, buffer.Length); // write data
And then read it by first reading the length and then reading that many bytes from the stream.
Or, you can use an end-of-data marker:
stream.Write(buffer, 0, buffer.Length); // write data
buffer[0] = end_of_data_byte;
stream.Write(buffer, 0, 1); // write end of data
And your reader reads bytes until it gets the end of data marker.
What you use for an end of data marker is up to you. It should be something that won't be in the normal data stream.
Personally, I'd go for the length prefix.
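A sketch of the read side of the length-prefix approach (helper names are illustrative; the 4-byte prefix matches the write snippet above, and the read loop guards against partial reads):

```csharp
using System;
using System.IO;
using System.Text;

class FramedReadExample
{
    // Read the 4-byte length prefix, then loop until exactly that many
    // payload bytes have arrived; Read may return fewer per call.
    public static byte[] ReadFrame(Stream stream)
    {
        int length = BitConverter.ToInt32(ReadExact(stream, 4), 0);
        return ReadExact(stream, length);
    }

    public static byte[] ReadExact(Stream stream, int count)
    {
        var buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }

    static void Main()
    {
        var ms = new MemoryStream();
        byte[] payload = Encoding.UTF8.GetBytes("HelloWorld");
        ms.Write(BitConverter.GetBytes(payload.Length), 0, 4);
        ms.Write(payload, 0, payload.Length);
        ms.Position = 0;
        Console.WriteLine(Encoding.UTF8.GetString(ReadFrame(ms))); // HelloWorld
    }
}
```

With this framing, two strings written back to back can never be merged or split by the reader, which is exactly the failure mode described above.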
Difficult to understand the flow of your question - however, one thing I notice: do you Flush() after the network Write()? Otherwise you can get delays in the write, which in general can cause timing-related issues.
What's the most efficient way to read a stream into another stream? In this case, I'm trying to read data in a Filestream into a generic stream. I know I could do the following:
1. read line by line and write the data to the stream
2. read chunks of bytes and write to the stream
3. etc
I'm just trying to find the most efficient way.
Thanks
Stephen Toub discusses a stream pipeline in his MSDN .NET Matters column here. In the article he describes a CopyStream() method that copies from one stream to another. This sounds quite similar to what you're trying to do.
I rolled together a quick extension method (so VS 2008 w/ 3.5 only):
public static class StreamCopier
{
private const long DefaultStreamChunkSize = 0x1000;
public static void CopyTo(this Stream from, Stream to)
{
if (!from.CanRead || !to.CanWrite)
{
return;
}
var buffer = from.CanSeek
? new byte[from.Length]
: new byte[DefaultStreamChunkSize];
int read;
while ((read = from.Read(buffer, 0, buffer.Length)) > 0)
{
to.Write(buffer, 0, read);
}
}
}
It can be used thus:
using (var input = File.OpenRead(@"C:\wrnpc12.txt"))
using (var output = File.OpenWrite(@"C:\wrnpc12.bak"))
{
input.CopyTo(output);
}
You can also swap the logic around slightly and write a CopyFrom() method as well.
Reading a buffer of bytes and then writing it is fastest. Methods like ReadLine() need to look for line delimiters, which takes more time than just filling a buffer.
I assume by generic stream, you mean any other kind of stream, like a Memory Stream, etc.
If so, the most efficient way is to read chunks of bytes and write them to the recipient stream. The chunk size can be something like 512 bytes.
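A sketch of that chunked copy (the buffer size is a judgment call; 4096 is a common choice, and 512 as suggested above works too):

```csharp
using System;
using System.IO;

class ChunkedCopyExample
{
    // Read a chunk, write exactly what was read, and repeat until
    // Read returns 0 (end of stream).
    public static void CopyChunked(Stream source, Stream destination)
    {
        var buffer = new byte[4096];
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            destination.Write(buffer, 0, read);
    }

    static void Main()
    {
        var src = new MemoryStream(new byte[10000]);
        var dst = new MemoryStream();
        CopyChunked(src, dst);
        Console.WriteLine(dst.Length); // 10000
    }
}
```
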