SslStream.Read returns 0 on connected socket - c#

There is an interface IStream that abstracts NetworkStream and SslStream. With NetworkStream everything is fine, but with SslStream I am having problems with the Read method. This is how I establish the SSL stream:
class SecureStream : SslStream, IStream
{
    TcpClient _tcpClient;

    public SecureStream(TcpClient tcpClient) : base(tcpClient.GetStream())
    {
        _tcpClient = tcpClient;
        var serverCertificate = new X509Certificate(@"C:\Cert.cer");
        AuthenticateAsServer(serverCertificate);
        ReadTimeout = -1;
        this.InnerStream.ReadTimeout = -1;
        _tcpClient.Client.ReceiveTimeout = -1;
    }
    ...
}
After successfully reading some portion of data (the HTTP header), the code has to wait several seconds on the Read method, but Read instantly returns 0. The SSL connection stays active and I can write a response back on another thread. What could cause Read to return immediately instead of waiting for data to appear in the stream?

It turns out this behavior was caused by establishing the SSL stream inside the constructor (thanks Luaan). A version that encapsulates the SslStream instead of inheriting from it works as expected.
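For reference, a minimal sketch of what the encapsulating version might look like, assuming IStream exposes Read/Write members (the interface is not shown in the question, so those members and the Authenticate method are illustrative): the SslStream is created and authenticated outside the constructor, and the wrapper simply delegates to it.

class SecureStream : IStream
{
    private readonly TcpClient _tcpClient;
    private SslStream _sslStream;

    public SecureStream(TcpClient tcpClient)
    {
        _tcpClient = tcpClient;
    }

    // The TLS handshake happens here, after construction, not inside the constructor.
    public void Authenticate(X509Certificate serverCertificate)
    {
        _sslStream = new SslStream(_tcpClient.GetStream(), leaveInnerStreamOpen: false);
        _sslStream.AuthenticateAsServer(serverCertificate);
    }

    public int Read(byte[] buffer, int offset, int count) => _sslStream.Read(buffer, offset, count);

    public void Write(byte[] buffer, int offset, int count) => _sslStream.Write(buffer, offset, count);
}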

Related

Receiving entire data from socket before reading contents

I'm currently trying to set up a server that accepts multiple clients and can receive and respond to messages.
The client and server use a common library at the core, which contains a Request class that gets serialized and sent from client to server and similar in reverse.
The server listens asynchronously to clients on each of their sockets and attempts to take the received data and deserialize it into the Request class.
Data is sent via a NetworkStream, using a BinaryFormatter to serialize directly onto the socket. The received data is then parsed using a NetworkStream on the other end.
I've tried using a MemoryStream to buffer the received data and then deserialize it, as shown below; however, this hasn't worked. Directly deserializing the NetworkStream didn't work either.
Searching around I haven't found much information that has worked for my use case.
This is the active code after the sockets are successfully connected:
On the request class, sending from the client:
public void SendData(Socket socket)
{
    IFormatter formatter = new BinaryFormatter();
    Stream stream = new NetworkStream(socket, false);
    formatter.Serialize(stream, this);
    stream.Close();
}
Server Code receiving this data:
public void Receive(Socket socket)
{
    try
    {
        ReceiveState state = new ReceiveState(socket);
        state.Stream.BeginRead(state.Buffer, 0, ReceiveState.BUFFER_SIZE, new AsyncCallback(DataReceived), state);
    }
    catch (Exception e)
    {
        Logger.LogError(e.ToString());
    }
}
private void DataReceived(IAsyncResult ar)
{
    ReceiveState state = (ReceiveState)ar.AsyncState;
    int bytesRead = state.Stream.EndRead(ar);

    // Resolve Message
    try
    {
        IFormatter formatter = new BinaryFormatter();
        MemoryStream memoryStream = new MemoryStream(state.Buffer, 0, bytesRead);
        Request request = (Request)formatter.Deserialize(memoryStream);
        Logger.Log("Request Received Successfully");
        ResolveRequest(request, state.Socket);
    }
    catch (Exception e)
    {
        Logger.LogError(e.ToString());
    }

    // Resume listening
    Receive(state.Socket);
}
public class ReceiveState
{
    public byte[] Buffer;
    public const int BUFFER_SIZE = 1024;
    public Socket Socket;
    public NetworkStream Stream;

    public ReceiveState(Socket socket)
    {
        Buffer = new byte[BUFFER_SIZE];
        Socket = socket;
        Stream = new NetworkStream(Socket, false);
    }
}
Currently, when BeginRead() is called on the NetworkStream I get a single byte of data, then the remaining data when the next BeginRead() is called.
e.g. The Serialized data should be: 00-01-00-00-00-FF-FF-FF-FF-01-...
I receive: 00 followed by 01-00-00-00-FF-FF-FF-FF-01-... which fails to deserialize.
I take it that the issue is that the DataReceived() method is called as soon as any data arrives, which is why only the single byte is read, and the remainder arrives before listening is resumed.
Is there a way to make sure each message is received in full before deserializing? I'd like to be able to deserialize the object as soon as the last byte is received.
TCP is a stream protocol, not a packet protocol. That means you are only guaranteed to get the same bytes in the same order (or a network failure); you are not guaranteed to get them in the same chunk configurations. So: you need to implement your own framing protocol. A frame is how you partition messages.
For binary messages, a simple framing protocol might be "length = 4 bytes little-endian int32, followed by {length} bytes of payload", in which case the correct decode is to buffer until you have 4 bytes, decode the length, buffer {length} bytes, then decode the payload. You need to write the code that buffers the correct amounts, and at every point you need to deal with over-reading, back-buffers, etc. It is a complex topic. Frankly, a lot of the nuances are solved by using the "pipelines" API (I have a multi-part discussion on that API here).
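To make the length-prefix idea concrete, here is a minimal blocking sketch over a plain Stream (the helper names are illustrative, not framework APIs, and error handling is reduced to the essentials):

// Length-prefixed framing: 4-byte little-endian length, followed by the payload.
static void WriteFrame(Stream stream, byte[] payload)
{
    byte[] lengthPrefix = BitConverter.GetBytes(payload.Length); // little-endian on typical platforms
    stream.Write(lengthPrefix, 0, 4);
    stream.Write(payload, 0, payload.Length);
}

static byte[] ReadFrame(Stream stream)
{
    byte[] lengthPrefix = ReadExact(stream, 4);
    int length = BitConverter.ToInt32(lengthPrefix, 0);
    return ReadExact(stream, length);
}

// Keeps reading until exactly 'count' bytes are buffered; Read may return fewer bytes per call.
static byte[] ReadExact(Stream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new EndOfStreamException("Connection closed mid-frame.");
        offset += read;
    }
    return buffer;
}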
However, additional guidance:
- never, ever use BinaryFormatter, especially for scenarios like this; it will hurt you, and it is not a good fit for most use cases (it also isn't a particularly good serializer); my recommendation would be something like protobuf (perhaps protobuf-net; a rough sketch follows below), but I'm arguably biased
- network code is subtle and complex, and RPC is largely a "solved" problem; consider trying tools like gRPC instead of rolling it yourself; this can be very easy
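As a hedged illustration of the protobuf-net suggestion, the round trip could look roughly like this using its built-in length-prefix helpers (the Name and Payload members are placeholders rather than fields from the question's Request class, and networkStream/request are assumed local variables):

using ProtoBuf;

[ProtoContract]
public class Request
{
    [ProtoMember(1)] public string Name { get; set; }     // placeholder field
    [ProtoMember(2)] public byte[] Payload { get; set; }  // placeholder field
}

// Sender: writes one length-prefixed message to the stream.
Serializer.SerializeWithLengthPrefix(networkStream, request, PrefixStyle.Base128);

// Receiver: reads exactly one length-prefixed message back.
Request received = Serializer.DeserializeWithLengthPrefix<Request>(networkStream, PrefixStyle.Base128);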

C# waiting for data from a NetworkStream

I'm working on an asynchronous TCP server class that uses a TcpListener object. I'm using the BeginAcceptTcpClient method of the TcpListener, and when the callback fires and I call EndAcceptTcpClient, I get a TcpClient object. In order to receive and send with this TcpClient I need to use the NetworkStream provided by the client object.
The way I've been using the NetworkStream feels wrong, though. I call BeginRead and a callback to eventually use EndRead, but this requires that I use a byte[] buffer. This has worked fine for me so far, but I have to wonder if there is a cleaner way of doing things. My current flow is as follows: receive data into a byte[] buffer, put the data into a MemoryStream, use a BinaryReader to get the data I'm passing, and then I can ultimately get what I need for my protocol.
Is there a more elegant way to get from NetworkStream to BinaryReader (and ultimately BinaryWriter, as I'm going to pass data back similarly to how I received it)? It feels wasteful that I must first dump it into a byte[], then into a MemoryStream (does that copy the data?), and only then be able to create a reader/writer object.
I've looked into simply creating a BinaryReader/BinaryWriter over the NetworkStream, but from what I've gathered, those objects require a stream with data previously available. It seems like I just need some way to be notified that the NetworkStream has data available without reading into a buffer. Perhaps I am mistaken and this is exactly how NetworkStreams are supposed to be used. It just seems like things could be a lot more streamlined if I didn't have to copy buffers from one stream into another.
EDIT:
Here is an example of the source in question:
public class Server
{
    TcpListener listener;
    const int maxBufferSize = 0xFFFF;
    byte[] clientBuffer = new byte[maxBufferSize];

    public Server(IPAddress address, int port)
    {
        listener = new TcpListener(address, port);
    }

    public void Start()
    {
        listener.Start();
        listener.BeginAcceptTcpClient(OnAccept, listener);
    }

    public void Stop()
    {
        listener.Stop();
    }

    private void OnAccept(IAsyncResult ar)
    {
        TcpListener listener = ar.AsyncState as TcpListener;
        TcpClient client = listener.EndAcceptTcpClient(ar);
        client.GetStream().BeginRead(clientBuffer, 0, maxBufferSize, OnReceive, client);
        listener.BeginAcceptTcpClient(OnAccept, listener);
    }

    private void OnReceive(IAsyncResult ar)
    {
        TcpClient client = ar.AsyncState as TcpClient;
        int len = client.GetStream().EndRead(ar);
        if (len == 0)
        {
            client.Close();
            return;
        }
        else
        {
            MemoryStream inStream = new MemoryStream(len == maxBufferSize ? clientBuffer : clientBuffer.Take(len).ToArray());
            MemoryStream outStream = DoStuff(inStream); // Data goes off to the app at this point and returns a response stream
            client.GetStream().Write(outStream.ToArray(), 0, (int)outStream.Length);
            inStream.Dispose();
            outStream.Dispose();
        }
    }
}
My question revolves around what happens in OnReceive. You'll see that I finish the read operation with EndRead, at which point I can now retrieve the data from the byte[] field of the Server class. My concern is that the time spent copying data from the NetworkStream into an array, and then into a MemoryStream is wasteful (at least it feels that way, perhaps C# handles this stuff efficiently?)
Thanks in advance.

c# service to listen to a port

I believe what I am looking to create is a service that listens to a specific port, and when data is sent to that port, it sends off that data to another script for processing.
For some reason, though, the service times out when I try to start it. My logs tell me that TcpClient client = server.AcceptTcpClient(); is where it is stopping (actually, it is getting stuck on 'Starting' in Services).
Since I have no experience with C#, making services, or working with servers in this manner, the code is pretty much just what I found online.
The OnStart method looks like this.
protected override void OnStart(string[] args)
{
    try
    {
        TcpListener server = null;

        // Set the TcpListener on port 1234.
        Int32 port = 1234;
        IPAddress localAddr = IPAddress.Parse("127.0.0.1");

        // TcpListener server = new TcpListener(port);
        server = new TcpListener(localAddr, port);

        // Start listening for client requests.
        server.Start();

        // Buffer for reading data
        Byte[] bytes = new Byte[256];
        String data = null;

        // Enter the listening loop.
        while (true)
        {
            // Perform a blocking call to accept requests.
            // You could also use server.AcceptSocket() here.
            TcpClient client = server.AcceptTcpClient();
            data = null;

            // Get a stream object for reading and writing
            NetworkStream stream = client.GetStream();
            int i;

            // Loop to receive all the data sent by the client.
            while ((i = stream.Read(bytes, 0, bytes.Length)) != 0)
            {
                // Translate data bytes to an ASCII string.
                data = System.Text.Encoding.ASCII.GetString(bytes, 0, i);

                // Process the data sent by the client.
                data = data.ToUpper();
                byte[] msg = System.Text.Encoding.ASCII.GetBytes(data);

                // Send back a response.
                stream.Write(msg, 0, msg.Length);
            }

            // Shutdown and end connection
            client.Close();
        }
    }
    catch (SocketException e)
    {
    }
    finally
    {
    }
}
As per MSDN, TcpListener.AcceptTcpClient blocks, so you're probably never returning from your service's OnStart method, which causes the service to never actually "start".
You might consider using another thread and returning from OnStart as soon as possible.
Cheers
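A minimal sketch of that idea (assuming the listening loop above is moved into a RunListener method; the names here are illustrative, not from the original post):

private Thread _listenerThread;

protected override void OnStart(string[] args)
{
    // Run the blocking accept/read loop on a background thread so OnStart
    // returns immediately and the service can report "Started".
    _listenerThread = new Thread(RunListener) { IsBackground = true };
    _listenerThread.Start();
}

private void RunListener()
{
    // The TcpListener loop from the question goes here:
    // server.Start(); while (true) { TcpClient client = server.AcceptTcpClient(); ... }
}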
As far as creating the Windows service itself goes, you should be able to use this link, even though it's dated. This companion link shows how to have the service install and uninstall itself. Finally, use this link to understand how to have your service run constantly and how to properly respond to start and stop commands.
To have your service interact with the socket, you'll want to modify the WorkerThreadFunc() from the last link. This is where you should start listening for and processing inbound socket connections.

Socket Shutdown: when should I use SocketShutdown.Both

I believe the shutdown sequence is as follows (as described here):
The MSDN documentation (remarks section) reads:
When using a connection-oriented Socket, always call the Shutdown method before closing the Socket. This ensures that all data is sent and received on the connected socket before it is closed.
This seems to imply that if I use Shutdown(SocketShutdown.Both), any data that has not yet been received may still be consumed. To test this:
I continuously send data to the client (via Send in a separate thread).
The client executed Shutdown(SocketShutdown.Both).
The BeginReceive callback on the server executes, however, EndReceive throws an exception: An existing connection was forcibly closed by the remote host. This means that I am unable to receive the 0 return value and in turn call Shutdown.
As requested, I've posted the server-side code below (it's wrapped in a Windows Form and was created just as an experiment). In my test scenario I did not see the CLOSE_WAIT state in TCPView as I normally did without sending the continuous data. So potentially I've done something wrong and I'm interpreting the consequences incorrectly. In another experiment:
Client connects to server.
Client executes Shutdown(SocketShutdown.Both).
Server receives shutdown acknowledgement and sends some data in response. Server also executes Shutdown.
Client receives data from server but the next BeginReceive is not allowed: A request to send or receive data was disallowed because the socket had already been shut down in that direction with a previous shutdown call
In this scenario, I was still expecting a 0 return value from EndReceive to Close the socket. Does this mean that I should use Shutdown(SocketShutdown.Send) instead? If so, when should one use Shutdown(SocketShutdown.Both)?
Code from first experiment:
private TcpListener SocketListener { get; set; }
private Socket ConnectedClient { get; set; }
private bool serverShutdownRequested;
private object shutdownLock = new object();

private struct SocketState
{
    public Socket socket;
    public byte[] bytes;
}

private void ProcessIncoming(IAsyncResult ar)
{
    var state = (SocketState)ar.AsyncState;

    // Exception thrown here when client executes Shutdown:
    var dataRead = state.socket.EndReceive(ar);
    if (dataRead > 0)
    {
        state.socket.BeginReceive(state.bytes, 0, state.bytes.Length, SocketFlags.None, ProcessIncoming, state);
    }
    else
    {
        lock (shutdownLock)
        {
            serverShutdownRequested = true;
            state.socket.Shutdown(SocketShutdown.Both);
            state.socket.Close();
            state.socket.Dispose();
        }
    }
}

private void Spam()
{
    int i = 0;
    while (true)
    {
        lock (shutdownLock)
        {
            if (!serverShutdownRequested)
            {
                try { ConnectedClient.Send(Encoding.Default.GetBytes(i.ToString())); }
                catch { break; }
                ++i;
            }
            else { break; }
        }
    }
}

private void Listen()
{
    while (true)
    {
        ConnectedClient = SocketListener.AcceptSocket();
        var data = new SocketState();
        data.bytes = new byte[1024];
        data.socket = ConnectedClient;
        ConnectedClient.BeginReceive(data.bytes, 0, data.bytes.Length, SocketFlags.None, ProcessIncoming, data);
        serverShutdownRequested = false;
        new Thread(Spam).Start();
    }
}

public ServerForm()
{
    InitializeComponent();
    var hostEntry = Dns.GetHostEntry("localhost");
    var endPoint = new IPEndPoint(hostEntry.AddressList[0], 11000);
    SocketListener = new TcpListener(endPoint);
    SocketListener.Start();
    new Thread(Listen).Start();
}
Shutdown(SocketShutdown.Both) disables both the send and receive operations on the current socket. Calling Shutdown(SocketShutdown.Both) is an actual disconnection of your client from the server. You can see this by checking the socket Connected property in your SocketState object on the server side: it will be false.
This happens because the Shutdown operation is not reversible, so after stopping both send and receive on the socket, there's no point in keeping it connected as it is isolated.
"Once the shutdown function is called to disable send, receive, or both, there is no method to re-enable send or receive for the existing socket connection."
(https://learn.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-shutdown)
As for your question:
I continuously send data to the client (via Send in a separate thread).
The client executed Shutdown(SocketShutdown.Both). --> this disconnects the client
The BeginReceive callback on the server executes, however, EndReceive throws an exception: An existing connection was forcibly closed by the remote host. This means that I am unable to receive the 0 return value and in turn call Shutdown.
EndReceive throws an exception because the client socket is not connected anymore.
To gracefully terminate the socket (a condensed code sketch follows after the reference link):
- the client socket calls Shutdown(SocketShutdown.Send) but should keep receiving
- on the server, EndReceive returns 0 bytes read (the client signals there is no more data from its side)
- the server:
  A) sends its last data
  B) calls Shutdown(SocketShutdown.Send)
  C) calls Close on the socket, optionally with a timeout to allow the data to be read from the client
- the client:
  A) reads the remaining data from the server and then receives 0 bytes (the server signals there is no more data from its side)
  B) calls Close on the socket
(https://learn.microsoft.com/it-it/windows/win32/winsock/graceful-shutdown-linger-options-and-socket-closure-2?redirectedfrom=MSDN)
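A condensed sketch of the client side of that sequence (socket names are illustrative):

// Client side: half-close the send direction, then drain until the server is done.
clientSocket.Shutdown(SocketShutdown.Send);   // "no more data from me", but keep receiving

var buffer = new byte[1024];
int read;
while ((read = clientSocket.Receive(buffer)) > 0)
{
    // Consume the server's final data here.
}

// Receive returned 0: the server has shut down its send side as well.
clientSocket.Close();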
Shutdown(SocketShutdown.Both) should be used when you don't want to send or receive anymore: either you want to abruptly close the connection, or you know the other party has already shut down with SocketShutdown.Receive. For example, if you have a time server that sends the current time to each client that connects to it, the server sends the time and calls Shutdown(SocketShutdown.Receive) as it is not expecting any more data from the client. The client, upon receiving the time data, should call Shutdown(SocketShutdown.Both) as it is not going to send or receive any further data.

TCP Socket/NetworkStream Unexpectedly Failing

This is specifically a question about what is going on in the background communications of a NetworkStream consuming raw data over TCP. The TcpClient connection is communicating directly with a hardware device on the network. Every so often, at random times, the NetworkStream appears to hiccup; the behavior is best described as observed in debug mode. I have a read timeout set on the stream, and when everything is working as expected, stepping over Stream.Read will sit there and wait the length of the timeout period for incoming data. When it is not, only a small portion of the data comes through, the TcpClient still shows as open and connected, but Stream.Read no longer waits for the timeout period. It immediately steps over to the next line, no data is received, and no data will ever come through until everything is disposed of and a new connection is established.
The question is: in this specific scenario, what state is the NetworkStream in at this point, what causes it, and why is the TcpClient connection still in a seemingly open and valid state? What is going on in the background? No errors are thrown or caught; is the stream silently failing in the background? What is the difference between the states of TcpClient and NetworkStream?
private TcpClient Client;
private NetworkStream Stream;

Client = new TcpClient();
var result = Client.BeginConnect(IPAddress, Port, null, null);
var success = result.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(2));
Client.EndConnect(result);
Stream = Client.GetStream();

try
{
    while (Client.Connected)
    {
        bool flag = true;
        StringBuilder sb = new StringBuilder();
        while (!IsCompleteRecord(sb.ToString()) && Client.Connected)
        {
            string response = "";
            byte[] data = new byte[512];
            Stream.ReadTimeout = 60000;
            try
            {
                int recv = Stream.Read(data, 0, data.Length);
                response = Encoding.ASCII.GetString(data, 0, recv);
            }
            catch (Exception ex)
            {
            }
            sb.Append(response);
        }
        string rec = sb.ToString();
        // send off data
        Stream.Flush();
    }
}
catch (Exception ex)
{
}
You are not properly testing for the peer closing its end of the connection.
From this link : https://msdn.microsoft.com/en-us/library/system.net.sockets.networkstream.read%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396
This method reads data into the buffer parameter and returns the number of bytes successfully read. If no data is available for reading, the Read method returns 0. The Read operation reads as much data as is available, up to the number of bytes specified by the size parameter. If the remote host shuts down the connection, and all available data has been received, the Read method completes immediately and returns zero bytes.
You are simply doing a stream.read, and not interpreting the fact that you might have received 0 bytes, which means that the peer closed its end of the connection. This is called a half close. It will not send to you anymore. At that point you should also close your end of the socket.
There is an example available here:
https://msdn.microsoft.com/en-us/library/bew39x2a(v=vs.110).aspx
// Read data from the remote device.
int bytesRead = client.EndReceive(ar);
if (bytesRead > 0)
{
    // There might be more data, so store the data received so far.
    state.sb.Append(Encoding.ASCII.GetString(state.buffer, 0, bytesRead));

    // Get the rest of the data.
    client.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0,
        new AsyncCallback(ReceiveCallback), state);
}
else
{
    // All the data has arrived; put it in response.
    if (state.sb.Length > 1)
    {
        response = state.sb.ToString();
    }
    // Signal that all bytes have been received.
    receiveDone.Set(); // <-- note that this event is set here
}
and in the main code block it is waiting for receiveDone:
receiveDone.WaitOne();
// Write the response to the console.
Console.WriteLine("Response received : {0}", response);
// Release the socket.
client.Shutdown(SocketShutdown.Both);
client.Close();
Conclusion: check for the reception of 0 bytes and close your end of the socket, because that is what the other end has done.
A timeout is handled with an exception. You are not really doing anything with a timeout because your catch block is empty; you would just continue trying to receive.
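Applied to the inner loop from the question, a rough sketch of those two checks might look like this (trimmed for brevity):

try
{
    int recv = Stream.Read(data, 0, data.Length);
    if (recv == 0)
    {
        // Read returned 0: the peer has closed its end, so close ours
        // rather than keep looping on a half-closed connection.
        Client.Close();
        return;
    }
    response = Encoding.ASCII.GetString(data, 0, recv);
}
catch (IOException)
{
    // An expired ReadTimeout surfaces here as an IOException; handle it
    // explicitly (log, retry, or tear down) instead of swallowing it.
}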
@Philip has already answered the question.
I just want to add that I recommend the use of SysInternals TcpView, which is basically a GUI for netstat and lets you easily check the status of all network connections of your computer.
Regarding detecting the connection state in your program, see here on SO.
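One common heuristic for probing that state (an illustrative sketch, not code from the linked answer):

// Returns true if the socket appears to have been closed by the peer:
// Poll(SelectRead) reporting readable while Available == 0 is the classic
// sign that the remote end has closed or reset the connection.
static bool IsRemoteDisconnected(Socket socket)
{
    bool readable = socket.Poll(1000 /* microseconds */, SelectMode.SelectRead);
    return readable && socket.Available == 0;
}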
