I'm totally confused and stunned by my problem. I wrote a multi-client server using sockets.
I created a first GUI that receives clients and displays them in a grid view; the grid view's click event opens a new GUI with the socket between the clicked client and the server. So everything about the connection between the server and the clients goes well and fast.
But my problems are:
sending information between them, such as the full process list: sometimes it shows the full result and sometimes less.
file management: for example, when the server requests all folders/files in a directory, sometimes it shows them all and sometimes fewer.
But commands such as opening a window, opening a URL, or sending a message work perfectly and instantly.
I am so confused right now about what the problem is.
Notice 1: I used the external IP for the connection between server and client.
Notice 2: the internet connection is fine (definitely not slow).
I tried different buffer sizes, but what really confuses me is that sometimes the result comes through complete with a specific buffer size and sometimes it doesn't with the same buffer size.
Thank you for your time !
I would also recommend you use Wireshark so that you can see the full information and the transactions that are really going on in the network.
This will help you rule out where the issue is coming from.
To determine the packet size, check the length field Wireshark reports for each captured frame.
Here I am not talking about how the TCP handshake works, as I assume it is working. However, do note that if you see something like RST in the capture, then somebody issued a reset, implying that some error occurred in the transmission (something due to a checksum, for example). That is generally only a concern for someone working directly on the TCP/IP protocol, though.
This should help you confirm that you sent and received the correct length.
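On the application side, the usual cause of "sometimes full, sometimes less" results over TCP is reading a whole message with a single Read call: TCP is a stream, so one Read may return only part of what was sent, regardless of the buffer size. A minimal sketch of length-prefixed framing that loops until the whole message has arrived (the 4-byte prefix and the helper names are my assumptions for illustration, not the asker's code):

// Assumes: using System; using System.IO; using System.Net.Sockets;
// Hedged sketch: each message is preceded by a 4-byte little-endian length prefix.
static byte[] ReadMessage(NetworkStream stream)
{
    byte[] lengthBytes = ReadExactly(stream, 4);
    int length = BitConverter.ToInt32(lengthBytes, 0);
    return ReadExactly(stream, length);
}

// Loops until 'count' bytes have arrived; a single Read may return fewer.
static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new IOException("Connection closed before the full message arrived.");
        offset += read;
    }
    return buffer;
}

The sender writes the 4-byte length followed by the payload, so the receiver always knows exactly how many bytes to wait for.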
To begin with, why I've written a Telnet client is not important. It's my job and I have to do it. So, on to my question...
My Telnet client is simple and allows for both interactive communication (used by a human) as well as automated communication (used by a program). However, in order for my client to know when the server is done sending data so it can give control back to the user, I've had to hard code all of the possible command prompts for a given server into the client application. For instance, I create a list of prompts and then pass this list to the client so it can watch for these in the data stream from the server. When it sees that a data stream ends with one, it then gives control back to the user, who will enter a response or a command and send it. The client then watches again for any of these prompts and so on. The list could look like the following for a Telnet server running on CentOS:
private static void SetTerminalPrompts()
{
    prompts.Add("login: ");
    prompts.Add("Password: ");
    prompts.Add("]$ ");
    prompts.Add("]# ");
}
This would be used like:
response = telnetClient.ReceiveUntilPrompt(prompts);
And public string ReceiveUntilPrompt(List<string> prompts) simply reads the data stream watching for a prompt from the list.
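For illustration only, here is a rough sketch of how such a ReceiveUntilPrompt could be implemented over a NetworkStream; the stream field, encoding, and buffer size are my assumptions, not necessarily how the actual client works:

// Assumes: using System.Collections.Generic; using System.Linq; using System.Net.Sockets; using System.Text;
// 'stream' is an already-connected NetworkStream field on the client (assumption).
public string ReceiveUntilPrompt(List<string> prompts)
{
    var received = new StringBuilder();
    var buffer = new byte[1024];
    while (true)
    {
        int read = stream.Read(buffer, 0, buffer.Length);
        if (read == 0)
            break; // server closed the connection
        received.Append(Encoding.ASCII.GetString(buffer, 0, read));
        string text = received.ToString();
        // Hand control back as soon as the data so far ends with a known prompt.
        if (prompts.Any(p => text.EndsWith(p)))
            break;
    }
    return received.ToString();
}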
The problem is watching for these prompts. In a general case, the Telnet client will not know the server to which it is connecting. And it seems silly to attempt to create a complete list of possible prompts by guessing all servers to which the client may ever connect (even though I've found several examples and posts that say just that).
What I would like to know is how do the Telnet clients that come with Windows or Linux know when to give control back to the user? How do they know when the Telnet server is done sending data?
I have one application running on a specific port. I don't have access to this application, but I know what it does. This application listens on one specific port and processes the data coming in on that port.
Now I have been assigned the task of logging all the packets received on that port with all the data details. I have used Wireshark and can apply a filter to check the data coming in on that specific port. I am stuck on creating this kind of sniffing program so I can get all the data packet details. I have searched Stack Overflow and come up with:
Code project
Stack overflow
Now I have evaluated the sample and can see that it gives me packets, but it listens on all ports of the system, not on the specific port. Can someone help me achieve my solution? Basically, if I listen on all ports, then there are lots of apps/programs running on the server, so it may become a bottleneck. Thank you all.
In the MJsniffer.MJsnifferForm class you have the ParseData method. As a first step it converts the received bytes into an IPHeader object, and then there is a big switch-case on the ProtocolType field. Inside that switch a TCPHeader or UDPHeader is created and added to the TreeView on the form; there you can filter packets by the SourcePort/DestinationPort fields of tcpHeader/udpHeader.
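As a rough illustration (the property names are taken from the sample as described above and not verified here, so treat this as a sketch), the filter inside ParseData could look something like:

// Hedged sketch: assumes tcpHeader exposes SourcePort/DestinationPort as strings,
// as in the MJsniffer sample; adapt to the actual types in your copy.
const string monitoredPort = "8080"; // hypothetical port you want to log

if (tcpHeader.DestinationPort == monitoredPort || tcpHeader.SourcePort == monitoredPort)
{
    // Only packets for the monitored port are added to the tree / written to the log.
    // LogPacket is a placeholder for however you record the packet details.
    LogPacket(tcpHeader);
}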
Also, if you are already familiar with Wireshark, then you can easily switch to the Pcap.NET wrapper (they use the same packet capturing library, WinPcap).
I have 2 sets of code for you to look at; both are available at PasteBin here:
First is my c# Socket server: http://pastebin.com/wvT4f19m
Second is my code within my AS3 application: http://pastebin.com/bKvabFSP
In the code, what I am trying to do is a simple send/receive to see what happens. If I open my application in 2 instances, the C# socket server registers that they exist and all is fine. If I close one of my instances, the C# server still thinks that the user exists and the socket isn't closed.
My code is based on the example at: http://msdn.microsoft.com/en-us/library/fx6588te.aspx
In the MS example, the following lines are added to the SendCallBack() function:
handler.Shutdown(SocketShutdown.Both);
handler.Close();
These definitely close the sockets, something I do not want to happen.
I am new at socket programming and it has taken me a fair amount of time to play with the MS example to get it working roughly how I need it. The only problem is the acknowledgement of user disconnects so that I can remove the user from the Clients list that I have set up in the server. Also, when disconnects are acknowledged, I can inform other clients.
thanks all!
Upon each attempt to send data to the user, I do a quick check for a successful transmission/poll the user, and if it fails, the user is removed from my server.
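A common way to do that kind of check is the Socket.Poll idiom (a sketch; the original post does not show the exact code used):

// Poll(SelectRead) returning true while Available == 0 means the peer closed the connection.
static bool IsConnected(Socket socket)
{
    try
    {
        bool readable = socket.Poll(1000 /* microseconds */, SelectMode.SelectRead);
        bool noData = socket.Available == 0;
        return !(readable && noData);
    }
    catch (SocketException)
    {
        return false; // any socket error also counts as disconnected
    }
}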
Closing the socket won't affect your listener, it will only affect the current connection. Why do you say this is not what you want?
It sounds like this is exactly what you want.
I'm writing a server application for an iPhone application I'm designing. The iPhone app is written in C# (MonoTouch) and the server is written in C# too (.NET 4.0).
I'm using asynchronous sockets for the network layer. The server allows two or more iPhones ("devices") to connect to each other and send data bi-directionally.
Depending on the incoming message, the server either processes the message itself, or relays the data through to the other device(s) in the same group as the sending device. It can make this decision by decoding the header of the packet first and deciding what type of packet it is.
This is done by framing the stream so that the first 8 bytes are two integers: the length of the header and the length of the payload (which can be much larger than the header).
The server reads (asynchronously) the first 8 bytes from the socket so it has the lengths of the two sections. It then reads again, up to the total length of the header section.
It then deserializes the header, and based on the information within, can see if the remaining data (payload) should be forwarded onto another device, or is something that the server itself needs to work with.
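In rough terms, the header read described above looks something like this (a synchronous sketch that ignores the async plumbing; ReadExactly is a hypothetical helper that loops until the requested number of bytes has arrived):

// Framing described above: the first 8 bytes are two Int32 values,
// the header length followed by the payload length.
byte[] prefix = ReadExactly(networkStream, 8);          // hypothetical helper
int headerLength  = BitConverter.ToInt32(prefix, 0);
int payloadLength = BitConverter.ToInt32(prefix, 4);

byte[] headerBytes = ReadExactly(networkStream, headerLength);
// Deserialize headerBytes, then decide whether the payload is handled locally
// or relayed on to the recipient device in chunks.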
If it needs to be forwarded to another device, then the next step is to read the data coming into the socket in chunks of, say, 1024 bytes and write these directly, using an async send via another socket that is connected to the recipient device.
This reduces the memory requirements of the server, as I'm not loading the entire packet into a buffer and then re-sending it down the wire to the recipient.
However, because of the nature of async sockets, I am not guaranteed to receive the entire payload in one read, so I have to keep reading until I receive all the bytes. In the case of relaying to the final destination, this means that I'm calling BeginSend() for each chunk of bytes I receive from the sender and forwarding that chunk onto the recipient, one chunk at a time.
The issue with this is that because I am using async sockets, another thread may be doing a similar operation with the same recipient (and therefore the same destination socket), so it is likely that the chunks coming from both threads will get mixed up and corrupt all the data going to that recipient.
For example: if the first thread sends a chunk and is waiting for the next chunk from the sender (so it can relay it onwards), the second thread could send one of its chunks of data and corrupt the first thread's (and the second thread's, for that matter) data.
As I write this, I'm wondering: is it as simple as just locking the socket object? Would this be the correct option, or could it cause other issues (e.g. issues with receiving data through the locked socket that's being sent BACK from the remote device)?
Thanks in advance!
I was facing a similar scenario a while back. I don't have the complete solution anymore, but here's pretty much what I did:
I didn't use sync sockets; I decided to explore the async sockets in C# - fun ride
I don't allow multiple threads to share a single resource unless I really have to
My "packets" contained information about the size, index and total packet count for a message
My packet's 1st byte was unique to signify that it's the start of a message; I used 0xAA
My packet's last 2 bytes were the result of a CRC-CCITT checksum (ushort)
The objects that did the receiving bit contained a buffer with all received bytes. From that buffer I extracted "complete" messages once the size was OK and the checksum matched
The only "locking" I needed to do was on the temp buffer, so I could safely analyze its contents between write/read operations
Hope that helps a bit - a rough sketch of the framing follows
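This is under my assumptions about field sizes (index/total as single bytes, size as a ushort); I no longer have the original code:

// [0xAA][index][total][size (2 bytes)][payload...][CRC-CCITT (2 bytes)]
static byte[] BuildFrame(byte index, byte total, byte[] payload)
{
    var frame = new byte[5 + payload.Length + 2];
    frame[0] = 0xAA;                                    // start-of-message marker
    frame[1] = index;                                   // packet index within the message
    frame[2] = total;                                   // total packet count for the message
    frame[3] = (byte)(payload.Length & 0xFF);           // payload size, little-endian ushort
    frame[4] = (byte)(payload.Length >> 8);
    Buffer.BlockCopy(payload, 0, frame, 5, payload.Length);
    ushort crc = CrcCcitt(frame, 0, frame.Length - 2);  // checksum over everything before the trailer
    frame[frame.Length - 2] = (byte)(crc & 0xFF);
    frame[frame.Length - 1] = (byte)(crc >> 8);
    return frame;
}

// Standard CRC-CCITT (0xFFFF initial value, polynomial 0x1021).
static ushort CrcCcitt(byte[] data, int offset, int count)
{
    ushort crc = 0xFFFF;
    for (int i = offset; i < offset + count; i++)
    {
        crc ^= (ushort)(data[i] << 8);
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) != 0 ? (ushort)((crc << 1) ^ 0x1021) : (ushort)(crc << 1);
    }
    return crc;
}

The receiver scans its buffer for 0xAA, reads the size field to know when a complete frame has arrived, and only accepts it if the CRC matches.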
Not sure where the problem is. Since you mentioned servers, I assume TCP, yes?
A phone needs to communicate some of your PDU to another phone. It connects as a client to the server on the other phone. A socket pair is established. It sends the data off to the server socket. The socket pair is unique - no other streams that might be happening between the two phones should interrupt this (they will slow it up, of course).
I don't see how async/sync sockets, assuming they are implemented correctly, should affect this; either should work OK.
Is there something I cannot see here?
BTW, Maciek's plan to bolster the protocol by adding an 'AA' start byte is an excellent idea - protocols that depend on sending just a length as the first element always seem to screw up eventually and result in a node trying to dequeue more bytes than there are atoms in the universe.
Rgds,
Martin
OK, now I understand the problem. (I completely misunderstood the topology of the OP's network - I thought each phone was running a TCP server as well as client/s, but there is just one server on a PC/whatever, a-la-chatrooms.) I don't see why you could not lock the socket class with a mutex, thus serializing the messages. You could queue the messages to the socket, but this has the memory implications that you are trying to avoid.
You could dedicate a connection to supplying only instructions to the phone, e.g. 'open another socket connection to me and return this GUID - a message will then be streamed on the socket'. This uses up a socket pair just for control and halves the capacity of your server :(
Are you stuck with the protocol you have described, or can you break your messages up into chunks with some ID in each chunk? You could then multiplex the messages onto one socket pair.
Another alternative, which again would require chunking the messages, is to introduce a 'control message' (maybe a chunk starting with 55 instead of AA) that contains a message ID (a GUID?), which the phone uses to establish a second socket connection to the server; it passes up the ID and is then sent the second message on the new socket connection.
Another (getting bored yet?) way of persuading the phone to recognise that a new message might be waiting would be to close the server socket that the phone is receiving a message over. The phone could then connect up again and tell the server that it only got xxxx bytes of message ID yyyy. The server could then reply with an instruction to open another socket for new message zzzz and then resume sending message yyyy. This might require some buffering on the server to ensure no data gets lost during the 'break'. You might want to implement this kind of 'restart streaming after a break' functionality anyway, since phones tend to go under bridges/tunnels just as the last KB of a 360MB video file is being streamed :( I know that TCP should take care of dropped packets, but if the phone's wireless layer decides to close the socket for whatever reason...
None of these solutions is particularly satisfying. Interested to see what other ideas crop up.
Rgds,
Martin
Thanks for the help everyone. I've realised the simplest approach is to use synchronous send commands on the client, or at least a send command that must complete before the next item is sent. I'm handling this with my own send queue on the client, rather than various parts of the app just calling send() when they need to send something.
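A minimal sketch of that kind of client-side send queue, assuming a single connected Socket; the class and member names are mine for illustration, not the actual app code:

// Assumes: using System.Collections.Concurrent; using System.Net.Sockets; using System.Threading;
// One queue per connection, drained by a single worker thread so sends never interleave.
class SendQueue
{
    private readonly Socket socket;
    private readonly BlockingCollection<byte[]> pending = new BlockingCollection<byte[]>();

    public SendQueue(Socket socket)
    {
        this.socket = socket;
        new Thread(Drain) { IsBackground = true }.Start();
    }

    public void Enqueue(byte[] message)
    {
        pending.Add(message);
    }

    private void Drain()
    {
        foreach (byte[] message in pending.GetConsumingEnumerable())
        {
            int sent = 0;
            while (sent < message.Length)   // Send may accept fewer bytes than requested
                sent += socket.Send(message, sent, message.Length - sent, SocketFlags.None);
        }
    }
}

Everything in the app calls Enqueue instead of send(), so each message finishes before the next one starts.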
Despite the documentation, NetworkStream.Write does not appear to wait until the data has been sent. Instead, it waits until the data has been copied to a buffer and then returns. That buffer is transmitted in the background.
This is the code I have at the moment. Whether I use ns.Write or ns.BeginWrite doesn't matter - both return immediately. The EndWrite also returns immediately (which makes sense since it is writing to the send buffer, not writing to the network).
bool done;

void SendData(TcpClient tcp, byte[] data)
{
    NetworkStream ns = tcp.GetStream();
    done = false;
    ns.BeginWrite(bytWriteBuffer, 0, data.Length, myWriteCallBack, ns);
    while (done == false) Thread.Sleep(10);
}

public void myWriteCallBack(IAsyncResult ar)
{
    NetworkStream ns = (NetworkStream)ar.AsyncState;
    ns.EndWrite(ar);
    done = true;
}
How can I tell when the data has actually been sent to the client?
I want to wait 10 seconds (for example) for a response from the server after sending my data; otherwise I'll assume something went wrong. If it takes 15 seconds to send my data, then it will always time out, since I can only start counting from when NetworkStream.Write returns - which is before the data has been sent. I want to start counting 10 seconds from when the data has left my network card.
The amount of data and the time to send it could vary - it could take 1 second, it could take 10 seconds, it could take a minute. The server does send a response when it has received the data (it's an SMTP server), but I don't want to wait forever if my data was malformed and the response will never come, which is why I need to know whether I'm waiting for the data to be sent or waiting for the server to respond.
I might also want to show the status to the user - I'd like to show "sending data to server" and "waiting for response from server" - how could I do that?
I'm not a C# programmer, but the way you've asked this question is slightly misleading. The only way to know when your data has been "received", for any useful definition of "received", is to have a specific acknowledgment message in your protocol which indicates the data has been fully processed.
The data does not "leave" your network card, exactly. The best way to think of your program's relationship to the network is:
your program -> lots of confusing stuff -> the peer program
A list of things that might be in the "lots of confusing stuff":
the CLR
the operating system kernel
a virtualized network interface
a switch
a software firewall
a hardware firewall
a router performing network address translation
a router on the peer's end performing network address translation
So, if you are on a virtual machine, which is hosted under a different operating system, that has a software firewall which is controlling the virtual machine's network behavior - when has the data "really" left your network card? Even in the best case scenario, many of these components may drop a packet, which your network card will need to re-transmit. Has it "left" your network card when the first (unsuccessful) attempt has been made? Most networking APIs would say no, it hasn't been "sent" until the other end has sent a TCP acknowledgement.
That said, the documentation for NetworkStream.Write seems to indicate that it will not return until it has at least initiated the 'send' operation:
The Write method blocks until the requested number of bytes is sent or a SocketException is thrown.
Of course, "is sent" is somewhat vague for the reasons I gave above. There's also the possibility that the data will be "really" sent by your program and received by the peer program, but the peer will crash or otherwise not actually process the data. So you should do a Write followed by a Read of a message that will only be emitted by your peer when it has actually processed the message.
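A minimal sketch of that write-then-read-the-acknowledgment pattern; the reply text, the timeout, and the buffer size are assumptions for illustration:

// Assumes: using System; using System.IO; using System.Net.Sockets; using System.Text;
// Send a request, then block (with a timeout) until the peer acknowledges it.
static string SendAndWaitForAck(TcpClient tcp, byte[] request, int timeoutMs)
{
    NetworkStream ns = tcp.GetStream();
    ns.ReadTimeout = timeoutMs;               // Read throws IOException if nothing arrives in time
    ns.Write(request, 0, request.Length);

    var buffer = new byte[1024];
    int read = ns.Read(buffer, 0, buffer.Length);
    if (read == 0)
        throw new IOException("Peer closed the connection before acknowledging.");
    return Encoding.ASCII.GetString(buffer, 0, read); // e.g. "250 OK" from an SMTP-style server
}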
TCP is a "reliable" protocol, which means the data will be received at the other end if there are no socket errors. I have seen numerous efforts at second-guessing TCP with a higher level application confirmation, but IMHO this is usually a waste of time and bandwidth.
Typically the problem you describe is handled through normal client/server design, which in its simplest form goes like this...
The client sends a request to the server and does a blocking read on the socket waiting for some kind of response. If there is a problem with the TCP connection then that read will abort. The client should also use a timeout to detect any non-network related issue with the server. If the request fails or times out then the client can retry, report an error, etc.
Once the server has processed the request and sent the response it usually no longer cares what happens - even if the socket goes away during the transaction - because it is up to the client to initiate any further interaction. Personally, I find it very comforting to be the server. :-)
In general, I would recommend sending an acknowledgment from the client anyway. That way you can be 100% sure the data was received, and received correctly.
If I had to guess, the NetworkStream considers the data to have been sent once it hands the buffer off to the Windows Socket. So, I'm not sure there's a way to accomplish what you want via TcpClient.
I cannot think of a scenario where NetworkStream.Write wouldn't send the data to the server as soon as possible. Barring massive network congestion or a disconnection, it should end up on the other end within a reasonable time. Is it possible that you have a protocol issue? For instance, with HTTP the request headers must end with a blank line, and the server will not send any response until one occurs - does the protocol in use have a similar end-of-message characteristic?
Here's some cleaner code than your original version, removing the delegate, the field, and the Thread.Sleep. It performs exactly the same way functionally.
void SendData(TcpClient tcp, byte[] data)
{
    NetworkStream ns = tcp.GetStream();
    // BUG?: should bytWriteBuffer == data?
    IAsyncResult r = ns.BeginWrite(bytWriteBuffer, 0, data.Length, null, null);
    r.AsyncWaitHandle.WaitOne();
    ns.EndWrite(r);
}
Looks like the question was modified while I wrote the above. The .WaitOne() may help with your timeout issue: it can be passed a timeout parameter. This is a lazy wait - the thread will not be scheduled again until the result is finished or the timeout expires.
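For example, the body of SendData above could use the timeout overload like this (the 10-second value mirrors the question; what to do on timeout is up to you):

// Hedged sketch: wait up to 10 seconds for the write to complete before giving up.
IAsyncResult r = ns.BeginWrite(data, 0, data.Length, null, null);
if (!r.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(10)))
{
    // Timed out: the write has not completed; treat this as a send failure.
    throw new TimeoutException("Send did not complete within 10 seconds.");
}
ns.EndWrite(r);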
I tried to understand the intent of the .NET NetworkStream designers, and they must have designed it this way: after Write, the data to be sent is no longer handled by .NET. Therefore, it is reasonable that Write returns immediately (and the data will be sent out from the NIC some time soon).
So in your application design you should follow this pattern rather than trying to make it work your way. For example, using a longer timeout before any data is received from the NetworkStream can compensate for the time consumed before your command leaves the NIC.
In any case, it is bad practice to hard-code a timeout value in source files. If the timeout value is configurable at runtime, everything should work fine.
How about using the Flush() method?
ns.Flush()
That should ensure the data is written before continuing.
Below .NET are the Windows sockets, which use TCP.
TCP uses ACK packets to notify the sender that the data has been transferred successfully.
So the sending machine knows when the data has been transferred, but there is no way (that I am aware of) to get that information in .NET.
Edit:
Just an idea, never tried:
Write() blocks only if the socket's buffer is full. So if we lower that buffer's size (SendBufferSize) to a very low value (8? 1? 0?), we may get what we want :)
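Something along these lines (untested, as noted; tcp, ns and data are the names from the question):

// Untested idea: shrink the socket send buffer so Write blocks until the data
// has actually been handed to the network rather than just copied to a buffer.
tcp.Client.SendBufferSize = 8;       // very small; 0 may be rejected or behave differently per OS
NetworkStream ns = tcp.GetStream();
ns.Write(data, 0, data.Length);      // should now block until (most of) the data is on the wire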
Perhaps try setting
tcp.NoDelay = true