Basically I have a listener that, when it receives a new connection, creates a new socketWorker and assigns the client's connection to it.
Now if the client sends a huge file (one that takes, say, 30 seconds to be fully received) and afterwards sends a tiny file of a few bytes, the tiny file isn't received until the huge file has been fully received.
This is obviously a bad approach, and I wonder how I could change it so the files are sent simultaneously.
As of now I'm using async methods; every time a file has been fully received, BeginReceive() is called again to receive the next file (a bad way to do it).
Any way to fix this?
I'd appreciate it!
You'll have to implement multiplexing, like for example SPDY does. This is (basically) done by framing message parts and supplying a stream ID on each frame. This way, multiple streams can be exchanged over a single connection.
Alternatively, you could open one connection per file.
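For illustration, a minimal frame writer might look like the sketch below. The frame layout here (a stream ID plus a length prefix per frame) is an assumption made for the example, not SPDY's actual wire format:

using System;
using System.IO;

// Each logical transfer gets its own stream ID; the sender alternates frames
// from different transfers, so a tiny file is not stuck behind a huge one.
static class Mux
{
    // Frame layout: [int streamId][int chunkLength][chunk bytes].
    public static void WriteFrame(Stream connection, int streamId, byte[] chunk, int count)
    {
        var writer = new BinaryWriter(connection);
        writer.Write(streamId);
        writer.Write(count);
        writer.Write(chunk, 0, count);
    }
}

The receiver reads frames in a loop and appends each chunk to the file identified by its stream ID; a zero-length frame could mark end-of-file.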
Related
I have created a stream based on a stateless protocol, think 2 web servers sending very limited requests to each other.
As such neither will know if I suddenly stop one as no connection will close, there will simply be no requests. There could legitimately be a gap in requests so I don't want to treat the lack of them as a lost connection.
What I want to do is send a heartbeat to say "I'm alive". I obviously don't want the heartbeat data to show up when I read from the stream, though, so my question:
How do I create a new stream class that wraps another stream and sends heartbeat data without exposing that to calling code?
Assuming two similar implementations on both sides: send each block of data with a header so you can safely send zero-data heartbeat blocks. That is, translate a Write on the outer stream into several writes on the inner stream, like {Data, 100 bytes, [bytes]}, {Data, 13 bytes, [bytes]}; a heartbeat would look like {Ping, 0 bytes, []}. On the receiving end, immediately respond with a similar empty Ping.
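A minimal sketch of such a wrapper, assuming a 1-byte frame type plus a 4-byte length header (both illustrative choices, not a standard layout):

using System;
using System.IO;

// Wraps an inner stream: frames every write, and silently consumes
// heartbeat frames on read so callers never see them.
class HeartbeatStream : Stream
{
    const byte DataFrame = 0;
    const byte PingFrame = 1;
    readonly Stream inner;
    readonly object writeLock = new object();
    int remaining; // bytes left in the current data frame

    public HeartbeatStream(Stream inner) { this.inner = inner; }

    public override void Write(byte[] buffer, int offset, int count)
    {
        lock (writeLock) // a timer thread may send heartbeats concurrently
        {
            inner.WriteByte(DataFrame);
            inner.Write(BitConverter.GetBytes(count), 0, 4);
            inner.Write(buffer, offset, count);
        }
    }

    // Call this from a timer to say "I'm alive".
    public void SendHeartbeat()
    {
        lock (writeLock)
        {
            inner.WriteByte(PingFrame);
            inner.Write(BitConverter.GetBytes(0), 0, 4);
        }
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        while (remaining == 0) // between frames: parse the next header
        {
            int type = inner.ReadByte();
            if (type < 0) return 0; // inner stream closed
            byte[] len = new byte[4];
            ReadExactly(len);
            remaining = BitConverter.ToInt32(len, 0);
            if (type == PingFrame)
                remaining = 0; // heartbeat consumed; the answer suggests replying
                               // with an empty Ping here (take care not to ping-pong forever)
        }
        int n = inner.Read(buffer, offset, Math.Min(count, remaining));
        remaining -= n;
        return n;
    }

    void ReadExactly(byte[] buf)
    {
        for (int off = 0; off < buf.Length; )
        {
            int n = inner.Read(buf, off, buf.Length - off);
            if (n == 0) throw new EndOfStreamException();
            off += n;
        }
    }

    public override bool CanRead { get { return true; } }
    public override bool CanWrite { get { return true; } }
    public override bool CanSeek { get { return false; } }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override void Flush() { inner.Flush(); }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
}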
Here I am troubleshooting a theoretical problem about HOW servers and clients work on their machines. I know all the .NET processes involved, but I am missing something when it comes to code, and I was unable to find anything related about this.
I code in Visual C# 2008, and I use the regular TCPClient / TCPListener in 2 different projects:
Project1 (Client)
Project2 (Server)
My questions are maybe quite simple:
1-> About how the server receives data: are event handlers possible?
In my first server code I used to make this loop:
while (true)
{
    if (NetworkStream.DataAvailable)
    {
        //stuff
    }
    Thread.Sleep(200);
}
I regard this as a crappy way to control the incoming data from a server, BUT the server is always ready to receive data.
My question: is there anything like...? ->
AcceptTcpClient();
I want a handler that waits until something happens, in this case until data arrives on a specific socket.
2-> General networking I/O methods.
The problem (besides my being a noob) is how to handle multiple writes of data.
If I send a lot of data in a byte array and then send more, the writes can break: all the data gets joined together and errors occur on the receiving end. I want to handle multiple writes to send and receive.
Is this possible?
About how the server receives data: are event handlers possible?
If you want to write callback-oriented server code, you may find MSDN's Asynchronous Server Socket Example to be exactly what you're looking for.
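A stripped-down skeleton in that style (names and the port are illustrative, and error handling is omitted; this is a sketch rather than the MSDN sample itself):

using System;
using System.Net;
using System.Net.Sockets;

// Callback-oriented server: no polling loop, no Thread.Sleep; the runtime
// invokes the callbacks only when a connection or data actually arrives.
class AsyncServer
{
    class ClientState
    {
        public Socket Socket;
        public byte[] Buffer = new byte[4096];
    }

    static void Main()
    {
        var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Any, 8888));
        listener.Listen(100);
        listener.BeginAccept(OnAccept, listener);
        Console.ReadLine(); // keep the process alive
    }

    static void OnAccept(IAsyncResult ar)
    {
        var listener = (Socket)ar.AsyncState;
        var state = new ClientState { Socket = listener.EndAccept(ar) };
        listener.BeginAccept(OnAccept, listener); // keep accepting other clients
        state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length,
                                  SocketFlags.None, OnReceive, state);
    }

    static void OnReceive(IAsyncResult ar)
    {
        var state = (ClientState)ar.AsyncState;
        int read = state.Socket.EndReceive(ar);
        if (read == 0) { state.Socket.Close(); return; } // remote side closed
        // ...process the first 'read' bytes of state.Buffer here...
        state.Socket.BeginReceive(state.Buffer, 0, state.Buffer.Length,
                                  SocketFlags.None, OnReceive, state);
    }
}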
... the writes can break: all the data gets joined together and errors occur on the receiving end.
That is the nature of TCP. The standardized Internet protocols fall into a few categories:
             block oriented    stream oriented
reliable     SCTP              TCP
unreliable   UDP               ---
If you really want to send blocks of data, you can use SCTP, but be aware that many firewalls DROP SCTP packets because they aren't "usual". I don't know if you can reliably route SCTP packets across the open Internet.
You can wrap your own content into blocks of data with your own headers or add other "synchronization" mechanisms to your system. Consider an HTTP server: it must wait until it reads an entire request like:
GET /index.html HTTP/1.1␍␊
Host: www.example.com␍␊
␍␊
Until the server sees the CRLFCRLF sequence, it must keep the partially-read data in a buffer. The bytes might come in one at a time in a dozen or more packets. Or, if the client is sending multiple requests in a single stream, a dozen requests might come in a single packet.
You just have to handle this.
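One common approach is a length prefix on every application-level message; the receiver buffers until the whole message is in. A minimal sketch (the 4-byte prefix is an assumed convention, not something TCP gives you):

using System;
using System.IO;

// Length-prefixed framing: [int length][payload]. The receiver loops until
// it has read exactly the promised number of bytes, however the network
// happens to split or merge the packets.
static class LengthPrefix
{
    public static void WriteMessage(Stream stream, byte[] payload)
    {
        stream.Write(BitConverter.GetBytes(payload.Length), 0, 4);
        stream.Write(payload, 0, payload.Length);
    }

    public static byte[] ReadMessage(Stream stream)
    {
        int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
        return ReadExactly(stream, length);
    }

    static byte[] ReadExactly(Stream stream, int count)
    {
        var buf = new byte[count];
        for (int off = 0; off < count; )
        {
            int n = stream.Read(buf, off, count - off);
            if (n == 0) throw new EndOfStreamException("connection closed mid-message");
            off += n;
        }
        return buf;
    }
}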
I have a server app that listens for connections on port 8888, and I am also creating the client application. It is a simple application; the only hard thing about it is managing multiple connections. I just need to send files between computers. I don't know if the way I do it is right, but it works; maybe you guys can correct me. Here is my algorithm when sending the file:
NetworkStream stream = // initialize it
while (someCondition)
{
    // first I open the file for reading and read chunks of it
    byte[] chunk = fileRead(file, indexStart, indexEnd); // I have a similar method; this is just to illustrate my point
    stream.Write(chunk, /* other params */);
    // since I often send large files it would be nice if I could wait here
    // until stream.Write is done. When debugging, the while loop
    // executes several times, then it waits.
}
and on the other side I read bytes from that stream and write them to a file.
I also need to wait sometimes, because I send multiple files and I want to make sure that the first file has been sent before moving on to the next. I know I can solve this by using the stream.Read method once the transfer has been done and sending data back from the client, but sometimes I believe it would be helpful to know when stream.Write is done.
Edit
OK, so based on your answers I can, for example, send the client the number of bytes that I am planning to send, and once the client receives that many bytes it knows the transfer is done. But my question is whether this is efficient. I mean doing something like the following (a rough code sketch of this exchange appears after the list):
on the server:
write data "send the file length"
read data "check that the client received the length" (expecting a string, "ok" for example)
write data "tell the client the name of the file"
read data "check that the client received the name of the file"
write data "start sending chunks of the file"
read data "wait until the client replies with the string "ok", for example"
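A rough sketch of the server side of that exchange, assuming (purely for brevity) that the control lines are exchanged as ASCII text via StreamReader/StreamWriter; the real protocol could just as well use raw bytes:

using System;
using System.IO;
using System.Text;

// Server side of the ack-based exchange listed above. Each step waits for
// the client's "ok" before moving on.
static class FileSender
{
    public static void Send(Stream stream, string fileName, byte[] fileBytes)
    {
        var reader = new StreamReader(stream, Encoding.ASCII);
        var writer = new StreamWriter(stream, Encoding.ASCII) { AutoFlush = true };

        writer.WriteLine(fileBytes.Length);           // send the file length
        if (reader.ReadLine() != "ok") return;        // client acks the length

        writer.WriteLine(fileName);                   // tell the client the file name
        if (reader.ReadLine() != "ok") return;        // client acks the name

        stream.Write(fileBytes, 0, fileBytes.Length); // send the file itself
        if (reader.ReadLine() != "ok") return;        // client acks full receipt
    }
}

Note that each ack costs a round trip; the length prefix alone already lets the client tell when the file is complete, so the intermediate acks are optional. (On the client side, mixing a StreamReader with raw reads needs care, because StreamReader buffers ahead.)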
The write is complete when the line
stream.Write(chunk, /* other params */);
completes. It's worth noting that this does not imply that the other end has received anything. In fact, immediately subsequent to that line, the data is likely to be in some buffer on the sending machine. That means that it's now out of your control. If you want receipt confirmation, the remote end will have to let you know.
Stream.Write is synchronous, so it will always block your thread until the writing finishes.
I'm writing a server application for my iPhone app. The section of the server I'm working on is the relay server. This essentially relays messages between iPhones, through a server, using TCP sockets. The server reads the length of the header from the stream, then reads that number of bytes from the stream. It deserializes the header, and checks to see if the message is to be relayed on to another iPhone (rather than being processed on the server).
If it has to be relayed, it begins reading bytes from the sender's socket, 1024 bytes at a time. After each 1024 bytes are received, it adds those bytes (as a "packet" of bytes) to the outgoing message queue, which is processed in order.
This is all fine; however, what happens if the sender gets interrupted, so it hasn't sent all its bytes (say, out of the 3,000 bytes it had to send, the sending iPhone goes into a tunnel after 2,500 bytes)?
This means that all the other devices are waiting on the remaining 500 bytes, which don't get relayed to them. Then if the sender (or anyone else for that matter) sends data to these sockets, they treat the start of the new message as the end of the last one, corrupting the data.
Obviously from the description above I'm using message framing, but I think I'm missing something. From what I can see, message framing only seems to let the receiver know the exact number of bytes to read from the socket before assembling them into an object. Won't things start to get hairy once a byte or two goes astray at some point, throwing everything out of sync? Is there a standard way of getting back in sync again?
Won't things start to get hairy once a byte or two goes astray at some point, throwing everything out of sync? Is there a standard way of getting back in sync again?
TCP/IP itself ensures that no bytes go "missing" over a single socket connection.
Things are a bit more complex in your situation, where (if I understand correctly) you're using a server as a sort of multiplexer.
In this case, here are some options off the top of my head:
Have the server buffer the entire message from point A before sending it to point B.
Close the B-side sockets if an abnormal close is detected from the A side.
Change the receiving side of the protocol so that a B-side client can detect and recover from a partial A-stream without killing and re-establishing the socket. e.g., if the server gave a unique id to each incoming A-stream, then the B client would be able to detect if a different stream starts. Or have an additional length prefix, so the B client knows both the entire length to expect and the length for that individual message.
Which option you choose depends on what kind of data you're transferring and how easy the different parts are to change.
Regardless of the solution, be sure to include detection of half-open connections.
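To make the third option concrete, here is an illustrative chunk header (the exact fields and widths are assumptions for the sketch):

using System;
using System.IO;

// Every relayed chunk carries the stream's unique id plus the total message
// length, so a B-side client can notice when a new stream starts before the
// previous one reached its promised total, and discard the partial message.
static class RelayFraming
{
    public static void WriteChunk(Stream stream, Guid streamId, int totalLength, byte[] chunk, int count)
    {
        var writer = new BinaryWriter(stream);
        writer.Write(streamId.ToByteArray()); // 16 bytes: which A-stream this belongs to
        writer.Write(totalLength);            // entire message length to expect
        writer.Write(count);                  // length of this chunk
        writer.Write(chunk, 0, count);
    }
}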
I'm writing a server application for an iPhone application I'm designing. The iPhone app is written in C# (MonoTouch) and the server is written in C# too (.NET 4.0).
I'm using asynchronous sockets for the network layer. The server allows two or more iPhones ("devices") to connect to each other and be able to send data bi-directionally.
Depending on the incoming message, the server either processes the message itself, or relays the data through to the other device(s) in the same group as the sending device. It can make this decision by decoding the header of the packet first and deciding what type of packet it is.
This is done by framing the stream in a way that the first 8 bytes are two integers, the length of the header and the length of the payload (which can be much larger than the header).
The server reads (asynchronously) from the socket the first 8 bytes so it has the lengths of the two sections. It then reads again, up to the total length of the header section.
It then deserializes the header, and based on the information within, can see if the remaining data (payload) should be forwarded onto another device, or is something that the server itself needs to work with.
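For reference, a synchronous sketch of that header-reading sequence (the ReadExactly helper is illustrative, standing in for the repeated async reads):

using System;
using System.IO;

// Reads one framed message header: [int headerLen][int payloadLen][header...].
// The payload itself is left on the stream so it can be relayed in chunks.
static class FrameReader
{
    public static byte[] ReadHeader(Stream stream, out int payloadLength)
    {
        byte[] prefix = ReadExactly(stream, 8);
        int headerLength = BitConverter.ToInt32(prefix, 0);
        payloadLength = BitConverter.ToInt32(prefix, 4);
        return ReadExactly(stream, headerLength); // deserialize this to decide: handle locally or relay
    }

    static byte[] ReadExactly(Stream stream, int count)
    {
        var buf = new byte[count];
        for (int off = 0; off < count; )
        {
            int n = stream.Read(buf, off, count - off);
            if (n == 0) throw new EndOfStreamException();
            off += n;
        }
        return buf;
    }
}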
If it needs to be forwarded onto another device, then the next step is to read data coming into the socket in chunks of say, 1024 bytes, and write these directly using an async send via another socket that is connected to the recipient device.
This reduces the memory requirements of the server, as I'm not loading the entire packet into a buffer and then re-sending it down the wire to the recipient.
However, because of the nature of async sockets, I am not guaranteed to receive the entire payload in one read, so I have to keep reading until I receive all the bytes. In the case of relaying onto its final destination, this means that I'm calling BeginSend() for each chunk of bytes I receive from the sender, and forwarding that chunk onto the recipient, one chunk at a time.
The issue with this is that because I am using async sockets, this leaves the possibility of another thread doing a similar operation with the same recipient (and therefore same final destination socket), and so it is likely that the chunks coming from both threads will get mixed up and corrupt all the data going to that recipient.
For example: If the first thread sends a chunk, and is waiting for the next chunk from the sender (so it can relay it onwards), the second thread could send one of its chunks of data, and corrupt the first thread's (and the second thread's for that matter) data.
As I write this, I'm wondering: is it as simple as just locking the socket object? Would this be the correct option, or could it cause other issues (e.g. issues with receiving data through the locked socket that's being sent BACK from the remote device)?
Thanks in advance!
I was facing a similar scenario a while back. I don't have the complete solution anymore, but here's pretty much what I did:
I didn't use sync sockets; I decided to explore async sockets in C# instead. Fun ride.
I don't allow multiple threads to share a single resource unless I really have to
My "packets" were containing information about size, index and total packet count for a message
My packet's 1st byte was unique to signify that it's a start of a message, I used 0xAA
My packets's last 2 bytes were a result of a CRC-CCITT checksum (ushort)
The objects that did the receiving bit contained a buffer with all received bytes. From that buffer I was extracting "complete" messages once the size was ok, and the checksum matched
The only "locking" I needed to do was in the temp buffer so I could safely analyze it's contents between write/read operations
Hope that helps a bit
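A sketch of that packet format (field widths are my assumptions; the 0xAA marker and CRC-CCITT trailer follow the description above):

using System;
using System.IO;

// Packet layout: [0xAA][ushort size][ushort index][ushort total][payload][ushort crc].
// The receiver scans for 0xAA, reads a candidate packet, and accepts it only
// if the trailing checksum matches; otherwise it resumes scanning.
static class PacketFormat
{
    public static byte[] BuildPacket(ushort index, ushort total, byte[] payload)
    {
        var ms = new MemoryStream();
        var writer = new BinaryWriter(ms);
        writer.Write((byte)0xAA);             // start-of-packet marker
        writer.Write((ushort)payload.Length); // size
        writer.Write(index);                  // this packet's index
        writer.Write(total);                  // total packet count for the message
        writer.Write(payload);
        byte[] body = ms.ToArray();
        writer.Write(Crc16Ccitt(body, 0, body.Length)); // checksum over everything so far
        return ms.ToArray();
    }

    // CRC-CCITT, polynomial 0x1021, initial value 0xFFFF.
    static ushort Crc16Ccitt(byte[] data, int offset, int count)
    {
        ushort crc = 0xFFFF;
        for (int i = offset; i < offset + count; i++)
        {
            crc ^= (ushort)(data[i] << 8);
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 0x8000) != 0 ? (ushort)((crc << 1) ^ 0x1021) : (ushort)(crc << 1);
        }
        return crc;
    }
}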
Not sure where the problem is. Since you mentioned servers, I assume TCP, yes?
A phone needs to communicate some of your PDU to another phone. It connects as a client to the server on the other phone. A socket-pair is established. It sends the data off to the server socket. The socket-pair is unique; no other streams that might be happening between the two phones should interrupt this (they will slow it up, of course).
I don't see how async/sync sockets, assuming they are implemented correctly, should affect this; either should work OK.
Is there something I cannot see here?
BTW, Maciek's plan to bolster up the protocol by adding an 'AA' start byte is an excellent idea; protocols depending on sending just a length as the first element always seem to screw up eventually and result in a node trying to dequeue more bytes than there are atoms in the universe.
Rgds,
Martin
OK, now I understand the problem. (I completely misunderstood the topology of the OP's network; I thought each phone was running a TCP server as well as client(s), but there is just one server on a PC or whatever, a-la chatrooms.) I don't see why you could not lock the socket class with a mutex, thus serializing the messages. You could queue the messages to the socket, but this has the memory implications that you are trying to avoid.
You could dedicate a connection to supplying only instructions to the phone, eg 'open another socket connection to me and return this GUID - a message will then be streamed on the socket'. This uses up a socket-pair just for control and halves the capacity of your server :(
Are you stuck with the protocol you have described, or can you break your messages up into chunks with some ID in each chunk? You could then multiplex the messages onto one socket pair.
Another alternative, which again would require chunking the messages, is to introduce a 'control message' (maybe a chunk starting with 55 instead of AA) that contains a message ID (a GUID?), which the phone uses to establish a second socket connection to the server, passing up the ID; the second message is then sent on the new socket connection.
Another, (getting bored yet?), way of persuading the phone to recognise that a new message might be waiting would be to close the server socket that the phone is receiving a message over. The phone could then connect up again, tell the server that it only got xxxx bytes of message ID yyyy. The server could then reply with an instruction to open another socket for new message zzzz and then resume sending message yyyy. This might require some buffering on the server to ensure no data gets lost during the 'break'. You might want to implement this kind of 'restart streaming after break' functionality anyway since phones tend to go under bridges/tunnels just as the last KB of a 360MB video file is being streamed :( I know that TCP should take care of dropped packets, but if the phone wireless layer decides to close the socket for whatever reason...
None of these solutions is particularly satisfying. Interested to see what other ideas crop up...
Rgds,
Martin
Thanks for the help everyone. I've realised the simplest approach is to use synchronous send commands on the client, or at least a send command that must complete before the next item is sent. I'm handling this with my own send queue on the client, rather than having various parts of the app just call send() whenever they need to send something.
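For completeness, a minimal sketch of that kind of per-connection send queue (names are illustrative): only one send is ever in flight, so chunks from different parts of the app can never interleave on the wire.

using System;
using System.Collections.Generic;
using System.Net.Sockets;

// One outgoing queue per connection; callers enqueue complete messages and
// the queue serializes the actual BeginSend calls.
class SendQueue
{
    readonly Socket socket;
    readonly Queue<byte[]> pending = new Queue<byte[]>();
    bool sending;

    public SendQueue(Socket socket) { this.socket = socket; }

    public void Enqueue(byte[] message)
    {
        lock (pending)
        {
            pending.Enqueue(message);
            if (sending) return; // a send is already in flight; OnSent will drain the queue
            sending = true;
        }
        SendNext();
    }

    void SendNext()
    {
        byte[] next;
        lock (pending)
        {
            if (pending.Count == 0) { sending = false; return; }
            next = pending.Dequeue();
        }
        socket.BeginSend(next, 0, next.Length, SocketFlags.None, OnSent, null);
    }

    void OnSent(IAsyncResult ar)
    {
        socket.EndSend(ar); // a robust version would check the returned byte
        SendNext();         // count and handle partial sends and errors
    }
}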