C# ProgressBar update according to file send progress

I send a file converted to bytes over a socket, and I want to update the current sending status (something similar to when using a web socket and downloading something from the web).
This is the send code:
byte[] data = File.ReadAllBytes(file); // sending a simple text file
s.Send(data); //s is a socket of course.
Now of course I can't do this:
s.Send(data);
progressBar1.Value = 100; // show completed progress after a successful send
But I want to do it while sending the file, and update it according to the file's transfer state... let's say for a 100 KB file, update it for every 1 KB sent. I'm aware I'm sending the whole file at once and not in parts, but still... is there any way to do that?

Socket.Send blocks until all data in the byte[] has been sent. So you will not get any chance to update a progress bar unless you do multiple sends of small chunks of data until the whole amount has been transmitted.
If you want reliable communication, you need to use TCP/IP. The default maximum packet size of TCP/IP is around 1500 bytes (the 'MTU'). The exact value depends on your computer's network settings. If you split your data, I suggest you do it in multiples of the MTU size, so that you will have no half packets.
But if you're really transmitting over a network, I wonder why you would implement a progress bar for sending 100 kB over a network capable of 100 Mbit/s or even 1000 Mbit/s... Your data will be sent before the UI even has a chance to update the progress bar. This is only interesting for medium-sized or large data of >> 1 MB.
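For illustration, here is a minimal sketch of that chunked approach, assuming the loop runs on a worker thread (e.g. a BackgroundWorker) so the UI can repaint; s, file and progressBar1 are the names from the question, and the 1460-byte chunk size is just an assumed value a little below the typical 1500-byte MTU:
byte[] data = File.ReadAllBytes(file);
const int chunkSize = 1460;                 // assumption: roughly one TCP payload per send
int sentTotal = 0;

while (sentTotal < data.Length)
{
    int toSend = Math.Min(chunkSize, data.Length - sentTotal);
    sentTotal += s.Send(data, sentTotal, toSend, SocketFlags.None);

    int percent = (int)(sentTotal * 100L / data.Length);
    // Marshal the progress update back to the UI thread.
    progressBar1.Invoke((Action)(() => progressBar1.Value = percent));
}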
Enhancement:
See MSDN Socket.Send for details. Read especially the notes on that page:
The successful completion of a send does not indicate that the data was successfully delivered.
because it means that the data was only successfully stored in the output buffer. If you really want to know that the data has arrived, you should design the other side to send a reply after each packet (or after every n packets).

Related

.Net Sockets - Socket.Receive reports bytes read, but the bytes do not appear in the buffer

I am calling (System.Net.Sockets) Socket.Receive. It returns that it read 13 bytes, which is the expected data length. However, the receive buffer I pass into the function does not become populated with the expected bytes. All of the bytes in the buffer are zero.
I simply don't understand how this could be happening - I thought Receive is supposed to block until data becomes available, and that it should put the data it read into the buffer.
Simple as it is, here is my code:
bytesRead = Socket.Receive(RecvBuffer.Buffer, offset, RecvBuffer.Buffer.Length - offset, SocketFlags.None, out SocketError error);
bytesRead = 13, and RecvBuffer.Buffer is all zeros.
On the sending side, it's not writing all zeros, as far as I can tell. I haven't found any tool to allow me to check what's on the wire.
I'm at a loss. Any help is appreciated.
You have to understand all possible scenarios. We will not consider a chat application here, where both sides can send asynchronously at the same time. The best way of understanding serial communications is an application using a scale to send weights. There are two different operating modes. One is continuous, where the client sends a message to start weighing and the server constantly sends weights; that is probably not what you are doing. The second mode is master/slave, where the client requests one weight and the server sends it. The weight may not come in one chunk, so the client has to loop until the entire weight is received, which is usually signalled by an end character or a byte count at the beginning of the weight.
There is no blocking: Windows uses timers and periodically delivers whatever data is in the UART to the application (Socket.Receive). The number of bytes is random and less than or equal to the entire data being received. You should disable hardware and software handshaking so you do not get null data. You also need to send a message to receive a message, and not send again until you have received a full block of data, which may take multiple Socket.Receive() calls until you reach the end of the data (a terminating character or byte count).
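To illustrate that "loop until the whole block is in" point, here is a minimal sketch assuming the total byte count is known up front (socket and expectedLength are placeholder names; a terminating-character variant would instead loop until the terminator is seen):
byte[] buffer = new byte[expectedLength];
int received = 0;

while (received < expectedLength)
{
    int n = socket.Receive(buffer, received, expectedLength - received, SocketFlags.None);
    if (n == 0)
        throw new SocketException((int)SocketError.ConnectionReset); // peer closed before the full block arrived
    received += n;
}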
The problem here was the sending side. I got fooled by other messages working correctly, but in this particular case, the send buffer was never populated with the data. So none was received :).
Thanks for the comments and suggestions.

C# Socket send truncate info

I am trying to send a file using a socket, but it almost always cuts the file off during transmission. The socket never throws an exception; it just does not send the whole file.
Trying to solve this, I'm sending the length of the file as a header to the server, so it can validate whether the file is complete; if not, it sends a signal back asking for retransmission.
This works, but sometimes it takes up to 250 tries to get the file through. What do you think is happening? I have in mind sending the file in smaller chunks, but what is a good size for a chunk?
The files that fail to send are around 80 KB, and sometimes even a 20 KB file fails!
Any tips?
After sending the file data, you can use the Flush() method to force sending of all local buffered data.
When sending, TCP breaks the data up into small packets.
When reading data, the Read() method returns an integer which is the number of bytes actually read.
Always make sure that the amount you intended to read is the amount that was read.
int actuallyRead = stream.Read(bufferToStoreData, 0, bytesToRead);
while (actuallyRead < bytesToRead)
{
    int n = stream.Read(bufferToStoreData, actuallyRead, bytesToRead - actuallyRead);
    if (n == 0)
        break; // connection closed before all expected bytes arrived
    actuallyRead += n;
}
See this thread:
Is there a way to block on a socket send() until we get the ack for that packet?
It's the same for C# as for any other socket environment, you will have to do some kind of acknowledgement at the application level if you want to ensure receipt of data.
Sender tells receiver how much data to expect. Receiver reads that many bytes and responds saying "I got it". Sender is then free to close its connection or go about its business. If the sender closes before receiving the "I got it" from the receiver, the data likely won't get there.
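A minimal sketch of that sender-side handshake, assuming a 4-byte length prefix and a single ack byte as the "I got it" reply (both are illustrative choices, not a fixed protocol):
byte[] payload = File.ReadAllBytes(path);
socket.Send(BitConverter.GetBytes(payload.Length));  // tell the receiver how much to expect
socket.Send(payload);

byte[] ack = new byte[1];
int n = socket.Receive(ack);                          // block until the receiver says "I got it"
if (n == 0 || ack[0] != 1)
    throw new IOException("Receiver never acknowledged the data.");
socket.Close();                                       // now it is safe to close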
As to why your app level ack isn't working, post some code and we can take a look at it.

How to wait until stream write is done

I have a server app that listens for connections on port 8888. I am also creating the client application. It is a simple application; the only hard thing about it is managing multiple connections. I just need to send files between computers. The way I do that works, but I don't know if it is right, so maybe you guys can correct me. Here is my algorithm when sending a file:
NetworkStream stream = ...; // initialize it
while (someCondition)
{
    // first I open the file for reading and read chunks of it
    byte[] chunk = fileRead(file, indexStart, indexEnd); // I have a similar method; this is just to illustrate my point
    stream.Write(chunk, ...); // other params
    // since I often send large files it would be nice if I could wait here
    // until stream.Write is done. When debugging this, the while loop
    // executes several times and then it waits.
}
On the other side I read bytes from that stream and write them to a file.
I also need to wait sometimes, because I send multiple files and I want to make sure the first file has been sent before moving on to the next. I know I can solve this by using the stream.Read method once the transfer is done and sending data back from the client, but I believe it would sometimes be helpful to know when stream.Write is done.
Edit
OK, so based on your answers I can, for example, send the client the number of bytes that I am planning to send, and once the client receives that many bytes it knows the transfer is done. But my question is whether this is efficient. I mean doing something like:
on the server:
writing data "sending the file length"
read data "check to see if the client received the length" (expecting a string ok for example)
write data "tel the client the name of the file"
read data "check to see if the client recived the name of the file"
write data "start sending chuncks of the file"
read data "wait until client replies with the string ok for example"
The write is complete when the line
stream.Write(chunk, ...); // other params
completes. It's worth noting that this does not imply that the other end has received anything. In fact, immediately subsequent to that line, the data is likely to be in some buffer on the sending machine. That means that it's now out of your control. If you want receipt confirmation, the remote end will have to let you know.
Stream.Write is synchronous, so it will always block your thread until the writing finishes.
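As a concrete illustration of "the remote end will have to let you know", here is a minimal sketch of the receiving side, assuming the client already learned fileLength (the announced byte count) and destinationPath from an earlier exchange, and replies "ok" only once everything has arrived (names are illustrative):
using (var output = File.Create(destinationPath))
{
    byte[] buffer = new byte[8192];
    long remaining = fileLength;

    while (remaining > 0)
    {
        int n = stream.Read(buffer, 0, (int)Math.Min(buffer.Length, remaining));
        if (n == 0)
            throw new IOException("Connection closed before the whole file arrived.");
        output.Write(buffer, 0, n);
        remaining -= n;
    }
}

byte[] ok = Encoding.ASCII.GetBytes("ok");   // System.Text
stream.Write(ok, 0, ok.Length);              // tell the sender this file is complete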

Using TCP Sockets (C#), How do I detect if a message got interrupted in transit, on the receiver's end?

I'm writing a server application for my iPhone app. The section of the server I'm working on is the relay server. This essentially relays messages between iPhones, through a server, using TCP sockets. The server reads the length of the header from the stream, then reads that number of bytes from the stream. It deserializes the header, and checks to see if the message is to be relayed on to another iPhone (rather than being processed on the server).
If it has to be relayed, it begins reading bytes from the sender's socket, 1024 bytes at a time. After each 1024 bytes are received, it adds those bytes (as a "packet" of bytes) to the outgoing message queue, which is processed in order.
This is all fine; but what happens if the sender gets interrupted, so it hasn't sent all its bytes (say, out of the 3,000 bytes it had to send, the sending iPhone goes into a tunnel after 2,500 bytes)?
This means that all the other devices are waiting on the remaining 500 bytes, which don't get relayed to them. Then if the sender (or anyone else, for that matter) sends data to these sockets, they treat the start of the new message as the end of the last one, corrupting the data.
Obviously from the description above, I'm using message framing, but I think I'm missing something. From what I can see, message framing only seems to allow the receiver to know the exact number of bytes to read from the socket before assembling them into an object. Won't things start to get hairy once a byte or two goes astray at some point, throwing everything out of sync? Is there a standard way of getting back in sync again?
Won't things start to get hairy once a byte or two goes astray at some point, throwing everything out of sync? Is there a standard way of getting back in sync again?
TCP/IP itself ensures that no bytes go "missing" over a single socket connection.
Things are a bit more complex in your situation, where (if I understand correctly) you're using a server as a sort of multiplexer.
In this case, here are some options off the top of my head:
Have the server buffer the entire message from point A before sending it to point B.
Close the B-side sockets if an abnormal close is detected from the A side.
Change the receiving side of the protocol so that a B-side client can detect and recover from a partial A-stream without killing and re-establishing the socket. e.g., if the server gave a unique id to each incoming A-stream, then the B client would be able to detect if a different stream starts. Or have an additional length prefix, so the B client knows both the entire length to expect and the length for that individual message.
Which option you choose depends on what kind of data you're transferring and how easy the different parts are to change.
Regardless of the solution, be sure to include detection of half-open connections.
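As a rough sketch of the third option above (a per-stream id plus a length prefix on every relayed chunk, so a B-side client can notice a new or truncated A-stream), assuming a 16-byte GUID followed by a 4-byte chunk length as the header layout:
void RelayChunk(Socket recipient, Guid streamId, byte[] chunk, int count)
{
    byte[] header = new byte[20];
    streamId.ToByteArray().CopyTo(header, 0);         // which A-stream this chunk belongs to
    BitConverter.GetBytes(count).CopyTo(header, 16);  // how many payload bytes follow
    recipient.Send(header);
    recipient.Send(chunk, 0, count, SocketFlags.None);
}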

How to safely stream data through a server socket to another socket?

I'm writing a server application for an iPhone application I'm designing. The iPhone app is written in C# (MonoTouch) and the server is written in C# too (.NET 4.0).
I'm using asynchronous sockets for the network layer. The server allows two or more iPhones ("devices") to connect to each other and be able to send data bi-directionally.
Depending on the incoming message, the server either processes the message itself, or relays the data through to the other device(s) in the same group as the sending device. It can make this decision by decoding the header of the packet first and deciding what type of packet it is.
This is done by framing the stream in a way that the first 8 bytes are two integers, the length of the header and the length of the payload (which can be much larger than the header).
The server reads (asynchronously) from the socket the first 8 bytes so it has the lengths of the two sections. It then reads again, up to the total length of the header section.
It then deserializes the header, and based on the information within, can see if the remaining data (payload) should be forwarded onto another device, or is something that the server itself needs to work with.
If it needs to be forwarded onto another device, then the next step is to read data coming into the socket in chunks of say, 1024 bytes, and write these directly using an async send via another socket that is connected to the recipient device.
This reduces the memory requirements of the server, as I'm not loading the entire packet into a buffer and then re-sending it down the wire to the recipient.
However, because of the nature of async sockets, I am not guaranteed to receive the entire payload in one read, so I have to keep reading until I receive all the bytes. In the case of relaying to its final destination, this means I'm calling BeginSend() for each chunk of bytes I receive from the sender, and forwarding that chunk on to the recipient, one chunk at a time.
The issue with this is that because I am using async sockets, this leaves the possibility of another thread doing a similar operation with the same recipient (and therefore same final destination socket), and so it is likely that the chunks coming from both threads will get mixed up and corrupt all the data going to that recipient.
For example: If the first thread sends a chunk, and is waiting for the next chunk from the sender (so it can relay it onwards), the second thread could send one of its chunks of data, and corrupt the first thread's (and the second thread's for that matter) data.
As I write this, I'm just wondering: is it as simple as just locking the socket object?! Would this be the correct option, or could this cause other issues (e.g. issues with receiving data through the locked socket that's being sent BACK from the remote device)?
Thanks in advance!
I was facing a similar scenario a while back. I don't have the complete solution anymore, but here's pretty much what I did:
I didn't use sync sockets, decided to explore the async sockets in C# - fun ride
I don't allow multiple threads to share a single resource unless I really have to
My "packets" were containing information about size, index and total packet count for a message
My packet's 1st byte was unique to signify that it's the start of a message; I used 0xAA
My packets' last 2 bytes were the result of a CRC-CCITT checksum (ushort)
The objects that did the receiving bit contained a buffer with all received bytes. From that buffer I was extracting "complete" messages once the size was OK and the checksum matched
The only "locking" I needed to do was on the temp buffer, so I could safely analyze its contents between write/read operations
Hope that helps a bit
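For illustration, here is a rough sketch of that packet layout; only the 0xAA start byte and the trailing CRC-CCITT ushort come from the answer above, while the field widths (ushort size/index/count) are assumptions:
// Requires System.IO; assumes the payload length fits in a ushort.
static byte[] BuildPacket(ushort index, ushort totalCount, byte[] payload)
{
    using (var ms = new MemoryStream())
    using (var w = new BinaryWriter(ms))
    {
        w.Write((byte)0xAA);              // start-of-message marker
        w.Write((ushort)payload.Length);  // size
        w.Write(index);                   // packet index
        w.Write(totalCount);              // total packet count for this message
        w.Write(payload);

        byte[] body = ms.ToArray();       // everything the checksum covers
        w.Write(CrcCcitt(body));          // trailing 2-byte checksum
        return ms.ToArray();
    }
}

// CRC-16/CCITT: polynomial 0x1021, initial value 0xFFFF.
static ushort CrcCcitt(byte[] data)
{
    ushort crc = 0xFFFF;
    foreach (byte b in data)
    {
        crc ^= (ushort)(b << 8);
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) != 0 ? (ushort)((crc << 1) ^ 0x1021) : (ushort)(crc << 1);
    }
    return crc;
}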
Not sure where the problem is. Since you mentioned servers, I assume TCP, yes?
A phone needs to communicate some of your PDU to another phone. It connects as a client to the server on the other phone. A socket-pair is established. It sends the data off to the server socket. The socket-pair is unique - no other streams that might be happening between the two phones should interrupt this, (will slow it up, of course).
I don't see how async/sync sockets, assuming implemented correctly, should affect this, either should work OK.
Is there something I cannot see here?
BTW, Maciek's plan to bolster up the protocol by adding an 'AA' start byte is an excellent idea - protocols that depend on sending just a length as the first element always seem to screw up eventually and result in a node trying to dequeue more bytes than there are atoms in the universe.
Rgds,
Martin
OK, now I understand the problem, (I completely misunderstood the topology of the OP network - I thought each phone was running a TCP server as well as client/s, but there is just one server on PC/whatever a-la-chatrooms). I don't see why you could not lock the socket class with a mutex, so serializing the messages. You could queue the messages to the socket, but this has the memory implications that you are trying to avoid.
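A minimal sketch of that mutex idea, assuming one gate per recipient socket that is held from a relayed message's header until its last chunk, so chunks from two senders cannot interleave (the SemaphoreSlim and the helper names are illustrative; requires System.Collections.Concurrent and System.Threading):
static readonly ConcurrentDictionary<Socket, SemaphoreSlim> sendGates =
    new ConcurrentDictionary<Socket, SemaphoreSlim>();

// Call once a relayed message's header has been parsed: blocks until no other
// message is mid-flight to this recipient.
static SemaphoreSlim AcquireSendGate(Socket recipient)
{
    var gate = sendGates.GetOrAdd(recipient, _ => new SemaphoreSlim(1, 1));
    gate.Wait();
    return gate;
}

// Call for each chunk of the message currently holding the gate.
static void RelayChunk(Socket recipient, byte[] chunk, int count)
{
    recipient.Send(chunk, 0, count, SocketFlags.None);
}

// Call after the final chunk has been relayed, letting the next message through.
static void ReleaseSendGate(SemaphoreSlim gate)
{
    gate.Release();
}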
You could dedicate a connection to supplying only instructions to the phone, eg 'open another socket connection to me and return this GUID - a message will then be streamed on the socket'. This uses up a socket-pair just for control and halves the capacity of your server :(
Are you stuck with the protocol you have described, or can you break your messages up into chunks with some ID in each chunk? You could then multiplex the messages onto one socket pair.
Another alternative, which again would require chunking the messages, is to introduce a 'control message' (maybe a chunk with 55 at the start instead of AA) that contains a message ID (a GUID?) that the phone uses to establish a second socket connection to the server, pass up the ID, and then be sent the second message on the new socket connection.
Another, (getting bored yet?), way of persuading the phone to recognise that a new message might be waiting would be to close the server socket that the phone is receiving a message over. The phone could then connect up again, tell the server that it only got xxxx bytes of message ID yyyy. The server could then reply with an instruction to open another socket for new message zzzz and then resume sending message yyyy. This might require some buffering on the server to ensure no data gets lost during the 'break'. You might want to implement this kind of 'restart streaming after break' functionality anyway since phones tend to go under bridges/tunnels just as the last KB of a 360MB video file is being streamed :( I know that TCP should take care of dropped packets, but if the phone wireless layer decides to close the socket for whatever reason...
None of these solutions is particularly satisfying. Interested to see what other ideas crop up...
Rgds,
Martin
Thanks for the help everyone. I've realised the simplest approach is to use synchronous send commands on the client, or at least a send command that must complete before the next item is sent. I'm handling this with my own send queue on the client, rather than various parts of the app just calling send() when they need to send something.
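For what it's worth, a minimal sketch of that kind of client-side send queue: the rest of the app enqueues byte[] messages and a single sender thread drains the queue with blocking sends, so one message always finishes before the next starts (sendQueue and clientSocket are illustrative names; BlockingCollection needs System.Collections.Concurrent):
BlockingCollection<byte[]> sendQueue = new BlockingCollection<byte[]>();

Thread sender = new Thread(() =>
{
    foreach (byte[] message in sendQueue.GetConsumingEnumerable())
    {
        int sent = 0;
        while (sent < message.Length)   // Send may accept fewer bytes than requested
            sent += clientSocket.Send(message, sent, message.Length - sent, SocketFlags.None);
    }
});
sender.IsBackground = true;
sender.Start();

// Anywhere in the app, instead of calling send() directly:
sendQueue.Add(messageBytes);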
