I'm adding file sharing capabilities to a basic .NET asynchronous socket server. The payload I'd like clients to send is just my headers+commandID+binaryFileData. This server needs to service requests from VB6 clients in addition to the .NET clients.
The party responsible for the VB6 clients worked out a convoluted way to transfer a file that I'm not particularly impressed by. It involves sending a small block of the file, after which the server asks for the next block. This party claims that the VB6 Winsock control doesn't work well if you try to do a send with a large payload ("large" meaning anything that isn't small -- 1MB already counts as "large"). This sounds absurd to me.
I want clients to write a single large payload to the socket, and just do message reassembly/hashing at the server end. Are there actually problems with large writes in VB6 Winsock control, or is the other party making excuses?
No, Winsock and the socket controls have no concept of files or sizes, just a stream of bytes. I expect they're hitting the buffer size, in which case they just need to send in chunks until it's all been sent. There's no need for the server to request the next chunk; that will just slow things down.
The claim that VB6 has problems with large (for small values of large) payloads is absolutely true. Worse, the resulting problems differ from installation to installation and with the phase of the moon. Sending more than a megabyte through a VB6 Winsock control is asking for trouble. Been there, done that, just believe me.
That said, we went down another route: a function would accept a payload of arbitrary size, chunk it up into megabyte-sized pieces and enqueue them. The Winsock control's events (SendComplete, IIRC) would be used to dequeue and send the next megabyte.
This is transparent to the consuming app (a single call, regardless of payload size), with the quirks worked around on the sending side. It works reliably without any sophisticated protocol, as the problems are entirely inside the client.
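For what it's worth, the same chunk-and-queue idea in C# looks roughly like this (a minimal sketch, not our production code; the VB6 version keys the dequeue off the Winsock control's SendComplete event instead of a callback):

    // Sketch: accept an arbitrarily large payload, cut it into 1 MB chunks and
    // queue them; the completion callback for each send kicks off the next chunk.
    // Assumes one sender at a time.
    using System;
    using System.Collections.Generic;
    using System.Net.Sockets;

    class ChunkedSender
    {
        const int ChunkSize = 1024 * 1024;                   // 1 MB per send
        readonly Socket _socket;
        readonly Queue<ArraySegment<byte>> _chunks = new Queue<ArraySegment<byte>>();

        public ChunkedSender(Socket socket) { _socket = socket; }

        // A single call for the consumer, regardless of payload size.
        public void Send(byte[] payload)
        {
            for (int offset = 0; offset < payload.Length; offset += ChunkSize)
            {
                int len = Math.Min(ChunkSize, payload.Length - offset);
                _chunks.Enqueue(new ArraySegment<byte>(payload, offset, len));
            }
            SendNext();
        }

        void SendNext()
        {
            if (_chunks.Count == 0) return;
            ArraySegment<byte> chunk = _chunks.Dequeue();
            _socket.BeginSend(chunk.Array, chunk.Offset, chunk.Count,
                              SocketFlags.None, OnSendComplete, null);
        }

        void OnSendComplete(IAsyncResult ar)
        {
            _socket.EndSend(ar);        // partial sends ignored for brevity
            SendNext();                 // "SendComplete": dequeue the next chunk
        }
    }

On the receiving end nothing changes; the chunks arrive as one continuous byte stream.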
Related
I'm working on a program which will asynchronously load large amounts of data from a server via gRPC (both ends use protobuf-net.Grpc).
While I can simply throw large amounts of data at the client via IAsyncEnumerable, I sometimes want to prioritize sending certain parts earlier (where the priorities are determined on the fly and not known at the start, kind of like sending a video stream and skipping ahead).
If I were to send data and wait for a response every time I'd be leaving a lot of bandwidth unused. The alternative would be throwing tons of data at the client, which could cause network congestion and delay prioritized packets by an indefinite amount.
Can I somehow use HTTP/2's / TCP's flow/congestion control for myself here? Or will I need to implement a basic flow/congestion control system on top of gRPC?
To be slightly more precise: I'd like to send as much data as possible without filling up any internal buffers, causing delays down the line.
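If I do end up building it myself, what I have in mind is roughly a bounded queue feeding the response stream, so the producer stalls once a fixed number of chunks are in flight (just a sketch; Chunk and the priority selection are placeholders):

    // Sketch: a bounded channel between the prioritizing producer and the
    // IAsyncEnumerable that gRPC streams to the client. When the channel is
    // full, WriteAsync waits, which is the crude "flow control" I'm after.
    using System.Collections.Generic;
    using System.Threading.Channels;
    using System.Threading.Tasks;

    public record Chunk(int Priority, byte[] Data);   // placeholder message type

    public class PrioritizedStream
    {
        private readonly Channel<Chunk> _channel =
            Channel.CreateBounded<Chunk>(new BoundedChannelOptions(capacity: 8)
            {
                FullMode = BoundedChannelFullMode.Wait   // back-pressure the producer
            });

        // Producer side: picks the next chunk by current priority.
        public async Task ProduceAsync(IEnumerable<Chunk> chunks)
        {
            foreach (var chunk in chunks)                 // priority selection omitted
                await _channel.Writer.WriteAsync(chunk);
            _channel.Writer.Complete();
        }

        // This is what the gRPC service method would return.
        public IAsyncEnumerable<Chunk> StreamAsync() =>
            _channel.Reader.ReadAllAsync();
    }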
Background:
The application I am programming uses async sockets (BeginSend, EndSend, BeginReceive, EndReceive) to send data between clients. The sockets are TCP, no socket flags, on IPv4.
It uses a scheme where it first sends a 4-byte (int) message length, followed by the message body whose length was specified in the length message. I use helper functions that handle the MessageLength and the MessageBody. The flow is something like this (a stripped-down sketch of the helpers follows the list):
BeginReceive()
EndReceive()
MessageLengthReceived()
BeginReceive()
MessageBodyReceived()
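Illustrative only, not my exact code; the helper names are simplified, and each stage loops in case EndReceive returns fewer bytes than requested:

    // Stripped-down sketch of the receive helpers.
    using System;
    using System.Net.Sockets;

    class MessageReceiver
    {
        readonly Socket _socket;
        byte[] _lengthBuffer, _bodyBuffer;
        int _received;

        public MessageReceiver(Socket socket) { _socket = socket; }

        public void BeginReceiveLength()
        {
            _lengthBuffer = new byte[4];
            _received = 0;
            _socket.BeginReceive(_lengthBuffer, 0, 4, SocketFlags.None, OnLength, null);
        }

        void OnLength(IAsyncResult ar)
        {
            _received += _socket.EndReceive(ar);
            if (_received < 4)
            {   // partial read: keep going until the whole 4-byte prefix is here
                _socket.BeginReceive(_lengthBuffer, _received, 4 - _received,
                                     SocketFlags.None, OnLength, null);
                return;
            }
            int length = BitConverter.ToInt32(_lengthBuffer, 0);   // MessageLengthReceived()
            _bodyBuffer = new byte[length];
            _received = 0;
            _socket.BeginReceive(_bodyBuffer, 0, length, SocketFlags.None, OnBody, null);
        }

        void OnBody(IAsyncResult ar)
        {
            _received += _socket.EndReceive(ar);
            if (_received < _bodyBuffer.Length)
            {   // partial read: keep going until the whole body is here
                _socket.BeginReceive(_bodyBuffer, _received, _bodyBuffer.Length - _received,
                                     SocketFlags.None, OnBody, null);
                return;
            }
            HandleMessage(_bodyBuffer);                             // MessageBodyReceived()
            BeginReceiveLength();                                   // wait for the next message
        }

        void HandleMessage(byte[] body) { /* dispatch on command ID, etc. */ }
    }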
Issue:
The issue arises when I send file data, in chunks of 16 KB (with a small additional overhead: offset, pieceIndex, etc.). Occasionally, when receiving the MessageLength, the receiver gets data from a random part of the previous message instead of the actual message length. Part of the problem is that it doesn't always happen at a set offset (e.g. the beginning or end of a file / piece / 16 KB chunk) and can happen with any file, but it happens more often when I send more files or larger files.
There are internal messages that are sent (e.g. RequestMessages) that never experience this problem. All the internal messages are < 100 bytes.
I've tried waiting for the file chunk to save completely before requesting another chunk, but it still fails. I've also tried limiting how many chunks to send at a time, but this only resolves the issue when using 127.0.0.1 (local clients), and not cross network (LAN).
I've spent hours going through my application to see if there are any issues, but I have yet to find any place where it would be sending the wrong data as a header. The issue always seems to occur between the send and the receive of the two clients. Are there socket settings or a method of sending that I should be using? Or could it be some sort of race condition? (I considered a race condition, but the fact that the offending data can come from anywhere in a file made me rethink this.)
From the question, I guess the problem you are dealing with is inside the MonoTorrent library.
I myself have never encountered such a problem, and looking at the code, I think the receive side is already ordered, because the network IO will not try to receive a second message until the first one has been handled. PieceMessages' write requests are queued in DiskIO as well, so that should not be the problem.
However, in the sending procedure, the ProcessQueue function can be called from several places, and the EnqueueSendMessage that ProcessQueue calls indirectly doesn't actually enqueue the message onto any queue; it simply calls Socket.BeginSend. I don't know whether Socket.BeginSend() has any queuing mechanism inside. If it doesn't, this may cause problems when multiple threads try to make the same socket BeginSend different data.
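If concurrent BeginSend calls turn out to be the cause, serializing all sends through one per-socket queue should rule it out. A rough sketch (this is not MonoTorrent's actual code; the names are made up):

    // Sketch: all sends for a socket go through one queue, so two threads can
    // never interleave their BeginSend calls on the same connection.
    using System;
    using System.Collections.Generic;
    using System.Net.Sockets;

    class SerializedSender
    {
        readonly Socket _socket;
        readonly Queue<byte[]> _pending = new Queue<byte[]>();
        bool _sendInProgress;

        public SerializedSender(Socket socket) { _socket = socket; }

        public void EnqueueSend(byte[] message)
        {
            lock (_pending)
            {
                _pending.Enqueue(message);
                if (!_sendInProgress) { _sendInProgress = true; BeginNext(); }
            }
        }

        void BeginNext()
        {
            byte[] message = _pending.Dequeue();
            _socket.BeginSend(message, 0, message.Length, SocketFlags.None, OnSent, null);
        }

        void OnSent(IAsyncResult ar)
        {
            _socket.EndSend(ar);      // partial sends ignored for brevity
            lock (_pending)
            {
                if (_pending.Count > 0) BeginNext();
                else _sendInProgress = false;
            }
        }
    }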
I am trying to send image data from a compiled C++ process to a compiled C# process. The C++ process is accessing the webcam and doing some processing on the image. The image is represented by a 2D array of pixels, with each pixel value being an 8-bit value (0-255) which is the gray-scale value of that pixel.
The image size is 640 by 480.
The C# application does some more processing and displays this image onto the screen. The processes are both running at the same time on my laptop (Windows 7 OS) but I cannot make a single process that does all the steps which is why I need my C++ and C# code to communicate.
I was wondering what is the best way to do this? I read about writing a UDP or TCP server in the C# part and a client in the C++ part; I could then send over the image data as datagrams. I was wondering if this is the best way, and if so, whether UDP or TCP would be better?
EDIT: The C++ process is unmanaged C++, I don't have the option to run it as a managed DLL. Could I use named pipes to send over the image?
Finally, is UDP guaranteed to deliver in order if it is communicating locally? I realise the image would be over the size limit for a UDP datagram, but if delivery is in order I should be able to split the images up and send them over.
Interprocess communication can be done via sockets or pipes.
With sockets (TCP and UDP) you're essentially sending the data over the network to yourself. Luckily, since your computer knows itself, the data never actually leaves the machine, so this should be pretty quick. TCP is guaranteed to be in order and has a bunch of other nice features, while UDP pretty much just slaps some headers onto the data and hopes for the best. For this application TCP should be fine; UDP adds unneeded complexity.
Pipes are the other way for two processes to communicate. You basically have the C++ or C# process create a pipe and start the other process. You then use the pipe like a file: write to it and read from it. This can be done in C/C++ using a combination of the pipe, fork, and exec functions, or simply using the popen function. C# probably has similar functions.
I suggest using a pipe via _popen (the Windows version of popen), writing a series of ints to the pipe and reading them on the other side. This is probably the easiest way... besides using a single language, of course...
If you are writing both of the programs, you can compile the C++ one as a DLL and call a function that returns an array or some structure from your C# program using the DllImport attribute in the System.Runtime.InteropServices namespace.
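For example, if the C++ side exported a function that fills a caller-supplied buffer, the C# side could look roughly like this (the DLL and function names are made up for illustration):

    // Sketch: P/Invoke into a hypothetical native export that copies the latest
    // 640x480 grayscale frame into a managed buffer.
    using System.Runtime.InteropServices;

    class CameraInterop
    {
        // Assumed C++ export: extern "C" __declspec(dllexport)
        //     int GetLatestFrame(unsigned char* buffer, int bufferSize);
        [DllImport("CameraProcessing.dll", CallingConvention = CallingConvention.Cdecl)]
        static extern int GetLatestFrame(byte[] buffer, int bufferSize);

        public static byte[] ReadFrame()
        {
            var frame = new byte[640 * 480];          // one byte per grayscale pixel
            int copied = GetLatestFrame(frame, frame.Length);
            return copied == frame.Length ? frame : null;
        }
    }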
Why can't you do it in the same process? Is it because you need to mix C# and C++? In that case C++/CLI can be used as a bridge between the environments to have both C# code for the .NET CLR and C++ code compiled natively in one process.
If you really need two processes there are several options when running on a local machine, but a small TCP-based service is probably best. The size of each image will be about 307 KB (640 × 480 bytes), which is larger than the roughly 64 KB limit of a UDP datagram.
I was wondering if this is the best way and if it is whether UDP or TCP would be better?
You usually resort to UDP as a speed optimization when TCP is not fast enough and packet loss is merely inconvenient, rather than something you can't handle at all. If you can't handle losing part of the image in transmission, I doubt UDP is an option for you.
Moreover, UDP is unlikely to give a performance boost in your case since you'll be using the loopback interface. This means that all TCP packets are likely to arrive in order and without loss, making TCP extra cheap.
If you write your application using TCP and in the future, for some reason, you decide the processes no longer run on the same machine, you won't have to change your code.
Finally, TCP sockets are just easier to use, so unless TCP is not fast enough on your machine, I would stick with TCP sockets.
is UDP guaranteed in order if it is communicating locally?
AFAIK, this behavior is not guaranteed. It is very likely to work most of the time, but unless you can find a quote from relevant documentation, I wouldn't count on this.
Could I use named pipes to send over the image?
Yes, named pipes are very similar to sockets, but they're known to be slow.
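For what it's worth, a minimal sketch of the C# end of a named pipe, assuming the C++ process connects to a pipe called camera_frames and writes raw 640x480 frames into it (the name is made up):

    // Sketch: read fixed-size grayscale frames from a named pipe.
    using System.IO.Pipes;

    class PipeReceiver
    {
        const int FrameSize = 640 * 480;              // one byte per pixel

        public static void Run()
        {
            using (var pipe = new NamedPipeServerStream("camera_frames", PipeDirection.In))
            {
                pipe.WaitForConnection();             // blocks until the C++ client connects
                var frame = new byte[FrameSize];
                while (true)
                {
                    int read = 0;
                    while (read < FrameSize)          // a single Read may return less
                    {
                        int n = pipe.Read(frame, read, FrameSize - read);
                        if (n == 0) return;           // writer closed the pipe
                        read += n;
                    }
                    // frame now holds one complete image; hand it to the display code
                }
            }
        }
    }

On the C++ side the pipe would be opened as \\.\pipe\camera_frames and written to like an ordinary file handle.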
One way of doing it, apart from sockets, would be to save the image data to disk from your C++ application and read it off the disk in your C# application. Of course you will need some sort of read/write synchronisation so that the file is not read before it's fully written.
Or, if you finally decide to use UDP or TCP, try RTP. RTP uses UDP with an extra layer of timestamps and sequence numbering to ensure correct ordering of data delivery. You should be able to find C++ and C# implementations of the protocol. Specifically worth mentioning: you can send images over an RTP/MJPEG stream if your application is producing JPEG images.
Just move to completely managed code :p (To keep it all in the same process)
https://net7mma.codeplex.com/SourceControl/latest has a C# RtspServer and RtpClient
I suppose similar questions have already been asked, but I was unable to find any. Please feel free to point me to an existing solution.
I'll explain my scenario. I'd like to create a server application. There are many clients (currently only a few dozen, but it should scale up to 1000+) that connect to the server (which is running on a single machine).
Each client periodically sends a small amount of data to the server to process (processing is quick). The server can also send small amounts of data to each client on a regular basis. The response time should be low (<100 ms), but realtime or anything like that is not required.
My first idea goes back to when I was still programming in VB6: create a server socket to listen for incoming requests, then create a client socket for each possible client (single-threaded). I doubt this scales well, and it is also difficult to implement the communication that way.
So I figured I'd create a listener thread to accept new client connections and a different thread to actually read the incoming data by the clients. Since there are going to be many clients, I don't want to create a thread for each client. Instead, I'd prefer to use a single thread to read all incoming data in a loop, then either processing these data directly or creating work items for a different thread to process. I guess this approach would scale well enough. Any comments on this idea are most welcome.
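Roughly what I'm picturing for that reader thread, just to illustrate the idea (a sketch, not working production code):

    // Sketch of the single-reader idea: one thread polls every client socket
    // with Socket.Select and pushes whatever it reads onto a queue that a
    // separate worker thread processes.
    using System;
    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.Net.Sockets;
    using System.Threading;

    class ReaderLoop
    {
        readonly List<Socket> _clients = new List<Socket>();          // filled by the accept thread
        readonly BlockingCollection<byte[]> _work = new BlockingCollection<byte[]>();

        public void Run()
        {
            var buffer = new byte[4096];
            while (true)
            {
                List<Socket> readable;
                lock (_clients) readable = new List<Socket>(_clients);
                if (readable.Count == 0) { Thread.Sleep(10); continue; }

                Socket.Select(readable, null, null, 1000);            // 1 ms timeout (in microseconds)
                foreach (var socket in readable)                      // only sockets with pending data remain
                {
                    int n = socket.Receive(buffer);
                    if (n == 0) { lock (_clients) _clients.Remove(socket); continue; }
                    var item = new byte[n];
                    Buffer.BlockCopy(buffer, 0, item, 0, n);
                    _work.Add(item);                                  // consumed by the worker thread
                }
            }
        }
    }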
The remaining problem I'm worried about is ease of communication. The above solution seems to require a manual protocol, possibly sending ASCII commands via TCP. While this would work, I think there should be a better way nowadays.
Some interface/proxyish way seems reasonable. I worked a bit with Java RMI before. From my point of understanding, .NET Remoting serves a similar purpose. Is Remoting a feasible solution to the scenario I described (many clients)? Is there an even better way I don't know of yet?
Edit:
This is not in LAN, but internet, if that matters.
If possible, it should also run under Linux.
As AresnMkrt pointed out, you should try WCF.
Just take it as is (with netTcpBinding, but don't forget to switch security off) and create a Tracer Bullet - measure if performance meets your requirements.
If not, you can try to tune WCF - WCF is very extensible, and you can modify message serialization to send ASCII messages as you want.
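Getting that tracer bullet up with netTcpBinding and security switched off only takes a few lines, roughly along these lines (the contract and address are placeholders):

    // Sketch: self-hosted WCF service over netTcpBinding with security disabled.
    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IDataService
    {
        [OperationContract]
        void Submit(byte[] payload);          // placeholder operation
    }

    public class DataService : IDataService
    {
        public void Submit(byte[] payload) { /* quick processing here */ }
    }

    class Program
    {
        static void Main()
        {
            var binding = new NetTcpBinding(SecurityMode.None);   // security off for the tracer bullet
            using (var host = new ServiceHost(typeof(DataService),
                                              new Uri("net.tcp://localhost:9000/data")))
            {
                host.AddServiceEndpoint(typeof(IDataService), binding, "");
                host.Open();
                Console.WriteLine("Listening...");
                Console.ReadLine();
            }
        }
    }

The client side just uses a ChannelFactory<IDataService> with the same binding and address.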
Are you sure you need a binary protocol? Rather, are you sure you need to invent a whole new protocol where a plain RESTful service with JSON/XML would suffice? WCF can help you a lot in this regard.
I am building a C# application, using the server-client model, where the server is sending an image (about 100 KB) to the client through a socket every 50 ms...
I was using TCP, but besides the overhead of this protocol, sometimes the client ended up with more than one image on the socket. And I still haven't thought of a clever mechanism to split the bytes of each image (actually, I just need the most recent one).
I tried using UDP, but came to the conclusion that I can't send 100 KB datagrams, only 64 KB ones. And even so, I shouldn't use more than 1500 bytes; otherwise the packet would be fragmented along the network and the chances of losing parts of it would be greater.
So now I'm a bit confused. Should I continue using TCP and put some delimiter bytes at the end of each image so the client can separate them? Or should I use UDP, send datagrams of 1500 bytes and come up with a mechanism for ordering and recovery?
The key goal here is transmitting the images very fast. I don't mind losing some on the way as long as the client keeps receiving newer ones.
Or should I use another protocol? Thanks in advance!
You should consider using Real-time Transport Protocol (aka RTP).
The underlying transport protocol used by RTP is UDP, but it adds additional layering to indicate timestamps, sequence order, etc.
RTP is the main media transfer protocol used by VoIP and video-over-IP systems. I'd be quite surprised if you can't find existing C# implementations of the protocol.
Also, if your image files are in JPEG format you should be able to produce an RTP/MJPEG stream. There are quite a few video viewers that already have native support for receiving and displaying such a stream, since some IP webcams output in that format.
First of all, your network might not be able to handle this no matter what you do, but I would go with UDP. You could try splitting up the images into smaller bits, and only display each image if you get all the parts before the next image has arrived.
Also, you could use RTP as others have mentioned, or try UDT. It's a fairly lightweight reliable layer on top of UDP. It should be faster than TCP.
I'd recommend using UDP if:
Your application can cope with an image or small burst of images not getting through,
You can squeeze your images into 65535 bytes.
If you're implementing a video conferencing application then it's worth noting that the majority use UDP.
Otherwise you should use TCP and implement an approach to delimit the images. One suggestion in that regard is to take a look at the RTP protocol. It's specifically designed for carrying real-time data such as VoIP and video.
Edit: I've looked around quite a few times in the past for a .NET RTP library and, apart from wrappers for non-.NET libraries or half-completed ones, I did not have much success. I just had another quick look, and this one, ConferenceXP, looks a bit more promising.
The other answers cover good options re: UDP or a 'real' protocol like RTP.
However, if you want to stick with TCP, just build yourself a simple 'message' structure to cover your needs. The simplest? length-prefixed. First, send the length of the image as 4 bytes, then send the image itself. Easy enough to write the client and server for.
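A rough sketch of that framing (blocking NetworkStream calls for brevity; adapt to your async code as needed, the helper names are made up):

    // Sketch: length-prefixed image frames over a TCP NetworkStream.
    using System;
    using System.IO;
    using System.Net.Sockets;

    static class ImageFraming
    {
        public static void SendImage(NetworkStream stream, byte[] image)
        {
            byte[] length = BitConverter.GetBytes(image.Length);  // 4-byte prefix
            stream.Write(length, 0, 4);
            stream.Write(image, 0, image.Length);
        }

        public static byte[] ReceiveImage(NetworkStream stream)
        {
            byte[] prefix = ReadExactly(stream, 4);
            int length = BitConverter.ToInt32(prefix, 0);
            return ReadExactly(stream, length);
        }

        // A single Read can return fewer bytes than asked for, so loop.
        static byte[] ReadExactly(Stream stream, int count)
        {
            var buffer = new byte[count];
            int read = 0;
            while (read < count)
            {
                int n = stream.Read(buffer, read, count - read);
                if (n == 0) throw new EndOfStreamException();
                read += n;
            }
            return buffer;
        }
    }

The only real requirement is that the reader loops until it has the full image, since a single Read on a TCP stream can return less than a complete frame.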
If the latest is more important than every picture, UDP should be your first choice.
But if you're dealing with frames larger than 64K you'll have to do some form of re-framing yourself. Don't be too concerned about fragmented frames; either the lower layer will deal with it or you will as part of your re-framing, and you only want complete pictures anyway.
What you will want is some form of encapsulation with timestamps/sequences.
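A possible datagram layout, just to make the idea concrete (the field sizes and the 1400-byte payload are arbitrary choices):

    // Sketch: split a frame into <1400-byte UDP chunks, each tagged with a
    // frame sequence number, chunk index and chunk count so the receiver can
    // reassemble complete frames and discard anything older than the newest.
    using System;
    using System.Collections.Generic;

    static class FramePacketizer
    {
        const int PayloadSize = 1400;                 // stay under a typical MTU

        public static IEnumerable<byte[]> Split(ushort frameSeq, byte[] frame)
        {
            ushort chunkCount = (ushort)((frame.Length + PayloadSize - 1) / PayloadSize);
            for (ushort i = 0; i < chunkCount; i++)
            {
                int offset = i * PayloadSize;
                int len = Math.Min(PayloadSize, frame.Length - offset);
                var packet = new byte[6 + len];
                BitConverter.GetBytes(frameSeq).CopyTo(packet, 0);    // bytes 0-1: frame number
                BitConverter.GetBytes(i).CopyTo(packet, 2);           // bytes 2-3: chunk index
                BitConverter.GetBytes(chunkCount).CopyTo(packet, 4);  // bytes 4-5: chunk count
                Buffer.BlockCopy(frame, offset, packet, 6, len);
                yield return packet;                                   // send each with UdpClient.Send
            }
        }
    }

The receiver buffers chunks per frame number, displays a frame once all of its chunks are present, and throws away anything older than the newest complete frame.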