I hope the question title isn't imprecise, but it may turn out that a direct replacement isn't available and restructuring the code becomes inevitable.
My task is to stream audio frames from HTTP, pipe them through ffmpeg and then shove them into some audio buffer.
A classical approach would probably involve multiple threads and lots of garbage, which I want to avoid.
My modern attempt used IAsyncEnumerable<Memory<byte>>, where I'd basically read content from HTTP into a fixed-size byte array that is allocated once.
Then I'd yield return the Memory struct ("pointer"), which would cause the caller to immediately consume and transform that content via ffmpeg.
With that, I'd stay in lock-step defined by the chunk reads from the HttpClient. That way, the whole processing would happen on one thread and I would only have one big byte array as data store with a long lifetime.
The problem with this is that Unity's Mono version doesn't support the C# 8.0 async-streams feature (i.e. await foreach over the async enumerable). So I need to come up with a replacement.
I thought about using System.Threading.Channels, but those come with a caveat in the way the control flow is handled: with Channels, I cannot guarantee that a written Memory<T> is read immediately, so it can happen that HTTP overwrites the backing buffer before the other end has read its content. Avoiding that would mean copying lots of data, causing garbage.
An oldschool alternative would be to maintain some kind of ring buffer, with a write pointer and a read pointer that are advanced whenever each end writes/reads. Hand-rolling that felt dumb though; maybe there is a roughly equivalent and elegant API?
Also, would you rather just have two threads and let them busy-wait? Or can I maybe just accept the garbage-collector pressure and use a regular queue structure, one that potentially even uses notify/wait to wake up waiting threads so they don't have to SpinWait/busy-wait? A sketch of the hand-rolled lock-step idea follows below.
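For illustration, here is a minimal sketch of the lock-step handoff I have in mind, without async streams: one long-lived buffer guarded by two SemaphoreSlim handles, so the HTTP writer can never overwrite data the consumer hasn't read yet. All names are hypothetical, and it assumes Memory<byte> is available (e.g. via the System.Memory package):

// Hypothetical sketch: single-buffer ping-pong handoff via two semaphores.
// The writer may only fill the buffer after the reader has released it, and
// vice versa, so there is no copying and no garbage beyond one long-lived array.
class LockStepBuffer
{
    private readonly byte[] _buffer = new byte[64 * 1024]; // allocated once
    private readonly SemaphoreSlim _writable = new SemaphoreSlim(1, 1);
    private readonly SemaphoreSlim _readable = new SemaphoreSlim(0, 1);
    private int _length;

    public async Task WriteAsync(Stream source)
    {
        await _writable.WaitAsync();                  // wait until the buffer is free
        _length = await source.ReadAsync(_buffer, 0, _buffer.Length);
        _readable.Release();                          // hand over to the reader
    }

    public async Task<Memory<byte>> ReadAsync()
    {
        await _readable.WaitAsync();                  // wait until data is ready
        return new Memory<byte>(_buffer, 0, _length);
    }

    public void DoneReading() => _writable.Release(); // give the buffer back to the writer
}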
Let's say I have a static list List<string> dataQueue, where data keeps getting added at random intervals and also at a varying rate (1-1000 entries/second).
My main objective is to send the data from the list to the server; I'm using the TcpClient class.
What I've done so far: I'm sending the data synchronously to the server in a single thread
byte[] bytes = Encoding.ASCII.GetBytes(message);
tcpClient.GetStream().Write(bytes, 0, bytes.Length);
//The client is already connected at the start
And I remove the entry from the list, once the data is sent.
This works fine, but the data isn't being sent fast enough: the list keeps growing and consuming more memory while it gets iterated and sent one entry at a time.
My question is: can I use the same tcpClient object to write concurrently from another thread, or can I use another TcpClient object with a new connection to the same server in another thread? What is the most efficient (quickest) way to send this data to the server?
PS: I don't want to use UDP
Right; this is a fun topic which I think I can opine about. It sounds like you are sharing a single socket between multiple threads - perfectly valid as long as you do it very carefully. A TCP socket is a logical stream of bytes, so you can't use it concurrently as such, but if your code is fast enough, you can share the socket very effectively, with each message being consecutive.
Probably the very first thing to look at is: how are you actually writing the data to the socket? what is your framing/encoding code like? If this code is simply bad/inefficient: it can probably be improved. For example, is it indirectly creating a new byte[] per string via a naive Encode call? Are there multiple buffers involved? Is it calling Send multiple times while framing? How is it approaching the issue of packet fragmentation? etc
As a very first thing to try - you could avoid some buffer allocations:
var enc = Encoding.ASCII;
byte[] bytes = ArrayPool<byte>.Shared.Rent(enc.GetMaxByteCount(message.Length));
// note: leased buffers can be oversized; and in general, GetMaxByteCount will
// also be oversized; so it is *very* important to track how many bytes you've used
try
{
    int byteCount = enc.GetBytes(message, 0, message.Length, bytes, 0);
    tcpClient.GetStream().Write(bytes, 0, byteCount);
}
finally
{
    // return the lease even if Write throws, so the pool doesn't leak buffers
    ArrayPool<byte>.Shared.Return(bytes);
}
This uses a leased buffer to avoid creating a byte[] each time - which can massively improve GC impact. If it was me, I'd also probably be using a raw Socket rather than the TcpClient and Stream abstractions, which frankly don't gain you a lot. Note: if you have other framing to do: include that in the size of the buffer you rent, use appropriate offsets when writing each piece, and only write once - i.e. prepare the entire buffer once - avoid multiple calls to Send.
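To make the "prepare the entire buffer, write once" advice concrete, here is a hypothetical sketch of a simple length-prefixed frame. The 4-byte little-endian header is an assumption - use whatever framing your protocol actually defines - and it assumes BinaryPrimitives (System.Buffers.Binary) is available:

var enc = Encoding.ASCII;
// rent room for a hypothetical 4-byte length prefix plus the worst-case payload
byte[] frame = ArrayPool<byte>.Shared.Rent(4 + enc.GetMaxByteCount(message.Length));
try
{
    int payload = enc.GetBytes(message, 0, message.Length, frame, 4);
    BinaryPrimitives.WriteInt32LittleEndian(frame.AsSpan(0, 4), payload); // header
    tcpClient.GetStream().Write(frame, 0, 4 + payload); // one write: header + payload
}
finally
{
    ArrayPool<byte>.Shared.Return(frame);
}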
Right now, it sounds like you have a queue and a dedicated writer; i.e. your app code appends to the queue, and your writer code dequeues things and writes them to the socket. This is a reasonable way to implement things, although I'd add some notes:
List<T> is a terrible way to implement a queue - removing things from the start requires a reshuffle of everything else (which is expensive); if possible, prefer Queue<T>, which is implemented perfectly for your scenario
it will require synchronization, meaning you need to ensure that only one thread alters the queue at a time - this is typically done via a simple lock, i.e. lock(queue) {queue.Enqueue(newItem);} and SomeItem next; lock(queue) { next = queue.Count == 0 ? null : queue.Dequeue(); } if (next != null) {...write it...} - a fuller sketch follows below
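To illustrate both notes together, a rough sketch (my illustrative names - WriteFrame stands in for whatever framing/encoding code you use, such as the rented-buffer version above):

// shared state: a Queue<T> guarded by a simple lock
static readonly Queue<string> queue = new Queue<string>();

// app code: enqueue under the lock
static void Publish(string message)
{
    lock (queue) { queue.Enqueue(message); }
}

// dedicated writer thread: dequeue under the lock, write outside it
static void WriterLoop(TcpClient tcpClient)
{
    while (true)
    {
        string next = null;
        lock (queue)
        {
            if (queue.Count != 0) next = queue.Dequeue();
        }
        if (next != null) WriteFrame(tcpClient, next); // hypothetical encode-and-send
        else Thread.Sleep(1); // or use Monitor.Wait/Pulse to avoid polling
    }
}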
This approach is simple, and has some advantages in terms of avoiding packet fragmentation - the writer can use a staging buffer, and only actually write to the socket when a certain threshold is buffered, or when the queue is empty, for example - but it has the possibility of creating a huge backlog when stalls occur.
However! The fact that a backlog has occurred indicates that something isn't keeping up; this could be the network (bandwidth), the remote server (CPU) - or perhaps the local outbound network hardware. If this is only happening in small blips that then resolve themselves - fine (especially if it happens when some of the outbound messages are huge), but: one to watch.
If this kind of backlog is recurring, then frankly you need to consider that you're simply saturated for the current design, so you need to unblock one of the pinch points:
making sure your encoding code is efficient is step zero
you could move the encode step into the app-code, i.e. prepare a frame before taking the lock, encode the message, and only enqueue an entirely prepared frame; this means that the writer thread doesn't have to do anything except dequeue, write, recycle - but it makes buffer management more complex (obviously you can't recycle buffers until they've been completely processed)
reducing packet fragmentation may help significantly, if you're not already taking steps to achieve that
otherwise, you might need (after investigating the blockage):
better local network hardware (NIC) or physical machine hardware (CPU etc)
multiple sockets (and queues/workers) to round-robin between, distributing load
perhaps multiple server processes, with a port per server, so your multiple sockets are talking to different processes
a better server
multiple servers
Note: in any scenario that involves multiple sockets, you want to be careful not to go mad and have too many dedicated worker threads; if that number goes above, say, 10 threads, you probably want to consider other options - perhaps involving async IO and/or pipelines (below).
For completeness, another basic approach is to write from the app-code; this approach is even simpler and avoids the backlog of unsent work, but it means that your app-code threads themselves will back up under load. If your app-code threads are actually thread-pool worker threads and they're blocked on a sync/lock, this can be really bad: you do not want to saturate the thread-pool, as you can end up in a scenario where no thread-pool threads are available to satisfy the IO work required to unblock whichever writer is active - which lands you in real problems. This is not usually a scheme you want for high load/volume, as it gets problematic very quickly - and it is very hard to avoid packet fragmentation, since an individual message has no way of knowing whether more messages are about to come in.
A more recent option to consider is "pipelines"; this is a new IO framework in .NET designed for high-volume networking, giving particular attention to things like async IO, buffer re-use, and a well-implemented buffer/backlog mechanism that makes it possible to use the simple writer approach (synchronize while writing) without that translating into direct sends - it manifests as an async writer with access to a backlog, which makes packet-fragmentation avoidance simple and efficient. This is quite an advanced area, but it can be very effective. The problematic part for you will be that it is designed for async usage throughout, even for writes - so if your app-code is currently synchronous, this could be a pain to implement. But it is an area to consider. I have a number of blog posts on this topic, and a range of OSS examples and real-life libraries that make use of pipelines that I can point you at, but this isn't a "quick fix" - it is a radical overhaul of your entire IO layer. It also isn't a magic bullet - it can only remove overhead due to local IO processing costs.
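For a flavour of the API, a minimal sketch assuming the System.IO.Pipelines package (FillWithFrame and socket are stand-ins, not real API):

var pipe = new Pipe();

// writer side: request a buffer from the pipe, fill it, commit, flush
Memory<byte> memory = pipe.Writer.GetMemory(512); // at least 512 bytes, pooled
int written = FillWithFrame(memory);              // hypothetical framing/encoding step
pipe.Writer.Advance(written);                     // commit what we actually used
await pipe.Writer.FlushAsync();                   // may apply back-pressure

// reader side: drains the backlog and performs the actual socket sends
ReadResult result = await pipe.Reader.ReadAsync();
foreach (ReadOnlyMemory<byte> segment in result.Buffer)
{
    await socket.SendAsync(segment, SocketFlags.None); // one send per segment
}
pipe.Reader.AdvanceTo(result.Buffer.End);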
Is there a valid reason to not use TcpListener for implementing a high performance/high throughput TCP server instead of SocketAsyncEventArgs?
I've already implemented this high performance/high throughput TCP server using SocketAsyncEventArgs and went through all sorts of headaches handling those pinned buffers, using a big pre-allocated byte array and pools of SocketAsyncEventArgs for accepting and receiving, putting it together with some low-level stuff and shiny smart code, some TPL Dataflow and some Rx - and it works perfectly; almost textbook in this endeavor. Actually, I've learnt more than 80% of this stuff from other people's code.
However there are some problems and concerns:
Complexity: I cannot delegate any sort of modifications to this server to another member of the team. That ties me to this sort of task, and I cannot pay enough attention to other parts of other projects.
Memory usage (pinned byte arrays): Using SocketAsyncEventArgs, the pools need to be pre-allocated. So for handling 100,000 concurrent connections (the worst case, even across different ports), a big pile of RAM uselessly hovers there, pre-allocated - even though those conditions are only met at some times, the server should be able to handle 1 or 2 such peaks every day.
TcpListener actually works well: I actually put TcpListener to the test (with some tricks, like using AcceptTcpClient on a dedicated thread rather than the async version, then handing the accepted connections to a ConcurrentQueue instead of creating Tasks in-place, and the like), and with the latest version of .NET it worked very well - almost as well as SocketAsyncEventArgs, with no data loss and a low memory footprint, which helps with not wasting too much RAM on the server; and no pre-allocation is needed. (A sketch of that accept-loop trick follows below.)
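A minimal sketch of that trick, for illustration only (HandleClient is a hypothetical connection handler):

var listener = new TcpListener(IPAddress.Any, 8080);
listener.Start();
var accepted = new ConcurrentQueue<TcpClient>();

// dedicated accept thread: blocking accept, no Task created per connection
new Thread(() =>
{
    while (true)
    {
        accepted.Enqueue(listener.AcceptTcpClient());
    }
}) { IsBackground = true }.Start();

// worker thread(s): drain the queue and service the connections
new Thread(() =>
{
    while (true)
    {
        if (accepted.TryDequeue(out TcpClient client)) HandleClient(client); // hypothetical
        else Thread.Sleep(1);
    }
}) { IsBackground = true }.Start();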
So why do I not see TcpListener being used anywhere, while everybody (including myself) is using SocketAsyncEventArgs? Am I missing something?
I see no evidence that this question is about TcpListener at all. It seems you are only concerned with the code that deals with a connection that already has been accepted. Such a connection is independent of the listener.
SocketAsyncEventArgs is a CPU-load optimization. I'm convinced you can achieve a higher rate of operations per second with it. How significant is the difference to normal APM/TAP async IO? Certainly less than an order of magnitude. Probably between 1.2x and 3x. Last time I benchmarked loopback TCP transaction rate I found that the kernel took about half of the CPU usage. That means your app can get at most 2x faster by being infinitely optimized.
Remember that SocketAsyncEventArgs was added to the BCL back in .NET Framework 3.5 (around 2007), when CPUs were far less capable.
Use SocketAsyncEventArgs only when you have evidence that you need it. It causes you to be far less productive. More potential for bugs.
Here's the template that your socket processing loop should look like:
while (ConnectionEstablished()) {
    var someData = await ReadFromSocketAsync(socket);
    await ProcessDataAsync(someData);
}
Very simple code. No callbacks thanks to await.
In case you are concerned about managed heap fragmentation: Allocate a new byte[1024 * 1024] on startup. When you want to read from a socket read a single byte into some free portion of this buffer. When that single-byte read completes you ask how many bytes are actually there (Socket.Available) and synchronously pull the rest. That way you only pin a single rather small buffer and still can use async IO to wait for data to arrive.
This technique does not require polling. Since Socket.Available can only increase without reading from the socket, we do not risk accidentally performing a read that is too small.
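A minimal sketch of that technique (illustrative names; assumes a connected Socket):

byte[] slab = new byte[1024 * 1024]; // allocated once at startup; long-lived, so pinning it is harmless

async Task<int> ReadChunkAsync(Socket socket, NetworkStream stream, int offset, int maxLen)
{
    // async wait for the first byte to arrive
    int got = await stream.ReadAsync(slab, offset, 1);
    if (got == 0) return 0; // connection closed

    // then synchronously drain whatever the kernel has already buffered
    int extra = Math.Min(socket.Available, maxLen - 1);
    if (extra > 0) got += socket.Receive(slab, offset + 1, extra, SocketFlags.None);
    return got; // data now sits in slab[offset .. offset + got)
}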
Alternatively, you can combat managed heap fragmentation by allocating few very big buffers and handing out chunks.
Or, if you don't find this to be a problem you don't need to do anything.
So I've already written a TCP server with SocketAsyncEventArgs and the socket.***Async methods. But I do love the await/async way of writing code using the Stream.ReadAsync and Stream.WriteAsync methods.
Is there any difference performance/memory wise, or do I simply make a difference in syntax?
The advantages of using SocketAsyncEventArgs are to reduce strain on the garbage collector and to keep the sockets' buffers grouped together, preventing out-of-memory exceptions.
By using a pool of SocketAsyncEventArgs, you can eliminate the need to create and collect the IAsyncResult for every call to read and write.
Secondly, each SocketAsyncEventArgs can be assigned a portion of a large byte[] block. This prevents memory fragmentation, which helps reduce the memory footprint. Although this can technically also be done with the Begin/End calls, it is easier to permanently assign each SocketAsyncEventArgs its own buffer, instead of assigning a block every time a read or write is called with the Begin/End methods; a sketch follows below.
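A rough sketch of that slicing scheme (the pool size and buffer size are arbitrary assumptions):

const int BufferSize = 4096;
const int PoolSize = 1000;

// one large block: a single long-lived allocation, so pinning parts of it
// for IO cannot fragment the managed heap
byte[] block = new byte[BufferSize * PoolSize];

var pool = new ConcurrentStack<SocketAsyncEventArgs>();
for (int i = 0; i < PoolSize; i++)
{
    var args = new SocketAsyncEventArgs();
    args.SetBuffer(block, i * BufferSize, BufferSize); // permanent slice of the block
    pool.Push(args);
}

// per operation: pop an args instance from the pool, use it, push it back when done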
One last difference that can be important: according to the MSDN Library, the Begin/End Send methods may block while the socket buffer is full. SocketAsyncEventArgs will not block; instead, only a certain number of bytes will be written, depending on the amount of space in the socket's buffer, and the callback will report the number of bytes written.
The advantages are really not noticeable unless it is an edge case. Unless you run into either of the problems specified, I suggest that you choose what is most comfortable and maintainable for you.
I am looking for advice on how best to architect a buffer structure that can handle a massive amount of incoming data that is processed at a slower speed than it arrives.
I programmed a customized binary reader that can stream up to 12 million byte arrays per second on a single thread, and I am looking to process the byte-array stream in a separate structure on the same machine, on a different thread. The problem is that the consuming structure cannot keep up with the amount of incoming data from the producer, and thus I believe I need some sort of buffer to handle this properly. I am most interested in advice regarding the overall architecture rather than code examples. I target .NET 4.0. Here is more information about my current setup and requirements.
Producer: Runs on a dedicated thread and reads byte arrays from files on a physical storage medium (SSD, OCZ Vertex 3 Max IOPS). Approximate throughput is 12 million byte arrays per second. Each array is only 16 bytes in size. Fully implemented.
Consumer: Supposed to run on a separate thread from the producer. Consumes byte arrays, but must parse them into several primitive data types before processing the data, so the processing speed is significantly slower than the producer's publishing speed. Consumer structure is fully implemented.
In between: Looking to set up a buffered structure that the producer can publish to and the consumer can, well, consume from. Not implemented.
I would be happy if some of you could comment from your own experience or expertise on what best to consider in order to handle such a structure. Should the buffer implement a throttling algorithm that only requests new data from the producer when the buffer/queue is half empty, or so? How are locking and blocking handled? I am sorry, I have very limited experience in this space; I have so far handled it through a messaging bus, but any messaging-bus technology I looked at is definitely unable to handle the throughput I am looking for. Any comments very welcome!!!
Edit: Forgot to mention, the data is only consumed by one single consumer. Also the order in which the arrays are published does matter; the order needs to be preserved such that the consumer consumes in the same order.
16 bytes, (call it 16B), is too small for efficient inter-thread comms. Queueing up such small buffers will result in more CPU spent on inter-thread comms than on actual useful processing of the data.
So, chunk them up.
Declare some buffer class, (C16B, say), that contains a nice, big array of these 16B's - at least 4K's worth - and a 'count' int to show how many are loaded (the last buffer loaded from a file will probably not be full). It will help if you place a cache-line-sized empty byte array just in front of this 16B array - it helps to avoid false sharing. You can maybe put the code that processes the 16B's in as a method, 'Process16B', say, and perhaps the code that loads the array too - taking a file descriptor as a parameter. This class can now be efficiently loaded up and queued off to other threads.
You need a producer-consumer queue class - C# already has one in the BlockingCollection class.
You need flow-control in this app. I would do it by creating a pool of C16B's - create a blocking queue and create/add a big pile of C16B's in a loop. 1024 is a nice, round number. Now you have a 'pool queue' that provides flow-control, avoids the need to new() any C16B's and you don't need them to be continually garbage-collected.
Once you have this, the rest is easy. In your loader thread, continually dequeue C16B's from the pool queue, load them up with data from the files and add() them off to the processing thread/s via a '16Bprocess' blocking queue. In the processing threads, take() from the 16Bprocess queue and process each C16B instance by calling its Process16B method. When the 16B's are processed, add() the C16B back to the pool queue for re-use.
The recycling of the C16B's via the pool queue provides end-to-end flow-control: if the producer is the fastest link, the pool will eventually empty and the producer will block there until the consumer/s return some C16B's. (A sketch of the whole scheme follows below.)
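My illustrative sketch of the scheme just described, using BlockingCollection for both queues (sizes and names follow the suggestions above):

class C16B
{
    public readonly byte[] Data = new byte[4096 * 16]; // 4K entries of 16 bytes each
    public int Count;                                  // how many entries are loaded

    public void LoadFrom(FileStream file)              // producer fills the chunk
    {
        int read = file.Read(Data, 0, Data.Length);
        Count = read / 16;
    }

    public void Process16B()                           // consumer parses/processes entries
    {
        for (int i = 0; i < Count; i++) { /* parse Data[i*16 .. i*16+16) */ }
    }
}

var pool = new BlockingCollection<C16B>();
for (int i = 0; i < 1024; i++) pool.Add(new C16B());   // flow-control: 1024 chunks total

var work = new BlockingCollection<C16B>();

// loader thread:    var chunk = pool.Take(); chunk.LoadFrom(file); work.Add(chunk);
// processing thread: var chunk = work.Take(); chunk.Process16B(); pool.Add(chunk);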
If the processing takes so much time, you could always add another processing thread, if you have spare cores available. The snag with such schemes is that the data will get processed out of order. This may, or may not, matter. If it does, the data flow might need 'straightening out' later, eg. using sequence numbers and a buffer-list.
I would advise dumping the pool queue count, (and maybe the 16Bprocess queue count as well), to a status component or command-line with a timer. This provides a useful snapshot of where the C16B instances are, and you can see the bottlenecks and any C16B leaks without 3rd-party tools, (the ones that slow the whole app down to a crawl and issue spurious leak reports on shutdown).
You can use a BlockingCollection; it will block the producer from adding items to the collection as long as the consumer hasn't consumed enough items.
There are other concurrent collection classes as well, eg. ConcurrentQueue
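For instance, a minimal bounded set-up (the capacity of 1024 is an arbitrary assumption); the default backing store is a ConcurrentQueue, so FIFO order is preserved, which the question requires:

var buffer = new BlockingCollection<byte[]>(boundedCapacity: 1024);

// producer: Add blocks when 1024 items are already queued
buffer.Add(nextArray);

// consumer: GetConsumingEnumerable blocks when the buffer is empty
foreach (byte[] array in buffer.GetConsumingEnumerable())
{
    Process(array); // hypothetical parse/processing step
}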
IMO a blocking Queue of some kind may solve your problem. Essentially the Producer thread will block if the queue has no more capacity. Look at this Creating a blocking Queue<T> in .NET?
Why bother with a buffer at all? Use the disk files as a buffer. When the consumer starts processing a byte array, have the reader read the next one and that's it.
EDIT: after being asked for decoupling of the consumer and producer.
You can have a coordinator that tells the producer to produce X byte arrays, and supplies X byte arrays to the consumer. The three parts can act like this:
Coordinator tells producer to produce X byte arrays.
Producer produces X byte arrays
And now do this in a loop:
Coordinator tells consumer to consume X byte arrays
Coordinator tells producer to produce X byte arrays
Consumer tells coordinator it's done consuming
Loop until there are no more byte arrays
The producer and coordinator can run in the same thread. The consumer should have its own thread.
You will have almost no locking (I think you can do this with no locking at all, just a single wait handle the consumer uses to notify the coordinator it's done), and your coordinator is extremely simple.
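A rough sketch of that loop, assuming a single AutoResetEvent as the wait handle (Produce and BeginConsume are hypothetical stand-ins for your producer/consumer code):

var consumerDone = new AutoResetEvent(false);
const int X = 4096; // batch size; arbitrary assumption

byte[][] current = producer.Produce(X);          // hypothetical: read X arrays from disk
while (current != null)
{
    consumer.BeginConsume(current, consumerDone); // consumer thread signals when done
    byte[][] next = producer.Produce(X);          // produce the next batch meanwhile
    consumerDone.WaitOne();                       // wait until the consumer is done
    current = next;
}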
REEDIT: Another really decoupled option
Use ZeroMQ for handling the communications. The producer reads byte arrays and posts each array to a ZeroMQ socket. The consumer reads arrays from a ZeroMQ socket.
ZeroMQ is very efficient and fast, and handles all the technicalities (thread synchronization, buffering, etc...) internally. When used on the same computer, you won't suffer any data loss either (which might happen when using UDP across two different machines).
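For illustration, a tiny push/pull pair - my sketch, assuming the NetMQ port of ZeroMQ and its PushSocket/PullSocket API:

// producer side: bind and push each 16-byte array as one frame
using (var push = new PushSocket("@tcp://127.0.0.1:5555"))
{
    push.SendFrame(array16);
}

// consumer side: connect and pull arrays in order
using (var pull = new PullSocket(">tcp://127.0.0.1:5555"))
{
    byte[] array16 = pull.ReceiveFrameBytes();
}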
We need to develop some kind of buffer management for an application we are developing using C#.
Essentially, the application receives messages from devices as and when they come in (there could be many in a short space of time). We need to queue them up in some kind of buffer pool so that we can process them in a managed fashion.
We were thinking of allocating a block of memory in 256 byte chunks (all messages are less than that) and then using buffer pool management to have a pool of available buffers that can be used for incoming messages and a pool of buffers ready to be processed.
So the flow would be "Get a buffer" (process it) "Release buffer" or "Leave it in the pool". We would also need to know when the buffer was filling up.
Potentially, we would also need a way to "peek" into the buffers to see what the highest priority buffer in the pool is rather than always getting the next buffer.
Is there already support for this in .NET or is there some open source code that we could use?
C#'s memory management is actually quite good, so instead of having a pool of buffers you could just allocate exactly what you need and stick it into a queue. Once you are done with a buffer, just let the garbage collector handle it.
One other option (knowing only very little about your application), is to process the messages minimally as you get them, and turn them into full fledged objects (with priorities and all), then your queue could prioritize them just by investigating the correct set of attributes or methods.
If your messages come in too fast even for minimal processing you could have a two queue system. One is just a queue of unprocessed buffers, and the next queue is the queue of message objects built from the buffers.
I hope this helps.
@grieve: Networking is native, meaning that when buffers are used to receive/send data on the network, they are pinned in memory.
Why wouldn't you just receive the messages, create a DeviceMessage object (for lack of a better name), and put that object into a Queue? If the prioritization is important, implement a PriorityQueue class that handles it automatically (by placing the DeviceMessage objects in priority order as they're inserted into the queue). That seems like a more OO approach, and it would simplify maintenance of the prioritization over time. A sketch follows below.
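A minimal sketch of such a PriorityQueue, assuming a small fixed set of priority levels (DeviceMessage and its Priority field are hypothetical, as above):

class DeviceMessage { public int Priority; public byte[] Payload; } // hypothetical

class PriorityQueue
{
    // one FIFO queue per priority level; lower index = higher priority
    private readonly Queue<DeviceMessage>[] _levels =
        { new Queue<DeviceMessage>(), new Queue<DeviceMessage>(), new Queue<DeviceMessage>() };

    public void Enqueue(DeviceMessage msg)
    {
        lock (_levels) { _levels[msg.Priority].Enqueue(msg); }
    }

    public DeviceMessage Dequeue()
    {
        lock (_levels)
        {
            foreach (var level in _levels)
                if (level.Count != 0) return level.Dequeue();
        }
        return null; // empty
    }
}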
I know this is an old post, but I think you should take a look at the memory pool implemented in the ILNumerics project. I think they did exactly what you need and it is a very nice piece of code.
Download the code at http://ilnumerics.net/ and take a look at the file ILMemoryPool.cs
I'm doing something similar. I have messages coming in on MTA threads that need to be serviced on STA threads.
I used a BlockingCollection (part of the Parallel FX extensions) that is monitored by several STA threads (configurable, but defaulting to a multiple of the number of cores). Each thread tries to pop a message off the queue. They either time out and try again, or successfully pop a message off and service it.
I've got it wired with perfmon counters to keep track of idle time, job lengths, incoming messages, etc, which can be used to tweak the queue's settings.
You'd have to implement a custom collection, or perhaps extend BlockingCollection, to implement queue-item priorities.
One of the reasons why I implemented it this way is that, as I understand it, queueing theory generally favors single-line, multiple-servers (why do I feel like I'm going to catch crap about that?).