How to get the number of elements in a Channel<T>? - c#

I am planning a publisher/subscriber system where I plan to use Channels. I would like to log the number of elements in each Channel I use, so I can adjust the number of publishers/subscribers based on the bottlenecks in my system.
I started to wrap Channel, ChannelReader and ChannelWriter in my own classes to count the number of writes and reads, but this feels like a hack. Is there a better way?

Use the source, Luke. The source tells you that (a) there is no public API to do that, (b) you can use reflection to get the value of the private property ItemsCountForDebugger on both bounded and unbounded channels, and (c) this is safe despite the lack of locking in the getter. Of course this is a hack; whether the reflection hack is better than the wrapper-class hack is a question of taste. A public API to get the approximate number of elements in a Channel<T> was requested back in 2018, and will be added in .NET 5.0 (slated for release in November 2020).
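For reference, a minimal sketch of the reflection approach. ItemsCountForDebugger is a private, internal implementation detail of the channel classes, so treat this as fragile; the extension-method name here is just for illustration:

```csharp
using System;
using System.Reflection;
using System.Threading.Channels;

static class ChannelCountExtensions
{
    // Reads the private ItemsCountForDebugger property via reflection.
    // This relies on an internal detail of UnboundedChannel<T> and
    // BoundedChannel<T>, so it may break in future runtime versions.
    public static int GetApproximateCount<T>(this Channel<T> channel)
    {
        var property = channel.GetType().GetProperty(
            "ItemsCountForDebugger",
            BindingFlags.Instance | BindingFlags.NonPublic);
        if (property == null)
            throw new NotSupportedException(
                "ItemsCountForDebugger is not exposed by this runtime.");
        return (int)property.GetValue(channel);
    }
}
```

Usage: `var ch = Channel.CreateUnbounded<int>(); ch.Writer.TryWrite(42); int count = ch.GetApproximateCount();`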

Related

C# How do you monitor accesses to array elements?

Arrays are all of type Array rather than their underlying type, so making arrays out of your own custom primitives with event handling and the like is useless. Someone on Discord said that it'll probably take either reflection or unsafe constructs. Adding code to the get accessor is useless, because Array.Sort() only calls that accessor twice. What should I do instead to run code (like events) whenever an array element is accessed (read or written)?
Here's what I'm trying to make: a benchmarker for sorting algorithms that charts the number of comparisons and total array accesses across a whole range of array sizes.
Sort benchmarker
If you just want to count the number of comparisons, you should probably provide an IComparer<T> implementation instead. Most sorting implementations accept such an interface.
If you want to measure the number of accesses, you need to use another interface, like IList<T>. But this will not be usable with most built-in sort methods, since accessing elements through an interface reduces performance.
But measuring "array accesses" is probably not a meaningful metric. In many cases an access is just a memory access, whose cost varies greatly with locality: a register access is "free", while an uncached memory read costs many hundreds of cycles. So a profiler, or an actual benchmark, will probably be a much better tool for measuring overall performance.
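A minimal sketch of the comparison-counting idea from the first paragraph. The class name is illustrative; it wraps any existing comparer and counts every call:

```csharp
using System;
using System.Collections.Generic;

// A comparer that counts how many comparisons the sort performs.
class CountingComparer<T> : IComparer<T>
{
    private readonly IComparer<T> _inner;
    public long Comparisons { get; private set; }

    public CountingComparer(IComparer<T> inner = null)
    {
        _inner = inner ?? Comparer<T>.Default;
    }

    public int Compare(T x, T y)
    {
        Comparisons++;                // count, then delegate to the real comparer
        return _inner.Compare(x, y);
    }
}
```

Usage: `var c = new CountingComparer<int>(); Array.Sort(data, c); Console.WriteLine(c.Comparisons);` — this works with any sort that accepts an IComparer<T>.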

Does the need to make the code simpler justify the use of wrong abstractions?

Suppose we have a CommandRunner class that runs Commands. When a Command is created, it's kept in the processingQueue for processing. If the execution of the Command finishes with errors, the Command is moved to the faultedQueue for later processing, but when everything is OK the Command is moved to the archiveQueue. The archiveQueue is not going to be processed in any way.
The CommandRunner is something like this:
class CommandRunner
{
    public CommandRunner(IQueue<Command> processingQueue,
                         IQueue<Command> faultedQueue,
                         IQueue<Command> archiveQueue)
    {
        this.processingQueue = processingQueue;
        this.faultedQueue = faultedQueue;
        this.archiveQueue = archiveQueue;
    }

    public void RunCommands()
    {
        while (processingQueue.HasItems)
        {
            var current = processingQueue.Dequeue();
            var result = current.Run();
            if (result.HasError)
                current.MoveTo(faultedQueue);
            else
                current.MoveTo(archiveQueue);
            ...
        }
    }
}
The CommandRunner receives the three dependencies as PersistentQueues. The PersistentQueue is responsible for the long-term storage of the Commands, and so we free the CommandRunner from handling this.
The only purpose of the archiveQueue is to keep the design homogeneous, keeping the CommandRunner persistence-ignorant and with few dependencies.
For example, we can imagine a property like this:
IEnumerable<Command> AllCommands
{
    get
    {
        return Enumerate(archiveQueue).Union(processingQueue).Union(faultedQueue);
    }
}
Many portions of the class need to do so (handle the archive as a queue to make the code simpler, as shown above).
Does it make sense to use a queue even if it's not the best abstraction, or do I have to use another abstraction for the archive concept?
What are the other alternatives to meet these requirements?
Keep in mind that code, especially running code, usually gets tangled and messy as time passes. To combat this, good names, good design, and meaningful comments come into play.
If you aren't going to process the archiveQueue, and it's just storage for messages that have been successfully processed, you can always store it as a different type (list, collection, set, whatever suits your needs), and then choose one of the following two:
Keep the name archiveQueue and change the underlying type. I would leave a comment where it's defined (or injected) saying: notice that this might not be an actual queue; the name is for consistency reasons only.
Change the name to archiveRepository or something similar, while keeping the queue type. Obviously, since it's still a queue, you'll leave a comment saying: Notice, this is actually a queue.
Another thing to keep in mind is that if you have n people working on your code base, you'll probably get n+1 different preferences about which way it should be done :)
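A compact sketch of the "change the type and the name" idea. All type and member names here are illustrative stand-ins for the question's code (a bool-returning Run() replaces the result object, and the built-in Queue<T>/ICollection<T> replace IQueue<T>):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Command
{
    public bool Failed { get; set; }
    public bool Run() => !Failed;   // stand-in for the real execution logic
}

class CommandRunner
{
    private readonly Queue<Command> processingQueue;
    private readonly Queue<Command> faultedQueue;
    // Not a queue: the archive is write-only storage, so any collection works.
    private readonly ICollection<Command> archiveRepository;

    public CommandRunner(Queue<Command> processing, Queue<Command> faulted,
                         ICollection<Command> archive)
    {
        processingQueue = processing;
        faultedQueue = faulted;
        archiveRepository = archive;
    }

    public void RunCommands()
    {
        while (processingQueue.Count > 0)
        {
            var current = processingQueue.Dequeue();
            if (current.Run())
                archiveRepository.Add(current);
            else
                faultedQueue.Enqueue(current);
        }
    }

    // The unified view still works: a Queue<T> is an IEnumerable<T> too.
    public IEnumerable<Command> AllCommands =>
        archiveRepository.Concat(processingQueue).Concat(faultedQueue);
}
```

The AllCommands property shows that dropping the queue abstraction for the archive doesn't cost you the "treat everything uniformly" convenience, since LINQ composes over any IEnumerable<T>.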
A queue is a useful structure when you need to care about the order of the items inside it. If your command post-processing needs to know the order in which the commands ran, then a queue can be a good choice.
If you don't need information about the order of the commands, maybe you can use a List (in the System.Collections.Generic namespace).
I think your choice is good; in the same case, I'd use queues. We have a good example in OS design principles: inside the OS kernel, processes are queued for execution. Admittedly the OS queues are more complicated because they take other variables into account, like priority and CPU utilization, but we can learn from the use of queues as data structures in process management.

Identifying a property name with a low footprint

I wish to send packets to sync properties of constantly changing game objects in a game. I send notifications of when a property changes on the server side to an EntitySync object that is in charge of sending out updates for the client to consume.
Right now, I'm prefixing each update with the property's string name. This is a lot of overhead when you're sending a lot of updates (position, HP, angle). I'd like a semi-unique way to identify these packets.
I thought about attributes (reflection... slow?), or using a suffix and sending that as an ID (Position_A, HP_A), but I'm at a loss for a clean way to identify these properties quickly with a low footprint. It should consume as few bytes as possible.
Ideas?
Expanding on Charlie's explanation:
The protobuf-net library made by Marc Gravell is exactly what you are looking for in terms of serialization. To clarify, this is Marc Gravell's library, not Google's; it uses Google's protocol buffer encoding. It is one of the smallest-footprint serializers out there; in fact it will likely generate smaller packets than serializing manually would (how default Unity3D handles networking, yuck).
As for speed, Marc uses some very clever trickery (namely HyperDescriptors, http://www.codeproject.com/Articles/18450/HyperDescriptor-Accelerated-dynamic-property-acces) to all but remove the overhead of runtime reflection.
Food for thought on the network abstraction: take a look at Rx (http://msdn.microsoft.com/en-us/data/gg577609.aspx). Event streams are the most elegant way I have dealt with networking and multithreaded intra-subsystem communication to date:
// Sending an object:
m_eventStream.Push(objectInstance);

// 'Handling' an object when it arrives:
m_eventStream.Of(typeof(MyClass))
    .Subscribe(obj =>
    {
        MyClass thisInstance = (MyClass)obj;
        // Code here will run when a packet arrives and is deserialized
    });
It sounds like you're trying to serialize your objects for sending over a network. I agree it's not efficient to send the full property name over the wire; that consumes way more bytes than you need.
Why not use a really fantastic library that Google invented for just this purpose?
This is the .NET port: http://code.google.com/p/protobuf-net/
In a nutshell, you define the messages you want to send such that each property has a unique id, to make sending the properties more efficient:
SomeProperty = 12345
Then it just sends the id of the property and its value. It also optimizes the way it sends the values, so it might use only 1, 2, or 3 bytes depending on how large the value is. Very clever, really.
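For instance, a message contract in protobuf-net looks something like this (the type and member names are illustrative; the numeric tags in ProtoMember are what go on the wire instead of property names). Requires the protobuf-net NuGet package:

```csharp
using System;
using System.IO;
using ProtoBuf; // protobuf-net NuGet package

// Each property gets a small numeric tag; the tag, not the name,
// identifies the field in the serialized payload.
[ProtoContract]
class EntityState
{
    [ProtoMember(1)] public float X { get; set; }
    [ProtoMember(2)] public float Y { get; set; }
    [ProtoMember(3)] public int HP { get; set; }
}

class Demo
{
    static void Main()
    {
        var state = new EntityState { X = 1f, Y = 2f, HP = 100 };
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize(ms, state);
            // The payload carries a tag per field plus a compact encoding
            // of the value -- no property names anywhere.
            Console.WriteLine(ms.Length + " bytes");
        }
    }
}
```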

How to see any changes (new documents) in MongoDB?

Is there any way to observe each collection (or even just one) in MongoDB? Right now I'm thinking of a timer to check the document count or the last id, but maybe there is some way to implement a mechanism like a newDocumentAddedEvent?
There are no triggers in MongoDB (yet?), but if you're running a replica set (as you should be), your app can pretend to be a catching-up secondary, tail the oplog collection, and get information about new inserts/updates.
This is a very efficient approach (MongoDB itself uses it for replication).
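A rough sketch of tailing the oplog with the 2.x C# driver. This assumes a replica-set member reachable on localhost; the field names op, ns and o are the oplog's own format, and a real implementation would also filter by timestamp to resume where it left off:

```csharp
using System;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver; // MongoDB .NET driver 2.x (NuGet: MongoDB.Driver)

class OplogTailer
{
    static async Task Main()
    {
        var client = new MongoClient("mongodb://localhost:27017");
        // The oplog lives in the "local" database of a replica-set member.
        var oplog = client.GetDatabase("local")
                          .GetCollection<BsonDocument>("oplog.rs");

        // A tailable-await cursor blocks waiting for new oplog entries,
        // much like a catching-up secondary does.
        var options = new FindOptions<BsonDocument>
        {
            CursorType = CursorType.TailableAwait
        };

        using (var cursor = await oplog.FindAsync(
                   FilterDefinition<BsonDocument>.Empty, options))
        {
            await cursor.ForEachAsync(entry =>
            {
                if (entry["op"].AsString == "i") // "i" marks an insert
                    Console.WriteLine(
                        $"New document in {entry["ns"]}: {entry["o"]}");
            });
        }
    }
}
```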

How can I implement a "swapping" list?

I need to manage some lists with timers: each element of these lists is associated with a timer, so when the timer expires, the corresponding element must be removed from the list.
In this way the length of the list does not grow too much because, as time goes on, elements are progressively removed. The speed with which the list grows also depends on the rate at which new elements are added.
However, I need to add the following constraint: the amount of RAM used by the list must not exceed a certain limit, i.e. the user must specify the maximum number of items that can be stored in RAM.
Therefore, if the rate of addition of elements is low, all items can be stored in RAM. If, however, the rate is high, old items are likely to be lost before their timers expire.
Intuitively, I thought about taking a cue from swapping technique used by operating systems.
class SwappingList
{
    private List<string> _list;
    private SwapManager _swapManager;

    public SwappingList(int capacity, SwapManager swapManager)
    {
        _list = new List<string>(capacity);
        _swapManager = swapManager;
        // TODO
    }
}
One of the lists that I manage is made up of strings of constant length, and it must work as a hash table, so I should use a HashMap; but how can I define the maximum capacity of a HashMap object?
Basically I would like to implement a caching mechanism, but I want the RAM used by the cache to be limited to a certain number of items or bytes, which means that old items that have not yet expired must be moved to a file.
According to the comments above, you want a caching mechanism.
.NET 4 has this built in (see http://msdn.microsoft.com/en-us/library/system.runtime.caching.aspx). It comes with a configurable caching policy which you can use to configure expiration, among other things... It even provides some events to which you can assign delegates that are called prior to removing a cache entry, to customize this process even further...
You cannot specify a maximum capacity for a HashMap. You need to implement a wrapper around it which, after each insertion, checks whether the maximum count has been reached.
It is not clear to me whether that's all you are asking. If you have more questions, please state them clearly and use a question mark with each one.
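A sketch of such a wrapper (the class name and eviction policy are illustrative; here the oldest entry is evicted on overflow, and an event is exposed so the caller can swap evicted items to a file):

```csharp
using System;
using System.Collections.Generic;

// A Dictionary wrapper that enforces a maximum entry count.
class BoundedMap<TKey, TValue>
{
    private readonly Dictionary<TKey, TValue> _map = new Dictionary<TKey, TValue>();
    private readonly Queue<TKey> _insertionOrder = new Queue<TKey>();
    private readonly int _maxCount;

    public BoundedMap(int maxCount) { _maxCount = maxCount; }

    // Raised with the evicted key/value -- a hook for writing it to a file.
    public event Action<TKey, TValue> Evicted;

    public void Add(TKey key, TValue value)
    {
        if (_map.Count >= _maxCount)
        {
            // Evict the oldest entry to stay within the limit.
            var oldest = _insertionOrder.Dequeue();
            var evictedValue = _map[oldest];
            _map.Remove(oldest);
            Evicted?.Invoke(oldest, evictedValue);
        }
        _map.Add(key, value);
        _insertionOrder.Enqueue(key);
    }

    public bool TryGetValue(TKey key, out TValue value) => _map.TryGetValue(key, out value);
    public int Count => _map.Count;
}
```

Note this simple sketch assumes keys are never removed or re-added out of order; a production version would reconcile the insertion-order queue with removals.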
