A couple of days ago I interviewed for a C#/.NET developer position at a stock-trading company whose application has to push frequent updates within a second. The interviewer told me that acquiring locks, i.e. providing thread synchronization, on .NET generic collections like List, Stack and Dictionary is very slow, so they use their own custom collections.
So I was wondering: "Are .NET collections really slow at acquiring and releasing locks, and even if they are, how can we improve that performance by writing custom generic classes?"
Generics and multi-threading have nothing to do with each other. Given that, I'm not sure what you are asking.
Are .NET collections really slow?
...is unanswerable because performance is relative.
how can we improve that performance by writing custom generic classes
You can't, because generics have nothing to do with this. You can improve performance by writing custom collections that are tailored to the very specific needs of an application. It is rarely a good idea, but it can be. For example, it is easy to create a class that is faster than the built-in List<T>: take List<T> as a template and remove all the iterator versioning logic to shave some overhead. This small win is rarely worth the cost.
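To make the trade-off concrete, here is a minimal sketch of that idea (FastList is a made-up name; this is illustrative, not production code). It is a growable array without the version field that List<T> bumps on every mutation to detect modification during enumeration:

    using System;

    // Sketch: a stripped-down growable list. List<T> increments an internal
    // version counter on every mutation so its enumerator can throw on
    // concurrent modification; dropping that bookkeeping saves a little
    // overhead at the cost of safety.
    public class FastList<T>
    {
        private T[] _items = new T[4];
        private int _count;

        public int Count { get { return _count; } }

        public T this[int index] { get { return _items[index]; } }

        public void Add(T item)
        {
            if (_count == _items.Length)
                Array.Resize(ref _items, _items.Length * 2);
            _items[_count++] = item;
        }
    }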
In case you want advice: Try to use the built-in collections. There's a System.Collections.Concurrent namespace for synchronized ones.
Given the information that we have it is impossible to tell whether it was right or wrong for your interviewer to build custom collections.
my question is why lock is slower on .NET collections
You can only use lock with .NET so I'm not sure what you are asking here. Also: Slower than what?
is there any way to achieve synchronization with mutable objects in a faster way than what lock provides?
Often, that is possible. How this is done depends entirely on the concrete case. If there was a general way to do what lock does but faster then we would not need lock in the first place.
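For one concrete case: if the shared mutable state is just a counter, an atomic instruction does what the lock does, but faster. A minimal sketch:

    using System.Threading;

    class Counter
    {
        private int _value;
        private readonly object _gate = new object();

        // Correct, but every call pays for Monitor.Enter/Exit.
        public void IncrementLocked()
        {
            lock (_gate) { _value++; }
        }

        // Also correct: a single hardware-level atomic add, no lock object.
        public void IncrementAtomic()
        {
            Interlocked.Increment(ref _value);
        }
    }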
I'm trying to help you by collecting all the different questions you have asked and addressing them. I think if you had asked fewer, more precise questions you would have found the answer yourself or recognized that some questions do not make much sense. Asking the right question often leads to the answer.
So here I am implementing a caching layer. In particular I am stuck with
ConcurrentDictionary<SomeKey, HashSet<SomeKey2>>
I need to ensure that operations on the HashSet are thread-safe too (ergo that Update is thread-safe). Is this possible in any simple way, or do I have to synchronize in the UpdateFactory delegate? If the answer is yes (which I presume), has any one of you encountered this problem before and solved it?
I want to avoid a ConcurrentDictionary of ConcurrentDictionaries because they allocate a lot of synchronization objects, and I potentially have around a million entries in this thing, so I want to put less pressure on the GC.
HashSet was chosen because it guarantees amortized constant cost for insertion, deletion and access.
The aforementioned structure will be used as an index on a larger data set, with two columns as a key (SomeKey and SomeKey2), much like a database index.
OK, so finally I decided to go with an immutable set and lock striping, because it is reasonably simple to implement and understand. If I need more performance on the writes (no copying of the whole hash set on insert), I will implement reader/writer locks with striping, which should be fine anyway.
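For reference, a rough sketch of the idea (copy-on-write HashSet values plus striped locks; the class name and the stripe count of 64 are arbitrary):

    using System.Collections.Concurrent;
    using System.Collections.Generic;

    class StripedIndex<TKey1, TKey2>
    {
        private readonly ConcurrentDictionary<TKey1, HashSet<TKey2>> _map =
            new ConcurrentDictionary<TKey1, HashSet<TKey2>>();
        private readonly object[] _stripes = new object[64];

        public StripedIndex()
        {
            for (int i = 0; i < _stripes.Length; i++)
                _stripes[i] = new object();
        }

        private object StripeFor(TKey1 key)
        {
            return _stripes[(key.GetHashCode() & 0x7fffffff) % _stripes.Length];
        }

        public void Add(TKey1 key, TKey2 value)
        {
            lock (StripeFor(key))
            {
                // Copy-on-write: a published set is never mutated again,
                // so readers can use it without any lock.
                HashSet<TKey2> current;
                var copy = _map.TryGetValue(key, out current)
                    ? new HashSet<TKey2>(current)
                    : new HashSet<TKey2>();
                copy.Add(value);
                _map[key] = copy;
            }
        }

        public bool Contains(TKey1 key, TKey2 value)
        {
            HashSet<TKey2> set;
            return _map.TryGetValue(key, out set) && set.Contains(value);
        }
    }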
Thanks for suggestions.
I need to implement a concurrent dictionary because .NET does not contain concurrent implementations of its collections (.NET 4 will contain them). Can I use Jeffrey Richter's "Power Threading Library" for this? Or are there existing implementations, or any advice on implementing one?
Thanks ...
I wrote a thread-safe wrapper for the normal Dictionary class that uses Interlocked to protect the internal dictionary. Interlocked is by far the fastest locking mechanism available and will give much better performance than ReaderWriterLockSlim, Monitor or any of the other available locks.
The code was used to implement a Cache class for Fasterflect, which is a library to speed up reflection. As such, we tried a number of different approaches in order to find the fastest possible solution. Interestingly, the new concurrent collections in .NET 4 are noticeably faster than my implementation, although both are pretty darn fast compared to solutions using a less performant locking mechanism. The implementation for .NET 3.5 is located inside a conditional region in the bottom half of the file.
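The gist of the Interlocked approach is a spin lock built from CompareExchange guarding a plain Dictionary. A hedged sketch (the names are illustrative, not the actual Fasterflect code):

    using System.Collections.Generic;
    using System.Threading;

    public class InterlockedCache<TKey, TValue>
    {
        private readonly Dictionary<TKey, TValue> _map = new Dictionary<TKey, TValue>();
        private int _lock; // 0 = free, 1 = held

        private void Enter()
        {
            // Busy-wait; acceptable because the critical sections are tiny.
            while (Interlocked.CompareExchange(ref _lock, 1, 0) != 0)
                Thread.SpinWait(1);
        }

        private void Exit()
        {
            Interlocked.Exchange(ref _lock, 0);
        }

        public bool TryGet(TKey key, out TValue value)
        {
            Enter();
            try { return _map.TryGetValue(key, out value); }
            finally { Exit(); }
        }

        public void Set(TKey key, TValue value)
        {
            Enter();
            try { _map[key] = value; }
            finally { Exit(); }
        }
    }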
You can use Reflector to view the source code of the concurrent implementations in the .NET 4.0 RC and copy it into your own code. This way you will have the fewest problems when migrating to .NET 4.0.
I wrote a concurrent dictionary myself (prior to .NET 4.0's System.Collections.Concurrent namespace); there's not much to it. You basically just want to make sure certain methods are not getting called at the same time, e.g., Contains and Remove or something like that.
What I did was to use a ReaderWriterLock (in .NET 3.5 and above, you could go with ReaderWriterLockSlim) and call AcquireReaderLock for all "read" operations (like this[TKey], ContainsKey, etc.) and AcquireWriterLock for all "write" operations (like this[TKey] = value, Add, Remove, etc.). Be sure to wrap any calls of this sort in a try/finally block, releasing the lock in the finally.
It's also a good idea to modify the behavior of GetEnumerator slightly: rather than enumerate over the existing collection, make a copy of it and allow enumeration over that. Otherwise you'll face potential deadlocks.
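A minimal sketch of that pattern with ReaderWriterLockSlim (names are illustrative):

    using System.Collections.Generic;
    using System.Threading;

    public class SynchronizedDictionary<TKey, TValue>
    {
        private readonly Dictionary<TKey, TValue> _map = new Dictionary<TKey, TValue>();
        private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

        public TValue this[TKey key]
        {
            get
            {
                _lock.EnterReadLock();
                try { return _map[key]; }
                finally { _lock.ExitReadLock(); }
            }
            set
            {
                _lock.EnterWriteLock();
                try { _map[key] = value; }
                finally { _lock.ExitWriteLock(); }
            }
        }

        public bool ContainsKey(TKey key)
        {
            _lock.EnterReadLock();
            try { return _map.ContainsKey(key); }
            finally { _lock.ExitReadLock(); }
        }

        // Enumerate over a snapshot so callers never iterate while holding
        // the lock (avoiding the deadlock mentioned above).
        public IEnumerator<KeyValuePair<TKey, TValue>> GetEnumerator()
        {
            _lock.EnterReadLock();
            try
            {
                return new List<KeyValuePair<TKey, TValue>>(_map).GetEnumerator();
            }
            finally { _lock.ExitReadLock(); }
        }
    }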
Here's a simple implementation that uses sane locking (though Interlocked would likely be faster):
http://www.tech.windowsapplication1.com/content/the-synchronized-dictionarytkey-tvalue
Essentially, just create a Dictionary wrapper/decorator and synchronize access to any read/write actions.
When you switch to .Net 4.0, just replace all of your overloads with delegated calls to the underlying ConcurrentDictionary.
Given a case where I have an object that may be in one or more true/false states, I've always been a little fuzzy on why programmers frequently use flags+bitmasks instead of just using several boolean values.
It's all over the .NET framework. Not sure if this is the best example, but the .NET framework has the following:
public enum AnchorStyles
{
    None = 0,
    Top = 1,
    Bottom = 2,
    Left = 4,
    Right = 8
}
So given an anchor style, we can use bitmasks to figure out which of the states are selected. However, it seems like you could accomplish the same thing with an AnchorStyle class/struct with bool properties defined for each possible value, or an array of individual enum values.
Of course the main reason for my question is that I'm wondering if I should follow a similar practice with my own code.
So, why use this approach?
Less memory consumption? (it doesn't seem like it would consume less than an array/struct of bools)
Better stack/heap performance than a struct or array?
Faster compare operations? Faster value addition/removal?
More convenient for the developer who wrote it?
It was traditionally a way of reducing memory usage. So, yes, it's quite obsolete in C# :-)
As a programming technique, it may be obsolete in today's systems, and you'd be quite alright to use an array of bools, but...
It is fast to compare values stored as a bitmask. Use the AND and OR logic operators and compare the resulting 2 ints.
It uses considerably less memory. Putting all 4 of your example values in a bitmask would use half a byte. Using an array of bools would most likely use a few bytes of object overhead for the array plus at least a byte for each bool. If you have to store a million values, you'll see exactly why the bitmask version is superior.
It is easier to manage: you only have to deal with a single integer value, whereas an array of bools would store quite differently in, say, a database.
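To make the comparison point concrete, here are the typical one-liners (using the AnchorStyles enum quoted in the question):

    AnchorStyles anchors = AnchorStyles.Top | AnchorStyles.Left; // set two flags
    anchors |= AnchorStyles.Right;                               // add one
    anchors &= ~AnchorStyles.Left;                               // remove one
    bool isTop = (anchors & AnchorStyles.Top) != 0;              // test one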
And, because of the memory layout, it is much faster in every respect than an array. It's nearly as fast as operating on a single 32-bit integer, and we all know that is as fast as you can get.
Easy setting multiple flags in any order.
Easy to save and retrieve a series of bits like 0101011 to and from the database.
Among other things, it's easier to add new bit meanings to a bitfield than to add new boolean values to a class. It's also easier to copy a bitfield from one instance to another than a series of booleans.
It can also make methods clearer. Imagine a method with 10 bools vs. 1 bitmask.
Actually, it can give better performance, mainly if your enum derives from byte.
In that extreme case, the whole enum value is represented by a single byte, covering up to 8 flags and all 256 of their combinations. Storing those 8 flags as separate booleans would take at least 8 bytes.
But even then, I don't think that is the real reason. The reason I prefer flags is the power C# gives me to handle those enums: I can add several values with a single expression, I can remove them as well, and I can even compare several values at once with a single expression. With booleans, the code becomes, let's say, more verbose, as the snippet below shows.
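For instance, something like this (Options is a made-up byte-backed enum):

    [Flags]
    enum Options : byte { A = 1, B = 2, C = 4, D = 8 }

    Options opts = Options.A | Options.B | Options.C;  // set three flags at once
    opts &= ~(Options.A | Options.C);                  // remove two at once
    bool hasBoth =                                     // test two at once
        (opts & (Options.B | Options.D)) == (Options.B | Options.D);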
From a domain model perspective, it just models reality better in some situations. If you have three booleans like AccountIsInDefault, IsPreferredCustomer and RequiresSalesTaxState, then it doesn't make sense to put them into a single Flags-decorated enumeration, because they are not three distinct values for the same domain model element.
But if you have a set of booleans like:
[Flags] enum AccountStatus { AccountIsInDefault = 1,
    AccountOverdue = 2, AccountFrozen = 4 }
or
[Flags] enum CargoState { ExceedsWeightLimit = 1,
    ContainsDangerousCargo = 2, IsFlammableCargo = 4,
    ContainsRadioactive = 8 }
Then it is useful to be able to store the total state of the Account, (or the cargo) in ONE variable... that represents ONE Domain Element whose value can represent any possible combination of states.
Raymond Chen has a blog post on this subject.
"Sure, bitfields save data memory, but you have to balance it against the cost in code size, debuggability, and reduced multithreading."
As others have said, its time is largely past. It's tempting to still do it, because bit fiddling is fun and cool-looking, but it's no longer more efficient, it has serious drawbacks in terms of maintenance, it doesn't play nicely with databases, and unless you're working in an embedded world, you have enough memory.
I would suggest never using enum flags unless you are dealing with some pretty serious memory limitations (not likely). You should always write code optimized for maintenance.
Having several boolean properties makes it easier to read and understand the code, change the values, and provide IntelliSense comments, not to mention reducing the likelihood of bugs. If necessary, you can always use an enum flags field internally; just make sure you expose the setting/getting of the values with boolean properties.
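A sketch of that internal-flags/public-booleans pattern (AccountFlags and the property names are hypothetical):

    [Flags]
    enum AccountFlags { None = 0, InDefault = 1, Overdue = 2, Frozen = 4 }

    class Account
    {
        private AccountFlags _flags; // compact storage, kept private

        public bool IsInDefault
        {
            get { return (_flags & AccountFlags.InDefault) != 0; }
            set
            {
                _flags = value
                    ? _flags | AccountFlags.InDefault
                    : _flags & ~AccountFlags.InDefault;
            }
        }
        // ...same pattern for Overdue and Frozen.
    }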
Space efficiency - 1 bit
Time efficiency - bit comparisons are handled quickly by hardware.
Language independence - where the data may be handled by a number of different programs you don't need to worry about the implementation of booleans across different languages/platforms.
Most of the time, these are not worth the tradeoff in terms of maintenance. However, there are times when it is useful:
Network protocols - there will be a big saving in reduced size of messages
Legacy software - once I had to add some information for tracing into some legacy software.
Cost to modify the header: millions of dollars and years of effort.
Cost to shoehorn the information into 2 bytes in the header that weren't being used: 0.
Of course, there was the additional cost in the code that accessed and manipulated this information, but these were done by functions anyways so once you had the accessors defined it was no less maintainable than using Booleans.
I have seen answers citing time efficiency and compatibility. Those are indeed the reasons, but I do not think anyone has explained why they are still sometimes necessary in times like ours. From the other answers, and from chatting with other engineers, I have seen bitfields pictured as some quirky old-time way of doing things that should just die because the new ways are better.
Yes, in very rare cases you may want to do it the "old way" for performance's sake, as in the classic million-iteration loop. But I say that is the wrong way of looking at it.
While it is true that you should mostly not care and should use whatever the C# language hands you as the new right way™ to do things (enforced by some fancy AI code analysis slapping you whenever you do not meet its code style), you should understand that low-level strategies are not there by accident; in many cases they are the only way to solve things when you have no help from a fancy framework. Your OS, your drivers, and .NET itself (especially the garbage collector) are built on bitfields and transactional instructions. Your CPU's instruction set is itself a very complex bitfield, so JIT compilers encode their output using complex bit processing and a few hardcoded bitfields so that the CPU can execute it correctly.
When we talk about performance, these things have a much larger impact than people imagine, today more than ever, especially once you start considering multicore.
When multicore systems started to become common, all CPU manufacturers began to mitigate the problems of SMP by adding dedicated transactional memory access instructions. While these were made specifically to tackle the near-impossible task of making multiple CPUs cooperate at kernel level without a huge drop in performance, they provide additional benefits, like an OS-independent way to speed up the low-level parts of most programs. Basically, your program can use CPU-assisted instructions to perform changes to integer-sized memory locations: a read-modify-write where the "modify" part can be anything you want, though the most common patterns are combinations of set/clear/increment.
Usually the CPU simply monitors whether any other CPU is accessing the same address; if contention happens, it stops the operation from being committed to memory and signals the event to the application within the same instruction. This sounds like a trivial task, but superscalar CPUs (each core has multiple ALUs allowing instruction-level parallelism), multi-level caches (some private to each core, some shared by a cluster of cores) and non-uniform memory access systems (see Threadripper CPUs) make it hard to keep everything coherent. Luckily, the smartest people in the world work on boosting performance while keeping all of this correct; today's CPUs devote a large number of transistors to this task, so that caches and our read-modify-write transactions work correctly.
C# exposes the most common transactional memory access patterns through the Interlocked class. It is only a limited set (for example, a very useful "clear mask and increment" is missing), but you can always use CompareExchange instead, which gets very close to the same performance.
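A sketch of the CompareExchange retry loop that fills those gaps (the helper names are made up):

    using System.Threading;

    static class AtomicBits
    {
        // Atomically set the bits in mask; returns the previous value.
        public static int SetBits(ref int field, int mask)
        {
            int old, updated;
            do
            {
                old = Volatile.Read(ref field);
                updated = old | mask;
            } while (Interlocked.CompareExchange(ref field, updated, old) != old);
            return old;
        }

        // Atomically clear the bits in mask (the "clear mask" operation that
        // the Interlocked class itself does not offer).
        public static int ClearBits(ref int field, int mask)
        {
            int old, updated;
            do
            {
                old = Volatile.Read(ref field);
                updated = old & ~mask;
            } while (Interlocked.CompareExchange(ref field, updated, old) != old);
            return old;
        }
    }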
To achieve the same result using an array of booleans you must use some sort of lock, and under contention a lock performs several orders of magnitude worse than the atomic instructions.
Here are some examples of much-appreciated hardware-assisted transactional access using bitfields, which would require a completely different strategy without them (of course, these are outside the scope of C#):
Assume a DMA peripheral with a set of DMA channels, say 20 (though any number up to the bit width of the interlocked integer will do). Any peripheral's interrupt, which might execute at any time, including from your beloved OS and from any core of your 32-core latest-gen CPU, may want a DMA channel, so you need to allocate one (assign it to the peripheral) and use it. A bitfield covers all those requirements, and the allocation takes just a dozen instructions, inlineable into the requesting code. You basically cannot go faster than this, and your code is just a few functions; we delegate the hard part to the hardware. Constraint: bitfield only.
Assume a peripheral that, to perform its duty, requires some working space in normal RAM. For example, take a high-speed I/O peripheral that uses scatter-gather DMA: it consumes fixed-size blocks of RAM, each populated with a description of the next transfer (the descriptor is itself made of bitfields) and chained one to another, forming a FIFO queue of transfers in RAM. The application prepares the descriptors first and then chains them onto the tail of the current transfers without ever pausing the controller (not even disabling interrupts). The allocation/deallocation of such descriptors can be done with a bitfield and transactional instructions, so that when it is shared between different CPUs, and between the driver interrupt and the kernel, everything still works without conflicts. One usage case: the kernel atomically allocates descriptors without stopping or disabling interrupts and without additional locks (the bitfield itself is the lock), and the interrupt deallocates them when the transfer completes.
Most older strategies were to preallocate the resources and force the application to free them after usage.
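Purely to illustrate the first example, here is what such a lock-free bitfield allocator can look like in C# (the DMA setting is hypothetical; the point is the CAS pattern):

    using System.Threading;

    // Up to 32 "channels" tracked as bits of one int; the bitfield itself
    // is the lock.
    class ChannelAllocator
    {
        private int _inUse; // bit i set => channel i taken

        public int TryAllocate()
        {
            while (true)
            {
                int snapshot = Volatile.Read(ref _inUse);
                if (snapshot == -1) return -1;          // all 32 bits set: nothing free
                int i = 0;
                while ((snapshot & (1 << i)) != 0) i++; // lowest free bit
                if (Interlocked.CompareExchange(ref _inUse, snapshot | (1 << i), snapshot) == snapshot)
                    return i;                           // claimed channel i atomically
                // Contention: the mask changed between read and CAS; retry.
            }
        }

        public void Free(int channel)
        {
            int bit = 1 << channel;
            int snapshot;
            do
            {
                snapshot = Volatile.Read(ref _inUse);
            } while (Interlocked.CompareExchange(ref _inUse, snapshot & ~bit, snapshot) != snapshot);
        }
    }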
If you ever need multitasking on steroids, C# lets you use Threads + Interlocked, and lately C# introduced lightweight Tasks; guess how those are built? Transactional memory access via the Interlocked class. So you likely do not need to reinvent the wheel; the low-level part is already covered and well engineered.
So the idea is: let smart people (not me, I am a common developer like you) solve the hard part for you, and just enjoy a general-purpose computing platform like C#. If you still see some remnants of these parts, it is because someone may still need to interface with worlds outside .NET, accessing some driver or system call that requires knowing how to build a descriptor and put each bit in the right place. Don't be mad at those people; they made our jobs possible.
In short: Interlocked + bitfields. Incredibly powerful; don't use it.
It is for speed and efficiency. Essentially all you are working with is a single int.
if ((flags & AnchorStyles.Top) == AnchorStyles.Top)
{
    // Do stuff
}
So, I've come up with a scheme for locking nodes when rotating in a binary tree that several threads have both read and write access to at the same time. It involves locking four nodes per rotation, which seems like an awful lot. I figured someone way smarter than me has come up with a way to reduce the locking needed, but Google didn't turn up much (I'm probably using the wrong terminology anyway).
This is my current scheme: orange and red nodes are either moved or changed by the rotation and need to be locked, while green nodes are adjacent to a node affected by the rotation but are not affected by it themselves.
[figure: binary tree rotation]
I figured there has to be a better way to do this. One idea is to take a snapshot of the four affected nodes, rotate them in the snapshot, and then replace the current nodes with the snapshot ones (assuming nothing has changed while I was doing the rotation). That would let me be almost lock-free, but I'm afraid the memory overhead might be too much, considering that a rotation is a rather quick operation (reassigning three pointers).
I guess I'm looking for pointers (no pun intended) on how to do this efficiently.
Have a look at immutable binary trees. You have to change a few more nodes (every node on the path to the root), but that doesn't change the complexity of an insert, which is log n either way. Indeed, it may even improve performance because you do not need any synchronization code.
For example, Eric Lippert wrote some posts about immutable avl trees in C#. The last one was: Immutability in C# Part Nine: Academic? Plus my AVL tree implementation
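To see why no locking is needed, here is a tiny sketch of a rotation over immutable nodes (simplified to int keys with no balance bookkeeping). The rotation allocates new nodes instead of mutating links, so readers holding the old root are never disturbed:

    sealed class Node
    {
        public readonly int Key;
        public readonly Node Left, Right;
        public Node(int key, Node left, Node right) { Key = key; Left = left; Right = right; }
    }

    static class Rotations
    {
        // Right rotation around root (assumes root.Left != null).
        // Returns a brand-new subtree; the old one stays intact.
        public static Node RotateRight(Node root)
        {
            Node pivot = root.Left;
            Node newRight = new Node(root.Key, pivot.Right, root.Right);
            return new Node(pivot.Key, pivot.Left, newRight);
        }
    }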
Lock-free algorithms, as far as I can see, lack reference counts, which makes their general use problematic; you can't know if the pointer you have to an element is valid - maybe that element went away.
Valois gets around this with an exotic memory allocation arrangement that is not useful in practice.
The only way I can see to do lock-free balanced trees is to use a modified MCAS, where you can also do increment/decrement, so you can maintain a reference count. I've not looked to see if MCAS can be modified in this way.
There would usually be one lock for the entire data structure, given that a lock itself usually takes up some serious resources. I'm guessing given that you are trying to avoid that, this must be some very specialized scenario where you have a lot of CPU intensive worker threads constantly updating the tree?
You could get a balance by making every N nodes share one lock. When entering a rotation operation, find the set of unique locks used by the affected nodes. Most of the time it will be just one lock.
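A sketch of that idea (the stripe count and hashing are arbitrary); taking the distinct stripes in index order keeps two rotations from deadlocking each other:

    using System.Collections.Generic;
    using System.Threading;

    class NodeLockStripes
    {
        private readonly object[] _stripes;

        public NodeLockStripes(int stripeCount)
        {
            _stripes = new object[stripeCount];
            for (int i = 0; i < stripeCount; i++)
                _stripes[i] = new object();
        }

        private int StripeOf(object node)
        {
            return (node.GetHashCode() & 0x7fffffff) % _stripes.Length;
        }

        // Lock the unique stripes covering the affected nodes, in a fixed
        // order; the caller must Monitor.Exit them (in reverse) in a finally.
        public List<object> LockAll(params object[] nodes)
        {
            var indices = new SortedSet<int>();
            foreach (var n in nodes) indices.Add(StripeOf(n));
            var held = new List<object>();
            foreach (int i in indices)
            {
                Monitor.Enter(_stripes[i]);
                held.Add(_stripes[i]);
            }
            return held;
        }
    }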
The best solution depends on your application. But if you have something like an in-memory database of stuff and you want to do concurrent updates to it AND want very fine-grained locking, you could also look at B-trees, which do not require node rotation as often as binary trees. They also stay balanced automatically, whereas binary trees (e.g. splay or AVL trees) require more rotations to maintain balance.
If you want transactional modifications to the trees instead, you could use functional B-trees (what Thomas Danecker calls immutable trees). These are kind of "copy-on-write" trees, and the only thing you need to lock is the root node, so you need only one lock. The operations have in practice the same complexity as on binary trees, because all ops on functional binary trees are O(log n) and you spend the same logarithmic time whenever you descend any tree.
Having one lock per node is most likely not the correct solution.
John Valois' paper briefly describes an approach to implementing a binary search tree using auxiliary nodes. I am using the auxiliary-node approach in my design of a concurrent LinkedHashMap. You can probably find many other papers on concurrent trees on CiteSeerX (an invaluable resource). I would stay away from using trees as a concurrent data collection unless truly necessary, as they tend to have poor concurrency characteristics due to rebalancing. Often more pragmatic approaches, like skip lists, work out better.
I played around with making binary trees read-lock-free using partial copy-on-write. I posted an incomplete implementation here: http://groups.google.com/group/comp.programming.threads/msg/6c0775e9882516a2?hl=en&