I have recently been looking at code, specifically component-oriented code that uses threads internally. Is this bad practice? The code I looked at was from an F# example that showed the use of event-based programming techniques. I can't post the code for fear of copyright infringement, but it does spin up a thread of its own. Is this regarded as bad practice, or is it reasonable for code not written by yourself to have full control over thread creation? I should point out that this code is not a visual component and is very much built from scratch.
What are the best practices for component creation where threading would be helpful?
I am completely language-agnostic on this; the F# example could have been in C# or Python.
I am concerned about the lack of control over the component's run time and its hogging of resources. The example implemented just one extra thread, but as far as I can see there is nothing stopping this type of design from spawning as many threads as it wishes, up to whatever limit your program allows.
I did think of approaches such as dependency injection and so forth, but threads are odd in that, from a component perspective, they are pure "action" as opposed to "model, state, declarations".
Any help would be great.
This is too general a question to bear any answer more specific than "it depends" :-)
There are cases when using internal threads within a component is completely valid, and there are cases when not. This has to be decided on a case by case basis. Overall, though, since threads do make the code much more difficult to test and maintain, and increase the chances of subtle, hard to find bugs, they should be used with caution, only when there is a really decisive reason to use them.
An example of a legitimate use of threads is a worker thread, where a component handling an event starts an action which takes a long time to execute (such as a lengthy computation, a web request, or extensive file I/O), and spawns a separate thread to do the job, so that control can be returned immediately to the interface to handle further user input. Without the worker thread, the UI would be totally unresponsive for a long time, which usually makes users angry.
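As an illustrative sketch (in Python, since the question is language-agnostic; the function names are invented for the example), a component can offload a slow job to a worker thread so the caller regains control immediately:

```python
import threading
import time

def slow_job(results):
    # Stand-in for lengthy work (a web request, heavy file I/O, etc.)
    time.sleep(0.1)
    results.append("done")

results = []
worker = threading.Thread(target=slow_job, args=(results,))
worker.start()   # control returns immediately; the UI stays responsive
# ... handle further user input here ...
worker.join()    # later, wait for the job to finish before using its result
print(results)   # ['done']
```

The key point is that `start()` returns at once; the caller decides when, or whether, to block on `join()`.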
Another example is a lengthy calculation/process which lends itself well to parallel execution, i.e. it consists of many smaller independent tasks of more or less similar size. If there are strong performance requirements, it does indeed make sense to execute the individual tasks in a concurrent fashion using a pool of worker threads. Many languages provide high level support for such designs.
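A hedged sketch of that design, using Python's standard-library pool support (the work function here is a made-up placeholder): many small independent tasks mapped over a bounded pool of workers.

```python
from concurrent.futures import ThreadPoolExecutor

def task(n):
    # one independent unit of work; real tasks would be I/O- or CPU-bound
    return n * n

# A bounded pool also addresses the questioner's worry: the component
# cannot spawn an unlimited number of threads.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(task, range(10)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

`pool.map` preserves input order, so the combined result looks exactly like the sequential version.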
Note that components are generally free to allocate and use any other kinds of resources too and thus wreak havoc in countless other ways - are you ever worried about a component eating up all memory, exhausting the available file handles, reserving ports etc.? Many of these can cause much more trouble globally within a system than spawning extra threads.
There's nothing wrong with creating new threads in a component/library. The only thing wrong would be if it didn't give the consumer of the API/component a way to synchronize whenever necessary.
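To sketch what "a way to synchronize" might look like (a hypothetical Python component; the `BackgroundIndexer` class and its method names are invented for illustration), the component owns its thread but exposes a blocking `wait` hook to consumers:

```python
import threading

class BackgroundIndexer:
    """A component that spins up its own thread internally, but lets
    the consumer wait for the work to complete when it needs to."""

    def __init__(self, items):
        self._items = items
        self.index = {}
        self._done = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def start(self):
        self._thread.start()

    def _run(self):
        for pos, item in enumerate(self._items):
            self.index[item] = pos
        self._done.set()  # announce completion to any waiting consumer

    def wait(self, timeout=None):
        # The synchronization hook: block until the internal thread is done.
        return self._done.wait(timeout)

indexer = BackgroundIndexer(["a", "b", "c"])
indexer.start()
indexer.wait()
print(indexer.index)  # {'a': 0, 'b': 1, 'c': 2}
```

Without `wait`, a consumer would have no safe moment at which to read `index`.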
First of all, what is the nature of the component you are talking about? Is it a DLL to be consumed by some different code? What does it do? What are the business requirements? All of these are essential to determining whether you need to worry about parallelism or not.
Second of all, threading is just a tool to achieve better performance and responsiveness, so avoiding it at all costs everywhere does not sound like a smart approach; threading is certainly vital for some business needs.
Third of all, when comparing threading semantics in C# vs F#, you have to remember that those are very different beasts in themselves. F# implicitly makes threading safer to code, since values are immutable by default and there is little reliance on global mutable state, so critical sections are easier to avoid in F# than in C#. That puts you as a developer in a better place, because you don't have to deal with memory blocks, locks, semaphores, etc.
I would say that if your 'component' relies heavily on threading, you might want to consider either the Parallel FX extensions in C# or even going with F#, since it approaches working with processor time slicing and parallelism in a more elegant way (IMHO).
And last but not least, regarding hogging computer resources by using threading in your component: remember that threads do not necessarily impose a higher resource impact per se. You can just as easily do the same damage on one thread if you don't dispose of your (unmanaged) objects properly; granted, you might hit an OutOfMemoryException faster when you make the same mistake on several threads.
Related
I am dealing with multithreaded code, written by my predecessor, in a C# WinForms application that handles large volumes of data concurrently in a production environment. I have noticed that at some points within the code Thread.Sleep(20) is used. I am not an expert in multithreading and have only basic knowledge of threading and synchronisation primitives. I need to know whether there are any dangers associated with Thread.Sleep.
It's not going to be explicitly or directly dangerous. It's almost certainly wasting effort, as explicitly forcing your program to not do work when it has work to do is almost never sensible.
It's also a pretty significant red flag that there's a race condition in the code and, rather than actually figure out what it is or how to fix it, the programmer simply added in Sleep calls until he stopped seeing it. If true, it would mean that the program is still unstable and could potentially break at any time if enough other variables change, and that the issue should be actually fixed using proper synchronization techniques.
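A hedged sketch of the kind of fix meant here (Python used for illustration): instead of sleeping and hoping the other thread has finished, block on an explicit signal from it.

```python
import threading

data = []
ready = threading.Event()

def producer():
    data.append(42)
    ready.set()  # explicitly announce that the data is ready

t = threading.Thread(target=producer)
t.start()

# Fragile: time.sleep(0.02) and hope the producer has run by then.
# Robust: wait until the producer explicitly signals completion.
ready.wait()
print(data[0])  # 42
t.join()
```

Unlike a sleep, the wait cannot be "too short" on a loaded machine, and it returns the instant the producer finishes rather than wasting the rest of the interval.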
In which situations are CERs useful? I mean, real-life situations, not some abstract examples.
Do you personally use them? I haven't seen them used outside of examples in books and articles. That could well be because of my insufficient programming experience, so I am also interested in how widespread the technique is.
What are the pros and cons for using them?
In which situations are CERs useful? I mean, real-life situations, not some abstract examples.
When building software that has stringent reliability requirements. Database servers, for example, must not leak resources, must not corrupt internal data structures, and must keep running, period, end of story, even in the face of godawful scenarios like thread aborts.
Building managed code that cannot possibly leak, that maintains consistent data structures when aborts can happen at arbitrary places, and keeps the service going is a difficult task. CERs are one of the tools in the toolbox for building such reliable services; in this case, by restricting where aborts can occur.
One can imagine other services that must stay reliable in difficult circumstances. Software that, say, finds efficient routes for ambulances, or moves robot arms around in factories, has higher reliability constraints than your average end user code running on a desktop machine.
Do you personally use them?
No. I build compilers that run on end-user machines. If the compiler fails halfway through a compilation, that's unfortunate but it is not likely to have a human life safety impact or result in the destruction of important data.
I am also interested in how widespread the technique is.
I have no idea.
What are the pros and cons for using them?
I don't understand the question. You might as well ask what the pros and cons of a roofing hatchet are; unless you state the task that you intend to use the hatchet for, it's hard to say what the pros and cons of the tool are. What task do you wish to perform with CERs? Once we know the task we can describe the pros and cons of using any particular tool to accomplish that task.
I have some C# class libraries that were designed without taking into account things like concurrency, multiple threads, locks, etc.
The code is very well structured and easily extensible, but it could benefit a lot from multithreading: it's a set of scientific/engineering libraries that need to perform billions of calculations in a very short time (and right now they take no advantage of the available cores).
I want to transform all this code into a set of multithreaded libraries, but I don't know where to start and I don't have any previous experience.
I could use any available help, and any recommendations/suggestions.
My recommendation would be to not do it. You didn't write that code to be used in parallel, so it's not going to work, and it's going to fail in ways that will be difficult to debug.
Instead, I recommend you decide ahead of time which part of that code can benefit the most from parallelism, and then rewrite that code, from scratch, to be parallel. You can take advantage of having the unmodified code in front of you, and can also take advantage of existing automated tests.
It's possible that using the .NET 4.0 Task Parallel Library will make the job easier, but it's not going to completely bridge the gap between code that was not designed to be parallel and code that is.
I'd highly recommend looking into .NET 4 and the Task Parallel Library (also available in .NET 3.5sp1 via the Rx Framework).
It makes many concurrency issues much more approachable; in particular, data parallelism becomes dramatically simpler. Since you're dealing with large datasets in most scientific/engineering libraries, data parallelism is often the way to go...
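A minimal data-parallelism sketch, in Python as a stand-in for TPL-style constructs (the workload is a made-up sum of squares): split the dataset into independent chunks, process them concurrently, and combine the partial results.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # one independent piece of the larger calculation
    return sum(x * x for x in chunk)

numbers = list(range(1000))
chunk_size = 250
chunks = [numbers[i:i + chunk_size]
          for i in range(0, len(numbers), chunk_size)]

# Each chunk is processed with no shared mutable state; results are
# combined only at the end.  (Caveat: in CPython, threads only speed up
# CPU-bound work if the heavy lifting releases the GIL, e.g. in NumPy;
# the decomposition structure is the point here, not the speedup.)
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total == sum(x * x for x in numbers))  # True
```

The same shape maps directly onto TPL constructs such as partitioned parallel loops.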
For some reference material, especially on data parallelism and background about decomposing and approaching the problem, you might want to read my blog series on Parallelism in .NET 4.
If you don't have any previous experience in multithreading then I would recommend that you get the basics first by looking at the various resources: https://stackoverflow.com/questions/540242/book-or-resource-on-c-concurrency
Making your entire library multithreaded requires a brand new architectural approach. If you simply go around putting locks everywhere in your code, you'll end up making it very cumbersome, and you might not achieve any performance increase at all.
The best concurrent software is lock-free and wait-free... this is difficult to achieve in C#/.NET, since most of the standard collections are not lock-free, wait-free, or even thread-safe. There are various discussions on lock-free data structures. A lot of people have referenced Boyet's articles (which are REALLY good), and some people have been touting the Task Parallel Library as the next thing in .NET concurrency, but the TPL really doesn't give you much in terms of thread-safe collections.
.NET 4.0 is coming out with System.Collections.Concurrent, which should help a lot.
Making your entire library concurrent is not recommended, since it wasn't designed with concurrency in mind from the start. Your next option is to go through your library and identify which portions of it are actually good candidates for multithreading, then pick the best concurrency solution for them and implement it. The main thing to remember is that when you write multithreaded code, the concurrency should result in increased throughput of your program. If increased throughput is not achieved (i.e. you merely match, or fall below, the throughput of the sequential version), then you should simply not use concurrency in that code.
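One rough way to apply that rule, sketched in Python (the `work` function is a placeholder simulating I/O-bound library work): time the sequential and concurrent versions on the same workload, verify they agree, and keep whichever actually wins.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def work(n):
    # stand-in for one unit of library work; sleep mimics I/O latency
    time.sleep(0.01)
    return n

items = range(20)

start = time.perf_counter()
sequential = [work(n) for n in items]
seq_time = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    concurrent = list(pool.map(work, items))
conc_time = time.perf_counter() - start

assert sequential == concurrent  # same results either way
# Adopt the concurrent version only if throughput actually improved.
print(conc_time < seq_time)
```

For CPU-bound work the comparison is the same; only the pool type (processes instead of threads, in Python) changes.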
The best place to start is probably http://msdn.microsoft.com/en-us/concurrency/default.aspx
Good luck!
I might inherit a somewhat complex multithreaded application which currently has several files of 2,000+ lines of code, lots of global variables accessed from everywhere, and other practices that I would consider quite smelly.
Before I start adding new features with the current patterns, I'd like to try and see if I can make the basic architecture of the application better. Here's a short description:
App has in memory lists of data, listA, listB
App has local copy of the data (for offline functionality) dataFileA, dataFileB
App has threads tA1, tB1 which update dirty data from client to server
Threads tA2, tB2 update dirty data from server to client
Threads tA3, tB3 update dirty data from in memory lists to local files
I'm somewhat at a loss as to which patterns, strategies, programming practices, etc. I should look into in order to have the knowledge to make the best decisions on this.
Here are some goals I've set for myself:
Keep the app as stable as possible
Make it easy for Generic Intern to add new features (big no-no to 50 lines of boilerplate code in each new EditRecordX.cs )
Reduce complexity
Thanks for any keywords or other tips which could help me on this project.
To Quibblesome's excellent suggestions, I might also add that using immutable objects is often an effective way to reduce the risk of threading problems. (Immutable objects, like strings in .NET and Java, cannot be modified once they are created.)
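A small Python illustration of the immutability point (the `Config` class is invented for the example): a frozen value object can be handed to many threads without any locks, because no thread can modify it.

```python
import threading
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    # frozen=True makes instances immutable, like .NET/Java strings
    host: str
    port: int

shared = Config("localhost", 8080)
seen = []

def reader():
    # every thread sees the same, unchangeable value; no lock needed
    seen.append((shared.host, shared.port))

threads = [threading.Thread(target=reader) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(seen == [("localhost", 8080)] * 4)  # True
```

To "change" the configuration, you create a new `Config` and swap the reference, which sidesteps an entire class of read/write races.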
I'd suggest another goal would be to remove/reduce global state and keep information on the stack as often as possible to reduce the possibility of race conditions and weird threading issues.
Perhaps it might be worth seeing if you can incorporate tA2, tB2, tA3 and tB3 into the same threads to kill a few. If that isn't possible consider putting them behind a facade (a thread that concerns itself with moving data requests between the UI and the service that is talking to the server). This is so the "user facing" code only has to deal with one client as opposed to two. (I don't count the backup as a client as this sounds like a one-way process).
If the threads (UI and facade) wait for one another to finish their requests then this should prevent a "pull update" happening at the same time as a "push update".
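One hedged way to sketch such a facade in Python (the operation names are invented): a single thread owns all server traffic, and the UI-facing code just posts requests onto a queue, so a "pull update" and a "push update" can never overlap.

```python
import queue
import threading

requests = queue.Queue()
responses = []

def facade_worker():
    # The only thread that talks to the "server".  Because it processes
    # one request at a time, pushes and pulls are serialized by design.
    while True:
        op, payload = requests.get()
        if op == "stop":
            break
        # stand-in for the real push/pull against the server
        responses.append((op, payload))

worker = threading.Thread(target=facade_worker)
worker.start()

# UI-facing code now deals with one client: it just enqueues work.
requests.put(("push", "dirty record 1"))
requests.put(("pull", "record 2"))
requests.put(("stop", None))
worker.join()

print(responses)  # [('push', 'dirty record 1'), ('pull', 'record 2')]
```

This collapses tA1/tA2-style thread pairs into one owner thread per resource, which is usually far easier to reason about.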
For making these kind of changes in general you will want to look at Martin Fowler's Refactoring: Improving the Design of Existing Code (much of which is on the refactoring website) and also Refactoring to Patterns. You also might find Working Effectively with Legacy Code useful in supporting safe changes. None of this is as much help specifically with multithreading, except in that simpler code is easier to handle in a multithreading environment.
I think you should take a look at this: http://msdn.microsoft.com/en-us/concurrency/default.aspx
And this blog entry: http://blogs.msdn.com/pfxteam/
And this: http://msdn.microsoft.com/en-us/devlabs/ee794896.aspx
Hope it helps.
How often do you find yourself actually using spinlocks in your code? How common is it to come across a situation where using a busy loop actually outperforms the usage of locks?
Personally, when I write some sort of code that requires thread safety, I tend to benchmark it with different synchronization primitives, and as far as it goes, it seems like using locks gives better performance than using spinlocks. No matter for how little time I actually hold the lock, the amount of contention I receive when using spinlocks is far greater than the amount I get from using locks (of course, I run my tests on a multiprocessor machine).
I realize that it's more likely to come across a spinlock in "low-level" code, but I'm interested to know whether you find it useful in even a more high-level kind of programming?
It depends on what you're doing. In general application code, you'll want to avoid spinlocks.
In low-level stuff where you'll only hold the lock for a couple of instructions, and latency is important, a spinlock may be a better solution than a blocking lock. But those cases are rare, especially in the kind of applications where C# is typically used.
In C#, "spin locks" have, in my experience, almost always been worse than taking a lock; it's a rare occurrence where spin locks will outperform a lock.
However, that's not always the case. .NET 4 is adding a System.Threading.SpinLock structure. This provides benefits in situations where a lock is held for a very short time, and being grabbed repeatedly. From the MSDN docs on Data Structures for Parallel Programming:
In scenarios where the wait for the lock is expected to be short, SpinLock offers better performance than other forms of locking.
Spin locks can outperform other locking mechanisms in cases where you're doing something like locking through a tree; if you're only holding the lock on each node for a very, very short period of time, they can outperform a traditional lock. I ran into this in a rendering engine with a multithreaded scene update, at one point: spin locks profiled out to outperform locking with Monitor.Enter.
For my realtime work, particularly with device drivers, I've used them a fair bit. It turns out that (when last I timed this) waiting for a sync object like a semaphore tied to a hardware interrupt chews up at least 20 microseconds, no matter how long it actually takes for the interrupt to occur. A single check of a memory-mapped hardware register, followed by a check of RDTSC (to allow for a time-out so you don't lock up the machine), is in the high nanosecond range (basically down in the noise). For hardware-level handshaking that shouldn't take much time at all, it is really tough to beat a spinlock.
My 2c: if your updates satisfy certain access criteria, then they are good spinlock candidates:
fast: you will have time to acquire the spinlock, perform the updates, and release the spinlock within a single thread quantum, so that you don't get pre-empted while holding it
localized: all the data you update is preferably in a single page that is already loaded; you do not want a TLB miss while holding the spinlock, and you definitely don't want a page fault causing a swap read!
atomic: you do not need any other lock to perform the operation, i.e. never wait for another lock while holding a spinlock
For anything that has any potential to yield, you should use a notified lock structure (events, mutex, semaphores etc).
One use case for spin locks is if you expect very low contention but are going to have a lot of them. If you don't need support for recursive locking, a spinlock can be implemented in a single byte, and if contention is very low then the CPU cycle waste is negligible.
For a practical use case, I often have arrays of thousands of elements, where updates to different elements of the array can safely happen in parallel. The odds of two threads trying to update the same element at the same time are very small (low contention), but I need one lock for every element (I'm going to have a lot of them). In these cases, I usually allocate an array of ubytes the same size as the array I'm updating in parallel and implement spinlocks inline, like this (in the D programming language):
```d
// Spin until we atomically swap the flag from 0 (free) to 1 (held).
while (!atomicCasUbyte(spinLocks[i], 0, 1)) {}
myArray[i] = newVal;              // the protected per-element update
atomicSetUbyte(spinLocks[i], 0);  // release: reset the flag to 0
```
On the other hand, if I had to use regular locks, I would have to allocate an array of pointers to Objects, and then allocate a Mutex object for each element of this array. In scenarios such as the one described above, this is just plain wasteful.
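The same per-element pattern can be sketched in Python, with heavy caveats: `Lock.acquire(blocking=False)` stands in for the atomic CAS, and CPython's GIL means this is purely illustrative of the structure, not the performance.

```python
import threading

SIZE = 1000
data = [0] * SIZE
# one tiny flag per element, mirroring the array of ubytes in the D version
flags = [threading.Lock() for _ in range(SIZE)]

def update(i, value):
    # spin until we win the per-element flag (test-and-set in a busy loop)
    while not flags[i].acquire(blocking=False):
        pass
    try:
        data[i] = value  # the protected update
    finally:
        flags[i].release()

threads = [threading.Thread(target=update, args=(i, i * 2))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(data[:8])  # [0, 2, 4, 6, 8, 10, 12, 14]
```

The point being demonstrated is the memory trade-off: one byte-sized flag per element instead of a full mutex object per element.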
If you have performance critical code and you have determined that it needs to be faster than it currently is and you have determined that the critical factor is the lock speed, then it'd be a good idea to try a spinlock. In other cases, why bother? Normal locks are easier to use correctly.
Please note the following points:
Most mutex implementations spin for a little while before the thread is actually descheduled. Because of this, it is hard to compare these mutexes with pure spinlocks.
Several threads spinning "as fast as possible" on the same spinlock will consume all the memory bandwidth and drastically decrease your program's efficiency. You need to add a tiny pause by putting a no-op in your spinning loop.
You hardly ever need to use spinlocks in application code; if anything, you should avoid them.
I can't think of any reason to use a spinlock in C# code running on a normal OS. Busy-waiting is mostly a waste at the application level: the spinning can burn your entire CPU timeslice, whereas a lock will immediately cause a context switch if needed.
High-performance code where the number of threads equals the number of processors/cores might benefit in some cases, but if you need performance optimization at that level, you're likely making a next-gen 3D game, working on an embedded OS with poor synchronization primitives, creating an OS/driver, or in any case not using C#.
I used spin locks for the stop-the-world phase of the garbage collector in my HLVM project because they are easy and that is a toy VM. However, spin locks can be counter-productive in that context:
One of the perf bugs in the Glasgow Haskell Compiler's garbage collector is so annoying that it has a name, the "last core slowdown". This is a direct consequence of their inappropriate use of spinlocks in their GC, and it is exacerbated on Linux due to its scheduler; in fact, the effect can be observed whenever other programs are competing for CPU time.
The effect is clear on the second graph here and can be seen affecting more than just the last core here, where the Haskell program sees performance degradation beyond only 5 cores.
Always keep these points in your mind while using spinlocks:
Fast user mode execution.
Synchronizes threads within a single process, or multiple processes if in shared memory.
Does not return until the object is owned.
Does not support recursion.
Consumes 100% of CPU while "waiting".
I have personally seen many deadlocks caused simply because someone thought it would be a good idea to use a spinlock.
Be very, very careful when using spinlocks
(I can't emphasize this enough).