Okay, I just can't get my head around multi-threading scenarios properly. Sorry for asking a similar question again, I'm just seeing many different "facts" around the internet.
public static class MyClass
{
    private static List<string> _myList = new List<string>();
    private static bool _record;

    public static void StartRecording()
    {
        _myList.Clear();
        _record = true;
    }

    public static IEnumerable<string> StopRecording()
    {
        _record = false;
        // Return a read-only copy of the list data
        var result = new List<string>(_myList).AsReadOnly();
        _myList.Clear();
        return result;
    }

    public static void DoSomething()
    {
        if (_record) _myList.Add("Test");
        // More, but unrelated actions
    }
}
The idea is that if Recording is activated, calls to DoSomething() get recorded in an internal List, and returned when StopRecording() is called.
My specification is this:
StartRecording is not considered thread-safe. The user should call it while no other thread is calling DoSomething(). But if it somehow could be made thread-safe, that would be great.
StopRecording is also not officially thread-safe. Again, it would be great if it could be, but that is not a requirement.
DoSomething() has to be thread-safe.
The usual way seems to be:
public static void DoSomething()
{
    object _lock = new object();
    lock (_lock)
    {
        if (_record) _myList.Add("Test");
    }
    // More, but unrelated actions
}
Alternatively, declaring a static variable:
private static object _lock;

public static void DoSomething()
{
    lock (_lock)
    {
        if (_record) _myList.Add("Test");
    }
    // More, but unrelated actions
}
However, this answer says that this does not prevent other code from accessing it.
So I wonder
How would I properly lock a list?
Should I create the lock object in my function or as a static class variable?
Can I wrap the functionality of Start and StopRecording in a lock-block as well?
StopRecording() does two things: set a boolean variable to false (to prevent DoSomething() from adding more stuff) and then copy the list to return a copy of the data to the caller. I assume that _record = false; is atomic and will be in effect immediately? So normally I wouldn't have to worry about multi-threading here at all, unless some other thread calls StartRecording() again?
At the end of the day, I am looking for a way to express "Okay, this list is mine now, all other threads have to wait until I am done with it".
I will lock on _myList itself here since it is private, although using a separate lock variable is more common. To improve on a few points:
public static class MyClass
{
    private static List<string> _myList = new List<string>();
    private static bool _record;

    public static void StartRecording()
    {
        lock (_myList) // lock on the list
        {
            _myList.Clear();
            _record = true;
        }
    }

    public static IEnumerable<string> StopRecording()
    {
        lock (_myList)
        {
            _record = false;
            // Return a read-only copy of the list data
            var result = new List<string>(_myList).AsReadOnly();
            _myList.Clear();
            return result;
        }
    }

    public static void DoSomething()
    {
        lock (_myList)
        {
            if (_record) _myList.Add("Test");
        }
        // More, but unrelated actions
    }
}
Note that this code uses lock(_myList) to synchronize access to both _myList and _record. And you need to sync all actions on those two.
And to agree with the other answers here, lock(_myList) does nothing to _myList; it just uses the _myList object as a token (the runtime keeps the lock state associated with that object). All methods must play fair by asking permission using the same token. A method on another thread can still use _myList without locking first, but with unpredictable results.
We can use any token so we often create one specially:
private static object _listLock = new object();
And then use lock(_listLock) instead of lock(_myList) everywhere.
This technique would have been advisable if _myList had been public, and it would have been absolutely necessary if you had re-created _myList instead of calling Clear().
Creating a new lock in DoSomething() would certainly be wrong - it would be pointless, as each call to DoSomething() would use a different lock. You should use the second form, but with an initializer:
private static object _lock = new object();
It's true that locking doesn't stop anything else from accessing your list, but unless you're exposing the list directly, that doesn't matter: nothing else will be accessing the list anyway.
Yes, you can wrap Start/StopRecording in locks in the same way.
Yes, setting a Boolean variable is atomic, but that doesn't make it thread-safe. If you only access the variable within the same lock, you're fine in terms of both atomicity and volatility though. Otherwise you might see "stale" values - e.g. you set the value to true in one thread, and another thread could use a cached value when reading it.
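To illustrate that visibility point, here is a minimal sketch (not from the answer above; marking the field volatile is my addition) of one way to keep the cheap flag check outside the lock while still seeing fresh values:

// Sketch only: volatile guarantees that writes to _record become visible promptly to
// other threads, but it does not make the flag-check-plus-Add combination atomic,
// so the list access still needs the shared lock.
private static volatile bool _record;

public static void DoSomething()
{
    if (_record)                 // cheap, promptly-visible check
    {
        lock (_myList)
        {
            if (_record)         // re-check under the lock in case StopRecording ran meanwhile
                _myList.Add("Test");
        }
    }
    // More, but unrelated actions
}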
There are a few ways to lock the list. You can lock on _myList directly providing _myList is never changed to reference a new list.
lock (_myList)
{
// do something with the list...
}
You can create a locking object specifically for this purpose.
private static object _syncLock = new object();
lock (_syncLock)
{
// do something with the list...
}
If the static collection implements the System.Collections.ICollection interface (List<T> does), you can also synchronize using the SyncRoot property.
lock (((ICollection)_myList).SyncRoot)
{
// do something with the list...
}
The main point to understand is that you want one and only one object to use as your locking sentinel, which is why creating the locking sentinel inside the DoSomething() function won't work. As Jon said, each thread that calls DoSomething() will get its own object, so the lock on that object will succeed every time and grant immediate access to the list. By making the locking object static (via the list itself, a dedicated locking object, or the ICollection.SyncRoot property), it becomes shared across all threads and can effectively serialize access to your list.
The first way is wrong, as each caller will lock on a different object.
You could just lock on the list.
lock(_myList)
{
_myList.Add(...)
}
You may be misinterpreting that answer. What is actually being stated is that the lock statement does not lock the object in question against modification; rather, it prevents any other code that uses the same object as its locking source from executing at the same time.
What this really means is that when two pieces of code use the same instance as the locking object, only one of them can be executing its lock block at a time.
In essence you are not really attempting to "lock" your list; you are using a common instance as a reference point for when you want to modify your list. While it is in use, or "locked", you prevent other code that would potentially modify the list from executing.
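As a small illustration of that "common reference point" idea (a sketch, not code from the answer; the method names are made up), every method that touches the list asks permission through the same private token:

// Both methods request the same token, so their bodies can never run at the same time.
private static readonly object _token = new object();
private static readonly List<string> _myList = new List<string>();

public static void Write(string value)
{
    lock (_token) { _myList.Add(value); }
}

public static int Read()
{
    lock (_token) { return _myList.Count; }
}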
Related
Imagine this scenario:
You have 2 arrays, and you need to lock both of them at the same moment (for whatever reason - you just need to keep both of them locked because they somehow depend on each other) - you could nest the locks:
lock (array1)
{
lock (array2)
{
... do your code
}
}
but this may result in a deadlock if, in another part of your code, someone does
lock (array2)
{
lock (array1)
{
... do your code
}
}
and array1 was locked by the first thread, the execution context switched, and then array2 was locked by the second thread.
Is there a way to atomically lock them? such as
lock_array(array1, array2)
{
....
}
I know I could just create some extra "lock object" and lock that instead of both arrays everywhere in my code, but that just doesn't seem correct to me...
In general you should avoid locking on publicly accessible members (the arrays in your case). You'd rather have a private static object you'd lock on.
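For example, a minimal sketch of that approach (the field and method names are illustrative): one private lock object guards every block that touches either array, so there is no nesting and no lock-ordering problem.

private static readonly object _arraysLock = new object();

void UseBothArrays()
{
    lock (_arraysLock)   // a single lock covers both arrays everywhere in the code
    {
        // ... work with array1 and array2 together
    }
}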
You should never allow locking on a publicly accessible variable, as Darin said. For example:
public class Foo
{
    public object Locker = new object();
}

public class Bar
{
    public void DoStuff()
    {
        var foo = new Foo();
        lock (foo.Locker)
        {
            // doing something here
        }
    }
}
Rather, do something like this:
public class Foo
{
    private List<int> toBeProtected = new List<int>();
    private object locker = new object();

    public void Add(int value)
    {
        lock (locker)
        {
            toBeProtected.Add(value);
        }
    }
}
The reason for this is that if you have multiple threads accessing multiple public synchronization constructs, you run the very real possibility of deadlock, and you then have to be very careful about how you code. If you are making your library available to others, can you be sure that you can grab the lock? Perhaps someone using your library has also grabbed the lock, and between the two of you, you have worked your way into a deadlock scenario. This is the reason Microsoft recommends not using SyncRoot.
I am not sure what you mean by locking the arrays.
You can easily perform operations on both arrays within a single lock.
static readonly object a = new object();
lock(a){
//Perform operation on both arrays
}
I know that it is wrong to use lock(this) or to lock on any shared object.
I wonder if this usage is OK?
public class A
{
    private readonly object locker = new object();
    private List<int> myList;

    public A()
    {
        myList = new List<int>();
    }

    private void MethodeA()
    {
        lock (locker)
        {
            myList.Add(10);
        }
    }

    public void MethodeB()
    {
        CallToMethodInOtherClass(myList);
    }
}

public class OtherClass
{
    private readonly object locker = new object();

    public void CallToMethodInOtherClass(List<int> list)
    {
        lock (locker)
        {
            int i = list.Count;
        }
    }
}
Is this thread safe? In OtherClass we lock with a private object, so if class A locks with its own private lock, can the list still change inside the lock block in OtherClass?
No, it's not thread safe. Add and Count may be executed at the "same" time. You have two different lock objects.
Always lock your own lock object when passing the list:
public void MethodeB()
{
lock(locker)
{
CallToMethodInOtherClass(myList);
}
}
No, this is not thread-safe. To make it thread-safe you can lock on a shared (for example static) object, since it is visible to all threads. This may cause deadlocks in the code, but that can be handled by maintaining a consistent order of locking. There is a performance cost associated with lock, so use it wisely.
Hope this helps
No, this is not thread-safe. A.MethodeA and OtherClass.CallToMethodInOtherClass are locking on different objects, so they're not mutually exclusive. If you need to protect the access to the list, don't pass it to external code, keep it private.
No, that is not thread-safe.
Your 2 methods are locking on 2 different objects, they will not lock out each other.
Because CallToMethodInOtherClass() only retrieves the value of Count nothing will go horribly wrong. But the lock() around it is useless and misleading.
If the method would make changes in the list you would have a nasty problem. To solve it, change MethodeB:
public void MethodeB()
{
lock(locker) // same instance as MethodA is using
{
CallToMethodInOtherClass(myList);
}
}
No, they have to lock the same object. With your code they each lock on a different object, and each call could be executed simultaneously.
To make the code thread safe place a lock in MethodeB or use the list itself as lock object.
It actually is thread-safe (purely as a matter of an implementation detail on Count), but:
Thread-safe snippets of code do not a thread-safe application make. You can combine different thread-safe operations into non-thread-safe operations. Indeed, much non-thread-safe code can be broken down into smaller pieces all of which are thread-safe on their own.
It's not thread-safe for the reason you were hoping, which means that extending it further would not be thread-safe.
This code would be thread-safe:
public void CallToMethodInOtherClass(List<int> list)
{
//note we've no locks!
int i = list.Count;
//do something with i but don't touch list again.
}
Call it with any list, and it'll give i a value based on the state of that list, regardless of what other threads are up to. It will not corrupt list. It will not give i an invalid value.
So while this code is also thread-safe:
public void CallToMethodInOtherClass(List<int> list)
{
Console.WriteLine(list[93]); // obviously only works if there's at least 94 items
// but that's nothing to do with thread-safety
}
This code would not be thread-safe:
public void CallToMethodInOtherClass(List<int> list)
{
lock(locker)//same as in the question, different locker to that used elsewhere.
{
int i = list.Count;
if(i > 93)
Console.WriteLine(list[93]);
}
}
Before going further, the two bits I described as thread-safe are not promised to be by the spec for List. Conservative coding would assume they are not thread-safe rather than depending upon implementation details, but I'm going to depend on the implementation details because it affects the question of how to use locks in an important way:
Because there is code operating on list that is not acquiring the lock on locker first, that code is not prevented from running concurrently with CallToMethodInOtherClass. Now, while list.Count is thread-safe and list[93] is thread-safe,* the combination of the two where we depend on the first to ensure that the second works is not thread-safe. Because code outside the lock can affect list, it's possible for code to call Remove or Clear in between Count assuring us that list[93] would work, and list[93] being called.
Now, if we know that list is only ever added to, that's fine, even if a resize is happening concurrently we'll end up with the value of list[93] either way. If something is writing to list[93] and it's a type that .NET will write to atomically (and int is one such type), we'll end up with either the old one or the new one, just as if we'd locked correctly we'd get the old or the new depending on which thread got the lock first. Again, this is an implementation detail, not a specified promise; I'm stating this just to point out how the thread-safety given still results in non-thread-safe code.
Moving this toward real code: we shouldn't assume that list.Count and list[93] are thread-safe, because we weren't promised they would be and that could change; but even if we did have that promise, those two promises won't add up to a promise that they'd be thread-safe together.
The important thing is to use the same lock to protect blocks of code that can interfere with each other. Hence, consider the variant below that is guaranteed to be thread-safe:
public class ThreadSafeList
{
    private readonly object locker = new object();
    private List<int> myList = new List<int>();

    public void Add(int item)
    {
        lock (locker)
            myList.Add(item);
    }

    public void Clear()
    {
        lock (locker)
            myList.Clear();
    }

    public int Count
    {
        get
        {
            lock (locker)
                return myList.Count;
        }
    }

    public int this[int index]
    {
        get
        {
            lock (locker)
                return myList[index];
        }
    }
}
This class is guaranteed to be thread-safe in everything it does. Without depending on any implementation details, there is no method here that will corrupt state or give incorrect results because of what another thread is doing with the same instance. The following code still doesn't work though:
// (l is a ThreadSafeList visible to multiple threads.)
if(l.Count > 0)
Console.WriteLine(l[0]);
We've guaranteed the thread-safety of each call 100%, but we haven't guaranteed the combination, and we can't guarantee the combination.
There's two things we can do. We can add a method for the combination. Something like the following would be common for many classes specifically designed for multi-threaded use:
public bool TryGetItem(int index, out int value)
{
    lock (locker)
    {
        if (myList.Count > index)
        {
            value = myList[index];
            return true;
        }
        value = 0;
        return false;
    }
}
This makes the count test and the item retrieval part of a single operation which is guaranteed to be thread-safe.
Alternatively, and most often what we need to do, we have the lock happen at the place where the operations are grouped:
lock(lockerOnL)//used by every other piece of code operating on l
if(l.Count > 0)
Console.WriteLine(l[0]);
Of course, this makes the locks within ThreadSafeList redundant and just a waste of effort, space, and time. This is the main reason that most classes don't provide thread-safety on their instance members - since you can't meaningfully protect groups of calls on members from within the class, it's a waste of time trying to unless the thread-safety promises are very well specified and useful on their own.
To come back to the code in your question:
The lock in CallToMethodInOtherClass should be removed unless OtherClass has its own reason for locking internally. It can't make a meaningful promise that it won't be combined in a non-threadsafe way and adding more locks to a program just increases the complexity of analysing it to be sure there are no deadlocks.
The call to CallToMethodInOtherClass should be protected by the same lock as other operations in that class:
public void MethodeB()
{
lock(locker)
CallToMethodInOtherClass(myList);
}
Then as long as CallToMethodInOtherClass doesn't store myList somewhere it can be seen by other threads later on, it doesn't matter that CallToMethodInOtherClass isn't thread-safe because the only code that can access myList brings its own guarantee not to call it concurrently with other operations on myList.
The two important things are:
When something is described as "thread-safe", know just what it's promising by that, as there are different sorts of promise that fall under "thread-safe" and on its own it just means "I won't put this object into a nonsensical state", which while an important building block, is not a lot on its own.
Lock on groups of operations, with the same lock for each group that'll affect the same data, and guard the access to objects so that there can't possibly be another thread not playing ball with this.
*This is a very limited definition of thread-safe. Calling list[93] on a List<T> where T is a type that will be written and read atomically and we don't know whether it actually has at least 94 items is equally safe whether or not there are other threads operating on it. Of course, the fact that it can throw ArgumentOutOfRangeException in either case is not what most people would consider "safe", but the guarantee we have with multiple threads remains the same as with one. It's that we obtain a stronger guarantee by checking Count in a single thread but not in a multi-thread situation that leads me to describe that as not thread-safe; while that combo still won't corrupt state it can lead to an exception we'd assured ourselves couldn't happen.
Probably the easiest way to do the trick
public class A
{
    private List<int> myList;

    public A()
    {
        myList = new List<int>();
    }

    private void MethodeA()
    {
        lock (myList)
        {
            myList.Add(10);
        }
    }

    public void MethodeB()
    {
        CallToMethodInOtherClass(myList);
    }
}

public class OtherClass
{
    public void CallToMethodInOtherClass(List<int> list)
    {
        lock (list)
        {
            int i = list.Count;
        }
    }
}
Many of the answers have mentioned using a static readonly lock.
However, you really should try to avoid this static lock. It would be easy to create a deadlock where multiple threads are using the static lock.
What you could use instead is one of the .NET 4 concurrent collections; these provide some thread synchronisation on your behalf, so that you do not need to use locking.
Take a look at the System.Collections.Concurrent namespace.
For this example, you could use the ConcurrentBag<T> class.
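For instance, a rough sketch of how the example from the question might look with ConcurrentBag<T> (the field and the CurrentCount method are illustrative names; note that a bag is unordered):

using System.Collections.Concurrent;

public class A
{
    private readonly ConcurrentBag<int> myBag = new ConcurrentBag<int>();

    private void MethodeA()
    {
        myBag.Add(10);           // safe from multiple threads, no explicit lock needed
    }

    public int CurrentCount()
    {
        return myBag.Count;      // also safe; a moment-in-time value
    }
}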
As all the answers say, these are different lock objects.
A simple way is to have a static lock object, for example:
public class A
{
    public static readonly object lockObj = new object();
}
and in both classes use the lock like:
lock (A.lockObj)
{
}
I've a class that contains a static collection to store the logged-in users in an ASP.NET MVC application. I just want to know whether the code below is thread-safe or not. Do I need to lock the code whenever I add or remove an item to/from the onlineUsers collection?
public class OnlineUsers
{
    private static List<string> onlineUsers = new List<string>();

    public static EventHandler<string> OnUserAdded;
    public static EventHandler<string> OnUserRemoved;

    private OnlineUsers()
    {
    }

    static OnlineUsers()
    {
    }

    public static int NoOfOnlineUsers
    {
        get
        {
            return onlineUsers.Count;
        }
    }

    public static List<string> GetUsers()
    {
        return onlineUsers;
    }

    public static void AddUser(string userName)
    {
        if (!onlineUsers.Contains(userName))
        {
            onlineUsers.Add(userName);
            if (OnUserAdded != null)
                OnUserAdded(null, userName);
        }
    }

    public static void RemoveUser(string userName)
    {
        if (onlineUsers.Contains(userName))
        {
            onlineUsers.Remove(userName);
            if (OnUserRemoved != null)
                OnUserRemoved(null, userName);
        }
    }
}
That is absolutely not thread safe. Any time 2 threads are doing something (very common in a web application), chaos is possible - exceptions, or silent data loss.
Yes you need some kind of synchronization such as lock; and static is usually a very bad idea for data storage, IMO (unless treated very carefully and limited to things like configuration data).
Also - static events are a notorious way to keep object graphs alive unexpectedly. Treat those with caution too; if you subscribe once only, fine - but don't subscribe per request.
Also - it isn't just about locking the operations, since this line:
return onlineUsers;
returns your list, now unprotected. All access to an item must be synchronized. Personally I'd return a copy, i.e.
lock(syncObj) {
return onlineUsers.ToArray();
}
Finally, returning a .Count from such a collection can be confusing, as it is not guaranteed to still be the count by the time you use it. It is informational at that point in time only.
Yes, you need to lock the onlineUsers to make that code threadsafe.
A few notes:
Using a HashSet<string> instead of the List<string> may be a good idea, since it is much more efficient for operations like this (Contains and Remove especially). This does not change anything about the locking requirements though (see the sketch after these notes).
You can declare a class as "static" if it has only static members.
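A minimal sketch of the HashSet variant mentioned above (the lock is still required; syncObj is an illustrative name, and raising the event outside the lock is my suggestion):

private static readonly HashSet<string> onlineUsers = new HashSet<string>();
private static readonly object syncObj = new object();

public static void AddUser(string userName)
{
    bool added;
    lock (syncObj)
    {
        added = onlineUsers.Add(userName);   // true only if the user was not already present
    }
    if (added && OnUserAdded != null)
        OnUserAdded(null, userName);         // raise the event outside the lock
}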
Yes, you do need to lock your code.
private readonly object padlock = new object();

public bool Contains(T item)
{
    lock (padlock)
    {
        return items.Contains(item);
    }
}
Yes. You need to lock the collection before you read or write to the collection, since multiple users are potentially being added from different threadpool workers. You should probably also do it on the count as well, though if you're not concerned with 100% accuracy that may not be an issue.
As per Lucero's answer, you need to lock onlineUsers. Also be careful about what clients of your class will do with the onlineUsers returned from GetUsers(). I suggest you change your interface - for example use IEnumerable<string> GetUsers() and make sure the lock is used in its implementation. Something like this:
public static IEnumerable<string> GetUsers() {
lock (...) {
foreach (var element in onlineUsers)
yield return element;
// We need foreach, just "return onlineUsers" would release the lock too early!
}
}
Note that this implementation can expose you to deadlocks if users try to call some other method of OnlineUsers that uses lock, while still iterating over the result of GetUsers().
That code is not thread-safe per se.
I will not make any suggestions relative to your "design", since you didn't ask any. I'll assume you found good reasons for those static members and exposing your list's contents as you did.
However, if you want to make your code thread-safe, you should basically use a lock object to lock on, and wrap the contents of your methods with a lock statement:
private readonly object syncObject = new object();
void SomeMethod()
{
lock (this.syncObject)
{
// Work with your list here
}
}
Beware that those events being raised have the potential to hold the lock for an extended period of time, depending on what the delegates do.
You could omit the lock from the NoOfOnlineUsers property while declaring your list as volatile. However, if you want the Count value to persist for as long as you are using it at a certain moment, use a lock there, as well.
As others suggested here, exposing your list directly, even with a lock, will still pose a "threat" to its contents. I would go with returning a copy (which should fit most purposes), as Mark Gravell advised.
Now, since you said you are using this in an ASP.NET environment, it is worth saying that all local and member variables, as well as their member variables, if any, are thread safe.
Sorry if this has been answered elsewhere... I have found a lot of posts on similar things but not the same.
I want to ensure that only one instance of an object exists at a time BUT I don't want that object to be retained past its natural life-cycle, as it might be with the Singleton pattern.
I am writing some code where processing of a list gets triggered (by external code that I have no control over) every minute. Currently I just create a new 'processing' object each time and it gets destroyed when it goes out of scope, as per normal. However, there might be occasions when the processing takes longer than a minute, and so the next trigger will create a second instance of the processing class in a new thread.
Now, I want to have a mechanism whereby only one instance can be around at a time... say, some sort of factory whereby it'll only allow one object at a time. A second call to the factory will return null, instead of a new object, say.
So far my (crappy) solution is to have a Factory type object as a nested class of the processor class:
class XmlJobListProcessor
{
    private static volatile bool instanceExists = false;

    public static class SingletonFactory
    {
        private static object lockObj = new object();

        public static XmlJobListProcessor CreateListProcessor()
        {
            if (!instanceExists)
            {
                lock (lockObj)
                {
                    if (!instanceExists)
                    {
                        instanceExists = true;
                        return new XmlJobListProcessor();
                    }
                    return null;
                }
            }
            return null;
        }
    }

    private XmlJobListProcessor() { }
    ....
}
I was thinking of writing an explicit destructor for the XmlJobListProcessor class that reset the 'instanceExists' field to false.
I realise this is a seriously terrible design. The factory should be a class in its own right... it's only nested so that both it and the instance destructor can access the volatile boolean...
Anyone have any better ways to do this? Cheers
I know .NET 4 is not as widely used, but eventually it will be and you'll have:
private static readonly Lazy<XmlJobListProcessor> _instance =
new Lazy<XmlJobListProcessor>(() => new XmlJobListProcessor());
Then you have access to it via _instance.Value, which is initialized the first time it's requested.
Your original example uses double-check locking, which should be avoided at all costs.
See the MSDN Singleton implementation article on how to initialize the Singleton properly.
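For reference, a from-memory sketch of the simple, thread-safe approach that article describes, relying on the CLR's static initialization guarantees rather than double-check locking:

public sealed class XmlJobListProcessor
{
    // The runtime guarantees this initializer runs exactly once, in a thread-safe way.
    private static readonly XmlJobListProcessor instance = new XmlJobListProcessor();

    public static XmlJobListProcessor Instance
    {
        get { return instance; }
    }

    private XmlJobListProcessor() { }
}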
just make one and keep it around, don't destroy and create it every minute
"minimize the moving parts"
I would instantiate the class and keep it around. Certainly I wouldn't use a destructor (if you mean ~myInstance())... that increases GC time. In addition, if a process takes longer than a minute, what do you do with the data that was supposed to be processed if you just return a null value?
Keep the instance alive, and possibly build a buffer mechanism to continue taking input while the processor class is busy. You can check to see:
if (isBusy == true)
{
    // add data to bottom of buffer
}
else
{
    // call processing
}
I take everyone's point about not re-instantiating the processor object and BillW's point about a queue, so here is my bastardized mashup solution:
public static class PRManager
{
    private static XmlJobListProcessor instance = new XmlJobListProcessor();
    private static object lockobj = new object();

    public static void ProcessList(SPList list)
    {
        bool acquired = Monitor.TryEnter(lockobj);
        try
        {
            if (acquired)
            {
                instance.ProcessList(list);
            }
        }
        catch (ArgumentNullException)
        {
        }
        finally
        {
            if (acquired)
                Monitor.Exit(lockobj); // only release the lock if this thread actually took it
        }
    }
}
The processor is retained long-term as a static member (here, long-term object retention is not a problem since it has no state variables etc.). If the lock on lockobj is already held, the request just isn't processed and the calling thread goes on with its business.
Cheers for the feedback guys. Stackoverflow will ensure my internship! ;D
Okay, newbie multi-threading question:
I have a Singleton class. The class has a Static List and essentially works like this:
class MyClass {
    private static MyClass _instance;
    private static List<string> _list;
    private static bool IsRecording;

    public static void StartRecording() {
        _list = new List<string>();
        IsRecording = true;
    }

    public static IEnumerable<string> StopRecording() {
        IsRecording = false;
        return new List<string>(_list).AsReadOnly();
    }

    public MyClass GetInstance() {
        return _instance;
    }

    public void DoSomething() {
        if (IsRecording) _list.Add("Something");
    }
}
Basically a user can call StartRecording() to initialize a List and then all calls to an instance-method may add stuff to the list. However, multiple threads may hold an instance to MyClass, so multiple threads may add entries to the list.
However, both list creation and reading are single operations, so the usual Reader-Writer Problem in multi-threading situations does not apply. The only problem I could see is the insertion order being weird, but that is not a problem.
Can I leave the code as-is, or do I need to take any precautions for multi-threading? I should add that in the real application this is not a List of strings but a List of Custom Objects (so the code is _list.Add(new Object(somedata))), but these objects only hold data, no code besides a call to DateTime.Now.
Edit: Clarifications following some answers: DoSomething cannot be static (the class here is abbreviated, there is a lot of stuff going on that is using instance variables, but these are created by the constructor and then only read).
Is it good enough to do
lock(_list){
_list.Add(something);
}
and
lock(_list){
return new List<string>(_list).AsReadOnly();
}
or do I need some deeper magic?
You certainly must lock the _list. And since you are creating a new instance for _list each time, you cannot lock on _list itself; you should use something like:
private static object _listLock = new object();
As an aside, to follow a few best practices:
DoSomething(), as shown, can be static and so it should be.
For library classes the recommended pattern is to make static members thread-safe; that would apply to StartRecording(), StopRecording() and DoSomething().
I would also make StopRecording() set _list = null and check it for null in DoSomething() (see the sketch below).
And before you ask, all this takes so little time that there really are no performance reasons not to do it.
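A sketch of that last suggestion, assuming the _listLock object from above guards both the null check and the list itself:

public static IEnumerable<string> StopRecording()
{
    lock (_listLock)
    {
        var result = new List<string>(_list).AsReadOnly();
        _list = null;                       // null now means "not recording"
        return result;
    }
}

public static void DoSomething()
{
    lock (_listLock)
    {
        if (_list != null) _list.Add("Something");
    }
}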
You need to lock the list if multiple threads are adding to it.
A few observations...
Maybe there's a reason not to, but I would suggest making the class static and hence all of its members static. There's no real reason, at least from what you've shown, to require clients of MyClass to call the GetInstance() method just so they can call an instance method, DoSomething() in this case.
I don't see what prevents someone from calling the StartRecording() method multiple times. You might consider putting a check in there so that if it is already recording you don't create a new list, pulling the rug out from under everyone's feet (one way to do this is sketched below).
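One way to add that guard, sketched here using the _sync lock object from the locking example that follows (the early-return check is my suggestion, not code from the post):

public static void StartRecording()
{
    lock (_sync)
    {
        if (IsRecording) return;       // already recording: keep the existing list
        _list = new List<string>();
        IsRecording = true;
    }
}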
Finally, when you lock the list, don't do it like this:
static object _sync = new object();
lock(_sync){
_list.Add(new object(somedata));
}
Minimize the amount of time spent inside the lock by moving the new object creation outside of the lock.
static object _sync = new object();
object data = new object(somedata);
lock(_sync){
_list.Add(data);
}
EDIT
You said that DoSomething() cannot be static, but I bet it can. You can still use an object of MyClass inside DoSomething() for any instance-related stuff you have to do. But from a programming usability perspective, don't require the users of MyClass to call GetInstance() first. Consider this:
class MyClass {
    private static MyClass _instance;
    private static List<string> _list;
    private static bool IsRecording;

    public static void StartRecording()
    {
        _list = new List<string>();
        IsRecording = true;
    }

    public static IEnumerable<string> StopRecording()
    {
        IsRecording = false;
        return new List<string>(_list).AsReadOnly();
    }

    private static MyClass GetInstance() // make this private, not public
    {
        return _instance;
    }

    public static void DoSomething()
    {
        // use inst internally to the function to get access to instance variables
        MyClass inst = GetInstance();
    }
}
Doing this, the users of MyClass can go from
MyClass.GetInstance().DoSomething();
to
MyClass.DoSomething();
.NET collections are not fully thread-safe. From MSDN: "Multiple readers can read the collection with confidence; however, any modification to the collection produces undefined results for all threads that access the collection, including the reader threads." You can follow the suggestions on that MSDN page to make your accesses thread-safe.
One problem that you would probably run into with your current code is if StopRecording is called while some other thread is inside DoSomething. Since creating a new list from an existing one requires enumerating over it, you are likely to run into the old "Collection was modified; enumeration operation may not execute" problem.
The bottom line: practice safe threading!
It's possible, albeit tricky, to write a linked list that allows simultaneous insertions from multiple threads without a lock, but this isn't it. It's just not safe to call _list.Add in parallel and hope for the best. Depending how it's written, you could lose one or both values, or corrupt the entire structure. Just lock it.