I have a [ThreadStatic] member in a static class. The static class is used in a multi-threaded environment. I want to make sure that when a thread is returned to the thread pool (or re-used), the member is disposed (or re-initialized), so any subsequent use of that particular thread gets a fresh copy of the variable. The member has to stay static, so an instance member will not really help.
I have tried using ThreadLocal, AsyncLocal and CallContext, but none of these really help. (CallContext is mostly for proof of concept; it's a .NET Standard app, so CallContext won't work anyway.)
Below is sample code I wrote to reproduce the problem, with ThreadStatic, ThreadLocal, AsyncLocal and CallContext included for testing.
class Program
{
static void Main(string[] args)
{
var act = new List<Action<int>>()
{
v=> ThreadClass.Write(v),
v=> ThreadClass.Write(v),
};
Parallel.ForEach(act, new ParallelOptions { MaxDegreeOfParallelism = 1 }, (val, _, index) => val((int)index));
Console.WriteLine($"Main: ThreadId: {Thread.CurrentThread.ManagedThreadId} ThreadStatic = {ThreadClass.ThreadStatic} ThreadLocal = {ThreadClass.ThreadLocal.Value} AsyncLocal = {ThreadClass.AsyncLocal.Value} CallContext: {ThreadClass.CallContextData}");
Console.ReadKey();
}
}
public static class ThreadClass
{
static object _lock = new object();
[ThreadStatic]
public static string ThreadStatic;
public static ThreadLocal<string> ThreadLocal = new ThreadLocal<string>(() => "default");
public static readonly AsyncLocal<string> AsyncLocal = new AsyncLocal<string>();
public static string CallContextData
{
get => CallContext.LogicalGetData("value") as string;
set => CallContext.LogicalSetData("value", value);
}
static ThreadClass()
{
AsyncLocal.Value = "default";
}
public static void Write(int id)
{
lock (_lock)
{
Console.WriteLine($"{id} Init: ThreadId: {Thread.CurrentThread.ManagedThreadId} ThreadStatic = {ThreadStatic} ThreadLocal = {ThreadLocal.Value} AsyncLocal = {AsyncLocal.Value} CallContext: {ThreadClass.CallContextData}");
ThreadStatic = $"Static({id})";
ThreadLocal.Value = $"Local({id})";
AsyncLocal.Value = $"Async({id})";
CallContextData = $"Call({id})";
Console.WriteLine($"{id} Chng: ThreadId: {Thread.CurrentThread.ManagedThreadId} ThreadStatic = {ThreadStatic} ThreadLocal = {ThreadLocal.Value} AsyncLocal = {AsyncLocal.Value} CallContext: {ThreadClass.CallContextData}");
}
}
}
The above code runs on a single thread (MaxDegreeOfParallelism = 1) so that the same thread is re-used for both calls.
0 Init: ThreadId: 1 ThreadStatic = ThreadLocal = default AsyncLocal = default CallContext:
0 Chng: ThreadId: 1 ThreadStatic = Static(0) ThreadLocal = Local(0) AsyncLocal = Async(0) CallContext: Call(0)
--------------------
1 Init: ThreadId: 1 ThreadStatic = Static(0) ThreadLocal = Local(0) AsyncLocal = Async(0) CallContext: Call(0)
1 Chng: ThreadId: 1 ThreadStatic = Static(1) ThreadLocal = Local(1) AsyncLocal = Async(1) CallContext: Call(1)
--------------------
Main: ThreadId: 1 ThreadStatic = Static(1) ThreadLocal = Local(1) AsyncLocal = CallContext:
However, as seen in the output, when the second call is made and thread 1 is re-used, it still has the values set by the first call (index 0).
Is there any way to reset a ThreadStatic variable to its default value or null when the thread is re-used?
TL;DR
If we don't want a variable reused by multiple threads in a multi-threaded application, there's no reason to make it static.
If we don't want a variable reused by the same thread, it's questionable why we would deliberately use [ThreadStatic], since per-thread reuse is exactly what it provides.
I'm focusing on the ThreadStatic aspect of this since it seems to be a focus of the question.
so any subsequent use of that particular thread gets a fresh copy of the variable.
Uses of the thread don't need their own copy of the variable - methods that use the variable may or may not need their own copy of the variable. That sounds like a hair-splitting thing to say, but a thread, by itself, doesn't need a copy of any variable. It could be doing things unrelated to this static class and this variable.
It's when we use the variable that we care whether it is a "fresh copy." That is, when we invoke a method that uses the variable.
If, when we use the static variable (which is declared outside of a method), what we want is to ensure that it's newly instantiated before we use it and disposed when we're done with it, then we can accomplish that within the method that uses it. We can instantiate it, dispose it, even set it to null if we want to. What becomes apparent as we do this, however, is that it usually eliminates any need for the variable to be declared outside the method that uses it.
If we do this:
public static class HasDisposableThreadStaticThing
{
[ThreadStatic]
public static DisposableThing Foo;
public static void UseDisposableThing()
{
try
{
using (Foo = new DisposableThing())
{
Foo.DoSomething();
}
}
finally
{
Foo = null;
}
}
}
We've accomplished the goal.
Is there any way to reset a ThreadStatic variable to its default value or null when the thread is re-used?
Done. Every time the same thread enters the method ("the thread is re used") it's null.
But if that's what we want, then why not just do this?
public static class HasDisposableThreadStaticThing
{
public static void UseDisposableThing()
{
using (var foo = new DisposableThing())
{
foo.DoSomething();
}
}
}
The result is exactly the same. Every thread starts with a new instance of DisposableThing because when it executes the method it declares the variable and creates a new instance. Instead of setting it to null the reference goes out of scope.
The only difference between the two is that in the first example, DisposableThing is publicly exposed outside of the class. That means that other threads could use it instead of declaring their own variable, which is weird. Since they would also need to make sure it's instantiated before using it, why wouldn't they also just create their own instance as in the second example?
The easiest and most normal way to ensure that a variable is initialized and disposed every time it's needed in a static method is to declare that variable locally within the static method and create a new instance. Then regardless of how many threads call it concurrently they will each use a separate instance.
Unfortunately, ThreadPool does not provide an API for being notified when a thread is returned to the pool, so there is no way to do this universally. However, if you have control over every place that queues work to the ThreadPool, you can write a simple wrapper that does what you want.
public struct DisposableThreadStatic<T> : IDisposable where T : class, IDisposable
{
[ThreadStatic]
private static T ts_value;
private bool _shouldDispose;
public T Value => ts_value;
public static DisposableThreadStatic<T> GetOrCreate(Func<T> creator)
{
if (ts_value == null)
{
ts_value = creator();
return new DisposableThreadStatic<T>() { _shouldDispose = true };
}
return default;
}
public void Dispose()
{
if (_shouldDispose && ts_value != null)
{
ts_value.Dispose();
ts_value = null;
}
}
}
With this, you can wrap your thread-pool work item like so:
ThreadPool.QueueUserWorkItem(_ =>
{
using var dts = DisposableThreadStatic<MyDisposable>.GetOrCreate(() => new MyDisposable());
// Use value, call any other functions, etc.
dts.Value.Func();
});
And using that same GetOrCreate call anywhere deeper in the call stack will just return the cached value, and only the top-most call (when the work completes) will dispose it.
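For illustration, here is a hedged sketch of a call deeper in the stack reusing the already-created per-thread value (the DoWork helper and MyDisposable type are hypothetical, matching the snippet above):
static void DoWork()
{
    // ts_value is already set for this thread, so this returns a wrapper with
    // _shouldDispose == false; only the outermost caller ends up disposing.
    using var inner = DisposableThreadStatic<MyDisposable>.GetOrCreate(() => new MyDisposable());
    inner.Value.Func();
}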
Related
I have my singleton as below:
public class CurrentSingleton
{
private static CurrentSingleton uniqueInstance = null;
private static object syncRoot = new Object();
private CurrentSingleton() { }
public static CurrentSingleton getInstance()
{
if (uniqueInstance == null)
{
lock (syncRoot)
{
if (uniqueInstance == null)
uniqueInstance = new CurrentSingleton();
}
}
return uniqueInstance;
}
}
I would like to check: if I have two threads, are there two different singletons? I think I should get two different singletons (with different references), so here's what I'm doing:
class Program
{
static void Main(string[] args)
{
int currentCounter = 0;
for (int i = 0; i < 100; i++)
{
cs1 = null;
cs2 = null;
Thread ct1 = new Thread(cfun1);
Thread ct2 = new Thread(cfun2);
ct1.Start();
ct2.Start();
if (cs1 == cs2) currentCounter++;
}
Console.WriteLine(currentCounter);
Console.Read();
}
static CurrentSingleton cs1;
static CurrentSingleton cs2;
static void cfun1()
{
cs1 = CurrentSingleton.getInstance();
}
static void cfun2()
{
cs2 = CurrentSingleton.getInstance();
}
}
I supposed that I should get currentCounter = 0 (meaning the two singletons are always different, because each is created by a different thread). Unfortunately, I get, for example, currentCounter = 70, so in 70 cases I have the same singleton... Could you tell me why?
I would like to check: if I have two threads, are there two different singletons?
No, there are not. A static field is shared across each entire AppDomain, not each thread.
If you want to have separate values per thread, I'd recommend using ThreadLocal<T> to store the backing data, as this will provide a nice wrapper for per-thread data.
Also, in C#, it's typically better to implement a lazy singleton via Lazy<T> instead of via double checked locking. This would look like:
public sealed class CurrentSingleton // Seal your singletons if possible
{
private static Lazy<CurrentSingleton> uniqueInstance = new Lazy<CurrentSingleton>(() => new CurrentSingleton());
private CurrentSingleton() { }
public static CurrentSingleton Instance // use a property, since this is C#...
{
get { return uniqueInstance.Value; }
}
}
To make a class that provides one instance per thread, you could use:
public sealed class InstancePerThread
{
private static ThreadLocal<InstancePerThread> instances = new ThreadLocal<InstancePerThread>(() => new InstancePerThread());
private InstancePerThread() {}
public static InstancePerThread Instance
{
get { return instances.Value; }
}
}
By default, a static field is a single instance shared by all threads that access it.
You should take a look at the [ThreadStatic] attribute. Apply it to a static field to make it have a distinct instance for each thread that accesses it.
Use of a locking object ensures that only one value gets created; you can verify this by putting some logging in your CurrentSingleton constructor.
However, I think there's a small gap in your logic: imagine that two threads simultaneously call this method while uniqueInstance is null. Both will evaluate the == null check and advance to the lock. One will win, lock on syncRoot, and initialize uniqueInstance. When the lock block ends, the other will acquire its own lock and initialize uniqueInstance again.
You need to lock on syncRoot before even testing whether uniqueInstance is null.
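For example, a minimal sketch of what that looks like against the question's class (the Console.WriteLine is just the verification logging suggested above; it could equally live in the CurrentSingleton constructor):
public static CurrentSingleton getInstance()
{
    lock (syncRoot) // take the lock before testing for null
    {
        if (uniqueInstance == null)
        {
            Console.WriteLine("Constructing singleton"); // should print exactly once
            uniqueInstance = new CurrentSingleton();
        }
    }
    return uniqueInstance;
}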
No matter what you do, you are never going to get currentCounter = 0.
That's because we are forgetting that the Main method is itself running on a thread and does not wait for ct1 and ct2 to finish. If you debug the code by putting breakpoints in Main and in CurrentSingleton, you will notice that by the time a new CurrentSingleton object is actually created, the for loop may already be on iteration 3, 4, or any other number. The iterations run fast, so the cs1 == cs2 comparison is often comparing null with null, or an object with null, and I think that is the catch.
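For instance, joining both threads before comparing removes that race; a sketch based on the question's loop body:
Thread ct1 = new Thread(cfun1);
Thread ct2 = new Thread(cfun2);
ct1.Start();
ct2.Start();
ct1.Join(); // wait until cfun1 has assigned cs1
ct2.Join(); // wait until cfun2 has assigned cs2
if (cs1 == cs2) currentCounter++; // now compares two fully constructed references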
Reed has a point: a static field will always be shared, so you need to change your code in the following way:
public class CurrentSingleton
{
[ThreadStatic]
private static CurrentSingleton uniqueInstance = null;
private static object syncRoot = new Object();
private CurrentSingleton() { }
public static CurrentSingleton getInstance()
{
if (uniqueInstance == null)
uniqueInstance = new CurrentSingleton();
return uniqueInstance;
}
}
And as per the analysis above, when you do see two "different" objects, that is often just a mismatch between null and an object (or an object and null). To reliably get two different objects per thread, you need to use [ThreadStatic].
Imagine this code:
You have 2 arrays, and you need to lock both of them at the same moment (for whatever reason - you just need to keep both of them locked because they somehow depend on each other) - so you nest the locks:
lock (array1)
{
lock (array2)
{
... do your code
}
}
but this may result in a deadlock if someone in another part of your code does
lock (array2)
{
lock (array1)
{
... do your code
}
}
and array1 was locked by the first thread, the execution context switched, and then array2 was locked by the second thread.
Is there a way to lock them atomically? Something like:
lock_array(array1, array2)
{
....
}
I know I could just create some extra "lock object" and lock that instead of both arrays everywhere in my code, but that just doesn't seem correct to me...
In general you should avoid locking on publicly accessible members (the arrays in your case). You'd rather have a private static object you'd lock on.
You should never allow locking on a publicly accessible variable, as Darin said. For example:
public class Foo
{
public object Locker = new object();
}
public class Bar
{
public void DoStuff()
{
var foo = new Foo();
lock(foo.Locker)
{
// doing something here
}
}
}
Rather, do something like this:
public class Foo
{
private List<int> toBeProtected = new List<int>();
private object locker = new object();
public void Add(int value)
{
lock(locker)
{
toBeProtected.Add(value);
}
}
}
The reason for this is that if you have multiple threads accessing multiple public synchronization constructs, you run the very real possibility of deadlock. Then you have to be very careful about how you code. If you are making your library available to others, can you be sure that you can grab the lock? Perhaps someone using your library has also grabbed the lock, and between the two of you, you have worked your way into a deadlock scenario. This is the reason Microsoft recommends not using SyncRoot.
I am not sure what you mean by locking the arrays.
You can easily perform operations on both arrays inside a single lock:
static readonly object a = new object();
lock(a){
//Perform operation on both arrays
}
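A slightly fuller sketch of that idea, using the array names from the question (the method and lock-field names here are illustrative only):
static readonly object arraysLock = new object(); // guards array1 and array2 together

static void SwapFirstElements(int[] array1, int[] array2)
{
    lock (arraysLock)
    {
        // Both arrays are only ever touched while holding this one lock,
        // so no lock-ordering problem can arise.
        int tmp = array1[0];
        array1[0] = array2[0];
        array2[0] = tmp;
    }
}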
I have a static class with one field and two methods:
static class MyClass{
private static HttpClient client = new HttpClient();
private static string SendRequestToServer(int id)
{
    Task<HttpResponseMessage> responseTask = client.GetAsync("some string");
    responseTask.ContinueWith(x => PrintResult(x));
    return "some new value";
}
private static void PrintResult(Task<HttpResponseMessage> task)
{
    Task<string> r = task.Result.Content.ReadAsStringAsync();
    r.ContinueWith(resultTask => Console.WriteLine("result is: " + resultTask.Result));
}
}
The question is, if many threads start using MyClass and its methods, would it cause some problems?
All the resources accessed through these methods need to be thread-safe. In your case, they are not. If you look at the HttpClient documentation, it states:
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
You're calling an instance method (client.GetAsync), which is not guaranteed to be thread-safe, so that could potentially cause problems for you.
To mitigate this, you could:
create a new (local) HttpClient on each call.
synchronize access to client (e.g. using a lock).
Also, I can't tell you if PrintResult will be thread-safe, but Console.WriteLine should be thread-safe.
You are likely to see unpredictable results with such a setup. You need to have the threads access the data in a synchronized manner. A lock statement needs to be used in your case to make sure execution happens in a synchronized and stable manner.
private static Object locker= new Object();
private static string SendRequestToServer(int id)
{
lock(locker)
{
Task<HttpResponseMessage> responseTask = client.GetAsync("some string");
responseTask.ContinueWith(x => PrintResult(x));
return "some new value";
}
}
I have a class Lazy which lazily evaluates an expression:
public sealed class Lazy<T>
{
Func<T> getValue;
T value;
public Lazy(Func<T> f)
{
getValue = () =>
{
lock (getValue)
{
value = f();
getValue = () => value;
}
return value;
};
}
public T Force()
{
return getValue();
}
}
Basically, I'm trying to avoid the overhead of locking objects after they've been evaluated, so I replace getValue with another function on invocation.
It apparently works in my testing, but I have no way of knowing if it'll blow up in production.
Is my class threadsafe? If not, what can be done to guarantee thread safety?
Can’t you just omit re-evaluating the function completely by either using a flag or a guard value for the real value? I.e.:
public sealed class Lazy<T>
{
Func<T> f;
T value;
volatile bool computed = false;
readonly object lockObject = new object(); // dedicated object to lock on
void GetValue() { lock (lockObject) { value = f(); computed = true; } }
public Lazy(Func<T> f)
{
this.f = f;
}
public T Force()
{
if (!computed) GetValue();
return value;
}
}
Your code has a few issues:
You need one object to do the locking on. Don't lock on a variable that gets changed - locks always deal with objects, so if getValue is changed, multiple threads might enter the locked section at once.
If multiple threads are waiting for the lock, all of them will evaluate the function f() after each other. You'd have to check inside the lock that the function wasn't evaluated already.
You might need a memory barrier even after fixing the above issues to ensure that the delegate gets replaced only after the new value was stored to memory.
However, I'd use the flag approach from Konrad Rudolph instead (just ensure you don't forget the "volatile" required for that). That way you don't need to invoke a delegate whenever the value is retrieved (delegate calls are quite fast, but they're not as fast as simply checking a bool).
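For reference, a hedged sketch of the flag approach combined with a dedicated lock object and a re-check inside the lock (this is not the original poster's code, just one way to address the three points above):
public sealed class Lazy<T>
{
    readonly Func<T> f;
    readonly object sync = new object(); // dedicated lock target, never reassigned
    T value;
    volatile bool computed;

    public Lazy(Func<T> f) { this.f = f; }

    public T Force()
    {
        if (!computed) // fast path: no locking once the value exists
        {
            lock (sync)
            {
                if (!computed) // re-check so only the first thread evaluates f()
                {
                    value = f();
                    computed = true;
                }
            }
        }
        return value;
    }
}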
I'm not entirely sure what you're trying to do with this code, but I just published an article on The Code Project on building a sort of "lazy" class that automatically, asynchronously calls a worker function and stores its value.
This looks more like a caching mechanism than a "lazy evaluation". In addition, do not change the value of a locking reference within the lock block. Use a temporary variable to lock on.
The way you have it right now would work in a large number of cases, but if you were to have two different threads try to evaluate the expression in this order:
Thread 1
Thread 2
Thread 1 completes
Thread 2 may end up locking on a different reference than Thread 1 used (since getValue was replaced inside the lock block), so the two threads are no longer synchronizing on the same object and the lock no longer provides the protection that was intended.
While I'm not entirely certain what this would do (aside from perform a synchronized evaluation of the expression and caching of the result), this should make it safer:
public sealed class Lazy<T>
{
Func<T> getValue;
T value;
object lockValue = new object();
public Lazy(Func<T> f)
{
getValue = () =>
{
lock (lockValue)
{
value = f();
getValue = () => value;
}
return value;
};
}
public T Force()
{
return getValue();
}
}
I've found the "ThreadStatic" attribute to be extremely useful recently, but makes me now want a "ThreadLocal" type attribute that lets me have non-static data members on a per-thread basis.
Now I'm aware that this would have some non-trivial implications, but:
Does such a thing already exist built into C#/.NET? Or, since it appears so far that the answer is no (for .NET < 4.0), is there a commonly used implementation out there?
I can think of a reasonable way to implement it myself, but would just use something that already existed if it were available.
Straw Man example that would implement what I'm looking for if it doesn't already exist:
class Foo
{
[ThreadStatic]
static Dictionary<Object,int> threadLocalValues = new Dictionary<Object,int>();
int defaultValue = 0;
int ThreadLocalMember
{
get
{
int value = defaultValue;
if( ! threadLocalValues.TryGetValue(this, out value) )
{
threadLocalValues[this] = value;
}
return value;
}
set { threadLocalValues[this] = value; }
}
}
Please forgive any C# ignorance. I'm a C++ developer that has only recently been getting into the more interesting features of C# and .net
I'm limited to .net 3.0 and maybe 3.5 (project has/will soon move to 3.5).
The specific use case is callback lists that are thread-specific (using the imaginary [ThreadLocal] attribute), à la:
class NonSingletonSharedThing
{
[ThreadLocal] List<Callback> callbacks;
public void ThreadLocalRegisterCallback( Callback somecallback )
{
callbacks.Add(somecallback);
}
public void ThreadLocalDoCallbacks()
{
foreach( var callback in callbacks )
callback.invoke();
}
}
Enter .NET 4.0!
If you're stuck in 3.5 (or earlier), there are some functions you should look at, like Thread.AllocateDataSlot, which should do what you want.
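Presumably the ".NET 4.0" reference is to System.Threading.ThreadLocal<T>, which can back a non-static member on a per-thread basis. A minimal sketch (the Foo class and counter field are illustrative, not from the question):
using System.Threading;

class Foo
{
    // Each Foo instance keeps a per-thread counter; every thread that touches
    // this instance sees its own value, starting from the factory default.
    private readonly ThreadLocal<int> counter = new ThreadLocal<int>(() => 0);

    public int ThreadLocalMember
    {
        get { return counter.Value; }
        set { counter.Value = value; }
    }
}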
You should think about this twice. You are essentially creating a memory leak: every object created by the thread stays referenced and can't be garbage collected until the thread ends.
If you're looking to store unique data on a per-thread basis, you could use Thread.SetData. Be sure to read up on the pros and cons (http://msdn.microsoft.com/en-us/library/6sby1byh.aspx), as this has performance implications.
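A minimal sketch of the data-slot approach mentioned above (the slot name and wrapper class are illustrative):
using System.Threading;

static class PerThreadStorage
{
    // The named slot is shared by name process-wide, but each thread sees its own value in it.
    private static readonly LocalDataStoreSlot Slot = Thread.AllocateNamedDataSlot("MyPerThreadValue");

    public static void Set(string value) { Thread.SetData(Slot, value); }
    public static string Get() { return (string)Thread.GetData(Slot); }
}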
Consider:
Rather than trying to give each member variable in an object a thread-specific value, give each thread its own object instance: pass the object to the ThreadStart as state, or make the ThreadStart method a member of the object that the thread will "own", and create a new instance for each thread that you spawn.
Edit
(In response to Catskul's remark.) Here's an example of encapsulating the struct:
public class TheStructWorkerClass
{
private StructData TheStruct;
public TheStructWorkerClass(StructData yourStruct)
{
this.TheStruct = yourStruct;
}
public void ExecuteAsync()
{
System.Threading.ThreadPool.QueueUserWorkItem(this.TheWorkerMethod);
}
private void TheWorkerMethod(object state)
{
// your processing logic here
// you can access your structure as this.TheStruct;
// only this thread has access to the struct (as long as you don't pass the struct
// to another worker class)
}
}
// now the code that launches the async process does this:
var worker = new TheStructWorkerClass(yourStruct);
worker.ExecuteAsync();
Now here's option 2 (pass the struct as state)
{
// (from somewhere in your existing code)
System.Threading.ThreadPool.QueueUserWorkItem(this.TheWorker, myStruct);
}
private void TheWorker(object state)
{
StructData yourStruct = (StructData)state;
// now do stuff with your struct
// works fine as long as you never pass the same instance of your struct to 2 different threads.
}
I ended up implementing and testing a version of what I had originally suggested:
public class ThreadLocal<T>
{
[ThreadStatic] private static Dictionary<object, T> _lookupTable;
private Dictionary<object, T> LookupTable
{
get
{
if ( _lookupTable == null)
_lookupTable = new Dictionary<object, T>();
return _lookupTable;
}
}
private object key = new object(); //lazy hash key creation handles replacement
private T originalValue;
public ThreadLocal( T value )
{
originalValue = value;
}
~ThreadLocal()
{
LookupTable.Remove(key);
}
public void Set( T value)
{
LookupTable[key] = value;
}
public T Get()
{
T returnValue = default(T);
if (!LookupTable.TryGetValue(key, out returnValue))
Set(originalValue);
return returnValue;
}
}
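A brief usage sketch of the class above (the UsesThreadLocal class and counter field are just for illustration):
class UsesThreadLocal
{
    // Each instance gets per-thread storage; 0 is the default a thread sees
    // before it has called Set.
    private readonly ThreadLocal<int> counter = new ThreadLocal<int>(0);

    public void Increment()
    {
        counter.Set(counter.Get() + 1); // each thread increments its own copy
    }

    public int Current
    {
        get { return counter.Get(); }
    }
}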
Although I am still not sure about when your use case would make sense (see my comment on the question itself), I would like to contribute a working example that is in my opinion more readable than thread-local storage (whether static or instance). The example is using .NET 3.5:
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading;
using System.Linq;
namespace SimulatedThreadLocal
{
public sealed class Notifier
{
public void Register(Func<string> callback)
{
var id = Thread.CurrentThread.ManagedThreadId;
lock (this._callbacks)
{
List<Func<string>> list;
if (!this._callbacks.TryGetValue(id, out list))
{
this._callbacks[id] = list = new List<Func<string>>();
}
list.Add(callback);
}
}
public void Execute()
{
var id = Thread.CurrentThread.ManagedThreadId;
IEnumerable<Func<string>> threadCallbacks;
string status;
lock (this._callbacks)
{
status = string.Format("Notifier has callbacks from {0} threads, total {1} callbacks{2}Executing on thread {3}",
this._callbacks.Count,
this._callbacks.SelectMany(d => d.Value).Count(),
Environment.NewLine,
Thread.CurrentThread.ManagedThreadId);
threadCallbacks = this._callbacks[id]; // we can use the original collection, as only this thread can add to it and we're not going to be adding right now
}
var b = new StringBuilder();
foreach (var callback in threadCallbacks)
{
b.AppendLine(callback());
}
Console.ForegroundColor = ConsoleColor.DarkYellow;
Console.WriteLine(status);
Console.ForegroundColor = ConsoleColor.Green;
Console.WriteLine(b.ToString());
}
private readonly Dictionary<int, List<Func<string>>> _callbacks = new Dictionary<int, List<Func<string>>>();
}
public static class Program
{
public static void Main(string[] args)
{
try
{
var notifier = new Notifier();
var syncMainThread = new ManualResetEvent(false);
var syncWorkerThread = new ManualResetEvent(false);
ThreadPool.QueueUserWorkItem(delegate // will create closure to see notifier and sync* events
{
notifier.Register(() => string.Format("Worker thread callback A (thread ID = {0})", Thread.CurrentThread.ManagedThreadId));
syncMainThread.Set();
syncWorkerThread.WaitOne(); // wait for main thread to execute notifications in its context
syncWorkerThread.Reset();
notifier.Execute();
notifier.Register(() => string.Format("Worker thread callback B (thread ID = {0})", Thread.CurrentThread.ManagedThreadId));
syncMainThread.Set();
syncWorkerThread.WaitOne(); // wait for main thread to execute notifications in its context
syncWorkerThread.Reset();
notifier.Execute();
syncMainThread.Set();
});
notifier.Register(() => string.Format("Main thread callback A (thread ID = {0})", Thread.CurrentThread.ManagedThreadId));
syncMainThread.WaitOne(); // wait for worker thread to add its notification
syncMainThread.Reset();
notifier.Execute();
syncWorkerThread.Set();
syncMainThread.WaitOne(); // wait for worker thread to execute notifications in its context
syncMainThread.Reset();
notifier.Register(() => string.Format("Main thread callback B (thread ID = {0})", Thread.CurrentThread.ManagedThreadId));
notifier.Execute();
syncWorkerThread.Set();
syncMainThread.WaitOne(); // wait for worker thread to execute notifications in its context
syncMainThread.Reset();
}
finally
{
Console.ResetColor();
}
}
}
}
When you compile and run the above program, the colored console output shows each context (main thread and worker thread) executing only the callbacks registered from that context.
Based on your use-case I assume this is what you're trying to achieve. The example first adds two callbacks from two different contexts, main and worker threads. Then the example runs notification first from main and then from worker threads. The callbacks that are executed are effectively filtered by current thread ID. Just to show things are working as expected, the example adds two more callbacks (for a total of 4) and again runs the notification from the context of main and worker threads.
Note that Notifier class is a regular instance that can have state, multiple instances, etc (again, as per your question's use-case). No static or thread-static or thread-local is used by the example.
I would appreciate if you could look at the code and let me know if I misunderstood what you're trying to achieve or if a technique like this would meet your needs.
I'm not sure how you're spawning your threads in the first place, but there are ways to give each thread its own thread-local storage, without using hackish workarounds like the code you posted in your question.
public void SpawnSomeThreads(int threads)
{
for (int i = 0; i < threads; i++)
{
Thread t = new Thread(WorkerThread);
WorkerThreadContext context = new WorkerThreadContext
{
// whatever data the thread needs passed into it
};
t.Start(context);
}
}
private class WorkerThreadContext
{
public string Data { get; set; }
public int OtherData { get; set; }
}
private void WorkerThread(object parameter)
{
WorkerThreadContext context = (WorkerThreadContext) parameter;
// do work here
}
This obviously ignores waiting on the threads to finish their work, making sure accesses to any shared state is thread-safe across all the worker threads, but you get the idea.
Whilst the posted solution looks elegant, it leaks objects. The finalizer - LookupTable.Remove(key) - runs only in the context of the GC finalizer thread, so it only touches that thread's [ThreadStatic] lookup table (likely just creating yet another one) and never removes the entries held by the threads that actually used the value.
You need to remove the object from the lookup table of every thread that has accessed the ThreadLocal. The only elegant way I can think of to solve this is via a weak-keyed dictionary - a data structure which was strangely lacking from C# at the time.
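For what it's worth, .NET 4.0 later added System.Runtime.CompilerServices.ConditionalWeakTable<TKey, TValue>, which behaves like a weak-keyed dictionary. A hedged sketch of using it for per-thread, per-instance values (note it requires T to be a reference type, unlike the class above):
using System;
using System.Runtime.CompilerServices;

public class WeakThreadLocal<T> where T : class
{
    // One weak-keyed table per thread; an entry disappears automatically once
    // the owning WeakThreadLocal<T> instance is no longer referenced anywhere.
    [ThreadStatic]
    private static ConditionalWeakTable<WeakThreadLocal<T>, T> _table;

    private readonly Func<T> _factory;

    public WeakThreadLocal(Func<T> factory) { _factory = factory; }

    public T Value
    {
        get
        {
            if (_table == null)
                _table = new ConditionalWeakTable<WeakThreadLocal<T>, T>();
            return _table.GetValue(this, key => key._factory());
        }
    }
}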