I have a very large project with multiple pages, where each page has many IDisposable members.
I'm trying to figure out a way to dispose all the IDisposable members in a loop so I won't have to type x1.Dispose(); x2.Dispose(); ... xn.Dispose(); in each class.
Is there a way to do this?
Thank you.
Sure, just make sure that you create a list to hold them, and a try finally block to protect yourself from leaking them.
// List for holding your disposable types
var connectionList = new List<IDisposable>();
try
{
// Instantiate your page states; this may need to be done at a high level.
// These additions are oversimplified, as there will be nested calls
// building this list; in other words, these will more than likely take place in methods.
connectionList.Add(x1);
connectionList.Add(x2);
connectionList.Add(x3);
}
finally
{
foreach(IDisposable disposable in connectionList)
{
try
{
disposable.Dispose();
}
catch(Exception Ex)
{
// Log any error? This must be caught in order to prevent
// leaking the disposable resources in the rest of the list
}
}
}
However, this approach is not always ideal. The nature of the nested calls will get complicated and require the call to be so far up in the architecture of your program that you may want to consider just locally handling these resources.
Moreover, this approach critically fails in the scenario where these Disposable resources are intensive and need to be immediately released. While you can do this, i.e. track your Disposable elements and then do it all at once, it is best to try to get the object lifetime as short as possible for managed resources like this.
Whatever you do, ensure not to leak the Disposable resource. If these are connection threads, and they are inactive for some period of time, it may also be wise to simply look at their state and then re-use them in different places instead of letting them hang around.
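If local handling is an option, a plain using block keeps each resource's lifetime as short as possible and removes the need to track it in a shared list. A minimal sketch, where PageResources is a hypothetical type that owns one page's disposables:
using (var resources = new PageResources()) // hypothetical type
{
    resources.Render();
} // resources.Dispose() runs here, even if Render() throws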
Using reflection (not tested):
public static void DisposeAllMembersWithReflection(object target)
{
if (target == null) return;
// get all fields, you can change it to GetProperties() or GetMembers()
var fields = target.GetType().GetFields(System.Reflection.BindingFlags.Instance | System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.NonPublic);
// get all fields that implement IDisposable
var disposables = fields.Where(x => typeof(IDisposable).IsAssignableFrom(x.FieldType));
foreach (var disposableField in disposables)
{
var value = (IDisposable)disposableField.GetValue(target);
if (value != null)
value.Dispose();
}
}
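Usage would then be a single call from each page's cleanup path, for example from Dispose itself (a sketch; it assumes the page class is the object whose fields you want released):
public void Dispose()
{
    // Releases every IDisposable field declared on this instance
    DisposeAllMembersWithReflection(this);
}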
Create a method which will dispose all your disposable objects:
public void DisposeAll()
{
x1.Dispose();
x2.Dispose();
x3.Dispose();
. . .
}
and call it wherever you need it.
So I'm running a Parallel.ForEach that basically generates a bunch of data which is ultimately going to be saved to a database. However, since the collection of data can get quite large, I need to be able to occasionally save/clear the collection so as to not run into an OutOfMemoryException.
I'm new to using Parallel.ForEach, concurrent collections, and locks, so I'm a little fuzzy on what exactly needs to be done to make sure everything works correctly (i.e. we don't get any records added to the collection between the Save and Clear operations).
Currently I'm saying, if the record count is above a certain threshold, save the data in the current collection, within a lock block.
ConcurrentStack<OutRecord> OutRecs = new ConcurrentStack<OutRecord>();
object StackLock = new object();
Parallel.ForEach(inputrecords, input =>
{
lock(StackLock)
{
if (OutRecs.Count >= 50000)
{
Save(OutRecs);
OutRecs.Clear();
}
}
OutRecs.Push(CreateOutputRecord(input));
});
if (OutRecs.Count > 0) Save(OutRecs);
I'm not 100% certain whether or not this works the way I think it does. Does the lock stop other instances of the loop from writing to output collection? If not is there a better way to do this?
Your lock will work correctly, but it will not be very efficient, because all your worker threads will be forced to pause for the entire duration of each save operation. Also, locks tend to be (relatively) expensive, so performing a lock in each iteration of each thread is a bit wasteful.
One of your comments mentioned giving each worker thread its own data storage: yes, you can do this. Here's an example that you could tailor to your needs:
Parallel.ForEach(
// collection of objects to iterate over
inputrecords,
// delegate to initialize thread-local data
() => new List<OutRecord>(),
// body of loop
(inputrecord, loopstate, localstorage) =>
{
localstorage.Add(CreateOutputRecord(inputrecord));
if (localstorage.Count > 1000)
{
// Save() must be thread-safe, or you'll need to wrap it in a lock
Save(localstorage);
localstorage.Clear();
}
return localstorage;
},
// localFinally delegate - gets executed for each thread's local state after that thread finishes
localstorage =>
{
if (localstorage.Count > 0)
{
// Save() must be thread-safe, or you'll need to wrap it in a lock
Save(localstorage);
localstorage.Clear();
}
});
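The comments above point out that Save() must be thread-safe. If it is not already, one minimal sketch is to serialize calls to it with a private lock object (WriteToDatabase is a placeholder for whatever persistence call you actually make):
private static readonly object saveLock = new object();

private static void Save(List<OutRecord> records)
{
    // Only one thread writes at a time; the others keep filling their local lists
    lock (saveLock)
    {
        WriteToDatabase(records); // placeholder for the real persistence call
    }
}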
One approach is to define an abstraction that represents the destination for your data. It could be something like this:
public interface IRecordWriter<T> // perhaps come up with a better name.
{
void WriteRecord(T record);
void Flush();
}
Your class that processes the records in parallel doesn't need to worry about how those records are handled or what happens when there's too many of them. The implementation of IRecordWriter handles all those details, making your other class easier to test.
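As a sketch of what the processing side might look like once it only depends on the interface (all names here are illustrative, not taken from your code):
public class RecordProcessor<TIn, TOut>
{
    private readonly IRecordWriter<TOut> _writer;
    private readonly Func<TIn, TOut> _convert;

    public RecordProcessor(IRecordWriter<TOut> writer, Func<TIn, TOut> convert)
    {
        _writer = writer;
        _convert = convert;
    }

    public void ProcessAll(IEnumerable<TIn> inputs)
    {
        // The processor neither knows nor cares how records are buffered or saved
        Parallel.ForEach(inputs, input => _writer.WriteRecord(_convert(input)));
        _writer.Flush(); // push out anything still buffered
    }
}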
An implementation of IRecordWriter could look something like this:
public abstract class BufferedRecordWriter<T> : IRecordWriter<T>
{
private readonly ConcurrentQueue<T> _buffer = new ConcurrentQueue<T>();
private readonly int _maxCapacity;
private bool _flushing;
protected BufferedRecordWriter(int maxCapacity = 100)
{
_maxCapacity = maxCapacity;
}
public void WriteRecord(T record)
{
_buffer.Enqueue(record);
if (_buffer.Count >= _maxCapacity && !_flushing)
Flush();
}
public void Flush()
{
_flushing = true;
try
{
var recordsToWrite = new List<T>();
while (_buffer.TryDequeue(out T dequeued))
{
recordsToWrite.Add(dequeued);
}
if(recordsToWrite.Any())
WriteRecords(recordsToWrite);
}
finally
{
_flushing = false;
}
}
protected abstract void WriteRecords(IEnumerable<T> records);
}
When the buffer reaches the maximum size, all the records in it are sent to WriteRecords. Because _buffer is a ConcurrentQueue, other threads can keep adding records even while Flush is draining it.
The WriteRecords method could be anything specific to how you write your records. Instead of this being an abstract class, the actual output to a database or file could be yet another dependency that gets injected into this one. You can make decisions like that, refactor, and change your mind, because the very first class isn't affected by those changes. All it knows about is the IRecordWriter interface, which doesn't change.
You might notice that I haven't made absolutely certain that Flush won't execute concurrently on different threads. I could put more locking around this, but it really doesn't matter. This will avoid most concurrent executions, but it's okay if concurrent executions both read from the ConcurrentQueue.
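If you did want to guarantee that only one flush runs at a time, one lightweight option (a sketch, not part of the original design) is to replace the plain bool with an int guarded by Interlocked:
private int _flushing; // 0 = idle, 1 = flushing; replaces the bool field

public void Flush()
{
    // Only the thread that wins this exchange performs the flush
    if (Interlocked.CompareExchange(ref _flushing, 1, 0) != 0)
        return;
    try
    {
        var recordsToWrite = new List<T>();
        while (_buffer.TryDequeue(out T dequeued))
            recordsToWrite.Add(dequeued);
        if (recordsToWrite.Any())
            WriteRecords(recordsToWrite);
    }
    finally
    {
        Interlocked.Exchange(ref _flushing, 0);
    }
}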
This is just a rough outline, but it shows how all of the steps become simpler and easier to test if we separate them. One class converts inputs to outputs. Another class buffers the outputs and writes them. That second class can even be split into two - one as a buffer, and another as the "final" writer that sends them to a database or file or some other destination.
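For example, a concrete subclass that sends the buffered records to a database could be as small as this sketch (the persistence call itself is a placeholder; substitute whatever data access you actually use):
public class DatabaseRecordWriter<T> : BufferedRecordWriter<T>
{
    public DatabaseRecordWriter(int maxCapacity = 1000) : base(maxCapacity) { }

    protected override void WriteRecords(IEnumerable<T> records)
    {
        // Placeholder for the real bulk write (Entity Framework, Dapper, SqlBulkCopy, ...)
        BulkInsert(records);
    }

    private void BulkInsert(IEnumerable<T> records)
    {
        // hypothetical helper; not shown
    }
}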
public static void CacheUncachedMessageIDs(List<int> messageIDs)
{
var uncachedRecordIDs = LocalCacheController.GetUncachedRecordIDs<PrivateMessage>(messageIDs);
if (!uncachedRecordIDs.Any()) return;
using (var db = new DBContext())
{
.....
}
}
The above method is repeated regularly throughout the project (except with different generic types passed in). I'm looking to avoid repeating the if (!uncachedRecordIDs.Any()) return; line.
In short, is it possible to make LocalCacheController.GetUncachedRecordIDs return from the CacheUncachedMessageIDs method?
This would guarantee a new data context is not created unless it needs to be (and would stop me accidentally forgetting to add the return line in the parent method).
It is not possible for a called (nested) method to return from its parent method.
You could throw an exception inside GetUncachedRecordIDs; that would technically do the trick, but exceptions are not meant for control flow, so it creates confusion. Moreover, it is comparatively slow.
Another discouraged mechanism is goto magic, which likewise generates confusion because goto allows unexpected jumps in program execution flow.
Your best bet would be to return a result object with a simple bool HasUncachedRecordIDs field and then check it in the caller; if there is nothing uncached, return. This also replaces the Any() call:
var uncachedRecordIDsResult = LocalCacheController.GetUncachedRecordIDs<PrivateMessage>(messageIDs);
if (!uncachedRecordIDsResult.HasUncachedRecordIDs) return;
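A minimal sketch of what that result type could look like (names are illustrative, not from the existing codebase):
public class UncachedRecordIDsResult
{
    public List<int> UncachedRecordIDs { get; set; } = new List<int>();

    public bool HasUncachedRecordIDs
    {
        get { return UncachedRecordIDs.Count > 0; }
    }
}
The caller would then pass uncachedRecordIDsResult.UncachedRecordIDs on to the code that needs the database context.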
My reasoning for lack of this feature in the language is that calling GetUncachedRecordIDs in basically any function would unexpectedly end that parent function, without warning. Also, it would intertwine closely both functions, and best programming practices involve loose coupling of classes and methods.
You could pass an Action to your GetUncachedRecordIDs method which you only invoke if you need to. Rough sketch of the idea:
// LocalCacheController
void GetUncachedRecordIDs<T>(List<int> messageIDs, Action<List<int>> action)
{
// ...
if (!cached) {
action(recordIds);
}
}
// ...
public static void CacheUncachedMessageIDs(List<int> messageIDs)
{
LocalCacheController.GetUncachedRecordIDs<PrivateMessage>(messageIDs, uncachedRecordIDs => {
using (var db = new DBContext())
{
// ...
}
});
}
So I am currently disposing many objects when I close my form, even though it probably happens automatically anyway. Still, I prefer to follow the "rules" of disposing; hopefully it will stick and help prevent mistakes.
Here is how I currently dispose, which works.
if (connect == true)
{
Waloop.Dispose();
connect = false;
UninitializeCall();
DropCall();
}
if (KeySend.Checked || KeyReceive.Checked)
{
m_mouseListener.Dispose();
k_listener.Dispose();
}
if (NAudio.Wave.AsioOut.isSupported())
{
Aut.Dispose();
}
if (Wasout != null)
{
Wasout.Dispose();
}
if (SendStream != null)
{
SendStream.Dispose();
}
So basically, the first block runs only if a bool is true; if it isn't, those objects can be ignored, as I think they haven't been created yet.
The others are just checks that dispose an object if it exists. But it's not a very good way; I would like to have it in one big function, meaning:
dispose it if it's NOT already disposed, or something like that.
I know that many of them have an "IsDisposed" bool, so it should be possible if I can check every object and dispose it when that is false.
How about a helper method which takes objects which implement IDisposable as params?
void DisposeAll(params IDisposable[] disposables)
{
foreach (IDisposable id in disposables)
{
if (id != null) id.Dispose();
}
}
When you want to dispose multiple objects, call the method with whatever objects you want to dispose.
this.DisposeAll(Wasout, SendStream, m_mouseListener, k_listener);
If you want to avoid calling them explicitly, then store them all in a List<>:
private List<IDisposable> _disposables;
void DisposeAll() {
foreach(IDisposable id in _disposables) {
if(id != null) id.Dispose();
}
}
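The list still has to be populated somewhere, for example once the objects have been created (the field names below come from your code; the rest is just a sketch):
_disposables = new List<IDisposable>
{
    Wasout,
    SendStream,
    m_mouseListener,
    k_listener
};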
You can implement a Disposer class, that will do the work for you, along these lines:
public class Disposer
{
private List<IDisposable> disposables = new List<IDisposable>();
public void Register(IDisposable item)
{
disposables.Add(item);
}
public void Unregister(IDisposable item)
{
disposables.Remove(item);
}
public void DisposeAll()
{
foreach (IDisposable item in disposables)
{
item.Dispose();
}
disposables.Clear();
}
}
Then, instead of the ugly code in your main class, you can have something like:
public class Main
{
//member field
private Disposer m_disposer;
//constructor
public Main()
{
....
m_disposer = new Disposer();
//register any available disposables
m_disposer.Register(m_mouseListener);
m_disposer.Register(k_listener);
}
...
public bool Connect()
{
...
if (isConnected)
{
Waloop = ...
Wasout = ...
// register additional disposables as they are created
m_disposer.Register(Waloop);
m_disposer.Register(Wasout);
}
}
...
public void Close()
{
//disposal
m_disposer.DisposeAll();
}
}
I suggest you use the using statement. So with your code, it would look something like this:
using (WaloopClass Waloop = new WaloopClass())
{
// Some other code here I know nothing about.
connect = false; // Testing the current value of connect is redundant.
UninitializeCall();
DropCall();
}
Note there is now no need to explicitly Dispose Waloop, as it happens automatically at the end of the using statement.
This will help to structure your code, and makes the scope of the Waloop much clearer.
I am going to suppose that the only problem you’re trying to solve is how to write the following in a nicer way:
if (Wasout != null)
Wasout.Dispose();
if (SendStream != null)
SendStream.Dispose();
This is a lot of logic already implemented by the using keyword. using checks that the variable is not null before calling Dispose() for you. Also, using guarantees that thrown exceptions (perhaps by Wasout.Dispose()) will not interrupt the attempts to call Dispose() on the other listed objects (such as SendStream). It seems that using was intended to allow management of resources based on scoping rules: using using as an alternative way to write o.Dispose() may be considered an abuse of the language. However, the benefits of using's behavior and the concision it enables are quite valuable. Thus, I recommend replacing such statically-written batches of "if (o != null) o.Dispose()" with an "empty" using:
using (
IDisposable _Wasout = Wasout,
_SendStream = SendStream)
{}
Note that Dispose() is called in the reverse of the order in which the objects are listed in the using block. This follows the pattern of cleaning up objects in reverse of their instantiation order. (The idea is that an object instantiated later may refer to an object instantiated earlier. E.g., if you are using a ladder to climb a house, you might want to keep the ladder around so that you can climb back down before putting it away: the ladder gets instantiated first and cleaned up last. Uhm, analogies… but basically, the above is shorthand for nested using statements, and objects of unlike types can be combined into the same using block by declaring the variables as IDisposable.)
dotnetfiddle of using managing exceptions.
I see this a lot:
object lockObj;
List<string> myStrs;
// ...
lock(lockObj)
{
myStrs.Add("hello world");
}
Why have the separate object? Surely you can just do this:
List<string> myStrs;
// ...
lock(myStrs)
{
myStrs.Add("hello world");
}
It is a problem to lock directly on the list only if myStrs is public, and thus can be locked by other callers as well, resulting in a possible deadlock.
If it is a private member, then there should be no problem, but locking on a separate object is a good habit in any case.
See this similar question for a more detailed answer:
Why is lock(this) {...} bad?
In general, avoid locking on a public type, or instances beyond your code's control. The common constructs lock (this), lock (typeof (MyType)), and lock ("myLock") violate this guideline:
lock (this) is a problem if the instance can be accessed publicly.
lock (typeof (MyType)) is a problem if MyType is publicly accessible.
lock ("myLock") is a problem since any other code in the process using the same string will share the same lock.
Best practice is to define a private object to lock on, or a private static object variable to protect data common to all instances.
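In code, that guideline boils down to something like this sketch (names are illustrative):
public class MessageStore
{
    // Dedicated, private, readonly lock object; nothing outside this class can lock on it
    private readonly object _syncRoot = new object();
    private readonly List<string> _messages = new List<string>();

    public void Add(string message)
    {
        lock (_syncRoot)
        {
            _messages.Add(message);
        }
    }
}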
From the C# documentation for lock:
The idea is to always lock a private member that can only be accessed by the code we are looking at. When we lock members we do not have full control over, such as public members, chances are some other part of the code may already hold a lock on them. This could lead to unexpected blocking behavior.
So I think this has led to the rule of thumb / best practice of having a private object specifically for locking.
I would be interested in seeing if there are more reasons coming up.
Your list of strings is an internal implementation detail.
The problem with the second version could arise if you change your implementation in such a way that you re-initialize the string list.
Then the thread-safety of your implementation could be broken.
So it's better to use a separate object for synchronization and declare that object as readonly.
If you use the list as the lock object and it is reset to null, then lock (myStringList) will throw an ArgumentNullException. Below is simple test code for a console application.
private static IList<string> mystringList = new List<string>();
static void Main(string[] args)
{
new Thread(() =>
{
try
{
while (true)
{
//Acquire the lock
lock (mystringList)
{
//Do something with the data
Thread.Sleep(100);
Console.WriteLine("Lock acquired");
}
}
}
catch (Exception exception)
{
Console.WriteLine("Exception: " +exception.Message);
}
}).Start();
new Thread(() =>
{
//Suppose we do something
Thread.Sleep(1000);
//And by some how reset the list to null
mystringList = null;
}).Start();
Console.ReadLine();
}
I have a question about improving the efficiency of my program. I have a Dictionary<string, Thingey> defined to hold named Thingeys. This is a web application that will create multiple named Thingeys over time. Thingeys are somewhat expensive to create (not prohibitively so), but I'd like to avoid creating them whenever possible. My logic for getting the right Thingey for the request looks a lot like this:
private Dictionary<string, Thingey> Thingeys;
public Thingey GetThingey(Request request)
{
string thingeyName = request.ThingeyName;
if (!this.Thingeys.ContainsKey(thingeyName))
{
// create a new thingey on 1st reference
Thingey newThingey = new Thingey(request);
lock (this.Thingeys)
{
if (!this.Thingeys.ContainsKey(thingeyName))
{
this.Thingeys.Add(thingeyName, newThingey);
}
// else - oops someone else beat us to it
// newThingey will eventually get GCed
}
}
return this.Thingeys[thingeyName];
}
In this application, Thingeys live forever once created. We don't know how to create them or which ones will be needed until the app starts and requests begin coming in. The issue I have with the above code is that there are occasional instances where newThingey is created because we get multiple simultaneous requests for it before it has been added. We end up creating two of them but only adding one to our collection.
Is there a better way to get Thingeys created and added that doesn’t involve check/create/lock/check/add with the rare extraneous thingey that we created but end up never using? (And this code works and has been running for some time. This is just the nagging bit that has always bothered me.)
I'm trying to avoid locking the dictionary for the duration of creating a Thingey.
This is the standard double check locking problem. The way it is implemented here is unsafe and can cause various problems - potentially up to the point of a crash in the first check if the internal state of the dictionary is screwed up bad enough.
It is unsafe because you are checking it without synchronization, and if your luck is bad enough you can hit it while some other thread is in the middle of updating the internal state of the dictionary.
A simple solution is to place the first check under a lock as well. A problem with this is that this becomes a global lock and in web environment under heavy load it can become a serious bottleneck.
If we are talking about .NET environment, there are ways to work around this issue by piggybacking on the ASP.NET synchronization mechanism.
Here is how I did it in the NDjango rendering engine: I keep one global dictionary and one dictionary per rendering thread. When a request comes in, I check the local dictionary first; this check does not have to be synchronized, and if the thingey is there I just take it.
If it is not, I synchronize on the global dictionary and check whether it is there; if it is, I add it to my thread dictionary and release the lock. If it is not in the global dictionary, I add it there first, while still under the lock.
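A rough sketch of that two-level scheme in C#, using ThreadLocal<T> for the per-thread dictionaries (the real NDjango code is different; treat this purely as an illustration of the idea):
private static readonly object globalLock = new object();
private static readonly Dictionary<string, Thingey> globalThingeys = new Dictionary<string, Thingey>();

// One dictionary per thread; reading it needs no synchronization
private static readonly ThreadLocal<Dictionary<string, Thingey>> localThingeys =
    new ThreadLocal<Dictionary<string, Thingey>>(() => new Dictionary<string, Thingey>());

public Thingey GetThingey(Request request)
{
    string key = request.ThingeyName;
    var local = localThingeys.Value;

    Thingey thingey;
    if (local.TryGetValue(key, out thingey))
        return thingey; // fast path, no lock taken

    lock (globalLock)
    {
        if (!globalThingeys.TryGetValue(key, out thingey))
        {
            thingey = new Thingey(request);
            globalThingeys.Add(key, thingey);
        }
    }

    local.Add(key, thingey); // remember it for this thread
    return thingey;
}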
Well, from my point of view simpler code is better, so I'd only use one lock:
private readonly object thingeysLock = new object();
private readonly Dictionary<string, Thingey> thingeys;
public Thingey GetThingey(Request request)
{
string key = request.ThingeyName;
lock (thingeysLock)
{
Thingey ret;
if (!thingeys.TryGetValue(key, out ret))
{
ret = new Thingey(request);
thingeys[key] = ret;
}
return ret;
}
}
Locks are really cheap when they're not contended. The downside is that this means that occasionally you will block everyone for the whole duration of the time you're creating a new Thingey. Clearly to avoid creating redundant thingeys you'd have to at least block while multiple threads create the Thingey for the same key. Reducing it so that they only block in that situation is somewhat harder.
I would suggest you use the above code but profile it to see whether it's fast enough. If you really need "only block when another thread is already creating the same thingey" then let us know and we'll see what we can do...
EDIT: You've commented on Adam's answer that you "don't want to lock while a new Thingey is being created" - you do realise that there's no getting away from that if there's contention for the same key, right? If thread 1 starts creating a Thingey, then thread 2 asks for the same key, your alternatives for thread 2 are either waiting or creating another instance.
EDIT: Okay, this is generally interesting, so here's a first pass at the "only block other threads asking for the same item".
private readonly object dictionaryLock = new object();
private readonly object creationLocksLock = new object();
private readonly Dictionary<string, Thingey> thingeys;
private readonly Dictionary<string, object> creationLocks;
public Thingey GetThingey(Request request)
{
string key = request.ThingeyName;
Thingey ret;
bool entryExists;
lock (dictionaryLock)
{
entryExists = thingeys.TryGetValue(key, out ret);
// Atomically mark the dictionary to say we're creating this item,
// and also set an entry for others to lock on
if (!entryExists)
{
thingeys[key] = null;
lock (creationLocksLock)
{
creationLocks[key] = new object();
}
}
}
// If we found something, great!
if (ret != null)
{
return ret;
}
// Otherwise, see if we're going to create it or whether we need to wait.
if (entryExists)
{
object creationLock;
lock (creationLocksLock)
{
creationLocks.TryGetValue(key, out creationLock);
}
// If creationLock is null, it means the creating thread has finished
// creating it and removed the creation lock, so we don't need to wait.
if (creationLock != null)
{
lock (creationLock)
{
Monitor.Wait(creationLock);
}
}
// We *know* it's in the dictionary now - so just return it.
lock (dictionaryLock)
{
return thingeys[key];
}
}
else // We said we'd create it
{
Thingey thingey = new Thingey(request);
// Put it in the dictionary
lock (dictionaryLock)
{
thingeys[key] = thingey;
}
// Tell anyone waiting that they can look now
lock (creationLocksLock)
{
// PulseAll must be called while holding the monitor on the object being pulsed
object creationLock = creationLocks[key];
lock (creationLock) { Monitor.PulseAll(creationLock); }
creationLocks.Remove(key);
}
return thingey;
}
}
Phew!
That's completely untested, and in particular it isn't in any way, shape or form robust in the face of exceptions in the creating thread... but I think it's the generally right idea :)
If you're looking to avoid blocking unrelated threads, then additional work is needed (and should only be necessary if you've profiled and found that performance is unacceptable with the simpler code). I would recommend using a lightweight wrapper class that asynchronously creates a Thingey and using that in your dictionary.
Dictionary<string, ThingeyWrapper> thingeys = new Dictionary<string, ThingeyWrapper>();
private class ThingeyWrapper
{
public Thingey Thing { get; private set; }
private object creationFlag;
private Request request;
public ThingeyWrapper(Request request)
{
creationFlag = new object();
this.request = request;
}
public void WaitForCreation()
{
object flag = creationFlag;
if(flag != null)
{
lock(flag)
{
if(request != null) Thing = new Thingey(request);
creationFlag = null;
request = null;
}
}
}
}
public Thingey GetThingey(Request request)
{
string thingeyName = request.ThingeyName;
ThingeyWrapper output;
lock (this.Thingeys)
{
if(!this.Thingeys.TryGetValue(thingeyName, out output))
{
output = new ThingeyWrapper(request);
this.Thingeys.Add(thingeyName, output);
}
}
output.WaitForCreation();
return output.Thing;
}
While you are still locking on all calls, the creation process is much more lightweight.
Edit
This issue has stuck with me more than I expected it to, so I whipped together a somewhat more robust solution that follows this general pattern. You can find it here.
IMHO, if this piece of code is called from many threads simultaneously, it is recommended to check twice (double-checked locking).
(But I'm not sure that you can safely call ContainsKey while some other thread is calling Add, so it might not be possible to avoid the lock at all.)
If you just want to avoid a Thingey being created but never used, create it within the locking block:
private Dictionary<string, Thingey> Thingeys;
public Thingey GetThingey(Request request)
{
string thingeyName = request.ThingeyName;
if (!this.Thingeys.ContainsKey(thingeyName))
{
lock (this.Thingeys)
{
// only one thread at a time can create and add the same Thingey
if (!this.Thingeys.ContainsKey(thingeyName))
{
Thingey newThingey = new Thingey(request);
this.Thingeys.Add(thingeyName, newThingey);
}
}
}
return this.Thingeys[thingeyName];
}
You have to ask yourself whether the ContainsKey operation and the indexer getter are themselves thread-safe (and will stay that way in newer versions), because they may and will be invoked while another thread has the dictionary locked and is performing the Add.
Typically, .NET locks are fairly efficient if used correctly, and I believe that in this situation you're better off doing this:
Thingey thingey;
bool exists;
lock (thingeys) {
exists = thingeys.TryGetValue(thingeyName, out thingey);
}
if (!exists) {
thingey = new Thingey();
}
lock (thingeys) {
if (!thingeys.ContainsKey(thingeyName)) {
thingeys.Add(thingeyName, thingey);
}
}
return thingey;
Well, I hope I'm not being too naive in giving this answer, but since Thingeys are expensive to create, what I would do is add the key with a null value first. That is, something like this:
private Dictionary<string, Thingey> Thingeys;
public Thingey GetThingey(Request request)
{
string thingeyName = request.ThingeyName;
if (!this.Thingeys.ContainsKey(thingeyName))
{
lock (this.Thingeys)
{
if (!this.Thingeys.ContainsKey(thingeyName))
{
// reserve the slot so other threads see the key straight away
this.Thingeys.Add(thingeyName, null);
// create a new thingey on 1st reference
Thingey newThingey = new Thingey(request);
Thingeys[thingeyName] = newThingey;
}
// else - oops someone else beat us to it
// but it doesn't matter anymore since we only created one Thingey
}
}
return this.Thingeys[thingeyName];
}
I modified your code in a rush so no testing was done.
Anyway, I hope my idea is not so naive. :D
You might be able to buy a little bit of speed efficiency at the expense of memory. If you create an immutable array that lists all of the created Thingys and reference the array with a static variable, then you could check the existence of a Thingy outside of any lock, since immutable arrays are always thread safe. Then when adding a new Thingy, you can create a new array with the additional Thingy and replace it (in the static variable) in one (atomic) set operation. Some new Thingys may be missed because of race conditions, but the program shouldn't fail. It just means that on rare occasions extra duplicate Thingys will be made.
This will not replace the need for duplicate checking when creating a new Thingy, and it will use a lot of memory resources, but it will not require that the lock be taken or held while creating a Thingy.
I'm thinking of something along these lines, sorta:
private Dictionary<string, Thingey> Thingeys;
// An immutable list of (most of) the thingeys that have been created.
private string[] existingThingeys;
public Thingey GetThingey(Request request)
{
string thingeyName = request.ThingeyName;
// Reference the same list throughout the method, just in case another
// thread replaces the global reference between operations.
string[] localThingyList = existingThingeys;
// Check to see if we already made this Thingey. (This might miss some,
// but it doesn't matter.)
// This operation on an immutable array is thread-safe.
if (localThingyList.Contains(thingeyName))
{
// But referencing the dictionary is not thread-safe.
lock (this.Thingeys)
{
if (this.Thingeys.ContainsKey(thingeyName))
return this.Thingeys[thingeyName];
}
}
Thingey newThingey = new Thingey(request);
Thingey ret;
// We haven't locked anything at this point, but we have created a new
// Thingey that we probably needed.
lock (this.Thingeys)
{
// If it turns out that the Thingey was already there, then
// return the old one.
if (!Thingeys.TryGetValue(thingeyName, out ret))
{
// Otherwise, add the new one.
Thingeys.Add(thingeyName, newThingey);
ret = newThingey;
}
}
// Update our existingThingeys array atomically.
string[] newThingyList = new string[localThingyList.Length + 1];
Array.Copy(localThingyList, newThingyList, localThingyList.Length);
newThingyList[localThingyList.Length] = thingeyName;
existingThingeys = newThingyList; // Voila!
return ret;
}