For an assignment (it's for a concurrency course, if you're wondering) I have to implement my own lock
(more specifically: a TAS, a TTAS and an array lock, as described in "The Art of Multiprocessor Programming").
There are some test input and output schemes online that I tried (too bad they take pretty long to run).
The program has to count 9-digit numbers that pass a certain test
(it's called the "elfproef" ("eleven test") in Dutch; I don't know the English equivalent, sorry).
Sometimes I get a slightly different count, which suggests that my lock doesn't work 100% correctly.
I have implemented the locks like this:
interface Lock
{
void Lock();
void Unlock();
}
class TaSLock : Lock
{
AtomicBool state = new AtomicBool(false);
void Lock.Lock()
{ while (state.getAndSet(true)); }   // spin until getAndSet returns false, i.e. we flipped the flag
void Lock.Unlock()
{ state.getAndSet(false); }          // release the flag
}
The AtomicBool is implemented with an integer, because the Interlocked class doesn't have overloads for Boolean variables. This isn't optimal in terms of memory usage, but it doesn't (or shouldn't) matter for speed.
class AtomicBool
{
int value;
const int True = 1, False = -1;
public AtomicBool(bool value)
{
if (value) this.value = True;
else this.value = False;
}
public void set(bool newValue)
{
if (newValue) Interlocked.Exchange(ref value, True);
else Interlocked.Exchange(ref value, False);
}
public bool getAndSet(bool newValue)
{
int oldValue;
if (newValue) oldValue = Interlocked.Exchange(ref value, True);
else oldValue = Interlocked.Exchange(ref value, False);
return (oldValue == True);
}
public bool get()
{
    // Interlocked.Add with 0 is an atomic read of the underlying int.
    return (Interlocked.Add(ref value, 0) == True);
}
}
Now in the parallel part I have just used:
theLock.Lock();
counter++;
theLock.Unlock();
But each time I get slightly different results.
Is there something obvious I'm doing wrong?
Hans is right. Your atomic get-and-set boolean appears to be correct -- in fact, it appears to me to be somewhat over-engineered. And the lock appears to be correct as well, insofar as you've built yourself a potentially highly inefficient "spin lock". (That is, all the waiting threads just sit there in a tight loop asking "can I go yet? can I go yet?" instead of going to sleep until it is their turn.)
What is not correct is that your lock provides no guarantee whatsoever that any two threads that both have a view of "counter" have a consistent view of "counter". Two threads could be on different processors, and those different processors could have different copies of "counter" in their caches. The cached copies will be updated, and only occasionally written back to main memory, thereby effectively "losing" some increments.
The real implementation of locking in C# ensures that a full-fence memory barrier is imposed so that reads and writes cannot move "forwards and backwards in time" across the fence. That gives a hint to the processors that they need to not be so smart about caching "counter" so aggressively.
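For the assignment you presumably have to keep your own lock, but as a correctness baseline, here is a minimal sketch of the two standard ways to make a shared counter safe (plain .NET, nothing assignment-specific):
using System.Threading;

class SafeCounter
{
    private static readonly object gate = new object();
    private static int counter;

    static void IncrementWithLock()
    {
        // Monitor.Enter/Exit (what the lock statement compiles to) imply
        // full memory barriers, so every thread sees a consistent counter.
        lock (gate)
        {
            counter++;
        }
    }

    static void IncrementLockFree()
    {
        // For a lone counter no lock is needed at all:
        // Interlocked.Increment is an atomic read-modify-write
        // and acts as a full fence by itself.
        Interlocked.Increment(ref counter);
    }
}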
Related
I had understood that lock() locks a region of code, so that other threads cannot execute the locked line(s) of code. EDIT: this turns out to be just wrong.
Is it also possible to do that per instance of an object? EDIT: yes, that's just the difference between a static and a non-static lock object.
E.g. a null reference is checked during a lazy load, but there is no need to lock out other instances of the same type?
object LockObject = new object();
List<Car> cars;
public List<Car> Method()
{
    if (cars == null)
    {
        cars = Lookup(..);
        foreach (var car in cars.ToList())
        {
            if (car.IsBroken())
            {
                lock (LockObject)
                {
                    cars.Remove(car);
                }
            }
        }
    }
    return cars;
}
EDIT: would this be a correct way to write this code?
Because when cars == null and thread A locks, another thread B will wait. When A is done, B continues, but it should check again whether cars == null; otherwise the code would execute a second time.
But this looks unnatural; I've never seen such a pattern.
Note that locking before the first null check would mean acquiring the lock on every single call just to check for null, so that is not good.
public List<Car> Method()
{
    if (cars == null)
    {
        lock (LockObject)
        {
            if (cars == null)
            {
                cars = Lookup(..);
                foreach (var car in cars.ToList())
                {
                    if (car.IsBroken())
                    {
                        cars.Remove(car);
                    }
                }
            }
        }
    }
    return cars;
}
It's important to realise that locking is very much a matter of the object locked on.
Most often we want to lock particular blocks of code entirely. As such we lock on a readonly field, and hence prevent any other execution of that code either at all (if the field is static) or for the given instance (if the field is not static). However, that is a matter of the most common use, not the only possible use.
Consider:
ConcurrentDictionary<string, List<int>> idLists = SomeMethodOrSomething();
List<int> idList;
if (idLists.TryGetValue(someKey, out idList))
{
lock(idList)
{
if (!idList.Contains(someID))
idList.Add(someID);
}
}
Here "locked" section can be called simultaneously by as many threads as you can get going. It cannot, however, be called simultaneously on the same list. Such a usage is unusual, and one has to be sure that nothing else can try to lock on one of the lists (easy if nothing else can access idLists or access any of the lists either before or after they are added into it, hard otherwise), but it does come up in real code.
But the important thing here is that obtaining the idList is itself thread-safe. When it comes to creating a new idList, this more narrowly-focused locking would not work.
Instead we'd have to do one of two things:
The simplest is to just lock on a readonly field before any operation (the more normal approach).
The other is to use GetOrAdd:
List<int> idList = idLists.GetOrAdd(someKey, key => new List<int>());
lock(idList)
{
if (!idList.Contains(someID))
idList.Add(someID);
}
Now an interesting thing to note here is that GetOrAdd() doesn't guarantee that if it calls the factory key => new List<int>(), the result of that factory is what will be returned. Nor does it promise to call it only once. Once we move away from the sort of code that just locks on a readonly field, the potential races get more complicated, and more thought has to go into them (in this case the likely thought would be: if a race means more than one list is created, but only one is ever used and the rest get GC'd, then that's fine).
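If you do need the stronger guarantee of exactly one list per key, a common mitigation (a sketch, not part of the code above, reusing the same someKey placeholder) is to store Lazy<T> values in the dictionary:
var idLists = new ConcurrentDictionary<string, Lazy<List<int>>>();

// GetOrAdd may still invoke the factory more than once under contention,
// but every caller gets the same winning Lazy back, and its Value (the
// actual list) is only ever created once.
List<int> idList = idLists
    .GetOrAdd(someKey, _ => new Lazy<List<int>>(() => new List<int>()))
    .Value;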
To bring this back to your case. While the above shows that it's possible to lock not just as finely as your first example does, but much more finely again, the safety of it depends on the wider context.
Your first example is broken:
cars = Lookup(..)
foreach (car in cars.ToList()) // May be different cars to that returned from Lookup. Is that OK?
{
if (car.IsBroken()) // May not be in cars. Is that OK?
{ // IsBroken() may now return false. Is that OK?
lock(LockObject)
When the ToList() is called it may not be calling it on the same instance that was put into cars. This is not necessarily a bug, but it very likely is. To leave it you have to prove that the race is safe.
Each time a new car is obtained, cars may have been overwritten in the meantime. And each time we enter the lock, the state of car may have changed so that IsBroken() would now return false.
It's possible for all of this to be fine, but showing that they are fine is complicated.
Well, it tends to be complicated when it is fine, sometimes complicated when it's not fine, but most often it's very simple to get the answer, "no, it is not okay". And in fact that is the case here, because of one last point of non-thread-safety that is also present in your second example:
if (cars == null)
{
    lock (LockObject)
    {
        if (cars == null)
        {
            cars = Lookup(..);
            foreach (var car in cars.ToList())
            {
                if (car.IsBroken())
                {
                    cars.Remove(car);
                }
            }
        }
    }
}
return cars; // Not thread-safe.
Consider, thread 1 examines cars and finds it null. Then it obtains a lock, checks that cars is still null (good), and if it is it sets it to a list it obtained from Lookup and starts removing "broken" cars.
Now, at this point thread 2 examines cars and finds it not-null. So it returns cars to the caller.
Now what happens?
Thread 2 can find "broken" cars in the list, because they haven't been removed yet.
Thread 2 can skip past cars because the list's contents are being moved around by Remove() while it is working on it.
Thread 2 can have the enumerator used by a foreach throw an exception, because List<T>.Enumerator throws if you change the list while enumerating, and the other thread is doing exactly that.
Thread 2 can have an exception thrown that List<T> should never throw, because Thread 1 is half-way through one of its methods and its invariants only hold before and after each method call.
Thread 2 can obtain a bizarre franken-car because it read part of a car before a Remove() and part after it. (Only if Car is a value type; reads and writes of references are always individually atomic.)
All of this is obviously bad. The problem is that you are setting cars before it is in a state that is safe for other threads to look at. Instead you should do one of the following:
if (cars == null)
{
    lock (LockObject)
    {
        if (cars == null)
        {
            var tempCars = Lookup(..);
            tempCars.RemoveAll(car => car.IsBroken()); // RemoveAll returns a count, not the list
            cars = tempCars;
        }
    }
}
return cars;
This doesn't publish anything in cars until after the work on it has been done. As such, another thread can't see the list until it's safe to do so.
Alternatively:
if (cars == null)
{
    var tempCars = Lookup(..);
    tempCars.RemoveAll(car => car.IsBroken());
    lock (LockObject)
    {
        if (cars == null)
        {
            cars = tempCars;
        }
    }
}
return cars;
This holds the lock for less time, but at the cost of potentially doing wasteful work just to throw it away. If it's safe to do this at all (it might not be), then there's a trade-off between potential extra time on the first few look-ups and less time spent in the lock. It's sometimes worth it, but generally not.
The best strategy for lazy initialization is to use a property for the field:
private List<Car> Cars
{
get
{
lock (lockObject)
{
return cars ?? (cars = Lookup(..));
}
}
}
Using your lock object here also makes sure that no other thread creates a second instance of the list.
Add and remove operations also have to be performed while holding the lock:
void Add(Car car)
{
lock(lockObject) Cars.Add(car);
}
void Remove(Car car)
{
lock(lockObject) Cars.Remove(car);
}
Note the use of the Cars property (rather than the field) to access the list!
Now you can get a copy of your list:
List<Car> copyOfCars;
lock(lockObject) copyOfCars = Cars.ToList();
Then it is possible to safely remove certain objects from the original list:
foreach (var car in copyOfCars)
{
if (car.IsBroken())
{
Remove(car);
}
}
But be sure to use your own Remove(car) method, which takes the lock internally.
For List<T> specifically there is another way to clean up elements in place:
lock(lockObject) Cars.RemoveAll(car => car.IsBroken());
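For completeness, a sketch of the same lazy initialization using the framework's Lazy<T> instead of a hand-rolled double check (Lookup() and the Car stub here are hypothetical stand-ins for the real code):
using System;
using System.Collections.Generic;
using System.Threading;

class Car
{
    // Placeholder so the sketch compiles.
    public bool IsBroken() { return false; }
}

static class CarStore
{
    // Hypothetical loader standing in for the real Lookup(..).
    static List<Car> Lookup() { return new List<Car>(); }

    static readonly Lazy<List<Car>> cars = new Lazy<List<Car>>(() =>
    {
        var list = Lookup();
        list.RemoveAll(car => car.IsBroken());
        return list;
    }, LazyThreadSafetyMode.ExecutionAndPublication);

    // The factory runs exactly once; subsequent reads are lock-free.
    public static List<Car> Cars { get { return cars.Value; } }
}
Note that Lazy<T> only makes the creation race-free; later Add/Remove calls on the list still need the external lock shown above.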
While working on a multi-threaded application, I ran into a scenario where I need to assign a value to a static field and have all other threads use the latest value of that static field.
The code looks like this:
Main() Method:
for (var i = 1; i <= 50; i++)
{
ProcessEmployee processEmployee = new ProcessEmployee();
Thread thread = new Thread(processEmployee.Process);
thread.Start(i);
}
public class ProcessEmployee
{
public void Process(object s)
{
// Sometimes I get value 0 even if the value set to 1 by other thread.
// Want to resolve this issue.
if (StaticContainer.LastValue == 0)
{
Console.WriteLine("Last value is 0");
}
if (Convert.ToInt32(s) == 5)
{
StaticContainer.LastValue = 1;
Console.WriteLine("Last Value is set to 1");
}
// Expectation: want to get last value = 1 in all rest of the threads.
Console.WriteLine(StaticContainer.LastValue);
}
}
public static class StaticContainer
{
private static int lastValue = 0;
public static int LastValue
{
get
{
return lastValue;
}
set
{
lastValue = value;
}
}
}
Question:
Basically, once any thread sets a specific value on the static field, I want all the other threads to always see that latest value.
Please give me some ideas on this.
Thanks in advance!
Basically, once any thread sets a specific value on the static field, I want all the other threads to always see that latest value.
It sounds like you're basically missing a memory barrier. You could work this out with explicit barriers but no locks - or you could just go for the brute-force lock approach, or you could use Interlocked:
private static int lastValue;
public static int LastValue
{
// This won't actually change the value - basically if the value *was* 0,
// it gets set to 0 (no change). If the value *wasn't* 0, it doesn't get
// changed either.
get { return Interlocked.CompareExchange(ref lastValue, 0, 0); }
// This will definitely change the value - we ignore the return value, because
// we don't need it.
set { Interlocked.Exchange(ref lastValue, value); }
}
You could use volatile as suggested by newStackExchangeInstance in comments - but I'm never actually sure I fully understand exactly what it means, and I strongly suspect it doesn't mean what most people think it means, or indeed what the MSDN documentation states. You may want to read Joe Duffy's blog post on it (and this one too) for a bit more background.
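For reference, the volatile variant suggested in the comments would look like this (C# allows volatile on int fields; it gives you visibility of the latest write, but compound operations such as lastValue++ would still not be atomic):
public static class StaticContainer
{
    // Every read observes the most recent write from any thread.
    private static volatile int lastValue;

    public static int LastValue
    {
        get { return lastValue; }
        set { lastValue = value; }
    }
}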
If two different threads may access the same field/variable and at least one of them will be writing, you need to use some sort of locking. For primitive types use the Interlocked class.
1) I'm working on a project and I saw this piece of code. I don't understand the point of the Monitor.Enter/Exit calls. Can someone explain what it's trying to do?
2) The trailing underscore in the parameter names is really annoying; has anyone else seen this naming convention?
public class FieldsChangeableHelper<T> : IFieldsChangeable<T>
{
object _lock;
int _lockCount;
FieldChanges<T> _changes;
public FieldsChangeableHelper()
{
_lock = new object();
_lockCount = 0;
}
public void AddChange(T field_, object oldValue_)
{
if (_changes == null)
_changes = new FieldChanges<T>(field_, oldValue_);
else
_changes.AddChange(field_, oldValue_);
if (RaiseEvent(_changes))
_changes = null;
}
#region IFieldsChangeable Members
public void BeginUpdate()
{
if (System.Threading.Interlocked.Increment(ref _lockCount) == 1)
Monitor.Enter(_lock);
}
public void EndUpdate()
{
if (System.Threading.Interlocked.Decrement(ref _lockCount) == 0)
{
FieldChanges<T> changes = _changes;
_changes = null;
Monitor.Exit(_lock);
RaiseEvent(changes);
}
}
protected bool RaiseEvent(FieldChanges<T> changes_)
{
if (_lockCount == 0 && Changed != null && changes_ != null)
{
Changed(this, changes_);
return true;
}
return false;
}
public event FieldsChanged<T> Changed;
#endregion
}
Monitor.Enter locks a section of code: when multiple threads try to execute the same piece in parallel, only one of them at a time can hold the lock and proceed. It ensures that only one thread is altering/executing that context. See the MSDN documentation.
Although it's common practice for the locking object to be static, in your case it is not, which might pose a problem if you're instantiating multiple objects of an open generic type.
Note one thing: in generics, a static member of an open generic type is distinct per constructed type, i.e. the static member is different for T = DateTime, T = string, etc.
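For reference, the C# lock statement is essentially shorthand for the Monitor calls used in this class:
using System.Threading;

class Example
{
    private readonly object _lock = new object();

    void WithLockStatement()
    {
        lock (_lock)
        {
            // critical section
        }
    }

    // ...is roughly what the compiler emits for:
    void WithMonitorDirectly()
    {
        Monitor.Enter(_lock);
        try
        {
            // critical section
        }
        finally
        {
            Monitor.Exit(_lock);
        }
    }
}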
In C#, private members of a type are often named with a leading underscore (this code uses a trailing underscore on parameters instead, which is less common).
The way I read it: BeginUpdate() ensures that the calling thread has exclusive access to the instance, and that change events will be batched and raised only once EndUpdate() is called. The author wanted to handle recursion (e.g. calling BeginUpdate() multiple times on the same thread) and to batch update events until after the lock has been released, because there is a potential deadlock when raising events while you still hold a lock on yourself: event subscribers might want to access your members and therefore lock the sender instance, which is already locked.
The whole conditional locking is not required (if my analysis is correct, of course), since locks based on the Monitor class are reentrant and counted.
There is another problem with the locking mechanism: currently, when one thread holds the lock, a second thread won't even wait for it but will simply continue without the lock, because Monitor.Enter is only called when the counter transitions to 1! This seems like a big bug. A possible fix is sketched below.
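A sketch of how the pair could be written instead, leaning on Monitor's reentrancy (this assumes the same fields as the class above; it is one possible fix, not the author's code):
public void BeginUpdate()
{
    Monitor.Enter(_lock);   // reentrant: the same thread may nest Begin/EndUpdate
    _lockCount++;           // safe to touch, we hold the lock
}

public void EndUpdate()
{
    _lockCount--;
    FieldChanges<T> changes = null;
    if (_lockCount == 0)
    {
        changes = _changes; // take the batch before releasing the lock
        _changes = null;
    }
    Monitor.Exit(_lock);
    if (changes != null)
        RaiseEvent(changes); // raise outside the lock to avoid the deadlock above
}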
Regarding the naming convention: I use it myself as a way of differentiating private fields from parameters and locals. It's a preference which many C# coding conventions recommend. It helps in a case like this:
void Method(int number)
{
// no need to refer to this since:
//this.number = number;
// can be replaced with
_number = number;
}
I need to implement an undo/redo framework for my Windows application (an editor like PowerPoint). What are the best practices to follow, and how would I handle all property changes of my objects and their reflection in the UI?
There are two classic patterns to use. The first is the memento pattern, which is used to store snapshots of your complete object state. This is perhaps more memory intensive than the command pattern, but it allows very simple rollback to an older snapshot. You could store the snapshots on disk a la PaintShop/PhotoShop, or keep them in memory for smaller objects that don't require persistence. What you're doing is exactly what this pattern was designed for, so it should fit the bill slightly better than the command pattern suggested by others.
An additional note: because it doesn't require you to have reciprocal commands to undo something that was previously done, any potentially one-way operation (such as hashing or encryption) that can't be undone trivially with a reciprocal command can still be undone very simply by rolling back to an older snapshot.
That said, as pointed out, the command pattern is potentially less resource intensive, so I will concede that in specific cases where:
There is a large object state to be persisted and/or
There are no destructive methods and
Where reciprocal commands can be used very trivially to reverse any action taken
the command pattern may be a better fit [but not necessarily, it will depend very much on the situation]. In other cases, I would use the memento pattern.
I would probably refrain from using a mashup of the two, because I care about the developer who will come in behind me and maintain my code, and it is my responsibility to my employer to make that process as simple and inexpensive as possible. I can see a mashup of the two patterns easily becoming an unmaintainable rat hole that would be expensive to maintain.
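To make the memento idea concrete, here is a minimal sketch (Document, EditorState and History are illustrative names, not from any framework):
using System.Collections.Generic;

// Illustrative state object: in a real editor this would capture
// everything needed to restore the document.
class EditorState
{
    public string Content { get; }
    public EditorState(string content) { Content = content; }
}

class Document
{
    public string Content { get; set; } = "";

    public EditorState Save() => new EditorState(Content);       // take a snapshot
    public void Restore(EditorState s) => Content = s.Content;   // roll back
}

class History
{
    private readonly Stack<EditorState> undo = new Stack<EditorState>();

    public void Checkpoint(Document d) => undo.Push(d.Save());

    public void Undo(Document d)
    {
        if (undo.Count > 0)
            d.Restore(undo.Pop());
    }
}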
There are three approaches here that are viable. Memento Pattern (Snapshots), Command Pattern and State Diffing. They all have advantages and disadvantages and it really comes down to your use case, what data you are working with and what you are willing to implement.
I would go with State Diffing if you can get away with it as it combines memory reduction with ease of implementation and maintainability.
I'm going to quote an article describing the three approaches (Reference below).
Note that VoxelShop mentioned in the article is open source. So you can take a look at the complexity of the command pattern here:
https://github.com/simlu/voxelshop/tree/develop/src/main/java/com/vitco/app/core/data/history
Below is an adapted excerpt from the article. However I do recommend that you read it in full.
Memento Pattern
Each history state stores a full copy. An action creates a new state and a pointer is used to move between the states to allow for undo and redo.
Pros
Implementation is independent of the applied action. Once implemented we can add actions without worrying about breaking history.
It is fast to advance to a predefined position in history. This is interesting when the actions applied between current and desired history position are computationally expensive.
Cons
Memory Requirements can be significantly higher compared to other approaches.
Loading time can be slow if the snapshots are large.
Command Pattern
Similar to the Memento Pattern, but instead of storing the full state, only the difference between the states is stored. The difference is stored as actions that can be applied and un-applied. When introducing a new action, apply and un-apply need to be implemented.
Pros
Memory footprint is small. We only need to store the changes to the model and if these are small, then the history stack is also small.
Cons
We can not go to an arbitrary position directly, but rather need to un-apply the history stack until we get there. This can be time consuming.
Every action and its reverse needs to be encapsulated in an object. If your action is non-trivial, this can be difficult. Mistakes in the (reverse) action are really hard to debug and can easily result in fatal crashes. Even simple-looking actions usually involve a good amount of complexity. E.g. in the case of the 3D editor, the object for adding to the model needs to store what was added, what color was currently selected, what was overwritten, whether mirror mode was active, etc.
Can be challenging to implement and memory intensive when actions do not have a simple reverse, e.g when blurring an image.
State Diffing
Similar to the Command Pattern, but the difference is stored independently of the action, by simply XOR-ing the states. Introducing a new action does not require any special considerations.
Pros
Implementation is independent of the applied action. Once the history functionality is added we can add actions without worrying about breaking history.
Memory requirements are usually much lower than for the snapshot approach, and in a lot of cases comparable to the Command Pattern approach. However, this depends highly on the type of actions applied. E.g. inverting the color of an image using the Command Pattern should be very cheap, while State Diffing would save the whole image. Conversely, when drawing a long free-form line, the Command Pattern approach might use more memory if it chains history entries for each pixel.
Cons / Limitations
We can not go to an arbitrary position directly, but rather need to un-apply the history stack until we get there.
We need to compute the diff between states. This can be expensive.
Implementing the XOR diff between model states might be hard depending on your data model (see the sketch below).
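A toy sketch of the XOR idea, assuming each model state can be serialised to a byte[] snapshot of equal length:
// Toy sketch: states serialised to equal-length byte[] snapshots.
static byte[] Xor(byte[] a, byte[] b)
{
    var result = new byte[a.Length];
    for (int i = 0; i < a.Length; i++)
        result[i] = (byte)(a[i] ^ b[i]);
    return result;
}

// diff = Xor(before, after);
// XOR is its own inverse, so one diff serves both directions:
// Xor(after, diff) reproduces before (undo),
// Xor(before, diff) reproduces after (redo).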
Reference:
https://www.linkedin.com/pulse/solving-history-hard-problem-lukas-siemon
The classic practice is to follow the Command Pattern.
You can encapsulate any object that performs an action with a command, and have it perform the reverse action with an Undo() method. You store all the actions in a stack for an easy way of rewinding through them.
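A minimal sketch of that shape (the names are illustrative; a fuller worked example follows in a later answer):
using System.Collections.Generic;

interface ICommand
{
    void Execute();
    void Undo();
}

class CommandHistory
{
    private readonly Stack<ICommand> undoStack = new Stack<ICommand>();
    private readonly Stack<ICommand> redoStack = new Stack<ICommand>();

    public void Do(ICommand c)
    {
        c.Execute();
        undoStack.Push(c);
        redoStack.Clear();   // a new action invalidates the redo trail
    }

    public void Undo()
    {
        if (undoStack.Count == 0) return;
        var c = undoStack.Pop();
        c.Undo();
        redoStack.Push(c);
    }

    public void Redo()
    {
        if (redoStack.Count == 0) return;
        var c = redoStack.Pop();
        c.Execute();
        undoStack.Push(c);
    }
}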
Take a look at the Command Pattern.
You have to encapsulate every change to your model into separate command objects.
I wrote a really flexible system to keep track of changes. I have a drawing program which implements 2 types of changes:
add/remove a shape
property change of a shape
Base class (the identifiers are Dutch: Actie = action, Vorm/Vormen = shape(s), Tekening = drawing):
public abstract class Actie
{
public Actie(Vorm[] Vormen)
{
vormen = Vormen;
}
private Vorm[] vormen = new Vorm[] { };
public Vorm[] Vormen
{
get { return vormen; }
}
public abstract void Undo();
public abstract void Redo();
}
Derived class for adding shapes:
public class VormenToegevoegdActie : Actie
{
public VormenToegevoegdActie(Vorm[] Vormen, Tekening tek)
: base(Vormen)
{
this.tek = tek;
}
private Tekening tek;
public override void Redo()
{
tek.Vormen.CanRaiseEvents = false;
tek.Vormen.AddRange(Vormen);
tek.Vormen.CanRaiseEvents = true;
}
public override void Undo()
{
tek.Vormen.CanRaiseEvents = false;
foreach(Vorm v in Vormen)
tek.Vormen.Remove(v);
tek.Vormen.CanRaiseEvents = true;
}
}
Derived class for removing shapes:
public class VormenVerwijderdActie : Actie
{
public VormenVerwijderdActie(Vorm[] Vormen, Tekening tek)
: base(Vormen)
{
this.tek = tek;
}
private Tekening tek;
public override void Redo()
{
tek.Vormen.CanRaiseEvents = false;
foreach(Vorm v in Vormen)
tek.Vormen.Remove(v);
tek.Vormen.CanRaiseEvents = true;
}
public override void Undo()
{
tek.Vormen.CanRaiseEvents = false;
foreach(Vorm v in Vormen)
tek.Vormen.Add(v);
tek.Vormen.CanRaiseEvents = true;
}
}
Derived class for property changes:
public class PropertyChangedActie : Actie
{
public PropertyChangedActie(Vorm[] Vormen, string PropertyName, object OldValue, object NewValue)
: base(Vormen)
{
propertyName = PropertyName;
oldValue = OldValue;
newValue = NewValue;
}
private object oldValue;
public object OldValue
{
get { return oldValue; }
}
private object newValue;
public object NewValue
{
get { return newValue; }
}
private string propertyName;
public string PropertyName
{
get { return propertyName; }
}
public override void Undo()
{
//Type t = base.Vorm.GetType();
PropertyInfo info = Vormen.First().GetType().GetProperty(propertyName);
foreach(Vorm v in Vormen)
{
v.CanRaiseVeranderdEvent = false;
info.SetValue(v, oldValue, null);
v.CanRaiseVeranderdEvent = true;
}
}
public override void Redo()
{
//Type t = base.Vorm.GetType();
PropertyInfo info = Vormen.First().GetType().GetProperty(propertyName);
foreach(Vorm v in Vormen)
{
v.CanRaiseVeranderdEvent = false;
info.SetValue(v, newValue, null);
v.CanRaiseVeranderdEvent = true;
}
}
}
In each case, Vormen is the array of items affected by the change.
And it should be used like this:
Declaration of the stacks:
Stack<Actie> UndoStack = new Stack<Actie>();
Stack<Actie> RedoStack = new Stack<Actie>();
Adding a new shape (e.g. a point):
VormenToegevoegdActie vta = new VormenToegevoegdActie(new Vorm[] { NieuweVorm }, this);
UndoStack.Push(vta);
RedoStack.Clear();
Removing a selected shape
VormenVerwijderdActie vva = new VormenVerwijderdActie(to_remove, this);
UndoStack.Push(vva);
RedoStack.Clear();
Registering a property change
PropertyChangedActie ppa = new PropertyChangedActie(new Vorm[] { (Vorm)e.Object }, e.PropName, e.OldValue, e.NewValue);
UndoStack.Push(ppa);
RedoStack.Clear();
Finally, the Undo/Redo actions:
public void Undo()
{
    if (UndoStack.Count == 0) return; // nothing to undo
    Actie a = UndoStack.Pop();
    RedoStack.Push(a);
    a.Undo();
}
public void Redo()
{
    if (RedoStack.Count == 0) return; // nothing to redo
    Actie a = RedoStack.Pop();
    UndoStack.Push(a);
    a.Redo();
}
I think this is the most effective way of implementing an undo-redo algorithm.
For an example, look at this page on my website: DrawIt.
I implemented the undo/redo stuff at around line 479 of the file Tekening.cs. You can download the source code; the approach can be adapted to any kind of application.
I have a strange phenomenon while continuously instantiating a COM wrapper and then letting the GC collect it (not forced).
I'm testing this with the .NET Compact Framework on Windows CE x86, monitoring performance with the .NET Compact Framework Remote Performance Monitor. Native memory is tracked with the Windows CE Remote Performance Monitor from the Platform Builder toolkit.
During the first 1000 created instances every counter in perfmon seems ok:
GC heap goes up and down but the average remains the same
Pinned objects is 0
native memory keeps the same average
...
However, after approximately those first 1000 instances, the Pinned objects counter goes up and never comes down again. Memory usage stays the same, however.
I don't know what conclusion to draw from this... Is it a bug in the counters, or a bug in my software?
[EDIT]
I notice that the Pinned objects counter starts to go up as soon as the total bytes in use after GC stabilises, as does the "Objects not moved by compactor" counter.
A graph of the counters: http://files.stormenet.be/gc_pinnedobj.jpg
[/EDIT]
Here's the involved code:
private void pButton6_Click(object sender, EventArgs e) {
if (_running) {
_running = false;
return;
}
_loopcount = 0;
_running = true;
Thread d = new Thread(new ThreadStart(LoopRun));
d.Start();
}
private void LoopRun() {
while (_running) {
CreateInstances();
_loopcount++;
RefreshLabel();
}
}
void CreateInstances() {
List<Ppb.Drawing.Image> list = new List<Ppb.Drawing.Image>();
for (int i = 0; i < 10; i++) {
Ppb.Drawing.Image g = resourcesObj.someBitmap;
list.Add(g);
}
}
The Image object contains an AlphaImage:
public sealed class AlphaImage : IDisposable {
IImage _image;
Size _size;
IntPtr _bufferPtr;
public static AlphaImage CreateFromBuffer(byte[] buffer, long size) {
AlphaImage instance = new AlphaImage();
IImage img;
instance._bufferPtr = Marshal.AllocHGlobal((int)size);
Marshal.Copy(buffer, 0, instance._bufferPtr, (int)size);
GetIImagingFactory().CreateImageFromBuffer(instance._bufferPtr, (uint)size, BufferDisposalFlag.BufferDisposalFlagGlobalFree, out img);
instance.SetImage(img);
return instance;
}
void SetImage(IImage image) {
_image = image;
ImageInfo imgInfo;
_image.GetImageInfo(out imgInfo);
_size = new Size((int)imgInfo.Width, (int)imgInfo.Height);
}
~AlphaImage() {
Dispose();
}
#region IDisposable Members
public void Dispose() {
    if (_image != null) {
        Marshal.FinalReleaseComObject(_image);
        _image = null;
    }
    GC.SuppressFinalize(this);   // finalizer no longer needed once disposed
}
#endregion
}
Well, there's a bug in your code in that you're creating a lot of IDisposable instances and never calling Dispose on them. I'd hope that the finalizers would eventually kick in, but they shouldn't really be necessary. In your production code, do you dispose of everything appropriately - and if not, is there some reason why you can't?
If you put some logging in the AlphaImage finalizer (detecting AppDomain unloading and application shutdown and not logging in those cases!) does it show the finalizer being called?
EDIT: One potential problem which probably isn't biting you, but may be worth fixing anyway - if the call to CreateImageFromBuffer fails for whatever reason, you still own the memory created by AllocHGlobal, and that will currently be leaked. I suspect that's not the problem or it would be blowing up more spectacularly, but it's worth thinking about.
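A sketch of what that exception-safety fix could look like inside CreateFromBuffer (this assumes CreateImageFromBuffer signals failure by throwing; if it returns an HRESULT instead, test that instead):
instance._bufferPtr = Marshal.AllocHGlobal((int)size);
try
{
    Marshal.Copy(buffer, 0, instance._bufferPtr, (int)size);
    GetIImagingFactory().CreateImageFromBuffer(instance._bufferPtr, (uint)size,
        BufferDisposalFlag.BufferDisposalFlagGlobalFree, out img);
}
catch
{
    // On failure the factory never took ownership, so free the buffer ourselves.
    Marshal.FreeHGlobal(instance._bufferPtr);
    instance._bufferPtr = IntPtr.Zero;
    throw;
}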
I doubt it's a bug in RPM. What we don't have here is any insight into the Ppb.Drawing stuff. The place I see for a potential problem is the GetIImagingFactory call. What does it do? It's probably just a singleton getter, but it's something I'd chase.
I also see an AllocHGlobal call, but nowhere do I see that allocation being freed. For now, that's where I'd focus.