What causes these spikes in freed memory? - c#

This is a chart of available virtual memory, in bytes, for a small application I'm prototyping with. The application runs in a tight loop which adds an integer to the end of a linked list. If there is a certain percentage of virtual memory remaining it goes to the beginning of the loop, otherwise it exits. What is causing the sudden increases in virtual memory at those six places in the graph? My first thought was garbage collection, but since I'm adding new elements to the linked list without deleting any there shouldn't be any unused references to clean up, certainly not in those large amounts. The final jump is over a third of a gigabyte. This is an application compiled for x86 but running on an x64 machine under .NET 4.5. Sorry if this is a duplicate, I don't know what to call the event so it's hard to search for it.
Edit: I can't provide the exact code but this code does demonstrate what I'm seeing.
// ComputerInfo comes from Microsoft.VisualBasic.Devices;
// list is a LinkedList<int> field on the containing class.
private LinkedList<int> list = new LinkedList<int>();

public void DoIt()
{
    ComputerInfo ci = new ComputerInfo();
    double memory = 0;
    double count = 0;
    for (int i = 0; i < 2; i++)
    {
        Console.WriteLine(ci.AvailableVirtualMemory);
        try
        {
            while (true)
            {
                list.AddFirst(892);
                count++;
            }
        }
        catch (OutOfMemoryException)
        {
            memory = ci.AvailableVirtualMemory;
            list.Clear();
            list = null;
            System.GC.Collect();
            list = new LinkedList<int>();
            Console.WriteLine(memory);
            Console.WriteLine(count);
            count = 0;
        }
    }
}

Related

Populate multiple Datagridviews with streaming data

I have been battling with this problem for quite a few days.
It seems every single example I have seen relies on data that is already present.
But in my case I have a stream that is processed and split into a number of string arrays, and with these I populate my DGVs.
The only (working) solution is this:
public void AddRowsToDGVs()
{
    for (int i = 0; i < dtblRepository.Length; i++)
    {
        if (dgvRepository[i].InvokeRequired)
        {
            dgvRepository[i].Invoke(new MethodInvoker(delegate
            {
                foreach (String[] datao in dataToAdd.GetConsumingEnumerable())
                {
                    int indexOfDGV = Array.IndexOf(activeDatas, datao[0]);
                    dgvRepository[indexOfDGV].Rows.Insert(0, datao);
                    dgvRepository[indexOfDGV].Refresh();
                    Application.DoEvents();
                }
            }));
        }
        else
            Application.DoEvents();
    }
}
Some explanation:
- dtblRepository: an array of DataTables (for the time being it provides the index, but later I may drop it)
- dgvRepository: an array holding the DGVs
- dataToAdd: a BlockingCollection populated from another routine
- activeDatas: an array holding the elements by which the stream will be processed
And yes, all the code depends on Application.DoEvents().
The whole thing is started like this:
    trd = new Thread(AddRowsToDGVs);
    trd.Start();
Everyone seems to point to async/await, but I just can't find a sample that fits my needs... can I get some help? :)
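Since the question revolves around GetConsumingEnumerable, here is a minimal console sketch of that pattern with the UI parts stripped out: one background task drains the BlockingCollection, and producers simply Add rows. The names (ConsumeAll, rows) are illustrative; in the real WinForms app the row insert would be marshalled via dgv.BeginInvoke instead of collected into a list, and no Application.DoEvents() is needed because the consumer blocks instead of polling.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class BlockingCollectionSketch
{
    // Drains the collection until CompleteAdding() is called.
    // In the real app, each row would go through dgv.BeginInvoke(...).
    public static List<string[]> ConsumeAll(BlockingCollection<string[]> rows)
    {
        var received = new List<string[]>();
        // GetConsumingEnumerable blocks when the collection is empty,
        // so there is no busy-waiting and no Application.DoEvents().
        foreach (string[] row in rows.GetConsumingEnumerable())
            received.Add(row);
        return received;
    }

    public static void Main()
    {
        var rows = new BlockingCollection<string[]>();
        var consumer = Task.Run(() => ConsumeAll(rows));

        // Producer side: another routine would do these Adds.
        for (int i = 0; i < 3; i++)
            rows.Add(new[] { "stream" + i, "value" + i });
        rows.CompleteAdding();          // unblocks the consumer loop

        Console.WriteLine(consumer.Result.Count); // prints 3
    }
}
```

The same shape works with async/await by replacing Task.Run with an async method that awaits the consuming loop.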

How does incrementing a shared value from several threads affine to one CPU manage to produce correct results when ++ is three instructions?

I thought I knew a lot on the subject, but the following puzzles me. I know that on the particular PC I run this on, "_x++" is translated to three assembly instructions (move the value from memory to a register, increment it in the register, write it back to memory). So why on Earth, when locked to a single CPU, do the threads incrementing the value manage never to be preempted at the exact moment when a value has been incremented in a register but not yet written back to memory? Does anybody know the answer? I'm guessing this works by sheer luck, but I can't exactly prove it.
I was able to make the correct result go away by setting a break point on inc eax in the disassembly window in VS, then making a few steps in the debugger, but that's no proof. Not a solid one.
[This code IS NOT THREAD-SAFE on purpose. This is not production code, it serves educational purposes. I'm trying to dig deep into the subject. Of course, with cache lines out of the picture, we get a lot, but still -- three instructions, and not even a single miss, that's weird! This is a 4 core CPU, x64.]
class Program
{
    private static int NumberOfThreads = 2;
    private const long IncrementIterations = 1000000000;
    private static int _x;

    static void Main(string[] args)
    {
        Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)1;
        Console.WriteLine("Hit Enter");
        Console.ReadLine();
        while (true)
        {
            var barrier = new Barrier(NumberOfThreads);
            _x = 0;
            var threads = new List<Thread>();
            for (int i = 0; i < NumberOfThreads; i++)
            {
                var thread = new Thread(
                    delegate()
                    {
                        barrier.SignalAndWait();
                        for (int j = 0; j < IncrementIterations; j++)
                        {
                            _x++;
                        }
                    });
                thread.Start();
                threads.Add(thread);
            }
            BlockUntilAllThreadsQuit(threads);
            Console.WriteLine(_x);
            Console.WriteLine("Actual increments: " + (IncrementIterations * NumberOfThreads));
            if (_x != (IncrementIterations * NumberOfThreads))
            {
                Console.WriteLine("Observed: " + _x);
                break;
            }
        }
        Console.ReadLine();
    }

    private static void BlockUntilAllThreadsQuit(IEnumerable<Thread> threadsToWaitFor)
    {
        foreach (var thread in threadsToWaitFor)
        {
            thread.Join();
        }
    }
}
Your program would be clearer if you used a class with a _x field, rather than using a _x variable that gets hoisted to a closure. That having been said, the increment would only fail to be atomic if a thread got preempted while it was doing an increment, in particular between the time the load completes and the time the store begins. That's a very small window. It's possible that some arbitrary event could occur and trigger a task switch at that exact moment, but it's not very likely. Note that it's entirely possible that the amount of time required to run your loop will be such that the ordinary timer-tick events would consistently miss the magic window of opportunity.
If you want to test things a little better, I'd suggest adding a little code to the loop to randomize the timing a bit. For example, using a bona fide local variable randThing initialized to 1, do something like:

    if ((randThing & 0x20000000) != 0)
        randThing ^= 0x20043251; // number picked out of a hat; a primitive
                                 // polynomial would be better
    else
        randThing <<= 1;

That might cause the loop timing to vary a little from one iteration to the next, in a non-periodic fashion.
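For contrast, the race described above disappears entirely if the increment is made atomic. This is a standalone sketch (not the poster's code, and without the single-CPU affinity) comparing the non-atomic _x++ with Interlocked.Increment, which the JIT emits as a single locked read-modify-write instruction:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

public static class InterlockedDemo
{
    public const int ThreadCount = 4;
    public const int Iterations = 250000;

    private static int _unsafeCount;
    private static int _safeCount;

    public static int RunSafe()
    {
        _unsafeCount = 0;
        _safeCount = 0;
        var threads = new List<Thread>();
        for (int t = 0; t < ThreadCount; t++)
        {
            threads.Add(new Thread(() =>
            {
                for (int i = 0; i < Iterations; i++)
                {
                    _unsafeCount++;                        // load/inc/store: updates can be lost
                    Interlocked.Increment(ref _safeCount); // atomic read-modify-write
                }
            }));
        }
        foreach (var t in threads) t.Start();
        foreach (var t in threads) t.Join();
        return _safeCount;
    }

    public static void Main()
    {
        int safe = RunSafe();
        Console.WriteLine("safe:   " + safe);          // always ThreadCount * Iterations
        Console.WriteLine("unsafe: " + _unsafeCount);  // often less on a multi-core machine
    }
}
```

The safe counter is always exactly ThreadCount * Iterations; the unsafe one usually falls short on a multi-core machine, which is precisely the lost-update window the question is probing.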

Monotouch ipad memory / animation problem

All, I'm working on what I thought was a fairly simple app. I'm using multiple view controllers, each with a view under which there are buttons and a single image view. The button-press event triggers the other view controller's view to display. That works perfectly. However, I also want to animate the transition to simulate a page turn. I use the code below to do that. It works well; however, every time I use this method the memory used increases. The memory used appears to be disconnected from the actual size of the image array. Also, I changed from PNG to JPEG (much smaller images) and it doesn't make a bit of difference. I thought about using .mov but the load time is very noticeable.
Please help. I've tried a ton of different ways to force garbage collection. I've dug through the limited texts, and searched this website to no avail.
Here's a sample of the code.
public partial class AppDelegate : UIApplicationDelegate
{
    // This method is invoked when the application has loaded its UI and is ready to run
    public override bool FinishedLaunching (UIApplication app, NSDictionary options)
    {
        UIApplication.SharedApplication.SetStatusBarHidden (true, true);
        // If you have defined a view, add it here:
        // window.AddSubview (navigationController.View);
        //window.AddSubview(mainController.View);
        window.MakeKeyAndVisible ();
        coverOpenbtn.TouchUpInside += HandleCoverOpenbtnTouchUpInside;
        backBtn1.TouchUpInside += HandleBackBtn1TouchUpInside;
        return true;
    }

    void HandleBackBtn1TouchUpInside (object sender, EventArgs e)
    {
        this.navView.RemoveFromSuperview();
        List<UIImage> myImages = new List<UIImage>();
        myImages.Add(UIImage.FromFile("c_1_00011.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00010.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00009.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00008.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00007.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00006.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00005.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00004.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00003.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00002.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00001.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00000.jpg"));
        //myImages.Add(UIImage.FromFile("c_1_00012.jpg"));
        var myAnimatedView = new UIImageView(window.Bounds);
        myAnimatedView.AnimationImages = myImages.ToArray();
        myAnimatedView.AnimationDuration = 1; // Seconds
        myAnimatedView.AnimationRepeatCount = 1;
        myAnimatedView.StartAnimating();
        window.AddSubview(myAnimatedView);
    }
    void HandleCoverOpenbtnTouchUpInside (object sender, EventArgs e)
    {
        this.coverView.AddSubview(navView);
        List<UIImage> myImages = new List<UIImage>();
        myImages.Add(UIImage.FromFile("c_1_00000.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00001.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00002.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00003.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00004.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00005.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00006.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00007.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00008.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00009.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00010.jpg"));
        myImages.Add(UIImage.FromFile("c_1_00011.jpg"));
        //myImages.Add(UIImage.FromFile("c_1_00012.jpg"));
        var myAnimatedView = new UIImageView(window.Bounds);
        myAnimatedView.AnimationImages = myImages.ToArray();
        myAnimatedView.AnimationDuration = 1; // Seconds
        myAnimatedView.AnimationRepeatCount = 1;
        opened++;
        myAnimatedView.StartAnimating();
        window.AddSubview(myAnimatedView);
    }
}
Here are a few hints (just from reading the code):
There is no difference between JPEG and PNG once the images are loaded in memory. The format only matters when the image is stored, not displayed. Once loaded (and decompressed) they will take a bit over (Width * Height * BitCount) bytes of memory.
Consider caching your images and loading them only when they are not already available. The GC will decide when to collect them (so many copies could exist at the same time). Right now you're loading each image twice when you could do it once (and use a separate array for ordering them).
Even if you cache them, also be ready to clear them on demand, e.g. if iOS warns you memory is low. Override ReceiveMemoryWarning to clear your list (or, better, arrays).
Don't call ToArray if you can avoid it (as in your sample code). If you know how many images you have, simply create the array with the right size (and cache both arrays too ;-). It will cut down (a bit) on the allocations.
Also consider caching the 'myAnimatedView' UIImageView (if the above does not help enough).
Be helpful to others: try them one by one and tell us what helped you the most :-)
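The caching hint above can be sketched in a platform-neutral way. ImageCache and the load delegate are illustrative stand-ins: in MonoTouch the loader would be UIImage.FromFile, and Clear() would be wired to ReceiveMemoryWarning.

```csharp
using System;
using System.Collections.Generic;

// Loads each named resource at most once; Clear() releases everything on demand.
public class ImageCache<T>
{
    private readonly Dictionary<string, T> _cache = new Dictionary<string, T>();
    private readonly Func<string, T> _load;

    public ImageCache(Func<string, T> load) { _load = load; }

    // Number of actual loads performed (cache misses).
    public int Loads { get; private set; }

    // Returns the cached instance if present; otherwise loads it once and keeps it.
    public T Get(string name)
    {
        T image;
        if (!_cache.TryGetValue(name, out image))
        {
            image = _load(name);   // e.g. UIImage.FromFile(name) in MonoTouch
            _cache[name] = image;
            Loads++;
        }
        return image;
    }

    // Call this from ReceiveMemoryWarning to drop the cached images.
    public void Clear() { _cache.Clear(); }
}

public static class CacheDemo
{
    public static void Main()
    {
        var cache = new ImageCache<string>(name => "decoded:" + name);
        cache.Get("c_1_00000.jpg");
        cache.Get("c_1_00000.jpg");     // second hit served from the cache
        Console.WriteLine(cache.Loads); // prints 1
    }
}
```

With this in place, the two button handlers would share one set of decoded images instead of loading each file twice.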
The images are to "animate" a page turn...is this to navigate through the app?
E.g. you start at the "home" page, press a button then it animates a page turn to the next screen in your app?
I think you would be better off looking at CoreGraphics to achieve this effect; it'll be a lot more memory-efficient, and it will probably look a lot better as well. There are a few projects in Objective-C to get you started, such as Tom Brow's excellent Leaves project.
Okay, here is the best solution I found. It doesn't crash the hardware, and it's generally useful for other tasks.
This is the code that goes in the handler for the button press. Each navImage is a UIImage I built under the same view in Interface Builder. I just turned the alpha to 0 initially, and light them up one by one...
NSTimer.CreateScheduledTimer(.1, delegate { navImage1.Alpha = 1;
  NSTimer.CreateScheduledTimer(.1, delegate { navImage2.Alpha = 1;
    NSTimer.CreateScheduledTimer(.05, delegate { navImage3.Alpha = 1;
      NSTimer.CreateScheduledTimer(.05, delegate { navImage4.Alpha = 1;
        NSTimer.CreateScheduledTimer(.05, delegate { navImage5.Alpha = 1;
          NSTimer.CreateScheduledTimer(.05, delegate { navImage6.Alpha = 1;
            NSTimer.CreateScheduledTimer(.05, delegate { navImage7.Alpha = 1;
              NSTimer.CreateScheduledTimer(.05, delegate { navImage8.Alpha = 1;
                NSTimer.CreateScheduledTimer(.05, delegate { navImage9.Alpha = 1;
                  NSTimer.CreateScheduledTimer(.05, delegate { navImage.Alpha = 1; });
                });
              });
            });
          });
        });
      });
    });
  });
});

List<T>.Clear - Does it have to be called?

So I've been fighting another memory problem in my project for the past week. I tried a couple of memory profilers but nothing gave me insight into what was causing the minor memory leak. The following code turned out to be causing it:
private void DeleteAll(FlowLayoutPanel flp)
{
    List<ImageControl> AllList = GetAllList(flp);
    List<ImageControl> LockedList = GetLockedList(flp);
    for (int i = 0; i < LockedList.Count; i++)
    {
        AllList.Remove(LockedList[i]);
    }
    flp.SuspendLayout();
    for (int i = 0; i < AllList.Count; i++)
    {
        flp.Controls.Remove(AllList[i]);
    }
    DisposeList(AllList);
    flp.ResumeLayout();
}
In the code, ImageControl is a UserControl, and the entire method above just removes ImageControls from a FlowLayoutPanel. The DisposeList() method just calls ImageControl.Dispose() for all the controls in the list passed to it.
Now, I thought that once this method had exited, AllList would be out of scope and hence all its references to the ImageControls would be nonexistent, so the GC would do its stuff. But it wasn't. I found it requires
AllList.Clear();
added to the end of the DeleteAll() method, before AllList goes out of scope.
So do you always have to explicitly clear a generic list to free up resources? Or is it something I'm doing wrong above? I'd like to know, since I'm making fairly heavy use of temporary Lists in this project.
Ok, here's the GetAllList method. Doesn't look like a problem to me:
private List<ImageControl> GetAllList(FlowLayoutPanel flp)
{
    List<ImageControl> List = new List<ImageControl>();
    for (int i = 0; i < flp.Controls.Count; i++)
    {
        List.Add((ImageControl)flp.Controls[i]);
    }
    return List;
}
BTW, if you look at my last couple of topics here, I've been fighting memory leaks in my quest to become a proficient C# programmer :) I added the DisposeList() method since I've read that Dispose() should be called on any object that implements IDisposable, which UserControl does. I also needed a way to work around a "bug" in the ToolStrip class (which ImageControl contains), where it causes resources to remain unless the Visible property is set to false before it's destroyed. So I've overridden the Dispose method of ImageControl to do just that.
Oh, and DisposeList() also unsubscribes from an event handler:
private void DisposeList(List<ImageControl> IC)
{
    for (int i = 0; i < IC.Count; i++)
    {
        IC[i].DoEvent -= ImageButtonClick;
        IC[i].Dispose();
    }
}
If AllList were the only reference to the list and the elements in the list, then the list and all its elements would become eligible for garbage collection as soon as you exit the DeleteAll method.
If calling AllList.Clear() makes a difference, then I would conclude that there is a reference to the same list being held elsewhere in your code. Maybe a closer look at the GetAllList() method would give a clue where.
You shouldn't have to clear the list. Can you share your GetAllList() function? The fact that you even need a corresponding "DisposeList()" method tells me there are probably side effects there that keep a reference to your list somewhere.
Also, I'd simplify that code like this:
private void DeleteAll(FlowLayoutPanel flp)
{
    // ToList() materializes the query up front; without it, removing controls
    // would modify the collection the lazy query is still enumerating.
    var unlockedImages = flp.Controls.OfType<ImageControl>()
                            .Except(GetLockedList(flp))
                            .ToList();
    flp.SuspendLayout();
    foreach (ImageControl ic in unlockedImages)
    {
        flp.Controls.Remove(ic);
    }
    flp.ResumeLayout();
}
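The claim that a local list becomes collectible without Clear() can be checked with a WeakReference. This is a standalone console sketch, not the poster's code: a List<int> stands in for the control list, and the helper names are illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;

public static class GcEligibilityDemo
{
    // NoInlining keeps the JIT from extending the list's lifetime
    // into the caller's frame.
    [MethodImpl(MethodImplOptions.NoInlining)]
    public static WeakReference BuildAndDropList()
    {
        var list = new List<int> { 1, 2, 3 };
        // No Clear() call: the list simply goes out of scope on return.
        return new WeakReference(list);
    }

    public static void Main()
    {
        WeakReference wr = BuildAndDropList();
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        // With no other reference to the list, it is collected without Clear();
        // if something else still held it, IsAlive would stay true.
        Console.WriteLine(wr.IsAlive); // prints False
    }
}
```

If the equivalent test against AllList printed True, that would confirm a second reference is being kept alive somewhere, as the answers suggest.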

GC contains lots of pinned objects after a while

I have a strange phenomenon while continuously instantiating a COM wrapper and then letting the GC collect it (not forced).
I'm testing this on .NET CF on WinCE x86, monitoring the performance with the .NET Compact Framework remote monitor. Native memory is tracked with the Windows CE Remote Performance Monitor from the Platform Builder toolkit.
During the first 1000 created instances every counter in perfmon seems ok:
GC heap goes up and down but the average remains the same
Pinned objects is 0
native memory keeps the same average
...
However, after those (approximately) 1000 instances, the Pinned objects counter goes up and never goes down again. The memory usage stays the same, however.
I don't know what conclusion to pull from this information... Is this a bug in the counters, is this a bug in my software?
[EDIT]
I do notice that the Pinned objects counter starts to go up as soon as the total bytes in use after GC stabilises, as does the Objects not moved by compactor counter.
The graphic of the counters http://files.stormenet.be/gc_pinnedobj.jpg
[/EDIT]
Here's the involved code:
private void pButton6_Click(object sender, EventArgs e) {
    if (_running) {
        _running = false;
        return;
    }
    _loopcount = 0;
    _running = true;
    Thread d = new Thread(new ThreadStart(LoopRun));
    d.Start();
}

private void LoopRun() {
    while (_running) {
        CreateInstances();
        _loopcount++;
        RefreshLabel();
    }
}

void CreateInstances() {
    List<Ppb.Drawing.Image> list = new List<Ppb.Drawing.Image>();
    for (int i = 0; i < 10; i++) {
        Ppb.Drawing.Image g = resourcesObj.someBitmap;
        list.Add(g);
    }
}
The Image object contains an AlphaImage:
public sealed class AlphaImage : IDisposable {
    IImage _image;
    Size _size;
    IntPtr _bufferPtr;

    public static AlphaImage CreateFromBuffer(byte[] buffer, long size) {
        AlphaImage instance = new AlphaImage();
        IImage img;
        instance._bufferPtr = Marshal.AllocHGlobal((int)size);
        Marshal.Copy(buffer, 0, instance._bufferPtr, (int)size);
        GetIImagingFactory().CreateImageFromBuffer(instance._bufferPtr, (uint)size, BufferDisposalFlag.BufferDisposalFlagGlobalFree, out img);
        instance.SetImage(img);
        return instance;
    }

    void SetImage(IImage image) {
        _image = image;
        ImageInfo imgInfo;
        _image.GetImageInfo(out imgInfo);
        _size = new Size((int)imgInfo.Width, (int)imgInfo.Height);
    }

    ~AlphaImage() {
        Dispose();
    }

    #region IDisposable Members
    public void Dispose() {
        Marshal.FinalReleaseComObject(_image);
    }
    #endregion
}
Well, there's a bug in your code in that you're creating a lot of IDisposable instances and never calling Dispose on them. I'd hope that the finalizers would eventually kick in, but they shouldn't really be necessary. In your production code, do you dispose of everything appropriately - and if not, is there some reason why you can't?
If you put some logging in the AlphaImage finalizer (detecting AppDomain unloading and application shutdown and not logging in those cases!) does it show the finalizer being called?
EDIT: One potential problem which probably isn't biting you, but may be worth fixing anyway - if the call to CreateImageFromBuffer fails for whatever reason, you still own the memory created by AllocHGlobal, and that will currently be leaked. I suspect that's not the problem or it would be blowing up more spectacularly, but it's worth thinking about.
I doubt it's a bug in RPM. What we don't have here is any insight into the Ppb.Drawing stuff. The place I see for a potential problem is the GetIImagingFactory call. What does it do? It's probably just a singleton getter, but it's something I'd chase.
I also see an AllocHGlobal, but nowhere do I see that allocation getting freed. For now that's where I'd focus.
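The leak-on-failure concern raised in the edit can be sketched generically. CopyToUnmanaged and the consume delegate below are illustrative stand-ins for the CreateImageFromBuffer call: the unmanaged block is freed in the failure path instead of being leaked.

```csharp
using System;
using System.Runtime.InteropServices;

public static class HGlobalSafety
{
    // Copies `buffer` into unmanaged memory and hands the pointer to `consume`
    // (standing in for CreateImageFromBuffer). If `consume` throws before
    // taking ownership of the buffer, the block is freed rather than leaked.
    public static IntPtr CopyToUnmanaged(byte[] buffer, Action<IntPtr> consume)
    {
        IntPtr ptr = Marshal.AllocHGlobal(buffer.Length);
        try
        {
            Marshal.Copy(buffer, 0, ptr, buffer.Length);
            consume(ptr);   // ownership transfers only if this succeeds
            return ptr;
        }
        catch
        {
            Marshal.FreeHGlobal(ptr);   // don't leak on failure
            throw;
        }
    }

    public static void Main()
    {
        var data = new byte[] { 1, 2, 3 };

        // Success path: caller (or the consumer) now owns the buffer.
        IntPtr ok = CopyToUnmanaged(data, _ => { });
        Console.WriteLine(ok != IntPtr.Zero);
        Marshal.FreeHGlobal(ok);

        // Failure path: the catch block releases the unmanaged memory.
        try
        {
            CopyToUnmanaged(data, _ => { throw new InvalidOperationException(); });
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("freed on failure");
        }
    }
}
```

In the original AlphaImage code the same shape would wrap the Marshal.Copy and CreateImageFromBuffer calls, since BufferDisposalFlagGlobalFree only transfers ownership when the factory call succeeds.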
