Creating many ScriptScopes in a remote AppDomain causing memory leak? - c#

I'm currently working with IronPython and for some reason when I try to create many ScriptScopes with variables in a separate AppDomain, my memory usage grows without ever GC'ing. When I run this same code in the current AppDomain, garbage collection works fine and the memory never grows. Below is a simple program to reproduce the OutOfMemoryException.
I have also tested setting one large string variable instead of many variables and get the same result. I'm running IronPython 2.7.4 on .NET version 4.5.50709, with my builds targeting x86 and .NET Framework 4. Is there something special I need to do to release unused ScriptScopes in a separate AppDomain or is this a memory leak?
public static void Main()
{
    var testLeak = true;
    var appDomain = testLeak ? AppDomain.CreateDomain("test") : AppDomain.CurrentDomain;
    var engine = Python.CreateEngine(
        appDomain, new Dictionary<string, object> { { "LightweightScopes", true } });
    var scriptSource = engine.CreateScriptSourceFromString("pass");
    var compiledCode = scriptSource.Compile();
    for (var i = 0; i < 10000; i++)
    {
        var scope = engine.CreateScope();
        for (var j = 0; j < 100000; j++)
        {
            scope.SetVariable("test" + j, j);
        }
        compiledCode.Execute(scope);
    }
}

I figured this one out myself. If you look at the source for ScriptScope, this is how it overrides MarshalByRefObject's InitializeLifetimeService:
public override object InitializeLifetimeService()
{
    return (object)null;
}
Returning null gives the remoting lease an infinite lifetime, so a ScriptScope created in a remote AppDomain is never released and can never be garbage collected. I guess I'll have to find a way to work with a single ScriptEngine and ScriptScope.
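For contrast, this is roughly what a finite lease looks like on an ordinary MarshalByRefObject. This is only an illustration of what the null return bypasses (you cannot change ScriptScope itself), and RemoteWorker and the lease times are made up for the example:

using System;
using System.Runtime.Remoting.Lifetime;

public class RemoteWorker : MarshalByRefObject
{
    public override object InitializeLifetimeService()
    {
        // A standard finite remoting lease. ScriptScope skips all of
        // this by returning null, which pins the remote object for the
        // lifetime of the AppDomain.
        var lease = (ILease)base.InitializeLifetimeService();
        if (lease.CurrentState == LeaseState.Initial)
        {
            lease.InitialLeaseTime = TimeSpan.FromMinutes(5); // arbitrary
            lease.RenewOnCallTime = TimeSpan.FromMinutes(2);  // arbitrary
        }
        return lease;
    }
}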

Since you are creating the AppDomain just to host the PythonEngine, why not unload it after use? That takes care of the memory leak.
AppDomain.Unload(appDomain);
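Building on that, here is a minimal sketch of recycling the domain per batch of work (the helper name and batching scheme are assumptions, not from the question):

using System;
using System.Collections.Generic;
using IronPython.Hosting;

public static class DomainRecycler
{
    public static void RunBatch(IEnumerable<string> sources)
    {
        // Create a throwaway domain per batch of scripts.
        var appDomain = AppDomain.CreateDomain("scripting");
        try
        {
            var engine = Python.CreateEngine(
                appDomain, new Dictionary<string, object> { { "LightweightScopes", true } });
            foreach (var source in sources)
            {
                var scope = engine.CreateScope();
                engine.Execute(source, scope);
            }
        }
        finally
        {
            // Unloading frees every ScriptScope created in the domain,
            // regardless of the InitializeLifetimeService override.
            AppDomain.Unload(appDomain);
        }
    }
}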

Related

C# Memory leak query - Why GC is not releasing memory of the local scope?

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Before 1st call TestFunc");
        TestFunc();
        Console.WriteLine("After 1st call TestFunc");
        Console.WriteLine("Before 2nd call TestFunc");
        TestFunc();
        Console.WriteLine("After 2nd call TestFunc");
        Console.ReadLine();
    }

    public static void TestFunc()
    {
        List<Employee> empList = new List<Employee>();
        for (int i = 1; i <= 50000000; i++)
        {
            Employee obj = new Employee();
            obj.Name = "fjasdkljflasdkjflsdjflsjfkldsjfljsflsdjlkfajsd";
            obj.ID = "11111111111111112222222222222222222222222222222";
            empList.Add(obj);
        }
    }
}

public class Employee
{
    public string Name;
    public string ID;
}
I am creating a lot of employees only inside a local scope; when control returns to Main, why doesn't .NET release the memory?
Let's say each function call uses 1 GB of memory. Back in Main, the application still uses more than 1 GB. Why isn't the GC collecting once the scope ends?
This might be a simple question; any help would be great.
The GC doesn't run automatically at the end of a scope or function call. According to the MS documentation:
Garbage collection occurs when one of the following conditions is true:
• The system has low physical memory. This is detected by either the low memory notification from the OS or low memory indicated by the host.
• The memory that is used by allocated objects on the managed heap surpasses an acceptable threshold. This threshold is continuously adjusted as the process runs.
• The GC.Collect method is called. In almost all cases, you do not have to call this method, because the garbage collector runs continuously. This method is primarily used for unique situations and testing.
Fundamentals of Garbage Collection
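To see that the list memory really is reclaimable once TestFunc returns, a quick diagnostic sketch placed in Main (for observation only, not something to ship):

Console.WriteLine("Before: {0:N0} bytes", GC.GetTotalMemory(false));
TestFunc();
// Force a full collection and wait for finalizers, purely to observe
// that the local list from TestFunc is collectible after it returns.
GC.Collect();
GC.WaitForPendingFinalizers();
Console.WriteLine("After:  {0:N0} bytes", GC.GetTotalMemory(true));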

Object pool seems to be returning an object copy rather than an object reference

I have written a simple object pool manager to dish out vertex buffers wrapped in a very simple class on demand. Everything works OK except that it seems the objects being returned are being copied somehow rather than referenced, as memory use goes up unexpectedly. All the objects in the pool are instantiated at runtime in a bog standard list. This is the initializer code:
public static void InitVBPoolManager()
{
    int i;

    // INIT POOL VB OBJECT LIST
    VBPool = new List<PoolManagerVBObject>();
    VBPool.Capacity = POOL_CAPACITY;

    // INIT POOL INDEX POINTERS
    nextItemIndex = 0;

    // FILL POOLMANAGER WITH INITIAL BATCH OF VBO
    for (i = 0; i < POOL_CAPACITY; ++i)
    {
        VBPool.Add(new PoolManagerVBObject(VERTEX_BUFFER_SIZE, 0));
    }
}
This is the Get Object method:
public static PoolManagerVBObject GetVB()
{
    // RETURN POOL VB
    if (nextItemIndex < POOL_CAPACITY)
    {
        VBRecycled++;
        return VBPool[nextItemIndex++];
    }
    else
    {
        VBPool.Add(new PoolManagerVBObject(VERTEX_BUFFER_SIZE, 0));
        POOL_CAPACITY++;
        VBCreated++;
        return VBPool[nextItemIndex++];
    }
}
And finally the code that uses the objects:
for (j = 0; j < limit; ++j)
{
    if (thisChunk.voxelVB.Count <= j)
    {
        thisChunk.voxelVB.Add(VBPoolManager.GetVB());
    }
}
It seems like when GetVB() is called, a copy of the returned object is being made, as ~260 MB of RAM is eaten up. This obviously should not happen, as the objects are already created in the PoolManager. If I replace the GetVB() call with just a new object, the memory consumption is the same, which is why I am led to believe a copy is being made. Anyone got any ideas?
This implementation will always leak memory: you never remove references from the pool, and you never return objects to it.
A correctly implemented object pool only holds references to available objects. When an object is retrieved, the pool should no longer reference it; that way, objects can be garbage collected if they are never returned. This points to another problem: you don't seem to have any way to return objects to the pool.
Also, your pool is not thread safe.
If you want to see how to implement an object pool correctly, you can check Roslyn's ObjectPool.
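A minimal sketch in that spirit (the type and member names are illustrative, not from the original code): the pool stores only available objects and keeps no reference to anything it has handed out.

using System;
using System.Collections.Concurrent;

public sealed class SimplePool<T> where T : class
{
    private readonly ConcurrentBag<T> _available = new ConcurrentBag<T>();
    private readonly Func<T> _factory;

    public SimplePool(Func<T> factory)
    {
        _factory = factory;
    }

    public T Get()
    {
        // Hand out an available object, or create one on demand. The
        // pool keeps no reference to checked-out objects, so they can
        // be garbage collected if they are never returned.
        T item;
        return _available.TryTake(out item) ? item : _factory();
    }

    public void Return(T item)
    {
        // Callers return objects explicitly when done with them.
        _available.Add(item);
    }
}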
This is the "dispose" method of the pool manager:
public static void DestroyVB(PoolManagerVBObject PvbObject)
{
    VBPool[--nextItemIndex] = PvbObject;
}
And this is what calls it:
private static void ClearChunkVB(Vector2 PchunkXZ)
{
    VBContainer thisChunk;
    int j;

    thisChunk = Landscape.chunkVB[(int)PchunkXZ.X, (int)PchunkXZ.Y];
    if (thisChunk.voxelVB.Count > 0)
    {
        for (j = 0; j < thisChunk.voxelVB.Count; ++j)
        {
            thisChunk.voxelVB[j].VertCount = 0;
            VBPoolManager.DestroyVB(thisChunk.voxelVB[j]);
        }
        thisChunk.voxelVB.Clear();
    }
}

LuaInterface/C# - Closures created with .NET objects never get cleaned up

I am using the latest version of LuaInterface (http://code.google.com/p/luainterface/) in a C# application. I'm running into a problem where the Lua class fails to clean up internal references in the ObjectTranslator's 'objects' and 'objectsBackMap' dictionaries, resulting in ever-growing memory usage.
The following code illustrates the problem:
public class Program
{
    public static void Main()
    {
        Lua lua = new Lua();
        string f = @"
            return function(myClass, dotNetObject)
                local f = function() dotNetObject:TestMethod() end
                myClass:StoreFunction(f);
            end";
        var result = lua.DoString(f)[0] as LuaFunction;

        MyClass testItem = new MyClass();
        for (int i = 0; i < 50; i++)
        {
            DotNetObject o = new DotNetObject();
            result.Call(testItem, o);
        }
        lua.DoString("collectgarbage()");

        ObjectTranslator translator = (ObjectTranslator)typeof(Lua).GetField("translator",
            System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance).GetValue(lua);
        Console.WriteLine("Object count: " + translator.objects.Count);
        // Prints out 51. 50 of these items are instances of 'DotNetObject', despite them being unreachable.
        // Forcing a .NET garbage collection does not help. They are still being held onto by the object translator.
    }

    public class MyClass
    {
        public void StoreFunction(LuaFunction function)
        {
            // I would normally save off the function here to be executed later
        }
    }

    public class DotNetObject
    {
        public void TestMethod()
        {
        }
    }
}
The problem arises when the anonymous function (local f = ...) creates a closure over a .NET object from the outer scope. As long as the Lua interpreter stays alive, the 50 instances of DotNetObject I've created will never be garbage collected, even when forcing a GC in Lua.
Manually disposing of the LuaFunction (function.Dispose()) in MyClass.StoreFunction solves the problem, but this is not desirable, because in my real application I don't know when the function will execute, or if it ever will. Being forced to dispose of the LuaFunction changes the entire architecture of the application: I end up doing manual memory management by disposing the object that contains the LuaFunction, and the object that contains that, all the way up the chain.
So, is this a bug in LuaInterface, or am I using the library incorrectly? Any advice is greatly appreciated, thanks!
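One hedged workaround sketch (the wrapper type is mine, not part of LuaInterface): tie the LuaFunction's disposal to the GC lifetime of a managed wrapper, so nothing up the ownership chain has to dispose anything explicitly.

public sealed class LuaFunctionHandle
{
    private readonly LuaFunction _function;

    public LuaFunctionHandle(LuaFunction function)
    {
        _function = function;
    }

    public object[] Call(params object[] args)
    {
        return _function.Call(args);
    }

    ~LuaFunctionHandle()
    {
        // When the wrapper becomes unreachable, release the Lua-side
        // reference so the ObjectTranslator can drop its entries.
        _function.Dispose();
    }
}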

Why does the C# garbage collector not keep trying to free memory until a request can be satisfied?

Consider the code below:
using System;

namespace memoryEater
{
    internal class Program
    {
        private static void Main(string[] args)
        {
            Console.WriteLine("alloc 1");
            var big1 = new BigObject();

            Console.WriteLine("alloc 2");
            var big2 = new BigObject();

            Console.WriteLine("null 1");
            big1 = null;
            //GC.Collect();

            Console.WriteLine("alloc3");
            big1 = new BigObject();

            Console.WriteLine("done");
            Console.Read();
        }
    }

    public class BigObject
    {
        private const uint OneMeg = 1024 * 1024;
        private static int _idCnt;
        private readonly int _myId;
        private byte[][] _bigArray;

        public BigObject()
        {
            _myId = _idCnt++;
            Console.WriteLine("BigObject {0} creating... ", _myId);

            _bigArray = new byte[700][];
            for (int i = 0; i < 700; i++)
            {
                _bigArray[i] = new byte[OneMeg];
            }

            for (int j = 0; j < 700; j++)
            {
                for (int i = 0; i < OneMeg; i++)
                {
                    _bigArray[j][i] = (byte)i;
                }
            }
            Console.WriteLine("done");
        }

        ~BigObject()
        {
            Console.WriteLine("BigObject {0} finalised", _myId);
        }
    }
}
I have a class, BigObject, which allocates a 700 MiB array in its constructor and has a finaliser that does nothing other than print to the console. In Main, I create two of these objects, null out the first, and then create a third.
If this is compiled for 32-bit (so as to limit memory to 2 GiB), an out-of-memory exception is thrown when creating the third BigObject. This is because, when memory is requested the third time, the request cannot be satisfied, so the garbage collector runs. However, the first BigObject, which is ready to be collected, has a finaliser, so instead of being collected it is placed on the finalisation queue and finalised. The garbage collector then halts and the exception is thrown. However, if the call to GC.Collect is uncommented, or the finaliser is removed, the code runs fine.
My question is: why does the garbage collector not do everything it can to satisfy the request for memory? If it ran twice (once to finalise and again to free), the above code would work fine. Shouldn't the garbage collector continue to finalise and collect until no more memory can be freed before throwing the exception, and is there any way to configure it to behave this way (either in code or through Visual Studio)?
It's non-deterministic when the GC will run and try to reclaim memory.
Add the lines below after big1 = null; they force a collection and wait for the finaliser to run before the third allocation. However, you should be careful about forcing the GC to collect; it's not recommended unless you know what you are doing.
GC.Collect();
GC.WaitForPendingFinalizers();
Best Practice for Forcing Garbage Collection in C#
When should I use GC.SuppressFinalize()?
Garbage collection in .NET (generations)
I guess it's because the point at which a finalizer executes during garbage collection is undefined. Resources are not guaranteed to be released at any specific time (unless you call a Close or Dispose method). Also, the order in which finalizers run is random, so a finalizer on another object could be waiting while your object waits on it.
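Relatedly, a hedged sketch of the standard pattern (not code from the question): giving BigObject a Dispose path that suppresses finalisation lets the GC reclaim it in a single pass instead of parking it on the finalisation queue.

using System;

public class BigObject : IDisposable
{
    private byte[][] _bigArray = new byte[700][];

    public void Dispose()
    {
        _bigArray = null;
        // Skip the finalizer entirely, so the object is collectible on
        // the next GC pass rather than surviving into the queue.
        GC.SuppressFinalize(this);
    }

    ~BigObject()
    {
        // Fallback only; runs if Dispose was never called.
    }
}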

ManagementObject leaking in C# COM DLL

I have a C# COM DLL that calls WMI using the System.Management namespace. The DLL is being loaded into a C++ service. Every time I call into the WMI classes I see a HUGE memory leak. After about an hour, well over 1 GB of memory is used.
If I take the same COM DLL and load it into PowerShell using Reflection.LoadFrom, it does not leak memory. I modified the DLL (still loading it into the service with COM) from this:
public class MyComObject
{
    public void CallCom()
    {
        CallSomeWMIStuff();
    }
}
to this, which no longer leaks:
public class MyComObject
{
    public void CallCom()
    {
        //CallSomeWMIStuff();
    }
}
Here's an example of some of the WMI code:
var scope = new ManagementScope("root\\cimv2");
scope.Connect();

using (var myservice = GetService("SomeService", scope))
{
    //Some Stuff
}
...
ManagementObject GetService(string serviceName, ManagementScope scope)
{
    ManagementPath wmiPath = new ManagementPath(serviceName);
    using (ManagementClass serviceClass = new ManagementClass(scope, wmiPath, null))
    {
        using (ManagementObjectCollection services = serviceClass.GetInstances())
        {
            ManagementObject serviceObject = null;

            // If this service class does not have an instance, create one.
            if (services.Count == 0)
            {
                serviceObject = serviceClass.CreateInstance();
            }
            else
            {
                foreach (ManagementObject service in services)
                {
                    serviceObject = service;
                    break;
                }
            }
            return serviceObject;
        }
    }
}
EDIT: C++ Snippet:
NAMESPACE::ICSharpComPtr pCSharpCom = NULL;
HRESULT hr = pCSharpCom.CreateInstance(NAMESPACE::CLSID_CSharpCom);
if (FAILED(hr))
{
    Log("Failed (hr=%08x)", hr);
    return hr;
}

try
{
    _bstr_t bstrData = pCSharpCom->GetData();
    strLine = (LPCTSTR)bstrData;
    strMessage += strLine;
}
catch (_com_error& err)
{
    _bstr_t desc = GetErrorMessage(err);
    Log("Exception %S", (const wchar_t*)desc);
    return 0;
}

pCSharpCom->Release();
Has anyone seen anything like this? We are seeing a similar issue with C++/CLI loading a different WMI-related DLL directly.
Eventually, the WMI service becomes unresponsive and I have to restart that service as well.
Edit:
This has to do with the apartment state of the thread creating the COM object. I switched from CoInitialize to CoInitializeEx and set the thread to MTA. At first it didn't look like this was working, until I realized that the first time the method was called the thread state was STA rather than MTA! Every subsequent call would be MTA. If I returned right away when the thread was STA, before calling into the System.Management classes, I no longer leaked memory!
Any idea why the first call would be on an STA thread?
There is no Dispose on the RCW implementation, so by default you are at the mercy of the GC to release the COM objects you have created. However, you can try calling Marshal.FinalReleaseComObject on the RCW instance once you are done with your COM objects. This forces the ref count on the wrapped COM object to zero, and it should be released. Note that this also makes the RCW instance unusable, so be careful where you call it.
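A minimal sketch of that suggestion (the ProgID is a placeholder, not from the question):

using System;
using System.Runtime.InteropServices;

object comObject = Activator.CreateInstance(
    Type.GetTypeFromProgID("Some.ProgId")); // placeholder ProgID
try
{
    // ... use the COM object ...
}
finally
{
    // Forces the RCW's reference count on the underlying COM object
    // to zero; the RCW must not be used after this call.
    Marshal.FinalReleaseComObject(comObject);
}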
The problem had to do with the apartment state of the threads creating the COM object. One thread created the COM object as MTA and another created it as STA. The STA thread was created first, which then led to issues with the MTA threads: it caused the finalizer to block in GetToSTA.
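On the managed side, the equivalent precaution (a hedged sketch, assuming a dedicated worker thread) is to pin the thread's apartment state explicitly before it touches System.Management:

using System.Threading;

var worker = new Thread(() =>
{
    // WMI calls made here run on a thread that is explicitly MTA,
    // avoiding the accidental-STA case described above.
});
worker.SetApartmentState(ApartmentState.MTA);
worker.Start();
worker.Join();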
