I'm experiencing a strange memory leak in a computationally expensive content-based image retrieval (CBIR) .NET application.
The idea is that a service class runs a thread loop which captures images from some source and then passes them to an image-tagging thread for annotation.
Image tags are queried from a repository by the service class at specified intervals and stored in its in-memory cache (a Dictionary) to avoid frequent database hits.
The classes in the project are:
class Tag
{
public Guid Id { get; set; } // tag id
public string Name { get; set; } // tag name: e.g. 'sky','forest','road',...
public byte[] Jpeg { get; set; } // tag jpeg image patch sample
}
interface IRepository
{
    IEnumerable<Tag> FindAll();
}
class Service
{
private IDictionary<Guid, Tag> Cache { get; set; } // to avoid frequent db reads
// image capture background worker (ICBW)
// image annotation background worker (IABW)
}
class Image
{
public byte[] Jpeg { get; set; }
public IEnumerable<Tag> Tags { get; set; }
}
The ICBW worker captures a JPEG image from some image source and passes it to the IABW worker for annotation. The IABW worker first refreshes the Cache if the update interval has elapsed, then annotates the image with some algorithm, creating an Image object, attaching Tags to it, and storing it in the annotation repository.
The Service cache update snippet in the IABW worker is:
IEnumerable<Tag> tags = repository.FindAll();
Cache.Clear();
foreach (Tag t in tags)
    Cache.Add(t.Id, t);
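As an aside (not necessarily the leak itself): clearing and refilling a dictionary that another thread may be reading concurrently is racy. A minimal sketch of an alternative that builds a fresh dictionary and publishes it in one atomic reference assignment, so readers never observe a half-filled cache (the field and method names here are hypothetical):

// Hypothetical refresh: build a new dictionary off to the side, then swap it in.
private volatile Dictionary<Guid, Tag> _cache = new Dictionary<Guid, Tag>();

private void RefreshCache(IRepository repository)
{
    var fresh = new Dictionary<Guid, Tag>();
    foreach (Tag t in repository.FindAll())
        fresh[t.Id] = t;
    _cache = fresh; // reference assignment is atomic in .NET
}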
The IABW worker is called many times a second and is pretty processor intensive.
After running the application for days I noticed the memory usage increasing in Task Manager. Using Perfmon to watch Process/Private Bytes and .NET CLR Memory/# Bytes in all Heaps, I found both increasing over time.
Experimenting with the application, I found that the Cache update is the problem. If the Cache is never updated, memory does not increase. But if the Cache is updated as often as once every 1-5 minutes, the application runs out of memory pretty fast.
What might be the reason for this memory leak? Image objects are created quite often and hold references to Tag objects in the Cache. I presume that when the Cache dictionary is rebuilt, those references somehow prevent the old objects from being garbage collected.
Do I need to explicitly null the managed byte[] fields to avoid the leak, e.g. by implementing Tag and Image as IDisposable?
Edit, 4 Aug 2011: added the buggy code snippet that causes the quick memory leak.
static void Main(string[] args)
{
while (!Console.KeyAvailable)
{
IEnumerable<byte[]> data = CreateEnumeration(100);
PinEntries(data);
Thread.Sleep(900);
Console.Write(String.Format("gc mem: {0}\r", GC.GetTotalMemory(true)));
}
}
static IEnumerable<byte[]> CreateEnumeration(int size)
{
Random random = new Random();
IList<byte[]> data = new List<byte[]>();
for (int i = 0; i < size; i++)
{
byte[] vector = new byte[12345];
random.NextBytes(vector);
data.Add(vector);
}
return data;
}
static void PinEntries(IEnumerable<byte[]> data)
{
var handles = data.Select(d => GCHandle.Alloc(d, GCHandleType.Pinned));
var ptrs = handles.Select(h => h.AddrOfPinnedObject());
IntPtr[] dataPtrs = ptrs.ToArray();
Thread.Sleep(100); // unmanaged function call taking byte** data
handles.ToList().ForEach(h => h.Free());
}
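For the record, the leak in the snippet above comes from LINQ's deferred execution: handles is a query, not a materialized list, so every enumeration of it allocates a fresh set of GCHandles. ptrs.ToArray() enumerates it once (allocating the pinned handles), and handles.ToList() enumerates it again, allocating and then freeing a second set of handles while the first, pinned set is never freed. A corrected sketch materializes the handles exactly once:

static void PinEntries(IEnumerable<byte[]> data)
{
    // Materialize the query once; a List is enumerated, not re-executed.
    List<GCHandle> handles = data.Select(d => GCHandle.Alloc(d, GCHandleType.Pinned)).ToList();
    IntPtr[] dataPtrs = handles.Select(h => h.AddrOfPinnedObject()).ToArray();
    Thread.Sleep(100); // stand-in for the unmanaged function call taking byte** data
    handles.ForEach(h => h.Free()); // frees exactly the handles allocated above
}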
No, you don't need to set anything to null or dispose of anything if it's just memory as you've shown.
I suggest you get hold of a good profiler to work out where the leak is. Do you have anything non-memory-related that you might be failing to dispose of, e.g. loading a GDI+ image to get the bytes?
Related
I've been having some problems with a web-based .NET (C#) application. I'm using the LazyCache library to cache frequent JSON responses (some around 80+ KB) for users belonging to the same company, across user sessions.
One of the things we need to do is keep track of the cache keys for a particular company, so that when any user in the company makes a mutating change to a cached item we can clear those entries for that company's users, forcing the cache to be repopulated on the next request.
We chose the LazyCache library because we wanted an in-memory cache without an external cache store such as Redis, as we don't have heavy usage.
One problem with this approach is that we need to record every cache key belonging to a particular customer whenever we cache something, so that when a company user makes a mutating change to the relevant resource we can expire all of that company's cache keys.
To achieve this we have a global cache which all web controllers have access to.
private readonly IAppCache _cache = new CachingService();
protected IAppCache GetCache()
{
return _cache;
}
A simplified example (forgive any typos!) of one of our controllers that uses this cache is below.
[HttpGet]
[Route("{customerId}/accounts/users")]
public async Task<Users> GetUsers([Required]string customerId)
{
    var usersBusinessLogic = await _provider.GetUsersBusinessLogic(customerId);
    var newCacheKey = "GetUsers." + customerId;
    CacheUtil.StoreCacheKey(customerId, newCacheKey);
    return await GetCache().GetOrAddAsync(newCacheKey, () => usersBusinessLogic.GetUsers(), DateTimeOffset.Now.AddMinutes(10));
}
We use a util class with static methods and a static concurrent dictionary to store the cache keys; each company (GUID) can have many cache keys. (ConcurrentHashSet is not a BCL type; it comes from a third-party package.)
private static readonly ConcurrentDictionary<Guid, ConcurrentHashSet<string>> cacheKeys = new ConcurrentDictionary<Guid, ConcurrentHashSet<string>>();
public static void StoreCacheKey(Guid customerId, string newCacheKey)
{
cacheKeys.AddOrUpdate(customerId, new ConcurrentHashSet<string>() { newCacheKey }, (key, existingCacheKeys) =>
{
existingCacheKeys.Add(newCacheKey);
return existingCacheKeys;
});
}
Within that same util class, when we need to remove all cache keys for a particular company, we have a method similar to the one below (which is called when mutating changes are made in other controllers):
public static void ClearCustomerCache(IAppCache cache, Guid customerId)
{
    if (!cacheKeys.TryGetValue(customerId, out var customerCacheKeys))
    {
        return; // nothing cached for this customer
    }
    foreach (var cacheKey in customerCacheKeys)
    {
        cache.Remove(cacheKey);
    }
    cacheKeys.TryRemove(customerId, out _);
}
We have recently been having performance problems: our web request response times slow significantly over time, without a significant change in the number of requests per second.
Looking at the garbage collection metrics, we notice a large Gen 2 heap size and a large object heap size, both trending upwards; we don't see memory being reclaimed.
We are still in the middle of debugging this, but I'm wondering whether the approach described above could lead to the problems we are seeing. We want thread safety, but could there be an issue with the concurrent dictionary above such that, even after we remove items, the memory is not freed, leading to excessive Gen 2 collection?
We are also using workstation garbage collection mode. I imagine switching to server GC will help us (our IIS server has 8 processors + 16 GB RAM), but I'm not sure switching will fix all the problems.
You may want to take advantage of the ExpirationTokens property of the MemoryCacheEntryOptions class. You can also use it from the ICacheEntry argument passed in the delegate of the LazyCache.Providers.MemoryCacheProvider.GetOrCreateAsync method. For example:
Task<T> GetOrAddAsync<T>(string key, Func<Task<T>> factory,
int durationMilliseconds = Timeout.Infinite, string customerId = null)
{
return GetMemoryCacheProvider().GetOrCreateAsync<T>(key, (options) =>
{
if (durationMilliseconds != Timeout.Infinite)
{
options.SetSlidingExpiration(TimeSpan.FromMilliseconds(durationMilliseconds));
}
if (customerId != null)
{
options.ExpirationTokens.Add(GetCustomerExpirationToken(customerId));
}
return factory();
});
}
Now the GetCustomerExpirationToken should return an object implementing the IChangeToken interface. Things are becoming a bit complex, but bear with me for a minute. The .NET platform doesn't provide a built-in IChangeToken implementation suitable for this case, since it is mainly focused on file system watchers. Implementing one is not difficult though:
class ChangeToken : IChangeToken, IDisposable
{
private volatile bool _hasChanged;
private readonly ConcurrentQueue<(Action<object>, object)>
registeredCallbacks = new ConcurrentQueue<(Action<object>, object)>();
public void SignalChanged()
{
_hasChanged = true;
while (registeredCallbacks.TryDequeue(out var entry))
{
var (callback, state) = entry;
callback?.Invoke(state);
}
}
bool IChangeToken.HasChanged => _hasChanged;
bool IChangeToken.ActiveChangeCallbacks => true;
IDisposable IChangeToken.RegisterChangeCallback(Action<object> callback,
object state)
{
registeredCallbacks.Enqueue((callback, state));
return this; // return null doesn't work
}
void IDisposable.Dispose() { } // It is called by the framework after each callback
}
This is a general implementation of the IChangeToken interface, that is activated manually with the SignalChanged method. The signal will be propagated to the underlying MemoryCache object, which will subsequently invalidate all entries associated with this token.
Now what is left to do is to associate these tokens with a customer, and store them somewhere. I think that a ConcurrentDictionary should be quite adequate:
private static readonly ConcurrentDictionary<string, ChangeToken>
CustomerChangeTokens = new ConcurrentDictionary<string, ChangeToken>();
private static ChangeToken GetCustomerExpirationToken(string customerId)
{
return CustomerChangeTokens.GetOrAdd(customerId, _ => new ChangeToken());
}
Finally the method that is needed to signal that all entries of a specific customer should be invalidated:
public static void SignalCustomerChanged(string customerId)
{
if (CustomerChangeTokens.TryRemove(customerId, out var changeToken))
{
changeToken.SignalChanged();
}
}
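To tie it together, a hypothetical usage sketch (the key, the duration, and the business-logic call mirror the question's controller; the 10-minute value is an assumption):

// Read path: the cached entry is tagged with the customer's change token.
var users = await GetOrAddAsync("GetUsers." + customerId,
    () => usersBusinessLogic.GetUsers(),
    durationMilliseconds: 600_000, // assumed 10-minute sliding window
    customerId: customerId);

// Write path: after a mutating change, invalidate every entry for this customer.
SignalCustomerChanged(customerId);

With this in place, the controllers no longer need to track cache keys per customer at all; the tokens carry that association.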
Large objects (> 85,000 bytes) are allocated on the Large Object Heap (LOH), which is collected as part of Gen 2, and they behave as if pinned: they are never moved. When the LOH is collected:
- The GC scans the LOH and marks dead objects.
- Adjacent dead objects are combined into free memory.
- The LOH is not compacted (by default).
- Further allocations only try to fill in the holes left by dead objects.
Reallocation without compaction can lead to memory fragmentation.
Long-running server processes can be done in by this; it is not uncommon.
You are probably seeing fragmentation occur over time.
Server GC just happens to be multi-threaded; I wouldn't expect it to solve fragmentation.
You could try breaking up your large objects, though this might not be feasible for your application.
You can try setting LargeObjectHeapCompactionMode (available since .NET Framework 4.5.1) after a cache clear, assuming clears are infrequent:
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();
Ultimately, I'd suggest profiling the heap to find out what works.
I have a problem with freeing memory in C#. I have a static class containing a static dictionary, which is filled with references to objects. A single object occupies a large amount of memory. From time to time I release memory by setting an object's obsolete references to null and removing the item from the dictionary. Unfortunately, in that case the memory usage does not shrink; only after the memory used reaches the maximum available in the system is there a sudden release of unused resources, and the amount of memory used correctly decreases.
Below is an outline of the classes:
public class cObj
{
public DateTime CreatedOn;
public object ObjectData;
}
public static class cData
{
public static ConcurrentDictionary<Guid, cObj> ObjectDict = new ConcurrentDictionary<Guid, cObj>();
    public static void FreeData()
    {
        foreach (var o in ObjectDict)
        {
            if (o.Value.CreatedOn <= DateTime.Now.AddSeconds(-30))
            {
                cObj Data;
                if (ObjectDict.TryGetValue(o.Key, out Data))
                {
                    Data.ObjectData = null;
                    ObjectDict.TryRemove(o.Key, out Data);
                }
            }
        }
    }
}
In this case the memory is not released. If, however, after this operation I call
GC.Collect();
then the expected release of unused objects follows.
How can I solve this problem so that I don't have to call GC.Collect()?
You shouldn't have to call GC.Collect() in most cases. To GC.Collect or not?
I've had similar scenarios where I just created a dictionary limited to n entries. I built this myself on top of ConcurrentDictionary, but you could use BlockingCollection.
One possible advantage is that if 1 million entries get added at the same time, all except n become available for garbage collection immediately, rather than 30 seconds later.
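A minimal sketch of such a bounded cache (hypothetical; it evicts the oldest entries once a fixed capacity is exceeded, approximating the 30-second rule without a timer):

using System.Collections.Concurrent;

// Hypothetical bounded cache: a ConcurrentDictionary plus a queue of keys in
// insertion order; once Capacity is exceeded, the oldest entries are evicted.
public class BoundedCache<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, TValue> _map = new ConcurrentDictionary<TKey, TValue>();
    private readonly ConcurrentQueue<TKey> _order = new ConcurrentQueue<TKey>();
    private readonly int _capacity;

    public BoundedCache(int capacity) { _capacity = capacity; }

    public void Add(TKey key, TValue value)
    {
        if (_map.TryAdd(key, value))
            _order.Enqueue(key);
        while (_map.Count > _capacity && _order.TryDequeue(out TKey oldest))
            _map.TryRemove(oldest, out _); // evicted value becomes collectible
    }

    public bool TryGet(TKey key, out TValue value) => _map.TryGetValue(key, out value);
}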
I am trying to implement some caching in my application and I want to use the default memory cache in C# (this requirement can be changed if it won't work). My problem is that I don't want to exceed the maximum amount of physical memory on the machine, but as I understand it I can't add such a constraint to the default memory cache.
In general the policy is:
If an object has been in the cache for 10 minutes with no requests, it is removed.
If a new object is added to the cache and the maximum amount of available physical memory is close to used up, elements are removed based on LRU.
My cache can contain many different objects ranging from 10 MB to 2-3 GB, so I can't really get the trim function to work.
Are there any suggestions on how to implement an LRU cache that monitors RAM usage? And hopefully it can be done using the caches in .NET?
Edit
I've added a simple example where the MemoryCache is limited to 100 MB and 20% of the physical memory, but that does not change anything: my memory fills up with no removal of cache entries. Note that the polling interval is set to every 5 seconds.
class Item
{
private List<Guid> data;
public Item(int capacity)
{
this.data = new List<Guid>(capacity);
for (var i = 0; i < capacity; i++)
data.Add(Guid.NewGuid());
}
}
class Program
{
static void Main(string[] args)
{
var cache = new MemoryCache(
"MyCache",
new NameValueCollection
{
{ "CacheMemoryLimitMegabytes", "100" },
{ "PhysicalMemoryLimitPercentage", "20" },
{ "PollingInterval", "00:00:05" }
});
for (var i = 0; i < 10000; i++)
{
var key = String.Format("key{0}", i);
Console.WriteLine("Add item: {0}", key);
cache.Set(key, new Item(1000000), new CacheItemPolicy() { UpdateCallback = UpdateHandler } );
}
Console.WriteLine("\nDone");
Console.ReadKey();
}
public static void UpdateHandler(CacheEntryUpdateArguments args)
{
Console.WriteLine("Remove: {0}, Reason: {1}", args.Key, args.RemovedReason.ToString());
}
}
Looks like the System.Runtime.Caching.MemoryCache class would fit the bill nicely. You set caching policy on a per-item basis, so if you add an item with a cache policy of SlidingExpiration with a TimeSpan of 10 minutes, you should get the behavior you are looking for.
This is a .NET 4 class only, so it doesn't exist in earlier runtime versions. If you are in a web context, the ASP.NET cache behaves similarly, but will probably not let you manage system information.
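For example, a minimal sketch of adding an item with a 10-minute sliding expiration (the key and value are placeholders):

using System;
using System.Runtime.Caching;

var cache = MemoryCache.Default; // or a named instance as below
var policy = new CacheItemPolicy
{
    // Entry is evicted after 10 minutes without being accessed.
    SlidingExpiration = TimeSpan.FromMinutes(10)
};
cache.Set("someKey", someLargeObject, policy); // "someKey"/someLargeObject are placeholders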
You can set limits on cache size when you create it so that it does not exceed a certain memory footprint:
var myCache = new MemoryCache(
    "MyCache",
    new NameValueCollection { { "PhysicalMemoryLimitPercentage", "50" } } // set max memory percentage
);
This should prevent any paging to disk, at least within your application. If there are outside memory pressures or the system is overly aggressive about paging memory to disk, your cache may still be paged out, but not due to overuse within your application. To my knowledge there is no way to reserve the memory pages in C#.
MSDN:
public IntPtr MaxWorkingSet { get; set; }
Gets or sets the maximum allowable working set size for the associated process. Property value: the maximum working set size that is allowed in memory for the process, in bytes.
So, as far as I understand it, I can limit the amount of memory that a process can use. I've tried this, but with no luck.
Some code:
public class A
{
public void Do()
{
List<string> guids = new List<string>();
do
{
guids.Add(Guid.NewGuid().ToString());
Thread.Sleep(5);
} while (true);
}
}
public static class App
{
public static void Main()
{
Process.GetCurrentProcess().MaxWorkingSet = new IntPtr(2097152);
try
{
new A().Do();
}
catch (Exception e)
{
}
}
}
I'm expecting an OutOfMemoryException after the limit of 2 MB is reached, but nothing happens. If I open Task Manager I can see that the amount of memory my application uses keeps growing without any limit.
What am I doing wrong?
Thanks in advance.
No, this doesn't limit the amount of memory used by the process. It's simply a wrapper around SetProcessWorkingSetSize which a) is a recommendation, and b) limits the working set of the process, which is the amount of physical memory (RAM) this process can consume.
It will absolutely not cause an out of memory exception in that process, even if it allocates significantly more than what you set the MaxWorkingSet property to.
There is an alternative to what you're trying to do -- the Win32 Job Object API. There's a managed wrapper for that on Codeplex (http://jobobjectwrapper.codeplex.com/) to which I contributed. It allows you to create a process and limit the amount of memory this process can use.
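For illustration, a rough P/Invoke sketch of the underlying Win32 API, capping committed memory for the current process (assumes the process is not already in a job; error handling omitted; treat this as a starting point, not production code):

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class JobMemoryLimit
{
    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_BASIC_LIMIT_INFORMATION
    {
        public long PerProcessUserTimeLimit, PerJobUserTimeLimit;
        public uint LimitFlags;
        public UIntPtr MinimumWorkingSetSize, MaximumWorkingSetSize;
        public uint ActiveProcessLimit;
        public UIntPtr Affinity;
        public uint PriorityClass, SchedulingClass;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct IO_COUNTERS
    {
        public ulong ReadOperationCount, WriteOperationCount, OtherOperationCount,
                     ReadTransferCount, WriteTransferCount, OtherTransferCount;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
    {
        public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
        public IO_COUNTERS IoInfo;
        public UIntPtr ProcessMemoryLimit, JobMemoryLimit,
                       PeakProcessMemoryUsed, PeakJobMemoryUsed;
    }

    const uint JOB_OBJECT_LIMIT_PROCESS_MEMORY = 0x00000100;
    const int JobObjectExtendedLimitInformation = 9;

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string lpName);

    [DllImport("kernel32.dll")]
    static extern bool SetInformationJobObject(IntPtr hJob, int infoClass,
        ref JOBOBJECT_EXTENDED_LIMIT_INFORMATION lpInfo, uint cbLength);

    [DllImport("kernel32.dll")]
    static extern bool AssignProcessToJobObject(IntPtr hJob, IntPtr hProcess);

    // Caps committed memory for the current process; allocations past the cap
    // fail, which the CLR surfaces as an OutOfMemoryException.
    public static void Apply(ulong limitBytes)
    {
        IntPtr job = CreateJobObject(IntPtr.Zero, null);
        var info = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
        info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
        info.ProcessMemoryLimit = (UIntPtr)limitBytes;
        SetInformationJobObject(job, JobObjectExtendedLimitInformation,
            ref info, (uint)Marshal.SizeOf(info));
        AssignProcessToJobObject(job, Process.GetCurrentProcess().Handle);
    }
}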
We are dealing with a lot of files which need to be opened and closed, mostly for data reads.
Is it a good idea to cache the MemoryStream of each file in a temporary hashtable or some other object?
We have noticed that when opening files over 100 MB we run into out-of-memory exceptions.
We are using a WPF app.
We can successfully open the files once or twice, sometimes three or four times, but after that we run into out-of-memory exceptions.
If you are currently caching these files, you would expect to run out of memory quite quickly.
If you aren't caching them yet, don't, because you'll just make it worse. Perhaps you have a memory leak? Are you disposing of the MemoryStream once you've used it?
The best way to deal with large files is to stream data in and out (using FileStreams), so that you don't have to hold the whole file in memory at once.
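For instance, a minimal sketch of processing a large file in fixed-size chunks through a FileStream (the 64 KB buffer size and the ProcessChunk handler are placeholders):

using System.IO;

// Read the file in 64 KB chunks so only one small buffer is in memory at a time.
using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read))
{
    var buffer = new byte[64 * 1024];
    int bytesRead;
    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        ProcessChunk(buffer, bytesRead); // hypothetical per-chunk handler
    }
}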
One issue with MemoryStream is that its internal buffer doubles in size each time the capacity has to grow. Even if your MemoryStream holds 100 MB and your file is 101 MB, as soon as you write that last 1 MB the internal buffer doubles to 200 MB. You can reduce this by giving the MemoryStream a starting capacity equal to the file size, but that still lets the files consume all of the memory and block any new allocations once some of the files are loaded. If you create a cache object held inside a WeakReference, you allow the garbage collector to discard some of your cached files as needed; just don't forget to add code that recreates the lost cache entries on demand.
public class CacheStore<TKey, TCache>
{
private static object _lockStore = new object();
private static CacheStore<TKey, TCache> _store;
private static object _lockCache = new object();
private static Dictionary<TKey, TCache> _cache =
new Dictionary<TKey, TCache>();
public TCache this[TKey index]
{
get
{
lock (_lockCache)
{
if (_cache.ContainsKey(index))
return _cache[index];
return default(TCache);
}
}
set
{
lock (_lockCache)
{
if (_cache.ContainsKey(index))
_cache.Remove(index);
_cache.Add(index, value);
}
}
}
public static CacheStore<TKey, TCache> Instance
{
get
{
lock (_lockStore)
{
if (_store == null)
_store = new CacheStore<TKey, TCache>();
return _store;
}
}
}
}
public class FileCache
{
private WeakReference _cache;
public FileCache(string fileLocation)
{
if (!File.Exists(fileLocation))
throw new FileNotFoundException("fileLocation", fileLocation);
this.FileLocation = fileLocation;
}
private MemoryStream GetStream()
{
if (!File.Exists(this.FileLocation))
throw new FileNotFoundException("fileLocation", FileLocation);
return new MemoryStream(File.ReadAllBytes(this.FileLocation));
}
public string FileLocation { get; private set; }
public MemoryStream Data
{
get
{
if (_cache == null)
_cache = new WeakReference(GetStream(), false);
var ret = _cache.Target as MemoryStream;
if (ret == null)
{
Recreated++;
ret = GetStream();
_cache.Target = ret;
}
return ret;
}
}
public int Recreated { get; private set; }
}
class Program
{
static void Main(string[] args)
{
var cache = CacheStore<string, FileCache>.Instance;
var fileName = @"c:\boot.ini";
cache[fileName] = new FileCache(fileName);
var ret = cache[fileName].Data.ToArray();
Console.WriteLine("Recreated {0}", cache[fileName].Recreated);
Console.WriteLine(Encoding.ASCII.GetString(ret));
GC.Collect();
var ret2 = cache[fileName].Data.ToArray();
Console.WriteLine("Recreated {0}", cache[fileName].Recreated);
Console.WriteLine(Encoding.ASCII.GetString(ret2));
GC.Collect();
var ret3 = cache[fileName].Data.ToArray();
Console.WriteLine("Recreated {0}", cache[fileName].Recreated);
Console.WriteLine(Encoding.ASCII.GetString(ret3));
Console.Read();
}
}
It's very difficult to say "yes" or "no" to whether caching file content is right in the common case, especially with so little information. However, finite resources are a fact of life, and you (as a developer) must account for them. If you want to cache something, you should use a mechanism that unloads data automatically. In the .NET Framework you can use the WeakReference class, which lets the target object be unloaded (byte arrays and memory streams are objects too).
If the hardware is under your control, you can use 64-bit, and you have the funds for a very large amount of RAM, then you can cache big files.
However, you should be humble about resources (CPU, RAM) and use the "cheap" way of implementation.
I think the problem is that after you are done, the file is not disposed immediately; it waits for the next GC cycle.
Streams are IDisposable, which means you can and should use a using block; then the stream is disposed immediately when you are done with it.
I don't think caching such an amount of data is a good solution, even if you never get a memory overflow. Check out the memory-mapped files solution: the file stays on the file system, but reading speed is almost equal to in-memory access (there is some overhead, for sure). Check out this link: MemoryMappedFiles.
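For instance, a minimal sketch of reading from a memory-mapped file (the path and offset are placeholders):

using System.IO;
using System.IO.MemoryMappedFiles;

// The OS pages the file in on demand; the whole file never has to live
// on the managed heap at once.
using (var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Open))
using (var accessor = mmf.CreateViewAccessor())
{
    byte first = accessor.ReadByte(0); // read at an arbitrary offset
}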
P.S. There are pretty good articles and examples on this topic around the internet.
Good luck.