I have a WinForms system in C# .NET 2.0 using ActiveRecord + NHibernate to communicate with a PostgreSQL 9 database.
When a user opens the system, it starts communicating with the DB via a new SessionScope(). For some users it works perfectly... but for others the system throws an OutOfMemoryException, identical to Marcio's problem on the MSDN forum: link.
How can I solve this problem? The problem is in NHibernate! The error occurs when I try to close the ISession object or when I try to commit the transaction.
The underlying reason for an OutOfMemoryException can be outside the code that you posted. You simply have a memory leak, and it can be anywhere in your app. The exception will be thrown from the code that tries to allocate more memory, not necessarily from the code that causes the leak. Use a memory profiler to figure out what is causing it.
It is very likely, however, that this issue is due to a bloated first-level cache in NHibernate. From the SessionScope documentation:
At the same time, NHibernate is keeping track of changes to objects
within the scope. If there are too many objects and too many changes
to keep track of, then performance will slowly degrade. So a flush
now and then will be required.
Get rid of GC calls, you don't need them.
Limit the scope of the session
Flush/clear the session periodically (see the sketch after this list)
Make sure you use lazy loading appropriately (don't load information you don't need from the database)
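As a rough illustration of periodic flushing, here is a minimal sketch using a plain NHibernate ISession. The sessionFactory and entities names are assumptions; if you stay on the ActiveRecord API, SessionScope exposes an equivalent Flush().

using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    int i = 0;
    foreach (object entity in entities)
    {
        session.SaveOrUpdate(entity);
        if (++i % 100 == 0)
        {
            session.Flush();  // push pending changes to the database
            session.Clear();  // evict tracked objects so the first-level cache stays small
        }
    }
    tx.Commit();
}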
Related
We have a WCF service developed in C# running in a production environment where it crashes every few hours with no observable pattern. Memory usage will hover at ~250 MB for a while, then all of a sudden it starts climbing until the process crashes with an OutOfMemoryException at 4 GB (it's a 32-bit process).
We are having a hard time identifying the problem; the exceptions we log come from different places in the code, presumably from whichever request happens to be allocating memory when the limit is hit.
We have taken a memory dump when the process is at 4 GB, and a list of ~750k database objects is in memory when the crash occurs. We have looked up the queries for those objects but can't pinpoint the one that loads up the entire table. The service makes calls to the database using EF6.
Another thing to note: this problem has never occurred in our preproduction environment, even though its database holds enough data for the same crash to happen if the entire table were loaded there too. It's probably a specific call with a specific parameter that triggers the issue, but we can't pinpoint it.
I am out of ideas about what to try next. Is there a tool that can help us in this situation?
Thanks
If you want to capture all your SQL and are using Entity Framework, you can print out the queries like this:
Context.Database.Log = s => Debug.Print(s);
If you mess around with that a bit you can get it to output to a variable and save the result to a text file or the DB. You would have to wrap it around all DB calls; not sure how big your project is.
Context.Database.Log = null;
turns it off
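As an illustrative sketch (the file path is a placeholder, and context stands for your DbContext instance), you can redirect the log to a file instead of the debugger:

var writer = new System.IO.StreamWriter(@"C:\logs\ef-sql.log", true);  // true = append; placeholder path
context.Database.Log = s =>
{
    writer.Write(s);
    writer.Flush();  // flush immediately so the log survives a crash
};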
I am writing a C# database program that reads data from one database, does calculations on the data, then writes the results to another database. This involves a large volume of data requiring repeated cycles of reads from the source database then writes to the destination database.
I am having problems with my memory being eaten up. I call Dispose and Clear when I can, but my research indicates that these don't really free the memory held by data tables and data connections.
One suggestion I found was to put the database calls inside using blocks. But for me this would mean opening and closing both data connections many times during a run, which seems inelegant and likely to make the program run slower. Also, the program may evolve to the point where I will need to write data back to the source database as well.
The most straightforward way to structure my workflow is to keep both database connections open all the time. But this seems to be part of my problem.
Does anyone have suggestions as to program structure to help with memory usage?
Let's break this problem down a little.
Memory is being used to hold the data you retrieve from the database.
Clearing this data will fix your memory usage problem.
So the question is, how is your data stored?
Is it a list of objects? Is it in a dataset? Is it in some sort of ORM model?
If you can clear/flush/null/dispose of the DATA you should be good to go.
Instantiating a connection object inside a "using" statement only makes sure the connection object is disposed of; it doesn't explicitly clear any data you retrieved. (ORMs may dispose of the data held inside them, but not any copies you made by calling .ToList(), for example.)
With external resources like databases, the filesystem, etc., it is generally best to follow an acquire-late, release-early policy, and that is why you are probably getting feedback to open/close your connections as needed. But this won't help your memory issue, and it is completely fine to keep a database connection open for multiple commands.
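To make the clear-the-data point concrete, here is a minimal sketch under assumed names (SqlConnection/SqlBulkCopy from System.Data.SqlClient; the table, column names and calculation are placeholders). Rows are streamed from the source and the batch table is cleared after each write, so only one batch is ever resident in memory.

using (var source = new SqlConnection(sourceConnString))
using (var dest = new SqlConnection(destConnString))
using (var cmd = new SqlCommand("SELECT Id, Value FROM SourceTable", source))
{
    source.Open();
    dest.Open();

    using (var reader = cmd.ExecuteReader())
    using (var bulk = new SqlBulkCopy(dest) { DestinationTableName = "DestTable" })
    {
        var batch = new DataTable();
        batch.Columns.Add("Id", typeof(int));
        batch.Columns.Add("Result", typeof(decimal));

        while (reader.Read())
        {
            decimal result = reader.GetDecimal(1) * 2;  // placeholder for the real calculation
            batch.Rows.Add(reader.GetInt32(0), result);

            if (batch.Rows.Count >= 10000)
            {
                bulk.WriteToServer(batch);
                batch.Clear();  // drop the rows we no longer need
            }
        }
        if (batch.Rows.Count > 0)
            bulk.WriteToServer(batch);  // final partial batch
    }
}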
I am running a large ASP.net 4.0 website. It uses a popular .Net content management system, has thousands of content items, hundreds of concurrent users - is basically a heavy website.
Over the course of one day the memory usage of the IIS7 worker process can rise to 8-10 GB. The server has 16 GB installed and is currently set to recycle the app pool once per day.
I am getting pressured to reduce memory usage. Much of the memory usage is due to caching of large strings of data - but the cache interval is only set to 5-10 minutes - so these strings should eventually expire from memory.
However, after running RedGate Memory Profiler I can see what I think are memory leaks. I have filtered my Instance List results by objects that are "kept in memory exclusively by Disposed Objects" (I read on the RedGate forum that this is how you find memory leaks). This gave me a long list of strings that are being held in memory.
For each string I use Instance Retention Graph to see what holds it in memory. The System.string objects seem to have been cached at some point by System.Web.Caching.CacheDependency. If I follow the graph all the way up it goes through various other classes including System.Collections.Specialized.ListDictionary until it reaches System.Web.FileMonitor. This makes some sense as the strings are paths to a file (images / PDFs / etc).
It seems that the CMS is caching paths to files, but these cached objects are then "leaked". Over time this builds up and eats up RAM.
Sorry this is long winded... Is there a way for me to stop these memory leaks? Or to clear them down without resorting to recycling the app pool? Can I find what class / code is doing the caching to see if I can fix the leak?
It sounds like the very common problem of stuff being left in memory as part of session state. If that's the case, your only options are:
1. Don't put so much stuff in each user's session.
2. Set the session lifetime to something shorter (the default is 20 minutes, I think).
3. Periodically recycle the app pool.
As part of 1, I found that there are "good ways" and "bad ways" of presenting data in a data grid control. You may want to check that you are copying only the data you need and not accidentally maintaining references to the entire datagrid (see the sketch below).
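As a hedged illustration (allCustomers and the field names are hypothetical), store a small projection in session state rather than a reference to the full data source:

// Keep a small projection in session, not the whole data source.
// (With out-of-proc session state, use a small serializable DTO instead of an anonymous type.)
var summaries = allCustomers
    .Select(c => new { c.Id, c.Name })  // only the fields the next request needs
    .ToList();
Session["customerSummaries"] = summaries;
// Avoid: Session["customers"] = allCustomers;  // pins every row per user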
Is it possible to lazy load a related object during an open session but to still have the related object available after the session closes?
For example, we have a USER class and a related ROLE class. When we load a USER we also lazy load the related ROLE object. Can we have the USER and ROLE class fully loaded and available after the session is closed?
Is this functionality possible?
Short answer: no. You must initialize anything you will need after the session closes, before closing the session. The method to use to force loading a lazy proxy (without enumerating it) is NHibernateUtil.Initialize(USER.ROLES).
Long answer... kind of. It is possible to "reattach" objects to a new session, thereby allowing PersistentBags and other NH proxies to be initialized. The best method to use, given that you know the object exists in the DB but not in your new session, and that you haven't yet modified it, is Session.Lock(USER, LockMode.None). This will associate the object with the new session without telling NHibernate to do anything regarding reads or writes of the object.
HOWEVER, be advised that this is a code smell. If you are regularly reattaching objects to new sessions, it is a sign that you are not keeping sessions open long enough. There is no problem with opening one session per windows form, for instance, and keeping it open as long as the form is open, PROVIDED you close it when the window closes.
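A minimal sketch of both approaches, assuming a User entity with a lazy Roles collection and an available sessionFactory:

User user;
using (ISession session = sessionFactory.OpenSession())
{
    user = session.Get<User>(userId);
    NHibernateUtil.Initialize(user.Roles);  // force the lazy collection to load now
}
// user.Roles is safe to enumerate here, even though the session is closed.

// Alternatively, reattach later (the code smell described above):
using (ISession session = sessionFactory.OpenSession())
{
    session.Lock(user, LockMode.None);      // associate without triggering reads or writes
    NHibernateUtil.Initialize(user.Roles);  // now the proxy can be initialized
}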
If you're dealing with a one-to-one relationship (0-1 roles per user), then possibly the simplest option would be to configure it for eager fetching rather than lazy loading. Lazy loading is really geared towards one-to-many relationships, or objects that are particularly large and rarely needed. NH does a pretty fine job of optimizing queries to fetch the eager data efficiently in scenarios like that.
Only partly. Once the session is closed, any objects that were actually loaded while it was open remain in memory and you can access them without any problems; but touching a proxy or collection that was never initialized will throw a LazyInitializationException.
We have a web service that uses up more and more private bytes until that application stops responding. The managed heap (mostly Gen2) will show some 200-250 MB, while private bytes shows over 1GB. What are possible causes of a memory leak outside of the managed heap?
I've already checked for the following:
Prolific dynamic assemblies (Xml serialization, regex, etc.)
Session state (turned off)
System.Policy.Evidence memory leak (SP1 installed)
Threading deadlock (no use of Join, only lock)
Use of SQLOLEDB (using SqlClient)
What other sources can I check for?
Make sure your app is compiled in release mode. If you compile under debug mode and deploy that, simply instantiating a class that has an event defined (the event doesn't even need to be raised) will cause a small piece of memory to leak. Instantiating enough of these objects over a long enough period of time will use up all the memory. I've seen web apps exhaust their memory within a matter of hours simply because a debug build was deployed. Compiling as a release build immediately and permanently fixed the problem.
I would recommend you take snapshots of the heap at various times and see what's using up the memory. If your application is running on Java, then jmap works extremely well; you just give it the PID of the Java process.
If using something else, try Lambda Probe (http://www.lambdaprobe.org/d/index.htm). It doesn't show as much detail, but will at least show you memory use.
I had a bad memory leak in my JDBC code that ended up being traced to a change in the JDBC specification a few years ago that I had missed (with respect to closing statements and such). It took a combination of Lambda Probe and then jmap to localize the problem enough to fix it.
Also look for:
COM Assemblies being loaded
DB connections not being closed (see the sketch after this list)
Cache & State (Session, Application)
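For the unclosed-connections item, a minimal sketch (the provider, connection string and SQL are placeholders): using blocks return the connection to the pool even if an exception is thrown mid-read.

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT Name FROM Users", conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // ... consume the row ...
        }
    }  // reader, command and connection are all released here, success or failure
}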
Try forcing the garbage collector (GC) to run (write a page that triggers it when it loads), or try the instrumentation, but that's a bit hit and miss in my experience. Another thing would be to keep the app running and see whether it actually runs out of memory.
What could be happening is that there is plenty of memory and Windows does not signal your app to clean up. This causes the app to look like it's using more and more memory because it can, when in fact the system can reclaim the memory when it needs to. SQL Server and Exchange do this a lot: why force an unnecessary cleanup when there are plenty of resources?
Garbage collection may not run until a request for memory cannot be satisfied from the space already available. This can often make things look like a memory leak when there isn't one.
Do you have any events and event handlers within the service? Services often have static variables, and if you create event handlers that hook a static instance up to a non-static instance object, the static will hold a reference to that instance forever, which stops it from being released (sketched below).
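A minimal sketch of the pattern (all names hypothetical): the static event keeps every subscribed handler's target object alive until the handler is removed.

public static class ServiceEvents
{
    // Static event: its invocation list roots every subscriber.
    public static event EventHandler RequestProcessed;
}

public class RequestHandler
{
    public RequestHandler()
    {
        // The static event now references this instance; it can never be
        // collected until the handler is removed.
        ServiceEvents.RequestProcessed += OnRequestProcessed;
    }

    private void OnRequestProcessed(object sender, EventArgs e) { /* ... */ }

    public void Detach()
    {
        // Unsubscribe so the instance becomes collectable again.
        ServiceEvents.RequestProcessed -= OnRequestProcessed;
    }
}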
Double check that trace is not enabled. I've seen instances of trace slowly consuming memory until the app reaches its app pool limit.
For what it is worth, my issue was not with the service, but with the HttpClient that was calling it.
The client was not properly disposed, so it kept the connection open and the memory locked.
After disposing the client, the service released the memory as expected.
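As a hedged sketch (the URL is a placeholder; requires System.Net.Http): either dispose the client deterministically per call, or, for high-volume callers, reuse one shared instance to avoid socket and memory churn.

public static class ServiceCaller
{
    // One long-lived instance avoids per-call socket and memory churn.
    private static readonly HttpClient SharedClient = new HttpClient();

    public static string CallOnce()
    {
        // Per-call disposal also works for occasional use:
        using (var client = new HttpClient())
        {
            return client.GetStringAsync("https://example.com/service").Result;  // placeholder URL
        }
    }

    public static string CallShared()
    {
        return SharedClient.GetStringAsync("https://example.com/service").Result;  // placeholder URL
    }
}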