We've run into a resource management problem that we've been struggling with for several weeks, and while we finally have a solution, it still seems odd to me.
We have a significant amount of interop code developed against a legacy system that exposes a C API. One of the many peculiarities of this system is that, for reasons unknown, the "environment", which appears to be process-scoped, must be initialized before the API is consumed. However, it can only be initialized once, and it must be "shut down" once you're finished with it.
We were originally using a singleton pattern to accomplish this, but since we're consuming this system inside an IIS-hosted web service, our AppDomain will occasionally be recycled, leading to "orphaned" environments that leak memory. Since finalization and (apparently) even IIS recycling are non-deterministic and hard to detect in all cases, we've switched to a disposal + ref-counting pattern that seems to work well. However, doing reference counting manually feels weird and I'm sure there's a better approach.
Any thoughts on managing a static global disposable resource in an environment like this?
Here's the rough structure of the environment management:
public class FooEnvironment : IDisposable
{
    private bool _disposed;
    private static int _referenceCount;
    private static readonly object InitializationLock = new object();

    public FooEnvironment()
    {
        lock (InitializationLock)
        {
            if (_referenceCount == 0)
            {
                SafeNativeMethods.InitFoo();
            }
            // every instance adds a reference, so nested scopes balance Dispose
            _referenceCount++;
        }
    }

    public void Dispose()
    {
        if (_disposed)
            return;

        lock (InitializationLock)
        {
            _referenceCount--;
            if (_referenceCount == 0)
            {
                SafeNativeMethods.TermFoo();
            }
        }
        _disposed = true;
    }
}
public class FooItem
{
    public void DoSomething()
    {
        using (new FooEnvironment())
        {
            // environment is now initialized (count == 1)
            NativeMethods.DoSomething();

            // superfluous here but for our purposes...
            using (new FooEnvironment())
            {
                // environment is initialized (count == 2)
                NativeMethods.DoSomethingElse();
            }
            // environment is still initialized (count == 1)
        }
        // environment is unloaded
    }
}
I'm jumping in feet first here as there are a lot of unknowns about your particular code base, but I'm wondering if there is any mileage in a session-based approach. You could have a (thread-safe) session factory singleton that is responsible for ensuring only one environment is initialised and that the environment is disposed appropriately, by binding it to events on the ASP.NET AppDomain and/or similar. You would need to bake this session model into your API so that all clients first establish a session before making any calls. Apologies for the vagueness of this answer; if you can provide some example code, perhaps I could give a more specific/detailed answer.
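To make the suggestion above a little more concrete, here is a rough sketch of what I mean, assuming a `SafeNativeMethods` wrapper like yours (the factory and session names are mine):

```csharp
// Hypothetical sketch: a session factory that owns the single native
// environment and hands out reference-counted sessions.
public sealed class FooSessionFactory
{
    private static readonly Lazy<FooSessionFactory> _instance =
        new Lazy<FooSessionFactory>(() => new FooSessionFactory());

    public static FooSessionFactory Instance => _instance.Value;

    private readonly object _gate = new object();
    private int _sessionCount;

    private FooSessionFactory()
    {
        // Tear the environment down when the AppDomain goes away,
        // which covers (most) IIS recycles.
        AppDomain.CurrentDomain.DomainUnload += (s, e) => ShutdownIfNeeded();
    }

    public FooSession OpenSession()
    {
        lock (_gate)
        {
            if (_sessionCount == 0)
                SafeNativeMethods.InitFoo(); // assumed from your code
            _sessionCount++;
            return new FooSession(this);
        }
    }

    internal void CloseSession()
    {
        lock (_gate)
        {
            if (--_sessionCount == 0)
                SafeNativeMethods.TermFoo();
        }
    }

    private void ShutdownIfNeeded()
    {
        lock (_gate)
        {
            if (_sessionCount > 0)
            {
                _sessionCount = 0;
                SafeNativeMethods.TermFoo();
            }
        }
    }
}

public sealed class FooSession : IDisposable
{
    private FooSessionFactory _factory;
    internal FooSession(FooSessionFactory factory) { _factory = factory; }

    public void Dispose()
    {
        _factory?.CloseSession();
        _factory = null;
    }
}
```

Clients would call `FooSessionFactory.Instance.OpenSession()` in a `using` block instead of constructing the environment directly, which keeps the ref counting in one place.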
One approach you might want to consider is to create an isolated AppDomain for your unmanaged component. In this way it won't be orphaned when an IIS-hosted AppDomain is recycled.
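For illustration, the shape of that approach might look like the following (untested sketch; `FooHost` and the domain name are mine). The type crossing the domain boundary must derive from `MarshalByRefObject`:

```csharp
// Host the native interop in its own AppDomain so its lifetime is
// independent of the main application domain being recycled.
public class FooHost : MarshalByRefObject
{
    public void DoSomething() => NativeMethods.DoSomething();
}

// In the web application:
var domain = AppDomain.CreateDomain("FooEnvironmentDomain");
var host = (FooHost)domain.CreateInstanceAndUnwrap(
    typeof(FooHost).Assembly.FullName,
    typeof(FooHost).FullName);
host.DoSomething();

// Later, when truly finished, unloading the domain gives you a
// deterministic point to tear the environment down:
AppDomain.Unload(domain);
```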
I am creating an ASP.NET MVC web application. It has service classes to execute business logic, and it accesses data through Entity Framework.
I want to change some business logic based on application variables. These are global variables, loaded from the app config, and they don't change after the initial loading.
public class BroadcastService : IBroadcastService
{
    private static readonly ILog Logger = LogProvider.GetCurrentLogger();
    private readonly IUnitOfWork _worker;
    private readonly IGlobalService _globalService;

    public BroadcastService(IUnitOfWork worker, IGlobalService globalService)
    {
        _worker = worker;
        _globalService = globalService;
    }

    public IEnumerable<ListItemModel> GetBroadcastGroups()
    {
        IEnumerable<ListItemModel> broadcastGroups = null;
        if (Global.EnableMultiTenant)
        {
            // load data for all tenants
        }
        else
        {
            // load data for current tenant only
        }
        return broadcastGroups ?? new List<ListItemModel>();
    }
    ...
}

public static class Global
{
    public static bool EnableMultiTenant { get; set; }
}
For example, EnableMultiTenant holds whether or not the application is running in multi-tenant mode.
My concerns are:
Is it okay to use a static global variable class to hold those values?
This application is hosted on Azure App Service with load balancing. Is there any effect when running multiple instances, or when the app pool restarts?
To answer your question as to whether it is 'okay' to do this, I think that comes down to you.
I think the biggest thing to know is when that data is going to get refreshed. From experience I believe that static information gets stored in the application pool, so if it is restarted then the information will be refreshed.
Lifetime of ASP.NET Static Variable
Consider how many times you need that information. If you only need it once at startup, is it worth having it as a static? If you are reading that information a lot (and, say, it is stored in a database), then it may be sensible to store it in a cache somewhere, such as a static member.
My only recommendation with static member variables in ASP.NET is to keep them simple; booleans seem fine to me. Remember that users share the same application, meaning that static variables are global for all users. If you want a user-specific variable, then you want to use the session cache.
Always remember the two hardest things in programming:
Naming things
Cache invalidation
Off by one errors
https://martinfowler.com/bliki/TwoHardThings.html
Even though this is a joke, it holds a lot of truth
Hope this helps
This is thread safe if you initialize these values once and then only read from them. It is also safe in the presence of multiple worker processes and restarts because the multiple processes don't share variables.
As an alternative, consider creating an instance of a class holding your settings:
class MySettings
{
    public bool IsEnabled { get; set; }
}
Then you can use dependency injection to inject a singleton instance of this class into your code. This makes it easier to test and makes the code more uniform.
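As a hedged sketch of the wiring (this assumes ASP.NET Core's built-in container and a config key named `EnableMultiTenant`; adjust for your actual stack):

```csharp
// At startup: read the values once and register the instance as a singleton.
var settings = new MySettings
{
    IsEnabled = bool.Parse(builder.Configuration["EnableMultiTenant"] ?? "false")
};
builder.Services.AddSingleton(settings);

// Consumers then take MySettings as a constructor dependency instead of
// reaching for a static Global class:
public class BroadcastService : IBroadcastService
{
    private readonly MySettings _settings;
    public BroadcastService(MySettings settings) => _settings = settings;
}
```

Because each load-balanced instance builds its own copy from the same config source, the values stay consistent across instances as long as the config does.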
I have a question, I hope you can help me, Thank you in advance.
I am working on a project, a web application hosted in IIS. It has a login for users, but the login must allow only one user to log in at a time: if two users try to access the site at the same time, only one should get access while the other waits until the first is logged in. I thought of using threads, with a lock statement in the sign-in validation, but I don't know if it is good practice to use threads in this scenario, since multiple users may try to log in at the same time and only one must get access at a time. I also need a log of the users in the order they accessed the site, to verify that two users did not access it at the same time.
Is multithreading a good practice or recommendation for making this?
Any suggestions? Thank you so much.
First off, when using threads it's good practice to avoid anything that will block a thread, if at all possible.
You could use a lock, which would cause incoming threads to block until the first thread has completed the login process, although I can't see how this would help in understanding multithreading. It will only teach you how to block threads, which you should try to avoid at all costs; threads are expensive.
IMHO you should never have more threads than CPU cores. Use the thread pool, and understand the difference between compute-bound and I/O-bound work. I say again: threads are expensive, in both time and memory.
Well, this solution is not so much about multithreading, but I would do something like this:
public class SingleUserLock : IDisposable
{
    private SingleUserSemaphore _parent;

    public SingleUserLock(SingleUserSemaphore parent)
    {
        _parent = parent;
    }

    public bool IsLoggedIn => _parent?.CurrentUser == this;

    public void Unlock()
    {
        _parent?.Unlock();
        _parent = null;
    }

    public void Dispose()
    {
        Unlock();
    }
}
public class SingleUserSemaphore
{
    // 0 = free, 1 = taken. Interlocked is used instead of Monitor because
    // the lock may be released from a different request thread than the
    // one that acquired it (Monitor.Exit from another thread would throw
    // a SynchronizationLockException).
    private int _locked;

    public SingleUserLock CurrentUser { get; private set; }

    public bool TryLogin()
    {
        if (Interlocked.CompareExchange(ref _locked, 1, 0) == 0)
        {
            CurrentUser = new SingleUserLock(this);
            return true;
        }
        return false;
    }

    public void Unlock()
    {
        CurrentUser = null;
        Interlocked.Exchange(ref _locked, 0);
    }
}
Register an instance of SingleUserSemaphore as a singleton in your dependency injection framework for the web application. Every time a user logs in, you get the singleton SingleUserSemaphore instance and call TryLogin. If it returns true, you can then store the SingleUserLock in the Session, if possible.
For every request, check the session for IsLoggedIn == true.
When the user logs out, you call the returned SingleUserLock.Unlock() from the session, or directly SingleUserSemaphore.Unlock().
Now the challenge will be: if the user never logs out, your web application will be locked forever. To avoid this you could add an Update method on SingleUserSemaphore that records a timestamp for every request made by the logged-in user. Then, when a user logs in, you also check for last activity...
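A sketch of that activity-timeout idea (the class name and the 20-minute timeout are mine, not from your requirements):

```csharp
// Record the last request time and treat a stale lock as expired,
// so an abandoned session cannot lock the application forever.
public class ExpiringSingleUserLock
{
    private readonly TimeSpan _timeout = TimeSpan.FromMinutes(20);
    private readonly object _gate = new object();
    private DateTime _lastActivity;
    private bool _taken;

    public bool TryLogin()
    {
        lock (_gate)
        {
            // A lock whose owner has gone quiet is considered released.
            if (_taken && DateTime.UtcNow - _lastActivity < _timeout)
                return false;
            _taken = true;
            _lastActivity = DateTime.UtcNow;
            return true;
        }
    }

    // Call on every request made by the current owner.
    public void Touch()
    {
        lock (_gate) { _lastActivity = DateTime.UtcNow; }
    }

    public void Unlock()
    {
        lock (_gate) { _taken = false; }
    }
}
```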
Good luck with your homework.
We have an MVC application with traffic of about 800 users per day. Lately we have observed that our app pool is getting stopped on its own. Going through the logs, we found an OutOfMemoryException. We were not able to figure out why this might be happening, so we did a code review. During the review we found that we have static classes and static/extension methods. We don't have any static variables, and we are using `using` blocks to dispose the DbContext.
So is it possible that our static classes/static methods are the reason for the memory issues?
How are instances created inside static methods and classes disposed? Are they collected by the GC?
Please suggest what more we can do to figure out the issue.
EDIT
Sorry for not sharing any code.
I want to understand the lifecycle of a static class in a web application. Can static classes create problems if I am doing complex operations that take memory?
For example, if I translate my domain model to a view model inside my static class like so:
public static class PersonTranslator
{
    public static PersonVM ToViewModel(this Person p)
    {
        return new PersonVM
        {
            Name = p.Name,
            //etc...
            //lots of properties here
        };
    }
}
Is this good practice, or should I just use normal classes rather than extension methods? Can code like this create issues?
Thanks
EDIT 2:
Our db context is implemented in a base class, and all the data access classes derive from it. I think (and I may be wrong) that something is wrong here.
public class DataAccessBase : IDisposable
{
    protected ApplicationDataContext dataContext = null;

    public DataAccessBase()
    {
        dataContext = new ApplicationDataContext();
    }

    public DataAccessBase(ApplicationDataContext dataContext)
    {
        this.dataContext = dataContext ?? new ApplicationDataContext();
    }

    ~DataAccessBase()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    // The bulk of the clean-up code is implemented in Dispose(bool)
    protected virtual void Dispose(bool disposing)
    {
        if (disposing)
        {
            // free managed resources: the context is a managed object, so
            // it should only be disposed here, never from the finalizer
            if (dataContext != null)
            {
                dataContext.Dispose();
                dataContext = null;
            }
        }
        // no unmanaged resources are held directly, so nothing else to do
    }
}
Frankly it could be anything:
How much RAM does your server have?
How many other sites are running on the server?
How extensively are you using caching?
What kind of session storage are you using and what are you allowing users to store in a session?
At this point, I'd suggest profiling your application with the Performance Wizard. Typically, though, you would have done this prior to production and instrumented your application, because how else can you decide what size server/VM your app needs?
.NET memory allocation gives you insight into the memory management of
your application because it analyzes every object in memory from
creation to garbage collection. The monitor can work in two different
ways. The first and less impactful is through sampling. It can also
take a much deeper look through instrumentation where code is added
into the binary to keep track of memory work.
You could also consider using Performance Counters and perfmon on the server. At the very least you should be getting alerts prior to the app pool being dropped, e.g. at 80% of capacity.
It might even be that there is no "code" problem. You may have just underestimated what kind of resources are required to run your site (factoring in growth patterns), because as above, without performance profiling pre-production it could only have been a guess as to what you actually need.
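As a cheap first measurement before setting up full profiling, you can log the worker process's memory figures yourself (a minimal sketch using standard System.Diagnostics APIs; where and how often you log is up to you):

```csharp
using System;
using System.Diagnostics;

// Snapshot the process's private bytes and the managed GC heap size.
// A private-bytes figure that keeps growing while the GC heap stays
// flat points at unmanaged/undisposed resources rather than your POCOs.
var proc = Process.GetCurrentProcess();
Console.WriteLine($"Private bytes: {proc.PrivateMemorySize64 / (1024 * 1024)} MB");
Console.WriteLine($"GC heap:       {GC.GetTotalMemory(false) / (1024 * 1024)} MB");
```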
What are the ramifications of not calling .Dispose() on an OrganizationServiceProxy object?
Sometimes, during testing, code crashes before the object can be disposed; does this mean that a service channel is left open for all eternity?
I have the same question about OrganizationServiceContext, which I had not been disposing until reading this today.
/* Synchronizes with CRM */
public class CRMSync
{
    [ThreadStatic] // ThreadStatic ensures that each thread gets its own copy of these fields
    private static OrganizationServiceProxy service;

    [ThreadStatic]
    private static Context linq;

    /* Tries to connect to CRM and returns false on failure - credentials arguments */
    private bool Connect(string username = @"username", string password = "password", string uri = @"orgUrl/XRMServices/2011/Organization.svc")
    {
        try
        {
            var cred = new ClientCredentials();
            cred.UserName.UserName = username;
            cred.UserName.Password = password;
            service = new OrganizationServiceProxy(new Uri(uri), null, cred, null);
            service.EnableProxyTypes(); // this has to happen to allow LINQ early-bound queries
            linq = new Context(service);
            var who = new Microsoft.Crm.Sdk.Messages.WhoAmIRequest(); // used to test the connection
            var whoResponse = (Microsoft.Crm.Sdk.Messages.WhoAmIResponse)service.Execute(who); // this fails if not connected
        }
        catch (Exception e)
        {
            Log(e.Message); // Write to Event Log
            return false;
        }
        return true;
    }
}
Is there another way to use the same OrganizationServiceContext and OrganizationServiceProxy in multiple methods?
I plan to use this destructor to dispose the OrganizationServiceProxy and OrganizationServiceContext:
~CRMSync()
{
    if (service != null)
        service.Dispose();
    if (linq != null)
        linq.Dispose();
}
EDIT
This is the method that is called by the service OnStart
/* Called by CRMAUX.OnStart when it is time to start the service */
public async void Start()
{
    this.ProcessCSVFiles(); // Creates a ThreadPool thread that processes some CSV files
    this.ProcessCases(); // Imports cases into CRM from a db (on this thread)
    var freq = 0;
    ConfigurationManager.RefreshSection("appSettings");
    var parse = int.TryParse(ConfigurationManager.AppSettings["Frequency"], out freq);
    await System.Threading.Tasks.Task.Delay(parse ? freq * 1000 * 60 : 15000 * 60); // user-defined, or 15 minutes by default
    Start(); // Start again after the wait above
}
This is the Windows service
public partial class CRMAUX : ServiceBase
{
    private CRMSync crmSync;

    public CRMAUX()
    {
        InitializeComponent();
    }

    protected override void OnStart(string[] args)
    {
        ConfigurationManager.RefreshSection("userSettings"); // Get the current config file so that the cached one is not used
        if (TestConfigurationFile())
        {
            crmSync = new CRMSync();
            Thread main = new Thread(crmSync.Start);
            main.IsBackground = true;
            main.Start();
        }
        else // The configuration file is bad
        {
            Stop(); // inherited from ServiceBase
            return;
        }
    }

    protected override void OnStop()
    {
    }

    /* Checks the configuration file for the necessary keys */
    private bool TestConfigurationFile()...
}
The OrganizationServiceProxy is a wrapper around a WCF Channel which utilises unmanaged resources (sockets etc.).
A class (our proxy) that implements IDisposable is essentially stating that it will be accessing unmanaged resources and you should therefore explicitly tell it when you're finished with it rather than just allowing it to go out of scope. This will allow it to release the handles to those resources and free them up for use elsewhere. Unfortunately our code isn't the only thing running on the server!
Unmanaged resources are finite and expensive (SQL connections are the classic example). If your code executes correctly but you don't explicitly call dispose then the clean up of those resources will be non-deterministic which is a fancy way of saying the garbage collector will only call dispose on those managed objects "eventually", which will as stated in turn clean up the unmanaged resources they're holding onto. This will hurt scalability of your application and any other services running on the same hardware that might be in contention with you for those resources. That's the best case scenario, if an exception occurs at any point in the stack subsequent to those resources being acquired they will not be released, ergo a memory leak and fewer resources available for use elsewhere.
Wrapping your code in a using statement is syntactic sugar as this compiles down to the proxy being wrapped in a try/finally with the dispose being called in the finally.
In terms of using the proxy/context across multiple methods you should take a look at the Unit of Work pattern. The OrganizationServiceContext is exactly that, something that you apply changes to over the course of a request (likely across multiple method calls) and then submit to the datastore (CRM) at the end when done, in our case using context.SaveChanges().
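A hedged sketch of that unit-of-work shape (the proxy and context types are from the CRM SDK; the `orgUri` and `credentials` values are placeholders):

```csharp
// Stacked using statements ensure both objects are disposed, even if an
// exception is thrown anywhere inside the block.
using (var proxy = new OrganizationServiceProxy(new Uri(orgUri), null, credentials, null))
using (var context = new OrganizationServiceContext(proxy))
{
    // ... apply changes across several method calls ...

    context.SaveChanges(); // submit the unit of work to CRM at the end
}
// proxy and context are both released here
```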
Where are you using this code? I'm curious to know what you're looking to achieve with the [ThreadStatic] attribute. If it's within an IIS-hosted application, I don't think you'll see any benefit, as you don't manage the thread pool, so the proxy still only has a lifetime matching the HttpRequest. If this is the case, there are several better ways of managing the lifetime of these objects, dependency injection frameworks with an HttpRequest lifetime behaviour being the obvious one.
If your app crashes, the operating system will automatically reclaim all your resources, i.e. close all network ports, files etc. So there's nothing kept open forever. Of course, on the server side something unexpected can happen if it is not handled properly and the app crashes in the middle of a request. But that's what transactions are for, such that the state of the server data is always consistent.
I want to create a singleton that remains alive for the life of the app pool, using HttpContext.Current.Cache.
Where would I create the class, and how should it be implemented? I understand how to implement a singleton, but I am not too familiar with threading or HttpContext.Current.Cache.
Thanks!
It doesn't matter where you put the singleton code.
As soon as you access the instance and the type is initialized, it will remain in memory for the entire life of your ApplicationDomain. So use it as a normal class and the rest is done on first use.
Perhaps you are over-complicating the issue? I'm not sure why you need to use the cache. Could you not just add a file to the App_Code folder to house your class, e.g. "mSingleton.cs":
public sealed class mSingleton
{
    static readonly mSingleton _instance = new mSingleton();

    public int MyVal { get; set; }

    public static mSingleton Instance
    {
        get { return _instance; }
    }

    private mSingleton()
    {
        // Initialize members, etc. here.
    }
}
Then it is global to all your code and pages, and it maintains state until the app pool recycles or there is an app rebuild (I don't know whether a rebuild causes the app pool to recycle as well; if it does, then it suits your criteria anyway). It doesn't need to be added to any cache, application, or session variables, so there's no messy handling.
You can do this in Page_Load in any .aspx.cs file and refresh the page to see the count go up each time, proving that state is maintained:
mSingleton getMyObj = mSingleton.Instance;
getMyObj.MyVal++;
I'd not use the cache for this. I'd recommend a static class, or a singleton with a static GetInstance().
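For completeness, a minimal thread-safe variant of that GetInstance() shape using Lazy&lt;T&gt; (the class name is illustrative). Lazy&lt;T&gt;'s default mode guarantees the instance is created exactly once even under concurrent first access:

```csharp
public sealed class AppPoolSingleton
{
    private static readonly Lazy<AppPoolSingleton> _lazy =
        new Lazy<AppPoolSingleton>(() => new AppPoolSingleton());

    public static AppPoolSingleton GetInstance() => _lazy.Value;

    public int MyVal { get; set; }

    private AppPoolSingleton() { }
}
```

Like the static-field version, this lives for the lifetime of the AppDomain, so it is reset whenever the app pool recycles.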