I have the following scenario:
Let's say we have two different web parts operating on the same data - one is a pie chart, the other is a data table.
In their Page_Load they asynchronously load data from the database and, once loaded, place it in the application cache for further use, or for use by other web parts. So each of the web parts has code similar to this:
protected void Page_Load(object sender, EventArgs e)
{
    if (Cache["dtMine" + "_" + Session["UserID"].ToString()] == null)
    {
        // ... (args is prepared here)
        Page.RegisterAsyncTask(new PageAsyncTask(
            new BeginEventHandler(BeginGetByUserID),
            new EndEventHandler(EndGetByUserID),
            null, args, true));
    }
    else
    {
        // get the data from the cache and bind it to the web part's controls
    }
}
Since both web parts operate on the same data, it does not make sense to me to execute the code twice.
What is the best approach to have one web part tell the other, "I am already fetching the data, so just wait until I place it in the cache"?
I have been considering a mutex, a lock, assigning a temporary value to the cache item and waiting until that temporary value changes... many options - which one should I use?
You will want to take advantage of the lock keyword to make sure that the data is loaded and added to the cache in an atomic manner.
Update:
I modified the example to hold the lock on the Cache for as short a time as possible. Instead of storing the data directly in the cache, a proxy is stored instead. The proxy is created and added to the cache in an atomic manner, and then uses its own locking to make sure that the data is only loaded once.
protected void Page_Load(object sender, EventArgs e)
{
    string key = "dtMine" + "_" + Session["UserID"].ToString();

    DataProxy proxy = null;

    lock (Cache)
    {
        proxy = (DataProxy)Cache[key]; // the Cache indexer returns object, so cast
        if (proxy == null)
        {
            proxy = new DataProxy();
            Cache[key] = proxy;
        }
    }

    object data = proxy.GetData();
}
private class DataProxy
{
    private object data = null;

    public object GetData()
    {
        lock (this)
        {
            if (data == null)
            {
                data = LoadData(); // This is what actually loads the data.
            }
            return data;
        }
    }
}
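For what it's worth, on .NET 4 and later the hand-rolled proxy can be replaced by the built-in Lazy<T>, which gives the same load-once guarantee. A minimal sketch under that assumption (LoadData is the same loader as above; LazyThreadSafetyMode lives in System.Threading):

string key = "dtMine" + "_" + Session["UserID"].ToString();
Lazy<object> lazyData;

lock (Cache)
{
    lazyData = (Lazy<object>)Cache[key];
    if (lazyData == null)
    {
        // ExecutionAndPublication guarantees LoadData runs exactly once.
        lazyData = new Lazy<object>(LoadData, LazyThreadSafetyMode.ExecutionAndPublication);
        Cache[key] = lazyData;
    }
}

object data = lazyData.Value; // the first caller loads; the rest block, then reuse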
Why not load the data and put it in the cache in Application_Start in Global.asax? Then no lock would be needed, since locking the Cache is a serious thing.
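A minimal sketch of that idea, assuming the data is shared rather than per-user (the original cache key includes the UserID, so this only fits global data; LoadData is a hypothetical loader):

// Global.asax - sketch only; no lock is needed because no requests run yet.
void Application_Start(object sender, EventArgs e)
{
    HttpRuntime.Cache.Insert("dtShared", LoadData()); // LoadData is hypothetical
}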
You could use a mutex around the test Cache["key"] == null:
The first thread acquires the lock, tests, sees that there's nothing in the cache and goes off to fetch the data. The second thread has to wait for the first to release the mutex. Once the second thread gets the mutex, it tests, sees the data is there, and continues.
But this would block the thread that is running the Page_Load() method - probably a bad thing.
Perhaps a better solution would be to also test whether the PageAsyncTask that fetches the data has already been started. If not, start it. If so, you shouldn't start another, so you may want to register your own event handler to catch when it completes...
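For what it's worth, here is a sketch of the plain-mutex variant described above (names are illustrative; LoadData stands in for a synchronous fetch, which is exactly the blocking drawback just mentioned):

private static readonly Mutex cacheMutex = new Mutex();

protected void Page_Load(object sender, EventArgs e)
{
    string key = "dtMine" + "_" + Session["UserID"].ToString();
    cacheMutex.WaitOne(); // the second web part blocks here until the first is done
    try
    {
        if (Cache[key] == null)
            Cache[key] = LoadData(); // hypothetical synchronous loader
    }
    finally
    {
        cacheMutex.ReleaseMutex();
    }
    // bind Cache[key] to the web part's controls
}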
Short question
How can I lock my entity so that only one operation by only one user can be performed on it at a time in an MVC project?
Long question
I have an MVC project where I want my action methods to be [SessionState(SessionStateBehavior.ReadOnly)]. But when I do this, users can execute other action methods even before a long-running action method has completed. As I have a lot of calculations, and the action methods have to be executed in a predefined order, executing another action method before one ends creates lots of problems. To give an example, I have a main entity called Report, and I have to somehow ensure that one report undergoes only one operation by only one user at a time. So I have to lock my Report. Even if I do not use [SessionState(SessionStateBehavior.ReadOnly)], I have to lock the report so that multiple users do not edit the same report at the same time, and for other specific reasons. Currently I am writing this information to the database, roughly something like:
ReportId
LockedUserId
IsInProcess
I have to set IsInProcess to true every time before an operation begins and reset it to false after the operation completes. As I have lots of action methods, I created an ActionFilter, something like this:
public class ManageReportLockAttribute
    : FilterAttribute, IActionFilter
{
    public ManageReportLockAttribute()
    {
    }

    public void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // ...
        ReportLockInfo lockInfo = GetFromDatabase(reportId);
        if (lockInfo.IsInProcess)
        {
            RedirectToInformationView();
            return; // do not fall through and overwrite the lock
        }
        lockInfo.IsInProcess = true;
        SaveToDatabase(lockInfo);
        // ...
    }

    public void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // ...
        ReportLockInfo lockInfo = GetFromDatabase(reportId);
        lockInfo.IsInProcess = false;
        SaveToDatabase(lockInfo);
        // ...
    }
}
It works, for the most part, but it has some strange problems (see this question for more info).
My question is: how can I achieve the same functionality (locking a report) in a different, more acceptable way?
It feels similar to locking in multithreading, but it is not exactly the same, IMO.
Sorry for the long, broad and awkward question, but I want a direction to follow. Thanks in advance.
One reason why OnActionExecuted is not called even though OnActionExecuting runs as expected is that an unhandled exception occurs in OnActionExecuting. Especially when dealing with the database, there are various reasons that could lead to an exception, e.g.:
User1 starts the process and locks the entity.
User2 also wants to start the process before User1 has saved the change. So the check of IsInProcess does not lead to the redirection, and User2 also tries to save the lock. In this case, a concurrency violation should occur, because User1 has saved the entity in the meantime.
To illustrate the process over time (C is the check whether IsInProcess is set, S is SaveChanges), first a good case:

User1  C--S
User2       C--S     (the check happens after User1's save: no problem)

Now a bad case:

User1  C--S
User2    C----S      (the check happens after User1's check, but before User1's SaveChanges becomes effective ==> concurrency violation)
As the example shows, it is critical to make sure that only one user can place the lock. There are several ways to handle this. In all cases, make sure that there are as few reasons for exceptions in OnActionExecuting as possible, and handle and log the exceptions that do occur.
Please note that all synchronisation methods will have a negative impact on the performance of your application. So if you haven't already thought about whether you could avoid locking the report altogether by restructuring your actions or the data model, that would be the first thing to do.
Easy approach: thread synchronisation
An easy approach is to use thread synchronisation. This approach will only work if the application runs in a single process and not in a web farm/the cloud. You need to decide whether you will be able to change the application if it will be installed in a farm at a later point in time. This sample shows an easy approach (that uses a static object for locking):
public class ManageReportLockAttribute
    : FilterAttribute, IActionFilter
{
    private static readonly object lockObj = new object();

    // ...

    public void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // ...
        ReportLockInfo lockInfo = GetFromDatabase(reportId);
        if (lockInfo.IsInProcess)
        {
            RedirectToInformationView();
            return;
        }
        lock (lockObj)
        {
            // read anew just in case the lock was set in the meantime
            // A new context should be used.
            lockInfo = GetFromDatabase(reportId);
            if (lockInfo.IsInProcess)
            {
                RedirectToInformationView();
                return;
            }
            lockInfo.IsInProcess = true;
            SaveToDatabase(lockInfo);
            // ...
        }
    }

    public void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // ...
        lock (lockObj)
        {
            ReportLockInfo lockInfo = GetFromDatabase(reportId);
            if (lockInfo.IsInProcess) // check whether the lock was released in the meantime
            {
                lockInfo.IsInProcess = false;
                SaveToDatabase(lockInfo);
            }
            // ...
        }
    }
}
For details on using lock, see this link. If you need more control, have a look at the overview of thread synchronization with C#. A named mutex is an alternative that provides locking at a finer granularity (for example, one lock per resource name).
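For illustration, the named-mutex alternative might look like the sketch below; it, too, can be scoped per report via its name. The name prefix is made up, and note that a named mutex only spans threads and processes on one machine, not a web farm:

// Sketch: one named mutex per report; everyone using the same name
// contends for the same OS-level lock.
using (var mutex = new Mutex(false, "ReportLock_" + reportId))
{
    mutex.WaitOne();
    try
    {
        // check IsInProcess and set the lock here
    }
    finally
    {
        mutex.ReleaseMutex();
    }
}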
If you want to lock on the reportId instead of on a static object, you need a lock object that is the same for the same reportId. A dictionary can store the lock objects:

private static readonly IDictionary<int, object> lockObjectsByReportId = new Dictionary<int, object>();

private static object GetLockObjectByReportId(int reportId)
{
    object lockObjByReportId;
    if (lockObjectsByReportId.TryGetValue(reportId, out lockObjByReportId))
        return lockObjByReportId;

    lock (lockObj) // use the global lock for this short operation
    {
        if (lockObjectsByReportId.TryGetValue(reportId, out lockObjByReportId))
            return lockObjByReportId;

        lockObjByReportId = new object();
        lockObjectsByReportId.Add(reportId, lockObjByReportId);
        return lockObjByReportId;
    }
}
Instead of using lockObj in OnActionExecuting and OnActionExecuted, you'd use the function:
// ...
lock (GetLockObjectByReportId(reportId))
{
    // ...
}
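As an aside, the first, unlocked TryGetValue above is only safe while no other thread is mutating the plain Dictionary at that moment. On .NET 4 and later, a ConcurrentDictionary (from System.Collections.Concurrent) avoids the issue entirely; a minimal sketch:

private static readonly ConcurrentDictionary<int, object> lockObjectsByReportId =
    new ConcurrentDictionary<int, object>();

private static object GetLockObjectByReportId(int reportId)
{
    // GetOrAdd is thread-safe: all callers for the same reportId get the same object.
    return lockObjectsByReportId.GetOrAdd(reportId, _ => new object());
}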
Database approach: Transactions and isolation levels
Another way to handle this is to use database transactions and isolation levels. This approach also works in a multi-server environment. In this case, you would not use Entity Framework for database access but move the code into a stored procedure that runs on the database server. By running the stored procedure in a transaction and picking the right isolation level, you avoid one user reading the data while another is changing it.
This link shows an overview of isolation levels for SQL Server.
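A hedged sketch of the same idea without a stored procedure: a single UPDATE acts as an atomic test-and-set on the database server, which also works across multiple web servers. The column names follow the question, but the table name ReportLock and the method name are made up:

// Sketch: the WHERE clause makes the test and the set one atomic operation.
// Requires System.Data.SqlClient.
private bool TryAcquireReportLock(string connectionString, int reportId, int userId)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        @"UPDATE ReportLock
          SET IsInProcess = 1, LockedUserId = @userId
          WHERE ReportId = @reportId AND IsInProcess = 0", conn))
    {
        cmd.Parameters.AddWithValue("@reportId", reportId);
        cmd.Parameters.AddWithValue("@userId", userId);
        conn.Open();
        return cmd.ExecuteNonQuery() == 1; // exactly one row updated = lock acquired
    }
}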
What are the ramifications of not calling .Dispose() on an OrganizationServiceProxy object?
Sometimes, during testing, code crashes before the object can be disposed; does this mean that a service channel is left open for all eternity?
I have the same question about OrganizationServiceContext, which I had not been disposing until reading this today.
/* Synchronizes with CRM */
public class CRMSync
{
    [ThreadStatic] // ThreadStatic ensures that each thread gets its own copy of these fields
    private static OrganizationServiceProxy service;

    [ThreadStatic]
    private static Context linq;

    /* Tries to connect to CRM; returns false on failure - credentials arguments */
    private bool Connect(string username = @"username", string password = "password", string uri = @"orgUrl/XRMServices/2011/Organization.svc")
    {
        try
        {
            var cred = new ClientCredentials();
            cred.UserName.UserName = username;
            cred.UserName.Password = password;
            service = new OrganizationServiceProxy(new Uri(uri), null, cred, null);
            service.EnableProxyTypes(); // this has to happen to allow LINQ early-bound queries
            linq = new Context(service);
            var who = new Microsoft.Crm.Sdk.Messages.WhoAmIRequest(); // used to test the connection
            var whoResponse = (Microsoft.Crm.Sdk.Messages.WhoAmIResponse)service.Execute(who); // this fails if not connected
        }
        catch (Exception e)
        {
            Log(e.Message); // Write to Event Log
            return false;
        }
        return true;
    }
}
Is there another way to use the same OrganizationServiceContext and OrganizationServiceProxy in multiple methods?
I plan to use this destructor to dispose the OrganizationServiceProxy and OrganizationServiceContext:
~CRMSync()
{
    if (service != null)
        service.Dispose();
    if (linq != null)
        linq.Dispose();
}
EDIT
This is the method that is called by the service's OnStart:
/* Called by CRMAUX.OnStart when it is time to start the service */
public async void Start()
{
    this.ProcessCSVFiles(); // Creates a ThreadPool thread that processes some CSV files
    this.ProcessCases(); // Imports cases into CRM from a db (on this thread)
    var freq = 0;
    ConfigurationManager.RefreshSection("appSettings");
    var parse = int.TryParse(ConfigurationManager.AppSettings["Frequency"], out freq);
    await System.Threading.Tasks.Task.Delay(parse ? freq * 1000 * 60 : 15000 * 60); // user-defined frequency in minutes, or 15 minutes by default
    Start(); // Start again after the wait above
}
This is the Windows service
public partial class CRMAUX : ServiceBase
{
    private CRMSync crmSync;

    public CRMAUX()
    {
        InitializeComponent();
    }

    protected override void OnStart(string[] args)
    {
        ConfigurationManager.RefreshSection("userSettings"); // Get the current config file so that the cached one is not used
        if (TestConfigurationFile())
        {
            crmSync = new CRMSync();
            Thread main = new Thread(crmSync.Start);
            main.IsBackground = true;
            main.Start();
        }
        else // The configuration file is bad
        {
            Stop(); // inherited from ServiceBase
            return;
        }
    }

    protected override void OnStop()
    {
    }

    /* Checks the configuration file for the necessary keys */
    private bool TestConfigurationFile()...
}
The OrganizationServiceProxy is a wrapper around a WCF Channel which utilises unmanaged resources (sockets etc.).
A class (our proxy) that implements IDisposable is essentially stating that it will be accessing unmanaged resources, and that you should therefore explicitly tell it when you're finished with it rather than just letting it go out of scope. This allows it to release its handles to those resources and free them up for use elsewhere. Unfortunately, our code isn't the only thing running on the server!
Unmanaged resources are finite and expensive (SQL connections are the classic example). If your code executes correctly but you don't explicitly call Dispose, the clean-up of those resources is non-deterministic, which is a fancy way of saying that the garbage collector will only call Dispose on those managed objects "eventually"; that will, as stated, in turn clean up the unmanaged resources they hold. This hurts the scalability of your application and of any other services on the same hardware that are in contention with you for those resources. And that's the best-case scenario: if an exception occurs at any point in the stack after those resources are acquired, they will not be released, ergo a resource leak and fewer resources available for use elsewhere.
Wrapping your code in a using statement is syntactic sugar: it compiles down to the proxy being wrapped in a try/finally, with Dispose being called in the finally block.
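To illustrate, reusing the construction from the question (the request variable is just an example):

// This:
using (var proxy = new OrganizationServiceProxy(new Uri(uri), null, cred, null))
{
    proxy.Execute(request);
}

// ...compiles down to roughly this:
{
    var proxy = new OrganizationServiceProxy(new Uri(uri), null, cred, null);
    try
    {
        proxy.Execute(request);
    }
    finally
    {
        if (proxy != null)
            ((IDisposable)proxy).Dispose(); // runs even if Execute throws
    }
}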
In terms of using the proxy/context across multiple methods you should take a look at the Unit of Work pattern. The OrganizationServiceContext is exactly that, something that you apply changes to over the course of a request (likely across multiple method calls) and then submit to the datastore (CRM) at the end when done, in our case using context.SaveChanges().
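A rough sketch of that shape, using the Context type from the question (the helper methods are hypothetical):

// Unit of Work sketch: several methods share one context for the request,
// and the caller submits all accumulated changes in one place.
public void RunSyncPass(Context linq)
{
    UpdateChangedAccounts(linq);   // hypothetical helper that modifies entities
    AttachNewContacts(linq);       // hypothetical helper that adds entities
    linq.SaveChanges();            // one submit for the whole unit of work
}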
Where are you using this code? I'm curious to know what you're looking to achieve with the [ThreadStatic] attribute. If it's within an IIS-hosted application, I don't think you'll see any benefit: you don't manage the thread pool, so the proxy still only has a lifetime matching the HttpRequest. If that's the case, there are several better ways of managing the lifetime of these objects, a dependency injection framework with an HttpRequest-scoped lifetime being the obvious one.
If your app crashes, the operating system will automatically reclaim all your resources, i.e. close all network ports, files etc., so nothing is kept open forever. Of course, on the server side something unexpected can happen if it is not handled properly and the app crashes in the middle of a request. But that's what transactions are for, so that the state of the server data is always consistent.
I have created a Singleton-patterned class which contains some instance variables (Dictionaries) that are very expensive to fill.
This class is used in a .NET MVC 4 project. The key point is that the data provided by the dictionaries in this Singleton class is nice to have, but is not required for the web app to run.
In other words, when we process a web request, the request would be enhanced with the information from the dictionaries if they are available, but if it's not available, it's fine.
So what I would like to do is find the best way to load the data into these Dictionaries within the Singleton, without blocking the web activity as they are filled with data.
I would normally find a way to do this with multithreading, but in the past I have read about and run into problems using multithreaded techniques within ASP.NET. Have things changed in .NET 4 / MVC 4? How should I approach this?
UPDATE
Based on the feedback below and more research, what I am doing now is shown below, and it seems to work fine. Does anyone see any potential problems? In my testing, no matter how many times I call LazySingleton.Instance, the constructor only gets called once and returns instantly. I can access LazySingleton.EXPENSIVE_CACHE immediately, although it may not yet contain the values I am looking for (which I test for in my app using a .Contains() call). So it seems to be working...
If I'm only ever editing the EXPENSIVE_CACHE Dictionary from a single thread (the LazySingleton constructor), do I need to worry about thread safety when reading from it in my web app?
public class LazySingleton
{
    public ConcurrentDictionary<string, string> EXPENSIVE_CACHE = new ConcurrentDictionary<string, string>(1, 80000); // writing to cache in only one thread

    private static readonly Lazy<LazySingleton> instance = new Lazy<LazySingleton>(() => new LazySingleton());

    private LazySingleton()
    {
        Task.Factory.StartNew(() => expensiveLoad());
    }

    public static LazySingleton Instance
    {
        get
        {
            return instance.Value;
        }
    }

    private void expensiveLoad()
    {
        // load data into EXPENSIVE_CACHE
    }
}
You may fill your cache repository in any of your web application's events, e.g.:
Application_Start
Session_Start
Something like this
<%@ Application Language="C#" %>
<script runat="server">
    void Application_Start(object sender, EventArgs e)
    {
        SingletonCache.LoadStaticCache();
    }
</script>
Hope this is useful.
I am using C# 5.0, VS 2012, MVC4. I have a scenario where I need to cache employee data and query the cache when performing a search on employee info.
I am not displaying all employees initially, but I want to initiate a thread to cache all employees. So in the index method, when the view is displayed, I am doing this:
// Start a thread to load the cache if it is empty
if (HttpRuntime.Cache["AllEmployees"] == null)
{
    thCacheAllEmployees = new Thread(new ThreadStart(CacheAllEmployees));
    thCacheAllEmployees.Name = "CacheAllEmployees";
    thCacheAllEmployees.Start();
}
CacheAllEmployees is a separate method which queries LDAP and stores all employees in the cache; the LDAP query takes about 15 seconds. Within those first 15 seconds after the view loads, before the cache is populated, the user may start typing in the search box, which triggers an ajax call to the GetFilteredEmployees action method. There I want to access the previously started thread, check whether it is alive, and wait for it to complete, so that I don't need to do a fresh LDAP query:
if (thCacheAllEmployees.IsAlive)
{
    thCacheAllEmployees.Join();
    if (HttpRuntime.Cache["AllEmployees"] != null)
        return (List<CMSUser>)HttpRuntime.Cache["AllEmployees"];
}
But the problem is that the ajax call arrives as a new request on a different thread, which knows nothing about thCacheAllEmployees, so thCacheAllEmployees is a null reference there. I would need to find that thread among all the active threads in the application.
I could store the thread id of thCacheAllEmployees in a session variable when the view is first loaded, but how can I get hold of that thread from the pool of threads when the ajax method call is made?
Is there a better way of doing this? Please give your suggestions.
When you think threads, think actions, not data. When you want to store data, you do not put that data on a thread; you put it into memory, and that memory is then accessible to one or more threads depending on its scope.
There are lots of ways to store that data. I'm not sure whether the data to cache is unique per user session or whether you want one global cache. Anything that you want to access from anywhere can go in a static variable; you just have to be sure to use locks so that multiple threads do not try to mutate that data at the same time, which is never safe.
Model
public static class MyCache
{
    private static object LockToken = new object();
    private static List<CMSUser> _Users { get; set; }

    static MyCache()
    {
        _Users = GetUsers();
    }

    public static List<CMSUser> Users
    {
        get
        {
            lock (LockToken)
            {
                return _Users;
            }
        }
    }
}
Controller
public class UsersController : ApiController
{
    public List<CMSUser> Get()
    {
        return MyCache.Users;
    }
}
View
$.ajax({
    url: '/api/users',
    dataType: 'json',
    success: function(users) {
        // do something with users here
    }
});
Why wait for the first call to cache the data? You could do this during application start-up by adding it to the Application_Start function in Global.asax.
This means a 15-second overhead when your application starts, but after that you are good to go.
If you do want to use a thread here too, you could put its id in a static variable and use that to check whether the list has loaded (see the sketch below).
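A variation on that last point: rather than storing a thread id, a static Task handle gives every request the same "started yet? wait for it" check without digging threads out of the pool. A sketch only, with names adapted from the question:

// Sketch: start the LDAP load once; any request can wait on the same Task.
public static class EmployeeCacheLoader
{
    private static readonly object sync = new object();
    private static Task loadTask;

    public static Task EnsureLoading()
    {
        lock (sync)
        {
            if (loadTask == null)
                loadTask = Task.Factory.StartNew(CacheAllEmployees);
            return loadTask;
        }
    }

    private static void CacheAllEmployees()
    {
        // LDAP query here; put the result in HttpRuntime.Cache["AllEmployees"]
    }
}

// In the ajax action:
EmployeeCacheLoader.EnsureLoading().Wait(); // returns immediately once loaded
var users = (List<CMSUser>)HttpRuntime.Cache["AllEmployees"];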
I have read many posts saying that multithreaded applications must use a separate session per thread. Perhaps I don't understand how the locking works, but if I put a lock on the session in all repository methods, would that not make a single static session thread-safe?
like:
public void SaveOrUpdate(T instance)
{
    if (instance == null) return;

    lock (_session)
    {
        using (ITransaction transaction = _session.BeginTransaction())
        {
            lock (instance)
            {
                _session.SaveOrUpdate(instance);
                transaction.Commit();
            }
        }
    }
}
EDIT:
Please consider the context/type of application I'm writing:
It is not multi-user and has no typical user interaction; it is a self-running robot reacting to remote events such as financial data and order updates, performing tasks and saves based on them. Intermittently this can create bursts of up to 10 saves per second. Typically it is the same object graph that needs to be saved each time. Also, on startup, the program loads the full database into an entity object graph. So it basically reads once, then performs SaveOrUpdates as it runs.
Given that the application typically edits the same object graph, it would perhaps make more sense to have a single thread dedicated to applying these edits to the object graph and then saving them to the database, or perhaps a pool of threads servicing a common queue of edits, where each thread has its own (dedicated) session that it does not need to lock. Look up producer/consumer queues (to start, look here).
Something like this:
[Producer Threads]           [Database Servicer Thread]
Edit Event -\
Edit Event ---> Queue -----> Dequeue and Apply to Session -> Database
Edit Event -/
I'd imagine that a BlockingCollection<Action<ISession>> would be a good starting point for such an implementation.
Here's a rough example (note this is obviously untested):
// Assuming you have a work queue defined as
public static BlockingCollection<Action<ISession>> myWorkQueue = new BlockingCollection<Action<ISession>>();

// and your event args look something like this
public class MyObjectUpdatedEventArgs : EventArgs {
    public MyObject MyObject { get; set; }
}

// And one of your event handlers
public void MyObjectWasChangedEventHandler(object sender, MyObjectUpdatedEventArgs e) {
    myWorkQueue.Add(s => SaveOrUpdate(e.MyObject));
}

// Then a thread in a constant loop processing these items could work:
public void ProcessWorkQueue() {
    var mySession = mySessionFactory.OpenSession();
    while (true) {
        var nextWork = myWorkQueue.Take(); // blocks until an item is available
        nextWork(mySession);
    }
}

// And to run the above:
var dbUpdateThread = new Thread(ProcessWorkQueue);
dbUpdateThread.IsBackground = true;
dbUpdateThread.Start();
At least two disadvantages are:
You reduce performance significantly. Having this on a busy web server is like having a crowd outside a cinema but letting people in through a single person-wide entrance.
A session has an internal identity map (a cache). A single session per application means that memory consumption grows as users access different data from the database; ultimately you could end up with the whole database in memory, which would simply not work. You would then have to call a method that drops the first-level cache from time to time, but there is no good moment to do so: you can't just drop it at the beginning of a request, because other concurrent sessions could suffer from it.
I am sure people will add other disadvantages.