I have implemented a REST service using Web API 2. The service manages sessions that are created and joined by the different clients accessing the service.
A session contains information about which application functionality is accessible, and about the participants who have joined that session.
Every second, each client gets the session information and access list from the server for synchronization purposes. When the access list changes, client functionality is enabled or disabled accordingly.
I am using the MemoryCache class to store session info in the Web API service, as below.
public static class SessionManager
{
    private static object objForLock = new object();

    public static List<Session> SessionCollection
    {
        get
        {
            lock (objForLock)
            {
                MemoryCache memoryCache = MemoryCache.Default;
                return memoryCache.Get("SessionCollection") as List<Session>;
                // return HttpContext.Current.Application["SessionCollection"] as List<Session>;
            }
        }
        set
        {
            lock (objForLock)
            {
                MemoryCache memoryCache = MemoryCache.Default;
                memoryCache.Add("SessionCollection", value, DateTimeOffset.UtcNow.AddHours(5));
                // HttpContext.Current.Application["SessionCollection"] = value;
            }
        }
    }
}
My problem is the inconsistent behavior of the cache.
When clients send the synchronization call, they get inconsistent results: some requests return the proper data, while others return null, alternating after a number of requests.
I attached a debugger and monitored the object when the result was null; memoryCache.Get("SessionCollection") itself was null. After some consecutive requests it was proper again. I don't understand why this object is not persistent.
As an alternative I tried HttpContext.Current.Application["SessionCollection"] as well, but the same issue occurs.
I have read about app pool recycling, which clears all cached data after a particular time. If my cached object is removed by an app pool recycle, how can I get it back?
Can someone please help me get out of this issue? Thanks in advance.
You should store client-specific information in Session instead of Cache; Cache is meant for the whole application (shared).
However, this is not recommended, as Web API is built with REST in mind, and RESTful services should be stateless (APIs should not cache client state). Stateless applications have many benefits:
Reduced memory usage.
Better scalability: imagine what happens if you store information for millions of clients at the same time.
Better behavior in load-balancing scenarios: every server can handle every client without losing state.
No session-expiration problems.
If you want to store client state anyway, you can. Please try the suggestions in the following post: ASP.NET Web API session or something?
In general, caching state locally on the web server is bad (for both Session and a local MemoryCache). The cache can be lost for many reasons:
App pool recycling.
Load-balanced environments.
Multiple worker processes in IIS.
Regarding your requirements:
Every second, each client gets the session information and access list
from the server for synchronization purposes. When the access list
changes, client functionality is enabled or disabled accordingly.
I'm not sure whether you want to update the other clients with the new access list immediately when one client sends a synchronization call. If so, SignalR would be a better choice.
Otherwise, you could just store the updated access list somewhere shared (a shared cache or even the database) and update the other clients whenever they come back with another request.
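To illustrate the "store it somewhere shared" option, here is a minimal sketch. The class and method names are hypothetical, and the ConcurrentDictionary merely stands in for the shared store; in a load-balanced deployment you would back this with Redis, memcached, or a database table instead.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

// Hypothetical process-wide store for per-session access lists.
public static class AccessListStore
{
    private static readonly ConcurrentDictionary<string, List<string>> _lists =
        new ConcurrentDictionary<string, List<string>>();

    // Called when a client changes the access list for a session.
    public static void Update(string sessionId, List<string> accessList)
    {
        _lists[sessionId] = accessList;
    }

    // Called by every client's periodic synchronization request.
    public static List<string> Get(string sessionId)
    {
        List<string> list;
        return _lists.TryGetValue(sessionId, out list) ? list : null;
    }
}
```

Each synchronization request then reads the latest list for its session, so clients pick up access changes on their next poll.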
Scott Hanselman wrote about a bug in .NET 4 here; I hope this fix helps you.
The temporary fix: create the memory cache instance while execution-context flow is suppressed:
using (ExecutionContext.SuppressFlow())
{
    // Create memory cache instance under disabled execution context flow
    return new YourCacheThing.GeneralMemoryCache(…);
}
The Hotfix is http://support.microsoft.com/kb/2828843 and you can request it here: https://support.microsoft.com/contactus/emailcontact.aspx?scid=sw;%5BLN%5D;1422
Just a caution: MemoryCache keeps data in memory on a single server, so if you have multiple web servers (behind a load balancer), the cache on one server will not be available to the others. You also use a single cache key, "SessionCollection", so that data is shared by all clients. If you need to store data in the cache uniquely per client, you need to return a token (GUID) to the client and use that token to get/update the client's data in subsequent requests.
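A minimal sketch of that token approach (all names hypothetical; a ConcurrentDictionary stands in for MemoryCache so the per-client keying idea is visible):

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical per-client state cache keyed by a GUID token.
public static class ClientStateCache
{
    private static readonly ConcurrentDictionary<Guid, object> _entries =
        new ConcurrentDictionary<Guid, object>();

    // First request: create a token the client must echo back later.
    public static Guid CreateToken(object initialState)
    {
        Guid token = Guid.NewGuid();
        _entries[token] = initialState;
        return token;
    }

    // Subsequent requests: look up that client's state by its token.
    public static object Get(Guid token)
    {
        object state;
        return _entries.TryGetValue(token, out state) ? state : null;
    }
}
```

The client sends its token with every request, so two clients never read or overwrite each other's entry.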
Try introducing a class-level variable, so your code looks like the below (some code removed for clarity):

private static readonly MemoryCache _memCache = MemoryCache.Default; // static, since SessionManager is a static class
....
return _memCache.Get("SessionCollection") as List<Session>;
...
_memCache.Add("SessionCollection", value, DateTimeOffset.UtcNow.AddHours(5));
Related
I have a (classic) cloud service that needs to create an expensive object that I want reused in subsequent requests. It takes a long time to create, so creating it on each request slows things down unacceptably.
public class MyService : IHttpHandler
{
    public static ExpensiveObject MyObject;

    public void ProcessRequest(HttpContext context)
    {
        if (MyObject == null)
            MyObject = new ExpensiveObject(); // very time-consuming operation

        // do stuff with MyObject
    }
}
(I realise the lack of consideration for multiple concurrent requests; please disregard that.) When I post two requests, one after the other, a new MyObject is created each time. How can I ensure that the same object is reused across requests?
Setting IsReusable to return true in MyService seemingly makes no difference.
It looks like you need to move the shared object out of the HttpHandler into a separately hosted service, for example an Azure App Service or an Azure WebJob (the latter isn't suited to every scenario).
Azure App Service scenario: the web app communicates with the App Service over HTTP (see HttpClient). Azure App Service has a configuration option, Always On, that keeps the app loaded even when there's no traffic.
If you are dealing with a long-running operation (although you wrote that the problem is long initialization), it makes sense to look at the standard REST pattern for resolving such problems: polling.
This link may also be useful: Common causes of Cloud Service roles recycling.
If you're running inside IIS, you can't. The application pool is at work. Additionally, multiple requests typically won't cross paths in-process.
Your typical options, each of which creates only one expensive service per thread (or scope), include:
registering the service's lifecycle as per-thread (or per-request) in your IoC container;
a singleton (the app pool is already in use).
Best of luck!
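If you do stay in-process despite those caveats, the handler's null check can at least be made thread-safe with Lazy&lt;T&gt;, so concurrent first requests don't each build their own instance. A sketch, with ExpensiveObject stubbed out as a stand-in for the real type:

```csharp
using System;
using System.Threading;

// Stand-in for the real expensive-to-build type.
public class ExpensiveObject { }

public static class ExpensiveObjectHolder
{
    // ExecutionAndPublication guarantees the factory runs exactly once,
    // even when several requests race to be first.
    private static readonly Lazy<ExpensiveObject> _instance =
        new Lazy<ExpensiveObject>(() => new ExpensiveObject(),
                                  LazyThreadSafetyMode.ExecutionAndPublication);

    public static ExpensiveObject Instance
    {
        get { return _instance.Value; }
    }
}
```

Within one worker process this reuses a single instance; it still does not survive app pool recycles or span multiple worker processes, which is the point made above.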
To achieve this easily (without dealing with arcane Azure plumbing), I just made a separate executable that hosts the ExpensiveObject in a Nancy localhost server (started by a startup script).
In my case this has no significant drawbacks, as I just need the object to consume a string and return another string. This might not be the right solution for everyone, however.
To access DocumentDB/CosmosDB I'm using the package Microsoft.Azure.DocumentDB.Core (v1.3.2). I have noticed that when I create and initialize the DocumentClient class:
var documentClient = new DocumentClient(new Uri(endpointUrl), primaryKey);
await documentClient.OpenAsync();
a number of requests are fired to the endpoint to fetch information about indexes and other metadata. To be exact, 9 HTTP requests go out on .OpenAsync(). This makes creating and activating the client a very costly operation in terms of performance: it takes up to a second for all the requests to come back.
To mitigate this costly operation, I'm making the DocumentClient a singleton and keeping the reference around for the lifetime of the application.
The application is ASP.NET Core MVC, and this might keep the object in memory for days.
Question: is it OK to keep this object as a singleton for that long? If not, what should the disposal strategy be? Or is there a way to make initialization cheaper (i.e., avoid these initial requests)?
We've wondered that ourselves as well, and found this in the docs:
SDK Usage Tip #1: Use a singleton DocumentDB client for the lifetime of your application Note that each DocumentClient instance is thread-safe and performs efficient connection management and address caching when operating in Direct Mode. To allow efficient connection management and better performance by DocumentClient, it is recommended to use a single instance of DocumentClient per AppDomain for the lifetime of the application.
I assume this is still valid now that you can address CosmosDB with it as well.
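Given that guidance, in ASP.NET Core MVC the natural place for the single instance is a DI singleton registration. A configuration sketch, not a verified implementation: it assumes the Microsoft.Azure.DocumentDB.Core package from the question and the built-in container, and the endpointUrl/primaryKey placeholders would really come from configuration.

```csharp
using System;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    // Placeholders: in a real app these come from configuration/secrets.
    private readonly string endpointUrl = "...";
    private readonly string primaryKey = "...";

    public void ConfigureServices(IServiceCollection services)
    {
        // One DocumentClient for the application lifetime, per the SDK tip above.
        services.AddSingleton<IDocumentClient>(sp =>
            new DocumentClient(new Uri(endpointUrl), primaryKey));
        services.AddMvc();
    }
}
```

The container disposes the singleton when the application shuts down, so no separate disposal strategy is needed for normal lifetimes.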
How can I use server-side caching in a C# WCF REST service?
For example, I generate a lot of data into one object (not from a database) and I do not want to regenerate it on every call a (random) user makes. How can I cache the object?
Verifying question: is it right that an HttpContext cache object is shared only between a specific client and the host?
Is it right that a HttpContext cache object is only between a specific client and the host?
No, it is a shared object, as per MSDN:
There is one instance of the Cache class per application domain. As a
result, the Cache object that is returned by the Cache property is the
Cache object for all requests in the application domain.
Depending on the load, you may also use a database for caching (depending on what you call caching). There are also in-memory databases specifically optimized for distributed caching; see memcached, Redis, and Memcache vs. Redis?
The HttpContext.Cache is local to the application domain, and so is shared by all code that runs in that application domain. It is certainly fast and flexible enough for most applications.
How you use it depends, of course, on your needs. You may use a serialized version of the input parameters as the key, for instance, as in this example:
public MyObject GetMyObject(int size, string cultureId, string extra)
{
    // Input validation first
    ...

    // Determine the cache key from the input parameters
    string cacheKey = size.ToString() + cultureId + extra;

    // rest of your code here
}
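Building on that key, a get-or-add lookup avoids regenerating the object on every call. A hedged sketch that uses a ConcurrentDictionary in place of HttpContext.Cache (MyObject and the generated payload are stand-ins for the real expensive data):

```csharp
using System;
using System.Collections.Concurrent;

// Stand-in for the expensive generated object.
public class MyObject
{
    public string Payload;
}

public static class MyObjectCache
{
    private static readonly ConcurrentDictionary<string, MyObject> _cache =
        new ConcurrentDictionary<string, MyObject>();

    public static MyObject GetMyObject(int size, string cultureId, string extra)
    {
        // Composite key built from the input parameters, as above.
        string cacheKey = size.ToString() + "|" + cultureId + "|" + extra;

        // Build the expensive object only on a cache miss.
        return _cache.GetOrAdd(cacheKey,
            key => new MyObject { Payload = "generated for " + key });
    }
}
```

Unlike HttpContext.Cache, this sketch has no expiration or memory pressure handling; entries live until the process recycles, so prefer the real cache when you need eviction.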
I have a self-hosted WCF service that I am using in a Silverlight application. I am trying to store a list of user GUIDs in an IDictionary object. Each time a user hits the service, it updates that user's datetime so I can keep track of which users have active "sessions". The problem is that every time I hit the service, the list is empty; it appears to drop the values on each SOAP request.
Can you store information in a self-hosted service that will be available across multiple service requests?
Thanks in advance!
It's on a per-instance basis, i.e. session-less by default.
Have a look at this:
When a service contract sets the
System.ServiceModel.ServiceContractAttribute.SessionMode property to
System.ServiceModel.SessionMode.Required, that contract is saying that
all calls (that is, the underlying message exchanges that support the
calls) must be part of the same conversation.
If you need to store things between requests, you will need either to create a static dictionary, with appropriate locking, to store the data as requests come in, or to store this info in a database (or other external store) and check whether it exists there in each method call. The reason is that the service class is instantiated on every client request.
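The static-dictionary route can avoid hand-rolled locking by using ConcurrentDictionary. A sketch of the active-user tracking the question describes (names are illustrative):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;

// Survives across per-call WCF service instances because it is static.
public static class ActiveUserTracker
{
    private static readonly ConcurrentDictionary<Guid, DateTime> _lastSeen =
        new ConcurrentDictionary<Guid, DateTime>();

    // Call on every service hit: insert or refresh the user's timestamp.
    public static void Touch(Guid userId)
    {
        _lastSeen.AddOrUpdate(userId, DateTime.UtcNow, (id, old) => DateTime.UtcNow);
    }

    // A user is "active" if seen within the given window.
    public static int CountActive(TimeSpan window)
    {
        DateTime cutoff = DateTime.UtcNow - window;
        return _lastSeen.Count(kv => kv.Value >= cutoff);
    }
}
```

Note that this state still lives only in the host process, so it is lost whenever the host restarts, which is why the external-store option below is more robust.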
Since you are already updating the user's datetime when a user hits the service, it would be better to determine whether a user is active by comparing against that datetime field. This has the advantage of being accurate on every call (a dictionary could get out of sync with the DB if the service is restarted). Databases already have mechanisms in place to deal with concurrency, so rather than rolling your own locking solution around a singleton object, you can push that complexity to the data store.
If the second solution is not fast enough (and you have profiled the app and determined it's the bottleneck), the other option is to put some kind of cache in front of the DB so that data can be checked in memory before going to the DB. This cache object would need to be static, like the dictionary, and has the same locking pitfalls as any other multi-threaded application.
EDIT: If this hosted WCF service is being used as session storage for the users of the Silverlight application, and the data is not kept in an external data store, then you had better be sure that tracking whether users are active is not mission critical. As described, this data cannot be guaranteed to be correct.
Based on the accepted answer: if your service faults and needs to be rebooted (since this is self-hosted, it is advisable to monitor the Faulted event), you have to dispose of the service host and instantiate a new one. The only way the GUID data can be kept is to rebind it to the service between restarts (assuming the host app itself isn't restarted, which is a different issue).

private Dictionary<Guid, string> _session;

Service service = new Service(_session);
_serviceHost = new ServiceHost(service, GetUriMethodInHostApp());

Better would be to store this externally and do a lookup, as @marc_s suggests. Then this complexity goes away.
You need to change the InstanceContextMode. You can do so by adding the following attribute to your WCF class:

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]

This runs the WCF service as a singleton of sorts. See more on WCF Instance Context Mode.
And then you should construct your service host with your singleton object. Here's code from a working example where I'm doing something similar:
private ServiceHost serviceHost;

if (serviceHost != null)
    serviceHost.Close();

if (log.IsInfoEnabled)
    log.Info("Starting WCF service host for endpoint: " + ConfiguredWCFEndpoint);

// Create our service instance, and create a new service host from it
ServiceLayer.TagWCFService service = new ServiceLayer.TagWCFService(ApplicationName,
                                                                    ApplicationDescription,
                                                                    SiteId,
                                                                    ConfiguredUpdateRateMilliseconds);
serviceHost = new ServiceHost(service, new Uri(ConfiguredWCFEndpoint));

// Open the ServiceHostBase to create listeners and start listening for messages.
serviceHost.Open();
As others have politely noted, this can have "consequences" if you're not familiar with how it works or if it's not a good fit for your particular application.
If you don't want to involve locking and thread-safety-specific code, you can use a NoSQL database to store your session data, something like MongoDB or RavenDB.
Like @marc_s, I think using Singleton mode is risky; you have to be very careful when building your own thread-safe session mechanism.
I am creating a graphical tool in Silverlight which reads data from multiple files and a database.
I don't want to call the database again and again. I want to retrieve the data when required and keep it somewhere safe, so that if the same user or any other user visits the page, they can access the data.
I would like something like ASP.NET's application-state Cache["Object"], but in Silverlight. What is the best methodology?
Since Silverlight runs client side, you need to cache server side. You could fetch your data with WCF, along these lines.
What I have done in the past is cache the query in a WCF service using Enterprise Library caching:
public class YourWcfService
{
    private ICacheManager _cacheManager = null;

    public YourWcfService()
    {
        _cacheManager = EnterpriseLibraryContainer.Current.GetInstance<ICacheManager>("Cache Manager");
    }
}
Your web method would look something like:

[OperationContract]
public List<Guid> SomeWebMethod()
{
    List<Guid> result = null;

    if (_cacheManager.Contains("rgal")) // data in cache?
        result = (List<Guid>)_cacheManager.GetData("rgal");

    if (result == null)
    {
        result = FETCH FROM DATABASE HERE;

        // cache for 120 minutes
        _cacheManager.Add("rgal", result, CacheItemPriority.Normal, null, new AbsoluteTime(TimeSpan.FromMinutes(120)));
    }

    return result;
}
Silverlight controls run in the browser, client side, per user, so you cannot cache something for all users from within the control; that has to happen on the server.
You can cache data in the control for a given user's session, or in isolated storage for a given user, but you can't do anything on the server without writing the corresponding server-side code.
Is the caching really necessary? Are you really pounding your database that badly?
Your DB is your storage. Unless you have a measured performance issue, this is premature optimization.
The new Enterprise Library Silverlight Integration Pack provides caching capabilities on the client. Two types of data caching are supported: in-memory and isolated storage. You also get flexible configuration of expiration policies (programmatically or via external config) and config-tool support.
Note: it is a code preview now, but should be released as final in May.