I have a (classic) cloud service that needs to create an expensive object that I want to be reused in subsequent requests. It takes a long time to create so creating it each time slows down the requests unacceptably.
public class MyService : IHttpHandler
{
    public static ExpensiveObject MyObject;

    public void ProcessRequest(HttpContext context)
    {
        if (MyObject == null)
            MyObject = new ExpensiveObject(); // very time-consuming operation

        // do stuff with MyObject
    }
}
(I realise the lack of consideration for multiple concurrent requests running, please disregard that) When I post two requests, one after the other, it creates a new MyObject each time. How can I ensure that it reuses the same object created for each request?
Setting IsReusable to return true in the MyService seemingly makes no difference.
It looks like you need to move the shared object out of the HttpHandler into a separately hosted service, for example an Azure App Service or an Azure WebJob (which isn't suited to every scenario), etc.
Azure App Service scenario: the web app communicates with the App Service over HTTP (see HttpClient). Azure App Service has a configuration option, Always On, that keeps the app loaded even when there's no traffic.
If you're dealing with a long-running operation (although you wrote that the problem is long initialization), it makes sense to look at the standard REST pattern for resolving such problems: polling.
Maybe this link will be useful for you: Common causes of Cloud Service roles recycling.
If you’re running inside IIS you can’t: the application pool is in control, and multiple requests typically won’t cross paths in-process anyway.
Your typical options, each of which creates only one expensive service per scope, include:
IoC: register the service’s lifecycle per thread (or per request scope).
a singleton (within the app pool already in use)
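If staying in-process is acceptable, the singleton option can be sketched with Lazy&lt;T&gt;, which also handles the concurrency caveat from the question (ExpensiveObject is assumed to have a parameterless constructor). Note this still won't survive an app pool recycle:

```csharp
using System;
using System.Web;

public class MyService : IHttpHandler
{
    // Lazy<T> runs the factory exactly once, even when several first
    // requests arrive concurrently; the instance then lives for the
    // lifetime of the app domain (until the app pool recycles).
    private static readonly Lazy<ExpensiveObject> _myObject =
        new Lazy<ExpensiveObject>(() => new ExpensiveObject());

    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        ExpensiveObject obj = _myObject.Value; // created on first access only
        // do stuff with obj
    }
}
```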
Best of luck!
To achieve this easily (without dealing with arcane Azure configuration), I just made a separate executable that hosts the ExpensiveObject in a Nancy localhost server (started in a startup script).
In my case this has no significant drawbacks as I just need to request the object to consume a string and return another string. This might not be the right solution for everyone however.
I'm new to DDD/Clean Architecture.
I'm trying to implement this architecture in a new, from-scratch application, and I feel confused on some points.
I'm trying to make the best choice so I won't regret it as the application starts growing.
Probably my question is a bit stupid, but again, I'm new to DDD and trying to make the best choices.
I'm trying to stick to this example from Ardalis: https://github.com/ardalis/CleanArchitecture
Here is my model/problem simplified
-ApplicationAggregateRoot
---Application
---Instance
Application has a list of Instance.
Now I have to do an HTTP request "/operationA" on the Instance; this can be done by my Blazor UI or by my API via controllers.
The result of this HTTP request "/operationA" will have to be saved in my repository, among other things, so from what I understood I need an event once I have the HTTP response, something like "OperationAFinishedEvent".
What I can't really figure out is how I should make this call in my controller/Blazor, for example.
Should I do (pseudo code):
A)
_repository.GetApplicationById(1).Instances.First(i => i.Id == id).OperationA()
and have some event raised in OperationA() Method of Instance
(something like "OperationASentEvent") which will be wired to a handler that will call _httpClient.OperationA(instance.Url)
Or should I pass by a domain service class for doing the call instead of an event like:
B)
class Controller
{
    OperationA(Instance instance)
    {
        _instanceService.OperationA(instance);
    }
}

class InstanceService
{
    void OperationA(Instance instance)
    {
        _httpClient.OperationA(instance.Url);
        new OperationAFinishedEvent(instance);
    }
}
C) Or call directly
_httpClient.OperationA(instance.Url);
new OperationAFinishedEvent(instance);
from both controller and blazor
Or maybe something else ?
Thanks!
It sounds like you have a Blazor client side app as well as a server-side app that you access via an API. So let's address both sides of the app.
In Blazor, you're typically going to minimize application logic and mostly just make calls to the API. So the code required to kick off an operation for an application instance in Blazor should look like this:
var result = await _httpClient.PostAsync(endpointUrl, data);
If that's a long-running process, you might get back a result that provides you with another endpoint you can query for status. Otherwise the result should just let you know whether the process completed successfully.
In your API, you will have various endpoints. Normally these endpoints correspond to resources and operations you can take to alter the state of these resources. Your API resources usually correspond to your domain model, but not always 100%. You should generally avoid using HTTP APIs for Remote Procedure Call (RPC) operations, since they're not really designed for that purpose. Instead, think in terms of requests and responses, typically. Imagine you're trying to get your city government to do something, and the way you do that is by filling out a form to hand to a clerk. Then when the action has been completed, they hand you back some more paperwork. The clerk is your API. The papers are your request and response objects. The actual action - the "instance operation" is happening back inside the office where you don't see it as a client, and none of your interactions are with it directly.
So you might have a resource like this:
/Applications/123/Instances/234/PendingOperations
You can list pending operations. You can POST a new operation request. Etc. There might also be a resource for .../CompletedOperations or you might get back an id for your pending operation that you can later use to view its status. The idea is to have an endpoint that represents a noun (a resource) and not a verb (do something).
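A hypothetical ASP.NET Core sketch of such a PendingOperations resource (all route, type, and member names are illustrative, not from the original question):

```csharp
using System;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("Applications/{appId}/Instances/{instanceId}/PendingOperations")]
public class PendingOperationsController : ControllerBase
{
    // POST creates a new operation request (a noun being created,
    // not a verb being invoked).
    [HttpPost]
    public IActionResult Create(int appId, int instanceId)
    {
        Guid operationId = Guid.NewGuid();
        // ... enqueue the operation for the instance here ...

        // 202 Accepted plus a URL the client can poll for status.
        return AcceptedAtAction(nameof(GetStatus),
            new { appId, instanceId, operationId },
            new { operationId });
    }

    // Clients poll this endpoint to see how the operation is going.
    [HttpGet("{operationId}")]
    public IActionResult GetStatus(int appId, int instanceId, Guid operationId)
    {
        // ... look up the operation's state here ...
        return Ok(new { operationId, status = "Pending" });
    }
}
```

The POST answers immediately with a pointer to the pending operation, which fits the polling approach for long-running work described above.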
Hope that helps!
Your domain layer (the aggregate root lives there) should only be concerned with its internal state.
The application layer (where you also use the repository) can call an interface to another service, using the data from the aggregate root.
The interface is then implemented in a separate layer.
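As a rough sketch of that layering (option B from the question, with the HTTP call behind an interface; IApplicationRepository, OperationAFinishedEvent, and the aggregate types are assumed from the question's model, everything else is illustrative):

```csharp
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

// Application layer depends only on this abstraction, not on HttpClient.
public interface IInstanceOperationClient
{
    Task OperationAAsync(string instanceUrl);
}

// Application layer service: loads the aggregate, calls the port,
// raises the event, then persists the result.
public class InstanceAppService
{
    private readonly IApplicationRepository _repository;
    private readonly IInstanceOperationClient _client;

    public InstanceAppService(IApplicationRepository repository,
                              IInstanceOperationClient client)
    {
        _repository = repository;
        _client = client;
    }

    public async Task OperationAAsync(int applicationId, int instanceId)
    {
        var application = _repository.GetApplicationById(applicationId);
        var instance = application.Instances.First(i => i.Id == instanceId);

        await _client.OperationAAsync(instance.Url);

        instance.Events.Add(new OperationAFinishedEvent(instance));
        _repository.Save(application);
    }
}

// Infrastructure layer implements the abstraction with HttpClient.
public class HttpInstanceOperationClient : IInstanceOperationClient
{
    private readonly HttpClient _httpClient;

    public HttpInstanceOperationClient(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public Task OperationAAsync(string instanceUrl)
    {
        return _httpClient.PostAsync(instanceUrl + "/operationA", content: null);
    }
}
```

Both the API controller and the Blazor code can then depend on InstanceAppService, so neither needs to know how the HTTP call or the event wiring works.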
We are creating a range of .NET Core 2.0 microservices based on the ServiceStack framework. We want to use HTTP-header-based correlation tokens, so we can track a request in our distributed logging system (Seq).
We would like to use IoC to set up a class holding a thread-safe JsonServiceClient for performance reasons, but how can we ensure that headers placed on one thread will not leak into another concurrent request? Client code example:
public TResponse Get(IReturn requestDto)
...
_serviceClient.AddHeader("r-id", theReqId); // how can we make these specific for the thread request only?
var responseFromDownstreamService = _serviceClient.Get(requestDto);
If you’re modifying the service client instance, the dependency needs to be transient, so that each thread receives a new instance it can mutate without modifying the same instance used by other threads.
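A sketch of that transient approach (DownstreamGateway and _baseUrl are illustrative names; JsonServiceClient and AddHeader are from the question). Creating a fresh client per call means header mutations can't leak across concurrent requests:

```csharp
using ServiceStack;

public class DownstreamGateway
{
    private readonly string _baseUrl; // illustrative field

    public DownstreamGateway(string baseUrl)
    {
        _baseUrl = baseUrl;
    }

    public TResponse Get<TResponse>(IReturn<TResponse> requestDto, string theReqId)
    {
        // A fresh client per call: mutating its headers cannot
        // leak into another concurrent request.
        using (var client = new JsonServiceClient(_baseUrl))
        {
            client.AddHeader("r-id", theReqId);
            return client.Get(requestDto);
        }
    }
}
```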
I have implemented a REST service using Web API 2; the service is implemented to manage different sessions which are created and joined by the different clients accessing the service.
A session contains information about access to application functionality and about the participants who have joined the session.
Each client gets the session information and access list from the server every second, for synchronization purposes. When access changes, client functionality is enabled or disabled accordingly.
I am using the MemoryCache class to store session info in the Web API service as below.
public static class SessionManager
{
    private static object objForLock = new object();

    public static List<Session> SessionCollection
    {
        get
        {
            lock (objForLock)
            {
                MemoryCache memoryCache = MemoryCache.Default;
                return memoryCache.Get("SessionCollection") as List<Session>;
                // return HttpContext.Current.Application["SessionCollection"] as List<Session>;
            }
        }
        set
        {
            lock (objForLock)
            {
                MemoryCache memoryCache = MemoryCache.Default;
                memoryCache.Add("SessionCollection", value, DateTimeOffset.UtcNow.AddHours(5));
                // HttpContext.Current.Application["SessionCollection"] = value;
            }
        }
    }
}
My problem is the inconsistent behavior of the cache.
When clients send synchronization calls, the results are inconsistent: some requests get proper data, while others get null, and then after a few more requests the data comes back again.
I attached a debugger and monitored the object for the null result; "memoryCache.Get("SessionCollection")" is indeed null at that point. After some consecutive requests it is correct again. I don't understand why this object doesn't persist.
As an alternative, I tried "HttpContext.Current.Application["SessionCollection"]" as well, but the same issue occurs there.
I have read about app pool recycling, which clears all cached data after a particular time. If my cached object is removed by an app pool recycle, how can I get it back?
Can someone please help me get out of this issue? Thanks in advance.
You should store client-specific information in Session instead of Cache; the cache should be for the whole application (shared).
However, this isn't recommended, as Web API is built with REST in mind, and RESTful services should be stateless (APIs should not cache per-client state). Stateless applications have many benefits:
Reduced memory usage.
Better scalability: your application scales better. Imagine what happens if you store information for millions of clients at the same time.
Better in load-balancing scenarios: every server can handle every client without losing state.
No session expiration problems.
In case you want to store client state, you can do it anyway. Please try the suggestions in the following post: ASP.NET Web API session or something?
In general, caching state locally on the web server is bad (both Session and local MemoryCache). The cache can be lost for many reasons:
App pool recycling.
Load-balanced environments.
Multiple worker processes in IIS.
Regarding your requirements:
Each client get session information and access list from server for synchronization purpose on every second. According to access changed, client functionality will changed (Enable/Disable).
I'm not sure if you want to update the other clients with new access list immediately when a client sends synchronization call. If that's the case, SignalR would be a better choice.
Otherwise, you could just store the updated access list somewhere (shared cache or even in database) and update the other clients whenever they reconnect with another request.
@ScottHanselman wrote about a bug in .NET 4 here. I hope this fix helps you:
The temporary fix:
Create the memory cache instance under a disabled execution context flow:
using (ExecutionContext.SuppressFlow())
{
    // Create memory cache instance under disabled execution context flow
    return new YourCacheThing.GeneralMemoryCache(…);
}
The hotfix is http://support.microsoft.com/kb/2828843, and you can request it here: https://support.microsoft.com/contactus/emailcontact.aspx?scid=sw;%5BLN%5D;1422
Just a caution: MemoryCache keeps data in memory on a single server, so if you have multiple web servers (behind a load balancer), the cache will not be available to the other servers. You are also using a single cache key, "SessionCollection", so that data is shared between all clients. If you need to store data in the cache uniquely per client, you need to return a token (GUID) to the client and use that token to get/update that client's data in the cache on subsequent requests.
Try introducing a class-level variable. Your code will then look like below (some code removed for clarity):
private readonly MemoryCache _memCache = MemoryCache.Default;
....
return _memCache.Get("SessionCollection") as List<Session>;
...
_memCache.Add("SessionCollection", value, DateTimeOffset.UtcNow.AddHours(5));
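The token-per-client idea mentioned above could be sketched like this (the key prefix and method names are illustrative):

```csharp
using System;
using System.Runtime.Caching;

public static class SessionStore
{
    // Issue a token to the client once; key the cache entry by it.
    public static Guid CreateSession(Session session)
    {
        Guid token = Guid.NewGuid();
        MemoryCache.Default.Add("Session:" + token, session,
            DateTimeOffset.UtcNow.AddHours(5));
        return token; // the client sends this back on subsequent requests
    }

    // Look up that client's data using the token from the request.
    public static Session GetSession(Guid token)
    {
        return MemoryCache.Default.Get("Session:" + token) as Session;
    }
}
```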
I'm going to be creating a service that needs to make a call to a hosted WCF service halfway around the world. This isn't a big deal, since the number of transactions will be relatively low. However, I need to pass an instance of a class that will possibly be defined in the WCF service to the necessary WCF function.
So my question is, will that instance of the class exist on my server? Or will I be contacting the host server every time I attempt to set a variable in the object?
EXAMPLE:
public class Dog
{
    public string noise;
    public int numLegs;
}

public class doSomething
{
    public string makeNoise(Dog x)
    {
        return x.noise;
    }
}
All of those are defined in the WCF service. So when I create an instance of class Dog locally, will that instance exist on my side or on the server hosting the WCF service? If I'm setting up 1000 instances of Dog, the latency will definitely build up; whereas if I DON'T have to contact the server every time I make a change to my instance of Dog, then the only time I have to worry about latency is when I pass it into doSomething.makeNoise.
The host creates a new instance of the service class for each request if you're using the default per-call instantiation mode (which is the recommended way).
So either it is the IIS server hosting your WCF service that creates an instance of your service class, or it is the ServiceHost instance that you've created inside your own self-hosting setup (a console app, a Windows service, etc.).
The service class instance is used to handle your request - execute the appropriate method on the service class, send back any results - and then it's disposed again.
There's also the per-session mode, in which case (assuming the binding you've chosen supports sessions) your first call will create a service-class instance, and your subsequent calls will go to the same, already created instance (until timeouts come into play, etc.).
And there's also the singleton mode, where you have a single instance of the service class that handles all requests - this is however rather tricky to get right in terms of programming, and "challenged" in terms of scalability and performance.
You will need to host your WCF service on a publicly available server (for example IIS). Successful hosting will provide you with a link to the svc file. Clicking on that will give you a link ending in singleWsdl; you need to copy that link. On your client side, the one that needs a reference to the WCF service, you will need to Add Service Reference and pass that link. This will generate proxy code with Client objects that you can use to access your WCF service operation methods.
At a minimum you should have three projects. A website project to host the actual site. A WCF project to host your services. And finally a shared project, which should contain the classes you are concerned with (the models).
Both the website and wcf projects should reference the shared project, this way they both know how the models look.
The WCF project should return serialized models as JSON objects, which I usually do by referencing Newtonsoft.Json.
Your website project should expect this json, and deserialize them, also using Newtonsoft.Json. This is why your class (model) should exist in the shared project, so you can use the same class on both sides of your service call.
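The round trip might look like this (assuming Newtonsoft.Json as the answer suggests, and reusing the Dog model from the question purely as an illustration of the shared type):

```csharp
using Newtonsoft.Json;

// Shared project: the model both the website and WCF projects reference.
public class Dog
{
    public string Noise { get; set; }
    public int NumLegs { get; set; }
}

public static class Example
{
    public static void Main()
    {
        // Service side: serialize the shared model to JSON.
        string json = JsonConvert.SerializeObject(
            new Dog { Noise = "Woof", NumLegs = 4 });

        // Website side: deserialize into the very same shared type.
        Dog dog = JsonConvert.DeserializeObject<Dog>(json);
        // dog.Noise is "Woof" again - no duplicate class definitions needed.
    }
}
```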
I have a self-hosted WCF service that I am using in a Silverlight application. I am trying to store a list of user GUIDs in an IDictionary object. Each time a user hits the service, it updates the user's datetime so I can keep track of which users have active "sessions". The problem is, every time I hit the service, the list is empty. It appears to be dropping the values on each SOAP request?
Can you store information in a self hosted service that will be available across multiple service requests?
Thanks in advance!
It's on a per-instance basis, i.e. session-less by default.
Have a look at this
When a service contract sets the System.ServiceModel.ServiceContractAttribute.SessionMode property to System.ServiceModel.SessionMode.Required, that contract is saying that all calls (that is, the underlying message exchanges that support the calls) must be part of the same conversation.
If you need to store things in between requests you will need to create either a static dictionary with the appropriate locking to store these requests as they come in, or store this info in a database (or other external store) and check to see if it exists there in each method call. The reason for this is that the service class is instantiated on every client request.
Since you are already updating the user's datetime when a user hits the service, it would be better to do a lookup that compares against that datetime field to see whether the user is active or not. This has the advantage of being accurate on every call (the dictionary could get out of sync with the db if the service is restarted). Databases already have mechanisms in place to deal with concurrency, so rather than rolling your own locking solution around a singleton object you can push the complexity to the data store.
If the second solution is not fast enough (and you have profiled the app and determined it's the bottleneck), then the other option is to use some kind of cache solution in front of the db so that data can first be checked in memory before going to the db. This cache object would need to be static like the dictionary and has the same pitfalls around locking as any other multi-threaded application.
EDIT: If this hosted WCF service is being used as session storage for the users of the silverlight application and the data is not being stored in an external data store, then you better be sure that tracking if they are active is not mission critical. This data cannot be guaranteed to be correct as described.
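The static-dictionary option mentioned above can be sketched with a ConcurrentDictionary, which provides the "appropriate locking" internally (type and member names are illustrative):

```csharp
using System;
using System.Collections.Concurrent;

public static class ActiveUsers
{
    // static, so it survives across per-call service instances;
    // ConcurrentDictionary handles the locking for us.
    private static readonly ConcurrentDictionary<Guid, DateTime> _lastSeen =
        new ConcurrentDictionary<Guid, DateTime>();

    // Called on every service hit to record activity.
    public static void Touch(Guid userId)
    {
        _lastSeen[userId] = DateTime.UtcNow;
    }

    // A user counts as "active" if seen within the given window.
    public static bool IsActive(Guid userId, TimeSpan window)
    {
        DateTime seen;
        return _lastSeen.TryGetValue(userId, out seen)
            && DateTime.UtcNow - seen < window;
    }
}
```

As the answer warns, this in-memory data is lost on restart, so it should only back non-critical tracking.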
Based on the accepted answer if your service faults and needs to be rebooted (since this is self hosted it is advised that you monitor the faulted event) you have to dispose of the service host and instantiate a new one. The only way the Guid data can be kept is if it is rebound to the service in between restarts (assuming the host app itself isn't restarted which is a different issue).
private Dictionary<Guid, string> _session;

Service service = new Service(_session);
_serviceHost = new ServiceHost(service, GetUriMethodInHostApp());
Better would be to store this externally and do a lookup as @marc_s suggests. Then this complexity goes away.
You need to change the InstanceContextMode. You can do so by adding the following attribute to your WCF class:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
This will run the WCF service as a singleton of sorts. See more on WCF Instance Context Mode
And then you should construct your service host with your singleton object. Here's code from a working example where I'm doing something similar:
private ServiceHost serviceHost;

if (serviceHost != null)
    serviceHost.Close();

if (log.IsInfoEnabled)
    log.Info("Starting WCF service host for endpoint: " + ConfiguredWCFEndpoint);

// Create our service instance, and create a new service host from it
ServiceLayer.TagWCFService service = new ServiceLayer.TagWCFService(ApplicationName,
                                                                    ApplicationDescription,
                                                                    SiteId,
                                                                    ConfiguredUpdateRateMilliseconds);
serviceHost = new ServiceHost(service, new Uri(ConfiguredWCFEndpoint));

// Open the ServiceHostBase to create listeners and start listening for messages.
serviceHost.Open();
As others have politely noted, this can have "consequences" if you're not familiar with how it works or if it's not a good fit for your particular application.
If you don't want to involve locking and thread-safety-specific code, you can use a NoSQL database to store your session data, something like MongoDB or RavenDB.
Like @marc_s, I think that using the singleton mode is risky: you have to be very careful in building your own thread-safe session mechanism.