We've created several WCF services that process asynchronous requests. We're using basicHttpBinding, so our effective instancing behavior is per-call, and this is what's causing a little confusion. We're seeing unusual behavior from the parts of the application that are injected using Microsoft's Unity container.
We're resolving the registration below to create a singleton of Foo that is used throughout the application. However, when the service is hit in quick succession, Foo will occasionally throw exceptions indicating that it is being accessed by multiple threads and having its state changed in unexpected ways as a result.
Container.RegisterType<IFoo, Foo>(new ContainerControlledLifetimeManager());
Now, if we change the lifetime manager to TransientLifetimeManager (essentially telling the container to inject a new instance of the class every time it's resolved), the issue is corrected.
Container.RegisterType<IFoo, Foo>(new TransientLifetimeManager());
From my understanding, WCF doesn't control the lifetime of the AppDomain; the host does, which in our case is IIS. So, given this information, is it possible that our PerCall WCF requests are working correctly, but because the container (and the singleton it holds) lives for the lifetime of the AppDomain, every call is being handed the same injected object?
Thanks for your time!
Have a look at UnityWcf. I have tried a couple of different approaches to aligning the lifetime of objects in Unity to the InstanceContextMode in WCF. This works very well:
http://unitywcf.codeplex.com
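For reference, the core idea is to give the container one instance per call rather than one per AppDomain. Below is a minimal sketch of that idea, assuming you can hook instance creation per call (UnityWcf does this with a custom IInstanceProvider); HierarchicalLifetimeManager makes each child container hold its own instance, so concurrent calls never share a Foo:

// Registration: one instance per child container instead of per AppDomain.
var root = new UnityContainer();
root.RegisterType<IFoo, Foo>(new HierarchicalLifetimeManager());

// Per service call (normally done inside a custom IInstanceProvider):
using (var perCall = root.CreateChildContainer())
{
    var foo = perCall.Resolve<IFoo>(); // unique to this call
    // ... service the request ...
} // the per-call Foo is disposed along with the child container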
Related
For a REST API that has no dependencies between requests and uses ASP.NET Core DI.
I have heard conflicting arguments when choosing the most appropriate lifetime:
Prefer transient, then scoped, and avoid singleton due to memory and multi-threading issues, and also because singletons should not be injected with dependencies of other lifetimes (captive dependencies)
Prefer singletons to save the time spent instantiating objects and to prevent opening multiple connections
I understand they work differently, but is there a 'go-to' lifetime? Should it start with 'transient' and move to others as required?
Are there any investigations showing that the instantiation time saved by singletons is actually significant?
I'm not going to say there's a right or wrong way here, but I'll share my personal experience.
I've tried both ways, and ended up finding that it's best to start with transient and then expand the scope as necessary.
Think in terms of the Single Responsibility Principle and Reasons for Change. The reason your service's lifetime scope needs to change will likely be linked to a change you make to the service implementation itself. When that Reason happens, you want to have to change only a single class.
If you have a service that needs to be longer-lived than most of your other services, all you have to do is make it longer-lived and inject factories for any shorter-lived dependencies into it.
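Here's a minimal sketch of that first case using ASP.NET Core DI; the type names (ILongLived, IShortLived, and friends) are hypothetical:

using System;
using Microsoft.Extensions.DependencyInjection;

// Registration: the short-lived dependency stays transient; the long-lived
// singleton receives a Func<> factory instead of a captive instance.
services.AddTransient<IShortLived, ShortLived>();
services.AddSingleton<Func<IShortLived>>(sp => () => sp.GetRequiredService<IShortLived>());
services.AddSingleton<ILongLived, LongLived>();

public class LongLived : ILongLived
{
    private readonly Func<IShortLived> _createShortLived;

    public LongLived(Func<IShortLived> createShortLived)
    {
        _createShortLived = createShortLived;
    }

    public void DoWork()
    {
        // A fresh instance per operation, not one captured for the
        // singleton's entire lifetime (a captive dependency).
        var dep = _createShortLived();
        // ...
    }
}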
On the other hand, if you have a class that needs to be shorter-lived than most of your other services, you'll end up having to inject it as a factory into all the longer-lived services. So one Reason for Change impacts code all over your code base.
Of course, this is assuming you're doing things correctly: avoiding captive dependencies and remembering to make non-thread-safe services transient. I find it's far easier to remember which services need to be long-lived and override the defaults in that direction, than it is to remember which services shouldn't be long-lived and override the defaults that way. And I've found that failure to do the latter leads to much more insidious bugs that are harder to notice and result in greater damage.
With regards to performance: I haven't done performance testing with the ASP.NET Core built-in DI framework, but I've found that SimpleInjector is fast enough that the time and memory overhead of creating new instances of services is trivial in comparison with the other work my app does (like querying databases).
With regards to preventing opening multiple connections: SQL Server connections are pooled automatically, so the cost of creating a new SqlConnection() and disposing it several times in a request is generally trivial.
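For illustration, this is the pattern being described; connectionString is assumed to come from your configuration:

using System.Data.SqlClient;

// Because ADO.NET pools connections, "new + dispose" per operation is cheap:
// Dispose() returns the physical connection to the pool rather than closing it.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open(); // typically just borrows an already-open pooled connection
    using (var cmd = new SqlCommand("SELECT 1", conn))
    {
        cmd.ExecuteScalar();
    }
} // returned to the pool here, not physically closed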
Should it start with 'transient' and move to others as required?
Yep. Transient services are simpler to implement correctly, and simply instantiating an object every time a dependency is resolved would normally not cause a significant performance problem.
If you use the TypedFactoryFacility in Windsor to generate factories for you based on an interface, the factory itself can be registered as Transient. It turns out that the factory will then release all the transient objects it created when the factory itself is released.
container.AddFacility<TypedFactoryFacility>();
container.Register(
    Types.FromThisAssembly().BasedOn<IFactory>().Configure(
        x => x.AsFactory()).LifestyleTransient());
This means that if I register the auto-generated factories as transient, it neatly allows me to forget about calling Release on the objects created from the factory (as long as I manage the lifetime of the factory itself, of course). The lifetime of such a transient factory would be tied to some other transient object, so when that object gets released, the factory is released, and so is everything the factory created.
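To make that concrete, here is a minimal sketch; IWidget, IWidgetFactory, and WidgetConsumer are hypothetical names, with the factory interface based on the IFactory registration above:

public interface IWidget { }

// Windsor implements this interface for you via the TypedFactoryFacility:
public interface IWidgetFactory : IFactory
{
    IWidget Create();
}

// A transient component that depends on the (also transient) factory:
public class WidgetConsumer
{
    private readonly IWidgetFactory _factory;

    public WidgetConsumer(IWidgetFactory factory)
    {
        _factory = factory;
    }

    public void Run()
    {
        IWidget widget = _factory.Create();
        // ... use widget; no explicit Release() needed here ...
    }
    // When WidgetConsumer is released, the factory is released with it,
    // and the factory in turn releases every transient widget it created.
}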
Personally, I hate having to remember to explicitly Release() objects, and this cleans up the code a lot, so this makes me very happy. But the question is: Is this reasonable practice? It all just seems too easy.
Is there a downside to creating a new factory instance each time a factory is required?
Am I digging myself a terrible hole?
The automatic factory mechanism in Castle is based on Castle.DynamicProxy; I imagine that if you use the factory as a transient component you have to pay for the creation of the ProxyGenerator at each resolution, unless some other caching mechanism exists.
The documentation warns against recreating the ProxyGenerator each time:
If you have a long running process (web site, windows service, etc.) and you have to create many dynamic proxies, you should make sure to reuse the same ProxyGenerator instance. If not, be aware that you will then bypass the caching mechanism. Side effects are high CPU usage and constant increase in memory consumption.
However, it is entirely possible that some mechanism in Windsor prevents this problem, or that the documentation is obsolete.
My recommendation: test with a loop over your transient factory resolution, then resolve your component; keep an eye on memory (also loaded assemblies, since the proxied code may be loaded into a side assembly) and processor usage to determine whether or not your trick is viable.
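A minimal sketch of that test, reusing the hypothetical IWidgetFactory from above:

// Resolve the transient factory (and a component through it) in a tight
// loop, and watch memory and CPU while it runs.
for (int i = 0; i < 100000; i++)
{
    var factory = container.Resolve<IWidgetFactory>();
    var widget = factory.Create();
    container.Release(factory); // releases the created widget too

    if (i % 10000 == 0)
        Console.WriteLine("{0}: {1:N0} bytes", i, GC.GetTotalMemory(true));
}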
So the deeper question is related to IDisposable. Why are you using it? Are you really keeping track of unmanaged/limited resources? Are you using it to really dispose something or as a shutdown protocol?
All of this goes away if you change Windsor's release policy, which makes you responsible for deciding when (and whether) to call Dispose. The container is trying to help you by keeping track of what to dispose and when, but it can only do so much, so sometimes it's better to step up and take over the responsibility.
Another option is to turn off the tracking and expose a service that keeps track of things to dispose. This can be more semantically meaningful and better reflect your business/infrastructure needs.
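A minimal sketch of that second option; DisposalScope is a hypothetical name:

using System;
using System.Collections.Generic;

// Components register what they create; the scope disposes everything,
// in reverse order of creation, when it is disposed itself.
public sealed class DisposalScope : IDisposable
{
    private readonly Stack<IDisposable> _tracked = new Stack<IDisposable>();

    public T Track<T>(T item) where T : IDisposable
    {
        _tracked.Push(item);
        return item;
    }

    public void Dispose()
    {
        while (_tracked.Count > 0)
            _tracked.Pop().Dispose();
    }
}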
In sum: what you're doing is fine, but it's not all that common to instantiate a transient factory every time you need the factory to create something. Make sure you document this for your own future self.
I'm working with an application that uses WSTrustChannelFactory to create trust channels, and I noticed that the code creates a new WSTrustChannelFactory every time a new channel is needed.
I've never worked with this before but since this is a factory I suppose it can be implemented as a singleton.
Am I right? If so, are there any additional considerations (will the factory always be "usable", or are there scenarios/exceptions where it should be replaced with a new instance)? Also, is factory creation an expensive operation, as it is for a WCF ChannelFactory?
Am I right?
Yes, I think you are. I've worked on several projects where we used a channel factory, and each time it was a singleton. It certainly has its limits and can become a bottleneck under very high load, but for a lot of implementations I think you are fine.
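A minimal sketch of such a singleton; the binding and endpointAddress variables stand in for whatever your code already uses:

using System;
using System.ServiceModel.Security;

public static class TrustChannelProvider
{
    // Created once, on first use, in a thread-safe way:
    private static readonly Lazy<WSTrustChannelFactory> Factory =
        new Lazy<WSTrustChannelFactory>(() =>
            new WSTrustChannelFactory(binding, endpointAddress));

    public static IWSTrustChannelContract CreateChannel()
    {
        // Channels remain cheap to create; only the factory is shared.
        return Factory.Value.CreateChannel();
    }
}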
I will go ahead and preface this by saying: I am somewhat new to WCF.
I'm working on a server-side routine that's responsible for doing a great deal of business logic. It's accessible from a client via WCF.
My main WCF method calls off to several other private methods. Instead of passing around all of the "lookup data" I need for the business logic to each private method, I decided to use a singleton instance of a class named DataProvider that contains all of this "lookup data".
At the end of the routine, I "release" the DataProvider's lookup data so the next time the routine is executed, the latest lookup data will be used.
So, here's a simplified example:
public void Generate()
{
    try
    {
        // Populate the singleton DataProvider with its lookup data...
        DataProvider.Instance.LoadLookupData();

        // Do business logic...
    }
    finally
    {
        // Release the provider's lookup data...
        DataProvider.Release();
    }
}
This works great until two different clients execute the method at (or near) the same time. Problems occur because they share the same singleton instance, and whichever task finishes first releases the DataProvider while the other is still using it.
So...
What are my options here?
I'd like to avoid passing around all of the lookup data so the singleton pattern (or some derivative) seems like a good choice. I also need to be able to support multiple clients calling the method at the same time.
I believe the WCF service is configured as "Per-Call". I'm not sure if there's a way to configure a WCF service so that the static memory is not shared between service invocations.
Any help would be appreciated.
With per-call instancing, a new instance of the WCF service class is created for each client call. But since you implemented a singleton, every one of those instances still shares the same object.
If you want lookup data that is created fresh for each call (like you have now), you should not implement it as a singleton. That way, each client that calls your method gets a new instance of the lookup data, which I think was your intention.
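A minimal sketch, assuming a non-static DataProvider in place of the original singleton (GenerateService is a hypothetical name):

// With per-call instancing, an ordinary instance field is private to the
// call, so no synchronization is needed.
public class GenerateService : IGenerateService
{
    private readonly DataProvider _provider = new DataProvider();

    public void Generate()
    {
        _provider.LoadLookupData(); // fresh lookup data for this call only
        // ... business logic ...
    } // the service instance (and its provider) is discarded after the call
}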
However, if your lookup data doesn't change that fast, I would recommend sharing it between all calls; this will improve the performance of your WCF service. You will need to declare your WCF service with
InstanceContextMode = InstanceContextMode.Single
ConcurrencyMode = ConcurrencyMode.Multiple
What this does is make WCF create the singleton for you, so you don't have to do it yourself; second, it will support more than one concurrent user (ConcurrencyMode.Multiple).
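These settings are applied with the ServiceBehavior attribute; LookupService here is a hypothetical name:

using System.ServiceModel;

// One service instance shared by all callers, with concurrent calls allowed.
// Any state inside it must be thread-safe.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class LookupService : ILookupService
{
    // shared, thread-safe lookup state lives here
}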
Now, if your lookup data does change and needs to be reloaded after some period of time, I would still recommend using
InstanceContextMode = InstanceContextMode.Single
ConcurrencyMode = ConcurrencyMode.Multiple
but inside your code, cache the data and expire the cache at a specific time or after a relative period (e.g., one hour).
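A minimal sketch of that caching, assuming .NET 4+ so System.Runtime.Caching is available; LookupData and LoadLookupDataFromStore() are hypothetical:

using System;
using System.Runtime.Caching;

private static readonly MemoryCache Cache = MemoryCache.Default;

private LookupData GetLookupData()
{
    // Load on demand and let the entry expire after an hour of inactivity.
    var data = (LookupData)Cache.Get("lookup");
    if (data == null)
    {
        data = LoadLookupDataFromStore();
        Cache.Set("lookup", data, new CacheItemPolicy
        {
            SlidingExpiration = TimeSpan.FromHours(1)
        });
    }
    return data;
}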
Here are some links that might help you:
3 ways to do WCF instance management (Per call, Per session and Single)
Hope this helps.
The static variables in a WCF service are always shared between instances regardless of the WCF InstanceContextMode setting. It seems you would be better off using a caching pattern for your lookup data. The answers to this caching question provide some alternatives to rolling your own, although they are a bit dated.
Also, if you decide that making the whole service instance a singleton (InstanceContextMode=Single) is the easiest solution be aware that you'll generally kill service scalability unless you also make your code multi-threaded (ConcurrencyMode=Multiple). If you can knock out thread-safe code in your sleep then a singleton service might be for you.
The simplest option is to use a synchronization mechanism. Have you looked at lock(...)? It will act as a gatekeeper, a lot like a critical section (if you have come across those in Windows programming).
Define a static object in your class:
static object lockObject = new object();
and use it in the Generate method:
void Generate()
{
    lock (lockObject) // only one caller at a time gets past this point
    {
        ...
    }
}
I have created a simple WCF (.NET 3.5) service which defines 10 contracts that are basically calculations on the supplied data. At the moment I expect quite a few clients to make calls to some of these contracts. How do I make the service more responsive? I have a feeling that the service will wait until it processes one request before going on to the next one.
How can I use multithreading in WCF to speed things up?
While I agree with Justin's answer, I believe some more light can be shed here on how WCF works.
You make the specific statement:
I have a feeling that the service will wait until it processes one request before going on to the next one. How can I use multithreading in WCF to speed things up?
The concurrency of a service (how many calls it can take simultaneously) depends on the ConcurrencyMode value of the ServiceBehavior attached to the service. By default, this value is ConcurrencyMode.Single, meaning it will serialize calls one after another.
However, this might not be as much of an issue as you think. If your service's InstanceContextMode is equal to InstanceContextMode.PerCall then this is a non-issue; a new instance of your service will be created per call and not used for any other calls.
If you have a singleton or a session-based service object, however, then the calls to that instance of the service implementation will be serialized.
You can always change the ConcurrencyMode, but be aware that if you do, you will have to handle concurrency issues and access to your resources manually, as you have explicitly told WCF that you will do so.
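For example, a sketch of opting in to concurrent calls (CalculatorService is a hypothetical name); note that you now guard any shared state yourself:

using System.ServiceModel;

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
public class CalculatorService : ICalculatorService
{
    private static readonly object Gate = new object();
    private static int _callCount; // example of shared state

    public double Calculate(double input)
    {
        lock (Gate) // WCF no longer serializes calls for you
        {
            _callCount++;
        }
        return input * 2; // the calculation itself runs concurrently
    }
}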
It's important not to change these just because you think it will lead to increased performance. Concurrency less so, but the instancing aspect of your service is very much a part of its identity (whether or not it is session-based), and changing it impacts the clients consuming the service, so don't do it lightly.
Of course, this says nothing about whether the code actually implementing the service is efficient; given what you describe, that is definitely something worth looking into.
This is definitely premature optimization. Implement your services first and see whether there's an issue or not.
I think you'll find that you are worrying about nothing. The server won't block on a single request as that request processes. IIS/WCF should handle things for you nicely as-is.
I'm not familiar with WCF, but can the process be async?
If you are expecting a huge amount of data and intensive calculations, one option could be to send an id, calculate the values in a separate thread, and then provide a method that returns the result for that id.
Something like:
// Kick off the calculation and get a ticket back:
int id = Service.CalculateX(...);
...
// Later, fetch the result using that ticket:
var y = Service.GetResultX(id);
By default the Instancing is PerSession.
See WCF Service defaults
However, if you use a binding that doesn't support sessions (like BasicHttpBinding), or the channel/client does not create a session, then this behaves like PerCall.
See Binding type session support: https://learn.microsoft.com/en-us/dotnet/framework/wcf/system-provided-bindings
Each WCF client object will create a session, and for each session there will be a server instance that services all calls from that particular WCF client object one at a time (effectively single-threaded).
Multiple clients therefore each get their own session, and therefore their own server instance, by default, and will not block each other.
They will only affect each other through shared resources like the database, CPU, etc.
See Using sessions
As others suggested, you should make sure the implementation is efficient BEFORE you start playing with the instancing and concurrency modes.
You could also consider client-side calculations if there is no real reason to make a call to the server.