I'm working with an application that uses WSTrustChannelFactory to create trust channels, and I noticed that the code creates a new WSTrustChannelFactory every time a new channel is needed.
I've never worked with this before, but since this is a factory, I suppose it can be implemented as a singleton.
Am I right? If so, are there any additional considerations (will the factory always be "usable", or are there scenarios/exceptions where it should be replaced with a new instance)? Also, is factory creation an expensive operation, like creating a WCF ChannelFactory?
Am I right?
Yes, I think you are. I've worked on several projects where we used a channel factory, and each time it was a singleton. It certainly has its limits and can become a bottleneck under very high load, but for a lot of implementations I think you are fine.
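As a rough sketch, you could lazily create one shared factory and hand out cheap channels from it per call. The binding, STS address, and trust version below are placeholder assumptions, not taken from your code:

using System;
using System.ServiceModel;
using System.ServiceModel.Security;

// A minimal sketch of a lazily created, shared WSTrustChannelFactory.
// The binding and STS address are hypothetical.
static class TrustChannelProvider
{
    private static readonly Lazy<WSTrustChannelFactory> Factory =
        new Lazy<WSTrustChannelFactory>(() =>
        {
            var factory = new WSTrustChannelFactory(
                new WS2007HttpBinding(SecurityMode.TransportWithMessageCredential),
                new EndpointAddress("https://sts.example.com/trust/13"));
            factory.TrustVersion = TrustVersion.WSTrust13;
            return factory;
        });

    // Channels remain cheap to create: make one per call, then close/abort it.
    public static IWSTrustChannelContract CreateChannel()
    {
        return Factory.Value.CreateChannel();
    }
}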
Related
This is for a REST API that has no dependencies between requests and uses ASP.NET Core DI.
I have heard conflicting arguments about choosing the most appropriate lifetime:
Prefer transient, then scoped, and avoid singleton because of memory and multi-threading issues, and because singletons should not be injected into services with other lifetimes (captive dependencies)
Prefer singletons to save time instantiating objects and to prevent opening multiple connections
I understand they work differently but is there a 'go-to' lifetime? Should it start with 'transient' and move to others as required?
Are there any investigations showing that the instantiation time saved by singletons is actually significant?
I'm not going to say there's a right or wrong way here, but I'll share my personal experience.
I've tried both ways, and ended up finding that it's best to start with transient and then expand the scope as necessary.
Think in terms of the Single Responsibility Principle and Reasons for Change. The reason that your service's lifetime scope needs to change will likely be linked to a change you make to the service implementation itself. When that Reason happens, you want to have to change only a single class.
If you have a service that needs to be longer-lived than most of your other services, all you have to do is make it longer-lived and inject factories for any shorter-lived dependencies into it.
On the other hand, if you have a class that needs to be shorter-lived than most of your other services, you'll end up having to inject it as a factory into all the longer-lived services. So one Reason for Change impacts code all over your code base.
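As an illustration, the longer-lived service can depend on a factory delegate instead of capturing an instance. The type names here are hypothetical, and note that the built-in ASP.NET Core container needs an explicit registration for Func<T>:

// A hypothetical singleton that resolves a fresh transient dependency per use.
public class ReportScheduler
{
    private readonly Func<IReportWriter> _writerFactory;

    public ReportScheduler(Func<IReportWriter> writerFactory)
    {
        _writerFactory = writerFactory;
    }

    public void RunOnce()
    {
        IReportWriter writer = _writerFactory(); // new transient each time
        writer.Write();                          // no captive dependency
    }
}

// Registration; the built-in container does not resolve Func<T> on its own:
services.AddTransient<IReportWriter, ReportWriter>();
services.AddSingleton<Func<IReportWriter>>(
    sp => () => sp.GetRequiredService<IReportWriter>());
services.AddSingleton<ReportScheduler>();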
Of course, this is assuming you're doing things correctly: avoiding captive dependencies and remembering to make non-thread-safe services transient. I find it's far easier to remember which services need to be long-lived and override the defaults in that direction, than it is to remember which services shouldn't be long-lived and override the defaults that way. And I've found that failure to do the latter leads to much more insidious bugs that are harder to notice and result in greater damage.
With regards to performance: I haven't done performance testing with the ASP.NET Core built-in DI framework, but I've found that SimpleInjector is fast enough that the time and memory overhead of creating new instances of services is trivial in comparison with the other work my app does (like querying databases).
With regards to preventing opening multiple connections: SQL Server connections are pooled automatically so the cost to creating a new SqlConnection() and disposing it several times in a request is generally pretty trivial.
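To illustrate, opening and disposing a connection per operation is typically cheap (the connection string is assumed):

using System.Data.SqlClient;

// ADO.NET pools physical connections behind the scenes.
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();  // usually hands back an already-open pooled connection
    // ... execute commands ...
}                       // Dispose returns the connection to the pool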
Should it start with 'transient' and move to others as required?
Yep. Transient services are simpler to implement correctly, and simply instantiating an object every time a dependency is resolved would normally not cause a significant performance problem.
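As a sketch of that default-to-transient approach in ASP.NET Core DI (the service names are hypothetical):

// Start everything as transient; promote a lifetime only when a service
// demonstrably needs to live longer.
services.AddTransient<IOrderService, OrderService>();  // the default choice
services.AddScoped<IUnitOfWork, UnitOfWork>();         // holds per-request state
services.AddSingleton<IClock, SystemClock>();          // stateless and thread-safe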
I'm trying to figure out exactly how to set up my channelFactory and channels: whether to reuse the same instances or create new ones for each call. I've done a lot of research and I see many conflicting opinions. I'm coming to the following conclusions, but I'm not sure, so I'd like to hear some expert advice.
Using .NET 4, I'm creating a channel factory, adding an endpoint behavior, and then making calls.
It seems like I should reuse the same instance of the channel factory, but it's probably safest to make sure it's open first, in case it faulted for any reason.
If the factory faulted, try factory.Close() and, in a catch, factory.Abort().
It seems like there will not be a lot of overhead in calling factory.CreateChannel() for each call, and that this is probably safer than sharing channels.
For each call, I should try ((IChannel)_client).Close() and, in a catch, ((IChannel)_client).Abort().
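That close-or-abort pattern might look roughly like this (IService and _factory are hypothetical placeholders):

using System.ServiceModel.Channels;

IService client = _factory.CreateChannel();
try
{
    client.DoWork();
    ((IChannel)client).Close();  // graceful close on success
}
catch
{
    ((IChannel)client).Abort();  // tear down a failed/faulted channel
    throw;
}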
One more thing that I'd like to confirm but don't know how to test: let's say I reuse channels and a channel gets into the faulted state. If I didn't code a check of the channel's state first, what would happen?
Or should I share my channels: open automatically with the first call and don't close until I close my form?
If you use the TypedFactoryFacility in Windsor to generate factories for you based on an interface, the factory itself can be registered as Transient. It turns out that the factory will then release all transient objects that it created when the factory itself is disposed after release.
container.AddFacility<TypedFactoryFacility>();
container.Register(
    Types.FromThisAssembly().BasedOn<IFactory>().Configure(
        x => x.AsFactory()).LifestyleTransient());
This means that if I create the auto-generated factories as transient, it neatly allows me to forget about Releasing the objects created from the factory (as long as I manage the lifetime of the factory itself, of course). The lifetime of such a transient factory would be tied to some other transient object, so when that object gets released, the factory is released, along with everything that factory created.
Personally, I hate having to remember to explicitly Release() objects, and this cleans up the code a lot, so this makes me very happy. But the question is: is this reasonable practice? It all just seems too easy.
Is there a downside to creating a new factory instance each time a factory is required?
Am I digging myself a terrible hole?
The automatic factory mechanism in Castle is based on Castle.DynamicProxy; I imagine that if you use the factory as a transient component you have to pay for the creation of the ProxyGenerator at each resolution, unless some other caching mechanism exists.
The documentation warns against recreating the ProxyGenerator each time:
If you have a long running process (web site, windows service, etc.) and you have to create many dynamic proxies, you should make sure to reuse the same ProxyGenerator instance. If not, be aware that you will then bypass the caching mechanism. Side effects are high CPU usage and constant increase in memory consumption.
However it is entirely possible that there is some mechanism that prevents this problem in Windsor, or that the documentation is obsolete.
My recommendation: run a loop over your transient factory resolution, then resolve your component; keep an eye on memory (also loaded assemblies; maybe the proxied code is loaded into a side assembly) and processor usage to determine whether or not your trick is viable.
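Such a test could be as simple as the loop below (IWidgetFactory is a hypothetical typed-factory interface); run it while watching memory and CPU in a profiler:

// Resolve and release the transient factory many times; if ProxyGenerator
// state is not cached, memory and CPU should climb steadily.
for (int i = 0; i < 100000; i++)
{
    var factory = container.Resolve<IWidgetFactory>();
    var widget = factory.Create();
    container.Release(factory); // also releases what the factory created
}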
So the deeper question is related to IDisposable. Why are you using it? Are you really keeping track of unmanaged/limited resources? Are you using it to really dispose something or as a shutdown protocol?
All of this goes away if you change Windsor's release policy, which makes you take care of when/if to call Dispose yourself. The container is trying to help you by keeping track of what to dispose and when, but it can only do so much, so sometimes it's better to step up and take over the responsibility.
Another option is to turn off the tracking and expose a service that keeps track of the things to dispose. This can be more semantic and better reflect your business/infrastructure needs.
In sum: what you're doing is fine, but it's not all that common to instantiate a transient factory every time you need the factory to create something. Make sure you document this for your own future self.
We've created several WCF services that process asynchronous requests. We're using basicHttpBinding, consequently our InstanceContextMode is PerCall and this is what's causing a little confusion. We're seeing unusual behavior from those parts of the application being injected using Microsoft's Unity container.
We're resolving the reference below to create a singleton of Foo that is used throughout the application. However, when the service is hit in quick succession, Foo will occasionally throw exceptions that indicate that it is being accessed by multiple threads and having its state changed in unexpected ways as a result.
Container.RegisterType<IFoo, Foo>(new ContainerControlledLifetimeManager());
Now, if we change the lifetime manager to TransientLifetimeManager - essentially telling the container to inject a new instance of the class every time it's resolved, the issue is corrected.
Container.RegisterType<IFoo, Foo>(new TransientLifetimeManager());
From my understanding, WCF doesn't control the lifetime of the AppDomain; the host does. In our case, that is IIS. So, given this information, is it possible that our PerCall WCF requests are working correctly, but due to how the AppDomain is being managed, we are accessing the same injected object because of its singleton registration?
Thanks for your time!
Have a look at UnityWcf. I have tried a couple of different approaches to aligning the lifetime of objects in Unity to the InstanceContextMode in WCF. This works very well:
http://unitywcf.codeplex.com
I will go ahead and preface this by saying: I am somewhat new to WCF.
I'm working on a server-side routine that's responsible for doing a great deal of business logic. It's accessible from a client via WCF.
My main WCF method calls off to several other private methods. Instead of passing around all of the "lookup data" I need for the business logic to each private method, I decided to use a singleton instance of a class named DataProvider that contains all of this "lookup data".
At the end of the routine, I "release" the DataProvider's lookup data so the next time the routine is executed, the latest lookup data will be used.
So, here's a simplified example:
public void Generate()
{
    try
    {
        // populate the singleton DataProvider with its lookup data...
        DataProvider.Instance.LoadLookupData();

        // do business logic...
    }
    finally
    {
        // release the provider's lookup data...
        DataProvider.Release();
    }
}
This works great until two different clients execute the method at (or near) the same time. Problems occur because they share the same singleton instance, and whichever task finishes first will release the DataProvider before the other completes.
So...
What are my options here?
I'd like to avoid passing around all of the lookup data so the singleton pattern (or some derivative) seems like a good choice. I also need to be able to support multiple clients calling the method at the same time.
I believe the WCF service is configured as "Per-Call". I'm not sure if there's a way to configure a WCF service so that the static memory is not shared between service invocations.
Any help would be appreciated.
By default, WCF uses "Per-Call" instancing, which means a new instance of the WCF service is created for each client call. Since you implemented a singleton, even though a new WCF service instance is created, it still calls into your one singleton.
If you want lookup data that is created fresh for each call (as you have now), you should not implement it as a singleton. That way, each client that calls your method will get a new instance of the lookup data, which I think was your intention.
However, if the lookup data doesn't change that often, I would recommend sharing it between all calls; this will improve the performance of your WCF service. You will need to declare your WCF service as
InstanceContextMode = InstanceContextMode.Single
ConcurrencyMode = ConcurrencyMode.Multiple
What this does is have WCF create the singleton for you automatically, so you don't have to do it yourself; second, it supports more than one concurrent user (ConcurrencyMode.Multiple).
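Applied to a service implementation, those two settings look like this (the service and contract names are hypothetical):

using System.ServiceModel;

// One WCF-managed singleton instance, allowed to serve concurrent callers.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class GeneratorService : IGeneratorService
{
    // service implementation...
}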
Now, if your lookup data does change and needs to be reloaded after some period of time, I would still recommend using
InstanceContextMode = InstanceContextMode.Single
ConcurrencyMode = ConcurrencyMode.Multiple
but inside your code, cache it and then expire the cache at a specific or relative time (e.g., one hour), as in the sketch below.
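One way to sketch that is with System.Runtime.Caching and an absolute expiration (LookupData and LoadLookupData are hypothetical stand-ins):

using System;
using System.Runtime.Caching;

// Cache the lookup data inside the singleton service and rebuild it
// once the absolute expiration (one hour here) passes.
LookupData GetLookup()
{
    var cache = MemoryCache.Default;
    var lookup = cache.Get("lookup") as LookupData;
    if (lookup == null)
    {
        lookup = LoadLookupData(); // hypothetical expensive load
        cache.Set("lookup", lookup, new CacheItemPolicy
        {
            AbsoluteExpiration = DateTimeOffset.Now.AddHours(1)
        });
    }
    return lookup;
}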
Here are some links that might help you:
3 ways to do WCF instance management (Per call, Per session and Single)
Hope this will help.
The static variables in a WCF service are always shared between instances regardless of the WCF InstanceContextMode setting. It seems you would be better off using a caching pattern for your lookup data. The answers to this caching question provide some alternatives to rolling your own, although they are a bit dated.
Also, if you decide that making the whole service instance a singleton (InstanceContextMode=Single) is the easiest solution be aware that you'll generally kill service scalability unless you also make your code multi-threaded (ConcurrencyMode=Multiple). If you can knock out thread-safe code in your sleep then a singleton service might be for you.
The simplest option is to use a synchronization mechanism. Have you looked at lock(...)? It acts as a gatekeeper, much like a critical section (if you have come across those in Windows programming).
Define a static object in your class:
static object lockObject = new object();
and use it in the Generate method:
void Generate()
{
    lock (lockObject)
    {
        ...
    }
}