WCF channel factory caching - C#

A WCF service will consume another WCF service. I want to create the channel factory object and cache it manually. I know the performance will be good, but I am concerned about whether any other issue will be raised.
I have found info as follows:
"Using ChannelFactory you can still achieve channel factory caching with your own custom MRU cache. This still implies an important restriction: calls to the same service endpoint that share the channel factory must also share the same credentials. That means you can't pass different credentials for each thread calling application services from the Web server tier. One scenario where this is not an issue is if you use the same certificate or Windows credential to authenticate to downstream services. In this case, if you need to pass information about the authenticated user, you can use custom headers rather than a security token."
Link: http://devproconnections.com/net-framework/wcf-proxies-cache-or-not-cache
I have found sample code via a Google search, as follows.
internal delegate void UseServiceDelegate<in T>(T proxy);

internal static class Service<T>
{
    private static readonly IDictionary<Type, string> cachedEndpointNames =
        new Dictionary<Type, string>();
    private static readonly IDictionary<string, ChannelFactory<T>> cachedFactories =
        new Dictionary<string, ChannelFactory<T>>();

    internal static void Use(UseServiceDelegate<T> codeBlock)
    {
        var factory = GetChannelFactory();
        var proxy = (IClientChannel)factory.CreateChannel();
        var success = false;
        try
        {
            using (proxy)
            {
                codeBlock((T)proxy);
            }
            success = true;
        }
        finally
        {
            if (!success)
            {
                proxy.Abort();
            }
        }
    }

    private static ChannelFactory<T> GetChannelFactory()
    {
        lock (cachedFactories)
        {
            var endpointName = GetEndpointName();
            if (cachedFactories.ContainsKey(endpointName))
            {
                return cachedFactories[endpointName];
            }
            var factory = new ChannelFactory<T>(endpointName);
            cachedFactories.Add(endpointName, factory);
            return factory;
        }
    }

    private static string GetEndpointName()
    {
        var type = typeof(T);
        var fullName = type.FullName;
        lock (cachedFactories)
        {
            if (cachedEndpointNames.ContainsKey(type))
            {
                return cachedEndpointNames[type];
            }
            var serviceModel =
                ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None)
                    .SectionGroups["system.serviceModel"] as ServiceModelSectionGroup;
            if ((serviceModel != null) && !string.IsNullOrEmpty(fullName))
            {
                foreach (var endpointName in
                    serviceModel.Client.Endpoints.Cast<ChannelEndpointElement>()
                        .Where(endpoint => fullName.EndsWith(endpoint.Contract))
                        .Select(endpoint => endpoint.Name))
                {
                    cachedEndpointNames.Add(type, endpointName);
                    return endpointName;
                }
            }
        }
        throw new InvalidOperationException("Could not find endpoint element for type '" +
            fullName + "' in the ServiceModel client configuration section. This might be " +
            "because no configuration file was found for your application, or because no " +
            "endpoint element matching this name could be found in the client element.");
    }
}
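For reference, a call through this helper would look something like this (IOrderService, PlaceOrder and orderRequest are hypothetical stand-ins for the actual service contract configured in the client section):

// IOrderService is a hypothetical contract; substitute your own.
Service<IOrderService>.Use(proxy => proxy.PlaceOrder(orderRequest));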
I am totally confused about what I should do. Can anyone give me a best-practice guideline?

This is a complex topic with a lot of details to go over, but here it goes.
First, as a general rule you should be caching a ChannelFactory and not an individual Channel. A ChannelFactory is expensive to construct as well as thread-safe, so it is a great candidate for caching. A Channel is cheap to construct, and it is generally recommended to only create channels on an as-needed basis and to close them as early as possible. Additionally, when you cache a Channel you have to worry about it timing out, which will cause it to fault and invalidate the entire benefit of caching it in the first place.
The article you linked to by Michele Leroux Bustamante is one of the best resources out there. As she states, there are differences to consider between Windows clients and server-side clients. Mostly only Windows clients benefit from caching as typically the credentials differ from thread to thread on server-side clients. For your typical Windows clients, there are two main options: Caching the references yourself or leveraging the MRU cache.
Leveraging the MRU cache: Essentially this means that you are letting Microsoft take the wheel. The ClientBase class will use an MRU cache for the internal ChannelFactory instance. The caching behavior is controlled via a CacheSetting property, and by default caching will be disabled if any of the "security-sensitive" properties are accessed. ClientBase properties which will invalidate and remove a ChannelFactory from the MRU cache when accessed include the Endpoint, ClientCredentials or the ChannelFactory itself. There is a way to override this behavior by setting the CacheSetting property to CacheSetting.AlwaysOn. Additionally, if the Binding is defined at run time then the ChannelFactory is no longer a candidate for the MRU cache. See more details here.
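As a minimal sketch (OrderServiceClient and IOrderService are illustrative names for a generated client deriving from ClientBase<IOrderService>; PlaceOrder is a hypothetical operation), opting in looks like this:

// Must be set before the first client instance is created.
ClientBase<IOrderService>.CacheSetting = CacheSetting.AlwaysOn;

var client = new OrderServiceClient();
client.PlaceOrder(orderRequest);
client.Close();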
Caching the references yourself: This means that you are going to keep a collection of ChannelFactory references yourself. The snippet you provide in your question uses this approach. The best approach I have ever seen (and admittedly use a modified version of at work) is by Darin Dimitrov, via this related SO question. For those of us who like to have more fine-grained control over the caching mechanism, this is the approach to use. It is typically used when credentials must be set at run time, as is often required by internet services.
Quite similarly, client proxies can be cached to improve performance - Wenlong Dong has an article about this topic.
(Update) Server-side clients, as noted before, are quite limited in their options when it comes to ChannelFactory caching. For this brief discussion, we will assume that our deployment scenario looks like this:
Client -> Service A -> Service B
The most likely method to use in order to leverage ChannelFactory caching in this scenario is to cache the references yourself for the session between the Client and Service A. This way Service A does not have to construct a different ChannelFactory instance every time it needs to call into Service B. However, if the properties of the ChannelFactory need to change for each call, then this is no longer going to be appropriate.
Of course this also holds if Service A is a Singleton and each call to the downstream service (Service B) does not require new credentials, but Singleton services have their own set of performance problems.


SslStream, disable session caching

The MSDN documentation says:
The Framework caches SSL sessions as they are created and attempts to reuse a cached session for a new request, if possible. When attempting to reuse an SSL session, the Framework uses the first element of ClientCertificates (if there is one), or tries to reuse an anonymous session if ClientCertificates is empty.
How can I disable this caching?
At the moment I am experiencing a problem with reconnecting to a server (i.e., the first connection works fine, but on an attempt to reconnect the server breaks the session). Restarting the application helps (but of course only for the first connection attempt). I assume the root of the problem is caching.
I've checked the packets with a sniffer; the only difference is in a single place in the Client Hello messages:
First connection to the server (successful): [packet capture screenshot omitted]
Second connection attempt (no program restart, failed): [packet capture screenshot omitted]
The difference seems to be just the session identifier.
P.S. I'd like to avoid using 3rd-party SSL clients. Is there a reasonable solution?
This is a translation of this question from ru.stackoverflow
Caching is handled inside SecureChannel, an internal class that wraps SSPI and is used by SslStream. I don't see any extension points inside it that you could use to disable session caching for client connections.
You can clear the cache between connections using reflection:
// Requires: using System.Collections; using System.Net.Security; using System.Reflection;
var sslAssembly = Assembly.GetAssembly(typeof(SslStream));
// SslSessionsCache is internal, so it is only reachable via reflection.
var sslSessionCacheClass = sslAssembly.GetType("System.Net.Security.SslSessionsCache");
var cachedCredsInfo = sslSessionCacheClass.GetField("s_CachedCreds", BindingFlags.NonPublic | BindingFlags.Static);
var cachedCreds = (Hashtable)cachedCredsInfo.GetValue(null);
cachedCreds.Clear();
But it's very bad practice. Consider fixing the server side instead.
So I solved this problem a bit differently. I really didn't like the idea of reflecting out this private static cache and dumping it, because you don't really know what you're getting into by doing so; you're basically circumventing encapsulation, and that could cause unforeseen problems. But really, I was worried about race conditions where I dump the cache and, before I send the request, some other thread comes in and establishes a new session, so my first thread inadvertently hijacks that session. Bad news... anyway, here's what I did.
I stopped to think about whether or not there was a way to sort of isolate the process and then an Android co-worker of mine recalled the availability of AppDomains. We both agreed that spinning one up should allow the Tcp/Ssl call to run, isolated from everything else. This would allow the caching logic to remain intact without causing conflicts between SSL sessions.
Basically, I had originally written my SSL client to be internal to a separate library. Then within that library, I had a public service act as a proxy/mediator to that client. In the application layer, I wanted the ability to switch between services (HSM services, in my case) based on the hardware type, so I wrapped that into an adapter and interfaced that to be used with a factory. Ok, so how is that relevant? Well it just made it easier to do this AppDomain thing cleanly, without forcing this behavior on any other consumer of the public service (the proxy/mediator I spoke of). You don't have to follow this abstraction, I just like to share good examples of abstraction whenever I find them :)
Now, in the adapter, instead of calling the service directly, I basically create the domain. Here is the ctor:
public VCRklServiceAdapter(
    string hostname,
    int port,
    IHsmLogger logger)
{
    Ensure.IsNotNullOrEmpty(hostname, nameof(hostname));
    Ensure.IsNotDefault(port, nameof(port), failureMessage: $"It does not appear that the port number was actually set (port: {port})");
    Ensure.IsNotNull(logger, nameof(logger));

    ClientId = Guid.NewGuid();
    _logger = logger;
    _hostname = hostname;
    _port = port;

    // configure the domain
    _instanceDomain = AppDomain.CreateDomain(
        $"vcrypt_rkl_instance_{ClientId}",
        null,
        AppDomain.CurrentDomain.SetupInformation);

    // using the configured domain, grab a command instance from which we can
    // marshall in some data
    _rklServiceRuntime = (IRklServiceRuntime)_instanceDomain.CreateInstanceAndUnwrap(
        typeof(VCServiceRuntime).Assembly.FullName,
        typeof(VCServiceRuntime).FullName);
}
All this does is create a named domain from which my actual service will run in isolation. Now, most articles that I came across on how to actually execute within the domain sort of over-simplify how it works. The examples typically involve calling myDomain.DoCallback(() => ...); which isn't wrong, but trying to get data in and out of that domain will likely become problematic, as serialization will likely stop you dead in your tracks. Simply put, objects that are instantiated outside of DoCallback() are not the same objects when called from inside of DoCallback(), since they were created outside of this domain (see object marshalling). So you'll likely get all kinds of serialization errors. This isn't a problem if the entire operation, input, output and all, can occur from inside myDomain.DoCallback(), but it is problematic if you need to use external parameters and return something across this AppDomain back to the originating domain.
I came across a different pattern here on SO that worked out for me and solved this problem. Look at _rklServiceRuntime = in my sample ctor. What this is doing is actually asking the domain to instantiate an object for you, to act as a proxy from that domain. This will allow you to marshall some objects in and out of it. Here is my implementation of IRklServiceRuntime:
public interface IRklServiceRuntime
{
    RklResponse Run(RklRequest request, string hostname, int port, Guid clientId, IHsmLogger logger);
}

public class VCServiceRuntime : MarshalByRefObject, IRklServiceRuntime
{
    public RklResponse Run(
        RklRequest request,
        string hostname,
        int port,
        Guid clientId,
        IHsmLogger logger)
    {
        Ensure.IsNotNull(request, nameof(request));
        Ensure.IsNotNullOrEmpty(hostname, nameof(hostname));
        Ensure.IsNotDefault(port, nameof(port), failureMessage: $"It does not appear that the port number was actually set (port: {port})");
        Ensure.IsNotNull(logger, nameof(logger));

        // these are set here instead of passed in because they are not
        // serializable
        var clientCert = ApplicationValues.VCClientCertificate;
        var clientCerts = new X509Certificate2Collection(clientCert);

        using (var client = new VCServiceClient(hostname, port, clientCerts, clientId, logger))
        {
            var response = client.RetrieveDeviceKeys(request);
            return response;
        }
    }
}
This inherits from MarshalByRefObject, which allows it to cross AppDomain boundaries, and has a single method that takes your external parameters and executes your logic from within the domain that instantiated it.
So now back to the service adapter: all the service adapter has to do now is call _rklServiceRuntime.Run(...) and feed in the necessary serializable parameters. Now I just create as many instances of the service adapter as I need, and they all run in their own domains. This works for me because my SSL calls are small and brief, and these requests are made inside of an internal web service where instancing requests like this is very important. Here is the complete adapter:
public class VCRklServiceAdapter : IRklService
{
    private readonly string _hostname;
    private readonly int _port;
    private readonly IHsmLogger _logger;
    private readonly AppDomain _instanceDomain;
    private readonly IRklServiceRuntime _rklServiceRuntime;

    public Guid ClientId { get; }

    public VCRklServiceAdapter(
        string hostname,
        int port,
        IHsmLogger logger)
    {
        Ensure.IsNotNullOrEmpty(hostname, nameof(hostname));
        Ensure.IsNotDefault(port, nameof(port), failureMessage: $"It does not appear that the port number was actually set (port: {port})");
        Ensure.IsNotNull(logger, nameof(logger));

        ClientId = Guid.NewGuid();
        _logger = logger;
        _hostname = hostname;
        _port = port;

        // configure the domain
        _instanceDomain = AppDomain.CreateDomain(
            $"vc_rkl_instance_{ClientId}",
            null,
            AppDomain.CurrentDomain.SetupInformation);

        // using the configured domain, grab a command instance from which we can
        // marshall in some data
        _rklServiceRuntime = (IRklServiceRuntime)_instanceDomain.CreateInstanceAndUnwrap(
            typeof(VCServiceRuntime).Assembly.FullName,
            typeof(VCServiceRuntime).FullName);
    }

    public RklResponse GetKeys(RklRequest rklRequest)
    {
        Ensure.IsNotNull(rklRequest, nameof(rklRequest));

        var response = _rklServiceRuntime.Run(
            rklRequest,
            _hostname,
            _port,
            ClientId,
            _logger);
        return response;
    }

    /// <summary>
    /// Releases unmanaged and - optionally - managed resources.
    /// </summary>
    public void Dispose()
    {
        AppDomain.Unload(_instanceDomain);
    }
}
Notice the Dispose method: don't forget to unload the domain. This service implements IRklService, which implements IDisposable, so when I use it, it is used with a using statement.
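A sketch of the consuming side (the hostname, port, logger and request values are placeholders):

// IRklService implements IDisposable, so the using block
// unloads the AppDomain once the call completes.
using (IRklService service = new VCRklServiceAdapter("hsm.example.local", 9050, logger))
{
    RklResponse response = service.GetKeys(request);
}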
This seems a bit contrived, but it's really not, and now the logic will run in its own domain, in isolation, and thus the caching logic remains intact but non-problematic. Much better than meddling with the SslSessionsCache!
Please forgive any naming inconsistencies; I sanitized the actual names quickly after writing the post. I hope this helps someone!

Is it possible to track all outgoing WCF call?

Our application calls external services like
//in client factory
FooServiceClient client = new FooServiceClient(binding, endpointAddress);
//in application code
client.BarMethod(); //or other methods
Is it possible to track all of these calls (e.g. by events or something like that) so that the application can collect statistics like the number of calls, response time, etc.? Note that my application itself needs to access the values, not only write them to a log file.
What I can think of is to create a subclass of the Visual Studio-generated FooServiceClient and then add code like this
public override void BarMethod()
{
    RaiseStart("BarMethod");
    base.BarMethod();
    RaiseEnd("BarMethod");
}
and the RaiseStart and RaiseEnd methods will raise events that will be listened to by my code.
But this seems tedious (because there are a lot of methods to override), there is a lot of repeated code, my code needs to change every time the service contract changes, etc. Is there a simpler way to achieve this, for example by using reflection to create the subclass or by tapping into a built-in WCF mechanism, if any?
The first thing I would look at is whether the counters available in your server's Performance Monitor can provide you with the kind of feedback you need. There are built-in counters for a variety of metrics for ServiceModel endpoints, operations and services. Here is some more info: http://msdn.microsoft.com/en-us/library/ms735098.aspx
You could try building an implementation of IClientMessageInspector, which has methods that are called before the request is sent and after the reply is received. You can inspect the message, write logs, etc. in these methods.
You provide an implementation of IEndpointBehavior which applies your message inspector, and then add the endpoint behavior to your proxy client instance:
client.Endpoint.Behaviors.Add(new MyEndpointBehavior());
Check out the docs for message inspectors and endpoint behaviors; there are many different ways of applying them (attributes, code, endpoint XML config), and I can't remember off the top of my head which apply to which, as there are also IServiceBehavior and IContractBehavior. I do know for sure that endpoint behaviors can be added to the client proxy's behavior collection, though.
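A minimal sketch of that approach, assuming you want to time each call (the class names and the statistics hook are illustrative; the interfaces and their signatures are the standard WCF ones):

using System.Diagnostics;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

class TimingMessageInspector : IClientMessageInspector
{
    // Called just before the request goes on the wire; whatever we return
    // is handed back to us as correlationState in AfterReceiveReply.
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        return Stopwatch.StartNew();
    }

    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
        var stopwatch = (Stopwatch)correlationState;
        stopwatch.Stop();
        // Feed reply.Headers.Action and stopwatch.Elapsed into your statistics here.
    }
}

class TimingEndpointBehavior : IEndpointBehavior
{
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        clientRuntime.MessageInspectors.Add(new TimingMessageInspector());
    }

    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}

It is attached exactly as above: client.Endpoint.Behaviors.Add(new TimingEndpointBehavior());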
I found a simple way to do it by using a dynamic proxy, for example Castle's DynamicProxy.
First, use a factory method to generate your client object:
IFoo GetClient()
{
    FooClient client = new FooClient(); // or new FooClient(binding, endpointAddress); if you want
    ProxyGenerator pg = new ProxyGenerator();
    return pg.CreateInterfaceProxyWithTarget<IFoo>(client, new WcfCallInterceptor());
}
And define the interceptor
internal class WcfCallInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        try
        {
            RaiseStart(invocation.Method.Name);
            invocation.Proceed();
        }
        finally
        {
            RaiseEnd(invocation.Method.Name);
        }
    }

    // you can define your implementation for RaiseStart and RaiseEnd
}
I can also change the Intercept method as I wish; for example, I can add a catch block to call a different handler in case the method throws an exception, etc.

How to share static object across processes in WCF?

Background:
I asked this question about creating a cached provider structure for my WCF service. I've implemented that design now, but what I've noticed in testing is that the providers aren't actually being cached. How do I know this? I added the following debug-level logging to my service:
private static readonly IDictionary<string, XmlLoaderProviderBase> _providerDictionary =
    new Dictionary<string, XmlLoaderProviderBase>();

public void Load(LoadRequest loadRequest)
{
    XmlLoaderProviderBase xmlLoader;
    if (_providerDictionary.ContainsKey(loadRequest.TransferTypeCode))
    {
        // Use cached provider...
        xmlLoader = _providerDictionary[loadRequest.TransferTypeCode];
        Logger.Log.DebugFormat("Found cached provider: {0} for transfer type: {1}",
            xmlLoader.GetType(), loadRequest.TransferTypeCode);
    }
    else
    {
        // Instantiate provider for the first time; add provider to cache...
        xmlLoader = XmlLoaderProviderFactory.CreateProvider(loadRequest.TransferTypeCode);
        _providerDictionary.Add(loadRequest.TransferTypeCode, xmlLoader);
        Logger.Log.DebugFormat("Instantiating provider: {0} for transfer type: {1}",
            xmlLoader.GetType(), loadRequest.TransferTypeCode);
    }
    xmlLoader.Load(loadRequest);
}
And what I notice is that no matter how many times I call the service, a provider is always instantiated (it never finds the cached version). Thankfully log4net is pretty helpful, and it shows that each call to the service runs in its own unique process (i.e. it has a unique process ID). So as it stands, the providers will never be cached. How can I get this to actually cache providers, and read that dictionary across processes? Is this even possible?
I also read similar questions to this here on SO, and I noticed the InstanceContextMode setting. I don't think I want this, because I think it will hurt performance (am I wrong? way off?). In a nutshell, my desire is to share the _providerDictionary across all processes/service instances... please help!
I'm going to steal @slfan's comment:
Use
[ServiceBehavior(ConcurrencyMode=ConcurrencyMode.Multiple, InstanceContextMode=InstanceContextMode.Single)]
Default InstanceContextMode is PerSession.
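Applied to the service from the question, that could look something like the following sketch (LoaderService and ILoaderService are assumed names; with a single instance serving concurrent calls, the shared dictionary should also become thread-safe):

using System.Collections.Concurrent;
using System.ServiceModel;

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple,
                 InstanceContextMode = InstanceContextMode.Single)]
public class LoaderService : ILoaderService // hypothetical names
{
    // ConcurrentDictionary replaces Dictionary because multiple
    // requests may now touch the cache at the same time.
    private static readonly ConcurrentDictionary<string, XmlLoaderProviderBase> _providerDictionary =
        new ConcurrentDictionary<string, XmlLoaderProviderBase>();

    public void Load(LoadRequest loadRequest)
    {
        var xmlLoader = _providerDictionary.GetOrAdd(
            loadRequest.TransferTypeCode,
            code => XmlLoaderProviderFactory.CreateProvider(code));
        xmlLoader.Load(loadRequest);
    }
}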
I would look into creating a custom caching framework using System.Runtime.Caching.ObjectCache / MemoryCache. It is thread-safe; note, though, that a MemoryCache instance lives in a single process, so to truly share it across service instances you would still need to host the service as a single instance (as above) or move the cache out of process.
See the following links:
http://technovivek.blogspot.com/2013/08/c-in-memory-cache-using-net-40-object.html
http://www.codeproject.com/Articles/290935/Using-MemoryCache-in-Net
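For reference, a minimal sketch of the provider cache from the question built on MemoryCache (the expiration policy is an assumption; MemoryCache.Default is shared within one process, not across processes):

using System;
using System.Runtime.Caching;

ObjectCache cache = MemoryCache.Default; // thread-safe within the process

var candidate = XmlLoaderProviderFactory.CreateProvider(loadRequest.TransferTypeCode);
var policy = new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(30) };

// AddOrGetExisting returns the previously cached entry,
// or null if our candidate was just inserted.
var cached = (XmlLoaderProviderBase)cache.AddOrGetExisting(
    loadRequest.TransferTypeCode, candidate, policy);
var provider = cached ?? candidate;
provider.Load(loadRequest);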

WCF, ServiceHost - CreateChannel, don't create new remote instance

I got a usual WCF service set up like this:
private ServiceHost serviceHost = null;

protected override void OnStart(string[] args)
{
    if (serviceHost != null)
        serviceHost.Close();

    Uri[] baseAddress = new Uri[] {
        new Uri("net.pipe://localhost") };
    string PipeName = "DatabaseService";

    serviceHost = new ServiceHost(typeof(Kernel), baseAddress); // Kernel implements IDatabase
    serviceHost.AddServiceEndpoint(typeof(IDatabase), new NetNamedPipeBinding(), PipeName);
    serviceHost.Open();
}

protected override void OnStop()
{
    if (serviceHost != null && serviceHost.State != CommunicationState.Closed)
    {
        serviceHost.Close();
        serviceHost = null;
    }
}
From this code, I guess, one instance of Kernel is created, because I have this service running only once.
I create a proxy object using the ChannelFactory like this:
pipeFactory = new ChannelFactory<IDatabase>(new NetNamedPipeBinding(), new EndpointAddress("net.pipe://localhost/DatabaseService"));
m_Database = pipeFactory.CreateChannel();
I have to say that my Kernel instance accesses a local file, and therefore it's very important that I have only one physical instance of this class. I want my service to take care of that, but here comes my problem.
While the service is running and a single channel is created and active, a second client comes up and wants to create a channel too. That works properly, but if I start using the proxy object a FaultException is thrown, because a second instance of my Kernel class is created.
Therefore I'm guessing that an instance of the Kernel class is created by every CreateChannel call.
Is it possible to avoid the creation of a new instance and return always a reference to a single Kernel class instance when CreateChannel is called?
Regards,
inva
Yes, by default WCF uses the per-session or per-call instancing model, i.e. each incoming service request from a client gets a new, separate instance of your service (implementation) class.
You can control this, of course, using things like the InstanceContextMode (PerSession is the default, at least on bindings that support it; PerCall is the recommended best practice; and Single is the singleton) and the ConcurrencyMode settings on your service.
You can define these either in config, or directly on your service class.
[ServiceBehavior(InstanceContextMode=InstanceContextMode.Single)]
public class CalculatorService : ICalculatorInstance
{
...
}
See the MSDN documentation on WCF Sessions, Instancing and Concurrency for a great and extensive explanation of all details. Also read the excellent MSDN Magazine article Discover Mighty Instance Management Techniques For Developing WCF Apps by Juval Lowy, a great resource always!
If you do switch your service class to be a singleton (InstanceContextMode=InstanceContextMode.Single), you need to be aware of the two trade-offs:
- Either you define the ConcurrencyMode to also be Single, which effectively means only one request can ever be handled at a time; requests will be serialized, that is, if handling a request takes a fairly long time, subsequent requests will have to wait and might end up timing out.
- The other option is to set the ConcurrencyMode to Multiple; then your singleton service class can handle multiple requests at once, but this also means you have to write your service class in a fully thread-safe manner and you need to synchronize and protect any concurrent access to shared data members (a sketch follows below) - typically a very tricky and hard-to-do-right programming exercise.
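A sketch of the second option applied to the Kernel service from the question (WriteRecord is a hypothetical operation; the real IDatabase contract is not shown in the question):

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class Kernel : IDatabase
{
    private readonly object _fileLock = new object();

    public void WriteRecord(string record) // hypothetical operation
    {
        // Many requests may run concurrently; only the access to the
        // shared local file is serialized.
        lock (_fileLock)
        {
            // ... write to the local file ...
        }
    }
}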

Dealing with concurrency and complex WCF services interacting with objects of the overall application

I am enjoying creating and hosting WCF services.
Up until now I can create services defining contracts for the service and data (interfaces) and defining hosts and configuration options to reach them (endpoint specifications).
Well, consider this piece of code defining a service and using it (no mention of endpoints, which are defined in app.config, not shown here):
[ServiceContract]
public interface IMyService {
    [OperationContract]
    string Operation1(int param1);

    [OperationContract]
    string Operation2(int param2);
}

public class MyService : IMyService {
    public string Operation1(int param1) { ... }
    public string Operation2(int param2) { ... }
}

public class Program {
    public static void Main(string[] args) {
        using (ServiceHost host = new ServiceHost(typeof(MyService))) {
            host.Open();
            ...
            host.Close();
        }
    }
}
Well, this structure is good when creating something that could be called a Standalone service.
What if I needed my service to use objects of a greater application?
For example, I need a service that does something based on a certain collection defined somewhere in my program (which is hosting the service). The service must look into this collection and search for and return a particular element.
The list I am talking about is a list managed by the program and edited and modified by it.
I have the following questions:
1) How can I build a service able to handle this list?
I know that a possible option is using the overloaded ServiceHost constructor that accepts an object instance instead of a service Type.
So I could pass my list there. Is that good?
[ServiceContract]
public interface IMyService {
    [OperationContract]
    string Operation1(int param1);

    [OperationContract]
    string Operation2(int param2);
}

public class MyService : IMyService {
    private List<> myinternallist;

    public MyService(List<> mylist) {
        // Constructing the service passing the list
    }

    public string Operation1(int param1) { ... }
    public string Operation2(int param2) { ... }
}

public class Program {
    public static void Main(string[] args) {
        List<> thelist;
        ...
        MyService S = new MyService(thelist);
        using (ServiceHost host = new ServiceHost(S)) {
            host.Open();
            ...
            host.Close();
            // Here my application creates functions and other things that manage the queue.
            // For this reason my application will edit the list (it can be a thread or
            // callbacks from the user interface)
        }
    }
}
This example should clarify things.
Is this a good way of doing it? Am I doing it right?
2) How do I handle conflicts on this shared resource between my service and my application?
When my application runs, hosting the service, it can insert items into the list and delete them, and the service can do the same. Do I need a mutex? How do I handle this?
Please note that the concurrency issue concerns two actors: the main application and the service. It is true that the service is a singleton, but the application acts on the list!
I assume that the service is called by an external entity; when this happens, the application is still running, right? Is there concurrency in this case?
Thank you
Regarding point 2, you can use Concurrent Collections to manage most of the thread safety required.
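For example, the shared collection could be swapped for a concurrent one (the Item element type is a placeholder for whatever the List<> in the question holds):

using System.Collections.Concurrent;

// Both the host application and the service instance can add and remove
// entries without any explicit locking.
var sharedItems = new ConcurrentDictionary<int, Item>(); // Item is hypothetical

sharedItems.TryAdd(42, new Item());

Item removed;
sharedItems.TryRemove(42, out removed);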
I'm not sure what you mean by point 1. It sounds like you're describing basic polymorphism, but perhaps you could clarify with an example please?
EDIT: In response to comments you've made to Sixto's answer, consider using WCF sessions. From what you've described, it sounds to me like the WCF service should sit in a separate host application. The application you are currently using should have a service reference to the service, and by using sessions it would be able to call an operation mimicking your requirement of instantiating the service with a list defined by the current client application.
Combine this with my comment on exposing operations that allow interaction with this list, and you'll be able to run multiple client machines working on session-stored lists.
Hope that's explained well enough.
Adding the constructor to MyService for passing the list certainly will work as you'd expect. As I said in my comment on the question, however, the ServiceHost will only ever contain a single instance of the MyService class, so the list will not be shared, because only one service instance will consume it.
I would look at a dependency injection (DI) container for WCF to do what you are trying to do. Let the DI container provide the singleton list instance to your services. Also, @Smudge202 is absolutely correct that the Concurrent Collection functionality is what you need to implement the list.
UPDATE based on the comments thread:
The DI approach works by getting all of an object's dependencies from the DI container instead of creating them manually in code. You register all the types that will be provided by the container as part of the application start-up. When the application (or WCF) needs a new object instance, it requests it from the container instead of "newing" it up. The Castle Windsor WCF integration library, for example, implements all the wiring needed to provide WCF a service instance from the container. This post explains the details of how to use the Microsoft Unity DI container with WCF if you want to roll your own WCF integration.
The shared list referenced in this question would be registered in the container as an already-instantiated object from your application. When a WCF service instance is spun up from the DI container, all the constructor parameters will be provided, including a reference to the shared list. There is a lot of information out there on dependency injection and inversion of control, but this Martin Fowler article is a good place to start.
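As a rough sketch of the registration side with Castle Windsor (Item is a placeholder element type; IMyService and MyService echo the question's code; the actual WCF hookup comes from Windsor's WcfIntegration facility, omitted here):

using System.Collections.Generic;
using Castle.MicroKernel.Registration;
using Castle.Windsor;

var container = new WindsorContainer();

// Hand the container the list the application already owns...
container.Register(Component.For<List<Item>>().Instance(thelist));

// ...and register the service type; Windsor supplies the list to the
// MyService(List<Item> mylist) constructor whenever WCF asks for an instance.
container.Register(Component.For<IMyService>().ImplementedBy<MyService>());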
