Does WCF run the session on more than one thread? - c#

I have a WCF service (hosted in IIS) that is set up to use sessions. It seems to work: when Application_PostAcquireRequestState is called, I have a session ID.
I end up using it like this (in my Global.asax):
if (Context.Handler is IRequiresSessionState)
{
    log4net.ThreadContext.Properties["sessionId"] = Session.SessionID;
}
That seems to work fine. The value is stored in my log4net property.
But when my service operation begins (my actual WCF service code) the log4net property is null again.
Since the property is stored per thread (ThreadContext), I can only assume that this means that the session is setup on one thread then executed on another thread. Am I right?
Is there any way to get my log4net property set on the correct thread (without having to remember to make the above call at the start of every single service operation)?

Yes, IIS may use multiple threads to service multiple WCF requests. See http://msdn.microsoft.com/en-us/library/cc512374.aspx for more detail.
You might consider using different instances of a logger for each WCF request.

There are multiple scenarios where WCF might change threads on you:
The Global.asax thread is not guaranteed to be used for a service call (in fact it's unlikely).
If there are multiple calls during the same session, the thread may also change between calls to the same service instance.
In theory, state information like this should be stored in an OperationContext object. However, because log4net uses thread-local storage, that becomes an awkward solution.
Is there any way to get my log4net property set on the correct thread (without having to remember to make the above call at the start of every single service operation)?
Yes. Create a custom IOperationInvoker. The best example I know of is Carlos Figueira's blog. If you apply this as a service behavior your log4net property should always be defined for the service code.
One warning: when adding to thread-local storage, be sure to clean up. That's why log4net.ThreadContext.Stacks[].Push() returns an IDisposable. In other words, your Invoke method should look like this (incomplete and untested):
public object Invoke(object instance, object[] inputs, out object[] outputs)
{
    using (log4net.ThreadContext.Stacks[key].Push(value))
    {
        return this.originalInvoker.Invoke(instance, inputs, out outputs);
    }
}
See Carlos' blog to understand why you are calling the "originalInvoker". Note that if you want to support async operations, you need to implement additional methods.
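Filling that out, here is a hedged sketch of the full invoker (the class name and the way the session id is fetched are my assumptions; see Carlos' post for the behavior that installs it on an operation):

using System;
using System.ServiceModel.Dispatcher;

public class LoggingOperationInvoker : IOperationInvoker
{
    private readonly IOperationInvoker originalInvoker;

    public LoggingOperationInvoker(IOperationInvoker originalInvoker)
    {
        this.originalInvoker = originalInvoker;
    }

    public object[] AllocateInputs()
    {
        return this.originalInvoker.AllocateInputs();
    }

    public object Invoke(object instance, object[] inputs, out object[] outputs)
    {
        // Push the session id for the duration of the call; disposing pops
        // the stack, so thread-local state is cleaned up even on exceptions.
        var context = System.Web.HttpContext.Current;
        string sessionId = (context != null && context.Session != null)
            ? context.Session.SessionID
            : "(no session)";
        using (log4net.ThreadContext.Stacks["sessionId"].Push(sessionId))
        {
            return this.originalInvoker.Invoke(instance, inputs, out outputs);
        }
    }

    // Implement these two as well if you need to support async operations.
    public IAsyncResult InvokeBegin(object instance, object[] inputs,
        AsyncCallback callback, object state)
    {
        throw new NotSupportedException();
    }

    public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
    {
        throw new NotSupportedException();
    }

    public bool IsSynchronous
    {
        get { return true; }
    }
}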

Custom properties do not need to be strings. So you could store an instance of the following class in the global context:
public class SessionIdProperty
{
    public override string ToString()
    {
        // error handling omitted; Session isn't in scope in a plain class,
        // so go through HttpContext.Current
        return HttpContext.Current.Session.SessionID;
    }
}
This way log4net can access the Session object directly when it logs a message. Log4net calls the ToString() method on non-string properties.
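A minimal usage sketch, assuming you register the property once at startup (for example in Global.asax's Application_Start):

// Registered once; log4net evaluates ToString() each time a message is
// logged, so the current request's session id is picked up automatically.
log4net.GlobalContext.Properties["sessionId"] = new SessionIdProperty();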

Related

Using a WCF service to share information across clients

I'm trying to develop a system to share information across two Windows applications with different update loops.
I developed a solution that uses a WCF service to store and retrieve data. However, this data is different across clients, so each application sees different values.
The service I tried to implement is similar to this:
namespace TEST_Service_ServiceLibrary
{
    [ServiceContract]
    public interface TEST_ServiceInterface
    {
        [OperationContract]
        string GetData();

        [OperationContract]
        void StoreData(string data);
    }
}
namespace TEST_Service_ServiceLibrary
{
    // Core service of the application, stores and provides data:
    public class TEST_Service : TEST_ServiceInterface
    {
        string TEST_string;

        // Used to pull stored data
        public string GetData()
        {
            return TEST_string;
        }

        // Used to store data
        public void StoreData(string data)
        {
            TEST_string = data;
        }
    }
}
Each of the applications creates a TEST_Service client.
I tested the GetData and StoreData functions and they work fine independently; however, when I use StoreData from one application and then call GetData from the other, the data appears to be empty.
I have looked around but haven't found a solution to this problem. Is there a workaround, or should I change my approach? I thought of using a local database, but I'm not sure that's the best way to solve it.
Thanks a lot
You have more than one instance of your service class. If you want to have your data in memory, you will need to run it in single instance mode:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
Now keeping your data in memory might not be the best option anyway. You should look for a data store of some kind and then make that store a persistent instance with a single interface. Then it does not matter how many of your service instances are used.
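Applied to the service from the question, that would look something like this (a sketch of the attribute only; the class body is unchanged):

using System.ServiceModel;

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class TEST_Service : TEST_ServiceInterface
{
    // Same body as above. With a single shared instance (and the default
    // ConcurrencyMode.Single serializing calls), all clients now see the
    // same TEST_string.
}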
If your WCF service was storing information in a database, then information stored on one request would go to the database, and when another request retrieved it, the result would come from that database. So if one client stored something, another could retrieve it.
The reason why this isn't working is because in response to each request your application is creating a new instance of the TEST_Service class. That means TEST_string, where you are storing values between requests, is a new string. It doesn't contain the previous value.
For experimentation you could try changing the string to static:
static string TEST_string;
...and then the value would persist between instances of the service class. But that still wouldn't be effective because your WCF service could be deployed to multiple servers, and then each one would have a separate instance of the class. Updating one wouldn't update the others. Or, if the service application restarted then the value would be lost. (From the context I assume that you're just experimenting with this.)
So ultimately you'd want some way to persist data that wouldn't depend on any of those factors, but would "survive" even when the instance of the service class goes out of scope or the application shuts down.

WebAPI Controller reuse

Let's say I have the following ApiController in my ASP.NET WebAPI Application:
public class MyApiController : ApiController
{
    private string str;

    [HttpGet]
    public string SetStr(string str)
    {
        this.str = str;
        MaybeSleep(); // Some executions take longer, some don't.
        return this.str;
    }
}
(Reality is a bit more complicated, but this should be all that matters)
This is running fine in my environment and certain others, always returning the input value, even under heavy server load.
In two environments, however, str sometimes "magically" changes between set and return, even without too much server load. However, it always changes to values that were sent to the server around that time, just not always the ones sent in this request.
So, my questions are:
Is ApiController reuse a behaviour that I just have to expect, or should a new ApiController be created, used and destroyed for every single request the server processes?
Is this behaviour depending on ASP.NET version, IIS version and/or a Web.config setting?
Is there documentation about the behaviour of private ApiController variables available from Microsoft?
Or is this possibly a known bug in a certain .NET or ASP.NET version?
A request should not modify the state of a controller. From the entry method to any other methods called, you can pass parameters as needed, so there's no need to modify the controller object itself according to the request.
If there's some state that you need to maintain throughout the request that you can't pass through parameters to other methods, the best place to do that is on the HttpContext since that is always specific to the request. (Even then that scenario probably isn't too common.)
Instead of this:
public string SetStr(string str)
{
    this.str = str;
    MaybeSleep(); // Some executions take longer, some don't.
    return this.str;
}
this:
public string SetStr(string str)
{
    // ApiController has no HttpContext property; use HttpContext.Current
    // (System.Web), and cast since Items stores objects.
    HttpContext.Current.Items["str"] = str; // I'd declare a constant for "str"
    MaybeSleep();
    return (string)HttpContext.Current.Items["str"];
}
Is ApiController reuse a behaviour that I just have to expect, or should a new ApiController be created, used and destroyed for every single request the server processes?
When a request is received, a new controller instance is created by the ControllerFactory or the DependencyResolver.
Basically, the main thread creates the controller instance, and then that same instance is shared between multiple threads until the request is completed.
The rest of the question is not relevant anymore since the first assumption is not correct.
Ideally, if you execute a long-running process, you want to use a scheduler so that it will not freeze the UI.
You can read more at Scott Hanselman's blog - How to run Background Tasks in ASP.NET

How to separate the communication layer from the processing layer?

I created an application that provides several services. Each service provides a specific processing capability, except one, the main service, which returns true or false to clients asking whether a specified processing capability is available.
Now I would like to modify the application, leaving the main service unchanged and adding support for installing plugins with new processing capabilities: each plugin should add new processing capabilities without having to implement a new service itself, yet after installing a new plugin, a new service should be available. In this way, a plugin does not have to handle the communication layer. In other words, I would like to separate the communication layer from the processing layer, in order to simplify the creation of new plugins.
Is it possible?
I could create two services: the main service and the service for processing.
The first service may be used by clients to know if a certain feature is present on the server (for example, clients may ask the server if it has installed the plugin that provides the functionality for solving differential equations).
The second service could be used to send a generic task and to receive a general result, for example:
Result executeTask(Task task);
where Result and Task are abstract classes...
For example, if I develop a plugin to solve differential equations, I first create the classes for transferring data:
public class DifferentialEquationTask : Task
// This class contains the data of the differential equation to be solved.
...
public class DifferentialEquationResult : Result
// This class contains the result.
...
Therefore, the client should instantiate a new DifferentialEquationTask object and pass it to the method of the second service:
DifferentialEquationTask myTask = new DifferentialEquationTask(...);
...
Result result = executeTask(myTask); // called by basic application
// The second service receives myTask as a Task object.
// This Task object also contains the destination plugin, so myTask is sent
// to the correct plugin, which converts it to DifferentialEquationTask
...
myResult = result as DifferentialEquationResult;
// received by the client
Moreover, each plugin should have a version for the application server and a version for the client application.
An alternative would be to include the service in the plugin itself: in this way, a new plugin should implement a new functionality and expose it via an additional service.
In summary, I considered the following two alternatives:
a main service to ask the server if it has a plugin or not, and a second service to deliver tasks to the correct plugin;
a main service to ask if the server has a plugin or not, and various additional services (one additional service for each plugin installed).
In order to choose the best approach, I could use the following criteria:
Which of the two alternatives may provide better performance?
What advantages would be obtained by using a new service for each plugin rather than a single service that delivers tasks to the correct plugin?
Which of the two alternatives simplifies the development of a new plugin?
Being a novice, I was wondering if there was a better approach...
Thanks a lot!
It seems like the main service could maintain a dictionary of plugins, indexed by name. Then for a client to see if the server provides a particular service, all the main service has to do is look up the name in the dictionary. And to process, the service just has to call a method on the object that's in the value portion of the dictionary entry. An example:
You have three abstract classes: Service, ServiceResult, and ServiceTask. The contents of ServiceTask and ServiceResult aren't really important for this discussion. Service must have a parameterless constructor and a method called Process that takes a ServiceTask as its sole parameter. So your differential equation solver would look like:
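A sketch of those base classes as just described (the bodies are left empty since their contents don't matter for this discussion):

public abstract class ServiceTask { }

public abstract class ServiceResult { }

public abstract class Service
{
    // The parameterless constructor is implicit; each plugin supplies
    // its own implementation of Process.
    public abstract ServiceResult Process(ServiceTask task);
}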
public class DiffeqSolver : Service
{
    public DiffeqSolver()
    {
        // do any required initialization here
    }

    public override ServiceResult Process(ServiceTask task)
    {
        DiffeqTask dtask = task as DiffeqTask;
        if (dtask == null)
        {
            // Error. User didn't pass a DiffeqTask.
            // Somehow communicate error back to client.
        }
        // Here, solve the diff eq and return the result.
    }
}
The main service is somehow notified of existing plugins. It maintains a dictionary:
Dictionary<string, Service> Services = new Dictionary<string, Service>();
I assume you have some idea how you're going to load the plugins. What you want, in effect, is for the dictionary to contain:
Key = "DiffeqSolver", Value = new DiffeqSolver();
Key = "ServiceType1", Value = new ServiceType1();
etc., etc.
You can then have two methods for the main service: ServiceIsSupported and Process:
bool ServiceIsSupported(string serviceName)
{
    return Services.ContainsKey(serviceName);
}

ServiceResult Process(string serviceName, ServiceTask task)
{
    Service srv;
    if (Services.TryGetValue(serviceName, out srv))
    {
        return srv.Process(task);
    }
    else
    {
        // The service isn't supported.
        // Return a failure result
        return FailedServiceResult;
    }
}
I've simplified that to some extent. In particular, I'm using a Dictionary, which is not thread safe. You'd want to use a ConcurrentDictionary, or use locks to synchronize access to your dictionary.
The more difficult part, I think, will be loading the plugins. But there are many available examples of creating a plugin architecture. I think you can find what you need.
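As a starting point, here is a hedged sketch of one common approach (the plugin folder and the use of reflection are my assumptions, not part of the answer above): scan a directory for assemblies and instantiate every concrete Service subclass found.

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Reflection;

public static class PluginLoader
{
    public static ConcurrentDictionary<string, Service> LoadPlugins(string pluginDir)
    {
        var services = new ConcurrentDictionary<string, Service>();
        foreach (string path in Directory.GetFiles(pluginDir, "*.dll"))
        {
            Assembly assembly = Assembly.LoadFrom(path);
            foreach (Type type in assembly.GetTypes())
            {
                // Index each concrete Service subclass by its type name,
                // so the dictionary ends up with e.g. "DiffeqSolver".
                if (typeof(Service).IsAssignableFrom(type) && !type.IsAbstract)
                {
                    services[type.Name] = (Service)Activator.CreateInstance(type);
                }
            }
        }
        return services;
    }
}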

Locking an ASP.NET application variable

I'm using a 3rd party web service in my ASP.NET application. Calls to the 3rd party web service have to be synchronized, but ASP.NET is obviously multi-threaded and multiple page requests could be made that result in simultaneous calls to the 3rd party web service. Calls to the web service are encapsulated in a custom object. My thought is to store the object in an application variable and use the C# lock keyword to force synchronized use of it.
I'm nervous because I'm new to multithreaded concepts and I've read that you shouldn't lock a public object (which my application variable effectively is). I've also read that if the locked block of code fails (which it could if the web service fails), then it could destabilize the app domain and bring down the application.
I should mention that the 3rd party web service is rarely used in my website, and it's going to be rare that two requests to it are made at the same time.
Here's a rough code sample of how I'd make calls to the web service:
ThirdPartWebService objWebService = Application["ThirdPartWebService"] as ThirdPartWebService;
lock (objWebService)
{
    objWebService.CallThatNeedsToBeSynchronized();
}
You should create a private static readonly object _lock = new object(); in the class that makes the web service calls, and use that as a lock. Since the object is static, there will only be one of them throughout your application, a singleton object if you wish (http://en.wikipedia.org/wiki/Singleton_pattern):
public class MyWebServiceWrapper
{
    private static readonly object _lock = new object();

    public void CallWebService()
    {
        lock (_lock)
        {
            // In a plain class, application state must be reached via
            // HttpContext.Current rather than a Page's Application property.
            var objWebService = (ThirdPartWebService)HttpContext.Current.Application["ThirdPartWebService"];
            objWebService.CallThatNeedsToBeSynchronized();
        }
    }
}
If your class that makes the web service call doesn't do anything else, you can also just lock on this (lock(this)). Just remember that this means a call to one method will block all the other methods as well, which is why you normally shouldn't lock on this.
If it is vital that you only ever have a single call to the service at any time, I recommend you write your own Windows service. This depends on how much fault tolerance you want.
Let's say for example you make a call to the web service, but then the application pool is recycled. When a new request comes in it would be handled by a new instance of your application which could then make the call to the web service (Even if the other instance is running).
You could pass this off to a Windows service, then use a polling mechanism from the client to check if the service has finished (the client would ask IIS "are you done?", and IIS would look for some indication from the Windows service that it was done). This approach avoids locking anything in IIS, and you won't waste critical resources such as threads in your thread pool waiting on a third-party service.
You should never lock on a single resource in your web application...it's just too risky.
Edit
Another option is to use the Monitor object directly:
if (System.Threading.Monitor.TryEnter(syncObj, 10))
{
    try
    {
        // CallWebService
    }
    finally
    {
        System.Threading.Monitor.Exit(syncObj);
    }
}
else
{
    // Tell client they are still waiting
}
TryEnter will block until a lock is acquired or 10 milliseconds have passed. On timeout you could tell the client they need to retry, and your client code could then decide whether to reissue the request. You could also use a semaphore or mutex (I forget which one is more appropriate here). Assuming you have permissions to use them, they would give you something you can lock on at the machine level, which would handle the app-recycling case.
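For the machine-level option, a hedged sketch using a named Mutex (the mutex name is illustrative); because a named mutex is owned by the operating system rather than the app domain, it survives application pool recycles:

using (var mutex = new System.Threading.Mutex(false, @"Global\ThirdPartyWebServiceLock"))
{
    if (mutex.WaitOne(TimeSpan.FromSeconds(10)))
    {
        try
        {
            // CallWebService
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
    else
    {
        // Tell client they are still waiting
    }
}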
You can lock on a static shared object. This is a common way to use locks in .NET. By using a static object you know it will be shared among all threads, and the lock is ensured.
As for making the app unstable if the call fails, that has to be due to the call not being disposed properly. By using the "using" statement you ensure that Dispose is called at the end of the call. Read this SO thread on why/why not you should dispose a web service regarding performance.
static readonly object _lockObj = new object();
...
lock (_lockObj)
{
    ThirdPartWebService objWebService = Application["ThirdPartWebService"] as ThirdPartWebService;
    objWebService.CallThatNeedsToBeSynchronized();
}
lock() will not prevent multiple calls to your web service. It will only ensure that no two threads are executing the code block within lock() {} at the same time.
So the question is: what does that web service do?
1) It performs some action on the third party (updates their DB with some values you supply?)
You can do as you've suggested yourself. Though I would say that if their service cannot handle simultaneous calls, then they should fix it. That's really not your problem to worry about.
2) It queries and returns some data for your use.
In this case the lock is useless unless you plan on caching the result of the call.
var cachedValue = ReadValueFromCache();
if (cachedValue != null)
    return cachedValue;

lock (objWebService)
{
    // yes, you need to check the cache a second time inside the lock
    cachedValue = ReadValueFromCache();
    if (cachedValue != null)
        return cachedValue;

    cachedValue = objWebService.CallThatNeedsToBeSynchronized();
    SaveValueToCache(cachedValue);
}
return cachedValue;
How you implement the cache is somewhat secondary. It may be the web cache object or just a static variable.
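For example, a minimal sketch of those two helpers on top of the ASP.NET cache (the cache key and the five-minute expiry are assumptions for illustration):

static object ReadValueFromCache()
{
    return System.Web.HttpRuntime.Cache["ThirdPartyWebServiceResult"];
}

static void SaveValueToCache(object value)
{
    // Cache for five minutes; tune the expiry to how stale a result may be.
    System.Web.HttpRuntime.Cache.Insert("ThirdPartyWebServiceResult", value,
        null, DateTime.UtcNow.AddMinutes(5),
        System.Web.Caching.Cache.NoSlidingExpiration);
}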

A question about making a C# class persistent during a file load

Apologies for the vague title; it's the best I could think of for the moment.
Basically, I've written a singleton class that loads files into a database. These files are typically large and take hours to process. What I am looking for is a way to have this class running and be able to call methods on it, even after the class that started it has shut down.
The singleton class is simple. It starts a thread that loads the file into the database, and has methods to report on the current status. In a nutshell it's a little like this:
public sealed class BulkFileLoader {
    static BulkFileLoader instance = null;
    int currentCount = 0;

    BulkFileLoader() { }

    public static BulkFileLoader Instance
    {
        // Instantiate the instance if necessary, and return it
    }

    public void Go() {
        // kick off 'ProcessFile' thread
    }

    public int GetCurrentCount() {
        return currentCount;
    }

    private void ProcessFile() {
        while (/* more rows in the import file */) {
            // insert the row into the database
            currentCount++;
        }
    }
}
The idea is that you can get an instance of BulkFileLoader to execute, which will process a file to load, while at any time you can get real-time updates on the number of rows it has done so far using the GetCurrentCount() method.
This works fine, except the calling class needs to stay open the whole time for the processing to continue. As soon as I stop the calling class, the BulkFileLoader instance is removed, and it stops processing the file. What I am after is a solution where it will continue to run independently, regardless of what happens to the calling class.
I then tried another approach. I created a simple console application that kicks off the BulkFileLoader, and then wrapped it around as a process. This fixes one problem, since now when I kick off the process, the file will continue to load even if I close the class that called the process. However, now the problem I have is that I cannot get updates on the current count: if I try to get the instance of BulkFileLoader (which, as mentioned before, is a singleton), it creates a new instance rather than returning the one in the executing process. It would appear that singletons don't extend across processes running on the machine.
In the end, I want to be able to kick off the BulkFileLoader, and at any time be able to find out how many rows it's processed. However, that is even if I close the application I used to start it.
Can anyone see a solution to my problem?
You could create a Windows Service which exposes, say, a WCF endpoint as its API. Through this API you'll be able to query the service's status and add more files for processing.
You should make your "Bulk Uploader" a service, and have your other processes speak to it via IPC.
You need a service because your upload takes hours, and it sounds like you'd like it to run unattended if necessary and be detached from the calling thread. That's what services do well.
You need some form of Inter-Process Communication because you'd like to send information between processes.
For communicating with your service, see NetNamedPipeBinding.
You can then send "Job Start" and "Job Status" commands and queries to your background service whenever you like.
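A hedged sketch of what that could look like (the contract, method, and endpoint names are illustrative, not from the original posts); the loader would be marked with InstanceContextMode.Single so the host shares the one singleton instance:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IBulkLoaderApi
{
    [OperationContract]
    void StartJob(string filePath);

    [OperationContract]
    int GetCurrentCount();
}

public static class LoaderHost
{
    // Called from the Windows service's OnStart; BulkFileLoader.Instance
    // is assumed to implement IBulkLoaderApi.
    public static ServiceHost Open()
    {
        var host = new ServiceHost(BulkFileLoader.Instance,
            new Uri("net.pipe://localhost/BulkFileLoader"));
        host.AddServiceEndpoint(typeof(IBulkLoaderApi),
            new NetNamedPipeBinding(), "api");
        host.Open();
        return host;
    }
}

Any client process (including the one that started the job) can then open a ChannelFactory<IBulkLoaderApi> against net.pipe://localhost/BulkFileLoader/api and poll GetCurrentCount().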

Categories

Resources