User request queue in WCF - c#

I have a WCF service which creates a number of files at a server location, doing various calculations on a seed file depending on the parameters given. The problem is that when 2 or more clients try to do a calculation on the same seed file, an error is returned. The cause is simply concurrent read/write access to the file by multiple users.
So I want to create a user request queue in WCF, from which the server does its calculations one at a time and returns the calculated response to each user. The problem is I don't know how to do it.
I have not implemented any request queuing technique in WCF before. Does anyone know how to implement this in WCF services? I cannot use threading, as the calculation depends on file I/O, so handling one request at a time is the only solution right now.
Any tutorial or video tutorial will be highly appreciated.

Finally I did it.
Here I am posting my solution for other users who may be new to WCF request queuing.
First, we need to configure throttling on the WCF host.
Throttling can be done in two ways (either way is OK):
Config file
Code
Throttling settings in the config file look like this:
<behaviors>
  <serviceBehaviors>
    <behavior name="throttlingBehavior">
      <serviceThrottling maxConcurrentCalls="3" maxConcurrentInstances="3" maxConcurrentSessions="100"/>
    </behavior>
  </serviceBehaviors>
</behaviors>
Or throttling settings in code
using (ServiceHost host = new ServiceHost(typeof(SimpleService.SimpleService)))
{
    ServiceThrottlingBehavior throttlingBehavior = new ServiceThrottlingBehavior
    {
        MaxConcurrentCalls = 3,
        MaxConcurrentInstances = 3,
        MaxConcurrentSessions = 100
    };
    host.Description.Behaviors.Add(throttlingBehavior);
    host.Open();
    Console.WriteLine("Host started @ " + DateTime.Now.ToString());
    Console.ReadLine();
}
With the above throttling settings, a maximum of 3 concurrent calls are processed. In addition to the maxConcurrentCalls property, maxConcurrentInstances and maxConcurrentSessions may also affect the number of calls processed concurrently.
Now, after defining the throttling behavior, we need to define the concurrency mode on the service implementation as follows:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall, ConcurrencyMode = ConcurrencyMode.Multiple)]
public class Service : IService
{...
With these settings in place, we can easily get request queuing in a WCF service.
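Since my original problem required strictly one calculation at a time (because of the shared seed file), the same settings can be tightened so that WCF itself serializes the requests; a minimal sketch, assuming the same self-hosted service as above:
// With MaxConcurrentCalls = 1, additional requests wait in WCF's
// internal queue until the current call finishes.
using (ServiceHost host = new ServiceHost(typeof(SimpleService.SimpleService)))
{
    ServiceThrottlingBehavior strictBehavior = new ServiceThrottlingBehavior
    {
        MaxConcurrentCalls = 1,      // process one request at a time
        MaxConcurrentInstances = 1,
        MaxConcurrentSessions = 100  // clients may stay connected; their calls queue
    };
    host.Description.Behaviors.Add(strictBehavior);
    host.Open();
    Console.ReadLine();
}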

Related

429 Too many requests only production server side, not localhost, not browser

I read this post: C# (429) Too Many Requests
and I understood the response code, but... why is this status code only returned when the call is made from the server side (backend) in production (hosted)? The service never returns this code when I call the same service from Chrome's address bar, or when I make the call server side (backend) from my localhost.
CASE 1 (works fine from localhost - the service url is not localhost, it is hosted)
App A (localhost) calls App B (hosted) --> works fine
for (int i = 0; i < 1000; i++)
{
    HttpClient client = new HttpClient();
    client.BaseAddress = new Uri(url);
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    string response = client.GetStringAsync(urlParameters).Result;
    client.Dispose();
}
CASE 2 (works fine)
Chrome browser calls App B (hosted) --> works fine
CASE 3 (similar to case 1 but with far fewer requests - DOES NOT WORK)
App A (hosted) calls App B (hosted) --> 429
Why? What is the problem? How can I solve it?
What's Happening
The HTTP 429 response code indicates you have been rate limited. The idea is to prevent one caller from overwhelming a service, making it less available to other callers.
Most Common
That limiting can be based on many things. Most common are
Number of calls per unit time (usually per second)
Number of concurrent calls
The General Case
A rate limiter may also forgive a short burst of calls that happens occasionally, may allow more calls before hitting the brakes based on who you are (using your IP or an API key for example), dynamically adjust its limits based on total system load, or do other things.
Probably Happening Here
Based on your description, I would guess the number of concurrent calls is triggering the production rate limiting. Rather than hitting the external API hard trying to guess what the rules are, try reaching out to them and asking. If that is not an option, running multiple requests in parallel could help validate this theory.
Handling
A great way to deal with this is to back off your requests when you receive an HTTP 429.
The service should return a Retry-After header indicating how many seconds you should wait before trying again. If it does, wait that long before resubmitting your request.
If the service does not provide that header (I work with a major one that does not), use exponential backoff instead.
Depending on your needs, you may want to tell your own caller to try again later (return an HTTP 429 yourself) or you may want to queue up pending requests and work off the queue to submit them all.
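A sketch of that handling, assuming an HttpClient caller (the helper name and the five-attempt cap are my own choices, not from the original answer):
// Hypothetical helper: honors Retry-After when the service sends it,
// otherwise falls back to exponential backoff (2s, 4s, 8s, ...).
static async Task<string> GetWithBackoffAsync(HttpClient client, string url, int maxAttempts = 5)
{
    for (int attempt = 1; attempt <= maxAttempts; attempt++)
    {
        HttpResponseMessage response = await client.GetAsync(url);
        if ((int)response.StatusCode != 429)
        {
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }

        // Prefer the server's Retry-After value if it was provided.
        TimeSpan delay = response.Headers.RetryAfter?.Delta
                         ?? TimeSpan.FromSeconds(Math.Pow(2, attempt));
        await Task.Delay(delay);
    }
    throw new HttpRequestException("Still rate limited after " + maxAttempts + " attempts.");
}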
Preventing
If you know the rate limits, you can pre-emptively limit your outbound call rate so you get into this situation less often.
For calls-per-second limits, you can use a counter variable that you reset (in a thread-safe way) every second. If the known call limit would be exceeded, calculate when the counter will reset (store a timestamp when it does) and delay processing until then.
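A rough sketch of that counter approach (the class name and polling delay are my own; substitute the service's real per-second quota):
// Hypothetical calls-per-second gate. Resets the counter each second
// and delays callers once the known limit would be exceeded.
class PerSecondLimiter
{
    private readonly int _maxPerSecond;
    private readonly object _gate = new object();
    private int _count;
    private DateTime _windowStart = DateTime.UtcNow;

    public PerSecondLimiter(int maxPerSecond) { _maxPerSecond = maxPerSecond; }

    public async Task WaitAsync()
    {
        while (true)
        {
            lock (_gate)
            {
                DateTime now = DateTime.UtcNow;
                if ((now - _windowStart).TotalSeconds >= 1)
                {
                    _windowStart = now;   // new one-second window
                    _count = 0;
                }
                if (_count < _maxPerSecond)
                {
                    _count++;
                    return;               // under the limit: proceed
                }
            }
            await Task.Delay(50);         // over the limit: wait for the window to reset
        }
    }
}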
For a concurrent-call limit, a SemaphoreSlim works nicely. Set the maximum count to whatever your concurrent rate limit is. Acquire the semaphore before making a request and release it (in a finally block) after your call completes.
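And a sketch of the semaphore approach (the limit of 10 concurrent calls is an assumed value):
// Assumed concurrent-call limit of 10; acquire before calling, release in finally.
private static readonly SemaphoreSlim _concurrencyGate = new SemaphoreSlim(10, 10);

static async Task<string> GetLimitedAsync(HttpClient client, string url)
{
    await _concurrencyGate.WaitAsync();   // blocks when 10 calls are already in flight
    try
    {
        return await client.GetStringAsync(url);
    }
    finally
    {
        _concurrencyGate.Release();       // always release, even on failure
    }
}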
If you have multiple servers subject to the same rate limit (e.g. if rate limiting is based on an API key rather than IP address), it gets harder to self-limit, but you can set self-limiting parameters (calls per second and concurrent calls) in a configuration file, and tune them over time to maximize your throughput without hitting excessive HTTP 429's.

MongoDB connection problems on Azure

We have an ASP.NET MVC application deployed to an Azure Website that connects to MongoDB and does both read and write operations. The application does this iteratively. A few thousand times per minute.
We initialize the C# driver using Autofac and we set the MaxConnectionIdleTime to 45 seconds as suggested in https://groups.google.com/forum/#!topic/mongodb-user/_Z8YepNHnbI and a few other places.
We are still getting a large number of the below error:
Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. (System.IO.IOException)
We get this error both when connecting to a MongoDB instance deployed on a VM in the same datacenter/region on Azure and when connecting to an external PaaS MongoDB provider.
I run the same code on my local computer and connect to the same DB, and I don't receive these errors. It only happens when I deploy the code to an Azure Website.
Any suggestions?
A few thousand requests per minute is a big load, and the only way to do it right is by controlling and limiting the maximum number of threads that could be running at any one time.
As there's not much information posted about how you've implemented this, I'm going to cover a few possible circumstances.
Time to experiment...
The constants:
Items to process:
50 per second, or in other words...
3,000 per minute, and one more way to look at it...
180,000 per hour
The variables:
Data transfer rates:
How much data you can transfer per second is going to play a role no matter what we do, and this will vary throughout the day depending on the time of day.
The only thing we can do is fire off more requests from different CPUs to distribute the weight of the traffic we're sending back and forth.
Processing power:
I'm assuming you have this in a WebJob as opposed to having it coded inside the MVC site itself. Doing it in the site is highly inefficient and not fit for the purpose you're trying to achieve. By using a WebJob we can queue work items to be processed by other WebJobs. The queue in question is Azure Queue Storage.
Azure Queue storage is a service for storing large numbers of messages that can be accessed from anywhere in the world via authenticated calls using HTTP or HTTPS. A single queue message can be up to 64 KB in size, and a queue can contain millions of messages, up to the total capacity limit of a storage account. A storage account can contain up to 200 TB of blob, queue, and table data. See Azure Storage Scalability and Performance Targets for details about storage account capacity.
Common uses of Queue storage include:
Creating a backlog of work to process asynchronously
Passing messages from an Azure Web role to an Azure Worker role
The issues:
We're attempting to complete 50 transactions per second, so each transaction should complete in under 1 second if we're utilising 50 threads. Our 45-second timeout serves no purpose at this point.
We're expecting 50 threads to run concurrently, and all complete in under a second, every second, on a single CPU. (I'm exaggerating here, just to make a point... but imagine downloading 50 text files every single second, processing them, then trying to shoot them back over to a colleague in the hopes they'll even be ready to catch them.)
We need retry logic in place: if after 3 attempts an item isn't processed, it needs to be placed back into the queue. Ideally we should give the server more time to respond than just one second with each failure; let's say we give it a 2-second break on the first failure, then 4 seconds, then 10. This will greatly increase the odds of us persisting / retrieving the data we need.
We're assuming that our MongoDB can handle this number of requests per second. If you haven't already, start looking at ways to scale it out. The issue isn't that it's MongoDB, the data layer could have been anything; it's the fact that we're making this number of requests from a single source that is the most likely cause of your issues.
The solution:
Set up a WebJob and name it EnqueueJob. This WebJob will have one sole purpose: to queue items of work to be processed in Queue Storage.
Create a Queue Storage container named WorkItemQueue; this queue will act as a trigger for the next step and kick off our scaling-out operations.
Create another WebJob named DequeueJob. This WebJob will also have one sole purpose: to dequeue the work items from the WorkItemQueue and fire off the requests to your data store.
Configure the DequeueJob to spin up once an item has been placed inside the WorkItemQueue, start 5 separate threads on each, and, while the queue is not empty, dequeue work items for each thread and attempt to execute the dequeued job.
Attempt 1, if fail, wait & retry.
Attempt 2, if fail, wait & retry.
Attempt 3, if fail, enqueue item back to WorkItemQueue
Configure your website to autoscale out to x number of CPUs (note that your website and WebJobs share the same resources); a sketch of the dequeue side follows below.
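Here is what the DequeueJob's processing function might look like using the Azure WebJobs SDK. The queue name (Azure queue names must be lowercase), the SaveToMongoAsync helper, and the exact delays are placeholders of mine, following the 2/4/10-second suggestion above:
// Hypothetical WebJobs function; lives in the standard Functions class.
// Triggered automatically whenever a message lands in the queue.
public static async Task ProcessWorkItem(
    [QueueTrigger("workitemqueue")] string workItem,
    [Queue("workitemqueue")] IAsyncCollector<string> requeue,
    TextWriter log)
{
    // Give the server a longer break after each failure: 2s, 4s, then 10s.
    TimeSpan[] delays = { TimeSpan.FromSeconds(2), TimeSpan.FromSeconds(4), TimeSpan.FromSeconds(10) };

    for (int attempt = 0; attempt < delays.Length; attempt++)
    {
        try
        {
            await SaveToMongoAsync(workItem); // placeholder for the real data-store call
            return;                           // success: the message is removed from the queue
        }
        catch (Exception ex)
        {
            await log.WriteLineAsync("Attempt " + (attempt + 1) + " failed: " + ex.Message);
            await Task.Delay(delays[attempt]); // wait before retrying
        }
    }

    // All 3 attempts failed: put the item back in the queue for later.
    await requeue.AddAsync(workItem);
}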
Here's a short 10 minute video that gives an overview of how to utilise queue storage and WebJobs.
Edit:
Another reason you may be getting those errors could be two other factors as well, again caused by this being in an MVC app...
If you're compiling the application with the DEBUG attribute applied but pushing the RELEASE version instead, you could be running into issues due to the settings in your web.config. Without the DEBUG attribute, an ASP.NET web application will run a request for a maximum of 90 seconds; if the request takes longer than this, it will dispose of the request.
To increase the timeout to longer than 90 seconds you will need to change the httpRuntime property in your web.config...
<!-- Increase timeout to five minutes -->
<httpRuntime executionTimeout="300" />
The other thing you need to be aware of is the request timeout settings of your browser > web app. I'd say that if you insist on keeping the code in MVC as opposed to extracting it and putting it into a WebJob, then you can use the following code to fire a request off to your web app and offset the timeout of the request.
string html = string.Empty;
string uri = "http://google.com";

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
// Timeout is specified in milliseconds.
request.Timeout = (int)TimeSpan.FromMinutes(5).TotalMilliseconds;

using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (Stream stream = response.GetResponseStream())
using (StreamReader reader = new StreamReader(stream))
{
    html = reader.ReadToEnd();
}
Are you using MongoDB in a VM? It seems to be a network problem. These kinds of transient faults are expected, so the best you can do is implement a retry pattern or use a lib such as Polly to do that:
Policy
    .Handle<IOException>()
    .Retry(3, (exception, retryCount) =>
    {
        // do something
    });
https://github.com/michael-wolfenden/Polly
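If you also want a growing delay between attempts, Polly's WaitAndRetry overload takes a per-retry sleep provider (the exponential delays here are just an example):
Policy
    .Handle<IOException>()
    .WaitAndRetry(3, retryAttempt =>
        TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)));  // waits 2s, 4s, 8s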

WCF service Multiple Users at same time

I will be deploying my first application based on WCF and would like to know the best way to deploy. Here is my architecture. Please see the attached image.
We have a WCF service written using the 4.0 framework which has 3 methods. A front-end ASP.NET website (www.site.com) calls the WCF service to save data as well as read data. In the figure, method1 saves data, and method2 and method3 read data from a SQL Server 2008 R2 database.
In my ASP.NET website...
I am calling Method1 and closing the connection... like this:
ServiceClient client = new ServiceClient();
client.Method1(dataToBeSaved); // the data to be saved
client.Close();
I am calling method2 and method3 as follows:
ServiceClient client = new ServiceClient();
dropDown1list.DataSource = client.Method2();
dropDown2list.DataSource = client.Method3();
client.Close();
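One caveat with snippets like these: Close() can itself throw if the channel has faulted, which leaks the connection. The usual WCF pattern (not from the original question) is to Abort() in that case; a minimal sketch:
ServiceClient client = new ServiceClient();
try
{
    dropDown1list.DataSource = client.Method2();
    client.Close();   // graceful close on success
}
catch (CommunicationException)
{
    client.Abort();   // tear down the faulted channel
    throw;
}
catch (TimeoutException)
{
    client.Abort();
    throw;
}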
Multiple users could be using the website at the same time to submit data. Considering this architecture, what would be the best way to deploy the WCF service so that it can handle multiple users at the same time? I read the articles http://www.codeproject.com/Articles/89858/WCF-Concurrency-Single-Multiple-and-Reentrant-and and http://www.codeproject.com/Articles/86007/ways-to-do-WCF-instance-management-Per-call-Per.
I now believe I need to declare my WCF service like this:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple, InstanceContextMode = InstanceContextMode.PerCall)]
public class Service : IService
{
    public bool Method1(/* data to be saved */)
    {
    }

    public List<string> Method2()
    {
    }

    public List<string> Method3()
    {
    }
}
Am I right? Any suggestions?
Just answered a similar question yesterday. Based on your description and the picture, I don't see a need to change your architecture. If you're using one of the main WCF bindings (webHttpBinding, wsHttpBinding or basicHttpBinding), the service you deploy should easily be able to handle dozens of concurrent users, all saving and reading at the same time.
Each client request will generate its own connection and web service objects, each of which can communicate concurrently with your database, whether that request is to read data or write data. When the response is sent back to the client, your WCF service will destroy the objects and clean up the memory for you as long as you're not doing something strange.
I've spent the last two years working on WCF web services on an industrial scale. Lately I've been working on a load-testing / benchmarking project that spins up hundreds of concurrent users, each of which slams our WCF test server with XML artifacts that get loaded into the database. We've managed to load up to 160 packages (about 110 KB each per client) per second. WCF is not perfect, but it's quick, clean and scales really well.
My experience has been that your database will be your bottleneck, not your WCF web service. If your client wants to scale this architecture up to an Amazon-sized web service, then you bring in an F5 load balancer and scale it up that way.

WCF - Determining when session ends on the server side

In the project I'm working on, we have several services implemented using WCF. The situation I'm facing is that some of the services need to know when a session ends, so that it can appropriately update the status of that client. Notifying the service when a client gracefully terminates (e.g. the user closes the application) is easy, however, there are cases where the application might crash, or the client machine might restart, in which case the client won't be able to notify the service about its status.
Initially, I was thinking about having a timer on the server side, which is triggered once a client connects, and changes the status of that client to "terminated" after, let's say, 1 minute. Now the client sends its status every 30 seconds to the service, and the service basically restarts its timer on every request from the client, which means it (hopefully) never changes the status of the client as long as the client is alive.
Even though this method is fairly reliable (not fully reliable; what if it takes the client more than 1 minute to send its status?), it's still not the best approach to this problem. Note that due to the original design of the system, I cannot implement a duplex service, which would probably make things a lot simpler. So my question is: Is there a way for the service to know when the session ends (i.e. the connection times out or the client closes the proxy)? I came across this question: WCF: How to find out when a session is ending, but the link in the answer seems to be broken.
Another thing I'm worried about is the way I'm currently creating my channel proxies, implemented like this:
internal static TResult ExecuteAndReturn<TProxy, TResult>(Func<TProxy, TResult> delegateToExecute)
{
    string endpointUri = ServiceEndpoints.GetServiceEndpoint(typeof(TProxy));
    var binding = new WSHttpBinding();
    binding.Security.Mode = SecurityMode.Message;
    binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

    TResult valueToReturn;
    using (ChannelFactory<TProxy> factory = new ChannelFactory<TProxy>(binding,
        new EndpointAddress(new Uri(endpointUri),
            EndpointIdentity.CreateDnsIdentity(ServiceEndpoints.CertificateName))))
    {
        TProxy proxy = factory.CreateChannel();
        valueToReturn = delegateToExecute(proxy);
    }
    return valueToReturn;
}
So the channel is closed immediately after the service call is made (since it's in a using block). Is that, from a service standpoint, an indication that the session has terminated? If so, should I keep only one instance of each service alive for the application's lifetime, using a singleton maybe? I apologize if the questions seem a little vague; I figured there would be plenty of questions like these but wasn't able to find anything similar.
Yes, closing the channel terminates the session, but if there is an error of some kind then you are subject to the timeout settings of the service, like this:
<binding name="tcpBinding" receiveTimeout="00:00:10" />
This introduces a ten second timeout if an error occurs.
Check out Managing WCF Session Lifetime with IsInitiating and IsTerminating
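For reference, the approach from that article marks session boundaries in the contract itself; a minimal sketch (the interface and method names are placeholders of mine), assuming a session-capable binding:
[ServiceContract(SessionMode = SessionMode.Required)]
public interface IStatusService
{
    [OperationContract(IsInitiating = true)]
    void Connect();        // starts the session; must be the first call

    [OperationContract(IsInitiating = false)]
    void ReportStatus();   // only valid within an established session

    [OperationContract(IsInitiating = false, IsTerminating = true)]
    void Disconnect();     // ends the session when it completes
}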

Programmatically configure individual WCF operations with different WCF configurations

I am just getting started with WCF and would like to set up a distributable networked system as follows (I am not sure if it is possible):
I have a .NET client that contains business logic. It will need various data from various sources, so I would like to add a 'server' that contains an in-memory cache but also WCF capabilities to send/receive and publish/subscribe from data sources for data that is not cached. I think it should be possible for these server applications to be identical in terms of code but highly configurable, so that requests could be handled in a peer-to-peer fashion or traditional client-server as required. I think it could be done so that, essentially, a server sends a request to wherever it has the endpoint configured and gets a response.
Essentially a server would be configured as below:
Server A
========
Operation 1 - Endpoint I
Operation 2 - Endpoint II
Server B
========
Operation 1 - Endpoint IV
Operation 2 - Endpoint III
The configuration would be stored for each server in app.config and loaded into memory at startup. So each WCF operation would have its own WCF config (in terms of endpoints etc.) and would send particular requests to different places according to that configuration.
From what I have read of WCF, I think this is possible. I don't have enough experience to know whether what I am describing is a standard WCF pattern (if so, please let me know). Otherwise, my main question is: how do I programmatically configure each operation (as above) in WCF?
Please let me know if I have not explained myself clearly.
Thanks in advance for any help,
Will
I don't know if this will get you exactly what you are looking for, but this is what I use to add my WCF endpoints to my Windows Service. This is the code the service runs to load all the WCF services:
IDictionary<string, ServiceHost> hosts;
NetTcpBinding binding;
CustomBinding mexBinding;

private void AddService(Type serviceImp, Type serviceDef, string serviceName)
{
    ServiceHost host = new ServiceHost(serviceImp);

    string address = String.Format(baseAddress, wcfPort, serviceName);
    string endAdd = address;
    string mexAdd = address + "/mex";

    ServiceMetadataBehavior behavior = new ServiceMetadataBehavior();
    host.Description.Behaviors.Add(behavior);
    host.AddServiceEndpoint(serviceDef, binding, endAdd);
    host.AddServiceEndpoint(typeof(IMetadataExchange), mexBinding, mexAdd);
    host.Open();

    hosts.Add(serviceDef.Name, host);
}
There's a baseAddress string that I didn't copy in, but it just has the net.tcp address for the endpoint. Likewise for the wcfPort. Different baseAddresses and ports are used for debug, testing and production.
Just in case it isn't clear, serviceImp is the service implementation and serviceDef is the interface that defines the contract. Hope this helps.
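For what it's worth, here is one way the surrounding setup and a call might look; the field initialization and the Calculator types are my assumptions, not part of the original code:
// Hypothetical setup before calling AddService.
hosts = new Dictionary<string, ServiceHost>();
binding = new NetTcpBinding(SecurityMode.None);
mexBinding = (CustomBinding)MetadataExchangeBindings.CreateMexTcpBinding();

// Register a service: implementation type, contract interface, endpoint name.
AddService(typeof(CalculatorService), typeof(ICalculatorService), "calculator");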
EDIT - Here are some references I used to help me figure all of this stuff out:
Creating WCF Service Host Programmatically
Net.Tcp Port Sharing Sample, Part 2
Service Station: WCF Addressing In Depth
As far as I know you can't specify configuration on a per-operation basis. The lowest level is the interface level. The simplest (if ugly) solution would be to put each operation in a separate interface.
Putting each operation in a separate interface is a valid and good design approach. The Agatha Request/Response Layer follows this approach; have a look at it, it is pretty useful and extensible:
http://code.google.com/p/agatha-rrsl/
