I'm implementing a Silverlight application that uses WCF services heavily. I've got to the point where occasionally there are several long service calls that block other service calls from running.
These service calls eventually time out. I'd like to see if it's possible to build a queue system that executes service calls one after another; that way long calls will still hold up other calls, but won't cause them to time out.
I'm using service agents to wrap the service calls:
public interface IExampleServiceAgent
{
void ProcessData(int a, string b, EventHandler<ProcessDataCompletedEventArgs> callback);
}
public class ExampleServiceAgent1 : IExampleServiceAgent
{
    private readonly ExampleClient _Client = new ExampleClient();

    public void ProcessData(int anInt, string aString, EventHandler<ProcessDataCompletedEventArgs> callback)
    {
        EventHandler<ProcessDataCompletedEventArgs> wrapper = null;
        wrapper = (a, b) =>
        {
            callback(a, b);
            _Client.ProcessDataCompleted -= wrapper;
        };
        _Client.ProcessDataCompleted += wrapper;
        _Client.ProcessDataAsync(anInt, aString);
    }
}
The above service agent would then be called from code as follows:
ServiceAgent.ProcessData(1, "STRING", (a, b) =>
{
    if (b.Error != null)
    {
        // Handle error
    }
    else
    {
        // Do something with the data
    }
});
Is there a way I could put these service calls into a queue and execute them one by one?
I've tried wrapping them as Actions and adding them to a queue, but this does not wait for one call to finish executing before starting the next one, and although the service is called correctly, no data is returned to the calling ViewModel.
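Roughly, the attempt looked like this (a sketch only; the exact code isn't important):
// Sketch of what I tried: each queued Action just starts an async call and
// returns immediately, so draining the queue never waits for the previous
// call's completion callback to fire.
var queue = new Queue<Action>();

queue.Enqueue(() => ServiceAgent.ProcessData(1, "STRING", (a, b) => { /* use b.Result */ }));
queue.Enqueue(() => ServiceAgent.ProcessData(2, "OTHER", (a, b) => { /* use b.Result */ }));

while (queue.Count > 0)
{
    queue.Dequeue()();   // starts the call, does not wait for it to complete
}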
WCF services can cope with a huge number of calls but, to avoid denial of service attacks, the number of requests that can be processed is limited by default.
The significant limitations for Silverlight WCF services are:
A default limit of 2 simultaneous calls from the same IP address.
A limit of approx 10-16 concurrent connections (documentation varies on this one).
This CodeProject article on Quick Ways to Boost Performance and Scalability of ASP.NET, WCF and Desktop Clients was useful.
I am guessing you are immediately hitting the first limit. You need to add the following to your config to increase the number of connections allowed to a single IP address:
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="100" />
  </connectionManagement>
</system.net>
You may then hit the second limit, for which the solution is to tweak the service behaviors in the web.config/app.config files.
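Something along these lines (the values are illustrative, not recommendations):
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior>
        <!-- illustrative values: raise the defaults that cap concurrent processing -->
        <serviceThrottling maxConcurrentCalls="100"
                           maxConcurrentSessions="100"
                           maxConcurrentInstances="100" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>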
Here are a few more references I found while sorting out these issues myself:
http://weblogs.asp.net/paolopia/archive/2008/03/23/wcf-configuration-default-limits-concurrency-and-scalability.aspx
Why does WCF limit concurrent connections to 5?
http://msdn.microsoft.com/en-us/magazine/cc163590.aspx#S10
http://blogs.msdn.com/b/stcheng/archive/2009/01/06/wcf-things-that-will-impact-concurrency-capacity-behavior-of-wcf-service-with-simoultaneous-client-requests-connections.aspx
http://www.codeproject.com/Articles/133738/Quick-Ways-to-Boost-Performance-and-Scalability-of
http://www.danrigsby.com/blog/index.php/2008/02/20/how-to-throttle-a-wcf-service-help-prevent-dos-attacks-and-maintain-wcf-scalability/
http://msdn.microsoft.com/en-us/library/7w2sway1%28v=vs.71%29.aspx
http://www.codeproject.com/Articles/89858/WCF-Concurrency-Single-Multiple-and-Reentrant-and
WCF stops responding after about 10 or so calls (throttling)
I have pretty naive code:
public async Task Produce(string topic, object message, MessageHeader messageHeaders)
{
    try
    {
        var producerClient = _EventHubProducerClientFactory.Get(topic);
        var eventData = CreateEventData(message, messageHeaders);
        messageHeaders.Times?.Add(DateTime.Now);
        await producerClient.SendAsync(new EventData[] { eventData });
        messageHeaders.Times?.Add(DateTime.Now);
        //.....
        Log.Info($"Milliseconds spent: {(messageHeaders.Times[1] - messageHeaders.Times[0]).TotalMilliseconds}");
    }
    catch
    {
        //..... (error handling elided)
    }
}
private EventData CreateEventData(object message, MessageHeader messageHeaders)
{
var eventData = new EventData(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(message)));
eventData.Properties.Add("CorrelationId", messageHeaders.CorrelationId);
if (messageHeaders.DateTime != null)
eventData.Properties.Add("DateTime", messageHeaders.DateTime?.ToString("s"));
if (messageHeaders.Version != null)
eventData.Properties.Add("Version", messageHeaders.Version);
return eventData;
}
In the logs I was seeing values of almost 1 second (~800 milliseconds).
What could be the reason for such a long execution time?
The EventHubProducerClient opens connections to the Event Hubs service lazily, waiting until the first time an operation requires it. In your snippet, the call to SendAsync triggers an AMQP connection to be created, an AMQP link to be created, and authentication to be performed.
Unless the client is closed, most future calls won't incur that overhead as the connection and link are persistent. Most being an important distinction in that statement, as the client may need to reconnect in the face of a network error, when activity is low and the connection idles out, or if the Event Hubs service terminates the connection/link.
As Serkant mentions, if you're looking to understand timings, you'd probably be best served by using a library like BenchmarkDotNet that works over a large number of iterations to derive statistically meaningful results.
You are measuring the first 'Send'. That will incur some overhead that later sends won't. So always do a warm-up first, e.g. send a single event, and then measure the next one.
Another important thing: it is not right to measure just a single 'Send' call. Measure a bunch of calls instead and calculate a latency percentile. That should give a better figure for your tests, as sketched below.
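A rough sketch of what I mean, reusing producerClient and CreateEventData from your snippet (the iteration count and percentile are just examples; Stopwatch needs System.Diagnostics):
// Warm up once so the first-send connection/link setup is excluded.
await producerClient.SendAsync(new[] { CreateEventData(message, messageHeaders) });

// Measure a batch of sends and look at a percentile rather than one value.
var timings = new List<double>();
for (int i = 0; i < 100; i++)
{
    var stopwatch = Stopwatch.StartNew();
    await producerClient.SendAsync(new[] { CreateEventData(message, messageHeaders) });
    stopwatch.Stop();
    timings.Add(stopwatch.Elapsed.TotalMilliseconds);
}

timings.Sort();
double p95 = timings[(int)(timings.Count * 0.95) - 1]; // rough 95th percentile
Log.Info($"p95 send latency: {p95} ms");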
I am using Azure SDKs on IoT devices. One of the methods I rely on is
public Task<Message> ReceiveAsync();
which appears in this namespace
namespace Microsoft.Azure.Devices.Client
Under this class
public sealed class DeviceClient : IDisposable
I am calling this method continuously within a while loop, as follows:
while (true)
{
var receivedMessage = await _deviceClient.ReceiveAsync(TimeSpan.FromSeconds(3)).ConfigureAwait(false);
if (receivedMessage != null)
{
// Do stuff
}
}
My question is: does this consume internet quotas even though the receivedMessage always shows null?
Digging through the source, you'll find three handlers:
HttpTransportHandler
MqttTransportHandler
AmqpTransportHandler
Which one is used depends on your configuration. The HTTP one will issue a GET request per ReceiveAsync() call, costing network traffic.
The MQTT handler operates on TCP or WebSockets, where keepalive traffic may be involved. But given this communication is bidirectional, most traffic that occurs involves actual messages being delivered. ReceiveAsync() simply gets the first message from the internal receive queue, if any, or waits for one to arrive, it doesn't poll.
The AMQP handler also operates on a message queue, and I can't quite figure out whether a ReceiveAsync() will ultimately incur network traffic.
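If data usage matters, note that the transport (and therefore which of the above handlers is used) is chosen when the client is created. A sketch (the connection string is a placeholder):
using Microsoft.Azure.Devices.Client;

// Placeholder connection string; choosing MQTT (or AMQP) means ReceiveAsync
// waits on the open connection instead of issuing an HTTP GET per call.
var deviceClient = DeviceClient.CreateFromConnectionString(
    "HostName=<hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>",
    TransportType.Mqtt);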
We have a Rebus message handler that talks to a third party webservice. Due to reasons beyond our immediate control, this WCF service frequently throws an exception because it encountered a database deadlock in its own database. Rebus will then try to process this message five times, which in most cases means that one of those five times will be lucky and not get a deadlock. But it frequently happens that a message does get deadlock after deadlock and ends up in our error queue.
Besides fixing the source of the deadlocks, which would be a long-term goal, I can think of two options:
Keep trying with only this particular message type until it succeeds. Preferably I would be able to set a timeout, so "if five deadlocks then try again in 5 minutes" rather than choke the process up even more by trying continuously. I already do a Thread.Sleep(random) to spread the messages somewhat, but it will still give up after five tries.
Send this particular message type to a different queue that has only one worker that processes the message, so that this happens serially rather than in parallel. Our current configuration uses 8 worker threads, but this just makes the deadlock situation worse as the webservice now gets called concurrently and the messages get in each other's way.
Option #2 has my preference, but I'm not sure if this is possible. Our configuration on the receiving side currently looks like this:
var adapter = new Rebus.Ninject.NinjectContainerAdapter(this.Kernel);
var bus = Rebus.Configuration.Configure.With(adapter)
.Logging(x => x.Log4Net())
.Transport(t => t.UseMsmqAndGetInputQueueNameFromAppConfig())
.MessageOwnership(d => d.FromRebusConfigurationSection())
.CreateBus().Start();
And the .config for the receiving side:
<rebus inputQueue="app.msg.input" errorQueue="app.msg.error" workers="8">
<endpoints>
</endpoints>
</rebus>
From what I can tell from the config, it's only possible to set one input queue to 'listen' to. I can't really find a way to do this via the fluent mapping API either. That seems to take only one input- and error queue as well:
.Transport(t =>t.UseMsmq("input", "error"))
Basically, what I'm looking for is something along the lines of:
<rebus workers="8">
<input name="app.msg.input" error="app.msg.error" />
<input name="another.input.queue" error="app.msg.error" />
</rebus>
Any tips on how to handle my requirements?
I suggest you make use of a saga and Rebus' timeout service to implement a retry strategy that fits your needs. This way, in your Rebus-enabled web service facade, you could do something like this:
public void Handle(TryMakeWebServiceCall message)
{
try
{
var result = client.MakeWebServiceCall(whatever);
bus.Reply(new ResponseWithTheResult{ ... });
}
catch(Exception e)
{
Data.FailedAttempts++;
if (Data.FailedAttempts < 10)
{
bus.Defer(TimeSpan.FromSeconds(1), message);
return;
}
// oh no! we failed 10 times... this is probably where we'd
// go and do something like this:
emailService.NotifyAdministrator("Something went wrong!");
}
}
where Data is the saga data that is made magically available to you and persisted between calls.
For inspiration on how to create a saga, check out the wiki page on coordinating stuff that happens over time where you can see an example on how a service might have some state (i.e. number of failed attempts in your case) stored locally that is made available between handling messages.
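For example, the saga data for the handler above might be as simple as this (the class name is made up; Id and Revision are required by Rebus' ISagaData):
// Hypothetical saga data: Rebus persists this between handled messages,
// so FailedAttempts survives across the deferred retries.
public class WebServiceCallSagaData : ISagaData
{
    public Guid Id { get; set; }       // required by ISagaData
    public int Revision { get; set; }  // required by ISagaData
    public int FailedAttempts { get; set; }
}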
When the time comes to make bus.Defer work, you have two options: 1) use an external timeout service (which I usually have installed one of on each server), or 2) just use "yourself" as a timeout service.
At configuration time, you go
Configure.With(...)
.(...)
.Timeouts(t => // configure it here)
where you can either StoreInMemory, StoreInSqlServer, StoreInMongoDb, StoreInRavenDb, or UseExternalTimeoutManager.
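For example, option (2) with SQL Server persistence might look roughly like this (a sketch; the connection string and table name are placeholders, and exact method names may vary between Rebus versions):
var bus = Rebus.Configuration.Configure.With(adapter)
    .Logging(x => x.Log4Net())
    .Transport(t => t.UseMsmqAndGetInputQueueNameFromAppConfig())
    .MessageOwnership(d => d.FromRebusConfigurationSection())
    // keep deferred messages locally instead of relying on an external Rebus.Timeout service
    .Timeouts(t => t.StoreInSqlServer("someConnectionString", "timeouts"))
    .CreateBus().Start();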
If you choose (1), you need to check out the Rebus code and build Rebus.Timeout yourself - it's basically just a configurable, Topshelf-enabled console application that has a Rebus endpoint inside.
Please let me know if you need more help making this work - bus.Defer is where your system becomes awesome, and will be capable of overcoming all of the little glitches that bring other systems down :)
To make this easier to understand: We are using a database that does not have connection pooling built in. We are implementing our own connection pooler.
Ok, so the title probably did not give the best description, so let me first describe what I am trying to do. We have a WCF service (hosted in a Windows service) that needs to be able to take/process multiple requests at once. The WCF service will take a request and try to talk to (say) 10 available database connections. These database connections are all tracked by the WCF service and are marked busy while processing. If a request comes in and all 10 database connections are busy, we would like the WCF service to wait and return the response once a connection becomes available.
We have tried a few different things. For example, we could have a while loop (yuck):
[OperationContract(AsyncPattern=true)]
public string ExecuteProgram(string clientId, string program, string[] args)
{
    string requestId = DbManager.RegisterRequest(clientId, program, args);
    string response = null;
    while (response == null)
    {
        response = DbManager.GetResponseForRequestId(requestId);
    }
    return response;
}
Basically, the DbManager would track requests and responses. Each request would call the DbManager, which would assign a request id. When a database connection becomes available, it would assign (say) Responses[requestId] = [the database response]. The request would constantly ask the DbManager if it had a response, and when it did, the request could return it.
This has problems all over the place. We could possibly have multiple threads stuck in while loops for who knows how long. That would be terrible for performance and CPU usage. (To say the least)
We have also looked into trying this with events / listeners. I don't know how this would be accomplished so the code below is more of how we envisioned it working.
[OperationContract(AsyncPattern=true)]
ExecuteProgram(string clientId, string program, string[] args)
{
// register an event
// listen for that event
// when that event is called return its value
}
We have also looked into the DbManager having a queue or using things like Pulse/Monitor.Wait (which we are unfamiliar with).
So, the question is: How can we have an async WCF Operation that returns when it is able to?
WCF supports the async/await keywords in .NET 4.5 (http://msdn.microsoft.com/en-us/library/vstudio/hh191443.aspx). You would need to do a bit of refactoring to make your ExecuteProgram async and make your DbManager request operation awaitable.
If you need your DbManager to manage the completion of these tasks as results become available for given clientIds, you can map each clientId to a TaskCompletionSource. The TaskCompletionSource can be used to create a Task and the DbManager can use the TaskCompletionSource to set the results.
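A rough sketch of that idea (the DbManager internals here are hypothetical):
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class DbManager
{
    private readonly ConcurrentDictionary<string, TaskCompletionSource<string>> _pending =
        new ConcurrentDictionary<string, TaskCompletionSource<string>>();

    // Called by the WCF operation: registers the request and returns a Task
    // that completes once a pooled connection has produced the response.
    public Task<string> DoRequestAsync(string clientId, string program, string[] args)
    {
        var tcs = new TaskCompletionSource<string>();
        _pending[clientId] = tcs;
        QueueForNextAvailableConnection(clientId, program, args); // hypothetical helper
        return tcs.Task;
    }

    // Called by whichever pooled connection eventually handles the request.
    public void CompleteRequest(string clientId, string response)
    {
        TaskCompletionSource<string> tcs;
        if (_pending.TryRemove(clientId, out tcs))
        {
            tcs.SetResult(response);
        }
    }

    private void QueueForNextAvailableConnection(string clientId, string program, string[] args)
    {
        // hand the work to the connection pool; omitted here
    }
}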
This should work, with a properly-implemented async method to call:
[OperationContract]
string ExecuteProgram(string clientId, string program, string[] args)
{
Task<string> task = DbManager.DoRequestAsync(clientId, program, args);
return task.Result;
}
Are you manually managing the 10 DB connections? It sounds like you've re-implemented database connection pooling. Perhaps you should be using the connection pooling built-in to your DB server or driver.
If you only have a single database server (which I suspect is likely), then just use a BlockingCollection for your pool.
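A minimal sketch of that, assuming some wrapper type around your raw connections (the names are placeholders):
using System.Collections.Concurrent;

// Fill the pool with pre-opened connections once at startup.
var pool = new BlockingCollection<PooledDbConnection>(boundedCapacity: 10);
for (int i = 0; i < 10; i++)
{
    pool.Add(CreateAndOpenConnection()); // placeholder factory
}

// Per request: Take() blocks until a connection is free, so callers
// simply wait instead of spinning in a while loop.
var connection = pool.Take();
try
{
    return connection.Execute(program, args); // placeholder call
}
finally
{
    pool.Add(connection); // return it to the pool for the next request
}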
I am trying to host a WCF service, using NetTcpBinding, in a Windows service. (I'm going to use it as an API for various clients, both web and Win32.) Obviously, I am doing this within a test host before putting it in a Windows service.
I have the following contract:
namespace yyy.xxx.Server.API.WCF
{
[ServiceContract]
public interface ISecureSessionBroker
{
[OperationContract]
string GetSessionToken(string username, string encryptedPassword, string clientApiKey, string clientAddress);
}
}
with the following implementation:
namespace yyy.xxx.Server.API.WCF
{
public class SecureSessionBroker : ISecureSessionBroker
{
#region ~ from ISecureSessionBroker ~
public string GetSessionToken(string username, string encryptedPassword, string clientApiKey, string clientAddress)
{
return Guid.NewGuid().ToString();
}
#endregion
}
}
I am hosting the WCF service using the code below (within a class/method):
try
{
_secureSessionBrokerHost = new ServiceHost(typeof(SecureSessionBroker));
NetTcpBinding netTcpBinding = new NetTcpBinding();
_secureSessionBrokerHost.AddServiceEndpoint(typeof(ISecureSessionBroker), netTcpBinding, "net.tcp://localhost:8080/secureSessionBrokerTcp");
int newLimit = _secureSessionBrokerHost.IncrementManualFlowControlLimit(100);
// Open the ServiceHost to start listening for messages.
_secureSessionBrokerHost.Open();
}
catch (Exception ex)
{
throw;
}
The key thing here is that I do not want to rely on an App.config file; everything must be configured programmatically. When I run this code, the service appears to come "up" and listen (i.e. I get no exceptions).
BUT when I use the client code below:
string secureSessionBrokerUrl = string.Format("{0}/secureSessionBrokerTcp", "net.tcp://localhost:8080");
EndpointAddress endpointAddress = new EndpointAddress(secureSessionBrokerUrl);
System.ServiceModel.Channels.Binding binding = new NetTcpBinding();
var client = new yyy.xxx.Windows.AdminTool.API.WCF.SecureSessions.SecureSessionBrokerClient(binding, endpointAddress);
string sessionToken = client.GetSessionToken("", "", "", ""); // exception here
MessageBox.Show(sessionToken);
... I always get an exception. At the moment, I am getting:
This request operation sent to net.tcp://localhost:8080/secureSessionBrokerTcp did not receive a reply within the configured timeout (00:01:00). The time allotted to this operation may have been a portion of a longer timeout. This may be because the service is still processing the operation or because the service was unable to send a reply message. Please consider increasing the operation timeout (by casting the channel/proxy to IContextChannel and setting the OperationTimeout property) and ensure that the service is able to connect to the client.
So I guess it cannot resolve the service.
Where am I going wrong? How do I test for the existence of the service over TCP? I have used the SvcTraceViewer and I just get the same message, so no news there.
I would prefer to ask the user for a URL of the service, so they would enter "net.tcp://localhost:8080" or something, which would then be used as a BaseAddress for the various calls to the SecureSessionBroker (and other) WCF services ... without resorting to App.config.
Unfortunately, all the examples I can find use App.config.
Interestingly, I can host the service using the VS host and the client connects fine, using:
D:\dev2008\xxx\yyy.xxx.Server>WcfSvcHost.exe /service:bin/debug/yyy.xxx.Server.dll /config:App.config
Ok, it came to me in a flash of inspiration.
I was using a Windows Form (alarm bells) to "host" the service and called it from a button click using the client code included above. Of course, the service host was not running on its own thread, so the service could not respond.
I've fixed it by putting the Service container (which contains the host) within its own thread:
Thread thread = new Thread(new ThreadStart(_serviceWrapper.Start));
thread.Start();
The Start() method sets up the ServiceHost.
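For reference, the wrapper's Start() is essentially the hosting code from the question moved onto that background thread (a sketch):
public class ServiceWrapper
{
    private ServiceHost _secureSessionBrokerHost;

    // Runs on the dedicated thread, so a blocked UI thread can no longer
    // prevent the host from dispatching calls.
    public void Start()
    {
        _secureSessionBrokerHost = new ServiceHost(typeof(SecureSessionBroker));
        _secureSessionBrokerHost.AddServiceEndpoint(
            typeof(ISecureSessionBroker),
            new NetTcpBinding(),
            "net.tcp://localhost:8080/secureSessionBrokerTcp");
        _secureSessionBrokerHost.Open();
    }
}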
I had incorrectly assumed that a WCF service host will always create threads for incoming requests; in fact it will only respond properly if it is hosted on its own non-blocking thread (i.e. not a blocked UI thread).
Hope it helps someone else.