Logging Via WCF Without Slowing Things Down - c#

We have a large process in our application that runs once a month. This process typically completes in about 30 minutes and generates 342,000 or so log events. Recently we moved our logging to a centralized model using WCF and are now having performance difficulties: where the previous solution completed in about 30 minutes, with the new logging it now takes 3 or 4 hours. The problem seems to be that the application is waiting for each WCF request to complete before execution continues. The WCF method is already configured as IsOneWay, and I wrapped the client-side call to that WCF method in a separate thread to try to prevent this kind of blocking, but it doesn't seem to have worked. I have thought about using the async WCF calls, but before I try something else I thought I would ask here to see if there is a better way to handle this.

342,000 log events in 30 minutes comes out, if I did my math correctly, to about 190 log events per second. I think your problem may have to do with the default throttling settings in WCF. Even if your method is one-way, calling it can still block: if you're creating a new proxy for each logged event, the call blocks while the proxy is created and the channel is opened, and if you're using an HTTP-based binding, it also blocks until the message has been received by the service (an HTTP-based binding sends back a null response for a one-way call once the message is received).
The default WCF throttling limits concurrent instances to 10 on the service side, which means only 10 requests are handled at a time and any further requests get queued. Pair that with an HTTP binding, and anything after the first 10 requests will block at the client until it becomes one of the 10 requests being handled. Without knowing how your services are configured (instancing mode, etc.) it's hard to say more than that, but if you're using per-call instancing, I'd recommend setting MaxConcurrentCalls and MaxConcurrentInstances on your ServiceBehavior to something much higher (the defaults are 16 and 10, respectively).
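As a minimal sketch of raising those limits programmatically - the contract, binding, addresses, and the value 500 are placeholders you would tune for your own load:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface ILogService
{
    [OperationContract(IsOneWay = true)]
    void LogEvent(string eventData);
}

public class LogService : ILogService
{
    public void LogEvent(string eventData) { /* write to the central store */ }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(LogService), new Uri("http://localhost:8000/log"));
        host.AddServiceEndpoint(typeof(ILogService), new BasicHttpBinding(), "");

        // Raise the default throttles (16 concurrent calls / 10 instances).
        var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior();
            host.Description.Behaviors.Add(throttle);
        }
        throttle.MaxConcurrentCalls = 500;
        throttle.MaxConcurrentInstances = 500;

        host.Open();
        Console.ReadLine();
        host.Close();
    }
}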
Also, to build on what others have mentioned about aggregating multiple events and submitting them all at once, I've found it helpful to set up a static Logger.LogEvent(eventData) method. That way it's simple to use throughout your code, and you can control in your LogEvent method how logging behaves throughout your application, such as how many events get submitted at a time.
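A rough sketch of that idea - the batch size, the background flush thread, and the console stand-in for the WCF proxy are all assumptions, not part of the original answer:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

public static class Logger
{
    const int BatchSize = 500;  // illustrative; tune for your volume

    static readonly BlockingCollection<string> Pending = new BlockingCollection<string>();

    // Swap in your one-way WCF proxy call here; Console is a stand-in sink.
    static readonly Action<string[]> SubmitBatch =
        batch => Console.WriteLine("submitting {0} events", batch.Length);

    static Logger()
    {
        // A single background thread drains the queue, so callers never block on the network.
        new Thread(Flush) { IsBackground = true }.Start();
    }

    public static void LogEvent(string eventData)
    {
        Pending.Add(eventData);  // returns immediately
    }

    static void Flush()
    {
        var batch = new List<string>(BatchSize);
        foreach (string item in Pending.GetConsumingEnumerable())
        {
            batch.Add(item);

            // Send when the batch is full, or when the queue goes quiet.
            if (batch.Count >= BatchSize || Pending.Count == 0)
            {
                SubmitBatch(batch.ToArray());
                batch.Clear();
            }
        }
    }
}

With this shape, a slow WCF call delays only the background thread, never the monthly process that calls LogEvent.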

Making a call to another process or remote service (e.g. calling a WCF service) is about the most expensive thing you can do in an application. Doing it 342,000 times is just sheer insanity!
If you must log to a centralized service, you need to accumulate batches of log entries and then, only when you have, say, 1,000 or so in memory, send them all to the service in one hit. This will give you a reasonable performance improvement.
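A hedged sketch of what the receiving side of that might look like; the contract name and log-entry shape are invented for illustration:

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface ICentralLogService
{
    // One-way batch submission: one round-trip per ~1,000 entries
    // instead of one round-trip per entry.
    [OperationContract(IsOneWay = true)]
    void LogBatch(LogEntry[] entries);
}

[DataContract]
public class LogEntry
{
    [DataMember] public DateTime TimestampUtc { get; set; }
    [DataMember] public string Level { get; set; }
    [DataMember] public string Message { get; set; }
}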

log4net has a buffering system that exists outside the context of the calling thread, so it won't hold up your call while it logs. Its usage should be clear from the many appender config examples - search for the term bufferSize. It's used on many of the slower appenders (e.g. remoting, email) to keep the source thread moving without waiting on the slower logging medium, and there is also a generic buffering meta-appender that may be used "in front of" any other appender.
We use it with an AdoNetAppender in a system of similar volume and it works wonderfully.
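For reference, a minimal programmatic setup of the buffering meta-appender; the FileAppender target and the buffer size of 512 are stand-ins for whatever slow appender (e.g. AdoNetAppender) and size fit your system:

using log4net;
using log4net.Appender;
using log4net.Config;
using log4net.Layout;

class BufferedLoggingDemo
{
    static void Main()
    {
        var layout = new PatternLayout("%date %-5level %logger - %message%newline");
        layout.ActivateOptions();

        // Stand-in for a slow target such as AdoNetAppender or SmtpAppender.
        var slowTarget = new FileAppender
        {
            File = "central.log",
            AppendToFile = true,
            Layout = layout
        };
        slowTarget.ActivateOptions();

        // Events accumulate here and are forwarded in batches of 512,
        // amortizing the cost of the slow target across many log calls.
        var buffer = new BufferingForwardingAppender { BufferSize = 512 };
        buffer.AddAppender(slowTarget);
        buffer.ActivateOptions();

        BasicConfigurator.Configure(buffer);
        LogManager.GetLogger(typeof(BufferedLoggingDemo)).Info("buffered log event");
    }
}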

There's always the traditional syslog; there are plenty of syslog daemons that run on Windows. It's designed to be a more efficient way of doing centralized logging than WCF, which is designed for less intensive operations, especially if you're not using a TCP-based WCF binding.
In other words, have a go with syslog - it may be the correct tool for the job.
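For a feel of how lightweight this is: a syslog message is just a small UDP datagram. A minimal sketch - the RFC 3164-style format, facility/severity values, and server address are illustrative:

using System;
using System.Net.Sockets;
using System.Text;

class SyslogSender
{
    static void Main()
    {
        const int facility = 16;  // local0
        const int severity = 6;   // informational
        int priority = facility * 8 + severity;

        // RFC 3164-style line: <PRI>TIMESTAMP HOST TAG: message
        string line = string.Format("<{0}>{1:MMM dd HH:mm:ss} {2} monthlyjob: processed batch",
            priority, DateTime.Now, Environment.MachineName);

        using (var udp = new UdpClient("syslog.example.local", 514))  // hypothetical daemon
        {
            byte[] bytes = Encoding.ASCII.GetBytes(line);
            udp.Send(bytes, bytes.Length);  // fire-and-forget; no reply to wait on
        }
    }
}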

Related

WCF Service called from SharePoint workflows - Underlying connection was closed errors

I have developed a WCF web service that is called from several SharePoint Online workflows. At certain points there could be around 4 users starting up to 10 workflows within a very short time frame, and one workflow can make as many as 3 requests to the web service. Needless to say, at certain points the WCF service becomes overloaded. When SharePoint workflows make HTTP web service calls and the service is unavailable, the workflow runs into an error and attempts to restart after a short period of time - which only contributes to making things worse.
These are some of the exceptions logged today from the web service during an approximately 40-minute period of "overloading":
Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
The underlying connection was closed: An unexpected error occurred on a receive.
The underlying connection was closed: A connection that was expected to be kept alive was closed by the server.
I have tried to look into ways to keep the WCF web service from malfunctioning when several requests are being made, and besides the obvious action of finding ways to decrease the number of calls made to the web service (which is not always an option), I came across the terms WCF Concurrency Modes and Throttling Limits.
Given the scenario described above, could anyone guide me into the right direction as to which Concurrency Mode and Throttling limits would be most ideal? Presently, my WCF service has default configuration.
Concurrency Modes can be:
Single
Multiple
Reentrant
Throttling Limit options are shown below:
<serviceThrottling maxConcurrentCalls="Integer"
maxConcurrentInstances="Integer"
maxConcurrentSessions="Integer" />
I am still quite new to this area of programming and am finding it a tad complicated, so any help would be greatly appreciated!
Update: The SharePoint system is highly customised and covers a business process that is quite complicated. The web service methods are varied, and it would take me a long time to explain what every method does, but I will mention some examples. The web service is used for operations that cannot be done (easily, or at all) using out-of-the-box SharePoint Designer actions. For example: moving documents and copying metadata from one folder to another (in the same or a different list), syncing information between lists/libraries, calculating values based on the metadata of several documents living within a given folder, scheduling data into an external database to be used by other components such as a console application running as a scheduled task, etc.
The web service calls take an average of 2 minutes to execute and return a value. The fastest methods take around 30 seconds, and the slowest around 4 minutes. Both the slow and fast methods are frequently utilised.
Your problem could be caused by a number of things, and you need to gather more information in order for anyone to be helpful to you.
With that said, the best I can do here is give you some pointers on how to gather such information, such as:
Turn on WCF tracing and try to understand when the error occurs on the SharePoint side. Does the error occur while the web service is processing the request, after it has finished, or does the service never receive the request in the first place?
If this tracing doesn't give you many answers, write code in your web services to trace specific messages giving you more information on what the web service is doing and what it is receiving from/returning to SharePoint, or use your preferred logging library.
In specific cases, the Event Viewer might have some information on what is happening. Check for any messages that show up around the time the error occurs on the client.
Finally, relaxing your serviceThrottling settings might mitigate some of your issues, but it won't solve them.
If you have a lot of I/O operations in your web services (access to databases, the filesystem, or other web services), you might improve their performance by using asynchronous I/O via the Task Parallel Library (TPL); see the sketch after these pointers.
If you are returning a lot of data from your web service (like a big object, an object with cyclic references, or a big file), this might also be why the server is forcing connections to be closed.
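As a rough illustration of the asynchronous I/O pointer above - WCF supports Task-based operations from .NET 4.5 onwards; the contract, query, and connection string here are invented for the sketch:

using System.Data.SqlClient;
using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract]
public interface IDocumentOps
{
    [OperationContract]
    Task<int> CountDocumentsAsync(string folderId);
}

public class DocumentOps : IDocumentOps
{
    public async Task<int> CountDocumentsAsync(string folderId)
    {
        // The request thread is released while the query is in flight,
        // freeing it to serve other workflow calls in the meantime.
        using (var conn = new SqlConnection("...connection string..."))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Docs WHERE FolderId = @f", conn))
        {
            cmd.Parameters.AddWithValue("@f", folderId);
            await conn.OpenAsync();
            return (int)await cmd.ExecuteScalarAsync();
        }
    }
}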
Hope this helps you in solving your issue.

how to make web service run to finish even if user leaves page

I have a website where I need to take a bit of data from the user, make an AJAX call to a .NET web service, and then the web service does some work for about 5-10 minutes.
I naturally don't want the user to have to sit there that whole time, so I have made it an asynchronous AJAX call to the web service, and after the call has been sent, I redirect the user to a "you are done!" page.
What I want to happen is for the web service to keep running to completion - and not abort - after it receives the information from the user.
From my testing, this is more or less what happens, but now I'm finding that this might be limited by time? I.e. if the web service runs past a certain amount of time, it will abort if the user isn't still connected.
I might be off here in this assessment, but this is what I THINK is going on from my testing.
So my question is whether, with .NET web services, this is indeed what happens. Does the call get aborted after some time if the user isn't still on the other end? Is there any way to disable this abort?
Thanks in advance!
When you invoke a web service, it will always finish its work, even if the user leaves the page that invoked it.
Of course, web services have their own configuration, and one of the settings is the timeout.
If you're creating a WCF service (SOAP service) you can set it in the contract (by changing binding properties); if you're creating a service with Web API or MVC (a REST/HTTP service) then you can either add it to the config file or set it programmatically in the controller, as follows:
HttpContext.Server.ScriptTimeout = 3600; // number of seconds
That can be a reason for the web service to interrupt its work, but it is not related to what happens on the client side.
Have a nice day,
Alberto
Whilst I agree that the answer here is technically correct, I just wanted to post a more robust alternative approach that avoids some of the pitfalls possible with your current approach, such as:
The web server being bounced during the long-running processing of a request
The web server's app pool being recycled during processing
The web server running out of threads due to too many long-running requests and not being able to process any more requests
I would recommend you take a thoroughly asynchronous approach and use Message Queues (MSMQ, for example) with a trigger on the queue that will execute the work.
The process would be:
Your page makes Ajax call to the Webservice
Webservice writes a message into the Queue and returns right away. The message contains details of what work needs to be carried out.
User continues on your site as usual, or goes home, etc.
A trigger on the queue is watching for messages and, when a message arrives in the queue, activates a process which:
Reads the message
Performs the necessary work
Updates any back-end storage, etc, with the results of the work
This is much more robust because it totally decouples the web service from any long-running work and means that if the user makes a request and the web server goes down a moment later (for whatever reason), the work will still be queued up when the server comes back online, etc.
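As a bare-bones sketch of step 2 above - the queue path, label, and message shape are made up for the example; System.Messaging is the classic MSMQ API:

using System.Messaging;  // add a reference to System.Messaging.dll

public class WorkQueue
{
    // Hypothetical queue path; create the queue once at deployment time.
    const string QueuePath = @".\private$\longRunningWork";

    // Called from the web service: write the message and return immediately.
    public static void Enqueue(string workDetails)
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath, transactional: true);

        using (var queue = new MessageQueue(QueuePath))
        using (var tx = new MessageQueueTransaction())
        {
            tx.Begin();
            queue.Send(workDetails, "work-item", tx);  // durable; survives server restarts
            tx.Commit();
        }
    }
}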
You can read more about it here (MSMQ is the MS Message Queue tech; there are many others!)
Just my 2c

How expensive is it to call a web service?

I've had a fairly good search on Google and nothing has popped up to answer my question. As I know very little about web services (only started using them, not building them, in the last couple of months), I was wondering whether I should be ok to call a particular web service as frequently as I wish (within reason), or should I build up requests to do in one go.
To give you an example, my app is designed to make job updates, and certain types of updates will call the web service. It seems my options are these: I could create a DataTable in my app of the updates that require the web service, pass the whole DataTable to the web service, and write a method in the web service to process the DataTable's updates. Alternatively, I could iterate through my entire table of updates (which includes updates other than those requiring the web service) and call the web service as and when an update requires it.
At the moment it seems simpler for me to pass each update, rather than a DataTable, to the web service.
In terms of data passed to the web service, each update would contain a small amount of data (3 strings, max 120 characters in length). In terms of numbers, there would probably be no more than 200 updates.
I was wondering whether I should be ok to call a particular web service as frequently as I wish (within reason), or should I build up requests to do in one go.
Web services or not, any calls routed over the network would benefit from building up multiple requests, so that they could be processed in a single round-trip. In your case, building an object representing all the updates is going to be a clear winner, especially in setups with slower connections.
When you make a call over the network, these things need to happen for the client to communicate with the server (again, web services or not):
The data associated with your call gets serialized on the client
Serialized data is sent to the server
Server deserializes the data
Server processes the data, producing a response
Server serializes the response
Server sends serialized response back to the client
The response is deserialized on the client
Steps 2 and 6 usually cause a delay due to network latency. For simple operations, latency often dominates the timing of the call.
The latency on the fastest networks used for high-frequency trading is measured in microseconds; on regular ones it is in milliseconds. If you are sending 100 requests one by one over a network with 1 ms lag (2 ms per round-trip), you are wasting 200 ms on network latency alone! That is one fifth of a second, a lot of time by the standards of today's CPUs. If you can eliminate it simply by restructuring your requests, that's a great reason to do it.
You should usually favor coarse-grained remote interfaces over fine-grained ones.
Consider adding 10 ms of network latency to each call - at 20 ms per round-trip, 100 individual updates would spend about 2 seconds on latency alone, versus roughly 20 ms for a single batched call.
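To make the contrast concrete, here is a hypothetical WCF-style contract showing both shapes; the three-small-strings update follows the question, everything else is invented for illustration:

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class JobUpdate
{
    // Three small strings per update, as described in the question.
    [DataMember] public string JobId { get; set; }
    [DataMember] public string Field { get; set; }
    [DataMember] public string Value { get; set; }
}

[ServiceContract]
public interface IJobUpdates
{
    // Fine-grained: one network round-trip per update (up to 200 round-trips).
    [OperationContract]
    void ApplyUpdate(JobUpdate update);

    // Coarse-grained: one round-trip for the whole batch - usually the winner.
    [OperationContract]
    void ApplyUpdates(JobUpdate[] updates);
}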

Separate threads in a web service after it's completed

If this has been asked before my apologies, and this is .NET 2.0 ASMX Web services, again my apologies =D
A .NET application that only exposes web services. Roughly 10 million messages per day, load balanced between multiple IIS servers. Each incoming message is XML, and each outgoing message is XML (XmlElement). (We have beefy servers that run on steroids.)
I have an SLA that all messages are processed in under X seconds.
One function in the process, Linking Methods, is now taking 10-20 seconds. It is required for every transaction, but it is not critical that it happens before the web service returns its results. Because of this I suggested throwing it onto another thread, but now realize that my words and the eager developers behind them might not have fully thought this through.
The example below shows the current flow on the left and what is being attempted on the right.
Effectively, what I'm looking for is to have the web service spawn a long-running (10-20 second) thread that will execute even after the web service call has completed.
This is what, effectively, is going on:
Thread linkThread = new Thread(delegate()
{
    // Fire-and-forget: runs on after the web method has already returned.
    Linkmembers(GetContext(), ID1, ID2, SomeOtherThing, XMLOrSomething);
});
linkThread.Start();
Using this we've reduced the time from 19 seconds to 2.1 seconds on our dev boxes, which is quite substantial.
I am worried that, with the amount of traffic we get, and if a vendor/outside party decides to throttle us, IIS might decide to recycle/kill those threads before they're done processing. I agree our solution might not be the "best", but we don't have the time to build a queue system or another Windows Service to handle this.
Is there a better way to do this? Any caveats that should be considered?
Thanks.
Apart from the issues you've described, I cannot think of any. That being said, there are ways to fix the problem that do not involve building your own solution from scratch.
Use MSMQ with WCF: create a WCF service with an MSMQ endpoint hosted in IIS (no need for a Windows Service as long as WAS is enabled) and make calls to that service from within your ASMX service. You reap all the benefits of reliable queuing without having to build your own.
Plus, if your MSMQ service fails or throws an exception, the message will be reprocessed automatically. If you use DTC and are hitting a database, you can even have the MSMQ transaction flow to the DB.
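A hedged sketch of what the MSMQ-backed WCF piece could look like; the queue path, contract, and security mode are placeholders:

using System.ServiceModel;

[ServiceContract]
public interface ILinkWork
{
    // MSMQ endpoints require one-way operations.
    [OperationContract(IsOneWay = true)]
    void LinkMembers(string id1, string id2, string payloadXml);
}

class AsmxSideCaller
{
    // Called from the ASMX service: the message is queued durably and the
    // call returns without waiting for the 10-20 second linking work.
    public static void Enqueue(string id1, string id2, string payloadXml)
    {
        var binding = new NetMsmqBinding(NetMsmqSecurityMode.None);  // placeholder security
        var address = new EndpointAddress("net.msmq://localhost/private/linkWork");

        var factory = new ChannelFactory<ILinkWork>(binding, address);
        ILinkWork channel = factory.CreateChannel();

        channel.LinkMembers(id1, id2, payloadXml);  // returns once the message is queued
        ((IClientChannel)channel).Close();
        factory.Close();
    }
}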

making a web service faster (wcf)

We are trying to write an internal WCF service between two servers.
One of the applications is a server application for our clients.
The clients send us files, and we then process and convert them.
This whole process takes some time, during which the client session is open. I don't think using async is possible here? In which way can we make this methodology faster?
Keep in mind that we have approximately 1,000 files an hour; each client sends up to 200 files an hour.
G
You could send an address to be called back when the file processing is done, so the service can notify the consuming server. Or use a message queue on both ends.
This article (link) by Juval Lowy is all about one-way services, WCF callback methods, etc. It should show you how to set your services up to handle what you're looking for.
One-way services make the call asynchronous - fire and forget. Setting up a callback does what it sounds like - you can specify a service/method to be called back after a method executes.
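As a loose sketch of that one-way-plus-callback pattern - the contract names are invented, and a duplex-capable binding such as netTcpBinding or wsDualHttpBinding is assumed:

using System.ServiceModel;

// Interface the client implements so the service can call it back.
public interface IFileProcessingCallback
{
    [OperationContract(IsOneWay = true)]
    void ProcessingComplete(string fileName, bool succeeded);
}

[ServiceContract(CallbackContract = typeof(IFileProcessingCallback))]
public interface IFileProcessing
{
    // Fire-and-forget: the caller is released as soon as the message is sent.
    [OperationContract(IsOneWay = true)]
    void SubmitFile(string fileName, byte[] contents);
}

public class FileProcessingService : IFileProcessing
{
    public void SubmitFile(string fileName, byte[] contents)
    {
        // ... long-running processing/conversion happens here ...

        // Notify the sender once the work is finished.
        var callback = OperationContext.Current.GetCallbackChannel<IFileProcessingCallback>();
        callback.ProcessingComplete(fileName, true);
    }
}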
Better yet, check out chapter 5 in Lowy's Programming WCF Services (link). It goes into MUCH greater detail than the article above.
I think the first link is enough to get started though.

Categories

Resources