I have a class library I developed that is rather processing intensive, and I currently call it through a WCF REST service.
The REST service directly accesses the class library's DLLs; more or less, the WCF REST service is an interface for the system.
Let's say the following methods are defined:
Create Request
Starts a thread that takes five minutes, but immediately returns a session ID that the process generates; the thread uses that ID to report to the database when it has completed.
Check Status
Accepts a session ID and checks the database to see whether the process has completed.
I have to think that there is a better way to "manage" the running threads; however, my requirements state that the user should receive an immediate response from the REST service upon issuing a request.
I am using the WCF Message property to return XML to the browser, and since this application can be called from any programming language, I can't use classic WCF and callbacks (I think, correct me if I am wrong).
Sometimes an error occurs, the IsComplete flag never gets written to the database, and the "Check Status" method therefore says it is processing forever.
Does anyone have any ideas about what is normally done and what can be done in this situation?
Thanks!
Jeffrey Kevin Pry
Your service should return a 202 Accepted to the initial request, with a way for the client to check the current status, either through the Location header or as part of the content.
As you indicate, the client then polls the indicated URL to check the current status. I would also suggest adding a bit of cache time to this response, in case a client just starts looping.
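In WCF REST (which the question is using), that initial response might look roughly like the sketch below; the operation, URI template, and the StartLongRunningWork helper are all made up for illustration:

using System;
using System.Net;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Web;

[ServiceContract]
public class RequestService
{
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "/requests")]
    public Message CreateRequest()
    {
        // Hypothetical helper: kicks off the background thread and returns its session id.
        string sessionId = StartLongRunningWork();

        OutgoingWebResponseContext response = WebOperationContext.Current.OutgoingResponse;
        response.StatusCode = HttpStatusCode.Accepted;                             // 202, not 200
        response.Headers[HttpResponseHeader.Location] = "/requests/" + sessionId;  // where to poll
        response.Headers[HttpResponseHeader.CacheControl] = "max-age=5";           // a bit of cache time

        return WebOperationContext.Current.CreateTextResponse(
            "<request><sessionId>" + sessionId + "</sessionId></request>",
            "application/xml");
    }

    private string StartLongRunningWork()
    {
        // Start the five-minute job on a background thread here and return its session id.
        return Guid.NewGuid().ToString();
    }
}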
How you handle things on the server is up to you and in no way related to REST. For one thing, I would put all the logic that executes on the background thread in a try/catch, so you can return an error status if an error occurs and possibly retry the action depending on the circumstances.
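For instance, the background thread could be guarded like this; DoProcessing and UpdateStatus are placeholder names standing in for the poster's own processing and database code:

using System;
using System.Threading;

public class RequestWorker
{
    public void RunInBackground(string sessionId)
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            try
            {
                DoProcessing(sessionId);                  // the five-minute job
                UpdateStatus(sessionId, "Completed");     // write IsComplete to the database
            }
            catch (Exception ex)
            {
                // Never let the status row go stale: record the failure so
                // "Check Status" can report an error instead of "processing forever".
                UpdateStatus(sessionId, "Failed: " + ex.Message);
            }
        });
    }

    // Placeholders for the poster's actual processing and database code.
    private void DoProcessing(string sessionId) { /* ... */ }
    private void UpdateStatus(string sessionId, string status) { /* ... */ }
}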
I implemented a similar process for importing/processing large files and, to be honest, I have never had a problem. Perhaps resolving the reason that IsComplete never gets set will make this more resilient.
Not much of an answer, but still..
Related
I have a .NET Core 3.1 Web API that exposes access to some long-running operations. Say, for example, a client requests that the API perform some calculation that takes a while. The API delegates that request to a Service that is injected with a Repo and a Driver. The service calls the driver and passes in an anonymous function for the driver to communicate progress back, so that the service can update the repo accordingly. Put very simply, like this:
//throw new Exception("This exception won't cause a crash");
_driver.startCalculation(arg1, arg2, (status) => {
//throw new Exception("This exception will cause a crash");
_repo.updateStatus(status);
});
By the time the code in the anonymous function is executed, a response has already been sent to the client.
If an exception occurs outside of there, the client gets a 500 response and the server survives. Inside of there, however, an exception crashes the whole API process.
I'm looking for any insight that will help me figure out how I should deal with this. Why does an exception in there cause a full-blown crash? My first idea is to just eat any exceptions in the anonymous function, but I'm concerned I might just be covering up a symptom of a design flaw.
One idea is to change your architecture to something like this:
Request -> Server
(The server accepts the request with basic validation, but does not complete the workload yet.)
Response <- 202 Accepted (NOT 200).
Then you come back later and poll for the completed work (a client-side polling sketch follows after the quoted article below).
This is described more fully in the article below.
"One solution to this problem is to use HTTP polling. Polling is useful to client-side code, as it can be hard to provide call-back endpoints or use long running connections. Even when callbacks are possible, the extra libraries and services that are required can sometimes add too much extra complexity."
(from https://learn.microsoft.com/en-us/azure/architecture/patterns/async-request-reply)
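On the client side, the polling loop can be as simple as the following sketch; the status-URL convention and the Retry-After handling are assumptions for illustration, not taken from the article:

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class StatusPoller
{
    public static async Task WaitForCompletionAsync(string statusUrl)
    {
        using (var client = new HttpClient())
        {
            while (true)
            {
                HttpResponseMessage response = await client.GetAsync(statusUrl);

                // One common convention: the status resource answers 202 (or a "running" body)
                // while the work is in progress, and 200 with the result once it is done.
                if (response.StatusCode == HttpStatusCode.OK)
                {
                    Console.WriteLine(await response.Content.ReadAsStringAsync());
                    return;
                }

                // Respect the suggested wait (Retry-After) instead of hammering the server.
                TimeSpan delay = response.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(5);
                await Task.Delay(delay);
            }
        }
    }
}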
We are trying to consume a REST API for a message processor, and some of its operations might take longer than the configured timeout.
We would like to know: if the HTTP call to the API times out, will that stop execution on the API side, or will the API keep executing?
The idea is that we can fire and forget the API call; we are not worried if the API returns a 404 or 503. But we would like to hear whether the API will continue to execute.
Any input or suggestion is appreciated.
You should use some kind of background processing to handle the process.
I recommend using Hangfire for it.
https://www.hangfire.io/
Use Hangfire to enqueue a job; it will return a job ID. You can return this job ID to the client side.
Expose another API to check for the status of this job.
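A rough sketch of both pieces (assuming Hangfire is already wired into an ASP.NET Core app; the controller, route, and MessageProcessor names here are invented):

using Hangfire;
using Hangfire.Storage;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/messages")]
public class MessagesController : ControllerBase
{
    [HttpPost]
    public IActionResult Process(string payload)
    {
        // Enqueue the long-running work and return immediately with the job id.
        string jobId = BackgroundJob.Enqueue(() => MessageProcessor.Process(payload));
        return Accepted(new { jobId });
    }

    [HttpGet("{jobId}/status")]
    public IActionResult Status(string jobId)
    {
        // Read the job's current state (Enqueued, Processing, Succeeded, Failed, ...) back from Hangfire storage.
        using (IStorageConnection connection = JobStorage.Current.GetConnection())
        {
            JobData data = connection.GetJobData(jobId);
            if (data == null) return NotFound();
            return Ok(new { jobId, state = data.State });
        }
    }
}

public static class MessageProcessor
{
    public static void Process(string payload)
    {
        // The actual long-running operation goes here.
    }
}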
A great way to handle this is with a callback/observer pattern. First of all, understand that there are two types of timeout, server and client. You can explicitly specify the client timeout; the server timeout is handled by the server itself.
So you will need to implement an algorithm along these lines (a sketch follows below):
Identify each request in a unique way and mark it, before firing, in memory or in a file/db.
Fire the request with an associated callback method.
On the response you then have control to do things like mark the request fulfilled or failed, or whatever it is.
Mark/delete the request data.
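Here is a minimal sketch of that bookkeeping, in-memory only and with made-up names, just to show the shape of it:

using System;
using System.Collections.Concurrent;
using System.Net.Http;
using System.Threading.Tasks;

public class RequestTracker
{
    // 1. Each request is identified and marked before it is fired (in memory here; could be a file/db).
    private readonly ConcurrentDictionary<Guid, string> _requests = new ConcurrentDictionary<Guid, string>();
    private static readonly HttpClient Client = new HttpClient();

    public Guid Fire(string url, Action<Guid, string> onCompleted, Action<Guid, Exception> onFailed)
    {
        Guid id = Guid.NewGuid();
        _requests[id] = "Pending";

        // 2. Fire the request with an associated callback.
        Client.GetStringAsync(url).ContinueWith(t =>
        {
            // 3. On the response we have control to mark the request fulfilled or failed.
            if (t.Status == TaskStatus.RanToCompletion)
            {
                _requests[id] = "Fulfilled";
                onCompleted(id, t.Result);
            }
            else
            {
                _requests[id] = "Failed";
                onFailed(id, t.Exception);
            }

            // 4. Mark/delete the request data once it has been handled.
            _requests.TryRemove(id, out _);
        });

        return id;
    }
}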
As you'll most likely figure out, I'm not very experienced with async operations in general (I've only used Android's AsyncTask).
This is the outline of a WCF REST POST method; I'll use it to hopefully explain what I'm trying to achieve.
The FirstJob saves some stuff to the database.
SecondJob reads what was saved in the database and does some work with the data.
The client does not care about what happens in SecondJob and just wants to receive the response from FirstJob.
So the two jobs don't need to run in parallel as SecondJob depends on FirstJob; the SecondJob would ideally run in a separate thread/context(?) or similar.
From what I've noticed, the second job does start in a separate thread and execution reaches the return statement while it is running, but the request does not end until SecondJob finishes.
I'd personally treat the second job as a separate POST operation and call the second job POST from the controller. The controller is the controller for the first job and can return the correct status from the first job; it just happens to call a POST out to a second endpoint while doing it.
The benefit of this approach is that the second job doesn't even need to be on the same IIS (in an NLB farm it could be anywhere) so you get load balancing thrown in for free. Alternatively the "second job server" can be on a specific URL reserved just for this kind of background processing task.
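A rough sketch of that idea, with an invented second-job URL and a placeholder persistence helper (not the poster's actual code):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public class FirstJobService
{
    private static readonly HttpClient Client = new HttpClient();

    public string CreateFirstJob(string input)
    {
        // FirstJob: save some stuff to the database and keep the id for the response.
        string resultId = SaveToDatabase(input);

        // Kick off SecondJob by POSTing to its own endpoint and deliberately not awaiting the call.
        // That endpoint can live on the same box or anywhere behind the load balancer.
        var content = new StringContent(resultId, Encoding.UTF8, "text/plain");
        Task fireAndForget = Client.PostAsync("http://backgroundbox/secondjob", content);

        // Respond to the client with the FirstJob result only.
        return resultId;
    }

    // Placeholder for the poster's actual FirstJob persistence logic.
    private string SaveToDatabase(string input)
    {
        return Guid.NewGuid().ToString();
    }
}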
I suggest you not rely on IIS to handle your background task, as IIS can shut it down without waiting. I suggest you create a Windows Service application which will accept the requests for the second jobs, via another WCF binding, database requests, or something else.
You can get the results of the second jobs with another request from your controller, as @PhillipH stated.
The thing I was trying to do was actually working in the first place, but the Visual Studio debugger fooled me. I tested again without the debugger, but with a Thread.Sleep(60000), and it looks like it behaves as expected. The SecondJob keeps running in the background after the API call has returned the response.
I was given the task of creating a web based client for a web service.
I began building it out in C# / .NET 4.0 / MVC3 (I can use 4.5 if necessary).
It sounded like a piece of cake until I found out that some of their responses would be asynchronous. This is the flow: you call a method and they return a response of ack or nack, letting you know whether your request was valid. Upon an ack response you should expect an async response with the data you requested, which will be sent to a callback URL that you provide in your request.
Here are my questions:
If I'm building a web app and debugging on localhost:{portnum}, how can I give them a callback URL?
If I have already received a response (ack/nack) and my function finishes firing, isn't my connection to the client then over? How would I then get the data back to the client? My only thought is maybe using something like SignalR, but that seems crazy for a customer buy flow.
Do I have to treat their response like a webhook? Build something separate that just listens and has no knowledge of the initial request, save the data to a db, and then have the initial request while-loop until there is a record for the unique ID sent from the webhook... oy vey.
This really has my brain hurting :-/
Any help would be greatly appreciated. Articles, best practices, anything.
Thanks in advance.
If you create your service reference, it will generate a *ServiceMethod*Completed delegate. Register an event handler on it to process your data.
Use the ServiceMethod_Async() method to call the service.
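With the generated client, that looks roughly like the snippet below; the client and operation names are placeholders for whatever your service reference actually generates:

// The client class and operation names are placeholders for the generated proxy.
var client = new TheirServiceClient();

// Register a handler on the *Completed event; it fires when the async reply arrives.
client.GetQuoteCompleted += (sender, e) =>
{
    if (e.Error != null)
    {
        // handle the fault here
        return;
    }

    var data = e.Result;   // the payload returned by the service
    // hand the data off to whatever is waiting for it
};

// Kick off the call; this returns immediately instead of blocking.
client.GetQuoteAsync(requestArgs);   // 'requestArgs' stands in for the operation's parameters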
The way I perceived your question is as follows, though please correct me if I'm wrong about this:
1) You send a request to their endpoint with parameters filled with your data. In addition, you send a "callback url that you provide in your request" (quoted from your question).
2) They (hopefully) send an ack for your valid request.
3) They eventually send the completed data to your callback URL (which you specified).
If this is the flow, it's not all that uncommon, especially if the operations on their side may take long periods of time. So let's say that you have some method; we'll call it HandleResponse(data). If you had originally intended to do this synchronously, which rarely happens in the web world, you would presumably have called HandleResponse(http-webservice-call-to-them);
Instead, since it is they who are initiating the call to HandleResponse, you need to set up a route in your web app like /myapp/givemebackmydata/{data} and hook that to HandleResponse. Then you specify the callback URL to them as /myapp/givemebackmydata/{data}. Keep in mind that without more information I can't say whether they will send the data as the body of a POST request to your handler or string-replace a portion of the URL with the actual data, in which case you'd need to substitute {data} in your callback URL with whatever placeholder they stipulate in their docs. Do they have docs? If they don't, none of this will help all that much.
Lastly, to get the data back on the client you will likely want some sort of polling loop in your web client, preferably via AJAX. This would run on a setInterval and periodically hit some page on your server that keeps state for whether or not their web service has called your callback URL yet. This is the gnarlier part, because you will need to keep state for each request, since multiple people will presumably be waiting for a callback and each callback URL hit will map to one of the waiting clients. A GUID may be good for this.
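To make that concrete, a minimal MVC sketch of the callback endpoint and the polling endpoint might look like this (all names invented; in-memory storage for illustration only, where a db would be used in practice):

using System;
using System.Collections.Concurrent;
using System.Web.Mvc;

public class CallbackController : Controller
{
    // Request id -> data received from the remote service (in-memory for illustration only).
    private static readonly ConcurrentDictionary<Guid, string> Results =
        new ConcurrentDictionary<Guid, string>();

    // The callback URL you give them, e.g. /callback/givemebackmydata/{id}
    [HttpPost]
    public ActionResult GiveMeBackMyData(Guid id, string data)
    {
        Results[id] = data;
        return new HttpStatusCodeResult(200);
    }

    // The URL your own AJAX polling loop hits, e.g. /callback/status/{id}
    [HttpGet]
    public ActionResult Status(Guid id)
    {
        string data;
        if (Results.TryGetValue(id, out data))
        {
            return Json(new { done = true, data }, JsonRequestBehavior.AllowGet);
        }
        return Json(new { done = false }, JsonRequestBehavior.AllowGet);
    }
}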
Interesting question, by the way.
I have a WCF service, marked with the OperationContract attribute.
I have a potentially long running task I want to perform when this operation is carried out, but I don't want the caller (in this case Silverlight) to have to wait for that to complete.
What is my best option for this?
I was thinking of either:
something like the OnActionExecuted method of ActionFilterAttribute in System.Web.Mvc, but I couldn't see an equivalent;
something listening to an event (the process I want to call is static, so I'm not too sure about this approach);
or something else.
In the scenario I'm working in, I lock the app so the user cannot make any changes during the save until I get the response (a status code) back.
Keep in mind, Silverlight won't actually have to 'wait' for the call to finish. When you create a service reference within Silverlight you will automatically get async calls.
Assuming you really don't need to wait for the call to finish (ie: your service method uses a 'void' return type) you can mark the service method as one-way via:
[OperationContract(IsOneWay = true)]
void MyServiceMethod(string someArg);
In general, I suggest having another process service handle long-running actions. Create a simple Windows Service, and have it pull requests from an MSMQ queue via WCF. Have the main service post requests to the background service, then return to its caller. If anyone cares about the results, then the results may be placed in an output queue, and the Silverlight application could get them by querying the output queue.
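A rough sketch of that hand-off, with an invented queue path and contract, and assuming MSMQ and the net.msmq binding are available on the box:

using System.ServiceModel;

[ServiceContract]
public interface ILongRunningWork
{
    // One-way: the main WCF service drops the request on the queue and returns immediately.
    [OperationContract(IsOneWay = true)]
    void Submit(string requestId, string payload);
}

public static class LongRunningWorkClient
{
    public static void Post(string requestId, string payload)
    {
        var binding = new NetMsmqBinding(NetMsmqSecurityMode.None);
        var address = new EndpointAddress("net.msmq://localhost/private/longRunningWork");

        var factory = new ChannelFactory<ILongRunningWork>(binding, address);
        ILongRunningWork channel = factory.CreateChannel();
        channel.Submit(requestId, payload);   // picked up later by the Windows Service host
        ((IClientChannel)channel).Close();
        factory.Close();
    }
}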
You might also look into Windows Workflow Foundation, which is made to fit very well with WCF. In fact, you can have just this kind of service, where all the logic of the service is in the workflow. If the workflow takes too long, it can be persisted to disk until it's ready to go again.
My suggestion is to go for the netTcp binding for your distributed computing.
Try it and you will get a solution to your problem.
For netTcpBinding usage, please follow the link below:
http://msdn.microsoft.com/en-us/library/ff183865.aspx
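If you do go that route, a minimal self-hosted netTcp endpoint looks something like the sketch below; the service and contract names are placeholders, and the MSDN link above covers configuration in more depth:

using System;
using System.ServiceModel;

[ServiceContract]
public interface ICalculationService
{
    [OperationContract]
    string StartCalculation(string input);
}

public class CalculationService : ICalculationService
{
    public string StartCalculation(string input)
    {
        // The long-running work would be dispatched from here.
        return "session-id";
    }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(CalculationService),
            new Uri("net.tcp://localhost:8523/calc"));
        host.AddServiceEndpoint(typeof(ICalculationService), new NetTcpBinding(), "");
        host.Open();

        Console.WriteLine("Service listening; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}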