I am testing a SOAP service using a client generated in VS2010 via "Add Service Reference".
I am running tests in parallel (c. 10 threads) and this has exposed some DB locking issues in the system under test. However, this is not going to be fixed straight away and I don't want my functional tests failing due to this problem.
As a result I have reduced my test threads to 1, and as expected I no longer see the locking issue; however, this obviously makes my test suites a great deal slower. I was therefore wondering whether it is possible to use client configuration to restrict the client to making only one request at a time?
It's not the SOAP client that needs to be restricted, it's the calling code. A SOAP call is performed on whichever thread it is made from. If you have a problem with multiple threads, it is because you have multiple threads in your code, or because you are making additional service calls or updating something in a callback without understanding which thread you are on.
Depending on the problem, there are many possible solutions, which could include:
Remove the multi-threading from your application: don't use callbacks and don't fire up additional threads.
Or, ideally: make sure you dispatch back to the UI thread when appropriate, and understand which thread you are on, so you can fix the underlying locking problem.
http://msdn.microsoft.com/en-us/library/ms591206.aspx
Me.Dispatcher.BeginInvoke(Sub()
' This will be executed on the UI Thread
End Sub)
It would be nice (and very useful) to have the code in question..... in your question. Which version of .NET you're using and what your app is written in (ASP.NET, WinForms?) would also help give some context.
NB: the sample code above is in VB.NET, but you get the idea ;p
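If you just need a stop-gap until the DB locking issue is fixed, you can also serialize the calls from your own test code rather than looking for a client configuration switch. A minimal C# sketch, assuming a generated client called MyServiceClient with a DoWork operation (both names are hypothetical placeholders for your service reference):

using System.Threading;

public static class SerializedClient
{
    // One permit: at most a single SOAP call is in flight at any time.
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(1, 1);

    public static DoWorkResponse CallDoWork(DoWorkRequest request)
    {
        Gate.Wait();
        try
        {
            using (var client = new MyServiceClient())
            {
                return client.DoWork(request);
            }
        }
        finally
        {
            Gate.Release();
        }
    }
}

All ten test threads can then keep running; they simply queue up at the gate, so the system under test only ever sees one request at a time.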
Related
I am in need of some guidance on a project I am working on. We are looking for a replacement for our CORBA server setup. In a nutshell, we currently run a CORBA daemon service that hosts 10 instances of a C++ exe which is the entry point into our calculation process. The C++ code hooks into a bunch of different .NET and C++ dlls and OCXs via COM. We also have another version of the executable compiled as a .dll that we are able to call in a similar fashion, but it is only a single-instance system, so all is well there.
We are now looking to replace the CORBA components with a Web API, so I have put together a basic ASP.NET Web API project that is able to process requests into this C++ dll. Again, this works great when it only needs to handle one request at a time. Things start going sideways when I start testing concurrent requests. The requests come into my handler just fine and I can see the 5 requests (I have logging everywhere tracking what's going on), and each thread creates an instance of the dll, but they run sequentially.
What I have figured out is that even though there are multiple threads running in the ASP.NET handler, the dll is STA-threaded (this is confirmed in the code), so the calls are queued up and processed only one at a time. My guess is that because the threads are all inside the same process, the dll treats all of them as belonging to the same apartment (STA), which causes the queueing.
I have tried various async/await and Task.Run code, and I can see different threads, but it still comes down to the same process, which makes the dll run sequentially. I did try changing the dll to MTA by changing CoInitializeEx(NULL, 0x2) to CoInitializeEx(NULL, 0x0), but that didn't seem to change anything.
I am now running out of ideas, and I don't think switching to the .exe version and spawning multiple processes is going to work, because it is the CORBA layer that allows a return object to be created and communicated back to the calling code. I need to be able to get the objects that are created in the exe back into the response.
Sorry for the long post, hopefully someone will take the time to read this wall of text and have some ideas of what I can try.
Thank you!
I would suggest that the WebAPI architecture is a poor solution to your problem. Typically you do not want to spawn long-running or blocking processes from ASP.NET, because it's quite easy to exhaust the threadpool and prevent the server from being able to handle new requests.
If you do want to continue having a WebAPI endpoint, I would start with taking the requests and putting them in a queue, and having the client poll or subscribe for the completed result.
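A rough sketch of that queue-and-poll shape, in case it helps (Web API 2 style; all names are hypothetical, and a separate background worker would drain Pending and fill Results):

using System;
using System.Collections.Concurrent;
using System.Net;
using System.Web.Http;

public class CalcController : ApiController
{
    // Work waiting to be processed, and results keyed by ticket id.
    public static readonly ConcurrentQueue<Tuple<Guid, string>> Pending =
        new ConcurrentQueue<Tuple<Guid, string>>();
    public static readonly ConcurrentDictionary<Guid, string> Results =
        new ConcurrentDictionary<Guid, string>();

    [HttpPost]
    public Guid Submit([FromBody] string input)
    {
        var ticket = Guid.NewGuid();
        Pending.Enqueue(Tuple.Create(ticket, input));
        return ticket; // the client polls GetResult with this id
    }

    [HttpGet]
    public IHttpActionResult GetResult(Guid ticket)
    {
        string result;
        if (Results.TryGetValue(ticket, out result))
            return Ok(result);
        return StatusCode(HttpStatusCode.Accepted); // still processing
    }
}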
You may be interested in looking at what they're doing with gRPC in .NET Core 3.0, if you want to keep that kind of architecture but update the platform.
You can create multiple app domains. An app domain "can be considered as a lightweight process which is both a container and a boundary" (ref). Load your dlls into those different domains; that way, every app domain you create will load your COM dlls separately. Create proxies using MarshalByRefObject, as used here. Then write an orchestrator that distributes requests to the app domains, gets the results back from them, and sends the responses, keeping track of which domains are busy and which are not, or creating new domains for incoming requests.
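A bare-bones sketch of the proxy part (names are hypothetical; error handling, domain unloading and the orchestration loop are omitted):

using System;

// Worker that loads and calls the COM-backed code; instances of this
// class live inside child app domains and are called via remoting.
public class CalcWorker : MarshalByRefObject
{
    public string Run(string input)
    {
        // ... call into the legacy dll here ...
        return input;
    }
}

// Orchestrator side: one app domain (and one worker proxy) per slot.
public static class DomainPool
{
    public static CalcWorker CreateWorker(int slot)
    {
        AppDomain domain = AppDomain.CreateDomain("worker-" + slot);
        return (CalcWorker)domain.CreateInstanceAndUnwrap(
            typeof(CalcWorker).Assembly.FullName,
            typeof(CalcWorker).FullName);
    }
}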
Different methods are also mentioned in this link.
Is there a way to fire an HTTP call to an external web API within my own web API without having to wait for results?
The scenario I have is that I really don't care whether or not the call succeeds and I don't need the results of that query.
I'm currently doing something like this within one of my web API methods:
var client = new HttpClient() { BaseAddress = someOtherApiAddress };
client.PostAsync("DoSomething", null);
I cannot put this piece of code within a using statement because the call doesn't go through in that case. I also don't want to call .Result on the task because I don't want to wait for the query to finish.
I'm trying to understand the implications of doing something like this. I read all over that this is really dangerous, but I'm not sure why. What happens, for example, when my initial request ends? Will IIS dispose of the thread and the client object, and can this cause problems at the other end of the call?
Is there a way to fire an HTTP call to an external web API within my own web API without having to wait for results?
Yes. It's called fire and forget. However, it seems like you have already discovered it.
I'm trying to understand the implications of doing something like this
One of the links in the answers you referenced above states the three risks:
An unhandled exception in a thread not associated with a request will take down the process. This occurs even if you have a handler setup via the Application_Error method.
This means that any exception thrown in your application or in the receiving application won't be caught (there are ways to get around this).
If you run your site in a Web Farm, you could end up with multiple instances of your app that all attempt to run the same task at the same time. A little more challenging to deal with than the first item, but still not too hard. One typical approach is to use a resource common to all the servers, such as the database, as a synchronization mechanism to coordinate tasks.
You could have multiple fire-and-forget calls when you mean to have just one.
The AppDomain your site runs in can go down for a number of reasons and take down your background task with it. This could corrupt data if it happens in the middle of your code execution.
Here is the danger. Should your AppDomain go down, it may corrupt the data being sent to the other API, causing strange behavior at the other end.
I'm trying to understand the implications of doing something like this. I read all over that this is really dangerous
Dangerous is relative. If you execute something that you don't care whether it completes or not, then you shouldn't care if IIS decides to recycle your app while it's executing either, should you? The thing you'll need to keep in mind is that offloading work without registration might also cause the entire process to terminate.
Will IIS dispose the thread and the client object?
IIS can recycle the AppDomain, causing your thread to abort abnormally. Whether it will do so depends on many factors, such as how recycling is configured in your IIS, and whether you're doing any other operations that may cause a recycle.
In many of his posts, Stephen Cleary tries to convey the point that offloading work without registering it with ASP.NET is dangerous and may cause undesirable side effects, for all the reasons you've read. That's also why there are libraries such as AspNetBackgroundTasks, or Hangfire for that matter.
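For example, on .NET 4.5.2 or later you can register the work with the runtime via HostingEnvironment.QueueBackgroundWorkItem, which at least tells ASP.NET the work exists and delays shutdown where it can. A sketch, reusing the question's someOtherApiAddress:

using System.Net.Http;
using System.Web.Hosting;

// The runtime tracks this work item and will try to delay AppDomain
// shutdown (up to a grace period) until it completes.
HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
{
    using (var client = new HttpClient { BaseAddress = someOtherApiAddress })
    {
        await client.PostAsync("DoSomething", null, cancellationToken);
    }
});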
The thing you should most worry about is that a thread which isn't associated with a request can cause your entire process to terminate:
An unhandled exception in a thread not associated with a request will take down the process. This occurs even if you have a handler setup via the Application_Error method.
Yes, there are a few ways to fire-and-forget a "task" or piece of work without needing confirmation. I've used Hangfire and it has worked well for me.
The dangers, from what I understand, are that an exception in a fire-and-forget thread could bring down your entire IIS process.
See this excellent link about it.
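For reference, enqueueing with Hangfire is a one-liner; the job is persisted to storage and retried after a recycle, so a failure can't take the IIS process down with it (NotifyExternalApi here is a hypothetical method of your own):

using Hangfire;

// Persisted fire-and-forget: survives app-pool recycles, retried on failure.
BackgroundJob.Enqueue(() => NotifyExternalApi("DoSomething"));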
I need to build a web application (MVC) that uses a third-party Win32 dll as a gateway for all business logic (it contains a logon mechanism and various functions, and maintains some state).
This dll is not designed for multithreading. In an MTA scenario, the dll stumbles after a certain time.
The recommended solution is to run ASP.NET MVC in ASP-classic mode (STA, using an AspCompat handler). I tried this with success: everything runs stably.
The problem is, there will be a lot of concurrent users, and some of the function calls take several seconds (up to 10 seconds!). This will get horrible if all users block each other.
What would be the best approach to minimizing the blocking effects within the same application? Say, so that only ten users at most block each other?
It would be nice if:
...the web runs in MTA
...the web is just deployed once
...everything runs within the same process
Can anyone give me some advice for a good concept solving this?
Thank you! Martin
Update - Found a Solution:
Thanks to the "Smart Thread Pool" from Ami Bar I could accomplish the behavior I was looking for (easily). I implemented a worker concept (a specific amout of users share a worker and block each other in this worker), and for each worker, I have now my own thread pool instance with a max and min number of one thread. Well, it's not the idea of a thread pool, but it makes it very easy to handle the work-items and it also has some nice other features. The web application is running on MTA now.
I'm going to prepare some load tests to see if its stable over hours.
see here: http://www.codeproject.com/Articles/7933/Smart-Thread-Pool
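For anyone who wants the same behavior without the library, here is a rough plain-.NET sketch of the one-thread-per-worker idea (this is only the concept, not the Smart Thread Pool API):

using System;
using System.Collections.Concurrent;
using System.Threading;

// One dedicated STA thread per worker; users mapped to the same worker
// queue up behind each other, users on different workers run in parallel.
public sealed class StaWorker
{
    private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();

    public StaWorker()
    {
        var thread = new Thread(() =>
        {
            foreach (var job in _queue.GetConsumingEnumerable())
                job(); // runs jobs one at a time on this worker's thread
        });
        thread.SetApartmentState(ApartmentState.STA); // keep the dll happy
        thread.IsBackground = true;
        thread.Start();
    }

    public void Enqueue(Action job)
    {
        _queue.Add(job);
    }
}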
The answer is very simple, although I don't think you will like it much: you cannot use something that is not designed to be used in a multithreaded environment in a multithreaded environment.
2 possibilities:
Live with STA
Replace the single-threaded COM object with something that is intended to be used in a web application
Unfortunately, if the dll is not designed to be used with parallel requests, you cannot use it with parallel requests.
The only solution I can see for increasing the number of concurrent users without running the application in parallel is to have multiple instances of the application running at the same time, with perhaps a load balancer in front of them to dispatch the queries.
This is something of a sibling question to this programmers question.
Briefly, we're looking at pushing some work that's been piggy-backing on user requests into the background "properly." The linked question has given me plenty of ideas should we go the service route, but hasn't really provided any convincing arguments as to why, exactly, we should.
I will admit that, to me, the ability to do the moral equivalent of
WorkQueue.Push(delegate(object context) { ... });
is really compelling, so if it's just a little difficult (rather than inherently unworkable) I'm inclined to go with the background thread approach.
So, the problems with background threads I'm aware of (in the context of an AppPool):
They can die at any time due to the AppPool being recycled
Solution: track when a task is being executed, so it can be re-run* should a new thread be needed
The ThreadPool is used to respond to incoming HTTP queries, so using it can starve IIS
Solution: build our own thread pool, capping the number of threads as well.
My question is, what am I missing, if anything? What else can go wrongǂ with background threads in ASP.NET?
* The tasks in question are already safe to re-run, so this isn't a problem.
ǂ Assume we're not doing anything really dumb, like throwing exceptions in background threads.
I would stay away from launching threads from within your IIS AppDomain for StackOverflow. I don't have any hard evidence to support what I am going to say, but having worked with IIS for 10 years, I know that it works best when it is the only game in town.
There is also an alternative; I know this is going to be something of a take-off on my answer over on the Programmers thread. As I understand it, you already have a solution that works by piggy-backing the work on user requests. Why not use that code, but only launch it when a special internal API is called? Then use Task Scheduler to run a curl command that calls that API every 30 seconds or so to kick off the tasks. This way you are letting IIS handle the threading, and your code is handling something it already does easily.
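A sketch of what that internal endpoint might look like (PendingWorkRunner is a hypothetical stand-in for the code that used to piggy-back on user requests; the scheduled task would run something like curl -X POST http://localhost/maintenance/runtasks):

using System.Web.Mvc;

public class MaintenanceController : Controller
{
    [HttpPost]
    public ActionResult RunTasks()
    {
        // IIS supplies the thread; the work itself is code you already trust.
        PendingWorkRunner.RunAll();
        return new HttpStatusCodeResult(200);
    }
}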
One danger I ran into personally is the CallContext. We were using the CallContext to set user identity data, because the same code was shared across our web application and our .NET Remoting-based application services (.NET Remoting is designed to use the CallContext for storing call-specific data), so we weren't using the HttpContext.
We noticed that, sometimes, a new request would end up with a non-null identity in the CallContext. In other words, ASP.NET was not nulling out the data stored in the CallContext between requests, and thus an unauthenticated user might get into the application if they picked up a thread whose CallContext still contained validated user identity info.
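One defensive mitigation (a sketch, with a hypothetical slot name) is to clear the logical slot at the start of every request, so a recycled thread can never leak the previous caller's identity:

using System.Runtime.Remoting.Messaging;
using System.Web;

public class ClearCallContextModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // Wipe any identity left behind by the previous request on this thread.
        app.BeginRequest += (s, e) => CallContext.FreeNamedDataSlot("UserIdentity");
    }

    public void Dispose() { }
}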
Let me tell you about a non-obvious danger :)
I used threads to collect and update some RSS feeds in my database for a website I was hosting with GoDaddy. The threads worked fine (if they were terminated, they would be restarted automatically thanks to some checks I had built into some web pages).
It was working excellently and I was very happy, until GoDaddy (my host then) first started killing the threads, and then blocked them completely. So my app just died!
If that wasn't non-obvious, what is?
One could be that you are overly complicating your architecture without getting any benefit.
Your program will be more expensive to write, more expensive to maintain, and will have a greater chance of containing bugs.
Sometimes there is a lot that needs to be done when a given Action is called. Often, more needs to be done than what is needed just to generate the next HTML for the user. To give the user a faster experience, I want to do only what I need to do to get them their next view and send it off, but still do the other things afterwards. How can I do this, multi-threading? Would I then need to worry about making sure different threads don't step on each other's feet? Is there any built-in functionality for this type of thing in ASP.NET MVC?
As others have mentioned, you can use a spawned thread to do this. I would take care to consider the 'criticality' of several edge cases:
If your background task encounters an error and fails to do what the user expected to be done, do you have a mechanism for reporting this failure to the user?
Depending on how 'business critical' the various tasks are, using a robust/resilient message queue to store 'background tasks to be processed' will help protect against a scenario where the user requests some action, the server responsible crashes or is taken offline, or the IIS service is restarted, etc., and the background thread never completes.
Just food for thought on other issues you might need to address.
How can I do this, multi-threading?
Yes!
Would I then need to worry about making sure different threads don't step on each other's feet?
This is something you need to take care of anyway, since two different ASP.NET requests could arrive at the same time (from different clients) and be handled in two different worker threads simultaneously. So any code accessing shared data needs to be written in a thread-safe way anyway, even without your new feature.
Is there any built-in functionality for this type of thing in ASP.NET MVC?
The standard .NET multi-threading techniques should work just fine here (manually starting threads, using the Task features, or using the Async CTP, ...).
It depends on what you want to do and how reliable you need it to be. If the operations pending after the response is sent are OK to be lost, then .NET async calls, the ThreadPool or a new Thread are all going to work just fine. If the process crashes, the pending work is lost, but you have already accepted that this can happen.
If the work requires any reliability guarantee, for instance if the work incurs updates to the site database, then you cannot rely on in-process .NET threading; you need to persist the request to do the work and then process that work even after a process restart (an app-pool recycle, as IIS so kindly calls them).
One way to do this is to use MSMQ. Another way is to use a database table as a queue. The most reliable way is to use the database activation mechanisms, as described in Asynchronous procedure execution.
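A minimal MSMQ sketch of the persist-then-process half (queue path and payload are illustrative; a separate service would receive and execute the work):

using System.Messaging;

public static class WorkQueue
{
    private const string Path = @".\private$\pendingWork";

    // Persist the pending work so it survives an app-pool recycle;
    // a separate process (Windows service, etc.) reads and executes it.
    public static void Enqueue(string payload)
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path, transactional: true);

        using (var queue = new MessageQueue(Path))
        using (var tx = new MessageQueueTransaction())
        {
            tx.Begin();
            queue.Send(payload, tx);
            tx.Commit();
        }
    }
}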
You can start a background task, then return from the action. This example uses the Task Parallel Library, found in .NET 4.0:
public ActionResult DoSomething()
{
    // Start the long-running work on a background thread, then return
    // the view immediately. Note: in .NET 4.0 an unobserved exception
    // inside the task can be re-thrown on the finalizer thread and
    // terminate the process, so catch inside DoSomethingAsynchronously.
    Task t = new Task(() => DoSomethingAsynchronously());
    t.Start();
    return View();
}
I would use MSMQ for this kind of work. Rather than spawning threads in an ASP.NET application, I'd use an asynchronous, out-of-process approach to do this. It's very simple and very clean.
In fact, I've been using MSMQ in ASP.NET applications for a very long time and have never had any issues with this approach. Furthermore, having a different process (that is, an executable in a different app domain) do the long-running work is an ideal way to handle it, since your web application is not being used to do this work. So IIS, the thread pool and your web application can continue to do what they need to, while other processes handle the long-running tasks.
Maybe you should give it a try: Using an Asynchronous Controller in ASP.NET MVC
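The basic shape of that pattern, for reference (all names hypothetical; note that the response is only sent once the outstanding operations reach zero, so this frees the request thread rather than returning the view early):

using System.Threading;
using System.Web.Mvc;

public class ReportController : AsyncController
{
    // MVC pairs GenerateAsync with GenerateCompleted by convention.
    public void GenerateAsync()
    {
        AsyncManager.OutstandingOperations.Increment();
        ThreadPool.QueueUserWorkItem(_ =>
        {
            AsyncManager.Parameters["result"] = BuildReport(); // stand-in for the real work
            AsyncManager.OutstandingOperations.Decrement();
        });
    }

    public ActionResult GenerateCompleted(string result)
    {
        return View(model: result);
    }

    private static string BuildReport()
    {
        return "done";
    }
}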