At the moment, I'm doing something similar to this to integration test a library that communicates with our API controllers, and so far so good, but I've run into a snag. In all of our other integration tests, we run the test inside an MSDTC transaction at isolation level ReadCommitted, so that each one gets its own private session with the databases, and at the end of each test the transaction is rolled back. But that doesn't work for these tests, because the transactions are per-thread and all of the HttpClient/HttpServer methods are asynchronous: the work is done on a different thread than the test's main thread, has no ambient transaction to enlist in, and goes right ahead and commits.
I've come across a few posts about how to open a TransactionScope on one thread and then create a dependent transaction to be passed to a new task via a closure, but I have no idea how to apply that to an HttpClient that's connected to an in-memory HttpServer. I suspect I'm just not thinking about it the right way, but that's about all I have to go on.
What would make sense/work/etc? I have full control over the creation of the HttpServer and the HttpClient that will connect to it, but I'm at a loss as to what to do with them.
UPDATE:
Some progress has been made: I wrote a message handler that creates a dependent transaction on the worker thread if Transaction.Current is populated when it gets there. For some of my calls it is, but for others it isn't, and I'm wondering if I may be chasing shadows - there's a lot of ContinueWith around, and I believe a continuation executes on the calling thread (which would naturally have a transaction) if the antecedent task is already complete.
Would it be possible just to run the whole thing synchronously and carry the test's thread all the way through? I've experimented some with ContinueWith'ing synchronously, without much success.
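For reference, a dependent-transaction message handler along the lines described in the update might look like the sketch below. This is an assumption-laden illustration, not the asker's actual code: the handler name and constructor wiring are invented, and it presumes you can hand the test's transaction to the handler when the in-memory HttpServer pipeline is built.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using System.Transactions;

// Hypothetical handler: re-establishes the test's transaction as the
// ambient transaction on whichever thread ends up running the request.
public class DependentTransactionHandler : DelegatingHandler
{
    private readonly Transaction _testTransaction;

    public DependentTransactionHandler(Transaction testTransaction)
    {
        _testTransaction = testTransaction;
    }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // BlockCommitUntilComplete keeps the parent transaction from
        // committing before this dependent clone completes.
        var dependent = _testTransaction.DependentClone(
            DependentCloneOption.BlockCommitUntilComplete);
        var previous = Transaction.Current;
        Transaction.Current = dependent;
        try
        {
            return await base.SendAsync(request, cancellationToken);
        }
        finally
        {
            Transaction.Current = previous;
            dependent.Complete();
        }
    }
}
```

The handler would be added to the HttpServer's message handler pipeline so every request flows through it before reaching the controllers.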
If you aren't dead-set on using a real HTTP connection, you could call the interfaces directly via code (by using an assembly reference) from a test framework that allows you to do per-session or per-test start-up and shut-down work (such as MSTest's class and test initialize functions). In this case, you would open a TransactionScope that is shared across the class in a member variable and dispose it in the class or test shut-down function. Because you never call Complete() on the scope, disposing it rolls back the operations that occurred during the transaction.
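A minimal sketch of that pattern. The class and method names here are illustrative; in MSTest, Setup and Teardown would carry the [TestInitialize] and [TestCleanup] attributes (omitted so the sketch stands alone):

```csharp
using System;
using System.Transactions;

// Each test gets its own transaction; disposing the scope without
// calling Complete() rolls back everything the test did.
public class TransactionalTestBase
{
    private TransactionScope _scope;

    public void Setup()   // [TestInitialize] in MSTest
    {
        // AsyncFlowOption lets the ambient transaction follow awaits
        // (available from .NET 4.5.1 onward).
        _scope = new TransactionScope(TransactionScopeOption.Required,
                                      TransactionScopeAsyncFlowOption.Enabled);
    }

    public void Teardown()  // [TestCleanup] in MSTest
    {
        // Complete() is never called, so this is a rollback.
        _scope.Dispose();
    }
}
```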
As it turns out, the HttpClient and HttpServer weren't spinning up background threads - rather, I had some errant Task.Factory.StartNew calls in my code that were causing the problem. Removing those got me going.
Related
I'm maintaining an ASP.NET website on .NET 4.7.1 that displays some fairly extensive information using Entity Framework 6.0. Right now, all these DB queries are being performed in serial, so I'm attempting to improve performance by implementing async/await.
The problem I'm having is that running multiple simultaneous queries against the same database seems to be somewhat delicate, and I'm having trouble searching up any best practices for this type of scenario.
The site's initial implementation created a context for each of these queries inside an ambient transaction, and disposed the context after use. Upon converting the whole site to use async (and noting TransactionScopeAsyncFlowOption.Enabled), the page load began throwing exceptions claiming Distributed Transaction Coordinator needed to be configured.
System.Transactions.TransactionManagerCommunicationException: Network access for Distributed Transaction Manager (MSDTC) has been disabled.
Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.
Some searching at that point led me to believe that this could be remedied in code without perturbing configuration, so I next redesigned the data layer to manage connections in a way that would allow the queries to share the same context. However, when testing that approach, new exceptions were thrown claiming that the connection was too busy.
System.Data.SqlClient.SqlException: Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The request failed to run because the batch is aborted, this can be caused by abort signal sent from client, or another request is running in the same session, which makes the session busy.
Normally this page's load time is slow (several seconds) but nowhere near the default timeout threshold.
Is async/await best suited only to scenarios where the queries to be run in parallel connect to different databases? If not, is MSDTC the only way to enable this behavior? Or is it perhaps just not wise to blast a single database with so many simultaneous queries?
I am not able to understand exactly what changes you have made to the application, and I am not sure that the application was correctly written in the first place or that it was following reasonable practices. But here are a few data points that I hope can help:
Async support in EF is designed to yield threads back to the pool while waiting for I/O, so that the application can process a higher number of requests using fewer threads and resources. It is not meant to enable parallel execution against the same DbContext. Like the majority of types in .NET, DbContext is not thread safe (in any version of EF), so you cannot safely execute multiple queries (async or not) in parallel on the same context instance.
Using separate DbContext instances that don't share state or connection objects should be fine. However, in ASP.NET it is still recommended to use only a single thread at any point in time to process a request (when you make an async call that yields, processing may continue on a different thread, but that is not a concern), rather than trying to parallelize work within the same request.
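To illustrate the "separate instances" point, here is a sketch of the safe shape. ReportContext is a stand-in for a real DbContext (so the example runs without a database); with EF you would new up one real context per concurrent query in exactly the same way:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Stand-in for a DbContext: one async query method, disposable.
public class ReportContext : IDisposable
{
    public async Task<List<int>> QueryAsync(int seed)
    {
        await Task.Delay(10);                     // simulate database I/O
        return Enumerable.Range(seed, 3).ToList();
    }
    public void Dispose() { }
}

public static class ReportLoader
{
    // Safe: each concurrent query owns its own context instance.
    // Running two queries on ONE instance would race, because
    // DbContext is not thread safe.
    public static async Task<List<int>[]> LoadInParallelAsync()
    {
        async Task<List<int>> RunWithOwnContext(int seed)
        {
            using (var ctx = new ReportContext())
                return await ctx.QueryAsync(seed);
        }
        return await Task.WhenAll(RunWithOwnContext(1), RunWithOwnContext(100));
    }
}
```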
Also, regarding the exception from System.Transaction, it may very well be that something you changed is now causing multiple connections to auto-enlist in the same System.Transactions.Transaction, which may require escalating the transaction to a distributed transaction.
I won't try to come up with a complete explanation for the timeouts because, as I said, I am not sure I understand what changes you made to the application. But it is perfectly possible that if you create too many threads, some of them will end up starving and timing out. It is also extremely hard to anticipate everything that could go wrong if you start using types that are not thread safe (e.g. database connections, DbContext) from multiple threads.
I have a number of integration tests that exercise my service's APIs and check the responses (HTTP response codes and, where applicable, the body). The APIs themselves are hosted in a WebAPI service in the Azure cloud. My integration tests are a separate project that uses my proxy to make RESTful calls to the WebAPI, checks responses, and reports success or failure. The integration tests are then executed using MSTest. Every one of my WebAPIs uses an async/await pattern, every one of my proxy calls is async, and (as far as I can see) I am awaiting all of my test API calls (indeed, I'm not sure you could do otherwise with MSTest). It seems that on some occasions I receive the dreaded
'The agent process was stopped while the test was running.'
when some failures occur and from what I can tell this is likely to be one of my integration tests failing on another thread other than the test-runner (a known problem). As a point of reference, I have created an IDisposable style pattern for my test scenarios:
using (var scenario = new TestScenario())
{
    // do some testing stuff
}
which manages and cleans up downloaded files, manages proxy instances and generally logs information. The question is this: Is there a way to use my existing IDisposable pattern here to try/catch any unhandled exceptions that MSTest seems to not be able to cope with? Or am I stuck with having to wrap every test in a try/catch block of their own and unwrap an AggregateException?
I don't have the rep to comment, otherwise I would just comment this...
Anyhow, I'm not entirely familiar with MSTest or whether it allows async test execution (e.g. like XUnit does).
Are you awaiting in your tests? If not, are you blocking properly, using var response = obj.ExecuteAsync().Result or obj.ExecuteAsync().Wait()?
If you are blocking properly, then I'd suspect the using statement is disposing early. If that's the case, you could implement IDisposable so the cleanup executes during teardown, and dispose of your managed items there.
Hello, I want to use the State Machine Compiler (SMC) with C#:
http://smc.sourceforge.net/
I have created the .sm file describing the state machine and generated C# code from it.
Then I created my own class, MyClass, added the class that SMC generated, and implemented the methods.
My problem is: how can I run this state machine? With a while loop, an async call, or the Task Parallel Library? What is an elegant way?
The state machine describes the behavior for sending data through the serial port, so that the user can call MyClass.Send(Data) and the state machine works behind the scenes.
Can someone give me an example of how to use the state machine in my own code?
Regards
rubiktubik
I've used SMC in many applications and I was very satisfied with it. I hit the same problem as you. SMC generates C# code that is synchronous and linear. This means that if you issue a transaction by calling fsm.YourTransaction(), and by chance somewhere in the middle of that transaction you issue another transaction, it will be called directly. That is very dangerous, because it breaks the basic principle of a state machine - that transactions are atomic and the system is guaranteed to be in a single state, or a single transition, at all times.
When I realized this hidden problem, I implemented an asynchronous layer on top of the state machine code generated by SMC. Unfortunately I cannot provide you with the code, because it is licensed, but I can describe the principle here.
I replaced direct method calls with asynchronous event processing: there is a queue of awaiting transactions. Transactions are represented by strings, which must match the transaction method names on the fsm. They are removed from the queue one by one on an independent thread, and each transaction string is transformed into an fsm method call.
This concept proved to work very well in many critical applications. It is rather simple to implement and easy to extend with new features.
Final form of this asynchronous layer had these additional features:
Perfect logging: all transactions and their arguments, time of arrival, time of processing ...etc.
Possibility to replace the independent thread with an external thread - sometimes it is better to process things in thread provided from outside (Windows GUI is very sensitive to external thread calls...)
Preprocessing of transactions - sometimes the system was expected to change state only if a sequence of transactions occurred. It is rather clumsy to achieve this directly with SMC. I implemented so-called "transaction transformations" - it was possible to declare how a set of transactions is transformed into a single transaction.
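The queue-based layer described above can be sketched roughly like this. The names are illustrative, and the dictionary of actions stands in for the mapping from transaction strings to the SMC-generated methods (which the real implementation would do via reflection or explicit wiring):

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

// One dedicated consumer thread drains the queue, so transactions are
// processed one at a time - the atomicity guarantee discussed above.
public class AsyncFsm : IDisposable
{
    private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();
    private readonly Dictionary<string, Action> _transitions;
    private readonly Task _worker;

    public AsyncFsm(Dictionary<string, Action> transitions)
    {
        _transitions = transitions;
        _worker = Task.Factory.StartNew(() =>
        {
            // Blocks until an item arrives; ends when CompleteAdding is called.
            foreach (var name in _queue.GetConsumingEnumerable())
                _transitions[name]();   // maps the string to the fsm method
        }, TaskCreationOptions.LongRunning);
    }

    // Callers (including transitions themselves) just enqueue and return,
    // so a transaction issued mid-transaction is deferred, not nested.
    public void Post(string transition) => _queue.Add(transition);

    public void Dispose()
    {
        _queue.CompleteAdding();
        _worker.Wait();   // drain remaining transactions, then stop
    }
}
```

A production version would also handle unknown transaction names and log each dequeue, per the feature list above.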
I have several long-running threads in an MVC3 application that are meant to run forever.
I'm running into a problem where a ThreadAbortException is being called by some other code (not mine) and I need to recover from this gracefully and restart the thread. Right now, our only recourse is to recycle the worker process for the appDomain, which is far from ideal.
Here are some details about how this code works:
A singleton service class exists for this MVC3 application. It has to be a singleton because it caches data. This service is responsible for making requests to a database. A 3rd-party library is used for the actual database connection code.
In this singleton class we use a collection of classes that are called "QueryRequestors". These classes identify unique package+stored_procedure names for requests to the database, so that we can queue those calls. That is the purpose of the QueryRequestor class: to make sure calls to the same package+stored_procedure (although they may have infinite different parameters) are queued, and do not happen simultaneously. This eases our database strain considerably and improves performance.
The QueryRequestor class uses an internal BlockingCollection and an internal Task (thread) to monitor its queue (blocking collection). When a request comes into the singleton service, it finds the correct QueryRequestor class via the package+stored_procedure name, and it hands the query over to that class. The query gets put in the queue (blocking collection). The QueryRequestor's Task sees there's a request in the queue and makes a call to the database (now the 3rd party library is involved). When the results come back they are cached in the singleton service. The Task continues processing requests until the blocking collection is empty, and then it waits.
Once a QueryRequestor is created and up and running, we never want it to die. Requests come in to this service 24/7 every few minutes. If the cache in the service has data, we use it. When data is stale, the very next request gets queued (and subsequent simultaneous requests continue to use the cache, because they know someone (another thread) is already making a queued request, and this is efficient).
So the issue here is what to do when the Task inside a QueryRequestor class encounters a ThreadAbortException. Ideally I'd like to recover from that and restart the thread. Or, at the very least, dispose of the QueryRequestor (it's in a "broken" state now as far as I'm concerned) and start over. Because the next request that matches the package+stored_procedure name will create a new QueryRequestor if one is not present in the service.
I suspect the thread is being killed by the 3rd party library, but I can't be certain. All I know is that nowhere do I abort or attempt to kill the thread/task. I want it to run forever. But clearly we have to have code in place for this exception. It's very annoying when the service bombs because a thread has been aborted.
What is the best way to handle this? How can we handle this gracefully?
You can stop the re-throwing of a ThreadAbortException by calling Thread.ResetAbort().
Note that the most common cause of this exception is a Redirect call, and canceling the thread abort may have the undesired effect of executing request code that would otherwise have been skipped when the thread was killed. This is a more common issue in WinForms (where the separation of code and rendering is less clear) than in MVC (where you can return special redirect results from controllers).
Here's what I came up with for a solution, and it works quite nicely.
The real issue here isn't preventing the ThreadAbortException, because you can't prevent it anyway, and we don't want to prevent it. It's actually a good thing if we get an error report telling us this happened. We just don't want our app coming down because of it.
So, what we really needed was a graceful way to handle this Exception without bringing down the application.
The solution I came up with was to create a bool flag property on the QueryRequestor class called "IsValid". This property is set to true in the constructor of the class.
In the DoWork() call that is run on the separate thread in the QueryRequestor class, we catch the ThreadAbortException and we set this flag to FALSE. Now we can tell other code that this class is in an Invalid (broken) state and not to use it.
So now, the singleton service that makes use of this QueryRequestor class knows to check for this IsValid property. If it's not valid, it replaces the QueryRequestor with a new one, and life moves on. The application doesn't crash and the broken QueryRequestor is thrown away, replaced with a new version that can do the job.
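A rough sketch of that arrangement. The request type and the broad catch are simplifications for illustration; in the real class the interesting case is catch (ThreadAbortException), and the real work item carries a query rather than a bare Action:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class QueryRequestor
{
    // FALSE once the worker thread has died; the service checks this.
    public bool IsValid { get; private set; } = true;

    private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();

    public QueryRequestor()
    {
        Task.Factory.StartNew(DoWork, TaskCreationOptions.LongRunning);
    }

    public void Enqueue(Action request) => _queue.Add(request);

    private void DoWork()
    {
        try
        {
            foreach (var request in _queue.GetConsumingEnumerable())
                request();           // the database call happens here
        }
        catch (Exception)            // in production: catch (ThreadAbortException)
        {
            IsValid = false;         // broken state: tell the service to replace us
        }
    }
}

public static class QueryRequestorService
{
    // The singleton service's check: throw away a broken requestor
    // and replace it with a fresh one.
    public static QueryRequestor GetOrReplace(ref QueryRequestor current)
    {
        if (current == null || !current.IsValid)
            current = new QueryRequestor();
        return current;
    }
}
```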
In testing, this worked quite well. I would intentionally call Thread.Abort() on the DoWork() thread, and watch the Debug window for output lines. The app would report that the thread had been aborted, and then the singleton service was correctly replacing the QueryRequestor. The replacement was then able to successfully handle the request.
I am testing a SOAP service using a client generated in VS2010 via "Add Service Reference".
I am running tests in parallel (c. 10 threads) and this has exposed some DB locking issues in the system under test. However, this is not going to be fixed straight away and I don't want my functional tests failing due to this problem.
As a result I have reduced my test threads to 1, and as expected I no longer see the locking issue; however, this obviously makes my test suites a great deal slower. I was therefore wondering: is it possible to use client configuration to restrict the client to making only one request at a time?
It's not the SOAP client that needs to be restricted, it's the calling code. A SOAP call is performed on whichever thread it is made from. If you have a problem with multiple threads, it's because you have multiple threads in your code, or you are making additional service calls or updating something in a callback without understanding which thread you are in.
Depending on the problem, there are many possible solutions, which could include:
Remove the multi-threading from your application: don't use callbacks and don't fire up additional threads.
Or, ideally: make sure you dispatch back to the UI thread when appropriate, and understand which thread you are in, so you can fix the underlying locking problem.
http://msdn.microsoft.com/en-us/library/ms591206.aspx
Me.Dispatcher.BeginInvoke(Sub()
                              ' This will be executed on the UI Thread
                          End Sub)
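If, on the other hand, you do want to serialize the calls from the test harness side, a small gate in the calling code does it. A C# sketch, where the callServiceAsync delegate stands in for your generated proxy call:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// SemaphoreSlim(1, 1) admits one caller at a time, so the test threads
// can stay parallel while the service only ever sees one request in flight.
public static class ThrottledClient
{
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(1, 1);

    public static async Task<T> SerializedAsync<T>(Func<Task<T>> callServiceAsync)
    {
        await Gate.WaitAsync();
        try
        {
            return await callServiceAsync();  // only one caller gets here at a time
        }
        finally
        {
            Gate.Release();
        }
    }
}
```

Note this throttles only callers that go through SerializedAsync; it does not (and cannot) restrict the proxy itself, which is the point made above.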
It would be nice (and very useful) to have the code in question..... in your question. Which version of .NET you're using and what your app is written in (ASP.NET? WinForms?) would also help give some context.
NB: Sample code in vb.net but you get the idea ;p