Error when inserting data into SQL Server db from Excel sheet - C#

I am inserting data from an Excel sheet into a SQL Server 2005 database. I get this error at random, sometimes after 20-30 records and sometimes after thousands, and I am unable to find the reason.
I am using Visual Studio 2008.
The CLR has been unable to transition from COM context 0x21a7b0 to COM context 0x21a920 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.
Can anybody tell me what this error is and why I am getting it?
Thanks.

Edit: This thread seems to directly answer your question and outlines the steps involved in solving it.
If you want to learn more, check out this MSDN article.

Are you closing connections when you are done with them? It may be that connections are left open, consuming the maximum number of connections available to your application, and eventually timing out.
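Wrapping the connection (and command) in using blocks guarantees the connection is returned to the pool even when an insert throws. A minimal sketch, assuming a hypothetical ImportedRows table and method name:

using System.Data.SqlClient;

static class RowImporter
{
    public static void InsertRow(string connectionString, string name, decimal amount)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "INSERT INTO ImportedRows (Name, Amount) VALUES (@name, @amount)", connection))
        {
            command.Parameters.AddWithValue("@name", name);
            command.Parameters.AddWithValue("@amount", amount);
            connection.Open();
            command.ExecuteNonQuery();
        }   // Dispose() closes the connection here, even if the insert failed
    }
}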

Related

CustomXMLParts.Add slow due to ContextSwitchDeadlock

I'm getting a ContextSwitchDeadlock when adding a CustomXMLPart after performing a Documents.Add().
The same code was working fine last week.
I understand that ContextSwitchDeadlock is caused by a long running operation (this is not a duplicate question).
Why would the CustomXMLParts.Add() command result in a long running operation?
Has anyone come across this, and any ideas on how to troubleshoot?
ContextSwitchDeadlock occurred
Message: Managed Debugging Assistant 'ContextSwitchDeadlock' has detected a problem in 'C:\Program Files (x86)\Microsoft Office\root\Office16\WINWORD.EXE'. Additional information: The CLR has been unable to transition from COM context 0xfdb520 to COM context 0xfdb468 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.
The context switch deadlock can show up when debugging a long running process. For the most part you can ignore it if the process is expected to be long running.
See this previous Stack Overflow answer.

Could MSMQ resolve performance bottleneck of our multithreaded services?

We wrote a service that uses ~200 threads.
The 200 threads must do:
1- Download a document from the internet
2- Parse the raw data (HTML, XML, JSON, ...)
3- Store the newly created data in the db
For ~10 threads, the elapsed time for the second operation (parsing) is 50 ms (per thread).
For ~50 threads, the elapsed time for the second operation (parsing) is 80-18000 ms (per thread).
So we have an idea!
We can keep the downloads multithreaded, but use MSMQ to send the raw data to another process (a consumer), and have that other process implement the second part (parsing) single-threaded.
You might ask: why don't we use the C# Queue class in the same process? Because we could not prevent our "precious parsing thread" from context switching. If there are 200 threads in the same process, the parsing thread will be a context-switch victim.
Is it normal to use MSMQ for this requirement?
Yes, this is an excellent example of where MSMQ makes a lot of sense. You can offload your difficult work to a different process without affecting the performance of your current process, which clearly doesn't care about the results. Not only that, but if your new worker process goes down, the queue will preserve state, and messages (other than perhaps the one being worked on when it went down) will not be lost.
Depending on your needs and goals, I'd consider offloading the download to the other process as well, passing URLs to work on to the queue, for example. Then scaling up your system is as easy as dialing up the queue receivers, since queue messages are received in a thread-safe manner when implemented correctly.
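As a rough sketch of that split (the queue path, RawDataQueue type, and parse callback are assumptions for illustration, and System.Messaging.dll must be referenced):

using System;
using System.Messaging;

static class RawDataQueue
{
    const string Path = @".\Private$\rawdata";

    // Producer side: called from the ~200 downloader threads.
    public static void Enqueue(string rawDocument)
    {
        if (!MessageQueue.Exists(Path))
            MessageQueue.Create(Path);
        using (var queue = new MessageQueue(Path))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            queue.Send(rawDocument, "raw document");
        }
    }

    // Consumer side: runs single-threaded in the separate parsing process.
    public static void ConsumeForever(Action<string> parse)
    {
        using (var queue = new MessageQueue(Path))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            while (true)
            {
                Message message = queue.Receive(); // blocks until a message arrives
                parse((string)message.Body);       // the "precious" parsing work
            }
        }
    }
}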
Yes, it is normal. And there are frameworks/libraries that help you build these kinds of solutions, providing more than just the transport.
NServiceBus and MassTransit are examples (both can sit on top of MSMQ).

Multiprocessing in WinForms application

I'm working on about 35 batches that update many databases as part of our daily process at work. Each of these batches was originally developed as a single web app. Due to database issues, I have collected all of them into one Windows application to make use of DB connection pooling, and I have assigned a single BackgroundWorker to each batch. With up to 20 batches in the application, everything works fine. But when I add a BackgroundWorker for any further batch, the application hangs. I think this is because I'm running too many threads in one process. Is there a solution to this problem, for example, making the application work with many processes?
Regards,
Note: I have assigned a single machine to this application (Core i7 CPU, 8 GB RAM).
How many databases do you have to update?
I think it is more advisable to have the number of threads match the number of databases.
If your UI is freezing while many background workers are active, but recovers when those background workers are finished processing, then likely the UI thread is executing a method which waits for a result or signal from one of the background worker threads.
To fix your problem, you will have to look for UI-related code that deals with synchronization / multithreading. This might be places where one of the many synchronization objects of .NET is being used (including the lock statement), but it could also involve "dumb" polling loops like while (!worker.IsFinished) Thread.Sleep(...); (a sketch of the fix follows below).
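For illustration, a minimal WinForms sketch (with a hypothetical RunBatch method and status label) that replaces such a blocking poll with the completion event BackgroundWorker already provides:

using System.ComponentModel;
using System.Windows.Forms;

public class BatchForm : Form
{
    private readonly BackgroundWorker worker = new BackgroundWorker();
    private readonly Label statusLabel = new Label { Dock = DockStyle.Top };

    public BatchForm()
    {
        Controls.Add(statusLabel);
        worker.DoWork += (s, e) => RunBatch();   // runs on a thread-pool thread
        worker.RunWorkerCompleted += (s, e) =>
            statusLabel.Text = "Batch finished"; // raised back on the UI thread

        // Anti-pattern to avoid after starting the worker:
        //   while (worker.IsBusy) Thread.Sleep(100);  // freezes the UI
        worker.RunWorkerAsync();
    }

    private void RunBatch() { /* hypothetical: update the databases here */ }
}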
Another possible reason for the freeze might be that you are accidentally running a worker (or a worker-related method) on the UI thread instead of on a background thread.
But you will find out when you use the debugger.
To keep the scope of your hunt for problematic methods manageable, let your program run in the debugger until the UI freezes. At that moment, pause the program execution in the debugger and look at which code the UI thread is processing; you will have found one instance of the offending code. (Whatever is wrong there, I can't tell you, because I don't know your code.)
It is quite possible that different UI-related methods in your code suffer from the same issue. So, once you have found the offending code (and were able to fix it), you will want to check for other problematic methods, but that should be rather easy, since at that point you will know what to look for...

Too many Tasks cause SQL db to time out

My problem is that I'm apparently using too many tasks (threads?) that call a method that queries a SQL Server 2008 database. Here is the code:
for (int i = 0; i < 100000; i++)
{
    Task.Factory.StartNew(() => MethodThatQueriesDataBase()).ContinueWith(t => OtherMethod(t));
}
After a while I get a SQL timeout exception. I want to keep the actual number of concurrent tasks lower than 100000, to a buffer of say "no more than 10 at a time". I know I can manage my own threads using the ThreadPool, but I want to be able to use the beauty of TPL with ContinueWith.
I looked at Task.Factory.Scheduler.MaximumConcurrencyLevel, but it has no setter.
How do I do that?
Thanks in advance!
UPDATE 1
I just tested the LimitedConcurrencyLevelTaskScheduler class (pointed out by Skeet) and it still does the same thing (SQL timeout).
By the way, this database receives more than 800000 events per day and has never had crashes or timeouts from those. It seems strange that this would cause them.
You could create a TaskScheduler with a limited degree of concurrency, as explained here, then create a TaskFactory from that, and use that factory to start the tasks instead of Task.Factory.
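A minimal sketch, assuming the LimitedConcurrencyLevelTaskScheduler class from the MSDN parallel programming samples has been copied into the project:

using System.Threading.Tasks;

var scheduler = new LimitedConcurrencyLevelTaskScheduler(10); // at most 10 concurrent tasks
var factory = new TaskFactory(scheduler);

for (int i = 0; i < 100000; i++)
{
    // Queued tasks wait in the scheduler until one of the 10 slots frees up.
    factory.StartNew(() => MethodThatQueriesDataBase())
           .ContinueWith(t => OtherMethod(t));
}

Note that the ContinueWith continuations still run on the default scheduler; pass the scheduler as a second argument to ContinueWith if they need throttling too.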
Tasks are not 1:1 with threads - tasks are assigned threads for execution out of a pool of threads, and the pool of threads is normally kept fairly small (number of threads == number of CPU cores) unless a task/thread is blocked waiting for a long-running synchronous result - such as perhaps a synchronous network call or file I/O.
So spinning up 10,000 tasks should not result in the production of 10,000 actual threads. However, if every one of those tasks immediately dives into a blocking call, then you may wind up with more threads, but it still shouldn't be 10,000.
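This is easy to observe. The following self-contained snippet (assuming nothing beyond the TPL itself) counts the distinct threads a large batch of short tasks actually runs on; the number typically comes out near the core count rather than the task count:

using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class TasksVsThreads
{
    static void Main()
    {
        var threadIds = new ConcurrentDictionary<int, bool>();
        var tasks = new Task[10000];
        for (int i = 0; i < tasks.Length; i++)
        {
            // Record which pool thread each task ends up on.
            tasks[i] = Task.Factory.StartNew(() =>
                threadIds.TryAdd(Thread.CurrentThread.ManagedThreadId, true));
        }
        Task.WaitAll(tasks);
        Console.WriteLine("10,000 tasks ran on {0} distinct threads.", threadIds.Count);
    }
}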
What may be happening here is you are overwhelming the SQL db with too many requests all at once. Even if the system only sets up a handful of threads for your thousands of tasks, a handful of threads can still cause a pileup if the destination of the call is single-threaded. If every task makes a call into the SQL db, and the SQL db interface or the db itself coordinates multithreaded requests through a single thread lock, then all the concurrent calls will pile up waiting for the thread lock to get into the SQL db for execution. There is no guarantee of which threads will be released to call into the SQL db next, so you could easily end up with one "unlucky" thread that starts waiting for access to the SQL db early but doesn't get into the SQL db call before the blocking wait times out.
It's also possible that the SQL back-end is multithreaded, but limits the number of concurrent operations due to licensing level. That is, a SQL demo engine only allows 2 concurrent requests but the fully licensed engine supports dozens of concurrent requests.
Either way, you need to do something to reduce your concurrency to more reasonable levels. Jon Skeet's suggestion of using a TaskScheduler to limit the concurrency sounds like a good place to start.
I suspect there is something wrong with the way you're handling DB connections. Web servers can have thousands of concurrent page requests running, all in various stages of SQL activity. I'm betting that attempts to reduce the concurrent task count are really masking a different problem.
Can you profile the SQL connections? Check perfmon to see how many active connections there are, and see if you can grab-use-release connections as quickly as possible.
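If you'd rather read those numbers from code than from the perfmon UI, a small sketch (assuming the ".NET Data Provider for SqlServer" counter category that ADO.NET registers is present on the machine) can enumerate the pooling counters:

using System;
using System.Diagnostics;

class PoolMonitor
{
    static void Main()
    {
        var category = new PerformanceCounterCategory(".NET Data Provider for SqlServer");
        foreach (string instance in category.GetInstanceNames())
        {
            using (var counter = new PerformanceCounter(
                category.CategoryName, "NumberOfPooledConnections", instance, readOnly: true))
            {
                Console.WriteLine("{0}: {1} pooled connections", instance, counter.NextValue());
            }
        }
    }
}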

ContextSwitchDeadlock when running unit (integration) tests

We get the following error when running a test:
ContextSwitchDeadlock was detected
Message: The CLR has been unable to transition from COM context 0x344b0c0 to COM context 0x344b230 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations.
The test does a WCF call to a method on the service layer that gets data from the database using Entity Framework. Data is also cached on the server side using the EntLib Caching Application Block.
The test that tests the same code on the server side passes without error.
Found the problem.
We were not closing the WCF proxies correctly: we were using a "using" block instead of a try/catch with Close or Abort.
Therefore, an error in one test would cause a ContextSwitchDeadlock in a subsequent test that tried to use the same WCF service.
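For reference, a sketch of the close-or-abort pattern we moved to (MyServiceClient stands in for the generated proxy type). With a using block, Dispose calls Close, which throws on a faulted channel and leaves the next test to hit the deadlock:

using System;
using System.ServiceModel;

var client = new MyServiceClient();  // hypothetical generated WCF proxy
try
{
    client.DoWork();
    client.Close();                  // graceful shutdown on success
}
catch (CommunicationException)
{
    client.Abort();                  // tear down the faulted channel
    throw;
}
catch (TimeoutException)
{
    client.Abort();
    throw;
}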
