Simultaneous tasks not run [closed] - c#

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 3 years ago.
I have a Windows Forms app that works well on my development machine. However, after publishing the application I see strange behavior when trying to run multiple tasks in parallel. There is no error, but it doesn't work as expected. Here is the code:
private async void Button1_Click(object sender, EventArgs e)
{
button1.Enabled = false;
try
{
var watch = Stopwatch.StartNew();
textBox1.Text = $"Processing...";
await SyncAppDbAsync();
watch.Stop();
var time = watch.ElapsedMilliseconds;
textBox1.Text = $"End successfully. Minutes: {String.Format("{0:0.00}", (double)(time / 1000) / 60)}";
}
catch (Exception ex)
{
textBox1.Text = $"Message: {ex.Message}, Source: {ex.Source}, HResult: {ex.InnerException}";
}
}
public async Task SyncAppDbAsync()
{
// delete table rows
// I block the UI for a few seconds here because I don't want to
// write a record that hasn't been deleted yet
Task.WaitAll(
AgenteApp.RemoveAllAgentiAppAsync(),
RubricaApp.RemoveAllRubricheAppAsync(),
...
);
// read data from the database
var readAgents = Task.Run(Agent.GetAgentAsync);
var readAddressBooks = Task.Run(AddressBook.GetAddressBookAsync);
...
await Task.WhenAll(
readAgents,
readAddressBooks,
...
);
// save data to the sqlite database (myDb.db)
var addAgenti = Task.Run(async () =>
{
var progrIndicator = new Progress<int>(AgentiProgress);
var agenti = AgenteApp.FillAgentiAppFromCompanyAsync(await readAgents, progrIndicator);
await AgenteApp.AddAgentiAppAsync(await agenti);
});
var addRubriche = Task.Run(async () =>
{
var progrIndicator = new Progress<int>(RubricheProgress);
var rubriche = RubricaApp.FillRubricheAppFromCompanyAsync(await readAddressBooks, progrIndicator);
await RubricaApp.AddRubricheAppAsync(await rubriche);
});
await Task.WhenAll(
addAgenti,
addRubriche,
...
);
}
Each task in that code corresponds to a table in an sqlite database. The code reads data from one sqlite database and writes to another sqlite database.
I expect this code to take a few minutes to run. In the meantime, there is a progress bar for each table that should update. Instead, the code runs in just a few seconds, the progress bars never update, and the database tables are unchanged. I see this text in my textbox at the end: End successfully. Minutes: 0,02.
What can I do to understand the problem and fix it? Again, this works correctly on my development machine.
UPDATE:
Sorry everyone: the code works perfectly fine! I made a stupid mistake
with the path of the sqlite database, which I had hardcoded in app.config.
I'd welcome suggestions on how to make that path dynamic.
So again, sorry.
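For the dynamic-path question, here is a minimal sketch of two common options (the file name myDb.db comes from the question; the helper name and folder choices are assumptions to adapt to your deployment):

using System;
using System.IO;

// Hypothetical helper: resolve the SQLite file location at runtime instead of
// hardcoding an absolute path in app.config.
static string GetDatabasePath()
{
    // Option 1: keep myDb.db next to the executable.
    var localPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "myDb.db");

    // Option 2: keep it in a per-user writable folder, which avoids permission
    // problems when the app is installed under Program Files.
    var appDataPath = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
        "MyApp", "myDb.db");
    Directory.CreateDirectory(Path.GetDirectoryName(appDataPath));

    return appDataPath; // or localPath, depending on how the app is deployed
}

The resulting path can then be plugged into whatever connection string your SQLite provider expects, instead of a fixed path read from app.config.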

There's not enough information in the question at the time I'm writing this for me to evaluate the problem. But I can at least give some strategies that will help you find a solution on your own:
Add excessive and detailed logging to the code (you can remove it later). That will help you understand what is happening as the program runs, and potentially see where it goes wrong. Run this in production if you have to, but preferably:
If you don't already have one, get a staging or QA environment separate from your own machine (use a local VM if you really have to) where you can reproduce the problem on demand, away from production. The logging information from the previous step may help with this.
Look for exceptions that might be hidden by the async code. Make sure you're observing the result of each of those operations (see the sketch at the end of this answer).
Remove most of the code. The program will be incomplete, but verify that the incomplete portion runs as expected. Keep adding small chunks of the complete program back until it breaks again. At that point you will (probably) know where the issue is... though it could be a race condition caused by an earlier block, but at least you'll have a clue where to start looking.
Unroll the async code and run everything using traditional synchronous methods. Make sure the simple synchronous code works in the production environment before trying to add the parallelism back.
When you finally track down this issue, make sure you have a unit test that will detect the problem in the future before it goes to production, to avoid a regression.
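For the exception-hunting step, here is a minimal diagnostic sketch (not the asker's code): it wraps each task so any failure is logged instead of disappearing. The task and method names mirror the question's code; Debug.WriteLine stands in for whatever logger you use.

using System;
using System.Diagnostics;
using System.Threading.Tasks;

// Diagnostic wrapper: surfaces exceptions from tasks whose failures would
// otherwise only show up as a single aggregated error, or not at all.
static async Task<T> Observed<T>(Task<T> task, string name)
{
    try
    {
        return await task;
    }
    catch (Exception ex)
    {
        Debug.WriteLine($"{name} failed: {ex}"); // swap in your real logger
        throw;
    }
}

// Usage inside SyncAppDbAsync(), for example:
// var readAgents = Observed(Task.Run(Agent.GetAgentAsync), nameof(Agent.GetAgentAsync));

Also note that await Task.WhenAll(...) rethrows only the first exception; after it throws, inspecting each task's Exception property shows every failure.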

Related

Monitor.TryEnter and Threading.Timer race condition

I have a Windows service that every 5 seconds checks for work. It uses System.Threading.Timer for handling the check and processing and Monitor.TryEnter to make sure only one thread is checking for work.
Just assume it has to be this way: the following code is part of 8 other workers that are created by the service, and each worker has its own specific type of work it needs to check for.
readonly object _workCheckLocker = new object();
public Timer PollingTimer { get; private set; }
void InitializeTimer()
{
if (PollingTimer == null)
PollingTimer = new Timer(PollingTimerCallback, null, 0, 5000);
else
PollingTimer.Change(0, 5000);
Details.TimerIsRunning = true;
}
void PollingTimerCallback(object state)
{
if (!Details.StillGettingWork)
{
if (Monitor.TryEnter(_workCheckLocker, 500))
{
try
{
CheckForWork();
}
catch (Exception ex)
{
Log.Error(EnvironmentName + " -- CheckForWork failed. " + ex);
}
finally
{
Monitor.Exit(_workCheckLocker);
Details.StillGettingWork = false;
}
}
}
else
{
Log.Standard("Continuing to get work.");
}
}
void CheckForWork()
{
Details.StillGettingWork = true;
//Hit web server to grab work.
//Log Processing
//Process Work
}
Now here's the problem:
The code above is allowing 2 Timer threads to get into the CheckForWork() method. I honestly don't understand how this is possible, but I have experienced this with multiple clients where this software is running.
The logs I got today when I pushed some work showed that it checked for work twice and I had 2 threads independently trying to process which kept causing the work to fail.
Processing 0-3978DF84-EB3E-47F4-8E78-E41E3BD0880E.xml for Update Request. - at 09/14 10:15:501255801
Stopping environments for Update request - at 09/14 10:15:501255801
Processing 0-3978DF84-EB3E-47F4-8E78-E41E3BD0880E.xml for Update Request. - at 09/14 10:15:501255801
Unloaded AppDomain - at 09/14 10:15:501255801
Stopping environments for Update request - at 09/14 10:15:501255801
AppDomain is already unloaded - at 09/14 10:15:501255801
=== Starting Update Process === - at 09/14 10:15:513756009
Downloading File X - at 09/14 10:15:525631183
Downloading File Y - at 09/14 10:15:525631183
=== Starting Update Process === - at 09/14 10:15:525787359
Downloading File X - at 09/14 10:15:525787359
Downloading File Y - at 09/14 10:15:525787359
The logs are written asynchronously and are queued, so don't read too much into the fact that the times match exactly. I just wanted to show what I saw in the logs: two threads hit a section of code that I believe should never have been allowed. (The log entries and times are real, just with sanitized messages.)
Eventually the two threads start downloading a large enough file that one of them gets access denied on the file, which causes the whole update to fail.
How can the above code actually allow this? I experienced this problem last year when I used a lock instead of Monitor, and I assumed it was because the Timer callbacks eventually got offset enough due to the lock blocking that timer threads stacked up: one blocked for 5 seconds and went through right as the Timer triggered another callback, and both somehow made it in. That's why I switched to Monitor.TryEnter, so I wouldn't just keep stacking timer threads.
Any clue? In all the cases where I have tried to solve this issue before, System.Threading.Timer has been the one constant, and I think it's the root cause, but I don't understand why.
I can see in the log you've provided that you got an AppDomain restart there; is that correct? If so, are you sure that you have one and only one service object across the AppDomain restart? I suspect that during the restart not all threads are stopped at exactly the same time, and some of them could continue polling the work queue, so two different threads in different AppDomains got the same work Id.
You could probably fix this by marking your _workCheckLocker field with the static keyword, like this:
static object _workCheckLocker;
and introducing a static constructor for your class to initialize this field (with inline initialization you could face some more complicated problems). But I'm not sure this would be enough in your case: during an AppDomain restart the static class is reloaded too, so, as I understand it, this is not an option for you.
Maybe you could introduce a static dictionary instead of a plain object for your workers, so you can check the Ids of the documents already being processed.
Another approach is to handle the Stopping event for your service, which would probably be raised during the AppDomain restart, introduce a CancellationToken there, and use it to stop all the work in those circumstances.
Also, as @fernando.reyes said, you could introduce a heavier locking construct, a Mutex, for synchronization, but this will degrade your performance. A minimal sketch of that option follows.
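This sketch reuses the field and callback names from the question and is only an illustration; a named Mutex is an OS-level object, so it is visible across AppDomains (and processes), unlike Monitor or lock. The mutex name here is hypothetical.

// Requires System and System.Threading.
// System-wide named mutex: both AppDomains contend on the same OS object.
static readonly Mutex WorkMutex = new Mutex(false, @"Global\MyService.WorkCheck");

void PollingTimerCallback(object state)
{
    // Wait up to 500 ms, mirroring the original Monitor.TryEnter timeout.
    if (!WorkMutex.WaitOne(500))
        return; // another thread (or AppDomain) is already checking for work

    try
    {
        CheckForWork();
    }
    catch (Exception ex)
    {
        Log.Error(EnvironmentName + " -- CheckForWork failed. " + ex);
    }
    finally
    {
        WorkMutex.ReleaseMutex();
    }
}

One caveat: if a thread exits without releasing the mutex, the next WaitOne throws AbandonedMutexException, so you may want to handle that case explicitly.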
TL;DR
The production stored procedure had not been updated in years. Workers were getting work they should never have gotten, so multiple workers were processing update requests.
I was finally able to find the time to properly set myself up locally to act as a production client through Visual Studio. Although I wasn't able to reproduce the problem exactly as I'd experienced it, I did accidentally stumble upon the issue.
Those who assumed that multiple workers were picking up the same work were indeed correct, and that's something that should never have been able to happen, since each worker is unique in the work it does and requests.
It turns out that in our production environment, the stored procedure that retrieves work based on the work type had not been updated in years (yes, years!) of deploys. Anything that checked for work automatically got update work, which meant that when the Update worker and worker Foo checked at the same time, they both ended up with the same work.
Thankfully, the fix is database side and not a client update.

C# File.Copy to multiple Network destination block other threads [closed]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 6 years ago.
Hi, I am using multiple threads to copy many files from one source to several network destinations; each thread copies a batch of files to a different network.
I use .NET's File.Copy(...).
I see 100% usage on only one network at any given moment; the 100% shifts from network to network.
When I changed the destinations to local ones, I saw the copied bytes balanced across all the threads.
When I ran 10 processes (each one copying to a different destination) instead of 10 threads, all 10 networks reached 100% usage.
I am using .NET 4.5.
Any ideas?
I'd suggest replacing threads (good for CPU-bound operations) with the async/await model, which performs very well for IO-bound operations.
Let's wrap the File.Copy operation as:
public static async Task Copy(string src, string dest)
{
await Task.Run(() => {
System.IO.File.Copy(src, dest);
});
}
You can call it from the calling method as in the snippet below:
var srcPath = "your source location";
var dstPath = "your destination location";
foreach(var file in System.IO.Directory.EnumerateFiles(srcPath))
{
var dstFile = System.IO.Path.Combine(dstPath, System.IO.Path.GetFileName (file));
Copy (file, dstFile);
}
Now you can simply pump this method with source and destination paths as fast as you like. Your limitation will be IO speed (disk/network etc.), but your CPU will be mostly free.
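One caveat with the loop above is that the returned tasks are never awaited, so failures go unobserved and the caller never knows when the copies finish. Here is a minimal sketch of collecting and awaiting them; it reuses the Copy helper defined above, and the method name and paths are placeholders:

using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

static async Task CopyAllAsync(string srcPath, string dstPath)
{
    var copies = new List<Task>();
    foreach (var file in Directory.EnumerateFiles(srcPath))
    {
        var dstFile = Path.Combine(dstPath, Path.GetFileName(file));
        copies.Add(Copy(file, dstFile)); // the Copy helper defined above
    }

    // Await them all so exceptions surface and completion is known.
    await Task.WhenAll(copies);
}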
Have you taken a look at asynchronous operations? There is some really good documentation on MSDN, specifically about async IO.
There are also some questions about async IO on Stack Overflow, such as this one.
Using Task.Run you can queue all of your file copy operations asynchronously.
Try something like:
List<String> fileList = new List<string>();
fileList.Add("TextFile1.txt");
fileList.Add("TextFile2.txt");
Parallel.For(0, fileList.Count, x =>
{
File.Copy(fileList[x], @"C:\" + fileList[x]);
}
);
Change C:\ to match your multiple destinations. If you provide the original code we could do more; a rough sketch of fanning out to several destinations is below.
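As a rough sketch of what fanning out to several destinations could look like with this approach (the share names are hypothetical, and the file list mirrors the snippet above):

var fileList = new List<string> { "TextFile1.txt", "TextFile2.txt" };
var destinations = new List<string>
{
    @"\\server1\share",
    @"\\server2\share"
};

// One parallel lane per destination, so a single slow network link does not
// serialize the others; files within a destination are copied sequentially.
Parallel.ForEach(destinations, dest =>
{
    foreach (var file in fileList)
    {
        File.Copy(file, Path.Combine(dest, Path.GetFileName(file)), true);
    }
});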

Web API - Accept incoming POST, pass off operation and close connection ASAP

I have an interaction with another server which makes POST calls to my web app. The problem I have is that the server making the calls tends to lock records which my app would go back to update.
So I need to accept the post, pass it off to another thread/process in the background and get the connection closed as soon as possible.
I've tried things like:
public IHttpActionResult Post(myTestModel passIn)
{
if (ModelState.IsValid) {
logger.debug("conn open");
var tasks = new []
{
_mymethod.PassOutOperation(passIn)
};
logger.debug ("conn closed");
return Ok("OK");
}
return BadRequest("Error in model");
}
I can tell by the amount of time the inbound requests take that the connections aren't being closed as quickly as they could be. In testing, these are just 3 consecutive posts to my web app.
Looking at my logs, I would have expected my entries for connection open and closed to be at the top of the log. However, the "conn closed" entries are at the bottom, after the operations I was trying to pass off had completed.
Has anyone got any tips?
Thanks in advance!
For anyone interested, I solved the problem.
I'm now using:
var tasks = new Thread(() =>
{
_mymethod.PassOutOperation(passIn);
});
tasks.Start();
The reason the code was stopping was that I was originally accessing HttpContext.Current.Request.UserHostName inside the other method, which was out of scope once I started the new thread. I've since changed this: I now declare a variable outside the code block that creates the new thread and pass it in via the method's parameters, e.g.
_myMethod.PassOutOperation(passIn, userHostName);
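Putting those pieces together, here is a minimal sketch of the pattern (myTestModel, _mymethod and PassOutOperation come from the question; everything else is illustrative):

public IHttpActionResult Post(myTestModel passIn)
{
    if (!ModelState.IsValid)
        return BadRequest("Error in model");

    // Read anything request-scoped *before* starting the thread;
    // HttpContext.Current is not available on the background thread.
    var userHostName = HttpContext.Current.Request.UserHostName;

    var worker = new Thread(() =>
    {
        _mymethod.PassOutOperation(passIn, userHostName);
    });
    worker.Start();

    // Respond immediately; the work continues in the background.
    return Ok("OK");
}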
Hope that helps someone in the future!

The process cannot access the file because it is being used by another process

I am trying to do the following:
var path = Server.MapPath("File.js");
// Create the file if it doesn't exist or if the application has been restarted
// and the file was created before the application restarted
if (!File.Exists(path) || ApplicationStartTime > File.GetLastWriteTimeUtc(path)) {
var script = "...";
using (var sw = File.CreateText(path)) {
sw.Write(script);
}
}
However, the following error is occasionally thrown:
The process cannot access the file '...\File.js' because it is being
used by another process
I have looked on here for similar questions; however, mine seems slightly different from the others. Also, I cannot replicate it unless the server is under heavy load, and therefore I want to make sure the fix is correct before I upload it.
I'd appreciate it if someone could show me how to fix this.
Thanks
It sounds like two requests are running on your server at the same time, and they're both trying to write to that file at the same time.
You'll want to add in some sort of locking behavior, or else write a more robust architecture. Without knowing more about what specifically you're actually trying to accomplish with this file-writing procedure, the best I can suggest is locking. I'm generally not a fan of locking like this on web servers, since it makes requests depend on each other, but this would solve the problem.
Edit: Dirk pointed out below that this may or may not actually work. Depending on your web server configuration, static instances may not be shared, and the same result could occur. I've offered this as a proof of concept, but you should most definitely address the underlying problem.
private static object lockObj = new object();
private void YourMethod()
{
var path = Server.MapPath("File.js");
lock (lockObj)
{
// Create the file if it doesn't exist or if the application has been restarted
// and the file was created before the application restarted
if (!File.Exists(path) || ApplicationStartTime > File.GetLastWriteTimeUtc(path))
{
var script = "...";
using (var sw = File.CreateText(path))
{
sw.Write(script);
}
}
}
}
But, again, I'd be tempted to reconsider what you're actually trying to accomplish with this. Perhaps you could build this file in the Application_Start method, or even in a static constructor (a minimal sketch follows). Doing it for every request is a messy approach that is likely to cause issues, particularly under heavy load, where every request will be forced to run synchronously.
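A minimal sketch of the Application_Start idea, assuming an ASP.NET Global.asax and the File.js from the question (adapt the class name, path, and script content to your project); because startup code runs once per app domain, the per-request race disappears:

using System;
using System.IO;
using System.Web;
using System.Web.Hosting;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // HostingEnvironment.MapPath works at startup, before any request exists.
        var path = HostingEnvironment.MapPath("~/File.js");
        var script = "..."; // build the script content here

        // Application_Start runs once per app domain, so no locking is needed.
        File.WriteAllText(path, script);
    }
}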

Xamarin.Mac NSThread.Start() stack overflow

I've got a rather complex Xamarin.Mac application. In fact, it's a Windows Forms application, but we're using Mono for Mac compatibility with a native Mac GUI. One of our business logic components involves watching the filesystem for changes using FSWatcher. Unfortunately, FSWatcher on Mac is horribly broken, leaving us to use the native FSEvents API via Xamarin.Mac.
Deep down in business logic, I've got a custom class called CBFileSystemWatcher which wraps the .NET FSWatcher, and on mac provides an adapter between the FSWatcher-expecting business logic and FSEvents on mac. INSIDE this compatibility class, I've got
private FSEventStream eventStream;
//...
this.eventStream.ScheduleWithRunLoop (NSRunLoop.Main);
which schedules the filesystem events on the main run loop. Unfortunately, this means the GUI blocks FS event handling, so suddenly if a modal dialog is open, for example, fs events stop getting processed.
My thought is to create a new runloop for the FS event scheduling, which I figure looks like
NSThread.Start(()=>{
// Some other code
this.eventStream.ScheduleWithRunLoop (NSRunLoop.Current);
});
The snag is, I think, that this code runs inside maybe two other layers of thread starts. For testing purposes, I've got the following code where I NEED the above code:
NSThread.Start(()=>{
int i = 0;
});
with a breakpoint on the middle line to determine whether it was hit. 9 times out of ten I get the following stack overflow:
Stack overflow in unmanaged: IP: 0x261ba35, fault addr: 0xb02174d0
Stack overflow in unmanaged: IP: 0x261ba35, fault addr: 0xb02174d0
(the addresses change, though often recur)
One time out of ten the code works exactly as expected and I break on i=0
To test this further, I placed the above test inside my main AppDelegate.cs FinishedLaunching method. There, the code reliably works.
To further confuse matters, I placed the following code at the start of FinishedLaunching:
var fooThread = new Thread(() =>
{
var barThread = new Thread(()=>{
NSThread.Start(() =>
{
int i = 4;
});
});
barThread.Start();
});
fooThread.Start();
With breakpoints on fooThread.Start();, barThread.Start();, and int i = 4; the code works exactly as expected, where the points are hit in reverse order.
My question is, does anyone have any ideas on how to even begin debugging this? The stack overflow is so out of the blue that I don't even know where to start.
A year later, I have this answer for you:
http://forums.xamarin.com/discussion/37451/more-modal-problems-nsmenu-nsstatusitem
