I have the following background worker in my app, which is meant to start a user's session automatically if there is not already one available.
This is done on a BackgroundWorker (backgroundInit) during initialisation. As you can see below, I have a while loop that keeps running as long as the variable checker remains false:
var checker = false;
var i = 0;
while (checker == false)
{
    _session = funcs.GetSession(_servers, _name);
    _sessID = _session[0].Trim();
    _servName = _session[1];
    checker = funcs.CheckRunning("lync.exe");
    i++;
    if (i > 200)
    {
        break;
    }
}
The CheckRunning method simply checks whether a specified program (in this case, "lync") is currently running and returns true or false accordingly (this is done via a CMD command).
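For context, a minimal sketch of what such a check might look like (not my exact implementation, which shells out to CMD; Process.GetProcessesByName is shown here as an equivalent managed check):

// Sketch only: the real CheckRunning runs a CMD command, but a managed
// equivalent of the same test looks like this.
using System.Diagnostics;
using System.IO;

public static bool CheckRunning(string exeName)
{
    // GetProcessesByName matches the process name without the ".exe" extension.
    string processName = Path.GetFileNameWithoutExtension(exeName);
    return Process.GetProcessesByName(processName).Length > 0;
}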
When I run the app in an empty session, however, the while loop only iterates once before breaking out, even though "Lync" is definitely not running.
Is there any reason why running a process, or too many processes, from within a BackgroundWorker might cause it to exit?
As the comments mentioned, this was not an issue with the BackgroundWorker, but rather an exception occurring at _sessID = _session[0].Trim(); because the session had not yet started, so there was no ID.
To resolve this, I simply placed a try/catch block around the assignments and let the program silently ignore the exception:
try
{
    _sessID = _session[0].Trim();
    _servName = _session[1];
}
catch (Exception exp)
{
    // MessageBox.Show(exp.Message);
}
This works for me, as the loop will continue checking until the counter i reaches the 200 limit, at which stage the program will accept failure.
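An alternative (just a sketch, assuming GetSession returns null or fewer than two elements until the session exists) would be to test the result before indexing, so the loop keeps polling without relying on the exception:

// Sketch: guard against the not-yet-started session instead of swallowing the exception.
// Assumes GetSession returns null or a short array until the session is available.
var session = funcs.GetSession(_servers, _name);
if (session != null && session.Length >= 2)
{
    _sessID = session[0].Trim();
    _servName = session[1];
}
checker = funcs.CheckRunning("lync.exe");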
I'm trying to start a NoGCRegion. Before I do, I check whether the GC is already in a NoGCRegion. My debugger says it isn't, yet it still throws as if it were.
And here is the surrounding code for context:
var firstTurn = true;
GCSettings.LatencyMode = GCLatencyMode.LowLatency;
while (state0.Round < Constants.MAX_ROUNDS && state1.Round < Constants.MAX_ROUNDS)
{
    Log($"GC Mode at beginning: {GCSettings.LatencyMode}");
    if (GCSettings.LatencyMode != GCLatencyMode.NoGCRegion) GC.TryStartNoGCRegion(15728640, true);

    var time = firstTurn ? 900 : 90;
    var action0 = mcts0.GetBest(state0, time);
    var action1 = mcts1.GetBest(state1, time);

    state0.Players[0].Action = action0;
    state0.Players[1].Action = action1;
    state1.Players[0].Action = action1;
    state1.Players[1].Action = action0;

    state0 = actionApplyer.ApplyActions(state0);
    state1 = actionApplyer.ApplyActions(state1);
    firstTurn = false;

    Log($"GC Mode at end: {GCSettings.LatencyMode}");
    if (GCSettings.LatencyMode == GCLatencyMode.NoGCRegion) GC.EndNoGCRegion();
}
What am I doing wrong? And how come the runtime says that NoGCRegion mode is in progress when I check that it isn't right beforehand?
The problem is that the code is calling TryStartNoGCRegion multiple times, in a nested fashion, which the docs (remarks) advise against:
You cannot nest calls to the TryStartNoGCRegion method, and you should only call the EndNoGCRegion method if the runtime is currently in no GC region latency mode. In other words, you should not call TryStartNoGCRegion multiple times (after the first method call, subsequent calls will not succeed), and you should not expect calls to EndNoGCRegion to succeed just because the first call to TryStartNoGCRegion succeeded.
The call to EndNoGCRegion doesn't happen because the GC is in low latency mode:
if (GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
GC.EndNoGCRegion();
So in the next iteration of the loop the code attempts to enter the NoGCRegion again and bonks out:
if (GCSettings.LatencyMode != GCLatencyMode.NoGCRegion)
GC.TryStartNoGCRegion(15728640, true);
It is worth pointing out that the remarks in the docs offer a number of caveats about this type of functionality and about how it may not even work at all, even if the code calls it correctly. Personally, I would try to find a solution that doesn't require interacting with the GC at all; a mempool comes to mind.
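If the GC has to be involved, one way to keep the calls from nesting is to pair each iteration's TryStartNoGCRegion with exactly one EndNoGCRegion. A rough sketch (the size and flag are copied from the question; the restructuring of the loop body is my assumption):

// Sketch only: pair start/end per iteration so the calls can never nest.
bool noGcStarted = false;
try
{
    // Returns false if the runtime cannot commit the requested amount of memory.
    noGcStarted = GC.TryStartNoGCRegion(15728640, true);

    // ... per-iteration work (GetBest, ApplyActions, ...) goes here ...
}
finally
{
    // Only end the region if this iteration actually started it and the
    // runtime still reports it as active.
    if (noGcStarted && GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
        GC.EndNoGCRegion();
}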
I'm attempting to reimplement functionality from a system class (Lazy<T>) and I found this unusual bit of code. I get the basic idea. The first thread to try for a value performs the calculations. Any threads that try while that's happening get locked at the gate, wait until release, and then go get the cached value. Any later calls notice the sentinel value and don't bother with the locks any more.
bool lockWasTaken = false;
var obj = Volatile.Read<object>(ref this._locker);
object returnValue = null;
try
{
    if (obj != SENTINEL_VALUE)
    {
        Monitor.Enter(obj, ref lockWasTaken);
    }
    if (this.cachedValue != null) // always true after code has run once
    {
        returnValue = this.cachedValue;
    }
    else // only happens on the first thread to lock and enter
    {
        returnValue = SomeCalculations();
        this.cachedValue = returnValue;
        Volatile.Write<object>(ref this._locker, SENTINEL_VALUE);
    }
    return returnValue;
}
finally
{
    if (lockWasTaken)
    {
        Monitor.Exit(obj);
    }
}
But let's say, after a change in the code, that another method resets this._locker to its original value and then goes in to lock and recalculate the cached value. While it does this, another thread happened to be picking up the cached value, so it's inside the locked section, but without a lock. What happens? Does it just execute normally while the thread with the lock also goes in parallel?
While it does this, another thread happened to be picking up the cached value, so it's inside the locked section, but without a lock. What happens? Does it just execute normally while the thread with the lock also goes in parallel?
Yes, it'll just execute normally.
That being said, it looks like this code could be replaced entirely by Lazy<T>. The Lazy<T> class provides a thread-safe way to handle lazy instantiation of data, which appears to be the goal of this code.
Basically, the entire code could be replaced by:
// Have a field like the following:
Lazy<object> cachedValue = new Lazy<object>(() => SomeCalculations());
// Code then becomes:
return cachedValue.Value;
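If the thread-safety behaviour needs to be spelled out, Lazy<T> also has a constructor that takes a LazyThreadSafetyMode; ExecutionAndPublication is the default and matches the lock-once-then-read pattern in the original code:

// Same field with the thread-safety mode made explicit.
Lazy<object> cachedValue =
    new Lazy<object>(() => SomeCalculations(), LazyThreadSafetyMode.ExecutionAndPublication);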
I just inherited some complex code that, unfortunately, I do not fully understand. It handles a large number of inventory records, inputting/outputting to a database. The solution is extremely large and advanced, and I am still fairly new to C#. The issue I am encountering is that the program periodically throws an IO exception. It doesn't actually return a failure code, but it messes up our output data.
The try/catch block is as follows:
private static void ReadRecords(OleDbRecordReader recordReader, long maxRows, int executionTimeout, BlockingCollection<List<ProcessRecord>> processingBuffer, CancellationTokenSource cts, Job theStack, string threadName) {
    ProcessRecord rec = null;
    try {
        Thread.CurrentThread.Name = threadName;

        if(null == cts)
            throw new InvalidOperationException("Passed CancellationToken was null.");
        if(cts.IsCancellationRequested)
            throw new InvalidOperationException("Passed CancellationToken has already been cancelled.");

        long reportingFrequency = (maxRows < 250000) ? 10000 : 100000;
        theStack.FireStatusEvent("Opening " + threadName);
        recordReader.Open(maxRows, executionTimeout);
        theStack.FireStatusEvent(threadName + " Opened");
        theStack.FireInitializationComplete();

        List<ProcessRecord> inRecs = new List<ProcessRecord>(500);
        ProcessRecord priorRec = rec = recordReader.Read();
        while(null != priorRec) { //-- note that this is priorRec, not rec. We process one row in arrears.
            if(cts.IsCancellationRequested)
                theStack.FireStatusEvent(threadName + " cancelling due to request or error.");
            cts.Token.ThrowIfCancellationRequested();

            if(rec != null) //-- We only want to count the loop when there actually is a record.
                theStack.RecordCountRead++;
            if(theStack.RecordCountRead % reportingFrequency == 0)
                theStack.FireProgressEvent();

            if((rec != null) && (priorRec.SKU == rec.SKU) && (priorRec.Store == rec.Store) && (priorRec.BatchId == rec.BatchId))
                inRecs.Add(rec); //-- just store it and keep going
            else { //-- otherwise, we need to process it
                processingBuffer.Add(inRecs.ToList(), cts.Token); //-- note that we don't enqueue the original LIST! That could be very bad.
                inRecs.Clear();
                if(rec != null) //-- Again, we need this check here to ensure that we don't try to enqueue the EOF null record.
                    inRecs.Add(rec); //-- Now, enqueue the record that fired this condition and start the loop again
            }
            priorRec = rec;
            rec = recordReader.Read();
        } //-- end While
    }
    catch(OperationCanceledException) {
        theStack.FireStatusEvent(threadName + " Canceled.");
    }
    catch(Exception ex) {
        theStack.FireExceptionEvent(ex);
        theStack.FireStatusEvent("Error in RecordReader. Requesting cancellation of other threads.");
        cts.Cancel(); // If an exception occurs, notify all other pipeline stages, then rethrow
        // throw; //-- This will also propagate Cancellation, but that's OK
    }
}
In the log of our job we see the output loader stopping and the exception is
System.Core: Pipe is broken.
Does anyone have any ideas as to what may cause this? More importantly, the individual who built this large-scale application is no longer here. When I debug my applications, I can normally add breakpoints in the solution and step through everything in Visual Studio to find the issue. However, this application is huge and pops up a GUI when I debug it. I believe the GUI was made for testing purposes, but it stops me from actually stepping through everything. When the .exe is run from our actual job stream, there is no GUI; it just executes the way it's supposed to.
I am asking for help with two things:
1. Suggestions as to what may cause this. Could the OleDB driver be the cause? I ask because I have this running on two different servers, one test and one not. The one with the newer OleDB driver version does not fail (7.0, I believe, whereas the one that fails is on 6.0).
2. Is there any code I could add that might give me a better indication of what is causing the broken pipe (see the logging sketch below)? The error only happens periodically. If I run the job again straight afterwards, it may not happen. I'd say it throws the exception 30-40% of the time.
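For point 2, a minimal sketch of the kind of extra logging that might help; the helper name is made up, it just walks the InnerException chain so a wrapped IOException like "Pipe is broken" shows where it actually originated:

// Hypothetical helper, not part of the existing code: dump the full exception chain.
static string DescribeException(Exception ex)
{
    var sb = new System.Text.StringBuilder();
    for (var current = ex; current != null; current = current.InnerException)
    {
        sb.AppendLine(current.GetType().FullName + ": " + current.Message);
        sb.AppendLine(current.StackTrace);
    }
    return sb.ToString();
}

// Then, in the existing catch block, something like:
//     theStack.FireStatusEvent(DescribeException(ex));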
If you have any additional questions about the structure just let me know.
I've got a problem with the infamous message "The thread xxx has exited with code 0 (0x0)".
In my code I have a main class called "Load" that starts with a Windows Form load event:
public class Load
{
    public Load()
    {
        Device[] devices = GetDevices(); // Get an array of devices from an external source
        for (int i = 0; i < devices.Length; i++)
        {
            DeviceDiagnosticCtrl deviceDiagnostic = new DeviceDiagnosticCtrl(devices[i].name);
        }
    }
}
Inside the constructor, for each generic device read from an external source, I initialize a custom diagnostic class that runs a thread:
public class DeviceDiagnosticCtrl
{
    private Thread diagnosticController;
    private volatile bool diagnosticControllerIsRunning = false;

    public DeviceDiagnosticCtrl(string _name)
    {
        // Thread initialization
        this.diagnosticController = new Thread(new ThreadStart(this.CheckDiagnostic));
        this.diagnosticController.Start();
        this.diagnosticControllerIsRunning = true;
    }

    private void CheckDiagnostic()
    {
        while (this.diagnosticControllerIsRunning)
        {
            try
            {
                // Custom 'Poll' message class used to request diagnostics from a specific device
                Poll poll = new Poll();
                // Generic message result for the diagnostic request
                IGenericMessage genericResult;
                // Use a custom driver to send the diagnostic request
                SendSyncMessageResult res = this.customDriver.SendSyncMessage(poll, out genericResult);
                switch (res)
                {
                    case SendSyncMessageResult.GOOD:
                    {
                        // Log result
                    }
                    break;
                    case SendSyncMessageResult.EXCEPTION:
                    {
                        // Log result
                    }
                    break;
                }
                Thread.Sleep(this.customDriver.PollScantime);
            }
            catch (Exception ex)
            {
                // Log exception
            }
        }
    }
}
When I run the above code in debug mode I always read 8 devices from the external source, and for each of them I continuously run a managed thread to retrieve diagnostics.
My problem is that randomly one or more of the 8 threads I expect from the code above exit with code 0, without any exception.
I've started/restarted the code in debug mode a lot of times, and almost every time one of the threads exits.
I've read somewhere (i.e. this SO question) that it could depend on Garbage Collector action, but I'm not sure whether that is my case - or how to prevent it.
Does anyone see something strange/wrong in the sample code I posted above? Any suggestions?
'while (this.diagnosticControllerIsRunning)' is quite likely to fail immediately, in which case the thread drops out. It's no good starting the thread and THEN setting 'this.diagnosticControllerIsRunning = true;' - you're quite likely to be too late.
Bolt/stable-door. Something like:
do {
    // lengthy stuff with Sleep() in it
} while (this.diagnosticControllerRun);
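Applied to the constructor in the question, that would look something like this (a sketch; only the ordering changes):

// Sketch: set the flag BEFORE starting the thread, so the worker can never
// observe it as false on its first check of the while condition.
public DeviceDiagnosticCtrl(string _name)
{
    this.diagnosticControllerIsRunning = true;
    this.diagnosticController = new Thread(new ThreadStart(this.CheckDiagnostic));
    this.diagnosticController.Start();
}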
Copied from Here
Right click in the Output window when you're running your program and uncheck all of the messages you don't want to see (like Thread Exit messages).
I wanted to parallelize a piece of code, but the code actually got slower, probably because of the overhead of Barrier and BlockingCollection. There would be two threads, where the first would find pieces of work which the second one would operate on. Neither operation is much work, so the overhead of switching safely would quickly outweigh the benefit of using two threads.
So I thought I would try to write some code myself to be as lean as possible, without using Barrier etc. It does not behave consistently, however. Sometimes it works, sometimes it doesn't, and I can't figure out why.
This code is just the mechanism I use to try to synchronize the two threads. It doesn't do anything useful, just the minimum amount of code you need to reproduce the bug.
So here's the code:
// node in linked list of work elements
class WorkItem {
    public int Value;
    public WorkItem Next;
}

static void Test() {
    WorkItem fst = null; // first element

    Action create = () => {
        WorkItem cur = null;
        for (int i = 0; i < 1000; i++) {
            WorkItem tmp = new WorkItem { Value = i }; // create new comm class
            if (fst == null) fst = tmp; // if it's the first, add it there
            else cur.Next = tmp;        // else add to back of list
            cur = tmp;                  // this is the current one
        }
        cur.Next = new WorkItem { Value = -1 }; // -1 means stop element
#if VERBOSE
        Console.WriteLine("Create is done");
#endif
    };

    Action consume = () => {
        //Thread.Sleep(1); // this also seems to cure it
#if VERBOSE
        Console.WriteLine("Consume starts"); // especially this one seems to matter
#endif
        WorkItem cur = null;
        int tot = 0;
        while (fst == null) { } // busy wait for first one
        cur = fst;
#if VERBOSE
        Console.WriteLine("Consume found first");
#endif
        while (true) {
            if (cur.Value == -1) break; // if stop element, break
            tot += cur.Value;
            while (cur.Next == null) { } // busy wait for next to be set
            cur = cur.Next; // move to next
        }
        Console.WriteLine(tot);
    };

    try { Parallel.Invoke(create, consume); }
    catch (AggregateException e) {
        Console.WriteLine(e.Message);
        foreach (var ie in e.InnerExceptions) Console.WriteLine(ie.Message);
    }

    Console.WriteLine("Consume done..");
    Console.ReadKey();
}
The idea is to have a linked list of work items. One thread adds items to the back of that list, and another thread reads them, does something, and polls the Next field to see if it is set. As soon as it is set, it moves to the new item and processes it. It polls the Next field in a tight busy loop because it should be set very quickly; going to sleep, context switching, etc. would kill the benefit of parallelizing the code.
The time it takes to create a workitem would be quite comparable to executing it, so the cycles wasted should be quite small.
When I run the code in release mode, sometimes it works, sometimes it does nothing. The problem seems to be in the 'Consume' thread; the 'Create' thread always seems to finish. (You can check by fiddling with the Console.WriteLines.)
It has always worked in debug mode. In release it's about 50% hit and miss. Adding a few Console.WriteLines helps the success ratio, but even then it's not 100% (the #define VERBOSE stuff).
When I add the Thread.Sleep(1) in the 'Consume' thread it also seems to fix it. But not being able to reproduce a bug is not the same thing as knowing for sure it's fixed.
Does anyone here have a clue as to what goes wrong here? Is it some optimization that creates a local copy or something that does not get updated? Something like that?
There's no such thing as a partial update, right? Like a data race, but where one thread is half done writing and the other thread reads the partially written memory? Just checking..
Looking at it, I think it should just work. I guess once every few times the threads arrive in a different order and that makes it fail, but I don't get how. And how could I fix this without slowing it down?
Thanks in advance for any tips,
Gert-Jan
I do my damn best to avoid the utter minefield of closure/stack interaction at all costs.
This is PROBABLY a (language-level) race condition, but without reflecting Parallel.Invoke I can't be sure. Basically, sometimes fst is being changed by create() and sometimes not. Ideally, it should NEVER be changed (if C# had good closure behaviour). It could be due to which thread Parallel.Invoke chooses to run create() and consume() on. If create() runs on the main thread, it might change fst before consume() takes a copy of it. Or create() might be running on a separate thread and taking a copy of fst. Basically, as much as I love C#, it is an utter pain in this regard, so just work around it and treat all variables involved in a closure as immutable.
To get it working:
// Replace
WorkItem fst = null;
// with
WorkItem fst = WorkItem.GetSpecialBlankFirstItem();

// And replace
if (fst == null) fst = tmp;
// with
if (fst.Next == null) fst.Next = tmp;
A thread is allowed by the spec to cache a value indefinitely.
see Can a C# thread really cache a value and ignore changes to that value on other threads? and also http://www.yoda.arachsys.com/csharp/threads/volatility.shtml
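A minimal sketch of what that means for the code in the question (my addition, not from the linked answers): marking the field that crosses threads as volatile stops the JIT from caching it in a register inside the busy-wait loops:

// Sketch: volatile forces the consumer's spin loop to re-read Next each pass.
class WorkItem {
    public int Value;               // written before Next is published
    public volatile WorkItem Next;  // the consumer spins on Next == null
}
// The spin on fst itself would still need Volatile.Read (or the
// sentinel-first-item change suggested in the answer above).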