Would anyone care to explain to me how the value of this.oBalance.QouteBalance evaluates as being less than zero when it clearly isn't? Please see the image below.
Am I missing something fundamental when it comes to comparing doubles in C#?
public double QouteBalance { get; set; }
UpdateBalance_PositionOpenned() is not being called in a loop, but is being called as part of a more complex event-driven procedure that runs on the ticks of a timer (on the order of milliseconds).
EDIT: Pardon the code if it's messy, but I couldn't clean it up: this was a run-time error after quite a long run, so I was afraid I wouldn't be able to recreate it. The exception message is not accurate; it's just a reminder for myself. The code after the exception is code I forgot to comment out before starting this particular run.
EDIT 2: I am building and running in Release Mode.
EDIT 3: Pardon my ignorance, but it would seem that I am in fact running in a multi-threaded environment, since this code is being called as part of a more complex object method that gets executed on the ticks (events) of a timer. Would it be possible to ask the timer to wait until all the code inside its event handler has finished before it can tick again?
EDIT 4: Since this has been established to be a multi-threading issue; I will try to give wider context to arrive at an optimized solution.
I have a Timer object, which executes the following on every tick:
Run a background worker to read data from a file.
When the background worker finishes reading the data, raise an event.
In the event handler, run object code that calls the method below (in the image) and multiple other routines, including GUI updates.
I suppose this problem could be avoided by using the timer's Tick events to read from the file, but changing this would break other parts of my code.
You're accessing shared variables from multiple threads. It's probably a race condition where one thread has thrown the error but by the time the debugger has caught and attached, the variable's value has changed.
You would need to look at implementing synchronization logic, such as locking, around the shared variables.
Edit: To answer your edit:
You can't really tell the timer not to tick (well, you can, but then you're starting and stopping, and even after calling Stop you might still receive a few more events depending on how fast they are being dispatched). That said, you could look at the Interlocked class and use it to set and clear an IsBusy flag. If your tick method fires and sees you're already working, it just sits out that round and waits for a future tick to handle the work. I wouldn't say it's a great paradigm, but it's an option.
The reason I specify the Interlocked class rather than just a plain shared variable again comes down to the fact that you're accessing it from multiple threads at once. If you're not using Interlocked, two ticks could both check the value and both conclude they can proceed before either has flipped the flag to keep the other out. You'd hit the same problem.
The more traditional way of synchronizing access to a shared data member is with locking, but you'll quickly run into problems if the tick events fire too quickly: they'll start to back up on you.
Edit 2: To answer your question about an approach to synchronizing the data with shared variables on multiple threads, it really depends on what you're doing specifically. We have a very small window into what your application is doing so I'm going to piece this together from all the comments and answers in hopes it will inform your design choice.
What follows is pseudo-code. This is based on a question you asked which suggests you don't need to do work on every tick. The tick itself isn't important, it just needs to keep coming in. Based on that premise, we can use a flagging system to check if you're busy.
...
Timer.Start(Handle_Tick)
...
public void Handle_Tick(...)
{
    // Check to see if we're already busy. We don't need to "pump" the work
    // if we're already processing.
    if (IsBusy)
        return;

    try
    {
        IsBusy = true;
        // Perform your work
    }
    finally
    {
        IsBusy = false;
    }
}
In this case, IsBusy could be a volatile bool, it could be accessed with the Interlocked class's methods, it could be guarded by a lock, etc. What you choose is up to you.
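For illustration, here is a minimal sketch of the Interlocked variant. It assumes the flag lives on the same class as the tick handler, and it uses an int field rather than a bool because the System.Threading.Interlocked methods work on integers:

// 0 = idle, 1 = busy.
private int isBusy;

public void Handle_Tick(object sender, EventArgs e)
{
    // Atomically set the flag to 1, but only if it is currently 0.
    // If another tick is still being processed, skip this round.
    if (Interlocked.CompareExchange(ref isBusy, 1, 0) != 0)
        return;

    try
    {
        // Perform your work here.
    }
    finally
    {
        // Release the flag so a future tick can do work again.
        Interlocked.Exchange(ref isBusy, 0);
    }
}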
If this premise is incorrect and you do in fact have to do work with every tick of the timer, this won't work for you. You're throwing away ticks that come in when you're busy. You'd need to implement a synchronized queue if you wanted to keep hold of every tick that came in. If your frequency is high, you'll have to be careful as you'll eventually overflow.
This isn't really an answer but:
UpdateBalance_PositionOpenned() is not being called in a loop, but is being called as part of a more complex event-driven procedure that runs on the ticks of a timer (on the order of milliseconds)
see this earlier comment: "Multi-threading?" – abatishchev
A tight, timer-driven event loop on the order of milliseconds probably has all the problems of threads, and will be almost entirely impossible to troubleshoot with a step-through debugger: stuff is happening far faster than you can hit 'F10'. Not to mention, you're accessing a variable from a different thread on each event cycle, yet there's no synchronization in sight.
Not really a full answer, but too much for a comment.
This is how I would code defensively.
Local scope leads to less unexpected behaviour, and it makes the code easier to debug and test.
public void updateBalance(double amount, double fee, out double balance)
{
    try
    {
        balance = amount * (1.0 + fee);
        if (balance < 0.0) balance = 0.0;
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine(ex.Message);
        throw; // rethrow without overwriting the original stack trace
    }
}
Value types are copied, so even if the input variable for amount changed while the method was executing, the value of amount inside the method would not.
Now the out balance without locks is a different story.
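A quick usage sketch (the numbers are made up):

double balance;
updateBalance(100.0, 0.001, out balance);
// balance is now 100.1; a negative result would have been clamped to 0.0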
Related
In my app I need a process that works in the background and checks for changes to various things, then does some logic. Most of the time this process will be idle, just waiting for the trigger point. So this is what I did:
private void MyBackgoroundThread()
{
    while (isRunning)
    {
        if (MyStatus == 1)
        {
            // Log removed
        }
    }
}
Then at run time it is called from the constructor with the following:
await Task.Run(() => MyBackgoroundThread());
Now this works perfectly. The problem is that my app, when idle, uses about 35% CPU. With MyBackgoroundThread disabled, the app uses 0% CPU at idle, so I know it's this thread.
I understand why this is happening, but my question is: what is best practice for handling this situation so I don't burn 35% of a CPU doing nothing?
Edit, based on comments:
@Dour High Arch: "Explain what 'the trigger point' is."
Basically, MyStatus is a global variable; when the process has to be "triggered", the status gets changed to 1, for example. Sorry, I thought that was clear from the code.
@Ron Beyer: "This seems dangerous given that the 'background' task is an infinite loop; how is the await supposed to return?"
Well, you are at the meat of the issue. The global variable isRunning gets changed to false when the app closes. I am looking for a better solution.
You are using 1 CPU, more or less, to constantly iterate your while statement.
The best solution depends on what you are doing in that code. If at all possible, use an event or similar notification to trigger background work rather than a polling thread. For example if you're looking for changes to files, use a FileSystemWatcher.
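For example, a minimal FileSystemWatcher setup might look something like this (the path and filter are placeholders):

using System;
using System.IO;

var watcher = new FileSystemWatcher(@"C:\data", "*.txt");
watcher.Changed += (sender, e) =>
{
    // React to the change here instead of polling for it.
    Console.WriteLine(e.ChangeType + ": " + e.FullPath);
};
// Keep a reference to the watcher alive for as long as you need events.
watcher.EnableRaisingEvents = true;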
If your code, rather than an external agent, is causing the need to do work, you can also consider a Producer/Consumer pattern. In that case, have a look at BlockingCollection, which makes implementing that pattern a snap.
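A rough sketch of that pattern; WorkItem and Process are placeholders for your own type and logic:

using System.Collections.Concurrent;
using System.Threading.Tasks;

var queue = new BlockingCollection<WorkItem>();

// Consumer: blocks without spinning until an item is available.
Task.Run(() =>
{
    foreach (var item in queue.GetConsumingEnumerable())
    {
        Process(item); // your logic here
    }
});

// Producer: wherever the "trigger" happens, hand the work off.
queue.Add(new WorkItem());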
If there is no way to use an event-based notification mechanism to trigger the background work, you can use Thread.Sleep() to at least have your polling thread sleep for a time, until it has to wake up and check for work again.
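If you do end up polling, something along these lines keeps the CPU usage negligible. It reuses the names from the question, and the one-second interval is arbitrary:

private void MyBackgoroundThread()
{
    // Note: isRunning should ideally be declared volatile so this loop
    // reliably sees updates made from other threads.
    while (isRunning)
    {
        if (MyStatus == 1)
        {
            // Do the triggered work here.
        }

        // Sleep so the loop doesn't spin at full speed.
        Thread.Sleep(TimeSpan.FromSeconds(1));
    }
}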
UPDATE based on your edits
Basically, MyStatus is a global variable; when the process has to be "triggered", the status gets changed to 1, for example. Sorry, I thought that was clear from the code.
Change it from a global variable to a static property, and have it fire off an event when the value is changed. Instead of using your polling thread, have some code that subscribes to your new event.
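A rough sketch of that idea; the class and event names here are invented for the example:

using System;

public static class AppState
{
    private static int myStatus;

    // Raised whenever MyStatus changes.
    public static event EventHandler StatusChanged;

    public static int MyStatus
    {
        get { return myStatus; }
        set
        {
            if (myStatus == value) return;
            myStatus = value;
            StatusChanged?.Invoke(null, EventArgs.Empty);
        }
    }
}

// Subscribers react to the change instead of polling for it:
AppState.StatusChanged += (s, e) =>
{
    if (AppState.MyStatus == 1)
    {
        // Do the work the polling loop used to do.
    }
};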
The global variable isRunning gets changed to false when the app closes.
A background thread will automatically close when the application closes.
I am having a bit of a conundrum here, and would like to know a couple of things:
Am I doing this wrong?
What is the expected behaviour of a BackgroundWorker in different scenarios?
If possible, an answer as to why I am getting this specific behaviour would be nice.
For point 1, and ultimately 3 as well, I will explain what I am doing in pseudo-code so that you have the details without my actually spitting out thousands of lines of code. While I write this post, I will look at the code itself to ensure that the information is accurate as far as when and what is happening. At the very end, I will also detail what is happening and why I am having issues.
Pseudo-Code details:
I have a main UI thread (WinForms form), where after selecting a few configuration options you click a button.
This button's event does some preliminary setup work in memory and on the file system to get things going and, once that's done, fires off ONE BackgroundWorker. This BackgroundWorker initializes 5 other BackgroundWorkers (form-scope variables), sets their "Done" flags (bool, same scope) to true, sets their "Log" vars to a new List<LogEntry> (same scope) and, once that's done, calls a method called CheckEndConditions. This method call is done within the DoWork() of the initial BackgroundWorker, and not in the RunWorkerCompleted event.
The CheckEndConditions method does the following logic:
IF ALL "Done" vars are set to True...
Grab the "Log" vars for all 5 BWs and adds their content to a master log.
Reset the "Log" vars for all 5 BWs to a new List<LogEntry>
Reset the "Done" vars for all 5 BWs to False.
Call MoveToNextStep() method which returns an Enum value representative of the next step to perform
Based on the result of (5), grab a List<ActionFileAction> that needs to be processed
Check to ensure (6) has actions to perform
If NO, set ALL "Done" flags to true, and call itself to move to the next step...
If YES, partition this list of actions into 5 lists and place them in an array of List<ActionFileAction> called ThreadActionSets[]
Check EACH partitioned list for content, and if none, sets the "Done" flag for the respective thread to true (this ensures there are no "end race scenarios")
Fire off all 5 threads using RunWorkerAsync() (unless we are at the Finished step of course)
Return
Each BW has the exact same DoWork() code, which basically boils down to the following:
1. Do I have any actions to perform?
2. If NO, set my e.Result var to an empty list of log entries and exit.
3. If YES, loop over each action in the set and perform 4-5-6 below...
4. What context of action am I doing? (Groups, Modules, etc.)
5. Based on (4), what type of action am I doing? (Add, Delete, Modify)
6. Based on (5), perform the right action and log everything I do locally.
7. When all actions are done, set my e.Result var to the "log of everything I've done", and exit.
Each BW has the same RunWorkerCompleted() code, which basically boils down to the following (a sketch follows after the list):
TRY
- From the e.Result var, grab the List<LogEntry> and put it in my respective thread's "Log" var.
- Set my respective "Done" var to true.
- Call CheckEndConditions().
CATCH
- Set my respective "Done" var to true.
- Call CheckEndConditions().
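Purely as an illustration of the handler described above, with a lock added around the shared state and a check of e.Error (the field names are invented; e.Result should only be read when e.Error is null):

private readonly object syncRoot = new object();

private void Worker1_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    lock (syncRoot)
    {
        if (e.Error == null)
        {
            // e.Result is only valid when no exception escaped DoWork.
            worker1Log = (List<LogEntry>)e.Result;
        }

        worker1Done = true;
    }

    CheckEndConditions();
}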
So that is basically it... in summary, I am splitting a huge number of actions into 5 partitions and sending those off to 5 threads to perform them at a faster rate than on a single thread.
The Problem
The problem I am having is that, regardless of how much thought I put into this for race scenarios (specifically end ones), I often find myself with a jammed/non-responsive program.
In the beginning, I had set up my code inefficiently and the problem was with end race scenarios: the threads would complete so fast that the last call made to CheckEndConditions saw one of the "Done" vars still set to false, when in fact it wasn't/it had completed... So I changed my code to what you see above, which, I thought, would fix the problem, but it hasn't. The whole process still jams/falls asleep, and no threads are actually running any processing when this happens, which means that something went wrong (I think, not sure) with the last call to CheckEndConditions.
So my 1st question: Am I doing this wrong? What is the standard way of doing what it is I want to do? The logic of what I've done feels sound to me, but it doesn't behave how I expect it to, so maybe the logic isn't sound? ...
2nd question: What is the expected behaviour of a BW when this scenario occurs:
An error occurred within the DoWork() method that was uncaught... does it fire off the RunWorkerCompleted() event? If not, what happens?
3rd question: Does anyone see something obvious as to why my problem is occurring?
Thanks for the help!
Reposting my comment as answer per OP's request:
The RunWorkerCompleted event will not necessarily be raised on the same thread the worker was created on (unless it was created on the UI thread). See the BackgroundWorker.RunWorkerCompleted event documentation.
See OP comments for more details.
Edit, a few years on: this was clearly a terrible approach, from the start of my time using C#/.NET. Hopefully this question helps another noob with the same "problem".
Is this the best way to approach this scenario?
while (true)
{
    if (Main.ActiveForm != null)
    {
        Main.ActiveForm.Invoke(new MethodInvoker(Main.SomeMethod));
        break;
    }
}
This is performed on a second thread.
Is this the best way to approach this scenario?
Just to clarify, the scenario is "I have a property of reference type; as soon as the property is not null I wish to invoke one of its methods", and the technique is "spin up another thread, busy-wait until the value is not null, invoke, and stop waiting".
The answer to your question is no, this is not the best way to approach this scenario. This is a terrible way to solve this problem for several reasons.
First, the code is simply wrong. The C# language makes no guarantee that this code works. If it works, then it is working by accident, and it can stop working at any time.
There are three reasons that this code is wrong.
The first reason it is wrong is because of the way threads work on modern operating systems. It is possible that the two threads are each assigned to their own processor. When a processor accesses memory on a modern machine, it does not go out to main memory every time. Rather, it fetches hundreds or thousands of nearby values into a cache the first time you hit an address. From then on, it accesses the local cache rather than taking the expensive bus ride back to main memory. The implications of that should be obvious: if one thread is writing and another thread is reading, then one thread might be writing to one processor cache and the other might be reading from an entirely different processor cache. They can be inconsistent forever if nothing forces them to be consistent, and therefore your loop can run forever even if the property has been set on another thread.
(And the "backwards" case is also possible; if the value of the property is now null, and was set at some time in the past, then it is possible that the second thread is reading the old, stale value and not the fresh null value. It therefore could decide to not wait at all, and invoke the method on a stale value of the property.)
The second reason this code is wrong is because it has a race condition. Suppose someone assigns the property to non-null on thread one, and then thread two reads it as non-null so you enter the body of the "if", and then thread three assigns it back to null, and then thread two reads it as null and crashes.
The third reason this code is wrong is because the compiler -- either the C# compiler or the jitter -- is permitted to "optimize" it so that it stays in the loop forever without doing the test a second time. The optimizer is allowed to analyze the code and realize that after the first time through the loop, if the test fails then nothing in the rest of the loop can cause it to succeed. It is permitted to then skip the test the next time through because it "knows" that it cannot succeed. Remember, the optimizer is permitted to make any optimization that would be invisible in a single-threaded program.
The optimizer does not actually make this optimization (to my knowledge) but it is permitted to, and a future version could do so. The optimizer can and does make similar optimizations in similar situations.
In order to make this code correct there must be a memory barrier in place. The most common technique for introducing a barrier is to make the access "volatile". The memory barrier forces the processor to abandon its cache and go back to main memory, and discourages the compiler from making aggressive optimizations. Of course, properties may not be volatile, and this technique utterly wrecks performance because it eliminates one of the most important optimizations in modern processors. You might as well be accessing main memory by carrier pigeon; the cost is that onerous compared to hitting the cache.
Second, the code is bad because you are burning an entire processor sitting there in a tight loop checking a property. Imagine a processor is a car. Maybe your business owns four cars. You are taking one of them and driving it around the block non-stop, at high speed, until the mailman arrives. That is a waste of a valuable resource! It will make the entire machine less responsive, on laptops it will chew through battery like there is no tomorrow, it'll create waste heat, it's just bad.
I note however that at least you are marshalling the cross-thread call back to the UI thread, which is correct.
The best way to solve this problem is to not solve it. If you need something to happen when a property becomes non-null, then the best solution is to handle a change event associated with that property.
If you cannot do that then the best solution is to make the action the responsibility of the property. Change the setter so that it does the action when it is set to non-null.
If you can't make it the responsibility of the property, then make it the responsibility of the user who is setting the property. Require that every time the property be set to non-null, that the action be performed.
If you can't do that then the safest way to solve this problem is to NOT spin up another thread. Instead, spin up a timer that signals the main thread every half second or so, and have the timer event handler do the check and perform the action.
Busy-waiting is almost always the wrong solution.
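As a minimal sketch of the first suggestion above (raising a change event from the property), with entirely hypothetical names:

using System;

public class Widget { }

public class Owner
{
    private Widget widget;

    // Raised when Widget transitions from null to non-null.
    public event EventHandler WidgetAvailable;

    public Widget Widget
    {
        get { return widget; }
        set
        {
            bool becameAvailable = widget == null && value != null;
            widget = value;
            if (becameAvailable)
                WidgetAvailable?.Invoke(this, EventArgs.Empty);
        }
    }
}

// The code that used to busy-wait simply subscribes:
// owner.WidgetAvailable += (s, e) => DoTheAction();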
All you need to do is attach an event handler to the Activated event of your form. Add the following inside that form's constructor:
Activated += SomeMethod;
And it will be fired whenever you re-activate the form after previously using another application.
The primary advantage of this approach is that you avoid creating a new thread just to have it sitting around doing a spinwait (using up a lot of CPU cycles).
If you want to use this approach, note that you have a race condition: someone else might set Main.ActiveForm to null between your test and your Invoke() call. That would result in an exception.
Copy the variable locally before doing any tests to make sure that the variable cannot be made null.
while (true)
{
    var form = Main.ActiveForm;
    if (form != null)
    {
        form.Invoke(new MethodInvoker(Main.SomeMethod));
        break;
    }
}
When you use a loop you are wasting CPU.
The better way to do this is to use events:
// create the event object in some shared place
var ev = new ManualResetEvent(false);
// call this when the form has loaded
ev.Set();
// wait for it in the other thread
ev.WaitOne();
Use:
while(Main.ActiveForm == null) { }
I would do it like this:
while (Main.ActiveForm == null)
{
    // maybe a sleep here?
}
Main.ActiveForm.Invoke(new MethodInvoker(Main.SomeMethod));
I am working with the Rx scheduler classes, using the .Schedule(DateTimeOffset, Action<Action<DateTimeOffset>>) overload. Basically I have a scheduled action that can schedule itself again.
Code:
public SomeObject(IScheduler sch, Action variableAmountofTime)
{
    this.sch = sch;
    sch.Schedule(GetNextTime(), (Action<DateTimeOffset> runAgain) =>
    {
        // Something that takes an unknown, variable amount of time.
        variableAmountofTime();
        runAgain(GetNextTime());
    });
}
public DateTimeOffset GetNextTime()
{
    // Return some time offset based on the scheduler's current time,
    // which is irregular based on other inputs that I have left out.
    return this.sch.Now.AddMinutes(1);
}
My question concerns simulating the amount of time variableAmountofTime might take, and testing that my code behaves as expected and only calls it as expected.
I have tried advancing the test scheduler's time inside the delegate, but that does not work. Here is an example of code that I wrote that doesn't work; assume GetNextTime() is just scheduling one minute out.
[Test]
public void TestCallsAppropriateNumberOfTimes()
{
    var sch = new TestScheduler();
    var timesCalled = 0;
    Action variableAmountOfTime = () =>
    {
        sch.AdvanceBy(TimeSpan.FromMinutes(3).Ticks);
        timesCalled++;
    };

    var someObject = new SomeObject(sch, variableAmountOfTime);

    sch.AdvanceTo(TimeSpan.FromMinutes(3).Ticks);

    Assert.That(timesCalled, Is.EqualTo(1));
}
Since I am advancing 3 minutes into the future but the execution itself takes 3 minutes, I want to see this trigger only once; instead it triggers 3 times.
How can I simulate something taking time during execution using the test scheduler?
Good question. Unfortunately, this is currently not supported in Rx v1.x and Rx v2.0 Beta (but read on). Let me explain the complication of nested Advance* calls to you.
Basically, Advance* implies starting the scheduler to run work till the point specified. This involves running the work in order on a single logical thread that represents the flow of time in the virtual scheduler. Allowing nested Advance* calls raises a few questions.
First of all, should a nested Advance* call cause a nested worker loop to be run? If that were the case, we're no longer mimicking a single logical thread of execution as the current work item would be interrupted in favor of running the inner loop. In fact, Advance* would lead to an implicit yield where the rest of the work (that was due now) after the Advance* call would not be allowed to run until all nested work has been processed. This leads to the situation where future work cannot depend on (or wait for) past work to finish its execution. One way out is to introduce real physical concurrency, which defeats various design points of the virtual time and historical schedulers to begin with.
Alternatively, should a nested Advance* call somehow communicate to the top-most worker loop dispatching call (Advance* or Start) it may need to extend its due time because a nested invocation has asked to advance to a point beyond the original due time. Now all sorts of things are getting weird though. The clock doesn't reflect the changes after returning from Advance* and the top-most call no longer finishes at a predictable time.
For Rx v2.0 RC (coming next month), we took a look at this scenario and decided Advance* is not the right thing to emulate "time slippage" because it'd need an overloaded meaning depending on the context where it's invoked from. Instead, we're introducing a Sleep method that can be used to slip time forward from any context, without the side-effect of running work. Think of it as a way to set the Clock property but with safeguarding against going back in time. The name also reflects the intent clearly.
In addition to the above, to reduce the surprise factor of nested Advance* calls having no effect, we made it detect this situation and throw an InvalidOperationException in a nested context. Sleep, on the other hand, can be called from anywhere.
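For example, once Sleep is available, the callback in the test above could presumably be written like this instead of the nested AdvanceBy (the exact signature, taking ticks on TestScheduler, is an assumption on my part):

Action variableAmountOfTime = () =>
{
    // Slips virtual time forward without dispatching any queued work,
    // unlike a nested AdvanceBy call.
    sch.Sleep(TimeSpan.FromMinutes(3).Ticks);
    timesCalled++;
};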
One final note. It turns out we needed exactly the same feature for work we're doing in Rx v2.0 RC with regards to our treatment of time. Several tests required a deterministic way to emulate slippage of time due to the execution of user code that can take arbitrarily long (think of the OnNext handler to e.g. Observable.Interval).
Hope this helps... Stay tuned for our Rx v2.0 RC release in the next few weeks!
-Bart (Rx team)
In my apps I find the need for infinite while loops, mostly to perform some repeated action continuously until another event takes place. So what I am doing is:
while (chkFlag)
{
    // do something here which takes around 30 seconds
}
Then in some other event, say a button press, to stop the loop I do:
chkFlag = false;
Now this does the work, but the problem is that it does not stop the loop instantaneously, as chkFlag is checked only after the complete execution of the loop body. So can anybody please tell me how I can exit a loop instantaneously based on an event?
The "blocking" code should likely be moved into some kind of worker thread (which can be terminated and/or have the results discarded). If using a BackgroundWorker (recommended, as it makes this simple), there is built-in support to handle a cancel operation.
Then the loop can either be moved inside the BackgroundWorker or the completion (RunWorkerCompleted) event of the worker can trigger the next worker to start (which causes an implicit loop).
Happy coding.
There are more "aggressive" ways of terminating/signaling a thread; but suggesting these would require more information than present.
You can't make it exit instantly (well, you could run the loop in a new thread and Abort it, if it's really safe to have an exception thrown from it at any time), but you could scatter if (!chkFlag) break; at the various points within the loop where it's safe to exit. The usual method of doing this is to use a BackgroundWorker or a CancellationToken rather than a simple boolean flag.
Of course, it will still need to be run in another thread so that the button event can run at all. BackgroundWorker will take care of this automatically.
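With a CancellationToken the same idea looks roughly like this (again, the loop body is a placeholder):

using System.Threading;
using System.Threading.Tasks;

var cts = new CancellationTokenSource();

Task.Run(() =>
{
    while (!cts.Token.IsCancellationRequested)
    {
        // Long-running step; check the token between sub-steps
        // so cancellation is noticed quickly.
    }
}, cts.Token);

// In the button's click handler:
cts.Cancel();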
You are looking for break;.
I suppose, based on the anonymous downvoter, I should elaborate. The syntax above will immediately exit the loop that you are in (it works in the other loops as well; it's probably worth noting that continue exists to restart the loop at the beginning, which will perform increment logic in for-style loops).
How you decide to execute break is up to you, but it must be within the loop itself.
There are multiple approaches to this, such as placing checks for the event within the loop and calling break; if it occurs. Others have noted the other approaches with BackgroundWorkers and Cancel Tokens (this is preferred given it's not within the loop).
Is it possible you want to use a new thread? What are you doing for 30 seconds in the loop? Sounds like maybe there's a better design to use.
Have you considered using a timer, or setting up an event handler?