I have a long running method that must run on UI thread. (Devex - gridView.CopyToClipboard())
I do not need the UI to be responsive while copying and I added a splash screen so the user isn't bored out of his mind.
When I run this program all is well.
Trouble starts when a different program starts my program in a new process.
After a few seconds of copying, the title reads (Not Responding) and the mouse cursor shows busy. It clears up within a few seconds, of course, but I'd like to get rid of it since it gives the user the mistaken impression that the program is faulty.
Is there any way to set the "Time out" of a process I create?
EDIT:
The main program calls the following code:
fillsProcess = new Process();
fillsProcess.StartInfo.FileName = Application.ExecutablePath;
fillsProcess.Start();
In the fillsProcess, when a certain button is clicked the following code is called:
gridViewToCopy.CopyToClipboard();
This line of code takes a while to run, and after a few seconds the window of the fillsProcess looks unresponsive since the method runs on the UI thread.
EDIT The 2nd:
Apparently (and really quite understandably)
gridViewToCopy.CopyToClipboard();
is not the only method causing this problem. Many Devex methods have to run on the UI thread (e.g. sorting data, filtering data).
So thanks to anyone who offered a specific solution (whether it worked or not), but my original question pops right back up:
Is there any way to change the time-out time or somehow control the whole "Not Responding" fiasco?
You can use DisableProcessWindowsGhosting win32 function:
[DllImport("user32.dll")]
public static extern void DisableProcessWindowsGhosting();
This doesn't actually prevent the window from freezing, but it does prevent the "(Not Responding)" text in the title.
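For reference, this is roughly how the call might be wired up; it is per-process and has to run before the long-running work starts. A minimal sketch (MainForm is just a placeholder for your startup form):

using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

static class NativeMethods
{
    // Turns off window ghosting (the "(Not Responding)" overlay) for this process only.
    [DllImport("user32.dll")]
    public static extern void DisableProcessWindowsGhosting();
}

static class Program
{
    [STAThread]
    static void Main()
    {
        // Call once, early, before any window of this process can appear hung.
        NativeMethods.DisableProcessWindowsGhosting();

        Application.EnableVisualStyles();
        Application.Run(new MainForm()); // placeholder startup form
    }
}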
I'm afraid the simplest solution is to write your own CopyToClipboard() and, every now and then in its loop, call Application.DoEvents, which keeps the UI thread responsive.
I guess most DevExpress licenses come with the source code, so you can probably copy and paste most of it.
Since you know the data, you can probably write a much simpler procedure than the generic one DevExpress uses,
like this:
const int feedbackinterval = 1000;

private void btnCopy_Click(object sender, EventArgs e)
{
    StringBuilder txt2CB = new StringBuilder();
    int[] rows = gridView1.GetSelectedRows();
    if (rows == null) return;
    for (int n = 0; n < rows.Length; n++)
    {
        // let the UI breathe every 1000 rows
        if ((n % feedbackinterval) == 0) Application.DoEvents();
        if (!gridView1.IsGroupRow(rows[n]))
        {
            var item = gridView1.GetRow(rows[n]) as vWorkOrder;
            txt2CB.AppendLine(String.Format("{0}\t{1}\t{2}",
                item.GroupCode, item.GroupDesc, item.note_no ?? 0));
        }
    }
    Clipboard.SetText(txt2CB.ToString());
}
This is because you call a long-running method synchronously on your main application thread. While your application is busy, it does not respond to messages from Windows and is marked as (Not Responding) until it finishes.
To handle this, do your copying asynchronously, e.g. using a Task as one of the simplest solutions.
Task task = new Task(() =>
{
    // UI state must be changed on the UI thread, hence the Invoke calls
    gridView.Invoke(new Action(() => gridView.Enabled = false));
    gridView.CopyToClipboard();   // note: clipboard access may itself require an STA/UI thread
    gridView.Invoke(new Action(() => gridView.Enabled = true));
});
task.Start();
Disable your grid so nobody can change values in the GUI while the copy is running.
The rest of your application remains responsive (which may have side effects!).
You could start the process hidden, check whether it is responding, and bring it back into view when complete; your splash screen would show that it is still "responding".
Process proc = new Process();
proc.StartInfo.FileName = "<Your Program>.exe";
proc.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
Edit:
You could also create a Timer event that watches the other process and roll your own timeout logic:
DateTime dStartTime = DateTime.Now;
int timeout = 30; // seconds

private void timer1_Tick(Object myObject, EventArgs myEventArgs)
{
    // GetProcessesByName expects the process name without the .exe extension
    Process[] processList = Process.GetProcessesByName("<YourProcess>");
    if (processList.Length == 0)
    {
        // process completed
        timer1.Stop();
        return;
    }
    // note: TotalSeconds, not Seconds, so the comparison still works past one minute
    if (DateTime.Now.Subtract(dStartTime).TotalSeconds > timeout)
    {
        // process not completed in time, kill it
        foreach (Process p in processList)
        {
            p.Kill();
        }
        timer1.Stop();
    }
}
Edit2
You could also use P/Invoke "ShowWindow" to hide and show the window after it's started:
private const int SW_HIDE = 0x00;
private const int SW_SHOW = 0x05;
[DllImport("user32.dll")]
static extern bool ShowWindow(IntPtr hWnd, int nCmdShow);
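A hedged sketch of how that might be used with the child process from above; it assumes the child has already created its main window when ShowWindow is called (MainWindowHandle is IntPtr.Zero before that):

// SW_HIDE / SW_SHOW and the ShowWindow declaration are the ones shown above.
Process proc = Process.Start("<Your Program>.exe");   // placeholder executable
proc.WaitForInputIdle();                              // wait until it has a message loop and a window
ShowWindow(proc.MainWindowHandle, SW_HIDE);           // hide it while it does the long work

// ... later, when you decide the work is done:
proc.Refresh();                                       // MainWindowHandle is cached; refresh it
ShowWindow(proc.MainWindowHandle, SW_SHOW);           // bring it back into view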
There are several possible approaches
Hide the main form for the period of the operation
Somehow clone/serialize the control and pass it to a thread with another UI dispatcher
Get the selected cells via gridView.GetSelectedCells() and then put their contents to the clipboard asynchronously
It would be more helpful if you could upload the GridView library somewhere, so that we could look inside; in the meantime, a rough sketch of the third option follows.
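The DevExpress member names used here (GetSelectedCells, GetRowCellDisplayText) are assumptions based on the suggestion above, and the output format is deliberately simplified: the cell texts are captured on the UI thread, the string is built on a background task, and the clipboard is set back on the UI thread (clipboard access needs the STA/UI thread):

private void btnCopyAsync_Click(object sender, EventArgs e)
{
    var cells = gridView.GetSelectedCells();                 // assumed DevExpress call
    var texts = new List<string>(cells.Length);
    foreach (var cell in cells)                              // cheap reads, still on the UI thread
        texts.Add(gridView.GetRowCellDisplayText(cell.RowHandle, cell.Column));

    Task.Run(() =>
    {
        string block = string.Join("\t", texts);             // formatting happens off the UI thread
        BeginInvoke(new Action(() => Clipboard.SetText(block)));
    });
}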
I'm unclear whether your user needs to see the screen that is "unresponsive". If not, you might let the application run in the background after the main thread for this app is closed, or you might minimize the app.
If it is necessary to see the application and for it to appear to be working, can you segment your "copy to clipboard" function so that it is threaded and takes in either an array or the gridview plus an index range? The advantage is that the main thread of your subordinate process would never hang. The disadvantage is that people don't like to work with threading and delegates in C#.
Okay, the 'Not Responding' and window artifacting you've described are just symptoms of running a long term activity on your UI thread. The UI thread is blocked, so the UI is frozen. There is no avoiding this. To be honest, it's just 'lucky' that your application appears as responsive as it does.
As far as I can see every workaround that has been described here is just a hack to fudge the fact that your UI thread is frozen. Don't do this. Fix your program so the UI thread isn't frozen.
Ask yourself: do my users really need to copy all the rows from this view? Can the data be filtered in some way to limit the rows? If not, there is a property called MaxRowCopyCount which limits the number of rows that are copied - can this be leveraged without breaking your workflow?
Finally, if all else fails, is there some other medium you can use (perhaps an intermediate file), to which data can be copied on a background thread?
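For the intermediate-file idea, a minimal sketch; GetRowsAsTabSeparatedText is a hypothetical helper that reads the grid on the UI thread, and only the file write runs in the background:

string[] lines = GetRowsAsTabSeparatedText(gridView);    // hypothetical helper, runs on the UI thread
string exportPath = Path.Combine(Path.GetTempPath(), "grid-export.txt");

Task.Run(() => File.WriteAllLines(exportPath, lines))
    .ContinueWith(t => MessageBox.Show("Export finished: " + exportPath),
                  TaskScheduler.FromCurrentSynchronizationContext());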
The timeout, documented in IsHungAppWindow, cannot be changed. Don't use global state to manage a local problem.
You have to optimize the part that causes the unresponsiveness. For example, use caching, a virtual grid (DevExpress calls it "server mode"), paging, delegate the sorting to an IBindingListView filter that runs a database query (which uses database indexes) instead of in-memory sorting (no indexing), or implement IAsyncOperation on your clipboard data so you only need to populate the data when the user actually pastes.
Related
using System.Windows.Forms;

public class App
{
    [STAThread]
    public static void Main()
    {
        string fname;
        using (var d = new OpenFileDialog())
        {
            if (d.ShowDialog() != DialogResult.OK)
            {
                return;
            }
            fname = d.FileName;
        }
        //Application.ExitThread();
        for (; ;)
            ;
    }
}
The above code shows me a file dialog. Once I select a file and press open, the for loop is executed, but the (frozen) dialog remains.
Once I uncomment Application.ExitThread() the dialog disappears as expected.
Does that work as intended? Why doesn't the using block make the window disappear? Where can I find more info about this?
You have discovered the primary problem with single-threaded applications... long running operations freeze the user interface.
Your DoEvents() call essentially "pauses" your code and gives other operations, like the UI, a chance to run, then resumes. The problem is that your UI is now frozen again until you call DoEvents() again. Actually, DoEvents() is a very problematic approach (some call it evil). You really should not use it.
You have better options.
Putting your long running operation in another thread helps to ensure that the UI remains responsive and that your work is done as efficiently as possible. The processor is able to switch back and forth between the two threads to give the illusion of simultaneous execution without the difficulty of full-blown multi-processes.
One of the easier ways to accomplish this is to use a BackgroundWorker, though they have generally fallen out of favor (for reasons I'm not going to get into in this post: further reading). They are still part of .NET, however, and have a lower learning curve than other approaches, so I'd still suggest that new developers play around with them in hobby projects.
The best approach currently is .NET's Tasks library. If your long running operation is already in a thread (for example, it's a database query and you are just waiting for it to complete), and if the library supports it, then you could take advantage of Tasks using the async keyword and not have to think twice about it. Even if it's not already in a thread or in a supported library, you could still spin up a new Task and have it executed in a separate Thread via Task.Run(). .NET Tasks have the advantage of baked in language support and a lot more, like coordinating multiple Tasks and chaining Tasks together.
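As a rough illustration of the Task-based approach (the method and control names are placeholders; only the expensive work runs on the thread pool, all UI updates stay on the UI thread):

private async void btnRun_Click(object sender, EventArgs e)
{
    btnRun.Enabled = false;                                   // still on the UI thread here
    string result = await Task.Run(() => DoExpensiveWork());  // DoExpensiveWork is a placeholder
    lblStatus.Text = result;                                  // back on the UI thread after the await
    btnRun.Enabled = true;
}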
JDB already explained in his answer why (generally speaking) your code doesn't work as expected. Let me add a small bit to suggest a workaround for your specific case, and for when you just need to use a system dialog and then go on as if it were a console application.
You're trying to use Application.DoEvents(). OK, it seems to work, and in your case you do not have re-entrant code. However, are you sure that all relevant messages are correctly processed? How many times should you call Application.DoEvents()? Are you sure you correctly initialize everything (I'm talking about the ApplicationContext)? The second problem is more pragmatic: OpenFileDialog needs COM, COM (here) needs STAThread, and STAThread needs a message pump. I can't tell you in which way it will fail, but it surely may fail.
First of all, note that applications usually start the main message loop using Application.Run(). You don't expect to see new MyWindow().ShowDialog(), right? Your example is no different: let the Application.Run(Form) overload create the ApplicationContext for you (and handle the HandleDestroyed event when the form closes, which will finally call, surprise, Application.ExitThread()). Unfortunately OpenFileDialog does not inherit from Form, so you have to host it inside a dummy form to use Application.Run().
You do not need to explicitly call dlg.Dispose() (let WinForms manage object lifetimes) if you add the dialog to the form with the designer.
using System;
using System.Windows.Forms;

public class App
{
    [STAThread]
    public static void Main()
    {
        string fname = AskForFile();
        if (fname == null)
            return;
        LongRunningProcess(fname);
    }

    private static string AskForFile()
    {
        string fileName = null;
        var form = new Form() { Visible = false };
        form.Load += (o, e) =>
        {
            using (var dlg = new OpenFileDialog())
            {
                if (dlg.ShowDialog() == DialogResult.OK)
                    fileName = dlg.FileName;
            }
            ((Form)o).Close();
        };
        Application.Run(form);
        return fileName;
    }
}
No, you don't have to call Application.ExitThread().
Application.ExitThread() terminates the calling thread's message loop and forces the destruction of the frozen dialog. Although "that works", it's better to unfreeze the dialog if the cause of the freeze is known.
In this case, pressing Open seems to fire a close event which doesn't get any chance to finish. Application.DoEvents() gives it that chance and makes the dialog disappear.
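Based on that explanation, if you keep the original structure, pumping the pending messages once after the dialog returns should also make it disappear before the busy loop starts; a sketch:

string fname;
using (var d = new OpenFileDialog())
{
    if (d.ShowDialog() != DialogResult.OK)
        return;
    fname = d.FileName;
}
Application.DoEvents();   // let the dialog's close/destroy messages be processed
for (; ;)                 // the long-running work from the question
    ;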
I wrote an API that automates a certain website. However, during testing I noticed that (though I'm not very sure) my thread is not being terminated correctly.
I am using the WebBrowser object to navigate inside a thread, so that it works synchronously with my program:
private void NavigateThroughTread(string url)
{
    Console.WriteLine("Defining thread...");
    var th = new Thread(() =>
    {
        _wb = new WebBrowser();
        _wb.DocumentCompleted += PageLoaded;
        _wb.Visible = true;
        _wb.Navigate(url);
        Console.WriteLine("Web browser navigated.");
        Application.Run();   // message loop for the WebBrowser events
    });
    Console.WriteLine("Thread defined.");
    th.SetApartmentState(ApartmentState.STA);
    Console.WriteLine("Before thread start...");
    th.Start();
    Console.WriteLine("Thread started.");
    while (th.IsAlive) { }   // busy-wait until the browser thread exits
    Console.WriteLine("Journey ends.");
}
private void PageLoaded(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    Console.WriteLine("Pages loads...");
    .
    .
    .
    switch (_action)
    {
        .
        .
        .
        case ENUM.FarmActions.Idle:
            _wb.Navigate(new Uri("about:blank"));
            _action = ENUM.FarmActions.Exit;
            return;
        case ENUM.FarmActions.Exit:
            Console.WriteLine("Disposing wb...");
            _wb.DocumentCompleted -= PageLoaded;
            _wb.Dispose();
            break;
    }
    Application.ExitThread(); // Stops the thread
}
Here is how I call this function:
public int Attack(int x, int y, ArmyBuilder army)
{
    // instruct to attack the village
    _action = ENUM.FarmActions.Attack;

    // get the army and coordinates
    _army = army;
    _enemyCoordinates[X] = x;
    _enemyCoordinates[Y] = y;

    // place the attack command
    _errorFlag = true;   // the action is not completed; the flag will be set to false once it completes
    _attackFlag = false; // attack is not made yet
    Console.WriteLine("Journey starts");

    NavigateThroughTread(_url.GetUrl(ENUM.Screens.RallyPoint));

    return _errorFlag ? -1 : CalculateDistance();
}
So the problem is, when I call the Attack function a couple of times like this:
_command.Attack(509, 355, new ArmyBuilder(testArmy_lc));
_command.Attack(509, 354, new ArmyBuilder(testArmy_lc));
_command.Attack(505, 356, new ArmyBuilder(testArmy_lc));
_command.Attack(504, 356, new ArmyBuilder(testArmy_lc));
_command.Attack(504, 359, new ArmyBuilder(testArmy_lc));
_command.Attack(505, 356, new ArmyBuilder(testArmy_lc));
_command.Attack(504, 356, new ArmyBuilder(testArmy_lc));
_command.Attack(504, 359, new ArmyBuilder(testArmy_lc));
My application, most of the time, gets stuck in one of these functions (it usually happens after the 4th or 5th call). When it gets stuck, the last log that I see is
Web browser navigated.
I assume it has something to do with the termination of my thread. Can someone show me how I can run a thread which runs the DocumentCompleted event?
I don't see any obvious reason for deadlock, nor did it reproduce at all when testing the code. There are a number of flaws in the code but nothing that yells "here!" loudly. I can only make recommendations:
Consider that you do not need a thread at all. The while (th.IsAlive) { } hot loop blocks your main thread while you wait for the browser code to finish the job. That is not a useful way to use a thread, you might as well use your main thread. This instantly eliminates a large number of potential hang causes.
The state logic in PageLoaded is risky. We cannot see all of it, but one glaring issue is that you dispose the WebBrowser twice. If you have a case where you use return without a Navigate() call, then you'll hang as described. There's no need to unsubscribe the event, but same story: if you do unsubscribe but don't call Application.ExitThread(), then you'll hang as described. State machines can be hard to debug; thorough logging is necessary. Minimize the risk by moving the Dispose() call and the event unsubscription out of the logic; it doesn't belong there. And you need to test what happens when any Navigate() call ends up in failure, redirecting to a page you did not expect.
The _wb.Dispose() call is risky. Note that you destroy the WebBrowser while its DocumentCompleted event is in flight. Technically that can return code execution to code that is no longer alive or present. That can trip a race condition in the browser. As well as in the debugger, there is a dedicated MDA that checks for this problem. It is trivially avoided by moving the Dispose() call after the Application.Run() call where it belongs.
The while-loop burns 100% core, potentially starving the worker thread. Not a good enough reason to explain deadlock, but certainly unnecessary. Use Thread.Join() instead.
You create a lot of WebBrowser objects in this code. It is a very heavy object, as you can imagine, you need to keep an eye on memory usage in your program. Especially the unmanaged kind. If the browser leaks, like they so often do, you could technically create a scenario where the WB initializes okay but does not have enough memory left to load the page. Strongly favor using only one WB.
You need to consider that this might well be an environmental problem. On the top of that list is forever anti-malware and firewall, they always have a very good reason to treat a browser specially since that is the most common malware injection vector. You'll need to run your test with anti-malware and firewall disabled to ensure that it is not the cause of the hang.
Another environmental problem is one I noticed while testing this code, Google got sulky about me hitting it so often and started to throttle the requests, greatly slowing down the code. Talk to the web site owner and ask if he's got similar blocking or throttling counter-measures in place, most do. You need to test your state logic to verify that it still works properly when the browser redirects to an error page.
Yet another environmental issue is that the WB will display a dialog itself in certain cases. This can deadlock in 3rd-party code and is very hard to diagnose. You should at least set WebBrowser.ScriptErrorsSuppressed to true, but beware of Javascript code in the web page you load that itself creates new windows or displays alert dialogs. Using one WB is the workaround.
Keep in mind that your program can only be as reliable as your Internet connection and the web page server. That's not a terribly good place to be of course, both are quite out of your reach and you don't get nice exceptions to help you diagnose such a failure. And consider that you probably have not yet tested your program well enough yet to check if it can survive such a failure, it doesn't happen enough.
Quite a laundry list, focus first on eliminating the unnecessary thread and temporarily suppressing anti-malware. That's quick, focus next on using only one WebBrowser.
Hans, thank you, I was able to fix this issue with one of your ideas. As you spent your time giving me a long answer, I wanted to respond in the same manner.
2 - I built the state machine structure carefully and with a lot of logs (you can see it from my git account) and also did a lot of debugging. I am sure that after I'm done navigating, I use Application.ExitThread() and wb.Dispose() only once.
3 - I tried placing the wb.Dispose() outside the event, but I couldn't find any other place where the thread is still alive. If I try disposing the WebBrowser outside the thread it was created in, the application gives me an error.
4 - I changed the while (th.IsAlive) { } code to th.Join(2000). This is absolutely a better idea, but it did not change anything. It tidied up the code and, as you mentioned, stopped burning 100% of a CPU core.
5 - I tried using a single WebBrowser object instantiated in the constructor. However, when I tried to navigate inside the thread, the application wouldn't even fire the events anymore. For some reason, I couldn't make it run with a single WB object.
6, 7 - I tested my application on different PCs and different networks (with and without firewall protection). I changed the Windows firewall options as well, but to no avail. In my original code I do have _wb.ScriptErrorsSuppressed = true; so this shouldn't be the issue either.
8, 9 - If these are the reasons, I can't do anything about it. But I doubt the real problem is caused by them.
1 - This one was a good suggestion. I tried implementing my code without using a thread and it is now working fine. Here is what it looks like (it still needs a lot of optimization):
// Constructor
public FarmActions(string token)
{
    // set the urls using the token
    _url = new URL(token);

    // define web browser properties
    _wb = new WebBrowser();
    _wb.DocumentCompleted += PageLoaded;
    _wb.Visible = true;
    _wb.AllowNavigation = true;
    _wb.ScriptErrorsSuppressed = true;
}

public int Attack(int x, int y, ArmyBuilder army)
{
    // instruct to attack the village
    _action = ENUM.FarmActions.Attack;

    // get the army and coordinates
    _army = army;
    _enemyCoordinates[X] = x;
    _enemyCoordinates[Y] = y;

    // place the attack command
    _errorFlag = true;   // the action is not completed; the flag will be set to false once it completes
    _attackFlag = false; // attack is not made yet
    _isAlive = true;

    Console.WriteLine("-------------------------");
    Console.WriteLine("Journey starts");

    NavigateThroughTread(_url.GetUrl(ENUM.Screens.RallyPoint));

    return _errorFlag ? -1 : CalculateDistance();
}

private void NavigateThroughTread(string url)
{
    Console.WriteLine("Defining thread...");
    _wb.Navigate(url);
    while (_isAlive) Application.DoEvents();   // pump messages until PageLoaded signals completion
}
private void PageLoaded(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    Console.WriteLine("Pages loads...");
    .
    .
    .
    switch (_action)
    {
        .
        .
        .
        case ENUM.FarmActions.Idle:
            _wb.Navigate(new Uri("about:blank"));
            _action = ENUM.FarmActions.Exit;
            return;
        case ENUM.FarmActions.Exit:
            break;
    }
    _isAlive = false;
}
This is how I was able to wait without using a thread.
The main problem was probably, as you mentioned, number 3 or 5, but I wasn't able to pin it down even after spending a couple of hours on it.
Anyway, thanks for your help; it works.
I have a relatively large system (~25000 lines so far) for monitoring radio-related devices. It shows graphs and such using latest version of ZedGraph.
The program is coded using C# on VS2010 with Win7.
The problem is:
when I run the program from within VS, it runs slow
when I run the program from the built EXE, it runs slow
when I run the program though Performance Wizard / CPU Profiler, it runs Blazing Fast.
when I run the program from the built EXE, and then start VS and Attach a profiler to ANY OTHER PROCESS, my program speeds up!
I want the program to always run that fast!
Every project in the solution is set to Release, "Debug unmanaged code" is disabled, "Define DEBUG and TRACE constants" is disabled, and for "Optimize Code", "Warning Level" and "Suppress JIT" I tried both settings.
In short, I tried all the solutions already proposed on Stack Overflow; none worked. The program is slow outside the profiler and fast inside it.
I don't think the problem is in my code, because it also becomes fast if I attach the profiler to another, unrelated process!
Please help!
I really need it to be that fast everywhere, because it's a business critical application and performance issues are not tolerated...
UPDATES 1 - 8 follow
--------------------Update1:--------------------
The problem does not seem to be ZedGraph-related, because it still manifests after I replaced ZedGraph with my own basic drawing.
--------------------Update2:--------------------
Running the program in a Virtual machine, the program still runs slow, and running profiler from the Host machine doesn't make it fast.
--------------------Update3:--------------------
Starting screen capture to video also speeds the program up!
--------------------Update4:--------------------
If I open the Intel graphics driver settings window (this thing: http://www.intel.com/support/graphics/sb/img/resolution_new.jpg)
and just constantly hover with the cursor over buttons, so they glow, etc, my program speeds up!.
It doesn't speed up if I run GPU-Z or Kombustor though, so there is no downclocking on the GPU - it stays at a steady 850 MHz.
--------------------Update5:--------------------
Tests on different machines:
-On my Core i5-2400S with Intel HD2000, UI runs slow and CPU usage is ~15%.
-On a colleague's Core 2 Duo with Intel G41 Express, UI runs fast, but CPU usage is ~90% (which isn't normal either)
-On Core i5-2400S with dedicated Radeon X1650, UI runs blazing fast, CPU usage is ~50%.
--------------------Update6:--------------------
A snip of code showing how I update a single graph (graphFFT is an encapsulation of ZedGraphControl for ease of use):
public void LoopDataRefresh() // executes in a new thread
{
    while (true)
    {
        while (!d.Connected)
            Thread.Sleep(1000);
        if (IsDisposed)
            return;

        //... other graphs update here

        if (signalNewFFT && PanelFFT.Visible)
        {
            signalNewFFT = false;
            #region FFT
            bool newRange = false;
            if (graphFFT.MaxY != d.fftRangeYMax)
            {
                graphFFT.MaxY = d.fftRangeYMax;
                newRange = true;
            }
            if (graphFFT.MinY != d.fftRangeYMin)
            {
                graphFFT.MinY = d.fftRangeYMin;
                newRange = true;
            }

            List<PointF> points = new List<PointF>(2048);
            int tempLength = 0;
            short[] tempData = new short[2048];
            int i = 0;
            lock (d.fftDataLock)
            {
                tempLength = d.fftLength;
                tempData = (short[])d.fftData.Clone();
            }
            foreach (short s in tempData)
                points.Add(new PointF(i++, s));

            graphFFT.SetLine("FFT", points);
            if (newRange)
                graphFFT.RefreshGraphComplete();
            else if (PanelFFT.Visible)
                graphFFT.RefreshGraph();
            #endregion
        }

        //... other graphs update here

        Thread.Sleep(5);
    }
}
SetLine is:
public void SetLine(String lineTitle, List<PointF> values)
{
    IPointListEdit ip = zgcGraph.GraphPane.CurveList[lineTitle].Points as IPointListEdit;
    int tmp = Math.Min(ip.Count, values.Count);
    int i = 0;
    while (i < tmp)
    {
        if (values[i].X > peakX)
            peakX = values[i].X;
        if (values[i].Y > peakY)
            peakY = values[i].Y;
        ip[i].X = values[i].X;
        ip[i].Y = values[i].Y;
        i++;
    }
    while (ip.Count < values.Count)
    {
        if (values[i].X > peakX)
            peakX = values[i].X;
        if (values[i].Y > peakY)
            peakY = values[i].Y;
        ip.Add(values[i].X, values[i].Y);
        i++;
    }
    while (values.Count > ip.Count)
    {
        ip.RemoveAt(ip.Count - 1);
    }
}
RefreshGraph is:
public void RefreshGraph()
{
    if (!explicidX && autoScrollFlag)
    {
        zgcGraph.GraphPane.XAxis.Scale.Max = Math.Max(peakX + grace.X, rangeX);
        zgcGraph.GraphPane.XAxis.Scale.Min = zgcGraph.GraphPane.XAxis.Scale.Max - rangeX;
    }
    if (!explicidY)
    {
        zgcGraph.GraphPane.YAxis.Scale.Max = Math.Max(peakY + grace.Y, maxY);
        zgcGraph.GraphPane.YAxis.Scale.Min = minY;
    }
    zgcGraph.Refresh();
}
--------------------Update7:--------------------
Just ran it through the ANTS profiler. It tells me that the ZedGraph refresh counts when the program is fast are precisely two times higher compared to when it's slow.
Here are the screenshots:
I find it VERY strange that, considering the small difference in the length of the sections, the performance differs by exactly a factor of two.
Also, I updated the GPU driver, that didn't help.
--------------------Update8:--------------------
Unfortunately, for a few days now, I have been unable to reproduce the issue... I'm getting constant, acceptable speed (which still appears a bit slower than what I had in the profiler two weeks ago) that isn't affected by any of the factors that used to affect it two weeks ago: the profiler, video capturing, or the GPU driver window. I still have no explanation of what was causing it...
Luaan posted the solution in the comments above: it's the system-wide timer resolution. The default resolution is 15.6 ms; the profiler sets it to 1 ms.
I had the exact same problem, very slow execution that would speed up when the profiler was opened. The problem went away on my PC but popped back up on other PCs seemingly at random. We also noticed the problem disappeared when running a Join Me window in Chrome.
My application transmits a file over a CAN bus. The app loads a CAN message with eight bytes of data, transmits it and waits for an acknowledgment. With the timer set to 15.6ms each round trip took exactly 15.6ms and the entire file transfer would take about 14 minutes. With the timer set to 1ms round trip time varied but would be as low as 4ms and the entire transfer time would drop to less than two minutes.
You can verify your system timer resolution as well as find out which program increased the resolution by opening a command prompt as administrator and entering:
powercfg -energy duration 5
The output file will have the following in it somewhere:
Platform Timer Resolution:Platform Timer Resolution
The default platform timer resolution is 15.6ms (15625000ns) and should be used whenever the system is idle. If the timer resolution is increased, processor power management technologies may not be effective. The timer resolution may be increased due to multimedia playback or graphical animations.
Current Timer Resolution (100ns units) 10000
Maximum Timer Period (100ns units) 156001
My current resolution is 1 ms (10,000 units of 100nS) and is followed by a list of the programs that requested the increased resolution.
This information as well as more detail can be found here: https://randomascii.wordpress.com/2013/07/08/windows-timer-resolution-megawatts-wasted/
Here is some code to increase the timer resolution (originally posted as the answer to this question: how to set timer resolution from C# to 1 ms?):
public static class WinApi
{
    /// <summary>TimeBeginPeriod(). See the Windows API documentation for details.</summary>
    [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Interoperability", "CA1401:PInvokesShouldNotBeVisible"),
     System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Security", "CA2118:ReviewSuppressUnmanagedCodeSecurityUsage"),
     SuppressUnmanagedCodeSecurity]
    [DllImport("winmm.dll", EntryPoint = "timeBeginPeriod", SetLastError = true)]
    public static extern uint TimeBeginPeriod(uint uMilliseconds);

    /// <summary>TimeEndPeriod(). See the Windows API documentation for details.</summary>
    [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Interoperability", "CA1401:PInvokesShouldNotBeVisible"),
     System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Security", "CA2118:ReviewSuppressUnmanagedCodeSecurityUsage"),
     SuppressUnmanagedCodeSecurity]
    [DllImport("winmm.dll", EntryPoint = "timeEndPeriod", SetLastError = true)]
    public static extern uint TimeEndPeriod(uint uMilliseconds);
}
Use it like this to increase the resolution: WinApi.TimeBeginPeriod(1);
And like this to return to the default: WinApi.TimeEndPeriod(1);
The parameter passed to TimeEndPeriod() must match the parameter that was passed to TimeBeginPeriod().
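If you only need the higher resolution around one operation, a small IDisposable wrapper keeps the Begin/End calls paired automatically; a sketch using the WinApi class above (TransferFile is a placeholder):

public sealed class TimerResolutionScope : IDisposable
{
    private readonly uint _ms;

    public TimerResolutionScope(uint milliseconds)
    {
        _ms = milliseconds;
        WinApi.TimeBeginPeriod(_ms);   // raise the system timer resolution
    }

    public void Dispose()
    {
        WinApi.TimeEndPeriod(_ms);     // restore it with the same value, as required
    }
}

// usage:
// using (new TimerResolutionScope(1))
// {
//     TransferFile();                // the latency-sensitive work
// }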
There are situations when slowing down a thread can speed up other threads significantly, usually when one thread is polling or locking some common resource frequently.
For instance (this is a Windows Forms example), when the main thread is checking overall progress in a tight loop instead of using a timer:
private void SomeWork() {
    // start the worker thread here
    while (!PollDone()) {
        progressBar1.Value = PollProgress();
        Application.DoEvents(); // keep the GUI responsive
    }
}
Slowing it down could improve performance:
private void SomeWork() {
    // start the worker thread here
    while (!PollDone()) {
        progressBar1.Value = PollProgress();
        System.Threading.Thread.Sleep(300); // give the polled thread some time to work instead of responding to your poll
        Application.DoEvents(); // keep the GUI responsive
    }
}
To do it correctly, one should avoid using the DoEvents call altogether:
private Timer tim = new Timer() { Interval = 300 };

private void SomeWork() {
    // start the worker thread here
    tim.Tick += tim_Tick;
    tim.Start();
}

private void tim_Tick(object sender, EventArgs e) {
    tim.Enabled = false; // prevent timer messages from piling up
    if (PollDone()) {
        tim.Tick -= tim_Tick;
        return;
    }
    progressBar1.Value = PollProgress();
    tim.Enabled = true;
}
Calling Application.DoEvents() can cause a lot of headaches when GUI elements have not been disabled and the user kicks off other events, or the same event a second time, simultaneously, causing re-entrant calls that by nature queue the first action behind the new one; but I'm going off topic.
Probably that example is too WinForms-specific, so I'll try a more general one. If you have a thread that fills a buffer that is processed by other threads, be sure to leave some System.Threading.Thread.Sleep() slack in the loop to allow the other threads to do some processing before checking whether the buffer needs to be filled again:
public class WorkItem {
    // populate with something useful
}

public static object WorkItemsSyncRoot = new object();
public static Queue<WorkItem> workitems = new Queue<WorkItem>();

public void FillBuffer() {
    while (!done) {
        lock (WorkItemsSyncRoot) {
            if (workitems.Count < 30) {
                workitems.Enqueue(new WorkItem(/* load a file or something */));
            }
        }
    }
}
The worker threads will have difficulty obtaining anything from the queue since it is constantly being locked by the filling thread. Adding a Sleep() (outside the lock) can significantly speed up the other threads:
public void FillBuffer() {
    while (!done) {
        lock (WorkItemsSyncRoot) {
            if (workitems.Count < 30) {
                workitems.Enqueue(new WorkItem(/* load a file or something */));
            }
        }
        System.Threading.Thread.Sleep(50);
    }
}
Hooking up a profiler could in some cases have the same effect as the sleep function.
I'm not sure if I've given representative examples (it's quite hard to come up with something simple), but I guess the point is clear: putting Sleep() in the correct place can help improve the flow of the other threads.
---------- Edit after Update7 -------------
I'd remove that LoopDataRefresh() thread altogether. Rather, put a timer in your window with an interval of at least 20 ms (which would be 50 frames a second if none were skipped):
private void tim_Tick(object sender, EventArgs e) {
    tim.Enabled = false; // skip frames that come while we're still drawing
    if (IsDisposed) {
        tim.Tick -= tim_Tick;
        return;
    }
    // Your code follows, I've tried to optimize it here and there, but no guarantee that it compiles or works, not tested at all
    if (signalNewFFT && PanelFFT.Visible) {
        signalNewFFT = false;
        #region FFT
        bool newRange = false;
        if (graphFFT.MaxY != d.fftRangeYMax) {
            graphFFT.MaxY = d.fftRangeYMax;
            newRange = true;
        }
        if (graphFFT.MinY != d.fftRangeYMin) {
            graphFFT.MinY = d.fftRangeYMin;
            newRange = true;
        }

        int tempLength = 0;
        short[] tempData;
        int i = 0;
        lock (d.fftDataLock) {
            tempLength = d.fftLength;
            tempData = (short[])d.fftData.Clone();
        }

        graphFFT.SetLine("FFT", tempData);
        if (newRange) graphFFT.RefreshGraphComplete();
        else if (PanelFFT.Visible) graphFFT.RefreshGraph();
        #endregion
        // End of your code
    }
    tim.Enabled = true; // Drawing is done (or skipped this tick), allow new frames to come in.
}
Here's the optimized SetLine() which no longer takes a list of points but the raw data:
public class GraphFFT {
    public void SetLine(String lineTitle, short[] values) {
        IPointListEdit ip = zgcGraph.GraphPane.CurveList[lineTitle].Points as IPointListEdit;
        int tmp = Math.Min(ip.Count, values.Length);
        int i = 0;
        peakX = values.Length;
        while (i < tmp) {
            if (values[i] > peakY) peakY = values[i];
            ip[i].X = i;
            ip[i].Y = values[i];
            i++;
        }
        while (ip.Count < values.Length) {   // Length, not Count: values is a short[]
            if (values[i] > peakY) peakY = values[i];
            ip.Add(i, values[i]);
            i++;
        }
        while (values.Length > ip.Count) {
            ip.RemoveAt(ip.Count - 1);
        }
    }
}
I hope you get that working; as I commented before, I haven't had the chance to compile or check it, so there could be some bugs. There's more to be optimized, but those optimizations should be marginal compared to the boost from skipping frames and only collecting data when there is time to actually draw the frame before the next one comes in.
If you closely study the graphs in the video at iZotope, you'll notice that they too skip frames and are sometimes a bit jumpy. That's not bad at all; it's a trade-off you make between the processing power of the foreground thread and the background workers.
If you really want the drawing to be done in a separate thread, you'll have to draw the graph to a bitmap (calling Draw() and passing the bitmap's device context). Then pass the bitmap to the main thread and have it update. That way you lose the convenience of the designer and property grid in your IDE, but you can make use of otherwise vacant processor cores.
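A rough sketch of that off-thread drawing idea; GraphPane.Rect and Draw(Graphics) are assumed ZedGraph members here, and the pane must not be touched by the UI thread while the worker renders it:

// worker thread: render the pane into a Bitmap
Bitmap RenderGraph(GraphPane pane)
{
    var bmp = new Bitmap((int)pane.Rect.Width, (int)pane.Rect.Height);
    using (var g = Graphics.FromImage(bmp))
    {
        pane.Draw(g);                            // draw into the bitmap's device context
    }
    return bmp;
}

// then hand the finished frame to the UI thread, e.g. into a PictureBox:
// var bmp = RenderGraph(fftPane);
// pictureBox1.BeginInvoke(new Action(() =>
// {
//     if (pictureBox1.Image != null) pictureBox1.Image.Dispose();  // release the previous frame
//     pictureBox1.Image = bmp;
// }));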
---------- edit answer to remarks --------
Yes, there is a way to tell what calls what. Look at your first screenshot: you have selected the "call tree" graph. Each next line is indented a bit (it's a tree view, not just a list!). In a call graph, each tree node represents a method that has been called by its parent tree node (method).
In the first image, WndProc was called about 1800 times; it handled 872 messages, of which 62 triggered ZedGraphControl.OnPaint() (which in turn accounts for 53% of the main thread's total time).
The reason you don't see another root node is that the 3rd dropdown box has "[604] Main Thread" selected, which I didn't notice before.
As for the more fluent graphs, I have second thoughts on that now after looking more closely at the screenshots. The main thread has clearly received more (double the) update messages, and the CPU still has some headroom.
It looks like the threads are out of sync and in sync at different times, where the update messages arrive just too late (when WndProc was done and went to sleep for a while), and then suddenly in time for a while. I'm not very familiar with ANTS, but does it have a side-by-side thread timeline including sleep time? You should be able to see what's going on in such a view. Microsoft's threads view tool would come in handy for this.
I have never heard of or seen something similar; I'd recommend the common-sense approach of commenting out sections of code (or injecting returns at the tops of functions) until you find the logic that's producing the side effect. You know your code and likely have an educated guess where to start chopping. Otherwise chop almost everything as a sanity test and start adding blocks back. I'm often amazed how fast one can find those seemingly impossible bugs this way. Once you find the related code, you will have more clues to solve your issue.
There is an array of potential causes. Without claiming completeness, here is how you could approach your search for the actual cause:
Environment variables: the timer issue in another answer is only one example. There might be modifications to the Path and to other variables, and new variables could be set by the profiler. Write the current environment variables to a file (see the sketch after this list) and compare both configurations. Try to find suspicious entries and unset them one by one (or in combinations) until you get the same behavior in both cases.
Processor frequency. This can easily happen on laptops. Potentially, the energy-saving system sets the frequency of the processor(s) to a lower value to save energy. Some apps may 'wake' the system up, increasing the frequency. Check this via Performance Monitor (perfmon).
If the app runs slower than it could, there must be some inefficient resource utilization. Use the profiler to investigate this! You can attach the profiler to the (slow) running process to see which resources are under- or over-utilized. Mostly, there are two major categories of causes for too-slow execution: memory-bound and compute-bound execution. Both can give more insight into what is triggering the slowdown.
If, however, your app actually changes its efficiency when a profiler is attached, you can still use your favorite monitoring app to see which performance indicators actually change. Again, perfmon is your friend.
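For the first point, a minimal sketch of dumping the environment to a file so the two configurations can be diffed:

using System;
using System.Collections;
using System.IO;
using System.Linq;

class EnvDump
{
    static void Main(string[] args)
    {
        // sort the variables so two dumps can be compared line by line
        var lines = Environment.GetEnvironmentVariables()
            .Cast<DictionaryEntry>()
            .OrderBy(entry => (string)entry.Key)
            .Select(entry => entry.Key + "=" + entry.Value);
        File.WriteAllLines(args.Length > 0 ? args[0] : "env.txt", lines);
    }
}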
If you have a method which throws a lot of exceptions, it can run slowly in debug mode and fast in CPU Profiling mode.
As detailed here, debug performance can be improved by using the DebuggerNonUserCode attribute. For example:
[DebuggerNonUserCode]
public static bool IsArchive(string filename)
{
    bool result = false;
    try
    {
        // this calls an external library, which throws an exception if the file is not an archive
        result = ExternalLibrary.IsArchive(filename);
    }
    catch
    {
    }
    return result;
}
I'm currently having a problem which seems to be related to closing a Form while a scale, connected through a serial connection, keeps sending data (about 3 packets per second).
I handle new data in the DataReceived event (the handling itself is probably uninteresting for this issue, since I'm just matching data; keep an eye on the COM_InUse variable and the allowFireDataReceived check):
private void COMScale_DataReceived(object sender, EventArgs e)
{
    if (allowFireDataReceived)
    {
        // set atomic state
        COM_InUse = true;

        // new scale:
        if (Properties.Settings.Default.ScaleId == 1)
        {
            strLine = COMScale.ReadTo(((char)0x2).ToString());

            // new scale:
            Regex reg = new Regex(Constants.regexScale2);
            Match m = reg.Match(strLine);
            if (m.Success)
            {
                strGewicht = m.Groups[1].Value + m.Groups[2];

                double dblComWeight;
                double.TryParse(strGewicht, out dblComWeight);
                dblScaleActiveWeight = dblComWeight / 10000;

                // add comma separator and remove zeros
                strGewicht = strGewicht.Substring(0, 1) + strGewicht.Substring(1, 2).TrimStart('0') + strGewicht.Substring(3);
                strGewicht = strGewicht.Insert(strGewicht.Length - 4, ",");

                // write to textbox
                ThreadSafeSetActiveScaleText(strGewicht);

                COMScale.DiscardInBuffer();
                //MessageBox.Show(dblScaleActiveWeight.ToString(), "dblScaleActiveWeight");
            }
        }

        // free atomic state
        COM_InUse = false;
    }
}
The COM_InUse variable is a global bool that indicates whether data is currently being handled.
allowFireDataReceived is also a global bool; if set to false, no further handling happens for data that has been sent.
My problem now is the following:
It seems that event handling happens on a separate thread, which leads to a deadlock when clicking the Cancel button, since COM_InUse never turns to false, even though the event was handled (see the end of COMScale_DataReceived, where COM_InUse is set to false).
While setting allowFireDataReceived = false works perfectly (no more handling), as I said, the while loop will not terminate.
private void bScaleCancel_Click(object sender, EventArgs e)
{
    allowFireDataReceived = false;
    while (COM_InUse)
    {
        ;
    }
    if (!COM_InUse)
    {
        ret = 1;
        SaveClose();
    }
}
When I comment out the while block, I have to click the button twice, but it works without a crash. Since this is very user-unfriendly, I'm searching for an alternative way to safely close the window.
Info:
Simply closing (without checking whether the COM data was processed) led to a fatal crash.
So, maybe someone can explain to me what exactly causes this problem or can provide a solution. (Maybe one would be to trigger the Cancel click event again, but that is very ugly.)
Greetings!
I count on you :)
//edit:
Here is the current code of ThreadSafeSetActiveScaleText:
private void ThreadSafeSetActiveScaleText(string text)
{
    // InvokeRequired compares the thread ID of the
    // calling thread to the thread ID of the creating thread.
    // If these threads are different, it returns true.
    if (this.lScaleActive.InvokeRequired)
    {
        SafeActiveScaleTextCallback d = new SafeActiveScaleTextCallback(ThreadSafeSetActiveScaleText);
        this.Invoke(d, new object[] { text });
    }
    else
    {
        this.lScaleActive.Text = text;
    }
}
It is called from the DataReceived handler like this: ThreadSafeSetActiveScaleText(strGewicht);
Yes, the DataReceived event runs on a threadpool thread. You already knew that, you wouldn't have called it "ThreadSafe" otherwise. What we can't see is what is inside this method. But given the outcome, it is highly likely that you are using Control.Invoke().
Which is going to cause deadlock when you loop on COM_InUse in code that runs on the UI thread. The Control.Invoke() method can only complete when the UI thread has executed the delegate target method. But the UI thread can only do that when it is idle, pumping the message loop and waiting for Windows messages. And invoke requests. It cannot do this while it is looping inside the Click event handler. So Invoke() cannot complete, which leaves the COM_InUse variable forever set to true, which leaves the Click event handler forever looping. Deadlock city.
The exact same problem occurs when you call the SerialPort.Close() method, the port can only be closed when all events have been processed.
You will need to fix this by using Control.BeginInvoke() instead. Make sure the data is still valid by the time the delegate target starts executing. Pass it as an argument for example, copying if necessary.
Closing the form while the scale is unrelentingly sending data is in general a problem. You'll get an exception when you invoke on a disposed form. To fix this, you'll need to implement the FormClosing event handler and set e.Cancel to true. And unsubscribe the DataReceived event and start a timer. Make the Interval a couple of seconds. When the timer Ticks, you can close the form again, now being sure that all data was drained and no more invokes can occur.
Also note that calling DiscardInBuffer() is only good for randomly losing data.
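A rough sketch of the BeginInvoke plus FormClosing approach described above; names are reused from the question where possible, closeTimer is an assumed WinForms Timer on the form, and the interval is arbitrary:

private void COMScale_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    string line = COMScale.ReadTo(((char)0x2).ToString());
    // copy the data into the argument so it is still valid when the UI thread processes it
    BeginInvoke(new Action<string>(ProcessScaleLine), line);
}

private void ProcessScaleLine(string line)
{
    // the regex matching and textbox update from the question go here, now on the UI thread
}

private bool closePending;

private void FormScale_FormClosing(object sender, FormClosingEventArgs e)
{
    if (!closePending)
    {
        e.Cancel = true;                                  // don't close yet
        COMScale.DataReceived -= COMScale_DataReceived;   // stop accepting new data
        closePending = true;
        closeTimer.Interval = 2000;                       // let pending invokes drain
        closeTimer.Tick += (s, a) => { closeTimer.Stop(); Close(); };
        closeTimer.Start();
    }
}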
Hello, I'm currently having an issue with a timer in a program I'm developing. The timer runs and calls methods which retrieve Windows Management Instrumentation information from remote PCs after a set period of time, and then repeats this.
The first time the timer fires, all is well. However, the second time, after the timer has completed its task, it loops through itself again, and the third time it runs it does everything 3 times, etc. The for loop in the code below works fine; it's the timer itself.
Any help would be appreciated, and if you require any further details please let me know.
Below is my code:
private void tmrStore_Tick(object sender, EventArgs e)
{
    string ipAdd;
    ipAdd = "127.0.0.1";
    List<TblServer> Server;
    WMIInfo localDB = new WMIInfo("Data Source=|DataDirectory|\\WMIInfo.sdf");
    Server = localDB.TblServer.ToList();
    if (Server.Count == 0)
    {
    }
    else
    {
        for (int counter = 0; counter < Server.Count; counter++)
        {
            CPUStore cpu = new CPUStore();
            cpu.Store(Server[counter].IpAdd);
            HDDStore hdd = new HDDStore();
            hdd.Store(Server[counter].IpAdd);
            MemStore mem = new MemStore();
            mem.Store(Server[counter].IpAdd);
            //delete items over 24 hours old
        }
    }
}
Try disabling the timer before performing the management task, then reenabling:
tmrStore.Enabled = false;
try
{
    // do stuff
}
finally
{
    tmrStore.Enabled = true;
}
The cause of the problem is probably that the body of your timer handler takes longer to execute than your Timer.Interval, so your timer events start to stack on top of each other.
You might also consider putting this code in a thread instead of a timer, so that it's independent of your user interface.
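A sketch of the thread-based variant; CollectWmiData is a placeholder for the body of tmrStore_Tick (minus any direct UI access), and the interval is arbitrary:

private volatile bool stopPolling;

private void StartPollingThread()
{
    var worker = new Thread(() =>
    {
        while (!stopPolling)
        {
            CollectWmiData();                          // placeholder for the WMI collection work
            Thread.Sleep(TimeSpan.FromMinutes(5));     // polling interval; runs never overlap
        }
    });
    worker.IsBackground = true;                        // don't keep the process alive on exit
    worker.Start();
}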
My first guess is that you are subscribing to your Timer.Tick event in a place that is executed multiple times. I would search for "tmrStore.Tick +=" to see where handlers are being added to the event.
Right, I've resolved the issue. It was because I had a class I was using to write the retrieved information into text boxes, and within it I created a new instance of the form to gain access to the text boxes. Doh!
Thanks for your help though, guys; no doubt I'll be back soon for some more, lol.