Check when a task starts / shuts down - C#

I want to know how I can tell from the Task Manager whether a task has started or shut down. My idea was to have a loop that constantly checks whether a new task has started by searching for a specific string among the running tasks. While this is possible, I don't really want to use that method, because I think it would eat a lot of performance, so I wanted to ask whether you have a better way to check if a program has started or shut down. This is roughly what I had in mind:
while(!"notepad.exe found")
{
SearchForTask("notepad.exe");
if(notepad.exe found)
//Do Something
}
If there is another way, please let me know.
Regards

Checking for a running process is easy:
using System.Diagnostics;
// ...
if (Process.GetProcessesByName("notepad").Length == 0)
{
    // No "notepad" process is running
}
else
{
    // At least one "notepad" process is running
}
You can also check for Length being greater than zero, or store the number of running processes from the last time you checked and see if it has changed (if it is lower than the last count, one of the processes closed; if it is higher, one started), since you can actually have more than one "notepad" running.
This uses the "friendly name" (generally, the executable name without the .exe extension or the path). If you are interested in a very specific process with a very specific path, you'd need to iterate through the array that GetProcessesByName returns.
If you are doing this in a loop, I'd leave some idle time in the loop so you are not checking constantly (how much depends on what you are doing with all of this); otherwise you can use a timer (one of the many available) and poll every n milliseconds.
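For example, a minimal sketch of the timer-based polling approach (the process name "notepad" and the 2-second interval are just placeholders):
using System;
using System.Diagnostics;
using System.Threading;

class ProcessWatcher
{
    static int lastCount = -1;

    static void Main()
    {
        // Poll every 2 seconds; tune the interval to whatever your scenario can tolerate.
        using (var timer = new Timer(CheckProcess, null, 0, 2000))
        {
            Console.ReadLine(); // keep the program alive while the timer runs
        }
    }

    static void CheckProcess(object state)
    {
        int count = Process.GetProcessesByName("notepad").Length;

        if (lastCount >= 0 && count > lastCount)
            Console.WriteLine("A notepad process started.");
        else if (lastCount >= 0 && count < lastCount)
            Console.WriteLine("A notepad process exited.");

        lastCount = count;
    }
}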

Related

C# Console application: Console.ReadKey() has odd initial skipping behaviour at high framerates

For the challenge and the educational gain, I am currently trying to make a simple game in the console window. I use a very primitive "locked" framerate system, like this:
using System.Threading;
// ...
private static void Main(string[] args)
{
    AutoResetEvent autoEvent = new AutoResetEvent(false);
    Timer timer = new Timer(Update);   // Update has the TimerCallback signature: void Update(object state)
    timer.Change(0, GameSpeed);
    autoEvent.WaitOne();               // block the main thread so the timer keeps firing
}
So, a timer ticks every GameSpeed milliseconds and calls the method Update().
The way I have understood input in the console window so far is as follows:
The console application has a queue where it stores any keyboard input as metadata plus a ConsoleKey enum value. The user can add to this queue at any time. If the user holds down, say, A, it will add A every "real" frame, i.e. as fast as the machine can manage, not at the locked frame rate I am working with.
Calling Console.ReadKey() removes and returns the first element in this queue. Console.KeyAvailable returns a bool indicating whether there is anything in the queue.
If GameSpeed is set to anything higher than 400, everything consistently works fine. The image below shows the results of some Console.WriteLine() debug messages that give the number of keyboard inputs detected in each locked/custom frame, using the following code:
int counter = 0;
while (Console.KeyAvailable) { counter++; Console.ReadKey(true); }
Console.WriteLine(counter);
[Screenshot: keyboard input counts per frame]
I use only the A key. I hold it for some time, then release it again. The GameSpeed is set to 1000. As expected, the first frames give low numbers, since I might start pressing halfway into the frame, and so do the last frames, as I might release the A key early.
Now, the exact same experiment, but with a GameSpeed of only 200:
As you can see, I've marked the places where I began pressing in yellow. It always gets the first frame perfectly. But then there are one, two, or three frames where it acts as if it has received no input, and after those frames it's fine and gets around 7 inputs per frame.
I recognize that you are not supposed to make games in the console window; it is not made for scenarios like this. That does not, however, rule out the possibility that there is some specific, logical reason this happens that I might be able to fix. So, concretely, the question is: can anyone provide some knowledge or ideas about why this happens?
If computer specs are needed, just say so in the comments and I'll add them.
Edit:
I think I have found the cause of this behaviour, and it is the Windows keyboard repeat delay. While you can change this in the Control Panel, I have searched the web and found no examples of how you would change it from a C# application. The question then boils down to: how do you change the Windows keyboard repeat delay?
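One possible route seems to be the Win32 SystemParametersInfo API via P/Invoke; a hedged, untested sketch (note that SPI_SETKEYBOARDDELAY changes the setting system-wide, so the original value would need to be saved and restored when the game exits):
using System;
using System.Runtime.InteropServices;

static class KeyboardRepeat
{
    const uint SPI_SETKEYBOARDDELAY = 0x0017;
    const uint SPIF_SENDCHANGE = 0x0002;

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool SystemParametersInfo(uint uiAction, uint uiParam, IntPtr pvParam, uint fWinIni);

    // delay: 0 (shortest, roughly 250 ms) to 3 (longest, roughly 1 second).
    // This is a system-wide setting, so remember to restore the user's original value on exit.
    public static void SetRepeatDelay(uint delay)
    {
        SystemParametersInfo(SPI_SETKEYBOARDDELAY, delay, IntPtr.Zero, SPIF_SENDCHANGE);
    }
}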

How to apply a dynamic wait in Ranorex?

I want to apply a dynamic wait in Ranorex.
To open a webpage I used a static wait like this:
Host.Local.OpenBrowser("http://www.ranorex.com/Documentation/Ranorex/html/M_Ranorex_WebDocument_Navigate_2.htm",
"firefox.exe");
Delay.Seconds(15);
Please provide me with a proper solution in detail. Waiting for your reply.
The easiest way is to use the WaitForDocumentLoaded method. It lets you set a maximum timeout, but continues as soon as the document finishes loading. Here is the documentation on it:
http://www.ranorex.com/Documentation/Ranorex/html/M_Ranorex_WebDocument_WaitForDocumentLoaded_1.htm
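A rough sketch of how that might look, replacing the fixed delay (assuming the page is addressed by its DOM path and that the overload taking a timeout in milliseconds matches the linked documentation):
// Open the page, then wait up to 30 s for the document to finish loading instead of a fixed Delay.Seconds(15).
Host.Local.OpenBrowser("http://www.ranorex.com/Documentation/Ranorex/html/M_Ranorex_WebDocument_Navigate_2.htm",
    "firefox.exe");

Ranorex.WebDocument webDocument = "/dom[@domain='www.ranorex.com']"; // adapter created from a RanoreXPath
webDocument.WaitForDocumentLoaded(30000); // maximum wait; returns as soon as the load completes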
First of all, you should be more detailed about your issue. At the moment you don't actually state any issue, and you don't even specify the reason for the timeout.
I don't actually see why you would need a timeout there. The next element to be interacted with in your test will have its own search timeout. In my experience I haven't had a need or a reason to delay after opening the browser.
If you truly need a dynamic delay there, here's what you should do:
1) Either select an element that always exists on the webpage when you open the browser, or
2) Select the next element to be interacted with, and build the delay on top of either of these two.
Let's say we have an input field that we need to add text to after the page has opened. The best idea would be to wait for that element to exist and then continue with the test case.
So, we wait for the element to exist (add the element to the repository):
repo.DomPart.InputElementInfo.WaitForExists(30000);
And then we can continue with the test functionality:
repo.DomPart.InputElement.InnerText = "Test";
What WaitForExists does is wait up to 30 seconds (30000 ms) for the element to exist. It is possible to catch an exception from this and add error handling if the element is not found.
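For example (a sketch; the exact exception type thrown on timeout is in the Ranorex documentation, so the general RanorexException is used here as an assumption):
try
{
    // Wait up to 30 s for the input element to appear.
    repo.DomPart.InputElementInfo.WaitForExists(30000);
}
catch (Ranorex.RanorexException ex)
{
    // The element did not show up in time; report it and decide whether to abort the test case.
    Ranorex.Report.Failure("Input element was not found: " + ex.Message);
}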
The dynamic functionality has to be added by you. In Ranorex you will always run into a timeout at some point; it might be a specified delay, it might be the timeout of a repository element, etc. The "dynamic" part is mostly yours to build.
If this is not the answer you were looking for, please specify the reason for the delay and I'll try to answer your specific issue more accurately.

How to prevent loops in JavaScript that crash the browser or Apps?

I am creating a live editor in a Windows 8.1 app using JavaScript. I'm almost done with it, but the problem is that whenever I run a bad loop or function like the one below, the app hangs or exits.
I test it with a loop such as this (it's just an example; a user may write a loop in their own way):
for (i = 0; i <= 50000; i++)
{
    for (j = 0; j < 5000; j++) {
        $('body').append('hey I am a bug<br>');
    }
}
I know that this is a worst-case scenario for any app or browser to handle. So if a user writes such a loop, how do I handle it and still produce their output?
Or, if it's not possible to protect my app from that kind of loop, how can I at least warn the user when the code looks dangerous, with something like:
Running this snippet may crash the app!
One idea I have is to check the code with a regular expression: if the code contains something like for(i=0;i<=5000;i++), the alert above is shown. How would I write a regex for that?
I am also able to include C# as a back-end.
Unfortunately, without doing some deep and complex analysis of the edited code, you'll not be able to fully prevent errant JavaScript that kills your application. You could, for example, use a library that builds an abstract syntax tree from the JavaScript and refuse to execute the code if certain patterns are found. But the number of patterns that could cause an infinite loop is large, so they would not be simple to find, and the approach is likely not robust enough.
In the for loop example, you could modify the code to look like this:
for (i = 0; !timeout() && i <= 50000; i++)
{
    for (j = 0; !timeout() && j < 5000; j++) {
        $('body').append('hey I am a bug<br>');
    }
}
I've "injected" a call to a function you'd write called timeout. In there, it would need to be able to detect whether the loop should be aborted because the script has been running too long.
But the loop could also have been written with a do-while, so that type of loop would need to be handled as well.
Your example uses jQuery in a tight loop and modifies the DOM, which means that solutions trying to isolate the JavaScript in a Web Worker would be complex, as a worker is not allowed to manipulate the DOM directly; it can only send and receive string messages.
If you had used the XAML/C# WebView to host (and build) the JavaScript editor, you could have considered the WebView.LongRunningScriptDetected event. It is raised when a long-running script is detected, giving the host the ability to kill the script before the entire application becomes unresponsive and is terminated.
Unfortunately, this same event is not available in the x-ms-webview control which is available in a WinJS project.
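For reference, a sketch of what hooking that XAML event looks like (the 5-second threshold is an arbitrary example):
// using System; using Windows.UI.Xaml.Controls;
// XAML: <WebView x:Name="EditorWebView" LongRunningScriptDetected="EditorWebView_LongRunningScriptDetected" />
private void EditorWebView_LongRunningScriptDetected(WebView sender,
    WebViewLongRunningScriptDetectedEventArgs args)
{
    // ExecutionTime reports how long the script has been running so far.
    if (args.ExecutionTime > TimeSpan.FromSeconds(5))
    {
        // Halt the script so the app stays responsive.
        args.StopPageScriptExecution = true;
    }
}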
I've got 2 solutions:
1.
My first solution would be to define a variable:
startSeconds = new Date().getSeconds();
Then, using regex, I'm inserting this piece of code inside the nested loop.
;if(startSeconds < new Date().getSeconds())break;
So, each time the loop runs, the inserted code checks whether startSeconds is less than the current seconds value, new Date().getSeconds().
For example, startSeconds may be 22 and new Date().getSeconds() may return 24. The if condition then succeeds, so the loop breaks.
Mostly, a non-dangerous loop should finish within about 2 to 3 seconds.
Small loops like for(var i=0;i<30;i++){} will run to completion, while big loops will run for 3 to 4 seconds, which is perfectly OK.
My solution uses your own example of 50000*5000, but it doesn't crash!
Live demo: http://jsfiddle.net/nHqUj/4
2.
My second solution would be to define two variables, start and max.
max should be the maximum number of iterations you are willing to run, for example 1000.
Then, using regex, I'm inserting this piece of code inside the nested loop.
;start+=1;if(start>max)break;
So, each time the loop runs, the inserted code does two things:
Increments the value of start by 1.
Checks whether start is greater than the max. If yes, it breaks the loop.
This solution also uses your own example of 50000*5000, but it doesn't crash!
Updated demo: http://jsfiddle.net/nHqUj/3
Regex I'm using: (?:(for|while|do)\s*\([^\{\}]*\))\s*\{([^\{\}]+)\}
One idea, but I'm not sure what your editor is capable of:
If you can somehow detect that a loop may cause a problem (for example, any loop that runs more than 200 times is suspect), and you can rewrite such a user loop into the code below to produce its output, then it will not hang. Frankly, though, I'm not sure whether it will work for you.
var j = 0;
var inter = setInterval(function () {
    if (j < 5000) {
        $('#test').append('hey I am a bug<br>');
        ++j;
    } else {
        clearInterval(inter);
    }
}, 100);
Perhaps inject timers around for loops and check time at the first line. Do this for every loop.
Regex: /for\([^{]*\)[\s]*{/
Example:
/for\([^{]*\)[\s]*{/.test("for(var i=0; i<length; i++){");
> true
Now, if you use replace and wrap the for in a grouping you can get the result you want.
var code = "for(var i=0; i<length; i++){",
testRegex = /(?:for\([^{]*\)[\s]*{)/g,
matchReplace = "var timeStarted = new Date().getTime();" +
"$1" +
"if (new Date().getTime() - timeStarted > maxPossibleTime) {" +
"return; // do something here" +
"}";
code.replace(textRegex, matchReplace);
You cannot tell what the user is trying to do with a simple regex. Let's say the user writes code like this...
for (i = 0; i <= 5; i++)
{
    for (j = 0; j <= 5; j++) {
        if (j >= 3) {
            i = i * 5000;
            j = j * 5000;
        }
        $('body').append('hey I am a bug<br>');
    }
}
Then a simple regex cannot save you, because the value of i is blown up only after some iterations have passed. So the best way to solve the problem is to have a benchmark. Say your app hangs after 3 minutes of continuous processing (assume that until it hits 3 minutes of processing time it runs fine). Then, whatever code the user tries to run, you start a timer before the processing begins, and if it takes more than 2.5 minutes you kill that work in your app and raise a popup telling the user 'Running this snippet may crash the app!'. This way you don't even need a regex or any verification of whether the user's code is bad...
Try this... Might help... Cheers!!!
Let's assume you are doing this in the window context and not in a worker. Put a call to a function called rocketChair in every single inner loop. This function is simple: it increments a global counter and checks the value against a global ceiling. When the ceiling is reached, rocketChair summarily throws "eject from perilous code". At that point you can also save any state you wish to preserve to a global state variable.
Wrap your entire app in a single try/catch block, and when rocketChair ejects you can save the day like the hero you are.

Norms, rules or guidelines for calculating and showing "ETA/ETC" for a process

ETC = "Estimated Time of Completion"
I'm measuring the time it takes to run through a loop and showing the user some numbers that tell him/her approximately how much time the full process will take. I feel like this is a common thing that everyone does on occasion, and I would like to know if you have any guidelines that you follow.
Here's an example I'm using at the moment:
int itemsLeft;            // The number of items left to run through.
double timeLeft;
TimeSpan tsTimeLeft;
List<double> average;     // Recent per-item durations, in seconds.
double secondsThisItem;   // Time the current item has taken so far; reset every loop.

// The background worker calls this event once for each item. The total number
// of items is in the hundreds for this particular application, and every loop takes
// roughly one second.
private void backgroundWorker1_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    // An item has been completed!
    itemsLeft--;
    average.Add(secondsThisItem);

    // Get an average time per item and multiply it by the number of items left.
    timeLeft = average.Sum() / average.Count * itemsLeft;
    tsTimeLeft = TimeSpan.FromSeconds(timeLeft);

    this.Text = String.Format("ETC: {0}:{1:D2}:{2:D2} ({3:N2}s/file)",
        tsTimeLeft.Hours,
        tsTimeLeft.Minutes,
        tsTimeLeft.Seconds,
        average.Sum() / average.Count);

    // Only use the last 20-30 samples in the calculation to keep the List<> from growing needlessly.
    if (average.Count > 30)
        average.RemoveRange(0, 10);

    secondsThisItem = 0;
}

// this.profiler.Interval = 10;
private void profiler_Tick(object sender, EventArgs e)
{
    secondsThisItem += 0.01;   // the 10 ms tick adds 0.01 s
}
As I am a programmer at the very start of my career, I'm curious to see what you would do in this situation. My main concern is that I calculate and update the UI on every loop iteration; is this bad practice?
Are there any dos and don'ts for estimations like this? Are there preferred ways of doing it, e.g. update every second, update every ten samples, calculate and update the UI separately? Also, when would an ETA/ETC be a good or a bad idea?
The real problem with estimating the time taken by a process is the quantification of the workload. Once you can quantify that, you can make a better estimate.
Examples of good estimates
File system I/O or network transfer. Even though file systems can have unpredictable performance, you can quantify in advance the total number of bytes to be processed and you can measure the speed. Once you have these, and once you can monitor how many bytes have been transferred, you get a good estimate. Random factors may affect it (e.g. another application starts in the meantime), but you still get a meaningful value.
Encryption on large streams, for the same reasons. Even if you are computing an MD5 hash, you always know how many blocks have been processed, how many are left and the total.
Item synchronization. This is a little trickier. If you can assume that the per-unit workload is constant, or you can make a good estimate of the time required to process an item when the variance is low or insignificant, then you can make another good estimate of the process. Take email synchronization: if you don't know the byte size of the messages (otherwise you fall into case 1), but common practice tells you that the majority of emails have roughly the same size, then you can use the mean time taken to download/upload the emails processed so far to estimate the time taken to process a single email. This won't work in 100% of cases and is subject to error, but you still see the progress bar advancing steadily on a large account.
In general, the rule is that you can make a good estimate of the ETC/ETA (the ETA is actually the date and time the operation is expected to complete) if you have a homogeneous process whose numbers you know. Homogeneity guarantees that the time to process one work item is comparable to the others, i.e. the time taken to process previous items can be used to estimate future ones; the known numbers are what make the calculation correct.
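As a sketch of the "known numbers" case, the arithmetic is simply remaining work divided by measured throughput (the byte counters here stand in for whatever unit you can actually measure):
// Sketch: estimate the remaining time from work done so far and a measured rate.
// totalBytes, processedBytes and elapsed are assumed to be tracked by the caller.
TimeSpan EstimateTimeLeft(long totalBytes, long processedBytes, TimeSpan elapsed)
{
    if (processedBytes == 0)
        return TimeSpan.MaxValue;                 // no data yet, so no estimate

    double bytesPerSecond = processedBytes / elapsed.TotalSeconds;
    double secondsLeft = (totalBytes - processedBytes) / bytesPerSecond;
    return TimeSpan.FromSeconds(secondsLeft);     // ETA = DateTime.Now + this value
}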
Examples of bad estimates
Operations on a number of files of unknown size. This time you only know how many files you want to process (e.g. to download), but not their sizes in advance. If the file sizes have high variance, you run into trouble: having downloaded half of the files, when those happened to be the smallest ones and add up to 10% of the total bytes, can you say you are halfway? No! You just see the progress bar rush to 50% and then crawl.
Heterogeneous processes, e.g. Windows installations. As pointed out by @HansPassant, Windows installations provide a worse-than-bad estimate. Installing Windows software involves several processes, including file copying (which can be estimated), registry modifications (usually never estimated) and execution of transactional code. The real problem is the last one; transactional processes involving the execution of custom installer code are discussed below.
Execution of generic code. This can never be estimated. A code fragment contains conditional statements, and executing them takes different paths depending on conditions external to the code. This means, for example, that a program behaves differently depending on whether you have a printer installed, whether you have a local or a domain account, etc.
Conclusions
Estimating the duration of a software process is neither an impossible task nor an exact/deterministic one.
It's not impossible because, even in the case of code fragments, you can either find a model for your code (take an LU factorization as an example: it can be estimated), or you might redesign your code, splitting it into an estimation phase - where you first determine the branch conditions - and an execution phase, where all the pre-determined branches are taken. I said might because in practice this is often impossible: most code determines its branches as an effect of previous computations, meaning that estimating a branch actually involves running the code. A chicken-and-egg circle.
It's not a deterministic task either. Computer systems, especially multitasking ones, are affected by a number of random factors that can impact your running process. You will never get an exact estimate before running the process; at most, you can detect external factors and re-estimate. The gap between your estimate and the real duration mathematically converges to zero as you approach the end of the process (lim[t->N] |est(t) - real(t)| = 0, where N is the process duration).
If your user interface is so obscure that you have to explain that ETC doesn't mean Etcetera then you are doing it wrong. Every user understands what a progress bar does, don't help.
Nothing is quite as annoying as an inaccurate progress bar. Particularly ones that promise a quick finish but then don't deliver. I'd give the progress bar displayed by any installer on Windows as a good example of one that is fundamentally broken. Just not a shining example of an implementation that you should pursue.
Such a progress bar is broken because it is utterly impossible to guess up front how long it is going to take to install a program. File systems have very unpredictable performance. This is a very common problem with estimating execution time. Better UI models are the spinning dots you see in a video player and in many programs in Windows 8, or the marquee style supported by the common ProgressBar control: just feedback that says "I'm not dead, working on it". Even the hourglass cursor is better than a bad estimate. If you have something to report beyond a technicality that no user is really interested in, then don't hesitate to display that, like the number of files you've processed or the number of kilobytes you've downloaded. The actual value of the number isn't that useful; seeing the rate at which it increases is the interesting tidbit.
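In WinForms that boils down to something like the sketch below (progressBar1, statusLabel and the counters are assumed names, not part of the original code):
// using System.Windows.Forms;
// Indeterminate "I'm not dead" feedback plus a raw counter, instead of a fake percentage.
private void ShowProgress(int filesProcessed, long bytesDownloaded)
{
    progressBar1.Style = ProgressBarStyle.Marquee;   // continuous animation, no percentage promised
    statusLabel.Text = String.Format("{0:N0} files processed ({1:N0} KB downloaded)",
        filesProcessed, bytesDownloaded / 1024);
}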

Displaying images with a high frame rate

Here's the problem: I have a custom hardware device, and I have to grab images from it in C#/WPF and display them in a window, all at 120+ FPS.
The problem is that there is no event to indicate the images are ready, but I have to constantly poll the device and check whether there are any new images and then download them.
There are apparently a handful of ways to do it, but I haven't been able to find the right one yet.
Here's what I tried:
A simple timer (or DispatcherTimer) - works great for lower frame rates, but I can't get it past, let's say, 60 FPS.
A single-threaded infinite loop - quite fast, but I have to put DoEvents (or its WPF equivalent) in the loop for the window to be redrawn; this has some other unwanted (strange) consequences, such as key press events from some controls not being fired, etc.
Doing the polling/downloading in another thread and displaying in the UI thread, something like this:
new Thread(() =>
{
    while (StillCapturing)
    {
        if (Camera.CheckForAndDownloadImage(CameraInstance))
        {
            this.Dispatcher.Invoke((Action)this.DisplayImage);
        }
    }
}).Start();
Well, this works relatively well, but it puts quite a load on the CPU and of course completely kills the machine if it doesn't have more than one CPU/core, which is unacceptable. Also, there is a lot of thread contention this way.
The question is obvious - are there any better alternatives, or is one of these the way to go in this case?
Update:
I somehow forgot to mention (well, forgot to think about it while writing this question) that of course I don't need all frames to be displayed; however, I still need to capture all of them so they can be saved to the hard drive.
Update2:
I found out that the DispatcherTimer method is slow not because it can't process everything fast enough, but because DispatcherTimer waits for the next vertical sync before firing the tick event. This is actually good in my case, because in the tick event I can save all pending images to a memory buffer (used for saving images to disk) and display just the last one.
As for old computers being completely "killed" by the capturing, it appears that WPF falls back to software rendering, which is very slow. There's probably nothing I can do about that.
Thanks for all the answers.
I think you're trying for too simplistic of an approach. Here's what I would do.
a) Put a Thread.Sleep(5) in your polling loop; that should allow you to get close to 120 FPS while still keeping CPU usage low.
b) Only update the display with every 5th frame or so. That will cut down on the amount of processing, as I'm not sure WPF is made to handle much more than 60 FPS anyway.
c) Use the ThreadPool to spawn a subtask for each frame that then goes and saves it to disk (in a separate file per frame); that way you won't be as limited by disk performance. Extra frames will just pile up in memory.
Personally I would implement them in that order. Chances are a or b will fix your problems.
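A sketch combining the three points with the question's polling loop (GetLastDownloadedFrame, SaveFrameToDisk and frameCounter are hypothetical names; the rest comes from the question):
new Thread(() =>
{
    int frameCounter = 0;
    while (StillCapturing)
    {
        if (Camera.CheckForAndDownloadImage(CameraInstance))
        {
            var frame = GetLastDownloadedFrame();                        // hypothetical accessor for the new image
            ThreadPool.QueueUserWorkItem(_ => SaveFrameToDisk(frame));   // (c) write to disk off the polling thread

            if (++frameCounter % 5 == 0)                                 // (b) only display every 5th frame
                this.Dispatcher.Invoke((Action)this.DisplayImage);
        }

        Thread.Sleep(5);                                                 // (a) keep CPU usage down between polls
    }
}).Start();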
You could do the following (all pseudocode):
1. Have a worker thread dealing with the capture process:
object _locker = new object();
List<Image> _captures = new List<Image>();

new Thread(() =>
{
    while (StillCapturing)
    {
        if (Camera.CheckForAndDownloadImage(CameraInstance))
        {
            lock (_locker) { _captures.Add(DisplayImage); } // DisplayImage stands for the newly captured image
        }
    }
}).Start();
2. Have the dispatcher timer take the latest captured image (obviously it will have missed some captures since the last tick) and display it. That way the UI thread is throttled and does as little as possible; it isn't doing the capturing, which is done by the worker thread. Sorry, I can't get this bit to format (but you get the idea):
void OnTimerTick(object sender, EventArgs e)
{
    Image imageToDisplay;
    lock (_locker) { imageToDisplay = _captures[_captures.Count - 1]; }
    DisplayFunction(imageToDisplay);
}
It might be that the list is better off as a queue, with another thread used to drain the queue and write to disk or whatever.
