I have a .NET app that does a variety of file operations. It has been scheduled via Task Scheduler and runs without issue. We are moving the job to be controlled by Autosys and have the job configured. When it kicks off, I see all the files move as expected, and I get a log file indicating that everything ran as expected. The app is working. Autosys, however, reports that it failed.
Status/[Event]    Time                  Ntry  ES  ProcessTime           Machine
---------------   -------------------   ----  --  -------------------   ----------
RUNNING           09/26/2013 15:30:21   1     PD  09/26/2013 15:31:12
FAILURE           09/26/2013 15:31:59   1     PD  09/26/2013 15:32:17
[*** ALARM ***]
JOBFAILURE        09/26/2013 15:32:16   1     PD  09/26/2013 15:32:17
[STARTJOB]        09/26/2013 16:00:00   0     UP
The application is a WinForms app; here's the meat of the code:
static int Main(string[] args)
{
    Console.WriteLine("Starting processing...");
    Console.WriteLine(DateTime.Now.ToString(CultureInfo.InvariantCulture));

    // If we call the app with args we do some stuff;
    // otherwise we show the UI to let the user choose what to do.
    if (args.Length > 0)
    {
        // Stuff happens here that works: other method calls, etc.
        Console.WriteLine(DateTime.Now.ToString(CultureInfo.InvariantCulture));
        Console.WriteLine("Process complete.");
        return 0;
    }

    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);
    Application.Run(new FileLoader());
    return 0;
}
The job IS working; it does everything it is supposed to do without logging any exceptions, BUT Autosys still reports failure. What am I doing wrong?
Autosys will mark the job as successfully completed when the process ends. You've said this is a WinForms application, so what may be happening here is that Autosys starts the application, the application works fine and does what it is supposed to do, but the process does not end until someone manually closes the window (or the application has some technique for closing itself). Autosys therefore sees the process as never ending and marks the job as failed.
The solution is to make your application a console application. Alternatively, as I recall, there is a property when setting up the Autosys job that tells it not to wait for the process to end, so the job just starts the program and is marked completed.
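For what it's worth, here is a minimal sketch of the first option, based on the Main method in the question. A WinForms output type normally has no console attached (so the Console.WriteLine calls are silently discarded anyway), and Environment.Exit guarantees the process terminates with an explicit exit code even if some other foreground thread is still alive. DoFileWork is a hypothetical stand-in for the real batch logic:

static int Main(string[] args)
{
    if (args.Length > 0)
    {
        // Headless batch mode: do the work, then force the process to
        // end with a clean exit code for the scheduler to pick up.
        // DoFileWork(args); // hypothetical batch entry point
        Environment.Exit(0);
    }

    // Interactive mode: show the UI as before.
    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);
    Application.Run(new FileLoader());
    return 0;
}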
Related
I have created a Windows Task Scheduler task programmatically in C#. The task is created successfully and is scheduled to run correctly. At the scheduled time it says the task is running, but there is no result, and the next scheduled run time is updated.
However, the last run time and last run result do not update.
The last run result is: The task has not yet run. (0x41303)
When I run the task manually from Task Scheduler it executes successfully, but it does not run automatically.
Below is the code I used to create the task:
var ts = new TaskService();
var td = ts.NewTask();
td.RegistrationInfo.Author = "My company";
td.RegistrationInfo.Description = "Runs test application";

// Weekly trigger that repeats every 'minutes' minutes (hourly if 0).
var trigger = new WeeklyTrigger { StartBoundary = startDate, DaysOfWeek = daysOfWeek, Enabled = enabled };
trigger.Repetition.Interval = TimeSpan.FromSeconds(((minutes == 0) ? 60 : minutes) * 60);
td.Triggers.Add(trigger);

// Default to this assembly; use the supplied path if it exists.
var action = new ExecAction(Assembly.GetExecutingAssembly().Location, null, null);
if (filePath != string.Empty && File.Exists(filePath))
{
    action = new ExecAction(filePath);
}
action.Arguments = "AutoRun";
td.Actions.Add(action);

ts.RootFolder.RegisterTaskDefinition(TaskName, td);
Any help would be much appreciated!
Check the execution privileges first.
Then check Task Manager to see whether the process is actually running when the task appears to be 'running'. If it is, wrap the work in try-catch blocks and write the exceptions to the event log.
I think that when you run the task manually from Task Scheduler, it is executed as the user operating Task Scheduler (maybe an administrator), but at the scheduled time the application runs as a user that doesn't have enough privileges to do some of the work in your code.
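If it does turn out to be a privileges problem, the Task Scheduler Managed Wrapper used in the question lets you pick the account and run level at registration time. A sketch, assuming the ts and td objects from the question; the account name and password are placeholders:

// Run the task with the highest privileges available to the account.
td.Principal.RunLevel = TaskRunLevel.Highest;

// Register the task under a specific account instead of the default.
ts.RootFolder.RegisterTaskDefinition(
    TaskName, td,
    TaskCreation.CreateOrUpdate,
    "DOMAIN\\user", "password",
    TaskLogonType.Password);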
UPDATE
Set the Start in (optional) value to the target file's location. Without it, Task Scheduler runs the task in the system32 folder and, as I said before, the target application won't have the privileges to run in system32.
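The same fix can be made in the question's code rather than in the UI: the third argument of the ExecAction constructor (passed as null in the snippet above) is the working directory, which corresponds to the Start in field. A sketch, assuming the filePath variable from the question and a using System.IO directive:

// Point "Start in" at the executable's folder so the task
// does not run with system32 as its working directory.
var action = new ExecAction(filePath, "AutoRun",
    Path.GetDirectoryName(filePath));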
Try changing the console application to 32-bit, i.e. right-click the project -> Properties -> Build -> Platform target = x86.
It turns out that for the scheduler to run any task on a laptop, the charger must be plugged in; otherwise the scheduler does not execute the task.
This is not the case with Windows Server or desktop systems.
I'm not sure about this behavior, but it is what I observed.
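This matches Task Scheduler's power conditions ("Start the task only if the computer is on AC power"), which, as far as I know, are enabled by default for new tasks. If you create the task through the wrapper, you can opt out explicitly; a sketch, assuming the td TaskDefinition from the earlier code:

// Allow the task to start, and keep running, on battery power.
td.Settings.DisallowStartIfOnBatteries = false;
td.Settings.StopIfGoingOnBatteries = false;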
For me the issue was the executable crashing with an "Application Error". You can't see any error in Task Scheduler; it just shows the last run result as "The task has not yet run. (0x41303)".
To see the error, check Event Viewer:
Event Viewer -> Windows Logs -> Application
I'm developing an app which basically performs some tasks on a timer tick (in this case, searching for beacons) and sends the results to the server. My goal was to create an app which does its job constantly in the background.

Fortunately, I'm using logging all over the code, so when we started to test it we found that some time later the timer's callback wasn't being called on time. There were pauses which had obviously been caused by standby and doze mode. At that point I was using a background service and System.Threading.Timer. Then, after some research, I rewrote the service to use AlarmManager plus wake locks, but the pauses were still there.

The next attempt was to make the service foreground and use a Handler to post delayed tasks, and everything seemed fine while the device was connected to the computer. When the device is not connected to a charger, the pauses appear again. The interesting thing is that we cannot actually predict this behavior: sometimes it works perfectly fine and sometimes not. And this is really strange, because the code that schedules it is pretty simple and straightforward:
...
private int scanThreadsCount = 0;
private Android.OS.Handler handler = new Android.OS.Handler();

private bool LocationInProgress
{
    get { return Interlocked.CompareExchange(ref scanThreadsCount, 0, 0) != 0; }
}

public void ForceLocation()
{
    if (!LocationInProgress) DoLocation();
}

private async void DoLocation()
{
    Interlocked.Increment(ref scanThreadsCount);
    Logger.Debug("Location is started");
    try
    {
        // Location...
    }
    catch (Exception e)
    {
        Logger.Error(e, "Location cannot be performed due to an unexpected error");
    }
    finally
    {
        if (LocationInterval > 0)
        {
            // It's here. The location interval is 60 seconds
            // and the service is running in the foreground!
            // But in the screenshot we can see the delay, which
            // sometimes reaches 10 minutes or even more.
            handler.PostDelayed(ForceLocation, LocationInterval * 1000);
        }
        Logger.Debug("Location has been finished");
        Interlocked.Decrement(ref scanThreadsCount);
    }
}
...
An occasional small delay would actually be OK, but I need the service to do its job strictly on time; the callback is being called a few seconds, or even a few minutes, late, and that's not acceptable.
The Android documentation says that foreground services are not restricted by standby and doze mode, but I cannot find the cause of this strange behavior. Why is the callback not being called on time? Where do these 10-minute pauses come from? It's pretty frustrating, because I cannot move forward until I have a robust basis. Does anybody know the reason for such strange behavior, or have any suggestions for how I can get the callback to execute on time?
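For what it's worth, the only scheduling primitive I'm aware of that Android documents as firing at a precise time even under doze is AlarmManager.SetExactAndAllowWhileIdle (API 23+); Handler.PostDelayed makes no such guarantee, foreground service or not. A minimal Xamarin.Android sketch, where LocationAlarmReceiver is a hypothetical BroadcastReceiver that would call ForceLocation and re-arm the alarm:

var alarmManager = (AlarmManager)Application.Context.GetSystemService(Context.AlarmService);
var intent = new Intent(Application.Context, typeof(LocationAlarmReceiver)); // hypothetical receiver
var pending = PendingIntent.GetBroadcast(Application.Context, 0, intent, PendingIntentFlags.UpdateCurrent);

// Fire once, LocationInterval seconds from now, even in doze.
// Exact "while idle" alarms cannot repeat, so the receiver must
// schedule the next one itself.
alarmManager.SetExactAndAllowWhileIdle(
    AlarmType.ElapsedRealtimeWakeup,
    SystemClock.ElapsedRealtime() + LocationInterval * 1000,
    pending);

Note that even these alarms are rate-limited: the AlarmManager documentation says while-idle alarms may be deferred in low-power idle modes, potentially by many minutes, which would at least be consistent with the ~10-minute gaps described above.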
P.S. The current version of the app is here. I know it's quite boring trying to figure out what is wrong with someone else's code, but there are only 3 files that have to do with this problem:
~/Services/BeaconService.cs
~/Services/BeaconServiceScanFunctionality.cs
~/Services/BeaconServiceSyncFunctionality.cs
The project is provided for those who might want to try it in action and figure it out for themselves.
Any help will be appreciated!
Thanks in advance
I have a Windows service that checks for work every 5 seconds. It uses System.Threading.Timer to handle the check and processing, and Monitor.TryEnter to make sure only one thread is checking for work.
Just assume it has to be this way: the following code is part of the 8 workers that are created by the service, and each worker has its own specific type of work it needs to check for.
readonly object _workCheckLocker = new object();
public Timer PollingTimer { get; private set; }

void InitializeTimer()
{
    if (PollingTimer == null)
        PollingTimer = new Timer(PollingTimerCallback, null, 0, 5000);
    else
        PollingTimer.Change(0, 5000);

    Details.TimerIsRunning = true;
}

void PollingTimerCallback(object state)
{
    if (!Details.StillGettingWork)
    {
        if (Monitor.TryEnter(_workCheckLocker, 500))
        {
            try
            {
                CheckForWork();
            }
            catch (Exception ex)
            {
                Log.Error(EnvironmentName + " -- CheckForWork failed. " + ex);
            }
            finally
            {
                Monitor.Exit(_workCheckLocker);
                Details.StillGettingWork = false;
            }
        }
    }
    else
    {
        Log.Standard("Continuing to get work.");
    }
}

void CheckForWork()
{
    Details.StillGettingWork = true;
    //Hit web server to grab work.
    //Log Processing
    //Process Work
}
Now here's the problem:
The code above is allowing 2 Timer threads to get into the CheckForWork() method. I honestly don't understand how this is possible, but I have experienced this with multiple clients where this software is running.
The logs I got today when I pushed some work showed that it checked for work twice and I had 2 threads independently trying to process which kept causing the work to fail.
Processing 0-3978DF84-EB3E-47F4-8E78-E41E3BD0880E.xml for Update Request. - at 09/14 10:15:501255801
Stopping environments for Update request - at 09/14 10:15:501255801
Processing 0-3978DF84-EB3E-47F4-8E78-E41E3BD0880E.xml for Update Request. - at 09/14 10:15:501255801
Unloaded AppDomain - at 09/14 10:15:501255801
Stopping environments for Update request - at 09/14 10:15:501255801
AppDomain is already unloaded - at 09/14 10:15:501255801
=== Starting Update Process === - at 09/14 10:15:513756009
Downloading File X - at 09/14 10:15:525631183
Downloading File Y - at 09/14 10:15:525631183
=== Starting Update Process === - at 09/14 10:15:525787359
Downloading File X - at 09/14 10:15:525787359
Downloading File Y - at 09/14 10:15:525787359
The logs are written asynchronously and are queued, so don't read too much into the fact that the times match exactly; I just wanted to show that 2 threads hit a section of code that I believe should never have allowed it. (The log and times are real, though, just with sanitized messages.)
Eventually what happens is that the 2 threads start downloading a big enough file that one of them gets access denied on the file, causing the whole update to fail.
How can the above code actually allow this? I experienced this problem last year when I had a lock instead of the Monitor, and I assumed it was because the Timer eventually got offset enough, due to the lock blocking, that timer threads were stacking up: one blocked for 5 seconds and went through right as the Timer was triggering another callback, and they both somehow made it in. That's why I went with the Monitor.TryEnter option, so I wouldn't just keep stacking timer threads.
Any clue? In all the cases where I have tried to solve this issue before, the System.Threading.Timer has been the one constant, and I think it's the root cause, but I don't understand why.
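Whatever the root cause turns out to be, a common way to rule out stacked timer callbacks entirely is to make the timer one-shot and re-arm it at the end of each callback, so a new callback can never start before the previous one finishes. A sketch using the names from the question:

void InitializeTimer()
{
    // Fire once; the callback re-arms the timer when it is done.
    PollingTimer = new Timer(PollingTimerCallback, null, 0, Timeout.Infinite);
}

void PollingTimerCallback(object state)
{
    try
    {
        CheckForWork();
    }
    catch (Exception ex)
    {
        Log.Error(EnvironmentName + " -- CheckForWork failed. " + ex);
    }
    finally
    {
        // Schedule the next run only after this one completes.
        PollingTimer.Change(5000, Timeout.Infinite);
    }
}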
I can see in the log you've provided that you got an AppDomain restart; is that correct? If so, are you sure that you have one and only one object for your service across the AppDomain restart? I think that during the restart not all the threads are stopped at exactly the same time, and some of them could proceed with polling the work queue, so two different threads in two different AppDomains got the same Id for the work.
You could probably fix this by marking your _workCheckLocker with the static keyword, like this:
static object _workCheckLocker;
and introducing a static constructor for your class that initializes this field (with inline initialization you could face some more complicated problems). But I'm not sure this would be enough in your case: during an AppDomain restart the static class is reloaded too, so as I understand it, this is not an option for you.
Maybe you could introduce a static dictionary for your workers instead of a plain object, so you can check the Ids of the documents being processed.
Another approach is to handle the Stopping event for your service, which is probably raised during the AppDomain restart; there you could introduce a CancellationToken and use it to stop all the work in such circumstances.
Also, as @fernando.reyes said, you could introduce a heavyweight lock structure called a mutex for synchronization, but this will degrade your performance.
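A sketch of that last option: a named Mutex, unlike Monitor or a static field, is an OS object and is therefore visible across AppDomains (and even across processes). The mutex name here is hypothetical:

using (var mutex = new Mutex(false, @"Global\MyService.CheckForWork")) // hypothetical name
{
    if (mutex.WaitOne(500)) // same 500 ms patience as the Monitor.TryEnter call
    {
        try
        {
            CheckForWork();
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
}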
TL;DR
The production stored procedure had not been updated in years. Workers were getting work they should never have gotten, so multiple workers were processing update requests.
I was finally able to find the time to properly set myself up locally to act as a production client through Visual Studio. Although I wasn't able to reproduce the problem exactly as I'd experienced it, I did accidentally stumble upon the issue.
Those who assumed that multiple workers were picking up the work were indeed correct, and that is something that should never have been able to happen, as each worker is unique in the work it does and requests.
It turns out that in our production environment the stored procedure that retrieves work based on the work type had not been updated in years (yes, years!) of deploys. Anything that checked for work automatically got update requests, which meant that when the Update worker and worker Foo checked at the same time, they both ended up with the same work.
Thankfully, the fix is on the database side and not a client update.
I'm making an application that will monitor the state of another process and restart it when it stops responding, exits, or throws an error.
However, I'm having trouble making it reliably check whether the process (a C++ console window) has stopped responding.
My code looks like this:
public void monitorserver()
{
    while (true)
    {
        server.StartInfo = new ProcessStartInfo(textbox_srcdsexe.Text, startstring);
        server.Start();
        log("server started");
        log("Monitor started.");

        while (server.Responding)
        {
            if (server.HasExited)
            {
                log("server exited, restarting.");
                break;
            }
            log("server is running: " + server.Responding.ToString());
            Thread.Sleep(1000);
        }

        log("Server stopped responding, terminating..");
        try
        { server.Kill(); }
        catch (Exception) { }
    }
}
The application I'm monitoring is Valve's Source Dedicated Server, running Garry's Mod, and I'm over stressing the physics engine to simulate it stopping responding.
However, the Process class never recognizes it as 'stopped responding'.
I know there are ways to directly query the Source server using its own protocol, but I'd like to keep this simple and universal (so that I can maybe use it for different applications in the future).
Any help appreciated
The Responding property indicates whether the process is running a Windows message loop which isn't hung.
As the documentation states,
If the process does not have a MainWindowHandle, this property returns true.
It is not possible to check whether an arbitrary process is doing an arbitrary thing, as you're trying to.
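A small sketch of what this means in practice; the process name is hypothetical, and the point is that the loop in the question can never see a hang for a window-less process:

var p = Process.GetProcessesByName("srcds")[0]; // hypothetical console process

// Responding only pings the main window's message queue. With no
// main window (MainWindowHandle == IntPtr.Zero), it always returns true.
bool hasWindow = p.MainWindowHandle != IntPtr.Zero;
Console.WriteLine("has window: {0}, responding: {1}", hasWindow, p.Responding);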
I have a fairly simple application timer program. The program launches a user-selected executable (chosen in a file dialog) and then terminates the process after a user-specified number of minutes. During testing I found that a crash occurs when I call the Process.Kill() method while the application is minimized to the system tray.
The executable in question is Fraps.exe, which I use frequently and which is the reason I wrote the app timer in the first place. I always minimize Fraps to the tray, and this is when the crash occurs.
My use of Kill() is straightforward enough...
while (true)
{
    // Keep checking whether the timer expired or the app was closed externally (i.e. by the user).
    if (dtEndTime <= DateTime.Now || p.HasExited)
    {
        if (!p.HasExited)
            p.Kill();
        break;
    }
    System.Threading.Thread.Sleep(500);
}
In searching for alternative methods to close an external application programmatically, I found only Close() and Kill() (CloseMainWindow is not helpful to me at all). I tried using Close(), which at least doesn't cause a crash when the app is minimized to the tray, but the app remains open and active.
One thing I noticed in a few posts regarding closing external applications was the comment "Personally I'd try to find a more graceful way of shutting it down though", made in THIS thread here at Stack Overflow (no offense to John). The thing is, I ran across comments like that on a few sites, with no attempt at describing what a graceful or elegant (or crash-free!!) method might be.
Any suggestions?
The crash experienced is not consistent, and I have few details to offer. I am unable to debug using VS2008, as I get a message like "cannot debug crashing application" (or something similar), and depending on what other programs I have running at the time, some of them also crash when Kill() is called (also programs running only in the tray), so I'm thinking this is some sort of problem specifically related to the system tray.
Is it possible that your code is being executed in a way such that the Kill() statement could sometimes be called twice? The docs for Process.Kill() say that Kill executes asynchronously, so when you call Kill(), execution continues on your main thread. Further, the docs state that Kill throws a Win32Exception if you call it on an app that is already in the process of closing. The docs also state that you can use WaitForExit() to wait for the process to exit. What happens if you put a call to WaitForExit() immediately after the call to Kill()? The loop looks OK (with the break statement), but is it possible that you have code entering that loop twice?
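A minimal sketch of that suggestion, applied to the loop from the question:

if (dtEndTime <= DateTime.Now || p.HasExited)
{
    if (!p.HasExited)
    {
        p.Kill();
        p.WaitForExit(); // block until the kill has actually completed
    }
    break;
}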
If that's not the problem, maybe there is another way to catch that exception:
Try hooking the AppDomain.CurrentDomain.UnhandledException event
(CurrentDomain is a static member)
The problem is that Kill runs asynchronously, so if it throws an exception, it occurs on a different thread; that's why your exception handler doesn't catch it. Further, I think an unhandled async exception (which is what I believe you have) causes an immediate unload of your application (which is what is happening).
Edit: Example code for hooking the UnhandledExceptionEvent
Here is a simple console application that demonstrates the use of AppDomain.UnhandledException:
using System;

public class MyClass
{
    public static void Main()
    {
        System.AppDomain.CurrentDomain.UnhandledException += MyExceptionHandler;
        System.Threading.ThreadPool.QueueUserWorkItem(DoWork);
        Console.ReadLine();
    }

    private static void DoWork(object state)
    {
        throw new ApplicationException("Test");
    }

    private static void MyExceptionHandler(object sender, System.UnhandledExceptionEventArgs e)
    {
        // get the message
        System.Exception exception = e.ExceptionObject as System.Exception;
        Console.WriteLine("Unhandled Exception Detected");
        if (exception != null)
            Console.WriteLine("Message: {0}", exception.Message);

        // for this console app, hold the window open until I press enter
        Console.ReadLine();
    }
}
My first thought is to put a try/catch block around the Kill() call and log the exception you get, if there is one. It might give you a clue what's wrong. Something like:
try
{
    if (!p.HasExited)
    {
        p.Kill();
    }
    break;
}
catch (Exception ex)
{
    System.Diagnostics.Trace.WriteLine(String.Format("Could not kill process {0}, exception {1}", p.ToString(), ex.ToString()));
}
I don't think I should claim this to be "THE ANSWER", but it's a decent workaround. Adding the following two lines of code...
p.WaitForInputIdle(10000);
am.hWnd = p.MainWindowHandle;
...stopped the crashing issue. These lines were placed immediately after the Process.Start() statement. Both lines are required, and in using them I opened the door to a few other questions that I will be investigating over the next few days. The first line is just an up-to-10-second wait for the started process to go 'idle' (i.e., finish starting). am.hWnd is a property of type IntPtr in my AppManagement class, and this is the only usage of both sides of the assignment. For lack of a better explanation, these two lines are analogous to a debouncing method.
I modified the while loop only slightly to allow for a call to CloseMainWindow(), which seems to be the better route to take; if it fails, I then Kill() the app:
while (true)
{
    // Keep checking whether the timer expired or the app was closed externally (i.e. by the user).
    if (dtEndTime <= DateTime.Now || p.HasExited)
    {
        try
        {
            if (!p.HasExited) // if the app hasn't already exited...
            {
                if (!p.CloseMainWindow()) // did the close message get sent?
                {
                    if (!p.HasExited) // has the app closed yet?
                    {
                        p.Kill();            // force the app to exit
                        p.WaitForExit(2000); // a few moments for the app to shut down
                    }
                }
                p.Close(); // free resources
            }
        }
        catch { /* blah blah */ }
        break;
    }
    System.Threading.Thread.Sleep(500);
}
My initial intention in getting the MainWindowHandle was to maximize/restore the app if minimized, and I might implement that in the near future. I decided to see how other programs that run like Fraps behave (i.e., with a UI but mostly running in the system tray, like messenger services such as Yahoo et al.). I tested with XFire, and nothing I did would return a value for the MainWindowHandle. Anyway, this is a separate issue, but one I found interesting.
P.S. A bit of credit to JMarsch: it was his suggestion re: Win32Exception that actually led me to finding this workaround. As unlikely as it seems, it's true.