I need an operation that runs every x seconds forever, and to achieve this I did:
protected void Application_Start()
{
InitialieOnce.Initialize();
}
public static class InitialieOnce
{
private static bool initialized = false;
public static void Initialize()
{
if (initialized == false)
{
initialized = true;
Thread t = new Thread(x => CheckStatus());
t.IsBackground = true;
t.Start();
}
}
private static void CheckStatus()
{
//My script goes here.
Thread.Sleep(8000);
CheckStatus();
}
}
After some time (about 5 minutes) I get this error:
"An unhandled exception of type 'System.StackOverflowException' occurred in mscorlib.dll"
Can this error be related to how I made my infinite loop?
If yes, is there a better way to achieve this, can I fix it, or is this code ok?
You are calling "CheckStatus" recursively. So every 8 seconds there will be one more entry on your call stack:
CheckStatus() -> CheckStatus() -> CheckStatus() -> and so on.. until you get a StackOverflowException.
Instead you should use
while (true)
{
/* Your Code */
Thread.Sleep(8000);
}
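Alternatively, a System.Threading.Timer does the same job without tying up a dedicated sleeping thread. A minimal sketch, with illustrative class and method names:
using System;
using System.Threading;
public static class StatusChecker
{
    // Keep a reference to the timer so it is not garbage collected.
    private static Timer _timer;
    public static void Start()
    {
        // Run CheckStatus immediately, then every 8 seconds, on a thread-pool thread.
        _timer = new Timer(_ => CheckStatus(), null, TimeSpan.Zero, TimeSpan.FromSeconds(8));
    }
    private static void CheckStatus()
    {
        // Your script goes here.
    }
}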
Please also note that by default IIS will unload your application after it has been idle for a while (the default application pool idle time-out is 20 minutes), and your thread will be killed with it.
Consider creating a Windows service for such a thing instead of abusing IIS.
I don't remember where, but I read that IIS isn't well suited for hosting long-lived services the way Java application servers are.
I would also suggest creating a Windows service for that, something like a daemon. You can create a service that simply calls a special action on your application at regular intervals; the rest of the work is done within your MVC application. Have a look at this post for an example.
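A hedged sketch of that idea follows; the URL, interval, and class name are placeholders rather than anything from the question. A timer hosted in the service pings an MVC action, and the action does the real work:
using System;
using System.Net.Http;
using System.Threading;
// Hypothetical worker hosted inside the Windows service.
public class KeepAlivePinger : IDisposable
{
    private static readonly HttpClient Client = new HttpClient();
    private Timer _timer;
    public void Start()
    {
        // Call the (hypothetical) MVC action every 8 seconds on a thread-pool thread.
        _timer = new Timer(_ =>
        {
            try
            {
                Client.GetAsync("http://myserver/maintenance/checkstatus").Wait();
            }
            catch (Exception)
            {
                // Log and keep ticking; one failed ping should not stop the timer.
            }
        }, null, TimeSpan.Zero, TimeSpan.FromSeconds(8));
    }
    public void Dispose()
    {
        if (_timer != null) _timer.Dispose();
    }
}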
Background info
I am writing an integration test that spawns a child process (a C# console app). The test counts some rows in the database after the process is spun up and again after the process is closed. The process is closed via process.Kill().
When the process is killed in this manner, it doesn't hit the Stop method within the process. I need that Stop method to run so it can stop threads and remove entries from the database in order for the test to pass.
Original Code
The console app process that I am spawning in my test:
static void Main(string[] args)
{
TaskManager tm = new TaskManagerProcess();
if (Environment.UserInteractive ||
(args.EmptyForNull().Any(a => a.Equals("-RunInteractive", StringComparison.OrdinalIgnoreCase) || a.Equals("/RunInteractive"))))
{
tm.ConsoleStart(args);
Console.WriteLine("Press [Enter] to shut down, any other key to mark");
while (true)
{
ConsoleKeyInfo key = Console.ReadKey(true);
if (key.Key == ConsoleKey.Enter)
break;
Console.WriteLine("========================================================");
Console.Out.Flush();
}
Console.WriteLine("Shutting down...");
tm.ConsoleStop();
}
else
{
ServiceBase.Run(tm);
}
}
}
The test code:
//count before starting child proc
int preCount;
//count after process is spun up
int runningCount;
//count after stopped
int postCount;
//Get an initial count of the logged in modules before svc host is started
user = ApiMethod.GetLoggedInUsers().Where(x => x.RecId == userRecID).FirstOrDefault();
preCount = user.LoggedInModules.Count;
Process proc = Helper.StartProcess(ConnectionBundle);
//Give process time to spin up leaders and workers
await Task.Delay(TimeSpan.FromSeconds(30));
//Get a count of modules after process is spun up
user = ApiMethod.GetLoggedInUsers().Where(x => x.RecId == userRecID).FirstOrDefault();
runningCount = user.LoggedInModules.Count;
//Write a line terminator to the child svc host process -
//this allows it to shutdown normally
Helper.ProcessInput.WriteLine();
Helper.ProcessInput.Close();
Helper.KillProcess(proc);
await Task.Delay(TimeSpan.FromSeconds(5));
//Get count of logged in modules after process is closed
user = ApiMethod.GetLoggedInUsers().Where(x => x.RecId == userRecID).FirstOrDefault();
postCount = user.LoggedInModules.Count;
Helper is a static class that sets up the process start info (including args) and starts the process. In Helper I've redirected the StandardInput and added a property, ProcessInput, which is set to the StandardInput of the created process.
My goal is to send input of "Enter" from the test to the spawned process so that it will break from the loop and call tm.ConsoleStop()
TaskManagerProcess is a private custom class that controls the process. It does not inherit from System.Diagnostics.Process. As an alternate approach, my test could interact with TaskManagerProcess directly. However, I can't make TaskManagerProcess public and I need to run TaskManagerProcess in its own AppDomain because calling ConsoleStop is disposing objects in the API that I need to finish the test.
Things I've Tried
[DllImport("Kernel32")]
private static extern bool SetConsoleCtrlHandler(CloseProcDelegate handler, bool add);
I tried adding a call to Kernel32.SetConsoleCtrlHandler (and the necessary delegate) to call ConsoleStop when the process exits. This doesn't seem to work when the process is killed via process.Kill().
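For reference, this is roughly the shape of a complete registration; the delegate signature follows the Win32 HandlerRoutine contract, and TaskManager/ConsoleStop are the names from the code above. Note that process.Kill() uses TerminateProcess, which ends the process without raising any console control event, so the handler never runs in that path anyway.
// Requires: using System.Runtime.InteropServices;
// The handler delegate Kernel32 expects: it receives a control event code
// (Ctrl+C, window close, shutdown, ...) and returns true once handled.
private delegate bool CloseProcDelegate(int ctrlType);
[DllImport("Kernel32")]
private static extern bool SetConsoleCtrlHandler(CloseProcDelegate handler, bool add);
// Keep a reference so the GC cannot collect the delegate while unmanaged
// code still holds a pointer to it.
private static CloseProcDelegate closeHandler;
private static void RegisterShutdownHandler(TaskManager tm)
{
    closeHandler = ctrlType =>
    {
        tm.ConsoleStop();
        return true;
    };
    SetConsoleCtrlHandler(closeHandler, true);
}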
With the original process code, I noticed an exception when I wrote to the StandardInput. The exception message told me to use Console.Read instead of Console.ReadKey() (ReadKey can't be used when standard input is redirected). This actually works intermittently: I can sometimes hit a breakpoint on int cKey = Console.Read() (with the debugger attached to the child process), but other times the breakpoint is never hit.
while (true)
{
//Changing this to Console.Read instead of Console.ReadKey
//Allows us to send redirected input to process?
int cKey = Console.Read();
if ((ConsoleKey)cKey == ConsoleKey.Enter)
break;
Console.WriteLine("========================================================");
Console.Out.Flush();
}
Finally, I tried interacting with TaskManagerProcess directly. I made the private class internal, and marked the internals visible to my test assembly. I cannot make the class public.
When I go this route, calling tm.ConsoleStop() blows away some objects in my API, so I can't check the count after this method is called. For this reason, I thought I would create a new AppDomain and call AppDomain.CreateInstanceAndUnwrap() on the TaskManagerProcess class. However, I get an exception here; I believe it's due to the fact that the class is internal.
I am really stuck at this point! Any help is appreciated and thanks for taking the time to read this!
Edit
I created a demo project here that shows what I am trying to do and has both approaches in the Test method.
Initially I thought I couldn't call AppDomain.CreateInstanceAndUnwrap() because the TaskManagerProcess class was internal. However, after playing with my demo project, I think I just can't load the assembly.
I'm guessing here, but I believe your TaskManagerProcess is a service application. If it is not, please ignore this. If it is, please include details like that in your question; debugging service applications can be complicated, believe me, I've been there. But before proceeding, some more advice.
Test the methods in your modules, not whole running programs, as Michael Randall just said.
Unless absolutely necessary, don't do tests against a database. Mock whatever you need to test your code.
You should go back to your alternate approach of interacting with TaskManagerProcess directly. From the code of your console app, the only working method I see called is tm.ConsoleStart(args); the rest inside the loop is console writing and reading. So you can't change the access level of that class; again, I've been there. What I have done in the past to overcome this is to use conditional compilation to create a kind of public facade in my private or internal modules.
Suppose you have:
internal class TaskManagerContainer
{
private class TaskManagerProcess
{
internal void Start()
{
// stuff
}
private void DoSomething(int arg)
{
// more stuff
}
}
}
Change it like this:
#define TEST
// Symbol TEST can also be defined using the GUI of your IDE or compiler /define option
internal class TaskManagerContainer
{
//
#if TEST
public class TaskManagerProcess
#else
private class TaskManagerProcess
#endif
{
internal void Start()
{
// stuff
}
private void DoSomething(int arg)
{
// more stuff
}
#region Methods Facade for Testing
#if TEST
public void Start_Test()
{
Start();
}
public void DoSomething_Test(int arg) // public so the facade is reachable from the test assembly
{
DoSomething(arg);
}
#endif
#endregion
}
}
I really hope this helps you make the methods visible to the test assembly without blowing away the objects in your API.
I think I got it with a brute force approach.
while (!testProcess.HasExited)
{
testProcess.StandardInput.WriteLine();
}
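A slightly safer variant of the same idea, so the loop can't spin forever if the child ignores its standard input; the retry count and wait time below are arbitrary:
// Send Enter a limited number of times and give the process a moment to exit
// between attempts, instead of writing to stdin in a tight loop.
for (int attempt = 0; attempt < 10 && !testProcess.HasExited; attempt++)
{
    testProcess.StandardInput.WriteLine();
    testProcess.WaitForExit(1000); // wait up to one second per attempt
}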
Thanks everyone for the input!
I need to test if there's any memory leak in our application and monitor to see if memory usage increases too much while processing the requests.
I'm trying to develop some code to make multiple simultaneous calls to our API/web service method. This API method is not asynchronous and takes some time to complete its operation.
I've done a lot of research on Tasks, Threads and Parallelism, but so far I've had no luck. The problem is that even after trying all the solutions below, the result is always the same: it appears to process only two requests at a time.
Tried:
-> Creating tasks inside a simple for loop and starting them with and without setting them with TaskCreationOptions.LongRunning
-> Creating threads inside a simple for loop and starting them with and without high priority
-> Creating a list of actions on a simple for loop and starting them using
Parallel.ForEach(list, options, item => item.Invoke())
-> Running directly inside a Parallel.For loop (below)
-> Running TPL methods with and without Options and TaskScheduler
-> Tried with different values for MaxParallelism and maximum threads
-> Checked this post too, but it didn't help either. (Could I be missing something?)
-> Checked some other posts here on Stack Overflow, but they had F# solutions that I don't know how to properly translate to C#. (I've never used F#...)
(TaskScheduler class taken from MSDN)
Here's the basic structure that I have:
public class Test
{
Data _data;
String _url;
public Test(Data data, string url)
{
_data = data;
_url = url;
}
public ReturnData Execute()
{
ReturnData returnData;
using(var ws = new WebService())
{
ws.Url = _url;
ws.Timeout = 600000;
var wsReturn = ws.LongRunningMethod(_data);
// Basically convert wsReturn to my method return, with some logic if/else etc
}
return returnData;
}
}
sealed class ThreadTaskScheduler : TaskScheduler, IDisposable
{
// The runtime decides how many tasks to create for the given set of iterations, loop options, and scheduler's max concurrency level.
// Tasks will be queued in this collection
private BlockingCollection<Task> _tasks = new BlockingCollection<Task>();
// Maintain an array of threads. (Feel free to bump up _n.)
private readonly int _n = 100;
private Thread[] _threads;
public ThreadTaskScheduler()
{
_threads = new Thread[_n];
// Create unstarted threads based on the same inline delegate
for (int i = 0; i < _n; i++)
{
_threads[i] = new Thread(() =>
{
// The following loop blocks until items become available in the blocking collection.
// Then one thread is unblocked to consume that item.
foreach (var task in _tasks.GetConsumingEnumerable())
{
TryExecuteTask(task);
}
});
// Start each thread
_threads[i].IsBackground = true;
_threads[i].Start();
}
}
// This method is invoked by the runtime to schedule a task
protected override void QueueTask(Task task)
{
_tasks.Add(task);
}
// The runtime will probe if a task can be executed in the current thread.
// By returning false, we direct all tasks to be queued up.
protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
{
return false;
}
public override int MaximumConcurrencyLevel { get { return _n; } }
protected override IEnumerable<Task> GetScheduledTasks()
{
return _tasks.ToArray();
}
// Dispose is not thread-safe with other members.
// It may only be used when no more tasks will be queued
// to the scheduler. This implementation will block
// until all previously queued tasks have completed.
public void Dispose()
{
if (_threads != null)
{
_tasks.CompleteAdding();
for (int i = 0; i < _n; i++)
{
_threads[i].Join();
_threads[i] = null;
}
_threads = null;
_tasks.Dispose();
_tasks = null;
}
}
}
And the test code itself:
private void button2_Click(object sender, EventArgs e)
{
var maximum = 100;
var options = new ParallelOptions
{
MaxDegreeOfParallelism = 100,
TaskScheduler = new ThreadTaskScheduler()
};
// To prevent UI blocking
Task.Factory.StartNew(() =>
{
Parallel.For(0, maximum, options, i =>
{
var data = new Data();
// Fill data
var test = new Test(data, _url); //_url is pre-defined
var ret = test.Execute();
// Check return and display on screen
var now = DateTime.Now.ToString("HH:mm:ss");
var newText = $"{Environment.NewLine}[{now}] - {ret.ReturnId}) {ret.ReturnDescription}";
AppendTextBox(newText, ref resultTextBox);
});
});
}
public void AppendTextBox(string value, ref TextBox textBox)
{
if (InvokeRequired)
{
this.Invoke(new ActionRef<string, TextBox>(AppendTextBox), value, textBox);
return;
}
textBox.Text += value;
}
And the result that I get is basically this:
[10:08:56] - (0) OK
[10:08:56] - (0) OK
[10:09:23] - (0) OK
[10:09:23] - (0) OK
[10:09:49] - (0) OK
[10:09:50] - (0) OK
[10:10:15] - (0) OK
[10:10:16] - (0) OK
etc
As far as I know there's no limitation on the server side. I'm relatively new to the Parallel/Multitasking world. Is there any other way to do this? Am I missing something?
(I simplified all the code for clarity and I believe the provided code is enough to picture the scenarios mentioned. I also didn't post the application code, but it's a simple WinForms screen just to make the calls and show the results. If any code is somehow relevant, please let me know and I can edit and post it too.)
Thanks in advance!
EDIT1: I checked the server logs and it's receiving the requests two at a time, so it's indeed something related to sending them, not receiving them.
Could it be a network problem/limitation related to how the framework manages the requests/connections? Or something with the network itself (unrelated to .NET)?
EDIT2: Forgot to mention, it's a SOAP webservice.
EDIT3: One of the properties that I send (inside data) needs to change for each request.
EDIT4: I noticed that there's always an interval of ~25 secs between each pair of requests, if that's relevant.
I would recommend not to reinvent the wheel and just use one of the existing solutions:
Most obvious choice: if your Visual Studio license allows it, you can use the MS Load Testing Framework; most likely you won't even have to write a single line of code: How to: Create a Web Service Test
SoapUI is a free and open source web services testing tool; it has some limited load testing capabilities.
If for some reason SoapUI is not suitable (e.g. you need to run load tests in clustered mode from several hosts, or you need more advanced reporting), you can use Apache JMeter - a free and open source multi-protocol load testing tool which supports web service load testing as well.
A good way to create load tests without writing your own project is to use this service: https://loader.io/targets
It is free for small tests; you can send POST parameters and headers, and you get nice reporting.
Isn't the "two requests at a time" behaviour the result of the default maxconnection = 2 limit in connectionManagement?
<configuration>
<system.net>
<connectionManagement>
<add address = "http://www.contoso.com" maxconnection = "4" />
<add address = "*" maxconnection = "2" />
</connectionManagement>
</system.net>
</configuration>
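On the .NET Framework HTTP stack the same limit can also be raised in code before the first request goes out; 100 below is just an example matching the MaxDegreeOfParallelism used in the question:
using System.Net;
// Must run before the first outgoing request; ServicePoints that already
// exist keep the connection limit they were created with.
ServicePointManager.DefaultConnectionLimit = 100;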
My favorite load testing library is NBomber. It has an easy and powerful API, realistic user simulations, and provides you with nice HTML reports about latency and requests per second.
I used it to test my API and wrote an article about how I did it.
I'm writing a program that will analyze changes in the stock market.
Every time the candles on the stock charts are updated, my algorithm scans every chart for certain pieces of data. I've noticed that this process takes about 0.6 seconds each time, freezing my application. It's not getting stuck in a loop, and there are no other problems like exceptions slowing it down; it just takes a bit of time.
To solve this, I'm trying to see if I can thread the algorithm.
In order to call the algorithm to check over the charts, I have to call this:
checkCharts.RunAlgo();
As a thread needs a method to run, I'm trying to figure out how to run RunAlgo() on one, but I'm not having any luck.
How can I have a thread run this method in my checkCharts object? Due to back propagating data, I can't start a new checkCharts object. I have to continue using that method from the existing object.
EDIT:
I tried this:
M4.ALProj.BotMain checkCharts = new ALProj.BotMain();
Thread algoThread = new Thread(checkCharts.RunAlgo);
It tells me that the checkCharts part of checkCharts.RunAlgo gives me "An object reference is required for the non-static field, method, or property 'M4.ALProj.BotMain'."
I was going to put algoThread.Start(); inside a specific if statement. Any idea what I did wrong there?
The answer to your question is actually very simple:
Thread myThread = new Thread(checkCharts.RunAlgo);
myThread.Start();
However, the more complex part is to make sure that when the method RunAlgo accesses variables inside the checkCharts object, this happens in a thread-safe manner.
See Thread Synchronization for help on how to synchronize access to data from multiple threads.
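As an example, here is a minimal sketch of that kind of synchronization, assuming a hypothetical piece of shared state inside BotMain (the field and method names are made up for illustration):
public class BotMain
{
    private readonly object _sync = new object();
    private decimal _latestPrice; // hypothetical shared state updated by the chart feed
    // Called from the data/UI thread whenever a candle updates.
    public void UpdatePrice(decimal price)
    {
        lock (_sync) { _latestPrice = price; }
    }
    // Called from the worker thread.
    public void RunAlgo()
    {
        decimal snapshot;
        lock (_sync) { snapshot = _latestPrice; }
        // ... analyze using the snapshot taken under the lock ...
    }
}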
I would rather use Task.Run than Thread. Task.Run utilizes the ThreadPool which has been optimized to handle various loads effectively. You will also get all the goodies of Task.
await Task.Run(() => checkCharts.RunAlgo());
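For instance, from an async event handler, so the UI thread stays free while the algorithm runs on a thread-pool thread (the handler name here is just an example):
// Hypothetical UI event handler; the await frees the UI thread while RunAlgo
// executes on the thread pool, then execution resumes on the UI thread.
private async void OnCandlesUpdated(object sender, EventArgs e)
{
    await Task.Run(() => checkCharts.RunAlgo());
    // Back on the UI thread here; safe to update controls with the results.
}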
Try this code block. It's a basic boilerplate, but you can build on and extend it quite easily.
//If M4.ALProj.BotMain needs to be recreated for each run then comment this line and uncomment the one in DoRunParallel()
private static M4.ALProj.BotMain checkCharts = new M4.ALProj.BotMain();
private static object SyncRoot = new object();
private static System.Threading.Thread algoThread = null;
private static bool ReRunOnComplete = false;
public static void RunParallel()
{
lock (SyncRoot)
{
if (algoThread == null)
{
System.Threading.ThreadStart TS = new System.Threading.ThreadStart(DoRunParallel);
algoThread = new System.Threading.Thread(TS);
algoThread.Start(); //Start the worker thread; without this the algo never runs
}
else
{
//Received a recalc call while still calculating
ReRunOnComplete = true;
}
}
}
public static void DoRunParallel()
{
bool ReRun = false;
try
{
//If M4.ALProj.BotMain needs to be recreated for each run then uncomment this line and comment private static version above
//M4.ALProj.BotMain checkCharts = new M4.ALProj.BotMain();
checkCharts.RunAlgo();
}
finally
{
lock (SyncRoot)
{
algoThread = null;
ReRun = ReRunOnComplete;
ReRunOnComplete = false;
}
}
if (ReRun)
{
RunParallel();
}
}
I have a Silverlight 5 app that depends on several asynchronous calls to web services to populate the attributes of newly created graphics. I am trying to find a way to handle those asynchronous calls synchronously. I have tried the suggestions listed in this article and this one. I have tried the many suggestions regarding the Dispatcher object. None have worked well, so I am clearly missing something...
Here is what I have:
public partial class MainPage : UserControl {
AutoResetEvent waitHandle = new AutoResetEvent(false);
private void AssignNewAttributeValuesToSplitPolygons(List<Graphic> splitGraphics)
{
for (int i = 0; i < splitGraphics.Count; i++)
{
Graphic g = splitGraphics[i];
Thread lookupThread1 = new Thread(new ParameterizedThreadStart(SetStateCountyUtm));
lookupThread1.Start(g);
waitHandle.WaitOne();
Thread lookupThread2 = new Thread(new ParameterizedThreadStart(SetCongressionalDistrict));
lookupThread2.Start(g);
waitHandle.WaitOne();
}
}
private void SetStateCountyUtm(object graphic)
{
this.Dispatcher.BeginInvoke(delegate() {
WrapperSetStateCountyUtm((Graphic)graphic);
});
}
private void WrapperSetStateCountyUtm(Graphic graphic)
{
GISQueryEngine gisQEngine = new GISQueryEngine();
gisQEngine.StateCountyUtmLookupCompletedEvent += new GISQueryEngine.StateCountyUtmLookupEventHandler(gisQEngine_StateCountyUtmLookupCompletedEvent);
gisQEngine.PerformStateCountyUtmQuery(graphic.Geometry, graphic.Attributes["clu_number"].ToString());
}
void gisQEngine_StateCountyUtmLookupCompletedEvent(object sender, StateCountyUtmLookupCompleted stateCountyUtmLookupEventArgs)
{
string fred = stateCountyUtmLookupEventArgs.
waitHandle.Set();
}
}
public class GISQueryEngine
{
public void PerformStateCountyUtmQuery(Geometry inSpatialQueryGeometry, string cluNumber)
{
QueryTask queryTask = new QueryTask(stateandCountyServiceURL);
queryTask.ExecuteCompleted += new EventHandler<QueryEventArgs>(queryTask_StateCountyLookupExecuteCompleted);
queryTask.Failed += new EventHandler<TaskFailedEventArgs>(queryTask_StateCountyLookupFailed);
Query spatialQueryParam = new ESRI.ArcGIS.Client.Tasks.Query();
spatialQueryParam.OutFields.AddRange(new string[] { "*" });
spatialQueryParam.ReturnGeometry = false;
spatialQueryParam.Geometry = inSpatialQueryGeometry;
spatialQueryParam.SpatialRelationship = SpatialRelationship.esriSpatialRelIntersects;
spatialQueryParam.OutSpatialReference = inSpatialQueryGeometry.SpatialReference;
queryTask.ExecuteAsync(spatialQueryParam, cluNumber);
}
//and a whole bunch of other stuff I can add if needed
}
If I leave the waitHandle.WaitOne() call uncommented, no code beyond that call is ever reached, at least as far as I can see with the step-through debugger. The application just hangs.
If I comment out the waitHandle.WaitOne(), everything runs just fine - except asynchronously. In other words, when the app reads the attribute values of the new graphics, those values may or may not be set depending on how quickly the async methods return.
Thanks for any help.
It's going to be rather difficult to work through a problem like this, as there are a few issues you'll need to address. Silverlight is asynchronous by nature, so forcing it to work synchronously is usually a very bad idea; you shouldn't do it unless it's absolutely necessary. (The hang you're seeing is a deadlock: waitHandle.WaitOne() blocks the UI thread, so the delegate queued with Dispatcher.BeginInvoke - the one that eventually calls waitHandle.Set() - never gets a chance to run.)
Is there a reason you cannot wait for an async callback? From what I see, you appear to be making two calls for every state that is being rendered. I'm guessing the concern is that one call must complete before the second is made? In scenarios like this, I would kick off the first async call and, in its response, kick off the second call, passing along the result you want to use from the first call. The second call's response updates the provided references.
However, in cases where you've got a significant number of states to update, this results in a rather chatty and difficult-to-debug set of calls. I'd really be looking at creating a service call that can accept a set of state references and pass back a data structure with the values to be updated, all in one hit (or at least grouping them into one call per state if the batch would be too time-consuming and you want to render/interact with visual elements as they load up).
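A hedged sketch of that chaining, reusing the wrapper and event names from the question; the second lookup is represented by a hypothetical StartCongressionalDistrictLookup helper:
private void AssignNewAttributeValuesToSplitPolygons(List<Graphic> splitGraphics)
{
    foreach (Graphic g in splitGraphics)
    {
        Graphic graphic = g; // capture a per-iteration copy for the lambda
        GISQueryEngine gisQEngine = new GISQueryEngine();
        gisQEngine.StateCountyUtmLookupCompletedEvent += (sender, args) =>
        {
            // First lookup finished for this graphic: only now start the second one.
            StartCongressionalDistrictLookup(graphic);
        };
        gisQEngine.PerformStateCountyUtmQuery(graphic.Geometry, graphic.Attributes["clu_number"].ToString());
    }
}
No threads and no WaitOne are needed; each step simply starts from the completion callback of the previous one.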
We have a C# WebMethod that is called synchronously by a Delphi CGI (don't ask!). This works fine except when we switch to our disaster recovery environment, which runs a lot slower. The problem is that the Delphi WinInet web request has a timeout of 30 seconds, which cannot be altered due to a Microsoft-acknowledged bug. In the disaster recovery environment, the C# WebMethod can take longer than 30 seconds, and the Delphi CGI falls flat on its face.
We have now coded the C# WebMethod to recognise the environment it is in, and if it is in disaster recovery mode then we call the subsequent method in a thread and immediately respond to the CGI so that it is well within the 30 seconds. This makes sense in theory, but we are finding that these threaded calls are erratic and are not executing 100% of the time. We get about a 70% success rate.
This is clearly unacceptable and we have to get it to 100%. The threads are being started with Delegate.BeginInvoke(), which we have used successfully in other contexts, but it doesn't like this for some reason... There is obviously no EndInvoke(), because we need to respond immediately to the CGI and that's the end of the WebMethod.
Here is a simplified version of the WebMethod:
[WebMethod]
public string NewBusiness(string myParam)
{
if (InDisasterMode())
{
// Thread the standard method call
MethodDelegate myMethodDelegate = new MethodDelegate(ProcessNewBusiness);
myMethodDelegate.BeginInvoke(myParam, null, null);
// Return 'ok' to caller immediately
return "ok";
}
else
{
// Call standard method synchronously to get result
return ProcessNewBusiness(myParam);
}
}
Is there some reason that this kind of 'fire and forget' call would fail if being used in a WebService WebMethod environment? If so then is there an alternative?
Unfortunately altering the Delphi side is not an option for us - the solution must be in the C# side.
Any help you could provide would be much appreciated.
Do you use the HttpContext in your method? If so, you should store it in a local variable first... Also, I'd just use ThreadPool.QueueUserWorkItem.
Example:
[WebMethod]
public string NewBusiness(string myParam)
{
if (InDisasterMode())
{
// Only if you actually need this...
HttpContext context = HttpContext.Current;
// Thread the standard method call
ThreadPool.QueueUserWorkItem(delegate
{
HttpContext.Current = context;
ProcessNewBusiness(myParam);
});
return "ok";
}
else
{
// Call standard method synchronously to get result
return ProcessNewBusiness(myParam);
}
}
As the documentation says, EndInvoke should always be called, so you have to create a helper for doing fire-and-forget operations, like this one:
http://www.reflectionit.nl/Blog/default.aspx?guid=ec2011f9-7e8a-4d7d-8507-84837480092f
Here is the code:
public class AsyncHelper {
delegate void DynamicInvokeShimProc(Delegate d, object[] args);
static DynamicInvokeShimProc dynamicInvokeShim = new DynamicInvokeShimProc(DynamicInvokeShim);
static AsyncCallback dynamicInvokeDone = new AsyncCallback(DynamicInvokeDone);
public static void FireAndForget(Delegate d, params object[] args) {
dynamicInvokeShim.BeginInvoke(d, args, dynamicInvokeDone, null);
}
static void DynamicInvokeShim(Delegate d, object[] args) {
d.DynamicInvoke(args);
}
static void DynamicInvokeDone(IAsyncResult ar) {
dynamicInvokeShim.EndInvoke(ar);
}
}
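In the WebMethod above, the fire-and-forget branch would then look roughly like this (reusing the MethodDelegate from the question):
// Inside NewBusiness, instead of calling BeginInvoke on the delegate directly:
MethodDelegate myMethodDelegate = new MethodDelegate(ProcessNewBusiness);
AsyncHelper.FireAndForget(myMethodDelegate, myParam);
return "ok";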
We use this code successfully in our application, although it is not a web application.