I have a Silverlight 5 app that depends on several asynchronous calls to web services to populate the attributes of newly created graphics. I am trying to find a way to handle those asynchronous calls synchronously. I have tried the suggestions listed in this article and this one. I have tried the many suggestions regarding the Dispatcher object. None have worked well, so I am clearly missing something...
Here is what I have:
public partial class MainPage : UserControl {
AutoResetEvent waitHandle = new AutoResetEvent(false);
private void AssignNewAttributeValuesToSplitPolygons(List<Graphic> splitGraphics)
{
for (int i = 0; i < splitGraphics.Count; i++)
{
Graphic g = splitGraphics[i];
Thread lookupThread1 = new Thread(new ParameterizedThreadStart(SetStateCountyUtm));
lookupThread1.Start(g);
waitHandle.WaitOne();
Thread lookupThread2 = new Thread(new ParameterizedThreadStart(SetCongressionalDistrict));
lookupThread2.Start(g);
waitHandle.WaitOne();
}
}
private void SetStateCountyUtm(object graphic)
{
this.Dispatcher.BeginInvoke(delegate() {
WrapperSetStateCountyUtm((Graphic)graphic);
});
}
private void WrapperSetStateCountyUtm(Graphic graphic)
{
GISQueryEngine gisQEngine = new GISQueryEngine();
gisQEngine.StateCountyUtmLookupCompletedEvent += new GISQueryEngine.StateCountyUtmLookupEventHandler(gisQEngine_StateCountyUtmLookupCompletedEvent);
gisQEngine.PerformStateCountyUtmQuery(graphic.Geometry, graphic.Attributes["clu_number"].ToString());
}
void gisQEngine_StateCountyUtmLookupCompletedEvent(object sender, StateCountyUtmLookupCompleted stateCountyUtmLookupEventArgs)
{
string fred = stateCountyUtmLookupEventArgs.
waitHandle.Set();
}
}
public class GISQueryEngine
{
public void PerformStateCountyUtmQuery(Geometry inSpatialQueryGeometry, string cluNumber)
{
QueryTask queryTask = new QueryTask(stateandCountyServiceURL);
queryTask.ExecuteCompleted += new EventHandler<QueryEventArgs>(queryTask_StateCountyLookupExecuteCompleted);
queryTask.Failed += new EventHandler<TaskFailedEventArgs>(queryTask_StateCountyLookupFailed);
Query spatialQueryParam = new ESRI.ArcGIS.Client.Tasks.Query();
spatialQueryParam.OutFields.AddRange(new string[] { "*" });
spatialQueryParam.ReturnGeometry = false;
spatialQueryParam.Geometry = inSpatialQueryGeometry;
spatialQueryParam.SpatialRelationship = SpatialRelationship.esriSpatialRelIntersects;
spatialQueryParam.OutSpatialReference = inSpatialQueryGeometry.SpatialReference;
queryTask.ExecuteAsync(spatialQueryParam, cluNumber);
}
//and a whole bunch of other stuff i can add if needed
}
If I leave the 'waitHandle.WaitOne()' call uncommented, no code beyond that call is ever reached, at least as far as I can see with the step-through debugger. The application just hangs.
If I comment out the 'waitHandle.WaitOne()', everything runs just fine - except asynchronously. In other words, when the app reads the attribute values of the new graphics, those values may or may not be set yet, depending on how quickly the async methods return.
Thanks for any help.
It's going to be rather difficult to work through a problem like this, as there are a few issues you'll need to address. Silverlight is asynchronous by nature, so forcing it to work synchronously is usually a very bad idea; you shouldn't do it unless it's absolutely necessary. (Note that the hang follows directly from the code shown: the UI thread blocks in waitHandle.WaitOne(), so the delegate queued with Dispatcher.BeginInvoke never runs, the lookup never starts, and waitHandle.Set() is never called.)
Is there a reason that you cannot wait for an async callback? From what I see you appear to be making two calls for every state that is being rendered. I'm guessing the concern is that one call must complete before the second is made? In scenarios like this, I would kick off the first async call, and in its response kick off the second call, passing along the result you want to use from the first call. The second call's response then updates the provided references.
However, in cases where you've got a significant number of states to update, this results in a rather chatty and difficult-to-debug set of calls. I'd really be looking at creating a service call that can accept a set of state references and pass back a data structure with all the values to be updated in one hit (or at least grouping them up to one call per state if the batch would be too time-consuming and you want to render/interact with visual elements as they load up).
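As a rough sketch of that chaining idea, reusing the engine and event names from the question (the congressional-district event, its query method, and the two Apply* helpers are assumed names, not taken from your code):

private void LookupAttributesFor(Graphic g)
{
    var engine = new GISQueryEngine();

    // First lookup; its completion handler kicks off the second lookup,
    // so the second call only ever starts after the first has finished.
    engine.StateCountyUtmLookupCompletedEvent += (s, stateArgs) =>
    {
        ApplyStateCountyUtm(g, stateArgs);                       // hypothetical helper that writes the attributes

        engine.CongressionalDistrictLookupCompletedEvent += (s2, districtArgs) =>
        {
            ApplyCongressionalDistrict(g, districtArgs);         // hypothetical helper
        };
        engine.PerformCongressionalDistrictQuery(g.Geometry);    // assumed method name
    };

    engine.PerformStateCountyUtmQuery(g.Geometry, g.Attributes["clu_number"].ToString());
}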
I need to test if there's any memory leak in our application and monitor to see if memory usage increases too much while processing the requests.
I'm trying to develop some code to make multiple simultaneous calls to our api/webservice method. This api method is not asynchronous and takes some time to complete its operation.
I've done a lot of research on Tasks, Threads and Parallelism, but so far I've had no luck. The problem is, even after trying all the solutions below, the result is always the same: it appears to process only two requests at a time.
Tried:
-> Creating tasks inside a simple for loop and starting them with and without setting them with TaskCreationOptions.LongRunning
-> Creating threads inside a simple for loop and starting them with and without high priority
-> Creating a list of actions on a simple for loop and starting them using
Parallel.ForEach(list, options, item => item.Invoke())
-> Running directly inside a Parallel.For loop (below)
-> Running TPL methods with and without Options and TaskScheduler
-> Tried with different values for MaxParallelism and maximum threads
-> Checked this post too, but it didn't help either. (Could I be missing something?)
-> Checked some other posts here on Stack Overflow, but they offered F# solutions that I don't know how to properly translate to C#. (I've never used F#...)
(TaskScheduler class taken from MSDN)
Here's the basic structure that I have:
public class Test
{
Data _data;
String _url;
public Test(Data data, string url)
{
_data = data;
_url = url;
}
public ReturnData Execute()
{
ReturnData returnData;
using(var ws = new WebService())
{
ws.Url = _url;
ws.Timeout = 600000;
var wsReturn = ws.LongRunningMethod(_data);
// Basically convert wsReturn to my method return, with some logic if/else etc
}
return returnData;
}
}
sealed class ThreadTaskScheduler : TaskScheduler, IDisposable
{
// The runtime decides how many tasks to create for the given set of iterations, loop options, and scheduler's max concurrency level.
// Tasks will be queued in this collection
private BlockingCollection<Task> _tasks = new BlockingCollection<Task>();
// Maintain an array of threads. (Feel free to bump up _n.)
private readonly int _n = 100;
private Thread[] _threads;
public ThreadTaskScheduler()
{
_threads = new Thread[_n];
// Create unstarted threads based on the same inline delegate
for (int i = 0; i < _n; i++)
{
_threads[i] = new Thread(() =>
{
// The following loop blocks until items become available in the blocking collection.
// Then one thread is unblocked to consume that item.
foreach (var task in _tasks.GetConsumingEnumerable())
{
TryExecuteTask(task);
}
});
// Start each thread
_threads[i].IsBackground = true;
_threads[i].Start();
}
}
// This method is invoked by the runtime to schedule a task
protected override void QueueTask(Task task)
{
_tasks.Add(task);
}
// The runtime will probe if a task can be executed in the current thread.
// By returning false, we direct all tasks to be queued up.
protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
{
return false;
}
public override int MaximumConcurrencyLevel { get { return _n; } }
protected override IEnumerable<Task> GetScheduledTasks()
{
return _tasks.ToArray();
}
// Dispose is not thread-safe with other members.
// It may only be used when no more tasks will be queued
// to the scheduler. This implementation will block
// until all previously queued tasks have completed.
public void Dispose()
{
if (_threads != null)
{
_tasks.CompleteAdding();
for (int i = 0; i < _n; i++)
{
_threads[i].Join();
_threads[i] = null;
}
_threads = null;
_tasks.Dispose();
_tasks = null;
}
}
}
And the test code itself:
private void button2_Click(object sender, EventArgs e)
{
var maximum = 100;
var options = new ParallelOptions
{
MaxDegreeOfParallelism = 100,
TaskScheduler = new ThreadTaskScheduler()
};
// To prevent UI blocking
Task.Factory.StartNew(() =>
{
Parallel.For(0, maximum, options, i =>
{
var data = new Data();
// Fill data
var test = new Test(data, _url); //_url is pre-defined
var ret = test.Execute();
// Check return and display on screen
var now = DateTime.Now.ToString("HH:mm:ss");
var newText = $"{Environment.NewLine}[{now}] - {ret.ReturnId}) {ret.ReturnDescription}";
AppendTextBox(newText, ref resultTextBox);
});
});
}
public void AppendTextBox(string value, ref TextBox textBox)
{
if (InvokeRequired)
{
this.Invoke(new ActionRef<string, TextBox>(AppendTextBox), value, textBox);
return;
}
textBox.Text += value;
}
And the result that I get is basically this:
[10:08:56] - (0) OK
[10:08:56] - (0) OK
[10:09:23] - (0) OK
[10:09:23] - (0) OK
[10:09:49] - (0) OK
[10:09:50] - (0) OK
[10:10:15] - (0) OK
[10:10:16] - (0) OK
etc
As far as I know there's no limitation on the server side. I'm relatively new to the Parallel/Multitasking world. Is there any other way to do this? Am I missing something?
(I simplified all the code for clarity and I believe that the provided code is enough to picture the mentioned scenarios. I also didn't post the application code, but it's a simple WinForms screen just to call and show results. If any code is somehow relevant, please let me know, I can edit and post it too.)
Thanks in advance!
EDIT1: I checked on the server logs that it's receiving the requests two by two, so it's indeed something related to sending them, not receiving.
Could it be a network problem/limitation related to how the framework manages the requests/connections? Or something with the network at all (unrelated to .net)?
EDIT2: Forgot to mention, it's a SOAP webservice.
EDIT3: One of the properties that I send (inside data) needs to change for each request.
EDIT4: I noticed that there's always an interval of ~25 seconds between each pair of requests, in case it's relevant.
I would recommend not to reinvent the wheel and just use one of the existing solutions:
Most obvious choice: if your Visual Studio license allows it, you can use the MS Load Testing Framework; most likely you won't even have to write a single line of code: How to: Create a Web Service Test
SoapUI is a free and open source web services testing tool; it has some limited load testing capabilities
If for some reasons SoapUI is not suitable (i.e. you need to run load tests in clustered mode from several hosts or you need more enhanced reporting) you can use Apache JMeter - free and open source multiprotocol load testing tool which supports web services load testing as well.
A good way to create load tests without writing your own project is to use this service: https://loader.io/targets
It is free for small tests; you can POST parameters and headers, and you get nice reporting.
Isn't the "two requests at a time" behavior the result of the default maxconnection=2 limit on connectionManagement?
<configuration>
<system.net>
<connectionManagement>
<add address = "http://www.contoso.com" maxconnection = "4" />
<add address = "*" maxconnection = "2" />
</connectionManagement>
</system.net>
</configuration>
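If editing the config file isn't convenient, the same limit can be raised in code via ServicePointManager before the first request goes out (a minimal sketch; the value 100 is just an example):

using System.Net;

// The per-host HTTP connection limit defaults to 2 for .NET Framework client apps.
// Set it once at application startup, before any requests are issued.
ServicePointManager.DefaultConnectionLimit = 100;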
My favorite load testing library is NBomber. It has an easy and powerful API, realistic user simulations, and provides you with nice HTML reports about latency and requests per second.
I used it to test my API and wrote an article about how I did it.
I'm writing a program that will analyze changes in the stock market.
Every time the candles on the stock charts are updated, my algorithm scans every chart for certain pieces of data. I've noticed that this process is taking about 0.6 seconds each time, freezing my application. It's not getting stuck in a loop, and there are no other problems, like exceptions, slowing it down. It just takes a bit of time.
To solve this, I'm trying to see if I can thread the algorithm.
In order to call the algorithm to check over the charts, I have to call this:
checkCharts.RunAlgo();
As threads need an object, I'm trying to figure out how to run the RunAlgo(), but I'm not having any luck.
How can I have a thread run this method in my checkCharts object? Due to back propagating data, I can't start a new checkCharts object. I have to continue using that method from the existing object.
EDIT:
I tried this:
M4.ALProj.BotMain checkCharts = new ALProj.BotMain();
Thread algoThread = new Thread(checkCharts.RunAlgo);
It tells me that the checkCharts part of checkCharts.RunAlgo gives me "An object reference is required for the non-static field, method, or property 'M4.ALProj.BotMain'".
In a specific if statement, I was going to put the algoThread.Start(); Any idea what I did wrong there?
The answer to your question is actually very simple:
Thread myThread = new Thread(checkCharts.RunAlgo);
myThread.Start();
However, the more complex part is to make sure that when the method RunAlgo accesses variables inside the checkCharts object, this happens in a thread-safe manner.
See Thread Synchronization for help on how to synchronize access to data from multiple threads.
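As a minimal illustration of that (the BotMain members shown here are assumptions, since the real class isn't posted), shared fields can be guarded with a lock so the algo thread and the UI thread never touch them at the same time:

public class BotMain
{
    private readonly object _sync = new object();
    private decimal _lastPrice;   // stand-in for whatever state the charts update

    // Called from the chart/UI side when new candle data arrives.
    public void UpdatePrice(decimal price)
    {
        lock (_sync) { _lastPrice = price; }
    }

    // Called from the background thread.
    public void RunAlgo()
    {
        decimal snapshot;
        lock (_sync) { snapshot = _lastPrice; }   // take a consistent copy
        // ... analyze the charts using the snapshot ...
    }
}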
I would rather use Task.Run than Thread. Task.Run utilizes the ThreadPool which has been optimized to handle various loads effectively. You will also get all the goodies of Task.
await Task.Run(() => checkCharts.RunAlgo());
Try this code block. It's basic boilerplate, but you can build on and extend it quite easily.
//If M4.ALProj.BotMain needs to be recreated for each run then comment this line and uncomment the one in DoRunParallel()
private static M4.ALProj.BotMain checkCharts = new M4.ALProj.BotMain();
private static object SyncRoot = new object();
private static System.Threading.Thread algoThread = null;
private static bool ReRunOnComplete = false;
public static void RunParallel()
{
lock (SyncRoot)
{
if (algoThread == null)
{
System.Threading.ThreadStart TS = new System.Threading.ThreadStart(DoRunParallel);
algoThread = new System.Threading.Thread(TS);
algoThread.Start();
}
else
{
//Received a recalc call while still calculating
ReRunOnComplete = true;
}
}
}
public static void DoRunParallel()
{
bool ReRun = false;
try
{
//If M4.ALProj.BotMain needs to be recreated for each run then uncomment this line and comment private static version above
//M4.ALProj.BotMain checkCharts = new M4.ALProj.BotMain();
checkCharts.RunAlgo();
}
finally
{
lock (SyncRoot)
{
algoThread = null;
ReRun = ReRunOnComplete;
ReRunOnComplete = false;
}
}
if (ReRun)
{
RunParallel();
}
}
I have a slow and expensive method that return some data for me:
public Data GetData(){...}
I don't want to wait for this method to execute. Rather, I want to return cached data immediately.
I have a class CachedData that contains one property Data cachedData.
So I want to create another method, public CachedData GetCachedData(), that will start a new task (calling GetData inside it) and immediately return the cached data; after the task finishes, we update the cache.
I need GetCachedData() to be thread safe because multiple requests will call this method.
I will have a light "has anything changed?" ping each minute, and if it returns true (cachedData != currentData) then I will call GetCachedData().
I'm new to C#. Please help me implement this method.
I'm using .net framework 4.5.2
The basic idea is clear:
You have a Data property which is a wrapper around an expensive function call.
In order to have some response immediately the property holds a cached value and performs updating in the background.
No need for an event when the updater is done because you poll, for now.
That seems like a straight-forward design. At some point you may want to use events, but that can be added later.
Depending on the circumstances it may be necessary to make access to the property thread-safe. I think that if the Data cache is a simple reference and no other data is updated together with it, a lock is not necessary, but you may want to declare the reference volatile so that the reading thread does not rely on a stale cached (ha!) version. This post seems to have good links which discuss the issues.
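A minimal sketch of that design, assuming the Data type and slow GetData() from the question (the class and field names here are otherwise made up):

using System.Threading.Tasks;

public class CachedDataProvider
{
    private readonly object _refreshLock = new object();
    private volatile Data _cachedData;   // volatile so readers always see the latest reference
    private bool _refreshInProgress;

    // Returns whatever is cached right away and kicks off a background refresh.
    public Data GetCachedData()
    {
        lock (_refreshLock)
        {
            if (!_refreshInProgress)
            {
                _refreshInProgress = true;
                Task.Run(() =>
                {
                    var fresh = GetData();               // the slow, expensive call
                    _cachedData = fresh;                 // atomic reference swap
                    lock (_refreshLock) { _refreshInProgress = false; }
                });
            }
        }
        return _cachedData;   // may be null until the first refresh completes
    }

    private Data GetData() { /* slow call from the question */ return new Data(); }
}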
If you will not call GetCachedData from multiple threads at the same time, you may not need the lock. If the data is null (which is certainly the case on the first run), we wait for the long-running method to finish its work.
public class SlowClass
{
private static object _lock;
private static Data _cachedData;
public SlowClass()
{
_lock = new object();
}
public void GetCachedData()
{
var task = new Task(DoStuffLongRun);
task.Start();
if (_cachedData == null)
task.Wait();
}
public Data GetData()
{
if (_cachedData == null)
GetCachedData();
return _cachedData;
}
private void DoStuffLongRun()
{
lock (_lock)
{
Console.WriteLine("Locked Entered");
Thread.Sleep(5000);//Do Long Stuff
_cachedData = new Data();
}
}
}
I have tested it in a console application:
static void Main(string[] args)
{
var mySlow = new SlowClass();
var mySlow2 = new SlowClass();
mySlow.GetCachedData();
for (int i = 0; i < 5; i++)
{
Console.WriteLine(i);
mySlow.GetData();
mySlow2.GetData();
}
mySlow.GetCachedData();
Console.Read();
}
Maybe you can use the MemoryCache class,
as explained here in MSDN
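For example, a rough sketch of keeping the result in MemoryCache with a one-minute expiry (the key name and timeout are arbitrary; requires a reference to System.Runtime.Caching):

using System;
using System.Runtime.Caching;

public Data GetDataCached()
{
    var cache = MemoryCache.Default;
    var data = cache.Get("expensive-data") as Data;
    if (data == null)
    {
        data = GetData();   // the slow call from the question
        cache.Set("expensive-data", data,
                  new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(1) });
    }
    return data;
}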
This is the method in question:
public void StartBatchProcessing(IFileBatch fileBatch)
{
var dataWarehouseFactsMerger = m_dataWarehouseFactsMergerFactory.Create(fileBatch);
dataWarehouseFactsMerger.Merge();
if(!m_isTaskStarted)
{
m_isTaskStarted = true;
m_lastQueuedBatchProcessingTask = new TaskFactory().StartNew(() => ProcessBatch(dataWarehouseFactsMerger));
}
else
{
m_lastQueuedBatchProcessingTask = m_lastQueuedBatchProcessingTask.ContinueWith(previous => ProcessBatch(dataWarehouseFactsMerger));
}
}
As you can see, I'm using the TPL to queue tasks one after the other, and I would like to test that the tasks execute in the order they arrive, each one starting as soon as the previous one finishes.
The ProcessBatch method is protected, so I think it could be overridden in a derived class and used to set some flag or something, and then assert on that.
All ideas are welcome and appreciated.
You could create an implementation of DataWarehouseFactsMergerFactory that creates implementations of DataWarehouseFactsMerger that are capable of logging which fileBatch was entered and the start time of each task, but for the rest don't really do anything.
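For instance, a rough test-double along those lines (the interface names and members are assumptions inferred from the code in the question):

using System.Collections.Generic;

// A fake factory whose mergers simply record the batch they were created for,
// in the order Merge() actually runs, so a test can assert on that order afterwards.
public class RecordingFactsMergerFactory : IDataWarehouseFactsMergerFactory
{
    public readonly List<IFileBatch> MergeOrder = new List<IFileBatch>();

    public IDataWarehouseFactsMerger Create(IFileBatch fileBatch)
    {
        return new RecordingMerger(fileBatch, MergeOrder);
    }

    private class RecordingMerger : IDataWarehouseFactsMerger
    {
        private readonly IFileBatch _batch;
        private readonly List<IFileBatch> _order;

        public RecordingMerger(IFileBatch batch, List<IFileBatch> order)
        {
            _batch = batch;
            _order = order;
        }

        public void Merge()
        {
            lock (_order) { _order.Add(_batch); }
        }
    }
}

If what you actually need is the start time of each queued task rather than the Merge order, record a timestamp in whichever merger member ProcessBatch invokes instead.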
I created a custom autocomplete control, when the user press a key it queries the database server (using Remoting) on another thread. When the user types very fast, the program must cancel the previously executing request/thread.
I previously implemented it with AsyncCallback, but I found it cumbersome: too many house rules to follow (e.g. AsyncResult, AsyncState, EndInvoke), plus you have to detect the thread of the BeginInvoke'd object so you can terminate the previously executing thread. Besides, if I continued with the AsyncCallback approach, there's no method on those AsyncCallbacks that can properly terminate a previously executing thread.
EndInvoke cannot terminate the thread; it would still complete the operation of the thread to be terminated. I would still end up using Abort() on the thread.
So I decided to just implement it with a pure Thread approach, without the AsyncCallback. Does this use of Thread.Abort() look normal and safe to you?
public delegate DataSet LookupValuesDelegate(LookupTextEventArgs e);
internal delegate void PassDataSet(DataSet ds);
public class AutoCompleteBox : UserControl
{
Thread _yarn = null;
[System.ComponentModel.Category("Data")]
public LookupValuesDelegate LookupValuesDelegate { set; get; }
void DataSetCallback(DataSet ds)
{
if (this.InvokeRequired)
this.Invoke(new PassDataSet(DataSetCallback), ds);
else
{
// implements the appending of text on textbox here
}
}
private void txt_TextChanged(object sender, EventArgs e)
{
if (_yarn != null) _yarn.Abort();
_yarn = new Thread(
new Mate
{
LookupValuesDelegate = this.LookupValuesDelegate,
LookupTextEventArgs =
new LookupTextEventArgs
{
RowOffset = offset,
Filter = txt.Text
},
PassDataSet = this.DataSetCallback
}.DoWork);
_yarn.Start();
}
}
internal class Mate
{
internal LookupTextEventArgs LookupTextEventArgs = null;
internal LookupValuesDelegate LookupValuesDelegate = null;
internal PassDataSet PassDataSet = null;
object o = new object();
internal void DoWork()
{
lock (o)
{
// the actual code that queries the database
var ds = LookupValuesDelegate(LookupTextEventArgs);
PassDataSet(ds);
}
}
}
NOTES
The reason for cancelling the previous thread when the user types keys in succession is not only to prevent the appending of text from happening, but also to cancel the previous network round trip, so the program won't consume too much memory from successive network operations.
I'm worried if I avoid thread.Abort() altogether, the program could consume too much memory.
Here's the code without the Thread.Abort(), using a counter:
internal delegate void PassDataSet(DataSet ds, int keyIndex);
public class AutoCompleteBox : UserControl
{
[System.ComponentModel.Category("Data")]
public LookupValuesDelegate LookupValuesDelegate { set; get; }
static int _currentKeyIndex = 0;
void DataSetCallback(DataSet ds, int keyIndex)
{
if (this.InvokeRequired)
this.Invoke(new PassDataSet(DataSetCallback), ds, keyIndex);
else
{
// ignore the returned DataSet
if (keyIndex < _currentKeyIndex) return;
// implements the appending of text on textbox here...
}
}
private void txt_TextChanged(object sender, EventArgs e)
{
Interlocked.Increment(ref _currentKeyIndex);
var yarn = new Thread(
new Mate
{
KeyIndex = _currentKeyIndex,
LookupValuesDelegate = this.LookupValuesDelegate,
LookupTextEventArgs =
new LookupTextEventArgs
{
RowOffset = offset,
Filter = txt.Text
},
PassDataSet = this.DataSetCallback
}.DoWork);
yarn.Start();
}
}
internal class Mate
{
internal int KeyIndex;
internal LookupTextEventArgs LookupTextEventArgs = null;
internal LookupValuesDelegate LookupValuesDelegate = null;
internal PassDataSet PassDataSet = null;
object o = new object();
internal void DoWork()
{
lock (o)
{
// the actual code that queries the database
var ds = LookupValuesDelegate(LookupTextEventArgs);
PassDataSet(ds, KeyIndex);
}
}
}
No, it is not safe. Thread.Abort() is sketchy enough at the best of times, but in this case your control has no (heh) control over what's being done in the delegate callback. You don't know what state the rest of the app will be left in, and may well find yourself in a world of hurt when the time comes to call the delegate again.
Set up a timer. Wait a bit after the text change before calling the delegate. Then wait for it to return before calling it again. If it's that slow, or the user is typing that fast, then they probably don't expect autocomplete anyway.
Regarding your updated (Abort()-free) code:
You're now launching a new thread for (potentially) every keypress. This is not only going to kill performance, it's unnecessary - if the user isn't pausing, they probably aren't looking for the control to complete what they're typing.
I touched on this earlier, but P Daddy said it better:
You'd be better off just implementing a one-shot timer, with maybe a half-second timeout, and resetting it on each keystroke.
Think about it: a fast typist might create a score of threads before the first autocomplete callback has had a chance to finish, even with a fast connection to a fast database. But if you delay making the request until a short period of time after the last keystroke has elapsed, then you have a better chance of hitting that sweet spot where the user has typed all they want to (or all they know!) and is just starting to wait for autocomplete to kick in. Play with the delay - a half-second might be appropriate for impatient touch-typists, but if your users are a bit more relaxed... or your database is a bit more slow... then you may get better results with a 2-3 second delay, or even longer. The most important part of this technique though, is that you reset the timer on every keystroke.
And unless you expect database requests to actually hang, don't bother trying to allow multiple concurrent requests. If a request is currently in-progress, wait for it to complete before making another one.
There are many warnings all over the net about using Thread.Abort. I would recommend avoiding it unless it's really needed, which in this case, I don't think it is. You'd be better off just implementing a one-shot timer, with maybe a half-second timeout, and resetting it on each keystroke. This way your expensive operation would only occur after a half-second or more (or whatever length you choose) of user inactivity.
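A minimal sketch of that one-shot timer pattern in a WinForms control (assuming a TextBox named txt and a StartLookup helper that kicks off the actual query; both names are made up here):

private readonly System.Windows.Forms.Timer _idleTimer =
    new System.Windows.Forms.Timer { Interval = 500 };   // half-second quiet period

private void AutoCompleteBox_Load(object sender, EventArgs e)
{
    _idleTimer.Tick += (s, args) =>
    {
        _idleTimer.Stop();        // one-shot: fire once per quiet period
        StartLookup(txt.Text);    // hypothetical method that performs the lookup
    };
}

private void txt_TextChanged(object sender, EventArgs e)
{
    // Restart the countdown on every keystroke; the lookup only runs
    // once the user has been idle for the full interval.
    _idleTimer.Stop();
    _idleTimer.Start();
}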
You might want to have a look at An Introduction to Programming with C# Threads - Andrew D. Birrell. He outlines some of the best practices surrounding threading in C#.
On page 4 he says:
When you look at the “System.Threading” namespace, you will (or should) feel daunted by the range of choices facing you: “Monitor” or “Mutex”; “Wait” or “AutoResetEvent”; “Interrupt” or “Abort”? Fortunately, there’s a simple answer: use the “lock” statement, the “Monitor” class, and the “Interrupt” method. Those are the features that I’ll use for most of the rest of the paper. For now, you should ignore the rest of “System.Threading”, though I’ll outline it for you in section 9.
No, I would avoid ever calling Thread.Abort on your own code. You want your own background thread to complete normally and unwind its stack naturally. The only time I might consider calling Thread.Abort is in a scenario where my code is hosting foreign code on another thread (such as a plugin scenario) and I really want to abort the foreign code.
Instead, in this case, you might consider simply versioning each background request. In the callback, ignore responses that are "out-of-date" since server responses may return in the wrong order. I wouldn't worry too much about aborting a request that's already been sent to the database. If you find your database isn't responsive or is being overwhelmed by too many requests, then consider also using a timer as others have suggested.
Use Thread.Abort only as a last-resort measure when you are exiting the application and KNOW that all IMPORTANT resources have been released safely.
Otherwise, don't do it. It's even worse than
try
{
// do stuff
}
catch { } // gulp the exception, don't do anything about it
safety net...