Simulate Typing by Holding Thread - c#

I am writing a bot with a component that simulates typing by notifying the user "Bot is typing...". To do this the service provider that hosts the bot has to be notified every 10 seconds that the bot is still working.
The number of seconds the bot is "typing" is simply determined by the length of the string divided by an arbitrary value of "characters per second".
var text = "This is my response!";
var alertServiceInSeconds = 10;
var charactersPerSecond = 4; // arbitrary typing speed; any value works for the example
var delay = text.Length / (double) charactersPerSecond;
So two tasks have to be performed: sleeping the thread for the duration of delay while calling a method every alertServiceInSeconds.
I've attempted a couple of methods, but the bot seems to reply in delay + alertServiceInSeconds seconds, rather than just delay seconds. I must be missing something in this grade school math.
while Stopwatch.Elapsed
From: How to execute the loop for specific time
I feel like this method should hurt my soul because of the number of times Math.Abs is called. Furthermore, TriggerTyping is being called more frequently than alertServiceInSeconds.
var stopwatch = new Stopwatch();
stopwatch.Start();
while (stopwatch.Elapsed < TimeSpan.FromSeconds(delay))
{
    if (Math.Abs((stopwatch.ElapsedMilliseconds * 1000) / alertServiceInSeconds % 1) == 0)
        await client.TriggerTyping();
}
stopwatch.Stop();
await client.SendMessage(text);
Thread.Sleep
This seems to get the job done, but not accurately as previously noted.
for (var i = 0; i < delay; i++)
{
    Thread.Sleep(TimeSpan.FromSeconds(1));
    if (Math.Abs(i / alertServiceInSeconds % 1) == 0)
        await client.TriggerTyping();
}
await client.SendMessage(text);
Questions:
How does this cause the bot to "type" in delay + alertServiceInSeconds seconds?
Are the tactics of these approaches okay? A sleeping for loop and a while loop dependent on TimeSpan both seem flawed, but I cannot think of a better approach.

Instead of alertServiceInSeconds % 1, you need alertServiceInSeconds % 10.
You may also benefit from a task scheduling library and perhaps a queue.
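As a sketch of how the two concerns can be combined without adding an extra alertServiceInSeconds (reusing the client.TriggerTyping() and client.SendMessage() calls from the question; this is not the only possible shape): one stopwatch tracks the total delay, and the typing notification is refreshed at most every alertServiceInSeconds.
var typingStopwatch = Stopwatch.StartNew();
var totalDelay = TimeSpan.FromSeconds(delay);
await client.TriggerTyping(); // initial "Bot is typing..." notification

while (typingStopwatch.Elapsed < totalDelay)
{
    // wait for the next alert or just the remaining delay, whichever comes first
    var remaining = totalDelay - typingStopwatch.Elapsed;
    var wait = remaining < TimeSpan.FromSeconds(alertServiceInSeconds)
        ? remaining
        : TimeSpan.FromSeconds(alertServiceInSeconds);
    await Task.Delay(wait);

    if (typingStopwatch.Elapsed < totalDelay)
        await client.TriggerTyping(); // still "typing", keep the service informed
}

await client.SendMessage(text);
The message still goes out after roughly delay seconds because the last wait is clamped to the remaining time rather than a full alert interval.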

Related

How to best use multiple Tasks? (progress reporting and performance)

I created the following code to compare images and check if they are similar. Since that takes quite a while, I tried to optimize my code using multithreading.
I worked with BackgroundWorker in the past and was now starting to use Tasks, but I am still not fully familiar with that.
Code below:
allFiles is a list of images to be compared.
chunksToCompare contains subsets of the tuples of files to compare (always a combination of two files to compare) - so each task can compare e.g. 20 tuples of files.
The code below works fine in general but has two issues:
progress reporting does not really make sense, since progress is only updated when all tasks have completed, which takes quite a while
depending on the size of the files, each thread has a different processing time: in the code below it always waits until all (64) tasks are completed before the next batch is started, which is obviously not optimal
Many thanks in advance for any hint / idea.
// List for results
List<SimilarImage> similarImages = new List<SimilarImage>();
// create chunks of files to send to a thread
var chunksToCompare = GetChunksToCompare(allFiles);
// position of processed chunks of files
var i = 0;
// number of tasks
var taskCount = 64;
while (true)
{
    // list of all tasks
    List<Task<List<SimilarImage>>> tasks = new();
    // create single tasks
    for (var n = 0; n < taskCount; n++)
    {
        var task = (i + 1 + n < chunksToCompare.Count) ?
            GetSimilarImageAsync2(chunksToCompare[i + n], threshold) : null;
        if (task != null) tasks.Add(task);
    }
    // wait for all tasks to complete
    await Task.WhenAll(tasks.Where(t => t != null));
    // get the result of each task and add it to the list
    foreach (var task in tasks)
    {
        if (task?.Result != null) similarImages.AddRange(task.Result);
    }
    // progress of processing
    i += tasks.Count;
    // report the progress
    progress.Report(new ProgressInformation() { Count = chunksToCompare.Count,
        Position = i + 1 });
    // exit condition
    if (i + 1 >= chunksToCompare.Count) break;
}
return similarImages;
More info: I am using .NET 6. Images are stored on an SSD. With my test dataset it took 6:30 minutes with sequential and 4:00 with parallel execution. I am using a lib which only takes the image paths of two images and then compares them. There is a lot of overhead because the same image is reloaded multiple times. I was looking for a different lib to compare images, but I was not successful.
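Since the question targets .NET 6, one hedged alternative to the fixed batches of 64 is to let Parallel.ForEachAsync schedule the chunks and report progress as each chunk completes. The sketch below reuses GetSimilarImageAsync2, chunksToCompare, threshold, progress and ProgressInformation from the question; the thread-safe result collection and the degree of parallelism are assumptions.
var similarImages = new ConcurrentBag<SimilarImage>();
var chunksDone = 0;

var options = new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount };
await Parallel.ForEachAsync(chunksToCompare, options, async (chunk, ct) =>
{
    var result = await GetSimilarImageAsync2(chunk, threshold);
    if (result != null)
    {
        foreach (var image in result)
            similarImages.Add(image);
    }
    // report per chunk instead of once per batch of 64
    var position = Interlocked.Increment(ref chunksDone);
    progress.Report(new ProgressInformation() { Count = chunksToCompare.Count, Position = position });
});

return similarImages.ToList();
Each chunk now reports as soon as it finishes, and a slow chunk no longer holds back the other 63.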

C# Timing Loop Rate Inaccuracies

I'm trying to create a method in C# whereby I can repeatedly perform an action (in my particular application it's sending a UDP packet) at a targeted rate. I know that timer inaccuracy in C# will prevent the output from being precisely at the target rate, but best effort is good enough. However, at higher rates, the timing seems to be completely off.
while (!token.IsCancellationRequested)
{
    stopwatch.Restart();
    Thread.Sleep(5); // Simulate process
    int processTime = (int)stopwatch.ElapsedMilliseconds;
    int waitTime = taskInterval - processTime;
    Task.Delay(Math.Max(0, waitTime)).Wait();
}
See here for the full example console app.
When run, the FPS output of this test app shows around 44-46 Hz for a target of 60 Hz. However at lower rates (say 20 Hz), the output rate is much closer to the target. I can't understand why this would be the case. What is the problem with my code?
The problem is that Thread.Sleep (or Task.Delay) is not very accurate. Take a look at this: Accuracy of Task.Delay
One way to fix this is to start the timer once, and then have a loop where you delay some ~15 ms in each iteration. Inside each iteration, you calculate how many times the operation should have been executed so far and compare it with how many times you have actually run it. You then run the operation enough times to catch up.
Here is some code sample:
private static void timerTask(CancellationToken token)
{
    const int taskRateHz = 60;
    var stopwatch = new Stopwatch();
    stopwatch.Start();
    int ran_so_far = 0;
    while (!token.IsCancellationRequested)
    {
        Thread.Sleep(15);
        int should_have_run =
            (int)(stopwatch.ElapsedMilliseconds * taskRateHz / 1000);
        int need_to_run_now = should_have_run - ran_so_far;
        if (need_to_run_now > 0)
        {
            for (int i = 0; i < need_to_run_now; i++)
            {
                ExecuteTheOperationHere();
            }
            ran_so_far += need_to_run_now;
        }
    }
}
Please note that you want to use longs instead of ints if the process is to remain alive for a very long time.
If you replace this:
Task.Delay(Math.Max(0, waitTime)).Wait();
with this:
Thread.Sleep(Math.Max(0, waitTime));
You should get closer values on higher rates (why would you use Task.Delay(..).Wait() anyway?).

await Task.Delay(foo); takes seconds instead of ms

Using a variable delay in Task.Delay randomly takes seconds instead of milliseconds when combined with an IO-like operation.
Code to reproduce:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

namespace ConsoleApplication {
    class Program {
        static void Main(string[] args) {
            Task[] wait = {
                new delayTest().looper(5250, 20),
                new delayTest().looper(3500, 30),
                new delayTest().looper(2625, 40),
                new delayTest().looper(2100, 50)
            };
            Task.WaitAll(wait);
            Console.WriteLine("All Done");
            Console.ReadLine();
        }
    }

    class delayTest {
        private Stopwatch sw = new Stopwatch();

        public delayTest() {
            sw.Start();
        }

        public async Task looper(int count, int delay) {
            var start = sw.Elapsed;
            Console.WriteLine("Start ({0}, {1})", count, delay);
            for (int i = 0; i < count; i++) {
                var before = sw.Elapsed;
                var totalDelay = TimeSpan.FromMilliseconds(i * delay) + start;
                double wait = (totalDelay - sw.Elapsed).TotalMilliseconds;
                if (wait > 0) {
                    await Task.Delay((int)wait);
                    SpinWait.SpinUntil(() => false, 1);
                }
                var finalDelay = (sw.Elapsed - before).TotalMilliseconds;
                if (finalDelay > 30 + delay) {
                    Console.WriteLine("Slow ({0}, {1}): {4} Expected {2:0.0}ms got {3:0.0}ms", count, delay, wait, finalDelay, i);
                }
            }
            Console.WriteLine("Done ({0}, {1})", count, delay);
        }
    }
}
Also reported this on connect.
Leaving the old question below, for completeness.
I am running a task that reads from a network stream, then delays for 20ms, and reads again (doing 500 reads, this should take around 10 seconds). This works well when I only read with 1 task, but strange things happen when I have multiple tasks running, some with a long (60 second) delay. My ms-delay tasks suddenly hang halfway.
I am running the following code (simplified):
var sw = new Stopwatch();
sw.Start();
await Task.Delay(20); // actually delay is 10, 20, 30 or 40;
if (sw.Elapsed.TotalSeconds > 1) {
    Console.WriteLine("Sleep: {0:0.00}s", sw.Elapsed.TotalSeconds);
}
This prints:
Sleep: 11.87s
(Actually it gives the 20ms delay 99% of the time, those are ignored).
This delay is almost 600 times longer than expected. The same delay happens on 3 separate threads at the same time, and they all continue again at the same time also.
The 60 second sleeping task wakes up as normal ~40 seconds after the short tasks finish.
Half the time this problem does not even happen. The other half, it has a consistent delay of 11.5-12 seconds. I would suspect a scheduling or thread-pool problem, but all threads should be free.
When I pause my program during the stuck phase, the main thread's stack trace is sitting in Task.WaitAll, 3 tasks are Scheduled on await Task.Delay(20) and one task is Scheduled on await Task.Delay(60000). There are also 4 more tasks Awaiting those first 4 tasks, reporting things like '"Task 24" is waiting on this object: "Task 5313" (Owned by thread 0)'. All 4 tasks say the waiting task is owned by thread 0. There are also 4 ContinueWith tasks that I think I can ignore.
There are some other things going on, like a second console application that writes to the network stream, but one console application should not affect the other.
I am completely clueless on this one. What is going on?
Update:
Based on comments and questions:
When I run my program 4 times, 2-3 times it will hang for 10-15 seconds, and 1-2 times it will operate as normal (and won't print "Sleep: {0:0.00}s").
Thread.Count indeed goes up, but this happens regardless of the hang. I just had a run where it did not hang: Thread.Count started at 24, went up to 40 after 1 second, the short tasks finished normally at around 22 seconds, and then Thread.Count went down to 22 slowly over the next 40 seconds.
Some more code, full code is found in the link below. Starting clients:
List<Task> tasks = new List<Task>();

private void makeClient(int delay, int startDelay) {
    Task task = new ClientConnection(this, delay, startDelay).connectAsync();
    task.ContinueWith(_ => {
        lock (tasks) { tasks.Remove(task); }
    });
    lock (tasks) { tasks.Add(task); }
}

private void start() {
    DateTime start = DateTime.Now;
    Console.WriteLine("Starting clients...");
    int[] iList = new[] {
        0, 1, 1, 2,
        10, 20, 30, 40 };
    foreach (int delay in iList) {
        makeClient(delay, 0);
    }
    makeClient(15, 40);
    Console.WriteLine("Done making");
    tasks.Add(displayThreads());
    waitForTasks(tasks);
    Console.WriteLine("All done.");
}

private static void waitForTasks(List<Task> tasks) {
    Task[] waitFor;
    lock (tasks) {
        waitFor = tasks.ToArray();
    }
    Task.WaitAll(waitFor);
}
Also, I tried to replace the Task.Delay(20) with await Task.Run(() => Thread.Sleep(20)).
Thread.Count now goes from 29 to 43 and back down to 24; however, among multiple runs it never hangs.
With or without ThreadPool.SetMinThreads(500, 500), using TaskExt.Delay by noserati it does not hang. (That said, even switching over 1 line of code sometimes stops it from hanging, only to randomly continue after I restart the project 4 times, but I've tried this 6 times in a row without any problems now).
I've tried everything above with and without ThreadPool.SetMinThreads so far, never made any difference.
Update2: CODE!
Without seeing more code, it's hard to make further guesses, but I'd like to summarize the comments; it may help someone else in the future:
We've figured out that ThreadPool stuttering is not an issue here, as ThreadPool.SetMinThreads(500, 500) didn't help.
Is there any SynchronizationContext in place anywhere in your task workflow? Place Debug.Assert(SynchronizationContext.Current == null) everywhere to check for that. Use ConfigureAwait(false) with every await.
Is there any .Wait, .WaitOne, .WaitAll, WaitAny, .Result used anywhere in your code? Any lock () { ... } constructs? Monitor.Enter/Exit or any other blocking synchronization primitives?
Regarding this: I've already replaced Task.Delay(20) with Task.Yield(); Thread.Sleep(20) as a workaround, that works. But yeah, I continue to try to figure out what's going on here because the idea that Task.Delay(20) can shoot this far out of line makes it totally unusable.
This sounds worrying, indeed. It's very unlikely there's a bug in Task.Delay, but everything is possible. For the sake of experimenting, try replacing await Task.Delay(20) with await Task.Run(() => Thread.Sleep(20)), having ThreadPool.SetMinThreads(500, 500) still in-place.
I also have an experimental implementation of Delay which uses the unmanaged CreateTimerQueueTimer API (unlike Task.Delay, which uses System.Threading.Timer, which in turn uses the managed TimerQueue). It's available here as a gist. Feel free to try it as TaskExt.Delay instead of the standard Task.Delay. The timer callbacks are posted to the ThreadPool, so ThreadPool.SetMinThreads(500, 500) should still be used for this experiment. I doubt it could make any difference, but I'd be interested to know.
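For reference, the experiment suggested above in one place (a diagnostic sketch, not a fix):
// somewhere at startup, before any tasks are created
ThreadPool.SetMinThreads(500, 500);       // rule out slow thread-pool ramp-up

// in the read loop, instead of: await Task.Delay(20);
await Task.Run(() => Thread.Sleep(20));   // blocks a pool thread instead of relying on the managed timer queue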

Measure Execution time of Parallel.For

I am using a Parallel.For loop to increase execution speed of a computation.
I would like to measure the approximate time left for the computation. Normally one simply has to measure the time it takes for each step and estimate the total time by multiplying the step time by the total number of steps.
e.g., if there are 100 steps and a step takes 5 seconds, then one could expect the total time to be about 500 seconds (one could average over several steps and continuously report to the user, which is what I want to do).
The only way I can think to do this is by using an outer for loop that essentially resorts back to the original way by splitting up the parallel.for interval and measuring each one.
for(i;n;i += step)
Time(Parallel.For(i, i + step - 1, ...))
This isn't a very good way in general because either a small number of very long steps or a large number of short steps causes problems with timing.
Anyone have any ideas?
(Please realize I need a real time estimation of the time it is taking the parallel.for to complete and NOT the total time. I want to let the user know how much time is left in execution).
This method seems to be pretty effective. We can "linearize" the parallel for loop by simply having each parallel loop increment a counter:
Parallel.For(0, n, (i) => { Thread.Sleep(1000); Interlocked.Increment(ref cnt); });
(Note, thanks to Niclas, that ++ is not atomic and one must use lock or Interlocked.Increment)
Each iteration, running in parallel, will increment cnt. The effect is that cnt is monotonically increasing to n, and cnt/n is the percentage of how much the For is complete. Since the counter is updated atomically, there are no concurrency issues, and it is very fast and accurate.
We can measure the percentage of completion of the parallel For loop at any time during the execution by simply computing cnt/n.
The total computation time can be easily estimated by dividing the elapsed time since the start of the loop by the percentage the loop has completed. These two quantities have approximately the same rate of change when each iteration takes approximately the same amount of time and the workload is relatively well behaved (small fluctuations can be averaged out too).
Obviously, the more unpredictable each task is, the more inaccurate the remaining-time estimate will be. This is to be expected and, in general, there is no solution (which is why it's called an approximation). We can still get the elapsed computation time or the percentage complete with full accuracy.
The underlying assumption of any "time left" estimation algorithm is that each sub-task takes approximately the same computation time (assuming one wants a linear result). For example, if we have a parallel approach where 99 tasks are very quick and 1 task is very slow, our estimate will be grossly inaccurate. Our counter will zip up to 99 pretty quickly, then sit on the last percentage until the slow task completes. We could linearly interpolate and do further estimation to get a smoother countdown, but ultimately there is a breaking point.
The following code demonstrates how to measure the parallel for efficiently. Note the time at 100% is the true total execution time and can be used as a reference.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using System.Diagnostics;

namespace ParallelForTiming
{
    class Program
    {
        static void Main(string[] args)
        {
            var sw = new Stopwatch();
            var pct = 0.000001;
            var iter = 20;
            var time = 20 * 1000 / iter;
            var p = new ParallelOptions(); p.MaxDegreeOfParallelism = 4;
            var Done = false;
            Parallel.Invoke(() =>
            {
                sw.Start();
                Parallel.For(0, iter, p, (i) => { Thread.Sleep(time); lock (p) { pct += 1 / (double)iter; } });
                sw.Stop();
                Done = true;
            }, () =>
            {
                while (!Done)
                {
                    Console.WriteLine(Math.Round(pct * 100, 2) + " : " + ((pct < 0.1) ? "oo" : (sw.ElapsedMilliseconds / pct / 1000.0).ToString()));
                    Thread.Sleep(2000);
                }
            });
            Console.WriteLine(Math.Round(pct * 100, 2) + " : " + sw.ElapsedMilliseconds / pct / 1000.0);
            Console.ReadKey();
        }
    }
}
This is almost impossible to answer.
First of all, it's not clear what all the steps do. Some steps may be I/O-intensive, or computationally intensive.
Furthermore, Parallel.For is a request -- you are not sure that your code will actually run in parallel. It depends on circumstances (availability of threads and memory) whether the code will actually run in parallel. Then if you have parallel code that relies on I/O, one thread will block the others while waiting for the I/O to complete. And you don't know what other processes are doing either.
This is what makes predicting how long something will take extremely error-prone and, actually, an exercise in futility.
This problem is a tough one to answer. The timing problems you refer to, with a few very long steps or a large number of very short steps, are likely related to your loop working at the edges of what the parallel partitioner can handle.
Since the default partitioner is very dynamic and we know nothing about your actual problem, there is no good answer that lets you solve the problem at hand while still reaping the benefits of parallel execution with dynamic load balancing.
If it is very important to achieve a reliable estimation of the projected runtime, perhaps you could set up a custom partitioner and then leverage your knowledge about the partitioning to extrapolate timings from a few chunks on one thread.
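A rough sketch of that idea, assuming the total number of steps (totalSteps) is known up front and DoCalculation stands in for the real work: a fixed range size makes every chunk comparable, so the chunks finished so far can be extrapolated to the rest.
// fixed-size ranges instead of the default dynamic partitioning
var partitioner = Partitioner.Create(0, totalSteps, 1000); // 1000 iterations per chunk (assumed)
var totalChunks = (totalSteps + 999) / 1000;
var chunksDone = 0;
var sw = Stopwatch.StartNew();

Parallel.ForEach(partitioner, range =>
{
    for (int i = range.Item1; i < range.Item2; i++)
        DoCalculation(i); // placeholder for the real per-step work

    int done = Interlocked.Increment(ref chunksDone);
    double secondsPerChunk = sw.Elapsed.TotalSeconds / done;
    Console.WriteLine("~{0:0} s remaining", secondsPerChunk * (totalChunks - done));
});
Because elapsed wall time is divided by the number of completed chunks, the per-chunk figure already reflects the degree of parallelism, so multiplying by the remaining chunk count gives a rough wall-clock estimate.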
Here's a possible solution that measures the average of all previously finished tasks. After each task finishes, an Action<T> is called in which you can sum up all the times and divide by the number of tasks finished. This is, however, just the current state and has no way to predict any future tasks / averages. (As others mentioned, this is quite difficult.)
However: you'll have to measure whether it fits your problem, because there is a possibility of lock contention on the two method-level variables.
static void ComputeParallelForWithTLS()
{
    var collection = new List<int>() { 1000, 2000, 3000, 4000 }; // values used as sleep parameter
    var sync = new object();
    TimeSpan averageTime = new TimeSpan();
    int amountOfItemsDone = 0; // referenced by the TPL, increment it with lock / Interlocked.Increment

    Parallel.For(0, collection.Count,
        () => new TimeSpan(),
        (i, loopState, tlData) =>
        {
            var sw = Stopwatch.StartNew();
            DoWork(collection, i);
            sw.Stop();
            return sw.Elapsed;
        },
        threadLocalData => // called each time a task finishes
        {
            lock (sync)
            {
                averageTime += threadLocalData; // add time used for this task to the total
            }
            Interlocked.Increment(ref amountOfItemsDone); // increment the tasks done
            Console.WriteLine(averageTime.TotalMilliseconds / amountOfItemsDone + " ms.");
            /* print out the average for all done tasks so far. For an estimation,
               multiply by the remaining items. */
        });
}

static void DoWork(List<int> items, int current)
{
    System.Threading.Thread.Sleep(items[current]);
}
I would propose having the method that is executed at each step report when it is done. This is slightly tricky with thread safety, of course, so that is something to remember when implementing. This lets you keep track of the number of finished tasks out of the total, and it also makes it (sort of) easy to know the time spent on each individual step, which is useful for removing outliers etc.
EDIT: Some code to demonstrate the idea
Parallel.For(startIdx, endIdx, idx => {
    var sw = Stopwatch.StartNew();
    DoCalculation(idx);
    sw.Stop();
    var dur = sw.Elapsed;
    ReportFinished(idx, dur);
});
The key here is that ReportFinished will give you continuous information about number of finished tasks, and the duration of each of them. This enables you to do some better guesses about how long time remains by doing statistics on this data.
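A possible shape for ReportFinished, assuming the total number of steps (totalSteps) is known up front; all names here are illustrative, not part of the original answer:
static readonly object reportLock = new object();
static int finishedSteps = 0;
static TimeSpan totalDuration = TimeSpan.Zero;

static void ReportFinished(int idx, TimeSpan duration)
{
    lock (reportLock)
    {
        finishedSteps++;
        totalDuration += duration;

        // average per-step CPU time so far; divide by the degree of parallelism
        // (or use wall-clock elapsed / finishedSteps) for a wall-time estimate
        double avgMs = totalDuration.TotalMilliseconds / finishedSteps;
        int remaining = totalSteps - finishedSteps;
        Console.WriteLine("step {0} done, ~{1:0.0} s of work left", idx, avgMs * remaining / 1000.0);
    }
}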
Here I wrote a class that measures time and speed:
public static class Counter
{
    private static long _seriesProcessedItems = 0;
    private static long _totalProcessedItems = 0;
    private static TimeSpan _totalTime = TimeSpan.Zero;
    private static DateTime _operationStartTime;
    private static object _lock = new object();
    private static int _numberOfCurrentOperations = 0;

    public static void StartAsyncOperation()
    {
        lock (_lock)
        {
            if (_numberOfCurrentOperations == 0)
            {
                _operationStartTime = DateTime.Now;
            }
            _numberOfCurrentOperations++;
        }
    }

    public static void EndAsyncOperation(int itemsProcessed)
    {
        lock (_lock)
        {
            _numberOfCurrentOperations--;
            if (_numberOfCurrentOperations < 0)
                throw new InvalidOperationException("EndAsyncOperation without StartAsyncOperation");
            _seriesProcessedItems += itemsProcessed;
            if (_numberOfCurrentOperations == 0)
            {
                _totalProcessedItems += _seriesProcessedItems;
                _totalTime += DateTime.Now - _operationStartTime;
                _seriesProcessedItems = 0;
            }
        }
    }

    public static double GetAvgSpeed()
    {
        if (_totalProcessedItems == 0) throw new InvalidOperationException("_totalProcessedItems is zero");
        if (_totalTime == TimeSpan.Zero) throw new InvalidOperationException("_totalTime is zero");
        return _totalProcessedItems / (double)_totalTime.TotalMilliseconds;
    }

    public static void Reset()
    {
        _totalProcessedItems = 0;
        _totalTime = TimeSpan.Zero;
    }
}
Example of usage and test:
static void Main(string[] args)
{
    var st = Stopwatch.StartNew();
    Parallel.For(0, 100, _ =>
    {
        Counter.StartAsyncOperation();
        Thread.Sleep(100);
        Counter.EndAsyncOperation(1);
    });
    st.Stop();
    Console.WriteLine("Speed correct {0}", 100 / (double)st.ElapsedMilliseconds);
    Console.WriteLine("Speed to test {0}", Counter.GetAvgSpeed());
}

Stopwatch appears to jump 500µs between iterations of a tight loop

I wrote some code which monitors the output of Stopwatch using a tight loop. This loop tracks the number of ticks which have elapsed since the last iteration. I'm observing jumps of 500 microseconds 20 times a second, while most other iterations take <1µs.
Can someone explain why I'm seeing these jumps?
I've tried:
Setting processor affinity (no effect)
Changing thread priority (Highest/AboveNormal makes things worse)
Running outside of the debugger
Release build with optimizations
My code is below:
Stopwatch sw = new Stopwatch();
int crossThresholdCount = 0;
long lastElapsedTicks = 0;
long lastPrintTicks = 0;

Console.WriteLine("IsHighResolution: " + Stopwatch.IsHighResolution);
Console.WriteLine("Frequency: " + Stopwatch.Frequency);

sw.Start();
long thresholdTicks = 5000; // 10000 ticks per ms
while (true)
{
    long tempElapsed = sw.ElapsedTicks;
    long sincePrev = tempElapsed - lastElapsedTicks;
    lastElapsedTicks = tempElapsed;

    if (sincePrev > thresholdTicks)
        crossThresholdCount++;

    // print output
    if (crossThresholdCount > 0 && tempElapsed - lastPrintTicks > TimeSpan.TicksPerSecond)
    {
        lastPrintTicks = tempElapsed;
        Console.WriteLine("crossed " + crossThresholdCount + " times");
        crossThresholdCount = 0;
    }
}
Most likely you are seeing preemptive task switching. This is when the operating system suspends your program, and goes off to execute other programs. This is how things have been since Windows 95 (Win 3.1 and earlier had cooperative multitasking, whereby you could hold the CPU for as long as you want).
By the way, there is a better way to time your execution accurately: QueryThreadCycleTime, which counts CPU cycles only when your code is executing, and so excludes such pauses.
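A minimal P/Invoke sketch of that API; it counts only the cycles charged to the current thread, so preemptive switches to other programs do not inflate the numbers:
using System;
using System.Runtime.InteropServices;

static class ThreadCycles
{
    [DllImport("kernel32.dll")]
    private static extern IntPtr GetCurrentThread(); // pseudo-handle, does not need closing

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool QueryThreadCycleTime(IntPtr threadHandle, out ulong cycleTime);

    public static ulong Now()
    {
        QueryThreadCycleTime(GetCurrentThread(), out ulong cycles);
        return cycles; // CPU cycles consumed by this thread so far
    }
}

// usage: take a reading before and after the section you want to time
// ulong before = ThreadCycles.Now(); /* work */ ulong spent = ThreadCycles.Now() - before;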
Your test doesn't really make sense... Your process is not the only one to run on your machine. The system gives processor time to each thread in turn, so there are periods of time when your loop is not running at all, which explains the "jumps".
