Simulate steady CPU load and spikes - C#

How could I generate steady CPU load in C#, lower than 100%, for a certain time? I would also like to be able to change the load amount after a certain period of time. How would you recommend generating usage spikes for a very short time?

First off, you have to understand that CPU usage is always an average over a certain time. At any given instant, the CPU is either working or it is not; it is never 40% working.
We can, however, simulate a 40% load over, say, a second by having the CPU work for 0.4 seconds and sleep for 0.6 seconds. That gives an average utilization of 40% over that second.
Cutting it down to chunks smaller than one second, say 100 milliseconds, should give even more stable utilization.
The following method takes the desired utilization as an argument and then loads a single CPU/core to that degree:
public static void ConsumeCPU(int percentage)
{
    if (percentage < 0 || percentage > 100)
        throw new ArgumentException("percentage");
    Stopwatch watch = new Stopwatch();
    watch.Start();
    while (true)
    {
        // Work for "percentage" milliseconds, then sleep for the remaining
        // (100 - percentage) milliseconds. So 40% utilization means work 40 ms and sleep 60 ms.
        if (watch.ElapsedMilliseconds > percentage)
        {
            Thread.Sleep(100 - percentage);
            watch.Reset();
            watch.Start();
        }
    }
}
I'm using a Stopwatch here because it is more accurate than the TickCount property, but you could likewise use that and check with subtraction whether you've run long enough.
Two things to keep in mind:
- On multi-core systems, you will have to spawn one thread per core. Otherwise you'll see only one CPU/core being exercised, giving roughly "percentage / number-of-cores" overall utilization.
- Thread.Sleep is not very accurate. It will never guarantee times exact to the millisecond, so you will see some variation in your results.
To answer your second question, about changing the utilization after a certain time: run this method on one or more threads (depending on the number of cores), and when you want to change the utilization, stop those threads and spawn new ones with the new percentage values. That way you don't have to implement thread communication to change the percentage of a running thread.
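A minimal sketch of that stop-and-respawn approach, using a CancellationToken instead of unconditional loops (the token parameter and the StartLoad helper are illustrative additions, not part of the original answer):
public static void ConsumeCPU(int percentage, CancellationToken token)
{
    Stopwatch watch = Stopwatch.StartNew();
    while (!token.IsCancellationRequested)
    {
        if (watch.ElapsedMilliseconds > percentage)
        {
            Thread.Sleep(100 - percentage);
            watch.Restart();
        }
    }
}
// Spawns one worker per core and returns a handle that stops them all.
public static CancellationTokenSource StartLoad(int percentage)
{
    var cts = new CancellationTokenSource();
    for (int i = 0; i < Environment.ProcessorCount; i++)
    {
        new Thread(() => ConsumeCPU(percentage, cts.Token)).Start();
    }
    return cts;
}
// Usage: 40% load for 5 seconds, then a short 90% spike.
var low = StartLoad(40);
Thread.Sleep(5000);
low.Cancel();
var spike = StartLoad(90);
Thread.Sleep(500);
spike.Cancel();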

Adding to Isak's response, here is a simple implementation for multicore:
public static void CPUKill(object cpuUsage)
{
    // Note: Parallel.For(0, 1, ...) runs the body exactly once; it is kept
    // as in the original, but a plain method body would behave the same.
    Parallel.For(0, 1, new Action<int>((int i) =>
    {
        Stopwatch watch = new Stopwatch();
        watch.Start();
        while (true)
        {
            if (watch.ElapsedMilliseconds > (int)cpuUsage)
            {
                Thread.Sleep(100 - (int)cpuUsage);
                watch.Reset();
                watch.Start();
            }
        }
    }));
}
static void Main(string[] args)
{
    int cpuUsage = 50;
    int time = 10000;
    List<Thread> threads = new List<Thread>();
    for (int i = 0; i < Environment.ProcessorCount; i++)
    {
        Thread t = new Thread(new ParameterizedThreadStart(CPUKill));
        t.Start(cpuUsage);
        threads.Add(t);
    }
    Thread.Sleep(time);
    foreach (var t in threads)
    {
        t.Abort();
    }
}
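Note that Thread.Abort is deprecated and throws PlatformNotSupportedException on .NET Core and .NET 5+, so on modern runtimes a cooperative mechanism such as the CancellationToken sketch in the first answer is the safer way to stop the workers.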

For uniform stressing: Isak Savo's answer with a slight tweak. The problem is interesting. In reality there are workloads that far exceed this one in terms of wattage used, thermal output, lane saturation, etc., so an empty loop as the workload is arguably a poor and unrealistic approximation.
int percentage = 80;
for (int i = 0; i < Environment.ProcessorCount; i++)
{
    (new Thread(() =>
    {
        Stopwatch watch = new Stopwatch();
        watch.Start();
        while (true)
        {
            // Work for "percentage" milliseconds, then sleep for the remaining
            // (100 - percentage) milliseconds. So 40% utilization means work 40 ms and sleep 60 ms.
            if (watch.ElapsedMilliseconds > percentage)
            {
                Thread.Sleep(100 - percentage);
                watch.Reset();
                watch.Start();
            }
        }
    })).Start();
}
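If you want the busy phase to resemble real work a bit more closely, here is one possible tweak (a sketch, not part of the original answer): burn the work slice on floating-point math instead of an empty loop.
int percentage = 80;
for (int i = 0; i < Environment.ProcessorCount; i++)
{
    new Thread(() =>
    {
        var watch = Stopwatch.StartNew();
        double sink = 0; // accumulator so the work isn't optimized away
        while (true)
        {
            // Busy phase: do arithmetic until the work slice is used up.
            while (watch.ElapsedMilliseconds <= percentage)
            {
                sink += Math.Sqrt(sink + 1.0);
            }
            Thread.Sleep(100 - percentage);
            watch.Restart();
        }
    }).Start();
}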

Each cycle, set the cpuUsageIncreaseby variable. For example:
1- CPU % increases by cpuUsageIncreaseby % for one minute.
2- Load goes down to 0% for 20 seconds.
3- Go to step 1.
(This reuses the CPUKill method from the answer above.)
private void test()
{
    int cpuUsageIncreaseby = 10;
    while (true)
    {
        for (int i = 0; i < 4; i++)
        {
            int cpuUsage = cpuUsageIncreaseby;
            int time = 60000; // duration for which the CPU load should stay elevated
            List<Thread> threads = new List<Thread>();
            for (int j = 0; j < Environment.ProcessorCount; j++)
            {
                Thread t = new Thread(new ParameterizedThreadStart(CPUKill));
                t.Start(cpuUsage);
                threads.Add(t);
            }
            Thread.Sleep(time);
            foreach (var t in threads)
            {
                t.Abort();
            }
            // Note: once cpuUsageIncreaseby grows past 100, Thread.Sleep(100 - cpuUsage)
            // inside CPUKill gets a negative argument and throws, so clamp it in real use.
            cpuUsageIncreaseby = cpuUsageIncreaseby + 10;
            Thread.Sleep(20000); // drop to ~0% for 20 seconds
        }
    }
}

Related

How to get accurate CPU usage during specific method processing

I am performing some heavy-load tasks in my application. For these, I want to calculate the CPU load during the operation, plus the minimum and maximum CPU utilization. I am using the Process class. The problem is that for every run I am getting the same value for maximum and minimum: 1.79769313486232E+308
Here is my code:
private async Task<double> GetCpuLoadAsync(TimeSpan measurementWindow)
{
    Process CurrentProcess = Process.GetCurrentProcess();
    TimeSpan StartCpuTime = CurrentProcess.TotalProcessorTime;
    Stopwatch Timer = Stopwatch.StartNew();
    await Task.Delay(measurementWindow);
    TimeSpan EndCpuTime = CurrentProcess.TotalProcessorTime;
    Timer.Stop();
    return (EndCpuTime - StartCpuTime).TotalMilliseconds / (Environment.ProcessorCount * Timer.ElapsedMilliseconds);
}
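To illustrate the formula: if the process consumed 2,000 ms of CPU time across all cores during a 1,000 ms measurement window on an 8-core machine, the method returns 2000 / (8 * 1000) = 0.25, i.e. 25% of total machine capacity.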
and I am calling this function from a timer, starting the timer before entering that method:
double maxCpu = double.MinValue, minCpu = double.MaxValue;
elapsedTimer.Elapsed += new ElapsedEventHandler(async (s, e) =>
{
    double cpuUsage = await GetCpuLoadAsync(TimeSpan.FromMilliseconds(1000));
    if (cpuUsage > maxCpu)
    {
        maxCpu = cpuUsage;
    }
    if (cpuUsage < minCpu)
    {
        minCpu = cpuUsage;
    }
});
// Here I am calling my method. It takes 3-4 mins to process and I want to calculate CPU usage during that time.
elapsedTimer.Stop();
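One observation (not from the original post): 1.79769313486232E+308 is double.MaxValue, the initial value of minCpu, and double.MinValue is the initial value of maxCpu. If both are reported unchanged, the Elapsed handler never updated them, for example because the timer was stopped before its first tick or was never enabled. A sketch of the intended sequencing, where RunHeavyWorkloadAsync is a hypothetical placeholder for the 3-4 minute operation:
var elapsedTimer = new System.Timers.Timer(1000) { AutoReset = true };
elapsedTimer.Elapsed += async (s, e) => { /* sample min/max as above */ };
elapsedTimer.Start();           // start sampling before the heavy work
await RunHeavyWorkloadAsync();  // hypothetical placeholder
elapsedTimer.Stop();            // stop only after the work completes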

Synchronize Timer with real time

I am trying to refresh my frame every 17ms with a timer.
Timer timer = new Timer(17);
timer.Elapsed += ResetFrame;
timer.Start();
But instead of waiting for 17 ms and then repeating, it waits for the frame refresh to complete and then waits 17 ms before the next repeat. This causes the frame to be refreshed only every 28 ms. How can I synchronize it with real time?
To have a real-time timer with a very short interval, you can take a look at this article:
Real Time Timer in C#
In .NET, the following timers are not real-time:
System.Windows.Forms.Timer
System.Timers.Timer
System.Threading.Timer
This means that if you want to run your code every 100 milliseconds, these
timers may fire only around 110 milliseconds or later. Windows is not a
real-time OS, and because of this .NET is not real-time either.
To create a real-time timer in C# you have to write custom code that
holds the CPU so your code runs at the right time.
class Program
{
    static void Main(string[] args)
    {
        Console.ReadLine();
        Console.WriteLine("Running");
        RealTimeTimerTest obj = new RealTimeTimerTest();
        obj.Run();
    }
}
public class RealTimeTimerTest
{
    List<DateTime> lst = new List<DateTime>();
    System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();
    public void Run()
    {
        int Tick = 100;
        int Sleep = Tick - 20;
        long OldElapsedMilliseconds = 0;
        sw.Start();
        while (sw.IsRunning)
        {
            long ElapsedMilliseconds = sw.ElapsedMilliseconds;
            long mod = ElapsedMilliseconds % Tick;
            if (OldElapsedMilliseconds != ElapsedMilliseconds && (mod == 0 || ElapsedMilliseconds > Tick))
            {
                //-----------------Do here whatever you want to do--------------Start
                lst.Add(DateTime.Now);
                //-----------------Do here whatever you want to do--------------End
                //-----------------Restart----------------Start
                OldElapsedMilliseconds = 0; // the stopwatch is reset, so elapsed time starts from 0 again
                sw.Reset();
                sw.Start();
                System.Threading.Thread.Sleep(Sleep);
                //-----------------Restart----------------End
            }
            //------------Must define some condition to break the loop here-----------Start
            if (lst.Count > 500)
            {
                Write();
                break;
            }
            //------------Must define some condition to break the loop here-----------End
        }
    }
    private void Write()
    {
        // Named "writer" to avoid shadowing the Stopwatch field above.
        System.IO.StreamWriter writer = new System.IO.StreamWriter("d:\\text.txt", true);
        foreach (DateTime dtStart in lst)
            writer.WriteLine(dtStart.ToString("HH:mm:ss.ffffff"));
        writer.Close();
    }
}
See also:
Most accurate timer in .NET?
High resolution timer
High resolution timer in C#
Microsecond and Millisecond C# Timer
Precision-Repeat-Action-On-Interval-Async-Method
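For the frame-refresh use case specifically, a simpler option is to schedule each tick against absolute elapsed time, so time spent refreshing one frame is not added to the interval before the next. A sketch, where RefreshFrame is a hypothetical placeholder for the refresh work:
var sw = Stopwatch.StartNew();
const double frameMs = 17.0;
long frame = 0;
while (true)
{
    RefreshFrame(); // hypothetical placeholder
    frame++;
    // Sleep until the next absolute deadline rather than a fixed 17 ms.
    double remaining = frame * frameMs - sw.Elapsed.TotalMilliseconds;
    if (remaining > 0)
        Thread.Sleep((int)remaining);
}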

Why is running async operations in threads much slower than pure tasks or pure thread operations

During investigation of some performance issues, we stumbled upon some results that we do not know the reason for.
We tried running loops of async operations with different loop counts and delays, using three different constructs. Case 2 ran much slower as the number of tasks increased (we would never actually use code like case 2, but what is the explanation for the results outlined below?):
case 1:
for (int i = 0; i < count; i++)
{
    tasks.Add(Task.Delay(delay));
}
Task.WaitAll(tasks.ToArray());
case 2:
for (int i = 0; i < count; i++)
{
    var task = Task.Run(() => { Task.Delay(delay).Wait(); });
    tasks.Add(task);
}
Task.WaitAll(tasks.ToArray());
case 3:
for (int i = 0; i < count; i++)
{
    tasks.Add(Task.Run(() => Thread.Sleep(delay)));
}
Task.WaitAll(tasks.ToArray());
Why is case 2 so slow compared to the other cases? See the output and full program below.
RunAsync, count = 1, delay = 50
Async execution 1 times with Task.Delay of 50 ms. took 00:00:00.0510000, average = 00:00:00.0510000.
RunAsync, count = 5, delay = 50
Async execution 5 times with Task.Delay of 50 ms. took 00:00:00.0620000, average = 00:00:00.0124000.
RunAsync, count = 10, delay = 50
Async execution 10 times with Task.Delay of 50 ms. took 00:00:00.0660000, average = 00:00:00.0066000.
RunAsync, count = 50, delay = 50
Async execution 50 times with Task.Delay of 50 ms. took 00:00:00.0590000, average = 00:00:00.0011800.
RunAsync, count = 100, delay = 50
Async execution 100 times with Task.Delay of 50 ms. took 00:00:00.0620000, average = 00:00:00.0006200.
==================================================
RunAsyncInThread, count = 1, delay = 50
Task.Run 1 times with Task.Delay of 50 ms. took 00:00:00.0630000, average = 00:00:00.0630000.
RunAsyncInThread, count = 5, delay = 50
Task.Run 5 times with Task.Delay of 50 ms. took 00:00:00.0620000, average = 00:00:00.0124000.
RunAsyncInThread, count = 10, delay = 50
Task.Run 10 times with Task.Delay of 50 ms. took 00:00:00.7200000, average = 00:00:00.0720000.
RunAsyncInThread, count = 50, delay = 50
WHY ARE THESE SO SLOW:
Task.Run 50 times with Task.Delay of 50 ms. took 00:00:15.8100000, average = 00:00:00.3162000.
RunAsyncInThread, count = 100, delay = 50
Task.Run 100 times with Task.Delay of 50 ms. took 00:00:34.0600000, average = 00:00:00.3406000.
==================================================
RunThread, count = 1, delay = 50
Thread execution 1 times with Task.Delay of 50 ms. took 00:00:00.0500000, average = 00:00:00.0500000.
RunThread, count = 5, delay = 50
Thread execution 5 times with Task.Delay of 50 ms. took 00:00:00.0500000, average = 00:00:00.0100000.
RunThread, count = 10, delay = 50
Thread execution 10 times with Task.Delay of 50 ms. took 00:00:00.0500000, average = 00:00:00.0050000.
RunThread, count = 50, delay = 50
Thread execution 50 times with Task.Delay of 50 ms. took 00:00:00.0500000, average = 00:00:00.0010000.
RunThread, count = 100, delay = 50
Thread execution 100 times with Task.Delay of 50 ms. took 00:00:00.1000000, average = 00:00:00.0010000.
The full test program is here:
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

namespace ConsoleApplication2
{
    class Program
    {
        static void Main(string[] args)
        {
            RunAsync(1, 50);
            RunAsync(5, 50);
            RunAsync(10, 50);
            RunAsync(50, 50);
            RunAsync(100, 50);
            Console.WriteLine("==================================================");
            RunAsyncInThread(1, 50);
            RunAsyncInThread(5, 50);
            RunAsyncInThread(10, 50);
            RunAsyncInThread(50, 50);
            RunAsyncInThread(100, 50);
            Console.WriteLine("==================================================");
            RunThread(1, 50);
            RunThread(5, 50);
            RunThread(10, 50);
            RunThread(50, 50);
            RunThread(100, 50);
            //RunAsyncInThread(20, 50);
            //RunAsyncInThread(40, 50);
            //RunAsyncInThread(80, 50);
            //RunAsyncInThread(160, 50);
            //RunAsyncInThread(320, 50);
            Console.WriteLine("Press enter:");
            Console.ReadLine();
        }

        private static void RunAsyncInThread(int count, int delay)
        {
            Console.WriteLine("RunAsyncInThread, count = {0}, delay = {1} ", count, delay);
            var now = DateTime.UtcNow;
            var tasks = new List<Task>();
            for (int i = 0; i < count; i++)
            {
                var task = Task.Run(() => { Task.Delay(delay).Wait(); });
                tasks.Add(task);
            }
            Task.WaitAll(tasks.ToArray());
            var elapsed = DateTime.UtcNow - now;
            Console.WriteLine("Task.Run {0} times with Task.Delay of {1} ms. took {2}, average = {3}. ", count, delay, elapsed, TimeSpan.FromTicks(elapsed.Ticks / count));
        }

        private static void RunAsync(int count, int delay)
        {
            Console.WriteLine("RunAsync, count = {0}, delay = {1} ", count, delay);
            var now = DateTime.UtcNow;
            var tasks = new List<Task>();
            for (int i = 0; i < count; i++)
            {
                tasks.Add(Task.Delay(delay));
            }
            Task.WaitAll(tasks.ToArray());
            var elapsed = DateTime.UtcNow - now;
            Console.WriteLine("Async execution {0} times with Task.Delay of {1} ms. took {2}, average = {3}. ", count, delay, elapsed, TimeSpan.FromTicks(elapsed.Ticks / count));
        }

        private static void RunThread(int count, int delay)
        {
            Console.WriteLine("RunThread, count = {0}, delay = {1} ", count, delay);
            var now = DateTime.UtcNow;
            var tasks = new List<Task>();
            for (int i = 0; i < count; i++)
            {
                tasks.Add(Task.Run(() => Thread.Sleep(delay)));
            }
            Task.WaitAll(tasks.ToArray());
            var elapsed = DateTime.UtcNow - now;
            Console.WriteLine("Thread execution {0} times with Task.Delay of {1} ms. took {2}, average = {3}. ", count, delay, elapsed, TimeSpan.FromTicks(elapsed.Ticks / count));
        }
    }
}
Why is it slower to schedule a thread pool thread to start an asynchronous operation, and then sit there waiting for that asynchronous operation to finish, instead of just doing the asynchronous operation directly?
First you incur the overhead of waiting for the work item to be scheduled in the thread pool, and if the thread pool is saturated, say because you flooded it with a hundred requests all of a sudden, this can take some time. Only once the work item is scheduled can the asynchronous operation finally start. Worse, each work item in case 2 blocks its pool thread in Wait() for the full delay, and once all pool threads are busy the pool injects additional threads only slowly (on the order of one per second), which is why 50 and 100 tasks take many seconds.
Why is it slower to create an entirely new thread so that you can start an asynchronous operation and then wait for it to finish, instead of just starting the asynchronous operation?
Because you need to spend all of the time and effort to create a brand-new thread and wait for it to be scheduled for the first time before you can even start your asynchronous operation.
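As an illustration (not from the original question), case 2 stops degrading if the lambda awaits the delay instead of blocking on it, because the pool thread is returned to the pool while the delay runs:
for (int i = 0; i < count; i++)
{
    // Async lambda: Task.Run unwraps the inner task, and no pool thread
    // is blocked for the duration of the delay.
    tasks.Add(Task.Run(async () => await Task.Delay(delay)));
}
Task.WaitAll(tasks.ToArray());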

Performance of signaling threads in c#

I've been attempting to understand how long it takes to "wake" a thread that is waiting on a blocking construct like AutoResetEvent. From what I understood after reading multiple discussions, Windows has some kind of internal clock that "ticks" every 15.6 ms (or so) and then decides which threads are scheduled to run next, so I would expect the time difference between signaling a thread and that thread waking up to be a random time between 0 and 15.6 ms.
So I wrote this small program to test my theory:
static void Main(string[] args)
{
    double total = 0;
    int max = 100;
    Stopwatch stopwatch = new Stopwatch();
    stopwatch.Start();
    for (int i = 0; i < max; i++)
    {
        AutoResetEvent eventHandle = new AutoResetEvent(false);
        double time1 = 0;
        double time2 = 0;
        Thread t1 = new Thread(new ThreadStart(() => time1 = f1(stopwatch, eventHandle)));
        Thread t2 = new Thread(new ThreadStart(() => time2 = f2(stopwatch, eventHandle)));
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();
        double diff = time2 - time1;
        total += diff;
        Console.WriteLine("Diff = " + diff.ToString("F4"));
    }
    double avg = total / max;
    Console.WriteLine("Avg = " + avg.ToString("F4"));
    Console.ReadKey();
}
static double f1(Stopwatch s, AutoResetEvent eventHandle)
{
    Thread.Sleep(500);
    double res = s.Elapsed.TotalMilliseconds;
    eventHandle.Set();
    return res;
}
static double f2(Stopwatch s, AutoResetEvent eventHandle)
{
    eventHandle.WaitOne();
    return s.Elapsed.TotalMilliseconds;
}
To my surprise, the average wake-up time was around 0.05 milliseconds, so obviously I'm missing something, but I don't know what...
No, 15.625 msec is the period of the clock tick interrupt, which lets the scheduler interrupt a thread if it has been running without blocking and the scheduler decides that another thread should get a turn.
Threads that block are pre-empted at their WaitXxx() call, or Sleep() call, regardless of the clock tick interrupt.
Notable as well is that a sleeping thread can only resume running at a clock interrupt tick, which is the reason Thread.Sleep(1) in fact sleeps for 15.6 msec. Timers, DateTime.Now and Environment.TickCount have that same accuracy; the clock is incremented by the interrupt.
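A quick sketch that makes the effect visible (results vary per machine, and other processes can raise the system timer resolution, which shrinks the observed granularity):
var sw = Stopwatch.StartNew();
for (int i = 0; i < 10; i++)
{
    double before = sw.Elapsed.TotalMilliseconds;
    Thread.Sleep(1); // requests 1 ms, but typically rounds up to the clock tick period
    double after = sw.Elapsed.TotalMilliseconds;
    Console.WriteLine("{0:F2} ms", after - before);
}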

Why is this code executing faster than expected?

I have this code:
public void replay() {
    long previous = DateTime.Now.Ticks;
    for (int i = 0; i < 1000; i++) {
        Thread.Sleep(300);
        long cur = DateTime.Now.Ticks;
        Console.WriteLine(cur - previous);
        previous = cur;
    }
}
Which is invoked as a separate thread like this:
MethodInvoker replayer = new MethodInvoker(replay);
replayer.BeginInvoke(null, null);
However, if I watch the output, it acts strangely. It prints the values in pairs. For example, it'll wait a full interval, then print a value, then quickly print the next value too, then wait again. Why is it doing that, and how can I correct it?
It outputs this:
3125040
2968788
2968788
2968788
3125040
2968788
2968788
2968788
3125040
2968788
2968788
2968788
3125040
If I increase the sleep to more than a second this doesn't happen.
Change the code to eliminate display latency in your analysis:
public void replay()
{
    Thread.Sleep(5000);
    DateTime start = DateTime.Now;
    for (int i = 0; i < 1000; i++)
    {
        Console.WriteLine(string.Format("Exec:{0} - {1} ms",
            i, DateTime.Now - start));
        start = DateTime.Now;
        Thread.Sleep(300);
    }
}
Looking at your modified output, there is less than 5% variance (15 ms out of the 300) in the loop delay. This is normal, due to the uncertainties involved in when the OS actually assigns timeslices to the thread. (If I recall correctly, on a Windows OS this normally happens only every 20 ms!)
The larger discrepancy you perceive in the console output is almost certainly due to display latencies.
Cannot reproduce. I wonder if it is something local to your machine; buffering, perhaps.
I can't reproduce this, but you might want to consider a timer. It would be more reliable.
public class Counter
{
    private readonly TimeSpan initialDelay, incrementDelay;
    private readonly int maxCount;
    private Timer timer;
    private int count;

    public Counter(TimeSpan initialDelay, TimeSpan incrementDelay, int maxCount)
    {
        this.maxCount = maxCount;
        this.initialDelay = initialDelay;
        this.incrementDelay = incrementDelay;
    }

    public void Start(Action<int> tickBehavior)
    {
        if (timer != null)
        {
            Timer temp = timer;
            timer = null;
            temp.Dispose();
        }
        // System.Threading.Timer expects a TimerCallback, which takes a state object.
        timer = new Timer(state =>
        {
            tickBehavior(count++);
            if (count > maxCount) timer.Dispose();
        }, null, initialDelay, incrementDelay);
    }
}
Use it:
Counter counter = new Counter(TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(.3), 1000);
counter.Start(count => Console.WriteLine(count));
EDIT
I'm using System.Threading.Timer, but Counter could easily be modified to use System.Timers.Timer or System.Windows.Forms.Timer depending on your need. See this link for a description of when to use which timer.
Your sleep inside the loop is only 300 ms, which isn't very long. Your application will do the following:
Sleep 5 secs
print 0
Sleep 300ms
print 1
Sleep 300ms
print 2
etc.
