I need a slow C# function - c#

For some testing I'm doing I need a C# function that takes around 10 seconds to execute. It will be called from an ASPX page, but I need the function to eat up CPU time on the server, not rendering time. A slow query against the Northwind database would work, or some very slow calculations. Any ideas?

Try calculating the nth prime number to simulate CPU-intensive work:
public void Slow()
{
    long nthPrime = FindPrimeNumber(1000); // set a higher value for more time
}

public long FindPrimeNumber(int n)
{
    int count = 0;
    long a = 2;
    while (count < n)
    {
        long b = 2;
        int prime = 1; // flag: 1 means a is still considered prime
        while (b * b <= a)
        {
            if (a % b == 0)
            {
                prime = 0;
                break;
            }
            b++;
        }
        if (prime > 0)
        {
            count++;
        }
        a++;
    }
    return --a;
}
How long it takes will depend on the hardware configuration of the system, so start with an input of 1000 and then increase or decrease the value until the call takes roughly as long as you need. This function will simulate CPU-intensive work.
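If you want to tune n more systematically, a quick Stopwatch measurement makes the calibration concrete. This is only a sketch and assumes the FindPrimeNumber method above is in scope:

using System;
using System.Diagnostics;

// Time one call, then adjust n and re-run until the elapsed time is near 10,000 ms.
int n = 1000;
var sw = Stopwatch.StartNew();
long prime = FindPrimeNumber(n); // the method defined above (assumed accessible here)
sw.Stop();
Console.WriteLine($"FindPrimeNumber({n}) = {prime}, took {sw.ElapsedMilliseconds} ms");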

Arguably the simplest such function is this:
public void Slow()
{
    var end = DateTime.Now + TimeSpan.FromSeconds(10);
    while (DateTime.Now < end)
        /* nothing here */ ;
}

You can use a 'while' loop to make the CPU busy.
void CpuIntensive()
{
    var startDt = DateTime.Now;
    while (true)
    {
        if ((DateTime.Now - startDt).TotalSeconds >= 10)
            break;
    }
}
This method will stay in the while loop for 10 seconds. Also, if you run this method in multiple threads, you can make all CPU cores busy.
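For completeness, one way to do that (a sketch, not part of the original answer) is to start one task per logical processor and wait for them all:

using System;
using System.Threading.Tasks;

class Program
{
    static void BusyWait(int seconds)
    {
        var end = DateTime.Now.AddSeconds(seconds);
        while (DateTime.Now < end) { /* spin */ }
    }

    static void Main()
    {
        // One busy-waiting task per logical processor; WaitAll blocks until they all finish.
        var tasks = new Task[Environment.ProcessorCount];
        for (int i = 0; i < tasks.Length; i++)
            tasks[i] = Task.Run(() => BusyWait(10));
        Task.WaitAll(tasks);
    }
}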

For maxing out multiple cores I adjusted @Motti's answer a bit, and got the following:
Enumerable
    .Range(1, Environment.ProcessorCount) // use a smaller number if 100% usage is not what you are after
    .AsParallel()
    .Select(i =>
    {
        var end = DateTime.Now + TimeSpan.FromSeconds(10);
        while (DateTime.Now < end)
            /* nothing here */ ;
        return i;
    })
    .ToList(); // ToList makes the query execute

This is CPU intensive on a single thread/CPU, and lasts 10 seconds.
var endTime = DateTime.Now.AddSeconds(10);
while (true)
{
    if (DateTime.Now >= endTime)
        break;
}
As a side note, you should not normally do this.

Just use
Thread.Sleep(number of milliseconds here);
You will have to add using System.Threading;
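For example (note that Sleep suspends the thread, so unlike the busy loops above it will not actually consume CPU time on the server):

using System.Threading;

// Pauses the calling thread for roughly 10 seconds without using CPU.
Thread.Sleep(10000);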

Related

C#: Random.NextDouble method stalling out application

I'm seeing some inconsistent behavior from Random.NextDouble().
Regularly, the console would freeze and the CPU usage would increase dramatically until I closed it down. I ran the debugger and found that the cause of the freezing was Random.NextDouble(). I added some lines for debugging purposes, but the code is as follows:
double generateCatenationRate()
{
    double catenation = 999.999; // sentinel value to see if it pops up down there
    double uniformValue;
    double int_covalence = covalence.original;
    double dist = int_covalence - 4;
    int counter = 0;
    while (true)
    {
        counter++;
        uniformValue = utils.getRandomDouble(); // goes bad here!
        if (uniformValue <= 0.15)
        {
            catenation = Math.Abs(utils.normalize(dist, 0, 4)) + uniformValue;
            if (catenation < 0 || catenation > 1)
            {
                if (counter > 10000)
                {
                    Console.WriteLine("Went nuclear!");
                    break; // break so the console doesn't stall out
                }
                continue;
            }
            else
            {
                break;
            }
        }
    }
    Console.WriteLine("Took " + counter + " iterations.");
    return 1 - catenation;
}
And:
public static double getRandomDouble()
{
    Init();
    return random.NextDouble();
}
Lastly:
private static void Init()
{
    if (random == null) random = new Random();
}
It typically does not stall out, but running it several times successively produces output such as:
Took 4 iterations.
Took 3 iterations
Took 3 iterations.
Took 23 iterations.
Took 12 iterations.
Took 4 iterations.
Went nuclear!
Took 10007 iterations.
Can anyone explain why Random.NextDouble() occasionally seems to create an infinite loop? Looking around, I suspect it has something to do with how the values are seeded, but any insight would be appreciated; would love to fix this issue.
Thank you!
EDIT: covalence.original is always an integer between 1 and 8 (inclusive). normalize() performs min-max normalization, producing a number from 0-1 based on an input and a range. Neither of these seem to contribute to the problem, however.
If I understand correctly then the value of dist and utils.normalize(dist, 0, 4) never changes.
So if int_covalence = 8 then dist = 4 and utils.normalize(dist, 0, 4) = 1, correct?
Since the chance of generating 0.0 is pretty small, that will make catenation virtually always greater than 1 and the check if (catenation < 0 || catenation > 1) always true.
Why not just generate the sample directly rather than using rejection sampling?
public static double generateCatenationRate(Random rng, double coval_norm)
{
    double low = Math.Abs(coval_norm) + 0.15;
    double delta = 1.0 - low;
    if (delta < 0)
    {
        throw new ArgumentException("impossible given covalence");
    }
    return low + delta * rng.NextDouble();
}
where coval_norm is whatever you get back from utils.normalize. If we write it this way, we get visibility of the "impossible" condition and can do something about it, rather than just looping.
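A hypothetical usage sketch (the inputs are assumed: 0.25 stands in for a normalized dist of 1, and 1.0 corresponds to the int_covalence = 8 case discussed above, where the range is empty and the method now reports it instead of spinning):

var rng = new Random();

// A feasible case: low = 0.4, so a rate in [0.4, 1.0) comes back.
double ok = generateCatenationRate(rng, 0.25);

// The int_covalence = 8 case: low = 1.15 > 1, so this throws instead of looping forever.
double impossible = generateCatenationRate(rng, 1.0);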

How to implement threading in Xamarin.Android?

I'm trying to send/publish at 100ms, and the message looks like this
x.x.x.x.x.x.x.x.x.x
So every 100 ms or so Subscribe will be called. My problem is that I think it's not fast enough (i.e. the current Subscribe handler is not yet done when the next message is published and Subscribe is called again).
I was wondering how I could keep populating the list and graph it with OxyPlot at the same time. Can I use threading for this?
var x = 0;
channel.Subscribe(message =>
{
    this.RunOnUiThread(() =>
    {
        var sample = message.Data;
        byte[] data = (byte[])sample;
        var data1 = System.Text.Encoding.ASCII.GetString(data);
        var splitData = data1.Split('-');
        foreach (string s in splitData) // contains 10
        {
            double y = double.Parse(s);
            y /= 100;
            series1.Points.Add(new DataPoint(x, y));
            MyModel.InvalidatePlot(true);
            x++;
        }
        if (x >= xaxis.Maximum)
        {
            xaxis.Pan(xaxis.Transform(-1 + xaxis.Offset));
        }
    });
});
Guaranteeing a minimum execution time is the domain of real-time programming, and with an app on a smartphone OS you are about as far from that as I can imagine you to be. The only thing farther "off" would be an interpreted language (PHP, Python).
The only thing you can do is define a minimum time between iterations. I once wrote some example code for doing that from within an alternative thread. A basic rate-limiting loop:
int interval = 20;
DateTime dueTime = DateTime.Now.AddMilliseconds(interval);
while (true)
{
    if (DateTime.Now >= dueTime)
    {
        // insert code here

        // update the next due time
        dueTime = DateTime.Now.AddMilliseconds(interval);
    }
    else
    {
        // just yield so we do not max out the CPU
        Thread.Sleep(1);
    }
}
Note that DateTime.Now only returns a new value roughly every 15.6 ms, so anything less than about 20 ms would be too small an interval.
If you think you cannot afford a minimum time, you may need to read the Speed Rant.
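If the coarse DateTime.Now granularity is a concern, a Stopwatch-based variant of the same rate limiter gives finer resolution. This is only a sketch; the 100 ms interval and the placeholder work are assumptions based on the question:

using System.Diagnostics;
using System.Threading;

int intervalMs = 100;                 // e.g. the ~100 ms publish rate from the question
var sw = Stopwatch.StartNew();
long next = intervalMs;
while (true)
{
    if (sw.ElapsedMilliseconds >= next)
    {
        // insert the per-message work here
        next += intervalMs;
    }
    else
    {
        Thread.Sleep(1);              // yield so the loop does not spin the CPU
    }
}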

What is the best way to use Parallel.For in a recursive algorithm?

I am building software to evaluate many possible solutions and am trying to introduce parallel processing to speed up the calculations. My first attempt was to build a datatable with each row being a solution to evaluate but building the datatable takes quite some time and I am running into memory issues when the number of possible solutions goes into the millions.
The problem which warrants these solutions is structured as follows:
There is a range of dates for x number of events, which must be done in order. The solutions to evaluate could look as follows, with each solution being a row, the events being the columns, and the day numbers being the values.
Given 3 days (0 to 2) and three events:
0 0 0
0 0 1
0 0 2
0 1 1
0 1 2
0 2 2
1 1 1
1 1 2
1 2 2
2 2 2
My new plan was to use recursion and evaluate the solutions as I go rather than build a solution set to then evaluate.
for (int day = 0; day < maxdays; day++)
{
    List<int> mydays = new List<int>();
    mydays.Add(day);
    EvalEvent(0, day, mydays);
}

private void EvalEvent(int eventnum, int day, List<int> mydays)
{
    // events must be on the same day as or after previous events
    Parallel.For(day, maxdays, day2 =>
    {
        List<int> mydays2 = new List<int>();
        for (int a = 0; a < mydays.Count; a++)
        {
            mydays2.Add(mydays[a]);
        }
        mydays2.Add(day2);
        if (eventnum < eventcount - 1) // proceed to the next event
        {
            EvalEvent(eventnum + 1, day2, mydays2);
        }
        else
        {
            EvalSolution(mydays2);
        }
    });
}
My question is whether this is actually an efficient use of parallel processing, or will too many threads be spawned and slow it down? Should the parallel loop only be done on the last, or maybe the last few, values of eventnum, or is there a better way to approach the problem?
The requested old code is pretty much as follows:
private int daterange;
private int events;

private void ScheduleIt()
{
    daterange = 10;
    events = 6;
    CreateSolutions();
    int best = GetBest();
}

private DataTable Options;

private bool CreateSolutions()
{
    Options = new DataTable();
    Options.Columns.Add();
    for (int day1 = 0; day1 <= daterange; day1++)
    {
        Options.Rows.Add(day1);
    }
    for (int ev = 1; ev < events; ev++)
    {
        Options.Columns.Add();
        foreach (DataRow dr in Options.Rows)
        {
            dr[Options.Columns.Count - 1] = dr[Options.Columns.Count - 2];
        }
        int rows = Options.Rows.Count;
        for (int day1 = 1; day1 <= daterange; day1++)
        {
            for (int i = 0; i < rows; i++)
            {
                if (day1 > Convert.ToInt32(Options.Rows[i][Options.Columns.Count - 2]))
                {
                    try
                    {
                        Options.Rows.Add();
                        for (int col = 0; col < Options.Columns.Count - 1; col++)
                        {
                            Options.Rows[Options.Rows.Count - 1][col] = Options.Rows[i][col];
                        }
                        Options.Rows[Options.Rows.Count - 1][Options.Columns.Count - 1] = day1;
                    }
                    catch (Exception ex)
                    {
                        return false;
                    }
                }
            }
        }
    }
    return true;
}

private int GetBest()
{
    int bestopt = 0;
    double bestscore = 999999999;
    Parallel.For(0, Options.Rows.Count, opt =>
    {
        double score = 0;
        for (int i = 0; i < Options.Columns.Count; i++)
        {
            score += Convert.ToDouble(Options.Rows[opt][i]); // just a stand-in calculation for a score
        }
        // note: bestscore and bestopt are shared and updated here without synchronization
        if (score < bestscore)
        {
            bestscore = score;
            bestopt = opt;
        }
    });
    return bestopt;
}
Even if done without errors, it cannot significantly speed up your solution.
It looks like each level of recursion tries to start multiple (say up to k) next-level calls, across (say) n levels. This essentially means the code is O(k^n), which grows very fast. A non-algorithmic speedup of such an O(k^n) solution is essentially useless (unless both k and n are very small). In particular, running the code in parallel will only give you a constant factor of speedup (roughly the number of threads supported by your CPUs), which does not change the complexity at all.
Indeed, creating an exponentially large number of requests for new threads will likely cause more problems by itself, just from managing the threads.
In addition to not significantly improving performance, parallel code is harder to write, as it needs proper synchronization or clever data partitioning; neither seems to be present in your case.
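To put numbers on that growth: with non-decreasing day assignments, the number of distinct candidate solutions for d days and e events is the combinations-with-repetition count C(d + e - 1, e), which matches the 10-row listing in the question for 3 days and 3 events. A small sketch (hypothetical helper, not from the question's code):

using System;

class SolutionCount
{
    // Combinations with repetition: C(d + e - 1, e) non-decreasing day assignments.
    static long Count(int days, int events)
    {
        long result = 1;
        for (int i = 1; i <= events; i++)
            result = result * (days + i - 1) / i; // exact at every step
        return result;
    }

    static void Main()
    {
        Console.WriteLine(Count(3, 3));   // 10, matching the listing in the question
        Console.WriteLine(Count(10, 6));  // 5005 for 10 possible days and 6 events
        Console.WriteLine(Count(30, 10)); // already roughly 636 million
    }
}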
Parallelization works best when the workload is bulky and balanced. Ideally you would like your work split into as many independent partitions as your machine has logical processors, ensuring that all partitions are approximately the same size. That way all available processors work at maximum efficiency for approximately the same duration, and you get the results after the shortest possible time.
Of course you should start with a working and bug-free serial implementation, and then think about ways to partition your work. The easiest way is usually not the optimal one. For example, an easy path is to convert your work to a LINQ query and then parallelize it with AsParallel() (making it PLINQ). This usually results in too granular a partitioning, which introduces too much overhead. If you can't find ways to improve it, you can then go the way of Parallel.For or Parallel.ForEach, which is a bit more complex.
A LINQ implementation should probably start by creating an iterator that produces all your units of work (Events or Solutions, it's not very clear to me).
public static IEnumerable<Solution> GetAllSolutions()
{
    for (int day = 0; day < 3; day++)
    {
        for (int ev = 0; ev < 3; ev++)
        {
            yield return new Solution(); // ???
        }
    }
}
It will certainly be helpful if you have created concrete classes to represent the entities you are dealing with.
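Once such an iterator and a concrete Solution class exist, the PLINQ step could look roughly like this. It is only a sketch: Evaluate is a hypothetical scoring function standing in for EvalSolution / the score loop in GetBest().

using System.Linq;

// Evaluate all candidate solutions in parallel and keep the lowest score,
// mirroring what GetBest() does with the DataTable version.
var best = GetAllSolutions()
    .AsParallel()
    .Select(s => new { Solution = s, Score = Evaluate(s) })
    .OrderBy(x => x.Score)
    .First();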

What is the resolution of the CPU time value in ants or dotTrace profiler?

As I understand from my previous research, the resolution of the timer when measuring the CPU time of a function is ~15.6 ms, meaning we can only get values like 0, 15.6 or 31.2 ms:
TimeSpan a = Process.GetCurrentProcess().UserProcessorTime;
functionTest();
TimeSpan b = Process.GetCurrentProcess().UserProcessorTime;
// (b - a) comes out as a value like 0, 15.6 or 31.2 ms
But using a performance profiler like dotTrace or ANTS, with the timing option set to "CPU Time", I see values in the time column like 4.129 ms or 1.032 ms, so the resolution is much higher.
What is the method to get this resolution in code?
functionTest is:
private long FindPrimeNumber(int n)
{
    int count = 0;
    long a = 2;
    while (count < n)
    {
        long b = 2;
        int prime = 1; // flag: 1 means a is still considered prime
        while (b * b <= a)
        {
            if (a % b == 0)
            {
                prime = 0;
                break;
            }
            b++;
        }
        if (prime > 0)
            count++;
        a++;
    }
    return --a;
}
These are most likely native WinAPI methods; you can invoke them from C# via DllImport. For simplicity you could try using a third-party wrapper from here.
But you should clearly understand what you are doing. There will be a difference between the first call of your function and the second one because of JIT time, and if your method allocates memory, a GC might occur at any time while your method runs and will be reflected in the measurement.
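As one concrete example of such a native call (a sketch, and not necessarily what dotTrace or ANTS actually use), the Win32 function QueryThreadCycleTime reports the CPU cycles a thread has consumed and can be P/Invoked like this:

using System;
using System.Runtime.InteropServices;

class CycleTimer
{
    // Win32: number of CPU cycles consumed by the given thread.
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool QueryThreadCycleTime(IntPtr threadHandle, out ulong cycleTime);

    // Win32: pseudo-handle for the calling thread.
    [DllImport("kernel32.dll")]
    static extern IntPtr GetCurrentThread();

    static void Main()
    {
        QueryThreadCycleTime(GetCurrentThread(), out ulong before);

        // Put the code to measure here, e.g. FindPrimeNumber(1000) from the question.
        for (int i = 0; i < 10_000_000; i++) { }

        QueryThreadCycleTime(GetCurrentThread(), out ulong after);
        // Note this is cycles, not wall time; divide by the CPU frequency for a rough duration.
        Console.WriteLine($"CPU cycles used: {after - before}");
    }
}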

How to achieve the below logic in C#?

I need to assign a channel to each schedule. There can be as many concurrent events as the number of channels allocated to the customer, i.e. if the customer is allocated 3 channels then he can have 3 concurrent events. If a channel was allocated to an event, then the same channel cannot be allocated to another event that falls in the same time slot, but it can be allocated to another event if the times differ.
Channel Table
ID Name
1 Name1
2 Name2
3 Name3
Event Table
ID EventName StartTime EndTime ChannelID
1 Event1 11:30AM 12PM 1
2 Event2 11:30AM 11:40AM 2
3 Event3 11:40AM 12PM 2
4 Event4 12PM 12:30PM 1 or 2
5 Event5 11:30AM 12:30PM 3
The above is the expected output.
I tried nested foreach loops, one for the channels and one for the events, but was not able to achieve this, and the complexity is really high. How can I achieve this logic?
Pseudo Code:
for each channel
{
    foreach existing event
    {
        if (same time && same channel)
        {
            go to next channel
        }
        break;
    }
    assign current channel to new event
}
This fails when I try to create the 3rd event.
You can instead loop through the events when assigning channels to them; check out the pseudocode below:
foreach event
{
    foreach channel
    {
        if currentChannel is assigned
        {
            foreach assignedEvent
            {
                if assignedTime = currentEventTime
                    go to next channel (continue)
            }
            currentEvent.Channel = currentChannel
            break;
        }
        else
        {
            currentEvent.Channel = currentChannel
            break;
        }
    }
}
This looks somewhat similar to the Vehicle Routing Problem to me. Channels are the same as vehicles, and events are like nodes in a directed acyclic graph, with edges leading from one event to another if and only if the first event finishes earlier than the second one starts.
You should be able to find publicly available algorithms for solving this problem.
You have to generate all possibilities and choose the best.
It is an NP-complete problem, so there's no way to do it both fast and correctly: either you do it fast with some heuristic, but then you don't know whether it really did the best job, or you do it slow, and I mean slow, but you are sure the result is optimal.
It depends on the size of your data.
EDIT:
Here is an example to demonstrate that just assigning events to the first free channel won't always work.
We have 3 channels:
Ch1, Ch2, Ch3
We have to place 6 events:
E1-2 - starts at 1:00 ends at 2:59
E1-3 - starts at 1:00 ends at 3:59
E1-3 - starts at 1:00 ends at 3:59
E4 - starts at 4:00 ends at 4:59
E4 - starts at 4:00 ends at 4:59
E3-4 - starts at 3:00 ends at 4:59
If you just assign to the first free place you'll end up with:
Ch1: E1-2, E4
Ch2: E1-3, E4
Ch3: E1-3
and no place for E3-4
If you assigned them in a different order, you'd get:
Ch1: E1-2, E3-4
Ch2: E1-3, E4
Ch3: E1-3, E4
And all the events would fit.
So you have to do some kind of backtracking.
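A minimal backtracking sketch along those lines (the Event class and Overlaps helper here are hypothetical, not the asker's types):

using System;
using System.Collections.Generic;
using System.Linq;

class Event
{
    public TimeSpan Start, End;
    public int Channel = -1; // -1 means not yet assigned
}

static class Scheduler
{
    static bool Overlaps(Event a, Event b) => a.Start < b.End && b.Start < a.End;

    // Try to give every event a channel (0..channelCount-1), undoing and retrying on conflicts.
    public static bool Assign(List<Event> events, int index, int channelCount)
    {
        if (index == events.Count) return true;            // all events placed
        var current = events[index];
        for (int ch = 0; ch < channelCount; ch++)
        {
            bool conflict = events.Take(index)
                                  .Any(e => e.Channel == ch && Overlaps(e, current));
            if (conflict) continue;

            current.Channel = ch;
            if (Assign(events, index + 1, channelCount)) return true;
            current.Channel = -1;                          // backtrack: undo and try the next channel
        }
        return false;                                      // nothing fits at this level
    }
}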
I fixed some problems in the code. I think it should work now:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace ChannelAllocator
{
    class Program
    {
        const int _numberOfChannels = 10;

        static void Main(string[] args)
        {
            Dictionary<int, List<TimeSlot>> occupiedChannels = new Dictionary<int, List<TimeSlot>>();
            for (int i = 1; i <= _numberOfChannels; i++)
            {
                occupiedChannels.Add(i, new List<TimeSlot>());
            }

            /** Example **/
            TimeSpan start = DateTime.Now.AddHours(-1.0).TimeOfDay;
            TimeSpan end = DateTime.Now.TimeOfDay;
            AssignChannel(occupiedChannels, ref start, ref end);
        }

        private static bool AssignChannel(Dictionary<int, List<TimeSlot>> occupiedChannels, ref TimeSpan start, ref TimeSpan end)
        {
            List<int> channels = occupiedChannels.Keys.ToList();
            if (start >= end)
                return false;

            foreach (var item in channels)
            {
                List<TimeSlot> slots = occupiedChannels[item];
                if (slots.Count == 0)
                {
                    occupiedChannels[item].Add(new TimeSlot(start, end));
                    return true;
                }
                else
                {
                    bool available = false;
                    foreach (var slot in slots)
                    {
                        TimeSpan channelStartTime = slot.StartTime;
                        TimeSpan channelEndTime = slot.EndTime;

                        // the requested slot overlaps this existing slot
                        if ((start >= channelStartTime && end <= channelEndTime)
                            || (start <= channelStartTime && end <= channelEndTime && end >= channelStartTime)
                            || (end >= channelEndTime && start >= channelStartTime && start <= channelEndTime))
                        {
                            available = false;
                            break;
                        }
                        else
                        {
                            available = true;
                        }
                    }
                    if (available)
                    {
                        occupiedChannels[item].Add(new TimeSlot(start, end));
                        return true;
                    }
                }
            }
            return false;
        }

        private class TimeSlot
        {
            public TimeSpan StartTime;
            public TimeSpan EndTime;

            public TimeSlot(TimeSpan start, TimeSpan end)
            {
                StartTime = start;
                EndTime = end;
            }
        }
    }
}
