Algorithm for selecting random times within a window per user - C#

I need to select N events a day at random, but they can't be too close to each other (M). So the N events have to be at least M apart within a particular window (W). In this case the window I am thinking of is 12 hours.
N = the number of events
T = the time at which the event should occur (UTC)
M = the minimum amount of time they should be apart (hours).
W = the window of the events (Now to Now + 12 hours).
U = the user (probably not important to this problem)
I could probably figure this out, but I thought it would be a fun StackOverflow question, and I'm interested in how people would solve it.
Thanks in advance :)
Update: Moved my solution out of the question and into an answer below.

Try this:
It splits the available time (window - (count - 1) * minimum) randomly, then sorts the times and adds back multiples of the minimum spacing to produce the final event array T[].
static Random rnd = new Random();

static void Main(string[] args)
{
    double W = 12;
    double M = 1.0;
    int N = 7;
    double S = W - (N - 1) * M;
    double[] T = new double[N];
    for (int i = 0; i < N; i++)
    {
        T[i] = rnd.NextDouble() * S;
    }
    Array.Sort(T);
    for (int i = 0; i < N; i++)
    {
        T[i] += M * i;
    }
    Console.WriteLine("{0,8} {1,8}", "#", "Time");
    for (int i = 0; i < N; i++)
    {
        Console.WriteLine("{0,8} {1,8:F3}", i + 1, T[i]);
    }

    // With N=3, Window 12h, Min. Span = 5h
    //        #     Time
    //        1    0.468
    //        2    5.496
    //        3   10.529

    // With N=7, Window 12h, Min. Span = 1h
    //        #     Time
    //        1    0.724
    //        2    2.771
    //        3    4.020
    //        4    5.790
    //        5    7.331
    //        6    9.214
    //        7   10.673
}
As a check: when the minimum spacing exactly fills the time window, the events come out equally spaced. So for 3 events in a 12 hr window with a 6 hr minimum, this algorithm produces events at 0.0, 6.0 and 12.0 as expected.

You can use the idea I had for my question here: Generating non-consecutive combinations, essentially requiring that you only solve the M=0 case.
If you want to skip the description, the algorithm is given at the end of the post, which has no unpredictable while loops etc, and is guaranteed to be O(N log N) time (would have been O(N), if not for a sorting step).
Long Description
To reduce the general M case to the M=0 case, we map each possible combination (with the "at least M apart" constraint) to a combination without the "at least M apart" constraint.
If your events were at T1, T2, .., TN such that T1 <= T2 -M, T2 <= T3 - M ... you map them to the events Q1, Q2, .. QN such that
Q1 = T1
Q2 = T2 - M
Q3 = T3 - 2M
...
QN = TN - (N-1)M.
These Q satisfy the property that Q1 <= Q2 <= ... <= QN, and the mapping is 1 to 1. (From T you can construct the Q, and from Q you can construct the T).
So all you need to do is generate the Q (which is essentially the M=0 case), and map them back to the T.
Note that the window for generating Q becomes [Now, Now+12 - (N-1)M]
To solve the M=0 problem, just generate N random numbers in your window and sort them.
Final Algorithm
Thus your whole algorithm will be
Step 1) Set Window = [Start, End - (N-1)M]
Step 2) Generate N random numbers in the Window.
Step 3) Sort the numbers generated in Step 2. Call them Q1, Q2, .. , QN
Step 4) Create Ti with the formula Ti = Qi + (i-1)M, for i = 1 to N.
Step 5) Output T1,T2,..,TN
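If it helps, here is a minimal C# sketch of those five steps, assuming the window is given as a start time plus a length in hours (the method and parameter names are mine, not from the post):

// Illustrative sketch of the five steps above; names are not from the original answer.
static DateTime[] GenerateEventTimes(DateTime start, double windowHours, int n, double minGapHours, Random rnd)
{
    // Step 1: shrink the window by (N-1)*M
    double reducedHours = windowHours - (n - 1) * minGapHours;
    if (reducedHours < 0)
        throw new ArgumentException("Cannot fit N events at least M hours apart in this window.");

    // Step 2: generate N random offsets in the reduced window
    double[] q = new double[n];
    for (int i = 0; i < n; i++)
        q[i] = rnd.NextDouble() * reducedHours;

    // Step 3: sort them so that Q1 <= Q2 <= ... <= QN
    Array.Sort(q);

    // Steps 4 and 5: map back with Ti = Qi + (i-1)*M and convert to actual times
    DateTime[] t = new DateTime[n];
    for (int i = 0; i < n; i++)
        t[i] = start.AddHours(q[i] + i * minGapHours);
    return t;
}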

If we assume that events occur instantaneously (and, as such, can occur at time = end of window), you could do something like this:
// Using these, as defined in the question
double M;
int N;
DateTime start; // Taken from W
DateTime end;   // Taken from W
// Set these up.

Random rand = new Random();
List<DateTime> times = new List<DateTime>();
// This assumes that M is given in hours.
TimeSpan waitTime = TimeSpan.FromHours(M);
int totalSeconds = (int)(end - start).TotalSeconds;
while( times.Count < N )
{
    int seconds = rand.Next(totalSeconds);
    DateTime next = start.AddSeconds(seconds);
    bool valid = true;
    if( times.Count > 0 )
    {
        foreach( DateTime dt in times )
        {
            // the candidate must be at least waitTime away from every time chosen so far
            valid = (dt - next).Duration() >= waitTime;
            if( !valid )
            {
                break;
            }
        }
    }
    if( valid )
    {
        times.Add(next);
    }
}
Now, in a 12 hour window with at least an hour after each event before the next, you'd best have a small N - my pseudocode above does not check whether it's possible to fit N events into the available time with M hours between each event.
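A simple way to guard against that, as an illustrative addition reusing the M, N, start and end variables from the snippet above, would be a feasibility check before the loop:

// Hypothetical guard: N events, each at least M hours from any other,
// need at least (N - 1) * M hours of window to fit.
if (end - start < TimeSpan.FromHours(M * (N - 1)))
    throw new ArgumentException("Window is too small to fit N events M hours apart.");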

var timeItems = new List<double>();
var rnd = new Random();
int range = (int)(W / M);
for (int i = 0; i < N; i++)
{
    // assumes NextDouble produces a random number between 0 and 1 (exclusive)
    double randomDouble = rnd.NextDouble() * range;
    double T = Math.Floor(randomDouble) * M;
    timeItems.Add(T);
}
return timeItems;

First consider the job of generating one event such that there are (n-1) more events to generate (need to be separated by at least M each) and total of w time left.
Time t can be in between 0 and w-(n-1)m. The average value of t should be w/(n-1). Now, use any of your favorite distributions (I recommend Poisson) to generate a random number with mean w/(n-1). If the number is higher than w-(n-1)m, then generate again. That will give your t.
Recursively call (offset=offset+t, w=w-t, n=n-1, m=m) to generate more numbers.
def generate(offset, w, n, m):
    mean = w/(n-1)
    t = infinity
    while t > (w - (n-1)*m):
        t = poisson(mean)
    return [t+offset] + generate(offset+t, w-t, n-1, m)
I have not coded the corner conditions and other cases; I leave those to you.

Here is my solution. You get some weird behavior if your time apart and numOfEvents are conflicting. Play around with it and see what happens.
using System;
using System.Collections.Generic;

namespace RandomScheduler
{
    class Program
    {
        public static Random R = new Random();

        static void Main()
        {
            var now = DateTime.Now;
            var random = new Random((int)now.Ticks);
            const int windowOfHours = 12;
            const int minimumTimeApartInHours = 2;
            const int numOfEvents = 5;
            // let's start the window at 8 AM
            var start = new DateTime(now.Year, now.Month, now.Day, 8, 0, 0, 0);
            // let's end 12 hours later
            var end = start.AddHours(windowOfHours);
            var prev = null as DateTime?;
            const int hoursInEachSection = windowOfHours / numOfEvents;
            var events = new List<DateTime>();
            for (var i = 0; i < numOfEvents; i++)
            {
                // if there is a previous value
                // let's at least start 2 hours later
                if (prev.HasValue)
                    start = prev.Value.AddHours(minimumTimeApartInHours);
                DateTime? @event;
                do
                {
                    // pick a random double, so we get minutes involved
                    var hoursToAdd = random.NextDouble() * hoursInEachSection;
                    // let's add the random hours to the start
                    // and we get our next event
                    @event = start.AddHours(hoursToAdd);
                    // let's make sure we don't go past the window
                } while (@event > end);
                prev = @event;
                events.Add(@event.Value);
            }
            Console.WriteLine(string.Join("\n", events));
            Console.ReadLine();
        }
    }
}

Related

Massive amount number comparison using C#

Comparison of number sets is too slow. What is a more efficient way to solve this problem?
I have two groups of sets. Each group has about 5 million sets, each set has 6 numbers, and each number is between 1 and 100. The sets and groups are not sorted and may contain duplicates.
Following is an example.
No.  Group A              Group B
1    {1,2,3,4,5,6}        {6,2,4,87,53,12}
2    {2,3,4,5,6,8}        {43,6,78,23,96,24}
3    {45,23,57,79,23,76}  {12,1,90,3,2,23}
4    {3,5,85,24,78,90}    {12,65,78,9,23,13}
...  ...                  ...
My goal is to compare the two groups and classify Group A by maximum common element count, within 5 hrs on my laptop.
In the example, No. 1 of Group A and No. 3 of Group B have 3 common elements (1, 2, 3).
Also, No. 2 of Group A and No. 3 of Group B have 2 common elements (2, 3). Therefore I will classify Group A as follows.
No.  Group A              Maximum Common Element Count
1    {1,2,3,4,5,6}        3
2    {2,3,4,5,6,8}        3
3    {45,23,57,79,23,76}  1
4    {3,5,85,24,78,90}    2
...
My approach is to compare every set and every number, so the complexity is (Group A count) * (Group B count) * 6 * 6. Therefore it takes too much time.
Dictionary<int, List<int>> Classified = new Dictionary<int, List<int>>();
foreach (List<int> setA in GroupA)
{
    int maxcount = 0;
    foreach (List<int> setB in GroupB)
    {
        int count = 0;
        foreach (int elementA in setA)
        {
            foreach (int elementB in setB)
            {
                if (elementA == elementB) count++;
            }
        }
        if (count > maxcount) maxcount = count;
    }
    Classified.Add(maxcount, setA);
}
Here is my attempt - using a HashSet<int> and precalculating the range of each set to avoid set-to-set comparisons like {1,2,3,4,5,6} and {7,8,9,10,11,12} (as pointed out by Matt's answer).
For me (running with random sets) it resulted in a 130x speed improvement on the original code. You mentioned in a comment that
Now execution time is over 3 days, so as others said I need parallelization.
and in the question itself that
My goal is compare two groups and classify Group A by maximum common element count in 5hrs on my laptop.
so assuming that the comment means that the execution time for your data exceeded 3 days (72 hours), but you want it to complete in 5 hours, you'd only need something like a 14x speed increase.
Framework
I've created some classes to run these benchmarks:
Range - takes some int values, and keeps track of the minimum and maximum values.
public class Range
{
    private readonly int _min;
    private readonly int _max;

    public Range(IReadOnlyCollection<int> values)
    {
        _min = values.Min();
        _max = values.Max();
    }

    public int Min { get { return _min; } }
    public int Max { get { return _max; } }

    public bool Intersects(Range other)
    {
        // The ranges are disjoint if one ends before the other starts.
        if ( _min > other._max )
            return false;
        if ( _max < other._min )
            return false;
        return true;
    }
}
SetWithRange - wraps a HashSet<int> and a Range of the values.
public class SetWithRange : IEnumerable<int>
{
    private readonly HashSet<int> _values;
    private readonly Range _range;

    public SetWithRange(IReadOnlyCollection<int> values)
    {
        _values = new HashSet<int>(values);
        _range = new Range(values);
    }

    public static SetWithRange Random(Random random, int size, Range range)
    {
        var values = new HashSet<int>();
        // Random.Next(int, int) generates numbers in the range [min, max)
        // so we need to add one here to be able to generate numbers in [min, max].
        // See https://learn.microsoft.com/en-us/dotnet/api/system.random.next
        var min = range.Min;
        var max = range.Max + 1;
        while ( values.Count() < size )
            values.Add(random.Next(min, max));
        return new SetWithRange(values);
    }

    public int CommonValuesWith(SetWithRange other)
    {
        // No need to call Intersect on the sets if the ranges don't intersect
        if ( !_range.Intersects(other._range) )
            return 0;
        return _values.Intersect(other._values).Count();
    }

    public IEnumerator<int> GetEnumerator()
    {
        return _values.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
The results were generated using SetWithRange.Random as follows:
const int groupCount = 10000;
const int setSize = 6;
var range = new Range(new[] { 1, 100 });
var generator = new Random();
var groupA = Enumerable.Range(0, groupCount)
    .Select(i => SetWithRange.Random(generator, setSize, range))
    .ToList();
var groupB = Enumerable.Range(0, groupCount)
    .Select(i => SetWithRange.Random(generator, setSize, range))
    .ToList();
The timings given below are for an average of three x64 release build runs on my machine.
For all cases I generated groups with 10000 random sets then scaled up to approximate the execution time for 5 million sets by using
timeFor5Million = timeFor10000 / 10000 / 10000 * 5000000 * 5000000
= timeFor10000 * 250000
Results
Four foreach blocks:
Average time = 48628ms; estimated time for 5 million sets = 3377 hours
var result = new Dictionary<SetWithRange, int>();
foreach ( var setA in groupA )
{
    int maxcount = 0;
    foreach ( var setB in groupB )
    {
        int count = 0;
        foreach ( var elementA in setA )
        {
            foreach ( int elementB in setB )
            {
                if ( elementA == elementB )
                    count++;
            }
        }
        if ( count > maxcount ) maxcount = count;
    }
    result.Add(setA, maxcount);
}
Three foreach blocks with parallelisation on the outer foreach:
Average time = 10305ms; estimated time for 5 million sets = 716 hours (4.7 times faster than original):
var result = new Dictionary<SetWithRange, int>();
Parallel.ForEach(groupA, setA =>
{
    int maxcount = 0;
    foreach ( var setB in groupB )
    {
        int count = 0;
        foreach ( var elementA in setA )
        {
            foreach ( int elementB in setB )
            {
                if ( elementA == elementB )
                    count++;
            }
        }
        if ( count > maxcount ) maxcount = count;
    }
    lock ( result )
        result.Add(setA, maxcount);
});
Using HashSet<int> and adding a Range to only check sets which intersect:
Average time = 375ms; estimated time for 5 million sets = 24 hours (130 times faster than original):
var result = new Dictionary<SetWithRange, int>();
Parallel.ForEach(groupA, setA =>
{
    var commonValues = groupB.Max(setB => setA.CommonValuesWith(setB));
    lock ( result )
        result.Add(setA, commonValues);
});
Link to a working online demo here: https://dotnetfiddle.net/Kxpagh (note that .NET Fiddle limits execution times to 10 seconds, and that for obvious reasons its results are slower than running in a normal environment).
Fastest I can think of is this:
As all your numbers come from a limited range (1-100), you can express each of your sets as a 100-bit binary number <d1,d2,...,d100> where dn equals 1 iff n is in the set.
Then comparing two sets means a binary AND on the two binary representations and counting the set bits (which can be done efficiently)
In addition to that, this task can be parallelized (your input is immutable, so it's quite straightforward).
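As an illustration of that idea (my own sketch, not the answerer's code), each set can be packed into two 64-bit words; this assumes .NET Core 3.0+ for System.Numerics.BitOperations.PopCount, and the type name here is made up:

using System.Collections.Generic;
using System.Numerics;

// Each set of numbers in 1..100 fits into two 64-bit words.
public readonly struct BitSet100
{
    private readonly ulong _lo; // bits 0..63  -> numbers 1..64
    private readonly ulong _hi; // bits 0..35  -> numbers 65..100

    public BitSet100(IEnumerable<int> values)
    {
        ulong lo = 0, hi = 0;
        foreach (int v in values)
        {
            int bit = v - 1;
            if (bit < 64) lo |= 1UL << bit;
            else hi |= 1UL << (bit - 64);
        }
        _lo = lo;
        _hi = hi;
    }

    // Binary AND of the two representations, then count the set bits.
    public int CommonValuesWith(BitSet100 other) =>
        BitOperations.PopCount(_lo & other._lo) + BitOperations.PopCount(_hi & other._hi);
}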
You would have to benchmark it with smaller sets but since you're going to have to do 5E6 * 5E6 = 25E12 comparisons, you might as well sort the contents of 5E6 + 5E6 = 10E6 sets first.
Then the set-to-set comparisons become much faster, since you can stop each comparison as soon as you reach the highest number on the first side of the comparison. Minuscule savings per set comparison, but trillions of times over, it adds up.
You could also go further and index the two sets of five million by lowest entry and highest entry. You would cut down the number of comparisons significantly further. In the end, that's only 100 * 100 = 10,000 = 1E4 distinct collections. You would never have to compare sets whose highest number is, for instance, 12 with any sets that start at 13 or more, effectively avoiding a ton of work.
In my mind, this is sorting a lot of data, but it pales in comparison to the number of actual set-to-set comparisons you would otherwise have to do raw. Here you are eliminating work for all the zeros and are able to abort early, if the conditions are right, when you do a comparison.
And as others have said, parallelization...
PS: 5E6 = 5 * 10^6 = 5,000,000 and 25E12 = 25 * 10^12 = 25 * 10,000,000,000,000
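For what it's worth, a merge-style count over pre-sorted arrays is one way to get the early abort described above. This is my sketch, assuming each set is stored as a sorted int[]:

// Counts common elements of two sorted arrays, stopping as soon as
// either side is exhausted (the early abort described above).
static int CountCommonSorted(int[] a, int[] b)
{
    int i = 0, j = 0, count = 0;
    while (i < a.Length && j < b.Length)
    {
        if (a[i] == b[j]) { count++; i++; j++; }
        else if (a[i] < b[j]) i++;
        else j++;
    }
    return count;
}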
The time complexity of any algorithm you come up with is going to be of the same order. HashSets might be a bit faster, but if they are it won't be by much - the overhead of 36 direct list comparisons vs 12 hashset lookups isn't going to be significantly higher, if at all, but you'll have to benchmark. Presorting might help a bit considering each set will be compared millions of times. Just FYI, for loops are faster than foreach loops on a List and arrays are faster than Lists (for and foreach on array is same performance), which for something like this might make a decent performance difference. If the No. column is sequential then I would use an array for that instead of a dictionary as well. Array lookups are an order of magnitude faster than dictionary lookups.
I think you are generally doing this as quickly as possible aside from parallelization though, with some small gains possible through the above micro-optimizations.
How far off from your target execution time is the current algorithm?
I would use the following:
foreach (List<int> setA in GroupA)
{
    int maxcount = GroupB.Max(x => x.Sum(y => setA.Contains(y) ? 1 : 0));
    Classified.Add(maxcount, setA);
}

Functioning of Random, C#

I have a question about how Random works in C#. Say I want to call some function if the variable i == 0. I have the following code:
Random rnd = new Random();
int i = rnd.Next(5);
if (i == 0) {
    myFunction();
}
So, I would call myFunction() one time per 5 launches of the program. And what if I had the following code instead:
Random rnd = new Random();
for (int j = 0; j < 10; j++) {
    int i = rnd.Next(50);
    if (i == 0) {
        myFunction();
    }
}
Would I get the same result in the end (calling myFunction() one time per 5 launches of the program)?

If you give it a try, running this several times:
class Program
{
    static int _caseOneCount = 0;
    static int _caseTwoCount = 0;
    static Random _rnd = new Random();

    static void Main( string[] args )
    {
        var max = 100000;
        for ( var i = 0 ; i < max ; i++ )
        {
            CaseOne();
            CaseTwo();
            Console.WriteLine( _caseOneCount.ToString() + "/" + _caseTwoCount.ToString() );
        }
    }

    static void CaseOne()
    {
        if ( _rnd.Next( 5 ) == 0 )
            _caseOneCount++;
    }

    static void CaseTwo()
    {
        for ( var i = 0 ; i < 10 ; i++ )
            if ( _rnd.Next( 50 ) == 0 )
                _caseTwoCount++;
    }
}
You will see that the two counts are nearly equivalent and close to 20% of the number of runs, as expected.
Edit: Now if you run CaseOne and CaseTwo only once, you can get:
CaseOne: only 0 or 1,
CaseTwo: a value from 0 to 10
Edit 2: following the comments of @Jean-ClaudeColette. The second case corresponds to a binomial distribution (https://en.wikipedia.org/wiki/Binomial_distribution).
So the probability of having:
0 calls is 81.7%
1 call is 16.7%
2 calls is 1.5%
more is 100% minus the above, which is around 0.086%
But the average number of calls stays at 0.2 (20%).
Which means indeed that applying the second case only once will lead to a different result compared to the first case.
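Those percentages can be double-checked with a quick binomial calculation (a throwaway helper for verification, not part of the original answer):

// P(k calls in 10 draws) = C(10, k) * (1/50)^k * (49/50)^(10-k)
static double Binomial(int n, int k, double p)
{
    double c = 1;
    for (int i = 1; i <= k; i++) c = c * (n - k + i) / i; // binomial coefficient C(n, k)
    return c * Math.Pow(p, k) * Math.Pow(1 - p, n - k);
}

// Binomial(10, 0, 0.02) ≈ 0.817, Binomial(10, 1, 0.02) ≈ 0.167, Binomial(10, 2, 0.02) ≈ 0.015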
Random and its details are in the documentation:
https://msdn.microsoft.com/fr-fr/library/system.random(v=vs.110).aspx
And the inner algorithm (Knuth's subtractive random generator) is described here (with a C# implementation which is not the .NET implementation, but a way to see how it works):
https://rosettacode.org/wiki/Subtractive_generator
Actually both your above statements were wrong.
For your first code sample, there is no guarantee that your function will get called one time per 5 launches - but if you run it enough times, the probability of your function getting called is 1/5.
For your second code sample, the probability is 1/50 instead. And your outer (j) loop just controls how many "launches" you run, to use your words - it doesn't change the probability.

Other way to solve assigning

Let's say I have a collection of n workers. Let's say there are 3:
John
Adam
Mark
I want to know when they have to clean the office. If I set int cleanDays = 3 it would be something like this:
//Day of month;worker
1;John
2;John
3;John
4;Adam
5;Adam
6;Adam
7;Mark
8;Mark
9;Mark
10;John
11;John
.
.
.
If I set cleanDays = 1 it would be:
1;John
2;Adam
3;Mark
4;John
5;Adam
.
.
.
And so on.
I already managed something like this:
int cleanDays = 6;
for (int day = 1; day < 30; day++) { //for each day
    int worker = (day - 1 % cleanDays) % workers.Count; //get current worker (starting from index 0)
    for (int times = 0; times < cleanDays; times++) //worker do the job `cleanDays` times
        Console.WriteLine(day++ + ";" + workers[worker].Name);
}
This is not working properly, because it gives me 34 days. That's because of the day++ in the first loop. But if I delete day++ from the first loop:
for (int day = 1; day < 30;) { //for each day
    int worker = (day - 1 % cleanDays) % workers.Count; //get current worker (starting from index 0)
    for (int times = 0; times < cleanDays; times++) //worker do the job `cleanDays` times
        Console.WriteLine(day++ + ";" + workers[worker].Name);
}
It gives output only for the first worker. When I debugged I saw this:
int worker = (day-1 % cleanDays)%workers.Count;
and worker was equal to 0 every time. That means:
(20-1%6)%3 was equal to 0. Why does that happen?
UPDATE: I just read your question more carefully and realized you were not asking about the actual code at all. Your real question was:
That means: (20-1%6)%3 was equal to 0. Why does that happen?
First of all, it doesn't. (20-1%6)%3 is 1. But the logic is still wrong because you have the parentheses in the wrong place. You meant to write
int worker = (day - 1) % cleanDays % workers.Count;
Remember, multiplication, division and remainder operators are all higher precedence than subtraction. a + b * c is a + (b * c), not (a + b) * c. The same is true of - and %. a - b % c is a - (b % c), not (a - b) % c.
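A quick illustration of that precedence rule (just an example of mine, not from the original answer):

Console.WriteLine(10 - 7 % 3);   // prints 9: parsed as 10 - (7 % 3)
Console.WriteLine((10 - 7) % 3); // prints 0: the subtraction happens first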
But I still stand by my original answer: you can eliminate the problem entirely by writing a query that represents your sequence operations, rather than a loop with a bunch of complicated arithmetic that is easy to get wrong.
Original answer follows.
Dmitry Bychenko's solution is pretty good but we can improve on it; modular arithmetic is not necessary here. Rather than indexing into the worker array, we can simply select-many from it directly:
var query = Enumerable.Repeat(
        workers.SelectMany(worker => Enumerable.Repeat(worker, cleanDays)),
        1000)
    .SelectMany(workerseq => workerseq)
    .Select((worker, index) => new { Worker = worker, Day = index + 1 })
    .Take(30);
foreach (var x in query)
    Console.WriteLine($"Day {x.Day} Worker {x.Worker}");
Make sure you understand how this query works, because these are core operations of LINQ. We take a sequence of workers,
{A, B, C}
This is projected onto a sequence of sequences:
{ {A, A}, {B, B}, {C, C} }
Which is flattened:
{A, A, B, B, C, C}
We then repeat that a thousand times:
{ { A, A, B, B, C, C },
{ A, A, B, B, C, C },
...
}
And then flatten that sequence-of-sequences:
{ A, A, B, B, C, C, A, A, B, B, C, C, ... }
We then select-with-index into that flattened sequence to produce a sequence of day, worker pairs.
{ {A, 1}, {A, 2}, {B, 3}, {B, 4}, ... }
Then take the first 30 of those. Then we execute the query and print the results.
Now, you might say isn't this inefficient? If we have, say, 4 workers, we put each on 5 days, and then we repeat that sequence 1000 times; that makes a sequence with 5 x 4 x 1000 = 20000 items, but we only need the first 30.
Do you see what is wrong with that logic?
LINQ sequences are constructed lazily. Because of the Take(30) we never construct more than 30 pairs in the first place. We could have repeated it a million times; doesn't matter. You say Take(30) and the sequence construction will stop constructing more items after you've printed 30 of them.
But don't stop there. Ask yourself how you can improve this code further.
The bit with the days as integers seems a bit dodgy. Surely what you want is actual dates.
var start = new DateTime(2017, 1, 1);
And now instead of selecting out numbers, we can select out dates:
...
.Select((worker, index) => new { Worker = worker, Day = start.AddDays(index)})
...
What are the key takeaways here?
Rather than messing around with loops and weird arithmetic, just construct queries that represent the shape of what you want. What do you want? Repeat each worker n times. Great, then there should be a line in your program somewhere that says Repeat(worker, n), and now your program looks like its specification. Now your program is more likely to be correct. And so on.
Use the right data type for the job. Want to represent dates? Use DateTime, not int.
I would use a while loop, and use some tracking variables to keep track of which worker you are at and how many clean-times are left for that worker. Something like this:
const int cleanTime = 3; // or 1 or 6
var workers = new[] { "John", "Adam", "Mark" };
var day = 1;
var currentWorker = 0;
var currentCleanTimeLeft = cleanTime;
while (day <= 30) {
    Console.WriteLine("{0};{1}", day, workers[currentWorker]);
    currentCleanTimeLeft--;
    if (currentCleanTimeLeft == 0) {
        currentCleanTimeLeft = cleanTime;
        currentWorker++;
        if (currentWorker >= workers.Length)
            currentWorker = 0;
    }
    day++;
}
A very basic solution, no division or modular arithmetic required.
The second loop is unnecessary, it simply messes up your day.
int cleanDays = 6;
for (int day = 1; day <= 30; day++)
{
    int worker = ((day - 1) / cleanDays) % workers.Count;
    Console.WriteLine(day + ";" + workers[worker].Name);
}
Example on Fiddle
The basic idea is to give each individual day a numerical value - DateTime.Now.DayOfYear is a good choice, or just a running count - and map that numerical value to an index in the worker array.
The main logic is in the workerIndex line below:
It takes the day number and divides it by cleanDays. This means that each x days is mapped to the same workerIndex.
It takes the workerIndex and does a modulo operation (%) on it with the count of workers. This makes the workerIndex cyclical, iterating endlessly over all workers.
string[] workers = new string[] { "Mike", "Bob", "Hank" };
int cleanDays = 6;
for (int dayNum = 0; dayNum < 300; dayNum++)
{
    var workerIndex = (dayNum / cleanDays) % workers.Length; // <-- LOGIC!
    Console.WriteLine("Day {0} - Cleaner: {1}", dayNum, workers[workerIndex]);
}
I suggest modular arithmetic and Linq:
List<Worker> personnel = ...
int days = 30;
int cleanDays = 4;
var result = Enumerable.Range(0, int.MaxValue)
    .SelectMany(index => Enumerable
        .Repeat(personnel[index % personnel.Count], cleanDays))
    .Select((man, index) => $"{index + 1};{man.Name}")
    .Take(days);
Test:
Console.Write(string.Join(Environment.NewLine, result));
Output:
1;John
2;John
3;John
4;John
5;Adam
6;Adam
7;Adam
8;Adam
9;Mark
...
24;Mark
25;John
26;John
27;John
28;John
29;Adam
30;Adam
you could create a sequence function:
public static IEnumerable<string> GenerateSequence(IEnumerable<string> sequence, int groupSize)
{
    var day = 1;
    while (true)
    {
        foreach (var element in sequence)
        {
            for (var i = 0; i < groupSize; ++i)
            {
                yield return $"{day};{element}";
                day++;
            }
        }
    }
}
usage:
var workers = new List<string> { "John", "Adam", "Mark" };
var cleanDays = 3;
GenerateSequence(workers, cleanDays).Take(100).Dump();
I would do something like this:
var cleanDays = 6; // Number of days in each shift
var max = 30; // The amount of days the loop will run for
var count = workers.Count(); // The amount of workers
if (count == 0) return; // Exit if there are no workers
if (count == 1) cleanDays = max; // See '3.' in explanation (*)
for (var index = 0; index < max; index++)
{
    var worker = (index / cleanDays) % count;
    var day = index % cleanDays;
    Console.WriteLine(string.Format("Day {0}: {1} cleaned today (Consecutive days cleaned: {2})", index + 1, workers[worker].Name, day));
}
Explanation
By doing index / cleanDays you get the number of completed worker shifts. It is possible that the shifts outnumber the workers, in which case you want the remainder (shifts % number of workers).
To get how many consecutive days the worker has worked so far, you simply need the remainder of that same division, i.e. index % cleanDays.
Finally, as you can see, I get the count of the array before I enter the loop, for 3 reasons:
To only read it once, and save some time.
To exit the method if the array is empty.
To check if there is only one worker left, in which case that worker won't have a break and will be working from day 1 until day 'max'; therefore I set cleanDays to max. *

2D array, adding values in a weird pattern

I started learning C# and programming a few months ago and have some problems. The idea here is that we create a 2-dimensional array (the number of rows/columns is entered by the user); those numbers need to be between 1 and 10.
Then, when the array is created, the number sequence (3, 5, 7, 9, 11, etc.) starts in the first column and finishes in the last column. The rest of the numbers in the columns are entered via keyboard by the user, starting with the first row (ignoring column 1 and the last column because we already have those filled in).
The questions are:
What would be the best way to check that the numbers of rows/columns are between 1 and 10? (I was thinking of if-else, but isn't there a better way?)
How do I make the number sequence 3, 5, 7, etc. start in the first column and finish in the last column?
Yeah, I feel lost.
Where I am at the moment:
Console.WriteLine("Add row value of 1-10");
string s1
s1 = Console.ReadLine();
int k = int.Parse(s1);
Console.WriteLine("Add column value of 1-10");
string s2;
s2 = Console.ReadLine();
int p = int.Parse(s2);
int[,] M = new int[k, p];
Example: we entered a k (row) and p (column) value of 4, so the array should look like:
3 x x 11
5 x x 13
7 x x 15
9 x x 17
Then the x's should be added manually without overwriting the existing numbers. The value of those numbers doesn't matter.
So... if I get it right, you want to ask the user for the "length and width" of a dynamic 2D array?
To check that the entered numbers are between 1 and 10, there is really just one way:
int[,] M;
if (k >= 1 && k <= 10 && p >= 1 && p <= 10)
{
    M = new int[k, p];
}
It is also better to use int.TryParse() in case the user enters characters instead of numbers; otherwise you can easily get an exception.
Filling with numbers:
int num = 3;
for (int i = 0; i < k; ++i)
{
    M[i, 0] = num;
    num += 2;
}
for (int i = 0; i < k; ++i)
{
    M[i, p - 1] = num;
    num += 2;
}
This adds the numbers to the first and last column of each row. After that, to fill the other cells manually, you check for every cell that it is not in the first or last column.
I hope I understood you correctly. The code could be simplified, but it is written this way for better understanding.
if (k > 0 && k < 11 && p > 0 && p < 11)
{
    int[,] M = new int[k, p];
    for (int i = 0; i < k; i++)
    {
        M[i, 0] = i * 2 + 3;
        M[i, p - 1] = (i + k) * 2 + 3;
    }
}

C# Array of Increments

If I want to generate an array that goes from 1 to 6 and increments by .01, what is the most efficient way to do this?
What I want is an array, with mins and maxs subject to change later...like this: x[1,1.01,1.02,1.03...]
Assuming a start, end and an increment value, you can abstract this further:
Enumerable
    .Repeat(start, (int)((end - start) / increment) + 1)
    .Select((tr, ti) => tr + (increment * ti))
    .ToList()
Let's break it down:
Enumerable.Repeat takes a starting number, repeats for a given number of elements, and returns an enumerable (a collection). In this case, we start with the start element, find the difference between start and end and divide it by the increment (this gives us the number of increments between start and end) and add one to include the original number. This should give us the number of elements to use. Just be warned that since the increment is a decimal/double, there might be rounding errors when you cast to an int.
Select transforms all elements of an enumerable given a specific selector function. In this case, we're taking the number that was generated and the index, and adding the original number with the index multiplied by the increment.
Finally, the call to ToList will save the collection into memory.
If you find yourself using this often, then you can create a method to do this for you:
public static List<decimal> RangeIncrement(decimal start, decimal end, decimal increment)
{
    return Enumerable
        .Repeat(start, (int)((end - start) / increment) + 1)
        .Select((tr, ti) => tr + (increment * ti))
        .ToList();
}
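For the example in the question (1 to 6 in steps of 0.01), a call would presumably look like this:

List<decimal> values = RangeIncrement(1m, 6m, 0.01m); // 501 values: 1.00, 1.01, ..., 6.00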
Edit: Changed to using Repeat, so that non-whole number values will still be maintained. Also, there's no error checking being done here, so you should make sure to check that increment is not 0 and that start < end * sign(increment). The reason for multiplying end by the sign of increment is that if you're incrementing by a negative number, end should be before start.
The easiest way is to use Enumerable.Range:
double[] result = Enumerable.Range(100, 501) // 501 values: 1.00 through 6.00
                            .Select(i => (double)i / 100)
                            .ToArray();
(hence efficient in terms of readability and lines of code)
I would just make a simple function.
public IEnumerable<decimal> GetValues(decimal start, decimal end, decimal increment)
{
    for (decimal i = start; i <= end; i += increment)
        yield return i;
}
Then you can turn that into an array, query it, or do whatever you want with it.
decimal[] result1 = GetValues(1.0m, 6.0m, .01m).ToArray();
List<decimal> result2 = GetValues(1.0m, 6.0m, .01m).ToList();
List<decimal> result3 = GetValues(1.0m, 6.0m, .01m).Where(d => d > 3 && d < 4).ToList();
Use a for loop with 0.01 increments:
List<decimal> myList = new List<decimal>();
for (decimal i = 1; i <= 6; i += 0.01m)
{
    myList.Add(i);
}
Elegant
double[] v = Enumerable.Range(100, 501).Select(x => x * 0.01).ToArray(); // 1.00 through 6.00
Efficient
Use for loop
Whatever you do, don't use a floating point datatype (like double); it doesn't work well for things like this because of its rounding behaviour. Go for either decimal, or integers with a scale factor. For the latter:
Decimal[] decs = new Decimal[501]; // 501 values: 1.00 through 6.00
for (int i = 0; i < 501; i++) {
    decs[i] = (new Decimal(i) / 100) + 1;
}
You could solve it like this. The solution method returns a double array
double[] Solution(double min, int length, double increment)
{
    double[] arr = new double[length];
    double value = min;
    arr[0] = value;
    for (int i = 1; i < length; i++)
    {
        value += increment;
        arr[i] = value;
    }
    return arr;
}
var ia = new float[502]; // guesstimate, with a little headroom for float rounding
var x = 0;
for (float i = 1; i < 6.01; i += 0.01f) {
    ia[x] = i;
    x++;
}
You could multi-thread this for speed, but it's probably not worth the overhead unless you plan on running this on a really really slow processor.
