I have a table in the database called Control:
Table structure:
Id | Name | MinValue (decimal) | MaxValue(decimal)
I have some restrictions on that table; one of its restrictions is: no intersections.
Example : if the table has some values as follows:
row1 : 1 | Test1 | 1.3 | 2.5 //valid
row2 : 2 | Test2 | 3.3 | 4.5 // valid
row3 : 3 | Test3 | 5 | 6 // valid
Now if I want to add a new record, it must not intersect with any other row
Example:
row4 : 4 | Test4 | 5.1 | 10 //not valid since slot from 5 to 6 is reserved
row5 : 5 | Test5 | 1.0 | 1.4 // not valid since slot from 1.3 to 2.5 is reserved
I'm using the code below, and it works, but I wonder if there is a better, more efficient solution:
var allRows = db.Control.ToList(); // loads the whole table into memory
var minValue = control.MinimumValue;
var maxValue = control.MaximumValue;
decimal min = 0, max = 0;
bool flag = true;
foreach (var row in allRows)
{
    // walk the new range in 0.01 steps and test each point against the row
    // (note: stepping by 0.01 can miss overlaps narrower than the step)
    for (var i = minValue; i <= maxValue && flag; i = decimal.Add(i, 0.01m))
    {
        if (i >= row.MinimumValue && i <= row.MaximumValue)
        {
            flag = false;
            min = row.MinimumValue;
            max = row.MaximumValue;
            break;
        }
    }
    if (!flag) break;
}
if (flag)
{
//add
}
else
{
//intersection
}
Any suggestions?
I think this is an O(log N) problem...
Keep the segments ordered by their start value.
In a valid list, s[i].end < s[i+1].start for every i.
When inserting a new segment, binary-search for its position: the segment whose start is closest to (but less than) the new segment's start; call it i. Then:
if ((seg[i].end < new.start) && (seg[i+1].start > new.end))
    // OK to insert
else
    // intersection
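For illustration, here is a minimal C# sketch of that check, assuming the segments are kept in a list sorted by start value (CanInsert and the tuple layout are illustrative names, not from the original code; a real implementation would maintain the key list alongside the segments instead of rebuilding it on every call):

static bool CanInsert(List<(decimal Start, decimal End)> segments, decimal start, decimal end)
{
    // Binary-search the sorted start values for the insertion point.
    var keys = segments.Select(s => s.Start).ToList();
    int idx = keys.BinarySearch(start);
    if (idx >= 0) return false;        // a segment with the same start already exists
    idx = ~idx;                        // index of the first segment starting after the new one

    // The predecessor must end before the new segment starts...
    if (idx > 0 && segments[idx - 1].End >= start) return false;
    // ...and the successor must start after the new segment ends.
    if (idx < segments.Count && segments[idx].Start <= end) return false;
    return true;
}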
Let's assume this is the object you're trying to add:
var control = new Control()
{
Name = "name",
MinValue = 5,
MaxValue = 6
};
You can do the following:
// true if any existing range intersects [control.MinValue, control.MaxValue]
var overlaps = db.Control.Any(x =>
    x.MinValue <= control.MaxValue && x.MaxValue >= control.MinValue);
if (!overlaps)
{
    db.Control.Add(control); // or whatever your add operation is
}
By doing so you:
do NOT load the whole table into memory -> performance gain in terms of time, traffic and memory
let the database use its own data structures/algorithms to verify that the item can be added (the db should be able to optimize this request) -> another performance gain (CPU + time)
have clearer backend code
have less code to test
Edit: I suppose you could also ask the database to sort your table by min/max value and then do some validation (one or two ifs), but the first approach is better, imo.
I have many tasks. Each task is defined by the first day I can start working on it and the last day it is still valid to do; each task takes exactly one day, and I can do one task per day.
The tasks and their deadlines are described in the table below.
| task | valid from | valid until |
|------|------------|-------------|
| t01 | 1 | 3 |
| t02 | 2 | 2 |
| t03 | 1 | 1 |
| t04 | 2 | 3 |
| t05 | 2 | 3 |
The number of tasks may be huge.
I want to know which algorithm I can use to solve this problem, maximizing the number of tasks that I can do.
Update
Based on the comments I wrote this code. It works, but it still doesn't perform well with a huge number of tasks.
public static int countTodoTasks(int[] validFrom, int[] validUntil)
{
    var tasks = new List<TaskTodo>();
    for (int i = 0; i < validFrom.Length; i++)
    {
        tasks.Add(new TaskTodo { ValidFrom = validFrom[i], ValidUntil = validUntil[i] });
    }
    // process tasks in order of earliest deadline
    tasks = tasks.OrderBy(x => x.ValidUntil).ToList();
    var lDay = 0;
    var schedule = new Dictionary<int, TaskTodo>();
    while (tasks.Count > 0)
    {
        var day = findBiggestMinimumOf(lDay, tasks[0].ValidFrom, tasks[0].ValidUntil);
        if (day != -1)
        {
            schedule[day] = tasks[0];
            lDay = day; // only advance when the task was actually scheduled
        }
        tasks.RemoveAt(0);
        tasks.RemoveAll(x => lDay >= x.ValidUntil); // drop tasks whose window has passed
    }
    return schedule.Count;
}

// returns the earliest free day after x that falls inside [start, end], or -1
static int findBiggestMinimumOf(int x, int start, int end)
{
    if (start > x)
    {
        return start;
    }
    if ((x == start && start == end) || x == end || x > end)
    {
        return -1;
    }
    return x + 1;
}
If the tasks have the same duration, then use a greedy algorithm as described above.
If it's too slow, use indexes (= hashing) and incremental calculation to speed it up when you need to scale.
Indexing: during setup, iterate through all tasks to create a map (a dictionary) from each due date to the list of tasks with that due date. Better yet, use a NavigableMap (TreeMap), so you can ask for a tail iterator (all tasks starting from a specific due date, in order). The greedy algorithm can then use that to scale better (think a better big-O complexity).
Incremental calculation: only calculate the deltas for each task you're considering.
If the tasks have different duration, a greedy algorithm (aka construction heuristic) won't give you the optimal solution. Then it's NP-hard. After the Construction Heuristic (= greedy algorithm), run a Local Search (such as Tabu Search). Libraries such as OptaPlanner (Java, not C# unfortunately - look for alternatives there) can do both for you.
Also note that there are multiple greedy algorithms (First Fit, First Fit Decreasing, ...).
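In C#, where there is no direct NavigableMap equivalent, a SortedDictionary (which enumerates its keys in order) can play that role. A rough sketch of the index, reusing the TaskTodo type from the question:

// Index: due date -> tasks with that deadline, enumerable in deadline order.
var byDueDate = new SortedDictionary<int, List<TaskTodo>>();
foreach (var t in tasks)
{
    if (!byDueDate.TryGetValue(t.ValidUntil, out var list))
        byDueDate[t.ValidUntil] = list = new List<TaskTodo>();
    list.Add(t);
}
// byDueDate.Where(kv => kv.Key >= day) then plays the role of a tail iterator
// (note this skips linearly; a true tail view would need SortedSet.GetViewBetween).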
I suppose you can apply a greedy algorithm for your purpose in this way:
1. Select the minimal "valid from"; call it minday.
2. Add to Xcandidates all candidates with "valid from" = minday.
3. If there are no Xcandidates, go to 1.
4. Select the interval x from Xcandidates with the earliest "valid until".
5. Remove x, inserting it into your schedule.
6. Remove all Xcandidates with "valid until" = minday.
7. Increment minday and go to 2.
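A sketch of those steps in C#, using .NET 6's PriorityQueue as the Xcandidates set keyed by "valid until" so the earliest deadline is always on top (it assumes unit-length tasks and day ranges small enough to walk one day at a time):

static int CountTodoTasks(int[] validFrom, int[] validUntil)
{
    if (validFrom.Length == 0) return 0;

    // start day -> deadlines of the tasks that become available that day
    var byStart = new Dictionary<int, List<int>>();
    for (int i = 0; i < validFrom.Length; i++)
    {
        if (!byStart.TryGetValue(validFrom[i], out var list))
            byStart[validFrom[i]] = list = new List<int>();
        list.Add(validUntil[i]);
    }

    var candidates = new PriorityQueue<int, int>(); // element and priority are both the deadline
    int done = 0;
    for (int day = validFrom.Min(); day <= validUntil.Max(); day++)
    {
        if (byStart.TryGetValue(day, out var starting))
            foreach (var deadline in starting)
                candidates.Enqueue(deadline, deadline);

        while (candidates.Count > 0 && candidates.Peek() < day)
            candidates.Dequeue(); // deadline passed: task can no longer be done

        if (candidates.Count > 0)
        {
            candidates.Dequeue(); // do the most urgent task today
            done++;
        }
    }
    return done;
}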
I'm looking for best practices for modifying a DataTable while you are looping through said DataTable.
I'm grabbing the max value in the DataTable's sequence line number column; in this example it's 25. I have a lot of zeroes in this DataTable, and I'm going to find them one by one and change them to 26, 27, 28, and so on. I'm wondering what the best practice for this would be without having to create another table and build it up as I edit values of the rows.
int maxSequenceNumber = Convert.ToInt32(dtNewOrderGuide.Compute("max([seqlinnum])", string.Empty));
foreach(DataRow row in dtNewOrderGuide.Rows)
{
if (row["seqlinnum"].ToString() == "0")
{
}
}
Example of my table
RowNumber | seqlinnum
1 | 1
2 | 10
3 | 15
4 | 25
5 | 0
6 | 0
7 | 0
8 | 0
9 | 0
10 | 0
As an alternative, using a traditional for loop instead of a foreach means not doing any data copying, and it would likely run through the DataTable faster:
for (int i = 0; i < table.Rows.Count; i++)
{
    if (table.Rows[i]["seqlinnum"].ToString() == "0")
    {
        maxSequenceNumber++; // next free number: 26, 27, 28, ...
        table.Rows[i]["seqlinnum"] = maxSequenceNumber;
    }
}
You can do this with LINQ; you may want to use an OrderBy before the Select if you don't want to risk overwriting the sequence.
var query = dtNewOrderGuide.Rows.Cast<DataRow>()
    .Where(r => r["seqlinnum"].ToString() == "0")
    .Select((r, i) => new { r, i });
foreach (var row in query)
{
    row.r["seqlinnum"] = maxSequenceNumber + row.i + 1;
}
I have the following problem: I need to create a table which is the combination of values coming from several sets. The cardinality of each set is unknown and may vary from set to set, and the domain of the values is also unknown and may vary from set to set. The elements are non-negative, and every set contains at least two elements.
Here follows an example:
SET_A = { 0, 1, 2 }
SET_B = { 0, 1 }
SET_C = { 0, 1 }
The result should contain the following rows (order is not a constraint):
TABLE:
| 0 0 0 |
| 0 0 1 |
| 0 1 0 |
| 0 1 1 |
| 1 0 0 |
| 1 0 1 |
| 1 1 0 |
| 1 1 1 |
| 2 0 0 |
| 2 0 1 |
| 2 1 0 |
| 2 1 1 |
Does anybody know the mathematics behind this problem? I tried looking at multiset problems, logic tables, and combinatorics. Many of the definitions I found have similarities to my problem, but I can't isolate anything in the literature I have accessed so far. Once I have a reference definition I can think about coding it, but right now I'm just lost in recursive functions and terrible array-index games. Thanks.
EDIT: This question was already asked at:
C# Permutation of an array of arraylists?
Edit: Sorry, had to run last evening. For arbitrary dimensionality you probably have to use recursion. There's probably a way to do it without, but recursion is the most straightforward. The below is untested but should be about right.
IEnumerable<int[]> getRows(int[][] possibleColumnValues, int[] rowPrefix)
{
    if (!possibleColumnValues.Any())
    {
        yield return rowPrefix; // no columns left: the prefix is a complete row
        yield break;
    }
    var remainingColumns = possibleColumnValues.Skip(1).ToArray();
    foreach (var val in possibleColumnValues.First())
    {
        var rowSoFar = rowPrefix.Concat(new[] { val }).ToArray();
        foreach (var row in getRows(remainingColumns, rowSoFar))
            yield return row;
    }
}
Usage:
getRows(new[] {
    new[] { 0, 1, 2 },
    new[] { 0, 1 },
    new[] { 0, 1 },
}, new int[0]);
What you are looking for is combinatorics. Also, it doesn't really matter what the domain of the elements in a set is; as long as you can enumerate them, the problem is the same as for numbers from 0 to the set cardinality.
To enumerate all options, keep a vector of indices and after each iteration increment the first index. If it overflows, set it to 0 and increment the second index, and so on.
The task is to print permutations; you seem to be digging deeper than the problem goes. It has nothing to do with the nature of the elements.
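That index-vector idea can be sketched without recursion (CartesianRows is an illustrative name, every set is assumed non-empty, and the carry runs from the last column so rows come out in the same order as the question's table):

static IEnumerable<int[]> CartesianRows(int[][] sets)
{
    var idx = new int[sets.Length]; // one index per set, all starting at 0
    while (true)
    {
        yield return sets.Select((s, col) => s[idx[col]]).ToArray();

        // Increment like an odometer: bump the last index; on overflow
        // reset it to 0 and carry into the index to its left.
        int k = idx.Length - 1;
        while (k >= 0 && ++idx[k] == sets[k].Length)
            idx[k--] = 0;
        if (k < 0) yield break; // carried past the first set: all rows emitted
    }
}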
The following is not written for efficiency (neither in space nor speed); the idea is just to get the basic algorithm across. I'll leave making it more space- and time-efficient up to you.
The basic idea is to recognize that all the combinations of n lists are just all the combinations of n-1 lists with each element of the first list tacked on. It's a pretty straightforward recursive function at that point.
public static IEnumerable<int[]> Permute( params IEnumerable<int>[] sets )
{
    if( sets.Length == 0 ) yield break;
    if( sets.Length == 1 )
    {
        // base case: each element of the single set is a row of length 1
        foreach( var element in sets[0] ) yield return new[] { element };
        yield break;
    }
    var first = sets.First();
    var rest = Permute( sets.Skip( 1 ).ToArray() );
    var elements = first.ToArray();
    foreach( var permutation in rest )
    {
        // tack each element of the first set onto every combination of the rest
        foreach( var element in elements )
        {
            var result = new int[permutation.Length + 1];
            result[0] = element;
            Array.Copy( permutation, 0, result, 1, permutation.Length );
            yield return result;
        }
    }
}
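Calling it with the question's sets would look like this (all 3 * 2 * 2 = 12 rows come out, though in a different order than the table above):

foreach (var combination in Permute(new[] { 0, 1, 2 }, new[] { 0, 1 }, new[] { 0, 1 }))
    Console.WriteLine(string.Join(" ", combination));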
The setup
I have a List<Room>() which I get back from a service. The list refreshes every 10 seconds, and rooms get added and removed.
class Room
{
public int ID {get;set;}
}
My job
To display these rooms on the screen, I have a Matrix-like view of variable size.
Sometimes the matrix is 3 x 3 cells, other times it is 4 x 2 or 5 x 1.
I needed a way to "remember" which slot/cell a room has been placed in, so I thought a DataTable would give me that option.
To store the cells I use a DataTable, which has 3 Columns:
"Column" (int)
"Row" (int)
"Room" (Room)
So if I have a 2 x 4 matrix, it would look like this.
Column | Row | Room
-----------------------------
0 | 0 | rooms[0]
-----------------------------
1 | 0 | rooms[1]
-----------------------------
2 | 0 | rooms[2]
-----------------------------
0 | 1 | rooms[3]
-----------------------------
1 | 1 | rooms[4]
And so forth...
Once I have this DataTable I am then able to refresh the screen, knowing that each room will be displayed at the position it was before. This can probably be achieved in a smarter way.
The problem
Now I need to enumerate the List<Room> and fill the matrix/DataTable.
If I have more rooms than cells, then I need to start at position 0,0 again (like adding a new matrix as a layer), until all rooms have been assigned a cell.
The approach so far
I have tried a few for(...) loops that look like:
int totalTiles = area.TileColumns * area.TileRows;
int totalLayers = (int)Math.Ceiling((double)area.Rooms.Count / totalTiles);
for (int i = 0; i < totalLayers; i++)
{
for (int j = 0; j < area.TileRows; j++)
{
for (int k = 0; k < area.TileColumns; k++)
{
// This is going nowhere :-(
}
}
}
In my brain
When I first came across this problem, I immediately thought: "Nothing a simple LINQ query won't fix!". And then I bricked ...
What would be the most efficient / best performing approach to fill this matrix?
Without being able to make assumptions, such as whether the rows/columns will change at runtime, I would have to say just make it completely dynamic.
class RoomStorage
{
    public Room Room { get; set; }
    public int Layer { get; set; }
    public int Row { get; set; }
    public int Col { get; set; }
}
var matrix=new List<RoomStorage>();
Then you can do things like:
var newRooms = new List<Room>(); // Get from service

// Remove rooms no longer in use
matrix = matrix.Where(m => newRooms.Select(nr => nr.ID).Contains(m.Room.ID)).ToList();

// Find rooms we need to add (optionally use Except for faster perf)
var roomsToAdd = newRooms.Where(r => !matrix.Select(m => m.Room.ID).Contains(r.ID));

var maxLayer = matrix.Max(m => m.Layer);
var rows = area.TileRows;    // per the question's area object
var cols = area.TileColumns;
var positions = Enumerable
    .Range(0, maxLayer + 1)
    .SelectMany(layer =>
        Enumerable
            .Range(0, rows)
            .SelectMany(row =>
                Enumerable
                    .Range(0, cols)
                    .Select(col => new { layer, row, col })));
Then you can use positions, left-joining it to matrix for display purposes, or for finding the first empty position.
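For instance, finding the first empty cell could be sketched like this (it returns null when every position on every layer is occupied):

var firstEmpty = positions
    .FirstOrDefault(p => !matrix.Any(m =>
        m.Layer == p.layer && m.Row == p.row && m.Col == p.col));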
I have a list of system users that are awaiting to be assigned with an account.
The assignment algorithm is very simple, assigning should be as fair as possible which means that if I have 40 accounts and 20 system users I need to assign 2 accounts per system user.
If I have 41 accounts and 20 system users I need to assign 2 accounts per system user and split the remaining accounts between the system users again (in this case, one system user will be assigned one extra account).
I am trying to figure out how to do this while using a LINQ query.
So far I figured that grouping should be involved and my query is the following:
from account in accounts
let accountsPerSystemUser = accounts.Count / systemUsers.Count
let leftover = accounts.Count % systemUsers.Count
from systemUser in systemUsers
group account by systemUser into accountsGroup
select accountsGroup
However, I am uncertain how to proceed from here.
I am positive that I am missing a where clause that would stop assigning accounts to a system user once it has reached the maximum number of accounts.
How do I implement the query correctly so that the grouping will know how much to assign?
Here is a simple implementation that works if you can restrict yourself to an IList<T> for the system users (you can always use ToList, though).
public static IEnumerable<IGrouping<TBucket, TSource>> DistributeBy<TSource, TBucket>(
this IEnumerable<TSource> source, IList<TBucket> buckets)
{
var tagged = source.Select((item,i) => new {item, tag = i % buckets.Count});
var grouped = from t in tagged
group t.item by buckets[t.tag];
return grouped;
}
// ...
var accountsGrouped = accounts.DistributeBy(systemUsers);
Basically this grabs each account's index and "tags" each with the remainder of integer division of that index by the number of system users. These tags are the indices of the system users they will belong to. Then it just groups them by the system user at that index.
This ensures your fairness requirement because the remainder will cycle between zero and one minus the number of system users.
0 % 20 = 0
1 % 20 = 1
2 % 20 = 2
...
19 % 20 = 19
20 % 20 = 0
21 % 20 = 1
22 % 20 = 2
...
39 % 20 = 19
40 % 20 = 0
You can't do this using "pure LINQ" (i.e. using query comprehension syntax), and to be honest LINQ probably isn't the best approach here. Nonetheless, here's an example of how you might do it:
var listB = new List<string>() { "a", "b", "c", "d", "e" };
var listA = new List<string>() { "1", "2", "3" };
var groupings = (from b in listB.Select((b, i) => new
{
Index = i,
Element = b
})
group b.Element by b.Index % listA.Count).Zip(listA, (bs, a) => new
{
A = a,
Bs = bs
});
foreach (var item in groupings)
{
Console.WriteLine("{0}: {1}", item.A, string.Join(",", item.Bs));
}
This outputs:
1: a,d
2: b,e
3: c
I don't think "pure" LINQ is really suited to solving this problem. Nevertheless, here is a solution that only requires the two sequences to be IEnumerable:
var users = new[] { "A", "B", "C" };
var accounts = new[] { 1, 2, 3, 4, 5, 6, 7, 8 };
var accountsPerUser = accounts.Count()/users.Count();
var leftover = accounts.Count()%users.Count();
var assignments = users
.Select((u, i) => new {
User = u,
AccountsToAssign = accountsPerUser + (i < leftover ? 1 : 0),
AccountsAlreadyAssigned =
(accountsPerUser + 1)*(i < leftover ? i : leftover)
+ accountsPerUser*(i < leftover ? 0 : i - leftover)
})
.Select(x => new {
x.User,
Accounts = accounts
.Skip(x.AccountsAlreadyAssigned)
.Take(x.AccountsToAssign)
});
To cut down on the text I use the term User instead of SystemUser.
The idea is quite simple. The first leftover users are assigned accountsPerUser + 1 from accounts. The remaining users are only assigned accountsPerUser.
The first Select uses the overload that provides an index to compute these values:
User | Index | AccountsAlreadyAssigned | AccountsToAssign
-----+-------+-------------------------+-----------------
A    | 0     | 0                       | 3
B    | 1     | 3                       | 3
C    | 2     | 6                       | 2
The second Select uses these values to Skip and Take the correct numbers from accounts.
If you want to you can "merge" the two Select statements and replace the AccountsAlreadyAssigned and AccountsToAssign with the expressions used to compute them. However, that will make the query really hard to understand.
Here is a "non-LINQ" alternative. It is based on IList but could easily be converted to IEnumerable. Or instead of returning the assignments as tuples it could perform the assignments inside the loop.
IEnumerable<Tuple<T, IList<U>>> AssignEvenly<T, U>(IList<T> targetItems, IList<U> sourceItems) {
var fraction = sourceItems.Count/targetItems.Count;
var remainder = sourceItems.Count%targetItems.Count;
var sourceIndex = 0;
for (var targetIndex = 0; targetIndex < targetItems.Count; ++targetIndex) {
var itemsToAssign = fraction + (targetIndex < remainder ? 1 : 0);
yield return Tuple.Create(
targetItems[targetIndex],
(IList<U>) sourceItems.Skip(sourceIndex).Take(itemsToAssign).ToList()
);
sourceIndex += itemsToAssign;
}
}
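For example, with 3 users and 8 accounts the split comes out 3/3/2:

var users = new List<string> { "A", "B", "C" };
var accounts = Enumerable.Range(1, 8).ToList();
foreach (var pair in AssignEvenly(users, accounts))
    Console.WriteLine($"{pair.Item1}: {string.Join(",", pair.Item2)}");
// A: 1,2,3
// B: 4,5,6
// C: 7,8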