I am trying to make a program that checks whether a given sudoku board is valid (solved correctly).
I also want to do it using LINQ, but I find it hard to come up with a solution that gets all the 3x3 groups from the board.
I want them as an IEnumerable<IEnumerable<int>> because of how I wrote the rest of the code.
Here is my solution so far:
public static bool IsValidSudoku(IEnumerable<IEnumerable<int>> sudokuBoard)
{
    if (sudokuBoard == null)
    {
        throw new ArgumentNullException();
    }
    var columns = Enumerable.Range(0, 9)
        .Select(lineCount => Enumerable.Range(0, 9)
            .Select(columnCount => sudokuBoard
                .ElementAt(columnCount)
                .ElementAt(lineCount)
            ));
    var groups = // this is where I got stuck
    return columns.All(IsValidGroup) &&
           sudokuBoard.All(IsValidGroup) &&
           groups.All(IsValidGroup);
}
static bool IsValidGroup(IEnumerable<int> group)
{
    return group.Distinct().Count() == group.Count() &&
           group.All(x => x <= 9 && x > 0) &&
           group.Count() == 9;
}
Performance is not important here.
Thank you for any advice!
You need two indices to choose which 3x3 group you're selecting, and then you can use .Skip and .Take to fetch runs of three elements for those groups.
var groups = Enumerable.Range(0, 3).SelectMany(gy =>
    Enumerable.Range(0, 3).Select(gx =>
        // We now have gx and gy in 0-2; find the three rows we want
        sudokuBoard.Skip(gy * 3).Take(3).Select(row =>
            // and from each row take the three columns
            row.Skip(gx * 3).Take(3)
        )
    ));
This should give you an IEnumerable<IEnumerable<IEnumerable<int>>> as requested. However, to pass each group to IsValidGroup you'll have to flatten each 3x3 IEnumerable<IEnumerable<int>> into a 9-element IEnumerable<int>, e.g. groups.Select(group => group.SelectMany(n => n)).
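Or, as a sketch, fold the flattening into the selection itself, so each group comes out as a 9-element IEnumerable<int> ready for IsValidGroup:
var groups = Enumerable.Range(0, 3).SelectMany(gy =>
    Enumerable.Range(0, 3).Select(gx =>
        // Take the block's three rows, then flatten the three cells
        // of each row into one 9-element group.
        sudokuBoard.Skip(gy * 3).Take(3)
                   .SelectMany(row => row.Skip(gx * 3).Take(3))));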
I'm creating a report-generating tool that uses custom data types from different sources in our system. The user can create a report schema, and depending on what is asked, the data gets associated based on different index keys, times, time ranges, etc. The project is NOT doing queries against a relational database; it's pure C# code over collections in RAM.
I'm having a huge performance issue; I've been looking at my code for a few days now and am struggling to optimize it.
I stripped the code down to a short example of what the profiler points to as the problematic algorithm, but the real version is a bit more complex, with more conditions and work with dates.
In short, this function returns the subset of "values" satisfying conditions that depend on the keys of the values selected from the "index rows".
private List<LoadedDataSource> GetAssociatedValues(IReadOnlyCollection<List<LoadedDataSource>> indexRows, List<LoadedDataSource> values)
{
    var checkContainers = ((ValueColumn.LinkKeys & ReportLinkKeys.ContainerId) > 0 &&
                           values.Any(t => t.ContainerId.HasValue));
    var checkEnterpriseId = ((ValueColumn.LinkKeys & ReportLinkKeys.EnterpriseId) > 0 &&
                             values.Any(t => t.EnterpriseId.HasValue));
    var ret = new List<LoadedDataSource>();
    foreach (var value in values)
    {
        var valid = true;
        foreach (var index in indexRows)
        {
            // ContainerId
            var indexConservedSource = index.AsEnumerable();
            if (checkContainers && index.CheckContainer && value.ContainerId.HasValue)
            {
                indexConservedSource = indexConservedSource.Where(t => t.ContainerId.HasValue && t.ContainerId.Value == value.ContainerId.Value);
                if (!indexConservedSource.Any())
                {
                    valid = false;
                    break;
                }
            }
            // EnterpriseId
            if (checkEnterpriseId && index.CheckEnterpriseId && value.EnterpriseId.HasValue)
            {
                indexConservedSource = indexConservedSource.Where(t => t.EnterpriseId.HasValue && t.EnterpriseId.Value == value.EnterpriseId.Value);
                if (!indexConservedSource.Any())
                {
                    valid = false;
                    break;
                }
            }
        }
        if (valid)
            ret.Add(value);
    }
    return ret;
}
This works for small samples, but as soon as I have thousands of values, and 2-3 index rows with a few dozen values each, it can take hours to generate.
As you can see, I try to break as soon as an index condition fails and pass to the next value.
I could probably do everything in a single "values.Where(####).ToList()", but that condition gets complex fast.
I tried wrapping indexConservedSource in an IQueryable, but it was even worse. I also tried a Parallel.ForEach with a ConcurrentBag for "ret", and that was slower too.
What else can be done?
What you are doing, in principle, is calculating the intersection of two sequences. You use two nested loops, which is slow: the time is O(m*n). You have two other options:
sort both sequences and merge them
convert one sequence into a hash table and test the second against it
The second approach seems better for this scenario. Just convert those index lists into HashSets and test the values against them. I added some code for inspiration:
private List<LoadedDataSource> GetAssociatedValues(IReadOnlyCollection<List<LoadedDataSource>> indexRows, List<LoadedDataSource> values)
{
    var ret = values;
    if ((ValueColumn.LinkKeys & ReportLinkKeys.ContainerId) > 0 &&
        ret.Any(t => t.ContainerId.HasValue))
    {
        var indexes = indexRows
            .Where(i => i.CheckContainer)
            .Select(i => new HashSet<int>(i
                .Where(h => h.ContainerId.HasValue)
                .Select(h => h.ContainerId.Value)))
            .ToList();
        ret = ret.Where(v => v.ContainerId == null
                             || indexes.All(i => i.Contains(v.ContainerId.Value)))
                 .ToList();
    }
    if ((ValueColumn.LinkKeys & ReportLinkKeys.EnterpriseId) > 0 &&
        ret.Any(t => t.EnterpriseId.HasValue))
    {
        var indexes = indexRows
            .Where(i => i.CheckEnterpriseId)
            .Select(i => new HashSet<int>(i
                .Where(h => h.EnterpriseId.HasValue)
                .Select(h => h.EnterpriseId.Value)))
            .ToList();
        ret = ret.Where(v => v.EnterpriseId == null
                             || indexes.All(i => i.Contains(v.EnterpriseId.Value)))
                 .ToList();
    }
    return ret;
}
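For completeness, the first option (sort and merge) looks roughly like this on two already-sorted sequences. This is a sketch of the general technique on plain ints, not code tailored to LoadedDataSource:
// Intersects two *sorted* sequences in O(m + n) by advancing whichever
// side is behind; assumes both inputs are sorted ascending.
static IEnumerable<int> SortedIntersect(IReadOnlyList<int> a, IReadOnlyList<int> b)
{
    int i = 0, j = 0;
    while (i < a.Count && j < b.Count)
    {
        if (a[i] < b[j]) i++;
        else if (a[i] > b[j]) j++;
        else { yield return a[i]; i++; j++; }
    }
}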
I would like to do something like this (below), but I'm not sure if there is a formal/optimized syntax to do so:
.OrderBy(i => i.Value1)
.Take("Bottom 100 & Top 100")
.OrderBy(i => i.Value2);
Basically, I want to sort by one variable, then take the top 100 and bottom 100, and then sort those results by another variable.
Any suggestions?
var sorted = list.OrderBy(i => i.Value1);
var top100 = sorted.Take(100);
var last100 = sorted.Reverse().Take(100);
var result = top100.Concat(last100).OrderBy(i => i.Value2);
I don't know if you want Concat or Union at the end. Concat will combine all entries of both lists even if there are duplicate entries, which would be the case if your original list contains fewer than 200 entries. Union would only add items from last100 that are not already in top100.
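A quick illustration of the difference:
var a = new[] { 1, 2, 3 };
var b = new[] { 3, 4 };
var concat = a.Concat(b); // 1, 2, 3, 3, 4 -- duplicates kept
var union  = a.Union(b);  // 1, 2, 3, 4    -- duplicates removed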
Some things that are not clear but that should be considered:
If list is an IQueryable backed by a database, it's probably advisable to use ToArray() or ToList(), e.g.
var sorted = list.OrderBy(i => i.Value1).ToArray();
at the beginning. This way only one query is made to the database, while the rest is done in memory.
The Reverse method is not optimized the way I had hoped, but it shouldn't be a problem, since ordering the list is the real cost here. For the record, the Skip-based approach explained in other answers is probably a little faster, but it needs to know the number of elements in list.
If list were a LinkedList or another class implementing IList, the Reverse method could be done in an optimized way.
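For reference, that Skip-based variant could look like this (a sketch; it assumes the whole list is materialized first):
var sorted = list.OrderBy(i => i.Value1).ToArray();
var top100 = sorted.Take(100);
var last100 = sorted.Skip(Math.Max(0, sorted.Length - 100)); // ascending order, unlike Reverse()
var result = top100.Concat(last100).OrderBy(i => i.Value2);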
You can use an extension method like this:
public static IEnumerable<T> TakeFirstAndLast<T>(this IEnumerable<T> source, int count)
{
    var first = new List<T>();
    var last = new LinkedList<T>();
    foreach (var item in source)
    {
        // Fill the head buffer until it holds 'count' items.
        if (first.Count < count)
            first.Add(item);
        // Keep a sliding window of the most recent 'count' items.
        if (last.Count >= count)
            last.RemoveFirst();
        last.AddLast(item);
    }
    return first.Concat(last);
}
(I'm using a LinkedList<T> for last because it can remove items in O(1))
You can use it like this:
.OrderBy(i => i.Value1)
.TakeFirstAndLast(100)
.OrderBy(i => i.Value2);
Note that it doesn't handle the case where there are fewer than 200 items: if that's the case, you will get duplicates. You can remove them using Distinct if necessary.
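For example (assuming your element type has sensible equality semantics, or you pass a comparer to Distinct):
var result = list.OrderBy(i => i.Value1)
                 .TakeFirstAndLast(100)
                 .Distinct()
                 .OrderBy(i => i.Value2);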
Take the top 100 and bottom 100 separately and union them:
var tempresults = yourenumerable.OrderBy(i => i.Value1);
var results = tempresults.Take(100);
results = results.Union(tempresults.Skip(tempresults.Count() - 100).Take(100))
.OrderBy(i => i.Value2);
You can do it in one statement using this .Where overload, if you have the number of elements available:
var elements = ...
var count = elements.Length; // or .Count for a list
var result = elements
    .OrderBy(i => i.Value1)
    .Where((v, i) => i < 100 || i >= count - 100)
    .OrderBy(i => i.Value2)
    .ToArray(); // evaluate
Here's how it works:
| first 100 elements | middle elements   | last 100 elements        |
| i < 100 (kept)     | neither (dropped) | i >= count - 100 (kept)  |
You can write your own extension method, like Take(), Skip() and the other methods from the Enumerable class. It takes the number of elements to return and the total length of the sequence as input, and returns the first and last N elements.
var result = yourList.OrderBy(x => x.Value1)
    .GetLastAndFirst(100, yourList.Count)
    .OrderBy(x => x.Value2)
    .ToList();
Here is the extension method:
public static class SOExtensions
{
    public static IEnumerable<T> GetLastAndFirst<T>(
        this IEnumerable<T> seq, int number, int totalLength
    )
    {
        // Validate eagerly: iterator blocks run lazily, so a check inside
        // one would only throw once the sequence is actually enumerated.
        if (totalLength < number * 2)
            throw new ArgumentException("totalLength must be >= (number * 2)");
        return Iterator(seq, number, totalLength);
    }

    private static IEnumerable<T> Iterator<T>(IEnumerable<T> seq, int number, int totalLength)
    {
        int i = 0;
        foreach (var item in seq)
        {
            i++;
            if (i <= number || i >= totalLength - number)
                yield return item;
        }
    }
}
I'm trying to find a good way to continually iterate through a DbSet over multiple function calls, looping back to the beginning when I reach the end.
Essentially I've got a bunch of Ads in the database, and I want getNext(count) behavior on dbset.Ads.
Here's an example
ID Text Other Columns...
1 Ad1 ...
2 Ad2 ...
3 Ad3 ...
4 Ad4 ...
5 Ad5 ...
6 Ad6 ...
Let's say that in my View, I determine I need 2 ads to display for User 1. I want to return Ads 1-2. User 2 then requests 3 ads. I want it to return 3-5. User 3 needs 2 ads, and the function should return ads 6 and 1, looping back to the beginning.
Here's my code that I've been working with (it's in class AdsManager):
Ad NextAd = db.Ads.First();
public IEnumerable<Ad> getAds(int count)
{
    var output = new List<Ad>();
    IEnumerable<Ad> Ads = db.Ads.OrderBy(x => x.Id).SkipWhile(x => x.Id != NextAd.Id);
    output.AddRange(Ads);
    // If we're at the end, handle that case
    if (output.Count != count)
    {
        NextAd = db.Ads.First();
        output.AddRange(getAds(count - output.Count));
    }
    NextAd = output[count - 1];
    return output;
}
The problem is that the function call IEnumerable<Ad> Ads = db.Ads.OrderBy(x=>x.Id).SkipWhile(x=>x.Id != NextAd.Id); throws an error on AddRange(Ads):
LINQ to Entities does not recognize the method 'System.Linq.IQueryable'1[Domain.Entities.Ad] SkipWhile[Ad](System.Linq.IQueryable'1[Domain.Entities.Ad], System.Linq.Expressions.Expression'1[System.Func`2[Domain.Entities.Ad,System.Boolean]])' method, and this method cannot be translated into a store expression.
I originally loaded the entire DbSet into a Queue and did enqueue/dequeue, but that would not update when a change was made to the database. I got the idea for this algorithm from Get the next and previous sql row by Id and Name, EF?
What call should I be making to the database to get what I want?
UPDATE: Here's the working code:
public IEnumerable<Ad> getAds(int count)
{
    List<Ad> output = new List<Ad>();
    output.AddRange(db.Ads.OrderBy(x => x.Id).Where(x => x.Id >= NextAd.Id).Take(count + 1));
    if (output.Count != count + 1)
    {
        NextAd = db.Ads.First();
        output.AddRange(db.Ads.OrderBy(x => x.Id).Where(x => x.Id >= NextAd.Id).Take(count - output.Count + 1));
    }
    NextAd = output[count];
    output.RemoveAt(count);
    return output;
}
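For illustration, against the six-ad table above, successive calls behave as described earlier (the AdsManager instance name here is mine, not from the original post):
var adsManager = new AdsManager();
var a = adsManager.getAds(2); // Ads 1-2
var b = adsManager.getAds(3); // Ads 3-5
var c = adsManager.getAds(2); // Ads 6 and 1, wrapping around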
SkipWhile is not supported in EF; it can't be translated into SQL. EF basically works on sets, not sequences (I hope that sentence makes sense).
A workaround is to simply use Where, e.g.:
IEnumerable<Ad> Ads = db.Ads.OrderBy(x => x.Id).Where(x => x.Id >= NextAd.Id);
Maybe you can simplify this into something like this:
Ad NextAd = db.Ads.First();
public IQueryable<Ad> getAds(int count)
{
    var firstTake = db.Ads
        .OrderBy(x => x.Id)
        .Where(x => x.Id >= NextAd.Id)
        .Take(count);
    var secondTake = db.Ads
        .OrderBy(x => x.Id)
        .Take(count - firstTake.Count());
    var result = firstTake.Concat(secondTake);
    NextAd = result.LastOrDefault();
    return result;
}
Sorry, I haven't tested this, but it should work.
class order {
Guid employeeId;
DateTime time;
}
I need to filter a list of orders into 4 lists based on time ranges: 0-9AM in the 1st list, 9AM-2PM in the 2nd, 2-6PM in the 3rd, and 6-12PM in the 4th.
I am curious whether this can be achieved with lambda expressions in an efficient way; otherwise, what would be the best way to split the list?
This should work:
var orders = list.OrderBy(o => o.time);
var first = orders.TakeWhile(o => o.time.TimeOfDay.TotalHours <= 9);
var second = orders.SkipWhile(o => o.time.TimeOfDay.TotalHours <= 9)
                   .TakeWhile(o => o.time.TimeOfDay.TotalHours <= 14);
var third = orders.SkipWhile(o => o.time.TimeOfDay.TotalHours <= 14)
                  .TakeWhile(o => o.time.TimeOfDay.TotalHours <= 18);
var fourth = orders.SkipWhile(o => o.time.TimeOfDay.TotalHours <= 18);
Here's another, maybe more efficient, more flexible and concise approach which uses Enumerable.GroupBy:
var groups = list.Select(o => new
{
Order = o,
DayPart = o.time.TimeOfDay.TotalHours <= 9 ? 1
: o.time.TimeOfDay.TotalHours > 9 && o.time.TimeOfDay.TotalHours <= 14 ? 2
: o.time.TimeOfDay.TotalHours > 14 && o.time.TimeOfDay.TotalHours <= 18 ? 3 : 4
})
.GroupBy(x => x.DayPart)
.OrderBy(g => g.Key);
var first = groups.ElementAt(0);
var second = groups.ElementAt(1);
// ...
The most readable way would be to use a named function to do the grouping and pass it as a delegate to GroupBy():
var orderGroups = orders.GroupBy(GetOrderGroup);
private int GetOrderGroup(order o)
{
    // implement your groups
}
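A minimal sketch of that grouping function, using the boundaries from the question (the group numbers are arbitrary labels):
private int GetOrderGroup(order o)
{
    var h = o.time.TimeOfDay.TotalHours;
    if (h <= 9) return 1;   // 0-9AM
    if (h <= 14) return 2;  // 9AM-2PM
    if (h <= 18) return 3;  // 2-6PM
    return 4;               // 6-12PM
}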
This should do the trick:
var first = orders.Where(o => o.time.Hour >= 0 && o.time.Hour < 9);
var second = orders.Where(o => o.time.Hour >= 9 && o.time.Hour < 14);
var third = orders.Where(o => o.time.Hour >= 14 && o.time.Hour < 18);
var fourth = orders.Where(o => o.time.Hour >= 18 && o.time.Hour < 24);
I'm in OSX right now so I can't test the solution, but I'd probably add a property to my order class to compute the group. I feel like your order could reasonably be concerned with this. So you could have something like this:
class order {
    Guid employeeId;
    DateTime time;
    public int Group { get { return /* check hours of day to group */ 0; } }
}
Then, it should be as easy as orders.GroupBy(o => o.Group);
If you don't feel like your order should know about the groups, you could make another method where you feel it's more important to define the group. Then you could still say orders.GroupBy(o => GetGroupNumber(o)).
If you still need help next time I'm in Windows, I'll write a snippet for you.
EDIT:
I've noticed several of the other answers recommend executing a Where or a Skip/Take strategy (with the overhead of a sort) on the original list for each child list you want to create.
My concern is the performance hit on large sets. For example, the four .Where evaluations will run their comparisons over all of your objects four times, despite the fact that the groups are mutually exclusive.
I don't know how much data you have, but for your sake I hope it's a LOT of orders :). In any event, I'd probably try to do the grouping and comparisons in one iteration, as I recommended. If you don't like that solution, I'd suggest iterating over the list yourself and building your sets without LINQ to Objects.
Just my two cents.
It's important to use the DateTime.TimeOfDay.TotalHours property, which returns the time of day as whole and fractional hours.
var endTimes = new List<int>() { 9, 14, 18, 24 };
var results = orders.GroupBy(o => endTimes.First(t => o.time.TimeOfDay.TotalHours < t))
.OrderBy(g => g.Key);
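To materialize the four lists from those groups (a sketch; a time range containing no orders produces no group at all, so guard the lookups if that can happen):
var byEndHour = results.ToDictionary(g => g.Key, g => g.ToList());
var first  = byEndHour[9];   // 0-9AM
var second = byEndHour[14];  // 9AM-2PM
var third  = byEndHour[18];  // 2-6PM
var fourth = byEndHour[24];  // 6-12PM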
I have a list of data that I want to display as a table in a GUI (HTML).
I want to create a lambda expression so that a list of a hundred items is divided into, for instance, 20 rows of 5 items each. Can I write a concise lambda expression to do this?
This is what I came up with (yes, 5 is a magic number; it is the number of items per row):
bool isCreateNewRow = true;
var aggregate = Model.Aggregate(new Collection<ICollection<Album>>(),
    (tableCollection, album) =>
    {
        if (isCreateNewRow)
        {
            tableCollection.Add(new Collection<Album>());
            isCreateNewRow = false;
        }
        tableCollection.Last().Add(album);
        if (tableCollection.Last().Count() >= 5)
        {
            isCreateNewRow = true;
        }
        return tableCollection;
    });
Is there a shorter way to create a 2-dimensional data structure (IEnumerables of IEnumerables)?
It would be so much easier to
a) create your (1D) result set, and
b) use 1 or 2 for loops to process (present) those results in table form.
Also because those two steps logically belong to different layers of a solution.
// Create 20 dummy items.
var albums = Enumerable.Range(1, 20)
    .Select(i => string.Format("Album {0}", i));
// Associate each one with the row index.
var numbered = albums
    .Select((item, index) => new { Row = index / 3, Item = item });
// Arrange into rows.
var table = numbered
    .GroupBy(x => x.Row)
    .Select(g => g.Select(x => x.Item).AsEnumerable());
At this point, table is an IEnumerable<IEnumerable<string>>.
To turn it into HTML, try this:
var html = table.Aggregate(
    new StringBuilder("<table>"),
    (tableBuilder, row) => {
        tableBuilder.AppendFormat("<tr>{0}</tr>",
            row.Aggregate(new StringBuilder(),
                (rowBuilder, cell) => {
                    rowBuilder.AppendFormat("<td>{0}</td>", cell);
                    return rowBuilder;
                }));
        return tableBuilder;
    },
    tableBuilder => {
        tableBuilder.Append("</table>");
        return tableBuilder;
    });
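As an aside, if you can use .NET 6 or later, the built-in Enumerable.Chunk collapses the row-building step to a single call:
// Each row is a string[] of up to 5 albums; the last row may be shorter.
IEnumerable<string[]> table = albums.Chunk(5);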