I have a list of roughly 50-60 items that I want to be able to divide into multiple columns dynamically. I'm using a nested for loop, and the lists divide properly when there is an even number of items. However, when there is an odd number of items, the remainder (modulus) items get left out. I've been playing around with it for a while and have not struck gold yet. I'm hoping someone smarter than me can and will assist.
Thanks.
for (int fillRow = 0; fillRow < numOfCols; fillRow++)
{
    for (int fillCell = 0; fillCell < (siteTitles.Count / numOfCols); fillCell++)
    {
        linkAddress = new HyperLink();
        linkAddress.Text = tempSites[fillCell].ToString();
        linkAddress.NavigateUrl = tempUrls[fillCell].ToString();
        mainTbl.Rows[fillCell].Cells[fillRow].Controls.Add(linkAddress);
    }
}
Well yes, the problem is here:
fillCell < (siteTitles.Count / numOfCols)
That division will round down, so for example if there are 13 titles and numOfCols is 5, it will give 2, which means that items 10-12 won't be used.
I suggest that actually you loop over all the items instead, and work out the row and column for each item:
for (int i = 0; i < siteTitles.Count; i++)
{
    int row = i / numOfCols;
    int col = i % numOfCols;
    // Fill in things using row, col and i
}
(It's not exactly clear what you're doing as you're using siteTitles in the loop condition and tempSites in the loop body, and you're not using fillRow when extracting the data... basically I think you've still got some bugs...)
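For illustration, here is a minimal sketch of how that index arithmetic could plug into the table-filling code, assuming (as in the question) that tempSites/tempUrls hold the link data and that mainTbl already has enough rows and cells:
// One pass over every item; the row and column are derived from the
// index, so leftover items past the last full row are still placed.
for (int i = 0; i < tempSites.Count; i++)
{
    int row = i / numOfCols;
    int col = i % numOfCols;

    HyperLink linkAddress = new HyperLink();
    linkAddress.Text = tempSites[i].ToString();
    linkAddress.NavigateUrl = tempUrls[i].ToString();
    mainTbl.Rows[row].Cells[col].Controls.Add(linkAddress);
}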
In C#, I create a list of lists and try to access/modify the elements. The problem is that an operation (e.g. adding a constant) seems to apply not only to the intended index but to all elements.
Here is the piece of code:
List<List<double>> D = new List<List<double>>();
List<double> ttmp = new List<double>(new double[256]);
for (int i = 0; i < 100; i++)
{
    D.Add(ttmp);
}
for (int i = 0; i < 100; i++)
{
    D[i][0] = D[i][0] + 1;
}
D is a list of 100 lists, each of size 256. It initially contains only zeroes. In the second loop, I ask that the first element of each of the 100 lists be incremented by one.
As a result, the entire "matrix" is filled with ones, i.e. not only D[0][0], D[1][0] ... D[99][0], but also D[0][1], D[0][2], etc.
Why is that?
NB: the C++ equivalent with vector<vector<double>> works perfectly fine...
When the code is executed, the result is not that D[0][0], D[1][0] ... D[99][0] plus D[0][1], D[0][2], etc. are all modified.
The result is that every inner list has its [0] element equal to 100.
Why so? Because you created one list and added it to D 100 times. But it is the same list each time: when you modify it through one reference, the change is visible through all of them, because List<double> is a reference type.
Change it instead to:
List<List<double>> D = new List<List<double>>();
for (int i = 0; i < 100; i++)
{
    D.Add(new List<double>(new double[256]));
}
foreach (var innerList in D)
{
    innerList[0]++;
}
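To see the aliasing directly, a minimal standalone sketch (the variable names are just for illustration):
var shared = new List<double>(new double[3]);
var a = shared; // a refers to the same list object
var b = shared; // and so does b
a[0] = 42;
Console.WriteLine(b[0]); // prints 42: one list, two references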
I would like to write a for loop that goes through an existing list and takes 20 items out of that list on each iteration.
So something like this:
If the filteredList contains, let's say, 68 items...
The first 3 iterations would take 20 items each, and then the 4th and last iteration would take the remaining 8 items residing in the list...
I have written something like this:
var allResponses = new List<string>();
for (int i = 0; i < filteredList.Count(); i++)
{
    allResponses.Add(GetResponse(filteredList.Take(20).ToList()));
}
Assume filteredList is a list that contains 68 items. I figured that this is not the way to go, because I don't want to loop to the collection's size: instead of 68 times, the loop should run 4 times, taking 20 items out of the list each time... How could I do this?
You are pretty close: just add a call to Skip, and divide Count by 20 with rounding up:
var allResponses = new List<string>();
for (int i = 0; i < (filteredList.Count + 19) / 20; i++) {
    allResponses.Add(GetResponse(filteredList.Skip(i * 20).Take(20).ToList()));
}
The "add 19, divide by 20" trick provides an idiomatic way of taking the "ceiling" of integer division, instead of the "floor".
Edit: Even better (Thanks to Thomas Ayoub)
var allResponses = new List<string>();
for (int i = 0; i < filteredList.Count; i += 20) {
    allResponses.Add(GetResponse(filteredList.Skip(i).Take(20).ToList()));
}
You simply have to calculate the number of pages:
const int PAGE_SIZE = 20;
int pages = filteredList.Count() / PAGE_SIZE
          + (filteredList.Count() % PAGE_SIZE > 0 ? 1 : 0);
The last part makes sure that with, say, 21 items, one extra page is added on top of the quotient (since 21 / 20 = 1 in integer division, but you need 2 pages).
Or, when using MoreLINQ, you could simply use a Batch call.
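If that route appeals, a rough sketch (assuming the MoreLINQ NuGet package is referenced, and reusing the asker's GetResponse method):
using System.Linq;
using MoreLinq;

var allResponses = filteredList
    .Batch(20) // yields the items in chunks of up to 20
    .Select(batch => GetResponse(batch.ToList()))
    .ToList();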
I suggest a simple loop that starts a new page on iterations 0, 20, 40, ..., without LINQ or ceiling arithmetic:
int pageSize = 20;
var pages = new List<List<string>>();
List<string> page = null;
for (int i = 0; i < filteredList.Count; ++i) {
    // every pageSize items, start a new page
    if (i % pageSize == 0) {
        page = new List<string>(pageSize);
        pages.Add(page);
    }
    page.Add(filteredList[i]);
}
I needed to do the same, but keeping only one element out of every 10, so I wrote this simple code:
List<Location2D> points = new List<Location2D>();
// take one point, then skip ahead 10, until the path is exhausted
for (int i = 0; i < path.Count; i += 10)
    points.Add(path[i]);
// make sure the final point of the path is included
if (!points.Contains(path.LastOrDefault()))
    points.Add(path.LastOrDefault());
I have a loop that is too slow in C#. I want to know if there is a faster way to process these arrays. I'm currently working in .NET 2.0, and I'm not opposed to upgrading this project. This is part of a theoretical image processing concept involving gray levels.
Pixel count (PixCnt = 21144402)
g_len = 4625
list_1d - one-dimensional array of an image, with an upper bound of the above pixel count.
pg - gray level intensity holder.
This function creates an index of those values, hence pgidx.
int[] pgidx = new int[PixCnt];
sw = new Stopwatch();
sw.Start();
for (i = 0; i < PixCnt; i++)
{
    j = 0;
    pgidx[i] = 0;
    while (j < g_len && list_1d[i] != pg[j]) j++;
    if (j < g_len && list_1d[i] == pg[j])
        pgidx[i] = j;
}
sw.Stop();
Debug.WriteLine("PixCnt loop took " + sw.ElapsedMilliseconds + " ms");
I think using a dictionary to store what's in the pg array will speed it up. g_len is 4625 elements, so the inner while loop will likely average around 2312 iterations. Replacing that with a single hashed lookup in a dictionary should be faster. Since the outer loop executes 21 million times, speeding up the body of that loop should reap big rewards. I'm guessing the code below will be 100 to 1000 times faster.
var pgDict = new Dictionary<int, int>(g_len);
for (int i = 0; i < g_len; i++) pgDict.Add(pg[i], i);
int[] pgidx = new int[PixCnt];
int value = 0;
for (int i = 0; i < PixCnt; i++) {
    if (pgDict.TryGetValue(list_1d[i], out value)) pgidx[i] = value;
}
Note that setting pgidx[i] to zero when a match isn't found is not necessary, because all elements of the array are already initialized to zero when the array is created.
If there is the possibility for a value in pg to appear more than once, you would want to check first to see if that key has already been added, and skip adding it to the dictionary if it has. That would mimic your current behavior of finding the first match. To do that replace the line where the dictionary is built with this:
for (int i = 0; i < g_len; i++) if (!pgDict.ContainsKey(pg[i])) pgDict.Add(pg[i], i);
If the range of the pixel values in pg allows it (say 16 bpp = 65536 entries), you can create an auxiliary array that maps every possible gray level to its index value in pg. Filling this array takes a single pass over pg (after initializing it to all zeroes).
Then convert list_1d to pgidx with straight table lookups.
If the table is too big (bigger than the image), then do as @hatchet answered.
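A minimal sketch of that lookup-table idea, assuming 16 bpp values (65536 possible gray levels) and the asker's variables pg, g_len, list_1d, and PixCnt:
// Map every possible 16-bit gray level to its index in pg.
// Levels not present in pg stay 0, matching the original default.
// (If pg can contain duplicates and the first match must win,
// skip entries that are already set.)
int[] lookup = new int[65536];
for (int j = 0; j < g_len; j++)
    lookup[pg[j]] = j;

// Convert the whole image with straight table lookups.
int[] pgidx = new int[PixCnt];
for (int i = 0; i < PixCnt; i++)
    pgidx[i] = lookup[list_1d[i]];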
EDIT: It looks like this is normal behavior, so can anyone just recommend a faster way to do these numerous intersections?
So my problem is this: I have 8000 lists (strings in each list). I compare each list (ranging in size from 50 to 400) to every other list and perform a calculation based on the size of the intersection. So I'll do
list1(intersect)list1 = number
list1(intersect)list2 = number
...
list1(intersect)list888 = number
And I do this for every list. Previously, I had a HashList, and my code was essentially this (well, I was actually searching through properties of an object, so I had to modify the code a bit, but it's basically this).
I have my two versions below, but if anyone knows anything faster, please let me know!
Loop through AllLists, getting each list, starting with list1, and then do this:
foreach (List<string> list in AllLists)
{
    if (list1_length < list_length) // just a check so I'm looping through the smaller list
    {
        foreach (string word in list1)
        {
            if (block.generator_list.Contains(word))
            {
                // simple integer count
            }
        }
    }
    // a little more code, but the same, just looping through the other list if it's smaller/bigger
}
Then I made the lists into regular lists and applied Sort(), which changed my code to:
foreach (List<string> list in AllLists)
{
    if (list1_length < list_length) // just a check so I'm looping through the smaller list
    {
        for (int i = 0; i < list1_length; i++)
        {
            var test = list.BinarySearch(list1[i]);
            if (test > -1)
            {
                // simple integer count
            }
        }
    }
}
The first version takes about 6 seconds; the other one takes more than 20 (I just stopped it there, because otherwise it would take more than a minute!), and this is for a smallish subset of the data.
I'm sure there's a drastic mistake somewhere, but I can't find it.
Well I have tried three distinct methods for achieving this (assuming I understood the problem correctly). Please note I have used HashSet<int> in order to more easily generate random input.
Setting up:
List<HashSet<int>> allSets = new List<HashSet<int>>();
Random rand = new Random();
for (int i = 0; i < 8000; ++i) {
    HashSet<int> ints = new HashSet<int>();
    int size = rand.Next(50, 400); // hoisted so the bound is drawn once
    for (int j = 0; j < size; ++j) {
        ints.Add(rand.Next(0, 1000));
    }
    allSets.Add(ints);
}
The three methods I checked (the code shown is what runs in the inner loop):
The loop:
Note that you are getting duplicated results in your code (intersecting set A with set B and later intersecting set B with set A).
It won't affect your performance thanks to the list-length check you are doing, but iterating this way is clearer:
for (int i = 0; i < allSets.Count; ++i) {
    for (int j = i + 1; j < allSets.Count; ++j) {
        // per-pair intersection code from each method goes here
    }
}
First method:
Used IEnumerable.Intersect() to get the intersection with the other set and IEnumerable.Count() to get the size of the intersection.
var intersect = allSets[i].Intersect(allSets[j]);
count = intersect.Count();
This was the slowest one, averaging 177s.
Second method:
Cloned the smaller of the two sets I was intersecting, then used ISet.IntersectWith() and checked the resulting set's Count.
HashSet<int> intersect;
HashSet<int> intersectWith;
if (allSets[i].Count < allSets[j].Count) {
    intersect = new HashSet<int>(allSets[i]);
    intersectWith = allSets[j];
} else {
    intersect = new HashSet<int>(allSets[j]);
    intersectWith = allSets[i];
}
intersect.IntersectWith(intersectWith);
count = intersect.Count;
This one was slightly faster, averaging 154s.
Third method:
Did something very similar to what you did: iterated over the shorter set and checked ISet.Contains on the longer set.
for (int i = 0; i < allSets.Count; ++i) {
    for (int j = i + 1; j < allSets.Count; ++j) {
        count = 0;
        HashSet<int> loopingSet;
        HashSet<int> containsSet;
        if (allSets[i].Count < allSets[j].Count) {
            loopingSet = allSets[i];
            containsSet = allSets[j];
        } else {
            loopingSet = allSets[j];
            containsSet = allSets[i];
        }
        foreach (int k in loopingSet) {
            if (containsSet.Contains(k)) {
                ++count;
            }
        }
    }
}
This method was by far the fastest (as expected), averaging 66s.
Conclusion:
The method you're using is the fastest of these three. I certainly can't think of a faster single-threaded way to do this. Perhaps there is a better concurrent solution.
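As a rough illustration of that last point, a hedged sketch that parallelizes the outer loop with Parallel.For; it assumes the sets are never mutated during the run (concurrent reads of HashSet<int> are safe) and, for simplicity, aggregates only a grand total instead of a per-pair count:
using System.Threading;
using System.Threading.Tasks;

long totalCount = 0;
Parallel.For(0, allSets.Count, i => {
    int localCount = 0;
    for (int j = i + 1; j < allSets.Count; ++j) {
        // loop over the smaller set, probe the larger one
        var smaller = allSets[i].Count < allSets[j].Count ? allSets[i] : allSets[j];
        var larger = allSets[i].Count < allSets[j].Count ? allSets[j] : allSets[i];
        foreach (int k in smaller) {
            if (larger.Contains(k)) ++localCount;
        }
    }
    Interlocked.Add(ref totalCount, localCount); // merge per-thread tallies
});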
I've found that one of the most important considerations when iterating/searching any kind of collection is to choose the collection type very carefully. Iterating through a normal collection will not be optimal for your purposes. Try using something like:
System.Collections.Generic.HashSet<T>
Using the Contains() method while iterating over the shorter of the two lists (as you mentioned you're already doing) should give close to O(1) performance per lookup, the same as key lookups in the generic Dictionary type.
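A small sketch of that pattern (the names shorter and longer are just illustrative placeholders for the two lists being intersected):
var longerSet = new HashSet<string>(longer); // one-time O(n) build
int count = 0;
foreach (string word in shorter)
{
    if (longerSet.Contains(word)) // ~O(1) per lookup
        count++;
}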
I have this code:
for (short i = 100; i >= 1; i--)
{
    OutputFormattedXmlElement("test", i.ToString());
}
I am using the "test" elements to populate a dropdown list:
<select id="list"><xsl:apply-templates select="/sales/test">...
My problem is that the dropdown list shows 1 as the default (correct), but it is at the bottom of the list and I want it at the top.
Where do I have to make a change? Is there any other way except JavaScript?
Thanks in advance!
You're looping from 100 down to 1, meaning that 100 is the first item and 1 is the last. It would seem to me that reversing the loop might give you better results.
Can you just reverse your for loop?
for (short i = 1; i <= 100; i++)
{
    OutputFormattedXmlElement("test", i.ToString());
}
For your situation, is it valid to simply flip the for loop?
for (short i = 1; i <= 100; i++)
...