I have a list of arrays, and I want to take one value from each array to build up a JSON structure. Currently, for every managedstrategy the currency always ends up as the last value in the loop. How can I take the 1st value, then the 2nd, and so on, while looping through the names?
List<managedstrategy> Records = new List<managedstrategy>();
int idcnt = 0;
foreach (var name in results[0])
{
    managedstrategy ms = new managedstrategy();
    ms.Id = idcnt++;
    ms.Name = name.ToString();
    foreach (var currency in results[1])
    {
        ms.Currency = currency.ToString();
    }
    Records.Add(ms);
}
var Items = new
{
    total = results.Count(),
    Records
};
return Json(Items, JsonRequestBehavior.AllowGet);
JSON structure is {Records:[{name: blah, currency: gbp}]}
Assuming that I understand the problem correctly, you may want to look into the Zip method provided by Linq. It's used to "zip" together two different lists, similar to how a zipper works.
A related question can be found here.
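For illustration, here is a minimal sketch of that idea, assuming results[0] holds the names and results[1] holds the matching currencies as IEnumerable<T> sequences of equal length (requires System.Linq):

// Pair names with currencies positionally, then project into managedstrategy objects,
// using the position as the Id.
var Records = results[0]
    .Zip(results[1], (name, currency) => new { name, currency })
    .Select((pair, i) => new managedstrategy
    {
        Id = i,
        Name = pair.name.ToString(),
        Currency = pair.currency.ToString()
    })
    .ToList();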
Currently, you are nesting the second loop inside the first, so the last currency always wins. Put everything into one for-loop instead:
// assumes results[0] and results[1] support indexing and have the same length
for (int i = 0; i < results[0].Count(); i++)
{
    managedstrategy ms = new managedstrategy();
    ms.Id = i;
    ms.Name = results[0][i].ToString();
    ms.Currency = results[1][i].ToString();
    Records.Add(ms);
}
I have a list that is constantly being updated throughout my program. I would like to be able to compare the initial count and final count of my list after every update. The following is just sample code (the original code is too lengthy), but it sufficiently captures the problem.
class Bot
{
    public int ID { get; set; }
}

public class Program
{
    public static void Main()
    {
        List<Bot> InitialList = new List<Bot>();
        List<Bot> FinalList = new List<Bot>();

        for (int i = 0; i < 12345; i++)
        {
            Bot b = new Bot() { ID = i };
            InitialList.Add(b);
        }

        FinalList = InitialList;

        for (int i = 0; i < 12345; i++)
        {
            Bot b = new Bot() { ID = i };
            FinalList.Add(b);
        }

        Console.WriteLine($"Initial list has {InitialList.Count} bots");
        Console.WriteLine($"Final list has {FinalList.Count} bots");
    }
}
Output:
Initial list has 24690 bots
Final list has 24690 bots
Expected for both lists to have 12345 bots.
What is the correct way to copy the initial list so the new set is not simply added to the original?
To do what you seem to want to do, you want to copy the list rather than assign a new reference to the same list. So instead of
FinalList = InitialList;
Use
FinalList.AddRange(InitialList);
Basically what you had was two variables both referring to the same list. This way you have two different lists, one with the initial values and one with new values.
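If you prefer, the list constructor does the copy in one step; a quick sketch:

// Copies the elements of InitialList into a brand-new list,
// so later Adds to FinalList do not affect InitialList.
FinalList = new List<Bot>(InitialList);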
That said, you could also just store the count if that's all you want to do.
int initialCount = InitialList.Count;
FinalList = InitialList;
Although there's now no longer a reason to copy from one to the other if you already have the data you need.
I get the feeling you actually want to do more than what's stated in the question though, so the correct approach may change depending on what you actually want to do.
I'm back to haunt your dreams! I'm working on comparing some values in a complex loop. List 1 is a list of questions/answers, and List 2 is also a list of questions/answers. I want to compare List 1 to List 2 and have duplicates removed from List 1 before merging it with List 2. My problem is that with the current seed data, two items in List 1 match against List 2, but only one is removed instead of both.
I've been at this a couple days and my head is ready to explode, so I hope I can find some help!
Here's code for you:
//Fetching questions/answers which do not have an attempt
//Get questions, which automatically pull associated answers thanks to the model
List<QuizQuestions> notTriedQuestions = await db.QuizQuestions.Where(x=>x.QuizID == report.QuizHeader.QuizID).ToListAsync();
//Compare to existing attempt data and remove duplicate questions
int i = 0;
while (i < notTriedQuestions.Count)
{
    var originalAnswersCount = notTriedQuestions.ElementAt(i).QuizAnswers.Count;
    int j = 0;
    while (j < originalAnswersCount)
    {
        var comparedID = notTriedQuestions.ElementAt(i).QuizAnswers.ElementAt(j).AnswerID;
        if (report.QuizHeader.QuizQuestions.Any(item => item.QuizAnswers.Any(x => x.AnswerID == comparedID)))
        {
            notTriedQuestions.RemoveAt(i);
            // Trip the while condition to break out of the inner loop;
            // carrying on after RemoveAt would throw
            j = originalAnswersCount;
        }
        else
        {
            j++;
        }
    }
    i++;
}

//Add filtered list to master list
foreach (var item in notTriedQuestions)
{
    report.QuizQuestions.Add(item);
}
Try LINQ's Union method; it is meant for exactly this sort of thing.
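A hedged sketch of that idea, assuming QuizQuestions exposes a QuestionID key (not shown in the original); the real code may need a different equality rule, and it requires System.Linq:

// Hypothetical comparer: treats two questions as equal when their QuestionID matches.
class QuestionIdComparer : IEqualityComparer<QuizQuestions>
{
    public bool Equals(QuizQuestions a, QuizQuestions b) => a.QuestionID == b.QuestionID;
    public int GetHashCode(QuizQuestions q) => q.QuestionID.GetHashCode();
}

// Union keeps the already-attempted questions and adds only the unseen ones,
// so duplicates from notTriedQuestions are dropped automatically.
var merged = report.QuizHeader.QuizQuestions
    .Union(notTriedQuestions, new QuestionIdComparer())
    .ToList();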
I'm after some help with querying a list and returning the index, but without using LINQ. I've seen many examples where LINQ is used, but the software I'm writing the query in doesn't support LINQ. :(
So here's an example to get us going:
List<string> location = new List<string>();
location.Add(@"C:\test\numbers\FileName_IgnoreThis_1.jpg");
location.Add(@"C:\test\numbers\FileName_IgnoreThis_2.jpg");
location.Add(@"C:\test\numbers\FileName_ImAfterThis_3.jpg");
location.Add(@"C:\test\numbers\FileName_IgnoreThis_4.jpg");
location.Add(@"C:\test\numbers\FileName_ImAfterThis_5.jpg");
So this list will be populated with probably a few hundred records. What I need to do is query the list for the text "ImAfterThis", then return the index number of each matching item into a string array, but without using LINQ.
The desired result would be 2 and 4 being added to the string array.
I was thinking of doing a for loop through the list, but is there a better way to achieve this?
List<int> results = new List<int>();
int i = 0;
foreach (string value in location)
{
    if (value.Contains("ImAfterThis"))
    {
        results.Add(i);
        Console.WriteLine("Found in Index: " + i);
    }
    i++;
}
Console.ReadLine();
Thanks in advance.
If you want to get just the first occurrence you could simply use the IndexOf() method.
If you want all occurrences of the string "whatever" then a for loop would certainly work for you. For the sake of argument, here I've captured the indexes in another list:
string MyString = "whatever";
List<int> indexes = new List<int>();
for (int i = 0; i < location.Count; i++)
{
    if (location[i] == MyString)
    {
        indexes.Add(i);
    }
}
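For the substring match in the question, the same loop works with Contains instead of equality; a quick sketch:

// No LINQ needed: List indexing and string.Contains are plain BCL members.
List<int> indexes = new List<int>();
for (int i = 0; i < location.Count; i++)
{
    if (location[i].Contains("ImAfterThis"))
    {
        indexes.Add(i);
    }
}
// For the sample data above, indexes now holds 2 and 4.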
Let's say I have two List<string> collections, populated from the results of reading a text file.
List owner contains:
cross
jhill
bbroms
List assignee contains:
Chris Cross
Jack Hill
Bryan Broms
During the read from a SQL source (the SQL statement contains a join)... I would perform
if (sqlReader["projects.owner"] == "something in owner list" || sqlReader["assign.assignee"] == "something in assignee list")
{
    // add this projects information to the primary results LIST
    list_by_owner.Add(sqlReader["projects.owner"], sqlReader["projects.project_date_created"], sqlReader["projects.project_name"], sqlReader["projects.project_status"]);

    // if the assignee is not null, add also to the secondary results LIST
    // logic to determine if assign.assignee is null goes here
    list_by_assignee.Add(sqlReader["assign.assignee"], sqlReader["projects.owner"], sqlReader["projects.project_date_created"], sqlReader["projects.project_name"], sqlReader["projects.project_status"]);
}
I do not want to end up using nested foreach.
The FOR loop would probably suffice. Someone had mentioned ZIP to me, but I wasn't sure if that would be a preferable route to go in my situation.
One loop to iterate through both lists (assuming both have same count):
for (int i = 0; i < alpha.Count; i++)
{
    var itemAlpha = alpha[i]; // <= your object from list alpha
    var itemBeta = beta[i];   // <= your object from list beta
    // write your code here
}
From what you describe, you don't need to iterate at all.
This is what you need:
http://msdn.microsoft.com/en-us/library/bhkz42b3.aspx
Usage:
if (listAlpha.Contains(resultA) || listBeta.Contains(resultA))
{
    // do your operation
}
List iteration happens implicitly inside the Contains method. And that's 2n comparisons, vs n*n for nested iteration.
If you do need to go that route, you would be better off with sequential iteration over each list, one after the other.
This list is maybe better represented as a List<KeyValuePair<string, string>> which would pair the two list values together in a single list.
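For illustration, a quick sketch of building that paired list (assuming owner[i] lines up with assignee[i], as in the question):

// Pairs the two lists positionally into a single list of key/value pairs.
var pairs = new List<KeyValuePair<string, string>>();
for (int i = 0; i < owner.Count && i < assignee.Count; i++)
{
    pairs.Add(new KeyValuePair<string, string>(owner[i], assignee[i]));
}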
There are several options for this. The least "painful" would be a plain old for loop:
for (var index = 0; index < alpha.Count; index++)
{
    var alphaItem = alpha[index];
    var betaItem = beta[index];
    // Do something.
}
Another interesting approach is using the indexed LINQ methods (but remember they are evaluated lazily; you have to consume the resulting enumerable), for example:
var projected = alpha.Select((alphaItem, index) =>
{
    var betaItem = beta[index];
    // Do something, then return a value; Select's lambda must return something
    return alphaItem;
});
Or you can enumerate both collection if you use the enumerator directly:
using (var alphaEnumerator = alpha.GetEnumerator())
using (var betaEnumerator = beta.GetEnumerator())
{
    while (alphaEnumerator.MoveNext() && betaEnumerator.MoveNext())
    {
        var alphaItem = alphaEnumerator.Current;
        var betaItem = betaEnumerator.Current;
        // Do something
    }
}
Zip (if you need pairs) or Concat (if you need a combined list) are possible options for iterating two lists at the same time.
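For illustration, a quick sketch of both (assuming alpha and beta are List<string>; requires System.Linq):

// Zip pairs the elements positionally: (alpha[0], beta[0]), (alpha[1], beta[1]), ...
var pairs = alpha.Zip(beta, (a, b) => new { a, b });

// Concat simply appends beta's elements after alpha's.
var combined = alpha.Concat(beta);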
I like doing something like this to enumerate over parallel lists:
int alphaCount = alpha.Count;
int betaCount = beta.Count;
int i = 0;
while (i < alphaCount && i < betaCount)
{
    var a = alpha[i];
    var b = beta[i];
    // handle matched alpha/beta pairs
    ++i;
}
while (i < alphaCount)
{
    var a = alpha[i];
    // handle unmatched alphas
    ++i;
}
while (i < betaCount)
{
    var b = beta[i];
    // handle unmatched betas
    ++i;
}
I have a flat file with an unfortunately dynamic column structure. There is a value that is in a hierarchy of values, and each tier in the hierarchy gets its own column. For example, my flat file might resemble this:
StatisticID|FileId|Tier0ObjectId|Tier1ObjectId|Tier2ObjectId|Tier3ObjectId|Status
1234|7890|abcd|efgh|ijkl|mnop|Pending
...
The same feed the next day may resemble this:
StatisticID|FileId|Tier0ObjectId|Tier1ObjectId|Tier2ObjectId|Status
1234|7890|abcd|efgh|ijkl|Complete
...
The thing is, I don't care much about all the tiers; I only care about the id of the last (bottom) tier, and all the other row data that is not part of the tier columns. I need to normalize the feed to something resembling this to inject into a relational database:
StatisticID|FileId|ObjectId|Status
1234|7890|ijkl|Complete
...
What would be an efficient, easy-to-read mechanism for determining the last tier object id, and organizing the data as described? Every attempt I've made feels kludgy to me.
Some things I've done:
I have tried to examine the column names for regular expression patterns, identify the columns that are tiered, order them by name descending, and select the first record... but I lose the ordinal column number this way, so that didn't look good.
I have placed the columns I want into an IDictionary<string, int> object to reference, but again reliably collecting the ordinal of the dynamic columns is an issue, and it seems this would be rather non-performant.
I ran into a similar problem a few years ago. I used a Dictionary to map the columns; it was not pretty, but it worked.
First make a Dictionary:
private Dictionary<int, int> GetColumnDictionary(string headerLine)
{
    Dictionary<int, int> columnDictionary = new Dictionary<int, int>();
    List<string> columnNames = headerLine.Split('|').ToList();
    string maxTierObjectColumnName = GetMaxTierObjectColumnName(columnNames);

    for (int index = 0; index < columnNames.Count; index++)
    {
        if (columnNames[index] == "StatisticID")
        {
            columnDictionary.Add(0, index);
        }
        if (columnNames[index] == "FileId")
        {
            columnDictionary.Add(1, index);
        }
        if (columnNames[index] == maxTierObjectColumnName)
        {
            columnDictionary.Add(2, index);
        }
        if (columnNames[index] == "Status")
        {
            columnDictionary.Add(3, index);
        }
    }

    return columnDictionary;
}
private string GetMaxTierObjectColumnName(List<string> columnNames)
{
    // Edit this function if the Tier ObjectId count goes above 9
    // (alphabetical ordering would put "Tier10" before "Tier2")
    var maxTierObjectColumnName = columnNames.Where(c => c.Contains("Tier") && c.Contains("Object")).OrderBy(c => c).Last();
    return maxTierObjectColumnName;
}
And after that it's simply a matter of running through the file:
private List<DataObject> ParseFile(string fileName)
{
    List<DataObject> dataObjects = new List<DataObject>();

    // using ensures the file handle is released even if parsing throws
    using (StreamReader streamReader = new StreamReader(fileName))
    {
        string headerLine = streamReader.ReadLine();
        Dictionary<int, int> columnDictionary = this.GetColumnDictionary(headerLine);

        string line;
        while ((line = streamReader.ReadLine()) != null)
        {
            var lineValues = line.Split('|');
            dataObjects.Add(
                new DataObject()
                {
                    StatisticId = lineValues[columnDictionary[0]],
                    FileId = lineValues[columnDictionary[1]],
                    ObjectId = lineValues[columnDictionary[2]],
                    Status = lineValues[columnDictionary[3]]
                }
            );
        }
    }

    return dataObjects;
}
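For reference, DataObject is assumed here to be a simple DTO along these lines (it isn't shown in the original):

public class DataObject
{
    public string StatisticId { get; set; }
    public string FileId { get; set; }
    public string ObjectId { get; set; }
    public string Status { get; set; }
}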
I hope this helps (even a little bit).
Personally I would not try to reformat your file. I think the easiest approach would be to parse each row from the front and the back. For example:
itemArray = getMyItems();
statisticId = itemArray[0];
fileId = itemArray[1];
//and so on for the rest of your pre-tier columns
//Then get the second-to-last column, which will be the last (deepest) tier
lastTierId = itemArray[itemArray.Length - 2];
Since you know the last tier will always be second from the end you can just start at the end and work your way forwards. This seems like it would be much easier than trying to reformat the datafile.
If you really want to create a new file, you could use this approach to get the data you want to write out.
I don't know C# syntax, but something along these lines:
split line in parts with | as separator
get parts [0], [1], [length - 2] and [length - 1]
pass the parts to the database handling code
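In C#, that pseudocode might look something like this (a sketch, assuming '|' is the separator, Status is the last column, and the deepest tier is second-to-last):

string[] parts = line.Split('|');              // 'line' is one data row from the file
string statisticId = parts[0];
string fileId = parts[1];
string objectId = parts[parts.Length - 2];     // deepest tier, second from the end
string status = parts[parts.Length - 1];       // Status is always the last column
// pass statisticId, fileId, objectId and status to the database handling code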