Beginner programmer here, so please keep answers (and their explanations) as simple as possible.
For an assignment we got a text file that contains a large number of lines.
Each line is a different 'Order' with an ordernumber, location, frequency (number of times per week), and a few other fields.
Example:
18; New York; 3PWK; ***; ***; etc
I've made it so that each line is read and stored in a string array with
string[] orders = System.IO.File.ReadAllLines(@"<filepath here>");
And I've made a separate class called "Order" which has a get+set for all the properties like ordernumber, etc.
Now I'm stuck on how to actually get the values in there. I know how string splitting works, but not how I can make unique objects in a loop.
I'd need something that creates a new Order for every line and then assigns Order.ordernumber, Order.location, etc.
Any help would be MUCH appreciated!
An easy approach would be to make a class that defines the orders, like this:
public class Order
{
    public string OrderNumber { get; set; }
    public string OrderId { get; set; }
    public string OrderSomeThingElse { get; set; }
}
Then initialize a List:
var orderList = new List<Order>();
Then loop through and populate it:
foreach (var order in orders)
{
    var splitString = order.Split(';');
    orderList.Add(new Order
    {
        OrderNumber = splitString[0].Trim(),        // Trim() strips the space after each ';' in the file
        OrderId = splitString[1].Trim(),
        OrderSomeThingElse = splitString[2].Trim()
    });
}
If you want an easy, if not especially elegant, approach, this is it.
In addition to all the good answers you've already received, I recommend using File.ReadLines() instead of File.ReadAllLines(), because you are reading a large file.
The ReadLines and ReadAllLines methods differ as follows: when you use ReadLines, you can start enumerating the collection of strings before the whole collection is returned; when you use ReadAllLines, you must wait for the whole array of strings to be returned before you can access it. Therefore, when you are working with very large files, ReadLines can be more efficient. (MSDN)
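For example, a rough sketch of the same parsing loop with File.ReadLines (the path is just a placeholder, and the Order properties are the ones from the answer above):

var orderList = new List<Order>();
// ReadLines streams the file, so each line is parsed as it is read
// instead of the whole file being loaded into an array first.
foreach (var line in System.IO.File.ReadLines(@"<filepath here>"))
{
    var splitString = line.Split(';');
    orderList.Add(new Order
    {
        OrderNumber = splitString[0].Trim(),
        OrderId = splitString[1].Trim(),
        OrderSomeThingElse = splitString[2].Trim()
    });
}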
Unless I misunderstand... do you mean something like this?
var ordersCollection = new List<Order>();
foreach (var order in orders)
{
    var o = new Order();
    o.PropertyName = ""; // Assign your property values like this
    ordersCollection.Add(o);
}
// ordersCollection is now full of your orders.
Related
I have a file with 2 columns and multiple rows. The 1st column is an ID, the 2nd column is a Name. I would like to display a dropdown that shows only the names from this file.
I will only iterate through the collection, so which is the better approach? Is creating the objects more readable for other developers, or is creating new objects too slow to be worth it?
while (!reader.EndOfStream)
{
    var row = reader.ReadLine();
    var values = row.Split(' ');
    list.Add(new Object { Id = int.Parse(values[0]), Name = values[1] });
}
or
while (!reader.EndOfStream)
{
    var row = reader.ReadLine();
    var values = row.Split(' ');
    dict.Add(int.Parse(values[0]), values[1]);
}
Do I lose speed if I create new objects?
You create new objects either way; when adding to the Dictionary<TKey, TValue>, you also create a new key-value pair of the desired type.
As you already mentioned in your question, the decision is made primarily on
the expected access pattern
performance considerations, which are themselves a function of the access pattern.
If you need a read-only collection to iterate over, use List<T> (even better, if the size of the data is known up front, use a plain Type[] array and just read from it where you need it).
If you need key-wise access to your data, use Dictionary<TKey, TValue>.
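A rough sketch of the difference, using the two collections from the question (42 is just an example ID):

// Iteration only: the list is all you need.
foreach (var item in list)
{
    Console.WriteLine(item.Name); // item is whatever row type you defined for the file
}

// Key-wise access: the dictionary gives you a direct lookup by ID.
string name = dict[42];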
If you only want to iterate over the objects, then use a List; there is no need for the Dictionary class at all.
I want to modify some strings that are contained in an object, say an array, or maybe the text nodes in an XDocument (via ((XText)node).Value).
I want to gather a subset of strings from these objects and modify them, but I don't know at runtime what object type they come from.
Put another way, let's say I have objects like this:
List<string> fruits = new List<string>() {"apple", "banana", "cantelope"};
XDocument _xmlObject;
I want to be able to add a subset of values from the original collections to new lists like this:
List<ref string> myStrings1 = new List<ref string>();
myStrings1.Add(ref fruits[1]);
myStrings1.Add(ref fruits[2]);
List<ref string> myStrings2 = new List<ref string>();
IEnumerable<XNode> xTextNodes = getTargetTextNodes(targetPath); //some function returns a series of XNodes in the XDocument
foreach (XNode node in xTextNodes)
{
    myStrings2.Add(((XText)node).Value);
}
Then change the values using a general purpose method like this:
public void Modify(List<ref string> mystrings)
{
    foreach (ref string item in mystrings)
    {
        item = "new string";
    }
}
Such that I can pass that method any string collection, and modify the strings in the original object without having to deal with the original object itself.
static void Main(string[] args)
{
    Modify(myStrings1);
    Modify(myStrings2);
}
The important part here is the mystrings collection. That can be special. But I need to be able to use a variety of different kinds of strings and string collections as the originals source data to go in that collection.
Of course, the above code doesn't work, and neither does any variation I've tried. Is this even possible in c#?
What you want is possible with C#... but only if you can fix every possible source for your strings. That would allow you to use pointers to the original strings... at a terrible cost, however, in terms of memory management and unsafe code throughout your application.
I encourage you to pursue a different direction for this.
Based on your edits, it looks like you're always working with an entire collection, and always modifying the entire collection at once. Also, this might not even be a string collection at the outset. I don't think you'll be able to get the exact result you want, because of the base XDocument type you're working with. But one possible direction to explore might look like this:
public IEnumerable<string> Modify(IEnumerable<string> items)
{
    foreach (string item in items)
    {
        yield return "blah";
    }
}
You can use a projection to get strings from any collection type, and get your modified text back:
fruits = Modify(fruits).ToList();
var nodes = Modify(xTextNodes.Select(n => ((XText)n).Value));
And once you understand how to make a projection, you may find that the existing .Select() method already does everything you need.
What I really suggest, though, is that rather than working with an entire collection, think about working in terms of one record at a time. Create a common object type that all of your data sources understand. Create a projection from each data source into the common object type. Loop through each of the objects in your projection and make your adjustment. Then have another projection back to the original record type. This will not be the original collection. It will be a new collection. Write your new collection back to disk.
Used appropriately, this also has the potential for much greater performance than your original approach. This is because working with one record at a time, using these LINQ projections, opens the door to streaming the data, such that only the one current record is ever held in memory at a time. You can open a stream from the original and a stream for the output, and write to the output just as fast as you can read from the original.
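As a very rough sketch of that streaming, record-at-a-time idea (the file names, the semicolon-separated format, and which field gets modified are all just placeholders):

// Requires: using System.IO;
// Streams the input, transforms one record at a time, and streams the output,
// so only the current line is ever held in memory.
using (var writer = new StreamWriter("output.txt"))
{
    foreach (var line in File.ReadLines("input.txt"))
    {
        var fields = line.Split(';');               // project the raw line into fields
        fields[1] = "new string";                   // adjust whatever needs adjusting
        writer.WriteLine(string.Join(";", fields)); // write the modified record back out
    }
}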
The easiest way to achieve this is by doing the looping outside of the method. This allows you to pass the strings by reference, which will replace the existing reference with the new one (don't forget that strings are immutable).
An example of this:
void Main()
{
    string[] arr = new[] { "lala", "lolo" };
    arr.Dump();   // Dump() is a LINQPad helper that prints the array

    for (var i = 0; i < arr.Length; i++)
    {
        ModifyStrings(ref arr[i]);
    }

    arr.Dump();
}

public void ModifyStrings(ref string item)
{
    item = "blah";
}
BACKGROUND TO THE PROBLEM
Say I had the following two lists (prioSums and contentVals) compiled from a SQL Server CE query like this:
var queryResults = db.Query(searchQueryString, searchTermsArray);
Dictionary<string, double> prioSums = new Dictionary<string, double>();
Dictionary<string, string> contentVals = new Dictionary<string, string>();
double prioTemp = 0.0;

foreach (var row in queryResults)
{
    string location = row.location;
    double priority = row.priority;
    if (!prioSums.ContainsKey(location))
    {
        prioSums[location] = 0.0;
    }
    if (!contentVals.ContainsKey(location))
    {
        contentVals[location] = row.value;
        prioTemp = priority;
    }
    if (prioTemp < priority)
    {
        contentVals[location] = row.value;
    }
    prioSums[location] += priority;
}
The query itself is pretty large, very dynamically compiled, and really beyond the scope of this question, so I'll just say that it returns rows that include a priority, text value, and location.
With the above code I am able to get one list (prioSums) which sums up all of the priorities for each location (not allowing repeats on the location [key] itself, even though repeats for the location are in the query results), and another list (contentVals) to hold the value of the location with the highest priority, once again, using the location as key.
All of this I have accomplished and it works very well. I can iterate over the two lists and display the information I want HOWEVER...
THE PROBLEM
...Now I need to reorder these lists together, with the highest priority (that is, the summed priorities stored as the values in prioSums) first.
I have wracked my brain trying to think about using an instantiated class with three properties, as others have advised, but I can't seem to wrap my head around how that would work, given my WebMatrix C#.net-webpages environment. I know how to call a class from a .cs file from the current .cshtml file, no problem, but I have never done this by instantiating a class to make it an object before (sorry, still new to some of the more complex C# logic/methodology).
Can anyone suggest how to accomplish this, or perhaps show an easier (at least easier to understand) way of doing this? In short all I really need is these two lists ordered together by the value in prioSums from highest to lowest.
NOTE
Please forgive me if I have not provided quite enough information. If more should be provided don't hesitate to ask.
Also, for more information or background on this problem, you can look at my previous question on this here: Is there any way to loop through my sql results and store certain name/value pairs elsewhere in C#?
I don't know if it's the outcome you want, but you can give it a try:
var result = from p in prioSums
             orderby p.Value descending
             select new { location = p.Key, priority = p.Value, value = contentVals[p.Key] };
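If that gives you what you need, you can then just loop over result; something like this (the WriteLine is only a stand-in for however you render a row in your .cshtml page):

foreach (var item in result)
{
    // item.location, item.priority and item.value come from the anonymous type above
    Console.WriteLine(item.location + ": " + item.value + " (total priority " + item.priority + ")");
}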
I have a class that has multiple List<> objects contained within it. It's basically a table stored with each column as a List<>. The columns do not all contain the same type, but each list has the same length (the same number of elements).
For example:
I have 3 List<> objects, named one, two, and three.
//Not syntactically correct
List<DateTime> one = new List...{4/12/2010, 4/9/2006, 4/13/2008};
List<double> two = new List...{24.5, 56.2, 47.4};
List<string> three = new List...{"B", "K", "Z"};
I want to be able to sort list one from oldest to newest:
one = {4/9/2006, 4/13/2008, 4/12/2010};
So to do this I moved element 0 to the end.
I then want to sort lists two and three the same way, moving their first elements to the last position.
So when I sort one list, I want the data in the corresponding index in the other lists to also change in accordance with how the one list is sorted.
I'm guessing I have to overload IComparer somehow, but I feel like there's a shortcut I haven't realized.
I've handled this design in the past by keeping or creating a separate index list. You first sort the index list, and then use it to sort (or just access) the other lists. You can do this by creating a custom IComparer for the index list. What you do inside that IComparer is to compare based on indexes into the key list. In other words, you are sorting the index list indirectly. Something like:
// This is the compare function for the separate *index* list.
int Compare(object x, object y)
{
    return KeyList[(int)x].CompareTo(KeyList[(int)y]);
}
So you are sorting the index list based on the values in the key list. Then you can use that sorted index list to re-order the other lists. If this is unclear, I'll try to add a more complete example when I'm in a position to post one.
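To make the idea concrete, here is a rough sketch using the three lists (one, two, three) from the question; List<int>.Sort with a comparison delegate plays the role of the custom IComparer, and System.Linq is assumed:

// Build an index list 0..n-1 and sort it by the values in the key list (one).
var indexes = Enumerable.Range(0, one.Count).ToList();
indexes.Sort((x, y) => one[x].CompareTo(one[y]));

// Use the sorted indexes to re-order the other lists the same way.
var sortedOne = indexes.Select(i => one[i]).ToList();
var sortedTwo = indexes.Select(i => two[i]).ToList();
var sortedThree = indexes.Select(i => three[i]).ToList();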
Here's a way to do it using LINQ and projections. The first query generates an array with the original indexes reordered by the datetime values; in your example, the newOrdering array would have members:
{ 4/9/2006, 1 }, { 4/13/2008, 2 }, { 4/12/2010, 0 }
The second set of statements generate new lists by picking items using the reordered indexes (in other words, items 1, 2, and 0, in that order).
var newOrdering = one
    .Select((dateTime, index) => new { dateTime, index })
    .OrderBy(item => item.dateTime)
    .ToArray();
// now, order each list
one = newOrdering.Select(item => one[item.index]).ToList();
two = newOrdering.Select(item => two[item.index]).ToList();
three = newOrdering.Select(item => three[item.index]).ToList();
I am sorry to say it, but this feels like a bad design, especially because List<T> does not guarantee element order before you have called one of the sorting operations (so you have a problem when inserting):
From MSDN:
The List<T> is not guaranteed to be sorted. You must sort the List<T> before performing operations (such as BinarySearch) that require the List<T> to be sorted.
In many cases you won't run into trouble based on this, but you might, and if you do, it could be a very hard bug to track down. For example, I think the current framework implementation of List<T> maintains insert order until sort is called, but it could change in the future.
I would seriously consider refactoring to use another data structure. If you still want to implement sorting based on this data structure, I would create a temporary object (maybe using an anonymous type), sort this, and re-create the lists (see this excellent answer for an explanation of how).
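To make that concrete, a rough sketch of the temporary-object approach with the three lists from the question (System.Linq assumed):

// Combine the parallel lists into one sequence of temporary objects,
// sort by the date, then rebuild each list in the new order.
var sorted = one
    .Select((date, i) => new { Date = date, Value = two[i], Label = three[i] })
    .OrderBy(x => x.Date)
    .ToList();

one = sorted.Select(x => x.Date).ToList();
two = sorted.Select(x => x.Value).ToList();
three = sorted.Select(x => x.Label).ToList();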
First you should create a Data object to hold everything.
private class Data
{
    public DateTime DateTime { get; set; }
    public int Int32 { get; set; }
    public string String { get; set; }
}
Then you can sort like this.
var l = new List<Data>();
l.Sort(
    (a, b) =>
    {
        var r = a.DateTime.CompareTo(b.DateTime);
        if (r == 0)
        {
            r = a.Int32.CompareTo(b.Int32);
            if (r == 0)
            {
                r = a.String.CompareTo(b.String);
            }
        }
        return r;
    }
);
I wrote a sort algorithm that does this for Nito.LINQ (not yet released). It uses a simple-minded QuickSort to sort the lists, and keeps any number of related lists in sync. Source code starts here, in the IList<T>.Sort extension method.
Alternatively, if copying the data isn't a huge concern, you could project it into a LINQ query using the Zip operator (requires .NET 4.0 or Rx), order it, and then pull each result out:
List<DateTime> one = ...;
List<double> two = ...;
List<string> three = ...;
var combined = one.Zip(two, (first, second) => new { first, second })
.Zip(three, (pair, third) => new { pair.first, pair.second, third });
var ordered = combined.OrderBy(x => x.first);
var orderedOne = ordered.Select(x => x.first);
var orderedTwo = ordered.Select(x => x.second);
var orderedThree = ordered.Select(x => x.third);
Naturally, the best solution is to not separate related data in the first place.
Using generic arrays, this can get a bit cumbersome.
One alternative is using the Array.Sort() method that takes an array of keys and an array of values to sort. It first sorts the key array into ascending order and makes sure the array of values is reorganized to match this sort order.
If you're willing to incur the cost of converting your List<T>s to arrays (and then back), you could take advantage of this method.
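A rough sketch of that keys/items overload with the lists from the question (note the caveat in the comments about duplicate keys):

// Array.Sort(keys, items) sorts one items array per call, so each call gets its
// own copy of the key array. This assumes the dates are distinct; with duplicate
// keys the unstable sort could pair the rows up differently between calls.
DateTime[] keys1 = one.ToArray();
DateTime[] keys2 = one.ToArray();
double[] values2 = two.ToArray();
string[] values3 = three.ToArray();

Array.Sort(keys1, values2); // reorders values2 to match the sorted dates
Array.Sort(keys2, values3); // reorders values3 the same way

one = keys1.ToList();       // ToList() on an array needs System.Linq
two = values2.ToList();
three = values3.ToList();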
Alternatively, you could use LINQ to combine the values from multiple arrays into a single anonymous type using Zip(), sort the list of anonymous types using the key field, and then split that apart into separate arrays.
If you want to do this in-place, you would have to write a custom comparer and create a separate index array to maintain the new ordering of items.
I hope this can help:
one.Sort(delegate(DateTime d1, DateTime d2)
{
    return d1.CompareTo(d2); // oldest to newest
});
At the moment I am using one list to store one part of my data, and it's working perfectly in this format:
Item
----------------
Joe Bloggs
George Forman
Peter Pan
Now, I would like to add another column to this list, for it to work like so:
NAME                EMAIL
------------------------------------------------------
Joe Bloggs          joe@bloggs.com
George Forman       george@formangrills.co
Peter Pan           me@neverland.com
I've tried using this code to create a list within a list, and this code is used in another method in a foreach loop:
// Where List is instantiated
List<List<string>> list2d = new List<List<string>>
...
// Where DataGrid instance is given the list
dg.DataSource = list2d;
dg.DataBind();
...
// In another method, where all people add their names and emails, then are added
// to the two-dimensional list
foreach (People p in ppl.results) {
    list.Add(results.name);
    list.Add(results.email);
    list2d.Add(list);
}
When I run this, I get this result:
Capacity Count
----------------
16 16
16 16
16 16
... ...
Where am I going wrong here? How can I get the output I desire with the code I am using right now?
Why don't you use a List<People> instead of a List<List<string>> ?
Highly recommend something more like this:
public class Person {
    public string Name { get; set; }
    public string Email { get; set; }
}
var people = new List<Person>();
Easier to read, easy to code.
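For example, a rough sketch of how that could plug into the code from the question (ppl.results, p.name and p.email are taken from the question; the rest is just illustration):

var people = new List<Person>();
foreach (People p in ppl.results) {
    people.Add(new Person { Name = p.name, Email = p.email });
}

dg.DataSource = people; // with auto-generated columns the grid picks up Name and Email from the Person properties
dg.DataBind();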
If for some reason you don't want to define a Person class and use List<Person> as advised, you can use a tuple, such as (C# 7):
var people = new List<(string Name, string Email)>
{
    ("Joe Bloggs", "joe@bloggs.com"),
    ("George Forman", "george@formangrills.co"),
    ("Peter Pan", "me@neverland.com")
};
var georgeEmail = people[1].Email;
The Name and Email member names are optional, you can omit them and access them using Item1 and Item2 respectively.
There are defined tuples for up to 8 members.
For earlier versions of C#, you can still use a List<Tuple<string, string>> (or preferably ValueTuple using this NuGet package), but you won't benefit from customized member names.
Where does the variable results come from?
This block:
foreach (People p in ppl.results) {
    list.Add(results.name);
    list.Add(results.email);
    list2d.Add(list);
}
Should probably read more like:
foreach (People p in ppl.results) {
    var list = new List<string>();
    list.Add(p.name);
    list.Add(p.email);
    list2d.Add(list);
}
It's old, but I thought I'd add my two cents...
Not sure if it will work but try using a KeyValuePair:
List<KeyValuePair<string, string>> LinkList = new List<KeyValuePair<string, string>>();
LinkList.Add(new KeyValuePair<string, string>(p.name, p.email)); // name/email from the question's loop
You'll end up with something like this:
LinkList[0] = <Name, Email>
LinkList[1] = <Name, Email>
LinkList[2] = <Name, Email>
and so on...
You should use List<Person> or a HashSet<Person>.
Please show more of your code.
If that last piece of code declares and initializes the list variable outside the loop, you're basically reusing the same list object, thus adding everything into one list.
Also, show where .Capacity and .Count come into play; how did you get those values?