Throwing errors while passing .txt file to class object array - c#

I have a file, "Additional courses", that contains some format errors and a duplication error. When I import that file into my Course object array, the import should catch those errors. I am stuck on how to check for those errors, and I also have a problem during the import itself.
Can someone look at both of these issues, please?
public void ImportCourses(string fileName, char Delim)
{
FileStream stream = new FileStream(fileName, FileMode.Open, FileAccess.Read);
StreamReader reader = new StreamReader(stream);
int index = 0;
while (!reader.EndOfStream)
{
var line = reader.ReadLine();
var array = line.Split(Delim);
Course C = new Course();
C.CourseCode = array[0];
C.Name = array[1];
C.Description = array[2];
C.NoOfEvaluations = int.Parse(array[3]);
courses[index++] = C;
//Console.WriteLine(C.GetInfo());
}
reader.Close();
stream.Close();
}
These are the exceptions I want to check for:
If the number of fields is incorrect, the message is “Invalid number of fields in record {record number}”
If the course code is already used in the course collection, the message is “Course code in record {record number} is in use”
If the number of evaluations is not a number, the message is “Number of evaluations in record {record number} is not in correct format.”
I am getting an "index out of bounds of the array" exception and I don't know where to start with the exception handling.
This is the .txt file I am trying to import:

You should check array.Length to make sure that you have 4 elements before you try to access them. If the split produces fewer fields because the line was empty or did not contain enough delimiters, then the array will not be 4 elements long, and attempting to access an element by an index which is not there will result in an index-out-of-bounds exception.
Here is a potential solution to your problem, although this smells like a homework problem.
public class Course {
public string CourseCode { get; set; }
public string Name { get; set; }
public string Description { get; set; }
public int NoOfEvaluations { get; set; }
}
List<Course> courses = new List<Course>();
bool CourseAlreadyExists(Course course) {
foreach (Course c in courses) {
if (c.CourseCode == course.CourseCode) {
return true;
}
}
return false;
}
// Define other methods and classes here
public void ImportCourses(string fileName, char Delim) {
using (var stream = new FileStream(fileName, FileMode.Open, FileAccess.Read)) {
using (var reader = new StreamReader(stream)) {
int index = 0;
while (!reader.EndOfStream)
{
var line = reader.ReadLine();
var array = line.Split(Delim);
if (array.Length != 4)
{
throw new ApplicationException(String.Format("Invalid number of fields in record #{0}", index));
}
Course C = new Course();
C.CourseCode = array[0];
C.Name = array[1];
C.Description = array[2];
int evals;
if (!int.TryParse(array[3], out evals))
{
throw new ApplicationException(String.Format("Number of evaluations in record {0} is not in correct format.", index));
}
else
{
C.NoOfEvaluations = evals;
}
if (!CourseAlreadyExists(C))
{
courses.Add(C);
index++;
}
else
{
throw new ApplicationException(String.Format("Course code in record {0} is in use", index));
}
}
}
}
}
}
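A possible way to call this method and surface the messages (a sketch; the file name and delimiter below are just examples):
try
{
    ImportCourses("AdditionalCourses.txt", ',');   // hypothetical file name and delimiter
}
catch (ApplicationException ex)
{
    // Each validation failure thrown by ImportCourses arrives here with its record number.
    Console.WriteLine(ex.Message);
}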


How To Go Back To Previous Line In .csv? [duplicate]

I'm trying to figure out how to either record which line I'm on (for example, line = 32), allowing me to just use line-- in the previous-record button event, or find a better alternative.
I currently have my form set up and working so that when I click the "Next Record" button, the file advances to the next line and displays the cells correctly within their associated textboxes, but how do I create a button that goes to the previous line in the .csv file?
StreamReader csvFile;
public GP_Appointment_Manager()
{
InitializeComponent();
}
private void buttonOpenFile_Click(object sender, EventArgs e)
{
try
{
csvFile = new StreamReader("patients_100.csv");
// Read First line and do nothing
string line;
if (ReadPatientLineFromCSV(out line))
{
// Read second line, first patient line and populate form
ReadPatientLineFromCSV(out line);
PopulateForm(line);
}
}
catch (Exception ex)
{
MessageBox.Show(ex.Message);
}
}
private bool ReadPatientLineFromCSV(out string line)
{
bool result = false;
line = "";
if ((csvFile != null) && (!csvFile.EndOfStream))
{
line = csvFile.ReadLine();
result = true;
}
else
{
MessageBox.Show("File has not been opened. Please open file before reading.");
}
return result;
}
private void PopulateForm(string patientDetails)
{
string[] patient = patientDetails.Split(',');
//Populates ID
textBoxID.Text = patient[0];
//Populates Personal
comboBoxSex.SelectedIndex = (patient[1] == "M") ? 0 : 1;
dateTimePickerDOB.Value = DateTime.Parse(patient[2]);
textBoxFirstName.Text = patient[3];
textBoxLastName.Text = patient[4];
//Populates Address
textboxAddress.Text = patient[5];
textboxCity.Text = patient[6];
textboxCounty.Text = patient[7];
textboxTelephone.Text = patient[8];
//Populates Kin
textboxNextOfKin.Text = patient[9];
textboxKinTelephone.Text = patient[10];
}
Here's the code for the "Next Record" Button
private void buttonNextRecord_Click(object sender, EventArgs e)
{
string patientInfo;
if (ReadPatientLineFromCSV(out patientInfo))
{
PopulateForm(patientInfo);
}
}
Now, this is some sort of exercise. This class uses the standard StreamReader with a couple of modifications to implement simple move-forward/step-back functionality.
It also allows you to associate an array/list of Controls with the data read from a CSV-like file format. Note that this is not a general-purpose CSV reader; it just splits a string into parts, using a separator that can be specified when calling its AssociateControls() method.
The class has 3 constructors:
(1) public LineReader(string filePath)
The source file has no header in the first line and the text encoding should be auto-detected.
(2) public LineReader(string filePath, bool hasHeader)
Same, but the first line of the file contains the header if hasHeader = true.
(3) public LineReader(string filePath, bool hasHeader, Encoding encoding)
Used to specify an Encoding, if the automatic detection cannot identify it correctly.
The positions of the lines of text are stored in a Dictionary<long, long>, where the Key is the line number and Value is the starting position of the line.
This has some advantages: no strings are stored anywhere, the file is indexed while reading it but you could use a background task to complete the indexing (this feature is not implemented here, maybe later...).
The disadvantage is that the Dictionary takes space in memory. If the file is very large (only the number of lines matters, though), it may become a problem. This remains to be tested.
A note about the Encoding:
The text encoding auto-detection is reliable enough only if the Encoding is not set to the default one (UTF-8). The code here, if you don't specify an Encoding, sets it to Encoding.ASCII. When the first line is read, the automatic feature tries to determine the actual encoding. It usually gets it right.
In the default StreamReader implementation, if we specify Encoding.UTF8 (or none, which is the same) and the text encoding is ASCII, the encoder will use the default (Encoding.UTF8) encoding, since UTF-8 maps to ASCII gracefully.
However, when this is the case, [Encoding].GetPreamble() will return the UTF-8 BOM (3 bytes), compromising the calculation of the current position in the underlying stream.
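As a small illustration of that point (a sketch, not part of the class):
// Encoding.UTF8.GetPreamble() reports the 3-byte BOM even when the file contains none,
// while Encoding.ASCII.GetPreamble() reports nothing, so starting from ASCII and letting
// StreamReader auto-detect keeps the initial stream position calculation correct.
int utf8PreambleLength  = Encoding.UTF8.GetPreamble().Length;  // 3
int asciiPreambleLength = Encoding.ASCII.GetPreamble().Length; // 0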
To associate controls with the data read, you just need to pass a collection of controls to the LineReader.AssociateControls() method.
This will map each control to the data field in the same position.
To skip a data field, specify null instead of a control reference.
The visual example is built using a CSV file with this structure:
(Note: this data is generated using an automated on-line tool)
seq;firstname;lastname;age;street;city;state;zip;deposit;color;date
---------------------------------------------------------------------------
1;Harriett;Gibbs;62;Segmi Center;Ebanavi;ID;57854;$4444.78;WHITE;05/15/1914
2;Oscar;McDaniel;49;Kulak Drive;Jetagoz;IL;57631;$5813.94;RED;02/11/1918
3;Winifred;Olson;29;Wahab Mill;Ucocivo;NC;46073;$2002.70;RED;08/11/2008
I skipped the seq and color fields, passing this array of Controls:
LineReader lineReader = null;
private void btnOpenFile_Click(object sender, EventArgs e)
{
string filePath = Path.Combine(Application.StartupPath, @"sample.csv");
lineReader = new LineReader(filePath, true);
string header = lineReader.HeaderLine;
Control[] controls = new[] {
null, textBox1, textBox2, textBox3, textBox4, textBox5,
textBox6, textBox9, textBox7, null, textBox8 };
lineReader.AssociateControls(controls, ";");
}
The null entries correspond to the data fields that are not considered.
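As a usage sketch (the Next/Previous button handlers below are hypothetical, not part of the original form), navigating records with the class could look like this:
// Hypothetical button handlers, assuming lineReader was initialized in btnOpenFile_Click as shown above.
private void btnNext_Click(object sender, EventArgs e)
{
    if (lineReader != null) lineReader.MoveNext();     // reads the next line and refreshes the associated controls
}
private void btnPrevious_Click(object sender, EventArgs e)
{
    if (lineReader != null) lineReader.MovePrevious(); // steps back one line and refreshes the associated controls
}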
Visual sample of the functionality:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Windows.Forms;
class LineReader : IDisposable
{
private StreamReader reader = null;
private Dictionary<long, long> positions;
private string m_filePath = string.Empty;
private Encoding m_encoding = null;
private IEnumerable<Control> m_controls = null;
private string m_separator = string.Empty;
private bool m_associate = false;
private long m_currentPosition = 0;
private bool m_hasHeader = false;
public LineReader(string filePath) : this(filePath, false) { }
public LineReader(string filePath, bool hasHeader) : this(filePath, hasHeader, Encoding.ASCII) { }
public LineReader(string filePath, bool hasHeader, Encoding encoding)
{
if (!File.Exists(filePath)) {
throw new FileNotFoundException($"The file specified: {filePath} was not found");
}
this.m_filePath = filePath;
m_hasHeader = hasHeader;
CurrentLineNumber = 0;
reader = new StreamReader(this.m_filePath, encoding, true);
CurrentLine = reader.ReadLine();
m_encoding = reader.CurrentEncoding;
m_currentPosition = m_encoding.GetPreamble().Length;
positions = new Dictionary<long, long>() { [0]= m_currentPosition };
if (hasHeader) { this.HeaderLine = CurrentLine = this.MoveNext(); }
}
public string HeaderLine { get; private set; }
public string CurrentLine { get; private set; }
public long CurrentLineNumber { get; private set; }
public string MoveNext()
{
string read = reader.ReadLine();
if (string.IsNullOrEmpty(read)) return this.CurrentLine;
CurrentLineNumber += 1;
if ((positions.Count - 1) < CurrentLineNumber) {
AdjustPositionToLineFeed();
positions.Add(CurrentLineNumber, m_currentPosition);
}
else {
m_currentPosition = positions[CurrentLineNumber];
}
this.CurrentLine = read;
if (m_associate) this.Associate();
return read;
}
public string MovePrevious()
{
if (CurrentLineNumber == 0 || (CurrentLineNumber == 1 && m_hasHeader)) return this.CurrentLine;
CurrentLineNumber -= 1;
m_currentPosition = positions[CurrentLineNumber];
reader.BaseStream.Position = m_currentPosition;
reader.DiscardBufferedData();
this.CurrentLine = reader.ReadLine();
if (m_associate) this.Associate();
return this.CurrentLine;
}
private void AdjustPositionToLineFeed()
{
long linePos = m_currentPosition + m_encoding.GetByteCount(this.CurrentLine);
long prevPos = reader.BaseStream.Position;
reader.BaseStream.Position = linePos;
byte[] buffer = new byte[4];
reader.BaseStream.Read(buffer, 0, buffer.Length);
char[] chars = m_encoding.GetChars(buffer).Where(c => c.Equals((char)10) || c.Equals((char)13)).ToArray();
m_currentPosition = linePos + m_encoding.GetByteCount(chars);
reader.BaseStream.Position = prevPos;
}
public void AssociateControls(IEnumerable<Control> controls, string separator)
{
m_controls = controls;
m_separator = separator;
m_associate = true;
if (!string.IsNullOrEmpty(this.CurrentLine)) Associate();
}
private void Associate()
{
string[] values = this.CurrentLine.Split(new[] { m_separator }, StringSplitOptions.None);
int associate = 0;
m_controls.ToList().ForEach(c => {
if (c != null) c.Text = values[associate];
associate += 1;
});
}
public override string ToString() =>
$"File Path: {m_filePath} Encoding: {m_encoding.BodyName} CodePage: {m_encoding.CodePage}";
public void Dispose()
{
this.Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
if (disposing) { reader?.Dispose(); }
}
}
General approach is the following:
Add a text file input.txt like this
line 1
line 2
line 3
and set Copy to Output Directory property to Copy if newer
Create extension methods for StreamReader
public static class StreamReaderExtensions
{
public static bool TryReadNextLine(this StreamReader reader, out string line)
{
var isAvailable = reader != null &&
!reader.EndOfStream;
line = isAvailable ? reader.ReadLine() : null;
return isAvailable;
}
public static bool TryReadPrevLine(this StreamReader reader, out string line)
{
var stream = reader.BaseStream;
var encoding = reader.CurrentEncoding;
var bom = GetBOM(encoding);
var isAvailable = reader != null &&
stream.Position > 0;
if(!isAvailable)
{
line = null;
return false;
}
var buffer = new List<byte>();
var str = string.Empty;
stream.Position++;
while (!str.StartsWith(Environment.NewLine))
{
stream.Position -= 2;
buffer.Insert(0, (byte)stream.ReadByte());
var reachedBOM = buffer.Take(bom.Length).SequenceEqual(bom);
if (reachedBOM)
buffer = buffer.Skip(bom.Length).ToList();
str = encoding.GetString(buffer.ToArray());
if (reachedBOM)
break;
}
stream.Position--;
line = str.Trim(Environment.NewLine.ToArray());
return true;
}
private static byte[] GetBOM(Encoding encoding)
{
if (encoding.Equals(Encoding.UTF7))
return new byte[] { 0x2b, 0x2f, 0x76 };
if (encoding.Equals(Encoding.UTF8))
return new byte[] { 0xef, 0xbb, 0xbf };
if (encoding.Equals(Encoding.Unicode))
return new byte[] { 0xff, 0xfe };
if (encoding.Equals(Encoding.BigEndianUnicode))
return new byte[] { 0xfe, 0xff };
if (encoding.Equals(Encoding.UTF32))
return new byte[] { 0xff, 0xfe, 0, 0 };
return new byte[0];
}
}
And use it like this:
using (var reader = new StreamReader("input.txt"))
{
string na = "N/A";
string line;
for (var i = 0; i < 4; i++)
{
var isAvailable = reader.TryReadNextLine(out line);
Console.WriteLine($"Next line available: {isAvailable}. Line: {(isAvailable ? line : na)}");
}
for (var i = 0; i < 4; i++)
{
var isAvailable = reader.TryReadPrevLine(out line);
Console.WriteLine($"Prev line available: {isAvailable}. Line: {(isAvailable ? line : na)}");
}
}
The result is:
Next line available: True. Line: line 1
Next line available: True. Line: line 2
Next line available: True. Line: line 3
Next line available: False. Line: N/A
Prev line available: True. Line: line 3
Prev line available: True. Line: line 2
Prev line available: True. Line: line 1
Prev line available: False. Line: N/A
GetBOM is based on this.
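As a side note, the framework can produce these byte sequences itself, so a simpler sketch of GetBOM (an alternative, not what the original answer used; note that Encoding.UTF7 reports no preamble, unlike the table above) could be:
private static byte[] GetBOM(Encoding encoding)
{
    // GetPreamble() returns the BOM the encoding would emit, or an empty array if it has none.
    return encoding.GetPreamble();
}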

Get the entire CSV line on error parsing or reading

Get the entire Csv Line on parsing error
With CsvHelper we use:
MissingFieldFound:
Gets or sets the function that is called when a missing field is found. The default
function will throw a CsvHelper.MissingFieldException. You can supply your own
function to do other things like logging the issue instead of throwing an exception.
Arguments: headerNames, index, context
BadDataFound:
Gets or sets the function that is called when bad field data is found. A field
has bad data if it contains a quote and the field is not quoted (escaped). You
can supply your own function to do other things like logging the issue instead
of throwing an exception. Arguments: context
In the following MCVE, only MissingFieldFound captures the complete line; BadDataFound does not.
static void Main()
{
using (var stream = new MemoryStream())
using (var writer = new StreamWriter(stream))
using (var reader = new StreamReader(stream))
using (var csv = new CsvReader(reader))
{
writer.WriteLine("FirstName,LastName");
writer.WriteLine("\"Jon\"hn\"\",\"Doe\"");
writer.WriteLine("\"JaneDoe\"");
writer.WriteLine("\"Jane\",\"Doe\"");
writer.Flush();
stream.Position = 0;
var good = new List<Test>();
var bad = new List<string>();
var isRecordBad = false;
csv.Configuration.BadDataFound = context =>
{
isRecordBad = true;
bad.Add(context.RawRecord);
};
csv.Configuration.MissingFieldFound = (headerNames, index, context) =>
{
isRecordBad = true;
bad.Add(context.RawRecord);
};
while (csv.Read())
{
var record = csv.GetRecord<Test>();
if (!isRecordBad)
{
good.Add(record);
}
isRecordBad = false;
}
good.Dump();
bad.Dump();
}
}
public class Test
{
public string FirstName { get; set; }
public string LastName { get; set; }
}
I would like the result to be :
"Jon"hn"","Doe"
"JaneDoe"
Instead of :
"Jon"hn"",
"JaneDoe"
For a long CSV with a lot of columns, the rest of the line often contains valuable information.
You can get the line with this:
csv.Parser.Context.RawRecord;
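For example, a sketch of wiring that into the read loop from the question (assuming the callbacks only set isRecordBad; the exact property path may vary between CsvHelper versions):
while (csv.Read())
{
    // Capture the complete raw line for the record that was just read.
    string rawLine = csv.Parser.Context.RawRecord;
    var record = csv.GetRecord<Test>();
    if (!isRecordBad)
    {
        good.Add(record);
    }
    else
    {
        bad.Add(rawLine);
    }
    isRecordBad = false;
}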

error in XML document. Unexpected XML declaration. XML declaration must be the first node in the document

There is an error in XML document (8, 20). Inner 1: Unexpected XML declaration. The XML declaration must be the first node in the document, and no white space characters are allowed to appear before it.
OK, I understand this error.
How I get it, however, is what perplexes me.
I create the document with Microsoft's Serialize tool. Then, I turn around and attempt to read it back, again, using Microsoft's Deserialize tool.
As far as I can see, I am not in control of writing the XML file in the correct format.
Here is the single routine I use to read and write.
private string xmlPath = System.Web.Hosting.HostingEnvironment.MapPath(WebConfigurationManager.AppSettings["DATA_XML"]);
private object objLock = new Object();
public string ErrorMessage { get; set; }
public StoredMsgs Operation(string from, string message, FileAccess access) {
StoredMsgs list = null;
lock (objLock) {
ErrorMessage = null;
try {
if (!File.Exists(xmlPath)) {
var root = new XmlRootAttribute(rootName);
var serializer = new XmlSerializer(typeof(StoredMsgs), root);
if (String.IsNullOrEmpty(message)) {
from = "Code Window";
message = "Created File";
}
var item = new StoredMsg() {
From = from,
Date = DateTime.Now.ToString("s"),
Message = message
};
using (var stream = File.Create(xmlPath)) {
list = new StoredMsgs();
list.Add(item);
serializer.Serialize(stream, list);
}
} else {
var root = new XmlRootAttribute("MessageHistory");
var serializer = new XmlSerializer(typeof(StoredMsgs), root);
var item = new StoredMsg() {
From = from,
Date = DateTime.Now.ToString("s"),
Message = message
};
using (var stream = File.Open(xmlPath, FileMode.Open, FileAccess.ReadWrite)) {
list = (StoredMsgs)serializer.Deserialize(stream);
if ((access == FileAccess.ReadWrite) || (access == FileAccess.Write)) {
list.Add(item);
serializer.Serialize(stream, list);
}
}
}
} catch (Exception error) {
var sb = new StringBuilder();
int index = 0;
sb.AppendLine(String.Format("Top Level Error: <b>{0}</b>", error.Message));
var err = error.InnerException;
while (err != null) {
index++;
sb.AppendLine(String.Format("\tInner {0}: {1}", index, err.Message));
err = err.InnerException;
}
ErrorMessage = sb.ToString();
}
}
return list;
}
Is something wrong with my routine? If Microsoft writes the file, it seems to me that it should be able to read it back.
It should be generic enough for anyone to use.
Here is my StoredMsg class:
[Serializable()]
[XmlType("StoredMessage")]
public class StoredMessage {
public StoredMessage() {
}
[XmlElement("From")]
public string From { get; set; }
[XmlElement("Date")]
public string Date { get; set; }
[XmlElement("Message")]
public string Message { get; set; }
}
[Serializable()]
[XmlRoot("MessageHistory")]
public class MessageHistory : List<StoredMessage> {
}
The file it generates doesn't look to me like it has any issues.
I saw the solution here:
Error: The XML declaration must be the first node in the document
But, in that case, it seems someone already had an XML document they wanted to read. They just had to fix it.
I have an XML document created by Microsoft, so it should be readable by Microsoft.
The problem is that you are adding to the file. You deserialize, then re-serialize to the same stream without rewinding and resizing to zero. This gives you multiple root elements:
<?xml version="1.0"?>
<StoredMessage>
</StoredMessage>
<?xml version="1.0"?>
<StoredMessage>
</StoredMessage>
Multiple root elements, and multiple XML declarations, are invalid according to the XML standard, thus the .NET XML parser throws an exception in this situation by default.
For possible solutions, see XML Error: There are multiple root elements, which suggests you either:
Enclose your list of StoredMessage elements in some synthetic outer element, e.g. StoredMessageList.
This would require you to load the list of messages from the file, add the new message, and then truncate the file and re-serialize the entire list when adding a single item. Thus the performance may be worse than in your current approach, but the XML will be valid (a minimal sketch is shown after these option descriptions).
When deserializing a file containing concatenated root elements, create an XML reader using XmlReaderSettings.ConformanceLevel = ConformanceLevel.Fragment and iteratively walk through the concatenated root node(s), deserializing each one individually as shown, e.g., here. Using ConformanceLevel.Fragment allows the reader to parse streams with multiple root elements (although multiple XML declarations will still cause an error to be thrown).
Later, when adding a new element to the end of the file using XmlSerializer, seek to the end of the file and serialize using an XML writer returned from XmlWriter.Create(TextWriter, XmlWriterSettings)
with XmlWriterSettings.OmitXmlDeclaration = true. This prevents output of multiple XML declarations as explained here.
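For option #1, a minimal sketch (not from the original answer) that wraps everything in the MessageHistory root and rewrites the whole file on every addition might look like this:
// Sketch for option #1, assuming the MessageHistory / StoredMessage types shown in the question.
var historySerializer = new XmlSerializer(typeof(MessageHistory));
MessageHistory history;
using (var file = File.Open(xmlPath, FileMode.OpenOrCreate, FileAccess.ReadWrite))
{
    history = file.Length > 0
        ? (MessageHistory)historySerializer.Deserialize(file)
        : new MessageHistory();
    history.Add(new StoredMessage { From = from, Date = DateTime.Now.ToString("s"), Message = message });
    file.SetLength(0); // truncate, then re-serialize the entire list under a single root
    historySerializer.Serialize(file, history);
}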
For option #2, your Operation would look something like the following:
private string xmlPath = System.Web.Hosting.HostingEnvironment.MapPath(WebConfigurationManager.AppSettings["DATA_XML"]);
private object objLock = new Object();
public string ErrorMessage { get; set; }
const string rootName = "MessageHistory";
static readonly XmlSerializer serializer = new XmlSerializer(typeof(StoredMessage), new XmlRootAttribute(rootName));
public MessageHistory Operation(string from, string message, FileAccess access)
{
var list = new MessageHistory();
lock (objLock)
{
ErrorMessage = null;
try
{
using (var file = File.Open(xmlPath, FileMode.OpenOrCreate))
{
list.AddRange(XmlSerializerHelper.ReadObjects<StoredMessage>(file, false, serializer));
if (list.Count == 0 && String.IsNullOrEmpty(message))
{
from = "Code Window";
message = "Created File";
}
var item = new StoredMessage()
{
From = from,
Date = DateTime.Now.ToString("s"),
Message = message
};
if ((access == FileAccess.ReadWrite) || (access == FileAccess.Write))
{
file.Seek(0, SeekOrigin.End);
var writerSettings = new XmlWriterSettings
{
OmitXmlDeclaration = true,
Indent = true, // Optional; remove if compact XML is desired.
};
using (var textWriter = new StreamWriter(file))
{
if (list.Count > 0)
textWriter.WriteLine();
using (var xmlWriter = XmlWriter.Create(textWriter, writerSettings))
{
serializer.Serialize(xmlWriter, item);
}
}
}
list.Add(item);
}
}
catch (Exception error)
{
var sb = new StringBuilder();
int index = 0;
sb.AppendLine(String.Format("Top Level Error: <b>{0}</b>", error.Message));
var err = error.InnerException;
while (err != null)
{
index++;
sb.AppendLine(String.Format("\tInner {0}: {1}", index, err.Message));
err = err.InnerException;
}
ErrorMessage = sb.ToString();
}
}
return list;
}
Using the following helper method adapted from Read nodes of a xml file in C#:
public partial class XmlSerializerHelper
{
public static List<T> ReadObjects<T>(Stream stream, bool closeInput = true, XmlSerializer serializer = null)
{
var list = new List<T>();
serializer = serializer ?? new XmlSerializer(typeof(T));
var settings = new XmlReaderSettings
{
ConformanceLevel = ConformanceLevel.Fragment,
CloseInput = closeInput,
};
using (var xmlTextReader = XmlReader.Create(stream, settings))
{
while (xmlTextReader.Read())
{ // Skip whitespace
if (xmlTextReader.NodeType == XmlNodeType.Element)
{
using (var subReader = xmlTextReader.ReadSubtree())
{
var logEvent = (T)serializer.Deserialize(subReader);
list.Add(logEvent);
}
}
}
}
return list;
}
}
Note that if you are going to create an XmlSerializer using a custom XmlRootAttribute, you must cache the serializer to avoid a memory leak.
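A minimal caching sketch (the helper below is illustrative, not part of the original answer):
// Cache serializers created with a custom XmlRootAttribute: constructing them repeatedly
// generates a new dynamic assembly each time, which is never unloaded.
static class XmlSerializerCache
{
    static readonly System.Collections.Concurrent.ConcurrentDictionary<Tuple<Type, string>, XmlSerializer> cache =
        new System.Collections.Concurrent.ConcurrentDictionary<Tuple<Type, string>, XmlSerializer>();

    public static XmlSerializer Get(Type type, string rootName) =>
        cache.GetOrAdd(Tuple.Create(type, rootName),
            key => new XmlSerializer(key.Item1, new XmlRootAttribute(key.Item2)));
}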
Sample fiddle.

trying to create a list of deserialized data windows phone

I am currently trying to create a list of custom objects using data I get back from isolated storage and deserializing it.
It worked perfectly yesterday and just keeps giving me this exception today, and I am not sure what to do.
{System.ArgumentOutOfRangeException: Length cannot be less than zero.
Parameter name: length
at System.String.InternalSubStringWithChecks(Int32 startIndex, Int32 length, Boolean fAlwaysCopy)
at System.String.Substring(Int32 startIndex, Int32 length)
at LandbouWP.ViewModel.StoryVM.GetStories(List`1 news_items)}
the code for getting the data and deserializing it:
var loaded_result = settings["mainlist"].ToString();
var s = JsonConvert.DeserializeObject<List<Object>>(loaded_result);
The deserializing works perfectly, so I don't think the issue is there; however, does it maybe add another property or something to the list?
Then I create a custom list of the returned items:
App.StoryViewModel.GetStories(s);
and that code is:
public void GetStories(List<Object> news_items)
{
List<Story> a = new List<Story>();
List<Story> b = new List<Story>();
//loop over all items and add them for a viewmodel
int i = 0;
foreach (var item in news_items)
{
if (item.IsDeleted == true)
{
//do not add the item
}
else
{
try
{
a.Add(new Story
{
ID = news_items[i].ID,
IsDeleted = news_items[i].IsDeleted,
IsActive = news_items[i].IsActive,
Title = news_items[i].Title,
Author = news_items[i].Author,
Synopsis = news_items[i].Synopsis,
Body = news_items[i].Body,
ImageUrl = news_items[i].ImageUrl,
//CreationDate = DateTime.Parse(news_items[i].CreationDate),
CreationDate = news_items[i].CreationDate.Substring(0, news_items[i].CreationDate.IndexOf('T')),
LastUpdateDate = news_items[i].LastUpdateDate.Substring(0, news_items[i].LastUpdateDate.IndexOf('T')),
DisplayUntilDate = news_items[i].DisplayUntilDate.Substring(0, news_items[i].DisplayUntilDate.IndexOf('T')),
TotalViews = news_items[i].TotalViews,
Gallery = news_items[i].Gallery
});
i++;
}
catch (Exception ex)
{
string msg = ex.ToString();
string msg2 = msg;
}
}
}
//try here to remove duplicates?
foreach (var item in a)
{
if (!b.Contains(item))
{
b.Add(item);
}
else
{
b.Remove(item);
}
}
var new_list = b.OrderByDescending(x => x.CreationDate).ToList();
//save all the stories
story = new_list;
I cannot even step through each item individually that I am trying to set; it just throws "length cannot be less than zero", and I am not sure what it's talking about. I do not have a property in my class named Length.
Check this place carefully:
CreationDate = news_items[i].CreationDate.Substring(0, news_items[i].CreationDate.IndexOf('T')),
LastUpdateDate = news_items[i].LastUpdateDate.Substring(0, news_items[i].LastUpdateDate.IndexOf('T')),
DisplayUntilDate = news_items[i].DisplayUntilDate.Substring(0, news_items[i].DisplayUntilDate.IndexOf('T')),
I suppose one of your dates is just in the wrong format and has no "T".
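A defensive sketch (an assumed fix, not from the original answer) that avoids the negative length when 'T' is missing:
// Hypothetical helper: return the part before 'T', or the whole string if there is no 'T'.
private static string DatePart(string value)
{
    if (string.IsNullOrEmpty(value)) return value;
    int t = value.IndexOf('T');
    return t < 0 ? value : value.Substring(0, t);
}
// Usage inside the loop, e.g.:
// CreationDate = DatePart(news_items[i].CreationDate),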

Reading a line from a streamreader without consuming?

Is there a way to read ahead one line to test if the next line contains specific tag data?
I'm dealing with a format that has a start tag but no end tag.
I would like to read a line, add it to a structure, then test the line below to make sure it is not a new "node"; if it isn't, keep adding, and if it is, close off that struct and make a new one.
The only solution I can think of is to have two stream readers going at the same time, kind of shuffling their way along in lockstep, but that seems wasteful (if it would even work).
I need something like Peek, but a PeekLine.
The problem is the underlying stream may not even be seekable. If you take a look at the stream reader implementation it uses a buffer so it can implement TextReader.Peek() even if the stream is not seekable.
You could write a simple adapter that reads the next line and buffers it internally, something like this:
public class PeekableStreamReaderAdapter
{
private StreamReader Underlying;
private Queue<string> BufferedLines;
public PeekableStreamReaderAdapter(StreamReader underlying)
{
Underlying = underlying;
BufferedLines = new Queue<string>();
}
public string PeekLine()
{
// Return the already-buffered line if one exists, so repeated peeks do not consume input.
if (BufferedLines.Count > 0)
return BufferedLines.Peek();
string line = Underlying.ReadLine();
if (line == null)
return null;
BufferedLines.Enqueue(line);
return line;
}
public string ReadLine()
{
if (BufferedLines.Count > 0)
return BufferedLines.Dequeue();
return Underlying.ReadLine();
}
}
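A usage sketch for the scenario in the question (the tag check below is a placeholder):
// Hypothetical usage: peek before deciding whether to close the current structure.
var peekable = new PeekableStreamReaderAdapter(new StreamReader("data.txt"));
string current;
while ((current = peekable.ReadLine()) != null)
{
    // ... add 'current' to the structure being built ...
    string next = peekable.PeekLine(); // buffered, not consumed
    if (next != null && next.StartsWith("<tag start>"))
    {
        // close off the current structure and start a new one on the next iteration
    }
}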
You could store the position by accessing StreamReader.BaseStream.Position, then read the next line, do your test, then seek back to the position from before you read the line:
// Peek at the next line
long peekPos = reader.BaseStream.Position;
string line = reader.ReadLine();
if (line.StartsWith("<tag start>"))
{
// This is a new tag, so we reset the position
reader.BaseStream.Seek(peekPos, SeekOrigin.Begin);
reader.DiscardBufferedData();
}
else
{
// This is part of the same node.
}
This is a lot of seeking and re-reading the same lines. Using some logic, you may be able to avoid this altogether - for instance, when you see a new tag start, close out the existing structure and start a new one - here's a basic algorithm:
SomeStructure myStructure = null;
while (!reader.EndOfStream)
{
string currentLine = reader.ReadLine();
if (currentLine.StartsWith("<tag start>"))
{
// Close out existing structure.
if (myStructure != null)
{
// Close out the existing structure.
}
// Create a new structure and add this line.
myStructure = new Structure();
// Append to myStructure.
}
else
{
// Add to the existing structure.
if (myStructure != null)
{
// Append to existing myStructure
}
else
{
// This means the first line was not part of a structure.
// Either handle this case, or throw an exception.
}
}
}
Why the difficulty? Return the next line, regardless. Check if it is a new node, if not, add it to the struct. If it is, create a new struct.
// Not exactly C# but close enough; Struct, IsNode and AddLine are placeholders
List<Struct> structs = new List<Struct>();
Struct current = null;
string line;
while ((line = reader.ReadLine()) != null) {
if (IsNode(line)) {
if (current != null) structs.Add(current);
current = new Struct();
continue;
}
// Whatever processing you need to do
current.AddLine(line);
}
if (current != null) structs.Add(current); // Add the last one to the collection
// Use your structures here
foreach (var s in structs) {
}
Here is what I have got so far. I went more of the split route than the StreamReader line-by-line route.
I'm sure there are a few places that are dying to be more elegant, but for right now it seems to be working.
Please let me know what you think.
struct INDI
{
public string ID;
public string Name;
public string Sex;
public string BirthDay;
public bool Dead;
}
struct FAM
{
public string FamID;
public string type;
public string IndiID;
}
List<INDI> Individuals = new List<INDI>();
List<FAM> Family = new List<FAM>();
private void button1_Click(object sender, EventArgs e)
{
string path = @"C:\mostrecent.ged";
ParseGedcom(path);
}
private void ParseGedcom(string path)
{
//Open path to GED file
StreamReader SR = new StreamReader(path);
//Read the entire block and then split on "0 #" for individuals and families (no other info is needed for this instance)
string[] Holder = SR.ReadToEnd().Replace("0 #", "\u0646").Split('\u0646');
//For each new cell in the holder array look for Individuals and familys
foreach (string Node in Holder)
{
//Sub Split the string on the returns to get a true block of info
string[] SubNode = Node.Replace("\r\n", "\r").Split('\r');
//If a individual is found
if (SubNode[0].Contains("INDI"))
{
//Create new Structure
INDI I = new INDI();
//Add the ID number and remove extra formatting
I.ID = SubNode[0].Replace("#", "").Replace(" INDI", "").Trim();
//Find the name and remove extra formatting for the last name
I.Name = SubNode[FindIndexinArray(SubNode, "NAME")].Replace("1 NAME", "").Replace("/", "").Trim();
//Find sex and remove extra formatting
I.Sex = SubNode[FindIndexinArray(SubNode, "SEX")].Replace("1 SEX ", "").Trim();
//Determine if there is a birthday; -1 means no
if (FindIndexinArray(SubNode, "1 BIRT ") != -1)
{
// add birthday to Struct
I.BirthDay = SubNode[FindIndexinArray(SubNode, "1 BIRT ") + 1].Replace("2 DATE ", "").Trim();
}
//Determine if there is a death tag; will return -1 if not found
if (FindIndexinArray(SubNode, "1 DEAT ") != -1)
{
//Convert Y or N to true or false (defaults to false, so no need to change unless Y is found)
if (SubNode[FindIndexinArray(SubNode, "1 DEAT ")].Replace("1 DEAT ", "").Trim() == "Y")
{
//set death
I.Dead = true;
}
}
//add the Struct to the list for later use
Individuals.Add(I);
}
// Start Family section
else if (SubNode[0].Contains("FAM"))
{
//grab Fam id from node early on to keep from doing it over and over
string FamID = SubNode[0].Replace("# FAM", "");
// Multiple children can exist for each family, so this section had to be a bit more dynamic
// Look at each line of node
foreach (string Line in SubNode)
{
// If node is HUSB
if (Line.Contains("1 HUSB "))
{
FAM F = new FAM();
F.FamID = FamID;
F.type = "PAR";
F.IndiID = Line.Replace("1 HUSB ", "").Replace("#","").Trim();
Family.Add(F);
}
//If node for Wife
else if (Line.Contains("1 WIFE "))
{
FAM F = new FAM();
F.FamID = FamID;
F.type = "PAR";
F.IndiID = Line.Replace("1 WIFE ", "").Replace("#", "").Trim();
Family.Add(F);
}
//if node for multi children
else if (Line.Contains("1 CHIL "))
{
FAM F = new FAM();
F.FamID = FamID;
F.type = "CHIL";
F.IndiID = Line.Replace("1 CHIL ", "").Replace("#", "");
Family.Add(F);
}
}
}
}
}
private int FindIndexinArray(string[] Arr, string search)
{
int Val = -1;
for (int i = 0; i < Arr.Length; i++)
{
if (Arr[i].Contains(search))
{
Val = i;
}
}
return Val;
}
