After searching and trying different approaches, I found that I either wasn't happy with the way I was writing the code or it didn't work right for me. I'm new to programming, so my understanding is limited; please keep that in mind with your answer.
I want to read a .csv file line by line, skipping lines that are blank, and put the contents of each line into a list of objects. I have everything working except for skipping the blank lines. Any feedback about improving any part of my code is also welcome; I like constructive criticism.
public void CardaxCsvFileReader()
{
    string cardaxCsvPath = @"C:\Cardax2WkbTest\Cardax\CardaxTable.csv";
    try
    {
        using (System.IO.StreamReader cardaxSR =
            new System.IO.StreamReader(System.IO.File.OpenRead(cardaxCsvPath)))
        {
            string line = "";
            string[] value = line.Split(',');
            while (!cardaxSR.EndOfStream)
            {   // this commented-out part is what I would like to work, but it doesn't:
                line = cardaxSR.ReadLine();//.Skip(1).Where(item => !String.IsNullOrWhiteSpace(item));
                value = line.Split(',');
                if (line != ",,,,,") // using this as a temporary way to skip the line because the commented-out part above doesn't work
                {
                    CardaxDataObject cardaxCsvTest2 = new CardaxDataObject();
                    cardaxCsvTest2.EventID = Convert.ToInt32(value[0]);
                    cardaxCsvTest2.FTItemID = Convert.ToInt32(value[1]);
                    cardaxCsvTest2.PayrollNumber = Convert.ToInt32(value[2]);
                    cardaxCsvTest2.EventDateTime = Convert.ToDateTime(value[3]);
                    cardaxCsvTest2.CardholderFirstName = value[4];
                    cardaxCsvTest2.CardholderLastName = value[5];
                    Globals.CardaxQueryResult.Add(cardaxCsvTest2);
                }
            }
        }
    }
    catch (Exception)
    {
        myLog.Error("Unable to open/read Cardax simulated punch csv file! " +
            "File already open or does not exist: \"{0}\"", cardaxCsvPath);
    }
}
EDIT:
If your lines are not truly blank but contain only commas, you can split with the StringSplitOptions.RemoveEmptyEntries option and then check the column count.
while (!cardaxSR.EndOfStream)
{
    line = cardaxSR.ReadLine();
    value = line.Split(new char[] { ',' }, StringSplitOptions.RemoveEmptyEntries); // <-- removes empty columns while splitting. Side-effect: a record with even a single blank column will also be discarded by the check that follows.
    if (value.Length < 6)
        continue;
    CardaxDataObject cardaxCsvTest2 = new CardaxDataObject();
    cardaxCsvTest2.EventID = Convert.ToInt32(value[0]);
    cardaxCsvTest2.FTItemID = Convert.ToInt32(value[1]);
    cardaxCsvTest2.PayrollNumber = Convert.ToInt32(value[2]);
    cardaxCsvTest2.EventDateTime = Convert.ToDateTime(value[3]);
    cardaxCsvTest2.CardholderFirstName = value[4];
    cardaxCsvTest2.CardholderLastName = value[5];
    Globals.CardaxQueryResult.Add(cardaxCsvTest2);
}
Another piece of improvement feedback: when you catch an exception, it's good practice to log the exception itself in addition to your custom error line. A custom error line might be fine for, say, website users, but as a developer running some service you will appreciate the actual exception stack trace. It will make bugs easier to debug.
catch (Exception ex)
{
    myLog.Error("Unable to open/read Cardax simulated punch csv file! " +
        "File already open or does not exist: \"{0}\".\r\n Exception: {1}", cardaxCsvPath, ex.ToString());
}
Just check whether value.Length == 6; this way it will skip lines that don't contain enough data for your columns.
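As a rough sketch (reusing the line and value variables from the code above, so the names are assumed rather than prescribed), that check could look like this:

value = line.Split(',');
if (value.Length == 6) // only process lines that actually have all six columns
{
    // parse the six fields exactly as in the original code
}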
Use a dedicated CSV parser, such as the EasyCSV class available here*:
https://github.com/jcoehoorn/EasyCSV
public void CardaxCsvFileReader()
{
    string cardaxCsvPath = @"C:\Cardax2WkbTest\Cardax\CardaxTable.csv";
    try
    {
        Globals.CardaxQueryResult =
            EasyCSV.FromFile(cardaxCsvPath)
                   .Where(r => r.Any(c => !string.IsNullOrEmpty(c)))
                   .Select(r => new CardaxDataObject()
                   {
                       EventID = int.Parse(r[0]),
                       FTItemID = int.Parse(r[1]),
                       PayrollNumber = int.Parse(r[2]),
                       EventDateTime = DateTime.Parse(r[3]),
                       CardholderFirstName = r[4],
                       CardholderLastName = r[5]
                   }).ToList();
    }
    catch (Exception)
    {
        myLog.Error("Unable to open/read Cardax simulated punch csv file! " +
            "File already open or does not exist: \"{0}\"", cardaxCsvPath);
    }
}
I also recommend re-thinking how you structure this. The code below is better practice:
public IEnumerable<CardaxDataObject> ReadCardaxCsvFile(string filename)
{
    // No try block at this level. Catch that in the method that calls this method.
    return EasyCSV.FromFile(filename)
                  .Where(r => r.Any(c => !string.IsNullOrEmpty(c)))
                  // You may want to put a try/catch inside the `Select()` projection, though.
                  // It would allow you to continue if you fail to parse an individual record.
                  .Select(r => new CardaxDataObject()
                  {
                      EventID = int.Parse(r[0]),
                      FTItemID = int.Parse(r[1]),
                      PayrollNumber = int.Parse(r[2]),
                      EventDateTime = DateTime.Parse(r[3]),
                      CardholderFirstName = r[4],
                      CardholderLastName = r[5]
                  });
}
Suddenly the method boils down to one statement (albeit a very long statement). Code like this is better, because it's more powerful, for three reasons: it's not limited to using just the one input file, it's not limited to only sending its output to the one location, and it's not limited to only one way of handling errors. You'd call it like this:
string cardaxCsvPath = @"C:\Cardax2WkbTest\Cardax\CardaxTable.csv";
try
{
    Globals.CardaxQueryResult = ReadCardaxCsvFile(cardaxCsvPath).ToList();
}
catch (Exception)
{
    myLog.Error("Unable to open/read Cardax simulated punch csv file! " +
        "File already open or does not exist: \"{0}\"", cardaxCsvPath);
}
or like this:
string cardaxCsvPath = @"C:\Cardax2WkbTest\Cardax\CardaxTable.csv";
try
{
    foreach (var result in ReadCardaxCsvFile(cardaxCsvPath))
    {
        Globals.CardaxQueryResult.Add(result);
    }
}
catch (Exception)
{
    myLog.Error("Unable to open/read Cardax simulated punch csv file! " +
        "File already open or does not exist: \"{0}\"", cardaxCsvPath);
}
I also recommend against using a Globals class like this. Find a more meaningful object with which you can associate this data.
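To illustrate that last point, one possible shape (hypothetical names, just a sketch) is a small result type that the calling code owns, rather than a static Globals class:

// Hypothetical example: a dedicated type owns the parsed records,
// so nothing has to live on a static Globals class.
public class CardaxImport
{
    public List<CardaxDataObject> Records { get; } = new List<CardaxDataObject>();
}

// The calling code creates and keeps the instance:
// var import = new CardaxImport();
// import.Records.AddRange(ReadCardaxCsvFile(cardaxCsvPath));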
* Disclaimer: I am the author of that parser
I'm currently working on a dll library project.
if (!Directory.Exists(MenGinPath))
{
    Directory.CreateDirectory(MenGinPath + @"TimedMessages");
    File.WriteAllLines(MenGinPath + @"TimedMessages\timedmessages.txt", new string[] { "Seperate each message with a new line" });
}
else if (!File.Exists(MenGinPath + @"TimedMessages\timedmessages.txt"))
{
    Directory.CreateDirectory(MenGinPath + @"TimedMessages");
    File.WriteAllLines(MenGinPath + @"TimedMessages\timedmessages.txt", new string[] { "Seperate each message with a new line" });
}
As you can see, if Directory.Exists(MenGinPath) returns false, a specific directory is created. However, if the check on the same path with the file appended returns false, the second branch calls the same functions.
My question is the following: is there any way to make this shorter?
Because, as you can see, I'm calling the same two functions in both branches:
Directory.CreateDirectory(MenGinPath + @"TimedMessages")
and
File.WriteAllLines(MenGinPath + @"TimedMessages\timedmessages.txt", ...)
Any help would be welcome!!
You don't need to check if the directory exists, because Directory.CreateDirectory automatically creates the directory if it does not exist and does nothing if the directory already exists.
Also, do not include the filename when creating the directory. Yes, it won't error, but leave it out for clarity's sake.
Another improvement is to use Path.Combine instead of hardcoding the path. This will improve the readability of your code.
So, here's what I can come up with:
string dir = Path.Combine(MenGinPath, @"Groups\TimedMessages");
string file = Path.Combine(dir, "timedmessages.txt");

// this automatically creates all directories in the specified path
// unless they already exist
Directory.CreateDirectory(dir);

// of course, you still need to check if the file exists
if (!File.Exists(file))
{
    File.WriteAllLines(file, new string[] { "Seperate each message with a new line" });
}
/* or if the file exists, do your stuff (optional)
 * else {
 *     //do something else? maybe edit the file?
 * }
 */
You can make your code shorter given that CreateDirectory does nothing when the directory already exists. Moreover, do not pollute your code with all those string concatenations to build the path and the file names.
Just do it one time, before entering the logic, using the appropriate method for building file names and path names (Path.Combine).
string messagePath = Path.Combine(MenGinPath, "TimedMessages");
string fileName = Path.Combine(messagePath, "timedmessages.txt");

// Call CreateDirectory even if the directory exists. CreateDirectory checks that
// by itself, so if you add your own check you are checking two times.
Directory.CreateDirectory(messagePath);

if (!File.Exists(fileName))
    File.WriteAllLines(fileName, new string[] { "Seperate each message with a new line" });
Would something like this work?
string strAppended = string.Empty;
if (!Directory.Exists(MenGinPath))
{
    strAppended = MenGinPath + @"Groups\timedmessages.txt";
}
else if (!File.Exists(MenGinPath + @"TimedMessages\timedmessages.txt"))
{
    strAppended = MenGinPath + @"TimedMessages\TimedMessages.txt";
}
else
{
    return;
}
Directory.CreateDirectory(Path.GetDirectoryName(strAppended)); // create the containing folder, not a folder named after the file
File.WriteAllLines(strAppended, new string[] { "Seperate each message with a new line" });
I have found that it is a great idea to share blocks of code like this instead of duplicating them inside if statements, because it makes code maintenance and debugging easier and less prone to missed bugs.
It seems the only difference between the two cases is the path, so determine just that path in your if/else:
const string GroupsPath = @"Groups\timedmessages.txt";
const string TimedMessagesTxt = @"TimedMessages\TimedMessages.txt";

string addPath = null;
if (!Directory.Exists(MenGinPath)) {
    addPath = GroupsPath;
} else if (!File.Exists(Path.Combine(MenGinPath, TimedMessagesTxt))) {
    addPath = TimedMessagesTxt;
}

if (addPath != null) {
    string fullPath = Path.Combine(MenGinPath, addPath);
    Directory.CreateDirectory(Path.GetDirectoryName(fullPath)); // create the containing folder
    File.WriteAllLines(fullPath,
        new string[] { "Seperate each message with a new line" });
}
Note: using Path.Combine instead of string concatenation has the advantage that missing or extra \ characters are handled automatically.
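For example (a small illustration, not from the original post), Path.Combine only inserts the separator when one is needed:

string a = Path.Combine(@"C:\MenGin", "TimedMessages");   // C:\MenGin\TimedMessages
string b = Path.Combine(@"C:\MenGin\", "TimedMessages");  // C:\MenGin\TimedMessages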
I have an app that reads from text files to determine which reports should be generated. It works as it should most of the time, but once in a while the program deletes one of the text files it reads from/writes to. Then an exception is thrown ("Could not find file") and progress ceases.
Here is some pertinent code.
First, reading from the file:
List<String> delPerfRecords = ReadFileContents(DelPerfFile);
. . .
private static List<String> ReadFileContents(string fileName)
{
    List<String> fileContents = new List<string>();
    try
    {
        fileContents = File.ReadAllLines(fileName).ToList();
    }
    catch (Exception ex)
    {
        RoboReporterConstsAndUtils.HandleException(ex);
    }
    return fileContents;
}
Then, writing to the file -- it marks the record/line in that file as having been processed, so that the same report is not re-generated the next time the file is examined:
MarkAsProcessed(DelPerfFile, qrRecord);
. . .
private static void MarkAsProcessed(string fileToUpdate, string qrRecord)
{
    try
    {
        var fileContents = File.ReadAllLines(fileToUpdate).ToList();
        for (int i = 0; i < fileContents.Count; i++)
        {
            if (fileContents[i] == qrRecord)
            {
                fileContents[i] = string.Format("{0}{1} {2}",
                    qrRecord, RoboReporterConstsAndUtils.COMPLETED_FLAG, DateTime.Now);
            }
        }
        // Will this automatically overwrite the existing?
        File.Delete(fileToUpdate);
        File.WriteAllLines(fileToUpdate, fileContents);
    }
    catch (Exception ex)
    {
        RoboReporterConstsAndUtils.HandleException(ex);
    }
}
So I do delete the file, but immediately replace it:
File.Delete(fileToUpdate);
File.WriteAllLines(fileToUpdate, fileContents);
The files being read have contents such as this:
Opas,20170110,20161127,20161231-COMPLETED 1/10/2017 12:33:27 AM
Opas,20170209,20170101,20170128-COMPLETED 2/9/2017 11:26:04 AM
Opas,20170309,20170129,20170225-COMPLETED
Opas,20170409,20170226,20170401
If "-COMPLETED" appears at the end of the record/row/line, it is ignored - will not be processed.
Also, if the second element (at index 1) is a date in the future, it will not be processed (yet).
So, for the examples shown above, the first three have already been done and will be subsequently ignored. The fourth one will not be acted on until on or after April 9th, 2017 (at which time the data within the date range of the last two dates will be retrieved).
Why is the file sometimes deleted? What can I do to prevent it from ever happening?
If helpful, in more context, the logic is like so:
internal static string GenerateAndSaveDelPerfReports()
{
    string allUnitsProcessed = String.Empty;
    bool success = false;
    try
    {
        List<String> delPerfRecords = ReadFileContents(DelPerfFile);
        List<QueuedReports> qrList = new List<QueuedReports>();
        foreach (string qrRecord in delPerfRecords)
        {
            var qr = ConvertCRVRecordToQueuedReport(qrRecord);
            // Rows that have already been processed return null
            if (null == qr) continue;
            // If the report has not yet been run, and it is due, add it to the list
            if (qr.DateToGenerate <= DateTime.Today)
            {
                var unit = qr.Unit;
                qrList.Add(qr);
                MarkAsProcessed(DelPerfFile, qrRecord);
                if (String.IsNullOrWhiteSpace(allUnitsProcessed))
                {
                    allUnitsProcessed = unit;
                }
                else if (!allUnitsProcessed.Contains(unit))
                {
                    allUnitsProcessed = allUnitsProcessed + " and " + unit;
                }
            }
        }
        foreach (QueuedReports qrs in qrList)
        {
            GenerateAndSaveDelPerfReport(qrs);
            success = true;
        }
    }
    catch
    {
        success = false;
    }
    if (success)
    {
        return String.Format("Delivery Performance report[s] generated for {0} by RoboReporter2017",
            allUnitsProcessed);
    }
    return String.Empty;
}
How can I ironclad this code to prevent the files from being periodically trashed?
UPDATE
I can't really test this, because the problem occurs so infrequently, but I wonder if adding a "pause" between the File.Delete() and the File.WriteAllLines() would solve the problem?
UPDATE 2
I'm not absolutely sure what the answer to my question is, so I won't add this as an answer, but my guess is that the File.Delete() and File.WriteAllLines() were occurring too close together and so the delete was sometimes occurring on both the old and the new copy of the file.
If so, a pause between the two calls may have solved the problem 99.42% of the time, but from what I found here, it seems the File.Delete() is redundant/superfluous anyway, and so I tested with the File.Delete() commented out, and it worked fine; so, I'm just doing without that occasionally problematic call now. I expect that to solve the issue.
// Will this automatically overwrite the existing?
File.Delete(fileToUpdate);
File.WriteAllLines(fileToUpdate, fileContents);
I would simply add an extra parameter to WriteAllLines() (which could default to false) to tell the function to open the file in overwrite mode, and not call File.Delete() at all then.
Do you currently check the return value of the file open?
Update: ok, it looks like WriteAllLines() is a .Net Framework function and therefore cannot be changed, so I deleted this answer. However now this shows up in the comments, as a proposed solution on another forum:
"just use something like File.WriteAllText where if the file exists,
the data is just overwritten, if the file does not exist it will be
created."
And this was exactly what I meant (while thinking WriteAllLines() was a user defined function), because I've had similar problems in the past.
So, a solution like that could solve some tricky problems (instead of deleting/fast reopening, just overwriting the file) - also less work for the OS, and possibly less file/disk fragmentation.
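For illustration, a minimal sketch (assuming the same fileToUpdate and fileContents variables as in the question) would be:

// File.WriteAllLines creates the file if it does not exist and
// overwrites it if it does, so no prior File.Delete() call is needed.
File.WriteAllLines(fileToUpdate, fileContents);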
I am trying to work through a school assignment that has us use a C# program to parse data from a CSV file and add it to a table in a local database. When I try to run the program though, the method I am using fails to parse any of the data into the object.
Here is the method I am using:
//Parse CSV line
public bool ParseCSVline(string aLine)
{
    try
    {
        string[] fields = aLine.Split(',');
        this.Item_ID = int.Parse(fields[0]);
        this.Invent_id = int.Parse(fields[1]);
        this.Itemsize = fields[2];
        this.Color = fields[3];
        this.Curr_price = decimal.Parse(fields[4]);
        this.Qoh = int.Parse(fields[5]);
        return true; //if everything parsed, return true
    }
    catch (Exception ex)
    {
        Console.Write("Failed to Parse");
        return false; //if a parse failed, return false
    }
}
When running the program the method keeps throwing the Exception instead of actually parsing the data. For clarity, here is the section in the Main program that is calling everything:
//Step 2 - Open input file
//Set where the file comes from
string filepath = @"C:\Users\Karlore\Documents\School\SAI-430\";
string filename = @"NewInventory.csv";
//Open reader
StreamReader theFile = new StreamReader(filepath + filename);

//Step 3 - Create an object to use
Item theItem = new Item();

//Step 4 - Loop through file and add to database
while (theFile.Peek() >= 0)
{
    //Get one line and parse it inside the object
    theItem.ParseCSVline(filename);
    //Check to see if item is already there
    if (theItem.IsInDatabase(connection))
    {
        continue;
    }
    else
    {
        //Add the new item to the database if it wasn’t already there
        theItem.AddRow(connection);
    }
} //end of while loop
If anyone can point out where I may have made an error, or point me in the right direction I would appreciate it.
Replace the line:
theItem.ParseCSVline(filename);
by:
theItem.ParseCSVline(theFile.ReadLine());
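In context, the loop would look roughly like this (a sketch based on the question's code, not a tested rewrite):

while (theFile.Peek() >= 0)
{
    // Read the next line of the CSV and parse it, rather than passing the file name
    if (!theItem.ParseCSVline(theFile.ReadLine()))
        continue; // skip lines that fail to parse

    // Check to see if the item is already there
    if (!theItem.IsInDatabase(connection))
    {
        // Add the new item to the database if it wasn't already there
        theItem.AddRow(connection);
    }
}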
I have a try catch statement which handles reading a list of xml files and outputs them to csv files.
Now I want to be able to move faulty xml files to a different folder from the healthy files but am not sure how to do it.
What I have got so far is as below:
bool faultyYN = false;
foreach (string filename in XMLFiles)
{
    using (var reader = new StreamReader(filename))
    {
        string shortFileName = Path.GetFileNameWithoutExtension(filename);
        XMLShredder.DataFile df = null;
        try
        {
            var sw = new Stopwatch();
            sw.Start();
            df = Shredder.ShredDocument(XDocument.Load(reader, LoadOptions.SetLineInfo));
            sw.Stop();
            var elapsed = sw.ElapsedMilliseconds;
            _log.InfoFormat("  Shredded file <{0}> in {1}ms", shortFileName, elapsed);
            string outputFileName = Path.Combine(outputDirectory, shortFileName) + ".csv";
            sw.Reset();
            sw.Start();
            using (var writer = new ChunkedShreddedFileWriter(outputFileName)) //full file path
            {
                new DataFileCsvWriter().Write(df, writer);
            }
            sw.Stop();
            var elapsed2 = sw.ElapsedMilliseconds;
            _log.InfoFormat("  Wrote file <{0}> in {1}ms", shortFileName, elapsed2);
        }
        catch (XmlException e)
        {
            _log.Error(String.Format("Reading failed due to incorrect structure in XML Document. File Name : <{0}>. Error Message : {1}.", shortFileName, e.Message), e);
            faultyYN = true;
        }
        catch (IOException e)
        {
            _log.Error(String.Format("Reading failed due to IO Exception. File Name : <{0}>. Error Message : {1}.", shortFileName, e.Message), e);
        }
        if(bool faultyYN == true)
        {
            MoveFaultyXML(faultyXMLDirectory, shortFileName);
        }
    }
    TidyUp(XMLFiles); // deletes the files after the process has finished.
}
I have tried adding the move of faulty files to the faulty directory after the catch, but the files still keep getting deleted.
So basically, the method that does not work, because I don't know where I should be calling it from, is MoveFaultyXML(faultyXMLDirectory, shortFileName).
I have read on the net that I shouldn't be using an exception for branching, but in this case I couldn't think of an alternative solution. The exception has to be thrown for me to know that there is something wrong with the file.
If there is another way of dealing with this that is better practice, or if this way works but I am doing it wrong, please help me; I would really appreciate it.
Thanks,
Jetnor.
The first solution that comes to my mind would be to move the MoveFaultyXML(faultyXMLDirectory, shortFileName); call into the appropriate catch block:
catch (XmlException e)
{
    //log
    MoveFaultyXML(faultyXMLDirectory, shortFileName);
}
You don't need the boolean faultyYN.
Now you can create a class representing your XML file (instead of storing just file names in your XMLFiles list):
public class XMLFile
{
    public string FileName { get; set; }
    public bool Delete { get; set; }
}
And set the Delete flag to 'false' if you move the file.
In the TidyUp delete only files with this flag set to 'true'.
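A rough sketch of what that could look like (assuming XMLFiles becomes a List<XMLFile>; the names here are illustrative, not your actual implementation):

// Only delete files that were processed successfully;
// files that were moved because they were faulty keep Delete == false.
private void TidyUp(List<XMLFile> xmlFiles)
{
    foreach (var xmlFile in xmlFiles.Where(f => f.Delete))
    {
        File.Delete(xmlFile.FileName);
    }
}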
An alternative solution would be to:
Replace foreach() with
for (int i = XMLFiles.Count - 1; i >= 0; i--)
{
    string filename = XMLFiles[i];
    //the rest of your code
}
Change the catch block with the XMLException to:
catch (XmlException e)
{
    //log
    MoveFaultyXML(faultyXMLDirectory, shortFileName);
    XMLFiles.RemoveAt(i);
}
This way, when you get to the TidyUp function, any files that were moved are no longer on the list to be deleted.
The XmlException is thrown when the XML is incorrect, so it is inside this catch block that you have to call your MoveFaultyXML.
Additional Notes:
Don't add YN to boolean names. Use something like xmlIsFaulty = true. This makes the code easier to read, because then you have conditional statements like
if (xmlIsFaulty) { MoveFaultyXml(); }
which even a non-programmer can understand.
In this code, you're redeclaring the faultyYN variable, which should give an error:
if(bool faultyYN == true)
{
    MoveFaultyXML(faultyXMLDirectory, shortFileName);
}
After you've declared the variable at the start of the method, you do not need to declare it again.
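In other words, the check should simply reuse the variable declared at the top of the method:

if (faultyYN) // no re-declaration, and no need to compare against true
{
    MoveFaultyXML(faultyXMLDirectory, shortFileName);
}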
This is because TidyUp(XMLFiles); still gets executed after your exception is caught. You can move TidyUp(XMLFiles); to within the try block, or only call it in the catch blocks where it is needed.
Okay, so I'm trying to load up a bunch of profiles through C# and I keep getting this error when I try to start up the program:
C:\C#FILES>program.exe
Unhandled Exception: System.IndexOutOfRangeException: Index was outside the bounds of the array.
at ConsoleApplication2.Program.loadAccounts()
at ConsoleApplication2.Program.Main(String[] args)
C:\C#FILES>
I've investigated and I think it has to do with the format of the accounts in the file.
I'm wondering what the proper way is; I've tried every way I can think of.
Here's the loading accounts method:
private static void loadAccounts()
{
    using (TextReader tr = new StreamReader("accounts.txt"))
    {
        string line = null;
        while ((line = tr.ReadLine()) != null)
        {
            String[] details = line.Split('\t');
            accounts.Add(details[0] + ":" + details[1]);
        }
    }
}
The accounts.txt part is the part I'm unsure about. I thought it would be as follows:
username(tab)password
like this:
username    password
However, it gives the error shown above.
Does anyone know what the proper account format should be?
You're getting an IndexOutOfRangeException, which suggests that details only had a single entry - which means there wasn't a tab on that line.
I suggest you print out the line in question before splitting, so you can see which line is causing problems. Or possibly do it conditionally:
while ((line = tr.ReadLine()) != null)
{
    String[] details = line.Split('\t');
    if (details.Length == 1)
    {
        // Or log it, or whatever...
        Console.WriteLine("Input error: no tab in line '{0}'", line);
    }
    else
    {
        accounts.Add(details[0] + ":" + details[1]);
    }
}
This is occurring because the line you are splitting from your input does not contain the elements requested.
It is unlikely that the first (read: 0th) element in the array is the cause of the issue because of the way that .NET deals with Split.
Have you checked that there are no blank lines in your input file? A single blank line (even at the end of the file) would cause this issue.
There are multiple checks you could add, such as:
if (!string.IsNullOrWhiteSpace(line)) ...
or
if(details.Length > 1)
These are a few checks, and I would recommend implementing either or both (there are more to consider); otherwise you are blindly trusting the input values, which is not good practice in general.
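Put together, the loop from the question could guard against both cases roughly like this (a sketch, not tested against your actual input file):

while ((line = tr.ReadLine()) != null)
{
    // Skip lines that are empty or contain only whitespace
    if (string.IsNullOrWhiteSpace(line))
        continue;

    String[] details = line.Split('\t');

    // Only use lines that actually contain a tab-separated username and password
    if (details.Length > 1)
    {
        accounts.Add(details[0] + ":" + details[1]);
    }
}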