How would I convert data in a .txt file into XML? (C#)

I have thousands of lines of data in a text file that I want to make easily searchable by converting it into a more structured format (I am hoping XML or another type of large data structure, though I am not sure what will be best for what I have in mind).
The data looks like this for each line:
Book 31, Thomas,George, 32, 34, 154
(each book is not unique; these are index entries, so a book will have several entries for the people listed in it, and the numbers are the pages on which they are listed)
So I am kind of lost on how to do this. I would want to read the .txt file and trim out all the spaces and commas; I basically get how to prep the data. But how would I programmatically create that many elements and values in XML, or populate some other large data structure?

If your CSV file does not change too much and the structure is stable, you could simply parse it into a list of objects at startup:
private class BookInfo {
    string title { get; set; }
    string person { get; set; }
    List<int> pages { get; set; }
}

private List<BookInfo> allbooks = new List<BookInfo>();

public void parse() {
    // you could also read the file line by line here
    // to avoid loading the complete file into memory
    var lines = File.ReadAllLines(filename);
    foreach (var l in lines) {
        var info = l.Split(',').Select(x => x.Trim()).ToArray();
        var b = new BookInfo {
            title = info[0],
            person = info[1] + ", " + info[2],
            pages = info.Skip(3).Select(x => int.Parse(x)).ToList()
        };
        allbooks.Add(b);
    }
}
Then you can easily search the allbooks list with, for instance, LINQ.
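For instance, a minimal LINQ query over the parsed list (a sketch; the person string format follows the parse method above):
// all page numbers where "Thomas, George" is listed, per book
var hits = allbooks
    .Where(b => b.person == "Thomas, George")
    .Select(b => new { b.title, b.pages });

foreach (var h in hits)
    Console.WriteLine(h.title + ": " + string.Join(", ", h.pages));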
EDIT
Now that you have clarified your input, I have adapted the parsing a little to better fit your needs.
If you want to search your book list by either the title or the person more easily, you can also create a lookup on each of the properties:
var titleLookup = allbooks.ToLookup(x=> x.title);
var personLookup = allbooks.ToLookup(x => x.person);
So personLookup["Thomas, George"] will give you a list of all BookInfo entries that mention "Thomas, George", and titleLookup["Book 31"] will give you a list of all entries for "Book 31", i.e. all persons mentioned in that book.

If you want to make the CSV file easily searchable, you can convert it to a DataTable.
If you want XML, you can use LINQ to XML to search it.
The following class generates either a DataTable or an XML string. You can pass a delimiter and includeHeader, or use the defaults:
class CsvUtility
{
    public DataTable Csv2DataTable(string fileName, bool includeHeader = false, char separator = ',')
    {
        IEnumerable<string> reader = File.ReadAllLines(fileName);
        var data = new DataTable("Table");
        var headers = reader.First().Split(separator);
        if (includeHeader)
        {
            // the first line carries the column names
            foreach (var header in headers)
            {
                data.Columns.Add(header.Trim());
            }
            reader = reader.Skip(1);
        }
        else
        {
            // no header line: generate Field0, Field1, ... column names
            for (int index = 0; index < headers.Length; index++)
            {
                data.Columns.Add("Field" + index);
            }
        }
        foreach (var row in reader)
        {
            if (row != null) data.Rows.Add(row.Split(separator));
        }
        return data;
    }

    public string Csv2Xml(string fileName, bool includeHeader = false, char separator = ',')
    {
        var dt = Csv2DataTable(fileName, includeHeader, separator);
        var stream = new StringWriter();
        dt.WriteXml(stream);
        return stream.ToString();
    }
}
Example usage:
CsvUtility csv = new CsvUtility();
var dt = csv.Csv2DataTable("f1.txt");

// search for a string in the Field1 column with a LIKE filter
DataRow[] filteredRows = dt.Select("Field1 LIKE '%" + "Thomas" + "%'");

// search a certain field with LINQ
var filtered = dt.AsEnumerable().Where(r => r.Field<string>("Field1").Contains("Thomas"));

// generate xml
var xml = csv.Csv2Xml("f1.txt");
Console.WriteLine(xml);
/*
output of the xml for your sample:
<DocumentElement>
  <Table>
    <Field0>Book 31</Field0>
    <Field1> Thomas</Field1>
    <Field2>George</Field2>
    <Field3> 32</Field3>
    <Field4> 34</Field4>
    <Field5> 154</Field5>
  </Table>
</DocumentElement>
*/
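If you keep the XML, a short LINQ to XML search over the generated document might look like this (a sketch; the element names follow the sample output above):
using System.Xml.Linq; // LINQ to XML

var doc = XDocument.Parse(xml); // xml produced by Csv2Xml above
// rows whose Field1 (the name column) mentions "Thomas"
var rows = doc.Descendants("Table")
    .Where(t => ((string)t.Element("Field1") ?? "").Contains("Thomas"));
foreach (var r in rows)
    Console.WriteLine((string)r.Element("Field0")); // the book title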

Related

Is it possible to get specific column data from a large pipe delimited file without creating a class for every column?

I am writing a C# program that will grab some data from a pipe-delimited file with 400 columns in it. I'm only required to work with 6 of the columns in each row. The file does not have headers, and the first line is a 5-column row with a general description of the file (file name, batch date, number of records, total, report id). Before I create a class with 400 fields in it, I was curious whether anyone here had a better idea of how to approach this. Thanks for your time.
Well, you don't mention much as to how you're loading the file, but I imagine it is using System.IO and then doing a string split on each line. If so, you need not extract every field from the resulting split array.
Imagine you only needed two columns, the second and fourth, and had a class to accept each row as follows:
public class row {
    public string field2;
    public string field4;
}
Then you would extract your data like this:
IEnumerable<row> parsed =
    File.ReadLines(@"path to file")
        .Skip(1)
        .Select(line => {
            var splitted = line.Split('|');
            return new row {
                field2 = splitted[1],
                field4 = splitted[3]
            };
        });
You could use the Microsoft.VisualBasic.FileIO reference and then do something like this:
using (var parser = new TextFieldParser(file))
{
    Int32 skipHeader = 0;
    parser.SetDelimiters("|");
    while (!parser.EndOfData)
    {
        // processing row
        string[] fields = parser.ReadFields();
        Int32 x = 0;
        if (skipHeader > 0)
        {
            foreach (var field in fields)
            {
                if (x == 0)
                {
                    //SAVE STUFF TO VARIABLE
                }
                else if (x == 4)
                {
                    //SAVE MORE STUFF
                }
                else if (x == 20)
                {
                    //SAVE LAST STUFF
                    break; // this is the last column of data needed, so break
                }
                x++;
            }
            //DO SOMETHING WITH ALL THE SAVED STUFF AND CLEAR IT OUT
        }
        else
        {
            skipHeader++;
        }
    }
}

How to take specific columns from a text file (.txt) with delimiters in C#

I have example data like this. The data is in a text file (.txt); sorry, that is the type of file I was given. If it were Excel or CSV, it might be easier.
Edit: I am making a console app in C#.
FamilyID;name;gender;DOB;Place of birth;status
1;nicky;male;01-01-1998;greenland;married
1;sonia;female;02-02-1995;greenland;married
2;dicky;male;04-01-1995;bali;single
3;redding;male;01-05-1996;USA;single
3;sisca;female;05-03-1994;australia;married
I want to take specific columns from that data, for example FamilyID, Name, and status.
I have already tried some code that reads the data, takes all of it, and lists it in a new text file.
The goal is to create a new text file per family ID, taking only specific columns.
The problem is: I can't take the specific columns I want from the text file (I don't know how to select multiple columns in the code I wrote).
DateTime date = DateTime.Now;
string tgl = date.Date.ToString("dd");
string bln = date.Month.ToString("d2");
string thn = date.Year.ToString();
string tglskrg = thn + "/" + bln + "/" + tgl;

string filename = "C:\\Users\\Documents\\My Received Files\\exampledata.txt";
string[] liness = File.ReadAllLines(filename);
string[] col;

var lines = File.ReadAllLines(filename);
var groups = lines.Skip(1)
    .Select(x => x.Split(';'))
    .GroupBy(x => x[0]).ToArray();

foreach (var group in groups)
{
    Console.WriteLine(group);
    File.WriteAllLines("C:\\Users\\Documents\\My Received Files\\exampledata_" + group.Key + ".txt",
        group.Select(x => string.Join(";", x)));
}
Maybe someone can help? Thank you.
One way to approach this would be to capture the details in a data structure and later write the required details to a file. For example:
public class Detail
{
    public int FamilyID { get; set; }
    public string Name { get; set; }
    public string Gender { get; set; }
    public DateTime DOB { get; set; }
    public string PlaceOfBirth { get; set; }
    public string Status { get; set; }
}
Now you can write a method that parses the string based on a delimiter and returns an IEnumerable<Detail>:
public IEnumerable<Detail> Parse(string source, char delimiter)
{
    return source.Split(new[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries)
        .Skip(1)
        .Select(x =>
        {
            var detail = x.Split(new[] { delimiter });
            return new Detail
            {
                FamilyID = Int32.Parse(detail[0]),
                Name = detail[1],
                Gender = detail[2],
                DOB = DateTime.Parse(detail[3]),
                PlaceOfBirth = detail[4],
                Status = detail[5]
            };
        });
}
Client Call
Parse(stringFromFile, ';');
Output
Now you can pick the details you want from the collection and write them to the output file.
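For example, a minimal sketch that writes one file per FamilyID with only the three wanted columns (inputPath and the file name pattern are illustrative):
// assumes the Parse method above; inputPath is an assumption
var details = Parse(File.ReadAllText(inputPath), ';');
foreach (var family in details.GroupBy(d => d.FamilyID))
{
    File.WriteAllLines(
        "exampledata_" + family.Key + ".txt",
        family.Select(d => d.FamilyID + ";" + d.Name + ";" + d.Status));
}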
try this.
var list = new List<String>();
list.Add("FamilyID;name;gender;DOB;Place of birth;status");
list.Add("1;nicky;male;01-01-1998;greenland;married");
list.Add("1;sonia;female;02-02-1995;greenland;married");
list.Add("2;dicky;male;04-01-1995;bali;single");
list.Add("3;redding;male;01-05-1996;USA;single");
list.Add("3;sisca;female;05-03-1994;australia;married");

var group = from item in list.Skip(1)
            let splitItem = item.Split(';', StringSplitOptions.RemoveEmptyEntries)
            select new
            {
                FamilyID = splitItem[0],
                Name = splitItem[1],
                Status = splitItem[5],
            };

foreach (var item in group.ToList())
{
    Console.WriteLine($"Family ID: {item.FamilyID}, Name: {item.Name}, Status: {item.Status}");
}

Is there a way to dynamically create an object at run time in .NET 3.5?

I'm working on an importer that takes tab-delimited text files. The first line of each file contains 'columns' like ItemCode, Language, ImportMode, etc., and there can be varying numbers of columns.
I'm able to get the names of each column, whether there's one or 10 and so on. I use a method to achieve this that returns List<string>:
private List<string> GetColumnNames(string saveLocation, int numColumns)
{
    var data = File.ReadAllLines(saveLocation);
    var columnNames = new List<string>();
    for (int i = 0; i < numColumns; i++)
    {
        var cols = from lines in data
                       .Take(1)
                       .Where(l => !string.IsNullOrEmpty(l))
                       .Select(l => l.Split(delimiter.ToCharArray(), StringSplitOptions.None))
                       .Select(value => string.Join(" ", value))
                   let split = lines.Split(' ')
                   select new
                   {
                       Temp = split[i].Trim()
                   };
        foreach (var x in cols)
        {
            columnNames.Add(x.Temp);
        }
    }
    return columnNames;
}
If I always knew what columns to expect, I could just create a new object, but since I don't, I'm wondering: is there a way I can dynamically create an object with properties that correspond to whatever GetColumnNames() returns?
Any suggestions?
For what it's worth, here's how I used DataTables to achieve what I wanted.
// saveLocation is the file location
// numColumns comes from another method that gets the number of columns in the file
var columnNames = GetColumnNames(saveLocation, numColumns);
var table = new DataTable();
foreach (var header in columnNames)
{
    table.Columns.Add(header);
}

// itemAttributeData is the file split into lines;
// each line must be split into its fields before being added as a row
foreach (var row in itemAttributeData)
{
    table.Rows.Add(row.Split('\t'));
}
Although there was a bit more work involved to be able to manipulate the data in the way I wanted, Karthik's suggestion got me on the right track.
You could create a dictionary of strings where the key is the "property" name and the value is its content.
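A minimal sketch of that idea, reusing columnNames from GetColumnNames above (line and delimiter are assumptions standing in for the current row and the existing delimiter field):
// map column name -> raw value for one row; keys depend on the header line
var row = new Dictionary<string, string>();
var values = line.Split(delimiter.ToCharArray(), StringSplitOptions.None);
for (int i = 0; i < columnNames.Count && i < values.Length; i++)
{
    row[columnNames[i]] = values[i].Trim();
}
// later: row["ItemCode"], row["ImportMode"], ... (names come from the file)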

Flat file normalization with a dynamic number of columns

I have a flat file with an unfortunately dynamic column structure. There is a value that is in a hierarchy of values, and each tier in the hierarchy gets its own column. For example, my flat file might resemble this:
StatisticID|FileId|Tier0ObjectId|Tier1ObjectId|Tier2ObjectId|Tier3ObjectId|Status
1234|7890|abcd|efgh|ijkl|mnop|Pending
...
The same feed the next day may resemble this:
StatisticID|FileId|Tier0ObjectId|Tier1ObjectId|Tier2ObjectId|Status
1234|7890|abcd|efgh|ijkl|Complete
...
The thing is, I don't care much about all the tiers; I only care about the id of the last (bottom) tier and all the other row data that is not part of the tier columns. I need to normalize the feed to something resembling this, to inject into a relational database:
StatisticID|FileId|ObjectId|Status
1234|7890|ijkl|Complete
...
What would be an efficient, easy-to-read mechanism for determining the last tier object id, and organizing the data as described? Every attempt I've made feels kludgy to me.
Some things I've done:
I have tried to examine the column names for regular expression patterns, identify the columns that are tiered, order them by name descending, and select the first record... but I lose the ordinal column number this way, so that didn't look good.
I have placed the columns I want into an IDictionary<string, int> object to reference, but again reliably collecting the ordinal of the dynamic columns is an issue, and it seems this would be rather non-performant.
I ran into a similar problem a few years ago. I used a Dictionary to map the columns; it was not pretty, but it worked.
First, make a dictionary mapping each wanted field (0 = StatisticID, 1 = FileId, 2 = ObjectId, 3 = Status) to its column index:
private Dictionary<int, int> GetColumnDictionary(string headerLine)
{
    Dictionary<int, int> columnDictionary = new Dictionary<int, int>();
    List<string> columnNames = headerLine.Split('|').ToList();
    string maxTierObjectColumnName = GetMaxTierObjectColumnName(columnNames);
    for (int index = 0; index < columnNames.Count; index++)
    {
        if (columnNames[index] == "StatisticID")
        {
            columnDictionary.Add(0, index);
        }
        if (columnNames[index] == "FileId")
        {
            columnDictionary.Add(1, index);
        }
        if (columnNames[index] == maxTierObjectColumnName)
        {
            columnDictionary.Add(2, index);
        }
        if (columnNames[index] == "Status")
        {
            columnDictionary.Add(3, index);
        }
    }
    return columnDictionary;
}

private string GetMaxTierObjectColumnName(List<string> columnNames)
{
    // edit this function if the tier number can be greater than 9
    var maxTierObjectColumnName = columnNames
        .Where(c => c.Contains("Tier") && c.Contains("Object"))
        .OrderBy(c => c)
        .Last();
    return maxTierObjectColumnName;
}
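As the comment warns, plain alphabetical ordering breaks once the tier number reaches double digits ("Tier10ObjectId" sorts before "Tier2ObjectId"). A hedged variant that orders by the numeric part instead (the regex is illustrative and needs System.Text.RegularExpressions):
private string GetMaxTierObjectColumnName(List<string> columnNames)
{
    // order by the tier number itself so Tier10 sorts after Tier9
    return columnNames
        .Where(c => c.Contains("Tier") && c.Contains("Object"))
        .OrderBy(c => int.Parse(Regex.Match(c, @"\d+").Value))
        .Last();
}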
And after that it's simply a matter of running through the file:
private List<DataObject> ParseFile(string fileName)
{
    List<DataObject> dataObjects = new List<DataObject>();
    using (StreamReader streamReader = new StreamReader(fileName))
    {
        string headerLine = streamReader.ReadLine();
        Dictionary<int, int> columnDictionary = this.GetColumnDictionary(headerLine);
        string line;
        while ((line = streamReader.ReadLine()) != null)
        {
            var lineValues = line.Split('|');
            dataObjects.Add(
                new DataObject()
                {
                    StatisticId = lineValues[columnDictionary[0]],
                    FileId = lineValues[columnDictionary[1]],
                    ObjectId = lineValues[columnDictionary[2]],
                    Status = lineValues[columnDictionary[3]]
                }
            );
        }
    }
    return dataObjects;
}
I hope this helps (even a little bit).
Personally, I would not try to reformat your file. I think the easiest approach would be to parse each row from the front and from the back. For example:
var itemArray = getMyItems();
var statisticId = itemArray[0];
var fileId = itemArray[1];
// and so on for the rest of your pre-tier columns

// then get the second-to-last column, which will be the last tier
var lastTierId = itemArray[itemArray.Length - 2];
Since you know the last tier will always be second from the end, you can just start at the end and work your way forward. This seems like it would be much easier than trying to reformat the data file.
If you really want to create a new file, you could use this approach to get the data you want to write out.
I don't know C# syntax, but something along these lines:
split line in parts with | as separator
get parts [0], [1], [length - 2] and [length - 1]
pass the parts to the database handling code
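In C#, those steps might translate to something like this (a sketch; the variable names are illustrative):
var parts = line.Split('|');
string statisticId = parts[0];
string fileId = parts[1];
string objectId = parts[parts.Length - 2]; // the bottom tier, second from the end
string status = parts[parts.Length - 1];
// pass the four values to the database handling code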

Reading a CSV file in .NET?

How do I read a CSV file using C#?
One choice, without using third-party components, is the Microsoft.VisualBasic.FileIO.TextFieldParser class (http://msdn.microsoft.com/en-us/library/microsoft.visualbasic.fileio.textfieldparser.aspx). It provides all the functions for parsing CSV; it is sufficient to reference the Microsoft.VisualBasic assembly.
var parser = new Microsoft.VisualBasic.FileIO.TextFieldParser(file);
parser.TextFieldType = Microsoft.VisualBasic.FileIO.FieldType.Delimited;
parser.SetDelimiters(new string[] { ";" });
while (!parser.EndOfData)
{
    string[] row = parser.ReadFields();
    /* do something */
}
You can use the Microsoft.VisualBasic.FileIO.TextFieldParser class in C#:
using System;
using System.Data;
using Microsoft.VisualBasic.FileIO;

static void Main()
{
    string csv_file_path = @"C:\Users\Administrator\Desktop\test.csv";
    DataTable csvData = GetDataTableFromCSVFile(csv_file_path);
    Console.WriteLine("Rows count:" + csvData.Rows.Count);
    Console.ReadLine();
}

private static DataTable GetDataTableFromCSVFile(string csv_file_path)
{
    DataTable csvData = new DataTable();
    try
    {
        using (TextFieldParser csvReader = new TextFieldParser(csv_file_path))
        {
            csvReader.SetDelimiters(new string[] { "," });
            csvReader.HasFieldsEnclosedInQuotes = true;

            // the first line supplies the column names
            string[] colFields = csvReader.ReadFields();
            foreach (string column in colFields)
            {
                DataColumn datacolumn = new DataColumn(column);
                datacolumn.AllowDBNull = true;
                csvData.Columns.Add(datacolumn);
            }

            while (!csvReader.EndOfData)
            {
                string[] fieldData = csvReader.ReadFields();
                // turn empty values into null
                for (int i = 0; i < fieldData.Length; i++)
                {
                    if (fieldData[i] == "")
                    {
                        fieldData[i] = null;
                    }
                }
                csvData.Rows.Add(fieldData);
            }
        }
    }
    catch (Exception ex)
    {
        // note: exceptions are swallowed here; consider logging or rethrowing
    }
    return csvData;
}
You could try CsvHelper, which is a project I work on. Its goal is to make reading and writing CSV files as easy as possible, while being very fast.
Here are a few ways you can read from a CSV file.
// By type
var records = csv.GetRecords<MyClass>();
var records = csv.GetRecords( typeof( MyClass ) );

// Dynamic
var records = csv.GetRecords<dynamic>();

// Using an anonymous type for the class definition
var anonymousTypeDefinition = new
{
    Id = default( int ),
    Name = string.Empty,
    MyClass = new MyClass()
};
var records = csv.GetRecords( anonymousTypeDefinition );
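The snippets above assume an already constructed CsvReader named csv; setting one up typically looks like this (a sketch; recent CsvHelper versions take a culture in the constructor):
using CsvHelper;           // the CsvHelper NuGet package
using System.Globalization;
using System.IO;
using System.Linq;

using (var reader = new StreamReader("data.csv"))
using (var csv = new CsvReader(reader, CultureInfo.InvariantCulture))
{
    var records = csv.GetRecords<MyClass>().ToList();
}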
I usually use a simplistic approach like this one:
var path = Server.MapPath("~/App_Data/Data.csv");
var csvRows = System.IO.File.ReadAllLines(path, Encoding.Default).ToList();
foreach (var row in csvRows.Skip(1))
{
    var columns = row.Split(';');
    var field1 = columns[0];
    var field2 = columns[1];
    var field3 = columns[2];
}
I just used this library in my application: http://www.codeproject.com/KB/database/CsvReader.aspx. Everything went smoothly using this library, so I'm recommending it. It is free under the MIT License, so just include the notice with your source files.
I didn't display the CSV in a browser, but the author has some samples for Repeaters or DataGrids. I ran one of his test projects to test a Sort operation I added, and it looked pretty good.
You can try Cinchoo ETL, an open-source library for reading and writing CSV files.
Here are a couple of ways you can read CSV files. Say emp.csv contains:
Id, Name
1, Tom
2, Mark
This is how you can use this library to read it
using (var reader = new ChoCSVReader("emp.csv").WithFirstLineHeader())
{
    foreach (dynamic item in reader)
    {
        Console.WriteLine(item.Id);
        Console.WriteLine(item.Name);
    }
}
If you have a POCO object defined to match up with the CSV file, like below,
public class Employee
{
    public int Id { get; set; }
    public string Name { get; set; }
}
You can parse the same file using this POCO class as below
using (var reader = new ChoCSVReader<Employee>("emp.csv").WithFirstLineHeader())
{
    foreach (var item in reader)
    {
        Console.WriteLine(item.Id);
        Console.WriteLine(item.Name);
    }
}
Please check out articles at CodeProject on how to use it.
Disclaimer: I'm the author of this library
I recommend Angara.Table; for save/load see: http://predictionmachines.github.io/Angara.Table/saveload.html.
It infers column types, can save CSV files, and is much faster than TextFieldParser. It follows RFC 4180 for the CSV format and supports multiline strings, NaNs, and escaped strings containing the delimiter character.
The library is under the MIT license. Source code: https://github.com/Microsoft/Angara.Table.
Though its API is focused on F#, it can be used in any .NET language, just not as succinctly as in F#.
Example:
using Angara.Data;
using System.Collections.Immutable;
...
var table = Table.Load("data.csv");

// Print the schema:
foreach (Column c in table)
{
    string colType;
    if (c.Rows.IsRealColumn) colType = "double";
    else if (c.Rows.IsStringColumn) colType = "string";
    else if (c.Rows.IsDateColumn) colType = "date";
    else if (c.Rows.IsIntColumn) colType = "int";
    else colType = "bool";
    Console.WriteLine("{0} of type {1}", c.Name, colType);
}

// Get column data:
ImmutableArray<double> a = table["a"].Rows.AsReal;
ImmutableArray<string> b = table["b"].Rows.AsString;

Table.Save(table, "data2.csv");
You might be interested in the Linq2Csv library at CodeProject. One thing you would need to check is whether it reads the data lazily, so you won't need a lot of memory when working with bigger files.
As for displaying the data in the browser, you could do many things to accomplish it. If you were more specific about your requirements, the answer could be more specific, but things you could do:
1. Use the HttpListener class to write a simple web server (you can find many samples on the net for hosting a mini HTTP server; see the sketch below).
2. Use ASP.NET or ASP.NET MVC, create a page, and host it using IIS.
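A minimal HttpListener sketch for option 1 (not production code; the URL prefix and HTML body are illustrative):
using System.Net;
using System.Text;

var listener = new HttpListener();
listener.Prefixes.Add("http://localhost:8080/"); // illustrative prefix
listener.Start();
while (true)
{
    var context = listener.GetContext(); // blocks until a request arrives
    // render your parsed CSV rows as HTML here
    var bytes = Encoding.UTF8.GetBytes("<html><body>CSV rows here</body></html>");
    context.Response.ContentType = "text/html";
    context.Response.OutputStream.Write(bytes, 0, bytes.Length);
    context.Response.Close();
}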
Seems like there are quite a few projects on CodeProject or CodePlex for CSV Parsing.
Here is another CSV Parser on CodePlex
http://commonlibrarynet.codeplex.com/
This library has components for CSV parsing, INI file parsing, and command-line parsing as well. It's working well for me so far. The only thing is that it doesn't have a CSV writer.
This is just for parsing the CSV. For displaying it in a web page, it is simply a matter of taking the list and rendering it however you want.
Note: This code example does not handle the situation where the input string line contains newlines.
public List<string> SplitCSV(string line)
{
    if (string.IsNullOrEmpty(line))
        throw new ArgumentException();

    List<string> result = new List<string>();
    int index = 0;
    int start = 0;
    bool inQuote = false;

    // parse line
    foreach (char c in line)
    {
        switch (c)
        {
            case '"':
                inQuote = !inQuote;
                break;
            case ',':
                if (!inQuote)
                {
                    result.Add(line.Substring(start, index - start)
                                   .Replace("\"", ""));
                    start = index + 1;
                }
                break;
        }
        index++;
    }
    if (start < index)
    {
        result.Add(line.Substring(start, index - start).Replace("\"", ""));
    }
    return result;
}
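A quick illustrative call, showing that a quoted field containing the delimiter survives as one field:
// "Thomas, George" stays one field because its comma sits inside quotes
var fields = SplitCSV("Book 31,\"Thomas, George\",32");
// fields: "Book 31" | "Thomas, George" | "32"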
I have been maintaining an open source project called FlatFiles for several years now. It's available for .NET Core and .NET 4.5.1.
Unlike most of the alternatives, it allows you to define a schema (similar to the way EF code-first works) with an extreme level of precision, so you aren't fighting conversion issues all the time. You can map directly to your data classes, and there is also support for interfacing with older ADO.NET classes.
Performance-wise, it's been tuned to be one of the fastest parsers for .NET, with a plethora of options for quirky format differences. There's also support for fixed-length files, if you need it.
You can use this library: Sky.Data.Csv (https://www.nuget.org/packages/Sky.Data.Csv/).
It is a really fast CSV reader library, and it's really easy to use:
using Sky.Data.Csv;

var readerSettings = new CsvReaderSettings { Encoding = Encoding.UTF8 };
using (var reader = CsvReader.Create("path-to-file", readerSettings))
{
    foreach (var row in reader)
    {
        // do something with the data
    }
}
It also supports reading typed objects with the CsvReader<T> class, which has the same interface.
