How to get data from my List&lt;T&gt; - C#

I know how to add data to my list, but the problem is: how can I retrieve it?
I declared my list in GlobalVar.cs:
public static List<string> ViolationRefNumToPrint = new List<string>();
Here is the code-behind that adds data to the list:
GlobalVar.ViolationRefNumToPrint.Clear();
for (int i = 0; i < lvviolations.Items.Count; i++)
{
    GlobalVar.ViolationRefNumToPrint.Add(((EmpViolationObject)lvviolations.Items[i]).VioRefNum);
}
My question is: how can I read the data back out of the list?
EDIT
I have used the code below, which was given by @evanmcdonnal. I am going to use this for my report, displayed in a DocumentViewer.
Here is my code:
ReportDocument reportDocument = new ReportDocument();
string ats = new DirectoryInfo(Environment.CurrentDirectory).Parent.Parent.FullName;
StreamReader reader = new StreamReader(new FileStream(ats + @"\Template\ReportViolation.xaml", FileMode.Open, FileAccess.Read));
reportDocument.XamlData = reader.ReadToEnd();
reportDocument.XamlImagePath = Path.Combine(ats, @"Template\");
reader.Close();

DateTime dateTimeStart = DateTime.Now; // start time measure here

List<ReportData> listData = new List<ReportData>();
int i = 0;
foreach (string item in GlobalVar.ViolationRefNumToPrint)
{
    ReportData data = new ReportData();
    data.ReportDocumentValues.Add("PrintDate", DateTime.Now);
    data.ReportDocumentValues.Add("EmpIDNum", NewIDNumber.ToString());
    data.ReportDocumentValues.Add("EmpName", NewEmpName.ToString());
    data.ReportDocumentValues.Add("EmpPosition", NewPosition.ToString());
    data.ReportDocumentValues.Add("PageNumber", (i + 1));
    data.ReportDocumentValues.Add("PageCount", GlobalVar.ViolationRefNumToPrint.Count.ToString());
    listData.Add(data);
    i++;
}

XpsDocument xps = reportDocument.CreateXpsDocument(listData);
documentViewer.Document = xps.GetFixedDocumentSequence();

// show the elapsed time in window title
Title += " - generated in " + (DateTime.Now - dateTimeStart).TotalMilliseconds + "ms";
The problem is that it gives me an error like this:

You have to loop over it and search for the item you want.
foreach (string item in ViolationRefNumToPrint)
{
    Console.WriteLine(item);
}
If instead you want a specific item (say your list holds objects and you are looking for string itemImLookinFor = "some nonsense";), loop over it with a conditional to match:
foreach (MyObject item in ViolationRefNumToPrint)
{
    if (item.name == itemImLookinFor)
    {
        // do something with this object
    }
}
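As a side note, LINQ can express the same lookup in one line; a sketch assuming the string list from the question:

using System.Linq;

// First matching reference number, or null when nothing matches.
string match = GlobalVar.ViolationRefNumToPrint
    .FirstOrDefault(refNum => refNum == itemImLookinFor);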


Lucene 4.8 facets usage

I have difficulty understanding this example on how to use facets:
https://lucenenet.apache.org/docs/4.8.0-beta00008/api/Lucene.Net.Demo/Lucene.Net.Demo.Facet.SimpleFacetsExample.html
My goal is to create an index in which each document field has a facet, so that at search time I can choose which facets to use to navigate the data.
What I am confused about is the setup of facets at index-creation time. To summarize my question: is an index with facets compatible with ReferenceManager?
Does the DirectoryTaxonomyWriter need to actually be written and persisted on disk, or is it embedded into the index itself and just temporary? Given the line indexWriter.AddDocument(config.Build(taxoWriter, doc)); from the example, I expect it is temporary and embedded into the index (but then the example also shows you need the taxonomy to drill down into facets). So can the taxonomy be tangled with the index in some way, so that both are handled together by ReferenceManager?
If not, may I just use the same folder I use for storing the index?
Here is a more detailed list of points that confuse me:
In my scenario I am indexing documents asynchronously (in a background process) and then fetching the index as soon as possible through ReferenceManager in an ASP.NET application. I hope this way of fetching the index is compatible with the DirectoryTaxonomyWriter needed by facets.
I then modified my code to introduce the taxonomy writer as indicated in the example, but I am a bit confused: it seems I can't store the DirectoryTaxonomyWriter in the same folder as the index because the folder is locked. Do I need to persist it, or will it be embedded into the index (so a RAMDirectory is enough)? If I need to persist it in a different directory, can I safely persist it into a subdirectory?
Here is the code I am actually using:
private static void BuildIndex(IndexEntry entry)
{
    string targetFolder = ConfigurationManager.AppSettings["IndexFolder"] ?? string.Empty;

    //** LOG
    if (System.IO.Directory.Exists(targetFolder) == false)
    {
        string message = @"Index folder not found";
        _fileLogger.Error(message);
        _consoleLogger.Error(message);
        return;
    }

    var metadata = JsonConvert.DeserializeObject<IndexMetadata>(File.ReadAllText(entry.MetdataPath) ?? "{}");
    string[] header = new string[0];
    List<dynamic> csvRecords = new List<dynamic>();
    using (var reader = new StreamReader(entry.DataPath))
    {
        CsvConfiguration csvConfiguration = new CsvConfiguration(CultureInfo.InvariantCulture);
        csvConfiguration.AllowComments = false;
        csvConfiguration.CountBytes = false;
        csvConfiguration.Delimiter = ",";
        csvConfiguration.DetectColumnCountChanges = false;
        csvConfiguration.Encoding = Encoding.UTF8;
        csvConfiguration.HasHeaderRecord = true;
        csvConfiguration.IgnoreBlankLines = true;
        csvConfiguration.HeaderValidated = null;
        csvConfiguration.MissingFieldFound = null;
        csvConfiguration.TrimOptions = CsvHelper.Configuration.TrimOptions.None;
        csvConfiguration.BadDataFound = null;
        using (var csvReader = new CsvReader(reader, csvConfiguration))
        {
            csvReader.Read();
            csvReader.ReadHeader();
            csvReader.Read();
            header = csvReader.HeaderRecord;
            csvRecords = csvReader.GetRecords<dynamic>().ToList();
        }
    }

    string targetDirectory = Path.Combine(targetFolder, "Index__" + metadata.Boundle + "__" + DateTime.Now.ToString("yyyyMMdd_HHmmss") + "__" + Path.GetRandomFileName().Substring(0, 6));
    System.IO.Directory.CreateDirectory(targetDirectory);

    //** LOG
    {
        string message = @"..creating index : {0}";
        _fileLogger.Information(message, targetDirectory);
        _consoleLogger.Information(message, targetDirectory);
    }

    using (var dir = FSDirectory.Open(targetDirectory))
    {
        using (DirectoryTaxonomyWriter taxoWriter = new DirectoryTaxonomyWriter(dir))
        {
            Analyzer analyzer = metadata.GetAnalyzer();
            var indexConfig = new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer);
            using (IndexWriter writer = new IndexWriter(dir, indexConfig))
            {
                long entryNumber = csvRecords.Count();
                long index = 0;
                long lastPercentage = 0;
                foreach (dynamic csvEntry in csvRecords)
                {
                    Document doc = new Document();
                    IDictionary<string, object> dynamicCsvEntry = (IDictionary<string, object>)csvEntry;
                    var indexedMetadataFields = metadata.IdexedFields;
                    foreach (string headField in header)
                    {
                        if (indexedMetadataFields.ContainsKey(headField) == false || (indexedMetadataFields[headField].NeedToBeIndexed == false && indexedMetadataFields[headField].NeedToBeStored == false))
                            continue;

                        var field = new Field(headField,
                            ((string)dynamicCsvEntry[headField] ?? string.Empty).ToLower(),
                            indexedMetadataFields[headField].NeedToBeStored ? Field.Store.YES : Field.Store.NO,
                            indexedMetadataFields[headField].NeedToBeIndexed ? Field.Index.ANALYZED : Field.Index.NO
                        );
                        doc.Add(field);

                        var facetField = new FacetField(headField, (string)dynamicCsvEntry[headField]);
                        doc.Add(facetField);
                    }

                    long percentage = (long)(((decimal)index / (decimal)entryNumber) * 100m);
                    if (percentage > lastPercentage && percentage % 10 == 0)
                    {
                        _consoleLogger.Information($"..indexing {percentage}%..");
                        lastPercentage = percentage;
                    }

                    writer.AddDocument(doc);
                    index++;
                }
                writer.Commit();
            }
        }
    }

    //** LOG
    {
        string message = @"Index Created : {0}";
        _fileLogger.Information(message, targetDirectory);
        _consoleLogger.Information(message, targetDirectory);
    }
}
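For reference, a minimal sketch of the separate-directory layout the question asks about, based on the linked SimpleFacetsExample: the taxonomy gets its own subdirectory (the "taxonomy" name is an arbitrary choice), each writer opens its own FSDirectory, and facet fields pass through FacetsConfig.Build before the document is added. This is an illustration under those assumptions, not a verified fix for the code above; analyzer is the one obtained from metadata.GetAnalyzer().

// Sketch: index and taxonomy in sibling directories, each with its own FSDirectory.
using (var indexDir = FSDirectory.Open(targetDirectory))
using (var taxoDir = FSDirectory.Open(Path.Combine(targetDirectory, "taxonomy")))
using (var writer = new IndexWriter(indexDir, new IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer)))
using (var taxoWriter = new DirectoryTaxonomyWriter(taxoDir))
{
    var facetsConfig = new FacetsConfig();
    var doc = new Document();
    doc.Add(new FacetField("category", "example"));
    // Build() resolves facet fields against the taxonomy and returns the document to index.
    writer.AddDocument(facetsConfig.Build(taxoWriter, doc));
    writer.Commit();
    taxoWriter.Commit();
}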

Export DataGrid to text file

I'm new to programming (first year of learning at college) and I'm working on a small application.
I have a window where the user can retrieve data from SQL into a DataGrid, and a Button for exporting some of that DataGrid data to a text file.
This is the code I've used to get data from SQL:
SqlConnection con = new SqlConnection("Server = localhost;Database = autoser; Integrated Security = true");
SqlCommand cmd = new SqlCommand("selectproduct", con); // Using a stored procedure.
cmd.CommandType = CommandType.StoredProcedure;
DataTable dt = new DataTable("dtList");
cmd.Parameters.AddWithValue("@Code", txtbarcode.Text);
SqlDataAdapter da = new SqlDataAdapter(cmd);
da.Fill(dt);
data.ItemsSource = dt.DefaultView;
SqlDataAdapter adapt = new SqlDataAdapter(cmd);
DataSet ds = new DataSet();
adapt.Fill(ds);
con.Close();
int count = ds.Tables[0].Rows.Count;
if (count == 0)
{
    MessageBox.Show("This product doesn't exist");
    SystemSounds.Hand.Play();
}
else if (count == 1)
{
    lblinfo.Visibility = Visibility.Visible;
    SystemSounds.Asterisk.Play();
}
And this is the code I used to write the text file:
{
    using (StreamWriter writer = new StreamWriter("D:\\test.txt", true))
    {
        writer.WriteLine("Welcome");
        writer.WriteLine("E N T E R N E T");
    }
    using (StreamWriter writer = new StreamWriter("D:\\test.txt", true))
    {
        writer.WriteLine(data.Items);
    }
    // Append line to the file.
    using (StreamWriter writer = new StreamWriter("D:\\test.txt", true))
    {
        writer.WriteLine("---------------------------------------");
        writer.WriteLine(" Thank You! ");
        writer.WriteLine(" " + DateTime.Now + " ");
    }
}
When I open the text file, I get this data:
Welcome
E N T E R N E T
System.Windows.Controls.ItemCollection   <-- why isn't the DataGrid data shown here?
---------------------------------------
Thank You
7/26/2018 12:38:37 PM
My question is: where is my mistake that causes the DataGrid data not to be written correctly?
Thanks in advance.
You are currently using the following overload of the WriteLine method:
public virtual void WriteLine(object value)
If you look at the documentation of StreamWriter.WriteLine(object) it says that it:
Writes the text representation of an object by calling the ToString method on that object, followed by a line terminator to the text string or stream.
This is the reason why you get the following nice line in your file:
System.Windows.Controls.ItemCollection
The documentation of Object.ToString() method reveals that the
default implementations of the Object.ToString method return the fully qualified name of the object's type.
You would need to iterate through the collection and write each entry separately into the file. I would also suggest writing directly from the data source instead of from the DataGrid:
foreach (DataRow row in dt.Rows)
{
    object[] array = row.ItemArray;
    writer.WriteLine(string.Join(" | ", array));
}
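Combined with the StreamWriter from the question (same D:\test.txt path), the export might look like this sketch:

// Sketch: write every row of the already-filled DataTable dt to the file.
using (StreamWriter writer = new StreamWriter("D:\\test.txt", true))
{
    foreach (DataRow row in dt.Rows)
    {
        writer.WriteLine(string.Join(" | ", row.ItemArray));
    }
}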
This is because data.Items is an ItemCollection, not a string.
All objects return the output of their ToString method when asked to represent their contents as a string. Normally you would override this method, but in this case you can't.
So you need to tell the compiler how to retrieve the representative information from that collection. You can use any of these queries to fetch the desired information out of the data grid:
var items = data.Items.AsQueryable().Cast<MyItemDataType>().Select(x => x.MyProperty);
var items = data.ItemsSource.Cast<MyItemDataType>().Select(x => x.MyProperty);
var items = data.Items.SourceCollection.AsQueryable().Cast<MyItemDataType>().Select(x => x.MyProperty);
items is a collection, so you need to convert it to a string:
var text = items.Aggregate((x, y) => x + ", " + y);
MyItemDataType differs in each query; you have to find out yourself which data type is being used. MyProperty is the property in that class which represents the text of a row.
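As a side note, string.Join produces the same result as the Aggregate call and reads a bit clearer:

var text = string.Join(", ", items);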
Edit
You can use this code too. It does the same thing:
string text = "";
for (int i = 0; i < data.Items.Count; i++)
{
    text += data.Items[i].ToString();
    if (i < data.Items.Count - 1)
        text += ", ";
}
writer.WriteLine(text);
But pay attention to the data type of each item in data.Items[i].ToString(). For example, if each item is of type int, then data.Items[i].ToString() returns a string representing the value of that integer (e.g. 1 turns into "1"). But if the items are of other types (such as Customer or MyDataGridItem), you need to override the ToString() method of that class to look something like this:
public class Customer
{
    //...
    public override string ToString()
    {
        return this.Id + " " + this.Name;
    }
}
If you cannot override this method for any reason, you need to take the other approach:
string text = "";
for (int i = 0; i < data.Items.Count; i++)
{
    // The cast is required since the type of Items[i] is object.
    Customer customer = data.Items[i] as Customer;
    text += (customer.Id + " " + customer.Name);
    if (i < data.Items.Count - 1)
        text += ", ";
}
writer.WriteLine(text);
Furthermore, you can use a StringBuilder to speed up the string concatenation, because += is slow on strings.
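A sketch of that StringBuilder variant, reusing the Customer type from above:

var sb = new StringBuilder();
for (int i = 0; i < data.Items.Count; i++)
{
    Customer customer = data.Items[i] as Customer;
    if (sb.Length > 0)
        sb.Append(", "); // separator between items
    sb.Append(customer.Id).Append(' ').Append(customer.Name);
}
writer.WriteLine(sb.ToString());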
Look at the sample code here. This will do what you want.
public static void WriteDataToFile(DataTable submittedDataTable, string submittedFilePath)
{
    int i = 0;
    StreamWriter sw = null;
    sw = new StreamWriter(submittedFilePath, false);

    for (i = 0; i < submittedDataTable.Columns.Count - 1; i++)
    {
        sw.Write(submittedDataTable.Columns[i].ColumnName + ";");
    }
    sw.Write(submittedDataTable.Columns[i].ColumnName);
    sw.WriteLine();

    foreach (DataRow row in submittedDataTable.Rows)
    {
        object[] array = row.ItemArray;
        for (i = 0; i < array.Length - 1; i++)
        {
            sw.Write(array[i].ToString() + ";");
        }
        sw.Write(array[i].ToString());
        sw.WriteLine();
    }
    sw.Close();
}
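A hypothetical call site, reusing the dt table filled from the stored procedure earlier:

WriteDataToFile(dt, "D:\\test.txt");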
Also, take a look at this.
using System;
using System.Web;
using System.IO;
using System.Data;

namespace WebApplication1
{
    public partial class WebForm1 : System.Web.UI.Page
    {
        protected void Button1_Click(object sender, EventArgs e)
        {
            StreamWriter swExtLogFile = new StreamWriter("D:/Log/log.txt", true);
            DataTable dt = new DataTable();

            // Adding data to the DataTable
            dt.Columns.Add("ID");
            dt.Columns.Add("Name");
            dt.Columns.Add("Address");
            dt.Rows.Add(1, "venki", "Chennai");
            dt.Rows.Add(2, "Hanu", "London");
            dt.Rows.Add(3, "john", "Swiss");

            int i;
            swExtLogFile.Write(Environment.NewLine);
            foreach (DataRow row in dt.Rows)
            {
                object[] array = row.ItemArray;
                for (i = 0; i < array.Length - 1; i++)
                {
                    swExtLogFile.Write(array[i].ToString() + " | ");
                }
                swExtLogFile.WriteLine(array[i].ToString());
            }
            swExtLogFile.Write("*****END OF DATA****" + DateTime.Now.ToString());
            swExtLogFile.Flush();
            swExtLogFile.Close();
        }
    }
}

UWP - Compare data on JSON and database

I have a database called ebookstore.db, shown below:
and JSON as below:
When a slug in the JSON does not match a title in the database, I want ukomikText to display the number of JSON items whose slug has no matching title in the database.
Code:
string judulbuku;
try
{
    string urlPath1 = "https://...";
    var httpClient1 = new HttpClient(new HttpClientHandler());
    httpClient1.DefaultRequestHeaders.TryAddWithoutValidation("KIAT-API-KEY", "....");
    var values1 = new List<KeyValuePair<string, string>>
    {
        // note: form values must be strings
        new KeyValuePair<string, string>("halaman", "1"),
        new KeyValuePair<string, string>("limit", "100"),
    };
    var response1 = await httpClient1.PostAsync(urlPath1, new FormUrlEncodedContent(values1));
    response1.EnsureSuccessStatusCode();
    if (!response1.IsSuccessStatusCode)
    {
        MessageDialog messageDialog = new MessageDialog("Memeriksa update Komik gagal", "Gangguan Server");
        await messageDialog.ShowAsync();
    }
    string jsonText1 = await response1.Content.ReadAsStringAsync();
    JsonObject jsonObject1 = JsonObject.Parse(jsonText1);
    JsonArray jsonData1 = jsonObject1["data"].GetArray();
    foreach (JsonValue groupValue in jsonData1)
    {
        JsonObject groupObject = groupValue.GetObject();
        string id = groupObject["id"].GetString();
        string judul = groupObject["judul"].GetString();
        string slug = groupObject["slug"].GetString();

        BukuUpdate file1 = new BukuUpdate();
        file1.ID = id;
        file1.Judul = judul;
        file1.Slug = slug;

        List<String> title = sqlhelp.GetKomikData();
        foreach (string juduldb in title)
        {
            judulbuku = juduldb.Substring(juduldb.IndexOf('.') + 1);
            if (judulbuku != file1.Slug.Replace("-", "_") + ".pdf")
            {
                BukuData.Add(file1);
                ListBuku.ItemsSource = BukuData;
            }
            else
            {
                ukomikText.Text = "belum tersedia komik yang baru";
                ukomikText.Visibility = Visibility.Visible;
            }
        }
    }
    if (ListBuku.Items.Count > 0)
    {
        ukomikText.Text = BukuData.Count + " komik baru";
        ukomikText.Visibility = Visibility.Visible;
        jumlahbuku = BukuData.Count;
    }
    else
    {
        ukomikText.Text = "belum tersedia komik yang baru";
        ukomikText.Visibility = Visibility.Visible;
    }
public static List<String> GetKomikData()
{
    List<String> entries = new List<string>();
    using (SqliteConnection db =
        new SqliteConnection("Filename=ebookstore.db"))
    {
        db.Open();
        SqliteCommand selectCommand = new SqliteCommand
            ("SELECT title FROM books where folder_id = 67", db);
        SqliteDataReader query = selectCommand.ExecuteReader();
        while (query.Read())
        {
            entries.Add(query.GetString(0));
        }
        db.Close();
    }
    return entries;
}
BukuUpdate.cs:
public string ID { get; set; }
public string Judul { get; set; }
public string Slug { get; set; }
My problem: when checking the slugs from the JSON, the first slug is displayed repeatedly, as many times as there are rows in the database; after that, the second slug is displayed repeatedly just as often, and so on, as below:
How do I prevent slugs from being displayed repeatedly, so that each JSON item is shown only once?
The problem is that you have two nested foreach loops. What the code does, in simplified pseudocode:
For each item in JSON
    Load all rows from DB
    And for each loaded row
        Check if the current JSON item matches the row from DB and if not, output it
As you can see, if you have N items in the JSON and M rows in the database, this inevitably leads to N*M lines of output, except for those rare ones where the JSON item matches a specific row in the database.
If I understand it correctly, I assume that you instead want to check whether there is a row that matches the JSON item and, if not, output it. You could do this the following way:
List<String> title = sqlhelp.GetKomikData();
HashSet<string> dbItems = new HashSet<string>();
foreach (string juduldb in title)
{
    judulbuku = juduldb.Substring(juduldb.IndexOf('.') + 1);
    dbItems.Add(judulbuku);
}
...
foreach (JsonValue groupValue in jsonData1)
{
    ...
    // instead of the second foreach
    if (!dbItems.Contains(file1.Slug.Replace("-", "_") + ".pdf"))
    {
        // item is not in database
    }
    else
    {
        // item is in database
    }
}
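As a side note, the HashSet construction above can be condensed with LINQ; a sketch assuming the same title list:

using System.Linq;

var dbItems = new HashSet<string>(
    title.Select(juduldb => juduldb.Substring(juduldb.IndexOf('.') + 1)));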
Additional tips
Avoid calling GetKomikData inside the foreach. This method does not take any arguments, which means you are just accessing the database again and again without reason; that takes time and slows down execution significantly. Instead, call GetKomikData only once before the first foreach and then just use the title variable.
Don't assign ItemsSource every time the collection changes. This will unnecessarily slow down the UI thread, as it will have to reload all the items with each loop. Instead, assign the property only once, after the outer foreach.
Write your code in one language. When you start mixing variable names in English with Indonesian, the code becomes confusing and less readable, and adds cognitive overhead.
Avoid non-descriptive variable names like file1 or jsonObject1. A variable name should be clear and tell you what it contains. When there is a number at the end, it usually means it could be named more clearly.
Use plurals for list variable names: instead of title, use titles.

C# Reading CSV to DataTable and Invoke Rows/Columns

I am currently working on a small project and I am stuck with a problem I cannot manage to solve...
I have multiple .CSV files I want to read; they all contain the same kind of data, just with different values:
Header1;Value1;Info1
Header2;Value2;Info2
Header3;Value3;Info3
While reading the first file I need to create the headers. The problem is that they are not split across columns but across rows (as you can see above: Header1-Header3).
Then it needs to read Value1 - Value3 (they are listed in the second column), and on top of that I need to create another header, Header4, from the data of Info2, which is always placed in column 3, row 2 (the other values of column 3 I can ignore).
So the outcome after the first file should look like this:
Header1;Header2;Header3;Header4;
Value1;Value2;Value3;Info2;
And after multiple files it should look like this:
Header1;Header2;Header3;Header4;
Value1;Value2;Value3;Value4;
Value1b;Value2b;Value3b;Value4b;
Value1c;Value2c;Value3c;Value4c;
I tried it with OleDB but I get a "missing ISAM" error which I can't manage to fix. The code I used is the following:
public DataTable ReadCsv(string fileName)
{
    DataTable dt = new DataTable("Data");
    /* using (OleDbConnection cn = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=\"" +
       Path.GetDirectoryName(fileName) + "\";Extended Properties='text;HDR=yes;FMT=Delimited(,)';"))
    */
    // Note: "Extended Properties" was misspelled "Extendet Properties" in the original,
    // which is a likely cause of the "missing ISAM" error.
    using (OleDbConnection cn = new OleDbConnection("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" +
        Path.GetDirectoryName(fileName) + ";Extended Properties='text;HDR=yes;FMT=Delimited(,)';"))
    {
        // The connection belongs on the OleDbCommand itself; in the original it was
        // accidentally passed as an argument to string.Format.
        using (OleDbCommand cmd = new OleDbCommand(string.Format("select * from [{0}]", new FileInfo(fileName).Name), cn))
        {
            cn.Open();
            using (OleDbDataAdapter adapter = new OleDbDataAdapter(cmd))
            {
                adapter.Fill(dt);
            }
        }
    }
    return dt;
}
Another attempt used a StreamReader, but the headers end up in the wrong place and I don't know how to change this and do it for every file. The code I tried is the following:
public static DataTable ReadCsvFilee(string path)
{
    DataTable oDataTable = new DataTable();
    var fileNames = Directory.GetFiles(path);
    foreach (var fileName in fileNames)
    {
        // initialise a StreamReader and pass it the file location
        StreamReader oStreamReader = new StreamReader(fileName);

        // CONTROLS WHETHER WE SKIP A ROW OR NOT
        int RowCount = 0;
        // CONTROLS WHETHER WE CREATE COLUMNS OR NOT
        bool hasColumns = false;
        string[] ColumnNames = null;
        string[] oStreamDataValues = null;

        // read the stream data until the end
        while (!oStreamReader.EndOfStream)
        {
            String oStreamRowData = oStreamReader.ReadLine().Trim();
            if (oStreamRowData.Length > 0)
            {
                oStreamDataValues = oStreamRowData.Split(';');
                // Because the first row contains the column names, we populate
                // them by reading the first row; this branch is taken only once.
                // CHANGE TO CHECK FOR COLUMNS CREATED
                if (!hasColumns)
                {
                    ColumnNames = oStreamRowData.Split(';');
                    // loop through all the column names
                    foreach (string csvcolumn in ColumnNames)
                    {
                        DataColumn oDataColumn = new DataColumn(csvcolumn.ToUpper(), typeof(string));
                        // set string.Empty as the default value of the newly created column
                        oDataColumn.DefaultValue = string.Empty;
                        // add the newly created column to the table
                        oDataTable.Columns.Add(oDataColumn);
                    }
                    // SET COLUMNS CREATED
                    hasColumns = true;
                    // SET RowCount TO 0 SO WE KNOW TO SKIP THE COLUMNS LINE
                    RowCount = 0;
                }
                else
                {
                    // IF RowCount IS 0 THEN SKIP THE COLUMN LINE
                    if (RowCount++ == 0) continue;
                    // create a new DataRow with the same schema as oDataTable
                    DataRow oDataRow = oDataTable.NewRow();
                    // loop through all the columns
                    for (int i = 0; i < ColumnNames.Length; i++)
                    {
                        oDataRow[ColumnNames[i]] = oStreamDataValues[i] == null ? string.Empty : oStreamDataValues[i].ToString();
                    }
                    // add the newly created row with data to oDataTable
                    oDataTable.Rows.Add(oDataRow);
                }
            }
        }
        // close the oStreamReader object
        oStreamReader.Close();
        // release all the resources used by the oStreamReader object
        oStreamReader.Dispose();
    }
    return oDataTable;
}
I am thankful to everyone who is willing to help, and thanks for reading this far!
Sincerely yours
If I understood you right, a strict parsing like this would do:
string OpenAndParse(string filename, bool firstFile = false)
{
    var lines = File.ReadAllLines(filename);
    var parsed = lines.Select(l => l.Split(';')).ToArray();
    var header = $"{parsed[0][0]};{parsed[1][0]};{parsed[2][0]};{parsed[1][0]}\n";
    var data = $"{parsed[0][1]};{parsed[1][1]};{parsed[2][1]};{parsed[1][2]}\n";
    return firstFile
        ? $"{header}{data}"
        : $"{data}";
}
It would return, for the first file:
Header1;Header2;Header3;Header2
Value1;Value2;Value3;Value4
and if it is not the first file:
Value1;Value2;Value3;Value4
If I am correct, the rest is about running this against a list of files and joining the results in an output file.
EDIT: Against a directory:
void ProcessFiles(string folderName, string outputFileName)
{
    bool firstFile = true;
    foreach (var f in Directory.GetFiles(folderName))
    {
        File.AppendAllText(outputFileName, OpenAndParse(f, firstFile));
        firstFile = false;
    }
}
Note: I missed that you want a DataTable and not an output file. In that case you could simply create a list and put the results into it, making the list the data source for your DataTable. (Then why would you use semicolons in there? Probably all you need is to attach the array values to a list.)
(Adding as another answer just to make it uncluttered)
void ProcessMyFiles(string folderName)
{
    List<MyData> d = new List<MyData>();
    var files = Directory.GetFiles(folderName);
    foreach (var file in files)
    {
        OpenAndParse(file, d);
    }
    string[] headers = GetHeaders(files[0]);

    DataGridView dgv = new DataGridView { Dock = DockStyle.Fill };
    dgv.DataSource = d;
    dgv.ColumnAdded += (sender, e) => { e.Column.HeaderText = headers[e.Column.Index]; };

    Form f = new Form();
    f.Controls.Add(dgv);
    f.Show();
}

string[] GetHeaders(string filename)
{
    var lines = File.ReadAllLines(filename);
    var parsed = lines.Select(l => l.Split(';')).ToArray();
    return new string[] { parsed[0][0], parsed[1][0], parsed[2][0], parsed[1][0] };
}

void OpenAndParse(string filename, List<MyData> d)
{
    var lines = File.ReadAllLines(filename);
    var parsed = lines.Select(l => l.Split(';')).ToArray();
    var data = new MyData
    {
        Col1 = parsed[0][1],
        Col2 = parsed[1][1],
        Col3 = parsed[2][1],
        Col4 = parsed[1][2]
    };
    d.Add(data);
}

public class MyData
{
    public string Col1 { get; set; }
    public string Col2 { get; set; }
    public string Col3 { get; set; }
    public string Col4 { get; set; }
}
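Since the original goal was a DataTable rather than a grid binding, here is a small conversion sketch using the types above (ToDataTable is a hypothetical helper, not part of the answer's code):

// Sketch: convert the parsed rows into a DataTable.
// Note: DataTable column names must be unique; with this sample the fourth header
// would repeat "Header2", so rename it (e.g. to "Header4") before adding.
DataTable ToDataTable(List<MyData> rows, string[] headers)
{
    var dt = new DataTable("Data");
    foreach (var header in headers)
        dt.Columns.Add(header, typeof(string));
    foreach (var row in rows)
        dt.Rows.Add(row.Col1, row.Col2, row.Col3, row.Col4);
    return dt;
}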
I don't know if this is the best way to do it, but what I would have done in your case is rewrite the CSVs the conventional way while reading all the files, then create a stream containing the new CSV.
It would look something like this:
var csv = new StringBuilder();
csv.AppendLine("Header1;Header2;Header3;Header4");
foreach (var item in file)
{
    // Use the same semicolon delimiter as the header line.
    var newLine = string.Format("{0};{1};{2};{3}", item.value1, item.value2, item.value3, item.value4);
    csv.AppendLine(newLine);
}

// Create the stream
MemoryStream stream = new MemoryStream();
StreamReader reader = new StreamReader(stream);
// Fill your data table here with your values
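The snippet stops short of actually putting the text into the stream; a possible continuation, matching the variables above:

// Write the built CSV into the MemoryStream and rewind it so it can be read back.
var writer = new StreamWriter(stream);
writer.Write(csv.ToString());
writer.Flush();
stream.Position = 0;
// Now 'reader' (or any CSV parser) can consume the stream from the beginning.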
Hope this will help.

Bug in C# System.Collections.Generic.List<T>?

I'm writing simple code to read some data from a text file and store it in a C# List, but I'm having problems with it. Please help me figure out whether the problem is on my side or in the library. I've written the following function:
public List<EmpBO> ReadData()
{
    EmpBO temp = new EmpBO();
    List<EmpBO> lis = new List<EmpBO>(100);
    string[] tokens;
    string data;
    StreamReader sw = new StreamReader(new FileStream("emp.txt", FileMode.OpenOrCreate));
    int ind = 0;
    while ((data = sw.ReadLine()) != null)
    {
        Console.WriteLine("Reading " + data);
        tokens = data.Split(';');
        temp.Id = int.Parse(tokens[0]);
        temp.Name = tokens[1];
        temp.Salary = double.Parse(tokens[2]);
        temp.Br = double.Parse(tokens[3]);
        temp.Tax = double.Parse(tokens[4]);
        temp.Designation = tokens[5];
        //lis.Add(temp);
        lis.Insert(ind, temp);
        ind++;
    }
    sw.Close();

    Console.WriteLine("Read this material and returning list");
    for (int i = 0; i < lis.Count; i++)
    {
        Console.WriteLine("" + (lis.ElementAt(i)).Name);
    }
    //foreach (EmpBO ob in lis)
    //{
    //    Console.WriteLine("" + ob.Id + ob.Name);
    //}
    return lis;
}
The file emp.txt contains:
1;Ahmed;100000;20;1000;manager
2;Bilal;200000;15;2000;ceo
Now, as you can see, in the while loop I display what the StreamReader has read; it does 2 iterations in this case and displays:
Reading 1;Ahmed;100000;20;1000;manager
Reading 2;Bilal;200000;15;2000;ceo
As you can see, I'm saving this info in temp and inserting it into the list.
After the while loop is finished, when I traverse the list to see what is stored in it, it displays:
Read this material and returning list
Bilal
Bilal
Well, the second record is stored in the list twice and the first record is absent. What seems to be the problem? I've used the Add() method too, and a foreach loop for traversing the list (as you can see, it's commented out), but the result was the same. Please help.
Move this line
EmpBO temp = new EmpBO();
into the while-loop, so that it looks like this:
while ((data = sw.ReadLine()) != null)
{
    EmpBO temp = new EmpBO();
    Console.WriteLine("Reading " + data);
    tokens = data.Split(';');
    temp.Id = int.Parse(tokens[0]);
    temp.Name = tokens[1];
    temp.Salary = double.Parse(tokens[2]);
    temp.Br = double.Parse(tokens[3]);
    temp.Tax = double.Parse(tokens[4]);
    temp.Designation = tokens[5];
    //lis.Add(temp);
    lis.Insert(ind, temp);
    ind++;
}
You are not creating a new EmpBO for each entry, but rather overwriting the same object with the read values and adding it to the List again.
The effect is that you add the same object multiple times to the List.
In your code you created the EmpBO object only once. In the second iteration you modified the values in that same object. You have to create an instance of EmpBO inside the while loop, like below:
while ((data = sw.ReadLine()) != null)
{
    Console.WriteLine("Reading " + data);
    tokens = data.Split(';');
    EmpBO temp = new EmpBO();
    temp.Id = int.Parse(tokens[0]);
    temp.Name = tokens[1];
    temp.Salary = double.Parse(tokens[2]);
    temp.Br = double.Parse(tokens[3]);
    temp.Tax = double.Parse(tokens[4]);
    temp.Designation = tokens[5];
    //lis.Add(temp);
    lis.Insert(ind, temp);
    ind++;
}
This isn't a direct answer to the question, but your code has other problems.
Both your FileStream and StreamReader should be disposed of after use.
Alternatively, you could write your code like this:
public List<EmpBO> ReadData()
{
    return File
        .ReadAllLines("emp.txt")
        .Select(data =>
        {
            var tokens = data.Split(';');
            return new EmpBO()
            {
                Id = int.Parse(tokens[0]),
                Name = tokens[1],
                Salary = double.Parse(tokens[2]),
                Br = double.Parse(tokens[3]),
                Tax = double.Parse(tokens[4]),
                Designation = tokens[5],
            };
        })
        .ToList();
}
That, hopefully, should be even easier.
You've inserted the same object twice. You have to create a new object in the loop; otherwise you will override the attributes on each iteration and simply add a reference to the same object over and over again. It's safe to assume that standard operations on the BCL classes work correctly, or, as Eric Lippert puts it, "Maybe there's something wrong with the universe, but probably not."
You simply need to change the start of the loop to this:
while ((data = sw.ReadLine()) != null)
{
    EmpBO temp = new EmpBO();
If you add the same object to a list twice, the values entered the first time are overridden, and the list shows the values from the second write, twice.
For example: take a list and add an object to it; modify that object; add it again.
When you print the values, you will get the values of the last write:
ob1.a = 5;
list1.Add(ob1);
// list1[0].a --> 5
ob1.a = 7;
list1.Add(ob1);
// list1[0].a --> 7, list1[1].a --> 7
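A self-contained sketch demonstrating this aliasing (the Item class and console wrapper are made up for the illustration):

using System;
using System.Collections.Generic;

class Item { public int A; }

class Program
{
    static void Main()
    {
        var list1 = new List<Item>();
        var ob1 = new Item { A = 5 };

        list1.Add(ob1);  // list1[0] references ob1; A is 5
        ob1.A = 7;       // mutates the one shared instance
        list1.Add(ob1);  // list1[1] references the same ob1

        Console.WriteLine(list1[0].A); // 7
        Console.WriteLine(list1[1].A); // 7 - both slots point to the same object
    }
}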
