I am writing a Windows Forms application that displays all the PDF files from a directory in a DataGridView.
The file name format is usually 12740-250-B-File Name (so basically XXXXX-XXX-X-XXXXXXX).
The first number is the project number, the second number after the dash is the series number, and the letter is the revision of the file.
I would like a button that, when pressed, finds the files with the same series number (XXXXX-Series No-Revision-XXXXXX) and shows me the latest revision, which will be the highest letter. So between 12763-200-A-HelloWorld and 12763-200-B-HelloWorld, I want 12763-200-B-HelloWorld to be the result of my query.
This is what I got so far:
private void button1_Click(object sender, EventArgs e)
{
}
private void button2_Click(object sender, EventArgs e)
{
String[] files = Directory.GetFiles(@"M:\Folder Directory", "*.pdf", SearchOption.AllDirectories);
DataTable table = new DataTable();
table.Columns.Add("File Name");
for (int i = 0; i < files.Length; i++)
{
FileInfo file = new FileInfo(files[i]);
table.Rows.Add(file.Name);
}
dataGridView1.DataSource = table;
}
Thanks in advance.
Note:
In the end, the files with the latest revision will be inserted into an Excel spreadsheet.
Assuming your collection is a list of file names, i.e. the result of Directory.GetFiles(...), the LINQ below will work. For it to work you need to be certain of the file format, as the splitting is very sensitive to that specific format.
var seriesNumber = "200";
var files = new List<string> { "12763-200-A-HelloWorld", "12763-200-B-HelloWorld" };
var matching = files.Where(x => x.Split('-')[1] == seriesNumber)
.OrderByDescending(x => x.Split('-')[2])
.FirstOrDefault();
Result:
Matching: "12763-200-B-HelloWorld"
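If you need the latest revision for every series in the folder at once, rather than a single hard-coded series number, the same splitting idea can be extended with GroupBy. This is only a sketch under the same assumption that every name strictly follows the XXXXX-XXX-X-Name format:
var latestPerSeries = files
    .Select(f => new { Name = f, Parts = f.Split('-') })
    .Where(x => x.Parts.Length >= 4)                  // skip names that don't fit the pattern
    .GroupBy(x => x.Parts[1])                         // group by series number
    .Select(g => g.OrderByDescending(x => x.Parts[2]) // highest revision letter first
                  .First().Name)
    .ToList();
Each entry in latestPerSeries is then the newest revision of one series, which you could later write to your Excel sheet.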
You can try the following:
string dirPath = @"M:\Folder Directory";
string filePattern = "*.pdf";
DirectoryInfo di = new DirectoryInfo(dirPath);
FileInfo[] files = di.GetFiles(filePattern, SearchOption.AllDirectories);
Dictionary<string, FileInfo> matchedFiles = new Dictionary<string, FileInfo>();
foreach (FileInfo file in files)
{
string filename = file.Name;
string[] seperatedFilename = filename.Split('-');
// We are assuming that filenames are consistent
// As such,
// the value at seperatedFilename[1] will always be Series No
// the value at seperatedFilename[2] will always be Revision
// If this is not the case in every scenario, the following code should be expanded to allow other cases
string seriesNo = seperatedFilename[1];
string revision = seperatedFilename[2];
if (matchedFiles.ContainsKey(seriesNo))
{
FileInfo matchedFile = matchedFiles[seriesNo];
string matchedRevision = matchedFile.Name.Split('-')[2];
// Compare on the char values - https://learn.microsoft.com/en-us/dotnet/api/system.string.compareordinal?view=netframework-4.7.2
// If the value is int, then it can be cast to integer for comparison
if (String.CompareOrdinal(revision, matchedRevision) > 0)
{
// This file is higher than the previous
matchedFiles[seriesNo] = file;
}
} else
{
// Record does not exist - so it is added by default
matchedFiles.Add(seriesNo, file);
}
}
// We have a list of all files which match our criteria
foreach (FileInfo file in matchedFiles.Values)
{
// TODO : Determine if the directory path is also required for the file
Console.WriteLine(file.FullName);
}
It splits the filename into component parts and compares the revision where the series numbers match, storing the result in a dictionary for further processing later.
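To plug this into your existing button handler, matchedFiles.Values can be fed into the same DataTable you already bind to the grid. A minimal sketch, assuming the dataGridView1 control and "File Name" column from your question:
DataTable table = new DataTable();
table.Columns.Add("File Name");
foreach (FileInfo latest in matchedFiles.Values)
{
    // Only the newest revision of each series ends up in the grid.
    table.Rows.Add(latest.Name);
}
dataGridView1.DataSource = table;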
This seems to be a good situation to use a dictionary in my opinion! You could try the following:
String[] files = new string[5];
//group of files with the same series number
files[0] = "12763-200-A-HelloWorld";
files[1] = "12763-200-X-HelloWorld";
files[2] = "12763-200-C-HelloWorld";
//another group of files with the same series number
files[3] = "12763-203-C-HelloWorld";
files[4] = "12763-203-Z-HelloWorld";
//all the distinct series numbers, since Split('-')[1] gives the second part of every string
var distinctSeriesNumbers = files.Select(f => f.Split('-')[1]).Distinct();
Dictionary<String, List<String>> filesDictionary = new Dictionary<string, List<String>>();
//for each series number, we will try to get all the files and add them to dictionary
foreach (var serieNumber in distinctSeriesNumbers)
{
var filesWithSerieNumber = files.Where(f => f.Split('-')[1] == serieNumber).ToList();
filesDictionary.Add(serieNumber, filesWithSerieNumber);
}
List<String> listOfLatestSeries = new List<string>();
//here we will go through the dictionary and get the latest file of each series number
foreach (KeyValuePair<String, List<String>> entry in filesDictionary)
{
listOfLatestSeries.Add(entry.Value.OrderByDescending(d => d.Split('-')[2]).First());
}
//now we have the latest revision of each series number in the list
MessageBox.Show(listOfLatestSeries[0]); //result : "12763-200-X-HelloWorld"
MessageBox.Show(listOfLatestSeries[1]); //result : "12763-203-Z-HelloWorld";
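To run this against your actual folder instead of the hard-coded array, you could build files from the directory listing first. A sketch, assuming the path from your question and that every name strictly follows the XXXXX-XXX-X-Name format:
// Take just the file names (without extension) so the Split('-') logic above still applies.
String[] files = Directory.GetFiles(@"M:\Folder Directory", "*.pdf", SearchOption.AllDirectories)
    .Select(Path.GetFileNameWithoutExtension)
    .ToArray();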
"0.0.0.0,""0.255.255.255"",""ZZ"""
"1.0.0.0,""1.0.0.255"",""AU"""
"1.0.1.0,""1.0.3.255"",""CN"""
"1.0.4.0,""1.0.7.255"",""AU"""
"1.0.8.0,""1.0.15.255"",""CN"""
"1.0.16.0,""1.0.31.255"",""JP"""
"1.0.32.0,""1.0.63.255"",""CN"""
"1.0.64.0,""1.0.127.255"",""JP"""
"1.0.128.0,""1.0.255.255"",""TH"""
"1.1.0.0,""1.1.0.255"",""CN"""
"1.1.1.0,""1.1.1.255"",""AU"""
"1.1.2.0,""1.1.63.255"",""CN"""
"1.1.64.0,""1.1.127.255"",""JP"""
"1.1.128.0,""1.1.255.255"",""TH"""
In Excel:
0.0.0.0,"0.255.255.255","ZZ"
1.0.0.0,"1.0.0.255","AU"
1.0.1.0,"1.0.3.255","CN"
1.0.4.0,"1.0.7.255","AU"
1.0.8.0,"1.0.15.255","CN"
1.0.16.0,"1.0.31.255","JP"
1.0.32.0,"1.0.63.255","CN"
1.0.64.0,"1.0.127.255","JP"
1.0.128.0,"1.0.255.255","TH"
1.1.0.0,"1.1.0.255","CN"
1.1.1.0,"1.1.1.255","AU"
1.1.2.0,"1.1.63.255","CN"
1.1.64.0,"1.1.127.255","JP"
1.1.128.0,"1.1.255.255","TH"
1.2.0.0,"1.2.2.255","CN"
1.2.3.0,"1.2.3.255","AU"
1.2.4.0,"1.2.127.255","CN"
1.2.128.0,"1.2.255.255","TH"
1.3.0.0,"1.3.255.255","CN"
1.4.0.0,"1.4.0.255","AU"
1.4.1.0,"1.4.127.255","CN"
1.4.128.0,"1.4.255.255","TH"
How can I split this CSV file?
For example, the first row should become 0.0.0.0, 0.255.255.255 and ZZ. And how can I add the data to a DataGridView with 3 columns?
You can do it the following way:
using System.IO;
static void Main(string[] args)
{
using(var reader = new StreamReader(@"C:\test.csv"))
{
List<string> listA = new List<string>();
List<string> listB = new List<string>();
while (!reader.EndOfStream)
{
var line = reader.ReadLine();
var values = line.Split(','); // or whatever delimiter you get by reading that file
listA.Add(values[0]);
listB.Add(values[1]);
}
}
}
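The snippet above only collects the values into lists. To actually show three columns in the grid, one option is to load the rows into a DataTable and bind it. This is only a sketch: it assumes your control is named dataGridView1, that the file has no header row, and it simply strips the extra quote characters before splitting (the column names are made up for illustration):
var table = new DataTable();
table.Columns.Add("RangeStart");
table.Columns.Add("RangeEnd");
table.Columns.Add("Country");
foreach (var line in File.ReadAllLines(@"C:\test.csv"))
{
    // Remove the surrounding/doubled quotes, then split on the commas.
    var values = line.Replace("\"", "").Split(',');
    if (values.Length >= 3)
        table.Rows.Add(values[0], values[1], values[2]);
}
dataGridView1.DataSource = table;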
A CSV file is either a tab-delimited or a comma-delimited file. That said, you have to read the file line by line and then separate the values in each line based on the delimiter character. The first line of a CSV file is usually the headers, which you can use to produce key/value pairs and make your collection easier to work with. For example:
Dictionary<int, Dictionary<String, String>> values = new Dictionary<int, Dictionary<String,String>>();
using(FileStream fileStream = new FileStream(@"D:\MyCSV.csv", FileMode.Open, FileAccess.Read, FileShare.Read)) {
using(StreamReader streamReader = new StreamReader(fileStream)){
//You can skip this line if there is no header
// Then instead of Dictionary<String,String> you use List<String>
var headers = streamReader.ReadLine().Split(',');
string[] line = null;
int lineNumber = 1;
while(!streamReader.EndOfStream){
line = streamReader.ReadLine().Split(',');
if(line.Length == headers.Length){
var temp = new Dictionary<String, String>();
for(int i = 0; i < headers.Length; i++){
// You can remove '"' character by line[i].Replace("\"", "") or through using the Substring method
temp.Add(headers[i], line[i]);
}
values.Add(lineNumber, temp);
}
lineNumber++;
}
}
}
In case the data structure of your CSV is constant and will not change in the future, you can develop a strongly typed data model and get rid of the Dictionary type. This approach will be more elegant and more efficient.
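For example, a minimal strongly typed sketch for the data in the question could look like the following; the class and property names are just assumptions for illustration, and the quotes are stripped before splitting:
public class IpRangeRow
{
    public string RangeStart { get; set; }
    public string RangeEnd { get; set; }
    public string Country { get; set; }
}
// Parse each line of the CSV into the model.
List<IpRangeRow> rows = File.ReadAllLines(@"D:\MyCSV.csv")
    .Select(l => l.Replace("\"", "").Split(','))
    .Where(parts => parts.Length >= 3)
    .Select(parts => new IpRangeRow
    {
        RangeStart = parts[0],
        RangeEnd = parts[1],
        Country = parts[2]
    })
    .ToList();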
First of all, your CSV lines are surrounded by quotes. Is it a copy/paste mistake? If not, you will need to sanitize the file into a valid CSV file.
You can try Cinchoo ETL - an open source library - to load the CSV file into a DataTable, which you can then assign to your DataGridView source.
I'll show you how to handle both cases.
Valid CSV: (test.csv)
0.0.0.0,"0.255.255.255","ZZ"
1.0.0.0,"1.0.0.255","AU"
1.0.1.0,"1.0.3.255","CN"
1.0.4.0,"1.0.7.255","AU"
1.0.8.0,"1.0.15.255","CN"
1.0.16.0,"1.0.31.255","JP"
1.0.32.0,"1.0.63.255","CN"
1.0.64.0,"1.0.127.255","JP"
1.0.128.0,"1.0.255.255","TH"
1.1.0.0,"1.1.0.255","CN"
1.1.1.0,"1.1.1.255","AU"
1.1.2.0,"1.1.63.255","CN"
1.1.64.0,"1.1.127.255","JP"
1.1.128.0,"1.1.255.255","TH"
Read CSV:
using (var p = new ChoCSVReader("test.csv"))
{
var dt = p.AsDataTable();
//Assign dt to DataGridView
}
Next approach
Invalid CSV: (test.csv)
"0.0.0.0,""0.255.255.255"",""ZZ"""
"1.0.0.0,""1.0.0.255"",""AU"""
"1.0.1.0,""1.0.3.255"",""CN"""
"1.0.4.0,""1.0.7.255"",""AU"""
"1.0.8.0,""1.0.15.255"",""CN"""
"1.0.16.0,""1.0.31.255"",""JP"""
"1.0.32.0,""1.0.63.255"",""CN"""
"1.0.64.0,""1.0.127.255"",""JP"""
"1.0.128.0,""1.0.255.255"",""TH"""
"1.1.0.0,""1.1.0.255"",""CN"""
"1.1.1.0,""1.1.1.255"",""AU"""
"1.1.2.0,""1.1.63.255"",""CN"""
"1.1.64.0,""1.1.127.255"",""JP"""
"1.1.128.0,""1.1.255.255"",""TH"""
Read CSV:
using (var p = new ChoCSVReader("test.csv"))
{
p.SanitizeLine += (o, e) =>
{
string line = e.Line as string;
if (line != null)
{
line = line.Substring(1, line.Length - 2);
line = line.Replace(#"""""", #"""");
}
e.Line = line;
};
var dt = p.AsDataTable();
//Assign dt to DataGridView
}
Hope it helps.
I have two files: Example1.csv and Example2.csv, note they are not comma-separated, but are saved with the 'csv' extension.
Example 1 has 1 column, which holds email addresses only.
Example 2 has many columns, one of which is the same email column that is in the Example1 csv file.
Example1.csv file
emails
abc@gmail.com
jhg@yahoo.com
...
...
Example 2.csv
Column1 column2 Column3 column4 emails
1 45 456 123 abc#gmail.com
2 89 898 254 jhg#yahoo.com
3 85 365 789 ...
Now I need to delete the rows in example2.csv that match the data in the example 1 file. For example, rows 1 and 2 should be removed, as in both of them the email matches.
string[] lines = File.ReadAllLines(@"C:\example2.csv");
var emails = File.ReadAllLines(@"C:\example1.csv");
List<string> linesToWrite = new List<string>();
foreach (string s in lines)
{
String[] split = s.Split(' ');
if (s.Contains(emails))
linesToWrite.Remove(s);
}
File.WriteAllLines("file3.csv", linesToWrite);
This should work:
var emails = new HashSet<string>(File.ReadAllLines(@"C:\example1.csv").Skip(1));
File.WriteAllLines("file3.csv", File.ReadAllLines("C:\example2.csv").Where(line => !emails.Contains(line.Split(',')[4]));
It reads all of file one, puts all emails into a structure where lookup is easy, then goes through all lines in the second file and writes only those to disk that don't match any of the existing emails in their 5th column. You may want to expand on many parts; for example there is little to no error handling. It also compares emails case-sensitively, although emails are normally not case-sensitive.
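If you want the comparison to be case-insensitive and slightly more defensive about short or malformed lines, a variant of the same idea could look like this. It keeps the assumption that the email is the fifth column and that the first line of example1.csv is a header:
// Case-insensitive lookup of the emails from the first file (header skipped).
var emails = new HashSet<string>(
    File.ReadAllLines(@"C:\example1.csv").Skip(1),
    StringComparer.OrdinalIgnoreCase);
var kept = File.ReadAllLines(@"C:\example2.csv")
    .Where(line =>
    {
        var cols = line.Split(',');
        // Keep any line that is too short to have an email column at all.
        return cols.Length <= 4 || !emails.Contains(cols[4]);
    });
File.WriteAllLines("file3.csv", kept);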
Variable line is not a string but a string array, same as lines; you are reading it in the same way as lines.
Also this line
if (s.Contains(line))
is not correct. You are trying to check if a string contains an array. If you need to check whether a line contains an email from the list, then this will be better:
if (split.Intersect(line).Any())
So, here is the final code.
var lines = File.ReadAllLines(@"C:\example2.csv");
var line = File.ReadAllLines(@"C:\example1.csv");
var linesToWrite = lines.ToList(); // start from all the lines, then remove the matching ones
foreach (var s in lines)
{
var split = s.Split(',');
if (split.Intersect(line).Any())
{
linesToWrite.Remove(s);
}
}
File.WriteAllLines("file3.csv", linesToWrite);
static void Main(string[] args)
{
var Example1CsvPath = @"C:\Inetpub\Poligon\Poligon\Resources\Example1.csv";
var Example2CsvPath = @"C:\Inetpub\Poligon\Poligon\Resources\Example2.csv";
var Example3CsvPath = @"C:\Inetpub\Poligon\Poligon\Resources\Example3.csv";
var EmailsToDelete = new List<string>();
var Result = new List<string>();
foreach(var Line in System.IO.File.ReadAllLines(Example1CsvPath))
{
if (!string.IsNullOrWhiteSpace(Line) && Line.IndexOf('@') > -1)
{
EmailsToDelete.Add(Line.Trim());
}
}
foreach (var Line in System.IO.File.ReadAllLines(Example2CsvPath))
{
if (!string.IsNullOrWhiteSpace(Line))
{
var Values = Line.Split(' ');
if (!EmailsToDelete.Contains(Values[4]))
{
Result.Add(Line);
}
}
}
System.IO.File.WriteAllLines(Example3CsvPath, Result);
}
I know this is 4 years old... but I got some ideas from this and I'd like to share my solution...
The idea behind this code is a simple CSV with a maximum of about 20 lines (reeeeally a maximum), so I decided to make something basic and not use a DB for this.
My solution is to rescan the CSV, saving every row except the one I want to delete into a list, and after scanning the CSV it writes the list back into the file, removing the row matching what I've passed in (textBox1).
List<string> _ = new();
try {
using (var reader = new StreamReader($"{Main.directory}\\bin\\ip.csv")) {
while (!reader.EndOfStream) {
var line = reader.ReadLine();
var values = line.Split(',');
if (values[0] == textBox1.Text || values[1] == textBox2.Text)
continue;
_.Add($"{values[0]},{values[1]},{values[2]},");
}
}
File.WriteAllLines($"{Main.directory}\\bin\\ip.csv", _);
} catch (Exception f) {
MessageBox.Show(f.Message);
}
I have to import 2 CSVs.
CSV 1 [49]: includes about 50 tab-separated columns.
CSV 2 [2]: includes 3 columns which should replace the ones at positions [3], [6] and [11] of my first CSV.
So here's what I do:
1) Import the CSV and split it into an array.
string employeedatabase = "MYPATH";
List<String> status = new List<String>();
StreamReader file2 = new System.IO.StreamReader(filename);
string line = file2.ReadLine();
while ((line = file2.ReadLine()) != null)
{
string[] ud = line.Split('\t');
status.Add(ud[0]);
}
String[] ud_status = status.ToArray();
PROBLEM 1: I have about 50 columns to handle and ud_status is just the first one, so do I need 50 lists and 50 string arrays?
2) Import the second CSV and split it into arrays.
List<String> vorname = new List<String>();
List<String> nachname = new List<String>();
List<String> username = new List<String>();
StreamReader file = new System.IO.StreamReader(employeedatabase);
string line3 = file.ReadLine();
while ((line3 = file.ReadLine()) != null)
{
string[] data = line3.Split(';');
vorname.Add(data[0]);
nachname.Add(data[1]);
username.Add(data[2]);
}
String[] db_vorname = vorname.ToArray();
String[] db_nachname = nachname.ToArray();
String[] db_username = username.ToArray();
PROBLEM 2: After loading these two CSVs I don't know how to combine them and replace the columns as mentioned above.
Something like this?
mynewArray = ud_status + "/t" + ud_xy[..n] + "/t" + changed_colum + ud_xy[..n];
save "mynewarray" into tablulator seperated csv with encoding "utf-8".
To read the file into a meaningful format, you should set up a class that defines the format of your CSV:
public class CsvRow
{
public string vorname { get; set; }
public string nachname { get; set; }
public string username { get; set; }
public CsvRow (string[] data)
{
vorname = data[0];
nachname = data[1];
username = data[2];
}
}
Then populate a list of this:
List<CsvRow> rows = new List<CsvRow>();
StreamReader file = new System.IO.StreamReader(employeedatabase);
string line3 = file.ReadLine();
while ((line3 = file.ReadLine()) != null)
{
rows.Add(new CsvRow(line3.Split(';')));
}
Similarly format your other CSV and include unused properties for the new fields. Once you have loaded both, you can populate the new properties from this list in a loop, matching the records by whatever common field the CSVs hopefully share. Then finally output the resulting data to a new CSV file.
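As a sketch of that last step, suppose the big employee CSV has been loaded into its own class the same way and both files share a username column (that shared key is purely an assumption here, since the question does not say what the common field is). The names EmployeeRow, employeeRows and ToTabSeparatedLine are hypothetical:
foreach (var employee in employeeRows)
{
    // Find the matching row from the small CSV by the assumed common key.
    var match = rows.FirstOrDefault(r => r.username == employee.username);
    if (match != null)
    {
        employee.vorname = match.vorname;
        employee.nachname = match.nachname;
    }
}
// Write the merged rows back out as a tab-separated, UTF-8 encoded file.
File.WriteAllLines("merged.csv",
    employeeRows.Select(e => e.ToTabSeparatedLine()),
    Encoding.UTF8);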
Your solution is not to use string arrays to do this. That will just drive you crazy. It's better to use the System.Data.DataTable object.
I didn't get a chance to test the LINQ lambda expression at the end of this (or really any of it, I wrote this on a break), but it should get you on the right track.
using (var ds = new System.Data.DataSet("My Data"))
{
ds.Tables.Add("File0");
ds.Tables.Add("File1");
string[] line;
using (var reader = new System.IO.StreamReader("FirstFile"))
{
//first we get columns for table 0
foreach (string s in reader.ReadLine().Split('\t'))
ds.Tables["File0"].Columns.Add(s);
while (!reader.EndOfStream)
{
line = reader.ReadLine().Split('\t');
//and now the rest of the data.
var r = ds.Tables["File0"].NewRow();
for (int i = 0; i < line.Length; i++)
{
r[i] = line[i];
}
ds.Tables["File0"].Rows.Add(r);
}
}
//we could probably do these in a loop or a second method,
//but you may want subtle differences, so for now we just do it the same way
//for file1
using (var reader2 = new System.IO.StreamReader("SecondFile"))
{
foreach (string s in reader2.ReadLine().Split('\t'))
ds.Tables["File1"].Columns.Add(s);
while (!reader2.EndOfStream)
{
line = reader2.ReadLine().Split('\t');
//and now the rest of the data.
var r = ds.Tables["File1"].NewRow();
for (int i = 0; i < line.Length; i++)
{
r[i] = line[i];
}
ds.Tables["File1"].Rows.Add(r);
}
}
//you now have these in functioning datatables. Because we named columns,
//you can call them by name specifically, or by index, to replace in the first datatable.
string[] columnsToReplace = new string[] { "firstColumnName", "SecondColumnName", "ThirdColumnName" };
for(int i = 0; i < ds.Tables[0].Rows.Count; i++)
{
//you didn't give a sign of any relation between the two tables
//so this is just by row, and assumes the row count is equivalent.
//This is also not advised.
//if there is a key these sets of data share
//you should join on them instead.
//each row takes the replacement values from the same-index row of the second table
DataRow dr = ds.Tables[0].Rows[i];
dr[3] = ds.Tables[1].Rows[i][columnsToReplace[0]];
dr[6] = ds.Tables[1].Rows[i][columnsToReplace[1]];
dr[11] = ds.Tables[1].Rows[i][columnsToReplace[2]];
}
//ds.Tables[0] now has the output you want.
string output = String.Empty;
foreach (var s in ds.Tables[0].Columns)
output = String.Concat(output, s ,"\t");
output = String.Concat(output, Environment.NewLine); // columns ready, now the rows.
foreach (DataRow r in ds.Tables[0].Rows)
output = string.Concat(output, string.Join("\t", r.ItemArray), Environment.NewLine);
if(System.IO.File.Exists("MYPATH"))
using (System.IO.StreamWriter file = new System.IO.StreamWriter("MYPATH")) //or a variable instead of string literal
{
file.Write(output);
}
}
With Cinchoo ETL - an open source file helper library - you can do the merge of the CSV files as below. It is assumed that the 2 CSV files contain the same number of lines.
string CSV1 = @"Id Name City
1 Tom New York
2 Mark FairFax";
string CSV2 = @"Id City
1 Las Vegas
2 Dallas";
dynamic rec1 = null;
dynamic rec2 = null;
StringBuilder csv3 = new StringBuilder();
using (var csvOut = new ChoCSVWriter(new StringWriter(csv3))
.WithFirstLineHeader()
.WithDelimiter("\t")
)
{
using (var csv1 = new ChoCSVReader(new StringReader(CSV1))
.WithFirstLineHeader()
.WithDelimiter("\t")
)
{
using (var csv2 = new ChoCSVReader(new StringReader(CSV2))
.WithFirstLineHeader()
.WithDelimiter("\t")
)
{
while ((rec1 = csv1.Read()) != null && (rec2 = csv2.Read()) != null)
{
rec1.City = rec2.City;
csvOut.Write(rec1);
}
}
}
}
Console.WriteLine(csv3.ToString());
Hope it helps.
Disclaimer: I'm the author of this library.
I'm struggling with what I'm trying to do, I'll explain...
First, sorry for my English.
I have this:
int counter = 0;
string line;
System.IO.StreamReader file = new System.IO.StreamReader(@"myfile.txt");
WebClient testador = new WebClient();
while ((line = file.ReadLine()) != null)
{
string[] campos = line.Split(':');
counter++;
}
file.Close();
I need to get the word in campos[0], e.g. campos[0] = apple,
and search for this word in another .txt file, e.g. myfile2.txt, and copy the next 10 letters.
Again, sorry for my English.
EDIT¹:
First I get the text in myfile.txt, e.g. line 1 of MyFile.txt is "apple:yeah:test". I use string[] campos = line.Split(':') to separate it, so campos[0] = apple. Now I need to search for apple in myfile2.txt and copy the next 10 letters from it.
Thanks guys. :D
Based on my understanding of what you want to do, I wrote this code in a way that you can follow step by step. Hopefully it helps you do what you want.
To start, just feed it your 2 text files and run it to see if it gives you the results that you want.
var file1 = @"C:\Users\User\Desktop\file1.txt";
var file2 = @"C:\Users\User\Desktop\file2.txt";
var file1Lines = File.ReadLines(file1);
var file2Text = File.ReadAllText(file2);
var file1WordList = new List<string>();
var file2WordListWithExtraTenLetters = new List<string>();
foreach (var l in file1Lines)
{
file1WordList.AddRange(l.Split(':'));
}
for (int i = 0; i < file1WordList.Count; i++)
{
if (file2Text.Contains(file1WordList[i]))
{
var indexOfWordFromFile1InFile2 = file2Text.IndexOf(file1WordList[i]);
var start = indexOfWordFromFile1InFile2;
// Take the matched word plus the next 10 letters, without running past the end of the file.
var length = Math.Min(file1WordList[i].Length + 10, file2Text.Length - start);
file2WordListWithExtraTenLetters.Add(file2Text.Substring(start, length));
}
}
// Test it
foreach (var element in file1WordList)
{
Console.WriteLine(element);
}
Console.WriteLine("=====================");
foreach (var element in file2WordListWithExtraTenLetters)
{
Console.WriteLine(element);
}