I am working with a CSV file and a DataGridView in a C# project for an inventory app, and I am trying to update a row in the CSV file.
When the user edits a row, I need to replace the current word with a new word, but my problem is that I need to save both the current word and the new word and keep a running total. In pseudocode:
foreach (DataGridViewRow row in dataGridView1.Rows)
{
    if (row in column is modified)
        update the specific row in the current file and reload it...
}
The CSV file looks like this.
Current:
1;2;;4;5
After an update:
1;2,A;;4;5 (device A changed, total: 1 time...)
After the next row is modified:
1;A;;4,B,C;5 (devices B and C changed, total changes: 2 times...)
With a database it would be easy to update the data, but I don't have SQL Server installed, so I don't think that option is available to me.
My goal is to track devices going out/in, so if you have a solution, please share it.
Short of using an SQL server, maybe something like LiteDB could help. You'd have LiteDB host your data, and export it to CSV whenever you need. Working with CSV files usually means you'll rewrite the whole file every time there is an update, which is slow and cumbersome. I recommend using CSV to transport data from point A to point B, but not to maintain data.
Also, if you really want to stick with CSV, have a look at the Microsoft ACE OLEDB driver, previously known as the JET driver. I use it to query CSV files, but I have never used it to update... so your mileage may vary.
Short of using an actual database or a database driver, you'll have to use a StreamReader along with a StreamWriter: read the file with the StreamReader and write the new file with the StreamWriter. This implies you'll have code around your StreamReader to find the correct line(s) to update.
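For illustration, here is a minimal sketch of that approach, assuming a semicolon-delimited file like the sample above; the temp-file swap and the lineIndex/columnIndex parameters are illustrative, not from your code:

using System.IO;

// Minimal sketch of the StreamReader/StreamWriter approach: stream the file
// to a temp copy, replacing one cell in one line, then swap the files.
static void UpdateCsvCell(string path, int lineIndex, int columnIndex, string newValue)
{
    string tempPath = path + ".tmp";
    using (var reader = new StreamReader(path))
    using (var writer = new StreamWriter(tempPath))
    {
        string line;
        int current = 0;
        while ((line = reader.ReadLine()) != null)
        {
            if (current == lineIndex)
            {
                string[] fields = line.Split(';');   // split the target line
                fields[columnIndex] = newValue;      // replace the one cell
                line = string.Join(";", fields);     // rebuild the line
            }
            writer.WriteLine(line);
            current++;
        }
    }
    File.Delete(path);
    File.Move(tempPath, path);                       // swap the new file in
}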
Here's the class I created and am using to interact with LiteDB. It's not all that robust, but it did exactly what I needed it to do at the time. I had to make changes to a slew of products hosted on my platform, and I used this to keep track of the progress.
using System;
using LiteDB;

namespace FixProductsProperty
{
    public enum ListAction
    {
        Add = 0,
        Remove,
        Update,
        Disable,
        Enable
    }

    class DbInteractions
    {
        public static readonly string dbFilename = "MyDatabaseName.db";
        public static readonly string dbItemsTableName = "MyTableName";

        public void ToDataBase(ListAction incomingAction, TrackingDbEntry dbEntry = null)
        {
            if (dbEntry == null)
            {
                throw new Exception("dbEntry can not be null");
            }

            // Open the database (or create it if it does not exist)
            using (var db = new LiteDatabase(dbFilename))
            {
                var backupListInDB = db.GetCollection<TrackingDbEntry>(dbItemsTableName);

                // override the action if needed
                if (incomingAction == ListAction.Add)
                {
                    var existingEntry = backupListInDB.FindOne(p => p.ProductID == dbEntry.ProductID);
                    if (existingEntry != null)
                    {
                        // the record already exists
                        incomingAction = ListAction.Update;
                        //IOException ex = new IOException("Err: Duplicate. " + dbEntry.ProductID + " is already in the database.");
                        //throw ex;
                    }
                    else
                    {
                        // the record does not already exist
                        incomingAction = ListAction.Add;
                    }
                }

                switch (incomingAction)
                {
                    case ListAction.Add:
                        backupListInDB.Insert(dbEntry);
                        break;
                    case ListAction.Remove:
                        //backupListInDB.Delete(p => p.FileOrFolderPath == backupItem.FileOrFolderPath);
                        if (dbEntry.ProductID != 0)
                        {
                            backupListInDB.Delete(dbEntry.ProductID);
                        }
                        break;
                    case ListAction.Update:
                        if (dbEntry.ProductID != 0)
                        {
                            backupListInDB.Update(dbEntry.ProductID, dbEntry);
                        }
                        break;
                    case ListAction.Disable:
                        break;
                    case ListAction.Enable:
                        break;
                    default:
                        break;
                }

                backupListInDB.EnsureIndex(p => p.ProductID);

                // Use LINQ to query documents
                //var results = backupListInDB.Find(x => x.Name.StartsWith("Jo"));
            }
        }
    }
}
I use it like this:
DbInteractions yeah = new DbInteractions();
yeah.ToDataBase(ListAction.Add, new TrackingDbEntry { ProductID = dataBoundItem.ProductID, StoreID = dataBoundItem.StoreID, ChangeStatus = true });
Sorry... my variable naming convention sometimes blows...
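The TrackingDbEntry class isn't shown here; judging from the usage above, it's presumably a small POCO along these lines (a hypothetical reconstruction, with ProductID assumed to serve as the LiteDB document id):

using LiteDB;

// Hypothetical reconstruction based on the usage above; not the original class.
public class TrackingDbEntry
{
    [BsonId] // ProductID is passed as the document id to Delete/Update above
    public int ProductID { get; set; }
    public int StoreID { get; set; }
    public bool ChangeStatus { get; set; }
}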
Related
I am using Visual Studio 2017 with the SMO DLL,
and I am trying to remove a file from a database's files with the following procedure:
public string RemoveFile(string fileName, string databaseName)
{
    Server srv = new Server(servConn);
    Database database = srv.Databases[databaseName];
    if (database != null)
    {
        var file = LoadFiles(databaseName).Where(a => a.Name == fileName);
        if (!file.Any())
        {
            SqlServerDisconnect();
            return "File Doesn't Exist. Kindly Enter the Right File Name";
        }
        else
        {
            DataFile fileToRemove = file.FirstOrDefault();
            database.FileGroups[fileToRemove.Parent.Name].Files.Remove(fileToRemove);
            database.Alter();
            return "File Removed Successfully";
        }
    }
    return "Database Doesn't Exist"; // added so all code paths return a value
}
To keep the code short, I am not showing the servConn parameter or SqlServerDisconnect; I have used them in other places and I am sure they work well.
When I remove a file, taking its name from one of the existing files' logical names:
RemoveFile("File1", "MyDataBase");
I get the message:
You cannot perform operation Remove on an object in state Existing.
How can I update the state of the file before removing it, given that the state field is read-only? And is my way of removing the file right?
You are using SMO; however, as an alternative, if you can execute SQL directly, I would suggest simply using T-SQL to remove the file, like this:
ALTER DATABASE SchoolDb2012
REMOVE FILE schoolDataFile1;
GO
You can find detailed information here: https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-database-transact-sql-file-and-filegroup-options?view=sql-server-2017
I updated my procedure to drop the file directly rather than removing it from its file group,
and the exception disappeared:
public string RemoveFile(string fileName, string databaseName)
{
    Server srv = new Server(servConn);
    Database database = srv.Databases[databaseName];
    if (database != null)
    {
        var file = LoadFiles(databaseName).Where(a => a.Name == fileName);
        if (!file.Any())
        {
            SqlServerDisconnect();
            return "File Doesn't Exist. Kindly Enter the Right File Name";
        }
        else
        {
            DataFile fileToRemove = file.FirstOrDefault();
            fileToRemove.Drop();
            database.Alter();
            return "File Removed Successfully";
        }
    }
    return "Database Doesn't Exist"; // added so all code paths return a value
}
You can use SetState to change the SMO object's state, e.g.:
var removeColumn = target.Columns.OfType<Column>().Where(x => !source.Columns.Contains(x.Name)).ToList();
foreach (Column columItem in removeColumn)
{
    // To remove this item, use SetState instead of Remove
    columItem.SetState(SqlSmoState.ToBeDropped);
    // Using Remove here would give "You cannot perform operation Remove on an object in state Existing"
    //target.Columns.Remove(columItem.Name);
}
I am having difficulties UPDATING the database via LINQ to SQL; inserting a new record works fine.
The code correctly inserts a new row and adds a primary key. The issue I am having is when I go to update (change a value that is already in the database) that same row: the database is not updating. It is the else branch of the code that does not work correctly. This is strange, because the DataContext is properly connected and functioning, given that it inserts a new row with no issues. Checking the database confirms this.
This is the code:
using System;
using System.Collections.Generic;
using System.Linq;
using Cost = Invoices.Tenant_Cost_TBL;

namespace Invoices
{
    class CollectionGridEvents
    {
        static string conn = Settings.Default.Invoice_DbConnectionString;

        public static void CostDataGridCellEditing(DataGridRowEditEndingEventArgs e)
        {
            using (DatabaseDataContext DataContext = new DatabaseDataContext(conn))
            {
                var sDselectedRow = e.Row.Item as Cost;
                if (sDselectedRow == null) return;
                if (sDselectedRow.ID == 0)
                {
                    sDselectedRow.ID = DateTime.UtcNow.Ticks;
                    DataContext.Tenant_Cost_TBLs.InsertOnSubmit(sDselectedRow);
                }
                else
                {
                    // these two lines are just for debugging
                    long lineToUpdateID = 636154619329526649; // this is the primary key of the line to be updated
                    long id = sDselectedRow.ID; // this is to check that the primary key on the selected line is the same

                    // these 3 lines are to ensure I am entering actual data into the DB
                    int? amount = sDselectedRow.Cost_Amount;
                    string name = sDselectedRow.Cost_Name;
                    int? quantity = sDselectedRow.Cost_Quantity;

                    sDselectedRow.Cost_Amount = amount;
                    sDselectedRow.Cost_Name = name;
                    sDselectedRow.Cost_Quantity = quantity;
                }
                try
                {
                    DataContext.SubmitChanges();
                }
                catch (Exception ex)
                {
                    Alert.Error("Did not save", "Error", ex);
                }
            }
        }
    }
}
And I am calling the method from this,
private void CostDataGrid_RowEditEnding(object sender, DataGridRowEditEndingEventArgs e)
{
    CollectionGridEvents.CostDataGridCellEditing(e);
}
The lineToUpdateID is copied directly from the database and is just there to check that the currently selected row's primary key is the same, so I know I am trying to update the same row.
I have looked through as many of the same type of issues here on SO as I could find, such as this one: Linq-to-Sql SubmitChanges not updating fields … why?. But I am still no closer to finding out what is going wrong.
Any ideas would be much appreciated.
EDIT: Cost is just shorthand for this: using Cost = Invoices.Tenant_Cost_TBL;
You cannot do that. You need to get the record out of the database and then update that record. Then save it back. Like this:
else
{
    // first get it
    var query =
        from ord in DataContext.Tenant_Cost_TBLs
        where ord.ID == 636154619329526649
        select ord;

    // then update it
    // Most likely you will have one record here
    foreach (Tenant_Cost_TBL ord in query)
    {
        ord.Cost_Amount = sDselectedRow.Cost_Amount;
        // ... and the rest
        // Insert any additional changes to column values.
    }
}
try
{
    DataContext.SubmitChanges();
}
catch (Exception ex)
{
    Alert.Error("Did not save", "Error", ex);
}
Here is an example you can follow.
Or you can use a direct query if you do not want to select first:
DataContext.ExecuteCommand("update Tenant_Cost_TBLs set Cost_Amount =0 where ...", null);
Your object (Cost) is not attached to the DB context. You should attach it, then save changes. Check the solution here.
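For illustration, a minimal sketch of the attach approach, assuming the row was created outside the current DataContext; note that in LINQ to SQL, Attach(entity, true) requires the table to have a version/timestamp member or UpdateCheck.Never on its columns:

using (DatabaseDataContext dc = new DatabaseDataContext(conn))
{
    // Attach the detached row as modified, then persist the change.
    dc.Tenant_Cost_TBLs.Attach(sDselectedRow, true); // true = treat as modified
    dc.SubmitChanges();
}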
I have a method that queries a table for the count of its records. QA has discovered an "edge case" where, if a particular operation is canceled in a particular order and speed (as fast as possible), the GUI "forgets" about the rest of the records in that table (the contents of the tables are uploaded to a server; when each one finishes, the corresponding table is deleted).
To be clear, this table that is having records deleted from it and then queried for count ("workTables") is a table of table names, which are deleted after they are processed.
What I have determined (I'm pretty sure) is that this anomaly occurs when a record from the "workTables" table is in the process of being deleted at the moment the workTables table is queried for the count of its records. This causes an exception, which causes the method to return -1, which in our case indicates that the GUI should not display those records.
Is there a way to check whether a table is in the process of having a record deleted from it, and wait until that operation has completed before proceeding with the query, so that it won't throw an exception?
For those interested in the specifics, this method is the one that, under those peculiar circumstances, throws the exception:
public int isValidTable(string tableName)
{
    int validTable = -1;
    string tblQuery = "SELECT COUNT(*) FROM ";
    tblQuery += tableName;

    openConnectionIfPossibleAndNecessary();

    try
    {
        SqlCeCommand cmd = objCon.CreateCommand();
        cmd.CommandText = tblQuery;
        object objcnt = cmd.ExecuteScalar();
        validTable = Int32.Parse(objcnt.ToString());
    }
    catch (Exception ex)
    {
        validTable = -1;
    }
    return validTable;
}
...and this is the method that deletes a record from the "workTables" table after the corresponding table has had its contents uploaded:
private void DropTablesAndDeleteFromTables(string recordType, string fileName)
{
    try
    {
        WorkFiles wrkFile = new WorkFiles();
        int tableOK = 0;
        DataSet workfiles;
        tableOK = wrkFile.isValidWorkTable(); // -1 == "has no records"
        if (tableOK > 0) //Table has at least one record
        {
            workfiles = wrkFile.getAllRecords();
            //Go thru dataset and find filename to clean up after
            foreach (DataRow row in workfiles.Tables[0].Rows)
            {
                . . .
                dynSQL = string.Format("DELETE FROM workTables WHERE filetype = '{0}' and Name = '{1}'", tmpType, tmpStr);
                dbconn = DBConnection.GetInstance();
                dbconn.DBCommand(dynSQL, false);
                populateListBoxWithWorkTableData();
                return;
            } // foreach (DataRow row in workfiles.Tables[0].Rows)
        }
    }
    catch (Exception ex)
    {
        SSCS.ExceptionHandler(ex, "frmCentral.DropTablesAndDeleteFromTables");
    }
}
// method called by DropTablesAndDeleteFromTables() above
public int isValidWorkTable() //reverted to old way to accommodate old version of DBConnection
{
    // Pass the buck
    return dbconn.isValidTable("workTables");
}
I know this code is very funky and klunky and kludgy; refactoring it to make more sense and be more easily understood is a long and ongoing process.
UPDATE
I'm not able to test this code:
lock (this)
{
    // drop the table
}
...yet, because the handheld is no longer allowing me to copy files to it (I get "Cannot copy [filename].[dll/exe]: The device has either stopped responding or has been disconnected", although it is connected, as shown by ActiveSync).
If that doesn't work, I might have to try this:
// global var
bool InDropTablesMethod;

// before querying the database from elsewhere:
while (InDropTablesMethod)
{
    Pause(500);
}
UPDATE 2
I've finally been able to test my lock code (copies of binaries were present in memory, not allowing me to overwrite them; the StartUp folder had a *.lnk to the .exe, so every time I started the handheld, it tried to run the buggy versions of the .exe), but it doesn't work - I still get the same conflict/contention.
UPDATE 3
What seems to work, as kludgy as it may be, is:
public class CCRUtils
{
    public static bool InDropTablesMethod;
. . .
if (CCRUtils.InDropTablesMethod) return;
CCRUtils.InDropTablesMethod = true;
. . . // do it all; can you believe somebody from El Cerrito has never heard of CCR?
CCRUtils.InDropTablesMethod = false;
UPDATE 4
Wrote too soon - the bug is back. I added this MessageBox.Show(), and I do indeed see the text "proof of code re-entrancy" at run-time:
while (HHSUtils.InDropTablesMethod)
{
    MessageBox.Show("proof of code re-entrancy");
    i++;
    if (i > 1000000) return;
}
try
{
    HHSUtils.InDropTablesMethod = true;
    . . .
}
HHSUtils.InDropTablesMethod = false;
...so my guess that code re-entrancy may be a problem is correct...
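One note on why the earlier lock attempt could not catch this: .NET Monitor locks (the lock statement) are re-entrant on the same thread, so if the delete and the query both happen on the UI thread, lock waves the re-entrant call straight through. A guard flag with an early return, wrapped in try/finally so it always resets, is a common pattern for that case. A sketch, assuming single-UI-thread re-entrancy (the wrapper name is made up):

private static bool inDropTables; // guard flag; assumes all calls come from the UI thread

private void DropTablesAndDeleteFromTablesGuarded(string recordType, string fileName)
{
    if (inDropTables) return;     // already running; bail out instead of re-entering
    inDropTables = true;
    try
    {
        DropTablesAndDeleteFromTables(recordType, fileName);
    }
    finally
    {
        inDropTables = false;     // always reset, even if an exception is thrown
    }
}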
I'm building a console application that has to process a bunch of documents.
To keep it simple, the process is:
for each year between X and Y, query the DB to get a list of document references to process
for each of these references, process a local file
The process method is, I think, independent and should be parallelizable as long as the input arguments are different:
private static bool ProcessDocument(
    DocumentsDataset.DocumentsRow d,
    string langCode
)
{
    try
    {
        var htmFileName = d.UniqueDocRef.Trim() + langCode + ".htm";
        var htmFullPath = Path.Combine(@"x:\path", htmFileName);

        var missingHtmlFile = !File.Exists(htmFullPath);
        if (!missingHtmlFile)
        {
            var html = File.ReadAllText(htmFullPath);

            // ProcessHtml is quite long: it uses a regex search for a list of references,
            // which are other documents, then sends the result to a custom WS
            ProcessHtml(ref html);

            File.WriteAllText(htmFullPath, html);
        }
        return true;
    }
    catch (Exception exc)
    {
        Trace.TraceError("{0,8}Fail processing {1} : {2}", "[FATAL]", d.UniqueDocRef, exc.ToString());
        return false;
    }
}
In order to enumerate my documents, I have this method:
private static IEnumerable<DocumentsDataset.DocumentsRow> EnumerateDocuments()
{
    return Enumerable.Range(1990, 2020 - 1990).AsParallel().SelectMany(year => {
        return Document.FindAll((short)year).Documents;
    });
}
Document is a business class that wraps the retrieval of documents. The output of this method is a typed dataset (I'm returning the Documents table). The method takes a year, and I'm sure a document can't be returned by more than one year (the year is actually part of the key).
Note the use of AsParallel() here; I never had an issue with this one.
Now, my main method is :
var documents = EnumerateDocuments();
var result = documents.Select(d => {
    bool success = true;
    foreach (var langCode in new string[] { "-e", "-f" })
    {
        success &= ProcessDocument(d, langCode);
    }
    return new {
        d.UniqueDocRef,
        success
    };
});

using (var sw = File.CreateText("summary.csv"))
{
    sw.WriteLine("Level;UniqueDocRef");
    foreach (var item in result)
    {
        string level;
        if (!item.success) level = "[ERROR]";
        else level = "[OK]";
        sw.WriteLine(
            "{0};{1}",
            level,
            item.UniqueDocRef
        );
        //sw.WriteLine(item);
    }
}
This method works as expected in this form. However, if I replace
var documents = EnumerateDocuments();
with
var documents = EnumerateDocuments().AsParallel();
it stops working, and I don't understand why.
The error appears exactly here (in my process method):
File.WriteAllText(htmFullPath, html);
It tells me that the file is already opened by another program.
I don't understand what can cause my program not to work as expected. Since my documents variable is an IEnumerable returning unique values, why is my process method breaking?
Thanks for any advice.
[Edit] Code for retrieving documents:
/// <summary>
/// Get all documents in data store
/// </summary>
public static DocumentsDS FindAll(short? year)
{
    Database db = DatabaseFactory.CreateDatabase(connStringName); // MS Entlib
    DbCommand cm = db.GetStoredProcCommand("Document_Select");
    if (year.HasValue) db.AddInParameter(cm, "Year", DbType.Int16, year.Value);

    string[] tableNames = { "Documents", "Years" };

    DocumentsDS ds = new DocumentsDS();
    db.LoadDataSet(cm, ds, tableNames);

    return ds;
}
[Edit2] Possible source of my issue, thanks to mquander. If I write:
var test = EnumerateDocuments().AsParallel().Select(d => d.UniqueDocRef);
var testGr = test.GroupBy(d => d).Select(d => new { d.Key, Count = d.Count() }).Where(c=>c.Count>1);
var testLst = testGr.ToList();
Console.WriteLine(testLst.Where(x => x.Count == 1).Count());
Console.WriteLine(testLst.Where(x => x.Count > 1).Count());
I get this result :
0
1758
Removing the AsParallel returns the same output.
Conclusion: my EnumerateDocuments has something wrong and returns each document twice.
I'll have to dive in there, I think; the source enumeration is probably the cause.
I suggest you have each task put the file data into a global queue and have a dedicated thread take writing requests from the queue and do the actual writing.
In any case, the performance of writing in parallel to a single disk is much worse than writing sequentially, because the disk needs to seek to the next writing location, so you are just bouncing the disk around between seeks. It's better to do the writes sequentially.
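A minimal sketch of that producer/consumer idea using BlockingCollection from System.Collections.Concurrent (the tuple shape and names are illustrative, not from the original code; the tuple syntax needs a modern C#):

using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

var writeQueue = new BlockingCollection<(string Path, string Content)>();

// A single consumer task performs all disk writes sequentially.
var writer = Task.Run(() =>
{
    foreach (var item in writeQueue.GetConsumingEnumerable())
        File.WriteAllText(item.Path, item.Content);
});

// Producers (the parallel document processors) enqueue instead of writing:
// writeQueue.Add((htmFullPath, html));

// Once all producers are finished:
writeQueue.CompleteAdding();
writer.Wait();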
Is Document.FindAll((short)year).Documents thread-safe? The difference between the first and the second version is that in the second (broken) version, this call is running multiple times concurrently. That could plausibly be the cause of the issue.
It sounds like you're trying to write to the same file from multiple threads. Only one thread/program can write to a file at a given time, so you can't use Parallel here.
If you're reading from the same file, then you need to open the file with read-only permissions so as not to put a write lock on it.
The simplest way to fix the issue is to place a lock around your File.WriteAllText, assuming the writing is fast and it's worth parallelizing the rest of the code.
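For illustration, a sketch of that (the field name is made up):

// Class-level: a single shared gate so only one thread writes at a time.
private static readonly object writeGate = new object();

// ...inside ProcessDocument, around the write:
lock (writeGate)
{
    File.WriteAllText(htmFullPath, html);
}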
What's the best way to import a CSV file into a strongly-typed data structure?
Microsoft's TextFieldParser is stable and follows RFC 4180 for CSV files. Don't be put off by the Microsoft.VisualBasic namespace; it's a standard component of the .NET Framework, so just add a reference to the global Microsoft.VisualBasic assembly.
If you're compiling for Windows (as opposed to Mono) and don't anticipate having to parse "broken" (non-RFC-compliant) CSV files, then this would be the obvious choice, as it's free, unrestricted, stable, and actively supported, most of which cannot be said for FileHelpers.
See also: How to: Read From Comma-Delimited Text Files in Visual Basic for a VB code example.
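For reference, basic usage looks like this (a sketch; the file name is illustrative):

using Microsoft.VisualBasic.FileIO;

using (var parser = new TextFieldParser("people.csv"))
{
    parser.TextFieldType = FieldType.Delimited;
    parser.SetDelimiters(",");
    parser.HasFieldsEnclosedInQuotes = true;

    while (!parser.EndOfData)
    {
        string[] fields = parser.ReadFields(); // one record, quoting already handled
        // map fields[] onto your strongly-typed object here
    }
}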
Use an OleDB connection.
String sConnectionString = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\\InputDirectory\\;Extended Properties='text;HDR=Yes;FMT=Delimited'";
OleDbConnection objConn = new OleDbConnection(sConnectionString);
objConn.Open();
DataTable dt = new DataTable();
OleDbCommand objCmdSelect = new OleDbCommand("SELECT * FROM file.csv", objConn);
OleDbDataAdapter objAdapter1 = new OleDbDataAdapter();
objAdapter1.SelectCommand = objCmdSelect;
objAdapter1.Fill(dt);
objConn.Close();
If you're expecting fairly complex scenarios for CSV parsing, don't even think of rolling your own parser. There are a lot of excellent tools out there, like FileHelpers, or even ones from CodeProject.
The point is that this is a fairly common problem, and you can bet that a lot of software developers have already thought about and solved it.
I agree with #NotMyself. FileHelpers is well tested and handles all kinds of edge cases that you'll eventually have to deal with if you do it yourself. Take a look at what FileHelpers does and only write your own if you're absolutely sure that either (1) you will never need to handle the edge cases FileHelpers does, or (2) you love writing this kind of stuff and are going to be overjoyed when you have to parse stuff like this:
1,"Bill","Smith","Supervisor", "No Comment"
2 , 'Drake,' , 'O'Malley',"Janitor,
Oops, I'm not quoted and I'm on a new line!
Brian gives a nice solution for converting it to a strongly typed collection.
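For a flavor of what that looks like with FileHelpers (a sketch; the record class and file name are made up for illustration):

using FileHelpers;

// Field order must match the column order in the file.
[DelimitedRecord(",")]
public class Employee
{
    public int Id;
    [FieldQuoted('"', QuoteMode.OptionalForBoth)]
    public string FirstName;
    [FieldQuoted('"', QuoteMode.OptionalForBoth)]
    public string LastName;
    public string Title;
}

// Usage:
// var engine = new FileHelperEngine<Employee>();
// Employee[] records = engine.ReadFile("employees.csv");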
Most of the CSV parsing methods given don't take into account escaping fields or some of the other subtleties of CSV files (like trimming fields). Here is the code I personally use. It's a bit rough around the edges and has pretty much no error reporting.
public static IList<IList<string>> Parse(string content)
{
    IList<IList<string>> records = new List<IList<string>>();

    StringReader stringReader = new StringReader(content);

    bool inQuotedString = false;
    IList<string> record = new List<string>();
    StringBuilder fieldBuilder = new StringBuilder();
    while (stringReader.Peek() != -1)
    {
        char readChar = (char)stringReader.Read();

        if (readChar == '\n' || (readChar == '\r' && stringReader.Peek() == '\n'))
        {
            // If it's a \r\n combo consume the \n part and throw it away.
            if (readChar == '\r')
            {
                stringReader.Read();
            }

            if (inQuotedString)
            {
                if (readChar == '\r')
                {
                    fieldBuilder.Append('\r');
                }
                fieldBuilder.Append('\n');
            }
            else
            {
                record.Add(fieldBuilder.ToString().TrimEnd());
                fieldBuilder = new StringBuilder();

                records.Add(record);
                record = new List<string>();

                inQuotedString = false;
            }
        }
        else if (fieldBuilder.Length == 0 && !inQuotedString)
        {
            if (char.IsWhiteSpace(readChar))
            {
                // Ignore leading whitespace
            }
            else if (readChar == '"')
            {
                inQuotedString = true;
            }
            else if (readChar == ',')
            {
                record.Add(fieldBuilder.ToString().TrimEnd());
                fieldBuilder = new StringBuilder();
            }
            else
            {
                fieldBuilder.Append(readChar);
            }
        }
        else if (readChar == ',')
        {
            if (inQuotedString)
            {
                fieldBuilder.Append(',');
            }
            else
            {
                record.Add(fieldBuilder.ToString().TrimEnd());
                fieldBuilder = new StringBuilder();
            }
        }
        else if (readChar == '"')
        {
            if (inQuotedString)
            {
                if (stringReader.Peek() == '"')
                {
                    stringReader.Read();
                    fieldBuilder.Append('"');
                }
                else
                {
                    inQuotedString = false;
                }
            }
            else
            {
                fieldBuilder.Append(readChar);
            }
        }
        else
        {
            fieldBuilder.Append(readChar);
        }
    }

    record.Add(fieldBuilder.ToString().TrimEnd());
    records.Add(record);

    return records;
}
Note that this doesn't handle the edge case of fields not being delimited by double quotes, but merely having a quoted string inside. See this post for a bit of a better explanation, as well as some links to some proper libraries.
I was bored, so I modified some stuff I wrote. It tries to encapsulate the parsing in an OO manner while cutting down on the number of iterations through the file; it only iterates once, at the top foreach.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            // usage:
            // note this won't run as-is, because getting streams is not implemented,
            // but it will get you started
            CSVFileParser fileParser = new CSVFileParser();
            // TODO: configure fileParser

            PersonParser personParser = new PersonParser(fileParser);

            List<Person> persons = new List<Person>();
            // if the file is large and there is a good way to limit it
            // without having to reparse the whole file, you can use a
            // LINQ query if you desire
            foreach (Person person in personParser.GetPersons())
            {
                persons.Add(person);
            }

            // now we have a list of Person objects
        }
    }

    public abstract class CSVParser
    {
        protected String[] delimiters = { "," };

        protected internal IEnumerable<String[]> GetRecords()
        {
            Stream stream = GetStream();
            StreamReader reader = new StreamReader(stream);

            String[] aRecord;
            while (!reader.EndOfStream)
            {
                aRecord = reader.ReadLine().Split(delimiters,
                    StringSplitOptions.None);

                yield return aRecord;
            }
        }

        protected abstract Stream GetStream();
    }

    public class CSVFileParser : CSVParser
    {
        // to do: add logic to get a stream from a file
        protected override Stream GetStream()
        {
            throw new NotImplementedException();
        }
    }

    public class CSVWebParser : CSVParser
    {
        // to do: add logic to get a stream from a web request
        protected override Stream GetStream()
        {
            throw new NotImplementedException();
        }
    }

    public class Person
    {
        public String Name { get; set; }
        public String Address { get; set; }
        public DateTime DOB { get; set; }
    }

    public class PersonParser
    {
        public PersonParser(CSVParser parser)
        {
            this.Parser = parser;
        }

        public CSVParser Parser { get; set; }

        public IEnumerable<Person> GetPersons()
        {
            foreach (String[] record in this.Parser.GetRecords())
            {
                yield return new Person()
                {
                    Name = record[0],
                    Address = record[1],
                    DOB = DateTime.Parse(record[2]),
                };
            }
        }
    }
}
There are two articles on CodeProject that provide code for a solution, one that uses StreamReader and one that imports CSV data using the Microsoft Text Driver.
A good simple way to do it is to open the file and read each line into an array, linked list, or data structure of your choice. Be careful about handling the first line, though; for example:
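Here is a naive sketch of that (no quoted-field handling; the file name is made up):

using System.IO;
using System.Linq;

// Read every line, skip the header, split on commas.
var rows = File.ReadLines("data.csv")
    .Skip(1)                             // handle the first (header) line
    .Select(line => line.Split(','))
    .ToList();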
This may be over your head, but there seems to be a direct way to access them as well using a connection string.
Why not try using Python instead of C# or VB? It has a nice CSV module to import that does all the heavy lifting for you.
I had to use a CSV parser in .NET for a project this summer and settled on the Microsoft Jet Text Driver. You specify a folder using a connection string, then query a file using a SQL Select statement. You can specify strong types using a schema.ini file. I didn't do this at first, but then I was getting bad results where the type of the data wasn't immediately apparent, such as IP numbers or an entry like "XYQ 3.9 SP1".
One limitation I ran into is that it cannot handle column names longer than 64 characters; it truncates them. This shouldn't be a problem, except that I was dealing with very poorly designed input data. It returns an ADO.NET DataSet.
This was the best solution I found. I would be wary of rolling my own CSV parser, since I would probably miss some of the edge cases, and I didn't find any other free CSV parsing packages for .NET out there.
EDIT: Also, there can only be one schema.ini file per directory, so I dynamically appended to it to strongly type the needed columns. It will only strongly-type the columns specified, and infer for any unspecified field. I really appreciated this, as I was dealing with importing a fluid 70+ column CSV and didn't want to specify each column, only the misbehaving ones.
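For illustration, a schema.ini entry looks roughly like this (the column names here are made up; ColNameHeader, Format, and ColN are the standard keys):

[file.csv]
ColNameHeader=True
Format=CSVDelimited
Col1=IPAddress Text
Col2=ProductCode Text
Col3=Quantity Long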
I typed in some code. The result in the DataGridView looked good. It parses a single line of text into an ArrayList of objects.
enum quotestatus
{
    none,
    firstquote,
    secondquote
}

public static System.Collections.ArrayList Parse(string line, string delimiter)
{
    System.Collections.ArrayList ar = new System.Collections.ArrayList();
    StringBuilder field = new StringBuilder();
    quotestatus status = quotestatus.none;
    foreach (char ch in line.ToCharArray())
    {
        string chOmsch = "char";
        if (ch == Convert.ToChar(delimiter))
        {
            if (status == quotestatus.firstquote)
            {
                chOmsch = "char";
            }
            else
            {
                chOmsch = "delimiter";
            }
        }
        if (ch == Convert.ToChar(34))
        {
            chOmsch = "quotes";
            if (status == quotestatus.firstquote)
            {
                status = quotestatus.secondquote;
            }
            if (status == quotestatus.none)
            {
                status = quotestatus.firstquote;
            }
        }
        switch (chOmsch)
        {
            case "char":
                field.Append(ch);
                break;
            case "delimiter":
                ar.Add(field.ToString());
                field.Clear();
                break;
            case "quotes":
                if (status == quotestatus.firstquote)
                {
                    field.Clear();
                }
                if (status == quotestatus.secondquote)
                {
                    status = quotestatus.none;
                }
                break;
        }
    }
    if (field.Length != 0)
    {
        ar.Add(field.ToString());
    }
    return ar;
}
If you can guarantee that there are no commas in the data, then the simplest way would probably be to use String.Split.
For example:
String[] values = myString.Split(',');
myObject.StringField = values[0];
myObject.IntField = Int32.Parse(values[1]);
There may be libraries you could use to help, but that's probably as simple as you can get. Just make sure you can't have commas in the data, otherwise you will need to parse it better.