C# SQL Server: retrieving results and writing them out in CSV format

I had a look on the site and on Google, but I couldn't seem to find a good solution to what I'm trying to do.
Basically, I have a client-server application (C#) where I send the server a SQL SELECT statement (connecting to SQL Server 2008) and would like to return the results to the client in CSV form.
So far I have the following:
if (sqlDataReader.HasRows)
{
    while (sqlDataReader.Read())
    {
        // not really sure what to put here, or whether the while should be there!
    }
}
Unfortunately, I'm really new to connecting C# with SQL. I need tips on how to simply put the results into a string in CSV format. The columns and fields are likely to differ between queries, so I can't use the something[something] approach I've seen on a few sites. I'm not sure I'm being comprehensible, to be honest!
I would really appreciate any tips / points on how to go about this please!

Here is a method I use to dump any IDataReader out to a StreamWriter. I generally create the StreamWriter like this: new StreamWriter(Response.OutputStream). I convert any double-quote characters in the input into single-quote characters (maybe not the best way to handle this, but it works for me).
public static void createCsvFile(IDataReader reader, StreamWriter writer)
{
    string Delimiter = "\"";
    string Separator = ",";
    // write header row
    for (int columnCounter = 0; columnCounter < reader.FieldCount; columnCounter++)
    {
        if (columnCounter > 0)
        {
            writer.Write(Separator);
        }
        writer.Write(Delimiter + reader.GetName(columnCounter) + Delimiter);
    }
    writer.WriteLine(string.Empty);
    // data loop
    while (reader.Read())
    {
        // column loop
        for (int columnCounter = 0; columnCounter < reader.FieldCount; columnCounter++)
        {
            if (columnCounter > 0)
            {
                writer.Write(Separator);
            }
            writer.Write(Delimiter + reader.GetValue(columnCounter).ToString().Replace('"', '\'') + Delimiter);
        } // end of column loop
        writer.WriteLine(string.Empty);
    } // end of data loop
    writer.Flush();
}

As mentioned, there are quite a few issues with delimiters, escaping characters correctly, and formatting different types correctly. But if you are just looking for an example of putting data into a string, here is yet another one. It does not do any checking for the aforementioned complications.
public static void ReaderToString(IDataReader Reader)
{
    while (Reader.Read())
    {
        StringBuilder str = new StringBuilder();
        for (int i = 0; i < Reader.FieldCount; i++)
        {
            if (Reader.IsDBNull(i))
                str.Append("null");
            else
                str.Append(Reader.GetValue(i).ToString());
            if (i < Reader.FieldCount - 1)
                str.Append(", ");
        }
        // do something with the string here
        Console.WriteLine(str);
    }
}

When dealing with CSV files I usually go for the FileHelpers library: it has a SqlServerStorage class which you can use to read records from SQL Server and write them to a CSV file.

You may be able to adapt the implementation of a CSV writer available here.
If you also need to parse CSV files, the implementation here is relatively good.
The CSV format is more complicated than it looks - particularly if you're going to deal with arbitrary data coming back from a query. You would need to be able to handle escaping of special characters (like quotes and commas), dealing with line breaks, and the like. You are better off finding and using a proven implementation - especially if you're new to C#.
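To illustrate the escaping rules a proper implementation has to deal with, here is a minimal RFC 4180-style sketch (the helper name is mine, not from any of the libraries mentioned):

```csharp
using System;

// Quote a field only when needed (comma, quote, or line break present),
// doubling any embedded quotes per RFC 4180.
static string EscapeCsvField(string field)
{
    bool needsQuoting = field.IndexOfAny(new[] { ',', '"', '\r', '\n' }) >= 0;
    if (!needsQuoting)
        return field;
    return "\"" + field.Replace("\"", "\"\"") + "\"";
}

Console.WriteLine(EscapeCsvField("plain"));      // plain
Console.WriteLine(EscapeCsvField("Roberts, K")); // "Roberts, K"
```

A proven library does this (plus type formatting and line-break handling) for you, which is why it is the better choice here.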

You can get the table column names like this:
SqlConnection conn = new SqlConnection(connString);
conn.Open();
SqlCommand cmd = new SqlCommand(sql, conn);
SqlDataReader rdr = cmd.ExecuteReader();
DataTable schema = rdr.GetSchemaTable();
foreach (DataRow row in schema.Rows)
{
    foreach (DataColumn col in schema.Columns)
        Console.WriteLine(col.ColumnName + " = " + row[col]);
}
rdr.Close();
conn.Close();
Of course you could determine the column names from the first row only; here it does so on every row.
You can now put your own code to join the columns into a CSV line pretty easily...
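For instance, joining quoted column names into a header line is a one-liner with string.Join - a minimal sketch with made-up column names standing in for what reader.GetName(i) would return:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical column names, as you would collect them via reader.GetName(i)
var columnNames = new List<string> { "Id", "Name", "Date" };

// Quote each name and join with commas to form the header line
string header = string.Join(",", columnNames.Select(n => "\"" + n + "\""));
Console.WriteLine(header); // "Id","Name","Date"
```

The same pattern works for each data row's values.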
Thanks

Related

SQL Insert not considering blank values for the insert in my C# code

I have a nice piece of C# code which allows me to import data into a table from a file with fewer columns than the SQL table (as the file format is consistently bad).
My problem comes when I have a blank entry in a column. The VALUES statement does not pick up empty columns from the CSV, and so I receive the error
You have more insert columns than values
Here is the query printed to a message box...
As you can see, there is nothing for Crew members 4 to 11; below is the file...
Please see my code:
SqlConnection ADO_DB_Connection = new SqlConnection();
ADO_DB_Connection = (SqlConnection)(Dts.Connections["ADO_DB_Connection"].AcquireConnection(Dts.Transaction) as SqlConnection);
// Inserting data from the file into the table
int counter = 0;
string line;
string ColumnList = "";
// MessageBox.Show(fileName);
System.IO.StreamReader SourceFile = new System.IO.StreamReader(fileName);
while ((line = SourceFile.ReadLine()) != null)
{
    if (counter == 0)
    {
        ColumnList = "[" + line.Replace(FileDelimiter, "],[") + "]";
    }
    else
    {
        string query = "Insert into " + TableName + " (" + ColumnList + ") ";
        query += "VALUES('" + line.Replace(FileDelimiter, "','") + "')";
        // MessageBox.Show(query.ToString());
        SqlCommand myCommand1 = new SqlCommand(query, ADO_DB_Connection);
        myCommand1.ExecuteNonQuery();
    }
    counter++;
}
If you could advise how to include those fields in the insert that would be great.
Here is the same file but opened with a text editor and not given in picture format...
Date,Flight_Number,Origin,Destination,STD_Local,STA_Local,STD_UTC,STA_UTC,BLOC,AC_Reg,AC_Type,AdultsPAX,ChildrenPAX,InfantsPAX,TotalPAX,AOC,Crew 1,Crew 2,Crew 3,Crew 4,Crew 5,Crew 6,Crew 7,Crew 8,Crew 9,Crew 10,Crew 11
05/11/2022,241,BOG,SCL,15:34,22:47,20:34,02:47,06:13,N726AV,"AIRBUS A-319 ",0,0,0,36,AV,100612,161910,323227
Not touching the potential for SQL injection, as I'm free-handing this code. If this is a system-generated file (mainframe extract, dump from Dynamics or a LoB app), the probability of SQL injection is awfully low.
// Char required for counting
char FileDelimiterChar = FileDelimiter.ToCharArray()[0];
int columnCount = 0;
while ((line = SourceFile.ReadLine()) != null)
{
    if (counter == 0)
    {
        ColumnList = "[" + line.Replace(FileDelimiter, "],[") + "]";
        // How many columns in line 1. Assumes no embedded delimiters.
        // Add 1 as we will have one fewer delimiter than columns
        columnCount = line.Count(x => x == FileDelimiterChar) + 1;
    }
    else
    {
        string query = "Insert into " + TableName + " (" + ColumnList + ") ";
        // HACK: this fails if there are embedded delimiters
        int foundColumns = line.Count(x => x == FileDelimiterChar) + 1;
        // at this point, we know how many columns we have
        // and how many we should have.
        string csv = line.Replace(FileDelimiter, "','");
        // Pad out the current line with empty strings, aka ','
        // Probably a classier LINQ way of doing this, or a string.Concat approach
        for (int index = foundColumns; index < columnCount; index++)
        {
            csv += "','";
        }
        query += "VALUES('" + csv + "')";
        // MessageBox.Show(query.ToString());
        SqlCommand myCommand1 = new SqlCommand(query, ADO_DB_Connection);
        myCommand1.ExecuteNonQuery();
    }
    counter++;
}
Something like that should get you a solid shove in the right direction. The concept is that you need to inspect the first line and see how many columns you should have. Then for each line of data, how many columns do you actually have and then stub in the empty string.
If you change this up to use SqlCommand objects and parameters, the approximate logic is still the same. You'll add all the expected parameters by figuring out columns in the first line and then for each line you will add your values and if you have a short row, you just send the empty string (or dbnull or whatever your system expects).
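As a sketch of the padding step for that parameterized variant (the helper name and values here are mine, not the poster's): split each line, then pad the array out to the expected column count before binding one value per parameter.

```csharp
using System;
using System.Linq;

// Pad a split line out to the expected column count with empty strings,
// so every expected column/parameter gets a value.
static string[] PadRow(string[] values, int expectedColumns)
{
    if (values.Length >= expectedColumns)
        return values;
    return values
        .Concat(Enumerable.Repeat(string.Empty, expectedColumns - values.Length))
        .ToArray();
}

// A short crew row padded out to the 11 crew columns the header promises
var row = PadRow("100612,161910,323227".Split(','), 11);
Console.WriteLine(row.Length); // 11
```

Each padded value would then feed one parameter, e.g. cmd.Parameters.AddWithValue("@Crew4", row[3]) - the parameter name there is hypothetical.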
The big takeaway IMO is that CSV parsing libraries exist for a reason, and there are so many cases not addressed in the above pseudocode that you'll likely want to trash the current approach in favor of a standard parsing library and then, while you're at it, address the potential security flaws.
I see your updated comment that you'll take the formatting concerns back to the source party. If they can't address them, I would envision your SSIS package being
Script Task -> Data Flow task.
Script Task is going to wrangle the unruly data into a strict CSV dialect that a Data Flow task can handle. Preprocessing the data into a new file instead of trying to modify the existing in place.
The Data Flow then becomes a chip shot of Flat File Source -> OLE DB Destination
Here's how you can process this file... I would still ask for Json or XML though.
You need two outputs set up: Flight Info (the first 16 columns) and Flight Crew (a business key [flight number and date, maybe] and CrewID).
Seems to me the problem is how the crew is handled in the CSV.
So the basic steps are: read the file, use regex to split it, write the first 16 columns to output 1 and the rest (with the key) to Flight Crew, and skip the header row on your read.
var lines = System.IO.File.ReadAllLines("filepath");
for (int i = 1; i < lines.Length; i++)
{
    var r = new System.Text.RegularExpressions.Regex("(?:^|,)(?=[^\"]|(\")?)\"?((?(1)(?:[^\"]|\"\")*|[^,\"]*))\"?(?=,|$)"); // Some code I stole to split quoted CSVs
    var m = r.Matches(lines[i]); // Gives you all matches in a MatchCollection
    // first 16 columns are always correct
    OutputBuffer0.AddRow();
    OutputBuffer0.Date = m[0].Groups[2].Value;
    OutputBuffer0.FlightNumber = m[1].Groups[2].Value;
    // [And so on until m[15]]
    for (int j = 16; j < m.Count; j++)
    {
        OutputBuffer1.AddRow(); // This is a new output that you need to set up
        OutputBuffer1.FlightNumber = m[1].Groups[2].Value;
        // [Keep adding to make a business key here]
        OutputBuffer1.CrewID = m[j].Groups[2].Value;
    }
}
Be careful, as I just typed all this out to give you a general plan without any testing. For example, m[0] might actually need to be m[0].Value, and all of the data types will be strings that will need converting.
To check out how regex processes your rows, please visit https://regex101.com/r/y8Ayag/1 for explanation. You can even paste in your row data.
UPDATE:
I just tested this and it works now. I needed to escape the quotes in the regex string, specify that you want the value of group 2, and use the full System.IO.File.ReadAllLines.
The solution that I implemented in the end avoided the script task completely. Also meaning no SQL Injection possibilities.
I've done a flat-file import: everything into one column, then a STRING_SPLIT and a pivot in SQL, then inserted into a staging table before tidy-up and off into main.
Flat File Import to single column table -> SQL transform -> Load
This also allowed me to iterate through the files better using a foreach loop container.
ELT on this occasion.
Thanks for all the help and guidance.

C# syntax help, date formatting and adding " to strings

This is my first foray into C# as an SSIS and Informatica developer living only in SQL. I have a Script Task that reads data from a single SQL Server table via a query and simply writes that data to a text file. Everything works except for what I think are two small formatting problems I can't figure out.
The following requirements are in place for this build. Thanks in advance I'm here to answer any questions!
SQL query is purposefully set as a Select * to pick up any new columns added(already in code)
First 3 columns excluded from write to file(already in code)
Problems:
" " wrappers need to be added to all values, column and rows.
Date in database is true Date but when writing to file it shows Datetime. Needs to be only date.
Current:
ID,Name,Date,Ratio
12345678,John Wayne,12/31/2018 12:00:00 AM,1/1
Needs to be:
"ID","Name","Date","Ratio"
"12345678","John Wayne","2018-12-31","1/1"
Code:
// Declare variables
string DestinationFolder = Dts.Variables["User::Target_FilePath"].Value.ToString();
string QueryStage = Dts.Variables["User::Query_Stage"].Value.ToString();
//string TableName = Dts.Variables["User::TableName"].Value.ToString();
string FileName = Dts.Variables["User::OutputFileName"].Value.ToString();
string FileDelimiter = Dts.Variables["User::Target_FileDelim"].Value.ToString();
//string FileExtension = Dts.Variables["User::AC_Prefix"].Value.ToString();

// Use the ADO.NET connection from the SSIS package to get data from the table
SqlConnection myADONETConnection = new SqlConnection();
myADONETConnection = (SqlConnection)(Dts.Connections["ADO_TEST_CONN"].AcquireConnection(Dts.Transaction) as SqlConnection);

// Read data from the table or view into a data table
string query = QueryStage;
SqlCommand cmd = new SqlCommand(query, myADONETConnection);
//myADONETConnection.Open();
DataTable d_table = new DataTable();
d_table.Load(cmd.ExecuteReader());
myADONETConnection.Close();

string FileFullPath = DestinationFolder + "\\" + FileName + ".txt";
StreamWriter sw = new StreamWriter(FileFullPath, false);

// Write the header row to the file
int ColumnCount = d_table.Columns.Count;
for (int ic = 4; ic < ColumnCount; ic++)
{
    sw.Write(d_table.Columns[ic]);
    if (ic < ColumnCount - 1)
    {
        sw.Write(FileDelimiter);
    }
}
sw.Write(sw.NewLine);

// Write all rows to the file
foreach (DataRow dr in d_table.Rows)
{
    for (int ir = 4; ir < ColumnCount; ir++)
    {
        if (!Convert.IsDBNull(dr[ir]))
        {
            sw.Write(dr[ir].ToString());
        }
        if (ir < ColumnCount - 1)
        {
            sw.Write(FileDelimiter);
        }
    }
    sw.Write(sw.NewLine);
}
sw.Close();
Dts.TaskResult = (int)ScriptResults.Success;
Blindly running .ToString() on an object, which is done in the line sw.Write(dr[ir].ToString());, is going to use the default settings of converting that data type into a string. If it's a DateTime (the c# data type, not the SQL Date column type), then it will include the time information.
C# converts SQL column types (such as Date) into C# data types (DateTime). You need to detect this, just as you're detecting if a value is DBNull.
if (!Convert.IsDBNull(dr[ir]))
{
    if (dr[ir] is DateTime dt)
    {
        // use DateTime's specific string rendering
        sw.Write(dt.ToString("d"));
    }
    else
    {
        // fall back to standard string rendering
        sw.Write(dr[ir].ToString());
    }
}
You can change the format ("d" in this case) to something else if you need a different one. Keep in mind that the culture of the computer will affect how the string is rendered, unless you explicitly use a named culture.
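For example, pinning the rendering to the invariant culture produces the requested yyyy-MM-dd regardless of the machine's settings (a small illustrative sketch):

```csharp
using System;
using System.Globalization;

var value = new DateTime(2018, 12, 31);

// Default rendering is machine-dependent and includes the time portion,
// e.g. "12/31/2018 12:00:00 AM" under en-US
Console.WriteLine(value.ToString());

// Pinned to the invariant culture: always "2018-12-31"
Console.WriteLine(value.ToString("yyyy-MM-dd", CultureInfo.InvariantCulture));
```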
The other part of your problem is adding quotes around the printed values. This can be done with string concatenation. For example:
string result = "\"" + "my string" + "\"";
// result is "my string", with quotes
Remember to escape the quote mark.

How to bypass comma , double quote using Lumenworks

I am fetching data from a database in CSV format, i.e. http://iapp250.dev.sx.com:5011/a.csv?select[>date]from employee
But some columns in the table contain commas and double quotes, and after reading the CSV it becomes a comma-separated string. As a result, I get an index-out-of-bounds exception when I deserialize it.
I decided to use the Lumenworks CSV reader after reading some of the posts. I proceeded in the way below but still could not fix it. Please see the code snippet below.
List<List<string>> LineFields = new List<List<string>>();
using (var reader = new StreamReader(siteminderresponse.getResponseStream()))
{
    Char quotingCharacter = '\0';
    Char escapeCharacter = quotingCharacter;
    Char delimiter = '|';
    using (var csv = new CsvReader(reader, true, delimiter, quotingCharacter, escapeCharacter, '\0', ValueTrimmingOptions.All))
    {
        csv.DefaultParseErrorAction = ParseErrorAction.ThrowException;
        //csv.ParseError += csv_ParseError;
        csv.SkipEmptyLines = true;
        while (csv.ReadNextRecord())
        {
            List<string> fields = new List<string>(csv.FieldCount);
            for (int i = 0; i < csv.FieldCount; i++)
            {
                try
                {
                    string field = csv[i];
                    fields.Add(field.Trim('"'));
                }
                catch (MalformedCsvException ex)
                {
                    throw;
                }
            }
            LineFields.Add(fields);
        }
    }
}
The end result is that I get comma-separated fields like
1,16:00:01,BUY,-2,***ROBERTS, K***.
Please see ROBERTS, K, which was a single column value and is now comma-separated, which is why my serialization fails.
This appears to be a problem with the way you're formatting the results of your query. If you want "ROBERTS, K" to be read as a single field by a CSV reader, you need to put quotes around "ROBERTS, K" in your input to the CSV reader. LumenWorks is only doing what it's supposed to.
Similarly, if you want literal double-quotes in your parsed fields, you need to escape them with another double-quote. So, to properly express this as a single field:
Roberts, K is 6" taller than I am
...you'd need to pass this into the CSV parser:
"Roberts, K is 6"" taller than I am"

Read a CSV file in to an array using C#

I am trying to write code that will pull in, read, and separate a CSV file. It has four columns with no titles. I've been searching online for hours and no one really seems to have the answer, so I'm hoping someone here does. After it is read in, I need to be able to pull values very specifically, as that is part of the design. Thanks ahead of time!
Your question is a little vague, but I'll try and answer it as best I can.
A CSV file is (by definition) a file containing comma-separated values - the key here is that a comma is used as the delimiter. Personally, I find that using a different delimiter is less prone to nasties when parsing.
I've created the following test CSV file:
Column1,Column2,Column3,Column4
Row1Value1,Row1Value2,Row1Value3,Row1Value4
Row2Value1,Row2Value2,Row2Value3,Row2Value4
Row3Value1,Row3Value2,Row3Value3,Row3Value4
Row4Value1,Row4Value2,Row4Value3,Row4Value4
Row5Value1,Row5Value2,Row5Value3,Row5Value4
Here's some code to read that file into some simple structures that you can then manipulate. You might want to extend this code by creating classes for the columns and rows (and values as well).
string sFileContents = "";
using (StreamReader oStreamReader = new StreamReader(File.OpenRead("Test.csv")))
{
    sFileContents = oStreamReader.ReadToEnd();
}
List<string[]> oCsvList = new List<string[]>();
string[] sFileLines = sFileContents.Split(Environment.NewLine.ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
foreach (string sFileLine in sFileLines)
{
    oCsvList.Add(sFileLine.Split(",".ToCharArray(), StringSplitOptions.RemoveEmptyEntries));
}
int iColumnNumber = 0;
int iRowNumber = 0;
Console.WriteLine("Column{0}, Row{1} = \"{2}\"", iColumnNumber, iRowNumber, oCsvList[iRowNumber][iColumnNumber]);
iColumnNumber = 3;
iRowNumber = 2;
Console.WriteLine("Column{0}, Row{1} = \"{2}\"", iColumnNumber, iRowNumber, oCsvList[iRowNumber][iColumnNumber]);
Keep in mind that values are accessed by the row number first, and then the column number.
I hope this helps.
All you need to do is convert the file into a byte[] array and back into a string (or StringBuilder), then separate each entry and parse it, like so.
http://www.digitalcoding.com/Code-Snippets/C-Sharp/C-Code-Snippet-Convert-file-to-byte-array.html
And to convert to a string:
// C#: convert a byte array to a string
byte[] dBytes = ...
string str;
System.Text.UTF8Encoding enc = new System.Text.UTF8Encoding();
str = enc.GetString(dBytes);
You have to understand that you need to make a parser. I made one to pull in Yahoo stock quotes, basically splitting on colons to get the data.
This is a much simpler way to do what you want.
var lines = File.ReadAllLines(@"C:\location1.csv");
int lineCount = lines.Length;
int[,] properties = new int[lineCount, 4];
for (int i = 0; i < lineCount; i++)
{
    var values = lines[i].Split(';');
    for (int i2 = 0; i2 < 4; i2++)
    {
        properties[i, i2] = Convert.ToInt32(values[i2]);
    }
}

Storing a DataTable in a Flatfile

This is the first time I have done any sort of work with flat files.
I need this to be a plain txt file NOT XML.
I have written the following opting for a comma delimited format.
public static void DataTableToFile(string fileLoc, DataTable dt)
{
    StringBuilder str = new StringBuilder();
    // get the column headers
    foreach (DataColumn c in dt.Columns)
    {
        str.Append(c.ColumnName + ",");
    }
    str.Remove(str.Length - 1, 1);
    str.AppendLine();
    // write the data here
    foreach (DataRow dr in dt.Rows)
    {
        foreach (var field in dr.ItemArray)
        {
            str.Append(field.ToString() + ",");
        }
        str.Remove(str.Length - 1, 1);
        str.AppendLine();
    }
    try
    {
        Write(fileLoc, str.ToString());
    }
    catch (Exception ex)
    {
        //TODO: add error logging
    }
}
My question is: Can i do this better or faster?
And str.Remove(str.Length - 1, 1); is there to remove the trailing comma, which is the only way I could think of.
Any suggestions?
Use
public static void DataTableToFile(string fileLoc, DataTable dt)
{
    StringBuilder str = new StringBuilder();
    // get the column headers
    str.Append(String.Join(",", dt.Columns.Cast<DataColumn>()
                                          .Select(col => col.ColumnName)) + "\r\n");
    // write the data here
    dt.Rows.Cast<DataRow>().ToList()
           .ForEach(row => str.Append(string.Join(",", row.ItemArray) + "\r\n"));
    try
    {
        Write(fileLoc, str.ToString());
    }
    catch (Exception ex)
    {
        //TODO: add error logging
    }
}
The key point would be: there is no need to construct this in memory with a StringBuilder - you should instead be writing to a file via something like StreamWriter, i.e. via File.CreateText. The API is similar to StringBuilder, but you shouldn't try to remove - instead, don't add - i.e.
bool first = true;
foreach (...blah...)
{
    if (first) { first = false; }
    else { writer.Write(','); }
    // ... write the data ...
}
As another consideration: CSV is not just a case of adding commas. You need to think about quoted text (for data with , in), and multi-line data. Unless the data is very very simple. You might also want to make the format more explicit than just .ToString(), which is very culture-sensitive. The classic example would be large parts of Europe that use , as the decimal character, thus "CSV" often uses a different separator, to avoid having to quote everything. If the choice is available, personally I'd always use TSV instead of CSV - less problematic (in theory, although you still need to handle data with tabs in).
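Pulling those suggestions together - streaming to a writer instead of buffering, a "first" flag instead of Remove, and quoting each field - a sketch might look like this (the method name is mine, and real data would still need the quote-doubling shown to be enough for its content):

```csharp
using System;
using System.Data;
using System.IO;

// Stream a DataTable out as quoted CSV, doubling any embedded quotes
static void WriteCsv(DataTable table, TextWriter writer)
{
    // Header row
    bool first = true;
    foreach (DataColumn col in table.Columns)
    {
        if (first) { first = false; } else { writer.Write(','); }
        writer.Write("\"" + col.ColumnName.Replace("\"", "\"\"") + "\"");
    }
    writer.WriteLine();

    // Data rows
    foreach (DataRow row in table.Rows)
    {
        first = true;
        foreach (var field in row.ItemArray)
        {
            if (first) { first = false; } else { writer.Write(','); }
            writer.Write("\"" + Convert.ToString(field).Replace("\"", "\"\"") + "\"");
        }
        writer.WriteLine();
    }
}

var dt = new DataTable();
dt.Columns.Add("Name");
dt.Columns.Add("Ratio");
dt.Rows.Add("John Wayne", "1/1");

using var sw = new StringWriter();
WriteCsv(dt, sw);
Console.Write(sw.ToString());
```

In real use you would pass a StreamWriter from File.CreateText rather than the StringWriter used here for demonstration.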
