Delimit file path into multiple database columns - c#

I am trying to delimit a file path and populate it into multiple database columns.
So if the string were C:\Engineering\Structural\CAD\Baghouse.dwg then it would populate 8 database columns, 5 with values and 3 with "".
DIR01 | C:
DIR02 | Engineering
DIR03 | Structural
DIR04 | CAD
DIR05 | Baghouse.dwg
DIR06 |
DIR07 |
DIR08 |
I can easily delimit the file path using Path.DirectorySeparatorChar, and when I debug and look in the Locals box the array looks perfect.
What I can't figure out is how to access each element of the array and put them into separate columns.
private void cmdDelimitFilePath_Click(object sender, EventArgs e)
{
string SqlCmd;
string ScannedPath = String.Empty;
string DIR01 = String.Empty;
string DIR02 = String.Empty;
string DIR03 = String.Empty;
string DIR04 = String.Empty;
string DIR05 = String.Empty;
string DIR06 = String.Empty;
string DIR07 = String.Empty;
string DIR08 = String.Empty;
DataTable dt = new DataTable("DirectoryAnalysis");
SqlConnectionStringBuilder ConnStrBuilder = new SqlConnectionStringBuilder();
try
{
ConnStrBuilder.DataSource = txtServer.Text;
ConnStrBuilder.InitialCatalog = txtSourceSchema.Text;
ConnStrBuilder.Password = txtPassword.Text;
ConnStrBuilder.UserID = txtUser.Text;
//this connects to the database and creates the new fields
using (DbConnection connexx = new SqlConnection(ConnStrBuilder.ConnectionString))
{
connexx.Open();
using (DbCommand command = new SqlCommand("ALTER TABLE [DirectoryAnalysis] ADD [DIR01] varchar(100), [DIR02] varchar(100), [DIR03] varchar(100), [DIR04] varchar(100), [DIR05] varchar(100), [DIR06] varchar(100), [DIR07] varchar(100), [DIR08] varchar(100)"))
{
command.Connection = connexx;
command.ExecuteNonQuery();
}
}
// this connects to the database and populates the new fields
using (SqlConnection Conn = new SqlConnection(ConnStrBuilder.ConnectionString))
{
Conn.Open();
SqlCmd = "SELECT [DA_Id], [ScannedPath], [DIR01], [DIR02], [DIR03], [DIR04], [DIR05], [DIR06], [DIR07], [DIR08] FROM [DirectoryAnalysis]";
using (SqlDataAdapter da = new SqlDataAdapter(SqlCmd, Conn))
{
da.Fill(dt);
foreach (DataRow dr in dt.Rows)
{
ScannedPath = Convert.ToString(dr["ScannedPath"]);
//This returns each individual folder in the directories array.
string[] directories = ScannedPath.Split(Path.DirectorySeparatorChar);
//You can get the number of folders returned like this:
int folderCount = directories.Length;
// everything works perfectly up to here...
foreach (string part in directories)
{
// how to access elements of the array?
//this is as close as I have been...
DIR01 = Convert.ToString(part[0]);
dr["DIR01"] = DIR01;
DIR02 = Convert.ToString(part[1]);
dr["DIR02"] = DIR02;
DIR03 = Convert.ToString(part[2]);
dr["DIR03"] = DIR03;
// and repeat through 8 if this would work
}
}
MessageBox.Show("DirectoryAnalysis has been updated.", this.Text, MessageBoxButtons.OK, MessageBoxIcon.Information);
}
}
}
catch (Exception Ex)
{
MessageBox.Show(Ex.Message, this.Text, MessageBoxButtons.OK, MessageBoxIcon.Exclamation);
}
finally
{
this.Cursor = Cursors.Default;
}
}

If I understand correctly, the problem is the following:
You need access to all the elements of the array "directories" at the same time. However, you lose that by doing:
foreach (string part in directories)
because "part" is only the current element, and it's difficult(ish) to reach back to the previous n elements.
Hence, I think the fix is:
Stop using the foreach loop and access each element of the array like this:
dir1 = directories[0]
dir2 = directories[1]
and so on.
This way, you can also use them directly in your SQL insert statement.
Hope this helps!
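For illustration, here is a minimal sketch of that idea applied to the question's loop. It assumes the same DataTable/SqlDataAdapter setup as above and uses a SqlCommandBuilder to push the edited rows back; padding the remaining columns with "" matches the desired output:
foreach (DataRow dr in dt.Rows)
{
    string scannedPath = Convert.ToString(dr["ScannedPath"]);
    string[] directories = scannedPath.Split(Path.DirectorySeparatorChar);
    for (int i = 0; i < 8; i++)
    {
        // index the array directly; pad the columns beyond the path depth with ""
        dr["DIR0" + (i + 1)] = i < directories.Length ? directories[i] : "";
    }
}
// generate the UPDATE command from the SELECT (DA_Id is the key) and save the changes
using (SqlCommandBuilder builder = new SqlCommandBuilder(da))
{
    da.Update(dt);
}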

What about something like this:
string[] StrArr = filePath.Split('\\');
for (int i = 0; i < StrArr.Length; i++)
{
//Run this SQL command (in real code, use parameters rather than string formatting):
String.Format("UPDATE [table] SET DIR{0:00} = '{1}' WHERE ...", i + 1, StrArr[i])
}
Split string to Array
loop through array with a for loop
Update database with the values
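The same idea with a parameterized command avoids quoting and injection issues. A sketch, assuming the table and key column from the question (currentRowId is a hypothetical variable holding the DA_Id of the row being updated):
using (SqlCommand cmd = new SqlCommand(
    "UPDATE [DirectoryAnalysis] SET [DIR01]=@d1, [DIR02]=@d2, [DIR03]=@d3, [DIR04]=@d4, " +
    "[DIR05]=@d5, [DIR06]=@d6, [DIR07]=@d7, [DIR08]=@d8 WHERE [DA_Id]=@id", Conn))
{
    for (int i = 0; i < 8; i++)
    {
        // bind each path segment, padding missing ones with ""
        cmd.Parameters.AddWithValue("@d" + (i + 1), i < StrArr.Length ? StrArr[i] : "");
    }
    cmd.Parameters.AddWithValue("@id", currentRowId);
    cmd.ExecuteNonQuery();
}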

Related

How do I remove a line from the list based on the ID of that line?

I've been trying to figure this out for the past few days, but I just can't seem to get it work.
So I have a txt file which has this format:
id;könyvcím;szerző;kiadó;kiadási év;
(that is: id;book title;author;publisher;publication year;)
I am using a struct and a list such as this:
public static List<Books> BooksList = new List<Books>();
public struct Books
{
public int id;
public string title;
public string writer;
public string publisher;
public int published_year;
}
And I'm also putting all these into a List based on the struct like this:
StreamReader booksRead = new StreamReader("konyvek.txt", Encoding.UTF8);
booksRead.ReadLine();
while (!booksRead.EndOfStream)
{
string[] split = booksRead.ReadLine().Split(';');
Books inRead = new Books();
inRead.id = Convert.ToInt32(split[0]);
inRead.title = split[1];
inRead.writer = split[2];
inRead.publisher = split[3];
inRead.published_year = Convert.ToInt32(split[4]);
BooksList.Add(inRead);
}
booksRead.Close();
All I want is, for example, to find the line with ID 2 and remove that line from my text file. I've tried to get the index of the line I want and remove it from the file that way, but it fails even to get the index; I tried IndexOf, FindIndex, and looping by hand. I'm pretty sure my struct is not happy with me for using it like that, because I get errors such as this when I run my code:
System.InvalidCastException: 'Unable to cast object of type 'Books' to
type 'System.IConvertible'.'
Here is the way I'm trying to get the index of the line I want to remove
Books item = new Books();
for (int i = 0; i < BooksList.Count; i++)
{
if (Convert.ToInt32(textBox_id_delete.Text) == item.id)
{
RemoveAt = item.id;
}
}
int index = BooksList.FindIndex(x => Convert.ToInt32(x) == RemoveAt);
MessageBox.Show(Convert.ToString(index));
I'm pretty sure I'm approaching this extremely wrong, and I'd accept any kind of help.
You are doing it completely wrong for a number of reasons.
First, here is how you would do it the way you are doing it now:
void Main()
{
var filename = @"c:\myFolder\mybooklist.txt";
// read into an enumerable
var books = File.ReadAllLines(filename)
.Select(x => x.Split(';'))
.Select(x => new Book {
Id = int.TryParse(x[0], out int bookId)?bookId:0,
Title = x[1],
Writer = x[2],
Publisher = x[3],
Published_year=int.TryParse(x[4], out int year)?year:0
});
// remove the one with id 2
// and save back
var otherBooks = books.Where(b => b.Id != 2);
File.WriteAllLines(filename, otherBooks.Select(b => $"{b.Id};{b.Title};{b.Writer};{b.Publisher};{b.Published_year}"));
}
public struct Book
{
public int Id;
public string Title;
public string Writer;
public string Publisher;
public int Published_year;
}
And now what is wrong with this.
A text file is not a database but you are trying to use a text file as a database.
With a text file, you have no real control over whether the ID is unique (there might be N books with the ID 2).
(Side matter) You are using C#, but it looks like you are coming from another language and not using the naming conventions at all.
IMHO, instead you should simply use a database, an embedded one for example like LiteDb or Sqlite. If you care to see a sample with LiteDb or Sqlite, let me know.
EDIT: I am adding SQLite and LiteDb samples. In either case, you would need to add System.Data.SQLite and LiteDB respectively from NuGet and add the corresponding using statements.
In the case of SQLite, note that you could use LINQ by adding some drivers; here I used the ADO.NET commands directly and didn't use a Book class for mapping.
LiteDB, being a NoSQL database written in C# for C#, can directly use objects and supports LINQ out of the box.
Samples show only the surface for both.
SQLite sample:
private static readonly string dataFile = @"d:\temp\books.s3db";
void Main()
{
CreateDb(dataFile);
SeedSampleData(dataFile);
// List the current data
Console.WriteLine("Current Data");
Console.WriteLine("".PadRight(100, '='));
ListData(dataFile);
Console.WriteLine("".PadRight(100, '='));
DeleteSampleRow(dataFile);
// List the current data
Console.WriteLine("After deleting");
Console.WriteLine("".PadRight(100, '='));
ListData(dataFile);
Console.WriteLine("".PadRight(100, '='));
}
void DeleteSampleRow(string dbName)
{
string deleteById = "delete from books where id = @id";
string deleteByTitle = "delete from books where Title = @title";
string deleteByWriter = "delete from books where Writer = @writer";
using (SQLiteConnection cn = new SQLiteConnection($"Data Source={dbName}"))
using (SQLiteCommand cmdById = new SQLiteCommand(deleteById, cn))
using (SQLiteCommand cmdByTitle = new SQLiteCommand(deleteByTitle, cn))
using (SQLiteCommand cmdByWriter = new SQLiteCommand(deleteByWriter, cn))
{
cmdById.Parameters.Add("@id", DbType.Int32).Value = 2; // delete the book with id = 2
cmdByTitle.Parameters.Add("@title", DbType.String).Value = "Sample Title #5"; // delete all books having title "Sample Title #5"
cmdByWriter.Parameters.Add("@writer", DbType.String).Value = "Sample Writer #3"; // delete all books written by "Sample Writer #3"
cn.Open();
cmdById.ExecuteNonQuery();
cmdByTitle.ExecuteNonQuery();
cmdByWriter.ExecuteNonQuery();
cn.Close();
}
}
void ListData(string dbName)
{
string selectCommand = "select * from books";
using (SQLiteConnection cn = new SQLiteConnection($"Data Source={dbName}"))
using (SQLiteCommand cmd = new SQLiteCommand(selectCommand, cn))
{
cn.Open();
var r = cmd.ExecuteReader();
while (r.Read())
{
Console.WriteLine($"{r["id"]},{r["title"]},{r["writer"]},{r["publisher"]},{r["published_year"]}");
}
cn.Close();
}
}
private void CreateDb(string dbName)
{
if (File.Exists(dbName)) // if it exists, delete and create afresh, just for sampling
{
File.Delete(dbName);
}
string createTable = @"Create Table books (
id int primary key not null,
title varchar(500) not null,
writer varchar(100) not null,
publisher varchar(100) not null,
published_year int not null
)";
using (SQLiteConnection cn = new SQLiteConnection($"Data Source={dbName}"))
using (SQLiteCommand cmd = new SQLiteCommand(createTable, cn))
{
cn.Open();
cmd.ExecuteNonQuery();
cn.Close();
}
}
private void SeedSampleData(string dbName)
{
string insertCommand = @"insert into books
(id, title, writer, publisher, published_year)
values
(@id, @title, @writer, @publisher, @year);";
using (SQLiteConnection cn = new SQLiteConnection($"Data Source={dbName}"))
using (SQLiteCommand cmd = new SQLiteCommand(insertCommand, cn))
{
cmd.Parameters.Add("#id", DbType.Int32);
cmd.Parameters.Add("#title", DbType.String);
cmd.Parameters.Add("#writer", DbType.String);
cmd.Parameters.Add("#publisher", DbType.String);
cmd.Parameters.Add("#year", DbType.Int32);
Random r = new Random();
cn.Open();
int id = 1;
using (SQLiteTransaction transaction = cn.BeginTransaction())
{
cmd.Parameters["#id"].Value = id++;
cmd.Parameters["#title"].Value = $"Around the World in Eighty Days";
cmd.Parameters["#writer"].Value = $"Jules Verne";
cmd.Parameters["#publisher"].Value = $"Le Temps, Pierre-Jules Hetzel";
cmd.Parameters["#year"].Value = 1873;
cmd.ExecuteNonQuery();
cmd.Parameters["#id"].Value = id++;
cmd.Parameters["#title"].Value = $"A Tale of Two Cities";
cmd.Parameters["#writer"].Value = $"Charles Dickens";
cmd.Parameters["#publisher"].Value = $"Chapman & Hall";
cmd.Parameters["#year"].Value = 1859;
cmd.ExecuteNonQuery();
// add dummy 10 more rows
for (int i = 0; i < 10; i++)
{
cmd.Parameters["#id"].Value = id++;
cmd.Parameters["#title"].Value = $"Sample Title #{i}";
cmd.Parameters["#writer"].Value = $"Sample Writer #{r.Next(1, 5)}";
cmd.Parameters["#publisher"].Value = $"Sample Publisher #{i}";
cmd.Parameters["#year"].Value = r.Next(1980, 2022);
cmd.ExecuteNonQuery();
}
transaction.Commit();
}
// databases generally use some indexes
new SQLiteCommand(@"Create Index if not exists ixId on books (id);", cn).ExecuteNonQuery();
new SQLiteCommand(@"Create Index if not exists ixTitle on books (title);", cn).ExecuteNonQuery();
new SQLiteCommand(@"Create Index if not exists ixWriter on books (writer);", cn).ExecuteNonQuery();
new SQLiteCommand(@"Create Index if not exists ixPublisher on books (publisher);", cn).ExecuteNonQuery();
cn.Close();
}
}
LiteDb sample:
private static readonly string dataFile = @"d:\temp\books.litedb";
void Main()
{
//CreateDb(dataFile); // this step is not needed with LiteDB
// instead we just simply delete the datafile if it exists
// for starting afresh
// if it exists, delete and create afresh, just for sampling
// so you can run this same sample over and over if you wish
if (File.Exists(dataFile))
{
File.Delete(dataFile);
}
SeedSampleData(dataFile);
// List the current data
Console.WriteLine("Current Data");
Console.WriteLine("".PadRight(100, '='));
ListData(dataFile);
Console.WriteLine("".PadRight(100, '='));
DeleteSampleRow(dataFile);
// List the current data
Console.WriteLine("After deleting");
Console.WriteLine("".PadRight(100, '='));
ListData(dataFile);
Console.WriteLine("".PadRight(100, '='));
}
void DeleteSampleRow(string dbName)
{
using (var db = new LiteDatabase(dbName))
{
var bookCollection = db.GetCollection<Book>("Books");
// by ID
bookCollection.Delete(2);
// by Title
bookCollection.DeleteMany(c => c.Title == "Sample Title #5");
// by Writer
bookCollection.DeleteMany(c => c.Writer == "Sample Writer #3");
}
}
void ListData(string dbName)
{
using (var db = new LiteDatabase(dbName))
{
var bookCollection = db.GetCollection<Book>("Books");
foreach (var book in bookCollection.FindAll())
{
Console.WriteLine($"{book.Id},{book.Title},{book.Writer},{book.Publisher},{book.Published_year}");
}
}
}
private void SeedSampleData(string dbName)
{
Random r = new Random();
var books = new List<Book> {
new Book {Title="Around the World in Eighty Days",Writer = "Jules Verne",Publisher = "Le Temps, Pierre-Jules Hetzel",Published_year= 1873},
new Book {Title="A Tale of Two Cities",Writer = "Charles Dickens",Publisher = "Chapman & Hall",Published_year= 1859},
};
// add dummy 10 more rows
books.AddRange(Enumerable.Range(0, 10).Select(i => new Book
{
Title = $"Sample Title #{i}",
Writer = $"Sample Writer #{r.Next(1, 5)}",
Publisher = $"Sample Publisher #{i}",
Published_year = r.Next(1980, 2022)
}));
using (var db = new LiteDatabase(dbName))
{
var bookCollection = db.GetCollection<Book>("Books");
bookCollection.InsertBulk(books);
// databases generally use some indexes
// create the same indexes that we created in SQLite sample
bookCollection.EnsureIndex(c => c.Id);
bookCollection.EnsureIndex(c => c.Title);
bookCollection.EnsureIndex(c => c.Writer);
bookCollection.EnsureIndex(c => c.Publisher);
}
}
public class Book
{
public int Id {get;set;}
public string Title {get;set;}
public string Writer {get;set;}
public string Publisher {get;set;}
public int Published_year {get;set;}
}
Welcome to SO. I'm going to assume you've got a reason for keeping the data in a text file. As several answers have suggested, if you need it in a text file the easiest thing to do is to simply create a new file with the lines you want.
One way to do that is to make use of an iterator function to filter the lines. This lets you easily use the .NET File class to do the rest: creating the new file and removing the old one if you want to. Keeping the old file and archiving it can often be useful too, but anyway, here's a way to filter the lines.
static void Main(string[] _)
{
var filteredLines = FilterOnID(File.ReadAllLines("datafile.txt"), "2");
File.WriteAllLines("updated.datafile.txt", filteredLines);
// rename if necessary
File.Delete("datafile.txt");
File.Move("updated.datafile.txt", "datafile.txt");
}
static IEnumerable<string> FilterOnID(IEnumerable<string> lines, string id)
{
foreach (var line in lines)
{
var fields = line.Split(';');
if (fields.Length != 0 && !string.IsNullOrEmpty(fields[0]))
{
if (id == fields[0])
continue;
}
yield return line;
}
}
To test I added a simple file like so:
1;field1;field2;field3
2;field1;field2;field3
3;field1;field2;field3
4;field1;field2;field3
5;field1;field2;field3
6;field1;field2;field3
And after running you get this:
1;field1;field2;field3
3;field1;field2;field3
4;field1;field2;field3
5;field1;field2;field3
6;field1;field2;field3
When you have put the books into a list from the file, you can search BooksList for the book to remove,
delete it, and save BooksList back into the file.
Because Books is a struct (a value type), FirstOrDefault plus a null check won't compile, so find the index instead:
int removeIndex = BooksList.FindIndex(book => book.id == removeId);
if (removeIndex >= 0)
{
BooksList.RemoveAt(removeIndex);
}
var booksAsString = BooksList.Select(book => $"{book.id};{book.title};{book.writer};{book.publisher};{book.published_year}");
File.WriteAllLines("konyvek.txt", booksAsString, Encoding.UTF8);

Query taking longer than expected odbc sage

I am doing a simple select with a date filter (a month's range) where only 32 records are present, however it's taking 15 seconds to query and return the data. I am using Sage 50, as you can probably tell, and C#. I am building the query over ODBC; the same speeds occur if I use the ODBC query tool.
This is a straightforward select and it should not be taking that long to return the data through ODBC.
String SQL = string.Format("SELECT 'ORDER_NUMBER', 'ORDER_OR_QUOTE',
'ANALYSIS_1','ACCOUNT_REF','ORDER_DATE','NAME',
'COURIER_NUMBER','COURIER_NAME','CUST_TEL_NUMBER'
,'DESPATCH_DATE','ACCOUNT_REF', 'DEL_NAME', 'DEL_ADDRESS_1',
'DEL_ADDRESS_2', 'DEL_ADDRESS_3', 'DEL_ADDRESS_4', 'DEL_ADDRESS_5',
'INVOICE_NUMBER','ORDER_DATE','INVOICE_NUMBER_NUMERIC',
'CONTACT_NAME','CONSIGNMENT', 'NOTES_1', 'ITEMS_NET'
,'ITEMS_GROSS','QUOTE_STATUS' FROM SALES_ORDER WHERE ORDER_DATE
>='{0}' and ORDER_DATE <='{1}'", fromD, toD);
public List<SalesOrders> GetSalesOrders()
{
List<SalesOrders> _salesOrdersList = new List<SalesOrders>();
try
{
string sageDsn = ConfigurationManager.AppSettings["SageDSN"];
string sageUsername = ConfigurationManager.AppSettings["SageUsername"];
string sagePassword = ConfigurationManager.AppSettings["SagePassword"];
//int totalRecords = GetSalesOrdersount();
int counter = 0;
//using (var connection = new OdbcConnection("DSN=SageLine50v24;Uid=Manager;Pwd=;"))
using (var connection = new OdbcConnection(String.Format("DSN={0};Uid={1};Pwd={2};", sageDsn, sageUsername, sagePassword)))
{
connection.Open();
//string sql = string.Format(getInvoiceSql, customerCode, DateTime.Today.AddMonths(-1).ToString("yyyy-MM-dd"));
string fromD = dtpFrom.Value.ToString("yyyy-MM-dd");
string toD = dtpTo.Value.ToString("yyyy-MM-dd");
String SQL = string.Format("SELECT 'ORDER_NUMBER', 'ORDER_OR_QUOTE', 'ANALYSIS_1','ACCOUNT_REF','ORDER_DATE','NAME', 'COURIER_NUMBER','COURIER_NAME','CUST_TEL_NUMBER' ,'DESPATCH_DATE','ACCOUNT_REF', 'DEL_NAME', 'DEL_ADDRESS_1', 'DEL_ADDRESS_2', 'DEL_ADDRESS_3', 'DEL_ADDRESS_4', 'DEL_ADDRESS_5', 'INVOICE_NUMBER','ORDER_DATE','INVOICE_NUMBER_NUMERIC', 'CONTACT_NAME','CONSIGNMENT', 'NOTES_1', 'ITEMS_NET' ,'ITEMS_GROSS','QUOTE_STATUS' FROM SALES_ORDER WHERE ORDER_DATE >='{0}' and ORDER_DATE <='{1}'", fromD, toD);
using (var command = new OdbcCommand(SQL, connection))
{
using (var reader = command.ExecuteReader())
{
while (reader.Read())
{
counter++;
backgroundWorker1.ReportProgress(counter);
var salesOrders = new SalesOrders();
salesOrders.ACCOUNT_REF = Convert.ToString(reader["ACCOUNT_REF"]);
salesOrders.RecordIdentifier = "";
salesOrders.ShipmmentId = Convert.ToString(reader["ORDER_NUMBER"]);
salesOrders.OrderDate = Convert.ToDateTime(reader["ORDER_DATE"]);
salesOrders.OrderNumber = Convert.ToString(reader["ORDER_NUMBER"]);
salesOrders.Company = "";
salesOrders.Carrier = Convert.ToString(reader["COURIER_NUMBER"]);
salesOrders.CarrierService = Convert.ToString(reader["COURIER_NAME"]);
salesOrders.CustomerName = Convert.ToString(reader["NAME"]);
salesOrders.ShipToAddress1 = Convert.ToString(reader["DEL_ADDRESS_1"]);
salesOrders.ShipToAddress2 = Convert.ToString(reader["DEL_ADDRESS_2"]);
salesOrders.ShipToAddress3 = Convert.ToString(reader["DEL_ADDRESS_3"]);
salesOrders.ShipToAddress4 = Convert.ToString(reader["DEL_ADDRESS_4"]);
salesOrders.ShipToAddress5 = Convert.ToString(reader["DEL_ADDRESS_5"]);
salesOrders.ShiptoAttention = Convert.ToString(reader["DEL_NAME"]);
salesOrders.ShiptoPhoneNo = Convert.ToString(reader["CUST_TEL_NUMBER"]);
salesOrders.Country = Convert.ToString(reader["ANALYSIS_1"]);
salesOrders.ShiptoEmail = "";
salesOrders.MakeAddressDefault = "Y";
bool isProcessed = _sqlManager.hasbeenProcessed(salesOrders.OrderNumber);
if (isProcessed == true)
salesOrders.Exported = true;
_salesOrdersList.Add(salesOrders);
}
}
}
}
return _salesOrdersList.OrderByDescending(o => o.OrderDate).ToList();
}
Don't use {0}, {1} for embedding values in strings... add them via Parameters:
String SQL =
#"SELECT
ORDER_NUMBER,
ORDER_OR_QUOTE,
ANALYSIS_1,
ACCOUNT_REF,
ORDER_DATE,
`NAME`,
COURIER_NUMBER,
COURIER_NAME,
CUST_TEL_NUMBER,
DESPATCH_DATE,
ACCOUNT_REF,
DEL_NAME,
DEL_ADDRESS_1,
DEL_ADDRESS_2,
DEL_ADDRESS_3,
DEL_ADDRESS_4,
DEL_ADDRESS_5,
INVOICE_NUMBER,
ORDER_DATE,
INVOICE_NUMBER_NUMERIC,
CONTACT_NAME,
CONSIGNMENT,
NOTES_1,
ITEMS_NET,
ITEMS_GROSS,
QUOTE_STATUS
FROM
SALES_ORDER
WHERE
ORDER_DATE >= ?
and ORDER_DATE <= ?
ORDER BY
ORDER_DATE DESC";
using (var command = new OdbcCommand(SQL, connection))
{
// assuming the fields actually are date data types
command.Parameters.AddWithValue("parmFromDate", fromD);
command.Parameters.AddWithValue("parmToDate", toD);
The "?" in the query are place-holders for the parameter values which are handled by the ODBC process. The Parameters being added within the using() portion are added in the same ordinal position as their respective place-holder parts. I just assigned the parameter name to give context to whoever is looking at it after.
The query itself SHOULD be very quick depending on the date range you are pulling. Even added the SQL Order by descending order so it is pre-pulled down in the order you intended it too.
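Putting it all together, a minimal sketch of the full call with positional ODBC parameters (connection string and column mapping elided; fromD/toD as in the question):
using (var connection = new OdbcConnection(connectionString))
using (var command = new OdbcCommand(SQL, connection))
{
    // ODBC binds by position: the first parameter added fills the first "?"
    command.Parameters.AddWithValue("parmFromDate", fromD);
    command.Parameters.AddWithValue("parmToDate", toD);
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // map the columns into SalesOrders as in the original loop
        }
    }
}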

how to separate the "collection" from creating and adding a new instance of "lookaheadRunInfo"

I am trying to "collect" the GetString(2) until GetString(0) changes,so am trying to find out how to separate the "collection" from creating and adding a new instance of "lookaheadRunInfo"?I have tried as below which throws an exception
System.NullReferenceException was unhandled by user code at line
lookaheadRunInfo.gerrits.Add(rdr.GetString(1)); ,can anyone provide guidance on how to fix this issue?
try
{
Console.WriteLine("Connecting to MySQL...");
conn.Open();
string sql = #"select lr.ec_job_link, cl.change_list ,lr.submitted_by, lr.submission_time,lr.lookahead_run_status
from lookahead_run as lr, lookahead_run_change_list as lrcl, change_list_details as cld,change_lists as cl
where lr.lookahead_run_status is null
and lr.submission_time is not null
and lrcl.lookahead_run_id = lr.lookahead_run_id
and cl.change_list_id = lrcl.change_list_id
and cl.change_list_id not in (select clcl.change_list_id from component_labels_change_lists as clcl)
and cld.change_list_id = lrcl.change_list_id
group by lr.lookahead_run_id, cl.change_list
order by lr.submission_time desc
limit 1000
";
MySqlCommand cmd = new MySqlCommand(sql, conn);
MySqlDataReader rdr = cmd.ExecuteReader();
var ECJoblink_previous ="";
var gerritList = new List<String>();
while (rdr.Read())
{
//Console.WriteLine(rdr[0] + " -- " + rdr[1]);
//Console.ReadLine();
var lookaheadRunInfo = new LookaheadRunInfo();
lookaheadRunInfo.ECJobLink = rdr.GetString(0);
if (ECJoblink_previous == lookaheadRunInfo.ECJobLink)
{
//Keep appending the list of gerrits until we get a new lookaheadRunInfo.ECJobLink
lookaheadRunInfo.gerrits.Add(rdr.GetString(1));
}
else
{
lookaheadRunInfo.gerrits = new List<string> { rdr.GetString(1) };
}
ECJoblink_previous = lookaheadRunInfo.ECJobLink;
lookaheadRunInfo.UserSubmitted = rdr.GetString(2);
lookaheadRunInfo.SubmittedTime = rdr.GetString(3).ToString();
lookaheadRunInfo.RunStatus = "null";
lookaheadRunInfo.ElapsedTime = (DateTime.UtcNow-rdr.GetDateTime(3)).ToString();
lookaheadRunsInfo.Add(lookaheadRunInfo);
}
rdr.Close();
}
catch
{
throw;
}
If I understand your requirements correctly, you wish to keep a single lookaheadRunInfo for several rows of the resultset, until GetString(0) changes. Is that right?
In that case you have some significant logic problems. The way it is written, even if we fix the null reference, you will get a new lookaheadRunInfo with each and every row.
Try this:
string ECJoblink_previous = null;
LookaheadRunInfo lookaheadRunInfo = null;
while (rdr.Read())
{
if (ECJoblink_previous != rdr.GetString(0)) //A new set of rows is starting
{
if (lookaheadRunInfo != null)
{
lookaheadRunsInfo.Add(lookaheadRunInfo); //Save the old group, if it exists
}
lookaheadRunInfo = new LookaheadRunInfo //Start a new group and initialize it
{
ECJobLink = rdr.GetString(0),
gerrits = new List<string>(),
UserSubmitted = rdr.GetString(2),
SubmittedTime = rdr.GetString(3).ToString(),
RunStatus = "null",
ElapsedTime = (DateTime.UtcNow-rdr.GetDateTime(3)).ToString()
};
}
lookaheadRunInfo.gerrits.Add(rdr.GetString(1)); //Add current row
ECJoblink_previous = rdr.GetString(0); //Keep track of column 0 for next iteration
}
if (lookaheadRunInfo != null)
{
lookaheadRunsInfo.Add(lookaheadRunInfo); //Save the last group, if there is one
}
The idea here is:
Start with a blank slate, nothing initialized
Monitor column 0. When it changes (as it will on the first row), save any old list and start a new one
Add to current list with each and every iteration
When done, save the last group if there is one. The null check is required in case the reader returned 0 rows.

Code performance (bulk copy/ reading multiple lines and saving in database from text file)

I am learning c# programming these days and need some help in determining the performance of code.
I have to read a file and some details from it.
File has 4 columns:
ID, dob, size, accountno.
Problem: I have to read every line and insert it into a database, and there are more than 50000 entries per day.
Solution I tried:
Created a class with 4 properties (ID, dob, size, accountno) and then iterated through the file, converting all the data into objects and adding them to an ArrayList. So basically I now have an ArrayList with 50000 objects.
Finally, I iterate through the ArrayList and insert the details into the database.
Is this the correct approach?
Experts please help.
code :
namespace testing
{
class Program
{
static void Main(string[] args)
{
string timestamp = DateTime.Now.ToString("yyyyMMddHHmmss");
string InputDirectory = #"My Documents\\2015";
string FileMask = "comb*.txt";
ArrayList al = new ArrayList();
string line;
var Files = Directory.GetFiles(InputDirectory, FileMask, SearchOption.AllDirectories).Select(f => Path.GetFullPath(f));
foreach (var f in Files)
{
using (StreamReader reader = new StreamReader(f))
{
string date;
while ((line = reader.ReadLine()) != null)
{
Datamodel dm = new Datamodel();
string[] values = line.Split(',').Select(sValue => sValue.Trim()).ToArray();
dm.ID = values[0].ToString();
dm.dob = dm.ID.Remove(0, 4);
dm.size= values[1].ToString();
dm.accountno= values[2].ToString();
al.Add(dm);
}
reader.Close();
}
}
utilityClass.Insert_Entry(al);
}
}
}
For additional SQL performance look into transactions:
SqlTransaction transaction = connection.BeginTransaction();
//bulk insert commands here, created with the transaction
transaction.Commit();
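As a sketch of how that pattern fits together with ADO.NET (connection string and INSERT text are placeholders):
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    using (var cmd = new SqlCommand("INSERT INTO ... VALUES (...)", connection, transaction))
    {
        // set cmd.Parameters and call cmd.ExecuteNonQuery() once per line of the file
        transaction.Commit(); // one commit for the whole batch instead of one implicit commit per row
    }
}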
My solution (thanks to all the comments above):
namespace Test
{
class Program
{
static void Main(string[] args)
{
string timestamp = DateTime.Now.ToString("yyyyMMddHHmmss");
string InputDirectory = @"My Documents\2015";
string FileMask = "comb*.txt";
try
{
string line = null;
var Files = Directory.GetFiles(InputDirectory, FileMask, SearchOption.AllDirectories).Select(f => Path.GetFullPath(f));
foreach (var f in Files)
{
DataTable table = new DataTable();
table.TableName = f;
table.Columns.Add("ID", typeof(Int64));
table.Columns.Add("dob", typeof(string));
table.Columns.Add("size", typeof(string));
table.Columns.Add("accountno", typeof(string));
using (StreamReader reader = new StreamReader(f))
{
while ((line = reader.ReadLine()) != null)
{
string[] values = line.Split(',').Select(sValue => sValue.Trim()).ToArray();
string uniqueGuid = SequentialGuidGenerator.NewGuid().ToString();
uniqueGuid = uniqueGuid.Replace("-", "");
int ID = Convert.ToInt32(values[0]);
string NOTIF_ID = "";
table.Rows.Add(ID, values[1], values[2], values[3]);
}
reader.Close();
}
utilityClass.Insert_Entry(table, env);
}
}
catch (Exception e)
{
CustomException.Write(CustomException.CreateExceptionString(e));
}
}
}
}
Insert_Entry:
public static void Insert_Entry(DataTable mfsentdata, string environ)
{
using (SqlConnection con = new SqlConnection(utilityClass.GetConnectionString(environ)))
{
con.Open();
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(con))
{
bulkCopy.DestinationTableName = "dbo.inserttablename";
try
{
bulkCopy.WriteToServer(mfsentdata);
}
catch (SqlException e)
{
CustomException.Write(CustomException.CreateExceptionString(e, mfsentdata.TableName));
}
}
con.Close();
}
}

Modify an attribute according to its data type c#

For a given DB, I used CodeSmith to generate a text file that has the following values
(TableName)([TableGUID]) (AttributeName).(AttributeType)
for example
CO_CallSignLists[e3fc5e2d-fe84-492d-ad94-3acced870714] SunSpots.smallint
Now I parsed these values and assigned each to a certain variable
for (int j = 0; j < newLst.Count; j += 2)
{
test_objectName_Guid = newLst[j]; //CO_CallSignLists[e3fc5e2d-fe84-492d-ad94-3acced870714]
test_attr = newLst[j + 1]; //SunSpots.smallint
//Seperate Guid from objectName
string[] obNameGuid = test_objectName_Guid.Split('[',']');
var items = from line in obNameGuid
select new
{
aobjectName = obNameGuid[0],
aGuid = obNameGuid[1]
};
foreach (var item in items)
{
final_objectName = item.aobjectName;
final_oGuid = new Guid(item.aGuid);
}
Console.WriteLine("\nFinal ObjectName\t{0}\nFinal Guid\t\t{1}",
final_objectName, final_oGuid);
string final_attributeName = string.Empty;
string final_attributeType = string.Empty;
string[] words = test_attr.Split('.');
var items2 = from line in words
select new
{
attributeName = words[0],
attributeType = words[1]
};
foreach (var item in items2)
{
final_attributeName = item.attributeName;
final_attributeType = item.attributeType;
}
Console.WriteLine("Attribute Name\t\t{0}\nAttributeType\t\t{1}\n",
final_attributeName, final_attributeType);
I then generate an xml file that loads data from the DB depending on the objectName and its GUID and save this xml in a string variable
string generatedXMLFile = Test.run_Load_StoredProcedure(final_objectName, final_oGuid, Dir);
Now I want to modify the attribute in this XML file that matches the attribute from the parsed txt file (and I want to modify it according to its type).
public void modifyPassedAttribute(string myFile, string attributeName)
{
Object objString = (Object)attributeName;
string strType = string.Empty;
int intType = 0;
XDocument myDoc = XDocument.Load(myFile);
var attrib = myDoc.Descendants().Attributes().Where(a => a.Name.LocalName.Equals(objString));
foreach (XAttribute elem in attrib)
{
Console.WriteLine("ATTRIBUTE NAME IS {0} and of Type {1}", elem, elem.Value.GetType());
if (elem.Value.GetType().Equals(strType.GetType()))
{
elem.Value += "_change";
Console.WriteLine("NEW VALUE IS {0}", elem.Value);
}
else if (elem.Value.GetType().Equals(intType.GetType()))
{
elem.Value += 2;
Console.WriteLine("NEW VALUE IS {0}", elem.Value);
}
}
myDoc.Save(myFile);
}
The problem is that it always says the value type is 'string' and modifies it as a string (appends "_change"), yet when I want to save the data back into the DB it says you can't assign an nvarchar to a smallint value.
What's wrong with my code? Why do I always get a string type? Is it because all attributes are treated as strings in an XML file? How then can I modify the attribute according to its original type so that I won't get errors when I want to save it back to the DB?
I'm open to suggestions to optimize the code; I know it's not the best code to achieve my goal.
EDIT
Here is the code to generate the XMLs
public void generate_XML_AllTables(string Dir)
{
SqlDataReader Load_SP_List = null; //SQL reader that gets list of stored procedures in the database
SqlDataReader DataclassId = null; //SQL reader to get the DataclassIds from tables
SqlConnection conn = null;
conn = new SqlConnection("Data Source= EUADEVS06\\SS2008;Initial Catalog=TacOps_4_0_0_4_test;integrated security=SSPI; persist security info=False;Trusted_Connection=Yes");
SqlConnection conn_2 = null;
conn_2 = new SqlConnection("Data Source= EUADEVS06\\SS2008;Initial Catalog=TacOps_4_0_0_4_test;integrated security=SSPI; persist security info=False;Trusted_Connection=Yes");
SqlCommand getDataclassId_FromTables;
int num_SP = 0, num_Tables = 0;
string strDataClass; //Name of table
string sql_str; //SQL command to get
conn.Open();
//Select stored procedures that call upon tables in the DB. Tables which have multiple DataClassIds (rows)
//Selecting all Load Stored Procedures of CLNT & Get the table names
// to pass the Load operation which generates the XML docs.
SqlCommand cmd = new SqlCommand("Select * from sys.all_objects where type_desc='SQL_STORED_PROCEDURE' and name like 'CLNT%Load';", conn);
Load_SP_List = cmd.ExecuteReader();
while (Load_SP_List.Read())
{
//Gets the list of Stored Procedures, then modifies it
//to get the table names
strDataClass = Load_SP_List[0].ToString();
strDataClass = strDataClass.Replace("CLNT_", "");
strDataClass = strDataClass.Replace("_Load", "");
sql_str = "select TOP 1 DataclassId from " + strDataClass;
conn_2.Open();
getDataclassId_FromTables = new SqlCommand(sql_str, conn_2);
DataclassId = getDataclassId_FromTables.ExecuteReader();
while (DataclassId.Read())
{
string test = DataclassId[0].ToString();
Guid oRootGuid = new Guid(test);
run_Load_StoredProcedure(strDataClass, oRootGuid, Dir);
num_Tables++;
}
DataclassId.Close();
conn_2.Close();
num_SP++;
}
Load_SP_List.Close();
conn.Close();
System.Console.WriteLine("{0} of Stored Procedures have been executed and {1} of XML Files have been generated successfully..", num_SP,num_Tables);
}
public string run_Load_StoredProcedure(string strDataClass, Guid guidRootId, string Dir)
{
SqlDataReader rdr = null;
SqlConnection conn = null;
conn = new SqlConnection("Data Source= EUADEVS06\\SS2008;Initial Catalog=TacOps_4_0_0_4_test;integrated security=SSPI; persist security info=False;Trusted_Connection=Yes");
conn.Open();
// Procedure call with parameters
SqlCommand cmd = new SqlCommand("CLNT_" + strDataClass + "_Load", conn);
cmd.CommandType = CommandType.StoredProcedure;
cmd.CommandTimeout = 0;
//Adding parameters, in- and output
SqlParameter idParam = new SqlParameter("@DataclassId", SqlDbType.UniqueIdentifier);
idParam.Direction = ParameterDirection.Input;
idParam.Value = guidRootId;
SqlParameter xmlParam = new SqlParameter("@XML", SqlDbType.VarChar, -1 /*MAX*/ );
xmlParam.Direction = ParameterDirection.Output;
cmd.Parameters.Add(idParam);
cmd.Parameters.Add(xmlParam);
rdr = cmd.ExecuteReader(CommandBehavior.SingleResult);
DirectoryInfo dest_2 = new DirectoryInfo(Dir + "\\Copies");
DirectoryInfo dest = new DirectoryInfo(Dir + "\\Backup");
DirectoryInfo source = new DirectoryInfo(Dir);
if (source.Exists == false)
{
source.Create();
}
if (dest.Exists == false)
{
dest.Create();
}
if (dest_2.Exists == false)
{
dest_2.Create();
}
string xmlFile = Dir + "\\" + strDataClass + " [" + guidRootId + "].xml";
//The value of the output parameter ‘xmlParam’ will be saved in XML format using the StreamWriter.
System.IO.StreamWriter wIn = new System.IO.StreamWriter(xmlFile, false);
wIn.WriteLine(xmlParam.Value.ToString());
wIn.Close();
rdr.Close();
conn.Close();
return xmlFile;
}
Short answer:
Is it because all attributes are treated as strings in an xml file?
Yes.
You'll need to store the original type in the Xml - as far as I could tell, you're not including that, so you're discarding the information.
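To illustrate that suggestion, a minimal sketch under an invented convention: write a companion *_dataType attribute next to each value when the XML is produced, and dispatch on it when modifying. The element variable, the suffix, and the type names are illustrative, not part of the stored procedures' output:
// when producing the XML: record the column type next to the value (hypothetical convention)
element.SetAttributeValue(attributeName + "_dataType", "smallint");

// when modifying: dispatch on the recorded type instead of the CLR type of the string
string storedType = (string)elem.Parent.Attribute(elem.Name.LocalName + "_dataType");
if (storedType == "smallint" && short.TryParse(elem.Value, out short n))
{
    elem.Value = (n + 2).ToString();
}
else
{
    elem.Value += "_change";
}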
OK, so I solved the issue! All I had to do was pass the attributeType string to the modifyPassedAttribute function and use it to determine the changes I want to make.
Here is the modified final code
public void modifyPassedAttribute(string myFile, string attributeName, string attributeType)
{
Object objString = (Object)attributeName;
string strType = "nvarchar";
string smallintType = "smallint";
string intType = "int";
string dateType = "datetime";
XDocument myDoc = XDocument.Load(myFile);
//var myAttr = from el in myDoc.Root.Elements()
// from attr in el.Attributes()
// where attr.Name.ToString().Equals(attributeName)
// select attr;
var attrib = myDoc.Descendants().Attributes().Where(a => a.Name.LocalName.Equals(objString));
foreach (XAttribute elem in attrib)
{
Console.WriteLine("ATTRIBUTE NAME IS {0} and of Type {1}", elem, elem.Value.GetType());
if (strType.Equals(attributeType))
{
if (elem.Value.EndsWith("_change"))
{
elem.Value = elem.Value.Replace("_change", "");
Console.WriteLine("NEW VALUE IS {0}", elem.Value);
}
else
{
elem.Value += "_change";
Console.WriteLine("NEW VALUE IS {0}", elem.Value);
}
}
else if (smallintType.Equals(attributeType))
{
if (elem.Value.EndsWith("2"))
{
elem.Value = elem.Value.Replace("2", "");
Console.WriteLine("NEW VALUE IS {0}", elem.Value);
}
else
{
elem.Value += 2;
Console.WriteLine("NEW VALUE IS {0}", elem.Value);
}
}
else if (intType.Equals(attributeType))
{
if (elem.Value.EndsWith("2"))
{
elem.Value = elem.Value.Replace("2", "");
Console.WriteLine("NEW VALUE IS {0}", elem.Value);
}
else
{
elem.Value += 2;
Console.WriteLine("NEW VALUE IS {0}", elem.Value);
}
}
else if (dateType.Equals(attributeType))
{
if (elem.Value.EndsWith("2"))
{
elem.Value = elem.Value.Replace("2", "");
Console.WriteLine("NEW VALUE IS {0}", elem.Value);
}
else
{
elem.Value += 2;
Console.WriteLine("NEW VALUE IS {0}", elem.Value);
}
}
}
myDoc.Save(myFile);
}
The if statements inside are there so that no overly large numbers are generated: the 2 is appended to the string (i.e. 100 becomes 1002), and if I appended 2 every time it would eventually crash.
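One further tightening, since appending and stripping the character "2" misfires for any value that already ends in 2: for the numeric types, parse the value and do arithmetic instead. A sketch of that idea:
else if (smallintType.Equals(attributeType) || intType.Equals(attributeType))
{
    // parse and increment numerically instead of appending/removing the character "2"
    if (long.TryParse(elem.Value, out long n))
    {
        elem.Value = (n + 2).ToString();
        Console.WriteLine("NEW VALUE IS {0}", elem.Value);
    }
}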
