I'm fairly new to C# and this has me stumped. My project is using DataTables and TableAdapters to connect to a SQL Server database. I have a method that opens Excel, builds a DataRow and then passes that to the method below which adds it to my DataTable (cdtJETS) via the TableAdapter (ctaJETS).
public bool AddJETSRecord(DataRow JETSDataRow)
{
    bool bolException = false;

    cdtJETS.BeginLoadData();

    // Add the data row to the table
    try
    {
        cdtJETS.ImportRow(JETSDataRow);
    }
    catch (Exception e)
    {
        // Log an exception
        bolException = true;
        Console.WriteLine(e.Message);
    }

    cdtJETS.EndLoadData();

    // If there were no errors and no exceptions, then accept the changes
    if (!cdtJETS.HasErrors && !bolException)
    {
        ctaJETS.Update(cdtJETS);
        return true;
    }
    else
        return false;
}
The above works fine and the records show up in SQL Server as expected. I have another method that grabs a subset of the records in that DataTable and outputs them to another Excel file (this is a batch process that will collect records over time using the above method and then occasionally output them, so I can't directly move the data from the first Excel file to the second). After the second Excel file is updated I want to delete the records from the table so that they aren't duplicated the next time the method is run. This is where I'm having the issue:
public bool DeleteJETSRecords(DataTable JETSData)
{
    int intCounter = 0;
    DataRow drTarget;

    // Parse all of the rows in the JETS Data that is to be deleted
    foreach (DataRow drCurrent in JETSData.Rows)
    {
        // Search the database data table for the current row's OutputID
        drTarget = cdtJETS.Rows.Find(drCurrent["OutputID"]);

        // If the row is found, then delete it and increment the counter
        if (drTarget != null)
        {
            cdtJETS.Rows.Remove(drTarget);
            intCounter++;
        }
    }

    // Continue if all of the rows were found and removed
    if (JETSData.Rows.Count == intCounter && !cdtJETS.HasErrors)
    {
        cdtJETS.AcceptChanges();
        try
        {
            ctaJETS.Update(cdtJETS);
        }
        catch (Exception)
        {
            throw;
        }
        return true;
    }
    else
        cdtJETS.RejectChanges();

    return false;
}
As I step through the method I can see the rows being removed from the DataTable (i.e. if JETSData has 10 rows, at the end cdtJETS has n-10 rows) and no exceptions are thrown, but after I AcceptChanges and Update the TableAdapter, the underlying records are still in my SQL Server table. What am I missing?
The Rows.Remove method is equivalent to calling the row's Delete method, followed by the row's AcceptChanges method.
As with the DataTable.AcceptChanges method, this indicates that the change has already been saved to the database. This is not what you want.
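Here is a minimal standalone sketch (my illustration, using a hypothetical one-column table) of the difference:

DataTable t = new DataTable();
t.Columns.Add("ID", typeof(int));
t.Rows.Add(1);
t.AcceptChanges();                  // row is now Unchanged

DataRow r = t.Rows[0];
r.Delete();
Console.WriteLine(r.RowState);      // Deleted - Update() will send a DELETE for this row

// Rows.Remove(r), by contrast, detaches the row from the table entirely,
// so a later TableAdapter.Update() has no pending change to send.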
The following should work:
public bool DeleteJETSRecords(DataTable JETSData)
{
    int intCounter = 0;
    DataRow drTarget;

    // Parse all of the rows in the JETS Data that is to be deleted
    foreach (DataRow drCurrent in JETSData.Rows)
    {
        // Search the database data table for the current row's OutputID
        drTarget = cdtJETS.Rows.Find(drCurrent["OutputID"]);

        // If the row is found, then delete it and increment the counter
        if (drTarget != null)
        {
            drTarget.Delete();
            intCounter++;
        }
    }

    // Continue if all of the rows were found and removed
    if (JETSData.Rows.Count == intCounter && !cdtJETS.HasErrors)
    {
        // You have to call Update *before* AcceptChanges:
        ctaJETS.Update(cdtJETS);
        cdtJETS.AcceptChanges();
        return true;
    }

    cdtJETS.RejectChanges();
    return false;
}
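Update inspects each row's RowState to decide which INSERT, UPDATE, and DELETE statements to send; AcceptChanges resets every row to Unchanged, so calling it first leaves Update with nothing to do.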
I am having difficulties UPDATING the database via LINQ to SQL; inserting a new record works fine.
The code correctly inserts a new row and assigns a primary key. The issue is that when I go to update (change a value that is already in the database) that same row, the database is not updated; it is the else branch of the code that does not work correctly. This is strange, because the DataContext inserts a new row with no issues, so the DB is clearly connected and functioning. Checking the database confirms this.
This is the code,
using System;
using System.Collections.Generic;
using System.Linq;
using Cost = Invoices.Tenant_Cost_TBL;

namespace Invoices
{
    class CollectionGridEvents
    {
        static string conn = Settings.Default.Invoice_DbConnectionString;

        public static void CostDataGridCellEditing(DataGridRowEditEndingEventArgs e)
        {
            using (DatabaseDataContext DataContext = new DatabaseDataContext(conn))
            {
                var sDselectedRow = e.Row.Item as Cost;
                if (sDselectedRow == null) return;

                if (sDselectedRow.ID == 0)
                {
                    sDselectedRow.ID = DateTime.UtcNow.Ticks;
                    DataContext.Tenant_Cost_TBLs.InsertOnSubmit(sDselectedRow);
                }
                else
                {
                    // these two lines are just for debugging
                    long lineToUpdateID = 636154619329526649; // this is the primary key of the line to be updated
                    long id = sDselectedRow.ID;               // this is to check the selected line's primary key is the same

                    // these 3 lines are to ensure I am entering actual data into the DB
                    int? amount = sDselectedRow.Cost_Amount;
                    string name = sDselectedRow.Cost_Name;
                    int? quantity = sDselectedRow.Cost_Quantity;

                    sDselectedRow.Cost_Amount = amount;
                    sDselectedRow.Cost_Name = name;
                    sDselectedRow.Cost_Quantity = quantity;
                }

                try
                {
                    DataContext.SubmitChanges();
                }
                catch (Exception ex)
                {
                    Alert.Error("Did not save", "Error", ex);
                }
            }
        }
    }
}
And I am calling the method from this,
private void CostDataGrid_RowEditEnding(object sender, DataGridRowEditEndingEventArgs e)
{
    CollectionGridEvents.CostDataGridCellEditing(e);
}
The lineToUpdateID is copied directly from the database and is just there to check that the currently selected row's primary key is the same, so I know I am trying to update the same row.
I have looked through as many of the same type of issues here on SO as I can, such as this one: Linq-to-Sql SubmitChanges not updating fields … why?, but I am still no closer to finding out what is going wrong.
Any ideas would be much appreciated.
EDIT: Cost is just shorthand for this alias: using Cost = Invoices.Tenant_Cost_TBL;
You cannot do that. You need to get the record out of the database and then update that record. Then save it back. Like this:
else
{
    // first get it
    var query =
        from ord in DataContext.Tenant_Cost_TBLs
        where ord.ID == 636154619329526649 // the primary key of the line to update
        select ord;

    // then update it
    // Most likely you will have one record here
    foreach (Tenant_Cost_TBL ord in query)
    {
        ord.Cost_Amount = sDselectedRow.Cost_Amount;
        // ... and the rest
        // Insert any additional changes to column values.
    }
}

try
{
    DataContext.SubmitChanges();
}
catch (Exception ex)
{
    Alert.Error("Did not save", "Error", ex);
}
Here is an example you can follow.
Or you can use a direct query if you do not want to select first.
DataContext.ExecuteCommand("update Tenant_Cost_TBL set Cost_Amount = 0 where ...");
Your object (Cost) is not attached to the DB context. You should attach it, then save changes. Check the solution here.
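For completeness, a minimal sketch of that attach approach (my assumption of how it would look here; attaching as modified requires the mapping to support it, e.g. a timestamp/version column or UpdateCheck.Never on the columns, otherwise LINQ to SQL throws):

// Hedged sketch: attach the detached entity to a fresh context as modified, then submit.
using (var context = new DatabaseDataContext(conn))
{
    context.Tenant_Cost_TBLs.Attach(sDselectedRow, true); // true = treat as modified
    context.SubmitChanges();
}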
Goal: search a specific single column in a CSV for empty values and replace them with the string "No Box".
Attempts: So far I have tried CsvHelper and CsvTools (CsvReader) via NuGet. I am not very experienced with C#, so I am not sure how to accomplish the task. Searching did not turn up any examples or references that helped me understand what I need to implement; there are a lot of similar questions, but none of them search a specific column. I am hoping someone can advise me on how to get my for loop to work and how to get the number of rows for the check.
(Image: sample of my CSV file, showing the Site data column.)
private static void SiteBlanks()
{
    try
    {
        MutableDataTable dt = DataAccess.DataTable.New.ReadCsv(@"C:\temp.csv");
        for (int i = 0; i <= dt.Rows.Count; i++) // "Cannot be applied" compiler error here
        {
            if (!string.IsNullOrEmpty(dt.GetRow(i)["Site"])) // Check if cells in the Site column are empty
            {
                dt.Columns[1].Values[i] = "No Box"; // Update empty values with No Box
            }
        }
        dt.SaveCSV(@"C:\temp.csv"); // Save file after changes.
    }
    catch (Exception ex)
    {
        // Set error message
        Error("ERROR: SiteBlanks()", ex);
    }
}
Note: this is my first question ever asked, so be gentle and tell me what I may have done wrong posting-wise.
Based on your current code, you can try the following:
private static void SiteBlanks()
{
    try
    {
        string filePath = @"C:\temp.csv";
        MutableDataTable dt = DataTable.New.ReadCsv(filePath);
        string columnName = "Site";
        int numberOfRows = dt.NumRows;
        for (int i = 0; i < numberOfRows; i++)
        {
            var row = dt.GetRow(i);
            if (string.IsNullOrEmpty(row[columnName]))
            {
                row[columnName] = "No Box";
            }
        }
        dt.SaveCSV(filePath);
    }
    catch (Exception ex)
    {
        // Set error message
        Error("ERROR: SiteBlanks()", ex);
    }
}
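Since you mentioned trying CsvHelper, here is a hedged sketch of the same fix with that package instead (API details vary a little between CsvHelper versions; the dynamic records come back as ExpandoObjects, which can be treated as dictionaries keyed by header name):

using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Linq;
using CsvHelper;

string path = @"C:\temp.csv";

// Read every record, each row becoming a dictionary keyed by header name
List<dynamic> records;
using (var reader = new StreamReader(path))
using (var csv = new CsvReader(reader, CultureInfo.InvariantCulture))
{
    records = csv.GetRecords<dynamic>().ToList();
}

// Replace empty Site values
foreach (IDictionary<string, object> record in records)
{
    if (string.IsNullOrEmpty(record["Site"] as string))
    {
        record["Site"] = "No Box";
    }
}

// Write everything back out
using (var writer = new StreamWriter(path))
using (var csv = new CsvWriter(writer, CultureInfo.InvariantCulture))
{
    csv.WriteRecords(records);
}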
I need some suggestions on the code below, in which I am writing data to a file and, at the same time, updating records in the database.
foreach (DataRow dr in dtCSVdata.Rows)
{
    StringBuilder sbData = new StringBuilder();

    // Below line of code is used to write data in the database
    SQLHelper.MoveOtherQuotesToWaitingArivalOfBags(quoteHeaderId, Config.WSUserName, refCode);

    foreach (object obj in dr.ItemArray)
    {
        if (sbData.Length == 0)
        {
            sbData.Append("\"");
            sbData.Append(RemoveSpecialCharacters(obj.ToString()));
            sbData.Append("\"");
        }
        else
        {
            sbData.Append(",");
            sbData.Append("\"");
            sbData.Append(RemoveSpecialCharacters(obj.ToString()));
            sbData.Append("\"");
        }
    }

    // Write to file
    writer.WriteLine(sbData.ToString());
}
Sometimes a timeout exception occurs while updating the records in the database; when that happens, control passes to the catch block and no data is written to the file.
Suppose there were 100 records and the loop had just updated the 50th: there would be no data written to the file for those 50 records, so it becomes very difficult to track which records have been updated.
Can you please help me with this? I want the data for each record to be written to the file before anything can go wrong with its database update.
Just move the line that updates the database to after the write to the file:
foreach (DataRow dr in dtCSVdata.Rows)
{
    StringBuilder sbData = new StringBuilder();

    // (the database call that used to be here has moved below the file write)
    foreach (object obj in dr.ItemArray)
    {
        if (sbData.Length == 0)
        {
            sbData.Append("\"");
            sbData.Append(RemoveSpecialCharacters(obj.ToString()));
            sbData.Append("\"");
        }
        else
        {
            sbData.Append(",");
            sbData.Append("\"");
            sbData.Append(RemoveSpecialCharacters(obj.ToString()));
            sbData.Append("\"");
        }
    }

    // Write to file first...
    writer.WriteLine(sbData.ToString());
    writer.Flush(); // flush so the line is on disk even if the update below throws

    // ...then update the database table.
    SQLHelper.MoveOtherQuotesToWaitingArivalOfBags(quoteHeaderId, Config.WSUserName, refCode);
}
writer.Close(); // or wrap the writer in a using block
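If you also need the batch to survive a mid-run timeout, a hedged variant is to catch the failure per record and keep going, recording which rows still need their database update (BuildCsvLine is a hypothetical stand-in for the StringBuilder logic above):

// Sketch: per-record error handling so one timeout doesn't abort the batch
var failedRows = new List<int>();
for (int i = 0; i < dtCSVdata.Rows.Count; i++)
{
    writer.WriteLine(BuildCsvLine(dtCSVdata.Rows[i])); // hypothetical helper
    writer.Flush();
    try
    {
        SQLHelper.MoveOtherQuotesToWaitingArivalOfBags(quoteHeaderId, Config.WSUserName, refCode);
    }
    catch (SqlException)
    {
        failedRows.Add(i); // remember which records still need updating
    }
}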
I have a method that queries a table for the count of its records. QA has discovered an "edge case" where if a particular operation is canceled in a particular order and speed (as fast as possible), the GUI "forgets" about the rest of the records in that table (the contents of the tables are uploaded to a server; when each one finishes, the corresponding table is deleted).
To be clear, the table that is having records deleted from it and then queried for count ("workTables") is a table of table names, which are deleted after they are processed.
What I have determined (I'm pretty sure) is that this anomaly occurs when a record from the workTables table is in the process of being deleted at the moment workTables is queried for its record count. This causes an exception, which causes the method to return -1, which in our case tells the GUI not to display those records.
Is there a way to check if a table is in the process of having a record deleted from it, and wait until after that operation has completed, before proceeding with the query, so that it won't throw an exception?
For those interested in the specifics, this method is the one that, under those peculiar circumstances, throws an exception:
public int isValidTable(string tableName)
{
    int validTable = -1;
    string tblQuery = "SELECT COUNT(*) FROM ";
    tblQuery += tableName;

    openConnectionIfPossibleAndNecessary();

    try
    {
        SqlCeCommand cmd = objCon.CreateCommand();
        cmd.CommandText = tblQuery;
        object objcnt = cmd.ExecuteScalar();
        validTable = Int32.Parse(objcnt.ToString());
    }
    catch (Exception ex)
    {
        validTable = -1;
    }
    return validTable;
}
...and this is the method that deletes a record from the "workTables" table after the corresponding table has had its contents uploaded:
private void DropTablesAndDeleteFromTables(string recordType, string fileName)
{
    try
    {
        WorkFiles wrkFile = new WorkFiles();
        int tableOK = 0;
        DataSet workfiles;

        tableOK = wrkFile.isValidWorkTable(); // -1 == "has no records"
        if (tableOK > 0) // Table has at least one record
        {
            workfiles = wrkFile.getAllRecords();
            // Go thru dataset and find filename to clean up after
            foreach (DataRow row in workfiles.Tables[0].Rows)
            {
                . . .
                dynSQL = string.Format("DELETE FROM workTables WHERE filetype = '{0}' and Name = '{1}'", tmpType, tmpStr);
                dbconn = DBConnection.GetInstance();
                dbconn.DBCommand(dynSQL, false);
                populateListBoxWithWorkTableData();
                return;
            } // foreach (DataRow row in workfiles.Tables[0].Rows)
        }
    }
    catch (Exception ex)
    {
        SSCS.ExceptionHandler(ex, "frmCentral.DropTablesAndDeleteFromTables");
    }
}
// method called by DropTablesAndDeleteFromTables() above
public int isValidWorkTable() // reverted to old way to accommodate old version of DBConnection
{
    // Pass the buck
    return dbconn.isValidTable("workTables");
}
I know this code is very funky and klunky and kludgy; refactoring it to make more sense and be more easily understood is a long and ongoing process.
UPDATE
I'm not able to test this code:
lock (this)
{
    // drop the table
}
...yet, because the handheld is no longer allowing me to copy files to it (I get "Cannot copy [filename].[dll/exe]: The device has either stopped responding or has been disconnected", even though it is connected, as shown by ActiveSync).
If that doesn't work, I might have to try this:
// global var
bool InDropTablesMethod;

// before querying that database from elsewhere:
while (InDropTablesMethod)
{
    Pause(500);
}
UPDATE 2
I've finally been able to test my lock code (copies of binaries were present in memory, not allowing me to overwrite them; the StartUp folder had a *.lnk to the .exe, so every time I started the handheld, it tried to run the buggy versions of the .exe), but it doesn't work - I still get the same conflict/contention.
UPDATE 3
What seems to work, as kludgy as it may be, is:
public class CCRUtils
{
    public static bool InDropTablesMethod;
    . . .

    if (CCRUtils.InDropTablesMethod) return;
    CCRUtils.InDropTablesMethod = true;
    . . . // do it all; can you believe somebody from El Cerrito has never heard of CCR?
    CCRUtils.InDropTablesMethod = false;
UPDATE 4
Wrote too soon - the bug is back. I added this MessageBox.Show(), and do indeed see the text "proof of code re-entrancy" at run-time.
while (HHSUtils.InDropTablesMethod)
{
    MessageBox.Show("proof of code re-entrancy");
    i++;
    if (i > 1000000) return;
}

try
{
    HHSUtils.InDropTablesMethod = true;
    . . .
}
HHSUtils.InDropTablesMethod = false;
...so my guess that code re-entrancy may be a problem is correct...
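For what it's worth, a boolean flag is itself racy: two threads can both observe false and proceed. A hedged sketch of the usual fix is one shared lock object guarding both the delete and the count query (the class and member names here are hypothetical; lock (this) doesn't help because each instance locks on a different reference):

// Hypothetical shared guard for everything that touches workTables
public static class WorkTableGuard
{
    public static readonly object SyncRoot = new object();
}

// in DropTablesAndDeleteFromTables():
lock (WorkTableGuard.SyncRoot)
{
    dbconn.DBCommand(dynSQL, false); // the DELETE
}

// in isValidTable():
lock (WorkTableGuard.SyncRoot)
{
    object objcnt = cmd.ExecuteScalar(); // the SELECT COUNT(*)
}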
I know this question exists, because it's mine and I put up 500 bounty points on it:
Exporting C# report to Excel when there are more than 5K lines
The answer got me over the hump (to some degree), but we're sort of at the point where we just accept that abnormally large datasets can't be exported via our ASP front end, so we ship those requests off to our SQL Server DBAs, who then run the appropriate stored procedures and copy/paste into Excel spreadsheets.
My question here is: can someone definitively answer whether or not it's absolutely impossible to export a large dataset to an Excel spreadsheet via an ASP front end? Once a particular report hits about 8K records or so, it just can't seem to be done. I'm trying to determine whether any other potential tweak can be made, or if that much data is simply more than ASP can handle.
Well... since I've streamed gigabytes of data directly from ASP.NET, I'm pretty sure you're doing something wrong. Try to isolate the problem first - is it in putting the data into the session, is it request / response limits, is it request timeouts? Figure out where the problem is, and then go ahead and solve it! :)
In general terms, there's no reason to put the data in a DataSet first. Instead, use a SqlDataReader and write the data to the output in chunks; that way you avoid holding the whole data set in memory, and you can write directly to the output stream without buffering the generated HTML in memory either. Why do you keep data in Session? Wouldn't it be better to hold just the parameters necessary to retrieve it from the DB as needed, using the DataReader?
If you're having trouble with timeouts, periodic Flushes help. They also reduce the memory footprint on the ASP.NET side.
Saving the output data to a file on the server first also helps, and it allows you to wire up partial file downloads too - just make sure you actually have enough space on the drive.
EDIT:
Ok, so you've got an SqlCommand. Instead of using it in a SqlDataAdapter, you can do something like this (cmd being your SqlCommand instance):
HtmlTextWriter wr = new HtmlTextWriter(Response.Output);
using (var rdr = cmd.ExecuteReader())
{
    int index = 0;
    // WriteFullBeginTag emits the closing '>' as well;
    // WriteBeginTag alone would leave the tag unterminated.
    wr.WriteFullBeginTag("table");
    wr.WriteLine("<tr><td>Column 1</td><td>Column 2</td></tr>");
    while (rdr.Read())
    {
        wr.WriteFullBeginTag("tr");
        wr.WriteFullBeginTag("td");
        wr.Write(rdr["Column1"]);
        wr.WriteEndTag("td");
        wr.WriteFullBeginTag("td");
        wr.Write(rdr["Column2"]);
        wr.WriteEndTag("td");
        wr.WriteEndTag("tr");
        if (index++ % 1000 == 0) Response.Flush();
    }
    wr.WriteEndTag("table");
}
I have not tested it, so it might need some tweaking, but the idea should be pretty obvious.
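One detail the above leaves implicit (my addition, not part of the original answer): for the browser to hand the streamed table to Excel as a download, the response headers need to say so, roughly:

// Hedged sketch: mark the streamed HTML table as an Excel attachment
Response.Clear();
Response.ContentType = "application/vnd.ms-excel";
Response.AddHeader("Content-Disposition", "attachment; filename=report.xls");
// ... stream the table as above, flushing periodically, then:
Response.End();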
It is possible to do this. I have actually just finished some code specifically for this as part of a reporting project I am working on, where we have in excess of 20K records that need to be pulled back and exported into Excel.
I will pull out the code and stick it on GitHub for you to look at.
I am actually using NPOI's Excel processing package, and with my custom code I am able to process any List of classes dynamically into a DataSet and then dump it into the worksheets.
I need to tidy up the code for you but I should have something ready for you this evening.
This code will work for both desktop and web apps.
To give you an idea, my code has been able to process a dataset of over 30K records relatively quickly. I have to resolve an issue with datasets over the 65,536-row sheet limit of the .xls format before it is ready for you.
The nice thing about this solution is that it doesn't rely on Excel being installed on the machine hosting it.
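As an aside, a hedged sketch of one way around that 65,536-row cap is to start a new sheet whenever it is reached (my workaround, not the project's code):

// Sketch: spill rows onto extra sheets once the .xls per-sheet cap is hit.
// 65,535 data rows + 1 header row = the 65,536-row limit.
const int MaxDataRowsPerSheet = 65535;
for (int offset = 0; offset < table.Rows.Count; offset += MaxDataRowsPerSheet)
{
    var worksheet = excelworkbook.CreateSheet();
    // write the header row, then rows [offset, offset + MaxDataRowsPerSheet),
    // exactly as in CreateExcelSheet below
}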
EDIT
I have loaded a project onto github here:
https://github.com/JellyMaster/ExcelHelper
but here is the main bit that does all the excel processing:
public static MemoryStream CreateExcelSheet(DataSet dataToProcess)
{
    MemoryStream stream = new MemoryStream();

    if (dataToProcess != null)
    {
        var excelworkbook = new HSSFWorkbook();

        foreach (DataTable table in dataToProcess.Tables)
        {
            var worksheet = excelworkbook.CreateSheet();

            var headerRow = worksheet.CreateRow(0);
            foreach (DataColumn column in table.Columns)
            {
                headerRow.CreateCell(table.Columns.IndexOf(column)).SetCellValue(column.ColumnName);
            }

            // freeze the top pane
            worksheet.CreateFreezePane(0, 1, 0, 1);

            int rowNumber = 1;
            foreach (DataRow row in table.Rows)
            {
                var sheetRow = worksheet.CreateRow(rowNumber++);
                foreach (DataColumn column in table.Columns)
                {
                    sheetRow.CreateCell(table.Columns.IndexOf(column)).SetCellValue(row[column].ToString());
                }
            }
        }

        excelworkbook.Write(stream);
    }

    return stream;
}
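A quick usage sketch (my addition): the returned MemoryStream can be written straight to a file or to an HTTP response (BuildReportDataSet is a hypothetical stand-in for however you fill the DataSet):

using (MemoryStream stream = CreateExcelSheet(BuildReportDataSet()))
{
    File.WriteAllBytes(@"C:\reports\report.xls", stream.ToArray());
}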
public static DataSet CreateDataSetFromExcel(Stream streamToProcess, string fileExtension = "xlsx")
{
    DataSet model = new DataSet();

    if (streamToProcess != null)
    {
        if (fileExtension == "xlsx")
        {
            XSSFWorkbook workbook = new XSSFWorkbook(streamToProcess);
            model = ProcessXLSX(workbook);
        }
        else
        {
            HSSFWorkbook workbook = new HSSFWorkbook(streamToProcess);
            model = ProcessXLSX(workbook);
        }
    }

    return model;
}
private static DataSet ProcessXLSX(HSSFWorkbook workbook)
{
    DataSet model = new DataSet();

    for (int index = 0; index < workbook.NumberOfSheets; index++)
    {
        ISheet sheet = workbook.GetSheetAt(index); // was GetSheetAt(0), which re-read the first sheet on every pass
        if (sheet != null)
        {
            DataTable table = GenerateTableData(sheet);
            model.Tables.Add(table);
        }
    }

    return model;
}
private static DataTable GenerateTableData(ISheet sheet)
{
    DataTable table = new DataTable(sheet.SheetName);

    for (int rowIndex = 0; rowIndex <= sheet.LastRowNum; rowIndex++)
    {
        // we will assume the first row holds the column names
        IRow row = sheet.GetRow(rowIndex);

        // a completely empty row of data, so break out of the process
        if (row == null)
        {
            break;
        }

        if (rowIndex == 0)
        {
            for (int cellIndex = 0; cellIndex < row.LastCellNum; cellIndex++)
            {
                string value = row.GetCell(cellIndex).ToString();
                if (string.IsNullOrEmpty(value))
                {
                    break;
                }
                else
                {
                    table.Columns.Add(new DataColumn(value));
                }
            }
        }
        else
        {
            // now we know the number of columns, get the data and fill up the table
            DataRow datarow = table.NewRow();
            object[] objectArray = new object[table.Columns.Count];

            for (int columnIndex = 0; columnIndex < table.Columns.Count; columnIndex++)
            {
                try
                {
                    ICell cell = row.GetCell(columnIndex);
                    if (cell != null)
                    {
                        objectArray[columnIndex] = cell.ToString();
                    }
                    else
                    {
                        objectArray[columnIndex] = string.Empty;
                    }
                }
                catch (Exception error)
                {
                    Debug.WriteLine(error.Message);
                    Debug.WriteLine("Column Index: " + columnIndex);
                    Debug.WriteLine("Row Index: " + row.RowNum);
                }
            }

            datarow.ItemArray = objectArray;
            table.Rows.Add(datarow);
        }
    }

    return table;
}
private static DataSet ProcessXLSX(XSSFWorkbook workbook)
{
    DataSet model = new DataSet();

    for (int index = 0; index < workbook.NumberOfSheets; index++)
    {
        ISheet sheet = workbook.GetSheetAt(index);
        if (sheet != null)
        {
            DataTable table = GenerateTableData(sheet);
            model.Tables.Add(table);
        }
    }

    return model;
}
This does require the NPOI NuGet package to be installed in your project.
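(For reference, that is Install-Package NPOI from the Package Manager Console.)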
Any questions, give me a shout. The GitHub project does a bit more, but this should hopefully be enough to get you going.