C# ViewState - can't retrieve table

I have a DataTable containing file paths (so three columns: name, path and index), which I am passing via ViewState; a LinkButton references an index into this table, and I then want to use the path from the table to construct an HTTP file transfer.
I am unable to retrieve the DataTable once it has been saved into ViewState:
ViewState["varFiles"] = filedata;
(When the page is first constructed, and then after postback:)
if (!IsPostBack)
{
    SetupSession();
    newpopfiles();
}
else
{
    if (ViewState["varFiles"] != null)
    {
        DataTable filedata = new DataTable();
        filedata = (DataTable)Session["varFiles"];
    }
}
From what I understand this should pull back filedata as a table in exactly the same form as before postback. Is this correct?
When subsequently referencing the table I get a null reference exception. Any ideas?
Many thanks,
Dan

It sounds like you're almost there; you just need to be consistent and use the same storage mechanism on both sides :)
The bit that saves the DataTable into your session, probably in OnInit() or Page_Load():
DataTable myDataTable = //... fill it in somehow
Session["varFiles"] = myDataTable;
The bit to read the DataTable after postback:
if (!IsPostBack)
{
    SetupSession();
    newpopfiles();
}
else
{
    DataTable filedata = Session["varFiles"] as DataTable;
    if (filedata != null)
    {
        //... do something
    }
}
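Putting the two halves together, Page_Load might look something like this - a minimal sketch, assuming newpopfiles() is where the table gets built and stored:
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        SetupSession();
        newpopfiles(); // assumed to build the table and store it: Session["varFiles"] = filedata;
    }
    else
    {
        DataTable filedata = Session["varFiles"] as DataTable;
        if (filedata != null)
        {
            // use filedata to drive the HTTP file transfer
        }
    }
}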

Related

Switch source method for object created in Using block

I have a process that is to be used to load data from various sources to a SQL Server database. Within the process, I have several methods that each consume file data and return a DataTable object. Depending on the type of data to be loaded, one of these methods is called for any single run of the process.
All of the DataTable objects created by these methods are consumed by the same target method, which transfers the data to SQL Server. This has led to some duplication of code:
if (useDT == 1)
{
    using (DataTable dt = MakeDT1())
    {
        ConsumeDT(dt);
    }
}
if (useDT == 2)
{
    using (DataTable dt = MakeDT2())
    {
        ConsumeDT(dt);
    }
}
(Simplified for clarity; the real-world names are descriptive.)
I'd like to avoid this if at all possible. Is it possible to pre-calculate the correct method to call to generate the DataTable, and then call ConsumeDT(dt) just once? E.g.:
DataTable dtCall = null;
switch (useDT)
{
    case 1:
        dtCall = MakeDT1();
        break;
    case 2:
        dtCall = MakeDT2();
        break;
}
using (DataTable dt = dtCall)
//etc
Thanks in advance, Iain
Write a little helper method that returns the correct kind of DataTable (note: useDT is an int in your code, so the helper takes an int):
private DataTable MakeDt(int useDT)
{
    switch (useDT)
    {
        case 1: return MakeDT1();
        case 2: return MakeDT2();
        default: throw new ArgumentOutOfRangeException("useDT");
    }
}
And then call that in the using like so:
using (var dt = MakeDt(useDT))
{
    ConsumeDT(dt);
}
This has the advantage of assigning the disposable dt inside a using, making it unlikely that someone will write code that could cause a leak.
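If the list of source methods keeps growing, another option (just a sketch, assuming the same MakeDT1/MakeDT2 methods) is to map each source type to a factory delegate, so the switch disappears entirely:
var factories = new Dictionary<int, Func<DataTable>>
{
    { 1, MakeDT1 },
    { 2, MakeDT2 }
};

using (DataTable dt = factories[useDT]())
{
    ConsumeDT(dt);
}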
You can also leave out the using block; you'll just have to make sure to dispose of the object correctly at the end (and note dt must be initialised, or the compiler will complain about an unassigned variable when useDT is neither 1 nor 2):
DataTable dt = null;
switch (useDT)
{
    case 1: dt = MakeDT1(); break;
    case 2: dt = MakeDT2(); break;
}
try
{
    ConsumeDT(dt);
}
finally
{
    if (dt != null) dt.Dispose();
}

DataTable Remove Rows Not Reflected in SQL Server

I'm fairly new to C# and this has me stumped. My project is using DataTables and TableAdapters to connect to a SQL Server database. I have a method that opens Excel, builds a DataRow and then passes that to the method below which adds it to my DataTable (cdtJETS) via the TableAdapter (ctaJETS).
public bool AddJETSRecord(DataRow JETSDataRow)
{
    bool bolException = false;
    cdtJETS.BeginLoadData();
    // Add the data row to the table
    try
    {
        cdtJETS.ImportRow(JETSDataRow);
    }
    catch (Exception e)
    {
        // Log an exception
        bolException = true;
        Console.WriteLine(e.Message);
    }
    cdtJETS.EndLoadData();
    // If there were no errors and no exceptions, then accept the changes
    if (!cdtJETS.HasErrors && !bolException)
    {
        ctaJETS.Update(cdtJETS);
        return true;
    }
    else
        return false;
}
The above works fine and the records show up in SQL Server as expected. I have another method that grabs a subset of the records in that DataTable and outputs them to another Excel file (this is a batch process that will collect records over time using the above method and then occasionally output them, so I can't directly move the data from the first Excel file to the second). After the second Excel file is updated I want to delete the records from the table so that they aren't duplicated the next time the method is run. This is where I'm having the issue:
public bool DeleteJETSRecords(DataTable JETSData)
{
    int intCounter = 0;
    DataRow drTarget;
    // Parse all of the rows in the JETS Data that is to be deleted
    foreach (DataRow drCurrent in JETSData.Rows)
    {
        // Search the database data table for the current row's OutputID
        drTarget = cdtJETS.Rows.Find(drCurrent["OutputID"]);
        // If the row is found, then delete it and increment the counter
        if (drTarget != null)
        {
            cdtJETS.Rows.Remove(drTarget);
            intCounter++;
        }
    }
    // Continue if all of the rows were found and removed
    if (JETSData.Rows.Count == intCounter && !cdtJETS.HasErrors)
    {
        cdtJETS.AcceptChanges();
        try
        {
            ctaJETS.Update(cdtJETS);
        }
        catch (Exception)
        {
            throw;
        }
        return true;
    }
    else
        cdtJETS.RejectChanges();
    return false;
}
As I step through the method I can see the rows being removed from the DataTable (i.e. if JETSData has 10 rows, at the end cdtJETS has n-10 rows) and no exceptions are thrown, but after I AcceptChanges and Update the TableAdapter, the underlying records are still in my SQL Server table. What am I missing?
The Rows.Remove method is equivalent to calling the row's Delete method, followed by the row's AcceptChanges method.
As with the DataTable.AcceptChanges method, this indicates that the change has already been saved to the database. This is not what you want.
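To see the difference in terms of row state (illustrative only; someId stands in for one of your OutputID values):
DataRow drTarget = cdtJETS.Rows.Find(someId); // someId: a placeholder key value

drTarget.Delete();             // RowState becomes Deleted; the next Update() issues a DELETE
// versus
cdtJETS.Rows.Remove(drTarget); // the row and its change tracking vanish; Update() sends nothing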
The following should work:
public bool DeleteJETSRecords(DataTable JETSData)
{
    int intCounter = 0;
    DataRow drTarget;
    // Parse all of the rows in the JETS Data that is to be deleted
    foreach (DataRow drCurrent in JETSData.Rows)
    {
        // Search the database data table for the current row's OutputID
        drTarget = cdtJETS.Rows.Find(drCurrent["OutputID"]);
        // If the row is found, then delete it and increment the counter
        if (drTarget != null)
        {
            drTarget.Delete();
            intCounter++;
        }
    }
    // Continue if all of the rows were found and removed
    if (JETSData.Rows.Count == intCounter && !cdtJETS.HasErrors)
    {
        // You have to call Update *before* AcceptChanges:
        ctaJETS.Update(cdtJETS);
        cdtJETS.AcceptChanges();
        return true;
    }
    cdtJETS.RejectChanges();
    return false;
}

Limitation when exporting data to an Excel spreadsheet

I know this question exists, because it's mine and I put up 500 bounty points on it:
Exporting C# report to Excel when there are more than 5K lines
The answer got me over the hump (to some degree), but we're at the point where we just accept that abnormally large datasets can't be exported via our ASP front end, so we ship those requests off to our SQL Server DBAs, who then run the appropriate stored procedures and copy/paste into Excel spreadsheets.
My question here is: can someone definitively answer whether it's absolutely impossible to export a large dataset to an Excel spreadsheet via an ASP front end? Once a particular report hits about 8K records, it just can't seem to be done. I'm trying to determine whether any other tweak can be made, or if that much data is simply more than ASP can handle.
Well... since I've streamed gigabytes of data directly from ASP.NET, I'm pretty sure you're doing something wrong. Try to isolate the problem first: is it putting the data into the session, is it request/response limits, is it request timeouts? Figure out where the problem is, and then go ahead and solve it! :)
In general terms, there's no reason why you should put the data in a DataSet first. Instead, use a SqlDataReader and write the data to the output in chunks. That way you avoid having the whole data set in memory; likewise, you can write directly to the output stream without buffering the generated HTML in memory. Why do you keep the data in Session? Wouldn't it be better to just hold the parameters necessary to retrieve it from the DB as needed, using the DataReader?
If you're having trouble with timeouts, periodic Flushes help. They also reduce the memory footprint on the ASP.NET side.
Saving the output data to a file on the server first also helps, and it allows you to wire up partial file downloads too - just make sure you actually have enough space on the drive.
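If you go the file route, the shape is roughly this (an illustrative sketch: cmd is a SqlCommand as in the edit below, and the path and column name are placeholders):
// Write the report to a server-side file, then stream it to the client.
string path = Server.MapPath("~/App_Data/report.html"); // placeholder path
using (var writer = new StreamWriter(path))
using (var rdr = cmd.ExecuteReader())
{
    while (rdr.Read())
    {
        writer.WriteLine("<tr><td>{0}</td></tr>", rdr["Column1"]);
    }
}
Response.TransmitFile(path); // sends the file without buffering it in managed memory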
EDIT:
Ok, so you've got an SqlCommand. Instead of using it in a SqlDataAdapter, you can do something like this (cmd being your SqlCommand instance):
HtmlTextWriter wr = new HtmlTextWriter(Response.Output);
using (var rdr = cmd.ExecuteReader())
{
    int index = 0;
    wr.RenderBeginTag(HtmlTextWriterTag.Table);
    wr.WriteLine("<tr><td>Column 1</td><td>Column 2</td></tr>");
    while (rdr.Read())
    {
        wr.RenderBeginTag(HtmlTextWriterTag.Tr);
        wr.RenderBeginTag(HtmlTextWriterTag.Td);
        wr.Write(rdr["Column1"]);
        wr.RenderEndTag(); // </td>
        wr.RenderBeginTag(HtmlTextWriterTag.Td);
        wr.Write(rdr["Column2"]);
        wr.RenderEndTag(); // </td>
        wr.RenderEndTag(); // </tr>
        if (index++ % 1000 == 0) Response.Flush();
    }
    wr.RenderEndTag(); // </table>
}
I have not tested it, so it might need some tweaking, but the idea should be pretty obvious.
It is possible to do this. I've actually just finished some code specifically for this, as part of a reporting project where we have in excess of 20K records that need to be pulled back and exported into Excel.
I will pull out the code and stick it on GitHub for you to look at.
I'm using NPOI's Excel processing package, and with my custom code I can dynamically process any List of classes into a DataSet and then dump it into the worksheets.
I need to tidy up the code, but I should have something ready for you this evening.
This code will work for both desktop and web apps.
To give you an idea, my code has been able to process a dataset of over 30K records relatively quickly. I first have to resolve an issue with datasets over the 65,536-row limit of the older XLS format before it is ready for you.
The nice thing about this solution is that it doesn't rely on Excel being installed on the machine hosting it.
EDIT
I have loaded a project onto GitHub here:
https://github.com/JellyMaster/ExcelHelper
but here is the main bit that does all the Excel processing:
public static MemoryStream CreateExcelSheet(DataSet dataToProcess)
{
    MemoryStream stream = new MemoryStream();
    if (dataToProcess != null)
    {
        var excelworkbook = new HSSFWorkbook();
        foreach (DataTable table in dataToProcess.Tables)
        {
            var worksheet = excelworkbook.CreateSheet();
            var headerRow = worksheet.CreateRow(0);
            foreach (DataColumn column in table.Columns)
            {
                headerRow.CreateCell(table.Columns.IndexOf(column)).SetCellValue(column.ColumnName);
            }
            // freeze the top pane
            worksheet.CreateFreezePane(0, 1, 0, 1);
            int rowNumber = 1;
            foreach (DataRow row in table.Rows)
            {
                var sheetRow = worksheet.CreateRow(rowNumber++);
                foreach (DataColumn column in table.Columns)
                {
                    sheetRow.CreateCell(table.Columns.IndexOf(column)).SetCellValue(row[column].ToString());
                }
            }
        }
        excelworkbook.Write(stream);
    }
    return stream;
}

public static DataSet CreateDataSetFromExcel(Stream streamToProcess, string fileExtension = "xlsx")
{
    DataSet model = new DataSet();
    if (streamToProcess != null)
    {
        if (fileExtension == "xlsx")
        {
            XSSFWorkbook workbook = new XSSFWorkbook(streamToProcess);
            model = ProcessXLSX(workbook);
        }
        else
        {
            HSSFWorkbook workbook = new HSSFWorkbook(streamToProcess);
            model = ProcessXLSX(workbook);
        }
    }
    return model;
}

private static DataSet ProcessXLSX(HSSFWorkbook workbook)
{
    DataSet model = new DataSet();
    for (int index = 0; index < workbook.NumberOfSheets; index++)
    {
        // note: fixed to GetSheetAt(index); GetSheetAt(0) would process the first sheet repeatedly
        ISheet sheet = workbook.GetSheetAt(index);
        if (sheet != null)
        {
            DataTable table = GenerateTableData(sheet);
            model.Tables.Add(table);
        }
    }
    return model;
}

private static DataTable GenerateTableData(ISheet sheet)
{
    DataTable table = new DataTable(sheet.SheetName);
    for (int rowIndex = 0; rowIndex <= sheet.LastRowNum; rowIndex++)
    {
        // we will assume the first row holds the column names
        IRow row = sheet.GetRow(rowIndex);
        // a completely empty row of data, so break out of the process
        if (row == null)
        {
            break;
        }
        if (rowIndex == 0)
        {
            for (int cellIndex = 0; cellIndex < row.LastCellNum; cellIndex++)
            {
                string value = row.GetCell(cellIndex).ToString();
                if (string.IsNullOrEmpty(value))
                {
                    break;
                }
                else
                {
                    table.Columns.Add(new DataColumn(value));
                }
            }
        }
        else
        {
            // now we know the number of columns, iterate through them and fill up the table
            DataRow datarow = table.NewRow();
            object[] objectArray = new object[table.Columns.Count];
            for (int columnIndex = 0; columnIndex < table.Columns.Count; columnIndex++)
            {
                try
                {
                    ICell cell = row.GetCell(columnIndex);
                    if (cell != null)
                    {
                        objectArray[columnIndex] = cell.ToString();
                    }
                    else
                    {
                        objectArray[columnIndex] = string.Empty;
                    }
                }
                catch (Exception error)
                {
                    Debug.WriteLine(error.Message);
                    Debug.WriteLine("Column Index: " + columnIndex);
                    Debug.WriteLine("Row Index: " + row.RowNum);
                }
            }
            datarow.ItemArray = objectArray;
            table.Rows.Add(datarow);
        }
    }
    return table;
}

private static DataSet ProcessXLSX(XSSFWorkbook workbook)
{
    DataSet model = new DataSet();
    for (int index = 0; index < workbook.NumberOfSheets; index++)
    {
        ISheet sheet = workbook.GetSheetAt(index);
        if (sheet != null)
        {
            DataTable table = GenerateTableData(sheet);
            model.Tables.Add(table);
        }
    }
    return model;
}
This does require the NPOI NuGet package to be installed in your project.
Any questions, give me a shout. The GitHub project does a bit more, but this should be enough to get you going, hopefully.
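For completeness, here's roughly how you'd hand the generated workbook to the browser from an ASP.NET page (a sketch only; GetReportData is a placeholder for however you build your DataSet):
DataSet report = GetReportData(); // placeholder: build your DataSet however you like
using (MemoryStream stream = CreateExcelSheet(report))
{
    Response.Clear();
    Response.ContentType = "application/vnd.ms-excel"; // XLS, matching HSSFWorkbook
    Response.AddHeader("Content-Disposition", "attachment; filename=report.xls");
    stream.WriteTo(Response.OutputStream);
    Response.End();
}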

Make changes to referenced datatable in Parallel.ForEach loop

I'm trying to create a method which, when passed a DataTable of pingable host names, pings each host and then changes the value of the corresponding row's column depending on ping success.
However, I can't use the ref parameter inside the Parallel.ForEach lambda. Is there any way I could make this work?
Here's my code:
public void checkON(ref DataTable PCS)
{
    Parallel.ForEach(PCS.AsEnumerable(), pc =>
    {
        string loopIp = pc["Name"].ToString();
        if (PingIP(loopIp))
        {
            DataRow[] currentpc = PCS.Select(string.Format("Name = '{0}'", loopIp));
            currentpc[0]["Online"] = "ON";
        }
        else
        {
            DataRow[] currentpc = PCS.Select(string.Format("Name = '{0}'", loopIp));
            currentpc[0]["Online"] = "OFF";
        }
    });
}
Unless code explicitly says that it is thread-safe, you should assume it is not - and therefore access must be synchronized. The ref in your code serves no purpose. Each pc is a DataRow, so you can access that directly:
string loopIp;
lock (someLockObject)
{
    loopIp = (string)pc["Name"];
}
string online = PingIP(loopIp) ? "ON" : "OFF";
lock (someLockObject)
{
    pc["Online"] = online;
}
where someLockObject is shared between all of the callers, because you can't make assumptions about the threading model:
object someLockObject = new object();
Parallel.ForEach(PCS.AsEnumerable(), pc =>
{ ... });
In particular, you can't just lock the row because DataTable doesn't store data in rows (it stores it in columns; no, really).
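So, putting that together, the whole method might look something like this (a sketch that assumes your existing PingIP helper):
public void CheckOn(DataTable pcs) // no ref needed; DataTable is a reference type
{
    object someLockObject = new object();
    Parallel.ForEach(pcs.AsEnumerable(), pc =>
    {
        string loopIp;
        lock (someLockObject)
        {
            loopIp = (string)pc["Name"];
        }
        string online = PingIP(loopIp) ? "ON" : "OFF"; // ping outside the lock
        lock (someLockObject)
        {
            pc["Online"] = online;
        }
    });
}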

Using Session[] with Page Load

I want to load the data into Session so that when the Next button is clicked in the Crystal Report viewer, it loads the data from the DataTable instead of retrieving it again from the database. Here goes my code:
ReportDocument rpt = new ReportDocument();
DataTable resultSet = new DataTable();
string reportpath = null;

protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        if (Request.QueryString.Get("id") == "5")
        {
            string publication = Request.QueryString.Get("pub");
            DateTime date = DateTime.Parse(Request.QueryString.Get("date"));
            int pages = int.Parse(Request.QueryString.Get("pages"));
            int sort = int.Parse(Request.QueryString.Get("sort"));
            if (sort == 0)
            {
                reportpath = Server.MapPath("IssuesReport.rpt");
                rpt.Load(reportpath);
                DataTable resultSet1 = RetrievalProcedures.IssuesReport(date, publication, pages);
                Session["Record"] = resultSet1;
            }
            DataTable report = (DataTable)Session["Record"];
            rpt.SetDataSource(report);
            CrystalReportViewer1.ReportSource = rpt;
        }
    }
}
I am trying this code, but when I click the Next button it gives me an "invalid report source" error. I guess the Session value is null; that's why it's giving me this error.
Any suggestions on how I can solve this?
I think you'd want to use the Cache object with a unique key for each user instead of Session here.
Pseudo code:
var data = Cache["Record_999"] as DataTable; // "Record_999": a unique key per user
if (data == null)
{
    // get from db
    // insert into cache
}
rpt.SetDataSource(data);
The problem lies not with using Session; it lies with the logic used to determine when to retrieve the data. Session is the correct approach here, as Cache is shared across requests - that is, User A would see the report User B just configured if User B was the first user to execute code that used Cache instead of Session.
if (!Page.IsPostBack)
{
    if (Request.QueryString.Get("id") == "5")
    {
        string publication = Request.QueryString.Get("pub");
        DateTime date = DateTime.Parse(Request.QueryString.Get("date"));
        int pages = int.Parse(Request.QueryString.Get("pages"));
        int sort = int.Parse(Request.QueryString.Get("sort"));
        // fixed the statement below to key off of Session
        if (Session["Record"] == null)
        {
            reportpath = Server.MapPath("IssuesReport.rpt");
            rpt.Load(reportpath);
            Session["Record"] = RetrievalProcedures.IssuesReport(date, publication, pages);
        }
        rpt.SetDataSource((DataTable)Session["Record"]);
        CrystalReportViewer1.ReportSource = rpt;
        // ....
    }
}
Could it be that sort is not 0? If sort is not 0 and the user is accessing the page for the first time (Session["Record"] has not been set before), they might get the error.
You might want to try:
if (sort == 0 || Session["Record"] == null)
{
    // do your magic
}
