Memory leak with DataAdapter.Fill to DataSet - C#

I have a simple program that grabs data from a database and stores it in a DataSet through DataAdapter.Fill().
The problem I am facing is that the program's memory size keeps increasing. Using Process Explorer, I monitor the virtual size of the program. The processes are run in a standalone build (not in the Unity editor).
Two minutes after the process is launched, the program sits at 824,444 K virtual size, but after leaving it running for 30 minutes, its virtual size has increased to 1,722,340 K.
(can't upload a screenshot) https://drive.google.com/open?id=0B0DwzunTEqfKcDhHcXRmV2twUEE
The program consists of only two simple scripts: a helper class, SQLController, which manages loading from the database into a DataSet, and a MonoBehaviour script, DataProducer, which polls the database at a regular interval for "updates" using an SQLController object.
SQLController:
using UnityEngine;
using System.Collections;
using System.Data;
using System.Data.Sql;
using System.Data.SqlClient;
using UnityEngine.UI;
using System.Threading;
public class SQLController {
string connectionString;
string dataSource, catalog, uid, pwd;
DataSet dat_set;
public SQLController(string dataSource, string catalog, string uid, string pwd)
{
this.dataSource = dataSource;
this.catalog = catalog;
this.uid = uid;
this.pwd = pwd;
}
/// <summary>
/// Open a connection to the database and query it for data with the given statement
/// </summary>
/// <param name="statement">The query statement used to query the server</param>
/// <param name="name">The name given to the table created in the DataSet</param>
/// <returns>A DataSet containing the query results</returns>
public DataSet Load(string statement, string name)
{
connectionString = string.Format("data source={0};initial catalog={1};uid={2};pwd={3}", dataSource, catalog, uid, pwd);
using (SqlConnection dbcon = new SqlConnection(connectionString))
using (SqlDataAdapter dataAdapter = new SqlDataAdapter(statement, dbcon))
{
dat_set = new System.Data.DataSet();
dbcon.Open();
dataAdapter.Fill(dat_set, name);
}
return dat_set;
}
}
DataProducer:
using UnityEngine;
using System.Collections;
using System.Threading;
using System.Data;
using UnityEngine.UI;
using SimpleJSON;
public class DataProducer : MonoBehaviour {
public GameObject textObj;
private string[] _configData;
private string _outputText;
private SQLController _sqlController;
private bool _toggle = true;
private bool _updating = false;
private DataSet _dataSetCache;
private Thread _dataGrabThread;
// Use this for initialization
void Start () {
_configData = new string[5];
if (LoadFromConfigFile())
{
StartCoroutine(LoadFromDB());
}
}
// Update is called once per frame
void Update () {
textObj.GetComponent<Text>().text = _outputText;
}
public void OnDisable()
{
// stop any running thread
if (null != _dataGrabThread && _dataGrabThread.IsAlive)
{
_dataGrabThread.Abort();
}
}
IEnumerator LoadFromDB()
{
while (true)
{
if (_updating)
{
Debug.Log("Data System Poll DataBase ignored");
}
else
{
_updating = true;
_dataGrabThread = new Thread(Load);
_dataGrabThread.IsBackground = true;
_dataGrabThread.Start();
}
yield return new WaitForSeconds(10f);
}
}
void Load()
{
string statement;
if (_toggle)
{
_toggle = !_toggle;
statement = "SELECT TOP 100000 [AIIDX],[LASTATTACKDATE],[LASTRECEIVEDDATE],[DEVICEID],[INSTANCES],[ATTACKTYPE],[SEVERITY],[STATUS] FROM AI (NOLOCK)";
}
else
{
_toggle = !_toggle;
statement = "SELECT TOP 100000 [AIIDX],[LASTATTACKDATE],[LASTRECEIVEDDATE],[DEVICEID],[SEVERITY],[STATUS] FROM AI (NOLOCK)";
}
_sqlController = new SQLController(_configData[0], _configData[1], _configData[2], _configData[3]);
_outputText = "Loading";
_dataSetCache = _sqlController.Load(statement, "TestObject");
PrintDataSet();
_updating = false;
}
/// <summary>
/// Convert datatable into string and print it out through a text object
/// </summary>
void PrintDataSet()
{
if (null == _dataSetCache)
{
return;
}
DataTable dt = _dataSetCache.Tables["TestObject"];
if (null == dt)
{
return;
}
System.Text.StringBuilder builder = new System.Text.StringBuilder();
for (int i = 0; i < Mathf.Min(20, dt.Rows.Count); ++i)
{
builder.AppendFormat("{0,-5}", (i + 1) + ".");
//comp.text += string.Format("{0,-5}", (i + 1) + ".");
DataRow dr = dt.Rows[i];
for (int j = 0; j < dt.Columns.Count; ++j)
{
builder.AppendFormat("{0, -30}", dr[dt.Columns[j]].ToString());
}
builder.Append("\n");
}
_outputText = builder.ToString();
builder = null;
}
bool LoadFromConfigFile()
{
string line;
using (System.IO.StreamReader file = new System.IO.StreamReader("./config.txt"))
{
if (file == null)
{
return false;
}
int index = 0;
while ((line = file.ReadLine()) != null)
{
if (index >= _configData.Length)
{
Debug.LogError("Invalid Config file");
return false;
}
_configData[index++] = line;
}
//if the config file does not consist of 5 data
if (index < _configData.Length)
{
Debug.LogError("Invalid Config file");
return false;
}
return true;
}
}
}
I am not sure exactly what causes the memory leak, but when I changed the loading from threaded,
_dataGrabThread = new Thread(Load);
to running on the main thread,
Load()
the process's virtual size still grows, but at a slower rate: from 828,312 K at 2 minutes to 1,083,908 K at 40 minutes.
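For reference, a minimal sketch (not from the original post) of one commonly suggested mitigation: clear and dispose the previous DataSet before overwriting the cached reference, so the old tables do not stay reachable between polls.
void Load()
{
    string statement = _toggle
        ? "SELECT TOP 100000 [AIIDX],[LASTATTACKDATE],[LASTRECEIVEDDATE],[DEVICEID],[INSTANCES],[ATTACKTYPE],[SEVERITY],[STATUS] FROM AI (NOLOCK)"
        : "SELECT TOP 100000 [AIIDX],[LASTATTACKDATE],[LASTRECEIVEDDATE],[DEVICEID],[SEVERITY],[STATUS] FROM AI (NOLOCK)";
    _toggle = !_toggle;
    _sqlController = new SQLController(_configData[0], _configData[1], _configData[2], _configData[3]);
    _outputText = "Loading";
    // ASSUMPTION: disposing the old DataSet promptly is a commonly suggested
    // mitigation, not a verified fix for this particular program.
    if (_dataSetCache != null)
    {
        _dataSetCache.Clear();
        _dataSetCache.Dispose();
    }
    _dataSetCache = _sqlController.Load(statement, "TestObject");
    PrintDataSet();
    _updating = false;
}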

Related

ADO.NET - Resilient Connection to Oracle Database using OracleClient

We have a legacy C# Windows Service application (.NET Framework 4.8) that performs some statistical analysis, which usually takes hours to complete because the underlying database has millions of rows of historical data.
It works fine if there is no underlying network interruption. However, we recently started using a database which is only accessible over a VPN. Now, if there is any VPN connection issue, the analysis stops.
Is there any way to recover from these transient faults silently and gracefully and continue the work?
For SqlClient I found sample code at https://learn.microsoft.com/en-us/sql/connect/ado-net/step-4-connect-resiliently-sql-ado-net?view=sql-server-ver15
The sample code from the link is shown below (in case the link dies).
using System; // C#
using CG = System.Collections.Generic;
using QC = Microsoft.Data.SqlClient;
using TD = System.Threading;
namespace RetryAdo2
{
public class Program
{
static public int Main(string[] args)
{
bool succeeded = false;
int totalNumberOfTimesToTry = 4;
int retryIntervalSeconds = 10;
for (int tries = 1;
tries <= totalNumberOfTimesToTry;
tries++)
{
try
{
if (tries > 1)
{
Console.WriteLine
("Transient error encountered. Will begin attempt number {0} of {1} max...",
tries, totalNumberOfTimesToTry
);
TD.Thread.Sleep(1000 * retryIntervalSeconds);
retryIntervalSeconds = Convert.ToInt32
(retryIntervalSeconds * 1.5);
}
AccessDatabase();
succeeded = true;
break;
}
catch (QC.SqlException sqlExc)
{
if (TransientErrorNumbers.Contains
(sqlExc.Number) == true)
{
Console.WriteLine("{0}: transient occurred.", sqlExc.Number);
continue;
}
else
{
Console.WriteLine(sqlExc);
succeeded = false;
break;
}
}
catch (TestSqlException sqlExc)
{
if (TransientErrorNumbers.Contains
(sqlExc.Number) == true)
{
Console.WriteLine("{0}: transient occurred. (TESTING.)", sqlExc.Number);
continue;
}
else
{
Console.WriteLine(sqlExc);
succeeded = false;
break;
}
}
catch (Exception Exc)
{
Console.WriteLine(Exc);
succeeded = false;
break;
}
}
if (succeeded == true)
{
return 0;
}
else
{
Console.WriteLine("ERROR: Unable to access the database!");
return 1;
}
}
/// <summary>
/// Connects to the database, reads,
/// prints results to the console.
/// </summary>
static public void AccessDatabase()
{
//throw new TestSqlException(4060); //(7654321); // Uncomment for testing.
using (var sqlConnection = new QC.SqlConnection
(GetSqlConnectionString()))
{
using (var dbCommand = sqlConnection.CreateCommand())
{
dbCommand.CommandText = #"
SELECT TOP 3
ob.name,
CAST(ob.object_id as nvarchar(32)) as [object_id]
FROM sys.objects as ob
WHERE ob.type='IT'
ORDER BY ob.name;";
sqlConnection.Open();
var dataReader = dbCommand.ExecuteReader();
while (dataReader.Read())
{
Console.WriteLine("{0}\t{1}",
dataReader.GetString(0),
dataReader.GetString(1));
}
}
}
}
/// <summary>
/// You must edit the four 'my' string values.
/// </summary>
/// <returns>An ADO.NET connection string.</returns>
static private string GetSqlConnectionString()
{
// Prepare the connection string to Azure SQL Database.
var sqlConnectionSB = new QC.SqlConnectionStringBuilder();
// Change these values to your values.
sqlConnectionSB.DataSource = "tcp:myazuresqldbserver.database.windows.net,1433"; //["Server"]
sqlConnectionSB.InitialCatalog = "MyDatabase"; //["Database"]
sqlConnectionSB.UserID = "MyLogin"; // "@yourservername" as suffix sometimes.
sqlConnectionSB.Password = "MyPassword";
sqlConnectionSB.IntegratedSecurity = false;
// Adjust these values if you like. (ADO.NET 4.5.1 or later.)
sqlConnectionSB.ConnectRetryCount = 3;
sqlConnectionSB.ConnectRetryInterval = 10; // Seconds.
// Leave these values as they are.
sqlConnectionSB.IntegratedSecurity = false;
sqlConnectionSB.Encrypt = true;
sqlConnectionSB.ConnectTimeout = 30;
return sqlConnectionSB.ToString();
}
static public CG.List<int> TransientErrorNumbers =
new CG.List<int> { 4060, 40197, 40501, 40613,
49918, 49919, 49920, 11001 };
}
/// <summary>
/// For testing retry logic, you can have method
/// AccessDatabase start by throwing a new
/// TestSqlException with a Number that does
/// or does not match a transient error number
/// present in TransientErrorNumbers.
/// </summary>
internal class TestSqlException : ApplicationException
{
internal TestSqlException(int testErrorNumber)
{ this.Number = testErrorNumber; }
internal int Number
{ get; set; }
}
}
However, I couldn't find any helpful material for the OracleClient. Any ideas, please?
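In the absence of an official Oracle sample, below is a minimal sketch of the same retry pattern adapted to the ODP.NET managed driver (Oracle.ManagedDataAccess.Client). The ORA- error numbers in the list are ones commonly treated as transient network failures; treat them as an assumption and verify them for your environment.
using System;
using System.Collections.Generic;
using System.Threading;
using Oracle.ManagedDataAccess.Client;
public static class ResilientOracle
{
    // ORA- error numbers commonly seen on network/VPN drops.
    // ASSUMPTION: verify this list against your own environment.
    static readonly List<int> TransientErrorNumbers =
        new List<int> { 3113, 3135, 12170, 12541, 12543 };
    public static void Execute(string connectionString,
                               Action<OracleConnection> work,
                               int totalNumberOfTimesToTry = 4,
                               int retryIntervalSeconds = 10)
    {
        for (int tries = 1; tries <= totalNumberOfTimesToTry; tries++)
        {
            try
            {
                if (tries > 1)
                {
                    Console.WriteLine("Transient error encountered. Attempt {0} of {1}...",
                        tries, totalNumberOfTimesToTry);
                    Thread.Sleep(1000 * retryIntervalSeconds);
                    retryIntervalSeconds = Convert.ToInt32(retryIntervalSeconds * 1.5);
                }
                using (var connection = new OracleConnection(connectionString))
                {
                    connection.Open();
                    work(connection); // the long-running analysis goes here
                }
                return; // succeeded
            }
            catch (OracleException exc) when (TransientErrorNumbers.Contains(exc.Number))
            {
                Console.WriteLine("{0}: transient error occurred, will retry.", exc.Number);
            }
        }
        throw new InvalidOperationException("ERROR: Unable to access the database!");
    }
}
Note that a retry loop only restarts the work; for an analysis that runs for hours, the work itself would also need to checkpoint its progress to truly continue where it left off.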

Fast and efficient way to write large amounts of data from a DB to a CSV file

I'm creating a program that fetches data from a DB and writes it to disk as a CSV file; the main problem is that the DB can have more than 6 columns and more than 3000 rows.
I'm using CsvHelper to write the CSV file from a string array.
I know this isn't the best method, so can you give me some ideas for a fast and efficient way to write the data to the CSV file without waiting 20+ minutes?
This is the code I've written that writes the data to the CSV file (I feel that this isn't a good way to achieve this):
// Start writing the CSV file
using (TextWriter _w = new StreamWriter(String.Format("{0}{1}[{2}].csv", args[2], args[1], GetDateTime())))
using (CsvWriter _csv = new CsvWriter(_w)) {
_csv.Configuration.Delimiter = ",";
// Writes the column names
for (int i = 0; i < ODBCSQL.ColumnsCount; i++)
_csv.WriteField(ODBCSQL.GetColumnName(i));
// Starts writing the rows
_csv.NextRecord();
int _columnID = 0;
int _cnt = 0;
while (_cnt < ODBCSQL.ElementsCount) {
string[] _elements = ODBCSQL.GetElements(_columnID);
_csv.WriteField(_elements[_cnt], true);
if (_columnID == ODBCSQL.ColumnsCount - 1)
_csv.NextRecord();
if (_columnID != ODBCSQL.ColumnsCount - 1) {
_columnID++;
} else {
_columnID = 0;
_cnt++;
}
}
MessageBox.Show("CSV Phase 1 : ok");
_w.Flush();
MessageBox.Show("CSV Phase 2 : ok");
}
P.S: The ODBCSQL class is only a helper class I've written.
And below is the code of the ODBCSQL helper class
#region [ODBC_SQL_HELPER_CLASS]
public static class ODBCSQL {
private static OdbcConnection _connection;
private static OdbcDataReader _reader;
private static string _cmd;
public static int ColumnsCount = 0;
public static int ElementsCount = 0;
#region [CONNECTION]
public static bool Connect(string ConnectionString, string Command) {
_connection = new OdbcConnection(ConnectionString);
_cmd = Command;
try {
_connection.Open();
// Test to see if we can read data from the DB
if (!ReadData())
return false;
return true;
} catch {
CloseConnection();
return false;
}
}
// Dispose
public static void CloseConnection() {
try { _connection.Close(); } catch { }
}
#endregion
#region [DATA]
private static void ResetReader() {
_reader.Close();
_reader = (new OdbcCommand(_cmd, _connection)).ExecuteReader();
}
private static bool ReadData() {
try {
_reader = (new OdbcCommand(_cmd, _connection)).ExecuteReader();
// Retrieve the number of columns
ColumnsCount = _reader.FieldCount;
ElementsCount = GetElementsCount();
return true;
} catch {
return false;
}
}
private static int GetElementsCount() {
int _cnt = 0;
ResetReader();
while (_reader.Read())
_cnt++;
return _cnt;
}
public static string GetColumnName(int ColumnID) {
return _reader.GetName(ColumnID);
}
public static string[] GetElements(int ColumnID) {
List<string> _elements = new List<string>();
ResetReader();
while (_reader.Read())
_elements.Add(_reader[ColumnID].ToString());
return _elements.ToArray();
}
#endregion
}
#endregion
Maybe the problem is the ResetReader()?
I know it is not a C# answer, but I would suggest doing that kind of work (mass data export/import) using ETL tools that are specifically designed for it, especially if this type of request is not a one-off. Some example ETL tools are:
SQL Server Integration Services
Pentaho Data Integration
I've solved it.
The problem was the GetElements function: I was calling it every time in the CSV-writing loop, which iterated over all the "elements" (data) in the columns again and again.
The solution:
public static List<string> GetAllElements() {
List<string> _elements = new List<string>();
ResetReader();
while (_reader.Read()) {
for (int i = 0; i < ColumnsCount; i++)
_elements.Add(_reader[i].ToString());
}
return _elements;
}
I use this to retrieve all the elements from the selected columns in the DB only once.
And this to write the file:
// Start writing the CSV file
using (TextWriter _w = new StreamWriter(String.Format("{0}\\{1}[{2}].csv", args[2], args[1], GetDateTime())))
using (CsvWriter _csv = new CsvWriter(_w)) {
_csv.Configuration.Delimiter = ",";
// Writes the column names
for (int i = 0; i < ODBCSQL.ColumnsCount; i++)
_csv.WriteField(ODBCSQL.GetColumnName(i));
_csv.NextRecord();
// Starts writing the rows
List<string> _elements = ODBCSQL.GetAllElements();
for (int i = 0; i < (ODBCSQL.ElementsCount * ODBCSQL.ColumnsCount); i += ODBCSQL.ColumnsCount) {
for (int j = 0; j < ODBCSQL.ColumnsCount; j++) {
_csv.WriteField(_elements[i + j], true);
if (j == ODBCSQL.ColumnsCount - 1)
_csv.NextRecord();
}
}
_w.Flush();
}
Thanks to all for the help!
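For comparison, a minimal single-pass sketch (not from the thread) that streams rows straight from an OdbcDataReader instead of buffering them in a list first. connectionString, query, and outputPath are placeholder parameters, and it assumes the same older CsvWriter(TextWriter) constructor and WriteField(field, shouldQuote) overload used above.
// Assumes: using System.Data.Odbc; using System.IO; using CsvHelper;
static void ExportToCsv(string connectionString, string query, string outputPath)
{
    using (var connection = new OdbcConnection(connectionString))
    using (var command = new OdbcCommand(query, connection))
    {
        connection.Open();
        using (var reader = command.ExecuteReader())
        using (var writer = new StreamWriter(outputPath))
        using (var csv = new CsvWriter(writer)) // older CsvHelper constructor, as in the post
        {
            csv.Configuration.Delimiter = ",";
            // Header row straight from the reader's schema.
            for (int i = 0; i < reader.FieldCount; i++)
                csv.WriteField(reader.GetName(i));
            csv.NextRecord();
            // One pass over the data; nothing is buffered in memory.
            while (reader.Read())
            {
                for (int i = 0; i < reader.FieldCount; i++)
                    csv.WriteField(reader[i].ToString(), true);
                csv.NextRecord();
            }
        }
    }
}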

How to create an array and fill from tree node variable

I'm trying to transfer data from a treenode (at least I think that's what it is) which contains much more data than I need. It would be very difficult for me to manipulate the data within the treenode. I would much rather have an array which provides me with only the necessary data for manipulation.
I would like the array to have the following variables:
1. BookmarkNumber (integer)
2. Date (string)
3. DocumentType (string)
4. BookmarkPageNumberString (string)
5. BookmarkPageNumberInteger (integer)
I would like to fill the above-defined array with the data from the variable book_mark (as can be seen in my code).
I've been wrestling with this for two days. Any help would be much appreciated. I'm sure the question isn't phrased perfectly, so please ask questions so that I may explain further if needed.
Thanks so much.
BTW, what I'm trying to do is create a Windows Forms program which parses a PDF file that has multiple bookmarks into discrete PDF files, one per bookmark/chapter, saving each in the correct folder with the correct naming convention, where the folder and naming convention depend on the PDF name and the title of the bookmark/chapter being parsed.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.IO;
using itextsharp.pdfa;
using iTextSharp.awt;
using iTextSharp.testutils;
using iTextSharp.text;
using iTextSharp.xmp;
using iTextSharp.xtra;
namespace WindowsFormsApplication1
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void ChooseImageFileWrapper_Click(object sender, EventArgs e)
{
OpenFileDialog openFileDialog1 = new OpenFileDialog();
openFileDialog1.InitialDirectory = GlobalVariables.InitialDirectory;
openFileDialog1.Filter = "Pdf Files|*.pdf";
openFileDialog1.RestoreDirectory = true;
openFileDialog1.Title = "Image File Wrapper Chooser";
if (openFileDialog1.ShowDialog() == DialogResult.OK)
{
try
{
GlobalVariables.ImageFileWrapperPath = openFileDialog1.FileName;
}
catch (Exception ex)
{
MessageBox.Show("Error: Could not read file from disk. Original error: " + ex.Message);
}
}
ImageFileWrapperPath.Text = GlobalVariables.ImageFileWrapperPath;
}
private void ImageFileWrapperPath_TextChanged(object sender, EventArgs e)
{
}
private void button2_Click(object sender, EventArgs e)
{
iTextSharp.text.pdf.PdfReader pdfReader = new iTextSharp.text.pdf.PdfReader(GlobalVariables.ImageFileWrapperPath);
IList<Dictionary<string, object>> book_mark = iTextSharp.text.pdf.SimpleBookmark.GetBookmark(pdfReader);
List<ImageFileWrapperBookmarks> IFWBookmarks = new List<ImageFileWrapperBookmarks>();
foreach (Dictionary<string, object> bk in book_mark) // bk is a single instance of book_mark
{
ImageFileWrapperBookmarks.BookmarkNumber = ImageFileWrapperBookmarks.BookmarkNumber + 1;
foreach (KeyValuePair<string, object> kvr in bk) // kvr is the key/value in bk
{
if (kvr.Key == "Kids" || kvr.Key == "kids")
{
//create recursive program for children
}
else if (kvr.Key == "Title" || kvr.Key == "title")
{
}
else if (kvr.Key == "Page" || kvr.Key == "page")
{
}
}
}
MessageBox.Show(GlobalVariables.ImageFileWrapperPath);
}
}
}
Here's one way to parse a PDF and create a data structure similar to what you describe. First the data structure:
// requires: using System.Text.RegularExpressions; (for Regex below)
public class BookMark
{
static int _number;
public BookMark() { Number = ++_number; }
public int Number { get; private set; }
public string Title { get; set; }
public string PageNumberString { get; set; }
public int PageNumberInteger { get; set; }
public static void ResetNumber() { _number = 0; }
// bookmarks title may have illegal filename character(s)
public string GetFileName()
{
var fileTitle = Regex.Replace(
Regex.Replace(Title, #"\s+", "-"),
#"[^-\w]", ""
);
return string.Format("{0:D4}-{1}.pdf", Number, fileTitle);
}
}
A method to create a list of Bookmark (above):
List<BookMark> ParseBookMarks(IList<Dictionary<string, object>> bookmarks)
{
int page;
var result = new List<BookMark>();
foreach (var bookmark in bookmarks)
{
// add top-level bookmarks
var stringPage = bookmark["Page"].ToString();
if (Int32.TryParse(stringPage.Split()[0], out page))
{
result.Add(new BookMark() {
Title = bookmark["Title"].ToString(),
PageNumberString = stringPage,
PageNumberInteger = page
});
}
// recurse
if (bookmark.ContainsKey("Kids"))
{
var kids = bookmark["Kids"] as IList<Dictionary<string, object>>;
if (kids != null && kids.Count > 0)
{
result.AddRange(ParseBookMarks(kids));
}
}
}
return result;
}
Call the method above like this to dump the results to a text file:
void DumpResults(string path)
{
using (var reader = new PdfReader(path))
{
// need this call to parse page numbers
reader.ConsolidateNamedDestinations();
var bookmarks = ParseBookMarks(SimpleBookmark.GetBookmark(reader));
var sb = new StringBuilder();
foreach (var bookmark in bookmarks)
{
sb.AppendLine(string.Format(
"{0, -4}{1, -100}{2, -25}{3}",
bookmark.Number, bookmark.Title,
bookmark.PageNumberString, bookmark.PageNumberInteger
));
}
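// NOTE: outputTextFile is assumed to be a field defined elsewhere holding the destination path.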
File.WriteAllText(outputTextFile, sb.ToString());
}
}
The bigger problem is how to extract each Bookmark into a separate file. If every Bookmark starts a new page it's easy:
1. Iterate over the return value of ParseBookMarks().
2. Select a page range that begins with the current BookMark.Number and ends with the next BookMark.Number - 1.
3. Use that page range to create separate files.
Something like this:
void ProcessPdf(string path)
{
using (var reader = new PdfReader(path))
{
// need this call to parse page numbers
reader.ConsolidateNamedDestinations();
var bookmarks = ParseBookMarks(SimpleBookmark.GetBookmark(reader));
for (int i = 0; i < bookmarks.Count; ++i)
{
int page = bookmarks[i].PageNumberInteger;
int nextPage = i + 1 < bookmarks.Count
// if not top of page will be missing content
? bookmarks[i + 1].PageNumberInteger - 1
/* alternative is to potentially add redundant content:
? bookmarks[i + 1].PageNumberInteger
*/
: reader.NumberOfPages;
string range = string.Format("{0}-{1}", page, nextPage);
// DEMO!
if (i < 10)
{
var outputPath = Path.Combine(OUTPUT_DIR, bookmarks[i].GetFileName());
using (var readerCopy = new PdfReader(reader))
{
var number = bookmarks[i].Number;
readerCopy.SelectPages(range);
using (FileStream stream = new FileStream(outputPath, FileMode.Create))
{
using (var document = new Document())
{
using (var copy = new PdfCopy(document, stream))
{
document.Open();
int n = readerCopy.NumberOfPages;
for (int j = 0; j < n; )
{
copy.AddPage(copy.GetImportedPage(readerCopy, ++j));
}
}
}
}
}
}
}
}
}
The problem is that it's highly unlikely all bookmarks are going to be at the top of every page of the PDF. To see what I mean, experiment with commenting / uncommenting the bookmarks[i + 1].PageNumberInteger lines.

System.Threading.Timer Doesn't Trigger my TimerCallBack Delegate

I am writing my first Windows Service using C# and I am having some trouble with my Timer class.
When the service is started, it runs as expected, but the code will not execute again (I want it to run every minute).
Please take a quick look at the attached source and let me know if you see any obvious mistakes!
TIA
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Diagnostics;
using System.Linq;
using System.ServiceProcess;
using System.Text;
using System.Threading;
using System.IO;
namespace CXO001
{
public partial class Service1 : ServiceBase
{
public Service1()
{
InitializeComponent();
}
/*
* Aim: To calculate and update the Occupancy values for the different Sites
*
* Method: Retrieve data every minute, updating a public value which can be polled
*/
protected override void OnStart(string[] args)
{
Daemon();
}
public void Daemon()
{
TimerCallback tcb = new TimerCallback(On_Tick);
TimeSpan duetime = new TimeSpan(0, 0, 1);
TimeSpan interval = new TimeSpan(0, 1, 0);
Timer querytimer = new Timer(tcb, null, duetime, interval);
}
protected override void OnStop()
{
}
static int[] floorplanids = new int[] { 115, 114, 107, 108 };
public static List<Record> Records = new List<Record>();
static bool firstrun = true;
public static void On_Tick(object timercallback)
{
//Update occupancy data for the last minute
//Save a copy of the public values to HDD with a timestamp
string starttime;
if (Records.Count > 0)
{
starttime = Records.Last().TS;
firstrun = false;
}
else
{
starttime = DateTime.Today.AddHours(7).ToString();
firstrun = true;
}
DateTime endtime = DateTime.Now;
GetData(starttime, endtime);
}
public static void GetData(string starttime, DateTime endtime)
{
string connstr = "Data Source = 192.168.1.123; Initial Catalog = Brickstream_OPS; User Id = Brickstream; Password = bstas;";
DataSet resultds = new DataSet();
//Get the occupancy for each Zone
foreach (int zone in floorplanids)
{
SQL s = new SQL();
string querystr = "SELECT SUM(DIRECTIONAL_METRIC.NUM_TO_ENTER - DIRECTIONAL_METRIC.NUM_TO_EXIT) AS 'Occupancy' FROM REPORT_OBJECT INNER JOIN REPORT_OBJ_METRIC ON REPORT_OBJECT.REPORT_OBJ_ID = REPORT_OBJ_METRIC.REPORT_OBJECT_ID INNER JOIN DIRECTIONAL_METRIC ON REPORT_OBJ_METRIC.REP_OBJ_METRIC_ID = DIRECTIONAL_METRIC.REP_OBJ_METRIC_ID WHERE (REPORT_OBJ_METRIC.M_START_TIME BETWEEN '" + starttime + "' AND '" + endtime.ToString() + "') AND (REPORT_OBJECT.FLOORPLAN_ID = '" + zone + "');";
resultds = s.Go(querystr, connstr, zone.ToString(), resultds);
}
List<Record> result = new List<Record>();
int c = 0;
foreach (DataTable dt in resultds.Tables)
{
Record r = new Record();
r.TS = DateTime.Now.ToString();
r.Zone = dt.TableName;
if (!firstrun)
{
r.Occupancy = (dt.Rows[0].Field<int>("Occupancy")) + (Records[c].Occupancy);
}
else
{
r.Occupancy = dt.Rows[0].Field<int>("Occupancy");
}
result.Add(r);
c++;
}
Records = result;
MrWriter();
}
public static void MrWriter()
{
StringBuilder output = new StringBuilder("Time,Zone,Occupancy\n");
foreach (Record r in Records)
{
output.Append(r.TS);
output.Append(",");
output.Append(r.Zone);
output.Append(",");
output.Append(r.Occupancy.ToString());
output.Append("\n");
}
output.Append(firstrun.ToString());
output.Append(DateTime.Now.ToFileTime());
string filePath = #"C:\temp\CXO.csv";
File.WriteAllText(filePath, output.ToString());
}
}
}
The manual states:
As long as you are using a Timer, you must keep a reference to it. As with any managed object, a Timer is subject to garbage collection when there are no references to it. The fact that a Timer is still active does not prevent it from being collected.
Your Timer is probably being collected by the GC.
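Based on that, a minimal sketch of the fix (only the changed parts of Service1 are shown): promote the local querytimer to a field so it stays reachable, and dispose it in OnStop.
public partial class Service1 : ServiceBase
{
    // Holding the Timer in a field keeps it reachable, so the GC
    // will not collect it while the service is running.
    private Timer _querytimer;
    public void Daemon()
    {
        TimerCallback tcb = new TimerCallback(On_Tick);
        TimeSpan duetime = new TimeSpan(0, 0, 1);
        TimeSpan interval = new TimeSpan(0, 1, 0);
        _querytimer = new Timer(tcb, null, duetime, interval);
    }
    protected override void OnStop()
    {
        if (_querytimer != null)
        {
            _querytimer.Dispose();
            _querytimer = null;
        }
    }
}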

Using an ObjectDataSource with a GridView in a dynamic scenario

I have a search page that is tasked with searching 3.5 million records for individuals based on their name, customer ID, address, etc. The queries range from complex to simple.
Currently, this code relies on a SqlDataSource and a GridView. When a user types a search term and presses Enter, the TextBoxChanged event runs a Search(term, type) function that changes the query the SqlDataSource uses, adds the parameters, and rebinds the GridView.
It works well, but I've become obsessed with rewriting the code more efficiently. I want the paging to be done by SQL Server instead of the inefficiencies of a SqlDataSource in DataSet mode.
Enter the ObjectDataSource. Caveat: I have never used one before today.
I have spent the better part of the day putting together this class:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
using System.Text;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
/// <summary>
/// Summary description for MultiSearchData
/// </summary>
public class MultiSearchData
{
private string _connectionString = string.Empty;
private string _sortColumns = string.Empty;
private string _selectQuery = string.Empty;
private int _lastUpdate;
private int _lastRowCountUpdate;
private int _lastRowCount;
private SqlParameterCollection _sqlParams;
public MultiSearchData()
{
}
private void UpdateDate()
{
_lastUpdate = (int)(DateTime.UtcNow - new DateTime(1970, 1, 1)).TotalSeconds;
}
private string ReplaceFirst(string text, string search, string replace)
{
int pos = text.IndexOf(search);
if (pos < 0)
{
return text;
}
return text.Substring(0, pos) + replace + text.Substring(pos + search.Length);
}
public string SortColumns
{
get { return _sortColumns; }
set { _sortColumns = value; }
}
public SqlParameterCollection SqlParams
{
get { return _sqlParams; }
set { _sqlParams = value; }
}
public string ConnectionString
{
get { return _connectionString; }
set { _connectionString = value; }
}
public string SelectQuery
{
get { return _selectQuery; }
set
{
if (value != _selectQuery)
{
_selectQuery = value;
UpdateDate();
}
}
}
public DataTable GetFullDataTable()
{
return GetDataTable(AssembleSelectSql());
}
public DataTable GetPagedDataTable(int startRow, int pageSize, string sortColumns)
{
if (sortColumns.Length > 0)
_sortColumns = sortColumns;
return GetDataTable(AssemblePagedSelectSql(startRow, pageSize));
}
public int GetRowCount()
{
if (_lastRowCountUpdate == _lastUpdate)
{
return _lastRowCount;
}
else
{
string strCountQuery = _selectQuery.Remove(7, _selectQuery.IndexOf("FROM") - 7);
strCountQuery = strCountQuery.Replace("SELECT FROM", "SELECT COUNT(*) FROM");
using (SqlConnection conn = new SqlConnection(_connectionString))
{
conn.Open();
using (SqlCommand cmd = new SqlCommand(strCountQuery, conn))
{
if (_sqlParams.Count > 0)
{
foreach (SqlParameter param in _sqlParams)
{
cmd.Parameters.Add(param);
}
}
_lastRowCountUpdate = _lastUpdate;
_lastRowCount = (int)cmd.ExecuteScalar();
return _lastRowCount;
}
}
}
}
public DataTable GetDataTable(string sql)
{
DataTable dt = new DataTable();
using (SqlConnection conn = new SqlConnection(_connectionString))
{
using (SqlCommand GetCommand = new SqlCommand(sql, conn))
{
conn.Open();
if (_sqlParams.Count > 0)
{
foreach (SqlParameter param in _sqlParams)
{
GetCommand.Parameters.Add(param);
}
}
using (SqlDataReader dr = GetCommand.ExecuteReader())
{
dt.Load(dr);
conn.Close();
return dt;
}
}
}
}
private string AssembleSelectSql()
{
StringBuilder sql = new StringBuilder();
sql.Append(_selectQuery);
return sql.ToString();
}
private string AssemblePagedSelectSql(int startRow, int pageSize)
{
StringBuilder sql = new StringBuilder();
string originalQuery = ReplaceFirst(_selectQuery, "FROM", ", ROW_NUMBER() OVER (ORDER BY " + _sortColumns + ") AS ResultSetRowNumber FROM");
sql.Append("SELECT * FROM (");
sql.Append(originalQuery);
sql.Append(") AS PagedResults");
sql.AppendFormat(" WHERE ResultSetRowNumber > {0} AND ResultSetRowNumber <= {1}", startRow.ToString(), (startRow + pageSize).ToString());
return sql.ToString();
}
}
I don't know if it's pretty. It works. I give it a query in the ObjectCreating method:
protected void dataMultiSearchData_ObjectCreating(object sender, ObjectDataSourceEventArgs e)
{
MultiSearchData info;
info = Cache["MultiSearchDataObject"] as MultiSearchData;
if (null == info)
{
info = new MultiSearchData();
}
info.SortColumns = "filteredcontact.fullname";
info.ConnectionString = "Data Source=SERVER;Initial Catalog=TheDatabase;Integrated Security=sspi;Connection Timeout=60";
info.SelectQuery = #"SELECT filteredcontact.contactid,
filteredcontact.new_libertyid,
filteredcontact.fullname,
'' AS line1,
filteredcontact.emailaddress1,
filteredcontact.telephone1,
filteredcontact.birthdateutc AS birthdate,
filteredcontact.gendercodename
FROM filteredcontact
WHERE fullname LIKE 'Griffin%' AND filteredcontact.statecode = 0";
e.ObjectInstance = info;
}
protected void dataMultiSearchData_ObjectDisposing(object sender, ObjectDataSourceDisposingEventArgs e)
{
MultiSearchData info = e.ObjectInstance as MultiSearchData;
MultiSearchData temp = Cache["MultiSearchDataObject"] as MultiSearchData;
if (null == temp)
{
Cache.Insert("MultiSearchDataObject", info);
}
e.Cancel = true;
}
Once the class has the query, it wraps it in paging-friendly SQL and we're off to the races. I've implemented caching so that it can skip some expensive queries, etc.
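For illustration, a small hypothetical usage sketch (the query, server names, and values are examples, not from the post) showing roughly what AssemblePagedSelectSql produces:
// Hypothetical usage of MultiSearchData; names and query are examples.
DataTable GetPage()
{
    var data = new MultiSearchData();
    data.ConnectionString = "Data Source=SERVER;Initial Catalog=TheDatabase;Integrated Security=sspi";
    // SqlParameterCollection has no public constructor; borrow one from a throwaway command.
    data.SqlParams = new SqlCommand().Parameters;
    data.SelectQuery = "SELECT contactid, fullname FROM filteredcontact WHERE statecode = 0";
    // GetPagedDataTable(20, 10, "fullname") wraps the query roughly as:
    //   SELECT * FROM (
    //       SELECT contactid, fullname,
    //              ROW_NUMBER() OVER (ORDER BY fullname) AS ResultSetRowNumber
    //       FROM filteredcontact WHERE statecode = 0
    //   ) AS PagedResults
    //   WHERE ResultSetRowNumber > 20 AND ResultSetRowNumber <= 30
    return data.GetPagedDataTable(20, 10, "fullname");
}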
My problem is that this completely breaks my pretty little Search(term, type) world. Having to set the query in the ObjectCreating method is completely harshing my vibe.
I've been trying to think of a better way to do this all day, but I keep ending up with a really messy do-it-all-in-ObjectCreating model that just turns my stomach.
How would you do this? How can I keep the efficiency of this new method whilst have the organizational simplicity of my former model?
Am I being too OCD?
I determined that it can't be done. Furthermore, after benchmarking this class I found it performed no better than a SqlDataSource but was much more difficult to maintain.
Thus I abandoned this project. I hope someone finds this code useful at some point though.
