ASP.NET page stops processing - C#

I'm having a hard time with what appears to be a connection issue. I have scoured the web, and it sounds like my current approach is not the way to go, but if anyone has thoughts or ideas, that would be great.
My hunch is that because these 4 pages share the same .cs code file and all the logic runs in OnLoad(), one page's load is kicking the other's off. These reports are for display only; no input is required from the user.
Please let me know if more information is needed. Thank you!
Issue:
The page loads fine on its own, but if multiple tabs are run and one is still processing, it halts the other, which then displays missing data and broken formatting. This can sometimes be reproduced by pressing refresh (F5) twice quickly.
Environment:
IIS running on server
DB2 database (IBM)
Web Report:
4 ASP.NET pages that link to the same Default.cs code file (e.g. /dash/steel.aspx, /dash/steelnums.aspx)
On page load > read CSV files using StreamReader > run SQL query > format/display the information in a data grid view
Connection Example:
iDB2Connection BlueDB2Connection = new iDB2Connection(strConnectionString);
iDB2DataAdapter BlueDB2PartsDataAdapter = new iDB2DataAdapter();
iDB2Command SqlCmd = BlueDB2Connection.CreateCommand();
SqlCmd.CommandTimeout = 1000000000;

// select the proper query based on the page being loaded
if (curPage.Contains("amewood"))
{
    SqlCmd.CommandText = sqlMainDataWood();
}
else if (curPage.Contains("amesteel"))
{
    SqlCmd.CommandText = sqlMainDataSteel();
}

BlueDB2PartsDataAdapter.SelectCommand = SqlCmd;
try
{
    BlueDB2PartsDataAdapter.Fill(dsParts);
}
catch (iDB2Exception dbEx) // the iSeries provider throws iDB2Exception, not SqlException
{
    DisplayError.Text = "Error:" + dbEx.Message;
}
Reading CSV function:
using (StreamReader reader = new StreamReader(basePath + filePath + "daysStart.csv"))
{
    var headerLine = reader.ReadLine();   // skip the header row
    var line = reader.ReadToEnd();        // read the rest of the file
    var values = line.Split(',');
    DateTime today = DateTime.Today;
    if (values.Length != 0)
    {
        foreach (string item in values)
        {
            // each non-empty value overwrites startDate, so only the last one wins
            if (item != "")
            {
                dateData.startDate = DateTime.ParseExact(item, "MMddyyyy", CultureInfo.InvariantCulture);
            }
            else
            {
                dateData.startDate = today;
            }
        }
    }
    else
    {
        dateData.startDate = today;
    }
}
Troubleshooting:
Attempted multithreading
Tried adding delays before the code runs
Verified that the CSVs were not causing the issue

Not closing (disposing) the DB connection is likely the culprit. After running every command or query you have to dispose of the connection (or do so at the end of the event handler). As @Dai suggested, if you're not tied to DB2 and Web Forms, you should consider newer technologies such as ASP.NET MVC and Entity Framework or another ORM.
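A minimal sketch of what that looks like with using blocks (assuming the same iDB2 types and the sqlMainDataWood()/sqlMainDataSteel() helpers from your snippet; Fill opens and closes the connection itself):

// A minimal sketch, assuming the same IBM iSeries provider types and the
// query helpers from the question.
using (iDB2Connection conn = new iDB2Connection(strConnectionString))
using (iDB2Command cmd = conn.CreateCommand())
using (iDB2DataAdapter adapter = new iDB2DataAdapter())
{
    cmd.CommandText = curPage.Contains("amewood") ? sqlMainDataWood() : sqlMainDataSteel();
    adapter.SelectCommand = cmd;
    adapter.Fill(dsParts); // Fill opens and closes the connection as needed
}   // connection, command, and adapter are all disposed here, even if Fill throws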
Update:
After reading your link:
Any public static (Shared in Visual Basic) members of this type are safe for multithreaded operations. Any instance members are not guaranteed to be thread-safe.
it may be caused by not sharing the same instance of the DB2DataAdapter object between pages. Try sharing one with the static modifier and see if it helps.

Sorry for the delay; I don't get much time to work on this. The issue turned out to be a static variable that was being overwritten... face palm. I'm sure that if I had posted all of the code you would have noticed it immediately. Thank you for your time and effort.

Related

"The process cannot access the file because it is being used by another process." with SystemReader

I have no coding experience, but I've been trying to fix a broken program written many years ago. I've been fumbling through fixing things but have stumbled on a piece that I can't fix. From what I've gathered, you get Alexa to append to a Dropbox file, and the program watches that file for changes and, depending on what was added, executes a command from a customizable list in an XML document.
I've gotten this to work about five times in the hundreds of attempts I've made; every other time it crashes and Visual Studio gives me: "System.IO.IOException: 'The process cannot access the file 'C:\Users\"User"\Dropbox\controlcomputer\controlfile.txt' because it is being used by another process.'"
This is the file that Dropbox appends to, and the crash only happens when I append the file; otherwise the program works fine and I can navigate it.
I believe this is the code that handles this as this is the only mention of StreamReader in all of the code:
public static void launchTaskControlFile(string path)
{
    int num = 0;
    string str = "";
    StreamReader streamReader = new StreamReader(path);
    while (true)
    {
        // keep only the last line of the file, with leading '#' trimmed
        string line = streamReader.ReadLine();
        if (line == null)
        {
            break;
        }
        str = line.TrimStart(new char[] { '#' });
        num++;
    }
    streamReader.Close();
    if (str.Contains("Google"))
    {
        MainWindow.googleSearch(str);
    }
    else if (str.Contains("LockDown") && Settings.Default.lockdownEnabled)
    {
        MainWindow.executeLock();
    }
    else if (str.Contains("Shutdown") && Settings.Default.shutdownEnabled)
    {
        MainWindow.executeShutdown();
    }
    else if (str.Contains("Restart") && Settings.Default.restartEnabled)
    {
        MainWindow.executeRestart();
    }
    else if (!str.Contains("Password"))
    {
        MainWindow.launchApplication(str);
    }
    else
    {
        SendKeys.SendWait(" ");
        Thread.Sleep(500);
        string str3 = "potato";
        for (int i = 0; i < str3.Length; i++)
        {
            SendKeys.SendWait(str3[i].ToString());
        }
    }
    Console.ReadLine();
}
I've searched online but have no idea how I could apply anything I've found to this. Once again, I have no coding experience, so act like you're talking to a toddler.
Sorry if anything I added here is unnecessary; I'm just trying to be thorough. Any help would be appreciated.
I set up a try/delay pattern like Adriano Repetti suggested and it seems to be working. Doing that alone only stopped the crash, so I added a loop around it that exits when a variable hits 1, which happens whenever any command type is triggered; that breaks out of the loop and resets the integer to 0, so the loop starts again. That seems to be working now.
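For anyone landing here later, a minimal sketch of that try/delay pattern (the retry count and delay are illustrative values, not the exact ones used):

// A minimal sketch, assuming the file is only locked briefly while Dropbox
// syncs it. maxTries and the 200 ms sleep are illustrative, not tuned values.
private static string ReadAllTextWithRetry(string path)
{
    const int maxTries = 10;
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            using (StreamReader reader = new StreamReader(path))
            {
                return reader.ReadToEnd();
            }
        }
        catch (IOException)
        {
            if (attempt >= maxTries)
                throw;            // still locked after maxTries - give up
            Thread.Sleep(200);    // wait for the other process to release the file
        }
    }
}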

Why does my file sometimes disappear in the process of reading from it or writing to it?

I have an app that reads from text files to determine which reports should be generated. It works as it should most of the time, but once in a while the program deletes one of the text files it reads from/writes to. Then an exception is thrown ("Could not find file") and progress ceases.
Here is some pertinent code.
First, reading from the file:
List<String> delPerfRecords = ReadFileContents(DelPerfFile);
. . .
private static List<String> ReadFileContents(string fileName)
{
    List<String> fileContents = new List<string>();
    try
    {
        fileContents = File.ReadAllLines(fileName).ToList();
    }
    catch (Exception ex)
    {
        RoboReporterConstsAndUtils.HandleException(ex);
    }
    return fileContents;
}
Then, writing to the file -- it marks the record/line in that file as having been processed, so that the same report is not re-generated the next time the file is examined:
MarkAsProcessed(DelPerfFile, qrRecord);
. . .
private static void MarkAsProcessed(string fileToUpdate, string qrRecord)
{
    try
    {
        var fileContents = File.ReadAllLines(fileToUpdate).ToList();
        for (int i = 0; i < fileContents.Count; i++)
        {
            if (fileContents[i] == qrRecord)
            {
                fileContents[i] = string.Format("{0}{1} {2}",
                    qrRecord, RoboReporterConstsAndUtils.COMPLETED_FLAG, DateTime.Now);
            }
        }
        // Will this automatically overwrite the existing?
        File.Delete(fileToUpdate);
        File.WriteAllLines(fileToUpdate, fileContents);
    }
    catch (Exception ex)
    {
        RoboReporterConstsAndUtils.HandleException(ex);
    }
}
So I do delete the file, but immediately replace it:
File.Delete(fileToUpdate);
File.WriteAllLines(fileToUpdate, fileContents);
The files being read have contents such as this:
Opas,20170110,20161127,20161231-COMPLETED 1/10/2017 12:33:27 AM
Opas,20170209,20170101,20170128-COMPLETED 2/9/2017 11:26:04 AM
Opas,20170309,20170129,20170225-COMPLETED
Opas,20170409,20170226,20170401
If "-COMPLETED" appears at the end of the record/row/line, it is ignored - will not be processed.
Also, if the second element (at index 1) is a date in the future, it will not be processed (yet).
So, for these examples shown above, the first three have already been done, and will be subsequently ignored. The fourth one will not be acted on until on or after April 9th, 2017 (at which time the data within the data range of the last two dates will be retrieved).
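For reference, a hypothetical sketch of those rules in code (the actual ConvertCRVRecordToQueuedReport is not shown here, and the QueuedReports property names are assumptions):

// Hypothetical illustration of the rules above, not the actual method.
// Record layout assumed: unit,dateToGenerate,rangeStart,rangeEnd[-COMPLETED timestamp]
private static QueuedReports ConvertCRVRecordToQueuedReport(string record)
{
    if (record.Contains(RoboReporterConstsAndUtils.COMPLETED_FLAG))
        return null; // already processed - callers skip nulls

    var fields = record.Split(',');
    return new QueuedReports
    {
        Unit = fields[0], // e.g. "Opas"
        DateToGenerate = DateTime.ParseExact(fields[1], "yyyyMMdd",
            CultureInfo.InvariantCulture)
    };
}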
Why is the file sometimes deleted? What can I do to prevent it from ever happening?
If helpful, in more context, the logic is like so:
internal static string GenerateAndSaveDelPerfReports()
{
    string allUnitsProcessed = String.Empty;
    bool success = false;
    try
    {
        List<String> delPerfRecords = ReadFileContents(DelPerfFile);
        List<QueuedReports> qrList = new List<QueuedReports>();
        foreach (string qrRecord in delPerfRecords)
        {
            var qr = ConvertCRVRecordToQueuedReport(qrRecord);
            // Rows that have already been processed return null
            if (null == qr) continue;
            // If the report has not yet been run, and it is due, add it to the list
            if (qr.DateToGenerate <= DateTime.Today)
            {
                var unit = qr.Unit;
                qrList.Add(qr);
                MarkAsProcessed(DelPerfFile, qrRecord);
                if (String.IsNullOrWhiteSpace(allUnitsProcessed))
                {
                    allUnitsProcessed = unit;
                }
                else if (!allUnitsProcessed.Contains(unit))
                {
                    allUnitsProcessed = allUnitsProcessed + " and " + unit;
                }
            }
        }
        foreach (QueuedReports qrs in qrList)
        {
            GenerateAndSaveDelPerfReport(qrs);
            success = true;
        }
    }
    catch
    {
        success = false;
    }
    if (success)
    {
        return String.Format("Delivery Performance report[s] generated for {0} by RoboReporter2017",
            allUnitsProcessed);
    }
    return String.Empty;
}
How can I ironclad this code to prevent the files from being periodically trashed?
UPDATE
I can't really test this, because the problem occurs so infrequently, but I wonder whether adding a "pause" between the File.Delete() and File.WriteAllLines() calls would solve the problem.
UPDATE 2
I'm not absolutely sure what the answer to my question is, so I won't add this as an answer, but my guess is that the File.Delete() and File.WriteAllLines() calls were occurring so close together that the delete sometimes hit both the old and the new copy of the file.
If so, a pause between the two calls might have solved the problem 99.42% of the time, but from what I found here, the File.Delete() seems to be redundant/superfluous anyway, so I tested with the File.Delete() commented out and it worked fine. I'm simply doing without that occasionally problematic call now, and I expect that to solve the issue.
// Will this automatically overwrite the existing?
File.Delete(fileToUpdate);
File.WriteAllLines(fileToUpdate, fileContents);
I would simply add an extra parameter to WriteAllLines() (which could default to false) to tell the function to open the file in overwrite mode, and not call File.Delete() at all.
Do you currently check the return value of the file open?
Update: OK, it turns out WriteAllLines() is a .NET Framework function and therefore cannot be changed, so I deleted this answer. However, it now shows up in the comments as a proposed solution from another forum:
"just use something like File.WriteAllText where if the file exists,
the data is just overwritten, if the file does not exist it will be
created."
And this was exactly what I meant (while thinking WriteAllLines() was a user-defined function), because I've had similar problems in the past.
So a solution like that can solve some tricky problems (instead of deleting and quickly re-creating, just overwrite the file) - it is also less work for the OS, and possibly causes less file/disk fragmentation.
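Concretely, the write step in MarkAsProcessed then shrinks to something like this (a sketch of the same logic, minus the Delete):

var fileContents = File.ReadAllLines(fileToUpdate).ToList();
// ... flag the matching line as COMPLETED, as in MarkAsProcessed above ...
// File.WriteAllLines creates the file if it is missing and overwrites it
// otherwise, so there is no window in which the file does not exist.
File.WriteAllLines(fileToUpdate, fileContents);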

Clicking Retry necessary for file download with HttpResponse.WriteFile

I have a site where I'm trying to deliver files via WriteFile. They work fine in Chrome and Firefox, but in IE I have to hit "Retry" once or twice to actually make the file download.
Here is the code:
public class DownloadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        var r = context.Response;
        r.Clear();
        r.ClearContent();
        r.ContentType = "application/octet-stream";
        string path = "";
        try
        {
            if (HttpContext.Current.Request.QueryString["n"] != null)
            {
                var file = HttpContext.Current.Request.QueryString["n"].ToString();
                var type = HttpContext.Current.Request.QueryString["t"].ToString();
                r.AddHeader("Content-Disposition", "attachment; filename=" + file.Substring(file.IndexOf('_') + 1));
                string folder = "";
                switch (type.ToLower())
                {
                    case "public":
                        folder = ConfigurationManager.AppSettings["BCD_PublicDocsLoc"];
                        break;
                    case "private":
                        folder = ConfigurationManager.AppSettings["BCD_PrivateDocsLoc"];
                        break;
                    case "internal":
                        folder = ConfigurationManager.AppSettings["BCD_InternalDocsLoc"];
                        break;
                }
                path = folder + "/" + file;
                r.WriteFile(path);
                r.Flush();
                r.Close();
                r.End();
            }
        }
        catch (Exception ex)
        {
            r.Flush();
            r.Close();
            r.End(); // note: End() throws ThreadAbortException, so the redirect below never runs
            context.Response.Redirect("Error.aspx?err=301");
        }
    }

    public bool IsReusable
    {
        get
        {
            return false;
        }
    }
}
If anyone has any advice as to why this is happening, it would be greatly appreciated. Thanks!
Try substituting the HttpResponse's Close() and End() calls with HttpApplication.CompleteRequest().
Read here why; there are examples too.
Also, this solution was suggested here (in the first answer) for a situation similar to yours.
Since links can go dead in the future, here is a short summary:
In short, IE seems to have problems with the HttpResponse.Close and HttpResponse.End methods. Aside from that, Microsoft recommends HttpApplication.CompleteRequest over the former two in most cases, because:
- HttpResponse.Close() terminates the connection abruptly, dropping buffered data, and is not intended for normal HTTP use in which a response to the client is desired.
- HttpResponse.End() exists for compatibility with the older ASP technology. It raises the EndRequest event directly, and no code after the End call executes, which is inconvenient in many cases.
- HttpApplication.CompleteRequest() also raises the EndRequest event, but it allows the code following the CompleteRequest call to execute, which makes it more appropriate for most situations.
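Applied to the handler in the question, the success path would end something like this (a sketch, untested against the original setup):

r.WriteFile(path);
r.Flush();
// instead of r.Close() / r.End(), let the pipeline finish the request normally:
context.ApplicationInstance.CompleteRequest();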
Just a hunch, but it sounds like an IE caching issue to me...
If IE is set to automatically check for newer pages 'every time I go to the website' (in Tools > Internet Options > General > Browsing history > Settings), then you won't have a cache issue.
Like I say, only a hunch, but give it a whirl.
If you want to get around this [*1], add a GUID to your query string. [*2]
[*1] The cache setting is a per-user setting; you can never pre-empt your users' settings, so work with them instead.
[*2] The NoCache value is always different, so the browser will never have a cached version to go to.
I use something like this...
protected void Page_PreRender(object sender, EventArgs e)
{
    if (HttpContext.Current.Request.QueryString["FirstRun"] == "1")
    {
        NameValueCollection nvc = HttpUtility.ParseQueryString(Request.Url.Query);
        nvc.Remove("FirstRun");
        string url = Request.Url.AbsolutePath;
        for (int i = 0; i < nvc.Count; i++)
            url += string.Format("{0}{1}={2}", (i == 0 ? "?" : "&"), nvc.Keys[i], nvc[i]);
        // use "?" if FirstRun was the only parameter, "&" otherwise
        string sep = (nvc.Count == 0 ? "?" : "&");
        Response.Redirect(string.Format("{0}{1}NoCache={2}", url, sep,
            System.Guid.NewGuid().ToString().Replace("-", "")));
    }
}
Any links/redirects to this page need ?FirstRun=1 (or &FirstRun=1) appended to the query string. The page then reloads itself once, adding a NoCache value to the query string.
Note:
Because you added FirstRun=1, the page always executes twice server-side, but it appears as a single load to your user and the browser.
If you don't add FirstRun=1, it behaves like a normal request, since it never enters the condition.

0 rows inserted with Entity Framework

I've been trying to write a simple financial app to manage my household spending, and while writing my Save button code I've hit a situation where the code runs fine but inserts 0 rows into the local database.
Here's the code that calls the saveIncome method:
if (comboBox1.SelectedIndex == 0)
{
    try
    {
        saveIncome();
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
And here's the saveIncome method, which does the actual save:
public void saveIncome()
{
    using (WalletEntities ctx = new WalletEntities())
    {
        var Income = new Income
        {
            ID = transID,
            Name = tbName.Text,
            Date = calDate.SelectionRange.Start,
            Value = decimal.Parse(tbValue.Text),
            Owner = tbOwner.Text,
            Desc = tbDesc.Text,
        };
        ctx.Income.Add(Income);
        ctx.SaveChanges();
        MessageBox.Show("Added Income ID: " + transID.ToString());
    }
}
When I tried to debug this, everything ran OK: the Income object was populated and the message box showed.
As I understand it, I used the "Model First" approach to build this.
Please be gentle - I'm a beginner in programming :) and sorry for my English - it's not my first language.
OK, so the problem is fixed, and it was due to my lack of knowledge. Apparently @MilenPavlov was right: I was in fact inspecting a different database. I had no idea that, when built, the project copies the *.sdf to the Debug folder and writes changes there - courtesy of the Copy to Output Directory property on the *.sdf file. So while inspecting via Visual Studio, I was viewing a different copy of the file.
Thanks @MilenPavlov for showing me the way :)
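For anyone else hitting this, a quick way to see which folder the app is actually using at runtime (a hypothetical diagnostic, not from the original post):

// Hypothetical diagnostic: show where the data directory resolves at runtime.
// When launched from Visual Studio this is typically bin\Debug, which holds
// the copy of the .sdf that SaveChanges actually writes to.
var dataDir = AppDomain.CurrentDomain.GetData("DataDirectory")
              ?? AppDomain.CurrentDomain.BaseDirectory;
MessageBox.Show(dataDir.ToString());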

logging error messages in a log file for the client using a web application

My application is an MVC 4 web application.
I am reading an Excel file which might have 20,000 records.
The loop goes through the entire Excel file and adds every line with errors to a StringBuilder object, so the user can correct the errors in a file later.
My users want this displayed right away in a text file, like a log file. I am displaying it in a div right now, but I'm afraid that will be hard to use when they encounter a lot of errors.
Question: How can I log all these errors in a text file that my users can click on, to view the lines with errors, on the client?
Please note that multiple users can use this web application.
StringBuilder sb = new StringBuilder();
// read each row from the start of the data (start row + 1 header row) to the end of the spreadsheet
for (int rowNumber = startRow + 1; rowNumber <= currentWorkSheet.Dimension.End.Row; rowNumber++)
{
    try
    {
        object col1Value = currentWorkSheet.Cells[rowNumber, 1].Value;
        object col2Value = currentWorkSheet.Cells[rowNumber, 2].Value;
        object col3Value = currentWorkSheet.Cells[rowNumber, 3].Value;
        object col4Value = currentWorkSheet.Cells[rowNumber, 4].Value;
        if (col1Value != null && col2Value != null)
        {
            exampleDataList.Add(new PersonalData
            {
                firstname = col1Value.ToString(),
                lastname = col2Value.ToString(),
                currentDate = col3Value == null ? DateTime.MinValue : Convert.ToDateTime(col3Value),
                mySalary = col4Value == null ? 0 : Convert.ToInt32(col4Value)
            });
        }
    }
    catch (Exception e)
    {
        // log the row number and error message for the user
        sb.AppendFormat("{0}: {1}{2}", rowNumber, e.Message, Environment.NewLine);
    }
}
// convert the StringBuilder into the final string object
string allMessages = sb.ToString();
Perhaps something like log4net would be sufficient for your needs. You could even provide a UI for handling these problematic entries in your application.
So why not store them in your DB and then show them in a custom UI where users can handle the issues one by one, in a collaborative manner?
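If you do want the clickable-text-file route, here is one possible sketch; the action names, the App_Data location, and the file-naming scheme are all illustrative, not prescriptive:

// A sketch, not a drop-in solution: write the errors to a per-upload file
// under App_Data and give the user a link to download it.
public ActionResult ImportExcel(/* upload parameters */)
{
    StringBuilder sb = new StringBuilder();
    // ... the parsing loop from the question fills sb ...

    // unique name per upload, so concurrent users never collide
    string logName = string.Format("import-errors-{0:yyyyMMdd-HHmmss}-{1}.txt",
        DateTime.Now, Guid.NewGuid().ToString("N"));
    string logPath = Path.Combine(Server.MapPath("~/App_Data"), logName);
    System.IO.File.WriteAllText(logPath, sb.ToString());

    ViewBag.ErrorLogName = logName; // the view renders a link to DownloadErrorLog
    return View();
}

public ActionResult DownloadErrorLog(string name)
{
    // Path.GetFileName strips any directory parts, preventing path traversal
    string path = Path.Combine(Server.MapPath("~/App_Data"), Path.GetFileName(name));
    return File(path, "text/plain", name); // served as a downloadable text file
}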
