Clicking Retry necessary for file download with HttpResponse.WriteFile - c#

I have a site where I'm trying to deliver files via WriteFile and they work fine in Chrome and Firefox, but in IE I have to hit "Retry" once or twice to actually make the file download.
Here is the code:
public class DownloadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        var r = context.Response;
        r.Clear();
        r.ClearContent();
        r.ContentType = "application/octet-stream";
        string path = "";
        try
        {
            if (HttpContext.Current.Request.QueryString["n"] != null)
            {
                var file = HttpContext.Current.Request.QueryString["n"].ToString();
                var type = HttpContext.Current.Request.QueryString["t"].ToString();
                r.AddHeader("Content-Disposition", "attachment; filename=" + file.Substring(file.IndexOf('_') + 1));
                string folder = "";
                switch (type.ToLower())
                {
                    case "public":
                        folder = ConfigurationManager.AppSettings["BCD_PublicDocsLoc"];
                        break;
                    case "private":
                        folder = ConfigurationManager.AppSettings["BCD_PrivateDocsLoc"];
                        break;
                    case "internal":
                        folder = ConfigurationManager.AppSettings["BCD_InternalDocsLoc"];
                        break;
                }
                path = folder + "/" + file;
                r.WriteFile(path);
                r.Flush();
                r.Close();
                r.End();
            }
        }
        catch (Exception ex)
        {
            r.Flush();
            r.Close();
            r.End();
            context.Response.Redirect("Error.aspx?err=301");
        }
    }

    public bool IsReusable
    {
        get { return false; }
    }
}
If anyone has any advice as to why this is happening, it would be greatly appreciated. Thanks!

Try substituting the HttpResponse's Close() and End() calls with HttpApplication.CompleteRequest().
Read here for the reasons; there are examples too.
Also, this solution was suggested here (in the first answer) for a situation similar to yours.
As it was hinted that a short explanation in this post would be convenient, in case the links go dead in the future, here it goes:
In short: IE seems to have problems with the HttpResponse.Close and HttpResponse.End methods. Aside from that, Microsoft recommends HttpApplication.CompleteRequest over the other two in most cases, because:
- HttpResponse.Close() terminates the connection abruptly, dropping buffered data, and is not intended for normal HTTP use in which a response to the client is desired.
- HttpResponse.End() exists for compatibility with the older classic ASP technology. It raises the EndRequest event directly, and no code after the End call is executed, which is inconvenient in many cases.
- HttpApplication.CompleteRequest() also raises the EndRequest event, but it allows the code following the CompleteRequest call to execute, which makes it more appropriate for handling most situations.
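Applied to the handler in the question, the ending could look something like this (a minimal sketch; from an IHttpHandler, the HttpApplication instance is reachable via context.ApplicationInstance):
r.WriteFile(path);
r.Flush();
// Let the ASP.NET pipeline finish the request normally instead of
// tearing down the connection with Close()/End().
context.ApplicationInstance.CompleteRequest();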

Just a hunch, but it sounds like an IE caching issue to me...
If IE is set to automatically check for newer pages 'every time I go to the website' (in [Tools\Internet Options\General\Browsing history\Settings]), then you won't have a cache issue.
Like I say, only a hunch, but give it a whirl.
If you want to get around this [*1], add a GUID to your query string. [*2]
[*1] The cache setting is a user-by-user setting; you can never pre-empt the users' settings, so work with them instead.
[*2] The NoCache value is always different, so the browser will never have a cached version to go to.
I use something like this...
protected void Page_PreRender(object sender, EventArgs e)
{
    if (HttpContext.Current.Request.QueryString["FirstRun"] == "1")
    {
        NameValueCollection nvc = HttpUtility.ParseQueryString(Request.Url.Query);
        nvc.Remove("FirstRun");
        string url = Request.Url.AbsolutePath;
        for (int i = 0; i < nvc.Count; i++)
            url += string.Format("{0}{1}={2}", (i == 0 ? "?" : "&"), nvc.Keys[i], nvc[i]);
        // Use '?' when FirstRun was the only parameter, '&' otherwise.
        url += string.Format("{0}NoCache={1}", (nvc.Count == 0 ? "?" : "&"),
            System.Guid.NewGuid().ToString().Replace("-", ""));
        Response.Redirect(url);
    }
}
Any links/redirects to this page need ?FirstRun=1 (or &FirstRun=1) appended to the query string. The page then cycles itself once, adding a NoCache value to the query string.
Note:
Because you added FirstRun=1, it will always execute twice server-side, but appear as a single load to your user and the browser.
If you don't add FirstRun=1, it behaves like a normal request, as it never enters the condition.

Related

ASP.NET Core download from a Network Share (sometimes works?)

I have a pretty standard ASP.NET Core 2.2 web app.
I'm running into an issue with downloading files that are stored on a network share. We are using a method of impersonation in code (client requirements) to access the file share. Uploading to the share with the provided credentials works fine, so we know that (a) the impersonation is working and (b) the files ARE at the destination.
The issue I am having comes from downloading the file. It's a pretty standard download link that points to an action in a controller. The action gets the file information from the database and uses two of the database values (PathToFile and Filename) to locate the file, pull it back to the controller, and return it as a file:
var fileRecord = //Get the record from the database.
byte[] bytes = null;
if (fileRecord != null)
{
    try
    {
        string fullPath = $"{fileRecord.PathToFile}\\{fileRecord.Filename}";
        await ImpersonationHelper.Impersonate(async () => { bytes = await System.IO.File.ReadAllBytesAsync(fullPath); }, _settings);
    }
    catch (Exception e)
    {
        return NotFound();
    }
}
return File(bytes, System.Net.Mime.MediaTypeNames.Application.Octet, fileRecord.Filename);
For reference:
public static async Task Impersonate(Action actionToExecute, ApplicationSettings settings)
{
    IntPtr tokenHandle = new IntPtr(0);
    SafeAccessTokenHandle safeAccessTokenHandle = null;
    ImpersonateLogin login = new ImpersonateLogin(settings);
    Task<bool> returnValue = Task.Run(() => LogonUser(login.username, login.domain, login.password, 2, 0, out safeAccessTokenHandle));
    if (false == returnValue.Result)
    {
        int ret = Marshal.GetLastWin32Error();
        throw new System.ComponentModel.Win32Exception(ret);
    }
    if (safeAccessTokenHandle != null)
        await returnValue.ContinueWith(antecedent => WindowsIdentity.RunImpersonated(safeAccessTokenHandle, () =>
        {
            actionToExecute();
        }));
}
Locally, it works fine (we skip the impersonation with an app setting) and the file comes back and is set up as a download in the browser.
On the server, however, it both doesn't work and does work. It's strange: clicking the link leads to an error page, but refreshing that error page (i.e. re-requesting the file) over and over again will make it work (usually every 2-4 refreshes will return the file correctly).
Has anyone encountered this, or something like it, who can offer some insight?
As it turns out, it (seemingly?) had something to do with the methods being async.
Once I removed the (forced) async call to the impersonation, and made everything synchronous right up to the "Download" action, it all lined up and works 100% of the time now. From what I could find online, it looks like it was a timing issue between the async and sync calls: the impersonation would happen AFTER the file read, so the user wouldn't have permission to actually fetch the file, but in some cases the impersonation would happen first, so the file would come back. Making everything non-async fixed the issues I was having.
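For illustration, a minimal synchronous sketch of the fixed flow (the LogonUser declaration and the login/fullPath names are carried over from the question; the exact shape of the fix is an assumption):
// Log on and impersonate synchronously, so the file read cannot run
// before the impersonation is in place.
SafeAccessTokenHandle tokenHandle;
if (!LogonUser(login.username, login.domain, login.password, 2, 0, out tokenHandle))
    throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());

byte[] bytes = null;
WindowsIdentity.RunImpersonated(tokenHandle, () =>
{
    // Runs under the impersonated identity.
    bytes = System.IO.File.ReadAllBytes(fullPath);
});
return File(bytes, System.Net.Mime.MediaTypeNames.Application.Octet, fileRecord.Filename);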

Why does my file sometimes disappear in the process of reading from it or writing to it?

I have an app that reads from text files to determine which reports should be generated. It works as it should most of the time, but once in a while, the program deletes one of the text files it reads from/writes to. Then an exception is thrown ("Could not find file") and progress ceases.
Here is some pertinent code.
First, reading from the file:
List<String> delPerfRecords = ReadFileContents(DelPerfFile);
. . .
private static List<String> ReadFileContents(string fileName)
{
    List<String> fileContents = new List<string>();
    try
    {
        fileContents = File.ReadAllLines(fileName).ToList();
    }
    catch (Exception ex)
    {
        RoboReporterConstsAndUtils.HandleException(ex);
    }
    return fileContents;
}
Then, writing to the file -- it marks the record/line in that file as having been processed, so that the same report is not re-generated the next time the file is examined:
MarkAsProcessed(DelPerfFile, qrRecord);
. . .
private static void MarkAsProcessed(string fileToUpdate, string qrRecord)
{
    try
    {
        var fileContents = File.ReadAllLines(fileToUpdate).ToList();
        for (int i = 0; i < fileContents.Count; i++)
        {
            if (fileContents[i] == qrRecord)
            {
                fileContents[i] = string.Format("{0}{1} {2}",
                    qrRecord, RoboReporterConstsAndUtils.COMPLETED_FLAG, DateTime.Now);
            }
        }
        // Will this automatically overwrite the existing?
        File.Delete(fileToUpdate);
        File.WriteAllLines(fileToUpdate, fileContents);
    }
    catch (Exception ex)
    {
        RoboReporterConstsAndUtils.HandleException(ex);
    }
}
So I do delete the file, but immediately replace it:
File.Delete(fileToUpdate);
File.WriteAllLines(fileToUpdate, fileContents);
The files being read have contents such as this:
Opas,20170110,20161127,20161231-COMPLETED 1/10/2017 12:33:27 AM
Opas,20170209,20170101,20170128-COMPLETED 2/9/2017 11:26:04 AM
Opas,20170309,20170129,20170225-COMPLETED
Opas,20170409,20170226,20170401
If "-COMPLETED" appears at the end of the record/row/line, it is ignored - will not be processed.
Also, if the second element (at index 1) is a date in the future, it will not be processed (yet).
So, for these examples shown above, the first three have already been done, and will be subsequently ignored. The fourth one will not be acted on until on or after April 9th, 2017 (at which time the data within the data range of the last two dates will be retrieved).
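(For illustration, a hypothetical sketch of how ConvertCRVRecordToQueuedReport, which the question doesn't show, might apply the completed-flag rule; the field layout and the QueuedReports shape are assumptions, and the caller shown further below checks the future-date rule:)
// Returns null for rows already flagged as completed; the caller
// checks whether the due date (the second field) has arrived yet.
private static QueuedReports ConvertCRVRecordToQueuedReport(string record)
{
    if (record.Contains("-COMPLETED")) return null;
    var fields = record.Split(',');
    return new QueuedReports
    {
        Unit = fields[0],
        DateToGenerate = DateTime.ParseExact(fields[1], "yyyyMMdd",
            System.Globalization.CultureInfo.InvariantCulture)
    };
}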
Why is the file sometimes deleted? What can I do to prevent it from ever happening?
If helpful, in more context, the logic is like so:
internal static string GenerateAndSaveDelPerfReports()
{
    string allUnitsProcessed = String.Empty;
    bool success = false;
    try
    {
        List<String> delPerfRecords = ReadFileContents(DelPerfFile);
        List<QueuedReports> qrList = new List<QueuedReports>();
        foreach (string qrRecord in delPerfRecords)
        {
            var qr = ConvertCRVRecordToQueuedReport(qrRecord);
            // Rows that have already been processed return null
            if (null == qr) continue;
            // If the report has not yet been run, and it is due, add it to the list
            if (qr.DateToGenerate <= DateTime.Today)
            {
                var unit = qr.Unit;
                qrList.Add(qr);
                MarkAsProcessed(DelPerfFile, qrRecord);
                if (String.IsNullOrWhiteSpace(allUnitsProcessed))
                {
                    allUnitsProcessed = unit;
                }
                else if (!allUnitsProcessed.Contains(unit))
                {
                    allUnitsProcessed = allUnitsProcessed + " and " + unit;
                }
            }
        }
        foreach (QueuedReports qrs in qrList)
        {
            GenerateAndSaveDelPerfReport(qrs);
            success = true;
        }
    }
    catch
    {
        success = false;
    }
    if (success)
    {
        return String.Format("Delivery Performance report[s] generated for {0} by RoboReporter2017", allUnitsProcessed);
    }
    return String.Empty;
}
How can I ironclad this code to prevent the files from being periodically trashed?
UPDATE
I can't really test this, because the problem occurs so infrequently, but I wonder whether adding a "pause" between the File.Delete() and the File.WriteAllLines() calls would solve the problem.
UPDATE 2
I'm not absolutely sure what the answer to my question is, so I won't add this as an answer, but my guess is that the File.Delete() and File.WriteAllLines() calls were occurring too close together, and so the delete was sometimes taking out both the old and the new copy of the file.
If so, a pause between the two calls might have solved the problem 99.42% of the time, but from what I found here, the File.Delete() is redundant/superfluous anyway. I tested with the File.Delete() commented out, and it worked fine, so I'm simply doing without that occasionally problematic call now. I expect that to solve the issue.
// Will this automatically overwrite the existing?
File.Delete(fileToUpdate);
File.WriteAllLines(fileToUpdate, fileContents);
I would simply have added an extra parameter to WriteAllLines() (which could default to false) to tell the function to open the file in overwrite mode, and then not called File.Delete() at all.
Do you currently check the return value of the file open?
Update: OK, it looks like WriteAllLines() is a .NET Framework function and therefore cannot be changed, so I deleted this answer. However, it now shows up in the comments as a proposed solution from another forum:
"just use something like File.WriteAllText where if the file exists,
the data is just overwritten, if the file does not exist it will be
created."
And this was exactly what I meant (while thinking WriteAllLines() was a user-defined function), because I've had similar problems in the past.
So, a solution like that could solve some tricky problems (instead of deleting and quickly recreating, just overwrite the file) - it's also less work for the OS, and possibly less file/disk fragmentation.
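In other words, MarkAsProcessed can simply drop the Delete call (a minimal sketch; File.WriteAllLines creates the file if it is missing and overwrites it otherwise):
// Rewrite the file in place: with no Delete, there is no window in
// which the file does not exist on disk.
File.WriteAllLines(fileToUpdate, fileContents);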

Grabbing Dropbox access token on Windows Form using Dropbox API

I have written a class which already works with the Dropbox API, uploading files, downloading, deleting and so on. It had been working quite well while I was just using my own access token, but I need to register other users, and a single but "big" problem appeared: retrieving the access token.
1.- Redirect URI? I'm starting to wonder why I need this. I finally used this URI (https://www.dropbox.com/1/oauth2/redirect_receiver) because "The redirect URI you use doesn't really matter". Of course, I included this one in my app config on Dropbox.
2.- I reach the user's account (I can see the user count increased, and I see the app has access to the user's account).
3.- I have a breakpoint in my code to inspect the variables in order to apply DropboxOAuth2Helper.ParseTokenFragment, but I have no success there.
This is my code, but the if before the try/catch is where it gets stuck:
string AccessToken;
const string AppKey = "theOneAtmyAppConfigOnDropbox";
const string redirectUrl = "https://www.dropbox.com/1/oauth2/redirect_receiver";
string oauthUrl =
    $@"https://www.dropbox.com/1/oauth2/authorize?response_type=token&redirect_uri={redirectUrl}&client_id={AppKey}";
private string oauth2State;
private bool Result;

public Form1()
{
    InitializeComponent();
}

private void Form1_Load(object sender, EventArgs e)
{
    Start(AppKey, webBrowser1);
    webBrowser1.Navigating += Browser_Navigating;
}

private void Start(string appKey, WebBrowser w)
{
    this.oauth2State = Guid.NewGuid().ToString("N");
    Uri authorizeUri = DropboxOAuth2Helper.GetAuthorizeUri(OauthResponseType.Token, appKey, redirectUrl, state: oauth2State);
    w.Navigate(authorizeUri);
}

private void Browser_Navigating(object sender, WebBrowserNavigatingEventArgs e)
{
    if (!e.Url.ToString().StartsWith(redirectUrl, StringComparison.InvariantCultureIgnoreCase))
    {
        // we need to ignore all navigation that isn't to the redirect uri.
        return;
    }
    try
    {
        OAuth2Response result = DropboxOAuth2Helper.ParseTokenFragment(e.Url);
        if (result.State != this.oauth2State)
        {
            // The state in the response doesn't match the state in the request.
            return;
        }
        this.AccessToken = result.AccessToken;
        this.Result = true;
    }
    catch (ArgumentException)
    {
        // There was an error in the URI passed to ParseTokenFragment
    }
    finally
    {
        e.Cancel = true;
        this.Close();
    }
}
I've been fighting against this for hours, and I'm starting to see things a little cloudy at this point.
This is the tutorial I used, but I'm not moving forward. I would really appreciate any help!
EDIT: I finally made some steps forward. I changed the line which builds the authorize URI to:
Uri authorizeUri2 = DropboxOAuth2Helper.GetAuthorizeUri(appKey);
Now I'm seeing the generated access token in the WebBrowser! The bad part comes when trying to get it (it gets inside the if), and a new token gets generated every time I ask the user for permission, so it gets overwritten.
EDIT 2: I noticed the token I get in the browser is somehow malformed. I tried hardcoding a manually corrected token while debugging, and I get an AuthException when creating the DropboxClient object :( What the hell!
As Greg stated, the solution was to use the Navigated event instead. It looks like the version of the embedded IE that my Visual Studio (2015) uses doesn't raise the Navigating event when the navigation is a redirect.
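A minimal sketch of that change (the handler body mirrors the question's Navigating handler; note that Navigated cannot cancel navigation, so the e.Cancel line goes away):
private void Form1_Load(object sender, EventArgs e)
{
    Start(AppKey, webBrowser1);
    // Navigated is raised even for redirects, which this embedded IE
    // version does not report through Navigating.
    webBrowser1.Navigated += Browser_Navigated;
}

private void Browser_Navigated(object sender, WebBrowserNavigatedEventArgs e)
{
    if (!e.Url.ToString().StartsWith(redirectUrl, StringComparison.InvariantCultureIgnoreCase))
        return; // ignore all navigation that isn't to the redirect uri

    try
    {
        OAuth2Response result = DropboxOAuth2Helper.ParseTokenFragment(e.Url);
        if (result.State != this.oauth2State)
            return; // state mismatch, ignore the response

        this.AccessToken = result.AccessToken;
        this.Result = true;
    }
    catch (ArgumentException)
    {
        // There was an error in the URI passed to ParseTokenFragment
    }
    finally
    {
        this.Close();
    }
}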

Stop process if webBrowser control hangs

I am using the WebBrowser control.
This works fine most of the time; however, when navigating to a new page or waiting for a new page to load, it can sometimes hang.
Is there a way to catch this? i.e. if the page is failing to navigate or load after a certain amount of time, then kill the process?
I am using the webBrowser1_DocumentCompleted event to pick up certain behaviours when the page loads/navigates as expected, but I'm not sure how to catch a page that is hanging.
Maybe you should try to implement some kind of timeout logic? There are quite a few samples on the web for this, for example this one.
You might also be interested in the ProgressChanged event of the WebBrowser control.
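For instance, a minimal sketch of such timeout logic (the 30-second limit and the control names are arbitrary choices, not from the question):
// A Windows Forms timer: if the page hasn't finished loading when it
// fires, abort the navigation.
private readonly Timer navigationTimeout = new Timer { Interval = 30000 };

public Form1()
{
    InitializeComponent();
    navigationTimeout.Tick += (s, e) =>
    {
        navigationTimeout.Stop();
        if (webBrowser1.ReadyState != WebBrowserReadyState.Complete)
            webBrowser1.Stop(); // give up on the hung page
    };
    webBrowser1.DocumentCompleted += (s, e) => navigationTimeout.Stop();
}

private void NavigateWithTimeout(string url)
{
    navigationTimeout.Stop(); // reset any previous countdown
    navigationTimeout.Start();
    webBrowser1.Navigate(url);
}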
This is because the WebBrowser component defaults to a very basic rendering mode of Internet Explorer, and it gets stuck on AJAX-heavy pages. You can fix this problem by explicitly opting in to the latest installed version of Internet Explorer, using this code:
try
{
    string installkey = @"SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION";
    string entryLabel = "YourExe.exe";
    string develop = "YourExe.vshost.exe"; // This is for Visual Studio debugging...
    System.OperatingSystem osInfo = System.Environment.OSVersion;
    string version = osInfo.Version.Major.ToString() + '.' + osInfo.Version.Minor.ToString();
    uint editFlag = (uint)((version == "6.2") ? 0x2710 : 0x2328); // 6.2 = Windows 8 and therefore IE10
    Microsoft.Win32.RegistryKey existingSubKey = Microsoft.Win32.Registry.LocalMachine.OpenSubKey(installkey, false); // readonly key
    if (existingSubKey.GetValue(entryLabel) == null)
    {
        existingSubKey = Microsoft.Win32.Registry.LocalMachine.OpenSubKey(installkey, true); // writable key
        existingSubKey.SetValue(entryLabel, unchecked((int)editFlag), Microsoft.Win32.RegistryValueKind.DWord);
    }
    if (existingSubKey.GetValue(develop) == null)
    {
        existingSubKey = Microsoft.Win32.Registry.LocalMachine.OpenSubKey(installkey, true); // writable key
        existingSubKey.SetValue(develop, unchecked((int)editFlag), Microsoft.Win32.RegistryValueKind.DWord);
    }
}
catch
{
    MessageBox.Show("You don't have admin privileges to overwrite system settings");
}
Right-click both your .exe and the vshost.exe and choose Run as Administrator to update the registry for this application.

Using webdriver PageFactory to pick certain page

I have a web project where clicking a button navigates to another page. The new page can be one of three possible pages, depending on data in the server. (The URL may be the same for two of those pages.)
I have three classes representing the expected elements on each page, using the PageObject model.
What is the best way to find out which page actually got loaded? Is there an OR type of wait where I can wait on three unique elements and get the one that actually loaded?
Yes, it is possible to check for the presence of a unique element (which identifies the page) and then return the respective page from the framework.
However, a test should know which page it expects next, assume that the correct page has loaded, and perform further actions/assertions. You can even put an assertion here to verify the correct page has loaded; if a different page has loaded, the test eventually fails because the assertions fail.
This way the test becomes more readable and describes the flow of the application.
Also, setting up test data upfront for the tests is always advisable. That way you know what data is available on the server, and the test knows which page will render.
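A minimal sketch of that approach (the page class, locator, and NUnit assertion below are illustrative assumptions, not from the question):
using NUnit.Framework;
using OpenQA.Selenium;

public class HomePage
{
    private readonly IWebDriver driver;
    public HomePage(IWebDriver driver) { this.driver = driver; }

    // The test expects this page, so it asserts on an element unique to it.
    public void AssertLoaded()
    {
        bool loaded = driver.FindElements(By.Id("home-header")).Count > 0;
        Assert.IsTrue(loaded, "Expected the home page to load");
    }
}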
I had a similar issue where I needed to detect whether a login was for a new user (the login page then goes to a terms & conditions page rather than directly to the home page).
Initially I just waited and then tested for the second page, but this was a pain, so I came up with this.
Test the result with this:
var whichScreen = waitForEitherElementText(By.CssSelector(HeaderCssUsing), "HOME SCREEN", "home",
    terms.getHeaderLocator(), terms.headerText, "terms", driver, MAX_STALE_RETRIES);
if (whichScreen.Item1 && whichScreen.Item2 == "terms")
{
    terms.aggreeToTerms();
}
The method that this calls is:
protected Tuple<bool, string> waitForEitherElementText(By locator1, string expectedText1, string return1Ident,
    By locator2, string expectedText2, string return2Ident, IWebDriver driver, int retries)
{
    var retryCount = 0;
    string returnText = "";
    WebDriverWait explicitWait = new WebDriverWait(driver, TimeSpan.FromSeconds(globalWaitTime));
    driver.Manage().Timeouts().ImplicitlyWait(TimeSpan.FromSeconds(0.5));
    while (retryCount < retries)
    {
        try
        {
            explicitWait.Until<bool>((d) =>
            {
                try
                {
                    if (Equals(d.FindElement(locator1).Text, expectedText1)) { returnText = return1Ident; }
                }
                catch (NoSuchElementException)
                {
                    if (Equals(d.FindElement(locator2).Text, expectedText2)) { returnText = return2Ident; }
                }
                return (returnText != "");
            });
            return Tuple.Create(true, returnText);
        }
        catch (StaleElementReferenceException e)
        {
            Console.Out.WriteLine(DateTime.UtcNow.ToLocalTime().ToString() +
                ":>>> -" + locator1.ToString() + " OR " + locator2.ToString() + "- <<< - " +
                this.GetType().FullName + "." + System.Reflection.MethodBase.GetCurrentMethod().Name +
                " : " + e.Message);
            retryCount++;
        }
    }
    return Tuple.Create(false, "");
}
The explicit wait's Until uses a boolean condition, so it will loop for the full wait time (I have a very slow test server, so I set this to 60 seconds). The implicit wait is set to half a second, so each element lookup gives up after half a second, and the condition loops until either true is returned or the wait times out.
I use a Tuple so that I can detect which screen I am on - in this case, agree to the terms & conditions, which then sets me back on my normal page path.
