I have a web page with a link to generate PDF reports. When the user clicks the link, the PDF generation process should start. Since the generated PDF file is huge (>15 MB), the user cannot wait and has to proceed with other activities. That means the following things would be happening simultaneously:
The PDF Generation process continues unabated
The user continues with browsing without any jolt
After the pdf generation is completed the user should receive an email containing the download link.
Basically, the implementation is:
User clicks on the generate report button.
Using AJAX, I make a call to a C# function, say generateReport().
The problem
When I do this, the user is not allowed to perform anything until the entire process completes. Of course he can click on different links, but he gets no response because the AJAX call is still executing.
How do I achieve this? I am a .NET (Framework 2.0) developer creating .aspx web pages using C#. I use JavaScript and AJAX (AjaxPro) to get rid of the postbacks in typical ASP.NET web applications.
In cases like this, you might want to consider splitting your PDF generation code out into a separate service, which your AJAX code could interact with to kick off the PDF creation. Once the service has created the PDF file, the service can email the user with the relevant info.
The AJAX code would use remoting to communicate with the service.
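For example, a minimal sketch of the web-side hand-off over .NET Remoting, assuming a shared IReportService contract and a TCP endpoint exposed by the service (the interface, URL, and port are all hypothetical; requires a reference to System.Runtime.Remoting):

    using System;
    using System.Runtime.Remoting.Channels;
    using System.Runtime.Remoting.Channels.Tcp;

    // Shared contract referenced by both the web app and the service (hypothetical).
    public interface IReportService
    {
        void QueueReport(int reportId, string emailAddress);
    }

    public static class ReportServiceClient
    {
        public static void QueueReport(int reportId, string emailAddress)
        {
            // Register a TCP channel once per AppDomain before making remoting calls.
            if (ChannelServices.GetChannel("tcp") == null)
                ChannelServices.RegisterChannel(new TcpChannel(), false);

            // The endpoint is an assumption; it must match the service's well-known object URI.
            IReportService service = (IReportService)Activator.GetObject(
                typeof(IReportService), "tcp://reportserver:8085/ReportService.rem");

            service.QueueReport(reportId, emailAddress);
        }
    }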
The concept you have in creating the reports and having them emailed to the user is a good one.
The implementation of this should be quite simple on the client side, i.e. a basic call to indicate that a report needs to be created, which saves a row to a table (the report queue).
The actual creation of the report should not be triggered directly by any of the calls from the front end. Create a service (a Windows Service) that runs through the report queue, generating the PDF files and sending the emails.
As an added option, assuming the PDFs are not destroyed (i.e. it is not an email-only solution), an AJAX popup could be shown on the client so the user can go to a reports page and download the already generated file.
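A rough sketch of what the Windows Service's worker loop might look like, assuming a report queue table and helper classes named ReportQueue, PdfGenerator and Mailer (all hypothetical):

    using System;
    using System.Threading;

    // Worker started by the Windows Service's OnStart; all helper types are hypothetical.
    public class ReportQueueWorker
    {
        private volatile bool _stopRequested;

        public void Run()
        {
            while (!_stopRequested)
            {
                foreach (ReportRequest request in ReportQueue.GetPendingReports())
                {
                    try
                    {
                        string pdfPath = PdfGenerator.Generate(request);      // the slow part
                        ReportQueue.MarkCompleted(request.Id, pdfPath);
                        Mailer.SendDownloadLink(request.Email, pdfPath);
                    }
                    catch (Exception ex)
                    {
                        ReportQueue.MarkFailed(request.Id, ex.Message);
                    }
                }
                Thread.Sleep(TimeSpan.FromSeconds(30));                        // poll interval
            }
        }

        public void Stop()
        {
            _stopRequested = true;
        }
    }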
You could try making an AJAX call on a timer to a function isReportDone(), which in turn checks a repository for the PDF. For generating the PDF, I think you'll need to move that task out of your normal request-response thread, either by spinning up a separate thread (multi-threading is fun) to process it, or by passing the data out to a separate service on the server.
These are just two ideas, really. I had a similar issue with generating a file from a DB that had to be forcibly downloaded. I ended up calling a separate page, in a separate window, that wrote the data to the response stream. It actually worked great until the customer's network blocked any and all popups.
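For the thread-based variant, a hedged sketch of the two server-side methods, assuming AjaxPro's [AjaxMethod] attribute and a simple file-based repository (the paths and helper names are invented):

    using System.IO;
    using System.Threading;

    public class ReportMethods
    {
        // Queues the long-running work on a ThreadPool thread so the AJAX call returns immediately.
        [AjaxPro.AjaxMethod]
        public void StartReport(int reportId)
        {
            ThreadPool.QueueUserWorkItem(delegate
            {
                ReportBuilder.CreatePdf(reportId);          // hypothetical long-running generator
            });
        }

        // Polled from the client on a timer; checks the repository for the finished file.
        [AjaxPro.AjaxMethod]
        public bool IsReportDone(int reportId)
        {
            string path = Path.Combine(@"D:\Reports", reportId + ".pdf");   // assumed location
            return File.Exists(path);
        }
    }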
User asks for report to be generated (clicks a button)
Page kicks off an async service call to GeneratePdf(args).
User sees a spinner of some sort while Pdf is being generated, that way they know something is happening.
Pdf is generated (iTextSharp might come in handy here).
Pdf is stored somewhere. Database blob field would be ideal, that way the webservice can just pass back the id of the new file. Failing that, pass a filename back to the ajax code.
The web service's async complete event fires, and the AJAX code either shows a popup or redirects to a new page. Maybe you want to replace the spinner with a "Ready!" link.
Link to pdf report file is emailed to user.
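If iTextSharp is used for the generation step, a minimal sketch of producing the PDF as a byte array (the content is a placeholder; the bytes could then go into the blob field mentioned above):

    using System.IO;
    using iTextSharp.text;
    using iTextSharp.text.pdf;

    public static class PdfBuilder
    {
        // Builds a trivial PDF in memory and returns the bytes for storage.
        public static byte[] BuildReport(string title)
        {
            using (MemoryStream stream = new MemoryStream())
            {
                Document document = new Document(PageSize.A4);
                PdfWriter.GetInstance(document, stream);
                document.Open();
                document.Add(new Paragraph(title));
                // ... add the real report content here ...
                document.Close();
                return stream.ToArray();
            }
        }
    }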
Related
I have a scenario where I would like to programmatically automate the following process:
Currently, I have to manually
Navigate to a webpage
Enter some text (an email) in a certain field on the webpage
Press the 'Search' button, which generates a new page containing a Table with the results on it.
Manually scroll through the generated results table and extract 4 pieces of information.
Is there a way for me to do this from a desktop WPF app using C#?
I am aware there is a WebClient type that can download a string, presumably of the content of the webpage, but I don't see how that would help me.
My knowledge of web based stuff is pretty non-existent so I am quite lost how to go about this, or even if this is possible.
I think a web driver is what you're looking for. I would suggest using Selenium; you can navigate to sites and send input or clicks to specific elements in them.
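A small sketch with Selenium's C# bindings; the URL and element locators are placeholders you would replace after inspecting the real page:

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;

    class Scraper
    {
        static void Main()
        {
            using (IWebDriver driver = new ChromeDriver())
            {
                driver.Navigate().GoToUrl("https://example.com/search");          // placeholder URL

                // Fill the email field and click Search; locators are assumptions.
                driver.FindElement(By.Name("email")).SendKeys("user@example.com");
                driver.FindElement(By.Id("searchButton")).Click();

                // Walk the results table and pull out the values you need.
                foreach (IWebElement row in driver.FindElements(By.CssSelector("table#results tr")))
                {
                    Console.WriteLine(row.Text);
                }
            }
        }
    }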
Well, I'll write the algorithm for you, but you also need to do some homework.
Use WebClient to get the HTML page containing the form you want to auto-fill and submit.
Use regex to extract the action attribute of the form you want to auto-submit. That gets you the URL you want to submit your next request to.
Since you know the fields in that form, create a class corresponding to those fields; let's call it AutoClass.
Create a new instance of AutoClass and assign the values you want to auto-fill.
Use WebClient to send your new request to the URL you extracted from the form, attaching the object you want to send to the server, whether through serialization or any other method.
Send the request, wait for the feedback, and then take further action.
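A rough sketch of the submit step with WebClient; the URL and field names are placeholders that should come from the form you extracted earlier:

    using System;
    using System.Collections.Specialized;
    using System.Net;
    using System.Text;

    class FormSubmitter
    {
        static void Main()
        {
            using (WebClient client = new WebClient())
            {
                // Field names must match the <input> names found in the downloaded form.
                NameValueCollection form = new NameValueCollection();
                form["email"] = "user@example.com";

                // The URL comes from the form's action attribute extracted with the regex.
                byte[] responseBytes = client.UploadValues("https://example.com/search", "POST", form);

                string html = Encoding.UTF8.GetString(responseBytes);
                Console.WriteLine(html);
            }
        }
    }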
Either use a web driver like Puppeteer (Selenium is kinda dead) or use the HTTP(S) protocol to make web requests (if you don't get stopped by bot checks). I feel like you're looking for the latter method, because there is no reason to use a web driver in this case when a lighter method like HTTP requests can be used.
You can use RestSharp or the built-in libraries if you want. There is a popular thread on the ways to send requests with the libraries built into C#.
To figure out what you need to send, use a tool like Fiddler or Chrome DevTools (specifically the Network tab) to see what you would send to achieve your goal in a browser.
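With the built-in HttpClient, replaying the request you captured in the Network tab might look roughly like this (the URL and form fields are placeholders):

    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;

    class Program
    {
        static async Task Main()
        {
            using (HttpClient client = new HttpClient())
            {
                // Mirror the request seen in Fiddler / Chrome DevTools; these fields are placeholders.
                var form = new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    { "email", "user@example.com" }
                });

                HttpResponseMessage response = await client.PostAsync("https://example.com/search", form);
                string html = await response.Content.ReadAsStringAsync();
                Console.WriteLine(html);
            }
        }
    }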
How, using MVC and C# (without the use of JS or jQuery), can I send a user to /home/stasis, which loads with a loader image (already implemented using CSS), and then send them on to the final URL (which has a really long load time, so users end up clicking multiple times, not helping themselves)?
The problem is that using JS or jQuery won't work, as this also needs to work in an in-app webview (which supports neither). So I go to /home/index, click a link that takes me to /home/stasis, which loads and then automatically begins loading the final URL, let's say google.com for example.
Without javascript, we have to hope that the browser and server will do the right thing:
The browser will display the entity content when the server returns a 307 redirect.
The server will not return a partial entity while the long-running request is pending. In other words, the long-running request should return all of its entity data right at the end of the request.
The browser won't clear the screen until the first bytes of the next entity have arrived.
Assuming the browser and server behave like this, MVC doesn't offer an easy way to do it. You need to:
Create a new class derived from ActionResult.
In your ActionResult's ExecuteResult() method, write output to ControllerContext.HttpContext.Response. Set the response code to 307, set the RedirectLocation, and write whatever content you want to display to the OutputStream.
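A hedged sketch of such an ActionResult (the class name and loader markup are mine; whether the body of a 307 is actually displayed while the redirect is pending depends on the browser, as noted above):

    using System.Web.Mvc;

    // Writes the loader markup and a 307 Temporary Redirect to the slow final URL in one response.
    public class StasisResult : ActionResult
    {
        private readonly string _finalUrl;
        private readonly string _loaderHtml;

        public StasisResult(string finalUrl, string loaderHtml)
        {
            _finalUrl = finalUrl;
            _loaderHtml = loaderHtml;
        }

        public override void ExecuteResult(ControllerContext context)
        {
            var response = context.HttpContext.Response;
            response.StatusCode = 307;                 // Temporary Redirect
            response.RedirectLocation = _finalUrl;     // becomes the Location header
            response.ContentType = "text/html";
            response.Write(_loaderHtml);               // markup shown while waiting
        }
    }

The /home/stasis action would then simply return new StasisResult(finalUrl, loaderHtml).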
I have an app which uploads an XML file to the server via AJAX and does some processing, like parsing, schema validation, and connecting to the DB based on the connection string present in the XML data. These validations, the connection, and the data extraction are all done in the same controller function. I want to show the viewer which steps have already been completed. Since I cannot return from the function mid-way, I created a file into which the given function writes updates, and another AJAX call to a different function of the same controller reads the file and displays it on the page. This works fine, except that if I connect to the app from two browsers the information gets overlapped. What alternative can you suggest?
Are you using .NET 4.5? Would it be possible for you to use SignalR to send progress messages back to the browser at various points in your controller's action?
You should also consider using async methods.
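With SignalR 2, a rough sketch of pushing per-step progress to just the connection that started the upload (the hub name, client method name, and how the connection id reaches the controller are all assumptions); scoping by connection id also avoids the overlap you see with the file-based approach:

    using Microsoft.AspNet.SignalR;

    // Empty hub; the browser connects to it only to receive progress pushes.
    public class ProgressHub : Hub { }

    public static class ProgressReporter
    {
        // Called from the controller action after each step (parse, validate schema, connect, extract).
        public static void Report(string connectionId, string step)
        {
            var context = GlobalHost.ConnectionManager.GetHubContext<ProgressHub>();
            context.Clients.Client(connectionId).reportProgress(step);   // client-side handler name is assumed
        }
    }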
Problem Background
In my ASP.NET MVC4 web application, we allow the user to download data in an Excel workbook, where one of the cells contains a hyperlink to a report page. We prepare the link so that when the user clicks it in Excel, the ReportController gets called with parameters, processes the request, and returns a report summary view, i.e. a .cshtml page. All works well...
I generate the Excel file using SpreadsheetGear; the code snippet that generates the link:

    // Parse the cell value into a report id; fall back to 0 if it isn't numeric.
    rrid = (int.TryParse((string) values[row][column], out outInt) ? outInt : 0);

    // SpreadsheetGear Hyperlinks.Add: (anchor range, address, sub-address, screen tip, text to display)
    worksheet.Hyperlinks.Add(worksheet.Cells[row + 1, column],
        PrepareProspectProfileLink((int) rrid, downloadCode),
        string.Empty,
        "CTRL + click to follow link",
        rrid.ToString(CultureInfo.InvariantCulture));
Problem
I just noticed that when I click the link in excel, the same request is sent to the web server twice.
Analysis
I checked using Fiddler and placed a breakpoint in the application code, and it is confirmed that the request is indeed sent twice.
In Fiddler, under the Process column, I found that the first request comes from "excel:24408" and the second from "chrome:4028".
Also, if I copy-paste the link into Outlook, it invokes the request just once.
I understand this indicates that the first request is invoked by Excel; when Excel is served the HTML, it knows nothing about how to render it, so it hands the request over to the default web browser, which is Chrome on my system. Chrome then fires the same request and, on receiving the HTML, opens the page.
Question
How can I stop this behavior? It puts unnecessary load on the web server, and secondly, when I audit user actions, I get two entries :(
I'm not sure about Excel, but you can handle this weird behavior on the web server instead. You can create an HTML page (without auditing) that uses JavaScript to redirect the user to the page with the real report (and the auditing stuff).
If you're concerned just about auditing, you can track requests for the report in a cache (or the DB) and only make an auditing entry if the same request wasn't fired, let's say, 5 seconds ago.
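A small sketch of that second idea using the ASP.NET cache; the key format and the 5-second window are assumptions:

    using System;
    using System.Web;
    using System.Web.Caching;

    public static class ReportAudit
    {
        // Returns true only for the first request with this key inside the 5-second window,
        // so the duplicate request fired by Excel/Chrome is not audited twice.
        public static bool ShouldAudit(string userId, int reportId)
        {
            string key = "audit:" + userId + ":" + reportId;
            if (HttpRuntime.Cache[key] != null)
                return false;

            HttpRuntime.Cache.Insert(key, true, null,
                DateTime.UtcNow.AddSeconds(5), Cache.NoSlidingExpiration);
            return true;
        }
    }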
I have a page that downloads a large HTML file from another domain and then serves it to the user. The file is around 100 KB - 10 MB and usually takes about 5 minutes. I was thinking about doing something like this to make the user experience better:
download file
if the file is not downloaded within 10 seconds, then display a page that tells the user the file is being downloaded
if the server completes the download within 1 second, then it will serve the downloaded HTML
Can this be done? Do I need to use the async feature?
Updated question: the downloaded file is a html file
In order to provide an 'asynchronous' file download, try a trick that Google uses: create a hidden iframe and set its source to the file you want to download. You can then still run JavaScript on your original page while the file is being downloaded through the iframe.
I think you should:
Return an HTML page to the user straight away, to tell them the transfer has started.
Start the download from the other domain in a separate process on your server.
Have the HTML from step 1 repeatedly reload, so you can check if the download has completed already, and possibly give an ETA or update to the user.
Return a link to the user when the initial transfer is complete.
It sounds like you need a waiting page that refreshes itself every so often and displays the status of your download. The download can be run on a separate thread using a System.Threading.Tasks.Task, for instance.
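Putting those suggestions together, a hedged MVC-flavoured sketch (the controller, URLs, and file locations are invented; the in-memory status dictionary would be lost on an app-pool recycle, so a DB flag may be safer):

    using System.Collections.Concurrent;
    using System.Net;
    using System.Threading.Tasks;
    using System.Web.Mvc;

    public class DownloadController : Controller
    {
        // Tracks which downloads have finished; in-memory only, so a DB flag is more robust.
        private static readonly ConcurrentDictionary<string, bool> Completed =
            new ConcurrentDictionary<string, bool>();

        public ActionResult Start(string id)
        {
            string localPath = Server.MapPath("~/App_Data/" + id + ".html");   // placeholder location
            Completed[id] = false;

            Task.Run(() =>
            {
                using (var client = new WebClient())
                {
                    // Long-running fetch from the other domain; the URL is a placeholder.
                    client.DownloadFile("https://other-domain.example/" + id + ".html", localPath);
                }
                Completed[id] = true;
            });

            return RedirectToAction("Status", new { id });
        }

        public ActionResult Status(string id)
        {
            bool done;
            if (Completed.TryGetValue(id, out done) && done)
                return File(Server.MapPath("~/App_Data/" + id + ".html"), "text/html");

            // "Still downloading" view; a Refresh header makes the page re-check every few seconds.
            Response.AddHeader("Refresh", "5");
            return View("Downloading");
        }
    }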