I need to generate some data on the fly (when a user clicks a button on an .aspx page) and send it to the browser as a Word document.
I found this article and copied the code. At first it worked perfectly. I made some modifications to get it to do what I want, and suddenly I found that when IE displays the dialog saying 'Do you want to open or save MsWordSample.doc (3.77k) from localhost' and I click 'Open', it briefly shows '100% downloaded', but that disappears (very quickly - you barely get to read it) and is replaced by text that says 'MsWordSample.doc couldn't be downloaded'. If I click 'Retry' it opens Word, but Word displays a representation of the .aspx page, i.e. it shows the text box and label - it doesn't show the HTML written by the Response.Write at the end of the code.
How can this happen? It worked fine at first. I have changed the page back to exactly what is in the sample code, but it still won't send the right data to Word - after the messing around described above, it opens the markup of the .aspx page.
Try adding Response.Flush() and Response.End() at the end of your code.
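For context, a minimal sketch of what the end of such a button-click handler might look like (the handler name and HTML are illustrative, not from the original article):

protected void btnExport_Click(object sender, EventArgs e)
{
    Response.Clear();                       // discard anything already rendered for the .aspx page
    Response.ContentType = "application/msword";
    Response.AddHeader("Content-Disposition", "attachment; filename=MsWordSample.doc");
    Response.Write("<html><body><h1>Generated content</h1></body></html>");
    Response.Flush();                       // push the buffered content to the client
    Response.End();                         // stop the page lifecycle here
}

Without Response.End(), the normal page rendering continues and the .aspx markup gets appended to the same response, which would explain why Word ends up showing a representation of the page.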
This is a follow-up question to this question:
Load local HTML file in a C# WebBrowser
I've created an html file by simply copying the example from this link:
https://developers.google.com/maps/documentation/javascript/examples/marker-simple
When I run it by double-clicking the file, which opens it in a regular web browser (i.e. Firefox), it works. I then added the HTML file to my solution and changed its property so that it is always copied to the output directory. I then tried running it like this:
string curDir = Directory.GetCurrentDirectory();
webBrowser.Navigate(new Uri(String.Format("file:\\{0}\\mymap.html", curDir)));
The browser opens with the yellow warning on top:
"To help protect your security, your browser has restricted this file from showing active content..."
I click it, allow the blocked content, and then get a message saying there's an error in the script on the page. I allow it to continue, and the browser remains blank... why is that?
After searching for an answer, I came across this blog entry:
https://weblog.west-wind.com/posts/2011/May/21/Web-Browser-Control-Specifying-the-IE-Version
First of all, for the yellow warning, just add this comment line under the html tag of the page:
<!-- saved from url=(0016)http://localhost -->
You can read more about it here:
https://learn.microsoft.com/en-us/previous-versions/windows/internet-explorer/ie-developer/compatibility/ms537628(v=vs.85)
Regarding the map and the script error:
It seems you need to set the WebBrowser control's rendering mode to IE Edge mode in order to render HTML5.
Set this line as the first one inside the head tag:
<meta http-equiv="X-UA-Compatible" content="IE=Edge">
After these two changes, it works.
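If you'd rather not touch the HTML, the west-wind post linked above describes forcing the rendering mode from the hosting application via the FEATURE_BROWSER_EMULATION registry key instead. A rough C# sketch (the value 11001 requests IE11 edge mode; adjust it for the IE version installed):

using Microsoft.Win32;

// Run once at startup, before the WebBrowser control is created.
string exeName = System.IO.Path.GetFileName(Application.ExecutablePath);
using (RegistryKey key = Registry.CurrentUser.CreateSubKey(
    @"Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION"))
{
    key.SetValue(exeName, 11001, RegistryValueKind.DWord);
}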
I've searched the web for two days now and am about to give up on this, but I'm so close to the final solution... so you're my last hope. ;)
I have made a little C# application with Windows Forms GUI that uses a webBrowser element to display an HTML file with a TinyMCE editor embedded - this way I get a nice window with customizable editor functions I can use perfectly for my needs in this project.
I can set the textarea input for this editor window without problems thanks to this solution posted here on stackoverflow: https://stackoverflow.com/a/16322324/3498545
However, I'm having big trouble reading text back from this textarea. If I read the element by ID as shown (for setting content) in the solution above, I get the old text, because TinyMCE never actually saves the changes back to the textarea.
But how do I get the input that my users will make in the textarea via TinyMCE? Is there some way to trigger a form send in HTML to get this input?
Thank you so much for your help!
OK, to answer my own question (maybe others will find this useful later):
TinyMCE's changes don't get written back to the textarea field in real time, so I had to work around that. One solution would be to add something in the JavaScript header that writes every change back to the textarea immediately; however, this caused problems for me because TinyMCE's code clean-up is not involved at that point.
My solution was to temporarily rewrite the document so that the content of the TinyMCE field gets written directly into the page source, which is a piece of cake to read back in C#.
The javascript code needed looks like this:
setup: function (editor) {
    editor.on('submit', function () {
        // Push the current editor content back into the underlying textarea.
        tinymce.triggerSave();
        // Replace the whole document with just the editor content so it can be read from C#.
        document.writeln("<!DOCTYPE html><html><body><div id='content'>" +
            document.getElementById('textarea').value + "</div></body></html>");
        document.close();
    });
}
After that you can read the content in C# with the following code:
webBrowser.Document.GetElementById("content").InnerHtml
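Putting it together on the C# side, a small sketch of how the submit handler might be triggered and the result read back. The "btnSave" submit button is a hypothetical element inside the form hosting the TinyMCE textarea, and in practice you may need to wait for the rewritten document to finish loading (e.g. the DocumentCompleted event) before reading:

// Clicking the submit button fires the 'submit' handler above, which rewrites the document.
webBrowser.Document.GetElementById("btnSave").InvokeMember("click");

// Once the rewritten document is available, read the cleaned-up editor content.
string editorHtml = webBrowser.Document.GetElementById("content").InnerHtml;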
Now I can store HTML formatted code in my SQL database that can be managed and edited with a shiny interface. ;)
I created a web page in ASP.NET 4.0.
I'm using MarkdownDeep library (http://www.toptensoftware.com/markdowndeep/) to convert some text to HTML.
I found an issue that I can't fix. I hope you can help me.
I have a kind of forum, and I want to post some code in a comment and keep its indentation.
In the markdown preview editor, the text looks good. I save the textarea content to a database, show it on the page, and it looks good.
For example, I try to show two HTML element tags to check the indentation of the code.
If I inspect the code with Chrome Developer Tools, I see this:
<pre><code><head>
<title>
</code></pre>
It renders as expected, with each tag indented on its own line. Everything is fine. But if I reload the page, the markup appears like this:
<pre><code><head><title>
</code></pre>
And it renders with both tags collapsed onto a single line.
What I'm doing is:
write some text in the textarea
save the text to the database
bring the text back from the database
convert the markdown to HTML with the Transform() method of MarkdownDeep
put the result into a Label's Text property
I tried converting the markdown before saving it to the database, but nothing changes.
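For reference, the conversion step itself is only a couple of lines; a minimal sketch, assuming the MarkdownDeep assembly is referenced, commentText holds the markdown loaded back from the database, and lblComment is a hypothetical Label control:

var md = new MarkdownDeep.Markdown();
md.ExtraMode = true;   // optional: enables extra markdown syntax such as fenced code blocks
lblComment.Text = md.Transform(commentText);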
Ok, I finally found the solution.
That code was in a page that has a master page. If I take away the master page, everything is fine.
I don't know why... but the issue disappears.
I have an ASP.NET app that at one point generates a PDF file and then loads the next page. I can easily do this with two separate buttons, but it becomes much harder when I try to do it with one button.
When both actions are fired by the same button, the PDF downloads but the next page never loads. I even made the thread sleep after the file was transmitted; it waited, but then simply stopped afterwards.
I have attached the code that I have been trying to make work:
Response.ContentType = "application/pdf";
Response.AppendHeader("Content-Disposition", "attachment; filename=labels.pdf");
Response.TransmitFile(Server.MapPath("~/" + randomNumber.ToString() + ".pdf"));
Server.Transfer("~/createshipment.aspx", true); // tries to send a second page over the same response
You can't return two different responses to a single request, but that's what you're trying to do.
First - you'd like the server to return a PDF.
Second - you'd like the server to return the createshipment.aspx page.
This is simply against the communication protocol. Probably the best solution has already been presented by another user, competent_tech: you could open a new window (JavaScript's window.open) that returns the PDF, while at the same time the main window posts back to the server and is redirected to createshipment.aspx.
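A rough sketch of that idea from ASP.NET code-behind, using a registered startup script. GetPdf.aspx is a hypothetical page whose only job is to transmit the file, and randomNumber is taken from the question's code:

protected void btnCreate_Click(object sender, EventArgs e)
{
    // ... generate and save the PDF here ...
    string script = "window.open('GetPdf.aspx?id=" + randomNumber + "', '_blank');" +
                    "window.location = 'createshipment.aspx';";
    ClientScript.RegisterStartupScript(GetType(), "pdfAndRedirect", script, true);
}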
So, in a nutshell, you want to navigate to the next page, which says something like "thank you for downloading this file", and start the download at the same time.
What you should do on your button click is generate the PDF and save it somewhere (on disk or in the DB, whichever is easier in your app), store the name/location of the new file (or its primary key in the DB) in a session variable, and redirect to the next page. There is no reason to use Server.Transfer here. Then, on that next page, add a hidden iframe that points to your saved file.
Alternatively, your button could simply be a link to the next page, which includes a hidden iframe pointing to the page that generates the PDF. This is a bit simpler, but it wouldn't work as well if you need to pass parameters from the original page to the page that generates the PDF.
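A minimal sketch of the first approach, reusing names from the question; the Session key, the DownloadPdf.ashx handler, and the pdfFrame iframe are hypothetical additions:

// Button click on the original page: save the PDF, remember it, redirect.
protected void btnCreate_Click(object sender, EventArgs e)
{
    string fileName = randomNumber.ToString() + ".pdf";
    // ... generate the PDF and save it to Server.MapPath("~/" + fileName) here ...
    Session["PendingPdf"] = fileName;
    Response.Redirect("~/createshipment.aspx");
}

// Page_Load on createshipment.aspx: point a hidden iframe at a handler that transmits the file.
// Markup: <iframe id="pdfFrame" runat="server" style="display:none"></iframe>
protected void Page_Load(object sender, EventArgs e)
{
    string fileName = Session["PendingPdf"] as string;
    if (fileName != null)
        pdfFrame.Attributes["src"] = "DownloadPdf.ashx?file=" + Server.UrlEncode(fileName);
}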
This is because Server.Transfer "...terminates execution of the current page and starts execution of a new page by using the specified URL path of the page".
Your best bet is to open a new window on the client that gets the PDF and then perform whatever postback is needed to move the user to the next page.
I know this is old, but I'm just seeing it (looking for similar info myself).
I'm going to guess that this is causing issues:
Response.TransmitFile(Server.MapPath("~/"+randomNumber.ToString()+".pdf"));
You would need to map the path to the actual file and not some randomly created filename - or am I missing some steps?
I use:
window.print();
to print documents.
Problem:
As you know, browsers automatically add the page's title and path at the top of the printed page, and the page number and date in the footer.
But the client has asked me to remove all of those from the page, or change their color to white so that they are not visible.
Question:
Is it possible to remove those items from the page generated by the browser?
(I suspect the answer might be no, but I'm not sure; possibly this can be controlled. :))
This is a setting on the browser and can be turned on and off.
http://www.mintprintables.com/print-tips/header-footer.php
The only way to control this programmatically is via ActiveX, and only in IE.
If you can do something server-side, you could create a PDF file which the user can then print. I've not seen how to do this in the browser itself, although I'd be interested to see if anyone else has.
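For the server-side route, one commonly used library is iTextSharp; a rough sketch of streaming a generated PDF to the browser, assuming the iTextSharp package is referenced:

using iTextSharp.text;
using iTextSharp.text.pdf;

// Inside an ASP.NET page or handler:
Response.ContentType = "application/pdf";
var doc = new Document(PageSize.A4);
PdfWriter.GetInstance(doc, Response.OutputStream);
doc.Open();
doc.Add(new Paragraph("Content to print, without browser headers or footers."));
doc.Close();
Response.End();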