I am making a request to the server and reading the content with a StreamReader.
I then end up with a string containing the HTML code of the website.
I have to use this in a WPF application.
Which control should I use to display the HTML retrieved from a URL in my WPF application, and how?
string urlcode;
HttpWebRequest request = WebRequest.Create("http://google.com/") as HttpWebRequest;
HttpWebResponse response = request.GetResponse() as HttpWebResponse;
StreamReader streamr = new StreamReader(response.GetResponseStream());
urlcode = streamr.ReadToEnd();
Embed the WebBrowser control on the preview tab and pass the HTML into it using the NavigateToString or NavigateToStream methods.
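For example, a minimal sketch, assuming the WebBrowser in your XAML is named webBrowser and urlcode is the HTML string from your snippet above:
// Minimal sketch: urlcode is the HTML string downloaded in the question's code,
// and webBrowser is a WebBrowser control declared in the window's XAML.
webBrowser.NavigateToString(urlcode);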
Use the WebBrowser control and its NavigateToStream method:
XAML:
<Grid>
<WebBrowser Name="webBrowser"/>
</Grid>
Code:
WebRequest request = WebRequest.Create("http://google.com");
webBrowser.NavigateToStream(request.GetResponse().GetResponseStream());
This is a simplified example. You would at least have to close/dispose the response object when navigation has finished.
Use a NavigationWindow to contain the page. You can then call the Navigate(Uri) method to traverse from page to page.
MSDN entry
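A minimal sketch, assuming a hypothetical page URI in the same assembly:
// Minimal sketch: host content in a NavigationWindow and move between pages with Navigate(Uri).
// "QuotePage.xaml" is only a placeholder page name.
NavigationWindow window = new NavigationWindow();
window.Navigate(new Uri("QuotePage.xaml", UriKind.Relative));
window.Show();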
You can use the WebBrowser control and pass the string containing the HTML to its NavigateToString method:
webB.NavigateToString(@"<html>HTML code goes here</html>");
I'm looking for a method that replicates a Web Browsers Save Page As function (Save as Type = Text Files) in C#.
Dilemma: I've attempted to use WebClient and HttpWebRequest to download all Text from a Web Page. Both methods only return the HTML of the web page which does not include dynamic content.
Sample code:
string url = @"https://www.canadapost.ca/cpotools/apps/track/personal/findByTrackNumber?trackingNumber=" + package.Item2 + "&LOCALE=en";
try
{
System.Net.ServicePointManager.SecurityProtocol = System.Net.SecurityProtocolType.Tls11 | System.Net.SecurityProtocolType.Tls12;
using (WebClient client = new WebClient())
{
string content = client.DownloadString(url);
}
}
catch (WebException ex)
{
    // handle or log the failed request (omitted in this sample)
}
The above example returns the HTML without the tracking events from the page.
When I display the page in Firefox, right-click on the page, select Save Page As, and save as a Text File, all of the raw text is saved in the file. I would like to mimic this feature.
If you are scraping a web page that shows dynamic content then you basically have 2 options:
Use something to render the page first. The simplest in C# would be to have a WebBrowser control and listen for the DocumentCompleted event (see the sketch after these options). Note that there is some nuance to this when it fires for multiple documents on one page.
Figure out what service the page is calling to get the extra data, and see if you can access that directly. It may well be the case that the Canadapost website is accessing an API that you can also call directly.
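For option 1, a minimal sketch (WinForms; the variable names are only examples, and it assumes the dynamic content is present once the top-level document completes):
// Minimal sketch: let the WebBrowser control render the page, then read the rendered text.
// Must run on an STA thread with a message loop (e.g. inside a WinForms app).
var browser = new System.Windows.Forms.WebBrowser();
var target = new Uri(url);   // url is the tracking URL built in the question's sample code
browser.ScriptErrorsSuppressed = true;
browser.DocumentCompleted += (s, e) =>
{
    if (e.Url != target) return;   // DocumentCompleted also fires for frames; ignore those
    string renderedText = browser.Document.Body.InnerText;   // text after scripts have run
    // ... use renderedText ...
};
browser.Navigate(target);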
The Problem:
I'm running a WinForms application with an embedded WebBrowser control. I've used the magic registry setting to switch this control to IE8 mode (as answered here: Will the IE9 WebBrowser Control Support all of IE9's features, including SVG?).
But now if I navigate to a website which contains the meta tag X-UA-Compatible IE=9 (see http://msdn.microsoft.com/en-us/library/cc288325(v=vs.85).aspx), my WebBrowser control switches to IE9 mode and ignores the registry setting.
I would like my control to stay in IE8 mode...
My solution attempts
I've tried to remove the meta tag after the control has loaded (DocumentCompleted) using IHTMLDOMNode.removeChild, but the control does not re-render the page.
I've tried to load the HTML content manually (using WebClient), remove the meta tag and feed this into the WebBrowser control (using Document.Write or DocumentText), but this way the control refuses to load any other content (like images).
Help
Now I'm out of ideas, short of writing my own HTTP proxy and modifying the response on the way (which I would not like to do).
Anyone any ideas?
I'm using .Net 4, I cannot change the website which will be displayed and I need it to render in IE8 mode regardless of the X-UA-Compatible tag...
Thanks!
I had problems with DocumentText too - I gave up on it.
My solution was to write an in-process HTTP server and point the WebBrowser at that.
I wrote an article about it here: http://SimplyGenius.net/Article/WebBrowserEx
In my case, I was getting the content from the file system.
You'd have to change it to make calls to your target website, but it shouldn't be too much work.
Then you can modify the HTML as you like, and links will still work.
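A minimal sketch of that idea using HttpListener (the port, target site, and tag-stripping logic here are placeholders, not the article's actual code):
// Minimal in-process server: fetch the real page, strip the X-UA-Compatible meta tag,
// and serve the modified HTML. Point the WebBrowser control at http://localhost:8080/.
// Run this loop on a background thread.
var listener = new System.Net.HttpListener();
listener.Prefixes.Add("http://localhost:8080/");
listener.Start();
while (true)
{
    var context = listener.GetContext();
    string html;
    using (var client = new System.Net.WebClient())
    {
        // "http://targetsite.example" stands in for the site you cannot change
        html = client.DownloadString("http://targetsite.example" + context.Request.Url.PathAndQuery);
    }
    // Naive removal; in practice match the tag more robustly (e.g. with HtmlAgilityPack)
    html = html.Replace("<meta http-equiv=\"X-UA-Compatible\" content=\"IE=9\">", "");
    byte[] buffer = System.Text.Encoding.UTF8.GetBytes(html);
    context.Response.ContentType = "text/html";
    context.Response.OutputStream.Write(buffer, 0, buffer.Length);
    context.Response.Close();
}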
Don't know of a way to make the WebBrowser control ignore that tag and not override your registry setting. For a quick (dirty) workaround you could do the following.
Create a request for the site which you want to show in the WebBrowser control.
var requestUri = new Uri("http://stackoverflow.com/");
var request = (HttpWebRequest) WebRequest.Create(requestUri);
Get the response.
var response = request.GetResponse();
using (var stream = response.GetResponseStream())
using (var reader = new StreamReader(stream))
{
var html = reader.ReadToEnd();
//...
}
Use NuGet to install HtmlAgilityPack.
http://nuget.org/packages/HtmlAgilityPack
Load the HTML you've just retrieved in an HtmlDocument instance.
var document = new HtmlAgilityPack.HtmlDocument();
document.LoadHtml(html);
Select the tag you want to remove. Here I use StackOverflow.com as an example and select its stylesheet nodes instead. When found, just remove them.
var nodes = document.DocumentNode.SelectNodes("//link[@rel=\"stylesheet\"]");
foreach(var node in nodes)
{
node.ParentNode.RemoveChild(node);
}
All that remains is to retrieve the modified HTML and feed it directly to the WebBrowser control.
html = document.DocumentNode.OuterHtml;
webBrowser.DocumentText = html;
It cannot interpret what's not there.
You could do the same to solve your issue. Issue a request, get the response, modify the HTML and feed it to the WebBrowser control. Tested it, seems to load the rest of the document OK.
I have a button click event set up to retrieve a byte array object from my DB and it is then going to show the file in a new browser window. Right now I have this much:
Response.ContentType = "image/jpeg";
Response.AddHeader("content-length", fileBytes.Length.ToString());
Response.BinaryWrite(fileBytes);
where fileBytes is my byte array. This is working perfectly, but I need to force this to open in a new window. I have tried adding the JavaScript to the response with Response.Write, but that doesn't seem to work.
Writing your response is handled server side. Displaying your response is handled client side. You would have to tell your client to open a new window given the response from the server, e.g.
<a href="getImage.aspx" target="_blank">Get Image</a>
Where getImage.aspx is the ASP.NET page responsible for serving the image/page.
You can't open a new window from server-side code. You'll need to call window.open() from JavaScript and pass in a URL to a page that returns the file.
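For example, a minimal sketch from the button's click handler (getImage.aspx, the control name, and the helper are placeholders):
// Minimal sketch: the server-side click handler emits script that opens the page
// serving the bytes (the question's Response.BinaryWrite code) in a new window.
protected void ShowFileButton_Click(object sender, EventArgs e)
{
    int fileId = GetSelectedFileId();   // hypothetical helper returning the DB record id
    string script = "window.open('getImage.aspx?id=" + fileId + "', '_blank');";
    ClientScript.RegisterStartupScript(this.GetType(), "openFile", script, true);
}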
You'd want to have your button click open the new browser window, which then makes the call to your code you have posted in your question. You're trying to do it sort of backwards.
Use a hyperlink with a URL to a blank .aspx,
pass parameters in the URL as ?param=4&param2=... etc.
In the load event for that page, place your response code.
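A minimal sketch of that page's Page_Load (the parameter and helper names are placeholders):
// Minimal sketch of the blank page's code-behind: read the id from the query string,
// load the bytes, and write them as the response. GetFileBytes is a hypothetical helper.
protected void Page_Load(object sender, EventArgs e)
{
    int id = int.Parse(Request.QueryString["param"]);
    byte[] fileBytes = GetFileBytes(id);
    Response.ContentType = "image/jpeg";
    Response.AddHeader("content-length", fileBytes.Length.ToString());
    Response.BinaryWrite(fileBytes);
    Response.End();
}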
I am using WebRequest and WebResponse in my application. I want to click a button and fill in a textbox field, the same as I could with the WebBrowser control. How can I do this with WebRequest and WebResponse?
You can't do that; they just return you the response stream.
I did a blog post on how to programmatically log into a website(I used Twitter in my example). This is basically what you want(filling in textboxes and submitting the information), if I understand your question correctly.
http://eclipsed4utoo.com/blog/log-website-programmatically/
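The general idea is to reproduce the form submission yourself by POSTing the same fields the page's form would send; a minimal sketch (the URL and field names are placeholders; the real ones come from the target page's HTML form):
// Minimal sketch: "fill in the textboxes" by POSTing the form fields the page would submit.
// The URL and field names are placeholders; find the real ones in the target page's HTML form.
string postData = "username=" + Uri.EscapeDataString("myUser")
                + "&password=" + Uri.EscapeDataString("myPass");
byte[] bytes = System.Text.Encoding.UTF8.GetBytes(postData);

HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/login");
request.Method = "POST";
request.ContentType = "application/x-www-form-urlencoded";
request.ContentLength = bytes.Length;
using (Stream stream = request.GetRequestStream())
{
    stream.Write(bytes, 0, bytes.Length);
}
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (StreamReader reader = new StreamReader(response.GetResponseStream()))
{
    string html = reader.ReadToEnd();   // the page returned after the "button click"
}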
C# 3.0
ASP.Net 2.0
IIS6
I have a regular [non-https] page. There is the standard one ASP.Net form on the page.
There are two "areas" of functionality on the page though: Login and "Get Quote". The login portion needs to POST to HTTPS, while the rest of the page [including the "other area"] can't be HTTPS. In Java [JSP] and regular HTML, we would just have two forms: one that posts to HTTPS and one that doesn't.
What is the way to handle this in ASP.Net [from one page]? I know that I could link to an HTTPS login.aspx page, but the business really would like the context together.
Any ideas?
Thanks,
The solution is to use ASP.NET to specify a "cross page postback"; that is, you use the PostBackUrl property of any button control (LinkButton, Button, ImageButton, etc.). This property allows you to post back to any page you like. Just set your PostBackUrl to the https version of your page and you're good to go (also make sure there are no URL redirects active which force http on your page).
// ensure we send credentials over a secure connection
if (!HttpContext.Current.Request.IsSecureConnection)
{
string postbackUrl = HttpContext.Current.Request.Url.AbsoluteUri.Replace("http://", "https://");
LinkButton_Login.PostBackUrl = postbackUrl;
}
In your specific case you should set one of your buttons to post back to the https version, and the other to the http version (if you don't specify the PostBackUrl the default is to post back to the page itself as is).
You can have two forms on an aspx page. You just can't nest them.
On a page I built, I have one form that posts back to the page, and one that posts back to Google Checkout.
If you have to mix the contents of the page, put the https form at the bottom of the page (after the main form's closing tag) and fill it with hidden fields. When the user clicks a button, use JavaScript to assign values to the hidden fields and then post the https form.
You could do a manual post through code using the HttpWebRequest object for the login event and then write the returned response back to the user's stream.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(webRequest.URL);
request.UserAgent = UserAgent;
request.ContentType = ContentType;
request.Method = "POST";
// Write your bytes of the login section here
Stream oStream = request.GetRequestStream();
oStream.Write(webRequest.BytesToWrite, 0, webRequest.BytesToWrite.Length);
oStream.Close();
// Send the request and get a response
HttpWebResponse resp = (HttpWebResponse)request.GetResponse();
// Read the response
StreamReader sr = new StreamReader(resp.GetResponseStream());
// return the response to the screen
string returnedValue = sr.ReadToEnd();
sr.Close();
resp.Close();
Response.Write(returnedValue);
I'm assuming from your context that you are doing one thing or the other, not both at the same time.
Look at the PostBackUrl property of the button objects.
The login button can post back to "https://secure.login.com".
The other button can just post back to the page itself.
The problem here is that you'll still be posting back the login fields to the insecure page, which means they're not encrypted, and could be sniffed.
The quick and dirty workaround would be to have javascript clear the login fields before posting if the "Get Quote" button is pressed.
Are the HTTP and HTTPS pages on the same server / part of the same application?
If so, you may be able to use the Server.Transfer() method to keep the form intact but also have the HTTPS.
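A minimal sketch (the page name is a placeholder):
// Minimal sketch: transfer to the login page, preserving the Form and QueryString
// collections so the posted fields are still available. "Login.aspx" is a placeholder.
Server.Transfer("Login.aspx", true);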
In ASP.Net 3.5 (maybe SP1--forget if it was in the base library or the SP) you can now set the "action" attribute. But that would make it post to HTTPS for both 'forms'.
If you want to have both forms on the same page, and determine which to post to at 'runtime', you'll have to do it with client-side code. Have client handlers on all objects that trigger postbacks, or hook into the __doPostBack function, and have them check which button was pressed. If it's the non-secure post, then clear out any data in the login fields first. Then manually trigger the postback yourself to the correct page.
Couldn't you just do a Response.Redirect("https://.../Login.aspx"); in the Login button click event?