I have an MVC application that loads a video content package (from an external third-party vendor) into an iframe. Under normal circumstances, when you call the third party to retrieve the content, you pass a ReturnURL parameter that tells the vendor's player where to redirect after the video content module has completed. Unfortunately, some newer packages do not support the redirect: when the content module completes, the player redirects the iframe to a page called "goodbye.htm" (still on the vendor's domain) instead of the designated return URL.
The vendor suggested we put a listener on the iframe to detect when its src changes to this "goodbye.htm" page and then execute the code we need to redirect.
One problem we've encountered is getting the src of the iframe AFTER it changes: reading the src attribute from the iframe only returns the original src, even after the content navigates away. The other issue, of course, is that the domain of the goodbye.htm page differs from the domain of our application and the parent window, so reading the frame's current location violates the Same-Origin Policy.
Below is the jQuery handler we're using to attempt the redirect.
$('.js__iframe-play').on('load', function () {
    var src = $(this).attr("src");
    if (src.includes("goodbye.htm")) {
        const url = "/Learn/GetResults/" + $(this).data("class-id") + "?regid=" + $(this).data("registration-id");
        console.log(url);
        const newiframe = $(this).clone();
        newiframe.attr("src", url);
        $(this).after(newiframe);
        $(this).remove();
    }
});
Does anyone have any idea of how we can accomplish our goal?
My web app is .NET 6, built in VS 2022 as a hosted Blazor WASM solution.
I am having a problem with the URL for the API when it is called from a component.
I have three files: page VehicleList.razor, page VehicleEdit.razor, and component CRUD_Vehicle.razor.
The flow is that the user navigates to 'VehicleList' and can select a vehicle to EDIT via a callback function. Once a vehicle is selected, the flow navigates to 'VehicleEdit', where some data needs to be passed to support the embedded sub-component 'CRUD_Vehicle'.
The problem is that the URL for 'VehicleEdit' carries the query-string data in a comma-delimited string, AND it remains visible while the CRUD component is showing.
https://localhost:7777/vehicleedit/667?par=17,Bigelow,Active
When the user makes editing changes and SUBMITs the component, the HttpClient service gets called and fails with a 400 Bad Request. I am under the impression that the query string interferes with the API URL. Please see below. Is there a way I could set the URL to not show the query string and only have the base address and the UID_VEHICLE, like this?
https://localhost:7777/vehicleedit/667
public async Task<Vehicle> UpdateVehicle(Vehicle pVehicle) {
    // Save the edited record.
    HttpResponseMessage response = await httpClient.PutAsJsonAsync<Vehicle>($"/api/vehicle/{pVehicle.UID_VEHICLE}", pVehicle);
    if (!response.IsSuccessStatusCode) {
        // Problem handling here.
        Console.WriteLine($"UpdateVehicle() Error occurred, the status code is: {(int)response.StatusCode}: {response.StatusCode}");
    }
    return await response.Content.ReadFromJsonAsync<Vehicle>();
}
I have a similar construction for editing the Customer object, but the page that shows the CRUD_Customer component has a simple URL and works perfectly to save data to the DB:
https://localhost:7777/customeredit/17
The main difference is the URL for VehicleEdit.razor. Your comments are welcome.
Thanks.
I downloaded Postman and entered the PUT transaction, and the PUT RESULT showed an ERROR (based on a DataAnnotation for the StartDate field) that was NOT caught by the client-side validation code. The TITLE text of the 400 response only hinted at a possible cause, but Postman showed me the exact reason why the PUT call failed.
So my 'answer' is: use Postman to simulate the httpClient.PutAsJsonAsync call and observe the result details to pinpoint the reason for the 400 Bad Request returned from the service.
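The same diagnosis can also be made without leaving the app by dumping the response body on failure; here is a small hedged sketch against the UpdateVehicle method above:

if (!response.IsSuccessStatusCode) {
    // A 400 from ASP.NET model validation usually carries a ProblemDetails
    // payload naming the offending field (StartDate here), the same detail
    // Postman shows in its response pane.
    string body = await response.Content.ReadAsStringAsync();
    Console.WriteLine($"UpdateVehicle() failed ({(int)response.StatusCode}): {body}");
}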
I am trying to get the HTML Code from a specific website async with the following code:
var response = await httpClient.GetStringAsync("url");
But the problem is that the website usually takes another second to load the other parts of it, which I need. So the question is whether I can load the site first and then read the content after a certain amount of time.
Sorry if this question already got answered, but I didn't really know what to search for.
Thanks,
Twenty
Edit #1
If you want to try it yourself, the URL is http://iloveradio.de/iloveradio/; I need the title and the artist, which do not load immediately.
You are heading in the wrong direction. The referenced site has a playlist API which returns JSON. You can get the information from:
http://iloveradio.de/typo3conf/ext/ep_channel/Scripts/playlist.php
Edit: the Chrome Inspector was used to find the playlist link.
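A minimal sketch of consuming that endpoint with HttpClient (the JSON field names are not documented here, so this just fetches and prints the raw payload to inspect before deserializing):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class PlaylistFetcher
{
    private static readonly HttpClient httpClient = new HttpClient();

    static async Task Main()
    {
        // The endpoint returns the current playlist as JSON; inspect the raw
        // payload first to learn the field names, then deserialize as needed.
        string json = await httpClient.GetStringAsync(
            "http://iloveradio.de/typo3conf/ext/ep_channel/Scripts/playlist.php");
        Console.WriteLine(json);
    }
}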
You could use Puppeteer-Sharp:
await new BrowserFetcher().DownloadAsync(BrowserFetcher.DefaultRevision);
using (var browser = await Puppeteer.LaunchAsync(new LaunchOptions { Headless = false }))
using (var page = await browser.NewPageAsync())
{
    await page.SetViewportAsync(new ViewPortOptions() { Width = 1280, Height = 600 });
    await page.GoToAsync("http://iloveradio.de/iloveradio/");
    await page.WaitForSelectorAsync("#artisttitle DIV");
    var artist = await page.EvaluateExpressionAsync<string>("$('#artisttitle DIV')[0].innerText");
    Console.WriteLine(artist);
    Console.ReadLine();
}
If there are things that load afterwards, they are generated by JavaScript code after page load (an AJAX request, for example), so no matter how long you wait, the raw response won't contain the content you want (it is not in the source code the server returns).
The easy way to do it:
Use a WebBrowser control and, when the DocumentCompleted event triggers, wait until the element you want appears (see the sketch below).
The right way:
Find the JavaScript yourself and trigger it yourself (easy to say, hard to do).
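A rough WinForms sketch of the easy way; the element id artisttitle is taken from the question's edit and is an assumption here:

// Navigate, then poll after DocumentCompleted until the dynamically
// filled element appears ("artisttitle" is assumed from the question).
webBrowser1.DocumentCompleted += (s, e) =>
{
    var timer = new System.Windows.Forms.Timer { Interval = 500 };
    timer.Tick += (ts, te) =>
    {
        var el = webBrowser1.Document?.GetElementById("artisttitle");
        if (el != null && !string.IsNullOrEmpty(el.InnerText))
        {
            timer.Stop();
            Console.WriteLine(el.InnerText); // the artist/title text
        }
    };
    timer.Start();
};
webBrowser1.Navigate("http://iloveradio.de/iloveradio/");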
The thing to understand here is that when you read the response from the URL, all you will ever get is the raw response, in this case the HTML source code the server replied with.
Unlike what you might see in your browser's DOM Inspector developer tools, you will only get the original HTML source from the page (what you might see in the "Page Source" developer tool) which will not include any dynamically created content (JavaScript) or loaded content (like iframes).
So you aren't getting the rendered DOM that the DOM Inspector shows; you are getting the raw markup that the Page Source shows (View > Developer > View Source in Chrome).
You can't wait for that other content to load because it will never load since that HTML content isn't being parsed or rendered like a browser would.
You have several options available though:
See if the website has an API you can use
Determine where that content you want is actually loaded from, and make another/different HTTP request to that content (the Network Panel is helpful here)
Use a headless browser to programmatically load the page and dynamically read the page contents (this will add a lot of overhead, and should probably be avoided if possible)
I have checked out the website; the data is loaded by JavaScript. httpClient.GetStringAsync("url") will only ever get you the static HTML. As far as I know, there is no way for that call alone to return elements that are manipulated by the browser.
I am working on a piece of code that directly relates to redirecting a page to a login screen if the user id is non existent.
The code is currently written as:
this.currentContext = System.Web.HttpContext.Current;
this.User = new BLL.User(); // base constructor
this.User.RestoreSession(currentContext.Session); // attempt to connect to DB with current session

if (this.UserID < 1)
{
    currentContext.Response.Redirect("~/Default.aspx?url=" + currentContext.Request.Url.AbsoluteUri.ToBase64());
}
This works just fine. However, a new add-on we are building into the system uses iframes, which is okay, except that the login screen happens inside the iframe, and we need the parent window to redirect to the login page and then return to the page we were on.
My question is what would be the best way of doing this without rewriting the entire login process?
The best way to do the redirect without rewriting the login process is to replace the iframes with Ajax calls. Check this: How can I use AJAX as an alternative to an iframe?
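A hedged sketch of how the existing session check could branch for Ajax callers, reusing the question's ToBase64() extension (the header test and JSON shape are illustrative); the script that made the Ajax call can then set window.top.location to the returned URL, redirecting the parent window rather than a frame:

if (this.UserID < 1)
{
    string loginUrl = "~/Default.aspx?url=" + currentContext.Request.Url.AbsoluteUri.ToBase64();

    // Ajax callers (identified by the conventional X-Requested-With header)
    // get the login URL back as JSON instead of a server-side redirect.
    if (currentContext.Request.Headers["X-Requested-With"] == "XMLHttpRequest")
    {
        currentContext.Response.ContentType = "application/json";
        currentContext.Response.Write("{\"loginUrl\":\"" + loginUrl + "\"}");
        currentContext.Response.End();
    }
    else
    {
        currentContext.Response.Redirect(loginUrl);
    }
}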
I'm aware that data can be passed in through the URL, like "example.com/thing?id=1234", or it can be passed in through a form and a "submit" button, but neither of these methods will work for me.
I need to get a fairly large xml string/file. I need to parse it and get the data from it before I can even display my page.
How can I get this on page load? Does the client have to send a http request? Or submit the xml as a string to a hidden form?
Edit with background info:
I am creating a widget that will appear in my customer's application, embedded using C# WebBrowser control, but will be hosted on my server. The web app needs to pass some data (including a token for client validation) to my widget via xml, and this needs to be loaded in first thing when my widget starts up.
ASP.NET MVC 4 works great with jQuery and Ajax posts. I have accomplished this goal many times by taking advantage of this.
jQuery:
$(document).ready(function () {
    $.ajax({
        type: "POST",
        url: "/{controller}/{action}/",
        data: { clientToken: '{token}', foo: 'bar' },
        success: function (data, text) {
            //APPEND YOUR PAGE WITH YOUR PARSED XML DATA
            //NOTE: 'data' WILL CONTAIN YOUR RETURNED RESULT
        }
    });
});
MVC Controller:
[HttpPost]
public JsonResult jqGetXML(string clientToken, string foo)
{
    JsonResult jqResult = new JsonResult();
    //GET YOUR XML DATA AND DO YOUR WORK
    jqResult.Data = null; //WHATEVER YOU WANT TO RETURN
    return jqResult;
}
Note: This example returns Json data (easier to work with IMO), not XML. It also assumes that the XML data is not coming from the client but is stored server-side.
EDIT: Here is a link to jQuery's Ajax documentation,
http://api.jquery.com/jQuery.ajax/
Assuming you're using ASP.NET, since you say it's generated by another page, just stick the XML in the Session state.
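A minimal WebForms-flavoured sketch of that hand-off; the key name and page names are illustrative:

// Generating page: stash the XML server-side before redirecting
// ("PendingXml" is an arbitrary key; xmlString holds the generated XML).
Session["PendingXml"] = xmlString;
Response.Redirect("~/Widget.aspx");

// Consuming page, e.g. in its Page_Load: pull it back out and clear it.
string xml = (string)Session["PendingXml"];
Session.Remove("PendingXml");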
Another approach; not sure if it helps in your situation.
If your two sites share the second-level domain name (i.e. .....sitename.com), then another potential way to share data is to have one site assert a cookie at that second level containing the token and xml data. You'll then be provided with this cookie on the other site.
I've only done this to share authentication details; you need to share machine keys at a minimum to support this (assuming .NET here...).
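A hedged sketch of asserting such a cookie (names and domain are illustrative). Note that cookies are size-limited (roughly 4 KB), so sharing a token that lets the widget fetch the XML server-side may be more practical than sharing the XML itself:

// Issued by the first site; the Domain setting makes the cookie visible
// to every *.sitename.com host, so the second site can read it back.
var cookie = new HttpCookie("WidgetToken", tokenValue) // names illustrative
{
    Domain = ".sitename.com",
    Secure = true,
    HttpOnly = true
};
Response.Cookies.Add(cookie);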
You won't be able to automatically upload a file from the client to the server - at least not via a browser using html/js/httprequests. The browser simply will not allow this.
Imagine the security implications if browsers allowed you to silently upload a file from the clients local machine without their knowledge.
Sample solution:
A background process imports the xml file and parses it. The process knows the file is for customer YYY and updates their information, so your app knows the xml file has been processed.
A visitor goes to the customer's web application where the widget is embedded. The customer token has been added in the markup of the widget. This could be in JavaScript, Flash, an iframe, etc.
When the widget loads, it makes a request to your app, which then checks whether the file was parsed for the provided customer (YYY); if it has been, show the page/widget.
If the XML is being served via HTTP, you can use Linq to XML to parse the data.
Ex.
using System;
using System.Text;
using System.Xml.Linq;

public partial class Sample : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string url = "http://news.yahoo.com/rss/";
        var el = XElement.Load(url).Elements("channel");
        StringBuilder output = new StringBuilder();
        // Walk the channel's children and collect every <title> element.
        foreach (var c in el.Elements())
        {
            switch (c.Name.LocalName.ToLower())
            {
                case "title":
                    output.Append(c.Value);
                    output.Append("<br />");
                    break;
            }
        }
        this.Label1.Text = output.ToString();
    }
}
It is not exactly clear what the application is, what kind of options you have, and what kind of control over the web server you have.
If you are the owner of the web server/application, your options are much wider. You can first send a file to the web server with an HTTP POST or PUT, including a random token, and then use the same token for a GET with the token in the query string (see the sketch below),
or use the other options, applicable to third-party-owned websites.
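A compact sketch of that two-step flow with HttpClient; the host, endpoint paths, and token parameter are illustrative:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class XmlHandOff
{
    private static readonly HttpClient client = new HttpClient
    {
        BaseAddress = new Uri("https://your-server.example/") // illustrative host
    };

    static async Task Main()
    {
        string token = Guid.NewGuid().ToString("N");

        // Step 1: push the XML up, keyed by the random token.
        var content = new StringContent("<data>...</data>");
        await client.PostAsync($"upload?token={token}", content);

        // Step 2: the page (or widget) retrieves it with the same token.
        string xml = await client.GetStringAsync($"download?token={token}");
        Console.WriteLine(xml);
    }
}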
If you are trying to consume some auth API, learn more about it first. Since you are hosting a WebBrowser control, you have plenty of options to script it, including loading whatever form it serves, setting the textarea or hidden-field text with your XML, and then simulating a submit-button click. You can then respond to any redirects and HTML responses.
You can also inject JavaScript inside the page that would send it to the server with an Ajax request.
The choice heavily depends on the interaction model.
If you need better advice, it would be most helpful if you provided a sample/simplified URL/URL pattern, the form content, and the sequence of events expected from your code from an API/SDK perspective; they are usually quite friendly.
There are a limited number of ways to pass data between pages. Personally, for this I would keep the data in session: set it in the generating page and clear it when it is retrieved in the required page.
If it is generated server side, then there is no reason to retrieve it from the client side.
http://msdn.microsoft.com/en-us/library/6c3yckfw(v=vs.100).aspx
Create a webservice that your C# app can POST the XML to and get back HTML in response. Load this HTML string into the WebBrowser control rather than pointing the control to a URL.
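A hedged sketch of that round trip from the host application; the service URL is illustrative and xml holds the string to send:

// Host app: POST the XML to your service, then hand the returned HTML
// straight to the WebBrowser control instead of navigating to a URL.
using (var client = new HttpClient())
{
    var response = await client.PostAsync(
        "https://your-server.example/widget", // illustrative endpoint
        new StringContent(xml, System.Text.Encoding.UTF8, "text/xml"));
    string html = await response.Content.ReadAsStringAsync();
    webBrowser1.DocumentText = html; // render the HTML without a URL
}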
I am doing a project in which I have to make a Windows application that can take a URL in a textbox from the user. When the user presses the Proceed button, the application should open that URL in a WebBrowser control, fill in the form on that page containing userID and password textboxes, and submit it via the login button on that web page. My application should then show the next page in that WebBrowser control to the user.
I can open the URL in the application's WebBrowser control through my C# code, but I can't figure out how to find the userID and password textboxes on the web page that is currently open in the WebBrowser control, how to fill them in, how to find the login button, and how to click it through my C# code.
For this you will have to look at the page source of the 3rd-party site and find the ids of the username textbox, the password textbox and the submit button. (If you provide a link, I would check it for you.) Then use this code:
//add a reference to Microsoft.mshtml in Solution Explorer
using mshtml;

private SHDocVw.WebBrowser_V1 Web_V1;

private void Form1_Load(object sender, EventArgs e)
{
    Web_V1 = (SHDocVw.WebBrowser_V1)webBrowser1.ActiveXInstance;
}

private void webBrowser1_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    if (webBrowser1.ReadyState == WebBrowserReadyState.Complete)
    {
        if (webBrowser1.Url.ToString() == "YourLoginSite.Com")
        {
            try
            {
                HTMLDocument pass = (HTMLDocument)Web_V1.Document;
                HTMLInputElement passBox = (HTMLInputElement)pass.all.item("PassIDThatyoufoundinsource", 0);
                passBox.value = "YourPassword";

                HTMLDocument log = (HTMLDocument)Web_V1.Document;
                HTMLInputElement logBox = (HTMLInputElement)log.all.item("loginidfrompagesource", 0);
                logBox.value = "yourlogin";

                HTMLInputElement submit = (HTMLInputElement)pass.all.item("SubmitButtonIDFromPageSource", 0);
                submit.click();
            }
            catch { /* swallow DOM lookup failures; the ids may not exist yet */ }
        }
    }
}
I would use Selenium as opposed to the WebBrowser control.
It has an excellent C# library, and this kind of thing is the main reason it was developed.
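For reference, a minimal Selenium sketch (requires the Selenium.WebDriver NuGet package; the URL and element ids are placeholders you would take from the target page's source):

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class LoginBot
{
    static void Main()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://example.com/login"); // placeholder

            // Ids below are placeholders; read them from the page source.
            driver.FindElement(By.Id("username")).SendKeys("yourlogin");
            driver.FindElement(By.Id("password")).SendKeys("yourpassword");
            driver.FindElement(By.Id("loginButton")).Click();
            // The driver now shows the post-login page, as the WebBrowser would.
        }
    }
}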
You don't have to simulate filling in the username/password fields or clicking the login button. You need to simulate the browser rather than the user.
Read the login page HTML and parse it to find the ids of the username and password fields. The username field can be found by looking for <input> tags with name set to "username", "user", "login", etc. The password field will usually be an <input> tag with type="password". JavaScript-based popup panels for login would involve parsing the JS.
Then follow the example code shown here, How do you programmatically fill in a form and 'POST' a web page?
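If you go the parsing route, a library such as HtmlAgilityPack (not mentioned in the answer above, but a common choice) saves hand-rolled string matching; a hedged sketch:

using HtmlAgilityPack;

// Load the login page and discover the form field names to POST.
var web = new HtmlWeb();
HtmlDocument doc = web.Load("https://example.com/login"); // placeholder URL

// The password box is normally <input type="password">; grab its name.
HtmlNode passNode = doc.DocumentNode.SelectSingleNode("//input[@type='password']");
string passwordField = passNode?.GetAttributeValue("name", null);

// The form's action attribute tells you where to send the POST.
HtmlNode form = doc.DocumentNode.SelectSingleNode("//form");
string postUrl = form?.GetAttributeValue("action", null);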
The important thing here is that you're simulating a browser POST event. Don't worry about text boxes and other visual form elements, your goal is to generate a HTTP POST request with the appropriate key-value pairs.
Your first step is to look through the HTML of the page you're pretending to be and figure out the names of the user id and password form elements. So let's say, for example, that they're called "txtUsername" and "txtPassword" respectively; then the POST arguments that the browser (or user agent) will be sending up in its POST request will be something like:
txtUsername=fflintstone&txtPassword=ilikerocks
As a background to this, you might like to do a little research on how HTTP works. But I'll leave that to you.
The other important thing is to figure out what URL it posts this login request to. Normally, this is whatever appears in the address bar of the browser when you log in, but it may be something else. You'll need to check the action attribute of the form element to see where it goes.
It may be useful to download a copy of Fiddler2. Yes, weird name, but it's a great web debugging tool that basically acts as a proxy and captures everything going between the browser and the remote host. Once you figure out how to use it, you can then pull apart each request-response to see what's happening. It'll give you the URL being called, the type of the request (usually GET or POST), the request arguments, and the full text of the response.
Now, you want to build your app. You need to build logic which makes the correct HTTP requests, passes in the form arguments, and gets back the results. Luckily, the System.Net.HttpWebRequest class will help you do just that.
Let's say the login page is at www.hello.org/login.aspx and it expects you to POST the login arguments. So your code might look something like this (obviously, this is very simplified):
Imports System.IO
Imports System.Net
Imports System.Web
Dim uri As String = "http://www.hello.org/login.aspx"
Dim request As HttpWebRequest = DirectCast(WebRequest.Create(uri), HttpWebRequest)
request.Timeout = 10000 ' 10 seconds
request.UserAgent = "FlintstoneFetcher/1.0" ' or whatever
request.Accept = "text/*"
request.Headers.Add("Accept-Language", "en")
request.Method = "POST"
Dim data As Byte() = New ASCIIEncoding().GetBytes("txtUsername=fflintstone&txtPassword=ilikerocks")
request.ContentType = "application/x-www-form-urlencoded"
request.ContentLength = data.Length
Dim postStream As Stream = request.GetRequestStream()
postStream.Write(data, 0, data.Length)
postStream.Close()
Dim webResponse As HttpWebResponse
webResponse = DirectCast(request.GetResponse(), HttpWebResponse)
Dim streamReader As StreamReader = New StreamReader(webResponse.GetResponseStream(), Encoding.GetEncoding(1252))
Dim response As String = streamReader.ReadToEnd()
streamReader.Close()
webResponse.Close()
The response string now contains the full response text from the remote host, and that host should consider you logged in. You may need to do a little extra work if the remote host is trying to set cookies (you'll need to return those cookies). Alternatively, if it expects you to pass integrated authentication on successive pages, you'll need to add credentials to your successive requests, something like:
request.Credentials = New NetworkCredential(theUsername, thePassword)
That should be enough information to get cracking. I would recommend that you modularise your logic for working with HTTP into a class of its own. I've implemented a complex solution that logs into a certain website, navigates to a pre-determined page, parses the html, looks for a daily file to be downloaded in the "invox", and downloads it if it exists. I set this up as a batch process which runs each morning, saving someone from having to do it manually. Hopefully my experience will benefit you!
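On the cookie point: HttpWebRequest will round-trip cookies automatically if you attach a shared CookieContainer to each request. A C# sketch (the second URL is illustrative; the VB code above would set the same property):

// Share one CookieContainer across requests so the session cookie set by
// the login response is sent back on subsequent requests automatically.
var cookies = new System.Net.CookieContainer();

var loginRequest = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(
    "http://www.hello.org/login.aspx");
loginRequest.CookieContainer = cookies;
// ... POST the credentials as shown above ...

var nextRequest = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(
    "http://www.hello.org/members.aspx"); // illustrative next page
nextRequest.CookieContainer = cookies; // login cookies flow automatically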