I am building onto a massive Razor website, which I cannot re-architect. I need to use AngularJS on the client side, and when the page is loaded there is a little bit of server-side preprocessing that needs to be done before the page is rendered.
I need to pass a parameter to C# via the URL query string, and I need that same parameter on the JavaScript side. Currently, if I use this URL:
http://localhost:32289/razorPage.cshtml#/?param="1234"
I can get that value on the JavaScript side, but when I call
var queryString = HttpContext.Current.Request.QueryString;
on the server side, it's empty. Additionally, if I use this URL:
http://localhost:32289/razorPage.cshtml/?param="1234"#/
I can access the query string on the server side, but then JavaScript goes nuts, as though the code in my Angular controller were being rerun continuously. I get this in the Chrome console:
Synchronous XMLHttpRequest on the main thread is deprecated because of its detrimental effects to the end user's experience. For more help, check https://xhr.spec.whatwg.org/.
And eventually an error saying the maximum call stack size has been exceeded. If I put a console.log() in that Angular controller, it logs continuously until the call stack limit is hit.
This is my razorPage.cshtml:
@{
Page.Title = "OCE Visualizer";
Page.IsDetailedView = false;
Page.IsCapacity = false;
Page.IsEmbedded = false;
InitProvider.Init();
}
<html>
<!--...AngularJs App lives in here-->
</html>
The init method (which populates some data folders on the server so they can be served to Angular) uses parameters from the query string, as does the AngularJs app, which also manages its own routing.
I'm relatively new to Razor, but I am familiar with AngularJS. I think part of the problem could be the way .NET manages routing, which may be interfering with how Angular does it. I am aware of this and this SO answer, but they apply to an MVC app, whereas mine is just a website with a lot of .cshtml pages, no Controllers or APIs.
Is there a way to access query strings in both Angular and Razor C#, while maintaining AngularJs routing with "pretty" URLs?
OK, I figured out a solution. I'll post it in case people run into this problem in the future, or in case it's a bad answer and someone can correct it.
I realized that the main difference between the two URLs in my question was the location of the #, which is a fragment identifier. I read about it here, though I should caution that that page is almost 20 years old. Anyway, I found that the fragment part of the URL does NOT get sent to the server, which is why I couldn't parse the query string server-side when it was after the #. I don't know why JavaScript was freaking out when the query was before the #, but I'm willing to believe it was a problem with my code.
The solution was to pass the query string on both sides of the #. Thus, the working URL looks like this:
http://localhost:32289/razorPage.cshtml?param="1234"#/?param="1234"
The query string to the left is what the C# can access, and the one on the right is what AngularJs can access. Additionally, anything after the # works like normal AngularJs routing, so I don't think that was related.
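The split can be sketched in plain JavaScript. This is a hypothetical helper, just to illustrate which half of the URL each side sees; the browser strips everything after the # before sending the request, so only the left-hand query ever reaches Razor:

```javascript
// Given a full URL, show the query string the server receives (before the '#')
// and the one client-side routing sees (inside the fragment).
function splitUrl(url) {
  const hashIndex = url.indexOf('#');
  const serverPart = hashIndex === -1 ? url : url.slice(0, hashIndex);
  const fragment = hashIndex === -1 ? '' : url.slice(hashIndex + 1);
  const queryOf = (s) => {
    const q = s.indexOf('?');
    return q === -1 ? '' : s.slice(q + 1);
  };
  return { serverQuery: queryOf(serverPart), clientQuery: queryOf(fragment) };
}

const parts = splitUrl('http://localhost:32289/razorPage.cshtml?param=1234#/?param=1234');
console.log(parts.serverQuery); // "param=1234" -> what Razor/C# receives
console.log(parts.clientQuery); // "param=1234" -> what AngularJS routing sees
```

With the query only after the #, `serverQuery` comes back empty, which matches the behavior in the question.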
Related
How, using MVC C# (without JS or jQuery), can I send a user to /home/stasis, have it load with a loader image (already implemented using CSS), and then send them on to the final URL (which has a really long load time, so users end up clicking multiple times, not helping themselves)?
The problem is that JS and jQuery won't work here, as this also needs to run in an in-app webview (which supports neither). So I go to /home/index, click a link that takes me to /home/stasis, which loads and then automatically begins loading the final URL, say google.com for example.
Without javascript, we have to hope that the browser and server will do the right thing:
The browser will display the entity content when the server returns a 307 redirect.
The server will not return a partial entity while the long-running request is pending. In other words, the long-running request should return all of its entity data at the very end of the request.
The browser won't clear the screen until the first bytes of the next entity have arrived.
Assuming the browser and server behave like this, MVC doesn't offer an easy way to do it. You need to:
Create a new class derived from ActionResult.
In your ActionResult's ExecuteResult() method, write output to ControllerContext.HttpContext.Response. Set the response code to 307, set the RedirectLocation, and write whatever content you want to display to the OutputStream.
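The response those two steps produce can be sketched language-neutrally. This is plain JavaScript standing in for the custom ActionResult, just to show the wire-level shape: status 307, a Location header, and an HTML body the browser can (we hope) display while it follows the redirect:

```javascript
// Sketch only: not ASP.NET, just the shape the custom ActionResult's
// ExecuteResult() would write to the Response.
function buildStasisResponse(finalUrl, loaderHtml) {
  return {
    status: 307,                 // Temporary Redirect; method and body are preserved
    headers: {
      'Location': finalUrl,      // where the browser goes next
      'Content-Type': 'text/html',
    },
    body: loaderHtml,            // the loader page shown until the target responds
  };
}

const resp = buildStasisResponse(
  '/home/final-report',          // hypothetical slow target URL
  '<html><body><div class="loader"></div></body></html>');
console.log(resp.status); // 307
```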
I have an ASP.NET controller with an action that simply returns the result of a WebClient call as a ContentResult.
In this case, CNN is my testbed: my WebClient downloads CNN's page as a string, and the action returns it as an ActionResult/ContentResult.
I'm calling a RenderPartial on this action, to get CNN to display "below" and to the right of my current content, in its own box.
The problem I'm running into, is when you click a link on CNN, it redirects to a "relative" url sometimes, and that relative url, doesn't exist in my localhost, and won't exist on my web server, thus the link fails.
When it does redirect to an absolute url, that's also a problem, as the resulting page "replaces" my .NET shell.
What I need it to do, is take any URLs that are inside that ContentResult I returned, and if any of those URLs are clicked, pass them back to my .NET application, to be downloaded by the WebClient, and rendered back in that shell.
I'm aware I could use an IFRAME to do this instead of WebClient, but an IFRAME is impossible in my case: it's restricted by the public API we're consuming, it runs into flaws in the framework that API services, and it's blocked on some of our client machines, which we have no control over.
Also, the client machines will be off our network, so I can't use an AJAX load, as that would be blocked as a cross-site request.
One idea that I can think of, is basically to create custom routes, with filters/rules, that look for "CNN" like links, and pass them to my controller, as a parameter; then have my controller render my page, and pass those links to the web client.
This would obviously be a lot of work, and not even really sure where to start with it in the routing engine.
The only other thing I can think of that might work is checking the validity of each URL by attempting to see if it points to a valid link, and if it doesn't, adding CNN's URL prefix to it and checking again. But that seems like a lot of work for a solution that would be a heavy performance hit: checking every link on a page like CNN could be very costly. Also, since I'd be requesting each link twice, that would be a problem wherever POST operations are required, as it'd essentially run each operation twice; and since this is a public API I'm consuming off a public framework, I wouldn't have control of the source code to safeguard against that.
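The link-rewriting idea could be sketched roughly like this: before returning the downloaded HTML as the ContentResult, resolve every href against the source site and point it back through the proxy action. The '/Proxy/Render?url=' route is invented here for illustration, and a regex over href attributes is a crude stand-in for a real HTML parser:

```javascript
// Rewrite every href so it routes back through a hypothetical proxy action,
// resolving relative URLs against the source site's base URL first.
function rewriteLinks(html, baseUrl, proxyPrefix) {
  return html.replace(/href="([^"]+)"/g, (m, href) => {
    const absolute = new URL(href, baseUrl).toString(); // resolves "/world/x" etc.
    return 'href="' + proxyPrefix + encodeURIComponent(absolute) + '"';
  });
}

const out = rewriteLinks('<a href="/world/news.html">News</a>',
  'http://www.cnn.com/', '/Proxy/Render?url=');
console.log(out);
// <a href="/Proxy/Render?url=http%3A%2F%2Fwww.cnn.com%2Fworld%2Fnews.html">News</a>
```

Clicking the rewritten link then lands on the proxy action again, which downloads the target with the WebClient and renders it back inside the shell.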
Any other suggestions?
Is there any way to accomplish what I'm trying to do easily?
I have a problem here. Assume there's a basic calculator implemented in JavaScript hosted on a website (I googled for an example and found this one: http://www.unitsconverter.net/calculator/). What I want to do is make a program that opens this website, enters some values, and gets the result back. So, with that website's calculator, the program should:
- open the website
- enters an operand
- enters an operation
- enters an operand
- retrieve the result
Note: things should be done without the need to show anything to the user ( the browser for example ).
I did some searching and found HttpWebRequest and HttpWebResponse. But I think those are for posting data to the server, which means the page I'm sending data to would have to be PHP, ASPX, or JSP. The calculator is client-side JavaScript, so I think they're useless to me in this case.
Any help?
Update:
I have managed to develop the web bot using the WebBrowser control (found in System.Windows.Forms).
Here's a sample of the code:
webBrowser1.Navigate("LinkOfTheSiteYouWant"); // loads the page at the given URL. You can also set webBrowser1.ScriptErrorsSuppressed = true; to suppress script error dialogs
webBrowser1.Document.GetElementById("ElementId").SetAttribute("HTMLattrbute", "valueToBeSet");
Those are the main methods I have used to do what I wanted to.
I have found this video useful: http://www.youtube.com/watch?v=5P2KvFN_aLY
I guess you could use something like WatiN to pipe the user's input/output from your app to the website and return the results, but as another commenter pointed out, the value of this sort of thing when you could just write your own calculator fairly escapes me.
You'll need a JavaScript interpreter (engine) to parse all the JavaScript code on the page.
https://www.google.com/search?q=c%23+javascript+engine
What you're looking for is something more akin to a web service. The page you provided doesn't seem like it accepts any data in an HTTP POST and doesn't have any meaningful information in the source that you could scrape. If for example you wanted to programmatically make searches for eBay auctions, you could figure out how to correctly post data to it eg:
http://www.ebay.com/sch/i.html?_nkw=http+for+dummies&_sacat=267&_odkw=http+for+dummies&_osacat=0
and then look through the HTTP response for the information you're looking for. You'd probably need to create a regular expression to match the markup. For example, if you wanted to know how many results there were, you'd search the HTTP response for this bit of markup:
<div class="alt w"><div class="cnt">Your search returned <b>0 items.</b></div></div>
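A quick sketch of that scrape in JavaScript (the markup is copied from the snippet above; regexes are brittle against markup changes, so treat this as illustrative only):

```javascript
// Extract the result count from the response markup shown above.
const html = '<div class="alt w"><div class="cnt">Your search returned <b>0 items.</b></div></div>';
const match = html.match(/Your search returned <b>(\d+) items?\.<\/b>/);
const count = match ? parseInt(match[1], 10) : null;
console.log(count); // 0
```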
As far as clientside/javascript stuff, you just plain aren't going to be able to do anything like what you're going for.
It is a matter of API: "Does the remote website expose any API for the required functionality?".
Web resources that expose an interactive API are called web services. There are tons of examples (Google Maps, for instance).
You can access the API -depending on the Terms & Conditions of the service- through a client. The nature of the client depends on the kind of web service you are accessing.
A SOAP based service is based on SOAP protocol.
A REST based service is based on REST principles.
So, if there is an accessible web service called "Calculator", then you can access the service and, for instance, invoke its sum method.
In your example, the calculator is a JavaScript implementation, so it is not a web service and cannot be accessed via HTTP requests. However, its implementation is still accessible: it is the JavaScript file where the calculator is implemented. You can always include that file in your website and access its functions via JavaScript (always mind the terms and conditions!).
A very common example is the jQuery library stored in Google Libraries.
I have a subdomain, http://trade.businessbazaar.in. I am dynamically creating URLs from the database, something like http://trade.businessbazaar.in/mycompany. To display details, I have an index.aspx file there, thinking that on every request the index.aspx page will load and display data accordingly. There is also a master page on index.aspx, from which I capture the text mycompany and query the database to fetch the result. But nothing seems to work.
A genuine link is http://trade.businessbazaar.in/Symparlife, but it's unable to load index.aspx. I need a clean approach without any third-party DLLs or rewriters: just a few lines to push into web.config and start working. That is, the URL should stay the same but the index page should get loaded...
In short, i want to say
I need the StackOverflow type clean url mechanism to fetch pages
Thanks in Advance
You can handle the Application_BeginRequest event in Global.asax and add custom code to hand the request to index.aspx, converting the parts of the URL into query string arguments. You should use Server.Transfer (not a redirect) to keep the URL in the browser.
I'd recommend upgrading to 4.0 and using the routing engine, though. You should check whether the standard routing is available as a download for ASP.NET 3.5. I am sure your code will get messy very soon otherwise. Been there, done that.
As @Mike Miller mentions in the comments, the Routing engine ships with ASP.NET 3.5. You can check the documentation here - http://msdn.microsoft.com/en-us/library/system.web.routing(v=vs.90).aspx
Here is a tutorial on how to use it with Web Forms - http://weblogs.asp.net/scottgu/archive/2009/10/13/url-routing-with-asp-net-4-web-forms-vs-2010-and-net-4-0-series.aspx
For your case the code would be something like:
routes.MapPageRoute("company-index", "{company}", "~/index.aspx");
And in index.aspx you can access the route value for company like this:
string company = (string)Page.RouteData.Values["company"];
Keep in mind that you'd better add something in the URL before your actual argument (the company name). If you don't, you will have problems later on, because you may want to add a URL like "/Login", and then you will have to validate that users can't create a company named "Login". Note how Stack Overflow has "/questions/" before the actual question info in the URL.
I am trying to use domain masking to simulate multi-tenant access to my application. The plan right now is to read the subdomain portion of the domain, e.g. demo.mydomain.com, and load settings from the DB using that name.
The issue I'm having is that request.url is getting the request url - NOT the url in the browser.
So if I have http://demo.mydomain.com forwarding to http://www.mydomain.com/controllername with masking, request.url grabs the latter, simply because of how masking works, I assume - by putting the masked site inside of a frame.
Is it even possible to read the url in the browsers address bar? Thanks.
You probably can get the url you want, but at the client side...
So, do this:
Get the browser's url by using a javascript call, like window.location.href.
Post that url to the server-side.
Cons:
This is a JavaScript-dependent solution; it will not work with JavaScript disabled.
This is ugly as hell.
Pros:
You probably do not have any other option.
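A minimal sketch of steps 1-2 above. The helper name is invented; in the page you would call it with window.top.location.hostname (which may throw if the masking frame is cross-origin) and then POST the result back to the server:

```javascript
// Hypothetical helper: extract the tenant name from the hostname shown in
// the browser's address bar, e.g. "demo.mydomain.com" -> "demo".
function getTenant(hostname) {
  const parts = hostname.split('.');
  // a bare "mydomain.com" has no subdomain, hence no tenant
  return parts.length > 2 ? parts[0] : null;
}

console.log(getTenant('demo.mydomain.com')); // "demo"
// Then send it back to the server, e.g. via an XMLHttpRequest POST or a
// hidden form field, so the server-side code learns the masked URL.
```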