We have a Sitecore website and we need to know the item that provided the link that brought you to page X.
Example:
You're on page A and click a link provided by item X that will lead you to page B.
On page B we need to be able to determine that item X referred you, and thus access the item and its properties.
It could go through session, the Sitecore context, I don't know what, and we don't even need the entire item itself; just the ID would do.
Anyone know how to accomplish this?
From the discussion in the comments, you have a web-architecture problem that isn't really Sitecore-specific.
You have a back end which consumes several data items to produce some HTML which is sent to the client. Each of those data items may produce links in the HTML. They may produce identical links. Only one of the items is considered the source of the HTML page.
You want to know which of those items produced the link. Your only option is to find a way of identifying the links produced. To do this you will have to add some form of tagging information to the URL produced (such as a querystring) that can be interpreted when the request for the URL is processed. The items themselves don't exist in the client.
The problem would be exactly the same if your links were produced by a database query. If you wanted to know which record produced the link you'd have to add an identifier to the link.
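For instance, a minimal sketch of what that tagging could look like (the helper and the "src" parameter name are my own assumptions, not a Sitecore API):

    using System;

    public static class LinkTagger
    {
        // Append the generating item's ID to a URL so the receiving page can
        // tell which item produced the link. The "src" parameter name and this
        // helper are assumptions, not a Sitecore API.
        public static string Tag(string url, Guid sourceItemId)
        {
            string separator = url.Contains("?") ? "&" : "?";
            return url + separator + "src=" + sourceItemId.ToString("N");
        }
    }

    // On page B (hypothetical usage):
    // Guid sourceId = Guid.Parse(Request.QueryString["src"]);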
You could probably devise a system that would allow you to identify the item most of the time (i.e. when the link clicked was unique to that page), but it would involve either caching lots of data in a session (a list of links produced and the items that produced them) or recreating the request for the referring URL. Either sounds like a lot of hassle for an imperfect solution that could feasibly slow your server down a fair amount.
James is correct... your original parameters are basically impossible to satisfy.
With some hacking and replacing of the standard Sitecore providers though, you could track these. But it would be far easier to use a querystring ID of some sort.
On our system, we have 3rd-party advertising links... they have client-side JavaScript which actually submits the request to a local page and then gets redirected to the target URL. So when you hover over the link, the status bar shows you "http://whatever.com"... it appears the link is going to whatever.com, but you actually go to http://ourserver/redirect.aspx first so we can track that link, and then get a Response.Redirect().
You could do something similar by providing your own LinkManager and including the generating item ID in the tracking URL, then redirecting to the actual page/item the user wants.
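A rough sketch of what such a tracking page might look like, assuming hypothetical "item" and "target" querystring parameters:

    using System;

    // Redirect.aspx.cs (sketch) -- links are rendered as
    // Redirect.aspx?item=<generating item ID>&target=<real URL>
    public partial class Redirect : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            string itemId = Request.QueryString["item"];   // the generating item's ID
            string target = Request.QueryString["target"]; // where the user really wants to go

            // Remember which item produced the click so the target page can look it up.
            Session["ReferringItemId"] = itemId;

            // In a real version, validate "target" against your own site so you
            // don't create an open redirect.
            Response.Redirect(target);
        }
    }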
However... this seems rather convoluted and error-prone, and I would not recommend it.
I need to create a "speed bump" that issues a warning whenever a user clicks on a link that would direct them to a different website (not on the domain). Is there any way to create a custom Orchard workflow activity that will activate whenever a link on the website is clicked? I'm having trouble getting C# to fire an event whenever a link (or anchor tag) on the page gets clicked. I can't just add an onServerClick event to every anchor tag, or add an event handler to anchor tags with specific IDs, because I need it to fire on all anchor tags, many of which are dynamically assigned an ID when created.
Another option I was toying with would be to create a custom workflow task that searches any content item for links and then adds a speed bump to any link that is determined to lead to an external URL. Is it possible to use C# to search the contents of any content item upon creation/publish for anchor tags and then alter the tag somehow to include a speed bump?
As a side note, I also need to be able to whitelist URLs so a third party can't use the speed bump to direct the user to a malicious website.
I've been stumped on this for quite some time; any help would be greatly appreciated.
One way to do this is to add a bit of client-side script to intercept the A tags' click events and handle them according to the logic you want to implement. The advantages are performance and ease of implementation. Very, very few people disable JavaScript, and those users who do can presumably read a domain name in the address bar, so there are no real downsides.
Another way, if you don't want to use JavaScript, is to write a server-side filter that parses the response being output, finds all A tags, and replaces their URLs on the fly with the URL of a special controller, with the actual URL passed as a querystring parameter. The drawbacks of this method are that it will be a significant drag on server performance and that it will be hard to write.
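A rough sketch of that filter idea, assuming a hypothetical /SpeedBump controller (a real version would have to buffer across Write calls and use a proper HTML parser rather than a regex):

    using System.IO;
    using System.Text;
    using System.Text.RegularExpressions;
    using System.Web;

    // Response filter that rewrites absolute hrefs to go through /SpeedBump.
    public class SpeedBumpFilter : MemoryStream
    {
        private readonly Stream _inner;
        public SpeedBumpFilter(Stream inner) { _inner = inner; }

        public override void Write(byte[] buffer, int offset, int count)
        {
            string html = Encoding.UTF8.GetString(buffer, offset, count);

            // Send external links through the warning page, carrying the real
            // destination as a querystring parameter.
            html = Regex.Replace(html, "href=\"(https?://[^\"]+)\"",
                m => "href=\"/SpeedBump?url=" +
                     HttpUtility.UrlEncode(m.Groups[1].Value) + "\"");

            byte[] rewritten = Encoding.UTF8.GetBytes(html);
            _inner.Write(rewritten, 0, rewritten.Length);
        }
    }

    // Hooked up per request, e.g. from an HttpModule or in Page_Load:
    // Response.Filter = new SpeedBumpFilter(Response.Filter);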
But the best way to solve the issue, by far, for you and your users, is to convince your legal department that this is an extremely bad idea and that there is, in reality, no legal issue here (but I may be wrong about this: not a lawyer (this is not legal advice)).
I'm trying to save time at work on a lot of tedious copy/paste tasks we have.
We have a proprietary CRM (with proper HTML IDs, etc. for accessing elements) and I'd like to copy those values from the CRM to textboxes on other web pages (outside the CRM, so sites like Twitter, Facebook, Google, etc.).
I'm aware browsers limit this for security and I'm open to anything, it can be a C#/C++ application, Adobe AIR, etc. We only use Firefox at work so even an extension would work. (We do have GreaseMonkey installed so if that's usable too, sweet).
So, any ideas on how to copy values from one web page to another? Ideally, I'm looking to click a button and have it auto-populate fields. If that button has to launch the web pages that need to be copied over to, that's fine.
Example: Copy a customer's username from our CRM and paste it into Facebook's username field when creating a new account.
UPDATE: To answer a user below, the HTML elements on each domain have specific HTML IDs. The data won't need to be manipulated or cleaned up; it's just a simple copy from ourCRM.com to facebook.com / twitter.com.
Ruby Mechanize is a good bet for scraping the data. Then you can store it and post it however you please.
First, I'd suggest that you more clearly define exactly what it is you're looking to do. I read this as: you're trying to take some unstructured data from Point A and copy it to Point B. Do the names of these fields remain constant every time you do the operation? Do you need to simply pull all textbox elements from the page and copy them over? Do you do some sort of filtering of this data before writing it over?
Once you've got a clear idea of the requirements, if you go the C# route, I'd use something like SimpleBrowser. Judging by the example on their GitHub page, you could give it the URL of the page you're looking to copy, then name each of the fields you're looking to obtain the value of, perhaps store these in an IDictionary, then open a new URL and copy those values back into the page (and submit the form).
Alternatively, if you don't know the names of the fields, perhaps there's a provided function in that or a similar project that will allow you to simply enumerate all the text fields on the page and retrieve the values for all of them. Then you'd simply apply some logic of your own to filter those options down to whatever is on the destination form.
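If SimpleBrowser doesn't expose that directly, here's a sketch of the same idea using HtmlAgilityPack instead (a different library, but the approach is identical; the CRM URL is made up):

    using System.Collections.Generic;
    using HtmlAgilityPack;

    var web = new HtmlWeb();
    HtmlDocument doc = web.Load("http://ourCRM.com/customer/123"); // hypothetical page

    // Collect every text input's ID and current value.
    var values = new Dictionary<string, string>();
    var inputs = doc.DocumentNode.SelectNodes("//input[@type='text']");
    if (inputs != null)
    {
        foreach (HtmlNode input in inputs)
        {
            values[input.GetAttributeValue("id", "")] =
                input.GetAttributeValue("value", "");
        }
    }
    // "values" now maps HTML IDs to field values, ready to be filtered and
    // written into the destination form.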
So we thought of an easier way to do this (in case anyone else runs into this issue).
1) From our CRM, we added a "Sign up for Facebook" button
2) The button opens a new window with GET variables in the URL
3) Use a GreaseMonkey script to read those GET variables and fill in textbox values
4) SUCCESS!
Simple, took about 10 minutes to get working. Thanks for your suggestions.
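For anyone curious, the CRM side might look something like this (the signup URL, parameter names, and "customer" object are all illustrative):

    using System;
    using System.Web;

    // In the button's click handler (sketch).
    protected void SignUpButton_Click(object sender, EventArgs e)
    {
        // Build the signup URL with the values to copy as GET variables.
        string url = "https://www.facebook.com/signup"
            + "?username=" + HttpUtility.UrlEncode(customer.Username)
            + "&email=" + HttpUtility.UrlEncode(customer.Email);

        // Open the signup page in a new window; the GreaseMonkey script on the
        // other side reads the GET variables and fills in the fields.
        ClientScript.RegisterStartupScript(GetType(), "signup",
            "window.open('" + url + "');", true);
    }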
Our site consists of 3 main pages we call "Start.aspx", each with a content iframe inside of it where the user does nearly all of the site interactions.
Recently though, I've had to implement functionality that will jump between Start.aspx pages in different products and automatically change the content iframe to a specified page.
The actual functionality works just fine, but the issue we're having is that the full querystring is exposed. Because we load all pages in the content iframe, the page URL remains at "Product/Start.aspx" during regular site usage.
However, this new functionality is passing a querystring to Start.aspx (which has appropriate parsers to load the requested page in the content iframe), and we need that URL to remain as "Start.aspx".
So far, I've looked into URL rewriting, which was throwing errors because the landing page for each product is "[Product]/Start.aspx". I've looked at a different URL rewriting solution, as well as ScottGu's blog post on routing.
The issue is that these solutions seem to be geared toward simplifying navigation, such as taking "Blogpost.aspx?Year=2013&Month=07&Day=15" and turning it into "Blogpost.aspx/2013/07/15", which really isn't what we're going for. We're not trying to simplify navigation via the URL; we're really just trying to completely hide our querystrings.
What we're going for is turning "[Product]/Start.aspx?frame=Company.aspx?id=1570" into "[Product]/Start.aspx" once the content iframe has what it needs from the initial querystring. We don't need to account for every single page; we just need that to be the overarching rule. 90% of the time it won't be an issue, as most of the work being done doesn't jump from product to product without the user just switching products (which is done in a fashion that specifically uses Response.Redirect("[Product]/Start.aspx")).
Once the content iframe has loaded from the querystring parameters, we don't need them anymore for anything. The rest of the functionality runs through the iframe without any issue.
Am I overthinking this, or am I asking for something that's not really feasible?
As far as literally removing all of the querystring characters and still being able to pass the querystring values to another page, I do not think that is possible, unless you do it in a Session variable or something like that.
If you're simply worried about sensitive data being displayed in plain text in the query string, there is the option of encrypting the query string:
http://www.codeproject.com/Articles/33350/Encrypting-Query-Strings
The query string will still show but it will be "Product/Start.aspx?e0ayfefae0y0someencryptedmess108yfe0ayf0a". The page that receives the query string would decrypt it. So the functionality of the query string is there, but the values are not known to the end user.
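A minimal sketch of that idea (this is not the article's code, and key/IV management is left out entirely):

    using System;
    using System.Security.Cryptography;
    using System.Text;
    using System.Web;

    // Encrypt a querystring value; the receiving page mirrors this with
    // aes.CreateDecryptor(key, iv) to recover the plain text.
    static string EncryptForQueryString(string plain, byte[] key, byte[] iv)
    {
        using (var aes = Aes.Create())
        using (var encryptor = aes.CreateEncryptor(key, iv))
        {
            byte[] bytes = Encoding.UTF8.GetBytes(plain);
            byte[] cipher = encryptor.TransformFinalBlock(bytes, 0, bytes.Length);
            return HttpServerUtility.UrlTokenEncode(cipher); // URL-safe encoding
        }
    }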
Since you've tagged this as an ASP.NET question, I'd say the way to go is to keep navigation data in your Session variables.
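A sketch of how Start.aspx could stash the frame target in Session and then redirect to the clean URL (the control and key names are made up):

    using System;

    public partial class Start : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            string frame = Request.QueryString["frame"];
            if (!string.IsNullOrEmpty(frame))
            {
                // Stash the target, then come back without any querystring.
                Session["FrameTarget"] = frame;
                Response.Redirect("Start.aspx");
                return;
            }

            // Normal load: feed the iframe from Session instead of the URL.
            string target = Session["FrameTarget"] as string ?? "Default.aspx";
            contentFrame.Attributes["src"] = target; // iframe with runat="server"
        }
    }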
Can you use a POST instead of a GET? That way, the data is in the form, rather than the Query String.
As a side note, hiding the parameters as a way of making the URL look nicer and be bookmark-able is fine. If you're doing it for any kind of security reasons, it's very shallow security. It's trivial for a user to see what's being passed in both the form and on the query string and to change and repost those. Security needs to be handled primarily on the server side.
I am working on an ASP.NET/MVC4 app and I fetch data continuously and my problem is related to caching.
The problem is that when I click on a particular link in my application it works fine, but sometimes it automatically redirects to the INDEX page, which is the default page.
I searched around about this problem and found that Firefox maintains a cache of every link. Sometimes something weird happens: a particular link automatically redirects to the INDEX page (301 Moved Permanently), and Firefox stores that redirect in the cache, so that now every time I click on that link it always redirects me to the INDEX page that's been cached.
So now I have to clear the cache in my browser every time I face this problem.
How can I make it not automatically redirect to the cached INDEX page?
You should really expand on what exactly is happening at that particular link you mention, because it should not 301 redirect unless you're telling it to.
Also, you say you "fetch data continuously." What does this mean to us? Why is this important to know? Does this change the link or the data? Are you 404ing the older data or something? That could possibly explain why you 301 back to your index.
Now, with the limited information we have been given by you... if you want to prevent Firefox from caching your URLs/redirects, simply make your URL have a querystring that updates with each request, like a timestamp.
For example: http://example.com/return-data.asp?timestamp=1350668920
Then each time you fetch data, update the page's link accordingly:
For example: http://example.com/return-data.asp?timestamp=1350669084
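Generating that value on the server is a one-liner (sketch; any value that changes per request will do):

    // Unix-style timestamp, matching the examples above.
    long timestamp = (long)(DateTime.UtcNow
        - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalSeconds;
    string url = "http://example.com/return-data.asp?timestamp=" + timestamp;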
My scenario is this: the user selects the list of reports they wish to print, and once they select and click on a button, I open up another page with the selected reports ready for printing. I am using a session variable to pass reports from one page to another.
The first time you try it, it works fine; the second time, it opens the report window with the previously selected reports. I have to refresh the page to make sure it loads the latest selections.
Is there a way to get the latest value from the session every time you use it? Or is there a better way to solve this problem? Open for suggestions...
Thanks
C#, ASP.NET, IE 7 / IE 8
After doing some more checking, maybe COMET could help.
The idea is that you can have code in your second page which will keep checking the server for updated values every few seconds and if it finds updated values it will refresh itself.
There are 2 very good links explaining the implementation:
Scalable COMET Combined with ASP.NET
Scalable COMET Combined with ASP.NET - Part 2
The first link explains what COMET is and how it ties in with ASP.NET, the second link has an example using a chat room. However, I'm sure the code querying for updates will be pretty generic and can be applied to your scenario.
I have never implemented COMET yet so I'm not sure how complex it is or if it is easy to implement into your solution.
Maybe someone developing the SO application is able to resolve this issue for you. SO uses a real-time feature for the notifications on a page, i.e. you are in the middle of writing an answer and a message pops up in your client letting you know someone else has added an answer, and to click "here" to refresh.
The proper fix is to set the caching directives on the HTTP response correctly, so that the cached response is not reused without validation from the server.
When you fail to specify the cache lifetime, the client has to "guess" how long the response is good for, and the browser's guess probably isn't what you want. See http://blogs.msdn.com/b/ie/archive/2010/07/14/caching-improvements-in-internet-explorer-9.aspx
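In ASP.NET that looks something like this (a sketch; pick whichever policy fits your pages):

    // Tell the browser not to reuse this response without asking the server.
    Response.Cache.SetCacheability(HttpCacheability.NoCache);
    Response.Cache.SetNoStore(); // strongest option: keep no copy at all

    // Or, allow caching but force revalidation on every use:
    // Response.Cache.SetCacheability(HttpCacheability.Private);
    // Response.Cache.SetMaxAge(TimeSpan.Zero);
    // Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches);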
It's better to use URL parameters, so you have a view of the values of the parameters.