Google Experiments on Page that Redirects - c#

I have a site with a multi-page offer form. The offer information is stored in a session, and I keep track of which step the customer has completed. If the customer has not completed all the steps prior to the page they are on, they are redirected back to the start of the process. In this way I prevent users from reaching step 3 by simply typing in its URL. This is done because the information on every step after step 1 depends on valid information from the previous steps.
The problem is that when I set up my content experiment through Google Analytics, it cannot validate my original or variation pages: when it hits those pages (which are step 4), the server recognizes that it is not allowed on that page and returns step 1 instead.
I attempted to proceed anyway, but when I arrive at step 4 I am not pushed to my variation page (I have it set so that everyone who arrives at step 4 should go to the variation). I'm assuming the redirect is the problem.
Any ideas?

The perceived problem was that GA could not ping my page because of the redirect I have on it.
The actual problem was that my GA Experiments code was not the very first thing after my opening HEAD tag. GA Experiments only scans the first 256 characters of a page, so if the beginning of the experiments code is not within that range, it won't work.
Also, I had my GA code in a .js file and was linking to it on the page for cleanliness. This also does not work: GA Experiments scans the first 256 characters of the page itself and does not follow links. I needed to have the exact code, with comments, that GA gave me inline on the page for this to work.

Related

How to detect that the URLs navigate to the same webpage

I'm working with the WebBrowser control, trying to build my own browser.
Something I'm having trouble with is the history part.
When a document completes navigating, I search my database: if its URL doesn't exist I add it to the history, otherwise I just increase the "counter" of that page in the database.
The problem is that some pages give me a different URL each time even though it's the same page, such as google.com. The first time I navigate to it I get (for example): https://www.google.co.il/?gws_rd=cr&ei=eBP-UtPCOMi84ASukoCAAw
and the second time:
https://www.google.co.il/?gws_rd=cr&ei=rhP-UpW6CYG54ATAqIHIDg
Is there a way to identify that both these URLs lead to the same page?
I'm trying to do this because when I load the history into my application, many URLs that lead to the same page are loaded.
Any help is appreciated, thanks in advance.
You can use the Uri object and ask for its AbsolutePath property, which excludes the query string.
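For example, a minimal sketch, assuming two URLs should count as the same page exactly when scheme, host, and path match and the query string is ignored:

    using System;

    class UrlComparer
    {
        // Compare scheme, host and path only, ignoring the query string
        // (e.g. Google's ever-changing ei= tracking parameter).
        public static bool SamePage(string first, string second)
        {
            return Uri.Compare(new Uri(first), new Uri(second),
                UriComponents.SchemeAndServer | UriComponents.Path,
                UriFormat.SafeUnescaped,
                StringComparison.OrdinalIgnoreCase) == 0;
        }

        static void Main()
        {
            Console.WriteLine(SamePage(
                "https://www.google.co.il/?gws_rd=cr&ei=eBP-UtPCOMi84ASukoCAAw",
                "https://www.google.co.il/?gws_rd=cr&ei=rhP-UpW6CYG54ATAqIHIDg")); // True
        }
    }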
I personally would expect my browser to keep history by URL and not by content (which is what you are actually trying to do, as far as I understand). But if you want to avoid these multiple entries, you could calculate a hash of the content received from each page and increase your counter when the hash matches.
The problem is that you cannot know what the server will do with a given URL. It might serve the same page today and a different one tomorrow. I also wouldn't just go for the URL without the parameters, because on other pages a parameter might make a really important difference.
Another note: in case you hash the content, you might want to exclude things like 404 pages, which can occur under different URLs and shouldn't be grouped under the same hash.
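A minimal sketch of the hashing idea, assuming you can get at the document HTML (e.g. from the WebBrowser control's DocumentText property):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static class PageHasher
    {
        // Hash the downloaded HTML so identical content reached through
        // different URLs collapses into a single history entry.
        public static string ContentHash(string html)
        {
            using (var sha = SHA256.Create())
            {
                byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(html));
                return Convert.ToBase64String(digest);
            }
        }
    }

Keying the history counter on ContentHash(webBrowser.DocumentText) instead of, or alongside, the URL would merge the two Google entries above, subject to the caveats already mentioned.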

Caching issue with Firefox

I am working on an ASP.NET MVC 4 app that fetches data continuously, and my problem is related to caching.
The problem is that when I click on a particular link in my application it normally works fine, but sometimes it automatically redirects to the INDEX page, which is the default page.
I searched around and found that Firefox caches every link. Sometimes something weird happens: a particular link automatically redirects to the INDEX page (301 Moved Permanently), and Firefox stores that redirect in its cache, so that every subsequent click on that link takes me straight to the cached INDEX page.
So now I have to clear the cache in my browser every time I face this problem.
How can I make it not automatically redirect to the cached INDEX page?
You should really expand on what exactly is happening at that particular link you mention, because it should not 301 redirect unless you're telling it to.
You also say "I fetch data continuously." What does this mean for us? Why is it important to know? Does it change the link or the data? Are you 404ing the older data or something? That could explain why you 301 back to your index.
Now, with the limited information you have given us: if you want to prevent Firefox from caching your URLs/redirects, simply give your URL a query string that updates with each request, like a timestamp.
For example: http://example.com/return-data.asp?timestamp=1350668920
Then each time you fetch data, update the page's link accordingly:
For example: http://example.com/return-data.asp?timestamp=1350669084
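In C#, building such a link could look like this sketch (return-data.asp and example.com come from the example URLs above; any always-changing value works in place of the Unix timestamp):

    // Append a per-request timestamp so the browser treats every fetch as a
    // new resource and never replays a cached (and possibly wrong) redirect.
    // DateTimeOffset.ToUnixTimeSeconds needs .NET 4.6+; on older frameworks
    // DateTime.UtcNow.Ticks serves the same purpose.
    string url = "http://example.com/return-data.asp?timestamp="
                 + DateTimeOffset.UtcNow.ToUnixTimeSeconds();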

IE shows a previously cached version of my page

My scenario is this: the user selects the list of reports they wish to print, and once they click a button, I open another page with the selected reports ready for printing. I am using a session variable to pass the reports from one page to the other.
The first time you try it, it works fine; the second time, it opens the report window with the previously selected reports. I have to refresh the page to make sure it loads the latest selections.
Is there a way to get the latest value from the session every time you use it? Or is there a better way to solve this problem? Open to suggestions...
Thanks
C#, ASP.NET, IE 7/IE 8
After doing some more checking, maybe looking into COMET will help.
The idea is that you can have code in your second page which will keep checking the server for updated values every few seconds and if it finds updated values it will refresh itself.
There are two very good links explaining the implementation:
Scalable COMET Combined with ASP.NET
Scalable COMET Combined with ASP.NET - Part 2
The first link explains what COMET is and how it ties in with ASP.NET; the second has an example using a chat room. However, I'm sure the code querying for updates will be pretty generic and can be applied to your scenario.
I have never implemented COMET myself, so I'm not sure how complex it is or how easily it would fit into your solution.
Maybe someone developing the SO application can resolve this issue for you. SO uses a real-time feature for notifications on a page, e.g. you are in the middle of writing an answer and a message pops up in your client letting you know someone else has added an answer, with a "here" link to refresh.
The proper fix is to set the caching directives on the HTTP response correctly, so that the cached response is not reused without validation from the server.
When you fail to specify the cache lifetime, the client has to "guess" how long the response is good for, and the browser's guess probably isn't what you want. See http://blogs.msdn.com/b/ie/archive/2010/07/14/caching-improvements-in-internet-explorer-9.aspx
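In ASP.NET WebForms that could look like the sketch below, placed in the report page, e.g. in Page_Load (MVC offers the [OutputCache] attribute for the same purpose):

    // Tell the browser and proxies not to store this response, so IE asks
    // the server again instead of replaying the previous report selection.
    Response.Cache.SetCacheability(HttpCacheability.NoCache);
    Response.Cache.SetNoStore();
    Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));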
It's better to use URL parameters, so the values being passed are visible in the URL and each distinct selection produces a distinct request.
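A sketch of that idea (PrintReports.aspx and selectedReportIds are made-up names; HttpUtility lives in System.Web):

    // Pass the selected report ids on the query string instead of in session,
    // so a new selection yields a new URL that cannot collide with a cached one.
    var selectedReportIds = new List<int> { 3, 7, 12 };  // hypothetical selection
    string ids = string.Join(",", selectedReportIds);    // "3,7,12"
    string printUrl = "PrintReports.aspx?reports=" + HttpUtility.UrlEncode(ids);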

What is the proper way to handle several automatic redirects?

I have a website that basically allows customers to build a cart with an item that can be configured. A user picks an item, is prompted to pick the first option they want, is then sent to the second step where they pick their second option, and so on.
The number of steps and the number of options are variable, as they are defined by the client. Usually an item only has 2-3 steps with 5-10 options each. However, to make things faster for the customer, if there is only one option available for a given step, it is chosen automatically and the user is sent to the next step.
A client decided to set up an item with 10+ steps, each with only one option. This results in the entire process choosing everything automatically. Firefox doesn't like this and gives the error:
"Firefox has detected that the server is redirecting the request for this address in a way that will never complete."
(I haven't checked IE or Chrome, although they probably give similar errors.)
What's the best way to fix this?
Right now the process is basically:
1. User picks an item
2. User picks an option if there is more than one available; otherwise the website performs step 3 itself
3. POST to add the option to the cart
4. Redirect to Page.aspx?step=#
5. Repeat steps 2-4 as many times as necessary
Is there any change I can make to the code or page so that FireFox doesn't think I'm in an endless loop?
I am surprised that you get an endless-redirect error if # is different each time, but either way this doesn't seem like the best architecture. Basically, when the code decides a step can be completed automatically, it instantly redirects to the same page with the new step number?
Why not have your code do that without redirecting: increment the step number in the server code as needed and show the user the right step directly.
Whatever happens when you POST at each step, I would think you can accomplish it just as easily in code without actually doing a new POST.
I'm guessing something like this would work:
1. Read the step number from the query string into a local variable
2. Load the data from the database, passing in the local step variable
3. If the data contains only one option:
   3.1. Store the option
   3.2. Increment the local step variable
   3.3. Go to 2
4. Load the page with the data from step 2
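As a hedged C# sketch of that loop, where LoadOptions, StoreOption, RenderStep, and itemId are stand-ins for whatever data access and rendering the real page uses:

    // Read the requested step, then auto-advance past every step that has
    // exactly one option, all server-side, with no intermediate redirects.
    int step = int.Parse(Request.QueryString["step"] ?? "1");

    var options = LoadOptions(itemId, step);
    while (options.Count == 1)
    {
        StoreOption(itemId, step, options[0]); // only one choice: pick it
        step++;
        options = LoadOptions(itemId, step);
    }

    RenderStep(step, options); // first step that actually needs user input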
How does your code handle someone skipping steps by entering Page.aspx?step=10 into the address bar while they are on Page.aspx?step=1?

Telling search engine bots to wait

Short story:
My site pre-generates pages based on user-submitted data. Sometimes this cache has to be cleared, and when that happens the regeneration load would kill a supercomputer unless I control how much is generated at once.
The problem:
Now come the search engine bots that hit the site constantly (due to the sheer number of pages, bot crawling is pretty much non-stop). The problem is that they use up all my "generate" slots, and real users are left with a page saying "bla bla, please wait".
Possible solution:
Can I basically return a 503 to the bots without them giving me a negative ranking for having an unstable site?
Or has someone come up with some other solution?
How critical is it that the cache is cleared immediately? If your cache supports it, you could instead mark all the cached pages as 'dirty' and regenerate them when a real user next visits; if a bot visits in the meantime, serve them the stale page.
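A hedged sketch of that idea in C#. The IsBot check is deliberately naive, and the cache entry shape (IsDirty, StaleHtml, Html, Regenerate) is assumed, not from the original post:

    // Naive crawler detection by User-Agent; good enough to divert the
    // major bots away from the expensive regeneration path.
    static bool IsBot(HttpRequest request)
    {
        string ua = request.UserAgent ?? string.Empty;
        return ua.IndexOf("bot", StringComparison.OrdinalIgnoreCase) >= 0
            || ua.IndexOf("crawl", StringComparison.OrdinalIgnoreCase) >= 0;
    }

    // In the request handler:
    if (entry.IsDirty && IsBot(Request))
        Response.Write(entry.StaleHtml);    // bots get the stale page for free
    else if (entry.IsDirty)
        Response.Write(entry.Regenerate()); // real users spend a generate slot
    else
        Response.Write(entry.Html);

This keeps crawlers from consuming generation slots while still serving them valid (if slightly outdated) content, sidestepping the ranking question around 503s entirely.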
