Recently I was asked by a client to log into a legacy site using POST rather than GET (from a 3rd-party site). All of the needed variables are now sent in a POST body instead of a query string.
The problem is that upon receiving the variables, they are stored in Session and the user is then redirected to the correct page within the application (from the login page).
While this works perfectly when the page is called using GET, a POST call loses all of the Session variables after
Response.Redirect("~/SOMEPAGE.aspx", false);
Another odd thing is that the Session ID remains the same, but all of the values are gone.
When using Server.Transfer the session stays intact, but it is lost once Response.Redirect is used. (Changing all of the code is not an option.)
Does anyone know of a way to resolve this, or some sort of workaround that might be used?
Thanks!!!
There are a few reasons this could happen.
You are using Session.Abandon() in your code
You are switching between a secure (https://) and insecure (http://) URL
You have some code in your global.asax that is manipulating Session, or the .Secure or .Path properties of your Response.Cookies
Edit: http://forums.asp.net/t/1670844.aspx
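If the https/http switch is the culprit, the fix is to keep the redirect on the same scheme so the browser keeps presenting the same session cookie. A minimal sketch, assuming a hypothetical POSTed field name and the page from the question:

protected void Page_Load(object sender, EventArgs e)
{
    if (Request.HttpMethod == "POST")
    {
        // Store the POSTed values in Session ("token" is a placeholder name).
        Session["UserToken"] = Request.Form["token"];

        // Resolve the target against the incoming URL so the scheme and host
        // stay the same and the ASP.NET_SessionId cookie travels with the
        // redirect; a cookie marked Secure is never sent over plain http.
        string target = new Uri(Request.Url, ResolveUrl("~/SOMEPAGE.aspx")).ToString();
        Response.Redirect(target, false);
        Context.ApplicationInstance.CompleteRequest();
    }
}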
I have to implement session handling across multiple users and multiple browsers. Each user has a unique token, which I save in HttpContext.Current.Session inside Session_Start() in Global.asax.cs. It works perfectly fine for a single session. However, when I fire requests from two browsers, then while browsing through various pages, sometimes Session_Start() automatically gets called again for the second session and resets the session variable, resulting in a null value.
How should I handle this scenario?
Edit 1:
What are the scenarios in which the session may time out? E.g. switching between HTTP GET/POST, or making Ajax calls?
I also read this link:
Does Session timeout reset on every request
Is this something I should keep in mind? My code has two GET requests and one POST request, and the session variable becomes null in the POST method for the second browser session.
I figured out the reason. The sessionState mode in the web.config file was set to "InProc", whereas it should be "StateServer". When I made that change, it worked like a charm.
The ASP.NET State Service must also be started from services.msc for this to work.
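For reference, a minimal sketch of the web.config entry (it goes under <system.web>; the address shown is the State Service's default, so adjust as needed):

<sessionState mode="StateServer"
              stateConnectionString="tcpip=localhost:42424"
              timeout="20" />

With mode="StateServer" the session data lives outside the worker process, so a second worker process or a process recycle no longer resets it.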
Alright, I'm using form.target to open content in a new window. However, when I do this and then hit the back button in my browser, any history entry resulting from a GET makes another round trip to the server. This is a problem because session variables may have been changed in the interim, so the new GET no longer matches the old one.
I'm using C# and javascript for this web application, if it helps any.
This behavior occurs on IE8, but not on Firefox 10. Is there any way to prevent it in IE?
I solved this problem by adding a semi-random ID as a querystring on the URL.
I also added this to my OnInit event. For some reason explicitly setting the cache settings helps.
Response.AppendHeader("Cache-Control", "private, max-age=600");
Combined, this gave the page uniqueness and enforced browser caching, which prevented the browser from always retrieving the most recent version of the page when it encountered a GET operation in the browsing history.
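Putting the two pieces together, a rough sketch (the query-string parameter name is invented; the 600 seconds matches the header above):

protected override void OnInit(EventArgs e)
{
    base.OnInit(e);
    // Explicitly allow the browser to cache this response privately for up
    // to 10 minutes, so Back replays the cached page instead of re-issuing
    // the GET.
    Response.AppendHeader("Cache-Control", "private, max-age=600");
}

// Append a semi-random ID so each generated URL is unique in the history.
private string BuildUniqueUrl(string baseUrl)
{
    return baseUrl + "?v=" + Guid.NewGuid().ToString("N");
}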
I was just wondering how I can find the previously visited URL for a session, so that after a user does something I can redirect them to that URL.
Is there any standard way to do this? Otherwise I was going to add some overrides to Global.asax and use a session variable to store URL history.
Request.UrlReferrer might be what you want, but you could also think about using AJAX, or passing the URL along as a parameter...
The above answer is totally correct, although in certain cases you cannot pass the URL along the query string, for instance when the URL is masked; in that case the right way is Request.UrlReferrer. The problem with using a session variable is that it might expire and cause unreliable behavior, and if you have more than one session active, or mirrored servers, it won't work at all.
In general it's a poor idea to use session variables in MVC when it can be avoided. The solution I went with was using JavaScript to get the previous URL and passing that up in the view model.
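As a rough MVC sketch of the referrer approach (the action names are placeholders, and note that UrlReferrer comes from the browser's Referer header, so it can be null):

public ActionResult Complete()
{
    // Send the user back where they came from, with a fallback for
    // browsers or proxies that strip the Referer header.
    Uri previous = Request.UrlReferrer;
    string target = (previous != null) ? previous.ToString() : Url.Action("Index");
    return Redirect(target);
}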
Are we "doing it wrong"?
A colleague and I are messing around with an ASP.NET page to act as a "portal" for viewing the results from a diagnostic program on a UniData server. Although we do the odd job of ASP/ASP.NET work, it is not our primary language.
To access this server, we have to use UniObjects, which is an API for authenticating and using the UniData server.
We needed each user visiting the website to authenticate with UniData and get their own session via the UniObjects library, then be able to use it without signing in again (unless the session isn't used within 'x' minutes).
The method we have come up with is as follows:
We have a singleton containing a Hashtable that maps a Windows username to a session object.
If the user goes to our page and their username doesn't exist in the Hashtable, they are redirected to a login page, where the session object is created and added to the Hashtable if authentication succeeds. Otherwise, it grabs the user's existing session object from the Hashtable and uses that for the request (unless it has expired, in which case we remove it and redirect to the login page).
Each session object (which is a wrapper around the UniObjects session) has a "lastUsed" method. We need to clean up users' sessions, since we have license restrictions on users logged into the UniData server, so every time a user is redirected to the sign-in page, it checks whether any sessions have not been used within 'x' minutes, in which case it closes them and removes them from the Hashtable. This is done here so users won't experience any delay from checking all sessions on every request, only at login.
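For concreteness, here is a minimal sketch of the pattern described above; the wrapper type and its members are invented stand-ins for our real UniObjects wrapper, and the lock is there because the map is shared across concurrent requests:

using System;
using System.Collections.Generic;
using System.Linq;

public sealed class UniSessionStore
{
    public static readonly UniSessionStore Instance = new UniSessionStore();

    private readonly Dictionary<string, UniSession> sessions =
        new Dictionary<string, UniSession>();
    private readonly object sync = new object();
    private readonly TimeSpan maxIdle = TimeSpan.FromMinutes(20); // "x"

    // Returns the user's live session, or null so the caller can redirect
    // to the login page.
    public UniSession Get(string windowsUserName)
    {
        lock (sync)
        {
            UniSession s;
            if (sessions.TryGetValue(windowsUserName, out s))
            {
                if (DateTime.UtcNow - s.LastUsed <= maxIdle)
                {
                    s.LastUsed = DateTime.UtcNow;
                    return s;
                }
                s.Close();                          // expired: free the licence
                sessions.Remove(windowsUserName);
            }
            return null;
        }
    }

    // Called from the login page after authentication succeeds; this is
    // also the moment we sweep idle sessions, as described above.
    public void Add(string windowsUserName, UniSession session)
    {
        lock (sync)
        {
            var stale = sessions.Where(kv => DateTime.UtcNow - kv.Value.LastUsed > maxIdle)
                                .Select(kv => kv.Key)
                                .ToList();
            foreach (var key in stale)
            {
                sessions[key].Close();
                sessions.Remove(key);
            }
            sessions[windowsUserName] = session;
        }
    }
}

// Hypothetical wrapper around the UniObjects session.
public class UniSession
{
    public DateTime LastUsed { get; set; }
    public void Close() { /* release the UniData connection here */ }
}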
Something tells me this solution smells, but I don't have enough ASP.NET experience to work out what we should be doing instead. Is there a better method, or is this actually okay?
Since all of your users seem to be authenticated, I would suggest you think about using a different way of managing session state and timeout.
Part of the issue is that if a user just closes the browser without logging out, or simply stops using the application, you have to wait until the session times out before you can kill it off and free up a UniObjects licence.
My suggestion is as follows:
1. Add an invisible IFRAME to your MasterPage template, or to each page in the site if you aren't using MasterPages.
2. That IFRAME loads a KeepAlive.aspx page, which contains a META Refresh that reloads the page every 5 minutes.
3. You can then reduce the session timeout to 10 minutes (maybe even 6).
Now, if a user closes their browser windows, their session times out much quicker than usual, but if their browser window is left open, their session is persistent.
A code example and walkthrough can be seen here.
You now need a solution to prevent the user from leaving their browser window open all night and hogging your UniData licences. Here I would implement a similar methodology, where a stagnant page (i.e. one where the user has done nothing for 20 minutes) refreshes to a logout ASPX page, clearing the session.
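A bare-bones sketch of the keep-alive page and the hidden frame (the file name and the 5-minute interval follow this answer; the logout variant just swaps the refresh target for a logout ASPX page):

<%-- KeepAlive.aspx: each request to this page carries the session cookie,
     resetting the session timeout; the META Refresh reloads it every
     300 seconds. --%>
<%@ Page Language="C#" %>
<html>
  <head>
    <meta http-equiv="refresh" content="300" />
  </head>
  <body></body>
</html>

<%-- In the MasterPage, alongside the normal content: --%>
<iframe src="KeepAlive.aspx" width="0" height="0" style="display:none"></iframe>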
If you are using UniObjects COM, make sure you get your COM marshalling working correctly. Take a look at:
SafeCOMWrapper - Managed Disposable Strongly Typed safe wrapper to late bound COM
http://www.codeproject.com/KB/COM/safecomwrapper.aspx
Another thing to watch out for is that the dynamic array class in UniObjects COM has a threading issue that doesn't play nicely with .NET. If you can, use your own dynamic array class, or do the array splits in .NET, instead of using the dynamic array class in UniObjects COM.
Sometimes when you try to access the data from that class it shows an empty string, but when you debug it, the data is there. I don't know the root cause of this.
If you need a generic dynamic array class that works with .NET, I can supply you one.
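In the meantime, the splits themselves are one-liners in .NET; this sketch assumes the standard MultiValue delimiters (field mark = char 254, value mark = 253, subvalue mark = 252):

static class DynArray
{
    // Standard MultiValue dynamic-array delimiters.
    public const char FieldMark    = '\u00FE'; // @FM, char(254)
    public const char ValueMark    = '\u00FD'; // @VM, char(253)
    public const char SubValueMark = '\u00FC'; // @SM, char(252)

    public static string[] Fields(string record)   { return record.Split(FieldMark); }
    public static string[] Values(string field)    { return field.Split(ValueMark); }
    public static string[] SubValues(string value) { return value.Split(SubValueMark); }
}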
UniObjects.NET does not have these problems to my knowledge.
Nathan Rector
When you say you are using UniObjects... are you using the COM or the .NET object set? The easiest option would be to use UniObjects connection pooling.
When you create your Singleton, are you storing it in the Application object, Session Object, or Cache Object?
I would suggest the Application object, as the Session object can do strange things. One way to handle and check timeouts would be to use a Cache key with a CacheItemRemovedCallback. That way you can use a file/path monitor dependency to watch for a Windows file change and trigger a removal manually, or rely on the sliding timeout on the cache entry.
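A sketch of that approach (the key prefix and timeout are illustrative; the removal callback is where the UniData session gets closed to free the licence):

using System;
using System.Web;
using System.Web.Caching;

public static class UniSessionCache
{
    public static void Add(string userName, IDisposable uniSession, TimeSpan idleTimeout)
    {
        HttpRuntime.Cache.Insert(
            "uni:" + userName,
            uniSession,
            null,                            // or a file/path CacheDependency
            Cache.NoAbsoluteExpiration,
            idleTimeout,                     // sliding: resets on each access
            CacheItemPriority.NotRemovable,
            OnRemoved);
    }

    // Runs when the entry expires or is removed for any reason.
    private static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        var session = value as IDisposable;
        if (session != null) session.Dispose();   // close the UniData session
    }
}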
The drawback to this is that cache timeouts are only driven by page activity, and if the ASP.NET application recycles, it may/will destroy the cache entries and their dependencies.
Nathan Rector
I'm using a C# WebClient to post login details to a page and read all of the results.
The page I am trying to load includes Flash (which, in the browser, translates into HTML). I'm guessing it's Flash to avoid being picked up by search engines?
The Flash I am interested in is just text (not an image/video etc.), and when I "View Selection Source" in Firefox I do actually see the text, within HTML, that I want to see.
(Interestingly, when I view the source for the whole page, I do not see the text, within HTML, that I want to see. Could this be related?)
Currently, after I have posted my login details and loaded the HTML back, I see the page without the Flash HTML (as if I had viewed the source for the whole page).
Thanks in advance,
Jim
PS: I should point out that the POST is actually working; my login is successful.
Fiddler (or a similar tool) is invaluable for tracking down screen-scraping problems like this. With Fiddler active, use a normal browser and look at all the requests being made as you go through the login and navigation process to get to the data you want. Along the way, you will likely see one or more things that your code is doing differently, which the server is responding to, and hence why it shows you different HTML than it shows a real client.
The list of stuff below (think of it as "scraping 101") is what you want to look for. Most of the stuff below is probably stuff you're already doing, but I included everything for completeness.
In order to scrape effectively, you may need to deal with one or more of the following:
Cookies and/or hidden fields. When you show up at any page on a site, you'll typically get a session cookie and/or a hidden form field which (in a normal browser) would be propagated back to the server on all subsequent requests. You will likely also get a persistent cookie. On many sites, if a request shows up without the proper cookie (or form field, for sites using "cookieless sessions"), the site will redirect the user to a "no cookies" UI, a login page, or another undesirable location (from the scraper app's perspective). Always make sure you capture the cookies set on the initial request and faithfully send them back to the server on subsequent requests, except if one of those subsequent requests changes a cookie (in which case propagate that new cookie instead).
Authentication tokens. A special case of the above is forms-authentication cookies or hidden fields. Make sure you're capturing the login token (usually a cookie) and sending it back.
POST vs. GET. This is obvious, but make sure you're using the same HTTP method that a real browser does.
Form fields (especially hidden ones!). I'm sure you're doing this already, but make sure to send all the form fields that a real browser does, not just the visible ones, and make sure the fields are HTML-encoded properly.
HTTP headers. You already checked this, but it may make sense to check again just to make sure the (non-cookie) headers are identical. I always start with the exact same headers, then pull headers out one by one and keep only the ones whose absence causes the request to fail or return bogus data. This approach simplifies your scraping code.
Redirects. These can come either from the server, or from client script (e.g. "if the user doesn't have the Flash plug-in loaded, redirect to a non-Flash page"). See WebRequest: How to find a postal code using a WebRequest against this ContentType="application/xhtml+xml, text/xml, text/html; charset=utf-8"? for a crazy example of how redirection can trip up a screen-scraper. Note that if you're using .NET for scraping, you'll need to use HttpWebRequest (not WebClient) for redirect-dependent scraping, because by default WebClient doesn't provide a way for your code to attach cookies and headers to the second (post-redirect) request; there's a sketch of this after the list. See the thread above for more details.
Sub-requests (frames, AJAX, Flash, etc.). Often, page elements (not the main HTTP request) end up fetching the data you want to scrape. You'll be able to figure this out by looking at which HTTP response contains the text you want, and then working backwards until you find what on the page is actually making the request for that content. A few sites do really crazy things in sub-requests, like requesting compressed or encrypted text via AJAX and then using client-side script to decrypt it. If that's the case, you'll need to do a bit more work, like reverse-engineering what the client script is doing.
Ordering. This one is obvious: make HTTP requests in the same order that a browser client does. That doesn't mean you need to make every request (e.g. images); typically you only need to make the requests that return a text/html content type, unless the data you want is not in the HTML but in an AJAX/Flash/etc. request.
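Tying the cookie and redirect points together, a minimal sketch using HttpWebRequest with a shared CookieContainer, so the session cookie set at login is replayed on the redirect and on later requests (URL and field names are placeholders):

using System;
using System.IO;
using System.Net;
using System.Text;

class ScrapeSketch
{
    static void Main()
    {
        var cookies = new CookieContainer();

        // 1. POST the login form, carrying the cookie jar.
        var login = (HttpWebRequest)WebRequest.Create("https://example.com/login");
        login.Method = "POST";
        login.ContentType = "application/x-www-form-urlencoded";
        login.CookieContainer = cookies;     // session cookie lands here
        login.AllowAutoRedirect = true;      // the redirect reuses the jar

        byte[] body = Encoding.UTF8.GetBytes("user=jim&pass=secret");
        using (Stream s = login.GetRequestStream())
            s.Write(body, 0, body.Length);
        using (var resp = (HttpWebResponse)login.GetResponse())
        using (var reader = new StreamReader(resp.GetResponseStream()))
            reader.ReadToEnd();              // consume (and inspect) the response

        // 2. Subsequent requests reuse the same CookieContainer.
        var page = (HttpWebRequest)WebRequest.Create("https://example.com/data");
        page.CookieContainer = cookies;
        using (var resp = (HttpWebResponse)page.GetResponse())
        using (var reader = new StreamReader(resp.GetResponseStream()))
            Console.WriteLine(reader.ReadToEnd());
    }
}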
(Interestingly, when I view the source for the whole page, I do not see the text, within HTML, that I want to see. Could this be related?)
This usually means that the discrepancy is caused by DOM manipulation via JavaScript after the page has loaded. Try turning off JavaScript and see what the page looks like.