I was curious about how people build third-party apps for sites with NO public APIs, but I could not really find any tutorials on the topic, so I decided to just give it a try. I created a simple desktop application that uses HttpClient to send GET requests to a site I frequently use, then parses the response and displays the data in my WPF window. This approach worked pretty well (probably because the site is fairly simple).
However, today I tried to run my application from a different place, and I kept getting 403 errors in response to my application's requests. It turned out that the network I was using went through a VPN server, while the site I was trying to access used CloudFlare as a protection layer, which apparently forces VPN users to solve a reCaptcha in order to access the target site.
var baseAddress = new Uri("http://www.cloudflare.com");
using (var client = new HttpClient() { BaseAddress = baseAddress })
{
    var message = new HttpRequestMessage(HttpMethod.Get, "/");

    // this line returns the CloudFlare home page if I use a regular network,
    // and the reCaptcha page when I use VPN
    var result = await client.SendAsync(message);

    // this line throws if I use VPN (403 Forbidden)
    result.EnsureSuccessStatusCode();
}
Now the question is: what is the proper way to deal with CloudFlare protection in a client application? Do I have to display the reCaptcha in my application, just like a web browser does? Do I have to set any particular headers in order to get a proper response instead of a 403? Any tips are welcome, as this is a completely new area to me.
P.S. I write in C# because this is the language I'm most comfortable with, but I don't mind answers using any other language as long as they answer the question.
I guess one way to go about it is to handle the captcha in a web browser, outside the client application:
1. Parse the response to see if it is a captcha page (see the sketch after this list).
2. If it is, open the page in a browser.
3. Let the user solve the captcha there.
4. Fetch the CloudFlare cookies from the browser's cookie storage. You're going to need __cfduid (user ID) and cf_clearance (proof of solving the captcha).
5. Attach those cookies to the requests sent by the client application.
6. Use the application as normal for the next 24 hours (until the CloudFlare cookies expire).
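For step 1, a minimal detection check might look something like this (the "/cdn-cgi/" and "cf_captcha_kind" markers are assumptions based on challenge pages I've seen; verify them against the response you actually get):

// Sketch: detect a probable CloudFlare captcha/challenge page.
// Requires System.Net, System.Net.Http and System.Threading.Tasks.
static async Task<bool> IsCaptchaPageAsync(HttpResponseMessage response)
{
    if (response.StatusCode != HttpStatusCode.Forbidden)
        return false;
    string body = await response.Content.ReadAsStringAsync();
    return body.Contains("/cdn-cgi/") || body.Contains("cf_captcha_kind");
}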
Now the hard part here is step 4. It's easy to manually copy-paste the cookies to make the code snippet in my question work with VPN:
var baseAddress = new Uri("http://www.cloudflare.com");
var cookieContainer = new CookieContainer();
using (var client = new HttpClient(new HttpClientHandler() { CookieContainer = cookieContainer }, true) { BaseAddress = baseAddress })
{
    var message = new HttpRequestMessage(HttpMethod.Get, "/");

    // I've also copy-pasted all the headers from the browser;
    // some of those might be optional
    message.Headers.Add("User-Agent", "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:44.0) Gecko/20100101 Firefox/44.0");
    message.Headers.Add("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
    message.Headers.Add("Accept-Encoding", "gzip, deflate");
    message.Headers.Add("Accept-Language", "en-US;q=0.5,en;q=0.3");

    // adding CloudFlare cookies
    cookieContainer.Add(new Cookie("__cfduid", "copy-pasted-cookie-value", "/", "cloudflare.com"));
    cookieContainer.Add(new Cookie("cf_clearance", "copy-pasted-cookie-value", "/", "cloudflare.com"));

    var result = await client.SendAsync(message);
    result.EnsureSuccessStatusCode();
}
But I think it's going to be a tricky task to automate the process of fetching the cookies, since different browsers store cookies in different places and/or formats. Not to mention the fact that you need an external browser for this approach to work, which is really annoying. Still, it's something to consider.
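For Firefox specifically, here's a rough sketch of reading the cookie straight out of the profile's cookies.sqlite (assumes the System.Data.SQLite NuGet package; the profile path and the moz_cookies schema are assumptions that have varied between Firefox versions):

// Rough sketch: pull a cookie out of Firefox's cookie store.
// Firefox keeps the database locked while running, so work on a copy.
static string GetFirefoxCookie(string profileDir, string host, string name)
{
    string source = System.IO.Path.Combine(profileDir, "cookies.sqlite");
    string copy = System.IO.Path.GetTempFileName();
    System.IO.File.Copy(source, copy, true); // avoids "database is locked" errors
    using (var conn = new System.Data.SQLite.SQLiteConnection("Data Source=" + copy))
    using (var cmd = conn.CreateCommand())
    {
        conn.Open();
        cmd.CommandText = "SELECT value FROM moz_cookies WHERE host = @host AND name = @name";
        cmd.Parameters.AddWithValue("@host", host);
        cmd.Parameters.AddWithValue("@name", name);
        return cmd.ExecuteScalar() as string;
    }
}

// e.g. GetFirefoxCookie(profilePath, ".cloudflare.com", "cf_clearance")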
The answer to "build third-party apps for sites with NO public APIs" is that even though some software vendors don't have public APIs, they do have partner programs.
A good example is Netflix: they used to have a public API, and some of the apps developed while it was available were allowed to continue using it.
In your scenario, your client app acts as a web crawler (downloading HTML content and trying to parse information out of it). What you are trying to do is crawl Cloudflare-protected data that is not meant to be crawled by a third-party app (bot). From Cloudflare's side, they have done the correct thing by putting up a captcha that prevents automated requests.
Further, if you send requests at a high frequency (requests/sec), and Cloudflare's threat-detection mechanisms notice, your IP address will be blocked. I assume they have already identified the VPN server's IP address you are using and blacklisted it; that's why you are getting a 403.
Basically, you are depending solely on security holes in the Cloudflare-protected pages you access via the client app. This is a sort of hacking around what Cloudflare has deliberately restricted, which I would not recommend.
If you have a cool idea, it's better to contact the site's developer team and discuss it with them.
In case you still need it: I had the very same problem and came up with the following solution two years ago.
It opens the Cloudflare-protected web page with the C# WebBrowser class, waits about 6 seconds so that Cloudflare saves the clearance cookie, and then saves the cookie to disk.
You need a JavaScript-capable browser like the C# WebBrowser class, because the Cloudflare challenge page needs JavaScript to run its countdown and save the cookie; any other attempt will fail.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Runtime.InteropServices;
using System.Net;
using System.Threading;

namespace kek
{
    public partial class Form1 : Form
    {
        [DllImport("wininet.dll", SetLastError = true)]
        public static extern bool InternetGetCookieEx(string url, string cookieName, StringBuilder cookieData, ref int size, Int32 dwFlags, IntPtr lpReserved);

        private Uri Uri = new Uri("http://www.my-cloudflare-protected-website.com");
        private const Int32 InternetCookieHttponly = 0x2000;
        private const Int32 ERROR_INSUFFICIENT_BUFFER = 0x7A;

        public Form1()
        {
            InitializeComponent();
            webBrowser1.DocumentCompleted += new System.Windows.Forms.WebBrowserDocumentCompletedEventHandler(this.webBrowser1_DocumentCompleted);
            webBrowser1.Navigate(Uri, null, null, "User-Agent: kappaxdkappa\r\n"); // user-agent needs to be set another way if that doesn't work
        }

        private void webBrowser1_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
        {
            int waitTime = 0;
            if (webBrowser1.DocumentTitle.Contains("We are under attack")) // check what string identifies the Cloudflare captcha page and put it here
            {
                waitTime = 6000;
            }

            Task.Run(async () =>
            {
                await Task.Delay(waitTime); // the cookie can be saved right away, but the challenge's waiting period might not have passed yet
                String cloudflareCookie = GetCookie(Uri, "cf_clearance");
                if (!String.IsNullOrEmpty(cloudflareCookie))
                {
                    System.IO.StreamWriter file = new System.IO.StreamWriter("c:\\CFcookie.blob"); // save to %appdata%\MyProgram\Cookies\clearence.blob
                    file.Write(cloudflareCookie);
                    file.Close();
                }
            });
        }

        String GetCookie(Uri uri, String cookieName)
        {
            // First call with a zero-size buffer: it fails with ERROR_INSUFFICIENT_BUFFER
            // and reports the required size via datasize.
            int datasize = 0;
            StringBuilder cookieData = new StringBuilder(datasize);
            InternetGetCookieEx(uri.ToString(), cookieName, cookieData, ref datasize, InternetCookieHttponly, IntPtr.Zero);
            if (Marshal.GetLastWin32Error() == ERROR_INSUFFICIENT_BUFFER && datasize > 0)
            {
                // Second call with a properly sized buffer actually fetches the cookie.
                cookieData = new StringBuilder(datasize);
                if (InternetGetCookieEx(uri.ToString(), cookieName, cookieData, ref datasize, InternetCookieHttponly, IntPtr.Zero))
                {
                    if (cookieData.Length > 0)
                    {
                        CookieContainer container = new CookieContainer();
                        container.SetCookies(uri, cookieData.ToString());
                        return container.GetCookieHeader(uri);
                    }
                }
            }
            return String.Empty;
        }
    }
}
Some notes:
Use a better user agent.
The cookie is saved to disk as well because I needed it for something else. I'm not sure whether the built-in browser saves the cookies for next time, but in case it does not, this way you can simply load it again.
Change the "We are under attack" phrase to the one that identifies the CF captcha page you are trying to bypass.
The __cfduid cookie is not required, AFAIK.
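To load the saved cookie back later, a minimal sketch (assuming the single-cookie file written by the form above):

// Restore the saved clearance cookie into a CookieContainer for HttpClient use.
// Remember to send the same User-Agent the WebBrowser used, or the clearance
// cookie will likely be rejected.
var uri = new Uri("http://www.my-cloudflare-protected-website.com");
var cookieContainer = new CookieContainer();
cookieContainer.SetCookies(uri, System.IO.File.ReadAllText("c:\\CFcookie.blob"));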
EDIT: Sorry, I was so focused on Cloudflare itself after reading the other answers here that I didn't notice you need to bypass the reCaptcha that sometimes appears on the Cloudflare page. My code can help you a bit with the browser and cookie part, but you will have a hard time solving reCaptcha, at least for now; a few weeks ago they made it even harder. I can recommend compiling your own version of Firefox and then automatically solving the captcha by hitting the checkbox. If you don't get that simple checkbox captcha, then you need to display it to the user. Mind that you also need to randomize the behaviour of your browser and how you click on the checkbox, otherwise it will detect you as a bot.
Related
I know this is possible using Puppeteer in JS, but I'm wondering if anyone has figured out how to proxy at the page level in PuppeteerSharp (different proxies for different tabs).
It seems I can catch the request, but I'm not sure how to adjust the proxy:
page.SetRequestInterceptionAsync(true).Wait();
page.Request += (s, ev) =>
{
    // what to do?
};
Edit
I am aware that I can set the proxy at the browser level, like so:
var browser = await Puppeteer.LaunchAsync(new LaunchOptions
{
    Headless = false,
    Args = new[] { "--proxy-server=host:port" }
});
var page = await browser.NewPageAsync();
await page.AuthenticateAsync(new Credentials() { Username = "username", Password = "password" });
But this is not what I'm trying to do. I'm trying to set the proxy for each page within a single browser instance. I want to test lots of proxies, so spawning a new browser instance just to set the proxy is too much overhead.
You can use a different browser instance for each logical instance. I mean, instead of trying to set a different proxy for each page/tab, just create a new browser instance and set the proxy via the launch args.
If this solution doesn't fit your needs, check this question. There is a library for NodeJS which gives you the ability to use a different proxy for each page/tab. You can check that library's source code and implement the same thing inside your C# application.
That library uses a very simple method: instead of sending requests via Puppeteer's browser/page, it re-sends each request via Node.js HTTP tools. It does this with the page.setRequestInterception method: the library intercepts each request from the page, gathers its data, and sends the request itself via its own HTTP tools (through whatever proxy you like). It's been a long time since I used C#, so maybe I am wrong, but you could try HttpWebRequest or something similar. After you get the result, you should use the request.respond method and pass the response data to it. This way you can put any kind of proxy inside your application. Check the library's code here.
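Here's a rough PuppeteerSharp sketch of that technique (only GETs are handled; the ResponseData property names may differ slightly between PuppeteerSharp versions, and the proxy address is a placeholder):

// One HttpClient per page, each with its own proxy.
var handler = new HttpClientHandler
{
    Proxy = new WebProxy("proxy-for-this-page:8080"),
    UseProxy = true
};
var httpClient = new HttpClient(handler);

await page.SetRequestInterceptionAsync(true);
page.Request += async (s, e) =>
{
    try
    {
        // Re-issue the browser's request through our own proxied client...
        string body = await httpClient.GetStringAsync(e.Request.Url);

        // ...and hand the result back to the page.
        await e.Request.RespondAsync(new ResponseData
        {
            Status = System.Net.HttpStatusCode.OK,
            Body = body
        });
    }
    catch
    {
        await e.Request.AbortAsync();
    }
};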
I've found several different questions about this error, but none of them seem to outline my scenario.
I am creating a website that pulls in tweets from our company's Twitter account and displays them on a social wall. I am using C# ASP.NET Web Forms. The C# code uses the LinqToTwitter library to handle the authentication and the "tweet pulling". It grabs the tweets and dumps them onto an aspx file as one big long string of JSON. We then have a jQuery script that reads through the JSON and displays the tweets on the page, nice and pretty like.
The code currently works perfectly on my dev box, but when I push the code up to production I get this .NET error:
The remote certificate is invalid according to the validation procedure
I'll provide my code in a bit here, but first let me give you a little background. I have no idea if this information is relevant or not, but who knows. This website is actually part of a larger project to fit several tiny one-page microsites that we get from marketing onto one server, to reduce the overhead they cause. These microsites can all have different host names, but they point to the same IP address. An HttpModule lives on that server, intercepts all incoming requests, and redirects them to the appropriate subfolder depending on the host name.
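It's something along these lines (a hypothetical sketch; the host and folder names are illustrative, not our actual code):

public class HostRewriteModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += (s, e) =>
        {
            HttpContext ctx = ((HttpApplication)s).Context;
            // Map the incoming host name to the microsite's subfolder.
            if (ctx.Request.Url.Host.Equals("microsite-one.example.com", StringComparison.OrdinalIgnoreCase))
                ctx.RewritePath("/MicrositeOne" + ctx.Request.RawUrl);
        };
    }

    public void Dispose() { }
}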
From the research I've done, it seems that SSL is tied into this error quite a bit. I'm still pretty new to the IT world, and I'm learning more about SSL as this troubleshooting goes on. The server these microsites live on does have a few SSL certificates on it, and one of the microsites uses SSL, but not the website I'm currently working on. But since they both share the same IP address, in that sense they kind of ARE the same website.
This is the C# LinqToTwitter code:
private SingleUserAuthorizer auth;
private TwitterContext twitterCtx;

protected void Page_Load(object sender, EventArgs e)
{
    Response.ContentType = "application/json";
    auth = new SingleUserAuthorizer
    {
        Credentials = new SingleUserInMemoryCredentials
        {
            ConsumerKey =
                ConfigurationManager.AppSettings["twitterConsumerKey"],
            ConsumerSecret =
                ConfigurationManager.AppSettings["twitterConsumerSecret"],
            TwitterAccessToken =
                ConfigurationManager.AppSettings["twitterAccessToken"],
            TwitterAccessTokenSecret =
                ConfigurationManager.AppSettings["twitterAccessTokenSecret"]
        }
    };

    if (auth.IsAuthorized)
    {
        twitterCtx = new TwitterContext(auth);
        var tweetResponse =
            (from tweet in twitterCtx.Status
             where tweet.Type == StatusType.User &&
                   tweet.ScreenName == "OurProfile" &&
                   tweet.IncludeRetweets == true
             select tweet)
            .ToList();
        Results.Text = twitterCtx.RawResult;
    }
}

protected override void OnPreRender(EventArgs e)
{
    base.OnPreRender(e);
    if (twitterCtx != null)
    {
        twitterCtx.Dispose();
        twitterCtx = null;
    }
}
Does anyone have any ideas about what the problem could be here? Like I said, I'm still pretty new, and I'm at a loss how to troubleshoot this issue beyond Google. Could it be that our server can't verify that Twitter's SSL certificate is from a trusted source? Let me know if I can provide any more information or any more code. Thanks for your time and for reading through my post!
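One way to see exactly why validation fails is to temporarily hook the certificate validation callback and log the errors (a diagnostic sketch only; don't ship a callback that blindly returns true):

// Diagnostic only: log why the remote certificate fails validation.
ServicePointManager.ServerCertificateValidationCallback +=
    (sender, certificate, chain, sslPolicyErrors) =>
    {
        System.Diagnostics.Trace.WriteLine(
            "Cert: " + certificate.Subject + ", errors: " + sslPolicyErrors);
        return sslPolicyErrors == System.Net.Security.SslPolicyErrors.None;
    };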
I need to access a PHP page that sits behind a central authentication server (CAS). When I try to access this site by any other means, it redirects me to the CAS page, awaiting authentication, which essentially has me supplying GET variables to the authentication page (do not want). Unfortunately, we cannot remove the CAS from this page, because that would be a massive security risk.
So my question is: is there any way to supply my credentials to the CAS, store any cookies from that, and then, using those, access the PHP page?
My attempt is below.
Since I'm using the GET method, I don't need to worry about waiting until I'm at the PHP page to supply the values; I just need to actually access the page, which we'll call https://site.com/page.php?var1=value1&var2=value2.
As "WebClient accessing page with credentials" suggested, I created a CookieAwareWebClient class:
public class CookieAwareWebClient : WebClient
{
    public CookieAwareWebClient()
    {
        CookieContainer = new CookieContainer();
    }

    public CookieContainer CookieContainer { get; private set; }

    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = (HttpWebRequest)base.GetWebRequest(address);
        request.CookieContainer = CookieContainer;
        return request;
    }
}
and then the following to submit:
using (var client = new CookieAwareWebClient())
{
    var values = new NameValueCollection
    {
        { "username", "usr" },
        { "password", "(passw0rd!)" }, // don't worry, not actual information
    };

    // This should bring me to the login page, which will upload the values I have given
    client.UploadValues("https://site.com/page.php?var1=value1&var2=value2", values);
    string result = client.DownloadString("https://site.com/page.php?var1=value1&var2=value2");
}
Unfortunately, this still does not log me into the CAS, and result comes back as the HTML of the CAS login page.
Does anyone have ANY idea how to fix this? Thank you for your help.
I don't know if CAS supports standard HTTP auth out of the box; it is generally forms-based.
After being redirected to CAS, you'll want to parse/scrape the page for a few of the hidden form variables. POST those parameters, the username, and the password (along with the cookie that was returned with the GET request).
CAS should return an HTTP redirect (and there will be an additional cookie added to the collection: the CASTGC cookie). The WebClient should redirect you back to the originally requested resource; some apps might need two hops to get back.
If you will be polling the resource within the lifetime of the web application's session, make the subsequent calls using the same CookieCollection and you shouldn't have to authenticate against CAS again.
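Roughly, using the CookieAwareWebClient from the question (a sketch only; the CAS login URL and the hidden-field names "lt", "execution" and "_eventId" are the usual CAS conventions, but verify them against your actual login form):

using (var client = new CookieAwareWebClient())
{
    string target = "https://site.com/page.php?var1=value1&var2=value2";

    // 1. GET the protected page; CAS redirects to its login form and sets a session cookie.
    string loginPage = client.DownloadString(target);

    // 2. Scrape the hidden form fields (Regex for brevity; an HTML parser is more robust).
    string lt = Regex.Match(loginPage, "name=\"lt\" value=\"([^\"]+)\"").Groups[1].Value;
    string execution = Regex.Match(loginPage, "name=\"execution\" value=\"([^\"]+)\"").Groups[1].Value;

    // 3. POST credentials plus the hidden fields to the CAS login action.
    var values = new NameValueCollection
    {
        { "username", "usr" },
        { "password", "(passw0rd!)" },
        { "lt", lt },
        { "execution", execution },
        { "_eventId", "submit" }
    };
    client.UploadValues("https://cas.site.com/cas/login?service=" + Uri.EscapeDataString(target), values);

    // 4. CAS sets the CASTGC cookie and redirects back; the page should now be accessible
    //    with the same cookie container.
    string result = client.DownloadString(target);
}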
I wonder if someone knows of a working sample of logging in using Twitter (OAuth) for .NET.
I'm currently using this one: http://www.voiceoftech.com/swhitley/?p=681
but it only works if I set the callback URL to "oob"; if I set a real callback URL I get "401 Unauthorized".
Thanks!
I wrote an OAuth manager for this, because the existing options were too complicated.
OAuth with Verification in .NET
The class focuses on OAuth and works specifically with Twitter. This is not a class that exposes a ton of methods for the entire surface of Twitter's web API; it is just OAuth. If you want to update your status on Twitter, this class exposes no "UpdateStatus" method. I figured it's a simple matter for app designers to construct the HTTP message they want to send; in other words, the HTTP message is the API. The OAuth stuff, though, can get a little complicated, so that deserves an API, which is what the OAuth class is.
Here's example code to request a "request token":
var oauth = new OAuth.Manager();
oauth["consumer_key"] = MY_APP_SPECIFIC_CONSUMER_KEY;
oauth["consumer_secret"] = MY_APP_SPECIFIC_CONSUMER_SECRET;
oauth.AcquireRequestToken(SERVICE_SPECIFIC_REQUEST_TOKEN_URL, "POST");
THAT'S IT. In Twitter, the service-specific URL for requesting tokens is "https://api.twitter.com/oauth/request_token".
Once you get the request token, you pop up the web browser UI in which the user will explicitly grant approval for your app to access Twitter. You need to do this once, the first time the app runs. Do this in an embedded WebBrowser control, with code like so:
var url = SERVICE_SPECIFIC_AUTHORIZE_URL_STUB + oauth["token"];
webBrowser1.Url = new Uri(url);
For Twitter, the URL for this is "https://api.twitter.com/oauth/authorize?oauth_token=" with the oauth_token appended.
Grab the pin from the web browser UI, via some HTML screen scraping. Then request an "access token":
oauth.AcquireAccessToken(URL_ACCESS_TOKEN,
                         "POST",
                         pin);
For Twitter, that URL is "https://api.twitter.com/oauth/access_token".
You don't need to explicitly handle the access token; the OAuthManager class maintains it in state for you. But the token and secret are available in oauth["token"] and oauth["token_secret"], in case you want to write them off to permanent storage. To make requests with that access token, generate the authz header like this:
var authzHeader = oauth.GenerateAuthzHeader(url, "POST");
...where url is the resource endpoint. To update the user's status on Twitter, it would be "http://api.twitter.com/1/statuses/update.xml?status=Hello".
Then set the resulting string into the HTTP Header named Authorization, and send out the HTTP request to the url.
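For example, a minimal sketch with HttpWebRequest, using the status-update URL above:

var url = "http://api.twitter.com/1/statuses/update.xml?status=Hello";
var authzHeader = oauth.GenerateAuthzHeader(url, "POST");

var request = (HttpWebRequest)WebRequest.Create(url);
request.Method = "POST";
request.Headers["Authorization"] = authzHeader;
using (var response = (HttpWebResponse)request.GetResponse())
{
    Console.WriteLine(response.StatusCode); // 200 OK means the update succeeded
}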
In subsequent runs, when you already have the access token and secret, you can instantiate the OAuth.Manager like this:
var oauth = new OAuth.Manager();
oauth["consumer_key"] = MY_APP_SPECIFIC_CONSUMER_KEY;
oauth["consumer_secret"] = MY_APP_SPECIFIC_CONSUMER_SECRET;
oauth["token"] = your_stored_access_token;
oauth["token_secret"] = your_stored_access_secret;
Then just generate the authz header, and make your requests as described above.
Download the DLL
View the Documentation
Already solved my issue with http://www.voiceoftech.com/swhitley/?p=681
I was saving my app as "browser", but since I wasn't specifying a callback URL, it was transformed into a "client" app on saving.
I am late to the conversation, but I have created a video tutorial for anyone else who has this same task. Like you, I had a ton of fun figuring out the 401 error.
Video: http://www.youtube.com/watch?v=TGEA1sgMMqU
Tutorial: http://www.markhagan.me/Samples/Grant-Access-And-Tweet-As-Twitter-User-ASPNet
Code (in case you don't want to leave this page):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using Twitterizer;
namespace PostFansTwitter
{
public partial class twconnect : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
var oauth_consumer_key = "gjxG99ZA5jmJoB3FeXWJZA";
var oauth_consumer_secret = "rsAAtEhVRrXUTNcwEecXqPyDHaOR4KjOuMkpb8g";
if (Request["oauth_token"] == null)
{
OAuthTokenResponse reqToken = OAuthUtility.GetRequestToken(
oauth_consumer_key,
oauth_consumer_secret,
Request.Url.AbsoluteUri);
Response.Redirect(string.Format("http://twitter.com/oauth/authorize?oauth_token={0}",
reqToken.Token));
}
else
{
string requestToken = Request["oauth_token"].ToString();
string pin = Request["oauth_verifier"].ToString();
var tokens = OAuthUtility.GetAccessToken(
oauth_consumer_key,
oauth_consumer_secret,
requestToken,
pin);
OAuthTokens accesstoken = new OAuthTokens()
{
AccessToken = tokens.Token,
AccessTokenSecret = tokens.TokenSecret,
ConsumerKey = oauth_consumer_key,
ConsumerSecret = oauth_consumer_secret
};
TwitterResponse<TwitterStatus> response = TwitterStatus.Update(
accesstoken,
"Testing!! It works (hopefully).");
if (response.Result == RequestResult.Success)
{
Response.Write("we did it!");
}
else
{
Response.Write("it's all bad.");
}
}
}
}
}
"DotNetOpenAuth" will be great helps for u. http://www.dotnetopenauth.net/
I've developed a C# library for OAuth that is really simple to use and get up and running with. The project is open source, and I've included a demo application that works against
1. Google
2. Twitter
3. Yahoo
4. Vimeo
Of course any other OAuth provider will do as well. You can find the article and library here
OAuth C# Library
I'm using the Yahoo Uploader, part of the Yahoo UI Library, on my ASP.Net website to allow users to upload files. For those unfamiliar, the uploader works by using a Flash applet to give me more control over the FileOpen dialog. I can specify a filter for file types, allow multiple files to be selected, etc. It's great, but it has the following documented limitation:
Because of a known Flash bug, the Uploader running in Firefox in Windows does not send the correct cookies with the upload; instead of sending Firefox cookies, it sends Internet Explorer’s cookies for the respective domain. As a workaround, we suggest either using a cookieless upload method or appending document.cookie to the upload request.
So, if a user is using Firefox, I can't rely on cookies to persist their session when they upload a file. I need their session because I need to know who they are! As a workaround, I'm using the Application object thusly:
Guid UploadID = Guid.NewGuid();
Application.Add(UploadID.ToString(), User);
So, I'm creating a unique ID and using it as a key to store the Page.User object in the Application scope. I include that ID as a variable in the POST when the file is uploaded. Then, in the handler that accepts the file upload, I grab the User object thusly:
IPrincipal User = (IPrincipal)Application[Request.Form["uploadid"]];
This actually works, but it has two glaring drawbacks:
If IIS, the app pool, or even just the application is restarted between the time the user visits the upload page and actually uploads a file, their "uploadid" is deleted from application scope, and the upload fails because I can't authenticate them.
If I ever scale to a web farm (possibly even a web garden) scenario, this will completely break. I might not be worried, except I do plan on scaling this app in the future.
Does anyone have a better way? Is there a way for me to pass the actual ASP.Net session ID in a POST variable, then use that ID at the other end to retrieve the session?
I know I can get the session ID through Session.SessionID, and I know how to use YUI to post it to the next page. What I don't know is how to use that SessionID to grab the session from the state server.
Yes, I'm using a state server to store the sessions, so they persist application/IIS restarts, and will work in a web farm scenario.
Here is a post from the maintainer of SWFUpload which explains how to load the session from an ID stored in Request.Form. I imagine the same thing would work for the Yahoo component.
Note the security disclaimers at the bottom of the post.
By including a Global.asax file and the following code you can override the missing Session ID cookie:
using System;
using System.Web;
using System.Web.Security;

public class Global_asax : System.Web.HttpApplication
{
    private void Application_BeginRequest(object sender, EventArgs e)
    {
        /*
        Fix for the Flash Player cookie bug in non-IE browsers.
        Since Flash Player always sends the IE cookies, even in Firefox,
        we have to bypass the cookies by sending the values as part of the POST or GET
        and overwrite the cookies with the passed-in values.

        The theory is that at this point (BeginRequest) the cookies have not been read by
        the Session and Authentication logic, and if we update the cookies here we'll get our
        Session and Authentication restored correctly.
        */
        HttpRequest request = HttpContext.Current.Request;

        try
        {
            string sessionParamName = "ASPSESSID";
            string sessionCookieName = "ASP.NET_SESSIONID";
            string sessionValue = request.Form[sessionParamName] ?? request.QueryString[sessionParamName];
            if (sessionValue != null)
            {
                UpdateCookie(sessionCookieName, sessionValue);
            }
        }
        catch (Exception ex)
        {
            // TODO: Add logging here.
        }

        try
        {
            string authParamName = "AUTHID";
            string authCookieName = FormsAuthentication.FormsCookieName;
            string authValue = request.Form[authParamName] ?? request.QueryString[authParamName];
            if (authValue != null)
            {
                UpdateCookie(authCookieName, authValue);
            }
        }
        catch (Exception ex)
        {
            // TODO: Add logging here.
        }
    }

    private void UpdateCookie(string cookieName, string cookieValue)
    {
        HttpCookie cookie = HttpContext.Current.Request.Cookies.Get(cookieName);
        if (cookie == null)
        {
            HttpCookie newCookie = new HttpCookie(cookieName, cookieValue);
            Response.Cookies.Add(newCookie);
        }
        else
        {
            cookie.Value = cookieValue;
            HttpContext.Current.Request.Cookies.Set(cookie);
        }
    }
}
Security warning: Don't just copy and paste this code into your ASP.NET application without knowing what you are doing. It introduces security issues and possibilities for cross-site scripting.
Relying on this blog post, here's a function that should get you the session for any user based on the session ID, though it's not pretty:
// Requires: System.Reflection, System.Web, System.Web.SessionState,
// System.Web.Hosting (SimpleWorkerRequest) and System.IO (StringWriter).
public SessionStateStoreData GetSessionById(string sessionId)
{
    HttpApplication httpApplication = HttpContext.Current.ApplicationInstance;

    // Black magic #1: getting to SessionStateModule
    HttpModuleCollection httpModuleCollection = httpApplication.Modules;
    SessionStateModule sessionHttpModule = httpModuleCollection["Session"] as SessionStateModule;
    if (sessionHttpModule == null)
    {
        // Couldn't find Session module
        return null;
    }

    // Black magic #2: getting to SessionStateStoreProviderBase through reflection
    FieldInfo fieldInfo = typeof(SessionStateModule).GetField("_store", BindingFlags.NonPublic | BindingFlags.Instance);
    SessionStateStoreProviderBase sessionStateStoreProviderBase = fieldInfo.GetValue(sessionHttpModule) as SessionStateStoreProviderBase;
    if (sessionStateStoreProviderBase == null)
    {
        // Couldn't find sessionStateStoreProviderBase
        return null;
    }

    // Black magic #3: generating a dummy HttpContext out of thin air.
    // sessionStateStoreProviderBase.GetItem in #4 needs it.
    SimpleWorkerRequest request = new SimpleWorkerRequest("dummy.html", null, new StringWriter());
    HttpContext context = new HttpContext(request);

    // Black magic #4: using sessionStateStoreProviderBase.GetItem to fetch the data from the session with the given id.
    bool locked;
    TimeSpan lockAge;
    object lockId;
    SessionStateActions actions;
    SessionStateStoreData sessionStateStoreData = sessionStateStoreProviderBase.GetItem(
        context, sessionId, out locked, out lockAge, out lockId, out actions);

    return sessionStateStoreData;
}
You can get your current SessionID from the following code:
string sessionId = HttpContext.Current.Session.SessionID;
Then you can feed that into a hidden field maybe and then access that value through YUI.
It's just a GET, so you hopefully won't have any scaling problems. Security problems, though, I don't know about.
The ASP.NET session ID is stored in Session.SessionID, so you could set that in a hidden field and then post it to the next page.
I think, however, that if the application restarts, the session ID will expire if you do not store your sessions in SQL Server.
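To tie this back to the Global.asax workaround above: one way is to emit the session ID into a script variable (or hidden field) and have the uploader append it as the ASPSESSID POST variable that workaround reads. A sketch (the script variable name is illustrative; wiring it into the YUI uploader's POST vars is up to your page script):

// In the upload page's code-behind: expose the session ID to client script.
protected void Page_Load(object sender, EventArgs e)
{
    ClientScript.RegisterClientScriptBlock(
        GetType(), "sessid",
        "var ASPSESSID = '" + Session.SessionID + "';", true);
}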