I am trying to use Google Translate to translate text, but it returns a "Server Unavailable" error. My guess is that something else is going on: when I put the same URL in the browser's address bar, I get a captcha to fill in, and only after passing the captcha does it download a txt file. So I suspect the real issue is the captcha page, not the server being unavailable.
Calling Function.
string result = TranslateGoogle("Life is great and one is spoiled when it goes on and on and on", "en", "hi");
Console.WriteLine(result);
TranslateGoogle Function
public string TranslateGoogle(string text, string fromCulture, string toCulture)
{
    fromCulture = fromCulture.ToLower();
    toCulture = toCulture.ToLower();

    // Strip any region suffix, e.g. "en-US" -> "en".
    string[] tokens = fromCulture.Split('-');
    if (tokens.Length > 1)
        fromCulture = tokens[0];

    tokens = toCulture.Split('-');
    if (tokens.Length > 1)
        toCulture = tokens[0];

    string url = string.Format(@"http://translate.google.com/translate_a/t?client=j&text={0}&hl=en&sl={1}&tl={2}",
        System.Uri.EscapeDataString(text), fromCulture, toCulture);

    string html = null;
    try
    {
        WebClient web = new WebClient();
        web.Headers.Add(HttpRequestHeader.UserAgent, "Mozilla/5.0");
        web.Headers.Add(HttpRequestHeader.AcceptCharset, "UTF-8");
        web.Encoding = Encoding.UTF8;
        html = web.DownloadString(url);
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
        return null;
    }

    string result = Regex.Match(html, "trans\":(\".*?\"),\"", RegexOptions.IgnoreCase).Groups[1].Value;
    return result;
}
Expected Output
{
"sentences":
[
{
"trans":"जीवन महान है और इस पर और पर और पर चला जाता है जब एक खराब है",
"orig":"Life is great and one is spoiled when it goes on and on and on",
"translit":"Jīvana mahāna hai aura isa para aura para aura para calā jātā hai jaba ēka kharāba hai",
"src_translit":"",
"backend":0
}
],
"src":"en",
"server_time":85
}
This is what I am getting instead:
"The remote server returned an error: (503) Server Unavailable."
What should I do to get the expected output from the program?
Sorry, this is not an answer (but maybe the community can help and turn it into a real one); I have to post here because I can't format code well in a comment.
I tried your example, and it seems that Google thinks you are trying to abuse their services. Here is what the client sends:
GET http://translate.google.com/translate_a/t?client=j&text=Life%20is%20great%20and%20one%20is%20spoiled%20when%20it%20goes%20on%20and%20on%20and%20on&hl=en&sl=en&tl=hi HTTP/1.1
Accept-Charset: UTF-8
User-Agent: Mozilla/5.0
Host: translate.google.com
Proxy-Connection: Keep-Alive
Google redirects this request to http://ipv4.google.com/sorry/IndexRedirect?continue=http://translate.google.com/translate_a/t%3Fclient%3Dj%26text%3DLife%2520is%2520great%2520and%2520one%2520is%2520spoiled%2520when%2520it%2520goes%2520on%2520and%2520on%2520and%2520on%26hl%3Den%26sl%3Den%26tl%3Dhi&q=CGMSBFgz6X4YkI3frwUiGQDxp4NLo-2RV2k8i7UPzIRYKSuT5usFkUU
When navigated from a browser, that URL shows a captcha, so I tried opening the URL generated by the program in a web browser (Firefox).
That's what it shows:
Sorry for the Italian; it says that unusual traffic is coming from the PC.
Once you solve the captcha correctly, your browser saves a cookie for subsequent requests (so you won't get the captcha again) and you are redirected to the translated sentence.
Here is an example of the browser requests on next navigations:
GET http://translate.google.com/translate_a/t?client=j&text=Life%20is%20great%20and%20one%20is%20spoiled%20when%20it%20goes%20on%20and%20on%20and%20on&hl=en&sl=en&tl=hi HTTP/1.1
Host: translate.google.com
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:43.0) Gecko/20100101 Firefox/43.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Cookie: NID=71=a__xJqNU4C1oQTkLrMCSL4CLdR_nelc5kbjcUwgvJUBILn2SOHrfUeIg-9vWfy6tRHVh9Z4yXT1dpwnnHIXf5i2NLlCuDn-joB1tpYo_9JM4_zQnaaYO7UCsFoFILogS8G4XTt1M8esMgUnG_JzoMWSG81Q-JfGk1_IQsb5gIHyHcKroJeNEUp4bnMkiOvZgj1Sk; SID=DQAAAP8AAADnhNjYLtZUYSPbm-V_62WNnlSj8pUKPRnUfLR-Fp18gYeyWsC93YgLn5yoy0L3FLPb2_yNM7ysBQPCnqJGCy6Or6i2WLHicMaVFr0_0LT4xM2KECq3F6Nczc6V7RO8G5VYnHNLXjZ4ZqVMRTfG3E-Ljrgq_0zg_bhi1DT2CeWoBgBFSVTh_cyMjjYdCRiPpyEFRAtUp_48EKmd62YzJHyPeD-JfXTvVlyacDavPzl4L5yf1KmJ37c-j_Px8dYVKHn5tE_jAKHcFjJ717mY85bjyyUasTKoPc_w9AhnVQXE-v-jBsT4rvbJ3khIqiddjagnQ6LpVCMrRwZ9OwU2uubG; HSID=AX4zDBkEvzB-ZdrnV; APISID=ZMLtLIl8PnW6C6X2/A20GPxC9NiRmY3t1T; _ga=GA1.3.1956353841.1435321193; PREF=ID=1111111111111111:FF=0:LD=it:TM=1436338644:LM=1437143045:V=1:S=me455Y_9_LyG2PFU; GOOGLE_ABUSE_EXEMPTION=ID=52cecb7a44e552cc:TM=1442301156:C=c:IP=88.51.233.126-:S=APGng0tXDRxFvrRNJHu-uk3IRqKVpJAIIQ
Connection: keep-alive
As proof, if I add this line to the C# code:
web.Headers.Add(HttpRequestHeader.Cookie, "NID=71=a__xJqNU4C1oQTkLrMCSL4CLdR_nelc5kbjcUwgvJUBILn2SOHrfUeIg-9vWfy6tRHVh9Z4yXT1dpwnnHIXf5i2NLlCuDn-joB1tpYo_9JM4_zQnaaYO7UCsFoFILogS8G4XTt1M8esMgUnG_JzoMWSG81Q-JfGk1_IQsb5gIHyHcKroJeNEUp4bnMkiOvZgj1Sk; SID=DQAAAP8AAADnhNjYLtZUYSPbm-V_62WNnlSj8pUKPRnUfLR-Fp18gYeyWsC93YgLn5yoy0L3FLPb2_yNM7ysBQPCnqJGCy6Or6i2WLHicMaVFr0_0LT4xM2KECq3F6Nczc6V7RO8G5VYnHNLXjZ4ZqVMRTfG3E-Ljrgq_0zg_bhi1DT2CeWoBgBFSVTh_cyMjjYdCRiPpyEFRAtUp_48EKmd62YzJHyPeD-JfXTvVlyacDavPzl4L5yf1KmJ37c-j_Px8dYVKHn5tE_jAKHcFjJ717mY85bjyyUasTKoPc_w9AhnVQXE-v-jBsT4rvbJ3khIqiddjagnQ6LpVCMrRwZ9OwU2uubG; HSID=AX4zDBkEvzB-ZdrnV; APISID=ZMLtLIl8PnW6C6X2/A20GPxC9NiRmY3t1T; _ga=GA1.3.1956353841.1435321193; PREF=ID=1111111111111111:FF=0:LD=it:TM=1436338644:LM=1437143045:V=1:S=me455Y_9_LyG2PFU; GOOGLE_ABUSE_EXEMPTION=ID=52cecb7a44e552cc:TM=1442301156:C=c:IP=88.51.233.126-:S=APGng0tXDRxFvrRNJHu-uk3IRqKVpJAIIQ"); //This is the cookie of the request of Firefox
Google sends the translated sentence "जीवन महान है और इस पर और पर और पर चला जाता है जब एक खराब है"
Here is a project that seems to work; it basically adds different parameters to the URL.
GoogleTranslator works by directly invoking Google's translation API
called by its online translation form and parsing the results.
I have been trying to use Google TTS as well, but it doesn't work anymore; Google Translate v2 no longer supports TTS (see here).
Since you are using C#, you could instead use speech synthesis via System.Speech.Synthesis:
public static void TextToSpeech(string utterance)
{
    SpeechSynthesizer speaker = new SpeechSynthesizer();
    speaker.Speak(utterance);
}
Hope this answers at least part of your question. There are no workarounds for the captcha as of yet.
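Separately, once you do get past the captcha (e.g. with the cookie trick above), extracting "trans" with a regex is fragile; the reply shown under "Expected Output" is JSON, so a JSON parser is more robust. A minimal sketch against a reply of that shape, assuming a runtime where System.Text.Json is available (on older .NET Framework versions, Json.NET's JObject.Parse does the same job):

```csharp
using System;
using System.Text.Json;

class TransParse
{
    static void Main()
    {
        // A reply shaped like the expected output above (shortened for brevity).
        string html = "{\"sentences\":[{\"trans\":\"जीवन महान है\",\"orig\":\"Life is great\"}],\"src\":\"en\"}";
        using (JsonDocument doc = JsonDocument.Parse(html))
        {
            // Navigate sentences[0].trans instead of pattern-matching the raw text.
            string trans = doc.RootElement
                .GetProperty("sentences")[0]
                .GetProperty("trans")
                .GetString();
            Console.WriteLine(trans); // prints: जीवन महान है
        }
    }
}
```

This keeps working even if Google reorders fields or the translation itself contains quotes, both of which break the regex.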
Related
What I am essentially doing is: I input a URL (stackoverflow.com) in the browser, catch the HTTP request with my proxy, forward it to the website, then get the response from the website and relay it back to the browser.
In theory it should work just fine, but in practice I get a response with "Moved Permanently" and "Location: https://stackoverflow.com/". If I understand it correctly, I need to take this Location address and replace the old address in the HTTP request ("GET http://stackoverflow.com/ HTTP/1.1" -> "GET https://stackoverflow.com/ HTTP/1.1") and then make the request again.
However, that's where I'm stuck. When I make the new request, the website responds with the same status code, 301, no matter what I do.
That's what I am doing at proxy side:
Get a list of IP addresses using Dns.GetHostAddresses(); it returns four addresses:
151.101.65.69
151.101.129.69
151.101.193.69
151.101.1.69
Then I connect to each address and send this request (which I got from browser):
"GET http://stackoverflow.com/ HTTP/1.1\r\nHost: stackoverflow.com\r\nUser-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:57.0) Gecko/20100101 Firefox/57.0\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language: en-US,en;q=0.5\r\nAccept-Encoding: gzip, deflate\r\nCookie: prov=0014938e-7640-15fd-486c-9e6cb05cd5b1\r\nConnection: keep-alive\r\nUpgrade-Insecure-Requests: 1\r\n\r\n"
I then get response:
"HTTP/1.1 301 Moved Permanently\r\nLocation: https://stackoverflow.com/\r\nX-Request-Guid: 080d3919-7d02-4e48-9348-b9dc8b5b08f6\r\nContent-Length: 143\r\nAccept-Ranges: bytes\r\nDate: Mon, 26 Feb 2018 17:56:59 GMT\r\nVia: 1.1 varnish\r\nConnection: keep-alive\r\nX-Served-By: cache-ams4441-AMS\r\nX-Cache: MISS\r\nX-Cache-Hits: 0\r\nX-Timer: S1519667819.162626,VS0,VE76\r\nVary: Fastly-SSL\r\nX-DNS-Prefetch-Control: off\r\n\r\n<html><head><title>Object moved</title></head><body>\r\n<h2>Object moved to here.</h2>\r\n</body></html>\r\n"
But even if I change "GET http://stackoverflow.com/ HTTP/1.1" to "GET https://stackoverflow.com/ HTTP/1.1", nothing changes: each of these four IPs returns the same response.
Am I missing something important? Maybe I should include another header or change something else in the new request?
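One likely culprit, offered as a guess: the 301 redirects from http to https, and an https request cannot simply be re-sent over the same plaintext socket. The connection itself has to switch to TLS on port 443 (in C#, by wrapping the socket in an SslStream) before the request is re-sent, and in origin-form ("GET / HTTP/1.1") rather than with the absolute URI. A small sketch of the redirect-handling decision; the parsing helper is hypothetical, not from the original code:

```csharp
using System;
using System.Text.RegularExpressions;

class RedirectHelper
{
    // Pull the Location header out of a raw 301/302 response and work out
    // where (host, port, path) the follow-up request must go.
    public static (string Host, int Port, string Path) ParseLocation(string rawResponse)
    {
        Match m = Regex.Match(rawResponse, @"Location:\s*(\S+)", RegexOptions.IgnoreCase);
        var uri = new Uri(m.Groups[1].Value);
        return (uri.Host, uri.Port, uri.AbsolutePath);
    }

    static void Main()
    {
        string response = "HTTP/1.1 301 Moved Permanently\r\n" +
                          "Location: https://stackoverflow.com/\r\n\r\n";
        var target = ParseLocation(response);
        // Port 443 means the next connection must be wrapped in TLS, e.g.
        // new SslStream(tcpClient.GetStream()) + AuthenticateAsClient(host);
        // re-sending "GET https://..." in cleartext just yields the same 301.
        Console.WriteLine($"{target.Host}:{target.Port}{target.Path}");
        // prints: stackoverflow.com:443/
    }
}
```

In other words, the fix is not in the request line alone but in establishing a TLS session first; a plain TCP proxy can only tunnel such traffic (as with the CONNECT method), not rewrite it.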
I have a local website (not mine) that requires authentication before doing some queries. The authentication header looks like this:
Host: 192.168.7.9
Connection: keep-alive
Content-Length: 185
Origin: http://192.168.7.9
X-Requested-With: XMLHttpRequest
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.3 Safari/537.36
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Accept: */*
DNT: 1
Referer: http://192.168.7.9/signin
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Cookie: _FProEnterprise_session=BAh7CzoVbmVlZF93ZWxjb21lX21zZ1Q6D3Nlc3Npb25faWQiJTUxNjI5OGRiMDNmNjU4ZDg4ODE3NmFiZjhmMDU3YTI2OglzaXRlSSIKc2l0ZTAGOgZFRjoObGFuZ19wYXRoSSIHZW4GOwhUOg5vbmVfY2xpY2tGOgx1c2VyX2lkaRE%3D--0c6634a714baa7f0e4795aee89b31f9b7ec0565e
And the request body looks like this:
username=myusername&password=mypassword
I'm not super great with how authentication works. So first, is this forms authentication? I'm guessing it is, since I have to enter my username and password on the site and then submit to get in.
Second, why is there a Cookie already there? Is it from a previous session perhaps, and I can ignore it?
My goal is to reproduce this in C#, so that I can authenticate, get the cookie, and then post data and retrieve results from this site. At least that's what I think I need to do. Links and code would be super helpful. If it's relevant, I need to make this request from my Web API app controller.
You can use the ASP.NET membership provider and authenticate with Membership.ValidateUser(), which also takes care of forms authentication. Check whether the user is authenticated with if (Context.User.Identity.IsAuthenticated), and sign out with FormsAuthentication.SignOut().
You first need SQL Server or some other storage mechanism to hold the username and password.
This seems to be an AJAX request (X-Requested-With: XMLHttpRequest). Therefore the user has to be on the web page first, which is when the session started. That is when the user gets the session cookie, which is sent every time to keep track of the session. This session is also kept on the server, where login information is stored - whether or not you're logged in, and who you are.
The contents seem to be a simple HTTP form, but since it came from an XMLHttpRequest it could just as well be created using Javascript. This is at least the standard way to send POST data through HTTP.
That is using plain HTTP authentication and the cookies are from an old session.
http://en.wikipedia.org/wiki/Basic_access_authentication
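For reference, Basic access authentication is nothing more than a base64-encoded "username:password" pair sent in the Authorization header on every request; a minimal sketch with made-up credentials:

```csharp
using System;
using System.Text;

class BasicAuthHeader
{
    static void Main()
    {
        // Hypothetical credentials, for illustration only.
        string credentials = "myusername:mypassword";
        string header = "Basic " + Convert.ToBase64String(Encoding.UTF8.GetBytes(credentials));
        Console.WriteLine(header); // prints: Basic bXl1c2VybmFtZTpteXBhc3N3b3Jk
    }
}
```

Note that in the request shown in the question the credentials travel in the POST body rather than in an Authorization header, which is characteristic of forms authentication.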
This link solved it for me:
HERE
My final code (in my Web API controller) looked like this:
public static string JsonWithAuth(string url, string data)
{
    var bytes = Encoding.Default.GetBytes(data);
    using (var client = new WebClientEx())
    {
        var values = new NameValueCollection
        {
            { "username", "myUsername" },
            { "password", "myPassword" },
        };
        // Authenticate
        client.UploadValues("http://192.168.7.9/main/signin", values);
        // Post data
        var response = client.UploadData(url, "POST", bytes);
        return Encoding.Default.GetString(response);
    }
}
And this was the class that made it work (from the linked answer):
/// <summary>
/// A custom WebClient featuring a cookie container
/// </summary>
public class WebClientEx : WebClient
{
    public CookieContainer CookieContainer { get; private set; }

    public WebClientEx()
    {
        CookieContainer = new CookieContainer();
    }

    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = base.GetWebRequest(address);
        if (request is HttpWebRequest)
        {
            ((HttpWebRequest)request).CookieContainer = CookieContainer;
        }
        return request;
    }
}
So my final call was like this:
string sampleInfo = JsonWithAuth(
"http://192.168.7.9/samples/sample_locations_list",
"sort=position&dir=ASC&box_id=");
Hope that helps someone else!
I am attempting to form a JSON request to authenticate using the OAuth 2.0 "Service Account" flow from Google's documentation here. It uses JWT. There seems to be very little information on how to do this in C#. This is the raw request I am sending, but the only response I can get from Google is "invalid_request".
POST https://accounts.google.com/o/oauth2/token HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Host: accounts.google.com
Content-Length: 457
Expect: 100-continue
Connection: Keep-Alive
{
"grant_type": "assertion",
"assertion_type": "http://oauth.net/grant_type/jwt/1.0/bearer",
"assertion": "JWT (including signature)"
}
Any ideas on what could be going on? I am attempting to create a Windows service that pings Google Latitude locations at set intervals. Is there potentially another way to get access to that API without jumping through this hoop? Thanks!
The docs are relatively clear that you should POST a urlencoded string. Instead of attempting to post JSON, post application/x-www-form-urlencoded data:
var invariantPart = "grant_type=assertion&" +
"assertion_type=http%3A%2F%2Foauth.net%2Fgrant_type%2Fjwt%2F1.0%2Fbearer&" +
"assertion=";
var fullPostData = invariantPart + Uri.EscapeDataString(myCalculatedAssertion);
//POST fullPostData to google
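Putting that together, a minimal sketch of the full request; the endpoint is the one from the question, the variable name follows the snippet above, and the assertion value is a placeholder for your signed JWT (the actual send is left commented out):

```csharp
using System;
using System.Net;

class OAuthAssertionPost
{
    static void Main()
    {
        // Placeholder: in real use this is your computed, signed JWT.
        string myCalculatedAssertion = "PLACEHOLDER.JWT.SIGNATURE";

        // Build the form-urlencoded body; only the assertion varies per request.
        string body = "grant_type=assertion"
            + "&assertion_type=" + Uri.EscapeDataString("http://oauth.net/grant_type/jwt/1.0/bearer")
            + "&assertion=" + Uri.EscapeDataString(myCalculatedAssertion);

        using (var client = new WebClient())
        {
            // The content type must match the body encoding, not application/json.
            client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
            // string tokenJson = client.UploadString("https://accounts.google.com/o/oauth2/token", body);
        }

        Console.WriteLine(body);
    }
}
```

Inspecting the printed body before sending is a quick way to confirm the escaping matches what the token endpoint expects.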
I'm working on a small C#/WPF application that interfaces with a web service implemented in Ruby on Rails, using handcrafted HttpWebRequest calls and JSON serialization. Without caching, everything works as it's supposed to, and I've got HTTP authentication and compression working as well.
Once I enable caching, by setting request.CachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.CacheIfAvailable);, things go awry - in the production environment. When connecting to a simple WEBrick instance, things work fine, I get HTTP/1.1 304 Not Modified as expected and HttpWebRequest delivers the cached content.
When I try the same against the production server, running nginx/0.8.53 + Phusion Passenger 3.0.0, the application breaks. First request (uncached) is served properly, but on the second request which results in the 304 response, I get a WebException stating that "The request was aborted: The request was canceled." as soon as I invoke request.GetResponse().
I've run the connections through fiddler, which hasn't helped a whole lot; both WEBrick and nginx return an empty entity body, albeit different response headers. Intercepting the request and changing the response headers for nginx to match those of WEBrick didn't change anything, leading me to think that it could be a keep-alive issue; setting request.KeepAlive = false; changes nothing, though - it doesn't break stuff when connecting to WEBrick, and it doesn't fix stuff when connecting to nginx.
For what it's worth, the WebException.InnerException is a NullReferenceException with the following StackTrace:
at System.Net.HttpWebRequest.CheckCacheUpdateOnResponse()
at System.Net.HttpWebRequest.CheckResubmitForCache(Exception& e)
at System.Net.HttpWebRequest.DoSubmitRequestProcessing(Exception& exception)
at System.Net.HttpWebRequest.ProcessResponse()
at System.Net.HttpWebRequest.SetResponse(CoreResponseData coreResponseData)
Headers for the (working) WEBrick connection:
########## request
GET /users/current.json HTTP/1.1
Authorization: Basic *REDACTED*
Content-Type: application/json
Accept: application/json
Accept-Charset: utf-8
Host: testbox.local:3030
If-None-Match: "84a49062768e4ca619b1c081736da20f"
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
########## response
HTTP/1.1 304 Not Modified
X-Ua-Compatible: IE=Edge
Etag: "84a49062768e4ca619b1c081736da20f"
Date: Wed, 01 Dec 2010 18:18:59 GMT
Server: WEBrick/1.3.1 (Ruby/1.8.7/2010-08-16)
X-Runtime: 0.177545
Cache-Control: max-age=0, private, must-revalidate
Set-Cookie: *REDACTED*
Headers for the (exception-throwing) nginx connection:
########## request
GET /users/current.json HTTP/1.1
Authorization: Basic *REDACTED*
Content-Type: application/json
Accept: application/json
Accept-Charset: utf-8
Host: testsystem.local:8080
If-None-Match: "a64560553465e0270cc0a23cc4c33f9f"
Accept-Encoding: gzip, deflate
Connection: Keep-Alive
########## response
HTTP/1.1 304 Not Modified
Connection: keep-alive
Status: 304
X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 3.0.0
ETag: "a64560553465e0270cc0a23cc4c33f9f"
X-UA-Compatible: IE=Edge,chrome=1
X-Runtime: 0.240160
Set-Cookie: *REDACTED*
Cache-Control: max-age=0, private, must-revalidate
Server: nginx/0.8.53 + Phusion Passenger 3.0.0 (mod_rails/mod_rack)
UPDATE:
I tried doing a quick-and-dirty manual ETag cache, but it turns out that's a no-go: I get a WebException when invoking request.GetResponse(), telling me that "The remote server returned an error: (304) Not Modified." - yeah, .NET, I kinda knew that, and I'd like to (attempt to) handle it myself, grr.
UPDATE 2:
Getting closer to the root of the problem. The showstopper seems to be a difference in the response headers for the initial request. WEBrick includes a Date: Wed, 01 Dec 2010 21:30:01 GMT header, which isn't present in the nginx reply. There's other differences as well, but intercepting the initial nginx reply with fiddler and adding a Date header, the subsequent HttpWebRequests are able to process the (unmodified) nginx 304 replies.
Going to try to look for a workaround, as well as getting nginx to add the Date header.
UPDATE 3:
It seems that the serverside issue is with Phusion Passenger, they have an open issue about lack of the Date header. I'd still say that HttpWebRequest's behavior is... suboptimal.
UPDATE 4:
Added a Microsoft Connect ticket for the bug.
I think the designers found it reasonable to throw an exception when the "expected behavior", i.e. getting a response body, cannot be completed. You can handle this somewhat intelligently as follows:
catch (WebException ex)
{
    if (ex.Status == WebExceptionStatus.ProtocolError)
    {
        var statusCode = ((HttpWebResponse)ex.Response).StatusCode;
        // Test against HttpStatusCode enumeration.
    }
    else
    {
        // Do something else, e.g. throw;
    }
}
So, it turns out to be Phusion Passenger (or nginx, depending on how you look at it; Thin as well) that doesn't add a Date HTTP response header, combined with what I see as a bug in .NET's HttpWebRequest (in my situation there's no If-Modified-Since, thus Date shouldn't be necessary), leading to the problem.
The workaround for this particular case was to edit our Rails ApplicationController:
class ApplicationController < ActionController::Base
  # ...other stuff here

  before_filter :add_date_header

  # Bugfix for the .NET HttpWebRequest 304-handling bug and various
  # webservers' laziness in not adding the Date: response header.
  def add_date_header
    response.headers['Date'] = Time.now.to_s
  end
end
UPDATE:
Turns out it's a bit more complex than "just" setting HttpRequestCachePolicy; to reproduce it, I also need to construct the HTTP Basic auth header manually. So the involved components are the following:
HTTP server that doesn't include a HTTP "Date:" response header.
manual construction of HTTP Authorization request header.
use of HttpRequestCachePolicy.
Smallest repro I've been able to come up with:
namespace Repro
{
    using System;
    using System.IO;
    using System.Net;
    using System.Net.Cache;
    using System.Text;

    class ReproProg
    {
        const string requestUrl = "http://drivelog.miracle.local:3030/users/current.json";

        // Manual construction of HTTP basic auth so we don't get an unnecessary server
        // roundtrip telling us to auth, which is what we get if we simply use
        // HttpWebRequest.Credentials.
        private static void SetAuthorization(HttpWebRequest request, string _username, string _password)
        {
            string userAndPass = string.Format("{0}:{1}", _username, _password);
            byte[] authBytes = Encoding.UTF8.GetBytes(userAndPass.ToCharArray());
            request.Headers["Authorization"] = "Basic " + Convert.ToBase64String(authBytes);
        }

        static public void DoRequest()
        {
            var request = (HttpWebRequest)WebRequest.Create(requestUrl);
            request.Method = "GET";
            request.CachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.CacheIfAvailable);
            SetAuthorization(request, "user@domain.com", "12345678");

            using (var response = request.GetResponse())
            using (var stream = response.GetResponseStream())
            using (var reader = new StreamReader(stream))
            {
                string reply = reader.ReadToEnd();
                Console.WriteLine("########## Server reply: {0}", reply);
            }
        }

        static public void Main(string[] args)
        {
            DoRequest(); // works
            DoRequest(); // explodes
        }
    }
}
I'd like my application to query a csv file from a secure website. I have no experience with web programming so I'd appreciate detailed instructions. Currently I have the user login to the site, manually query the csv, and have my application load the file locally. I'd like to automate this by having the user enter his login information, authenticating him on the website, and querying the data. The application is written in C# .NET.
I've tested the following code already and am able to access the file once the user has already authenticated himself and created a manual query.
System.Net.WebClient Client = new WebClient();
Stream strm = Client.OpenRead("https://<URL>/file.csv");
Here is the request sent to the site for authentication. I've angle bracketed the real userid and password.
POST /pwdVal.asp HTTP/1.1
Accept: image/jpeg, application/x-ms-application, image/gif, application/xaml+xml, image/pjpeg, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, application/x-shockwave-flash, */*
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; Tablet PC 2.0; OfficeLiveConnector.1.4; OfficeLivePatch.1.3; .NET4.0C; .NET4.0E)
Content-Type: application/x-www-form-urlencoded
Accept-Encoding: gzip, deflate
Cookie: ASPSESSIONID<unsure if this data contained password info so removed>; ClientId=<username>
Host: www3.emidas.com
Content-Length: 36
Connection: Keep-Alive
Cache-Control: no-cache
Accept-Language: en-US
client_id=<username>&password=<password>
Most likely the server sends a cookie once login is performed. You need to submit the same values as the login form (this can be done using UploadValues()), and you need to save the resulting cookies in a CookieContainer.
When I did this, I did it using HttpWebRequest, however per http://couldbedone.blogspot.com/2007/08/webclient-handling-cookies.html you can subclass WebClient and override the GetWebRequest() method to make it support cookies.
Oh, also, I found it useful to use Fiddler while manually accessing the web site to see what actually gets sent back and forth to the web site, so I knew what I was trying to reproduce.
Edit, elaboration requested: I can only elaborate on how to do it using HttpWebRequest; I have not done it using WebClient. Below is the code snippet I used for login.
private CookieContainer _jar = new CookieContainer();
private string _password;
private string _userid;
private string _url;
private string _userAgent;
...
string responseData;
HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(_url);
webRequest.CookieContainer = _jar;
webRequest.Method = "POST";
webRequest.ContentType = "application/x-www-form-urlencoded";
webRequest.UserAgent = _userAgent;

string requestBody = String.Format(
    "client_id={0}&password={1}", _userid, _password);

try
{
    using (StreamWriter requestWriter = new StreamWriter(webRequest.GetRequestStream()))
    {
        requestWriter.Write(requestBody);
        requestWriter.Close();

        using (HttpWebResponse res = (HttpWebResponse)webRequest.GetResponse())
        {
            using (StreamReader responseReader = new StreamReader(res.GetResponseStream()))
            {
                responseData = responseReader.ReadToEnd();
                responseReader.Close();

                if (res.StatusCode != HttpStatusCode.OK)
                    throw new WebException("Logon failed", null, WebExceptionStatus.Success, res);
            }
        }
    }
}
catch (WebException)
{
    // Handle login failures as appropriate for your application.
    throw;
}
Before you go down this rabbit hole, contact the web site and ask whether they provide a web service to query user account info. The simulated-login method you are proposing should be a last resort only.
Another way you can do it is to automate IE, e.g. by using a WebBrowser control. That will more accurately simulate all the clever stuff IE does, like running JavaScript, which might be necessary. If JavaScript or other clever stuff isn't necessary, though, using IE is a little heavy-handed and possibly prone to other problems.