I'm trying to get the pronunciation of a certain word from a web dictionary. For example, in the following code, I want to get the pronunciation of "good" from http://collinsdictionary.com
(Html Agility Pack is used here)
static void test()
{
String url = "http://www.collinsdictionary.com/dictionary/english/good";
WebClient client = new WebClient();
client.Encoding = System.Text.Encoding.UTF8;
String html = client.DownloadString(url);
HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(html);
HtmlAgilityPack.HtmlNode node = doc.DocumentNode.SelectSingleNode("//*[@id=\"good_1\"]/div[1]/h2/span/text()[1]");
if (node == null)
{
Console.WriteLine("XPath not found.");
}
else
{
Console.WriteLine(node.WriteTo());
}
}
I was expecting
(ɡʊd
but what I could get at best is
(ɡ?d
How to get it right?
The problem is not in your parsing of the text; rather, it is a problem with the console output. If you are doing this from a command-line app, you can set the output encoding of the console to Unicode:
Console.OutputEncoding = System.Text.Encoding.Unicode;
You also need to ensure that the console font is one with Unicode support. See this answer for more info.
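For reference, here is a minimal sketch of the original test() method with only that change added (this assumes a console app and a console font, e.g. Consolas, that contains the IPA glyphs):
static void test()
{
    // Make the console emit Unicode; without this, characters such as ʊ degrade to '?'.
    Console.OutputEncoding = System.Text.Encoding.Unicode;

    String url = "http://www.collinsdictionary.com/dictionary/english/good";
    WebClient client = new WebClient();
    client.Encoding = System.Text.Encoding.UTF8;
    String html = client.DownloadString(url);

    HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();
    doc.LoadHtml(html);
    HtmlAgilityPack.HtmlNode node = doc.DocumentNode.SelectSingleNode("//*[@id=\"good_1\"]/div[1]/h2/span/text()[1]");
    Console.WriteLine(node == null ? "XPath not found." : node.WriteTo());
}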
If you know the page encoding, pass it explicitly (e.g. System.Text.Encoding.UTF8):
string html = DownloadSmallFiles_String(url, System.Text.Encoding.UTF8, 20000);
or use automatic encoding detection (which depends on the server response):
string html = DownloadSmallFiles_String(url, null, 20000);
and finally load the HTML:
doc.LoadHtml(html);
Try the code below:
static void test()
{
String url = "http://www.collinsdictionary.com/dictionary/english/good";
System.Text.Encoding PageEncoding = null; // e.g. System.Text.Encoding.UTF8
// PageEncoding = null means: try to detect the encoding automatically
string html = DownloadSmallFiles_String(url, PageEncoding, 20000);
HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(html);
HtmlAgilityPack.HtmlNode node = doc.DocumentNode.SelectSingleNode("//*[@id=\"good_1\"]/div[1]/h2/span/text()[1]");
if (node == null)
{
Console.WriteLine("XPath not found.");
}
else
{
Console.WriteLine(node.WriteTo());
}
}
private static HttpWebRequest CreateWebRequest(string url, int TimeOut = 20000)
{
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko";
request.Method = "GET";
request.Timeout = TimeOut;
request.CachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.NoCacheNoStore);
request.KeepAlive = false;
request.UseDefaultCredentials = true;
request.Proxy = null;//ProxyHelperClass.GetIEProxy;
return request;
}
public static string DownloadSmallFiles_String(string Url, System.Text.Encoding ForceTextEncoding_SetThistoNothingToUseAutomatic, int TimeOut = 20000)
{
try
{
string ResponsString = "";
HttpWebRequest request = CreateWebRequest(Url, TimeOut);
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
if (response.StatusCode == HttpStatusCode.OK)
{
using (Stream receiveStream = response.GetResponseStream())
{
if (ForceTextEncoding_SetThistoNothingToUseAutomatic != null)
{
ResponsString = new StreamReader(receiveStream, ForceTextEncoding_SetThistoNothingToUseAutomatic).ReadToEnd();
}
else
{
if (string.IsNullOrEmpty(response.CharacterSet) == false)
{
System.Text.Encoding respEncoding = System.Text.Encoding.GetEncoding(response.CharacterSet);
ResponsString = new StreamReader(receiveStream, respEncoding).ReadToEnd();
}
else
{
ResponsString = new StreamReader(receiveStream).ReadToEnd();
}
}
}
}
}
return ResponsString;
}
catch (Exception ex)
{
return "";
}
}
Related
I am trying to perform a GET request to https://sede.educacion.gob.es/publiventa/catalogo.action?cod=E. With the cod=E parameter, the web site opens a menu below "Materias de educación" in the browser, but when I perform the request using C# this menu is not loaded, and I need it. This is the code I am using to read the HTML as a string, which I later parse with HtmlAgilityPack.
private string readHtml(string urlAddress)
{
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(urlAddress);
request.UserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:56.0) Gecko/20100101 Firefox/56.0";
request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
request.AutomaticDecompression = DecompressionMethods.GZip;
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
if (response.StatusCode == HttpStatusCode.OK)
{
Stream receiveStream = response.GetResponseStream();
StreamReader readStream = null;
if (response.CharacterSet == null)
{
readStream = new StreamReader(receiveStream);
}
else
{
readStream = new StreamReader(receiveStream, Encoding.GetEncoding(response.CharacterSet));
}
string data = readStream.ReadToEnd();
response.Close();
readStream.Close();
return data;
}
return null;
}
The Uri you posted (https://sede.educacion.gob.es/publiventa/catalogo.action?cod=E) uses a Javascript switch to show the Menu content.
When you connect to that Uri (without clicking a menu link), that site shows three different versions of that page.
1) Page with closed menu and proposed new editions
2) Page with closed menu and search engine fields
3) Page with open menu and a selection of the menu content
This switch is based on an internal procedure which tracks the current session. Unless you click a menu link (which is hooked to an event listener), the JavaScript procedure shows the page in different states.
I gave it a look; those scripts are quite long (a whole multi-purpose library) and I had no time to parse them all (maybe you can do that) to find out what parameters the event listener is passing.
But the three-state version switch is constant.
What I mean is you can call that page three times, preserving the Cookie Container: the third time you connect to it, it will stream the whole menu content and its links.
If you request the same page three times, the third time the HTML page will contain all the Materias de educación links.
public async void SomeMethodAsync()
{
string HtmlPage = await GetHttpStream([URI]);
HtmlPage = await GetHttpStream([URI]);
HtmlPage = await GetHttpStream([URI]);
}
This is, more or less, what I used to get that page:
CookieContainer CookieJar = new CookieContainer();
public async Task<string> GetHttpStream(Uri HtmlPage)
{
HttpWebRequest httpRequest;
string Payload = string.Empty;
httpRequest = WebRequest.CreateHttp(HtmlPage);
try
{
httpRequest.CookieContainer = CookieJar;
httpRequest.KeepAlive = true;
httpRequest.ConnectionGroupName = Guid.NewGuid().ToString();
httpRequest.AllowAutoRedirect = true;
httpRequest.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
httpRequest.ServicePoint.MaxIdleTime = 30000;
httpRequest.ServicePoint.Expect100Continue = false;
httpRequest.UserAgent = "Mozilla/5.0 (Windows NT 10; Win64; x64; rv:56.0) Gecko/20100101 Firefox/56.0";
httpRequest.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
httpRequest.Headers.Add(HttpRequestHeader.AcceptLanguage, "es-ES,es;q=0.8,en-US;q=0.5,en;q=0.3");
httpRequest.Headers.Add(HttpRequestHeader.AcceptEncoding, "gzip, deflate;q=0.8");
httpRequest.Headers.Add(HttpRequestHeader.CacheControl, "no-cache");
using (HttpWebResponse httpResponse = (HttpWebResponse)await httpRequest.GetResponseAsync())
{
Stream ResponseStream = httpResponse.GetResponseStream();
if (httpResponse.StatusCode == HttpStatusCode.OK)
{
try
{
//ResponseStream.Position = 0;
Encoding encoding = Encoding.GetEncoding(httpResponse.CharacterSet);
using (MemoryStream _memStream = new MemoryStream())
{
if (httpResponse.ContentEncoding.Contains("gzip"))
{
using (GZipStream _gzipStream = new GZipStream(ResponseStream, System.IO.Compression.CompressionMode.Decompress))
{
_gzipStream.CopyTo(_memStream);
};
}
else if (httpResponse.ContentEncoding.Contains("deflate"))
{
using (DeflateStream _deflStream = new DeflateStream(ResponseStream, System.IO.Compression.CompressionMode.Decompress))
{
_deflStream.CopyTo(_memStream);
};
}
else
{
ResponseStream.CopyTo(_memStream);
}
_memStream.Position = 0;
using (StreamReader _reader = new StreamReader(_memStream, encoding))
{
Payload = _reader.ReadToEnd().Trim();
};
};
}
catch (Exception)
{
Payload = string.Empty;
}
}
}
}
catch (WebException exW)
{
if (exW.Response != null)
{
//Handle WebException
}
}
catch (System.Exception exS)
{
//Handle System.Exception
}
CookieJar = httpRequest.CookieContainer;
return Payload;
}
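Once that third response actually contains the menu, it can be handed to HtmlAgilityPack as usual. A rough sketch of the idea (the XPath below is only a placeholder; substitute the selector that matches the real menu markup):
public async Task PrintMenuLinksAsync(Uri page)
{
    // The first two requests only build up the server-side session state;
    // the third response carries the full "Materias de educación" menu.
    await GetHttpStream(page);
    await GetHttpStream(page);
    string html = await GetHttpStream(page);

    var doc = new HtmlAgilityPack.HtmlDocument();
    doc.LoadHtml(html);

    // Placeholder XPath - adjust it to the actual menu container.
    var links = doc.DocumentNode.SelectNodes("//a[@href]");
    if (links != null)
        foreach (var link in links)
            Console.WriteLine(link.GetAttributeValue("href", string.Empty));
}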
I am reading the source of the following URL, but the title comes back as a bunch of ?? marks. How do I convert it to the actual language that the web page is presenting?
http://support.microsoft.com/common/survey.aspx?scid=sw;ja;3703&showpage=1
private string[] getTitleNewUrl()
{
string url = @"http://support.microsoft.com/common/survey.aspx?scid=sw;ja;3703&showpage=1";
string[] titleNewUrl = new string[2];
var navigatedUrl = string.Empty;
string title = string.Empty;
try
{
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.Credentials = System.Net.CredentialCache.DefaultCredentials;
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
if (response.StatusCode == HttpStatusCode.OK)
{
navigatedUrl = response.ResponseUri.ToString(); // this returns http://support.microsoft.com/default.aspx?scid=gp;en-us;fmserror
StreamReader sr = new StreamReader(response.GetResponseStream());
var htmlSource = sr.ReadToEnd();
Match m = Regex.Match(htmlSource, @"<title>\s*(.+?)\s*</title>");
if (m.Success)
{
title = m.Groups[1].Value;
}
titleNewUrl[0] = title;
titleNewUrl[1] = navigatedUrl;
}
}
catch (Exception ex)
{
MessageBox.Show("Invalid URL: " + navigatedUrl + " Error: " + ex.Message);
}
return titleNewUrl;
}
Thanks
Here is the answer
public string GetResponseStream(string sURL)
{
string strWebPage = "";
// create request
System.Net.WebRequest objRequest = System.Net.HttpWebRequest.Create(sURL);
// get response
System.Net.HttpWebResponse objResponse;
objResponse = (System.Net.HttpWebResponse)objRequest.GetResponse();
// get correct charset and encoding from the server's header
string Charset = objResponse.CharacterSet;
Encoding encoding = Encoding.GetEncoding(Charset);
// read response
using (StreamReader sr = new StreamReader(objResponse.GetResponseStream(), encoding))
{
strWebPage = sr.ReadToEnd();
// Close and clean up the StreamReader
sr.Close();
}
return strWebPage;
}
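A rough usage sketch, combining that helper with the title regex from the question (the Console.OutputEncoding line is only needed if you print the Japanese title to a console; once the correct charset is used, the string itself is already fine):
string url = "http://support.microsoft.com/common/survey.aspx?scid=sw;ja;3703&showpage=1";
string html = GetResponseStream(url);

// Same title extraction as in the original code.
Match m = Regex.Match(html, @"<title>\s*(.+?)\s*</title>");
if (m.Success)
{
    Console.OutputEncoding = System.Text.Encoding.UTF8; // only for console display
    Console.WriteLine(m.Groups[1].Value);
}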
I am trying to log in to a forum with HttpWebRequest, but I have had no success so far. This is my code:
string url = "http://www.warriorforum.com/";
var bytes = Encoding.Default.GetBytes(#"vb_login_username=MyUsername&cookieuser=1&vb_login_password=&s=&securitytoken=guest&do=login&vb_login_md5password=d9350bad28eee253951d7c5211e50179&vb_login_md5password_utf=d9350bad28eee253951d7c5211e50179");
var container = new CookieContainer();
var request = (HttpWebRequest)(WebRequest.Create(url));
request.CookieContainer = container;
request.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/535.2";
request.ContentLength = bytes.Length;
request.Method = "POST";
request.KeepAlive = true;
request.AllowAutoRedirect = true;
request.AllowWriteStreamBuffering = true;
request.CookieContainer = container;
using (var requestStream = request.GetRequestStream())
requestStream.Write(bytes, 0, bytes.Length);
var requestResponse = request.GetResponse();
using (var responsStream = requestResponse.GetResponseStream())
{
if (responsStream != null)
{
using (var responseReader = new StreamReader(responsStream))
{
var responseStreamReader = responseReader.ReadToEnd();
richTextBox1.Text = responseStreamReader; //this is to read the page source after the request
}
}
}
After the request, the response is just the same page; nothing changed, and there is no message telling me that I entered a wrong password or anything like that.
I just tested using my generic VBulletin login function and it seemed to work fine:
private static bool VBulletinLogin(Uri loginUrl, string user, string password)
{
var postParams = new[] {
new HttpParam("vb_login_username", user),
new HttpParam("cookieuser", "1"),
new HttpParam("vb_login_password", password),
new HttpParam("securitytoken", "guest"),
new HttpParam("do", "login"),
};
var http = new HttpContext();
var src = http.GetEncodedPageData(loginUrl, HttpRequestType.POST, postParams);
return src.ResponseData.Contains("Thank you for logging in");
}
Unfortunately, this uses my HttpContext class, which is part of a library I've been writing, and the features are fairly intertwined. Hopefully, however, it will at least give you an idea of the POST params. I've also included a few helpful structs/functions from my own class which should help. (Note: it requires a reference to the .NET 3.5 System.Web namespace.)
First helpful struct, HttpParam:
public struct HttpParam
{
private string _key;
private string _value;
public string Key { get { return HttpUtility.UrlEncode(_key); } set { _key = value; } }
public string Value { get { return HttpUtility.UrlEncode(_value); } set { _value = value; } }
public HttpParam(string key, string value)
{
_key = key;
_value = value;
}
public override string ToString()
{
return string.Format("{0}={1}", Key, Value);
}
};
And a function to go along with it:
private static string GetQueryString(HttpParam[] args)
{
return args != null
? string.Join("&", Array.ConvertAll(args, arg => arg.ToString()))
: string.Empty;
}
The combination of these will help you to generate consistent, safe query strings. So in the above case:
var postParams = new[] {
new HttpParam("vb_login_username", user),
new HttpParam("cookieuser", "1"),
new HttpParam("vb_login_password", password),
new HttpParam("securitytoken", "guest"),
new HttpParam("do", "login"),
};
var queryString = GetQueryString(postParams);
Would get you something like:
vb_login_username=<user>&cookieuser=1&vb_login_password=<password>&securitytoken=guest&do=login
Then something similar to what you already have for posting could be used; just ensure you have the correct URL. I'd also use UTF-8 encoding when getting the query string bytes. For example (using your original code, slightly modified):
var postParams = new[] {
new HttpParam("vb_login_username", "yourusername"),
new HttpParam("cookieuser", "1"),
new HttpParam("vb_login_password", "yourpassword"),
new HttpParam("securitytoken", "guest"),
new HttpParam("do", "login"),
};
string url = "http://warriorforum.com/login.php?do=login";
var container = new CookieContainer();
var buffer = Encoding.UTF8.GetBytes(GetQueryString(postParams));
var request = (HttpWebRequest)HttpWebRequest.Create(url);
request.CookieContainer = container;
request.UserAgent = "Mozilla/5.0";
request.Method = "POST";
request.KeepAlive = true;
request.AllowAutoRedirect = true;
request.CookieContainer = container;
request.ContentLength = buffer.Length;
request.ContentType = "application/x-www-form-urlencoded; charset=UTF-8";
request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
using (var requestStream = request.GetRequestStream())
requestStream.Write(buffer, 0, buffer.Length);
using (var response = (HttpWebResponse)request.GetResponse())
{
if (response.StatusCode == HttpStatusCode.OK || response.StatusCode == HttpStatusCode.NotModified)
{
using (var reader = new StreamReader(response.GetResponseStream()))
{
var result = reader.ReadToEnd();
richTextBox1.Text = result; //this is to read the page source after the request
}
}
}
Note the changes with the ContentType as well.
You seem to be missing something the browser does when you log in... does that forum really need a POST, or perhaps a GET? Are all your parameters correct? Does the web page perhaps send an additional (hidden) parameter when the login happens from the browser?
You need to see what really goes over the wire when you log in manually via a browser - use Wireshark or Fiddler to find out, and then simulate what happens in code...
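One way to check for such hidden parameters without a packet sniffer is to download the page that hosts the login form and dump its hidden inputs. A minimal sketch with Html Agility Pack (the URL and the XPath are assumptions, not verified against that forum):
// List every hidden input on the page so the POST body can be built from
// live values (e.g. a per-session securitytoken instead of the hard-coded "guest").
var client = new WebClient();
client.Encoding = Encoding.UTF8;
string loginPage = client.DownloadString("http://www.warriorforum.com/");

var doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(loginPage);
var hidden = doc.DocumentNode.SelectNodes("//form//input[@type='hidden']");
if (hidden != null)
    foreach (var input in hidden)
        Console.WriteLine("{0} = {1}",
            input.GetAttributeValue("name", ""),
            input.GetAttributeValue("value", ""));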
I am trying to scrape content from this page: https://www.google.com/search?hl=en&biw=1920&bih=956&tbm=shop&q=Xenon+12640&oq=Xenon+12640&aq=f&gs_l=serp.3...3743.3743.0.3905.1.1.0.0.0.0.0.0..0.0.ekh..0.0.Hq3XS7AxFDU&sei=Dr_MT_WOM6nO2AWE25mTCA&gbv=2
The problem I am experiencing is that when I open that URL in a browser I get everything I need to scrape, but when I scrape the same link in code, two (important) pieces are missing: the number of reviews and the ratings, below the price and the seller info.
Here is a screenshot from the internal web client in C#: http://gyazo.com/908a37c7f70712fba1f82ec90a604d4d.png?1338822369
Here is the code with which I am trying to get the content:
public string navGet(string inURL, CookieContainer inCookieContainer, bool GZip, string proxyAddress, int proxyPort,string proxyUserName, string proxyPassword)
{
try
{
this.currentUrl = inURL;
HttpWebRequest webRequest = (HttpWebRequest)WebRequest.Create(inURL);
webRequest.Timeout = this.TimeOutSetting;
webRequest.CookieContainer = inCookieContainer;
if (proxyAddress == "0" || proxyPort == 0)
{ }
else
{
webRequest.Proxy = new WebProxy(proxyAddress, proxyPort);
// Use login credentials to access proxy
NetworkCredential networkCredential = new NetworkCredential(proxyUserName, proxyPassword);
webRequest.Proxy.Credentials = networkCredential;
}
Uri destination = webRequest.Address;
webRequest.KeepAlive = true;
webRequest.Method = "GET";
webRequest.Accept = "*/*";
webRequest.Headers.Add("Accept-Language", "en-us");
if (GZip)
{
webRequest.Headers.Add("Accept-Encoding", "gzip, deflate");
}
webRequest.AllowAutoRedirect = true;
webRequest.UserAgent = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; FunWebProducts; .NET CLR 1.1.4322; .NET CLR 2.0.50727)";
webRequest.ContentType = "text/xml";
//webRequest.CookieContainer.Add(inCookieContainer.GetCookies(destination));
try
{
string strSessionID = inCookieContainer.GetCookies(destination)["PHPSESSID"].Value;
webRequest.Headers.Add("Cookie", "USER_OK=1;PHPSESSID=" + strSessionID);
}
catch (Exception ex2)
{
}
HttpWebResponse webResponse = (HttpWebResponse)webRequest.GetResponse();
if (webRequest.HaveResponse)
{
// First handle cookies
foreach(Cookie retCookie in webResponse.Cookies)
{
bool cookieFound = false;
foreach(Cookie oldCookie in inCookieContainer.GetCookies(destination))
{
if (retCookie.Name.Equals(oldCookie.Name))
{
oldCookie.Value = retCookie.Value;
cookieFound = true;
}
}
if (!cookieFound)
inCookieContainer.Add(retCookie);
}
// Read response
Stream responseStream = webResponse.GetResponseStream();
if (webResponse.ContentEncoding.ToLower().Contains("gzip"))
{
responseStream = new GZipStream(responseStream, CompressionMode.Decompress);
}
else if (webResponse.ContentEncoding.ToLower().Contains("deflate"))
{
responseStream = new DeflateStream(responseStream, CompressionMode.Decompress);
}
StreamReader stream = new StreamReader(responseStream, System.Text.Encoding.Default);
string responseString = stream.ReadToEnd();
stream.Close();
this.currentUrl = webResponse.ResponseUri.ToString();
this.currentAddress = webRequest.Address.ToString();
setViewState(responseString);
return responseString;
}
throw new Exception("No response received from host.");
return "An error was encountered";
}
catch(Exception ex)
{
//MessageBox.Show("NavGet:" + ex.Message);
return ex.Message;
}
}
It looks like this happens because the number of reviews and the ratings are generated dynamically using JavaScript (probably AJAX or something else). In this case you need to analyze the additional traffic that takes place when the page is loaded in the browser and find where this data is transferred, or analyze the JavaScript code to see how it is generated.
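A quick way to confirm this is to dump the raw response to disk and search it for a value you can see in the browser (a rough check; the file path and the search text are arbitrary):
// Using the navGet method from the question, with the proxy disabled ("0"/0).
string html = navGet(url, new CookieContainer(), true, "0", 0, "", "");
File.WriteAllText(@"C:\temp\google-shopping.html", html, Encoding.UTF8);

// If the review count you see in the browser is not in this file, it is
// injected later by JavaScript and you have to find the XHR request that
// delivers it (browser dev tools or Fiddler will show it).
Console.WriteLine(html.Contains("reviews") ? "found in raw HTML" : "not in raw HTML - loaded dynamically");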
I have a problem at a certain company in Germany. They use a proxy in their network and my program can't communicate with the server.
IE works with these settings, which means: Automatically detect settings.
This is the code:
public static bool CompleteValidation(string regKey)
{
string uri = "***";
int c = 1;
if (Counter < 5) c = 6 - Counter;
string response = "";
try
{
System.Net.ServicePointManager.Expect100Continue = false;
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(uri);
request.AllowWriteStreamBuffering = true;
request.Method = "POST";
request.UserAgent = "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:5.0) Gecko/20100101 Firefox/5.0";
request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
request.Headers.Add(HttpRequestHeader.AcceptLanguage, "pl,en-us;q=0.7,en;q=0.3");
request.Headers.Add(HttpRequestHeader.AcceptEncoding, "gzip, deflate");
request.Headers.Add(HttpRequestHeader.AcceptCharset, "ISO-8859-2,utf-8;q=0.7,*;q=0.7");
request.KeepAlive = true;
//proxy settings
string exepath = Path.GetDirectoryName(Application.ExecutablePath);
string proxySettings = exepath + #"\proxy.ini";
WebProxy wp = new WebProxy();
if (File.Exists(proxySettings)) {
request.Proxy = WebRequest.DefaultWebProxy;
IniFile ini = new IniFile(proxySettings);
string user = ini.IniReadValue("Proxy", "User");
string pass = ini.IniReadValue("Proxy", "Password");
string domain = ini.IniReadValue("Proxy", "Domain");
string ip = ini.IniReadValue("Proxy", "IP");
string port_s = ini.IniReadValue("Proxy", "Port");
int port = 0;
if (!string.IsNullOrEmpty(ip))
{
if (!string.IsNullOrEmpty(port_s))
{
try
{
port = Convert.ToInt32(port_s);
}
catch (Exception e)
{
ErrorLog.AddToLog("Problem with conversion of port:");
ErrorLog.AddToLog(e.Message);
ErrorLog.ShowLogWindow();
}
wp = new WebProxy(ip, port);
} else {
wp = new WebProxy(ip);
}
}
if (string.IsNullOrEmpty(domain))
wp.Credentials = new NetworkCredential(user, pass);
else
wp.Credentials = new NetworkCredential(user, pass, domain);
request.Proxy = wp;
}
string post = "***";
request.ContentLength = post.Length;
request.ContentType = "application/x-www-form-urlencoded";
StreamWriter writer = null;
try
{
writer = new StreamWriter(request.GetRequestStream()); // Here is the WebException thrown
writer.Write(post);
writer.Close();
}
catch (Exception e)
{
ErrorLog.AddToLog("Problem with request sending:");
ErrorLog.AddToLog(e.Message);
ErrorLog.ShowLogWindow();
}
HttpWebResponse Response = null;
try
{
Response = (HttpWebResponse)request.GetResponse();
}
catch (Exception e)
{
ErrorLog.AddToLog("Problem with response:");
ErrorLog.AddToLog(e.Message);
ErrorLog.ShowLogWindow();
}
//Request.Proxy = WebProxy.GetDefaultProxy();
//Request.Proxy.Credentials = CredentialCache.DefaultCredentials;
string sResponseHeader = Response.ContentEncoding; // get response header
if (!string.IsNullOrEmpty(sResponseHeader))
{
if (sResponseHeader.ToLower().Contains("gzip"))
{
byte[] b = DecompressGzip(Response.GetResponseStream());
response = System.Text.Encoding.GetEncoding(Response.ContentEncoding).GetString(b);
}
else if (sResponseHeader.ToLower().Contains("deflate"))
{
byte[] b = DecompressDeflate(Response.GetResponseStream());
response = System.Text.Encoding.GetEncoding(Response.ContentEncoding).GetString(b);
}
}
// uncompressed, standard response
else
{
StreamReader ResponseReader = new StreamReader(Response.GetResponseStream());
response = ResponseReader.ReadToEnd();
ResponseReader.Close();
}
}
catch (Exception e)
{
ErrorLog.AddToLog("Problem with comunication:");
ErrorLog.AddToLog(e.Message);
ErrorLog.ShowLogWindow();
}
if (response == "***")
{
SaveKeyFiles();
WriteRegKey(regKey);
RenewCounter();
return true;
}
else
{
return false;
}
}
My program logs it as:
[09:13:18] Searching for hardware ID
[09:13:56] Problem with response:
[09:13:56] The remote server returned an error: (407) Proxy Authentication Required.
[09:15:04] problem with comunication:
[09:15:04] Object reference not set to an object instance.
If they write the user and password into the proxy.ini file, the program works. But the problem is that they can't do that. And somehow IE works without it. Is there any way to get those settings from IE or from the system?
Use GetSystemWebProxy to return what the system default proxy is.
WebRequest.DefaultWebProxy = WebRequest.GetSystemWebProxy();
But every HttpWebRequest should automatically be filled out with this information by default. For example, the following snippet in a standalone console application should print the correct information on a system with a PAC file configured.
HttpWebRequest myWebRequest=(HttpWebRequest)WebRequest.Create("http://www.microsoft.com");
// Obtain the 'Proxy' of the Default browser.
IWebProxy proxy = myWebRequest.Proxy;
// Print the Proxy Url to the console.
if (proxy != null)
{
Console.WriteLine("Proxy: {0}", proxy.GetProxy(myWebRequest.RequestUri));
}
else
{
Console.WriteLine("Proxy is null; no proxy will be used");
}
Use DefaultNetworkCredentials to supply the current user's credentials to the system proxy:
request.Proxy.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
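Applied to the code in the question, that boils down to something like the following, placed where the proxy is currently read from proxy.ini (a sketch):
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
// Use whatever "Automatically detect settings" resolves to in IE/Windows...
request.Proxy = WebRequest.GetSystemWebProxy();
// ...and authenticate against the proxy with the logged-on Windows account,
// so no user/password has to be stored in proxy.ini.
request.Proxy.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;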