What is the best way to set Expect100Continue when using WebClient (C#/.NET)? With the code below I still see the Expect: 100-continue header being sent, and the stubborn Apache server still complains with a 505 error.
string url = "http://aaaa.com";
ServicePointManager.Expect100Continue = false;
WebClient service = new WebClient();
service.Credentials = new NetworkCredential("username", "password");
service.Headers.Add("Content-Type","text/xml");
service.UploadStringCompleted += (sender, e) => CompleteCallback(BuildResponse(e));
service.UploadStringAsync(new Uri(url), "POST", query);
Note: if I put the above in a console app and let it run, I do not see the header in Fiddler. But my code is embedded in a user library which is loaded by a WPF app. So is there more to Expect100Continue in terms of threading, initialization, etc.? I now think it is more likely an issue in my code.
You need to set the Expect100Continue property on the ServicePoint used for the URI you're accessing:
var uri = new Uri("http://foo.bar.baz");
var servicePoint = ServicePointManager.FindServicePoint(uri);
servicePoint.Expect100Continue = false;
A reliable way to do this is to subclass WebClient and override GetWebRequest:
public class ExpectContinueAware : System.Net.WebClient
{
    protected override System.Net.WebRequest GetWebRequest(Uri address)
    {
        System.Net.WebRequest request = base.GetWebRequest(address);
        if (request is System.Net.HttpWebRequest)
        {
            var hwr = request as System.Net.HttpWebRequest;
            hwr.ServicePoint.Expect100Continue = false;
        }
        return request;
    }
}
This works perfectly.
Try to create the WebClient instance after you change Expect100Continue to false, or use a WebRequest instead of a WebClient.
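For example, a minimal sketch of that ordering, reusing the code from the question (query is the request body from the original snippet):

// Flip the flag BEFORE the WebClient (and the ServicePoint for the URI) is created.
ServicePointManager.Expect100Continue = false;

WebClient service = new WebClient();
service.Credentials = new NetworkCredential("username", "password");
service.Headers.Add("Content-Type", "text/xml");
service.UploadStringAsync(new Uri("http://aaaa.com"), "POST", query);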
The following approach also worked for me: in the constructor of the class that calls the service, set System.Net.ServicePointManager.Expect100Continue = false; then create the BasicHttpBinding object and continue. It executed successfully.
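A minimal sketch of that approach (the ServiceCaller class name is illustrative, not from the original post; requires a reference to System.ServiceModel):

public class ServiceCaller
{
    private readonly System.ServiceModel.BasicHttpBinding _binding;

    public ServiceCaller()
    {
        // Disable the Expect: 100-continue handshake before any channel is built.
        System.Net.ServicePointManager.Expect100Continue = false;
        _binding = new System.ServiceModel.BasicHttpBinding();
    }
}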
You can modify your Web/App.config by adding:
<system.net>
  <settings>
    <servicePointManager expect100Continue="false"/>
  </settings>
</system.net>
At least for me, if the faulty server gets patched, I can easily remove the setting without having to recompile.
I have a problem following a user on Instagram with InstaSharp.
Here is my code:
private async void Follow()
{
    var followMe = await api.FollowUserAsync(userID);
    if (followMe.Succeeded)
    {
        MessageBox.Show("Followed");
    }
    else
    {
        MessageBox.Show(followMe.Info.Message);
    }
}
When I call this method, the MessageBox shows feedback_required.
How can I fix this?
Also: other functions like Unfollow, Login and LogOut work fine; I only have a problem with the Follow function.
IP addresses from some specific countries can get banned in situations like this, as you said.
You can use proxies inside your program to get around this if your customers are in those countries.
C# Connecting Through Proxy
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("[ultimate destination of your request]");
WebProxy myproxy = new WebProxy("[your proxy address]", [your proxy port number]);
myproxy.BypassProxyOnLocal = false;
request.Proxy = myproxy;
request.Method = "GET";
HttpWebResponse response = (HttpWebResponse) request.GetResponse();
I hope this helps you.
I fixed the issue by changing the proxy server. It seems Instagram was banning my IP for no reason!
I'm calling 2 web services from a .NET 3.5 Framework client (the server runs a more modern .NET Framework, 4.6).
Run separately, both calls work fine, but if I call the methods in the order shown below, the VerifyFile method on the server is never entered, and I instead immediately get a "The server has committed a protocol violation. Section=ResponseStatusLine" error on the client.
To be more exact: the server registers the VerifyFile request in its events but doesn't enter the actual code until up to 6 minutes later (and immediately returns something instead that causes the above error).
After much testing I could reduce it to the first method, DownloadFile, being the cause of the problem, and specifically to returning anything other than status code OK from it (content or no content doesn't matter).
I'm at a complete loss with this phenomenon. I would have expected the client to have trouble, but the server not entering that code path until MINUTES later looks like the server is getting into trouble itself, which is unexpected; also unexpected is that the second method, not the original one, is the one running into problems.
So my question is: why does returning anything but HttpStatusCode.OK cause these problems, and what can I do to correct it?
Client:
WebClient webClient = new WebClient();
webClient.QueryString.Add("downloadId", id);
webClient.DownloadFile("localhost/Download/DownloadFile", @"c:\temp\local.txt");

webClient = new WebClient();
webClient.QueryString.Add("downloadId", id);
webClient.QueryString.Add("fileLength", GetFileLength(@"c:\temp\local.txt"));
var i = webClient.DownloadString("localhost/Download/VerifyFile");
As a test I replaced the DownloadFile call with: webClient.DownloadString("localhost/Download/DownloadFile");
Originally I also had only one WebClient, but I added the second one after the first failures.
Server:
[RoutePrefix("Download")]
public class DownloadController : ApiController
{
[HttpGet]
[Route("DownloadFile")]
public IHttpActionResult DownloadFile(int downloadId){
return ResponseMessage(GetFile(downloadId));
}
private HttpResponseMessage GetFile(int downloadId){
HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
string filePath = GetFilePathFromDB(downloadid);
if (filePath != String.Empty){
var stream = new FileStream(filePath, FileMode.Open, FileAccess.ReadWrite);
result.Content = new StreamContent(stream);
result.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment");
result.Content.Headers.ContentDisposition.FileName = Path.GetFileName(filePath);
result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
result.Content.Headers.ContentLength = stream.Length;
}
else{
result = new HttpResponseMessage(HttpStatusCode.InternalServerError);
}
return result;
}
[HttpGet]
[Route("VerifyFile")]
public IHttpActionResult VerifyFile(int downloadId, int fileLength)
{
return Content(HttpStatusCode.OK, "OK");
}
private string GetFilePathFromDB(int downloadId)
{
return #"C:\temp\mytest.txt"; // testcode
}
}
You can try three things:
1. Follow the HTTP spec so you don't get this error.
2. Add this to your client web.config:
<system.net>
  <settings>
    <httpWebRequest useUnsafeHeaderParsing="true" />
  </settings>
</system.net>
3. Set the Connection header to close instead of keep-alive:
class ConnectionCloseClient : WebClient
{
    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);
        if (request is HttpWebRequest)
        {
            (request as HttpWebRequest).KeepAlive = false;
        }
        return request;
    }
}
My theory is that when you get an error response from the server, the file isn't created. I suspect you are catching the DownloadFile exception that you should receive when you return any response other than OK.
GetFileLength then probably returns an invalid number, and according to this answer that can result in the error you mentioned. As to why it takes 6 minutes to reach that server code - I guess the framework does some inner error handling before calling the method and returning an error. Sadly, ASP.NET 4.* isn't open source, so I am not really familiar with the inner workings.
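GetFileLength isn't shown in the question, but a defensive version might look like this (a sketch; it assumes the method is meant to return the file size as a string for the query string):

private static string GetFileLength(string path)
{
    // If the earlier download failed, the file may not exist; returning "0"
    // avoids sending an invalid number in the fileLength query parameter.
    var info = new System.IO.FileInfo(path);
    return info.Exists ? info.Length.ToString() : "0";
}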
I think it's a problem with the Keep-Alive header. You can capture the HTTP request and check this value: a value of true will try to keep the connection open, and not all proxies are compatible with this header.
Try using two different WebClient instances and disposing of each before using the next one, or force the header to false.
WebClient webClient = new WebClient();
webClient.QueryString.Add("downloadId", id);
webClient.DownloadFile("localhost/Download/DownloadFile", @"c:\temp\local.txt");
webClient.Dispose();

WebClient webClient2 = new WebClient();
webClient2.QueryString.Add("downloadId", id);
webClient2.QueryString.Add("fileLength", GetFileLength(@"c:\temp\local.txt"));
var i = webClient2.DownloadString("localhost/Download/VerifyFile");
webClient2.Dispose();
Or wrap them in a using statement:
using (WebClient webClient = new WebClient())
{
    webClient.QueryString.Add("downloadId", id);
    webClient.DownloadFile("localhost/Download/DownloadFile", @"c:\temp\local.txt");
}

using (WebClient webClient2 = new WebClient())
{
    webClient2.QueryString.Add("downloadId", id);
    webClient2.QueryString.Add("fileLength", GetFileLength(@"c:\temp\local.txt"));
    var i = webClient2.DownloadString("localhost/Download/VerifyFile");
}
Check this post:
WebClient is opening a new connection each time I download a file and all of them stay established
I have the following C# code that attempts to make an HTTPS request to a specific URL. If I change the URL to point to the QAS server, which is not HTTPS, then everything works fine. I have tried numerous combinations of settings, but nothing I do seems to get this to work correctly. You can see several of the combinations I have tried in the comments.
var request = (HttpWebRequest)WebRequest.Create(nextUrl);
request.AllowAutoRedirect = false;
request.UseDefaultCredentials = true;
request.KeepAlive = false;
//request.PreAuthenticate = true;
//request.Credentials = CredentialCache.DefaultNetworkCredentials;
//request.Credentials = new NetworkCredential("name", "pass", "domain");
ServicePointManager.ServerCertificateValidationCallback = AcceptAllCertifications;
HttpWebResponse response;
using (response = (HttpWebResponse)request.GetResponse())
{
//Do Something
}
Your code for the certificate callback looks incorrect; it should look like:
ServicePointManager.ServerCertificateValidationCallback =
new RemoteCertificateValidationCallback(AcceptAllCertifications);
I'm assuming you have the proper method signature for AcceptAllCertifications. Also, the class you're using will default to an unsecured connection if nextUrl is http and to a secure one if it is https.
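For reference, a callback with the signature that RemoteCertificateValidationCallback expects might look like this (a sketch; blindly returning true disables certificate validation and should only be used for testing):

private static bool AcceptAllCertifications(
    object sender,
    System.Security.Cryptography.X509Certificates.X509Certificate certificate,
    System.Security.Cryptography.X509Certificates.X509Chain chain,
    System.Net.Security.SslPolicyErrors sslPolicyErrors)
{
    // Accept every server certificate - do NOT ship this to production.
    return true;
}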
I am trying to write code that will authenticate to the website wallbase.cc. I've looked at what it does using Firebug/Chrome developer tools and it seems fairly easy:
POST "usrname=$USER&pass=$PASS&nopass_email=Type+in+your+e-mail+and+press+enter&nopass=0" to the page "http://wallbase.cc/user/login", store the returned cookies and use them on all future requests.
Here is my code:
private CookieContainer _cookies = new CookieContainer();
//......
HttpPost("http://wallbase.cc/user/login", string.Format("usrname={0}&pass={1}&nopass_email=Type+in+your+e-mail+and+press+enter&nopass=0", Username, assword));
//......
private string HttpPost(string url, string parameters)
{
    try
    {
        System.Net.WebRequest req = System.Net.WebRequest.Create(url);
        // Add these, as we're doing a POST
        req.ContentType = "application/x-www-form-urlencoded";
        req.Method = "POST";
        ((HttpWebRequest)req).Referer = "http://wallbase.cc/home/";
        ((HttpWebRequest)req).CookieContainer = _cookies;
        // We need to count how many bytes we're sending. POSTed form data should be name=value& pairs.
        byte[] bytes = System.Text.Encoding.ASCII.GetBytes(parameters);
        req.ContentLength = bytes.Length;
        System.IO.Stream os = req.GetRequestStream();
        os.Write(bytes, 0, bytes.Length); // push it out there
        os.Close();
        // get response
        using (System.Net.WebResponse resp = req.GetResponse())
        {
            if (resp == null) return null;
            using (Stream st = resp.GetResponseStream())
            using (var sr = new System.IO.StreamReader(st))
            {
                return sr.ReadToEnd().Trim();
            }
        }
    }
    catch (Exception)
    {
        return null;
    }
}
After calling HttpPost with my login parameters I would expect all future calls using this same method to be authenticated (assuming a valid username/password). I do get a session cookie in my cookie collection but for some reason I'm not authenticated. I get a session cookie in my cookie collection regardless of which page I visit so I tried loading the home page first to get the initial session cookie and then logging in but there was no change.
To my knowledge this Python version works: https://github.com/sevensins/Wallbase-Downloader/blob/master/wallbase.sh (line 336)
Any ideas on how to get authentication working?
Update #1
When using a correct user/password pair, the response automatically redirects to the referrer, but when an incorrect pair is submitted it does not redirect and returns a bad user/pass message. Based on this it seems as though authentication is happening, but maybe not all the key pieces of information are being saved?
Update #2
I am using .NET 3.5. When I tried the above code in .NET 4, with the added line System.Net.ServicePointManager.Expect100Continue = false (which was in my code, just not shown here), it works with no other changes necessary. The problem seems to stem directly from some pre-.NET 4 issue.
This is based on code from one of my projects, as well as code found from various answers here on stackoverflow.
First we need to set up a cookie-aware WebClient that is going to use HTTP 1.0.
public class CookieAwareWebClient : WebClient
{
    private CookieContainer cookie = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);
        var httpRequest = request as HttpWebRequest;
        if (httpRequest != null)
        {
            // Force HTTP 1.0 and attach the shared cookie container.
            httpRequest.ProtocolVersion = HttpVersion.Version10;
            httpRequest.CookieContainer = cookie;
        }
        return request;
    }
}
Next we set up the code that handles the Authentication and then finally loads the response.
var client = new CookieAwareWebClient();
client.UseDefaultCredentials = true;
client.BaseAddress = @"http://wallbase.cc";

var loginData = new NameValueCollection();
loginData.Add("usrname", "test");
loginData.Add("pass", "123");
loginData.Add("nopass_email", "Type in your e-mail and press enter");
loginData.Add("nopass", "0");

var result = client.UploadValues(@"http://wallbase.cc/user/login", "POST", loginData);
string response = System.Text.Encoding.UTF8.GetString(result);
We can try this out using the HTML Visualizer built into Visual Studio while in debug mode, and use it to confirm that we were able to authenticate and load the home page while staying authenticated.
The key here is to set up a CookieContainer and use HTTP 1.0, instead of 1.1. I am not entirely sure why forcing it to use 1.0 allows you to authenticate and load the page successfully, but part of the solution is based on this answer.
https://stackoverflow.com/a/10916014/408182
I used Fiddler to make sure that the request sent by the C# client was the same as the one sent by my web browser (Chrome). It also let me confirm that the C# client was being redirected correctly. In this case we can see that with HTTP 1.0 we get HTTP/1.0 302 Found and are then redirected to the home page as intended. If we switch back to HTTP 1.1, we get an HTTP/1.1 417 Expectation Failed message instead.
There is some information on this error message available in this stackoverflow thread.
HTTP POST Returns Error: 417 "Expectation Failed."
Edit: Hack/Fix for .NET 3.5
I have spent a lot of time trying to figure out the difference between 3.5 and 4.0, but I seriously have no clue. It looks like 3.5 is creating a new cookie after the authentication and the only way I found around this was to authenticate the user twice.
I also had to make some changes on the WebClient based on information from this post.
http://dot-net-expertise.blogspot.fr/2009/10/cookiecontainer-domain-handling-bug-fix.html
public class CookieAwareWebClient : WebClient
{
    public CookieContainer cookies = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = base.GetWebRequest(address);
        var httpRequest = request as HttpWebRequest;
        if (httpRequest != null)
        {
            httpRequest.ProtocolVersion = HttpVersion.Version10;
            httpRequest.CookieContainer = cookies;

            // Work around the CookieContainer domain-handling bug described in
            // the post above: re-key each entry in the private domain table
            // without its leading dot so cookies are sent back to the same domain.
            var table = (Hashtable)cookies.GetType().InvokeMember(
                "m_domainTable",
                System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.GetField | System.Reflection.BindingFlags.Instance,
                null,
                cookies,
                new object[] { });
            var keys = new ArrayList(table.Keys);
            foreach (var key in keys)
            {
                var newKey = (key as string).Substring(1);
                table[newKey] = table[key];
            }
        }
        return request;
    }
}
var client = new CookieAwareWebClient();
var loginData = new NameValueCollection();
loginData.Add("usrname", "test");
loginData.Add("pass", "123");
loginData.Add("nopass_email", "Type in your e-mail and press enter");
loginData.Add("nopass", "0");

// Hack: authenticate the user twice!
client.UploadValues(@"http://wallbase.cc/user/login", "POST", loginData);
var result = client.UploadValues(@"http://wallbase.cc/user/login", "POST", loginData);
string response = System.Text.Encoding.UTF8.GetString(result);
You may need to add the following:
// get response
using (var resp = (HttpWebResponse)req.GetResponse())
{
    foreach (Cookie c in resp.Cookies)
        _cookies.Add(c);
    // Do other stuff with response....
}
Another thing you might have to do: if the server responds with a 302 (redirect), the .NET web request will automatically follow it, and in the process you might lose the cookie you're after. You can turn off this behavior with the following code:
req.AllowAutoRedirect = false;
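With auto-redirect off, you can then capture the cookies and follow the redirect yourself (a sketch reusing the question's _cookies container and req variable; the exact status code and Location handling are assumptions):

using (var resp = (HttpWebResponse)req.GetResponse())
{
    foreach (Cookie c in resp.Cookies)
        _cookies.Add(c);

    if (resp.StatusCode == HttpStatusCode.Found) // 302
    {
        // Follow the redirect manually, sending the cookies we just stored.
        var location = new Uri(resp.ResponseUri, resp.Headers["Location"]);
        var next = (HttpWebRequest)WebRequest.Create(location);
        next.CookieContainer = _cookies;
        // ... read next.GetResponse() as before
    }
}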
The Python script you reference uses a different referrer (http://wallbase.cc/start/). It is also followed by another POST, to http://wallbase.cc/user/adult_confirm/1. Try the other referrer and follow up with this POST.
I think you are authenticating correctly, but the site needs more info/assertions from you before proceeding.
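A sketch of those two changes on top of the question's HttpPost helper (both URLs come from the referenced script; whether the confirm step is actually required is an assumption):

// In HttpPost, use the same referrer as the script:
((HttpWebRequest)req).Referer = "http://wallbase.cc/start/";

// After a successful login, send the follow-up POST the script performs:
HttpPost("http://wallbase.cc/user/adult_confirm/1", "");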
How can I check whether a page exists at a given URL?
I have this code:
private void check(string path)
{
    try
    {
        Uri uri = new Uri(path);
        WebRequest request = WebRequest.Create(uri);
        request.Timeout = 3000;
        using (WebResponse response = request.GetResponse())
        {
            // If no exception was thrown, something answered at this URL.
        }
    }
    catch (Exception loi)
    {
        MessageBox.Show(loi.Message);
    }
}
But that gives an error message about the proxy. :(
First, you need to understand that your question is at least twofold:
- You must first check whether the server is responsive, using ping for example. While doing this, consider the timeout: after how long will you consider a page as not existing?
- Second, try retrieving the page using one of the many methods available on Google. Again, consider the timeout: if the server takes long to reply, the page might still "be there" but the server may simply be under a lot of pressure.
A sketch combining both checks is below.
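This sketch assumes a 3-second timeout is acceptable and that the host answers ICMP pings, which many servers do not (requires System.Net and System.Net.NetworkInformation):

private static bool PageExists(string path)
{
    var uri = new Uri(path);

    // First check: is the server responsive at all? A failed ping is not
    // conclusive on its own, since many hosts block ICMP.
    using (var ping = new Ping())
    {
        PingReply reply = ping.Send(uri.Host, 3000);
        if (reply == null || reply.Status != IPStatus.Success)
            return false;
    }

    // Second check: request the page itself, headers only, with a timeout.
    var request = (HttpWebRequest)WebRequest.Create(uri);
    request.Method = "HEAD";
    request.Timeout = 3000;
    try
    {
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            return response.StatusCode == HttpStatusCode.OK;
        }
    }
    catch (WebException)
    {
        // 404, DNS failure, timeout, etc. - treat as "not there".
        return false;
    }
}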
If the proxy needs to authenticate you with your Windows credentials (e.g. you are in a corporate network) use:
WebRequest request=WebRequest.Create(url);
request.UseDefaultCredentials=true;
request.Proxy.Credentials=request.Credentials;
try
{
    Uri uri = new Uri(path);
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
    request.Timeout = 3000;
    HttpWebResponse response = (HttpWebResponse)request.GetResponse();
    if (response.StatusCode == HttpStatusCode.OK)
    {
        // great - something is there
    }
}
catch (Exception loi)
{
    MessageBox.Show(loi.Message);
}
You can check the content type and length; see HttpWebResponse on MSDN.
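For example, a small sketch on top of the question's code:

var response = (HttpWebResponse)request.GetResponse();
// A real page usually reports an HTML content type; ContentLength can be -1
// when the server streams the body without declaring a length.
bool looksLikePage = response.ContentType.StartsWith("text/html")
                     && response.ContentLength != 0;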
At a guess, without knowing the specific error message or path, you could try casting the WebRequest to a HttpWebRequest and then setting the WebProxy.
See MSDN: HttpWebRequest - Proxy Property
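A sketch of that approach (the proxy address and port are placeholders):

var httpRequest = (HttpWebRequest)WebRequest.Create(uri);
httpRequest.Timeout = 3000;
// Route the request through an explicit proxy instead of the system default.
httpRequest.Proxy = new WebProxy("http://your-proxy-address:8080");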