We are using one load-balancing server that redirects us to four different servers, plus two other servers that are not in the load balancer's farm. According to the system administrator, the configuration looks complete and correct.
The problem I face is that when I make an HTTP request and try to read the response headers, the load-balanced "link" (the URL exposed by the load balancer) does not give me the response I expect. I use the code below as an example.
try
{
    var myHttpWebRequest = WebRequest.Create(url);
    myHttpWebRequest.Credentials = credentials;
    var myHttpWebResponse = myHttpWebRequest.GetResponse();
    myHttpWebResponse.Close();
}
catch (WebException e)
{
    Console.WriteLine("WebException: " + url);
    Console.WriteLine("Web Exception Status: " + e.Status);
    Console.WriteLine("\n");
    if (e.Status == WebExceptionStatus.ProtocolError)
    {
        Console.WriteLine(((HttpWebResponse)e.Response).Headers);
    }
}
Now, the WebExceptionStatus I get when I execute the code against the load-balanced server is "SendFailure", which does not give me any response.Headers result.
However, if I install and use curl,
curl [load balance server url] -k -I
on the command line, I do get the response headers back when I use the load balancer's URL.
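For reference, a rough C# counterpart of that curl call is sketched below; it reuses the url and credentials variables from the snippet above, the certificate callback mirrors curl's -k (skip certificate validation, for testing only) and the HEAD method mirrors -I. It is an untested sketch, not a fix.
// Rough C# counterpart of `curl [load balance server url] -k -I` (untested sketch).
// Requires: using System; using System.Net;
ServicePointManager.ServerCertificateValidationCallback =
    (sender, cert, chain, errors) => true;                // like curl's -k: accept any certificate (testing only)

var headRequest = (HttpWebRequest)WebRequest.Create(url); // the load-balanced URL
headRequest.Method = "HEAD";                              // like curl's -I: ask for headers only
headRequest.Credentials = credentials;

using (var headResponse = (HttpWebResponse)headRequest.GetResponse())
{
    Console.WriteLine(headResponse.Headers);              // X-FEServer, WWW-Authenticate, Date, ...
}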
In conclusion, my goal is to retrieve and use the WebExceptionStatus together with the "X-FEServer", "WWW-Authenticate" and "Date" headers for some purposes, but with the C# approach above they cannot be retrieved.
Please make sure that you understand the load-balanced server setup before commenting or answering. Any suggestion will be helpful.
TL;DR version
When a transfer error occurs while writing to the request stream, I can't access the response, even though the server sends it.
Full version
I have a .NET application that uploads files to a Tomcat server, using HttpWebRequest. In some cases, the server closes the request stream prematurely (because it refuses the file for one reason or another, e.g. an invalid filename), and sends a 400 response with a custom header to indicate the cause of the error.
The problem is that if the uploaded file is large, the request stream is closed before I finish writing the request body, and I get an IOException:
Message: Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
InnerException: SocketException: An existing connection was forcibly closed by the remote host
I can catch this exception, but then, when I call GetResponse, I get a WebException with the previous IOException as its inner exception, and a null Response property. So I can never get the response, even though the server sends it (checked with WireShark).
Since I can't get the response, I don't know what the actual problem is. From my application point of view, it looks like the connection was interrupted, so I treat it as a network-related error and retry the upload... which, of course, fails again.
How can I work around this issue and retrieve the actual response from the server? Is it even possible? To me, the current behavior looks like a bug in HttpWebRequest, or at least a severe design issue...
Here's the code I used to reproduce the problem:
var request = HttpWebRequest.CreateHttp(uri);
request.Method = "POST";
string filename = "foo\u00A0bar.dat"; // Invalid characters in filename, the server will refuse it
request.Headers["Content-Disposition"] = string.Format("attachment; filename*=utf-8''{0}", Uri.EscapeDataString(filename));
request.AllowWriteStreamBuffering = false;
request.ContentType = "application/octet-stream";
request.ContentLength = 100 * 1024 * 1024;

// Upload the "file" (just random data in this case)
try
{
    using (var stream = request.GetRequestStream())
    {
        byte[] buffer = new byte[1024 * 1024];
        new Random().NextBytes(buffer);
        for (int i = 0; i < 100; i++)
        {
            stream.Write(buffer, 0, buffer.Length);
        }
    }
}
catch (Exception ex)
{
    // here I get an IOException; InnerException is a SocketException
    Console.WriteLine("Error writing to stream: {0}", ex);
}

// Now try to read the response
try
{
    using (var response = (HttpWebResponse)request.GetResponse())
    {
        Console.WriteLine("{0} - {1}", (int)response.StatusCode, response.StatusDescription);
    }
}
catch (Exception ex)
{
    // here I get a WebException; InnerException is the IOException from the previous catch
    Console.WriteLine("Error getting the response: {0}", ex);
    var webEx = ex as WebException;
    if (webEx != null)
    {
        Console.WriteLine(webEx.Status); // SendFailure
        var response = (HttpWebResponse)webEx.Response;
        if (response != null)
        {
            Console.WriteLine("{0} - {1}", (int)response.StatusCode, response.StatusDescription);
        }
        else
        {
            Console.WriteLine("No response");
        }
    }
}
Additional notes:
If I correctly understand the role of the 100 Continue status, the server shouldn't send it to me if it's going to refuse the file. However, it seems that this status is controlled directly by Tomcat, and can't be controlled by the application. Ideally, I'd like the server not to send me 100 Continue in this case, but according to my colleagues in charge of the back-end, there is no easy way to do it. So I'm looking for a client-side solution for now; but if you happen to know how to solve the problem on the server side, it would also be appreciated.
The app in which I encounter the issue targets .NET 4.0, but I also reproduced it with 4.5.
I'm not timing out. The exception is thrown long before the timeout.
I tried an async request. It doesn't change anything.
I tried setting the request protocol version to HTTP 1.0, with the same result.
Someone else has already filed a bug on Connect for this issue: https://connect.microsoft.com/VisualStudio/feedback/details/779622/unable-to-get-servers-error-response-when-uploading-file-with-httpwebrequest
I am out of ideas as to what a client-side solution to your problem could be, but I still think the server-side solution of using a custom Tomcat valve can help here. I currently don't have a Tomcat setup where I can test this, but I think a server-side solution would be along the following lines:
RFC 2616, section 8.2.3, clearly states:
Requirements for HTTP/1.1 origin servers:
- Upon receiving a request which includes an Expect request-header
field with the "100-continue" expectation, an origin server MUST
either respond with 100 (Continue) status and continue to read
from the input stream, or respond with a final status code. The
origin server MUST NOT wait for the request body before sending
the 100 (Continue) response. If it responds with a final status
code, it MAY close the transport connection or it MAY continue
to read and discard the rest of the request. It MUST NOT
perform the requested method if it returns a final status code.
So, assuming Tomcat conforms to the RFC: inside the custom valve you will have received the HTTP request headers, but the request body will not have been sent yet, since control has not yet reached the servlet that reads the body.
So you can probably implement a custom valve, something similar to:
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;

import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

public class CustomUploadHandlerValve extends ValveBase {

    @Override
    public void invoke(Request request, Response response) throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        String fileName = httpRequest.getHeader("Filename"); // get the filename or whatever other parameters your code requires
        boolean validationSuccess = validate(fileName);      // perform the filename check or any other validation here
        if (!validationSuccess)
        {
            // send your custom 400 response here, without invoking the rest of the chain
            response.sendError(400, "Invalid filename");
        }
        else
        {
            getNext().invoke(request, response); // pass on to the next valve / the servlet in the chain
        }
    }

    ...
}
DISCLAIMER: Again, I haven't tried this to success; I need some time and a Tomcat setup to try it out ;).
Thought it might be a starting point for you.
I had the same problem. The server sends a response before the client has finished transmitting the request body when I try to do an async request. After a series of experiments, I found a workaround.
Once the request has completed or failed, I use reflection to check the private field _CoreResponse of the HttpWebRequest. If it is an object of the CoreResponseData class, I read its private fields (again via reflection): m_StatusCode, m_StatusDescription, m_ResponseHeaders, m_ContentLength. They contain the information about the server's response!
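A minimal sketch of that hack follows; note that _CoreResponse, CoreResponseData and the m_* names are private implementation details of the .NET Framework's HttpWebRequest, so they can change between framework versions.
// Requires: using System; using System.Net; using System.Reflection;
static void DumpCoreResponse(HttpWebRequest request)
{
    // _CoreResponse is a private field of HttpWebRequest (implementation detail).
    FieldInfo coreField = typeof(HttpWebRequest)
        .GetField("_CoreResponse", BindingFlags.Instance | BindingFlags.NonPublic);
    object core = coreField == null ? null : coreField.GetValue(request);

    // Only an instance of the internal CoreResponseData class carries the response info.
    if (core == null || core.GetType().Name != "CoreResponseData")
        return;

    Type coreType = core.GetType();
    foreach (string name in new[] { "m_StatusCode", "m_StatusDescription", "m_ContentLength", "m_ResponseHeaders" })
    {
        FieldInfo field = coreType.GetField(name, BindingFlags.Instance | BindingFlags.NonPublic);
        if (field != null)
            Console.WriteLine("{0} = {1}", name, field.GetValue(core));
    }
}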
In most cases, this hack works!
What are you getting in the Status and Response properties of the second exception (not the inner exception)?
If a WebException is thrown, use the Response and Status properties of the exception to determine the response from the server.
http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.getresponse(v=vs.110).aspx
You are not saying exactly which version of Tomcat 7 you are using...
checked with WireShark
What do you actually see with WireShark?
Do you see the status line of response?
Do you see the complete status line, up to CR-LF characters at its end?
Is Tomcat asking for authentication credentials (401), or it is refusing file upload for some other reason (first acknowledging it with 100 but then aborting it mid-flight)?
The problem is that if the uploaded file is large, the request stream
is closed before I finish writing the request body, and I get an IOException:
If you do not want the connection to be closed, but rather want all the data to be transferred over the wire and swallowed at the server side, then on Tomcat 7.0.55 and later it is possible to configure the maxSwallowSize attribute on the HTTP connector, e.g. maxSwallowSize="-1".
http://tomcat.apache.org/tomcat-7.0-doc/config/http.html
If you want to discuss the Tomcat side of connection handling, you had better ask on the Tomcat users' mailing list:
http://tomcat.apache.org/lists.html#tomcat-users
On the .NET side:
Is it possible to perform stream.Write() and request.GetResponse() simultaneously, from different threads?
Is it possible to perform some checks at the client side before actually uploading the file?
Hmmm... I don't get it - that is EXACTLY why, in many real-life scenarios, large files are uploaded in chunks (and not as a single large request).
By the way, many internet servers have size limitations. For instance, in Tomcat that is represented by maxPostSize (as seen in this link: http://tomcat.apache.org/tomcat-5.5-doc/config/http.html).
So tweaking the server configuration seems like the easy way out, but I do think that the right way is to split the file across several requests (see the sketch after the edit below).
EDIT: replace Uri.EscapeDataString with HttpServerUtility.UrlEncode
Uri.EscapeDataString(filename) // a problematic .net implementation
HttpServerUtility.UrlEncode(filename) // the proper way to do it
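To illustrate the chunked approach, here is a minimal sketch. The X-Chunk-Index / X-Chunk-Count headers and the 5 MB chunk size are hypothetical; the actual chunking protocol depends entirely on what your server expects.
// Requires: using System; using System.IO; using System.Net;
const int ChunkSize = 5 * 1024 * 1024;   // hypothetical chunk size

static void UploadInChunks(string url, string path)
{
    byte[] buffer = new byte[ChunkSize];
    long totalChunks = (new FileInfo(path).Length + ChunkSize - 1) / ChunkSize;

    using (FileStream file = File.OpenRead(path))
    {
        for (long index = 0; index < totalChunks; index++)
        {
            int read = file.Read(buffer, 0, buffer.Length);

            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "POST";
            request.ContentType = "application/octet-stream";
            request.ContentLength = read;
            request.Headers["X-Chunk-Index"] = index.ToString();       // hypothetical header
            request.Headers["X-Chunk-Count"] = totalChunks.ToString(); // hypothetical header

            using (Stream body = request.GetRequestStream())
                body.Write(buffer, 0, read);

            using (var response = (HttpWebResponse)request.GetResponse())
                Console.WriteLine("Chunk {0}/{1}: {2}", index + 1, totalChunks, (int)response.StatusCode);
        }
    }
}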
I am experiencing a pretty similar problem at the moment, also with Tomcat and a Java client. The Tomcat REST service sends an HTTP return code with a response body before reading the whole request body. The client, however, fails with an IOException. I inserted an HTTP proxy on the client to sniff the protocol, and the HTTP response is actually sent to the client eventually. Most likely Tomcat closed the request input stream before sending the response.
One solution is to use a different HTTP server, such as Jetty, which does not have this problem. The other solution is to add an Apache HTTP Server with AJP in front of Tomcat. Apache HTTP Server handles streams differently, and with that the problem goes away.
As you all know, there are many file-hosting websites. Is there a way to process the HTTP link of a file on one of those sites and find out whether the file exists, or whether the link itself even exists? I know that some of those file-hosting websites expose their own APIs, but I want a more generic way.
Edit:
So, as I understand it, there is no file check on the server as such; I just have to make the request and read the response properly. I want to ask another thing: what about redirection? If I get the response of a link that redirects to another link, will I get the final target from the response?
You can find out if a local file exists using the Exists method:
bool System.IO.File.Exists(string path)
In order to find out if a file exists on a remote server, you can try this:
WebRequest request;
WebResponse response;
String strMSG = string.Empty;
request = WebRequest.Create(new Uri("http://www.yoururl.com/yourfile.jpg"));
request.Method = "HEAD";
try
{
    response = request.GetResponse();
    strMSG = string.Format("{0} {1}", response.ContentLength, response.ContentType);
}
catch (Exception ex)
{
    // In case the file does not exist, the server returns a (404) error
    strMSG = ex.Message;
}
see this:
If I understand you correctly, you're trying to tell if a given URL has content.
Use the WebClient class.
Call the URL; if you receive a 200, you're good to go. A 404 exception or similar probably means the link is no good.
Or, an even better way to do this is to issue an HTTP HEAD request. See here for more info on that.
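A small sketch of that approach follows. With AllowAutoRedirect left at its default of true, HttpWebResponse.ResponseUri holds the final target after any redirects, which also answers the redirection question in the edit above.
// Requires: using System; using System.Net;
static void CheckLink(string url)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "HEAD";   // headers only, no body is downloaded

    try
    {
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Status:    {0}", (int)response.StatusCode); // 200 -> the resource exists
            Console.WriteLine("Final URL: {0}", response.ResponseUri);     // where any redirects ended up
        }
    }
    catch (WebException ex)
    {
        var response = ex.Response as HttpWebResponse;
        if (response != null)
            Console.WriteLine("Status: {0}", (int)response.StatusCode);    // e.g. 404 -> the link is dead
        else
            Console.WriteLine("Network error: {0}", ex.Status);
    }
}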
I need to check that our visitors are using HTTPS. In BasePage I check if the request is coming via HTTPS. If it's not, I redirect back with HTTPS. However, when someone comes to the site and this function is used, I get the error:
System.Web.HttpException: Server cannot append header after HTTP headers have been sent.
   at System.Web.HttpResponse.AppendHeader(String name, String value)
   at System.Web.HttpResponse.AddHeader(String name, String value)
   at Premier.Payment.Website.Generic.BasePage..ctor()
Here is the code I started with:
// If page not currently SSL
if (HttpContext.Current.Request.ServerVariables["HTTPS"].Equals("off"))
{
    // If SSL is required
    if (GetConfigSetting("SSLRequired").ToUpper().Equals("TRUE"))
    {
        string redi = "https://" +
            HttpContext.Current.Request.ServerVariables["SERVER_NAME"].ToString() +
            HttpContext.Current.Request.ServerVariables["SCRIPT_NAME"].ToString() +
            "?" + HttpContext.Current.Request.ServerVariables["QUERY_STRING"].ToString();
        HttpContext.Current.Response.Redirect(redi.ToString());
    }
}
I also tried adding this above it (a bit I used in another site for a similar problem):
// Wait until page is completely loaded before sending anything, since we re-build
HttpContext.Current.Response.BufferOutput = true;
I am using c# in .NET 3.5 on IIS 6.
Chad,
Did you try ending the response when you redirect? There is a second parameter on Redirect that you'd set to true to tell the output to stop when the redirect header is issued. Or, if you are buffering the output, then maybe you need to clear the buffer before doing the redirect so the headers are not sent out along with the redirect header.
Brian
This error usually means that something has been written to the response stream before the redirection is initiated. So you should make sure that the test for HTTPS is done fairly high up in the page load function.
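Putting both suggestions together, a minimal sketch (assuming the check is moved early enough in the page lifecycle, e.g. Page_Init, that nothing has been written to the response yet; GetConfigSetting is the helper from the question):
if (HttpContext.Current.Request.ServerVariables["HTTPS"].Equals("off") &&
    GetConfigSetting("SSLRequired").ToUpper().Equals("TRUE"))
{
    string redi = "https://" +
        HttpContext.Current.Request.ServerVariables["SERVER_NAME"] +
        HttpContext.Current.Request.ServerVariables["SCRIPT_NAME"] +
        "?" + HttpContext.Current.Request.ServerVariables["QUERY_STRING"];

    HttpContext.Current.Response.Clear();               // drop anything already buffered
    HttpContext.Current.Response.Redirect(redi, true);  // 'true' ends the response right away
}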
I've encountered an issue with HttpWebRequest that if the URI is over 2048 characters long the request fails and returns a 404 error even though the server is perfectly capable of servicing a request with a URI that long. I know this since the same URI that causes an error if submitted via HttpWebRequest works fine when pasted directly into a browser address bar.
My current workaround is to allow users to set a compatibility flag to say that it's safe to send the parameters as a POST request instead, in cases where the URI would be too long, but this is not ideal since the protocol I'm using is RESTful and GET should be used for queries. Plus, there is no guarantee that other implementers of the protocol will accept POSTed queries.
Is there another class in .Net that has equivalent functionality to HttpWebRequest that doesn't suffer from the URI length limit that I could use?
I'm aware of WebClient, but I don't really want to use it, as I need to be able to fully control the HTTP headers, which WebClient restricts.
Edit
Because Shoban asked for it:
http://localhost/BBCDemo/sparql/?query=PREFIX+rdf%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F1999%2F02%2F22-rdf-syntax-ns%23%3E%0D%0APREFIX+rdfs%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2000%2F01%2Frdf-schema%23%3E%0D%0APREFIX+xsd%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2001%2FXMLSchema%23%3E%0D%0APREFIX+skos%3A+%3Chttp%3A%2F%2Fwww.w3.org%2F2004%2F02%2Fskos%2Fcore%23%3E%0D%0APREFIX+dc%3A+%3Chttp%3A%2F%2Fpurl.org%2Fdc%2Felements%2F1.1%2F%3E%0D%0APREFIX+po%3A+%3Chttp%3A%2F%2Fpurl.org%2Fontology%2Fpo%2F%3E%0D%0APREFIX+timeline%3A+%3Chttp%3A%2F%2Fpurl.org%2FNET%2Fc4dm%2Ftimeline.owl%23%3E%0D%0ASELECT+*+WHERE+{%0D%0A++++%3Chttp%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fb00n4d6y%23programme%3E+dc%3Atitle+%3Ftitle+.%0D%0A++++%3Chttp%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fb00n4d6y%23programme%3E+po%3Ashort_synopsis+%3Fsynopsis-short+.%0D%0A++++%3Chttp%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fb00n4d6y%23programme%3E+po%3Amedium_synopsis+%3Fsynopsis-med+.%0D%0A++++%3Chttp%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fb00n4d6y%23programme%3E+po%3Along_synopsis+%3Fsynopsis-long+.%0D%0A++++%3Chttp%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fb00n4d6y%23programme%3E+po%3Amasterbrand+%3Fchannel+.%0D%0A++++%3Chttp%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fb00n4d6y%23programme%3E+po%3Agenre+%3Fgenre+.%0D%0A++++%3Fchannel+dc%3Atitle+%3Fchanneltitle+.%0D%0A++++OPTIONAL+{%0D%0A++++++++%3Chttp%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fb00n4d6y%23programme%3E+po%3Abrand+%3Fbrand+.%0D%0A++++++++%3Fbrand+dc%3Atitle+%3Fbrandtitle+.%0D%0A++++}%0D%0A++++OPTIONAL+{%0D%0A++++++++%3Chttp%3A%2F%2Fwww.bbc.co.uk%2Fprogrammes%2Fb00n4d6y%23programme%3E+po%3Aversion+%3Fver+.%0D%0A++++++++%3Fver+po%3Atime+%3Finterval+.%0D%0A++++++++%3Finterval+timeline%3Astart+%3Fstart+.%0D%0A++++++++%3Finterval+timeline%3Aend+%3Fend+.%0D%0A++++}%0D%0A}&default-graph-uri=&timeout=30000
Which is the following encoded onto the querystring:
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX dc: <http://purl.org/dc/elements/1.1/>
PREFIX po: <http://purl.org/ontology/po/>
PREFIX timeline: <http://purl.org/NET/c4dm/timeline.owl#>
SELECT * WHERE {
<http://www.bbc.co.uk/programmes/b00n4d6y#programme> dc:title ?title .
<http://www.bbc.co.uk/programmes/b00n4d6y#programme> po:short_synopsis ?synopsis-short .
<http://www.bbc.co.uk/programmes/b00n4d6y#programme> po:medium_synopsis ?synopsis-med .
<http://www.bbc.co.uk/programmes/b00n4d6y#programme> po:long_synopsis ?synopsis-long .
<http://www.bbc.co.uk/programmes/b00n4d6y#programme> po:masterbrand ?channel .
<http://www.bbc.co.uk/programmes/b00n4d6y#programme> po:genre ?genre .
?channel dc:title ?channeltitle .
OPTIONAL {
<http://www.bbc.co.uk/programmes/b00n4d6y#programme> po:brand ?brand .
?brand dc:title ?brandtitle .
}
OPTIONAL {
<http://www.bbc.co.uk/programmes/b00n4d6y#programme> po:version ?ver .
?ver po:time ?interval .
?interval timeline:start ?start .
?interval timeline:end ?end .
}
}
the protocol I'm using is RESTful and GET should be used for queries.
There's no reason POST can't also be used for queries; for really long request data you have to, as very long URIs aren't globally supported, and never have been. This is one area where HTTP does not live up to the REST ideal.
The reason POST generally isn't used on a plain-HTML level is to stop the browser prompting for reloads, and to promote e.g. bookmarking. But with HttpWebRequest you don't have either of those concerns, so go ahead and POST it. Web applications should use a parameter or a URI path part to distinguish write requests from queries, not merely the request method. (Of course a write request via the GET method should still be denied.)
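For example, here is a sketch of POSTing the SPARQL query from the question instead of putting it in the query string. The endpoint and the query/default-graph-uri/timeout parameter names come from the URL above; the "query.rq" file and the Accept header value are assumptions for illustration.
// Requires: using System; using System.IO; using System.Net; using System.Text;
string sparql = File.ReadAllText("query.rq");   // the SPARQL text shown above, stored in a file for brevity
byte[] body = Encoding.UTF8.GetBytes(
    "query=" + Uri.EscapeDataString(sparql) + "&default-graph-uri=&timeout=30000");

var request = (HttpWebRequest)WebRequest.Create("http://localhost/BBCDemo/sparql/");
request.Method = "POST";
request.ContentType = "application/x-www-form-urlencoded";
request.ContentLength = body.Length;
request.Accept = "application/sparql-results+xml";   // assumed result format; full header control, unlike WebClient

using (Stream stream = request.GetRequestStream())
    stream.Write(body, 0, body.Length);

using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
    Console.WriteLine(reader.ReadToEnd());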
I don't think HttpWebRequest is actually incompatible with GET URLs of the size you are talking about. I say this based on two things:
In my own work I use HttpWebRequest to send HTTP GET requests longer than 2048 characters without trouble. I'm not sure what my longest ones are, but we're talking 10,000+ characters. (This is primarily between a web application and an instance of Solr running under Tomcat.)
.NET does have some limits on GET URL lengths, but the ones I'm aware of are much higher than 2048 characters. For example, I learned today from my profiler that WebRequest.Create(string url) calls the Uri class constructor, and that is documented to throw a UriFormatException if "the length of uriString exceeds 65534 characters."
I'm not sure where your problem might be, if it's not HttpWebRequest itself. Do you know under what conditions your web service will return HTTP 404 (i.e. "not found")? (I assume the 404 is coming from your web service, rather than being faked inside the depths of .NET.) I'd also want to double-check that the address you're pasting into the browser is actually the same one that's being sent by .NET; as feroze suggested, you should use a network sniffing tool for this. If the two addresses are the same, then maybe next compare how the HTTP headers vary between the .NET case and the browser case. (Incidentally, I personally find Fiddler a bit handier than wireshark for HTTP debugging tasks along these lines.)
See also this somewhat related question: How does HttpWebRequest differ (functionally) from pasting a URL into an address bar?
Here's a snippet which constructs HttpWebRequest instances with bigger and bigger url values until an exception gets thrown:
using System.Net;
using System.Text;
...
StringBuilder url = new StringBuilder("http://example.com?p=");
try
{
    for (int i = 1; i < Int32.MaxValue; i++)
    {
        url.Append("0");
        HttpWebRequest request = HttpWebRequest.CreateHttp(url.ToString());
    }
}
catch (Exception ex)
{
    Console.Out.WriteLine("Error occurred at url length: " + url.Length);
    Console.Out.WriteLine(ex.GetType().ToString() + ": " + ex.Message);
    return;
}
Console.Out.WriteLine("Completed without error!");
On my machine (in LINQPad running .Net 4.5), this snippet outputs:
Error occurred at url length: 65520
System.UriFormatException: Invalid URI: The Uri string is too long.
Your query string is wrong according to RFC3986. '{' and '}' characters are not allowed in a URI.
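As a quick check, Uri.EscapeDataString does percent-encode those characters, so running the whole SPARQL text through it before appending it to the query string should yield an RFC 3986-compliant URI:
Console.WriteLine(Uri.EscapeDataString("WHERE { ?s ?p ?o }"));
// typically prints: WHERE%20%7B%20%3Fs%20%3Fp%20%3Fo%20%7D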