When I use the following code to send an HTTP GET request using sockets in C#:
IPEndPoint RHost = new IPEndPoint(IPAddress.Parse("xxx.xxx.xxx.xxx"), 80);
Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
socket.Connect(RHost);
String req = "GET / HTTP/1.1\r\nHost: awebsite.com\r\n\r\n";
socket.Send(Encoding.ASCII.GetBytes(req), SocketFlags.None);
int bytes = 0;
byte[] buffer = new byte[256];
var result = new StringBuilder();
do
{
    bytes = socket.Receive(buffer, 0, buffer.Length, SocketFlags.None);
    result.Append(Encoding.ASCII.GetString(buffer, 0, bytes));
}
while (bytes > 0);
I get
System.Net.Sockets.SocketException: 'An existing connection was
forcibly closed by the remote host'
and when I add a Connection: close header to the request, it works without any problem.
But using the Repeater tool in Burp Suite I am able to send the request and receive a response from the server without setting a Connection: close header.
Notes:
I have to use a socket.
I am facing this problem with just a few websites, not every website.
You are sending an HTTP/1.1 request. Without an explicit Connection: close there is an implicit Connection: keep-alive with HTTP/1.1 (different from HTTP/1.0). This means that the server may wait for new requests on the same TCP connection after the response is done and close the connection some time later if no new requests arrive. This later close might be done with a RST, which results in the error you see.
But your code expects the server to behave differently: it expects the server to close the connection once the response is done, not to wait for more requests, and not to close the idle connection with a RST.
To fix this you either have to adapt your request so that the server's behavior matches your expectations, or adjust your expectations.
The first can be done either by explicitly adding a Connection: close header or by simply using HTTP/1.0 instead of HTTP/1.1. The latter is recommended because it also results in simpler responses (no chunked responses).
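Either way, only the request string from the question needs to change; a minimal sketch reusing the placeholder host:

// Option 1: stay on HTTP/1.1 but ask the server to close the connection after the response.
String req = "GET / HTTP/1.1\r\nHost: awebsite.com\r\nConnection: close\r\n\r\n";

// Option 2: use HTTP/1.0, which implies a non-persistent connection and avoids chunked responses.
String req = "GET / HTTP/1.0\r\nHost: awebsite.com\r\n\r\n";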
If you instead want to adjust your expectations, you have to properly parse the server's response: first read the header, then check for Transfer-Encoding: chunked or Content-Length, and then read the response body based on what these headers say. For details see the HTTP standard.
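A rough sketch of that approach for the Content-Length case, continuing from the socket in the question (ASCII responses assumed; a chunked body would additionally require parsing each chunk-size line, which is not shown here):

// Read byte by byte until the blank line that terminates the header block.
var headerBuilder = new StringBuilder();
var one = new byte[1];
while (headerBuilder.Length < 4 ||
       headerBuilder.ToString(headerBuilder.Length - 4, 4) != "\r\n\r\n")
{
    if (socket.Receive(one, 0, 1, SocketFlags.None) == 0) break; // connection closed early
    headerBuilder.Append((char)one[0]);
}
string headers = headerBuilder.ToString();

// Find the Content-Length header.
int contentLength = 0;
foreach (string line in headers.Split(new[] { "\r\n" }, StringSplitOptions.RemoveEmptyEntries))
{
    if (line.StartsWith("Content-Length:", StringComparison.OrdinalIgnoreCase))
        contentLength = int.Parse(line.Substring("Content-Length:".Length).Trim());
}

// Read exactly contentLength bytes of body.
var body = new byte[contentLength];
int read = 0;
while (read < contentLength)
{
    int n = socket.Receive(body, read, contentLength - read, SocketFlags.None);
    if (n == 0) break;
    read += n;
}
string responseBody = Encoding.ASCII.GetString(body, 0, read);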
Related
I have a problem calling an external service with a POST method via HttpClient. When I call the external service to get data, HttpClient throws the following exception:
System.IO.IOException: Unable to read data from the transport connection: The connection was closed.
This error only occurs when the response message contains a larger amount of data. When I prepare requests that return a JSON with an empty array in it, the POST call proceeds correctly. But when I try a request whose result should be populated with a larger amount of data, I get this exception.
I have searched for a solution to this problem on the Internet, and I found one that says I should set System.Net.ServicePointManager.SecurityProtocol correctly with the following line:
System.Net.ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12;
But this didn't help in my case. I noticed that when I send my request through Postman, the service responds correctly, no matter how big the response payload is. In Postman I checked the headers in the requests, and after a while I noticed that when I do not include the Accept-Encoding: gzip header in my request, Postman also has problems returning the response.
I tried to use this in my code and force HttpClient to use the Accept-Encoding header, but with no success. Below is my current HttpClient configuration:
using (HttpClient client = new HttpClient(new HttpClientHandler
{
    AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate,
}))
{
    client.DefaultRequestHeaders.AcceptEncoding.Add(new StringWithQualityHeaderValue("gzip"));
    client.DefaultRequestHeaders.AcceptEncoding.Add(new StringWithQualityHeaderValue("deflate"));
    client.BaseAddress = _uri;
    client.DefaultRequestHeaders.Accept.Clear();
    client.Timeout = TimeSpan.FromSeconds(_timeout);
    client.DefaultRequestHeaders.Add("ApiKey", _apiKey);
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    client.DefaultRequestHeaders.ConnectionClose = false;
    System.Net.ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls12;
    HttpResponseMessage result = await client.PostAsync(uri, content);
    if (result.IsSuccessStatusCode)
    {
        string resultString = await result.Content.ReadAsStringAsync();
        model = JsonConvert.DeserializeObject<T>(resultString);
    }
    else
    {
        await ResponseErrorThrow(result);
    }
    return model;
}
I have no more ideas at the moment. Can you help me with this problem?
I recently faced the same issue and through logging around our retry logic we found that the request had gone through.
Through some research I found that the issue might be because the client-side connection stays open for longer than the default timeout of 100 seconds.
Unable to read data from the transport connection: Operation canceled
The client code tries to read the response even when the remote connection is closed, and that's when it throws this exception.
C# Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. Reading networkstream
As suggested in the answer above, the best way is to handle the exception thrown and check (if you can) that the request went through.
There are ways to check a TCP connection before reading from it, but one must remember that any such check is only valid at that point in time. A check can report the connection as alive, but that still doesn't mean the connection won't die while you are reading.
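A minimal sketch of that idea around the PostAsync call from the question; maxAttempts is a hypothetical value, and raising client.Timeout addresses the 100-second default mentioned above:

client.Timeout = TimeSpan.FromMinutes(5); // hypothetical value, instead of the 100 s default

const int maxAttempts = 3; // hypothetical retry count
for (int attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        HttpResponseMessage result = await client.PostAsync(uri, content);
        result.EnsureSuccessStatusCode();
        break; // success
    }
    catch (Exception ex) when (ex is HttpRequestException || ex is IOException)
    {
        // The request may still have reached the server, so log it and only retry
        // if the operation is idempotent or its outcome can be checked first.
        if (attempt == maxAttempts) throw;
    }
}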
I'm trying to write a UWP app that communicates with a server using MQTT over HTTPS. I use StreamSocket to send the whole MQTT packet over the wire, but I couldn't get any response from the server. If I attempt to resend the packet, the server terminates the connection. Using Wireshark, I can see that the server responded with a message Content-Type: Alert (21) with 26 bytes of data in it, but I couldn't read it through StreamSocket.InputStream.
var streamSocket = new StreamSocket();
Buffer packet;
// Building Mqtt packet.
await streamSocket.ConnectAsync(new HostName("server.com"), "443", SocketProtectionLevel.Tls12);
await streamSocket.OutputStream.WriteAsync(packet);
await Task.Delay(2000); // Give the server some time to respond
var inputBytes = new byte[2048];
var completion = await streamSocket.InputStream.ReadAsync(inputBytes.AsBuffer(),(uint) inputBytes.Length, InputStreamOptions.Partial);
// inputBytes still empty
I want to be able to read the bytes the server responded with. I think there is a way to access these bytes, but I could not find it anywhere.
Update: Added Wireshark result
It seems that the problem is related to HTTPS. Have you tried the UpgradeToSslAsync method? If your SSL/TLS connection is desired after sending and receiving some initial data, use the UpgradeToSslAsync method to upgrade the connection to use SSL, as follows:
await streamSocket.ConnectAsync(new HostName("server.com"), "443", Windows.Networking.Sockets.SocketProtectionLevel.PlainSocket);
await streamSocket.UpgradeToSslAsync(Windows.Networking.Sockets.SocketProtectionLevel.Tls12, hostName);
Thanks,
Amy Peng
I have created an HttpListener, and I need to read the data when the client sends it to me. The problem is that I don't know how the client should send the data.
HttpListener listener = new HttpListener();
listener.Prefixes.Add("http://192.168.1.26:8282/");
listener.Prefixes.Add("http://localhost:8282/");
listener.Prefixes.Add("http://127.0.0.1:8282/");
listener.Start();
new Thread(() =>
{
    Thread.CurrentThread.IsBackground = true;
    for (;;)
    {
        Console.WriteLine("Listening...");
        // Note: The GetContext method blocks while waiting for a request.
        HttpListenerContext context = listener.GetContext();
        HttpListenerRequest request = context.Request;
        string text;
        using (var reader = new StreamReader(request.InputStream,
                                             request.ContentEncoding))
        {
            text = reader.ReadToEnd();
            MessageBox.Show(text);
        }
        // Obtain a response object.
        HttpListenerResponse response = context.Response;
        // Construct a response.
        string responseString = "HelloWorld";
        byte[] buffer = System.Text.Encoding.UTF8.GetBytes(responseString);
        // Get a response stream and write the response to it.
        response.ContentLength64 = buffer.Length;
        System.IO.Stream output = response.OutputStream;
        output.Write(buffer, 0, buffer.Length);
        // You must close the output stream.
        output.Close();
    }
}).Start();
}
So from the client I send this command:
GET / 192.168.1.26:8282 HTTP/1.0
But I'm getting this message:
Recv 34 bytes
SEND OK
+IPD,1,518:HTTP/1.1 400 Bad Request
Content-Type: text/html; charset=us-ascii
Server: Microsoft-HTTPAPI/2.0
Date: Wed, 13 Jun 2018 13:16:03 GMT
Connection: close
Content-Length: 339
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid Header</h2>
<hr><p>HTTP Error 400. The request has an invalid header name.</p>
</BODY></HTML>
1,CLOSED
I can't understand what is wrong. Also, in my code I set it to show a message box every time a request happens, but it never runs.
This is what Mozilla is sending:
You are not invoking the service correctly. Here is your client request:
GET / 192.168.1.26:8282 HTTP/1.0
What you should be doing is first establishing a socket connection to host 192.168.1.26 over port 8282. Then you must issue an HTTP request in a valid format:
GET / HTTP/1.0
Don't forget to add some newlines after the request (i.e. \r\n\r\n). Then your web server should respond to the HTTP request.
Quick example in Telnet:
telnet 192.168.1.26 8282
GET / HTTP/1.0
Quick example with netcat:
nc 192.168.1.26 8282
GET / HTTP/1.0
Note that these quick examples are provided just to help you ensure that your web service is accessible and functioning correctly. Ideally, you would use a more robust HTTP client that is customized for your particular needs. The process is still the same (a minimal C# sketch follows the list):
Establish a connection to your host IP address over the listening port.
Issue an HTTP request in a valid format: (HTTP_VERB PATH HTTP_VERSION). Maybe also check out the developer tools in your browser of choice (F12 -> Network) to see how HTTP headers are sent.
Parse the response in some meaningful way.
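A minimal C# sketch of those steps, using TcpClient against the IP and port from the question (illustrative only, not the only way to do it):

using (var tcp = new TcpClient("192.168.1.26", 8282))
using (var stream = tcp.GetStream())
{
    // Step 1: connected. Step 2: issue a valid HTTP/1.0 request, terminated by a blank line.
    byte[] req = Encoding.ASCII.GetBytes("GET / HTTP/1.0\r\n\r\n");
    stream.Write(req, 0, req.Length);

    // Step 3: read and parse the response; with HTTP/1.0 the listener closes the
    // connection after responding, so ReadToEnd returns the status line, headers and body.
    using (var reader = new StreamReader(stream, Encoding.ASCII))
    {
        Console.WriteLine(reader.ReadToEnd());
    }
}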
"Also in my code i set to get a message box every time a request will happen." - You should try putting in a manual message to the message box, instead of reading from the input stream. This is a good debugging technique. In a HTTP GET request you generally are not sending data except in the form of optional query string parameters. I have a feeling that you are not getting the results you are expecting because you are reading from input that isn't there. Before reading from the stream input, first make sure that the connection is successful.
TL;DR version
When a transfer error occurs while writing to the request stream, I can't access the response, even though the server sends it.
Full version
I have a .NET application that uploads files to a Tomcat server, using HttpWebRequest. In some cases, the server closes the request stream prematurely (because it refuses the file for one reason or another, e.g. an invalid filename), and sends a 400 response with a custom header to indicate the cause of the error.
The problem is that if the uploaded file is large, the request stream is closed before I finish writing the request body, and I get an IOException:
Message: Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
InnerException: SocketException: An existing connection was forcibly closed by the remote host
I can catch this exception, but then, when I call GetResponse, I get a WebException with the previous IOException as its inner exception, and a null Response property. So I can never get the response, even though the server sends it (checked with WireShark).
Since I can't get the response, I don't know what the actual problem is. From my application point of view, it looks like the connection was interrupted, so I treat it as a network-related error and retry the upload... which, of course, fails again.
How can I work around this issue and retrieve the actual response from the server? Is it even possible? To me, the current behavior looks like a bug in HttpWebRequest, or at least a severe design issue...
Here's the code I used to reproduce the problem:
var request = HttpWebRequest.CreateHttp(uri);
request.Method = "POST";
string filename = "foo\u00A0bar.dat"; // Invalid characters in filename, the server will refuse it
request.Headers["Content-Disposition"] = string.Format("attachment; filename*=utf-8''{0}", Uri.EscapeDataString(filename));
request.AllowWriteStreamBuffering = false;
request.ContentType = "application/octet-stream";
request.ContentLength = 100 * 1024 * 1024;
// Upload the "file" (just random data in this case)
try
{
    using (var stream = request.GetRequestStream())
    {
        byte[] buffer = new byte[1024 * 1024];
        new Random().NextBytes(buffer);
        for (int i = 0; i < 100; i++)
        {
            stream.Write(buffer, 0, buffer.Length);
        }
    }
}
catch (Exception ex)
{
    // here I get an IOException; InnerException is a SocketException
    Console.WriteLine("Error writing to stream: {0}", ex);
}

// Now try to read the response
try
{
    using (var response = (HttpWebResponse)request.GetResponse())
    {
        Console.WriteLine("{0} - {1}", (int)response.StatusCode, response.StatusDescription);
    }
}
catch (Exception ex)
{
    // here I get a WebException; InnerException is the IOException from the previous catch
    Console.WriteLine("Error getting the response: {0}", ex);
    var webEx = ex as WebException;
    if (webEx != null)
    {
        Console.WriteLine(webEx.Status); // SendFailure
        var response = (HttpWebResponse)webEx.Response;
        if (response != null)
        {
            Console.WriteLine("{0} - {1}", (int)response.StatusCode, response.StatusDescription);
        }
        else
        {
            Console.WriteLine("No response");
        }
    }
}
Additional notes:
If I correctly understand the role of the 100 Continue status, the server shouldn't send it to me if it's going to refuse the file. However, it seems that this status is controlled directly by Tomcat, and can't be controlled by the application. Ideally, I'd like the server not to send me 100 Continue in this case, but according to my colleagues in charge of the back-end, there is no easy way to do it. So I'm looking for a client-side solution for now; but if you happen to know how to solve the problem on the server side, it would also be appreciated.
The app in which I encounter the issue targets .NET 4.0, but I also reproduced it with 4.5.
I'm not timing out. The exception is thrown long before the timeout.
I tried an async request. It doesn't change anything.
I tried setting the request protocol version to HTTP 1.0, with the same result.
Someone else has already filed a bug on Connect for this issue: https://connect.microsoft.com/VisualStudio/feedback/details/779622/unable-to-get-servers-error-response-when-uploading-file-with-httpwebrequest
I am out of ideas as to what could be a client-side solution to your problem. But I still think the server-side solution of using a custom Tomcat valve can help here. I currently don't have a Tomcat setup where I can test this, but I think a server-side solution would be along the following lines:
RFC 2616, section 8.2.3, clearly states:
Requirements for HTTP/1.1 origin servers:
- Upon receiving a request which includes an Expect request-header
field with the "100-continue" expectation, an origin server MUST
either respond with 100 (Continue) status and continue to read
from the input stream, or respond with a final status code. The
origin server MUST NOT wait for the request body before sending
the 100 (Continue) response. If it responds with a final status
code, it MAY close the transport connection or it MAY continue
to read and discard the rest of the request. It MUST NOT
perform the requested method if it returns a final status code.
So, assuming Tomcat conforms to the RFC, in the custom valve you would have received the HTTP request headers, but the request body would not have been read yet, since control has not yet reached the servlet that reads the body.
So you can probably implement a custom valve, something similar to :
import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

public class CustomUploadHandlerValve extends ValveBase {

    @Override
    public void invoke(Request request, Response response) throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        String fileName = httpRequest.getHeader("Filename"); // get the filename or whatever other parameters are required as per your code
        boolean validationSuccess = validate(); // perform the filename check or any other validation here
        if (!validationSuccess) {
            response = createResponse(); // create your custom 400 response here
            request.setResponse(response);
            // return the response here
        } else {
            getNext().invoke(request, response); // pass to the next valve / servlet in the chain
        }
    }
    ...
}
DISCLAIMER: Again, I haven't tried this successfully; I need some time and a Tomcat setup to try it out ;).
Thought it might be a starting point for you.
I had the same problem. The server sends a response before the client finishes transmitting the request body when I do an async request. After a series of experiments, I found a workaround.
After the request stream has been received, I use reflection to check the private field _CoreResponse of the HttpWebRequest. If it is an object of class CoreResponseData, I take its private fields (using reflection): m_StatusCode, m_StatusDescription, m_ResponseHeaders, m_ContentLength. They contain information about the server's response!
In most cases, this hack works!
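For reference, a rough sketch of that reflection workaround, based purely on the field names mentioned above (these are .NET Framework internals, so this can break between framework versions; it needs using System.Reflection):

object coreResponse = typeof(HttpWebRequest)
    .GetField("_CoreResponse", BindingFlags.Instance | BindingFlags.NonPublic)
    ?.GetValue(request);

if (coreResponse != null && coreResponse.GetType().Name == "CoreResponseData")
{
    Type type = coreResponse.GetType();
    object statusCode = type.GetField("m_StatusCode", BindingFlags.Instance | BindingFlags.NonPublic)?.GetValue(coreResponse);
    object statusDescription = type.GetField("m_StatusDescription", BindingFlags.Instance | BindingFlags.NonPublic)?.GetValue(coreResponse);
    Console.WriteLine("{0} - {1}", statusCode, statusDescription);
}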
What are you getting in the status code and response of the second exception, not the inner exception?
If a WebException is thrown, use the Response and Status properties of the exception to determine the response from the server.
http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.getresponse(v=vs.110).aspx
You are not saying exactly what version of Tomcat 7 you are using...
checked with WireShark
What do you actually see with WireShark?
Do you see the status line of response?
Do you see the complete status line, up to CR-LF characters at its end?
Is Tomcat asking for authentication credentials (401), or it is refusing file upload for some other reason (first acknowledging it with 100 but then aborting it mid-flight)?
The problem is that if the uploaded file is large, the request stream
is closed before I finish writing the request body, and I get an IOException:
If you do not want the connection to be closed but all the data transferred over the wire and swallowed at the server side, on Tomcat 7.0.55 and later it is possible to configure maxSwallowSize attribute on HTTP connector, e.g. maxSwallowSize="-1".
http://tomcat.apache.org/tomcat-7.0-doc/config/http.html
If you want to discuss Tomcat side of connection handling, you would better ask on the Tomcat users' mailing list,
http://tomcat.apache.org/lists.html#tomcat-users
At .Net side:
Is it possible to perform stream.Write() and request.GetResponse() simultaneously, from different threads?
Is it possible to performs some checks at the client side before actually uploading the file?
Hmmm... I don't get it - that is EXACTLY why, in many real-life scenarios, large files are uploaded in chunks (and not as a single large file).
By the way, many internet servers have size limitations; for instance, in Tomcat that is represented by maxPostSize (as seen in this link: http://tomcat.apache.org/tomcat-5.5-doc/config/http.html).
So tweaking the server configuration seems like the easy way, but I do think that the right way is to split the file into several requests.
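A rough sketch of that idea with HttpClient; the chunk size, uploadUri and the part query parameter are hypothetical, and the server needs a matching endpoint that reassembles the parts:

const int chunkSize = 4 * 1024 * 1024; // hypothetical 4 MB parts
using (var file = File.OpenRead(path))
using (var client = new HttpClient())
{
    var buffer = new byte[chunkSize];
    int read, part = 0;
    while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
    {
        var content = new ByteArrayContent(buffer, 0, read);
        // hypothetical query parameter telling the server which part this is
        HttpResponseMessage response = await client.PostAsync(uploadUri + "?part=" + part++, content);
        response.EnsureSuccessStatusCode();
    }
}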
EDIT: replace Uri.EscapeDataString with HttpServerUtility.UrlEncode
Uri.EscapeDataString(filename) // a problematic .net implementation
HttpServerUtility.UrlEncode(filename) // the proper way to do it
I am currently experiencing a pretty similar problem, also with Tomcat and a Java client. The Tomcat REST service sends an HTTP return code with a response body before reading the whole request body. The client, however, fails with an IOException. I inserted an HTTP proxy on the client to sniff the protocol, and the HTTP response actually is sent to the client eventually. Most likely Tomcat closed the request input stream before sending the response.
One solution is to use a different HTTP server like Jetty, which does not have this problem. The other solution is to add an Apache HTTP Server with AJP in front of Tomcat. Apache HTTP Server handles streams differently, and with that the problem goes away.
I have a big problem: I need to send 200 objects at once and avoid timeouts.
while (true)
{
    NameValueCollection data = new NameValueCollection();
    data.Add("mode", nat);
    using (var client = new WebClient())
    {
        byte[] response = client.UploadValues(serverA, data);
        responseData = Encoding.ASCII.GetString(response);
        string[] split = responseData.Split(new[] { '!' }, StringSplitOptions.RemoveEmptyEntries);
        string command = split[0];
        string server = split[1];
        string requestCountStr = split[2];
        switch (command)
        {
            case "check":
                int requestCount = Convert.ToInt32(requestCountStr);
                for (int i = 0; i < requestCount; i++)
                {
                    Uri myUri = new Uri(server);
                    WebRequest request = WebRequest.Create(myUri);
                    request.Timeout = 200000;
                    WebResponse myWebResponse = request.GetResponse();
                }
                break;
        }
    }
}
This produces the error:
Unhandled Exception: System.Net.WebException: The operation has timed out
at System.Net.HttpWebRequest.GetResponse()
at vir_fu.Program.Main(String[] args)
The requestCount loop works fine outside my base code but when I add it to my project I get this error. I have tried setting request.Timeout = 200; but it didn't help.
It means what it says. The operation took too long to complete.
BTW, look at WebRequest.Timeout and you'll see that you've set your timeout for 1/5 second.
Close/dispose your WebResponse object.
I'm not sure about your first code sample where you use WebClient.UploadValues; it's not really enough to go on. Could you paste more of your surrounding code? Regarding your WebRequest code, there are two things at play here:
You're only requesting the headers of the response**; you never read the body of the response by opening and reading (to its end) the ResponseStream. Because of this, the WebRequest client helpfully leaves the connection open, expecting you to request the body at any moment. Until you either read the response body to completion (which will automatically close the stream for you), clean up and close the stream (or the WebRequest instance), or wait for the GC to do its thing, your connection will remain open.
You have a default maximum number of active connections to the same host of 2. This means you use up your first two connections and then never dispose of them, so your client isn't given the chance to complete the next request before it reaches its timeout (which is in milliseconds, by the way, so you've set it to 0.2 seconds - the default should be fine).
If you don't want the body of the response (or you've just uploaded or POSTed something and aren't expecting a response), simply close the stream, or the client, which will close the stream for you.
The easiest way to fix this is to make sure you use using blocks on disposable objects:
for (int i = 0; i < ops1; i++)
{
    Uri myUri = new Uri(site);
    WebRequest myWebRequest = WebRequest.Create(myUri);
    //myWebRequest.Timeout = 200;
    using (WebResponse myWebResponse = myWebRequest.GetResponse())
    {
        // Do what you want with myWebResponse.Headers.
    } // Your response will be disposed of here
}
Another solution is to allow 200 concurrent connections to the same host. However, unless you're planning to multi-thread this operation so you'd need multiple, concurrent connections, this won't really help you:
ServicePointManager.DefaultConnectionLimit = 200;
When you're getting timeouts within code, the best thing to do is try to recreate that timeout outside of your code. If you can't, the problem probably lies with your code. I usually use cURL for that, or just a web browser if it's a simple GET request.
** In reality, you're actually requesting the first chunk of data from the response, which contains the HTTP headers, and also the start of the body. This is why it's possible to read HTTP header info (such as Content-Encoding, Set-Cookie etc) before reading from the output stream. As you read the stream, further data is retrieved from the server. WebRequest's connection to the server is kept open until you reach the end of this stream (effectively closing it as it's not seekable), manually close it yourself or it is disposed of. There's more about this here.
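In code, "reading the stream to its end" is simply something like the following (reusing the myWebRequest from the example above); reaching the end of the response stream is what lets the connection be closed or reused cleanly:

using (WebResponse myWebResponse = myWebRequest.GetResponse())
using (var reader = new StreamReader(myWebResponse.GetResponseStream()))
{
    string body = reader.ReadToEnd(); // discard it if you only needed the headers
}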
A proxy issue can also cause this. In your web.config, put this in:
<defaultProxy useDefaultCredentials="true" enabled="true">
    <proxy usesystemdefault="True" />
</defaultProxy>
I remember I had the same problem a while back using WCF, due to the quantity of data I was passing. I remember I changed timeouts everywhere but the problem persisted. What I finally did was open the connection as a stream request; I needed to change both the client and the server side, but it worked that way. Since it was a streamed connection, the server kept reading until the stream ended.
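For anyone looking at the same approach, a minimal sketch of a streamed WCF binding (the property values are illustrative, not the configuration from that project, and the same binding plus a stream-based service contract is needed on both client and server):

var binding = new BasicHttpBinding
{
    TransferMode = TransferMode.Streamed,   // stream the body instead of buffering it whole
    MaxReceivedMessageSize = long.MaxValue, // illustrative limit
    SendTimeout = TimeSpan.FromMinutes(10), // illustrative timeouts
    ReceiveTimeout = TimeSpan.FromMinutes(10)
};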
I encountered the same error, and adding
await Task.Delay(2000);
in each request solved the problem.