I am passing some data in the HTTP body using an Ajax request. On the server, these contents are accessible via Request.InputStream. Now I want to write them to a file on the server's disk asynchronously. How do I do that?
If the data sent in the HTTP body is large, I don't want the server application to die or be affected, which is why I would like to write it asynchronously.
The following sample illustrates what I am trying to do:
System.IO.Stream str;
// Get the request body stream.
str = Request.InputStream;
// TODO: write str's contents to a file asynchronously ...
Stream.BeginWrite and Stream.EndWrite are the things to look at. Try this MSDN link.
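A minimal sketch of that BeginWrite/EndWrite pattern, assuming the request body has already been read into a `data` buffer (the file name and buffer contents are illustrative):

```csharp
using System;
using System.IO;
using System.Threading;

class AsyncWriteExample
{
    static void Main()
    {
        byte[] data = { 1, 2, 3, 4 }; // stands in for the request body
        var done = new ManualResetEvent(false);

        // useAsync: true requests true asynchronous (overlapped) I/O from the OS.
        var fs = new FileStream("upload.bin", FileMode.Create, FileAccess.Write,
                                FileShare.None, 4096, useAsync: true);

        // BeginWrite returns immediately; the callback fires when the write completes.
        fs.BeginWrite(data, 0, data.Length, ar =>
        {
            var stream = (FileStream)ar.AsyncState;
            stream.EndWrite(ar); // always pair EndWrite with BeginWrite
            stream.Dispose();
            done.Set();
        }, fs);

        // The request thread is free to do other work here.
        done.WaitOne(); // for this demo only: keep Main alive until the write finishes
    }
}
```

In a real handler you would not block on the event; it is only there so the console demo doesn't exit before the callback runs.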
One approach, assuming you're happy not to have control over reading the stream (i.e. you trust that something else will read it, or you're happy with it not happening, or not happening in full), is to use a 'virtual stream'.
If you create a class that wraps the stream you wish to use and passes all calls through to that underlying stream, you can hook your own logic into the Read method: when your stream's Read method is called, you read from the underlying stream, write the buffer to your own stream, and then pass it back.
This is not 100% asynchronous, since the reader of the stream waits until you write the buffer to your file before receiving it, but it doesn't force the reader to wait until you've read the entire stream, and it also works well with non-seekable streams.
I wrote a BizTalk-related whitepaper on this with full source code; if you're happy to ignore the BizTalk bits, the stream implementation is discussed there in detail.
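A minimal sketch of such a wrapping stream (the class name is illustrative; error handling and most pass-through members are omitted):

```csharp
using System;
using System.IO;

// 'Virtual stream' idea: a wrapper that copies everything it reads
// from the inner stream into a side stream (e.g. a file) before
// handing the buffer back to the caller.
class TeeReadStream : Stream
{
    private readonly Stream _inner;
    private readonly Stream _copy;

    public TeeReadStream(Stream inner, Stream copy)
    {
        _inner = inner;
        _copy = copy;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int read = _inner.Read(buffer, offset, count);
        if (read > 0)
            _copy.Write(buffer, offset, read); // hook: persist before passing back
        return read;
    }

    // Remaining members are pass-through or unsupported in this sketch.
    public override bool CanRead => _inner.CanRead;
    public override bool CanSeek => false;
    public override bool CanWrite => false;
    public override long Length => _inner.Length;
    public override long Position
    {
        get => _inner.Position;
        set => throw new NotSupportedException();
    }
    public override void Flush() => _copy.Flush();
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
    public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();
}
```

The reader of a `TeeReadStream` never knows the copy is happening; it just sees a normal readable stream.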
What would you like to do while the file is being written? Something tells me you're mistaken: all proper HTTP servers are multithreaded, so no matter how long a request runs, other requests can be serviced in parallel.
Maybe this seems like a weird question, but I came across the following situation:
I am trying to make a POST request to a service, and to add the post data I chose to make a Stream out of the request and use a StreamWriter to write the body to it.
But before I actually execute the request (with GetResponse), even before I write to the stream object, I get an "Unable to connect" exception exactly on
var stream = request.GetRequestStream();
After a little investigation, I realized that request.GetRequestStream() actually tries to connect. The problem in my case was network connectivity to the server (a firewall issue).
BUT my question here is: why does HttpWebRequest.GetRequestStream() try to connect?
My simple assumption was that, at request creation time, there would be no connection to the request's server yet.
I found some related questions, such like this
But it does not seem to answer my question exactly.
Any explanation please?
PS: Any suggestion of how to avoid this "early" connection effect would be much appreciated.
.NET I/O APIs generally operate on streams, which are APIs that allow developers to read and write an ordered sequence of data. By making reading and writing into generic APIs, it enables generic libraries to operate on streams to do powerful things: compression, encryption, encoding, etc. (BTW, treating different kinds of I/O similarly has a long history, most famously in UNIX where everything is a file.)
Although reading and writing data works pretty similarly across many different kinds of streams, opening a stream is much harder to make generic. Think about the vastly different APIs you use to open a file vs. make an HTTP request vs. execute a database query.
Therefore, .NET's Stream class has no generic Open() method because getting a stream into an opened state is very different between different types of streams. Instead, the streams APIs expect to be given a stream that's already open, where "open" means that it's ready to be written to and/or read from.
Therefore, in .NET there's a typical pattern for I/O:
Write some resource-specific code to open a stream. These APIs generally return an open stream.
Hand off that open stream to generic APIs that read and/or write from it.
Close the stream (also generic) when you're done.
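In C#, that pattern might look like this sketch (GZip compression stands in for the generic operation; names are illustrative):

```csharp
using System.IO;
using System.IO.Compression;

static class StreamPattern
{
    public static void CompressFile(string source, string destination)
    {
        // 1. Resource-specific code opens the streams.
        using (FileStream input = File.OpenRead(source))
        using (FileStream output = File.Create(destination))
        // 2. Generic APIs compose over any already-open stream.
        using (var gzip = new GZipStream(output, CompressionMode.Compress))
        {
            input.CopyTo(gzip); // works for any readable stream, file or not
        }
        // 3. The using blocks close the streams when done.
    }
}
```

The same `CopyTo` call would work unchanged if `input` were a network stream or a memory stream, which is the whole point of the generic API.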
Now think about how that pattern above aligns to an HTTP request, which has the following steps:
a. Lookup the server's IP address in DNS
b. Make a TCP connection to the server
c. Send the URL and request headers to the server
d. If it's a POST (or PUT or other method that sends a request body) then upload the request body. If it's a GET, this is a no-op.
e. Now read the response
f. Finally, close the connection.
(I'm ignoring a lot of real-world complexity in the steps above like SSL, keep-alive connections, cached responses, etc. but the basic workflow is accurate enough to answer your question.)
OK now put yourself in the shoes of the .NET team trying to build an HTTP client API, remembering to split the non-generic parts ("get an open stream") from the generic parts: read and/or write, and then close the stream.
If your API only had to handle GET requests, then you'd probably make the connection while executing the same API that returns the response stream. This is exactly what HttpWebRequest.GetResponse does.
But if you're sending POST requests (or PUT or other similar methods), then you have to upload data to the server. Unlike HTTP headers which are only a few KB, the data you upload in a POST could be huge. If you're uploading a 10GB file, you don't want to park it in RAM during the hours it might take to upload to the server. This would kill your client's performance in the meantime. Instead, you need a way to get a Stream so you only have to load small chunks of data into RAM before sending to the server. And remember that Stream has no Open() method, so your API must provide an open stream.
Now you have an answer to your first question: HttpWebRequest.GetRequestStream must make the network connection because if it didn't then the stream would be closed and you couldn't write to it.
Now on to your second question: how can you delay the connection? I assume you mean that the connection should happen upon the first write to the request stream. One way to do this would be to write a class that inherits from Stream that only calls GetRequestStream as late as possible, and then delegates all methods to the underlying request stream. Something like this as a starting point:
using System;
using System.IO;
using System.Net;
using System.Threading;
using System.Threading.Tasks;

class DelayConnectRequestStream : Stream
{
    private HttpWebRequest _req;
    private Stream _stream = null;

    public DelayConnectRequestStream(HttpWebRequest req)
    {
        _req = req;
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        if (_stream == null)
        {
            _stream = _req.GetRequestStream();
        }
        _stream.Write(buffer, offset, count);
    }

    public override Task WriteAsync(byte[] buffer, int offset, int count, CancellationToken cancellationToken)
    {
        if (_stream == null)
        {
            // TODO: figure out if/how to make this async
            _stream = _req.GetRequestStream();
        }
        return _stream.WriteAsync(buffer, offset, count, cancellationToken);
    }
// repeat the pattern above for all needed methods on Stream
// you may need to decide by trial and error which properties and methods
// must require an open stream. Some properties/methods you can probably just return
// without opening the stream, e.g. CanRead which will always be false so no need to
// create a stream before returning from that getter.
// Also, the code sample above is not thread safe. For
// thread safety, you could use Lazy<T> or roll your own locking.
}
But honestly, the approach above seems like overkill. If I were in your shoes, I'd look at why I am trying to defer opening the stream and see if there's another way to solve the problem.
I'm working on an Asynchronous HTTP handler and trying to figure out if the HttpResponse.Write function blocks until it receives an ACK from the client.
The MSDN documentation doesn't specifically say; however, I do know that the MSDN documentation for the ISAPI WriteClient() function (a similar mechanism) mentions that the synchronous version does block while attempting to send data to the client.
I thought of three possible ways to determine the answer:
Have someone tell me it's non-blocking
Write a low-level TCP test client and set a breakpoint on the acknowledgement (is this possible?)
Use reflection to inspect the inner workings of the HttpResponse.Write method (is this possible?)
It's not blocking, but it may use a buffer and send everything together.
Try setting HttpResponse.Buffer = false; to write directly to your client.
You can also use HttpResponse.Flush(); to force sending what you have so far to your client.
About the HttpResponse.Buffer property on MSDN
And maybe this will interest you: Web app blocked while processing another web app on sharing same session
HttpResponse operates in two distinct modes, buffered and unbuffered. In buffered mode, the various Write functions put their data into a memory region and the function returns as soon as the data is copied over. If you set Buffer to false, Write blocks until all of the data is sent to the client.
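As a sketch of the two modes inside an IHttpHandler (the handler class itself is illustrative; this needs an ASP.NET host to actually run):

```csharp
using System.Web;

public class StreamingHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        HttpResponse response = context.Response;

        // Unbuffered mode must be chosen before any output is written;
        // each Write then blocks until the data is sent to the client.
        response.Buffer = false;
        response.Write("sent directly");

        // Alternatively, keep the default buffered mode and push explicitly:
        // response.Write("queued in memory"); // returns once copied to the buffer
        // response.Flush();                   // blocks while the buffer is sent
    }
}
```

Note that neither mode tells you the client acknowledged the data at the TCP level; "sent" here means handed off toward the client, not confirmed received.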
I have created a stream based on a stateless protocol, think 2 web servers sending very limited requests to each other.
As such neither will know if I suddenly stop one as no connection will close, there will simply be no requests. There could legitimately be a gap in requests so I don't want to treat the lack of them as a lost connection.
What I want to do is send a heartbeat to say "I'm alive". I obviously don't want the heartbeat data when I read from the stream though, hence my question:
How do I create a new stream class that wraps another stream and sends heartbeat data without exposing that to calling code?
Assuming two similar implementations on both sides: send each block of data with a header, so you can safely send zero-data heartbeat blocks. I.e. translate a Write on the outer stream into several writes on the inner stream, like {Data, 100 bytes, [bytes]}, {Data, 13 bytes, [bytes]}; a heartbeat would look like {Ping, 0 bytes, []}. On the receiving end, immediately respond to a Ping with a similar empty Ping.
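A sketch of that framing over any inner stream (the header layout and type codes are illustrative, and a real implementation would handle EOF in the read loop):

```csharp
using System;
using System.IO;

static class FramedStream
{
    const byte Data = 0, Ping = 1;

    // Frame layout: [1-byte type][4-byte little-endian length][payload]
    public static void WriteFrame(Stream s, byte type, byte[] payload, int count)
    {
        s.WriteByte(type);
        s.Write(BitConverter.GetBytes(count), 0, 4);
        s.Write(payload, 0, count);
    }

    public static void WriteHeartbeat(Stream s)
    {
        WriteFrame(s, Ping, new byte[0], 0); // zero-data Ping block
    }

    // Returns the next Data payload, silently consuming heartbeats
    // so calling code never sees them.
    public static byte[] ReadData(Stream s)
    {
        while (true)
        {
            int type = s.ReadByte();
            byte[] header = ReadExactly(s, 4);
            byte[] payload = ReadExactly(s, BitConverter.ToInt32(header, 0));
            if (type == Data)
                return payload;
            // else: it was a Ping; a real implementation would reply here
        }
    }

    // Stream.Read may return fewer bytes than requested, so loop.
    static byte[] ReadExactly(Stream s, int count)
    {
        byte[] buffer = new byte[count];
        int read = 0;
        while (read < count)
            read += s.Read(buffer, read, count - read);
        return buffer;
    }
}
```

Wrapping this pair of helpers in a Stream subclass (so Write emits Data frames and Read calls ReadData) gives the transparent behavior the question asks for.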
I have a server app that listens for connections on port 8888. I am also creating the client application. It is a simple application; the only hard thing about it is managing multiple connections. I just need to send files between computers. I don't know if the way I do it is right, but it works; maybe you guys can correct me. Here is my algorithm when sending the file:
NetworkStream stream = // initialize it
while (someCondition)
{
    // first I open the file for reading and read chunks of it
    byte[] chunk = fileRead(file, indexStart, indexEnd); // I have a similar method, this is just to illustrate my point
    stream.Write(chunk, /* other params */);
    // since I often send large files it would be nice if I could wait here
    // until stream.Write is done. When debugging this, the while loop
    // executes several times and then waits.
}
and on the other side I read bytes from that stream and write it to a file.
I also need to wait sometimes because I send multiple files, and I want to make sure that the first file has been sent before moving on to the next. I know I can solve this by using the stream.Read method once the transfer is done and having the client send data back, but sometimes I believe it would be helpful to know when stream.Write is done.
Edit
OK, so based on your answers I can, for example, send the client the number of bytes that I am planning to send; once the client receives that many bytes, it knows the transfer is done. But my question is whether this is efficient. I mean, doing something like:
on the server:
write data: send the file length
read data: check to see if the client received the length (expecting the string "ok", for example)
write data: tell the client the name of the file
read data: check to see if the client received the name of the file
write data: start sending chunks of the file
read data: wait until the client replies with the string "ok", for example
The write is complete when the line
stream.Write(chunk, /* other params */)
completes. It's worth noting that this does not imply that the other end has received anything. In fact, immediately subsequent to that line, the data is likely to be in some buffer on the sending machine. That means that it's now out of your control. If you want receipt confirmation, the remote end will have to let you know.
Stream.Write is synchronous, so it will always block your thread until the writing finishes.
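A sketch of a simpler handshake than the per-step one in the edit: length-prefix the name and contents once, then have the receiver send a single acknowledgement at the end (class and method names are illustrative):

```csharp
using System.IO;
using System.Text;

static class FileTransfer
{
    // Sender: one length-prefixed header, then the data, then wait for one ack.
    public static void SendFile(Stream stream, string name, byte[] contents)
    {
        var writer = new BinaryWriter(stream, Encoding.UTF8);
        writer.Write(name);             // length-prefixed string
        writer.Write(contents.Length);  // tell the client how many bytes follow
        writer.Write(contents);         // the file itself
        writer.Flush();

        var reader = new BinaryReader(stream, Encoding.UTF8);
        reader.ReadString();            // block until the client replies "ok"
    }

    // Receiver: mirror image of the sender.
    public static byte[] ReceiveFile(Stream stream, out string name)
    {
        var reader = new BinaryReader(stream, Encoding.UTF8);
        name = reader.ReadString();
        int length = reader.ReadInt32();
        byte[] contents = reader.ReadBytes(length); // reads until 'length' bytes or EOF
        new BinaryWriter(stream, Encoding.UTF8).Write("ok"); // single final ack
        return contents;
    }
}
```

One round trip at the end replaces the six in the edit; TCP already guarantees ordering and delivery of the bytes in between, so per-step acknowledgements buy nothing.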
HttpListener gives you the response stream, but calling Flush means nothing (and from the sources it's clear why: it actually does nothing). Digging inside the HTTP API shows that this is a limitation of HttpListener itself.
Does anyone know exactly how to flush the response stream of HttpListener (maybe with reflection or additional P/Invokes)?
Update: You can't HTTP-stream anything if you don't have a flush option or the ability to define the buffer size.
Flush only works in most of the System.Net namespace when Transfer-Encoding is set to chunked; otherwise the whole request is returned and Flush really does nothing. At least this is what I have experienced while working with HttpWebResponse.
I've not tried this yet, but how about writing a separate TCP server for streaming responses? Then forward the request from the HttpListener to the "internal" TCP server. Using this redirect you might be able to stream data back as you need.
As for flushing it, the only way I see to do it is to simulate a dispose without actually disposing. If you can hack into the HttpResponseStream object, tell it to dispose, unset the m_Closed flag, etc., you might be able to flush the streaming data.