I'm trying to stream a PDF file. Most of the files open without any problems, but sometimes it fails. When it fails, the file size is also smaller than the original. For example, I was trying to open a 47K file, but when it was streamed to the browser the output was only 44.5K. When I check the size of the stream (result.FileStream), it's 47K like it's supposed to be.
I'm using Stream.Read to output the file to the browser. When I had the problem, I was using a buffer size of 10000 bytes. However, when I changed the buffer size from 10000 to 1000, the problem disappeared and I was able to open the file. I cannot explain why the change in buffer size makes the streaming behave differently.
Here's the code I'm using (result.FileStream is of type Stream):
using (result.FileStream)
{
    int length;
    const int byteSize = 1000;
    var buffer = new byte[byteSize];
    while ((length = result.FileStream.Read(buffer, 0, byteSize)) > 0 && Response.IsClientConnected)
    {
        Response.OutputStream.Write(buffer, 0, length);
        Response.Flush();
    }
}
Response.Close();
Please enlighten me because I definitely don't understand something.
You're using Response.Close(), which seems to be much more evil than the documentation would make you believe.
http://forums.iis.net/t/1152058.aspx
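If the goal is just to stop sending after the copy loop, a gentler alternative (a sketch, assuming the classic ASP.NET System.Web pipeline) is to let the request complete normally instead of tearing down the connection:
// After the copy loop, instead of Response.Close():
// CompleteRequest() skips the remaining pipeline events and lets ASP.NET
// end the response cleanly, without aborting the underlying connection.
HttpContext.Current.ApplicationInstance.CompleteRequest();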
I'm currently working on a file downloader project. The application is designed to support resumable downloads. All downloaded data and its metadata (download ranges) are stored on disk immediately per call to ReadBytes. Let's say I used the following code snippet:
var reader = new BinaryReader(response.GetResponseStream());
var buffr = reader.ReadBytes(_speedBuffer);
DownloadSpeed += buffr.Length; // used for reporting speed and zeroed every second
Here, _speedBuffer is the number of bytes to download, which is set to a default value.
I have tested the application in two ways. First, by downloading a file hosted on a local IIS server; the speed is great. Second, I tried to download a copy of the same file from the internet (from where it was originally downloaded). My internet connection is quite slow. What I observed is that if I increase _speedBuffer, the download speed from the local server is good, but the speed reporting for the internet copy is slow. Whereas if I decrease the value of _speedBuffer, the download speed (reporting) for the internet copy is good but not for the local server. So I thought, why shouldn't I change _speedBuffer at runtime? But all the custom algorithms I came up with for changing the value were inefficient, meaning the download speed was still slow compared to other downloaders.
Is this approach OK?
Am I doing it the wrong way?
Should I stick with default value for _speedBuffer(byte count)?
The problem with ReadBytes in this case is that it attempts to read exactly that number of bytes, returning early only when there is no more data to read.
So if you receive a packet containing 99 bytes of data, a call to ReadBytes(100) will wait for the next packet to supply the missing byte.
I wouldn't use a BinaryReader at all:
byte[] buffer = new byte[bufferSize];
using (Stream responseStream = response.GetResponseStream())
{
    int bytes;
    while ((bytes = responseStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        DownloadSpeed += bytes; // used for reporting speed and zeroed every second
        // on each iteration, "bytes" bytes of the buffer have been filled; store these to disk
    }
    // bytes was 0: end of stream
}
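For completeness, here's a minimal sketch of the same loop persisting each chunk to disk as it arrives (the buffer size and target file name are placeholders, not from the original post):
const int bufferSize = 8192;                              // placeholder size; tune as needed
byte[] buffer = new byte[bufferSize];
using (Stream responseStream = response.GetResponseStream())
using (FileStream output = File.Create("download.part"))  // hypothetical target file
{
    int bytes;
    while ((bytes = responseStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, bytes); // write exactly the bytes read, no more
        DownloadSpeed += bytes;         // speed counter, zeroed every second
    }
}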
I'm writing an application where I need to send a file (~600kB) to another unit via a virtual serial port.
When I send it using a terminal application (TeraTerm) it takes less than 10 seconds, but using my program it takes 1-2 minutes.
My code is very simple:
port.WriteTimeout = 30000;
port.ReadTimeout = 5000;
port.WriteBufferSize = 1024 * 1024; // Buffer size larger than file size
...
fs = File.OpenRead(filename);
byte[] filedata = new byte[fs.Length];
fs.Read(filedata, 0, Convert.ToInt32(fs.Length));
...
for (int iter = 0; iter < filedata.Length; iter++) {
    port.Write(filedata, iter, 1);
}
Calling port.Write with the entire file length seems to always cause a write timeout for an unknown reason, so I'm writing 1 byte at a time.
Solved it. Here are the details in case someone else runs into this; it might give some hints on what's wrong.
I was reading the file incorrectly: somehow the application used \r\n as newlines when transferring, but the file itself is an Intel .hex file whose checksums were calculated using \r as newlines.
The checksum errors caused the other device to ACK very slowly, which, combined with the PC application now having to handle those checksum errors, made the transfer super slow.
If you have similar errors, I recommend using a software snoop to monitor what's actually being sent.
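For reference, a minimal sketch of sending the file as raw bytes in larger chunks, so no newline translation can sneak in (the chunk size is a placeholder, not from the original post):
byte[] filedata = File.ReadAllBytes(filename); // raw bytes, no newline handling
const int chunkSize = 1024;                    // placeholder; tune for your device
for (int offset = 0; offset < filedata.Length; offset += chunkSize)
{
    int count = Math.Min(chunkSize, filedata.Length - offset);
    port.Write(filedata, offset, count);       // send a chunk, not single bytes
}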
I have an external server URL to which I pass credentials and an id to get an audio file, like http://myexternalserver.com/?u=xxx&p=xxx&id=xxx
In order to avoid doing this from JavaScript and exposing the credentials to the user, I was attempting to call the URL from the backend and stream it to the UI request (on my server):
using (Stream mystream = httpResponse2.GetResponseStream())
{
    using (BinaryReader reader = new BinaryReader(mystream))
    {
        int length = 2048;
        byte[] buffer = new byte[length];
        System.Web.HttpResponse response = System.Web.HttpContext.Current.Response;
        response.BufferOutput = true;
        response.ContentType = "audio/wav";
        int bytesRead;
        while ((bytesRead = reader.Read(buffer, 0, buffer.Length)) > 0)
        {
            response.OutputStream.Write(buffer, 0, bytesRead);
        }
        response.End();
    }
}
Using this approach, I am successfully able to play the stream in an <audio> element.
Below are the issues I'm facing:
While the stream is playing, the seek control bar is always stuck at 0 because the audio length is Infinity. Due to this, I am unable to use the control slider to seek to buffered areas.
When the stream ends, $("audio")[0].duration returns 9188187664790.943 (or some other huge number for a 20-30 second audio clip) and the audio's display time shows -596523:-14:-8 (while playing, this was a number going from 00:01 to 00:24 and then suddenly to a negative number).
I'm unable to find a solution which will allow seeking into an unbuffered area.
I'm also not quite sure if this is a correct/best approach, so suggestions on approach would also be very helpful.
I was able to solve the issue.
What I did was examine how my HTTP server responded to requests for regular mp3 files stored statically on the server. I noticed that the server was setting two headers which I had missed: Accept-Ranges: bytes and Content-Length: xxx.
When I set those headers, all the issues from the question disappeared.
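In the code from the question, that amounts to something like the following before the copy loop starts (a sketch; it assumes the upstream response exposes the exact length via httpResponse2.ContentLength):
// Set alongside response.ContentType, before writing any body bytes:
response.AddHeader("Accept-Ranges", "bytes");  // advertise byte-range support
response.AddHeader("Content-Length", httpResponse2.ContentLength.ToString());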
Hope this will help somebody.
I have a text file that's appended to over time and periodically I want to truncate it down to a certain size, e.g. 10MB, but keeping the last 10MB rather than the first.
Is there any clever way to do this? I'm guessing I should seek to the right point, read from there into a new file, delete old file and rename new file to old name. Any better ideas or example code? Ideally I wouldn't read the whole file into memory because the file could be big.
Please no suggestions on using Log4Net etc.
If you're okay with just reading the last 10MB into memory, this should work:
using (MemoryStream ms = new MemoryStream(10 * 1024 * 1024))
{
    using (FileStream s = new FileStream("yourFile.txt", FileMode.Open, FileAccess.ReadWrite))
    {
        s.Seek(-10 * 1024 * 1024, SeekOrigin.End);
        s.CopyTo(ms);
        s.SetLength(10 * 1024 * 1024);
        s.Position = 0;
        ms.Position = 0; // Begin from the start of the memory stream
        ms.CopyTo(s);
    }
}
You don't need to read the whole file before writing it, especially not if you're writing into a different file. You can work in chunks; reading a bit, writing, reading a bit more, writing again. In fact, that's how all I/O is done anyway. Especially with larger files you never really want to read them in all at once.
But what you propose is the only way of removing data from the beginning of a file. You have to rewrite it. Raymond Chen has a blog post on exactly that topic, too.
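A minimal sketch of that chunked approach (the file names and buffer size are placeholders): seek to the tail you want to keep, copy it in fixed-size chunks to a temporary file, then swap the files.
const long keepBytes = 10L * 1024 * 1024;             // keep the last 10MB
byte[] buffer = new byte[64 * 1024];                  // 64KB chunks; placeholder size
using (FileStream source = File.OpenRead("log.txt"))  // hypothetical file names
using (FileStream target = File.Create("log.txt.tmp"))
{
    if (source.Length > keepBytes)
        source.Seek(-keepBytes, SeekOrigin.End);      // skip everything but the tail
    int read;
    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        target.Write(buffer, 0, read);
}
File.Delete("log.txt");
File.Move("log.txt.tmp", "log.txt");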
I tested the solution from "false" but it didn't work for me: it trims the file but keeps the beginning of it, not the end.
I suspect that CopyTo copies the whole stream instead of starting from the stream position. Here's how I made it work:
int trimSize = 10 * 1024 * 1024;
using (MemoryStream ms = new MemoryStream(trimSize))
{
    using (FileStream s = new FileStream(logFilename, FileMode.Open, FileAccess.ReadWrite))
    {
        s.Seek(-trimSize, SeekOrigin.End);
        byte[] bytes = new byte[trimSize];
        s.Read(bytes, 0, trimSize);
        ms.Write(bytes, 0, trimSize);
        ms.Position = 0;
        s.SetLength(trimSize);
        s.Position = 0;
        ms.CopyTo(s);
    }
}
You could read the file into a binary stream and use the Seek method to retrieve just the last 10MB of data and load it into memory. Then you save this stream into a new file and delete the old one. The text data could be truncated, though, so you have to decide whether this is acceptable.
Look here for an example of the Seek method:
http://www.dotnetperls.com/seek
In C#, I want to record an audio stream.
I am doing something along the lines of:
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://url.com/stream");
WebResponse resp = req.GetResponse();
Stream s = resp.GetResponseStream();
fs = File.Exists(fileName)
    ? new FileStream(fileName, FileMode.Append)
    : new FileStream(fileName, FileMode.Create);
byte[] buffer = new byte[4096];
while (s.CanRead)
{
    Array.Clear(buffer, 0, buffer.Length);
    total += s.Read(buffer, 0, buffer.Length);
    fs.Write(buffer, 0, buffer.Length);
}
and the file size grows but can't be played back by VLC or any other program.
This is not my exact code; I do a lot of error checking etc., but this gives the general idea.
Array.Clear(buffer, 0, buffer.Length);
total += s.Read(buffer, 0, buffer.Length);
fs.Write(buffer, 0, buffer.Length);
You do not have to clear the whole array before you read; there's no point in doing this. But you do have to check how many bytes you actually read, because there's no guarantee the whole array is filled every time (and it probably won't be):
int bytesRead = s.Read(buffer, 0, buffer.Length);
fs.Write(buffer, 0, bytesRead);
total += bytesRead;
Also, whether the file plays back (even once it's no longer corrupted after you fix the file-writing code) depends on the format you are downloading: what codec / file type is it?
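Putting it together, the download loop from the question could look like this (a sketch using the question's variable names; note that Read returning 0 signals end of stream, whereas s.CanRead stays true forever):
byte[] buffer = new byte[4096];
int bytesRead;
// Read returns 0 at end of stream; looping on s.CanRead would never terminate
while ((bytesRead = s.Read(buffer, 0, buffer.Length)) > 0)
{
    fs.Write(buffer, 0, bytesRead); // write only the bytes actually read
    total += bytesRead;
}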
The problem is the streamed bits don't have context. When you stream to an application, there is a tacit agreement that you are dealing with file type X, and the streaming program then tries to play the bits.
When you stream to a file, you have to add the context. One of the most important bits is the header identifying the type of file and other information.
If you can add the header, you can play the file from the file system. The header will not be part of the stream, as the server and client have already agreed on what type of file it is.
If you create a streaming player, you can possibly play back the bits you saved, since you negotiate the type. But to have it automagically work from a file, you have to add the header.
Trying to save streamed MP3 audio to disk is essentially impossible without a detailed understanding of both the stream format and the file format for MP3. What you're getting from the stream is a series of "windowed" chunks of audio converted to frequency domain; the player receiving the stream converts the chunks back into time-domain audio on the fly and plays them one after the other.
To make an MP3 file, you would have to first write out a header containing the format information and then write each chunk of data. But most likely the format for storing these chunks in a file is different from the way in which they're compacted into a stream.
Sorry, but I would seriously advise you to give this up. One major reason that music services stream instead of offering file downloads is specifically because it's so difficult to save an MP3-type stream to disk (it would be a trivial matter to save an uncompressed audio stream to a WAV file).