I'm trying to play WAV files with the audio element. This works great when I reference the file directly, but when I try to stream it via a C# FileStream it stops working.
The code below works perfectly in Chrome and Opera on PC, but I am trying to get it to work on iPad and Firefox.
I am using .NET Framework 2.0.
I have to use WAV.
<audio id="audio" src="www.acme.com/streamer.aspx?music=test.wav" controls preload="auto"></audio> Not working
<audio id="audio" src="test.wav" controls preload="auto"></audio> Working
The streaming function is below:
string wavFileName = "test.wav";
FileStream soundStream;
long FileSize;
soundStream = new FileStream(@"\\netshare\" + wavFileName, FileMode.Open, FileAccess.Read, FileShare.Read, 256);
FileSize = soundStream.Length;
byte[] Buffer = new byte[(int)FileSize];
int count = 0;
int offset = 0;
while ((count = soundStream.Read(Buffer, offset, Buffer.Length)) > 0)
{
    Response.OutputStream.Write(Buffer, offset, count);
}
Response.Buffer = true;
Response.Clear();
Response.AddHeader("Content-Type", "audio/wave");
Response.AddHeader("Content-Disposition", "attachment;filename=" + wavFileName);
Response.ContentType = "audio/wave";
Response.BinaryWrite(Buffer);
if (soundStream != null)
{
    soundStream.Close();
}
Response.End();
(BTW, this is my first post; I hope I did not break any rules.)
The code seems fine. I think you need to make sure that the WAV file is in the right PCM format. Some browsers have issues playing certain PCM encodings. I found out that just because a browser (HTML5) supports WAV files does not mean it will actually play every WAV type.
You might also want to flush the Response.
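For what it's worth, here is a minimal sketch of those suggestions applied to your handler, with the headers set before any bytes are written and the response flushed at the end. It assumes an ASP.NET Web Forms page on .NET 2.0 and reuses the share path and file name from your question:

string wavFileName = "test.wav";

Response.Clear();
Response.ContentType = "audio/wav";
// "inline" lets the <audio> element play the file instead of forcing a download.
Response.AddHeader("Content-Disposition", "inline; filename=" + wavFileName);

using (FileStream soundStream = new FileStream(@"\\netshare\" + wavFileName,
    FileMode.Open, FileAccess.Read, FileShare.Read))
{
    Response.AddHeader("Content-Length", soundStream.Length.ToString());

    byte[] buffer = new byte[4096];
    int count;
    while ((count = soundStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        Response.OutputStream.Write(buffer, 0, count);
    }
}

Response.Flush();
Response.End();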
I need to proxy an audio stream to get around CORS restrictions so that I can use AudioContext() for visualizations.
Instead of using the original stream URL, I'd point the player to "mysite.com/stream/", which would intercept the stream and feed it to the player.
I really don't know what I'm doing in this case, which should be obvious from my attempt below. Thanks to anyone for some help.
System.Net.HttpWebRequest web = (System.Net.HttpWebRequest)System.Net.WebRequest.Create("https://ssl.geckohost.nz/proxy/caitlinssl?mp=/stream");
Response.ContentType = "audio/aac";
char[] buffer = new char[8192];
byte[] buffer_bytes = new byte[8192];
using (System.Net.HttpWebResponse web_resp = (System.Net.HttpWebResponse)web.GetResponse())
{
    using (System.IO.Stream stream_web = web_resp.GetResponseStream())
    {
        //stream_web.CopyTo(Response.OutputStream);
        using (System.IO.StreamReader stream_rdr = new System.IO.StreamReader(stream_web))
        {
            for (var i = 0; i < 100; i++)
            {
                stream_rdr.Read(buffer, 0, 8192);
                Response.Write(buffer);
                Response.Flush();
            }
        }
    }
}
The solution I found was originally created for use with NAudio to play a remote MP3 file, but I was able to repurpose it for my needs. There are probably better ways, but this works.
Play audio from a stream using C#
Edit: It seems that the connection is dropped server to server, but I'm not positive. IIS will not send "Connection: keep-alive" to the client, even though the original stream does send it. I tried adding KeepAlive = true to the WebRequest and changing the protocol version to HTTP 1.0 to keep the server-to-server connection alive, with no joy.
string url = "https://ssl.geckohost.nz/proxy/caitlinssl?mp=/stream";
Response.ContentType = "audio/aac";
using (Stream stream = WebRequest.Create(url).GetResponse().GetResponseStream())
{
    byte[] buffer = new byte[32768];
    int read;
    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        Response.OutputStream.Write(buffer, 0, read);
        Response.Flush();
    }
}
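The keep-alive settings mentioned in the edit above looked roughly like this (a sketch; as noted, they made no difference for me):

var web = (HttpWebRequest)WebRequest.Create(url);
// Ask the upstream server to keep the server-to-server connection open.
web.KeepAlive = true;
web.ProtocolVersion = HttpVersion.Version10;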
Conclusion: It seems that in this case it is not possible with IIS to persist the connection server to server, as far as I was able to find. I'm not worried about scalability, as this is a personal project with a user base of one. The solution I ended up with is to start capturing the stream with ffmpeg on click, transcode it to FLAC/MKV, and use a variation of the above to read the file while it's being written and feed it to the player (WPF WebView [IE/Edge]).
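The "read the file while it's being written" part looks roughly like the sketch below. It assumes ffmpeg has already been started elsewhere and is writing to a known path; the path, content type, and stop condition here are placeholders, not my exact code:

// Sketch only: the path and content type below are placeholders.
string path = @"C:\temp\capture.mka";
Response.ContentType = "audio/x-matroska";
Response.BufferOutput = false;

// FileShare.ReadWrite lets us read the file while ffmpeg is still writing to it.
using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
    byte[] buffer = new byte[32768];
    while (Response.IsClientConnected)
    {
        int read = stream.Read(buffer, 0, buffer.Length);
        if (read > 0)
        {
            Response.OutputStream.Write(buffer, 0, read);
            Response.Flush();
        }
        else
        {
            // No new data yet; wait for ffmpeg to write more.
            System.Threading.Thread.Sleep(250);
        }
    }
}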
In an ASP.NET application, I am using the code shown below to read video files from a shared path and play them in the browser. It works for files smaller than 300 MB, but it throws an OutOfMemoryException for a 650 MB file, and for files larger than 2 GB it throws:
The file is too long. This operation is currently limited to supporting files less than 2 gigabytes in size.
My code:
ImpersonationHelper.Impersonate(ConfigurationManager.AppSettings["Domain"], ConfigurationManager.AppSettings["UserName"], ConfigurationManager.AppSettings["Password"], delegate
{
    FileBuffer = System.IO.File.ReadAllBytes(filepath);
    if (FileBuffer != null)
    {
        Response.Buffer = true;
        Response.ContentType = contenttype;
        Response.AddHeader("Content-Disposition", "inline");
        Response.AddHeader("content-length", FileBuffer.Length.ToString());
        Response.BinaryWrite(FileBuffer);
        Response.Flush();
        HttpContext.Current.ApplicationInstance.CompleteRequest();
        Response.Close();
        Response.End();
    }
});
Loading the entire file into memory is a bad idea. Instead of using the FileBuffer array, you can do the following:
// ...
using (var stream = new FileStream(filepath, FileMode.Open, FileAccess.Read))
{
    Response.AddHeader("content-length", stream.Length.ToString());
    stream.CopyTo(Response.OutputStream);
}
You should also remove the line Response.Buffer = true; and add Response.BufferOutput = false; at the beginning. With BufferOutput == true, the web server keeps the entire output in memory and only then sends it to the client. We don't want that to happen here.
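Putting both suggestions together, the delegate body could look roughly like this (just a sketch that reuses ImpersonationHelper, filepath, and contenttype from your question; error handling omitted):

ImpersonationHelper.Impersonate(ConfigurationManager.AppSettings["Domain"], ConfigurationManager.AppSettings["UserName"], ConfigurationManager.AppSettings["Password"], delegate
{
    // Stream the file instead of loading it into a byte[].
    Response.BufferOutput = false;
    Response.ContentType = contenttype;
    Response.AddHeader("Content-Disposition", "inline");

    using (var stream = new FileStream(filepath, FileMode.Open, FileAccess.Read))
    {
        Response.AddHeader("content-length", stream.Length.ToString());
        stream.CopyTo(Response.OutputStream);
    }

    Response.Flush();
    HttpContext.Current.ApplicationInstance.CompleteRequest();
});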
I am trying to write a text file to the response using OutputStream.Write, but I've noticed that the last character of the file is not being sent.
Whether the file is 6 KB or 242 KB, the last character is skipped.
AwAIB0UfFlJTSTNABEQWGgMVAhNFBkVeKgVTSx4LCVQMBUUQRUMwBVFTGwcAEAASRRNTBipNQQMFDREYB
BUAH0VIKgVLRVccCRFFGRcbR08wREgDEQENEUkCDRNOTX5cS1ZXHgQGHFYIB0NOfgUIAwMABFQABBcdUg
YpRFcDHgZBAA0TRTEDBj1KQEZXREEdRRIKHFQGK0tARgUbFRULEkUFSF9+R1FXVxwJEUUkAAFQSTBWQQ0
xBBQHDV5MUkFIOgV2RgQYDhoWE0sxTEktQAwKVx8AB0UCDRcAQyxXS1FZSBUcBAIWUldDN1dAAxEHE1QI
E0UTTkJ+TARAFgYVVBAYARdSVSpESkdXAAAcBFg=
Note: the whole text is one line in the text file.
My text file is somewhat similar to the above. Can anyone help me out?
My code:
var path = Request.QueryString["path"];
path = Server.MapPath(Url.Content("~/Files/" + path + "/somefile.txt"));
Response.Buffer = false;
Response.ContentType = "text/plain";
FileInfo file = new FileInfo(path);
int len = (int)file.Length, bytes;
Response.AppendHeader("content-length", len.ToString());
byte[] buffer = new byte[1024];
Stream outStream = Response.OutputStream;
using (Stream stream = System.IO.File.OpenRead(path))
{
    while (len > 0 && (bytes = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        outStream.Write(buffer, 0, bytes);
        len -= bytes;
    }
}
Response.Flush();
Response.Close();
UPDATE 1:
Now shows the complete text from the file.
UPDATE 2 :
Updated the complete C# code.
UPDATE 3:
Thanks, my friends, for all your efforts! I somehow made it work - the problem was Response.Flush() and Response.Close(); once I removed these two statements it started working. I don't understand why this happened, since I always use Response.Flush() and Response.Close() and have never had this kind of problem before. If anyone could give me an explanation, it would be appreciated. I will mark #AlexeiLevenkov's post as the answer, as I just tested it again without Response.Flush() and Response.Close() and it is working.
Stream.CopyTo is an easier approach (as long as you can use .NET 4.0).
using (var stream = System.IO.File.OpenRead(path))
{
    stream.CopyTo(outStream);
}
I think you need to call Response.Flush() or outStream.Flush().
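For example, relative to the loop in your code, the flush would go right after the copy finishes (a sketch; whether it changes anything in this particular case I'm not sure):

using (Stream stream = System.IO.File.OpenRead(path))
{
    while (len > 0 && (bytes = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        outStream.Write(buffer, 0, bytes);
        len -= bytes;
    }
}
// Push any remaining buffered bytes to the client before ending the response.
outStream.Flush();
Response.Flush();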
A user posted this article about how to use HttpResponse.Filter to compress large amounts of data. But what will happen if I try to transfer a 4 GB file? Will it load the whole file into memory in order to compress it, or will it compress it chunk by chunk?
I mean, I'm doing this right now:
public void GetFile(HttpResponse response)
{
    string fileName = "example.iso";
    response.ClearHeaders();
    response.ClearContent();
    response.ContentType = "application/octet-stream";
    response.AppendHeader("Content-Disposition", "attachment; filename=" + fileName);
    // No Content-Length header: the compressed size is not known up front.

    using (FileStream fs = new FileStream(Path.Combine(HttpContext.Current.Server.MapPath("~/App_Data"), fileName), FileMode.Open))
    // DeflateStream wraps the response output, so each chunk written to it is compressed and sent.
    using (DeflateStream ds = new DeflateStream(response.OutputStream, CompressionMode.Compress))
    {
        byte[] buffer = new byte[4096];
        int read;
        while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            ds.Write(buffer, 0, read);
            response.Flush();
        }
    }
}
So at the same time as I'm reading, I'm compressing and sending. What I want to know is whether HttpResponse.Filter does the same thing, or whether it loads the whole file into memory in order to compress it.
Also, I'm a little unsure about this... maybe the whole file needs to be loaded into memory to compress it... does it?
Cheers.
HttpResponse.Filter is a stream: you can write to it in chunks.
You are doing it correctly. You are using a FileStream and a DeflateStream to read the file and compress it. You are reading 4096 bytes at a time and then writing them into the response stream, so all you are using is 4096 bytes (and a little bit more) of memory.
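For comparison, here is a minimal sketch of the HttpResponse.Filter approach from the article you linked, using the same placeholder file as above (the GetFileViaFilter name is just for illustration). The filter wraps the response stream, so each chunk written to the output is compressed as it passes through, and memory usage stays at roughly one buffer:

// Hypothetical counterpart to GetFile above, sketched for comparison.
public void GetFileViaFilter(HttpResponse response)
{
    string fileName = "example.iso";

    response.ClearHeaders();
    response.ClearContent();
    response.ContentType = "application/octet-stream";
    response.AppendHeader("Content-Disposition", "attachment; filename=" + fileName);
    // Tell the client the body is deflate-compressed so it can inflate it.
    response.AppendHeader("Content-Encoding", "deflate");

    // Whatever is written to the response from now on is compressed chunk by chunk.
    response.Filter = new DeflateStream(response.Filter, CompressionMode.Compress);

    using (FileStream fs = new FileStream(Path.Combine(HttpContext.Current.Server.MapPath("~/App_Data"), fileName), FileMode.Open, FileAccess.Read))
    {
        byte[] buffer = new byte[4096];
        int read;
        while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            response.OutputStream.Write(buffer, 0, read);
        }
    }
    response.Flush();
}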
I'm trying to create a method which continuously streams a zip file while the user downloads it (so that there is no wasted streaming).
I added a Thread.Sleep to simulate latency.
public override void ExecuteResult(ControllerContext context)
{
    HttpResponseBase response = context.HttpContext.Response;
    response.Clear();
    response.ClearContent();
    response.ClearHeaders();
    response.Cookies.Clear();
    response.ContentType = ContentType;
    response.ContentEncoding = Encoding.Default;
    response.AddHeader("Content-Type", ContentType);
    context.HttpContext.Response.AddHeader("Content-Disposition",
        String.Format("attachment; filename={0}", this.DownloadName));

    int ind = 0;
    using (ZipOutputStream zipOStream = new ZipOutputStream(context.HttpContext.Response.OutputStream))
    {
        foreach (var file in FilesToZip)
        {
            ZipEntry entry = new ZipEntry(FilesToZipNames[ind++]);
            zipOStream.PutNextEntry(entry);
            Thread.Sleep(1000);
            zipOStream.Write(file, 0, file.Length);
            zipOStream.Flush();
        }
        zipOStream.Finish();
    }
    response.OutputStream.Flush();
}
It seems like the zip will not start streaming until all the files are zipped. Is there a way to stream continuously? Maybe with a different library?
Assuming the zip format itself can be streamed, your problem is that the response is being buffered by default. If you set HttpResponseBase.BufferOutput to false, it should start streaming immediately.
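In your ExecuteResult, that would be a one-line change near the top (a sketch; everything else stays as you have it):

HttpResponseBase response = context.HttpContext.Response;
response.Clear();
response.ClearContent();
response.ClearHeaders();
// Disable output buffering so each flushed chunk is sent to the client immediately.
response.BufferOutput = false;
response.ContentType = ContentType;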
I think you might need to flush the OutputStream rather than the zipOStream to see any output over HTTP. So response.OutputStream.Flush() would appear in your loop in that case. Not sure if it will actually solve your problem though.