I'm making a small application that sends part of a video file to the client, to be played in a <video> element.
This is the code I have:
[RoutePrefix("api/video")]
public class VideoApiController : ApiController
{
    [Route("")]
    [HttpGet]
    public async Task<HttpResponseMessage> GetVideoAsync([FromUri] GetVideoViewModel model)
    {
        var path = System.Web.Hosting.HostingEnvironment.MapPath($"~/wwwroot/{model.FileName}");
        if (path == null || !File.Exists(path))
            return new HttpResponseMessage(HttpStatusCode.NotFound);

        using (var fileStream = File.OpenRead(path))
        {
            // Read the file from the start up to model.To; ByteRangeStreamContent
            // slices the requested range out of the stream it is given.
            var bytes = new byte[model.To];
            fileStream.Read(bytes, 0, bytes.Length);

            var memoryStream = new MemoryStream(bytes);
            memoryStream.Seek(0, SeekOrigin.Begin);

            var httpResponseMessage = new HttpResponseMessage(HttpStatusCode.PartialContent);
            httpResponseMessage.Content = new ByteRangeStreamContent(memoryStream,
                new RangeHeaderValue(model.From, model.To), "video/webm");
            return httpResponseMessage;
        }
    }
}
Then I tested my endpoint with Postman.
If I selected the byte range from 0 to 100000 (http://localhost:64186/api/video?fileName=elephants-dream.webm&from=0&to=100000), the video could be displayed in the result panel.
However, when I selected the byte range from 100000 to 200000 (http://localhost:64186/api/video?fileName=elephants-dream.webm&from=100000&to=200000) as the first request, the video was blank.
As I understand it, video/webm uses a codec whose metadata is included in the first bytes of the stream.
What should I do if I want to play a part of the video without playing it from the beginning?
Thank you.
This is not very easy to do.
The first part of the stream is not just some metadata; it is all the information needed to tell the player how and what it is playing, down to the color information to use when rendering the stream.
What you can do is read the header of the video, extract the keyframes (called Cues), then seek through your stream and start streaming the bits out with a new header. Basically, you would be building web streaming software from scratch, and it would only work for this very specific video format (codec).
But here is the info to get you started, and how to write the header in C++.
Or you can use ffmpeg and seek keyframes to jump to a good spot, and let hundreds of other developers scream in frustration instead of you.
Microsoft has a zombie project that lets you wrap ffmpeg in C#.
Here are some other options for ffmpeg.
And how you can seek keyframes.
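If you go the ffmpeg route, a minimal sketch of driving the command-line tool from C# might look like this (assuming ffmpeg is on the PATH; the file names and timestamps are placeholders). Putting -ss before -i makes ffmpeg seek to a keyframe, and -c copy avoids re-encoding, so the output segment gets its own valid header:
var psi = new ProcessStartInfo
{
    FileName = "ffmpeg",
    // -ss before -i seeks to the keyframe at/before the timestamp;
    // -t limits the segment length; -c copy avoids re-encoding.
    Arguments = "-ss 00:01:40 -i elephants-dream.webm -t 00:00:10 -c copy segment.webm",
    UseShellExecute = false,
    RedirectStandardError = true
};
using (var process = Process.Start(psi))
{
    Console.WriteLine(process.StandardError.ReadToEnd()); // ffmpeg logs to stderr
    process.WaitForExit();
}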
The other solution is to install dedicated video streaming software to handle this.
Related
I am using a MediaPlaybackList to essentially 'stream' audio data coming in via Bluetooth as a byte[] on a fixed-time gather interval. According to the MS documentation, MediaPlaybackList provides 'gapless' playback between audio samples. But in my case, there is a popping sound and a gap when transitioning to the next audio sample.
byte[] audioContent = new byte[audioLength];
chatReader.ReadBytes(audioContent);
MediaPlaybackItem mediaPlaybackItem = new MediaPlaybackItem(
    MediaSource.CreateFromStream(new MemoryStream(audioContent).AsRandomAccessStream(), "audio/mpeg"));
playbackList.Items.Add(mediaPlaybackItem);
if (_mediaPlayerElement.MediaPlayer.PlaybackSession.PlaybackState != MediaPlaybackState.Playing)
{
    _mediaPlayerElement.MediaPlayer.Play();
}
How can I achieve truly 'gapless' streaming audio using a method similar to this?
Also, I have tried writing my stream to a file in real time as the data comes in, just to check whether the popping sound or the gap is still there. The file that the bytes are appended to plays back perfectly, with no pop or gap.
using (var stream = await playbackFile.OpenStreamForWriteAsync())
{
    stream.Seek(0, SeekOrigin.End);
    await stream.WriteAsync(audioContent, 0, audioContent.Length);
}
The MediaPlayer, and in particular the MediaPlaybackList, is not designed to be used with a 'live' audio stream; you must finish writing the data to the byte stream before adding it to the list and starting the MediaPlayer. Using the MediaPlayer is not the correct solution for this particular scenario.
A better solution would be to use an AudioGraph. The AudioGraph lets you add input sources from actual audio endpoints, so you don't need to fill a byte buffer with the streaming audio, and you can then use submix nodes to mix between the endpoint streams with no clicks or pops.
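A minimal sketch of that approach, assuming the default capture and render devices and trimming error handling (UWP, Windows.Media.Audio):
async Task<AudioGraph> BuildGraphAsync()
{
    var settings = new AudioGraphSettings(AudioRenderCategory.Media);
    CreateAudioGraphResult result = await AudioGraph.CreateAsync(settings);
    if (result.Status != AudioGraphCreationStatus.Success)
        throw new InvalidOperationException(result.Status.ToString());
    AudioGraph graph = result.Graph;

    // Route the capture endpoint through a submix node to the render endpoint;
    // the submix node is where you can mix or cross-fade between sources.
    var output = (await graph.CreateDeviceOutputNodeAsync()).DeviceOutputNode;
    var input = (await graph.CreateDeviceInputNodeAsync(MediaCategory.Other)).DeviceInputNode;
    AudioSubmixNode submix = graph.CreateSubmixNode();

    input.AddOutgoingConnection(submix);
    submix.AddOutgoingConnection(output);

    graph.Start();
    return graph;
}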
I am trying to send a continuous stream from a C# application to an ASP.NET Core REST API.
By a continuous stream I mean, for example, someone talking into a microphone with the sound sent directly to the REST API (without being saved to a local file) to be saved to a file there.
I have been searching a lot on Google for something like this and so far could not find anything really useful.
I have been trying to emulate it by sending a large file (297 MB).
This is what I have so far for the client side:
string TARGETURL = "http://localhost:58000/api/file/";
string filePath = @"G:\Voice\Samples\The Monkey's Paw.wav";

byte[] fileContent = File.ReadAllBytes(filePath);
var dummyStream = new MemoryStream(fileContent);
var inputData = new StreamContent(dummyStream);

HttpResponseMessage response = this._httpClient.PostAsync(TARGETURL, inputData).Result;
HttpContent result = response.Content;
if (response.IsSuccessStatusCode)
{
    string contents = result.ReadAsStringAsync().Result;
}
else
{
    // do something
}
And for the server side:
[Route("")]
[HttpPost]
public async Task<JsonResult> Post()
{
    Dictionary<string, object> rv = new Dictionary<string, object>();
    try
    {
        string file = Path.Combine(@"G:\Voice\Samples\dummy.txt");
        using (FileStream fs = new FileStream(file, FileMode.Create, FileAccess.Write,
                                              FileShare.None, 4096, useAsync: true))
        {
            await Request.Body.CopyToAsync(fs);
        }
        // complete the transaction
        rv.Add("success", true);
        rv.Add("error", "");
    }
    catch (Exception ex)
    {
        rv.Add("success", false);
        rv.Add("error", ex.Message);
    }
    return Json(rv);
}
When I send the file, the server throws the following exception:
The request's Content-Length 304137380 is larger than the request body size limit 30000000.
I know that I could increase the body size limit, but that's not a long-term solution, as the stream length could grow beyond any limit I set.
That's why I am trying to find a solution that sends the stream in chunks for the server to rebuild and write to a file.
What you probably want to do is use a different network stack. A web application will always try to fit everything into HTTP, which is a very specific way of communicating, and REST is built on top of those ideas as well. Things on the Internet are generally thought of as documents with references, and REST is an extension of this idea.
It does, however, sit on top of some other great technologies that might suit your needs better.
There's nothing stopping you from using the Internet, but you may need to look at a UDP- or TCP-level implementation. Be aware that you will still be sending information in packets; there is no such thing as a constant stream of bits on the Internet. A sound wave in the real world is an infinite thing, but computers are rubbish at that.
Maybe start by taking a look at using sockets and a library like NAudio, as in the sketch below.
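For example, a rough sketch using NAudio's WaveInEvent to push raw capture buffers over a plain TCP socket as they arrive (host, port, and wave format here are placeholders):
var client = new TcpClient("localhost", 9000);
NetworkStream network = client.GetStream();

var waveIn = new WaveInEvent { WaveFormat = new WaveFormat(44100, 16, 1) };
waveIn.DataAvailable += (s, e) =>
{
    // Each callback delivers one captured buffer; forward it immediately
    // instead of accumulating the whole recording first.
    network.Write(e.Buffer, 0, e.BytesRecorded);
};
waveIn.StartRecording();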
I'm trying to reimplement in C# an existing Matlab 8-band equalizer GUI I created for a project last week. In Matlab, songs load into memory as a dynamic array, where they can be freely manipulated, and playing is as easy as sound(array).
I found the NAudio library, which conveniently already has MP3 extractors, players, and both convolution and FFT defined. I was able to open the MP3 and read all its data into an array (though I'm not positive I'm going about it correctly). However, even after looking through a couple of examples, I'm struggling to figure out how to take the array and write it back into a stream in such a way as to play it properly (I don't need to write to a file).
Following the examples I found, I read my MP3s like this:
private byte[] CreateInputStream(string fileName)
{
    byte[] stream;
    if (fileName.EndsWith(".mp3"))
    {
        WaveStream mp3Reader = new Mp3FileReader(fileName);
        songFormat = mp3Reader.WaveFormat; // songFormat is a class field
        long sizeOfStream = mp3Reader.Length;
        stream = new byte[sizeOfStream];
        mp3Reader.Read(stream, 0, (int)sizeOfStream);
    }
    else
    {
        throw new InvalidOperationException("Unsupported Exception");
    }
    return stream;
}
Now I have an array of bytes presumably containing raw audio data, which I intend to eventually convert to floats so as to run it through the DSP module. Right now, however, I'm simply trying to see if I can play the array of bytes.
Stream outstream = new MemoryStream(stream);
WaveFileWriter wfr = new WaveFileWriter(outstream, songFormat);
// outputStream is an array of bytes and a class variable
wfr.Write(outputStream, 0, (int)outputStream.Length);

WaveFileReader wr = new WaveFileReader(outstream);
volumeStream = new WaveChannel32(wr);
waveOutDevice.Init(volumeStream);
waveOutDevice.Play();
Right now I'm getting errors thrown in WaveFileReader(outstream) saying that it can't read past the end of the stream. I suspect that's not the only thing I'm doing incorrectly. Any insights?
Your code isn't working because you never close the WaveFileWriter, so its headers aren't written correctly; you would also need to rewind the MemoryStream.
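If you do keep the WAV round-trip, a sketch of the fix might look like this (NAudio.Utils.IgnoreDisposeStream is an NAudio helper that stops the writer from also closing the underlying MemoryStream):
Stream outstream = new MemoryStream();
using (var wfr = new WaveFileWriter(new IgnoreDisposeStream(outstream), songFormat))
{
    wfr.Write(outputStream, 0, (int)outputStream.Length);
} // disposing the writer finalizes the WAV headers

outstream.Position = 0; // rewind before reading
WaveFileReader wr = new WaveFileReader(outstream);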
However, there is no need to write a WAV file at all if you just want to play back an array of bytes. Use a RawSourceWaveStream and pass in your MemoryStream.
You may also find the AudioFileReader class more suited to your needs, as it provides the samples as floating point directly and lets you modify the volume.
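A minimal sketch of the RawSourceWaveStream suggestion, reusing the stream byte array, songFormat field, and waveOutDevice from the question (all names assumed from the code above):
// Wrap the raw PCM bytes directly; no WAV header or WaveFileWriter needed.
var memoryStream = new MemoryStream(stream);
var rawSource = new RawSourceWaveStream(memoryStream, songFormat);
volumeStream = new WaveChannel32(rawSource);
waveOutDevice.Init(volumeStream);
waveOutDevice.Play();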
I want to stream data from a server into a MediaElement in my Windows 8 Store (formerly Metro) app. However, I need to "record" the stream while it is streaming so that it can be served from cache if re-requested, which is why I don't want to feed the URL directly into the MediaElement.
Currently, the stumbling block is that MediaElement.SetSource() accepts an IRandomAccessStream, not a System.IO.Stream, which is what I get from HttpWebResponse.GetResponseStream().
The code I have now, which does not work:
var request = WebRequest.CreateHttp(url);
request.AllowReadStreamBuffering = false;
request.BeginGetResponse(ar =>
{
    var response = ((HttpWebResponse)request.EndGetResponse(ar));
    // this is System.IO.Stream:
    var stream = response.GetResponseStream();
    // this needs IRandomAccessStream:
    MediaPlayer.SetSource(stream, "audio/mp3");
}, null);
Is there a solution that lets me stream the audio but also copy the stream to disk once it has finished reading from the remote side?
I haven't experimented with the following idea, but it might work: you could start streaming the web data into a file and then, after a few seconds (for buffering), pass that file to the MediaElement.
I've noticed that MediaElement can be picky about opening a file that is being written to, but I have seen it work, though I can't say why it sometimes works and sometimes doesn't.
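An untested sketch of that idea (UWP; the file name and delay are arbitrary, and response comes from the question's callback):
StorageFile file = await ApplicationData.Current.LocalFolder
    .CreateFileAsync("buffer.mp3", CreationCollisionOption.ReplaceExisting);

// Copy the response into the file in the background while playback starts.
var writeTask = Task.Run(async () =>
{
    using (Stream source = response.GetResponseStream())
    using (Stream target = await file.OpenStreamForWriteAsync())
    {
        await source.CopyToAsync(target);
    }
});

await Task.Delay(TimeSpan.FromSeconds(5)); // crude buffering window
MediaPlayer.SetSource(await file.OpenAsync(FileAccessMode.Read), "audio/mp3");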
I guess this would help you convert the Stream to an IRandomAccessStream:
InMemoryRandomAccessStream ras = new InMemoryRandomAccessStream();
using (Stream stream = response.GetResponseStream())
{
    Stream target = ras.AsStreamForWrite();
    await stream.CopyToAsync(target);
    await target.FlushAsync();
}
ras.Seek(0); // rewind before handing the stream to the MediaElement
I need to develop a WinForms app which can decrypt a media file (a movie) and then play it without saving the decrypted file to the HDD (the decrypted file will end up in a memory stream). The problem is how to play that movie from the memory stream. Is it possible?
It is possible, but I expect you will need to write your own DirectShow filter, which will act as a file reader (implementing the IFileSourceFilter interface) and, as the video plays, read successive frames from the file, decrypt them, and pass them up to the next filter.
This will only work, however, if the file is encrypted in a sequential form (i.e. each individual frame is encrypted as a separate entity). Otherwise, you will have to decrypt the entire file at once, which could be intensive and slow, and would probably have to hit the hard drive to store the resulting file.
But anyway, this link should get you started: http://msdn.microsoft.com/en-us/library/dd375454%28VS.85%29.aspx
I'm afraid that in order to create the DirectShow filter you will need to use C++, and it isn't the easiest API to get your head around.
An alternative may be to use the Windows Media Format SDK, which allows you to pass custom video packets to a renderer in real time. There is also a good interop library for C# (WindowsMediaLib).
First of all, it's a good idea to encrypt the source video piece by piece, so that the encrypted video file is a set of encrypted parts: just split the original file into parts of the same size and encrypt each of them.
Here is the scheme (OutputStream is the stream of the encrypted video file, InputStream is the original file stream, and ChunkSize is the size of each part of the original file; we also write some metadata: the sizes of the original and encrypted pieces):
using (BinaryWriter Writer = new BinaryWriter(OutputStream))
{
    byte[] Buf = new byte[ChunkSize];
    List<int> SourceChunkSizeList = new List<int>();
    List<int> EncryptedChunkSizeList = new List<int>();
    int ReadBytes;

    while ((ReadBytes = InputStream.Read(Buf, 0, Buf.Length)) > 0)
    {
        byte[] EncryptedData = Encrypt(Buf, ReadBytes);
        OutputStream.Write(EncryptedData, 0, EncryptedData.Length);
        SourceChunkSizeList.Add(ReadBytes);
        EncryptedChunkSizeList.Add(EncryptedData.Length);
    }

    // Append the chunk-size tables so a reader can locate any part quickly.
    foreach (int SourceChunkSize in SourceChunkSizeList)
        Writer.Write(SourceChunkSize);
    foreach (int EncryptedChunkSize in EncryptedChunkSizeList)
        Writer.Write(EncryptedChunkSize);
}
Such metadata will help you locate an encrypted part quickly.
Secondly, don't decrypt the data on every read request; cache it: in most cases, video playback is just sequential reading.
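For instance, a sketch of a one-chunk cache (Decrypt is the assumed counterpart of the Encrypt call above; the offset and size tables are read from the metadata at the end of the file):
class ChunkCache
{
    private readonly long[] encryptedOffsets; // start of each encrypted chunk
    private readonly int[] encryptedSizes;    // from the metadata written above
    private int cachedIndex = -1;
    private byte[] cachedPlaintext;

    public ChunkCache(long[] offsets, int[] sizes)
    {
        encryptedOffsets = offsets;
        encryptedSizes = sizes;
    }

    public byte[] GetChunk(Stream encryptedFile, int index)
    {
        if (index == cachedIndex)
            return cachedPlaintext; // sequential playback hits this path

        byte[] buf = new byte[encryptedSizes[index]];
        encryptedFile.Seek(encryptedOffsets[index], SeekOrigin.Begin);
        encryptedFile.Read(buf, 0, buf.Length);

        cachedPlaintext = Decrypt(buf, buf.Length); // your cipher's decrypt routine
        cachedIndex = index;
        return cachedPlaintext;
    }

    private static byte[] Decrypt(byte[] data, int length)
    {
        throw new NotImplementedException(); // counterpart of Encrypt() above
    }
}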
The tricky part is how to play the encrypted video file. You may either write a DirectShow filter (a video-specific solution) or check a 3rd-party product (a general-purpose solution) such as BoxedApp, a virtualization SDK. What's cool is that they have an article that shows how to solve exactly this task: http://boxedapp.com/encrypted_video_streaming.html