Getting MP3 file length - C#

I am currently trying to write an audio player in C#. I am using the BASS library to play music, but now I have a small problem with getting the length of a song.
Well, I read the BASS docs and found a way:
"All" I need to do is:
int stream = Bass.BASS_StreamCreateFile(filepath, ...);
double length = Bass.BASS_ChannelBytes2Seconds(stream, Bass.BASS_ChannelGetLength(stream));
And in most cases I get a valid length for the song. And here the problem starts: as far as I know, creating a stream is quite an expensive operation (correct me if I am mistaken), and creating a stream only to get the length of a song looks a little silly.
So my question is: is there any other way to get it without creating a stream (something less expensive)? I will think about reading ID3 tags later. Is creating that stream an "evil that must be done no matter what", and would any other library end up doing exactly the same thing?

You can use the Microsoft.WindowsAPICodePack.Shell:
using Microsoft.WindowsAPICodePack.Shell;
Then code like so:
string file = "myfile.mp3";
ShellFile so = ShellFile.FromFilePath(file);
double hundredNanoseconds;
double.TryParse(so.Properties.System.Media.Duration.Value.ToString(), out hundredNanoseconds);
There is a CodeProject article that could help you as well.
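The Duration value is in 100-nanosecond units, which is the same resolution as .NET ticks, so a minimal sketch for turning the parsed value into something usable (assuming the TryParse above succeeded) is:
// 100-nanosecond units == .NET ticks, so TimeSpan can convert directly.
TimeSpan duration = TimeSpan.FromTicks((long)hundredNanoseconds);
Console.WriteLine(duration.TotalSeconds); // song length in seconds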

HoloLens video stream with spatial data

I'm using the HoloLens and I'm trying to save a video stream together with the world/projection matrices available for each frame.
I've been trying to just take a sequence of pictures and save the data, but I can't find a way to save both the image and the matrices.
When saving to disk, there is no option to get the PhotoCaptureFrame (which contains the matrix data); when saving to memory, I can't seem to save the image to disk.
I tried using the following method, but it crashed my Unity program:
List<byte> imageBufferList = new List<byte>();
photoCaptureFrame.CopyRawImageDataIntoBuffer(imageBufferList);
byte[] myArrayImage = imageBufferList.ToArray();
And then I use this to convert the byte array:
using (MemoryStream mStream = new MemoryStream(byteArrayIn))
    return Image.FromStream(mStream);
After which I save the result.
When I remove the MemoryStream part, the program doesn't crash (but it doesn't save my image either).
I've been looking all over the internet, but there are a lot of vague statements about it
a) not being possible
b) using a MemoryStream (but that crashes)
Any suggestions?
If anyone knows a way to save all the matrix (projection and world) data per frame, together with the corresponding frame of a video stream, it would be a great help.
Edit: I also tried to look into https://github.com/VulcanTechnologies/HoloLensCameraStream but this seems to give problems with newer Unity versions. Any remarks about this?
To clarify my end goal:
When filming, the program should save all frames and the corresponding matrices, for example:
Frame_01, Frame_02, Frame_03, ... (.jpg/png/...)
World_matrix_01, World_matrix_02, ... (.txt)
Projection_matrix_01, Projection_matrix_02,... (.txt)
Regarding the HoloLensCameraStream edit: I used it and it worked very well in Unity 2018.3.13f1. Note that you can only test it on the HoloLens itself, out of debug mode.
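As a rough sketch of the capture-to-memory path (assuming Unity's PhotoCaptureFrame API with TryGetCameraToWorldMatrix/TryGetProjectionMatrix; the frame counter and file naming are placeholders, and the raw buffer is written directly rather than via System.Drawing, which is not available on the HoloLens and is a likely cause of the crash above):
// Assumed usings: System.Collections.Generic, System.IO, UnityEngine,
// and the WebCam namespace of your Unity version (e.g. UnityEngine.Windows.WebCam).
private int frameIndex;

void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame)
{
    // Raw image bytes for this frame.
    List<byte> imageBufferList = new List<byte>();
    photoCaptureFrame.CopyRawImageDataIntoBuffer(imageBufferList);

    // Matrices for this frame; both calls can fail, hence the Try pattern.
    Matrix4x4 cameraToWorld, projection;
    bool hasWorld = photoCaptureFrame.TryGetCameraToWorldMatrix(out cameraToWorld);
    bool hasProjection = photoCaptureFrame.TryGetProjectionMatrix(out projection);

    // Save frame and matrices side by side, as in the end goal above.
    string basePath = Application.persistentDataPath;
    File.WriteAllBytes(Path.Combine(basePath, string.Format("Frame_{0:D2}.raw", frameIndex)), imageBufferList.ToArray());
    if (hasWorld)
        File.WriteAllText(Path.Combine(basePath, string.Format("World_matrix_{0:D2}.txt", frameIndex)), cameraToWorld.ToString());
    if (hasProjection)
        File.WriteAllText(Path.Combine(basePath, string.Format("Projection_matrix_{0:D2}.txt", frameIndex)), projection.ToString());
    frameIndex++;
}
Note the raw buffer is in the camera's native pixel format rather than .jpg/.png; converting it would be a separate step, for example via Texture2D and Unity's EncodeToPNG.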

How can I download a video using VideoLib when it's Async?

Alright, here is my dilemma:
I wanted to learn how to use NuGet, so I tested it out with a library called VideoLibrary (https://www.nuget.org/packages/VideoLibrary/). I got it installed and finally got it working in code. I was able to download a video, which is the purpose of the library, but it was in the wrong format. So, having read through the questions and answers section, I got this code:
var youTube = YouTube.Default;
var allVideos = await youTube.GetAllVideosAsync(URL);
var mp4Videos = allVideos.Where(_ => _.Format == VideoFormat.Mp4);
Now, from what I understand, that first fetches all the videos and then narrows the variable down to only the MP4 videos.
However, after that is done, I don't know how to save one to a byte array. Typically, to save a video using VideoLib, you assign it to a byte array and then write the byte array to a file. So I previously used this code with a video that wasn't async:
byte[] bytes = video.GetBytes();
var stream = video.Stream();
File.WriteAllBytes(@"C:\" + fullName, bytes);
Now, this method doesn't work with my videos because they're async. The developer mentioned later that you could assign the video to a byte array using:
byte[] bytes = allVideos.GetBytesAsync();
However, that doesn't work for some reason.
I'm assuming I'm using it wrong and that's why, but I don't know.
The code is underlined in red, and it gives:
'IEnumerable<YouTubeVideo>' does not contain a definition for 'GetBytesAsync'
and no extension method 'GetBytesAsync' accepting a first argument of type
'IEnumerable<YouTubeVideo>' could be found
Any help would be appreciated.
Okay, so, I've discovered the answer to my question:
The problem is that allVideos is multiple videos: GetAllVideosAsync retrieves a whole list of videos, so you have to narrow it down.
With mp4Videos you've narrowed it down, but it is still an IEnumerable<YouTubeVideo> rather than a single video, which is why the compiler can't find GetBytesAsync on it.
So, simply put the code through this:
var singleVideo = mp4Videos.FirstOrDefault();
Then, using the singleVideo variable, you can save it to a byte array like a conventional video, but using the async method:
byte[] bytes = await singleVideo.GetBytesAsync();
Once the contents are saved to the byte array, you can save it any way you please.
I hope this helps anyone who needs it!
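Putting it all together, a minimal end-to-end sketch (assuming VideoLibrary's YouTube.Default/GetAllVideosAsync/GetBytesAsync API as used above; the URL and output directory are placeholders, and video.FullName is assumed to provide a usable file name):
// Assumed usings: System.IO, System.Linq, System.Threading.Tasks, VideoLibrary.
public static async Task DownloadMp4Async(string url, string outputDirectory)
{
    var youTube = YouTube.Default;
    var allVideos = await youTube.GetAllVideosAsync(url);

    // Narrow the sequence down to a single MP4 video.
    var video = allVideos.Where(v => v.Format == VideoFormat.Mp4).FirstOrDefault();
    if (video == null)
        return; // no MP4 stream available

    byte[] bytes = await video.GetBytesAsync();
    File.WriteAllBytes(Path.Combine(outputDirectory, video.FullName), bytes);
}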

Audio stream for multiple outputs (single producer, multi-consumer)

I am attempting to propagate a single sound source to multiple outputs (such as one microphone input to multiple sound cards or channels). The outputs do not have to be synced (a few ms of delay is acceptable), but it would be nice if they were.
I have successfully written code that loops a microphone input to an output using a WaveIn, a BufferedWaveProvider, and a WaveOut. However, when I try to read one BufferedWaveProvider with two instances of WaveOut, the two outputs produce an odd 'interleaved', choppy sound. Here is a code snippet for the output portion:
private void CreateWaveOutDevice()
{
    waveProvider = new BufferedWaveProvider(waveIn.WaveFormat);

    waveOut = new WaveOut();
    waveOut.DeviceNumber = 0; // Sound card 1
    waveOut.DesiredLatency = 100;
    waveOut.Init(waveProvider);
    waveOut.PlaybackStopped += wavePlayer_PlaybackStopped;

    waveOut2 = new WaveOut();
    waveOut2.DeviceNumber = 1; // Sound card 2
    waveOut2.DesiredLatency = 100;
    waveOut2.Init(waveProvider);
    waveOut2.PlaybackStopped += wavePlayer_PlaybackStopped;

    waveOut.Play();
    waveOut2.Play();
}
I think this is happening because when the waveProvider's circular buffer is read, the data is removed, so the two read calls are 'fighting' over the data, which results in the choppy sound.
So I really have two questions:
1.) I see the NAudio library contains many types of WaveStream (RawSourceWaveStream is particularly interesting); however, I have been unable to find a good example of how to read a single stream with multiple WaveOut instances, and I have been unable to get working code using a WaveStream with multiple outputs. Is anyone familiar with WaveStreams, and is this something that can be done?
2.) If NAudio's wave streams cannot be used in a single-producer, multi-consumer situation, then I believe I would need a circular buffer that is not cleared on read, but only when the buffer is full and new data is pushed in. The code wouldn't care whether the data had been read; it would just keep filling the buffer. Would this be the correct approach?
I've spent days searching so hopefully this hasn't already been asked. Thanks for reading.
If you're just reading from a microphone and want two WaveOuts to play it, the simple option is to create two BufferedWaveProviders, one for each WaveOut, and then, when audio is received, send it to both.
Likewise if you were playing from an audio file to two soundcards, the easiest way is to use two reader objects and start them both separately.
There is unfortunately no easy way to synchronize, short of starting and stopping both players at the same time.
There are a few more advanced ways to try to split off an audio stream to two readers, but there can be complications especially if the two readers are not able to read at roughly the same rate.
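As a rough sketch of that two-provider approach (assuming the same NAudio WaveIn/WaveOut fields as in the question; device numbers are placeholders):
// Each WaveOut gets its own buffer, so the readers no longer compete.
waveProvider = new BufferedWaveProvider(waveIn.WaveFormat);
waveProvider2 = new BufferedWaveProvider(waveIn.WaveFormat);

// When the microphone delivers a buffer, copy it into both providers.
waveIn.DataAvailable += (s, e) =>
{
    waveProvider.AddSamples(e.Buffer, 0, e.BytesRecorded);
    waveProvider2.AddSamples(e.Buffer, 0, e.BytesRecorded);
};

waveOut = new WaveOut { DeviceNumber = 0, DesiredLatency = 100 };
waveOut.Init(waveProvider);
waveOut2 = new WaveOut { DeviceNumber = 1, DesiredLatency = 100 };
waveOut2.Init(waveProvider2);

waveOut.Play();
waveOut2.Play();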

video not playing completely after join operation

The following code joins two videos. When I run the program, it joins the videos and puts the joined video in a folder. The joined video's size is correct.
But when I play the video, WMP plays only the first part, while VLC plays only the second part.
public void JoiningVideo()
{
    string j = @"D:/test2";
    string outputpath = @"D:/test3/beforeEventab1.wmv";
    DirectoryInfo di = new DirectoryInfo(j);
    FileStream fs = new FileStream(outputpath, FileMode.Append);
    foreach (FileInfo fi in di.GetFiles("*.wmv"))
    {
        byte[] bytesource = System.IO.File.ReadAllBytes(fi.FullName);
        fs.Write(bytesource, 0, bytesource.Length);
    }
    fs.Close();
}
You know that each video file starts with something called a "header"?
This part of the file contains information about the length etc.
If you want to join two separate video files, you have to merge the headers into a new one containing information about both (joined) parts, and make sure that both videos fit each other. (*)
Otherwise the video is not a valid file.
Due to differences between the decoders of WMP and VLC, one recognises the first file and the other recognises the second.
You can count yourself lucky that the programs played this 'corrupt' file at all! ;)
Just ask a search engine about merging WMV files for a solution that should work for you!
(*)
To merge two videos they need to have the same format (e.g. resolution, framerate, bitrate).
If this does not apply, at least one of them has to be converted to match the other video.
The videos then have to be 'glued' together; it is not sufficient to append one's data to the other.
Each video starts with a header, and this header has to be changed to comprise information about the new (joined) video.
Also, the raw image data cannot simply be appended. Every image is like a piece of a puzzle fitting the next image in the video; the transition is like a new puzzle piece that has to be created, and it may even be necessary to change/reorder the whole second file to get a working transition.
I am not a specialist, but at least I can tell you that this procedure is different for each type of video (MPEG, WMV, ...). The best approach is to use an existing library for this purpose.
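To make "use an existing tool" concrete (this is an illustration, not part of the original answer): if ffmpeg is available on the machine, its concat demuxer can join same-format WMV files without re-encoding, and you can shell out to it from C#. Paths and file names are placeholders:
// Assumed usings: System.Diagnostics, System.IO.
// The list file names the inputs, one per line.
File.WriteAllLines(@"D:\test2\list.txt", new[]
{
    "file 'D:/test2/part1.wmv'",
    "file 'D:/test2/part2.wmv'"
});

var psi = new ProcessStartInfo
{
    FileName = "ffmpeg",
    // -f concat reads the list; -c copy joins without re-encoding,
    // which only works when the inputs share codec/resolution/framerate.
    Arguments = @"-f concat -safe 0 -i D:\test2\list.txt -c copy D:\test3\joined.wmv",
    UseShellExecute = false
};
using (var process = Process.Start(psi))
{
    process.WaitForExit();
}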

How do I merge/join MP3 files with C#?

I have a library of different words/phrases, and at present I build a sentence by adding a combination of these phrases to a playlist. Unfortunately, if the user is running CPU-intensive applications (which most of my users are), there can be a lag of a few seconds mid-sentence (between phrases).
To combat this, I was thinking of merging the right combination of MP3 files on the fly into the appropriate phrase, saving it to the %temp% directory, and then playing that one MP3 file, which should overcome the gaps I'm experiencing.
What is the easiest way to do this in C#? Is there an easy way at all? The files are fairly small, 3-4 seconds long each, and a sentence can consist of roughly 3-20 phrases.
Here's how you can concatenate MP3 files using NAudio:
public static void Combine(string[] inputFiles, Stream output)
{
    foreach (string file in inputFiles)
    {
        using (Mp3FileReader reader = new Mp3FileReader(file))
        {
            // Write the ID3v2 tag once, at the very start of the output.
            if ((output.Position == 0) && (reader.Id3v2Tag != null))
            {
                output.Write(reader.Id3v2Tag.RawData, 0, reader.Id3v2Tag.RawData.Length);
            }
            // Copy the MP3 frames across untouched; no decoding or re-encoding.
            Mp3Frame frame;
            while ((frame = reader.ReadNextFrame()) != null)
            {
                output.Write(frame.RawData, 0, frame.RawData.Length);
            }
        }
    }
}
see here for more info
MP3 files consist of "frames" that each represent a short snippet of audio (1152 samples per frame, about 26 ms at 44.1 kHz).
So yes, you can just concatenate them without a problem.
As MP3s are a compressed audio source, I imagine you can't just concatenate them into a single file without first decoding each one to its waveform. This may be quite intensive. Perhaps you could cheat by using a critical section when playing back your phrase so that the CPU is not stolen from you until the phrase is complete. This isn't necessarily playing nice with other threads, but it might work if your phrases are short.
One simple option is to shell out to the command line:
copy /b *.mp3 c:\new.mp3
Better would be to concatenate the streams. That's been answered here:
What would be the fastest way to concatenate three files in C#?
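For completeness, a minimal stream-concatenation sketch along the lines of that linked answer (the byte-level equivalent of copy /b, so the frame and tag caveats above still apply):
// Assumed using: System.IO.
public static void ConcatenateFiles(string outputFile, params string[] inputFiles)
{
    using (var output = File.Create(outputFile))
    {
        foreach (string inputFile in inputFiles)
        {
            using (var input = File.OpenRead(inputFile))
            {
                input.CopyTo(output); // streams the bytes across in chunks
            }
        }
    }
}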
