The following code will successfully load, play, and edit audio samples and (almost) write them to an audio file. I say almost because when I comment out the "Play" code it works, but leaving it in causes the buffer read:
audioFile.Read(buffer, 0, numSamples);
to result in zeros.
Do I need to reset the audioFile somehow? None of the examples I've found mention any need for this.
using System;
using NAudio.Wave;
namespace NAudioTest
{
class TestPlayer
{
static void Main(string[] args)
{
string infileName = "c:\\temp\\pink.wav";
string outfileName = "c:\\temp\\pink_out.wav";
// load the file
var audioFile = new AudioFileReader(infileName);
// play the file
var outputDevice = new WaveOutEvent();
outputDevice.Init(audioFile);
outputDevice.Play();
//Since Play only means "start playing" and isn't blocking, we can wait in a loop until playback finishes....
while (outputDevice.PlaybackState == PlaybackState.Playing) { System.Threading.Thread.Sleep(1000); }
// edit the samples in file
int fs = audioFile.WaveFormat.SampleRate;
int numSamples = (int)audioFile.Length / sizeof(float); // length is the number of bytes - 4 bytes in a float
float[] buffer = new float[numSamples];
audioFile.Read(buffer, 0, numSamples);
float volume = 0.5f;
for (int n = 0; n < numSamples; n++) { buffer[n] *= volume; }
// write edited samples to new file
var writer = new WaveFileWriter(outfileName,audioFile.WaveFormat);
writer.WriteSamples(buffer,0,numSamples);
}
}
}
You must call Dispose on your writer before the output is a valid WAV file. I recommend that you put it in a using block:
using(var writer = new WaveFileWriter(outfileName,audioFile.WaveFormat))
{
writer.WriteSamples(buffer,0,numSamples);
}
I am trying to save my Kinect raw depth data, and I don't want to use Kinect Studio because I need the raw data for further calculations. I am using the Kinect v2 and the Kinect SDK.
My problem is that I only get a low frame rate for the saved data, about 15-17 FPS.
Here is my frame reader (in further steps I want to save the color stream as well):
frameReader = kinectSensor.OpenMultiSourceFrameReader(FrameSourceTypes.Depth);
frameReader.MultiSourceFrameArrived += Reader_MultiSourceFrameArrived;
Here is the event handler:
void Reader_MultiSourceFrameArrived(object sender, MultiSourceFrameArrivedEventArgs e)
{
var reference = e.FrameReference.AcquireFrame();
saveFrameTest(reference);
frame_num++;
}
Here is the saving function:
private unsafe void saveFrameTest(Object reference)
{
MultiSourceFrame mSF = (MultiSourceFrame)reference;
using (var frame = mSF.DepthFrameReference.AcquireFrame())
{
if (frame != null)
{
using (Microsoft.Kinect.KinectBuffer depthBuffer = frame.LockImageBuffer())
{
if ((frame.FrameDescription.Width * frame.FrameDescription.Height) == (depthBuffer.Size / frame.FrameDescription.BytesPerPixel))
{
ushort* frameData = (ushort*)depthBuffer.UnderlyingBuffer;
byte[] rawDataConverted = new byte[(int)(depthBuffer.Size / 2)];
for (int i = 0; i < (int)(depthBuffer.Size / 2); ++i)
{
ushort depth = frameData[i];
rawDataConverted[i] = (byte)(depth >= frame.DepthMinReliableDistance && depth <= frame.DepthMaxReliableDistance ? (depth) : 0);
}
String date = string.Format("{0:hh-mm-ss}", DateTime.Now);
String filePath = System.IO.Directory.GetCurrentDirectory() + "/test/" +date+".raw";
File.WriteAllBytes(filePath, rawDataConverted);
rawDataConverted = null;
}
}
}
}
}
Further information:
I run my code in a simple console application on an Intel Xeon E5-1620 (3.7 GHz) with 16 GB of RAM.
I think the for loop is taking too much time:
for (int i = 0; i < (int)(depthBuffer.Size / 2); ++i)
{
ushort depth = frameData[i];
rawDataConverted[i] = (byte)(depth >= frame.DepthMinReliableDistance && depth <= frame.DepthMaxReliableDistance ? (depth) : 0);
}
I was able to improve my frame rate. I am now accessing the KinectBuffer directly and have dropped the for loop.
Microsoft.Kinect.KinectBuffer depthBuffer = frame.LockImageBuffer();
Marshal.Copy(depthBuffer.UnderlyingBuffer, rawData_depth, 0, (depthImageSize));
depthBuffer.Dispose();
frame.Dispose();
However, I couldn't reach the 30 FPS rate; it is now about 25 FPS.
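For completeness, here is a minimal sketch of how this can look as a complete method. The frame size, the field names and the file naming are my assumptions, not something from the snippet above:
// Requires: using System.IO; using System.Runtime.InteropServices;
// Assumption: Kinect v2 depth frames are 512 x 424 pixels, 2 bytes per pixel.
private const int DepthWidth = 512;
private const int DepthHeight = 424;
private readonly byte[] rawData_depth = new byte[DepthWidth * DepthHeight * 2];

private void SaveDepthFrameFast(DepthFrame frame)
{
    using (Microsoft.Kinect.KinectBuffer depthBuffer = frame.LockImageBuffer())
    {
        // One bulk copy from the native buffer into managed memory,
        // instead of converting every pixel in a loop.
        Marshal.Copy(depthBuffer.UnderlyingBuffer, rawData_depth, 0, rawData_depth.Length);
    }

    // Milliseconds added so frames written within the same second get distinct names (my choice).
    string filePath = Path.Combine(Directory.GetCurrentDirectory(), "test",
        DateTime.Now.ToString("hh-mm-ss-fff") + ".raw");
    File.WriteAllBytes(filePath, rawData_depth);
}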
You could try something like this to get your array.
It's what I normally use.
var frame = frameReference.AcquireFrame();
var frameDescription = frame.FrameDescription;
ushort[] frameData = new ushort[frameDescription.Width * frameDescription.Height];
frame.CopyFrameDataToArray(frameData);
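If you then need the raw bytes on disk (as in the question), one possible follow-up; the file name here is just an example:
// Convert the ushort depth values to raw bytes and dump them to a file.
byte[] rawBytes = new byte[frameData.Length * sizeof(ushort)];
Buffer.BlockCopy(frameData, 0, rawBytes, 0, rawBytes.Length);
File.WriteAllBytes("depth_" + DateTime.Now.ToString("hh-mm-ss-fff") + ".raw", rawBytes);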
I am having a problem mixing two different audio samples into one by simply adding the bytes of both files.
After the process below, when I try to open the mixed.mp3 file in a media player, it says:
Windows Media Player encountered a problem while playing the file.
Here is the code I'm using to mix the audio files:
byte[] bytes1,bytes2,final;
int length1,length2,max;
// Getting byte[] of audio file
using ( BinaryReader b = new BinaryReader(File.Open("background.mp3" , FileMode.Open)) )
{
length1 = (int)b.BaseStream.Length;
bytes1 = b.ReadBytes(length1);
}
using ( BinaryReader b = new BinaryReader(File.Open("voice.mp3" , FileMode.Open)) )
{
length2 = (int)b.BaseStream.Length;
bytes2 = b.ReadBytes(length2);
}
// Getting max length
if(length1 > length2){
max = length1;
}else{
max = length2;
}
// Initializing output byte[] of max length
final = new byte[max];
// Adding byte1 and byte2 and copying into final
for (int i=0;i<max;i++)
{
byte b1 , b2;
if(i < length1){
b1 = bytes1[i];
}else{
b1 = 0;
}
if ( i < length2 ){
b2 = bytes2[i];
}
else{
b2 = 0;
}
final[i] = (byte)(b1 + b2);
}
// Writing final[] as an mp3 file
File.WriteAllBytes("mixed.mp3" , final);
Note: I tried mixing two copies of the same file and it worked; that is, the media player didn't throw any errors and played the result correctly.
This is most likely because you aren't decoding the MP3 files before you mix them, and you're "just adding" the samples together, which will result in clipping. You should first use a library to decode the MP3 files to PCM, which will then allow you to mix them.
To correctly mix the samples you should be doing:
final[i] = (byte)(b1 / 2 + b2 / 2);
Otherwise your calculations will overflow (also, I'd generally recommend normalising your audio to floats before manipulating it). It should also be noted that you're mixing all the bytes in the MP3 files, i.e., you're messing with the headers as well (hence WMP refusing to play your "mixed" file). You should only mix the actual audio data (the samples), not the entire file.
I've provided a (commented) working example [1] using the NAudio library (it exports the mixed audio to a WAV file to avoid further complications):
// You can get the library via NuGet if preferred.
using NAudio.Wave;
...
var fileA = new AudioFileReader("Input path 1");
// Calculate our buffer size, since we're normalizing our samples to floats
// we'll need to account for that by dividing the file's audio byte count
// by its bit depth / 8.
var bufferA = new float[fileA.Length / (fileA.WaveFormat.BitsPerSample / 8)];
// Now let's populate our buffer with samples.
fileA.Read(bufferA, 0, bufferA.Length);
// Do it all over again for the other file.
var fileB = new AudioFileReader("Input path 2");
var bufferB = new float[fileB.Length / (fileB.WaveFormat.BitsPerSample / 8)];
fileB.Read(bufferB, 0, bufferB.Length);
// Calculate the largest file (simpler than using an 'if').
var maxLen = Math.Max(bufferA.Length, bufferB.Length);
// For now, we'll just save our mixed data to a wav file.
// (Things can get a little complicated when encoding to MP3.)
using (var writer = new WaveFileWriter("Output path", fileA.WaveFormat))
{
for (var i = 0; i < maxLen; i++)
{
float a, b;
if (i < bufferA.Length)
{
// Reduce the amplitude of the sample by 2
// to avoid clipping.
a = bufferA[i] / 2;
}
else
{
a = 0;
}
if (i < bufferB.Length)
{
b = bufferB[i] / 2;
}
else
{
b = 0;
}
writer.WriteSample(a + b);
}
}
[1] The input files must be of the same sample rate, bit depth and channel count for this to work correctly.
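If you want to guard against mismatched inputs, a simple (assumed) sanity check before the mixing loop could look like this:
// Refuse to mix inputs whose formats don't match.
if (fileA.WaveFormat.SampleRate != fileB.WaveFormat.SampleRate ||
    fileA.WaveFormat.BitsPerSample != fileB.WaveFormat.BitsPerSample ||
    fileA.WaveFormat.Channels != fileB.WaveFormat.Channels)
{
    throw new InvalidOperationException(
        "Both inputs must have the same sample rate, bit depth and channel count.");
}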
I'm new to NAudio and I want to increase the volume by X dB. I've written this piece of code:
public static void IncreaseVolume(string inputPath, string outputPath, double db)
{
double linearScalingRatio = Math.Pow(10d, db / 10d);
using (WaveFileReader reader = new WaveFileReader(inputPath))
{
VolumeWaveProvider16 volumeProvider = new VolumeWaveProvider16(reader);
using (WaveFileWriter writer = new WaveFileWriter(outputPath, reader.WaveFormat))
{
while (true)
{
var frame = reader.ReadNextSampleFrame();
if (frame == null)
break;
writer.WriteSample(frame[0] * (float)linearScalingRatio);
}
}
}
}
OK, this works, but how can I find out by how many decibels I've increased each sample? Can anyone explain this and provide an example?
UPDATE:
using (WaveFileReader reader = new WaveFileReader(inFile))
{
float Sum = 0f;
for (int i = 0; i < reader.SampleCount; i++)
{
var sample = reader.ReadNextSampleFrame();
Sum += sample[0] * sample[0];
}
var db = 20 * Math.Log10(Math.Sqrt(Sum / reader.SampleCount) / 1);
Console.WriteLine(db);
Console.ReadLine();
}
Your code looks good. To measure the average sound level of an audio sample, you need to calculate the RMS (root mean square) of the signal:
RMS := Sqrt( Sum(x_i*x_i)/N)
with x_i being the i-th sample and N the number of samples. The RMS is the average amplitude of your signal. Use
RMS_dB = 20*log(RMS/ref)
(with ref being 1.0 or 32767.0)
to convert it to a decibel value.
You may calculate this RMS value before and after you change the volume. The difference should be exactly the dB value you used in your IncreaseVolume().
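A minimal sketch of that check with NAudio (only channel 0 is measured here, as in your update; the paths are placeholders):
static double MeasureRmsDb(string path)
{
    using (var reader = new WaveFileReader(path))
    {
        double sum = 0;
        long count = reader.SampleCount;
        for (long i = 0; i < count; i++)
        {
            var frame = reader.ReadNextSampleFrame();
            sum += frame[0] * frame[0];
        }
        // ref = 1.0, since ReadNextSampleFrame returns normalized float samples
        return 20 * Math.Log10(Math.Sqrt(sum / count));
    }
}

// The difference should match the dB passed to IncreaseVolume:
Console.WriteLine(MeasureRmsDb(outputPath) - MeasureRmsDb(inputPath));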
Just adding a note for future readers.
The input db in the line below is in decibels, and you need to convert it into an amplitude ratio:
double linearScalingRatio = Math.Pow(10d, db / 10d);
A dB-to-amplitude table can be found here:
https://blog.demofox.org/2015/04/14/decibels-db-and-amplitude/
So you need to pass a value of about 6 dB to double the amplitude (make it twice as loud).
Another point, as already mentioned: the conversion should be
double linearScalingRatio = Math.Pow(10d, db / 20d);
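A quick sanity check of the two formulas (plain math, nothing NAudio-specific):
double db = 6.0;
double amplitudeRatio = Math.Pow(10d, db / 20d); // ~1.995: +6 dB roughly doubles the amplitude
double powerRatio     = Math.Pow(10d, db / 10d); // ~3.981: dividing by 10 gives the power ratio instead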
Like many people already seem to have done (there are several threads on this subject here), I am looking for ways to create a video from a sequence of images.
I want to implement my functionality in C#!
Here is what I want to do:
/*Pseudo code*/
void CreateVideo(List<Image> imageSequence, long durationOfEachImageMs, string outputVideoFileName, string outputFormat)
{
// Info: imageSequence.Count will be > 30 000 images
// Info: durationOfEachImageMs will be < 300 ms
if (outputFormat == "mpeg")
{
}
else if (outputFormat == "avi")
{
}
else
{
}
//Save video file do disk
}
I know there's a project called Splicer (http://splicer.codeplex.com/) but I can't find suitable documentation or clear examples that I can follow (these are the examples that I found).
The closest to what I want to do, which I found here on CodePlex, is this:
How can I create a video from a directory of images in C#?
I have also read a few threads about ffmpeg (for example this: C# and FFmpeg preferably without shell commands? and this: convert image sequence using ffmpeg), but none of them helps me with my problem, and I don't think the ffmpeg-command-line approach is the best solution for me (because of the amount of images).
I believe that I can use the Splicer-project in some way (?).
In my case, it is about > 30,000 images, where each image should be displayed for about 200 ms (in the video stream that I want to create).
(What the video is about? Plants growing ...)
Can anyone help me complete my function?
Well, this answer comes a bit late, but since I have noticed some activity on my original question lately (and the fact that no working solution was provided), I would like to give you what finally worked for me.
I'll split my answer into three parts:
Background
Problem
Solution
Background
(this section is not important for the solution)
My original problem was that I had a lot of images (i.e. a huge amount) that were individually stored in a database as byte arrays. I wanted to make a video sequence of all these images.
My equipment setup was something like this general drawing:
The images depicted growing tomato plants in different states. The images were taken one minute apart during daytime.
/*pseudo code for taking and storing images*/
while (true)
{
if (daylight)
{
//get an image from the camera
//store the image as byte array to db
}
//wait 1 min
}
I had a very simple database for storing the images; there was only one table (ImageSet) in it.
Problem
I had read many articles about ffmpeg (please see my original question), but I couldn't find any that explained how to go from a collection of images to a video.
Solution
Finally, I got a working solution!
The main part of it comes from the open source project AForge.NET. In short, you could say that AForge.NET is a computer vision and artificial intelligence library in C#.
(If you want a copy of the framework, just grab it from http://www.aforgenet.com/)
In AForge.NET, there is the VideoFileWriter class (a class for writing video files with the help of ffmpeg). This did almost all of the work. (There is also a very good example here.)
This is the final class (reduced) which I used to fetch and convert image data into a video from my image database:
public class MovieMaker
{
public void Start()
{
var startDate = DateTime.Parse("12 Mar 2012");
var endDate = DateTime.Parse("13 Aug 2012");
CreateMovie(startDate, endDate);
}
/*THIS CODE BLOCK IS COPIED*/
public Bitmap ToBitmap(byte[] byteArrayIn)
{
var ms = new System.IO.MemoryStream(byteArrayIn);
var returnImage = System.Drawing.Image.FromStream(ms);
var bitmap = new System.Drawing.Bitmap(returnImage);
return bitmap;
}
public Bitmap ReduceBitmap(Bitmap original, int reducedWidth, int reducedHeight)
{
var reduced = new Bitmap(reducedWidth, reducedHeight);
using (var dc = Graphics.FromImage(reduced))
{
// you might want to change properties like
dc.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
dc.DrawImage(original, new Rectangle(0, 0, reducedWidth, reducedHeight), new Rectangle(0, 0, original.Width, original.Height), GraphicsUnit.Pixel);
}
return reduced;
}
/*END OF COPIED CODE BLOCK*/
private void CreateMovie(DateTime startDate, DateTime endDate)
{
int width = 320;
int height = 240;
var framRate = 200;
using (var container = new ImageEntitiesContainer())
{
//a LINQ-query for getting the desired images
var query = from d in container.ImageSet
where d.Date >= startDate && d.Date <= endDate
select d;
// create instance of video writer
using (var vFWriter = new VideoFileWriter())
{
// create new video file
vFWriter.Open("nameOfMyVideoFile.avi", width, height, framRate, VideoCodec.Raw);
var imageEntities = query.ToList();
//loop through all images in the collection
foreach (var imageEntity in imageEntities)
{
//what's the current image data?
var imageByteArray = imageEntity.Data;
var bmp = ToBitmap(imageByteArray);
var bmpReduced = ReduceBitmap(bmp, width, height);
vFWriter.WriteVideoFrame(bmpReduced);
}
vFWriter.Close();
}
}
}
}
Update 2013-11-29 (how to) (hope this is what you asked for, @Kiquenet?)
Download AForge.NET Framework from the downloads page (Download full ZIP archive and you will find many interesting Visual Studio solutions with projects, like Video, in the AForge.NET Framework-2.2.5\Samples folder...)
Namespace: AForge.Video.FFMPEG (from the documentation)
Assembly: AForge.Video.FFMPEG (in AForge.Video.FFMPEG.dll) (from the documentation) (you can find this AForge.Video.FFMPEG.dll in the AForge.NET Framework-2.2.5\Release folder)
If you want to create your own solution, make sure you have a reference to AForge.Video.FFMPEG.dll in your project. Then it should be easy to use the VideoFileWriter class. If you follow the link to the class you will find a very good (and simple) example. In the code, they feed the VideoFileWriter with Bitmap images in a for-loop.
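The core of that example boils down to something like this (a sketch from memory, so check the linked documentation for the exact code; myBitmaps stands for whatever image source you have):
using (var writer = new VideoFileWriter())
{
    // width, height, frame rate and codec for the new video file
    writer.Open("test.avi", 320, 240, 25, VideoCodec.MPEG4);
    foreach (Bitmap frame in myBitmaps)
    {
        writer.WriteVideoFrame(frame);
    }
    writer.Close();
}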
I found this code in the Splicer samples; it looks pretty close to what you want:
string outputFile = "FadeBetweenImages.wmv";
using (ITimeline timeline = new DefaultTimeline())
{
IGroup group = timeline.AddVideoGroup(32, 160, 100);
ITrack videoTrack = group.AddTrack();
IClip clip1 = videoTrack.AddImage("image1.jpg", 0, 2); // play first image for a little while
IClip clip2 = videoTrack.AddImage("image2.jpg", 0, 2); // and the next
IClip clip3 = videoTrack.AddImage("image3.jpg", 0, 2); // and another
IClip clip4 = videoTrack.AddImage("image4.jpg", 0, 2); // and finally the last
double halfDuration = 0.5;
// fade out and back in
group.AddTransition(clip2.Offset - halfDuration, halfDuration, StandardTransitions.CreateFade(), true);
group.AddTransition(clip2.Offset, halfDuration, StandardTransitions.CreateFade(), false);
// again
group.AddTransition(clip3.Offset - halfDuration, halfDuration, StandardTransitions.CreateFade(), true);
group.AddTransition(clip3.Offset, halfDuration, StandardTransitions.CreateFade(), false);
// and again
group.AddTransition(clip4.Offset - halfDuration, halfDuration, StandardTransitions.CreateFade(), true);
group.AddTransition(clip4.Offset, halfDuration, StandardTransitions.CreateFade(), false);
// add some audio
ITrack audioTrack = timeline.AddAudioGroup().AddTrack();
IClip audio =
audioTrack.AddAudio("testinput.wav", 0, videoTrack.Duration);
// create an audio envelope effect, this will:
// fade the audio from 0% to 100% in 1 second.
// play at full volume until 1 second before the end of the track
// fade back out to 0% volume
audioTrack.AddEffect(0, audio.Duration,
StandardEffects.CreateAudioEnvelope(1.0, 1.0, 1.0, audio.Duration));
// render our slideshow out to a windows media file
using (
IRenderer renderer =
new WindowsMediaRenderer(timeline, outputFile, WindowsMediaProfiles.HighQualityVideo))
{
renderer.Render();
}
}
I could not manage to get the above example to work. However, I did find another library that works amazingly well once installed. Try "accord.extensions.imaging.io" via NuGet; then I wrote the following little function:
private void makeAvi(string imageInputfolderName, string outVideoFileName, float fps = 12.0f, string imgSearchPattern = "*.png")
{ // reads all images in folder
VideoWriter w = new VideoWriter(outVideoFileName,
new Accord.Extensions.Size(480, 640), fps, true);
Accord.Extensions.Imaging.ImageDirectoryReader ir =
new ImageDirectoryReader(imageInputfolderName, imgSearchPattern);
while (ir.Position < ir.Length)
{
IImage i = ir.Read();
w.Write(i);
}
w.Close();
}
It reads all images from a folder and makes a video out of them.
If you want to make it nicer, you could probably read the image dimensions instead of hard-coding them, but you get the point.
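For example, instead of the hard-coded 480 x 640 you could (untested, and assuming at least one matching image exists and that the Size constructor takes width and height) probe the first image for its size:
// Requires: using System.Linq;
var firstImage = Directory.GetFiles(imageInputfolderName, imgSearchPattern).First();
Accord.Extensions.Size frameSize;
using (var probe = System.Drawing.Image.FromFile(firstImage))
{
    frameSize = new Accord.Extensions.Size(probe.Width, probe.Height);
}
VideoWriter w = new VideoWriter(outVideoFileName, frameSize, fps, true);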
FFMediaToolkit is a good solution in 2020, with .NET Core support.
https://github.com/radek-k/FFMediaToolkit
FFMediaToolkit is a cross-platform .NET Standard library for creating and reading video files. It uses the native FFmpeg libraries via the FFmpeg.AutoGen bindings.
The README of the library has a nice example for the question asked.
// You can set the codec, bitrate, frame rate and many other options here.
var settings = new VideoEncoderSettings(width: 1920, height: 1080, framerate: 30, codec: VideoCodec.H264);
settings.EncoderPreset = EncoderPreset.Fast;
settings.CRF = 17;
var file = MediaBuilder.CreateContainer(@"C:\videos\example.mp4").WithVideo(settings).Create();
while(file.Video.FramesCount < 300)
{
file.Video.AddFrame(/*Your code*/);
}
file.Dispose(); // MediaOutput ("file" variable) must be disposed when encoding is completed. You can use `using() { }` block instead.
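For the /*Your code*/ part, the library's README shows (roughly, from memory, so treat the ImageData helper names as assumptions and verify them against the FFMediaToolkit docs) how to feed a System.Drawing.Bitmap as a frame; "bitmap" below is a hypothetical bitmap you already have:
// Requires: using System.Drawing; using System.Drawing.Imaging; using FFMediaToolkit.Graphics;
var rect = new Rectangle(Point.Empty, bitmap.Size);
var bitLock = bitmap.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
var frameData = ImageData.FromPointer(bitLock.Scan0, ImagePixelFormat.Bgr24, bitmap.Size);
file.Video.AddFrame(frameData);
bitmap.UnlockBits(bitLock);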
This is a solution for creating a video from an image sequence with C# in Visual Studio.
My starting point was Hauns TM's answer below, but my requirements were more basic than theirs, so this solution might be more appropriate for less advanced users (like myself).
Libraries:
using System;
using System.IO;
using System.Drawing;
using Accord.Video.FFMPEG;
You can get the FFMPEG library by searching for FFMPEG in "Tools -> NuGet Package Manager -> Manage NuGet Packages for Solution..."
The variables that I passed into the function are:
outputFileName = "C://outputFolder//outputMovie.avi"
inputImageSequence =
["C://inputFolder//image_001.png",
"C://inputFolder//image_002.png",
"C://inputFolder//image_003.png",
"C://inputFolder//image_004.png"]
Function:
private void videoMaker( string outputFileName , string[] inputImageSequence)
{
int width = 1920;
int height = 1080;
var framRate = 25;
using (var vFWriter = new VideoFileWriter())
{
// create new video file
vFWriter.Open(outputFileName, width, height, framRate, VideoCodec.Raw);
foreach (var imageLocation in inputImageSequence)
{
Bitmap imageFrame = System.Drawing.Image.FromFile(imageLocation) as Bitmap;
vFWriter.WriteVideoFrame(imageFrame);
}
vFWriter.Close();
}
}
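A possible way to call it (assuming the images live in one folder and sort correctly by name; requires System.Linq in addition to the usings listed above):
string[] inputImageSequence = Directory.GetFiles("C://inputFolder//", "*.png")
                                       .OrderBy(f => f)
                                       .ToArray();
videoMaker("C://outputFolder//outputMovie.avi", inputImageSequence);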
It looks like many of these answers are a bit obsolete as of 2020, so I'm adding my thoughts.
I have been working on the same problem and have published the .NET Core project Time Lapse Creator on GitHub: https://github.com/pekspro/TimeLapseCreator It shows how to add extra information to a frame (a timestamp, for instance), background audio, a title screen, fading and more, and then ffmpeg is used to do the rendering. This is done in this function:
// Render video from a list of images, add background audio and a thumbnail image.
private async Task RenderVideoAsync(int framesPerSecond, List<string> images, string ffmpgPath,
string audioPath, string thumbnailImagePath, string outPath,
double videoFadeInDuration = 0, double videoFadeOutDuration = 0,
double audioFadeInDuration = 0, double audioFadeOutDuration = 0)
{
string fileListName = Path.Combine(OutputPath, "framelist.txt");
var fileListContent = images.Select(a => $"file '{a}'{Environment.NewLine}duration 1");
await File.WriteAllLinesAsync(fileListName, fileListContent);
TimeSpan vidLengthCalc = TimeSpan.FromSeconds(images.Count / ((double)framesPerSecond));
int coverId = -1;
int audioId = -1;
int framesId = 0;
int nextId = 1;
StringBuilder inputParameters = new StringBuilder();
StringBuilder outputParameters = new StringBuilder();
inputParameters.Append($"-r {framesPerSecond} -f concat -safe 0 -i {fileListName} ");
outputParameters.Append($"-map {framesId} ");
if(videoFadeInDuration > 0 || videoFadeOutDuration > 0)
{
List<string> videoFilterList = new List<string>();
if (videoFadeInDuration > 0)
{
//Assume we fade in from first second.
videoFilterList.Add($"fade=in:start_time={0}s:duration={videoFadeInDuration.ToString("0", NumberFormatInfo.InvariantInfo)}s");
}
if (videoFadeOutDuration > 0)
{
//Assume we fade out to last second.
videoFilterList.Add($"fade=out:start_time={(vidLengthCalc.TotalSeconds - videoFadeOutDuration).ToString("0.000", NumberFormatInfo.InvariantInfo)}s:duration={videoFadeOutDuration.ToString("0.000", NumberFormatInfo.InvariantInfo)}s");
}
string videoFilterString = string.Join(',', videoFilterList);
outputParameters.Append($"-filter:v:{framesId} \"{videoFilterString}\" ");
}
if (thumbnailImagePath != null)
{
coverId = nextId;
nextId++;
inputParameters.Append($"-i {thumbnailImagePath} ");
outputParameters.Append($"-map {coverId} ");
outputParameters.Append($"-c:v:{coverId} copy -disposition:v:{coverId} attached_pic ");
}
if (audioPath != null)
{
audioId = nextId;
nextId++;
inputParameters.Append($"-i {audioPath} ");
outputParameters.Append($"-map {audioId} ");
if(audioFadeInDuration <= 0 && audioFadeOutDuration <= 0)
{
// If no audio fading, just copy as it is.
outputParameters.Append($"-c:a copy ");
}
else
{
List<string> audioEffectList = new List<string>();
if(audioFadeInDuration > 0)
{
//Assume we fade in from first second.
audioEffectList.Add($"afade=in:start_time={0}s:duration={audioFadeInDuration.ToString("0", NumberFormatInfo.InvariantInfo)}s");
}
if (audioFadeOutDuration > 0)
{
//Assume we fade out to last second.
audioEffectList.Add($"afade=out:start_time={(vidLengthCalc.TotalSeconds - audioFadeOutDuration).ToString("0.000", NumberFormatInfo.InvariantInfo)}s:duration={audioFadeOutDuration.ToString("0.000", NumberFormatInfo.InvariantInfo)}s");
}
string audioFilterString = string.Join(',', audioEffectList);
outputParameters.Append($"-filter:a \"{audioFilterString}\" ");
}
}
int milliseconds = vidLengthCalc.Milliseconds;
int seconds = vidLengthCalc.Seconds;
int minutes = vidLengthCalc.Minutes;
var hours = (int)vidLengthCalc.TotalHours;
string durationString = $"{hours:D}:{minutes:D2}:{seconds:D2}.{milliseconds:D3}";
outputParameters.Append($"-c:v:{framesId} libx264 -pix_fmt yuv420p -to {durationString} {outPath} -y ");
string parameters = inputParameters.ToString() + outputParameters.ToString();
try
{
await Task.Factory.StartNew(() =>
{
var outputLog = new List<string>();
using (var process = new Process
{
StartInfo =
{
FileName = ffmpgPath,
Arguments = parameters,
UseShellExecute = false,
CreateNoWindow = true,
// ffmpeg sends everything to the error output; standard output is not used.
RedirectStandardError = true
},
EnableRaisingEvents = true
})
{
process.ErrorDataReceived += (sender, e) =>
{
if (string.IsNullOrEmpty(e.Data))
{
return;
}
outputLog.Add(e.Data.ToString());
Console.WriteLine(e.Data.ToString());
};
process.Start();
process.BeginErrorReadLine();
process.WaitForExit();
if (process.ExitCode != 0)
{
throw new Exception($"ffmpeg failed error exit code {process.ExitCode}. Log: {string.Join(Environment.NewLine, outputLog)}");
}
Console.WriteLine($"Exit code: {process.ExitCode}");
}
});
}
catch(Win32Exception )
{
Console.WriteLine("Oh no, failed to start ffmpeg. Have you downloaded and copied ffmpeg.exe to the output folder?");
}
Console.WriteLine();
Console.WriteLine("Video was successfully created. It is availible at: " + Path.GetFullPath(outPath));
}
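A hypothetical call could look like this, from an async method in the same class (all paths and the OutputPath folder are assumptions on my part; the GitHub project wires this up with more surrounding code):
// Requires: using System.Linq;
List<string> images = Directory.GetFiles(OutputPath, "*.png").OrderBy(f => f).ToList();

await RenderVideoAsync(
    framesPerSecond: 30,
    images: images,
    ffmpgPath: Path.Combine(OutputPath, "ffmpeg.exe"),
    audioPath: Path.Combine(OutputPath, "background.mp3"),
    thumbnailImagePath: images.First(),
    outPath: Path.Combine(OutputPath, "timelapse.mp4"),
    videoFadeInDuration: 1, videoFadeOutDuration: 1,
    audioFadeInDuration: 1, audioFadeOutDuration: 1);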
This function is based on the Splicer.Net library. It took me ages to understand how that library works.
Make sure that your fps (frames per second) is correct; the standard is 24 fps.
In my case I have 15 images and I know that I need a 7-second video, so fps = 2.
Fps may vary according to platform or developer usage.
public bool CreateVideo(List<Bitmap> bitmaps, string outputFile, double fps)
{
int width = 640;
int height = 480;
if (bitmaps == null || bitmaps.Count == 0) return false;
try
{
using (ITimeline timeline = new DefaultTimeline(fps))
{
IGroup group = timeline.AddVideoGroup(32, width, height);
ITrack videoTrack = group.AddTrack();
int i = 0;
double miniDuration = 1.0 / fps;
foreach (var bmp in bitmaps)
{
IClip clip = videoTrack.AddImage(bmp, 0, i * miniDuration, (i + 1) * miniDuration);
System.Diagnostics.Debug.WriteLine(++i);
}
timeline.AddAudioGroup();
IRenderer renderer = new WindowsMediaRenderer(timeline, outputFile, WindowsMediaProfiles.HighQualityVideo);
renderer.Render();
}
}
catch { return false; }
return true;
}
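For example, loading the 15 images mentioned above from a folder (the paths are just placeholders; requires System.Linq, System.IO and System.Drawing):
var bitmaps = Directory.GetFiles(@"C:\frames", "*.jpg")
                       .OrderBy(f => f)
                       .Select(f => new Bitmap(f))
                       .ToList();
bool ok = CreateVideo(bitmaps, @"C:\frames\out.wmv", fps: 2);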
Hope this helps.