The existing question suggests CurrentClockSpeed, but in my system, it just returns the same value as MaxClockSpeed. The code below prints out the same two values over and over again.
Task.Run(() =>
{
    ManagementObject Mo = new ManagementObject("Win32_Processor.DeviceID='CPU0'");
    while (true)
    {
        Debug.WriteLine("Max=" + Mo["MaxClockSpeed"] + ", Current=" + Mo["CurrentClockSpeed"]);
        System.Threading.Thread.Sleep(1000);
    }
    Mo.Dispose(); // return and such later in the code
});
But all other applications, like Task Manager, CPU-Z, Hardware Info, etc., show a variable clock speed. That is, if I run a process that uses 100% of the CPU, the speed goes up, and if I terminate that process, it goes down. How can I get THAT value?
I mean, for example, the value in the "Speed" section of the screenshot I found in Google Search, not the "Maximum speed" value that never changes.
If you mean the process's current CPU usage, use this function in a separate thread:
private void get_cpuUsage()
{
    try
    {
        string processname = System.Reflection.Assembly.GetExecutingAssembly().GetName().Name;
        var perfCounter = new PerformanceCounter("Process", "% Processor Time", processname);

        int coreCount = 0;
        foreach (var item in new System.Management.ManagementObjectSearcher("Select * from Win32_Processor").Get())
        {
            coreCount += int.Parse(item["NumberOfCores"].ToString());
        }

        while (true)
        {
            Thread.Sleep(500);
            double perfVal = perfCounter.NextValue() / Environment.ProcessorCount;
            int cpu = (int)Math.Round(perfVal, 0);
            double cpuvalue = Math.Round(perfVal, 1);

            Invoke((MethodInvoker)delegate
            {
                cpu_bar.Text = cpuvalue.ToString(); // display current CPU usage in %
            });
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
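For example, a minimal way to start it on its own thread (assuming this method lives in a WinForms form, as the Invoke call suggests) could be:

// Hypothetical usage: run the polling loop on a background thread so the UI stays responsive.
var cpuThread = new System.Threading.Thread(get_cpuUsage) { IsBackground = true };
cpuThread.Start();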
What I need to do is record two USB webcams at 60 fps in 1280x720 format in a UWP C# app.
The cameras are currently previewed in a CaptureElement.
This is how it starts previewing:
public async Task StartPreviewSideAsync(DeviceInformation deviceInformation)
{
    if (deviceInformation != null)
    {
        var settings = new MediaCaptureInitializationSettings { VideoDeviceId = deviceInformation.Id };
        try
        {
            _mediaCaptureSide = new MediaCapture();
            var profiles = MediaCapture.FindAllVideoProfiles(deviceInformation.Id);
            Debug.WriteLine(MediaCapture.IsVideoProfileSupported(deviceInformation.Id) + " count: " + profiles.Count);

            var match = (from profile in profiles
                         from desc in profile.SupportedRecordMediaDescription
                         where desc.Width == 1280 && desc.Height == 720 && Math.Abs(Math.Round(desc.FrameRate) - 60) < 1
                         select new { profile, desc }).FirstOrDefault();

            if (match != null)
            {
                settings.VideoProfile = match.profile;
                settings.RecordMediaDescription = match.desc;
            }

            await _mediaCaptureSide.InitializeAsync(settings);

            SideCam.Source = _mediaCaptureSide;
            await _mediaCaptureSide.StartPreviewAsync();

            _displayRequestSide = new DisplayRequest();
            _displayRequestSide.RequestActive();
            DisplayInformation.AutoRotationPreferences = DisplayOrientations.Landscape;

            CameraManager.GetCameraManager.CurrentSideCamera = deviceInformation;
            IsPreviewingSide = true;
        }
        catch (UnauthorizedAccessException)
        {
            Debug.WriteLine("The app was denied access to the camera");
            IsPreviewingSide = false;
        }
        catch (Exception ex)
        {
            Debug.WriteLine("MediaCapture initialization failed. {0}", ex.Message);
            IsPreviewingSide = false;
        }
    }
}
And this is the method that starts the recording:
public IAsyncOperation<LowLagMediaRecording> RecBackCam(StorageFile fileBack)
{
    var mp4File = MediaEncodingProfile.CreateMp4(VideoEncodingQuality.HD720p);
    if (mp4File.Video != null)
        mp4File.Video.Bitrate = 3000000;

    return _mediaCaptureBack.PrepareLowLagRecordToStorageFileAsync(mp4File, fileBack);
}
But it does not record at 60 fps, because it cannot find a matching profile (in the preview method).
And when I use this (in the recording method):
mp4File.Video.FrameRate.Numerator = 3600;
mp4File.Video.FrameRate.Denominator = 60;
it records 60 frames per second, but frames 1 and 2 are the same, frames 3 and 4 are the same, and so on. I need 60 actual, distinct frames per second.
All the basics of the code come from the MSDN website:
link to code on msdn.
To set the frame rate for MediaCapture, we can use the VideoDeviceController.SetMediaStreamPropertiesAsync method like the following:
await _mediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync(MediaStreamType.VideoRecord, encodingProperties);
For the encodingProperties, we can get it from the VideoDeviceController.GetAvailableMediaStreamProperties method, which returns a list of the supported encoding properties for the video device. If your camera supports 60 fps at 1280x720 resolution, you should be able to find the corresponding VideoEncodingProperties in this list.
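For example, a minimal sketch of picking a 1280x720 / 60 fps record format (this assumes _mediaCapture is an already initialized MediaCapture and the camera actually exposes such a format) could look like this:

// Sketch: look for a 1280x720 @ 60 fps record format and apply it, if the camera exposes one.
var match = _mediaCapture.VideoDeviceController
    .GetAvailableMediaStreamProperties(MediaStreamType.VideoRecord)
    .OfType<VideoEncodingProperties>()
    .FirstOrDefault(p => p.Width == 1280 && p.Height == 720 &&
                         p.FrameRate.Denominator != 0 &&
                         Math.Abs((double)p.FrameRate.Numerator / p.FrameRate.Denominator - 60) < 1);

if (match != null)
{
    await _mediaCapture.VideoDeviceController
        .SetMediaStreamPropertiesAsync(MediaStreamType.VideoRecord, match);
}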
For more info, please see the article Set format, resolution, and frame rate for MediaCapture and also Camera resolution sample on GitHub.
I have an application which programs firmware onto a circuit board. In the application you can program a single board or a tray. When programming a tray, you can only load 14 boards at a time.
The user may want to program, say, 30 boards, so I want the application to program 14 boards and then tell the user they need to reload the tray.
At the moment I only have one board to practice on, so I have just been reprogramming the same one, pretending it's a tray.
I have tried to resolve this using loops, but when I press the start button the UI freezes and stops responding.
The following is my code:
private void setFirmwareMultiple()
{
    clearTicksandCrosses();
    string firmwareLocation = Firmware(productComboBox.Text); //get the firmware location
    string STPath = @"C:\Users\Falconex\Documents\FalconexTest\FalconexTest\ST-LINK Utility\ST-LINK_CLI.exe"; //file location
    string result; //set string
    string result2; //set string
    int counter = 0;
    int numberOfBoards = int.Parse(numberOfBoardsTextBox.Text);

    while (numberOfBoards > counter)
    {
        for (int i = 0; i > 14; i = i + 1)
        {
            ProcessStartInfo start = new ProcessStartInfo(); //new process start info
            start.FileName = STPath; //set file name
            start.Arguments = "-C -ME -p " + firmwareLocation + " -v -Run"; //set arguments
            start.UseShellExecute = false; //set shell execute (need this to redirect output)
            start.RedirectStandardOutput = true; //redirect output
            start.RedirectStandardInput = true; //redirect input
            start.WindowStyle = ProcessWindowStyle.Hidden; //hide window
            start.CreateNoWindow = true; //create no window
            string picNumber = i.ToString();

            using (Process process = Process.Start(start)) //create process
            {
                programmingTextBlock.Text = "Board Programming...";
                System.Windows.Application.Current.Dispatcher.Invoke(DispatcherPriority.Background,
                    new Action(delegate { }));
                try
                {
                    while (process.HasExited == false) //while open
                    {
                        process.StandardInput.WriteLine(); //send enter key
                    }
                    using (StreamReader reader = process.StandardOutput) //create stream reader
                    {
                        result = reader.ReadToEnd(); //read till end of process
                        File.WriteAllText("File.txt", result); //write to file
                    }
                    saveReport();
                }
                catch { } //so doesn't blow up
                finally
                {
                    int code = process.ExitCode; //get exit code
                    codee = code.ToString(); //set code to string
                    File.WriteAllText("Code.txt", codee); //save code
                    if (code == 0)
                    {
                        tick1.Visibility = Visibility.Visible;
                        counter = counter + 1;
                    }
                    else
                    {
                        cross1.Visibility = Visibility.Visible;
                    }
                    programmingTextBlock.Text = "";
                }
            }
            System.Windows.MessageBox.Show("Load new boards");
        }
    }
}
I have put the total number of boards the user wants in the for loop.
I think it may be to do with the for loop, because at first I accidentally put (i < 14) in the for loop and it ran fine, however it then didn't stop.
Any help would be massively appreciated!
Thank you in advance,
Lucy
As the code stands now, your for loop's body never gets executed. The condition in a for loop is a continuation condition: since i is initialized to 0, the condition i > 14 is never met, so the body is skipped and the result is an infinite outer while loop.
Your first "accident" with i < 14 was correct. But then the loop did not stop, because your inner while loop never finishes:
while (process.HasExited == false) //while open
{
    process.StandardInput.WriteLine(); //send enter key
}
First, please don't compare a bool to true or false. A simple while (!process.HasExited) is enough.
Secondly, you have to refresh your process instance to update the HasExited property correctly:
while (!process.HasExited) //while open
{
    process.StandardInput.WriteLine(); //send enter key
    process.Refresh(); // update the process's properties!
}
You may also consider adding a Thread.Sleep(...) in that loop.
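Putting those two suggestions together, the inner loop could look like this (the 100 ms delay is just an assumption, tune it as needed):

while (!process.HasExited) //while open
{
    process.StandardInput.WriteLine(); //send enter key
    process.Refresh();                 //update the process's properties
    Thread.Sleep(100);                 //small pause so the loop doesn't spin at full speed
}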
The answer is simple:
while (numberOfBoards > counter) {
    for (int i = 0; i > 14; i = i + 1) {
In the code above, the for loop body will never be executed, because i starts at 0 and the continuation condition i > 14 is false right away (i stays less than 14).
Because of this, counter will never increment, and therefore the while loop will never finish.
But besides this, your approach to the looping is wrong. The following example (a fully working test program) shows what you could do instead:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            int counter = 0;
            int i = 0;
            int numberOfBoards = 35;

            for (; numberOfBoards > counter; i++, counter++)
            {
                Console.WriteLine("Counter {0}/i {1}", counter, i);
                //call your programming process here.
                //make sure that it exits.
                //use some kind of timeout to finish.
                //alert the user in case of failure, but move to the next board anyway to avoid an infinite loop.
                if (i == 13) i = 0;
            }

            Console.WriteLine("Press any key");
            Console.ReadKey();
        }
    }
}
Like many people already seem to have (there are several threads on this subject here), I am looking for ways to create a video from a sequence of images.
I want to implement my functionality in C#!
Here is what I want to do:
/*Pseudo code*/
void CreateVideo(List<Image> imageSequence, long durationOfEachImageMs, string outputVideoFileName, string outputFormat)
{
    // Info: imageSequence.Count will be > 30 000 images
    // Info: durationOfEachImageMs will be < 300 ms

    if (outputFormat == "mpeg")
    {
    }
    else if (outputFormat == "avi")
    {
    }
    else
    {
    }

    //Save video file to disk
}
I know there's a project called Splicer (http://splicer.codeplex.com/) but I can't find suitable documentation or clear examples that I can follow (these are the examples that I found).
The closest to what I want to do, which I found here on CodePlex, is this:
How can I create a video from a directory of images in C#?
I have also read a few threads about ffmpeg (for example this: C# and FFmpeg preferably without shell commands? and this: convert image sequence using ffmpeg), but none of them help me with my problem, and I don't think the ffmpeg-command-line approach is the best solution for me (because of the number of images).
I believe that I can use the Splicer-project in some way (?).
In my case, it is about > 30 000 images, where each image should be displayed for about 200 ms (in the video stream that I want to create).
(What the video is about? Plants growing ...)
Can anyone help me complete my function?
Well, this answer comes a bit late, but since I have noticed some activity with my original question lately (and the fact that no working solution was provided), I would like to give you what finally worked for me.
I'll split my answer into three parts:
Background
Problem
Solution
Background
(this section is not important for the solution)
My original problem was that I had a lot of images (i.e. a huge amount), images that were individually stored in a database as byte arrays. I wanted to make a video sequence with all these images.
My equipment setup was something like this general drawing:
The images depicted growing tomato plants in different states. All images were taken one minute apart, during daytime.
/*pseudo code for taking and storing images*/
while (true)
{
    if (daylight)
    {
        //get an image from the camera
        //store the image as byte array to db
    }
    //wait 1 min
}
I had a very simple db for storing images; there was only one table (the table ImageSet) in it:
Problem
I had read many articles about ffmpeg (please see my original question) but I couldn't find any on how to go from a collection of images to a video.
Solution
Finally, I got a working solution!
The main part of it comes from the open source project AForge.NET. In short, you could say that AForge.NET is a computer vision and artificial intelligence library in C#.
(If you want a copy of the framework, just grab it from http://www.aforgenet.com/)
In AForge.NET, there is the VideoFileWriter class (a class for writing video files with the help of ffmpeg). This did almost all of the work. (There is also a very good example here.)
This is the final class (reduced) which I used to fetch and convert image data into a video from my image database:
public class MovieMaker
{
    public void Start()
    {
        var startDate = DateTime.Parse("12 Mar 2012");
        var endDate = DateTime.Parse("13 Aug 2012");

        CreateMovie(startDate, endDate);
    }

    /*THIS CODE BLOCK IS COPIED*/

    public Bitmap ToBitmap(byte[] byteArrayIn)
    {
        var ms = new System.IO.MemoryStream(byteArrayIn);
        var returnImage = System.Drawing.Image.FromStream(ms);
        var bitmap = new System.Drawing.Bitmap(returnImage);

        return bitmap;
    }

    public Bitmap ReduceBitmap(Bitmap original, int reducedWidth, int reducedHeight)
    {
        var reduced = new Bitmap(reducedWidth, reducedHeight);
        using (var dc = Graphics.FromImage(reduced))
        {
            // you might want to change properties like
            dc.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
            dc.DrawImage(original, new Rectangle(0, 0, reducedWidth, reducedHeight), new Rectangle(0, 0, original.Width, original.Height), GraphicsUnit.Pixel);
        }

        return reduced;
    }

    /*END OF COPIED CODE BLOCK*/

    private void CreateMovie(DateTime startDate, DateTime endDate)
    {
        int width = 320;
        int height = 240;
        var framRate = 200;

        using (var container = new ImageEntitiesContainer())
        {
            //a LINQ-query for getting the desired images
            var query = from d in container.ImageSet
                        where d.Date >= startDate && d.Date <= endDate
                        select d;

            // create instance of video writer
            using (var vFWriter = new VideoFileWriter())
            {
                // create new video file
                vFWriter.Open("nameOfMyVideoFile.avi", width, height, framRate, VideoCodec.Raw);

                var imageEntities = query.ToList();

                //loop through all images in the collection
                foreach (var imageEntity in imageEntities)
                {
                    //what's the current image data?
                    var imageByteArray = imageEntity.Data;
                    var bmp = ToBitmap(imageByteArray);
                    var bmpReduced = ReduceBitmap(bmp, width, height);

                    vFWriter.WriteVideoFrame(bmpReduced);
                }

                vFWriter.Close();
            }
        }
    }
}
Update 2013-11-29 (how to) (Hope this is what you asked for, @Kiquenet?)
Download AForge.NET Framework from the downloads page (Download full ZIP archive and you will find many interesting Visual Studio solutions with projects, like Video, in the AForge.NET Framework-2.2.5\Samples folder...)
Namespace: AForge.Video.FFMPEG (from the documentation)
Assembly: AForge.Video.FFMPEG (in AForge.Video.FFMPEG.dll) (from the documentation) (you can find this AForge.Video.FFMPEG.dll in the AForge.NET Framework-2.2.5\Release folder)
If you want to create your own solution, make sure you have a reference to AForge.Video.FFMPEG.dll in your project. Then it should be easy to use the VideoFileWriter class. If you follow the link to the class you will find a very good (and simple) example. In the code, they are feeding the VideoFileWriter with Bitmap images in a for loop.
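As a rough sketch of that pattern (file name, frame size and frame count here are just placeholders, not from my project), it boils down to something like:

// Minimal sketch: generate frames and feed them to VideoFileWriter in a loop.
using (var writer = new VideoFileWriter())
{
    writer.Open("test.avi", 320, 240, 25, VideoCodec.MPEG4);

    for (int i = 0; i < 250; i++)
    {
        using (var frame = new Bitmap(320, 240))
        using (var g = Graphics.FromImage(frame))
        {
            g.Clear(Color.Black);
            g.DrawString("Frame " + i, SystemFonts.DefaultFont, Brushes.White, 10, 10);
            writer.WriteVideoFrame(frame);
        }
    }

    writer.Close();
}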
I found this code in the Splicer samples; it looks pretty close to what you want:
string outputFile = "FadeBetweenImages.wmv";

using (ITimeline timeline = new DefaultTimeline())
{
    IGroup group = timeline.AddVideoGroup(32, 160, 100);

    ITrack videoTrack = group.AddTrack();
    IClip clip1 = videoTrack.AddImage("image1.jpg", 0, 2); // play first image for a little while
    IClip clip2 = videoTrack.AddImage("image2.jpg", 0, 2); // and the next
    IClip clip3 = videoTrack.AddImage("image3.jpg", 0, 2); // and the next
    IClip clip4 = videoTrack.AddImage("image4.jpg", 0, 2); // and finally the last

    double halfDuration = 0.5;

    // fade out and back in
    group.AddTransition(clip2.Offset - halfDuration, halfDuration, StandardTransitions.CreateFade(), true);
    group.AddTransition(clip2.Offset, halfDuration, StandardTransitions.CreateFade(), false);

    // again
    group.AddTransition(clip3.Offset - halfDuration, halfDuration, StandardTransitions.CreateFade(), true);
    group.AddTransition(clip3.Offset, halfDuration, StandardTransitions.CreateFade(), false);

    // and again
    group.AddTransition(clip4.Offset - halfDuration, halfDuration, StandardTransitions.CreateFade(), true);
    group.AddTransition(clip4.Offset, halfDuration, StandardTransitions.CreateFade(), false);

    // add some audio
    ITrack audioTrack = timeline.AddAudioGroup().AddTrack();

    IClip audio = audioTrack.AddAudio("testinput.wav", 0, videoTrack.Duration);

    // create an audio envelope effect, this will:
    // fade the audio from 0% to 100% in 1 second.
    // play at full volume until 1 second before the end of the track
    // fade back out to 0% volume
    audioTrack.AddEffect(0, audio.Duration,
        StandardEffects.CreateAudioEnvelope(1.0, 1.0, 1.0, audio.Duration));

    // render our slideshow out to a windows media file
    using (
        IRenderer renderer =
            new WindowsMediaRenderer(timeline, outputFile, WindowsMediaProfiles.HighQualityVideo))
    {
        renderer.Render();
    }
}
I could not manage to get the above example to work. However, I did find another library that works amazingly well. Install "accord.extensions.imaging.io" via NuGet; then I wrote the following little function:
private void makeAvi(string imageInputfolderName, string outVideoFileName, float fps = 12.0f, string imgSearchPattern = "*.png")
{
    // reads all images in folder
    VideoWriter w = new VideoWriter(outVideoFileName,
        new Accord.Extensions.Size(480, 640), fps, true);
    Accord.Extensions.Imaging.ImageDirectoryReader ir =
        new ImageDirectoryReader(imageInputfolderName, imgSearchPattern);

    while (ir.Position < ir.Length)
    {
        IImage i = ir.Read();
        w.Write(i);
    }

    w.Close();
}
It reads all images from a folder and makes a video out of them.
If you want to make it nicer, you could probably read the image dimensions instead of hard-coding them, but you get the point.
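A hypothetical call (folder and file names here are assumptions) would then simply be:

// Turn every PNG in the input folder into a 5 fps video.
makeAvi(@"C:\frames", @"C:\output\plants.avi", 5.0f, "*.png");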
The FFMediaToolkit is a good solution in 2020, with .NET Core support.
https://github.com/radek-k/FFMediaToolkit
FFMediaToolkit is a cross-platform .NET Standard library for creating and reading video files. It uses the native FFmpeg libraries via the FFmpeg.Autogen bindings.
The README of the library has a nice example for the question asked.
// You can set the codec, bitrate, frame rate and many other options here.
var settings = new VideoEncoderSettings(width: 1920, height: 1080, framerate: 30, codec: VideoCodec.H264);
settings.EncoderPreset = EncoderPreset.Fast;
settings.CRF = 17;

var file = MediaBuilder.CreateContainer(@"C:\videos\example.mp4").WithVideo(settings).Create();
while (file.Video.FramesCount < 300)
{
    file.Video.AddFrame(/*Your code*/);
}
file.Dispose(); // MediaOutput ("file" variable) must be disposed when encoding is completed. You can use a using() { } block instead.
This is a solution for creating a video from an image sequence in C# using Visual Studio.
My starting point was "Hauns TM"'s answer, but my requirements were more basic than theirs, so this solution might be more appropriate for less advanced users (like myself).
Libraries:
using System;
using System.IO;
using System.Drawing;
using Accord.Video.FFMPEG;
You can get the FFMPEG library by searching for FFMPEG in "Tools -> NuGet Package Manager -> Manage NuGet Packages for a Solution...".
The variables that I passed into the function are:
outputFileName = "C://outputFolder//outputMovie.avi"
inputImageSequence =
["C://inputFolder//image_001.png",
"C://inputFolder//image_002.png",
"C://inputFolder//image_003.png",
"C://inputFolder//image_004.png"]
Function:
private void videoMaker(string outputFileName, string[] inputImageSequence)
{
    int width = 1920;
    int height = 1080;
    var framRate = 25;

    using (var vFWriter = new VideoFileWriter())
    {
        // create new video file
        vFWriter.Open(outputFileName, width, height, framRate, VideoCodec.Raw);

        foreach (var imageLocation in inputImageSequence)
        {
            Bitmap imageFrame = System.Drawing.Image.FromFile(imageLocation) as Bitmap;

            vFWriter.WriteVideoFrame(imageFrame);
        }

        vFWriter.Close();
    }
}
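For example, a hypothetical call that builds the image list from a folder (paths are assumptions) could be:

// Collect the frames in order and hand them to the function above.
string[] inputImageSequence = Directory.GetFiles(@"C:\inputFolder", "image_*.png");
Array.Sort(inputImageSequence);
videoMaker(@"C:\outputFolder\outputMovie.avi", inputImageSequence);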
It looks like many of these answers are a bit obsolete as of 2020, so I am adding my thoughts.
I have been working on the same problem and have published the .NET Core project Time Lapse Creator on GitHub: https://github.com/pekspro/TimeLapseCreator. It shows how to add information on an extra frame (a timestamp, for instance), background audio, a title screen, fading and more. ffmpeg is then used to do the rendering. This is done in this function:
// Render video from a list of images, add background audio and a thumbnail image.
private async Task RenderVideoAsync(int framesPerSecond, List<string> images, string ffmpgPath,
        string audioPath, string thumbnailImagePath, string outPath,
        double videoFadeInDuration = 0, double videoFadeOutDuration = 0,
        double audioFadeInDuration = 0, double audioFadeOutDuration = 0)
{
    string fileListName = Path.Combine(OutputPath, "framelist.txt");
    var fileListContent = images.Select(a => $"file '{a}'{Environment.NewLine}duration 1");

    await File.WriteAllLinesAsync(fileListName, fileListContent);

    TimeSpan vidLengthCalc = TimeSpan.FromSeconds(images.Count / ((double)framesPerSecond));
    int coverId = -1;
    int audioId = -1;
    int framesId = 0;
    int nextId = 1;

    StringBuilder inputParameters = new StringBuilder();
    StringBuilder outputParameters = new StringBuilder();

    inputParameters.Append($"-r {framesPerSecond} -f concat -safe 0 -i {fileListName} ");

    outputParameters.Append($"-map {framesId} ");

    if (videoFadeInDuration > 0 || videoFadeOutDuration > 0)
    {
        List<string> videoFilterList = new List<string>();
        if (videoFadeInDuration > 0)
        {
            //Assume we fade in from first second.
            videoFilterList.Add($"fade=in:start_time={0}s:duration={videoFadeInDuration.ToString("0", NumberFormatInfo.InvariantInfo)}s");
        }

        if (videoFadeOutDuration > 0)
        {
            //Assume we fade out to last second.
            videoFilterList.Add($"fade=out:start_time={(vidLengthCalc.TotalSeconds - videoFadeOutDuration).ToString("0.000", NumberFormatInfo.InvariantInfo)}s:duration={videoFadeOutDuration.ToString("0.000", NumberFormatInfo.InvariantInfo)}s");
        }

        string videoFilterString = string.Join(',', videoFilterList);

        outputParameters.Append($"-filter:v:{framesId} \"{videoFilterString}\" ");
    }

    if (thumbnailImagePath != null)
    {
        coverId = nextId;
        nextId++;

        inputParameters.Append($"-i {thumbnailImagePath} ");

        outputParameters.Append($"-map {coverId} ");
        outputParameters.Append($"-c:v:{coverId} copy -disposition:v:{coverId} attached_pic ");
    }

    if (audioPath != null)
    {
        audioId = nextId;
        nextId++;

        inputParameters.Append($"-i {audioPath} ");

        outputParameters.Append($"-map {audioId} ");

        if (audioFadeInDuration <= 0 && audioFadeOutDuration <= 0)
        {
            // If no audio fading, just copy as it is.
            outputParameters.Append($"-c:a copy ");
        }
        else
        {
            List<string> audioEffectList = new List<string>();
            if (audioFadeInDuration > 0)
            {
                //Assume we fade in from first second.
                audioEffectList.Add($"afade=in:start_time={0}s:duration={audioFadeInDuration.ToString("0", NumberFormatInfo.InvariantInfo)}s");
            }

            if (audioFadeOutDuration > 0)
            {
                //Assume we fade out to last second.
                audioEffectList.Add($"afade=out:start_time={(vidLengthCalc.TotalSeconds - audioFadeOutDuration).ToString("0.000", NumberFormatInfo.InvariantInfo)}s:duration={audioFadeOutDuration.ToString("0.000", NumberFormatInfo.InvariantInfo)}s");
            }

            string audioFilterString = string.Join(',', audioEffectList);

            outputParameters.Append($"-filter:a \"{audioFilterString}\" ");
        }
    }

    int milliseconds = vidLengthCalc.Milliseconds;
    int seconds = vidLengthCalc.Seconds;
    int minutes = vidLengthCalc.Minutes;
    var hours = (int)vidLengthCalc.TotalHours;

    string durationString = $"{hours:D}:{minutes:D2}:{seconds:D2}.{milliseconds:D3}";

    outputParameters.Append($"-c:v:{framesId} libx264 -pix_fmt yuv420p -to {durationString} {outPath} -y ");

    string parameters = inputParameters.ToString() + outputParameters.ToString();

    try
    {
        await Task.Factory.StartNew(() =>
        {
            var outputLog = new List<string>();

            using (var process = new Process
            {
                StartInfo =
                {
                    FileName = ffmpgPath,
                    Arguments = parameters,
                    UseShellExecute = false,
                    CreateNoWindow = true,
                    // ffmpeg sends everything to the error output; standard output is not used.
                    RedirectStandardError = true
                },
                EnableRaisingEvents = true
            })
            {
                process.ErrorDataReceived += (sender, e) =>
                {
                    if (string.IsNullOrEmpty(e.Data))
                    {
                        return;
                    }

                    outputLog.Add(e.Data.ToString());
                    Console.WriteLine(e.Data.ToString());
                };

                process.Start();

                process.BeginErrorReadLine();

                process.WaitForExit();

                if (process.ExitCode != 0)
                {
                    throw new Exception($"ffmpeg failed error exit code {process.ExitCode}. Log: {string.Join(Environment.NewLine, outputLog)}");
                }

                Console.WriteLine($"Exit code: {process.ExitCode}");
            }
        });
    }
    catch (Win32Exception)
    {
        Console.WriteLine("Oh no, failed to start ffmpeg. Have you downloaded and copied ffmpeg.exe to the output folder?");
    }

    Console.WriteLine();
    Console.WriteLine("Video was successfully created. It is available at: " + Path.GetFullPath(outPath));
}
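A hypothetical call (all paths here are assumptions, not taken from the project) might look like this:

// Render a 30 fps time lapse with background audio, a thumbnail and one-second fades.
var frames = Directory.GetFiles(@"C:\timelapse\frames", "*.png").OrderBy(f => f).ToList();

await RenderVideoAsync(
    framesPerSecond: 30,
    images: frames,
    ffmpgPath: @"C:\timelapse\ffmpeg.exe",
    audioPath: @"C:\timelapse\background.mp3",
    thumbnailImagePath: @"C:\timelapse\thumbnail.jpg",
    outPath: @"C:\timelapse\out.mp4",
    videoFadeInDuration: 1.0,
    videoFadeOutDuration: 1.0,
    audioFadeInDuration: 1.0,
    audioFadeOutDuration: 1.0);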
This function is based on the Splicer.Net library. It took me ages to understand how that library works.
Make sure that your fps (frames per second) is correct. By the way, the standard is 24 fps.
In my case I have 15 images and I know that I need a 7-second video, so fps = 2 (15 images / 7 seconds ≈ 2).
Fps may vary according to platform... or developer usage.
public bool CreateVideo(List<Bitmap> bitmaps, string outputFile, double fps)
{
    int width = 640;
    int height = 480;
    if (bitmaps == null || bitmaps.Count == 0) return false;
    try
    {
        using (ITimeline timeline = new DefaultTimeline(fps))
        {
            IGroup group = timeline.AddVideoGroup(32, width, height);
            ITrack videoTrack = group.AddTrack();
            int i = 0;
            double miniDuration = 1.0 / fps;
            foreach (var bmp in bitmaps)
            {
                IClip clip = videoTrack.AddImage(bmp, 0, i * miniDuration, (i + 1) * miniDuration);
                System.Diagnostics.Debug.WriteLine(++i);
            }
            timeline.AddAudioGroup();
            IRenderer renderer = new WindowsMediaRenderer(timeline, outputFile, WindowsMediaProfiles.HighQualityVideo);
            renderer.Render();
        }
    }
    catch { return false; }
    return true;
}
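A hypothetical usage matching the 15 images / 7 seconds example above (folder and file names are assumptions):

// Load the frames and render a roughly 7 second WMV at 2 fps.
var bitmaps = new List<Bitmap>();
foreach (string file in Directory.GetFiles(@"C:\plantImages", "*.jpg"))
{
    bitmaps.Add(new Bitmap(file));
}

bool ok = CreateVideo(bitmaps, @"C:\output\plants.wmv", 2.0);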
Hope this helps.