Image sequence to video stream? - c#

Like many people already seem to have (there are several threads on this subject here) I am looking for ways to create video from a sequence of images.
I want to implement my functionality in C#!
Here is what I want to do:
/*Pseudo code*/
void CreateVideo(List<Image> imageSequence, long durationOfEachImageMs, string outputVideoFileName, string outputFormat)
{
// Info: imageSequence.Count will be > 30 000 images
// Info: durationOfEachImageMs will be < 300 ms
if (outputFormat == "mpeg")
{
}
else if (outputFormat == "avi")
{
}
else
{
}
// Save video file to disk
}
I know there's a project called Splicer (http://splicer.codeplex.com/) but I can't find suitable documentation or clear examples that I can follow (these are the examples that I found).
The closest to what I want to do, which I found here on CodePlex, is this:
How can I create a video from a directory of images in C#?
I have also read a few threads about ffmpeg (for example this: C# and FFmpeg preferably without shell commands? and this: convert image sequence using ffmpeg), but none of them solve my problem, and I don't think the ffmpeg-command-line approach is the best solution for me (because of the amount of images).
I believe that I can use the Splicer-project in some way (?).
In my case, it is about > 30 000 images, where each image should be displayed for about 200 ms (in the video stream that I want to create).
(What is the video about? Plants growing...)
Can anyone help me complete my function?

Well, this answer comes a bit late, but since I have noticed some activity on my original question lately (and the fact that no working solution was provided), I would like to give you what finally worked for me.
I'll split my answer into three parts:
Background
Problem
Solution
Background
(this section is not important for the solution)
My original problem was that I had a huge number of images, each stored individually in a database as a byte array. I wanted to make a video sequence out of all these images.
My equipment setup was something like this general drawing (drawing omitted): a camera photographing the plants, with each image stored in a database.
The images depicted growing tomato plants in different states. Images were taken every minute during daylight hours.
/*pseudo code for taking and storing images*/
while (true)
{
if (daylight)
{
//get an image from the camera
//store the image as byte array to db
}
//wait 1 min
}
I had a very simple database for storing the images; there was only one table in it (ImageSet).
Problem
I had read many articles about ffmpeg (please see my original question) but I couldn't find any that explained how to go from a collection of images to a video.
Solution
Finally, I got a working solution!
The main part of it comes from the open source project AForge.NET. In short, you could say that AForge.NET is a computer vision and artificial intelligence library in C#.
(If you want a copy of the framework, just grab it from http://www.aforgenet.com/)
In AForge.NET, there is this VideoFileWriter class (a class for writing video files with the help of ffmpeg). This did almost all of the work. (There is also a very good example here.)
This is the final class (reduced) which I used to fetch and convert image data into a video from my image database:
public class MovieMaker
{
public void Start()
{
var startDate = DateTime.Parse("12 Mar 2012");
var endDate = DateTime.Parse("13 Aug 2012");
CreateMovie(startDate, endDate);
}
/*THIS CODE BLOCK IS COPIED*/
public Bitmap ToBitmap(byte[] byteArrayIn)
{
var ms = new System.IO.MemoryStream(byteArrayIn);
var returnImage = System.Drawing.Image.FromStream(ms);
var bitmap = new System.Drawing.Bitmap(returnImage);
return bitmap;
}
public Bitmap ReduceBitmap(Bitmap original, int reducedWidth, int reducedHeight)
{
var reduced = new Bitmap(reducedWidth, reducedHeight);
using (var dc = Graphics.FromImage(reduced))
{
// you might want to change properties like
dc.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
dc.DrawImage(original, new Rectangle(0, 0, reducedWidth, reducedHeight), new Rectangle(0, 0, original.Width, original.Height), GraphicsUnit.Pixel);
}
return reduced;
}
/*END OF COPIED CODE BLOCK*/
private void CreateMovie(DateTime startDate, DateTime endDate)
{
int width = 320;
int height = 240;
var frameRate = 200;
using (var container = new ImageEntitiesContainer())
{
//a LINQ-query for getting the desired images
var query = from d in container.ImageSet
where d.Date >= startDate && d.Date <= endDate
select d;
// create instance of video writer
using (var vFWriter = new VideoFileWriter())
{
// create new video file
vFWriter.Open("nameOfMyVideoFile.avi", width, height, frameRate, VideoCodec.Raw);
var imageEntities = query.ToList();
// loop through all images in the collection
foreach (var imageEntity in imageEntities)
{
//what's the current image data?
var imageByteArray = imageEntity.Data;
var bmp = ToBitmap(imageByteArray);
var bmpReduced = ReduceBitmap(bmp, width, height);
vFWriter.WriteVideoFrame(bmpReduced);
}
vFWriter.Close();
}
}
}
}
Update 2013-11-29 (how-to) (I hope this is what you asked for, @Kiquenet?)
Download the AForge.NET Framework from the downloads page (download the full ZIP archive and you will find many interesting Visual Studio solutions with projects, like Video, in the AForge.NET Framework-2.2.5\Samples folder...)
Namespace: AForge.Video.FFMPEG (from the documentation)
Assembly: AForge.Video.FFMPEG (in AForge.Video.FFMPEG.dll) (from the documentation) (you can find this AForge.Video.FFMPEG.dll in the AForge.NET Framework-2.2.5\Release folder)
If you want to create your own solution, make sure you have a reference to AForge.Video.FFMPEG.dll in your project. Then it should be easy to use the VideoFileWriter class. If you follow the link to the class you will find a very good (and simple) example. In the code, they feed the VideoFileWriter with Bitmap images in a for-loop:
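A minimal sketch of that pattern, based on the documentation example (the file name, size, frame rate and frame count are placeholders; it needs references to AForge.Video.FFMPEG.dll and System.Drawing):
using (var writer = new VideoFileWriter())
{
    // create the new video file
    writer.Open("test.avi", 320, 240, 25, VideoCodec.MPEG4);
    using (var bitmap = new Bitmap(320, 240, System.Drawing.Imaging.PixelFormat.Format24bppRgb))
    {
        for (int i = 0; i < 250; i++)
        {
            // draw the next frame onto 'bitmap' here, then write it out
            writer.WriteVideoFrame(bitmap);
        }
    }
    writer.Close();
}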

I found this code in the Splicer samples; it looks pretty close to what you want:
string outputFile = "FadeBetweenImages.wmv";
using (ITimeline timeline = new DefaultTimeline())
{
IGroup group = timeline.AddVideoGroup(32, 160, 100);
ITrack videoTrack = group.AddTrack();
IClip clip1 = videoTrack.AddImage("image1.jpg", 0, 2); // play first image for a little while
IClip clip2 = videoTrack.AddImage("image2.jpg", 0, 2); // and the next
IClip clip3 = videoTrack.AddImage("image3.jpg", 0, 2); // and the next one
IClip clip4 = videoTrack.AddImage("image4.jpg", 0, 2); // and finally the last
double halfDuration = 0.5;
// fade out and back in
group.AddTransition(clip2.Offset - halfDuration, halfDuration, StandardTransitions.CreateFade(), true);
group.AddTransition(clip2.Offset, halfDuration, StandardTransitions.CreateFade(), false);
// again
group.AddTransition(clip3.Offset - halfDuration, halfDuration, StandardTransitions.CreateFade(), true);
group.AddTransition(clip3.Offset, halfDuration, StandardTransitions.CreateFade(), false);
// and again
group.AddTransition(clip4.Offset - halfDuration, halfDuration, StandardTransitions.CreateFade(), true);
group.AddTransition(clip4.Offset, halfDuration, StandardTransitions.CreateFade(), false);
// add some audio
ITrack audioTrack = timeline.AddAudioGroup().AddTrack();
IClip audio =
audioTrack.AddAudio("testinput.wav", 0, videoTrack.Duration);
// create an audio envelope effect, this will:
// fade the audio from 0% to 100% in 1 second.
// play at full volume until 1 second before the end of the track
// fade back out to 0% volume
audioTrack.AddEffect(0, audio.Duration,
StandardEffects.CreateAudioEnvelope(1.0, 1.0, 1.0, audio.Duration));
// render our slideshow out to a windows media file
using (
IRenderer renderer =
new WindowsMediaRenderer(timeline, outputFile, WindowsMediaProfiles.HighQualityVideo))
{
renderer.Render();
}
}

I could not manage to get the above example to work. However, I did find another library that works amazingly well. Install "accord.extensions.imaging.io" via NuGet; then I wrote the following little function:
private void makeAvi(string imageInputfolderName, string outVideoFileName, float fps = 12.0f, string imgSearchPattern = "*.png")
{ // reads all images in folder
VideoWriter w = new VideoWriter(outVideoFileName,
new Accord.Extensions.Size(480, 640), fps, true);
Accord.Extensions.Imaging.ImageDirectoryReader ir =
new ImageDirectoryReader(imageInputfolderName, imgSearchPattern);
while (ir.Position < ir.Length)
{
IImage i = ir.Read();
w.Write(i);
}
w.Close();
}
It reads all images from a folder and makes a video out of them.
If you want to make it nicer, you could probably read the image dimensions instead of hard-coding them, but you get the point.
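A sketch of that tweak (assuming System.IO and System.Drawing are referenced and the folder contains at least one matching image):
private void makeAviAutoSize(string imageInputfolderName, string outVideoFileName, float fps = 12.0f, string imgSearchPattern = "*.png")
{
    // take the size of the first image in the folder instead of hard coding it
    string firstImagePath = Directory.GetFiles(imageInputfolderName, imgSearchPattern)[0];
    int width, height;
    using (var firstImage = System.Drawing.Image.FromFile(firstImagePath))
    {
        width = firstImage.Width;
        height = firstImage.Height;
    }
    VideoWriter w = new VideoWriter(outVideoFileName, new Accord.Extensions.Size(width, height), fps, true);
    ImageDirectoryReader ir = new ImageDirectoryReader(imageInputfolderName, imgSearchPattern);
    while (ir.Position < ir.Length)
    {
        w.Write(ir.Read());
    }
    w.Close();
}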

FFMediaToolkit is a good solution in 2020, with .NET Core support.
https://github.com/radek-k/FFMediaToolkit
FFMediaToolkit is a cross-platform .NET Standard library for creating and reading video files. It uses native FFmpeg libraries by the FFmpeg.Autogen bindings.
The README of the library has a nice example for the question asked.
// You can set the codec, bitrate, frame rate and many other options here.
var settings = new VideoEncoderSettings(width: 1920, height: 1080, framerate: 30, codec: VideoCodec.H264);
settings.EncoderPreset = EncoderPreset.Fast;
settings.CRF = 17;
var file = MediaBuilder.CreateContainer(@"C:\videos\example.mp4").WithVideo(settings).Create();
while(file.Video.FramesCount < 300)
{
file.Video.AddFrame(/*Your code*/);
}
file.Dispose(); // MediaOutput ("file" variable) must be disposed when encoding is completed. You can use `using() { }` block instead.
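If your frames are GDI+ Bitmaps, one way to fill in the AddFrame(...) call is to lock the bitmap bits and wrap the pixel data in an ImageData. A rough sketch (the exact ImageData factory method and pixel format are assumptions you should verify against the library's documentation):
// assumes: using System.Drawing; using System.Drawing.Imaging; using FFMediaToolkit.Graphics;
static void AddBitmapFrame(FFMediaToolkit.Encoding.MediaOutput file, Bitmap bitmap)
{
    var rect = new Rectangle(Point.Empty, bitmap.Size);
    var bitLock = bitmap.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
    try
    {
        // wrap the locked pixel data and hand it to the encoder
        var frame = ImageData.FromPointer(bitLock.Scan0, ImagePixelFormat.Bgr24, bitmap.Size);
        file.Video.AddFrame(frame);
    }
    finally
    {
        bitmap.UnlockBits(bitLock);
    }
}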

This is a solution for creating a video from an image sequence using Visual Studio and C#.
My starting point was "Hauns TM"'s answer below, but my requirements were more basic than theirs, so this solution might be more appropriate for less advanced users (like myself).
Libraries:
using System;
using System.IO;
using System.Drawing;
using Accord.Video.FFMPEG;
You can get the FFMPEG library by searching for FFMPEG in "Tools -> NuGet Package Manager -> Manage NuGet Packages for a Solution..."
The variables that I passed into the function are:
outputFileName = "C://outputFolder//outputMovie.avi"
inputImageSequence =
["C://inputFolder//image_001.png",
"C://inputFolder//image_002.png",
"C://inputFolder//image_003.png",
"C://inputFolder//image_004.png"]
Function:
private void videoMaker( string outputFileName , string[] inputImageSequence)
{
int width = 1920;
int height = 1080;
var frameRate = 25;
using (var vFWriter = new VideoFileWriter())
{
// create new video file
vFWriter.Open(outputFileName, width, height, frameRate, VideoCodec.Raw);
foreach (var imageLocation in inputImageSequence)
{
Bitmap imageFrame = System.Drawing.Image.FromFile(imageLocation) as Bitmap;
vFWriter.WriteVideoFrame(imageFrame);
}
vFWriter.Close();
}
}
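For completeness, one way to build the inputImageSequence array and call the function (the folder and file names are just examples):
// collect the image paths in order and render them to a video
string[] inputImageSequence = Directory.GetFiles(@"C:\inputFolder", "*.png");
Array.Sort(inputImageSequence); // make sure the frames are in the right order
videoMaker(@"C:\outputFolder\outputMovie.avi", inputImageSequence);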

It looks like many of these answers are a bit obsolete as of 2020, so I'll add my thoughts.
I have been working on the same problem and have published the .NET Core project Time Lapse Creator on GitHub: https://github.com/pekspro/TimeLapseCreator It shows how to add extra information to a frame (a timestamp, for instance), background audio, a title screen, fading and more. ffmpeg is then used to do the rendering. This is done in this function:
// Render video from a list of images, add background audio and a thumbnail image.
private async Task RenderVideoAsync(int framesPerSecond, List<string> images, string ffmpgPath,
string audioPath, string thumbnailImagePath, string outPath,
double videoFadeInDuration = 0, double videoFadeOutDuration = 0,
double audioFadeInDuration = 0, double audioFadeOutDuration = 0)
{
string fileListName = Path.Combine(OutputPath, "framelist.txt");
var fileListContent = images.Select(a => $"file '{a}'{Environment.NewLine}duration 1");
await File.WriteAllLinesAsync(fileListName, fileListContent);
TimeSpan vidLengthCalc = TimeSpan.FromSeconds(images.Count / ((double)framesPerSecond));
int coverId = -1;
int audioId = -1;
int framesId = 0;
int nextId = 1;
StringBuilder inputParameters = new StringBuilder();
StringBuilder outputParameters = new StringBuilder();
inputParameters.Append($"-r {framesPerSecond} -f concat -safe 0 -i {fileListName} ");
outputParameters.Append($"-map {framesId} ");
if(videoFadeInDuration > 0 || videoFadeOutDuration > 0)
{
List<string> videoFilterList = new List<string>();
if (videoFadeInDuration > 0)
{
//Assume we fade in from first second.
videoFilterList.Add($"fade=in:start_time={0}s:duration={videoFadeInDuration.ToString("0", NumberFormatInfo.InvariantInfo)}s");
}
if (videoFadeOutDuration > 0)
{
//Assume we fade out to last second.
videoFilterList.Add($"fade=out:start_time={(vidLengthCalc.TotalSeconds - videoFadeOutDuration).ToString("0.000", NumberFormatInfo.InvariantInfo)}s:duration={videoFadeOutDuration.ToString("0.000", NumberFormatInfo.InvariantInfo)}s");
}
string videoFilterString = string.Join(',', videoFilterList);
outputParameters.Append($"-filter:v:{framesId} \"{videoFilterString}\" ");
}
if (thumbnailImagePath != null)
{
coverId = nextId;
nextId++;
inputParameters.Append($"-i {thumbnailImagePath} ");
outputParameters.Append($"-map {coverId} ");
outputParameters.Append($"-c:v:{coverId} copy -disposition:v:{coverId} attached_pic ");
}
if (audioPath != null)
{
audioId = nextId;
nextId++;
inputParameters.Append($"-i {audioPath} ");
outputParameters.Append($"-map {audioId} ");
if(audioFadeInDuration <= 0 && audioFadeOutDuration <= 0)
{
// If no audio fading, just copy as it is.
outputParameters.Append($"-c:a copy ");
}
else
{
List<string> audioEffectList = new List<string>();
if(audioFadeInDuration > 0)
{
//Assume we fade in from first second.
audioEffectList.Add($"afade=in:start_time={0}s:duration={audioFadeInDuration.ToString("0", NumberFormatInfo.InvariantInfo)}s");
}
if (audioFadeOutDuration > 0)
{
//Assume we fade out to last second.
audioEffectList.Add($"afade=out:start_time={(vidLengthCalc.TotalSeconds - audioFadeOutDuration).ToString("0.000", NumberFormatInfo.InvariantInfo)}s:duration={audioFadeOutDuration.ToString("0.000", NumberFormatInfo.InvariantInfo)}s");
}
string audioFilterString = string.Join(',', audioEffectList);
outputParameters.Append($"-filter:a \"{audioFilterString}\" ");
}
}
int milliseconds = vidLengthCalc.Milliseconds;
int seconds = vidLengthCalc.Seconds;
int minutes = vidLengthCalc.Minutes;
var hours = (int)vidLengthCalc.TotalHours;
string durationString = $"{hours:D}:{minutes:D2}:{seconds:D2}.{milliseconds:D3}";
outputParameters.Append($"-c:v:{framesId} libx264 -pix_fmt yuv420p -to {durationString} {outPath} -y ");
string parameters = inputParameters.ToString() + outputParameters.ToString();
try
{
await Task.Factory.StartNew(() =>
{
var outputLog = new List<string>();
using (var process = new Process
{
StartInfo =
{
FileName = ffmpgPath,
Arguments = parameters,
UseShellExecute = false,
CreateNoWindow = true,
// ffmpeg sends everything to the error output; standard output is not used.
RedirectStandardError = true
},
EnableRaisingEvents = true
})
{
process.ErrorDataReceived += (sender, e) =>
{
if (string.IsNullOrEmpty(e.Data))
{
return;
}
outputLog.Add(e.Data.ToString());
Console.WriteLine(e.Data.ToString());
};
process.Start();
process.BeginErrorReadLine();
process.WaitForExit();
if (process.ExitCode != 0)
{
throw new Exception($"ffmpeg failed error exit code {process.ExitCode}. Log: {string.Join(Environment.NewLine, outputLog)}");
}
Console.WriteLine($"Exit code: {process.ExitCode}");
}
});
}
catch(Win32Exception )
{
Console.WriteLine("Oh no, failed to start ffmpeg. Have you downloaded and copied ffmpeg.exe to the output folder?");
}
Console.WriteLine();
Console.WriteLine("Video was successfully created. It is available at: " + Path.GetFullPath(outPath));
}
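A call to the function above could look roughly like this (the paths, frame rate and fade durations are only examples; OutputPath is assumed to be defined in the same class, and System.Linq is needed for OrderBy):
// hypothetical usage of RenderVideoAsync as defined above
var images = Directory.GetFiles(@"C:\timelapse\frames", "*.png")
    .OrderBy(p => p)
    .ToList();
await RenderVideoAsync(
    framesPerSecond: 30,
    images: images,
    ffmpgPath: @"C:\tools\ffmpeg.exe",
    audioPath: @"C:\timelapse\background.mp3",
    thumbnailImagePath: images.First(),
    outPath: Path.Combine(OutputPath, "timelapse.mp4"),
    videoFadeInDuration: 1.0,
    videoFadeOutDuration: 2.0,
    audioFadeInDuration: 1.0,
    audioFadeOutDuration: 2.0);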

This function is based on the Splicer.Net library. It took me ages to understand how that library works.
Make sure that your fps (frames per second) is correct; the standard is 24 fps.
In my case I had 15 images and knew I needed a 7-second video, so fps = 2 (roughly 15 images / 7 seconds).
Fps may vary according to platform or developer usage.
public bool CreateVideo(List<Bitmap> bitmaps, string outputFile, double fps)
{
int width = 640;
int height = 480;
if (bitmaps == null || bitmaps.Count == 0) return false;
try
{
using (ITimeline timeline = new DefaultTimeline(fps))
{
IGroup group = timeline.AddVideoGroup(32, width, height);
ITrack videoTrack = group.AddTrack();
int i = 0;
double miniDuration = 1.0 / fps;
foreach (var bmp in bitmaps)
{
IClip clip = videoTrack.AddImage(bmp, 0, i * miniDuration, (i + 1) * miniDuration);
System.Diagnostics.Debug.WriteLine(++i);
}
timeline.AddAudioGroup();
IRenderer renderer = new WindowsMediaRenderer(timeline, outputFile, WindowsMediaProfiles.HighQualityVideo);
renderer.Render();
}
}
catch { return false; }
return true;
}
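As a usage example (15 images and a 7-second target, as described above, give fps = 15 / 7 ≈ 2; the paths are placeholders and System.IO/System.Linq are assumed):
// hypothetical usage: load the frames and derive fps from the desired length
List<Bitmap> bitmaps = Directory.GetFiles(@"C:\frames", "*.png")
    .OrderBy(p => p)
    .Select(p => new Bitmap(p))
    .ToList();
double desiredSeconds = 7;
double fps = bitmaps.Count / desiredSeconds; // roughly 2 for 15 images
bool ok = CreateVideo(bitmaps, @"C:\videos\result.wmv", fps);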
Hope this helps.

Related

Why does NAudio read zeros to buffer after playing the file but not before?

The following code will successfully load, play, and edit audio samples, and (almost) write them to an audio file. I say almost because when I comment out the "Play" code it works, but leaving it in causes the buffer read:
audioFile.Read(buffer, 0, numSamples);
to result in zeros.
Do I need to reset the audioFile somehow? All the examples I've found don't mention any need for this.
using System;
using NAudio.Wave;
namespace NAudioTest
{
class TestPlayer
{
static void Main(string[] args)
{
string infileName = "c:\\temp\\pink.wav";
string outfileName = "c:\\temp\\pink_out.wav";
// load the file
var audioFile = new AudioFileReader(infileName);
// play the file
var outputDevice = new WaveOutEvent();
outputDevice.Init(audioFile);
outputDevice.Play();
//Since Play only means "start playing" and isn't blocking, we can wait in a loop until playback finishes....
while (outputDevice.PlaybackState == PlaybackState.Playing) { System.Threading.Thread.Sleep(1000); }
// edit the samples in file
int fs = audioFile.WaveFormat.SampleRate;
int numSamples = (int)audioFile.Length / sizeof(float); // length is the number of bytes - 4 bytes in a float
float[] buffer = new float[numSamples];
audioFile.Read(buffer, 0, numSamples);
float volume = 0.5f;
for (int n = 0; n < numSamples; n++) { buffer[n] *= volume; }
// write edited samples to new file
var writer = new WaveFileWriter(outfileName,audioFile.WaveFormat);
writer.WriteSamples(buffer,0,numSamples);
}
}
}
You must call Dispose on your writer before it is a valid WAV file. I recommend that you put it in a using block.
using(var writer = new WaveFileWriter(outfileName,audioFile.WaveFormat))
{
writer.WriteSamples(buffer,0,numSamples);
}

Mixing two audio samples from MP3 files

I am having a problem mixing two different audio samples into one by simply adding the bytes of both audio samples.
After the process below, when I try to open the mixed.mp3 file in a media player, it says:
Windows Media Player encountered a problem while playing the file.
Here is the code I'm using to mix the audio files:
byte[] bytes1,bytes2,final;
int length1,length2,max;
// Getting byte[] of audio file
using ( BinaryReader b = new BinaryReader(File.Open("background.mp3" , FileMode.Open)) )
{
length1 = (int)b.BaseStream.Length;
bytes1 = b.ReadBytes(length1);
}
using ( BinaryReader b = new BinaryReader(File.Open("voice.mp3" , FileMode.Open)) )
{
length2 = (int)b.BaseStream.Length;
bytes2 = b.ReadBytes(length2);
}
// Getting max length
if(length1 > length2){
max = length1;
}else{
max = length2;
}
// Initializing output byte[] of max length
final = new byte[max];
// Adding byte1 and byte2 and copying into final
for (int i=0;i<max;i++)
{
byte b1 , b2;
if(i < length1){
b1 = bytes1[i];
}else{
b1 = 0;
}
if ( i < length2 ){
b2 = bytes2[i];
}
else{
b2 = 0;
}
final[i] = (byte)(b1 + b2);
}
// Writing final[] as an mp3 file
File.WriteAllBytes("mixed.mp3" , final);
Note: I tried to mix two same files and it worked, that is, the media player didn't throw any errors and played it correctly.
This is most likely due to the fact that you aren't decoding the MP3 files before you mix them, and you're "just adding" the samples together, which is going to result in clipping. You should first use a library to decode the MP3 files to PCM, which will then allow you to mix them.
To correctly mix the samples you should be doing:
final[i] = (byte)(b1 / 2 + b2 / 2);
Otherwise your calculations will overflow (also, I'd generally recommend normalising your audio to floats before manipulating them). It should also be noted that you're mixing all the bytes in the MP3 files, i.e., you're messing with the headers (hence WMP refusing to play your "Mixed" file). You should only mix the actual audio data (the samples) of the files, not the entire file.
I've provided a (commented) working example1 using the NAudio library (it exports the mixed audio to a wav file to avoid further complications):
// You can get the library via NuGet if preferred.
using NAudio.Wave;
...
var fileA = new AudioFileReader("Input path 1");
// Calculate our buffer size, since we're normalizing our samples to floats
// we'll need to account for that by dividing the file's audio byte count
// by its bit depth / 8.
var bufferA = new float[fileA.Length / (fileA.WaveFormat.BitsPerSample / 8)];
// Now let's populate our buffer with samples.
fileA.Read(bufferA, 0, bufferA.Length);
// Do it all over again for the other file.
var fileB = new AudioFileReader("Input path 2");
var bufferB = new float[fileB.Length / (fileB.WaveFormat.BitsPerSample / 8)];
fileB.Read(bufferB, 0, bufferB.Length);
// Calculate the largest file (simpler than using an 'if').
var maxLen = (long)Math.Max(bufferA.Length, bufferB.Length);
var final = new byte[maxLen];
// For now, we'll just save our mixed data to a wav file.
// (Things can get a little complicated when encoding to MP3.)
using (var writer = new WaveFileWriter("Output path", fileA.WaveFormat))
{
for (var i = 0; i < maxLen; i++)
{
float a, b;
if (i < bufferA.Length)
{
// Reduce the amplitude of the sample by 2
// to avoid clipping.
a = bufferA[i] / 2;
}
else
{
a = 0;
}
if (i < bufferB.Length)
{
b = bufferB[i] / 2;
}
else
{
b = 0;
}
writer.WriteSample(a + b);
}
}
1 The input files must be of the same sample rate, bit depth and channel count in order for it to work correctly.
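Given that constraint, a simple guard you could add before the mixing loop (reusing the AudioFileReader instances from above) might look like this:
// bail out early if the two inputs don't share the same format,
// since sample-by-sample mixing only makes sense when they match
if (fileA.WaveFormat.SampleRate != fileB.WaveFormat.SampleRate ||
    fileA.WaveFormat.BitsPerSample != fileB.WaveFormat.BitsPerSample ||
    fileA.WaveFormat.Channels != fileB.WaveFormat.Channels)
{
    throw new InvalidOperationException("Input files must have the same sample rate, bit depth and channel count.");
}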

Slow Camera Capture

I am using Windows 8's media capture class to take a photo in a desktop application and copy it to the clipboard.
My function takes two inputs as arguments:
1) the desired device (front, back or usb web cam) and
2) the desired resolution
Here is the function:
async public void UseCamera(int x, int y)
{
MediaCapture _mediaCapture = new MediaCapture();
var _ImageFormat = ImageEncodingProperties.CreatePng();
var _fileStream = new InMemoryRandomAccessStream();
MediaCaptureInitializationSettings _cameraSettings1 = new MediaCaptureInitializationSettings();
DeviceInformationCollection _deviceInformationCollection = null;
IReadOnlyList<IMediaEncodingProperties> res;
_deviceInformationCollection = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
if (x > _deviceInformationCollection.Count - 1)
{
MessageBox.Show("Device Not found");
}
else
{
_cameraSettings1.VideoDeviceId = _deviceInformationCollection[x].Id;
_cameraSettings1.AudioDeviceId = "";
_cameraSettings1.PhotoCaptureSource = PhotoCaptureSource.VideoPreview;
_cameraSettings1.StreamingCaptureMode = StreamingCaptureMode.Video;
await _mediaCapture.InitializeAsync(_cameraSettings1);
res = _mediaCapture.VideoDeviceController.GetAvailableMediaStreamProperties(MediaStreamType.VideoPreview);
uint maxResolution = 0;
List<int> indexMaxResolution = new List<int>();
if (res.Count >= 1)
{
for (int i = 0; i < res.Count; i++)
{
VideoEncodingProperties vp = (VideoEncodingProperties)res[i];
if (vp.Width > maxResolution)
{
indexMaxResolution.Add(i);
maxResolution = vp.Width;
}
}
indexMaxResolution.Reverse();
if (y > indexMaxResolution.Count())
{
MessageBox.Show("Maximum supported resolution index : " + (indexMaxResolution.Count - 1).ToString());
}
//this is the part that I believe is the trouble maker
else
{
await _mediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync(MediaStreamType.VideoPreview, res[indexMaxResolution.ElementAt(y)]);
await _mediaCapture.CapturePhotoToStreamAsync(_ImageFormat, _fileStream);
Clipboard.SetImage(Image.FromStream(_fileStream.AsStream()));
}
}
}
}
The function is working, but the problem is that it is incredibly slow: it takes almost 4-5 seconds to capture a photo. Can anyone tell me where I am going wrong and how I can speed things up? I tested my camera, and it can take pictures at almost 2 per second.
If you move all the initialization and device information queries to an initialization function you might see an increase in speed.
In my experience gathering device information is slow.
Try to do as much as possible upfront so that when the time comes to capture, only the necessary things need to be done.
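A rough sketch of that idea (the method and field names are mine, not a drop-in replacement; it only reuses the APIs already shown in the question and assumes System.Threading.Tasks is available):
// initialize once (the slow part: device enumeration + MediaCapture initialization)
private MediaCapture _mediaCapture;

public async Task InitCameraAsync(int deviceIndex)
{
    var devices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
    _mediaCapture = new MediaCapture();
    await _mediaCapture.InitializeAsync(new MediaCaptureInitializationSettings
    {
        VideoDeviceId = devices[deviceIndex].Id,
        AudioDeviceId = "",
        PhotoCaptureSource = PhotoCaptureSource.VideoPreview,
        StreamingCaptureMode = StreamingCaptureMode.Video
    });
}

// capture as often as you like (the fast part: encode the photo and copy it)
public async Task CapturePhotoToClipboardAsync()
{
    var stream = new InMemoryRandomAccessStream();
    await _mediaCapture.CapturePhotoToStreamAsync(ImageEncodingProperties.CreatePng(), stream);
    Clipboard.SetImage(Image.FromStream(stream.AsStream()));
}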

Compare picture in Word file and in a folder?

So I'm using Word.Interop, and in order to compare two pictures, I guess I have to transform the current picture (in the Word file) to a bitmap image and then compare it with a bitmap image object from the desktop?
Or perhaps there is a simpler way to do so?
Word.InlineShape x;
x.isEqual( Picture from Desktop/ bitmapImage.Object);
I have made a small sample showing how this can be accomplished. The main idea is to represent your image from your desktop as a Bitmap instance and then compare it pixel by pixel to the Bitmap instance in your document. The comparison is done by first copying an inline shape to the clipboard, then turning it into a Bitmap, and then compare it with the reference (from the desktop) - first by size and then pixel by pixel.
The sample is implemented as a C# console application using .NET 4.5, Microsoft Office Object Library version 15.0, and Microsoft Word Object Library version 15.0.
using System;
using System.Drawing;
using System.Threading;
using System.Windows.Forms;
using Application = Microsoft.Office.Interop.Word.Application;
namespace WordDocStats
{
class Program
{
// General idea is based on: https://stackoverflow.com/a/7937590/700926
static void Main()
{
// Open a doc file
var wordApplication = new Application();
var document = wordApplication.Documents.Open(@"C:\Users\Username\Documents\document.docx");
// Load the image to compare against.
var bitmapToCompareAgainst = new Bitmap(@"C:\Users\Username\Documents\image.png");
// For each inline shape, do a comparison
// By inspection you can see that the first inline shape has index 1 (and not zero as one might expect)
for (var i = 1; i <= wordApplication.ActiveDocument.InlineShapes.Count; i++)
{
// closure
// http://confluence.jetbrains.net/display/ReSharper/Access+to+modified+closure
var inlineShapeId = i;
// parameterized thread start
// https://stackoverflow.com/a/1195915/700926
var thread = new Thread(() => CompareInlineShapeAndBitmap(inlineShapeId, bitmapToCompareAgainst, wordApplication));
// STA is needed in order to access the clipboard
// https://stackoverflow.com/a/518724/700926
thread.SetApartmentState(ApartmentState.STA);
thread.Start();
thread.Join();
}
// Close word
wordApplication.Quit();
Console.ReadLine();
}
// General idea is based on: https://stackoverflow.com/a/7937590/700926
protected static void CompareInlineShapeAndBitmap(int inlineShapeId, Bitmap bitmapToCompareAgainst, Application wordApplication)
{
// Get the shape, select, and copy it to the clipboard
var inlineShape = wordApplication.ActiveDocument.InlineShapes[inlineShapeId];
inlineShape.Select();
wordApplication.Selection.Copy();
// Check data is in the clipboard
if (Clipboard.GetDataObject() != null)
{
var data = Clipboard.GetDataObject();
// Check if the data conforms to a bitmap format
if (data != null && data.GetDataPresent(DataFormats.Bitmap))
{
// Fetch the image and convert it to a Bitmap
var image = (Image)data.GetData(DataFormats.Bitmap, true);
var currentBitmap = new Bitmap(image);
var imagesAreEqual = true;
// Compare the images - first by size and then pixel by pixel
// Based on: http://www.c-sharpcorner.com/uploadfile/prathore/image-comparison-using-C-Sharp/
if(currentBitmap.Width == bitmapToCompareAgainst.Width && currentBitmap.Height == bitmapToCompareAgainst.Height)
{
for (var i = 0; i < currentBitmap.Width; i++)
{
if(!imagesAreEqual)
break;
for (var j = 0; j < currentBitmap.Height; j++)
{
if (currentBitmap.GetPixel(i, j).Equals(bitmapToCompareAgainst.GetPixel(i, j)))
continue;
imagesAreEqual = false;
break;
}
}
}
else
{
imagesAreEqual = false;
}
Console.WriteLine("Inline shape #{0} is equal to the 'external' bitmap: {1}", inlineShapeId, imagesAreEqual);
}
else
{
Console.WriteLine("Clipboard data is not in an image format");
}
}
else
{
Console.WriteLine("Clipboard is empty");
}
}
}
}
References:
Threadstart with params: https://stackoverflow.com/a/1195915/700926
Extracting inline shapes as images from word in C#: https://stackoverflow.com/a/7937590/700926
Comparing images in C#:
http://www.c-sharpcorner.com/uploadfile/prathore/image-comparison-using-C-Sharp/
Details on how to retrieve an image from the clipboard in C#: https://stackoverflow.com/a/998825/700926
Details on how to access the clipboard from C#: https://stackoverflow.com/a/518724/700926

How can I resample wav file

Currently I'm recording an audio signal with following specs:
Channels: 1
SamplesPerSecond: 8000
BitsPerSample: 16
How can I convert this .wav file to, e.g., the following specs (pure C# is preferred):
Channels: 1
SamplesPerSecond: 22050
BitsPerSample: 16
One Windows API to resample audio is the Audio Resampler DSP. This transform class is pretty straightforward: set up the input and output types, then push input data and pull output.
Another task you may additionally need to deal with is reading from a file and writing into a new file (you did not specify whether this is actually needed in your original description, though).
You might also want to use third party libraries like NAudio.
See also:
C# resample audio from 8khz to 44.1/48khz
Audio DSP in C#
Try NAudio - it is a free, open-source .NET library offering several things, including (AFAIK) the ability to resample.
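For example, with NAudio's MediaFoundationResampler (Windows only; a minimal sketch, the file names are placeholders):
using NAudio.Wave;
// resample an 8 kHz mono 16-bit WAV to 22.05 kHz, keeping bit depth and channel count
using (var reader = new WaveFileReader(@"c:\temp\input_8000.wav"))
using (var resampler = new MediaFoundationResampler(reader, new WaveFormat(22050, 16, 1)))
{
    resampler.ResamplerQuality = 60; // 1..60, higher means better quality
    WaveFileWriter.CreateWaveFile(@"c:\temp\output_22050.wav", resampler);
}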
As requested, here is sample source for resampling.
An AS3 function for resampling; you can easily convert this code to C# (a rough C# translation of the loop follows the listing):
private function resampling(fromSampleRate:int, toSampleRate:int, quality:int = 10):void
{
var samples:Vector.<Number> = new Vector.<Number>;
var srcLength:uint = this._samples.length;
var destLength:uint = this._samples.length*toSampleRate/fromSampleRate;
var dx:Number = srcLength/destLength;
// fmax : nyqist half of destination sampleRate
// fmax / fsr = 0.5;
var fmaxDivSR:Number = 0.5;
var r_g:Number = 2 * fmaxDivSR;
// Quality is half the window width
var wndWidth2:int = quality;
var wndWidth:int = quality*2;
var x:Number = 0;
var i:uint, j:uint;
var r_y:Number;
var tau:int;
var r_w:Number;
var r_a:Number;
var r_snc:Number;
for (i=0;i<destLength;++i)
{
r_y = 0.0;
for (tau=-wndWidth2;tau < wndWidth2;++tau)
{
// input sample index
j = (int)(x+tau);
// Hann Window. Scale and calculate sinc
r_w = 0.5 - 0.5 * Math.cos(2*Math.PI*(0.5 + (j-x)/wndWidth));
r_a = 2*Math.PI*(j-x)*fmaxDivSR;
r_snc = 1.0;
if (r_a != 0)
r_snc = Math.sin(r_a)/r_a;
if ((j >= 0) && (j < srcLength))
{
r_y += r_g * r_w * r_snc * this._samples[j];
}
}
samples[i] = r_y;
x += dx;
}
this._samples = samples.concat();
samples.length = 0;
}
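For reference, a rough C# translation of that loop (a sketch that assumes the samples are already available as a float[]):
static float[] Resample(float[] src, int fromSampleRate, int toSampleRate, int quality = 10)
{
    int srcLength = src.Length;
    int destLength = (int)((long)srcLength * toSampleRate / fromSampleRate);
    var dest = new float[destLength];
    double dx = (double)srcLength / destLength;
    // fmax : nyquist half of destination sampleRate
    double fmaxDivSR = 0.5;
    double r_g = 2 * fmaxDivSR;
    // quality is half the window width
    int wndWidth2 = quality;
    int wndWidth = quality * 2;
    double x = 0;
    for (int i = 0; i < destLength; ++i)
    {
        double r_y = 0.0;
        for (int tau = -wndWidth2; tau < wndWidth2; ++tau)
        {
            // input sample index
            int j = (int)(x + tau);
            // Hann window, scale and calculate sinc
            double r_w = 0.5 - 0.5 * Math.Cos(2 * Math.PI * (0.5 + (j - x) / wndWidth));
            double r_a = 2 * Math.PI * (j - x) * fmaxDivSR;
            double r_snc = (r_a == 0) ? 1.0 : Math.Sin(r_a) / r_a;
            if (j >= 0 && j < srcLength)
                r_y += r_g * r_w * r_snc * src[j];
        }
        dest[i] = (float)r_y;
        x += dx;
    }
    return dest;
}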
Try the code below, from C# resample audio from 8khz to 44.1/48khz:
static void Resample(string fileName)
{
IntPtr formatNew = AudioCompressionManager.GetPcmFormat(2, 16, 44100);
WaveReader wr = new WaveReader(File.OpenRead(fileName));
IntPtr format = wr.ReadFormat();
byte[] data = wr.ReadData();
wr.Close();
//PCM 8000 Hz -> PCM 44100
byte[] dataNew = AudioCompressionManager.Resample(format, data, formatNew);
WaveWriter ww = new WaveWriter(File.Create(fileName + ".wav"),
AudioCompressionManager.FormatBytes(formatNew));
ww.WriteData(dataNew);
ww.Close();
}
