I have not worked with a .NET project for a while, and now I need a library to show a live view from the webcam and to take a picture or record a video. Can anyone suggest a good open-source library for that? After a quick search I found the AForge library, but I don't know if it is what I'm looking for.
I have used WebCam_Capture before.
WebCAM.cs:
using System;
using System.IO;
using System.Linq;
using System.Text;
using System.Collections.Generic;
using WebCam_Capture;

namespace controlPC
{
    class WebCam
    {
        private WebCamCapture webcam;
        private System.Windows.Forms.PictureBox _FrameImage;
        private int FrameNumber = 50; // capture interval in milliseconds

        public void InitializeWebCam(ref System.Windows.Forms.PictureBox ImageControl)
        {
            webcam = new WebCamCapture();
            webcam.FrameNumber = 0;
            webcam.TimeToCapture_milliseconds = FrameNumber;
            webcam.ImageCaptured += new WebCamCapture.WebCamEventHandler(webcam_ImageCaptured);
            _FrameImage = ImageControl;
        }

        // Show each captured frame in the PictureBox
        void webcam_ImageCaptured(object source, WebcamEventArgs e)
        {
            _FrameImage.Image = e.WebCamImage;
        }

        public void Start()
        {
            webcam.TimeToCapture_milliseconds = FrameNumber;
            webcam.Start(0);
        }

        public void Stop()
        {
            webcam.Stop();
        }

        public void Continue()
        {
            // Resume capturing from the last frame number
            webcam.TimeToCapture_milliseconds = FrameNumber;
            webcam.Start(this.webcam.FrameNumber);
        }
    }
}
MainForm.cs
using WebCam_Capture;

// Field on the form that holds the helper class above
private WebCam webcam;

void MainFormLoad(object sender, EventArgs e)
{
    webcam = new WebCam();
    webcam.InitializeWebCam(ref pictureBox1);
}
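The question also asks about taking a picture. Since the live view above keeps the latest frame in the PictureBox, one simple option is to save that image when a button is clicked. This is only a sketch; the handler name and the hard-coded file name are illustrative:

// Hypothetical snapshot handler: saves whatever frame the live view is currently showing.
void btnSnapshot_Click(object sender, EventArgs e)
{
    if (pictureBox1.Image != null)
    {
        pictureBox1.Image.Save("snapshot.jpg", System.Drawing.Imaging.ImageFormat.Jpeg);
    }
}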
I am hoping someone can help me with an issue; I feel it should be an easy one, but I am not having any luck fixing it. I want to be able to pause a video that I am playing using Vlc.DotNet. Below is a brief summary of the structure of my code.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.Runtime.InteropServices;
using System.Reflection;
using System.IO;
using Vlc.DotNet.Forms;
using System.Threading;
using Vlc.DotNet.Core;
using System.Diagnostics;
namespace TS1_C
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void Form1_Load(object sender, EventArgs e)
{
button8.Click += new EventHandler(this.button8_Click);
}
void listBox1_MouseDoubleClick(object sender, MouseEventArgs e)
{
string chosen = listBox1.SelectedItem.ToString();
string final = selectedpath2 + "\\" + chosen; //Path
playfile(final);
}
void playfile(string final)
{
var control = new VlcControl();
var currentAssembly = Assembly.GetEntryAssembly();
var currentDirectory = new FileInfo(currentAssembly.Location).DirectoryName;
// Default installation path of VideoLAN.LibVLC.Windows
var libDirectory = new DirectoryInfo(Path.Combine(currentDirectory, "libvlc", IntPtr.Size == 4 ? "win-x86" : "win-x64"));
control.BeginInit();
control.VlcLibDirectory = libDirectory;
control.Dock = DockStyle.Fill;
control.EndInit();
panel1.Controls.Add(control);
control.Play();
}
private void button8_Click(object sender, EventArgs e)
{
}
}
}
As you can see, I have one method that takes a double-click on an item in the list box and plays it using the playfile method. However, I want to be able to pause the video using my button, button8. I have tried many things, even this:
control.Paused += new System.EventHandler<VlcMediaPlayerPausedEventArgs>(button8_Click);
which I put into the playfile method, but nothing seems to work. I am wondering if my whole approach of playing a file with playfile() is wrong. I am hoping someone can help me achieve what I need.
Thank you
Your control should be initialized only once:
private VlcControl control;
public Form1()
{
InitializeComponent();
control = new VlcControl();
var currentAssembly = Assembly.GetEntryAssembly();
var currentDirectory = new FileInfo(currentAssembly.Location).DirectoryName;
// Default installation path of VideoLAN.LibVLC.Windows
var libDirectory = new DirectoryInfo(Path.Combine(currentDirectory, "libvlc", IntPtr.Size == 4 ? "win-x86" : "win-x64"));
control.BeginInit();
control.VlcLibDirectory = libDirectory;
control.Dock = DockStyle.Fill;
control.EndInit();
panel1.Controls.Add(control);
}
Then your play method can be simplified:
void playfile(string url)
{
control.Play(url);
}
And for your pause method:
private void button8_Click(object sender, EventArgs e)
{
control.Pause();
}
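For completeness, both handlers have to be wired up; button8.Click already is in Form1_Load, and the list box double-click may be wired in the designer. If it is not, a minimal sketch:

private void Form1_Load(object sender, EventArgs e)
{
    button8.Click += new EventHandler(this.button8_Click);
    listBox1.MouseDoubleClick += new MouseEventHandler(this.listBox1_MouseDoubleClick);
}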
After I connect to a USB camera, read a frame, and convert it to a bitmap, the application crashes after I write the bitmap to the PictureBox.
I'm developing a Visual Studio 2017 Pro C# Windows Forms project.
Also, if I debug process_video_NewFrame() and step through it, the crash 'Parameter is not valid.' occurs at this line in Program.cs:
Application.Run(new Control_Panel());
Control_Panel.cs
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using Accord.Video;
using Accord.Video.DirectShow;
namespace ACCORD_WindowsFormsApp9
{
public partial class Control_Panel : Form
{
VideoCaptureDevice videoSource;
Bitmap bitmap;
public Control_Panel()
{
InitializeComponent();
}
private void button_Start_Frame_Captcha_Click(object sender, EventArgs e)
{
var videoDevices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
videoSource = new VideoCaptureDevice(videoDevices[0].MonikerString);
videoSource.NewFrame += new NewFrameEventHandler(process_video_NewFrame);
videoSource.Start();
}
// The video_NewFrame used in the example above could be defined as:
private void process_video_NewFrame(object sender, NewFrameEventArgs eventArgs)
{
// get new frame
bitmap = eventArgs.Frame;
System.Windows.Forms.Application.DoEvents();
this.pictureBoxLatestCameraFrame.Image = bitmap;
}
private void button_Stop_Frame_Captcha_Click(object sender, EventArgs e)
{
videoSource.SignalToStop();
}
}
}
Program.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Windows.Forms;
namespace ACCORD_WindowsFormsApp9
{
static class Program
{
/// <summary>
/// The main entry point for the application.
/// </summary>
[STAThread]
static void Main()
{
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.Run(new Control_Panel());
}
}
}
Fixed! Thanks, Sriram. Keep that crystal ball polished... Being new to Bitmap, I mistakenly assumed the frame handed to the event would stay valid, when in fact it is reused/disposed by the video source after the handler returns, so a new Bitmap copy is required on each iteration, i.e....
// Fields on the form (not shown above): Bitmap camera_snapshot_bitmap; int loops;
private void process_video_NewFrame(object sender, NewFrameEventArgs eventArgs)
{
    // Dispose of the previous copy before replacing it
    if (loops > 0)
    {
        camera_snapshot_bitmap.Dispose();
    }
    // Get the new frame: copy it, because eventArgs.Frame is reused by the video source
    camera_snapshot_bitmap = new Bitmap(eventArgs.Frame);
    //ax System.Windows.Forms.Application.DoEvents();
    this.pictureBoxLatestCameraFrame.Image = camera_snapshot_bitmap;
    ++loops;
    Console.WriteLine("Loops " + loops);
}
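One further caveat, not part of the original fix: NewFrame is raised on a background thread, so assigning PictureBox.Image directly from the handler is not strictly thread-safe. A hedged sketch of an alternative handler that marshals the update onto the UI thread (the disposal pattern shown is just one option):

private void process_video_NewFrame(object sender, NewFrameEventArgs eventArgs)
{
    // Copy the frame, because eventArgs.Frame is reused by the video source
    Bitmap frameCopy = new Bitmap(eventArgs.Frame);

    // Marshal the control update onto the form's UI thread
    this.BeginInvoke((Action)(() =>
    {
        Image old = pictureBoxLatestCameraFrame.Image;
        pictureBoxLatestCameraFrame.Image = frameCopy;
        old?.Dispose(); // release the previous copy once it is no longer displayed
    }));
}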
I am trying to record the audio coming out of the sound card, and this is my progress so far. The problem is that the recorded audio is very large when saved to a file; a single song can reach hundreds of megabytes.
Here's my code:
using NAudio.CoreAudioApi;
using NAudio.Wave;
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
namespace Record_From_Soundcard
{
public partial class frmMain : Form
{
private WaveFileWriter writer;
private WasapiLoopbackCapture waveInSel;
public frmMain()
{
InitializeComponent();
}
private void frmMain_Load(object sender, EventArgs e)
{
MMDeviceEnumerator deviceEnum = new MMDeviceEnumerator();
MMDeviceCollection deviceCol = deviceEnum.EnumerateAudioEndPoints(DataFlow.Render, DeviceState.Active);
cboAudioDrivers.DataSource = deviceCol.ToList();
}
private void btnStopRecord_Click(object sender, EventArgs e)
{
waveInSel.StopRecording();
writer.Close();
}
private void btnStartRecord_Click(object sender, EventArgs e)
{
using (SaveFileDialog _sfd = new SaveFileDialog())
{
_sfd.Filter = "Mp3 File (*.mp3)|*.mp3";
if (_sfd.ShowDialog() == System.Windows.Forms.DialogResult.OK)
{
MMDevice _device = (MMDevice)cboAudioDrivers.SelectedItem;
waveInSel = new WasapiLoopbackCapture(_device);
writer = new WaveFileWriter(_sfd.FileName, waveInSel.WaveFormat);
waveInSel.DataAvailable += (n, m) =>
{
writer.Write(m.Buffer, 0, m.BytesRecorded);
};
waveInSel.StartRecording();
}
}
}
}
}
Can anyone help me with how to compress the audio when saving it?
Maybe it should be added in this part:
waveInSel.DataAvailable += (n, m) =>
{
writer.Write(m.Buffer, 0, m.BytesRecorded);
};
Thanks in advance.. ;)
Try this, using the NAudio and NAudio.Lame DLLs:
using NAudio.Wave;
using NAudio.Wave.SampleProviders;
using NAudio.Lame;

private void WaveToMP3(int bitRate = 128)
{
    string waveFileName = Application.StartupPath + @"\Temporal\mix.wav";
    string mp3FileName = Application.StartupPath + @"\Grabaciones\" + DateTime.Now.ToString("dd-MM-yyyy.HH.mm.ss") + ".mp3";

    using (var reader = new AudioFileReader(waveFileName))
    using (var writer = new LameMP3FileWriter(mp3FileName, reader.WaveFormat, bitRate))
    {
        // Stream the decoded WAV data through the LAME MP3 encoder
        reader.CopyTo(writer);
    }
}
You can't make an MP3 file by saving a WAV file with a .MP3 extension (which is what you are doing here). You will need to select an encoder available on your machine, and pass the audio through that. I go into some detail about how to do this with NAudio in this article.
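If the goal is to skip the intermediate WAV file entirely, another option is to encode MP3 on the fly by replacing the WaveFileWriter with NAudio.Lame's LameMP3FileWriter inside btnStartRecord_Click. This is a sketch under the assumption that LameMP3FileWriter accepts the loopback capture's floating-point format (if it does not, record to WAV and convert as shown above); the 128 kbps bit rate is just an example:

// Requires the NAudio.Lame package (and its native LAME DLLs) in addition to NAudio.
waveInSel = new WasapiLoopbackCapture(_device);
var mp3Writer = new LameMP3FileWriter(_sfd.FileName, waveInSel.WaveFormat, 128);

waveInSel.DataAvailable += (s, a) =>
{
    // Encode each captured buffer straight to MP3
    mp3Writer.Write(a.Buffer, 0, a.BytesRecorded);
};
waveInSel.RecordingStopped += (s, a) =>
{
    mp3Writer.Dispose(); // flush the encoder and close the file
};
waveInSel.StartRecording();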
I am having an error with this speech recognition code: I keep getting "At least one grammar must be loaded before doing a recognition", and I can't get the images to display when the corresponding linked name is spoken.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using SpeechLib;
using System.IO;
using System.Speech.Recognition;
using System.Globalization;
namespace SimpleSpeechRecognition
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private SpeechRecognitionEngine recognizer;
private void Form1_Load(object sender, EventArgs e)
{
speechListBox1.Items.Add("Dog");
speechListBox1.Items.Add("Elephant");
speechListBox1.SpeechEnabled = true;
recognizer = new SpeechRecognitionEngine(new CultureInfo("en-GB"));
recognizer.SetInputToDefaultAudioDevice();
Choices choices = new Choices("Dog", "Elephant");
GrammarBuilder m_GrammarBuilder = new GrammarBuilder(choices);
Grammar m_Speech = new Grammar(m_GrammarBuilder);
recognizer.LoadGrammar(m_Speech);
recognizer.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);
recognizer.RecognizeAsync(RecognizeMode.Multiple);
}
void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
foreach (RecognizedWordUnit word in e.Result.Words)
{
switch (word.Text)
{
case "Dog":
pictureBox1.Image = Image.FromFile("C:\\" + "dog.jpg");;
break;
case "Elephant":
pictureBox1.Image = Image.FromFile("C:\\" + "elephant.jpg");
break;
}
}
}
private void speechListBox1_SelectedIndexChanged(object sender, EventArgs e)
{
//MessageBox.Show(speechListBox1.SelectedItems[0].ToString());
SayPhrase(speechListBox1.SelectedItems[0].ToString());
//pictureBox1.Image = Image.FromFile("C:\\" + "dog.jpg");
//pictureBox1.Image = Image.FromFile(((FileInfo)speechListBox1.SelectedItem).FullName);
pictureBox1.Refresh();
}
private void SayPhrase(string PhraseToSay)
{
SpeechVoiceSpeakFlags SpFlags = new SpeechVoiceSpeakFlags();
SpVoice Voice = new SpVoice();
Voice.Speak(PhraseToSay, SpFlags);
}
}
}
The error is self-explanatory:
The speech engine must have a collection of Choices to listen for; however, these need to be built into an appropriate Grammar before the engine can recognize against them.
GrammarBuilder m_GrammarBuilder = new GrammarBuilder(choices);
Grammar m_Speech = new Grammar(m_GrammarBuilder);
Then just load the grammar in:
recognizer.LoadGrammar(m_Speech);
I think that should solve your problem. It is also worth noting that you can unload and load different sets of grammar via the UnloadGrammar() method as well.
Additionally, it's also worth initializing a SpeechRecognitionEngine with an appropriate culture info. For English (UK) this is:
new SpeechRecognitionEngine(new CultureInfo("en-GB"))
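As noted above, grammars can also be swapped at runtime. A minimal sketch (colourGrammar is a hypothetical second grammar; if recognition is already running, you may want to call RequestRecognizerUpdate() first so the change is applied at a safe point):

// Unload the animal grammar built earlier and load a different one.
recognizer.UnloadGrammar(m_Speech);
Grammar colourGrammar = new Grammar(new GrammarBuilder(new Choices("Red", "Blue")));
recognizer.LoadGrammar(colourGrammar);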
I am working on a project that involves extracting features from the color and depth frames of a Kinect camera. The problem I am facing is that whenever I try to display two images, the UI hangs. When I tried debugging, depthFrame and colorFrame were coming up null. If I enable only the color stream, both colorImage and featureImage1 are displayed properly, and if I enable only the depth stream, it works as it should. But when I enable both, the UI hangs. I have no idea what is causing the problem. I have the following code for my Kinect application. What is the cause of this problem, and how can I fix it?
Config: Windows 8 Pro 64bit, 2Ghz Core2Duo, VisualStudio 2012 Ultimate, EmguCV 2.4.0.
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;
using System.Runtime.InteropServices;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using Microsoft.Kinect;
using Emgu.CV;
using Emgu.CV.WPF;
using Emgu.CV.Structure;
using Emgu.Util;
namespace features
{
public partial class MainWindow : Window
{
public MainWindow()
{
InitializeComponent();
}
private Image<Bgra, Byte> cvColorImage;
private Image<Gray, Int16> cvDepthImage;
private int colorWidth = 640;
private int colorHeight = 480;
private int depthWidth = 640;
private int depthHeight = 480;
private static readonly int Bgr32BytesPerPixel = (PixelFormats.Bgr32.BitsPerPixel + 7) / 8;
private byte[] colorPixels;
private byte[] depthPixels;
private short[] rawDepthData;
private bool first = true;
private bool firstDepth = true;
Image<Bgra, byte> image2;
private void Window_Loaded(object sender, RoutedEventArgs e)
{
kinectSensorChooser.KinectSensorChanged += new DependencyPropertyChangedEventHandler(kinectSensorChooser_KinectSensorChanged);
}
void kinectSensorChooser_KinectSensorChanged(object sender, DependencyPropertyChangedEventArgs e)
{
KinectSensor oldSensor = (KinectSensor)e.OldValue;
KinectStop(oldSensor);
KinectSensor _sensor = (KinectSensor)e.NewValue;
_sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
_sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
_sensor.DepthFrameReady += new EventHandler<DepthImageFrameReadyEventArgs>(_sensor_DepthFrameReady);
_sensor.ColorFrameReady += new EventHandler<ColorImageFrameReadyEventArgs>(_sensor_ColorFrameReady);
try
{
_sensor.Start();
}
catch
{
kinectSensorChooser.AppConflictOccurred();
}
}
void KinectStop(KinectSensor sensor)
{
if (sensor != null)
{
sensor.Stop();
}
}
private void Window_Closed(object sender, EventArgs e)
{
KinectStop(kinectSensorChooser.Kinect);
}
void _sensor_ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
{
if (colorFrame == null) return;
if (first)
{
this.colorPixels = new Byte[colorFrame.PixelDataLength];
first = false;
}
colorFrame.CopyPixelDataTo(this.colorPixels); //raw data in bgrx format
processColor();
}
}
void _sensor_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
{
if (depthFrame == null) return;
if (firstDepth)
{
this.rawDepthData = new short[depthFrame.PixelDataLength];
firstDepth = false;
}
depthFrame.CopyPixelDataTo(rawDepthData);
processDepth();
}
}
private void processColor(){...}
private void processDepth(){...}
}
}
The processDepth function is as follows. I am just making an Image from the RAW depth data.
private void processDepth() {
GCHandle pinnedArray = GCHandle.Alloc(this.rawDepthData, GCHandleType.Pinned);
IntPtr pointer = pinnedArray.AddrOfPinnedObject();
cvDepthImage = new Image<Gray, Int16>(depthWidth, depthHeight, depthWidth << 1, pointer);
pinnedArray.Free();
depthImage.Source = BitmapSourceConvert.ToBitmapSource(cvDepthImage.Not().Bitmap);
}
The processColor function is as follows. Here, just for testing, I am trying to display a cloned image instead of extracting features, to check the lag. When both streams are enabled (color and depth), the following function displays colorImage properly, but as soon as I uncomment the commented lines, the UI hangs.
private void processColor() {
GCHandle handle = GCHandle.Alloc(this.colorPixels, GCHandleType.Pinned);
Bitmap image = new Bitmap(colorWidth, colorHeight, colorWidth<<2, System.Drawing.Imaging.PixelFormat.Format32bppRgb, handle.AddrOfPinnedObject());
handle.Free();
cvColorImage = new Image<Bgra, byte>(image);
image.Dispose();
BitmapSource src = BitmapSourceConvert.ToBitmapSource(cvColorImage.Bitmap);
colorImage.Source = src;
//image2 = new Image<Bgra, byte>(cvColorImage.ToBitmap()); //uncomment and it hangs
//featureImage1.Source = BitmapSourceConvert.ToBitmapSource(image2.Bitmap); //uncomment and it hangs
}
I see code that does a lot of work in the event handlers, and I am almost certain those handlers are called on the GUI thread. I suggest you move that work to a background thread. Don't forget that updating the controls (depthImage and colorImage) must then be marshalled back to the UI thread, e.g. via Dispatcher.BeginInvoke, since this is WPF.
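A minimal sketch of that suggestion applied to the colour handler (this restructuring is my own assumption, not the poster's code; it reuses the BitmapSourceConvert helper from the question and assumes the converted BitmapSource can be frozen so it may cross threads):

void _sensor_ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    byte[] pixels;
    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    {
        if (colorFrame == null) return;
        pixels = new byte[colorFrame.PixelDataLength];
        colorFrame.CopyPixelDataTo(pixels);
    }

    // Do the expensive conversion off the UI thread
    Task.Run(() =>
    {
        GCHandle handle = GCHandle.Alloc(pixels, GCHandleType.Pinned);
        Bitmap bmp = new Bitmap(colorWidth, colorHeight, colorWidth << 2,
            System.Drawing.Imaging.PixelFormat.Format32bppRgb, handle.AddrOfPinnedObject());
        BitmapSource src = BitmapSourceConvert.ToBitmapSource(bmp);
        src.Freeze();          // a frozen bitmap can be used from the UI thread
        bmp.Dispose();
        handle.Free();

        // Only the actual control update runs on the UI thread
        Dispatcher.BeginInvoke(new Action(() => colorImage.Source = src));
    });
}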