I'm using OpenCVSharp4 and I need to capture a frame from the webcam.
But sometimes the camera may be busy in another application such as Skype, Teams, etc.
I tried to use the IsOpened() method to detect that, but it returns true when the webcam is busy and the capture comes out black. (I used the default Windows 10 Camera application to make my webcam busy.)
using (var videoCapture = new VideoCapture(0))
using (var frame = new Mat())
{
    if (videoCapture.Open(0) && videoCapture.IsOpened())
    {
        if (videoCapture.Read(frame) && !frame.Empty())
        {
            var image = frame.ToBitmap();
            return image;
        }
    }
}
I've also tried passing -1 instead of 0 to the Open method, but that doesn't work at all.
Does anybody have an idea how to deal with this issue and reliably detect that the camera is busy?
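One workaround I'm considering: since a busy camera still reports IsOpened() == true but delivers black frames, read a frame and test its mean brightness. A minimal sketch of that idea (the IsCameraBusy helper and the threshold are my own, untested assumptions):
using OpenCvSharp;

public static class WebcamProbe
{
    // Hypothetical helper: treat the camera as busy when it yields no frame
    // or an (almost) entirely black frame. The threshold is a guess.
    public static bool IsCameraBusy(int cameraIndex = 0, double blackThreshold = 3.0)
    {
        using (var capture = new VideoCapture(cameraIndex))
        using (var frame = new Mat())
        {
            if (!capture.IsOpened())
                return true; // could not open the device at all

            if (!capture.Read(frame) || frame.Empty())
                return true; // device opened but delivered no frame

            // Mean brightness over the first three channels; near zero means a black frame.
            Scalar mean = Cv2.Mean(frame);
            double brightness = (mean.Val0 + mean.Val1 + mean.Val2) / 3.0;
            return brightness < blackThreshold;
        }
    }
}
Reading and discarding a couple of warm-up frames before testing might be necessary, since some drivers deliver a dark first frame even when the camera is free.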
I am trying to record video from a camera while using MediaFrameReader. For my application I need the MediaFrameReader instance so I can process each frame individually. I additionally tried to apply LowLagMediaRecording to simply save the camera stream to a file, but it seems you cannot use both methods simultaneously. That means I am stuck with MediaFrameReader alone, where I can access each frame in the FrameArrived handler.
I have tried several approaches and found two working solutions with the MediaComposition class creating MediaClip objects. You can either save each frame as a JPEG file and finally render all the images to a video file; this process is very slow, since you constantly need to access the hard drive. Or you can create a MediaStreamSample object from the Direct3DSurface of a frame. This way you can keep the data in RAM (first in GPU RAM, then in system RAM once that is full) instead of on the hard drive, which in theory is much faster. The problem is that calling the RenderToFileAsync method of the MediaComposition class requires all MediaClips to already be in the internal list. This exhausts the RAM after a fairly short recording time: after collecting roughly 5 minutes of data, Windows had already created a 70 GB swap file, which defeats the reason for choosing this path in the first place.
I also tried the third-party library OpenCvSharp to save the processed frames as a video. I have done that before in Python without any problems. In UWP, though, I am not able to interact with the file system without a StorageFile object, so all I get from OpenCvSharp is an UnauthorizedAccessException when I try to save the rendered video to the file system.
So, to summarize: what I need is a way to render my camera-frame data to a video while the data is still coming in, so that I can dispose of every frame after it has been processed, the way a Python OpenCV implementation would. I am very thankful for every hint. Here are parts of my code for context:
private void ColorFrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
{
    MediaFrameReference colorFrame = sender.TryAcquireLatestFrame();
    if (colorFrame != null)
    {
        if (currentMode == StreamingMode)
        {
            colorRenderer.Draw(colorFrame, true);
        }
        if (currentMode == RecordingMode)
        {
            MediaStreamSample sample = MediaStreamSample.CreateFromDirect3D11Surface(
                colorFrame.VideoMediaFrame.Direct3DSurface, new TimeSpan(0, 0, 0, 0, 33));
            ColorComposition.Clips.Add(
                MediaClip.CreateFromSurface(sample.Direct3D11Surface, new TimeSpan(0, 0, 0, 0, 33)));
        }
    }
}
private async Task CreateVideo(MediaComposition composition, string outputFileName)
{
    try
    {
        await mediaFrameReaderColor.StopAsync();
        mediaFrameReaderColor.FrameArrived -= ColorFrameArrived;
        mediaFrameReaderColor.Dispose();

        StorageFolder folder = await documentsFolder.GetFolderAsync(directory);
        StorageFile vid = await folder.CreateFileAsync(outputFileName + ".mp4", CreationCollisionOption.GenerateUniqueName);

        Stopwatch stopwatch = new Stopwatch();
        stopwatch.Start();
        await composition.RenderToFileAsync(vid, MediaTrimmingPreference.Precise);
        stopwatch.Stop();
        Debug.WriteLine("Video rendered: " + stopwatch.ElapsedMilliseconds);

        composition.Clips.Clear();
        composition = null;
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.Message);
    }
}
Please create a custom video effect and add it to the MediaCapture object to implement your requirement. Using a custom video effect allows you to process the frames in the context of the MediaCapture object without using the MediaFrameReader; this way, all of the MediaFrameReader limitations you mentioned go away.
Besides, there are also a number of built-in effects that allow you to analyze camera frames. For more information, please check the following articles:
#MediaCapture.AddVideoEffectAsync
https://learn.microsoft.com/en-us/uwp/api/windows.media.capture.mediacapture.addvideoeffectasync
#Custom video effects
https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/custom-video-effects
#Effects for analyzing camera frames
https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/scene-analysis-for-media-capture
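To sketch the shape of such an effect (this is only an outline, not a complete sample: the class must live in a separate Windows Runtime Component referenced by the app, as the articles above describe, and the class and namespace names here are placeholders):
using System;
using System.Collections.Generic;
using Windows.Foundation.Collections;
using Windows.Graphics.DirectX.Direct3D11;
using Windows.Media.Capture;
using Windows.Media.Effects;
using Windows.Media.MediaProperties;

namespace VideoEffectComponent // hypothetical WinRT component
{
    public sealed class FrameProcessingEffect : IBasicVideoEffect
    {
        // Called once per frame; read context.InputFrame and write context.OutputFrame.
        public void ProcessFrame(ProcessVideoFrameContext context)
        {
            // ... per-frame processing goes here ...
        }

        public void SetEncodingProperties(VideoEncodingProperties encodingProperties, IDirect3DDevice device) { }
        public void SetProperties(IPropertySet configuration) { }
        public void DiscardQueuedFrames() { }
        public void Close(MediaEffectClosedReason reason) { }

        public bool IsReadOnly => false;
        public bool TimeIndependent => false;
        public MediaMemoryTypes SupportedMemoryTypes => MediaMemoryTypes.GpuAndCpu;

        public IReadOnlyList<VideoEncodingProperties> SupportedEncodingProperties
        {
            get
            {
                var props = new VideoEncodingProperties();
                props.Subtype = "ARGB32"; // an empty list would mean "any subtype"
                return new List<VideoEncodingProperties> { props };
            }
        }
    }
}
The effect is then attached to an initialized MediaCapture instance (mediaCapture below), for example on the record stream:
await mediaCapture.AddVideoEffectAsync(
    new VideoEffectDefinition(typeof(FrameProcessingEffect).FullName),
    MediaStreamType.VideoRecord);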
Thanks.
I am creating a simple application which scrapes some XML relating to the status of some machine tools that output live sensor data, and uses the X/Y coordinates of the device to make a little rat dance around the screen.
The rat is placed in the correct location the first time the machine is polled, but it doesn't move each time the draw function is called by subsequent timer-driven events.
I assumed this was just due to the machine being on standby, the only coordinate changes being little jitters of the servo, but just to check I created a random number generator and had the system use the randomly generated coordinates instead of the scaled X/Y data coming in.
I then found that the rat still doesn't move!
This is the function where I am drawing the rat(s). (There are two systems, but we are only worrying about 'Bakugo' right now.) We are looking particularly at the branch where DekuWake == false and BakuWake == true.
I have had the values printed to the console (driven by a timer), and the System.Drawing.Point values are shown to be valid (in range and changing).
The timer is initiated by a button in Form1.
The timer event calls a polling function which scrapes the X/Y variables from the site (see my other question for that function: What is wrong with my use of XPath in C#?).
At that point it ascertains whether the status is 'AVAILABLE' (which it is) and sets the rat's 'awake' bool to true (this determines which images are drawn; if a machine is offline the rat stays in its box).
It then scales the coordinates to the resolution of the program window.
(Normally, that is; right now it is stepping through two arrays of integers generated when the polling first begins.) The update-coordinate function sets the X/Y coords of ImageRat.Bakugo and calls DrawRats().
Why does changing the Location of my images not actually relocate the PictureBoxes?
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;
using System.Reflection;
using System.Drawing;
using System.Windows.Forms;

namespace XMLRats3
{
    public class Drawing
    {
        private PictureBox HouseImage;
        private PictureBox DekuImage;
        private PictureBox BakuImage;

        public Drawing(PictureBox house, PictureBox deku, PictureBox baku)
        {
            HouseImage = house;
            DekuImage = deku;
            BakuImage = baku;
        }

        public void ClearRats()
        {
            HouseImage.Hide();
            DekuImage.Hide();
            BakuImage.Hide();
        }

        public void DrawRats(bool DekuWake, bool BakuWake) // Call this function using the active status of the 2 machines
        {
            ClearRats();
            /*// This shows that the generated coordinates are reaching this point successfully
            Console.WriteLine("BAKU X: " + ImageRat.Bakugo.PosX);
            Console.WriteLine("BAKU Y: " + ImageRat.Bakugo.PosY);
            */
            System.Drawing.Point DekuCoord = new System.Drawing.Point(ImageRat.Deku.PosX, ImageRat.Deku.PosY);     // Create a 'System Point' for Deku
            System.Drawing.Point BakuCoord = new System.Drawing.Point(ImageRat.Bakugo.PosX, ImageRat.Bakugo.PosY); // Create a 'System Point' for Bakugo

            if (DekuWake == false)
            {
                DekuImage.Hide();
                if (BakuWake == false)
                {
                    BakuImage.Hide();
                    HouseImage.Image = DesktopApp1.Properties.Resources.bothsleep; // Set HouseImage to both asleep
                }
                else
                {
                    BakuImage.Location = BakuCoord;
                    Console.WriteLine("Point:" + BakuCoord);
                    //Console.WriteLine("Reaching Relocation condition"); // Ensure we are getting here, as the animation is not working
                    BakuImage.Show();
                    HouseImage.Image = DesktopApp1.Properties.Resources.dekuSleep; // Set HouseImage to Deku asleep
                }
            }
            else // DekuWake == true
            {
                DekuImage.Show();
                if (BakuWake == true)
                {
                    HouseImage.Image = DesktopApp1.Properties.Resources.nosleep; // Set HouseImage to nobody asleep
                    DekuImage.Location = DekuCoord;
                    DekuImage.Show();
                    BakuImage.Location = BakuCoord;
                    BakuImage.Show();
                }
                else
                {
                    BakuImage.Hide();
                    HouseImage.Image = DesktopApp1.Properties.Resources.bakusleep; // Set HouseImage to Baku asleep
                    DekuImage.Location = DekuCoord;
                    DekuImage.Show();
                }
            }
            HouseImage.Show(); // Out here as it should always happen
        }
    }
}
OK, so I don't have an exact answer on how to resolve this, but I can tell you why it occurs and point you in the direction of some knowledge that will help you.
At the time I wrote this code I was (and still am) very new to C# and to the concept of multithreaded applications in general.
It's a matter of poor software architecture.
The problem here is that the UI can only be updated from a single thread in C#, and since the timer runs on another thread, anything called from the timer is not allowed to update the UI.
I think it's possible to dip into the UI thread using delegates, though I haven't read enough about this yet to give you exact information on how it's done.
Hopefully this will help the people who have starred my question!
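As a pointer in that direction (a sketch only, based on the standard WinForms pattern rather than my original code): Control.Invoke marshals a call onto the UI thread, and InvokeRequired tells you whether you need it. Assuming the timer callback has access to the PictureBox:
// Inside the timer's callback, which may run on a worker thread:
private void OnPollTimerElapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    if (BakuImage.InvokeRequired)
    {
        // Not on the UI thread: marshal the update onto the thread that owns the control.
        BakuImage.Invoke((MethodInvoker)delegate
        {
            BakuImage.Location = new System.Drawing.Point(ImageRat.Bakugo.PosX, ImageRat.Bakugo.PosY);
        });
    }
    else
    {
        BakuImage.Location = new System.Drawing.Point(ImageRat.Bakugo.PosX, ImageRat.Bakugo.PosY);
    }
}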
See also: How do I update the GUI from another thread?
Assuming that you are using a timer from a different class, which occupies a different thread than the main one, then for the timer to use a UI object you should assign that object as the timer's SynchronizingObject. So, assuming your timer is called timer1, in the method where you set the timer's properties you should write the following line of code:
timer1.SynchronizingObject = yourPictureBox;
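Put together, the setup might look like this (a sketch, assuming a System.Timers.Timer and a PictureBox called yourPictureBox; this applies to System.Timers.Timer only, since a System.Windows.Forms.Timer already fires on the UI thread):
var timer1 = new System.Timers.Timer(500); // poll every 500 ms
// Raise Elapsed on the thread that owns the PictureBox, so the handler
// can safely move it.
timer1.SynchronizingObject = yourPictureBox;
timer1.Elapsed += (s, e) =>
{
    yourPictureBox.Location = new System.Drawing.Point(ImageRat.Bakugo.PosX, ImageRat.Bakugo.PosY);
};
timer1.Start();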
I hope that solves your problem
I have a problem with my Windows Phone 8.1 app. It works fine until I turn on the lock screen using the power button.
It keeps running like it's supposed to, but it no longer plays the .wav files it's supposed to.
I have set breakpoints at the methods responsible for playing the sounds, and they seem to run as they should. Everything else works: all the timer threads and so forth.
I'm using MediaElements to play the sounds, and I have set the properties to
snd.AudioCategory = Windows.UI.Xaml.Media.AudioCategory.BackgroundCapableMedia;
I have also enabled the background audio task in Package.appxmanifest.
I have tried a lot of things, including adding this code:
Microsoft.Phone.Shell.PhoneApplicationService.Current.ApplicationIdleDetectionMode =
    Microsoft.Phone.Shell.IdleDetectionMode.Enabled;
This doesn't work, however, since the namespace isn't recognized. Apparently it's used only in 8.0, not in 8.1.
This is the method used to play the audio:
public async void CountDownFromThree()
{
    MediaElement snd = null;
    snd = SourceGrid.Children.FirstOrDefault(m => (m as MediaElement) != null) as MediaElement;
    if (snd == null)
    {
        snd = new MediaElement();
        SourceGrid.Children.Add(snd);
    }
    StorageFolder folder = await Package.Current.InstalledLocation.GetFolderAsync(@"Assets\SoundsFolder");
    StorageFile file = await folder.GetFileAsync("start-beeps.wav");
    var stream = await file.OpenAsync(FileAccessMode.Read);
    snd.SetSource(stream, file.ContentType);
    snd.MediaEnded += snd_MediaEnded;
    snd.Play();
}
OK, so it seems that in Windows Phone 8.1, BackgroundMediaPlayer is the way to go. I've completely removed all MediaElements, which (IMHO, having to be part of the visual tree) were pretty weird anyway.
I found a few resources that helped me; links are below.
http://www.jayway.com/2014/04/24/windows-phone-8-1-for-developers-the-background-media-player/
This code sample helped me a lot; it could be boiled down to very few lines of code for my intended purpose:
https://code.msdn.microsoft.com/windowsapps/BackgroundAudio-63bbc319
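For my purpose it boiled down to roughly this (a sketch from memory, reusing the same .wav as above; error handling omitted):
using System;
using Windows.Media.Playback;

// BackgroundMediaPlayer lives outside the visual tree and keeps playing
// under the lock screen, provided the background audio task is declared
// in the manifest.
private void PlayStartBeeps()
{
    MediaPlayer player = BackgroundMediaPlayer.Current;
    player.AutoPlay = false;
    player.SetUriSource(new Uri("ms-appx:///Assets/SoundsFolder/start-beeps.wav"));
    player.Play();
}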
I am developing an application to show video using a webcam and an IP camera.
For the IP camera, it shows the video stream for some time, but after that it stops streaming and the application hangs.
I am using the Emgu.CV library to grab frames and show them in a picture control.
I have tried the code below to display the video, using the function QueryFrame().
For connecting the IP camera:
Capture capture = new Capture(URL);
For grabbing frames:
Image<Bgr, Byte> ImageFrame = capture.QueryFrame();
After some time QueryFrame() returns null and the application hangs.
Can anyone tell me why this is happening and how I can handle it?
Thank you in advance.
Sorry about the delay, but I have provided an example that works with several public IP cameras. It will need the EMGU reference replacing with your current version, and the target build directory should be set to "EMGU Version\bin"; alternatively, extract it to the examples folder.
http://sourceforge.net/projects/emguexample/files/Capture/CameraCapture%20Public%20IP.zip/download
Rather than using the older QueryFrame() method, it uses the RetrieveBgrFrame() method. It has worked reasonably well and I have had no null exceptions. However, if you do, replace the ProcessFrame() method with something like this:
private void ProcessFrame(object sender, EventArgs arg)
{
    //If you want to access the image data then use the following method call
    //Image<Bgr, Byte> frame = new Image<Bgr, byte>(_capture.RetrieveBgrFrame().ToBitmap());
    if (RetrieveBgrFrame.Checked)
    {
        Image<Bgr, Byte> frame = _capture.RetrieveBgrFrame();
        //because we are using an autosize picturebox we need to do a thread safe update
        if (frame != null) DisplayImage(frame.ToBitmap());
    }
    else if (RetrieveGrayFrame.Checked)
    {
        Image<Gray, Byte> frame = _capture.RetrieveGrayFrame();
        //because we are using an autosize picturebox we need to do a thread safe update
        if (frame != null) DisplayImage(frame.ToBitmap());
    }
}
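In case it's not obvious from the sample: ProcessFrame() is hooked up as the handler for the capture's ImageGrabbed event, roughly like this (a sketch; _capture is the Capture created from your camera URL):
// Subscribe to frame notifications and start grabbing; RetrieveBgrFrame()
// then pulls the most recent frame inside ProcessFrame().
_capture = new Capture(URL);
_capture.ImageGrabbed += ProcessFrame;
_capture.Start();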
Cheers
Chris
I'm writing an application that can take several different external inputs (keyboard presses, motion gestures, speech) and produce similar outputs (for instance, pressing "T" on the keyboard will do the same thing as saying the word "Travel" out loud). Because of that, I don't want any of the input managers to know about each other. Specifically, I don't want the Kinect manager (as much as possible) to know about the Speech manager and vice versa, even though I'm using the Kinect's built-in microphone (the Speech manager should work with ANY microphone). I'm using System.Speech in the Speech manager as opposed to Microsoft.Speech.
I'm having a problem where, as soon as the Kinect motion recognition module is enabled, the speech module stops receiving input. I've tried a whole bunch of things, like swapping the initialization order of the skeleton stream and audio stream, capturing the audio stream in different ways, etc. I finally narrowed down the problem: something about how I'm initializing my modules does not play nicely with how my application deals with events.
The application works great until motion capture starts. If I completely exclude the Kinect module, this is how my main method looks:
// Main.cs
public static void Main()
{
    // Create input managers
    KeyboardMouseManager keymanager = new KeyboardMouseManager();
    SpeechManager speechmanager = new SpeechManager();

    // Start listening for keyboard input
    keymanager.start();
    // Start listening for speech input
    speechmanager.start();

    try
    {
        Application.Run();
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.StackTrace);
    }
}
I'm using Application.Run() because my GUI is handled by an outside program. This C# application's only job is to receive input events and run external scripts based on that input.
Both the keyboard and speech modules receive events sporadically. The Kinect, on the other hand, generates events constantly. If my gestures happened just as infrequently, a polling loop might be the answer, with a wait time between each poll. However, I'm using the Kinect to control mouse movement... I can't afford to wait between skeleton event captures, because then the mouse would be very laggy; my skeleton capture loop needs to run as constantly as possible. This presented a big problem, because now I can't have my Kinect manager on the same thread (or message pump? I'm a little hazy on the difference, hence why I think the problem lies here): from the way I understand it, being on the same thread would not allow keyboard or speech events to consistently get through. Instead, I kind of hacked together a solution where I made my Kinect manager inherit from System.Windows.Forms.Form, so that it would work with Application.Run().
Now, my main method looks like this:
// Main.cs
public static void Main()
{
    // Create input managers
    KeyboardMouseManager keymanager = new KeyboardMouseManager();
    KinectManager kinectManager = new KinectManager();
    SpeechManager speechmanager = new SpeechManager();

    // Start listening for keyboard input
    keymanager.start();
    // Attempt to launch the kinect sensor
    bool kinectLoaded = kinectManager.start();

    // Use the default microphone (if applicable) if kinect isn't hooked up
    // Use the kinect microphone array if the kinect is working
    if (kinectLoaded)
    {
        speechmanager.start(kinectManager);
    }
    else
    {
        speechmanager.start();
    }

    try
    {
        // THIS IS THE PLACE I THINK I'M DOING SOMETHING WRONG
        Application.Run(kinectManager);
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.StackTrace);
    }
}
For some reason, the Kinect microphone loses its "default-ness" as soon as the Kinect sensor is started (if this observation is incorrect, or there is a workaround, PLEASE let me know). Because of that, I had to make a special start() method in the speech manager, which looks like this:
// SpeechManager.cs
/** For use with the Kinect Microphone **/
public void start(KinectManager kinect)
{
    // Get the speech recognizer information
    RecognizerInfo recogInfo = SpeechRecognitionEngine.InstalledRecognizers().FirstOrDefault();
    if (null == recogInfo)
    {
        Console.WriteLine("Error: No recognizer information found on Kinect");
        return;
    }
    SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine(recogInfo.Id);

    // Loads all of the grammars into the recognizer engine
    loadSpeechBindings(recognizer);

    // Set speech event handler
    recognizer.SpeechRecognized += speechRecognized;

    using (var s = kinect.getAudioSource().Start())
    {
        // Set the input to the Kinect audio stream
        recognizer.SetInputToAudioStream(s, new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null));
        // Recognize asynchronous speech events
        recognizer.RecognizeAsync(RecognizeMode.Multiple);
    }
}
For reference, the start() method in the Kinect manager looks like this:
// KinectManager.cs
public bool start()
{
    // Code from Microsoft Sample
    kinect = (from sensorToCheck in KinectSensor.KinectSensors
              where sensorToCheck.Status == KinectStatus.Connected
              select sensorToCheck).FirstOrDefault();

    // Fail elegantly if no kinect is detected
    if (kinect == null)
    {
        connected = false;
        Console.WriteLine("Couldn't find a Kinect");
        return false;
    }

    // Start listening
    kinect.Start();

    // Enable listening for all skeletons
    kinect.SkeletonStream.Enable();

    // Obtain the KinectAudioSource to do audio capture
    source = kinect.AudioSource;
    source.EchoCancellationMode = EchoCancellationMode.None; // No AEC for this sample
    source.AutomaticGainControlEnabled = false; // Important to turn this off for speech recognition

    kinect.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(allFramesReady);
    connected = true;
    return true;
}
So when I disable motion capture (by making my main() look like the first code segment), speech recognition works fine. When I enable motion capture, motion works great but no speech gets recognized. In both cases, keyboard events always work. There are no errors, and through tracing I found that all the data in the speech manager is initialized correctly... it seems like the speech recognition events just disappear. How can I reorganize this code so that the input modules work independently? Do I use threading, or just Application.Run() in a different way?
The Microsoft Kinect SDK has several known issues, one of them being that audio is not processed if you begin tracking the skeleton after starting the audio processor. From the known issues:
Audio is not processed if skeleton stream is enabled after starting audio capture
Due to a bug, enabling or disabling the SkeletonStream will stop the AudioSource stream returned by the Kinect sensor. The following sequence of instructions will stop the audio stream:
kinectSensor.Start();
kinectSensor.AudioSource.Start(); // --> this will create an audio stream
kinectSensor.SkeletonStream.Enable(); // --> this will stop the audio stream as an undesired side effect
The workaround is to invert the order of the calls or to restart the AudioSource after changing SkeletonStream status.
Workaround #1 (start audio after skeleton):
kinectSensor.Start();
kinectSensor.SkeletonStream.Enable();
kinectSensor.AudioSource.Start();
Workaround #2 (restart audio after skeleton):
kinectSensor.Start();
kinectSensor.AudioSource.Start(); // --> this will create an audio stream
kinectSensor.SkeletonStream.Enable(); // --> this will stop the audio stream as an undesired side effect
kinectSensor.AudioSource.Start(); // --> this will create another audio stream
Resetting the SkeletonStream engine status is an expensive call. It should be made at application startup only, unless the app has specific needs that require turning Skeleton on and off.
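Applied to your KinectManager.start(), workaround #1 just means reordering the calls so the audio source is started after the skeleton stream is enabled (a sketch; audioStream is a hypothetical field that your speech manager would then consume, instead of calling Start() on the audio source itself):
// KinectManager.cs, reordered per workaround #1:
kinect.Start();
kinect.SkeletonStream.Enable();   // enable skeleton tracking first

source = kinect.AudioSource;
source.EchoCancellationMode = EchoCancellationMode.None;
source.AutomaticGainControlEnabled = false;

audioStream = source.Start();     // start audio AFTER the skeleton stream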
I also hope that when you say you're using "version 1" of the SDK, you mean "version 1.6". If you are using anything but 1.5 or 1.6, you are only hurting yourself due to the many changes that were made in 1.5.