OpenCvSharp4: save image at max resolution - C#

I am using OpenCvSharp from shimat to build an application. The code below simply opens the camera, saves an image, and closes it.
using OpenCvSharp;

VideoCapture capture;
Mat frame;

private void btn_Camera_Click(object sender, EventArgs e)
{
    capture = new VideoCapture();
    frame = new Mat();
    capture.Open(1);
    if (capture.Read(frame))
    {
        frame.SaveImage("test.jpg");
    }
    capture.Release();
}
However, the picture is saved at 640x480 resolution, whereas the camera is capable of capturing 1280x720 pictures.
I tried setting the VideoCapture properties as below:
capture.Set(VideoCaptureProperties.FrameHeight, 720);
capture.Set(VideoCaptureProperties.FrameWidth, 1280);
But the saved image is still at 480p. Is there a way to save it at 720p, like the default Windows Camera app does?
Also, I don't want to save at 480p and then resize to 720p, as that doesn't preserve the detail that needs to be captured.
I know this is possible in OpenCV with Python. I'm looking for something similar in C# with OpenCvSharp4.

When capturing via OpenCvSharp, 640x480 is the default resolution.
You must set the desired resolution before the device is opened (which is done implicitly when you grab frames) e.g.:
int frameWidth = 1280;
int frameHeight = 720;
int cameraDeviceId = 1;

var videoCapture = VideoCapture.FromCamera(cameraDeviceId);
if (!videoCapture.Set(VideoCaptureProperties.FrameWidth, frameWidth))
{
    logger.LogWarning($"Failed to set FrameWidth to {frameWidth}");
}
if (!videoCapture.Set(VideoCaptureProperties.FrameHeight, frameHeight))
{
    logger.LogWarning($"Failed to set FrameHeight to {frameHeight}");
}

using (videoCapture)
{
    videoCapture.Grab();
    var image = videoCapture.RetrieveMat();
    logger.LogInformation($"Image size [{image.Width} x {image.Height}]");
}
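To tie this back to the asker's original goal of saving the file, here is a minimal sketch combining the two snippets. It assumes camera index 1 and a camera that actually supports 720p; note that Set() can be silently ignored by some drivers/backends, so checking the frame size after reading is worthwhile:

```csharp
using OpenCvSharp;

// Sketch only: open the camera, request 720p before reading, then save.
// Device index 1 and 720p support are assumptions from the question.
using (var capture = new VideoCapture(1))
using (var frame = new Mat())
{
    capture.Set(VideoCaptureProperties.FrameWidth, 1280);
    capture.Set(VideoCaptureProperties.FrameHeight, 720);

    if (capture.Read(frame) && !frame.Empty())
    {
        // frame.Width / frame.Height now reflect what the driver actually negotiated.
        frame.SaveImage("test.jpg");
    }
}
```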

Related

Microsoft media encoder screen capture, save a specific part of the video being recorded

I'm coding a screen capture program in C# using Windows Media Encoder. While the screen is being recorded, I want to save the last 30 seconds of the current video as a separate video when a button is clicked. Is there a function in the encoder that would allow me to do that?
I've attached the code that I'm currently using. I couldn't find any documentation on the media encoder...
void startRecording()
{
    System.Drawing.Size workingArea = SystemInformation.WorkingArea.Size;
    Rectangle captureRec = new Rectangle(0, 0,
        workingArea.Width - (workingArea.Width % 4),
        workingArea.Height - (workingArea.Height % 4));
    job.CaptureRectangle = captureRec;
    job.ShowFlashingBoundary = true;
    job.ShowCountdown = true;
    job.CaptureMouseCursor = true;
    job.AddAudioDeviceSource(AudioDevices());
    job.OutputPath = @"C:\Users\Moe36\Desktop\Screen Recorder";
    job.Start();
}

void saveLast30Sec()
{
    // Save the last 30 seconds of the currently recorded video as a separate video file.
}

Clone an image to a larger one without resizing using Magick.NET

I've searched a bit around the discussions/forums/Stack Overflow/official documentation, but I couldn't find much information about how to achieve this. Most of the official documentation covers the command-line version of ImageMagick.
I'll describe what I'm trying to do:
I have an image loaded that I would like to paste into a larger one.
Ex: the image I loaded is 9920 wide and 7085 high. I would like to place it in the middle of a larger one (10594 wide, 7387 high). I already have all the border calculations ready ((larger width - original width) / 2, and the same for height).
But I don't know how to do it using MagickImage. Here's as far as I got:
private void drawInkzone(MagickImage loadedImage, List<string> inkzoneAreaInformation, string filePath)
{
    unitConversion converter = new unitConversion();
    List<double> inkZoneInfo = inkZoneListFill(inkzoneAreaInformation);
    float DPI = getImageDPI(filePath);
    double zoneAreaWidth_Pixels = converter.mmToPixel(inkZoneInfo.ElementAt(4), DPI);
    double zoneAreaHeight_Pixels = converter.mmToPixel(inkZoneInfo.ElementAt(5), DPI);
    using (MagickImage image = new MagickImage(MagickColor.FromRgb(255, 255, 255), Convert.ToInt32(zoneAreaWidth_Pixels), Convert.ToInt32(zoneAreaHeight_Pixels)))
    {
        // First: define the larger image, with a white background (it must be transparent, but for now this is okay)
        using (MagickImage original = loadedImage.Clone())
        {
            // Cloned the original image (already passed as a parameter)
        }
    }
}
In order to get this far, I used the following post:
How to process only one part of image by ImageMagick?
I'm not using GDI+ because I'll always be working with large TIFF files (big resolutions), and GDI+ tends to throw exceptions ("Parameter not valid", out of memory) when it can't handle everything (I loaded three images at a resolution like that and got an out-of-memory error).
Any help will be kindly appreciated, thanks.
Pablo.
You could either Composite the image on top of a new image with the required background, or you could Clone it and Extent it with the required background. The answer from @Pablo Costa contains an example of compositing the image, so here is an example of how you could extent the image:
private void drawInkzone(MagickImage loadedImage, List<string> inkzoneAreaInformation, string filePath)
{
    unitConversion converter = new unitConversion();
    List<double> inkZoneInfo = inkZoneListFill(inkzoneAreaInformation);
    float DPI = getImageDPI(filePath);
    double zoneAreaWidth_Pixels = converter.mmToPixel(inkZoneInfo.ElementAt(4), DPI);
    double zoneAreaHeight_Pixels = converter.mmToPixel(inkZoneInfo.ElementAt(5), DPI);
    using (MagickImage image = loadedImage.Clone())
    {
        MagickColor background = MagickColors.Black;
        int width = (int)zoneAreaWidth_Pixels;
        int height = (int)zoneAreaHeight_Pixels;
        image.Extent(width, height, Gravity.Center, background);
        image.Write(@"C:\DI_PLOT\whatever.png");
    }
}
I managed to accomplish what I needed.
Cool that I didn't have to calculate the borders myself.
Here's the code:
private void drawInkzone(MagickImage loadedImage, List<string> inkzoneAreaInformation, string filePath)
{
    unitConversion converter = new unitConversion();
    List<double> inkZoneInfo = inkZoneListFill(inkzoneAreaInformation); // Larger image information
    float DPI = getImageDPI(filePath);
    double zoneAreaWidth_Pixels = converter.mmToPixel(inkZoneInfo.ElementAt(4), DPI); // Width and height for the larger image are in mm; converted to pixels
    double zoneAreaHeight_Pixels = converter.mmToPixel(inkZoneInfo.ElementAt(5), DPI); // Formula: (mm * imageDPI) / 25.4
    using (MagickImage image = new MagickImage(MagickColor.FromRgb(0, 0, 0), Convert.ToInt32(zoneAreaWidth_Pixels), Convert.ToInt32(zoneAreaHeight_Pixels)))
    {
        // First: define the larger image, with a black background (it must be transparent, but for now this is okay)
        using (MagickImage original = loadedImage.Clone())
        {
            // Composite the cloned original image (already passed as a parameter) into the center
            image.Composite(loadedImage, Gravity.Center);
            image.Write(@"C:\DI_PLOT\whatever.png");
        }
    }
}
Hope this helps someone :)

Not able to capture full screen from secondary Monitor using Directshow

I am trying to capture the screen of a PC connected to my PC via HDMI using DirectShow. I am using a capture card as the hardware and DirectShow's sample grabber method to render the captured frames.
The issue is that I am not able to render the full screen of the secondary monitor on my computer. Both PCs have different configurations. I have tried giving different frame size values like 1366x768, but I believe it just picks up 1280x768.
Moreover, I have set the frame size to be captured to 1366x768 and my PC is set to the same display setting, yet it still does not render the full screen.
Here is the code I am using for capturing and rendering. The capture class has a FrameSize property, which has been set to 1366x768, but as soon as I do that and run the code it shows a totally blank screen; when I change the setting to 1280x768 it renders the secondary monitor, but not its full screen.
Size size = new Size(1366, 768);
capture.FrameSize = size;
where the capture class has the property given below:
public Size FrameSize
{
    get
    {
        BitmapInfoHeader bmiHeader;
        bmiHeader = (BitmapInfoHeader)getStreamConfigSetting(videoStreamConfig, "BmiHeader");
        // Size size = new Size(bmiHeader.Width, bmiHeader.Height);
        Size size = new Size(1280, 768);
        return size;
    }
    set
    {
        BitmapInfoHeader bmiHeader;
        bmiHeader = (BitmapInfoHeader)getStreamConfigSetting(videoStreamConfig, "BmiHeader");
        bmiHeader.Width = 1280;
        bmiHeader.Height = 768;
        setStreamConfigSetting(videoStreamConfig, "BmiHeader", bmiHeader);
        //#if NEWCODE
        this.videoCaps = null;
        //#endif
    }
}
Any suggestions or findings on how I can capture the full screen of the secondary monitor will be really appreciated.

Facial detection coordinates using a camera

I need a way to grab the coordinates of the face in C# for Windows Phone 8.1 in the camera view. I haven't been able to find anything on the web so I'm thinking it might not be possible. What I need is the x and y (and possibly area) of the "box" that forms around the face when it is detected in the camera view. Has anyone done this before?
Code snippet (bear in mind this is part of an app from the tutorial I linked below the code. It's not copy-pasteable, but should provide some help)
const string MODEL_FILE = "haarcascade_frontalface_alt.xml";
FaceDetectionWinPhone.Detector m_detector;

public MainPage()
{
    InitializeComponent();
    m_detector = new FaceDetectionWinPhone.Detector(System.Xml.Linq.XDocument.Load(MODEL_FILE));
}

void photoChooserTask_Completed(object sender, PhotoResult e)
{
    if (e.TaskResult == TaskResult.OK)
    {
        BitmapImage bmp = new BitmapImage();
        bmp.SetSource(e.ChosenPhoto);
        WriteableBitmap btmMap = new WriteableBitmap(bmp);

        // Find faces in the image
        List<FaceDetectionWinPhone.Rectangle> faces =
            m_detector.getFaces(btmMap, 10f, 1f, 0.05f, 1, false, false);

        // Go through each face and draw a red rectangle on top of it.
        foreach (var r in faces)
        {
            int x = Convert.ToInt32(r.X);
            int y = Convert.ToInt32(r.Y);
            int width = Convert.ToInt32(r.Width);
            int height = Convert.ToInt32(r.Height);
            btmMap.FillRectangle(x, y, x + width, y + height, System.Windows.Media.Colors.Red);
        }

        // Update the bitmap before drawing it.
        btmMap.Invalidate();
        facesPic.Source = btmMap;
    }
}
This is taken from developer.nokia.com
To do this in real time, you need to intercept the viewfinder image, perhaps using the NewCameraFrame method (EDIT: I'm not sure whether you should use this method or PhotoCamera.GetPreviewBufferArgb32 as described below; I have to leave that to your research).
So basically your task has two parts:
Get the viewfinder image
Detect faces on it (using something like the code above)
If I were you, I'd first do step 2 on an image loaded from disk, and once you can detect faces on that, I'd see how to obtain the current viewfinder image and detect faces on that. X and Y coordinates are easy enough to obtain once you've detected the face - see the code above.
(EDIT): I think you should try using the PhotoCamera.GetPreviewBufferArgb32 method to obtain the viewfinder image. Look here: MSDN documentation. Also, be sure to search through the MSDN docs and tutorials. This should be more than enough to complete step 1.
A lot of face detection algorithms use Haar classifiers, the Viola-Jones algorithm, etc. If you're familiar with those, you'll feel more confident in what you're doing, but you can do without. Also, read the materials I linked - they seem fairly good.
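A rough, untested sketch of step 1 with PhotoCamera.GetPreviewBufferArgb32 (it assumes the camera is already initialized and previewing; the idea of copying the buffer into a WriteableBitmap and reusing the detector above is my suggestion, not something the API mandates):

```csharp
// Windows Phone (Silverlight) sketch, using Microsoft.Devices.PhotoCamera.
// 'camera' is assumed to be an initialized PhotoCamera whose preview is running.
int width = (int)camera.PreviewResolution.Width;
int height = (int)camera.PreviewResolution.Height;
int[] pixels = new int[width * height];

// Copy the current viewfinder frame into an ARGB buffer...
camera.GetPreviewBufferArgb32(pixels);

// ...wrap it in a WriteableBitmap so it can be fed to the detector above.
WriteableBitmap preview = new WriteableBitmap(width, height);
pixels.CopyTo(preview.Pixels, 0);
preview.Invalidate();

// Hypothetical call, mirroring the snippet above:
// List<FaceDetectionWinPhone.Rectangle> faces =
//     m_detector.getFaces(preview, 10f, 1f, 0.05f, 1, false, false);
```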

How To Extract specific Frame using AForge Library

I'm trying to extract a specific frame from a video file. I have a reference frame that I want to find while playing a video file with the AForge library. I handle the NewFrame event, and if the new frame matches my specific frame, it should show the message "Frame Match". This specific frame appears at a random point in the video file. Here is my code:
private void Form1_Load(object sender, EventArgs e)
{
    IVideoSource videoSource = new FileVideoSource(@"e:\media\test\a.mkv");
    playerControl.VideoSource = videoSource;
    playerControl.Start();
    videoSource.NewFrame += new AForge.Video.NewFrameEventHandler(Video_NewFrame);
}

private void Video_NewFrame(object sender, AForge.Video.NewFrameEventArgs eventArgs)
{
    // Create a Bitmap from the frame
    Bitmap FrameData = new Bitmap(eventArgs.Frame);
    // Show it in the PictureBox
    pictureBox1.Image = FrameData;
    // Compare the current frame to the specific frame
    if (pictureBox1.Image == pictureBox2.Image)
    {
        MessageBox.Show("Frame Match");
    }
}
pictureBox2.Image holds the fixed frame that I want to match. This code works fine for playing video files and extracting new frames, but I am unable to compare new frames to the specific frame. Please guide me on how to achieve this.
You can take a look at:
https://github.com/dajuric/accord-net-extensions
var capture = new FileCapture(@"C:\Users\Public\Videos\Sample Videos\Wildlife.wmv");
capture.Open();
capture.Seek(<yourFrameIndex>, SeekOrigin.Begin);
var image = capture.ReadAs<Bgr, byte>();
or you can use standard IEnumerable like:
var capture = new FileCapture(@"C:\Users\Public\Videos\Sample Videos\Wildlife.wmv");
capture.Open();
var image = capture.ElementAt(<yourFrameIndex>); // will actually just cast image
Examples are included.
moved to: https://github.com/dajuric/dot-imaging
As far as I understand your problem, the issue is that you can't compare image to image this way. I think you will find that the way to do this is to build a histogram table and then compare the image histograms.
Some related things to look into are:
how to compare two images
the ImageComparer class from VS 2015 unit testing
The second one is from a unit-testing library, so I'm not sure about its performance (I haven't tried it myself yet).
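To make the histogram idea concrete, here is a minimal sketch: build a normalized grayscale histogram per frame and compare the two by correlation. The helper names and the match threshold are my own illustration, not part of AForge:

```csharp
using System;
using System.Linq;

static class FrameComparer
{
    // Build a normalized 256-bin histogram from 8-bit grayscale pixel data.
    public static double[] Histogram(byte[] pixels)
    {
        var hist = new double[256];
        foreach (byte p in pixels) hist[p]++;
        for (int i = 0; i < 256; i++) hist[i] /= pixels.Length;
        return hist;
    }

    // Pearson correlation between two histograms: 1.0 means identical distributions.
    public static double Correlation(double[] a, double[] b)
    {
        double meanA = a.Average(), meanB = b.Average();
        double num = 0, denA = 0, denB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            num += (a[i] - meanA) * (b[i] - meanB);
            denA += (a[i] - meanA) * (a[i] - meanA);
            denB += (b[i] - meanB) * (b[i] - meanB);
        }
        return num / Math.Sqrt(denA * denB);
    }
}
```

In Video_NewFrame you would extract grayscale bytes from eventArgs.Frame (e.g. via Bitmap.LockBits), histogram the reference frame once, and treat a correlation above some tuned threshold (say 0.99) as a match. Note that histograms ignore pixel positions, so very different frames can occasionally have similar histograms.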
