I have images that can be as large as 20000x20000 pixels in width and height (no more than 50 MB). Loading such an image into a picture box in C# is very slow, and slicing it into 200x200-pixel tiles is also very slow (only a few images per second). My current application works correctly, but I am looking for a much faster approach if one is available.
This is how I currently load the image into the picture box (which itself takes a few seconds):
resizedImage = new System.Drawing.Bitmap(OriginalImage, new System.Drawing.Size((int)(OriginalImage.Width * zoomFactor), (int)(OriginalImage.Height * zoomFactor)));
imgBox.Image = resizedImage;
What is the best way to slice this image faster?
This is my current slicing routine:
private void CropSolderJoints()
{
    try
    {
        PB.Maximum = Adjustedcordinates.Pins.Count;
        double OriginX = Convert.ToDouble(txtOriginX.Text);
        double OriginY = Convert.ToDouble(txtOriginY.Text);
        double Scale = Convert.ToDouble(txtScale.Text);
        double BumpSize = Convert.ToDouble(txtBumpSize.Text);
        Stopwatch watch = new Stopwatch();
        watch.Start();

        double zoomFactor = Math.Max(imgBox.Width, imgBox.Height) / (double)Math.Max(OriginalImage.Width, OriginalImage.Height);
        int cropSide = (int)Math.Round(BumpSize / zoomFactor);
        System.Drawing.Size lensPixelSize = new System.Drawing.Size(cropSide, cropSide);
        Rectangle CropJointRectangle = new Rectangle(0, 0, cropSide, cropSide);
        int pinCount = 0;

        string folderName = new DirectoryInfo(imageFolder).Name;
        string filename = dgvImages.SelectedCells[1].Value.ToString();

        foreach (PinCordinates p in Adjustedcordinates.Pins)
        {
            string Imagefilename = String.Format("{0}_{1}_{2}_{3}", folderName, Path.GetFileNameWithoutExtension(filename), p.defect.ToString(), p.PinLabel);
            string saveFilePath = String.Format("{0}\\{1}.{2}", imageFolder, Imagefilename, "png");

            float TopLeftX = (float)(OriginX + p.Cordinates.X * Scale);
            float TopLeftY = (float)(OriginY + p.Cordinates.Y * Scale);
            var pos = new System.Drawing.PointF(TopLeftX, TopLeftY);

            imageLens.Location = pkg.GetLensPosition(pos, imageLens);
            imageLens.Size = lensUseRelativeSize
                ? pkg.GetScaledLensSize(imgBox.ClientRectangle, SourceImage.Size, lensPixelSize)
                : lensPixelSize;

            RectangleF section = pkg.CanvasToImageRect(imgBox.ClientRectangle, SourceImage.Size, imageLens);
            Bitmap imgJoint = new Bitmap(cropSide, cropSide);
            using (Graphics g = Graphics.FromImage(imgJoint))
            {
                pkg.DrawImageSelection(g, CropJointRectangle, section, SourceImage);
            }
            picZoom.Image = imgJoint;
            imgJoint.Save(saveFilePath);

            pinCount++;
            PB.Invoke(new MethodInvoker(delegate { PB.Value = pinCount; }));
            picZoom.Refresh();
        }

        PB.Value = 0;
        watch.Stop();
        label1.Text = watch.Elapsed.TotalSeconds.ToString();
        MessageBox.Show("Slicing completed successfully. " + pinCount.ToString() + " bumps sliced.");
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
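One common way to speed this up (a sketch, not the poster's exact pipeline) is to crop directly from the full-resolution bitmap with `Bitmap.Clone`, which copies the pixel region without going through a scaling `Graphics` pass. The file paths and the 200x200 tile size below are illustrative assumptions:

```csharp
using System.Drawing;
using System.Drawing.Imaging;

// Sketch: slice a large bitmap into fixed-size tiles with Bitmap.Clone.
// Paths and the 200x200 tile size are illustrative.
Bitmap source = new Bitmap(@"C:\images\board.png");
int tile = 200;
for (int y = 0; y + tile <= source.Height; y += tile)
{
    for (int x = 0; x + tile <= source.Width; x += tile)
    {
        Rectangle region = new Rectangle(x, y, tile, tile);
        // Clone copies the region's pixels directly, with no resampling pass.
        using (Bitmap crop = source.Clone(region, source.PixelFormat))
        {
            crop.Save(string.Format(@"C:\images\tiles\{0}_{1}.png", x, y), ImageFormat.Png);
        }
    }
}
source.Dispose();
```

At thousands of tiles, PNG encoding and disk I/O are usually the bottleneck rather than the crop itself; saving on a background thread, or writing a cheaper intermediate format, typically helps more than further optimizing the copy.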
I'm using Emgu CV to detect an object using a HAAR cascade, then I am sending the bounding box of the HAAR cascade to a CSRT motion tracker. Then I compute the centroid of the CSRT motion tracker and have a pan/tilt telescope mount that will move the camera until the centroid of the tracker and image are the same. In the code below I am using an .avi video file but I will eventually be using this with a live video camera.
I am using ImageViewer to display both the HAAR cascade and CSRT motion tracker at the same time. The problem is the CSRT motion tracker viewer is using all my RAM. If I comment out the viewer.ShowDialog(); line then there is no memory leak, but I also can't see the tracker.
This is on Windows 7 by the way, running Visual Studio 2017, .NET 4.7.3, Emgu 3.4.3.3016.
The HAAR cascade function was also leaking memory, but I was able to fix it by calling .Dispose() on the Mat at the end of the function. The same fix didn't help with the CSRT motion tracker function.
public void Tracker()
{
    if (!this.detectedBBox.Width.Equals(0))
    {
        Emgu.CV.UI.ImageViewer viewer = new Emgu.CV.UI.ImageViewer();
        Emgu.CV.Tracking.TrackerCSRT myTracker = new Emgu.CV.Tracking.TrackerCSRT();
        using (Emgu.CV.VideoCapture capture1 = new Emgu.CV.VideoCapture("c:\\Users\\Windows7\\33a.avi"))
        using (Emgu.CV.VideoStab.CaptureFrameSource frameSource = new Emgu.CV.VideoStab.CaptureFrameSource(capture1))
        {
            Rectangle myRectangle = this.detectedBBox;
            Emgu.CV.Mat myFrame = frameSource.NextFrame().Clone();
            myTracker.Init(myFrame, myRectangle);
            Application.Idle += delegate (object c, EventArgs f)
            {
                myFrame = frameSource.NextFrame().Clone();
                myTracker.Update(myFrame, out myRectangle);
                if (myFrame != null)
                {
                    int fXcenter = myFrame.Width / 2;
                    int fYcenter = myFrame.Height / 2;
                    int sXcentroid = myRectangle.X + myRectangle.Width / 2;
                    int sYcentroid = myRectangle.Y + myRectangle.Height / 2;
                    int dx = Math.Abs(sXcentroid - fXcenter);
                    int dy = Math.Abs(sYcentroid - fYcenter);
                    string caption = "Center point: (" + sXcentroid + "," + sYcentroid + ")";
                    string caption2 = "Dist from center: (" + dx + "," + dy + ")";
                    Emgu.CV.CvInvoke.Rectangle(myFrame, myRectangle, new Emgu.CV.Structure.Bgr(Color.Red).MCvScalar, 2);
                    Emgu.CV.CvInvoke.PutText(myFrame, caption, new System.Drawing.Point(10, 20), Emgu.CV.CvEnum.FontFace.HersheyComplex, .5, new Emgu.CV.Structure.Bgr(0, 255, 0).MCvScalar);
                    Emgu.CV.CvInvoke.PutText(myFrame, caption2, new System.Drawing.Point(10, 35), Emgu.CV.CvEnum.FontFace.HersheyComplex, .5, new Emgu.CV.Structure.Bgr(0, 255, 0).MCvScalar);
                    Point start = new Point(fXcenter, fYcenter);
                    Point end = new Point(sXcentroid, sYcentroid);
                    Emgu.CV.Structure.LineSegment2D line = new Emgu.CV.Structure.LineSegment2D(start, end);
                    Emgu.CV.CvInvoke.Line(myFrame, start, end, new Emgu.CV.Structure.Bgr(0, 255, 0).MCvScalar, 2, Emgu.CV.CvEnum.LineType.EightConnected, 0);
                    string caption3 = "Line length: " + line.Length.ToString();
                    Emgu.CV.CvInvoke.PutText(myFrame, caption3, new System.Drawing.Point(10, 50), Emgu.CV.CvEnum.FontFace.HersheyComplex, .5, new Emgu.CV.Structure.Bgr(0, 255, 0).MCvScalar);
                }
                viewer.Image = myFrame;
            };
            viewer.Text = "Tracker";
            viewer.ShowDialog();
        }
    }
}
Everything in the code works except for the memory leak.
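The leak pattern in the handler above is that every Idle tick clones a fresh Mat into myFrame without releasing the previous one, so native image buffers accumulate for as long as the dialog is open. A sketch of a fix using the same Emgu types as the question (dispose the old frame before overwriting it, and detach the Idle handler when the viewer closes):

```csharp
// Sketch: release each previous frame and stop pumping when the dialog closes.
Emgu.CV.Mat myFrame = frameSource.NextFrame().Clone();
myTracker.Init(myFrame, myRectangle);
EventHandler onIdle = delegate (object c, EventArgs f)
{
    Emgu.CV.Mat previous = myFrame;
    myFrame = frameSource.NextFrame().Clone();
    previous.Dispose();                 // free the native buffer of the old frame
    myTracker.Update(myFrame, out myRectangle);
    // ... drawing code unchanged ...
    viewer.Image = myFrame;
};
Application.Idle += onIdle;
viewer.ShowDialog();
Application.Idle -= onIdle;             // stop processing frames once the viewer is closed
```

Holding the handler in a named variable is what makes the later `-=` unsubscribe possible; an anonymous delegate added inline can never be removed.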
I'm trying to create an application that receives a sequence of images via Spout, but I cannot figure out the exact logic for it. I've never worked with OpenGL, so I don't know much about the correct syntax. All my information comes from the help here, which looks well explained, but I could not work out how to put this logic together:
https://github.com/matjacques/LightjamsSpout
Below is my code, and all I get is a single static image. I'm taking the result as a Bitmap and putting it in a PictureBox. How do I put these images in an OpenGL object?
public Form1()
{
    InitializeComponent();
    try
    {
        string senderName = "";
        var receiver = new LightjamsSpoutLib.LightjamsSpoutReceiver();
        int nbSenders = receiver.NbSenders();
        for (int t = 0; t < nbSenders; ++t)
        {
            string name; int w, h;
            receiver.GetSenderInfo(t, out name, out w, out h);
            lblConnection.Text = name;
            senderName = name;

            // can instantiate the object in any thread
            LightjamsSpoutLib.GLContext m_glContext = new LightjamsSpoutLib.GLContext();
            m_glContext.Create();

            LightjamsSpoutLib.LightjamsSpoutReceiver m_receiver = new LightjamsSpoutLib.LightjamsSpoutReceiver();
            // the senderName as retrieved when enumerating senders, or "" to use the active sender
            int m_width = 500;
            int m_height = 500;
            m_receiver.Connect(senderName, out m_width, out m_height);

            Bitmap m_bitmap = new Bitmap(m_width, m_height, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
            const int bytesPerPixel = 3;
            int stride = 4 * ((m_width * bytesPerPixel + 3) / 4);
            byte[] m_buffer = new byte[stride * m_height];

            var data = m_bitmap.LockBits(new Rectangle(0, 0, m_width, m_height),
                System.Drawing.Imaging.ImageLockMode.WriteOnly,
                System.Drawing.Imaging.PixelFormat.Format24bppRgb);
            m_receiver.ReceiveImageIntPtr((long)data.Scan0, LightjamsSpoutLib.EPixelFormat.BGR);
            m_bitmap.UnlockBits(data);

            // Put the bitmap in a PictureBox (shows only this single frame)
            PictureBox Screen = new PictureBox();
            Screen.Height = m_height;
            Screen.Width = m_width;
            Screen.Image = m_bitmap;
            this.Controls.Add(Screen);
        }
    }
    catch (System.Runtime.InteropServices.COMException e)
    {
        MessageBox.Show(e.Message);
    }
}
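The constructor above receives exactly one frame, which is why the image stays static. One way to get a stream (a sketch; it assumes the receiver, bitmap, dimensions, and PictureBox from the constructor have been promoted to form-level fields, and uses only the LightjamsSpoutLib calls shown in the question) is to re-read into the bitmap on a WinForms timer:

```csharp
// Sketch: poll the Spout receiver on a timer so the PictureBox shows live frames.
// m_receiver, m_bitmap, m_width, m_height, and the PictureBox "Screen" are
// assumed to be fields initialized in the constructor.
private System.Windows.Forms.Timer m_timer;

private void StartReceiving()
{
    m_timer = new System.Windows.Forms.Timer();
    m_timer.Interval = 33;                       // roughly 30 fps
    m_timer.Tick += delegate (object s, EventArgs e)
    {
        var data = m_bitmap.LockBits(
            new Rectangle(0, 0, m_width, m_height),
            System.Drawing.Imaging.ImageLockMode.WriteOnly,
            System.Drawing.Imaging.PixelFormat.Format24bppRgb);
        m_receiver.ReceiveImageIntPtr((long)data.Scan0, LightjamsSpoutLib.EPixelFormat.BGR);
        m_bitmap.UnlockBits(data);
        Screen.Invalidate();                     // repaint the PictureBox with the new pixels
    };
    m_timer.Start();
}
```

This keeps everything on the UI thread, which is the simplest safe option since the bitmap is also the PictureBox's image; a render loop inside an OpenGL control would replace the timer, not the receive call.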
I have an application A that is made in WPF and WinForms. I have written another WinForms application for capturing the screen. The problem I'm facing is that the dialog boxes that come up in application A are not captured. The whole screen gets captured, including the area behind the dialog box, but the dialog box itself does not appear.
public void CaptureScreen(string filepath)
{
    string[] words = filepath.Split('\\');
    string newFilePath = " ";
    foreach (string word in words)
    {
        if (!word.Contains(".bmp"))
        {
            newFilePath = newFilePath + word + "\\";
        }
        else
        {
            newFilePath = newFilePath + word;
        }
    }

    this.WindowState = FormWindowState.Minimized;

    Screen[] screens = Screen.AllScreens;
    int noofscreens = screens.Length, maxwidth = 0, maxheight = 0;
    for (int i = 0; i < noofscreens; i++)
    {
        if (maxwidth < (screens[i].Bounds.X + screens[i].Bounds.Width)) maxwidth = screens[i].Bounds.X + screens[i].Bounds.Width;
        if (maxheight < (screens[i].Bounds.Y + screens[i].Bounds.Height)) maxheight = screens[i].Bounds.Y + screens[i].Bounds.Height;
    }

    Rectangle rect = new Rectangle(0, 0, maxwidth, maxheight);
    using (Bitmap bitmap = new Bitmap(rect.Width, rect.Height))
    using (Graphics g = Graphics.FromImage(bitmap))
    {
        g.CopyFromScreen(new Point(rect.Left, rect.Top), Point.Empty, rect.Size);
        bitmap.Save(filepath, ImageFormat.Bmp);
    }
}
Bitmap bmpScreenshot = new Bitmap(Screen.AllScreens[1].Bounds.Width, Screen.AllScreens[1].Bounds.Height, PixelFormat.Format32bppArgb);
Graphics.FromImage(bmpScreenshot).CopyFromScreen(
    Screen.AllScreens[1].Bounds.X,
    Screen.AllScreens[1].Bounds.Y,
    0,
    0,
    Screen.AllScreens[1].Bounds.Size,
    CopyPixelOperation.SourceCopy);
this.picExtendedModitorScreen.Image = bmpScreenshot;
this.picExtendedModitorScreen.Refresh();
Put this code in a timer Tick event. I have used extended screen 1 in the code; you can change it to any other screen.
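Dialogs drawn as layered (transparent) windows are skipped by a plain SourceCopy blit, which is the usual reason they are missing from the capture. Including them generally requires BitBlt with the CAPTUREBLT raster-operation flag; a sketch via P/Invoke (the helper class name is illustrative, the constants are the standard Win32 values):

```csharp
using System;
using System.Drawing;
using System.Runtime.InteropServices;

static class LayeredCapture          // hypothetical helper name
{
    const int SRCCOPY = 0x00CC0020;
    const int CAPTUREBLT = 0x40000000;   // include layered windows in the blit

    [DllImport("user32.dll")] static extern IntPtr GetDC(IntPtr hWnd);
    [DllImport("user32.dll")] static extern int ReleaseDC(IntPtr hWnd, IntPtr hDC);
    [DllImport("gdi32.dll")]
    static extern bool BitBlt(IntPtr hdcDest, int x, int y, int w, int h,
                              IntPtr hdcSrc, int sx, int sy, int rop);

    public static Bitmap Capture(Rectangle bounds)
    {
        Bitmap bmp = new Bitmap(bounds.Width, bounds.Height);
        using (Graphics g = Graphics.FromImage(bmp))
        {
            IntPtr screenDc = GetDC(IntPtr.Zero);    // DC for the whole screen
            IntPtr destDc = g.GetHdc();
            BitBlt(destDc, 0, 0, bounds.Width, bounds.Height,
                   screenDc, bounds.X, bounds.Y, SRCCOPY | CAPTUREBLT);
            g.ReleaseHdc(destDc);
            ReleaseDC(IntPtr.Zero, screenDc);
        }
        return bmp;
    }
}
```

The P/Invoke route is needed because Graphics.CopyFromScreen has historically rejected combined CopyPixelOperation flags, so the CAPTUREBLT bit cannot simply be OR-ed into the managed call.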
I have read the article at Show Image In dataGrid but am unsure of how to add the image. I am following the Windows 7 touch screen development kit example for images and would like to place the images in a data grid so they can scroll (the example has them in a circle on the canvas).
In the example, when adding an image, the image paths are just placed in a string array:
private string[] GetPictureLocations()
{
    string[] pictures = Directory.GetFiles(Environment.GetFolderPath(Environment.SpecialFolder.MyPictures), "*.jpg");
    // If there are no pictures in MyPictures, fall back to the bundled images
    if (pictures.Length == 0)
    {
        pictures = new string[]
        {
            @"images\Pic1.jpg",
            @"images\Pic2.jpg",
            @"images\Pic3.jpg",
            @"images\Pic4.jpg"
        };
    }
    return pictures;
}
// Load pictures onto the canvas
private void LoadPictures()
{
    string[] pictureLocations = GetPictureLocations();
    double angle = 0;
    double angleStep = 360.0 / pictureLocations.Length;
    foreach (string filePath in pictureLocations)
    {
        try
        {
            Picture p = new Picture();
            p.ImagePath = filePath;
            p.Width = 300;
            p.Angle = 180 - angle;
            double angleRad = angle * Math.PI / 180.0;
            p.X = Math.Sin(angleRad) * 300 + (_canvas.ActualWidth - 300) / 2.0;
            p.Y = Math.Cos(angleRad) * 300 + (_canvas.ActualHeight - 300) / 2.0;
            _canvas.Children.Add(p);
            angle += angleStep;
        }
        catch (Exception ex)
        {
            System.Diagnostics.Trace.WriteLine("Error: " + ex.Message);
        }
    }
}
The example from the stack overflow article is:
DataGridTemplateColumn col1 = new DataGridTemplateColumn();
col1.Header = "MyHeader";
FrameworkElementFactory factory1 = new FrameworkElementFactory(typeof(Image));
Binding b1 = new Binding("Picture");
b1.Mode = BindingMode.TwoWay;
factory1.SetValue(Image.SourceProperty, b1);
DataTemplate cellTemplate1 = new DataTemplate();
cellTemplate1.VisualTree = factory1;
col1.CellTemplate = cellTemplate1;
datagrid.Columns.Add(col1);
I am unsure how to consolidate the two so I can show the loaded images (p) in the datagrid. Or is there an easier way?
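One way to join the two pieces (a sketch; the PictureItem class is illustrative, and its property name must match the "Picture" binding path used in the template column) is to wrap each file path in an object exposing an ImageSource, then use that collection as the grid's ItemsSource:

```csharp
using System;
using System.Collections.ObjectModel;
using System.Windows.Media.Imaging;

// Sketch: expose each picture as an ImageSource property that the
// DataGridTemplateColumn's Binding("Picture") can resolve.
public class PictureItem                     // hypothetical item type
{
    public BitmapImage Picture { get; set; }
}

private void LoadPicturesIntoGrid()
{
    var items = new ObservableCollection<PictureItem>();
    foreach (string filePath in GetPictureLocations())
    {
        var bmp = new BitmapImage();
        bmp.BeginInit();
        bmp.UriSource = new Uri(filePath, UriKind.RelativeOrAbsolute);
        bmp.DecodePixelWidth = 300;          // decode small: cheaper than full-size JPEGs
        bmp.CacheOption = BitmapCacheOption.OnLoad;  // read the file now, then release it
        bmp.EndInit();
        items.Add(new PictureItem { Picture = bmp });
    }
    datagrid.ItemsSource = items;            // the template column then shows each Picture
}
```

With the items bound this way, the DataGrid supplies the scrolling for free, so the circular canvas layout from the touch kit example is no longer needed.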
So what I'm trying to do is take the Kinect skeletal tracking sample and save x photos, but only while a human goes by. I have gotten it to work, except that once it detects a human it keeps recording x photos even after the person leaves the Kinect's field of view. Does anyone know how to make it start recording when a person enters and stop once they leave?
Variables
Runtime nui;
int totalFrames = 0;
int totalFrames2 = 0;
int lastFrames = 0;
int lastFrameWithMotion = 0;
int stopFrameNumber = 100;
DateTime lastTime = DateTime.MaxValue;
Entering/Exiting the Frame
void nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    SkeletonFrame skeletonFrame = e.SkeletonFrame;
    int iSkeleton = 0;
    ++totalFrames;
    string bb1 = Convert.ToString(totalFrames);
    //Uri uri1 = new Uri("C:\\Research\\Kinect\\Proposal_Skeleton\\Skeleton_Img" + bb1 + ".png");
    Uri uri1 = new Uri("C:\\temp\\Skeleton_Img" + bb1 + ".png");

    Brush[] brushes = new Brush[6];
    brushes[0] = new SolidColorBrush(Color.FromRgb(255, 0, 0));
    brushes[1] = new SolidColorBrush(Color.FromRgb(0, 255, 0));
    brushes[2] = new SolidColorBrush(Color.FromRgb(64, 255, 255));
    brushes[3] = new SolidColorBrush(Color.FromRgb(255, 255, 64));
    brushes[4] = new SolidColorBrush(Color.FromRgb(255, 64, 255));
    brushes[5] = new SolidColorBrush(Color.FromRgb(128, 128, 255));

    skeleton.Children.Clear();
    foreach (SkeletonData data in skeletonFrame.Skeletons)
    {
        if (SkeletonTrackingState.Tracked == data.TrackingState)
        {
            // Draw bones
            Brush brush = brushes[iSkeleton % brushes.Length];
            skeleton.Children.Add(getBodySegment(data.Joints, brush, JointID.HipCenter, JointID.Spine, JointID.ShoulderCenter, JointID.Head));
            skeleton.Children.Add(getBodySegment(data.Joints, brush, JointID.ShoulderCenter, JointID.ShoulderLeft, JointID.ElbowLeft, JointID.WristLeft, JointID.HandLeft));
            skeleton.Children.Add(getBodySegment(data.Joints, brush, JointID.ShoulderCenter, JointID.ShoulderRight, JointID.ElbowRight, JointID.WristRight, JointID.HandRight));
            skeleton.Children.Add(getBodySegment(data.Joints, brush, JointID.HipCenter, JointID.HipLeft, JointID.KneeLeft, JointID.AnkleLeft, JointID.FootLeft));
            skeleton.Children.Add(getBodySegment(data.Joints, brush, JointID.HipCenter, JointID.HipRight, JointID.KneeRight, JointID.AnkleRight, JointID.FootRight));

            // Draw joints
            foreach (Joint joint in data.Joints)
            {
                Point jointPos = getDisplayPosition(joint);
                Line jointLine = new Line();
                jointLine.X1 = jointPos.X - 3;
                jointLine.X2 = jointLine.X1 + 6;
                jointLine.Y1 = jointLine.Y2 = jointPos.Y;
                jointLine.Stroke = jointColors[joint.ID];
                jointLine.StrokeThickness = 6;
                skeleton.Children.Add(jointLine);
            }

            // ExportToPng(uri1, skeleton);
            nui.VideoFrameReady += new EventHandler<ImageFrameReadyEventArgs>(nui_ColorFrameReady2);
        }
        iSkeleton++;
    } // for each skeleton
}
Actual Code
void nui_ColorFrameReady2(object sender, ImageFrameReadyEventArgs e)
{
    // 32-bit per pixel, RGBA image
    PlanarImage Image = e.ImageFrame.Image;
    int deltaFrames = totalFrames - lastFrameWithMotion;
    if (totalFrames2 <= stopFrameNumber && deltaFrames > 300)
    {
        ++totalFrames2;
        string bb1 = Convert.ToString(totalFrames2);
        string file_name_4 = "C:\\temp\\Video2_Img" + bb1 + ".jpg";
        video.Source = BitmapSource.Create(
            Image.Width, Image.Height, 96, 96, PixelFormats.Bgr32, null, Image.Bits, Image.Width * Image.BytesPerPixel);
        BitmapSource image4 = BitmapSource.Create(
            Image.Width, Image.Height, 96, 96, PixelFormats.Bgr32, null, Image.Bits, Image.Width * Image.BytesPerPixel);
        image4.Save(file_name_4, Coding4Fun.Kinect.Wpf.ImageFormat.Jpeg);
        if (totalFrames2 == stopFrameNumber)
        {
            lastFrameWithMotion = totalFrames;
            stopFrameNumber += 100;
        }
    }
}
In most setups I have used, the skeletal tracking event handler has a check for if (skeleton != null); all you need to do is reset your trigger once a null skeleton is received.
The SDK will send a skeleton frame every time the event is fired, so:
if (skeleton != null)
{
    // do image taking here
}
else
{
    // reset image counter
}
I would try something like this: create a bool class variable named SkeletonInFrame and initialize it to false. Every time SkeletonFrameReady fires, set this bool to true. When you process a color frame, only process it if this variable is true, and set it back to false afterwards. This should help you stop processing frames when you are no longer receiving skeleton events.
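A sketch of that flag-based gating, using the older Microsoft.Kinect-style event types from the question (the SaveFrame helper is hypothetical):

```csharp
// Sketch: gate color-frame capture on whether a skeleton was seen recently.
bool SkeletonInFrame = false;   // set by the skeleton event, cleared after each capture

void nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    foreach (SkeletonData data in e.SkeletonFrame.Skeletons)
    {
        if (data.TrackingState == SkeletonTrackingState.Tracked)
        {
            SkeletonInFrame = true;    // someone is visible right now
            break;
        }
    }
}

void nui_ColorFrameReady2(object sender, ImageFrameReadyEventArgs e)
{
    if (!SkeletonInFrame)
        return;                        // nobody tracked: stop saving photos
    SaveFrame(e);                      // hypothetical helper that writes the JPEG
    SkeletonInFrame = false;           // require a fresh skeleton event before the next save
}
```

Because skeleton and color events alternate rapidly while a person is tracked, clearing the flag after each capture makes saving stop within one frame of the person leaving.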