I've been playing around with NGif Animator for resizing animated GIFs, and it does resize, but parts of many animated GIFs I've tried get erased. I looked through the comments on that page and didn't see anyone else mention it.
To eliminate resizing as the cause, I simply loop through the frames and save each one. Each frame is a System.Drawing.Image. Transparency is set to none (Color.Empty).
This is my test method currently:
GifDecoder gifDecoder = new GifDecoder();
MemoryStream memoryStream = new MemoryStream();
new BinaryWriter(memoryStream).Write(imageToResize); // imageToResize is a byte[]
memoryStream.Position = 0;
gifDecoder.Read(memoryStream);
memoryStream.Dispose();
string filename = Guid.NewGuid().ToString().Replace("-", String.Empty) + ".gif";
string output = path + @"\" + filename;
AnimatedGifEncoder animatedGifEncoder = new AnimatedGifEncoder();
animatedGifEncoder.Start(output);
animatedGifEncoder.SetRepeat(gifDecoder.GetLoopCount());
animatedGifEncoder.SetQuality(10); // They say 20 is the max quality you'll get; I've tried higher. It makes it a little better, but gives black areas instead of gray. 10 is their default.
animatedGifEncoder.SetTransparent(Color.Empty); // This is default either way
int frameCount = gifDecoder.GetFrameCount();
Image frame;
for (int num = 0; num < frameCount; num++)
{
frame = gifDecoder.GetFrame(num);
animatedGifEncoder.SetDelay(gifDecoder.GetDelay(num));
string fname = @"C:\Development\images\frame_" + num.ToString() + ".gif";
if (File.Exists(fname)) { File.Delete(fname); }
frame.Save(fname);
animatedGifEncoder.AddFrame(frame);
}
animatedGifEncoder.Finish();
Here's an example of what's happening: the background is gone and it's just gray, instead of matching the original image.
Anyone have experience with NGif and know what would cause this? The first frame is always fine; it's the frames after that that have a problem, so I'm guessing something isn't being reset (or re-read) from frame to frame. I've been adding more things to their reset-frame method, but so far it hasn't helped. That now looks like:
protected void ResetFrame()
{
lastDispose = dispose;
lastRect = new Rectangle(ix, iy, iw, ih);
lastImage = image;
lastBgColor = bgColor;
delay = 0;
transparency = false; // I don't want transparency
lct = null;
act = null;
transIndex = -1;
}
There is actually a bug in their code: a byte array isn't being reset between frames. Check the comments on their page for the fix.
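For context, here is a minimal sketch of the kind of fix those comments describe, assuming the decoder reuses a per-frame buffer between frames; the pixels field name is an assumption, not taken from the NGif source:
// Hypothetical: somewhere in the decoder's per-frame reset path, drop the reused
// buffer so pixel data from the previous frame cannot leak into the next one.
protected void ResetFrame()
{
    // ... existing resets as shown above ...
    pixels = null; // forces a fresh allocation when the next frame is decoded
}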
Related
I would like to display 13 PictureBoxes; however, only the last one ends up visible.
So I was wondering if I did something wrong.
The following code gets the images from the resources folder.
var testP = new PictureBox();
for (int i = 0; i < 13; i++)
{
testP.Width = 65;
testP.Height = 80;
testP.BorderStyle = BorderStyle.None;
testP.SizeMode = PictureBoxSizeMode.StretchImage;
test[i] = getImage(testP, testPTemp[i]);
}
The following code tries to display the 13 PictureBoxes with a shifting location.
These two code segments together should perform the action.
test = new PictureBox[13];
for (var i = 0; i < 13; i++)
{
test[i].Image = (Image)Properties.Resources.ResourceManager.GetObject("_" + testTemp[i]);
test[i].Left = 330;
test[i].Top = 500;
test[i].Location = new Point(test[i].Location.X + 0 * displayShift, test[i].Location.Y);
this.Controls.Add(test[i]);
}
Here is getImage():
private PictureBox getImage(PictureBox pB, string i) // Get image based on the for loop number (i)
{
pB.Image = (Image)Properties.Resources.ResourceManager.GetObject("_" + i); // Get the embedded image
pB.SizeMode = PictureBoxSizeMode.StretchImage;
return pB;
}
I'm pretty sure all the PictureBox controls are there, but they all have the same location, so they are lying on top of each other. That's why only the last one is visible to you.
I think you should replace the 0 with the i variable.
test[i].Location = new Point(test[i].Location.X + i * displayShift, test[i].Location.Y);
this.Controls.Add(test[i]);
It's hard to tell the exact problem based on the code you've provided. One possible issue is that you create only a single PictureBox instance before the for loop and then fill the array with references to that one instance. Another possibility is that when you calculate the X position of the controls, you multiply by 0, which always results in 0 (meaning all the controls sit at X = 330).
Below is code that will achieve basically what you're trying to do, but without all your code I can't give you a more specific example.
In Your Class
const int PICTURE_WIDTH = 65;
const int PICTURE_HEIGHT = 85;
Inside Your Function
//Loop through each image
for (int i = 0; i < testTemp.Length; i++)
{
//Create a picture box
PictureBox pictureBox = new PictureBox();
pictureBox.BorderStyle = BorderStyle.None;
pictureBox.SizeMode = PictureBoxSizeMode.StretchImage;
//Load the image data
pictureBox.Image = (Image)Properties.Resources.ResourceManager.GetObject("_" + testTemp[i]);
//Set its size
pictureBox.Size = new Size(PICTURE_WIDTH, PICTURE_HEIGHT);
//Position the picture at (330,500) with a left offset of how many images we've gone through so far
pictureBox.Location = new Point(330 + (i * PICTURE_WIDTH), 500);
//Add the picture box to the list of controls
this.Controls.Add(pictureBox);
}
If you need to keep a list of the picture boxes, just create a new list before the loop and add each pictureBox to it inside the loop. If the control/window you're adding these PictureBoxes to needs to scroll left or right to see all the images, set the AutoScroll property to true.
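A rough sketch of that last paragraph, assuming testTemp is a string[] and using the constants from above (variable names here are illustrative):
//Keep references to the created controls so they can be updated later
var pictureBoxes = new List<PictureBox>();
for (int i = 0; i < testTemp.Length; i++)
{
    var pictureBox = new PictureBox
    {
        BorderStyle = BorderStyle.None,
        SizeMode = PictureBoxSizeMode.StretchImage,
        Size = new Size(PICTURE_WIDTH, PICTURE_HEIGHT),
        Location = new Point(330 + (i * PICTURE_WIDTH), 500),
        Image = (Image)Properties.Resources.ResourceManager.GetObject("_" + testTemp[i])
    };
    pictureBoxes.Add(pictureBox);  //remember it
    this.Controls.Add(pictureBox); //show it
}
//Let the container scroll if the row of images is wider than the visible area
this.AutoScroll = true;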
I'm using this method to display an animated GIF I created in pictureBox1.
The animated GIF already has its own speed, for example 1 frame per second, or I can set it to 1 frame per millisecond.
public void pictureBoxImage(string pbImage)
{
Image img2 = null;
try
{
using (img = Image.FromFile(pbImage))
{
Image i = this.pictureBox1.Image;
this.pictureBox1.Image = null;
if (i != null)
i.Dispose();
MemoryStream m = _memSt;
_memSt = new MemoryStream();
img.Save(_memSt, System.Drawing.Imaging.ImageFormat.Gif);
if (m != null)
m.Dispose();
img2 = Image.FromStream(_memSt);
}
if (img2 != null)
pictureBox1.Image = img2;
label2.Text = numberOfFiles.ToString();
label6.Text = nameOfStartFile.ToString();
label4.Text = nameOfEndFile.ToString();
}
catch (Exception err)
{
Logger.Write("Animation Error >>> " + err);
}
}
For example, pbImage contains:
C:\previewDirectory\preview.gif
My question is whether there is any way to change the speed, maybe via the MemoryStream variable, so the animated GIF is displayed at a different speed. Or, if the GIF file on my hard disk is saved with a speed of, say, 1 ms per frame, is that the speed and it can't be changed?
I want to use an hScrollBar to change the speed of the animated GIF that is displayed in pictureBox1.
You are confusing things. The animation speed is defined in the GIF file itself; that is, a display time is defined for each frame. This has absolutely nothing to do with MemoryStreams or the speed of MemoryStreams.
If you want to change the animation speed, change it in the GIF file using a graphics or animation application capable of doing it.
You can do it here: http://ezgif.com/speed
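If you want to confirm from C# that the delay really lives in the file, you can read the GIF's frame-delay property (tag 0x5100, one 32-bit value per frame, in hundredths of a second). This is just a read-only check of my own, not something that changes the PictureBox playback speed:
// Prints the per-frame delay stored inside an animated GIF file.
using (Image gif = Image.FromFile(@"C:\previewDirectory\preview.gif"))
{
    int frames = gif.GetFrameCount(System.Drawing.Imaging.FrameDimension.Time);
    byte[] delays = gif.GetPropertyItem(0x5100).Value; // PropertyTagFrameDelay
    for (int i = 0; i < frames; i++)
    {
        int delay = BitConverter.ToInt32(delays, i * 4); // units of 1/100 second
        Console.WriteLine("Frame " + i + ": " + (delay * 10) + " ms");
    }
}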
So here is my problem
I've used a scanner to scan an object in greyscale and convert it into a JPEG format to be analyzed by a C# program. The image's pixelformat is 8BppIndexed.
When I import this image into C# and draw a histogram of it, I only see 16 grayscale values, spread out as isolated peaks.
All the values in between these peaks are 0.
A histogram made with another tool (don't mind the colors) shows what it should look like: a full range of gray values.
The first histogram (int[]) is formed with this code:
public static int[] GetHistogram(Bitmap b)
{
int[] myHistogram = new int[256];
for (int i = 0; i < myHistogram.Length; i++)
myHistogram[i] = 0;
BitmapData bmData = null;
try
{
//Lock it fixed with 32bpp
bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
int scanline = bmData.Stride;
System.IntPtr Scan0 = bmData.Scan0;
unsafe
{
byte* p = (byte*)(void*)Scan0;
int nWidth = b.Width;
int nHeight = b.Height;
for (int y = 0; y < nHeight; y++)
{
for (int x = 0; x < nWidth; x++)
{
long Temp = 0;
Temp += p[0]; // p[0] - blue, p[1] - green , p[2]-red
Temp += p[1];
Temp += p[2];
Temp = (int)Temp / 3;
myHistogram[Temp]++;
//we do not need to use any offset, we always can increment by pixelsize when
//locking in 32bppArgb - mode
p += 4;
}
}
}
b.UnlockBits(bmData);
}
catch
{
try
{
b.UnlockBits(bmData);
}
catch
{
}
}
return myHistogram;
}
To be sure this code is not the problem, I've tried using the AForge.Math.Histogram way and even a nested for loop to iterate over all pixels. Each time I get the same result.
Now here are the funny parts:
1. When I draw the histogram with any other tool (I used 3 others), I get a normal histogram. This tells me that the information is within the image, but my code just can't get it out.
2. When I scan the exact same object and set the settings to export the image into a .bmp file, C# is able to draw a normal histogram.
3. With another random .jpg image I found on my computer, C# is able to draw a normal histogram.
These points tell me that there is probably something wrong with the way that I import the image into my code, so I tried different ways to import the image:
Bitmap bmp = (Bitmap)Bitmap.FromFile(path);
or
Bitmap bmp = AForge.Imaging.Image.FromFile(path);
or
Stream imageStreamSource = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read);
System.Windows.Media.Imaging.JpegBitmapDecoder decoder = new System.Windows.Media.Imaging.JpegBitmapDecoder(imageStreamSource, System.Windows.Media.Imaging.BitmapCreateOptions.PreservePixelFormat, System.Windows.Media.Imaging.BitmapCacheOption.Default);
System.Windows.Media.Imaging.BitmapSource bitmapSource = decoder.Frames[0];
System.Windows.Controls.Image image = new System.Windows.Controls.Image();
image.Source = bitmapSource;
image.Stretch = System.Windows.Media.Stretch.None;
MemoryStream ms = new MemoryStream();
var encoder = new System.Windows.Media.Imaging.BmpBitmapEncoder();
encoder.Frames.Add(System.Windows.Media.Imaging.BitmapFrame.Create(image.Source as System.Windows.Media.Imaging.BitmapSource));
encoder.Save(ms);
ms.Flush();
System.Drawing.Image myImage = System.Drawing.Image.FromStream(ms);
Bitmap bmp = (Bitmap)Bitmap.FromStream(ms);
None of these gave a histogram different from the one with just 16 values.
I cannot use the .bmp format in my scanner, because I need to make a great many images and one .bmp image is around 200 MB (yes, the images need a high resolution), while the .jpg is only around 30 MB. Plus, I've already made many .jpg images that cannot be remade because the scanned objects no longer exist.
NOTE: I know that using the .jpg extension is a lossy way to compress the images. That is not the current issue.
This is what a histogram, created with the exact same code as the first one, looks like with another random .jpg image from my computer:
Does this sound familiar to anyone? I feel like I've tried everything. Is there another way to solve this problem that I have not yet found?
EDIT
I thought I had found an (extremely dirty) way to fix my problem; it does change the histogram, but not into what it should be:
Bitmap temp = (Bitmap)Bitmap.FromFile(m_sourceImageFileName);
if (temp.PixelFormat == PixelFormat.Format8bppIndexed ||
temp.PixelFormat == PixelFormat.Format4bppIndexed ||
temp.PixelFormat == PixelFormat.Format1bppIndexed ||
temp.PixelFormat == PixelFormat.Indexed)
{
//Change pixelformat to a format that AForge can work with
Bitmap tmp = temp.Clone(new Rectangle(0, 0, temp.Width, temp.Height), PixelFormat.Format24bppRgb);
//This is a super dirty way to make sure the histogram shows more than 16 grey values.
for (int i = 0; true; i++)
{
if (!File.Exists(m_sourceImageFileName + i + ".jpg"))
{
tmp.Save(m_sourceImageFileName + i + ".jpg");
tmp.Dispose();
temp = AForge.Imaging.Image.FromFile(m_sourceImageFileName + i + ".jpg");
File.Delete(m_sourceImageFileName + i + ".jpg");
break;
}
}
}
Bitmap properImage = temp;
This is the new histogram:
As you can see, it's still not what the histogram should look like.
I found out that the problem might be because the image is an 8bppIndexed JPEG image, and JPEG only supports 24bppRgb images. Any solutions?
I think the clue is in the type being "indexed" in your second line. There are probably only 16 colours in the lookup table. Can you post your original scanned image so we can see if there are really more shades in it? If not, try using ImageMagick to count the colours.
Like this to get a histogram:
convert yourimage.jpg -format %c histogram:info:-
convert yourimage.jpg -colorspace rgb -colors 256 -depth 8 -format "%c" histogram:info:-
Or count the unique colours like this:
identify -verbose yourimage.jpg | grep -i colors:
Or dump all the pixels like this:
convert yourimage.jpg -colorspace rgb -colors 256 -depth 8 txt:
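If it's easier to check from C#, here's a quick sketch of the same idea (assuming the usual System.Drawing, System.Drawing.Imaging and System.Linq usings, and that GDI+ loads the file as an indexed bitmap, as your PixelFormat suggests; the path is a placeholder):
// Rough diagnostic: how many distinct palette entries does GDI+ actually see?
using (var bmp = new Bitmap(@"C:\path\to\yourimage.jpg")) // placeholder path
{
    Console.WriteLine("PixelFormat: " + bmp.PixelFormat);
    if ((bmp.PixelFormat & PixelFormat.Indexed) != 0)
    {
        int distinct = bmp.Palette.Entries.Distinct().Count();
        Console.WriteLine("Palette entries: " + bmp.Palette.Entries.Length +
                          ", distinct colours: " + distinct);
    }
}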
Well, I solved it by opening the JPEG and saving it as a BMP with the ImageJ library in Java. I made a .jar file from that code and use the following to get the BMP into my C# code:
string extension = m_sourceImageFileName.Substring(m_sourceImageFileName.LastIndexOf("."), m_sourceImageFileName.Length - m_sourceImageFileName.LastIndexOf("."));
int exitcode;
ProcessStartInfo ProcessInfo;
Process process;
ProcessInfo = new ProcessStartInfo("java.exe", @"-jar ""C:\Users\stevenh\Documents\Visual Studio 2010\Projects\BlackSpotDetection V2.0\ConvertToBmp\dist\ConvertToBmp.jar"" " + extension + " " + m_sourceImageFileName + " " + m_addedImageName);
ProcessInfo.CreateNoWindow = true;
ProcessInfo.UseShellExecute = false;
// redirecting standard output and error
ProcessInfo.RedirectStandardError = true;
ProcessInfo.RedirectStandardOutput = true;
process = Process.Start(ProcessInfo);
process.WaitForExit();
//Reading output and error
string output = process.StandardOutput.ReadToEnd();
string error = process.StandardError.ReadToEnd();
exitcode = process.ExitCode;
if (exitcode != 0)
{
statusLabel.Text = output;
MessageBox.Show("Error in external process: converting image to bmp.\n" + error);
//Exit code '0' denotes success and '1' denotes failure
return;
}
else
statusLabel.Text = "Awesomeness";
process.Close();
Bitmap realImage = AForge.Imaging.Image.FromFile(m_addedImageName);
File.Delete(m_addedImageName);
The jar receives the extension, m_sourceImageFileName and m_addedImageName. It opens the source image and saves it under the name of m_addedImageName.
I'm using the AForge library to open the image because it doesn't lock the file while the image is open, which lets me delete the 'home-made' image afterwards.
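If you ever need the same no-lock behaviour without AForge, a plain System.Drawing sketch is to copy the decoded image into a fresh Bitmap so the file handle can be closed before the file is deleted (note the copy comes back as a 32bpp bitmap):
// Loads a bitmap without keeping the source file locked: the file is read through
// a stream, the pixels are copied into a new Bitmap, and the handle is released
// when the using blocks end, so File.Delete can succeed afterwards.
static Bitmap LoadWithoutLock(string path)
{
    using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read))
    using (var original = new Bitmap(stream))
    {
        return new Bitmap(original); // independent copy, detached from the stream
    }
}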
I'm having issues opening multiple image files from the user's desktop and then converting those images to a scaled-down size, which then gets displayed on the UI (after all the converting is done). I can't find what the issue is exactly, but what I've observed is that there seems to be a 5-second limit between hitting the "Open" button on the OpenFileDialog control and how much time I have to read those files. I've used 6 files ranging in size from 9-11 MB, and in another case 50 files of 1-2 MB, and in all cases the process reads until 5 seconds have expired. It never fails on the same image either, so a particular image isn't causing the issue, which further makes me believe it's not a file-count issue. If I test this process with only a few small files it finishes in under 1 second, there is no failure, and I see all images on the UI. That is why I'm guessing it's a timing issue. I know Silverlight has a security exception between when the user interacts with a control (a button) and how much time can elapse before displaying the OpenFileDialog, but this time limit seems to be different and I can't find any documentation.
Here is the code I'm using. It seems to be a pretty common recipe used everywhere, but I'm posting it for completeness. The error happens on the line
var bitmap = new WriteableBitmap(bitmapImage);
The reason it fails is that bitmapImage.PixelWidth/PixelHeight == 0. Here is the full code.
private const int MaxPixelSize = 500;
public byte[] Convert(FileInfo fileInfo, FileTypes fileType, DateTime startTime)
{
byte[] result = null;
using (var stream = fileInfo.OpenRead())
{
DateTime EndTime = DateTime.Now;
if (fileType == FileTypes.JPG || fileType == FileTypes.BMP || fileType == FileTypes.PNG)
{
var bitmapImage = new BitmapImage();
bitmapImage.SetSource(stream);
double scaleX = 1;
double scaleY = 1;
if (bitmapImage.PixelWidth > MaxPixelSize)
{
scaleX = MaxPixelSize / (double)bitmapImage.PixelWidth;
}
if (bitmapImage.PixelHeight > MaxPixelSize)
{
scaleY = MaxPixelSize / (double)bitmapImage.PixelHeight;
}
var scale = Math.Min(scaleX, scaleY);
var bitmap = new WriteableBitmap(bitmapImage);
var resizedBitmap = bitmap.Resize((int)((double)bitmapImage.PixelWidth * scale), (int)((double)bitmapImage.PixelHeight * scale), WriteableBitmapExtensions.Interpolation.Bilinear);
using (var scaleStream = new MemoryStream())
{
var encoder = new JpegEncoder();
var image = resizedBitmap.ToImage();
encoder.Encode(image, scaleStream);
result = scaleStream.GetBuffer();
}
}
else
{
result = new byte[stream.Length];
stream.Read(result, 0, (int)stream.Length);
}
}
return result;
}
Any help or suggestions are welcome.
Thanks,
Dean
If the bitmapImage.ImageOpened event has been raised, you can get a valid PixelWidth and PixelHeight.
When bitmapImage.SetSource(stream) is executed, this event will be invoked.
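A rough sketch of that idea, under the assumption that the rest of the Convert logic can move into the handler and pass its result back through a callback (the method can no longer return the byte[] synchronously):
// Wait for ImageOpened so PixelWidth/PixelHeight are valid before creating
// the WriteableBitmap. Anything not in the original question is an assumption.
var bitmapImage = new BitmapImage();
bitmapImage.ImageOpened += (s, args) =>
{
    double scaleX = 1, scaleY = 1;
    if (bitmapImage.PixelWidth > MaxPixelSize)
        scaleX = MaxPixelSize / (double)bitmapImage.PixelWidth;
    if (bitmapImage.PixelHeight > MaxPixelSize)
        scaleY = MaxPixelSize / (double)bitmapImage.PixelHeight;
    double scale = Math.Min(scaleX, scaleY);

    var bitmap = new WriteableBitmap(bitmapImage);
    // ...resize, encode to JPEG and hand the bytes to a callback here...
};
bitmapImage.SetSource(stream);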
I have a Kinect WPF application that takes images from the Kinect, does some feature detection using EmguCV (a C# OpenCV wrapper) and displays the output using a WPF Image control.
I had this working before, but the application now refuses to update the screen image when the image source is written to, even though I have not changed the way it works.
The Image (called video) is written to like this:
video.Source = bitmapsource;
in the ColorFrameReady event handler.
This works fine until I introduce some OpenCV code before the image source is written to. It does not matter what source is used, so I don't think it is a conflict there. I have narrowed down the offending EmguCV code to this line:
RecentKeyPoints = surfCPU.DetectKeyPointsRaw(ImageRecent, null);
which jumps straight into the OpenCV code. It is worth noting that:
- ImageRecent has completely different origins from the bitmapsource updating the screen.
- Reading video.Source returns the bitmapsource, so it seems to be writing correctly, just not updating the screen.
Let me know if you want any more information...
void nui_ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
// Checks for a recent Depth Image
if (!TrackingReady) return;
// Stores image
using (ColorImageFrame colorImageFrame = e.OpenColorImageFrame())
{
if (colorImageFrame != null)
{
if (FeatureTracker.ColourImageRecent == null)
//allocate the first time
FeatureTracker.ColourImageRecent = new byte[colorImageFrame.PixelDataLength];
colorImageFrame.CopyPixelDataTo(FeatureTracker.ColourImageRecent);
}
else return;
}
FeatureTracker.FeatureDetect(nui);
//video.Source = FeatureTracker.ColourImageRecent.ToBitmapSource();
video.Source = ((Bitmap)Bitmap.FromFile("test1.png")).ToBitmapSource();
TrackingReady = false;
}
public Bitmap FeatureDetect(KinectSensor nui)
{
byte[] ColourClone = new byte[ColourImageRecent.Length];
Array.Copy(ColourImageRecent, ColourClone, ColourImageRecent.Length);
Bitmap test = (Bitmap)Bitmap.FromFile("test1.png");
test.RotateFlip(RotateFlipType.RotateNoneFlipY);
Image<Gray, Byte> ImageRecent = new Image<Gray, byte>(test);
SURFDetector surfCPU = new SURFDetector(2000, false);
VectorOfKeyPoint RecentKeyPoints;
Matrix<int> indices;
Matrix<float> dist;
Matrix<byte> mask;
bool MatchFailed = false;
// extract SURF features from the object image
RecentKeyPoints = surfCPU.DetectKeyPointsRaw(ImageRecent, null);
//Matrix<float> RecentDescriptors = surfCPU.ComputeDescriptorsRaw(ImageRecent, null, RecentKeyPoints);
//MKeyPoint[] RecentPoints = RecentKeyPoints.ToArray();
// don't feature detect on first attempt, just store image details for next attempt
#region
/*
if (KeyPointsOld == null)
{
KeyPointsOld = RecentKeyPoints;
PointsOld = RecentPoints;
DescriptorsOld = RecentDescriptors;
return ImageRecent.ToBitmap();
}
*/
#endregion
// Attempt to match points to their nearest neighbour
#region
/*
BruteForceMatcher SURFmatcher = new BruteForceMatcher(BruteForceMatcher.DistanceType.L2F32);
SURFmatcher.Add(RecentDescriptors);
int k = 5;
indices = new Matrix<int>(DescriptorsOld.Rows, k);
dist = new Matrix<float>(DescriptorsOld.Rows, k);
*/
// Match features, provide the top k matches
//SURFmatcher.KnnMatch(DescriptorsOld, indices, dist, k, null);
// Create mask and set to allow all features
//mask = new Matrix<byte>(dist.Rows, 1);
//mask.SetValue(255);
#endregion
//Features2DTracker.VoteForUniqueness(dist, 0.8, mask);
// Check number of good maches and for error and end matching if true
#region
//int nonZeroCount = CvInvoke.cvCountNonZero(mask);
//if (nonZeroCount < 5) MatchFailed = true;
/*
try
{
nonZeroCount = Features2DTracker.VoteForSizeAndOrientation(RecentKeyPoints, KeyPointsOld, indices, mask, 1.5, 20);
}
catch (SystemException)
{
MatchFailed = true;
}
if (nonZeroCount < 5) MatchFailed = true;
if (MatchFailed)
{
return ImageRecent.ToBitmap();
}
*/
#endregion
//DepthMapColourCoordsRecent = CreateDepthMap(nui, DepthImageRecent);
//PointDist[] FeatureDistances = DistanceToFeature(indices, mask, RecentPoints);
//Image<Rgb,Byte> rgbimage = ImageRecent.Convert<Rgb, Byte>();
//rgbimage = DrawPoints(FeatureDistances, rgbimage);
// Store recent image data for next feature detect.
//KeyPointsOld = RecentKeyPoints;
//PointsOld = RecentPoints;
//DescriptorsOld = RecentDescriptors;
//CreateDepthMap(nui, iva);
//rgbimage = CreateDepthImage(DepthMapColourCoordsRecent, rgbimage);
// Convert image back to a bitmap
count++;
//Bitmap bitmap3 = rgbimage.ToBitmap();
//bitmapstore = bitmap3;
//bitmap3.Save("test" + count.ToString() + ".png");
return null;
}
This is a little late, but I had a similar problem and thought I'd share my solution.
In my case I was processing the depth stream. The default resolution was 640x480, and Emgu just wasn't able to process the image fast enough to keep up with the FrameReady handler. As soon as I reduced the depth stream resolution to 320x240 the problem went away.
I also went a bit further and moved my image processing to a different thread, which sped it up even more (do a search for ComponentDispatcher.ThreadIdle). I'm still not able to do 640x480 at a reasonable frame rate, but at least the image renders so I can see what's going on.
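As a rough illustration of the threading point (not the answerer's exact code), the idea is to keep the ColorFrameReady handler cheap: copy the pixel data out, run the EmguCV work on a background task, and marshal only the finished bitmap back to the UI thread. FeatureTracker.Detect here is a hypothetical method standing in for the SURF code above, and the usual System.Threading.Tasks / System.Windows.Media.Imaging usings are assumed:
private bool processing; // simple guard so frames are skipped while we're busy

void nui_ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    byte[] pixels;
    using (ColorImageFrame frame = e.OpenColorImageFrame())
    {
        if (frame == null || processing) return;
        pixels = new byte[frame.PixelDataLength];
        frame.CopyPixelDataTo(pixels);
    }

    processing = true;
    Task.Factory.StartNew(() =>
    {
        // Expensive EmguCV/SURF work happens off the UI thread.
        BitmapSource result = FeatureTracker.Detect(pixels); // hypothetical helper
        result.Freeze(); // required so the UI thread may use a bitmap created here

        Dispatcher.BeginInvoke(new Action(() =>
        {
            video.Source = result; // UI objects must be touched on the UI thread
            processing = false;
        }));
    });
}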