How to capture a frame of a video using DirectX - C#

I am fairly new to DirectX and C#. I have a new challenge where I am trying to process the frames of a video (60 fps) coming as a video stream over HDMI from another PC, using DirectX and C#. I am using a video capture card to capture the video, and my current code captures the video stream without problems.
However, I also need to be able to process the frames of the video while it is streaming (possibly on a separate thread).
I have tried using the AForge library to capture the frames, but that only works with the integrated web camera. When I try to run it with the capture card, it only shows a black screen.
Any pointers or links for reference will be really appreciated.

Finally I found the solution myself. The frames of the video stream can be extracted using DirectShow's SampleGrabber callback.
/// <summary> Interface frame event </summary>
public delegate void HeFrame(System.Drawing.Bitmap BM);
/// <summary> Frame event </summary>
public event HeFrame FrameEvent2;
private byte[] savedArray;
private int bufferedSize;
int ISampleGrabberCB.BufferCB(double SampleTime, IntPtr pBuffer,
int BufferLen )
{
this.bufferedSize = BufferLen;
int stride = this.SnapShotWidth * 3;
Marshal.Copy( pBuffer, this.savedArray, 0, BufferLen );
GCHandle handle = GCHandle.Alloc( this.savedArray, GCHandleType.Pinned );
int scan0 = (int) handle.AddrOfPinnedObject();
scan0 += (this.SnapShotHeight - 1) * stride;
Bitmap b = new Bitmap(this.SnapShotWidth, this.SnapShotHeight, -stride,
System.Drawing.Imaging.PixelFormat.Format24bppRgb, (IntPtr) scan0 );
handle.Free();
SetBitmap=b;
return 0;
}
/// <summary> capture event, triggered by buffer callback. </summary>
private void OnCaptureDone()
{
Trace.WriteLine( "!!DLG: OnCaptureDone" );
}
/// <summary> Allocate memory space and set SetCallBack </summary>
public void GrapImg()
{
Trace.Write ("IMG");
if( this.savedArray == null )
{
int size = this.snapShotImageSize;
if( (size < 1000) || (size > 16000000) )
return;
this.savedArray = new byte[ size + 64000 ];
}
sampGrabber.SetCallback( this, 1 );
}
/// <summary> Transfer bitmap upon firing event </summary>
public System.Drawing.Bitmap SetBitmap
{
set
{
this.FrameEvent2(value);
}
}
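For context, the sampGrabber used in GrapImg() has to be configured and inserted into the capture graph before the graph runs. The sketch below is not from the original article; it uses DirectShowLib interop names (the older DShowNET sample this code is based on names things slightly differently), and graphBuilder is assumed to be the filter graph the capture device is already connected to.
using DirectShowLib;
ISampleGrabber sampGrabber = (ISampleGrabber)new SampleGrabber();
// Ask the grabber for 24-bit RGB frames, so BufferCB can use a width * 3 stride.
AMMediaType media = new AMMediaType();
media.majorType = MediaType.Video;
media.subType = MediaSubType.RGB24;
media.formatType = FormatType.VideoInfo;
sampGrabber.SetMediaType(media);
DsUtils.FreeAMMediaType(media);
// Insert the grabber into the graph and register this class
// (which implements ISampleGrabberCB) for buffer callbacks.
graphBuilder.AddFilter((IBaseFilter)sampGrabber, "Sample Grabber");
sampGrabber.SetBufferSamples(false);
sampGrabber.SetOneShot(false);
sampGrabber.SetCallback(this, 1); // 1 = call BufferCB rather than SampleCB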
Here is the link to the article. It might help someone, as it took me a lot of time to get to this.
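To satisfy the second requirement (processing frames while the stream keeps playing), the frames raised by FrameEvent2 can be handed off to a worker thread. The sketch below is illustrative and not from the article: the BlockingCollection, the ProcessFrame method and the capture instance are assumed names.
using System.Collections.Concurrent;
using System.Drawing;
using System.Threading.Tasks;

// Bounded queue so a slow consumer cannot exhaust memory.
private readonly BlockingCollection<Bitmap> frames = new BlockingCollection<Bitmap>(boundedCapacity: 4);

private void StartProcessing()
{
    // Producer: the grabber callback raises FrameEvent2 for every frame.
    capture.FrameEvent2 += bm =>
    {
        var copy = new Bitmap(bm); // copy, because the grabber reuses its pinned buffer
        if (!frames.TryAdd(copy))
            copy.Dispose();        // drop the frame if processing is falling behind
    };

    // Consumer: drains the queue on a background thread at its own pace.
    Task.Run(() =>
    {
        foreach (var frame in frames.GetConsumingEnumerable())
        {
            ProcessFrame(frame);   // hypothetical per-frame processing
            frame.Dispose();
        }
    });
}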

Related

Big red X on image and I cannot detect error

To be brief, I'm trying to implement a motion detection algorithm. I'm working on UWP and using the portable version of the AForge library for image processing. Setting the algorithm itself aside, my problem is the conversion of the SoftwareBitmap object (that I get from MediaFrameReader) to the Bitmap object (and vice versa) that I use in my motion detection code. As a consequence of this conversion, I get the proper image, but with a big red X in the foreground. Code below:
private async void FrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
{
var frame = sender.TryAcquireLatestFrame();
if (frame != null && !_detectingMotion)
{
SoftwareBitmap aForgeInputBitmap = null;
var inputBitmap = frame.VideoMediaFrame?.SoftwareBitmap;
if (inputBitmap != null)
{
_detectingMotion = true;
// The XAML Image control can only display images in BGRA8 format with premultiplied or no alpha
if (inputBitmap.BitmapPixelFormat == BitmapPixelFormat.Bgra8
&& inputBitmap.BitmapAlphaMode == BitmapAlphaMode.Premultiplied)
{
aForgeInputBitmap = SoftwareBitmap.Copy(inputBitmap);
}
else
{
aForgeInputBitmap = SoftwareBitmap.Convert(inputBitmap, BitmapPixelFormat.Bgra8, BitmapAlphaMode.Ignore);
}
await _aForgeHelper.MoveBackgrounds(aForgeInputBitmap);
SoftwareBitmap aForgeOutputBitmap = await _aForgeHelper.DetectMotion();
_frameRenderer.PresentSoftwareBitmap(aForgeOutputBitmap);
_detectingMotion = false;
}
}
}
class AForgeHelper
{
private Bitmap _background;
private Bitmap _currentFrameBitmap;
public async Task MoveBackgrounds(SoftwareBitmap currentFrame)
{
if (_background == null)
{
_background = TransformToGrayscale(await ConvertSoftwareBitmapToBitmap(currentFrame));
}
else
{
// modifying _background in compliance with algorithm - in this case irrelevant
}
}
public async Task<SoftwareBitmap> DetectMotion()
{
// to check only this conversion
return await ConvertBitmapToSoftwareBitmap(_background);
}
private static async Task<Bitmap> ConvertSoftwareBitmapToBitmap(SoftwareBitmap input)
{
Bitmap output = null;
await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
{
WriteableBitmap tmpBitmap = new WriteableBitmap(input.PixelWidth, input.PixelHeight);
input.CopyToBuffer(tmpBitmap.PixelBuffer);
output = (Bitmap)tmpBitmap;
});
return output;
}
private static async Task<SoftwareBitmap> ConvertBitmapToSoftwareBitmap(Bitmap input)
{
SoftwareBitmap output = null;
await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
{
WriteableBitmap tmpBitmap = (WriteableBitmap)input;
output = new SoftwareBitmap(BitmapPixelFormat.Bgra8, tmpBitmap.PixelWidth, tmpBitmap.PixelHeight,
BitmapAlphaMode.Premultiplied);
output.CopyFromBuffer(tmpBitmap.PixelBuffer);
});
return output;
}
private static Bitmap TransformToGrayscale(Bitmap input)
{
Grayscale grayscaleFilter = new Grayscale(0.2125, 0.7154, 0.0721);
Bitmap output = grayscaleFilter.Apply(input);
return output;
}
}
Certainly, I've tried to catch errors using try-catch blocks, but found nothing. Thanks in advance.
EDIT (29/03/2018):
Generally, my app aims to provide some features associated with the Kinect sensor. The user can choose a feature from a list. First of all, this app has to be available on Xbox One, which is why I chose UWP. For 'scientific' reasons, I've implemented the MVVM pattern using the MVVM Light framework. As for the PresentSoftwareBitmap() method, it comes from the Windows-universal-samples repo, and I paste the FrameRenderer helper class below:
[ComImport]
[Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
unsafe interface IMemoryBufferByteAccess
{
void GetBuffer(out byte* buffer, out uint capacity);
}
class FrameRenderer
{
private Image _imageElement;
private SoftwareBitmap _backBuffer;
private bool _taskRunning = false;
public FrameRenderer(Image imageElement)
{
_imageElement = imageElement;
_imageElement.Source = new SoftwareBitmapSource();
}
// Processes a MediaFrameReference and displays it in a XAML image control
public void ProcessFrame(MediaFrameReference frame)
{
var softwareBitmap = FrameRenderer.ConvertToDisplayableImage(frame?.VideoMediaFrame);
if (softwareBitmap != null)
{
// Swap the processed frame to _backBuffer and trigger UI thread to render it
softwareBitmap = Interlocked.Exchange(ref _backBuffer, softwareBitmap);
// UI thread always reset _backBuffer before using it. Unused bitmap should be disposed.
softwareBitmap?.Dispose();
// Changes to xaml ImageElement must happen in UI thread through Dispatcher
var task = _imageElement.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
async () =>
{
// Don't let two copies of this task run at the same time.
if (_taskRunning)
{
return;
}
_taskRunning = true;
// Keep draining frames from the backbuffer until the backbuffer is empty.
SoftwareBitmap latestBitmap;
while ((latestBitmap = Interlocked.Exchange(ref _backBuffer, null)) != null)
{
var imageSource = (SoftwareBitmapSource)_imageElement.Source;
await imageSource.SetBitmapAsync(latestBitmap);
latestBitmap.Dispose();
}
_taskRunning = false;
});
}
}
// Function delegate that transforms a scanline from an input image to an output image.
private unsafe delegate void TransformScanline(int pixelWidth, byte* inputRowBytes, byte* outputRowBytes);
/// <summary>
/// Determines the subtype to request from the MediaFrameReader that will result in
/// a frame that can be rendered by ConvertToDisplayableImage.
/// </summary>
/// <returns>Subtype string to request, or null if subtype is not renderable.</returns>
public static string GetSubtypeForFrameReader(MediaFrameSourceKind kind, MediaFrameFormat format)
{
// Note that media encoding subtypes may differ in case.
// https://learn.microsoft.com/en-us/uwp/api/Windows.Media.MediaProperties.MediaEncodingSubtypes
string subtype = format.Subtype;
switch (kind)
{
// For color sources, we accept anything and request that it be converted to Bgra8.
case MediaFrameSourceKind.Color:
return Windows.Media.MediaProperties.MediaEncodingSubtypes.Bgra8;
// The only depth format we can render is D16.
case MediaFrameSourceKind.Depth:
return String.Equals(subtype, Windows.Media.MediaProperties.MediaEncodingSubtypes.D16, StringComparison.OrdinalIgnoreCase) ? subtype : null;
// The only infrared formats we can render are L8 and L16.
case MediaFrameSourceKind.Infrared:
return (String.Equals(subtype, Windows.Media.MediaProperties.MediaEncodingSubtypes.L8, StringComparison.OrdinalIgnoreCase) ||
String.Equals(subtype, Windows.Media.MediaProperties.MediaEncodingSubtypes.L16, StringComparison.OrdinalIgnoreCase)) ? subtype : null;
// No other source kinds are supported by this class.
default:
return null;
}
}
/// <summary>
/// Converts a frame to a SoftwareBitmap of a valid format to display in an Image control.
/// </summary>
/// <param name="inputFrame">Frame to convert.</param>
public static unsafe SoftwareBitmap ConvertToDisplayableImage(VideoMediaFrame inputFrame)
{
SoftwareBitmap result = null;
using (var inputBitmap = inputFrame?.SoftwareBitmap)
{
if (inputBitmap != null)
{
switch (inputFrame.FrameReference.SourceKind)
{
case MediaFrameSourceKind.Color:
// XAML requires Bgra8 with premultiplied alpha.
// We requested Bgra8 from the MediaFrameReader, so all that's
// left is fixing the alpha channel if necessary.
if (inputBitmap.BitmapPixelFormat != BitmapPixelFormat.Bgra8)
{
System.Diagnostics.Debug.WriteLine("Color frame in unexpected format.");
}
else if (inputBitmap.BitmapAlphaMode == BitmapAlphaMode.Premultiplied)
{
// Already in the correct format.
result = SoftwareBitmap.Copy(inputBitmap);
}
else
{
// Convert to premultiplied alpha.
result = SoftwareBitmap.Convert(inputBitmap, BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied);
}
break;
case MediaFrameSourceKind.Depth:
// We requested D16 from the MediaFrameReader, so the frame should
// be in Gray16 format.
if (inputBitmap.BitmapPixelFormat == BitmapPixelFormat.Gray16)
{
// Use a special pseudo color to render 16 bits depth frame.
var depthScale = (float)inputFrame.DepthMediaFrame.DepthFormat.DepthScaleInMeters;
var minReliableDepth = inputFrame.DepthMediaFrame.MinReliableDepth;
var maxReliableDepth = inputFrame.DepthMediaFrame.MaxReliableDepth;
result = TransformBitmap(inputBitmap, (w, i, o) => PseudoColorHelper.PseudoColorForDepth(w, i, o, depthScale, minReliableDepth, maxReliableDepth));
}
else
{
System.Diagnostics.Debug.WriteLine("Depth frame in unexpected format.");
}
break;
case MediaFrameSourceKind.Infrared:
// We requested L8 or L16 from the MediaFrameReader, so the frame should
// be in Gray8 or Gray16 format.
switch (inputBitmap.BitmapPixelFormat)
{
case BitmapPixelFormat.Gray16:
// Use pseudo color to render 16 bits frames.
result = TransformBitmap(inputBitmap, PseudoColorHelper.PseudoColorFor16BitInfrared);
break;
case BitmapPixelFormat.Gray8:
// Use pseudo color to render 8 bits frames.
result = TransformBitmap(inputBitmap, PseudoColorHelper.PseudoColorFor8BitInfrared);
break;
default:
System.Diagnostics.Debug.WriteLine("Infrared frame in unexpected format.");
break;
}
break;
}
}
}
return result;
}
/// <summary>
/// Transform image into Bgra8 image using given transform method.
/// </summary>
/// <param name="softwareBitmap">Input image to transform.</param>
/// <param name="transformScanline">Method to map pixels in a scanline.</param>
private static unsafe SoftwareBitmap TransformBitmap(SoftwareBitmap softwareBitmap, TransformScanline transformScanline)
{
// XAML Image control only supports premultiplied Bgra8 format.
var outputBitmap = new SoftwareBitmap(BitmapPixelFormat.Bgra8,
softwareBitmap.PixelWidth, softwareBitmap.PixelHeight, BitmapAlphaMode.Premultiplied);
using (var input = softwareBitmap.LockBuffer(BitmapBufferAccessMode.Read))
using (var output = outputBitmap.LockBuffer(BitmapBufferAccessMode.Write))
{
// Get stride values to calculate buffer position for a given pixel x and y position.
int inputStride = input.GetPlaneDescription(0).Stride;
int outputStride = output.GetPlaneDescription(0).Stride;
int pixelWidth = softwareBitmap.PixelWidth;
int pixelHeight = softwareBitmap.PixelHeight;
using (var outputReference = output.CreateReference())
using (var inputReference = input.CreateReference())
{
// Get input and output byte access buffers.
byte* inputBytes;
uint inputCapacity;
((IMemoryBufferByteAccess)inputReference).GetBuffer(out inputBytes, out inputCapacity);
byte* outputBytes;
uint outputCapacity;
((IMemoryBufferByteAccess)outputReference).GetBuffer(out outputBytes, out outputCapacity);
// Iterate over all pixels and store converted value.
for (int y = 0; y < pixelHeight; y++)
{
byte* inputRowBytes = inputBytes + y * inputStride;
byte* outputRowBytes = outputBytes + y * outputStride;
transformScanline(pixelWidth, inputRowBytes, outputRowBytes);
}
}
}
return outputBitmap;
}
/// <summary>
/// A helper class to manage look-up-table for pseudo-colors.
/// </summary>
private static class PseudoColorHelper
{
#region Constructor, private members and methods
private const int TableSize = 1024; // Look up table size
private static readonly uint[] PseudoColorTable;
private static readonly uint[] InfraredRampTable;
// Color palette mapping value from 0 to 1 to blue to red colors.
private static readonly Color[] ColorRamp =
{
Color.FromArgb(a:0xFF, r:0x7F, g:0x00, b:0x00),
Color.FromArgb(a:0xFF, r:0xFF, g:0x00, b:0x00),
Color.FromArgb(a:0xFF, r:0xFF, g:0x7F, b:0x00),
Color.FromArgb(a:0xFF, r:0xFF, g:0xFF, b:0x00),
Color.FromArgb(a:0xFF, r:0x7F, g:0xFF, b:0x7F),
Color.FromArgb(a:0xFF, r:0x00, g:0xFF, b:0xFF),
Color.FromArgb(a:0xFF, r:0x00, g:0x7F, b:0xFF),
Color.FromArgb(a:0xFF, r:0x00, g:0x00, b:0xFF),
Color.FromArgb(a:0xFF, r:0x00, g:0x00, b:0x7F),
};
static PseudoColorHelper()
{
PseudoColorTable = InitializePseudoColorLut();
InfraredRampTable = InitializeInfraredRampLut();
}
/// <summary>
/// Maps an input infrared value between [0, 1] to corrected value between [0, 1].
/// </summary>
/// <param name="value">Input value between [0, 1].</param>
[MethodImpl(MethodImplOptions.AggressiveInlining)] // Tell the compiler to inline this method to improve performance
private static uint InfraredColor(float value)
{
int index = (int)(value * TableSize);
index = index < 0 ? 0 : index > TableSize - 1 ? TableSize - 1 : index;
return InfraredRampTable[index];
}
/// <summary>
/// Initializes the pseudo-color look up table for infrared pixels
/// </summary>
private static uint[] InitializeInfraredRampLut()
{
uint[] lut = new uint[TableSize];
for (int i = 0; i < TableSize; i++)
{
var value = (float)i / TableSize;
// Adjust to increase color change between lower values in infrared images
var alpha = (float)Math.Pow(1 - value, 12);
lut[i] = ColorRampInterpolation(alpha);
}
return lut;
}
/// <summary>
/// Initializes pseudo-color look up table for depth pixels
/// </summary>
private static uint[] InitializePseudoColorLut()
{
uint[] lut = new uint[TableSize];
for (int i = 0; i < TableSize; i++)
{
lut[i] = ColorRampInterpolation((float)i / TableSize);
}
return lut;
}
/// <summary>
/// Maps a float value to a pseudo-color pixel
/// </summary>
private static uint ColorRampInterpolation(float value)
{
// Map value to surrounding indexes on the color ramp
int rampSteps = ColorRamp.Length - 1;
float scaled = value * rampSteps;
int integer = (int)scaled;
int index =
integer < 0 ? 0 :
integer >= rampSteps - 1 ? rampSteps - 1 :
integer;
Color prev = ColorRamp[index];
Color next = ColorRamp[index + 1];
// Set color based on ratio of closeness between the surrounding colors
uint alpha = (uint)((scaled - integer) * 255);
uint beta = 255 - alpha;
return
((prev.A * beta + next.A * alpha) / 255) << 24 | // Alpha
((prev.R * beta + next.R * alpha) / 255) << 16 | // Red
((prev.G * beta + next.G * alpha) / 255) << 8 | // Green
((prev.B * beta + next.B * alpha) / 255); // Blue
}
/// <summary>
/// Maps a value in [0, 1] to a pseudo RGBA color.
/// </summary>
/// <param name="value">Input value between [0, 1].</param>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static uint PseudoColor(float value)
{
int index = (int)(value * TableSize);
index = index < 0 ? 0 : index > TableSize - 1 ? TableSize - 1 : index;
return PseudoColorTable[index];
}
#endregion
/// <summary>
/// Maps each pixel in a scanline from a 16 bit depth value to a pseudo-color pixel.
/// </summary>
/// <param name="pixelWidth">Width of the input scanline, in pixels.</param>
/// <param name="inputRowBytes">Pointer to the start of the input scanline.</param>
/// <param name="outputRowBytes">Pointer to the start of the output scanline.</param>
/// <param name="depthScale">Physical distance that corresponds to one unit in the input scanline.</param>
/// <param name="minReliableDepth">Shortest distance at which the sensor can provide reliable measurements.</param>
/// <param name="maxReliableDepth">Furthest distance at which the sensor can provide reliable measurements.</param>
public static unsafe void PseudoColorForDepth(int pixelWidth, byte* inputRowBytes, byte* outputRowBytes, float depthScale, float minReliableDepth, float maxReliableDepth)
{
// Visualize space in front of your desktop.
float minInMeters = minReliableDepth * depthScale;
float maxInMeters = maxReliableDepth * depthScale;
float one_min = 1.0f / minInMeters;
float range = 1.0f / maxInMeters - one_min;
ushort* inputRow = (ushort*)inputRowBytes;
uint* outputRow = (uint*)outputRowBytes;
for (int x = 0; x < pixelWidth; x++)
{
var depth = inputRow[x] * depthScale;
if (depth == 0)
{
// Map invalid depth values to transparent pixels.
// This happens when depth information cannot be calculated, e.g. when objects are too close.
outputRow[x] = 0;
}
else
{
var alpha = (1.0f / depth - one_min) / range;
outputRow[x] = PseudoColor(alpha * alpha);
}
}
}
/// <summary>
/// Maps each pixel in a scanline from a 8 bit infrared value to a pseudo-color pixel.
/// </summary>
/// /// <param name="pixelWidth">Width of the input scanline, in pixels.</param>
/// <param name="inputRowBytes">Pointer to the start of the input scanline.</param>
/// <param name="outputRowBytes">Pointer to the start of the output scanline.</param>
public static unsafe void PseudoColorFor8BitInfrared(
int pixelWidth, byte* inputRowBytes, byte* outputRowBytes)
{
byte* inputRow = inputRowBytes;
uint* outputRow = (uint*)outputRowBytes;
for (int x = 0; x < pixelWidth; x++)
{
outputRow[x] = InfraredColor(inputRow[x] / (float)Byte.MaxValue);
}
}
/// <summary>
/// Maps each pixel in a scanline from a 16 bit infrared value to a pseudo-color pixel.
/// </summary>
/// <param name="pixelWidth">Width of the input scanline.</param>
/// <param name="inputRowBytes">Pointer to the start of the input scanline.</param>
/// <param name="outputRowBytes">Pointer to the start of the output scanline.</param>
public static unsafe void PseudoColorFor16BitInfrared(int pixelWidth, byte* inputRowBytes, byte* outputRowBytes)
{
ushort* inputRow = (ushort*)inputRowBytes;
uint* outputRow = (uint*)outputRowBytes;
for (int x = 0; x < pixelWidth; x++)
{
outputRow[x] = InfraredColor(inputRow[x] / (float)UInt16.MaxValue);
}
}
}
// Displays the provided softwareBitmap in a XAML image control.
public void PresentSoftwareBitmap(SoftwareBitmap softwareBitmap)
{
if (softwareBitmap != null)
{
// Swap the processed frame to _backBuffer and trigger UI thread to render it
softwareBitmap = Interlocked.Exchange(ref _backBuffer, softwareBitmap);
// UI thread always reset _backBuffer before using it. Unused bitmap should be disposed.
softwareBitmap?.Dispose();
// Changes to xaml ImageElement must happen in UI thread through Dispatcher
var task = _imageElement.Dispatcher.RunAsync(CoreDispatcherPriority.Normal,
async () =>
{
// Don't let two copies of this task run at the same time.
if (_taskRunning)
{
return;
}
_taskRunning = true;
// Keep draining frames from the backbuffer until the backbuffer is empty.
SoftwareBitmap latestBitmap;
while ((latestBitmap = Interlocked.Exchange(ref _backBuffer, null)) != null)
{
var imageSource = (SoftwareBitmapSource)_imageElement.Source;
await imageSource.SetBitmapAsync(latestBitmap);
latestBitmap.Dispose();
}
_taskRunning = false;
});
}
}
}
This is the output image that I get after the conversion issue and the grayscale processing:
Output image
As for the VS version: Visual Studio Enterprise 2017, version 15.6.1, to be precise. Once more, thanks in advance for the help.

Copying From and To Clipboard loses image transparency

I've been trying to copy a transparent PNG image to clipboard and preserve its transparency to paste it into a specific program that supports it.
I tried many solutions already but the background always ended up gray in one way or another.
So I tried copying the same image using Chrome and pasting it into the program, and it worked: transparency was preserved. I then tried getting the image from the clipboard that I had copied using Chrome and setting it again, expecting the transparency to still be there - but no, transparency was not preserved, even though I had just taken the image from the clipboard and set it back.
var img = Clipboard.GetImage(); // copied using Chrome and transparency is preserved
Clipboard.SetImage(img); // transparency lost
Same issue even if I use the System.Windows.Forms.Clipboard or try getting and setting the DataObject instead of the Image.
The Windows clipboard, by default, does not support transparency, but you can put content on the clipboard in many types together to make sure most applications find some type in it that they can use. Sadly, the most common type, DeviceIndependentBitmap (which Windows itself seems to use) is a really dirty and unreliable one. I wrote a big rant explanation about that here.
I'll assume you have read through that before continuing with my answer here, because it contains the background information required for the next part.
Now, the cleanest way of putting an image on the clipboard with transparency support is a PNG stream, but it won't guarantee that all applications can paste it. Gimp supports PNG paste, and apparently so do the newer MS Office programs, but Google Chrome, for example, doesn't, and will only accept the messy DIB type detailed in the answer I linked to. On the other hand, Gimp will not accept DIB as having transparency, because its creators actually followed the format's specifications, and realized that the format was unreliable (as clearly demonstrated by that question I linked).
Because of the DIB mess, sadly, the best thing to do is simply to put it in there in as many generally-supported types as you can, including PNG, DIB and the normal Image.
PNG and DIB are both put on the clipboard in the same way: by putting them in the DataObject as MemoryStream, and then giving the clipboard the "copy" instruction when actually putting it on.
Most of this is straightforward, but the DIB one is a bit more complex. Note that the following part contains a couple of references to my own toolsets. The GetImageData one can be found in this answer, the BuildImage one can be found here, and the ArrayUtils ones are given below.
These toolsets all use System.Drawing, though. You'll have to figure out for yourself exactly how to do the same things in WPF.
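Those helpers aren't reproduced in this answer, so here is a minimal sketch of what GetImageData and BuildImage might look like, assuming non-indexed pixel formats, positive strides, and ignoring the palette arguments; the real versions in the linked answers are more complete.
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

public static class ImageUtils
{
    // Copies the raw pixel bytes of a bitmap and reports its stride.
    public static byte[] GetImageData(Bitmap bitmap, out int stride)
    {
        Rectangle rect = new Rectangle(0, 0, bitmap.Width, bitmap.Height);
        BitmapData bd = bitmap.LockBits(rect, ImageLockMode.ReadOnly, bitmap.PixelFormat);
        try
        {
            stride = bd.Stride;
            byte[] data = new byte[bd.Stride * bd.Height];
            Marshal.Copy(bd.Scan0, data, 0, data.Length);
            return data;
        }
        finally
        {
            bitmap.UnlockBits(bd);
        }
    }

    // Builds a new bitmap from raw pixel bytes; palette and default colour are ignored in this sketch.
    public static Bitmap BuildImage(byte[] sourceData, int width, int height,
        int stride, PixelFormat pixelFormat, Color[] palette, Color? defaultColor)
    {
        Bitmap bitmap = new Bitmap(width, height, pixelFormat);
        BitmapData bd = bitmap.LockBits(new Rectangle(0, 0, width, height),
            ImageLockMode.WriteOnly, pixelFormat);
        try
        {
            // Copy row by row, since the source stride can differ from the target stride.
            int rowLength = Math.Min(stride, bd.Stride);
            for (int y = 0; y < height; y++)
                Marshal.Copy(sourceData, y * stride, IntPtr.Add(bd.Scan0, y * bd.Stride), rowLength);
        }
        finally
        {
            bitmap.UnlockBits(bd);
        }
        return bitmap;
    }
}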
/// <summary>
/// Copies the given image to the clipboard as PNG, DIB and standard Bitmap format.
/// </summary>
/// <param name="image">Image to put on the clipboard.</param>
/// <param name="imageNoTr">Optional specifically nontransparent version of the image to put on the clipboard.</param>
/// <param name="data">Clipboard data object to put the image into. Might already contain other stuff. Leave null to create a new one.</param>
public static void SetClipboardImage(Bitmap image, Bitmap imageNoTr, DataObject data)
{
Clipboard.Clear();
if (data == null)
data = new DataObject();
if (imageNoTr == null)
imageNoTr = image;
using (MemoryStream pngMemStream = new MemoryStream())
using (MemoryStream dibMemStream = new MemoryStream())
{
// As standard bitmap, without transparency support
data.SetData(DataFormats.Bitmap, true, imageNoTr);
// As PNG. Gimp will prefer this over the other two.
image.Save(pngMemStream, ImageFormat.Png);
data.SetData("PNG", false, pngMemStream);
// As DIB. This is (wrongly) accepted as ARGB by many applications.
Byte[] dibData = ConvertToDib(image);
dibMemStream.Write(dibData, 0, dibData.Length);
data.SetData(DataFormats.Dib, false, dibMemStream);
// The 'copy=true' argument means the MemoryStreams can be safely disposed after the operation.
Clipboard.SetDataObject(data, true);
}
}
/// <summary>
/// Converts the image to Device Independent Bitmap format of type BITFIELDS.
/// This is (wrongly) accepted by many applications as containing transparency,
/// so I'm abusing it for that.
/// </summary>
/// <param name="image">Image to convert to DIB</param>
/// <returns>The image converted to DIB, in bytes.</returns>
public static Byte[] ConvertToDib(Image image)
{
Byte[] bm32bData;
Int32 width = image.Width;
Int32 height = image.Height;
// Ensure image is 32bppARGB by painting it on a new 32bppARGB image.
using (Bitmap bm32b = new Bitmap(image.Width, image.Height, PixelFormat.Format32bppArgb))
{
using (Graphics gr = Graphics.FromImage(bm32b))
gr.DrawImage(image, new Rectangle(0, 0, bm32b.Width, bm32b.Height));
// Bitmap format has its lines reversed.
bm32b.RotateFlip(RotateFlipType.Rotate180FlipX);
Int32 stride;
bm32bData = ImageUtils.GetImageData(bm32b, out stride);
}
// BITMAPINFOHEADER struct for DIB.
Int32 hdrSize = 0x28;
Byte[] fullImage = new Byte[hdrSize + 12 + bm32bData.Length];
//Int32 biSize;
ArrayUtils.WriteIntToByteArray(fullImage, 0x00, 4, true, (UInt32)hdrSize);
//Int32 biWidth;
ArrayUtils.WriteIntToByteArray(fullImage, 0x04, 4, true, (UInt32)width);
//Int32 biHeight;
ArrayUtils.WriteIntToByteArray(fullImage, 0x08, 4, true, (UInt32)height);
//Int16 biPlanes;
ArrayUtils.WriteIntToByteArray(fullImage, 0x0C, 2, true, 1);
//Int16 biBitCount;
ArrayUtils.WriteIntToByteArray(fullImage, 0x0E, 2, true, 32);
//BITMAPCOMPRESSION biCompression = BITMAPCOMPRESSION.BITFIELDS;
ArrayUtils.WriteIntToByteArray(fullImage, 0x10, 4, true, 3);
//Int32 biSizeImage;
ArrayUtils.WriteIntToByteArray(fullImage, 0x14, 4, true, (UInt32)bm32bData.Length);
// These are all 0. Since .net clears new arrays, don't bother writing them.
//Int32 biXPelsPerMeter = 0;
//Int32 biYPelsPerMeter = 0;
//Int32 biClrUsed = 0;
//Int32 biClrImportant = 0;
// The aforementioned "BITFIELDS": colour masks applied to the Int32 pixel value to get the R, G and B values.
ArrayUtils.WriteIntToByteArray(fullImage, hdrSize + 0, 4, true, 0x00FF0000);
ArrayUtils.WriteIntToByteArray(fullImage, hdrSize + 4, 4, true, 0x0000FF00);
ArrayUtils.WriteIntToByteArray(fullImage, hdrSize + 8, 4, true, 0x000000FF);
Array.Copy(bm32bData, 0, fullImage, hdrSize + 12, bm32bData.Length);
return fullImage;
}
Now, as for getting an image off the clipboard, I noticed there is apparently a difference in behaviour between .Net 3.5 and the later ones, which seem to actually use that DIB. Given that difference, and given how unreliable the DIB format is, you'll want to actually check manually for all types, preferably starting with the completely reliable PNG format.
You can get the DataObject from the clipboard with this code:
DataObject retrievedData = Clipboard.GetDataObject() as DataObject;
The CloneImage function used here is basically just the combination of my GetImageData and BuildImage toolsets, ensuring that a new image is created without any backing resources that might mess up; image objects are known to cause crashes when they're based on a Stream that then gets disposed. A compacted and optimised version of it was posted here, in a question well worth reading on the subject of why this cloning is so important.
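A minimal sketch of such a CloneImage, built on the GetImageData/BuildImage sketches above and added to the same ImageUtils class (the real one from the linked answer also handles indexed formats and palettes), might look like this:
// Makes a fully independent copy of the bitmap, detached from any backing stream or buffer.
public static Bitmap CloneImage(Bitmap source)
{
    int stride;
    byte[] data = ImageUtils.GetImageData(source, out stride);
    Bitmap clone = ImageUtils.BuildImage(data, source.Width, source.Height,
        stride, source.PixelFormat, null, null);
    // Preserve the resolution so the clone behaves like the original.
    clone.SetResolution(source.HorizontalResolution, source.VerticalResolution);
    return clone;
}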
/// <summary>
/// Retrieves an image from the given clipboard data object, in the order PNG, DIB, Bitmap, Image object.
/// </summary>
/// <param name="retrievedData">The clipboard data.</param>
/// <returns>The extracted image, or null if no supported image type was found.</returns>
public static Bitmap GetClipboardImage(DataObject retrievedData)
{
Bitmap clipboardimage = null;
// Order: try PNG, move on to try 32-bit ARGB DIB, then try the normal Bitmap and Image types.
if (retrievedData.GetDataPresent("PNG", false))
{
MemoryStream png_stream = retrievedData.GetData("PNG", false) as MemoryStream;
if (png_stream != null)
using (Bitmap bm = new Bitmap(png_stream))
clipboardimage = ImageUtils.CloneImage(bm);
}
if (clipboardimage == null && retrievedData.GetDataPresent(DataFormats.Dib, false))
{
MemoryStream dib = retrievedData.GetData(DataFormats.Dib, false) as MemoryStream;
if (dib != null)
clipboardimage = ImageFromClipboardDib(dib.ToArray());
}
if (clipboardimage == null && retrievedData.GetDataPresent(DataFormats.Bitmap))
clipboardimage = new Bitmap(retrievedData.GetData(DataFormats.Bitmap) as Image);
if (clipboardimage == null && retrievedData.GetDataPresent(typeof(Image)))
clipboardimage = new Bitmap(retrievedData.GetData(typeof(Image)) as Image);
return clipboardimage;
}
public static Bitmap ImageFromClipboardDib(Byte[] dibBytes)
{
if (dibBytes == null || dibBytes.Length < 4)
return null;
try
{
Int32 headerSize = (Int32)ArrayUtils.ReadIntFromByteArray(dibBytes, 0, 4, true);
// Only supporting 40-byte DIB from clipboard
if (headerSize != 40)
return null;
Byte[] header = new Byte[40];
Array.Copy(dibBytes, header, 40);
Int32 imageIndex = headerSize;
Int32 width = (Int32)ArrayUtils.ReadIntFromByteArray(header, 0x04, 4, true);
Int32 height = (Int32)ArrayUtils.ReadIntFromByteArray(header, 0x08, 4, true);
Int16 planes = (Int16)ArrayUtils.ReadIntFromByteArray(header, 0x0C, 2, true);
Int16 bitCount = (Int16)ArrayUtils.ReadIntFromByteArray(header, 0x0E, 2, true);
//Compression: 0 = RGB; 3 = BITFIELDS.
Int32 compression = (Int32)ArrayUtils.ReadIntFromByteArray(header, 0x10, 4, true);
// Not dealing with non-standard formats.
if (planes != 1 || (compression != 0 && compression != 3))
return null;
PixelFormat fmt;
switch (bitCount)
{
case 32:
fmt = PixelFormat.Format32bppRgb;
break;
case 24:
fmt = PixelFormat.Format24bppRgb;
break;
case 16:
fmt = PixelFormat.Format16bppRgb555;
break;
default:
return null;
}
if (compression == 3)
imageIndex += 12;
if (dibBytes.Length < imageIndex)
return null;
Byte[] image = new Byte[dibBytes.Length - imageIndex];
Array.Copy(dibBytes, imageIndex, image, 0, image.Length);
// Classic stride: fit within blocks of 4 bytes.
Int32 stride = (((((bitCount * width) + 7) / 8) + 3) / 4) * 4;
if (compression == 3)
{
UInt32 redMask = ArrayUtils.ReadIntFromByteArray(dibBytes, headerSize + 0, 4, true);
UInt32 greenMask = ArrayUtils.ReadIntFromByteArray(dibBytes, headerSize + 4, 4, true);
UInt32 blueMask = ArrayUtils.ReadIntFromByteArray(dibBytes, headerSize + 8, 4, true);
// Fix for the undocumented use of 32bppARGB disguised as BITFIELDS. Despite lacking an alpha bit field,
// the alpha bytes are still filled in, without any header indication of alpha usage.
// Pure 32-bit RGB: check if a switch to ARGB can be made by checking for non-zero alpha.
// Admitted, this may give a mess if the alpha bits simply aren't cleared, but why the hell wouldn't it use 24bpp then?
if (bitCount == 32 && redMask == 0xFF0000 && greenMask == 0x00FF00 && blueMask == 0x0000FF)
{
// Stride is always a multiple of 4; no need to take it into account for 32bpp.
for (Int32 pix = 3; pix < image.Length; pix += 4)
{
// 0 can mean transparent, but can also mean the alpha isn't filled in, so only check for non-zero alpha,
// which would indicate there is actual data in the alpha bytes.
if (image[pix] == 0)
continue;
fmt = PixelFormat.Format32bppPArgb;
break;
}
}
else
// Could be supported with a system that parses the colour masks,
// but I don't think the clipboard ever uses these anyway.
return null;
}
Bitmap bitmap = ImageUtils.BuildImage(image, width, height, stride, fmt, null, null);
// This is bmp; reverse image lines.
bitmap.RotateFlip(RotateFlipType.Rotate180FlipX);
return bitmap;
}
catch
{
return null;
}
}
Because BitConverter always requires that dumb check on system endianness, I got my own ReadIntFromByteArray and WriteIntToByteArray in an ArrayUtils class:
public static void WriteIntToByteArray(Byte[] data, Int32 startIndex, Int32 bytes, Boolean littleEndian, UInt32 value)
{
Int32 lastByte = bytes - 1;
if (data.Length < startIndex + bytes)
throw new ArgumentOutOfRangeException("startIndex", "Data array is too small to write a " + bytes + "-byte value at offset " + startIndex + ".");
for (Int32 index = 0; index < bytes; index++)
{
Int32 offs = startIndex + (littleEndian ? index : lastByte - index);
data[offs] = (Byte)(value >> (8 * index) & 0xFF);
}
}
public static UInt32 ReadIntFromByteArray(Byte[] data, Int32 startIndex, Int32 bytes, Boolean littleEndian)
{
Int32 lastByte = bytes - 1;
if (data.Length < startIndex + bytes)
throw new ArgumentOutOfRangeException("startIndex", "Data array is too small to read a " + bytes + "-byte value at offset " + startIndex + ".");
UInt32 value = 0;
for (Int32 index = 0; index < bytes; index++)
{
Int32 offs = startIndex + (littleEndian ? index : lastByte - index);
value += (UInt32)(data[offs] << (8 * index));
}
return value;
}
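For completeness, a minimal usage sketch of the two helpers above; the ClipboardImage class name and the file path are illustrative.
using System.Drawing;
using System.Windows.Forms;

// Copy: put the transparent PNG on the clipboard as PNG, DIB and plain Bitmap at once.
using (Bitmap image = new Bitmap(@"C:\MyPath\transparent.png"))
{
    ClipboardImage.SetClipboardImage(image, null, null);
}

// Paste: try PNG first, then DIB, then the plain Bitmap/Image formats.
DataObject retrievedData = Clipboard.GetDataObject() as DataObject;
if (retrievedData != null)
{
    Bitmap pasted = ClipboardImage.GetClipboardImage(retrievedData);
    if (pasted != null)
    {
        // ...use the image with its alpha channel intact...
        pasted.Dispose();
    }
}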

Avoiding creating PictureBoxes again and again

I've got the following problem. My intention is to move several images from the right to the left in a Windows Form. The code below works quite fine. What bothers me is the fact that every time a PictureBox object is created, this procedure eats up enormous amounts of memory. Each image follows the previous image uninterruptedly from the right to the left. The images display a sky moving from one side to another. It should look like a plane's flying through the air.
How is it possible to avoid using too much memory? Is there something I can do with PaintEvent and GDI? I'm not very familiar with graphics programming.
using System;
using System.Drawing;
using System.Windows.Forms;
using System.Collections.Generic;
public class Background : Form
{
private PictureBox sky, skyMove;
private Timer moveSky;
private int positionX = 0, positionY = 0, width, height;
private List<PictureBox> consecutivePictures;
public Background(int width, int height)
{
this.width = width;
this.height = height;
// Creating Windows Form
this.Text = "THE FLIGHTER";
this.Size = new Size(width, height);
this.StartPosition = FormStartPosition.CenterScreen;
this.FormBorderStyle = FormBorderStyle.FixedSingle;
this.MaximizeBox = false;
// The movement of the sky becomes possible by the timer.
moveSky = new Timer();
moveSky.Tick += new EventHandler(moveSky_XDirection_Tick);
moveSky.Interval = 10;
moveSky.Start();
consecutivePictures = new List<PictureBox>();
skyInTheWindow();
this.ShowDialog();
}
// sky's direction of movement
private void moveSky_XDirection_Tick(object sender, EventArgs e)
{
for (int i = 0; i < 100; i++)
{
skyMove = consecutivePictures[i];
skyMove.Location = new Point(skyMove.Location.X - 6, skyMove.Location.Y);
}
}
private void skyInTheWindow()
{
for (int i = 0; i < 100; i++)
{
// Loading sky into the window
sky = new PictureBox();
sky.Image = new Bitmap("C:/MyPath/Sky.jpg");
sky.SetBounds(positionX, positionY, width, height);
this.Controls.Add(sky);
consecutivePictures.Add(sky);
positionX += width;
}
}
}
You seem to be loading the same bitmap 100 times. That's your memory problem right there, not the 100 PictureBoxes. A PictureBox has a low memory overhead, because it doesn't count the image towards its own memory consumption; it is the referenced Bitmap that is much more likely to consume large amounts of memory.
It's easily fixed: load the bitmap once and then apply it to all your PictureBoxes.
Change:
private void skyInTheWindow()
{
for (int i = 0; i < 100; i++)
{
// Loading sky into the window
sky = new PictureBox();
sky.Image = new Bitmap("C:/MyPath/Sky.jpg");
sky.SetBounds(positionX, positionY, width, height);
this.Controls.Add(sky);
consecutivePictures.Add(sky);
positionX += width;
}
}
...to:
private void skyInTheWindow()
{
var bitmap = new Bitmap("C:/MyPath/Sky.jpg"); // load it once
for (int i = 0; i < 100; i++)
{
// Loading sky into the window
sky = new PictureBox();
sky.Image = bitmap; // now all picture boxes share same image, thus less memory
sky.SetBounds(positionX, positionY, width, height);
this.Controls.Add(sky);
consecutivePictures.Add(sky);
positionX += width;
}
}
You could just have a single PictureBox stretched to the width of the background but shift it over time. Of course you'll need to draw something on the edge where a gap would appear.
You might get a bit of flicker with repeated PictureBox moves though, which is one of the things I'd be worried about, but it might still serve.
Or, what I'd do: create a UserControl, override OnPaint, and turn it into a bitmap-drawing exercise with no PictureBoxes at all. Much faster, more efficient, and no flicker. :) This is purely optional.
You have the potential to eliminate any flicker too if you draw first to an offscreen Graphics and Bitmap and "bitblit" the results to the visible screen.
Would you mind giving me some code to serve as a point of reference? It's hard for me to implement this, as I'm not very familiar with graphics programming, and I really want to learn. The code without flickering would be better.
As requested I have included the code below:
Flicker Free Offscreen Rendering UserControl
Essentially what this does is to create an offscreen bitmap that we will draw into first. It is the same size as the UserControl. The control's OnPaint calls DrawOffscreen passing in the Graphics that is attached to the offscreen bitmap. Here we loop around just rendering the tiles/sky that are visible and ignoring others so as to improve performance.
Once it's all done we zap the entire offscreen bitmap to the display in one operation. This serves to eliminate:
Flicker
Tearing effects (typically associated with lateral movement)
There is a Timer that is scheduled to update the positions of all the tiles based on the time since the last update. This allows for a more realistic movement and avoids speed-ups and slow-downs under load. Tiles are moved in the OnUpdate method.
Some important properties:
DesiredFps - desired frames/second. This directly controls how frequently the OnUpdate method is called. It does not directly control how frequently OnPaint is called
NumberOfTiles - I've set it to your 100 (cloud images)
Speed - the speed in pixels/second at which the bitmaps move. Tied to DesiredFps. This is a load-independent, computer-performance-independent value
Painting
If you look at the code for Timer1OnTick, I call Invalidate(Bounds); after animating everything. This does not cause an immediate paint; rather, Windows will queue a paint operation to be done at a later time, and consecutive pending operations will be fused into one. This means that under heavy load we can be animating positions more frequently than painting. The animation mechanic is independent of painting, and that's a good thing: you don't want to be waiting for paints to occur.
You will note that I override OnPaintBackground and essentially do nothing. I do this because I don't want .NET to erase the background and cause unnecessary flicker before my OnPaint is called. I don't even bother erasing the background in DrawOffscreen, because we're just going to draw bitmaps over it anyway. However, if the control can be resized larger than the height of the sky bitmap and that matters to you, then you may want to. The performance hit is pretty negligible, I suppose, when you are already drawing multiple sky bitmaps anyway.
When you build the code, you can plonk it on any Form. The control will be visible in the Toolbox. Below I have plonked it on my MainForm.
The control also demonstrates design-time properties and defaults which you can see below. These are the settings that seem to work well for me. Try changing them for different effects.
If you dock the control and your form is resizable then you can resize the app at runtime. Useful for measuring performance. WinForms is not particularly hardware-accelerated (unlike WPF) so I wouldn't recommend the window to be too large.
Code:
#region
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Diagnostics;
using System.Drawing;
using System.Linq;
using System.Windows.Forms;
using SkyAnimation.Properties;
#endregion
namespace SkyAnimation
{
/// <summary>
/// </summary>
public partial class NoFlickerControl : UserControl
{
#region Fields
private readonly List<RectangleF> _tiles = new List<RectangleF>();
private DateTime _lastTick;
private Bitmap _offscreenBitmap;
private Graphics _offscreenGraphics;
private Bitmap _skyBitmap;
#endregion
#region Constructor
public NoFlickerControl()
{
// set defaults first
DesiredFps = Defaults.DesiredFps;
NumberOfTiles = Defaults.NumberOfTiles;
Speed = Defaults.Speed;
InitializeComponent();
if (DesignMode)
{
return;
}
_lastTick = DateTime.Now;
timer1.Tick += Timer1OnTick;
timer1.Interval = 1000 / DesiredFps; // How frequently do we want to recalc positions
timer1.Enabled = true;
}
#endregion
#region Properties
/// <summary>
/// This controls how often we recalculate object positions
/// </summary>
/// <remarks>
/// This can be independent of the rendering FPS
/// </remarks>
/// <value>
/// The frames per second.
/// </value>
[DefaultValue(Defaults.DesiredFps)]
public int DesiredFps { get; set; }
[DefaultValue(Defaults.NumberOfTiles)]
public int NumberOfTiles { get; set; }
/// <summary>
/// Gets or sets the sky to draw.
/// </summary>
/// <value>
/// The sky.
/// </value>
[Browsable(false)]
public Bitmap Sky { get; set; }
/// <summary>
/// Gets or sets the speed in pixels/second.
/// </summary>
/// <value>
/// The speed.
/// </value>
[DefaultValue(Defaults.Speed)]
public float Speed { get; set; }
#endregion
#region Methods
private void HandleResize()
{
// the control has resized, time to recreate our offscreen bitmap
// and graphics context
if (Width == 0
|| Height == 0)
{
// nothing to do here
return;
}
_offscreenBitmap = new Bitmap(Width, Height);
_offscreenGraphics = Graphics.FromImage(_offscreenBitmap);
}
private void NoFlickerControl_Load(object sender, EventArgs e)
{
SkyInTheWindow();
HandleResize();
}
private void NoFlickerControl_Resize(object sender, EventArgs e)
{
HandleResize();
}
/// <summary>
/// Handles the SizeChanged event of the NoFlickerControl control.
/// </summary>
/// <param name="sender">The source of the event.</param>
/// <param name="e">The <see cref="EventArgs" /> instance containing the event data.</param>
private void NoFlickerControl_SizeChanged(object sender, EventArgs e)
{
HandleResize();
}
/// <summary>
/// Raises the <see cref="E:System.Windows.Forms.Control.Paint" /> event.
/// </summary>
/// <param name="e">A <see cref="T:System.Windows.Forms.PaintEventArgs" /> that contains the event data. </param>
protected override void OnPaint(PaintEventArgs e)
{
var g = e.Graphics;
var rc = e.ClipRectangle;
if (_offscreenBitmap == null
|| _offscreenGraphics == null)
{
g.FillRectangle(Brushes.Gray, rc);
return;
}
DrawOffscreen(_offscreenGraphics, ClientRectangle);
g.DrawImageUnscaled(_offscreenBitmap, 0, 0);
}
private void DrawOffscreen(Graphics g, RectangleF bounds)
{
// We don't care about erasing the background because we're
// drawing over it anyway
//g.FillRectangle(Brushes.White, bounds);
//g.SetClip(bounds);
foreach (var tile in _tiles)
{
if (!(bounds.Contains(tile) || bounds.IntersectsWith(tile)))
{
continue;
}
g.DrawImageUnscaled(_skyBitmap, new Point((int) tile.Left, (int) tile.Top));
}
}
/// <summary>
/// Paints the background of the control.
/// </summary>
/// <param name="e">A <see cref="T:System.Windows.Forms.PaintEventArgs" /> that contains the event data.</param>
protected override void OnPaintBackground(PaintEventArgs e)
{
// NOP
// We don't care painting the background here because
// 1. we want to do it offscreen
// 2. the background is the picture anyway
}
/// <summary>
/// Responsible for updating/translating game objects, not drawing
/// </summary>
/// <param name="totalMillisecondsSinceLastUpdate">The total milliseconds since last update.</param>
/// <remarks>
/// It is worth noting that OnUpdate could be called more times per
/// second than OnPaint. This is fine. It's generally a sign that
/// rendering is just taking longer but we are able to compensate by
/// tracking time since last update
/// </remarks>
private void OnUpdate(double totalMillisecondsSinceLastUpdate)
{
// Remember that we measure speed in pixels per second, hence the
// totalMillisecondsSinceLastUpdate
// This allows us to have smooth animations and to compensate when
// rendering takes longer for certain frames
for (int i = 0; i < _tiles.Count; i++)
{
var tile = _tiles[i];
tile.Offset((float)(-Speed * totalMillisecondsSinceLastUpdate / 1000f), 0);
_tiles[i] = tile;
}
}
private void SkyInTheWindow()
{
_tiles.Clear();
// here I load the bitmap from my embedded resource
// but you easily could just do a new Bitmap ("C:/MyPath/Sky.jpg");
_skyBitmap = Resources.sky400x400;
var bounds = new Rectangle(0, 0, _skyBitmap.Width, _skyBitmap.Height);
for (var i = 0; i < NumberOfTiles; i++)
{
// Loading sky into the window
_tiles.Add(bounds);
bounds.Offset(bounds.Width, 0);
}
}
private void Timer1OnTick(object sender, EventArgs eventArgs)
{
if (DesignMode)
{
return;
}
var elapsed = DateTime.Now - _lastTick;
OnUpdate(elapsed.TotalMilliseconds);
_lastTick = DateTime.Now;
// queue a repaint
// It's important to realise that repaints are queued and fused
// together if the message pump gets busy
// In other words, there may not be a 1:1 of OnUpdate : OnPaint
Invalidate(Bounds);
}
#endregion
}
public static class Defaults
{
public const int DesiredFps = 30;
public const int NumberOfTiles = 100;
public const float Speed = 300f;
}
}
This isn't directly an answer to the question - I think the memory problem is primarily caused by all the Bitmap images you're creating; you should only create one, and then the problem goes away.
What I'm suggesting here is an alternative way of coding this that cuts the code enormously.
All of my code goes straight in your Background constructor after the line this.MaximizeBox = false;. Everything after that is removed.
So start with loading the image:
var image = new Bitmap(@"C:\MyPath\Sky.jpg");
Next, work out how many picture boxes are needed to tile the image across the form, based on the width and height passed in:
var countX = width / image.Width + 2;
var countY = height / image.Height + 2;
Now create the actual picture boxes that will populate the screen:
var pictureBoxData =
(
from x in Enumerable.Range(0, countX)
from y in Enumerable.Range(0, countY)
let positionX = x * image.Width
let positionY = y * image.Height
let pictureBox = new PictureBox()
{
Image = image,
Location = new Point(positionX, positionY),
Size = new Size(image.Width, image.Height),
}
select new
{
positionX,
positionY,
pictureBox,
}
).ToList();
Next, add them all to the Controls collection:
pictureBoxData.ForEach(pbd => this.Controls.Add(pbd.pictureBox));
Finally, use Microsoft's Reactive Framework (NuGet Rx-WinForms) to create a timer that will update the Left position of the picture boxes:
var subscription =
Observable
.Generate(
0,
n => true,
n => n >= image.Width ? 0 : n + 1,
n => n,
n => TimeSpan.FromMilliseconds(10.0))
.ObserveOn(this)
.Subscribe(n =>
{
pictureBoxData
.ForEach(pbd => pbd.pictureBox.Left = pbd.positionX - n);
});
Then, before launching the dialog, we need a way to clean up all of the above so that the form closes cleanly. Do this:
var disposable = new CompositeDisposable(image, subscription);
this.FormClosing += (s, e) => disposable.Dispose();
Now you can do the ShowDialog:
this.ShowDialog();
And that's it.
Apart from nugetting Rx-WinForms, you need to add the following using statements to the top of the code:
using System.Reactive.Linq;
using System.Reactive.Disposables;
It all worked nicely for me.
The variable names haven't been translated into English; I nevertheless hope it's understandable for all of you.
using System;
using System.Drawing;
using System.Windows.Forms;
using System.Collections.Generic;
/// <summary>
/// Scrolling Background - moving background
/// </summary>
public class ScrollingBackground : Form
{
/* this = foreign attributes and methods,
 * without this = own attributes and methods
 */
private PictureBox picBoxImage;
private PictureBox[] listPicBoxAufeinanderfolgendeImages;
private Timer timerBewegungImage;
private const int constIntAnzahlImages = 2,
constIntInterval = 1,
constIntPositionY = 0;
private int intPositionX = 0,
intFeinheitDerBewegungen,
intBreite,
intHoehe;
private string stringTitel,
stringBildpfad;
// Constructor of the background class
/// <summary>
/// Initializes a new instance of the background class using the given integers and strings.
/// A Windows Form is created that can display an inserted image as a moving background.
/// </summary>
/// <param name="width">Specifies the width of the window and automatically adjusts the width of the background inside it.</param>
/// <param name="height">Specifies the height of the window and automatically adjusts the height of the background inside it.</param>
/// <param name="speed">Speed of the images</param>
/// <param name="title">Title of the window</param>
/// <param name="path">Path of the image that serves as the background</param>
public ScrollingBackground(int width, int height, int speed, string title, string path)
{
// Users of the class can set the values
intBreite = width;
intHoehe = height;
intFeinheitDerBewegungen = speed;
stringTitel = title;
stringBildpfad = path;
// The Windows Form is created
this.Text = title;
this.Size = new Size(this.intBreite, this.intHoehe);
this.StartPosition = FormStartPosition.CenterScreen;
this.FormBorderStyle = FormBorderStyle.FixedSingle;
this.MaximizeBox = false;
// The movement of the image is made possible by the timer.
timerBewegungImage = new Timer();
timerBewegungImage.Tick += new EventHandler(bewegungImage_XRichtung_Tick);
timerBewegungImage.Interval = constIntInterval;
timerBewegungImage.Start();
listPicBoxAufeinanderfolgendeImages = new PictureBox[2];
imageInWinFormLadenBeginn();
this.ShowDialog();
}
// Direction in which the image moves
private void bewegungImage_XRichtung_Tick(object sender, EventArgs e)
{
for (int i = 0; i < constIntAnzahlImages; i++)
{
picBoxImage = listPicBoxAufeinanderfolgendeImages[i];
// Flicker reduction - minimizes the flicker between two images
this.DoubleBuffered = true;
// The images are moved in the X direction
picBoxImage.Location = new Point(picBoxImage.Location.X - intFeinheitDerBewegungen, picBoxImage.Location.Y);
// Joining the two identical images creates the effect of an endlessly scrolling image
if (listPicBoxAufeinanderfolgendeImages[1].Location.X <= 0)
{
imageInWinFormLadenFortsetzung();
}
}
}
// Two PictureBoxes, each holding the same image, are created
private void imageInWinFormLadenBeginn()
{
Bitmap bitmapImage = new Bitmap(stringBildpfad);
for (int i = 0; i < constIntAnzahlImages; i++)
{
// The image is loaded into the window
picBoxImage = new PictureBox();
picBoxImage.Image = bitmapImage;
// Determines the position and size of the image
picBoxImage.SetBounds(intPositionX, constIntPositionY, intBreite, intHoehe);
this.Controls.Add(picBoxImage);
listPicBoxAufeinanderfolgendeImages[i] = picBoxImage;
// The two PictureBoxes with identical images are placed next to each other
intPositionX += intBreite;
}
}
// Reusing the PictureBoxes
private void imageInWinFormLadenFortsetzung()
{
// The first PictureBox is reset to its initial position "0" - keeps the images scrolling endlessly
picBoxImage = listPicBoxAufeinanderfolgendeImages[0];
picBoxImage.SetBounds(intPositionX = 0, constIntPositionY, intBreite, intHoehe);
// The second PictureBox is reset to its initial position "intBreite" - keeps the images scrolling endlessly
picBoxImage = listPicBoxAufeinanderfolgendeImages[1];
picBoxImage.SetBounds(intPositionX = intBreite, constIntPositionY, intBreite, intHoehe);
}
}
Regards,
Lucky Buggy

Flipped Bitmap from Twain

I'm currently getting scanned pages through Twain and transforming the pages to Bitmap, using the BitmapRenderer of twaindotnet project, as described in this post.
My scanner allows me to scan recto and verso.
When I scan recto-only pages, it works like a charm: the generated bitmaps are perfect. But when I scan recto-verso, the bitmaps come out flipped, sometimes vertically, sometimes horizontally.
I can't simply use the Bitmap.RotateFlip() method, because the effect doesn't apply to every picture, only to recto-verso pages.
I've tried the Bitmap.FromHbitmap() approach described here, and the default constructor, but they throw a GDI+ related error.
I'm pretty sure the issue lies where the bitmap is converted from the pointer, in the BitmapRenderer class. Here is the code (I did not include the Dispose() methods, for clarity):
public class BitmapRenderer : IDisposable
{
private readonly IntPtr _picturePointer;
private readonly IntPtr _bitmapPointer;
private readonly IntPtr _pixelInfoPointer;
private Rectangle _rectangle;
private readonly BitmapInfoHeader _bitmapInfo;
/// <summary>
/// Initializes a new instance of the <see cref="BitmapRenderer"/> class.
/// </summary>
/// <param name="picturePointer_">The picture pointer.</param>
public BitmapRenderer(IntPtr picturePointer_)
{
_picturePointer = picturePointer_;
_bitmapPointer = Kernel32Native.GlobalLock(picturePointer_);
_bitmapInfo = new BitmapInfoHeader();
Marshal.PtrToStructure(_bitmapPointer, _bitmapInfo);
_rectangle = new Rectangle();
_rectangle.X = _rectangle.Y = 0;
_rectangle.Width = _bitmapInfo.Width;
_rectangle.Height = _bitmapInfo.Height;
if (_bitmapInfo.SizeImage == 0)
{
_bitmapInfo.SizeImage = ((((_bitmapInfo.Width*_bitmapInfo.BitCount) + 31) & ~31) >> 3)*
_bitmapInfo.Height;
}
// The following code only works on x86
Debug.Assert(Marshal.SizeOf(typeof (IntPtr)) == 4);
int pixelInfoPointer = _bitmapInfo.ClrUsed;
if ((pixelInfoPointer == 0) && (_bitmapInfo.BitCount <= 8))
pixelInfoPointer = 1 << _bitmapInfo.BitCount;
pixelInfoPointer = (pixelInfoPointer*4) + _bitmapInfo.Size + _bitmapPointer.ToInt32();
_pixelInfoPointer = new IntPtr(pixelInfoPointer);
}
/// <summary>
/// Renders to bitmap.
/// </summary>
/// <returns></returns>
public Bitmap RenderToBitmap()
{
Bitmap bitmap = new Bitmap(_rectangle.Width, _rectangle.Height);
using (Graphics graphics = Graphics.FromImage(bitmap))
{
IntPtr hdc = graphics.GetHdc();
try
{
Gdi32Native.SetDIBitsToDevice(hdc, 0, 0, _rectangle.Width, _rectangle.Height,
0, 0, 0, _rectangle.Height, _pixelInfoPointer, _bitmapPointer, 0);
}
finally
{
graphics.ReleaseHdc(hdc);
}
}
bitmap.SetResolution(PpmToDpi(_bitmapInfo.XPelsPerMeter), PpmToDpi(_bitmapInfo.YPelsPerMeter));
return bitmap;
}
private static float PpmToDpi(double pixelsPerMeter_)
{
double pixelsPerMillimeter = pixelsPerMeter_/1000.0;
double dotsPerInch = pixelsPerMillimeter*25.4;
return (float) Math.Round(dotsPerInch, 2);
}
I don't understand where this comes from or how to solve it.
EDIT
Well, it appears this situation is not related to the TWAIN conversion to bitmap at all (the issue is not in the twaindotnet project).
It only occurs with handwritten pages. This is an automatic OCR issue.
Does anyone know how to disable OCR for handwritten documents?

Can you tell me how to detect an event from a sound when part of its waveform or spectrum exceeds a specific threshold? [duplicate]

Possible Duplicate:
How to catch the event when spectrum of an audio reached a specific height, like triggered event made by a loud sound?
I want to detect, for example, a beat or a loud sound in an audio file. All the modules are working, except that I don't know how to code the detection. Some say I should iterate over the spectrum data and record the parts where there is a loud sound or a beat.
I'll show you the code of my FFT; I got it from NAudio. Can you show me whether I can detect an event here?
For example:
if (waveLeft[i] > amplitudeThreshold || waveLeft[i] < -amplitudeThreshold)
    listBox.Items.Add(ActiveStream.CurrentTime);
That's the idea.
So here's the code.
public SampleAggregator(int bufferSize)
{
channelData = new Complex[bufferSize];
}
public void Clear()
{
volumeLeftMaxValue = float.MinValue;
volumeRightMaxValue = float.MinValue;
volumeLeftMinValue = float.MaxValue;
volumeRightMinValue = float.MaxValue;
channelDataPosition = 0;
}
/// <summary>
/// Add a sample value to the aggregator.
/// </summary>
/// <param name="value">The value of the sample.</param>
public void Add(float leftValue, float rightValue)
{
if (channelDataPosition == 0)
{
volumeLeftMaxValue = float.MinValue;
volumeRightMaxValue = float.MinValue;
volumeLeftMinValue = float.MaxValue;
volumeRightMinValue = float.MaxValue;
}
// Make the stored channel data mono by averaging the left and right values.
channelData[channelDataPosition].X = (leftValue + rightValue) / 2.0f;
channelData[channelDataPosition].Y = 0;
channelDataPosition++;
volumeLeftMaxValue = Math.Max(volumeLeftMaxValue, leftValue);
volumeLeftMinValue = Math.Min(volumeLeftMinValue, leftValue);
volumeRightMaxValue = Math.Max(volumeRightMaxValue, rightValue);
volumeRightMinValue = Math.Min(volumeRightMinValue, rightValue);
if (channelDataPosition >= channelData.Length)
{
channelDataPosition = 0;
}
}
/// <summary>
/// Performs an FFT calculation on the channel data upon request.
/// </summary>
/// <param name="fftBuffer">A buffer where the FFT data will be stored.</param>
public void GetFFTResults(float[] fftBuffer)
{
Complex[] channelDataClone = new Complex[4096];
channelData.CopyTo(channelDataClone, 0);
// 4096 = 2^12
FastFourierTransform.FFT(true, 12, channelDataClone);
for (int i = 0; i < channelDataClone.Length / 2; i++)
{
// Calculate actual intensities for the FFT results.
fftBuffer[i] = (float)Math.Sqrt(channelDataClone[i].X * channelDataClone[i].X + channelDataClone[i].Y * channelDataClone[i].Y);
}
}
Thank you for the help. :)
The basic idea:
You segment your stream of waveform samples into time slices and convert each slice to the frequency domain with your FFT.
Then you have a stream of FFT frames in which you check each channel (bin) for peaks. You'll need a lastValue, and maybe even a little state machine, per bin.
The FFT width and the peak level need to be configured.
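A minimal sketch of that idea, assuming the fftBuffer filled by GetFFTResults above is passed in once per time slice; the Threshold value, the listBox target and the DetectPeaks name are illustrative, not part of NAudio.
private float[] lastValue;              // previous frame's magnitude per FFT bin
private const float Threshold = 0.1f;   // tune for your material

private void DetectPeaks(float[] fftBuffer, TimeSpan currentTime)
{
    if (lastValue == null || lastValue.Length != fftBuffer.Length)
        lastValue = new float[fftBuffer.Length];

    for (int bin = 0; bin < fftBuffer.Length; bin++)
    {
        bool wasBelow = lastValue[bin] < Threshold;
        bool isAbove = fftBuffer[bin] >= Threshold;

        // Fire only on the rising edge, so one loud sound produces one event.
        if (wasBelow && isAbove)
        {
            listBox.Items.Add(currentTime); // e.g. ActiveStream.CurrentTime
        }

        lastValue[bin] = fftBuffer[bin];
    }
}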
