Recording RGB stream from the Kinect sensor - C#

I'm building a WPF application, and one of its functions is to record video (only the RGB stream) from the Kinect sensor (using AForge and Kinect SDK 1.5).
In my application I have a button that, when clicked, should save the video stream into an AVI file.
I've added the references and copied all the .dll files into my project's folder (as explained on other forums), but for some reason I receive this error:
{"Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information.":null}
So inside private void button4_Click(object sender, RoutedEventArgs e) I have the following code:
int width = 640;
int height = 480;
// create instance of video writer
VideoFileWriter writer = new VideoFileWriter();
// create new video file
writer.Open("test.avi", width, height, 25, VideoCodec.MPEG4);
// create a bitmap to save into the video file
Bitmap image = new Bitmap(width, height, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
for (int i = 0; i < 1000; i++)
{
    image.SetPixel(i % width, i % height, Color.Red);
    writer.WriteVideoFrame(image);
}
writer.Close();
I would really appreciate your help. I'm also flexible about how to record the RGB stream (if you recommend another way), as long as it's not complicated, because I'm new to C#.

The reason the video is red is that you are turning it red with:
for (int i = 0; i < 1000; i++)
{
    image.SetPixel(i % width, i % height, Color.Red);
    writer.WriteVideoFrame(image);
}
What you should do is convert the BitmapSource/WriteableBitmap* (assuming you are displaying the Kinect's data with a BitmapSource or WriteableBitmap) to a System.Drawing.Bitmap. Then you can just add that bitmap as your video frame. Hope this helps!
*If you are using a WriteableBitmap, convert it to a BitmapImage, then convert that to a Bitmap.
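A minimal sketch of that conversion, assuming the Kinect frame is displayed through a WPF BitmapSource/WriteableBitmap named colorBitmap (the name is illustrative, not from the question): encode the WPF bitmap to a PNG in memory, then rehydrate it as a System.Drawing.Bitmap that VideoFileWriter can consume.
// Sketch only: convert a WPF BitmapSource (or WriteableBitmap) to a GDI+ Bitmap.
private static System.Drawing.Bitmap ToBitmap(System.Windows.Media.Imaging.BitmapSource source)
{
    var encoder = new System.Windows.Media.Imaging.PngBitmapEncoder();
    encoder.Frames.Add(System.Windows.Media.Imaging.BitmapFrame.Create(source));
    using (var stream = new System.IO.MemoryStream())
    {
        encoder.Save(stream);
        stream.Position = 0;
        using (var temp = new System.Drawing.Bitmap(stream))
        {
            // Copy, so the returned Bitmap no longer depends on the stream's lifetime.
            return new System.Drawing.Bitmap(temp);
        }
    }
}
// Usage inside the frame callback:
//   using (var frame = ToBitmap(colorBitmap))
//       writer.WriteVideoFrame(frame);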

Related

Is there a way to keep the alpha channel when compiling BMPs to a video file in C#?

I'm trying to record an animation while keeping the transparent background.
I have an animation running in a PictureBox with a transparent background, using Windows Forms. I convert every frame to a BMP with an alpha channel and then try to stitch the frames together to obtain a video that still has the transparent background.
As an example I have 60 frames of a square figure moving to the right.
To create the video I've used the Accord Video library (Accord.Video.FFMPEG).
public Form1()
{
    InitializeComponent();
    pictureBox1.Paint += new PaintEventHandler(PictureBox1_Paint);
    // create instance of video writer
    VideoFileWriter writer = new VideoFileWriter();
    // create new video file
    writer.Open("test.avi", pictureBox1.ClientSize.Width, pictureBox1.ClientSize.Height, 60, VideoCodec.H264);
    // create a bitmap to save into the video file
    Bitmap bmp = new Bitmap(pictureBox1.ClientSize.Width, pictureBox1.ClientSize.Height, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
    for (int i = 0; i < 60; ++i)
    {
        pictureBox1.Invalidate();
        pictureBox1.DrawToBitmap(bmp, pictureBox1.ClientRectangle);
        writer.WriteVideoFrame(bmp);
        this.x += 2;
    }
    writer.Close();
}

private void PictureBox1_Paint(object sender, PaintEventArgs e)
{
    Graphics g = e.Graphics;
    g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
    SolidBrush brush = new SolidBrush(Color.Blue);
    g.FillRectangle(brush, new Rectangle(new Point(x, 50), new Size(50, 50)));
}

private int x = 50;
From what I've read the H.264 codec supports an alpha channel, but the result is that it just stacks every single frame on top of each other and the background turns black (instead of transparent).
[screenshot of the result]
As Anon Coward said, the H.264 codec doesn't support an alpha channel.
Sadly enough, none of the codecs in the Accord Video library support an alpha channel, so I resorted to writing all the frames to a directory and then using FFmpeg to compile them into a video file. I used the PNG video codec.
TaW also pointed out that I was confusing the Bitmap with bmp, which I fixed by drawing everything to a Bitmap and using that Bitmap for the frames. To draw on the PictureBox I then used DrawImage.
Thanks for the help!
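For reference, a minimal sketch of that workaround (the frame size, output directory, and loop bounds are illustrative, not from the answer): render each frame into a 32bpp ARGB Bitmap, save it as a PNG so the alpha channel survives, then stitch the files afterwards with FFmpeg.
// Sketch only: dump the animation as 32bpp PNG frames (assumes a "frames" directory exists).
int x = 50;
for (int i = 0; i < 60; ++i)
{
    using (var frame = new Bitmap(300, 150, System.Drawing.Imaging.PixelFormat.Format32bppArgb))
    using (var g = Graphics.FromImage(frame))
    {
        g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
        g.FillRectangle(Brushes.Blue, new Rectangle(new Point(x, 50), new Size(50, 50)));
        frame.Save(System.IO.Path.Combine("frames", $"frame_{i:D3}.png"), System.Drawing.Imaging.ImageFormat.Png);
    }
    x += 2;
}
// Then stitch outside the program with FFmpeg's PNG codec, e.g.:
//   ffmpeg -framerate 60 -i frames/frame_%03d.png -c:v png out.mov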

Reversed bitmap when creating AVI container

I have a list of images in my program, and I am generating an AVI video from them. For that purpose I use the avifilewrapper_src library, which handles the creation of the video.
The creation process is:
Bitmap bitmap;
// load the first image
bitmap = (Bitmap)imageSequence[0];
// create a new AVI file
AviManager aviManager = new AviManager(paths.outputVideo, false);
// add a new video stream and one frame to the new file
VideoStream aviStream =
    aviManager.AddVideoStream(true, (double)nud_picturePerSec.Value, bitmap);
if (chb_audio.Checked)
    aviManager.AddAudioStream(paths.sampleAudio, 0);
int count = 0;
for (int n = 0; n < imageSequence.Count; n++)
{
    bitmap = (Bitmap)imageSequence[n];
    aviStream.AddFrame(bitmap);
    bitmap.Dispose();
    count++;
}
aviManager.Close();
If I keep giving it different images, it works fine. If, however, I put in two similar images, then the video shows the second image upside down (the left/right orientation is correct). By two similar images I mean creating the second image by copying it from the first one.
I have a feeling that this is somehow related to streams, but I can't find out why the images are inverted.
Well, I didn't manage to find the cause of that behavior, but flipping the bitmap between each use corrects it well:
bitmap.RotateFlip(RotateFlipType.RotateNoneFlipY);
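Applied to the loop above, that looks like the following sketch (one plausible cause, not confirmed in the thread, is that uncompressed DIB frames in an AVI are stored bottom-up, so a frame can come out vertically flipped):
for (int n = 0; n < imageSequence.Count; n++)
{
    bitmap = (Bitmap)imageSequence[n];
    // flip vertically before handing the frame to the AVI stream
    bitmap.RotateFlip(RotateFlipType.RotateNoneFlipY);
    aviStream.AddFrame(bitmap);
    bitmap.Dispose();
}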

Insufficient buffer size using WriteableBitmap?

I am modifying the ColorBasic Kinect example in order to display an image overlaid on the video stream. So what I've done is load an image with a transparent background (currently a GIF, but that may change) and write it to the displayed bitmap.
The error I'm getting is that the buffer I'm writing to is too small.
I cannot see what the actual error is (I'm a complete newbie with XAML/C#/Kinect), but the WriteableBitmap is 1920x1080 and the bitmap I want to copy is 200x200, so why am I getting this error? I cannot see how a transparent background could do any harm, but I am beginning to suspect that...
Note that without the last WritePixels the code works and I see the webcam's output. My code follows.
The overlay image:
public BitmapImage overlay = new BitmapImage(new Uri("C:\\users\\user\\desktop\\something.gif"));
The callback function that displays the Kinect's webcam (see the default example ColorBasic) with my very small modifications:
private void Reader_ColorFrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
    // ColorFrame is IDisposable
    using (ColorFrame colorFrame = e.FrameReference.AcquireFrame())
    {
        if (colorFrame != null)
        {
            FrameDescription colorFrameDescription = colorFrame.FrameDescription;
            using (KinectBuffer colorBuffer = colorFrame.LockRawImageBuffer())
            {
                this.colorBitmap.Lock();
                // verify data and write the new color frame data to the display bitmap
                if ((colorFrameDescription.Width == this.colorBitmap.PixelWidth) && (colorFrameDescription.Height == this.colorBitmap.PixelHeight))
                {
                    colorFrame.CopyConvertedFrameDataToIntPtr(
                        this.colorBitmap.BackBuffer,
                        (uint)(colorFrameDescription.Width * colorFrameDescription.Height * 4),
                        ColorImageFormat.Bgra);
                    this.colorBitmap.AddDirtyRect(new Int32Rect(0, 0, this.colorBitmap.PixelWidth, this.colorBitmap.PixelHeight));
                }
                if (this.overlay != null)
                {
                    // Calculate stride of source
                    int stride = overlay.PixelWidth * (overlay.Format.BitsPerPixel / 8);
                    // Create data array to hold source pixel data
                    byte[] data = new byte[stride * overlay.PixelHeight];
                    // Copy source image pixels to the data array
                    overlay.CopyPixels(data, stride, 0);
                    this.colorBitmap.WritePixels(new Int32Rect(0, 0, overlay.PixelWidth, overlay.PixelHeight), data, stride, 0);
                }
                this.colorBitmap.Unlock();
            }
        }
    }
}
Your overlay.Format.BitsPerPixel / 8 will be 1 (because it's a GIF), but you're trying to copy it into something that is not a GIF, probably BGRA (32-bit). Thus you get a huge difference in size (4x).
WritePixels should take the stride value of the destination buffer, but you passed it the stride value of the overlay (this can cause weird problems as well).
And finally, even if it all went 100% smoothly, your overlay would not actually "overlay" anything; it would replace, since I don't see any alpha blending math in your code.
Switch your .gif to a .png (32bit) and see if that helps.
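A minimal sketch of that idea, assuming WPF's FormatConvertedBitmap is used to bring the overlay into the destination's Bgra32 layout before copying, so the stride and buffer size line up with what WritePixels expects:
// Sketch only: normalize the overlay to Bgra32 before CopyPixels/WritePixels.
var converted = new FormatConvertedBitmap(overlay, PixelFormats.Bgra32, null, 0);
int stride = converted.PixelWidth * 4; // 4 bytes per Bgra32 pixel
byte[] data = new byte[stride * converted.PixelHeight];
converted.CopyPixels(data, stride, 0);
this.colorBitmap.WritePixels(
    new Int32Rect(0, 0, converted.PixelWidth, converted.PixelHeight),
    data, stride, 0);
Note that this still replaces the destination pixels rather than blending with them, as mentioned above.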
Also, if you're looking for AlphaBltMerge-type code, I wrote the entire thing here; it's very easy to understand:
Merge 2 - 32bit Images with Alpha Channels

Zxing weird image reverse

I am writing a C# library for very simple image recognition, to be used in MonoDroid, and I am using the ZXing port to C#. After I read the image bytes from the file, I do the following, the same as in ZXing barcode scanning:
binaryBitmap = new BinaryBitmap(new HybridBinarizer(new RGBLuminanceSource(rawRgb, width, height, format)));
But somehow it flips the image vertically (I am just saving binaryBitmap to a file as a bitmap, pixel by pixel).
Please help me understand why this happens. What am I doing wrong?
@Michael: I am using the ZXing.Net.Mobile port from https://github.com/Redth/ZXing.Net.Mobile. It's very weird: if I use PlanarYUVLuminanceSource, I get this image: http://i.imgur.com/OlwqC0I.png, but if I use RGBLuminanceSource, I get an almost normal full image (see the example image). So now I have two questions:
Why does the planar source take only part of the image and produce a "layer on layer" effect?
If I use RGBLuminanceSource, why is there some inversion of colors? I mean, in some places the rectangles' borders are black and in others they are white, while in the real image they are all black.
UPDATE:
Here is how I get the bytes from the device; as you can see, I set the NV21 format, so it must be YUV, no? I wonder what I am doing wrong that makes the RGB source work (at least the image is OK) while PlanarYUV does not. :((
BTW, the original bytes from the preview frame give the same result and the same file size.
Any suggestions?
public void OnPreviewFrame(byte[] bytes, Android.Hardware.Camera camera)
{
    var img = new YuvImage(bytes, ImageFormatType.Nv21, cameraParameters.PreviewSize.Width, cameraParameters.PreviewSize.Height, null);
    string _fileName2 = "YUV_BYtes_" + DateTime.Now.Ticks + ".txt";
    string pathToFile2 = Path.Combine(Android.OS.Environment.ExternalStorageDirectory.AbsolutePath, _fileName2);
    using (var fileStream = new FileStream(pathToFile2, FileMode.Append, FileAccess.Write, FileShare.None))
    {
        fileStream.Write(img.GetYuvData(), 0, img.GetYuvData().Length);
    }
}

public void SurfaceChanged(ISurfaceHolder holder, global::Android.Graphics.Format format, int width, int height)
{
    if (camera == null)
        return;
    var parameters = camera.GetParameters();
    width = parameters.PreviewSize.Width;
    height = parameters.PreviewSize.Height;
    parameters.PreviewFormat = ImageFormatType.Nv21;
    //parameters.PreviewFrameRate = 15;
    //this.height = size.height;
    //this.width = size.width;
    //camera.setParameters( params );
    //parameters.PreviewFormat = ImageFormatType.;
    camera.SetParameters(parameters);
    camera.SetDisplayOrientation(90);
    camera.StartPreview();
    cameraResolution = new Size(parameters.PreviewSize.Width, parameters.PreviewSize.Height);
    AutoFocus();
}
I think I know what you have done. The data looks like RGB565 bitmap data (or something similar). You can't put such a byte array into the PlanarYUVLuminanceSource. You have to make sure that the byte array you use with the planar source is really an array containing only YUV data, not RGB565.
The rules are easy:
if you use the following code snippet
new RGBLuminanceSource(rawRgb, width, height, format)
make sure that the value of format matches the layout and data of the parameter rawRgb.
if you use something like the following
new PlanarYUVLuminanceSource(yuvBytes, 640, 960, 0, 0, 640, 960, false);
make sure that yuvBytes only contains real yuv data.
I can only give a better answer if you post a more complete code sample.
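For illustration, a sketch of the first rule (assumption: the raw bytes are RGB565, and the BitmapFormat enum follows ZXing.Net's naming, which may differ in other ports): declare the true layout of the raw bytes explicitly instead of letting the source guess.
// Sketch only: tell the luminance source the actual byte layout.
var source = new RGBLuminanceSource(rawRgb, width, height,
    RGBLuminanceSource.BitmapFormat.RGB565);
var binaryBitmap = new BinaryBitmap(new HybridBinarizer(source));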

Resize Image of any size to fixed dimension using C# ASP.Net web form

I have done image resizing before: allowing the user to upload an image of a specific size and then cropping it to different dimensions. I have also used jCrop in a project to let users upload an image of a specific size, select an area of the image, and crop it accordingly.
In a new project I have a requirement where the user can upload an image of any size (at least 500px wide). I then have to let the user select part of the image using jCrop and save the image at the dimensions 475x313 and 310x205 while maintaining the aspect ratio.
I can do it if I allow the user to upload a fixed-size image, but I am not sure how to handle a variable-size image.
I also need to display the uploaded image in a fixed-size box (say 300x200) before cropping; in this area I have to let the user select the part of the image to crop.
The issue I am facing is how to handle a variable-size image and show it in a fixed image box of 300x200px.
I wrote an article on using jCrop with dynamically resized uploaded images, which seems to be what you need.
If you're looking for an open-source ASP.NET control that does it for you, check out cropimage.net.
If you want to do it programmatically, you can try this.
If you are using a FileUpload control to upload the images:
string path = Path.GetFileName(fileuploaderID.PostedFile.FileName);
ConvertThumbnails(width, height, fileuploaderID.FileBytes, path);
Your function:
public void ConvertThumbnails(int width, int height, byte[] filestream, string path)
{
    // create an image object from the uploaded bytes
    var stream = new MemoryStream(filestream);
    System.Drawing.Image image = System.Drawing.Image.FromStream(stream);
    try
    {
        int fullSizeImgWidth = image.Width;
        int fullSizeImgHeight = image.Height;
        float imgWidth = width;
        float imgHeight = height;
        Bitmap thumbNailImg = new Bitmap(image, (int)imgWidth, (int)imgHeight);
        MemoryStream ms = new MemoryStream();
        // save to memory using the JPEG format
        thumbNailImg.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
        // copy out the encoded bytes (ToArray, unlike GetBuffer, returns only the written bytes)
        byte[] bmpBytes = ms.ToArray();
        item.Attachments.Add(path, bmpBytes);
        thumbNailImg.Dispose();
        ms.Close();
    }
    catch (Exception)
    {
        image.Dispose();
    }
}
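As for showing a variable-size image in the fixed 300x200 box while keeping the aspect ratio, a minimal sketch (not from the answer) is to compute a uniform scale factor first and only then resize:
// Sketch only: fit srcW x srcH inside boxW x boxH, preserving aspect ratio.
public static System.Drawing.Size FitWithin(int srcW, int srcH, int boxW, int boxH)
{
    double scale = Math.Min((double)boxW / srcW, (double)boxH / srcH);
    return new System.Drawing.Size(
        (int)Math.Round(srcW * scale),
        (int)Math.Round(srcH * scale));
}
// e.g. FitWithin(1024, 768, 300, 200) returns 267 x 200;
// the resulting size can be passed to the Bitmap constructor above.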
