Setting up the constant buffer using SlimDX - C#

I've been following the Microsoft Direct3D 11 tutorials, but using C# and SlimDX. I'm trying to set up the constant buffer but am not sure how to create it or set it.
I'm simply trying to pass three matrices (world, view and projection) through a constant buffer, but I'm struggling at every stage: creation, data input and passing it to the shader.
The HLSL on MSDN (which I've essentially copied) is:
cbuffer ConstantBuffer : register( b0 )
{
matrix World;
matrix View;
matrix Projection;
}
The C++ code on MSDN is:
ID3D11Buffer* g_pConstantBuffer = NULL;
XMMATRIX g_World;
XMMATRIX g_View;
XMMATRIX g_Projection;
//set up the constant buffer
D3D11_BUFFER_DESC bd;
ZeroMemory( &bd, sizeof(bd) );
bd.Usage = D3D11_USAGE_DEFAULT;
bd.ByteWidth = sizeof(ConstantBuffer);
bd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
bd.CPUAccessFlags = 0;
if( FAILED(g_pd3dDevice->CreateBuffer( &bd, NULL, &g_pConstantBuffer ) ) )
return hr;
//
// Update variables
//
ConstantBuffer cb;
cb.mWorld = XMMatrixTranspose( g_World );
cb.mView = XMMatrixTranspose( g_View );
cb.mProjection = XMMatrixTranspose( g_Projection );
g_pImmediateContext->UpdateSubresource( g_pConstantBuffer, 0, NULL, &cb, 0, 0 );
Does anybody know how to translate this to SlimDX? Pointers to any SlimDX tutorials or resources would also be useful.
Thanks.

Something similar to this should work:
// Describe and create the constant buffer. Marshal.SizeOf (from
// System.Runtime.InteropServices) avoids needing an unsafe context
// for sizeof on a custom struct.
var buffer = new Buffer(device, new BufferDescription {
    Usage = ResourceUsage.Default,
    SizeInBytes = Marshal.SizeOf(typeof(ConstantBuffer)),
    BindFlags = BindFlags.ConstantBuffer
});

// HLSL expects column-major matrices by default, hence the transposes.
var cb = new ConstantBuffer();
cb.World = Matrix.Transpose(world);
cb.View = Matrix.Transpose(view);
cb.Projection = Matrix.Transpose(projection);

// Write the struct into a stream and upload it to the buffer.
using (var data = new DataStream(Marshal.SizeOf(typeof(ConstantBuffer)), true, true))
{
    data.Write(cb);
    data.Position = 0;
    context.UpdateSubresource(new DataBox(0, 0, data), buffer, 0);
}
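For completeness, here is a minimal sketch of the two pieces the snippet above assumes but does not show: a C# struct matching the HLSL cbuffer (the name, field order and 16-byte-aligned layout are assumptions that must mirror your shader), and binding the buffer to register b0 for the vertex shader:

// Hypothetical struct mirroring the cbuffer declared in the HLSL above.
[StructLayout(LayoutKind.Sequential)]
struct ConstantBuffer
{
    public Matrix World;
    public Matrix View;
    public Matrix Projection;
}

// Bind the buffer to slot 0, i.e. register(b0) in the shader.
context.VertexShader.SetConstantBuffer(buffer, 0);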


Change width, color, line type in SharpGL

I need some help because I'm new to OpenGL.
My task is to create a control that draws, in real time, the process of cutting a workpiece on a CNC machine.
I first did it classically through glBegin/glEnd and everything works fine, but due to the large number of vertices it starts to slow down towards the end. So I decided to try VertexBufferArray() and VertexBuffer() - this also works, but now I do not understand how to change the line width and the line type (specifically, I have two types - a regular line and a dash-dotted line).
Here is my method where I use the arrays:
private void CreateBufferAndDraw(OpenGL GL)
{
    try
    {
        if (shaderProgram == null && vertexBufferArray == null)
        {
            var vertexShaderSource = ManifestResourceLoader.LoadTextFile("vertex_shader.glsl");
            var fragmentShaderSource = ManifestResourceLoader.LoadTextFile("fragment_shader.glsl");
            shaderProgram = new ShaderProgram();
            shaderProgram.Create(GL, vertexShaderSource, fragmentShaderSource, null);
            attribute_vpos = (uint)GL.GetAttribLocation(shaderProgram.ShaderProgramObject, "vPosition");
            attribute_vcol = (uint)GL.GetAttribLocation(shaderProgram.ShaderProgramObject, "vColor");
            shaderProgram.AssertValid(GL);
        }
        // Grow the vertex and colour arrays to match the point list.
        if (_data == null)
        {
            _data = new vec3[_Points.Count];
        }
        else if (_data.Length != _Points.Count)
        {
            _data = (vec3[])ResizeArray(_data, _Points.Count);
        }
        if (_dataColor == null)
        {
            _dataColor = new vec3[_Points.Count];
        }
        else if (_dataColor.Length != _Points.Count)
        {
            _dataColor = (vec3[])ResizeArray(_dataColor, _Points.Count);
        }
        // Copy only the points added since the last call.
        for (int i = _dataTail; i < _Points.Count; i++)
        {
            _data[i].y = _Points[i].Y;
            _data[i].x = _Points[i].X;
            _data[i].z = _Points[i].Z;
            _dataColor[i] = new vec3(_Points[i].ToolColor.R / 255.0f, _Points[i].ToolColor.G / 255.0f, _Points[i].ToolColor.B / 255.0f);
        }
        _dataTail = _Points.Count;
        // Create the vertex array object.
        vertexBufferArray = new VertexBufferArray();
        vertexBufferArray.Create(GL);
        vertexBufferArray.Bind(GL);
        // Create a vertex buffer for the vertex data.
        var vertexDataBuffer = new VertexBuffer();
        vertexDataBuffer.Create(GL);
        vertexDataBuffer.Bind(GL);
        vertexDataBuffer.SetData(GL, attribute_vpos, _data, false, 3);
        // Now do the same for the colour data.
        var colourDataBuffer = new VertexBuffer();
        colourDataBuffer.Create(GL);
        colourDataBuffer.Bind(GL);
        colourDataBuffer.SetData(GL, attribute_vcol, _dataColor, false, 3);
        // Unbind the vertex array; we've finished specifying data for it.
        vertexBufferArray.Unbind(GL);
        // Bind the shader and set the matrices for the shader program.
        shaderProgram.Bind(GL);
        float rads = (90.0f / 360.0f) * (float)Math.PI * 2.0f;
        mat4 _mviewdata = glm.translate(new mat4(1f), new vec3(_track_X, _track_Y, _track_Z));
        mat4 _projectionMatrix = glm.perspective(rads, (float)this.trackgl_control.Height / (float)this.trackgl_control.Width, 0.001f, 1000f);
        mat4 _modelMatrix = glm.lookAt(new vec3(0, 0, -1), new vec3(0, 0, 0), new vec3(0, -1, -1));
        shaderProgram.SetUniformMatrix4(GL, "projectionMatrix", _projectionMatrix.to_array());
        shaderProgram.SetUniformMatrix4(GL, "viewMatrix", _modelMatrix.to_array());
        shaderProgram.SetUniformMatrix4(GL, "modelMatrix", _mviewdata.to_array());
        // Bind the vertex array again and draw the line strip.
        vertexBufferArray.Bind(GL);
        GL.LineWidth(5.0f);
        GL.DrawArrays(OpenGL.GL_LINE_STRIP, 0, _dataTail);
        // Unbind our vertex array and shader.
        vertexBufferArray.Unbind(GL);
        shaderProgram.Unbind(GL);
    }
    catch (Exception ex) { MessageBox.Show("CreateBufferAndDraw" + "\n" + ex.ToString()); }
}
This is what I get: as you can see in the screenshot, all the lines have the same width. So, my question is: does anyone know what I can do about this?
And another one: what if I need to show a point at the current location of the tool? Should I create another array with a single vertex?
P.S. Sorry for my English; here is a picture of what I get with glBegin/glEnd.
No tag spamming intended.
I'm doing this project partially guided by tutorials in C++ and C#. So if someone knows how to do this in C++, I will just try to implement the same thing in C#.
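Regarding the second question, one possible approach is sketched below: keep a one-vertex buffer for the tool position and draw it with GL_POINTS after the line strip. This is only a sketch reusing the shader and attribute locations from the method above; the toolBufferArray/toolBuffer names are hypothetical. Note also that many core-profile drivers clamp glLineWidth to 1.0, which may be why all lines render with the same width; dashed or thick lines generally have to be produced in the shader or as geometry.

// Hypothetical one-vertex buffer holding the current tool position.
var toolArray = new vec3[] { new vec3(_track_X, _track_Y, _track_Z) };
var toolBufferArray = new VertexBufferArray();
toolBufferArray.Create(GL);
toolBufferArray.Bind(GL);
var toolBuffer = new VertexBuffer();
toolBuffer.Create(GL);
toolBuffer.Bind(GL);
toolBuffer.SetData(GL, attribute_vpos, toolArray, false, 3);
toolBufferArray.Unbind(GL);
// After drawing the line strip, draw the single point.
toolBufferArray.Bind(GL);
GL.PointSize(8.0f);
GL.DrawArrays(OpenGL.GL_POINTS, 0, 1);
toolBufferArray.Unbind(GL);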

Texture2D from byte array in SharpDX

I am a C#, SharpDX and DirectX newbie. Please excuse my ignorance. I am following up on an old post: Exception of Texture2D.FromMemory() in SharpDX code. It was very helpful.
My goal:
Build a Texture2d from softwarebitmap.
Make the texture available to HLSL.
The way I approached it:
Using IMemoryBufferByteAccess, I was able to retrieve the pointer to the bytes and the total capacity of the frame. From the previous post, it seems I need to use a DataRectangle to point to the byte array.
I have two textures with different descriptors: Texture1 (_staging_texture) - no bind flags, CPU write and read access, staging usage; I created this texture with the DataRectangle pointing to the byte array. Texture2 (_final_texture) - shader-resource bind flag, no CPU access, default usage; this texture is eventually made available to the shader. The intention is to use the CopyResource function to copy from Texture1 to Texture2.
Below, I copy my unpolished code for reference:
bitmap = latestFrame.SoftwareBitmap;
Windows.Graphics.Imaging.BitmapBuffer bitmapBuffer= bitmap.LockBuffer(Windows.Graphics.Imaging.BitmapBufferAccessMode.Read);
Windows.Foundation.IMemoryBufferReference bufferReference = bitmapBuffer.CreateReference();
var staging_descriptor = new Texture2DDescription
{
    Width = Width,
    Height = Height,
    MipLevels = 1,
    ArraySize = 1,
    Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
    SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
    Usage = ResourceUsage.Staging,
    BindFlags = BindFlags.None,
    CpuAccessFlags = CpuAccessFlags.Read | CpuAccessFlags.Write,
    OptionFlags = ResourceOptionFlags.None
};
var final_descriptor = new Texture2DDescription
{
    Width = Width,
    Height = Height,
    MipLevels = 1,
    ArraySize = 1,
    Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
    SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None
};
var dataRectangle = new SharpDX.DataRectangle();
unsafe
{
    byte* dataInBytes;
    uint capacityInBytes;
    ((InteropStatics.IMemoryBufferByteAccess)bufferReference).GetBuffer(out dataInBytes, out capacityInBytes);
    dataRectangle.DataPointer = (IntPtr)dataInBytes;
    dataRectangle.Pitch = Width * 4; // pitch is bytes per row (Width * 4 for R8G8B8A8), not bytes per pixel
}
Texture2D _stagingTexture = new Texture2D(device, staging_descriptor, dataRectangle);
Texture2D _finalTexture = new Texture2D(device, final_descriptor);
_stagingTexture.Device.ImmediateContext.CopyResource(_stagingTexture, _finalTexture);
My question is twofold:
1. The DataRectangle uses an IntPtr while the pointer retrieved from the interface is a byte pointer. Is this a problem, or does the Pitch member of the DataRectangle address it? For now I cast the byte pointer to IntPtr.
2. Would this approach work, or is there a better way to handle this?
Any pointers, suggestions or constructive criticism would be much appreciated!
A while ago I was looking for the same thing, and I came up with this function, which has always worked for my use case:
public static Texture2D CreateTexture2DFrombytes(Device device, byte[] RawData, int width, int height)
{
    Texture2DDescription desc;
    desc.Width = width;
    desc.Height = height;
    desc.ArraySize = 1;
    desc.BindFlags = BindFlags.ShaderResource;
    desc.Usage = ResourceUsage.Immutable;
    desc.CpuAccessFlags = CpuAccessFlags.None;
    desc.Format = Format.B8G8R8A8_UNorm;
    desc.MipLevels = 1;
    desc.OptionFlags = ResourceOptionFlags.None;
    desc.SampleDescription.Count = 1;
    desc.SampleDescription.Quality = 0;
    // Wrap the raw bytes in a stream; the pitch is width * 4 bytes per row.
    DataStream s = DataStream.Create(RawData, true, true);
    DataRectangle rect = new DataRectangle(s.DataPointer, width * 4);
    Texture2D t2D = new Texture2D(device, desc, rect);
    return t2D;
}
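A minimal usage sketch (the device, frame bytes and slot number here are assumptions): the extra step that actually makes the texture visible to HLSL is wrapping it in a ShaderResourceView and binding it to a shader stage.

// Hypothetical frame: width * height * 4 bytes of BGRA pixel data.
byte[] pixels = GetFrameBytes(); // assumed helper
Texture2D texture = CreateTexture2DFrombytes(device, pixels, width, height);
// Expose the texture to shaders and bind it to pixel-shader slot t0.
var srv = new ShaderResourceView(device, texture);
device.ImmediateContext.PixelShader.SetShaderResource(0, srv);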

Reading a DataStream in SharpDX: all values are 0

I followed this solution for my project: How to create bitmap from Surface (SharpDX).
I don't have enough reputation to comment, so I'm opening a new question here.
My project is basically in Direct2D. I have a Surface buffer and a swap chain. I want to put my buffer into a DataStream and read its values to put them into a bitmap and save it to disk (like a screen capture), but my code won't work, since all the byte values are 0 (which is black), and that doesn't make sense: my image is fully white with a bit of blue.
Here is my code:
SwapChainDescription description = new SwapChainDescription()
{
    ModeDescription = new ModeDescription(this.Width, this.Height, new Rational(60, 1), Format.B8G8R8A8_UNorm),
    SampleDescription = new SampleDescription(1, 0),
    Usage = Usage.RenderTargetOutput,
    BufferCount = 1,
    SwapEffect = SwapEffect.Discard,
    IsWindowed = true,
    OutputHandle = this.Handle
};
Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.Debug | DeviceCreationFlags.BgraSupport, description, out device, out swapChain);
SharpDX.DXGI.Device dxgiDevice = device.QueryInterface<SharpDX.DXGI.Device>();
SharpDX.DXGI.Adapter dxgiAdapter = dxgiDevice.Adapter;
SharpDX.Direct2D1.Device d2dDevice = new SharpDX.Direct2D1.Device(dxgiDevice);
d2dContext = new SharpDX.Direct2D1.DeviceContext(d2dDevice, SharpDX.Direct2D1.DeviceContextOptions.None);
SharpDX.Direct3D11.DeviceContext d3DeviceContext = new SharpDX.Direct3D11.DeviceContext(device);
properties = new BitmapProperties(new SharpDX.Direct2D1.PixelFormat(SharpDX.DXGI.Format.B8G8R8A8_UNorm, SharpDX.Direct2D1.AlphaMode.Premultiplied), 96, 96);
Surface backBuffer = swapChain.GetBackBuffer<Surface>(0);
d2dTarget = new SharpDX.Direct2D1.Bitmap(d2dContext, backBuffer, properties);
d2dContext.Target = d2dTarget;
playerBitmap = this.LoadBitmapFromContentFile(@"C:\Users\ndesjardins\Desktop\wave.png");
//System.Drawing.Bitmap bitmapCanva = new System.Drawing.Bitmap(1254, 735);
d2dContext.BeginDraw();
d2dContext.Clear(SharpDX.Color.White);
d2dContext.DrawBitmap(playerBitmap, new SharpDX.RectangleF(0, 0, playerBitmap.Size.Width, playerBitmap.Size.Height), 1f, SharpDX.Direct2D1.BitmapInterpolationMode.NearestNeighbor);
SharpDX.Direct2D1.SolidColorBrush brush = new SharpDX.Direct2D1.SolidColorBrush(d2dContext, SharpDX.Color.Green);
d2dContext.DrawRectangle(new SharpDX.RectangleF(200, 200, 100, 100), brush);
d2dContext.EndDraw();
swapChain.Present(1, PresentFlags.None);
Texture2D backBuffer3D = backBuffer.QueryInterface<SharpDX.Direct3D11.Texture2D>();
Texture2DDescription desc = backBuffer3D.Description;
desc.CpuAccessFlags = CpuAccessFlags.Read;
desc.Usage = ResourceUsage.Staging;
desc.OptionFlags = ResourceOptionFlags.None;
desc.BindFlags = BindFlags.None;
var texture = new Texture2D(device, desc);
d3DeviceContext.CopyResource(backBuffer3D, texture);
byte[] data = null;
using (Surface surface = texture.QueryInterface<Surface>())
{
    DataStream dataStream;
    var map = surface.Map(SharpDX.DXGI.MapFlags.Read, out dataStream);
    int lines = (int)(dataStream.Length / map.Pitch);
    data = new byte[surface.Description.Width * surface.Description.Height * 4];
    dataStream.Position = 0;
    int dataCounter = 0;
    // Actual width of a row in bytes - 4 bytes per pixel.
    int actualWidth = surface.Description.Width * 4;
    for (int y = 0; y < lines; y++)
    {
        for (int x = 0; x < map.Pitch; x++)
        {
            if (x < actualWidth)
            {
                data[dataCounter++] = dataStream.Read<byte>();
            }
            else
            {
                dataStream.Read<byte>(); // skip row padding
            }
        }
    }
    dataStream.Dispose();
    surface.Unmap();
    int width = surface.Description.Width;
    int height = surface.Description.Height;
    byte[] bytewidth = BitConverter.GetBytes(width);
    byte[] byteheight = BitConverter.GetBytes(height);
    // Overwrite the first 8 bytes with a width/height header.
    Array.Copy(bytewidth, 0, data, 0, 4);
    Array.Copy(byteheight, 0, data, 4, 4);
}
Do you guys have any idea why the byte array that is returned at the end is full of 0 when it should be mostly 255? All I did in my back buffer was draw a bitmap image and a rectangle shape. The Array.Copy calls add the width and height as a header to the byte array, so that I can create a bitmap out of it.
I answered in a comment but the formatting was horrible, so apologies!
https://gamedev.stackexchange.com/a/112978/29920 This looks promising, but as you said in reply, this was some time ago and I'm pulling it out of thin air; if it doesn't work, either someone with more current knowledge will have to answer or I'll have to grab some source code and try it myself.
SharpDX.Direct2D1.Bitmap dxbmp = new SharpDX.Direct2D1.Bitmap(renderTarget,
    new SharpDX.Size2(bmpWidth, bmpHeight),
    new BitmapProperties(renderTarget.PixelFormat));
dxbmp.CopyFromMemory(bmpBits, bmpWidth * 4);
This looks kind of like what you need. I'm assuming bmpBits here is either a byte array or a memory stream, either of which could then be saved off, or would at least give you something to look at to see whether you're actually getting pixel data.
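For the save-to-disk step the question describes, here is a minimal sketch, assuming data holds tightly packed 32-bit BGRA pixels with no header bytes at the front (System.Drawing types; the bitmap stride is assumed to be exactly width * 4):

// Wrap the raw BGRA bytes in a System.Drawing.Bitmap and save as PNG.
using (var bmp = new System.Drawing.Bitmap(width, height, System.Drawing.Imaging.PixelFormat.Format32bppArgb))
{
    var bd = bmp.LockBits(new System.Drawing.Rectangle(0, 0, width, height),
        System.Drawing.Imaging.ImageLockMode.WriteOnly, bmp.PixelFormat);
    System.Runtime.InteropServices.Marshal.Copy(data, 0, bd.Scan0, width * height * 4);
    bmp.UnlockBits(bd);
    bmp.Save("capture.png", System.Drawing.Imaging.ImageFormat.Png);
}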

IGraphBuilder use of ColorConverter

I'm trying to use this code to get pictures from my cam:
IGraphBuilder _graph = null;
ISampleGrabber _grabber = null;
IBaseFilter _sourceObject = null;
IBaseFilter _grabberObject = null;
IMediaControl _control = null;
// Create the main graph
_graph = Activator.CreateInstance(Type.GetTypeFromCLSID(FilterGraph)) as IGraphBuilder;
// Create the webcam source
_sourceObject = FilterInfo.CreateFilter(_monikerString);
// Create the grabber
_grabber = Activator.CreateInstance(Type.GetTypeFromCLSID(SampleGrabber)) as ISampleGrabber;
_grabberObject = _grabber as IBaseFilter;
// Add the source and grabber to the main graph
_graph.AddFilter(_sourceObject, "source");
_graph.AddFilter(_grabberObject, "grabber");
IPin pin = _sourceObject.GetPin(PinDirection.Output, 0);
IAMStreamConfig streamConfig = pin as IAMStreamConfig;
int count = 0, size = 0;
streamConfig.GetNumberOfCapabilities(out count, out size);
int width = 0, height = 0;
AMMediaType mediaType = null;
AMMediaType mediaTypeCandidate = null;
for (int index = 0; index < count; index++)
{
    VideoStreamConfigCaps scc = new VideoStreamConfigCaps();
    int test = streamConfig.GetStreamCaps(index, out mediaTypeCandidate, scc);
    if (mediaTypeCandidate.MajorType == MediaTypes.Video && mediaTypeCandidate.SubType == MediaSubTypes.YUY2)
    {
        VideoInfoHeader header = (VideoInfoHeader)Marshal.PtrToStructure(mediaTypeCandidate.FormatPtr, typeof(VideoInfoHeader));
        if (header.BmiHeader.Width == 1280 && header.BmiHeader.Height == 720)
        {
            width = header.BmiHeader.Width;
            height = header.BmiHeader.Height;
            if (mediaType != null)
                mediaType.Dispose();
            mediaType = mediaTypeCandidate;
        }
        else
            mediaTypeCandidate.Dispose();
    }
    else
        mediaTypeCandidate.Dispose();
}
streamConfig.SetFormat(mediaType);
And it works, but I do not see the image which is generated by this code:
uint pcount = (uint)(_capGrabber.Width * _capGrabber.Height * PixelFormats.Bgr32.BitsPerPixel / 8);
// Create a file mapping
_section = CreateFileMapping(new IntPtr(-1), IntPtr.Zero, 0x04, 0, pcount, null);
_map = MapViewOfFile(_section, 0xF001F, 0, 0, pcount);
// Get the bitmap
BitmapSource = Imaging.CreateBitmapSourceFromMemorySection(_section, _capGrabber.Width,
    _capGrabber.Height, PixelFormats.Bgr32, _capGrabber.Width * PixelFormats.Bgr32.BitsPerPixel / 8, 0) as InteropBitmap;
_capGrabber.Map = _map;
// Invoke event
if (NewBitmapReady != null)
{
    NewBitmapReady(this, null);
}
I assume this is because the media subtype is YUY2. How can I add a converter to this code? I have read something about a ColorConverter which can be added to the IGraphBuilder. How does that work?
I would not expect CreateBitmapSourceFromMemorySection to accept anything other than flavors of RGB, and it is even more unlikely that it accepts the YUY2 media type, so you need the DirectShow pipeline to convert the video stream to RGB before you export it as a managed bitmap/imaging object.
To achieve this, you typically add a Sample Grabber filter initialized to the 24-bit RGB subtype and let DirectShow provide the necessary converters automatically.
See detailed explanation and code snippets here: DirectShow: Examples for Using SampleGrabber for Grabbing a Frame and...
media.majorType = MediaType.Video;
media.subType = MediaSubType.RGB24;
media.formatPtr = IntPtr.Zero;
hr = sampGrabber.SetMediaType(media);
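Putting that together, the grabber configuration might look roughly like the sketch below. This is hedged: the exact type and member names depend on your interop layer, and here they follow the naming used in the question's own code. The key point is to force RGB24 on the grabber before the pins are connected, so the graph builder inserts the colour-space converter for you:

// Force the grabber's input to RGB24 *before* connecting the pins,
// so DirectShow inserts a YUY2 -> RGB24 converter automatically.
AMMediaType media = new AMMediaType();
media.MajorType = MediaTypes.Video;
media.SubType = MediaSubTypes.RGB24;
media.FormatPtr = IntPtr.Zero;
_grabber.SetMediaType(media);
// Buffer copies of the samples so frames can later be read back
// with GetCurrentBuffer.
_grabber.SetBufferSamples(true);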

Image doesn't update when written to - weird goings-on

I have a Kinect WPF application that takes images from the Kinect, does some feature detection using EmguCV (a C# OpenCV wrapper) and displays the output using a WPF Image control.
I had this working before, but the application now refuses to update the screen image when the ImageSource is written to, even though I have not changed the way it works.
The Image (called video) is written to like this:
video.Source = bitmapsource;
in the ColorFrameReady event handler.
This works fine until I introduce some OpenCV code before the ImageSource is written to. It does not matter what source is used, so I don't think there is a conflict there. I have narrowed down the offending EmguCV code to this line:
RecentKeyPoints = surfCPU.DetectKeyPointsRaw(ImageRecent, null);
which jumps straight into the OpenCV code. It is worth noting that:
ImageRecent has completely different origins to the bitmapsource updating the screen.
Reading video.Source returns the bitmapsource, so it seems to be writing correctly, just not updating the screen.
Let me know if you want any more information...
void nui_ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    // Checks for a recent Depth Image
    if (!TrackingReady) return;
    // Stores image
    using (ColorImageFrame colorImageFrame = e.OpenColorImageFrame())
    {
        if (colorImageFrame != null)
        {
            if (FeatureTracker.ColourImageRecent == null)
                //allocate the first time
                FeatureTracker.ColourImageRecent = new byte[colorImageFrame.PixelDataLength];
            colorImageFrame.CopyPixelDataTo(FeatureTracker.ColourImageRecent);
        }
        else return;
    }
    FeatureTracker.FeatureDetect(nui);
    //video.Source = FeatureTracker.ColourImageRecent.ToBitmapSource();
    video.Source = ((Bitmap)Bitmap.FromFile("test1.png")).ToBitmapSource();
    TrackingReady = false;
}
public Bitmap FeatureDetect(KinectSensor nui)
{
    byte[] ColourClone = new byte[ColourImageRecent.Length];
    Array.Copy(ColourImageRecent, ColourClone, ColourImageRecent.Length);
    Bitmap test = (Bitmap)Bitmap.FromFile("test1.png");
    test.RotateFlip(RotateFlipType.RotateNoneFlipY);
    Image<Gray, Byte> ImageRecent = new Image<Gray, byte>(test);
    SURFDetector surfCPU = new SURFDetector(2000, false);
    VectorOfKeyPoint RecentKeyPoints;
    Matrix<int> indices;
    Matrix<float> dist;
    Matrix<byte> mask;
    bool MatchFailed = false;
    // extract SURF features from the object image
    RecentKeyPoints = surfCPU.DetectKeyPointsRaw(ImageRecent, null);
    //Matrix<float> RecentDescriptors = surfCPU.ComputeDescriptorsRaw(ImageRecent, null, RecentKeyPoints);
    //MKeyPoint[] RecentPoints = RecentKeyPoints.ToArray();
    // don't feature detect on first attempt, just store image details for next attempt
    #region
    /*
    if (KeyPointsOld == null)
    {
        KeyPointsOld = RecentKeyPoints;
        PointsOld = RecentPoints;
        DescriptorsOld = RecentDescriptors;
        return ImageRecent.ToBitmap();
    }
    */
    #endregion
    // Attempt to match points to their nearest neighbour
    #region
    /*
    BruteForceMatcher SURFmatcher = new BruteForceMatcher(BruteForceMatcher.DistanceType.L2F32);
    SURFmatcher.Add(RecentDescriptors);
    int k = 5;
    indices = new Matrix<int>(DescriptorsOld.Rows, k);
    dist = new Matrix<float>(DescriptorsOld.Rows, k);
    */
    // Match features, provide the top k matches
    //SURFmatcher.KnnMatch(DescriptorsOld, indices, dist, k, null);
    // Create mask and set to allow all features
    //mask = new Matrix<byte>(dist.Rows, 1);
    //mask.SetValue(255);
    #endregion
    //Features2DTracker.VoteForUniqueness(dist, 0.8, mask);
    // Check the number of good matches and end matching on error
    #region
    //int nonZeroCount = CvInvoke.cvCountNonZero(mask);
    //if (nonZeroCount < 5) MatchFailed = true;
    /*
    try
    {
        nonZeroCount = Features2DTracker.VoteForSizeAndOrientation(RecentKeyPoints, KeyPointsOld, indices, mask, 1.5, 20);
    }
    catch (SystemException)
    {
        MatchFailed = true;
    }
    if (nonZeroCount < 5) MatchFailed = true;
    if (MatchFailed)
    {
        return ImageRecent.ToBitmap();
    }
    */
    #endregion
    //DepthMapColourCoordsRecent = CreateDepthMap(nui, DepthImageRecent);
    //PointDist[] FeatureDistances = DistanceToFeature(indices, mask, RecentPoints);
    //Image<Rgb,Byte> rgbimage = ImageRecent.Convert<Rgb, Byte>();
    //rgbimage = DrawPoints(FeatureDistances, rgbimage);
    // Store recent image data for next feature detect.
    //KeyPointsOld = RecentKeyPoints;
    //PointsOld = RecentPoints;
    //DescriptorsOld = RecentDescriptors;
    //CreateDepthMap(nui, iva);
    //rgbimage = CreateDepthImage(DepthMapColourCoordsRecent, rgbimage);
    // Convert image back to a bitmap
    count++;
    //Bitmap bitmap3 = rgbimage.ToBitmap();
    //bitmapstore = bitmap3;
    //bitmap3.Save("test" + count.ToString() + ".png");
    return null;
}
This is a little late, but I had a similar problem and thought I'd share my solution.
In my case I was processing the depth stream. The default resolution was 640x480, and Emgu just wasn't able to process the images fast enough to keep up with the FrameReady handler. As soon as I reduced the depth stream resolution to 320x240, the problem went away.
I also went a bit further and moved my image processing to a different thread, which sped it up even more (do a search for ComponentDispatcher.ThreadIdle). I'm still not able to do 640x480 at a reasonable frame rate, but at least the image renders, so I can see what's going on.
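A rough sketch of that background-thread pattern (hedged: ProcessFrame is a hypothetical stand-in for the EmguCV work, and this assumes the handler lives in a WPF window so Dispatcher is available). The two key points are copying the frame data out of the handler quickly, and marshalling the ImageSource update back onto the UI thread:

private volatile bool _busy;

void nui_ColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    if (_busy) return; // drop frames rather than queue up work
    byte[] pixels;
    using (ColorImageFrame frame = e.OpenColorImageFrame())
    {
        if (frame == null) return;
        pixels = new byte[frame.PixelDataLength];
        frame.CopyPixelDataTo(pixels);
    }
    _busy = true;
    System.Threading.Tasks.Task.Run(() =>
    {
        BitmapSource result = ProcessFrame(pixels); // hypothetical EmguCV processing
        result.Freeze(); // required for cross-thread use of a WPF Freezable
        Dispatcher.BeginInvoke(new Action(() =>
        {
            video.Source = result;
            _busy = false;
        }));
    });
}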
