Issue when casting to and from a void pointer between C++ and C#

I'm currently working on a C#-compatible DLL for Box2D, and am trying to make two separate methods: one for creating a Shape and another for creating a Fixture.
The fixture needs to be given a shape in order to be initialised, so once the Shape is created in its method, it is cast to a void pointer and sent back to C# to be stored as an IntPtr. That IntPtr is then passed to the Fixture creation method when required and cast back into a shape pointer. The code I'm using is this:
extern "C" __declspec(dllexport) void* CreateBoxShape(float width, float height,
                                                      float centreX, float centreY, float angle) {
    b2Vec2 centre = b2Vec2(centreX, centreY);
    b2PolygonShape* shape;
    shape->SetAsBox(width, height, centre, angle);
    return static_cast<void*>(shape);
}
extern "C" __declspec(dllexport) void* AddFixture(void* bodyPointer, void* shapePointer, float density) {
    b2Body* m_body = static_cast<b2Body*>(bodyPointer);
    b2Fixture* m_fixture;
    b2PolygonShape* aShape = static_cast<b2PolygonShape*>(shapePointer);
    b2PolygonShape shape = *aShape;
    m_fixture = m_body->CreateFixture(&shape, density);
    return static_cast<void*>(m_fixture);
}
As you can guess, it's not working: the shape is getting modified somewhere along the way. I'm not used to working with void pointers or static_cast, so I'd appreciate any help or suggestions. The signature of CreateFixture is:
b2Fixture * CreateFixture (const b2Shape *shape, float32 density)

Related

How to add method in unmanaged code in a FaceRecognition project

I'm implementing a little FaceRecognition program using Emgu as a wrapper of the OpenCV libraries. It seems to work fine, but I need a function that returns all the distances between the image sample and the faces in the database (the implemented FaceRecognizer.Predict method only returns the smallest distance and its label).
So I built Emgu from Git, in order to adapt the functions in the unmanaged code (cvextern.dll) to my needs.
Here's the original in face_c.cpp
void cveFaceRecognizerPredict(cv::face::FaceRecognizer* recognizer, cv::_InputArray* image, int* label, double* dist)
{
    int l = -1;
    double d = -1;
    recognizer->predict(*image, l, d);
    *label = l;
    *dist = d;
}
which stores the minimum distance and the corresponding label in l and d via predict.
The method I wrote, following the summary in opencv face.hpp:
void cveFaceRecognizerPredictCollector(cv::face::FaceRecognizer * recognizer, cv::_InputArray * image, std::vector<int>* labels, std::vector<double>* distances)
{
    std::map<int, double> result_map = std::map<int, double>();
    cv::Ptr<cv::face::StandardCollector> collector = cv::face::StandardCollector::create();
    recognizer->predict(*image, collector);
    result_map = collector->getResultsMap();
    for (std::map<int, double>::iterator it = result_map.begin(); it != result_map.end(); ++it) {
        distances->push_back(it->second);
        labels->push_back(it->first);
    }
}
And the caller in C#:
using (Emgu.CV.Util.VectorOfInt labels = new Emgu.CV.Util.VectorOfInt())
using (Emgu.CV.Util.VectorOfDouble distances = new Emgu.CV.Util.VectorOfDouble())
using (InputArray iaImage = image.GetInputArray())
{
    FaceInvoke.cveFaceRecognizerPredictCollector(_ptr, iaImage, labels, distances);
}

[DllImport(CvInvoke.ExternLibrary, CallingConvention = CvInvoke.CvCallingConvention)]
internal extern static void cveFaceRecognizerPredictCollector(IntPtr recognizer, IntPtr image, IntPtr labels, IntPtr distances);
The application works in real time, so the C# function is called continuously. I have only two faces and one label (the same person) stored in my database, so the first call correctly returns the only possible label and stores it in labels. As the application keeps running, though, the returned labels and the size of the labels vector keep growing, filled with unregistered labels whose origin I can't trace. It seems to me that the collector in C++ is not properly scoped, so that every call keeps accumulating data without releasing the previous results. But that's only a guess; I'm not very good with C++.
What else could possibly be wrong?
Hope you can help

Convert OpenCV Mat to Texture2D?

Meta Context:
I'm currently working on a game that utilizes opencv as a substitute for ordinary inputs (keyboard, mouse, etc...). I'm using Unity3D's C# scripts and opencv in C++ via DllImports. My goal is to create an image inside my game coming from opencv.
Code Context:
As done usually in OpenCV, I'm using Mat to represent my image. This is the way that I'm exporting the image bytes:
cv::Mat _currentFrame;
...
extern "C" byte * EXPORT GetRawImage()
{
return _currentFrame.data;
}
And this is how I'm importing it from C#:
[DllImport ("ImageInputInterface")]
private static extern IntPtr GetRawImage ();
...
public static void GetRawImageBytes (ref byte[] result, int arrayLength) {
    IntPtr a = GetRawImage ();
    Marshal.Copy(a, result, 0, arrayLength);
    FreeBuffer(a);
}
Judging by the way I understand OpenCV, I expect the byte array to be structured in this way when serialized in a uchar pointer:
b1, g1, r1, b2, g2, r2, ...
I'm converting this BGR array to a RGB array using:
public static void BGR2RGB(ref byte[] buffer) {
    byte swap;
    for (int i = 0; i < buffer.Length; i = i + 3) {
        swap = buffer[i];
        buffer[i] = buffer[i + 2];
        buffer[i + 2] = swap;
    }
}
Finally, I'm using Unity's LoadRawTextureData to load the bytes to a texture:
this.tex = new Texture2D(
    ImageInputInterface.GetImageWidth(),
    ImageInputInterface.GetImageHeight(),
    TextureFormat.RGB24,
    false
);
...
ImageInputInterface.GetRawImageBytes(ref ret, ret.Length);
ImageInputInterface.BGR2RGB(ref ret);
tex.LoadRawTextureData(ret);
tex.Apply();
Results:
The final image comes out scattered in some way: it resembles the shapes, but seems to triple them as well. This is me holding my hand in front of the camera:
[Me, my hand and the camera]
Doing some tests, I concluded that I decoded the channels correctly, since, using my phone to emit RGB light, I can reproduce the colors from the real world:
[Red Test]
[Blue Test]
[Green Test]
There are also some strange lines in the image:
[Spooky Lines]
There is also my face to compare these images to:
[My face in front of the camera]
Questions:
Since I'm able to correctly decode the color channels, what have I assumed wrongly in decoding the OpenCV array? Is it that I don't understand how Unity's LoadRawTextureData works, or have I decoded something else in the wrong way?
How is the OpenCV Mat.data array structured?
UPDATE
Thanks to @Programmer, whose solution worked like magic.
[Me Happy]
I changed his script a little; there was no need for some of the steps, and in my case I needed to use BGR2RGBA, not RGB2RGBA:
extern "C" void EXPORT GetRawImage( byte *data, int width, int height )
{
    cv::Mat resizedMat( height, width, _currentFrame.type() );
    cv::resize( _currentFrame, resizedMat, resizedMat.size(), cv::INTER_CUBIC );
    cv::Mat argbImg;
    cv::cvtColor( resizedMat, argbImg, CV_BGR2RGBA );
    std::memcpy( data, argbImg.data, argbImg.total() * argbImg.elemSize() );
}
Use SetPixels32 instead of LoadRawTextureData. And instead of returning the array data from C++, fill it from C#: create a Color32 array and pin it in C# with GCHandle.Alloc, send the address of the pinned Color32 array to C++, and use cv::resize to resize the cv::Mat to match the size of the pixel array sent from C#. You must do this step or expect errors or other issues.
Finally, convert the cv::Mat from RGB to ARGB, then use std::memcpy to update the array from C++. The SetPixels32 function can then be used to load that updated Color32 array into the Texture2D. This is how I do it, and it has been working for me without any issues. There might be better ways to do it, but I have never found one.
C++:
cv::Mat _currentFrame;
void GetRawImageBytes(unsigned char* data, int width, int height)
{
    //Resize the Mat to match the array passed to it from C#
    cv::Mat resizedMat(height, width, _currentFrame.type());
    cv::resize(_currentFrame, resizedMat, resizedMat.size(), cv::INTER_CUBIC);

    //You may not need this line. Depends on what you are doing
    cv::imshow("Nicolas", resizedMat);

    //Convert from RGB to ARGB
    cv::Mat argb_img;
    cv::cvtColor(resizedMat, argb_img, CV_RGB2BGRA);
    std::vector<cv::Mat> bgra;
    cv::split(argb_img, bgra);
    std::swap(bgra[0], bgra[3]);
    std::swap(bgra[1], bgra[2]);
    //Note: the swapped planes are never merged back; the memcpy below
    //copies argb_img exactly as produced by cvtColor above.
    std::memcpy(data, argb_img.data, argb_img.total() * argb_img.elemSize());
}
C#:
Attach this to any GameObject with a Renderer and you should see the cv::Mat displayed and updated on that object every frame. The code is commented in case anything is unclear:
using System;
using System.Runtime.InteropServices;
using UnityEngine;

public class Test : MonoBehaviour
{
    [DllImport("ImageInputInterface")]
    private static extern void GetRawImageBytes(IntPtr data, int width, int height);

    private Texture2D tex;
    private Color32[] pixel32;
    private GCHandle pixelHandle;
    private IntPtr pixelPtr;

    void Start()
    {
        InitTexture();
        gameObject.GetComponent<Renderer>().material.mainTexture = tex;
    }

    void Update()
    {
        MatToTexture2D();
    }

    void InitTexture()
    {
        tex = new Texture2D(512, 512, TextureFormat.ARGB32, false);
        pixel32 = tex.GetPixels32();
        //Pin the pixel32 array
        pixelHandle = GCHandle.Alloc(pixel32, GCHandleType.Pinned);
        //Get the pinned address
        pixelPtr = pixelHandle.AddrOfPinnedObject();
    }

    void MatToTexture2D()
    {
        //Convert the Mat to a Texture2D
        GetRawImageBytes(pixelPtr, tex.width, tex.height);
        //Update the Texture2D with the array updated in C++
        tex.SetPixels32(pixel32);
        tex.Apply();
    }

    void OnApplicationQuit()
    {
        //Free the handle
        pixelHandle.Free();
    }
}

Ghost-like artifact when passing image to blur() or Canny() in openCV

So I have some C++ opencv code that I'm calling from C# inside unity to process the input from the webcam.
I compile my C++ code to a DLL that I then import into C# with a DLLImport. I pass a pinned GCHandle reference to the C++ code so that I can manipulate the image array from C++. This all works. I can pass each frame to my C++ dll and have it grayscale it. It works wonderfully.
The problem arises when I try to do things other than just making the frame grayscale. I tried to do a simple blur() and the output comes out with a weird ghosted image to the left and right. I'm not sure what could be going wrong. It also happens when I do GaussianBlur() or Canny().
On the left is when I cover the camera; you can see the weird artifact more clearly there. In the middle is the artifact itself after passing through GaussianBlur(). It seems to create copies of the image and overlay them with each other. And on the right it's just grayscaled, to show that THAT works properly. So I figure it's not something happening between C# and C++; it's something that happens only when I pass the frame through OpenCV's blur, GaussianBlur, or Canny.
Here is the C# code in unity
using UnityEngine;
using System.Collections;
using System;
using System.Runtime.InteropServices;

public class camera : MonoBehaviour {

    [DllImport("tee")]
    public static extern void bw([MarshalAs(UnmanagedType.LPStruct)]
                                 IntPtr data,
                                 int width,
                                 int height);

    WebCamTexture back;
    Color32[] data;
    Byte[] byteData;
    Renderer rend;
    String test;
    Texture2D tex;
    GCHandle dataHandle;

    // Use this for initialization
    void Start () {
        back = new WebCamTexture();
        back.Play();
        rend = GetComponent<Renderer>();
        tex = new Texture2D(back.width, back.height, TextureFormat.ARGB32, false);
        data = back.GetPixels32();
        dataHandle = GCHandle.Alloc(data, GCHandleType.Pinned);
    }

    void OnDisable()
    {
        dataHandle.Free();
    }

    // Update is called once per frame
    void Update () {
        back.GetPixels32(data);
        bw(dataHandle.AddrOfPinnedObject(), back.width, back.height);
        tex.SetPixels32(data);
        tex.Apply();
        rend.material.mainTexture = tex;
    }
}
and here is the C++ code that gets compiled into a DLL
#include <opencv2\core\core.hpp>
#include <opencv2\imgproc\imgproc.hpp>

using namespace std;
using namespace cv;

extern "C"
{
    __declspec(dllexport) void bw(int data, int width, int height) {
        unsigned char * buffer = reinterpret_cast<unsigned char *>(data);
        Mat mat = Mat(width, height, CV_8UC4, buffer).clone();
        Mat gray;
        cvtColor(mat, gray, CV_RGBA2GRAY);
        Mat blurred;
        GaussianBlur(gray, blurred, Size(3, 3), 2, 2);
        if (blurred.isContinuous()) {
            for (int i = 0; i < (width * height); i++) {
                unsigned char * pxl = buffer + 4 * i;
                pxl[0] = blurred.data[i]; //red channel
                pxl[1] = blurred.data[i]; //green channel
                pxl[2] = blurred.data[i]; //blue channel
                pxl[3] = (unsigned char)255; // alpha channel
            }
        }
    }
}
According to OpenCV's documentation, the Mat constructor takes rows and cols as parameters, so you should switch the width and height arguments. See http://docs.opencv.org/2.4/modules/core/doc/basic_structures.html#mat-mat
Another thing: do you know how the images are stored in C#? Do they have any kind of data alignment (i.e. rows that aren't contiguous)? That could also be an issue when you create the containing Mat.
I'm on my phone currently; I'll try to reformat my answer ASAP.
EDIT: thinking about it, switching width and height in the constructor makes sense only if both OpenCV and Texture2D store images in row-major order. I've checked here (http://rbwhitaker.wikidot.com/extracting-texture-data) and it seems that they do.
I think your problem is the way you are accessing the blurred pixel values. blurred.ptr<uchar>(i) returns a pointer to row i, not the value of pixel i, so read the value instead:
for (int i = 0; i < (width * height); i++) {
    unsigned char * pxl = buffer + 4 * i;
    uchar v = blurred.at<uchar>(i / blurred.cols, i % blurred.cols);
    pxl[0] = v; //red channel
    pxl[1] = v; //green channel
    pxl[2] = v; //blue channel
    pxl[3] = (unsigned char)255; // alpha channel
}
One other thing you can look into is the way OpenCV stores pixel values versus how you do pointer access of the data buffer. You can test this easily by rotating the blurred image before accessing it and seeing whether that gives you the correct output, or by creating Mat mat = Mat(height, width, CV_8UC4, buffer).clone(); instead.
And you are right about the blurred type; it should be one channel, like the gray image.
Try the current code for another way of accessing the values in the blurred image.

Marshalling struct containing int and int[] from C# to C++

I have a C++ DLL with unmanaged code and a C# UI. There's a function imported from the C++ DLL that takes a written-by-me struct as a parameter.
After marshalling the struct (MyImage) from C# to C++ I can access the content of the int[] array inside it, but the content is different. I don't know what I'm missing here, as I've spent quite some time and tried a few tricks to resolve this (obviously not enough).
MyImage struct in C#:
[StructLayout(LayoutKind.Sequential)]
struct MyImage
{
    public int width;
    public int height;
    public int[] bits; //these represent colors of the image - 4 bytes for each pixel
}
MyImage struct in C++:
struct MyImage
{
    int width;
    int height;
    Color* bits; //typedef unsigned int Color;

    MyImage(int w, int h) : width(w), height(h)
    {
        bits = new Color[w*h];
    }

    Color GetPixel(int x, int y)
    {
        if (x < 0 || x >= width || y < 0 || y >= height) return UNDEFINED_COLOR;
        return bits[y*width+x];
    }
};
C# function declaration with MyImage as a parameter:
[DllImport("G_DLL.dll")]
public static extern void DisplayImageInPolygon(Point[] p, int n, MyImage texture,
                                                int tex_x0, int tex_y0);
C++ implementation
DLLEXPORT void __stdcall DisplayImageInPolygon(Point *p, int n, MyImage img,
                                               int imgx0, int imgy0)
{
    //Below, these have improper values (I don't know where they come from)
    Color test1 = img.GetPixel(0,0);
    Color test2 = img.GetPixel(1,0);
}
So when debugging the problem I noticed that the MyImage.bits array in c++ struct holds different data.
How can I fix it?
Since the bits field is a pointer to memory allocated in the native code, you are going to need to declare it as IntPtr in the C# code.
struct MyImage
{
    public int width;
    public int height;
    public IntPtr bits;
}
If you want to access individual pixels in the C# code you'll need to write a GetPixel method, just as you did in the C++ code.
Note that since the bits field is a pointer to memory allocated in the native code, I'd expect the actual code to have a destructor for the struct that calls delete[] bits. Otherwise your code will leak.
This also means that you are going to need to create and destroy instances in the native code, and never do so in the managed code. Is this the policy you currently follow? I suspect not based on the code that I can see here.
You also need to reconsider passing the struct by value. Do you really want to take a copy of it when you call that function? Doing so means you've got two instances of the struct whose bits fields both point to the same memory. But, which one owns that memory? This structure really needs to be passed by reference.
I think you've got some problems in your design, but I can't see enough of the code, or know enough about your problem to be able to give you concrete advice.
In comments you state that your main goal is to transfer these bits from your C# code to the C++ code. I suggest you do it like this:
MyImage* NewImage(int w, int h, Color* bits)
{
    MyImage* img = new MyImage;
    img->width = w;
    img->height = h;
    img->bits = new Color[w*h];
    for (int i=0; i<w*h; i++)
        img->bits[i] = bits[i];
    return img;
}

void DeleteImage(MyImage* img)
{
    delete[] img->bits;
    delete img;
}

void DoSomethingWithImage(MyImage* img)
{
    // do whatever it is you need to do
}
On the C# side you can declare it like this:
[DllImport(@"dllname.dll", CallingConvention=CallingConvention.Cdecl)]
static extern IntPtr NewImage(int w, int h, int[] bits);

[DllImport(@"dllname.dll", CallingConvention=CallingConvention.Cdecl)]
static extern void DeleteImage(IntPtr img);

[DllImport(@"dllname.dll", CallingConvention=CallingConvention.Cdecl)]
static extern void DoSomethingWithImage(IntPtr img);
The first thing you should try is declaring your C# code with unsigned int types as well. It is possible that one bit is being interpreted as a sign for your int.
So in C# something like this (just note the bits is now uint[]):
[StructLayout(LayoutKind.Sequential)]
struct MyImage
{
    public int width;
    public int height;
    public uint[] bits; //these represent colors of the image - 4 bytes for each pixel
}
You can use the PInvoke Interop Assistant. You simply paste your struct and function declaration and it will generate the C# code for you. It has helped me a lot quite a few times.

nesting structures, interfaces, and classes within an interface

I'm translating some C++ code (which I know very little about, and have never really used) to C#. Normally in C# I wouldn't find myself doing something like this, as it does seem a little odd, but given how the C++ code is set up, I find it hard not to do it this way. Admittedly, I'm not very experienced with programming at all, but for the amount of time I've been doing it, I've been able to grasp the concepts fairly well.
Anyway, here's the C++ code. It's in a header file, too.
#ifndef _SPRITE_H_
#define _SPRITE_H_
#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */
#ifndef NULL
#define NULL ((void *) 0)
#endif
typedef struct {
    unsigned char *data;
    int len;
    int width;
    int height;
} SpriteImage;

typedef struct {
    unsigned char b;
    unsigned char g;
    unsigned char r;
    unsigned char unused;
} SpritePalette;

typedef struct {
    char *filename;
    unsigned int nimages;
    SpriteImage *images;
    unsigned int palette_size;
    SpritePalette *palette;
} Sprite;

typedef enum {
    /* Developer errors */
    SE_BADARGS,
    /* sprite_new() errors */
    SE_CANTOPEN,
    SE_INVALID,
    /* sprite_to_bmp(), sprite_to_bmp_file() and sprite_to_rgb() errors */
    SE_INDEX,
    /* sprite_to_bmp_file() errors */
    SE_CANTWRITE
} SpriteError;
//Funcion para hacer uso de reverse_palette desde el exterior
SpritePalette * get_pal(SpritePalette *palette,int palette_len);
/* Open sprite file */
Sprite *sprite_open (const char *fname, SpriteError *error);
Sprite *sprite_open_from_data (const unsigned char *data, unsigned int size, SpriteError *error);
/* Change palette of sprite*/
void change_palete(Sprite *sprite, const char *fname, SpriteError *error);
/* Converts a sprite to bitmap file in memory */
void *sprite_to_bmp (Sprite *sprite, int i, int *size, SpriteError *error);
/* Like sprite_to_bmp(), but saves the result to a file */
int sprite_to_bmp_file (Sprite *sprite, int i, const char *writeToFile, SpriteError *error);
/* Converts a sprite to raw RGB data. The rowstride/pitch is 3*width. */
void *sprite_to_rgb (Sprite *sprite, int i, int *size, SpriteError *error);
/* Frees a Sprite* pointer */
void sprite_free (Sprite *sprite);
#ifdef __cplusplus
}
#endif /* __cplusplus */
#endif /* _SPRITE_H_ */
By the way, does anyone know what the deal is with the '#' reference?
I have no idea what these refer to.
And here's the C#:
interface Sprite
{
    public class SpriteImage
    {
        private byte *data;
        private int length;
        private int width;
        private int height;
    }

    public class SpritePalette
    {
        byte b;
        byte g;
        byte r;
        byte unused;
    }

    public class Sprite
    {
        string fileName;
        uint nImages;
        uint palette_size;
        SpriteImage image;
        SpritePalette palette;
    }

    public enum SpriteErrors
    {
        None, //--default value
        BadArguments, //--dev errors
        /*--errors derived from any instance/call of the NewSprite() method */
        CantOpen,
        Invalid,
        /*SpriteToBMP(), SpriteToBMPFile(), and SpriteToRGB() errors*/
        Index,
        CantWrite //--SpriteToBMPFile() errors
    }

    public interface ISprite
    {
        SpritePalette GetPalette(SpritePalette palette, int paletteLength);
        Sprite SpriteOpen(string firstName, SpriteErrors* error);
        Sprite SpriteOpenFromData(byte* data, uint size, SpriteErrors* error);
    }
}
I'm sure you can connect the dots here. Keep in mind that this isn't my code, obviously, so I don't really know much about it. If anyone needs anymore material though I'd be happy to provide it if necessary.
A couple of points:
1) Your types shouldn't be inside an interface.
2) Pointers should either be converted to members, as you have in the Sprite class, or to arrays, as you should have in the SpriteImage struct.
3) Unless it's a trivial port, it is going to be very difficult to do this without a good understanding of both languages and of the code to be ported.
You appear to be trying to port this SourceForge project from C++ to C#:
Ragnarok Online Sprite Viewer
That viewer is not only written in C++ but is also based on the Qt Toolkit.
I know your question is about translating this particular header file from C++ to C# and the best approach to that, but my opinion is that unless you are very comfortable with C++ and willing to learn a lot about Qt, your chances of success with this porting project are not very good. This is a big project even for a programmer seasoned in both C++ and C#.
However, if you still want to do this thing, then the approach you should take is to create a single large SpriteUtility static class and put all of the free C++ functions you find into that class as static C# methods. Yes, you can also put the C++ structs you see as nested classes. You don't need any interfaces whatsoever.
It doesn't need to be beautiful C# code; you are trying to port it verbatim doing as little damage to it as possible. Once it is working you can refactor it to make it more object-oriented in the traditional C# style.
