Copy C++ FFmpeg AVFrame to C# WriteableBitmap

I have a C++ function that scales video frames with the FFmpeg scaler. My question concerns the last two parameters (buffer and stride):
int scale_decoded_video_frame(void* handle, void* scalerHandle, void* scaledBuffer, int scaledBufferStride)
sws_scale(
scalerContext->sws_context,
srcData,
context->frame->linesize,
0,
scalerContext->source_height,
reinterpret_cast<uint8_t**>(&scaledBuffer),
&scaledBufferStride);
I declared it in C# with the following code:
[DllImport(LibraryName, EntryPoint = "scale_decoded_video_frame", CallingConvention = CallingConvention.Cdecl)]
public static extern int ScaleDecodedVideoFrame(IntPtr handle, IntPtr scalerHandle, IntPtr scaledBuffer, int scaledBufferStride);
And the C# call:
WriteableBitmap w = new WriteableBitmap(1920, 1080, 96, 96, PixelFormats.Pbgra32, null);
RenderOptions.SetBitmapScalingMode(w, BitmapScalingMode.NearestNeighbor);
w.Lock();
Int32Rect rect = new Int32Rect(0, 0, 1920, 1080);
resultCode = FFmpegVideoPInvoke.ScaleDecodedVideoFrame(decoderPtr, scalerHandle, w.BackBuffer, w.BackBufferStride);
if (resultCode == 0)
w.AddDirtyRect(rect);
w.Unlock();
It works perfectly, but I want to skip the scaling step and copy the decoded image straight into the WriteableBitmap buffer. I tried replacing the sws_scale call with the memcpy calls below, but nothing is displayed. context->frame is an AVFrame*.
memcpy(scaledBuffer, context->frame->data, AV_NUM_DATA_POINTERS * sizeof(uint8_t));
memcpy(&scaledBufferStride, context->frame->linesize, AV_NUM_DATA_POINTERS * sizeof(int));
How can I copy the AVFrame buffer to the WritableBitmap buffer?

Related

C# Linux Framebuffer Unsafe byte[] to CairoSharp ImageSurface

I am trying to create an image surface in c# CairoSharp using these two constructors:
public ImageSurface(byte[] data, Format format, int width, int height, int stride);
public ImageSurface(IntPtr data, Format format, int width, int height, int stride);
I am trying to get the array of the linux framebuffer from a memorymappedfile:
var file = MemoryMappedFile.CreateFromFile("/dev/fb0", FileMode.Open, null, (3840 * 2160 * (32 / 8)));
I know I have to use an unsafe context to get it, but I'm unsure of the proper syntax for getting a sequential pointer from the MemoryMappedFile object.
The constructors for the ImageSurface will not work with the MemoryMappedFile directly. You will have to Read bytes from the MemoryMappedFile and use those bytes to create the ImageSurface.
I've never used C# on Linux before, so I don't really know whether all of those objects are available, but maybe something like this?
private static void test()
{
Bitmap bmp = (Bitmap)Image.FromFile("some image");
BitmapData imgData = null;
try
{
imgData = bmp.LockBits(
new Rectangle(0, 0, bmp.Width, bmp.Height),
ImageLockMode.ReadWrite,
bmp.PixelFormat
);
int finalLength = imgData.Stride * imgData.Height;
byte[] buf = new byte[finalLength];
IntPtr ptr = imgData.Scan0;
System.Runtime.InteropServices.Marshal.Copy(ptr, buf, 0, finalLength);
bmp.UnlockBits(imgData);
imgData = null; // prevent a double UnlockBits in the finally block
// Pointer to first byte lives inside of fixed context.
unsafe
{
fixed (byte* p = buf)
{
// your cairo code...
}
}
// Alternative: pin the array to get a stable pointer.
var ptPinned = System.Runtime.InteropServices.GCHandle.Alloc(
    buf, System.Runtime.InteropServices.GCHandleType.Pinned);
IntPtr ptCairo = ptPinned.AddrOfPinnedObject();
// ... use ptCairo (e.g. pass it to the ImageSurface constructor) ...
ptPinned.Free(); // free only after the pointer is no longer in use
}
finally
{
if (imgData != null) {
bmp.UnlockBits(imgData);
}
}
}
In any case, I'm certain that you have to pass the pointer of an already allocated buffer. In the test above I just loaded a bitmap's pixels into a byte array; to get the pointer you either Marshal-copy it or use fixed. That is all on Windows, though.

Why did I fail to convert IntPtr to byte[]?

I'm working on a project that builds DLLs to run model inference from C#.
The inference pipeline consists of pre-processing, inference, and post-processing.
At first I wrote all of these steps in one DLL, and it worked fine.
But for the sake of "modularity", my boss asked me to split them into three separate DLLs.
So the ideal dataflow should look like this (each stage's DLL returns a cv::Mat*, which C# receives as an IntPtr and converts to a byte[] for the next stage):
C#                                || DLL (returns cv::Mat*)
pre-processing:  image -> byte[]  || -> Mat (...) -> new Mat(result) -> IntPtr
inference:       IntPtr -> byte[] || -> Mat (...) -> new Mat(result) -> IntPtr
post-processing: IntPtr -> byte[] || -> Mat (...) -> new Mat(result)
But when I pass the cv::Mat* back to C# and then hand the byte[] converted from the IntPtr to the next DLL, the input data (cv::Mat) there is no longer the same as the original image behind the IntPtr. Here is the (simplified) code I wrote for this:
//C++(DLL)
//pre-processing DLL ---------------------------------------------------
//dll_pre.h
extern "C" LIB_API cv::Mat* img_padding(unsigned char* img_pointer, int img_H, int img_W, int* len);
//dll_pre.cpp
LIB_API cv::Mat* img_padding(unsigned char* img_pointer, int img_H, int img_W, int* len)
{
cv::Mat input_pic = cv::Mat(img_H, img_W, CV_8UC(pic_ch), img_pointer);
//do something...
*len = input_pic.total() * input_pic.elemSize();
return new cv::Mat(input_pic);
}
// ---------------------------------------------------------------------
//inference DLL --------------------------------------------------------
//dll_inf.h
extern "C" LIB_API void dll_inference(unsigned char* img_pointer, int img_H, int img_W);
//dll_inf.cpp'
LIB_API void dll_inference(unsigned char* img_pointer, int img_H, int img_W)
{
cv::Mat input_pic = cv::Mat(img_H, img_W, CV_8UC(pic_ch), img_pointer);
cv::imwrite("dll_pic.png", input_pic);
//do something...
}
// ---------------------------------------------------------------------
//C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Drawing; //Bitmap
using System.Drawing.Imaging;
using System.IO; //Memory Stream
using System.Runtime.InteropServices; //Marshal
using System.Diagnostics;//Timer
using OpenCvSharp;
namespace Test_DLL_Console
{
class Program
{
[DllImport(@"dll_pre.dll")]
private static extern IntPtr img_padding(byte[] img, int img_H, int img_W, out int rt_len);
[DllImport(@"dll_inf.dll")]
private static extern void dll_inference(byte[] img, int img_H, int img_W);
public static byte[] GetRGBValues(Bitmap bmp, out int bmp_w)
{
// Lock the bitmap's bits.
Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
System.Drawing.Imaging.BitmapData bmpData =
bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadOnly, bmp.PixelFormat);
// Get the address of the first line.
IntPtr ptr = bmpData.Scan0;
// Declare an array to hold the bytes of the bitmap.
int bytes = bmpData.Stride * bmp.Height;
byte[] rgbValues = new byte[bytes];
Console.WriteLine($"bytes->{bytes}");
// Copy the RGB values into the array.
System.Runtime.InteropServices.Marshal.Copy(ptr, rgbValues, 0, bytes); bmp.UnlockBits(bmpData);
bmp_w = bmpData.Stride;
return rgbValues;
}
static void Main()
{
Bitmap image = new Bitmap("test_pic.bmp");
int img_W = 0;
int rt_len = 0;
byte[] dst = GetRGBValues(image, out img_W);
IntPtr pre_res = img_padding(dst, image.Height, img_W, out rt_len);
//Mat pad_res = new Mat(pre_res); //This is the way I get return picture from DLL in C#
////pad_res is different from dll_pic.png I saved from dll_inf
byte[] btarr = new byte[rt_len];
Marshal.Copy(pre_res, btarr, 0, rt_len);
dll_inference(btarr, image.Height, img_W);
}
}
}
To sum up, so far I've done the conversion from IntPtr to byte[] successfully:
dataflow: IntPtr >> Mat >> Bitmap >> byte[]
But the conversion takes too much time (in my opinion), so I want to simplify it.
I also tried another way of converting, but it still failed:
dataflow: IntPtr >> Mat >> byte[]
with this code
//C#
byte[] btarr = new byte[rt_len];
Mat X = new Mat(pre_res);
X.GetArray(image.Height, img_W, btarr);
dll_inference(btarr, image.Height, img_W);
//(But the result is also different from the way it should be...)
I don't know where I went wrong...
Any help or advice is greatly appreciated!!

How to transfer an image from C++ to Unity3D using OpenCV?

I'm having trouble transferring an image from C++ to Unity3D (C#). I used OpenCV to read a Mat image and got its data pointer via img.data. When I build the DLL and call the corresponding function, I get an error: all the data in the bytes array is 0. So I want to know how to transfer an image from C++ to C#. Thank you.
Sorry for the late reply; the code is shown below:
C++ code:
DLL_API BYTE* _stdcall MatToByte()
{
Mat srcImg = imread("F:\\3121\\image\\image_0\\000000.png");
nHeight = srcImg.rows;
nWidth = srcImg.cols;
nBytes = nHeight * nWidth * nFlag / 8;
if (pImg)
delete[] pImg;
pImg = new BYTE[nBytes];
memcpy(pImg, srcImg.data, nBytes);
return pImg;
}
C# code
[DllImport("Display")]
public static extern IntPtr MatToByte();
byte[] Bytes = new byte[nBytes];
IntPtr tmp = MatToByte();
Marshal.Copy(tmp, Bytes, 0, nBytes);
Texture2D texture = new Texture2D(width, height);
texture.LoadImage(Bytes);
left_plane.GetComponent<Renderer>().material.mainTexture = texture;

How do I send an image from C# to a DLL accepting OpenCV Mat properly?

I have a C++ class for which I created a C DLL to be used in our C# solution. Now I need to send images to the DLL, but I don't know the proper way of doing this. This is the C++ function signature:
std::vector<Prediction> Classify(const cv::Mat& img, int N = 2);
This is how I went about it. Currently I have created this wrapper method in the DLL:
#ifdef CDLL2_EXPORTS
#define CDLL2_API __declspec(dllexport)
#else
#define CDLL2_API __declspec(dllimport)
#endif
#include "../classification.h"
extern "C"
{
CDLL2_API void Classify_image(unsigned char* img_pointer, unsigned int height, unsigned int width, char* out_result, int* length_of_out_result, int top_n_results = 2);
//...
}
Code in the dll:
CDLL2_API void Classify_image(unsigned char* img_pointer, unsigned int height, unsigned int width, char* out_result, int* length_of_out_result, int top_n_results)
{
auto classifier = reinterpret_cast<Classifier*>(GetHandle());
cv::Mat img = cv::Mat(height, width, CV_32FC3, (void*)img_pointer);
std::vector<Prediction> result = classifier->Classify(img, top_n_results);
//misc code...
*length_of_out_result = ss.str().length();
}
and in the C# code I wrote :
[DllImport(@"CDll2.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
static extern void Classify_image(byte[] img, uint height, uint width, byte[] out_result, out int out_result_length, int top_n_results = 2);
private string Classify(Bitmap img, int top_n_results)
{
byte[] result = new byte[200];
int len;
var img_byte = (byte[])(new ImageConverter()).ConvertTo(img, typeof(byte[]));
Classify_image(img_byte, (uint)img.Height, (uint)img.Width, result, out len, top_n_results);
return ASCIIEncoding.ASCII.GetString(result);
}
but whenever I try to run the code, I get Access violation error:
An unhandled exception of type 'System.AccessViolationException'
occurred in Classification Using dotNet.exe
Additional information: Attempted to read or write protected memory.
This is often an indication that other memory is corrupt.
and exception error says:
{"Attempted to read or write protected memory. This is often an
indication that other memory is corrupt."}
Deeper investigation into the code made it clear that, I get the exception error in this function:
void Classifier::Preprocess(const cv::Mat& img, std::vector<cv::Mat>* input_channels)
{
/* Convert the input image to the input image format of the network. */
cv::Mat sample;
if (img.channels() == 3 && num_channels_ == 1)
cv::cvtColor(img, sample, cv::COLOR_BGR2GRAY);
else if (img.channels() == 4 && num_channels_ == 1)
cv::cvtColor(img, sample, cv::COLOR_BGRA2GRAY);
else if (img.channels() == 4 && num_channels_ == 3)
cv::cvtColor(img, sample, cv::COLOR_BGRA2BGR);
else if (img.channels() == 1 && num_channels_ == 3)
cv::cvtColor(img, sample, cv::COLOR_GRAY2BGR);
else
sample = img;
//resize image according to the input
cv::Mat sample_resized;
if (sample.size() != input_geometry_)
cv::resize(sample, sample_resized, input_geometry_);
else
sample_resized = sample;
cv::Mat sample_float;
if (num_channels_ == 3)
sample_resized.convertTo(sample_float, CV_32FC3);
else
sample_resized.convertTo(sample_float, CV_32FC1);
cv::Mat sample_normalized;
cv::subtract(sample_float, mean_, sample_normalized);
/* This operation will write the separate BGR planes directly to the
* input layer of the network because it is wrapped by the cv::Mat
* objects in input_channels. */
cv::split(sample_normalized, *input_channels);
CHECK(reinterpret_cast<float*>(input_channels->at(0).data)
== net_->input_blobs()[0]->cpu_data())
<< "Input channels are not wrapping the input layer of the network.";
}
The access violation occurs when the code attempts to resize the image, i.e. when running this snippet:
//resize image according to the input
cv::Mat sample_resized;
if (sample.size() != input_geometry_)
cv::resize(sample, sample_resized, input_geometry_);
Further investigation and debugging (here) made the culprit visible!
This approach proved to be plain wrong, or at the very least buggy. With this code, the image on the C++ side seemed to be initialized properly: the number of channels, the height, and the width all looked fine.
But the moment you attempt to use the image, either by resizing it or even showing it with imshow(), the application crashes with an access violation exception, the very same error that occurred when resizing, as posted in the question.
Looking at this answer, I changed the C# code responsible for handing the image to the DLL. The new code is as follows:
//Dll import
[DllImport(@"CDll2.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
static extern void Classify_Image(IntPtr img, uint height, uint width, byte[] out_result, out int out_result_length, int top_n_results = 2);
//...
//main code
Bitmap img = new Bitmap(txtImagePath.Text);
BitmapData bmpData = img.LockBits(new Rectangle(0, 0, img.Width, img.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
byte[] res = new byte[200];
int len;
Classify_Image(bmpData.Scan0, (uint)img.Height, (uint)img.Width, res, out len, 2);
img.UnlockBits(bmpData); //Remember to unlock!!!
and the C++ code in the DLL:
CDLL2_API void Classify_Image(unsigned char* img_pointer, unsigned int height, unsigned int width, char* out_result, int* length_of_out_result, int top_n_results)
{
auto classifier = reinterpret_cast<Classifier*>(GetHandle());
cv::Mat img = cv::Mat(height, width, CV_8UC3, (void*)img_pointer, Mat::AUTO_STEP);
std::vector<Prediction> result = classifier->Classify(img, top_n_results);
//...
*length_of_out_result = ss.str().length();
}
And doing so rectified all access violations I was getting previously.
Although I can now easily send images from C# to the DLL, I have some issue with the current implementation of mine. I don't know how to send the OpenCV type from C# to the needed function, currently I'm using a hardcoded image type as you can see, and this begs the question, what should I do when my input image is grayscale or even a png with 4 channels?
After trying many different approaches, I think it would be beneficial to other people trying to do the same thing to know this.
To cut a very long story short (see this question), the best way I could find is this (as @EdChum puts it):
I'd pass the file as memory to your openCV dll, this should be able
to call imdecode which will sniff the file type, additionally you can
pass the flag
As also explained here: send a pointer to the DLL and use imdecode there to decode the image. This solved a lot of the issues other approaches introduced, and it will also save you a lot of headaches.
Here is the code of interest:
This is how my functions in the DLL and C# should have looked :
#ifdef CDLL2_EXPORTS
#define CDLL2_API __declspec(dllexport)
#else
#define CDLL2_API __declspec(dllimport)
#endif
#include "classification.h"
extern "C"
{
CDLL2_API void Classify_Image(unsigned char* img_pointer, long data_len, char* out_result, int* length_of_out_result, int top_n_results = 2);
//...
}
The actual method :
CDLL2_API void Classify_Image(unsigned char* img_pointer, long data_len,
char* out_result, int* length_of_out_result, int top_n_results)
{
auto classifier = reinterpret_cast<Classifier*>(GetHandle());
vector<unsigned char> inputImageBytes(img_pointer, img_pointer + data_len);
cv::Mat img = imdecode(inputImageBytes, cv::IMREAD_COLOR);
cv::imshow("img just received from c#", img);
std::vector<Prediction> result = classifier->Classify(img, top_n_results);
//...
*length_of_out_result = ss.str().length();
}
Here is the C# Dll Import:
//note: C++ 'long' is 32 bits on Windows, so marshal the length as a C# int
[DllImport(@"CDll2.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
static extern void Classify_Image(byte[] img, int data_len, byte[] out_result, out int out_result_length, int top_n_results = 2);
and this is the actual method sending the image to the DLL:
private string Classify_UsingImage(Bitmap image, int top_n_results)
{
byte[] result = new byte[200];
int len;
Bitmap img;
if (chkResizeImageCShap.Checked)
img = ResizeImage(image, int.Parse(txtWidth.Text), (int.Parse(txtHeight.Text)));
else
img = image;
ImageFormat fmt = new ImageFormat(image.RawFormat.Guid);
var imageCodecInfo = ImageCodecInfo.GetImageEncoders().FirstOrDefault(codec => codec.FormatID == image.RawFormat.Guid);
//this is for situations where the image is not read from disk and lives in memory (e.g. it comes from a camera or snapshot)
if (imageCodecInfo == null)
{
fmt = ImageFormat.Jpeg;
}
using (MemoryStream ms = new MemoryStream())
{
img.Save(ms,fmt);
byte[] image_byte_array = ms.ToArray();
Classify_Image(image_byte_array, (int)ms.Length, result, out len, top_n_results);
}
return ASCIIEncoding.ASCII.GetString(result);
}

Getting distorted images after sending them from C# to OpenCV in C++?

I created a C DLL out of my C++ class which uses OpenCV for image manipulations and want to use this DLL in my C# application. Currently, this is how I have implemented it:
#ifdef CDLL2_EXPORTS
#define CDLL2_API __declspec(dllexport)
#else
#define CDLL2_API __declspec(dllimport)
#endif
#include "../classification.h"
extern "C"
{
CDLL2_API void Classify_image(unsigned char* img_pointer, unsigned int height, unsigned int width, char* out_result, int* length_of_out_result, int top_n_results = 2);
//...
}
C# related code:
DLL Import section:
//Dll import
[DllImport(#"CDll2.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
static extern void Classify_Image(IntPtr img, uint height, uint width, byte[] out_result, out int out_result_length, int top_n_results = 2);
The actual function sending the image to the DLL:
//...
//main code
private string Classify(int top_n)
{
byte[] res = new byte[200];
int len;
Bitmap img = new Bitmap(txtImagePath.Text);
BitmapData bmpData = img.LockBits(new Rectangle(0, 0, img.Width, img.Height),
ImageLockMode.ReadWrite,
PixelFormat.Format24bppRgb);
Classify_Image(bmpData.Scan0, (uint)bmpData.Height, (uint)bmpData.Width, res, out len, top_n);
img.UnlockBits(bmpData); //Remember to unlock!!!
//...
}
and the C++ code in the DLL :
CDLL2_API void Classify_Image(unsigned char* img_pointer, unsigned int height, unsigned int width,
char* out_result, int* length_of_out_result, int top_n_results)
{
auto classifier = reinterpret_cast<Classifier*>(GetHandle());
cv::Mat img = cv::Mat(height, width, CV_8UC3, (void*)img_pointer, Mat::AUTO_STEP);
std::vector<Prediction> result = classifier->Classify(img, top_n_results);
//...
*length_of_out_result = ss.str().length();
}
This works perfectly with some images, but not with others; for example, when I imshow the image in Classify_Image right after it is created from the data sent by the C# application, I see images like this:
Problematic example:
Fine example:
Your initial issue has to do with what is called the stride or pitch of image buffers.
Basically, for performance reasons, pixel rows can be memory-aligned; here that alignment causes the pixel rows not to line up, because the row size in memory is not equal to the pixel row width.
The general case is:
resolution width * bit-depth (in bytes) * num of channels + padding
In your case, the Bitmap class documentation states:
The stride is the width of a single row of pixels (a scan line),
rounded up to a four-byte boundary
So if we look at the problematic image: it has a width of 1414 pixels and is an 8-bit-per-channel RGB bitmap, so if we do the maths:
1414 * 1 * 3 (RGB, so 3 channels) = 4242 bytes
Now divide by 4 bytes:
4242 / 4 = 1060.5
So we are left with 0.5 * 4 bytes = 2 bytes of padding,
which means the stride is in fact 4244 bytes.
This stride needs to be passed through to the C++ side so that the rows are read correctly.
Looking at what you're doing, I'd pass the file as memory to your OpenCV DLL; it can then call imdecode, which will sniff the file type. Additionally, you can pass the flag cv::IMREAD_GRAYSCALE, which will load the image and convert it to grayscale on the fly.
