Update a D3D9 texture from CUDA - C#

I’m working on a prototype that integrates WPF, Direct3D9 (using Microsoft’s D3DImage WPF class), and CUDA (I need to be able to generate a texture for the D3DImage on the GPU).
The problem is that CUDA doesn't update my texture. No error codes are returned; the texture just stays unchanged. Even when I map the resource again and read back what I wrote, I don't see any changes. How do I update my D3D9 texture?
I'm not even running any CUDA kernels yet; for debugging purposes I'm only using the cuMemcpy2D API to write the CUDA memory, copying some fake data from the CPU.
Here's the code. It's C#, but I've noted the corresponding native APIs in the comments:
static void updateTexture( Texture tx )
{
    var size = tx.getSize();
    using( CudaDirectXInteropResource res = new CudaDirectXInteropResource( tx.NativePointer, CUGraphicsRegisterFlags.None, CudaContext.DirectXVersion.D3D9 ) ) // cuGraphicsD3D9RegisterResource
    {
        res.Map(); // = cuGraphicsMapResources
        using( CudaArray2D arr = res.GetMappedArray2D( 0, 0 ) ) // cuGraphicsSubResourceGetMappedArray, cuArrayGetDescriptor. The size is correct here, BTW
        {
            // Debug code below - don't run any kernels for now, just call cuMemcpy2D to write the GPU memory
            uint[] arrWhite = new uint[ size.Width * size.Height ];
            for( int i = 0; i < arrWhite.Length; i++ )
                arrWhite[ i ] = 0xFF0000FF;
            arr.CopyFromHostToThis( arrWhite ); // cuMemcpy2D
            uint[] test = new uint[ size.Width * size.Height ];
            arr.CopyFromThisToHost( test ); // The values here are correct
        }
        res.UnMap(); // cuGraphicsUnmapResources
    }
    tx.AddDirtyRectangle();

    // Map again and check what's in the resource
    using( CudaDirectXInteropResource res = new CudaDirectXInteropResource( tx.NativePointer, CUGraphicsRegisterFlags.None, CudaContext.DirectXVersion.D3D9 ) )
    {
        res.Map();
        using( CudaArray2D arr = res.GetMappedArray2D( 0, 0 ) )
        {
            uint[] test = new uint[ size.Width * size.Height ];
            arr.CopyFromThisToHost( test ); // All zeros :-(
            Debug.WriteLine( "First pixel: {0:X}", test[ 0 ] );
        }
        res.UnMap();
    }
}

As hinted by the commenter, I’ve tried creating a single instance of CudaDirectXInteropResource along with the D3D texture.
It worked.
It’s counter-intuitive and undocumented, but it looks like cuGraphicsUnregisterResource destroys the newly written data.
At least on my machine with a GeForce GTX 960, CUDA 7.0 and Windows 8.1 x64.
So, the solution: call cuGraphicsD3D9RegisterResource once per texture, and use the cuGraphicsMapResources / cuGraphicsUnmapResources APIs to allow CUDA to access the texture data.
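For reference, a minimal sketch of the working pattern, reusing the same ManagedCuda wrappers as the code above (the Texture type and its getSize() / AddDirtyRectangle() helpers are from my own wrapper, so treat them as placeholders):

// Register the resource ONCE, when the D3D9 texture is created, and keep it
// alive next to the texture (cuGraphicsD3D9RegisterResource). Then, every time
// the texture needs new data:
static void updateTexture( Texture tx, CudaDirectXInteropResource res )
{
    var size = tx.getSize();
    res.Map(); // cuGraphicsMapResources
    using( CudaArray2D arr = res.GetMappedArray2D( 0, 0 ) )
    {
        uint[] pixels = new uint[ size.Width * size.Height ];
        for( int i = 0; i < pixels.Length; i++ )
            pixels[ i ] = 0xFF0000FF;
        arr.CopyFromHostToThis( pixels ); // cuMemcpy2D; a kernel writing the mapped array works the same way
    }
    res.UnMap(); // cuGraphicsUnmapResources
    tx.AddDirtyRectangle();
}
// Dispose the resource (cuGraphicsUnregisterResource) only when the texture itself is destroyed.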

Related

How to use the ARCore camera image in OpenCV in a Unity Android app?

I am trying to use OpenCV for hand gesture recognition in my Unity ARCore game. However, with the deprecation of TextureReaderAPI, the only way to capture the image from the camera is by using Frame.CameraImage.AcquireCameraImageBytes(). The problem with that is not only that the image is in 640x480 resolution (this cannot be changed AFAIK), but it is also in YUV_420_888 format.
As if that were not enough, OpenCV does not have free C#/Unity packages, so if I do not want to pay $20 for a paid package, I need to use the available C++ or Python versions. How do I move the YUV image to OpenCV, convert it to an RGB (or HSV) color space, and then either do some processing on it or return it to Unity?
In this example, I will use C++ OpenCV libraries and Visual Studio 2017 and I will try to capture ARCore camera image, move it to OpenCV (as efficiently as possible), convert it to RGB color space, then move it back to Unity C# code and save it in the phone's memory.
Firstly, we have to create a C++ dynamic library project to use with OpenCV. For this, I highly recommend following both Pierre Baret's and Ninjaman494's answers on this question: OpenCV + Android + Unity. The process is rather straightforward, and if you do not deviate from their answers too much (i.e. you can safely download a version of OpenCV newer than 3.3.1, but be careful when compiling for ARM64 instead of ARM, etc.), you should be able to call a C++ function from C#.
In my experience, I had to solve two problems - firstly, if you make the project part of your C# solution instead of creating a new solution, Visual Studio will keep messing with your configuration, like trying to compile an x86 version instead of an ARM version. To save yourself the hassle, create a completely separate solution. The other problem is that some functions failed to link for me, throwing an undefined reference linker error (undefined reference to 'cv::error(int, std::string const&, char const*, char const*, int, to be exact). If this happens and the problem is with a function that you do not really need, just recreate the function in your code - for instance, if you have problems with cv::error, add this code at the end of your .cpp file:
namespace cv {
    __noreturn void error(int a, const String & b, const char * c, const char * d, int e) {
        throw std::string(b);
    }
}
Sure, this is an ugly and dirty way to do things, so if you know how to fix the linker error properly, please do so and let me know.
Now you should have working C++ code that compiles and can be called from a Unity Android application. However, what we want is for OpenCV not to return a number, but to convert an image. So change your code to this:
.h file
extern "C" {
namespace YOUR_OWN_NAMESPACE
{
int ConvertYUV2RGBA(unsigned char *, unsigned char *, int, int);
}
}
.cpp file
extern "C" {
int YOUR_OWN_NAMESPACE::ConvertYUV2RGBA(unsigned char * inputPtr, unsigned char * outputPtr, int width, int height) {
// Create Mat objects for the YUV and RGB images. For YUV, we need a
// height*1.5 x width image, that has one 8-bit channel. We can also tell
// OpenCV to have this Mat object "encapsulate" an existing array,
// which is inputPtr.
// For RGB image, we need a height x width image, that has three 8-bit
// channels. Again, we tell OpenCV to encapsulate the outputPtr array.
// Thanks to specifying existing arrays as data sources, no copying
// or memory allocation has to be done, and the process is highly
// effective.
cv::Mat input_image(height + height / 2, width, CV_8UC1, inputPtr);
cv::Mat output_image(height, width, CV_8UC3, outputPtr);
// If any of the images has not loaded, return 1 to signal an error.
if (input_image.empty() || output_image.empty()) {
return 1;
}
// Convert the image. Now you might have seen people telling you to use
// NV21 or 420sp instead of NV12, and BGR instead of RGB. I do not
// understand why, but this was the correct conversion for me.
// If you have any problems with the color in the output image,
// they are probably caused by incorrect conversion. In that case,
// I can only recommend you the trial and error method.
cv::cvtColor(input_image, output_image, cv::COLOR_YUV2RGB_NV12);
// Now that the result is safely saved in outputPtr, we can return 0.
return 0;
}
}
Now, rebuild the solution (Ctrl + Shift + B) and copy the libProjectName.so file to Unity's Plugins/Android folder, as in the linked answer.
The next thing is to save the image from ARCore, move it to C++ code, and get it back. Let us add this inside the class in our C# script:
[DllImport("YOUR_OWN_NAMESPACE")]
public static extern int ConvertYUV2RGBA(IntPtr input, IntPtr output, int width, int height);
You will be prompted by Visual Studio to add a System.Runtime.InteropServices using clause - do so.
This allows us to use the C++ function in our C# code. Now, let's add this function to our C# component:
public Texture2D CameraToTexture()
{
    // Create the object for the result - this has to be done before the
    // using {} clause.
    Texture2D result;

    // Use using to make sure that C# disposes of the CameraImageBytes afterwards
    using (CameraImageBytes camBytes = Frame.CameraImage.AcquireCameraImageBytes())
    {
        // If acquiring failed, return null
        if (!camBytes.IsAvailable)
        {
            Debug.LogWarning("camBytes not available");
            return null;
        }

        // To save a YUV_420_888 image, you need 1.5*pixelCount bytes.
        // I will explain later why.
        byte[] YUVimage = new byte[(int)(camBytes.Width * camBytes.Height * 1.5f)];

        // As CameraImageBytes keeps the Y, U and V data in three separate
        // arrays, we need to put them into a single array. This is done using
        // native pointers, which are considered unsafe in C#.
        unsafe
        {
            for (int i = 0; i < camBytes.Width * camBytes.Height; i++)
            {
                YUVimage[i] = *((byte*)camBytes.Y.ToPointer() + (i * sizeof(byte)));
            }
            for (int i = 0; i < camBytes.Width * camBytes.Height / 4; i++)
            {
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i] = *((byte*)camBytes.U.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i + 1] = *((byte*)camBytes.V.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
            }
        }

        // Create the output byte array. RGB has three channels, therefore
        // we need 3 times the pixel count.
        byte[] RGBimage = new byte[camBytes.Width * camBytes.Height * 3];

        // GCHandles help us "pin" the arrays in memory, so that we can
        // pass them to the C++ code.
        GCHandle YUVhandle = GCHandle.Alloc(YUVimage, GCHandleType.Pinned);
        GCHandle RGBhandle = GCHandle.Alloc(RGBimage, GCHandleType.Pinned);

        // Call the C++ function that we created.
        int k = ConvertYUV2RGBA(YUVhandle.AddrOfPinnedObject(), RGBhandle.AddrOfPinnedObject(), camBytes.Width, camBytes.Height);

        // If the OpenCV conversion failed, return null
        if (k != 0)
        {
            Debug.LogWarning("Color conversion - k != 0");
            return null;
        }

        // Create a new texture object
        result = new Texture2D(camBytes.Width, camBytes.Height, TextureFormat.RGB24, false);

        // Load the RGB array into the texture and send it to the GPU
        result.LoadRawTextureData(RGBimage);
        result.Apply();

        // Save the texture as a PNG file. The end of the using {} clause
        // disposes of the CameraImageBytes.
        File.WriteAllBytes(Application.persistentDataPath + "/tex.png", result.EncodeToPNG());
    }

    // Return the texture.
    return result;
}
To be able to run unsafe code, you also need to allow it in Unity. Go to Player Settings (Edit > Project Settings > Player) and check the Allow unsafe code checkbox.
Now you can call the CameraToTexture() function, say, every 5 seconds from Update(), and the camera image should be saved as /Android/data/YOUR_APPLICATION_PACKAGE/files/tex.png. The image will probably be landscape oriented even if you held the phone in portrait mode, but this is not that hard to fix anymore. Also, you might notice a freeze every time the image is saved - I recommend calling this function from a separate thread because of this. The most demanding operation here is saving the image as a PNG file, so if you need the texture for any other reason you should be fine (still use the separate thread, though).
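For the "every 5 seconds" part, a minimal scheduling sketch inside the same MonoBehaviour (the 5-second interval is arbitrary):

private float timer = 0f;

void Update()
{
    timer += Time.deltaTime;
    if (timer >= 5f) // roughly every 5 seconds
    {
        timer = 0f;
        CameraToTexture(); // the returned texture is ignored here; the function already saves the PNG
    }
}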
If you want to understand the YUV_420_888 format, why you need a 1.5*pixelCount array, and why we modified the arrays the way we did, read https://wiki.videolan.org/YUV/#NV12. Other websites seem to have incorrect information about how this format works.
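In short, the copy loops above build an NV12-style buffer: the full-resolution Y plane first, then interleaved U/V pairs, one pair per 2x2 pixel block. A small helper (hypothetical, just to illustrate the arithmetic):

// Y plane: width*height bytes; U and V: (width/2)*(height/2) bytes each, interleaved after it.
static int Nv12BufferSize(int width, int height)
{
    int ySize = width * height;                  // full-resolution luma
    int uvSize = (width / 2) * (height / 2) * 2; // quarter-resolution U and V samples
    return ySize + uvSize;                       // = 1.5 * width * height, hence the 1.5f above
}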
Also, feel free to comment with any issues you might have, and I will try to help with them; any feedback on both the code and the answer is welcome too.
APPENDIX 1: According to https://docs.unity3d.com/ScriptReference/Texture2D.LoadRawTextureData.html, you should use GetRawTextureData instead of LoadRawTextureData, to prevent copying. To do this, just pin the array returned by GetRawTextureData instead of the RGBimage array (which you can remove). Also, do not forget to call result.Apply(); afterwards.
APPENDIX 2: Do not forget to call Free() on both GCHandles when you are done using them.
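Putting Appendices 1 and 2 together, the end of the function could look roughly like the sketch below. Note that it uses the NativeArray overload of GetRawTextureData (available in newer Unity versions) instead of pinning a managed array, so treat it as an untested variation on what the appendix describes rather than a drop-in replacement:

// Write the converted RGB data straight into the texture's own buffer (Appendix 1).
result = new Texture2D(camBytes.Width, camBytes.Height, TextureFormat.RGB24, false);
Unity.Collections.NativeArray<byte> raw = result.GetRawTextureData<byte>();
int k;
unsafe
{
    k = ConvertYUV2RGBA(YUVhandle.AddrOfPinnedObject(),
        (IntPtr)Unity.Collections.LowLevel.Unsafe.NativeArrayUnsafeUtility.GetUnsafePtr(raw),
        camBytes.Width, camBytes.Height);
}
// Appendix 2: free the remaining pinned handle as soon as the native call returns.
YUVhandle.Free();
if (k != 0)
    return null;
// Upload the modified buffer to the GPU.
result.Apply();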
I figured out how to get the full resolution CPU image in ARCore 1.8.
I can now get the full camera resolution with CameraImageBytes.
Put this in your class variables:
private ARCoreSession.OnChooseCameraConfigurationDelegate m_OnChoseCameraConfiguration = null;
Put this in Start():
m_OnChoseCameraConfiguration = _ChooseCameraConfiguration;
ARSessionManager.RegisterChooseCameraConfigurationCallback(m_OnChoseCameraConfiguration);
ARSessionManager.enabled = false;
ARSessionManager.enabled = true;
Add this callback to the class:
private int _ChooseCameraConfiguration(List<CameraConfig> supportedConfigurations)
{
    return supportedConfigurations.Count - 1;
}
Once you add those, you should have CameraImageBytes returning the full resolution of the camera.
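Put together, the whole thing might look roughly like this; the snippets above are the source, while the surrounding MonoBehaviour and the ARSessionManager field (your scene's ARCoreSession reference) are my own framing:

using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;

public class FullResCameraConfig : MonoBehaviour
{
    public ARCoreSession ARSessionManager; // assign the scene's ARCoreSession in the Inspector

    private ARCoreSession.OnChooseCameraConfigurationDelegate m_OnChoseCameraConfiguration = null;

    void Start()
    {
        m_OnChoseCameraConfiguration = _ChooseCameraConfiguration;
        ARSessionManager.RegisterChooseCameraConfigurationCallback(m_OnChoseCameraConfiguration);
        // Restart the session so the new configuration takes effect.
        ARSessionManager.enabled = false;
        ARSessionManager.enabled = true;
    }

    // Pick the last (highest-resolution) entry in the supported configuration list.
    private int _ChooseCameraConfiguration(List<CameraConfig> supportedConfigurations)
    {
        return supportedConfigurations.Count - 1;
    }
}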
For everyone who wants to try this with OpenCVForUnity:
public Mat getCameraImage()
{
    // Use using to make sure that C# disposes of the CameraImageBytes afterwards
    using (CameraImageBytes camBytes = Frame.CameraImage.AcquireCameraImageBytes())
    {
        // If acquiring failed, return null
        if (!camBytes.IsAvailable)
        {
            Debug.LogWarning("camBytes not available");
            return null;
        }

        // To hold a YUV_420_888 image, you need 1.5*pixelCount bytes.
        byte[] YUVimage = new byte[(int)(camBytes.Width * camBytes.Height * 1.5f)];

        // As CameraImageBytes keeps the Y, U and V data in three separate
        // arrays, we need to put them into a single array. This is done using
        // native pointers, which are considered unsafe in C#.
        unsafe
        {
            for (int i = 0; i < camBytes.Width * camBytes.Height; i++)
            {
                YUVimage[i] = *((byte*)camBytes.Y.ToPointer() + (i * sizeof(byte)));
            }
            for (int i = 0; i < camBytes.Width * camBytes.Height / 4; i++)
            {
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i] = *((byte*)camBytes.U.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i + 1] = *((byte*)camBytes.V.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
            }
        }

        // Pin the YUV array so we can hand its address to OpenCVForUnity.
        GCHandle pinnedArray = GCHandle.Alloc(YUVimage, GCHandleType.Pinned);
        IntPtr pointer = pinnedArray.AddrOfPinnedObject();

        // Copy the packed YUV data into an OpenCV Mat and convert it to RGB.
        Mat input = new Mat(camBytes.Height + camBytes.Height / 2, camBytes.Width, CvType.CV_8UC1);
        Mat output = new Mat(camBytes.Height, camBytes.Width, CvType.CV_8UC3);
        Utils.copyToMat(pointer, input);
        Imgproc.cvtColor(input, output, Imgproc.COLOR_YUV2RGB_NV12);
        pinnedArray.Free();
        return output;
    }
}
Here is an implementation of this which just uses the free plugin OpenCV Plus Unity. It is very simple to set up and has great documentation if you are familiar with OpenCV.
This implementation rotates the images properly using OpenCV, stores them in memory, and saves them to file when the app exits. I have tried to strip all Unity aspects from the code so that the function GetCameraImage() can be run on a separate thread.
I can confirm it works on Android (Galaxy S7); I presume it will work pretty universally.
using System;
using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;
using OpenCvSharp;
using System.Runtime.InteropServices;

public class CamImage : MonoBehaviour
{
    public static List<Mat> AllData = new List<Mat>();

    public static void GetCameraImage()
    {
        // Use using to make sure that C# disposes of the CameraImageBytes afterwards
        using (CameraImageBytes camBytes = Frame.CameraImage.AcquireCameraImageBytes())
        {
            // If acquiring failed, return
            if (!camBytes.IsAvailable)
            {
                return;
            }

            // To hold a YUV_420_888 image, you need 1.5*pixelCount bytes.
            byte[] YUVimage = new byte[(int)(camBytes.Width * camBytes.Height * 1.5f)];

            // As CameraImageBytes keeps the Y, U and V data in three separate
            // arrays, we need to put them into a single array. This is done using
            // native pointers, which are considered unsafe in C#.
            unsafe
            {
                for (int i = 0; i < camBytes.Width * camBytes.Height; i++)
                {
                    YUVimage[i] = *((byte*)camBytes.Y.ToPointer() + (i * sizeof(byte)));
                }
                for (int i = 0; i < camBytes.Width * camBytes.Height / 4; i++)
                {
                    YUVimage[(camBytes.Width * camBytes.Height) + 2 * i] = *((byte*)camBytes.U.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                    YUVimage[(camBytes.Width * camBytes.Height) + 2 * i + 1] = *((byte*)camBytes.V.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                }
            }

            // Pin the YUV array so OpenCvSharp can wrap it without copying.
            GCHandle pinnedArray = GCHandle.Alloc(YUVimage, GCHandleType.Pinned);
            IntPtr pointerYUV = pinnedArray.AddrOfPinnedObject();

            Mat input = new Mat(camBytes.Height + camBytes.Height / 2, camBytes.Width, MatType.CV_8UC1, pointerYUV);
            Mat output = new Mat(camBytes.Height, camBytes.Width, MatType.CV_8UC3);
            Cv2.CvtColor(input, output, ColorConversionCodes.YUV2BGR_NV12); // was YUV2RGB_NV12

            // Flip and transpose to portrait orientation
            Cv2.Transpose(output, output);
            Cv2.Flip(output, output, FlipMode.Y);

            AllData.Add(output);
            pinnedArray.Free();
        }
    }
}
I then call ExportImages() when exiting the program to save to file.
private void ExportImages()
{
    // Write camera intrinsics to a text file
    var path = Application.persistentDataPath;
    StreamWriter sr = new StreamWriter(path + @"/intrinsics.txt");
    sr.WriteLine(CameraIntrinsicsOutput.text);
    Debug.Log(CameraIntrinsicsOutput.text);
    sr.Close();

    // Loop through the Mat list, convert each to a texture and save it.
    for (var i = 0; i < CamImage.AllData.Count; i++)
    {
        Mat imOut = CamImage.AllData[i];
        Texture2D result = Unity.MatToTexture(imOut);
        result.Apply();
        byte[] im = result.EncodeToJPG(100);
        string fileName = "/IMG" + i + ".jpg";
        File.WriteAllBytes(path + fileName, im);
        string messge = "Successfully saved image to " + path + "\n";
        Debug.Log(messge);
        Destroy(result);
    }
}
Seems you already fixed this.
But for anyone who wants to combine AR with hand gesture recognition and tracking, try Manomotion: https://www.manomotion.com/
It has a free SDK and worked perfectly as of 12/2020.
Use the SDK Community Edition and download the ARFoundation version.

SharpDX XAudio2: 6 SourceVoice limit

I have been playing around with SharpDX.XAudio2 for a few days now, and while things have been largely positive (the odd software quirk here and there), the following problem has me completely stuck:
I am working in C# .NET using VS2015.
I am trying to play multiple sounds simultaneously.
To do this, I have made:
- Test.cs: Contains main method
- cSoundEngine.cs: Holds XAudio2, MasteringVoice, and sound management methods.
- VoiceChannel.cs: Holds a SourceVoice, and in future any sfx/ related data.
cSoundEngine:
List<VoiceChannel> sourceVoices;
XAudio2 engine;
MasteringVoice master;

public cSoundEngine()
{
    engine = new XAudio2();
    master = new MasteringVoice(engine);
    sourceVoices = new List<VoiceChannel>();
}

public VoiceChannel AddAndPlaySFX(string filepath, double vol, float pan)
{
    /**
     * Set up and start SourceVoice
     */
    NativeFileStream fileStream = new NativeFileStream(filepath, NativeFileMode.Open, NativeFileAccess.Read);
    SoundStream soundStream = new SoundStream(fileStream);
    SourceVoice source = new SourceVoice(engine, soundStream.Format);
    AudioBuffer audioBuffer = new AudioBuffer()
    {
        Stream = soundStream.ToDataStream(),
        AudioBytes = (int)soundStream.Length,
        Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
    };

    //Make voice wrapper
    VoiceChannel voice = new VoiceChannel(source);
    sourceVoices.Add(voice);

    //Volume
    source.SetVolume((float)vol);

    //Play sound
    source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
    source.Start();
    return voice;
}
Test.cs:
cSoundEngine engine = new cSoundEngine();
int total = 6;
for (int i = 0; i < total; i++)
{
    string filepath = System.IO.Directory.GetParent(System.IO.Directory.GetCurrentDirectory()).Parent.FullName + @"\Assets\Planet.wav";
    VoiceChannel sfx = engine.AddAndPlaySFX(filepath, 0.1, 0);
}
Console.Read(); //Input anything to end play.
There is currently nothing worth showing in VoiceChannel.cs - it holds 'SourceVoice source', which is the one parameter passed to the constructor!
Everything runs fine with up to 5 sounds (total = 5): all you hear is the blissful drone of Planet.wav. Anything higher than 5, however, causes the console to freeze for ~5 seconds and then close (likely a C++ error the debugger can't handle). Sadly there is no error message for us to look at or anything.
From testing:
- Will not crash as long as you do not have more than 5 running sourcevoices.
- Changing sample rate does not seem to help.
- Setting inputChannels for master object to a different number makes no difference.
- MasteringVoice seems to say the max number of inputvoices is 64.
- Making each sfx play from a different wav file makes no difference.
- Setting the volume for sourcevoices and/or master makes no difference.
From the XAudio2 API Documentation I found this quote: 'XAudio2 removes the 6-channel limit on multichannel sounds, and supports multichannel audio on any multichannel-capable audio card. The card does not need to be hardware-accelerated.'. This is the closest I have come to finding something that mentions this problem.
I am not well experienced with programming sfx and a lot of this is very new to me, so feel free to call me an idiot where appropriate but please try and explain things in layman terms.
Please, if you have any ideas or answers they would be greatly appreciated!
-Josh
As Chuck has suggested, I have created a databank which holds the .wav data, and I just reference the single data store with each buffer. This has raised the sound limit to 20; however, it has not fixed the problem as a whole, likely because I have not implemented it properly.
Implementation:
class SoundDataBank
{
    /**
     * Holds a single byte array for each sound
     */
    Dictionary<eSFX, Byte[]> bank;
    string curdir => Directory.GetParent(Directory.GetCurrentDirectory()).Parent.FullName;

    public SoundDataBank()
    {
        bank = new Dictionary<eSFX, byte[]>();
        bank.Add(eSFX.planet, NativeFile.ReadAllBytes(curdir + @"\Assets\Planet.wav"));
        bank.Add(eSFX.base1, NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav"));
    }

    public Byte[] GetSoundData(eSFX sfx)
    {
        byte[] output = bank[sfx];
        return output;
    }
}
In SoundEngine we create a SoundBank object (initialised in SoundEngine constructor):
SoundDataBank soundBank;

public VoiceChannel AddAndPlaySFXFromStore(eSFX sfx, double vol)
{
    /**
     * The SourceVoice will be automatically added to the MasteringVoice and engine in the constructor.
     */
    byte[] buffer = soundBank.GetSoundData(sfx);
    MemoryStream memoryStream = new MemoryStream(buffer);
    SoundStream soundStream = new SoundStream(memoryStream);
    SourceVoice source = new SourceVoice(engine, soundStream.Format);
    AudioBuffer audioBuffer = new AudioBuffer()
    {
        Stream = soundStream.ToDataStream(),
        AudioBytes = (int)soundStream.Length,
        Flags = SharpDX.XAudio2.BufferFlags.EndOfStream
    };

    //Make voice wrapper
    VoiceChannel voice = new VoiceChannel(source, engine, MakeOutputMatrix());

    //Volume
    source.SetVolume((float)vol);

    //Play sound
    source.SubmitSourceBuffer(audioBuffer, soundStream.DecodedPacketsInfo);
    source.Start();
    sourceVoices.Add(voice);
    return voice;
}
Following this implementation now lets me play up to 20 sound effects - but NOT because we are playing from the soundbank. In fact, even running the old method for sound effects now gets up to 20 sfx instances.
The limit has improved to 20 because we call NativeFile.ReadAllBytes(curdir + @"\Assets\Base.wav") in the SoundBank constructor.
I suspect NativeFile holds a store of loaded file data, so regardless of whether you run the original SoundEngine.AddAndPlaySFX() or SoundEngine.AddAndPlaySFXFromStore(), they are both running from memory?
Either way, this has quadrupled the limit from before, so this has been incredibly useful - but it requires further work.
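One direction for that further work (my own suggestion, not something confirmed in this thread): the voices are never released in the code above, so it may be worth disposing of each SourceVoice once its buffer has finished playing. A rough sketch using SharpDX's BufferEnd callback; note that XAudio2 does not allow destroying a voice from inside its own callback, so the actual cleanup is deferred:

// Hypothetical addition to cSoundEngine (requires System.Collections.Concurrent):
ConcurrentQueue<SourceVoice> finishedVoices = new ConcurrentQueue<SourceVoice>();

// In AddAndPlaySFXFromStore, after source.Start():
source.BufferEnd += _ => finishedVoices.Enqueue(source);

// Called periodically from the main/update loop:
void CleanUpFinishedVoices()
{
    SourceVoice v;
    while (finishedVoices.TryDequeue(out v))
    {
        v.Stop();
        v.DestroyVoice();
        v.Dispose();
    }
}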

How does MediaCapture.StartPreviewToCustomSinkAsync method work?

Why is the Windows documentation so lacking? It seems impossible to find an example of how this method is supposed to work: StartPreviewToCustomSinkAsync
What I am trying to do is get a preview image from a video source (via MediaCapture), but I can't understand how this method works (especially what the second parameter, IMediaExtension, is supposed to be/do).
Any chance any of you can help me with this?
If all you need is to get a preview frame every now and then, there is a sample on the Microsoft github page that is relevant, although they target Windows 10. You may be interested in migrating your project to get this functionality.
GetPreviewFrame: This sample will capture preview frames as opposed to full-blown photos. Once it has a preview frame, it can edit the pixels on it.
Here is the relevant part:
private async Task GetPreviewFrameAsSoftwareBitmapAsync()
{
    // Get information about the preview
    var previewProperties = _mediaCapture.VideoDeviceController.GetMediaStreamProperties(MediaStreamType.VideoPreview) as VideoEncodingProperties;

    // Create the video frame to request a SoftwareBitmap preview frame
    var videoFrame = new VideoFrame(BitmapPixelFormat.Bgra8, (int)previewProperties.Width, (int)previewProperties.Height);

    // Capture the preview frame
    using (var currentFrame = await _mediaCapture.GetPreviewFrameAsync(videoFrame))
    {
        // Collect the resulting frame
        SoftwareBitmap previewFrame = currentFrame.SoftwareBitmap;

        // Add a simple green filter effect to the SoftwareBitmap
        EditPixels(previewFrame);
    }
}
private unsafe void EditPixels(SoftwareBitmap bitmap)
{
    // Effect is hard-coded to operate on BGRA8 format only
    if (bitmap.BitmapPixelFormat == BitmapPixelFormat.Bgra8)
    {
        // In BGRA8 format, each pixel is defined by 4 bytes
        const int BYTES_PER_PIXEL = 4;

        using (var buffer = bitmap.LockBuffer(BitmapBufferAccessMode.ReadWrite))
        using (var reference = buffer.CreateReference())
        {
            // Get a pointer to the pixel buffer
            byte* data;
            uint capacity;
            ((IMemoryBufferByteAccess)reference).GetBuffer(out data, out capacity);

            // Get information about the BitmapBuffer
            var desc = buffer.GetPlaneDescription(0);

            // Iterate over all pixels
            for (uint row = 0; row < desc.Height; row++)
            {
                for (uint col = 0; col < desc.Width; col++)
                {
                    // Index of the current pixel in the buffer (defined by the next 4 bytes, BGRA8)
                    var currPixel = desc.StartIndex + desc.Stride * row + BYTES_PER_PIXEL * col;

                    // Read the current pixel information into b,g,r channels (leave out alpha channel)
                    var b = data[currPixel + 0]; // Blue
                    var g = data[currPixel + 1]; // Green
                    var r = data[currPixel + 2]; // Red

                    // Boost the green channel, leave the other two untouched
                    data[currPixel + 0] = b;
                    data[currPixel + 1] = (byte)Math.Min(g + 80, 255);
                    data[currPixel + 2] = r;
                }
            }
        }
    }
}
And declare this outside your class:
[ComImport]
[Guid("5b0d3235-4dba-4d44-865e-8f1d0e4fd04d")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
unsafe interface IMemoryBufferByteAccess
{
    void GetBuffer(out byte* buffer, out uint capacity);
}
And of course, your project will have to allow unsafe code for all of this to work.
Have a closer look at the sample to see how to get all the details. Or, for a walkthrough, you can watch the camera session from a recent //build/ conference, which goes through some of the camera samples.
Alternatively, if you're bound to 8.1, you can look into the Lumia Imaging SDK, which can notify you when there's a new preview frame available.
There are plenty of examples on GitHub. If you are developing for Windows Phone 8.1, the samples are here.
According to this example, recording looks like this:
private StspMediaSinkProxy mediaSink;
mediaSink = new StspMediaSinkProxy();
MediaEncodingProfile encodingProfile = MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Qvga);
var mfExtension = await mediaSink.InitializeAsync(encodingProfile.Audio, encodingProfile.Video);
await mediaCapture.StartRecordToCustomSinkAsync(encodingProfile, mfExtension);
So from this example you can see how to get an IMediaExtension from a MediaEncodingProfile.
You haven't posted any code, but setting up the preview should be similar to the code I have provided.
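By analogy (my untested guess, reusing the StspMediaSinkProxy names from the recording sample above), the preview case would presumably look something like this:

var mediaSink = new StspMediaSinkProxy();
MediaEncodingProfile encodingProfile = MediaEncodingProfile.CreateMp4(VideoEncodingQuality.Qvga);

// InitializeAsync returns the IMediaExtension that StartPreviewToCustomSinkAsync
// expects as its second parameter.
var mfExtension = await mediaSink.InitializeAsync(encodingProfile.Audio, encodingProfile.Video);

await mediaCapture.StartPreviewToCustomSinkAsync(encodingProfile, mfExtension);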

Video Capture output always in 320x240 despite changing resolution

Ok I have been at this for 2 days and need help with this last part.
I have a Microsoft LifeCam Cinema camera and I use the .NET DirectShowLib to capture the video stream. Well, actually I use WPFMediaKit, but I am now in its source code, dealing directly with the DirectShow library.
What I have working is:
- View the video output of the camera
- Record the video output of the camera in ASF or AVI (the only 2 MediaTypes supported with ICaptureGraphBuilder2)
The problem is: I can save it as an .avi. This works fine at a resolution of 1280x720, but it saves the file as raw output, meaning it is about 50-60 MB per second. Way too high.
Or I can switch it to .asf and it outputs a WMV, but when I do this the capture and the output both drop to a resolution of 320x240.
In WPFMediaKit there is a function I changed, because apparently a lot of people have this problem with Microsoft LifeCam Cinema cameras. Instead of creating or changing the AMMediaType, you iterate through the stream capabilities and use the matching one to call SetFormat.
/* Make the VIDEOINFOHEADER 'readable' */
var videoInfo = new VideoInfoHeader();
int iCount = 0, iSize = 0;
videoStreamConfig.GetNumberOfCapabilities(out iCount, out iSize);
IntPtr TaskMemPointer = Marshal.AllocCoTaskMem(iSize);
AMMediaType pmtConfig = null;
for (int iFormat = 0; iFormat < iCount; iFormat++)
{
    IntPtr ptr = IntPtr.Zero;
    videoStreamConfig.GetStreamCaps(iFormat, out pmtConfig, TaskMemPointer);
    videoInfo = (VideoInfoHeader)Marshal.PtrToStructure(pmtConfig.formatPtr, typeof(VideoInfoHeader));
    if (videoInfo.BmiHeader.Width == DesiredWidth && videoInfo.BmiHeader.Height == DesiredHeight)
    {
        /* Setup the VIDEOINFOHEADER with the parameters we want */
        videoInfo.AvgTimePerFrame = DSHOW_ONE_SECOND_UNIT / FPS;
        if (mediaSubType != Guid.Empty)
        {
            int fourCC = 0;
            byte[] b = mediaSubType.ToByteArray();
            fourCC = b[0];
            fourCC |= b[1] << 8;
            fourCC |= b[2] << 16;
            fourCC |= b[3] << 24;
            videoInfo.BmiHeader.Compression = fourCC;
            // pmtConfig.subType = mediaSubType;
        }
        /* Copy the data back to unmanaged memory */
        Marshal.StructureToPtr(videoInfo, pmtConfig.formatPtr, true);
        hr = videoStreamConfig.SetFormat(pmtConfig);
        break;
    }
}
/* Free memory */
Marshal.FreeCoTaskMem(TaskMemPointer);
DsUtils.FreeAMMediaType(pmtConfig);
if (hr < 0)
    return false;
return true;
When that was implemented, I could finally view the captured video at 1280x720, as long as I set SetOutputFileName to MediaType.Avi.
If I set it to MediaType.Asf it goes to 320x240 and the output is the same.
Or the AVI works and outputs in the correct format, but does so as raw video, hence a very large file size. I have attempted to add a compressor to the graph, but with no luck; this is far outside my experience.
I am looking for 1 of 2 answers.
Recording the ASF at 1280x720
Adding a compressor to the graph so that the filesize of my outputted AVI is small.
I figured this out. So I am posting it here for any other poor soul who passes by wondering why it doesn't work.
Download the source of the WPFMediaKit, you are going to need to change some code.
Go to Folder DirectShow > MediaPlayers and open up VideoCapturePlayer.cs
Find the function SetVideoCaptureParameters and replace it with this:
/// <summary>
/// Sets the capture parameters for the video capture device
/// </summary>
private bool SetVideoCaptureParameters(ICaptureGraphBuilder2 capGraph, IBaseFilter captureFilter, Guid mediaSubType)
{
    /* The stream config interface */
    object streamConfig;

    /* Get the stream's configuration interface */
    int hr = capGraph.FindInterface(PinCategory.Capture,
                                    MediaType.Video,
                                    captureFilter,
                                    typeof(IAMStreamConfig).GUID,
                                    out streamConfig);
    DsError.ThrowExceptionForHR(hr);

    var videoStreamConfig = streamConfig as IAMStreamConfig;

    /* If QueryInterface fails... */
    if (videoStreamConfig == null)
    {
        throw new Exception("Failed to get IAMStreamConfig");
    }

    /* Make the VIDEOINFOHEADER 'readable' */
    var videoInfo = new VideoInfoHeader();
    int iCount = 0, iSize = 0;
    videoStreamConfig.GetNumberOfCapabilities(out iCount, out iSize);
    IntPtr TaskMemPointer = Marshal.AllocCoTaskMem(iSize);
    AMMediaType pmtConfig = null;
    for (int iFormat = 0; iFormat < iCount; iFormat++)
    {
        IntPtr ptr = IntPtr.Zero;
        videoStreamConfig.GetStreamCaps(iFormat, out pmtConfig, TaskMemPointer);
        videoInfo = (VideoInfoHeader)Marshal.PtrToStructure(pmtConfig.formatPtr, typeof(VideoInfoHeader));
        if (videoInfo.BmiHeader.Width == DesiredWidth && videoInfo.BmiHeader.Height == DesiredHeight)
        {
            /* Setup the VIDEOINFOHEADER with the parameters we want */
            videoInfo.AvgTimePerFrame = DSHOW_ONE_SECOND_UNIT / FPS;
            if (mediaSubType != Guid.Empty)
            {
                int fourCC = 0;
                byte[] b = mediaSubType.ToByteArray();
                fourCC = b[0];
                fourCC |= b[1] << 8;
                fourCC |= b[2] << 16;
                fourCC |= b[3] << 24;
                videoInfo.BmiHeader.Compression = fourCC;
                // pmtConfig.subType = mediaSubType;
            }
            /* Copy the data back to unmanaged memory */
            Marshal.StructureToPtr(videoInfo, pmtConfig.formatPtr, true);
            hr = videoStreamConfig.SetFormat(pmtConfig);
            break;
        }
    }
    /* Free memory */
    Marshal.FreeCoTaskMem(TaskMemPointer);
    DsUtils.FreeAMMediaType(pmtConfig);
    if (hr < 0)
        return false;
    return true;
}
Now that will sort out your screen display at whatever resolution you want, provided that your camera supports it.
Next, you will soon figure out that this new, correct capture format isn't applied when writing the video to disk.
Since the ICaptureGraphBuilder2 method only supports Avi and Asf (which is WMV), you need to set your media type to one of them.
hr = graphBuilder.SetOutputFileName(MediaSubType.Asf, this.m_fileName, out mux, out sink);
You will find that line in the SetupGraph function.
Asf will only output in 320x240, yet the Avi will output in the desired resolution, but uncompressed (meaning 50-60MB per second for a 1280x720 video feed), which is too high.
So that leaves you with 2 options
Figure out how to add a encoder (compression filter) to the Avi output
Figure out how to change the WMV profile
I tried option 1 with no success, mainly because this is my first time working with DirectShow and I have only just grasped the meaning of graphs.
But I was successful with option 2, and here is how I did it.
Special thanks to http://www.codeproject.com/KB/audio-video/videosav.aspx - I pulled the needed code from there.
Create a new class in the same folder called WMLib.cs.
Download the demo project from http://www.codeproject.com/KB/audio-video/videosav.aspx and copy and paste the contents of its WMLib.cs into your file (change the namespace as necessary).
Create a function in the VideoCapturePlayer.cs class
/// <summary>
/// Configure profile from file to Asf file writer
/// </summary>
/// <param name="asfWriter"></param>
/// <param name="filename"></param>
/// <returns></returns>
public bool ConfigProfileFromFile(IBaseFilter asfWriter, string filename)
{
    int hr;
    //string profilePath = "test.prx";
    // Set the profile to be used for conversion
    if ((filename != null) && (File.Exists(filename)))
    {
        // Load the profile XML contents
        string profileData;
        using (StreamReader reader = new StreamReader(File.OpenRead(filename)))
        {
            profileData = reader.ReadToEnd();
        }

        // Create an appropriate IWMProfile from the data
        // Open the profile manager
        IWMProfileManager profileManager;
        IWMProfile wmProfile = null;
        hr = WMLib.WMCreateProfileManager(out profileManager);
        if (hr >= 0)
        {
            // error message: The profile is invalid (0xC00D0BC6)
            // e.g. no <prx> tags
            hr = profileManager.LoadProfileByData(profileData, out wmProfile);
        }
        if (profileManager != null)
        {
            Marshal.ReleaseComObject(profileManager);
            profileManager = null;
        }

        // Configure only if a profile was retrieved
        if (hr >= 0)
        {
            // Set the profile on the writer
            IConfigAsfWriter configWriter = (IConfigAsfWriter)asfWriter;
            hr = configWriter.ConfigureFilterUsingProfile(wmProfile);
            if (hr >= 0)
            {
                return true;
            }
        }
    }
    return false;
}
In the SetupGraph function, find the SetOutputFileName call and below it put:
ConfigProfileFromFile(mux, @"c:\wmv.prx");
Now create a file called wmv.prx on your c: drive and place the relevant information in it.
You can see a sample of a PRX file from the demo project here: http://www.codeproject.com/KB/audio-video/videosav.aspx (Pal90.prx)
And now enjoy your .wmv file outputted at the right size.
Yes, I know the code I posted is rather scrappy, but I will leave it up to you to polish it up.
LifeCam is known for unobvious behavior when setting the capture format (more specifically, falling back to other formats). See previous discussions, which are likely to suggest a solution:
Can't make IAMStreamConfig.SetFormat() to work with LifeCam Studio
IAMStreamConfig settings are ignored by stream - Microsoft LifeCam Cinema records only on 640x480
Can Microsoft LifeCam record at a frame rates lower than 30 fps?

Convert IplImage into a JPEG without using CvSaveImage in OpenCV

I wish to convert an IplImage into a JPEG image in memory (in order to stream it as an M-JPEG frame over sockets).
I know I can use CvSaveImage for this: it creates a JPEG file, which I then read back and stream over the network.
I wish to avoid these extra disk write/read operations for faster operation. Any insights?
Check out this question. I am not sure how you can use the solution in C#, but maybe it can help.
If your tag is correct and this is in C# then you should check out OpenCVSharp.
http://code.google.com/p/opencvsharp/
With it you can do...
IplImage ipl = new IplImage("foo.png", LoadMode.Color);
Bitmap bitmap = ipl.ToBitmap();
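From there, if the goal is a JPEG in memory rather than on disk, one option (my addition, using plain System.Drawing rather than anything OpenCV-specific) is to encode the Bitmap into a MemoryStream:

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

// bitmap is the System.Drawing.Bitmap obtained from ipl.ToBitmap()
static byte[] EncodeToJpeg(Bitmap bitmap)
{
    using (var ms = new MemoryStream())
    {
        bitmap.Save(ms, ImageFormat.Jpeg); // JPEG bytes, never touching the disk
        return ms.ToArray();               // ready to be sent as an M-JPEG frame
    }
}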
I also found an example of someone doing it using VC++.NET
//IplImage -> Bitmap
void Fill_Bitmap(Bitmap* bitmap, IplImage* image){
    int nl = image->height;
    int nc = image->width * image->nChannels;
    int step = image->widthStep;
    unsigned char* data = reinterpret_cast<unsigned char*>(image->imageData);
    for(int i = 0; i < nl; i++){
        for(int j = 0; j < nc; j += image->nChannels){
            bitmap->SetPixel(j/3, i, Color::FromArgb(data[j], data[j+1], data[j+2]));
        }
        data += step;
    }
};
Assume that in your main function:
void main(){
    ...
    imRGB = cvCreateImage( cvSize(col,row), 8, 3 );
    Tbitmap = new Bitmap(col, row, PixelFormat::Format24bppRgb);
    ...
    Fill_Bitmap(Tbitmap, imRGB);
}
Good luck!
Pretty Simple
All you need to load files from the memory buffer is a different src manager (libjpeg). I have tested the following code in Ubuntu 8.10.
/******************************** First define mem buffer function bodies **************/
/*
* memsrc.c
*
* Copyright (C) 1994-1996, Thomas G. Lane.
* This file is part of the Independent JPEG Group's software.
* For conditions of distribution and use, see the accompanying README file.
*
* This file contains decompression data source routines for the case of
* reading JPEG data from a memory buffer that is preloaded with the entire
* JPEG file. This would not seem especially useful at first sight, but
* a number of people have asked for it.
* This is really just a stripped-down version of jdatasrc.c. Comparison
* of this code with jdatasrc.c may be helpful in seeing how to make
* custom source managers for other purposes.
*/
/* this is not a core library module, so it doesn't define JPEG_INTERNALS */
include "jinclude.h"
include "jpeglib.h"
include "jerror.h"
/* Expanded data source object for memory input */
typedef struct {
    struct jpeg_source_mgr pub; /* public fields */
    JOCTET eoi_buffer[2];       /* a place to put a dummy EOI */
} my_source_mgr;
typedef my_source_mgr * my_src_ptr;
/*
* Initialize source --- called by jpeg_read_header
* before any data is actually read.
*/
METHODDEF(void)
init_source (j_decompress_ptr cinfo)
{
/* No work, since jpeg_memory_src set up the buffer pointer and count.
* Indeed, if we want to read multiple JPEG images from one buffer,
* this *must* not do anything to the pointer.
*/
}
/*
* Fill the input buffer --- called whenever buffer is emptied.
*
* In this application, this routine should never be called; if it is called,
* the decompressor has overrun the end of the input buffer, implying we
* supplied an incomplete or corrupt JPEG datastream. A simple error exit
* might be the most appropriate response.
*
* But what we choose to do in this code is to supply dummy EOI markers
* in order to force the decompressor to finish processing and supply
* some sort of output image, no matter how corrupted.
*/
METHODDEF(boolean)
fill_input_buffer (j_decompress_ptr cinfo)
{
    my_src_ptr src = (my_src_ptr) cinfo->src;
    WARNMS(cinfo, JWRN_JPEG_EOF);
    /* Create a fake EOI marker */
    src->eoi_buffer[0] = (JOCTET) 0xFF;
    src->eoi_buffer[1] = (JOCTET) JPEG_EOI;
    src->pub.next_input_byte = src->eoi_buffer;
    src->pub.bytes_in_buffer = 2;
    return TRUE;
}
/*
* Skip data --- used to skip over a potentially large amount of
* uninteresting data (such as an APPn marker).
*
* If we overrun the end of the buffer, we let fill_input_buffer deal with
* it. An extremely large skip could cause some time-wasting here, but
* it really isn't supposed to happen ... and the decompressor will never
* skip more than 64K anyway.
*/
METHODDEF(void)
skip_input_data (j_decompress_ptr cinfo, long num_bytes)
{
    my_src_ptr src = (my_src_ptr) cinfo->src;
    if (num_bytes > 0) {
        while (num_bytes > (long) src->pub.bytes_in_buffer) {
            num_bytes -= (long) src->pub.bytes_in_buffer;
            (void) fill_input_buffer(cinfo);
            /* note we assume that fill_input_buffer will never return FALSE,
             * so suspension need not be handled.
             */
        }
        src->pub.next_input_byte += (size_t) num_bytes;
        src->pub.bytes_in_buffer -= (size_t) num_bytes;
    }
}
/*
* An additional method that can be provided by data source modules is the
* resync_to_restart method for error recovery in the presence of RST markers.
* For the moment, this source module just uses the default resync method
* provided by the JPEG library. That method assumes that no backtracking
* is possible.
*/
/*
* Terminate source --- called by jpeg_finish_decompress
* after all data has been read. Often a no-op.
*
* NB: *not* called by jpeg_abort or jpeg_destroy; surrounding
* application must deal with any cleanup that should happen even
* for error exit.
*/
METHODDEF(void)
term_source (j_decompress_ptr cinfo)
{
/* no work necessary here */
}
/*
* Prepare for input from a memory buffer.
*/
GLOBAL(void)
jpeg_memory_src (j_decompress_ptr cinfo, const JOCTET * buffer, size_t bufsize)
{
    my_src_ptr src;
    /* The source object is made permanent so that a series of JPEG images
     * can be read from a single buffer by calling jpeg_memory_src
     * only before the first one.
     * This makes it unsafe to use this manager and a different source
     * manager serially with the same JPEG object. Caveat programmer.
     */
    if (cinfo->src == NULL) { /* first time for this JPEG object? */
        cinfo->src = (struct jpeg_source_mgr *)
            (*cinfo->mem->alloc_small) ((j_common_ptr) cinfo, JPOOL_PERMANENT,
                                        SIZEOF(my_source_mgr));
    }
    src = (my_src_ptr) cinfo->src;
    src->pub.init_source = init_source;
    src->pub.fill_input_buffer = fill_input_buffer;
    src->pub.skip_input_data = skip_input_data;
    src->pub.resync_to_restart = jpeg_resync_to_restart; /* use default method */
    src->pub.term_source = term_source;
    src->pub.next_input_byte = buffer;
    src->pub.bytes_in_buffer = bufsize;
}
Then the usage is pretty simple. You may need to replace SIZEOF() with sizeof(). Find a standard decompression example. Just replace "jpeg_stdio_src" with "jpeg_memory_src". Hope that helps!
Use CxImage: http://www.codeproject.com/KB/graphics/cximage.aspx
