iOS Marshal.PtrToStructure System.ExecutionEngineException - C#

Hey, I was wondering if there is a fix for the following issue, as I can't seem to find one anywhere and I have now run out of ideas.
I am writing a program using Xamarin.Forms to run on both Android and iOS. Both of the code versions below work on Android; on iOS, however, the first version works but the second throws an ExecutionEngineException when run.
protected static object ReadStruct(BinaryReader reader, Type structType, ChunkHeader chunkHeader)
{
    int size = Marshal.SizeOf(structType);
    if (size != chunkHeader.chunkSize) // check the actual size of this chunk object with the expected size from the stream
    {
        throw new IOException("Invalid file format, incorrect " + chunkHeader.chunkName + " chunk size (exp.: " + size + ", read: " + chunkHeader.chunkSize + ")");
    }
    byte[] data = reader.ReadBytes(size);
    IntPtr buffer = Marshal.AllocHGlobal(size);
    Marshal.Copy(data, 0, buffer, size);
    // the line that crashes follows this
    object structObject = Marshal.PtrToStructure(buffer, structType);
    Marshal.FreeHGlobal(buffer);
    return structObject;
}
The above code stays the same in both versions.
public struct OvdBChunk
{
    // stuff in here but not important
}
The above sample works, but for this and all future updates I will need to inherit from the old versions, since newer devices may add new fields while always keeping the old ones, so I changed it to the following:
public class OvdBChunk : OvdAChunk
{
    // stuff in here but not important
}
The structType parameter is the part that changes in the top code snippet.
Any idea why it throws the System.ExecutionEngineException when it is a class instead of a struct, and any ideas on how I can fix it?
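For what it's worth, Marshal.PtrToStructure only accepts a reference type when it is a "formatted" class, i.e. one with an explicit memory layout. A hedged sketch of that, assuming the chunk classes do not already declare a layout (an untested guess, not a confirmed fix; needs using System.Runtime.InteropServices;):
// Hedged sketch, not a confirmed fix: PtrToStructure requires classes to have
// a defined (sequential or explicit) layout, on every class in the hierarchy.
[StructLayout(LayoutKind.Sequential)]
public class OvdAChunk
{
    // original fields
}

[StructLayout(LayoutKind.Sequential)]
public class OvdBChunk : OvdAChunk
{
    // fields added in this version
}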


C# - Shell32.NameSpace does not work when trying to extract metadata from files

I am attempting to get the metadata from a few music files and failing miserably. Online there seems to be absolutely no hope of finding an answer, no matter what I google, so I thought it would be a great time to come and ask here.
The specific error I got was: Error HRESULT E_FAIL has been returned from a call to a COM component. I really wish I could elaborate on this issue, but I'm simply getting nothing back from the COMException object. The error code was -2147467259, which is 0x80004005 (E_FAIL) in hex, and Microsoft documents it only as an unspecified failure.
I'm 70% sure that it's not the file's fault. My code runs through a directory full of music and converts each file into a Song, hence the name ConvertFileToSong. What I'm trying to say is that the function would not be running if the file did not exist.
The only thing I can really say is that I'm using .NET 6, and have a massive headache.
Well, I guess I could also share another problem I had before this error showed up. .NET 6 has top-level statements (or whatever it's called), which means I can't add the [STAThread] attribute. To solve this, I simply added the code below to the top. Not sure why I have to set it to Unknown first, but that's what I (well, someone else on Stack Overflow) had to do. That solved the previous problem where Shell32 could not start, but could that be causing my current problem? Who knows... definitely not me.
Thread.CurrentThread.SetApartmentState(ApartmentState.Unknown);
Thread.CurrentThread.SetApartmentState(ApartmentState.STA);
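A hedged alternative, in case the apartment-state juggling turns out to be the culprit: skip top-level statements and declare an explicit entry point, which lets you apply [STAThread] the classic way. A minimal sketch:
// Sketch: an explicit Main replaces top-level statements, so the
// attribute can be applied directly to the entry point.
using System;

class Program
{
    [STAThread]
    static void Main(string[] args)
    {
        // ... the former top-level code goes here ...
    }
}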
Here is the code:
// Help from: https://stackoverflow.com/questions/37869388/how-to-read-extended-file-properties-file-metadata
public static Song ConvertFileToSong(FileInfo file)
{
    Song song = new Song();
    List<string> headers = new List<string>();
    // initialise the Windows shell to parse attributes from
    Shell32.Shell shell = new Shell32.Shell();
    Shell32.Folder objFolder = null;
    try
    {
        objFolder = shell.NameSpace(file.FullName);
    }
    catch (COMException e)
    {
        int code = e.ErrorCode;
        string hex = code.ToString("X"); // format the HRESULT as hex
        Console.WriteLine("MESSAGE: " + e.Message + ", CODE: " + hex);
        return null;
    }
    Shell32.FolderItem folderItem = objFolder.ParseName(file.Name);
    // the rest of the code is not important, but I'll leave it there anyway
    // loop with a bounded counter rather than an infinite while loop,
    // so we don't have to declare the int on a separate line
    for (int i = 0; i < short.MaxValue; i++)
    {
        string header = objFolder.GetDetailsOf(null, i);
        // the header does not exist, so we must exit
        if (String.IsNullOrEmpty(header)) break;
        headers.Add(header);
    }
    // Once the code works, I'll try and get this to work
    song.Title = objFolder.GetDetailsOf(folderItem, 0);
    return song;
}
Good night,
Diseased Finger
OK, so the solution isn't that hard. I used file.FullName, which includes the file's name, but Shell32.NameSpace ONLY requires the directory name (excluding the file name).
This is the code that fixed it:
public static Song ConvertFileToSong(FileInfo file)
{
    // .....
    Shell32.Shell shell = new Shell32.Shell();
    Shell32.Folder objFolder = shell.NameSpace(file.DirectoryName);
    Shell32.FolderItem folderItem = objFolder.ParseName(file.Name);
    // .....
    return something;
}

How to use the ARCore camera image in OpenCV in an Unity Android app?

I am trying to use OpenCV for hand gesture recognition in my Unity ARCore game. However, with the deprecation of TextureReaderAPI, the only way to capture the image from the camera is by using Frame.CameraImage.AcquireCameraImageBytes(). The problem with that is not only that the image is in 640x480 resolution (this cannot be changed, AFAIK), but it is also in YUV_420_888 format.
As if that were not enough, OpenCV does not have free C#/Unity packages, so if I do not want to pay $20 for a paid package, I need to use the available C++ or Python versions. How do I move the YUV image to OpenCV, convert it to an RGB (or HSV) color space, and then either do some processing on it or return it back to Unity?
In this example, I will use the C++ OpenCV libraries and Visual Studio 2017, and I will try to capture the ARCore camera image, move it to OpenCV (as efficiently as possible), convert it to the RGB color space, then move it back to Unity C# code and save it in the phone's memory.
Firstly, we have to create a C++ dynamic library project to use with OpenCV. For this, I highly recommend following both Pierre Baret's and Ninjaman494's answers on this question: OpenCV + Android + Unity. The process is rather straightforward, and if you do not deviate from their answers too much (i.e. you can safely download a newer version of OpenCV than 3.3.1, but be careful when compiling for ARM64 instead of ARM, etc.), you should be able to call a C++ function from C#.
In my experience, I had to solve two problems. Firstly, if you make the project part of your C# solution instead of creating a new solution, Visual Studio will keep messing with your configuration, like trying to compile an x86 version instead of an ARM version. To save yourself the hassle, create a completely separate solution. The other problem is that some functions failed to link for me, throwing an undefined reference linker error (undefined reference to 'cv::error(int, std::string const&, char const*, char const*, int', to be exact). If this happens and the problem is with a function that you do not really need, just recreate the function in your own code - for instance, if you have problems with cv::error, add this code at the end of your .cpp file:
namespace cv {
    __noreturn void error(int a, const String & b, const char * c, const char * d, int e) {
        throw std::string(b);
    }
}
Sure, this is an ugly and dirty way to do things, so if you know how to fix the linker error, please do so and let me know.
Now, you should have a working C++ code that compiles and can be run from a Unity Android application. However, what we want is for OpenCV to not return a number, but to convert an image. So change your code to this:
.h file
extern "C" {
namespace YOUR_OWN_NAMESPACE
{
int ConvertYUV2RGBA(unsigned char *, unsigned char *, int, int);
}
}
.cpp file
extern "C" {
int YOUR_OWN_NAMESPACE::ConvertYUV2RGBA(unsigned char * inputPtr, unsigned char * outputPtr, int width, int height) {
// Create Mat objects for the YUV and RGB images. For YUV, we need a
// height*1.5 x width image, that has one 8-bit channel. We can also tell
// OpenCV to have this Mat object "encapsulate" an existing array,
// which is inputPtr.
// For RGB image, we need a height x width image, that has three 8-bit
// channels. Again, we tell OpenCV to encapsulate the outputPtr array.
// Thanks to specifying existing arrays as data sources, no copying
// or memory allocation has to be done, and the process is highly
// effective.
cv::Mat input_image(height + height / 2, width, CV_8UC1, inputPtr);
cv::Mat output_image(height, width, CV_8UC3, outputPtr);
// If any of the images has not loaded, return 1 to signal an error.
if (input_image.empty() || output_image.empty()) {
return 1;
}
// Convert the image. Now you might have seen people telling you to use
// NV21 or 420sp instead of NV12, and BGR instead of RGB. I do not
// understand why, but this was the correct conversion for me.
// If you have any problems with the color in the output image,
// they are probably caused by incorrect conversion. In that case,
// I can only recommend you the trial and error method.
cv::cvtColor(input_image, output_image, cv::COLOR_YUV2RGB_NV12);
// Now that the result is safely saved in outputPtr, we can return 0.
return 0;
}
}
Now, rebuild the solution (Ctrl + Shift + B) and copy the libProjectName.so file to Unity's Plugins/Android folder, as in the linked answer.
The next thing is to save the image from ARCore, move it to C++ code, and get it back. Let us add this inside the class in our C# script:
[DllImport("YOUR_OWN_NAMESPACE")]
public static extern int ConvertYUV2RGBA(IntPtr input, IntPtr output, int width, int height);
Visual Studio will prompt you to add a using directive for System.Runtime.InteropServices - do so.
This allows us to use the C++ function in our C# code. Now, let's add this function to our C# component:
public Texture2D CameraToTexture()
{
    // Create the object for the result - this has to be done before the
    // using {} clause.
    Texture2D result;
    // Use using to make sure that C# disposes of the CameraImageBytes afterwards
    using (CameraImageBytes camBytes = Frame.CameraImage.AcquireCameraImageBytes())
    {
        // If acquiring failed, return null
        if (!camBytes.IsAvailable)
        {
            Debug.LogWarning("camBytes not available");
            return null;
        }
        // To save a YUV_420_888 image, you need 1.5*pixelCount bytes.
        // I will explain later why.
        byte[] YUVimage = new byte[(int)(camBytes.Width * camBytes.Height * 1.5f)];
        // As CameraImageBytes keeps the Y, U and V data in three separate
        // arrays, we need to put them in a single array. This is done using
        // native pointers, which are considered unsafe in C#.
        unsafe
        {
            for (int i = 0; i < camBytes.Width * camBytes.Height; i++)
            {
                YUVimage[i] = *((byte*)camBytes.Y.ToPointer() + (i * sizeof(byte)));
            }
            for (int i = 0; i < camBytes.Width * camBytes.Height / 4; i++)
            {
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i] = *((byte*)camBytes.U.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i + 1] = *((byte*)camBytes.V.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
            }
        }
        // Create the output byte array. RGB is three channels, therefore
        // we need 3 times the pixel count
        byte[] RGBimage = new byte[camBytes.Width * camBytes.Height * 3];
        // GCHandles help us "pin" the arrays in memory, so that we can
        // pass them to the C++ code.
        GCHandle YUVhandle = GCHandle.Alloc(YUVimage, GCHandleType.Pinned);
        GCHandle RGBhandle = GCHandle.Alloc(RGBimage, GCHandleType.Pinned);
        // Call the C++ function that we created.
        int k = ConvertYUV2RGBA(YUVhandle.AddrOfPinnedObject(), RGBhandle.AddrOfPinnedObject(), camBytes.Width, camBytes.Height);
        // If the OpenCV conversion failed, return null
        if (k != 0)
        {
            Debug.LogWarning("Color conversion - k != 0");
            return null;
        }
        // Create a new texture object
        result = new Texture2D(camBytes.Width, camBytes.Height, TextureFormat.RGB24, false);
        // Load the RGB array into the texture, send it to the GPU
        result.LoadRawTextureData(RGBimage);
        result.Apply();
        // Save the texture as a PNG file. End the using {} clause to
        // dispose of the CameraImageBytes.
        File.WriteAllBytes(Application.persistentDataPath + "/tex.png", result.EncodeToPNG());
    }
    // Return the texture.
    return result;
}
To be able to run unsafe code, you also need to allow it in Unity. Go to Player Settings (Edit > Project Settings > Player) and check the Allow unsafe code checkbox.
Now, you can call the CameraToTexture() function, let's say, every 5 seconds from Update(), and the camera image should be saved as /Android/data/YOUR_APPLICATION_PACKAGE/files/tex.png. The image will probably be landscape oriented, even if you held the phone in portrait mode, but this is not that hard to fix anymore. Also, you might notice a freeze every time the image is saved, so I recommend calling this function from a separate thread. The most demanding operation here is saving the image as a PNG file, so if you need the texture for any other reason, you should be fine (still use a separate thread, though).
If you want to understand the YUV_420_888 format, why you need a 1.5*pixelCount array, and why we modified the arrays the way we did, read https://wiki.videolan.org/YUV/#NV12. In short, NV12 stores a full-resolution Y plane (width*height bytes) followed by a half-resolution interleaved UV plane (width*height/2 bytes), which is where the 1.5 factor comes from. Other websites seem to have incorrect information about how this format works.
Also, feel free to comment with any issues you might have, and I will try to help, as well as with any feedback on both the code and the answer.
APPENDIX 1: According to https://docs.unity3d.com/ScriptReference/Texture2D.LoadRawTextureData.html, you should use GetRawTextureData instead of LoadRawTextureData, to prevent copying. To do this, just pin the array returned by GetRawTextureData instead of the RGBimage array (which you can remove). Also, do not forget to call result.Apply(); afterwards.
APPENDIX 2: Do not forget to call Free() on both GCHandles when you are done using them.
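A minimal sketch of what Appendix 2 amounts to in practice, using the same names as the function above - the try/finally guarantees the pins are released even on the early-return error paths:
// Sketch: unpin in finally so the handles are freed on every code path.
GCHandle YUVhandle = GCHandle.Alloc(YUVimage, GCHandleType.Pinned);
GCHandle RGBhandle = GCHandle.Alloc(RGBimage, GCHandleType.Pinned);
try
{
    int k = ConvertYUV2RGBA(YUVhandle.AddrOfPinnedObject(), RGBhandle.AddrOfPinnedObject(), camBytes.Width, camBytes.Height);
    // ... check k, load the texture ...
}
finally
{
    // Always unpin, even if the conversion failed or threw.
    YUVhandle.Free();
    RGBhandle.Free();
}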
I figured out how to get the full-resolution CPU image in ARCore 1.8.
I can now get the full camera resolution with CameraImageBytes.
put this in your class variables:
private ARCoreSession.OnChooseCameraConfigurationDelegate m_OnChoseCameraConfiguration = null;
put this in Start()
m_OnChoseCameraConfiguration = _ChooseCameraConfiguration;
ARSessionManager.RegisterChooseCameraConfigurationCallback(m_OnChoseCameraConfiguration);
// Toggle the session so the new configuration callback takes effect
ARSessionManager.enabled = false;
ARSessionManager.enabled = true;
Add this callback to the class:
private int _ChooseCameraConfiguration(List<CameraConfig> supportedConfigurations)
{
    // Choosing the last entry selects the full-resolution configuration (per this answer).
    return supportedConfigurations.Count - 1;
}
Once you add those, CameraImageBytes should return the full resolution of the camera.
For everyone who wants to try this with OpenCVForUnity:
public Mat getCameraImage()
{
    // Use using to make sure that C# disposes of the CameraImageBytes afterwards
    using (CameraImageBytes camBytes = Frame.CameraImage.AcquireCameraImageBytes())
    {
        // If acquiring failed, return null
        if (!camBytes.IsAvailable)
        {
            Debug.LogWarning("camBytes not available");
            return null;
        }
        // To save a YUV_420_888 image, you need 1.5*pixelCount bytes
        // (see the explanation in the answer above).
        byte[] YUVimage = new byte[(int)(camBytes.Width * camBytes.Height * 1.5f)];
        // As CameraImageBytes keeps the Y, U and V data in three separate
        // arrays, we need to put them in a single array. This is done using
        // native pointers, which are considered unsafe in C#.
        unsafe
        {
            for (int i = 0; i < camBytes.Width * camBytes.Height; i++)
            {
                YUVimage[i] = *((byte*)camBytes.Y.ToPointer() + (i * sizeof(byte)));
            }
            for (int i = 0; i < camBytes.Width * camBytes.Height / 4; i++)
            {
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i] = *((byte*)camBytes.U.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                YUVimage[(camBytes.Width * camBytes.Height) + 2 * i + 1] = *((byte*)camBytes.V.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
            }
        }
        // GCHandles help us "pin" the array in memory, so that we can
        // hand its address to OpenCV.
        GCHandle pinnedArray = GCHandle.Alloc(YUVimage, GCHandleType.Pinned);
        IntPtr pointer = pinnedArray.AddrOfPinnedObject();
        Mat input = new Mat(camBytes.Height + camBytes.Height / 2, camBytes.Width, CvType.CV_8UC1);
        Mat output = new Mat(camBytes.Height, camBytes.Width, CvType.CV_8UC3);
        Utils.copyToMat(pointer, input);
        Imgproc.cvtColor(input, output, Imgproc.COLOR_YUV2RGB_NV12);
        pinnedArray.Free();
        return output;
    }
}
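To display the returned Mat, a hedged usage sketch - this assumes your OpenCVForUnity version ships the Utils.matToTexture2D helper:
// Sketch: texture dimensions must match the Mat's; RGB24 matches the CV_8UC3 output.
Mat rgb = getCameraImage();
if (rgb != null)
{
    Texture2D tex = new Texture2D(rgb.cols(), rgb.rows(), TextureFormat.RGB24, false);
    Utils.matToTexture2D(rgb, tex);
}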
Here is an implementation that just uses the free plugin OpenCV Plus Unity. It is very simple to set up, and the documentation is great if you are familiar with OpenCV.
This implementation rotates the image properly using OpenCV, stores the frames in memory and, upon exiting the app, saves them to file. I have tried to strip all Unity aspects from the code so that the function GetCameraImage() can be run on a separate thread.
I can confirm it works on Android (GS7); I presume it will work pretty universally.
using System;
using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;
using OpenCvSharp;
using System.Runtime.InteropServices;

public class CamImage : MonoBehaviour
{
    public static List<Mat> AllData = new List<Mat>();

    public static void GetCameraImage()
    {
        // Use using to make sure that C# disposes of the CameraImageBytes afterwards
        using (CameraImageBytes camBytes = Frame.CameraImage.AcquireCameraImageBytes())
        {
            // If acquiring failed, return
            if (!camBytes.IsAvailable)
            {
                return;
            }
            // To save a YUV_420_888 image, you need 1.5*pixelCount bytes
            // (see the explanation in the first answer).
            byte[] YUVimage = new byte[(int)(camBytes.Width * camBytes.Height * 1.5f)];
            // As CameraImageBytes keeps the Y, U and V data in three separate
            // arrays, we need to put them in a single array. This is done using
            // native pointers, which are considered unsafe in C#.
            unsafe
            {
                for (int i = 0; i < camBytes.Width * camBytes.Height; i++)
                {
                    YUVimage[i] = *((byte*)camBytes.Y.ToPointer() + (i * sizeof(byte)));
                }
                for (int i = 0; i < camBytes.Width * camBytes.Height / 4; i++)
                {
                    YUVimage[(camBytes.Width * camBytes.Height) + 2 * i] = *((byte*)camBytes.U.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                    YUVimage[(camBytes.Width * camBytes.Height) + 2 * i + 1] = *((byte*)camBytes.V.ToPointer() + (i * camBytes.UVPixelStride * sizeof(byte)));
                }
            }
            // GCHandles help us "pin" the array in memory, so that we can
            // hand its address to OpenCV.
            GCHandle pinnedArray = GCHandle.Alloc(YUVimage, GCHandleType.Pinned);
            IntPtr pointerYUV = pinnedArray.AddrOfPinnedObject();
            Mat input = new Mat(camBytes.Height + camBytes.Height / 2, camBytes.Width, MatType.CV_8UC1, pointerYUV);
            Mat output = new Mat(camBytes.Height, camBytes.Width, MatType.CV_8UC3);
            Cv2.CvtColor(input, output, ColorConversionCodes.YUV2BGR_NV12); // or YUV2RGB_NV12
            // FLIP AND TRANSPOSE TO VERTICAL
            Cv2.Transpose(output, output);
            Cv2.Flip(output, output, FlipMode.Y);
            AllData.Add(output);
            pinnedArray.Free();
        }
    }
}
I then call ExportImages() when exiting the program to save to file.
private void ExportImages()
{
    // Write the camera intrinsics to a text file
    var path = Application.persistentDataPath;
    StreamWriter sr = new StreamWriter(path + @"/intrinsics.txt");
    sr.WriteLine(CameraIntrinsicsOutput.text);
    Debug.Log(CameraIntrinsicsOutput.text);
    sr.Close();
    // Loop through the Mat list, convert each to a texture and save.
    for (var i = 0; i < CamImage.AllData.Count; i++)
    {
        Mat imOut = CamImage.AllData[i];
        Texture2D result = Unity.MatToTexture(imOut);
        result.Apply();
        byte[] im = result.EncodeToJPG(100);
        string fileName = "/IMG" + i + ".jpg";
        File.WriteAllBytes(path + fileName, im);
        string message = "Successfully saved image to " + path + "\n";
        Debug.Log(message);
        Destroy(result);
    }
}
Seems you already fixed this.
But for anyone who wants to combine AR with hand gesture recognition and tracking, try Manomotion: https://www.manomotion.com/
The SDK is free and worked perfectly as of 12/2020.
Use the SDK Community Edition and download the ARFoundation version.

Error: 'The process cannot access the file because it is being used by another process' in Visual Studio c#

I am having some trouble with this error in Visual Studio:
An unhandled exception of type 'System.IO.IOException' occurred in mscorlib.dll
Additional information: The process cannot access the file 'C:\Users\aheij\Desktop\KinectOutput\swipe.txt' because it is being used by another process.
What I Have tried:
I have tried the fixes from other, similar solved StackOverflow questions, but even those didn't seem to work.
I've tried checking if the file is in use, but with no success.
I also ran Visual Studio as administrator.
The file is not read-only.
The folder is not open, and the file is not being used by any other program when executing the code - at least not that I can see or know of.
The code:
So, to add some context to my code: I am adding some quick gesture-detection code to the freely available Kinect C# BodyBasics SDK v2 sample. This is a helper method I added that gets called when a person is in view. If that person is executing the gesture, the method writes the frame time and gesture name to a text file.
Half the time, when the code does work, the gesture recognition works well. The other half of the time, the code breaks/stops because the write-to-file part causes the error.
Below is my code to check whether the person is standing in the neutral position - it's a bit waffly, so apologies for that. I have commented 'ERROR' where the error occurs (unsurprisingly):
private void Neutral_stance(Body body, IReadOnlyDictionary<JointType, Joint> joints, IDictionary<JointType, Point> jointPoints, BodyFrame bf)
{
    CameraSpacePoint left_hand = joints[JointType.HandLeft].Position;
    CameraSpacePoint left_elbow = joints[JointType.ElbowLeft].Position;
    CameraSpacePoint left_shoulder = joints[JointType.ShoulderLeft].Position;
    CameraSpacePoint left_hip = joints[JointType.HipLeft].Position;
    CameraSpacePoint right_hand = joints[JointType.HandRight].Position;
    CameraSpacePoint right_elbow = joints[JointType.ElbowRight].Position;
    CameraSpacePoint right_shoulder = joints[JointType.ShoulderRight].Position;
    CameraSpacePoint right_hip = joints[JointType.HipRight].Position;
    double vertical_error = 0.15;
    double shoulderhand_xrange_l = Math.Abs(left_hand.X - left_shoulder.X);
    double shoulderhand_xrange_r = Math.Abs(right_hand.X - right_shoulder.X);
    if (bf != null)
    {
        TimeSpan frametime = bf.RelativeTime;
        string path_p = @"C:\Users\aheij\Desktop\KinectOutput\Punch.txt"; // write to punch file
        string path_s = @"C:\Users\aheij\Desktop\KinectOutput\swipe.txt"; // write to swipe file
        if (left_hand.Y < left_elbow.Y)
        {
            if (right_hand.Y < right_elbow.Y)
            {
                if (shoulderhand_xrange_l < vertical_error)
                {
                    if (shoulderhand_xrange_r < vertical_error)
                    {
                        Gesture_being_done.Text = " Neutral";
                        File.AppendAllText(path_p, frametime.ToString() + " Neutral" + Environment.NewLine); // ERROR
                        File.AppendAllText(path_s, frametime.ToString() + " Neutral" + Environment.NewLine); // ERROR
                    }
                }
            }
        }
        else
        {
            Gesture_being_done.Text = " Unknown";
            File.AppendAllText(path_p, frametime.ToString() + " Unknown" + Environment.NewLine); // ERROR
            File.AppendAllText(path_s, frametime.ToString() + " Unknown" + Environment.NewLine); // ERROR
        }
    }
}
Any solutions/ideas/suggestions to point me in the right direction would be appreciated. I think it would be good to use the 'using StreamWriter' pattern instead of the method I am using here, but I am not sure how. Any help would be appreciated.
Additional info: using Visual Studio 2015 on Windows 10.
Sidenote: I read somewhere that the Windows Search tool in Windows 10 can interfere and cause problems like this, so maybe I need to disable it?
As suggested to me, I used the FileStream method and ensured the file was closed after use. But even this still caused the same error.
Thus, I also got rid of having two file-writing actions in rapid succession. I don't know if this is technically right or even true, but based on this post: link, my error could be coming up because I am trying to execute the second 'write to text file' line while the previous one is still writing to the same file and location - hence the clash? Please correct me if I am wrong.
Either way, this seems to have worked.
See below for my edited/corrected method:
private void Neutral_stance(Body body, IReadOnlyDictionary<JointType, Joint> joints, IDictionary<JointType, Point> jointPoints, BodyFrame bf)
{
    // CameraSpacePoint joint setup here again (see the original post for the bit leading up to the if statements)
    if (bf != null)
    {
        TimeSpan frametime = bf.RelativeTime;
        string path_s = @"C:\Users\aheij\Desktop\KinectOutput\swipe.txt";
        if (left_hand.Y < left_elbow.Y)
        {
            if (right_hand.Y < right_elbow.Y)
            {
                if (shoulderhand_xrange_l < vertical_error)
                {
                    if (shoulderhand_xrange_r < vertical_error)
                    {
                        Gesture_being_done.Text = " Neutral";
                        FileStream fs_s = new FileStream(path_s, FileMode.Append); // swipe
                        byte[] bdatas = Encoding.Default.GetBytes(frametime.ToString() + " Neutral" + Environment.NewLine);
                        fs_s.Write(bdatas, 0, bdatas.Length);
                        fs_s.Close();
                    }
                }
            }
        }
        else
        {
            Gesture_being_done.Text = " Unknown";
            FileStream fs_s = new FileStream(path_s, FileMode.Append);
            byte[] bdatas = Encoding.Default.GetBytes(frametime.ToString() + " Unknown" + Environment.NewLine);
            fs_s.Write(bdatas, 0, bdatas.Length);
            fs_s.Close();
        }
    }
}
Do let me know if there is any way I can make this more elegant or anything else I should be aware of w.r.t this answer.
The code is based off of the code found here: FileStream Tutorial website
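As a footnote to the 'using StreamWriter' question above, a minimal sketch using the same hypothetical path and variables as the original method - the using block disposes (and therefore closes) the stream even if the write throws, so no handle is left open between calls:
// Sketch: File.AppendText opens the file for appending; Dispose closes it.
using (StreamWriter sw = File.AppendText(path_s))
{
    sw.WriteLine(frametime.ToString() + " Neutral");
}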

Xamarin Monotouch Audio Units Callbacks

I have a problem with Audio Units in MonoTouch/Xamarin.
It seems like I can't get a callback on recording, just on playback.
I used this example:
https://github.com/xamarin/monotouch-samples/blob/master/AUSoundTriggeredPlayingSoundMemoryBased/ExtAudioBufferPlayer.cs
and looked at Obj-C examples. The Obj-C examples are pretty much the same as my code, so I'm a little confused about this.
The output when running my example is:
INPUT0
which is the bus number for output. So the expected output would be:
INPUT1
So my question is: how do I get a recording callback and a playback callback running at the same time - or just how do I get a recording callback at all?
My Code:
void prepareAudioUnit()
{
    // AudioSession
    AudioSession.Initialize();
    AudioSession.Category = AudioSessionCategory.PlayAndRecord;
    AudioSession.PreferredHardwareIOBufferDuration = Config.packetLength;
    AudioSession.PreferredHardwareSampleRate = Format.samplingRate;
    //AudioSession.SetActive (false);
    AudioSession.SetActive(true);
    Logger.log("HWSR:" + AudioSession.CurrentHardwareSampleRate);
    // Getting the AudioComponent (remote I/O with voice processing)
    _audioComponent = AudioComponent.FindComponent(AudioTypeOutput.VoiceProcessingIO);
    // creating an audio unit instance
    _audioUnit = new AudioUnit(_audioComponent);
    // turning on the microphone
    _audioUnit.SetEnableIO(true,
        AudioUnitScopeType.Input,
        1 // remote input (bus 1)
    );
    _audioUnit.SetEnableIO(true,
        AudioUnitScopeType.Output,
        0 // remote output (bus 0)
    );
    // setting the audio format
    _audioUnit.SetAudioFormat(Format.AudioStreamBasicDescription,
        AudioUnitScopeType.Output,
        1
    );
    _audioUnit.SetAudioFormat(Format.AudioStreamBasicDescription,
        AudioUnitScopeType.Input,
        0
    );
    // setting the callback methods
    _audioUnit.SetRenderCallback(_audioUnit_OutputCallback, AudioUnitScopeType.Global, 0);
    _audioUnit.SetRenderCallback(_audioUnit_InputCallback, AudioUnitScopeType.Global, 1);
}

AudioUnitStatus _audioUnit_OutputCallback(AudioUnitRenderActionFlags actionFlags, AudioTimeStamp timeStamp, uint busNumber, uint numberFrames, AudioBuffers data)
{
    Logger.log("OUTPUT" + busNumber);
    return AudioUnitStatus.NoError;
}

AudioUnitStatus _audioUnit_InputCallback(AudioUnitRenderActionFlags actionFlags, AudioTimeStamp timeStamp, uint busNumber, uint numberFrames, AudioBuffers data)
{
    Logger.log("INPUT" + busNumber);
    return AudioUnitStatus.NoError;
}
This problem is a bug in Xamarin: they forgot to add a method for input callbacks.
I reported the bug, but for people needing the same:
http://nopaste.info/8d0aca98d9.html
It's not pretty, but it shows how to solve the problem and write a fix yourself until Xamarin updates this.
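For readers on newer Xamarin.iOS versions: the binding has since gained a dedicated input-callback API. A hedged sketch, assuming your binding version exposes AudioUnit.SetInputCallback and its delegate type (verify against your version before relying on it):
// Hedged sketch: register an input callback on bus 1 (the microphone bus).
_audioUnit.SetInputCallback(_audioUnit_NewInputCallback, AudioUnitScopeType.Global, 1);

AudioUnitStatus _audioUnit_NewInputCallback(AudioUnitRenderActionFlags actionFlags, AudioTimeStamp timeStamp, uint busNumber, uint numberFrames, AudioUnit audioUnit)
{
    // Unlike the render callback, the input callback hands you the unit
    // rather than the buffers; call Render on it here to pull the
    // recorded samples into your own buffer list.
    Logger.log("INPUT" + busNumber);
    return AudioUnitStatus.NoError;
}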

Best way to play MIDI sounds using C#

I'm trying to rebuild an old metronome application, originally written in C++ with MFC, in .NET using C#. One of the issues I'm running into is playing the MIDI files that are used to represent the metronome's "clicks".
I've found a few articles online about playing MIDI in .NET, but most of them seem to rely on custom libraries that someone has cobbled together and made available. I'm not averse to using these, but I'd rather understand for myself how this is being done, since it seems like it should be a mostly trivial exercise.
So, am I missing something? Or is it just difficult to use MIDI inside of a .NET application?
I'm working on a C# MIDI application at the moment, and the others are right - you need to use p/invoke for this. I'm rolling my own as that seemed more appropriate for the application (I only need a small subset of MIDI functionality), but for your purposes the C# MIDI Toolkit might be a better fit. It is at least the best .NET MIDI library I found, and I searched extensively before starting the project.
I think you'll need to P/Invoke out to the Windows API to be able to play MIDI files from .NET.
This codeproject article does a good job on explaining how to do this:
vb.net article to play midi files
To rewrite this in C#, you'd need the following import statement for mciSendString:
[DllImport("winmm.dll")]
static extern Int32 mciSendString(String command, StringBuilder buffer,
Int32 bufferSize, IntPtr hwndCallback);
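A hedged usage sketch of that import (hypothetical file path and alias; these are standard MCI command strings):
// Sketch: open the file under an alias, play it, and later close it
// on the same thread.
mciSendString(@"open C:\metronome\click.mid type sequencer alias click", null, 0, IntPtr.Zero);
mciSendString("play click", null, 0, IntPtr.Zero);
// ... later:
mciSendString("stop click", null, 0, IntPtr.Zero);
mciSendString("close click", null, 0, IntPtr.Zero);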
Hope this helps - good luck!
midi-dot-net got me up and running in minutes - lightweight and right-sized for my home project. It's also available on GitHub. (Not to be confused with the previously mentioned MIDI.NET, which also looks promising, I just never got around to it.)
Of course NAudio (also mentioned above) has tons of capability, but like the original poster I just wanted to play some notes and quickly read and understand the source code.
I think it's much better to use a library that has advanced features for MIDI data playback instead of implementing it on your own. For example, with DryWetMIDI (I'm the author of it) you can play a MIDI file via the default synthesizer (Microsoft GS Wavetable Synth):
using Melanchall.DryWetMidi.Devices;
using Melanchall.DryWetMidi.Core;
// ...
var midiFile = MidiFile.Read("Greatest song ever.mid");
using (var outputDevice = OutputDevice.GetByName("Microsoft GS Wavetable Synth"))
{
    midiFile.Play(outputDevice);
}
Play will block the calling thread until entire file played. To control playback of a MIDI file, obtain Playback object and use its Start/Stop methods (more details in the Playback article of the library docs):
var playback = midiFile.GetPlayback(outputDevice);
// You can even loop playback and speed it up
playback.Loop = true;
playback.Speed = 2.0;
playback.Start();
// ...
playback.Stop();
// ...
playback.Dispose();
outputDevice.Dispose();
I can't claim to know much about it, but I don't think it's that straightforward - Carl Franklin of DotNetRocks fame has done a fair bit with it - have you seen his DNRTV?
You can use the media player:
using WMPLib;
//...
WindowsMediaPlayer wmp = new WindowsMediaPlayer();
wmp.URL = Path.Combine(Application.StartupPath, "Resources/mymidi1.mid");
wmp.controls.play();
For extensive MIDI and Wave manipulation in .NET, I think hands down NAudio is the solution (Also available via NuGet).
A recent addition is MIDI.NET that supports Midi Ports, Midi Files and SysEx.
Sorry this question is a little old now, but the following worked for me (somewhat copied from Win32 - Midi looping with MCISendString):
[DllImport("winmm.dll")]
static extern Int32 mciSendString(String command, StringBuilder buffer, Int32 bufferSize, IntPtr hwndCallback);
public static void playMidi(String fileName, String alias)
{
mciSendString("open " + fileName + " type sequencer alias " + alias, new StringBuilder(), 0, new IntPtr());
mciSendString("play " + alias, new StringBuilder(), 0, new IntPtr());
}
public static void stopMidi(String alias)
{
mciSendString("stop " + alias, null, 0, new IntPtr());
mciSendString("close " + alias, null, 0, new IntPtr());
}
A full listing of command strings is given here. The cool part about this is you can just use different things besides sequencer to play different things, say waveaudio for playing .wav files. I can't figure out how to get it to play .mp3 though.
Also, note that the stop and close command must be sent on the same thread that the open and play commands were sent on, otherwise they will have no effect and the file will remain open. For example:
[DllImport("winmm.dll")]
static extern Int32 mciSendString(String command, StringBuilder buffer,
Int32 bufferSize, IntPtr hwndCallback);
public static Dictionary<String, bool> playingMidi = new Dictionary<String, bool>();
public static void PlayMidi(String fileName, String alias)
{
if (playingMidi.ContainsKey(alias))
throw new Exception("Midi with alias '" + alias + "' is already playing");
playingMidi.Add(alias, false);
Thread stoppingThread = new Thread(() => { StartAndStopMidiWithDelay(fileName, alias); });
stoppingThread.Start();
}
public static void StopMidiFromOtherThread(String alias)
{
if (!playingMidi.ContainsKey(alias))
return;
playingMidi[alias] = true;
}
public static bool isPlaying(String alias)
{
return playingMidi.ContainsKey(alias);
}
private static void StartAndStopMidiWithDelay(String fileName, String alias)
{
mciSendString("open " + fileName + " type sequencer alias " + alias, null, 0, new IntPtr());
mciSendString("play " + alias, null, 0, new IntPtr());
StringBuilder result = new StringBuilder(100);
mciSendString("set " + alias + " time format milliseconds", null, 0, new IntPtr());
mciSendString("status " + alias + " length", result, 100, new IntPtr());
int midiLengthInMilliseconds;
Int32.TryParse(result.ToString(), out midiLengthInMilliseconds);
Stopwatch timer = new Stopwatch();
timer.Start();
while(timer.ElapsedMilliseconds < midiLengthInMilliseconds && !playingMidi[alias])
{
}
timer.Stop();
StopMidi(alias);
}
private static void StopMidi(String alias)
{
if (!playingMidi.ContainsKey(alias))
throw new Exception("Midi with alias '" + alias + "' is already stopped");
// Execute calls to close and stop the player, on the same thread as the play and open calls
mciSendString("stop " + alias, null, 0, new IntPtr());
mciSendString("close " + alias, null, 0, new IntPtr());
playingMidi.Remove(alias);
}
A new player emerges:
https://github.com/atsushieno/managed-midi
https://www.nuget.org/packages/managed-midi/
Not much in the way of documentation, but one focus of this library is cross platform support.
System.Media.SoundPlayer is a good, simple way of playing WAV files. WAV files have some advantages over MIDI, one of them being that you can control precisely what each instrument sounds like (rather than relying on the computer's built-in synthesizer).
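A minimal sketch ("click.wav" being a hypothetical file name):
// Sketch: SoundPlayer loads and plays a WAV file with no extra dependencies.
using System.Media;

var player = new SoundPlayer("click.wav");
player.Load();  // pre-load so the first tick isn't late
player.Play();  // asynchronous; use PlaySync() to block instead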
