I am using the DryWetMIDI library with .NET 7 and I am trying to use a MIDI output device in a MAUI app. When I connect an input device it seems to work fine, but the only output I could get from the output device was the following error: Internal error (OUT_SENDSHORTRESULT_INVALIDHANDLE). When I tried the same thing in a simple console application, it worked perfectly.
Also, because of my lack of experience with MAUI, I don't really know whether I should change something in the project dependencies or in the builder, or maybe declare the MIDI objects in the App or the AppShell...
So I tried to create an input device and an output device and connect them to each other (this is what the DryWetMIDI documentation suggests). Next I try to get the events from the input and output devices; the input device works, but the output device doesn't.
I use the following code, where the output device doesn't work in MAUI:
private InputDevice inputDevice;
private OutputDevice outputDevice;
private DevicesConnector devicesConnector;

void ConnectMidi()
{
    // Create the input device
    inputDevice = InputDevice.GetByName("Keystation Mini 32");
    inputDevice.EventReceived += OnEventReceived;

    // Create the output device
    outputDevice = OutputDevice.GetByName("Microsoft GS Wavetable Synth");
    outputDevice.EventSent += OnEventSent;

    // Connect them
    devicesConnector = inputDevice.Connect(outputDevice);
    inputDevice.StartEventsListening();
}

public void OnEventReceived(object sender, MidiEventReceivedEventArgs e)
{
    var midiDevice = (MidiDevice)sender;
    Debug.WriteLine("This gets called when a key is pressed");
}

public void OnEventSent(object sender, MidiEventSentEventArgs e)
{
    var midiDevice = (MidiDevice)sender;
    Debug.WriteLine("This never gets called");
}
If there is another solution using a different library or something else, I would love to hear it!
Hopefully this makes my problem clear, and thanks in advance.
(This is also my first post, so feedback would be appreciated.)
I'm the author of DryWetMIDI. I've analyzed the problem:
The bug is not in the library; it's a MAUI-related one.
The error comes from the midiOutOpen Windows system function, which fails for Microsoft GS Wavetable Synth in a MAUI project but works in any other project type.
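For reference, a minimal console repro sketch that works outside MAUI (the devices API lives in Melanchall.DryWetMidi.Multimedia in recent versions; in older releases it was Melanchall.DryWetMidi.Devices):
using System.Threading;
using Melanchall.DryWetMidi.Common;
using Melanchall.DryWetMidi.Core;
using Melanchall.DryWetMidi.Multimedia;

// Open the GS synth and send a Note On / Note Off pair; in a console app this
// plays a short note without the OUT_SENDSHORTRESULT_INVALIDHANDLE error.
using (var outputDevice = OutputDevice.GetByName("Microsoft GS Wavetable Synth"))
{
    outputDevice.SendEvent(new NoteOnEvent((SevenBitNumber)60, (SevenBitNumber)100));
    Thread.Sleep(500);
    outputDevice.SendEvent(new NoteOffEvent((SevenBitNumber)60, (SevenBitNumber)0));
}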
I've reported the bug in the MAUI repo: https://github.com/dotnet/maui/issues/12368. So all we can do is wait for a response from Microsoft.
Update:
The bug has been moved to the WinUI repo: https://github.com/microsoft/microsoft-ui-xaml/issues/8055. Also, you can mark your project as Unpackaged and the issue should go away.
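A minimal sketch of what marking the project as Unpackaged usually means for the Windows target of a MAUI app (set in the .csproj; you may also need an Unpackaged launch profile in launchSettings.json):
<!-- .csproj: run the Windows target without MSIX packaging -->
<PropertyGroup>
    <WindowsPackageType>None</WindowsPackageType>
</PropertyGroup>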
I want to capture an image via the SNAPI API from a Symbol (now Zebra) barcode scanner, model DS4208 (we're also using other, compatible models from Zebra).
Barcode capturing/recognition works pretty well, but it looks like the SnapiDLL.SNAPI_SnapShot(hScanner) call doesn't work correctly: no WM_XFERSTATUS message is received at all.
Here is a small, simplified code snippet:
// Set image format
short[] parms = new short[2] { (short)SnapiParamIds.ImageFileType, (short)SnapiImageTypes.Jpeg };
var retCode = SnapiDLL.SNAPI_SetParameters(parms, 2, _devHandles[0]);
Debug.WriteLine($"SNAPI_SetParameters retCode={retCode}");
Application.DoEvents();
Thread.Sleep(50);
retCode = SnapiDLL.SNAPI_SnapShot(_devHandles[0]);
Debug.WriteLine($"SNAPI_SnapShot retCode={retCode}");
Application.DoEvents();
Thread.Sleep(50);
retCode = SnapiDLL.SNAPI_PullTrigger(_devHandles[0]);
Debug.WriteLine($"SNAPI_PullTrigger retCode={retCode}");
Application.DoEvents();
Thread.Sleep(50);
The return codes are always 0 (i.e. no error), but no WM_XFERSTATUS message is received by my message handler.
P.S. The C# application from Zebra's SDK, which uses the CoreScanner driver and OCX, works fine and is able to capture images and video. But I'd like to avoid the CoreScanner driver installation for a few reasons; for barcode scanning the small and simple SNAPI.dll works pretty well, and I expected to get it working for image capturing too. Probably I'm doing something wrong...
P.P.S. Guys, please DO NOT COMMENT if you have NO EXPERIENCE WORKING WITH SYMBOL BARCODE SCANNERS & SNAPI and CANNOT PROVIDE a working snippet!
After contacting Zebra tech support (they are the successors of the Symbol/Motorola barcode scanner business), I found out that the imaging/video functionality is broken in SNAPI.dll on x64 OSes (though most of the other API calls work properly). Unfortunately, SNAPI is no longer supported by Zebra, and I have to use Zebra's CoreScanner API instead. The good news is that this API works fine, as it's supposed to. The not-so-good news: I have to install an additional package from Zebra.
Using an Android scanner device, running KitKat.
Using Xamarin in Visual Studio Enterprise 2017, 15.9.9.
I need to generate a "Success" or "Error" sound, based on the content of the scanned barcode.
I have two files: "Success.mp3" and "Error.wav", neither of which will play.
Since this should be a very simple process, I am trying to use Android's MediaPlayer class, rather than add some NuGet package.
I am using dependency injection to properly run the Android code, since Xamarin does not have any sort of media API.
I instantiate Android's MediaPlayer as the variable "player", and it does instantiate successfully, but as soon as I attempt to do anything with it, it throws a null reference exception and the value of "player" displays as null.
I have been experimenting with different ways to do this and have copies of the sound files stored both in the Assets folder and in the Resources/Raw folder (see below).
Here is my method:
public void PlaySound(string soundType)
{
    var filename =
        global::Android.App.Application.Context.Assets.OpenFd(soundType);

    if (player == null)
    {
        MediaPlayer player = new MediaPlayer();
    }

    //This is where the error happens
    player.SetDataSource(filename);
    player.Prepare();
    player.Start();
}
I have also tried the last three lines as the following, with the same result:
player.Prepared += (s, e) => {
player.Start();
};
player.SetDataSource(filename.FileDescriptor, filename.StartOffset,
filename.Length);
player.Prepare();
I have also attempted to utilize what so many people demonstrate as the way to do this, but it does not work for me. This is where the file must be stored in Resources/Raw:
player = MediaPlayer.Create(global::Android.App.Application.Context,
Resource.Raw.someFileName);
Whatever value you use for "someFileName", all Visual Studio gives you is "'Resource.Raw' does not contain a definition for 'someFileName'".
Resource.designer.cs does contain entries for both files:
public const int Error = 2131230720;
public const int Success = 2131230721;
Expected results: sound, or some meaningful error message that puts me on the right path.
I am still relatively new to Xamarin and am probably missing something that would be obvious to veteran eyes. I have tried many other things, most of which are not mentioned here, grasping at straws. This should be simple, but it is proving otherwise. Thank you for any help you can provide.
I have a TextBlock (ContentTextBlock) with AutomationProperties.LiveSetting="Assertive". I'm just testing and checking how useful this feature is. And... I am disappointed so far.
private void Button_Click(object sender, RoutedEventArgs e)
{
    ContentTextBlock.Text += " test";

    // Get (or create) the automation peer for the TextBlock and raise the live region event
    var peer = UIElementAutomationPeer.FromElement(ContentTextBlock)
               ?? UIElementAutomationPeer.CreatePeerForElement(ContentTextBlock);
    peer.RaiseAutomationEvent(AutomationEvents.LiveRegionChanged);
}
When using Narrator, this works as advertised: whenever I click the button, Narrator announces the TextBlock text ("test", "test test", "test test test")... But when I use NVDA or JAWS, nothing happens, although the screen reader versions are relatively up to date. Did they really not add any support for live regions, or am I just missing an important point?
While I didn't manage to get live regions to work, I found another workaround:
Tolk by Davy Kager
Tolk is a library which can, among other things:
- detect which supported screen reader, if any, is running;
- pass strings to the screen reader's speech engine and braille display.
It also has support for SAPI.
To include Tolk in your C# project, download it from the link above, then include tolk.cs (from src/dotnet) in your project and place tolk.dll (found in bin) in the folder with your executable (or somewhere on the PATH). Make sure the DLL version matches your CPU target (x86/x64). Do the same for the DLLs in the lib directory. Then you can use it according to the code found in the examples folder.
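A rough usage sketch based on my reading of the examples folder (the method names Load, DetectScreenReader, Output and Unload come from the Tolk C# wrapper; double-check them against the bundled examples):
// Minimal Tolk usage sketch
Tolk.TrySAPI(true);                      // allow SAPI as a fallback if no screen reader is running
Tolk.Load();                             // initialize Tolk and detect screen readers
var reader = Tolk.DetectScreenReader();  // e.g. "NVDA" or "JAWS", null if none found
Tolk.Output("Content changed: test");    // send the string to speech and braille
Tolk.Unload();                           // clean up when the app shuts down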
PS. Tolk works on Windows 7 as well, so that's a bonus. The live regions of WPF are only supported from Windows 8 on.
This program is an audio visualizer for an RGB keyboard that listens to Windows' default audio device. My audio setup is a bit more involved, and I use far more than just the default audio device. For instance, when I play music from Winamp it goes through the device Auxillary 1 (Synchronous Audio Router) instead of Desktop Input (Synchronous Audio Router), which I have set as the default. I'd like to be able to change the device that the program listens to for the visualization.
I found where the audio device is declared in the source; lines 32-36 in CSCoreAudioInput.cs:
public void Initialize()
{
    MMDevice captureDevice = MMDeviceEnumerator.DefaultAudioEndpoint(DataFlow.Render, Role.Console);
    WaveFormat deviceFormat = captureDevice.DeviceFormat;

    _audioEndpointVolume = AudioEndpointVolume.FromDevice(captureDevice);
}
The way I understand it from the documentation, the MMDeviceEnumerator.DefaultAudioEndpoint(DataFlow.Render, Role.Console) call is where Windows gives the application my default IMMEndpoint, "Desktop Input".
How would I go about changing DefaultAudioEndpoint?
Further reading shows a few ways to get an IMMDevice, with DefaultAudioEndpoint being one of them. It seems to me that I'd have to enumerate the devices and then pick out Auxillary 1 (Synchronous Audio Router) using PKEY_Device_FriendlyName. That's a bit much for me, as I have little to no C# experience. Is there an easier way to choose a different endpoint? Am I on the right track, or am I missing the mark completely?
Also, what is the difference between MMDevice and IMMDevice? The source only seems to use MMDevice, while all the Microsoft documentation references IMMDevice.
Thanks.
I DID IT!
I found out why the program uses MMDevice rather than IMMDevice: the developer chose to use the CSCore library rather than Windows' own Core Audio API directly.
From continued reading of the CSCore MMDeviceEnumerator documentation, it looks like I'll have to make a separate program that outputs all endpoints and their respective endpoint ID strings. Then I can substitute the DefaultAudioEndpoint method with the GetDevice(String id) method, where String id is the ID of whichever endpoint I choose from the separate program.
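In other words, the substitution would look roughly like this (sketch only; the endpoint ID string is a placeholder for one printed by the enumeration program below, and I'm assuming CSCore's enumerator exposes GetDevice as a wrapper over IMMDeviceEnumerator::GetDevice):
// Replace the DefaultAudioEndpoint call with a lookup by endpoint ID string.
MMDeviceEnumerator enumerator = new MMDeviceEnumerator();
MMDevice captureDevice = enumerator.GetDevice("{0.0.0.00000000}.{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}");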
To find the endpoint I wanted, I wrote this short program to list all the info I needed:
static void Main(string[] args)
{
    MMDeviceEnumerator enumerator = new MMDeviceEnumerator();
    MMDeviceCollection collection = enumerator.EnumAudioEndpoints(DataFlow.Render, DeviceState.Active);

    Console.WriteLine($"\nNumber of active Devices: {collection.GetCount()}");

    int i = 0;
    foreach (MMDevice device in collection)
    {
        Console.WriteLine($"\n{i} Friendly name: {device.FriendlyName}");
        Console.WriteLine($"Endpoint ID: {device.DeviceID}");
        i++;
    }

    Console.ReadKey();
}
This showed me that the endpoint I wanted was item number 3 on the list (index 2 in the array), so instead of using GetDevice(String id) I used ItemAt(int deviceIndex).
MMDeviceEnumerator enumerator = new MMDeviceEnumerator();
MMDeviceCollection collection = enumerator.EnumAudioEndpoints(DataFlow.Render,DeviceState.Active);
MMDevice captureDevice = collection.ItemAt(2);
However, in this case the program was not using captureDevice to bring in the audio data. These were the magic lines:
_capture = new WasapiLoopbackCapture(100, new WaveFormat(deviceFormat.SampleRate, deviceFormat.BitsPerSample, i));
_capture.Initialize();
I found that WasapiLoopbackCapture uses Windows' default device unless changed, and the code was using DefaultAudioEndpoint to get the properties of the default device. So I added
_capture.Device = captureDevice;
//before
_capture.Initialize();
And now the program properly pulls the audio data from my non-default audio endpoint.
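For anyone patching the same file, the relevant part of Initialize() ends up looking roughly like this (a sketch: the ItemAt index is specific to my machine, and I'm assuming the channel count comes from deviceFormat where the original code used its own variable):
public void Initialize()
{
    // Enumerate active render endpoints and pick the one found with the listing
    // program above (index 2 on my machine; adjust for yours).
    MMDeviceEnumerator enumerator = new MMDeviceEnumerator();
    MMDeviceCollection collection = enumerator.EnumAudioEndpoints(DataFlow.Render, DeviceState.Active);
    MMDevice captureDevice = collection.ItemAt(2);

    WaveFormat deviceFormat = captureDevice.DeviceFormat;
    _audioEndpointVolume = AudioEndpointVolume.FromDevice(captureDevice);

    // Create the loopback capture as the original code does, then point it at the
    // chosen device before initializing it.
    _capture = new WasapiLoopbackCapture(100, new WaveFormat(deviceFormat.SampleRate, deviceFormat.BitsPerSample, deviceFormat.Channels));
    _capture.Device = captureDevice;
    _capture.Initialize();
}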
I was asked to solve a similar type of problem this week. Although there are a few libraries that do this, I was specifically asked to do it for "non-ish" programmers, so I developed this in PowerShell:
PowerShell default audio device changer - GitHub
Maybe you can adapt it to your needs.
I have a USB-connected MSR reader and I am trying to access it using the sample code provided here. That works fine, but the problem is that when I add the same code to my app it doesn't work: GetDefaultAsync returns null.
private static MagneticStripeReader _reader = null;

public static async void StartRead()
{
    if (await CreateDefaultMagneticStripeReaderObject())
    {
        ....
    }
}

private static async Task<bool> CreateDefaultMagneticStripeReaderObject()
{
    if (_reader == null)
    {
        _reader = await MagneticStripeReader.GetDefaultAsync();
        if (_reader == null)
            return false;
    }
    return true;
}
My code is like the above, very similar to the sample, but it doesn't work. I've also added the pointOfService device capability, so that is not the problem.
I was in exactly the same situation and spent the last 5 hours on it; now I finally know what was going on. You are missing a capability in Package.appxmanifest.
'pointOfService' is the capability you want to include. This capability does not show up in the UI, which is why I could not find any difference between my broken project and Microsoft's sample project. You cannot add the capability using the UI; you have to add it manually by editing the XML file.
Microsoft's sample project has it too:
https://github.com/Microsoft/Windows-universal-samples/blob/master/Samples/MagneticStripeReader/cs/Package.appxmanifest#L53
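For reference, this is roughly what the entry looks like in Package.appxmanifest (based on the linked sample; the Capabilities element may already exist in your manifest):
<Capabilities>
  <DeviceCapability Name="pointOfService" />
</Capabilities>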
Make sure the card reader is in HID mode and not keyboard emulation mode. That was one of my problems.
Doing this is really wonky. MagTek has an ActiveX control on their website to assist with it... and because ActiveX is awful, you can only use it with Internet Explorer (it won't even work with Edge).
Go here in IE: https://www.magtek.com/changemode/
Enable ActiveX when it pops up, and you can change from HID to keyboard mode and back.