I'm using CSCore to play an MP3 file.
First, I have this field declared in my public partial class: public MMDevice SelfDevice;
Next, I get the default playback device with this code:
MMDeviceEnumerator enumerator = new MMDeviceEnumerator();
SelfDevice = enumerator.GetDefaultAudioEndpoint(DataFlow.Render, Role.Console);
Now SelfDevice holds the default playback device.
Then I run this code to play an MP3 file:
if (CSCore.SoundOut.WasapiOut.IsSupportedOnCurrentPlatform)
    _soundOutToSelf = new CSCore.SoundOut.WasapiOut() { Device = SelfDevice };
else
    _soundOutToSelf = new DirectSoundOut();
var source = CodecFactory.Instance.GetCodec(pathToMP3())
    .Loop()
    .ChangeSampleRate(32000)
    .ToSampleSource()
    .AppendSource(Equalizer.Create10BandEqualizer)
    .ToWaveSource();
_soundOutToSelf.Initialize(source);
_soundOutToSelf.Play();
_soundOutToSelf.Volume = 1;
MessageBox.Show(SelfDevice.FriendlyName);
It works, but only when my HDMI audio is set as the default playback device; no music plays when my speakers are the default. The MessageBox also shows the right playback device name, so there's no problem with the device variable. What's the problem here?
I also tried making a new project and running the code there, and it worked without any problem. So I reverted to an older version of the project from when it was still working, but now that isn't working either.
I deleted the Debug folder and the problem still occurs.
EDIT: I found out I can fix the problem by changing the AssemblyName of the program, but I still don't know why Windows is blocking my program.
Changing the AssemblyName or creating a new project fixes the problem; I still don't know the real cause.
I'm writing a C# Android app using Xamarin. I wrote this code:
protected MediaPlayer player;
protected override void OnCreate(Bundle savedInstanceState)
{
    base.OnCreate(savedInstanceState);
    SetContentView(Resource.Layout.layout1);
    this.Window.AddFlags(WindowManagerFlags.Fullscreen);
    player = new MediaPlayer();
    player.Reset();
    var fileDescriptor = Assets.OpenFd("MySound.mp3");
    player.SetDataSource(fileDescriptor.FileDescriptor);
    player.Prepare();
    player.Start();
}
The MySound.mp3 file sits directly in the Assets folder. When I run the app I get this error:
Java.IO.IOException
Message=Prepare failed.: status=0x1
on the line with player.Prepare();
What is wrong? Why is it not working?
This seems to be a common exception with MediaPlayer.Prepare() on Android devices. A quick search turned up this:
https://social.msdn.microsoft.com/Forums/en-US/15f9c371-1b8d-4927-9555-f40e2829c377/mediaplayer-prepare-failed-stauts0x1?forum=xamarinandroid
Quote by Anonymous:
Hi, I wasted a lot of time trying to find a solution to this issue on my side too. So I post here, just in case it helps some people.
My situation: I wanted to load an audio file from my assets (so not registered in my resources). I am using similar code to Ike Nwaogu's, except that I am using an AssetFileDescriptor to open my file (in my activity class code, so I have access to "Assets"):
string path = "Audio/myfile.ogg";
Android.Content.Res.AssetFileDescriptor afd = Assets.OpenFd(path);
MediaPlayer soundPlayer = new MediaPlayer();
if (afd != null)
{
    soundPlayer.Reset();
    soundPlayer.SetDataSource(afd.FileDescriptor);
    soundPlayer.Prepare();
    soundPlayer.Enabled = true;
    afd.Close();
}
I was failing on the Prepare(). I tried adding the external storage access permissions (though it didn't make sense since the file was loaded from my assets directly, I tried just in case).
Just by chance, by seeing other people's samples on the forums, I added the afd.StartOffset and afd.DeclaredLength parameters:
soundPlayer.SetDataSource(afd.FileDescriptor, afd.StartOffset, afd.DeclaredLength);
and it worked ... I don't know if it's just luck and it is going to fail again later, or if there is a bug in the API ...
According to various sources, passing the start offset and length when setting the data source should fix this; in Xamarin, Java's getStartOffset() and getLength() surface as the StartOffset and Length properties:
player.SetDataSource(fileDescriptor.FileDescriptor, fileDescriptor.StartOffset, fileDescriptor.Length);
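Applied to the OnCreate method from the question, the fix would look like this (a sketch: only the SetDataSource call changes, plus closing the descriptor as in the quoted post):
protected override void OnCreate(Bundle savedInstanceState)
{
    base.OnCreate(savedInstanceState);
    SetContentView(Resource.Layout.layout1);
    this.Window.AddFlags(WindowManagerFlags.Fullscreen);
    player = new MediaPlayer();
    player.Reset();
    // Assets are packed into the .apk, so the player needs the offset and
    // length of the sound inside that container, not just the raw descriptor.
    var fileDescriptor = Assets.OpenFd("MySound.mp3");
    player.SetDataSource(fileDescriptor.FileDescriptor, fileDescriptor.StartOffset, fileDescriptor.Length);
    fileDescriptor.Close();
    player.Prepare();
    player.Start();
}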
Alternatively, here are a few more ideas:
Android MediaPlayer throwing "Prepare failed.: status=0x1" on 2.1, works on 2.2
https://forum.processing.org/two/discussion/6722/andriod-mediaplayer-prepare-failed-status-0x1
Using an Android scanner device, running KitKat.
Using Xamarin in Visual Studio Enterprise 2017, 15.9.9.
I need to generate a "Success" or "Error" sound, based on the content of the scanned barcode.
I have two files: "Success.mp3" and "Error.wav", neither of which will play.
Since this should be a very simple process, I am trying to use Android's MediaPlayer class, rather than add some NuGet package.
I am using dependency injection to properly run the Android code, since Xamarin does not have any sort of media API.
I instantiate Android's MediaPlayer as the variable "player", and it does successfully instantiate, but as soon as I attempt to do anything with it, it throws a null reference exception and the value of "player" shows as null.
I have been experimenting with different ways to do this and have copies of the sound files stored both in the Assets folder and the Resources/Raw folder (see below).
Here is my method:
public void PlaySound(string soundType) {
    var filename = global::Android.App.Application.Context.Assets.OpenFd(soundType);
    if (player == null) {
        MediaPlayer player = new MediaPlayer();
    }
    //This is where the error happens
    player.SetDataSource(filename);
    player.Prepare();
    player.Start();
}
I have also tried the last three lines as the following, with the same result:
player.Prepared += (s, e) => {
    player.Start();
};
player.SetDataSource(filename.FileDescriptor, filename.StartOffset, filename.Length);
player.Prepare();
I have also attempted to utilize what so many people demonstrate as the way to do this, but it does not work for me. This is where the file must be stored in Resources/Raw:
player = MediaPlayer.Create(global::Android.App.Application.Context, Resource.Raw.someFileName);
Whatever value you use for "someFileName", all Visual Studio gives you is "'Resource.Raw' does not contain a definition for 'someFileName'".
Resource.designer.cs does contain entries for both files:
public const int Error = 2131230720;
public const int Success = 2131230721;
Expected results: sound, or some meaningful error message that puts me on the right path.
I am still relatively new to Xamarin and am probably missing something that would be obvious to veteran eyes. I have tried so many other things, most of which are not mentioned here, grasping for some straw. This should be simple, but is proving otherwise. Thank you for any help that you can provide.
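One note on the PlaySound method above: MediaPlayer player = new MediaPlayer(); inside the null check declares a new local variable that shadows the player field, so the field is never assigned and is still null when SetDataSource runs, which would produce exactly the null reference described. A minimal correction (a sketch, keeping everything else as in the question and using the offset/length overload):
public void PlaySound(string soundType) {
    var filename = global::Android.App.Application.Context.Assets.OpenFd(soundType);
    if (player == null) {
        // Assign the existing field; 'MediaPlayer player = ...' here would only
        // create a local that disappears at the closing brace.
        player = new MediaPlayer();
    }
    player.SetDataSource(filename.FileDescriptor, filename.StartOffset, filename.Length);
    filename.Close();
    player.Prepare();
    player.Start();
}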
This program is an audio visualizer for an RGB keyboard that listens to Windows' default audio device. My audio setup is a bit more involved, and I use far more than just the default audio device. For instance, when I play music from Winamp it goes through the device Auxillary 1 (Synchronous Audio Router) instead of Desktop Input (Synchronous Audio Router), which I have set as default. I'd like to be able to change the device that the program listens to for the visualization.
I found in the source where the audio device is declared; Lines 32-36 in CSCoreAudioInput.cs:
public void Initialize()
{
    MMDevice captureDevice = MMDeviceEnumerator.DefaultAudioEndpoint(DataFlow.Render, Role.Console);
    WaveFormat deviceFormat = captureDevice.DeviceFormat;
    _audioEndpointVolume = AudioEndpointVolume.FromDevice(captureDevice);
}
The way that I understand it from the documentation, the section MMDeviceEnumerator.DefaultAudioEndpoint(DataFlow.Render, Role.Console) is where Windows gives the application my default IMMEndpoint "Desktop Input."
How would I go about changing DefaultAudioEndpoint?
Further reading shows a few ways to get an IMMDevice, with DefaultAudioEndpoint being one of them. It seems to me that I'd have to enumerate the devices and then separate out Auxillary 1 (Synchronous Audio Router) using PKEY_Device_FriendlyName. That's a bit much for me, as I have little to no C# experience. Is there an easier way to go about choosing a different endpoint? Am I on the right track, or am I missing the mark completely?
Also, what is the difference between MMDevice and IMMDevice? The source only seems to use MMDevice while all the Microsoft documentation references IMMDevice.
Thanks.
I DID IT!
I've found why the program uses MMDevice rather than IMMDevice. The developer has chosen to use the CSCore Library rather than Windows' own Core Audio API.
From continued reading of the CSCore MMDeviceEnumerator Documentation, it looks like I'll have to make a separate program that outputs all endpoints and their respective Endpoint ID Strings. Then I can substitute the DefaultAudioEndpoint method with the GetDevice(String id) method, where String id is the ID of whichever Endpoint I chose from the separate program.
To find the endpoint I wanted, I wrote this short program to list all the info I needed:
static void Main(string[] args)
{
    MMDeviceEnumerator enumerator = new MMDeviceEnumerator();
    MMDeviceCollection collection = enumerator.EnumAudioEndpoints(DataFlow.Render, DeviceState.Active);
    Console.WriteLine($"\nNumber of active Devices: {collection.GetCount()}");
    int i = 0;
    foreach (MMDevice device in collection)
    {
        Console.WriteLine($"\n{i} Friendly name: {device.FriendlyName}");
        Console.WriteLine($"Endpoint ID: {device.DeviceID}");
        i++;
    }
    Console.ReadKey();
}
This showed me that the Endpoint I wanted was item number 3 (2 in an array) on my list, and instead of using GetDevice(String id) I used ItemAt(int deviceIndex).
MMDeviceEnumerator enumerator = new MMDeviceEnumerator();
MMDeviceCollection collection = enumerator.EnumAudioEndpoints(DataFlow.Render, DeviceState.Active);
MMDevice captureDevice = collection.ItemAt(2);
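If the endpoint order ever changes, the GetDevice(string id) route mentioned earlier pins the device by its ID string instead of its position. Roughly (the ID here is a placeholder; paste the Endpoint ID printed by the listing program):
MMDeviceEnumerator enumerator = new MMDeviceEnumerator();
// Placeholder ID string; substitute the Endpoint ID from the console output
MMDevice captureDevice = enumerator.GetDevice("{0.0.0.00000000}.{your-endpoint-guid}");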
However in this case, the program was not using captureDevice to bring in the audio data. These were the magic lines:
_capture = new WasapiLoopbackCapture(100, new WaveFormat(deviceFormat.SampleRate, deviceFormat.BitsPerSample, i));
_capture.Initialize();
I found that WasapiLoopbackCapture uses Windows' default device unless changed, and the code was using DefaultAudioEndpoint to get the properties of the default device. So I added
_capture.Device = captureDevice;
//before
_capture.Initialize();
And now the program properly pulls the audio data off of my non-default audio endpoint.
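Pulled together, the modified Initialize() ends up roughly like this (a sketch: the 100 ms latency and WaveFormat arguments come from the original source, and I'm assuming the channel-count argument should be deviceFormat.Channels):
public void Initialize()
{
    MMDeviceEnumerator enumerator = new MMDeviceEnumerator();
    MMDeviceCollection collection = enumerator.EnumAudioEndpoints(DataFlow.Render, DeviceState.Active);
    // ItemAt(2) picks my non-default endpoint; match the index against the listing program
    MMDevice captureDevice = collection.ItemAt(2);
    WaveFormat deviceFormat = captureDevice.DeviceFormat;
    _audioEndpointVolume = AudioEndpointVolume.FromDevice(captureDevice);
    _capture = new WasapiLoopbackCapture(100, new WaveFormat(deviceFormat.SampleRate, deviceFormat.BitsPerSample, deviceFormat.Channels));
    _capture.Device = captureDevice; // must be assigned before Initialize()
    _capture.Initialize();
}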
I had been asked to solve a similar type of problem this week. Although there are a few libraries to do this, I was specifically asked to do it for "non-ish" programmers, so I developed this in PowerShell.
PowerShell default audio device changer - GitHub
Maybe you can alter it to your needs.
I'd like to play a previously recorded *.oni file in C#/WPF. While, with the help of this tutorial, I was able to get the RGB and depth streams to show up on my UI, I don't know how to play an *.oni file.
The OpenNI page mentions that I'd just have to "connect" to the file instead of the device, but I can't find the proper piece of code to do so.
The openni::Device class provides an interface to a single physical hardware device (via a driver). It can also provide an interface to a simulated hardware device via a recorded ONI file taken from a physical device.
If connecting to an ONI file instead of a physical device, it is only required that the ONI recording be available on the system running the application, and that the application have read access to this file.
I also found some clues / discussions, but none of them helped much:
C# problem with .oni player
OpenNI-dev: Not able to play the skeletonRec.oni
EDIT: I found a way to at least get the recording played using the SamplesConfig.xml. I just inserted the following code into the <ProductionNodes>:
<Recording file="\test.oni" playbackSpeed="1.0"/>
Sadly, that recording crashes the program when it's done playing - I'm now looking for a way to loop the recording...
EDIT 2: Just in case anybody is interested, I'm using these lines to set the recording to loop:
ScriptNode scriptNode;
context = Context.CreateFromXmlFile(path + "\\" + configuration, out scriptNode);
Player p = (Player)context.FindExistingNode(NodeType.Player);
if (p != null) p.SetRepeat(true); // Make sure it's really a recording.
In case anybody needs the code one day: I managed to load the file and play the recording without needing a config file:
Context context = new Context();
// Add license
License license = new License();
license.Vendor = "vendor";
license.Key = "key";
context.AddLicense(license);
// Open file
context.OpenFileRecordingEx("record.oni");
// Set to repeat
Player p = (Player)context.FindExistingNode(NodeType.Player);
if (p != null) p.SetRepeat(true);
I am writing a simple timer app for OS X 10.6 using Mono. How can I play an alarm sound (it may be a WAV/MP3 file or something else)?
I tried several ways; unfortunately, none of them worked:
NSSound doesn't seem to be supported by Mono yet:
MonoMac.AppKit.NSSound alarm = new MonoMac.AppKit.NSSound("alarm.wav");
alarm.Play();
Using a SoundPlayer didn't work either:
System.Media.SoundPlayer player = new System.Media.SoundPlayer("alarm.wav");
player.PlaySync();
I was able to play a sound by launching a System.Diagnostics.Process and then using the OS X command-line tool afplay. Unfortunately, this command opens in a new terminal window, which is quite annoying in a GUI app.
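For reference, that workaround looked roughly like this (a sketch; the ProcessStartInfo settings are my assumptions, and UseShellExecute = false may keep it from spawning a visible shell):
var psi = new System.Diagnostics.ProcessStartInfo("afplay", "alarm.wav")
{
    UseShellExecute = false, // launch afplay directly instead of through the shell
    CreateNoWindow = true
};
System.Diagnostics.Process.Start(psi);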
I realize there are CoreAudio bindings in Mono. But I have not figured out how to use them to play a sound.
You're using the wrong NSSound constructor, is all.
new MonoMac.AppKit.NSSound("alarm.wav") expects an NSData (implicitly cast from the string); you want new NSSound(string, bool). You probably want to pass false for the second parameter.
I threw together a quick test project (based on the default MonoMac project) to confirm this works:
public override void FinishedLaunching (NSObject notification)
{
    mainWindowController = new MainWindowController ();
    mainWindowController.Window.MakeKeyAndOrderFront (this);
    // Only lines added
    var sound = new NSSound("/Users/kevinmontrose/Desktop/bear_growl_y.wav", byRef: false);
    sound.Play();
}