I'm dealing with sound software, and I'm already setting the default audio device with nircmd.exe perfectly, but I also need to adjust the balance levels, either with a similar tool (nircmd.exe doesn't handle that) or programmatically in C#.
I've seen that NAudio has a read-only property that reads the values:
defaultDevice.AudioMeterInformation.PeakValues[0]; //i.e. left channel
But there's no setter for that.
Is there any familiar way to achieve this?
Any help is appreciated, thanks.
Ok, found the answer.
using NAudio.CoreAudioApi;

// Grab the default communications capture device and set each channel's
// level independently (VolumeLevel is expressed in dB).
MMDeviceEnumerator devEnum = new MMDeviceEnumerator();
MMDevice defaultDevice = devEnum.GetDefaultAudioEndpoint(DataFlow.Capture, Role.Communications);
defaultDevice.AudioEndpointVolume.Channels[0].VolumeLevel = 10; // left channel
defaultDevice.AudioEndpointVolume.Channels[1].VolumeLevel = 6;  // right channel
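Note that VolumeLevel is in decibels, and values outside the range the device supports can fail; the scalar variant is often easier to work with for balance. A minimal sketch of the same adjustment using VolumeLevelScalar (0.0 to 1.0):

using NAudio.CoreAudioApi;

// Sketch: pan the balance by attenuating one channel, using the 0.0-1.0
// scalar range instead of guessing the device's valid dB range.
var devEnum = new MMDeviceEnumerator();
var device = devEnum.GetDefaultAudioEndpoint(DataFlow.Capture, Role.Communications);
device.AudioEndpointVolume.Channels[0].VolumeLevelScalar = 1.0f; // left at 100%
device.AudioEndpointVolume.Channels[1].VolumeLevelScalar = 0.5f; // right at 50%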
I want to choose a specific sample rate for my audio card programmatically in C# with NAudio.
My output is a WasapiOut in exclusive mode.
I've already tried a lot of things, but nothing worked, and after searching everywhere the only thing I found was this: How to Change Speaker Configuration in Windows in C#?
But no real solution was found there.
Here's my WasapiOut :
var enumerator = new MMDeviceEnumerator();
MMDevice device = enumerator.EnumerateAudioEndPoints(DataFlow.Render, DeviceState.Active)
    .FirstOrDefault(d => d.DeviceFriendlyName == name);
outputDevice = new WasapiOut(device, AudioClientShareMode.Exclusive, false, 200);
What I don't understand is that here :
https://github.com/naudio/NAudio/blob/master/Docs/WasapiOut.md
It says that :
"If you choose AudioClientShareMode.Exclusive then you are requesting exclusive access to the sound card. The benefits of this approach are that you can specify the exact sample rate you want"
but I can't find anywhere how to actually specify that sample rate.
If someone here knows the answer, that would be great, thanks!
Edit :
I think I found a way by doing this :
// Create a mixer at the sample rate chosen in the UI (stereo, IEEE float)
var waveFormat5 = WaveFormat.CreateIeeeFloatWaveFormat(Int32.Parse(comboBox1.Text), 2);
var test2 = new MixingSampleProvider(waveFormat5);
var audioFile = new AudioFileReader("test.wav");
var input = audioFile;
test2.ReadFully = true; // keep the mixer producing audio even when inputs end
test2.AddMixerInput(new AutoDisposeFileReader(input, waveFormat5));
outputDevice.Init(test2);
With "outputDevice" as my WasapiOut.
So I set the outputDevice sample rate to the one I chose via the MixingSampleProvider, and then I send an audio file to that mixer. Is that the right way to do it?
Because my audio file's sample rate is 44100, and I also set my outputDevice sample rate to 44100, but when I call outputDevice.Play(), the sound that I hear is faster than the original.
Once you've created an instance of WasapiOut, you call Init, passing the audio you want to play. It will try to use the sample rate (and WaveFormat) of that audio directly, assuming the soundcard supports it.
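So to pick the rate yourself, hand Init audio that is already in the desired format. A minimal sketch, assuming the device from the question's snippet; the 48 kHz target is just an example, and WdlResamplingSampleProvider / SampleToWaveProvider16 are stock NAudio types:

using NAudio.CoreAudioApi;
using NAudio.Wave;
using NAudio.Wave.SampleProviders;

int desiredRate = 48000; // hypothetical target rate
var audioFile = new AudioFileReader("test.wav"); // decodes to IEEE float
// Resample to the rate we want the card opened at
var resampled = new WdlResamplingSampleProvider(audioFile, desiredRate);
// Exclusive-mode hardware often rejects IEEE float, so convert to 16-bit PCM
var pcm16 = new SampleToWaveProvider16(resampled);
var outputDevice = new WasapiOut(device, AudioClientShareMode.Exclusive, false, 200);
outputDevice.Init(pcm16); // WasapiOut now asks the card for 48 kHz, 16-bit
outputDevice.Play();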
I solved my problem: I used an AudioPlaybackEngine (https://markheath.net/post/fire-and-forget-audio-playback-with) with a MixingSampleProvider, and a try/catch to handle the "the inputs are not at the same sample rate" error.
This program is an audio visualizer for an RGB keyboard that listens to Windows' default audio device. My audio setup is a bit more involved, and I use far more than just the default audio device. For instance, when I play music from Winamp it goes through the device Auxillary 1 (Synchronous Audio Router) instead of Desktop Input (Synchronous Audio Router), which I have set as default. I'd like to be able to change the device that the program listens to for the visualization.
I found where the audio device is declared in the source, lines 32-36 in CSCoreAudioInput.cs:
public void Initialize()
{
    // Ask Windows for the default render endpoint, its format, and its volume interface
    MMDevice captureDevice = MMDeviceEnumerator.DefaultAudioEndpoint(DataFlow.Render, Role.Console);
    WaveFormat deviceFormat = captureDevice.DeviceFormat;
    _audioEndpointVolume = AudioEndpointVolume.FromDevice(captureDevice);
}
The way I understand it from the documentation, MMDeviceEnumerator.DefaultAudioEndpoint(DataFlow.Render, Role.Console) is where Windows gives the application my default IMMEndpoint, "Desktop Input".
How would I go about changing DefaultAudioEndpoint?
Further reading shows a few ways to get an IMMDevice, DefaultAudioEndpoint being one of them. It seems to me that I'd have to enumerate the devices and then single out Auxillary 1 (Synchronous Audio Router) using PKEY_Device_FriendlyName. That's a bit much for me, as I have little to no C# experience. Is there an easier way to choose a different endpoint? Am I on the right track, or am I missing the mark completely?
Also, what is the difference between MMDevice and IMMDevice? The source only seems to use MMDevice while all the Microsoft documentation references IMMDevice.
Thanks.
I DID IT!
I've found why the program uses MMDevice rather than IMMDevice: the developer chose to use the CSCore library rather than Windows' own Core Audio API.
From continued reading of the CSCore MMDeviceEnumerator documentation, it looks like I'll have to make a separate program that outputs all endpoints and their respective endpoint ID strings. Then I can substitute the DefaultAudioEndpoint method with the GetDevice(String id) method, where String id is the ID of whichever endpoint I choose from the separate program.
To find the endpoint I wanted, I wrote this short program to print all the info I needed:
static void Main(string[] args)
{
    MMDeviceEnumerator enumerator = new MMDeviceEnumerator();
    MMDeviceCollection collection = enumerator.EnumAudioEndpoints(DataFlow.Render, DeviceState.Active);

    Console.WriteLine($"\nNumber of active Devices: {collection.GetCount()}");
    int i = 0;
    foreach (MMDevice device in collection)
    {
        Console.WriteLine($"\n{i} Friendly name: {device.FriendlyName}");
        Console.WriteLine($"Endpoint ID: {device.DeviceID}");
        i++;
    }
    Console.ReadKey();
}
This showed me that the endpoint I wanted was the third item on my list (index 2 in an array), and instead of using GetDevice(String id) I used ItemAt(int deviceIndex):
MMDeviceEnumerator enumerator = new MMDeviceEnumerator();
MMDeviceCollection collection = enumerator.EnumAudioEndpoints(DataFlow.Render, DeviceState.Active);
MMDevice captureDevice = collection.ItemAt(2);
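One caveat: a hard-coded index can shift as devices are added or removed, so matching on FriendlyName (or on the DeviceID string the listing program printed) may be more robust. A sketch, assuming CSCore's collection is LINQ-enumerable (the foreach above suggests it is) and using my device name as the example:

using System.Linq;
using CSCore.CoreAudioAPI;

var enumerator = new MMDeviceEnumerator();
var collection = enumerator.EnumAudioEndpoints(DataFlow.Render, DeviceState.Active);
// Pick the endpoint by (partial) friendly name instead of by position
MMDevice captureDevice = collection
    .FirstOrDefault(d => d.FriendlyName.Contains("Auxillary 1"));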
However, in this case the program was not using captureDevice to bring in the audio data. These were the magic lines:
_capture = new WasapiLoopbackCapture(100, new WaveFormat(deviceFormat.SampleRate, deviceFormat.BitsPerSample, i));
_capture.Initialize();
I found that WasapiLoopbackCapture uses Windows' default device unless changed, and the code was using DefaultAudioEndpoint to get the properties of the default device. So I added
_capture.Device = captureDevice;
//before
_capture.Initialize();
And now the program properly pulls the audio data off of my non-default audio endpoint.
I was asked to solve a similar type of problem this week. Although there are a few libraries that do this, I was specifically asked to make it usable by "non-ish" programmers, so I developed it in PowerShell.
PowerShell default audio device changer - GitHub
Maybe you can alter it to your needs.
The question says it all.
I would like to create the simplest possible VU meter example using the new UWP AudioGraph API, but so far I haven't found any good examples.
There are a couple of questions in this:
I am using the "normal" code to enumerate my microphones:
var deviceInformation = await DeviceInformation.FindAllAsync(MediaDevice.GetAudioCaptureSelector());
However, when I create an AudioGraphSettings object, I cannot find a property to pass the device found. There is a property named DesiredRenderDeviceAudioProcessing; however, I'm not sure I understand its purpose.
Following the best examples I've found, I proceed to create a graph, and use that to get an InputNode as such:
var creationResult = await AudioGraph.CreateAsync(settings);
if (creationResult.Status != AudioGraphCreationStatus.Success)
    return;

_graph = creationResult.Graph;

var inputNodeCreationResult = await _graph.CreateDeviceInputNodeAsync(Windows.Media.Capture.MediaCategory.Media);
if (inputNodeCreationResult.Status != AudioDeviceNodeCreationStatus.Success)
{
    DestroyGraph();
    return;
}

_inputNode = inputNodeCreationResult.DeviceInputNode;
From here on, I'm running blind, having found no good tutorials, examples or documentation to help me.
I am only interested in the sound level (dB), not the waveform. Can anyone help me complete this, or point me to some decent documentation?
"Scenario 2: Device Capture" from the Windows Universal Samples - Audio Creation project should provide some guidance. From your code it looks like you're on track. Might just be a case of adding the following:
_frameOutputNode = _graph.CreateFrameOutputNode(); // taps the graph's mixed audio
_frameOutputNode.Start();
_graph.QuantumProcessed += Graph_QuantumProcessed; // fires once per processed quantum
_graph.Start();
And using the Graph_QuantumProcessed callback to analyse the AudioFrame provided by a call to _frameOutputNode.GetFrame().
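For the level itself, a hypothetical sketch of that callback: lock the frame's buffer, read the 32-bit float samples through the IMemoryBufferByteAccess COM interface (this requires enabling unsafe code in the project), and reduce them to an RMS value in dB. The interface declaration and GUID come from the Windows docs; everything else is just one way to do it:

using System;
using System.Runtime.InteropServices;
using Windows.Foundation;
using Windows.Media;
using Windows.Media.Audio;

[ComImport]
[Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
unsafe interface IMemoryBufferByteAccess
{
    void GetBuffer(out byte* buffer, out uint capacity);
}

private unsafe void Graph_QuantumProcessed(AudioGraph sender, object args)
{
    using (AudioFrame frame = _frameOutputNode.GetFrame())
    using (AudioBuffer buffer = frame.LockBuffer(AudioBufferAccessMode.Read))
    using (IMemoryBufferReference reference = buffer.CreateReference())
    {
        byte* data;
        uint capacity;
        ((IMemoryBufferByteAccess)reference).GetBuffer(out data, out capacity);

        float* samples = (float*)data;            // AudioGraph delivers 32-bit float
        int count = (int)capacity / sizeof(float);
        if (count == 0) return;

        double sumOfSquares = 0;
        for (int n = 0; n < count; n++)
            sumOfSquares += samples[n] * samples[n];

        double rms = Math.Sqrt(sumOfSquares / count);
        double db = 20 * Math.Log10(Math.Max(rms, 1e-9)); // clamp to avoid -Infinity
        // db now drives the VU meter (0 dBFS = full scale, negative below that)
    }
}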
Hope it helps.
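One more note, on the device-selection part of the question: rather than a property on AudioGraphSettings, the input device appears to be passed to CreateDeviceInputNodeAsync itself, via the overload that also takes a DeviceInformation. A sketch using the enumeration from the question (picking the first microphone found, in the graph's own format):

var devices = await DeviceInformation.FindAllAsync(MediaDevice.GetAudioCaptureSelector());
var inputNodeCreationResult = await _graph.CreateDeviceInputNodeAsync(
    Windows.Media.Capture.MediaCategory.Media,
    _graph.EncodingProperties, // capture in the graph's encoding
    devices[0]);               // the specific microphone to listen to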
I want to control the microphone volume from inside my application. I've searched and tried many different solutions, but haven't had any success; all of them are a bit confusing or incomplete.
I've already spent lots of time on this, so I need your help, guys. I want to control the microphone level using C# from my application:
get MicrophoneLevel
set MicrophoneLevel
I hope you just need to adjust the volume level for your own application. You can perhaps do that with NAudio:
UnsignedMixerControl volumeControl = null;
int waveInDeviceNumber = 0; // first wave-in (recording) device

// Walk the device's mixer line looking for its volume control
var mixerLine = new MixerLine((IntPtr)waveInDeviceNumber, 0, MixerFlags.WaveIn);
foreach (var control in mixerLine.Controls)
{
    if (control.ControlType == MixerControlType.Volume)
    {
        volumeControl = control as UnsignedMixerControl;
        break;
    }
}

if (volumeControl != null)
    volumeControl.Percent = 30; // you are setting the volume here, as a percentage
For more information, refer to the article .NET Voice Recorder.
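The mixer API above is the legacy waveIn route; on Vista and later, NAudio's Core Audio wrappers expose the endpoint volume directly, which maps neatly onto the get/set pair in the question. A minimal sketch, assuming the default communications microphone is the one to control:

using NAudio.CoreAudioApi;

var enumerator = new MMDeviceEnumerator();
var mic = enumerator.GetDefaultAudioEndpoint(DataFlow.Capture, Role.Communications);

// get MicrophoneLevel (0.0 to 1.0)
float level = mic.AudioEndpointVolume.MasterVolumeLevelScalar;

// set MicrophoneLevel
mic.AudioEndpointVolume.MasterVolumeLevelScalar = 0.3f; // 30%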
I'd like to play a previously recorded *.oni file in C#/WPF. While, with the help of this tutorial, I was able to get the RGB and depth streams to show up on my UI, I don't know how to play an *.oni file.
The OpenNI page mentions that I'd just have to "connect" to the file instead of the device, but I can't find the proper piece of code to do so.
The openni::Device class provides an interface to a single physical hardware device (via a driver). It can also provide an interface to a simulated hardware device via a recorded ONI file taken from a physical device.
If connecting to an ONI file instead of a physical device, it is only required that the ONI recording be available on the system running the application, and that the application have read access to this file.
I also found some clues / discussions, but none of them helped much:
C# problem with .oni player
OpenNI-dev: Not able to play the skeletonRec.oni
EDIT: I found a way to at least get the recording played, using SamplesConfig.xml. I just inserted the following into the <ProductionNodes> section:
<Recording file="\test.oni" playbackSpeed="1.0"/>
Sadly, that recording crashes the program when it's done playing - I'm now looking for a way to loop the recording...
EDIT 2: Just in case anybody is interested, I'm using these lines to set the recording to loop:
ScriptNode scriptNode;
context = Context.CreateFromXmlFile(path + "\\" + configuration, out scriptNode);
Player p = (Player)context.FindExistingNode(NodeType.Player);
if (p != null) p.SetRepeat(true); // Make sure it's really a recording.
If anybody should need the code one day: I managed to load the file and play the recording without the need of a config file:
Context context = new Context();
// Add license
License license = new License();
license.Vendor = "vendor";
license.Key = "key";
context.AddLicense(license);
// Open file
context.OpenFileRecordingEx("record.oni");
// Set to repeat
Player p = (Player)context.FindExistingNode(NodeType.Player);
if (p != null) p.SetRepeat(true);