I have a WPF application that broadcasts video using Microsoft.Expression.Encoder on .NET Framework 4.0, but there is a delay of about 15 seconds while broadcasting. Is there any way to reduce this delay?
Below is the code:
using Microsoft.Expression.Encoder.Live;
using Microsoft.Expression.Encoder;

private void button1_Click(object sender, RoutedEventArgs e)
{
    try
    {
        EncoderDevice video = null;
        EncoderDevice audio = null;
        GetSelectedVideoAndAudioDevices(out video, out audio);
        StopJob();
        if (video == null)
        {
            return;
        }
        _job = new LiveJob();
        if (video != null && audio != null)
        {
            _deviceSource = null;
            _deviceSource = _job.AddDeviceSource(video, audio);
            _job.ActivateSource(_deviceSource);
            // Finds and applies a smooth streaming preset
            //_job.ApplyPreset(LivePresets.VC1HighSpeedBroadband4x3);
            // Creates the publishing format for the job
            PullBroadcastPublishFormat format = new PullBroadcastPublishFormat();
            format.BroadcastPort = 9090;
            format.MaximumNumberOfConnections = 50;
            // Adds the publishing format to the job
            _job.PublishFormats.Add(format);
            // Starts encoding
            _job.StartEncoding();
        }
        //webCamCtrl.StartCapture();
    }
    catch (Exception ex)
    {
        WriteLogFile(this.GetType().Name, "button1_Click", ex.Message);
    }
}
I am using a MediaElement to show the webcam feed on both my server and client systems.
On the client side:
try
{
    theMainWindow.getServerIPAddress();
    IP = theMainWindow.machineIP;
    MediaElement1.Source = new Uri("http://" + IP + ":9090/");
}
catch (Exception)
{
    // Connection errors are silently ignored here.
}
You could probably use IIS Smooth Streaming to start playback with lower-bitrate material and gradually increase it as the client buffer fills up. Silverlight has built-in support for Smooth Streaming, and the same could be implemented by hand in WPF (at least theoretically).
Is there anything in particular stopping you from using Silverlight on the client side?
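If Silverlight is an option, playing a Smooth Streaming publishing point is nearly a one-liner with the Smooth Streaming Client SDK's SmoothStreamingMediaElement. A rough sketch (the manifest URL is a placeholder for your own publishing point):

// Sketch only: requires the IIS Smooth Streaming Client SDK
// (Microsoft.Web.Media.SmoothStreaming). The manifest URL is a placeholder.
var player = new SmoothStreamingMediaElement();
player.AutoPlay = true;
player.SmoothStreamingSource = new Uri("http://server/LiveStream.isml/Manifest");
LayoutRoot.Children.Add(player); // LayoutRoot: the page's root panel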
Related
It's been almost 4 years since I've programmed in C#, so I'm really rusty. I was given a project to create a program that can read the vibration from an external vibration sensor connected via a 2x micro-USB cable. I am able to obtain the X and Y vibration values from the external device; however, I now need to plot the raw data in a graph showing amplitude vs. time. I also need to be able to take these plotted points and export them to Excel or some other form of storage for later analysis. Can you please help or point me in the right direction? Thanks.
Code example:
using Hardware.PR49;
using System;
using System.Collections.Generic;
using System.Threading;

namespace PR49SensorExample
{
    class Program
    {
        static void Main(string[] args)
        {
            // Look for a connected PR49 sensor on the serial ports.
            var task = PR49Detection.GetSerialPortSensorInfos();
            if (!task.Wait(5000))
            {
                Console.WriteLine("Searching PR49 timed out.");
                return;
            }
            var all49 = task.Result;
            if (all49.Count == 0)
            {
                Console.WriteLine("No PR49 connected.");
                return;
            }

            var port = all49[0].Port;
            var sensor = new SerialDevice49(port);
            var sources = sensor.SignalSources;
            var xAxisSource = sources[0];
            var yAxisSource = sources[1];
            var xData = new List<double>();
            var yData = new List<double>();

            try
            {
                sensor.Connect();
                sensor.StartDAQ();
            }
            catch
            {
                Console.WriteLine("Failed to connect or start sensor.");
                return;
            }

            // Poll the decoded samples until 20,000 points are collected.
            while (xData.Count < 20000)
            {
                xData.AddRange(xAxisSource.ReadDecodedData());
                yData.AddRange(yAxisSource.ReadDecodedData());
                Console.WriteLine($"data points: X: {xData.Count}, Y: {yData.Count}");
                Thread.Sleep(200);
            }

            try
            {
                sensor.StopDAQ();
                sensor.Disconnect();
            }
            catch
            {
                Console.WriteLine("Failed to stop or disconnect sensor.");
                return;
            }

            Console.WriteLine("\nSensor stopped.");
            Console.ReadKey();
        }
    }
}
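On the export side, the simplest route into Excel is to write the collected points out as CSV, which Excel opens directly. A minimal sketch, assuming the xData/yData lists from the code above and a fixed sample rate (SAMPLE_RATE_HZ is an assumption; substitute the sensor's actual rate):

// Sketch: export the collected samples as a CSV file that Excel can open.
// SAMPLE_RATE_HZ is an assumed value; use the sensor's real sample rate.
const double SAMPLE_RATE_HZ = 1000.0;
int count = Math.Min(xData.Count, yData.Count);
using (var writer = new System.IO.StreamWriter("vibration.csv"))
{
    writer.WriteLine("Time (s),X Amplitude,Y Amplitude");
    for (int i = 0; i < count; i++)
    {
        writer.WriteLine($"{i / SAMPLE_RATE_HZ},{xData[i]},{yData[i]}");
    }
}

For the amplitude-vs-time plot itself, a charting library such as OxyPlot or the built-in WinForms Chart control can be fed from the same two lists.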
I am trying to transfer a file to my iPhone using 32feet Bluetooth, but I cannot seem to get past the ObexWebResponse.
I have read many posts on this, but none of the solutions seem to work for me.
The error I get is:
// Connect failed
// The requested address is not valid in its context "address:Guid"
private BluetoothClient _bluetoothClient;
private BluetoothComponent _bluetoothComponent;
private List<BluetoothDeviceInfo> _inRangeBluetoothDevices;
private BluetoothDeviceInfo _hlkBoardDevice;
private EventHandler<BluetoothWin32AuthenticationEventArgs> _bluetoothAuthenticatorHandler;
private BluetoothWin32Authentication _bluetoothAuthenticator;

public BTooth() {
    _bluetoothClient = new BluetoothClient();
    _bluetoothComponent = new BluetoothComponent(_bluetoothClient);
    _inRangeBluetoothDevices = new List<BluetoothDeviceInfo>();
    _bluetoothAuthenticatorHandler = new EventHandler<BluetoothWin32AuthenticationEventArgs>(_bluetoothAuthenticator_HandlePairingRequest);
    _bluetoothAuthenticator = new BluetoothWin32Authentication(_bluetoothAuthenticatorHandler);
    _bluetoothComponent.DiscoverDevicesProgress += _bluetoothComponent_DiscoverDevicesProgress;
    _bluetoothComponent.DiscoverDevicesComplete += _bluetoothComponent_DiscoverDevicesComplete;
    ConnectAsync();
}

public void ConnectAsync() {
    _inRangeBluetoothDevices.Clear();
    _hlkBoardDevice = null;
    _bluetoothComponent.DiscoverDevicesAsync(255, true, true, true, false, null);
}

private void PairWithBoard() {
    Console.WriteLine("Pairing...");
    bool pairResult = BluetoothSecurity.PairRequest(_hlkBoardDevice.DeviceAddress, null);
    if (pairResult) {
        Console.WriteLine("Success");
        Console.WriteLine($"Authenticated equals {_hlkBoardDevice.Authenticated}");
    } else {
        Console.WriteLine("Fail"); // Instantly fails
    }
}

private void _bluetoothComponent_DiscoverDevicesProgress(object sender, DiscoverDevicesEventArgs e) {
    _inRangeBluetoothDevices.AddRange(e.Devices);
}

private void _bluetoothComponent_DiscoverDevicesComplete(object sender, DiscoverDevicesEventArgs e) {
    for (int i = 0; i < _inRangeBluetoothDevices.Count; ++i) {
        if (_inRangeBluetoothDevices[i].DeviceName == "Uranus") {
            _hlkBoardDevice = _inRangeBluetoothDevices[i];
            PairWithBoard();
            TransferFile();
            return;
        }
    }
    // no devices found
}

private void _bluetoothAuthenticator_HandlePairingRequest(object sender, BluetoothWin32AuthenticationEventArgs e) {
    e.Confirm = true; // Never reach this line
}

// not working
// transfers a file to the phone
public void TransferFile() {
    string file = "E:\\test.txt",
           filename = System.IO.Path.GetFileName(file);
    string deviceAddr = _hlkBoardDevice.DeviceAddress.ToString();
    BluetoothAddress addr = BluetoothAddress.Parse(deviceAddr);
    _bluetoothClient.Connect(BluetoothAddress.Parse(deviceAddr), BluetoothService.SerialPort);
    Uri u = new Uri($"obex://{deviceAddr}/{file}");
    ObexWebRequest owr = new ObexWebRequest(u);
    owr.ReadFile(file);
    // error:
    // Connect failed
    // The requested address is not valid in its context ...
    var response = (ObexWebResponse)owr.GetResponse();
    Console.WriteLine("Response Code: {0} (0x{0:X})", response.StatusCode);
    response.Close();
}
The pairing and authentication work just fine, and I can get BluetoothService.Handsfree to make a call for me, but transferring the file fails. Not knowing what the actual error is, I tried almost every service available, with no luck.
Can you help me figure out what is going on? This is my first attempt at working with Bluetooth services, so I still have a ton to learn.
Is it possible to transfer a file from iPhone to Windows desktop via Bluetooth?
However, in case you need to transfer media files (images, videos, etc.) from an Android device, you can use the ObexListener class provided by the 32feet library for this purpose, and then simply call the _obexListener.GetContext() method, which will block and wait for incoming connections.
Once a new connection is received, you can save the received file to local storage, as shown in the example below:
ObexListener _listener = new ObexListener();
_listener.Start();
// This method will block and wait for incoming connections
ObexListenerContext _context = _listener.GetContext();
// Once a new connection is received, you can save the file to local storage
_context.Request.WriteFile(@"c:\sample.jpg");
NOTE: When working with OBEX on Windows, make sure to disable the "Bluetooth OBEX Service" Windows service, so that it does not handle the incoming OBEX requests instead of your application.
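On the sending side, the usual ObexWebRequest pattern puts only the remote file name in the obex:// URI (not the full local path) and skips the separate BluetoothClient.Connect call, since ObexWebRequest opens its own connection. A minimal sketch, assuming device is a discovered BluetoothDeviceInfo (note that stock iOS does not expose an OBEX file-transfer service, so this mainly applies to Android and other OBEX-capable devices):

// Sketch: typical 32feet ObexWebRequest send. The URI path is the remote
// file name only; ReadFile() supplies the local file's content.
string localFile = @"E:\test.txt";
string remoteName = System.IO.Path.GetFileName(localFile);
Uri uri = new Uri("obex://" + device.DeviceAddress + "/" + remoteName);
ObexWebRequest request = new ObexWebRequest(uri);
request.ReadFile(localFile);
using (var response = (ObexWebResponse)request.GetResponse())
{
    Console.WriteLine("Response Code: {0}", response.StatusCode);
}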
I walked away from this for a while and started trying to use Xamarin, but then I had to create a virtual Mac so that I could access the App Store just to load software onto my phone. From there Xamarin 'should' work well, but it's another field with tons more to figure out.
Hello, I am trying to map the system mic's audio to the external sound card's speaker, and the external sound card's mic audio to the system speaker, using the following code:
public void MapForManualCall()
{
    try
    {
        if (db.getResultOnQuery("SELECT [Value] FROM [dbo].[SystemProperties] where property='RecordingEnabled'").Rows[0][0].ToString().Equals("YES"))
        {
            SystemMic = new NAudio.Wave.WaveInEvent();
            SystemMic.DeviceNumber = 0;
            SystemMic.WaveFormat = new NAudio.Wave.WaveFormat(44100, NAudio.Wave.WaveIn.GetCapabilities(SystemMic.DeviceNumber).Channels);

            SoundcardMic = new NAudio.Wave.WaveInEvent();
            SoundcardMic.DeviceNumber = 1;
            SoundcardMic.WaveFormat = new NAudio.Wave.WaveFormat(44100, NAudio.Wave.WaveIn.GetCapabilities(SoundcardMic.DeviceNumber).Channels);

            // Plays the caller's voice (system mic) on the receiver's speaker
            var waveOutReceiver = new NAudio.Wave.WaveOut();
            waveOutReceiver.DeviceNumber = 0;
            NAudio.Wave.WaveInProvider waveInProviderCaller = new NAudio.Wave.WaveInProvider(SystemMic);
            waveOutReceiver.Init(waveInProviderCaller);
            waveOutReceiver.Play();

            // Plays the receiver's voice (sound card mic) on the caller's speaker
            var waveOutCaller = new NAudio.Wave.WaveOut();
            waveOutCaller.DeviceNumber = 1;
            NAudio.Wave.WaveInProvider waveInProviderReceiver = new NAudio.Wave.WaveInProvider(SoundcardMic);
            waveOutCaller.Init(waveInProviderReceiver);
            waveOutCaller.Play();

            // (Optional recording to file, currently commented out:)
            //SoundcardMic.DataAvailable += new EventHandler<NAudio.Wave.WaveInEventArgs>(waveIn_DataAvailable1);
            //writer1 = new NAudio.Wave.WaveFileWriter(outputFilenameReceiver, SoundcardMic.WaveFormat);
            SoundcardMic.StartRecording();
            //SystemMic.DataAvailable += new EventHandler<NAudio.Wave.WaveInEventArgs>(waveIn_DataAvailable);
            //writer = new NAudio.Wave.WaveFileWriter(outputFilenameCaller, SystemMic.WaveFormat);
            SystemMic.StartRecording();
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show("Please check that the headphones and device cable are connected properly!");
    }
}
The code above works, but there is a 3-4 second delay in the mapping. When I do the same thing using the 'Listen' functionality of Windows 7, it works perfectly. I believe it may be an issue with the read/write buffer sizes, but I don't know how to adjust them.
Latency is the issue here: there is latency at both the recording and the playback stage. You will find it very hard to reduce it to small values without using something like ASIO. However, all the NAudio APIs allow you to specify buffer sizes, so you can see how low you can go before you get dropouts.
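For example, WaveInEvent and WaveOut both expose buffer settings you can experiment with. A sketch (the values shown are starting points to tune, not recommendations):

// Sketch: shrinking NAudio buffers trades dropout risk for lower latency.
var mic = new NAudio.Wave.WaveInEvent();
mic.DeviceNumber = 0;
mic.BufferMilliseconds = 50;  // default is 100 ms; lower means less latency
mic.NumberOfBuffers = 2;

var speaker = new NAudio.Wave.WaveOut();
speaker.DeviceNumber = 0;
speaker.DesiredLatency = 100; // default is 300 ms
speaker.NumberOfBuffers = 2;

speaker.Init(new NAudio.Wave.WaveInProvider(mic));
speaker.Play();
mic.StartRecording();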
I am trying to record audio and play it back directly (I want to hear my voice in the headphones without saving it); however, MediaElement and MediaCapture do not seem to work at the same time.
I initialized my MediaCapture like so:
_mediaCaptureManager = new MediaCapture();
var settings = new MediaCaptureInitializationSettings();
settings.StreamingCaptureMode = StreamingCaptureMode.Audio;
settings.MediaCategory = MediaCategory.Other;
settings.AudioProcessing = AudioProcessing.Default;
await _mediaCaptureManager.InitializeAsync(settings);
However, I don't really know how to proceed. I am wondering if one of these approaches could work (I tried to implement them without success, and I have not found examples):
Is there a way to use StartPreviewAsync() to record audio, or does it only work for video? I noticed that I get the following error: "The specified object or value does not exist" while setting my CaptureElement source; it only happens if I write "settings.StreamingCaptureMode = StreamingCaptureMode.Audio;", while everything works with .Video.
How can I record to a stream using StartRecordToStreamAsync()? That is, how do I initialize the IRandomAccessStream and read from it? Can I write to the stream while I keep reading from it?
I read that by changing the AudioCategory of the MediaElement and the MediaCategory of the MediaCapture to Communication, there is a chance it could work. However, while my code works (it just has to record and save to a file) with the previous settings, it doesn't work if I write "settings.MediaCategory = MediaCategory.Communication;" instead of "settings.MediaCategory = MediaCategory.Other;". Can you tell me why?
Here is my current program, which just records, saves, and plays:
private async void CaptureAudio()
{
    try
    {
        _recordStorageFile = await KnownFolders.VideosLibrary.CreateFileAsync(fileName, CreationCollisionOption.GenerateUniqueName);
        MediaEncodingProfile recordProfile = MediaEncodingProfile.CreateWav(AudioEncodingQuality.Auto);
        await _mediaCaptureManager.StartRecordToStorageFileAsync(recordProfile, this._recordStorageFile);
        _recording = true;
    }
    catch (Exception e)
    {
        Debug.WriteLine("Failed to capture audio: " + e.Message);
    }
}

private async void StopCapture()
{
    if (_recording)
    {
        await _mediaCaptureManager.StopRecordAsync();
        _recording = false;
    }
}

private async void PlayRecordedCapture()
{
    if (!_recording)
    {
        var stream = await _recordStorageFile.OpenAsync(FileAccessMode.Read);
        playbackElement1.AutoPlay = true;
        playbackElement1.SetSource(stream, _recordStorageFile.FileType);
        playbackElement1.Play();
    }
}
If you have any suggestions, I'll be grateful.
Have a good day.
Would you consider targeting Windows 10 instead? The new AudioGraph API allows you to do just this, and Scenario 2 (Device Capture) in the SDK sample demonstrates it well.
First, the sample populates all output devices into a list:
private async Task PopulateDeviceList()
{
    outputDevicesListBox.Items.Clear();
    outputDevices = await DeviceInformation.FindAllAsync(MediaDevice.GetAudioRenderSelector());
    outputDevicesListBox.Items.Add("-- Pick output device --");
    foreach (var device in outputDevices)
    {
        outputDevicesListBox.Items.Add(device.Name);
    }
}
Then it gets to building the AudioGraph:
AudioGraphSettings settings = new AudioGraphSettings(AudioRenderCategory.Media);
settings.QuantumSizeSelectionMode = QuantumSizeSelectionMode.LowestLatency;

// Use the selected device from the outputDevicesListBox to preview the recording
settings.PrimaryRenderDevice = outputDevices[outputDevicesListBox.SelectedIndex - 1];

CreateAudioGraphResult result = await AudioGraph.CreateAsync(settings);
if (result.Status != AudioGraphCreationStatus.Success)
{
    // TODO: Cannot create graph, propagate error message
    return;
}
AudioGraph graph = result.Graph;

// Create a device output node
CreateAudioDeviceOutputNodeResult deviceOutputNodeResult = await graph.CreateDeviceOutputNodeAsync();
if (deviceOutputNodeResult.Status != AudioDeviceNodeCreationStatus.Success)
{
    // TODO: Cannot create device output node, propagate error message
    return;
}
deviceOutputNode = deviceOutputNodeResult.DeviceOutputNode;

// Create a device input node using the default audio input device
CreateAudioDeviceInputNodeResult deviceInputNodeResult = await graph.CreateDeviceInputNodeAsync(MediaCategory.Other);
if (deviceInputNodeResult.Status != AudioDeviceNodeCreationStatus.Success)
{
    // TODO: Cannot create device input node, propagate error message
    return;
}
deviceInputNode = deviceInputNodeResult.DeviceInputNode;

// Because we are using lowest latency setting, we need to handle device disconnection errors
graph.UnrecoverableErrorOccurred += Graph_UnrecoverableErrorOccurred;

// Start setting up the output file
FileSavePicker saveFilePicker = new FileSavePicker();
saveFilePicker.FileTypeChoices.Add("Pulse Code Modulation", new List<string>() { ".wav" });
saveFilePicker.FileTypeChoices.Add("Windows Media Audio", new List<string>() { ".wma" });
saveFilePicker.FileTypeChoices.Add("MPEG Audio Layer-3", new List<string>() { ".mp3" });
saveFilePicker.SuggestedFileName = "New Audio Track";
StorageFile file = await saveFilePicker.PickSaveFileAsync();

// File can be null if cancel is hit in the file picker
if (file == null)
{
    return;
}

MediaEncodingProfile fileProfile = CreateMediaEncodingProfile(file);

// Operate node at the graph format, but save file at the specified format
CreateAudioFileOutputNodeResult fileOutputNodeResult = await graph.CreateFileOutputNodeAsync(file, fileProfile);
if (fileOutputNodeResult.Status != AudioFileNodeCreationStatus.Success)
{
    // TODO: FileOutputNode creation failed, propagate error message
    return;
}
fileOutputNode = fileOutputNodeResult.FileOutputNode;

// Connect the input node to both output nodes
deviceInputNode.AddOutgoingConnection(fileOutputNode);
deviceInputNode.AddOutgoingConnection(deviceOutputNode);
Once all of that is done, you can record to a file while at the same time playing the recorded audio like so:
private async Task ToggleRecordStop()
{
    if (recordStopButton.Content.Equals("Record"))
    {
        graph.Start();
        recordStopButton.Content = "Stop";
    }
    else if (recordStopButton.Content.Equals("Stop"))
    {
        // Good idea to stop the graph to avoid data loss
        graph.Stop();
        TranscodeFailureReason finalizeResult = await fileOutputNode.FinalizeAsync();
        if (finalizeResult != TranscodeFailureReason.None)
        {
            // TODO: Finalization of file failed. Check result code to see why, propagate error message
            return;
        }
        recordStopButton.Content = "Record";
    }
}
I am developing a voice recorder app for Windows Phone 8.1 that stores the recordings in local storage and in a cloud storage service.
Everything is almost done, except that being able to pause an ongoing recording is a strong requirement for this app and I have to get it done.
Now, since PauseRecordAsync() and ResumeRecordAsync() are not available for Windows Phone 8.1 in the MediaCapture class (they will only become available in Windows 10), I had to make a workaround: every time the pause button is pressed, an audio chunk is saved in the temp folder and that file is added to an array. When the stop button is pressed, the last chunk is added to the array, the following concatenation function is called, and a final audio temp file is created:
public async Task<IStorageFile> ConcatenateAudio([ReadOnlyArray]IStorageFile[] audioFiles, IStorageFolder outputFolder, string outputfileName)
{
    IStorageFile _OutputFile = await outputFolder.CreateFileAsync(outputfileName, CreationCollisionOption.ReplaceExisting);
    MediaComposition _MediaComposition = new MediaComposition();
    MediaEncodingProfile _MediaEncodingProfile = MediaEncodingProfile.CreateM4a(AudioEncodingQuality.High);
    foreach (IStorageFile _AudioFile in audioFiles)
    {
        if (_AudioFile != null)
        {
            BackgroundAudioTrack _BackgroundAudioTrack = await BackgroundAudioTrack.CreateFromFileAsync(_AudioFile);
            // A dummy black video clip is created with the duration of the current audio chunk.
            // Without this, the duration of the MediaComposition object is always 0.
            // It's a messy workaround, but it gets the job done.
            // Windows 10 will directly support PauseRecordAsync() and ResumeRecordAsync() for MediaCapture, though. Yay! :D
            MediaClip _MediaClip = MediaClip.CreateFromColor(Windows.UI.Colors.Black, _BackgroundAudioTrack.TrimmedDuration);
            _MediaClip.Volume = 0;
            _BackgroundAudioTrack.Volume = 1;
            _MediaComposition.Clips.Add(_MediaClip);
            _MediaComposition.BackgroundAudioTracks.Add(_BackgroundAudioTrack);
        }
    }
    TranscodeFailureReason _TranscodeFailureReason = await _MediaComposition.RenderToFileAsync(_OutputFile, MediaTrimmingPreference.Fast, _MediaEncodingProfile);
    if (_TranscodeFailureReason != TranscodeFailureReason.None)
    {
        throw new Exception("Audio Concatenation Failed: " + _TranscodeFailureReason.ToString());
    }
    return _OutputFile;
}
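For context, the pause-button side of this workaround might look roughly like the following sketch; _mediaCapture, _currentChunkFile, and the chunk naming are assumptions based on the description above:

// Sketch of the pause/resume handlers described above. _mediaCapture,
// _currentChunkFile and _chunks are assumed fields, not code from the app.
private List<StorageFile> _chunks = new List<StorageFile>();

private async Task PauseAsync()
{
    await _mediaCapture.StopRecordAsync();  // finalizes the current chunk file
    _chunks.Add(_currentChunkFile);
}

private async Task ResumeAsync()
{
    _currentChunkFile = await ApplicationData.Current.TemporaryFolder
        .CreateFileAsync("chunk.m4a", CreationCollisionOption.GenerateUniqueName);
    await _mediaCapture.StartRecordToStorageFileAsync(
        MediaEncodingProfile.CreateM4a(AudioEncodingQuality.High),
        _currentChunkFile);
}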
The problem is that when I play the final file, all the audio chunks play from the beginning of the file at the same time, instead of the second one starting right after the first one ends, and so on. They all play on top of one another. The length of the file, on the other hand, is correct, and once all the chunks have finished playing there is total silence.
I figured it out: I had to manually set the Delay on each BackgroundAudioTrack.
Here is the working code:
public async Task<IStorageFile> ConcatenateAudio([ReadOnlyArray]IStorageFile[] audioFiles, IStorageFolder outputFolder, string outputfileName)
{
    IStorageFile _OutputFile = await outputFolder.CreateFileAsync(outputfileName, CreationCollisionOption.ReplaceExisting);
    MediaComposition _MediaComposition = new MediaComposition();
    MediaEncodingProfile _MediaEncodingProfile = MediaEncodingProfile.CreateM4a(AudioEncodingQuality.High);
    TimeSpan totalDelay = TimeSpan.Zero;
    foreach (IStorageFile _AudioFile in audioFiles)
    {
        if (_AudioFile != null)
        {
            BackgroundAudioTrack _BackgroundAudioTrack = await BackgroundAudioTrack.CreateFromFileAsync(_AudioFile);
            // A dummy black video clip is created with the duration of the current audio chunk.
            // Without this, the duration of the MediaComposition object is always 0.
            // It's a messy workaround, but it gets the job done.
            // Windows 10 will directly support PauseRecordAsync() and ResumeRecordAsync() for MediaCapture, though. Yay! :D
            MediaClip _MediaClip = MediaClip.CreateFromColor(Windows.UI.Colors.Black, _BackgroundAudioTrack.TrimmedDuration);
            _MediaClip.Volume = 0;
            _BackgroundAudioTrack.Volume = 1;
            _MediaComposition.Clips.Add(_MediaClip);
            _MediaComposition.BackgroundAudioTracks.Add(_BackgroundAudioTrack);
            // Offset each track by the accumulated duration of the previous
            // chunks so they play back-to-back instead of simultaneously.
            _BackgroundAudioTrack.Delay = totalDelay;
            totalDelay += _BackgroundAudioTrack.TrimmedDuration;
        }
    }
    TranscodeFailureReason _TranscodeFailureReason = await _MediaComposition.RenderToFileAsync(_OutputFile, MediaTrimmingPreference.Fast, _MediaEncodingProfile);
    if (_TranscodeFailureReason != TranscodeFailureReason.None)
    {
        throw new Exception("Audio Concatenation Failed: " + _TranscodeFailureReason.ToString());
    }
    return _OutputFile;
}
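With the chunks accumulated during recording, calling the function might then look like this (a sketch; the _chunks list and output file name are placeholders):

// Sketch: stitch the recorded chunks into one final file on Stop.
IStorageFile finalFile = await ConcatenateAudio(
    _chunks.ToArray(),
    ApplicationData.Current.TemporaryFolder,
    "recording.m4a");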