I am currently developing a C# Windows Forms application. My intention is to create a form that can make calls to other PCs on the same network.
So far I have found solutions that record the audio from my microphone to a file, convert it to bytes, and send it using TCP sockets. My question is: is there a way to convert the audio to bytes and send it through a socket directly, without recording the audio to a file first and then sending it?
Thanks in advance.
Would converting the recording to a MemoryStream be what you're looking for?
If so, you want this: How to record audio using naudio onto byte[] rather than file.
You can then write the stream to a TCP socket. (You could write it directly to a NetworkStream, but I would consider that bad practice.)
It would be wise to write samplerate*3 at a time, just in case of latency issues.
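As a rough sketch of that idea, assuming NAudio and a listener already waiting on the other PC (the address, port, and wave format below are placeholders for illustration):

```csharp
using System;
using System.Net.Sockets;
using NAudio.Wave;

class MicSender
{
    static void Main()
    {
        // Connect to the receiving PC (address and port are placeholders).
        using var client = new TcpClient("192.168.1.50", 5000);
        using NetworkStream net = client.GetStream();

        // Capture raw PCM from the default microphone: 16 kHz, 16-bit, mono.
        using var waveIn = new WaveInEvent { WaveFormat = new WaveFormat(16000, 16, 1) };

        // Each time a buffer of samples is ready, push it straight to the socket --
        // no intermediate file involved.
        waveIn.DataAvailable += (s, e) => net.Write(e.Buffer, 0, e.BytesRecorded);

        waveIn.StartRecording();
        Console.WriteLine("Streaming microphone... press Enter to stop.");
        Console.ReadLine();
        waveIn.StopRecording();
    }
}
```

On the receiving side you would read from the accepted socket into a buffer and feed it to a playback provider, as discussed further down this page.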
I'm new to this forum (as a registered user) and trying to figure out how to properly get an audio stream from an IP camera via the RTSP protocol and transfer it, e.g., directly to a byte array.
It should be real time so I want to avoid creating any audio files or so.
I've found Ozeki SDK but it's shareware.
My project is only for academic purposes.
Of course I'm not looking for an exact solution, just for a suitable library with features to handle this.
Thank you very much for any answer in advance.
I'm developing a C# video streaming application using the NReco library. At the moment I am able to encode audio and video separately and save the data into a queue. I can stream video without audio over UDP and it plays nicely in ffplay; equally, I can stream audio without video over UDP and it also plays nicely. Now I want to merge these two streams into one, stream it over UDP, and have the player play both audio and video. But I have no idea how to do it. I would appreciate it if someone could give me any pointers on this, or any other method to achieve it.
Thank You.
The answer highly depends on the source of the video and audio streams. NReco.VideoConverter is a wrapper around the FFMpeg tool, and it can actually combine video and audio streams (see the filters configuration in the FFMpeg documentation) if either the video or the audio input can be specified as an ffmpeg input source (a UDP stream or a DirectShow input device).
If both the video and audio data are represented by byte streams in your C# code, you cannot pass them together using NReco.VideoConverter (the ConvertLiveMedia method), because it uses stdin to communicate with ffmpeg and only one stream can be passed from C# code.
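To illustrate the first case, here is a rough sketch that shells out to ffmpeg directly (the tool NReco wraps); the UDP addresses and the DirectShow device name are placeholders, and the exact arguments will depend on your encoders and container:

```csharp
using System.Diagnostics;

class MuxStreams
{
    static void Main()
    {
        // Take video from a UDP stream and audio from a DirectShow capture device,
        // mux them into a single MPEG-TS stream and send it back out over UDP.
        var args =
            "-i udp://127.0.0.1:1234 " +                     // video input (placeholder port)
            "-f dshow -i audio=\"Microphone (Realtek)\" " +  // audio input (placeholder device name)
            "-c:v copy -c:a aac " +                          // keep video as-is, encode audio to AAC
            "-f mpegts udp://127.0.0.1:1235";                // combined output (placeholder port)

        var ffmpeg = Process.Start(new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = args,
            UseShellExecute = false
        });

        ffmpeg.WaitForExit();
    }
}
```

If both inputs really have to come out of your C# code as byte streams, you will need some other transport than stdin for the second one (for example a local UDP loopback), since only one pipe is available.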
I have an app which is receiving wave data (raw PCM data) over the network through a UDP port.
How can I set it up to play the received wave data using NAudio?
I have tried searching with Google and read some of the NAudio documentation, but so far I haven't had any success.
Any help or hint would be appreciated. Thanks in advance.
The NAudioDemo application demonstrates how to do this in the Network Chat demo. You use a BufferedWaveProvider to store the decompressed audio as it arrives, and use that to feed WaveOut. You might also want to automatically pause if there is not enough buffered audio, to prevent stuttering playback.
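Here is a minimal sketch of that idea, assuming the incoming packets are raw 16-bit, 44.1 kHz mono PCM (the port and format are made up for illustration):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using NAudio.Wave;

class UdpPcmPlayer
{
    static void Main()
    {
        // The format must match what the sender is transmitting.
        var format = new WaveFormat(44100, 16, 1);

        // BufferedWaveProvider accumulates samples as they arrive from the network.
        var buffer = new BufferedWaveProvider(format)
        {
            DiscardOnBufferOverflow = true   // drop data rather than throw if playback falls behind
        };

        using var waveOut = new WaveOutEvent();
        waveOut.Init(buffer);
        waveOut.Play();

        using var udp = new UdpClient(5005);            // placeholder port
        var remote = new IPEndPoint(IPAddress.Any, 0);

        Console.WriteLine("Listening for PCM packets... Ctrl+C to quit.");
        while (true)
        {
            byte[] packet = udp.Receive(ref remote);
            buffer.AddSamples(packet, 0, packet.Length);
        }
    }
}
```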
Well, I did some work with NAudio a long time ago, and I'm sorry I might not be of much help, as I'm afraid I hardly remember it...
But I think there was something like a WaveOut class and a WaveStream which contained your WAV data, and you call Play on the WaveOut class after associating it with the WaveStream.
Try having a look at this WaveOut class; you might get some clues. Also, I was quite new to the audio world when I worked on that, and my approach was to take their sample program that plays a WAV file and see how they do it... that was how I figured out what needed to be done.
Good Luck...
Simple.
UDP Stream --> buffer --> NAudio WaveStream
First, check that the source PCM audio can be played correctly by NAudio. Do this "offline", before sending it via a socket.
I will do some research and post some code later.
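Until then, here is a minimal sketch of that "offline" check using NAudio's RawSourceWaveStream, assuming the raw PCM is 16-bit, 44.1 kHz mono saved to a file (the path and format below are placeholders):

```csharp
using System;
using System.IO;
using NAudio.Wave;

class RawPcmCheck
{
    static void Main()
    {
        // Wrap the raw PCM bytes in a WaveStream by telling NAudio what format they are in.
        var format = new WaveFormat(44100, 16, 1);                  // placeholder format
        using var pcm = File.OpenRead(@"C:\temp\capture.pcm");      // placeholder path
        using var source = new RawSourceWaveStream(pcm, format);

        using var waveOut = new WaveOutEvent();
        waveOut.Init(source);
        waveOut.Play();

        Console.WriteLine("Playing... press Enter to stop.");
        Console.ReadLine();
    }
}
```

If this plays correctly, the same bytes should play correctly once they arrive over UDP and are fed through a buffer instead of a file.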
Is there any way to play an MP3 directly from a memory stream (without any temp files) using VB.NET or C#? Or play it from a SQL CE database?
Thanks
I'll suggest you try Mp3Sharp. It is a port of JavaLayer and is written in C#. I am currently using it together with SlimDX to play ShoutCast MP3 streams, and so far it works very well. There is an Mp3Stream class which you use to read the stream and return a predetermined number of PCM bytes. You can write those bytes to a DirectSound buffer for playback if you wish.
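A rough sketch of the decoding side, assuming Mp3Sharp's Mp3Stream can wrap any .NET Stream; the member names beyond Read (Frequency, ChannelCount) are from memory and worth double-checking against the version of the library you end up using:

```csharp
using System;
using System.IO;
using Mp3Sharp;

class Mp3FromMemory
{
    static void Main()
    {
        // The MP3 bytes could come from a database blob, a download, etc.
        byte[] mp3Bytes = File.ReadAllBytes(@"C:\temp\song.mp3");   // placeholder source

        using var memory = new MemoryStream(mp3Bytes);
        using var mp3 = new Mp3Stream(memory);

        // Pull decoded PCM out in chunks; these bytes would be fed to a
        // DirectSound (or other) playback buffer.
        var pcm = new byte[4096];
        int bytesRead;
        while ((bytesRead = mp3.Read(pcm, 0, pcm.Length)) > 0)
        {
            // Hand pcm[0..bytesRead] to your audio output here.
        }

        // NOTE: the Frequency/ChannelCount property names are assumed here.
        Console.WriteLine($"Decoded at {mp3.Frequency} Hz, {mp3.ChannelCount} channel(s).");
    }
}
```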
IrrKlang can also do this for you.
I plan to build a small audio-recorder app in C#.
My laptop has a built-in microphone that's always active, so I want to use that as an early-stage test. I would simply start recording, save the file as a .wav, or even use the LAME DLL to turn it into an MP3.
The problem is, I don't know how to access that microphone. Do I use a library that can detect the device, or do I just catch a stream of bytes from the port the device is on?
I don't have any experience with receiving data from connected devices. I suppose that I'll need to put all the data into a byte array and then serialize that into a WAV file, but I'm not sure.
Can I get some pointers on this subject?
Look into SlimDX.
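If you would rather use a managed library for that first test, NAudio (mentioned in the other threads on this page) can also capture the default microphone straight to a WAV file, so you don't need to build the byte array and WAV header yourself. A minimal sketch, with a placeholder output path:

```csharp
using System;
using NAudio.Wave;

class MicToWav
{
    static void Main()
    {
        // Capture from the default input device: 44.1 kHz, 16-bit, mono.
        using var waveIn = new WaveInEvent { WaveFormat = new WaveFormat(44100, 16, 1) };
        using var writer = new WaveFileWriter(@"C:\temp\recording.wav", waveIn.WaveFormat); // placeholder path

        // WaveFileWriter takes care of the WAV header; we just append raw samples.
        waveIn.DataAvailable += (s, e) => writer.Write(e.Buffer, 0, e.BytesRecorded);

        waveIn.StartRecording();
        Console.WriteLine("Recording... press Enter to stop.");
        Console.ReadLine();
        waveIn.StopRecording();
        writer.Flush();
    }
}
```

From there you could run the resulting .wav through LAME if you want an MP3.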