Azure Media Services: modify stream (add images / text like Twitch) - C#

I'm in the process of developing a Twitch-like application that supports Live Streaming. I would like to use Azure Media Services for this.
Looking at the REST API of Azure Media Services, it looks like it can handle almost everything I require, for example playing advertisements. There is just one thing I can't seem to find, and I really hope someone is able to point me in the right direction.
How can I 'modify' the stream so that it shows images / text on the live video stream? For example, when a donation comes in on Twitch, viewers are presented with a message for the streamer on the video.
Thanks!

When your Channel has Live Encoding enabled, you have a component in your pipeline that is processing video and can manipulate it. You can signal for the Channel to insert slates and/or advertisements into the outgoing adaptive bitrate stream. Slates are still images that you can use to cover up the input live feed in certain cases (for example during a commercial break). Advertising signals are time-synchronized signals you embed into the outgoing stream to tell the video player to take special action, such as switching to an advertisement at the appropriate time.
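For reference, signaling a slate or an ad break from C# might look roughly like the sketch below. It assumes the (v2) Media Services .NET SDK (Microsoft.WindowsAzure.MediaServices.Client); the channel name, slate asset id and durations are placeholders, and the exact method signatures should be checked against the SDK version you use. Note that a slate covers the whole live feed with a still image; it is not a per-element overlay like a donation alert.

using System;
using System.Linq;
using Microsoft.WindowsAzure.MediaServices.Client;

class ChannelSignalingSketch
{
    static void InsertSlateAndAd(CloudMediaContext context)
    {
        // The Channel must have Live Encoding enabled for slates/ads to apply.
        IChannel channel = context.Channels
            .Where(c => c.Name == "myChannel")          // placeholder channel name
            .First();

        // Cover the live input with a still-image slate for 30 seconds.
        channel.ShowSlateAsync(TimeSpan.FromSeconds(30), "nb:cid:UUID:slate-asset-id").Wait();

        // Embed a time-synchronized ad signal (SCTE-35 cue) into the outgoing stream.
        int cueId = 1234;
        channel.StartAdvertisementAsync(TimeSpan.FromSeconds(60), cueId, true).Wait();

        // Later: end the ad break and remove the slate.
        channel.EndAdvertisementAsync(cueId).Wait();
        channel.HideSlateAsync().Wait();
    }
}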

Related

Stream audio from one device to another

I am looking to create a project where user A will stream audio and user B will receive it; I am not looking to upload the audio to a web server and then download it. I have done quite a lot of research but I haven't arrived at a final design. I am asking for guidance, not for you to design my application: where am I supposed to start with such a project?
I have a design in mind but I'm not sure how feasible it is with Xamarin.iOS.
I would like to know your thoughts on this design.
User A will choose audio file from their playlist
Then I want to decode that audio into bits (packets) and send the packets over to User B
User B will receive these packets and encode them back into an audio file
I am looking to achieve this using the HTTP protocol. This is as far as I was able to get. I welcome any ideas or guidance as to where I should start with such a project.
P.S. I don't mind switching to Swift/Objective-C if it's not possible with Xamarin.
You can borrow the voice-transfer concept from Internet calling (VoIP). That will give you an idea of how voice is transferred, along with encryption and decryption of the packets.
You can get a brief overview here and here.
Once you get the hang of it, you can easily switch to the audio files you want to play.
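If you only need to move the file's bytes over HTTP (rather than do true real-time VoIP), a minimal transfer sketch in C# could look like the following. The endpoint URL, port and file paths are placeholders, and HttpListener is used purely to illustrate the receiving side; on iOS you would more likely route through a small relay server or use sockets.

using System;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class AudioTransferSketch
{
    // Sender (User A): streams the chosen audio file to the receiver over HTTP.
    // "http://receiver-address:5000/audio/" is a placeholder endpoint.
    static async Task SendAsync(string filePath)
    {
        using (var client = new HttpClient())
        using (var file = File.OpenRead(filePath))
        {
            var content = new StreamContent(file);                 // streams the file body in chunks
            var response = await client.PostAsync("http://receiver-address:5000/audio/", content);
            response.EnsureSuccessStatusCode();
        }
    }

    // Receiver (User B): accepts the upload and writes the bytes back to an audio file.
    static async Task ReceiveAsync(string savePath)
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:5000/audio/");
        listener.Start();

        var context = await listener.GetContextAsync();
        using (var output = File.Create(savePath))
        {
            await context.Request.InputStream.CopyToAsync(output); // read the request body as it arrives
        }
        context.Response.StatusCode = 200;
        context.Response.Close();
        listener.Stop();
    }
}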

C#/.NET - Recording the current audio being played

Is there any way in C#/.NET to record the audio currently being played? I've searched a lot on the internet, but the only results I could find were about recording with a microphone.
I don't want to record using the microphone input; I want to record whatever is being played on the computer when I click a record button.
Thanks
You have two options here:
Hardware loopback device: a virtual "Stereo Mix" audio device, which acts as a regular audio capture device and at the same time produces a copy of the mixed audio feed played through the system's default audio output device. Since such a device shows up as a real audio input device, you can use standard APIs, libraries and even existing applications to record from it.
Programmatic access to a virtual loopback device as if it were a microphone-like device. The API in the background duplicates the played audio content and makes it available for reading back as it plays. The good news is that you can access the mixed audio feed for the device of your interest.
Both options are described in detail in the Loopback Recording article on MSDN and are available via standard audio APIs, specifically WASAPI.
For C# development you are likely to use a wrapper like NAudio.
For option 1 you will find quite a few questions on Stack Overflow; for option 2, the keyword is AUDCLNT_STREAMFLAGS_LOOPBACK.
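As a concrete illustration of option 2, a minimal sketch with NAudio's WASAPI loopback capture might look like this (type names assume a reasonably recent NAudio version, and the output file name is a placeholder):

using System;
using NAudio.Wave;

class LoopbackRecorder
{
    static void Main()
    {
        var capture = new WasapiLoopbackCapture();                 // default render device, loopback mode
        var writer = new WaveFileWriter("loopback.wav", capture.WaveFormat);

        capture.DataAvailable += (s, e) =>
            writer.Write(e.Buffer, 0, e.BytesRecorded);            // mixed audio bytes as they are played

        capture.RecordingStopped += (s, e) =>
        {
            writer.Dispose();
            capture.Dispose();
        };

        capture.StartRecording();
        Console.WriteLine("Recording... press Enter to stop.");
        Console.ReadLine();
        capture.StopRecording();
    }
}
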
The only way to be able to receive data from another application is if the developer provides an access point, normally through some SDK, API, or other means. Without this, there is no way for your application code to receive the bytes from the other application.
The reason a microphone works is that it picks up the sound the application outputs and feeds those bytes back into your PC to render and output the sound. Since you have access to the bytes coming from the microphone, you are able to capture the sound.
See if there is an API or an SDK from the developer of the application you are trying to get sound from.

C# How to keep the DirectShow filter graph running on video window resize, minimize, device lost, reset?

Our app uses C#/WinForms/VMR9/DirectShowLib-2005 to either play back a local video file or to receive (and render) a live video stream over UDP using a third-party DirectShow filter. The video stream is H.265-encoded 1080p.
I also have that DirectShow filter recording the live video feed to a local file for me.
When I resize the form during playback of either the file or the live feed, I get a lost device and need to reset it. I free all the resources, but the device reset still fails unless I also destroy the graph, and that graph is the one being used to receive and record my live video feed.
So the problem is that I would like to keep the video feed recording without interruption from a resize, a move to another monitor, or a device lost/reset. What are my options to achieve this? We could consider converting the code to WPF/WF, purchasing a commercial component, using a free plugin to do the job, etc. I need advice here.
A second question on the same subject, if I may. While the live feed is being recorded to a local file and played back in the video window, we also display a timeline (slider control) representing the time from the beginning of the live feed to the present moment (it keeps moving forward while the live feed is active). I need to give the user the ability to select any previous moment in time and immediately play back that part of the recorded video, while the live feed is still being recorded to the same file. After reviewing part of the recording, the user needs a way to return to watching the live feed.
I am not sure which technology we should be using to achieve that either. I would appreciate any help.
Thank you very much.
Recording filter graphs are sensitive to unexpected state transitions; the assumption is that recording happens in one pass, without pauses or interruptions such as those caused by the need to reset video hardware or change formats.
The typical approach is to separate recording from the other activity into its own graph. A dedicated recording graph receives the data produced elsewhere and records it to a file (or streams it to the network), while the playback and presentation activity running in another graph can be flexibly reset or reconfigured as needed; see the sketch after the links below.
See also:
Directshow Preview Only and Capture & Preview with a single Graph
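To make the separation concrete, here is a rough sketch of a dedicated recording graph built with DirectShowLib. The third-party UDP/H.265 source filter is assumed to already be available as an IBaseFilter, and AVI is used only as an example container; the point is that this graph is never touched when the preview window is resized or its graph is rebuilt.

using System;
using DirectShowLib;

class RecordingGraph
{
    private IFilterGraph2 graph;
    private IMediaControl control;

    // Builds a stand-alone graph: source -> mux -> file writer.
    // "sourceFilter" stands for the third-party source filter and is an assumption here.
    public void Build(IBaseFilter sourceFilter, string outputFile)
    {
        graph = (IFilterGraph2)new FilterGraph();
        var builder = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
        DsError.ThrowExceptionForHR(builder.SetFiltergraph(graph));

        DsError.ThrowExceptionForHR(graph.AddFilter(sourceFilter, "Live Source"));

        // Let the capture graph builder create the mux and file writer filters.
        IBaseFilter mux;
        IFileSinkFilter sink;
        DsError.ThrowExceptionForHR(
            builder.SetOutputFileName(MediaSubType.Avi, outputFile, out mux, out sink));

        // Connect the source's output to the mux.
        DsError.ThrowExceptionForHR(
            builder.RenderStream(PinCategory.Capture, MediaType.Video, sourceFilter, null, mux));

        control = (IMediaControl)graph;
    }

    // Only called when recording should actually start/end; resizing or rebuilding
    // the separate preview graph never touches this graph.
    public void Start() { DsError.ThrowExceptionForHR(control.Run()); }
    public void Stop() { DsError.ThrowExceptionForHR(control.Stop()); }
}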

Record video from camcorder in C#

I need to be able to record video from an external camera in a C# application.
Unfortunately a webcam is pretty much out of the question, as the application will record outside during the evening/night. That is why I was thinking of a camcorder, since it also offers manual control over exposure and focus, lower noise and a better sensor.
So far the plan is to use the AV/S-Video output from the camcorder and send the signal to a USB capture card (the computer is a laptop, so no PCI-E cards).
How would I be able to access the video stream from the C# application, now that it comes from the capture card?
Does my proposed system seem feasible (achievable, good video quality, good fps)? Does anybody have another working solution?
Thanks
This Code Project article could be a good starting point.
The author mentions:
The main goal of the application was to make it flexible and extensible. The application itself can communicate with any video source – it may be an IP video camera or a server, it may be a local camera attached to USB, it may be an MMS stream from a remote server, or it may be any other video source. And more of it, the application can work with all these video sources simultaneously, displaying them all on a single screen.
The solution I used in the end was Microsoft Expression Encoder.
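For completeness, here is a short sketch of enumerating DirectShow video input devices (a USB capture card shows up as one) and grabbing frames from it. It uses the AForge.Video.DirectShow library purely as an example; that choice is an assumption, not necessarily what the linked article or Expression Encoder uses.

using System;
using System.Drawing;
using AForge.Video;
using AForge.Video.DirectShow;

class CaptureCardExample
{
    static void Main()
    {
        // The USB capture card appears as a regular DirectShow video input device.
        var devices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
        if (devices.Count == 0)
            throw new InvalidOperationException("No capture device found.");

        var source = new VideoCaptureDevice(devices[0].MonikerString);

        source.NewFrame += (sender, eventArgs) =>
        {
            // eventArgs.Frame is the current video frame as a Bitmap;
            // display it, or hand it to an encoder to write a file.
            using (Bitmap frame = (Bitmap)eventArgs.Frame.Clone())
            {
                // process/record the frame here
            }
        };

        source.Start();
        Console.WriteLine("Capturing... press Enter to stop.");
        Console.ReadLine();
        source.SignalToStop();
        source.WaitForStop();
    }
}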

Using Core Audio API to change volume of rear channels

I'm trying to create a background application that allows me to easily change the volume of the rear channels.
I've looked into the Core Audio API, and although I managed to change the balance/volume of the front speakers, the API seemingly has no access to the rear channels or any other surround channel. It only reports 2 channels for my audio device.
Is it in any way possible, using any API, to control the rear channel's volume?
Thanks in advance!
EDIT
Thanks, FMOD looks like what I need, although it's a bit overwhelming. :P What would I need to do to change the volume of a specified channel? I believe I need this function:
FMOD.RESULT result = channel.setVolume(1.0f);
But then I need to find a way to specify the channel...
Also, to be clear: I need to change the volume of any running application, say Winamp.
The best choice for working with audio is FMOD ("fmod.dll"); it gives you a lot of capability for working with audio files.
This is an audio content creation tool for games, with a focus on a 'Pro Audio' approach. It has an entirely new interface that will be more familiar to those using professional Digital Audio Workstations than existing game audio tools and is loaded with new features.
I have already used it; it is a very powerful component and easy to use.
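To put the setVolume call from the question into context, a rough sketch with the FMOD C# wrapper is shown below. Exact signatures differ between FMOD versions (some wrappers use ref instead of out, or an extra channel-index argument), the file name and 0.5f volume are placeholders, and keep in mind that FMOD only controls sounds played through FMOD itself, not audio coming from other applications such as Winamp.

using System;

class FmodVolumeSketch
{
    static void Main()
    {
        FMOD.System system;
        FMOD.Sound sound;
        FMOD.Channel channel;

        // Create and initialize the FMOD system (signatures vary per wrapper version).
        FMOD.Factory.System_Create(out system);
        system.init(32, FMOD.INITFLAGS.NORMAL, IntPtr.Zero);

        // Load and play a sound; playSound hands back the channel it is playing on.
        system.createSound("music.mp3", FMOD.MODE.DEFAULT, out sound);
        system.playSound(sound, null, false, out channel);   // null = master channel group in the 1.x wrapper

        // The channel handle is what setVolume operates on (0.0f = silent, 1.0f = full volume).
        channel.setVolume(0.5f);

        Console.ReadLine();

        sound.release();
        system.release();
    }
}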
