I am trying to record video with audio and save it as an uncompressed AVI file. The graph is as you can see in the picture. The problem is that the audio recording is ~500 ms behind the video, and it doesn't matter which sources I use. What can I do to keep video and audio in sync?
The default audio capture buffer is pretty large, about 500 ms in length. You start getting the data only once the buffer is filled, hence the lag. Large buffers might be okay for some scenarios and are not good for others. You can use the IAMBufferNegotiation interface to adjust the buffering.
See related (you will see 500 ms lag is a typical complaint):
Audio Sync problems using DirectShow.NET
Minimizing audio capture latency in DirectShow
Delay in video in DirectShow graph
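A minimal sketch of the IAMBufferNegotiation suggestion, assuming the DirectShow.NET (DirectShowLib) wrappers; the helper name and the 44.1 kHz/16-bit/stereo format are illustrative assumptions, and the call must be made on the capture pin before it is connected:

```csharp
using DirectShowLib;

static class CaptureBufferTuning
{
    // Hypothetical helper: 'capturePin' is the audio capture filter's
    // output pin, obtained elsewhere while building the graph.
    public static void SuggestSmallBuffers(IPin capturePin)
    {
        var negotiation = capturePin as IAMBufferNegotiation;
        if (negotiation == null)
            return; // pin does not expose buffer negotiation

        // ~50 ms per buffer for 16-bit stereo at 44100 Hz:
        // 44100 samples/s * 4 bytes/sample / 20 = ~8820 bytes.
        var props = new AllocatorProperties
        {
            cBuffers = 4,                // several small buffers instead of one big one
            cbBuffer = 44100 * 4 / 20,   // ~50 ms of PCM data per buffer
            cbAlign  = 1,
            cbPrefix = 0
        };

        int hr = negotiation.SuggestAllocatorProperties(props);
        DsError.ThrowExceptionForHR(hr);
    }
}
```

Going much below ~30-50 ms tends to cause dropouts, so treat the buffer size as a latency/robustness trade-off rather than something to minimize outright.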
I observe noise which shows up periodically (every 5 seconds or so) when I use an ASIO sound card with the custom-built audio processing application; the visualisation tab displays the frequency analysis.
The noise is not observed when using a DirectSound card with the same audio.
I have tried changing the number of channels listening to the audio for ASIO from 8 to 2, but that doesn't fix the issue.
The sampling rate is 48 kHz (I tweaked it to 44 kHz; that doesn't fix the issue either).
The audio processing application is written in C# and uses the NAudio API.
I've included the images for the waveform in the link:
https://www.sendbig.com/view-files?Id=8fe0ff05-d27e-9ec2-161f-415d923599b7
The first image is the clean signal with no noise and the next image shows the audio along with the noise.
Any input on this is appreciated!
I'm developing an app that consumes the 8tracks API. Some of the playlists have GIFs as the playlist art, and I would like to have those GIFs play in the app. UWP does not support native GIF playback, so I'm trying to figure out a way to make them play. So far I have tried using XamlAnimatedGif, but its performance is bad, especially on phones.
Now I'm using the Giphy API to upload the GIF, which also creates an MP4 version of the GIF that plays back smoothly in a MediaElement. I can play up to 10 MP4s at a time smoothly (not that I'll ever need that many playing at a given time). I'm wondering if I can eliminate Giphy and have the computer/phone just take each frame from the GIF and then encode them into an MP4 (or other video file). Is this a good option? What would be the cons of doing this vs. what I'm already doing with Giphy? If I decide to at least try this, is there anywhere I can read up on decoding the GIFs to frames and encoding them into a video?
You could use the GifBitmapDecoder to get the frames:
https://msdn.microsoft.com/en-us/library/system.windows.media.imaging.gifbitmapdecoder(v=vs.90).aspx
and there are a number of options here to convert stills to a video:
How can I create a video from a directory of images in C#?
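As a rough sketch of the first half (frame extraction), assuming the WPF GifBitmapDecoder from the link above — note this type lives in PresentationCore, not in UWP, where Windows.Graphics.Imaging.BitmapDecoder would be the rough equivalent. The method name and file names are placeholders:

```csharp
using System.IO;
using System.Windows.Media.Imaging; // WPF: requires a PresentationCore reference

static class GifFrames
{
    // Decode an animated GIF into individual frames and save each one
    // as a PNG, ready to hand to a stills-to-video encoder.
    public static void ExtractFrames(string gifPath, string outputDir)
    {
        using (var stream = File.OpenRead(gifPath))
        {
            var decoder = new GifBitmapDecoder(
                stream,
                BitmapCreateOptions.PreservePixelFormat,
                BitmapCacheOption.OnLoad);

            for (int i = 0; i < decoder.Frames.Count; i++)
            {
                var encoder = new PngBitmapEncoder();
                encoder.Frames.Add(decoder.Frames[i]);
                using (var output = File.Create(
                    Path.Combine(outputDir, $"frame{i:D4}.png")))
                {
                    encoder.Save(output);
                }
            }
        }
    }
}
```

One caveat for the second half: GIF frames carry per-frame delays, so when you encode the stills into a video you need to honour those delays (or resample to a fixed frame rate) or the playback speed will be wrong.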
I've written a DirectShow transform filter (in C#, but the concept is the same in C++) which buffers multiple video frames before sending them to the renderer, hence a delay. These frames are processed before producing an output frame (think of a sliding window of, say, 6 frames).
On a 6 fps video source, this causes a 1-second delay. Audio ends up playing back 1 second ahead of video. How do I tell the graph to delay the audio by the same amount?
Video and audio renderers present data respecting the attached time stamps. You need to restamp your audio data, adding the desired delay.
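A sketch of the restamping, assuming DirectShow.NET's IMediaSample wrapper; this would be called from wherever your filter chain sees each audio sample (e.g. a pass-through transform inserted in the audio branch), and the helper name is illustrative:

```csharp
using DirectShowLib;

static class AudioRestamp
{
    // REFERENCE_TIME is measured in 100-ns units, so 1 second = 10,000,000.
    public const long OneSecond = 10_000_000;

    // Shift a sample's time stamps by 'delay' so the audio renderer
    // presents it later, matching the video path's buffering delay.
    public static void DelaySample(IMediaSample sample, long delay)
    {
        long start, end;
        int hr = sample.GetTime(out start, out end);
        if (hr == 0) // only restamp samples that actually carry time stamps
        {
            sample.SetTime(new DsLong(start + delay),
                           new DsLong(end + delay));
        }
    }
}
```

The delay you add should equal the video path's buffering latency (here, 6 frames at 6 fps = 1 second = `OneSecond`), and the graph's renderers then line the streams back up against the reference clock.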
Very similar to this question, I have a networked micro-controller which collects PCM audio (8-bit, 8 kHz) and streams the data as raw bytes over a TCP network socket. I am able to connect to the device, open a NetworkStream, and create a RIFF/wave file out of the collected data.
I would like to take it a step further and enable live playback of the measurements. My approach so far has been to buffer the incoming data into multiple MemoryStreams with an appropriate RIFF header, and when each chunk is complete, to use System.Media.SoundPlayer to play the wave file segment. To avoid high latency, each segment is only 0.5 seconds long.
My major issue with this approach is that often there is a distinctive popping sound between segments (since each chunk is not necessarily zero-centered or zero-ended).
Questions:
Is there a more suitable or direct method to playback live streaming PCM audio in C#?
If not, are there additional steps I can take to make the multiple playbacks run more smoothly?
I don't think you can avoid the popping sounds with SoundPlayer, because there shouldn't be any gap between pushing the buffers. Normally you should always have one extra buffer queued, but SoundPlayer only buffers one. Even when SoundPlayer raises an event that it is ready, you're already too late to start a new sound.
I advise you to check this link: Recording and Playing Sound with the Waveform Audio Interface http://msdn.microsoft.com/en-us/library/aa446573.aspx
There are some examples of the SoundPlayer (skip those), but also of how to use WaveOut. Look at the section "Playing with WaveOut".
The SoundPlayer is normally used for notification sounds.
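Since the question already uses C#, here is a sketch of the same queue-ahead idea using NAudio's BufferedWaveProvider rather than the raw waveOut API from the linked article (an assumption on my part; the helper name is also illustrative). Because the device is fed from one continuously topped-up buffer, there is no per-segment restart and hence no pop:

```csharp
using System;
using System.IO;
using NAudio.Wave;

static class LivePcmPlayback
{
    // Continuously play 8-bit, 8 kHz mono PCM arriving on a stream
    // (e.g. the NetworkStream from the micro-controller).
    public static void PlayLive(Stream networkStream)
    {
        var format = new WaveFormat(8000, 8, 1); // sample rate, bits, channels
        var buffer = new BufferedWaveProvider(format)
        {
            BufferDuration = TimeSpan.FromSeconds(2),
            DiscardOnBufferOverflow = true // drop data rather than block if we fall behind
        };

        using (var output = new WaveOutEvent())
        {
            output.Init(buffer);
            output.Play();

            var chunk = new byte[4000]; // ~0.5 s of 8 kHz 8-bit mono
            int read;
            while ((read = networkStream.Read(chunk, 0, chunk.Length)) > 0)
            {
                // Queue the raw bytes; playback continues without restarting.
                buffer.AddSamples(chunk, 0, read);
            }
        }
    }
}
```

The key difference from the SoundPlayer approach is that segments are appended to an already-playing buffer instead of being played as separate one-shot wave files, so segment boundaries never reach the sound card as discontinuities.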
I've written an application that streams live screen to a remote app. It grabs the screen (resizes the image to 640x480) and then compresses the image using GIF compression (using System.Drawing), saves it into a byte[] array and transfers it to the other app.
The problem is that each image is about 50 KB, which means that at 30 FPS it would require 1.5 MB of data to be transferred every second. At the moment I only get 8-10 FPS. I know it's possible to solve this somehow. Maybe using the technique that Flash videos use?
Personally I'd recommend using VNCSharp - it will do most of the heavy lifting for you. Some might say that it'd be madness to code this up again.
If not then streaming images is a waste of bandwidth - you need to effectively build a video stream and transmit that.
Since you don't need animation and want to stay with lossless compression, you would get somewhat better compression with PNG instead of GIF (and PNG is patent-free). According to this, the savings are between 10 and 30%.
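Swapping GIF for PNG is a one-line change with the System.Drawing calls the question already uses; a minimal sketch (the method name is illustrative):

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

static class FrameEncoding
{
    // Encode a captured frame as PNG instead of GIF before sending.
    public static byte[] EncodeFrame(Bitmap frame)
    {
        using (var ms = new MemoryStream())
        {
            frame.Save(ms, ImageFormat.Png); // lossless, usually smaller than GIF
            return ms.ToArray();
        }
    }
}
```

PNG also avoids GIF's 256-colour palette limit, which matters for screen content with gradients or photos.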
I think screen-by-screen capture isn't a good approach for live screen streaming. Video formats usually assume that between consecutive frames only a few small areas change. On the other hand, you'll need to work a bit more to capture a video from the screen.
You can start from these articles:
http://betterlogic.com/roger/2010/07/list-of-available-directshow-screen-capture-filters/
http://www.codeproject.com/KB/dialog/screencap.aspx
Rather than compressing images, you'd better compress video streams. This is how video codecs achieve high compression: by exploiting similarities between consecutive images in the stream.
If you compress your images one by one, you lose this performance advantage, and it makes a huge difference in bandwidth.
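To illustrate the idea (not a real codec, just a sketch): real codecs do motion compensation and transform coding, but even a plain per-byte delta against the previous frame, followed by a general-purpose compressor, cuts bandwidth dramatically for a mostly static screen, because the delta is mostly zeros:

```csharp
static class FrameDelta
{
    // XOR the current raw frame against the previous one. Unchanged
    // bytes become 0, so the result compresses extremely well with
    // e.g. System.IO.Compression.DeflateStream before transmission.
    public static byte[] Delta(byte[] previous, byte[] current)
    {
        var delta = new byte[current.Length];
        for (int i = 0; i < current.Length; i++)
        {
            byte prev = i < previous.Length ? previous[i] : (byte)0;
            delta[i] = (byte)(current[i] ^ prev);
        }
        return delta;
    }
}
```

The receiver XORs each delta back onto its copy of the previous frame to reconstruct the current one, so both sides only ever transmit what changed.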