I have a DirectShow graph which records and displays a video source. When I move the Video Renderer window to another monitor, what I have recorded so far is discarded and recording starts over. I searched and found this link, which says that changing monitors stops and restarts the graph. How can I prevent the graph from being restarted? I don't want to lose my recording while switching between monitors.
Thanks
The behavior you are describing is basically behavior by design (even though the side effect is pretty annoying and confusing). Moving a video renderer between monitors makes it re-allocate the hardware resources used to present video, and this in turn requires a state transition. For recording, a state transition means closing and re-opening the file.
Your options are to either split the pipeline into separate presentation and recording graphs, or to use a custom allocator/presenter so you take care of presentation yourself the way you want. Graph splitting (what Wimmel suggests in another answer) is presumably the preferable way, as it also adds other degrees of freedom.
There is probably a good reason that the EC_DISPLAY_CHANGED message behaves that way, so I don't know what the disadvantages are if you handle this message yourself and don't restart the graph.
Instead, you could separate the rendering graph from the recording graph using GMFBridge. Use one graph to capture and record. Use the second graph only for rendering, so that restarting it does not stop the recording.
Edit: You may need to disconnect the bridge before the second graph is restarted. That means you still need to process the EC_DISPLAY_CHANGED message, even if you use GMFBridge.
// Detach the render graph from the capture graph before it restarts:
m_pController->BridgeGraphs(NULL, NULL);
I have a heatmap plugin integrated into my Unity VR game (with SteamVR). The plugin uses eye-tracking information to generate live heatmaps as the user gazes at different elements in the game. As the heatmaps are generated, the whole camera view (the user's) is overlaid with the heatmap info and written to an MP4 file using FFMPEG.
The whole process works fine, but I have an annoying problem: during the recording, the user's camera view is not stable and keeps flickering, stopping only when the recording is stopped. It interrupts the smooth flow of the game.
For now, I've narrowed the trouble down to this code:
public void Write(byte[] data)
{
    if (_subprocess == null) return;  // ffmpeg process not started
    _stdin.Write(data);               // blocking write into ffmpeg's stdin pipe
    _stdin.Flush();
}
From my understanding, this is the part of the code where standard input is used to write to the file system. So I surmise that the problem must be with accessing the file system, which in turn causes a delay each time a frame is written in the update method. Correct me if I am wrong here.
The loading indicator that appears during every frame write looks something like the screenshot above. It interrupts the smooth flow of the game and also makes the recording useless, as the user keeps focusing on the flicker rather than the actual objects of interest. I would really be grateful if someone could shed light on the issue here.
Accessing the file system (or a pipe into another process) can easily take longer than a frame budget. Try offloading that work to another thread or a Coroutine so Update() never blocks on the write.
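A minimal sketch of that offloading, assuming a .NET 4.x scripting runtime; the class and member names here are illustrative, not the plugin's actual API. Update() enqueues the frame and returns immediately, while a background thread performs the blocking writes into ffmpeg's stdin:

using System.Collections.Concurrent;
using System.IO;
using System.Threading;

public class AsyncFrameWriter
{
    // Bounded queue: if the writer falls behind, frames are dropped instead
    // of stalling the game's main thread.
    private readonly BlockingCollection<byte[]> _queue = new BlockingCollection<byte[]>(8);
    private readonly Stream _stdin;
    private readonly Thread _writerThread;

    public AsyncFrameWriter(Stream stdin)
    {
        _stdin = stdin;
        _writerThread = new Thread(WriteLoop) { IsBackground = true };
        _writerThread.Start();
    }

    // Called from Update(); never blocks on the pipe.
    public void Write(byte[] data)
    {
        _queue.TryAdd(data);  // returns false (drops the frame) when the queue is full
    }

    private void WriteLoop()
    {
        foreach (byte[] frame in _queue.GetConsumingEnumerable())
            _stdin.Write(frame, 0, frame.Length);
        _stdin.Flush();
    }

    // Call once when recording stops; drains the remaining frames.
    public void Finish()
    {
        _queue.CompleteAdding();
        _writerThread.Join();
    }
}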
I'm using a Wiimote controller as an input device.
I'm using this wrapper for HID calls / polling.
In the demo scene that comes with this wrapper, polling the controller is done in the Update event.
In many Wii games aiming extremely up and down quickly triggers an action.
The wrapper indicates extreme vertical aiming positions (where the aim goes out of scope / is "offscreen") as Y = -1.
I tried to detect such rapid up-down movements by:
1) Detecting if the aim is off-screen
2) If yes, checking whether the aim is within the screen again
3) Detecting if the aim is off-screen again, and whether all of this happened within a certain time period
The problem, however, is that (I think due to the nature of polling only in the Update event) step 2) doesn't necessarily occur. It's possible that the aim was back within the screen, but the controller wasn't polled at that moment.
I would like to ask what might be a valid solution to this problem.
Your polling would have to be pretty bad for this to be an issue, but other than increasing the poll rate, there isn't really anything you can do here.
The only other option would be to use the gyroscope or accelerometer instead (sorry, didn't bother to check if your wrapper exposes those). You could essentially combine a harsh vertical shake with the point being off screen, but if your polling is an issue in the original solution, it will still probably be an issue here too.
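If the wrapper can safely be called off the main thread, one hedged sketch of "increasing the poll rate" is to poll from a background thread and hand timestamped samples to Update(), so a brief on-screen interval between two frames is still observed. ReadAimY() is a hypothetical stand-in for the wrapper's poll call:

using System.Collections.Concurrent;
using System.Threading;
using UnityEngine;

public class WiimotePoller : MonoBehaviour
{
    private struct Sample { public long Ms; public bool Offscreen; }

    private readonly ConcurrentQueue<Sample> _samples = new ConcurrentQueue<Sample>();
    private Thread _pollThread;
    private volatile bool _running;

    private void OnEnable()
    {
        _running = true;
        _pollThread = new Thread(PollLoop) { IsBackground = true };
        _pollThread.Start();
    }

    private void OnDisable()
    {
        _running = false;
    }

    private void PollLoop()
    {
        var clock = System.Diagnostics.Stopwatch.StartNew();
        while (_running)
        {
            float y = ReadAimY();  // hypothetical: the wrapper's HID poll
            _samples.Enqueue(new Sample { Ms = clock.ElapsedMilliseconds, Offscreen = y == -1f });
            Thread.Sleep(5);       // ~200 Hz, far above the frame rate
        }
    }

    private void Update()
    {
        Sample s;
        while (_samples.TryDequeue(out s))
        {
            // Run the off-screen / on-screen / off-screen state machine over
            // every sample here, using s.Ms for the time-window check.
        }
    }

    private float ReadAimY() { return 0f; }  // replace with the wrapper's call
}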
Our app is using C#/WinForms/VMR9/DirectShowLib-2005 to either play back a local video file or to receive (and render) a live video stream over UDP using a third-party DirectShow filter. The stream is H.265-encoded 1080p video.
I also have that DirectShow filter recording the live video feed to a local file for me.
When I resize the form during playback of either the local file or the live feed, I get a lost device and need to reset it. I free all the resources, but the device reset still fails unless I also destroy the graph, which is the same graph used to receive and record my live video feed.
So, the problem is that I would like to keep the video feed recording without interruption from a resize, a move to another monitor, or a device lost/reset. What are my options to achieve this? We can consider converting the code to WPF/WF, purchasing a commercially available plugin, or using a free one to do the job, etc. I need advice here.
A second question on the same subject, if I may. While the live feed is being recorded to a local file and played back in the video window, we also display a timeline (slider control) representing the time from the beginning of the live feed to the present moment (it moves forward while the feed is active). I need to give the user the ability to select any previous moment in time and immediately play that part of the recorded video back, while the live feed is still being recorded to the same file. After reviewing a part of the recording, the user should be able to return to watching the live feed.
I am not sure which technology we should be using to achieve that, either. I would appreciate any help.
Thank you very much.
Recording filter graphs are sensitive to unexpected state transitions; the assumption is that recording takes place in one pass, without pauses and resumptions, including those caused by the need to reset video hardware or change formats.
The typical method is to separate recording from other activity into its own graph. A dedicated recording graph receives data produced outside of it and records it to a file (or streams it to the network). The playback and presentation activity running in the other graph can then be flexibly reset or reconfigured as needed.
See also:
Directshow Preview Only and Capture & Preview with a single Graph
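For illustration, here is a hedged DirectShowLib sketch of the capture-plus-preview split those links describe; sourceFilter stands in for the third-party source filter, and the AVI mux/file choice is arbitrary. The point is that the file-writing leg and the renderer leg are built independently, which is what lets you rebuild only the presentation side (in the two-graph variant, the legs live in separate graphs connected by something like GMFBridge):

using DirectShowLib;

static class RecordingGraph
{
    // `sourceFilter` is assumed to be your third-party UDP/H.265 source,
    // created elsewhere.
    public static void Build(IBaseFilter sourceFilter)
    {
        var graph = (IFilterGraph2)new FilterGraph();
        var builder = (ICaptureGraphBuilder2)new CaptureGraphBuilder2();
        builder.SetFiltergraph(graph);

        graph.AddFilter(sourceFilter, "Live Source");

        IBaseFilter mux;
        IFileSinkFilter sink;
        builder.SetOutputFileName(MediaSubType.Avi, @"C:\feed.avi", out mux, out sink);

        // Recording leg: capture pin -> mux -> file.
        builder.RenderStream(PinCategory.Capture, MediaType.Video, sourceFilter, null, mux);

        // Preview leg: preview pin -> default video renderer (the builder
        // inserts a smart tee when the source has no separate preview pin).
        builder.RenderStream(PinCategory.Preview, MediaType.Video, sourceFilter, null, null);

        var control = (IMediaControl)graph;
        control.Run();
    }
}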
I'm not fully sure what I'm looking for, so I was hoping to gain some insight. I'm brainstorming an application I want to write, but I've never dealt with this type of project. I was wondering if any of you knew of resources, both literature and libraries/APIs, that could lead me down the path of creating an application that involves audio playback, audio visualization (like displaying the waveform with a scrolling position), setting playback points, maybe some effects like fade in and out, maybe even beat mapping; all in .NET.
For example, an application that displays a waveform and has a position indicator that moves with playback. It allows you to visually set multiple start and stop points, etc.
Any suggestions?
Check out http://naudio.codeplex.com/
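As a hedged starting point (the file name and samples-per-pixel value are arbitrary): NAudio's AudioFileReader exposes the audio as floats, which you can reduce to peak values for a waveform display, while CurrentTime/Position give you a playhead and user-settable start points:

using System;
using System.Collections.Generic;
using NAudio.Wave;

class WaveformDemo
{
    static void Main()
    {
        var reader = new AudioFileReader("track.wav");  // samples as floats in [-1, 1]

        // Reduce the samples to one peak per waveform column for drawing.
        const int samplesPerPixel = 1024;
        var peaks = new List<float>();
        var buffer = new float[samplesPerPixel];
        int read;
        while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
        {
            float max = 0f;
            for (int i = 0; i < read; i++)
                max = Math.Max(max, Math.Abs(buffer[i]));
            peaks.Add(max);
        }

        reader.Position = 0;  // rewind after the scan
        var waveOut = new WaveOutEvent();
        waveOut.Init(reader);
        waveOut.Play();

        // reader.CurrentTime drives a scrolling position indicator; setting
        // it (or reader.Position) implements user-defined start/stop points.
        Console.ReadKey();
        waveOut.Dispose();
        reader.Dispose();
    }
}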
As you might surmise from the question title, we are required to decode and display multiple (e.g., eight) H.264-encoded videos at the same time (and keep them all time-synchronized, but that's another question for another time). The videos are usually at 25 FPS with a resolution of 640x480.
I'm going to provide a bit of background before I get to the crux of the problem.
The feature needs to be baked into a fairly large C# 3.5 (WinForms) application. The videos will occupy rectangles in the application; the managed code needs to be able to specify where each video is drawn as well as its size.
We get the H264 packets in C# and fire them into a native H264 decoder to get YUV12 image data.
An early attempt consisted of converting the YUV12 images to RGB24 and BitBlt'ing them to an HWND passed into the native code from C#. While functional, all the BitBlt'ing had to happen on the UI thread, which caused it to bog down when displaying more than a couple of videos (on a 2.6 GHz Core 2 Duo).
The current attempt spins up one-thread-per-cpu-core on startup and load balances the decoding/displaying of videos across these threads. The performance of this is mind-blasting (I find watching task manager much more interesting than the videos being displayed). UI-wise, it leaves a lot to be desired.
The millisecond we started drawing to an HWND created on the UI thread (e.g., a panel docked in a WinForms control) from a non-UI thread, we started getting all sorts of funky behavior due to the thread-unsafety of WinForms. This led us to create the HWNDs in native code and draw to those, with C# providing the rectangles they should be drawn to in screen coordinates.
Gah! CanOfWorms.Open().
Problem: When the C# application receives focus, it jumps to the front of the Z-Order and hides the video windows.
Solution: Place the video windows Always On Top.
Problem: When the user switches to another application, the video windows are still on top.
Solution: Detect activation and deactivation of the C# application and show/hide the video windows accordingly.
Problem: User says, "I want my videos playing on one monitor while I edit a Word document in the other!"
Solution: Tell user to shut up and that Word sucks anyways.
Problem: I get fired.
etc. etc.
I guess the crux of the problem is that we have HWNDs created on a non-UI thread and we want to 'simulate' those being embedded in the C# application.
Any thoughts/suggestions? Am I completely out to lunch here?
I'm more than open to taking a completely different approach if one exists (This project required a lot of learning - winning the lottery would have a greater likelihood than me having picked the best approach at every step along the road).
Forget about BitBlt-ing and do this:
for each window in which you want your video to be played, create one DirectShow graph and attach the graph's renderer to that window
before the renderer in the graph, put a SampleGrabber filter. It will give you a callback in which you'll be able to fill the buffer
instead of blitting, decode into the buffer provided by the SampleGrabber.
In addition, I guess that you'll be able to put raw YUV12 into the buffer, as the VMR is able to display it directly.
Use the DirectShowNET library.
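A hedged sketch of the standard SampleGrabber wiring in DirectShow.NET (the graph parameter and the RGB24 subtype are assumptions; your decoder would fill or read the frame in BufferCB):

using System;
using DirectShowLib;

static class GraphHelper
{
    public static void InsertGrabber(IGraphBuilder graph)  // graph built elsewhere
    {
        var grabber = (ISampleGrabber)new SampleGrabber();

        var mt = new AMMediaType
        {
            majorType = MediaType.Video,
            subType = MediaSubType.RGB24,    // or a YUV subtype the VMR accepts
            formatType = FormatType.VideoInfo
        };
        grabber.SetMediaType(mt);
        DsUtils.FreeAMMediaType(mt);

        graph.AddFilter((IBaseFilter)grabber, "Sample Grabber");
        grabber.SetCallback(new GrabberCallback(), 1);  // 1 = use BufferCB
    }
}

class GrabberCallback : ISampleGrabberCB
{
    public int SampleCB(double sampleTime, IMediaSample sample) { return 0; }

    // Called for every frame with a pointer to the sample's buffer.
    public int BufferCB(double sampleTime, IntPtr buffer, int bufferLen)
    {
        // fill / read the frame data here, off the UI thread
        return 0;
    }
}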
EDIT:
And yes, BTW, if the videos are on the same 'canvas', you can use the same technique with the renderer and create only one large window, then shift the decoded video rectangles 'by hand' and put them into the frame buffer.
YET ANOTHER EDIT:
BitBlts are ALWAYS serialized, i.e. they can't run in parallel.
The millisecond we started drawing to an HWND created on the UI thread (e.g., a panel docked in a WinForms control) from a non-UI thread, we started getting all sorts of funky behavior due to the thread-unsafety of WinForms. This led us to create the HWNDs in native code and draw to those, with C# providing the rectangles they should be drawn to in screen coordinates.
What kind of funky behavior?
If you mean flickering or a drawing delay, have you tried to lock() on the panel or some other shared object for thread/drawing synchronisation?
Again: what's the exact problem when you send the data to the decoder, receive an image, convert it, and then draw it with an OnPaint handler? (Set up a different thread that loops at 25 fps and calls panel1.Invalidate().)
I guess the crux of the problem is that we have HWNDs created on a non-UI thread and we want to 'simulate' those being embedded in the C# application.
Don't do that. Try to draw the received data in your C# application.
In general, I wouldn't recommend mixing native code and C#. Having the H.264 decoder in native code is the only exception here.
Use your threads to decode the video packets (as you already do), then have one thread that loops and calls Invalidate (as said above). Then have an OnPaint handler for each panel you are displaying a video in. In this handler, get the most recent video picture and draw it with e.Graphics.
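A minimal sketch of that pattern (the names are illustrative): decoder threads hand the newest frame to the panel, and all drawing happens in OnPaint on the UI thread:

using System.Drawing;
using System.Windows.Forms;

class VideoPanel : Panel
{
    private Bitmap _latestFrame;
    private readonly object _frameLock = new object();

    public VideoPanel()
    {
        DoubleBuffered = true;  // reduces flicker
    }

    // Called from a decoder thread with the newest frame.
    public void SubmitFrame(Bitmap frame)
    {
        lock (_frameLock)
        {
            if (_latestFrame != null) _latestFrame.Dispose();
            _latestFrame = frame;
        }
    }

    // Runs on the UI thread; the only place that touches the HWND.
    protected override void OnPaint(PaintEventArgs e)
    {
        lock (_frameLock)
        {
            if (_latestFrame != null)
                e.Graphics.DrawImage(_latestFrame, ClientRectangle);
        }
    }
}

// A separate thread or timer ticking at ~25 fps then marshals the repaint:
//     panel.BeginInvoke((MethodInvoker)(() => panel.Invalidate()));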
I hope this helps, but I'd also need more information about the problem...
I like the DirectShow answer posted earlier, but I wanted to include an additional option that might be easier for you to implement, based on this excerpt from your question:
While functional, all the BitBlt'ing had to happen on the UI thread, which caused it to bog down when displaying more than a couple of videos
My idea is to start from that code and use the Async CTP for Visual Studio 2010, which is currently available and includes a go-live license. From there it should be relatively simple to modify the existing code to be more responsive: just add await and async keywords in a few places and the rest of the code should be largely unchanged.
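A hedged sketch of what that could look like inside the form (the CTP exposed Task.Run as TaskEx.Run; DecodeAndConvert() and videoPanel are hypothetical names, and this assumes the handler is invoked on the UI thread so execution resumes there after the await):

using System.Drawing;
using System.Threading.Tasks;

private async void OnPacketReceived(byte[] h264Packet)
{
    // Decode and color-convert on a thread-pool thread, off the UI thread.
    Bitmap frame = await TaskEx.Run(() => DecodeAndConvert(h264Packet));

    // Back on the UI thread: safe to hand the frame to the control.
    videoPanel.SubmitFrame(frame);
    videoPanel.Invalidate();
}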