How to program a delay of video presentation across shared monitors? - c#

I am looking for a decent programmatic approach to delivering the illusion of "riding in a van". Here is the synopsis:
I have a friend who is opening a bar in San Francisco with a room interior designed to look like the inside of a van (picture the inside of the Scooby Doo Mystery Machine). Set into the walls are “windows”, and behind those windows are monitors. There are two servers (one for the left side, one for the right) delivering simultaneous presentations of pre-recorded footage of a vehicle driving down the road.
At the moment the screens are arranged as one shared desktop, so as items in the background move from right to left the impression of motion is flawless. However, once you move the screens apart there is no allowance for the empty "wall space": the natural delay one would expect to perceive as an object passes from one screen, behind the wall, to the next.
Is there a managed-code approach I could take to build something that accepts a time-delay argument for delivering the content across the monitors? Or is there an off-the-shelf program that might do the trick?
EDIT:
What I am really looking for is advice on how to program this: can I load a Windows Media file and stream it to separate monitors on separate threads with a slight delay?

Sure, you just have to run playback on each monitor separately and delay one of the videos.
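A minimal sketch of that idea, assuming a WPF application running on an STA UI thread. The window sizes, positions, monitor arrangement, and the delay value are placeholders you would tune for the actual monitor layout and wall gap: one full-screen MediaElement per monitor, both loading the same file, with the second one started after the delay.

// Minimal WPF sketch: play the same clip on two monitors, starting the
// second MediaElement after a configurable delay. Positions, sizes and the
// delay value are placeholders to tune for the real wall gap.
using System;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

public class PlaybackWindow : Window
{
    public readonly MediaElement Player = new MediaElement
    {
        LoadedBehavior = MediaState.Manual,   // we call Play() ourselves
        Stretch = Stretch.Fill
    };

    public PlaybackWindow(string videoPath, double left)
    {
        WindowStyle = WindowStyle.None;       // borderless "van window"
        Left = left;                          // position on the target monitor
        Top = 0;
        Width = 1920;
        Height = 1080;
        Player.Source = new Uri(videoPath);   // absolute path to the footage
        Content = Player;
    }
}

public static class VanRide
{
    // Start one side immediately and the other after `delay`, so objects
    // appear to spend time crossing the wall space between the screens.
    public static async Task StartAsync(string videoPath, TimeSpan delay)
    {
        var first  = new PlaybackWindow(videoPath, left: 0);
        var second = new PlaybackWindow(videoPath, left: 1920);
        first.Show();
        second.Show();

        first.Player.Play();
        await Task.Delay(delay);              // e.g. TimeSpan.FromMilliseconds(250)
        second.Player.Play();
    }
}

For a long looping clip you would probably also want some way to keep the two sides from drifting, for example by periodically re-syncing the second player's Position against the first, offset by the same delay.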

Related

Graph restarts after monitor change

I have a DirectShow graph which records and displays a video source. When I move the Video Renderer window to another monitor, what I had recorded gets deleted and recording starts again. I searched and found this link, which says that changing monitors stops and restarts the graph. How can I stop the graph from being restarted? I don't want to lose my recording while switching between monitors.
Thanks
The behavior you are describing is basically by design (even though the side effect is annoying and confusing). Moving a video renderer between monitors makes it re-allocate the hardware resources used to present video, and this in turn requires a state transition. For the recording, a state transition means closing and re-opening the file.
Your options are either to split into separate presentation and recording graphs, or to use a custom allocator/presenter to handle presentation yourself the way you want. Graph splitting (what Wimmel suggests in another answer) is probably the preferable way, and it adds other degrees of freedom as well.
There is probably a good reason that the EC_DISPLAY_CHANGED message behaves that way, so I don't know what the disadvantages are if you handle this message yourself and don't restart the graph.
Instead you could separate the rendering graph from the recording using GMFBridge. Use one graph to capture and record. Use the second graph only for rendering, so restarting that graph would not stop the recording.
Edit: Possibly you need to disconnect before the second graph is restarted. That will mean you do need to process the EC_DISPLAY_CHANGED message, even if you use GMFBridge.
m_pController->BridgeGraphs(NULL, NULL); // disconnect the bridge before the rendering graph is rebuilt

How to load 1000 push pins in nokia maps in small chunks

I am working on a Windows Phone 8 app. I have to show around 1000 pushpins on a Nokia map, which I am able to do. But my problem is that the pushpins take a long time to load on the map, making for a bad user experience.
Is there a way to load the pins in chunks so that the user experience is better?
Just load the push pins asynchronously...
http://msdn.microsoft.com/en-us/library/windows/apps/hh464924.aspx
Rendering 1000 graphics on a mobile device may be too much. Have you considered using clustering to summarize close/overlapping features?
http://developer.nokia.com/Community/Wiki/HERE_Maps_API_-_How_to_cluster_map_markers
The basic way to do this is to:
- Add some pins
- wait a small amount of time
- repeat until all pins are loaded
When I did this previously (on WP7) it took some experimentation with how many pins to add and how long to wait to find values that "felt right".
Be aware that you need to be careful about which threads you perform actions on: don't do the waiting on the UI thread, but you do need to be on the UI thread to add the items/pins (see the sketch below).
Also, be sure to test on actual low-spec devices (e.g. a Lumia 520) rather than the emulator to get a realistic understanding of the user experience.
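A rough sketch of that approach, assuming WP8's async/await support is available. PinData, the addPinToMap delegate, ChunkSize, and DelayBetweenChunks are all placeholders for your own pin type, map-layer code, and tuned values:

// Chunked pin loading: add a small batch, yield the UI thread briefly,
// repeat. Call from the UI thread (e.g. OnNavigatedTo); the awaits keep the
// page responsive while the adds themselves stay on the UI thread.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class PinData
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
    public string Label { get; set; }
}

public static class PinLoader
{
    private const int ChunkSize = 25;                       // pins per batch (tune this)
    private static readonly TimeSpan DelayBetweenChunks =
        TimeSpan.FromMilliseconds(50);                      // pause between batches (tune this)

    public static async Task LoadPinsAsync(IList<PinData> pins, Action<PinData> addPinToMap)
    {
        for (int i = 0; i < pins.Count; i += ChunkSize)
        {
            int end = Math.Min(i + ChunkSize, pins.Count);
            for (int j = i; j < end; j++)
            {
                addPinToMap(pins[j]);                       // placeholder: add one pin to your map layer
            }
            await Task.Delay(DelayBetweenChunks);           // let the map render this batch
        }
    }
}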
As has been mentioned in other answers, having 1000 separate pins on the map is unlikely to be the best way to show information to the user or to make the best use of resources. There's very little point including pins outside the visible area, and when lots of pins are close together it can be hard to see or interact with specific ones.
By clustering pins that are in close proximity, and adjusting which pins are displayed as the user pans and zooms around the map, you can also avoid the cost of drawing so many pins at once.
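For illustration, a very simple grid-based clustering pass; the cell size and the Cluster type are placeholders (PinData is the placeholder type from the sketch above), and the HERE Maps wiki linked earlier describes a more complete approach. The idea is to bucket pins by map cell and show one marker with a count per occupied cell:

// Simple grid-based clustering: group pins into cells whose size depends on
// the current zoom level, then show one marker (with a count) per occupied
// cell instead of every individual pin.
using System;
using System.Collections.Generic;
using System.Linq;

public class Cluster
{
    public double Latitude { get; set; }    // centroid of the clustered pins
    public double Longitude { get; set; }
    public int Count { get; set; }          // how many pins this marker represents
}

public static class PinClusterer
{
    public static List<Cluster> ClusterByGrid(IEnumerable<PinData> pins, double cellSizeDegrees)
    {
        return pins
            .GroupBy(p => new
            {
                Row = (int)Math.Floor(p.Latitude / cellSizeDegrees),
                Col = (int)Math.Floor(p.Longitude / cellSizeDegrees)
            })
            .Select(g => new Cluster
            {
                Latitude = g.Average(p => p.Latitude),
                Longitude = g.Average(p => p.Longitude),
                Count = g.Count()
            })
            .ToList();
    }
}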

Monotouch C# play many sounds simultaneously with volume control

I'm making a war game for iOS using MonoTouch and C#. I'm running into some problems with the audio sound effects.
Here's what I require: The ability to play many sound effects simultaneously (possibly up to 10-20 at once) and the ability to adjust volume (for example, if the user zooms in on the battlefield the gun shot volume gets louder).
Here are my problems:
With AVAudioPlayer, I can adjust the volume but I can only play one sound per thread. So if I want to play multiple sounds I have to have dozens and dozens of threads going just in case they overlap... This is a war game; picture 20 soldiers on the battlefield. Each soldier would have a "sound thread" to play gunfire sounds when they shoot, because it is possible that every soldier could happen to fire at exactly the same time. I don't have a problem with making lots of threads, but my game already has dozens of threads running all the time, and adding dozens more could get me into trouble... right? So I'd rather not go down the road of adding dozens more threads unless I have to...
With SystemSound, I can play as many sounds as I want on the same thread, but I can't adjust the volume... So my workaround here is, for every sound effect I have, to save it four times at four different volumes. That is a big pain... Is there any way to adjust the volume with SystemSound?
Both of these answer some of my requirements, but neither seems to be a seamless fit. Should I just go the AVAudioPlayer multi-threading nightmare road? Or the SystemSound multi-file-with-different-volume-levels nightmare road? Or is there a better way to do this?
Thanks in advance.
Finally found the solution to my problem. AVAudioPlayer IS capable of playing multiple sounds at once, but only with certain file formats... The details are available at the link below. The reason I couldn't play my sound effects simultaneously was that the file format was compressed, and the iPhone only has one hardware decompressor.
http://brainwashinc.wordpress.com/2009/08/14/iphone-playing-2-sounds-at-once/
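A minimal MonoTouch sketch along those lines. The file path is a placeholder, and it assumes the effects are stored in an uncompressed format such as Linear PCM .wav/.caf so several players can run at once without competing for the single hardware decoder:

// One AVAudioPlayer per simultaneous sound effect, each with its own volume.
// No extra threads are needed: Play() returns immediately and playback
// continues in the background.
using System.Collections.Generic;
using MonoTouch.AVFoundation;
using MonoTouch.Foundation;

public class SoundEffectPool
{
    private readonly List<AVAudioPlayer> activePlayers = new List<AVAudioPlayer>();

    // Fire a shot at a volume derived from e.g. the current zoom level.
    public void PlayGunshot(float volume)
    {
        var url = NSUrl.FromFilename("Sounds/gunshot.wav");  // placeholder path, uncompressed PCM
        var player = AVAudioPlayer.FromUrl(url);
        player.Volume = volume;                               // 0.0f .. 1.0f
        player.PrepareToPlay();
        player.Play();
        activePlayers.Add(player);                            // keep a reference so it isn't collected mid-playback

        // Prune finished players so the list doesn't grow without bound.
        activePlayers.RemoveAll(p => !p.Playing);
    }
}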

Loading components on splash screen

This might be a primitive question, but I would really like to get more information. I have seen that many professional programs have a splash screen with a progress bar and some text indicating that the program is loading...
I want to know what CAN or SHOULD be loaded during that time. Do they load classes or something? I am a noob and do not know what requires loading before a program actually starts.
In summary, yes. They load classes.
If the program's design is modular enough, the outer shell can be small enough to run almost immediately on most devices (think mobile phones here) and display a progress bar while loading behavior (features provided by external modules, assemblies in C#) in the background.
However, that's not always the best approach to program loading. If your user interface can be up and running in less than five seconds on a typical client machine, it may not even be worth a progress bar.
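As an illustration, a WinForms-flavored sketch of the pattern. The form layout and the simulated work loop are placeholders; the point is that the heavy initialization runs off the UI thread while the splash form reports progress:

// Show a splash form with a progress bar while the expensive startup work
// (loading module assemblies, warming caches, etc.) runs on a background task.
using System;
using System.Threading;
using System.Threading.Tasks;
using System.Windows.Forms;

public class SplashForm : Form
{
    private readonly ProgressBar progress = new ProgressBar { Dock = DockStyle.Bottom };
    private readonly Label status = new Label
    {
        Dock = DockStyle.Fill,
        TextAlign = System.Drawing.ContentAlignment.MiddleCenter
    };

    public SplashForm()
    {
        FormBorderStyle = FormBorderStyle.None;
        StartPosition = FormStartPosition.CenterScreen;
        Controls.Add(status);
        Controls.Add(progress);
    }

    public void Report(int percent, string message)
    {
        progress.Value = percent;
        status.Text = message;
    }
}

public static class Startup
{
    public static async Task RunAsync()
    {
        var splash = new SplashForm();
        splash.Show();

        // Progress<T> captures the UI SynchronizationContext, so Report calls
        // made from the background task are marshalled back to the UI thread.
        IProgress<int> progress = new Progress<int>(
            percent => splash.Report(percent, "Loading... " + percent + "%"));

        await Task.Run(() =>
        {
            for (int step = 1; step <= 10; step++)   // placeholder for real work:
            {                                        // load plugin assemblies, read config,
                Thread.Sleep(200);                   // build caches, connect to services...
                progress.Report(step * 10);
            }
        });

        splash.Close();
        // ...then create and show the real main window.
    }
}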
Well, it depends on the program. I use a loading screen when my game is randomly generating terrain, which can take anywhere from 1 second to 2 minutes.

Scroll Buffering in Windows Applications

How is scrolling typically handled in a Windows application that has computationally expensive graphics to render? For example, if I am rendering a waveform graph of a sound, after processing the waveform from a peak file, should I:
Render the entire graphical representation to an in-memory GDI surface, and then simply have a scrollable control change the start/end of the render area?
Render the visible portion of the wave only. In a separate thread, process any new chunks of the graphic that come into view.
Render the visible portion of the wave, plus a buffer. This way, there's less of a chance of the user seeing "blank" or "currently rendering" portions of the waveform. Still, if a user quickly scrolls to a distant area, the whole section will be blank until rendering is complete.
The problem is, many applications handle this in different ways.
For example:
Adobe Acrobat - renders blank pages during scroll unless the page is in the cache. Any pages that would be visible within the document render area are rendered in a separate thread and are presented upon completion.
Microsoft Word - Essentially, the same as above. Documents are separated into distinct pages, so each page is processed/rendered on an as-needed basis and added to a cache.
Internet Explorer - Unknown. It appears that the entire "webpage" is rendered in graphics memory, no matter how many "screens" worth of graphic data it consumes. Theoretically, with a web page that scrolls for 10 or 15 screen lengths, this could mean 50-60MB worth of graphics memory consumption. Could anyone with experience with WebKit or Firefox explain whether the rendering engine favors consuming a ton of memory, or tries to render pieces of the page "on the fly" to conserve memory?
If it helps, my application is based on C#, .NET 3.5, and WinForms.
This is a complexity vs. user experience trade-off. Your third option will give you the best user experience (they can start to see things right away and start to work). It is also the most complex to code (will take the longest to develop, with the most amount of bugs to kill).
The "correct" solution depends on how "expensive" expensive is, and on the demands of your user base. I would select the option with the least complexity that will provide a user experience that will satisfy the bulk of the customers:
Make it as complex as it needs to be, but no more complex than that.
I think this is actually a memory-usage versus a processor-usage tradeoff. Your first option (rendering the entire wave on an appropriately-sized canvas, and then moving that canvas around with only a visible window portion showing) might be the best approach, assuming you have enough memory for it. After an initial rendering delay, the user experience will be smooth and seamless.
If you don't have enough memory for this, then you have to render the visible portion on the fly. I've written this application (a WAV data viewer) many times, and usually GDI+ is more than fast enough to render portions (even large portions) of WAV data in real time (with a high frame rate above 30 fps, which produces perfectly smooth animation). The key to this, however, is not to render each sample value as a separate point - that would be dog slow. What you want to do is, for each pixel on your X axis, scan the corresponding chunk of WAV samples to get the minimum and maximum sample values, and then render a single vertical line between those values.
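A sketch of that min/max rendering loop, assuming the samples are already loaded and normalized to the -1.0 .. 1.0 range. The method and parameter names are placeholders, and startSample/samplesPerPixel would come from the current scroll position and zoom level:

// For each X pixel, scan the chunk of samples that maps to that column,
// find its min and max, and draw one vertical line between them.
using System;
using System.Drawing;

public static class WaveformRenderer
{
    public static void Draw(Graphics g, float[] samples, int startSample,
                            double samplesPerPixel, Rectangle bounds)
    {
        float midY = bounds.Top + bounds.Height / 2f;
        float halfHeight = bounds.Height / 2f;

        using (var pen = new Pen(Color.SteelBlue))
        {
            for (int x = 0; x < bounds.Width; x++)
            {
                int first = startSample + (int)(x * samplesPerPixel);
                if (first >= samples.Length) break;
                int last = Math.Min(first + (int)Math.Max(1.0, samplesPerPixel), samples.Length);

                float min = samples[first];
                float max = samples[first];
                for (int i = first + 1; i < last; i++)   // one pass over this column's chunk
                {
                    if (samples[i] < min) min = samples[i];
                    if (samples[i] > max) max = samples[i];
                }

                // One vertical line per pixel column instead of one point per sample.
                g.DrawLine(pen,
                    bounds.Left + x, midY - max * halfHeight,
                    bounds.Left + x, midY - min * halfHeight);
            }
        }
    }
}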
