I am working on a Windows Phone 8 app in which I have to show around 1000 pushpins on a Nokia map, which I am able to do. My problem is that the pushpins take a long time to load onto the map, making for a bad user experience.
Is there any method by which I could load the pins in chunks so that the user experience improves?
Just load the push pins asynchronously...
http://msdn.microsoft.com/en-us/library/windows/apps/hh464924.aspx
Rendering 1000 graphics on a mobile device may be too much. Have you considered using clustering to summarize close/overlapping features?
http://developer.nokia.com/Community/Wiki/HERE_Maps_API_-_How_to_cluster_map_markers
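To illustrate the idea, here is a minimal grid-based clustering sketch (the names and the cell size are illustrative, not taken from the HERE article): bucket pins by grid cell and show one marker (the cell's centroid) per occupied cell instead of every individual pin.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class PinClustering
{
    // Groups (lat, lon) points into grid cells of the given size (in degrees)
    // and returns one representative point (the centroid) per occupied cell.
    public static List<(double Lat, double Lon)> ClusterByGrid(
        IEnumerable<(double Lat, double Lon)> pins, double cellSize)
    {
        return pins
            .GroupBy(p => (Math.Floor(p.Lat / cellSize), Math.Floor(p.Lon / cellSize)))
            .Select(g => (g.Average(p => p.Lat), g.Average(p => p.Lon)))
            .ToList();
    }
}
```

With a sensible cell size you go from 1000 pins to a few dozen markers, which is far cheaper to render and easier to tap.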
The basic way to do this is to:
- Add some pins
- wait a small amount of time
- repeat until all pins are loaded
When I did this previously (on WP7) it took some experimentation with how many pins to add and how long to wait to get values that "felt right".
Be aware that you may need to be careful about which threads you perform actions on: don't do the waiting on the UI thread, but you will need to be on the UI thread to add the items/pins.
Also, be sure to test on actual low spec devices (i.e. Lumia 520) rather than the emulator to get a realistic understanding of the user experience.
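A sketch of the add/wait/repeat loop above, assuming a prepared list of pushpins and a placeholder AddPinToMap method for however your map control adds a pin (e.g. a MapOverlay in a MapLayer on the WP8 Map control); the chunk size and interval are exactly the values you would tune by experiment:

```csharp
using System;
using System.Windows.Threading;

// A DispatcherTimer ticks on the UI thread, so adding pins in the Tick
// handler is safe, and the "waiting" happens between ticks without
// blocking the UI.
const int ChunkSize = 50;   // tune by experiment
int nextPin = 0;

var timer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(100) };
timer.Tick += (s, e) =>
{
    for (int i = 0; i < ChunkSize && nextPin < allPins.Count; i++, nextPin++)
        AddPinToMap(allPins[nextPin]);   // placeholder for your map's add call
    if (nextPin >= allPins.Count)
        timer.Stop();                    // all pins loaded
};
timer.Start();
```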
As has been mentioned in other answers, having 1000 separate pins on the map is unlikely to be the best way to show information to the user or to make the best use of resources. There's very little point including pins outside the visible area, and when lots of pins are close together it can be hard to see or interact with specific ones.
By having clustering of pins in close proximity and adjusting which pins are displayed as the user pans and zooms around the map you may also avoid the issue of the time it takes to draw so many pins at once.
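The "only draw what's visible" part can be as simple as a bounding-box filter re-run on every pan/zoom event (a sketch with illustrative names):

```csharp
using System.Collections.Generic;
using System.Linq;

static class ViewportFilter
{
    // Returns only the pins that fall inside the currently visible
    // bounding box (north/south are latitudes, west/east longitudes).
    public static List<(double Lat, double Lon)> VisiblePins(
        IEnumerable<(double Lat, double Lon)> pins,
        double north, double south, double west, double east)
    {
        return pins.Where(p => p.Lat <= north && p.Lat >= south
                            && p.Lon >= west && p.Lon <= east)
                   .ToList();
    }
}
```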
I have the following scenario in mind:
I want to send (via serial port) some commands to a device. The device then sends me back a continuous stream of data (up to 12,000 values per second).
To control some settings I need some buttons to send commands to the device to start/stop/change settings before and during data stream. Also I want to have a real time plot of this data. I will filter this data of course. Also at certain timestamps there will be a signal which indicates that I want to cut out a certain window of the received data.
This means I will have two charts. I have already made some progress using WPF, but now when I interact (zoom/pan) with the lower chart, the upper one freezes noticeably. This is because both have to be refreshed very often!
The work (data receiving/filtering) is done on background threads, but the update of the plot has to be done on the UI thread.
Any ideas how to solve this issue? Maybe using multiple processes?
You should use Reactive Extensions. It was built for this kind of thing.
http://msdn.microsoft.com/en-us/data/gg577609.aspx
Requesting a clear, picturesque explanation of Reactive Extensions (RX)?
In this second link, although the topic is JavaScript, much of what it says is about Reactive Extensions and applies equally to Rx in C#.
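As a rough sketch of what Rx buys you here (SamplesArrived and chart.Append are placeholders for your serial stream and chart API; Buffer and ObserveOnDispatcher come from the System.Reactive NuGet packages, the latter from the WPF-specific one): batch the 12,000 values/sec into a few UI updates per second and marshal each batch onto the dispatcher before touching the chart.

```csharp
using System;
using System.Reactive.Linq;

// Placeholder: an IObservable<double> wrapping the serial-port stream.
IObservable<double> samples = SamplesArrived();

samples
    .Buffer(TimeSpan.FromMilliseconds(40))   // ~25 UI updates/sec
    .Where(batch => batch.Count > 0)         // skip empty windows
    .ObserveOnDispatcher()                   // hop onto the UI thread
    .Subscribe(batch => chart.Append(batch)); // one cheap redraw per batch
```

The UI thread now sees ~25 calls per second with a few hundred values each, instead of 12,000 individual updates.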
I'm making a similar WPF application with real-time waveforms (about 500 Hz). I have a background thread that receives real-time data and a separate thread that processes it and prepares the data for drawing (I have a buffer the "size" of the screen where I put the prepared values). In the UI thread I draw the waveforms to a RenderTargetBitmap, which in the end is rendered to a Canvas. This technique allows me to have a lot of real-time waveforms on the screen and have zoom and pan working without any problems (about 40-50 fps).
Please let me know if you need some technical details; I can share them with you later.
I think you have some code in the UI thread that is not optimized well or can be moved to the background thread.
Btw, do you use any framework for charts?
Edit
philologon is right: you should use Rx for real-time data; it simplifies code a lot. I also use it in my project.
It's a commercial product, but there is a real-time WPF chart which can handle this use case and then some. Please take a look at the tutorial below:
http://www.scichart.com/synchronizing-chartmodifier-mouse-events-across-charts/
There is a live Silverlight demo of this behaviour here:
Sync Multichart Mouse Silverlight Demo
And this chart should be able to handle zooming while inputting values at high speed:
Realtime Performance Demo
Disclosure: I am the owner and tech-lead of SciChart
I want to create a simple video renderer to play around, and do stuff like creating what would be a mobile OS just for fun. My father told me that in the very first computers, you would edit a specific memory address and the screen would update. I would like to simulate this inside a window in Windows. Is there any way I can do this with C#?
This used to be done because you could get direct access to the video buffer. This is typically not available with today's systems, as the video memory is managed by the video driver and OS. Further, there really isn't a 1:1 mapping of video memory buffer and what is displayed anymore. With so much memory available, it became possible to have multiple buffers and switch between them. The currently displayed buffer is called the "front buffer" and other, non-displayed buffers are called "back buffers" (for more, see https://en.wikipedia.org/wiki/Multiple_buffering). We typically write to back buffers and then have the video system update the front buffer for us. This provides smooth updates, as the video driver synchronizes the update with the scan rate of the monitor.
To write to back buffers using C#, my favorite technique is to use the WPF WriteableBitmap. I've also used the System.Drawing.Bitmap to update the screen by writing pixels to it via LockBits.
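For example, a minimal WriteableBitmap sketch (WPF; the bitmap size and the pixel written are arbitrary) that treats the bitmap as your "video memory": you poke bytes into a buffer, then blit it, and an Image control bound to the bitmap displays the result.

```csharp
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

var bmp = new WriteableBitmap(640, 480, 96, 96, PixelFormats.Bgra32, null);
int stride = bmp.PixelWidth * 4;                 // 4 bytes per BGRA pixel
var pixels = new byte[stride * bmp.PixelHeight]; // the "memory" you edit

// Write a red pixel at (x, y), much like old machines wrote to the buffer.
int x = 10, y = 20;
int offset = y * stride + x * 4;
pixels[offset + 2] = 255;   // R (bytes are B, G, R, A)
pixels[offset + 3] = 255;   // A

// Blit the whole buffer; set bmp as an Image.Source to see it on screen.
bmp.WritePixels(new Int32Rect(0, 0, 640, 480), pixels, stride, 0);
```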
It's a fully-featured topic that's outside the scope of this answer (it won't fit, not that I wouldn't ramble about it for hours :-)... but this should get you started with drawing in C#:
http://www.geekpedia.com/tutorial50_Drawing-with-Csharp.html
Things have come a long way from the old days of direct memory manipulation... although everything is still tied to pixels.
Edit: Oh, and if you run into flickering problems and get stuck, drop me a line and I'll send you a DoubleBuffered panel to paint with.
How is scrolling typically handled in a Windows application that has computationally expensive graphics to render? For example, if I am rendering a waveform graph of a sound, after processing the wave form from a peakfile, should I:
Render the entire graphical representation to an in-memory GDI surface, and then simply have a scrollable control change the start/end of the render area?
Render the visible portion of the wave only. In a separate thread, process any new chunks of the graphic that come into view.
Render the visible portion of the wave, plus a buffer. This way, there's less of a chance of the user seeing "blank" or "currently rendering" portions of the waveform. Still, if a user quickly scrolls to a distant area, the whole section will be blank until rendering is complete.
The problem is, many applications handle this in different ways.
For example:
Adobe Acrobat - renders blank pages during scroll unless the page is in the cache. Any pages that would be visible within the document render area are rendered in a separate thread and presented upon completion.
Microsoft Word - Essentially, the same as above. Documents are separated into distinct pages, so each page is processed/rendered on an as-needed basis and added to a cache.
Internet Explorer - Unknown. It appears that the entire webpage is rendered in graphics memory, no matter how many "screens" worth of graphic data it consumes. Theoretically, with a web page that scrolls for 10 or 15 screen lengths, this could mean 50-60 MB of graphics memory consumption. Could anyone with experience with WebKit or Firefox explain whether the rendering engine favors consuming a ton of memory, or tries to render pieces of the page "on the fly" to conserve memory?
If it helps, my application is based on C#, .NET 3.5, and WinForms.
This is a complexity vs. user experience trade-off. Your third option will give you the best user experience (they can start to see things right away and start to work). It is also the most complex to code (will take the longest to develop, with the most amount of bugs to kill).
The "correct" solution depends on how "expensive" expensive is, and on the demands of your user base. I would select the option with the least complexity that will provide a user experience that will satisfy the bulk of the customers:
Make it as complex as it needs to be, but no more complex than that.
I think this is actually a memory-usage versus a processor-usage tradeoff. Your first option (rendering the entire wave on an appropriately-sized canvas, and then moving that canvas around with only a visible window portion showing) might be the best approach, assuming you have enough memory for it. After an initial rendering delay, the user experience will be smooth and seamless.
If you don't have enough memory for this, then you have to render the visible portion on the fly. I've written this application (a WAV data viewer) many times, and usually GDI+ is more than fast enough to render portions (even large portions) of WAV data in realtime (with a high framerate above 30 fps, which produces perfectly smooth animation). The key to this, however, is not to render each sample value as a separate point - this would be dog slow. What you want to do is, for each pixel on your X axis, scan the corresponding chunk of WAV samples to get the minimum and maximum sample value, and then render a single line between these values.
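The min/max scan can be sketched like this (a plain helper with illustrative names, not from any particular library): each screen pixel gets the extremes of the samples that map to it, so you draw one vertical line per pixel instead of one point per sample.

```csharp
using System;

static class WaveformRenderer
{
    // For each of pixelWidth screen columns, returns the (min, max) of the
    // samples mapping to that column.
    public static (float Min, float Max)[] MinMaxPerPixel(float[] samples, int pixelWidth)
    {
        var result = new (float Min, float Max)[pixelWidth];
        for (int px = 0; px < pixelWidth; px++)
        {
            // Sample range [start, end) covered by this pixel.
            int start = (int)((long)px * samples.Length / pixelWidth);
            int end = (int)((long)(px + 1) * samples.Length / pixelWidth);
            float min = samples[start], max = samples[start];
            for (int i = start + 1; i < end; i++)
            {
                if (samples[i] < min) min = samples[i];
                if (samples[i] > max) max = samples[i];
            }
            result[px] = (min, max);
        }
        return result;
    }
}
```

Drawing is then just one DrawLine per pixel column between Min and Max, which stays fast no matter how many samples are behind each pixel.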
I've successfully managed to play up to 8 videos in sync in a single video window with multiple streams, using DirectShowLib for C#. The problem is that the video window plays only on a single screen - when I try to have it span multiple screens, it does not work. The app window spans correctly, but the video plays on only one screen. Any ideas?
Thanks a lot in advance.
I assume that you're using the VMR with multiple input pins. The VMR is going to render to a single surface, which needs to be on a single display. You should be able to render your streams to multiple VMRs, with each VMR placed on a separate display within your maximised window.
It sounds as though you have all the streams in a single graph. You can separate them into different graphs, with each graph having one source and one renderer. Starting the graphs in sync means using IMediaFilter::Run instead of IMediaControl::Run:
Pick one graph as the master.
Make sure the master has a clock. This is normally done when going active, but you can force it to happen earlier by calling SetDefaultSyncSource on the graph.
Query the graphs for IMediaFilter, get the clock from the master graph using GetSyncSource and pass it to the other graphs using SetSyncSource.
Pause all the graphs.
Wait until GetState returns S_OK (the pause is complete).
Get the time from the graph and add 10ms or so.
Call IMediaFilter::Run on all graphs, passing this time (now + 10ms) as the parameter.
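The steps above can be sketched with DirectShowLib roughly as follows (assuming graphs holds already-built IGraphBuilder instances, one per stream, with graphs[0] as the master; HRESULT checking omitted for brevity):

```csharp
using System.Collections.Generic;
using DirectShowLib;

static void RunInSync(List<IGraphBuilder> graphs)
{
    // 1-3: make sure the master has a clock, then share it with the others.
    ((IFilterGraph)graphs[0]).SetDefaultSyncSource();
    ((IMediaFilter)graphs[0]).GetSyncSource(out IReferenceClock clock);
    for (int i = 1; i < graphs.Count; i++)
        ((IMediaFilter)graphs[i]).SetSyncSource(clock);

    // 4-5: pause all graphs and wait until each pause has completed.
    foreach (var g in graphs)
        ((IMediaFilter)g).Pause();
    foreach (var g in graphs)
        ((IMediaFilter)g).GetState(10000, out FilterState _);

    // 6-7: start all graphs at the same reference time, ~10 ms in the
    // future (REFERENCE_TIME is in 100 ns units, so 10 ms = 100,000).
    clock.GetTime(out long now);
    long start = now + 100_000;
    foreach (var g in graphs)
        ((IMediaFilter)g).Run(start);
}
```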
I am looking for a decent programmatic approach to delivering the illusion of "riding in a van". Here is the synopsis:
I have a friend who is opening up a bar in San Francisco with a room interior designed to look like the inside of a van (picture the inside of the Scooby-Doo Mystery Machine). Set into the walls are "windows", and behind those windows are monitors. There are two servers (for the left and right sides) delivering simultaneous presentations from pre-recorded footage of a vehicle driving down the road.
At the moment the screens are split across a shared workspace, so as items in the background move from right to left the impression of motion is flawless. However, once you move the screens apart, there is no delay accounting for the empty "wall space": the natural delay one would expect to perceive as an object passes from one screen, through the wall, to the next.
Is there a managed code approach I could take to construct a project that could take a time delay argument for delivering content between monitors in this case? Or is there even an off-the-shelf program that might do the trick as well?
EDIT:
What I am really looking for is advice on how to program this: can I load a Windows Media file and stream it to separate monitors on separate threads with a slight delay?
Sure, you just have to do playback on both monitors separately and delay one of the videos.
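A minimal WPF sketch of that (leftPlayer and rightPlayer are placeholders for two MediaElements, one per monitor's window, each with LoadedBehavior="Manual"; the file path and the 250 ms delay are illustrative values you would tune to the wall gap):

```csharp
using System;
using System.Windows.Threading;

// Both players load the same footage.
leftPlayer.Source = new Uri(@"C:\footage\drive.wmv");
rightPlayer.Source = new Uri(@"C:\footage\drive.wmv");

// Start the first screen immediately.
leftPlayer.Play();

// Start the second screen after the "wall travel" delay.
var delay = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(250) };
delay.Tick += (s, e) =>
{
    rightPlayer.Play();
    delay.Stop();
};
delay.Start();
```

You don't need separate threads for this: MediaElement does its own decoding off the UI thread, and the timer only staggers the start times.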