OpenGL ES Texture streaming or mapping - c#

Situation
I have a video stream coming from a native library in iOS. I'm trying to display the image in an iPhoneOSGameView using glTexImage2D, with glTexSubImage2D for updates. I can update subregions of the image; I receive a structure that tells me which rectangle has to be updated on the GPU.
The issue
Framerate is quite low. After much profiling of both the OpenGL ES calls and the application code, I have concluded that the application is usually waiting on the texture upload. The slow function is glClear, but I suspect there's an implicit glFlush in there.
My question
I've seen some people talking about glMapBuffer that could allow me to stream the video directly to the texture in user-space. I've looked at pixel buffer objects, but they require OpenGLES 3.0 or an extension in 2.0. Is there an efficient way (for mobile) to stream a texture with minimal memory copying OR a way to transfer the texture from different thread?
Additional information
I'm working in C# with Xamarin and I'm testing on different devices such as an iPod Touch Gen3, an iPad Air 2 and an iPad Pro 12".

there an efficient way (for mobile) to stream a texture with minimal memory copying
Most operating systems have some media framework which allows importing images directly via the EGL_image_external extension, avoiding the need to upload. I'm not sure how it works on iOS, but I strongly suspect it should be possible. It's OS-specific unfortunately, so there is no standard way of doing it.
OR a way to transfer the texture from different thread?
Just create two GL Contexts in the same share group.
https://developer.apple.com/library/ios/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/ConcurrencyandOpenGLES/ConcurrencyandOpenGLES.html
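For the second-context route, a minimal sketch in C# is below, assuming Xamarin.iOS's OpenGLES bindings and OpenTK's ES 2.0 entry points (exact enum and overload names vary between OpenTK versions). The texture is uploaded from a background thread through a second EAGLContext created in the view context's sharegroup; the class and method names are hypothetical.

// Hypothetical sketch: upload texture sub-regions from a worker thread through a
// second EAGLContext created in the main context's sharegroup.
using System;
using OpenGLES;                 // Xamarin.iOS EAGLContext bindings
using OpenTK.Graphics.ES20;     // GL entry points (enum names vary by OpenTK version)

class TextureUploader
{
    readonly EAGLContext uploadContext;
    readonly int textureId;

    public TextureUploader(EAGLContext mainContext, int textureId)
    {
        // A context built from the main context's sharegroup sees the same texture objects.
        uploadContext = new EAGLContext(mainContext.API, mainContext.ShareGroup);
        this.textureId = textureId;
    }

    // Call this on the background thread that receives the decoded frames.
    public void UploadSubRegion(byte[] pixels, int x, int y, int width, int height)
    {
        EAGLContext.SetCurrentContext(uploadContext);

        GL.BindTexture(TextureTarget.Texture2D, textureId);
        GL.TexSubImage2D(TextureTarget.Texture2D, 0, x, y, width, height,
                         PixelFormat.Rgba, PixelType.UnsignedByte, pixels);

        // Flush so the render thread's context sees the finished upload.
        GL.Flush();
    }
}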

Related

Drawing geographic map tiles with C#

I am creating a privately consumed custom map overlay.
I cannot use an open source server like MapServer, because of the sheer volumes of data and the format that it is in.
Originally it was going to be a client-side solution that pushed an ArrayBuffer to the client and rendered the data on a map using WebGL; however, we later found out that our users' PCs lack a usable GPU, so they cannot smoothly run the WebGL rendering.
So I took the concepts and applied them to OpenTK - I created an IIS server handler that creates an OpenTK instance, and renders a requested tile.
For prototype's sake, it works - however I feel this is not the best solution.
What is the most efficient way to render out tiles?
I would love to pre-render the tiles, but there are just too many datasets (adding 1000 more per day!) to be able to efficiently do this.
Is OpenTK a good route to go down (because of the hardware acceleration it can take advantage of?), or is there too much overhead in setting up an instance?
Or are the C# Graphics libraries a better route to learn and use?
Or even - is it worth ditching IIS and C# all together and using a different language/framework for serving the images?
Your server only has a single GPU, so launching multiple instances of OpenTK will be significantly slower than launching a single instance and queuing tiles for rendering. Context switching inside the GPU drivers hurts. The latest version of OpenTK starts up in milliseconds, so that should not be a problem (but you will have to measure).
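A minimal sketch of such a single-instance queue is below; TileRenderer and its RenderTile method are hypothetical stand-ins for whatever wraps your OpenTK context and drawing code.

// Hypothetical sketch: one long-lived worker owns the GL context and renders queued
// tiles, instead of creating a new OpenTK instance per request.
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public sealed class TileRenderQueue
{
    private record TileRequest(int X, int Y, int Zoom, TaskCompletionSource<byte[]> Result);

    private readonly BlockingCollection<TileRequest> queue = new();

    public TileRenderQueue()
    {
        // Single thread: the GL context is created once and only ever touched here.
        new Thread(RenderLoop) { IsBackground = true }.Start();
    }

    public Task<byte[]> EnqueueAsync(int x, int y, int zoom)
    {
        var tcs = new TaskCompletionSource<byte[]>();
        queue.Add(new TileRequest(x, y, zoom, tcs));
        return tcs.Task;    // the request handler awaits this and writes the image out
    }

    private void RenderLoop()
    {
        using var renderer = new TileRenderer();    // hypothetical wrapper around the OpenTK context
        foreach (var request in queue.GetConsumingEnumerable())
        {
            byte[] image = renderer.RenderTile(request.X, request.Y, request.Zoom);
            request.Result.SetResult(image);
        }
    }
}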

How to grab constant stream of bitmap images from webcam in c#

We have a c# application that performs processing on video streams. This is a low-level application that receives each frame in Bitmap format, so basically we need 25 images each second. This application is already working for some of our media sources, but we now need to add a webcam as an input device.
So we basically need to capture bitmap images from a webcam continuously so that we can pass all these frames as a "stream" to our application.
What is the best and simplest way to access the webcam and read the actual frames directly from the webcam as individual images? I am still in the starting blocks.
There is a multitude of libraries out there that allow one to access the webcam, preview its content on a Windows panel and then use screen capturing to grab that image again; unfortunately, this will not give us the necessary performance when capturing 25 frames per second. The alternatives I have found so far:
IVMRWindowlessControl9::GetCurrentImage has been mentioned, but it seems to be aimed at an infrequent snapshot rather than a constant stream of images.
DirectShow.Net is mentioned by many as a good candidate, but it is unclear how to simply grab the images from the webcam, and many sources state a concern that Microsoft no longer supports DirectShow. The implementations I've seen also require ImageGrabber, which is apparently also no longer supported.
The newer alternative from Microsoft seems to be Media Foundation, but my research hasn't turned up any working examples of how this can be implemented (and I'm not sure whether it will run on older versions of Windows such as XP).
DirectX.Capture is an awesome library (see a nice implementation) but it seems to lack the filters and methods to get the video images directly.
I have also started looking at Filters and Filter Graphs, but this seems awfully complex and feels a bit like "reinventing the wheel".
Overall, all the solutions briefly mentioned above seem to rather old. Can someone please point me in the direction of a step-by-step guide for getting a webcam working in C# and grabbing several images per second from it? (We will also have to do audio at some point, so a solution that does not exclude video would be most helpful).
I use AForge.Video (find it here: code.google.com/p/aforge/) because it's a very fast C# implementation. I am very pleased with the performance and it effortlessly captures from two HD webcams at 30 fps on an 8-year-old PC. The data is supplied as a native IntPtr, so it's ideal for further processing using native code or OpenCV.
The OpenCV wrappers Emgu CV and OpenCvSharp both implement rudimentary video capture functionality, which might be sufficient for your purposes; clearly, if you are going to perform image processing or computer vision, you might want to use those anyway.
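For the wrapper route, here is a minimal, hedged sketch using OpenCvSharp; the ProcessFrame callback is a hypothetical placeholder for your own pipeline.

// Hypothetical sketch using OpenCvSharp: poll frames from the default webcam in a loop.
using OpenCvSharp;

class WebcamLoop
{
    public static void Run()
    {
        using var capture = new VideoCapture(0);    // device index 0
        using var frame = new Mat();

        while (capture.Read(frame) && !frame.Empty())
        {
            // frame.Data points at the BGR pixel buffer; copy or convert it into
            // whatever your processing pipeline expects.
            ProcessFrame(frame);
        }
    }

    static void ProcessFrame(Mat frame)
    {
        // placeholder for your per-frame processing
    }
}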
As dr.mo suggests, AForge was the answer.
I used the tutorial from here: http://en.code-bude.net/2013/01/02/how-to-easily-record-from-a-webcam-in-c/
In the tutorial, an event handler fires each time a frame is received from the webcam, and the frame's bitmap is written to a PictureBox. I have simply modified it to save the bitmap to a file rather than to a PictureBox. So I have replaced the following code:
pictureBoxVideo.BackgroundImage = (Bitmap)eventArgs.Frame.Clone();
with the following code:
Bitmap myImage = (Bitmap)eventArgs.Frame.Clone();
string strGrabFileName = String.Format("C:\\My_folder\\Snapshot_{0:yyyyMMdd_hhmmss.fff}.bmp", DateTime.Now);
myImage.Save(strGrabFileName, System.Drawing.Imaging.ImageFormat.Bmp);
and it works like a charm!
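For completeness, a sketch of the surrounding AForge setup the tutorial wires up is below (device enumeration plus the NewFrame handler); the class name is hypothetical, and you would plug the file-saving code above into OnNewFrame.

// Hypothetical sketch of the surrounding AForge setup: enumerate capture devices
// and handle NewFrame once per captured frame.
using System.Drawing;
using AForge.Video;
using AForge.Video.DirectShow;

class WebcamGrabber
{
    private VideoCaptureDevice videoSource;

    public void Start()
    {
        var devices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
        videoSource = new VideoCaptureDevice(devices[0].MonikerString);
        videoSource.NewFrame += OnNewFrame;    // fired for every captured frame
        videoSource.Start();
    }

    private void OnNewFrame(object sender, NewFrameEventArgs eventArgs)
    {
        // Clone the frame; AForge reuses the Bitmap behind eventArgs.Frame.
        using (Bitmap frame = (Bitmap)eventArgs.Frame.Clone())
        {
            // ...save or process the frame here, e.g. the file-saving code above...
        }
    }

    public void Stop()
    {
        if (videoSource != null)
            videoSource.SignalToStop();
    }
}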

How can I edit individual pixels in a window?

I want to create a simple video renderer to play around, and do stuff like creating what would be a mobile OS just for fun. My father told me that in the very first computers, you would edit a specific memory address and the screen would update. I would like to simulate this inside a window in Windows. Is there any way I can do this with C#?
This used to be done because you could get direct access to the video buffer. This is typically not available with today's systems, as the video memory is managed by the video driver and OS. Further, there really isn't a 1:1 mapping of video memory buffer and what is displayed anymore. With so much memory available, it became possible to have multiple buffers and switch between them. The currently displayed buffer is called the "front buffer" and other, non-displayed buffers are called "back buffers" (for more, see https://en.wikipedia.org/wiki/Multiple_buffering). We typically write to back buffers and then have the video system update the front buffer for us. This provides smooth updates, as the video driver synchronizes the update with the scan rate of the monitor.
To write to back buffers using C#, my favorite technique is to use the WPF WriteableBitmap. I've also used System.Drawing.Bitmap to update the screen by writing pixels to it via LockBits.
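As a minimal sketch of the WriteableBitmap approach (assuming a WPF window with an Image control, here called myImage, to display it):

// Minimal sketch: treat a WPF WriteableBitmap as the "video memory" and write
// individual pixels into it; an Image control displays the result.
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

static class PixelSurface
{
    public static WriteableBitmap Create(int width, int height)
    {
        return new WriteableBitmap(width, height, 96, 96, PixelFormats.Bgra32, null);
    }

    public static void SetPixel(WriteableBitmap bitmap, int x, int y, byte r, byte g, byte b)
    {
        // One BGRA pixel written into the 1x1 region at (x, y).
        byte[] pixel = { b, g, r, 255 };
        bitmap.WritePixels(new Int32Rect(x, y, 1, 1), pixel, 4, 0);
    }
}
// Usage inside the window: myImage.Source = bitmap; then call PixelSurface.SetPixel as needed.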
It's a full-featured topic that's outside the scope of this answer (it won't fit, not that I wouldn't ramble about it for hours :-) ), but this should get you started with drawing in C#:
http://www.geekpedia.com/tutorial50_Drawing-with-Csharp.html
Things have come a long way from the old days of direct memory manipulation, although everything is still tied to pixels.
Edit: Oh, and if you run into flickering problems and get stuck, drop me a line and I'll send you a DoubleBuffered panel to paint with.

Pass texture using pointer across process

It's hard to put this into the title, so let me explain.
I have an application that uses Direct3D to display a mesh and DirectShow (VMR9 + allocator) to play video, and then sends each video frame as a texture to the Direct3D portion to be applied onto the mesh. The application needs to run 24/7. At most it's allowed to be restarted every 24 hours, but no more frequently than that.
Now the problem is that DirectShow seems to give problems after a few hours of playback, either due to the codec, the video driver or the video file itself, at which point the application simply refuses to play any more video. The Direct3D portion is still running fine and the mesh is still displayed. Once the application is restarted, everything goes back to normal.
So I'm thinking of splitting the two parts into two different processes, so that whenever the video process fails to play video, I can at least restart it immediately without losing the Direct3D portion.
So here comes the actual question: is it possible to pass the texture from the video player to the Direct3D process by passing a pointer, i.e. retrieve the texture of another process from a pointer? My initial guess is that it's not possible due to protected memory addressing.
I have TCP communication setup on both process, and let's not worry about communicating the pointer at this point.
This might be a crazy idea, but it would work wonders if it's ever possible.
Yes, you can do this with Direct3D 9Ex. This only works on Vista and later, and you must use a Direct3DDevice9Ex. You can read about sharing resources here.
Now the problem is that directshow seems to be giving problem after a few hours of playback, either due to the codec, video driver or video file itself. At which point the application simply refuse playing anymore video.
Why not just fix this bug instead?
If you separate it out as a separate process then I suspect this would not be possible, but if it were a child thread then they would have shared memory addressing I believe.
Passing textures doesn't work.
I'd do it using the following methods:
Replace the VMR with a custom renderer+allocator that places the picture into memory
You allocate memory for pictures from a shared memory pool
Once you receive another picture you signal an event
The Direct3D process waits for this event and updates the mesh with the new texture
Note you'll need to transfer the picture data to the graphics card. The big difference is that this transfer now happens in the Direct3D app and not in the DirectShow app.
You could also try to use the VMR for this. I'm not sure if the custom allocator/renderer parts will allow you to render into shared memory.
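A minimal sketch of the shared-memory hand-off described above, using a named MemoryMappedFile plus an EventWaitHandle (requires .NET 4 or later; the map/event names and the frame size are assumptions, and the D3D upload itself is left to the caller):

// Hypothetical sketch: the DirectShow process copies each decoded frame into a named
// shared-memory block and signals an event; the Direct3D process waits on the event,
// reads the frame and uploads it to its texture.
using System;
using System.IO.MemoryMappedFiles;
using System.Threading;

class FrameChannel : IDisposable
{
    const string MapName = "VideoFrameBuffer";      // hypothetical names agreed by both processes
    const string EventName = "VideoFrameReady";
    const int FrameBytes = 1920 * 1080 * 4;         // assumed BGRA frame size

    readonly MemoryMappedFile map;
    readonly MemoryMappedViewAccessor view;
    readonly EventWaitHandle ready;

    public FrameChannel()
    {
        map = MemoryMappedFile.CreateOrOpen(MapName, FrameBytes);
        view = map.CreateViewAccessor(0, FrameBytes);
        ready = new EventWaitHandle(false, EventResetMode.AutoReset, EventName);
    }

    // Producer side (DirectShow host): call once per decoded frame.
    public void PublishFrame(byte[] frame)
    {
        view.WriteArray(0, frame, 0, frame.Length);
        ready.Set();    // tell the renderer a new frame is available
    }

    // Consumer side (Direct3D app): blocks until frames arrive.
    public void ConsumeFrames(Action<byte[]> uploadToTexture)
    {
        var frame = new byte[FrameBytes];
        while (ready.WaitOne())
        {
            view.ReadArray(0, frame, 0, frame.Length);
            uploadToTexture(frame);    // e.g. lock the D3D texture and copy the bytes in
        }
    }

    public void Dispose()
    {
        ready.Dispose();
        view.Dispose();
        map.Dispose();
    }
}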
Maybe you could use the Sample Grabber in your DirectShow host process to get the image as a system memory buffer. Then you could use WriteProcessMemory to write the data into a pre-agreed address (which you set up over TCP or something) in your Direct3D app.

Minimising DirectShow Memory Consumption

So, I have an application which streams two video sources over a local area connection. Each video has its own filter graph that puts the video through a decoding filter and an Infinite Pin Tee filter; then there is a GMFBridge filter, which is used to turn recording on/off using the WM ASF filter. There is also a video renderer running off a different output of the tee filter.
Now, this all works no problem; however, the memory consumption for the entire application is well over 80 MB, and can hit more than 100 MB when recording is turned on.
I am wondering if there are any tips for minimising DirectShow memory consumption?
I am using DirectShow from C# (.NET 2.0), via the DirectShowLib interop library.
Cheers
My first suggestion with a .NET application is to not trust Task Manager. Use Performance Monitor and add the Private Bytes counter. That will tell you your true memory usage.
Another note: because you are using third-party, closed-source filters, there are really no options for lowering your memory usage besides lowering your video resolution and framerate.
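If you want to track the same counter from inside the application, a minimal sketch (kept .NET 2.0 compatible to match the question) could look like this:

// Hypothetical sketch: read the same "Process \ Private Bytes" counter that Performance
// Monitor shows, from inside the application.
using System;
using System.Diagnostics;

class MemoryProbe
{
    public static void Report()
    {
        string instance = Process.GetCurrentProcess().ProcessName;
        using (PerformanceCounter privateBytes =
                   new PerformanceCounter("Process", "Private Bytes", instance))
        {
            double megabytes = privateBytes.NextValue() / (1024.0 * 1024.0);
            Console.WriteLine("Private bytes: " + megabytes.ToString("F1") + " MB");
        }
    }
}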
