I'm currently working on a machine vision project. The issue is saving all of the images fast enough so that the queue of images doesn't build up in RAM and drain the user's memory. Is there any other method available for fast image saving?
This method helps with the CPU issue, but it's not fast enough: the queue of images still builds up and overloads the RAM, so I don't know what else I can do to solve both problems.
The fastest way to write images in HALCON is to use its proprietary format, .hobj. It is much faster than any other lossless compression; you can see a benchmark in the HALCON example write_image_benchmark.hdev.
The only disadvantage is that you cannot open this format without a HALCON license.
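If the bottleneck is the queue backing up while frames are written, it can also help to decouple grabbing from writing. Below is a minimal sketch assuming the HALCON/.NET bindings (HalconDotNet); the class name, queue capacity, and output paths are placeholders of mine, not part of the original answer. A bounded queue applies back-pressure instead of letting RAM fill up, and a dedicated thread writes the .hobj files:

using System.Collections.Concurrent;
using HalconDotNet;

class HobjWriterQueue
{
    // Bounded capacity (64 is an arbitrary choice) makes the grabber block
    // briefly when the disk falls behind, instead of exhausting RAM.
    static readonly BlockingCollection<HImage> Pending =
        new BlockingCollection<HImage>(boundedCapacity: 64);

    // Call this from the acquisition loop.
    public static void Enqueue(HImage image) => Pending.Add(image);

    // Run this on a dedicated writer thread.
    public static void WriterLoop()
    {
        int n = 0;
        foreach (HImage image in Pending.GetConsumingEnumerable())
        {
            // 'hobj' is HALCON's native format, the fastest lossless option.
            HOperatorSet.WriteImage(image, "hobj", 0, $"frames/frame_{n++:D6}");
            image.Dispose();
        }
    }
}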
I'm working with the Kinect and I want to save the video and audio streams to a file (it doesn't matter if I can play it back or not; I want to save the raw data).
My question is: if I'm constantly writing to disk (25 fps), the computer may lag, right? So what I'm trying to do is save to the file in an efficient way. I thought of buffering a list of images (about 5 seconds' worth) and then writing them all to the file at once. What do you think? Is this a correct approach?
Or is there another way to do this without losing performance?
Thank you
Writing to disk is a low-intensity task for the CPU; it mostly just uses a memory buffer and some memory bandwidth. However, if you have to access the disk while you are writing to it, you will experience increased delay.
As for how to do it: I have never worked with video before, but I think it is most easily done by using a buffer to hold the captured frames and then writing from that buffer to disk (see the sketch below).
Saving the frames into fixed arrays of 125 images (5 s × 25 fps) sounds like an inefficient way to buffer them; a queue that a writer thread drains continuously is simpler and smooths out disk latency.
As for avoiding performance loss, there is really no way around doing the work; however, I cannot see you losing much performance, as the bitrate of the captured video and audio is comparatively low.
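As a minimal sketch of that buffering idea (the class shape and sizes are my own assumptions; the Kinect event wiring is not shown): the capture callback only enqueues raw frames, and a background thread drains the queue into one growing file, so capture never blocks on the disk.

using System.Collections.Concurrent;
using System.IO;
using System.Threading;

class RawFrameRecorder
{
    private readonly BlockingCollection<byte[]> _frames = new BlockingCollection<byte[]>();
    private readonly Thread _writer;

    public RawFrameRecorder(string path)
    {
        _writer = new Thread(() =>
        {
            // One large sequential file with a 1 MB buffer; the OS coalesces writes.
            using (var stream = new FileStream(path, FileMode.Create, FileAccess.Write,
                                               FileShare.Read, bufferSize: 1 << 20))
            {
                foreach (byte[] frame in _frames.GetConsumingEnumerable())
                    stream.Write(frame, 0, frame.Length);
            }
        });
        _writer.Start();
    }

    // Call this from the Kinect frame-ready event handler.
    public void OnFrame(byte[] rawPixels) => _frames.Add(rawPixels);

    public void Stop()
    {
        _frames.CompleteAdding(); // lets the writer drain the queue and exit
        _writer.Join();
    }
}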
I have a video analytics program that processes assorted frames from a video several hours long.
The video is likely going to be an MP4, but may be in other formats going forward.
At the moment, I have a C# wrapper around an ffmpeg call to extract an individual frame at the requested time. (I'm using the ffmpeg.exe binary, not the libraries directly.)
At the moment, this all works. But it's slow. Very slow.
I've found ways to improve the speed by storing the extracted frames in a ramdisk while they're being processed, changing the stored image format, etc.
I just wanted to check whether anyone can think of a better way to pull individual frames out, at split-second accuracy.
I know this is probably possible with DirectShow etc.; I went straight to FFmpeg as I've used it before, but if DirectShow is likely to be faster I'll gladly change!
In Windows you have native APIs to process, and in particular read from, media files:
DirectShow
Media Foundation
Both provide support for MP4 (H.264 video): DirectShow as a framework extended by a third-party MP4 demultiplexer and H.264 decoder (if needed; Windows 7 also provides built-in ones), and Media Foundation natively or extended by third-party components, depending on OS version.
Both can be interfaced from .NET via the open-source wrappers DirectShow.NET and Media Foundation .NET, respectively. This works out way faster than the FFmpeg CLI for individual frames. Also note that you would be able to obtain frames incrementally, without needing to seek to a specific time and redo duplicated work, not to mention avoiding the process startup/initialization overhead. Alternatively, you could use the FFmpeg/Libav libraries through a C# wrapper and get similar performance.
You can change the position of the seek offset parameter (-ss); the order matters for speed. If the video contains valid metadata, you can seek through it much faster.
If you put the offset before the input file, the position is estimated from the bitrate, which is not always exact (in the case of a variable bitrate), but it is much faster. The accurate way is to walk through the video (offset parameter after the input file), but this takes time.
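For example (the timestamp and file names are placeholders), the fast, estimated seek puts -ss before the input:

ffmpeg -ss 00:12:34.5 -i input.mp4 -frames:v 1 fast.png

while the accurate seek puts -ss after the input, decoding up to the requested time:

ffmpeg -i input.mp4 -ss 00:12:34.5 -frames:v 1 accurate.png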
This is a bit of a weird question, but with the functionality of C++, C#, and Objective-C as they stand, is there any possible way for video content to be uploaded while it is being recorded? So as you record the video, it would be compressed and uploaded to a website.
Would this involve cutting the video into small parts as you record, with hardly noticeable stops and starts during the recording?
If anyone knows whether this is at all possible, please let me know. Sorry for the odd question.
You've just asked for streaming media -- something that's been done for over a decade (and, if you overlook "television", something that's probably been underway in research settings for several decades).
Typically, the video recorder will feed the raw data through filters of some sort -- correct white balance, sharpen or soften the video, image stabilize, and then compress the raw data using a codec. Most codec designs will happily take a block of input, work on it, and then produce a block of encoded data ready for writing. Instead of writing to disk, you could "write" to a socket opened to a remote machine.
Or, if you're working with an API that only writes to disk, you could easily re-read the data off disk as it is being written and send the data to a remote site. You'd have to "follow" the writing using something like tail -f's magic ability to follow the file as it is written. (Heck, if you're just bodging something together for a one-off, I'd even recommend using tail -f as part of your system.)
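A rough C# sketch of that follow-the-file idea (host, port, path, buffer size, and the stop condition are all placeholders): open the growing recording with a share mode that tolerates the writer, and relay new bytes to a remote socket as they appear.

using System.IO;
using System.Net.Sockets;
using System.Threading;

class FileFollower
{
    static void Follow(string path, string host, int port)
    {
        using (var client = new TcpClient(host, port))
        using (NetworkStream net = client.GetStream())
        // FileShare.ReadWrite lets us read while the recorder keeps writing.
        using (var file = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
        {
            var buffer = new byte[64 * 1024];
            while (true) // real code needs a stop signal when recording ends
            {
                int read = file.Read(buffer, 0, buffer.Length);
                if (read > 0)
                    net.Write(buffer, 0, read);
                else
                    Thread.Sleep(100); // at end-of-file for now; wait for more data
            }
        }
    }
}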
It depends on whether the application recording to disk is locking the file. My guess is that, unless you wrote the recording software, the application locks the file (or doesn't even create the real file) until it stops recording. If you are writing the recording software as well, then yes, you can do this; you would just use synchronized threads.
I need to edit (increase the height of) an image on the fly.
The file is typically 5000×4000 pixels. I see the memory usage shoot up to peak levels when I create a bitmap of large dimensions and call the Graphics.DrawImage method on the bitmap instance.
How do I get rid of the OutOfMemoryException? Is there a way to work with large bitmaps in C#?
The problem is the huge amount of memory required for the operation; yours can run into gigabytes once working copies are made, so the solution could be to use a Stream and process the file in chunks.
Or the best option would be to use a third-party library. Below are some for .NET:
AForge
Image Resizer
Also have a look at this SO question: https://stackoverflow.com/questions/158756/what-is-the-best-image-manipulation-library
It depends on your application-specific requirements (it's not very clear from your post), but generally, when working with big media files (images, sound, video), I think a really good solution is:
Memory-Mapped Files
Save your image on disk in a memory-mapped file and resize it there, freeing your RAM as much as possible from the data you probably don't need fast access to (at that moment, at least).
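As a rough illustration of the idea (the file name and window size are my own choices): walk the file through a memory-mapped view one window at a time, so only that window is resident at once.

using System;
using System.IO;
using System.IO.MemoryMappedFiles;

class ChunkedProcessor
{
    static void Main()
    {
        long length = new FileInfo("large-image.raw").Length;
        const long window = 64 * 1024 * 1024; // map 64 MB at a time (arbitrary)

        using (var mmf = MemoryMappedFile.CreateFromFile("large-image.raw", FileMode.Open))
        {
            for (long offset = 0; offset < length; offset += window)
            {
                long size = Math.Min(window, length - offset);
                using (var view = mmf.CreateViewAccessor(offset, size))
                {
                    // Read/modify this slice of the pixel data here; the rest
                    // of the file stays on disk until its window is mapped.
                }
            }
        }
    }
}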
Hope this helps.
Regards.
I have a problem with image compression. I need to compress a lot of files (700-900 KB) down to 70-80 KB without loss of quality (or with only a small loss). I found the menu item "Save for Web & Devices..." in Photoshop, and it works great, but I don't want to use Photoshop programmatically. Maybe someone knows how to solve this problem with other third-party components or frameworks?
Thanks for any ideas!
.NET has a number of image decoding/encoding libraries, often tied to a particular GUI framework (e.g. in Windows Forms you have System.Drawing.Image, and for WPF, see the Imaging Overview chapter on MSDN).
There are also third-party libraries specialized in image conversion/compression that you can find online (both free and non-free).
Generally, though, the amount of saving you get from compressing an image depends highly on the original format. If you already have JPEG photos with normal compression (quality around 85%), there is not much you can do to make them smaller except resize them. If you have raw bitmaps (e.g. BMP, or uncompressed/low-compression TIFF), you can expect quite large savings with most compressed formats.
When choosing image format, consider this:
Photos and similar: JPEG will often do fine. Good savings with reasonable quality loss
Screenshots and similar: PNG will generally give best results (PNG is lossless). JPEG will often create highly visible artifacts on screenshots
Compressing an already compressed image (e.g. PNG, JPEG) with a general-purpose compression algorithm like ZIP or RAR will in practice not give you any savings; you may actually end up with a bigger file.
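For the JPEG case in .NET, here is a minimal System.Drawing sketch (file names and the quality value of 75 are placeholders): pick the built-in JPEG encoder and pass an explicit quality parameter, which is roughly what "Save for Web" does.

using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;

class JpegRecompress
{
    static void Main()
    {
        using (var image = Image.FromFile("input.tif"))
        using (var parameters = new EncoderParameters(1))
        {
            // Find the GDI+ JPEG encoder among the installed codecs.
            ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
                .First(c => c.FormatID == ImageFormat.Jpeg.Guid);

            // Lower quality -> smaller file; 70-85 is a common trade-off.
            parameters.Param[0] = new EncoderParameter(Encoder.Quality, 75L);
            image.Save("output.jpg", jpegCodec, parameters);
        }
    }
}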
You can have a look at the FreeImage project. It has a C# wrapper that you can use.
ImageMagick lets you batch-process files and offers just about everything you could possibly ask for when it comes to handling images.
E.g., to resize every image in a folder (overwriting the originals) to fit within QVGA, preserving the aspect ratio, do
mogrify -resize 320x240 *.jpg
To force the exact 320x240 dimensions, ignoring the aspect ratio, append ! to the geometry:
mogrify -resize 320x240! *.jpg
If you need to traverse a directory structure, this is how you can do it on *nix systems (also overwriting the originals):
find . -type f -name '*.jpg' -exec convert {} -resize 800x800 {} \;
There is also a quality switch available; see the -quality option in the ImageMagick documentation.
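For example, to recompress every JPEG in a folder at quality 80 (overwriting the originals; 80 is an arbitrary choice):

mogrify -quality 80 *.jpg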