Is it possible to crop a screenshot at the GPU level - C#

My final goal is to get screenshots of a cropped area on the screen as fast as possible.
I'm using the Desktop Duplication API with SharpDX to retrieve the full screenshot from Windows, following the official sample:
https://github.com/sharpdx/SharpDX-Samples/blob/master/Desktop/Direct3D11.1/ScreenCapture/Program.cs
I'm currently cropping at the CPU level with a pinned-memory bitmap and Buffer.BlockCopy on the raw bytes, which, in my opinion, can't get much faster.
Is it possible to apply a simple crop operation at the GPU level?
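One approach (a sketch, not from the question): Direct3D 11 can copy a sub-rectangle of the duplicated desktop texture entirely on the GPU with CopySubresourceRegion, so only the cropped region is ever staged for CPU readback. This assumes the device and the full-screen Texture2D acquired in the linked sample; CropOnGpu is an illustrative name.

    using SharpDX.Direct3D11;
    using SharpDX.DXGI;

    static Texture2D CropOnGpu(SharpDX.Direct3D11.Device device,
                               Texture2D desktopTexture,
                               int x, int y, int width, int height)
    {
        // CPU-readable staging texture sized to the crop rectangle only.
        var desc = new Texture2DDescription
        {
            Width = width,
            Height = height,
            MipLevels = 1,
            ArraySize = 1,
            Format = Format.B8G8R8A8_UNorm, // desktop duplication pixel format
            SampleDescription = new SampleDescription(1, 0),
            Usage = ResourceUsage.Staging,
            BindFlags = BindFlags.None,
            CpuAccessFlags = CpuAccessFlags.Read,
            OptionFlags = ResourceOptionFlags.None
        };
        var cropped = new Texture2D(device, desc);

        // Copy only the region of interest; this runs entirely on the GPU.
        var region = new ResourceRegion(x, y, 0, x + width, y + height, 1);
        device.ImmediateContext.CopySubresourceRegion(
            desktopTexture, 0, region, cropped, 0, 0, 0, 0);
        return cropped;
    }

Mapping the small staging texture with MapSubresource then reads back only width x height x 4 bytes instead of the whole screen.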

Related

How can I control the quality of a screenshot in C#

I made a small Windows Forms app that takes screenshots, but the final image looks a bit blurry compared to screenshots taken with other screenshot software.
What I mean is that the pixel density of screenshots from my app seems lower than that of screenshots from other apps.
I used this tutorial: http://www.developerfusion.com/code/4630/capture-a-screen-shot/
How can I control the pixel density or quality of the screenshots my app takes, without resizing the images?
The problem was just the image format: JPEG is lossy, so it produces lower-quality screenshots than PNG. Saving the image as PNG gave me a really sharp, high-quality screenshot.
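For reference, a minimal sketch of the fix on top of the CopyFromScreen approach from the linked tutorial (file names are placeholders):

    using System.Drawing;
    using System.Drawing.Imaging;
    using System.Windows.Forms;

    var bounds = Screen.PrimaryScreen.Bounds;
    using (var bitmap = new Bitmap(bounds.Width, bounds.Height))
    {
        using (var g = Graphics.FromImage(bitmap))
        {
            // Copy the whole primary screen into the bitmap.
            g.CopyFromScreen(bounds.Location, Point.Empty, bounds.Size);
        }
        bitmap.Save("screenshot.png", ImageFormat.Png);    // lossless: stays sharp
        // bitmap.Save("screenshot.jpg", ImageFormat.Jpeg); // lossy: looks blurry
    }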

losing quality when resizing with ImageResizer

I'm using the ImageResizer .NET library. It works as expected, but one image misbehaves.
I've uploaded the image below. I've already tried several things, like format=jpg&quality=100, only width=220, and different sizes, but it always adds a blurry border around the image.
The original image is a png.
This one is the original image:
This one is resized by the ImageResizer:
And this one is resized with photoshop:
EDIT:
If you're running into the same issue, try setting up the SpeedOrQuality plugin. I've set it to speed=3 and the image is sharp again.
Vector graphics require different resampling algorithms than photographs.
ImageResizer V4 includes higher quality image resampling options under the FastScaling plugin.
For graphics (non-photographic images), I suggest playing with &f.sharpen=0..100, &down.preserve=-5..5, and &down.filter=Robidoux. Make sure &fastscale=true and FastScaling is installed.
You can certainly find a good configuration for your rasterized vector art and set up a preset for it. FastScaling is capable of much better resampling than Photoshop - on par with Lightroom, in fact.
Enabling fastscaling alone helps substantially (?width=200&fastscale=true):
Adding sharpening gives a very clear result: (?width=220&fastscale=true&f.sharpen=100):
Visibly crisper than Photoshop:
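For completeness, the same settings could also be applied through ImageResizer's managed API rather than the query string (a sketch assuming the FastScaling plugin is installed; file paths are placeholders):

    using ImageResizer;

    // Same parameters as the query string above, applied server-side.
    var settings = new ResizeSettings(
        "width=220&fastscale=true&f.sharpen=100&down.filter=Robidoux");
    ImageBuilder.Current.Build("logo-original.png", "logo-220.png", settings);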
Each time you save a JPEG you lose quality, because the image is re-encoded.
I would recommend using the same quality setting the original image was saved with; that should give the best results.
Using a higher quality is not recommended: the encoder will mistake approximations made by the previous encoding for detail, resulting in artifacts like the blurry border.
Aside from that, one should usually not use a quality above 95 for JPEG encoding.

Zooming an extremely large image

Is there any predefined control in WPF or VS2010 to implement image-zooming functionality (like Google Maps) for a bitmap displayed over a panel using C#? My bitmap will be at least 8 GB in size.
Thanks in advance
Murali
There is DeepZoom for Silverlight. There is no such thing in WPF; it was planned for WPF 4 but removed before RTM.
Update:
Loading images of this size is pretty uncommon. You should consider tiling, as others suggested. Also consider whether you really need to load all the data at once: if the image is, for example, 30000x30000 pixels, the user neither needs to, nor can, see all of that data at once. Use tiling and an appropriate image format (JPEG etc.) for each zoom level; a sketch of the tile selection follows the links below.
Relevant links:
Single objects still limited to 2 GB in size in CLR 4.0?
Pushing the Limits of Windows: Physical Memory
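A minimal sketch of the tile-selection idea (the tile size, path scheme, and the VisibleTiles name are illustrative, not from any specific library):

    using System;
    using System.Collections.Generic;

    static IEnumerable<string> VisibleTiles(int zoomLevel, double viewX, double viewY,
                                            double viewWidth, double viewHeight,
                                            int tileSize = 256)
    {
        // Each zoom level halves the resolution, so full-image coordinates
        // shrink by 2^zoomLevel before being divided into tiles.
        double scale = Math.Pow(2, zoomLevel);
        int firstCol = (int)(viewX / scale / tileSize);
        int firstRow = (int)(viewY / scale / tileSize);
        int lastCol  = (int)((viewX + viewWidth)  / scale / tileSize);
        int lastRow  = (int)((viewY + viewHeight) / scale / tileSize);

        // Only these tiles need to be decoded; everything else stays on disk.
        for (int row = firstRow; row <= lastRow; row++)
            for (int col = firstCol; col <= lastCol; col++)
                yield return $"tiles/z{zoomLevel}/{col}_{row}.jpg"; // hypothetical layout
    }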

C# GDI+/System.Drawing.Graphics - creating a buffer and manually blitting?

I'm creating a CAD viewer which deals with very large image files, and I'm trying to optimise it for as high a framerate and as low a memory footprint as possible.
It uses GDI+ for rendering onto a panel.
Its current flaw is image rendering. Some of the files I'm using reference particularly big images (8000x8000 pixels). I've optimised memory usage by only loading them when they become visible and disposing of them when they're not, which reduces the chance of the program running out of memory while avoiding loading and unloading the images too often; however, rendering the images themselves (context.DrawImage) still carries a very large overhead.
I'm now exploring ways of blitting the images into a smaller buffer of some sort, rendering this (generally much smaller) buffer, and then refreshing/rebuilding it when the zoom level changes significantly.
The problem is, I can't find any provision for this in GDI whatsoever. Can anyone suggest how I could achieve it?
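One way the buffer idea from the question could look in GDI+ (a sketch only; ScaledImageCache is an illustrative name, not an existing API):

    using System;
    using System.Drawing;
    using System.Drawing.Drawing2D;

    sealed class ScaledImageCache : IDisposable
    {
        private Bitmap _buffer;
        private float _bufferZoom = -1f;

        public void Draw(Graphics g, Image source, float zoom, Point location)
        {
            // Rebuild the buffer only when the zoom changes significantly.
            if (_buffer == null || Math.Abs(zoom - _bufferZoom) > 0.1f)
            {
                _buffer?.Dispose();
                int w = Math.Max(1, (int)(source.Width * zoom));
                int h = Math.Max(1, (int)(source.Height * zoom));
                _buffer = new Bitmap(w, h);
                using (var bg = Graphics.FromImage(_buffer))
                {
                    bg.InterpolationMode = InterpolationMode.HighQualityBicubic;
                    bg.DrawImage(source, 0, 0, w, h); // expensive, done rarely
                }
                _bufferZoom = zoom;
            }
            g.DrawImageUnscaled(_buffer, location);   // cheap, done every paint
        }

        public void Dispose() => _buffer?.Dispose();
    }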
I don't think GDI+ is designed for such high-speed updates of images. If you are scrolling the image and tracking the mouse on each move, try shifting sections of the existing image and filling in only the space opened up by the shift; essentially, reuse the tricks programmers relied on to scroll and pan graphics smoothly back when CPUs were slow and RAM was small.
If you're creating a new graphics application that needs a high framerate and are looking for suggestions, then I suggest abandoning GDI+ and using WPF. WPF uses hardware acceleration and supports retained-mode graphics; this has much better performance for less work than GDI+.
If there is some limitation that forbids WPF, please explain it in your question. This is relevant because such limitations can also impact GDI+ drawing.
GDI was binned in favour of Direct3D, as 3D elements came into the equation anyway. Images were turned into single thumbnails and larger tiles that are loaded in and out as required.
I faced a similar problem when developing my own GIS application. The best solution I found (even when using WPF) is to tile big images and display only the portions that are visible. That being said, I would switch to WPF, not only for the reasons given in the above answers but also for its good imaging support. See this link for more information.

WPF or DirectX for panning and zooming a 5000x5000 picture?

My current code is normal WPF with a custom image view. I need to pan and zoom a very high-resolution picture, but of course this takes a lot of CPU power.
My question is: if I change the control from an image view to something DirectX-based, will this improve my zoom and panning experience a lot, or isn't there such a big difference?
(The graphics card we use is an Nvidia Ion 2, and the CPU is an Intel Atom at up to 2 GHz.)
2D acceleration is not as perfected as 3D is. See benchmarks here.
I believe using the picture as a texture and controlling the camera for pan and zoom should increase performance a lot.
To my knowledge, WPF uses DirectX to render its content, so I wouldn't expect that change to give you a big performance boost.
If you're having performance issues, I would look at your caching algorithm instead.
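For example, a cached, transform-based pan/zoom keeps the scaled bitmap on the GPU instead of re-scaling it on the CPU; this is only a sketch of the idea, with placeholder names and file paths:

    using System;
    using System.Windows.Controls;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;

    var image = new Image
    {
        Source = new BitmapImage(new Uri("big-picture.png", UriKind.Relative)),
        CacheMode = new BitmapCache()   // rasterise once, reuse on the GPU
    };

    var zoom = new ScaleTransform(1.0, 1.0);
    var pan  = new TranslateTransform(0, 0);
    image.RenderTransform = new TransformGroup { Children = { zoom, pan } };

    // Panning/zooming now only updates transform values; WPF reuses the
    // cached surface instead of re-scaling the bitmap each frame.
    void ZoomTo(double factor) { zoom.ScaleX = factor; zoom.ScaleY = factor; }
    void PanBy(double dx, double dy) { pan.X += dx; pan.Y += dy; }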
WPF graphics aren't very powerful. We have moved all graphics rendering to Direct3D 9 and only display the 3D scene in a D3DImage control.
For large bitmap rendering, we've found the best way is to create a Direct3D texture. Creating it is reasonably fast, and rendering is very fast as long as the image dimensions stay within what the GPU natively supports (caps.MaxTextureWidth, caps.MaxTextureHeight). That is typically 8k x 8k or 16k x 16k, which covers bitmaps of hundreds of megabytes and should be sufficient for your use too.
To see the performance that can be obtained this way, you can download our Chart control and set a large bitmap as the background image of a geographic map. Then you can also see how fast it is to zoom and pan; it beats WPF's built-in image handling for sure. :-)
(I'm one of LightningChart developers at Arction)
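The WPF side of that approach could look roughly like this (a sketch only; PresentFrame is an illustrative name, and the IDirect3DSurface9 pointer must come from your own Direct3D 9 layer):

    using System;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Interop;

    var d3dImage = new D3DImage();
    var host = new Image { Source = d3dImage }; // place this in the visual tree

    // Call once per rendered frame with the IDirect3DSurface9 pointer.
    void PresentFrame(IntPtr backBuffer)
    {
        d3dImage.Lock();
        d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, backBuffer);
        d3dImage.AddDirtyRect(new Int32Rect(0, 0,
            d3dImage.PixelWidth, d3dImage.PixelHeight));
        d3dImage.Unlock();
    }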
