I'm developing a C# application that shows previews for files. Basically, there is a tree on the left that shows the disk filesystem entries, and a panel on the right that shows a preview of the file selected in the tree, in a resizable panel that contains a docked PictureBox. Initially I only show previews for image files.
This software is designed for game developers, so I need to support all the image formats; I will use the great ImageMagick library for this purpose. The key point is that some image files can be big, very big, so I have several questions about performance and memory consumption.
Do I need to thread the load of the picture? Always, or only if the picture is very big?
Would it be correct to load the picture directly into the PictureBox, or should I generate a smaller image (like a thumbnail or something similar), save it to disk, and then show that?
Also, does anyone know where I can download big picture files to test my preview with really large files?
IMO, always thread work like this; there is little point in trying to decide what counts as big. For the kind of images we are talking about, I would have thought most would be large enough that, under the wrong conditions (the computer is spending resources on other processes, not just yours), loading could cause a perceivable pause in the UI thread.
Without knowing more, I would just test your implementation when it does the basic stuff and make a judgement call. There is also the question of required quality and desktop resolution of the user - so perhaps this should be configurable in some way.
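To make the threading point concrete, here is a minimal sketch of decoding and downscaling a preview off the UI thread with Magick.NET (the .NET ImageMagick binding) and pushing the result into the PictureBox. Exact Magick.NET signatures vary between versions, and ToBitmap() may live in a System.Drawing companion package in recent releases, so treat this as illustrative rather than drop-in:

```csharp
// Minimal sketch: decode and downscale on a background thread, then
// update the PictureBox back on the UI thread.
// Assumes Magick.NET (the .NET ImageMagick binding) and WinForms.
using System.Drawing;
using System.Threading.Tasks;
using System.Windows.Forms;
using ImageMagick;

public static class PreviewLoader
{
    public static async Task ShowPreviewAsync(string path, PictureBox target)
    {
        // Capture the target size on the UI thread before going async.
        int maxWidth = target.Width, maxHeight = target.Height;

        Bitmap preview = await Task.Run(() =>
        {
            using (var image = new MagickImage(path))
            {
                // Thumbnail() shrinks the image and strips metadata,
                // which keeps memory usage down for huge source files.
                image.Thumbnail(maxWidth, maxHeight);
                return image.ToBitmap();
            }
        });

        // Dispose the previous preview to avoid leaking GDI handles.
        target.Image?.Dispose();
        target.Image = preview;
    }
}
```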
Imo there could be no better place than PolyCount, specifically look in these forums:
http://www.polycount.com/forum/forumdisplay.php?f=42 &
http://www.polycount.com/forum/forumdisplay.php?f=60
When I upload an image, it is sometimes quite big, and I have to create its thumbnail in a specific way.
What I want is: if I declare a thumbnail size of 96x69, uploaded images whose proportions are close to that resolution should simply be scaled, while images with a very different width-to-height ratio, for example 1000x1000, should be cropped so they scale better.
Is there any fast library or built-in code for this? I have tried to do it my own way, but it is not that perfect.
I strongly recommend ImageResizer, which is freely available on NuGet. Resizing images is a sophisticated procedure that can involve a variety of techniques such as cropping, scaling, resizing, moving, trimming, etc., and implementing each of these well is not an easy job. Hence, it's better to use ImageResizer.
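If you do end up rolling it yourself, the crop-to-the-target-aspect-ratio-then-scale idea described in the question is straightforward with plain System.Drawing. A rough sketch (not ImageResizer's API, just GDI+):

```csharp
// Rough GDI+ sketch of "crop to the target aspect ratio, then scale".
using System;
using System.Drawing;
using System.Drawing.Drawing2D;

public static class Thumbnailer
{
    public static Bitmap CropAndScale(Image source, int width, int height)
    {
        double targetRatio = (double)width / height;
        double sourceRatio = (double)source.Width / source.Height;

        // Pick the largest centered region of the source that matches the
        // target aspect ratio, so scaling never distorts the picture.
        int cropWidth, cropHeight;
        if (sourceRatio > targetRatio)
        {
            cropHeight = source.Height;
            cropWidth = (int)(source.Height * targetRatio);
        }
        else
        {
            cropWidth = source.Width;
            cropHeight = (int)(source.Width / targetRatio);
        }

        var srcRect = new Rectangle(
            (source.Width - cropWidth) / 2,
            (source.Height - cropHeight) / 2,
            cropWidth, cropHeight);

        var thumb = new Bitmap(width, height);
        using (var g = Graphics.FromImage(thumb))
        {
            g.InterpolationMode = InterpolationMode.HighQualityBicubic;
            g.DrawImage(source, new Rectangle(0, 0, width, height),
                        srcRect, GraphicsUnit.Pixel);
        }
        return thumb;
    }
}
```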
You can use jQuery to solve this problem, if it's feasible in your project to do it client-side.
Check these links:
http://www.jqueryrain.com/demo/jquery-crop-image-plugin/
and
https://code.google.com/p/resize-crop/ (this one is the best and easiest, I guess).
If you want to do it server-side for some reason, then have a look at the solution mentioned in this question:
Which free image resizing library can I use for resizing and probably serving images?
Hope this will solve your problem.
I am facing a major decision in my WP7 application. Its main purpose is to display images, always one at a time, fullscreen.
I need perfect support for pinch-to-zoom, moving the image (while zoomed) and switching between images via a flick gesture. Most of these things are already implemented in the WebBrowser control, so I would just have to generate proper HTML source with the path to the image in isolated storage.
Or should I use common Image control and implement these gestures on my own? I would like your advice before I make this decision.
Are you targeting Windows Phone 8 or 7?
Generally, I would implement my own.
Issues with using the web-browser:
1. Perf is going to be slower.
2. Memory footprint is going to be higher (though I doubt you really care about this - it's not going to be massive).
3. If you are going to favorite/download images, it will be harder (if at all possible) to navigate to them in the browser.
4. The background will always be white, unless you generate HTML each time to control that bit.
5. The biggest issue is that the zooming in/out will be... janky. You won't be able to control how far out the user can zoom, meaning they can zoom out enough to make the picture very small and it won't "snap back".
It's not a bad stop-gap, and the issues with it are not so big that one can say "no - don't do it", but they are enough that you should reconsider.
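For reference, rolling your own is not a huge amount of code. Here is a hedged sketch of pinch-to-zoom and pan on a plain Image control, using the Silverlight manipulation events available on WP7; the clamping logic and averaging of the scale factors are just illustrative choices:

```csharp
// Sketch: pinch-to-zoom and pan on an Image via a CompositeTransform,
// using the WP7/Silverlight manipulation events.
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

public static class PinchZoomHelper
{
    public static void Attach(Image image)
    {
        var transform = new CompositeTransform();
        image.RenderTransform = transform;
        image.RenderTransformOrigin = new Point(0.5, 0.5);

        image.ManipulationDelta += (s, e) =>
        {
            // Pan by the finger delta.
            transform.TranslateX += e.DeltaManipulation.Translation.X;
            transform.TranslateY += e.DeltaManipulation.Translation.Y;

            // Pinch: Scale is reported as (0,0) while only one finger is down.
            if (e.DeltaManipulation.Scale.X > 0 && e.DeltaManipulation.Scale.Y > 0)
            {
                double factor = (e.DeltaManipulation.Scale.X +
                                 e.DeltaManipulation.Scale.Y) / 2;
                transform.ScaleX *= factor;
                transform.ScaleY *= factor;
            }

            // Don't let the user shrink the image below its natural size -
            // this is the "snap back" control the WebBrowser won't give you.
            if (transform.ScaleX < 1)
            {
                transform.ScaleX = transform.ScaleY = 1;
                transform.TranslateX = transform.TranslateY = 0;
            }

            e.Handled = true;
        };
    }
}
```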
I'm working on a solution for my new project (in C#). I'm trying to make a dynamic image/animation combiner (maybe later it will even work with videos, but that's not required at the moment).
So basically my program reads an XML file with all kinds of instructions the user needs to carry out. In the XML file it is possible that one instruction needs multiple pictures. So when there are 2 or 3 pictures (the maximum amount is 4), they need to be combined into one picture so I can show it in the image object on the main form. It is also important that the pictures keep their proportions so the image doesn't look deformed.
I found a solution with GDI+, but it isn't as good as I wanted and runs pretty slowly on somewhat older computers. Also, combining animations with normal images is a real pain and goes very slowly!
Is there a faster/easier way to do this? Maybe WPF is a solution, but I have no experience with it.
Thanks in advance for any help!
This question has been open for more than a year now, but I found the solution some time ago, so I will post it here in case it is useful for somebody else.
The best way to do this turned out to be setting up a Grid dynamically, and then filling each cell of the grid with the required media (Video, Image, animated GIFs and even a Viewport3D from another XAML file). Creating a Grid in code is really easy, so it should be a good solution for anybody who wants to do this in WPF.
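For anybody looking for a starting point, here is a minimal sketch of building such a Grid in code and filling its cells with mixed media; the 1x2 layout and the file paths are just placeholders:

```csharp
// Minimal sketch: a WPF Grid built in code, one cell holding an image
// and the other a video. Layout and paths are illustrative only.
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;

public static class MediaGridBuilder
{
    public static Grid Build(string imagePath, string videoPath)
    {
        var grid = new Grid();
        grid.ColumnDefinitions.Add(new ColumnDefinition());
        grid.ColumnDefinitions.Add(new ColumnDefinition());
        grid.RowDefinitions.Add(new RowDefinition());

        // An Image keeps its proportions with Stretch.Uniform.
        var image = new Image
        {
            Source = new BitmapImage(new Uri(imagePath)),
            Stretch = Stretch.Uniform
        };
        Grid.SetColumn(image, 0);
        grid.Children.Add(image);

        // MediaElement handles video in the neighboring cell.
        var video = new MediaElement
        {
            Source = new Uri(videoPath),
            LoadedBehavior = MediaState.Play,
            Stretch = Stretch.Uniform
        };
        Grid.SetColumn(video, 1);
        grid.Children.Add(video);

        return grid;
    }
}
```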
I want to create a simple video renderer to play around, and do stuff like creating what would be a mobile OS just for fun. My father told me that in the very first computers, you would edit a specific memory address and the screen would update. I would like to simulate this inside a window in Windows. Is there any way I can do this with C#?
This used to be done because you could get direct access to the video buffer. This is typically not available with today's systems, as the video memory is managed by the video driver and OS. Further, there really isn't a 1:1 mapping of video memory buffer and what is displayed anymore. With so much memory available, it became possible to have multiple buffers and switch between them. The currently displayed buffer is called the "front buffer" and other, non-displayed buffers are called "back buffers" (for more, see https://en.wikipedia.org/wiki/Multiple_buffering). We typically write to back buffers and then have the video system update the front buffer for us. This provides smooth updates, as the video driver synchronizes the update with the scan rate of the monitor.
To write to back buffers using C#, my favorite technique is to use the WPF WriteableBitmap. I've also used the System.Drawing.Bitmap to update the screen by writing pixels to it via LockBits.
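As a rough illustration of the WriteableBitmap route (the window size and fill color here are arbitrary), filling an in-memory buffer and pushing it to the screen looks roughly like this:

```csharp
// Sketch: write raw pixels into a WPF WriteableBitmap - about as close
// to "poking video memory" as managed code normally gets.
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;

public class PixelWindow : Window
{
    private readonly WriteableBitmap _bitmap;

    public PixelWindow()
    {
        const int width = 320, height = 240;
        _bitmap = new WriteableBitmap(width, height, 96, 96, PixelFormats.Bgra32, null);
        Content = new Image { Source = _bitmap };

        // Fill a back buffer in plain memory, then push it to the bitmap.
        var pixels = new byte[width * height * 4];
        for (int i = 0; i < pixels.Length; i += 4)
        {
            pixels[i] = 0;        // blue
            pixels[i + 1] = 128;  // green
            pixels[i + 2] = 255;  // red
            pixels[i + 3] = 255;  // alpha
        }
        _bitmap.WritePixels(new Int32Rect(0, 0, width, height), pixels, width * 4, 0);
    }

    [STAThread]
    public static void Main() => new Application().Run(new PixelWindow());
}
```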
It's a big topic that's outside the scope of this answer (it won't fit; not that I wouldn't ramble about it for hours :-), but this should get you started with drawing in C#:
http://www.geekpedia.com/tutorial50_Drawing-with-Csharp.html
Things have come a long way from the old days of direct memory manipulation, although everything is still tied to pixels.
Edit: Oh, and if you run into flickering problems and get stuck, drop me a line and I'll send you a DoubleBuffered panel to paint with.
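For completeness, that kind of double-buffered panel is only a few lines in WinForms; a minimal sketch:

```csharp
// A WinForms Panel with double buffering enabled, so custom painting
// in OnPaint doesn't flicker.
using System.Windows.Forms;

public class DoubleBufferedPanel : Panel
{
    public DoubleBufferedPanel()
    {
        // Paint into an off-screen buffer and blit it in one go.
        SetStyle(ControlStyles.AllPaintingInWmPaint |
                 ControlStyles.UserPaint |
                 ControlStyles.OptimizedDoubleBuffer, true);
        UpdateStyles();
    }
}
```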
How is scrolling typically handled in a Windows application that has computationally expensive graphics to render? For example, if I am rendering a waveform graph of a sound, after processing the wave form from a peakfile, should I:
Render the entire graphical representation to an in-memory GDI surface, and then simply have a scrollable control change the start/end of the render area?
Render the visible portion of the wave only. In a separate thread, process any new chunks of the graphic that come into view.
Render the visible portion of the wave, plus a buffer. This way, there's less of a chance of the user seeing "blank" or "currently rendering" portions of the waveform. Still, if a user quickly scrolls to a distant area, the whole section will be blank until rendering is complete.
The problem is, many applications handle this in different ways.
For example:
Adobe Acrobat - renders blank pages during scroll unless the page is in the cache. Any pages that would be visible within the document render area are rendered in a separate thread and are presented upon completion.
Microsoft Word - Essentially, the same as above. Documents are separated into distinct pages, so each page is processed/rendered on an as-needed basis and added to a cache.
Internet Explorer - Unknown. It appears that an entire "webpage" is rendered in graphics memory, no matter how many "screens" worth of graphic data it consumes. Theoretically, with a web page that scrolls for 10 or 15 screen lengths, this could mean 50-60MB worth of graphics memory consumption. Could anyone with experience with WebKit or Firefox explain whether the rendering engine favors consuming a ton of memory, or tries to render pieces of the page "on the fly" to conserve memory?
If it helps, my application is based on C#, .NET 3.5, and WinForms.
This is a complexity vs. user experience trade-off. Your third option will give you the best user experience (they can start to see things right away and start to work). It is also the most complex to code (will take the longest to develop, with the most amount of bugs to kill).
The "correct" solution depends on how "expensive" expensive is, and on the demands of your user base. I would select the option with the least complexity that will provide a user experience that will satisfy the bulk of the customers:
Make it as complex as it needs to be, but no more complex than that.
I think this is actually a memory-usage versus a processor-usage tradeoff. Your first option (rendering the entire wave on an appropriately-sized canvas, and then moving that canvas around with only a visible window portion showing) might be the best approach, assuming you have enough memory for it. After an initial rendering delay, the user experience will be smooth and seamless.
If you don't have enough memory for this, then you have to render the visible portion on the fly. I've written this kind of application (a WAV data viewer) many times, and usually GDI+ is more than fast enough to render portions (even large portions) of WAV data in realtime (with a high framerate above 30 fps, which produces perfectly smooth animation). The key to this, however, is not to render each sample value as a separate point - that would be dog slow. What you want to do is, for each pixel on your X axis, scan the corresponding chunk of WAV samples to get the minimum and maximum sample value, and then render a single line between those values.
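A sketch of that per-pixel min/max approach with GDI+ (the 16-bit sample array, the view window parameters, and the color are assumptions about your data, not part of the original answer):

```csharp
// One vertical line per X pixel, spanning the min and max sample in
// that pixel's slice of the currently visible waveform window.
using System.Drawing;

public static class WaveformRenderer
{
    public static void Draw(Graphics g, short[] samples, Rectangle bounds,
                            int firstSample, int lastSample)
    {
        int sampleCount = lastSample - firstSample;
        if (sampleCount <= 0 || bounds.Width == 0) return;

        using (var pen = new Pen(Color.LimeGreen))
        {
            for (int x = 0; x < bounds.Width; x++)
            {
                // The slice of samples covered by this pixel column.
                int start = firstSample + (int)((long)x * sampleCount / bounds.Width);
                int end = firstSample + (int)((long)(x + 1) * sampleCount / bounds.Width);
                if (end <= start) end = start + 1;

                short min = short.MaxValue, max = short.MinValue;
                for (int i = start; i < end && i < samples.Length; i++)
                {
                    if (samples[i] < min) min = samples[i];
                    if (samples[i] > max) max = samples[i];
                }
                if (min > max) continue; // slice fell outside the data

                // Map the sample range [-32768, 32767] onto the control height.
                int yMax = bounds.Top + (int)((32767 - max) / 65536.0 * bounds.Height);
                int yMin = bounds.Top + (int)((32767 - min) / 65536.0 * bounds.Height);
                g.DrawLine(pen, bounds.Left + x, yMax, bounds.Left + x, yMin);
            }
        }
    }
}
```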