The project I'm working on requires the ability to transform any of the 4 corners of an image. As GDI+ unfortunately doesn't have this capability, we're resorting to using DirectX's 3D graphics.
While I have a square mesh with a texture showing successfully on-screen, I need to be able to generate an image from this rendering, with the background set to transparent. Is there a way to efficiently achieve this?
Preferably, I'd like to do this without using a Control to initialize a device. Alternatively, I don't mind creating a custom, invisible Control that generates an image for me.
Edit:
Actually, I just realized a transparent background isn't strictly necessary, but it would help the performance of our project a bit.
Anyway, I've had some luck doing something like this, but it is excessively slow. Perhaps there's a better method?
// Grab the current render target (the back buffer) to read back later
Surface mSurface = mDevice.GetRenderTarget(0);
// Render the visualization
mDevice.Clear(ClearFlags.Target, Color.Transparent, 1.0f, 0);
mDevice.BeginScene();
/* Do some amazing stuff */
// Exit rendering
mDevice.EndScene();
mDevice.Present();
// Copy the rendered surface into a GDI+ bitmap
GraphicsStream mGraphics = SurfaceLoader.SaveToStream(
ImageFileFormat.Bmp, mSurface);
Image mImage = new Bitmap(mGraphics, false);
Well, if you use D3D for the final rendering to the screen, then you can easily do what you're describing with render-to-texture.
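To illustrate, here is a minimal render-to-texture sketch in the same Managed DirectX style as the question's code; mDevice is taken from the question, while width, height, and the A8R8G8B8 format are assumptions:
// Create an offscreen texture to render into (size is an assumption)
Texture renderTexture = new Texture(mDevice, width, height, 1,
    Usage.RenderTarget, Format.A8R8G8B8, Pool.Default);
Surface renderSurface = renderTexture.GetSurfaceLevel(0);
// Redirect rendering from the back buffer to the offscreen surface
Surface backBuffer = mDevice.GetRenderTarget(0);
mDevice.SetRenderTarget(0, renderSurface);
mDevice.Clear(ClearFlags.Target, Color.Transparent, 1.0f, 0);
mDevice.BeginScene();
// ... draw the textured quad ...
mDevice.EndScene();
// Restore the back buffer; renderTexture can now be sampled directly
// in later draw calls, with no slow readback into a GDI+ Bitmap
mDevice.SetRenderTarget(0, backBuffer);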
Related
I'm developing a custom control in .NET MAUI. In my case, I have to update hundreds of points on each invalidate, so I'm going for native rendering. On Android, I rendered the points to a bitmap and drew that bitmap once; that performance is fine for me, and the same has to be done on iOS.
I'm new to native iOS development, and I tried to achieve the same as above using an image context, as below:
// Start an offscreen image context matching the cached image's size
UIGraphics.BeginImageContextWithOptions(image.Size, false, 0);
// Draw the previously cached rendering first
image.Draw(new CGPoint(0, 0));
// Draw the needed shapes on top using the image context
image = UIGraphics.GetImageFromCurrentImageContext();
UIGraphics.EndImageContext();
Finally, I draw the stored image to the screen. But this doesn't look performant in my case.
What I want is to store the existing rendering in one object and then render the current points on top of it. Please suggest how this can be achieved.
I am creating a mobile app using Unity. I would like to place a photo which fills the screen and then place some 3D objects on top of this photo. The photo is created during runtime and is saved to the persistent data folder.
What is the best method to achieve this? The options I see are Raw Image/Image/Sprite. However, I have been unable to achieve the above-mentioned goal using any of these component types.
What about making a panel on a canvas, setting the image as its background, and setting the canvas to fit the screen size?
Then place the objects in front of it, manually or as children of said canvas but higher in the hierarchy.
I don't really understand what you intend to do; I'm just trying to help.
Editing my original answer: you need to use RawImage to be able to easily load images that are not marked as Sprite in the editor.
You load images from a file like this:
Texture2D texture = new Texture2D(1, 1); // LoadImage needs a non-null starting point
var path = System.IO.Path.Combine(Application.streamingAssetsPath, "your_file.jpg");
byte[] bytes = System.IO.File.ReadAllBytes(path);
texture.LoadImage(bytes);
rawImage.texture = texture;
As far as keeping it filling the screen, the AspectRatioFitter component is likely to do the job.
To get 3D objects rendering in front, set your canvas to 'Screen Space - Camera' and point it to your camera. This behaves similarly to world space in regard to rendering order (it will be sorted with the 3D scene), but the canvas size will match the camera viewport, so AspectRatioFitter will be able to do its job.
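As a rough sketch in code (uses UnityEngine and UnityEngine.UI; the canvas, uiCamera, rawImage, and texture references are assumptions, and all of this can also be set in the inspector):
// Render the canvas through the camera so 3D objects can appear in front
canvas.renderMode = RenderMode.ScreenSpaceCamera;
canvas.worldCamera = uiCamera;
canvas.planeDistance = 100f; // photo sits behind anything closer to the camera

// Keep the photo filling the screen while preserving its aspect ratio
var fitter = rawImage.gameObject.AddComponent<AspectRatioFitter>();
fitter.aspectMode = AspectRatioFitter.AspectMode.EnvelopeParent;
fitter.aspectRatio = (float)texture.width / texture.height;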
I'm making an app for Windows 8.1 where it is important to be able to zoom in and examine images in detail. If I just open up the bitmap in an image viewer and zoom in, the pixels stay sharp. However, when I load the image into my app and use the ScrollViewer to zoom in, the result is blurry, as it appears to be trying to interpolate pixel values for some sort of anti-aliasing.
How can I get it so that when I zoom in it shows (as best it can) the exact pixels of the image? In particular I'm using the image as the background to a canvas which is contained in a scroll viewer.
I've looked around on here and MSDN and found several related questions, but as yet they don't seem to have solved my exact problem.
A discussion on WPF
A similar issue with a canvas
Older related question on pixel art
A way to use bitmap encoding (which I couldn't get to work)
Similarly phrased question
There is no easy way to go about this; your best option is to use DirectX to render the image much larger, which mitigates the effect of WinRT automatically interpolating pixel values.
As someone explained on MSDN, and based on this outstanding feature request, I can't see any other way to accomplish this.
Use Win2D
Win2D is a DirectX interop library for WinRT. With it you can render the image at a much larger size and then set the default zoom level of the ScrollViewer to be very small. When you zoom in, it then appears that you can see the individual pixels without any fuzzy/blurry interpolation, because each apparent pixel is actually a group of 64 or so pixels all of one color. I couldn't find any way to actually override what kind of interpolation gets done, so this seems to be the best method.
Download Win2D as a NuGet package using Visual Studio; Win2D's quickstart guide does a good job explaining some of the setup.
Set up your canvas and its draw event, and use the DrawImage function to render the image larger:
<ScrollViewer x:Name="Scroller" ZoomMode="Enabled"
MinZoomFactor="0.1" MaxZoomFactor="20">
<canvas:CanvasControl x:Name="canvas" Draw="canvas_Draw" CreateResources="create"/>
</ScrollViewer>
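The XAML above also wires up a CreateResources handler; a possible implementation that loads the bitmap for DrawImage might look like this (the file name and the bitmap field are assumptions):
CanvasBitmap bitmap;

void create(CanvasControl sender, CanvasCreateResourcesEventArgs args)
{
    // Defer drawing until the bitmap has finished loading
    args.TrackAsyncAction(LoadBitmap(sender).AsAsyncAction());
}

async Task LoadBitmap(CanvasControl sender)
{
    bitmap = await CanvasBitmap.LoadAsync(sender, "Assets/image.png");
}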
In the canvas_Draw handler:
canvas.Width = original.Width * 10;
canvas.Height = original.Height * 10;
args.DrawingSession.DrawImage(bitmap,
    new Rect(0, 0, original.Width * 10, original.Height * 10),
    new Rect(0, 0, original.Width, original.Height),
    1.0f, CanvasImageInterpolation.NearestNeighbor);
Make sure to set your canvas to be larger as well
In your code-behind, set the default zoom of your ScrollViewer appropriately so your image appears to be the same size.
In the page constructor:
Scroller.ZoomToFactor (0.1f);
Other Ways I Looked Into That Didn't Work
Making the canvas very large and using BitmapEncoder/BitmapDecoder with the interpolation mode set to NearestNeighbor: this introduced lots of visual artifacts, even when scaled to a power-of-two size.
RenderOptions only appears to be usable in WPF, not WinRT.
It may also be possible to use some image manipulation library to simply make the bitmap 10x or so as large and then use that, but I ended up using Win2D instead.
I'm desperately trying to render images onto a 3D surface in WPF using nearest-neighbor sampling. Below is an example of what I currently have, in all its blurriness. The ImageBrush is given a 64x64 texture.
I've tried decorating the XAML with RenderOptions.BitmapScalingMode="NearestNeighbor" everywhere from the Window down to the ImageBrush, without success. I've tried writing a custom pixel shader, but couldn't get a satisfactory result; it appears that I cannot even set the texture sampler's filtering mode from within the shader code. I've considered workarounds, such as scaling up the source texture myself, but this would still leave artifacts at two of the edges, where it begins interpolating into the next pixel.
Bottom line: Is there any way I can accomplish the effect of nearest neighbor image sampling on a 3D model in WPF?
I just had the same problem; the answer in this thread provides a workaround.
You can use a VisualBrush with an Image; WPF will then respect the BitmapScalingMode set on the image. You can also set the CachingHint (also via RenderOptions), which might improve performance (I did not measure it, though).
var image = new Image
{
    Source = new BitmapImage(new Uri("pack://application:,,,/Resources/image.png"))
};
// Cache the rendered visual and sample it with nearest neighbor
RenderOptions.SetCachingHint(image, CachingHint.Cache);
RenderOptions.SetBitmapScalingMode(image, BitmapScalingMode.NearestNeighbor);
var material = new DiffuseMaterial(new VisualBrush(image));
That works for simple scenes. As I said, I didn't measure the performance, but I imagine the VisualBrush will hurt for bigger scenes (compared to an ImageBrush). Personally, I'll switch over to Direct3D interop and render the scene via SharpDX when I hit that problem.
I can't figure out any way to do this. I don't want to scale it with .Draw; I want to just change the width.
The best way is to use .Draw, particularly if you are scaling a lot, especially as your object should handle its own drawing.
Alternatively, you can render the texture to a render target at the size you want and then save the render target's texture.
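A minimal sketch of that approach in XNA (spriteBatch and the source Texture2D are assumed to exist; newWidth and newHeight are illustrative):
// Draw the source texture into a render target of the desired size
RenderTarget2D target = new RenderTarget2D(GraphicsDevice, newWidth, newHeight);
GraphicsDevice.SetRenderTarget(target);
GraphicsDevice.Clear(Color.Transparent);
spriteBatch.Begin();
spriteBatch.Draw(source, new Rectangle(0, 0, newWidth, newHeight), Color.White);
spriteBatch.End();
GraphicsDevice.SetRenderTarget(null);
// A RenderTarget2D is itself a Texture2D, so it can be saved or reused
using (var stream = System.IO.File.OpenWrite("resized.png"))
{
    target.SaveAsPng(stream, newWidth, newHeight);
}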
Searching for a quick code example turned up the following, which says the same thing:
How to resize and save a Texture2D in XNA?