Does anybody know if there is a way to ensure that TextBoxes scale nicely when rendered with RenderTargetBitmap?
See my code below; I am attempting to produce an output file of a Canvas that contains various elements.
Images scale perfectly with no loss of quality at all, and the same goes for my InkCanvas, but TextBoxes just don't scale well at all.
RenderTargetBitmap renderTargetBitmap = new RenderTargetBitmap();
//Produce a PNG output with a pixel size of 5x the onscreen canvas
await renderTargetBitmap.RenderAsync(MaskArea, (int)MaskArea.Width * 5, (int)MaskArea.Height * 5);
var pixelBuffer = await renderTargetBitmap.GetPixelsAsync();
var pixels = pixelBuffer.ToArray();
var displayInformation = DisplayInformation.GetForCurrentView();
var folder = await ApplicationData.Current.LocalFolder.CreateFolderAsync("Print", CreationCollisionOption.OpenIfExists);
var file = await folder.CreateFileAsync("Canvas" + ".png", CreationCollisionOption.ReplaceExisting);
using (var stream = await file.OpenAsync(FileAccessMode.ReadWrite))
{
var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.PngEncoderId, stream);
encoder.BitmapTransform.InterpolationMode = BitmapInterpolationMode.Cubic;
encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied, (uint)renderTargetBitmap.PixelWidth, (uint)renderTargetBitmap.PixelHeight, displayInformation.RawDpiX, displayInformation.RawDpiY, pixels);
await encoder.FlushAsync();
}
I can't really understand why my InkCanvas in particular renders perfectly, yet TextBoxes lose quality.
As a footnote, I'm aware that one way around it would be to manually add the text to my output file afterwards with Win2D, but I thought I'd check here first in case I'm missing something.
Thanks
Output screenshot: the quality loss in the TextBox text may only be visible when zoomed in.
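For reference, here is roughly what the Win2D fallback I mentioned would look like (Microsoft.Graphics.Canvas APIs); the text position, font size and draw calls are just placeholders, not my actual layout:

// Render at 5x scale with Win2D and draw the text directly onto the target.
var device = CanvasDevice.GetSharedDevice();
var target = new CanvasRenderTarget(device,
    (float)MaskArea.Width * 5, (float)MaskArea.Height * 5, 96);

using (var ds = target.CreateDrawingSession())
{
    ds.Clear(Colors.Transparent);
    // ... draw the images / ink strokes here ...
    ds.DrawText("TextBox content", 50, 50, Colors.Black,
        new CanvasTextFormat { FontSize = 24 * 5 });
}

var folder = await ApplicationData.Current.LocalFolder.CreateFolderAsync("Print", CreationCollisionOption.OpenIfExists);
var file = await folder.CreateFileAsync("Canvas.png", CreationCollisionOption.ReplaceExisting);
using (var stream = await file.OpenAsync(FileAccessMode.ReadWrite))
{
    await target.SaveAsync(stream, CanvasBitmapFileFormat.Png);
}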
Related
I'm loading an image's bytes and trying to apply them to a Texture2D.
Don't worry about async/await/thread problems...
UWP code:
StorageFile storageFile = StorageFile.GetFileFromPathAsync(filePath).AsTask().GetAwaiter().GetResult();
// get image size
IRandomAccessStreamWithContentType random = storageFile.OpenReadAsync().AsTask().GetAwaiter().GetResult();
BitmapDecoder decoder = BitmapDecoder.CreateAsync(random).AsTask().GetAwaiter().GetResult();
BitmapFrame bitmapFrame = decoder.GetFrameAsync(0).AsTask().GetAwaiter().GetResult();
PixelDataProvider pixelData = bitmapFrame.GetPixelDataAsync().AsTask().GetAwaiter().GetResult();
return new Dictionary<string, object>
{
{"bytes", pixelData.DetachPixelData()},
{"width", (int) decoder.PixelWidth},
{"height", (int) decoder.PixelHeight}
};
Unity code:
Texture2D texture = new Texture2D(textureSizeStruct.width, textureSizeStruct.height, TextureFormat.RGBA32, false);
texture.LoadRawTextureData(textureBytes);
texture.Apply();
This is how the images show up (screenshots: the original image, and how it appears in the app; sorry for the big white square).
Your image channels are not missing, they are simply in a different order.
Check the docs of Texture2D.LoadRawTextureData:
Passed data should be of required size to fill the whole texture according to its width, height, data format and mipmapCount; otherwise a UnityException is thrown.
Solution:
Pass TextureFormat.BGRA32 instead to your Texture2D constructor.
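A minimal sketch of the Unity side with just the pixel format swapped, reusing the variables from the question (this assumes the UWP side keeps returning the default, BGRA-ordered pixel data):

// Create the texture with a BGRA layout so the channel order matches the raw bytes.
Texture2D texture = new Texture2D(textureSizeStruct.width, textureSizeStruct.height, TextureFormat.BGRA32, false);
texture.LoadRawTextureData(textureBytes); // buffer length must be width * height * 4
texture.Apply();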
On the UWP side you also need to get the pixels from the decoder with the right parameters. Here is the solution:
StorageFile storageFile = StorageFile.GetFileFromPathAsync(filePath).AsTask().GetAwaiter().GetResult();
IRandomAccessStreamWithContentType random = storageFile.OpenReadAsync().AsTask().GetAwaiter().GetResult();
BitmapDecoder decoder = BitmapDecoder.CreateAsync(random).AsTask().GetAwaiter().GetResult();
// here is the catch
PixelDataProvider pixelData = decoder.GetPixelDataAsync(
BitmapPixelFormat.Rgba8, // <--- you must get the pixels like this
BitmapAlphaMode.Straight,
new BitmapTransform(),
ExifOrientationMode.RespectExifOrientation,
ColorManagementMode.DoNotColorManage // <--- you must set this too
).AsTask().GetAwaiter().GetResult();
I have written a little game using IronPython and WPF for didactic purposes, and now I want to port the project to a Metro app to test Shared Projects.
The offending code is:
def LoadImage(name, sourceRect):
    bmp = BitmapImage()
    bmp.BeginInit()
    bmp.UriSource = Uri("./data/images/" + name, UriKind.Relative)
    bmp.SourceRect = sourceRect
    bmp.EndInit()
    image = Image()
    image.Source = bmp
    return image
How on earth can I obtain the same result in a Metro app (using C#)? There must be a way to do this as simply as with the old BitmapImage. I need this because I have tiled images and I want to display only a portion of them. WriteableBitmap works but ignores the transparency of the image, so it's useless.
I've been using this code to load an image scaled:
public static async Task SetSourceAsync(
this WriteableBitmap writeableBitmap,
IRandomAccessStream streamSource,
uint decodePixelWidth,
uint decodePixelHeight)
{
var decoder = await BitmapDecoder.CreateAsync(streamSource);
using (var inMemoryStream = new InMemoryRandomAccessStream())
{
var encoder = await BitmapEncoder.CreateForTranscodingAsync(inMemoryStream, decoder);
encoder.BitmapTransform.ScaledWidth = decodePixelWidth;
encoder.BitmapTransform.ScaledHeight = decodePixelHeight;
await encoder.FlushAsync();
inMemoryStream.Seek(0);
await writeableBitmap.SetSourceAsync(inMemoryStream);
}
}
You might use something similar, but you'd specify encoder.BitmapTransform.Bounds instead of ScaledWidth/Height to do the cropping. If you have more specific questions, please clarify.
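For example, a minimal sketch of that cropping variant, mirroring the method above; the crop rectangle passed in is a hypothetical example:

public static async Task SetSourceCroppedAsync(
    this WriteableBitmap writeableBitmap,
    IRandomAccessStream streamSource,
    BitmapBounds cropBounds)
{
    var decoder = await BitmapDecoder.CreateAsync(streamSource);
    using (var inMemoryStream = new InMemoryRandomAccessStream())
    {
        var encoder = await BitmapEncoder.CreateForTranscodingAsync(inMemoryStream, decoder);
        // Crop instead of scaling; Bounds is given in pixels of the decoded image.
        encoder.BitmapTransform.Bounds = cropBounds;
        await encoder.FlushAsync();
        inMemoryStream.Seek(0);
        await writeableBitmap.SetSourceAsync(inMemoryStream);
    }
}

You would call it with something like new BitmapBounds { X = 0, Y = 0, Width = 32, Height = 32 } to pull a single tile out of a tiled image.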
I can already take a screenshot of the current content of my application.
I choose a UIElement (e.g. a Grid) and render it into a bitmap file.
But how can I crop this image however I like?
The code below works; only the cropping is missing. I'm working on Windows 8.1.
public async void SaveVisualElementToFile(UIElement element, StorageFile file)
{
var renderTargetBitmap = new RenderTargetBitmap();
await renderTargetBitmap.RenderAsync(element);
var pixels = await renderTargetBitmap.GetPixelsAsync();
using (IRandomAccessStream stream = await
file.OpenAsync(FileAccessMode.ReadWrite))
{
var encoder = await BitmapEncoder.CreateAsync(
BitmapEncoder.JpegEncoderId, stream);
byte[] bytes = pixels.ToArray();
encoder.SetPixelData(BitmapPixelFormat.Bgra8,
BitmapAlphaMode.Ignore,
(uint)renderTargetBitmap.PixelWidth,
(uint)renderTargetBitmap.PixelHeight,
96, 96, bytes);
await encoder.FlushAsync();
}
}
There are a couple of ways to do this. There's the traditional way as espoused by MSFT themselves. You can also use extensions such as WinRTXamlToolkit and WriteableBitmapEx; the latter two make it quite easy. Check their source code on CodePlex for their sample applications, which include examples of how to use cropping.
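As a rough, library-free sketch of the same idea, you can also crop the BGRA pixel buffer yourself before handing it to SetPixelData (reusing the variable names from your method; the crop rectangle here is just a hypothetical example and must lie inside the rendered bitmap):

var source = pixels.ToArray();
int stride = renderTargetBitmap.PixelWidth * 4;                // 4 bytes per BGRA pixel
int cropX = 10, cropY = 10, cropWidth = 200, cropHeight = 100; // hypothetical crop rectangle
var cropped = new byte[cropWidth * cropHeight * 4];
for (int row = 0; row < cropHeight; row++)
{
    // Copy one row of the crop rectangle out of the full-size buffer.
    Array.Copy(source, (cropY + row) * stride + cropX * 4,
               cropped, row * cropWidth * 4,
               cropWidth * 4);
}
encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Ignore,
    (uint)cropWidth, (uint)cropHeight, 96, 96, cropped);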
I'm working on a Metro application where I need to generate an animated GIF image.
I've found this tutorial, which seems to be the one and only resource on animated GIFs for Metro apps.
When running this code, an exception is thrown on the SetPixelData method, telling me that the allocated buffer memory is insufficient (the message is in my OS language even though my Visual Studio environment is in English; I think that might be relevant).
I've reduced the image size (source and output) and the frame count, but I still get this error. (I manipulate much bigger images and byte arrays in the same application.)
Any idea where this memory problem can come from? A problem with my StorageFile, maybe?
I was seeing this exception when the frameWidth/frameHeight being passed into SetPixelData didn't match the pixel data.
I ended up with the example below; I was seeing the exception you mentioned when the dimensions didn't match the pixel data.
I think this is more stable in Windows 8.1, as it doesn't repro there.
BitmapDecoder decoder = await BitmapDecoder.CreateAsync(sourceStream);
BitmapTransform transform = new BitmapTransform()
{
ScaledHeight = 900,
ScaledWidth = 600
};
PixelDataProvider pixelData = await decoder.GetPixelDataAsync(BitmapPixelFormat.Rgba8,
BitmapAlphaMode.Straight,
transform,
ExifOrientationMode.RespectExifOrientation,
ColorManagementMode.DoNotColorManage);
StorageFile destinationFile = await ApplicationData.Current.LocalFolder.CreateFileAsync(Path.Combine(Database.rootMoviesFoldersPaths, movie.LocalId + ".jpg"));
using (var destinationStream = await destinationFile.OpenAsync(FileAccessMode.ReadWrite))
{
BitmapEncoder encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, destinationStream);
encoder.SetPixelData(BitmapPixelFormat.Rgba8, BitmapAlphaMode.Premultiplied, 600, 900, 96, 96, pixelData.DetachPixelData());
await encoder.FlushAsync();
movie.HasFolderImage = true;
return true;
}
Multiply the buffer size by the bit depth.
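In other words, the buffer handed to SetPixelData has to be exactly width * height * bytes-per-pixel. A quick sanity check along these lines (assuming a 4-byte-per-pixel format such as Bgra8/Rgba8, and frameWidth/frameHeight as the values you pass to the encoder) makes a mismatch obvious before the exception:

// Sanity check: the pixel buffer must exactly cover frameWidth x frameHeight.
byte[] pixelBytes = pixelData.DetachPixelData();
long expected = (long)frameWidth * frameHeight * 4; // 4 bytes per pixel for Bgra8/Rgba8
if (pixelBytes.Length != expected)
{
    throw new InvalidOperationException(string.Format(
        "Pixel buffer is {0} bytes but {1}x{2} requires {3}.",
        pixelBytes.Length, frameWidth, frameHeight, expected));
}
encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied,
    frameWidth, frameHeight, 96, 96, pixelBytes);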
I'm working on a WinRT application that will do some image processing, and one of the things I want to do is convert some JPGs or PNGs to GIF. I have something that sort of works: for some of my test JPGs it works, for others a scrambled image gets output. Just wondering if there is something I'm missing. Here is what I have so far:
public async static void ConvertToGif(IRandomAccessStream stream)
{
var decoder = await BitmapDecoder.CreateAsync(stream);
var pixels = await decoder.GetPixelDataAsync();
var file = await KnownFolders.PicturesLibrary.CreateFileAsync("test.gif", CreationCollisionOption.ReplaceExisting);
var outStream = await file.OpenAsync(FileAccessMode.ReadWrite);
var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.GifEncoderId, outStream);
encoder.SetPixelData(decoder.BitmapPixelFormat, BitmapAlphaMode.Ignore,
decoder.PixelWidth, decoder.PixelHeight,
decoder.DpiX, decoder.DpiY,
pixels.DetachPixelData());
await encoder.FlushAsync();
outStream.Dispose();
}
Smaller jpgs seem to work, but larger ones come out scrambled. Is there another way to achieve this?
Duh, the problem was that I was using PixelWidth/Height and I should have been using OrientedPixelWidth/Height.
That seems to have resolved my issue for this.
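For anyone hitting the same thing, the fix only changes the width/height arguments passed to SetPixelData:

// Use the orientation-aware dimensions, which match the pixel buffer
// returned by the parameterless GetPixelDataAsync() call in this case.
encoder.SetPixelData(decoder.BitmapPixelFormat, BitmapAlphaMode.Ignore,
    decoder.OrientedPixelWidth, decoder.OrientedPixelHeight,
    decoder.DpiX, decoder.DpiY,
    pixels.DetachPixelData());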