Lumia Imaging SDK JpegRenderer.RenderAsync InvalidOperationException - c#

I'm using Lumia Imaging SDK ver 2.0 to crop images in a Windows Phone 8.1 RT application. The code works fine, but JpegRenderer.RenderAsync() sometimes throws an InvalidOperationException: "Operation is not valid due to the current state of the object."
The issue reproduces every time with some images and crashes the application. I use the following code for cropping:
using (StorageFileImageSource inputImageSource = new StorageFileImageSource(inputImageFile))
{
    using (FilterEffect filterEffect = new FilterEffect(inputImageSource))
    {
        // Create cropping filter.
        List<IFilter> filters = new List<IFilter>();
        CropFilter cropFilter = new CropFilter(croppedImageSize);
        filters.Add(cropFilter);

        // Add filters to effects.
        filterEffect.Filters = filters;

        // Create renderer with above filters and render new image.
        using (JpegRenderer renderer = new JpegRenderer(filterEffect))
        {
            IBuffer croppedImage = await renderer.RenderAsync();
            return croppedImage.ToArray();
        }
    }
}
I referred to this resource, which says JpegRenderer.RenderAsync() throws an InvalidOperationException when the filter property value changes while rendering is in progress. I don't change the value of the property once it's set, so why is the exception being thrown?

I figured out the problem: as David said, I was passing wrong dimensions that were larger than the size of the image. I was using BitmapDecoder.PixelHeight and BitmapDecoder.PixelWidth to calculate the dimensions.
However, for some images carrying orientation data in their EXIF metadata, BitmapDecoder.PixelHeight returned the width of the image and vice versa. I had to use BitmapDecoder.OrientedPixelHeight and BitmapDecoder.OrientedPixelWidth instead to get the actual height and width, taking the orientation of the image into account.
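For reference, here is a minimal sketch of the dimension calculation (the centered square crop is only an illustration, and the crop-area type may need adapting to whatever your CropFilter overload expects):

// Sketch: compute crop dimensions from the orientation-aware properties.
// (Windows.Graphics.Imaging and Windows.Storage.Streams namespaces assumed.)
using (IRandomAccessStream stream = await inputImageFile.OpenAsync(FileAccessMode.Read))
{
    BitmapDecoder decoder = await BitmapDecoder.CreateAsync(stream);

    // OrientedPixelWidth/Height honour the EXIF orientation flag,
    // unlike PixelWidth/Height, which describe the stored pixel data.
    uint width = decoder.OrientedPixelWidth;
    uint height = decoder.OrientedPixelHeight;

    // Example: a centered square crop that is guaranteed to fit inside the image.
    uint side = Math.Min(width, height);
    var croppedImageSize = new Windows.Foundation.Rect(
        (width - side) / 2, (height - side) / 2, side, side);
    // ...pass croppedImageSize to the CropFilter as in the snippet above.
}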

Related

SkiaSharp drawing with OpenGL/Vulkan backend from console application

I want to draw something with GPU acceleration (using OpenGL or Vulkan) using SkiaSharp and save the image later. There is no need to display the image anywhere in the application, because it's a console application targeting Windows and Linux.
I already tried the following code, with various variations, but nothing worked (it raises an exception at var surface = SKSurface.Create(context, false, info); because glInterface and context are null).
Can somebody give me a hint?
var glInterface = GRGlInterface.Create();
var context = GRContext.CreateGl(glInterface);
var info = new SKImageInfo(256, 256);
var surface = SKSurface.Create(context, false, info);
var canvas = surface.Canvas;
In the end it would be nice to have the ability to call SKBitmap.SetPixels(IntPtr) or something similar to write the resulting bitmap buffer to a specific place.
The solution is that you need to manually create an OpenGL context (and make it current) first.
Have a look at https://github.com/mono/SkiaSharp/blob/master/tests/Tests/GRContextTest.cs for implementation details.
For copying the rendered pixel buffer you can use SKSurface.ReadPixels.
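For example, here is a rough sketch of the whole flow. CreateAndMakeCurrentGlContext() is a placeholder for whatever GLFW/EGL/WGL helper you use (for instance the GlContext classes from the test project linked above); it must have made an OpenGL context current on this thread before GRGlInterface.Create() is called.

// Sketch: assumes an OpenGL context has already been created and made current
// on this thread. Needs System.Runtime.InteropServices for GCHandle.
CreateAndMakeCurrentGlContext();   // hypothetical helper, see the linked tests

using (var glInterface = GRGlInterface.Create())        // now non-null
using (var context = GRContext.CreateGl(glInterface))   // now non-null
{
    var info = new SKImageInfo(256, 256);
    using (var surface = SKSurface.Create(context, false, info))
    {
        var canvas = surface.Canvas;
        canvas.Clear(SKColors.White);
        canvas.DrawCircle(128, 128, 64, new SKPaint { Color = SKColors.Red });
        canvas.Flush();

        // Copy the rendered pixels off the GPU into your own buffer.
        var buffer = new byte[info.BytesSize];
        var handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            surface.ReadPixels(info, handle.AddrOfPinnedObject(), info.RowBytes, 0, 0);
        }
        finally
        {
            handle.Free();
        }
    }
}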

BadImage Error When Loading Tiff CCITTv4 Via SharpDX.Direct2D1.Bitmap.FromWicBitmap

I'm using SharpDX and its accompanying WIC and Direct2D wrappers to do some serverside image manipulation.
The following code works great with JPEG images and is modeled after the SharpDX docs and this Microsoft sample using D2D directly via C++.
However, I get a BadImage error when I try to load a TIFF CCITT (bitonal 1bpp) image. The BadImage error is only thrown at EndDraw (which happens later on in the commented-out DrawEndorsement function), or at this line of code, which I inserted to make the point at which the issue occurs more obvious:
SharpDX.Direct2D1.Bitmap bitmap = SharpDX.Direct2D1.Bitmap.FromWicBitmap(_renderTarget, _wicBitmap);
The JPEG image I pass in gets to this point and continues with no issues, but the TIFF I pass in gets to this point and causes FromWicBitmap to barf with a BadImage error.
I'm using FormatConverter to convert the TIFF/JPEG pixel formats to an appropriate and supported D2D pixel format, and the converter does change the pixel format GUID for both images, but, again, FromWicBitmap barfs only on the TIFF.
I assumed I was doing something wrong with conversion or misusing SharpDX/D2D, but when I built and ran the aforementioned Microsoft C++ D2D image viewer sample, it loaded and rendered this same TIFF file with no errors. I double checked the sample's code to verify that I was using all the same pixel formats, options, etc, and it looks like I'm doing almost exactly the same thing with SharpDX that the sample is doing with D2D directly.
Clearly Direct2D doesn't like the pixel format of the TIFF image that WIC is handing it, but why didn't the MS sample exhibit the same behavior, and why didn't FormatConverter fix it?
Am I missing something that the D2D sample code is doing?
Am I missing some trick with SharpDX?
Is this a SharpDX bug?
Thanks!
public byte[] BuildImage(byte[] image, Format saveFormat)
{
    SharpDX.WIC.Bitmap _wicBitmap;
    WicRenderTarget _renderTarget;
    BitmapFrameDecode bSource;
    FormatConverter converter = new FormatConverter(_factoryManager.WicFactory);

    using (MemoryStream systemStream = new MemoryStream(image))
    using (WICStream wicStream = new WICStream(_factoryManager.WicFactory, systemStream))
    {
        BitmapDecoder inDecoder = new BitmapDecoder(_factoryManager.WicFactory, wicStream, DecodeOptions.CacheOnLoad);
        if (inDecoder.FrameCount > 0)
        {
            bSource = inDecoder.GetFrame(0);
            converter.Initialize(bSource, SharpDX.WIC.PixelFormat.Format32bppPRGBA, BitmapDitherType.Solid, null, 0.0f, BitmapPaletteType.MedianCut);
            _imageWidth = bSource.Size.Width;
            _imageHeight = bSource.Size.Height;
        }
        else
        {
            throw new Exception("No frames found!");
        }
    }

    _wicBitmap = new SharpDX.WIC.Bitmap(
        _factoryManager.WicFactory,
        converter,
        BitmapCreateCacheOption.CacheOnDemand
    );

    _renderTarget = new WicRenderTarget(_factoryManager.D2DFactory, _wicBitmap, new RenderTargetProperties());
    SharpDX.Direct2D1.Bitmap bitmap = SharpDX.Direct2D1.Bitmap.FromWicBitmap(_renderTarget, _wicBitmap);
    //DrawEndorsement(_renderTarget);
    _renderTarget.Dispose();
    bSource.Dispose();
    converter.Dispose();
    return SaveImage(saveFormat, _wicBitmap);
}
As xoofx pointed out, it turns out this was caused by my disposing of the WIC stream and MemoryStream underlying the FormatConverter while it was still in use.
This was causing JPEGs to be corrupted on write and, oddly, causing the TIFFs to fail even earlier.
Extending the using scope accordingly fixed it.
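For reference, a sketch of the corrected structure (the fields and helpers such as _factoryManager, _imageWidth and SaveImage are the same ones as in the snippet above):

// Sketch of the fix: keep the MemoryStream/WICStream/decoder/converter alive
// until everything that pulls pixels through them (the WIC bitmap, the render
// target and SaveImage) has finished.
public byte[] BuildImage(byte[] image, Format saveFormat)
{
    using (var systemStream = new MemoryStream(image))
    using (var wicStream = new WICStream(_factoryManager.WicFactory, systemStream))
    using (var inDecoder = new BitmapDecoder(_factoryManager.WicFactory, wicStream, DecodeOptions.CacheOnLoad))
    {
        if (inDecoder.FrameCount == 0)
            throw new Exception("No frames found!");

        using (BitmapFrameDecode bSource = inDecoder.GetFrame(0))
        using (var converter = new FormatConverter(_factoryManager.WicFactory))
        {
            converter.Initialize(bSource, SharpDX.WIC.PixelFormat.Format32bppPRGBA,
                BitmapDitherType.Solid, null, 0.0f, BitmapPaletteType.MedianCut);
            _imageWidth = bSource.Size.Width;
            _imageHeight = bSource.Size.Height;

            using (var wicBitmap = new SharpDX.WIC.Bitmap(_factoryManager.WicFactory, converter, BitmapCreateCacheOption.CacheOnDemand))
            using (var renderTarget = new WicRenderTarget(_factoryManager.D2DFactory, wicBitmap, new RenderTargetProperties()))
            using (var bitmap = SharpDX.Direct2D1.Bitmap.FromWicBitmap(renderTarget, wicBitmap))
            {
                //DrawEndorsement(renderTarget);
                return SaveImage(saveFormat, wicBitmap);
            }
        }
    }
}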

Dynamic image assigned to live tile does not show?

I have a Windows Store app written in C# that works with photos. I want to show the last photo the user selected in the app in the medium size live tile (150 x 150). I am using the code below to do it. When I run the app I don't get any errors, but I don't see the selected photo in the live tile either. I know that I am doing at least some things right. I say this because if the user hasn't selected a photo yet, then I show a test image and I do see that image in the tile. But the test image comes from the app package using the ms-appx protocol, not from the app storage area.
I found a few SO posts on the subject but they are all for Windows Phone. I looked at the KnownFolders list for Windows Store app files, but nothing seemed to map to the SharedContent folder required for files meant for live tile use in Windows Phone. What is wrong with my code?
Note, the vvm.ActiveVideomark.GetThumbnail() call simply retrieves a bitmap as a WriteableBitmap object. As you can see in the code, I am resizing the image to the size required by the Medium live tile (150 x 150). ToJpegFileAsync() is an extension method that encodes a WriteableBitmap object to jpeg bytes and then writes those bytes to a file using the given file name. Both of these calls are well-tested and are not the source of the problem as far as I know.
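For reference, a minimal sketch of what such an extension method could look like (the actual implementation differs in details), assuming it encodes through BitmapEncoder and writes into the app's local data folder:

// Sketch of a ToJpegFileAsync-style helper: encode a WriteableBitmap to JPEG
// and write it into the app's local data folder.
public static async Task<StorageFile> ToJpegFileAsync(this WriteableBitmap bitmap, string fileName)
{
    StorageFile file = await ApplicationData.Current.LocalFolder
        .CreateFileAsync(fileName, CreationCollisionOption.ReplaceExisting);

    using (IRandomAccessStream stream = await file.OpenAsync(FileAccessMode.ReadWrite))
    {
        BitmapEncoder encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, stream);
        byte[] pixels = bitmap.PixelBuffer.ToArray();   // System.Runtime.InteropServices.WindowsRuntime
        encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied,
            (uint)bitmap.PixelWidth, (uint)bitmap.PixelHeight, 96, 96, pixels);
        await encoder.FlushAsync();
    }
    return file;
}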
TileUpdateManager.CreateTileUpdaterForApplication().Clear();
TileUpdateManager.CreateTileUpdaterForApplication().EnableNotificationQueue(true);

var tileXml = TileUpdateManager.GetTemplateContent(TileTemplateType.TileSquare150x150Image);
var tileImage = tileXml.GetElementsByTagName("image")[0] as XmlElement;

// Got a current photo?
if (vvm.ActiveVideomark == null)
{
    // No, just show the regular logo image.
    tileImage.SetAttribute("src", "ms-appx:///Assets/Logo.scale-100.png");
}
else
{
    // Resize it to the correct size.
    WriteableBitmap wbm = await vvm.ActiveVideomark.GetThumbnail();
    WriteableBitmap wbm2 = wbm.Resize(150, 150, WriteableBitmapExtensions.Interpolation.Bilinear);

    // Write it to a file so we can pass it to the Live Tile.
    string jpegFilename = "LiveTile1.jpg";
    StorageFile jpegFile = await wbm2.ToJpegFileAsync(jpegFilename);

    // Yes, show the selected image.
    tileImage.SetAttribute("src", jpegFile.Path);
}
The src attribute must contain a URI with the ms-appx:///, ms-appdata:///local/, or http[s]:// scheme. The StorageFile.Path property you're using with jpegFile.Path is a local filesystem path like c:\users\Robert\AppData..., which won't be valid. So create your tile images in local app data, and then use ms-appdata:///local/ to refer to them in the tile payload.
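For example (a sketch that reuses your GetThumbnail/Resize/ToJpegFileAsync helpers and assumes ToJpegFileAsync writes into ApplicationData.Current.LocalFolder):

// Sketch: write the tile image into local app data and reference it with the
// ms-appdata:///local/ scheme instead of the filesystem path.
WriteableBitmap wbm = await vvm.ActiveVideomark.GetThumbnail();
WriteableBitmap wbm2 = wbm.Resize(150, 150, WriteableBitmapExtensions.Interpolation.Bilinear);

string jpegFilename = "LiveTile1.jpg";
StorageFile jpegFile = await wbm2.ToJpegFileAsync(jpegFilename);   // must end up in LocalFolder

// Refer to the file by scheme, not by its filesystem path.
tileImage.SetAttribute("src", "ms-appdata:///local/" + jpegFilename);

var notification = new TileNotification(tileXml);
TileUpdateManager.CreateTileUpdaterForApplication().Update(notification);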

SharpDX render in WPF

I want to draw lines as fast as possible. For that reason I implemented a method using InteropBitmap, which works quite well. The next step was to compare it with SharpDX. Basically what I want to do is:
Run the following code in a BackgroundWorker, which informs WPF about an update of the WIC image. I found out that this code (creating everything needed for SharpDX and drawing a line) takes about 10 ms longer than doing the same with InteropBitmap.
My question now is simply: how do I speed this up? Can I change the code somehow so that I only have to call BeginDraw, create the lines, and EndDraw, without always doing all of this image encoding/decoding? Or is there a better approach?
var wicFactory = new ImagingFactory();
var d2dFactory = new SharpDX.Direct2D1.Factory();
const int width = 800;
const int height = 200;
var wicBitmap = new Bitmap(wicFactory, width, height, SharpDX.WIC.PixelFormat.Format32bppBGR, BitmapCreateCacheOption.CacheOnLoad);
var renderTargetProperties = new RenderTargetProperties(RenderTargetType.Default, new PixelFormat(Format.Unknown, AlphaMode.Unknown), 0, 0, RenderTargetUsage.None, FeatureLevel.Level_DEFAULT);
var d2dRenderTarget = new WicRenderTarget(d2dFactory, wicBitmap, renderTargetProperties);
var solidColorBrush = new SharpDX.Direct2D1.SolidColorBrush(d2dRenderTarget, SharpDX.Color.White);
d2dRenderTarget.BeginDraw();
//draw whatever you want
d2dRenderTarget.EndDraw();
// Memorystream.
MemoryStream ms = new MemoryStream();
var stream = new WICStream(wicFactory, ms);
// JPEG encoder
var encoder = new SharpDX.WIC.JpegBitmapEncoder(wicFactory);
encoder.Initialize(stream);
// Frame encoder
var bitmapFrameEncode = new BitmapFrameEncode(encoder);
bitmapFrameEncode.Initialize();
bitmapFrameEncode.SetSize(width, height);
var pixelFormatGuid = SharpDX.WIC.PixelFormat.FormatDontCare;
bitmapFrameEncode.SetPixelFormat(ref pixelFormatGuid);
bitmapFrameEncode.WriteSource(wicBitmap);
bitmapFrameEncode.Commit();
encoder.Commit();
ms.Seek(0, SeekOrigin.Begin);
// JPEG decoder
var decoder = new System.Windows.Media.Imaging.JpegBitmapDecoder(ms, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.Default);
// Write to wpf image
_WIC = decoder.Frames[0];
// Tell WPF to update
RaisePropertyChanged("WIC");
bitmapFrameEncode.Dispose();
encoder.Dispose();
stream.Dispose();
With:
System.Windows.Media.Imaging.BitmapFrame _WIC;
public System.Windows.Media.Imaging.BitmapSource WIC
{
    get
    {
        return (System.Windows.Media.Imaging.BitmapSource)_WIC.GetAsFrozen();
    }
}
And:
<StackPanel>
    <Image Name="huhu1" Source="{Binding WIC}" />
</StackPanel>
SharpDX Toolkit has support for WPF via a Direct3D11-to-Direct3D9 shared texture. It is implemented in the SharpDXElement class.
You may not be able to reuse this as-is, because Direct2D (which you are using to draw) can interop with either Direct3D 11.1 or Direct3D 10, while SharpDX uses Direct3D 11 for its WPF support, so you will need to tweak the solution a little.
Basically you need to do the following:
Initialize Direct3D (10 or 11.1).
Initialize Direct2D.
Create the D3D render target with Direct2D support and the Shared flag (here is how it is done in SharpDX).
Initialize Direct3D9.
Create the shared texture.
Bind the texture to a D3DImage.
Do not forget to call D3DImage.AddDirtyRect when the contents of the render target are updated.
From the code you provided it is not clear whether you are doing all the initialization only once, so try to call any initialization code only once and reuse the render target - just clear it at the beginning of every frame. This is mandatory to get decent performance.
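A minimal sketch of that idea, reusing the WIC render target from the question (the WPF presentation side is omitted; with SharpDXElement or D3DImage you would share a texture instead of re-encoding through WIC every frame):

// One-time setup: factories, bitmap, render target and brush are created once.
var wicFactory = new ImagingFactory();
var d2dFactory = new SharpDX.Direct2D1.Factory();
var wicBitmap = new SharpDX.WIC.Bitmap(wicFactory, 800, 200,
    SharpDX.WIC.PixelFormat.Format32bppBGR, BitmapCreateCacheOption.CacheOnLoad);
var renderTarget = new WicRenderTarget(d2dFactory, wicBitmap,
    new RenderTargetProperties(RenderTargetType.Default,
        new PixelFormat(Format.Unknown, AlphaMode.Unknown),
        0, 0, RenderTargetUsage.None, FeatureLevel.Level_DEFAULT));
var brush = new SharpDX.Direct2D1.SolidColorBrush(renderTarget, SharpDX.Color.White);

// Per frame (e.g. inside the BackgroundWorker loop): nothing is recreated,
// the target is just cleared and redrawn.
renderTarget.BeginDraw();
renderTarget.Clear(SharpDX.Color.Black);
renderTarget.DrawLine(new SharpDX.Vector2(0, 100), new SharpDX.Vector2(800, 120), brush, 2.0f);
renderTarget.EndDraw();
// ...then hand the result to WPF, ideally via a shared texture / D3DImage
// rather than the JPEG encode/decode round trip shown in the question.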
Update: SharpDX.Toolkit has been deprecated and is not maintained anymore. It has been moved to a separate repository.
If you want to share a DirectX surface with WPF, the best option is to use a WPF D3DImage. It promises to work without copying if you use the right color format.
I have only used it with Direct3D 9, but it may be compatible with Direct2D as well; and if it isn't, D3D9 can draw lines too.
There's a great CodeProject article with examples. From managed code using SlimDX or SharpDX, the only non-obvious caveat is that D3DImage keeps a reference to your DirectX surface, so you need to null the back buffer explicitly when you want to reset your D3D device.
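A minimal sketch of that update pattern (surfacePtr is assumed to be the native pointer of the IDirect3DSurface9 you render into, and imageControl is the WPF Image element showing the D3DImage):

// Sketch of the D3DImage pattern described above.
var d3dImage = new System.Windows.Interop.D3DImage();
imageControl.Source = d3dImage;   // an <Image> element in your view

void PresentFrame(IntPtr surfacePtr, int width, int height)
{
    d3dImage.Lock();
    d3dImage.SetBackBuffer(System.Windows.Interop.D3DResourceType.IDirect3DSurface9, surfacePtr);
    d3dImage.AddDirtyRect(new Int32Rect(0, 0, width, height));
    d3dImage.Unlock();
}

void BeforeDeviceReset()
{
    // Release D3DImage's reference to the surface before resetting the device.
    d3dImage.Lock();
    d3dImage.SetBackBuffer(System.Windows.Interop.D3DResourceType.IDirect3DSurface9, IntPtr.Zero);
    d3dImage.Unlock();
}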
You can use Smartrak/WpfSharpDxControl, which provides a WPF control to host SharpDX content.
It hosts SharpDX inside WPF by adding a Win32HwndControl to an HwndHost.

Instagram Photo Effects in Windows 8 metro apps using c#

I need to implement Instagram photo effects like amaro, hudson, sepia, rise, and so on. I know of this article, but it only uses basic effects: http://code.msdn.microsoft.com/windowsdesktop/Metro-Style-lightweight-24589f50
Another way people have suggested is to implement the effects with Direct2D, but that would require writing C++ code, where I have zero experience.
Can anyone suggest some other way to implement Instagram effects in c#?
Is there any built in c++ file for these effects?
Please see this example from CodeProject: Metro Style Lightweight Image Processing
The above example contains these image effects:
Negative
Color filter
Emboss
SunLight
Black & White
Brightness
Oilpaint
Tint
Please note that the above example appears to have been built against either the Developer Preview or the Release Preview of Windows 8, so you will get an error like this:
'Windows.UI.Xaml.Media.Imaging.WriteableBitmap' does not contain a
constructor that takes 1 arguments
So you have to create the WriteableBitmap instance by passing the pixel width and pixel height of the image. I have edited the sample and it is working for me. You have to change wb = new WriteableBitmap(bs); to wb = await GetWB();
StorageFile originalImageFile;
WriteableBitmap cropBmp;

public async Task<WriteableBitmap> GetWB()
{
    if (originalImageFile != null)
    {
        //originalImageFile is the image either loaded from file or captured image.
        using (IRandomAccessStream stream = await originalImageFile.OpenReadAsync())
        {
            BitmapImage bmp = new BitmapImage();
            bmp.SetSource(stream);

            BitmapDecoder decoder = await BitmapDecoder.CreateAsync(stream);
            byte[] pixels = await GetPixelData(decoder, Convert.ToUInt32(bmp.PixelWidth), Convert.ToUInt32(bmp.PixelHeight));

            cropBmp = new WriteableBitmap(bmp.PixelWidth, bmp.PixelHeight);
            Stream pixStream = cropBmp.PixelBuffer.AsStream();
            pixStream.Write(pixels, 0, (int)(bmp.PixelWidth * bmp.PixelHeight * 4));
        }
    }
    return cropBmp;
}
Let me know if you are facing any problem.
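As an example of the kind of per-pixel effect the question asks about, here is a small sepia filter (standard sepia weights) applied to the WriteableBitmap returned by GetWB(); the PixelBuffer layout is BGRA8, and the buffer extensions need System.Runtime.InteropServices.WindowsRuntime and System.IO:

// Example: a simple sepia effect applied in plain C# on the pixel buffer.
public static void ApplySepia(WriteableBitmap bmp)
{
    byte[] pixels = bmp.PixelBuffer.ToArray();
    for (int i = 0; i < pixels.Length; i += 4)
    {
        byte b = pixels[i], g = pixels[i + 1], r = pixels[i + 2];
        // Standard sepia weights.
        int newR = (int)(0.393 * r + 0.769 * g + 0.189 * b);
        int newG = (int)(0.349 * r + 0.686 * g + 0.168 * b);
        int newB = (int)(0.272 * r + 0.534 * g + 0.131 * b);
        pixels[i]     = (byte)Math.Min(255, newB);
        pixels[i + 1] = (byte)Math.Min(255, newG);
        pixels[i + 2] = (byte)Math.Min(255, newR);
    }
    using (Stream pixStream = bmp.PixelBuffer.AsStream())
    {
        pixStream.Write(pixels, 0, pixels.Length);
    }
    bmp.Invalidate();
}

Usage: call wb = await GetWB(); then ApplySepia(wb); and assign wb to your Image control's Source. Other effects (tint, brightness, black & white) follow the same read-modify-write pattern over the pixel buffer.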
