MonoGame for Windows Phone 8: black textures after being deactivated - C#

I've run into a strange issue in my MonoGame (3.1) Windows Phone 8 app. When the app is deactivated and then reactivated, all textures become black. This also happens after the lock screen (UserIdleDetectionMode is enabled).
I've checked GraphicsDevice.IsDisposed, GraphicsDevice.IsContentLost and GraphicsDevice.ResourcesLost, but everything looks fine. I've implemented a reload of all my textures in the Activated and Unobscured events, but a full texture reload takes too much time. At the same time, I see MonoGame apps on the Marketplace that handle deactivate/activate cycles easily. Moreover, the same app for Windows Phone 7, written in XNA, restores very quickly. What am I doing wrong with MonoGame?
My app is based on the MonoGame WP8 template.
Update:
I've just found out that all textures loaded via Content.Load(...) are restored very quickly. But all my other textures are created by hand: I load a file from TileContainer, unpack it, read its data with ImageTools, create a Texture2D and set its pixels with the loaded data. JPEG files are also rendered to a RenderTarget2D as Bgr565 to save space.
Moreover, I make heavy use of RenderTarget2D for rendering text labels with shadows, runtime sprite compositions and so on. So it looks like MonoGame just doesn't restore images that weren't loaded through Content.Load.
Continuing to investigate...

I just got a response from Tom Spillman on the MonoGame forums: apparently the Content.Load content is restored automatically, while all other data needs to be reinitialized by the program. What you can do is hook the GraphicsDevice.DeviceResetting event to get notified when this reset takes place.
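In practice that means subscribing to the device-reset event once at startup and rebuilding the dynamic resources in the handler. A minimal sketch (assuming MonoGame 3.1's Game/GraphicsDevice API; ReloadDynamicTextures is a hypothetical placeholder for whatever rebuilds your hand-made textures and render targets):

```csharp
public class Game1 : Game
{
    GraphicsDeviceManager graphics;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
    }

    protected override void Initialize()
    {
        base.Initialize();
        // Fires after the device comes back; Content.Load textures are
        // already restored, everything else must be rebuilt by hand.
        GraphicsDevice.DeviceReset += (s, e) => ReloadDynamicTextures();
    }

    void ReloadDynamicTextures()
    {
        // Recreate every Texture2D built with SetData and every
        // RenderTarget2D composition here (hypothetical placeholder).
    }
}
```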

According to the MonoGame devs, lost textures are a normal situation.
I've implemented a full texture reload in the GraphicsDevice.DeviceReset event. To make it fast, I load from uncompressed XNB files; that's fairly simple, since the format just has raw pixel values in it. This is the only solution I found.
Here's how to read from uncompressed xnb:
private static Texture2D TextureFromUncompressedXnbStream(GraphicsDevice graphicsDevice, Stream stream)
{
    using (BinaryReader xnbReader = new BinaryReader(stream))
    {
        // "XNB" magic and target platform
        byte cx = xnbReader.ReadByte();
        byte cn = xnbReader.ReadByte();
        byte cb = xnbReader.ReadByte();
        byte platform = xnbReader.ReadByte();
        if (cx != 'X' || cn != 'N' || cb != 'B')
            return null;

        byte version = xnbReader.ReadByte();
        byte flags = xnbReader.ReadByte();
        bool compressed = (flags & 0x80) != 0;
        if (compressed || (version != 5 && version != 4))
            return null; // only uncompressed XNB v4/v5 is supported here

        int xnbLength = xnbReader.ReadInt32();
        xnbReader.ReadBytes(0x9D); // skip the type-reader block (size specific to these files)

        SurfaceFormat surfaceFormat = (SurfaceFormat)xnbReader.ReadInt32();
        int width = xnbReader.ReadInt32();
        int height = xnbReader.ReadInt32();
        int levelCount = xnbReader.ReadInt32();

        // Assumes 32-bit Color data (surfaceFormat is read but not used).
        Texture2D texture = new Texture2D(graphicsDevice, width, height, false, SurfaceFormat.Color);
        for (int level = 0; level < levelCount; level++)
        {
            int levelDataSizeInBytes = xnbReader.ReadInt32();
            byte[] levelData = xnbReader.ReadBytes(levelDataSizeInBytes);
            if (level > 0)
                continue; // texture was created without mipmaps, keep only the top level
            texture.SetData(level, null, levelData, 0, levelData.Length);
        }
        return texture;
    }
}

Related

Unity Texture2D loadImage exact values

Why, when I load an external 1024x1024 RGBA32 .png (saved via either PaintXP or Gimp) with a blob of (64,64,64) pixels in the centre, does the Debug.Log line at the bottom return incorrect values? The closest I can get is with an uncompressed .png (from Gimp), which gives values like (65,66,65), but with a standard image they come back as (56,56,56).
Texture2D tex = null;
byte[] fileData;
if (File.Exists(mapPath + "/map.png"))
{
    fileData = File.ReadAllBytes(mapPath + "/map.png");
    tex = new Texture2D(size, size, TextureFormat.RGBA32, false);
    tex.anisoLevel = 0;
    tex.Compress(false);
    tex.filterMode = FilterMode.Point;
    tex.LoadImage(fileData); // Auto-resizes the texture dimensions
    Color32[] pixelsRaw = tex.GetPixels32(0);
    Color32[,] pixels = new Color32[size, size];
    for (int j = 0; j < size; j++)
    {
        for (int i = 0; i < size; i++)
        {
            pixels[i, j] = pixelsRaw[(j * tex.width) + i]; // index rows by width
        }
    }
    Debug.Log(pixels[512, 512]);
}
This was all in an attempt to read a tile-based level from a .png image, but with the returned values being so inaccurate I can't find a way to make this work. (I've done this plenty of times with Java.awt/LWJGL and it works fine there; why not in Unity?)
To clarify, the image is loaded from outside the Unity project, so there is no way to set the compression/format settings manually in the editor.
There are a couple of problems: compression and gamma correction.
1. When you call Compress on your Texture2D, it compresses the texture; the bool parameter only selects low- or high-quality compression. So just remove the line: tex.Compress(false);
2. The PNG carries gamma information. Gimp has an option, when exporting to PNG, to save gamma or not. So open your image in Gimp and export it with the "Save Gamma" option unchecked.
Alternatively, I was able to get the same result by removing the gAMA and sRGB chunks from the PNG with TweakPNG.
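Put together, the corrected load path looks roughly like this (a sketch of the question's loader with the Compress call removed, reading a PNG exported without gamma info):

```csharp
// Sketch: the question's loader, minus the Compress call.
byte[] fileData = File.ReadAllBytes(mapPath + "/map.png");
Texture2D tex = new Texture2D(size, size, TextureFormat.RGBA32, false);
tex.filterMode = FilterMode.Point;
tex.LoadImage(fileData);                  // uncompressed RGBA32 pixels stay exact
Color32[] pixelsRaw = tex.GetPixels32(0); // should now return the original (64,64,64) values
```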

Unity Hololens Asset streaming optimization

I'm working on a HoloLens app that displays PNG images with info for the user. The images are loaded from the StreamingAssets folder in a coroutine. The issue lies in the speed at which these assets are loaded: if the user cycles to another page, the app momentarily drops to about 1-3 FPS, even on a PC.
What I hope some of you can help me with is ways to optimize the storage and streaming of these images. Is there a way to, for example, load the image at a lower resolution to save time and memory (the hardware's memory is very limited) and load in the additional detail only when it actually needs to be displayed? Would multi-threading improve the framerate while loading the image?
So Programmer's suggestion in the comments helped eliminate the performance issues completely. The code below is the coroutine used to stream in the required image(s) on startup.
IEnumerator LoadImages()
{
    int oldImgIndx = imageIndex;
    imageIndex = 1;
    bool thereAreImages = true;
    while (thereAreImages && imageIndex < 1000)
    {
        if (System.IO.File.Exists(CreateFilePath(imageIndex)))
        {
            string url = "file:///" + CreateFilePath(imageIndex);
            Texture2D tex = new Texture2D(4, 4);
            WWW www = new WWW(url);
            yield return www; // wait for the download without blocking the frame
            www.LoadImageIntoTexture(tex);
            spriteList.Add(Sprite.Create(tex, new Rect(0, 0, tex.width, tex.height), new Vector2(0.5f, 0.5f)));
            imageIndex++;
        }
        else
        {
            thereAreImages = false;
        }
    }
    finished = true;
    imageIndex = oldImgIndx;
}
The issue on the HoloLens lies in the www.LoadImageIntoTexture(tex); line. This call is required when displaying multiple images on the PC side of things; however, on the HoloLens, where only one image is displayed at a time, it can be left out.
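As for loading at a lower resolution: Unity has no partial PNG decode, but one workable approach is to decode at full size once and immediately blit into a smaller RenderTexture, keeping only the downscaled copy. A sketch under those assumptions (Downscale is a hypothetical helper, not part of the answer above):

```csharp
// Hypothetical helper: GPU-downscale a texture to save memory.
Texture2D Downscale(Texture2D src, int w, int h)
{
    RenderTexture rt = RenderTexture.GetTemporary(w, h);
    Graphics.Blit(src, rt); // bilinear scale on the GPU
    RenderTexture prev = RenderTexture.active;
    RenderTexture.active = rt;
    Texture2D small = new Texture2D(w, h, TextureFormat.RGBA32, false);
    small.ReadPixels(new Rect(0, 0, w, h), 0, 0); // copy the scaled result back
    small.Apply();
    RenderTexture.active = prev;
    RenderTexture.ReleaseTemporary(rt);
    return small;
}
```

The full-size texture can then be destroyed after the call, so only the small copy stays resident.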

WP8.1 RT - Changing pixel color

I've been working on some Windows Phone 8.1 RT apps lately which require quite a lot of icons. For iOS and Android we can use white-on-black icons and turn them into the right color(s) through code. But for WP8.1 it seems quite impossible to do this fast.
class ColoredImage
{
    public static WriteableBitmap GetColoredImage(WriteableBitmap bitmap, Color color)
    {
        var result = bitmap; // note: same bitmap instance, not a copy
        for (int i = 0; i < result.PixelWidth; i++)
        {
            for (int j = 0; j < result.PixelHeight; j++)
            {
                if (result.GetPixel(i, j) == Colors.Black)
                {
                    result.SetPixel(i, j, Colors.Transparent);
                }
                else
                {
                    result.SetPixel(i, j, color);
                }
            }
        }
        return result;
    }
}
This class does change the colors of a WriteableBitmap, but it takes about 15 seconds for a 62x62 image. Am I doing anything wrong, and what can I improve?
Thanks.
Your algorithm of reading an image, finding pixels and changing their colors is a time-consuming process. There are two faster ways to do it:
1) Use Segoe MDL2 Assets. This pre-installed font family contains most of the basic icons. The benefit is that, since these are vector glyphs, changing the Foreground changes the color instantly. Search for Character Map and then for this font on your system and you will find all the available glyphs.
2) If you can't find your particular image in the character map, you will have to create path images in Blend (which is similar to converting images to vectors); then you can easily change the color.
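If you do still need to recolor an actual bitmap, note that the slowness comes from the per-call overhead of GetPixel/SetPixel; a different technique is to process WriteableBitmap.PixelBuffer directly as raw BGRA bytes in a single pass. A sketch, assuming the WinRT WriteableBitmap and the AsStream() extension from System.Runtime.InteropServices.WindowsRuntime:

```csharp
using System.IO;
using System.Runtime.InteropServices.WindowsRuntime;
using Windows.UI;
using Windows.UI.Xaml.Media.Imaging;

static void RecolorInPlace(WriteableBitmap bitmap, Color color)
{
    using (Stream s = bitmap.PixelBuffer.AsStream())
    {
        byte[] px = new byte[s.Length];
        s.Read(px, 0, px.Length);
        for (int i = 0; i < px.Length; i += 4) // BGRA byte layout
        {
            bool isBlack = px[i] == 0 && px[i + 1] == 0 && px[i + 2] == 0;
            if (isBlack)
            {
                px[i + 3] = 0; // black -> transparent
            }
            else
            {
                px[i]     = color.B;
                px[i + 1] = color.G;
                px[i + 2] = color.R;
            }
        }
        s.Seek(0, SeekOrigin.Begin);
        s.Write(px, 0, px.Length);
    }
    bitmap.Invalidate(); // redraw with the modified pixels
}
```

One pass over the byte array replaces tens of thousands of per-pixel method calls, which is where the 15 seconds were going.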

Kinect SDK 2.0 vs SDK 1.7

I wrote a C# program that worked with the old Kinect; my graphics library handles the Kinect interaction. Since the old Kinect is no longer officially sold, I'm now updating the library to the newer Kinect One with its Microsoft SDK 2.0. For the updated library I try to keep most of the code the same, so I can release an updated library instead of updating the whole program(s).
What I am wondering is: does the new Kinect depth data still contain player data as it did in the 1.7 SDK? There I removed it with a bitmask operation:
realDepth[i16] = (short)(realDepth[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth);
Is this still needed? I couldn't find any information about the raw depth format.
Also, the old Kinect had special values for:
distance unknown
distance too close
distance too far
Does it still provide these?
In the Microsoft Kinect SDK 2.0 you do not need the bitmask operation to get the correct depth value; player data is no longer packed into the depth bits.
When you access a DepthFrame object, you can use something like this:
private void ProcessDepthFrame(DepthFrame frame)
{
    int width = frame.FrameDescription.Width;
    int height = frame.FrameDescription.Height;
    ushort minDepth = frame.DepthMinReliableDistance;
    ushort maxDepth = frame.DepthMaxReliableDistance;
    ushort[] depthData = new ushort[width * height];
    frame.CopyFrameDataToArray(depthData);
    for (int depthIndex = 0; depthIndex < depthData.Length; ++depthIndex)
    {
        ushort depth = depthData[depthIndex];
        // Note: the cast truncates the 16-bit depth to its low byte;
        // scale the value properly if you need a grayscale intensity.
        byte intensity = (byte)(depth >= minDepth && depth <= maxDepth ? depth : 0);
        // Do what you want
    }
}
Also notice that, in the above example, the DepthMinReliableDistance and DepthMaxReliableDistance properties of the DepthFrame class are used to figure out whether a depth value is valid, which is a little different from the explicit unknown/too-near/too-far values of SDK v1.x.
For more info, check also this tutorial.

Implementing streaming video for Windows 10 UAP

I need to display in XAML a video stream coming from a network source. The frames arrive at undefined intervals; they're already assembled, decoded and presented in BGRA8 form in a memory-mapped file. The XAML frontend is in C#; the backend is written in C using WinAPI.
In C# I have a handle to this file.
Previously, on .NET 4.5, I created an InteropBitmap from this handle with System.Windows.Interop.Imaging.CreateBitmapSourceFromMemorySection and called Invalidate on arrival of each new frame. Then I used this InteropBitmap as the Source for a XAML Image.
Now I need to do the same for the Windows 10 UAP platform.
There are no memory-mapped files in .NET Core, so I created a C++/CX Windows Runtime Component. Here's the most important part of it:
static byte* GetPointerToPixelData(IBuffer^ pixelBuffer, unsigned int* length)
{
    if (length != nullptr)
    {
        *length = pixelBuffer->Length;
    }
    // Query the IBufferByteAccess interface.
    ComPtr<IBufferByteAccess> bufferByteAccess;
    reinterpret_cast<IInspectable*>(pixelBuffer)->QueryInterface(IID_PPV_ARGS(&bufferByteAccess));
    // Retrieve the buffer data.
    byte* pixels = nullptr;
    bufferByteAccess->Buffer(&pixels);
    return pixels;
}

void Adapter::Invalidate()
{
    memcpy(m_bitmap_ptr, m_image, m_sz);
    m_bitmap->Invalidate();
}

Adapter::Adapter(int handle, int width, int height)
{
    m_sz = width * height * 32 / 8; // BGRA8: 4 bytes per pixel
    // Read access to the mapped file
    m_image = MapViewOfFile((HANDLE)handle, FILE_MAP_READ, 0, 0, m_sz);
    m_bitmap = ref new WriteableBitmap(width, height);
    m_bitmap_ptr = GetPointerToPixelData(m_bitmap->PixelBuffer, nullptr);
}

Adapter::~Adapter()
{
    if (m_image != NULL)
        UnmapViewOfFile(m_image);
}
Now I can use m_bitmap as the Source for a XAML Image (and don't forget to raise a property change on invalidate, otherwise the image won't update).
Is there a better or more standard way? How can I create a WriteableBitmap from m_image so I won't need the additional memcpy on invalidate?
UPDATE: I wonder if I could use a MediaElement to display the sequence of uncompressed bitmaps and get any benefit from it? MediaElement supports filters, which is a very nice feature.