XNA: Getting a struct as a texture to the GPU - C#

I use XNA as a nice easy basis for some graphics processing I'm doing on the CPU, because it already provides a lot of the things I need. Currently, my "rendertarget" is an array of a custom Color struct I've written that consists of three floating point fields: R, G, B.
When I want to render this on screen, I manually convert this array to an array of XNA's own Color struct (only 8 bits of precision per channel) by clamping each channel into the byte range 0-255. I then set this new array as the data of a Texture2D (with a SurfaceFormat of SurfaceFormat.Color) and render the texture with a SpriteBatch.
What I'm looking for is a way to get rid of this translation process on the CPU and simply send my backbuffer directly to the GPU as some sort of texture, where I want to do some basic post-processing. And I really need a bit more precision than 8 bits there (not necessarily 32 bits, but since what I'm doing isn't GPU intensive, it can't hurt I guess).
How would I go about doing this?
I figured that if I gave Color an explicit size of 32 bytes through StructLayout (8 bytes of padding, since my three channels only fill 24 bytes), set the SurfaceFormat of the texture that is rendered with the SpriteBatch to SurfaceFormat.Vector4, and filled the texture with SetData<Color>, it might work. But I get this exception:
The type you are using for T in this method is an invalid size for this resource.
Is it possible to use an arbitrary, made-up struct and have the GPU interpret it as texture data, the way you can with vertices by specifying the layout through a VertexDeclaration?

I got what I wanted by dumping the Color struct I made and using Vector4 for my color information instead. This works if the SurfaceFormat of the texture is also set to SurfaceFormat.Vector4.
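For reference, here's a minimal sketch of that approach (the 1280x720 size and the backBuffer name are illustrative, not from the original post):

    // CPU-side "backbuffer": one Vector4 per pixel, full float precision per channel.
    Vector4[] backBuffer = new Vector4[1280 * 720];

    // GPU texture with the matching 128-bit surface format (16 bytes per texel).
    var texture = new Texture2D(GraphicsDevice, 1280, 720, false, SurfaceFormat.Vector4);

    // ... fill backBuffer on the CPU ...

    // No per-pixel conversion: the element size of Vector4 matches the
    // surface format, so SetData accepts the array directly.
    texture.SetData<Vector4>(backBuffer);

One caveat: as far as I know, XNA's HiDef profile doesn't support linear filtering on floating-point surface formats, so draw the texture with SamplerState.PointClamp.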

Related

The right way to pack BGRA and ARGB color to int

In SharpDx, the BGRA shift for red is 16:
(color >> 16) & 255
see here.
But in .NET, the ARGB shift for red is also 16:
private const int ARGBRedShift = 16;
see here and here and here.
I'm confused, what's right?
this (like .NET):
public int PackeColorToArgb()
{
    int value = B;
    value |= G << 8;
    value |= R << 16;
    value |= A << 24;
    return value;
}
or this (like SharpDx):
public int PackeColorToArgb()
{
    int value = A;
    value |= B << 8;
    value |= G << 16;
    value |= R << 24;
    return value;
}
For .NET, 0xFFFF0000 is ARGB red, but for SharpDx it's BGRA red. What's right?
That depends on where you're going to use the int. But for the two examples you've provided, you'll pack the byte values into the integer value in exactly the same way. They are both "right". It's only the name that's different, and you can have many different names for the same thing.
Importantly, your second code example, supposedly "correct" for SharpDx, is not correct for the color format you're asking about. You can see right in the source code you referenced that, while you are packing the bytes in the order A, B, G, R (LSB to MSB), the correct order of components is in fact B, G, R, A (again, LSB to MSB).
Your second code example should look just like the first.
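To spell that out, here's a minimal sketch of a matching pack/unpack pair (the field names B, G, R, and A follow your own code; the tuple-returning unpacker is just for illustration):

    public int PackColorToArgb()
    {
        // B in bits 0-7, G in 8-15, R in 16-23, A in 24-31.
        int value = B;
        value |= G << 8;
        value |= R << 16;
        value |= A << 24;
        return value;
    }

    public static (byte A, byte R, byte G, byte B) UnpackArgb(int color)
    {
        return ((byte)((color >> 24) & 255),
                (byte)((color >> 16) & 255), // the "shift for red is 16" from both APIs
                (byte)((color >> 8) & 255),
                (byte)(color & 255));
    }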
Long version…
As you can see from the two source code references you've noted, the actual formats are identical. That is, the format from each API stores the byte values for each color component in the same order within a single 32-bit integer: blue in the lowest 8 bits, then green, then red, then alpha in the highest 8 bits.
The problem is, there's no uniform standard for naming such formats. In the context of .NET, they list the color components in big-endian order (which could be thought of as a little ironic, since so much of the Windows ecosystem is based on little-endian hardware…but, see below). I.e. the most-significant byte is listed first: "ARGB". I call this name "big-endian order" simply because that name is consistent with a scenario in which one stores the 32-bit integer in a sequence of 4 bytes in memory on a computer running in big-endian mode. The order of component initials in the name is the same order they'd appear in that context.
On the other hand, in the context of SharpDx, the name is consistent with the order of bytes you'd see on little-endian hardware. The blue byte would come first in memory, then green, red, and finally alpha.
Fact is, both of these names are somewhat arbitrary. While most mainstream PCs are running in little-endian mode now, which would argue in favor of the SharpDx naming scheme (inherited from the DirectX environment), both APIs can also be found on big-endian hardware, especially as .NET Core gains traction. And in a lot of cases, the programmer using the API doesn't even really care what order the bytes are in. For code that has to deal with the individual bytes, it's still important, but a lot of the time it's more about just knowing what format the tools you're using are writing bitmaps in, and then making sure you've specified the correct format to the API.
All that said, I suspect that the main reason for the discrepancy has less to do with big-endian vs little-endian and more to do with underlying philosophical differences between the people responsible for the API. The fact is, even on big-endian hardware, the SharpDx format for a pixel where the components show up in memory in BGRA order will be "BGRA". Because, bitmap formats don't change just because the byte-order mode of the hardware is different. A pixel is not an integer. It's just a sequence of bytes. What will be different is that the shift values will have to be different, i.e. reversed, so that for code that does treat the pixel as a single 32-bit integer, it can access the individual components correctly.
Instead, it seems to me that the .NET designers recognized that the users of their API will most of the time be dealing with colors at a high level (e.g. setting the color of a pen or brush), not at the pixel level of bitmaps, and so naming the pixel format according to conventional order makes more sense. On the other hand, in SharpDx people are much more often dealing with the low-level pixel data, and having a name that reflects the actual byte-wise sequence of components for a pixel makes more sense in that context.
Indeed, the .NET code you've referenced doesn't involve bitmap data. The Color struct is only ever dealing with single int values at a time, with respect to the "ARGB" nomenclature. Since conceptually, we imagine numbers in big-endian format (even for decimal, i.e. with the most-significant digits first), ARGB is more human-readable. On the other hand, in the areas of .NET that do involve byte order in pixel formats, you'll find that the naming goes back to being representative of the actual byte order, e.g. the list of PixelFormats introduced with WPF.
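To make that concrete, here's a small sketch (assuming a little-endian host, as on typical PCs) showing that the same 32-bit value reads as "ARGB" when viewed as an integer but as B, G, R, A when viewed as bytes in memory:

    using System;

    int red = unchecked((int)0xFFFF0000); // as an integer: A=FF, R=FF, G=00, B=00
    byte[] bytes = BitConverter.GetBytes(red);

    // On little-endian hardware this prints "00-00-FF-FF": B, G, R, A in memory.
    Console.WriteLine(BitConverter.ToString(bytes));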
Clear as mud, right? :)

Calculating the byte size of an object

I am beginning to utilise DirectX and the SlimDX wrapper in C#. For many methods it is necessary to calculate the size of an object, for example in the case of buffers and draw calls; in particular, the "stride" is the number of bytes between successive elements.
So far the data I am passing is a single Vector3, 12 bytes in length, so the size of the buffer used is the number of elements * 12. In this simple case it's easy to see what the size should be. However, how should I calculate it for more complicated examples? For instance:
struct VertexType
{
    public Vector3 position;
    public Vector4 color;
}
Would this be (12+16) in size? Does the fact it is arranged in a struct add anything to the size of the element?
I have tried using sizeof, but this throws an error stating the object is not of a predetermined size. What would be the correct approach?
Try Marshal.SizeOf - http://msdn.microsoft.com/en-us/library/System.Runtime.InteropServices.Marshal.SizeOf(v=vs.110).aspx
using System.Runtime.InteropServices;

VertexType v = new VertexType();

int sizeOfType = Marshal.SizeOf(typeof(VertexType)); // for the type
int sizeOfInstance = Marshal.SizeOf(v);              // for an instance
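For the VertexType above, that yields 28 bytes (12 + 16; with these field types the struct adds no extra padding). Here's a sketch of how the stride might feed a buffer-size calculation; the SlimDX namespace and the vertexCount value are assumptions:

    using System.Runtime.InteropServices;
    using SlimDX; // Vector3 and Vector4

    [StructLayout(LayoutKind.Sequential)]
    struct VertexType
    {
        public Vector3 position; // 12 bytes
        public Vector4 color;    // 16 bytes
    }

    // ...

    int stride = Marshal.SizeOf(typeof(VertexType)); // 28 for this struct
    int vertexCount = 100;
    int bufferSizeInBytes = vertexCount * stride;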

Save XNA rendered image to disk

I am trying to save an image rendered in XNA to disk. This is actually quite easy, but the kicker is that I need the full 32-bit float precision for each channel, not just the 0-255 range.
I have been looking into using texture packing (converting the float into a 4 component ARGB), but I worry I will lose precision this way. I need very high accuracy.
Another way I was looking into is using a shader to multiply the float component by 2147483647 (the maximum positive int), then go through each bit and store a binary 0 or 1 in the rendered image. Each image can later be reassembled in regular code to reconstruct the full-precision float. This works, but the problem is that shader model 3.0 doesn't seem to support 32-bit ints properly. All I get is 24 bits of precision this way.
Is there a way to do this in a more direct and accurate way?
Since you say that you need 32-bit precision for each of your four channels, I assume that your texture is using SurfaceFormat.Vector4, which is, as far as I'm aware, the only 128-bit texture format supported by XNA. In that case, it's easy enough to retrieve the actual data from a texture that you've rendered:
var tex = new Texture2D(GraphicsDevice, 1024, 1024, false, SurfaceFormat.Vector4);
var data = new Vector4[1024 * 1024];
tex.GetData<Vector4>(data);
Then all you need to do is write some code to save the data array to a file that you can read in later, which is easy enough. Then you can reconstitute the texture:
Vector4[] data = LoadMyData();
tex.SetData(data);
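For completeness, here's a hypothetical SaveMyData/LoadMyData pair using BinaryWriter and BinaryReader, so each channel keeps its exact 32-bit float value on disk (the method names match the placeholder above; the path parameter is an assumption):

    using System.IO;
    using Microsoft.Xna.Framework;

    static void SaveMyData(string path, Vector4[] data)
    {
        using (var writer = new BinaryWriter(File.Create(path)))
        {
            writer.Write(data.Length);
            foreach (Vector4 p in data)
            {
                writer.Write(p.X);
                writer.Write(p.Y);
                writer.Write(p.Z);
                writer.Write(p.W);
            }
        }
    }

    static Vector4[] LoadMyData(string path)
    {
        using (var reader = new BinaryReader(File.OpenRead(path)))
        {
            var data = new Vector4[reader.ReadInt32()];
            for (int i = 0; i < data.Length; i++)
            {
                data[i] = new Vector4(reader.ReadSingle(), reader.ReadSingle(),
                                      reader.ReadSingle(), reader.ReadSingle());
            }
            return data;
        }
    }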

Convert 24 Bit BMP to Monochrome Bitmap in C#

How would I go about taking a BMP which is 24 bits and converting it to a monochrome bitmap? I am not concerned about losing colors, etc., and I do not want to use another file format.
There are basically two methods. You can use interop methods to read the raw BMP data and write raw BMP data for a monochrome bitmap. There are googlable functions that will do this.
Or, better, you can use ImageMagickObject to convert the image using a stochastic dither.
Do the second one.
If you do the first one, you should still use a stochastic dither, but you will have to implement it by hand.
Edit: You asked "what do the following RGB values become"... the answer is they become what you want them to become. YOU DECIDE.
The obvious choices are to either use a strict threshold, where anything less than X becomes black and anything more becomes white, or to use a stochastic dither. Select two thresholds, a black threshold bt and a white threshold wt, such that 0 < bt < wt < 255. Then for each pixel choose a random number q between 0.0 and 1.0, and compare the pixel brightness ((r+g+b)/3) to (q*(wt-bt)+bt). If it is greater than or equal, the pixel becomes white; if less, black. This will give you a nice dithered greyscale. For most purposes 31 and 224 are good values for bt and wt, but for photographic images 0 and 255 might be better.
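Here's a minimal sketch of that stochastic dither using System.Drawing (GetPixel/SetPixel keep the example short but are slow; LockBits would be the faster route, and writing a true 1-bit file would additionally need PixelFormat.Format1bppIndexed):

    using System;
    using System.Drawing;

    static Bitmap DitherToMonochrome(Bitmap source, int bt = 31, int wt = 224)
    {
        var rng = new Random();
        var result = new Bitmap(source.Width, source.Height);

        for (int y = 0; y < source.Height; y++)
        {
            for (int x = 0; x < source.Width; x++)
            {
                Color c = source.GetPixel(x, y);
                double brightness = (c.R + c.G + c.B) / 3.0;

                // Random threshold between bt and wt, chosen per pixel.
                double threshold = rng.NextDouble() * (wt - bt) + bt;

                result.SetPixel(x, y, brightness >= threshold ? Color.White : Color.Black);
            }
        }

        return result;
    }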

24 bit data type array to hold 24 bit bitmap file

I am attempting to read a bitmap manually, so I read the bitmap file using a FileStream. There wasn't a problem until I had to deal with 24-bit bitmap files. Is there a method to actually read a 24-bit bitmap image into a 24-bit array?
I hold an 8-bit bitmap image in a byte array like this:
byte[] fileBufferArray = new byte[fileLength];
A few options:
If you're not too worried about memory (you don't have a large number of bitmaps, or very large bitmaps, open), you can store it as 32-bit values instead. Often the fourth byte is then interpreted as "alpha" (a blending specifier when rendering the image on a background). Most modern image manipulation libraries treat color images this way now.
You can pack the colors into a byte array and access them individually; RGB and BGR are the two most common packing orders. Usually you also end up putting padding bytes at the end of each row so that the width in bytes lines up with DWORD (4-byte) boundaries (see the sketch after this list).
You can split the image into three separate byte array 'planes', which are basically 8-bit images for Red, Green and Blue respectively. This is another common format when doing image processing, as often your filtering steps operate on channels independently.
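As a minimal sketch of the second option (the GetChannel helper and its parameters are hypothetical; BMP stores 24-bit pixels in B, G, R order, and rows in the file are usually bottom-up):

    static byte GetChannel(byte[] pixelData, int width, int x, int y, int channel)
    {
        const int bytesPerPixel = 3;

        // BMP rows are padded so each row starts on a 4-byte (DWORD) boundary.
        int stride = (width * bytesPerPixel + 3) & ~3;

        // channel: 0 = blue, 1 = green, 2 = red.
        return pixelData[y * stride + x * bytesPerPixel + channel];
    }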
