Converting a 16-bit luminance value to 32-bit RGB - C#

I have a 16-bit luminance value stored in two bytes, and I want to convert that to R, G, and B values. I have two questions: how do I convert those two bytes to a short, and, assuming that the hue and saturation are 0, how do I turn that short into 8-bit-per-component RGB values?
(The Convert class doesn't have an option to take two bytes and output a short.)

If the 16-bit value is little endian and unsigned, the 2nd byte (the high-order one) would be the one you want to repeat 3x to create the RGB value, and you'd drop the other byte. Or if you want the RGB in a 32-bit integer, you could either use bit shifts and adds or just multiply that 2nd byte by 0x10101.
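A minimal sketch of both approaches, with made-up byte values (the variable names are just for illustration):
byte[] data = { 0x39, 0x17 };   // example: 16-bit luminance 0x1739, low byte first
byte high = data[1];            // the byte worth keeping for an 8-bit greyscale
// multiplying by 0x10101 copies the byte into the R, G and B slots of a 24-bit value
int rgb = high * 0x10101;       // 0x171717
// the same thing with shifts and ORs, plus an opaque alpha for a 32-bit ARGB value
int argb = unchecked((int)0xFF000000) | (high << 16) | (high << 8) | high;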

Try something like this:
byte luminance_upper = 23;
byte luminance_lower = 57;
int luminance = ((int)luminance_upper << 8) | (int)luminance_lower;
That will give you a value in the range 0-65535.
Of course, if you just want to end up with a 32-bit ARGB (greyscale) colour from that, the lower byte isn't going to matter, because you're going from a 16-bit luminance to 8-bit R, G, B components.
byte luminance_upper = 255;
// assuming ARGB format
uint argb = (255u << 24) // alpha = 255
| ((uint)luminance_upper << 16) // red
| ((uint)luminance_upper << 8) // green
| (uint)luminance_upper; // blue (hope my endianness is correct)
Depending on your situation, there may be a better way of going about it. For example, the above code does a linear mapping; however, you may get better-looking results by using a logarithmic curve to map the 16-bit values to 8-bit RGB components.
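For instance, a power-law (gamma-style) curve is one simple non-linear option; this is only an illustration, and the 2.2 exponent is an assumption:
// Map a full 16-bit luminance (0-65535) to an 8-bit value with a gamma-style curve.
static byte MapLuminance(ushort luminance16)
{
    double normalized = luminance16 / 65535.0;        // 0.0 .. 1.0
    double curved = Math.Pow(normalized, 1.0 / 2.2);  // brightens dark values (gamma ~2.2 assumed)
    return (byte)Math.Round(curved * 255.0);
}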

You will need the other components too (hue and saturation).
Also, 16-bit sounds a bit weird. Normally those values are floating point.
There is a very good article on CodeProject for conversions of many color spaces.

Related

Unexpected RGB channel values for gray pixel in 48-bit bitmap

Following up on this question.
I've managed to extract pixel information off a bitmap instance using Bitmap.LockBits. The PixelFormat is Format48bppRgb which, based on my understanding and Peter's answer on the aforementioned question, should store each of a pixel's RGB color channels using two bytes of storage. The bytes' combined value should be a number between 0 and 8192, representing an RGB channel intensity value. That value is obtained by passing both bytes to the BitConverter.ToUInt16 method.
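For reference, a simplified sketch of that extraction (assuming bmp is the loaded System.Drawing.Bitmap; the real code walks every pixel using the stride):
var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
var data = bmp.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format48bppRgb);
try
{
    // copy the first row (3 pixels * 6 bytes) out of unmanaged memory
    var row = new byte[18];
    System.Runtime.InteropServices.Marshal.Copy(data.Scan0, row, 0, row.Length);
    // GDI+ stores the channels blue, green, red, two bytes each (little endian)
    ushort blue = BitConverter.ToUInt16(row, 0);
    ushort green = BitConverter.ToUInt16(row, 2);
    ushort red = BitConverter.ToUInt16(row, 4);
}
finally
{
    bmp.UnlockBits(data);
}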
So upon extracting pixel information for the following 3 x 3 bitmap (magnified here for clarity):
The first row of pixel channels comes out like this:
Pixel color | Red intensity | Green intensity | Blue intensity
--------------------------------------------------------------
RED         | 8192          | 0               | 0
GREEN       | 0             | 8192            | 0
BLUE        | 0             | 0               | 8192
So far so good.
On the second row, however, it goes like this:
Pixel color | Red intensity | Green intensity | Blue intensity
--------------------------------------------------------------
WHITE       | 8192          | 8192            | 8192
GRAY (!)    | 1768 (!)      | 1768 (!)        | 1768 (!)
BLACK       | 0             | 0               | 0
The white and black pixel channel values make sense to me.
The gray, however, doesn't.
If you use any color picker on the gray pixel above, you should get a perfectly medium gray. In 24-bit color it should be the equivalent of (R: 128, G: 128, B: 128), or #808080 in hexadecimal form.
Then how come, in the Format48bppRgb pixel format, the channel intensities are way below the expected middle value of 4096? Isn't this GDI+-based range of 0-8192 supposed to work like its 8-bit counterpart, with 0 being the lowest intensity and 8192 the highest? What am I missing here?
For reference, here is a screen capture from the Visual Studio debugger showing the raw byte indexes and values, with additional notes on the stride, channel positions and their extracted intensity values, up to the gray pixel:
The part where you state an incorrect assumption is that #808080 is "perfectly medium gray". It is not, at least not if you look at it in a certain way (more about it here).
Many color standards, including sRGB, use gamma compression to make darker colors more spaced out in the range of 256 values normally used to store RGB colors. This roughly means taking a square root (or 2.2-root) of the relative component value before encoding this value. The reason is that the human eye (like other senses) perceives brightness logarithmically, thus it is important to represent even an arithmetically small change in the brightness if it would mean actually doubling it, for example.
The byte value of 128 actually corresponds to only about 21.6 % of full linear brightness ((128/255)^2.2 ≈ 0.215, or ≈ 0.216 with the exact sRGB formula), which is exactly what you're seeing in the 16-bit components: 1768/8192 ≈ 0.216. The space of possible values there is much larger, so GDI+ (or the format) doesn't need to store them in a gamma-compressed way anymore.
In case you need an algorithm, taking the 2.2-root of the value works mostly well, but the correct formula is a bit different, see here. The root function normally has an infinite slope near zero, so the specific formula attempts to fix that by making that portion linear. A piece of code can be derived from that quite easily:
static byte TransformComponent(ushort linear)
{
    // sRGB encoding: linear channel value in, gamma-compressed 8-bit component out
    const double a = 0.055;
    var c = linear / 8192.0;                  // normalize the 0-8192 channel to 0.0-1.0
    c = c <= 0.0031308
        ? 12.92 * c                           // linear segment near black
        : (1 + a) * Math.Pow(c, 1 / 2.4) - a; // gamma-compressed segment
    return (byte)Math.Round(c * Byte.MaxValue);
}
This gives 128 for the value of 1768.
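As a quick check, feeding the table values through the TransformComponent method above gives the expected 8-bit results:
byte gray = TransformComponent(1768);  // -> 128 (the #808080 the color picker reports)
byte white = TransformComponent(8192); // -> 255
byte black = TransformComponent(0);    // -> 0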

The right way to pack BGRA and ARGB color to int

In SharpDx BGRA shift for red is 16:
(color >> 16) & 255
see here.
But in .NET, ARGB shift for red is also 16:
private const int ARGBRedShift = 16;
see here and here and here.
I'm confused, what's right?
this (like .NET):
public int PackeColorToArgb()
{
    int value = B;
    value |= G << 8;
    value |= R << 16;
    value |= A << 24;
    return value;
}
or this (like SharpDx):
public int PackeColorToArgb()
{
    int value = A;
    value |= B << 8;
    value |= G << 16;
    value |= R << 24;
    return value;
}
For .NET, 0xFFFF0000 is ARGB red, but for SharpDx it is BGRA red. What's right?
The right way to pack BGRA and ARGB color to int
That depends on where you're going to use the int. But for the two examples you've provided, you'll pack the byte values into the integer value in exactly the same way. They are both "right". It's only the name that's different, and you can have many different names for the same thing.
Importantly, your second code example, supposedly "correct" for SharpDx, is not correct for the color format you're asking about. You can see right in the source code you reference that, while you are packing the bytes in the order A, B, G, and R (LSB to MSB), the correct order of components is in fact B, G, R, A (again, LSB to MSB).
Your second code example should look just like the first.
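To make that concrete, here is a small illustration (the byte values are made up) of how the same layout is built regardless of which name you use for it:
// Hypothetical component values
byte A = 0xFF, R = 0x12, G = 0x34, B = 0x56;
// "ARGB" (.NET naming) and "BGRA" (SharpDx naming) describe the same layout:
// blue in bits 0-7, green in 8-15, red in 16-23, alpha in 24-31.
int packed = B | (G << 8) | (R << 16) | (A << 24);
// packed == unchecked((int)0xFF123456) under either name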
Long version…
As you can see from the two source code references you've noted, the actual formats are identical. That is, the format from each API, stores the byte values for each color component in the same order, within a single 32-bit integer: blue in the lowest 8 bits, then green, then red, then alpha in the highest 8 bits.
The problem is, there's no uniform standard for naming such formats. In the context of .NET, they list the color components in big-endian order (which could be thought of as a little ironic, since so much of the Windows ecosystem is based on little-endian hardware…but, see below). I.e. the most-significant byte is listed first: "ARGB". I call this name "big-endian order" simply because that name is consistent with a scenario in which one stores the 32-bit integer in a sequence of 4 bytes in memory on a computer running in big-endian mode. The order of component initials in the name is the same order they'd appear in that context.
On the other hand, in the context of SharpDx, the name is consistent with the order of bytes you'd see on little-endian hardware. The blue byte would come first in memory, then green, red, and finally alpha.
Fact is, both of these are somewhat arbitrary. While most mainstream PCs are running in little-endian mode now, which would argue in favor of the SharpDx naming scheme (which is inherited from the DirectX environment), these APIs can also be found on big-endian hardware, especially as .NET Core is gaining traction. And in a lot of cases, the programmer using the API doesn't even really care what order the bytes are in. For code that has to deal with the individual bytes, it's still important, but a lot of the time it's more about just knowing what format the tools you're using are writing bitmaps in, and then making sure you've specified the correct format to the API.
All that said, I suspect that the main reason for the discrepancy has less to do with big-endian vs little-endian and more to do with underlying philosophical differences between the people responsible for the API. The fact is, even on big-endian hardware, the SharpDx format for a pixel where the components show up in memory in BGRA order will be "BGRA". Because, bitmap formats don't change just because the byte-order mode of the hardware is different. A pixel is not an integer. It's just a sequence of bytes. What will be different is that the shift values will have to be different, i.e. reversed, so that for code that does treat the pixel as a single 32-bit integer, it can access the individual components correctly.
Instead, it seems to me that the .NET designers recognized that the users of their API will most of the time be dealing with colors at a high level (e.g. setting the color of a pen or brush), not at the pixel level of bitmaps, and so naming the pixel format according to conventional order makes more sense. On the other hand, in SharpDx people are much more often dealing with the low-level pixel data, and having a name that reflects the actual byte-wise sequence of components for a pixel makes more sense in that context.
Indeed, the .NET code you've referenced doesn't involve bitmap data. The Color struct is only ever dealing with single int values at a time, with respect to the "ARGB" nomenclature. Since conceptually, we imagine numbers in big-endian format (even for decimal, i.e. with the most-significant digits first), ARGB is more human-readable. On the other hand, in the areas of .NET that do involve byte order in pixel formats, you'll find that the naming goes back to being representative of the actual byte order, e.g. the list of PixelFormats introduced with WPF.
Clear as mud, right? :)

Convert Pixel Buffer to B8G8R8A8_UNorm from 16Bit

So I have, from an external native library (C++), a pixel buffer that appears to be in 16-bit RGB (the SlimDX equivalent is B5G6R5_UNorm).
I want to display the image that is represented by this buffer using Direct2D. But Direct2D does not support B5G6R5_UNorm,
so I need to convert this pixel buffer to B8G8R8A8_UNorm.
I have seen various code snippets for such a task using bit-shifting methods, but none of them were specific to my needs or formats. It doesn't help that I have zero, nada, none, zilch clue about bit shifting, or how it is done.
What I am after is a C# code example of such a task or any built-in method to do the conversion - I don't mind using other libraries.
Please note: I know this can be done using the C# bitmap classes, but I am trying not to rely on these built-in classes (there is something about GDI I don't like). The images (in the form of pixel buffers) will be coming in thick and fast, and I am choosing SlimDX for its ease of use and performance.
The reason why I believe I need to convert the pixel buffer is that if I draw the image using B8G8R8A8_UNorm, the image has a green covering and the pixels are just all over the place, hence why I believe I need to first convert or 'upgrade' the pixel buffer to the required format.
Just to add: when I do the above without converting the buffer, the image doesn't fill the entire geometry.
The pixel buffers are provided via byte[] objects
Bit shifting and logical operators are really useful when dealing with image formats, so you owe it to yourself to read more about it. However, I can give you a quick run-down of what this pixel format represents, and how to convert from one to another. I should preface my answer with a warning that I really don't know C# and its support libraries all that well, so there may be an in-box solution for you.
First of all, your pixel buffer has the format B5G6R5_UNORM. So we've got 16 bits (5 red, 6 green, and 5 blue) assigned to each pixel. We can visualize the bit layout of this pixel format as "RRRRRGGGGGGBBBBB", where 'R' stands for bits that belong to the red channel, 'G' for bits that belong to the green channel, and 'B' for bits that belong to the blue channel.
Now, let's say the first 16 bits (two bytes) of your pixel buffer are 1111110100101111. Line that up with the bit layout of your pixel format...
RRRRRGGGGGGBBBBB
1111110100101111
This means the red channel has bits 11111, green has 101001, and blue has 01111. Converting from binary to decimal: red=31, green=41, and blue=15. You'll notice the red channel has all bits set 1, but its value (31) is actually smaller than the green channel (41). However, this doesn't mean the color is more green than red when displayed; the green channel has an extra bit, so it can represent more values than the red and blue channels, but in this particular example there is actually more red in the output color! That's where the UNORM part comes in...
UNORM stands for unsigned normalized integer; this means the color channel values are to be interpreted as evenly spaced floating-point numbers from 0.0 to 1.0. The values are normalized by the number of bits allocated. What does that mean, exactly? Let's say you had a format with only 3 bits to store a channel. This means the channel can have 2^3=8 different values, which are shown below with the respective decimal, binary, and normalized representations. The normalized value is just the decimal value divided by the largest possible decimal value that can be represented with N bits.
Decimal | Binary | Normalized
-----------------------------
0 | 000 | 0/7 = 0.000
1 | 001 | 1/7 =~ 0.142
2 | 010 | 2/7 =~ 0.285
3 | 011 | 3/7 =~ 0.428
4 | 100 | 4/7 =~ 0.571
5 | 101 | 5/7 =~ 0.714
6 | 110 | 6/7 =~ 0.857
7 | 111 | 7/7 = 1.000
Going back to the earlier example, where the pixel had bits 1111110100101111, we already know our decimal values for the three color channels: RGB = {31, 41, 15}. We want the normalized values instead, because the decimal values are misleading and don't tell us much without knowing how many bits they were stored in. The red and blue channels are stored with 5 bits, so the largest decimal value is 2^5-1=31; however, the green channel's largest decimal value is 2^6-1=63. Knowing this, the normalized color channels are:
// NormalizedValue = DecimalValue / MaxDecimalValue
R = 31 / 31 = 1.000
G = 41 / 63 =~ 0.650
B = 15 / 31 =~ 0.483
To reiterate, the normalized values are useful because they represent the relative contribution of each color channel in the output. Adding more bits to a given channel doesn't affect the range of possible colors; it simply improves color accuracy (more shades of that color channel, basically).
Knowing all of the above, you should be able to convert from any RGB(A) format, regardless of how many bits are stored in each channel, to any other RGB(A) format. For example, let's convert the normalized values we just calculated to B8G8R8A8_UNORM. This is easy once you have normalized values calculated, because you just scale by the maximum value in the new format. Every channel uses 8 bits, so the maximum value is 2^8-1=255. Since the original format didn't have an alpha channel, you would typically just store the max value (meaning fully opaque).
// OutputValue = InputValueNormalized * MaxOutputValue
B = 0.483 * 255 = 123.165
G = 0.650 * 255 = 165.75
R = 1.000 * 255 = 255
A = 1.000 * 255 = 255
There's only one thing missing now before you can code this. Way up above, I was able to pull out the bits for each channel just by lining them up and copying them. That's how I got the green bits 101001. In code, this can be done by "masking" out the bits we don't care about. Shifting does exactly what it sounds like: it moves bits to the right or left. When you move bits to the right, the rightmost bit gets discarded and the new leftmost bit is assigned 0. Visualization below using the 16-bit example from above.
1111110100101111 // original 16 bits
0111111010010111 // shift right 1x
0011111101001011 // shift right 2x
0001111110100101 // shift right 3x
0000111111010010 // shift right 4x
0000011111101001 // shift right 5x
You can keep shifting, and eventually you'll end up with sixteen 0's. However, I stopped at five shifts for a reason. Notice now the 6 rightmost bits are the green bits (I've shifted/discarded the 5 blue bits). We've very nearly extracted the exact bits we need, but there's still the extra 5 red bits to the left of the green bits. To remove these, we use a "logical and" operation to mask out only the rightmost 6 bits. The mask, in binary, is 0000000000111111; 1 means we want the bit, and 0 means we don't want it. The mask is all 0's except for the last 6 positions, because we only want the last 6 bits. Line this mask up with the 5x shifted number, and the output is 1 when both bits are 1, and 0 for every other bit:
0000011111101001 // original 16 bits shifted 5x to the right
0000000000111111 // bit mask to extract the rightmost 6 bits
------------------------------------------------------------
0000000000101001 // result of the 'logical and' of the two above numbers
The result is exactly the number we're looking for: the 6 green bits and nothing else. Recall that the leading 0's have no effect on the decimal value (it's still 41). It's very simple to do the 'shift right' (>>) and 'logical and' (&) operations in C# or any other C-like language. Here's what it looks like in C#:
// 0xFD2F is 1111110100101111 in binary
uint pixel = 0xFD2F;
// 0x1F is 00011111 in binary (5 rightmost bits are 1)
uint mask5bits = 0x1F;
// 0x3F is 00111111 in binary (6 rightmost bits are 1)
uint mask6bits = 0x3F;
// shift right 11x (discard 5 blue + 6 green bits), then mask 5 bits
uint red = (pixel >> 11) & mask5bits;
// shift right 5x (discard 5 blue bits), then mask 6 bits
uint green = (pixel >> 5) & mask6bits;
// mask 5 rightmost bits
uint blue = pixel & mask5bits;
Putting it all together, you might end up with a routine that looks similar to this. Do be careful to read up on endianness, however, to make sure the bytes are ordered in the way you expect. In this case, the parameter is a 16-bit unsigned integer holding one B5G6R5 pixel.
byte[] R5G6B5toR8G8B8A8(UInt16 input)
{
    return new byte[]
    {
        (byte)((input & 0x1F) / 31.0f * 255),         // blue
        (byte)(((input >> 5) & 0x3F) / 63.0f * 255),  // green
        (byte)(((input >> 11) & 0x1F) / 31.0f * 255), // red
        255                                           // alpha
    };
}
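If you need to process the whole byte[] buffer from the question, a rough sketch along the same lines might look like this (the method and buffer names are mine, and it assumes each 16-bit pixel is stored little endian):
// Convert a B5G6R5 pixel buffer (2 bytes per pixel) to B8G8R8A8 (4 bytes per pixel).
static byte[] ConvertB5G6R5ToB8G8R8A8(byte[] source)
{
    var dest = new byte[source.Length * 2];
    for (int i = 0, j = 0; i < source.Length; i += 2, j += 4)
    {
        // assumes little-endian byte order within each 16-bit pixel
        ushort pixel = (ushort)(source[i] | (source[i + 1] << 8));
        dest[j]     = (byte)((pixel & 0x1F) * 255 / 31);         // blue
        dest[j + 1] = (byte)(((pixel >> 5) & 0x3F) * 255 / 63);  // green
        dest[j + 2] = (byte)(((pixel >> 11) & 0x1F) * 255 / 31); // red
        dest[j + 3] = 255;                                       // alpha (opaque)
    }
    return dest;
}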

adding together the LSB and MSB to obtain a value

I am reading data back from an imaging camera system. The camera detects age, gender, etc., and one of the values that comes back is a confidence value. This is 2 bytes and is shown as an LSB and an MSB. I have just tried converting these to integers and adding them together, but I don't get the value expected.
Is this the correct way to get a value using the LSB and MSB? I have not used this before.
Thanks
Your value is going to be:
Value = LSB + (MSB << 8);
Explanation:
A byte can only store 256 different values (0-255), whereas the combined value here is 16 bits.
The MSB occupies the upper (left-hand^) 8 bits of the 16-bit value, so it needs to be shifted left by 8 bits before being combined with the LSB. You can then add the two values.
I would suggest looking up the shifting operators.
^ based on endianness (Intel/Motorola)
Assuming that MSB and LSB are most/least significant byte (rather than bit or any other expansion of that acronym), the value can be obtained by MSB * 256 + LSB.
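For example (the byte values here are made up):
byte lsb = 0x2F; // least significant byte from the camera
byte msb = 0x01; // most significant byte from the camera

int valueShift = lsb + (msb << 8);   // 0x012F = 303
int valueMultiply = msb * 256 + lsb; // same result: 303

// BitConverter works too, if the machine is little endian (typical x86/x64):
ushort valueBitConverter = BitConverter.ToUInt16(new byte[] { lsb, msb }, 0);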

How to perform a Bitwise Operator on a byte array in C#

I am using C# and Microsoft.Xna.Framework.Audio.
I have managed to record some audio into a byte[] array and I am able to play it back.
The audio comes in as 8-bit unsigned data, and I would like to convert it into 16-bit mono signed audio so I can read the frequency and whatnot.
I have read in a few places that for sound sampling you perform a bitwise OR and shift the bits 8 places.
I have performed the code as follows:
soundArray[i] = (short)(buffer[i] | (buffer[i + 1] << 8));
What I end up with is a lot of negative data.
From my understanding it would mostly need to be in the positive and would represent a wave length of data.
Any suggestions or help greatly appreciated,
Cheers.
MonkeyGuy.
This combines two 8-bit unsigned integers into one 16-bit signed integer:
soundArray[i] = (short)(buffer[i] | (buffer[i + 1] << 8));
I think what you might want is to simply scale each 8-bit unsigned integer to a 16-bit signed integer:
soundArray[i] = (short)((buffer[i] - 128) << 8);
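Applied to the whole recording, that might look something like this (assuming buffer holds the recorded 8-bit samples):
// 8-bit unsigned samples (0-255, silence at 128) -> 16-bit signed samples (silence at 0)
short[] soundArray = new short[buffer.Length];
for (int i = 0; i < buffer.Length; i++)
{
    soundArray[i] = (short)((buffer[i] - 128) << 8);
}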
Have you tried converting the byte to short before shifting?
