Adding together the LSB and MSB to obtain a value - C#

I am reading data back from an imaging camera system that detects age, gender, etc. One of the values that comes back is a confidence value; it is 2 bytes and is provided as an LSB and an MSB. I have tried converting these to integers and adding them together, but I don't get the value expected.
Is this the correct way to get a value from the LSB and MSB? I have not used this before.
Thanks

Your value is going to be:
Value = LSB + (MSB << 8);
Explanation:
A byte can only store 256 different values (0 - 255), whereas the combined value (for this example) is 16 bits.
The MSB is the left-hand^ side of the 16 bits, and as such needs to be shifted into the upper 8 bits before it is combined. You can then add (or OR) the two values.
I would suggest looking up the shifting operators.
^ based on endianness (Intel/Motorola)
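A minimal, runnable C# sketch of the same idea (the byte values here are made up for illustration):

// Two bytes received from the camera, hypothetical values.
byte lsb = 0x2F; // least significant byte
byte msb = 0x01; // most significant byte

// Shift the MSB into the upper 8 bits, then combine with the LSB.
// OR and + are equivalent here because the two parts occupy disjoint bits.
int value = lsb | (msb << 8);

Console.WriteLine(value); // 303 (0x012F), not lsb + msb = 48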

Assuming that MSB and LSB are most/least significant byte (rather than bit or any other expansion of that acronym), the value can be obtained by MSB * 256 + LSB.


What does "int &= 0xFF" in a checksum do?

I implemented this checksum algorithm I found, and it works fine but I can't figure out what this "&= 0xFF" line is actually doing.
I looked up the bitwise & operator, and Wikipedia says it's a logical AND of all the bits in A with B. I also read that 0xFF is equivalent to 255 -- which should mean that all of its bits are 1. If you take any number & 0xFF, wouldn't that be the identity of the number? So A & 0xFF produces A, right?
So then I thought, wait a minute: checksum in the code below is a 32-bit int, but 0xFF is only 8 bits. Does that mean that the result of checksum &= 0xFF is that 24 bits end up as zeros and only the remaining 8 bits are kept? In which case, checksum is truncated to 8 bits. Is that what's going on here?
private int CalculateChecksum(byte[] dataToCalculate)
{
    int checksum = 0;
    for (int i = 0; i < dataToCalculate.Length; i++)
    {
        checksum += dataToCalculate[i];
    }
    // What does this line actually do?
    checksum &= 0xff;
    return checksum;
}
Also, if the result is getting truncated to 8 bits, is that because 32 bits is pointless in a checksum? Is it possible to have a situation where a 32 bit checksum catches corrupt data when 8 bit checksum doesn't?
It is masking off the higher bytes, leaving only the lower byte.
checksum &= 0xFF;
Is syntactically short for:
checksum = checksum & 0xFF;
Since the & is performed on ints, the 0xFF gets expanded into an int:
checksum = checksum & 0x000000FF;
Which masks off the upper 3 bytes and returns the lower byte as an integer (not a byte).
To answer your other question: Since a 32-bit checksum is much wider than an 8-bit checksum, it can catch errors that an 8-bit checksum would not, but both sides need to use the same checksum calculations for that to work.
Seems like you have a good understanding of the situation.
Does that mean that the result of checksum &= 0xFF is that 24 bits end up as zeros and only the remaining 8 bits are kept?
Yes.
Is it possible to have a situation where a 32 bit checksum catches corrupt data when 8 bit checksum doesn't?
Yes.
This is performing a simple checksum on the bytes (8-bit values) by adding them and ignoring any overflow out into higher-order bits. The final &= 0xFF, as you suspected, just truncates the value to the 8 LSBs of the 32-bit int (in C#, int is always 32 bits), resulting in an unsigned value between 0 and 255.
The truncation to 8 bits, throwing away the higher-order bits, is simply the algorithm defined for this checksum implementation. Historically, this sort of check value was used to provide some confidence that a block of bytes had been transferred correctly over a simple serial interface.
To answer your last question: yes, a 32-bit check value will be able to detect errors that would not be detected with an 8-bit check value.
Yes, the checksum is truncated to 8 bits by the &= 0xFF. The lowest 8 bits are kept and all higher bits are set to 0.
Narrowing the checksum to 8 bits does decrease its reliability. Just think of two 32-bit checksums that are different but whose lowest 8 bits are equal: after truncating to 8 bits both are equal, whereas at 32 bits they are not.
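To make that concrete, here is a small sketch (reusing the CalculateChecksum method above; the data values are made up) of two buffers that an 8-bit checksum cannot tell apart:

byte[] a = { 100, 200 }; // sum = 300
byte[] b = { 44, 0 };    // sum = 44

// 300 & 0xFF == 44 and 44 & 0xFF == 44, so the truncated checksums
// collide, while the full 32-bit sums (300 vs 44) differ and a
// 32-bit checksum would catch the difference.
Console.WriteLine(CalculateChecksum(a)); // 44
Console.WriteLine(CalculateChecksum(b)); // 44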

Convert Pixel Buffer to B8G8R8A8_UNorm from 16Bit

So I have, from an external native library (C++), a pixel buffer that appears to be in 16-bit RGB (the SlimDX equivalent is B5G6R5_UNorm).
I want to display the image that is represented by this buffer using Direct2D. But Direct2D does not support B5G6R5_UNorm.
So I need to convert this pixel buffer to B8G8R8A8_UNorm.
I have seen various code snippets of such a task using bit-shifting methods, but none of them were specific to my needs or formats. It doesn't help that I have zero, nada, none, zilch clue about bit shifting, or how it is done.
What I am after is a C# code example of such a task, or any built-in method to do the conversion - I don't mind using other libraries.
Please note: I know this can be done using the C# bitmap classes, but I am trying not to rely on these built-in classes (there is something about GDI I don't like). The images (in the form of pixel buffers) will be coming in thick and fast, and I am choosing SlimDX for its ease of use and performance.
The reason why I believe I need to convert the pixel buffer is that if I draw the image using B8G8R8A8_UNorm, the image has a green covering and the pixels are just all over the place, hence why I believe I need to first convert or 'upgrade' the pixel buffer to the required format.
Just to add: when I do the above without converting the buffer, the image doesn't fill the entire geometry.
The pixel buffers are provided via byte[] objects.
Bit shifting and logical operators are really useful when dealing with image formats, so you owe it to yourself to read more about it. However, I can give you a quick run-down of what this pixel format represents, and how to convert from one to another. I should preface my answer with a warning that I really don't know C# and its support libraries all that well, so there may be an in-box solution for you.
First of all, your pixel buffer has the format B5G6R5_UNORM. So we've got 16 bits (5 red, 6 green, and 5 blue) assigned to each pixel. We can visualize the bit layout of this pixel format as "RRRRRGGGGGGBBBBB", where 'R' stands for bits that belong to the red channel, 'G' for bits that belong to the green channel, and 'B' for bits that belong to the blue channel.
Now, let's say the first 16 bits (two bytes) of your pixel buffer are 1111110100101111. Line that up with the bit layout of your pixel format...
RRRRRGGGGGGBBBBB
1111110100101111
This means the red channel has bits 11111, green has 101001, and blue has 01111. Converting from binary to decimal: red=31, green=41, and blue=15. You'll notice the red channel has all bits set to 1, but its value (31) is actually smaller than the green channel's (41). However, this doesn't mean the color is more green than red when displayed; the green channel has an extra bit, so it can represent more values than the red and blue channels, but in this particular example there is actually more red in the output color! That's where the UNORM part comes in...
UNORM stands for unsigned normalized integer; this means the color channel values are to be interpreted as evenly spaced floating-point numbers from 0.0 to 1.0. The values are normalized by the number of bits allocated. What does that mean, exactly? Let's say you had a format with only 3 bits to store a channel. This means the channel can have 2^3=8 different values, which are shown below with the respective decimal, binary, and normalized representations. The normalized value is just the decimal value divided by the largest possible decimal value that can be represented with N bits.
Decimal | Binary | Normalized
-----------------------------
0       | 000    | 0/7 = 0.000
1       | 001    | 1/7 =~ 0.143
2       | 010    | 2/7 =~ 0.286
3       | 011    | 3/7 =~ 0.429
4       | 100    | 4/7 =~ 0.571
5       | 101    | 5/7 =~ 0.714
6       | 110    | 6/7 =~ 0.857
7       | 111    | 7/7 = 1.000
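In code, normalization is just a division by the largest value representable in that many bits; a small C# sketch:

// Normalize an N-bit channel value to the 0.0 - 1.0 range.
static float Normalize(uint value, int bits)
{
    uint max = (1u << bits) - 1; // e.g. bits = 3 gives max = 7
    return value / (float)max;
}

// Normalize(5, 3) =~ 0.714, matching the table above.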
Going back to the earlier example, where the pixel had bits 1111110100101111, we already know our decimal values for the three color channels: RGB = {31, 41, 15}. We want the normalized values instead, because the decimal values are misleading and don't tell us much without knowing how many bits they were stored in. The red and blue channels are stored with 5 bits, so the largest decimal value is 2^5-1=31; however, the green channel's largest decimal value is 2^6-1=63. Knowing this, the normalized color channels are:
// NormalizedValue = DecimalValue / MaxDecimalValue
R = 31 / 31 = 1.000
G = 41 / 63 =~ 0.650
B = 15 / 31 =~ 0.483
To reiterate, the normalized values are useful because they represent the relative contribution of each color channel to the output. Adding more bits to a given channel doesn't widen the range of possible colors; it simply improves color accuracy (more shades of that color channel, basically).
Knowing all of the above, you should be able to convert from any RGB(A) format, regardless of how many bits are stored in each channel, to any other RGB(A) format. For example, let's convert the normalized values we just calculated to B8G8R8A8_UNORM. This is easy once you have normalized values calculated, because you just scale by the maximum value in the new format. Every channel uses 8 bits, so the maximum value is 2^8-1=255. Since the original format didn't have an alpha channel, you would typically just store the max value (meaning fully opaque).
// OutputValue = InputValueNormalized * MaxOutputValue
B = 0.483 * 255 = 123.165
G = 0.650 * 255 = 165.75
R = 1.000 * 255 = 255
A = 1.000 * 255 = 255
There's only one thing missing now before you can code this. Way up above, I was able to pull out the bits for each channel just by lining them up and copying them. That's how I got the green bits 101001. In code, this is done by shifting and then "masking" out the bits we don't care about. Shifting does exactly what it sounds like: it moves bits to the right or left. When you move bits to the right, the rightmost bit gets discarded and the new leftmost bit is assigned 0. Visualization below using the 16-bit example from above.
1111110100101111 // original 16 bits
0111111010010111 // shift right 1x
0011111101001011 // shift right 2x
0001111110100101 // shift right 3x
0000111111010010 // shift right 4x
0000011111101001 // shift right 5x
You can keep shifting, and eventually you'll end up with sixteen 0's. However, I stopped at five shifts for a reason. Notice now the 6 rightmost bits are the green bits (I've shifted/discarded the 5 blue bits). We've very nearly extracted the exact bits we need, but there's still the extra 5 red bits to the left of the green bits. To remove these, we use a "logical and" operation to mask out only the rightmost 6 bits. The mask, in binary, is 0000000000111111; 1 means we want the bit, and 0 means we don't want it. The mask is all 0's except for the last 6 positions, because we only want the last 6 bits. Line this mask up with the 5x shifted number, and the output is 1 when both bits are 1, and 0 for every other bit:
0000011111101001 // original 16 bits shifted 5x to the right
0000000000111111 // bit mask to extract the rightmost 6 bits
------------------------------------------------------------
0000000000101001 // result of the 'logical and' of the two above numbers
The result is exactly the number we're looking for: the 6 green bits and nothing else. Recall that the leading 0's have no effect on the decimal value (it's still 41). It's very simple to do the 'shift right' (>>) and 'logical and' (&) operations in C# or any other C-like language. Here's what it looks like in C#:
// 0xFD2F is 1111110100101111 in binary
uint pixel = 0xFD2F;
// 0x1F is 00011111 in binary (5 rightmost bits are 1)
uint mask5bits = 0x1F;
// 0x3F is 00111111 in binary (6 rightmost bits are 1)
uint mask6bits = 0x3F;
// shift right 11x (discard 5 blue + 6 green bits), then mask 5 bits
uint red = (pixel >> 11) & mask5bits;
// shift right 5x (discard 5 blue bits), then mask 6 bits
uint green = (pixel >> 5) & mask6bits;
// mask 5 rightmost bits
uint blue = pixel & mask5bits;
Putting it all together, you might end up with a routine that looks similar to this. Do be careful to read up on endianness, however, to make sure the bytes are ordered in the way you expect. In this case, the parameter is a 16-bit unsigned integer holding one B5G6R5 pixel.
byte[] B5G6R5toB8G8R8A8(UInt16 input)
{
    return new byte[]
    {
        (byte)((input & 0x1F) / 31.0f * 255),         // blue
        (byte)(((input >> 5) & 0x3F) / 63.0f * 255),  // green
        (byte)(((input >> 11) & 0x1F) / 31.0f * 255), // red
        255                                           // alpha
    };
}
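Since your buffer arrives as a byte[], you would apply that routine once per pixel; here is a sketch, under the assumption that each 16-bit pixel is stored little-endian (low byte first):

byte[] ConvertBuffer(byte[] source)
{
    // Two bytes per B5G6R5 pixel in, four bytes per B8G8R8A8 pixel out.
    byte[] dest = new byte[source.Length * 2];
    for (int i = 0, j = 0; i < source.Length; i += 2, j += 4)
    {
        UInt16 pixel = (UInt16)(source[i] | (source[i + 1] << 8));
        byte[] bgra = B5G6R5toB8G8R8A8(pixel);
        dest[j] = bgra[0];     // blue
        dest[j + 1] = bgra[1]; // green
        dest[j + 2] = bgra[2]; // red
        dest[j + 3] = bgra[3]; // alpha
    }
    return dest;
}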

Reading 4 bits without losing information

I have come across a problem I cannot seem to solve.
I have a type of file "ASDF", and in their header I can get the necessary information to read them. The problem is that one of the "fields" is only 4 bits long.
So, let's say it's like this:
From bit 0 to 8 it's the index of the current node (I've read this already)
From 8 to 16 it's the index for the next node (Read this as well)
From bit 16 to 20 Length of the content (string, etc..)
So my problem is that if I try to read the "length" with a byte reader, I will be losing 4 bits of information, or be "4 bits off". Is there any way to read only 4 bits?
You should read this byte as you read the others, then apply a bitmask of 0x0F.
For example
byte result = (byte)(byteRead & 0x0F);
This will preserve the lower four bits in the result.
If the needed bits are the high four, shift right first and then mask:
byte result = (byte)((byteRead >> 4) & 0x0F);
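Applied to the header layout you describe, a minimal sketch might look like this (the reader variable and field names are hypothetical):

// Header layout: byte 0 = current node index, byte 1 = next node index,
// low 4 bits of byte 2 = content length (whether the length sits in the
// low or high nibble depends on the format's bit ordering).
byte[] header = reader.ReadBytes(3); // reader is an existing BinaryReader

byte currentNode = header[0];
byte nextNode = header[1];
byte contentLength = (byte)(header[2] & 0x0F);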

What does this mean in C# while converting to an unsigned int 32?

(uint)Convert.ToInt32(elements[0]) << 24;
The << is the left shift operator.
Given that the number is a binary number, it will shift all the bits the specified amount to the left.
If we have
2 << 1
This will take the number 2 in binary (00000010) and shift it to the left one bit. This gives you 4 (00000100).
Overflows
Note that once you get to the very left, the bits are discarded. So assume you are working with an 8-bit integer (I know the C# uint in your example is 32 bits - I don't want to type out a 32-bit number, so just assume we are on 8 bits).
255 << 1
will return 254 (11111110).
Use
Being very careful of the overflows mentioned before, bit shifting is a very fast way to multiply or divide by powers of 2. In a highly optimised environment (such as games) this is a very useful way to perform arithmetic very fast.
However, in your example, it is taking the rightmost 8 bits of the number and making them the leftmost 8 bits (multiplying by 16,777,216, i.e. 2^24). Why you would want to do this, I could only guess.
I guess you are referring to Shift operators.
As Mongus Pong said, shifts are usually used to multiply and divide very fast. (And can cause weird problems due to overflow).
I'm going to go out on a limb and try to guess what your code is doing.
If elements[0] is a byte element (that is to say, it contains only 8 bits), then this code results in a straightforward multiplication by 2^24. Otherwise, it will drop the 24 high-order bits and multiply what remains by 2^24.
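A plausible surrounding context for that line (purely a guess, since only one line was shown) is packing four byte-sized values into one 32-bit unsigned integer, big-endian style:

string[] elements = { "18", "52", "86", "120" }; // hypothetical input

uint packed = ((uint)Convert.ToInt32(elements[0]) << 24) // highest byte
            | ((uint)Convert.ToInt32(elements[1]) << 16)
            | ((uint)Convert.ToInt32(elements[2]) << 8)
            | (uint)Convert.ToInt32(elements[3]);        // lowest byte

// 18, 52, 86, 120 are 0x12, 0x34, 0x56, 0x78, so packed == 0x12345678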

Converting a 16 bit luminance value to 32 bit RGB

I have a 16-bit luminance value stored in two bytes, and I want to convert it to R, G, and B values. I have two questions: how do I convert those two bytes to a short, and, assuming that the hue and saturation are 0, how do I turn that short into 8-bits-per-component RGB values?
(The Convert class doesn't have an option to take two bytes and output a short.)
If the 16-bit value is little-endian and unsigned, the 2nd byte would be the one you want to repeat 3x to create the RGB value, and you'd drop the other byte. Or if you want the RGB in a 32-bit integer, you could either use bit shifts and adds or just multiply that 2nd byte by 0x10101.
Try something like this:
byte luminance_upper = 23;
byte luminance_lower = 57;
int luminance = ((int)luminance_upper << 8) | (int)luminance_lower;
That will give you a value between 0-65535.
Of course, if you just want to end up with a 32-bit ARGB (greyscale) colour from that, the lower byte isn't going to matter, because you're going from a 16-bit luminance down to 8-bit R, G, B components.
byte luminance_upper = 255;
// assuming ARGB format
uint argb = (255u << 24) // alpha = 255
| ((uint)luminance_upper << 16) // red
| ((uint)luminance_upper << 8) // green
| (uint)luminance_upper; // blue (hope my endianness is correct)
Depending on your situation, there may be a better way of going about it. For example, the above code does a linear mapping, however you may get better-looking results by using a logarithmic curve to map the 16-bit values to 8-bit RGB components.
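For example, a simple gamma-style mapping might look like this (assuming luminance is the 0-65535 value built earlier; the 2.2 exponent is a common display gamma, used here purely as an illustration):

// Map the 16-bit luminance through a 1/2.2 power curve down to 8 bits.
byte lum8 = (byte)(Math.Pow(luminance / 65535.0, 1.0 / 2.2) * 255);

uint argbGamma = (255u << 24)       // alpha
               | ((uint)lum8 << 16) // red
               | ((uint)lum8 << 8)  // green
               | (uint)lum8;        // blue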
You will need the other components too (hue and saturation).
Also, 16-bit sounds a bit weird. Normally those values are floating point.
There is a very good article on CodeProject for conversions of many color spaces.
