What does this sentence mean?
The Stride property holds the width of one row in bytes. The size of a row, however, may not be an exact multiple of the pixel size because, for efficiency, the system ensures that the data is packed into rows that begin on a four-byte boundary and are padded out to a multiple of four bytes.
That means that if your image is 17 pixels wide with 3 bytes per pixel for color, one row takes 51 bytes. The stride is then 52 bytes: the row width in bytes rounded up to the next 4-byte boundary.
Stride is padded: it gets rounded up to the nearest multiple of 4 bytes. For example (assuming 8-bit grayscale, i.e. 8 bits per pixel):
Width | stride
--------------
1 | 4
2 | 4
3 | 4
4 | 4
5 | 8
6 | 8
7 | 8
8 | 8
9 | 12
10 | 12
11 | 12
12 | 12
etc.
In C#, you might implement this like so:
static int PaddedRowWidth(int bitsPerPixel, int w, int padToNBytes)
{
    if (padToNBytes <= 0)
        throw new ArgumentOutOfRangeException("padToNBytes", "pad value must be greater than 0.");
    int padBits = 8 * padToNBytes;  // pad boundary expressed in bits
    // Round the row width in bits up to the boundary, then convert back to bytes.
    return ((w * bitsPerPixel + (padBits - 1)) / padBits) * padToNBytes;
}
static int RowStride(int bitsPerPixel, int width)
{
    return PaddedRowWidth(bitsPerPixel, width, 4);
}
Let me give you an example: if the width is 160, the stride will be 160. But if the width is 161, then the stride will be 164.
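A quick sanity check of those numbers (and the 17-pixel example above) using the helpers, as a hypothetical Main placed next to them:
    static void Main()
    {
        Console.WriteLine(RowStride(8, 160));   // 160 (already a multiple of 4)
        Console.WriteLine(RowStride(8, 161));   // 164 (rounded up to the next 4-byte boundary)
        Console.WriteLine(RowStride(24, 17));   // 52  (17 px * 3 bytes = 51 bytes, padded to 52)
    }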
I do not have much knowledge of C and I'm stuck with a problem since one of my colleagues is on leave.
I have a 32-bit number and I have to extract bits from it. I did go through a few threads but I'm still not clear on how to do it. I would be much obliged if someone could help me.
Here is an example of what I need to do:
Assume the hex number is 0xD7448EAB.
In binary = 1101 0111 0100 0100 1000 1110 1010 1011.
I need to extract 16 bits and output that value. I want bits 10 through 25.
The lower 10 bits are ignored, i.e., 10 1010 1011 are ignored.
And the upper 6 bits (overflow) are ignored, i.e., 1101 01 are ignored.
The remaining 16 bits of data need to be the output, which is 11 0100 0100 1000 11.
This was an example, but I will keep getting different hex numbers all the time, and I will always need to extract the same bit range as explained.
How do I solve this?
Thank you.
For this example you would output 1101 0001 0010 0011, which is 0xD123, or 53,539 decimal.
You need masks to get the bits you want. Masks are numbers you can use to sift through bits in the manner you want (keep bits, clear bits, modify numbers, etc.). What you need to know are the AND, OR, XOR, NOT, and shift operations; for this task you'll only need a couple.
You know shifting: x << y moves the bits of x y positions to the left.
How to get x bits set to 1 in a row: (1 << x) - 1
How to get x bits set to 1, in a row, occupying positions y through y + x - 1: ((1 << x) - 1) << y
The above is your mask for the bits you need. So, for example, if you want 16 bits of 0xD7448EAB, from 10 to 25, you'll need the above with x = 16 and y = 10.
Now, to get the bits you want, just AND your number 0xD7448EAB with the mask and you'll keep only the bits in that range. If you then want to process them one at a time (at position 0), shift the result 10 bits to the right.
The answer may be a bit longer, but it's better design than just hard coding with 0xff or whatever.
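For illustration, here is that recipe written out in C# (C# chosen to match the other snippets in this compilation; the variable names are mine):
    uint number = 0xD7448EAB;
    int x = 16, y = 10;                     // want 16 bits starting at bit 10
    uint mask = ((1u << x) - 1) << y;       // 0x03FFFC00
    uint bits = (number & mask) >> y;       // 0xD123
    Console.WriteLine(bits.ToString("X"));  // prints D123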
OK, here's how I wrote it:
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t in = 0xd7448eab;
    uint16_t out = 0;
    out = in >> 10;  // shift right 10 bits
    out &= 0xffff;   // keep only the lower 16 bits
    printf("%x\n", out);
    return 0;
}
The in >> 10 shifts the number right 10 bits; the & 0xffff discards all bits except the lower 16 bits.
I want bits 10 through 25.
You can do this:
unsigned int number = 0xD7448EAB;
unsigned int value = (number & 0x3FFFC00) >> 10;
Or this:
unsigned int number = 0xD7448EAB;
unsigned int value = (number >> 10) & 0xFFFF;
I combined the top two answers above to write a C program that extracts the bits for any range of bits (not just 10 through 25) of a 32-bit unsigned int. The function returns bits lo through hi (inclusive) of num; note that it takes hi before lo.
#include <stdio.h>
#include <stdint.h>

unsigned extract(unsigned num, unsigned hi, unsigned lo) {
    uint32_t range = hi - lo + 1;  // number of bits to be extracted
    // Shifting a number by the number of bits it has produces inconsistent
    // results across machines, so we need a special case for extract(num, 31, 0).
    if (range == 32)
        return num;
    // Following the rule above, ((1 << x) - 1) << y makes the mask:
    uint32_t mask = ((1u << range) - 1) << lo;
    // AND num with the mask to keep only the bits in our range,
    // then shift right to get rid of the trailing 0s.
    uint32_t result = (num & mask) >> lo;
    return result;
}

int main() {
    unsigned int num = 0xd7448eab;
    printf("0x%x\n", extract(num, 25, 10));  // hi first, then lo; prints 0xd123
    return 0;
}
I want to make a 16-bit value from 3 little-endian bytes with a max value of 31 each (meaning they have at most 5 significant bits). How would I get the last 5 bits of each byte, then put them all together?
e.g. bytes: 0011111 0010101 0011100 into 1111110101111000
I tried this, but I think I'm just overwriting my old bits:
cp = (bar << 3) | (bag >> 2) | (bab >> 7);
You are not overwriting bits, but you are shifting bits out of the values before even putting them together: bag >> 2 leaves only three of the original five bits, and bab >> 7 shifts out all five bits (plus two more).
Shift the values to the left instead:
cp = (bar << 10) | (bag << 5) | bab;
You want to make room on the right for the other values:
bar << 10 -11111----------
bag << 5 ------10101-----
bab -----------11100
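As a quick check in C# (the binary literals simply spell out the example values above):
    int bar = 0b11111;  // 31
    int bag = 0b10101;  // 21
    int bab = 0b11100;  // 28
    int cp = (bar << 10) | (bag << 5) | bab;
    Console.WriteLine(Convert.ToString(cp, 2));  // 111111010111100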
I've been reading through the GIF specification trying to understand how the size of a colour table palette is calculated.
From the example on Wikipedia:
byte#  hexadecimal   text or
(hex)                value     Meaning
0:     47 49 46
       38 39 61      GIF89a    Header
                               Logical Screen Descriptor
6:     03 00         3         - logical screen width in pixels
8:     05 00         5         - logical screen height in pixels
A:     F7                      - GCT follows for 256 colors with
                                 resolution 3 x 8 bits/primary
If you look at the byte at offset 0xA (the 10th byte, counting from 0) you can see the hex value F7, which represents the decimal number 247.
Now I know from reading various code samples that this is a packed value, made up as follows:
0x80 |  // 1   : global color table flag = 1 (GCT used)
0x70 |  // 2-4 : color resolution
0x00 |  // 5   : GCT sort flag = 0
7       // 6-8 : GCT size

0,      // background color index (the next byte in the stream)
0       // pixel aspect ratio - assume 1:1 (the byte after that)
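To make the packing concrete, this is how you might unpack that byte in C# (the variable names are mine):
    byte packed = 0xF7;                                // 1111 0111
    bool gctFlag = (packed & 0x80) != 0;               // true: a global color table follows
    int colorResolution = ((packed >> 4) & 0x07) + 1;  // 7 + 1 = 8 bits per primary
    bool sortFlag = (packed & 0x08) != 0;              // false: table is not sorted
    int gctSizeField = packed & 0x07;                  // 7, i.e. 2^(7+1) = 256 colors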
I've also determined that the size value 7 represents the bit depth minus 1, which can be used to determine the number of colours:
2 ^ (0 + 1) = 2
2 ^ (1 + 1) = 4
2 ^ (2 + 1) = 8
2 ^ (3 + 1) = 16
2 ^ (4 + 1) = 32
2 ^ (5 + 1) = 64
2 ^ (6 + 1) = 128
2 ^ (7 + 1) = 256
http://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp
http://www.devx.com/projectcool/Article/19997/0/page/7
What I am looking to find out is how I would calculate the bit depth from the number of colours using C#.
Since this is something you would want to do quickly, I would imagine some sort of bit-shifting mechanism would be the best approach. I'm not a computer scientist, though, so I struggle with such things.
I've a horrible feeling it's really simple...
I think you're looking for a logarithm. Round the result up to get the bit depth needed.
/// <summary>
/// Returns how many bits are required to store the specified
/// number of colors. Performs a Log2() on the value.
/// </summary>
/// <param name="colors">The number of colors (1 to 256).</param>
/// <returns>The required bit depth.</returns>
public static int GetBitsNeededForColorDepth(int colors) {
    // Note: the parameter is an int rather than a byte so that 256 fits.
    return (int)Math.Ceiling(Math.Log(colors, 2));
}
https://github.com/imazen/resizer/blob/c4c586b58b2211ad0f48f7d8285e951ff6f262f9/Plugins/PrettyGifs/PrettyGifs.cs#L239-L241
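For example (hypothetical calls, using the int parameter suggested above):
    GetBitsNeededForColorDepth(256);  // 8
    GetBitsNeededForColorDepth(64);   // 6
    GetBitsNeededForColorDepth(5);    // 3 (log2(5) is about 2.32, rounded up)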
Say I have got two byte variables:
byte a = 255;
byte b = 121;
byte c = (byte)(a + b);
Console.WriteLine(c.ToString());
Output: 120
Please explain how this is adding the values. I know that it's crossing the size limit of a byte, but I don't know exactly what operation it performs in such a situation, because it doesn't look like it's simply chopping the result.
Thanks.
EDIT: sorry, the answer is 120.
You are overflowing the byte storage (maximum 255), so the value wraps around and starts again from 0.
So: a + b is an integer = 376
Your code is equivalent to:
byte c = (byte)376;
That's one of the reasons why adding two bytes returns an integer. Casting it back to a byte should be done at your own risk.
If you want to store the integer 376 into bytes you need an array:
byte[] buffer = BitConverter.GetBytes(376);
As you can see the resulting array contains 4 bytes now which is what is necessary to store a 32 bit integer.
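On a little-endian machine, for instance:
    byte[] buffer = BitConverter.GetBytes(376);        // 376 = 0x178
    Console.WriteLine(BitConverter.ToString(buffer));  // 78-01-00-00
    // Note the low byte, 0x78, is the 120 you saw above.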
It becomes obvious when you look at the binary representation of the values:
var | decimal | binary
----|---------|------------
a   |     255 |   1111 1111
b   |     121 |   0111 1001
a+b |     376 | 1 0111 1000
This gets truncated to 8 bits; the overflow bit is disregarded when casting the result to byte:
c   |     120 |   0111 1000
As others are saying, you are overflowing; the a + b operation results in an int, which you are explicitly casting to a byte. Documentation is here; essentially, in an unchecked context the cast is done by truncating the most significant bits.
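A small illustration of the checked/unchecked difference (a sketch; the variable names are mine):
    int sum = 255 + 121;            // byte + byte is computed as int: 376
    byte t = unchecked((byte)sum);  // truncates to 0111 1000 = 120
    Console.WriteLine(t);           // 120
    try {
        byte u = checked((byte)sum);  // here the overflow is detected instead
    } catch (OverflowException) {
        Console.WriteLine("overflow!");
    }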
I guess you mean byte c = (byte)(a + b);
On my end the result here is 120, and that is what I would expect.
a + b equals 376, and all bits that represent 256 and up get stripped (since a byte only holds 8 bits); 120 is what you are left with inside your byte.
I need to convert 16-bit XRGB1555 into 24-bit RGB888. My function for this is below, but it's not perfect: a value of 0b11111 will give 248 as the pixel value, not 255. This function is for little-endian, but can easily be modified for big-endian.
public static Color XRGB1555(byte b0, byte b1)
{
    return Color.FromArgb(0xFF,
        (b1 & 0x7C) << 1,                         // red:   bits 10-14 of the 16-bit value
        ((b1 & 0x03) << 6) | ((b0 & 0xE0) >> 2),  // green: bits 5-9
        (b0 & 0x1F) << 3);                        // blue:  bits 0-4
}
Any ideas how to make it work?
You would normally copy the highest bits down to the bottom bits, so if you had five bits as follows:
Bit position: 4 3 2 1 0
Bit variable: A B C D E
You would extend that to eight bits as:
Bit position: 7 6 5 4 3 2 1 0
Bit variable: A B C D E A B C
That way, all zeros remains all zeros, all ones becomes all ones, and values in between scale appropriately.
(Note that A, B, C, etc. aren't supposed to be hex digits; they are variables each representing a single bit.)
I'd go with a lookup table: since there are only 32 different values, it even fits in a cache line.
You can get the 8-bit value from the 5-bit value with:
return (x << 3) | (x >> 2);
The rounding might not be perfect, though. That is, the result isn't always the closest to the input, but it is never further off than 1/255.
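Putting the bit-replication idea together with the original function, here is a sketch of a corrected converter (the type and method names are mine; it assumes the same little-endian byte order as the question):
    using System.Drawing;

    public static class Xrgb1555
    {
        // Expand a 5-bit channel to 8 bits by replicating the top bits,
        // so 0b00000 maps to 0 and 0b11111 maps to 255.
        static int Expand5(int x) => (x << 3) | (x >> 2);

        public static Color ToColor(byte b0, byte b1)
        {
            int value = (b1 << 8) | b0;   // little-endian: b0 is the low byte
            int r = (value >> 10) & 0x1F;
            int g = (value >> 5) & 0x1F;
            int b = value & 0x1F;
            return Color.FromArgb(0xFF, Expand5(r), Expand5(g), Expand5(b));
        }
    }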