How to display a signed three-byte hexadecimal output in C#?

I want to display a three-byte signed value in C#.
The code fragment I wrote is:
case 3:
    HexadecimalValueRange = SignedValueChecked
        ? string.Format("{0:X6}..{1:X6}", (Int32)minValue, (Int32)maxValue)
        : string.Format("{0:X6}..{1:X6}", (UInt32)minValue, (UInt32)maxValue);
But it displays an example negative value as 0xFFC00000, where I would like to see 0xC00000, i.e. with 6 'significant' digits (thus without the leading FF).

The leading bits of a negative number are significant, so you can't cut them off with String.Format (there is no specifier that lets you ignore significant digits; width specifies only the minimum size and left/right justification).
You can mask the values down to 3 bytes and print them as uint to get the output you want:
string.Format("{0,-6:X}", (uint)(int.MaxValue & 0xFFFFFF))
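For instance, applied to the snippet from the question (a minimal sketch; minValue and maxValue are placeholders with assumed example values):
int minValue = -4194304;  // 0xFFC00000 when viewed as a signed 32-bit int
int maxValue = 4194303;   // 0x003FFFFF

// Masking with 0xFFFFFF keeps only the low 24 bits (3 bytes), discarding
// the sign-extension bits, so X6 prints exactly six hex digits.
string hexRange = string.Format("{0:X6}..{1:X6}",
    (uint)(minValue & 0xFFFFFF),
    (uint)(maxValue & 0xFFFFFF));

Console.WriteLine(hexRange);  // C00000..3FFFFF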

Related

C# - Converting int ToString("X6") gives 8 characters

I am trying to convert a wdColor to an RGB hex color with the following code. I am converting the enum wdColor result to hex with ToString("x6"), but sometimes it gives me back an 8-character string, and I need a 6-character string to convert it to RGB:
var num = -603914241;
var numToHex = num.ToString("x6");
gives "dc00ffff", which has 8 characters.
The input number is too big to be represented using just 6 characters. x6 means that the output should be at least 6 characters long, padding by zeros as necessary to meet that minimum length - but if the input is too big then it'll use as many characters as necessary to represent it.
According to the documentation at
https://learn.microsoft.com/en-us/dotnet/standard/base-types/standard-numeric-format-strings#XFormatString
The hexadecimal ("X") format specifier converts a number to a string of hexadecimal digits. The case of the format specifier indicates whether to use uppercase or lowercase characters for hexadecimal digits that are greater than 9. For example, use "X" to produce "ABCDEF", and "x" to produce "abcdef". This format is supported only for integral types.
The precision specifier indicates the minimum number of digits desired in the resulting string. If required, the number is padded with zeros to its left to produce the number of digits given by the precision specifier.
In your case it means that x6 guarantees at least (not exactly) 6 hexadecimal digits.
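If you only need the low three bytes, you can mask before formatting. A hedged sketch; whether the low 24 bits are the channels you want depends on how wdColor encodes its value:
var num = -603914241;                          // 0xDC00FFFF as a signed int
var rgbHex = (num & 0xFFFFFF).ToString("x6");  // "00ffff" - exactly 6 digits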

Inverting all 32 bits in a number using binary operators in c#

I am wondering how you take a number (for example 9), convert it to a 32-bit int (00000000000000000000000000001001), then invert or flip every bit (11111111111111111111111111110110) so that the zeroes become ones and the ones become zeroes.
I know how to do that by replacing the numbers in a string, but I need to know how to do that with binary operators on a binary number.
I think you have to use this operator, "~", but it just gives me a negative number when I use it on a value.
That is the correct behavior. The int data type in C# is a signed integer, so 11111111111111111111111111110110 is in fact a negative number.
As Marc pointed out, if you want to use unsigned values declare your number as a uint.
If you look at the decimal version of your number, it's a negative number.
If you declare it as an unsigned int, it's a positive one.
But this doesn't matter; in binary it will always be 11111111111111111111111111110110.
Try this:
int number = 9;
Console.WriteLine(Convert.ToString(number, 2)); // Gives you 1001
number = ~number; // Invert all bits
Console.WriteLine(Convert.ToString(number, 2));
// Gives you your wanted result: 11111111111111111111111111110110
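If you would rather see the inverted bits reported as a positive number, you can reinterpret the same 32 bits as an unsigned value (a small sketch):
int inverted = ~9;                                    // -10 as a signed int
uint unsignedView = unchecked((uint)inverted);        // same bits, unsigned view
Console.WriteLine(unsignedView);                      // 4294967286
Console.WriteLine(Convert.ToString(unsignedView, 2)); // 11111111111111111111111111110110 (via the long overload)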

c#: string format

I came across some code. Can anybody shed a bit of light on it? Please be kind if you find it a bit basic.
string str = String.Format("{0,2:X2}", (int)value);
Thank you for your time.
The X format returns the hexadecimal representation of your value.
For example, String.Format("{0:X}", 10) will return "A", not "10".
X2 will add zeros to the left if your hexadecimal representation is shorter than two symbols.
For example, String.Format("{0:X2}", 10) will return "0A", not "A".
0,2 will add spaces to the left if the resulting number of symbols is less than 2.
For example, String.Format("{0,3:X2}", 10) will return " 0A", not "0A".
So as a result, the format {0,2:X2} will return your value in hexadecimal notation, padded on the left with a zero if it is only one symbol, and then padded on the left with spaces if the result is still shorter than two symbols. After reading this several times, you can see that ,2 is redundant and this format can be simplified to {0:X2} without changing the behavior.
Some notes:
The : separates the index number from the specific format applied to that object. For example, this code
String.Format("{0:X} {1:N} {0:N}", 10, 20)
shows that I want to format 10 (index 0) in hexadecimal, then show 20 (index 1) in numeric format, and then also format 10 (index 0) in numeric format.
In 0,2, the part to the left of the colon indicates index position 0, and the ,2 alignment is applied to the resulting string, not to a specific object. So this code
String.Format("{0,1} {1,2} {0,4}", 10, 20)
will print the first number with at least one symbol occupied, the second with at least two, and then the first number again with at least four. If the resulting string has fewer symbols, it is padded with spaces.
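Concretely, that call produces the following (a small sketch):
Console.WriteLine(String.Format("{0,1} {1,2} {0,4}", 10, 20));
// prints "10 20   10" - the first 10 is already wider than its width of 1,
// so no padding happens; the last 10 is left-padded with spaces to width 4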
{0,2:X2}
It splits into:
0,2 - aligns the result in a field at least 2 characters wide
X2 - formats the number 10 as the hexadecimal value 0A
Update
Code
String.Format("{0,2:X2}", (int)value); // where value = 10
Result: 0A
Live Example: http://ideone.com/NW0U26
Conclusion from me
You can change "{0,2:X2}" to "{0:X2}" without changing the result.
Reference Links: MSDN
According to MSDN, a format string has the following format:
{index[,alignment][:formatString]}
We can find all of these components (the last two being optional) in your format string:
0 is the index of the parameter to use.
,2 is the alignment part, if the result is shorter than that, it is padded left with spaces.
:X2 is the formatString part. It means the number will be formatted in hexadecimal (uppercase) format, with a minimum width of 2. If the resulting number has less than 2 digits, it is padded with zeroes on the left.
In this specific case the alignment specifier is redundant, because X2 already specifies a minimum width of 2.
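A quick check confirms the redundancy (alignment only pads when it asks for more width than X2 already produces):
Console.WriteLine(String.Format("{0,2:X2}", 10)); // "0A"
Console.WriteLine(String.Format("{0:X2}", 10));   // "0A" - identical
Console.WriteLine(String.Format("{0,4:X2}", 10)); // "  0A" - wider alignment does pad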
See here for more info on the format string:
Composite Formatting
Standard Numeric Format Strings

Int64 seems too short in C#

I'm trying to write the largest Int64 value to the command line. I tried using 0x1111111111111111, which is 16 ones, and Visual Studio says that is an Int64. I would have assumed that would be an Int16. What am I missing here?
0x is the prefix for hexadecimal and not binary literals. This means that the binary representation of your number is 0001000100010001000100010001000100010001000100010001000100010001
There are unfortunately no binary literals in C# prior to 7.0 (which introduced the 0b prefix), so you either have to do the calculation yourself (0x7FFFFFFFFFFFFFFF) or use the Convert class, for example:
short s = Convert.ToInt16("1111111111111111", 2); // "2" for binary
In order to just get the largest Int64 number, you don't need to perform any calculations of your own, as it is already available for you in this field:
Int64.MaxValue
The literal 0x1111111111111111 is a hexadecimal number. Each hexadecimal digit can be represented using four bits so with 16 hexadecimal digits you need 4*16 = 64 bits. You probably intended to write the binary number 1111111111111111. You can convert from a binary literal string to an integer using the following code:
Convert.ToInt16("1111111111111111", 2)
This will return the desired number (-1).
To get the largest Int64 you can use Int64.MaxValue (0x7FFFFFFFFFFFFFFF) or if you really want the unsigned value you can use UInt64.MaxValue (0xFFFFFFFFFFFFFFFF).
The largest Int64 value is Int64.MaxValue. To print this in hex, try:
Console.WriteLine(Int64.MaxValue.ToString("X"));
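As a side note, since C# 7.0 you can write binary literals directly, so the sixteen ones no longer need a Convert round-trip (a sketch):
long sixteenOnes = 0b1111111111111111;            // == 0xFFFF == 65535
Console.WriteLine(sixteenOnes);                   // 65535
Console.WriteLine(Int64.MaxValue.ToString("X"));  // 7FFFFFFFFFFFFFFF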

Check if decimal contains decimal places by looking at the bytes

There is a similar question in here. Sometimes that solution gives exceptions because the numbers might be too large.
I think that if there is a way of looking at the bytes of a decimal number it will be more efficient. A decimal number has to be represented by some number of bytes. For example, an Int32 is represented by 32 bits, and all the numbers that start with a 1 bit are negative. Maybe there is some kind of similar relationship for decimal numbers. How could you look at the bytes of a decimal number? Or the bytes of an integer number?
If you are really talking about decimal numbers (as opposed to floating-point numbers), then Decimal.GetBits will let you look at the individual bits of a decimal. The MSDN page also contains a description of the meaning of the bits.
On the other hand, if you just want to check whether a number has a fractional part or not, doing a simple
var hasFractionalPart = (myValue - Math.Round(myValue) != 0);
is much easier than decoding the binary structure. This should work for decimals as well as classic floating-point data types such as float or double. In the latter case, due to floating-point rounding error, it might make sense to check for Math.Abs(myValue - Math.Round(myValue)) < someThreshold instead of comparing to 0.
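A sketch of that threshold variant for binary floating point; the 1e-9 tolerance is an arbitrary illustrative choice, not a recommendation:
double myValue = 2.0000000001;     // tiny noise from an earlier calculation, say
const double someThreshold = 1e-9;
bool hasFractionalPart = Math.Abs(myValue - Math.Round(myValue)) > someThreshold;
Console.WriteLine(hasFractionalPart);  // False - treated as effectively whole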
If you want a reasonably efficient way of getting the 'decimal' value of a decimal type you can just mod it by one.
decimal number = 4.75M;
decimal fractionalPart = number % 1;
Console.WriteLine(fractionalPart); //will print 0.75
While it may not be the theoretically optimal solution, it'll be quite fast, and almost certainly fast enough for your purposes (far better than string manipulation and parsing, which is a common naive approach).
You can use Decimal.GetBits in order to retrieve the bits from a decimal structure.
The MSDN page linked above details how they are laid out in memory:
The binary representation of a Decimal number consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the integer number and specify what portion of it is a decimal fraction. The scaling factor is implicitly the number 10, raised to an exponent ranging from 0 to 28.
The return value is a four-element array of 32-bit signed integers.
The first, second, and third elements of the returned array contain the low, middle, and high 32 bits of the 96-bit integer number.
The fourth element of the returned array contains the scale factor and sign. It consists of the following parts:
Bits 0 to 15, the lower word, are unused and must be zero.
Bits 16 to 23 must contain an exponent between 0 and 28, which indicates the power of 10 to divide the integer number.
Bits 24 to 30 are unused and must be zero.
Bit 31 contains the sign; 0 meaning positive, and 1 meaning negative.
Going with Oded's detailed info on using GetBits, I came up with this:
const int EXP_MASK = 0x00FF0000;
bool hasDecimal = (Decimal.GetBits(value)[3] & EXP_MASK) != 0x0;
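A note on that test (a sketch with assumed sample values): the exponent mask really detects a nonzero scale, which is not quite the same as a nonzero fraction, because decimal keeps trailing zeros in the scale:
const int EXP_MASK = 0x00FF0000;
bool HasNonZeroScale(decimal d) => (decimal.GetBits(d)[3] & EXP_MASK) != 0;

Console.WriteLine(HasNonZeroScale(4.75M)); // True  (scale 2)
Console.WriteLine(HasNonZeroScale(5M));    // False (scale 0)
Console.WriteLine(HasNonZeroScale(1.0M));  // True - whole value, but stored with scale 1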
