I am trying to convert a wdColor to an RGB hex color with the following code. I am converting the enum wdColor result to hex with ToString("x6"), but sometimes it gives me back an 8-character string, and I need a 6-character string to convert it to RGB:
var num = -603914241;
var numToHex = num.ToString("x6");
gives "dc00ffff" that has 8 charectars.
The input number is too big to be represented using just 6 characters. x6 means that the output should be at least 6 characters long, padded with zeros as necessary to meet that minimum length - but if the input is too big then it'll use as many characters as necessary to represent it.
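For example, the same specifier pads short values but never truncates long ones:
Console.WriteLine(255.ToString("x6"));          // "0000ff" - padded up to 6 digits
Console.WriteLine((-603914241).ToString("x6")); // "dc00ffff" - already wider than 6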
According to the documentation:
https://learn.microsoft.com/en-us/dotnet/standard/base-types/standard-numeric-format-strings#XFormatString
The hexadecimal ("X") format specifier converts a number to a string
of hexadecimal digits. The case of the format specifier indicates
whether to use uppercase or lowercase characters for hexadecimal
digits that are greater than 9. For example, use "X" to produce
"ABCDEF", and "x" to produce "abcdef". This format is supported only
for integral types.
The precision specifier indicates the minimum number of digits desired
in the resulting string. If required, the number is padded with zeros
to its left to produce the number of digits given by the precision
specifier.
In your case it means that x6 guarantees at least (not exactly) 6 hexadecimal digits
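If you only need the low three bytes for the RGB value, one way out is to mask off the high byte before formatting. A minimal sketch, assuming the high byte is not part of the color information you want:
int num = -603914241;                             // 0xDC00FFFF as a signed int
string rgbHex = (num & 0xFFFFFF).ToString("x6");  // keeps the low 24 bits: "00ffff"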
Can someone help me out? How do I print the decimal part of a number to a certain number of decimal places in C#? Or, put differently, how do I add trailing zeros to reach a specified number of digits?
Example: printing to 7 decimals
5.66 should return 0.6600000
0.123456 should return 0.1234560
A simple way to specify the number of digits is to use a custom format string. '0' is a placeholder for a digit that is always printed; '#' would be a digit printed only if relevant. So 7 decimals would be "0.0000000". There are also standard format strings that may be useful.
If you are not interested in the whole number part you can just subtract it:
var decimalPart = myValue - (int)myValue;
var str = decimalPart.ToString("0.0000000");
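Putting both lines together, a minimal runnable sketch using the first example from the question:
double myValue = 5.66;
var decimalPart = myValue - (int)myValue;             // 0.66
Console.WriteLine(decimalPart.ToString("0.0000000")); // "0.6600000"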
I found the solution. You use the fixed-point ("F") format specifier.
double num = Convert.ToDouble(Console.ReadLine());
Console.WriteLine($"{num:F7}");
F specifies fixed-point notation.
The digit after F specifies the number of decimal places.
So F4 = 4 decimal places, and F7 gives the 7 decimal places asked for here.
I want to display a three-byte (signed) value in C#.
The code (fragment) I made is:
case 3:
HexadecimalValueRange = SignedValueChecked ?
string.Format("{0:X6}..{1:X6}", (Int32) minValue, (Int32) maxValue)
: string.Format("{0:X6}..{1:X6}", (UInt32)minValue, (UInt32)maxValue);
But it displays an example negative value as 0xFFC00000 where I would like to see 0xC00000, so with 6 'significant' digits (thus without the leading FF).
The leading bits of a negative number are significant, so you can't cut them off with String.Format (there is no specifier that lets you ignore significant digits; width specifies only a minimum size and left/right justification).
You can mask the values down to their low 24 bits and print them with the X6 format to get the output you want, for example:
string.Format("{0:X6}", (uint)(int.MaxValue & 0xFFFFFF))
I'm trying to convert C# double values to strings in exponential notation. Consider this C# code:
double d1 = 0.12345678901200021;
Console.WriteLine(d1.ToString("0.0####################E0"));
//outputs: 1.23456789012E-1 (expected: 1.2345678901200021E-1)
Can anyone tell me the format string to output "1.2345678901200021E-1" from double d1, if it's possible?
Double values only hold 15 to 16 digits, and you have 17 (if I counted right). Because 64-bit double numbers only hold 16 digits, your last digit is getting truncated, and therefore, when you convert the number to scientific notation, the last digit appears to have been truncated.
You should use Decimal instead. Decimal types can hold 128 bits of data, while double can only hold 64 bits.
According to the documentation for double.ToString(), double doesn't have the precision:
By default, the return value only contains 15 digits of precision although a maximum of 17 digits is maintained internally. If the value of this instance has greater than 15 digits, ToString returns PositiveInfinitySymbol or NegativeInfinitySymbol instead of the expected number. If you require more precision, specify format with the "G17" format specification, which always returns 17 digits of precision, or "R", which returns 15 digits if the number can be represented with that precision or 17 digits if the number can only be represented with maximum precision.
Console.WriteLine(d1) should show you that double doesn't support the precision you want. Use decimal instead (64 bit vs 128 bit).
My immediate window is saying that the maximum resolution you can expect from that double number is about 15 digits.
My VS2012 immediate window is saying that the resolution of 0.12345678901200021 is actually 16 significant digits:
0.1234567890120002
Therefore we expect that at least the last "2" digit should be reported in the string.
However if you use the "G17" format string:
0.12345678901200021D.ToString("G17");
you will get a string with 16 digit precision.
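If the goal is specifically exponential notation with every stored digit, the standard "E" specifier with an explicit precision gets close. A minimal sketch; note that "E" produces a three-digit exponent, which differs from the single-digit "E-1" asked for in the question:
double d1 = 0.12345678901200021;
// E16 = scientific notation with 16 digits after the decimal point,
// i.e. 17 significant digits - the most a double ever needs.
Console.WriteLine(d1.ToString("E16"));
// -> "1.2345678901200021E-001" (or the nearest representable digits)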
See this answer.
I came across some code. Can anybody shed a bit of light on it? Please be kind if you find it a bit basic.
string str= String.Format("{0,2:X2}", (int)value);
Thank you for your time.
The X format specifier returns the hexadecimal representation of your value.
for example String.Format("{0:X}", 10) will return "A", not "10"
X2 will add zeros to the left if your hexadecimal representation is shorter than two symbols
for example String.Format("{0:X2}", 10) will return "0A", not "A"
0,2 will add spaces to the left if the resulting number of symbols is less than 2.
for example String.Format("{0,3:X2}", 10) will return " 0A", not "0A"
So as a result, the format {0,2:X2} will return your value in hexadecimal notation, prepended with a zero on the left if it is only one symbol, and then prepended with spaces on the left if the result is still shorter than the width. After reading this several times, you can see that ,2 is redundant here and this format can be simplified to {0:X2} without changing the behavior.
Some notes:
: separates the argument index and the specific format applied to that argument. For example this code
String.Format("{0:X} {1:N} {0:N}", 10, 20)
shows that I want to format 10 (index 0) in hexadecimal, then show 20 (index 1) in numeric form, and then also format 10 (index 0) in numeric form.
In 0,2, the part to the left of the colon indicates argument index 0, and the alignment ,2 is applied to the resulting string, not to a specific object. So this code
String.Format("{0,1} {1,2} {0,4}", 10, 20)
will print the first number occupying at least one symbol, the second at least two symbols, and then the first number again occupying at least four symbols. If the resulting strings have fewer symbols, they are padded with spaces.
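A quick runnable illustration of the behaviour described above, with the expected output in the comments:
Console.WriteLine(String.Format("{0:X2}", 10));                // "0A"
Console.WriteLine(String.Format("{0,3:X2}", 10));              // " 0A"
Console.WriteLine(String.Format("{0,1} {1,2} {0,4}", 10, 20)); // "10 20   10"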
{0,2:X2}
It splits into
0,2 - argument index 0, right-aligned to a minimum width of 2 characters.
X2 - formats the number 10 into the hexadecimal value 0A.
Update
Code
String.Format("{0,2:X2}", (int)value); // where value = 10
Result: 0A
Live Example: http://ideone.com/NW0U26
My conclusion
You can change "{0,2:X2}" to "{0:X2}", live example here.
Reference Links: MSDN
According to MSDN, a format string has the following format:
{index[,alignment][:formatString]}
We can find all of these components (the last two being optional) in your format string:
0 is the index of the parameter to use.
,2 is the alignment part, if the result is shorter than that, it is padded left with spaces.
:X2 is the formatString part. It means the number will be formatted in hexadecimal (uppercase) format, with a minimum width of 2. If the resulting number has less than 2 digits, it is padded with zeroes on the left.
In this specific case the alignment specifier is redundant, because X2 already specifies a minimum width of 2.
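As a small sketch, here is a format item using all three components at once:
// index 0, alignment 8 (right-aligned in 8 characters), format string X4
Console.WriteLine(String.Format("{0,8:X4}", 255)); // "    00FF"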
See here for more info on the format string:
Composite Formatting
Standard Numeric Format Strings
I'm trying to write the largest Int64 value to the command line. I tried using 0x1111111111111111, which is 16 ones, and Visual Studio says that is Int64. I would have assumed that would be Int16. What am I missing here?
0x is the prefix for hexadecimal and not binary literals. This means that the binary representation of your number is 0001000100010001000100010001000100010001000100010001000100010001
There were no binary literals in C# until C# 7.0 added the 0b prefix, so you either have to do the calculation yourself (0x7FFFFFFFFFFFFFFF) or use the Convert class, for example:
short s = Convert.ToInt16("1111111111111111", 2); // "2" for binary
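On C# 7.0 or later the same value can also be written as a binary literal directly (underscores are optional digit separators; unchecked is needed because the constant overflows short):
short s2 = unchecked((short)0b1111_1111_1111_1111); // -1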
In order to just get the largest Int64 number, you don't need to perform any calculations of your own, as it is already available for you in this field:
Int64.MaxValue
The literal 0x1111111111111111 is a hexadecimal number. Each hexadecimal digit can be represented using four bits so with 16 hexadecimal digits you need 4*16 = 64 bits. You probably intended to write the binary number 1111111111111111. You can convert from a binary literal string to an integer using the following code:
Convert.ToInt16("1111111111111111", 2)
This will return the desired number (-1).
To get the largest Int64 you can use Int64.MaxValue (0x7FFFFFFFFFFFFFFF) or if you really want the unsigned value you can use UInt64.MaxValue (0xFFFFFFFFFFFFFFFF).
The largest Int64 value is Int64.MaxValue. To print this in hex, try:
Console.WriteLine(Int64.MaxValue.ToString("X"));
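That prints 7FFFFFFFFFFFFFFF.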