C#: Convert a UInt16 in string format into an integer (decimal)

Here's the problem.
For example, I have the string "2500", which was converted from a byte array. I need to convert it to a decimal (int).
This is what I should get:
string : "2500"
byte[] : {0x25, 0x00}
UInt16 : 0x0025 // note the byte order is reversed
int    : 37     // decimal value of 0x0025
How do I do that?

Converting from a hex string to UInt16 is UInt16.Parse(s, NumberStyles.AllowHexSpecifier).
You'll need to write some code to do the "reversal in two-digit blocks" yourself, though. If you have control over the code that generates the string from the byte array, a convenient way to do this would be to build the string in reverse, e.g. by traversing the array from length - 1 down to 0 instead of in the normal upward direction. Alternatively, assuming you know it's exactly a 4-character string, s = s.Substring(2, 2) + s.Substring(0, 2) would do the trick.
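Putting the two pieces together, a minimal sketch (assuming the 4-character string from the question):
using System.Globalization;

string s = "2500";
s = s.Substring(2, 2) + s.Substring(0, 2);                      // "0025"
ushort value = ushort.Parse(s, NumberStyles.AllowHexSpecifier); // 0x0025
int result = value;                                             // 37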

It might be better to explicitly specify what base you want with Convert.ToUInt16.
Then you can flip the byte order with IPAddress.HostToNetworkOrder (though you'll have to cast to an Int16, then cast the result back to a UInt16).
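A sketch of that suggestion (HostToNetworkOrder swaps the bytes on little-endian hosts and is a no-op on big-endian ones):
using System;
using System.Net;

ushort raw = Convert.ToUInt16("2500", 16);                         // 0x2500
ushort swapped = (ushort)IPAddress.HostToNetworkOrder((short)raw); // 0x0025
int result = swapped;                                              // 37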

There is a special class for conversion: Convert.
To convert from string to UInt16, use the following method: Convert.ToUInt16.
The difference between int.Parse and Convert.ToInt32 is explained on this page,
and the MSDN site for Convert.ToUInt16 is here.
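For illustration, the base argument matters for this question's input:
ushort fromHex = Convert.ToUInt16("2500", 16); // 9472 (0x2500)
ushort fromDec = Convert.ToUInt16("2500");     // 2500 (base 10 is the default)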

Assuming your string is always 4 digits long, you can get away with one string variable, but I'm using multiple ones for readability.
string originalValue = "2500";
// Swap the two-digit blocks: "2500" -> "0025"
string reverseValue = originalValue.Substring(2, 2) + originalValue.Substring(0, 2);
// Parse as base 16 (37), then render back to a base-10 string ("37")
string decimalString = Convert.ToUInt16(reverseValue, 16).ToString();
int result = Convert.ToInt32(decimalString, 10);

Related

Converting string "0x1A" to sbyte

I'm getting the exception
Input string was not in a correct format.
at runtime for the following code snippet:
string str = "0x1A";
sbyte value = Convert.ToSByte(str);
Can anyone help fix this?
Convert.ToSByte takes an argument int fromBase to specify the base you're converting from.
In your case you have to do the following:
sbyte s = Convert.ToSByte(str, 16); // s == 26
You can read more about different bases (also called radix) in this Wikipedia article.
If you look in the documentation for ToSByte, under the Exceptions section you will find the condition under which this exception is thrown:
value does not consist of an optional sign followed by a sequence of digits (0 through 9).
Your value is written in hex format, i.e. base 16, so the input string contains more than just "digits (0 through 9)". For that you will need another overload of this method, in which you can specify the base.
If you look at this overload of Convert.ToSByte Method (String, Int32) you can see that it:
Converts the string representation of a number in a specified base to an equivalent 8-bit signed integer.
The second parameter is:
fromBase
Type: System.Int32
The base of the number in value, which must be 2, 8, 10, or 16.
So specifying the base will relieve you of the exception:
string str = "0x1A";
sbyte value = Convert.ToSByte(str, 16);

String Comparison or Parse to Int?

I'll keep this one short. I'm writing a module which will be required to compare two large integers which are input as strings (note: they are large, but not large enough to exceed Int64 bounds).
The strings are padded, so the choice is between taking the extra step of converting them to their integer equivalents or comparing them as strings.
What I'm doing is converting each of them to Int64 and comparing them that way. However, I believe that string comparison would also work. Since I'd like this to be as efficient as possible, what are your opinions on comparing the integers via:
string integer1 = "123";
string integer2 = "456";
if (Int64.Parse(integer1) <= Int64.Parse(integer2))
OR
string integer1 = "123";
string integer2 = "456";
if (integer1.CompareTo(integer2) < 0)
Better to use Int64.TryParse, since these are string fields:
string integer1 = "123";
string integer2 = "456";
long value1 = 0;
long value2 = 0;
long.TryParse(integer1, out value1);
long.TryParse(integer2, out value2);
if (value1 <= value2)
No, string comparisons will not work. You should use your first version: you have to convert these strings to numbers by parsing them, and then compare the numbers.
It would be good to have a look here, which explains thoroughly what the CompareTo method does. In a few words:
Compares the current instance with another object of the same type and returns an integer that indicates whether the current instance precedes, follows, or occurs in the same position in the sort order as the other object.
So since "123" and "456" are strings, they compare one string to another and not the one integer to the other.
Last but not least, it would be better to use the TryParse method for parsing your numbers, since your input may be not accidentally an integer. The way you use it is fairly easy:
Int64 value = 0;
Int64.Parse(integer1, out value1);
Where the value1 is the value1 you will get after the conversion of the string integer1. So for both you values, you should use this one if statement:
Int64 value2 = 0;
if (Int64.TryParse(integer1, out value1) && Int64.TryParse(integer2, out value2))
{
    if (value1 <= value2)
    {
    }
    else
    {
    }
}
else
{
    // Something went wrong in at least one of the two conversions.
}
It's fair to question whether it is worth the cost of conversion (parsing). If String.CompareTo were really efficient AND the numbers were always of a scale and format* such that the string comparison were reliable, then you might be better off. You could measure the performance, but you'll find that converting and comparing ints is faster and more robust than a string comparison.
*String comparison works if the number strings are of equal length, with leading 0s as necessary. So '003', '020', and '100' will sort correctly, but '3', '20', and '100' will not.
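A quick sketch of that caveat (string comparison goes character by character):
Console.WriteLine("20".CompareTo("100") < 0);  // False: '2' sorts after '1'
Console.WriteLine("020".CompareTo("100") < 0); // True: padding restores numeric order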

Is the use of implicit enum fields to represent numeric values a bad practice?

Is the use of implicit enum fields to represent numeric values a necessarily bad practice?
Here is a use case: I want an easy way to represent hex digits, and since C# enums are based on integers, they seem like a natural match. I don't like a char or a string here, because I have to explicitly validate their values. The problem with enums is that digits [0-9] are not valid field identifiers (with good reason). It occurred to me that I don't need to declare the digits 0-9, because they are implicitly present.
So, my hex digit enum would look like:
public enum Hex : int {
    A = 10,
    B = 11,
    C = 12,
    D = 13,
    E = 14,
    F = 15
}
So, I could write Tuple<Hex,Hex> r = Tuple.Create(Hex.F,(Hex)1);, and r.Item1.ToString() + r.Item2.ToString() would give me "F1". Basically, my question is that if the ToString() value of the numeric constant is what I want to name the enum field, why is it problematic to omit the declaration entirely?
An alternative representation as an enum could have the fields declared with some prefix, such as:
public enum Hex : int {
    _0 = 0,
    _1 = 1,
    _2 = 2,
    _3 = 3,
    _4 = 4,
    _5 = 5,
    _6 = 6,
    _7 = 7,
    _8 = 8,
    _9 = 9,
    A = 10,
    B = 11,
    C = 12,
    D = 13,
    E = 14,
    F = 15
}
The problem is that the above example would give me "F_1" instead of "F1". Obviously, this is easy to fix. I'm wondering if there are additional problems with the implicit approach that I am not considering.
It's bad practice because it's a clever trick that's surprising to the people who read your code. It surprised me that it actually worked; it had me saying wtf. Remember the only valid measurement of code quality: WTFs per minute.
Clever tricks don't belong in code that's meant to be read and maintained by others. If you want to output a number as hex, convert it to a hex string using the normal String.Format("{0:X}", value).
This is a fundamentally broken way to handle hex. Hex is a human interface detail. It is always a string, a representation of a number. Like "1234" is a representation of the value 1234. It happens to be "4D2" when represented in hex but the number in your program is still 1234. A program should only ever concern itself with the number, never with the representation.
Converting a number to hex should only happen when you display the number to human eyes. Simple to do with ToString("X"). And to parse back from human input with TryParse() using NumberStyles.HexNumber. Input and output, at no other point should you ever deal with hex.
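A short sketch of that input/output boundary, using the calls mentioned above:
using System;
using System.Globalization;

int value = 1234;
string hex = value.ToString("X"); // "4D2" - for human eyes only

int parsed;
if (int.TryParse(hex, NumberStyles.HexNumber, CultureInfo.InvariantCulture, out parsed))
{
    Console.WriteLine(parsed); // 1234 - back to a plain number
}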
I would define a struct for HexDigit. You can add HexDigit 'A' to 'F' as static constants (or static readonly fields).
You can define implicit converters to allow conversion from integers 0-9, conversion to integers, and you can override ToString() to make your Tuples look nice.
That will be much more flexible than an enum.
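A minimal sketch of what such a struct could look like (the name HexDigit and the details are illustrative, not a definitive implementation):
using System;

public struct HexDigit
{
    private readonly byte _value; // always 0..15

    private HexDigit(byte value) { _value = value; }

    // Static readonly fields for the letter digits
    public static readonly HexDigit A = new HexDigit(10);
    public static readonly HexDigit B = new HexDigit(11);
    public static readonly HexDigit C = new HexDigit(12);
    public static readonly HexDigit D = new HexDigit(13);
    public static readonly HexDigit E = new HexDigit(14);
    public static readonly HexDigit F = new HexDigit(15);

    // Implicit conversion from int covers the digits 0-9 (and 10-15)
    public static implicit operator HexDigit(int value)
    {
        if (value < 0 || value > 15)
            throw new ArgumentOutOfRangeException("value");
        return new HexDigit((byte)value);
    }

    public static implicit operator int(HexDigit digit)
    {
        return digit._value;
    }

    public override string ToString()
    {
        return _value.ToString("X"); // "0".."9", "A".."F"
    }
}
With this in place, HexDigit.F.ToString() + ((HexDigit)1).ToString() gives "F1" without relying on undeclared enum values.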
In my opinion, it is a bad practice. If you need Hex representation, simply create a helper class that handles all the operations you require.
As this article suggests, these code snippets will help in creating the helpers:
// Store integer 182
int decValue = 182;
// Convert integer 182 as a hex in a string variable
string hexValue = decValue.ToString("X");
// Convert the hex string back to the number
int decAgain = int.Parse(hexValue, System.Globalization.NumberStyles.HexNumber);
The reason I believe it's bad practice is that it's not object-oriented, and it relies on the enum to translate hard-coded values - which is bad. If you can avoid hard-coding anything, it's always a step in the right direction. Also, a helper class is extensible and can be improved over time with additional functionality.
That being said, I DO like the simplicity of enums, but, again, that doesn't supersede the need to keep things OO in my opinion.
I'm not sure what you're actually trying to accomplish here, but if you're looking to limit something to two hexadecimal digits, why wouldn't you just declare it as a byte? While your enum hack is clever, I don't actually see the need for it. It's also likely to be misunderstood if passed to another programmer without explanation, as your use of undeclared values of your enum is counterintuitive.
Regarding number bases and literal representations, an integer in computing isn't natively base-10 or base-16; it's actually base-2 (binary) under the covers, and any other representation is a human convenience. The language already contains ways to represent literal numbers in both decimal and hexadecimal format. Limiting the number is a function of appropriately choosing the type.
If you are instead trying to limit something to any arbitrary even quantity of hexadecimal digits, perhaps simply initializing a byte array like this would be more appropriate:
byte[] hexBytes = new byte[3] { 0xA1, 0xB2, 0xC3 };
Also, by keeping your value as a regular numeric type or using a byte array rather than putting it into Tuples with enums, you retain simple access to a whole range of operations that otherwise become more difficult.
Regarding limiting your numbers to arbitrary odd quantities of hexadecimal digits, you can choose a type that contains at least your desired value + 1 digit and constrain the value at runtime. One possible implementation of this is as follows:
public class ThreeNibbleNumber
{
    private ushort _value;

    public ushort Value
    {
        get
        {
            return _value;
        }
        set
        {
            if (value > 4095)
            {
                throw new ArgumentException("The number is too large.");
            }
            else
            {
                _value = value;
            }
        }
    }

    public override string ToString()
    {
        return Value.ToString("x");
    }
}
In one of your comments on another answer, you reference the idea of handling CSS colors. If that's what you desire, a solution like this seems appropriate:
public struct CssColor
{
    public CssColor(uint colorValue)
    {
        byte[] colorBytes = BitConverter.GetBytes(colorValue);
        if (BitConverter.IsLittleEndian)
        {
            if (colorBytes[3] > 0)
            {
                throw new ArgumentException("The value is outside the range for a CSS color.", "colorValue");
            }
            R = colorBytes[2];
            G = colorBytes[1];
            B = colorBytes[0];
        }
        else
        {
            if (colorBytes[0] > 0)
            {
                throw new ArgumentException("The value is outside the range for a CSS color.", "colorValue");
            }
            R = colorBytes[1];
            G = colorBytes[2];
            B = colorBytes[3];
        }
    }

    public byte R;
    public byte G;
    public byte B;

    public override string ToString()
    {
        // x2 pads single-digit components with a leading zero
        return string.Format("#{0:x2}{1:x2}{2:x2}", R, G, B).ToUpperInvariant();
    }

    public static CssColor Parse(string s)
    {
        if (s == null)
        {
            throw new ArgumentNullException("s");
        }
        s = s.Trim();
        if (!s.StartsWith("#") || s.Length > 7)
        {
            throw new FormatException("The input is not a valid CSS color string.");
        }
        s = s.Substring(1, s.Length - 1);
        uint color = uint.Parse(s, System.Globalization.NumberStyles.HexNumber);
        return new CssColor(color);
    }
}
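For example, the struct round-trips a six-digit color string:
CssColor color = CssColor.Parse("#FF8800");
Console.WriteLine(color.R); // 255
Console.WriteLine(color);   // #FF8800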
I don't particularly see why you would want to do this, but you could use the Description attribute on each of your enum values to get rid of the _, and create some kind of static function that lets you get at an enum value's description easily, like Hex(15) -> "F" (a sketch of such a helper follows the enum below).
public enum Hex {
    [Description("0")] _0 = 0,
    ...
}
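A sketch of that static lookup (HexHelper and Describe are illustrative names):
using System;
using System.ComponentModel;
using System.Reflection;

public static class HexHelper
{
    public static string Describe(Hex value)
    {
        // Declared members ("_0".."_9", "A".."F") have a field to reflect over;
        // undeclared values like (Hex)20 do not, so fall back to ToString()
        FieldInfo field = typeof(Hex).GetField(value.ToString());
        if (field == null)
        {
            return value.ToString();
        }
        var attribute = (DescriptionAttribute)Attribute.GetCustomAttribute(
            field, typeof(DescriptionAttribute));
        return attribute != null ? attribute.Description : value.ToString();
    }
}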

convert a string length to a hex value to add to List<byte>

I have a List&lt;byte&gt; to which I'm adding 3 bytes, one of which is the length of a string that I'm passing to my method dynamically. How can I determine the length of that string and convert the int into a value that will be accepted by my list's Add() method?
Code below:
string myString = "This is a sample string...I need its length";
int theLength = myString.Length;
List<byte> lb = new List<byte>();
lb.Add(0x81);
lb.Add(theLength); // this doesn't work
lb.Add(0x04);
TIA
Try this:
lb.AddRange(BitConverter.GetBytes(theLength))
Of course, you may decide you only need the least significant byte, in which case you could do a simple cast, or index into the result of GetBytes(), which will be 4 bytes long in this case.
More on BitConverter:
http://msdn.microsoft.com/en-us/library/system.bitconverter.getbytes.aspx
Provided the string's length is within the byte range:
lb.Add((byte)theLength);
You have to cast your length to a byte:
lb.Add((byte)theLength);
But as you might guess, your length won't always fit into a single byte. Be more specific about what you expect to do with your list of bytes, and we might be able to provide a better answer (such as using BinaryReader/BinaryWriter instead of a list of bytes).
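For illustration, a minimal sketch of the BinaryWriter alternative (assuming the length is known to fit in one byte):
using System.IO;

string myString = "This is a sample string...I need its length";
using (var stream = new MemoryStream())
using (var writer = new BinaryWriter(stream))
{
    writer.Write((byte)0x81);
    writer.Write((byte)myString.Length); // unchecked cast: silently truncates lengths over 255
    writer.Write((byte)0x04);
    byte[] payload = stream.ToArray();
}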

Convert GUID to string in decimal base (aka to a huge, comma delimited integer, in base ten)

How can I convert a System.GUID (in C#) to a string in decimal base (aka to a huge, comma delimited integer, in base ten)?
Something like 433,352,133,455,122,445,557,129,...
Guid.ToString converts GUIDs to hexadecimal representations.
I'm using C# and .Net 2.0.
Please be aware that guid.ToByteArray() will NOT return an array that can be passed directly to BigInteger's constructor. To use the array, a re-order is needed, plus a trailing zero byte to ensure that BigInteger sees the array as a positive number (see the MSDN docs). A simple but less performant function is:
private static string GuidToStringUsingStringAndParse(Guid value)
{
    // "N" formats the GUID as 32 hex digits; the leading "0" guarantees the
    // parsed value is positive. Requires System.Numerics and System.Globalization.
    var guidBytes = string.Format("0{0:N}", value);
    var bigInteger = BigInteger.Parse(guidBytes, NumberStyles.HexNumber);
    return bigInteger.ToString("N0", CultureInfo.InvariantCulture);
}
As Victor Derks pointed out in his answer, you should append a 00 byte to the end of the array to ensure the resulting BigInteger is positive.
According to the BigInteger Structure (System.Numerics) MSDN documentation:
To prevent the BigInteger(Byte[]) constructor from confusing the two's complement representation of a negative value with the sign and magnitude representation of a positive value, positive values in which the most significant bit of the last byte in the byte array would ordinarily be set should include an additional byte whose value is 0.
(see also: byte[] to unsigned BigInteger?)
Here's code to do it:
var guid = Guid.NewGuid();
// Concat requires using System.Linq; the appended zero byte keeps the value positive
return String.Format("{0:N0}",
    new BigInteger(guid.ToByteArray().Concat(new byte[] { 0 }).ToArray()));
using System;
using System.Numerics;
Guid guid = Guid.NewGuid();
byte[] guidAsBytes = guid.ToByteArray();
// Note: if the most significant bit of the last byte is set, the resulting
// BigInteger will be negative (see the caveat in the answers above)
BigInteger guidAsInt = new BigInteger(guidAsBytes);
string guidAsString = guidAsInt.ToString("N0");
Note that the byte order in the byte array reflects the endianness of the GUID's sub-components.
In the interest of brevity, you can accomplish the same work with one line of code:
string GuidToInteger = (new BigInteger(Guid.NewGuid().ToByteArray())).ToString("N0");
Keep in mind that .ToString("N0") is not "NO"... see the difference?
Enjoy
