I have hex values in the format 4a0e94ca etc., and I need to convert them into IPs. How can I do this in C#?
If the values represent IPv4 addresses you can use the long.Parse method and pass the result to the IPAddress constructor:
var ip = new IPAddress(long.Parse("4a0e94ca", NumberStyles.AllowHexSpecifier));
If they represent IPv6 addresses you should convert the hex value to a byte array and then use the IPAddress(byte[]) constructor overload to construct the IPAddress.
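For IPv4, a minimal sketch that goes via a byte array instead (assuming the hex string is always eight characters and that the first hex pair is the first octet of the dotted form; variable names are illustrative):
// using System.Globalization;
// using System.Net;
string hex = "4a0e94ca";
var octets = new byte[4];
for (int i = 0; i < 4; i++)
    octets[i] = byte.Parse(hex.Substring(i * 2, 2), NumberStyles.HexNumber);
var address = new IPAddress(octets);
Console.WriteLine(address); // 74.14.148.202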
Well, take the format of an IP in this form:
192.168.1.1
To get it into a single number, you take each part and OR it together, shifting each successive part 8 bits further to the left.
long l = 192 | (168 << 8) | (1 << 16) | (1 << 24);
Thus, you can reverse this process for your number.
Like so:
int b1 = (int) (l & 0xff);
int b2 = (int) ((l >> 8) & 0xff);
int b3 = (int) ((l >> 16) & 0xff);
int b4 = (int) ((l >> 24) & 0xff);
-- Edit
Other posters probably have 'cleaner' ways of doing it in C#, so use one of those in production code, but I do think the way I've posted is a nice way to learn the format of IPs.
Check C# convert integer to hex and back again
var ip = String.Format("{0}.{1}.{2}.{3}",
int.Parse(hexValue.Substring(0, 2), System.Globalization.NumberStyles.HexNumber),
int.Parse(hexValue.Substring(2, 2), System.Globalization.NumberStyles.HexNumber),
int.Parse(hexValue.Substring(4, 2), System.Globalization.NumberStyles.HexNumber),
int.Parse(hexValue.Substring(6, 2), System.Globalization.NumberStyles.HexNumber));
Related
Okay so this may sound ridiculous, but as a personal project, I am trying to re-create a TCP networking protocol in C#.
Every TCP packet received has a header that must start with two Int4 values (0 - 15) forming a single Byte. I think I have extracted the two Int4 from the byte using bitwise operators:
Byte firstInt4 = headerByte << 4;
Byte secondInt4 = headerByte >> 4;
The issue is that I now need to be able to write two Int4 to a single Byte, but I have no idea how to do this.
Yes, bitwise operations will do:
Split:
byte header = ...
byte firstInt4 = (byte) (header & 0xF); // 4 low bits
byte secondInt4 = (byte) (header >> 4); // 4 high bits
Combine:
byte header = (byte) ((secondInt4 << 4) | firstInt4);
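For example, a quick round trip (the value 0xAB is just for illustration):
byte header = 0xAB;
byte firstInt4 = (byte)(header & 0xF);                   // 0x0B (low 4 bits)
byte secondInt4 = (byte)(header >> 4);                   // 0x0A (high 4 bits)
byte recombined = (byte)((secondInt4 << 4) | firstInt4); // 0xAB again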
An int4 is called a "nibble": half a byte is a nibble. :)
Something like:
combinedByte = hiNibble;
combinedByte <<= 4; // Make space for second nibble.
combinedByte += loNibble;
should do what you want.
Let's say I have the following four variables: player1X, player1Y, player2X, player2Y. These have, for example, respectively the following values: 5, 10, 20, 12. Each of these values is 8 bits at most, and I want to store them in one integer (32 bits). How can I achieve this?
By doing this, I want to create a dictionary, keeping count of how often certain states have happened in the game. For example, 5, 10, 20, 12 is one state, 6, 10, 20, 12 would be another.
You can use BitConverter
To get one Integer out of 4 bytes:
int i = BitConverter.ToInt32(new byte[] { player1X, player1Y, player2X, player2Y }, 0);
To get the four bytes out of the integer:
byte[] fourBytes = BitConverter.GetBytes(i);
To "squeeze" 4 8 bits value in a 32 bit space, you need to "shift" the bits for your various values, and add them together.
The opposite operations is to "unshift" and use some modulo to get the individual numbers you need.
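A minimal sketch of that idea for the four values in the question (assuming each fits in a byte; the names are illustrative):
uint state = (uint)(player1X | (player1Y << 8) | (player2X << 16) | (player2Y << 24));
byte p1x = (byte)(state % 256);         // low 8 bits
byte p1y = (byte)((state >> 8) % 256);
byte p2x = (byte)((state >> 16) % 256);
byte p2y = (byte)((state >> 24) % 256);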
Here is an alternative:
Make a struct with a defined layout. Expose:
The int32 and all 4 bytes at the same time.
Make sure the fields overlap (i.e. the int starts at offset 0, the byte fields at offsets 0, 1, 2, 3).
Done.
You can easily access and work with the values WITHOUT a BitConverter et al, and you never have to allocate an array, which is expensive just to throw away.
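A sketch of that approach using an explicit layout (the struct and field names are just for illustration):
// using System.Runtime.InteropServices;
[StructLayout(LayoutKind.Explicit)]
struct GameState
{
    [FieldOffset(0)] public uint Packed;   // all 32 bits at once
    [FieldOffset(0)] public byte Player1X; // overlaps byte 0 of Packed
    [FieldOffset(1)] public byte Player1Y;
    [FieldOffset(2)] public byte Player2X;
    [FieldOffset(3)] public byte Player2Y;
}
var s = new GameState { Player1X = 5, Player1Y = 10, Player2X = 20, Player2Y = 12 };
// s.Packed now holds all four values in one uint; which byte lands where
// depends on the endianness of the machine you run on.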
You can place the values by shifting to the appropriate offset
Example:
// Composing
byte x1 = ...;
byte x2 = ...;
byte x3 = ...;
byte x4 = ...;
uint x = (uint)(x1 | (x2 << 0x8) | (x3 << 0x10) | (x4 << 0x18));
// Decomposing
uint x = ...;
byte x1 = (byte)(x & 0xFF);
byte x2 = (byte)((x >> 0x8) & 0xFF);
byte x3 = (byte)((x >> 0x10) & 0xFF);
byte x4 = (byte)((x >> 0x18) & 0xFF);
I have a problem that I haven't been able to solve all day. I'm new to C# and I'm asking you to give me a hand.
I have two ulong values. I need to combine their binary representations to get a 16-byte value. I know C# does not support 128-bit types, but I also do not need to hold this value in a variable; I just need to convert it to a byte array.
I tried to combine values like this:
long a = ((long)b << 64) + (long)c;
and then convert the result to a byte array with BitConverter.
But I realize that this is incorrect, because a long is only 8 bytes.
I don't want to create a 128-bit type to get the result.
So is there a way to combine and add to byte array directly?
Thanks
C# supports integers of arbitrary size via System.Numerics.BigInteger. You could combine your two values, such as in the example you have given, like so:
BigInteger a = b;
a <<= 64;
a += c;
However, as you've indicated, you don't need to store this value. And before you mention it: yes, I am aware of endianness. There's machine-dependent endianness and over-the-wire endianness, and we should not produce anything machine-dependent. The way to produce over-the-wire (big-endian) byte order in the languages I'm most familiar with is to use the right-shift and modulo (or mask) operators, at least for unsigned types. Signed types introduce the complication of encoding the sign, but here's an example I think you might benefit from:
byte[] array = { (byte)(b >> 56), (byte)(b >> 48), (byte)(b >> 40), (byte)(b >> 32),
(byte)(b >> 24), (byte)(b >> 16), (byte)(b >> 8), (byte)(b ),
(byte)(c >> 56), (byte)(c >> 48), (byte)(c >> 40), (byte)(c >> 32),
(byte)(c >> 24), (byte)(c >> 16), (byte)(c >> 8), (byte)(c ) };
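Going the other way (rebuilding the two ulongs from such an array) is the mirror image; a quick sketch, with b2 and c2 as illustrative names:
ulong b2 = 0, c2 = 0;
for (int i = 0; i < 8; i++)
{
    b2 = (b2 << 8) | array[i];     // first eight bytes -> first ulong
    c2 = (c2 << 8) | array[i + 8]; // last eight bytes -> second ulong
}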
If I run this:
var x = (ulong)1;
var y = (ulong)2;
var result =
BitConverter.GetBytes(x)
.Concat(BitConverter.GetBytes(y))
.ToArray();
I get this (on a little-endian machine):
1, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0
Is that what you want?
I guess it's homework?
You can represent your terms, the ulongs a and b, as byte arrays.
Afterwards you may create a new byte array c for the sum.
For each byte, go bit by bit and do a normal two's complement binary addition, remembering the carry bit from byte to byte.
If the last byte still produces a carry, you should throw a "ulong addition overflow" exception of some kind.
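A rough sketch of that approach (the helper name is illustrative; bytes are ordered least significant first):
static byte[] AddAsBytes(ulong a, ulong b)
{
    // Represent each term as 8 bytes, least significant byte first.
    var ta = new byte[8];
    var tb = new byte[8];
    for (int i = 0; i < 8; i++)
    {
        ta[i] = (byte)(a >> (8 * i));
        tb[i] = (byte)(b >> (8 * i));
    }
    // Add byte by byte, carrying into the next byte.
    var sum = new byte[8];
    int carry = 0;
    for (int i = 0; i < 8; i++)
    {
        int s = ta[i] + tb[i] + carry;
        sum[i] = (byte)(s & 0xFF);
        carry = s >> 8;
    }
    if (carry != 0)
        throw new OverflowException("ulong addition overflow");
    return sum;
}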
I know I can extract bytes from int like this:
byte[] IntToBytes(int i)
{
return new byte [] {(byte) ((i >> 8) & 0xff), (byte) (i & 0xff)};
}
which I subsequently send as part of a serial transmission. But can I do the reverse: after receiving a sequence of bytes, reconstruct the original data, preserving the sign? Currently, I do this, but it feels a bit over the top:
int BytesToInt( byte hi, byte lo)
{
return ((hi << 24) | (lo << 16)) >> 16;
}
Is there another way or a better way? Does it make a difference if I know I am ultimately dealing with signed 16-bit data only?
You're working with signed 16-bit data only. So why are you passing (and returning) an int and not a short? You're throwing the sign information away, so it will not actually work for negative numbers. Instead, use short and you'll be fine - and the extra type information will make your code safer.
byte[] ShortToBytes(short i)
{
return new byte [] {(byte) ((i >> 8) & 0xff), (byte) (i & 0xff)};
}
short BytesToShort(byte hi, byte lo)
{
return unchecked((short)((hi << 8) | lo));
}
The main benefit (apart from being clearer and actually working) is that you can no longer pass an invalid value to the method. That's always good :)
Oh, and I'd recommend keeping the interface symmetric - BytesToShort should also take a byte[] (or some other structure that has the two bytes).
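For example, a quick round trip (the value is arbitrary):
short original = -12345;
byte[] b = ShortToBytes(original);         // { 0xCF, 0xC7 }
short restored = BytesToShort(b[0], b[1]); // -12345 again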
As part of a unit test, I need to test some boundary conditions. One method accepts a System.Double argument.
Is there a way to get the next-smallest double value? (i.e. decrement the mantissa by 1 unit-value)?
I considered using Double.Epsilon but this is unreliable as it's only the smallest delta from zero, and so doesn't work for larger values (i.e. 9999999999 - Double.Epsilon == 9999999999).
So what is the algorithm or code needed such that:
NextSmallest(Double d) < d
...is always true.
If your numbers are finite, you can use a couple of convenient methods in the BitConverter class:
long bits = BitConverter.DoubleToInt64Bits(value);
if (value > 0)
return BitConverter.Int64BitsToDouble(bits - 1);
else if (value < 0)
return BitConverter.Int64BitsToDouble(bits + 1);
else
return -double.Epsilon;
IEEE-754 formats were designed so that the bits that make up the exponent and mantissa together form an integer that has the same ordering as the floating-point numbers. So, to get the largest smaller number, you can subtract one from this number if the value is positive, and you can add one if the value is negative.
The key reason why this works is that the leading bit of the mantissa is not stored. If your mantissa is all zeros, then your number is a power of two. If you subtract 1 from the exponent/mantissa combination, you get all ones and you'll have to borrow from the exponent bits. In other words: you have to decrement the exponent, which is exactly what we want.
The Wikipedia page on double-precision floating point is here: http://en.wikipedia.org/wiki/Double_precision_floating-point_format
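Wrapped into a method shaped like the one asked for in the question, the snippet above might look like this (a sketch; special values are handled only minimally):
static double NextSmallest(double value)
{
    if (double.IsNaN(value) || double.IsNegativeInfinity(value))
        return value;           // nothing meaningful to step down to
    if (value == 0)
        return -double.Epsilon; // largest double below zero
    long bits = BitConverter.DoubleToInt64Bits(value);
    // The bit pattern orders the same way as the doubles themselves,
    // so step it down for positive values and up for negative values.
    return BitConverter.Int64BitsToDouble(value > 0 ? bits - 1 : bits + 1);
}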
For fun I wrote some code to break out the binary representation of the double format, decrement the mantissa, and recompose the resulting double. Because of the implicit bit in the mantissa we have to check for it and modify the exponent accordingly, and it might fail near the limits.
Here's the code:
public static double PrevDouble(double src)
{
// check for special values:
if (double.IsInfinity(src) || double.IsNaN(src))
return src;
if (src == 0)
return -double.Epsilon; // largest double below zero
// get bytes from double
byte[] srcbytes = System.BitConverter.GetBytes(src);
// extract components
byte sign = (byte)(srcbytes[7] & 0x80);
ulong exp = ((((ulong)srcbytes[7]) & 0x7F) << 4) + (((ulong)srcbytes[6] >> 4) & 0x0F);
ulong mant = ((ulong)1 << 52)
    | (((ulong)srcbytes[6] & 0x0F) << 48)
    | (((ulong)srcbytes[5]) << 40)
    | (((ulong)srcbytes[4]) << 32)
    | (((ulong)srcbytes[3]) << 24)
    | (((ulong)srcbytes[2]) << 16)
    | (((ulong)srcbytes[1]) << 8)
    | ((ulong)srcbytes[0]);
// decrement mantissa
--mant;
// check if implied bit has been removed and shift if so
if ((mant & ((ulong)1 << 52)) == 0)
{
mant <<= 1;
exp--;
}
// build byte representation of modified value
byte[] bytes = new byte[8];
bytes[7] = (byte)((ulong)sign | ((exp >> 4) & 0x7F));
bytes[6] = (byte)((((ulong)exp & 0x0F) << 4) | ((mant >> 48) & 0x0F));
bytes[5] = (byte)((mant >> 40) & 0xFF);
bytes[4] = (byte)((mant >> 32) & 0xFF);
bytes[3] = (byte)((mant >> 24) & 0xFF);
bytes[2] = (byte)((mant >> 16) & 0xFF);
bytes[1] = (byte)((mant >> 8) & 0xFF);
bytes[0] = (byte)(mant & 0xFF);
// convert back to double and return
double res = System.BitConverter.ToDouble(bytes, 0);
return res;
}
All of which gives you a value that is different from the initial value by a change in the lowest bit of the mantissa... in theory :)
Here's a test:
public static void Main(string[] args)
{
double test = 1.0/3;
double prev = PrevDouble(test);
Console.WriteLine("{0:r}, {1:r}, {2:r}", test, prev, test - prev);
}
Gives the following results on my PC:
0.33333333333333331, 0.33333333333333326, 5.5511151231257827E-17
The difference is there, even though at this magnitude it is easily hidden by display rounding. The expression test == prev evaluates to false, and there is an actual difference, as shown above :)
Since .NET Core 3.0 you can use Math.BitIncrement/Math.BitDecrement; no need to do manual bit manipulation anymore.
Math.BitIncrement: Returns the smallest value that compares greater than a specified value.
Math.BitDecrement: Returns the largest value that compares less than a specified value.
Since .NET 7 there are also double.BitIncrement and double.BitDecrement.
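For example:
double d = 1.0;
double down = Math.BitDecrement(d); // 0.99999999999999989
double up = Math.BitIncrement(d);   // 1.0000000000000002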