Java to C#: computing a verification code from a document hash

I have a Java example of how the verification code should be computed, and I have to convert that Java code to C#.
First of all, the code is computed as:
integer(SHA256(hash)[-2: -1]) mod 10000
That is: take the SHA-256 result, extract its 2 rightmost bytes, interpret them as a big-endian unsigned integer, and take the last 4 decimal digits for display.
Java code:
public static String calculate(byte[] documentHash) {
    byte[] digest = DigestCalculator.calculateDigest(documentHash, HashType.SHA256);
    ByteBuffer byteBuffer = ByteBuffer.wrap(digest);
    int shortBytes = Short.SIZE / Byte.SIZE; // Short.BYTES in Java 8
    int rightMostBytesIndex = byteBuffer.limit() - shortBytes;
    short twoRightmostBytes = byteBuffer.getShort(rightMostBytesIndex);
    int positiveInteger = ((int) twoRightmostBytes) & 0xffff;
    String code = String.valueOf(positiveInteger);
    String paddedCode = "0000" + code;
    return paddedCode.substring(code.length());
}

public static byte[] calculateDigest(byte[] dataToDigest, HashType hashType) {
    String algorithmName = hashType.getAlgorithmName();
    return DigestUtils.getDigest(algorithmName).digest(dataToDigest);
}
So in C#, from the Base64 string:
2afAxT+nH5qNYrfM+D7F6cKAaCKLLA23pj8ro3SksqwsdwmC3xTndKJotewzu7HlDy/DiqgkR+HXBiA0sW1x0Q==
the computed code should equal 3676.
Any ideas how to implement this?

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(GetCode("2afAxT+nH5qNYrfM+D7F6cKAaCKLLA23pj8ro3SksqwsdwmC3xTndKJotewzu7HlDy/DiqgkR+HXBiA0sW1x0Q=="));
    }

    public static string GetCode(string str)
    {
        var sha = System.Security.Cryptography.SHA256.Create();
        var hash = sha.ComputeHash(Convert.FromBase64String(str));
        var last2 = hash[^2..];                                    // two rightmost bytes (C# 8 range syntax)
        var intVal = ((int) last2[0]) * 0x0100 + ((int) last2[1]); // big-endian unsigned 16-bit value
        var digits = intVal % 10000;
        return $"{digits:0000}";
    }
}
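For what it's worth, the same computation can be written without manual byte arithmetic on newer runtimes. A minimal sketch, assuming .NET 5+ (for SHA256.HashData) and System.Buffers.Binary for the big-endian read:

using System;
using System.Buffers.Binary;
using System.Security.Cryptography;

public static string GetCode(string base64)
{
    byte[] hash = SHA256.HashData(Convert.FromBase64String(base64));
    // Two rightmost bytes as a big-endian unsigned 16-bit value.
    ushort value = BinaryPrimitives.ReadUInt16BigEndian(hash.AsSpan(hash.Length - 2));
    // Last four decimal digits, zero-padded, as in the Java version.
    return (value % 10000).ToString("D4");
}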

Related

How to encrypt a string using public key cryptography

I am trying to implement my own RSA encryption engine. Given these RSA algorithm values:
p = 61. // A prime number.
q = 53. // Also a prime number.
n = 3233. // p * q.
totient = 3120. // (p - 1) * (q - 1)
e = 991. // Co-prime to the totient (co-prime to 3120).
d = 1231. // d * e = 1219921, which satisfies 1 + k * totient = 1219921 with k = 391.
I am trying to write a method to encrypt each byte in a string and return back an encrypted string:
public string Encrypt(string m, Encoding encoding)
{
    byte[] bytes = encoding.GetBytes(m);
    for (int i = 0; i < bytes.Length; i++)
    {
        bytes[i] = (byte)BigInteger.ModPow(bytes[i], e, n);
    }
    string encryptedString = encoding.GetString(bytes);
    Console.WriteLine("Encrypted {0} as {1}.", m, encryptedString);
    return encryptedString;
}
The obvious issue here is that BigInteger.ModPow(bytes[i], e, n) may produce a value too large to fit into a byte; it can result in values over 8 bits in size. How do you get around this issue while still being able to decrypt an encrypted string of bytes back into a regular string?
Update: Even encrypting from byte[] to byte[], you reach a case where encrypting that byte using the RSA algorithm goes beyond the size limit of a byte:
public byte[] Encrypt(string m, Encoding encoding)
{
    byte[] bytes = encoding.GetBytes(m);
    for (int i = 0; i < bytes.Length; i++)
    {
        bytes[i] = (byte)BigInteger.ModPow(bytes[i], e, n);
    }
    return bytes;
}
Update: My issue is that encryption would cause a greater number of bytes than the initial input string had:
public byte[] Encrypt(string m, Encoding encoding)
{
    byte[] bytes = encoding.GetBytes(m);
    byte[] returnBytes = new byte[0];
    for (int i = 0; i < bytes.Length; i++)
    {
        byte[] result = BigInteger.ModPow(bytes[i], (BigInteger)e, n).ToByteArray();
        int preSize = returnBytes.Length;
        Array.Resize(ref returnBytes, returnBytes.Length + result.Length);
        result.CopyTo(returnBytes, preSize);
    }
    return returnBytes;
}

public string Decrypt(byte[] c, Encoding encoding)
{
    byte[] returnBytes = new byte[0];
    for (int i = 0; i < c.Length; i++)
    {
        byte[] result = BigInteger.ModPow(c[i], d, n).ToByteArray();
        int preSize = returnBytes.Length;
        Array.Resize(ref returnBytes, returnBytes.Length + result.Length);
        result.CopyTo(returnBytes, preSize);
    }
    string decryptedString = encoding.GetString(returnBytes);
    return decryptedString;
}
If you ran this code like this:
byte[] encryptedBytes = engine.Encrypt("Hello, world.", Encoding.UTF8);
Console.WriteLine(engine.Decrypt(encryptedBytes, Encoding.UTF8));
The output would be this:
?♥D
?♥→☻►♦→☻►♦oD♦8? ?♠oj?♠→☻►♦;♂?♠♂♠?♠
Obviously, the output is not the original string, because I can't just decrypt one byte at a time: sometimes two or more bytes of the ciphertext represent the value of one integer that I need to decrypt back to one byte of the original string. So I want to know what the standard mechanism for handling this is.
Your basic code for encrypting and decrypting each byte - the call to ModPow - is working, but you're going about splitting the message up and encrypting each piece in an inappropriate way.
To show that the ModPow part - i.e. the maths - is fine, here's code based on yours, which encrypts a string to a BigInteger[] and back:
using System;
using System.Linq;
using System.Numerics;
using System.Text;

class Test
{
    const int p = 61;
    const int q = 53;
    const int n = 3233;
    const int totient = 3120;
    const int e = 991;
    const int d = 1231;

    static void Main()
    {
        var encrypted = Encrypt("Hello, world.", Encoding.UTF8);
        var decrypted = Decrypt(encrypted, Encoding.UTF8);
        Console.WriteLine(decrypted);
    }

    static BigInteger[] Encrypt(string text, Encoding encoding)
    {
        byte[] bytes = encoding.GetBytes(text);
        return bytes.Select(b => BigInteger.ModPow(b, (BigInteger)e, n))
                    .ToArray();
    }

    static string Decrypt(BigInteger[] encrypted, Encoding encoding)
    {
        byte[] bytes = encrypted.Select(bi => (byte) BigInteger.ModPow(bi, d, n))
                                .ToArray();
        return encoding.GetString(bytes);
    }
}
Next you need to read more about how a byte[] is encrypted into another byte[] using RSA, including all the different padding schemes etc. There's a lot more to it than just calling ModPow on each byte.
But to reiterate, you should not be doing this to end up with a production RSA implementation. The chances of you doing that without any security flaws are very slim indeed. It's fine to do this for academic interest, to learn more about the principles of cryptography, but leave the real implementations to experts. (I'm far from an expert in this field - there's no way I'd start implementing my own encryption...)
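For real use, .NET already ships a vetted implementation. A minimal sketch using the built-in RSA class with OAEP padding (assuming .NET Core 2.0 or later; the key size and padding choice here are illustrative):

using System;
using System.Security.Cryptography;
using System.Text;

class RsaDemo
{
    static void Main()
    {
        using var rsa = RSA.Create(2048); // generates a fresh 2048-bit key pair
        byte[] plaintext = Encoding.UTF8.GetBytes("Hello, world.");
        // OAEP supplies the padding that the hand-rolled per-byte scheme lacks.
        byte[] ciphertext = rsa.Encrypt(plaintext, RSAEncryptionPadding.OaepSHA256);
        byte[] roundTrip = rsa.Decrypt(ciphertext, RSAEncryptionPadding.OaepSHA256);
        Console.WriteLine(Encoding.UTF8.GetString(roundTrip)); // Hello, world.
    }
}

Note that OAEP limits the plaintext length per operation; longer messages are normally encrypted with a symmetric key, which RSA then wraps.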
Note: I updated this answer. Please scroll down to the update for how it should actually be implemented because this first way of doing it is not the correct way of doing RSA encryption.
One way I can think of to do it is like this (but it may not be compliant with standards), and note that this does not pad:
public byte[] Encrypt(string m, Encoding encoding)
{
    byte[] bytes = encoding.GetBytes(m);
    byte[] returnBytes = new byte[0];
    for (int i = 0; i < bytes.Length; i++)
    {
        byte[] result = BigInteger.ModPow(bytes[i], (BigInteger)e, n).ToByteArray();
        int preSize = returnBytes.Length;
        Array.Resize(ref returnBytes, returnBytes.Length + result.Length + 1);
        (new byte[] { (byte)(result.Length) }).CopyTo(returnBytes, preSize);
        result.CopyTo(returnBytes, preSize + 1);
    }
    return returnBytes;
}

public string Decrypt(byte[] c, Encoding encoding)
{
    byte[] returnBytes = new byte[0];
    for (int i = 0; i < c.Length; i++)
    {
        int dataLength = (int)c[i];
        byte[] result = new byte[dataLength];
        for (int j = 0; j < dataLength; j++)
        {
            i++;
            result[j] = c[i];
        }
        BigInteger integer = new BigInteger(result);
        byte[] integerResult = BigInteger.ModPow(integer, d, n).ToByteArray();
        int preSize = returnBytes.Length;
        Array.Resize(ref returnBytes, returnBytes.Length + integerResult.Length);
        integerResult.CopyTo(returnBytes, preSize);
    }
    string decryptedString = encoding.GetString(returnBytes);
    return decryptedString;
}
This has the potential to be cross-platform, because you have the option of using a different datatype to represent e or n and passing it to a C# back-end service. Here is a test:
string stringToEncrypt = "Mary had a little lamb.";
Console.WriteLine("Encrypting the string: {0}", stringToEncrypt);
byte[] encryptedBytes = engine.Encrypt(stringToEncrypt, Encoding.UTF8);
Console.WriteLine("Encrypted text: {0}", Encoding.UTF8.GetString(encryptedBytes));
Console.WriteLine("Decrypted text: {0}", engine.Decrypt(encryptedBytes, Encoding.UTF8));
Output:
Encrypting the string: Mary had a little lamb.
Encrypted text: ☻6☻1♦☻j☻☻&♀☻g♦☻t☻☻1♦☻? ☻g♦☻1♦☻g♦☻?♥☻?☻☻7☺☻7☺☻?♥☻?♂☻g♦☻?♥☻1♦☻$☺☻
c ☻?☻
Decrypted text: Mary had a little lamb.
Update: Everything I said earlier is completely wrong in the implementation of RSA. Wrong, wrong, wrong! This is the correct way to do RSA encryption:
Convert your string to a BigInteger datatype.
Make sure your integer is smaller than the value of n that you've calculated for your algorithm, otherwise you won't be able to decipher it.
Encrypt the integer. RSA works on integer encryption only.
Decrypt it from the encrypted integer.
I can't help but wonder whether the BigInteger class was mostly created for cryptography.
As an example:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Numerics;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;

namespace BytePadder
{
    class Program
    {
        const int p = 61;
        const int q = 53;
        const int n = 3233;
        const int totient = 3120;
        const int e = 991;
        const int d = 1231;

        static void Main(string[] args)
        {
            // ---------------------- RSA Example I ----------------------
            // Shows how an integer gets encrypted and decrypted.
            BigInteger integer = 1000;
            BigInteger encryptedInteger = Encrypt(integer);
            Console.WriteLine("Encrypted Integer: {0}", encryptedInteger);
            BigInteger decryptedInteger = Decrypt(encryptedInteger);
            Console.WriteLine("Decrypted Integer: {0}", decryptedInteger);

            // --------------------- RSA Example II ----------------------
            // Shows how a string gets encrypted and decrypted.
            string unencryptedString = "A";
            BigInteger integer2 = new BigInteger(Encoding.UTF8.GetBytes(unencryptedString));
            Console.WriteLine("String as Integer: {0}", integer2);
            BigInteger encryptedInteger2 = Encrypt(integer2);
            Console.WriteLine("String as Encrypted Integer: {0}", encryptedInteger2);
            BigInteger decryptedInteger2 = Decrypt(encryptedInteger2);
            Console.WriteLine("String as Decrypted Integer: {0}", decryptedInteger2);
            string decryptedIntegerAsString = Encoding.UTF8.GetString(decryptedInteger2.ToByteArray());
            Console.WriteLine("Decrypted Integer as String: {0}", decryptedIntegerAsString);
            Console.ReadLine();
        }

        static BigInteger Encrypt(BigInteger integer)
        {
            if (integer < n)
            {
                return BigInteger.ModPow(integer, e, n);
            }
            throw new Exception("The integer must be less than the value of n in order to be decipherable!");
        }

        static BigInteger Decrypt(BigInteger integer)
        {
            return BigInteger.ModPow(integer, d, n);
        }
    }
}
Example output:
Encrypted Integer: 1989
Decrypted Integer: 1000
String as Integer: 65
String as Encrypted Integer: 1834
String as Decrypted Integer: 65
Decrypted Integer as String: A
If you are looking to use RSA encryption in C#, then you should not be attempting to build your own. For starters, the prime numbers you have chosen are probably too small. P and Q are supposed to be large prime numbers.
You should check out some other question/answers:
how to use RSA to encrypt files (huge data) in C#
RSA Encryption of large data in C#
And other references:
http://msdn.microsoft.com/en-us/library/system.security.cryptography.rsacryptoserviceprovider.encrypt(v=vs.110).aspx
http://msdn.microsoft.com/en-us/library/system.security.cryptography.rsacryptoserviceprovider.aspx

Translate ActionScript 3 code I've wrote to C#

I recently discovered C#, which is really what I want. Before C# I was coding in AS3. I've recoded all my old programs in C#, but I am blocked on this:
public function Envoie_Serveur(param1:String) : void
{
    var _loc_2:* = String(this.CMDTEC % 9000 + 1000).split("");
    this.Serveur.send(this.MDT[_loc_2[0]] + this.MDT[_loc_2[1]] + this.MDT[_loc_2[2]] + this.MDT[_loc_2[3]] + param1);
    var _loc_3:* = this;
    var _loc_4:* = this.CMDTEC + 1;
    _loc_3.CMDTEC = _loc_4;
    return;
}
CMDTEC and MDT are two byteArrays (byte[] in C#, I guess).
This is what I have tried, but it is not working ;c
byte[] _loc_1 = Encode((Int64.Parse(this.CMDTEC[0].ToString("X", System.Globalization.NumberStyles.HexNumber)) % 9000 + 1000) + "");
var fingerprint = new byte[4];
fingerprint[0] = byte.Parse(this.MDT[_loc_1[0]].ToString("X"), System.Globalization.NumberStyles.HexNumber);
fingerprint[1] = byte.Parse(this.MDT[_loc_1[1]].ToString("X"), System.Globalization.NumberStyles.HexNumber);
fingerprint[2] = byte.Parse(this.MDT[_loc_1[2]].ToString("X"), System.Globalization.NumberStyles.HexNumber);
fingerprint[3] = byte.Parse(this.MDT[_loc_1[3]].ToString("X"), System.Globalization.NumberStyles.HexNumber);
this.CMDTEC++;
And for example, this is what CMDTEC and MDT contain:
this.MDT = "1400175151406"; (just an example; I get this over the socket)
this.CMDTEC = "8306"; (same as above)
How can I properly convert that to C#, please? Thanks in advance for answers.
Here's an attempt, but you need to add more details to your question regarding inputs, outputs, datatypes etc. Although you are dealing with strings, it appears you are mainly handling numeric values.
The code below is verbose for clarity; it can be condensed a lot more (a condensed sketch follows the listing). Please note I haven't actually compiled and tried the code (because I don't have a Serveur object, it won't compile for me).
byte[] MDT = System.Text.Encoding.ASCII.GetBytes("1400175151406");
byte[] CMDTEC = System.Text.Encoding.ASCII.GetBytes("8306");

void Envoie_Serveur(string param1)
{
    // firstly, get CMDTEC as a string, assuming ascii encoded bytes
    string sCMDTEC = System.Text.Encoding.ASCII.GetString(CMDTEC);
    // now convert CMDTEC string to an int
    int iCMDTEC = int.Parse(sCMDTEC);
    // now do modulation etc on the int value
    iCMDTEC = iCMDTEC % 9000 + 1000;
    // now convert modulated int back into a string
    sCMDTEC = iCMDTEC.ToString();
    // now convert modulated string back to byte array, assuming ascii encoded bytes
    byte[] bCMDTEC = System.Text.Encoding.ASCII.GetBytes(sCMDTEC);
    // send the data
    this.Serveur.send(((int)this.MDT[bCMDTEC[0]]) + ((int)this.MDT[bCMDTEC[1]]) + ((int)this.MDT[bCMDTEC[2]]) + ((int)this.MDT[bCMDTEC[3]]) + int.Parse(param1));
    // convert CMDTEC bytes to string again
    sCMDTEC = System.Text.Encoding.ASCII.GetString(CMDTEC);
    // convert CMDTEC string to int again
    iCMDTEC = int.Parse(sCMDTEC);
    // increment CMDTEC
    iCMDTEC += 1;
    // convert back to string
    sCMDTEC = iCMDTEC.ToString();
    // convert back to bytes
    this.CMDTEC = System.Text.Encoding.ASCII.GetBytes(sCMDTEC);
}
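As mentioned above, here is a condensed sketch of the same logic. One assumption: the AS3 split("") yields the decimal digits themselves, so the ASCII digit bytes have to be mapped back to 0-9 before indexing MDT (Serveur.send is assumed to accept the summed integer, as in the verbose version):

void Envoie_Serveur(string param1)
{
    int counter = int.Parse(System.Text.Encoding.ASCII.GetString(CMDTEC));
    string key = (counter % 9000 + 1000).ToString(); // always four digits
    // subtracting '0' turns each ASCII digit into a numeric index
    this.Serveur.send(MDT[key[0] - '0'] + MDT[key[1] - '0'] +
                      MDT[key[2] - '0'] + MDT[key[3] - '0'] + int.Parse(param1));
    CMDTEC = System.Text.Encoding.ASCII.GetBytes((counter + 1).ToString());
}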

C# equalent to perl `pack("v",value)` while packing some values into `byte[]`

I am trying to replicate the behavior of a Perl script in my C# code. When we convert any value into a byte[], it should look the same irrespective of the language used.
So I have this function call, which looks like this in Perl:
$diag_cmd = pack("V", length($s_part)) . $s_part;
where $s_part is defined by the following function. It reads the .pds file at the location C:\Users\c_desaik\Desktop\DIAG\PwrDB\offtarget\data\get_8084_gpio.pds
$s_part =
sub read_pds
{
    my $bin_s;
    my $input_pds_file = $_[0];
    open(my $fh, '<', $input_pds_file) or die "cannot open file $input_pds_file";
    {
        local $/;
        $bin_s = <$fh>;
    }
    close($fh);
    return $bin_s;
}
My best guess is that this function reads the .pds file and turns it into a byte array.
Now, I tried to replicate the behavior in C# like the following:
static byte[] ConstructPacket()
{
    List<byte> retval = new List<byte>();
    retval.AddRange(System.IO.File.ReadAllBytes(@"C:\Users\c_desaik\Desktop\DIAG\PwrDB\offtarget\data\get_8084_gpio.pds"));
    return retval.ToArray();
}
But the resulting byte array does not look the same. Is there any special mechanism I have to follow to replicate the behavior of pack("V", length($s_part)) . $s_part?
As Simon Whitehead mentioned, the template character V tells pack to pack your values into unsigned long (32-bit) integers (in little-endian order). So you need to convert your bytes to a list (or array) of unsigned integers.
For example:
static uint[] UnpackUint32(string filename)
{
    var retval = new List<uint>();
    using (var filestream = System.IO.File.Open(filename, System.IO.FileMode.Open))
    {
        using (var binaryStream = new System.IO.BinaryReader(filestream))
        {
            var pos = 0;
            while (pos < binaryStream.BaseStream.Length)
            {
                retval.Add(binaryStream.ReadUInt32());
                pos += 4;
            }
        }
    }
    return retval.ToArray();
}
And call this function:
var list = UnpackUint32(@"C:\Users\c_desaik\Desktop\DIAG\PwrDB\offtarget\data\get_8084_gpio.pds");
Update
If you want to read one length-prefixed string, or a list of them, you can use this function:
private string[] UnpackStrings(string filename)
{
    var retval = new List<string>();
    using (var filestream = System.IO.File.Open(filename, System.IO.FileMode.Open))
    {
        using (var binaryStream = new System.IO.BinaryReader(filestream))
        {
            var pos = 0;
            while ((pos + 4) <= binaryStream.BaseStream.Length)
            {
                // read the length of the string
                var len = binaryStream.ReadUInt32();
                // read the bytes of the string
                var byteArr = binaryStream.ReadBytes((int) len);
                // cast these bytes to chars and append them to a stringbuilder
                var sb = new StringBuilder();
                foreach (var b in byteArr)
                    sb.Append((char)b);
                // add the new string to our collection of strings
                retval.Add(sb.ToString());
                // calculate start position of next value
                pos += 4 + (int) len;
            }
        }
    }
    return retval.ToArray();
}
pack("V", length($s_part)) . $s_part
which can also be written as
pack("V/a*", $s_part)
creates a length-prefixed string. The length is stored as a 32-bit unsigned little-endian number.
+----------+----------+----------+----------+-------- ...
| Length | Length | Length | Length | Bytes
| ( 7.. 0) | (15.. 8) | (23..16) | (31..24) |
+----------+----------+----------+----------+-------- ...
This is how you recreate the original string from the bytes:
Read 4 bytes
If using a machine other than a little-endian machine,
Rearrange the bytes into the native order.
Cast those bytes into an 32-bit unsigned integer.
Read a number of bytes equal to that number.
Convert that sequences of bytes into a string.
Some languages provide tools that perform more than one of these steps.
I don't know C#, so I can't write the code for you, but I can give you an example in two other languages.
In Perl, this would be written as follows:
sub read_bytes {
    my ($fh, $num_bytes_to_read) = @_;
    my $buf = '';
    while ($num_bytes_to_read) {
        my $num_bytes_read = read($fh, $buf, $num_bytes_to_read, length($buf));
        if (!$num_bytes_read) {
            die "$!\n" if !defined($num_bytes_read);
            die "Premature EOF\n";
        }
        $num_bytes_to_read -= $num_bytes_read;
    }
    return $buf;
}

sub read_uint32le { unpack('V', read_bytes($_[0], 4)) }
sub read_pstr { read_bytes($_[0], read_uint32le($_[0])) }

my $str = read_pstr($fh);
In C,
int read_bytes(FILE* fh, void* buf, size_t num_bytes_to_read) {
    while (num_bytes_to_read) {
        size_t num_bytes_read = fread(buf, 1, num_bytes_to_read, fh);
        if (!num_bytes_read)
            return 0;
        num_bytes_to_read -= num_bytes_read;
        buf = (char*)buf + num_bytes_read;
    }
    return 1;
}

int read_uint32le(FILE* fh, uint32_t* p_i) {
    int ok = read_bytes(fh, p_i, sizeof(*p_i));
    if (!ok)
        return 0;
    { /* Rearrange bytes on non-LE machines */
        const unsigned char* p = (unsigned char*)p_i;
        *p_i = ((uint32_t)p[3] << 24) | ((uint32_t)p[2] << 16)
             | ((uint32_t)p[1] << 8)  | (uint32_t)p[0];
    }
    return 1;
}

char* read_pstr(FILE* fh) {
    uint32_t len;
    char* buf = NULL;
    int ok;
    ok = read_uint32le(fh, &len);
    if (!ok)
        goto ERROR;
    buf = malloc(len + 1);
    if (!buf)
        goto ERROR;
    ok = read_bytes(fh, buf, len);
    if (!ok)
        goto ERROR;
    buf[len] = '\0';
    return buf;
ERROR:
    free(buf); /* free(NULL) is a no-op */
    return NULL;
}

char* str = read_pstr(fh);
char* str = read_pstr(fh);
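For the C# side the question actually asks about, here is a minimal sketch of the same steps. It assumes BinaryReader's little-endian reads (which match pack's "V") and byte-per-character string data:

static string ReadPstr(System.IO.BinaryReader reader)
{
    uint len = reader.ReadUInt32();            // 32-bit unsigned little-endian length
    byte[] bytes = reader.ReadBytes((int)len); // the string's bytes
    if (bytes.Length != len)
        throw new System.IO.EndOfStreamException("Premature EOF");
    // ISO-8859-1 maps each byte straight to the same char, like Perl's byte strings.
    return System.Text.Encoding.GetEncoding("ISO-8859-1").GetString(bytes);
}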

Converting UUID to OID, translating Java code to C# - how does this work?

I'm trying to convert this code (from David Clunie) to C#, for the purpose of creating OIDs from UUIDs (or GUIDs) in my program:
public static String createOIDFromUUIDCanonicalHexString(String hexString) throws IllegalArgumentException {
    UUID uuid = UUID.fromString(hexString);
    long leastSignificantBits = uuid.getLeastSignificantBits();
    long mostSignificantBits = uuid.getMostSignificantBits();
    BigInteger decimalValue = makeBigIntegerFromUnsignedLong(mostSignificantBits);
    decimalValue = decimalValue.shiftLeft(64);
    BigInteger bigValueOfLeastSignificantBits = makeBigIntegerFromUnsignedLong(leastSignificantBits);
    decimalValue = decimalValue.or(bigValueOfLeastSignificantBits); // not add() ... do not want to introduce question of signedness of long
    return OID_PREFIX + "." + decimalValue.toString();
}
I don't understand why he makes the longs (leastSignificantBits, mostSignificantBits) from the parts of the UUID and then makes the BigInteger from them. Why not just directly make a BigInteger (since he's shifting the most significant bits left anyway)?
Can anyone give me any insight into why this is written the way it is? (Disclaimer: I have not tried to run the Java code; I'm just trying to implement this in C#.)
[EDIT]
Turns out there were several problems. A big one, as Kevin Coulombe points out, is the byte order Microsoft stores GUIDs in. The Java code gets the bytes in the obvious (left-to-right) order, but in two separate chunks, apparently (also thanks to Kevin) because there's no easy way to get the whole byte array in Java.
Here is working C# code, forwards and (partially) backwards:
class Program
{
    const string OidPrefix = "2.25.";

    static void Main(string[] args)
    {
        Guid guid = new Guid("000000FF-0000-0000-0000-000000000000");
        //Guid guid = new Guid("f81d4fae-7dec-11d0-a765-00a0c91e6bf6");
        Console.WriteLine("Original guid: " + guid.ToString());
        byte[] guidBytes = StringToByteArray(guid.ToString().Replace("-", ""));
        BigInteger decimalValue = new BigInteger(guidBytes);
        Console.WriteLine("The OID is " + OidPrefix + decimalValue.ToString());
        string hexGuid = decimalValue.ToHexString().PadLeft(32, '0'); // padded for later use
        Console.WriteLine("The hex value of the big int is " + hexGuid);
        Guid testGuid = new Guid(hexGuid);
        Console.WriteLine("This guid should match the original one: " + testGuid);
        Console.ReadKey();
    }

    public static byte[] StringToByteArray(String hex)
    {
        int NumberChars = hex.Length;
        byte[] bytes = new byte[NumberChars / 2];
        for (int i = 0; i < NumberChars; i += 2)
            bytes[i / 2] = Convert.ToByte(hex.Substring(i, 2), 16);
        return bytes;
    }
}
I think they do it this way because there is no easy way of converting the UUID to a byte array to pass to the BigInteger in Java.
See this : GUID to ByteArray
In C#, this should be what you are looking for:
String oid = "prefix" + "." + new System.Numerics.BigInteger(
System.Guid.NewGuid().ToByteArray()).ToString();
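One caveat worth noting: System.Numerics.BigInteger treats its byte array as little-endian and signed, and Guid.ToByteArray() stores the first three groups in Microsoft's mixed-endian layout, so the one-liner above will not, in general, yield the same decimal value as the Java code for the same canonical UUID string. A sketch that works from the canonical hex form instead (the helper name GuidToOid is mine):

static string GuidToOid(Guid guid)
{
    // The canonical text form gives the 16 bytes in display (big-endian) order.
    string hex = guid.ToString("N"); // 32 hex digits
    var littleEndian = new byte[17]; // extra zero byte keeps the value non-negative
    for (int i = 0; i < 16; i++)
        littleEndian[i] = Convert.ToByte(hex.Substring((15 - i) * 2, 2), 16);
    return "2.25." + new System.Numerics.BigInteger(littleEndian);
}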
String oid_prefix = "2.25"
String hexString = "f81d4fae-7dec-11d0-a765-00a0c91e6bf6"
UUID uuid = UUID.fromString(hexString);
long leastSignificantBits = uuid.getLeastSignificantBits();
long mostSignificantBits = uuid.getMostSignificantBits();
mostSignificantBits = mostSignificantBits & Long.MAX_VALUE;
BigInteger decimalValue = BigInteger.valueOf(mostSignificantBits);
decimalValue = decimalValue.setBit(63);
decimalValue = decimalValue.shiftLeft(64);
leastSignificantBits = leastSignificantBits & Long.MAX_VALUE;
BigInteger bigValueLeastSignificantBit = BigInteger.valueOf(leastSignificantBits);
bigValueLeastSignificantBit = bigValueLeastSignificantBit.setBit(63);
decimalValue = decimalValue.or(bigValueLeastSignificantBit);
println "oid is = "+oid_prefix+"."+decimalValue

How to convert an IPv4 address into a integer in C#?

I'm looking for a function that will convert a standard IPv4 address into an Integer. Bonus points available for a function that will do the opposite.
Solution should be in C#.
An IPv4 address is, at heart, a 32-bit unsigned integer. Meanwhile, the IPAddress.Address property, though deprecated, is an Int64 that returns the unsigned 32-bit value of the IPv4 address (the catch is, it's in network byte order, so you need to swap it around).
For example, my local google.com is at 64.233.187.99. That's equivalent to:
64*2^24 + 233*2^16 + 187*2^8 + 99
= 1089059683
And indeed, http://1089059683/ works as expected (at least in Windows, tested with IE, Firefox and Chrome; doesn't work on iPhone though).
Here's a test program to show both conversions, including the network/host byte swapping:
using System;
using System.Net;
class App
{
static long ToInt(string addr)
{
// careful of sign extension: convert to uint first;
// unsigned NetworkToHostOrder ought to be provided.
return (long) (uint) IPAddress.NetworkToHostOrder(
(int) IPAddress.Parse(addr).Address);
}
static string ToAddr(long address)
{
return IPAddress.Parse(address.ToString()).ToString();
// This also works:
// return new IPAddress((uint) IPAddress.HostToNetworkOrder(
// (int) address)).ToString();
}
static void Main()
{
Console.WriteLine(ToInt("64.233.187.99"));
Console.WriteLine(ToAddr(1089059683));
}
}
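Run as-is, Main should print 1089059683 and then 64.233.187.99, matching the arithmetic above.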
Here's a pair of methods to convert from IPv4 to a correct integer and back:
public static uint ConvertFromIpAddressToInteger(string ipAddress)
{
    var address = IPAddress.Parse(ipAddress);
    byte[] bytes = address.GetAddressBytes();

    // flip big-endian (network order) to little-endian
    if (BitConverter.IsLittleEndian)
    {
        Array.Reverse(bytes);
    }

    return BitConverter.ToUInt32(bytes, 0);
}

public static string ConvertFromIntegerToIpAddress(uint ipAddress)
{
    byte[] bytes = BitConverter.GetBytes(ipAddress);

    // flip little-endian to big-endian (network order)
    if (BitConverter.IsLittleEndian)
    {
        Array.Reverse(bytes);
    }

    return new IPAddress(bytes).ToString();
}
Example
ConvertFromIpAddressToInteger("255.255.255.254"); // 4294967294
ConvertFromIntegerToIpAddress(4294967294); // 255.255.255.254
Explanation
IP addresses are in network order (big-endian), while integers on typical x86 machines are little-endian, so to get a correct value you must reverse the bytes before converting on a little-endian system.
Also, even for IPv4, a signed int can't hold addresses bigger than 127.255.255.255, e.g. the broadcast address (255.255.255.255), so use a uint.
@Barry Kelly and @Andrew Hare, actually, I don't think multiplying is the clearest way to do this (although it is correct).
An Int32-"formatted" IP address can be seen as the following structure:
[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct IPv4Address
{
    public Byte A;
    public Byte B;
    public Byte C;
    public Byte D;
}
// to actually cast it from or to an Int32, I think you need
// to reverse the fields due to little-endianness
So to convert the ip address 64.233.187.99 you could do:
(64  = 0x40) << 24 == 0x40000000
(233 = 0xE9) << 16 == 0x00E90000
(187 = 0xBB) << 8  == 0x0000BB00
(99  = 0x63)       == 0x00000063
                      ----------
                      0x40E9BB63
So you could add them up using +, or you could binary-OR them together, resulting in 0x40E9BB63, which is 1089059683. (In my opinion, the bytes are much easier to see in hex.)
So you could write the function as:
int ipToInt(int first, int second, int third, int fourth)
{
    return (first << 24) | (second << 16) | (third << 8) | (fourth);
}
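For example, ipToInt(64, 233, 187, 99) returns 1089059683 (0x40E9BB63), matching the worked example above.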
Try these:
private int IpToInt32(string ipAddress)
{
    return BitConverter.ToInt32(IPAddress.Parse(ipAddress).GetAddressBytes().Reverse().ToArray(), 0);
}

private string Int32ToIp(int ipAddress)
{
    return new IPAddress(BitConverter.GetBytes(ipAddress).Reverse().ToArray()).ToString();
}
As no one has posted code that uses BitConverter and actually checks the endianness, here goes:
byte[] ip = address.Split('.').Select(s => Byte.Parse(s)).ToArray();
if (BitConverter.IsLittleEndian) {
    Array.Reverse(ip);
}
int num = BitConverter.ToInt32(ip, 0);
and back:
byte[] ip = BitConverter.GetBytes(num);
if (BitConverter.IsLittleEndian) {
    Array.Reverse(ip);
}
string address = String.Join(".", ip.Select(n => n.ToString()));
I have encountered some problems with the described solutions when facing IP addresses with a very large value: the byte[0] * 16777216 term would overflow and become a negative int value.
What fixed it for me is a simple cast to long:
public static long ConvertIPToLong(string ipAddress)
{
    System.Net.IPAddress ip;
    if (System.Net.IPAddress.TryParse(ipAddress, out ip))
    {
        byte[] bytes = ip.GetAddressBytes();
        return
            16777216L * bytes[0] +
            65536 * bytes[1] +
            256 * bytes[2] +
            bytes[3];
    }
    else
        return 0;
}
The reverse of Davy Landman's function:
string IntToIp(int d)
{
    int v1 = d & 0xff;
    int v2 = (d >> 8) & 0xff;
    int v3 = (d >> 16) & 0xff;
    int v4 = (d >> 24) & 0xff; // mask to avoid sign extension for addresses above 127.x.x.x
    return v4 + "." + v3 + "." + v2 + "." + v1;
}
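For example, IntToIp(1089059683) yields "64.233.187.99".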
With the UInt32 in the proper little-endian format, here are two simple conversion functions:
public uint GetIpAsUInt32(string ipString)
{
    IPAddress address = IPAddress.Parse(ipString);
    byte[] ipBytes = address.GetAddressBytes();
    Array.Reverse(ipBytes);
    return BitConverter.ToUInt32(ipBytes, 0);
}

public string GetIpAsString(uint ipVal)
{
    byte[] ipBytes = BitConverter.GetBytes(ipVal);
    Array.Reverse(ipBytes);
    return new IPAddress(ipBytes).ToString();
}
My question was closed, and I have no idea why. The accepted answer here is not the same as what I need.
This gives me the correct integer value for an IP:
public double IPAddressToNumber(string IPaddress)
{
    int i;
    string[] arrDec;
    double num = 0;
    if (IPaddress == "")
    {
        return 0;
    }
    else
    {
        arrDec = IPaddress.Split('.');
        for (i = arrDec.Length - 1; i >= 0; i = i - 1)
        {
            num += ((int.Parse(arrDec[i]) % 256) * Math.Pow(256, (3 - i)));
        }
        return num;
    }
}
I assembled several of the above answers into an extension method that handles the endianness of the machine and IPv4 addresses that were mapped to IPv6.
public static class IPAddressExtensions
{
    /// <summary>
    /// Converts IPv4 and IPv4-mapped-to-IPv6 addresses to an unsigned integer.
    /// </summary>
    /// <param name="address">The address to convert.</param>
    /// <returns>An unsigned integer that represents an IPv4 address.</returns>
    public static uint ToUint(this IPAddress address)
    {
        if (address.AddressFamily == AddressFamily.InterNetwork || address.IsIPv4MappedToIPv6)
        {
            var bytes = address.GetAddressBytes();
            if (BitConverter.IsLittleEndian)
                Array.Reverse(bytes);
            return BitConverter.ToUInt32(bytes, 0);
        }
        throw new ArgumentOutOfRangeException("address", "Address must be IPv4 or IPv4 mapped to IPv6");
    }
}
Unit tests:
[TestClass]
public class IPAddressExtensionsTests
{
    [TestMethod]
    public void SimpleIp1()
    {
        var ip = IPAddress.Parse("0.0.0.15");
        uint expected = GetExpected(0, 0, 0, 15);
        Assert.AreEqual(expected, ip.ToUint());
    }

    [TestMethod]
    public void SimpleIp2()
    {
        var ip = IPAddress.Parse("0.0.1.15");
        uint expected = GetExpected(0, 0, 1, 15);
        Assert.AreEqual(expected, ip.ToUint());
    }

    [TestMethod]
    public void SimpleIpSix1()
    {
        var ip = IPAddress.Parse("0.0.0.15").MapToIPv6();
        uint expected = GetExpected(0, 0, 0, 15);
        Assert.AreEqual(expected, ip.ToUint());
    }

    [TestMethod]
    public void SimpleIpSix2()
    {
        var ip = IPAddress.Parse("0.0.1.15").MapToIPv6();
        uint expected = GetExpected(0, 0, 1, 15);
        Assert.AreEqual(expected, ip.ToUint());
    }

    [TestMethod]
    public void HighBits()
    {
        var ip = IPAddress.Parse("200.12.1.15").MapToIPv6();
        uint expected = GetExpected(200, 12, 1, 15);
        Assert.AreEqual(expected, ip.ToUint());
    }

    uint GetExpected(uint a, uint b, uint c, uint d)
    {
        return
            (a * 256u * 256u * 256u) +
            (b * 256u * 256u) +
            (c * 256u) +
            (d);
    }
}
public static Int32 getLongIPAddress(string ipAddress)
{
    return IPAddress.NetworkToHostOrder(BitConverter.ToInt32(IPAddress.Parse(ipAddress).GetAddressBytes(), 0));
}
The above example would be the way I go. The only thing you might have to do is convert to a UInt32 for display purposes, or for string purposes, including using it as a long address in string form, which is what is needed when using the IPAddress.Parse(String) function. Sigh.
If you were interested in the function, not just the answer, here is how it is done:
int ipToInt(int first, int second, int third, int fourth)
{
    return Convert.ToInt32((first * Math.Pow(256, 3))
        + (second * Math.Pow(256, 2)) + (third * 256) + fourth);
}
with first through fourth being the segments of the IPv4 address.
public bool TryParseIPv4Address(string value, out uint result)
{
    IPAddress ipAddress;
    if (!IPAddress.TryParse(value, out ipAddress) ||
        (ipAddress.AddressFamily != System.Net.Sockets.AddressFamily.InterNetwork))
    {
        result = 0;
        return false;
    }
    result = BitConverter.ToUInt32(ipAddress.GetAddressBytes().Reverse().ToArray(), 0);
    return true;
}
Multiply all the parts of the IP number by powers of 256 (256x256x256, 256x256, 256, and 1). For example:
IPv4 address: 127.0.0.1
32-bit number = (127x256^3) + (0x256^2) + (0x256^1) + 1
              = 2130706433
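A minimal sketch of that arithmetic in C# (shifting by 24, 16 and 8 bits is the same as multiplying by 256^3, 256^2 and 256):

static uint ToUInt32(string ip)
{
    string[] parts = ip.Split('.');
    return (uint.Parse(parts[0]) << 24)
         | (uint.Parse(parts[1]) << 16)
         | (uint.Parse(parts[2]) << 8)
         |  uint.Parse(parts[3]);
}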
Here's a solution that I worked out today (should've googled first!):
private static string IpToDecimal2(string ipAddress)
{
    // need a shift counter
    int shift = 3;
    // loop through the octets and compute the decimal version
    var octets = ipAddress.Split('.').Select(p => long.Parse(p));
    return octets.Aggregate(0L, (total, octet) => (total + (octet << (shift-- * 8)))).ToString();
}
I'm using LINQ, lambdas and some of the extensions on generics, so while it produces the same result, it uses some of the newer language features, and you can do it in three lines of code.
I have the explanation on my blog if you're interested.
Cheers,
-jc
I think this is wrong: "65536" ==> 0.0.255.255.
It should be: "65535" ==> 0.0.255.255, or "65536" ==> 0.1.0.0.
@Davy Landman, your solution with shifts is correct, but only for IPs whose first octet is less than or equal to 99; in fact, the first octet must be cast up to long.
Anyway, converting back with the long type is quite awkward, because it stores 64 bits (not 32 for an IP) and fills 4 bytes with zeroes:
static uint ToInt(string addr)
{
    return BitConverter.ToUInt32(IPAddress.Parse(addr).GetAddressBytes(), 0);
}

static string ToAddr(uint address)
{
    return new IPAddress(address).ToString();
}
Enjoy!
Massimo
Assuming you have an IP address in string format (e.g. 254.254.254.254):
string[] vals = inVal.Split('.');
uint output = 0;
for (byte i = 0; i < vals.Length; i++)
    output += (uint)(byte.Parse(vals[i]) << 8 * (vals.GetUpperBound(0) - i));
var address = IPAddress.Parse("10.0.11.174").GetAddressBytes();
long m_Address = ((address[3] << 24 | address[2] << 16 | address[1] << 8 | address[0]) & 0x0FFFFFFFF);
I use this:
public static uint IpToUInt32(string ip)
{
    if (!IPAddress.TryParse(ip, out IPAddress address)) return 0;
    return BitConverter.ToUInt32(address.GetAddressBytes(), 0);
}

public static string UInt32ToIp(uint address)
{
    return new IPAddress(address).ToString();
}
Take a look at some of the crazy parsing examples in .Net's IPAddress.Parse:
(MSDN)
"65536" ==> 0.0.255.255
"20.2" ==> 20.0.0.2
"20.65535" ==> 20.0.255.255
"128.1.2" ==> 128.1.0.2
I noticed that System.Net.IPAddress has an Address property (a System.Int64) and a constructor that also accepts the Int64 data type, so you can use those to convert an IP address to and from numeric format (although not Int32, but Int64).
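A small example of that round trip (the Address property is deprecated, so expect a compiler warning; the Int64 value is in network byte order):

var ip = new IPAddress(0x0100007F); // bytes 7F 00 00 01, i.e. 127.0.0.1
Console.WriteLine(ip);              // 127.0.0.1
Console.WriteLine(ip.Address);      // 16777343 (0x0100007F)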
