I have a Java code example of how a verification code should be computed, and I need to convert it to C#.
First of all, the code is computed as:
integer(SHA256(hash)[-2:]) mod 10000
That is, we take the SHA256 result, extract the 2 rightmost bytes, interpret them as a big-endian unsigned integer, and take the last 4 decimal digits for display.
Java code:
public static String calculate(byte[] documentHash) {
    byte[] digest = DigestCalculator.calculateDigest(documentHash, HashType.SHA256);
    ByteBuffer byteBuffer = ByteBuffer.wrap(digest);
    int shortBytes = Short.SIZE / Byte.SIZE; // Short.BYTES in Java 8
    int rightMostBytesIndex = byteBuffer.limit() - shortBytes;
    short twoRightmostBytes = byteBuffer.getShort(rightMostBytesIndex);
    int positiveInteger = ((int) twoRightmostBytes) & 0xffff;
    String code = String.valueOf(positiveInteger);
    String paddedCode = "0000" + code;
    return paddedCode.substring(code.length());
}

public static byte[] calculateDigest(byte[] dataToDigest, HashType hashType) {
    String algorithmName = hashType.getAlgorithmName();
    return DigestUtils.getDigest(algorithmName).digest(dataToDigest);
}
So in C#, from the Base64 string:
2afAxT+nH5qNYrfM+D7F6cKAaCKLLA23pj8ro3SksqwsdwmC3xTndKJotewzu7HlDy/DiqgkR+HXBiA0sW1x0Q==
the computed code should equal 3676.
Any ideas how to implement this?
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(GetCode("2afAxT+nH5qNYrfM+D7F6cKAaCKLLA23pj8ro3SksqwsdwmC3xTndKJotewzu7HlDy/DiqgkR+HXBiA0sW1x0Q=="));
    }

    public static string GetCode(string str)
    {
        using var sha = System.Security.Cryptography.SHA256.Create();
        var hash = sha.ComputeHash(Convert.FromBase64String(str));

        // Two rightmost bytes of the digest, read as a big-endian unsigned integer.
        var last2 = hash[^2..];
        var intVal = ((int)last2[0] << 8) | last2[1];

        // Last four decimal digits, zero-padded. This matches the Java
        // pad-and-substring logic, since taking the last 4 digits of a
        // value <= 65535 is the same as mod 10000.
        var digits = intVal % 10000;
        return $"{digits:0000}";
    }
}
I'm trying to convert a Base64 string into a byte[] and then into an AudioClip, but I'm getting noise in my sound.
I'm using the class below to convert the byte[] to an AudioClip.
using UnityEngine;
using System.Collections;
using System.IO;
using System;

namespace WWUtils.Audio {
    public class WAV {
        // properties
        public float[] LeftChannel { get; internal set; }
        public float[] RightChannel { get; internal set; }
        public int ChannelCount { get; internal set; }
        public int SampleCount { get; internal set; }
        public int Frequency { get; internal set; }

        // Convert two bytes to one float in the range -1 to 1
        static float bytesToFloat(byte firstByte, byte secondByte) {
            // Convert two bytes to one short (little-endian)
            short s = (short)((secondByte << 8) | firstByte);
            // Convert to the range -1 to (just below) 1
            return s / 32768.0F;
        }

        static int bytesToInt(byte[] bytes, int offset = 0) {
            int value = 0;
            for (int i = 0; i < 4; i++) {
                value |= ((int)bytes[offset + i]) << (i * 8);
            }
            return value;
        }

        private static byte[] GetBytes(string filename) {
            return File.ReadAllBytes(filename);
        }

        // Returns left and right float arrays. 'right' will be null if the sound is mono.
        public WAV(string filename) :
            this(GetBytes(filename)) { }

        public WAV(byte[] wav) {
            // Determine if mono or stereo
            ChannelCount = wav[22]; // Forget byte 23, as 99.999% of WAVs are 1 or 2 channels

            // Get the frequency
            Frequency = bytesToInt(wav, 24);

            // Get past all the other sub chunks to get to the data subchunk:
            int pos = 12; // First subchunk ID from 12 to 16

            // Keep iterating until we find the data chunk (i.e. 64 61 74 61, i.e. 100 97 116 97 in decimal)
            while (!(wav[pos] == 100 && wav[pos + 1] == 97 && wav[pos + 2] == 116 && wav[pos + 3] == 97)) {
                pos += 4;
                int chunkSize = wav[pos] + wav[pos + 1] * 256 + wav[pos + 2] * 65536 + wav[pos + 3] * 16777216;
                pos += 4 + chunkSize;
            }
            pos += 8;

            // pos is now positioned at the start of the actual sound data.
            SampleCount = (wav.Length - pos) / 2;    // 2 bytes per sample (16-bit mono)
            if (ChannelCount == 2) SampleCount /= 2; // 4 bytes per sample (16-bit stereo)

            // Allocate memory (right will be null if only mono sound)
            LeftChannel = new float[SampleCount];
            if (ChannelCount == 2) RightChannel = new float[SampleCount];
            else RightChannel = null;

            // Write to float array(s):
            int i = 0;
            while (pos < wav.Length) {
                LeftChannel[i] = bytesToFloat(wav[pos], wav[pos + 1]);
                pos += 2;
                if (ChannelCount == 2) {
                    RightChannel[i] = bytesToFloat(wav[pos], wav[pos + 1]);
                    pos += 2;
                }
                i++;
            }
        }

        public override string ToString() {
            return string.Format("[WAV: LeftChannel={0}, RightChannel={1}, ChannelCount={2}, SampleCount={3}, Frequency={4}]", LeftChannel, RightChannel, ChannelCount, SampleCount, Frequency);
        }
    }
}
And this to play the AudioClip:
var recByte = System.Convert.FromBase64String(base64);
WAV wav = new WAV(recByte);
AudioClip audioClip = AudioClip.Create("soundLenny-Result",
wav.SampleCount, 1, wav.Frequency, false, false);
audioClip.SetData(wav.LeftChannel, 0);
//PlayClip...
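To actually play it, a minimal sketch (my own addition, assuming the script has an AudioSource component attached):

AudioSource audioSource = GetComponent<AudioSource>(); // hypothetical: requires an AudioSource on the same GameObject
audioSource.clip = audioClip;
audioSource.Play();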
I'm trying to convert one of the following output types: https://learn.microsoft.com/pt-br/azure/cognitive-services/speech-service/rest-text-to-speech#audio-outputs
With riff-16khz-16bit-mono-pcm I get noise in the audio.
The format I actually want to convert is riff-24khz-16bit-mono-pcm.
If I use riff-24khz-16bit-mono-pcm I get an IndexOutOfRangeException; if I use riff-16khz-16bit-mono-pcm the audio is noise.
This is the error.
IndexOutOfRangeException: Index was outside the bounds of the array. WWUtils.Audio.WAV..ctor (System.Byte[] wav) (at Assets/Scripts/WAV.cs:68)
If I do this with a WAV file recorded from a microphone, the script works fine. How can I correct this?
I'm making a function that lets the user pass a double value and returns a UInt16.
This is my code:
public static UInt16 Value_To_BatteryVoltage(double value)
{
    var ret = ((int)value << 8);
    var retMod = (value % (int)value) * 10;
    return (UInt16)(ret + retMod);
}
Basically what it does is as follows. The function call:
Value_To_BatteryVoltage(25.10)
will return 6401.
I can check the result by doing:
public static double VoltageLevel(UInt16 value)
{
    return ((value & 0xFF00) >> 8) + ((value & 0x00FF) / 10.0);
}
This is working as expected, BUT, if I do:
Value_To_BatteryVoltage(25.11) // notice the 0.11
I get the wrong result, because:
public static UInt16 Value_To_BatteryVoltage(double value)
{
    var ret = ((int)value << 8);            // returns 6400, OK
    var retMod = (value % (int)value) * 10; // returns 0.11 x 10 = 1.1, WRONG!
    return (UInt16)(ret + retMod);          // returns 6401, because (UInt16)(6400 + 1.1) = 6401, the same as for 25.10, so I've lost precision
}
So the question is: is there some way to do this kind of conversion without losing precision?
If I understand the question, you want to store the characteristic (integer part) in the first 8 bits of the UInt16 and the mantissa (fractional part) in the second 8 bits.
This is one way to do it. I treat the double like a string and split it at the decimal point. For example:
public static UInt16 Value_To_BatteryVoltage(double value)
{
    string[] number = value.ToString().Split('.');
    UInt16 c = (UInt16)(UInt16.Parse(number[0]) << 8);
    UInt16 m = UInt16.Parse(number[1]);
    return (UInt16)(c + m);
}
And here is the output: Value_To_BatteryVoltage(25.11) returns 6411 (25 << 8 = 6400, plus 11).
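If string parsing feels fragile (it depends on the current culture's decimal separator), an arithmetic variant is possible. This is a sketch of my own, with hypothetical names, assuming non-negative values with at most two fractional digits and a decoder that divides by 100 instead of 10:

public static UInt16 PackVoltage(double value)
{
    int whole = (int)value;                            // integer part -> high byte
    int frac = (int)Math.Round((value - whole) * 100); // two decimal digits -> low byte
    return (UInt16)((whole << 8) | frac);
}

public static double VoltageLevel100(UInt16 packed)
{
    return (packed >> 8) + ((packed & 0x00FF) / 100.0); // note: divides by 100, not 10
}

// PackVoltage(25.11) == 6411; VoltageLevel100(6411) == 25.11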
Interestingly, I can find implementations of the Internet checksum in almost every language except C#. Does anyone have an implementation to share?
Remember, the Internet Protocol specifies that:
"The checksum field is the 16 bit one's complement of the one's
complement sum of all 16 bit words in the header. For purposes of
computing the checksum, the value of the checksum field is zero."
More explanation can be found from Dr. Math.
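To illustrate the core operation, 16-bit one's complement addition with end-around carry, here is a minimal sketch of my own (not part of the original question):

static ushort OnesComplementAdd(ushort a, ushort b)
{
    int sum = a + b;
    return (ushort)((sum & 0xFFFF) + (sum >> 16)); // fold any carry back into the low 16 bits
}
// e.g. OnesComplementAdd(0xFFFF, 0x0001) == 0x0001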
There are some efficiency pointers available, but that's not really a large concern for me at this point.
Please include your tests! (Edit: Valid comment regarding testing someone else's code - but I am going off of the protocol, don't have test vectors of my own, and would rather unit test it than put it into production to see if it matches what is currently being used! ;-)
Edit: Here are some unit tests that I came up with. They test an extension method which iterates through the entire byte collection. Please comment if you find fault in the tests.
[TestMethod()]
public void InternetChecksum_SimplestValidValue_ShouldMatch()
{
    IEnumerable<byte> value = new byte[1]; // should work for any-length array of zeros
    ushort expected = 0xFFFF;
    ushort actual = value.InternetChecksum();
    Assert.AreEqual(expected, actual);
}

[TestMethod()]
public void InternetChecksum_ValidSingleByteExtreme_ShouldMatch()
{
    IEnumerable<byte> value = new byte[] { 0xFF };
    ushort expected = 0xFF;
    ushort actual = value.InternetChecksum();
    Assert.AreEqual(expected, actual);
}

[TestMethod()]
public void InternetChecksum_ValidMultiByteExtrema_ShouldMatch()
{
    IEnumerable<byte> value = new byte[] { 0x00, 0xFF };
    ushort expected = 0xFF00;
    ushort actual = value.InternetChecksum();
    Assert.AreEqual(expected, actual);
}
I knew I had this one stored away somewhere...
http://cyb3rspy.wordpress.com/2008/03/27/ip-header-checksum-function-in-c/
Well, I dug up an implementation from an old code base and it passes the tests I specified in the question, so here it is (as an extension method):
public static ushort InternetChecksum(this IEnumerable<byte> value)
{
    byte[] buffer = value.ToArray();
    int length = buffer.Length;
    int i = 0;
    UInt32 sum = 0;
    UInt32 data = 0;

    while (length > 1)
    {
        data = (UInt32)(((UInt32)(buffer[i]) << 8) | ((UInt32)(buffer[i + 1]) & 0xFF));
        sum += data;
        if ((sum & 0xFFFF0000) > 0)
        {
            sum = sum & 0xFFFF;
            sum += 1;
        }
        i += 2;
        length -= 2;
    }

    if (length > 0)
    {
        sum += (UInt32)(buffer[i] << 8);
        //sum += (UInt32)(buffer[i]);
        if ((sum & 0xFFFF0000) > 0)
        {
            sum = sum & 0xFFFF;
            sum += 1;
        }
    }

    sum = ~sum;
    sum = sum & 0xFFFF;
    return (UInt16)sum;
}
I have made an implementation of the IPv4 header checksum calculation, as defined in RFC 791.
Extension Methods
public static ushort GetInternetChecksum(this ReadOnlySpan<byte> bytes)
    => CalculateChecksum(bytes, ignoreHeaderChecksum: true);

public static bool IsValidChecksum(this ReadOnlySpan<byte> bytes)
    // Should equal zero (valid)
    => CalculateChecksum(bytes, ignoreHeaderChecksum: false) == 0;
The Checksum Calculation
using System.Buffers.Binary;
private static ushort CalculateChecksum(ReadOnlySpan<byte> bytes, bool ignoreHeaderChecksum)
{
    ushort checksum = 0;

    for (int i = 0; i <= 18; i += 2)
    {
        // i = 0  e.g. [0..2]   Version and Internet Header Length
        // i = 2  e.g. [2..4]   Total Length
        // i = 4  e.g. [4..6]   Identification
        // i = 6  e.g. [6..8]   Flags and Fragmentation Offset
        // i = 8  e.g. [8..10]  TTL and Protocol
        // i = 10 e.g. [10..12] Header Checksum
        // i = 12 e.g. [12..14] Source Address #1
        // i = 14 e.g. [14..16] Source Address #2
        // i = 16 e.g. [16..18] Destination Address #1
        // i = 18 e.g. [18..20] Destination Address #2
        if (ignoreHeaderChecksum && i == 10) continue;

        ushort value = BinaryPrimitives.ReadUInt16BigEndian(bytes[i..(i + 2)]);

        // Each time a carry occurs, we must add a 1 to the sum
        if (checksum + value > ushort.MaxValue)
        {
            checksum++;
        }
        checksum += value;
    }

    // One's complement
    return (ushort)~checksum;
}
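As a usage sketch (my own addition, not from the original answer), here it is run against a commonly cited 20-byte example header whose checksum field is 0xB861; the arithmetic checks out by hand, and this assumes the two extension methods above are declared in a static class:

ReadOnlySpan<byte> header = new byte[]
{
    0x45, 0x00, 0x00, 0x73, 0x00, 0x00, 0x40, 0x00,
    0x40, 0x11, 0xB8, 0x61, 0xC0, 0xA8, 0x00, 0x01,
    0xC0, 0xA8, 0x00, 0xC7
};

Console.WriteLine(header.GetInternetChecksum().ToString("X4")); // B861
Console.WriteLine(header.IsValidChecksum());                    // True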
I'm looking for a function that will convert a standard IPv4 address into an Integer. Bonus points available for a function that will do the opposite.
Solution should be in C#.
An IPv4 address is a 32-bit unsigned integer. Meanwhile, the IPAddress.Address property, while deprecated, is an Int64 that returns the unsigned 32-bit value of the IPv4 address (the catch is, it's in network byte order, so you need to swap it around).
For example, my local google.com is at 64.233.187.99. That's equivalent to:
64*2^24 + 233*2^16 + 187*2^8 + 99
= 1089059683
And indeed, http://1089059683/ works as expected (at least in Windows, tested with IE, Firefox and Chrome; doesn't work on iPhone though).
Here's a test program to show both conversions, including the network/host byte swapping:
using System;
using System.Net;

class App
{
    static long ToInt(string addr)
    {
        // careful of sign extension: convert to uint first;
        // unsigned NetworkToHostOrder ought to be provided.
        return (long)(uint)IPAddress.NetworkToHostOrder(
            (int)IPAddress.Parse(addr).Address);
    }

    static string ToAddr(long address)
    {
        return IPAddress.Parse(address.ToString()).ToString();
        // This also works:
        // return new IPAddress((uint) IPAddress.HostToNetworkOrder(
        //     (int) address)).ToString();
    }

    static void Main()
    {
        Console.WriteLine(ToInt("64.233.187.99"));
        Console.WriteLine(ToAddr(1089059683));
    }
}
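On a little-endian machine this prints 1089059683 followed by 64.233.187.99.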
Here's a pair of methods to convert from IPv4 to a correct integer and back:
public static uint ConvertFromIpAddressToInteger(string ipAddress)
{
    var address = IPAddress.Parse(ipAddress);
    byte[] bytes = address.GetAddressBytes();

    // flip big-endian (network order) to little-endian
    if (BitConverter.IsLittleEndian)
    {
        Array.Reverse(bytes);
    }

    return BitConverter.ToUInt32(bytes, 0);
}

public static string ConvertFromIntegerToIpAddress(uint ipAddress)
{
    byte[] bytes = BitConverter.GetBytes(ipAddress);

    // flip little-endian to big-endian (network order)
    if (BitConverter.IsLittleEndian)
    {
        Array.Reverse(bytes);
    }

    return new IPAddress(bytes).ToString();
}
Example
ConvertFromIpAddressToInteger("255.255.255.254"); // 4294967294
ConvertFromIntegerToIpAddress(4294967294); // 255.255.255.254
Explanation
IP addresses are in network order (big-endian), while integers on little-endian machines (e.g. x86/x64 Windows) are little-endian, so to get a correct value you must reverse the bytes before converting on such a system.
Also, even for IPv4, an int can't hold addresses bigger than 127.255.255.255, e.g. the broadcast address (255.255.255.255), so use a uint.
@Barry Kelly and @Andrew Hare: actually, I don't think multiplying is the clearest way to do this (although it is correct).
An Int32 "formatted" IP address can be seen as the following structure
[StructLayout(LayoutKind.Sequential, Pack = 1)]
struct IPv4Address
{
    public Byte A;
    public Byte B;
    public Byte C;
    public Byte D;
}

// to actually cast it from or to an Int32 I think you
// need to reverse the fields due to little-endianness
So to convert the IP address 64.233.187.99 you could do:

(64  = 0x40) << 24 == 0x40000000
(233 = 0xE9) << 16 == 0x00E90000
(187 = 0xBB) << 8  == 0x0000BB00
(99  = 0x63)       == 0x00000063
                      ----------
                      0x40E9BB63

So you could add them up using +, or you could binary-OR them together. The result is 0x40E9BB63, which is 1089059683. (In my opinion, looking at it in hex makes it much easier to see the bytes.)
So you could write the function as:
int ipToInt(int first, int second, int third, int fourth)
{
    return (first << 24) | (second << 16) | (third << 8) | (fourth);
}
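For example, ipToInt(64, 233, 187, 99) returns 0x40E9BB63, i.e. 1089059683.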
Try these:
private int IpToInt32(string ipAddress)
{
    return BitConverter.ToInt32(IPAddress.Parse(ipAddress).GetAddressBytes().Reverse().ToArray(), 0);
}

private string Int32ToIp(int ipAddress)
{
    return new IPAddress(BitConverter.GetBytes(ipAddress).Reverse().ToArray()).ToString();
}
Since no one has posted the code that uses BitConverter and actually checks the endianness, here goes:
byte[] ip = address.Split('.').Select(s => Byte.Parse(s)).ToArray();
if (BitConverter.IsLittleEndian) {
    Array.Reverse(ip);
}
int num = BitConverter.ToInt32(ip, 0);
and back:
byte[] ip = BitConverter.GetBytes(num);
if (BitConverter.IsLittleEndian) {
    Array.Reverse(ip);
}
string address = String.Join(".", ip.Select(n => n.ToString()));
I encountered some problems with the described solutions when facing IP addresses with a very large value.
The result would be that the byte[0] * 16777216 thingy would overflow and become a negative int value.
What fixed it for me is a simple cast:
public static long ConvertIPToLong(string ipAddress)
{
    System.Net.IPAddress ip;

    if (System.Net.IPAddress.TryParse(ipAddress, out ip))
    {
        byte[] bytes = ip.GetAddressBytes();

        return
            16777216L * bytes[0] +
            65536 * bytes[1] +
            256 * bytes[2] +
            bytes[3];
    }
    else
        return 0;
}
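For example, ConvertIPToLong("255.255.255.255") returns 4294967295, which would have overflowed a plain int.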
The reverse of Davy Landman's function:
string IntToIp(int d)
{
    int v1 = d & 0xff;
    int v2 = (d >> 8) & 0xff;
    int v3 = (d >> 16) & 0xff;
    int v4 = (d >> 24) & 0xff; // mask the top octet too, or first octets >= 128 come out negative
    return v4 + "." + v3 + "." + v2 + "." + v1;
}
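For example, IntToIp(1089059683) returns "64.233.187.99".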
With the UInt32 in the proper little-endian format, here are two simple conversion functions:
public uint GetIpAsUInt32(string ipString)
{
    IPAddress address = IPAddress.Parse(ipString);
    byte[] ipBytes = address.GetAddressBytes();
    Array.Reverse(ipBytes);
    return BitConverter.ToUInt32(ipBytes, 0);
}

public string GetIpAsString(uint ipVal)
{
    byte[] ipBytes = BitConverter.GetBytes(ipVal);
    Array.Reverse(ipBytes);
    return new IPAddress(ipBytes).ToString();
}
My question was closed; I have no idea why. The accepted answer here is not the same as what I need.
This gives me the correct integer value for an IP:
public double IPAddressToNumber(string IPaddress)
{
    int i;
    string[] arrDec;
    double num = 0;

    if (IPaddress == "")
    {
        return 0;
    }
    else
    {
        arrDec = IPaddress.Split('.');
        for (i = arrDec.Length - 1; i >= 0; i = i - 1)
        {
            num += ((int.Parse(arrDec[i]) % 256) * Math.Pow(256, (3 - i)));
        }
        return num;
    }
}
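For example, IPAddressToNumber("64.233.187.99") returns 1089059683.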
I assembled several of the above answers into an extension method that handles the endianness of the machine and handles IPv4 addresses that were mapped to IPv6.
public static class IPAddressExtensions
{
    /// <summary>
    /// Converts IPv4 and IPv4-mapped-to-IPv6 addresses to an unsigned integer.
    /// </summary>
    /// <param name="address">The address to convert.</param>
    /// <returns>An unsigned integer that represents an IPv4 address.</returns>
    public static uint ToUint(this IPAddress address)
    {
        if (address.AddressFamily == AddressFamily.InterNetwork || address.IsIPv4MappedToIPv6)
        {
            var bytes = address.GetAddressBytes();
            if (BitConverter.IsLittleEndian)
                Array.Reverse(bytes);
            return BitConverter.ToUInt32(bytes, 0);
        }
        throw new ArgumentOutOfRangeException("address", "Address must be IPv4 or IPv4 mapped to IPv6");
    }
}
Unit tests:
[TestClass]
public class IPAddressExtensionsTests
{
    [TestMethod]
    public void SimpleIp1()
    {
        var ip = IPAddress.Parse("0.0.0.15");
        uint expected = GetExpected(0, 0, 0, 15);
        Assert.AreEqual(expected, ip.ToUint());
    }

    [TestMethod]
    public void SimpleIp2()
    {
        var ip = IPAddress.Parse("0.0.1.15");
        uint expected = GetExpected(0, 0, 1, 15);
        Assert.AreEqual(expected, ip.ToUint());
    }

    [TestMethod]
    public void SimpleIpSix1()
    {
        var ip = IPAddress.Parse("0.0.0.15").MapToIPv6();
        uint expected = GetExpected(0, 0, 0, 15);
        Assert.AreEqual(expected, ip.ToUint());
    }

    [TestMethod]
    public void SimpleIpSix2()
    {
        var ip = IPAddress.Parse("0.0.1.15").MapToIPv6();
        uint expected = GetExpected(0, 0, 1, 15);
        Assert.AreEqual(expected, ip.ToUint());
    }

    [TestMethod]
    public void HighBits()
    {
        var ip = IPAddress.Parse("200.12.1.15").MapToIPv6();
        uint expected = GetExpected(200, 12, 1, 15);
        Assert.AreEqual(expected, ip.ToUint());
    }

    uint GetExpected(uint a, uint b, uint c, uint d)
    {
        return
            (a * 256u * 256u * 256u) +
            (b * 256u * 256u) +
            (c * 256u) +
            (d);
    }
}
public static Int32 getLongIPAddress(string ipAddress)
{
    return IPAddress.NetworkToHostOrder(BitConverter.ToInt32(IPAddress.Parse(ipAddress).GetAddressBytes(), 0));
}
The above example would be the way I go. The only thing you might have to do is convert to a UInt32 for display purposes or for string purposes, including using it as a long address in string form, which is what is needed when using the IPAddress.Parse(String) function. Sigh.
If you were interested in the function, not just the answer, here is how it is done:
int ipToInt(int first, int second, int third, int fourth)
{
    return Convert.ToInt32((first * Math.Pow(256, 3))
        + (second * Math.Pow(256, 2)) + (third * 256) + fourth);
}
with first through fourth being the segments of the IPv4 address.
public bool TryParseIPv4Address(string value, out uint result)
{
    IPAddress ipAddress;

    if (!IPAddress.TryParse(value, out ipAddress) ||
        (ipAddress.AddressFamily != System.Net.Sockets.AddressFamily.InterNetwork))
    {
        result = 0;
        return false;
    }

    result = BitConverter.ToUInt32(ipAddress.GetAddressBytes().Reverse().ToArray(), 0);
    return true;
}
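For example, reusing the address from earlier in the thread:

uint value;
if (TryParseIPv4Address("64.233.187.99", out value))
    Console.WriteLine(value); // 1089059683 on a little-endian machine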
Multiply all the parts of the IP number by powers of 256 (256 x 256 x 256, 256 x 256, 256, and 1). For example:
IPv4 address: 127.0.0.1
32-bit number:
= (127 x 256^3) + (0 x 256^2) + (0 x 256^1) + 1
= 2130706433
Here's a solution that I worked out today (should've googled first!):
private static string IpToDecimal2(string ipAddress)
{
    // need a shift counter
    int shift = 3;

    // loop through the octets and compute the decimal version
    var octets = ipAddress.Split('.').Select(p => long.Parse(p));
    return octets.Aggregate(0L, (total, octet) => (total + (octet << (shift-- * 8)))).ToString();
}
I'm using LINQ, lambdas and some of the extensions on generics, so while it produces the same result it uses some of the newer language features, and you can do it in three lines of code.
I have the explanation on my blog if you're interested.
cheers,
-jc
I think this is wrong: "65536" ==> 0.0.255.255
It should be: "65535" ==> 0.0.255.255, or "65536" ==> 0.1.0.0
@Davy Landman, your solution with shifts is correct, but only for IPs whose first octet is 127 or less; in fact the first octet must be cast up to a long, or the sign bit overflows.
Anyway, converting back via a long type is somewhat awkward, because it stores 64 bits (not the 32 an IP needs) and leaves the upper 4 bytes filled with zeroes.
static uint ToInt(string addr)
{
    return BitConverter.ToUInt32(IPAddress.Parse(addr).GetAddressBytes(), 0);
}

static string ToAddr(uint address)
{
    return new IPAddress(address).ToString();
}
Enjoy!
Massimo
Assuming you have an IP address in string format (e.g. 254.254.254.254):

string[] vals = inVal.Split('.');
uint output = 0;
for (byte i = 0; i < vals.Length; i++)
    output += (uint)(byte.Parse(vals[i]) << 8 * (vals.GetUpperBound(0) - i));
var address = IPAddress.Parse("10.0.11.174").GetAddressBytes();
long m_Address = ((address[3] << 24 | address[2] << 16 | address[1] << 8 | address[0]) & 0x0FFFFFFFF);
I use this:
public static uint IpToUInt32(string ip)
{
    if (!IPAddress.TryParse(ip, out IPAddress address)) return 0;
    return BitConverter.ToUInt32(address.GetAddressBytes(), 0);
}

public static string UInt32ToIp(uint address)
{
    return new IPAddress(address).ToString();
}
Take a look at some of the crazy parsing examples in .NET's IPAddress.Parse (MSDN):
"65536" ==> 0.0.255.255
"20.2" ==> 20.0.0.2
"20.65535" ==> 20.0.255.255
"128.1.2" ==> 128.1.0.2
I noticed that System.Net.IPAddress has an Address property (a System.Int64) and a constructor that also accepts an Int64, so you can use those to convert an IP address to/from a numeric format (although Int64 rather than Int32).
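A minimal sketch of that approach (my own illustration, with hypothetical names ToNumber/ToAddress; note that Address is deprecated, and both it and the constructor use network byte order, so the number differs from the host-order values discussed above):

#pragma warning disable CS0618 // IPAddress.Address is obsolete
static long ToNumber(string addr)
{
    return IPAddress.Parse(addr).Address;   // network byte order
}

static string ToAddress(long value)
{
    return new IPAddress(value).ToString(); // expects network byte order
}
#pragma warning restore CS0618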