EBCDIC to ASCII conversion, handling numeric values - c#

I’m attempting to convert files from EBCDIC to ASCII format and have run into an interesting issue. The files contain fixed-length records, with some fields being signed binary integers (described as B4 in the record layout) and long-precision numeric values (described as L8 in the record layout). I’ve been able to convert character data with no problem, but I’m not sure how to go about converting these numeric values. From a reference manual for the original system (an IBM 5110), the fields are described below.
B indicates the length (2, 4, or 8 bytes) of numeric data items in
fixed-point signed binary integer format that are to be converted to
BASIC internal data format. For record I/O file input, the next 2,
4, or 8 bytes in the record contain a signed binary value to be
converted by the system into internal data format and assigned to the
variable(s) specified in the READ FILE or REREAD FILE statement using
a FORM statement.
and
L indicates long-precision (8 characters) for numeric values. For
input, this entry indicates that an eight-position, long-precision
value in the record is to be assigned without conversion to a
corresponding numeric variable specified in the READ FILE or REREAD
FILE statement.
EDIT: Here's the code I'm using for the conversion
private void ConvertFile(EbcdicFile file)
{
    if (file == null) return;

    var filePath = Path.Combine(file.Path, file.FileName);
    if (!File.Exists(filePath))
    {
        this.Logger.Info(string.Format("Cannot convert file {0}. It does not exist.", filePath));
        return;
    }

    var ebcdic = Encoding.GetEncoding(37);
    string convertedFilepath = Path.Combine(file.Path, file.ConvertedFileName);
    byte[] fileData = File.ReadAllBytes(filePath);

    if (!file.HasNumericFields)
        File.WriteAllBytes(convertedFilepath, Encoding.Convert(ebcdic, Encoding.ASCII, fileData));
    else
    {
        var convertedFileData = new List<byte>();
        for (int position = 0; position < fileData.Length; position += file.RecordLength)
        {
            var segment = new ArraySegment<byte>(fileData, position, file.RecordLength);
            file.Fields.ForEach(field =>
            {
                var fieldSegment = segment.Array.Skip(segment.Offset + field.Start - 1).Take(field.Length);
                if (field.Type.Equals("string", StringComparison.OrdinalIgnoreCase))
                {
                    convertedFileData.AddRange(
                        Encoding.Convert(ebcdic, Encoding.ASCII, fieldSegment.ToArray())
                    );
                }
                else if (field.Type.Equals("B4", StringComparison.OrdinalIgnoreCase))
                {
                    // Not sure how to convert this field
                }
                else if (field.Type.Equals("L8", StringComparison.OrdinalIgnoreCase))
                {
                    // Not sure how to convert this field
                }
            });
        }
        File.WriteAllBytes(convertedFilepath, convertedFileData.ToArray());
    }
}

You must first know the fixed record size. Use FileStream.Read() to read one record's worth of bytes, then Encoding.GetString() to convert it to a string.
Then fish the fields out of the record using String.Substring(). A B4 is simply a Substring call with a length of 4, an L8 one with a length of 8. Further convert such a field to a number with Decimal.Parse(). You may have to divide the result; it wasn't clear what fixed-point multiplier is used. Good odds it's 100.

Okay, so I've figured out how to convert both fields. B4 fields are very straightforward. They are essentially a 4-byte array which can be converted to an integer.
byte[] by = fieldSegment.ToArray(); // the 4-byte B4 field sliced out of the record

// The IBM 5110 was a big-endian machine, so reverse the array on a little-endian host
if (BitConverter.IsLittleEndian)
    Array.Reverse(by);

int value = BitConverter.ToInt32(by, 0);
The L8 fields are 8-byte arrays representing an IBM double-precision (hexadecimal) float. There are many ways this can be converted to an IEEE 754 float. A few examples can be found at:
How To Read IBM 370 Data from a Binary File
Transform between IEEE, IBM or VAX floating point number formats and bytes expressions
Here's the version I used based on guidance from the articles.
private double IbmFloatToDouble(byte[] value)
{
    if (ReferenceEquals(null, value))
        throw new ArgumentNullException("value");

    if (BitConverter.ToInt64(value, 0) == 0)
        return 0;

    int exponentBias = 64;
    int ibmBase = 16;

    int signValue = (value[0] & 0x80) >> 7;
    int exponentValue = (value[0] & 0x7f);

    double fraction1 = (value[1] << 16) + (value[2] << 8) + value[3];
    // Cast to long so the shift of the high byte cannot overflow a 32-bit int.
    double fraction2 = ((long)value[4] << 24) + (value[5] << 16) + (value[6] << 8) + value[7];

    double exponent24 = 16777216.0;          // 2^24
    double exponent56 = 72057594037927936.0; // 2^56

    double mantissa1 = fraction1 / exponent24;
    double mantissa2 = fraction2 / exponent56;
    double mantissa = mantissa1 + mantissa2;

    double exponent = Math.Pow(ibmBase, exponentValue - exponentBias);
    double sign = signValue == 0 ? 1.0 : -1.0;

    return sign * mantissa * exponent;
}
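For completeness, the two empty branches in ConvertFile above could then be filled in along these lines. This is only a sketch of one option: it writes each numeric value out as invariant-culture ASCII text (which changes the field widths, so a fixed-width target format would need padding or formatting instead), and it assumes a using System.Globalization; directive for CultureInfo.
else if (field.Type.Equals("B4", StringComparison.OrdinalIgnoreCase))
{
    byte[] by = fieldSegment.ToArray();        // 4-byte big-endian signed integer
    if (BitConverter.IsLittleEndian)
        Array.Reverse(by);
    int b4 = BitConverter.ToInt32(by, 0);
    convertedFileData.AddRange(
        Encoding.ASCII.GetBytes(b4.ToString(CultureInfo.InvariantCulture)));
}
else if (field.Type.Equals("L8", StringComparison.OrdinalIgnoreCase))
{
    double l8 = IbmFloatToDouble(fieldSegment.ToArray());   // 8-byte IBM hex float
    convertedFileData.AddRange(
        Encoding.ASCII.GetBytes(l8.ToString("R", CultureInfo.InvariantCulture)));
}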

Related

Convert BigInteger Binary to BigInteger Number?

Currently I am using the Long integer type. I used the following to convert from/to binary/number:
Convert.ToInt64(BinaryString, 2); // Convert binary string of base 2 to number
Convert.ToString(LongNumber, 2); // Convert long number to binary string of base 2
Now the numbers I am using have exceeded 64 bits, so I started using BigInteger. I can't seem to find the equivalent of the code above.
How can I convert a binary string that has over 64 bits to a BigInteger number and vice versa?
Update:
The references in the answer contain what I want, but I am having some trouble with the conversion from number to binary.
I have used the following code, which is available in the first reference:
public static string ToBinaryString(this BigInteger bigint)
{
    var bytes = bigint.ToByteArray();
    var idx = bytes.Length - 1;

    // Create a StringBuilder having appropriate capacity.
    var base2 = new StringBuilder(bytes.Length * 8);

    // Convert first byte to binary.
    var binary = Convert.ToString(bytes[idx], 2);

    // Ensure leading zero exists if value is positive.
    if (binary[0] != '0' && bigint.Sign == 1)
    {
        base2.Append('0');
    }

    // Append binary string to StringBuilder.
    base2.Append(binary);

    // Convert remaining bytes adding leading zeros.
    for (idx--; idx >= 0; idx--)
    {
        base2.Append(Convert.ToString(bytes[idx], 2).PadLeft(8, '0'));
    }

    return base2.ToString();
}
The result I got is wrong:
100001000100000000000100000110000100010000000000000000000000000000000000 ===> 2439583056328331886592
2439583056328331886592 ===> 0100001000100000000000100000110000100010000000000000000000000000000000000
If you put the resulting binary strings under each other, you will notice that the conversion is correct and that the only problem is an extra leading zero on the left:
100001000100000000000100000110000100010000000000000000000000000000000000
0100001000100000000000100000110000100010000000000000000000000000000000000
I tried reading the explanation provided in the code and changing it but no luck.
Update 2:
I was able to solve it by changing the following in the code:
// Ensure leading zero exists if value is positive.
if (binary[0] != '0' && bigint.Sign == 1)
{
    base2.Append('0');
    // Append binary string to StringBuilder.
    base2.Append(binary);
}
Unfortunately, there is nothing built into the .NET Framework.
Fortunately, the StackOverflow community has already solved both problems:
BigInteger -> Binary: BigInteger to Hex/Decimal/Octal/Binary strings?
Binary -> BigInteger: C# Convert large binary string to decimal system
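If it helps, here is a minimal sketch along the same lines, handling non-negative values only (the class and method names are my own):
using System.Numerics;
using System.Text;

public static class BigIntegerBinary
{
    // Binary string -> BigInteger: shift the accumulator left one bit per character.
    public static BigInteger FromBinaryString(string bits)
    {
        BigInteger result = BigInteger.Zero;
        foreach (char c in bits)
            result = (result << 1) + (c == '1' ? 1 : 0);
        return result;
    }

    // BigInteger -> binary string: peel off the low bit until nothing is left.
    public static string ToBinaryString(BigInteger value)
    {
        if (value.IsZero) return "0";
        var sb = new StringBuilder();
        while (value > 0)
        {
            sb.Insert(0, (value & 1) == 1 ? '1' : '0');
            value >>= 1;
        }
        return sb.ToString();
    }
}
Because this version works on the magnitude directly, it never produces the extra leading zero the question ran into.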
There is a good reference on MSDN about BigInteger; can you check it?
https://msdn.microsoft.com/en-us/library/system.numerics.biginteger(v=vs.110).aspx
There is also a post about converting from binary to BigInteger: Conversion of a binary representation stored in a list of integers (little endian) into a Biginteger
This example is from MSDN.
string positiveString = "91389681247993671255432112000000";
string negativeString = "-90315837410896312071002088037140000";
BigInteger posBigInt = 0;
BigInteger negBigInt = 0;

try
{
    posBigInt = BigInteger.Parse(positiveString);
    Console.WriteLine(posBigInt);
}
catch (FormatException)
{
    Console.WriteLine("Unable to convert the string '{0}' to a BigInteger value.",
                      positiveString);
}

if (BigInteger.TryParse(negativeString, out negBigInt))
    Console.WriteLine(negBigInt);
else
    Console.WriteLine("Unable to convert the string '{0}' to a BigInteger value.",
                      negativeString);

// The example displays the following output:
//    9.1389681247993671255432112E+31
//    -9.0315837410896312071002088037E+34

CRC-16 and CRC-32 Checks

I need help trying to verify CRC-16 values (also need help with CRC-32 values). I tried to sit down and understand how CRC works but I am drawing a blank.
My first problem is when trying to use an online calculator to verify the message "BD001325E032091B94C412AC", whose CRC-16 should be 12AC. The documentation states that the last two octets are the CRC-16 value, so I am inputting "BD001325E032091B94C4" into the site http://www.lammertbies.nl/comm/info/crc-calculation.html and receive 5A90 as the result instead of 12AC.
Does anybody know why these values are different, and where I can find code for how to calculate CRC-16 and CRC-32 values (I plan to learn how to do this later, but time doesn't allow right now)?
Some more messages are as following:
16000040FFFFFFFF00015FCB
3C00003144010405E57022C7
BA00001144010101B970F0ED
3900010101390401B3049FF1
09900C800000000000008CF3
8590000000000000000035F7
00900259025902590259EBC9
0200002B00080191014BF5A2
BB0000BEE0014401B970E51E
3D000322D0320A2510A263A0
2C0001440000D60000D65E54
--Edit--
I have included more information. The documentation I was referencing is TIA-102.BAAA-A (from the TIA standard). The following is what the documentation states (trying to avoid copyright infringement as much as possible):
The Last Block in a packet comprises several octets of user information and / or
pad octets, followed by a 4-octet CRC parity check. This is referred to as the
packet CRC.
The packet CRC is a 4-octet cyclic redundancy check coded over all of the data
octets included in the Intermediate Blocks and the octets of user information of
the Last Block. The specific calculation is as follows.
Let k be the total number of user information and pad bits over which the packet
CRC is to be calculated. Consider the k message bits as the coefficients of a
polynomial M(x) of degree k–1, associating the MSB of the zero-th message
octet with x^k–1 and the LSB of the last message octet with x^0. Define the
generator polynomial, GM(x), and the inversion polynomial, IM(x).
GM(x) = x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 +
x^4 + x^2 + x + 1
IM(x) = x^31 + x^30 + x^29 + ... + x^2 + x +1
The packet CRC polynomial, FM(x), is then computed from the following formula.
FM(x) = ( x^32 M(x) mod GM(x) ) + IM(x) modulo 2, i.e., in GF(2)
The coefficients of FM(x) are placed in the CRC field with the MSB of the zero-th
octet of the CRC corresponding to x^31 and the LSB of the third octet of the CRC
corresponding to x^0.
In the above quote, I have put ^ to show powers as the formatting didn't stay the same when quoted. I'm not sure what goes to what but does this help?
I have a class I converted from some C++ code I found on the internet; it uses a long to calculate a CRC32. It adheres to the standard and is the one used by PKZIP, WinZip and Ethernet. To test it, use WinZip to compress a file, then calculate the CRC of the same file with this class; it should return the same CRC. It does for me.
public class CRC32
{
    private int[] iTable;

    public CRC32()
    {
        this.iTable = new int[256];
        Init();
    }

    /**
     * Initialize the iTable applying the polynomial used by PKZIP, WinZip and Ethernet.
     */
    private void Init()
    {
        // 0x04C11DB7 is the official polynomial used by PKZip, WinZip and Ethernet.
        int iPolynomial = 0x04C11DB7;

        // 256 values representing ASCII character codes.
        for (int iAscii = 0; iAscii <= 0xFF; iAscii++)
        {
            this.iTable[iAscii] = this.Reflect(iAscii, (byte) 8) << 24;
            for (int i = 0; i <= 7; i++)
            {
                if ((this.iTable[iAscii] & 0x80000000L) == 0) this.iTable[iAscii] = (this.iTable[iAscii] << 1) ^ 0;
                else this.iTable[iAscii] = (this.iTable[iAscii] << 1) ^ iPolynomial;
            }
            this.iTable[iAscii] = this.Reflect(this.iTable[iAscii], (byte) 32);
        }
    }

    /**
     * Reflection is a requirement for the official CRC-32 standard. Note that you can create a CRC without it,
     * but it won't conform to the standard.
     *
     * @param iReflect
     *            value to apply the reflection to
     * @param iValue
     *            number of bits to reflect
     * @return the calculated value
     */
    private int Reflect(int iReflect, int iValue)
    {
        int iReturned = 0;
        // Swap bit 0 for bit 7, bit 1 for bit 6, etc....
        for (int i = 1; i < (iValue + 1); i++)
        {
            if ((iReflect & 1) != 0)
            {
                iReturned |= (1 << (iValue - i));
            }
            iReflect >>= 1;
        }
        return iReturned;
    }

    /**
     * CalculateCRC calculates the CRC32 by looping through each byte in sData.
     *
     * @param lCRC
     *            the variable to hold the CRC. It must have been initialized.
     *            <p>
     *            See FullCRC for an example.
     *            </p>
     * @param sData
     *            array of bytes to calculate the CRC of
     * @param iDataLength
     *            the length of the data
     * @return the newly calculated CRC
     */
    public long CalculateCRC(long lCRC, byte[] sData, int iDataLength)
    {
        for (int i = 0; i < iDataLength; i++)
        {
            lCRC = (lCRC >> 8) ^ (long) (this.iTable[(int) (lCRC & 0xFF) ^ (int) (sData[i] & 0xff)] & 0xffffffffL);
        }
        return lCRC;
    }

    /**
     * Calculates the CRC32 for the given data.
     *
     * @param sData
     *            the data to calculate the CRC of
     * @param iDataLength
     *            the length of the data
     * @return the calculated CRC32
     */
    public long FullCRC(byte[] sData, int iDataLength)
    {
        long lCRC = 0xffffffffL;
        lCRC = this.CalculateCRC(lCRC, sData, iDataLength);
        return (lCRC /*& 0xffffffffL*/ ^ 0xffffffffL);
    }

    /**
     * Calculates the CRC32 of a file.
     *
     * @param sFileName
     *            the complete file path
     * @param context
     *            the context used to open the file
     * @return the calculated CRC32, or -1 if an error occurs (file not found).
     */
    long FileCRC(String sFileName, Context context)
    {
        long iOutCRC = 0xffffffffL; // Initialize the CRC.
        int iBytesRead = 0;
        int buffSize = 32 * 1024;
        FileInputStream isFile = null;
        try
        {
            byte[] data = new byte[buffSize]; // 32 KB buffer
            isFile = context.openFileInput(sFileName);
            try
            {
                while ((iBytesRead = isFile.read(data, 0, buffSize)) > 0)
                {
                    iOutCRC = this.CalculateCRC(iOutCRC, data, iBytesRead);
                }
                return (iOutCRC ^ 0xffffffffL); // Finalize the CRC.
            }
            catch (Exception e)
            {
                // Error reading file
            }
            finally
            {
                isFile.close();
            }
        }
        catch (Exception e)
        {
            // File not found
        }
        return -1L;
    }
}
Read Ross Williams' tutorial on CRCs to get a better understanding of CRCs, what defines a particular CRC, and their implementations.
The reveng website has an excellent catalog of known CRCs, and for each the CRC of a test string (nine bytes: "123456789" in ASCII/UTF-8). Note that there are 22 different 16-bit CRCs defined there.
The reveng software on that same site can be used to reverse engineer the polynomial, initialization, post-processing, and bit reversal given several examples as you have for the 16-bit CRC. (Hence the name "reveng".) I ran your data through and got:
./reveng -w 16 -s 16000040FFFFFFFF00015FCB 3C00003144010405E57022C7 BA00001144010101B970F0ED 3900010101390401B3049FF1 09900C800000000000008CF3 8590000000000000000035F7 00900259025902590259EBC9 0200002B00080191014BF5A2 BB0000BEE0014401B970E51E 3D000322D0320A2510A263A0 2C0001440000D60000D65E54
width=16 poly=0x1021 init=0xc921 refin=false refout=false xorout=0x0000 check=0x2fcf name=(none)
As indicated by the "(none)", that 16-bit CRC is not any of the 22 listed on reveng, though it is similar to several of them, differing only in the initialization.
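Given those parameters (non-reflected, poly 0x1021, init 0xC921, no final XOR), a straightforward bit-by-bit sketch in C# might look like this; it should reproduce the check value 0x2fcf shown above for the nine ASCII bytes "123456789":
static ushort Crc16(byte[] data)
{
    const ushort poly = 0x1021;
    ushort crc = 0xC921;                       // init value found by reveng
    foreach (byte b in data)
    {
        crc ^= (ushort)(b << 8);               // feed the next byte into the top of the register
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) != 0
                ? (ushort)((crc << 1) ^ poly)
                : (ushort)(crc << 1);
    }
    return crc;                                // xorout = 0x0000, so no final inversion
}
Applied to the first ten bytes of each of your messages, it should match the final two bytes (most significant byte first).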
The additional information you provided is for a 32-bit CRC, either CRC-32 or CRC-32/BZIP in the reveng catalog, depending on whether the bits are reversed or not.
There are quite a few parameters to CRC calculations: Polynomial, initial value, final XOR... see Wikipedia for details. Your CRC does not seem to fit the ones on the site you used, but you can try to find the right parameters from your documentation and use a different calculator, e.g. this one (though I'm afraid it doesn't support HEX input).
One thing to keep in mind is that CRC-16 is usually calculated over the data that is supposed to be checksummed plus two zero-bytes, e.g. you are probably looking for a CRC16 function where CRC16(BD001325E032091B94C40000) == 12AC. With checksums calculated in this way, the CRC of the data with checksum appended will work out to 0, which makes checking easier, e.g. CRC16(BD001325E032091B94C412AC) == 0000
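For the 32-bit packet CRC quoted from TIA-102 above, a literal, unoptimized transcription of the formula is sketched below: the register starts at zero (the x^32·M(x) mod GM(x) term) and the result is XORed with all ones (the + IM(x) term). Whether the octets or bits also need to be reversed for your data is exactly the CRC-32 vs CRC-32/BZIP distinction mentioned above, so treat this as a starting point to experiment with rather than a verified implementation.
static uint PacketCrc32(byte[] data)
{
    const uint poly = 0x04C11DB7;              // GM(x) without the x^32 term
    uint crc = 0;                              // no pre-inversion in the quoted formula
    foreach (byte b in data)
    {
        crc ^= (uint)b << 24;                  // MSB of each octet maps to the highest power of x
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x80000000u) != 0
                ? (crc << 1) ^ poly
                : crc << 1;
    }
    return crc ^ 0xFFFFFFFFu;                  // + IM(x): invert all 32 bits
}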

Calculate checksum for Laboratory Information System (LIS) frames

I'm developing an instrument driver for a Laboratory Information System. I want to know how to calculate the checksum of a frame.
Explanation of the checksum algorithm:
1) Expressed by characters [0-9] and [A-F].
2) Characters beginning from the character after [STX] and until [ETB] or [ETX] (including [ETB] or [ETX]) are added in binary.
3) The 2-digit numbers, which represent the least significant 8 bits in hexadecimal code, are converted to ASCII characters [0-9] and [A-F].
4) The most significant digit is stored in CHK1 and the least significant digit in CHK2.
I am not getting the 3rd and 4th points above.
This is a sample frame:
<STX>2Q|1|2^1||||20011001153000<CR><ETX><CHK1><CHK2><CR><LF>
What is the value of CHK1 and CHK2? How do I implement the given algorithm in C#?
Finally I got the answer; here is the code for calculating the checksum:
private string CalculateChecksum(string dataToCalculate)
{
    byte[] byteToCalculate = Encoding.ASCII.GetBytes(dataToCalculate);
    int checksum = 0;
    foreach (byte chData in byteToCalculate)
    {
        checksum += chData;
    }
    checksum &= 0xff;
    return checksum.ToString("X2");
}
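As a quick sanity check (my own arithmetic, so please verify against your instrument's documentation): summing the characters from just after [STX] up to and including [ETX] of the sample frame gives 1819, and 1819 & 0xFF = 0x1B, so CHK1 = '1' and CHK2 = 'B'.
// The sample frame body: everything after <STX> up to and including <CR><ETX>.
string frameBody = "2Q|1|2^1||||20011001153000" + "\r" + "\u0003";
string chk = CalculateChecksum(frameBody);   // expected: "1B"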
You can do this in one line:
return Encoding.ASCII.GetBytes(dataToCalculate).Aggregate((r, n) => r += n).ToString("X2");
private bool CheckChecksum(string data, int checksumPlace)
{
    byte[] byteToCalculate = Encoding.ASCII.GetBytes(data);
    int checksum = 0;

    // Sum only the bytes the checksum covers (everything before the checksum itself).
    for (int i = 0; i < checksumPlace; i++)
    {
        checksum += byteToCalculate[i];
    }
    checksum &= 0xff;

    // The frame carries the checksum as two ASCII hex characters (CHK1, CHK2).
    string expected = data.Substring(checksumPlace, 2);
    return checksum.ToString("X2") == expected;
}

Calculating the number of bits in a Subnet Mask in C#

I have a task to complete in C#. I have a Subnet Mask: 255.255.128.0.
I need to find the number of bits in the Subnet Mask, which would be, in this case, 17.
However, I need to be able to do this in C# WITHOUT the use of the System.Net library (the system I am programming in does not have access to this library).
It seems like the process should be something like:
1) Split the Subnet Mask into Octets.
2) Convert the Octets to be binary.
3) Count the number of Ones in each Octet.
4) Output the total number of found Ones.
However, my C# is pretty poor. Does anyone have the C# knowledge to help?
Bit counting algorithm taken from:
http://www.necessaryandsufficient.net/2009/04/optimising-bit-counting-using-iterative-data-driven-development/
string mask = "255.255.128.0";
int totalBits = 0;

foreach (string octet in mask.Split('.'))
{
    byte octetByte = byte.Parse(octet);
    while (octetByte != 0)
    {
        totalBits += octetByte & 1; // logical AND on the LSB
        octetByte >>= 1;            // bitwise shift to the right to create a new LSB
    }
}

Console.WriteLine(totalBits);
The most simple algorithm from the article was used. If performance is critical, you might want to read the article and use a more optimized solution from it.
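For illustration, one of the better-known optimizations from that family of articles is Kernighan's method, which clears the lowest set bit on every pass so the loop runs once per 1-bit instead of once per bit (a sketch, not taken from the linked post):
static int CountBits(byte value)
{
    int count = 0;
    while (value != 0)
    {
        value &= (byte)(value - 1);   // clear the lowest set bit
        count++;
    }
    return count;
}
Summing CountBits(byte.Parse(octet)) over the four octets then gives the same total as above.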
string ip = "255.255.128.0";
string a = "";
ip.Split('.').ToList().ForEach(x => a += Convert.ToString(int.Parse(x), 2)); // append each octet's binary digits
int ones_found = a.Replace("0", "").Length;
A complete sample:
public int CountBit(string mask)
{
    int ones = 0;
    Array.ForEach(mask.Split('.'), (s) => Array.ForEach(Convert.ToString(int.Parse(s), 2).Where(c => c == '1').ToArray(), (k) => ones++));
    return ones;
}
You can convert a number to binary like this:
string ip = "255.255.128.0";
string[] tokens = ip.Split('.');
string result = "";
foreach (string token in tokens)
{
    int tokenNum = int.Parse(token);
    string octet = Convert.ToString(tokenNum, 2);
    while (octet.Length < 8)
        octet = '0' + octet;   // pad on the left so each octet is 8 bits
    result += octet;
}
int mask = result.LastIndexOf('1') + 1;
The solution is to use a binary operation like
foreach (string octet in ipAddress.Split('.'))
{
    int oct = int.Parse(octet);
    while (oct != 0)
    {
        total += oct & 1; // {1}
        oct >>= 1;        // {2}
    }
}
The trick is that on line {1} the binary AND is in essence a bitwise multiplication, so 1x0=0 and 1x1=1. So if we have some hypothetical number 0000101001 and AND it with 1 (which in binary is nothing else than 0000000001), we get
0000101001
0000000001
The rightmost digit is 1 in both numbers, so the binary AND returns 1; if instead the least significant digit of either number were 0, the result would be 0.
So here, on the line total += oct & 1, we add either 1 or 0 to total, based on that digit.
On line {2} we just shift the bits to the right, effectively dividing the number by 2, until it becomes 0.
Easy.
EDIT
This is valid for integer and byte types, but do not use this technique on floating-point numbers. By the way, it's a pretty valuable solution for this question.

Unpacking EBCDIC Packed Decimals (COMP-3) in an ASCII Conversion

I am using Jon Skeet's EBCDIC implementation in .NET to read a VSAM file downloaded in binary mode with FTP from a mainframe system. It works very well for reading/writing in this encoding, but it does not have anything to read packed-decimal values. My file contains these, and I need to unpack them (at the cost of more bytes, obviously).
How can I do this?
My fields are defined as PIC S9(7)V99 COMP-3.
Ahh, BCD. Honk if you used it in 6502 assembly.
Of course, the best bet is to let the COBOL MOVE do the job for you! One of these possibilities may help.
(Possibility #1) Assuming you do have access to the mainframe and the source code, and the output file is ONLY for your use, modify the program so it just MOVEs the value to a plain unpacked PIC S9(7)V99.
(Possibility #2) Assuming it's not that easy (e.g., the file is input for other programs, or you can't change the code), you can write another COBOL program on the system that reads that file and writes another. Cut and paste the file record layout with the BCD into the new program for the input and output files. Modify the output version to be non-packed. Read a record, do a 'move corresponding' to transfer the data, and write, until EOF. Then transfer that file.
(Possibility #3) If you can't touch the mainframe, note the description in the article you linked in your comment. BCD is relatively simple. It could be as easy as this (vb.net):
Private Function FromBCD(ByVal BCD As String, ByVal intsz As Integer, ByVal decsz As Integer) As Decimal
    Dim PicLen As Integer = intsz + decsz
    Dim result As Decimal = 0
    Dim val As Integer = Asc(Mid(BCD, 1, 1))

    Do While PicLen > 0
        result *= 10D
        result += val \ 16
        PicLen -= 1
        If PicLen > 0 Then
            result *= 10D
            result += val Mod 16
            PicLen -= 1
            BCD = Mid(BCD, 2)
        End If
        val = Asc(Mid(BCD, 1, 1))
    Loop

    If val Mod 16 = &HD& Then
        result = -result
    End If

    Return result / CDec(10 ^ decsz)
End Function
I tested it with a few variations of this call:
MsgBox(FromBCD("#" & Chr(13 + 16), 2, 1))
E.g., is -40.1. But just a few. So it might still be wrong.
So then if your comp-3 starts, say, at byte 10 of the input record layout, this would solve it:
Dim valu As Decimal = FromBCD(Mid(InputLine, 10, 5), 7, 2)
Noting the formulas from the data-conversion article for the # of bytes to send in, and the # of 9's before and after the V.
Store the result in a Decimal to avoid rounding errors, especially if it's $$$. Float and Double WILL cause you grief! If you're not processing it, even a string is better.
Of course, it could be harder. Where I work, the mainframe is 9 bits per byte. Seriously. That's what makes the first two possibilities so salient. Of course, what really makes them better is the fact that you may be a PC-only programmer, and this is a great excuse to get a mainframe programmer to do the work for you! If you are so lucky as to have that option...
Peace,
-Al
I use this extension method for packed decimal (BCD) conversion:
/// <summary>
/// Computes the actual decimal value from an IBM "packed decimal" 9(x)v99 (COBOL COMP-3) format.
/// </summary>
/// <param name="value">byte[]</param>
/// <param name="precision">byte; decimal places, default 2</param>
/// <returns>decimal</returns>
public static decimal FromPackedDecimal(this byte[] value, byte precision = 2)
{
    if (value.Length < 1)
    {
        throw new System.InvalidOperationException("Cannot unpack empty bytes.");
    }

    double power = System.Math.Pow(10, precision);
    if (power > long.MaxValue)
    {
        throw new System.InvalidOperationException(
            $"Precision too large for valid calculation: {precision}");
    }

    // Split the bytes into nibbles: one decimal digit per nibble,
    // with the final nibble holding the sign.
    string hex = System.BitConverter.ToString(value).Replace("-", "");
    var nibbles = Enumerable.Range(0, hex.Length)
        .Select(x => System.Convert.ToByte($"0{hex.Substring(x, 1)}", 16))
        .ToList();

    long place = 1;
    decimal ret = 0;
    for (int i = nibbles.Count - 2; i > -1; i--)
    {
        ret += (nibbles[i] * place);
        place *= 10;
    }
    ret /= (long)power;

    // The last nibble is the sign: 0xD (and 0xB) means negative, 0xC/0xF positive.
    byte signNibble = nibbles.Last();
    return (signNibble == 0x0D || signNibble == 0x0B) ? -ret : ret;
}
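As a usage example (the byte values here are my own hand-packed test data, not from the original answer): a PIC S9(7)V99 COMP-3 field occupies five bytes, so 1234567.89 packs as 0x12 0x34 0x56 0x78 0x9C, and unpacking it should round-trip:
byte[] packed = { 0x12, 0x34, 0x56, 0x78, 0x9C };   // digits 1234567 89, sign nibble C (positive)
decimal amount = packed.FromPackedDecimal();        // 1234567.89m with the default precision of 2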
