Possible Duplicate:
How to add even parity bit on 7-bit binary number
This is my new code, which converts a 7-bit binary number to an 8-bit one with even parity. However, it does not work: when I type in 0101010, for example, it says the number with even parity is 147. Could you help me by showing what is wrong?
using System;
using System.Collections.Generic;
using System.Collections;
using System.Linq;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Please enter a 7-bit binary number:");
            int a = Convert.ToInt32(Console.ReadLine());
            byte[] numberAsByte = new byte[] { (byte)a };
            System.Collections.BitArray bits = new System.Collections.BitArray(numberAsByte);
            a = a << 1;
            int count = 0;
            for (int i = 0; i < 8; i++)
            {
                if (bits[i])
                {
                    count++;
                }
                if (count % 2 == 1)
                {
                    bits[7] = true;
                }
                bits.CopyTo(numberAsByte, 0);
                a = numberAsByte[0];
                Console.WriteLine("The number with an even parity bit is:");
                Console.Write(a);
                Console.ReadLine();
            }
        }
    }
}
Use int.TryParse() on what you got from Console.ReadLine(). Then check that the number is between 0 and 127 to ensure it uses only 7 bits. Then count the number of 1s in the binary representation of the number, and add 128 to the number to set the parity bit when needed, depending on whether you want odd or even parity.
Counting 1s is your real homework assignment.
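That recipe might look roughly like this (a sketch only; the variable names are made up, and it treats the input as a decimal value 0-127 rather than the binary string typed in the question):

// Sketch: read a decimal value, make sure it fits in 7 bits, then set bit 7 for even parity.
string input = Console.ReadLine();
int number;
if (int.TryParse(input, out number) && number >= 0 && number <= 127)
{
    int ones = 0;
    for (int i = 0; i < 7; i++)          // count the 1s in the 7 data bits
    {
        if ((number & (1 << i)) != 0)
        {
            ones++;
        }
    }
    if (ones % 2 == 1)                   // odd count of 1s: set bit 7 so the total becomes even
    {
        number += 128;
    }
    Console.WriteLine(number);
}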
By using the BitArray class you can write
int a = Convert.ToInt32(Console.ReadLine());
byte[] numberAsByte = new byte[] { (byte)a };
BitArray bits = new BitArray(numberAsByte);
This converts the single bits of your byte to a BitArray, which represents an array of Booleans that can be handled in an easy way. Note that the constructor of BitArray accepts an array of bytes. Since we have only one byte, we have to pass it a byte array of length 1 containing this single byte (numberAsByte).
Now let us count the bits that are set.
int count = 0;
for (int i = 0; i < 8; i++) {
    if (bits[i]) {
        count++;
    }
}
Note that we simply test a bit with bits[i], which yields a Boolean value. The test bits[i] == true, while perfectly legal and correct, yields the same result but is unnecessarily complicated. The if statement does not require a comparison; all it wants is a Boolean value.
This calculates the parity bit for even parity.
if (count % 2 == 1) {   // Odd number of bits
    bits[7] = true;     // Set the left-most bit as the parity bit for even parity.
}
The % operator is the modulo operator. It yields the remainder of an integer division. x % 2 yields 0 if x is even. If you want an odd parity bit instead, test count % 2 == 0.
BitArray has a CopyTo method which converts our bits back to an array of bytes (containing only one byte in our case).
bits.CopyTo(numberAsByte, 0);
a = numberAsByte[0];
numberAsByte[0] contains our number with a parity bit.
If you want the parity bit on the right side, then you will have to shift the number to the left by one bit first.
int a = Convert.ToInt32(Console.ReadLine());
a = a << 1;
// Do the parity bit calculation as above and, if necessary
// set the right most bit as parity bit.
bits[0] = true;
According to Wikipedia there are two variations of parity bits, so I implemented a parameter to select the one you need. It supports user input up to 63 bits; I'm leaving the implementation of the validation code to you.
ulong GetNumberParity(string input, bool isEvenParity)
{
    ulong tmp = Convert.ToUInt64(input, 2);
    ulong c = 0;
    for (int i = 0; i < 64; i++) c += tmp >> i & 1;

    if (isEvenParity)
        return Convert.ToUInt64((c % 2 != 0 ? "1" : "0") + input, 2);
    else
        return Convert.ToUInt64((c % 2 == 0 ? "1" : "0") + input, 2);
}
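A quick usage sketch (hypothetical; it assumes the method is made static and reachable from Main, and that the input has already been validated):

// Hypothetical call site; prints the value with the parity bit prepended, in binary.
string input = Console.ReadLine();                          // e.g. "0101010" (three 1s)
ulong withParity = GetNumberParity(input, true);            // true = even parity
Console.WriteLine(Convert.ToString((long)withParity, 2));   // prints 10101010 for that example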
int n = Convert.ToInt32(Console.ReadLine());
int factorial = 1;
for (int i = 1; i <= n; i++)
{
    factorial *= i;
}
Console.WriteLine(factorial);
This code runs in a Console Application, but when the number is 34 or larger the application returns 0.
Why is 0 returned and what can be done to compute factorial of large numbers?
You're going out of the range of what the variable can store. A factorial grows even faster than an exponential. Try using ulong (max value 2^64 - 1 = 18,446,744,073,709,551,615) instead of int (max value 2^31 - 1 = 2,147,483,647), i.e. ulong factorial = 1; that should get you a bit further.
If you need to go even further, .NET 4 and up has BigInteger, which can store arbitrarily large numbers.
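For reference, the ulong variant of the question's loop might look like the following sketch (only the types change); note it is still only exact up to 20!, after which ulong overflows too:

int n = Convert.ToInt32(Console.ReadLine());
ulong factorial = 1;                  // 64-bit unsigned instead of 32-bit int
for (int i = 1; i <= n; i++)
{
    factorial *= (ulong)i;
}
Console.WriteLine(factorial);         // exact only while n <= 20; 21! already overflows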
You are getting 0 because of the way integer overflow is handled in most programming languages. You can easily see what happens if you output the result of each multiplication in the loop (in hex representation):
int n = Convert.ToInt32(Console.ReadLine());
int factorial = 1;
for (int i = 1; i <= n; i++)
{
    factorial *= i;
    Console.WriteLine("{0:x}", factorial);
}
Console.WriteLine(factorial);
For n = 34 the output looks like:
1
2
6
18
78
2d0
13b0
...
2c000000
80000000
80000000
0
Basically, every factor of 2 shifts the accumulated number one bit to the left, and once you have multiplied in enough twos, all the significant digits fall off the end of a 32-bit integer. For example, the first 6 factors contribute 4 twos (1, 2, 3, 2*2, 5, 2*3), so their product is 0x2d0, which already has 4 zero bits at the end.
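If you want to check the "enough twos" claim, you can count the factors of 2 in n! directly. This helper is my own illustration, not part of the answer; for n = 34 it returns 32, which matches the 32-bit width of int and explains why the product collapses to zero exactly there:

// Legendre's formula: the exponent of 2 in n! is n/2 + n/4 + n/8 + ...
static int FactorsOfTwoInFactorial(int n)
{
    int count = 0;
    for (long p = 2; p <= n; p *= 2)
    {
        count += (int)(n / p);
    }
    return count;
}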
If you are using .NET 4.0 and want to calculate the factorial of 1000, try using BigInteger instead of Int32, Int64, or even UInt64. A problem statement of "doesn't work" is not quite sufficient for me to give a good suggestion.
Your code will look something like:
using System;
using System.Numerics;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main()
        {
            int factorial = Convert.ToInt32(Console.ReadLine());
            var result = CalculateFactorial(factorial);
            Console.WriteLine(result);
            Console.ReadLine();
        }

        private static BigInteger CalculateFactorial(int value)
        {
            BigInteger result = new BigInteger(1);
            for (int i = 1; i <= value; i++)
            {
                result *= i;
            }
            return result;
        }
    }
}
I want the program to iterate through every possible binary number from 00000000 to 11111111 and count the number of runs of consecutive ones in each.
Ex) 00000001 and 11100000 each count as a single run of ones
00001010 and 11101110 both count as two runs of ones
The problem is that it ignores the AND mask part and I don't know why.
class Program
{
    static void Main(string[] args)
    {
        // Start
        int stuff = BitRunner8();
        // Display
        Console.Write(stuff);
        Console.ReadKey();
    }

    public static int BitRunner8()
    {
        int uniRunOrNot = 0;
        int uniRunCount = 0;
        int uniRunTotal = 0;

        // Iterating from numbers 0 to 255
        for (int x = 0; x < 255; x++)
        {
            // I use 128 as my AND mask because 128 is 10000000 in binary
            for (int uniMask = 128; uniMask != 0; uniMask >>= 1)
            {
                // This is the if statement that never returns true
                if ((x & uniMask) != 0)
                {
                    // If the AND mask is true, and there were no previous ones before it, add to uniRunCount
                    if (uniRunOrNot == 0)
                    {
                        // Total count of the runs
                        uniRunCount++;
                    }
                    // Making it so that if two consecutive ones are in a row, the 'if' statement right above
                    // would return false, so that it wouldn't add to uniRunCount
                    uniRunOrNot++;
                }
                else
                {
                    // Add the total number of runs to uniRunTotal, and then reset both uniRunOrNot and uniRunCount
                    uniRunTotal += uniRunCount;
                    uniRunOrNot = uniRunCount = 0;
                }
            }
        }
        // Divide the final amount by 256 total numbers
        uniRunTotal /= 256;
        return uniRunCount;
    }
}
The problem is that your code ignores runs that include the least significant bit. Your code updates uniRunTotal only when it discovers a zero bit. When the least significant bit is non-zero, uniRunCount is never added to the total.
To fix this, add uniRunCount to uniRunTotal after the inner loop (see the sketch below).
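A minimal sketch of that fix, placed inside the outer loop right after the inner for (this snippet is illustrative, not from the original answer):

// Flush a run that reaches the least significant bit, then reset the per-number state.
uniRunTotal += uniRunCount;
uniRunOrNot = uniRunCount = 0;

You presumably also want the method to return uniRunTotal rather than uniRunCount at the end.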
You can also fix this issue by applying a sentinel strategy: count bits from the other end, and use nine bits instead of eight, because bit number nine is always zero:
for (int uniMask = 1; uniMask <= 256; uniMask <<= 1)
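With the sentinel, the always-zero ninth iteration closes any run that reaches the least significant bit, so no extra flush is needed after the loop. A rough sketch of the inner loop under that scheme (again illustrative only; the method should then return uniRunTotal):

// Scan from bit 0 upward; the ninth mask value (256) is always a zero bit for x in 0..255.
for (int uniMask = 1; uniMask <= 256; uniMask <<= 1)
{
    if ((x & uniMask) != 0)
    {
        if (uniRunOrNot == 0)
        {
            uniRunCount++;              // a new run of ones starts here
        }
        uniRunOrNot++;
    }
    else
    {
        uniRunTotal += uniRunCount;     // a zero bit (or the sentinel) closes any open run
        uniRunOrNot = uniRunCount = 0;
    }
}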
I have a program which reads bytes from the network. Sometimes, those bytes are string representations of integer in decimal or hexadecimal form.
Normally, I parse this with something like
var s=Encoding.ASCII.GetString(p.GetBuffer(),0,(int)p.Length);
int.TryParse(s, out number);
I feel that this is wasteful, as it has to allocate memory for the string without any need for it.
Is there a better way to do this in C#?
UPDATE
I've seen several suggestions to use the BitConverter class. That is not what I need. BitConverter transforms the binary representation of an int (4 bytes) into an int, but since my int is in ASCII form, it doesn't apply here.
I doubt it will have a substantial impact on performance or memory consumption, but you can do this relatively easily. One implementation for converting decimal numbers is shown below:
private static int IntFromDecimalAscii(byte[] bytes)
{
    int result = 0;

    // For each digit, add the digit's value times 10^n, where n is the
    // column number counting from right to left starting at 0.
    for (int i = 0; i < bytes.Length; ++i)
    {
        // ASCII digits are in the range 48 <= n <= 57. This code only
        // makes sense if we are dealing exclusively with digits, so
        // throw if we encounter a non-digit character.
        if (bytes[i] < 48 || bytes[i] > 57)
        {
            throw new ArgumentException("Non-digit character present", "bytes");
        }

        // The bytes are in order from most to least significant, so
        // we need to reverse the index to get the right column number.
        int exp = bytes.Length - i - 1;

        // Digits in ASCII start with 0 at 48 and move sequentially
        // to 9 at 57, so we can simply subtract 48 from a valid digit
        // to get its numeric value.
        int digitValue = bytes[i] - 48;

        // Finally, add the digit value times the column value to the
        // result accumulator.
        result += digitValue * (int)Math.Pow(10, exp);
    }
    return result;
}
This can easily be adapted to convert hex values as well:
private static int IntFromHexAscii(byte[] bytes)
{
    int result = 0;
    for (int i = 0; i < bytes.Length; ++i)
    {
        // ASCII hex digits are a bit more complex than decimal:
        // '0'-'9' are 48-57 and 'A'-'F' are 65-70, so reject anything outside those ranges.
        if (bytes[i] < 48 || bytes[i] > 70 || (bytes[i] > 57 && bytes[i] < 65))
        {
            throw new ArgumentException("Non-digit character present", "bytes");
        }

        int exp = bytes.Length - i - 1;

        // Assume decimal first, then fix it if it's actually A-F.
        int digitValue = bytes[i] - 48;

        // This is safe because we already excluded all non-digit characters above.
        if (bytes[i] > 57) // A-F
        {
            digitValue = bytes[i] - 55;
        }

        // For hex, we use 16^n instead of 10^n.
        result += digitValue * (int)Math.Pow(16, exp);
    }
    return result;
}
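A quick usage sketch (the buffer contents here are made up for illustration):

// "1234" and "FF" as raw ASCII bytes.
byte[] decimalBytes = { 0x31, 0x32, 0x33, 0x34 };   // '1' '2' '3' '4'
byte[] hexBytes = { 0x46, 0x46 };                   // 'F' 'F'
int a = IntFromDecimalAscii(decimalBytes);          // 1234
int b = IntFromHexAscii(hexBytes);                  // 255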
Well, you could be a little less wasteful (at least in the number of source code characters sense) by avoiding the s declaration like:
int.TryParse(Encoding.ASCII.GetString(p.GetBuffer(),0,(int)p.Length), out number);
But, I think the only other real way to get a speed-up would be to do as the commenter suggests and hard code a mapping into a Dictionary or something. This could save some time if you have to do this a lot, but it may not be worth the effort...
I am having a problem with this method I wrote to convert a UInt64 to a binary array. For some numbers I am getting an incorrect binary representation.
Results
Correct
999 = 1111100111
Correct
18446744073709551615 = 1111111111111111111111111111111111111111111111111111111111111111
Incorrect?
18446744073709551614 =
0111111111111111111111111111111111111111111111111111111111111110
According to an online converter the binary value of 18446744073709551614 should be
1111111111111111111111111111111111111111111111111111111111111110
public static int[] GetBinaryArray(UInt64 n)
{
    if (n == 0)
    {
        return new int[2] { 0, 0 };
    }

    var val = (int)(Math.Log(n) / Math.Log(2));
    if (val == 0)
        val++;

    var arr = new int[val + 1];
    for (int i = val, j = 0; i >= 0 && j <= val; i--, j++)
    {
        if ((n & ((UInt64)1 << i)) != 0)
            arr[j] = 1;
        else
            arr[j] = 0;
    }
    return arr;
}
FYI: This is not a homework assignment. I need to convert an integer to a binary array for encryption purposes, hence the need for an array of bits. Many solutions I have found on this site convert an integer to a string representation of a binary number, which was useless, so I came up with this mashup of various other methods.
An explanation as to why the method works for some numbers and not others would be helpful. Yes, I used Math.Log and it is slow, but performance can be fixed later.
EDIT: And yes, I do need the line where I use Math.Log, because my array will not always be 64 bits long; for example, if my number was 4, then in binary it is 100, which is array length 3. It is a requirement of my application to do it this way.
It's not the array returned for the input UInt64.MaxValue - 1 that is wrong; it's the one returned for UInt64.MaxValue that seems to be wrong.
That array is 65 elements long, which is intuitively wrong because UInt64.MaxValue must fit in 64 bits.
Firstly, instead of taking a natural log and dividing it by the natural log of 2, you can take a log to base 2 directly with Math.Log(n, 2).
Secondly, you also need to apply Math.Ceiling to the returned value, because the number has to fit fully inside that many bits. Discarding the fractional part with a cast to int forces you to arbitrarily add one (val + 1) when declaring the result array, and that is only correct for certain inputs; one input it is not correct for is UInt64.MaxValue, where adding one to the number of bits gives a 65-element array.
Thirdly, and finally, you cannot usefully left-shift by 64 bits (C# masks the shift count of a 64-bit operand to the range 0-63), hence i = val - 1 in the for loop initialization.
Haven't tested this exhaustively...
public static int[] GetBinaryArray(UInt64 n)
{
    if (n == 0)
    {
        return new int[2] { 0, 0 };
    }

    var val = (int)Math.Ceiling(Math.Log(n, 2));
    if (val == 0)
        val++;

    var arr = new int[val];
    for (int i = val - 1, j = 0; i >= 0 && j <= val; i--, j++)
    {
        if ((n & ((UInt64)1 << i)) != 0)
            arr[j] = 1;
        else
            arr[j] = 0;
    }
    return arr;
}
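If the floating-point edge cases are a concern (exact powers of two, or values within a few units of UInt64.MaxValue, where Math.Log can round), the bit length can be computed with plain integer shifts instead. This is a separate sketch of mine, not part of the answer above:

public static int[] GetBinaryArrayNoLog(UInt64 n)
{
    if (n == 0)
    {
        return new int[2] { 0, 0 };       // mirror the original method's behaviour for zero
    }

    // Determine the bit length with integer shifts; no floating point involved.
    int bits = 0;
    for (UInt64 tmp = n; tmp != 0; tmp >>= 1)
    {
        bits++;
    }

    var arr = new int[bits];
    for (int j = 0; j < bits; j++)
    {
        // Most significant bit first, same ordering as the original method.
        arr[j] = (int)((n >> (bits - 1 - j)) & 1UL);
    }
    return arr;
}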