I am writing a checksum for a manifest file for a courier-based system written in C# on .NET.
I need to have an 8-digit field representing the checksum, which is calculated as follows:
Record Check Sum Algorithm
Form the 32-bit arithmetic sum of the products of
• the 7 low order bits of each ASCII character in the record
• the position of each character in the record numbered from 1 for the first character.
for the length of the record up to but excluding the check sum field itself :
Sum = Σi ASCII( ith character in the record ) × i
where i runs over the length of the record excluding the check sum field.
After performing this calculation, convert the resultant sum to binary and split the 32 low order
bits of the Sum into eight blocks of 4 bits (octets). Note that each of the octets has a decimal
number value ranging from 0 to 15.
Add an offset of ASCII 0 ( zero ) to each octet to form an ASCII code number.
Convert the ASCII code number to its equivalent ASCII character thus forming printable
characters in the range 0123456789:;<=>?.
Concatenate each of these characters to form a single string of eight (8) characters in overall
length.
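(For example, a 4-bit block with value 10 becomes 0x30 + 10 = 0x3A, the character ':', and a block with value 15 becomes '?'.)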
I am not the greatest at mathematics so I am struggling to write the code correctly as per the documentation.
I have written the following so far:
byte[] sumOfAscii = null;
for(int i = 1; i< recordCheckSum.Length; i++)
{
string indexChar = recordCheckSum.ElementAt(i).ToString();
byte[] asciiChar = Encoding.ASCII.GetBytes(indexChar);
for(int x = 0; x<asciiChar[6]; x++)
{
sumOfAscii += asciiChar[x];
}
}
//Turn into octets
byte firstOctet = 0;
for(int i = 0;i< sumOfAscii[6]; i++)
{
firstOctet += recordCheckSum;
}
Where recordCheckSum is a string made up of deliveryAddresses, product names etc and excludes the 8-digit checksum.
Any help with calculating this would be greatly appreciated as I am struggling.
There are notes in line as I go along. Some more notes on the calculation at the end.
uint sum = 0;
uint zeroOffset = 0x30; // ASCII '0'
byte[] inputData = Encoding.ASCII.GetBytes(recordCheckSum);
for (int i = 0; i < inputData.Length; i++)
{
    int product = inputData[i] & 0x7F; // Take the low 7 bits from the record.
    product *= i + 1;                  // Multiply by the 1-based position.
    sum += (uint)product;              // Add the product to the running sum.
}

byte[] result = new byte[8];
for (int i = 0; i < 8; i++) // If the checksum is reversed, make this:
                            // for (int i = 7; i >= 0; i--)
{
    uint current = (uint)(sum & 0x0f); // Take the lowest 4 bits.
    current += zeroOffset;             // Add '0'.
    result[i] = (byte)current;
    sum = sum >> 4;                    // Right shift the bottom 4 bits off.
}
string checksum = Encoding.ASCII.GetString(result);
One note, I use the & and >> operators, which you may or may not be familiar with. The & operator is the bitwise and operator. The >> operator is logical shift right.
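Wrapped up, here is a minimal sketch of the whole thing as a helper method (the method and parameter names are my own, it needs using System.Text, and it emits the least significant nibble first, so reverse the result if your spec reads the other way):
public static string ComputeRecordChecksum(string record)
{
    uint sum = 0;
    byte[] inputData = Encoding.ASCII.GetBytes(record);
    for (int i = 0; i < inputData.Length; i++)
    {
        sum += (uint)((inputData[i] & 0x7F) * (i + 1)); // low 7 bits times the 1-based position
    }

    byte[] result = new byte[8];
    for (int i = 0; i < 8; i++)
    {
        result[i] = (byte)('0' + (sum & 0x0F)); // low nibble plus the ASCII '0' offset
        sum >>= 4;
    }
    return Encoding.ASCII.GetString(result);
}

// Usage (assuming recordCheckSum holds the record text up to, but excluding, the checksum field):
string checksum = ComputeRecordChecksum(recordCheckSum);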
My end goal is to take a number like 29, pull it apart, and then add the two digits that result. So if the number is 29, for example, the answer would be 2 + 9 = 11.
When I'm debugging, I can see that those values are being held, but it appears that other values are being added instead, in this case 50 and 57, so my answer is 107. I have no idea where these values are coming from and I don't know where to begin to fix it.
My code is:
class Program
{
    static void Main(string[] args)
    {
        int a = 29;
        int answer = addTwoDigits(a);
        Console.ReadLine();
    }

    public static int addTwoDigits(int n)
    {
        string number = n.ToString();
        char[] a = number.ToCharArray();
        int total = 0;
        for (int i = 0; i < a.Length; i++)
        {
            total = total + a[i];
        }
        return total;
    }
}
As mentioned, the issue with your code is that a char has an ASCII code value when you cast it to int, which doesn't match the numeric digit it represents. Instead of messing with strings and characters, just use good old math instead.
public static int AddDigits(int n)
{
    int total = 0;
    while (n > 0)
    {
        total += n % 10;
        n /= 10;
    }
    return total;
}
Modulo by 10 yields the least significant digit, and because integer division truncates, n /= 10 drops the least significant digit and eventually becomes 0 when you run out of digits.
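For example, AddDigits(29) computes 29 % 10 = 9, then 29 / 10 = 2, then 2 % 10 = 2, giving 9 + 2 = 11.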
Your code is actually adding the decimal ASCII values of the chars.
Take a look at https://www.cs.cmu.edu/~pattis/15-1XX/common/handouts/ascii.html
The decimal ASCII values of '2' and '9' are 50 and 57 respectively. You need to convert each char to its numeric value before doing your addition.
int val = (int)Char.GetNumericValue(a[i]);
Try this:
public static int addTwoDigits(int n)
{
    string number = n.ToString();
    char[] a = number.ToCharArray();
    int total = 0;
    for (int i = 0; i < a.Length; i++)
    {
        total = total + (int)Char.GetNumericValue(a[i]);
    }
    return total;
}
A number converted to a char always gives the ASCII code, so you can use the GetNumericValue() method to get the digit's value instead of its ASCII code.
Just for fun, I thought I'd see if I could do it in one line using LINQ and here it is:
public static int AddWithLinq(int n)
{
    return n.ToString().Aggregate(0, (total, c) => total + int.Parse(c.ToString()));
}
I don't think it would be particularly "clean" code, but it may be educational at best!
You should use int.TryParse:
int num;
if (int.TryParse(a[i].ToString(), out num))
{
    total += num;
}
Your problem is that you're adding char values. Remember that a char is an integer value that represents a character in ASCII. When you add a[i] to total, you're adding the int value that represents that char; the compiler casts it automatically.
The problem is in this code line:
total = total + a[i];
The code above is equal to this code line:
total += (int)a[i];
// If a[i] = '2', the character value of the ASCII table is 50.
// Then, (int)a[i] = 50.
To solve your problem, you must change that line to this:
total = total + (int)Char.GetNumericValue(a[i]);
// If a[i] = '2',
// then (int)Char.GetNumericValue(a[i]) = 2.
You can see this answer to see how to convert a numeric value
from char to int.
At this page you can see the ASCII table of values.
public static int addTwoDigits(int n)
{
    string number = n.ToString();
    char[] a = number.ToCharArray();
    int total = 0;
    for (int i = 0; i < a.Length; i++)
    {
        total += Convert.ToInt32(number[i].ToString());
    }
    return total;
}
You don't need to convert the number to a string to find the digits. #juharr already explained how you can calculate the digits and the total in a loop. The following is a recursive version :
int addDigit(int total,int n)
{
return (n<10) ? total + n
: addDigit(total += n % 10,n /= 10);
}
Which can be called with addDigit(0, 234233433) and returns 27. If n is less than 10, we are counting the last digit. Otherwise extract the last digit, add it to the total, then divide by 10 and repeat.
One could get clever and use currying to get rid of the initial total :
int addDigits(int i)=>addDigit(0,i);
addDigits(234233433) also returns 27.
If the number is already a string, one could take advantage of the fact that a string can be treated as a Char array, and chars can be converted to ints implicitly :
var total = "234233433".Sum(c=>c-'0');
This can handle arbitrarily large strings, as long as the total doesn't exceed int.MaxValue, eg:
"99999999999999999999".Sum(x=>x-'0'); // 20 9s returns 180
Unless the number is already in string form though, this isn't efficient nor does it verify that the contents are an actual number.
Below is the checksum description.
The checksum is four ASCII character digits representing the binary sum of the characters including the
first character of the transmission and up to and including the checksum field identifier characters.
To calculate the checksum add each character as an unsigned binary number, take the lower 16 bits of the
total and perform a 2's complement. The checksum field is the result represented by four hex digits.
To verify the correct checksum on received data, simply add all the hex values including the checksum. It
should equal zero.
This is the implementation for an ASCII string, but my input string is now UTF-8.
Can anyone give me some ideas on how to revise the implementation for UTF-8 encoding? Thanks very much.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace SIP2
{
    // Adapted from VB.NET from the Library Tech Guy blog
    // http://librarytechguy.blogspot.com/2009/11/sip2-checksum_13.html
    public class CheckSum
    {
        public static string ApplyChecksum(string strMsg)
        {
            int intCtr;
            char[] chrArray;
            int intAscSum;
            bool blnCarryBit;
            string strBinVal = String.Empty;
            string strInvBinVal;
            string strNewBinVal = String.Empty;

            // Transfer the SIP message to a character array. Loop through each character of the array,
            // converting the character to an ASCII value and adding the value to a running total.
            intAscSum = 0;
            chrArray = strMsg.ToCharArray();
            for (intCtr = 0; intCtr <= chrArray.Length - 1; intCtr++)
            {
                intAscSum = intAscSum + (chrArray[intCtr]);
            }

            // Next, convert the ASCII sum to a binary string by:
            // 1) taking the remainder of the ASCII sum divided by 2
            // 2) repeating until the sum reaches 0
            // 3) padding to 16 digits with leading zeroes
            do
            {
                strBinVal = (intAscSum % 2).ToString() + strBinVal;
                intAscSum = intAscSum / 2;
            } while (intAscSum > 0);
            strBinVal = strBinVal.PadLeft(16, '0');

            // Next, invert all bits in the binary number.
            chrArray = strBinVal.ToCharArray();
            strInvBinVal = "";
            for (intCtr = 0; intCtr <= chrArray.Length - 1; intCtr++)
            {
                if (chrArray[intCtr] == '0') { strInvBinVal = strInvBinVal + '1'; }
                else { strInvBinVal = strInvBinVal + '0'; }
            }

            // Next, add 1 to the inverted binary number. Loop from the least significant digit (rightmost)
            // to the most significant (leftmost); if a digit is 1, flip it to 0 and carry to the next digit.
            blnCarryBit = true;
            chrArray = strInvBinVal.ToCharArray();
            for (intCtr = chrArray.Length - 1; intCtr >= 0; intCtr--)
            {
                if (blnCarryBit == true)
                {
                    if (chrArray[intCtr] == '0')
                    {
                        chrArray[intCtr] = '1';
                        blnCarryBit = false;
                    }
                    else
                    {
                        chrArray[intCtr] = '0';
                        blnCarryBit = true;
                    }
                }
                strNewBinVal = chrArray[intCtr] + strNewBinVal;
            }

            // Finally, convert the binary string to a hex value and append it to the original SIP message.
            return strMsg + (Convert.ToInt16(strNewBinVal, 2)).ToString("X");
        }
    }
}
Replace the code
for (intCtr = 0; intCtr <= chrArray.Length - 1; intCtr++)
{
intAscSum = intAscSum + (chrArray[intCtr]);
}
chrArray[intCtr] takes an input ASCII character and yields its ASCII code in decimal; for example, 'A' is 65. ASCII encoding uses only 1 byte per character, while UTF-8 uses one or more bytes per character. chrArray[intCtr] is designed for ASCII, so UTF-8 input (where a character may be more than one byte) is not handled correctly.
With
byte[] bytes = Encoding.UTF8.GetBytes(strMsg);
for (int i = 0; i < bytes.Length; i++)
{
    intAscSum = intAscSum + bytes[i];
}
Add up all the bytes, because one UTF8 char can be more than one byte.
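Putting that together, here is a minimal sketch of the same SIP2 checksum computed over UTF-8 bytes with plain arithmetic instead of binary strings (the method name is my own; it assumes the rules quoted above):
public static string ApplyChecksumUtf8(string msg)
{
    byte[] bytes = Encoding.UTF8.GetBytes(msg);
    int sum = 0;
    foreach (byte b in bytes)
    {
        sum += b;                              // add every byte, not every char
    }
    ushort checksum = (ushort)(-sum & 0xFFFF); // two's complement of the lower 16 bits
    return msg + checksum.ToString("X4");      // append as four hex digits
}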
I have a program which reads bytes from the network. Sometimes, those bytes are string representations of integer in decimal or hexadecimal form.
Normally, I parse this with something like
var s=Encoding.ASCII.GetString(p.GetBuffer(),0,(int)p.Length);
int.TryParse(s, out number);
I feel that this is wasteful, as it has to allocate memory to the string without any need for it.
Is there a better way I can do it in c#?
UPDATE
I've seen several suggestions to use BitConverter class. This is not what I need. BitConverter will transform binary representation of int (4 bytes) into int type, but since the int is in ascii form, this doesn't apply here.
I doubt it will have a substantial impact on performance or memory consumption, but you can do this relatively easily. One implementation for converting decimal numbers is shown below:
private static int IntFromDecimalAscii(byte[] bytes)
{
    int result = 0;

    // For each digit, add the digit's value times 10^n, where n is the
    // column number counting from right to left starting at 0.
    for (int i = 0; i < bytes.Length; ++i)
    {
        // ASCII digits are in the range 48 <= n <= 57. This code only
        // makes sense if we are dealing exclusively with digits, so
        // throw if we encounter a non-digit character.
        if (bytes[i] < 48 || bytes[i] > 57)
        {
            throw new ArgumentException("Non-digit character present", "bytes");
        }

        // The bytes are in order from most to least significant, so
        // we need to reverse the index to get the right column number.
        int exp = bytes.Length - i - 1;

        // Digits in ASCII start with 0 at 48 and move sequentially
        // to 9 at 57, so we can simply subtract 48 from a valid digit
        // to get its numeric value.
        int digitValue = bytes[i] - 48;

        // Finally, add the digit value times the column value to the
        // result accumulator.
        result += digitValue * (int)Math.Pow(10, exp);
    }
    return result;
}
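For example (a hypothetical call, assuming the helper above), IntFromDecimalAscii(Encoding.ASCII.GetBytes("123")) returns 123.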
This can easily be adapted to convert hex values as well:
private static int IntFromHexAscii(byte[] bytes)
{
    int result = 0;
    for (int i = 0; i < bytes.Length; ++i)
    {
        // ASCII hex digits are a bit more complex than decimal: the valid
        // characters are '0'-'9' (48-57) and 'A'-'F' (65-70); lowercase
        // a-f is not handled here.
        if (bytes[i] < 48 || bytes[i] > 70 || (bytes[i] > 57 && bytes[i] < 65))
        {
            throw new ArgumentException("Non-digit character present", "bytes");
        }

        int exp = bytes.Length - i - 1;

        // Assume decimal first, then fix it if it's actually hex.
        int digitValue = bytes[i] - 48;

        // This is safe because we already excluded all non-digit
        // characters above.
        if (bytes[i] > 57) // A-F
        {
            digitValue = bytes[i] - 55;
        }

        // For hex, we use 16^n instead of 10^n.
        result += digitValue * (int)Math.Pow(16, exp);
    }
    return result;
}
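Similarly, IntFromHexAscii(Encoding.ASCII.GetBytes("1A2F")) returns 6703 (again a hypothetical call; note it only accepts uppercase hex digits).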
Well, you could be a little less wasteful (at least in the number of source code characters sense) by avoiding the s declaration like:
int.TryParse(Encoding.ASCII.GetString(p.GetBuffer(),0,(int)p.Length), out number);
But, I think the only other real way to get a speed-up would be to do as the commenter suggests and hard code a mapping into a Dictionary or something. This could save some time if you have to do this a lot, but it may not be worth the effort...
This is my new code which converts a 7-bit binary number to an 8-bit one with even parity. However, it does not work. When I type in 0101010, for example, it says the number with even parity is 147. Could you help by showing me what is wrong, please?
using System;
using System.Collections.Generic;
using System.Collections;
using System.Linq;
using System.Text;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Please enter a 7-bit binary number:");
            int a = Convert.ToInt32(Console.ReadLine());
            byte[] numberAsByte = new byte[] { (byte)a };
            System.Collections.BitArray bits = new System.Collections.BitArray(numberAsByte);
            a = a << 1;
            int count = 0;
            for (int i = 0; i < 8; i++)
            {
                if (bits[i])
                {
                    count++;
                }
                if (count % 2 == 1)
                {
                    bits[7] = true;
                }
                bits.CopyTo(numberAsByte, 0);
                a = numberAsByte[0];
                Console.WriteLine("The number with an even parity bit is:");
                Console.Write(a);
                Console.ReadLine();
            }
        }
    }
}
Use int.TryParse() on what you got from Console.ReadLine(). You then need to check that the number is between 0 and 127 to ensure it uses only 7 bits. You then need to count the number of 1s in the binary representation of the number, and add 128 to the number to set the parity bit, depending on whether you specified odd or even parity.
Counting 1s is your real homework assignment.
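A rough sketch of just the input handling described above (the bit counting is intentionally left out):
if (!int.TryParse(Console.ReadLine(), out int n) || n < 0 || n > 127)
{
    Console.WriteLine("Please enter a value that fits in 7 bits (0-127).");
    return;
}
// count the 1 bits in n here, then add 128 if the parity bit should be set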
By using the BitArray class you can write
int a = Convert.ToInt32(Console.ReadLine());
byte[] numberAsByte = new byte[] { (byte)a };
BitArray bits = new BitArray(numberAsByte);
This converts the single bits of your byte to a BitArray, which represents an array of Booleans that can be handled in an easy way. Note that the constructor of BitArray accepts an array of bytes. Since we have only one byte, we have to pass it a byte array of length 1 containing this single byte (numberAsByte).
Now let us count the bits that are set.
int count = 0;
for (int i = 0; i < 8; i++) {
if (bits[i]) {
count++;
}
}
Note that we simply test a bit with bits[i], which yields a Boolean value. The test bits[i] == true, while perfectly legal and correct, yields the same result but is unnecessarily complicated. The if statement does not require a comparison; all it wants is a Boolean value.
This calculates an even parity bit.
if (count % 2 == 1) { // Odd number of bits
bits[7] = true; // Set the left most bit as parity bit for even parity.
}
The % operator is the modulo operator. It yields the rest of an integer division. x % 2 yields 0 if x is even. If you want an odd parity bit, you can test count % 2 == 0 instead.
BitArray has a CopyTo method which converts our bits back to an array of bytes (containing only one byte in our case).
bits.CopyTo(numberAsByte, 0);
a = numberAsByte[0];
numberAsByte[0] contains our number with a parity bit.
If you want the parity bit on the right side, then you will have to shift the number to the left by one bit first.
int a = Convert.ToInt32(Console.ReadLine());
a = a << 1;
// Do the parity bit calculation as above and, if necessary
// set the right most bit as parity bit.
bits[0] = true;
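Putting those fragments together, a minimal sketch (even parity, parity bit in the most significant position; needs using System.Collections):
int a = Convert.ToInt32(Console.ReadLine()); // assumed to be 0-127, i.e. a 7-bit value
byte[] numberAsByte = new byte[] { (byte)a };
BitArray bits = new BitArray(numberAsByte);

int count = 0;
for (int i = 0; i < 7; i++) // count the set bits of the 7-bit value
{
    if (bits[i]) count++;
}

if (count % 2 == 1) // odd number of 1s: set the parity bit to make the total even
{
    bits[7] = true;
}

bits.CopyTo(numberAsByte, 0);
Console.WriteLine("The number with an even parity bit is: " + numberAsByte[0]);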
According to Wikipedia there are two variations of parity bits, so I implemented a parameter to select the one you need. It supports user input of up to 63 bits; I'm leaving the implementation of the validation code to you.
ulong GetNumberParity(string input, bool isEvenParity)
{
    ulong tmp = Convert.ToUInt64(input, 2);
    ulong c = 0;
    for (int i = 0; i < 64; i++) c += tmp >> i & 1;
    if (isEvenParity)
        return Convert.ToUInt64((c % 2 != 0 ? "1" : "0") + input, 2);
    else
        return Convert.ToUInt64((c % 2 == 0 ? "1" : "0") + input, 2);
}
I am looking for a faster algorithm than the one below for the following problem. Given a sequence of 64-bit unsigned integers, return a count of the number of times each of the sixty-four bits is set in the sequence.
Example:
4608 = 0000000000000000000000000000000000000000000000000001001000000000
4097 = 0000000000000000000000000000000000000000000000000001000000000001
2048 = 0000000000000000000000000000000000000000000000000000100000000000
counts 0000000000000000000000000000000000000000000000000002101000000001
Example:
2560 = 0000000000000000000000000000000000000000000000000000101000000000
530 = 0000000000000000000000000000000000000000000000000000001000010010
512 = 0000000000000000000000000000000000000000000000000000001000000000
counts 0000000000000000000000000000000000000000000000000000103000010010
Currently I am using a rather obvious and naive approach:
static int bits = sizeof(ulong) * 8;

public static int[] CommonBits(params ulong[] values)
{
    int[] counts = new int[bits];
    int length = values.Length;

    for (int i = 0; i < length; i++)
    {
        ulong value = values[i];
        for (int j = 0; j < bits && value != 0; j++, value = value >> 1)
        {
            counts[j] += (int)(value & 1UL);
        }
    }
    return counts;
}
A small speed improvement might be achieved by first OR'ing the integers together, then using the result to determine which bits you need to check. You would still have to iterate over each bit, but only once over bits where there are no 1s, rather than values.Length times.
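A rough sketch of that idea (not benchmarked; it reuses the counts and values arrays from the question):
ulong present = 0;
foreach (ulong v in values)
{
    present |= v; // a bit is set here if it is set in any value
}

for (int j = 0; j < 64; j++)
{
    if ((present & (1UL << j)) == 0) continue; // no value has this bit set, skip the column
    foreach (ulong v in values)
    {
        counts[j] += (int)((v >> j) & 1UL);
    }
}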
I'll direct you to the classical: Bit Twiddling Hacks, but your goal seems slightly different than just typical counting (i.e. your 'counts' variable is in a really weird format), but maybe it'll be useful.
The best I can do here is just get silly with it and unroll the inner loop... it seems to have cut the running time in half (roughly 4 seconds, as opposed to the 8 in yours, to process 100 ulongs 100,000 times). I used a quick command-line app to generate the following code:
for (int i = 0; i < length; i++)
{
    ulong value = values[i];
    if (0ul != (value & 1ul)) counts[0]++;
    if (0ul != (value & 2ul)) counts[1]++;
    if (0ul != (value & 4ul)) counts[2]++;
    // etc...
    if (0ul != (value & 4611686018427387904ul)) counts[62]++;
    if (0ul != (value & 9223372036854775808ul)) counts[63]++;
}
That was the best I can do... As per my comment, you'll waste some amount (I know not how much) running this in a 32-bit environment. If you're that concerned over performance, it may benefit you to first convert the data to uint.
Tough problem... may even benefit you to marshal it into C++ but that entirely depends on your application. Sorry I couldn't be more help, maybe someone else will see something I missed.
Update, a few more profiler sessions showing a steady 36% improvement. shrug I tried.
OK, let me try again :D
Change each byte of the 64-bit integer into a 64-bit integer in which bit n of the byte is moved to bit position n*8. For instance:
10110101 -> 0000000100000000000000010000000100000000000000010000000000000001
(use the lookup table for that translation)
Then just sum everything together in the right way and you get an array of unsigned chars holding the counts.
You have to make 8*(number of 64-bit integers) summations.
Code in C:
// LOOKUPTABLE is external and is int64[256]; it spreads the 8 bits of a byte
// across the 8 bytes of an int64, as described above.
unsigned char* bitcounts(int64* int64array, int len)
{
    int64* array64;
    int64 tmp;

    array64 = (int64*)malloc(64);               // 64 bytes = 8 int64 accumulators
    for (int i = 0; i < 8; i++) array64[i] = 0; // set to 0

    for (int j = 0; j < len; j++)
    {
        tmp = int64array[j];
        for (int i = 7; tmp; i--)
        {
            array64[i] += LOOKUPTABLE[tmp & 0xFF];
            tmp = tmp >> 8;
        }
    }
    return (unsigned char*)array64;
}
This reduces the work compared to the naive implementation by a factor of 8, because it counts 8 bits at a time.
EDIT:
I fixed the code to break out faster on smaller integers, but I am still unsure about endianness.
This also works only for up to 256 inputs, because it uses unsigned char to store the counts. If you have a longer input sequence, you can change this code to hold up to 2^16 bit counts at the cost of halving the speed.
const unsigned int BYTESPERVALUE = 64 / 8;
unsigned int bcount[BYTESPERVALUE][256];
memset(bcount, 0, sizeof bcount);

for (int i = values.length; --i >= 0; )
    for (int j = BYTESPERVALUE; --j >= 0; ) {
        const unsigned int jth_byte = (values[i] >> (j * 8)) & 0xff;
        bcount[j][jth_byte]++; // count byte value (0..255) instances
    }

unsigned int count[64];
memset(count, 0, sizeof count);

for (int i = BYTESPERVALUE; --i >= 0; )
    for (int j = 256; --j >= 0; )      // check each byte value instance
        for (int k = 8; --k >= 0; )    // for each bit in a given byte
            if (j & (1 << k))          // if bit was set, then add its count
                count[i * 8 + k] += bcount[i][j];
Another approach that might be profitable, would be to build an array of 256 elements,
which encodes the actions that you need to take in incrementing the count array.
Here is a sample for a 4 element table, which does 2 bits instead of 8 bits.
int bitToSubscript[4][3] =
{
    {0},      // No bits set
    {1,0},    // Bit 0 set
    {1,1},    // Bit 1 set
    {2,0,1}   // Bit 0 and bit 1 set.
};
The algorithm then degenerates to:
pick the 2 right hand bits off of the number.
Use that as a small integer to index into the bitToSubscriptArray.
In that array, pull off the first integer. That is the number of elements in the count array that you need to increment.
Based on that count, Iterate through the remainder of the row, incrementing count, based on the subscript you pull out of the bitToSubscript array.
Once that loop is done, shift your original number two bits to the right.... Rinse Repeat as needed.
Now there is one issue I ignored in that description: the actual subscripts are relative. You need to keep track of where you are in the count array. Every time you loop, you add two to an offset. To that offset, you add the relative subscript from the bitToSubscript array.
It should be possible to scale up to the size you want, based on this small example. I would think that another program could be used, to generate the source code for the bitToSubscript array, so that it can be simply hard coded in your program.
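For instance, here is a rough C# sketch of the 2-bit version described above (the table is the one from the sample; nothing here is tuned or benchmarked):
static readonly int[][] bitToSubscript =
{
    new[] { 0 },        // 00: no bits set
    new[] { 1, 0 },     // 01: bit 0 set
    new[] { 1, 1 },     // 10: bit 1 set
    new[] { 2, 0, 1 }   // 11: bit 0 and bit 1 set
};

static void CountBits(ulong value, int[] counts)
{
    int offset = 0;                           // which 2-bit group we are on
    while (value != 0)
    {
        int[] row = bitToSubscript[(int)(value & 3)];
        for (int k = 1; k <= row[0]; k++)     // row[0] = number of counters to bump
        {
            counts[offset + row[k]]++;
        }
        value >>= 2;
        offset += 2;                          // advance to the next 2-bit group
    }
}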
There are other variations on this scheme, but I would expect it to run faster on average than anything that does it one bit at a time.
Good Hunting.
Evil.
I believe this should give a nice speed improvement:
const ulong mask = 0x1111111111111111;

public static int[] CommonBits(params ulong[] values)
{
    int[] counts = new int[64];
    ulong accum0 = 0, accum1 = 0, accum2 = 0, accum3 = 0;
    int i = 0;
    foreach (ulong v in values)
    {
        if (i == 15)
        {
            for (int j = 0; j < 64; j += 4)
            {
                counts[j] += ((int)accum0) & 15;
                counts[j + 1] += ((int)accum1) & 15;
                counts[j + 2] += ((int)accum2) & 15;
                counts[j + 3] += ((int)accum3) & 15;
                accum0 >>= 4;
                accum1 >>= 4;
                accum2 >>= 4;
                accum3 >>= 4;
            }
            i = 0;
        }
        accum0 += (v) & mask;
        accum1 += (v >> 1) & mask;
        accum2 += (v >> 2) & mask;
        accum3 += (v >> 3) & mask;
        i++;
    }

    for (int j = 0; j < 64; j += 4)
    {
        counts[j] += ((int)accum0) & 15;
        counts[j + 1] += ((int)accum1) & 15;
        counts[j + 2] += ((int)accum2) & 15;
        counts[j + 3] += ((int)accum3) & 15;
        accum0 >>= 4;
        accum1 >>= 4;
        accum2 >>= 4;
        accum3 >>= 4;
    }
    return counts;
}
Demo: http://ideone.com/eNn4O (needs more test cases)
http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetNaive
One of them
unsigned int v; // count the number of bits set in v
unsigned int c; // c accumulates the total bits set in v
for (c = 0; v; c++)
{
v &= v - 1; // clear the least significant bit set
}
Keep in mind that the complexity of this method is approximately O(log2(n)), where n is the number to count bits in; the loop runs once per set bit, so for 10 (binary 1010) it needs only 2 iterations.
You should probably take the method for counting 32 bits with 64-bit arithmetic and apply it to each half of the word, which would take about 2*15 + 4 instructions:
// option 3, for at most 32-bit values in v:
c  = ((v & 0xfff) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;
c += (((v & 0xfff000) >> 12) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;
c += ((v >> 24) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;
If you have an SSE4.2-capable processor, you can use the POPCNT instruction.
http://en.wikipedia.org/wiki/SSE4
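On newer .NET (Core 3.0 and later), the same instruction is exposed through System.Numerics.BitOperations.PopCount, which falls back to a software implementation when the hardware support is missing, e.g.:
int setBits = System.Numerics.BitOperations.PopCount(0b1010UL); // 2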