I need to reverse engineer this code into C#, and it is critical that the output is exactly the same. Any suggestions on the ord function and the "strHash % (1 << 64)" part?
def easyHash(s):
    """
    MDSD used the following hash algorithm to calculate the first part of the partition key
    """
    strHash = 0
    multiplier = 37
    for c in s:
        strHash = strHash * multiplier + ord(c)
        # Only keep the last 64 bits, since the mod base is 100
        strHash = strHash % (1 << 64)
    return strHash % 100  # Assume eventVolume is Large
Probably something like this:
Note that I'm using ulong instead of long, because I don't want negative numbers after an overflow (they would mess up the calculation). I don't need the strHash = strHash % (1 << 64) step, because with ulong it is implicit.
public static int EasyHash(string s)
{
    ulong strHash = 0;
    const int multiplier = 37;

    for (int i = 0; i < s.Length; i++)
    {
        unchecked
        {
            strHash = (strHash * multiplier) + s[i];
        }
    }

    return (int)(strHash % 100);
}
The unchecked keyword is normally not necessary, because C# is "normally" compiled in unchecked mode (without overflow checks), but code can be compiled in checked mode (there is a compiler option for that). This code, as written, needs unchecked mode (because it can overflow), so I force it with the unchecked keyword.
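As a small illustration of why the unchecked block matters (the values below are just an assumed example, not part of the original question), the same multiplication wraps silently in an unchecked context but throws when overflow checking is on:

ulong h = ulong.MaxValue;

// Wraps around silently: the result is (ulong.MaxValue * 37) mod 2^64.
ulong wrapped = unchecked(h * 37);

// Throws System.OverflowException when overflow checking is enabled.
try
{
    ulong overflowed = checked(h * 37);
}
catch (OverflowException)
{
    Console.WriteLine("checked multiplication overflowed");
}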
Python: https://ideone.com/RtNsh7
C#: https://ideone.com/0U2Uyd
Related
int n = Convert.ToInt32(Console.ReadLine());
int factorial = 1;

for (int i = 1; i <= n; i++)
{
    factorial *= i;
}

Console.WriteLine(factorial);
This code runs in a console application, but when the number is above 34 the application returns 0.
Why is 0 returned, and what can be done to compute the factorial of large numbers?
You're going out of the range of what the variable can store. That's a factorial, which grows faster than an exponential. Try using ulong (max value 2^64 - 1 = 18,446,744,073,709,551,615) instead of int (max value 2^31 - 1 = 2,147,483,647), i.e. declare ulong factorial = 1 - that should get you a bit further.
If you need to go even further, .NET 4 and up has BigInteger, which can store arbitrarily large numbers.
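A quick sketch of the ulong variant (note the result is only correct up to n = 20, since 21! no longer fits in 64 bits; beyond that you need BigInteger):

ulong n = Convert.ToUInt64(Console.ReadLine());
ulong factorial = 1;

for (ulong i = 1; i <= n; i++)
{
    factorial *= i;   // still wraps around silently once the true value exceeds 2^64 - 1
}

Console.WriteLine(factorial); // correct only up to n = 20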
You are getting 0 because of the way integer overflow is handled in most programming languages. You can easily see what happens if you output the result of each computation in the loop (using a hex representation):
int n = Convert.ToInt32(Console.ReadLine());
int factorial = 1;

for (int i = 1; i <= n; i++)
{
    factorial *= i;
    Console.WriteLine("{0:x}", factorial);
}

Console.WriteLine(factorial);
For n = 34 the output looks like:
1
2
6
18
78
2d0
13b0
...
2c000000
80000000
80000000
0
Basically, multiplying by 2 shifts the number left, and once you have multiplied in enough twos, all significant digits fall off the end of an integer that is only 32 bits wide (i.e. the first 6 numbers give you 4 twos: 1, 2, 3, 2*2, 5, 2*3, so the result of multiplying them is 0x2d0, with 4 zero bits at the end).
If you are using .NET 4.0 and want to calculate the factorial of 1000, then try to use BigInteger instead of Int32, Int64 or even UInt64. Your problem statement "doesn't work" is not quite sufficient for me to give a good suggestion.
Your code will look something like:
using System;
using System.Numerics;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main()
        {
            int factorial = Convert.ToInt32(Console.ReadLine());
            var result = CalculateFactorial(factorial);
            Console.WriteLine(result);
            Console.ReadLine();
        }

        private static BigInteger CalculateFactorial(int value)
        {
            BigInteger result = new BigInteger(1);
            for (int i = 1; i <= value; i++)
            {
                result *= i;
            }
            return result;
        }
    }
}
I am writing an algorithm which chooses a subset (of k elements) from a set (of n elements).
I have accomplished the task with success. It works fine for small numbers.
I have tested it for n=6, k=3 and n=10, k=5.
The problem starts now, when I need to use it for huge numbers.
Sometimes I need to use it for, let's say, n = 96000000 and k = 3000.
To simplify testing a bit, let's focus on an example with n = 786432 and k = 1000. Then there are 461946653375201 such possibilities. The third parameter to my function is the rank number, i.e. the number of one particular unique subset. Let's try a few at random: for example 3264832 works fine (it gives me a subset of distinct numbers), but for 4619466533201 one number in the subset is repeated several times, which is wrong. The subset must consist of unique numbers!
The question is: how do I make it work correctly, and what is the problem? Are the numbers too big even for ulong?
If you have any questions, feel free to ask.
Here is my code:
public static ulong BinomialCoefficient(ulong N, ulong K)
{
    ulong result = 1;
    for (ulong i = 1; i <= K; i++)
    {
        result *= N - (K - i);
        result /= i;
    }
    return result;
}

public static ulong[] ChooseSubsetByRank(ulong sizeOfSet, ulong sizeOfSubset, ulong rank)
{
    ulong[] resultingSubset = new ulong[sizeOfSubset];
    ulong x = sizeOfSet;

    for (ulong i = sizeOfSubset; i > 0; i--)
    {
        while (BinomialCoefficient(x, i) > rank)
            x--;

        resultingSubset[i - 1] = x + 1;
        rank = BinomialCoefficient(x + 1, i) - rank - 1;
    }

    return resultingSubset;
}
And below is the calling code. To test it, you may change the third argument in the line below.
ulong[] arrayTest = Logic.ChooseSubsetByRank(786432, 1000, 4619466533201);

string test = "";
for (int i = 0; i < arrayTest.Length; i++)
    test = test + arrayTest[i].ToString() + " ";

System.Windows.MessageBox.Show(" " + test);
No hope. You cannot.
As spender says: use BigInteger.
Your calculation is wrong (probably because you calculate with ulong, which is far too limited for this).
C(786432, 1000) is in reality:
6033573926325594551531868873570215053708823770889227136141180206574788891075585715726697576999866930083212017993760483485644855730323214507786127283118515758667219335061769573572969492263411636472559059114372691043787225874459276616360823293108500929182830806831624098080982165637186175635880811026388564912224747148201420203796293941118006753515861022396665706095036252893420240334110487119413634294555065166398219767688578556791918697815341165100213662715943043737412038535358818942960435634721564898425752479874494445989953267768476995289375942620219089503401832797819758809124329657724691573254079810257990856068363592549560111914326820802223343980843357174727643299789438961341403866942005159819587812937265119974334351031505150775547311257835039161258554849609865661574816771511161168033768782419369241858323336341530982042093999410402417064838718686064312965836862249598770142918659708106482935266574067985412321680292750817019104479650736141502332606724302400412461373311881584020963297279437835819666355490804970115983436645628460688679416826680621378132834857452816232982148238532837600398378710514758276529410600324271797090502818444825427753513255984828515472462706714900697194261105881768124169338072607942675219899630246822298950117323544399023453603528517829390771915103036173961755955159422806483076370762068538902803552244794986362728794573306025683866038470793703513935653987744702277137020842862116544300481688519625708115843299275718747596961899491910480897148955406962985269341341630460910287516984534632412940751629513018144947978952932944251585462754004392953272268819217751573575925319332190435744062763990089885732157684342450873180307735549083984647582210698121884513785762578827079077499321224628231353083451055184483182777799031632857810808269286112679457384588431986459863394440578400765094557059628627207887510198427517980206661794055812198263391603552022883118047415972254211592143706127815985486692600870607976623561998434373091244295356784708997235625422777415209304056464924341151878262503587256198384142718049855042621519149038523177569828231641690393173865902883254477356340730939905543154540746759842093744184723706019384873683467974667731206411977863548104488741332797192887789005759777716153901423692511142309333333044144404295842596379993363263619514077277847401673508888691303190564956937240904605718333403477875735125913053605250218671009674129773564325959311930556006735185907557691220793718745513911096043358579428288852312401862707347174079157233572972231584221683511928548130771207729971476262436947167805862489722247791944393249804177227081889352572247647101767728277149206844417712380170809760442471306983505977784517425621794122861839031329562074224252476256692950187473655698688314932344304325068076491419731413851641058957149245827761363536463550636030779009703117216843500031930755136735771022162481784531500378393390581558695370099627488825651248884473844195719258621451229987520317542943566297340698028466818937335976792343382788134518740623993664131802576690485505420542865842569675333314900726976825951448445467650748963731221593412649796639395685018463040431779020656159571608044184646177251839940386267422657877801967082672251079906237183824765375906939480520508656199566649638083679757430680818796170362008227564859519761936618260089868694546582873807181452115865272320
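A rough sketch of the same unranking idea rewritten with BigInteger (the names mirror the ones in the question; this is an assumption about how you would wire it up, not a tested drop-in replacement):

using System.Numerics;

public static BigInteger BinomialCoefficient(ulong n, ulong k)
{
    BigInteger result = 1;
    for (ulong i = 1; i <= k; i++)
    {
        result *= n - (k - i);   // multiply before dividing so the division stays exact
        result /= i;
    }
    return result;
}

public static ulong[] ChooseSubsetByRank(ulong sizeOfSet, ulong sizeOfSubset, BigInteger rank)
{
    ulong[] resultingSubset = new ulong[sizeOfSubset];
    ulong x = sizeOfSet;

    for (ulong i = sizeOfSubset; i > 0; i--)
    {
        while (BinomialCoefficient(x, i) > rank)
            x--;

        resultingSubset[i - 1] = x + 1;
        rank = BinomialCoefficient(x + 1, i) - rank - 1;
    }

    return resultingSubset;
}

For n = 786432 and k = 1000 this recomputes large binomial coefficients on every step, so in practice you would update the coefficient incrementally instead of calling BinomialCoefficient from scratch inside the loops.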
I've made a small C# program which calculates prime numbers using the Sieve of Eratosthenes.
long n = 100000;
bool[] p = new bool[n+1];

for(long i=2; i<=n; i++)
{
    p[i]=true;
}

for(long i=2; i<=X; i++)
{
    for(long j=Y; j<=Z; j++)
    {
        p[i*j]=false;
    }
}

for(long i=0; i<=n; i++)
{
    if(p[i])
    {
        Console.Write(" "+i);
    }
}

Console.ReadKey(true);
My question is: which X, Y and Z should I choose to make my program as efficient and economical as possible?
Of course we can just take:
X = n
Y = 2
Z = n
But then the program won't be very efficient.
It seems we can take:
X = Math.Sqrt(n)
Y = i
Z = n/i
And apparently the first 100 primes that the program gives are all correct.
There are several optimisations that can be applied without making the program overly complicated.
- you can start the crossing out at j = i (effectively i * i instead of 2 * i) since all lower multiples of i have already been crossed out
- you can save some work by leaving all even numbers out of the array (remembering to produce the prime 2 out of thin air when needed); hence array cell k represents the odd integer 2 * k + 1
- you can make things faster by turning repeated multiplication (i * j) into iterated addition (k += i); instead of looping over j in the inner loop you loop (k = i * i; k <= N; k += i)
- in some cases it can be advantageous to initialise the array with 0 (false) and set cells to 1 (true) for composites; its meaning is thus 'is_composite' instead of 'is_prime'
Harvesting all the low-hanging fruit, the loops thus become (in C++, but C# should be sort of similar):
uint32_t max_factor_bit = uint32_t(sqrt(double(n))) >> 1;
uint32_t max_bit = n >> 1;

for (uint32_t i = 3 >> 1; i <= max_factor_bit; ++i)
{
    if (composite[i]) continue;

    uint32_t n = (i << 1) + 1;
    uint32_t k = (n * n) >> 1;

    for ( ; k <= max_bit; k += n)
    {
        composite[k] = true;
    }
}
Regarding the computation of max_factor there are some caveats where the compiler can bite you, for larger values of n. There's a topic for that on Code Review.
A further, easy optimisation is to represent the bitmap as an array of bytes, with each byte standing for eight odd integers. For setting bit k in byte array a you would do a[k / CHAR_BIT] |= (1 << (k % CHAR_BIT)) where CHAR_BIT is the number of bits in a byte. However, such bit trickery is normally wrapped into an inline function to keep the code clean. E.g. in C++ I tell the compiler how to generate such functions using a template like this:
template<typename word_t>
inline
void set_bit (word_t *p, uint32_t index)
{
    enum { BITS_PER_WORD = sizeof(word_t) * CHAR_BIT };

    // we can trust the compiler to use masking and shifting instead of division; we cannot do that
    // ourselves without having the log2 which cannot easily be computed as a constexpr
    p[index / BITS_PER_WORD] |= word_t(1) << (index % BITS_PER_WORD);
}
This allows me to say set_bit(a, k) for any type of array - byte, integer, whatever - without having to write special code or use invocations; it's basically a type-safe equivalent to the old C-style macros. I'm not certain whether something similar is possible in C#. There is, however, the C# type BitArray where all that stuff is already done for you under the hood.
On pastebin there's a small demo .cpp for the segmented Sieve of Eratosthenes, where two further optimisations are applied: presieving by small integers, and sieving in small, cache friendly blocks so that the full range of 32-bit integers can be sieved in 2 seconds flat. This could give you some inspiration...
When doing the Sieve of Eratosthenes, memory savings easily translate to speed gains because the algorithm is memory-intensive and it tends to stride all over the memory instead of accessing it locally. That's why space savings due to compact representation (only odd integers, packed bits - i.e. BitArray) and localisation of access (by sieving in small blocks instead of the whole array in one go) can speed up the code by one or more orders of magnitude, without making the code significantly more complicated.
It is possible to go far beyond the easy optimisations mentioned here, but that tends to make the code increasingly complicated. One word that often occurs in this context is the 'wheel', which can save a further 50% of memory space. The wiki has an explanation of wheels here, and in a sense the odds-only sieve is already using a 'modulo 2 wheel'. Conversely, a wheel is the extension of the odds-only idea to dropping further small primes from the array, like 3 and 5 in the famous 'mod 30' wheel with modulus 2 * 3 * 5. That wheel effectively stuffs 30 integers into one 8-bit byte.
Here's a runnable rendition of the above code in C#:
static uint max_factor32 (double n)
{
    double r = System.Math.Sqrt(n);

    if (r < uint.MaxValue)
    {
        uint r32 = (uint)r;
        return r32 - ((ulong)r32 * r32 > n ? 1u : 0u);
    }
    return uint.MaxValue;
}

static void sieve32 (System.Collections.BitArray odd_composites)
{
    uint max_bit = (uint)odd_composites.Length - 1;
    uint max_factor_bit = max_factor32((max_bit << 1) + 1) >> 1;

    for (uint i = 3 >> 1; i <= max_factor_bit; ++i)
    {
        if (odd_composites[(int)i]) continue;

        uint p = (i << 1) + 1;   // the prime represented by bit i
        uint k = (p * p) >> 1;   // starting point for striding through the array

        for ( ; k <= max_bit; k += p)
        {
            odd_composites[(int)k] = true;
        }
    }
}

static int Main (string[] args)
{
    int n = 100000000;

    System.Console.WriteLine("Hello, Eratosthenes! Sieving up to {0}...", n);

    System.Collections.BitArray odd_composites = new System.Collections.BitArray(n >> 1);

    sieve32(odd_composites);

    uint cnt = 1;
    ulong sum = 2;

    for (int i = 1; i < odd_composites.Length; ++i)
    {
        if (odd_composites[i]) continue;

        uint prime = ((uint)i << 1) + 1;

        cnt += 1;
        sum += prime;
    }

    System.Console.WriteLine("\n{0} primes, sum {1}", cnt, sum);

    return 0;
}
static int Main (string[] args)
{
int n = 100000000;
System.Console.WriteLine("Hello, Eratosthenes! Sieving up to {0}...", n);
System.Collections.BitArray odd_composites = new System.Collections.BitArray(n >> 1);
sieve32(odd_composites);
uint cnt = 1;
ulong sum = 2;
for (int i = 1; i < odd_composites.Length; ++i)
{
if (odd_composites[i]) continue;
uint prime = ((uint)i << 1) + 1;
cnt += 1;
sum += prime;
}
System.Console.WriteLine("\n{0} primes, sum {1}", cnt, sum);
return 0;
}
This does 10^8 in about a second, but for higher values of n it gets slow. If you want to go faster, you have to employ sieving in small, cache-sized blocks.
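A very rough sketch of what block-wise (segmented) sieving can look like in C# (the block size of 32768 odd numbers per segment is just an assumption picked to stay cache-friendly; a real implementation would also pre-sieve small primes and tune this):

static long CountPrimesSegmented(long limit)
{
    if (limit < 2) return 0;
    if (limit < 3) return 1;

    // Base primes up to sqrt(limit), found with a small classic odds-only sieve.
    int root = (int)System.Math.Sqrt(limit);
    bool[] baseComposite = new bool[root + 1];
    var basePrimes = new System.Collections.Generic.List<long>();
    for (int i = 3; i <= root; i += 2)
    {
        if (baseComposite[i]) continue;
        basePrimes.Add(i);
        for (long k = (long)i * i; k <= root; k += 2L * i)
            baseComposite[k] = true;
    }

    long count = 1;                  // the prime 2
    const int BlockSize = 1 << 15;   // odd numbers per block
    bool[] block = new bool[BlockSize];

    // Each block covers the odd numbers low, low + 2, ..., low + 2 * (BlockSize - 1).
    for (long low = 3; low <= limit; low += 2L * BlockSize)
    {
        System.Array.Clear(block, 0, BlockSize);
        long high = System.Math.Min(low + 2L * (BlockSize - 1), limit);

        foreach (long p in basePrimes)
        {
            long start = System.Math.Max(p * p, ((low + p - 1) / p) * p);
            if ((start & 1) == 0) start += p;   // only odd multiples live in the block
            for (long m = start; m <= high; m += 2 * p)
                block[(int)((m - low) >> 1)] = true;
        }

        for (long n = low; n <= high; n += 2)
            if (!block[(int)((n - low) >> 1)])
                count++;
    }

    return count;
}

The working set is one bool per odd number in the current block, so it stays in cache no matter how large the limit is.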
To my surprise the following method produces a different result in debug vs release:
int result = "test".GetHashCode();
Is there any way to avoid this?
I need a reliable way to hash a string and I need the value to be consistent in debug and release mode. I would like to avoid writing my own hashing function if possible.
Why does this happen?
FYI, reflector gives me:
[ReliabilityContract(Consistency.WillNotCorruptState, Cer.MayFail), SecuritySafeCritical]
public override unsafe int GetHashCode()
{
    fixed (char* str = ((char*) this))
    {
        char* chPtr = str;
        int num = 0x15051505;
        int num2 = num;
        int* numPtr = (int*) chPtr;

        for (int i = this.Length; i > 0; i -= 4)
        {
            num = (((num << 5) + num) + (num >> 0x1b)) ^ numPtr[0];
            if (i <= 2)
            {
                break;
            }
            num2 = (((num2 << 5) + num2) + (num2 >> 0x1b)) ^ numPtr[1];
            numPtr += 2;
        }

        return (num + (num2 * 0x5d588b65));
    }
}
GetHashCode() is not what you should be using to hash a string, almost 100% of the time. Without knowing what you're doing, I recommend that you use an actual hash algorithm, like SHA-1:
using (System.Security.Cryptography.SHA1Managed hp = new System.Security.Cryptography.SHA1Managed())
{
    // Use hp.ComputeHash(System.Text.Encoding.ASCII (or Unicode, UTF8, UTF16, UTF32, ...).GetBytes(theString)) to compute the hash code.
}
Update: For something a little bit faster, there's also SHA1Cng, which is significantly faster than SHA1Managed.
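A slightly more complete sketch of the SHA1Managed approach (the UTF-8 encoding and the hex formatting are assumptions here; pick whatever encoding your application actually standardises on, since it is part of the contract):

using System;
using System.Security.Cryptography;
using System.Text;

static string Sha1Hex(string theString)
{
    using (SHA1Managed sha1 = new SHA1Managed())
    {
        // Encode the string to bytes first; the same encoding must be used
        // everywhere the hash is computed, or the values will not match.
        byte[] hash = sha1.ComputeHash(Encoding.UTF8.GetBytes(theString));
        return BitConverter.ToString(hash).Replace("-", "");
    }
}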
Here's a better approach that is much faster than SHA, and you can replace the modified GetHashCode with it: C# fast hash murmur2
There are several implementations with different levels of "unmanaged" code, so if you need fully managed it's there and if you can use unsafe it's there too.
/// <summary>
/// Default implementation of string.GetHashCode is not consistent on different platforms (x32/x64, which is our case) and frameworks.
/// FNV-1a (Fowler/Noll/Vo) is a fast, consistent, non-cryptographic hash algorithm with good dispersion. (see http://isthe.com/chongo/tech/comp/fnv/#FNV-1a)
/// </summary>
private static int GetFNV1aHashCode(string str)
{
    if (str == null)
        return 0;

    var length = str.Length;
    // original FNV-1a has 32 bit offset_basis = 2166136261 but length gives a bit better dispersion (2%) for our case where all the strings are equal length, for example: "3EC0FFFF01ECD9C4001B01E2A707"
    int hash = length;
    for (int i = 0; i != length; ++i)
        hash = (hash ^ str[i]) * 16777619;

    return hash;
}
I guess this implementation is slower than the unsafe one posted here. But it's much simpler and safe. It works well when top speed is not needed.
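For example (the input string is arbitrary; the point is that the value is deterministic, unlike string.GetHashCode across platforms and framework versions):

int hash = GetFNV1aHashCode("test");
Console.WriteLine(hash);   // same value in Debug and Release, on x86 and x64, across runs

If you need a non-negative value (e.g. for bucketing), mask off the sign bit with hash & 0x7FFFFFFF.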
I am looking for a faster algorithm than the one below. Given a sequence of 64-bit unsigned integers, return a count of the number of times each of the sixty-four bits is set in the sequence.
Example:
4608 = 0000000000000000000000000000000000000000000000000001001000000000
4097 = 0000000000000000000000000000000000000000000000000001000000000001
2048 = 0000000000000000000000000000000000000000000000000000100000000000
counts 0000000000000000000000000000000000000000000000000002101000000001
Example:
2560 = 0000000000000000000000000000000000000000000000000000101000000000
530 = 0000000000000000000000000000000000000000000000000000001000010010
512 = 0000000000000000000000000000000000000000000000000000001000000000
counts 0000000000000000000000000000000000000000000000000000103000010010
Currently I am using a rather obvious and naive approach:
static int bits = sizeof(ulong) * 8;

public static int[] CommonBits(params ulong[] values) {
    int[] counts = new int[bits];
    int length = values.Length;

    for (int i = 0; i < length; i++) {
        ulong value = values[i];
        for (int j = 0; j < bits && value != 0; j++, value = value >> 1) {
            counts[j] += (int)(value & 1UL);
        }
    }

    return counts;
}
A small speed improvement might be achieved by first OR'ing the integers together, then using the result to determine which bits you need to check. You would still have to iterate over each bit, but only once over bits where there are no 1s, rather than values.Length times.
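A hedged sketch of that idea layered on top of the CommonBits method from the question (the OR mask and the early exit are the only changes; the method name is made up):

public static int[] CommonBitsOrFirst(params ulong[] values) {
    int[] counts = new int[64];

    // First pass: OR everything together to find which bit positions occur at all.
    ulong mask = 0;
    foreach (ulong v in values)
        mask |= v;

    // Second pass: only inspect bit positions that appear somewhere in the input.
    for (int j = 0; j < 64; j++) {
        if ((mask & (1UL << j)) == 0) continue;   // no value has this bit set

        foreach (ulong v in values)
            counts[j] += (int)((v >> j) & 1UL);
    }

    return counts;
}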
I'll direct you to the classic Bit Twiddling Hacks page. Your goal seems slightly different from typical bit counting (your 'counts' output is in a rather unusual format), but maybe it'll be useful.
The best I can do here is just get silly with it and unroll the inner loop... it seems to have cut the running time in half (roughly 4 seconds as opposed to the 8 in yours to process 100 ulongs 100,000 times)... I used a quick command-line app to generate the following code:
for (int i = 0; i < length; i++)
{
    ulong value = values[i];
    if (0ul != (value & 1ul)) counts[0]++;
    if (0ul != (value & 2ul)) counts[1]++;
    if (0ul != (value & 4ul)) counts[2]++;
    //etc...
    if (0ul != (value & 4611686018427387904ul)) counts[62]++;
    if (0ul != (value & 9223372036854775808ul)) counts[63]++;
}
That was the best I can do... As per my comment, you'll waste some amount (I don't know how much) running this in a 32-bit environment. If you're that concerned about performance, it may benefit you to first convert the data to uint.
Tough problem... it may even benefit you to marshal it into C++, but that entirely depends on your application. Sorry I couldn't be more help; maybe someone else will see something I missed.
Update: a few more profiler sessions show a steady 36% improvement. Shrug. I tried.
OK, let me try again :D
Change each byte of the 64-bit integer into a 64-bit integer by shifting each bit left by n*8.
For instance:
10110101 -> 0000000100000000000000010000000100000000000000010000000000000001
(use a lookup table for that translation)
Then just sum everything together in the right way and you get an array of unsigned chars holding the counts.
You have to make 8 * (number of 64-bit integers) summations.
Code in C:
// LOOKUPTABLE is external and is int64[256]
unsigned char* bitcounts(int64* int64array, int len)
{
    int64* array64;
    int64 tmp;

    array64 = (int64*)malloc(8 * sizeof(int64));   // 8 per-byte accumulators
    for (int i = 0; i < 8; i++) array64[i] = 0;    // set to 0

    for (int j = 0; j < len; j++)
    {
        tmp = int64array[j];
        for (int i = 7; tmp; i--)
        {
            array64[i] += LOOKUPTABLE[tmp & 0xFF];
            tmp = tmp >> 8;
        }
    }

    return (unsigned char*)array64;
}
This reduces the work compared to the naive implementation by a factor of 8, because it counts 8 bits at a time.
EDIT:
I fixed the code to break out earlier for smaller integers, but I am still unsure about endianness.
And this works only on up to 256 inputs, because it uses unsigned char to store the counts in. If you have a longer input sequence, you can change this code to hold bit counts up to 2^16, at half the speed.
const unsigned int BYTESPERVALUE = 64 / 8;
unsigned int bcount[BYTESPERVALUE][256];
memset(bcount, 0, sizeof bcount);

for (int i = values.length; --i >= 0; )
    for (int j = BYTESPERVALUE; --j >= 0; ) {
        const unsigned int jth_byte = (values[i] >> (j * 8)) & 0xff;
        bcount[j][jth_byte]++; // count byte value (0..255) instances
    }

unsigned int count[64];
memset(count, 0, sizeof count);

for (int i = BYTESPERVALUE; --i >= 0; )
    for (int j = 256; --j >= 0; ) // check each byte value instance
        for (int k = 8; --k >= 0; ) // for each bit in a given byte
            if (j & (1 << k)) // if bit was set, then add its count
                count[i * 8 + k] += bcount[i][j];
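Since the question is in C#, here is a rough, untested translation of the byte-histogram variant above (the method name is made up):

public static int[] CommonBitsByByte(params ulong[] values)
{
    // First, histogram the byte values seen at each of the 8 byte positions...
    var byteHistogram = new int[8, 256];
    foreach (ulong value in values)
        for (int j = 0; j < 8; j++)
            byteHistogram[j, (int)((value >> (j * 8)) & 0xFF)]++;

    // ...then expand the histogram into per-bit counts, so the bit-level work
    // is 8 * 256 * 8 steps regardless of how many input values there were.
    int[] counts = new int[64];
    for (int j = 0; j < 8; j++)
        for (int b = 0; b < 256; b++)
            for (int k = 0; k < 8; k++)
                if ((b & (1 << k)) != 0)
                    counts[j * 8 + k] += byteHistogram[j, b];

    return counts;
}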
Another approach that might be profitable would be to build an array of 256 elements, which encodes the actions that you need to take in incrementing the count array.
Here is a sample for a 4 element table, which does 2 bits instead of 8 bits.
int bitToSubscript[4][3] =
{
    {0},      // No bits set
    {1,0},    // Bit 0 set
    {1,1},    // Bit 1 set
    {2,0,1}   // Bit 0 and bit 1 set
};
The algorithm then degenerates to:
pick the 2 right hand bits off of the number.
Use that as a small integer to index into the bitToSubscriptArray.
In that array, pull off the first integer. That is the number of elements in the count array, that you need to increment.
Based on that count, Iterate through the remainder of the row, incrementing count, based on the subscript you pull out of the bitToSubscript array.
Once that loop is done, shift your original number two bits to the right.... Rinse Repeat as needed.
Now there is one issue I ignored in that description. The actual subscripts are relative. You need to keep track of where you are in the count array. Every time you loop, you add two to an offset. To that offset, you add the relative subscript from the bitToSubscript array.
It should be possible to scale up to the size you want, based on this small example. I would think that another program could be used, to generate the source code for the bitToSubscript array, so that it can be simply hard coded in your program.
There are other variations on this scheme, but I would expect it to run faster on average than anything that does it one bit at a time.
Good Hunting.
Evil.
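A hedged C# sketch of the 2-bit table idea described above (the table, names and loop structure are just one way to realise it; a full version would use 8-bit chunks and a generated 256-entry table):

// bitToSubscript[v] = { count, subscript... } for the 2-bit value v
static readonly int[][] bitToSubscript =
{
    new[] { 0 },        // 00: no bits set
    new[] { 1, 0 },     // 01: bit 0 set
    new[] { 1, 1 },     // 10: bit 1 set
    new[] { 2, 0, 1 }   // 11: bits 0 and 1 set
};

public static int[] CommonBitsTable(params ulong[] values)
{
    int[] counts = new int[64];

    foreach (ulong value in values)
    {
        ulong v = value;
        for (int offset = 0; offset < 64; offset += 2)   // walk the word two bits at a time
        {
            int[] row = bitToSubscript[(int)(v & 3UL)];
            int n = row[0];                              // how many counters to bump
            for (int s = 1; s <= n; s++)
                counts[offset + row[s]]++;               // relative subscript plus current offset
            v >>= 2;
        }
    }

    return counts;
}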
I believe this should give a nice speed improvement:
const ulong mask = 0x1111111111111111;

public static int[] CommonBits(params ulong[] values)
{
    int[] counts = new int[64];

    ulong accum0 = 0, accum1 = 0, accum2 = 0, accum3 = 0;

    int i = 0;
    foreach( ulong v in values ) {
        if (i == 15) {
            for( int j = 0; j < 64; j += 4 ) {
                counts[j]   += ((int)accum0) & 15;
                counts[j+1] += ((int)accum1) & 15;
                counts[j+2] += ((int)accum2) & 15;
                counts[j+3] += ((int)accum3) & 15;
                accum0 >>= 4;
                accum1 >>= 4;
                accum2 >>= 4;
                accum3 >>= 4;
            }
            i = 0;
        }

        accum0 += (v)      & mask;
        accum1 += (v >> 1) & mask;
        accum2 += (v >> 2) & mask;
        accum3 += (v >> 3) & mask;
        i++;
    }

    for( int j = 0; j < 64; j += 4 ) {
        counts[j]   += ((int)accum0) & 15;
        counts[j+1] += ((int)accum1) & 15;
        counts[j+2] += ((int)accum2) & 15;
        counts[j+3] += ((int)accum3) & 15;
        accum0 >>= 4;
        accum1 >>= 4;
        accum2 >>= 4;
        accum3 >>= 4;
    }

    return counts;
}
Demo: http://ideone.com/eNn4O (needs more test cases)
http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetNaive
One of them:

unsigned int v; // count the number of bits set in v
unsigned int c; // c accumulates the total bits set in v

for (c = 0; v; c++)
{
    v &= v - 1; // clear the least significant bit set
}
Keep in mind that this method loops once per set bit (at most roughly log2(n) times, where n is the number to count bits in), so for 10 (binary 1010) it needs only 2 loops.
You should probably take the method for counting 32 bits with 64-bit arithmetic and apply it to each half of the word, which would take about 2*15 + 4 instructions:
// option 3, for at most 32-bit values in v:
c =  ((v & 0xfff) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;
c += (((v & 0xfff000) >> 12) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;
c += ((v >> 24) * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;
If you have an SSE4.2-capable processor, you can use the POPCNT instruction.
http://en.wikipedia.org/wiki/SSE4
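On the .NET side, if you are on .NET Core 3.0 or later, the same instruction is exposed through System.Numerics.BitOperations. Note that this counts the set bits within a single value, not the per-position counts the question asks for, so it would only be a building block inside a different aggregation scheme:

using System.Numerics;

ulong value = 4608;
int setBits = BitOperations.PopCount(value);   // 2, since 4608 has bits 9 and 12 set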