I'm programming a C# emulator and decided to output the PCM audio using CScore.
When the sample size (for each channel) is one byte, the sound outputs correctly, but when I increase the sample size to 16 bits, the sound is very very noisy.
A related question is how those 2 bytes are interpreted (are they signed? Is the high byte first?)
This is roughly what I'm doing:
First I generate the samples as such
public void GenerateSamples(int sampleCount)
{
    while (sampleCount > 0)
    {
        --sampleCount;
        for (int c = 0; c < _numChannels; ++c)
        {
            _buffer[_sampleIndex++] = _outputValue;
        }

        // The amount of ticks in a sample
        _tickCounter -= APU.MinimumTickThreshold;
        if (_tickCounter < 0)
        {
            _tickCounter = _tickThreshold;
            _up = !_up;
            // Replicating signed behaviour
            _outputValue = (short)(_up ? 32767 : -32768);
        }
    }
}
This will generate a simple square wave with the frequency determined by _tickThreshold. If _buffer is a byte array, the sound is correct.
I want to output it with shorts because it will enable me to use signed samples and simply add multiple channels in order to mix them.
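For illustration, the mixing I have in mind would look something like this (a hypothetical sketch; channelA and channelB stand in for two channels' current samples, and the clamp guards against overflow wrap-around):

int mixed = channelA + channelB; // both are shorts, so the sum fits in an int
short sample = (short)Math.Max(short.MinValue, Math.Min(short.MaxValue, mixed));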
This is how I'm outputting the sound.
for (int i = 0; i < sampleCount; ++i)
{
    for (int c = 0; c < _numChannels; ++c)
    {
        short sample = _channel.Buffer[_channelSampleIndex++];
        // Outputting the samples the other way around doesn't
        // output sound for me
        _buffer[_sampleIndex++] = (byte)sample;
        _buffer[_sampleIndex++] = (byte)(sample >> 8);
    }
}
The WaveFormat I'm using is determined like this:
_waveFormat = new WaveFormat(_apu.SampleRate, // 44000
_apu.SampleSize * 8, // 16
_apu.NumChannels); // 2
I'm pretty sure there is something obvious I'm missing, but I've been debugging this for a while and can't seem to pinpoint where the problem is.
Thanks
Walk of shame here.
The problem was that I wasn't taking into account that I now needed to generate half the number of samples (CScore asks for a number of bytes, not samples).
In my example, I had to divide the sampleCount variable by the sample size to generate the correct amount of sound.
The noise came from the fact that I wasn't synchronizing the extra samples with the next Read call from CScore (I'm generating sound on the fly instead of pre-buffering it; this way no delay is introduced by extra samples).
I found out about the problem looking at this: SampleToPcm16.cs
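The fix, roughly, looks like this (a minimal sketch, not my exact code; the Read signature matches CScore's IWaveSource, but the member names are placeholders). It also answers the byte-layout question above: 16-bit WAV PCM samples are signed and stored little-endian, low byte first:

public int Read(byte[] buffer, int offset, int count)
{
    int bytesPerSample = 2;                     // 16-bit samples
    int sampleCount = count / bytesPerSample;   // CScore asks for bytes; we generate samples
    _channel.GenerateSamples(sampleCount);
    for (int i = 0; i < sampleCount; i++)
    {
        short sample = _channel.Buffer[i];
        buffer[offset++] = (byte)sample;        // low byte first (little-endian)
        buffer[offset++] = (byte)(sample >> 8); // then the high byte
    }
    return count;
}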
Related
I want to split a large array of UTF-8 encoded data, so that decoding it into chars can be parallelized.
It seems that there's no way to find out how many bytes Encoding.GetCharCount reads. I also can't use GetByteCount(GetChars(...)) since it decodes the entire array anyways, which is what I'm trying to avoid.
UTF-8 has well-defined byte sequences and is considered self-synchronizing, meaning that given any byte position, you can find where the character containing that byte begins.
The UTF-8 spec (Wikipedia is the easiest link) defines the following byte sequences:
0_______ : ASCII (0-127) char
10______ : Continuation
110_____ : Two-byte character
1110____ : Three-byte character
11110___ : Four-byte character
So, the following method (or something similar) should get your result:
Get the byte count for bytes (e.g. bytes.Length)
Determine how many sections to split into
Select the byte at index byteCount / sectionCount
Test the byte against the table:
If byte & 0x80 == 0x00 then you can make this byte part of either section
If byte & 0xE0 == 0xC0 then you need to seek ahead one byte, and keep it with the current section
If byte & 0xF0 == 0xE0 then you need to seek ahead two bytes, and keep it with the current section
If byte & 0xF8 == 0xF0 then you need to seek ahead three bytes, and keep it with the current section
If byte & 0xC0 == 0x80 then you are in a continuation, and should seek ahead until the first byte that does not match val & 0xC0 == 0x80, then keep everything up to (but not including) this byte in the current section
Select byteStart through byteCount + offset, where offset is determined by the test above
Repeat for each section.
Of course, if we redefine our test as returning the current char start position, we have two cases:
1. If (byte[i] & 0xC0) == 0x80 then we need to walk backwards through the array
2. Else, return the current i (since it's not a continuation)
This gives us the following method:
public static int GetCharStart(ref byte[] arr, int index) =>
(arr[index] & 0xC0) == 0x80 ? GetCharStart(ref arr, index - 1) : index;
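A quick sanity check (my own example, not part of the original answer): the UTF-8 encoding of "é" is 0xC3 0xA9, so starting on the continuation byte walks back to the lead byte:

var bytes = Encoding.UTF8.GetBytes("aé");  // { 0x61, 0xC3, 0xA9 }
int start = GetCharStart(ref bytes, 2);    // index 2 holds 0xA9, a continuation byte
// start == 1, the index of the lead byte 0xC3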
Next, we want to get each section. The easiest way is to use a state-machine (or abuse, depending on how you look at it) to return the sections:
public static IEnumerable<byte[]> GetByteSections(byte[] utf8Array, int sectionCount)
{
    var sectionStart = 0;
    var sectionEnd = 0;
    for (var i = 0; i < sectionCount; i++)
    {
        sectionEnd = i == (sectionCount - 1)
            ? utf8Array.Length
            : GetCharStart(ref utf8Array, (int)Math.Round((double)utf8Array.Length / sectionCount * i));
        yield return GetSection(ref utf8Array, sectionStart, sectionEnd);
        sectionStart = sectionEnd;
    }
}
I built it this way because I want to use Parallel.ForEach to demonstrate the result, which is super easy if we have an IEnumerable. It also allows me to be extremely lazy with the processing: we only gather sections when needed, on demand, which is a good thing, no?
Lastly, we need to be able to get a section of bytes, so we have the GetSection method:
public static byte[] GetSection(ref byte[] array, int start, int end)
{
    var result = new byte[end - start];
    for (var i = 0; i < result.Length; i++)
    {
        result[i] = array[i + start];
    }
    return result;
}
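As a side note, the copy loop could equally be written with the BCL's Array.Copy, which does the same thing and is typically faster for large sections:

public static byte[] GetSection(ref byte[] array, int start, int end)
{
    var result = new byte[end - start];
    Array.Copy(array, start, result, 0, result.Length);
    return result;
}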
Finally, the demonstration:
var sourceText = "Some test 平仮名, ひらがな string that should be decoded in parallel, this demonstrates that we work flawlessly with Parallel.ForEach. The only downside to using `Parallel.ForEach` the way I demonstrate is that it doesn't take order into account, but oh-well.";
var source = Encoding.UTF8.GetBytes(sourceText);
Console.WriteLine(sourceText);

var results = new ConcurrentBag<string>();
Parallel.ForEach(GetByteSections(source, 10),
                 new ParallelOptions { MaxDegreeOfParallelism = 1 },
                 x =>
                 {
                     Console.WriteLine(Encoding.UTF8.GetString(x));
                     results.Add(Encoding.UTF8.GetString(x));
                 });

Console.WriteLine();
Console.WriteLine("Assemble the result: ");
Console.WriteLine(string.Join("", results.Reverse()));
Console.ReadLine();
The result:
Some test ???, ???? string that should be decoded in parallel, this demonstrates that we work flawlessly with Parallel.ForEach. The only downside to using `Parallel.ForEach` the way I demonstrate is that it doesn't take order into account, but oh-well.
Some test ???, ??
?? string that should b
e decoded in parallel, thi
s demonstrates that we work
flawlessly with Parallel.
ForEach. The only downside
to using `Parallel.ForEach`
the way I demonstrate is
that it doesn't take order into account, but oh-well.
Assemble the result:
Some test ???, ???? string that should be decoded in parallel, this demonstrates that we work flawlessly with Parallel.ForEach. The only downside to using `Parallel.ForEach` the way I demonstrate is that it doesn't take order into account, but oh-well.
Not perfect, but it does the job. If we change MaxDegreeOfParallelism to a higher value, our string gets jumbled:
Some test ???, ??
e decoded in parallel, thi
flawlessly with Parallel.
?? string that should b
to using `Parallel.ForEach`
ForEach. The only downside
that it doesn't take order into account, but oh-well.
s demonstrates that we work
the way I demonstrate is
So, as you can see, super easy. You'll want to make modifications to allow for correct order-reassembly, but this should demonstrate the trick.
If we modify the GetByteSections method as follows, the last section is no longer ~2x the size of the remaining ones:
public static IEnumerable<byte[]> GetByteSections(byte[] utf8Array, int sectionCount)
{
    var sectionStart = 0;
    var sectionEnd = 0;
    var sectionSize = (int)Math.Ceiling((double)utf8Array.Length / sectionCount);
    for (var i = 0; i < sectionCount; i++)
    {
        if (i == (sectionCount - 1))
        {
            sectionEnd = GetCharStart(ref utf8Array, i * sectionSize);
            yield return GetSection(ref utf8Array, sectionStart, sectionEnd);
            sectionStart = sectionEnd;
            sectionEnd = utf8Array.Length;
            yield return GetSection(ref utf8Array, sectionStart, sectionEnd);
        }
        else
        {
            sectionEnd = GetCharStart(ref utf8Array, i * sectionSize);
            yield return GetSection(ref utf8Array, sectionStart, sectionEnd);
            sectionStart = sectionEnd;
        }
    }
}
The result:
Some test ???, ???? string that should be decoded in parallel, this demonstrates that we work flawlessly with Parallel.ForEach. The only downside to using `Parallel.ForEach` the way I demonstrate is that it doesn't take order into account, but oh-well. We can continue to increase the length of this string to demonstrate that the last section is usually about double the size of the other sections, we could fix that if we really wanted to. In fact, with a small modification it does so, we just have to remember that we'll end up with `sectionCount + 1` results.
Some test ???, ???? string that should be de
coded in parallel, this demonstrates that we work flawless
ly with Parallel.ForEach. The only downside to using `Para
llel.ForEach` the way I demonstrate is that it doesn't tak
e order into account, but oh-well. We can continue to incr
ease the length of this string to demonstrate that the las
t section is usually about double the size of the other se
ctions, we could fix that if we really wanted to. In fact,
with a small modification it does so, we just have to rem
ember that we'll end up with `sectionCount + 1` results.
Assemble the result:
Some test ???, ???? string that should be decoded in parallel, this demonstrates that we work flawlessly with Parallel.ForEach. The only downside to using `Parallel.ForEach` the way I demonstrate is that it doesn't take order into account, but oh-well. We can continue to increase the length of this string to demonstrate that the last section is usually about double the size of the other sections, we could fix that if we really wanted to. In fact, with a small modification it does so, we just have to remember that we'll end up with `sectionCount + 1` results.
And finally, if for some reason you split into an abnormally large number of sections compared to the input size (my input size of ~578 bytes for 250 chars demonstrates this), you'll hit an IndexOutOfRangeException in GetCharStart. The following version fixes that:
public static int GetCharStart(ref byte[] arr, int index)
{
    if (index >= arr.Length)
    {
        index = arr.Length - 1;
    }
    return (arr[index] & 0xC0) == 0x80 ? GetCharStart(ref arr, index - 1) : index;
}
Of course this leaves you with a bunch of empty results, but the reassembled string doesn't change, so I'm not even going to bother posting the full scenario test here. (I leave it up to you to experiment.)
Great answers, Mathieu and Der; here is a Python variant, 100% based on your answer, which works great:
def find_utf8_split(data, bytes=None):
    bytes = bytes or len(data)
    while bytes > 0 and data[bytes - 1] & 0xC0 == 0x80:
        bytes -= 1
    if bytes > 0:
        if data[bytes - 1] & 0xE0 == 0xC0: bytes = bytes - 1
        if data[bytes - 1] & 0xF0 == 0xE0: bytes = bytes - 1
        if data[bytes - 1] & 0xF8 == 0xF0: bytes = bytes - 1
    return bytes
This code finds a UTF-8-compatible split point in a given byte string. It does not do the split itself, as that would take more memory; that is left to the rest of the code.
For example you could:
position = find_utf8_split(data)
leftovers = data[position:]
text = data[:position].decode('utf-8')
I need to convert an 8-bit number such as 00001110 to a char. The problem is easy, so I wrote the code and everything works fine, but now I need to optimize it for speed as much as possible.
In test class :
class Program
{
    static void Main(string[] args)
    {
        Random r = new Random();
        int[] testTab = new int[8];
        Normal n = new Normal();
        long time;
        Stopwatch watch = new Stopwatch();

        watch.Start();
        for (int i = 0; i < 9000; i++)
        {
            for (int j = 0; j < 8; j++)
            {
                testTab[j] = r.Next(2);
            }
            n.SetTable(testTab);
            n.Decode();
        }
        watch.Stop();

        time = watch.ElapsedTicks;
        Console.WriteLine(time);
        time = watch.ElapsedMilliseconds;
        Console.WriteLine(time);
        Console.ReadKey();
    }
}
and the class with the algorithm:
class Normal
{
    private int[] _tab = new int[8];

    public void SetTable(int[] tab)
    {
        _tab = tab;
    }

    public void Decode()
    {
        char a = (char)(_tab[0]*1 + _tab[1]*2 + _tab[2]*4 + _tab[3]*8 +
                        _tab[4]*16 + _tab[5]*32 + _tab[6]*64 + _tab[7]*128);
    }
}
For 9000 iterations I get about 2 ms, which is not a long time, but my PC has a fast processor.
The final code will run on a smartphone, so there is no powerful CPU. In my algorithm I use random data; in the final version the data will come from the camera (so it will take longer), and I am trying to repeat this operation 10 times per second, which is why I need the best time for even the smallest operations.
Is there a faster way to convert byte to char than this?
char a = ((char)( _tab[0]*1 + _tab[1]*2 + _tab[2]*4 + _tab[3]*8 + _tab[4]*16 + _tab[5]*32 + _tab[6]*64 + _tab[7]*128));
tl;dr Your conversion code is already efficient, and is not your bottleneck.
Your benchmarking is flawed. You are not just timing the conversion of binary stored in int[] to integer value. You are also timing the generation of your random data. I expect that the majority of the time is spent generating the random data.
Re-write your benchmarking program to operate on data prepared before you start timing. Make sure that the duration of the test is at least 5 or 10 seconds so that you can generate meaningful answers. If you only run for two milliseconds then the granularity of your timer affects the quality of your results.
Bear in mind that in your real application you will be taking a picture on a camera of a QR code and decoding that. The cost of that is many orders of magnitude greater than the cost of converting the 8 bit int arrays.
Your code to do that conversion is already efficient. Do not seek to optimize it further. Not only is there no need to optimize it, there is little hope for significant gains. For the sake of clarity and conciseness you may well opt to use one of the .net library methods that perform such a conversion, but performance of this part of your program is not an issue.
As an aside, it looks like you need to be converting the 8 bit value to byte, adding these values to a byte array, and then feeding to Encoding.GetString to obtain your text. A cast to UTF-16 char as per your code is not correct.
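To illustrate that aside (my own example with made-up values, and ASCII as the encoding is an assumption): collect each decoded 8-bit value into a byte[] and decode the whole buffer at once:

byte[] decoded = { 0x48, 0x65, 0x6C, 0x6C, 0x6F }; // five decoded 8-bit values
string text = Encoding.ASCII.GetString(decoded);   // "Hello"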
This is worth a try:
var yourString = "00100000";
char yourChar = (char) Convert.ToByte(yourString, 2); // you got ' ' (space)
It may or may not be faster, but it is definitely simpler, more stable, and more maintainable.
I ran some tests with different implementations.
The first was @Melnikovl's answer.
The second was mine, where I replaced + with | and * with the << operator (see the sketch at the end of this answer).
The third was the author's original solution.
I tested with the modified benchmark and measured only the conversion code.
The first and second solutions showed slightly better performance, but BitConverter was a little better more often, so I think you should choose it (also because of the simplicity of the code):
byte[] bytes = { 1, 1, 1, 1 };
int i = BitConverter.ToInt32(bytes, 0);
char a = (char)i;
Don't forget to check whether the byte array is little- or big-endian.
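For reference, here is what the "second" variant mentioned above could look like, with | instead of + and shifts instead of multiplies (a sketch assuming the same int[8] layout as the question):

char a = (char)(_tab[0]        | (_tab[1] << 1) | (_tab[2] << 2) | (_tab[3] << 3) |
                (_tab[4] << 4) | (_tab[5] << 5) | (_tab[6] << 6) | (_tab[7] << 7));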
【This is not a duplicate. Similar questions are about scenarios where people have control over the source data. I do not.】
In Japan there's something called the "Emergency Warning Broadcasting System." It looks like this when activated: http://www.youtube.com/watch?v=9hjlYvp9Pxs
In the above video, at around 2:37, a FSK-modulated signal is sent. I want to parse this signal; i.e. given a WAV file that contains the signal, I want to end up with a StringBuilder that contains 0s and 1s to process them later. I have the spec for the binary data and all, but the problem is that I know nothing about audio programming. :(
This is just for a hobby project, but I became hooked. TV and radio makers can pick up this signal and have their appliances do stuff in reaction to it, so it can't be that hard, right? :(
Facts about the signal:
The mark tone is 1024Hz, and the stop tone is 640Hz
Each tone is 15.625ms long
2 second pause before signal begins and after it ends (probably for detection purposes)
What I did so far:
Write a simple RIFF parser that accepts 8bit mono WAV files and allows me to get samples from them. I've tested it and it works.
A loop that takes 15.625ms of samples and:
Uses RMS to look for two seconds of silence
Uses the Goertzel algorithm to decide if the signal is 1024Hz or 640Hz
The problems I have:
0s and 1s are swallowed during the loop depending on the test data.
Given the clarity of the signal (YouTube-to-MP3 rip), that shouldn't happen.
If I generate a repeating 01 sequence in Audacity 30 times, my program will pick up around 10 of the 01 pairs, instead of 30
Sometimes 0s and 1s are swapped (side effect of the above?)
If I tweak the code so it works with one test sound file, other test sound files stop working
My questions:
Can anyone give me a high level overview on how FSK decoding would be done properly in software?
Do I need to apply some sort of filter that limits the signal to 640Hz+1024Hz and mutes everything else?
What is the best approach to keep the timing right? Maybe I'm doing it wrong?
Any links to beginner's literature on this kind of audio processing? I'd really like to learn and get this working.
The code that reads samples is (simplified):
StringBuilder ews_bits = new StringBuilder();
double[] samples = new double[(int)(samplesPerMs * 15.625D)];
int index = 0, readTo = /* current offset + RIFF subChunk2Size */;
BinaryReader br = /* at start of PCM data */;

while (br.BaseStream.Position < readTo)
{
    switch (bitsPerSample / 8)
    {
        case 1: // 8bit
            samples[index++] = ((double)br.ReadByte() - 127.5D) / 256D;
            break;
        case 2: // 16bit
            samples[index++] = (double)br.ReadInt16() / 32768D;
            break;
    }

    if (index != samples.Length)
        continue;

    /****** The sample buffer is full and we must process it. ******/
    if (AudioProcessor.IsSilence(ref samples))
    {
        silence_count++;
        if (state == ParserState.Decoding && silence_count > 150)
        {
            // End of EWS broadcast reached.
            EwsSignalParser.Parse(ews_bits.ToString());
            /* ... reset state; go back to looking for silence ... */
        }
        goto Done;
    }

    /****** The signal was not silence. ******/
    if (silence_count > 120 && state == ParserState.SearchingSilence)
        state = ParserState.Decoding;

    if (state == ParserState.Decoding)
    {
        AudioProcessor.Decode(ref samples, sampleRate, ref ews_bits);
        bool continue_decoding = /* check first 20 bits for signature */;
        if (continue_decoding) goto Done;

        // If we get here, we were decoding a junk signal.
        state = ParserState.SearchingSilence;
    }

    /* Not enough silence yet */
    silence_count = 0;

Done:
    index = 0;
}
The audio processor is just a class with:
public static void Decode(ref double[] samples, int sampleRate, ref StringBuilder bitHolder)
{
    double freq_640 = GoertzelMagnitude(ref samples, 640, sampleRate);
    double freq_1024 = GoertzelMagnitude(ref samples, 1024, sampleRate);
    if (freq_640 > freq_1024)
        bitHolder.Append("0");
    else
        bitHolder.Append("1");
}

public static bool IsSilence(ref double[] samples)
{
    // power_RMS = sqrt(sum(x^2) / N)
    double sum = 0;
    for (int i = 0; i < samples.Length; i++)
        sum += samples[i] * samples[i];
    double power_RMS = Math.Sqrt(sum / samples.Length);
    return power_RMS < 0.01;
}
/// <remarks>http://www.embedded.com/design/embedded/4024443/The-Goertzel-Algorithm</remarks>
private static double GoertzelMagnitude(ref double[] samples, double targetFrequency, int sampleRate)
{
    double n = samples.Length;
    int k = (int)(0.5D + (n * targetFrequency) / (double)sampleRate);
    double w = (2.0D * Math.PI / n) * k;
    double cosine = Math.Cos(w);
    double coeff = 2.0D * cosine;

    double q0 = 0, q1 = 0, q2 = 0;
    for (int i = 0; i < samples.Length; i++)
    {
        double sample = samples[i];
        q0 = coeff * q1 - q2 + sample;
        q2 = q1;
        q1 = q0;
    }

    double magnitude = Math.Sqrt(q1 * q1 + q2 * q2 - q1 * q2 * coeff);
    return magnitude;
}
Thanks for reading. I hope you can help me.
This is how I would do it (high-level description):
Run your signal through an FFT
Look for steady peaks at about 640 Hz and 1024 Hz (I would say at least +/- 10 Hz)
If the signal is steady for about 10 ms (by "steady" I mean about 95% of the samples are in the same range, 640 Hz +/- 10 Hz or 1024 Hz +/- 10 Hz), take that as a detection of the tone. Use this detection also to synchronize your timer that tells you when to expect the next tone.
I got it about 90% working now, after rewriting the sample-parsing loop and silence-detection parts. There were two main problems in my implementation. The first was that the silence detector was overeager, so I changed it from processing every millisecond of samples to every half-millisecond of samples. That brought me exactly to the start of the FSK data.
The next problem was that I then thought I could naively let the demodulator look at 15.625 ms of samples as it works its way through the WAV file. It turns out that while this works great for the first 90 bits or so, eventually tones become a little longer or shorter than expected and the demodulator goes out of sync. The current code finds and corrects 13 bits with such a timing mismatch. Particularly vulnerable to this are spots where the signal changes from mark to space and vice versa.
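To make that concrete, here is a rough sketch of one way to absorb such drift (this is not my actual parser; the brute-force offset search is a simplification, and it assumes using System and System.Text). Each bit window may slide a few samples either way, and the alignment with the strongest dominant tone wins:

static string DemodulateWithResync(double[] samples, int sampleRate)
{
    int bitLen = (int)Math.Round(sampleRate * 15.625 / 1000.0); // samples per tone
    int maxDrift = bitLen / 8;                                  // allow +/- 12.5% slide
    var bits = new StringBuilder();
    int pos = 0;
    while (pos + bitLen + maxDrift <= samples.Length)
    {
        int bestOff = 0;
        double bestScore = -1.0;
        bool bestBit = false;
        for (int off = -maxDrift; off <= maxDrift; off++)
        {
            if (pos + off < 0)
                continue;
            double m640 = Goertzel(samples, pos + off, bitLen, 640, sampleRate);
            double m1024 = Goertzel(samples, pos + off, bitLen, 1024, sampleRate);
            double score = Math.Max(m640, m1024);
            if (score > bestScore)
            {
                bestScore = score;
                bestOff = off;
                bestBit = m1024 > m640; // 1024 Hz is the mark ("1") tone
            }
        }
        bits.Append(bestBit ? '1' : '0');
        pos += bitLen + bestOff; // absorb the measured drift
    }
    return bits.ToString();
}

static double Goertzel(double[] s, int start, int len, double freq, int rate)
{
    int k = (int)(0.5 + (double)len * freq / rate);
    double coeff = 2.0 * Math.Cos(2.0 * Math.PI * k / len);
    double q1 = 0, q2 = 0;
    for (int i = start; i < start + len; i++)
    {
        double q0 = coeff * q1 - q2 + s[i];
        q2 = q1;
        q1 = q0;
    }
    return Math.Sqrt(q1 * q1 + q2 * q2 - q1 * q2 * coeff);
}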
Guess there's a reason the word "analog" contains "anal". I really wish I knew more about signal theory and digital signal processing. :(
How I discovered all of this: I imported the MP3 into Audacity and trimmed it down to the FSK part. Then I had Audacity generate labels for every bit. After that I went through, highlighting bits according to the labels.
I'm rewriting a library with a mandate to make it totally allocation-free. The goal is to have zero garbage collections after the app's startup phase is done.
Previously, there were a lot of calls like this:
Int32 foo = Int32.Parse(ASCIIEncoding.ASCII.GetString(bytes, start, length));
Which I believe is allocating a string. I couldn't find a C# library function that would do the same thing without allocating. I looked at the BitConverter class, but it looks like that only applies if your Int32 is encoded as the actual bytes that represent it. Here, I have an array of bytes representing ASCII characters that represent an Int32.
Here's what I did
public static Int32 AsciiBytesToInt32(byte[] bytes, int start, int length)
{
    Int32 Temp = 0;
    Int32 Result = 0;
    Int32 j = 1;
    for (int i = start + length - 1; i >= start; i--)
    {
        Temp = ((Int32)bytes[i]) - 48;
        if (Temp < 0 || Temp > 9)
        {
            throw new Exception("Bytes In AsciiBytesToInt32 Are Not An Int32");
        }
        Result += Temp * j;
        j *= 10;
    }
    return Result;
}
Does anyone know of a C# library function that already does this in a more optimal way? Or an improvement to make the above run faster (it's probably going to be called millions of times during the day)? Thanks!
Millions of times per day shouldn't be a problem - I'd expect that to be able to run hundreds of thousands of times per second. Personally I'd rewrite the above to only declare "temp" within the loop (and get rid of the Pascal-cased local variable names - urgh) but it should be okay.
The code would be more immediately understandable as:
int digit = bytes[i] - '0';
which does the same as your
Temp = ((Int32)bytes[i]) - 48;
line, but in a simpler way (IMO). They should behave exactly the same way.
On a general note, trying to write C# without any allocations is pretty harsh, and fights against the way the language and framework are designed. Do you believe this is actually a reasonable requirement? Admittedly I've heard about it being the way some games are written in managed code... but it does seem a bit odd.
Of course, you're going to allocate an exception if the bytes are inappropriate...
EDIT: Note that your code doesn't allow for negative numbers. Is that okay?
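If negative numbers do need to be supported, a variant along these lines keeps the no-allocation property on the happy path (a sketch, not a drop-in replacement; I've also swapped the bare Exception for FormatException):

public static int AsciiBytesToInt32(byte[] bytes, int start, int length)
{
    int i = start, end = start + length, sign = 1;
    if (i < end && bytes[i] == (byte)'-')
    {
        sign = -1;
        i++;
    }
    if (i == end)
        throw new FormatException("Empty number in AsciiBytesToInt32");
    int result = 0;
    for (; i < end; i++)
    {
        int digit = bytes[i] - '0';
        if (digit < 0 || digit > 9)
            throw new FormatException("Bytes in AsciiBytesToInt32 are not an Int32");
        result = result * 10 + digit; // accumulate left-to-right
    }
    return sign * result;
}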
(C#, prime generator)
Here's some code a friend and I were poking around with:
public List<int> GetListToTop(int top)
{
    top++;
    List<int> result = new List<int>();
    BitArray primes = new BitArray(top / 2);
    int root = (int)Math.Sqrt(top);
    for (int i = 3, count = 3; i <= root; i += 2, count++)
    {
        int n = i - count;
        if (!primes[n])
            for (int j = n + i; j < top / 2; j += i)
            {
                primes[j] = true;
            }
    }
    if (top >= 2)
        result.Add(2);
    for (int i = 0, count = 3; i < primes.Length; i++, count++)
    {
        if (!primes[i])
        {
            int n = i + count;
            result.Add(n);
        }
    }
    return result;
}
On my dorky AMD x64 1800+ (dual core), this finds all primes below 1 billion in 34546.875 ms. The problem seems to be storing more in the bit array; trying to crank past ~2 billion is more than the BitArray wants to store. Any ideas on how to get around that?
I would "swap" parts of the array out to disk. By that, I mean, divide your bit array into half-billion bit chunks and store them on disk.
The have only a few chunks in memory at any one time. With C# (or any other OO language), it should be easy to encapsulate the huge array inside this chunking class.
You'll pay for it with a slower generation time but I don't see any way around that until we get larger address spaces and 128-bit compilers.
Or as an alternative approach to the one suggested by Pax, make use of the new Memory-Mapped File classes in .NET 4.0 and let the OS decide which chunks need to be in memory at any given time.
Note however that you'll want to try and optimise the algorithm to increase locality so that you do not needlessly end up swapping pages in and out of memory (trickier than this one sentence makes it sound).
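A minimal sketch of that idea, assuming .NET 4.0's System.IO.MemoryMappedFiles (the file name and sizes here are mine): treat a disk file as one huge bit set and let the OS page it in and out:

using System.IO;
using System.IO.MemoryMappedFiles;

long capacityBytes = 1L << 29; // 2^32 bits of flags = 512 MB on disk
using (var mmf = MemoryMappedFile.CreateFromFile("sieve.bin", FileMode.Create,
                                                 "sieve", capacityBytes))
using (var view = mmf.CreateViewAccessor())
{
    long bit = 3000000000L;                  // a bit index beyond int.MaxValue
    long byteIndex = bit >> 3;
    byte mask = (byte)(1 << (int)(bit & 7));
    view.Write(byteIndex, (byte)(view.ReadByte(byteIndex) | mask)); // set the bit
    bool isSet = (view.ReadByte(byteIndex) & mask) != 0;            // test the bit
}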
Use multiple BitArrays to increase the maximum size. If a number is too great, bit-shift it and store the result in the bit array that holds bits 33-64.
BitArray first = new BitArray(int.MaxValue);  // bits in the int range
BitArray second = new BitArray(int.MaxValue); // bits in the overflow range

long num = 23958923589;
if (num > int.MaxValue)
{
    int shifted = (int)(num >> 32); // shift before casting, or the high bits are lost
    second[shifted] = true;
}

long request = 0902305023;
if (request > int.MaxValue)
{
    int shifted = (int)(request >> 32);
    return second[shifted];
}
else
    return first[(int)request];
Of course it would be nice if BitArray would support size up to System.Numerics.BigInteger.
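Here is what such a wrapper could look like (entirely my own illustration, not tested against the sieve above): a long-indexed bit set backed by multiple BitArrays:

using System;
using System.Collections;

class ChunkedBitArray
{
    private const long ChunkBits = 1L << 30; // 2^30 bits per chunk
    private readonly BitArray[] _chunks;

    public ChunkedBitArray(long bits)
    {
        _chunks = new BitArray[(int)((bits + ChunkBits - 1) / ChunkBits)];
        for (int i = 0; i < _chunks.Length; i++)
        {
            long remaining = bits - (long)i * ChunkBits;
            _chunks[i] = new BitArray((int)Math.Min(remaining, ChunkBits));
        }
    }

    public bool this[long index]
    {
        get { return _chunks[(int)(index / ChunkBits)][(int)(index % ChunkBits)]; }
        set { _chunks[(int)(index / ChunkBits)][(int)(index % ChunkBits)] = value; }
    }
}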
Swapping to disk will make your code really slow.
I have a 64-bit OS, and my BitArray is also limited to 32-bits.
P.S.: your prime number calculation looks weird; mine looks like this:
// primes[] starts out all true; cross off multiples and yield what survives
for (int i = 2; i <= number; i++)
    if (primes[i])
    {
        yield return i;
        for (int scalar = i + i; scalar <= number; scalar += i)
            primes[scalar] = false;
    }
The Sieve algorithm performs better. Using it, I could determine all the 32-bit primes (about 105 million in total) for the int range in less than 4 minutes. Of course, returning the list of primes is a different thing, as the memory requirement there would be a little over 400 MB (1 int = 4 bytes). Using a for loop, the numbers were printed to a file and then imported into a DB for more fun :) However, for the 64-bit primes the program would need several modifications and would perhaps require distributed execution over multiple nodes. Also refer to the following links:
http://www.troubleshooters.com/codecorn/primenumbers/primenumbers.htm
http://en.wikipedia.org/wiki/Prime-counting_function