How do I calculate the transfer rate in kilobytes per second? I used a Stopwatch, but it doesn't work because it throws a divide-by-zero error (count / 0).
public void sendFile(string filePath)
{
    Stopwatch stopWatch = new Stopwatch();
    FileInfo file = new FileInfo(filePath);
    try
    {
        int fileSize = (int)file.Length;
        Program.mainForm.MaxProgressBarHandler(fileSize);
        byte[] fileDetial;
        string detail = file.Name + "," + fileSize.ToString();
        fileDetial = Encoding.ASCII.GetBytes(detail);
        client.Send(fileDetial);
        byte[] fileData = new byte[fileSize];
        int count;
        int sum = 0;
        file.OpenRead().Read(fileData, 0, fileSize);
        while (sum < fileSize)
        {
            stopWatch.Restart();
            if (fileSize - sum < packetSize)
            {
                count = client.Send(fileData, sum, fileSize - sum, SocketFlags.None);
                Program.mainForm.UpdateProgressBarHandler(count);
            }
            else
            {
                count = client.Send(fileData, sum, packetSize, SocketFlags.None);
                Program.mainForm.UpdateProgressBarHandler(count);
            }
            stopWatch.Stop();
            sum += count;
            Program.mainForm.AppendLabel(((fileSize * 8) / stopWatch.ElapsedMilliseconds).ToString());
            Console.WriteLine(sum + " of " + fileSize + " sent");
        }
    }
    finally
    {
        Console.WriteLine("sent");
        CloseClient();
    }
}
Please help me =)
For the first part of your question, take a look at this Joel on Software forum thread. It is not specifically .NET-related, but it deals directly with transferring a file over TCP.
As for the second part: since I do not have your full code, I cannot see exactly why your stopWatch.ElapsedMilliseconds is zero. My guess is that the send completed in under a millisecond, so ElapsedMilliseconds rounds down to zero. You could do something like this to avoid the divide-by-zero error:
if (stopWatch.ElapsedMilliseconds != 0)
    Program.mainForm.AppendLabel(((fileSize * 8) / stopWatch.ElapsedMilliseconds).ToString());
Though I would probably use a 1-second timer, make sum a class-scoped variable, and update your label every second, i.e.:
public partial class Form1 : Form
{
    int sum = 0;
    int seconds = 0;
    ...
    private void timer1_Tick(object sender, EventArgs e)
    {
        seconds += 1;
        Program.mainForm.AppendLabel(((sum * 8) / seconds).ToString());
    }
and reset them when you finish your transfer.
....
finally
{
    timer1.Stop();
    sum = 0;
    seconds = 0;
    Console.WriteLine("sent");
    CloseClient();
}
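Alternatively, you can time the whole transfer with a single Stopwatch and compute the average rate from the running byte total, which avoids the per-packet zero-millisecond problem entirely. A minimal sketch (the TransferMeter class and its members are illustrative names, not from your code):

```csharp
using System.Diagnostics;

public class TransferMeter
{
    private readonly Stopwatch watch = Stopwatch.StartNew();
    private long totalBytes;

    public void AddBytes(int count) => totalBytes += count;

    // Average rate since the transfer started, in KB/s.
    // Guard against a zero elapsed time on the very first packets.
    public double KilobytesPerSecond =>
        watch.Elapsed.TotalSeconds > 0
            ? (totalBytes / 1024.0) / watch.Elapsed.TotalSeconds
            : 0.0;
}
```

Call AddBytes(count) after each client.Send and read KilobytesPerSecond whenever you update the label; averaging over the whole transfer also gives a much steadier number than per-packet timing.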
Related
So I have the following loop:
for (int i = 1; i < numRows + 2; i++) //numRows was +4, now +2
{
    Console.Clear();
    Console.WriteLine("Number of rows: " + numRows);
    Console.Write("Checking Row #: " + currRowNumber);
    //We want to skip every row that is null and continue looping until we have more than 3 rows in a row that are null, then break
    if (i > 1)
    {
        i -= 1;
    }
    //Create Worksheet Range
    Microsoft.Office.Interop.Excel.Range range = (Microsoft.Office.Interop.Excel.Range)excelWorkbookWorksheet.Cells[i, 2];
    string cellValue = Convert.ToString(range.Value);
    if (nullCounter == 3) //was 5
    {
        Console.WriteLine("\nNull row detected...breaking");
        Console.WriteLine("Number of rows deleted: " + numRowsDeleted);
        break;
    }
    if (cellValue != null)
    {
        if (cellValue.Contains(searchText))
        {
            //Console.WriteLine("Deleting Row: " + Convert.ToString(cellValue));
            ((Range)excelWorkbookWorksheet.Rows[i]).Delete(XlDeleteShiftDirection.xlShiftUp);
            numRowsDeleted++;
            //Console.WriteLine("Number of rows deleted: " + numRowsDeleted);
            nullCounter = 0;
            i--;
            currRowNumber++;
            rowsPerSecond = i;
        }
        else
        {
            currRowNumber++;
            nullCounter = 0;
        }
    }
    else
    {
        nullCounter++;
        //Console.WriteLine("NullCounter: " + nullCounter);
    }
    i++;
}
I want to calculate how many rows I'm looping through per second, and from that, how long the entire loop will take given the total number of rows.
Check out setting up a Stopwatch at the beginning of the loop and checking its Elapsed property at the end.
It's pretty trivial to get something simple up and running. Consider the following class:
public class TimePredictor
{
    private readonly Stopwatch watch = new Stopwatch();
    private double currentProgressRate;

    public void Start() => watch.Restart();

    public void Stop() => watch.Stop();

    public double ElapsedTime => watch.ElapsedMilliseconds;

    public void Update(long currentProgress)
    {
        // Guard against currentProgress == 0, which would yield an infinite rate.
        if (currentProgress > 0)
            currentProgressRate = watch.ElapsedMilliseconds / (double)currentProgress;
    }

    public double GetExpectedTotalTime(long total)
        => total * currentProgressRate;

    public double GetExpectedTimeLeft(long total)
        => GetExpectedTotalTime(total) - watch.ElapsedMilliseconds;
}
And a trivial use case:
var repetitions = 200;
var predictor = new TimePredictor();
var random = new Random(); // hoisted out of the loop so successive calls vary
predictor.Start();
for (int i = 0; i < repetitions; i++)
{
    Thread.Sleep(random.Next(100, 250));
    if (i > 0 && i % 5 == 0) // skip i == 0 to avoid dividing by zero in Update
    {
        predictor.Update(i);
        Console.WriteLine($"Iteration #{i}:");
        Console.WriteLine($"\tExpected total time: {predictor.GetExpectedTotalTime(repetitions) / 1000.0:N1}");
        Console.WriteLine($"\tExpected time left: {predictor.GetExpectedTimeLeft(repetitions) / 1000.0:N1}");
        Console.WriteLine();
    }
}
predictor.Stop();
Console.WriteLine($"Total time: {predictor.ElapsedTime / 1000.0:N1}");
You have two possible solutions:
Take a Stopwatch and a counter. Start the stopwatch before you begin looping and increment the counter on every iteration. At the end of each pass through the loop you can divide counter / sw.Elapsed.
Use a counter and a Timer with an interval of 1000 ms. Increment the counter every time you go through the for-loop. On every tick you get the current loops/second.
EDIT: When you are finished with the for-loop, stop the Timer and the Stopwatch.
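The first approach (Stopwatch plus counter) can be sketched like this; the variable names and console output are illustrative, not taken from the question's code:

```csharp
using System;
using System.Diagnostics;

class RowRateDemo
{
    static void Main()
    {
        int totalRows = 1000;
        int rowsProcessed = 0;
        var sw = Stopwatch.StartNew();

        for (int row = 0; row < totalRows; row++)
        {
            // ... process the row here ...
            rowsProcessed++;

            if (sw.ElapsedMilliseconds > 0)
            {
                double rowsPerSecond = rowsProcessed / sw.Elapsed.TotalSeconds;
                double secondsLeft = (totalRows - rowsProcessed) / rowsPerSecond;
                Console.Write($"\r{rowsPerSecond:N0} rows/s, ~{secondsLeft:N0}s remaining");
            }
        }
        sw.Stop();
    }
}
```

The estimate assumes row-processing time stays roughly constant; if early rows are faster or slower than later ones, the projection will drift until enough rows have been sampled.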
I'm developing a simple program that analyses the frequencies of audio files.
Using an FFT length of 8192 and a sample rate of 44100, if I use as input a constant-frequency wav file - say 65 Hz, 200 Hz or 300 Hz - the output is a constant graph at that value.
If I use a recording of someone speaking, the frequencies have peaks as high as 4000 Hz, with an average around 450 on a 90-second file.
At first I thought it was because the recording was in stereo, but converting it to mono with the exact same bitrate as the test files doesn't change much (the average goes down from 492 to 456, but that's still way too high).
Has anyone got an idea as to what could cause this?
I think I shouldn't take the highest value, but perhaps an average or a median value instead?
EDIT: using the average of the magnitudes per 8192-byte buffer and getting the index that's closest to that magnitude messes everything up.
This is the code for the handler of the event the SampleAggregator fires when it has calculated the FFT for the current buffer:
void FftCalculated(object sender, FftEventArgs e)
{
    int length = e.Result.Length;
    float[] magnitudes = new float[length];
    for (int i = 0; i < length / 2; i++)
    {
        float real = e.Result[i].X;
        float imaginary = e.Result[i].Y;
        magnitudes[i] = (float)(10 * Math.Log10(Math.Sqrt((real * real) + (imaginary * imaginary))));
    }
    float max_mag = float.MinValue;
    int max_index = -1; // an array index should be an int, not a float
    for (int i = 0; i < length / 2; i++)
    {
        if (magnitudes[i] > max_mag)
        {
            max_mag = magnitudes[i];
            max_index = i;
        }
    }
    var currentFrequency = max_index * SAMPLERATE / 8192;
    Console.WriteLine("frequency be " + currentFrequency);
}
ADDITION: this is the code that reads the file and sends it to the analysing part:
using (var rdr = new WaveFileReader(audioFilePath))
{
    var newFormat = new WaveFormat(Convert.ToInt32(SAMPLERATE/*44100*/), 16, 1);
    byte[] buffer = new byte[8192];
    var audioData = new AudioData(); //custom class for project
    using (var conversionStream = new WaveFormatConversionStream(newFormat, rdr))
    {
        // Used to send audio in realtime, it's a timestamps issue for the graphs
        // I'm working on fixing this, but it has lower priority so disregard it :p
        TimeSpan audioDuration = conversionStream.TotalTime;
        long audioLength = conversionStream.Length;
        int waitTime = (int)(audioDuration.TotalMilliseconds / audioLength * 8192);
        while (conversionStream.Read(buffer, 0, buffer.Length) != 0)
        {
            audioData.AudioDataBase64 = Utils.Base64Encode(buffer);
            Thread.Sleep(waitTime);
            SendMessage("AudioData", Utils.StringToAscii(AudioData.GetJSON(audioData)));
        }
        Console.WriteLine("Reached End of File");
    }
}
This is the code that receives the audio data:
{
    var audioData = new AudioData();
    audioData = AudioData.GetStateFromJSON(Utils.AsciiToString(receivedMessage));
    QueueAudio(Utils.Base64Decode(audioData.AudioDataBase64));
}
followed by
var waveFormat = new WaveFormat(Convert.ToInt32(SAMPLERATE/*44100*/), 16, 1);
_bufferedWaveProvider = new BufferedWaveProvider(waveFormat);
_bufferedWaveProvider.BufferDuration = new TimeSpan(0, 2, 0);

void QueueAudio(byte[] data)
{
    _bufferedWaveProvider.AddSamples(data, 0, data.Length);
    if (_bufferedWaveProvider.BufferedBytes >= fftLength)
    {
        byte[] buffer = new byte[_bufferedWaveProvider.BufferedBytes];
        _bufferedWaveProvider.Read(buffer, 0, _bufferedWaveProvider.BufferedBytes);
        for (int index = 0; index < buffer.Length; index += 2)
        {
            short sample = (short)(buffer[index] | buffer[index + 1] << 8);
            float sample32 = sample / 32767f;
            sampleAggregator.Add(sample32);
        }
    }
}
And then the SampleAggregator fires the event above when it's done with the FFT.
I have a circuit that sends me data from two different sensors. The data arrives in packets. The first byte is '$', which separates one packet from the next. After '$' the circuit sends 16 bytes of microphone data and 1 byte of pulse-sensor data. I have an array to store the incoming data, and after plotting the data every 20 ms I start writing new bytes from index zero of the array. I need to plot these data on separate graphs using ZedGraph. However, I could not separate the data correctly: sometimes one or more audio values show up in the other graph. Here is my code:
for (int i = 0; i < 4; i++)
{
    if (data[i * 18] == Convert.ToByte('$'))
    {
        for (int x = ((i * 18) + 1); x < ((i * 18) + 17); x++)
        {
            listAuido.Add(time, data[x]);
        }
        for (int a = ((i * 18) + 17); a < ((i * 18) + 18); a++)
        {
            listPulse.Add(time, data[a]);
        }
    }
}
How can I solve this issue?
Circuit settings: BaudRate: 38400, Frequency: 200 Hz, CommunicationType: RS232.
Port settings: ReadTimeOut = 5; WriteTimeOut = 5;
While reading data I use the code below. Read_Data1 is the data[] array in the code above. I have a counter; after plotting the data its value is reset to zero, which prevents an index-out-of-range exception on my buffer.
byte[] Read_Data1 = new byte[1000];

private void myPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    if (!myPort.IsOpen)
        return;
    if (myPort.BytesToRead > 0)
    {
        byte[] buffer = new byte[myPort.BytesToRead];
        myPort.Read(buffer, 0, buffer.Length);
        DataConversion.bytetobyte(Read_Data1, buffer, buffer.Length, count);
        count += buffer.Length;
        DataRecord.SaveBytesToFile(buffer, save.FileName);
    }
}

public static void bytetobyte(byte[] Storage, byte[] databyte, int datacount, int count)
{
    int abc;
    for (abc = 0; abc < datacount; abc++)
    {
        Storage[abc + count] = databyte[abc];
    }
}
Without seeing the data stream it's hard to understand exactly what's going on, but when you multiply by 18 the offset will differ as i increases. For that to work, you need to be sure that exactly 4 packets are actually in the buffer at that moment, or things might get weird. Having worked a fair bit with RS232-connected measurement hardware, I often find that the devices' output is not all that consistent, so I'd be careful about assuming the data is there :)
Look into how and when you're reading data into the data buffer: are you sure it contains fresh data every time you call your code?
The loop looks correct, but it's a little difficult to read. I'd rewrite it as:
for (int i = 0; i < 4; i++)
{
    if (data[i * 18] == Convert.ToByte('$'))
    {
        for (int x = 0; x < 16; x++)
        {
            listAuido.Add(time, data[(i * 18) + 1 + x]);
        }
        for (int a = 0; a < 1; a++) // this doesn't really need to be a loop
        {
            listPulse.Add(time, data[((i * 18) + 17) + a]);
        }
    }
}
I looked at the code you added, but it's not immediately clear how the first bit of code is called by the second. I still think something in the buffer handling is causing your issue, but you could possibly eliminate that by using the buffer built into the serial port and reading just one byte at a time:
while (true) // you could do this on a separate thread or on a timer
{
    if (port.ReadByte() == Convert.ToByte('$'))
    {
        for (int x = 0; x < 16; x++)
            listAuido.Add(time, port.ReadByte());
        listPulse.Add(time, port.ReadByte());
    }
}
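Another way to make the framing robust against partial reads is to buffer everything that arrives and only consume a packet once all 18 bytes ('$' + 16 audio + 1 pulse) are present. A sketch of that idea; the class and method names are illustrative, not from the original code:

```csharp
using System;
using System.Collections.Generic;

class PacketFramer
{
    private readonly List<byte> buffer = new List<byte>();

    public void Append(byte[] chunk) => buffer.AddRange(chunk);

    // Returns the 16 audio bytes and 1 pulse byte of the next complete
    // packet, or null if a full packet has not arrived yet.
    public (byte[] audio, byte pulse)? TryReadPacket()
    {
        // Drop any noise before the '$' marker.
        while (buffer.Count > 0 && buffer[0] != (byte)'$')
            buffer.RemoveAt(0);

        if (buffer.Count < 18)
            return null;

        byte[] audio = buffer.GetRange(1, 16).ToArray();
        byte pulse = buffer[17];
        buffer.RemoveRange(0, 18);
        return (audio, pulse);
    }
}
```

Feeding every DataReceived chunk through Append and draining TryReadPacket in a loop means partial reads no longer shift the audio and pulse bytes into each other's graphs.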
I have a method Limit() which measures the bandwidth passed through a channel over a certain time and, using Thread.Sleep(), limits it if the bandwidth limit is reached.
The method itself produces proper results (in my opinion), but Thread.Sleep doesn't (due to multithreaded CPU usage): I compute a proper "millisecondsToWait", but the speed check afterwards is far from the limit I passed in.
Is there a way to make the limitation more precise?
Limiter Class
private readonly int m_maxSpeedInKbps;

public Limiter(int maxSpeedInKbps)
{
    m_maxSpeedInKbps = maxSpeedInKbps;
}

public int Limit(DateTime startOfCycleDateTime, long writtenInBytes)
{
    if (m_maxSpeedInKbps > 0)
    {
        double totalMilliseconds = DateTime.Now.Subtract(startOfCycleDateTime).TotalMilliseconds;
        int currentSpeedInKbps = (int)(writtenInBytes / totalMilliseconds);
        if (currentSpeedInKbps - m_maxSpeedInKbps > 0)
        {
            double delta = (double)currentSpeedInKbps / m_maxSpeedInKbps;
            int millisecondsToWait = (int)((totalMilliseconds * delta) - totalMilliseconds);
            if (millisecondsToWait > 0)
            {
                Thread.Sleep(millisecondsToWait);
                return millisecondsToWait;
            }
        }
    }
    return 0;
}
Test class, which always fails with a large delta:
[TestMethod]
public void ATest()
{
    List<File> files = new List<File>();
    for (int i = 0; i < 1; i++)
    {
        files.Add(new File(i + 1, 100));
    }
    const int maxSpeedInKbps = 1024; // 1MBps
    Limiter limiter = new Limiter(maxSpeedInKbps);
    DateTime startDateTime = DateTime.Now;
    Parallel.ForEach(files, new ParallelOptions { MaxDegreeOfParallelism = 5 }, file =>
    {
        DateTime currentFileStartTime = DateTime.Now;
        Thread.Sleep(5);
        limiter.Limit(currentFileStartTime, file.Blocks * Block.Size);
    });
    long roundOfWriteInKB = (files.Sum(i => i.Blocks.Count) * Block.Size) / 1024;
    int currentSpeedInKbps = (int)(roundOfWriteInKB / DateTime.Now.Subtract(startDateTime).TotalMilliseconds * 1000);
    Assert.AreEqual(maxSpeedInKbps, currentSpeedInKbps, string.Format("maxSpeedInKbps {0} currentSpeedInKbps {1}", maxSpeedInKbps, currentSpeedInKbps));
}
I used to use Thread.Sleep a lot until I discovered wait handles. Using wait handles you can suspend threads, which come alive again when the wait handle is triggered from elsewhere or when a time threshold is reached. Perhaps you can re-engineer your limit methodology to use wait handles in some way, because in many situations they are indeed much more precise than Thread.Sleep.
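A sketch of that idea (the names here are illustrative): a ManualResetEventSlim gives you a time-limited wait that another thread can cut short, where Wait(timeout) returns as soon as the event is signalled and otherwise behaves like a sleep:

```csharp
using System.Threading;

class WaitHandleThrottle
{
    private readonly ManualResetEventSlim wakeUp = new ManualResetEventSlim(false);

    // Pause the calling thread for up to millisecondsToWait,
    // but let another thread cut the pause short via CancelWait().
    // Returns true if signalled, false if the timeout elapsed.
    public bool Pause(int millisecondsToWait)
    {
        wakeUp.Reset();
        return wakeUp.Wait(millisecondsToWait);
    }

    public void CancelWait() => wakeUp.Set();
}
```

Note that the timeout itself is still subject to the OS scheduler's timer resolution, so this improves responsiveness to signals more than raw millisecond accuracy.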
You can do it fairly accurately using a busy wait, but I wouldn't recommend it. You should use one of the multimedia timers to wait instead.
However, this method will wait fairly accurately:
void accurateWait(int millisecs)
{
    var sw = Stopwatch.StartNew();
    if (millisecs >= 100)
        Thread.Sleep(millisecs - 50);
    while (sw.ElapsedMilliseconds < millisecs)
        ;
}
But it is a busy wait and will consume CPU cycles terribly. Also it could be affected by garbage collections or task rescheduling.
Here's the test program:
using System;
using System.Diagnostics;
using System.Collections.Generic;
using System.Threading;

namespace Demo
{
    class Program
    {
        void run()
        {
            for (int i = 1; i < 10; ++i)
                test(i);
            for (int i = 10; i < 100; i += 5)
                test(i);
            for (int i = 100; i < 200; i += 10)
                test(i);
            for (int i = 200; i < 500; i += 20)
                test(i);
        }

        void test(int millisecs)
        {
            var sw = Stopwatch.StartNew();
            accurateWait(millisecs);
            Console.WriteLine("Requested wait = " + millisecs + ", actual wait = " + sw.ElapsedMilliseconds);
        }

        void accurateWait(int millisecs)
        {
            var sw = Stopwatch.StartNew();
            if (millisecs >= 100)
                Thread.Sleep(millisecs - 50);
            while (sw.ElapsedMilliseconds < millisecs)
                ;
        }

        static void Main()
        {
            new Program().run();
        }
    }
}
While looking around, I found quite a few discussions on how to figure out the number of lines in a file.
For example these three:
c# how do I count lines in a textfile
Determine the number of lines within a text file
How to count lines fast?
So, I went ahead and ended up using what seems to be the most efficient (at least memory-wise?) method that I could find:
private static int countFileLines(string filePath)
{
    using (StreamReader r = new StreamReader(filePath))
    {
        int i = 0;
        while (r.ReadLine() != null)
        {
            i++;
        }
        return i;
    }
}
But this takes forever when the lines in the file are very long. Is there really no faster solution?
I've been trying to use StreamReader.Read() or StreamReader.Peek(), but I can't (or don't know how to) make either of them move on to the next line as soon as there's 'stuff' (chars? text?).
Any ideas please?
CONCLUSION/RESULTS (After running some tests based on the answers provided):
I tested the 5 methods below on two different files and I got consistent results that seem to indicate that plain old StreamReader.ReadLine() is still one of the fastest ways... To be honest, I'm perplexed after all the comments and discussion in the answers.
File #1:
Size: 3,631 KB
Lines: 56,870
Results in seconds for File #1:
0.02 --> ReadLine method.
0.04 --> Read method.
0.29 --> ReadByte method.
0.25 --> Readlines.Count method.
0.04 --> ReadWithBufferSize method.
File #2:
Size: 14,499 KB
Lines: 213,424
Results in seconds for File #2:
0.08 --> ReadLine method.
0.19 --> Read method.
1.15 --> ReadByte method.
1.02 --> Readlines.Count method.
0.08 --> ReadWithBufferSize method.
Here are the 5 methods I tested based on all the feedback I received:
private static int countWithReadLine(string filePath)
{
    using (StreamReader r = new StreamReader(filePath))
    {
        int i = 0;
        while (r.ReadLine() != null)
        {
            i++;
        }
        return i;
    }
}

private static int countWithRead(string filePath)
{
    using (StreamReader _reader = new StreamReader(filePath))
    {
        int c = 0, count = 0;
        while ((c = _reader.Read()) != -1)
        {
            if (c == 10)
            {
                count++;
            }
        }
        return count;
    }
}

private static int countWithReadByte(string filePath)
{
    using (Stream s = new FileStream(filePath, FileMode.Open))
    {
        int i = 0;
        int b;
        b = s.ReadByte();
        while (b >= 0)
        {
            if (b == 10)
            {
                i++;
            }
            b = s.ReadByte();
        }
        return i;
    }
}

private static int countWithReadLinesCount(string filePath)
{
    return File.ReadLines(filePath).Count();
}

private static int countWithReadAndBufferSize(string filePath)
{
    int bufferSize = 512;
    using (Stream s = new FileStream(filePath, FileMode.Open))
    {
        int i = 0;
        byte[] b = new byte[bufferSize];
        int n = 0;
        n = s.Read(b, 0, bufferSize);
        while (n > 0)
        {
            i += countByteLines(b, n);
            n = s.Read(b, 0, bufferSize);
        }
        return i;
    }
}

private static int countByteLines(byte[] b, int n)
{
    int i = 0;
    for (int j = 0; j < n; j++)
    {
        if (b[j] == 10)
        {
            i++;
        }
    }
    return i;
}
No, it is not. The point is: it materializes the strings, which is not needed.
To COUNT lines you are much better off ignoring the "string" part and going for the "line" part.
A LINE is a series of bytes ending with \r\n (13, 10 - CR LF) or another marker.
Just run along the bytes, in a buffered stream, counting the number of appearances of your end-of-line marker.
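A minimal sketch of that approach, counting LF bytes through a BufferedStream without ever materializing a string (the class and method names are mine):

```csharp
using System.IO;

static class LineCounter
{
    // Counts '\n' (byte 10) occurrences; no strings are ever allocated.
    public static long CountLines(string filePath)
    {
        long count = 0;
        using (var bs = new BufferedStream(File.OpenRead(filePath)))
        {
            int b;
            while ((b = bs.ReadByte()) != -1)
            {
                if (b == 10)
                    count++;
            }
        }
        return count;
    }
}
```

Note that a file whose last line lacks a trailing newline will count one fewer than ReadLine-based methods, so pick the convention you need.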
The best way to know how to do this fast is to think about the fastest way to do it without using C/C++.
In assembly there is a CPU-level operation that scans memory for a character, so in assembly you would do the following:
Read a big part (or all) of the file into memory
Execute the SCASB instruction
Repeat as needed
So, in C# you want the compiler to get as close to that as possible.
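One way to approximate that scan in C# is to let Array.IndexOf do the searching over each buffer, since it is the closest managed analogue to a memory-scan loop; the method name here is my own:

```csharp
using System;
using System.IO;

static class ScanCounter
{
    // Scan each buffer for '\n' with Array.IndexOf rather than a
    // hand-written per-byte loop.
    public static long CountLines(string filePath)
    {
        long count = 0;
        byte[] buffer = new byte[64 * 1024];
        using (var fs = File.OpenRead(filePath))
        {
            int n;
            while ((n = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                int pos = -1;
                while ((pos = Array.IndexOf(buffer, (byte)10, pos + 1, n - pos - 1)) != -1)
                    count++;
            }
        }
        return count;
    }
}
```

Whether this beats a plain byte loop depends on the runtime; measure before assuming, as with everything in this thread.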
I tried multiple methods and tested their performance:
The one that reads a single byte is about 50% slower than the other methods, which all take about the same amount of time. You could try creating threads and doing this asynchronously, so that while you are waiting for a read you can start processing a previous read, but that sounds like a headache to me.
I would go with the one-liner File.ReadLines(filePath).Count(); it performs as well as the other methods I tested.
private static int countFileLines(string filePath)
{
    using (StreamReader r = new StreamReader(filePath))
    {
        int i = 0;
        while (r.ReadLine() != null)
        {
            i++;
        }
        return i;
    }
}

private static int countFileLines2(string filePath)
{
    using (Stream s = new FileStream(filePath, FileMode.Open))
    {
        int i = 0;
        int b;
        b = s.ReadByte();
        while (b >= 0)
        {
            if (b == 10)
            {
                i++;
            }
            b = s.ReadByte();
        }
        return i + 1;
    }
}

private static int countFileLines3(string filePath)
{
    using (Stream s = new FileStream(filePath, FileMode.Open))
    {
        int i = 0;
        byte[] b = new byte[bufferSize];
        int n = 0;
        n = s.Read(b, 0, bufferSize);
        while (n > 0)
        {
            i += countByteLines(b, n);
            n = s.Read(b, 0, bufferSize);
        }
        return i + 1;
    }
}

private static int countByteLines(byte[] b, int n)
{
    int i = 0;
    for (int j = 0; j < n; j++)
    {
        if (b[j] == 10)
        {
            i++;
        }
    }
    return i;
}

private static int countFileLines4(string filePath)
{
    return File.ReadLines(filePath).Count();
}

public static int CountLines(Stream stm)
{
    StreamReader _reader = new StreamReader(stm);
    int c = 0, count = 0;
    while ((c = _reader.Read()) != -1)
    {
        if (c == '\n')
        {
            count++;
        }
    }
    return count;
}
Yes, reading lines like that is the fastest and easiest way in any practical sense.
There are no shortcuts here. Files are not line based, so you have to read every single byte from the file to determine how many lines there are.
As TomTom pointed out, creating the strings is not strictly needed to count the lines, but the vast majority of the time will be spent waiting for the data to be read from disk. Writing a much more complicated algorithm would perhaps shave a percent off the execution time, while dramatically increasing the time spent writing and testing the code.
There are numerous ways to read a file. Usually, the fastest way is the simplest:
using (StreamReader sr = File.OpenText(fileName))
{
    string s = String.Empty;
    while ((s = sr.ReadLine()) != null)
    {
        //do what you gotta do here
    }
}
This page does a great performance comparison between several different techniques including using BufferedReaders, reading into StringBuilder objects, and into an entire array.
StreamReader is not the fastest way to read files in general because of the small overhead from decoding the bytes into characters, so reading the file into a byte array is faster.
The results I get are a bit different each time due to caching and other processes, but here is one of the results I got (in milliseconds) with a 16 MB file :
75 ReadLines
82 ReadLine
22 ReadAllBytes
23 Read 32K
21 Read 64K
27 Read 128K
In general File.ReadLines should be a little bit slower than a StreamReader.ReadLine loop.
File.ReadAllBytes is slower with bigger files and will throw out of memory exception with huge files.
The default buffer size for FileStream is 4K, but on my machine 64K seemed the fastest.
private static int countWithReadLines(string filePath)
{
    int count = 0;
    var lines = File.ReadLines(filePath);
    foreach (var line in lines) count++;
    return count;
}

private static int countWithReadLine(string filePath)
{
    int count = 0;
    using (var sr = new StreamReader(filePath))
        while (sr.ReadLine() != null)
            count++;
    return count;
}

private static int countWithFileStream(string filePath, int bufferSize = 1024 * 4)
{
    using (var fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
    {
        int count = 0;
        byte[] array = new byte[bufferSize];
        while (true)
        {
            int length = fs.Read(array, 0, bufferSize);
            for (int i = 0; i < length; i++)
                if (array[i] == 10)
                    count++;
            if (length < bufferSize) return count;
        }
    } // end of using
}
and tested with:
var path = "1234567890.txt"; Stopwatch sw; string s = "";
File.WriteAllLines(path, Enumerable.Repeat("1234567890abcd", 1024 * 1024 )); // 16MB (16 bytes per line)
sw = Stopwatch.StartNew(); countWithReadLines(path) ; sw.Stop(); s += sw.ElapsedMilliseconds + " ReadLines \n";
sw = Stopwatch.StartNew(); countWithReadLine(path) ; sw.Stop(); s += sw.ElapsedMilliseconds + " ReadLine \n";
sw = Stopwatch.StartNew(); countWithReadAllBytes(path); sw.Stop(); s += sw.ElapsedMilliseconds + " ReadAllBytes \n";
sw = Stopwatch.StartNew(); countWithFileStream(path, 1024 * 32); sw.Stop(); s += sw.ElapsedMilliseconds + " Read 32K \n";
sw = Stopwatch.StartNew(); countWithFileStream(path, 1024 * 64); sw.Stop(); s += sw.ElapsedMilliseconds + " Read 64K \n";
sw = Stopwatch.StartNew(); countWithFileStream(path, 1024 *128); sw.Stop(); s += sw.ElapsedMilliseconds + " Read 128K \n";
MessageBox.Show(s);