We are working on a Windows desktop application in which we need to measure the current internet bandwidth.
We download a ZIP file multiple times sequentially, but our results do not match Speedtest.
We capture the bytes received on the ACTIVE network card, but sequential downloads do not produce the expected result. We even tried downloading different files in parallel, multiple times, and that failed as well.
We only saw the expected numbers when we downloaded different files in parallel while running a Speedtest measurement at the same time.
Now here are my questions:
Does the bandwidth between TCP hops affect our bandwidth?
Does the traffic between TCP hops affect our bandwidth?
How can we effectively consume the entire bandwidth using HTTP/TCP downloads and C# .NET?
Does the ISP throttle bandwidth per TCP socket connection?
Does the ISP prioritize bandwidth for http://www.speedtest.net? (This seems possible, since it always shows the expected result while other sites do not.)
for (int downloadCount = 0; downloadCount < iterations; downloadCount++)
{
    try
    {
        string downloadUrl = GetUniqueDownloadUrl();
        bool isValidUrl = Uri.IsWellFormedUriString(downloadUrl, UriKind.Absolute);
        if (!isValidUrl)
        {
            return result;
        }

        // Download the file and record the total download time.
        Stopwatch stopwatch = new Stopwatch();
        stopwatch.Start();
        byte[] fileContent = webclient.DownloadData(new Uri(downloadUrl, UriKind.Absolute));
        stopwatch.Stop();
        double downloadTime = stopwatch.Elapsed.TotalSeconds; // fractional seconds; avoids the integer truncation of ElapsedMilliseconds / 1000

        // Convert bytes to megabits (1 Mbit = 125,000 bytes).
        // The divisor must be a double, or integer division truncates the result.
        fileSizeInMbits = fileContent.Length / 125000.0;
        double speed = fileSizeInMbits / downloadTime; // speed in Mbps

        // Store speeds for the average calculation.
        speeds.Add(speed);
    }
    catch (Exception e)
    {
        result.Error = e;
        break;
    }
}

// Calculate the average bandwidth over all successful downloads.
double totalAvgSpeed = speeds.Average();
result.FileSizeInMB = fileSizeInMbits / 8; // megabits to megabytes
result.Speed = Math.Round(totalAvgSpeed, 2, MidpointRounding.AwayFromZero);
return result;
}
There's no such thing as internet "speed"; there is only the speed between two hosts. Even if you have one computer on gigabit Ethernet and a server also on gigabit Ethernet, the speed will drop if just one node along the path is saturated. When you use speedtest.net, it has a lot of nearby servers (likely including one at your ISP), so you're going to get a very optimistic estimate.
And if your ISP throttled you, you'd see it on Speedtest just the same.
The only thing to remember is that downloading a file from a server only gives you an estimate of the speed TO/FROM that server, not an "internet speed", which is a concept that doesn't really exist to begin with.
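That said, if you want to get closer to the line rate from C# (the "consume the entire bandwidth" question above), the usual approach is several parallel TCP streams, since a single connection is limited by latency and window size. Here is a minimal sketch, assuming a set of large test files on servers you control; the byte-to-megabit conversion mirrors the code in the question:

using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

static class BandwidthProbe
{
    // Downloads all URLs in parallel and returns aggregate throughput in Mbps.
    static async Task<double> MeasureAsync(string[] urls)
    {
        using (var client = new HttpClient())
        {
            var stopwatch = Stopwatch.StartNew();
            // Start every download at once so the streams share the link.
            var tasks = urls.Select(async url =>
            {
                byte[] data = await client.GetByteArrayAsync(url);
                return (long)data.Length;
            }).ToArray();
            long totalBytes = (await Task.WhenAll(tasks)).Sum();
            stopwatch.Stop();
            // bytes -> megabits (1 Mbit = 125,000 bytes), elapsed as fractional seconds.
            return (totalBytes / 125000.0) / stopwatch.Elapsed.TotalSeconds;
        }
    }
}

This is a rough probe, not a calibrated measurement; the more parallel streams and the larger the files, the closer the aggregate gets to what Speedtest reports.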
I have a Redis database on a CentOS server, and 3 Windows servers connected to it with approximately 1,000 reads/writes per second, all on the same local LAN, so the ping time is less than one millisecond.
The problem is that at least 5 percent of read operations time out, even though I read at most 3 KB of data per operation with 'syncTimeout=15', which is much more than the network latency.
I installed Redis on Bash on my Windows 10 machine and simulated the problem. I also stopped the write operations. However, the problem still exists, with 0.5 percent timeouts, even though there is no network latency.
I also used a CentOS server on my LAN to simulate the problem; in that case, I need 'syncTimeout' to be at least 100 milliseconds to keep the timeouts below 1 percent.
I considered using some dictionaries to cache data from Redis, so there is no need to issue a request per item, and I can take advantage of pipelining. But then I came across StackRedis.L1, which is developed as an L1 cache for Redis, and I am not confident about how it keeps the L1 cache updated.
This is my code to simulate the problem:
var connectionMulti = ConnectionMultiplexer.Connect(
    "127.0.0.1:6379,127.0.0.1:6380,allowAdmin=true,syncTimeout=15");

// 100,000 keys
var testKeys = File.ReadAllLines("D:\\RedisTestKeys.txt");

for (var i = 0; i < 3; i++)
{
    var safeI = i;
    Task.Factory.StartNew(() =>
    {
        var serverName = $"server {safeI + 1}";
        var stringDatabase = connectionMulti.GetDatabase(12);
        PerformanceTest($"{serverName} -> String: ",
            key => stringDatabase.StringGet(key), testKeys);
    });
}
and the PerformanceTest method is:
private static void PerformanceTest(string testName, Func<string, RedisValue> valueExtractor,
    IList<string> keys)
{
    Task.Factory.StartNew(() =>
    {
        Console.WriteLine($"Starting {testName} ...");
        var timeouts = 0;
        var errors = 0;
        long totalElapsedMilliseconds = 0;
        var stopwatch = new Stopwatch();
        foreach (var key in keys)
        {
            var redisValue = new RedisValue();
            stopwatch.Restart();
            try
            {
                redisValue = valueExtractor(key);
            }
            catch (Exception e)
            {
                if (e is TimeoutException)
                    timeouts++;
                else
                    errors++;
            }
            finally
            {
                stopwatch.Stop();
                totalElapsedMilliseconds += stopwatch.ElapsedMilliseconds;
                lock (FileLocker)
                {
                    File.AppendAllLines("D:\\TestResult.csv",
                        new[]
                        {
                            $"{stopwatch.ElapsedMilliseconds.ToString()},{redisValue.Length()},{key}"
                        });
                }
            }
        }
        Console.WriteLine(
            $"{testName} {totalElapsedMilliseconds * 1.0 / keys.Count} (errors: {errors}), (timeouts: {timeouts})");
    });
}
I expect all read operations to complete successfully in less than 15 milliseconds.
To achieve this, is an L1 cache in front of Redis a good solution? (It is very fast, on the scale of nanoseconds, but how do I handle synchronization?)
Or can Redis be enhanced by clustering or something else? (I tested it on Bash on my PC, and I did not get the expected result.)
Or can Redis be enhanced by clustering or something else?
Redis can be clustered, in different ways:
"regular" redis can be replicated to secondary read-only nodes, on the same machine or different machines; you can then send "read" traffic to some of the replicas
redis "cluster" exists, which allows you to split (shard) the keyspace over multiple primaries, sending appropriate requests to each node
redis "cluster" can also make use of readonly replicas of the sharded nodes
Whether that is appropriate or useful is contextual and needs local knowledge and testing.
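As an illustration of the replica option, with StackExchange.Redis (which the code in the question uses) you can connect to a primary and a replica and let reads prefer the replica. A minimal sketch, with placeholder endpoints; note that CommandFlags.PreferReplica needs a 2.x client (older versions call it PreferSlave):

// Placeholder endpoints; the multiplexer discovers which node is primary.
var muxer = ConnectionMultiplexer.Connect("redis-primary:6379,redis-replica:6379");
var db = muxer.GetDatabase();

// Route this read to a replica when one is available,
// falling back to the primary otherwise.
RedisValue value = db.StringGet("some-key", CommandFlags.PreferReplica);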
To achieve this, is an L1 cache in front of Redis a good solution?
Yes, it is a good solution. A request you don't make is much faster (and has much less impact on the server) than a request you do make. There are tools to help with cache invalidation, including using the pub/sub API for invalidations. Redis vNext is also looking into additional knowledge APIs specifically for this kind of L1 scenario.
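A minimal sketch of such an L1 layer, assuming a ConcurrentDictionary in front of Redis and a hypothetical "cache-invalidate" channel that writers publish changed keys to:

using System.Collections.Concurrent;
using StackExchange.Redis;

class L1Cache
{
    private readonly ConcurrentDictionary<string, RedisValue> _local =
        new ConcurrentDictionary<string, RedisValue>();
    private readonly IDatabase _redis;

    public L1Cache(ConnectionMultiplexer muxer)
    {
        _redis = muxer.GetDatabase();
        // Writers publish each changed key on this (hypothetical) channel;
        // evicting locally makes the next read fall through to Redis.
        muxer.GetSubscriber().Subscribe("cache-invalidate",
            (channel, key) => _local.TryRemove(key, out _));
    }

    // Serve from memory when possible; a request you don't make is free.
    public RedisValue Get(string key) =>
        _local.GetOrAdd(key, k => _redis.StringGet(k));
}

Pub/sub is fire-and-forget, so a dropped invalidation message can leave a stale entry; pairing the dictionary with a short expiry is a common safety net.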
I am using this code to copy a big file:
const int CopyBufferSize = 64 * 1024;
string src = @"F:\Test\src\Setup.exe";
string dst = @"F:\Test\dst\Setup.exe";
public void CopyFile()
{
    using (Stream input = File.OpenRead(src)) // dispose the source stream when done
    {
        long length = input.Length;
        byte[] buffer = new byte[CopyBufferSize];
        Stopwatch swTotal = Stopwatch.StartNew();
        Invoke((MethodInvoker)delegate
        {
            progressBar1.Maximum = (int)Math.Abs(length / CopyBufferSize) + 1;
        });
        using (Stream output = File.OpenWrite(dst))
        {
            int bytesRead = 1;
            // This will finish silently if we couldn't read "length" bytes.
            // An alternative would be to throw an exception.
            while (length > 0 && bytesRead > 0)
            {
                bytesRead = input.Read(buffer, 0, Math.Min(CopyBufferSize, buffer.Length));
                output.Write(buffer, 0, bytesRead);
                length -= bytesRead;
                Invoke((MethodInvoker)delegate
                {
                    progressBar1.Value++;
                    label1.Text = (100 * progressBar1.Value / progressBar1.Maximum).ToString() + " %";
                    label3.Text = ((int)swTotal.Elapsed.TotalSeconds).ToString() + " Seconds";
                });
            }
            Invoke((MethodInvoker)delegate
            {
                progressBar1.Value = progressBar1.Maximum;
            });
        }
        Invoke((MethodInvoker)delegate
        {
            swTotal.Stop();
            Console.WriteLine("Total time: {0:N4} seconds.", swTotal.Elapsed.TotalSeconds);
            label3.Text += ((int)swTotal.Elapsed.TotalSeconds - int.Parse(label3.Text.Replace(" Seconds", ""))).ToString() + " Seconds";
        });
    }
}
The file size is about 4 GB.
In the first 7 seconds it can copy up to 400 MB, then this burst of speed dies down.
What is happening, and how can I keep (or even increase) that initial speed?
Another question:
When the file has been copied, Windows keeps working on the destination file for about 10 seconds.
Copy time: 116 seconds
Extra time: 10-15 seconds or even more
How can I remove or decrease this extra time?
What happens? Caching, mostly.
The OS pretends you copied 400 MiB in seven seconds, but you didn't. You just sent 400 MiB to the OS (or the file system) to write in the future, and that's as much as the buffer can take. If you write a 400 MiB file and pull the plug as soon as it's "done", your file will not be written. The same mechanism explains the "extra time": your application has sent everything to the buffer, but the buffer hasn't yet been written to the drive itself (either to its own buffer, or, even slower, to the actual physical platter).
This is especially visible with USB flash drives, which tend to use caching heavily. It makes working with the (usually very slow) drive much more pleasant, with the trade-off that you have to wait for the OS to finish writing everything before pulling the drive out (that's why you get the "Safely remove" icon).
So it should be obvious that you can't really make the total time shorter. All you can do is try to make the user interface reflect reality a bit better, so the user doesn't see the "first 400 MiB are so fast!" effect... but that doesn't really work well. In any case, your read-to-write speed is ~30 MiB/s. The OS just hides the peaks to make the slow hard drive easier to deal with; very useful when you're handling lots of small files, worthless when dealing with files bigger than the buffer.
You have a bit of control over this when you use the FileStream constructor directly, instead of File.OpenWrite: you can pass FileOptions.WriteThrough to instruct the OS to avoid any caching and write directly to disk[1], giving you a better idea of the real write speed. Do note that this usually makes the total time longer, though, and it may make concurrent access even worse. You definitely don't want to use it for small files.
[1] - Haha, right. The drive usually has caching of its own, and some ignore the OS' pleas. Tough luck.
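A minimal sketch of that, replacing File.OpenWrite in the question's code with the FileStream constructor overload that takes FileOptions:

// WriteThrough asks the OS to skip its write cache, so reported progress
// tracks the disk instead of the buffer. Expect a longer total time.
using (Stream output = new FileStream(dst, FileMode.Create, FileAccess.Write,
                                      FileShare.None, CopyBufferSize,
                                      FileOptions.WriteThrough))
{
    // ... same read/write loop as above ...
}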
One thing you could try is increasing the buffer size. This really matters once the write cache can no longer keep up (as discussed in the other answer). Writing many small blocks is often slower than writing a few large blocks. Instead of 64 kB, try 1 MB, 4 MB, or even bigger:
const int CopyBufferSize = 1 * 1024 * 1024; // 1 MB
// or
const int CopyBufferSize = 4 * 1024 * 1024; // 4 MB
I'm working on a download method with resume support in my application (inside a thread).
My application's users usually have low internet speeds (commonly between 10 and 50 kbps download) and the target files are between 3 and 80 MB...
A part of my download code:
// Buffer size = 1,024,000 bytes (~1 MB)
int iBufferSize = 1024;
iBufferSize *= 1000;

/// some codes.....

try
{
    /// some codes.....
    request = (HttpWebRequest)WebRequest.Create(sourceUrl);
    // Resume: ask the server for the bytes we don't have yet.
    request.AddRange((int)downloadedSize);
    Stream smRespStream = response.GetResponseStream();
    const int MAX_LOOP = 5;
    var flushStreamCounter = 0;
    while ((iByteSize = smRespStream.Read(downBuffer, 0, downBuffer.Length)) > 0)
    {
        saveFileStream.Write(downBuffer, 0, iByteSize);
        //----------------------------------
        // some codes
        //----------------------------------
        // Is this necessary to really write the data to the file now (when the condition is true)?
        flushStreamCounter++;
        if (flushStreamCounter > MAX_LOOP)
        {
            saveFileStream.Flush();
            flushStreamCounter = 0;
        }
    }
}
finally
{
    if (saveFileStream != null)
    {
        saveFileStream.Flush();
        saveFileStream.Close();
        saveFileStream.Dispose();
    }
}
Since I know the internet speed is low for my customers, I don't want to lose downloaded data to internet disconnections, the application being force-closed, or the PC powering off...
So, my questions:
Should I use the Flush method of saveFileStream inside the while loop to write data from memory to the hard disk and prevent data loss, given that users may force-close my application during a download, the internet may disconnect, and so on?
If using Flush is a good and necessary approach, what is the best MAX_LOOP value in my case, and why?
What is the best iBufferSize value for my application (based on my users' internet speeds), and why?
Update:
I don't know the advantages and disadvantages of calling Flush on the stream inside the while loop.
While my application's download is running, Windows Explorer does not show the file size increasing (even after a refresh). It only updates the file size after the method above completes, so I don't know whether data would be lost if the application were force-closed or the PC lost power.
I am currently writing a small application to load test a website and am having a few problems.
List<string> pageUrls = new List<string>();
// NOT SHOWN ... populate pageUrls with thousands of links
var parallelOptions = new System.Threading.Tasks.ParallelOptions();
parallelOptions.MaxDegreeOfParallelism = 100;

System.Threading.Tasks.Parallel.ForEach(pageUrls, parallelOptions, pageUrl =>
{
    var startedOn = DateTime.UtcNow;
    var request = System.Net.HttpWebRequest.Create(pageUrl);
    var responseTimeBefore = DateTime.UtcNow;
    string responseCode = null; // declared here so the try block can assign it
    try
    {
        var response = (System.Net.HttpWebResponse)request.GetResponse();
        responseCode = response.StatusCode.ToString();
        response.Close();
    }
    catch (System.Net.WebException ex)
    {
        // NOT SHOWN ... write to the error log
    }
    var responseTimeAfter = DateTime.UtcNow;
    var responseDuration = responseTimeAfter - responseTimeBefore;
    // NOT SHOWN ... write the response duration out to a file
    var endedOn = DateTime.UtcNow;
    var threadDuration = endedOn - startedOn;
    // Sleep so each worker issues at most one request per second.
    var oneSecond = new TimeSpan(0, 0, 1);
    if (threadDuration < oneSecond)
    {
        System.Threading.Thread.Sleep(oneSecond - threadDuration);
    }
});
When I set MaxDegreeOfParallelism to a low value such as 10, everything works fine: the responseDuration stays between 1 and 3 seconds. If I increase the value to 100 (as in the example), the responseDuration climbs quickly until, after around 300 requests, it has reached 25 seconds (and is still climbing).
I thought I might be doing something wrong, so I also ran Apache JMeter with the standard web test plan set up and the users set to 100. After about 300 samples, the response times had rocketed to around 40 seconds.
I'm skeptical that my server is reaching its limit. Task Manager on the server shows that only 2 GB of the 16 GB is in use, and the processor hovers around 5%.
Could I be hitting some limit on the number of simultaneous connections on my client computer? If so, how do I change it?
Am I forgetting to do something in my code? Clean up or close connections?
Could it be that my code is fine and it is in fact my server that just can't handle the traffic?
For reference, the client computer running the code above is on Windows 7 and is on the same network as the server I am testing. The server is running Windows Server 2008 with IIS 7.5 and is a dedicated 8-core, 16 GB RAM machine.
MaxDegreeOfParallelism should be used only when you are trying to limit the number of cores used as part of your program's strategy.
By default, the Parallel library uses the maximum number of available threads, so setting this option to any number will mostly limit performance, depending on the environment it runs in.
I would suggest you try running this code without setting this option; that should improve the performance.
See the ParallelOptions.MaxDegreeOfParallelism property on MSDN, in particular the Remarks section, for more information.
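That is, a sketch of the same loop with the option simply removed, letting the scheduler decide:

// Same body as in the question; the scheduler picks the degree of parallelism.
System.Threading.Tasks.Parallel.ForEach(pageUrls, pageUrl =>
{
    // ... request/timing code unchanged ...
});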
Several suggestions:
How large is your recorded JMeter test script, and did you insert any think time? The larger the test, the heavier the load.
Make sure the LAN is not carrying competing traffic during test runs. A gigabit Ethernet switch should be mandatory.
Use 2-3 slave machines and avoid heavy result loggers in JMeter, such as the tree listener. You were right to minimize those graphs and results.
My question is mainly about code optimization (at the moment).
I have created a network monitor that monitors the different connections on the PC. What I have done is sniff packets at the third layer of the stack (the network layer). After capturing a packet, I am supposed to create an object in the UI for each connection. What I do at the moment is look at the overall consumed bandwidth and the total data sent for every second the program runs. Here is that part of the code:
temp = packet_rtxt.TextLength;   // bytes captured in the last interval
tempdr = temp / 1024;            // data rate in KB/s
dr_txt.Text = tempdr.ToString();
totaldata = totaldata + temp;    // running total, always kept in bytes
totaldatadisp = totaldata;
packet_rtxt.Text = "";

// Pick a display unit without touching the running byte counter.
if (totaldata < 10485760)
{
    if (totaldata < 10240)
        unit.Text = "bytes";
    else
    {
        totaldatadisp = totaldatadisp / 1024;
        unit.Text = "KBs";
    }
}
else
{
    // Must assign to totaldatadisp, not totaldata, or the running
    // total gets overwritten with a scaled display value.
    totaldatadisp = totaldatadisp / 1048576;
    unit.Text = "MBs";
}
test.Text = totaldatadisp.ToString();
tds.Enabled = true;
}
So what I'm doing so far is writing the captured packets out to a rich text box, taking the length of that text as the data rate, adding it to a counter for the total data, and then clearing the text box for the next batch of data.
The total-data-received part works fine; however, the bytes-per-second section works fine for low amounts of data and then goes crazy if the data rate is over 10 kbps (on my PC).
Should I try optimizing the whole code, or is there some other method (keep in mind I need to monitor every single connection), or do I need to use different UI controls?
Should I focus on optimization or on new approaches?
Thanks in advance.
The standard controls are not made for such a load. You need to separate the logging of data from the display of data.
I'd only show, say, the last 10 kB of text, once per second. You can still keep all of the log records in some data structure, but you don't have to push all of them to the UI.
Alternatively, you can write your own text-display control, but that is going to be a lot more work.
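A minimal sketch of that separation, assuming WinForms (as packet_rtxt suggests): the capture thread appends to an in-memory log, and a once-per-second timer pushes only the tail to the control:

using System;
using System.Linq;
using System.Collections.Concurrent;

// The capture thread appends one entry per packet; the UI is never touched here.
var packetLog = new ConcurrentQueue<string>();

// WinForms timer: ticks on the UI thread once per second.
var uiTimer = new System.Windows.Forms.Timer { Interval = 1000 };
uiTimer.Tick += (sender, args) =>
{
    // Show only the newest ~100 lines; the full log stays in packetLog.
    packet_rtxt.Lines = packetLog.Skip(Math.Max(0, packetLog.Count - 100)).ToArray();
};
uiTimer.Start();

This keeps per-packet work off the UI thread entirely, so the capture rate no longer depends on how fast the rich text box can repaint.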