What is the best way to determine available bandwidth in .NET?
We have users that access business applications from various remote access points, both wired and wireless, and at times the bandwidth can be very low depending on where the user is. When the applications appear to be running slowly, the issue could be low bandwidth rather than something else.
I would like to be able to run some kind of service that would warn users whenever the available bandwidth dips below a specific threshold.
Any thoughts?
Not beyond the obvious of downloading a file of a known size and timing how long it takes. The disadvantage of that is that you'd need to waste a lot of bandwidth to do it. Also, if you wanted to alert when throughput drops below a threshold, you'd have to run the test more or less continuously.
IMHO, I'd live with poor performance in some locations, given that you can't do anything about it if it does occur.
Sorry.
There's no easy way to measure bandwidth without actually using it - which of course will starve the applications. A couple of points to bear in mind though:
1) Is it actually bandwidth that's the problem, or latency? You can measure latency in a less intrusive manner than bandwidth.
2) Are the applications all run from the same server (or at least the same network)? You may find that users will have a good connection to some areas of the net but not others. (It's likely that the last mile will be the limiting factor, but it's not always the case.)
If you're transferring data, simply measure it. You could also download a reference object from somewhere if you want to make it independent of the speed of your server.
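For illustration, a rough sketch of the timed-download idea (the URL is a placeholder; use a reference object large enough that the timing isn't dominated by latency):

```csharp
// Rough sketch: fetch a reference object of known size and compute
// throughput from the elapsed time. The URL is a placeholder.
using System;
using System.Diagnostics;
using System.Net;

class BandwidthProbe
{
    static void Main()
    {
        const string url = "http://example.com/reference-file.bin"; // placeholder

        var sw = Stopwatch.StartNew();
        byte[] data = new WebClient().DownloadData(url);
        sw.Stop();

        double kbps = (data.Length * 8.0 / 1000.0) / sw.Elapsed.TotalSeconds;
        Console.WriteLine("Approx. downstream bandwidth: {0:F0} kbit/s", kbps);
    }
}
```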
Without knowing the exact nature of your connection, or how it's used, there are two options that I am aware of.
MultinetGetConnectionPerformance (http://msdn.microsoft.com/en-us/library/aa385342(VS.85).aspx)
System Event Notification Service (http://msdn.microsoft.com/en-us/library/aa377538(VS.85).aspx)
Neither is a direct .NET class, but both can be called from .NET very easily.
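For example, MultinetGetConnectionPerformance can be reached via P/Invoke along these lines (a hedged sketch; double-check the struct layouts against the MSDN pages above):

```csharp
// Hedged P/Invoke sketch for MultinetGetConnectionPerformance (mpr.dll).
// Struct layouts follow the Win32 definitions on the MSDN page above;
// verify them before relying on this in production.
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
struct NETRESOURCE
{
    public uint dwScope, dwType, dwDisplayType, dwUsage;
    public string lpLocalName, lpRemoteName, lpComment, lpProvider;
}

[StructLayout(LayoutKind.Sequential)]
struct NETCONNECTINFOSTRUCT
{
    public uint cbStructure; // must be set to the struct size before the call
    public uint dwFlags;
    public uint dwSpeed;     // media speed, in units of 100 bits per second
    public uint dwDelay;     // extra latency, in milliseconds
    public uint dwOptDataSize;
}

static class Mpr
{
    [DllImport("mpr.dll", CharSet = CharSet.Auto)]
    public static extern uint MultinetGetConnectionPerformance(
        ref NETRESOURCE lpNetResource,
        ref NETCONNECTINFOSTRUCT lpNetConnectInfo);
}

class Program
{
    static void Main()
    {
        // The UNC path here is a placeholder for a connected network resource.
        var resource = new NETRESOURCE { lpRemoteName = @"\\server\share" };
        var info = new NETCONNECTINFOSTRUCT();
        info.cbStructure = (uint)Marshal.SizeOf(typeof(NETCONNECTINFOSTRUCT));

        uint result = Mpr.MultinetGetConnectionPerformance(ref resource, ref info);
        if (result == 0) // NO_ERROR
            Console.WriteLine("Speed: {0} x 100 bps, delay: {1} ms",
                              info.dwSpeed, info.dwDelay);
    }
}
```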
Take a look at both of them and see if they will work for you.
Roy
We are scraping a Web-based API using Microsoft Azure. The issue is that there is SO much data to retrieve (there are combinations/permutations involved).
If we use a standard Web Job approach, we calculated it would take about 200 years to process all the data we want to get - and we would like our data to be refreshed every week.
Each request/response from the API takes about 0.5-1.0 seconds to process. The request size is 20,000 bytes on average and the average response is 35,000 bytes. I believe the total number of requests is in the millions.
Another way to think about this question would be: how would you use Azure to Web scrape - and make sure you don't overload (in terms of memory + network) the VM it's running on? (I don't think you need too much CPU processing in this case).
What we have tried so far:
Used Service Bus Queues/Worker Roles scaled to 8 small VMs - but this caused a lot of network errors to occur (there must be some network limit to how much EACH worker role VM can handle).
Used Service Bus Queues/Continuous Web Job scaled to 8 small VMs - but this seems to work more slowly - and even scaled, it doesn't give us much control over what's happening behind the scenes. (We don't REALLY know how many VMs are up.)
It seems that these things are built for CPU calculation - not for Web/API scraping.
Just to clarify: I throw my requests into a queue, which then get picked up by my multiple VMs for processing to get the responses. That's how I was using the queues. Each VM was using the ServiceBusTrigger class as prescribed by Microsoft.
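For reference, that ServiceBusTrigger wiring looks roughly like this with the WebJobs SDK (queue name and processing logic are placeholders; check against the SDK version you're on):

```csharp
// Rough sketch of a WebJobs function bound to a Service Bus queue via
// ServiceBusTrigger (Microsoft.Azure.WebJobs + the Service Bus extension).
// "requests" and the body of the method are placeholders.
using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // The SDK calls this whenever a message arrives on the 'requests' queue.
    public static void ProcessRequest(
        [ServiceBusTrigger("requests")] string requestUrl,
        TextWriter log)
    {
        log.WriteLine("Processing {0}", requestUrl);
        // ... issue the API call here and persist the response ...
    }
}
```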
Is it better to have a lot of small VMs or a few massive VMs?
What C# classes should we be looking at?
What are the technical best practices when trying to do something like this on Azure?
Actually, a web scraper is something that I have had up and running in Azure for quite some time now :-)
AFAIK there is no 'magic bullet'. Scraping a lot of sources with deadlines is quite hard.
How it works (the most important things):
I use worker roles and C# code for the code itself.
For scheduling, I use the queue storage. I put crawling tasks on the queue with a timeout (i.e. 'when to crawl them') and have the scraper pull them off. You can put triggers on the queue size to ensure you meet deadlines in terms of speed -- personally I don't need them.
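A rough sketch of that scheduling idea, using the classic Azure Storage SDK (Microsoft.WindowsAzure.Storage; the connection string and queue name are placeholders):

```csharp
// Hedged sketch: enqueue a crawl task with an initial visibility delay so
// it only becomes visible to the scraper when it is due.
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class CrawlScheduler
{
    public static void Schedule(string url, TimeSpan whenToCrawl)
    {
        var account = CloudStorageAccount.Parse("<storage-connection-string>");
        var queue = account.CreateCloudQueueClient().GetQueueReference("crawl-tasks");
        queue.CreateIfNotExists();

        // The message stays invisible until 'whenToCrawl' has elapsed.
        queue.AddMessage(new CloudQueueMessage(url),
                         timeToLive: null,
                         initialVisibilityDelay: whenToCrawl);
    }
}
```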
SQL Azure is slow, so I don't use that. Instead, I only use table storage for storing the scraped items. Note that updating data might be quite complex.
Don't use too much threading; instead, use async IO for all network traffic.
Also you might have to consider that extra threads require extra memory (parse trees can become quite big) - so there's a trade-off there... I do recall using some threads, but it's really just a few.
Note that this will probably require you to re-design and re-implement your complete web scraper if you're currently using a threaded approach. Then again, there are some benefits:
Table storage and queue storage are cheap.
I currently use a single Extra Small VM to scrape well over a thousand web sources.
Inbound network traffic is free.
As such, the result is quite cheap as well; I'm sure it's much less than the alternatives.
As for classes that I use... well, that's a bit of a long list. I'm using HttpWebRequest for the async HTTP requests and the Azure SDK -- but all the rest is hand-crafted (and not open source).
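For what it's worth, the async HttpWebRequest pattern looks roughly like this (a minimal sketch; the URL is a placeholder):

```csharp
// Minimal sketch of the async-IO approach described above: HttpWebRequest
// wrapped with Task.Factory.FromAsync so no thread is blocked while the
// response is in flight.
using System;
using System.IO;
using System.Net;
using System.Threading.Tasks;

static class AsyncFetch
{
    public static async Task<string> DownloadAsync(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";

        // Wrap the Begin/End pair in a Task so we can await it.
        using (var response = (HttpWebResponse)await Task.Factory.FromAsync(
                   request.BeginGetResponse, request.EndGetResponse, null))
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return await reader.ReadToEndAsync();
        }
    }
}
```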
P.S.: This doesn't just hold for Azure; most of this also holds for on-premise scrapers.
I have some experience with scraping so I will share my thoughts.
It seems that these things are built for CPU calculation - not for Web/API scraping.
They are built for dynamic scaling which given your task is not something you really need.
How to make sure you don't overload the VM?
Measure the response times and error rates and tune your code to lower them.
I don't think you need too much CPU processing in this case.
Depends on how much data is coming in each second and what you are doing with it. More complex parsing on quickly incoming data (if you decide to do it on the same machine) will eat up CPU pretty quickly.
8 small VMs caused a lot of network errors to occur (there must be some network limit)
The smaller the VMs the less shared resources they get. There are throughput limits and then there is an issue with your neighbors sharing the actual hardware with you. Often, the smaller your instance size the more trouble you run into.
Is it better to have a lot of small VMs or a few massive VMs?
In my experience, smaller VMs are too crippled. However, your mileage may vary and it all depends on the particular task and its solution implementation. Really, you have to measure yourself in your environment.
What C# classes should we be looking at?
What are the technical best practices when trying to do something like this on Azure?
With high throughput scraping you should be looking at infrastructure. You will have different latency in different Azure datacenters, and different experience with network latency/sustained throughput at different VM sizes, and depending on who in particular is sharing the hardware with you. The best practice is to try and find what works best for you - change datacenters, VM sizes and otherwise experiment.
Azure may not be the best solution to this problem (unless you are on a spending spree). 8 small VMs cost $450 a month. That is enough to pay for an unmanaged dedicated server with 256 GB of RAM, 40 hardware threads and 500 Mbps - 1 Gbps (or even up to several Gbps bursts) of quality network bandwidth without latency issues.
For your budget, you will have a dedicated server that you cannot overload. You will have more than enough RAM to deal with async pinning (if you decide to go async), or enough hardware threads for multi-threaded synchronous IO, which gives the best throughput (if you choose to go synchronous with a fixed-size threadpool).
On a sidenote, depending on the API specifics, it might turn out that your main issue will be the API owner simply throttling you down to a crawl when you start to put too much pressure on the API endpoints.
The ReadWriteTimeout for HttpWebRequest seems to default to 5 minutes.
Is there a reason why it is that high? I was trying to set the timeout of an API call to 10 seconds, but it was spinning for over 2 minutes.
When I set this to 30 seconds, it times out in a reasonable amount of time now.
Is it dangerous to set this too low?
I can't imagine something taking longer than 20-30 seconds in my application (small 2-30 KB payloads).
Reference: http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.readwritetimeout.aspx
Sure there's a reason for a 5 minute time-out. Picture, for example, a robotic tape retrieval system like the one used by the International Centre for Radio Astronomy Research, which stores 32.5 petabytes of historical data. When its server gets an HttpWebRequest, the machine sends a robot on its way to retrieve the tape with the data. This takes a while, as you might imagine.
These systems were quite common a decade ago, around the time .NET was designed. Not so much today; the unrelenting improvements in hard disk storage capacity have made them close to obsolete. Although more than 5 petabytes of SAN storage still sets you back a rather major chunk of money. If speed is not essential, then tape is hard to beat.
Clearly .NET cannot possibly reliably declare a timeout when it doesn't know anything about what's happening on the other end of the wire. So the default is high. If you have good reason to believe that there's an upper limit on your particular setup, then don't hesitate to lower it. Do make it an editable setting; you can't predict the future.
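Along the lines of "make it an editable setting", something like this (the appSettings key name is just an assumption for illustration):

```csharp
// Sketch: read the timeout from configuration with a fall-back, then apply
// it to the request. "ApiReadWriteTimeoutMs" is a made-up key name.
using System;
using System.Configuration;
using System.Net;

static class ApiClient
{
    public static HttpWebRequest CreateRequest(string url)
    {
        int timeoutMs;
        if (!int.TryParse(ConfigurationManager.AppSettings["ApiReadWriteTimeoutMs"],
                          out timeoutMs))
        {
            timeoutMs = 30000; // default: 30 seconds
        }

        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Timeout = timeoutMs;          // connect + request timeout
        request.ReadWriteTimeout = timeoutMs; // stream read/write timeout
        return request;
    }
}
```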
You can't possibly know what connection speed the users that connect to your website have. And as the creator of the framework, you can't know what the developer will host, either. This class already existed in .NET 1.1, so for a very long time. And back then users had slower speeds, too.
Finding a good default value is very difficult. You don't want to set it too high, to prevent security flaws, and you don't want to set it too low, because that would result in a million (exaggerating) complaints and questions about aborted requests.
I'm sorry I can't give you any official sources, but this is just reasonable.
Why 5 minutes? Why not?
JustAnotherUserYouMayKnow explained it to you pretty well.
But as usual, you have the freedom to change this default value to a value that suits to your very case, so feel free to follow the path that Christian pointed out.
Setting a default value is not an easy task at all when we are talking about millions of users and countless possible scenarios involved.
The bottom line is that it isn't that important why it's 5 minutes, but rather how you can adjust it to your needs.
Well, by setting it that low you may or may not introduce a series of issues. While you may be able to reach the site within a reasonable time, others may not.
A perfect example is Verizon: they invoke a series of proxy servers which can drastically slow a connection down. The reason I bring up such an example is that our application specified a one-minute timeout before it throws an exception.
Our server has no issues with large amounts of requests; it handles them quite easily. However, some of our users throughout the world receive this error: Error 10060.
The issue can stem from an incorrect proxy configuration or an invalid registry key which actually handles the timeout request.
You'd think that one minute would indeed be fast enough, but it actually isn't, as this customer's particular network doesn't move the data through quickly enough, thus causing an error.
So you asked:
Why is the HttpWebRequest ReadWrite Timeout Defaulted to five minutes?
They are attempting to account for the lowest common denominator.
Simply put, each network and client may have a vast degree of traffic or delays as data moves to the desired location. If it can't get to the destination within your port's socket timeout, your user will experience an exception.
Some really important things to know about a network:
Some networks that are configured have a limited hop count / time to live.
Proxies and Firewalls which are heavy in filtering data and security, may delay your traffic.
Some areas do not have Fiber or Cable high-speed. They may rely on Satellite or DSL.
Each network protocol is different.
Those are a few variables that you have to consider. If we are talking about the Internet, each client has a home network, which connects to an ISP, which connects to the Internet, which connects to you. So you have several forms of traffic to be aggregated.
If we are talking about an intranet, with most modern-day technology the odds of timing being an issue are slim, but still possible.
Also, each individual computer can contribute to or cause an issue. In Windows 8 the default timeout specified for the browser is one minute; in some cases those users may experience exceptions with your application, your site, or others. So you'd manually alter the ServerTimeOut and TimeOut keys in the registry to assign a longer value.
In short:
Client Machines may pose a problem in reaching your site within your allocated time.
Network / ISP may incur a problem for some users.
Your Server may be configured incorrectly or not allocate the right amount of time.
These are all variables that need to be accounted for; as they will impact access to your application. Unfortunately you won't know for certain until it's launched and users begin to utilize your site.
Unfortunately, you won't know if the time you specified will be enough; but it defaults to a higher number because there is so much variation across the world that it has to consider the lowest common denominator. And your goal is to reach as many people as possible.
By the way very nice question, and some great answers so far as well.
Are there any tips, tricks and techniques to prevent or minimize slowdowns or temporary freeze of an app because of the .NET GC?
Maybe something along the lines of:
Try to use structs if you can, unless the data is too large or will be mostly used inside other classes, etc.
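For what it's worth, a tiny illustration of that struct tip (the names are made up): value-type elements are stored inline in their array rather than as separate heap objects, so filling and rewriting them creates no garbage for the GC to collect.

```csharp
// Illustrative sketch only: a small value type stored inline in an array.
// Filling the array performs one heap allocation (the array itself), so
// the loop below creates no extra work for the GC.
using System;

struct PointValue          // value type: no per-instance heap allocation
{
    public double X, Y;
    public PointValue(double x, double y) { X = x; Y = y; }
}

class StructDemo
{
    static void Main()
    {
        var points = new PointValue[1000000]; // single allocation
        for (int i = 0; i < points.Length; i++)
            points[i] = new PointValue(i, i * 0.5);

        Console.WriteLine(points[1000].Y);
    }
}
```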
The description of your App does not fit the usual meaning of "realtime". Realtime is commonly used for software that has a max latency in milliseconds or less.
You have a requirement of responsiveness to the user, meaning you could probably tolerate an incidental delay of 500 ms or more. 100 ms won't be noticed.
Luckily for you, the GC won't cause delays that long. And if it did you could use the Server (background) version of the GC, but I know little about the details.
But if your "user experience" does suffer, it probably won't be the GC.
IMHO, if the performance of your application is being affected noticeably by the GC, something is wrong. The GC is designed to work without intervention and without significantly affecting your application. In other words, you shouldn't have to code with the details of the GC in mind.
I would examine the structure of your application and see where the bottlenecks are, maybe using a profiler. Maybe there are places where you could reduce the number of objects that are being created and destroyed.
If parts of your application really need to be real-time, perhaps they should be written in another language that is designed for that sort of thing.
Another trick is to use GC.RegisterForFullGCNotification on the back-end.
Let's say that you have a load-balancing server and N app servers. When the load balancer receives information about a possible full GC on one of the servers, it will forward requests to other servers for some time, so the SLA will not be affected by the GC (which is especially useful on x64 boxes where more than 4 GB can be addressed).
Update:
No, unfortunately I don't have the code, but there is a very simple example on MSDN with dummy methods like RedirectRequests and AcceptRequests, which can be found here: Garbage Collection Notifications
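A hedged sketch of that MSDN notification pattern (RedirectRequests and AcceptRequests are the same dummy placeholders the sample uses):

```csharp
// Register for full-GC notifications, then poll on a background thread and
// hand traffic off while a blocking collection is imminent. Note: per MSDN,
// this requires concurrent GC to be disabled (gcConcurrent="false").
using System;
using System.Threading;

static class GcNotificationMonitor
{
    public static void Start()
    {
        // Thresholds from 1-99 control how early you get the warning.
        // The values here are arbitrary; tune for your allocation pattern.
        GC.RegisterForFullGCNotification(10, 10);

        var poller = new Thread(() =>
        {
            while (true)
            {
                if (GC.WaitForFullGCApproach() == GCNotificationStatus.Succeeded)
                    RedirectRequests(); // e.g. tell the load balancer to drain us

                if (GC.WaitForFullGCComplete() == GCNotificationStatus.Succeeded)
                    AcceptRequests();   // resume taking traffic
            }
        });
        poller.IsBackground = true;
        poller.Start();
    }

    static void RedirectRequests() { /* placeholder, as in the MSDN sample */ }
    static void AcceptRequests() { /* placeholder, as in the MSDN sample */ }
}
```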
I am getting ready to perform a series of performance comparisons of various off-the-shelf products.
What do I need to do to show credibility in the tests? How do I design my benchmark tests so that they are respectable?
I am also interested in any suggestions on the actual design of the tests: ways to load data without affecting the tests (Heisenberg Uncertainty Principle), ways to monitor, etc.
This is a bit tricky to answer without knowing what sort of "off the shelf" products you are trying to assess. Are you looking for UI responsiveness, throughput (e.g. email, transactions/sec), startup time, etc - all of these have different criteria for what measures you should track and different tools for testing or evaluating. But to answer some of your general questions:
Credibility - this is important. Try to make sure that whatever you are measuring has little run-to-run variance. Utilize the technique of doing several runs of the same scenario, get rid of outliers (i.e. your lowest and highest), and evaluate your avg/max/min/median values. If you're doing some sort of throughput test, consider making it long-running so you have a good sample set. For example, if you are looking at something like Microsoft Exchange and thus are using their perf counters, try to make sure you are taking frequent samples (once per sec or every few secs) and have the test run for 20 mins or so. Again, chop off the first few mins and the last few mins to eliminate any startup/shutdown noise.
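As a rough illustration of that run discipline (RunScenario and the run count are placeholders):

```csharp
// Sketch: run the scenario N times, drop the fastest and slowest run, and
// report average plus the middle value (approximate median) of the rest.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

static class RunStats
{
    public static void Report(Action runScenario, int runs = 10)
    {
        var times = new List<double>();
        for (int i = 0; i < runs; i++)
        {
            var sw = Stopwatch.StartNew();
            runScenario();
            sw.Stop();
            times.Add(sw.Elapsed.TotalMilliseconds);
        }

        times.Sort();
        var trimmed = times.Skip(1).Take(times.Count - 2).ToList(); // drop outliers

        double avg = trimmed.Average();
        double median = trimmed[trimmed.Count / 2]; // middle value
        Console.WriteLine("min={0:F1} max={1:F1} avg={2:F1} median={3:F1} (ms)",
                          trimmed.First(), trimmed.Last(), avg, median);
    }
}
```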
Heisenberg - tricky. In most modern systems, depending on what application/measures you are measuring, you can minimize this impact by being smart about what/how you are measuring. Sometimes (as in the Exchange example), you'll see near-zero impact. Try to use the least invasive tools possible. For example, if you're measuring startup time, consider using xperf and the events built into the kernel. If you're using perfmon, don't flood the system with extraneous counters that you don't care about. If you're doing some extremely long-running test, ratchet down your sampling interval.
Also try to eliminate any sources of environment variability or possible sources of noise. If you're doing something network intensive, consider isolating the network. Try to disable any services or applications that you don't care about. Limit any sort of disk IO, memory intensive operations, etc. If disk IO might introduce noise in something that is CPU bound, consider using SSD.
When designing your tests, keep repeatability in mind. If you're doing some sort of microbenchmark-type testing (e.g. a perf unit test) then have your infrastructure support running the same operation n times exactly the same way. If you're driving UI, try not to physically drive the mouse; instead use the underlying accessibility layer (MSAA, UIAutomation, etc.) to hit controls directly programmatically.
Again, this is just general advice. If you have more specifics then I can try to follow up with more relevant guidance.
Enjoy!
Your question is very interesting, but a bit vague, because without knowing what to test it is not easy to give you some clues.
You can test performance from many different angles; then, depending on the use or target of the library, you should try one approach or another. I will try to enumerate some of the things you may have to consider for measurement:
Multithreading: if the library uses it, or your software will use the library in a multithreaded context, then you may have to test it with many different processor and multiprocessor configurations to see how it reacts.
Startup time: its importance depends on how intensively you will use the library and on the nature of the product being built with it (client, server, …).
Response time: for this, do not take the first execution; try to execute the same call many times after the first one and take an average. Using System.Diagnostics.Stopwatch could be very useful for that.
Memory consumption: analyze the growth, and beware of exponential growth ;). Go a step further and measure the quantity of objects being created and disposed.
Responsiveness: you should not only measure raw performance; how the user perceives the speed of the product is very important too.
Network: if the library uses resources on the network, you may have to test it with different bandwidth and latency configurations; there is software to simulate these situations.
Data: try to create many different testing data packages, trying to cover, for example: a big bunch of raw data, then a large set made of many smaller chunks, a long iteration with small pieces of data, …
Tools:
System.Diagnostics.Stopwatch: essential for benchmarking method calls
Performance counters: whenever available they are very useful to know what’s happening inside, allowing you to monitor the software without affecting its performance.
Profilers: there are some good memory and performance profilers in the market, but as you said, they always affect the measurements. They are good for finding bottlenecks in your software, but I don’t think you can use them for a comparison test.
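To illustrate the Stopwatch advice above, a minimal micro-benchmark sketch (methodUnderTest is a placeholder; skip the cold run, repeat, and average):

```csharp
// Minimal Stopwatch micro-benchmark: ignore the first (cold) execution,
// repeat the call many times, and report the average per call.
using System;
using System.Diagnostics;

static class MicroBenchmark
{
    public static double AverageMilliseconds(Action methodUnderTest,
                                             int iterations = 1000)
    {
        methodUnderTest(); // warm-up run: JIT and caches, not counted

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            methodUnderTest();
        }
        sw.Stop();

        return sw.Elapsed.TotalMilliseconds / iterations;
    }
}
```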
Why do you care about the performance? In both cases, the time taken to write the message to wherever you are storing your log will be a lot slower than anything else.
If you are really doing that much logging, then you are likely to need to index your log files so you can find the log entry you need; at that point you are not doing standard logging.
I've been researching a bit about .NET Performance counters, but couldn't find anything detailed (unlike the rest of the framework).
My question is: which are the most relevant performance counters to get the maximum computing power of the machine? Well, of the .NET virtual machine, that is.
Thank you,
Chuck Mysak
You haven't described what you mean by "computing power". Here are some of the things you can get through performance counters that might be relevant:
Number of SQL queries executed.
Number of IIS requests completed.
Number of distributed transactions committed.
Bytes of I/O performed (disk, network, etc.).
There are also relative numbers, such as percentages of processor and memory in use which can give an indication of how much of the "power" of your system is actually being used.
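For example, the processor and memory counters mentioned above can be read like this (a hedged sketch using the standard Windows counter names, which must exist on the machine):

```csharp
// Read the CPU and available-memory counters via PerformanceCounter.
using System;
using System.Diagnostics;
using System.Threading;

class CounterDemo
{
    static void Main()
    {
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        var freeMem = new PerformanceCounter("Memory", "Available MBytes");

        cpu.NextValue();    // first read always returns 0; prime the counter
        Thread.Sleep(1000); // sample over an interval

        Console.WriteLine("CPU: {0:F1}%", cpu.NextValue());
        Console.WriteLine("Free memory: {0} MB", freeMem.NextValue());
    }
}
```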
However, you will not find anything that correlates cleanly with raw computing "power". In fact, who is to say that the machine's full "power" is being taken advantage of at the time you look at the counters? It sounds like what you really want to do is run a benchmark, which includes the exact work to be performed and the collection of measurements to be taken. You can search the Internet for various benchmark applications. Some of these run directly in .NET while the most popular are native applications which you could shell out to execute.