ASP.NET FileUpload really slow - C#

We're currently building an online music store that lets administrators (and only administrators) upload songs and previews to the website. The problem is that uploading a song takes about 3 or 4 minutes. Is that normal? Can someone suggest what I should ask the website's hosts to check? Our client is not happy that uploading the 100-200 songs needed to launch his site would take roughly 300-800 minutes (5-13 hours).
Here's the httpRuntime element we've put in web.config:
<httpRuntime maxRequestLength="20480" executionTimeout="240" />
Thanks

The first step is to check the host's bandwidth limitations. Is this described in a service agreement or similar? Ask them. If you have access to the host server, you can check for yourself using a variety of speed-test tools, or simply by transferring a file independently of your application.
The other thing to check is the client's bandwidth. What is the ISP's bandwidth (downstream and upstream)? Are there limits or throttling? Does the speed vary at different times of the day (or night)? The upload will only go as fast as the slowest link in the chain, and if DSL or cable is involved, remember that these connections are often asymmetric and, if so, are usually significantly slower upstream than downstream.
If the host and client bandwidths are okay, then start looking at the application's configuration.
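For example, on IIS 7 or later the upload path is governed by both httpRuntime and request filtering, so it's worth confirming that both limits and the execution timeout are sized appropriately. A sketch with illustrative values (maxRequestLength is in KB, maxAllowedContentLength is in bytes):
<system.web>
  <httpRuntime maxRequestLength="20480" executionTimeout="240" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <!-- ~20 MB; must be at least as large as the uploads you expect -->
      <requestLimits maxAllowedContentLength="20971520" />
    </requestFiltering>
  </security>
</system.webServer>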

Your server can handle fast uploads; the problem is bandwidth. Internet providers optimize connections for fast downloads and slower uploads. If you can, offer the admins FTP access for uploading their files. It should be faster than HTTP anyway.

Related

How to increase speed of Webclient.UploadFileAsync function?

I am using the WebClient.UploadFileAsync method to call a REST web service that uploads files to a server. The uploads to the server can also be done from a web application.
The server-side processing takes milliseconds, so most of the upload time is spent in transport. I am able to upload a 6.28 MB file from the web application in 2 minutes, but the same upload from my WinForms application using WebClient.UploadFileAsync takes 3 minutes.
The difference between the web-browser upload and the web-service upload is that the former saves the file directly to the server, while with the web service the service is called first and then the file is saved to the server.
So what is the reason for such a huge difference in speed, and how can it be reduced?
Update: I tried using Fiddler as suggested and found something interesting. When I uploaded a file while Fiddler was running, I got a huge improvement in upload speed, close to the speed of the web application. When I uploaded while Fiddler wasn't running, the upload was very slow as before. So there seems to be a bug in the WebClient class. How do I get around this issue?
I can't add comments due to my reputation, so sorry for getting your hopes up in advance. Since you have to go through middleware, so to speak, the overall upload time is increased. If going through the web service isn't essential and you have the right tools, there are many FTP clients and libraries out there that could do this, probably faster than your web server. If you are required to go through the web service, I don't have much of an answer other than perhaps trying a different HTTP client that runs slightly faster.
So, to partly answer your question: using a secure FTP library would most likely be faster, and the speed difference is mainly due to the middleware you have to go through before the file hits your actual server.
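As a rough illustration of the FTP-library route -- assuming a library such as SSH.NET, which is just one example; the host, credentials, and paths below are made up:
using System.IO;
using Renci.SshNet;

// Hypothetical host and credentials, for illustration only.
using (var sftp = new SftpClient("files.example.com", 22, "uploader", "secret"))
{
    sftp.Connect();
    using (var stream = File.OpenRead(@"C:\music\track01.mp3"))
    {
        // Streams the file straight to the remote path, bypassing the web service layer.
        sftp.UploadFile(stream, "/uploads/track01.mp3");
    }
    sftp.Disconnect();
}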

What could be the reason for this kind of Azure Web Site hang?

I have a rather high-load deployment on Azure: 4 Large instances serving about 300-600 requests per second. Under normal conditions the "Average Response Time" is 70 to 150 ms; sometimes it grows to 200-300 ms, but that's absolutely OK.
However, once or twice a day (not at rush hours) I see the following picture on the Web Site Monitoring tab:
The number of requests per minute drops significantly, the average response time grows to about 3 minutes, and after a while everything comes back to normal.
During this "blackout" only about 0.1% of requests are dropped (HTTP server errors with timeouts); the other requests just wait in the queue and are processed normally after a few minutes. Not all clients are ready to wait, though :-(
Memory usage stays under 30% the whole time, and CPU usage is only up to 40-50%.
What I've already checked:
Traces for timed-out requests: they timed out at random locations.
Throttling for Azure Storage and other components used: no throttling at all.
I also tried routing all traffic through CloudFlare: the same problems appeared.
What could be the reason for such problems? What may I check next?
Thank you all in advance!
Update 1: BenV proposed a good thing to try, but unfortunately it showed nothing :-(
I configured process recycling every 500k requests and also added worker nodes, so CPU utilization is now less than 40% all day long, but the blackouts still appear.
Update 2: The project uses ASP.NET MVC 4.
I had this exact same problem. In my case I saw a lot of WinCache errors in my logs.
Whenever the site failed, there would be a lot of WinCache errors in the log. WinCache is how IIS handles PHP caching to try to speed up processing. It's a Microsoft-built add-on that is enabled by default in IIS and on all Azure sites. WinCache would get hung up, and instead of recycling and continuing, it would consume all the memory and file handles on an instance, essentially locking it up.
I added a new App Setting in the Azure Portal to scan a folder for php.ini settings changes:
d:\home\site\ini
Then I added a file at d:\home\site\ini\settings.ini containing the following:
wincache.fcenabled=1
session.save_handler = files
memory_limit = 256M
wincache.chkinterval=5
wincache.ucachesize=200
wincache.scachesize=64
wincache.enablecli=1
wincache.ocenabled=0
This does a few things:
wincache.fcenabled=1
Enables file caching using WinCache (I think that's the default anyway)
session.save_handler = files
Changes the session handler from WinCache (the Azure default) to standard file-based sessions, to reduce stress on the cache engine
memory_limit = 256M
wincache.chkinterval=5
wincache.ucachesize=200
wincache.scachesize=64
wincache.enablecli=1
Sets the WinCache size to 256 megabytes per thread and limits the overall Cache size. This forces WinCache to clear out old data and recycle the cache more often.
wincache.ocenabled=0
This is the big one: DISABLE WinCache opcode caching, i.e. WinCache caching the compiled PHP scripts in memory. Files are still cached (per the first setting above), but PHP is interpreted as normal and not cached into large binary files.
I went from having my Azure Website crash about once every 3 days, with logs that look like yours, to 120 days straight (so far) without any issues.
Good luck!
There are some nice tools available for Web Apps in the preview portal.
The Application Insights extension especially can be useful for monitoring and troubleshooting app performance.

Upload files evenly on servers

Users will be uploading files to my website, and I need to distribute them evenly across more than one server. I also need a column in the DB that records which server each file was uploaded to.
So here is my design.
Have an enum of server names, i.e. server1, server2, server3.
Get the last uploaded server name from the DB.
If the last uploaded server was server1, then the current file should be uploaded to server2 and the DB updated;
if the last uploaded server was server3, then the current file should be uploaded to server1 and the DB updated.
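In code, the idea is roughly this (just a sketch; the enum and method names are placeholders):
using System;

public enum UploadServer { Server1, Server2, Server3 }

public static class UploadServerPicker
{
    // Returns the server the next file should go to, given the last one used (read from the DB).
    public static UploadServer GetNextServer(UploadServer lastUsed)
    {
        var servers = (UploadServer[])Enum.GetValues(typeof(UploadServer));
        int nextIndex = ((int)lastUsed + 1) % servers.Length;
        return servers[nextIndex];
    }
}

// Usage: upload to GetNextServer(lastUsed), then store the chosen server back in the DB.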
The application and DB are currently hosted on a single server, but in the future we will move to load balancing.
Let me know if there is a better method than this.
Your solution should work, depending on how your customers use it. I'll give you a quick breakdown of how I've seen this done before.
Round-robin DNS (assign multiple IP addresses to the same domain)
Multiple web servers that get traffic based on the DNS round robin
Each web server then has its own dedicated SQL server
SQL servers use replication to keep data synchronized
A single storage server for file uploads (unless file uploads are the main function, I doubt you'd need more than one)
Pros:
VERY easily scalable until you hit massive levels of traffic, at which point you'll need to rethink the SQL piece.
When purchasing hardware you can spend your money in certain areas to focus a machine on being a file server, SQL server, or web server.
Cons:
This doesn't provide any true redundancy, and arguably makes it worse due to the tiered approach. This could be resolved with managed DNS, but that still isn't a perfect approach, and I know some sysadmins who cringe at the thought of managed DNS.

Upload a file to multiple servers

I see a ton of questions about uploading multiple files, but none about uploading a single file to multiple servers, so here goes...
I have an ASP.NET app that will be running on two load balanced servers, and I would like to allow users to upload files and have them end up on both servers. What is the cleanest way to do this? I am using IIS 6 btw.
Some ideas that come to mind are:
1) Use a virtual directory that points to a shared location that both servers can access. Will there be any access issues if the application runs as Network Service? I'm assuming the application will need to run as a user account that exists on the machine hosting the shared location. How should the permissions be set up for this?
2) It would be nice if I could use jQuery to post the request to both of my servers, referencing them by their port numbers. Even though the servers are on the same domain, this violates the same-origin policy, right?
Is there another solution I'm overlooking? How do other sites do this?
I think you want to consider this problem more carefully - having a pair (or more) of servers means that some of them will be offline some of the time (at least for occasional reboots).
Uploads made while not all of the servers are online can't be sent to every server immediately, so you'd need either an intermediate server (which would be a point of failure unless it was highly available itself) or a queuing system to "remember" which files went where and to transfer them once the relevant servers are restored.
Also, you'll want a backup system, and some way to add newly provisioned servers to your cluster. You will also want a way to verify that these files stay the same across servers in case they get out of sync. Your architecture needs a lot of careful thought. I don't have the answers :)
The cleanest approach is forwarding the files server-side, really. If you force two uploads via JavaScript, not only will you have to work around cross-origin safeguards, but you'll also force the user to spend their very limited upstream bandwidth twice for each file.
You shouldn't be exposing that kind of detail to the client anyway. The browser doesn't need to know where the file ends up, just who to send it to. If you keep that logic server-side, not only do you keep the details hidden (and thus less prone to errors and exploits), but you'll also get more control over the process. You can create a gateway service later that handles a multitude of back end storages and you can handle failing servers better. You can queue failed uploads and retry. All these come at a very low cost if you do them on the server side, but are a pain to be made to work reliably on the client side.
Keep back-end logic on your back end. Load balancing should be hidden from the user, so there's no need to tell them exactly where they are sending their files. Make it optional, if you want, but hide the action from them. Just swallow the file on the gateway server (which can be either of the load-balanced servers -- in fact, the gateway should probably be load balanced too, so it should work with either of them in place) and send it to the other servers from there. The transfer from server to server will probably be faster, too.
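A minimal sketch of that gateway idea, assuming the peer server exposes an internal upload endpoint (the URL and paths here are invented):
using System.Net;

// After the gateway server has saved the upload locally...
string localPath = @"D:\sites\app\App_Data\uploads\photo.jpg";

// ...forward the same file to the peer server's internal endpoint.
using (var client = new WebClient())
{
    client.UploadFile("http://server2.internal/uploads/receive.ashx", localPath);
}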
Your best bet is definitely a NAS, if one is available -- a shared file system that is not specifically associated with any machine. Then you can focus on making the NAS highly available via a clustered frontend.
If that's not an option, you can use a virtual directory on each machine that points to one folder on one of the machines, but then you lose redundancy.
I'm faced with this same challenge at my work. My app is small but needs to be highly available, and there's no NAS in sight. So in each machine's web.config I keep a list of all the UNC paths where an uploaded file should be stored. After uploading to a temp folder, I copy the file to each machine one by one. It's not perfect -- a machine could go down, in which case when it came back up it might not have all the files (and the copy would be slowed by the hunt for the missing machine) -- but in my situation uploads are so infrequent that it's not worth improving.
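Roughly, that looks like this (the appSettings key, file names, and UNC paths are made up for the example):
using System.Configuration;
using System.IO;

// web.config: <add key="UploadTargets" value="\\server1\uploads;\\server2\uploads" />
string fileName = "invoice.pdf";
string tempPath = Path.Combine(Path.GetTempPath(), fileName);

foreach (string uncPath in ConfigurationManager.AppSettings["UploadTargets"].Split(';'))
{
    // Copy the uploaded file from the temp folder to each machine, one by one.
    File.Copy(tempPath, Path.Combine(uncPath, fileName), true);
}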
As others have mentioned, Javascript is right out. Upload once.
I have seen this problem solved with a NAS, using credentials for the app pool that can read/write files to that NAS. Make sure your NAS is set up for high availability to prevent a single point of failure, e.g. hot-swap drives with RAID, multiple array controllers, redundant power supplies, etc.
You could also put folder-monitoring software on the servers to keep certain directories in sync. I don't recommend this solution.

Upload file to a remote server, how should I?

I am scratching my head over this. My scenario is that I need to upload a file from our hosting server to a folder on the C: drive of the company's server (a totally different machine). I don't know how I should do this. Do any of you have tips or code on how this is done?
Thanks Guys
I would set up an FTP server (like the one in IIS, or a third-party server) on the company server. If security is an issue, you'll want to set up SFTP (secure FTP) rather than vanilla FTP, since FTP is not a natively secure transfer protocol. Then create a service on the hosting server to pick up the file(s) as they come in and ship them to the company server using .NET's FTP support. Honestly, it should be pretty straightforward.
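A bare-bones sketch of the upload side using the built-in FtpWebRequest (the server name, credentials, and paths are placeholders):
using System.IO;
using System.Net;

// Hypothetical company FTP server and credentials.
var request = (FtpWebRequest)WebRequest.Create("ftp://companyserver.example.com/incoming/report.csv");
request.Method = WebRequestMethods.Ftp.UploadFile;
request.Credentials = new NetworkCredential("ftpuser", "ftppassword");

using (var fileStream = File.OpenRead(@"C:\exports\report.csv"))
using (var ftpStream = request.GetRequestStream())
{
    // Stream the file from the hosting server to the company server.
    fileStream.CopyTo(ftpStream);
}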
Update: Reading your question, I am under the strong impression that you will NOT have a web site running on the company server. That is, you do not need a file-upload control in your web app (or you already know how to implement one, given that the control is right there in the web-page toolbox). Your question, as I understand it, is how to get a file from the web server over to the company server.
Update 2: Added a note about security. Note that this is less of a concern if the servers are on the same subdomain and traffic won't be routed outside the company network, and/or if the data is not sensitive. I didn't think of this at first because I'm working on a project like this now and our data is not in any way sensitive.
Darren Johnstone's File Upload control is as good a solution as you will find anywhere. It has the ability to handle large files without impacting the ASP.NET server memory, and can display file upload progress without requiring a Flash or Silverlight dependency.
http://darrenjohnstone.net/2008/07/15/aspnet-file-upload-module-version-2-beta-1/
There isn't enough info to tell your whole hosting scenario, but I have a few suggestions that might get you started in the right direction:
Is your external server owned by another company or group so that you can't modify it? If you can modify it, you might consider hosting the process on that machine, either in-process or as a separate service. If it cannot be modified, you might consider hosting the service on the destination machine; that way it's in the same place the files need to show up at.
Do the files need to stay in sync with the process? I.e., do they need to be uploaded, moved, and verified as a single operation? If not, then a separate process is probably the best way to go. A separate process will give you some flexibility, but remember it will be a separate process and a separate set of code to manage and work with.
How big are the files being uploaded? Do they vary by upload? Are they plain files or binaries (zips, executables, etc.)? If the files are small you have more options than if they are large. If they are small enough, you can even relay them inline.
Depending on the answers to the above some of these might work for you:
Use MSMQ. This will work for simple messages under about 3 MB without too much hassle. It's ideal for messages that can be worked with directly (such as XML); see the sketch at the end of this answer.
Use direct HTTP(S) relaying. On the host machine, open an HTTP(S) connection to the destination machine and transfer the file. Again, this will work better for smaller files (i.e. only a few KB, since it will be done inline).
If you have access to the host machine, deploy a separate process on it that builds or collects the files and uses any of the listed methods to send them to the destination machine.
You can use SCP or FTP (in any form -- SFTP, etc.) on either the host machine (if you have access) or the target machine to host the incoming files, and use a batch process to move the files. This has a lot of issues to address, such as file size, keeping submissions in sync, and timing. I would consider this a last resort, depending on the situation.
Again depending on message size, you could also use a layer of abstraction such as a DB to act as the intermediate layer between the two machines. This will work as long as both machines can see the DB (or other storage location) and both act on it. SQL Server Service Broker could be used for this purpose (and most other DB products offer similar features).
You can also look at other products like WSO2 ESB or NServiceBus to facilitate messaging between the two apps and do it inline.
Hopefully that will give you some starting points to look into.
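For example, the MSMQ option might look something like this (the queue name and file are illustrative, and the private queue is assumed to already exist on the receiving machine):
using System.IO;
using System.Messaging; // reference System.Messaging.dll

// Sender: wrap the small file in a message and drop it on the remote private queue.
using (var queue = new MessageQueue(@"FormatName:DIRECT=OS:companyserver\private$\fileRelay"))
using (var message = new Message())
{
    message.Label = "report.xml";
    message.BodyStream = File.OpenRead(@"C:\staging\report.xml");
    queue.Send(message);
}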
