... is very slow. We're trying to deploy a 280 MB cspkg file through the VS2010 tools, and it takes roughly 35 minutes to upload, and another 10 minutes to deploy.
Are there any ways to speed up this upload process? We're contemplating putting invariant data into a blob and pulling it from there, but we'd like to know what's happening in the first place.
Edited to reflect that we're using the VS2010 Azure integration tools.
Both deployment methods (API and Portal) allow you to deploy from a file that is already uploaded to Azure Storage. The VSTS tools are just utilizing this feature behind the scenes (in the 2010 tools you have to provide storage credentials for this reason).
You should look into uploading the .cspkg into a blob directly (rather than through the Visual Studio tools), and then writing a simple upload client that breaks the upload into blocks, which can be uploaded simultaneously. You can then tweak the block size and the number of blocks uploading at a time to better utilize your outgoing bandwidth. Once they are all there, you just use the API to "assemble" them in Azure. This should really speed up the upload.
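To make the block approach concrete, here is a minimal sketch, assuming the Azure storage client library; the block size, the 8-way parallelism, and the names are illustrative rather than a definitive implementation:

// Sketch: split a local .cspkg into blocks, upload them in parallel,
// then commit the block list so Azure assembles the blob server-side.
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage.Blob;

class BlockUploader
{
    const int BlockSize = 4 * 1024 * 1024; // 4 MB per block; tune against your upstream bandwidth

    static void Upload(string path, CloudBlockBlob blob)
    {
        long length = new FileInfo(path).Length;
        int blockCount = (int)((length + BlockSize - 1) / BlockSize);

        // Block IDs must be base64 strings with equal decoded lengths.
        var blockIds = new List<string>();
        for (int i = 0; i < blockCount; i++)
            blockIds.Add(Convert.ToBase64String(BitConverter.GetBytes(i)));

        // Upload blocks in parallel; each worker reads its own slice of the file.
        Parallel.For(0, blockCount, new ParallelOptions { MaxDegreeOfParallelism = 8 }, i =>
        {
            using (var fs = File.OpenRead(path))
            {
                long start = (long)i * BlockSize;
                var buffer = new byte[(int)Math.Min(BlockSize, length - start)];
                fs.Seek(start, SeekOrigin.Begin);
                int offset = 0;
                while (offset < buffer.Length)
                    offset += fs.Read(buffer, offset, buffer.Length - offset);
                blob.PutBlock(blockIds[i], new MemoryStream(buffer), null);
            }
        });

        // Commit the block list; this is the "assemble" step mentioned above.
        blob.PutBlockList(blockIds);
    }
}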
To answer your question as to "what's happening": I think you are just getting synchronous WebClient I/O to Azure Storage, and all the limitations that come with it.
We hit a very similar problem recently, as we had to package about 40 MB of third-party libraries to establish a SQL connection to Oracle from Windows Azure.
Through Lokad.CQRS, we did exactly what you suggest: putting all the big static libraries into a blob and keeping the Azure package itself as lean as possible. It works very nicely.
Related
I am using the WebClient.UploadFileAsync method to call a REST web service that uploads files to a server. Uploads to the server can also be done from a web application.
The server-side processing takes milliseconds, so most of the upload time is spent in transport. I can upload a 6.28 MB file from the web application in 2 minutes, but the same upload from my WinForms application using WebClient.UploadFileAsync takes 3 minutes.
The difference between the web-browser upload and the web-service upload is that the former saves the file directly to the server, whereas with the web service, the service is called first and the file is then saved to the server.
So, what is the reason for such a huge difference in speed? And how can this difference be reduced?
Update: I tried using Fiddler as suggested, and found an interesting thing. When I uploaded a file while Fiddler was running, I got a huge improvement in upload speed, close to the speed of the web application. When I tried uploading while Fiddler wasn't running, I got the same very slow upload speed as before. So there seems to be a bug in the WebClient class. How do I get around this issue?
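For what it's worth, one well-known cause of exactly this symptom (a local proxy like Fiddler speeding uploads up) is the interaction between Nagle's algorithm and the Expect: 100-continue handshake in System.Net. Here is a minimal sketch of disabling both before the upload; the URL and file path are placeholders:

// Sketch: mitigation for slow WebClient uploads that mysteriously speed up
// when Fiddler is running. Fiddler acts as a proxy and buffers the request,
// masking the small-write/delayed-ACK interaction disabled below.
using System;
using System.Net;

class UploadFix
{
    static void Main()
    {
        ServicePointManager.UseNagleAlgorithm = false; // don't hold back small TCP writes
        ServicePointManager.Expect100Continue = false; // skip the 100-continue round trip

        using (var client = new WebClient())
        {
            client.UploadFile("http://example.com/upload", @"C:\data\file.bin"); // placeholders
        }
    }
}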
I can't add comments due to my reputation, so sorry for getting your hopes up in advance. It would seem that since you have to go through middleware, so to speak, the overall load time is increased. If it's not that important, and you have the right tools to do so, there are many FTP clients and libraries out there that could do this, probably faster than your web server. If you're required to go through a web server, though, I wouldn't have much of an answer other than perhaps using an external web client that might run slightly faster.
So, to sort of answer your question: using a secure FTP library would most likely be faster, and the speed difference is mainly due to the middleware you have to go through before hitting your actual server.
I am scratching my head about this. My scenario is that I need to upload a file from our hosting server to the company server machine (to a folder on C:), which is a totally different server. I don't know how I should do this. Do any of you have tips or code on how this is done?
Thanks Guys
I would set up an FTP server (like the one in IIS, or a third-party server) on the company server. If security is an issue, you'll want to set up SFTP (secure FTP) rather than vanilla FTP, since FTP is not a natively secure transfer protocol. Then create a service on the hosting server to pick up the file(s) as they come in and ship them to the company server using .NET's FTP support (FtpWebRequest). Honestly, it should be pretty straightforward.
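As a rough sketch of the pickup service's upload step, assuming FtpWebRequest; the URI, credentials, and paths are placeholders:

// Sketch: ship a file from the hosting server to the company server over FTP.
using System;
using System.IO;
using System.Net;

class FtpShipper
{
    static void Ship(string localPath, string ftpUri, string user, string pass)
    {
        // e.g. ftpUri = "ftp://companyserver/inbox/report.xml"
        var request = (FtpWebRequest)WebRequest.Create(ftpUri);
        request.Method = WebRequestMethods.Ftp.UploadFile;
        request.Credentials = new NetworkCredential(user, pass);

        using (var file = File.OpenRead(localPath))
        using (var stream = request.GetRequestStream())
        {
            file.CopyTo(stream); // stream the file rather than buffering it in memory
        }

        using (var response = (FtpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Upload status: {0}", response.StatusDescription);
        }
    }
}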
Update: Reading your question, I am under the strong impression that you will NOT have a web site running on the company server. That is, you do not need a file upload control in your web app (or you already know how to implement one, given that the control is right there in the web-page toolbox). Your question, as I understand it, is how to get a file from the web server over to the company server.
Update 2: Added a note about security. Note that this is less of a concern if the servers are on the same subdomain, traffic won't be routed outside the company network, and/or the data is not sensitive. I didn't think of this at first because I am working on a project like this now, but our data is not in any way sensitive.
Darren Johnstone's File Upload control is as good a solution as you will find anywhere. It has the ability to handle large files without impacting the ASP.NET server memory, and can display file upload progress without requiring a Flash or Silverlight dependency.
http://darrenjohnstone.net/2008/07/15/aspnet-file-upload-module-version-2-beta-1/
There isn't enough info to tell what your whole hosting scenario is, but I have a few suggestions that might get you started in the right direction:
Is your external server owned by another company or group, such that you can't modify it? If you can modify it, you might consider hosting the process on the same machine, either in-process or as a separate service. If it cannot be modified, consider hosting the service on the destination machine; that way it's in the same place the files need to show up at.
Do the files need to stay in sync with the process? That is, do they need to be uploaded, moved, and verified as a single operation? If not, a separate process is probably the best way to go. A separate process gives you some flexibility, but remember it is also a separate set of code to manage and work with.
How big are the files being uploaded? Do they vary by upload? Are they plain text files or binaries (zips, executables, etc.)? If the files are small you have more options than if they are large; if they are small enough, you can even relay them inline.
Depending on the answers to the above, some of these might work for you:
Use MSMQ. This will work for simple messages under about 3 MB without too much hassle. It's ideal for messages that can be worked with directly (such as XML); see the sketch after this list.
Use direct HTTP(S) relaying. On the host machine, open an HTTP(S) connection to the destination machine and transfer the file. Again, this works better for smaller files (i.e., only a few KB, since it will be done inline).
If you have access to the host machine, deploy a separate process on the machine which builds or collects the files and uses any of the listed methods to send them to the destination machine.
You can use SCP or FTP (in any form: SFTP, etc.) on either the host machine (if you have access) or the target machine to host the incoming files, and use a batch process to move the files. This has a lot of issues to address, such as file size, keeping submissions in sync, and timing; I would consider it a last resort, depending on the situation.
Again depending on message size, you could also use a layer of abstraction such as a DB to act as the intermediate layer between the two machines. This will work as long as the two machines can see the DB (or other storage location) and both act on it. SQL Server Service Broker could be used for this purpose (and most other DB products offer similar products).
You can look at other products like WSO2 ESB or NServiceBus to facilitate messaging between the two apps and do it inline.
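To make the MSMQ option concrete, here is a minimal sketch; the queue path and message body are placeholders, and the message-size cap is why it only suits small payloads:

// Sketch: relay a small XML message to a queue on the destination machine.
using System.Messaging;

class MsmqRelay
{
    static void Main()
    {
        // A DIRECT format name lets you address a remote private queue.
        const string queuePath = @"FormatName:DIRECT=OS:companyserver\private$\uploads";

        using (var queue = new MessageQueue(queuePath))
        {
            var message = new Message
            {
                Label = "file-manifest",
                Body = "<manifest><file>report.xml</file></manifest>", // XML works well here
                Recoverable = true // persist to disk so it survives a service restart
            };
            queue.Send(message);
        }
    }
}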
Hopefully that will give you some starting points to look into.
We're currently creating an online music store that allows administrators (and only administrators) to upload songs and previews to the website. The problem is that uploading a song to our website takes about 3 or 4 minutes. Is that normal? Can someone suggest what I should ask the website's host to check? Our client is not really happy that uploading the 100-200 songs needed to launch his website will take about 300-800 minutes (5-13 hours) :oP.
Here's the httpRuntime setting we've put in web.config:
<httpRuntime maxRequestLength="20480" executionTimeout="240" />
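Note that maxRequestLength is in kilobytes (so 20480 = 20 MB) and executionTimeout is in seconds, which means a 3-4 minute upload is already brushing against the 240-second limit. If the site runs on IIS7 or later, requests are also capped by maxAllowedContentLength, which is in bytes. An example raising both (the values are illustrative, not recommendations):

<!-- 100 MB request cap, 1 hour timeout -->
<httpRuntime maxRequestLength="102400" executionTimeout="3600" />

<!-- IIS7+ only: the request-filtering cap, in bytes (here 100 MB) -->
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="104857600" />
    </requestFiltering>
  </security>
</system.webServer>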
Thanks
The first step is to check the host's bandwidth limitations. Is this described in a service agreement or similar? Ask them. If you have access to the host server, you could check for yourself using a variety of speed-test tools, or by simply transferring a file independently of your application.
The other thing to check is the client's bandwidth. What is the ISP's bandwidth (downstream and upstream), are there any limits or throttling, and does the speed vary at different times of the day (or night)? An upload will only go as fast as the slowest link in the chain, and if DSL/cable is involved, remember that these connections are often asymmetric and usually significantly slower upstream than downstream.
If the host and client bandwidths are okay, then start looking at the application's configuration.
Your server can handle fast uploads; the problem is bandwidth. Internet providers optimize connections for fast downloads and slower uploads. If you can, offer FTP access so the admins can upload their files; it should be faster than HTTP anyway.
What are the challenges in porting your existing applications to Azure?
Here are a few points I'm already aware of.
1) No support for session affinity (Azure is stateless) - I'm aware that Azure load balancing doesn't support session affinity, hence an existing web application must be changed if it relies on session affinity.
2) Interfacing with COM - Presently, I think there is no support for deploying COM components to the cloud and interfacing with them, in case my current applications need to access legacy components.
3) Interfacing with other systems from the cloud using non-HTTP protocols
Other than the points mentioned above, what other significant limitations/considerations are you aware of?
Also, how are these pain points addressed in the latest release?
Our biggest challenge is the stateless nature of the cloud. Though we've tried really, really hard, some bits of state have crept through to the core, and this is what is being addressed.
The next challenge is supporting stale data and caching, as data can be offline for weeks at a time. This is hard regardless.
Be prepared for a lengthy deployment process. At this time (pre-PDC 2009), uploading a deployment package and spinning up hosted services has sometimes taken me more than 30 minutes (it depends on the time of day, the size of the package, the number of roles, etc.).
One side effect of this is that making configuration changes in web.config files is expensive, because it requires the entire app package to be re-packaged and re-deployed. Use the Azure service configuration files for config settings instead, as they do not require a host suspend/restart.
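To illustrate, here is a minimal sketch of reading a setting from the service configuration (.cscfg), falling back to web.config when running outside the Azure fabric; the setting name is a placeholder:

// Sketch: prefer the .cscfg setting, which can be changed without redeploying.
// Requires references to Microsoft.WindowsAzure.ServiceRuntime and System.Configuration.
using Microsoft.WindowsAzure.ServiceRuntime;

class ConfigReader
{
    static string GetSetting(string name)
    {
        if (RoleEnvironment.IsAvailable)
            return RoleEnvironment.GetConfigurationSettingValue(name);

        // Local/dev fallback outside the Azure fabric.
        return System.Configuration.ConfigurationManager.AppSettings[name];
    }
}

// Usage: string account = ConfigReader.GetSetting("StorageAccountName");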
My biggest problem with Azure today is interoperability with other OSes. Here I am comparing Azure to EC2/Rackspace instances (even though Azure, as PaaS, offers a lot more than they do, e.g. load balancing, storage replication, geographical deployment, etc., in a single cheap package).
Even considering me as a BizSpark startup guy, I am not inclined to run my database on SQL Azure (a SQL 2005 equivalent), since I can't accept their pricing policy, which I'll have to bear after the three years of the BizSpark program. And they don't offer an option for MySQL or any other database. This, to me, is ridiculous for an SME. With EC2 I can run my MySQL instance on another Linux VM (obviously in the same network; Azure gives you the capability to connect to networks outside theirs, but that is not really an option).
Second, this again relates to using *nix machines. I want all my caching maintained by memcached. With ASP.NET 4 they have even given us out-of-the-box memcached support through extensible output caching. The reason I am adamant about memcached is the ecosystem it provides. For example, today I can get memcached with persistent caching as an add-on, which will even give me the opportunity to store session data in memcached. Additionally, I can run map-reduce jobs on the IIS logs; this is done using Cloudera images on EC2. I don't see how I can do these with Azure.
You see, in the case of Amazon/Rackspace, I can run my ASP.NET web app on a single instance of Windows Server 2008 and the rest on *nix machines.
I am contemplating running my non-hierarchical data (web app menu items) on CouchDB. With Azure I get Azure Table storage, but I am not very comfortable with that at the moment. With EC2 I can run it on the same MySQL box (don't catch me on this one :-)).
If you are ready to overlook these problems, Azure gives you an environment with a lot of the grunt work abstracted away, and that's a nice thing: scaling, load balancing, a lot of very cheap storage, CDN, storage replication, and out-of-the-box monitoring for services through the Fabric Controller, among others. With EC2/Rackspace you'll have to hire a sysadmin, shelling out $150k p.a., to do these things (AFAIK Amazon provides some of these features at additional cost).
My comparisons are between Azure and Amazon/Rackspace instances (and not the cloud). For some this might seem like apples and oranges, but Azure does not provide you with instances, just the cloud with their customized offerings…
My biggest problem is/was just signing up and creating a project, and that's as far as it has gotten over the last month.
Either I am doing something very wrong, or that site is broken most of the time.
One important challenge is the learning curve: the lack of experienced developers, and the time it takes to become productive.
This happens with all technologies, but with the cloud there is a fundamental change in how some things are done.
If your application needs a database, I'm not sure that Windows Azure has a relational database (right now).
Also, other cloud computing providers can offer you more options in configuring your virtual machine, for example. It really depends on what you actually need and want.
I am now at the point where I can't get any further without some help. I am trying to host files on the cloud and then access those files via code (C#). So far I have tried Rapidshare and SkyDrive and have been unable to get either working at all. Below are a few things that I am trying to do, or rather must be able to do, with the cloud storage.
What I need is a place to host files on the internet (obviously).
The files can range in size from 10 MB to 100 MB.
I must be able to download the files via code as well as upload them via code.
I don't really mind having to pay, as long as the price is not ridiculous. Any help at all will be much appreciated.
Thanks
Stalkerh
Why don't you look at Amazon S3? It does what you want, is cheap, and has a C# API wrapping its web service (but ThreeSharp is better).
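For illustration, here is a minimal sketch using the AWS SDK for .NET (shown instead of ThreeSharp); the bucket name, key, region, and paths are placeholders:

// Sketch: upload and download a file with Amazon S3.
// Credentials are picked up from the standard AWS config/profile.
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

class S3Example
{
    static void Main()
    {
        var client = new AmazonS3Client(RegionEndpoint.USEast1);

        // Upload a local file to the bucket.
        client.PutObject(new PutObjectRequest
        {
            BucketName = "my-bucket",
            Key = "files/archive.zip",
            FilePath = @"C:\data\archive.zip"
        });

        // Download it back to disk.
        using (var response = client.GetObject(new GetObjectRequest
        {
            BucketName = "my-bucket",
            Key = "files/archive.zip"
        }))
        {
            response.WriteResponseStreamToFile(@"C:\data\downloaded.zip");
        }
    }
}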