I'm having issues with Continuous Deployment from GitHub in Azure. I have a Shared pricing tier, and the issue goes away if I upgrade to the Basic pricing tier. This is for an ASP.NET MVC 6 application (RC1).
Essentially I get the following error:
http://pastebin.com/PgARgurg
The bit that stands out is:
Restore failed
There is not enough space on the disk.
If I publish directly from Visual Studio to the Shared tier it works fine. It's only when using continuous deployment that it falls over.
Any ideas?
Shared instances have very limited resources, especially around disk space. Continuous deployment from GitHub brings a copy of the code down to disk on every change, and this isn't always cleaned up immediately (or at all). A direct publish to Azure from Visual Studio does clean up the prior deployment packages automatically. The reason that upgrading your tier solves the problem is the dedicated and increased available disk space. You should upgrade to Basic if you decide that continuous deployment (and thus disk space) is important to your work.
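If you want to see where the space is going before upgrading, you can poke around from the Kudu console at https://<yoursite>.scm.azurewebsites.net/DebugConsole. A minimal sketch, assuming the usual D:\home layout of Azure Web Apps; verify the paths on your own site:

```
# run from the Kudu PowerShell console; paths follow the usual Kudu layout
cd D:\home\site\repository   # the clone of your GitHub repo that CD pulls down
dir                          # note the size; with CD this lives alongside...
cd D:\home\site\wwwroot      # ...the deployed copy of the site
dir
```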
There's another caveat: Shared Web App instances have a 300 MB limit on temp folder size.
An ASP.NET 5 RC1 application uses more temp folder storage than a Beta8 app. Right now it's almost impossible to deploy an RC1 app through continuous integration to a Free App Service.
Basically, the restored packages are NOT the same size as the published packages of an application.
You can understand why this is the case by opening up the restored packages. For example, in the scenario below, the published package has only the content required for running the application.
Restored JSON.NET package content:
Published JSON.NET package content:
So you should probably publish the application as part of your continuous deployment, rather than restoring the full packages on the server.
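You can see the difference on disk yourself. A minimal sketch, assuming the default DNX package cache location; the package version and publish output path are illustrative:

```
# restored: the full package, with every target framework, tools, etc.
dir $env:USERPROFILE\.dnx\packages\Newtonsoft.Json\8.0.2
# published: only the assets the app actually needs
dir bin\output\approot\packages\Newtonsoft.Json\8.0.2\lib\dnxcore50
```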
I have been working on the .NET platform for a few years now, and I must say I am very impressed by how Microsoft is making .NET cross-platform.
I spent hours trying to run a small hello world application built using CoreCLR on a Mac. And it worked. While there are still a lot of UNKNOWNS I am trying to understand, there is one question I was not able to find an answer to on Google.
How do you automate the deployment of a dnx application? I mean, do you compile your ASP.NET 5 app into a NuGet package and then restore it on your Linux server (I have never used Linux, so I'm not sure how NuGet works there) and run a dnx command? Or do you just zip it and push it to the server directly?
Sorry, this is all very new to me, so my questions might sound stupid. I just want to know the best way to implement continuous delivery for my ASP.NET 5 applications. My ultimate goal is to host my apps in Linux containers.
You can use dnu publish --runtime <name of runtime> --no-source. That creates a folder that has your application, its dependencies, and the runtime. Then, all you have to do is get that folder on your server.
How you move files around really depends on your scenario... It could be FTP, Storage, Kudu (if you're on Azure WebSites), etc.
Another alternative is to do the restore on the server. While this reduces the size of the application when you publish, you will have to restore packages on the server, which can be insecure, and it can also break the application if newer, incompatible packages show up on the feeds.
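For completeness, a rough sketch of that restore-on-the-server flow; the server paths are illustrative, and it assumes your project.json defines a web command:

```
# copy sources (no packages) to the server, then restore there
scp -r ./src/MyApp user@server:/var/www/myapp
ssh user@server 'cd /var/www/myapp && dnu restore && dnx web'
```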
While there's no one right answer to fit all, I found that if you want the most reliable and consistent results, you should publish with everything, test locally, and then just copy the bundle to your server.
For Docker, I recommend the same thing: publish with the runtime and no sources, and create a container that has the resulting folder.
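Putting it together, a sketch of the publish-everything flow; the runtime name, output path, and server paths are all illustrative, and it again assumes a web command in project.json:

```
# publish a self-contained folder: app + dependencies + runtime
dnu publish --runtime dnx-coreclr-linux-x64.1.0.0-rc1-final --no-source -o bin/output
# get the folder onto the server however you like (FTP, scp, Kudu, ...)
scp -r bin/output user@server:/var/www/myapp
# publish generates an entry-point script per project command
ssh user@server '/var/www/myapp/approot/web'
```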
I have an MVC4 web application that uses jquery and some other libs (jquery-ui in particular).
Yesterday I decided to update all the packages via the NuGet package manager; my web application worked correctly on my local machine, but when I deployed it to my Azure website a JavaScript error popped up in my browser (it was related to the jquery-ui library, something like "$browser is not a function").
I searched the web and found out that the cause of this error was that I was still using an old version of jQuery. It seems the deploy process didn't publish the new version of the JS libraries, even though they had been updated in the local project.
I solved the problem by connecting via RDP to the Azure machine, deleting the contents of the "Scripts" folder, and deploying again, but I'm wondering if there's a way to "force" a script/library update when deploying to Azure.
Edit 1: I'm developing with Visual Studio 2012, using Mercurial as source control provider
Edit 2: I'm deploying to Azure Web Sites
Please, in your future questions clearly indicate what type of Azure service you use. An MVC4 web application can be deployed to 3 different types of services: Azure Web Sites, Azure Cloud Services, or Azure Virtual Machines!
Since you are talking about RDP, the viable options are Cloud Service or Virtual Machine. But then you say
I solved the problem by connecting via RDP to the Azure machine, deleting the contents of the "Scripts" folder, and deploying again, but I'm wondering if there's a way to "force" a script/library update when deploying to Azure.
Now the question is: how do you deploy to Windows Azure? Is it via Visual Studio's Publish feature to an Azure Cloud Service? Is it Visual Studio's Package feature followed by some other method of deployment (uploading the package from the portal, using the Azure PowerShell cmdlets, or using a third-party tool to deploy the package)? Is it integration with Mercurial, where deployment happens automatically when you check in?
In any case, the issue you face is a mixture of: NuGet failing to do a truly clean update of everything; browser caching (especially during local development, IE caches all the scripts, CSS, and images, and without explicitly deleting all locally cached files it is hard to say which script you are actually using); and a simple version control issue of keeping both old and new scripts around.
When you do JS/CSS updates I strongly advise all customers to first delete the browser's cache (Ctrl+Shift+Del works in all browsers) before testing locally.
I highly doubt that, if you use a Cloud Service, RDP-ing in and deleting anything in the sitesroot folder will help when you redeploy. Whatever you do on the ROLEROOT drive (usually E:, sometimes F:) is dropped/forgotten when you re-deploy, regardless of the re-deploy method you use: in-place upgrade or full re-deploy. So what actually helped was that you created a new package and re-deployed it.
The fact that you deleted some folder has no effect on your re-deploy action.
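That said, if you deploy with Web Deploy from Visual Studio, there is a switch aimed at exactly this stale-files problem: the "Remove additional files at destination" publish option, which corresponds to the SkipExtraFilesOnServer MSBuild property. A minimal sketch from the command line; the project and profile names are illustrative:

```
# builds and publishes, deleting files at the destination that are not in the package
msbuild MyApp.csproj /p:DeployOnBuild=true /p:PublishProfile=AzureSite /p:Configuration=Release /p:SkipExtraFilesOnServer=false
```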
I am developing a C#, MVC4, EF5 Code First application on .NET in Visual Studio 2012 and have used the VS publish mechanism to deploy it to an Azure Website with an Azure SQL Database.
I now want to use Git and GitHub for version control and involve others in the project.
However, although I am familiar with using Git in a LAMP environment, I have no experience of using Git with Windows, Azure Websites and a compiled environment.
I would like to use the Azure Website as the production server, another Azure Website as a Staging server, developer Windows machines using Visual Studio for development and GitHub as the central repository.
There is a helpful article here: http://www.windowsazure.com/en-us/develop/net/common-tasks/publishing-with-git/. I can get my head around what would be needed there for, say, a PHP application on Azure, but I am unsure of the best approach with a compiled application and what I can achieve using Azure Websites and Visual Studio.
A nudge or two in the right direction would be greatly appreciated!
Don't publish from VS to Azure; instead, set up your Azure website to pull from the GitHub repo. The deployment process compiles your solution.
Watch http://www.youtube.com/watch?v=5NGieL0tinw&feature=youtu.be&hd=1 or read http://vishaljoshi.blogspot.com/2012/09/continuous-deployment-from-github-to.html
Also, ScottGu announced this on his blog at http://weblogs.asp.net/scottgu/archive/2012/09/17/announcing-great-improvements-to-windows-azure-web-sites.aspx, where he also talks about a cool feature, publishing branches. This will nail your requirement for a staging server and a production server: have a stage branch and a production branch and merge to them as desired. See the section "Support for multiple branches".
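A sketch of that branch setup; the branch names are illustrative, and each Azure website is configured to deploy from its own branch:

```
# create a stage branch alongside master (production)
git checkout -b stage
git push origin stage
# point the staging site at 'stage' and the production site at 'master';
# promoting a build is then just a merge:
git checkout master
git merge stage
git push origin master   # triggers the production site's deployment
```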
Looks like they finally added support for private repos too.
AppHarbor is a competitor to Azure that does something similar.
You are basically introducing a new step: the source code must be compiled before it can be deployed to the server. Where you implement this step is up to you. You could:
Ensure that your target server can compile the source code (some Continuous Integration tools, such as CruiseControl.NET, could help with this). The caveat is that the target server must be able to compile source code (possibly even requiring Visual Studio to be installed), so that may not be an option.
Check the compiled binaries into source control. You could keep these compiled binaries separate from the main source branch, to keep things clean. Deploy the binaries to the target server.
A hybrid of the previous two options is also possible: you could set up a Continuous Integration server with CruiseControl.NET that checks out the current source, builds it, checks the resulting binaries back into a special branch, and then deploys that branch to your target server, as sketched below.
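A rough sketch of that hybrid step as a CI build script; the solution name, branch name, and paths are illustrative, and _PublishedWebsites is where MSBuild drops web project output when OutDir is overridden:

```
# CI build step (PowerShell): compile Release, then commit output to a deploy branch
msbuild MySolution.sln /p:Configuration=Release /p:OutDir=..\build\
git checkout deploy
Copy-Item -Recurse -Force ..\build\_PublishedWebsites\MySite\* site\
git add site
git commit -m "CI: Release build"
git push origin deploy   # the target server pulls from this branch
```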
I am working on my first ASP.NET MVC 4 app. The client is deploying directly from the SVN repo that I push to. Can/should I check release builds in, or should they run builds on their end as part of the deploy process? I want to make it as simple for them as possible. Thanks for any advice!
You shouldn't check any builds into a source control repository, only source code. A build server should be used to precompile the application using the target configuration (Release if you are pushing to production). Also, be careful not to leave any production connection strings and URLs in the source code you have committed: an innocent developer could check out the code and unknowingly do lots of damage.
What is the best strategy for making changes to a specific file within a C# .NET project on a DEV server and then moving that file to a different environment, say server B? I noticed it always wants me to recompile on the destination server, and I figured I was doing something wrong because I didn't think I should have to (plus the server isn't in-house, so it is really slow and time-consuming).
Any suggestions or strategies you or your company uses would be appreciated.
Make sure you are using a Web Application project, which compiles into a DLL, not a Web Site project, which uses loose code files.
You could use a source code versioning system like Subversion.
Use a source control program for source files (like Subversion) and CruiseControl for binaries built out of those files.
For web application development my experience has been:
Developers have a development environment on their local machines that is attached to source control
A DEV web server, with shares to the project folders, lets developers manually COPY files to the web application folders
A TEST web server, where ONLY MSI installations are used to distribute the changes for UAT
A PROD web server, where ONLY MSI installations are used to distribute the UAT-approved MSI
The size of the projects I am involved with usually makes build scripts overkill; while a project is being worked on it is built many times for debugging anyway.