How can I deploy static content separately from my Azure solution? - c#

I saw the following comment on Stack Overflow but I'm unable to find a reference to it.
"Yes it is. I have seen a Azure webcast from Cloud9 where the application was
broken up. The static content like images, html, css etc were deployed separately
than the Azure solution. The azure web app just linked to these resources"
Has anyone done this and have any information on how to do it?

As @Joe Gennari mentioned: image links, CSS links, etc. just need to be changed to reference objects in blob storage. For instance: <img src="http://mystorage.blob.core.windows.net/images/logo.jpg" />.
To actually get content into blob storage, you can:
Create a little uploader app that makes very simple calls via one of the language SDKs (Java, PHP, .NET, Python, etc.); see the sketch below.
Upload blobs using the Azure PowerShell cmdlets (see the cmdlet documentation).
Use a tool such as Cerebrata's Cloud Storage Studio or ClumsyLeaf CloudXplorer, which let you work with blobs in much the same way you'd work with your local file system.
You would no longer be bundling static content with your Windows Azure project. You'd upload blob updates separately, without needing to redeploy the Azure project, which has the added benefit of reducing the deployment package size.
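For instance, the uploader app from the first bullet doesn't need to be much more than the following. This is a minimal sketch assuming the Azure.Storage.Blobs SDK; the environment variable, container name, and file name are placeholders.

// A minimal uploader sketch, assuming the Azure.Storage.Blobs SDK; the environment
// variable, container name, and file name below are placeholders.
using System;
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class StaticContentUploader
{
    static async Task Main()
    {
        var connectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING");
        var container = new BlobContainerClient(connectionString, "images");

        // Public blob access lets the browser fetch the file directly via an <img> tag.
        await container.CreateIfNotExistsAsync(PublicAccessType.Blob);

        // Upload (or overwrite) a single file; the blob name becomes part of the public URL.
        var blob = container.GetBlobClient("logo.jpg");
        using (var stream = File.OpenRead("logo.jpg"))
        {
            await blob.UploadAsync(stream, overwrite: true);
        }
        Console.WriteLine("Uploaded to " + blob.Uri);
    }
}

Once uploaded, the blob resolves at a URL of the same shape as the <img> example above.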

Related

How to download files from AWS S3 via CodeBuild

I have a build in AWS CodeBuild that gets an automation project from GitHub and runs it. All the files that we upload to the UI are currently saved in the project's repo. I don't like that, as they come to about 300-400 MB, which is a lot. My idea is to use the CodeBuild buildspec to download the files from AWS S3 to the server before the project is built. Is this possible at all?
I am new to CodeBuild, so I would appreciate some guidance. If the whole idea is not possible, I will have to fall back on the project setup, but there are a lot of things that can break while trying to download the files that way.
There are multiple ways to do this, but in general you have to generate a buildspec.yaml that instructs AWS CodeBuild to do what you want.
The AWS documentation is actually pretty good; there are a lot of examples.
This one downloads the source primarily from S3 and secondary sources from GitHub, which might fit your use case.
Have a look at the other examples as well; they might help with generating the buildspec.yaml.
If none fits, you can still run normal bash commands and do aws s3 sync or something similar.
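As a rough sketch of that approach (the bucket name, target folder, and build command below are placeholders, not values from your project), a buildspec.yaml that syncs the files from S3 in a pre_build phase could look like this:

# Hypothetical buildspec.yaml: pull the upload files from S3 before the build runs.
version: 0.2

phases:
  pre_build:
    commands:
      # Copy everything from the bucket into ./test-data on the build container.
      - aws s3 sync s3://my-automation-fixtures ./test-data
  build:
    commands:
      - ./run-tests.sh   # placeholder for however your automation project is started

The CodeBuild service role will also need s3:ListBucket and s3:GetObject permissions on that bucket for the sync to succeed.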

Up-to-date examples for using the Google Drive API that really work

The company I work for has many gigabytes of data on the Google Drive cloud. A lot of this data is contained in individual files of several gigabytes each.
We would like to operate on this data:
a file at a time
remotely (without downloading a whole file to local disk)
with COTS applications that see the cloud as a local hard disk and may access the file non-linearly, i.e., doing seeks and partial fetches from within a file
on Windows
I am trying to write a virtual disk driver that would appear to a COTS application as just another disk drive with read-only permissions. I am working in C# .NET with Visual Studio. Nothing requires this development environment; it's just one that I am comfortable using, and it should have the capabilities to do the job. I could be convinced to switch environments if this is a blocking impediment.
My biggest problem is that Google is changing their APIs faster than they are documenting the changes. I have found examples for downloading a (partial) file using HTTP and for reading directories to get file IDs, but when I try to run them they just don't work. If they worked, I'd have the building blocks for putting together a disk interface.
For example:
Installing the Google Drive API and client library was a challenge. The instructions page https://developers.google.com/drive/quickstart-cs and its subsequent links to the library were clearly written before the latest V2 authentication library was released this month (October 2013). The C# code in the quick-start example produces all kinds of warnings about deprecated and obsolete functions that are going to go away with the next release, but there is no guidance (yet) on the new equivalents.
https://developers.google.com/drive/v2/reference/files/list shows how to list files on a drive. When I execute the Try It! at the bottom of the page without parameters, I get a list of all my files. When I try to limit it with a search query (the q string parameter on the page), I get a 500 Internal Server error. This is before I even try to use the program that should do this.
Once I have a file ID (or take several from running the above page without queries), the page developers.google.com/drive/v2/reference/files/get has a Try It! for retrieving a file. I always get a 401 error (invalid ID).
I am very tempted to put this project aside for a few weeks and see what Google comes up with in new documentation. Of course that won't please my boss. Alternatively, is there anyone else trying to work with the Google Drive API in a similar manner who is willing to share/collaborate?
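For concreteness, the partial-fetch building block I'm after boils down to a plain ranged HTTP GET against the file's downloadUrl (as returned by files.get). Here is roughly what I have in mind; the URL and access token are placeholders, not working values:

// Rough sketch of a ranged (partial) download; downloadUrl and the OAuth token are placeholders.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class PartialDriveFetch
{
    static async Task Main()
    {
        var downloadUrl = "https://...";  // the downloadUrl field from a files.get response
        var accessToken = Environment.GetEnvironmentVariable("DRIVE_ACCESS_TOKEN");

        using (var client = new HttpClient())
        {
            var request = new HttpRequestMessage(HttpMethod.Get, downloadUrl);
            request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

            // Ask for the first 64 KiB only; the server should answer 206 Partial Content.
            request.Headers.Range = new RangeHeaderValue(0, 65535);

            var response = await client.SendAsync(request);
            var bytes = await response.Content.ReadAsByteArrayAsync();
            Console.WriteLine((int)response.StatusCode + ": received " + bytes.Length + " bytes");
        }
    }
}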

ImageResizer for Azure using AzureReader2 plugin not resizing

EDIT
I got it to work; however, I had to add the RemoteReader plugin. When I remove the AzureReader2 plugin from my project it still works, which makes sense, but then what benefit is the AzureReader2 plugin giving me?
ORIGINAL QUESTION
I have done everything that has been outlined here (including the comments) but can't seem to figure out why I can't resize images on the fly with this plugin for ImageResizer.
This is what the relevant entry in my web.config looks like:
<add name="AzureReader2" prefix="~/img/" connectionString="DefaultEndpointsProtocol=https;AccountName=[Account];AccountKey=[key]" endpoint="http://<account>.blob.core.windows.net/" />
and I have set up my container to be called 'img'.
When I go to this URL to test it out:
https://<account>.blob.core.windows.net/img/image.jpg?width=50
The image shows up, but just at its regular size. I've also tried running this locally and on the live site, but I still get no resizing :(
The ImageResizer library lets you serve modified versions of images (resized, cropped, rotated, watermarked, etc.). AzureReader2 is a plugin that lets ImageResizer fetch the unmodified source images from Azure Blob storage (https://<account>.blob.core.windows.net) instead of from disk.
So the URL to use for obtaining a modified version of an image is the URL of your application (where the ImageResizer library is installed), not the Azure Blob URL (in your example, https://<account>.blob.core.windows.net/img/image.jpg?width=50).
EDIT
The AzureReader2 plugin allows ImageResizer to read images from Azure Blob storage the same way as if they were saved on disk. If your application is built so that all images come from blob storage, you can have two independent teams: one managing your images (and other media like CSS) and one managing your code. With that approach, the AzureReader2 plugin will be very handy.
I hope that helps.
After hours of playing around, I finally understand how it works. I didn't realize the prefix is what you tack onto the end of your site's URL, not the blob store URL. I ended up with:
http://<account>.azurewebsites.net/img/img/image.jpg?width=50
This worked, as opposed to what I originally tried:
https://<account>.blob.core.windows.net/img/image.jpg?width=50
For anybody looking at this: the prefix is what's tacked onto the URL of the actual site, not the blob store!

How to run external executables from an Appharbor application (HTML to PDF generation)?

I have a requirement to produce PDFs for one of my .NET web applications, currently hosted on AppHarbor.
Traditionally, I would simply install LaTeX on the machine and create PDFs on the fly with pdflatex. This requirement is to display sections in HTML to end users but also offer a downloadable PDF, so it's slightly different.
I have found several (free) external HTML-to-PDF converters which may be applicable in this instance. However, I haven't found any libraries allowing me to do this purely programmatically.
What advice would you give if I plan to continue using Appharbor?
Should I set up a separate EC2 (or similar) instance to run such an application from? Or is there a better alternative?
I'd recommend using something like DocRaptor. Note that you can probably continue with your current scheme if you place the relevant pdflatex executable alongside the code you push to AppHarbor (provided it doesn't require the entire LaTeX runtime). AppHarbor will also be introducing background workers, which might be a good fit for this sort of work.
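If you do go the bundled-executable route, the web app just shells out to whatever converter you ship with it. A minimal sketch, assuming a hypothetical wkhtmltopdf.exe (or pdflatex) deployed alongside the application, with placeholder paths and arguments:

// Hedged sketch: invoking a converter executable that was deployed with the application.
// The executable name, arguments, and paths are placeholders.
using System;
using System.Diagnostics;
using System.IO;

class PdfShell
{
    public static byte[] HtmlToPdf(string htmlPath)
    {
        var exe = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "wkhtmltopdf.exe");
        var outputPath = Path.GetTempFileName();

        var psi = new ProcessStartInfo
        {
            FileName = exe,
            Arguments = "\"" + htmlPath + "\" \"" + outputPath + "\"",
            UseShellExecute = false,
            CreateNoWindow = true
        };

        using (var process = Process.Start(psi))
        {
            process.WaitForExit();
            if (process.ExitCode != 0)
                throw new InvalidOperationException("Converter exited with code " + process.ExitCode);
        }

        return File.ReadAllBytes(outputPath);
    }
}

Writing only to a temp path is the safest bet given the limited file-system access you typically get on hosted platforms.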
Note that if you're trying to use Rotativa, or wkhtmltopdf with routes obtained from HttpContext, you'll need to use this workaround:
http://support.appharbor.com/kb/getting-started/workaround-for-generating-absolute-urls-without-port-number
or install the Premotion fix from NuGet:
https://github.com/trilobyte/Premotion-AspNet-AppHarbor-Integration

Sharepoint + Javascript, moving/copying a list item to another folder

I'm looking into automating a process where certain list items (XML files) in one of my document libraries are analyzed for certain data within the XML, then moved to a particular folder within the doc lib based on which type of data is found.
It's simple enough to set up some JavaScript to perform the analysis, but I'm stumped on how to transfer the document/list item to another folder. I am currently performing the analysis by putting my .js file in the same web folder as the list items that require analysis and executing the JS from there. The destination folders are also in this web folder.
Is there any way to use JavaScript to move a document within a web folder to another folder? Note that when I say folder, I am talking about a folder that was created in the document library.
If this needs some clarity, feel free to ask. I should note that I am using JS to do this because it would be an immediate solution, as opposed to writing SharePoint object model code, which would need to go through a long and painful deployment process.
You've not detailed which version of SharePoint you're using.
If you are using SharePoint 2010, then this may be possible using the Client Object Model.
If you're using SharePoint 2007, then this may be possible using the SPServices project, which allows you to use SharePoint's web services via JavaScript/jQuery.
But I think you're on the wrong track here with JavaScript: this sounds like a scheduled-task type of operation, so I think you would be better off looking at a WinForms/command-line based solution than a JavaScript hack.
If deployment is such a pain, then you may be able to use the SharePoint web services (2003/2007 and 2010), so you don't have to deploy this on the SharePoint server itself; it can be run on any other machine.
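For what it's worth, a minimal sketch of that command-line route against SharePoint 2010, using the managed Client Object Model (the site URL and file paths are placeholders):

// Hedged sketch: moving a document to another folder in the same library with the
// SharePoint 2010 managed Client Object Model. Site URL and paths are placeholders.
using Microsoft.SharePoint.Client;

class MoveDocument
{
    static void Main()
    {
        using (var context = new ClientContext("http://sharepoint.example.com/sites/team"))
        {
            var file = context.Web.GetFileByServerRelativeUrl("/sites/team/DocLib/incoming/data.xml");

            // Move the file into a folder that already exists in the document library.
            file.MoveTo("/sites/team/DocLib/processed/data.xml", MoveOperations.Overwrite);
            context.ExecuteQuery();
        }
    }
}

If you do stay with JavaScript on 2010, the ECMAScript client object model exposes the same moveTo operation on SP.File.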
