We're just getting started with Azure Storage.
In our scenario we upload to private blobs that we later need to access directly from our client app, e.g. images.
Is there a way to address private blobs in Azure Storage with a URL containing the access key?
Sifting through the MS docs, all I could find so far is simple URL access via the blob URI, e.g. as given by the URI property of the CloudBlockBlob instance when listing blobs via the .NET API.
Naturally, accessing this from a web browser fails because the blob is not public.
However, can we qualify the URL to also include the access key in order to allow authorized clients to access the blob?
You can generate an SAS URL and token for the private blob. Here's the process for generating one manually in the Azure portal, to test the concept. It works even if your storage container is private, as it allows temporary, time-limited access to the file using a URL that contains a token in its query string.
Click on your file within the storage container, select the 'Generate SAS' tab, and in the right pane select the permissions and expiry you need, then click 'Generate SAS token and URL'.
This will generate a token, and a URL that includes the token in its query string.
You can test downloading the URL as a file by using curl. Use the full SAS URL (the one that includes the token and other parameters in the query string), then do this (important: the URL must be in double quotes):
curl "<YOUR_URL>" --output myFileName.txt
Tip - this is also a good method for making files available to an Azure VM. If you need to install a file directly on the VM for any reason (I needed to do this to install an SSL certificate), you can generate the URL and then use curl to download the file on the VM itself, e.g. connect to the VM first with Bastion or SSH, then use curl to download the file somewhere.
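If you'd rather generate the SAS URL from code than from the portal, here is a minimal sketch, assuming the Azure.Storage.Blobs / Azure.Storage.Sas SDK packages and placeholder account, container, and blob names:

using System;
using Azure.Storage;
using Azure.Storage.Sas;

class SasExample
{
    static void Main()
    {
        // The account key must stay server-side; never ship it to clients.
        var credential = new StorageSharedKeyCredential(
            "mystorageaccount", "<ACCOUNT_KEY>");    // placeholders

        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = "sample-container",  // placeholder names
            BlobName = "logfile.txt",
            Resource = "b",                          // "b" = blob
            ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
        };
        sasBuilder.SetPermissions(BlobSasPermissions.Read);

        // Append the signed token to the blob URI to get a shareable URL.
        var sasUri = new UriBuilder(
            "https://mystorageaccount.blob.core.windows.net/sample-container/logfile.txt")
        {
            Query = sasBuilder.ToSasQueryParameters(credential).ToString()
        };
        Console.WriteLine(sasUri.Uri);
    }
}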
This is the API for how you read blobs from storage:
https://learn.microsoft.com/en-us/rest/api/storageservices/get-blob
There is no URL parameter to pass the access key, only the Authorization header. So you could do the request manually and e.g. add the resulting data as a base64-encoded image, but I would advise against that if at all possible.
You must also be aware that by passing your access key to the client, you are effectively making your blob public anyway. In fact, you would be putting your data at more risk than with anonymous access, since the access key allows more operations than anonymous access does. This also holds true for your Objective-C app, even though the key is much more obfuscated there. SAS is the way to go: create a backend service that creates a defined set of SAS tokens for given resources. It is, however, much more effort than simply obfuscating the full access key somewhere.
See "Features available to anonymous users":
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-manage-access-to-resources
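As a rough illustration of that backend-service suggestion, here is a hedged sketch of an ASP.NET Core minimal API endpoint (hypothetical route and names) that hands out short-lived, read-only SAS URLs while the account key stays on the server:

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/sas/{blobName}", (string blobName) =>
{
    // The key never leaves the service -- placeholders below.
    var credential = new Azure.Storage.StorageSharedKeyCredential(
        "mystorageaccount", "<ACCOUNT_KEY>");

    var sas = new Azure.Storage.Sas.BlobSasBuilder
    {
        BlobContainerName = "images",
        BlobName = blobName,
        Resource = "b",
        ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(15)
    };
    sas.SetPermissions(Azure.Storage.Sas.BlobSasPermissions.Read);

    return $"https://mystorageaccount.blob.core.windows.net/images/{blobName}?"
         + sas.ToSasQueryParameters(credential);
});

app.Run();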
The Azure documentation says that storage blob containers can be created with public or private read access (see here). It says that public access can be set to 'Container' or 'Blob', and explains the differences in a table.
However, it isn't clear whether, having set the container with Blob-level public access:
container.CreateIfNotExists(Azure.Storage.Blobs.Models.PublicAccessType.Blob);
this implies that public read access is set on a blob-by-blob basis, and if so, how to set that up.
However, I am not sure this is true. I have seen various other posts about copying blobs around between different public/private containers, which somewhat backs up my thinking. The client creation doesn't appear to have a public/private setting:
BlobClient blobClient = container.GetBlobClient(filename);
... and using the above code, all blobs created have public access.
My problem is that I need to allow users to change the state of uploaded images/videos to public or private. I am wondering if there is a less kludgy way than moving the blobs around between private and public containers?
Public access is a container level setting.
There are two options for public access: https://learn.microsoft.com/en-us/azure/storage/blobs/anonymous-read-access-configure?tabs=portal#set-the-public-access-level-for-a-container.
Public read access for blobs only (Blob): Anonymous requests can get blobs by their full URL (caller must know the container and blob name)
Public read access for container and its blobs (Container): Anonymous requests can get blob list for the container (caller must only know container name)
So I would say that yes, you either have to move them between containers or handle the authorization on your side.
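For illustration, here is a small sketch (Azure.Storage.Blobs SDK, placeholder connection string) that reads and changes the container-level setting; note there is no per-blob equivalent:

using System;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class AccessLevelDemo
{
    static void Main()
    {
        var container = new BlobContainerClient(
            "<CONNECTION_STRING>", "sample-container");  // placeholders

        // PublicAccess is BlobContainer, Blob, or None -- always per container.
        BlobContainerProperties props = container.GetProperties().Value;
        Console.WriteLine($"Current level: {props.PublicAccess}");

        // Flip the whole container to private; every blob in it follows.
        container.SetAccessPolicy(PublicAccessType.None);
    }
}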
You're right in your assumptions: the access level is defined on the container.
To work around your issue, I would suggest granting access to all blobs using Shared Access Signatures. That way your application logic can control access, but all downloads still happen directly from blob storage.
The correct way to do this would be to proxy all requests via your application logic before redirecting the user to a blob URL that includes the shared access signature. That way you can revoke a link later.
An example flow would then look like this:
User accesses https://example.com/images/myimage.png
Application logic determines if "images/myimage.png" should allow anonymous access or if the user should be redirected to login
If access is allowed, the application finds the correct blob in blob storage and generates an SAS signature
The user is then redirected to "https://example.blob.core.windows.net/images/myimage.png?sastokenstuffhere"
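Here is a hedged sketch of steps 2-4 as an ASP.NET Core MVC action (hypothetical controller, route, and helper; the helper just follows the usual BlobSasBuilder pattern with placeholder account details):

using System;
using Azure.Storage;
using Azure.Storage.Sas;
using Microsoft.AspNetCore.Mvc;

public class ImagesController : Controller
{
    [HttpGet("images/{name}")]
    public IActionResult GetImage(string name)
    {
        // Step 2: your own rule decides who may see this blob.
        if (User.Identity?.IsAuthenticated != true)
            return Challenge();  // send the user to login

        // Steps 3-4: mint a short-lived SAS and redirect to blob storage.
        return Redirect(BuildReadOnlySasUrl("images", name));
    }

    // Hypothetical helper -- placeholder account name and key.
    private static string BuildReadOnlySasUrl(string container, string blob)
    {
        var credential = new StorageSharedKeyCredential("example", "<ACCOUNT_KEY>");
        var sas = new BlobSasBuilder
        {
            BlobContainerName = container,
            BlobName = blob,
            Resource = "b",
            ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(15)
        };
        sas.SetPermissions(BlobSasPermissions.Read);
        return $"https://example.blob.core.windows.net/{container}/{blob}?"
             + sas.ToSasQueryParameters(credential);
    }
}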
I'm using Azure Blob Storage to allow users to upload files from a web app.
I've got them uploading into a container, but I'm not sure what would be best to save on the web app's database since there are multiple options.
There is a GUID for the file, but also a URL.
The URL can be used to get to the file directly, but is there a risk that it could change?
If I store the file GUID I can use that to get the other details of the file using an API, but of course that's an extra step compared to the URL.
I'm wondering what best practices are. Do you just store the URL and be done with it? Do you store the GUID and make an extra call whenever a page loads to get the current URL? Do you store both? Is the URL something constant that can act just as well as a GUID?
Any suggestions are appreciated!
If you upload any file to Azure Blob Storage, it will give you a URL to access it, which consists of three parts:
{blob base url}/{Container Name}/{File Name}
e.g
https://storagesamples.blob.core.windows.net/sample-container/logfile.txt
So you can save the blob base URL and the container name in a config file, and only the file name part in the database,
and at run time you can build the whole URL and return it to the user.
That way, if you ever change the storage account or container, you just need to change it in the config file.
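A minimal sketch (hypothetical config keys) of rebuilding the blob URL at run time from the base URL and container name kept in configuration, plus the file name stored in the database:

using Microsoft.Extensions.Configuration;

public static class BlobUrlBuilder
{
    public static string Build(IConfiguration config, string fileName)
    {
        // e.g. "https://storagesamples.blob.core.windows.net" and "sample-container"
        string baseUrl = config["BlobBaseUrl"];          // hypothetical key
        string container = config["BlobContainerName"];  // hypothetical key
        return $"{baseUrl}/{container}/{fileName}";
    }
}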
How is it possible with Azure Blob Storage to upload a file into a private container, but make the file URL publicly accessible? (i.e. view the file when using the URL in a browser.)
Yes. You can generate a Shared Access Signature (SAS) for specific blobs in a private container in Azure Storage. Creating a SAS generates a unique URL to the file. You make the URL valid for a certain time period, you can allow multiple operations (READ, CREATE, WRITE, DELETE), and you can optionally whitelist IP addresses that may access the URL.
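As a sketch of those options (placeholder names; sign the builder and append it to the blob URI as in any SAS flow), the SDK's BlobSasBuilder supports combined permissions and an IP range:

using System;
using System.Net;
using Azure.Storage.Sas;

var sas = new BlobSasBuilder
{
    BlobContainerName = "private-container",   // placeholder names
    BlobName = "photo.png",
    Resource = "b",                            // "b" = blob
    ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(30),
    // Optional whitelist: only this address range may use the URL.
    IPRange = new SasIPRange(IPAddress.Parse("203.0.113.1"),
                             IPAddress.Parse("203.0.113.254"))
};
sas.SetPermissions(BlobSasPermissions.Read | BlobSasPermissions.Create |
                   BlobSasPermissions.Write | BlobSasPermissions.Delete);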
I've deployed a website to Azure and I want to programmatically access this path: "D:\home\site\app" from a C# desktop application, delete all the files there, and upload new ones programmatically.
I have searched and found many ways, but they are all for Azure Storage or use the Kudu console or FTP, while what I really want is to access the local storage where the website is deployed, and make some edits to files, programmatically.
Sure thing, the Site Control Manager (Kudu) has an API for that, the VFS API:
https://github.com/projectkudu/kudu/wiki/REST-API#vfs
You can use either of these for authentication:
A Bearer token that you obtain from the STS (reference implementation in ARMClient)
Site-level credentials (the long ugly ones under your Web App → Properties)
Git/FTP credentials (subscription level)
Sample usage (using site-level credentials):
# Line breaks brutally used to improve readability
# /api/vfs/ is d:\home
# Append path as necessary, i.e. /api/vfs/site/app
$ curl -k https://$are-we-eating-too-much-garlic-as-a-people:6sujXXX
XXXXXXq7Zc@are-we-eating-too-much-garlic-as-a-people.scm.azurewebsites.net
/api/vfs/site/wwwroot/ill-grab-this-file-over-vfs-api.txt
There, I did it.
I'm assuming here that you want to do all that from the outside world, since you don't clearly state otherwise.
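Since the question asks for C#, here is a hedged sketch of the same VFS calls using HttpClient (placeholder site name and publishing credentials): DELETE removes a file, PUT uploads one.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class VfsClient
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Site-level (publishing) credentials via Basic auth -- placeholders.
        var token = Convert.ToBase64String(
            Encoding.UTF8.GetBytes("$my-site:publishing-password"));
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", token);

        // Kudu's VFS API wants an If-Match header when overwriting or
        // deleting; "*" matches any version of the file.
        client.DefaultRequestHeaders.Add("If-Match", "*");

        // /api/vfs/ is d:\home, so this maps to d:\home\site\app\
        string baseUrl = "https://my-site.scm.azurewebsites.net/api/vfs/site/app/";

        // Delete an existing file, then upload a replacement.
        await client.DeleteAsync(baseUrl + "old-file.txt");
        await client.PutAsync(baseUrl + "new-file.txt",
            new StringContent("new file contents"));
    }
}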
Well, in my Azure code, my task was to save an Excel file and upload its contents to SQL Server.
I used this plain and simple approach to access the home site:
string fileToSave = string.Format("{0}\\{1}", HostingEnvironment.MapPath(@"~\Temp"), FileUpload.FileName);
if (!Directory.Exists(HostingEnvironment.MapPath(@"~\Temp")))
    Directory.CreateDirectory(HostingEnvironment.MapPath(@"~\Temp"));
FileUpload.PostedFile.SaveAs(fileToSave);
You could use something like this to delete and save a new file, or for other I/O operations.
We are creating a C# console app (beta version) for our clients, in which they just paste the public folder/file URL from Google Drive, OneDrive, Dropbox, etc. In the back end we need to retrieve the file and process it...
I just wanted to know how we can retrieve those cloud files without any prompts for authentication (as the given URL will be public, it should not ask for an ID or password).
Any help from you experts?
With OneDrive, you can use the "shares" API to retrieve a sharing link without authentication.
You just need to encode the sharing URL correctly and then pass that to the API endpoint. The details of the encoding are in the shares API documentation, but it's just URL-safe base64 encoding.
GET https://api.onedrive.com/shares/{encoded_sharing_url}/root/content
The API will return the content of the file.
Edit: I got the URL slightly wrong. The /shares/ API returns a "sharing root" which looks somewhat like a drive object. To access the actual shared file, you need to add /root before the /content part of the path. I've updated this above.
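Putting the answer (including the /root fix from the edit) together, here is a hedged C# sketch (HttpClient, hypothetical sharing link) of downloading a publicly shared OneDrive file: base64url-encode the sharing URL, prefix it with "u!", and GET /root/content.

using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class OneDriveShares
{
    static async Task Main()
    {
        string sharingUrl = "https://1drv.ms/...";  // hypothetical public link

        // URL-safe base64: strip '=' padding, swap '/' -> '_' and '+' -> '-',
        // then prefix "u!" as the shares API expects.
        string encoded = "u!" + Convert.ToBase64String(Encoding.UTF8.GetBytes(sharingUrl))
            .TrimEnd('=').Replace('/', '_').Replace('+', '-');

        using var client = new HttpClient();
        byte[] bytes = await client.GetByteArrayAsync(
            $"https://api.onedrive.com/shares/{encoded}/root/content");
        await File.WriteAllBytesAsync("downloaded-file", bytes);
    }
}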