How to secure azure blobs - c#

I am using the Windows Azure Blob storage service. I want to protect my blobs from public access (except for my users). For this I used a Shared Access Signature (SAS) and it works fine. But my issue is that I have a container whose blobs follow a directory structure, like:
https://xxxxxxx.blob.core.windows.net/myContainer/directory1/blob1
https://xxxxxxx.blob.core.windows.net/myContainer/directory1/blob2
https://xxxxxxx.blob.core.windows.net/myContainer/directory1/blob3
https://xxxxxxx.blob.core.windows.net/myContainer/directory1/blob4
https://xxxxxxx.blob.core.windows.net/myContainer/directory1/blob5
https://xxxxxxx.blob.core.windows.net/myContainer/directory2/blob1
https://xxxxxxx.blob.core.windows.net/myContainer/directory2/blob2
https://xxxxxxx.blob.core.windows.net/myContainer/directory2/blob3
https://xxxxxxx.blob.core.windows.net/myContainer/directory2/blob4
https://xxxxxxx.blob.core.windows.net/myContainer/directory2/blob5
and so on...
Now my requirement is that I want to give public access to all blobs in myContainer under directory2, but not to the blobs under directory1; I want to keep all the blobs under directory1 private. How can I achieve this?

There are no directories in Azure blob storage. Those "directories" you have now are just blobs with a / embedded in the name. Since permissions are only at the container level, you'll have to create separate containers.

You can create two containers: one private container, accessed via a container-level SAS, and one public-access container:
https://xxxxxxx.blob.core.windows.net/private/blob1
https://xxxxxxx.blob.core.windows.net/private/blob2
https://xxxxxxx.blob.core.windows.net/private/blob3
https://xxxxxxx.blob.core.windows.net/private/blob4
https://xxxxxxx.blob.core.windows.net/private/blob5
https://xxxxxxx.blob.core.windows.net/public/blob1
https://xxxxxxx.blob.core.windows.net/public/blob2
https://xxxxxxx.blob.core.windows.net/public/blob3
https://xxxxxxx.blob.core.windows.net/public/blob4
https://xxxxxxx.blob.core.windows.net/public/blob5

You can only set permissions at the container level, so you're left with two options.
Preferred option) Create an additional public container and move your blobs there.
Worse option) Create a SAS link with a far-future expiry for each of your files.
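As a sketch of the preferred option, here is roughly how the split could look with the current Azure.Storage.Blobs SDK (the connection string, container names, and blob name are placeholders):

```csharp
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

var service = new BlobServiceClient("<connection-string>");

// Private container: no anonymous access (the default).
BlobContainerClient priv = service.GetBlobContainerClient("private");
priv.CreateIfNotExists(PublicAccessType.None);

// Public container: anonymous read access to individual blobs.
BlobContainerClient pub = service.GetBlobContainerClient("public");
pub.CreateIfNotExists(PublicAccessType.Blob);

// "Move" a blob by copying it across containers, then deleting the source.
// Within the same storage account the copy is authorized by the account
// credentials; across accounts you would append a read SAS to source.Uri.
BlobClient source = priv.GetBlobClient("blob1");
BlobClient target = pub.GetBlobClient("blob1");
target.StartCopyFromUri(source.Uri).WaitForCompletion();
source.DeleteIfExists();
```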


Can you set public / private read access for azure blobs in the same container?

The Azure documentation says that storage blob containers can be made with public or private read access (see here). This says that public access can be set to 'Container' or 'Blob', and explains the differences in the table.
However, having set the container with Blob-level public access:
container.CreateIfNotExists(Azure.Storage.Blobs.Models.PublicAccessType.Blob);
it isn't clear whether this implies that public read access is set on a blob-by-blob basis, and if so, how to set it up.
However, I am not sure this is true. I have seen various other posts about copying blobs between different public/private containers, which somewhat backs up my thinking. The client creation doesn't appear to have a public/private setting:
BlobClient blobClient = container.GetBlobClient(filename);
... and using the above code, all blobs created end up publicly accessible.
My problem is that I need to allow users to change the state of uploaded images/videos to public or private. I am wondering if there is a less kludgy way than moving the blobs around between private and public containers?
Public access is a container level setting.
There are two options for public access: https://learn.microsoft.com/en-us/azure/storage/blobs/anonymous-read-access-configure?tabs=portal#set-the-public-access-level-for-a-container.
Public read access for blobs only (Blob): Anonymous requests can get blobs by their full URL (caller must know the container and blob name)
Public read access for container and its blobs (Container): Anonymous requests can get blob list for the container (caller must only know container name)
So I would say that yes, you either have to move them between containers or handle the authorization on your side.
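That container-level setting can also be flipped from code; a minimal sketch with the Azure.Storage.Blobs SDK (connection string and container name are placeholders):

```csharp
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

var container = new BlobContainerClient("<connection-string>", "images");

// Blob-level anonymous read: callers need the full blob URL.
container.SetAccessPolicy(PublicAccessType.Blob);

// Back to private: anonymous requests are rejected again.
container.SetAccessPolicy(PublicAccessType.None);
```

Note this applies to every blob in the container at once, which is exactly why per-blob toggling ends up requiring separate containers or SAS.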
You're right in your assumptions: the access level is defined on the container.
To work around your issue, I would suggest granting access to all blobs using Shared Access Signatures. That way your application logic can control access, but all downloads still happen directly from blob storage.
The correct way to do this would be to proxy all requests via your application logic before redirecting the user to a blob URL that includes the shared access signature. That way you can revoke a link later.
An example flow would then look like:
User accesses https://example.com/images/myimage.png
Application logic determines whether "images/myimage.png" allows anonymous access or whether the user should be redirected to log in
If access is allowed, the application finds the correct blob in blob storage and generates a SAS signature
The user is then redirected to "https://example.blob.core.windows.net/images/myimage.png?sastokenstuffhere"
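The SAS-generating step of the flow above could be sketched like this with the Azure.Storage.Blobs SDK (connection string and names are placeholders; GenerateSasUri only works when the client was constructed with a shared key credential, e.g. from a connection string):

```csharp
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

BlobClient blob = new BlobContainerClient("<connection-string>", "images")
    .GetBlobClient("myimage.png");

// Short-lived, read-only link the application hands out after its own auth check.
Uri sasUri = blob.GenerateSasUri(BlobSasPermissions.Read,
                                 DateTimeOffset.UtcNow.AddMinutes(15));
// Redirect the user to sasUri; issuing a fresh token per request keeps links short-lived.
```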

Grant a docker container access to azure storage account blob container

I'd like to dynamically grant a given Docker container (or Docker image, doesn't matter) access to a specific azure storage account blob container.
Neither the Docker container nor the Azure blob container is going to be the same every time, i.e.
Grant DockerContainer1 access to AzureContainerX
Grant DockerContainer2 access to AzureContainerY
...
Is this even possible? Can I use a volume in the Dockerfile? Can I do this from my .NET Core web API app?
If so, how?
I believe you need to set up a shared named volume on the host, which is then filled with data from AzureContainerX at runtime. Then all you need to do is share this volume into DockerContainer2 and access the data from there.
Volumes are not used in the Dockerfile; instead they are set when starting the container with command-line parameters or by using a compose file.
Refer to the Docker documentation for how to use volumes. You can set volumes as read-only.
However, I don't know exactly how you would like to set this up dynamically. Each AzureContainerY container needs configuration so that data is put into the correct place, and DockerContainer1 must be configured to look for data in the correct paths. This might be achieved dynamically by setting ENV values via ARGs, but it really depends on the context here and quickly gets very complicated.

Blob storage, c# private container with some files public

Is it possible with Azure Blob Storage to upload a file into a private container, but make the file's URL publicly accessible? (I.e. view the file when using the URL in a browser.)
Yes. You can generate a Shared Access Signature (SAS) for specific blobs in a private container in Azure storage. Creating a SAS generates a unique URL to the file. You make the URL valid for a certain time period, you can allow multiple operations (READ, CREATE, WRITE, DELETE), and you can optionally whitelist IP addresses that may use the URL.
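A minimal sketch of building such a SAS in C# with the Azure.Storage.Sas types (account name, key, container, blob, and the IP address are all placeholders; the IP allow-list is optional):

```csharp
using System;
using System.Net;
using Azure.Storage;
using Azure.Storage.Sas;

var cred = new StorageSharedKeyCredential("<account>", "<account-key>");

var sas = new BlobSasBuilder
{
    BlobContainerName = "private-container",
    BlobName = "report.pdf",
    Resource = "b",                                  // "b" = an individual blob
    ExpiresOn = DateTimeOffset.UtcNow.AddHours(1),   // time-limited link
    IPRange = new SasIPRange(IPAddress.Parse("203.0.113.10")), // optional allow-list
};
sas.SetPermissions(BlobSasPermissions.Read);         // read only; add Create/Write/Delete as needed

// The signed query string is appended to the normal blob URL.
string query = sas.ToSasQueryParameters(cred).ToString();
string url = $"https://<account>.blob.core.windows.net/private-container/report.pdf?{query}";
```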

URL to access private blob in Azure Storage

We're just getting started with Azure Storage.
In our scenario we upload to private blobs that we later need to access directly from our client app, e.g. images.
Is there a way to address private blobs in Azure Storage with a URL containing the access key?
Sifting through the MS docs, all I could find so far is simple URL access via the blob URI, e.g. as given by the URI property of the CloudBlockBlob instance when listing blobs via the .NET API.
Naturally, accessing this from a web browser fails because the blob is not public.
However, can we qualify the URL to also include the access key in order to allow authorized clients to access the blob?
You can generate a SAS URL and token for the private blob. Here's the process for generating this manually in the Azure portal, to test the concept. It works even if your storage container is private, as it allows temporary, time-limited access to the file using a URL that contains a token in its query string.
Click on your file within the storage container, select the 'Generate SAS' tab, and in the right pane choose the signing options (permissions, start and expiry time).
This will generate a token, and a URL that includes the token.
You can test downloading the URL as a file by using curl. Use the URL that includes the full token and other parameters in its query string, and note that the URL must be wrapped in double quotes:
curl "<YOUR_URL>" --output myFileName.txt
Tip: this is also a good method for making files available to an Azure VM. If you need to install a file directly on the VM for any reason (I needed to do this to install an SSL certificate), you can generate the URL and then use curl to download the file on the VM itself, e.g. connect to the VM first with Bastion or SSH, then curl the file down somewhere.
This is the API for how you read blobs from storage:
https://learn.microsoft.com/en-us/rest/api/storageservices/get-blob
There is no URL parameter for passing the access key, only the Authorization header value. So you could make the request manually and e.g. embed the resulting data as a base64-encoded image. I would advise against it if at all possible.
You must also be aware that by passing your access key to the client, you are effectively making your blob public anyway. You would be putting your data at more risk than with anonymous access, since the access key allows more operations than anonymous access does. This also holds true for your Objective-C app, even though it's much more obfuscated there. SAS is the way to go: create a backend service that issues a defined set of SAS tokens for given resources. It is, however, much more effort than simply obfuscating the full access key somewhere.
See "Features available to anonymous users":
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-manage-access-to-resources

.NET Azure Blob Storage: Get Root-Level Directories w/o Listing All Blobs

I have an Azure cloud blob storage account and I need to enumerate its contents. The account has a large amount of data, and using ListBlobs to enumerate all of it takes a long time to complete.
For both cloud containers and directories, I want the ability to enumerate only root-level items. For a container, I assume this will enumerate root-level blobs:
cloudBlobContainer.ListBlobs(
    String.Empty,
    false,
    BlobListingDetails.None,
    null,
    null);
Is there any reasonable way to get root-level directories without listing all blobs? The only way I can think to do it is absurd: calling ListBlobs with every possible prefix a blob could have.
Zachary, unfortunately there is no such thing as a "directory" in Azure Blob Storage. The object hierarchy is as follows:
Storage Account (Management Plane)
Storage Container [0..n] (Data Plane)
Blobs [0..n] (Data Plane)
When you see additional forward slashes in the blob names, it is only a "virtual" directory, not a separate directory entity.
https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-blobs/
You can achieve a more granular listing of a directory's contents by using the .ListBlobs().OfType<your_chosen_blob_type>() call. One blob type is CloudBlobDirectory, for example. See this answer: https://stackoverflow.com/a/14440507/9654964.
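A sketch of that root-level listing with the legacy WindowsAzure.Storage SDK the question is using (cloudBlobContainer is assumed to already exist):

```csharp
using System;
using System.Linq;
using Microsoft.WindowsAzure.Storage.Blob;

// With useFlatBlobListing: false the service groups names at the "/"
// delimiter, so top-level virtual directories come back as
// CloudBlobDirectory items without enumerating the blobs beneath them.
var rootDirs = cloudBlobContainer
    .ListBlobs(prefix: null, useFlatBlobListing: false)
    .OfType<CloudBlobDirectory>();

foreach (CloudBlobDirectory dir in rootDirs)
    Console.WriteLine(dir.Prefix);   // e.g. "directory1/"
```

In the newer Azure.Storage.Blobs SDK the equivalent is BlobContainerClient.GetBlobsByHierarchy(delimiter: "/"), which returns BlobHierarchyItem results whose Prefix is set for virtual directories.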
