I'd like to dynamically grant a given Docker container (or Docker image, doesn't matter) access to a specific Azure Storage account blob container.
Neither the Docker container nor the Azure blob container will be the same every time, i.e.
Grant DockerContainer1 access to AzureContainerX
Grant DockerContainer2 access to AzureContainerY
...
Is this even possible? Can I use a volume in the Dockerfile? Can I do this from my .NET Core web API app?
If so, how?
I believe you need to set up a shared named volume on the host, which is then filled with data from AzureContainerX at runtime. Then all you need to do is share this volume into DockerContainer1 and access the data from there.
Volume mounts are not defined in the Dockerfile; instead they are specified when starting the container, either with docker run parameters or in a Compose file.
Refer to the Docker documentation for how to use volumes. Volumes can also be mounted read-only.
However, I don't know how exactly you would like to set this up dynamically. Each Azure container needs configuration so that its data ends up in the correct place, and each Docker container needs to be configured to look for data at the correct paths. This might be achieved dynamically by setting ENV values via ARGs (see the sketch below), but it really depends on the context here and quickly gets very complicated.
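For illustration, here is a minimal sketch of that pattern with the Docker CLI; the volume name, image names, account name and SAS are all made up and only show the mechanics (populate a named volume once, then mount it read-only and tell the consuming container where to look via an environment variable):
# create the named volume and fill it, e.g. with azcopy running in a throwaway container
docker volume create shared-data
docker run --rm -v shared-data:/data some-azcopy-image azcopy copy "https://<account>.blob.core.windows.net/AzureContainerX?<sas>" /data --recursive
# mount the same volume read-only into the consuming container and pass the path as an ENV value
docker run --rm -v shared-data:/data:ro -e DATA_PATH=/data dockercontainer1-image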
The Azure documentation says that storage blob containers can be made with public or private read access (see here). This says that public access can be set to 'Container' or 'Blob', and explains the differences in the table.
However, it isn't clear if, having set the container with Blob level public access:
container.CreateIfNotExists(Azure.Storage.Blobs.Models.PublicAccessType.Blob);
... this implies that public read access is set on a blob-by-blob basis, and if so, how to set it up.
However, I am not sure this is true. I have seen various other posts about copying blobs around between different public/private containers, which somewhat backs up my thinking. The client creation doesn't appear to have a public/private setting:
BlobClient blobClient = container.GetBlobClient(filename);
... and using the above code, all blobs created have public access.
My problem is that I need to allow users to change the state of uploaded images/videos to public or private. I am wondering if there is a less kludgy way than moving the blobs around between private and public containers?
Public access is a container level setting.
There are two options for public access: https://learn.microsoft.com/en-us/azure/storage/blobs/anonymous-read-access-configure?tabs=portal#set-the-public-access-level-for-a-container.
Public read access for blobs only (Blob): Anonymous requests can get blobs by their full URL (caller must know the container and blob name)
Public read access for container and its blobs (Container): Anonymous requests can get blob list for the container (caller must only know container name)
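For reference, a minimal sketch of setting the container-level access programmatically with the Azure.Storage.Blobs SDK the question is already using (the connection string and container name are placeholders):
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

var container = new BlobServiceClient("<connection-string>").GetBlobContainerClient("images");
container.SetAccessPolicy(PublicAccessType.Blob);              // anonymous read for blobs only
// container.SetAccessPolicy(PublicAccessType.BlobContainer);  // blobs plus container listing
// container.SetAccessPolicy(PublicAccessType.None);           // private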
So I would say that yes, you either have to move them between containers or handle the authorization on your side.
You're right in your assumptions: the access level is defined on the container.
To work around your issue, I would suggest granting access to all blobs using Shared Access Signatures. That way your application logic can control access, but all downloads still happen directly from blob storage.
The correct way to do this would be to proxy all requests through your application logic before redirecting the user to a blob URL that includes the shared access signature. That way you can revoke a link later.
An example flow would then look like:
User accesses https://example.com/images/myimage.png
Application logic determines if "images/myimage.png" should allow anonymous access or if the user should be redirected to login
If access is allowed, the application finds the correct blob in blob storage and generates a SAS signature
The user is then redirected to "https://example.blob.core.windows.net/images/myimage.png?sastokenstuffhere"
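For step 3, a minimal sketch with the Azure.Storage.Blobs SDK; it assumes the BlobClient was created with a shared key credential (e.g. from a connection string), and the expiry is illustrative:
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

static Uri GetReadOnlySasUri(BlobClient blobClient)
{
    var sasBuilder = new BlobSasBuilder
    {
        BlobContainerName = blobClient.BlobContainerName,
        BlobName = blobClient.Name,
        Resource = "b",                                   // "b" = SAS scoped to a single blob
        ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(15)  // short-lived link
    };
    sasBuilder.SetPermissions(BlobSasPermissions.Read);
    return blobClient.GenerateSasUri(sasBuilder);
}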
I'm trying to use Realm Cloud in an Azure Function but it keeps giving me an error:
make_dir() failed: Permission denied Path: /realm-object-server/
Is there a way to configure Azure Functions to have permissions to create files? I'm new to Azure Functions, this is my first one so I'm not really sure of all the particulars.
I have found one solution to this. Inside an Azure Function, you can only create files or folders inside the temp folder. So if you use the following sync configuration, it should work. It worked for me.
var configuration = new FullSyncConfiguration(new Uri("/~/<your-realm-name>", UriKind.Relative), _realmUser, Path.Combine(Path.GetTempPath(), "realm-object-server"));
So basically, here you are passing the folder in which to store the Realm locally. If you don't pass the folder, it will try to store it in a default location, where you get the access error.
I am not familiar with Realm, but Functions has permission to interact with the file system by default. See these links for information on the App Service file system (this applies to Functions too):
https://learn.microsoft.com/en-us/azure/app-service/operating-system-functionality#file-access
https://github.com/projectkudu/kudu/wiki/Understanding-the-Azure-App-Service-file-system
If the function is deployed using run from package, then wwwroot is read-only. But since the path in the error message doesn't point to wwwroot, this is probably not the issue.
My best guess is that the failing code is trying to write to an inappropriate location. The information in the links above should help resolve that. Maybe check whether Realm has a config setting that lets you specify the location in which it should create "realm-object-server".
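As an illustration only (the folder name is just an example), resolving a writable scratch path under the temp directory is what avoids the permission error, because temp stays writable even when wwwroot is read-only:
using System.IO;

string scratch = Path.Combine(Path.GetTempPath(), "realm-object-server");
Directory.CreateDirectory(scratch);   // the temp folder is writable even with run-from-package deployments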
You can use
SyncConfigurationBase.Initialize(UserPersistenceMode.NotEncrypted, basePath: Path.GetTempPath());
We're just getting started with Azure Storage.
In our scenario we upload to private blobs that we later need to access directly from our client app, e.g. images.
Is there a way to address private blobs in Azure Storage with a URL containing the access key?
Sifting through the MS docs all I could find so far is simple URL access via the blob URI, e.g. as given by the URI property of the CloudBlockBlob instance when listing blobs via the .net API.
Naturally accessing this from a web browser fails due to the blob not being public.
However, can we qualify the URL to also include the access key in order to allow authorized clients to access the blob..?
You can generate a SAS URL and token for the private blob. Here's the process for generating this manually in the Azure portal, to test the concept. It will work even if your storage container is private, as it allows temporary, time-limited access to the file using a URL that contains a token in its query string.
Click on your file within the storage container, select the 'Generate SAS' tab, and in the right pane select 'Generate SAS token and URL'.
This will generate a token, and a URL that includes the token.
You can test downloading the URL as a file by using curl. Use the generated URL that includes the full token and other parameters in the query string, then do this (IMPORTANT: the URL must be in double quotes):
curl "<YOUR_URL>" --output myFileName.txt
Tip: this is also a good method for making files available to an Azure VM. If you need to install a file directly on the VM for any reason (I needed to do this to install an SSL certificate), you can generate the URL and then use curl to download the file on the VM itself, e.g. connect to the VM first with Bastion or SSH, then use curl to download the file somewhere.
This is the API for how you read blobs from storage:
https://learn.microsoft.com/en-us/rest/api/storageservices/get-blob
There is no URL parameter to pass the access key, only the Authorization header value. So you could do the request manually and, e.g., add the resulting data as a base64-encoded image. I would advise against it if at all possible.
You must also be aware that by passing your access key to the client, you are effectively making your blob public anyway. You would be putting your data at more risk than with anonymous access, since the access key allows more operations than anonymous access does. This also holds true for your Objective-C app, even though it's much more obfuscated there. SAS is the way to go here: create a backend service that creates a defined set of SAS tokens for given resources (see the sketch below). It is, however, much more effort than simply obfuscating the full access key somewhere.
See "Features available to anonymous users":
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-manage-access-to-resources
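To make the backend-service idea concrete, here is a minimal sketch of an ASP.NET Core minimal-API endpoint that redirects to a short-lived, read-only SAS link; the connection string, route and container name are placeholders, and authorization checks and DI wiring are omitted:
using System;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var app = WebApplication.CreateBuilder(args).Build();
var service = new BlobServiceClient("<connection-string>");

app.MapGet("/files/{name}", (string name) =>
{
    BlobClient blob = service.GetBlobContainerClient("files").GetBlobClient(name);
    // 10-minute read-only link; requires the client to be built from a shared key credential
    Uri sasUri = blob.GenerateSasUri(BlobSasPermissions.Read, DateTimeOffset.UtcNow.AddMinutes(10));
    return Results.Redirect(sasUri.ToString());
});

app.Run();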
Good morning,
I'm trying to implement Azure Blob Storage for the first time using their example code. However, my app is throwing a very broad 400 Bad Request error when calling UploadFromStream().
I have done a bunch of searching on this issue. Almost everything I have come across identifies naming conventions of the container or blob as the issue. This is NOT my issue; I'm using all lowercase, etc.
My code is no different from their example code:
The connection string:
<add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=xxxx;AccountKey=xxxxxx;EndpointSuffix=core.windows.net" />
And the code:
// Retrieve storage account from connection string
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
// Create the blob client
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
// Retrieve reference to a previously created container
CloudBlobContainer container = blobClient.GetContainerReference("mycontainer");
// Retrieve reference to a blob named "myblob"
CloudBlockBlob blob = container.GetBlockBlobReference("myblob");
// Create the container if it doesn't already exist
container.CreateIfNotExists();
// Create or overwrite the "myblob" blob with contents from a local file.
using (var fileStream = System.IO.File.OpenRead(@"D:\Files\logo.png"))
{
blob.UploadFromStream(fileStream);
}
Here are the exception details:
This is all I have to go on. The only other thing I can think of is that I'm running this in my development environment with HTTP, not HTTPS. Not sure if this might be an issue?
EDIT:
Additionally, when attempting to upload a file directly in the Azure portal to the container, I receive a
Validation error for TestAzureFileUpload.txt. Details: "The page blob
size must be aligned to a 512-byte boundary. The current file size is
56."
Could this be related to my issue? Am I missing some setting here?
I know I do not have enough to go on here for anyone to help me identify the exact issue, but I am hoping that someone can at least point me in the right direction to resolve this.
Any help would be appreciated
I use a Premium storage account to test the code and get the same "400 bad request" as yours.
From the exception details, you can see the "Block blobs are not supported" message.
Here is an image of my exception details
To solve your problem, I think you should know the difference between block blobs and page blobs.
Block blobs are comprised of blocks, each of which is identified by a block ID. You create or modify a block blob by writing a set of blocks and committing them by their block IDs. They are for discrete storage objects like jpg, txt, or log files, i.e. anything you'd typically view as a file in your local OS. They are supported by standard storage accounts only.
Page blobs are a collection of 512-byte pages optimized for random read and write operations, such as VHDs. To create a page blob, you initialize it and specify the maximum size it will grow to. Page blobs are designed for Azure Virtual Machine disks, and are supported by both standard and Premium storage accounts.
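As an illustration, a minimal sketch with the same classic SDK the question uses, contrasting the two blob types (container name, blob names, path and size are made up):
CloudStorageAccount account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudBlobContainer container = account.CreateCloudBlobClient().GetContainerReference("mycontainer");
container.CreateIfNotExists();

// Block blob: arbitrary file content; works on standard storage accounts
CloudBlockBlob blockBlob = container.GetBlockBlobReference("logo.png");
blockBlob.UploadFromFile(@"D:\Files\logo.png");

// Page blob: must be created with a size that is a multiple of 512 bytes
// (which is why the portal complained about the 56-byte text file)
CloudPageBlob pageBlob = container.GetPageBlobReference("disk.vhd");
pageBlob.Create(512 * 1024);   // 512 KB, aligned to a 512-byte boundary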
You are using Premium Storage, which is currently available only for storing data on disks used by Azure Virtual Machines.
So my suggestion is:
If you want your application to support streaming and random access scenarios, and to be able to access application data from anywhere, use block blobs with a standard account.
If you want to lift and shift applications that use native file system APIs to read and write data to persistent disks, or you want to store data that does not need to be accessed from outside the virtual machine to which the disk is attached, use page blobs.
Reference link:
Understanding Block Blobs, Append Blobs, and Page Blobs
I am using the Windows Azure Blob storage service. I want to protect my blobs from public access (except for my users). For this I used a Shared Access Signature (SAS), and it works fine. But my issue is that I have a container which contains blobs in a directory structure, like:
https://xxxxxxx.blob.core.windows.net/myContainer/directory1/blob1
https://xxxxxxx.blob.core.windows.net/myContainer/directory1/blob2
https://xxxxxxx.blob.core.windows.net/myContainer/directory1/blob3
https://xxxxxxx.blob.core.windows.net/myContainer/directory1/blob4
https://xxxxxxx.blob.core.windows.net/myContainer/directory1/blob5
https://xxxxxxx.blob.core.windows.net/myContainer/directory2/blob1
https://xxxxxxx.blob.core.windows.net/myContainer/directory2/blob2
https://xxxxxxx.blob.core.windows.net/myContainer/directory2/blob3
https://xxxxxxx.blob.core.windows.net/myContainer/directory2/blob4
https://xxxxxxx.blob.core.windows.net/myContainer/directory2/blob5
and so on...
Now my requirement is that I want to give public access to all blobs in myContainer under directory2, but not to the blobs under directory1; I want to keep all the blobs under directory1 private. How can I achieve this?
There are no directories in Azure blob storage. Those "directories" you have now are just blobs with a / embedded in the name. Since permissions are only at the container level, you'll have to create separate containers.
You can create two containers: one private container, accessed with a container-level SAS, and one public-access container.
https://xxxxxxx.blob.core.windows.net/private/blob1
https://xxxxxxx.blob.core.windows.net/private/blob2
https://xxxxxxx.blob.core.windows.net/private/blob3
https://xxxxxxx.blob.core.windows.net/private/blob4
https://xxxxxxx.blob.core.windows.net/private/blob5
https://xxxxxxx.blob.core.windows.net/public/blob1
https://xxxxxxx.blob.core.windows.net/public/blob2
https://xxxxxxx.blob.core.windows.net/public/blob3
https://xxxxxxx.blob.core.windows.net/public/blob4
https://xxxxxxx.blob.core.windows.net/public/blob5
You can only set permissions at the container level, so you're left with two options.
Preferred option) Create an additional public container and move your blobs (a sketch follows below).
Worse option) Create a seemingly endless valid SAS link for all of your files.
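For the preferred option, a minimal sketch with the Azure.Storage.Blobs SDK that copies everything under the directory2/ prefix into a separate public container; the connection string and container names are placeholders, and within the same storage account an authorized copy can read the private source (otherwise append a SAS to the source URI):
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

var service = new BlobServiceClient("<connection-string>");
BlobContainerClient source = service.GetBlobContainerClient("mycontainer");
BlobContainerClient target = service.GetBlobContainerClient("public");
target.CreateIfNotExists(PublicAccessType.Blob);   // anonymous read for blobs only

foreach (BlobItem item in source.GetBlobs(prefix: "directory2/"))
{
    // server-side copy; keep or strip the "directory2/" prefix in the target name as needed
    BlobClient sourceBlob = source.GetBlobClient(item.Name);
    target.GetBlobClient(item.Name).StartCopyFromUri(sourceBlob.Uri);
}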