I am trying to copy a file from one storage account to another using the StartCopy method. Check the code below.
CloudStorageAccount sourceStorageAccount = CloudStorageAccount.Parse(@"source storage account connection string");
CloudStorageAccount destStorageAccount = CloudStorageAccount.Parse(@"destination storage account connection string");
CloudBlobClient sourceBlobClient = sourceStorageAccount.CreateCloudBlobClient();
CloudBlobClient destBlobClient = destStorageAccount.CreateCloudBlobClient();
var sourceContainer = sourceBlobClient.GetContainerReference("sourcecontainer");
var destContainer = destBlobClient.GetContainerReference("destcontainer");
CloudBlockBlob sourceBlob = sourceContainer.GetBlockBlobReference("copy.txt");
CloudBlockBlob targetBlob = destContainer.GetBlockBlobReference("copy.txt");
targetBlob.StartCopy(sourceBlob);
But it always returns the following error:
Microsoft.WindowsAzure.Storage.StorageException: 'The remote server
returned an error: (404) Not Found.'
What am I missing here?
Note: the same code works perfectly if I copy files from one container to another within the same storage account.
Take a look at the following example on how a copy should be performed (taken from Introducing Asynchronous Cross-Account Copy Blob):
public static void CopyBlobs(
    CloudBlobContainer srcContainer,
    string policyId,
    CloudBlobContainer destContainer)
{
    // get the SAS token to use for all blobs
    string blobToken = srcContainer.GetSharedAccessSignature(
        new SharedAccessBlobPolicy(), policyId);

    var srcBlobList = srcContainer.ListBlobs(true, BlobListingDetails.None);
    foreach (var src in srcBlobList)
    {
        var srcBlob = src as CloudBlob;

        // Create appropriate destination blob type to match the source blob
        CloudBlob destBlob;
        if (srcBlob.Properties.BlobType == BlobType.BlockBlob)
        {
            destBlob = destContainer.GetBlockBlobReference(srcBlob.Name);
        }
        else
        {
            destBlob = destContainer.GetPageBlobReference(srcBlob.Name);
        }

        // copy using src blob as SAS
        destBlob.StartCopyFromBlob(new Uri(srcBlob.Uri.AbsoluteUri + blobToken));
    }
}
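If you would rather keep the question's StartCopy call, the same idea applies: give the destination account read access to the source blob through a SAS and copy from the full URI. A minimal sketch, reusing the sourceBlob and targetBlob references from the question (the one-hour expiry is an arbitrary choice):
// Generate a short-lived read-only SAS on the source blob so the
// destination storage account is allowed to read it during the copy.
string sasToken = sourceBlob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
});

// Copy from the authenticated URI instead of the bare blob reference.
targetBlob.StartCopy(new Uri(sourceBlob.Uri.AbsoluteUri + sasToken));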
Hope it helps!
Here is another way to do this, using the TransferManager.CopyAsync method:
CloudStorageAccount sourceStorageAccount = CloudStorageAccount.Parse(@"source storage account connection string");
CloudStorageAccount destStorageAccount = CloudStorageAccount.Parse(@"destination storage account connection string");
CloudBlobClient sourceBlobClient = sourceStorageAccount.CreateCloudBlobClient();
CloudBlobClient destBlobClient = destStorageAccount.CreateCloudBlobClient();
var sourceContainer = sourceBlobClient.GetContainerReference("sourcecontainer");
var destContainer = destBlobClient.GetContainerReference("destcontainer");
CloudBlockBlob sourceBlob = sourceContainer.GetBlockBlobReference("copy.txt");
CloudBlockBlob targetBlob = destContainer.GetBlockBlobReference("copy.txt");
// The third parameter (isServiceCopy = true) performs a server-side copy.
TransferManager.CopyAsync(sourceBlob, targetBlob, true).Wait();
TransferManager lives in the Microsoft.WindowsAzure.Storage.DataMovement namespace. To get the reference, install the Microsoft.Azure.Storage.DataMovement package from the NuGet Package Manager.
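If you also want progress reporting, the library accepts a transfer context. A minimal sketch, assuming a DataMovement version that exposes SingleTransferContext (older releases call it TransferContext):
// Print the byte count as the server-side copy progresses.
SingleTransferContext context = new SingleTransferContext
{
    ProgressHandler = new Progress<TransferStatus>(
        progress => Console.WriteLine("Bytes transferred: {0}", progress.BytesTransferred))
};
TransferManager.CopyAsync(sourceBlob, targetBlob, true, null, context).Wait();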
I recently ran into this error trying to copy from /uploads to /raw within a single blob account.
The issue was that the container /raw didn't exist on the destination side within the test environment.
(I.e., this error is actually thrown by the destination, not the source.)
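In other words, make sure the destination container exists before you kick off the copy. A minimal sketch, reusing the destContainer and blob references from the question (cross-account copies still need the SAS fix shown above):
// A missing destination container makes the copy request fail with 404,
// and the error comes from the destination account, not the source.
destContainer.CreateIfNotExists();
targetBlob.StartCopy(sourceBlob);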
I would like to do something simple!
Copy a blob from a container (SourceContainer) in a Data Lake Gen2 account (SourceDataLake) to a second Data Lake (TargetDataLake) via C# code in an Azure Function.
The connection between my Azure Function and the SourceDataLake is secured via Private Link and a Private Endpoint.
DataLakeDirectoryClient sourceDirectoryClient = sourceDataLakeFileSystemClient.GetDirectoryClient(myPath);
DataLakeFileClient sourceFileClient = sourceDirectoryClient.GetFileClient(myBlobName);
Response<FileDownloadInfo> downloadResponse = await sourceFileClient.ReadAsync(); // I get the error on this line
Stream reader = downloadResponse.Value.Content;
DataLakeDirectoryClient targetDirectoryClient = targetDataLakeFileSystemClient.GetDirectoryClient(TargetDirectory);
DataLakeFileClient targetFileClient = await targetDirectoryClient.CreateFileAsync(myBlobName);
await targetFileClient.UploadAsync(reader, true);
For authentication to the Data Lake I use this function:
public static DataLakeFileSystemClient GetDataLakeFileSystemClient(string containerName, string dataLakeName, string dataLakeAccessKey)
{
    StorageSharedKeyCredential storageSharedKeyCredential = new StorageSharedKeyCredential(dataLakeName, dataLakeAccessKey);
    DataLakeClientOptions options = new DataLakeClientOptions(DataLakeClientOptions.ServiceVersion.V2019_07_07);

    DataLakeServiceClient dataLakeServiceClient = new DataLakeServiceClient(
        new Uri(string.Concat("https://", dataLakeName, ".dfs.core.windows.net")),
        storageSharedKeyCredential,
        options);

    DataLakeFileSystemClient dataLakeFileSystemClient = dataLakeServiceClient.GetFileSystemClient(containerName);
    return dataLakeFileSystemClient;
}
This code doesn't work for me. If I delete the Private Link and Private Endpoint from the SourceDataLake, then it works. Somehow Private Link and Private Endpoint don't work with this line of code:
Response<FileDownloadInfo> downloadResponse = await sourceFileClient.ReadAsync();
Do you have any idea how I can solve this problem? Or is there a better way to copy a blob from Data Lake Gen2?
Sorry, I cannot help with the current issue, but since you asked about other options, I think you can explore Azure Data Factory: https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-data-lake-storage
I have an Azure storage container with blobs (/images/filename). The filename (URI) is stored in the database at creation time and comes from the file upload save function:
blob.UploadFromStream(filestream);
string uri = blob.Uri.AbsoluteUri;
return uri;
The file upload works fine, and when the URI is passed to the client with a SAS key, the download works fine too.
Coming to deleting the images, I have a helper function that was taken from an MS example here: MS GitHub example
Here is the function:
internal bool DeleteFile(string fileURI)
{
    try
    {
        Uri uri = new Uri(fileURI);
        string filename = Path.GetFileName(uri.LocalPath);
        CloudBlockBlob fileblob = container.GetBlockBlobReference(filename);
        fileblob.Delete();            // deliberately added to force the exception (see below)
        bool res = fileblob.DeleteIfExists();
        return res; // OK
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
        return false;
    }
}
This is all in a helper class which starts as follows:
public class AzureHelpers
{
    private string connection;
    private CloudStorageAccount storageAccount;
    private CloudBlobClient blobClient;
    private CloudBlobContainer container;

    public AzureHelpers()
    {
        connection = CloudConfigurationManager.GetSetting("myproject_AzureStorageConnectionString");
        storageAccount = CloudStorageAccount.Parse(connection);
        blobClient = storageAccount.CreateCloudBlobClient();
        container = blobClient.GetContainerReference(Resources.DataStoreRoot);
        container.CreateIfNotExists();
    }
    ....
I deliberately added the Delete call before DeleteIfExists to cause the exception and prove what I suspected: it wasn't finding the file/blob.
As I step through the code, however, the CloudBlockBlob is definitely there and has the correct URI, etc.
I am wondering if this could be a permissions thing? Or am I missing something else?
I think there is a directory in your container. Assume that you have a container named container_1 and your files are stored in a directory, like /images/a.jpg. You should remember that in this case your blob name is images/a.jpg, not a.jpg.
In your code, Path.GetFileName only gets the file name, like a.jpg, so it does not match the real blob name images/a.jpg, which causes the "does not exist" error.
So in your DeleteFile(string fileURI) method, try the code below; it works fine on my side:
Uri uri = new Uri(fileURI);
var temp = uri.LocalPath;
// Strip the leading "/<container>/" segment so only the blob name (e.g. images/a.jpg) remains
string filename = temp.Remove(0, temp.IndexOf('/', 1) + 1);
CloudBlockBlob fileblob = container.GetBlockBlobReference(filename);
//fileblob.Delete();
bool res = fileblob.DeleteIfExists();
Or use this code snippet:
Uri uri = new Uri(fileURI);
// Use this line of code just to get the blob name correctly
CloudBlockBlob blob_temp = new CloudBlockBlob(uri);
var myblob = container.GetBlockBlobReference(blob_temp.Name);
bool res = myblob.DeleteIfExists();
This seems to be a permissions issue. Can you go to the portal and edit the container metadata in Azure Blob Storage? Change the access level from private to public.
I'm trying to use the Azure Storage SDK and trying to determine if there's a way I can specify a container and find the number of blobs it contains. The posts I've seen thus far only mention checking by the name of the blob, which doesn't suit my needs.
If I do the following:
CloudBlobContainer blobContainer = blobClient.GetContainerReference("my-container");
var blobCount = blobContainer.ListBlobs().Count();
Then I'm hit with an HTTP 404 exception.
Is there any way to go about this?
You can check the count by using this code:
CloudBlobContainer blobContainer = blobClient.GetContainerReference("my-container");
blobContainer.FetchAttributes();
string count = blobContainer.Metadata["ItemCount"];

int itemCount;
if (int.TryParse(count, out itemCount))
{
    if (itemCount > 0)
    {
        // Container is not empty
    }
    else
    {
        // Container is empty
    }
}
else
{
    // Conversion failed
}
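Note that a metadata key like ItemCount only exists if your own code maintains it, and the 404 from the question usually means the container itself is missing. If you just need the blob count and can afford a full enumeration, a listing works without any custom metadata. A minimal sketch (it walks the whole container, so it can be slow for large containers; Count() needs System.Linq):
CloudBlobContainer blobContainer = blobClient.GetContainerReference("my-container");

// Guard against the 404: a missing container throws on any listing call.
if (blobContainer.Exists())
{
    // Flat listing so blobs inside virtual directories are counted too.
    int blobCount = blobContainer.ListBlobs(useFlatBlobListing: true).Count();
    Console.WriteLine("Blob count: {0}", blobCount);
}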
I am stuck with this error: The specified container does not exist.
Let me explain:
CloudBlobClient blobStorage = GetBlobStorage("upload");
CloudBlockBlob blob = BlobPropertySetting(blobStorage, Guid.NewGuid().ToString().ToLower() + Path.GetExtension(file.FileName));
blob.UploadFromStream(file.InputStream);
public static CloudBlobClient GetBlobStorage(string cloudBlobContainerName)
{
    CloudBlobClient blobStorage;
    try
    {
        var storageAccount = CloudStorageAccount.FromConfigurationSetting("StorageConnectionString");
        blobStorage = storageAccount.CreateCloudBlobClient();
        CloudBlobContainer container = blobStorage.GetContainerReference(cloudBlobContainerName);
        container.CreateIfNotExist();
        var permissions = container.GetPermissions();
        permissions.PublicAccess = BlobContainerPublicAccessType.Container;
        container.SetPermissions(permissions);
    }
    catch (Exception ex)
    {
        Logger.LogError(Log4NetLogger.Category.Exception, "Error in : BlobHandler.GetBlobStorage :>> Exception message: " + ex.Message);
        throw;
    }
    return blobStorage;
}

public static CloudBlockBlob BlobPropertySetting(CloudBlobClient cloudBlobClientReferenceName, string blobContentName)
{
    return cloudBlobClientReferenceName.GetBlockBlobReference(blobContentName);
}
And my StorageConnectionString is:
<Setting name="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=duw;AccountKey=bla bla" />
The container 'upload' and the storage account 'duw' exist.
Executing the blob.UploadFromStream(file.InputStream); statement causes the error.
Stack trace:
at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.ExecuteAndWait()
at Microsoft.WindowsAzure.StorageClient.TaskImplHelper.ExecuteImpl(Func`1 impl)
at Microsoft.WindowsAzure.StorageClient.CloudBlob.UploadFromStream(Stream source, BlobRequestOptions options)
at Microsoft.WindowsAzure.StorageClient.CloudBlob.UploadFromStream(Stream source)
at DAL.Handlers.BlobHandler.CreateAd(HttpPostedFileBase file, Advertisement model) in D:\DU Server\trunk\Du Server\DAL\Handlers\BlobHandler.cs:line 151
Inner exception:
{"The remote server returned an error: (404) Not Found."}
Can anybody help me sort this out?
Short version
Try the following code for the BlobPropertySetting function:
public static CloudBlockBlob BlobPropertySetting(CloudBlobClient cloudBlobClientReferenceName, string blobContentName)
{
    CloudBlockBlob blob = cloudBlobClientReferenceName.GetBlockBlobReference("upload/" + blobContentName);
    return blob;
}
Now for the longer version :)
The reason you're getting this error is the way you are constructing the CloudBlockBlob object in the BlobPropertySetting method. When you use your code, it creates a blob object with the following URI: https://duw.blob.core.windows.net/blobContentName. If you notice, there's no container name there. Since there's no container name, the storage client library assumes that you're trying to create a blob in the $root blob container, which is a special blob container. You can read more about it here: http://msdn.microsoft.com/en-us/library/windowsazure/hh488356.aspx. Since your storage account does not have this container, you get a 404 (Resource Not Found) error.
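If you don't want to hard-code the container name into the blob name as above, the equivalent fix is to resolve the blob through a container reference, so the container is always part of the URI. A minimal sketch along the lines of the question's code:
// Resolving the blob via the container yields
// https://duw.blob.core.windows.net/upload/<blobContentName>
// instead of a blob in the special $root container.
CloudBlobContainer container = cloudBlobClientReferenceName.GetContainerReference("upload");
CloudBlockBlob blob = container.GetBlockBlobReference(blobContentName);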
I am very late, but I still thought my answer might be useful for someone.
I resolved this error by putting in the correct container name; it was different by default.
I have cloned this Git project: https://github.com/Azure-Samples/storage-blob-upload-from-webapp-node
const
express = require('express')
, router = express.Router()
, azureStorage = require('azure-storage')
, blobService = azureStorage.createBlobService()
, containerName = 'container' // added container name here, as my container name
, config = require('../config')
;
Please help me. I am writing the following code to mount a VHD file, but I am not able to mount it. It works fine locally, but when I deploy it to the Azure server the web role remains offline. I tried removing the foreach block below, but in vain. When I removed the line Global.driveLetter = drive.Mount(localCache.MaximumSizeInMegabytes - 20, DriveMountOptions.Force); the role became ready on the server, but I can't do that because this is the key statement that mounts the drive.
What could be the problem?
private static void MountAzureDrive()
{
    string connectionStringSettingName = "AzureConnectionString";
    string azureDriveContainerName = "azuredrives";
    string azureDrivePageBlobName = Guid.NewGuid().ToString("N").ToLowerInvariant();
    string azureDriveCacheDirName = Path.Combine(Environment.CurrentDirectory, "cache");

    CloudStorageAccount.SetConfigurationSettingPublisher((a, b) =>
    {
        b(RoleEnvironment.GetConfigurationSettingValue(connectionStringSettingName));
    });

    //CloudStorageAccount storageAccount = CloudStorageAccount.FromConfigurationSetting(connectionStringSettingName);
    CloudStorageAccount storageAccount = CloudStorageAccount.DevelopmentStorageAccount;

    LocalResource localCache = RoleEnvironment.GetLocalResource("InstanceDriveCache");
    CloudDrive.InitializeCache(localCache.RootPath + "cache", localCache.MaximumSizeInMegabytes);

    // Just checking: make sure the container exists
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    blobClient.GetContainerReference("drives").CreateIfNotExist();

    // Create cloud drive
    //WebRole.drive = storageAccount.CreateCloudDrive(blobClient.GetContainerReference("drives").GetPageBlobReference("Test.VHD").Uri.ToString());
    WebRole.drive = storageAccount.CreateCloudDrive("drives/Test.VHD");
    try
    {
        WebRole.drive.CreateIfNotExist(512);
    }
    catch (CloudDriveException ex)
    {
        // handle exception here
        // an exception is also thrown if all is well but the drive already exists
    }

    foreach (var d in CloudDrive.GetMountedDrives())
    {
        var mountedDrive = storageAccount.CreateCloudDrive(d.Value.PathAndQuery);
        mountedDrive.Unmount();
    }

    //Global.driveLetter = drive.Mount(25, DriveMountOptions.Force);
    Global.driveLetter = drive.Mount(localCache.MaximumSizeInMegabytes - 20, DriveMountOptions.Force);
}
Thanks in advance.
Maybe this is stating the obvious, but... when you deploy to Windows Azure, did you change the storage account from dev storage? You have the dev storage emulator hard-coded:
CloudStorageAccount storageAccount = CloudStorageAccount.DevelopmentStorageAccount;
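A minimal sketch of the swap, reading the real connection string from the role configuration instead (assuming the AzureConnectionString setting that the question's code already declares):
// Use the configured storage account rather than the local emulator.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("AzureConnectionString"));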