I am using TransferUtilityDownload and TransferUtilityDownloadDirectory to download a single file and a full directory. Even though I am using the same bucket name format for both, it works for the single file but the directory download returns 403 Access Denied (listing objects fails the same way):
string bucketName = "my-bucket-us-east-1-prod";
string UnscheduledIn = "abc/butter/input_butter_11nov2019/unscheduled";
AmazonS3Client client = new AmazonS3Client(RegionEndpoint.USEast1);
// request for object download
var request = new TransferUtilityDownloadRequest();
// request for directory download
var drequest = new TransferUtilityDownloadDirectoryRequest();
//This request for single file download
request.BucketName = bucketName + "/" + UnscheduledIn;
request.FilePath = "D:\\input\\" + "test.csv";
request.Key = "test.csv";
//This request for directory download
drequest.BucketName = bucketName + "/" + UnscheduledIn;
drequest.S3Directory = "unscheduled";
drequest.LocalDirectory = "D:\\input\\";
drequest.DownloadFilesConcurrently = true;
TransferUtility fileTransferUtility = new TransferUtility(new AmazonS3Client(RegionEndpoint.USEast1));
// This one works
fileTransferUtility.Download(request);
// This one does not work
fileTransferUtility.DownloadDirectory(drequest);
A 403 Access Denied error is usually caused by a wrong bucket or directory name (the request cannot find the bucket or directory, which is a known issue). However, both the bucket name and the directory name are correct here. I wonder if I am missing something in the formatting or in setting up some property?
Quick note: this version also returns the same 403 error:
//This request for directory download
drequest.BucketName = bucketName;
drequest.S3Directory = UnscheduledIn;
drequest.LocalDirectory = "D:\\input\\";
drequest.DownloadFilesConcurrently = true;
It looks like there is an issue with the bucket name and the S3 directory path. Update your code with this piece of code:
//This request for directory download
drequest.BucketName = bucketName;
drequest.S3Directory = "/" + UnscheduledIn;
drequest.LocalDirectory = "D:\\input";
drequest.DownloadFilesConcurrently = true;
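The underlying mistake in the original code is that BucketName must contain only the bare bucket name; everything after the first slash belongs in S3Directory (the key prefix). A minimal hypothetical helper (the name is mine, not from the SDK) that splits a combined "bucket/prefix" string into the two values the request expects:

```csharp
// Hypothetical helper: BucketName must be the bare bucket name;
// everything after the first '/' is the key prefix for S3Directory.
public static class S3PathHelper
{
    public static (string Bucket, string Prefix) Split(string combined)
    {
        int slash = combined.IndexOf('/');
        return slash < 0
            ? (combined, string.Empty)
            : (combined.Substring(0, slash), combined.Substring(slash + 1));
    }
}
```

The tuple members then feed drequest.BucketName and drequest.S3Directory directly, so the two values can never be accidentally fused again.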
Update:
In general, a 403 Forbidden comes from the server in case of an authentication failure or a permission issue. Note that DownloadDirectory first has to list the objects under the prefix, so it needs list permission on the bucket in addition to object-level permissions, while a single-file Download only needs to read the object. Please check your bucket policy to allow the download.
{
    "Sid": "AllowAllS3ActionsInUserFolder",
    "Effect": "Allow",
    "Action": ["s3:*"],
    "Resource": [
        "arn:aws:s3:::my-bucket-us-east-1-prod/abc/butter/input_butter_11nov2019/*",
        "arn:aws:s3:::my-bucket-us-east-1-prod/abc/butter/input_butter_11nov2019/unscheduled/*"
    ]
}
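The statement above only grants object-level actions. Because DownloadDirectory lists objects before fetching them, a complete policy also needs s3:ListBucket on the bucket ARN itself. Assuming this is an IAM user/role policy (a bucket policy would additionally need a Principal element), a complete version might look like the sketch below; the ListBucket statement and its Sid are my addition:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListUnderPrefix",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::my-bucket-us-east-1-prod"],
      "Condition": {
        "StringLike": { "s3:prefix": ["abc/butter/input_butter_11nov2019/*"] }
      }
    },
    {
      "Sid": "AllowAllS3ActionsInUserFolder",
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::my-bucket-us-east-1-prod/abc/butter/input_butter_11nov2019/*"
      ]
    }
  ]
}
```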
I have a method that will search for a file by name and then download that file:
public static void DownloadFile(string filename)
{
    service = GetDriveService();
    // Check whether the file exists and grab its id
    FilesResource.ListRequest listRequest = service.Files.List();
    listRequest.SupportsAllDrives = true;
    listRequest.IncludeItemsFromAllDrives = true;
    listRequest.PageSize = 1000;
    listRequest.Q = "name = '" + filename + ".pdf'";
    FileList files = listRequest.Execute();
    if (files.Files.Count > 0) // The file exists, download it
    {
        var request = service.Files.Get(files.Files[0].Id);
        var stream = new System.IO.MemoryStream();
        request.Download(stream);
    }
}
The problem is that FileList files = listRequest.Execute(); returns 0 files unless I have recently opened that file on Google Drive in Chrome. It's as if the files are not indexed. Am I missing a parameter?
I've tested this on loads of files and, without fail, if I've opened the file in Chrome, listRequest.Execute() returns 1 element; otherwise it returns nothing.
I have also tested searching in specific folders only, same issue.
Any help is much appreciated.
Thanks.
Edit: Here is the GetDriveService method:
private static DriveService GetDriveService()
{
    string[] scopes = new string[] { DriveService.Scope.Drive }; // Full access
    GoogleDrive cr = JsonConvert.DeserializeObject<GoogleDrive>(System.IO.File.ReadAllText(@"\\PATH_TO_JSONFILE\GoogleAPI.json"));
    ServiceAccountCredential xCred = new ServiceAccountCredential(new ServiceAccountCredential.Initializer(cr.client_email)
    {
        User = "xxxxx@xxxx.xx",
        Scopes = new[] { DriveService.Scope.Drive }
    }.FromPrivateKey(cr.private_key));
    DriveService service = new DriveService(new BaseClientService.Initializer()
    {
        HttpClientInitializer = xCred,
        ApplicationName = "APPLICATION_NAME",
    });
    return service;
}
The User email is the same account I use to open the files in Chrome.
I have found the issue.
In GetDriveService() the User was set to my own email address and not to the service account address with account-wide delegation across our organisation.
I guess my email address did not 'own' the files until I had opened them once.
Hope this prevents someone from making the same mistake!
It seems this happens if you are defaulting to the "user" corpora and the files haven't been directly shared with you (i.e. you've been aliased in to having access instead).
In the case that you can't use a service account for what you're doing, you can fix this issue by changing the corpora parameter to "allDrives" instead.
So in your C# DownloadFile method, just add:
listRequest.Corpora = "allDrives";
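One more fragility worth noting in the DownloadFile method above: the Q string is built by naive concatenation, so a file name containing an apostrophe breaks the Drive query syntax (strings in Drive queries are single-quoted, with ' and \ escaped by a backslash). A small hypothetical helper, assuming the ".pdf" suffix convention from the question:

```csharp
// Hypothetical helper: Drive's query language delimits strings with
// single quotes, so ' and \ inside the file name must be escaped.
public static class DriveQueryHelper
{
    public static string BuildNameQuery(string filename)
    {
        string escaped = filename.Replace("\\", "\\\\").Replace("'", "\\'");
        return "name = '" + escaped + ".pdf'";
    }
}
```

The result is assigned to listRequest.Q in place of the concatenated literal.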
I generated a SAS URL with the code below:
var blobBuilder = new BlobSasBuilder()
{
    ExpiresOn = DateTimeOffset.UtcNow.AddDays(2),
    Protocol = SasProtocol.Https,
    StartsOn = DateTimeOffset.UtcNow.AddDays(-2),
    BlobContainerName = _containerName
};
var blockBlob = _blobContainer.GetBlobClient(fileName);
blobBuilder.SetPermissions(BlobAccountSasPermissions.Read);
var isBlobExist = blockBlob.Exists();
if (isBlobExist)
{
    var uriData = blockBlob.GenerateSasUri(blobBuilder);
    if (uriData != null)
    {
        path = uriData.AbsoluteUri;
    }
}
The generated URI works most of the time for mobile users, but sometimes it returns this error message when they try to download the file:
server returned http 403 server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature
I am not sure what I am doing wrong here, because it works most of the time but fails occasionally.
I am also wondering whether this happens when someone tries to overwrite the file while another user is reading it. Please suggest.
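One thing worth ruling out with intermittent 403s on a SAS URL is the token's validity window (expiry or clock skew). A diagnostic sketch that pulls the se (signed expiry) parameter out of a SAS URL so failures can be correlated with expiry; the class and method names are mine, not part of the Azure SDK:

```csharp
using System;

// Hypothetical diagnostic: extract the 'se' (signed expiry) value from
// a SAS URL so a 403 can be checked against the token's validity window.
public static class SasDiagnostics
{
    public static DateTimeOffset? GetExpiry(string sasUrl)
    {
        string query = new Uri(sasUrl).Query.TrimStart('?');
        foreach (string pair in query.Split('&'))
        {
            string[] kv = pair.Split(new[] { '=' }, 2);
            if (kv.Length == 2 && kv[0] == "se")
                return DateTimeOffset.Parse(Uri.UnescapeDataString(kv[1]));
        }
        return null; // no 'se' parameter found
    }
}
```

Logging this value alongside each failed request makes it easy to see whether the 403s cluster around the expiry time.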
I have the code below, which attempts to create a blank file on my Azure file storage account:
CloudStorageAccount sa = CloudStorageAccount.Parse(connectionString);
var fc = sa.CreateCloudFileClient();
var share = fc.GetShareReference("uploadparking");
share.CreateIfNotExists();
var rootDirectory = share.GetRootDirectoryReference();
var subDirectory = rootDirectory.GetDirectoryReference("valuationrequests");
subDirectory.CreateIfNotExists();
var uri = new Uri(subDirectory.Uri.ToString() + "/file.txt");
var file = new CloudFile(uri);
file.Create(0);
On the final line I am getting the following exception:
Microsoft.WindowsAzure.Storage.StorageException occurred in Microsoft.WindowsAzure.Storage.dll
Additional information: The remote server returned an error: (404) Not Found.
I'm not sure what it can't find. It shouldn't be trying to find the file, since it's creating it, and I have confirmed the directories exist.
If anyone knows how I can go about creating a file successfully, please let me know. I've checked the tutorials and sadly they only show how to download a file, not upload one.
I believe the documentation is incomplete. It only mentions that the URI should be absolute. It fails to mention that if you're using an absolute URI, you should also pass storage credentials, or the URI should include a Shared Access Signature with at least Create permission.
You should try using the following override of CloudFile to create an instance: https://learn.microsoft.com/en-us/dotnet/api/microsoft.windowsazure.storage.file.cloudfile.-ctor?view=azurestorage-8.1.3#Microsoft_WindowsAzure_Storage_File_CloudFile__ctor_System_Uri_Microsoft_WindowsAzure_Storage_Auth_StorageCredentials_.
So your code would be:
CloudStorageAccount sa = CloudStorageAccount.Parse(connectionString);
var fc = sa.CreateCloudFileClient();
var share = fc.GetShareReference("uploadparking");
share.CreateIfNotExists();
var rootDirectory = share.GetRootDirectoryReference();
var subDirectory = rootDirectory.GetDirectoryReference("valuationrequests");
subDirectory.CreateIfNotExists();
var uri = new Uri(subDirectory.Uri.ToString() + "/file.txt");
var file = new CloudFile(uri, sa.Credentials);
file.Create(0);
The other alternative would be to create a Shared Access Signature (SAS) token on the share and use a SAS URL when creating the instance of CloudFile. In that case your code would be:
CloudStorageAccount sa = CloudStorageAccount.Parse(connectionString);
var fc = sa.CreateCloudFileClient();
var share = fc.GetShareReference("uploadparking");
share.CreateIfNotExists();
var rootDirectory = share.GetRootDirectoryReference();
var subDirectory = rootDirectory.GetDirectoryReference("valuationrequests");
subDirectory.CreateIfNotExists();
SharedAccessFilePolicy policy = new SharedAccessFilePolicy()
{
    Permissions = SharedAccessFilePermissions.Create,
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(15)
};
var sasToken = share.GetSharedAccessSignature(policy);
var uri = new Uri(subDirectory.Uri.ToString() + "/file1.txt" + sasToken);
var file = new CloudFile(uri);
file.Create(0);
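The concatenation on the var uri line works because the SAS token returned by GetSharedAccessSignature already begins with '?'. That convention can be captured in a small hypothetical helper (the name is mine) so the URL assembly is explicit:

```csharp
// Hypothetical helper: a share-level SAS token already starts with '?',
// so it can be appended directly after the file path.
public static class SasUriHelper
{
    public static string BuildSasFileUri(string directoryUri, string fileName, string sasToken)
    {
        return directoryUri.TrimEnd('/') + "/" + fileName + sasToken;
    }
}
```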
I'm using the Amazon SDK for .NET.
I have uploaded a file to a folder in my bucket; now I want to get the URL of that file using this code:
GetPreSignedUrlRequest request = new GetPreSignedUrlRequest();
request.BucketName = "my-new-bucket2";
request.Key = "Images/Tulips.jpg";
request.Expires = DateTime.Now.AddHours(1);
request.Protocol = Protocol.HTTP;
string url = s3.GetPreSignedURL(request);
but this returns a URL with a key, expiration date and signature, and in fact I want the URL without them. Is there another method to get the plain URL?
**Things I tried**
I searched and found that I had to change the permission of my file, so I changed it while uploading:
request.CannedACL = S3CannedACL.PublicRead;
but it still returns the same URL:
http://my-new-bucket2.s3-us-west-2.amazonaws.com/Images/Tulips.jpg?AWSAccessKeyId=xxxxxxxxxxxx&Expires=1432715743&Signature=xxxxxxxxxxx%3D
It works when I remove the key id, expiry and signature, but how can I get the URL without them? Do I have to strip them manually?
This is by design. If you know the bucket name and the key, then you have everything you need to construct the URL. As an example, here the bucket name is yourbucketname and the key is this/is/your/key.jpg:
https://yourbucketname.s3.amazonaws.com/this/is/your/key.jpg
Hope that helps!
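That pattern can be sketched as a small hypothetical helper (the name is mine), escaping each key segment while preserving the '/' separators:

```csharp
using System;
using System.Linq;

// Hypothetical helper: build the virtual-hosted-style public URL for an
// S3 object. Each key segment is percent-escaped; '/' is preserved.
public static class S3UrlHelper
{
    public static string PublicS3Url(string bucket, string key)
    {
        string escapedKey = string.Join("/", key.Split('/').Select(Uri.EscapeDataString));
        return $"https://{bucket}.s3.amazonaws.com/{escapedKey}";
    }
}
```

Note this plain URL only works if the object is publicly readable, e.g. uploaded with S3CannedACL.PublicRead.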
I just browsed their documentation and was not able to find a method that returns an absolute URL, although I suspect one exists that I simply could not see. For now, you can solve the problem by extracting the absolute URL from the result you have:
GetPreSignedUrlRequest request = new GetPreSignedUrlRequest();
request.BucketName = "my-new-bucket2";
request.Key = "Images/Tulips.jpg";
request.Expires = DateTime.Now.AddHours(1);
request.Protocol = Protocol.HTTP;
string url = s3.GetPreSignedURL(request);
int index = url.IndexOf("?");
if (index > 0)
{
    string absUrl = url.Substring(0, index);
}
Hope it helps :)
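Alternatively, System.Uri can do the trimming without manual index arithmetic; GetLeftPart(UriPartial.Path) drops the query string (the AWSAccessKeyId/Expires/Signature parameters) and keeps everything up to the path:

```csharp
using System;

// Strip the query string from a pre-signed URL, leaving the bare object URL.
public static class PresignedUrlHelper
{
    public static string StripQuery(string presignedUrl)
    {
        return new Uri(presignedUrl).GetLeftPart(UriPartial.Path);
    }
}
```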
I have created two buckets on S3, named "demobucket" and "demo.bucket".
When I upload a file to "demobucket" it works fine, but when I upload a file to "demo.bucket" I get the error "Maximum number of retry attempts reached : 3".
My question is: what is the problem with uploading a file when the bucket name contains periods (dots)?
My code is:
public static bool UploadResumeFileToS3(string uploadAsFileName, Stream ImageStream, S3CannedACL filePermission, S3StorageClass storageType)
{
    try
    {
        AmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client(MY_AWS_ACCESS_KEY_ID, MY_AWS_SECRET_KEY);
        PutObjectRequest request = new PutObjectRequest();
        request.WithKey(uploadAsFileName);
        request.WithInputStream(ImageStream);
        request.WithBucketName("demo.bucket");
        request.CannedACL = filePermission;
        request.StorageClass = storageType;
        client.PutObject(request);
        client.Dispose();
    }
    catch
    {
        return false;
    }
    return true;
}
There is a problem establishing a secure connection to S3 when the bucket name contains a period. The issue is explained well here: http://shlomoswidler.com/2009/08/amazon-s3-gotcha-using-virtual-host.html.
One solution is to create your S3 client passing a third argument which causes it to use HTTP instead of HTTPS. See Amazon S3 C# SDK "The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel." Error.
AmazonS3Config S3Config = new AmazonS3Config()
{
    ServiceURL = "s3.amazonaws.com",
    CommunicationProtocol = Protocol.HTTP,
    RegionEndpoint = region
};
However, be aware that this is not secure and Amazon does not recommend it. Your secret access key could possibly be intercepted. I have not yet found a secure way to upload a file to a bucket with a period in the name.
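The TLS failure can be reasoned about directly: the S3 wildcard certificate covers *.s3.amazonaws.com, and a wildcard matches exactly one DNS label, so a bucket name with a dot produces a host name ("demo.bucket.s3.amazonaws.com") that the certificate cannot cover. A small illustration of that matching rule (not actual TLS code, just the label logic):

```csharp
// Illustration of wildcard certificate matching: '*' covers exactly one
// DNS label, so "demo.bucket.s3.amazonaws.com" is NOT matched by
// "*.s3.amazonaws.com" while "demobucket.s3.amazonaws.com" is.
public static class WildcardDemo
{
    public static bool WildcardMatches(string pattern, string host)
    {
        if (!pattern.StartsWith("*.")) return pattern == host;
        string suffix = pattern.Substring(1); // ".s3.amazonaws.com"
        if (!host.EndsWith(suffix)) return false;
        string firstLabel = host.Substring(0, host.Length - suffix.Length);
        return firstLabel.Length > 0 && !firstLabel.Contains(".");
    }
}
```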
If you are using a recent version of AWSSDK.dll (at least 2.0.13.0), you can do this instead:
AmazonS3Config S3Config = new AmazonS3Config()
{
    ServiceURL = "s3.amazonaws.com",
    ForcePathStyle = true,
    RegionEndpoint = region
};
This forces the S3 client to use path-style addressing for your bucket (the bucket name goes in the URL path rather than the host name), which avoids the problem.