I'm trying to list objects from my S3 bucket, but only the third level of a certain folder:
bucket
    samples
        XXXX
            XXXX_XXXXX
            XXXX_XXXXX
        YYYY
            YYYY_YYYYY
            YYYY_YYYYY
I want the XXXX_XXXXX and YYYY_YYYYY folders only.
Using C#, this is my code:
using (IAmazonS3 client = new AmazonS3Client("awsAccessKeyId", "awsSecretAccessKey", RegionEndpoint.GetBySystemName("eu-central-1")))
{
    ListObjectsRequest request = new ListObjectsRequest
    {
        BucketName = bucketName,
        Prefix = "samples/",
        Delimiter = "/"
    };

    ListObjectsResponse response;
    do
    {
        response = client.ListObjects(request);
        if (response.S3Objects.Count > 0)
        {
            // CODE
        }
        request.Marker = response.NextMarker;
    } while (response.IsTruncated);
}
The response.S3Objects list is empty. If I remove the Delimiter from the request, ALL the objects are returned and the loading time is too long.
I've been following the AWS S3 docs, but it simply returns nothing. Please help me understand what is wrong. Many thanks.
You need to be looking in CommonPrefixes, not S3Objects. CommonPrefixes gives you all of the prefixes up to the next delimiter, each of which you use to repeat the request, taking you another level deeper each time.
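A minimal sketch of that approach, reusing the client and bucketName from the question (the prefixes shown in the comments are just the example folder names):

// Sketch: list the prefixes directly under "samples/", then drill one level
// deeper to reach the XXXX_XXXXX / YYYY_YYYYY level.
var secondLevel = client.ListObjects(new ListObjectsRequest
{
    BucketName = bucketName,
    Prefix = "samples/",
    Delimiter = "/"
}).CommonPrefixes;                      // e.g. "samples/XXXX/", "samples/YYYY/"

foreach (string prefix in secondLevel)
{
    var thirdLevel = client.ListObjects(new ListObjectsRequest
    {
        BucketName = bucketName,
        Prefix = prefix,
        Delimiter = "/"
    }).CommonPrefixes;                  // e.g. "samples/XXXX/XXXX_XXXXX/"

    foreach (string folder in thirdLevel)
        Console.WriteLine(folder);
}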
I am using .NET Core along with Amazon's .NET SDK to push and pull things from S3. I am using a folder structure in S3 that involves inserting an empty directory with several subdirectories.
At a later time I insert files into those directories and move them around. Now I need to be able to remove the directory entirely.
I am able to delete all of the contents of the directory by using
await client.DeleteObjectAsync(bucketName, keyName, null).ConfigureAwait(false);
where I loop through all of the files I want to delete in the given bucket. However, it always leaves me with the empty folder structure; in S3 I can see that it has a content size of 0 bytes, but I don't want to have to sort through thousands of empty folders to find the ones that actually contain data.
Is there any way to remove an empty folder from S3 using the AWS .NET SDK?
Update:
I am able to delete everything in the folder I want except for the folder itself.
using (IAmazonS3 client = new AmazonS3Client(awsCreds, Amazon.RegionEndpoint.USEast1))
{
    try
    {
        DeleteObjectsRequest deleteRequest = new DeleteObjectsRequest();
        ListObjectsRequest listRequest = new ListObjectsRequest
        {
            BucketName = bucketName,
            Prefix = prefix,
            //Marker = prefix,
        };

        ListObjectsResponse response = await client.ListObjectsAsync(listRequest).ConfigureAwait(false);

        // Process response
        foreach (S3Object entry in response.S3Objects)
        {
            deleteRequest.AddKey(entry.Key);
        }

        deleteRequest.BucketName = bucketName;
        var response2 = await client.DeleteObjectsAsync(deleteRequest).ConfigureAwait(false);
        return true;
    }
    catch (AmazonS3Exception amazonS3Exception)
    {
        if (amazonS3Exception.ErrorCode != null
            && (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId", StringComparison.Ordinal)
                || amazonS3Exception.ErrorCode.Equals("InvalidSecurity", StringComparison.Ordinal)))
        {
            logger.LogError("AwsS3Service.DeleteFileFromBucket Error - Check the provided AWS Credentials.");
        }
        else
        {
            logger.LogError($"AwsS3Service.DeleteFileFromBucket Error - Message: {amazonS3Exception.Message}");
        }
    }
}
This deletes the entire contents of the directory I choose, along with all subdirectories. But the main directory remains. Is there any way to remove that main directory?
Your code is 99% of the way there. The only thing you need to do is add the prefix itself to the keys to be deleted; the folder is technically a 0-byte object that needs to be 'deleted' as well.
For example, after you loop through all the objects in the response, add the same prefix you used to find those objects:
foreach (S3Object entry in response.S3Objects)
{
    deleteRequest.AddKey(entry.Key);
}
// Add the folder itself to be deleted as well
deleteRequest.AddKey(prefix);
Currently I am working on CRUD operations using Amazon S3 with .NET 3.5, and I am using SDK version 3.1.5.
I found this code to check if the bucket exists:
AmazonS3Client s3Client = new AmazonS3Client();
// set up the client configuration
S3DirectoryInfo directoryInfo = new S3DirectoryInfo(s3Client, bucketName);
bucketExists = directoryInfo.Exists;
Is there another, more elegant way (C# code) to check if the bucket exists?
I originally followed the answer here, but I switched to a slightly different method, so I thought I'd share it. This method creates the bucket if it doesn't already exist.
internal async Task CreateBucketAsync(string bucket, CancellationToken token)
{
    if (string.IsNullOrEmpty(bucket)) return;

    using (var amazonClient = GetAmazonClient)
    {
        if (AmazonS3Util.DoesS3BucketExist(amazonClient, bucket)) return;

        await amazonClient.PutBucketAsync(new PutBucketRequest { BucketName = bucket, UseClientRegion = true }, token);
        await SetMultiPartLifetime(amazonClient, bucket, token);
    }
}
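If you ever move to a newer AWSSDK.S3 release, there is also an async utility check; a rough sketch, assuming a version where AmazonS3Util.DoesS3BucketExistV2Async is available:

// Sketch only, assuming a newer AWSSDK.S3 where the async check exists.
// "amazonClient" is an IAmazonS3 instance and "bucket" the bucket name.
if (!await AmazonS3Util.DoesS3BucketExistV2Async(amazonClient, bucket))
{
    await amazonClient.PutBucketAsync(new PutBucketRequest { BucketName = bucket, UseClientRegion = true }, token);
}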
Your code is written in C#, and you are looking for another way to check whether the directory exists? I think your way is better.
You can create a list of all the subfolders in the root and store it somewhere else (a text file, a list, or whatever you want), and then you don't need to create a connection to Amazon every time.
S3DirectoryInfo s3Root = new S3DirectoryInfo(s3Client, "bucketofcode");
foreach (S3DirectoryInfo subDirectory in s3Root.GetDirectories())
{
    Console.WriteLine(subDirectory.Name);
}
From here https://blogs.aws.amazon.com/net/post/Tx2N8LWZYHZHGQI/The-Three-Different-APIs-for-Amazon-S3
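A minimal sketch of that caching idea (the local file path is made up, and it assumes using System.IO and System.Linq):

// Hypothetical cache file: store the subfolder names locally so later
// existence checks don't need a round trip to Amazon.
var subfolderNames = s3Root.GetDirectories().Select(d => d.Name).ToList();
File.WriteAllLines(@"C:\temp\s3-subfolders.txt", subfolderNames);

// Later: check the cached list instead of calling S3 again.
bool folderExists = File.ReadAllLines(@"C:\temp\s3-subfolders.txt").Contains("some-subfolder");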
I am trying to copy a blob from one location to another, and it seems like the method I was using is obsolete. Everything I've read says I should use "StartCopy". However, when I try this it doesn't copy the blob; I just get a 404 error at the destination.
I don't seem to be able to find any documentation for this. Can anyone advise me on how to do this in the latest version of the API, or point me in the direction of some docs?
Uri uploadUri = new Uri(destinationLocator.Path);
string assetContainerName = uploadUri.Segments[1];
CloudBlobContainer assetContainer = cloudBlobClient.GetContainerReference(assetContainerName);
string fileName = HttpUtility.UrlDecode(Path.GetFileName(model.BlockBlob.Uri.AbsoluteUri));

var sourceCloudBlob = mediaBlobContainer.GetBlockBlobReference(fileName);
sourceCloudBlob.FetchAttributes();

if (sourceCloudBlob.Properties.Length > 0)
{
    IAssetFile assetFile = asset.AssetFiles.Create(fileName);
    var destinationBlob = assetContainer.GetBlockBlobReference(fileName);

    destinationBlob.DeleteIfExists();
    destinationBlob.StartCopyFromBlob(sourceCloudBlob);
    destinationBlob.FetchAttributes();

    if (sourceCloudBlob.Properties.Length != destinationBlob.Properties.Length)
        model.UploadStatusMessage += "Failed to copy as Media Asset!";
}
I'm just posting my comment as the answer to make it easier to see.
It wasn't the access level of the container. It wasn't anything to do with StartCopy either. It turned out to be these lines of code.
var mediaBlobContainer = cloudBlobClient.GetContainerReference(cloudBlobClient.BaseUri + "temporarymedia");
mediaBlobContainer.CreateIfNotExists();
Apparently I shouldn't be supplying the cloudBlobClient.BaseUri, just the name temporarymedia.
var mediaBlobContainer = cloudBlobClient.GetContainerReference("temporarymedia");
There was no relevant error message though. Hopefully it'll save another Azure newbie some time in the future.
I just deleted a task using the Rally website, but when I search for the task using the REST API it isn't returned. I assumed it would be returned with the "Recycled" flag set.
Can anybody help me?
Regards,
Paulo
This is an inconsistency in the WSAPI. Unfortunately all queries are implicitly scoped (Recycled = false) so nothing that has been deleted will ever be returned from the artifact endpoints. There is also no way to access the contents of the recycle bin through the WSAPI.
I would encourage you to vote for the idea for this functionality at https://ideas.rallydev.com/ideas/D2374.
Although it's not ideal, you can get to the Recycle Bin through this REST endpoint:
https://rally1.rallydev.com/slm/webservice/1.40/recyclebin.js?workspace=/workspace/12345678910&project=/project/12345678911
where the long integers are the Workspace and Project OIDs of interest.
Recycle bin entries look like the following:
{
    _rallyAPIMajor: "1",
    _rallyAPIMinor: "40",
    _ref: "https://rally1.rallydev.com/slm/webservice/1.40/recyclebinentry/12345678910.js",
    _refObjectName: "Test Case 3: Load in, run Analysis on Integer Grids",
    _type: "RecycleBinEntry"
}
Note that the Recycle Bin OID is unique and different from the OID of the Artifact that was deleted, so there isn't a good way to map a Recycle Bin Entry back to the Artifact whose deletion created it. The Object Name could work, although you run the risk of duplicates. Recycle Bin Entries also come with the same limitations as the Recycle Bin in the UI: child objects are not shown or accessible.
If you want to walk the Recycle Bin from .NET, here's a quick example:
using System;
using Rally.RestApi;
using Rally.RestApi.Response;

namespace RestExample_QueryRecycleBin
{
    class Program
    {
        static void Main(string[] args)
        {
            // Set Rally parameters
            String userName = "user@company.com";
            String userPassword = "topsecret";
            String rallyURL = "https://rally1.rallydev.com";
            String rallyWSAPIVersion = "1.40";

            // Initialize the REST API
            RallyRestApi restApi = new RallyRestApi(userName,
                                                    userPassword,
                                                    rallyURL,
                                                    rallyWSAPIVersion);

            // Specify workspace and project
            string myWorkspace = "/workspace/12345678910";
            string myProject = "/project/12345678911";

            // Query for recycle bin entries
            Request request = new Request("recyclebinentry");
            request.Workspace = myWorkspace;
            request.Project = myProject;

            QueryResult queryResult = restApi.Query(request);

            foreach (var result in queryResult.Results)
            {
                // Process item
                string itemName = result["_refObjectName"];
                string itemRef = result["_ref"];
                Console.WriteLine(itemRef + ", " + itemName);
            }

            Console.ReadKey();
        }
    }
}
I'm trying to duplicate a file from one bucket to another, but I can't seem to see the new file in the destination bucket.
I'm getting no errors at all...
Request:
Response:
<?xml version="1.0" encoding="UTF-8"?>
<CopyObjectResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <LastModified>2012-04-08T11:26:36.000Z</LastModified>
    <ETag>"a5f9084078981b64737b57dbf1735fcf"</ETag>
</CopyObjectResult>
But I keep checking the Last Modified date on S3 and I can't find any information about this new file, nor can I access it directly:
http://jk-v20.s3.amazonaws.com/PublicFiles/3ff28e21-4801-47c6-a6d0-e370706d303f_Content_Favicon.ico
What am I doing wrong?
Method:
public void DuplicateFileInCloud(string original, string destination)
{
    try
    {
        CopyObjectRequest request = new CopyObjectRequest();

        if (original.StartsWith("http"))
        {
            // could be from other bucket, URL will show all data
            // example: http://jk-v30.s3.amazonaws.com/PredefinedFiles/Favicons/002.ico
            string bucket = getBucketNameFromUrl(original), // jk-v30
                   key = getKeyFromUrl(original);           // PredefinedFiles/Favicons/002.ico

            request.WithSourceBucket(bucket);
            request.WithSourceKey(key);
        }
        else
        {
            // same bucket: copy/paste operation
            request.WithSourceBucket(this.bucketName);
            request.WithSourceKey(original);
        }

        request.WithDestinationBucket(this.bucketName);
        request.WithDestinationKey(destination);
        request.CannedACL = S3CannedACL.PublicRead;

        using (AmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client(this.accessKey, this.secretAccessKey))
        {
            S3Response response = client.CopyObject(request);
            response.Dispose();
        }
    }
    catch (AmazonS3Exception s3Exception)
    {
        throw s3Exception;
    }
}
http://jk-v20.s3.amazonaws.com//PublicFiles/3ff28e21-4801-47c6-a6d0-e370706d303f_Content_Favicon.ico
is where the file is (note the double slash //). If you hit this URL you see the .ico file, so it's something to do with a leading slash, which may be added automatically by your toolset.
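If that is the case, trimming any leading slash off the destination key before building the request should avoid the double-slash path; a sketch using the same old-SDK setters as in the question:

// Sketch: strip a leading '/' so the copied object doesn't land under "//".
string destinationKey = destination.TrimStart('/');
request.WithDestinationBucket(this.bucketName);
request.WithDestinationKey(destinationKey);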
Can you post the request (with headers), captured with something like Fiddler?
The docs indicate that the source path should start with a slash (i.e., fully qualified). Have you tried that?
x-amz-copy-source: /source_bucket/sourceObject
Maybe the framework does that for you, but your destination has a leading slash so maybe...
The code looks correct; I am using something similar in my working application.
It might be helpful to enable server access logging on your S3 buckets to understand what is happening behind the scenes: http://docs.amazonwebservices.com/AmazonS3/latest/dev/ServerLogs.html
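For reference, with the current AWSSDK.S3 (v3) enabling access logging looks roughly like this (the target bucket and prefix are hypothetical):

// Rough sketch (modern AWSSDK.S3): send the bucket's access logs to a
// separate bucket so you can see what S3 actually received for the copy.
await client.PutBucketLoggingAsync(new PutBucketLoggingRequest
{
    BucketName = "jk-v20",                // bucket being debugged
    LoggingConfig = new S3BucketLoggingConfig
    {
        TargetBucketName = "jk-v20-logs", // hypothetical log bucket
        TargetPrefix = "s3-access/"       // hypothetical key prefix
    }
});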