I use the AWS SDK for .NET, and my code to copy a file is:
CopyObjectRequest request = new CopyObjectRequest()
{
    SourceBucket = _bucketName,
    SourceKey = sourceObjectKey,
    DestinationBucket = _bucketName,
    DestinationKey = targetObjectKey
};
CopyObjectResponse response = amazonS3Client.CopyObject(request);
The code works perfectly for normal files, but when I try to copy a file with a name like 'mage...' I get the following error message:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
Is there any way to copy objects with that kind of file name?
I used the following C# code to copy files between S3 folders.
AmazonS3Config cfg = new AmazonS3Config();
cfg.RegionEndpoint = Amazon.RegionEndpoint.EUCentral1; // my bucket has this region
string bucketName = "your bucket";
AmazonS3Client s3Client = new AmazonS3Client("your access key", "your secret key", cfg);
S3FileInfo sourceFile = new S3FileInfo(s3Client, bucketName, "FolderNameUniTest179/Test.test.test.pdf");
S3DirectoryInfo targetDir = new S3DirectoryInfo(s3Client, bucketName, "Test");
sourceFile.CopyTo(targetDir);
S3FileInfo sourceFile2 = new S3FileInfo(s3Client, bucketName, "FolderNameUniTest179/Test...pdf");
sourceFile2.CopyTo(targetDir);
I am using the Amazon AWSSDK.Core and AWSSDK.S3 packages, version 3.1.0.0, for .NET 3.5. I hope this helps.
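If you prefer to keep using CopyObjectRequest, here is a minimal sketch of the same copy with the client pinned to the bucket's region; the bucket and key names are placeholders:
// Sketch: copy an object whose key contains consecutive dots via CopyObjectRequest.
var cfg = new AmazonS3Config { RegionEndpoint = Amazon.RegionEndpoint.EUCentral1 };
var s3Client = new AmazonS3Client("your access key", "your secret key", cfg);

var request = new CopyObjectRequest
{
    SourceBucket = "your bucket",
    SourceKey = "FolderNameUniTest179/Test...pdf",
    DestinationBucket = "your bucket",
    DestinationKey = "Test/Test...pdf"
};
s3Client.CopyObject(request);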
I have a file in my S3 bucket, and I want to access it from a Lambda function.
When I pass the path of this file to one of the methods, I get the error:
Could not find a part of the path '/var/task/https:/s3.amazonaws.com/TestBucket/testuser/AWS_sFTP_Key.pem'.
For example:
TestMethod("https://s3.amazonaws.com/TestBucket/testuser/AWS_sFTP_Key.pem")
code:
public void FunctionHandler(S3Event s3Event, ILambdaContext lambdaContext)
{
    ConnectionInfo connectionInfo = new ConnectionInfo("xxx.xxx.xx.xxx", "testuser",
        new AuthenticationMethod[] {
            new PrivateKeyAuthenticationMethod("testuser", new PrivateKeyFile[] {
                new PrivateKeyFile("https://s3.amazonaws.com/TestBucket/testuser/AWS_sFTP_Key.pem") })
        });
    SftpClient sftpClient = new SftpClient(connectionInfo);
    sftpClient.Connect();
    lambdaContext.Logger.Log(sftpClient.WorkingDirectory);
    sftpClient.Disconnect();
}
You can use the AWS SDK to read the file from S3 as shown below; however, I would suggest using AWS Certificate Manager or IAM for storing and managing your certificates and keys.
PS: Make sure you assign the proper role to your Lambda function, or a bucket policy to your bucket, so that it is able to GetObject from S3:
RegionEndpoint bucketRegion = RegionEndpoint.USWest2; // region where your file is stored
AmazonS3Client client = new AmazonS3Client(bucketRegion);
GetObjectRequest request = new GetObjectRequest();
request.BucketName = BUCKET_NAME; // TestBucket
request.Key = S3_KEY;             // testuser/AWS_sFTP_Key.pem
GetObjectResponse response = client.GetObject(request);
StreamReader reader = new StreamReader(response.ResponseStream);
String content = reader.ReadToEnd();
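For the original SFTP scenario, here is a minimal sketch that feeds the downloaded key straight into SSH.NET without touching the local file system. It assumes SSH.NET's PrivateKeyFile(Stream) constructor; the host, bucket, and key names are placeholders:
// Sketch: pass the key downloaded from S3 directly to SSH.NET (no temp file needed).
using (GetObjectResponse keyResponse = client.GetObject(new GetObjectRequest
{
    BucketName = "TestBucket",        // placeholder
    Key = "testuser/AWS_sFTP_Key.pem" // placeholder
}))
{
    var keyFile = new PrivateKeyFile(keyResponse.ResponseStream);
    var connectionInfo = new ConnectionInfo("xxx.xxx.xx.xxx", "testuser",
        new PrivateKeyAuthenticationMethod("testuser", keyFile));
    using (var sftpClient = new SftpClient(connectionInfo))
    {
        sftpClient.Connect();
        // ... use sftpClient as in the question's handler ...
        sftpClient.Disconnect();
    }
}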
More Help:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html
https://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/RetrievingObjectUsingNetSDK.html
I have the code below, which attempts to create a blank file in my Azure File Storage account:
CloudStorageAccount sa = CloudStorageAccount.Parse(connectionString);
var fc = sa.CreateCloudFileClient();
var share = fc.GetShareReference("uploadparking");
share.CreateIfNotExists();
var rootDirectory = share.GetRootDirectoryReference();
var subDirectory = rootDirectory.GetDirectoryReference("valuationrequests");
subDirectory.CreateIfNotExists();
var uri = new Uri(subDirectory.Uri.ToString() + "/file.txt");
var file = new CloudFile(uri);
file.Create(0);
On the final line I am getting the following exception:
'Microsoft.WindowsAzure.Storage.StorageException' occurred in Microsoft.WindowsAzure.Storage.dll
Additional information: The remote server returned an error: (404) Not Found.
I'm not sure what it can't find; it shouldn't be trying to find the file, since it's creating it. I have confirmed the directories exist.
If anyone knows how I can go about creating a file successfully, please let me know. I've checked the tutorials, and they sadly only show how to download a file, not how to upload one.
I believe the documentation is incorrect. It only mentions that the URI should be absolute; it fails to mention that if you're using an absolute URI, you should also pass storage credentials, or the URI should include a Shared Access Signature with at least Create permission, to create a file.
You should try using the following overload of the CloudFile constructor to create an instance: https://learn.microsoft.com/en-us/dotnet/api/microsoft.windowsazure.storage.file.cloudfile.-ctor?view=azurestorage-8.1.3#Microsoft_WindowsAzure_Storage_File_CloudFile__ctor_System_Uri_Microsoft_WindowsAzure_Storage_Auth_StorageCredentials_
So your code would be:
CloudStorageAccount sa = CloudStorageAccount.Parse(connectionString);
var fc = sa.CreateCloudFileClient();
var share = fc.GetShareReference("uploadparking");
share.CreateIfNotExists();
var rootDirectory = share.GetRootDirectoryReference();
var subDirectory = rootDirectory.GetDirectoryReference("valuationrequests");
subDirectory.CreateIfNotExists();
var uri = new Uri(subDirectory.Uri.ToString() + "/file.txt");
var file = new CloudFile(uri, sa.Credentials);
file.Create(0);
The other alternative would be to create a Shared Access Signature (SAS) token on the share and use a SAS URL when creating the CloudFile instance. In that case your code would be:
CloudStorageAccount sa = CloudStorageAccount.Parse(connectionString);
var fc = sa.CreateCloudFileClient();
var share = fc.GetShareReference("uploadparking");
share.CreateIfNotExists();
var rootDirectory = share.GetRootDirectoryReference();
var subDirectory = rootDirectory.GetDirectoryReference("valuationrequests");
subDirectory.CreateIfNotExists();
SharedAccessFilePolicy policy = new SharedAccessFilePolicy()
{
    Permissions = SharedAccessFilePermissions.Create,
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(15)
};
var sasToken = share.GetSharedAccessSignature(policy);
var uri = new Uri(subDirectory.Uri.ToString() + "/file1.txt" + sasToken);
var file = new CloudFile(uri);
file.Create(0);
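Since the original goal was to upload rather than just create an empty file, note that an authenticated CloudFile can also write content in one call. This is a sketch using the credentials-based constructor from the first alternative, with placeholder content (a SAS URL would additionally need Write permission):
// Sketch: write content directly instead of calling Create(0) first.
var file = new CloudFile(uri, sa.Credentials);
file.UploadText("placeholder file content");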
I received the following error while sending a ListObjectsRequest:
The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256
Following this answer, I updated AmazonS3Config as follows:
var amazonS3Config = new AmazonS3Config
{
    SignatureVersion = "4",
    ServiceURL = bucketName,
    RegionEndpoint = RegionEndpoint.USEast1,
    SignatureMethod = SigningAlgorithm.HmacSHA256
};
var s3Client = new AmazonS3Client(accessKeyID, secretKey, amazonS3Config);
But I still receive this error. What have I missed here?
Thanks.
Try using the latest version of the Amazon S3 SDK. I think ServiceURL is not necessary when you know the region endpoint; I use it with a private-cloud Amazon S3, or when I do not know the region endpoint. With the region set, I can retrieve the information from Amazon using the following code:
var amazonS3Config = new AmazonS3Config();
// the region endpoint for Frankfurt, for example, is RegionEndpoint.EUCentral1;
// see the Amazon S3 region list: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
amazonS3Config.RegionEndpoint = RegionEndpoint.USEast1;
var s3Client = new AmazonS3Client("your access key", "your secret key", amazonS3Config);
S3DirectoryInfo dir = new S3DirectoryInfo(s3Client, "your bucket name", "your folder path without bucket name");
Console.WriteLine(dir.GetFiles().Count());
Using this, I am able to work in the eu-west-2 region:
AmazonS3Config config = new AmazonS3Config();
config.SignatureVersion = "4";
config.RegionEndpoint = Amazon.RegionEndpoint.GetBySystemName("eu-west-2");
config.SignatureMethod = Amazon.Runtime.SigningAlgorithm.HmacSHA256;
The region should match the region your bucket was created in.
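As a side note, the SDK also exposes a global switch for Signature Version 4. A sketch, assuming a reasonably recent AWSSDK.S3 where Amazon.AWSConfigsS3 is available:
// Sketch: force Signature Version 4 globally before creating any S3 clients.
Amazon.AWSConfigsS3.UseSignatureVersion4 = true;

var config = new AmazonS3Config
{
    RegionEndpoint = Amazon.RegionEndpoint.GetBySystemName("eu-west-2")
};
var s3Client = new AmazonS3Client("your access key", "your secret key", config);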
I am attempting to copy an S3 object using a valid key, but CopyObject() keeps returning "The specified key does not exist."
However, GetObject and ListObjects return the object without issue.
Here's a sample:
var copyReq = new CopyObjectRequest
{
    SourceBucket = bucketName,
    SourceKey = key,
    DestinationBucket = bucketName,
    DestinationKey = ("/UserImages/mynewkeyname.jpg"),
    StorageClass = S3StorageClass.StandardInfrequentAccess,
    CannedACL = Amazon.S3.S3CannedACL.PublicRead
};
s3.CopyObject(copyReq);
ListObjectsResponse listResponse = s3.ListObjects(new ListObjectsRequest
{
    BucketName = bucketName,
    MaxKeys = 1,
    Prefix = key
});
var request = new Amazon.S3.Model.GetObjectRequest()
{
    BucketName = bucketName,
    Key = key,
};
var getResponse = s3.GetObject(request);
key and bucketName are defined elsewhere; they were pulled from other API responses.
Based on this line:
DestinationKey = ("/UserImages/mynewkeyname.jpg"),
It seems like a reasonable assumption that you have made the same error with the source as you have with the destination.
The root of a bucket is the empty string, not /, so S3 object keys do not begin with a leading slash.
To clarify, the object key for https://example.s3.amazonaws.com/test.jpg is test.jpg rather than /test.jpg.
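A minimal sketch of the fix, assuming the rest of your request stays the same (the TrimStart call and the new key value are illustrative):
// Strip any accidental leading slash before using the key with CopyObject.
string sourceKey = key.TrimStart('/');

var copyReq = new CopyObjectRequest
{
    SourceBucket = bucketName,
    SourceKey = sourceKey,
    DestinationBucket = bucketName,
    DestinationKey = "UserImages/mynewkeyname.jpg", // no leading slash
    StorageClass = S3StorageClass.StandardInfrequentAccess,
    CannedACL = Amazon.S3.S3CannedACL.PublicRead
};
s3.CopyObject(copyReq);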
You can also use the S3FileInfo.CopyTo, S3FileInfo.CopyToLocal, or S3FileInfo.CopyFromLocal methods to perform the copy; it is a more elegant way to do it with AWSSDK 3.1.0 in C#.
I am using the .NET library for Amazon Web Services for an application that uploads images to an Amazon S3 bucket. It is used in an internal service of an ASP.NET 4.5 application. The NuGet package name is AWSSDK and its version is the latest (as of writing) stable: 2.3.54.2
When I attempt to call the client's PutObject method with a PutObjectRequest (to upload the image blob), it throws an exception complaining that the hostname is wrong.
var accessKey = Config.GetValue("AWSAccessKey");
var secretKey = Config.GetValue("AWSSecretKey");
using (var client = new AmazonS3Client(accessKey, secretKey, config))
{
    var request = new PutObjectRequest();
    request.BucketName = Config.GetValue("PublicBucket");
    request.Key = newFileName;
    request.InputStream = resizedImage;
    request.AutoCloseStream = false;
    using (var uploadTaskResult = client.PutObject(request))
    {
        using (var uploadStream = uploadTaskResult.ResponseStream)
        {
            uploadStream.Seek(0, SeekOrigin.Begin);
            var resultStr = new StreamReader(uploadStream).ReadToEnd();
        }
    }
}
The exception details are as follows:
Fatal unhandled exception in Web API component: System.Net.WebException: The remote name could not be resolved: 'images.ourcompany.com.http'
at System.Net.HttpWebRequest.GetRequestStream(TransportContext& context)
at System.Net.HttpWebRequest.GetRequestStream()
at Amazon.S3.AmazonS3Client.getRequestStreamCallback[T](IAsyncResult result)
at Amazon.S3.AmazonS3Client.endOperation[T](IAsyncResult result)
at Amazon.S3.AmazonS3Client.EndPutObject(IAsyncResult asyncResult)
at Tracks.Application.Services.Bp.BpTemplateService.UploadImage(Byte[] image, String fileName) in ...
I have tried to debug this in VS by stepping through the code but AWSSDK doesn't come with debug symbols. It should be noted that the remote host name (or bucket name as I think Amazon calls them) is images.ourcompany.com (not our real company's name!). I have checked the value of Config.GetValue("PublicBucket") and it is indeed images.ourcompany.com. At this stage I have exhausted my limited knowledge about Amazon S3 and have no theories about what causes the exception to be thrown.
I think you have to add a region endpoint and/or set ServiceURL to establish the connection to Amazon S3. Check the similar questions below:
Coping folder inside AmazonS3 Bucket (c#)
Upload images on Amazon S3. source code
AmazonS3Config cfg = new AmazonS3Config();
cfg.RegionEndpoint = Amazon.RegionEndpoint.SAEast1; // your region endpoint
string bucketName = "yourBucketName";
AmazonS3Client s3Client = new AmazonS3Client("your access key",
    "your secret key", cfg);
PutObjectRequest request = new PutObjectRequest()
{
    BucketName = bucketName,
    InputStream = stream,
    Key = fullName
};
s3Client.PutObject(request);
or
AmazonS3Config asConfig = new AmazonS3Config()
{
    ServiceURL = "http://irisdb.s3-ap-southeast-2.amazonaws.com/",
    RegionEndpoint = Amazon.RegionEndpoint.APSoutheast2
};
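If the bucket name contains dots, as images.ourcompany.com does, it may also be worth trying path-style addressing so the bucket name is not folded into the hostname. This is a sketch, assuming AmazonS3Config's ForcePathStyle property and a guessed region:
// Sketch: path-style addressing keeps the dotted bucket name out of the hostname.
var config = new AmazonS3Config
{
    RegionEndpoint = Amazon.RegionEndpoint.USEast1, // assumption: use your bucket's region
    ForcePathStyle = true
};
using (var client = new AmazonS3Client(accessKey, secretKey, config))
{
    var request = new PutObjectRequest
    {
        BucketName = "images.ourcompany.com",
        Key = newFileName,
        InputStream = resizedImage,
        AutoCloseStream = false
    };
    client.PutObject(request);
}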