AWS .NET PutObjectRequest using wrong host - c#

I am using the .NET library for Amazon Web Services in an application that uploads images to an Amazon S3 bucket. It is used in an internal service of an ASP.NET 4.5 application. The NuGet package is AWSSDK and its version is the latest stable (as of writing): 2.3.54.2.
When I attempt to call PutObject with the PutObjectRequest (to upload the image blob), it throws an exception and complains that the hostname is wrong.
var accessKey = Config.GetValue("AWSAccessKey");
var secretKey = Config.GetValue("AWSSecretKey");
using (var client = new AmazonS3Client(accessKey, secretKey, config))
{
    var request = new PutObjectRequest();
    request.BucketName = Config.GetValue("PublicBucket");
    request.Key = newFileName;
    request.InputStream = resizedImage;
    request.AutoCloseStream = false;

    using (var uploadTaskResult = client.PutObject(request))
    {
        using (var uploadStream = uploadTaskResult.ResponseStream)
        {
            uploadStream.Seek(0, SeekOrigin.Begin);
            var resultStr = new StreamReader(uploadStream).ReadToEnd();
        }
    }
}
The exception details are as follows:
Fatal unhandled exception in Web API component: System.Net.WebException: The remote name could not be resolved: 'images.ourcompany.com.http'
at System.Net.HttpWebRequest.GetRequestStream(TransportContext& context)
at System.Net.HttpWebRequest.GetRequestStream()
at Amazon.S3.AmazonS3Client.getRequestStreamCallback[T](IAsyncResult result)
at Amazon.S3.AmazonS3Client.endOperation[T](IAsyncResult result)
at Amazon.S3.AmazonS3Client.EndPutObject(IAsyncResult asyncResult)
at Tracks.Application.Services.Bp.BpTemplateService.UploadImage(Byte[] image, String fileName) in ...
I have tried to debug this in VS by stepping through the code, but AWSSDK doesn't come with debug symbols. It should be noted that the remote host name (or bucket name, as I think Amazon calls it) is images.ourcompany.com (not our real company's name!). I have checked the value of Config.GetValue("PublicBucket") and it is indeed images.ourcompany.com. At this stage I have exhausted my limited knowledge of Amazon S3 and have no theories about what causes the exception to be thrown.

I think you have to add a region endpoint and/or set the ServiceURL to establish a connection to Amazon S3. Check the similar questions below:
Coping folder inside AmazonS3 Bucket (c#)
Upload images on Amazon S3. source code
AmazonS3Config cfg = new AmazonS3Config();
cfg.RegionEndpoint = Amazon.RegionEndpoint.SAEast1; // your bucket's region endpoint
string bucketName = "yourBucketName";
AmazonS3Client s3Client = new AmazonS3Client("your access key", "your secret key", cfg);
PutObjectRequest request = new PutObjectRequest()
{
    BucketName = bucketName,
    InputStream = stream,
    Key = fullName
};
s3Client.PutObject(request);
or
AmazonS3Config asConfig = new AmazonS3Config()
{
    ServiceURL = "http://irisdb.s3-ap-southeast2.amazonaws.com/",
    RegionEndpoint = Amazon.RegionEndpoint.APSoutheast2
};
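For reference, here is a rough sketch of how the first option could be wired into the asker's original upload code; the region endpoint below is a placeholder and must match the region the bucket was actually created in.
// Sketch: the asker's upload with an explicit region endpoint (placeholder value).
var config = new AmazonS3Config
{
    RegionEndpoint = Amazon.RegionEndpoint.USEast1   // replace with your bucket's region
};

var accessKey = Config.GetValue("AWSAccessKey");
var secretKey = Config.GetValue("AWSSecretKey");
using (var client = new AmazonS3Client(accessKey, secretKey, config))
{
    var request = new PutObjectRequest
    {
        BucketName = Config.GetValue("PublicBucket"),
        Key = newFileName,
        InputStream = resizedImage,
        AutoCloseStream = false
    };
    client.PutObject(request);
}
Since the bucket name here contains dots (images.ourcompany.com), the path-style/ForcePathStyle approach discussed further down this page may also be relevant.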

Related

C# connect to AWS using Rest Api i.e. without aws library

I need to connect to AWS without using the C# library, i.e. using only the HTTP REST endpoints. Is that possible?
The reason I want to do this is to give customers the flexibility to connect to any service; with the library I have to rely on the library code to connect to the relevant services.
Also, can we create an AWS connection once and use it throughout the session, instead of passing the token or user name and password in headers again and again?
Here is what I tried using the C# AWS library; I need to achieve the same using REST endpoints.
public bool sendMyFileToS3(System.IO.Stream localFilePath, string bucketName, string subDirectoryInBucket, string fileNameInS3)
{
    IAmazonS3 client = new AmazonS3Client(RegionEndpoint.USEast1);
    TransferUtility utility = new TransferUtility(client);
    TransferUtilityUploadRequest request = new TransferUtilityUploadRequest();

    if (string.IsNullOrEmpty(subDirectoryInBucket))
    {
        request.BucketName = bucketName;  // no subdirectory, just the bucket name
    }
    else
    {
        // subdirectory and bucket name
        request.BucketName = bucketName + @"/" + subDirectoryInBucket;
    }
    request.Key = fileNameInS3;           // file name up in S3
    request.InputStream = localFilePath;
    request.ContentType = "";
    utility.Upload(request);              // commencing the transfer
    return true;                          // indicate that the file was sent
}
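Uploading to S3 over plain REST is possible if you implement AWS Signature Version 4 signing yourself. Below is a rough sketch (not production code) of a raw HTTP PUT signed with SigV4; the bucket, region, object key and credentials are placeholders, the key is assumed to need no URL encoding, and a virtual-hosted-style URL is assumed.
// Rough sketch: raw S3 PUT signed with AWS Signature Version 4, no SDK.
// Assumptions: virtual-hosted-style URL, key needs no URL encoding,
// bucket/region/credentials are placeholders you must replace.
using System;
using System.Net.Http;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;

public static class S3RestUploader
{
    static byte[] HmacSha256(byte[] key, string data)
    {
        using (var hmac = new HMACSHA256(key))
            return hmac.ComputeHash(Encoding.UTF8.GetBytes(data));
    }

    static string Hex(byte[] data) =>
        BitConverter.ToString(data).Replace("-", "").ToLowerInvariant();

    static string HexSha256(byte[] data)
    {
        using (var sha = SHA256.Create())
            return Hex(sha.ComputeHash(data));
    }

    public static async Task PutObjectAsync(string accessKey, string secretKey,
        string bucket, string region, string key, byte[] content)
    {
        string host = $"{bucket}.s3.{region}.amazonaws.com";
        DateTime now = DateTime.UtcNow;
        string amzDate = now.ToString("yyyyMMdd'T'HHmmss'Z'");
        string dateStamp = now.ToString("yyyyMMdd");
        string payloadHash = HexSha256(content);

        // 1. Canonical request (header names lower-case and sorted).
        string canonicalRequest = string.Join("\n",
            "PUT",
            "/" + key,
            "",  // no query string
            $"host:{host}\nx-amz-content-sha256:{payloadHash}\nx-amz-date:{amzDate}\n",
            "host;x-amz-content-sha256;x-amz-date",
            payloadHash);

        // 2. String to sign.
        string scope = $"{dateStamp}/{region}/s3/aws4_request";
        string stringToSign = string.Join("\n",
            "AWS4-HMAC-SHA256", amzDate, scope,
            HexSha256(Encoding.UTF8.GetBytes(canonicalRequest)));

        // 3. Derive the signing key and compute the signature.
        byte[] kDate = HmacSha256(Encoding.UTF8.GetBytes("AWS4" + secretKey), dateStamp);
        byte[] kRegion = HmacSha256(kDate, region);
        byte[] kService = HmacSha256(kRegion, "s3");
        byte[] kSigning = HmacSha256(kService, "aws4_request");
        string signature = Hex(HmacSha256(kSigning, stringToSign));

        // 4. Send the PUT with the signed headers.
        using (var http = new HttpClient())
        using (var request = new HttpRequestMessage(HttpMethod.Put, $"https://{host}/{key}"))
        {
            request.Content = new ByteArrayContent(content);
            request.Headers.TryAddWithoutValidation("x-amz-date", amzDate);
            request.Headers.TryAddWithoutValidation("x-amz-content-sha256", payloadHash);
            request.Headers.TryAddWithoutValidation("Authorization",
                $"AWS4-HMAC-SHA256 Credential={accessKey}/{scope}, " +
                "SignedHeaders=host;x-amz-content-sha256;x-amz-date, " +
                $"Signature={signature}");

            HttpResponseMessage response = await http.SendAsync(request);
            response.EnsureSuccessStatusCode();
        }
    }
}
As for reusing a connection: with plain REST there is no session to keep open, since every request carries its own signature derived from your keys, though the HttpClient instance itself can be reused across requests.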

Error when trying to access Azure Function

I have an anonymous function on Azure, successfully deployed (.NET Standard 2.0), using Microsoft.NET.Sdk.Functions (1.0.13), but for some reason it suddenly stopped working, and when I call it the response is:
<ApiErrorModel xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.datacontract.org/2004/07/Microsoft.Azure.WebJobs.Script.WebHost.Models">
  <Arguments xmlns:d2p1="http://schemas.microsoft.com/2003/10/Serialization/Arrays" i:nil="true"/>
  <ErrorCode>0</ErrorCode>
  <ErrorDetails i:nil="true"/>
  <Id>91fab400-3447-4913-878f-715d9d4ab46b</Id>
  <Message>
    An error has occurred. For more information, please check the logs for error ID 91fab400-3447-4913-878f-715d9d4ab46b
  </Message>
  <RequestId>0fb00298-733d-4d88-9e73-c328d024e1bb</RequestId>
  <StatusCode>InternalServerError</StatusCode>
</ApiErrorModel>
How can I figure that out?
EDIT: When I start the Azure Functions environment locally and run the function, it works as expected, without any issues, although I see a message in red in the console,
and searching for it, I stumbled across this GitHub issue:
https://github.com/Azure/azure-functions-host/issues/2765
where I noticed this part:
@m-demydiuk noticed that the Azure function works with this error in the console. So this red error doesn't break the function on the local machine. But I am afraid it may cause problems in other environments.
and it bothers me. Could that be the problem?
I use a library with a version that does not match my target framework, but again, it works fine locally, and it was also working fine on Azure before.
My host version is "Version=2.0.11651.0"
This is the entire function:
public static class Function1
{
    [FunctionName("HTML2IMG")]
    public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequest req, TraceWriter log)
    {
        string url = req.Query["url"];
        byte[] EncodedData = Convert.FromBase64String(url);
        string DecodedURL = Encoding.UTF8.GetString(EncodedData);

        string requestBody = new StreamReader(req.Body).ReadToEnd();
        dynamic data = JsonConvert.DeserializeObject(requestBody);
        DecodedURL = DecodedURL ?? data?.name;

        var api = new HtmlToPdfOrImage.Api("9123314e-219c-342d-a763-0a3dsdf8ad21", "vmZ31vyg");
        var ApiResult = api.Convert(new Uri($"{DecodedURL}"), new HtmlToPdfOrImage.GenerateSettings() { OutputType = HtmlToPdfOrImage.OutputType.Image });

        string BlobName = Guid.NewGuid().ToString("n");
        string ImageURL = await CreateBlob($"{BlobName}.png", (byte[])ApiResult.model, log);

        var Result = new HttpResponseMessage(HttpStatusCode.OK);
        var oJSON = new { url = ImageURL, hash = BlobName };
        var jsonToReturn = JsonConvert.SerializeObject(oJSON);
        Result.Content = new StringContent(jsonToReturn);
        Result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
        return Result;
    }

    private async static Task<string> CreateBlob(string name, byte[] data, TraceWriter log)
    {
        string accessKey = "xxx";
        string accountName = "xxx";
        string connectionString = "DefaultEndpointsProtocol=https;AccountName=" + accountName + ";AccountKey=" + accessKey + ";EndpointSuffix=core.windows.net";

        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient client = storageAccount.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("images");
        await container.CreateIfNotExistsAsync();

        BlobContainerPermissions permissions = await container.GetPermissionsAsync();
        permissions.PublicAccess = BlobContainerPublicAccessType.Container;
        await container.SetPermissionsAsync(permissions);

        CloudBlockBlob blob = container.GetBlockBlobReference(name);
        blob.Properties.ContentType = "image/png";
        using (Stream stream = new MemoryStream(data))
        {
            await blob.UploadFromStreamAsync(stream);
        }
        return blob.Uri.AbsoluteUri;
    }
}
DOUBLE EDIT:
I created an empty v2 .NET Core Azure Function in VS and published it straight away -> I get the same error... I even updated Microsoft.NET.Sdk.Functions to the latest 1.0.14 instead of 1.0.13 and still get the same error. Obviously something in Azure or Visual Studio (15.7.5) is broken?!
Solution
On the Azure portal, go to Function app settings and check your Runtime version. It is probably Runtime version: 1.0.11913.0 (~1) on your side. This is the problem. Change FUNCTIONS_EXTENSION_VERSION to beta in Application settings and your code should work on Azure.
Explanation
You created a v2 function, since your local host version is 2.0.11651.0, so the function runtime online should also be the beta 2.x (latest is 2.0.11933.0).
When you published the function from VS before, you probably saw a prompt asking whether to update the functions runtime version on Azure. You may have chosen No, which is why you got the error.
Note that if we publish through CI/CD like VSTS or Git, such a notification is not available, so we need to make sure those settings are configured correctly.
Suggestions
As you can see, your local host version is 2.0.11651, which is lower than 2.0.11933 on Azure. I recommend updating Azure Functions and Web Jobs Tools (in VS: Tools -> Extensions and Updates) to the latest version (15.0.40617.0) so that VS consumes the latest function runtime.
As for your code, I recommend creating the images container and setting its public access level manually on the portal, since this only needs to be done once.
Then we can use a blob output binding.
Add a StorageConnection setting with your storage connection string to Application settings. If your images container is in the storage account already used by the function app (AzureWebJobsStorage in Application settings), skip this step and delete the Connection parameter below, because bindings use that storage account by default.
Add the blob output binding:
public static async Task<HttpResponseMessage> Run(..., TraceWriter log,
    [Blob("images", FileAccess.Read, Connection = "StorageConnection")] CloudBlobContainer container)
Change the CreateBlob method:
private async static Task<string> CreateBlob(string name, byte[] data, TraceWriter log, CloudBlobContainer container)
{
    CloudBlockBlob blob = container.GetBlockBlobReference(name);
    blob.Properties.ContentType = "image/png";
    using (Stream stream = new MemoryStream(data))
    {
        await blob.UploadFromStreamAsync(stream);
    }
    return blob.Uri.AbsoluteUri;
}

Using AWS SDK to upload file to S3 in .NET Core

I'm attempting to use this documentation to upload a file to my S3 bucket using the AWS SDK. Unfortunately, there does not seem to be any documentation giving an example of how to do this in .NET Core, only how to create and inject an instance of IAmazonS3.
Here is what I have:
private IAmazonS3 client; // being injected
private string bucketName;

using (client)
{
    var request = new PutObjectRequest
    {
        BucketName = bucketName,
        Key = "keyTest",
        ContentBody = "sample text"
    };
    var response = await client.PutObjectAsync(request);
}
When it calls the PutObjectAsync() line, it hangs for 30 seconds or so and then throws a "The HTTP redirect request failed" exception.
All the documentation I'm seeing is for PutObject(), not PutObjectAsync(); the client instance I have only exposes the async methods.
Brutal. I had my IAmazonS3 client pointing at us-west-1, but the bucket was in a different region (us-west-2).
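If the client is registered for injection through the AWSSDK.Extensions.NETCore.Setup package, one way to pin the region explicitly is sketched below; this assumes that package and a standard Startup class, and the region value should be whatever region your bucket actually lives in.
// Sketch: registering IAmazonS3 with an explicit region in ASP.NET Core DI,
// assuming the AWSSDK.Extensions.NETCore.Setup package is installed.
using Amazon;
using Amazon.S3;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;
    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        var awsOptions = Configuration.GetAWSOptions(); // reads the "AWS" section of appsettings.json
        awsOptions.Region = RegionEndpoint.USWest2;     // must match the region the bucket was created in
        services.AddDefaultAWSOptions(awsOptions);
        services.AddAWSService<IAmazonS3>();            // IAmazonS3 is now injectable with the right region
    }
}
Alternatively, setting "Region": "us-west-2" in the "AWS" section of appsettings.json should achieve the same thing, since GetAWSOptions reads it from there.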

AWS S3 .Net copy object that key contains dots at end

I use AWSSDK for .NET and my code for copying a file is:
CopyObjectRequest request = new CopyObjectRequest()
{
    SourceBucket = _bucketName,
    SourceKey = sourceObjectKey,
    DestinationBucket = _bucketName,
    DestinationKey = targetObjectKey
};
CopyObjectResponse response = amazonS3Client.CopyObject(request);
The code works perfectly for normal files, but when I try to copy a file with a name like 'mage...' I get the following error message:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
Is there any way to copy objects with that type of file name?
I used the following C# code to copy files between S3 folders.
AmazonS3Config cfg = new AmazonS3Config();
cfg.RegionEndpoint = Amazon.RegionEndpoint.EUCentral1; // my bucket has this region
string bucketName = "your bucket";
AmazonS3Client s3Client = new AmazonS3Client("your access key", "your secret key", cfg);
S3FileInfo sourceFile = new S3FileInfo(s3Client, bucketName, "FolderNameUniTest179/Test.test.test.pdf");
S3DirectoryInfo targetDir = new S3DirectoryInfo(s3Client, bucketName, "Test");
sourceFile.CopyTo(targetDir);
S3FileInfo sourceFile2 = new S3FileInfo(s3Client, bucketName, "FolderNameUniTest179/Test...pdf");
sourceFile2.CopyTo(targetDir);
I am using Amazon AWSSDK.Core and AWSSDK.S3 version 3.1.0.0 for .NET 3.5. I hope it helps.

Error in uploading file on S3 when bucket name containing periods(dots) through c# SDK

I have created two buckets on S3, named "demobucket" and "demo.bucket".
When I upload any file to "demobucket" it works fine, but when I upload a file to "demo.bucket", it gives me the error "Maximum number of retry attempts reached : 3".
My question is: what is the problem with uploading a file when the bucket name contains periods (dots)?
My code is:
public static bool UploadResumeFileToS3(string uploadAsFileName, Stream ImageStream, S3CannedACL filePermission, S3StorageClass storageType)
{
    try
    {
        AmazonS3 client = Amazon.AWSClientFactory.CreateAmazonS3Client(MY_AWS_ACCESS_KEY_ID, MY_AWS_SECRET_KEY);
        PutObjectRequest request = new PutObjectRequest();
        request.WithKey(uploadAsFileName);
        request.WithInputStream(ImageStream);
        request.WithBucketName("demo.bucket");
        request.CannedACL = filePermission;
        request.StorageClass = storageType;
        client.PutObject(request);
        client.Dispose();
    }
    catch
    {
        return false;
    }
    return true;
}
There is a problem establishing a secure connection to S3 when the bucket name contains a period. The issue is explained well here: http://shlomoswidler.com/2009/08/amazon-s3-gotcha-using-virtual-host.html.
One solution is to create your S3 client passing a third argument which causes it to use HTTP instead of HTTPS. See Amazon S3 C# SDK "The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel." Error.
AmazonS3Config S3Config = new AmazonS3Config()
{
    ServiceURL = "s3.amazonaws.com",
    CommunicationProtocol = Protocol.HTTP,
    RegionEndpoint = region
};
However, be aware that this is not secure and Amazon does not recommend it. Your secret access key could possibly be intercepted. I have not yet found a secure way to upload a file to a bucket with a period in the name.
If you are using a recent version of AWSSDK.dll (at least 2.0.13.0), you can do this instead:
AmazonS3Config S3Config = new AmazonS3Config()
{
    ServiceURL = "s3.amazonaws.com",
    ForcePathStyle = true,
    RegionEndpoint = region
};
This forces the S3 client to use path-style addressing for your bucket, which avoids the problem.
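For example, here is a short usage sketch combining that config with the upload from the question, using the SDK 2.x property-style request; the region, credentials, key and stream variables are placeholders taken from the question's code.
// Sketch: path-style upload to a bucket whose name contains dots, assuming AWSSDK 2.0.13.0 or later.
var s3Config = new AmazonS3Config()
{
    ServiceURL = "s3.amazonaws.com",
    ForcePathStyle = true,            // requests go to https://s3.amazonaws.com/demo.bucket/... 
    RegionEndpoint = region
};

using (var client = new AmazonS3Client(MY_AWS_ACCESS_KEY_ID, MY_AWS_SECRET_KEY, s3Config))
{
    var request = new PutObjectRequest
    {
        BucketName = "demo.bucket",   // the dotted bucket name from the question
        Key = uploadAsFileName,
        InputStream = ImageStream,
        CannedACL = filePermission,
        StorageClass = storageType
    };
    client.PutObject(request);
}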
