I have an anonymous Azure Function, successfully deployed (.NET Standard 2.0, Microsoft.NET.Sdk.Functions 1.0.13), but it suddenly stopped working, and when I call it the response is:
<ApiErrorModel xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.datacontract.org/2004/07/Microsoft.Azure.WebJobs.Script.WebHost.Models">
<Arguments xmlns:d2p1="http://schemas.microsoft.com/2003/10/Serialization/Arrays" i:nil="true"/>
<ErrorCode>0</ErrorCode>
<ErrorDetails i:nil="true"/>
<Id>91fab400-3447-4913-878f-715d9d4ab46b</Id>
<Message>
An error has occurred. For more information, please check the logs for error ID 91fab400-3447-4913-878f-715d9d4ab46b
</Message>
<RequestId>0fb00298-733d-4d88-9e73-c328d024e1bb</RequestId>
<StatusCode>InternalServerError</StatusCode>
</ApiErrorModel>
How can I figure out what is wrong?
EDIT: When I start the Azure Functions environment locally and run the function, it works as expected, without any issues, although I see a red error message in the console,
and searching for it I stumbled across this GitHub issue:
https://github.com/Azure/azure-functions-host/issues/2765
where I noticed this part:
@m-demydiuk noticed that the Azure Function works with this error in the console. So this red error doesn't break the function on the local machine, but I am afraid it may cause problems in other environments.
and it bothers me. Could this be the problem?
I use a library whose version does not match my target framework, but again, it works fine locally, and it was also working fine on Azure before.
My host version is "Version=2.0.11651.0"
This is the entire function:
public static class Function1
{
    [FunctionName("HTML2IMG")]
    public static async Task<HttpResponseMessage> Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]HttpRequest req, TraceWriter log)
    {
        string url = req.Query["url"];
        byte[] EncodedData = Convert.FromBase64String(url);
        string DecodedURL = Encoding.UTF8.GetString(EncodedData);
        string requestBody = new StreamReader(req.Body).ReadToEnd();
        dynamic data = JsonConvert.DeserializeObject(requestBody);
        DecodedURL = DecodedURL ?? data?.name;
        var api = new HtmlToPdfOrImage.Api("9123314e-219c-342d-a763-0a3dsdf8ad21", "vmZ31vyg");
        var ApiResult = api.Convert(new Uri($"{DecodedURL}"), new HtmlToPdfOrImage.GenerateSettings() { OutputType = HtmlToPdfOrImage.OutputType.Image });
        string BlobName = Guid.NewGuid().ToString("n");
        string ImageURL = await CreateBlob($"{BlobName}.png", (byte[])ApiResult.model, log);
        var Result = new HttpResponseMessage(HttpStatusCode.OK);
        var oJSON = new { url = ImageURL, hash = BlobName };
        var jsonToReturn = JsonConvert.SerializeObject(oJSON);
        Result.Content = new StringContent(jsonToReturn);
        Result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
        return Result;
    }
    private async static Task<string> CreateBlob(string name, byte[] data, TraceWriter log)
    {
        string accessKey = "xxx";
        string accountName = "xxx";
        string connectionString = "DefaultEndpointsProtocol=https;AccountName=" + accountName + ";AccountKey=" + accessKey + ";EndpointSuffix=core.windows.net";
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient client = storageAccount.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("images");
        await container.CreateIfNotExistsAsync();
        BlobContainerPermissions permissions = await container.GetPermissionsAsync();
        permissions.PublicAccess = BlobContainerPublicAccessType.Container;
        await container.SetPermissionsAsync(permissions);
        CloudBlockBlob blob = container.GetBlockBlobReference(name);
        blob.Properties.ContentType = "image/png";
        using (Stream stream = new MemoryStream(data))
        {
            await blob.UploadFromStreamAsync(stream);
        }
        return blob.Uri.AbsoluteUri;
    }
}
DOUBLE EDIT:
I created an empty v2 .NET Core Azure Function in VS and published it straight away -> I get the same error... I even updated Microsoft.NET.Sdk.Functions from 1.0.13 to the latest 1.0.14 and still get the same error. Obviously something in Azure or Visual Studio (15.7.5) is broken?!
Solution
On the Azure portal, go to Function app settings and check your Runtime version. It is probably Runtime version: 1.0.11913.0 (~1) on your side; this is the problem. Change FUNCTIONS_EXTENSION_VERSION to beta in Application settings and your code should work on Azure.
Explanation
You created a v2 function, as your local host version is 2.0.11651.0, so the function runtime online should also be on the beta 2.x track (latest is 2.0.11933.0).
When you published functions from VS before, you probably saw a prompt asking whether to update the functions version on Azure.
You may have chosen No, which is why you got the error.
Note that if we publish through CI/CD like VSTS or Git, no such notification is available, so we need to make sure those settings are configured correctly.
Suggestions
As you can see, your local host version is 2.0.11651, which is lower than 2.0.11933 on Azure. I recommend updating Azure Functions and Web Jobs Tools (in VS, Tools -> Extensions and Updates) to the latest version (15.0.40617.0) so that VS consumes the latest function runtime.
As for your code, I recommend creating the images container and setting its public access level manually on the portal, since that only needs to be done once.
Then we can use a blob output binding.
Add a StorageConnection entry with your storage connection string to Application settings. If your images container is in the storage account already used by the function app (AzureWebJobsStorage in Application settings), skip this step and delete the Connection parameter below, because bindings use that storage account by default.
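For local testing, the same setting goes in local.settings.json. A minimal sketch with placeholder values (adjust the names to your app):
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<function app storage connection string>",
    "StorageConnection": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
  }
}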
Add blob output bindings
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
    TraceWriter log,
    [Blob("images", FileAccess.Read, Connection = "StorageConnection")] CloudBlobContainer container)
Change CreateBlob method
private async static Task<string> CreateBlob(string name, byte[] data, TraceWriter log, CloudBlobContainer container)
{
    CloudBlockBlob blob = container.GetBlockBlobReference(name);
    blob.Properties.ContentType = "image/png";
    using (Stream stream = new MemoryStream(data))
    {
        await blob.UploadFromStreamAsync(stream);
    }
    return blob.Uri.AbsoluteUri;
}
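With the container injected by the binding, the call site inside Run would change accordingly; a minimal sketch based on the original function body:
// The container now comes from the blob output binding instead of being built inside CreateBlob.
string ImageURL = await CreateBlob($"{BlobName}.png", (byte[])ApiResult.model, log, container);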
Related
I am using Azure Blob Storage to copy a Dropbox file. But when I try to copy the file via URL, I get a 500 error and TotalBytes is -1.
I am using the StartCopy method of the WindowsAzure.Storage.Blob package, but I get copyStatus.TotalBytes as -1 and the copy does not work.
I tried all of the following kinds of URLs:
https://dl.dropboxusercontent.com/s/1v9re1dozilpdgi/1_32min.mp4?dl=0
https://dl.dropboxusercontent.com/s/1v9re1dozilpdgi/1_32min.mp4?dl=1
https://www.dropbox.com/s/1v9re1dozilpdgi/1_32min.mp4?dl=0
Can you please help me solve this issue? Does anything need to change in the URL, or is there another way to copy Dropbox media to Azure Blob Storage?
Also, I am using .NET Framework 4.8 with C#.
Sample Code:
string url = "https://dl.dropboxusercontent.com/s/1v9re1dozilpdgi/1_32min.mp4?dl=0";
Uri fileUri = new Uri(url);
string filename = "test-file.mp4";
var account = CloudStorageAccount.Parse(connectionstring);
var blobClient = account.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("test-container");
var blob = container.GetBlockBlobReference(filename);
blob.DeleteIfExists();
blob.StartCopy(fileUri);
var refBlob = (CloudBlockBlob)container.GetBlobReferenceFromServer(filename);
var fileLength = refBlob.CopyState.TotalBytes ?? 0;
while (refBlob.CopyState.Status == CopyStatus.Pending)
{
    refBlob = (CloudBlockBlob)container.GetBlobReferenceFromServer(filename);
    var copyStatus = refBlob.CopyState;
}
Error message: 500 InternalServerError "Copy failed."
We need to use the CloudBlockBlob constructor instead of GetBlockBlobReference,
because GetBlockBlobReference takes a blob name, not a URL, as its argument.
For more information, please refer to
the SO thread suggested by @Tobias Tengler
and this blog: Azure – Upload and Download data using C#.NET
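A minimal sketch of that suggestion, assuming the same WindowsAzure.Storage package (the account name, key, and destination URI are placeholders):
// The CloudBlockBlob constructor accepts a full blob URI plus credentials,
// whereas GetBlockBlobReference expects only a blob name within the container.
var credentials = new StorageCredentials("<account-name>", "<account-key>");
var destBlob = new CloudBlockBlob(
    new Uri("https://<account-name>.blob.core.windows.net/test-container/test-file.mp4"),
    credentials);
destBlob.StartCopy(new Uri("https://dl.dropboxusercontent.com/s/1v9re1dozilpdgi/1_32min.mp4?dl=1"));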
I'm a complete noob at C# and know very little about Azure APIs; I'm a CS student doing a project for work. I built some middleware from YouTube tutorials that authenticates with a storage account using a connection string, and it enumerates, uploads, downloads, and deletes blobs within a container. The issue I'm having lies with ONLY the downloading functionality, and ONLY when the storage account access is set to private. This function works fine with anonymous access. I suspect the issue is with how the URL is built, and I'm not sure how to fix it. The blobs are mainly CSV data, if that matters. Any help or direction to resources would be greatly appreciated 🙂 Here is the relevant code:
URL function
public async Task<string> GetBlob(string name, string containerName)
{
    var containerClient = _blobClient.GetBlobContainerClient(containerName);
    var blobClient = containerClient.GetBlobClient(name);
    return blobClient.Uri.AbsoluteUri;
}
The config file
"AllowedHosts": "*",
"BlobConnection" : "<mystringconnection>******==;EndpointSuffix=core.windows.net"
action request
[HttpGet]
public async Task<IActionResult> ViewFile(string name)
{
    var res = await _blobService.GetBlob(name, "<mystorageacc>");
    return Redirect(res);
}
The reason you are not able to download the blobs from a private container is that you are simply returning the blob's URL from your method without any authorization information. Requests to access blobs in a private container must be authorized.
What you need to do is create a Shared Access Signature (SAS) with at least Read permission and return that SAS URL. The method you want is GenerateSasUri. Your code would be something like:
public async Task<string> GetBlob(string name, string containerName)
{
    var containerClient = _blobClient.GetBlobContainerClient(containerName);
    var blobClient = containerClient.GetBlobClient(name);
    // GenerateSasUri returns the SAS URI directly.
    return blobClient.GenerateSasUri(BlobSasPermissions.Read, DateTime.UtcNow.AddMinutes(5)).AbsoluteUri;
}
This will give you a link which is valid for 5 minutes from the time of creation and has the permission to read (download) the blob.
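One caveat worth noting: GenerateSasUri only works when the BlobClient was constructed with a shared key credential (for example, from a connection string). You can guard on CanGenerateSasUri first; a minimal sketch:
// CanGenerateSasUri is false when the client was built with, e.g., a token
// credential instead of an account key; GenerateSasUri would then throw.
if (!blobClient.CanGenerateSasUri)
{
    throw new InvalidOperationException("This BlobClient cannot generate SAS URIs with its current credential.");
}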
If you want to download from the blob service:
public async Task<byte[]> ReadFileAsync(string path)
{
    using var ms = new MemoryStream();
    var blob = _client.GetBlobClient(path);
    await blob.DownloadToAsync(ms);
    return ms.ToArray();
}
If you want to return the downloaded file as a byte array from a controller, you can check this:
https://stackoverflow.com/a/3605510/3024129
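For illustration, a minimal sketch of such a controller action built on the ReadFileAsync service above (the "text/csv" content type is an assumption, since the blobs are mainly CSV data):
[HttpGet]
public async Task<IActionResult> DownloadFile(string name)
{
    // Read the blob into memory via the service method above,
    // then return it as a file download.
    byte[] bytes = await _blobService.ReadFileAsync(name);
    return File(bytes, "text/csv", name);
}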
If you want to set a blob container's public access level, see:
https://learn.microsoft.com/en-us/azure/storage/blobs/anonymous-read-access-configure
Or you can connect with Azure Storage Explorer and change the access level there, the easy way.
This worked for me by returning a byte array:
byte[] base64ImageRepresentation = new byte[] { };
// BlobClient takes the connection string, container name, and blob name as separate arguments.
BlobClient blobClient = new BlobClient(blobConnectionString,
    blobContainerUserDocs, fileName);
if (await blobClient.ExistsAsync())
{
    using var ms = new MemoryStream();
    await blobClient.DownloadToAsync(ms);
    return ms.ToArray();
}
return base64ImageRepresentation;
I need to programmatically backup/export a SQL database (either in Azure, or a compatible one on-prem) to Azure Storage, and restore it to another SQL database. I would like to use only NuGet packages for code dependencies, since I cannot guarantee that either the build or production servers will have the Azure SDK installed. I cannot find any code examples for what I assume would be a common action. The closest I found was this:
https://blog.hompus.nl/2013/03/13/backup-your-azure-sql-database-to-blob-storage-using-code/
But, this code exports to a local bacpac file (requiring RoleEnvironment, an SDK-only object). I would think there should be a way to directly export to Blob Storage, without the intermediary file. One thought was to create a Stream, and then run:
services.ExportBacpac(stream, "dbnameToBackup")
And then write the stream to storage; however, a MemoryStream wouldn't work -- this could be a massive database (100-200 GB).
What would be a better way to do this?
Based on my test, Microsoft Azure SQL Management Library 0.51.0-prerelease supports exporting the SQL database .bacpac file directly to Azure Storage.
We can use sqlManagementClient.ImportExport.Export(resourceGroup, azureSqlServer, azureSqlDatabase, exportRequestParameters) to export the .bacpac file to Azure Storage.
However, ImportExport is not in the latest version of the Microsoft Azure SQL Management Library SDK, so we can only use the 0.51.0-prerelease SDK.
For more details about how to use the library to export a SQL backup to Azure Blob Storage, refer to the steps and code below.
Prerequisites:
Register an app in Azure AD and create a service principal for it. For detailed steps on how to register the app and get an access token, please refer to the documentation.
Detailed code:
Notice: replace clientId, tenantId, secretKey, and subscriptionId with your registered Azure AD information; replace azureSqlDatabase, resourceGroup, azureSqlServer, adminLogin, adminPassword, storageKey, and storageAccount with your own SQL database and storage values.
static void Main(string[] args)
{
    var subscriptionId = "xxxxxxxx";
    var clientId = "xxxxxxxxx";
    var tenantId = "xxxxxxxx";
    var secretKey = "xxxxx";
    var azureSqlDatabase = "data base name";
    var resourceGroup = "Resource Group name";
    var azureSqlServer = "xxxxxxx"; //testsqlserver
    var adminLogin = "user";
    var adminPassword = "password";
    var storageKey = "storage key";
    var storageAccount = "storage account";
    var baseStorageUri = $"https://{storageAccount}.blob.core.windows.net/brandotest/"; // with container name, ending with "/"
    var backName = azureSqlDatabase + "-" + $"{DateTime.UtcNow:yyyyMMddHHmm}" + ".bacpac"; // backup sql file name
    var backupUrl = baseStorageUri + backName;
    ImportExportOperationStatusResponse exportStatus = new ImportExportOperationStatusResponse();
    try
    {
        ExportRequestParameters exportRequestParameters = new ExportRequestParameters
        {
            AdministratorLogin = adminLogin,
            AdministratorLoginPassword = adminPassword,
            StorageKey = storageKey,
            StorageKeyType = "StorageAccessKey",
            StorageUri = new Uri(backupUrl)
        };
        SqlManagementClient sqlManagementClient = new SqlManagementClient(new Microsoft.Azure.TokenCloudCredentials(subscriptionId, GetAccessToken(tenantId, clientId, secretKey)));
        var export = sqlManagementClient.ImportExport.Export(resourceGroup, azureSqlServer, azureSqlDatabase,
            exportRequestParameters); // do the export operation
        while (exportStatus.Status != Microsoft.Azure.OperationStatus.Succeeded) // poll until the operation succeeds
        {
            Thread.Sleep(1000 * 60);
            exportStatus = sqlManagementClient.ImportExport.GetImportExportOperationStatus(export.OperationStatusLink);
        }
        Console.WriteLine($"Exported database {azureSqlDatabase} to storage {storageAccount} successfully");
    }
    catch (Exception exception)
    {
        //todo
    }
}
private static string GetAccessToken(string tenantId, string clientId, string secretKey)
{
    var authenticationContext = new AuthenticationContext($"https://login.windows.net/{tenantId}");
    var credential = new ClientCredential(clientId, secretKey);
    var result = authenticationContext.AcquireTokenAsync("https://management.core.windows.net/",
        credential);
    if (result == null)
    {
        throw new InvalidOperationException("Failed to obtain the JWT token");
    }
    var token = result.Result.AccessToken;
    return token;
}
The result looks like this:
1. Send a request to tell SQL Server to start exporting to Azure Blob Storage.
2. Keep sending requests to monitor the export operation status.
3. The export operation finishes.
Here's an idea:
Pass the stream to the .ExportBacpac method but hold a reference to it on a different thread, where you regularly empty and reset the stream so that there's no memory overflow. I'm assuming here that Dac does not have any means to access the stream while it is being filled.
The thing you have to take care of yourself, though, is thread safety: MemoryStreams are not thread-safe by default, so you'd have to write your own locking around .Position and .CopyTo. I've not tested this, but if you handle locking correctly, I'd assume the .ExportBacpac method won't throw any errors while the other thread accesses the stream (see the sketch after the pseudo-code below).
Here's a very simple example as pseudo-code just outlining my idea:
ThreadSafeStream stream = new ThreadSafeStream();
Task task = new Task(async (exitToken) => {
    MemoryStream partialStream = new MemoryStream();
    // Check if backup completed
    if (...)
    {
        exitToken.Trigger();
    }
    stream.CopyToThreadSafe(partialStream);
    stream.PositionThreadSafe = 0;
    AzureService.UploadToStorage(partialStream);
    await Task.Delay(500); // Play around with this - it shouldn't take too long to copy the stream
});
services.ExportBacpac(stream, "dbnameToBackup");
await TimerService.RunTaskPeriodicallyAsync(task, 500);
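For concreteness, here is a hedged sketch of the ThreadSafeStream wrapper the pseudo-code assumes (a hypothetical type, not part of the BCL). Note that the exporting thread's writes must also take the same lock for this to be safe:
public class ThreadSafeStream : MemoryStream
{
    private readonly object _gate = new object();

    // Thread-safe counterpart of Position, used by the uploader thread.
    public long PositionThreadSafe
    {
        get { lock (_gate) { return Position; } }
        set { lock (_gate) { Position = value; } }
    }

    // Copies the buffered bytes while holding the lock, so the reader
    // sees a consistent snapshot of what the exporter has written so far.
    public void CopyToThreadSafe(Stream destination)
    {
        lock (_gate) { CopyTo(destination); }
    }

    // The writer (.ExportBacpac) goes through Write, so it must take the same lock.
    public override void Write(byte[] buffer, int offset, int count)
    {
        lock (_gate) { base.Write(buffer, offset, count); }
    }
}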
It's similar to Brando's answer, but this one uses a stable package:
using Microsoft.WindowsAzure.Management.Sql;
NuGet
Using the same variables as in Brando's answer, the code will be like this:
var azureSqlServer = "xxxxxxx" + ".database.windows.net";
var azureSqlServerName = "xxxxxxx";
SqlManagementClient managementClient = new SqlManagementClient(new TokenCloudCredentials(subscriptionId, GetAccessToken(tenantId, clientId, secretKey)));
var exportParams = new DacExportParameters()
{
    BlobCredentials = new DacExportParameters.BlobCredentialsParameter()
    {
        StorageAccessKey = storageKey,
        Uri = new Uri(baseStorageUri)
    },
    ConnectionInfo = new DacExportParameters.ConnectionInfoParameter()
    {
        ServerName = azureSqlServer,
        DatabaseName = azureSqlDatabase,
        UserName = adminLogin,
        Password = adminPassword
    }
};
var exportResult = managementClient.Dac.Export(azureSqlServerName, exportParams);
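If you also need to block until the export finishes, the same package exposes a status call; a hedged sketch (the GetStatus signature and the Guid property reflect my reading of this library version, so verify them against your package):
// Poll the export operation using the request GUID returned by Dac.Export.
var status = managementClient.Dac.GetStatus(
    azureSqlServerName,   // server name without the DNS suffix
    azureSqlServer,       // fully qualified server name
    adminLogin,
    adminPassword,
    exportResult.Guid);   // request id of the export operation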
You can use Microsoft.Azure.Management.Fluent to export your database to a .bacpac file and store it in a blob. To do this, there are a few things you need to do.
1. Create an Azure AD (Azure Active Directory) application and a service principal that can access resources. Follow this link for a comprehensive guide.
2. From the first step, you are going to need the "Application (client) ID", "Client Secret", and "Tenant ID".
3. Install the Microsoft.Azure.Management.Fluent NuGet package, and import the Microsoft.Azure.Management.Fluent, Microsoft.Azure.Management.ResourceManager.Fluent, and Microsoft.Azure.Management.ResourceManager.Fluent.Authentication namespaces.
4. Replace the placeholders in the code snippet below with the proper values for your use case.
5. Enjoy!
var principalClientID = "<Application (Client) ID>";
var principalClientSecret = "<ClientSecret>";
var principalTenantID = "<TenantID>";
var sqlServerName = "<SQL Server Name (without '.database.windows.net')>";
var sqlServerResourceGroupName = "<SQL Server Resource Group>";
var databaseName = "<Database Name>";
var databaseLogin = "<Database Login>";
var databasePassword = "<Database Password>";
var storageResourceGroupName = "<Storage Resource Group>";
var storageName = "<Storage Account>";
var storageBlobName = "<Storage Blob Name>";
var bacpacFileName = "myBackup.bacpac";

var credentials = new AzureCredentialsFactory().FromServicePrincipal(principalClientID, principalClientSecret, principalTenantID, AzureEnvironment.AzureGlobalCloud);
var azure = await Azure.Authenticate(credentials).WithDefaultSubscriptionAsync();
var storageAccount = await azure.StorageAccounts.GetByResourceGroupAsync(storageResourceGroupName, storageName);
var sqlServer = await azure.SqlServers.GetByResourceGroupAsync(sqlServerResourceGroupName, sqlServerName);
var database = await sqlServer.Databases.GetAsync(databaseName);
await database.ExportTo(storageAccount, storageBlobName, bacpacFileName)
    .WithSqlAdministratorLoginAndPassword(databaseLogin, databasePassword)
    .ExecuteAsync();
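Since the question also covers restoring, the fluent API has an import counterpart on the target database; a hedged sketch (targetDatabase stands for the ISqlDatabase you are restoring into, and you should verify that ImportBacpac is available in your Microsoft.Azure.Management.Fluent version):
// Import the .bacpac from the same blob into an existing (typically empty) database.
await targetDatabase.ImportBacpac(storageAccount, storageBlobName, bacpacFileName)
    .WithSqlAdministratorLoginAndPassword(databaseLogin, databasePassword)
    .ExecuteAsync();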
I'm trying to connect to an Azure DocumentDB and save documents using Azure Functions, but I don't know how to create the connection.
You can do it using the Azure portal. After you have created the DocumentDB:
1. Create a new Azure Function.
2. Go to the Integrate tab.
3. Choose Azure Document DB as an output for your function.
4. Choose the Document DB / database name / collection you want to use.
The Document parameter name is the output of your function.
For example:
using System;
public static void Run(string input, out object document, TraceWriter log)
{
    log.Info($"C# manually triggered function called with input: {input}");
    document = new {
        text = $"I'm running in a C# function! {input}"
    };
}
You need to provide an out object with the same name as you defined in the output tab.
You can just use the document client directly:
var endpoint = "https://XXXXX.documents.azure.com:443/";
var authKey = "XXXXX";
using (var client = new DocumentClient(new Uri(endpoint), authKey))
{
    var sqlCountQuery = "select value count(1) from c";
    IDocumentQuery<dynamic> query = client.CreateDocumentQuery<dynamic>(UriFactory.CreateDocumentCollectionUri("YOUR_DB_ID", "YOUR_COLLECTON_ID"), sqlCountQuery).AsDocumentQuery();
    ....
}
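To actually read the count, the standard pattern is to drain the IDocumentQuery; a minimal sketch using the same SDK:
// Drain the query; for "select value count(1)" each page yields a single numeric value.
while (query.HasMoreResults)
{
    foreach (var count in await query.ExecuteNextAsync<dynamic>())
    {
        Console.WriteLine($"Document count: {count}");
    }
}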
Azure Functions supports Document DB (Cosmos DB) out of the box. You can simply add an application setting called AzureWebJobsDocumentDBConnectionString in V1 or AzureWebJobsCosmosDBConnectionString in V2.
Then just use a CosmosDBTrigger binding attribute for the input binding, like this (in C#, for example):
public static class UpsertProductCosmosDbTrigger
{
    [FunctionName("ProductUpsertCosmosDbTrigger")]
    public static void Run(
        [CosmosDBTrigger(
            // Those names come from the application settings.
            // Those names can come with both preceding and trailing %.
            databaseName: "CosmosDbDdatabaseName",
            collectionName: "CosmosDbCollectionName",
            LeaseDatabaseName = "CosmosDbDdatabaseName",
            LeaseCollectionName = "CosmosDbLeaseCollectionName")]
        IReadOnlyList<Document> input,
        TraceWriter log)
    ...
For the output binding, use the DocumentDB output binding attribute in V1 and CosmosDB in V2, like:
[FunctionName("ProductUpsertHttpTrigger")]
public static async Task<HttpResponseMessage> Run(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "products")]
    HttpRequestMessage req,
    [DocumentDB(
        databaseName: "%CosmosDbDdatabaseName%",
        collectionName: "%CosmosDbCollectionName%")] IAsyncCollector<Product> collector,
    TraceWriter log)
...
I've written a blog post about this: https://blog.mexia.com.au/cosmos-db-in-azure-functions-v1-and-v2
var EndpointUrl = "EndpointUrl";
var PrimaryKey = "PrimaryKeyValue";
this.client = new DocumentClient(new Uri(EndpointUrl), PrimaryKey);
Database database = await this.client.CreateDatabaseIfNotExistsAsync(new Database { Id = cosmoDbName });
You can get the endpoint URL and primary key value from the Azure portal, in the Keys section.
I assume C# has an SDK similar to Java's; the examples below are for Java.
There are two ways you can connect to DocumentDB from an Azure Function.
Using the SDK:
DocumentClient documentClient = new DocumentClient(
    "SERVICE_ENDPOINT",
    "MASTER_KEY",
    ConnectionPolicy.GetDefault(),
    ConsistencyLevel.Session);
Refer to https://learn.microsoft.com/en-us/azure/cosmos-db/sql-api-java-samples, which has .NET samples too.
Binding
@FunctionName("CosmosDBStore")
@CosmosDBOutput(name = "database",
    databaseName = "db_name",
    collectionName = "col_name",
    connectionStringSetting = "AzureCosmosDBConnection")
Please make sure you have a setting named "AzureCosmosDBConnection" in your application settings and in local.settings.json (if you want to test locally).
Refer to https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2, which has a C# example too.
I am using the .NET library for Amazon Web Services for an application that uploads images to an Amazon S3 bucket. It is used in an internal service of an ASP.NET 4.5 application. The NuGet package name is AWSSDK and its version is the latest (as of writing) stable: 2.3.54.2
When I attempt to call the PutObject method (passing a PutObjectRequest to upload the image blob), it throws an exception and complains that the hostname is wrong.
var accessKey = Config.GetValue("AWSAccessKey");
var secretKey = Config.GetValue("AWSSecretKey");
using (var client = new AmazonS3Client(accessKey, secretKey, config))
{
    var request = new PutObjectRequest();
    request.BucketName = Config.GetValue("PublicBucket");
    request.Key = newFileName;
    request.InputStream = resizedImage;
    request.AutoCloseStream = false;
    using (var uploadTaskResult = client.PutObject(request))
    {
        using (var uploadStream = uploadTaskResult.ResponseStream)
        {
            uploadStream.Seek(0, SeekOrigin.Begin);
            var resultStr = new StreamReader(uploadStream).ReadToEnd();
        }
    }
}
The exception details are as follows:
Fatal unhandled exception in Web API component: System.Net.WebException: The remote name could not be resolved: 'images.ourcompany.com.http'
at System.Net.HttpWebRequest.GetRequestStream(TransportContext& context)
at System.Net.HttpWebRequest.GetRequestStream()
at Amazon.S3.AmazonS3Client.getRequestStreamCallback[T](IAsyncResult result)
at Amazon.S3.AmazonS3Client.endOperation[T](IAsyncResult result)
at Amazon.S3.AmazonS3Client.EndPutObject(IAsyncResult asyncResult)
at Tracks.Application.Services.Bp.BpTemplateService.UploadImage(Byte[] image, String fileName) in ...
I have tried to debug this in VS by stepping through the code but AWSSDK doesn't come with debug symbols. It should be noted that the remote host name (or bucket name as I think Amazon calls them) is images.ourcompany.com (not our real company's name!). I have checked the value of Config.GetValue("PublicBucket") and it is indeed images.ourcompany.com. At this stage I have exhausted my limited knowledge about Amazon S3 and have no theories about what causes the exception to be thrown.
I think you have to add the region endpoint and/or set the ServiceURL to establish the connection to Amazon S3; check the similar question below:
Coping folder inside AmazonS3 Bucket (c#)
Sample code to upload images to Amazon S3:
AmazonS3Config cfg = new AmazonS3Config();
cfg.RegionEndpoint = Amazon.RegionEndpoint.SAEast1; // your region endpoint
string bucketName = "yourBucketName";
AmazonS3Client s3Client = new AmazonS3Client("your access key",
    "your secret key", cfg);
PutObjectRequest request = new PutObjectRequest()
{
    BucketName = bucketName,
    InputStream = stream,
    Key = fullName
};
s3Client.PutObject(request);
or
AmazonS3Config asConfig = new AmazonS3Config()
{
    ServiceURL = "http://irisdb.s3-ap-southeast-2.amazonaws.com/",
    RegionEndpoint = Amazon.RegionEndpoint.APSoutheast2
};
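Whichever config you use, pass it when constructing the client so the SDK signs requests against the correct regional endpoint instead of deriving a bad hostname; a minimal usage sketch:
// Construct the client with the region-aware config from above.
var s3Client = new AmazonS3Client("your access key", "your secret key", asConfig);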