I am developing an application that will upload files to Amazon S3. Amazon provides
a method, WithServerSideEncryptionMethod(ServerSideEncryptionMethod.AES256), to encrypt files, but it does not seem to be working: the object is saved as plain text.
public static void UploadFile()
{
    new Program();
    var key = "a";
    //key = ReplaceDblSlashToSingleFwdSlash(key);
    //path = ReplaceFwdSlashToBackSlash(path);
    var request = new PutObjectRequest();
    request.WithBucketName("demo")
        .WithContentBody("i am achal kumar")
        .WithKey(key)
        .WithServerSideEncryptionMethod(ServerSideEncryptionMethod.AES256);
    //request.PutObjectProgressEvent += displayFileProgress;
    S3Response response = s3Client.PutObject(request);
    response.Dispose();
}
Your data is likely encrypted and is just being automatically decrypted by the GET you are testing with.
http://aws.typepad.com/aws/2011/10/new-amazon-s3-server-side-encryption.html
Decryption of the encrypted data requires no effort on your part. When
you GET an encrypted object, we fetch and decrypt the key, and then
use it to decrypt your data. We also include an extra header in the
response to the GET to let you know that the data was stored in
encrypted form in Amazon S3.
You can use the following code to check whether the object is encrypted on Amazon S3, since S3 automatically decrypts the object before returning it to you:
GetObjectMetadataRequest meta = new GetObjectMetadataRequest();
meta.WithBucketName("demo")   // same bucket and key used in the upload
    .WithKey(key);
GetObjectMetadataResponse response = s3Client.GetObjectMetadata(meta);
if (response.ServerSideEncryptionMethod == ServerSideEncryptionMethod.AES256)
{
    // the object is stored encrypted on S3
}
I hope this helps.
I am currently using the Forge Webhooks API to handle different events that might occur on a project. Everything works fine except the payload signature check.
I want to check the payload because the callback ends up at my API, and I want to reject all requests that do not come from Forge's webhook service.
Steps I followed:
Add (register) the secret key (token) on Forge. API Reference
Trigger an event that will eventually call my API to handle it.
Validate the signature header. I followed this tutorial.
PROBLEM!!! My computedSignature is different from the signature received from Forge.
My C# code looks like this:
private const string SHA_HASH = "sha1hash";
var secretKeyBytes = Encoding.UTF8.GetBytes(ForgeAuthConfiguration.AntiForgeryToken);
using var hmac = new HMACSHA1(secretKeyBytes);
var computedHash = hmac.ComputeHash(request.Body.ReadAsBytes());
var computedSignature = $"{SHA_HASH}={computedHash.Aggregate("", (s, e) => s + $"{e:x2}", s => s)}";
For one example, Forge's request has this signature header: sha1hash=303c4e7d2a94ccfa559560dc2421cee8496d2d83
My C# code computes this signature: sha1hash=3bb8d41c3c1cb6c9652745f5996b4e7f832ca8d5
The same AntiForgeryToken was sent to Forge at step 1.
OK, I thought my C# code was broken, so I tried this online HMAC generator, and for the given input the result is 3bb8d41c3c1cb6c9652745f5996b4e7f832ca8d5 (same as C#).
OK, maybe the online generator is broken, so I tried their own sample code in Node.js, and it produced the same hash again.
I have 3 ways of hashing the SAME body with the SAME key, and I get the SAME result every time. BUT those results are DIFFERENT from the signature provided by Forge, so the check fails and a valid request is rejected...
Does anyone know what is happening with that signature?
Why is it different from my result if I follow their tutorial?
How are you validating your requests?
The code below works on my side. Could you give it a try and see if it helps?
[HttpPost]
[Route("api/forge/callback/webhookbysig")]
public async Task<IActionResult> WebhookCallbackBySig()
{
    try
    {
        var encoding = Encoding.UTF8;
        // Read the raw request body; the signature is computed over these exact bytes
        byte[] rawBody = null;
        using (StreamReader reader = new StreamReader(Request.Body, encoding))
        {
            rawBody = encoding.GetBytes(await reader.ReadToEndAsync());
        }
        var requestSignature = Request.Headers["x-adsk-signature"];
        string myPrivateToken = Credentials.GetAppSetting("FORGE_WEBHOOK_PRIVATE_TOKEN");
        var tokenBytes = encoding.GetBytes(myPrivateToken);
        using (var hmacSha1 = new HMACSHA1(tokenBytes))
        {
            byte[] hashMessage = hmacSha1.ComputeHash(rawBody);
            var calculatedSignature = "sha1hash=" + BitConverter.ToString(hashMessage).ToLower().Replace("-", "");
            if (requestSignature.Equals(calculatedSignature))
            {
                System.Diagnostics.Debug.Write("Same!");
            }
            else
            {
                System.Diagnostics.Debug.Write("diff!");
            }
        }
    }
    catch (Exception ex)
    {
        // Swallow errors: the endpoint must still return 200 so Forge keeps the webhook active
    }
    // ALWAYS return OK (200)
    return Ok();
}
If this does not help, please share your webhook ID (or better, send an email to forge.help#autodesk.com). We will ask the engineering team to check it.
Is there any example in C# showing how to pre-sign all objects using a starts-with policy with the AWS v4 signature, so that customers can download objects from their respective folder structure instead of signing each document separately?
The documentation shows:
https://s3.amazonaws.com/examplebucket/test.txt
?X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Credential=<your-access-key-id>/20130721/us-east-1/s3/aws4_request
&X-Amz-Date=20130721T201207Z
&X-Amz-Expires=86400
&X-Amz-SignedHeaders=host
&X-Amz-Signature=<signature-value>
But my signature does not work for GET (download), while it works correctly for upload:
void Main()
{
    string bucket = "bucket-name-here";
    string s3Key = "s3-key-here";
    string s3Secret = "secret-here";
    string s3Region = "us-east-1";
    string Date = DateTime.UtcNow.ToString("yyyyMMdd");
    string xAmzDate = DateTime.UtcNow.ToString("yyyyMMdd") + "T000000Z";
    string expiration = DateTime.UtcNow.AddDays(1).ToString("yyyy-MM-ddTHH:mm:ssK");
    string policyString = $@"{{""expiration"":""{expiration}"",""conditions"":[{{""bucket"":""{bucket}""}},{{""acl"":""private""}},[""starts-with"",""$key"",""Client_1""],[""starts-with"",""$Content-Type"",""""],[""starts-with"",""$filename"",""""],{{""x-amz-date"":""{xAmzDate}""}},{{""x-amz-credential"":""{s3Key}/{Date}/us-east-1/s3/aws4_request""}},{{""x-amz-algorithm"":""AWS4-HMAC-SHA256""}}]}}";
    var policyStringBytes = Encoding.UTF8.GetBytes(policyString);
    var policy = Convert.ToBase64String(policyStringBytes);
    //policy.Dump();
    byte[] signingKey = GetSigningKey(s3Secret, Date, s3Region, "s3");
    byte[] signature = HmacSHA256(policy, signingKey);
    var sign = ToHexString(signature);
    sign.Dump();
}
static byte[] HmacSHA256(String data, byte[] key)
{
    String algorithm = "HmacSHA256";
    KeyedHashAlgorithm kha = KeyedHashAlgorithm.Create(algorithm);
    kha.Key = key;
    return kha.ComputeHash(Encoding.UTF8.GetBytes(data));
}

private byte[] GetSigningKey(String key, String dateStamp, String regionName, String serviceName)
{
    byte[] kSecret = Encoding.UTF8.GetBytes(("AWS4" + key).ToCharArray());
    byte[] kDate = HmacSHA256(dateStamp, kSecret);
    byte[] kRegion = HmacSHA256(regionName, kDate);
    byte[] kService = HmacSHA256(serviceName, kRegion);
    byte[] kSigning = HmacSHA256("aws4_request", kService);
    return kSigning;
}

public static string ToHexString(byte[] data)
{
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < data.Length; i++)
    {
        sb.Append(data[i].ToString("x2", CultureInfo.InvariantCulture));
    }
    return sb.ToString();
}
More about the problem: we have thousands of documents for hundreds of clients on S3, organized in per-client folders as below. Right now, every time a client wants to download an object, our API signs it to create the downloadable link, so each document is signed separately.
Client 1
    Client_1/Document1.xyz
    Client_1/Document2.xyz
Client 2
    Client_2/Document1.xyz
    Client_2/Document2.xyz
The signing algorithm for S3 HTML form POST uploads allows you to sign a policy document with constraints like ["starts-with","$key",...] but pre-signed URLs for S3 don't support this. With a pre-signed URL, you sign not a policy document but a "canonical request," which is a canonicalized representation of the browser's exact request. So there is no support for wildcards or prefixes.
There are two alternatives that come to mind.
CloudFront signed URLs and signed cookies do support a policy document when you use a "custom policy" (as opposed to a "canned policy," which is closer to what S3 supports). A custom policy allows a * in the URL, similar to ["starts-with","$key",...] but applied to the URL the browser will be requesting. You only have to do the signing once, and the code running in the browser can reuse that policy and signature. On the back side of CloudFront, a CloudFront Origin Access Identity signs the requests as they are actually sent to the bucket, after CloudFront has authenticated the request on the front side using the signed URL or signed cookies. (With signed cookies, the browser just makes the request and automatically submits the cookies, so that works the same way but with no browser manipulation of the URLs.)
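For illustration, a custom policy matching everything under one client's prefix might look like the following; the distribution domain and expiry epoch here are placeholder assumptions:

```json
{
  "Statement": [
    {
      "Resource": "https://d111111abcdef8.cloudfront.net/Client_1/*",
      "Condition": {
        "DateLessThan": { "AWS:EpochTime": 1767225600 }
      }
    }
  ]
}
```

You sign this policy once with your CloudFront key pair, and the browser presents the resulting Policy, Signature, and Key-Pair-Id values (as query parameters or cookies) with every request under that prefix.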
Alternately, the AssumeRole action in Security Token Service could be called by your server, to generate a set of temporary credentials for the client to use, to sign its own individual URLs.
When calling AssumeRole, you can also pass an optional session policy document. If you do, the generated temporary credentials can only perform actions allowed by both the role policy ("allow read from the bucket") and the session policy ("allow read from the bucket for keys beginning with a specific prefix"). The role credentials obtained would therefore only allow the user to access their own objects.
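As a sketch, the session policy for a given client could scope the temporary credentials to that client's prefix; the bucket name and prefix here are hypothetical:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/Client_1/*"
    }
  ]
}
```

The client then uses the returned temporary access key, secret, and session token to create its own pre-signed GET URLs, and S3 rejects any key outside that prefix.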
I want to build an API for Unity. I noticed that many APIs in Unity, like Vuforia, require developers to generate a key on a website and then paste the key into the Unity editor. I wonder how it works. When will Unity send the key to the website to validate? Which protocol is this, so I can research more on the internet?
When will Unity send the key to website to validate?
It depends. Some send the key with every request. Others send the key once, then generate a temporary token that is used for subsequent requests.
Which protocol is this so I can research more on the internet?
Most web APIs follow the REST style over HTTP,
usually with POST or GET request methods.
I wonder how does it work?
It's not complicated. You need to know C#, MySQL, and PHP (or any back-end language). If you know these three, you can do it.
MySQL is used to store user information such as the username and password. The user goes to the website and creates a private key; that key is saved in the database and linked to that user.
When you make a request from Unity (C#) to the server, you take the user's key and embed it in a form (WWWForm.AddField("API_KEY", "Your key")), then send that form to the server with the WWW or UnityWebRequest API.
When PHP receives the request from Unity, it reads the form field sent from Unity ($_POST["API_KEY"]) and checks with MySQL whether the key is in the database. If the key exists, go ahead and do what the request wants; if it does not, echo an error message.
That's it. Below is an example of an API that converts an image to text. It requires a key to function. Some functions are not implemented; the example only shows how API-key authentication is done.
C#:
public Texture2D texture;

public void sendData()
{
    string reqUrl = "http://yourServerUrl.com/app/imagetospeech.php";
    WWWForm reqForm = new WWWForm();
    //Add API key
    reqForm.AddField("API_KEY", "AEH392HEIQSKLZ82V4HCBZL093MD");
    //Add Image to convert to Text
    reqForm.AddBinaryData("REQ_IMAGE", texture.EncodeToPNG());
    WWW www = new WWW(reqUrl, reqForm);
    StartCoroutine(WaitForRequest(www));
}

private IEnumerator WaitForRequest(WWW www)
{
    yield return www;
    //The request succeeded if there is no error
    if (string.IsNullOrEmpty(www.error))
    {
        UnityEngine.Debug.Log(www.text);
    }
    else
    {
        UnityEngine.Debug.Log(www.error);
    }
}
PHP:
<?php
function keyExist($keyInput)
{
    //MySQL code to check if key is in the table
    //$retval = NOT IMPLEMENTED!
    if (!$retval) {
        return false;
    } else {
        return true;
    }
}

function convertImageToText($imageInput)
{
    //$retval = NOT IMPLEMENTED!
    if (!$retval) {
        return "";
    } else {
        return $retval;
    }
}

//Get API key from Unity
$apiKey = $_POST["API_KEY"];

//Check if API key exists
if (keyExist($apiKey)) {
    //Get the image from Unity
    $imageFile = $_FILES['REQ_IMAGE'];
    if (!empty($imageFile)) {
        //Success
        echo convertImageToText($imageFile);
    } else {
        echo "Failed!";
    }
} else {
    //Reject requests with an unknown key
    echo "Invalid API key!";
}
?>
I'm using the DropNetRT library and I can't find a way to create a working DropNetClient using just the Generated Access Token from my app page in my Dropbox account.
If I use my User Secret and User Token it works:
public static async Task UploadStuff()
{
    DropNetClient client = new DropNetClient("APIKey", "AppSecret");
    client.SetUserToken(new UserLogin() { Secret = "mySecret", Token = "myToken" });
    // Then upload the data with the client
}
But, instead of my UserToken and UserSecret, I just want to use my Generated Access Token.
It looks something like this, just to be sure:
jfjfDkFkdfikAAAAAAAAAADkfkDJSJFJISjofdjFjjfoJOIDJSOjsFKPFKPEJKfjiksfd3_thD
Now, I tried using a UserLogin with just my access token as the Token and without a UserSecret, but the client threw an exception, so I guess that's not the right way to do it.
How can I do that? Is there a way to create a client with the access token with this library, or do I have to upload the file manually using an HttpClient? If so, I really have no idea on how to do that.
Thanks!
Sergio
Edit: this is what I tried (it's not working):
public static async Task TestUploadGeneratedToken()
{
    // Create the client
    DropNetClient client = new DropNetClient("APIKey", "AppSecret");
    client.SetUserToken("MyGeneratedAccessToken", String.Empty);
    // Get a test file
    StorageFile tempFile = await ApplicationData.Current.TemporaryFolder.CreateFileAsync("test.txt", CreationCollisionOption.OpenIfExists);
    await FileIO.WriteTextAsync(tempFile, "This is a simple test file");
    // Convert the file to a byte array
    IRandomAccessStream stream = await tempFile.OpenAsync(FileAccessMode.Read);
    stream.Seek(0);
    byte[] bytes = new byte[stream.Size];
    await stream.ReadAsync(bytes.AsBuffer(), (uint)stream.Size, InputStreamOptions.None);
    // Upload the file
    await client.Upload(CrashReportPath, "tokenTest.txt", bytes);
}
The Upload method throws a DropboxException.
That looks right. Try setting the secret to string.Empty instead of null?
I'm not sure whether I have used a generated token before, but I can't see why it wouldn't work.
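If the library route keeps failing, a manual upload with HttpClient is not much code. This is only a sketch against the v1-era files_put endpoint that DropNetRT targets; the URL, the auto root, and the file-name handling are assumptions to verify against the Dropbox docs:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class DropboxUploader
{
    // Upload raw bytes using only the generated access token (OAuth2 Bearer auth).
    public static async Task UploadAsync(string accessToken, string fileName, byte[] bytes)
    {
        using (var http = new HttpClient())
        {
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // "auto" resolves to the app's configured root (app folder or full Dropbox);
            // fileName here is a single file name, not a nested path
            var url = "https://api-content.dropbox.com/1/files_put/auto/" + Uri.EscapeDataString(fileName);

            var response = await http.PutAsync(url, new ByteArrayContent(bytes));
            response.EnsureSuccessStatusCode();
        }
    }
}
```

The generated access token is an OAuth2 token, which is why plain Bearer authentication works here without a user secret.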
I am trying to use the REST interface of AWS S3 for a web service which stores and retrieves file pieces in a similar way to git (via a hash and a directory system based off it). I am using the RestSharp client library to make these calls, as the AWS SDK is out of the question (the web service is actually required to work with AWS-like stores such as Hitachi HDS), and in general, as more storage platforms would be added, it was felt a standardised method would be best for over-the-wire communication.
The problem is that RestSharp may be adding some extra payload, as S3 is crying about having more than one data element to save.
The following code is the core storage logic; note that I am using Ninject to handle any dependencies.
public bool PutBytesInStore(string piecehash, byte[] data)
{
    string method = "POST";
    string hash;
    using (var sha1 = new SHA1CryptoServiceProvider())
    {
        hash = Convert.ToBase64String(sha1.ComputeHash(data));
    }
    string contentType = "application/octet-stream";
    string date = DateTime.UtcNow.ToString("R"); // RFC 1123 date, e.g. "Tue, 27 Mar 2007 19:36:42 GMT"
    string file = string.Format("pieces/{0}/{1}/{2}", piecehash.Substring(0, 2), piecehash.Substring(0, 6),
        piecehash);

    //Creating signature
    var sBuilder = new StringBuilder();
    sBuilder.Append(method).Append("\n");
    sBuilder.Append(contentType).Append("\n");
    sBuilder.Append(date).Append("\n");
    sBuilder.Append(hash).Append("\n");
    sBuilder.Append(file).Append("\n");
    var signature = Convert.ToBase64String(
        new HMACSHA1(Encoding.UTF8.GetBytes(_password)).ComputeHash(Encoding.UTF8.GetBytes(sBuilder.ToString())));

    _request.Method = Method.POST;
    _request.AddFile(piecehash, data, piecehash);
    _request.AddHeader("Date", date);
    _request.AddHeader("Content-MD5", hash);
    _request.AddHeader("Authorization", string.Format("AWS {0}:{1}", _identifier, signature));
    var response = _client.Execute(_request);

    //Check responses for any errors
    var xmlResponse = XDocument.Parse(response.Content);
    switch (response.StatusCode)
    {
        case HttpStatusCode.Forbidden:
            ErrorCodeHandler(xmlResponse);
            break;
        case HttpStatusCode.BadRequest:
            ErrorCodeHandler(xmlResponse);
            break;
        case HttpStatusCode.Accepted:
            return true;
        default:
            return false;
    }
    return false;
}
The problem lies with the response sent back, which reads:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>InvalidArgument</Code>
<Message>POST requires exactly one file upload per request.</Message>
<ArgumentValue>0</ArgumentValue>
<ArgumentName>file</ArgumentName>
<RequestId>SomeRequest</RequestId>
<HostId>SomeID</HostId>
</Error>
The AWS documentation seems pretty sparse on this message, and I can't quite figure out why the RestSharp payload is not being recognised as exactly one file upload.
Any help is greatly appreciated.
It's because of the WebKit multipart boundary. Please try your request in Postman first; the multipart boundary is very important when uploading.
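For background: in the S3 REST API proper, objects are uploaded with PUT and the raw bytes as the request body; multipart POST is the browser-form upload API, which requires exactly one part named file (which is why ArgumentValue is 0 in the error above). A minimal sketch of the PUT approach, reusing the fields from the question's code:

```csharp
// Upload the piece with PUT: the object body is the raw byte payload, no multipart form.
// _request, _client, data, and file (the object key) are the same members as in the question.
_request.Method = Method.PUT;
_request.Resource = file;  // e.g. "pieces/aa/aabbcc/<piecehash>"
_request.AddParameter("application/octet-stream", data, ParameterType.RequestBody);
var response = _client.Execute(_request);
```

With PUT, the Content-MD5 and Authorization headers from the original code apply to the raw body directly, so there is no boundary or form-field naming to get wrong.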