Usually when I upload to S3 storage, I use an AmazonS3Client like this:
var client = Amazon.AWSClientFactory.CreateAmazonS3Client(accessKey, secretKey, s3Config);
This works fine for internal use, but now I am looking at providing an app to external users and don't want our (sacred) access and secret keys to be out there. I've set up an S3 bucket with a bucket policy allowing uploads (PutObject) from anonymous users, but how do I use the Amazon SDK now? I can't seem to find any way to do it without providing the access and secret key.
You most likely should not open a bucket up for public write. You would be open to lots of attacks and would need to keep a close eye on your log files, etc.
A better solution would be to keep the default private access on the bucket, then create an IAM user who only has upload (and perhaps download) permissions for the required area. When someone wants to upload a file, the client app makes a call to your server, which holds the IAM keys, and the server calculates and returns a 'pre-signed post' that allows the client app to post a new file. You can then use any auth tool you want on your server to decide whether or not to allow someone to upload, including no auth at all but with abuse detection. Done this way, the IAM user's secret key is never sent down to the client, which may be sitting in a debug session, etc.
Since the whole post is pre-signed, you can also decide where the file is allowed to go, the uploaded file name, etc., and return that in the server response.
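For reference, here is a minimal sketch of the server-side signing idea using the SDK's pre-signed URL support (GetPreSignedURL with an HTTP PUT); a pre-signed POST policy works the same way but lets you constrain the upload further. The bucket name, key and region below are placeholder assumptions:
// Server side only: this code holds the IAM user's keys; they are never sent to the client.
using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

public static class UploadUrlIssuer
{
    public static string CreateUploadUrl(string iamAccessKey, string iamSecretKey)
    {
        var s3 = new AmazonS3Client(iamAccessKey, iamSecretKey, RegionEndpoint.USEast1); // placeholder region

        var request = new GetPreSignedUrlRequest
        {
            BucketName = "my-upload-bucket",                // placeholder bucket
            Key = "uploads/" + Guid.NewGuid() + ".dat",     // the server decides the target key/file name
            Verb = HttpVerb.PUT,
            Expires = DateTime.UtcNow.AddMinutes(15)        // short-lived URL
        };

        // The client app simply PUTs the file bytes to this URL; no AWS keys are needed client-side.
        return s3.GetPreSignedURL(request);
    }
}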
You just need to pass null for accessKey and secretKey and you can use the SDK for any anonymously allowed operation.
Check out this related question of mine; it includes an official response from an Amazon employee on their developer forum! Relevant information from the linked question:
This is from an official Amazon employee on their forum:
As of the 1.3.8.0 release of the SDK you can pass null for the access and secret key and the SDK will skip the signing process and try the operations like GetObject as a public operation.
Norm
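Putting that together with the question's code, a minimal sketch against the old 1.x SDK factory (the bucket name and key are placeholders, and the fluent With* calls reflect the 1.x request API, so adjust to your SDK version):
// Passing null for both keys makes the SDK (1.3.8.0+) skip request signing.
var anonymousClient = Amazon.AWSClientFactory.CreateAmazonS3Client(null, null, s3Config);

// This only succeeds for operations the bucket policy allows anonymously (PutObject here).
var putRequest = new Amazon.S3.Model.PutObjectRequest()
    .WithBucketName("my-anonymous-upload-bucket")   // placeholder bucket
    .WithKey("uploads/example.txt")                 // placeholder key
    .WithContentBody("hello world");
anonymousClient.PutObject(putRequest);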
I am using the Google Translate API in C#.
Running locally on my computer, the following code works, but on a server it throws the error quoted below:
using Google.Cloud.Translation.V2;
TranslationClient client = TranslationClient.Create();
var response = client.TranslateText(sentence, targetLanguage, sourceLanguage: sourceLanguage);
"The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information."
Locally this runs just by installing the Cloud SDK installer, which handles all the settings; there is no need for authentication in code.
On the server, should I use OAuth 2.0 or service account keys instead?
Can someone assist me on how to solve this?
EDIT: Can someone confirm whether it is necessary to have access to the server to run commands on the command line, as described here: https://cloud.google.com/storage/docs/authentication? That would be pretty ridiculous compared to just writing code. For example, the YouTube API does not require local access.
Follow the directions here to get a JSON key file:
https://cloud.google.com/translate/docs/reference/libraries
Then run this code first:
System.Environment.SetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS", @"c:\mypath\myfile.json");
To generate a private key in JSON or PKCS12 format:
1. Open the list of credentials in the Google Cloud Platform Console.
2. Click Create credentials.
3. Select Service account key. A Create service account key window opens.
4. Click the drop-down box below Service account, then click New service account.
5. Enter a name for the service account in Name.
6. Use the default Service account ID or generate a different one.
7. Select the Key type: JSON or P12.
8. Click Create. A Service account created window is displayed and the private key for the Key type you selected is downloaded automatically. If you selected a P12 key, the private key's password ("notasecret") is displayed.
9. Click Close.
You can find more details here
https://cloud.google.com/storage/docs/authentication
It's all in the error message. You have two options:
Run your program on a Google Compute Engine instance, where the Application Default Credentials are available automatically.
Use a service account and set the "GOOGLE_APPLICATION_CREDENTIALS" environment variable to point to your credentials file (a .json file that you can download from the Google developer console).
PS: Do not store your credentials file anywhere on the server where it may be accessed by someone else!
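If you would rather not depend on the environment variable at all, a hedged alternative is to load the service-account JSON yourself and hand the credential to the client; the file path below is a placeholder and should live somewhere private, as noted above:
using System.IO;
using Google.Apis.Auth.OAuth2;
using Google.Cloud.Translation.V2;

public static class TranslateWithServiceAccount
{
    public static string Translate(string sentence, string targetLanguage, string sourceLanguage)
    {
        GoogleCredential credential;
        using (var stream = File.OpenRead(@"c:\secure\service-account.json")) // placeholder path
        {
            credential = GoogleCredential.FromStream(stream);
        }

        // Passing the credential explicitly avoids the Application Default Credentials lookup.
        TranslationClient client = TranslationClient.Create(credential);
        var response = client.TranslateText(sentence, targetLanguage, sourceLanguage: sourceLanguage);
        return response.TranslatedText;
    }
}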
You must download a service account key from
https://console.developers.google.com/iam-admin/serviceaccounts
After that, use the downloaded .P12 file in your code:
// Requires: using System.Security.Cryptography.X509Certificates;
var certificate = new X509Certificate2(@"key3.p12", "notasecret", X509KeyStorageFlags.Exportable);
"notasecret" is the default password.
The easiest answer to my question, avoiding any local setup on the server, is the third option for using the Translation API described below: API keys.
This means just a simple POST to an endpoint that has the API key in the URL.
https://cloud.google.com/docs/authentication/#getting_credentials_for_server-centric_flow
https://cloud.google.com/docs/authentication/api-keys
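To illustrate, a hedged sketch of that API-key flow: a plain HTTP POST to the Translation v2 REST endpoint with the key in the query string (the key value is a placeholder and should be restricted in the console):
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class TranslateWithApiKey
{
    public static async Task<string> TranslateAsync(string text, string targetLanguage)
    {
        const string apiKey = "YOUR_API_KEY"; // placeholder
        var url = "https://translation.googleapis.com/language/translate/v2?key=" + apiKey;

        using (var http = new HttpClient())
        {
            var body = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "q", text },
                { "target", targetLanguage }
            });

            var response = await http.PostAsync(url, body);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync(); // JSON with data.translations[]
        }
    }
}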
I have an app that allows me to read data from a Google Spreadsheet using an API key. I just make an HTTP GET to this address and get a response with the data.
https://sheets.googleapis.com/v4/spreadsheets/18soCZy9H4ZGuu**********BeHlNY1lD8at-Pbjmf8c/values/Sheet1!A1?key=AIzaSyAYJ***********pB-4iKZjYf4y0vhXP8OM
But when I try to do the same to write data, using an HTTP PUT to this address,
https://sheets.googleapis.com/v4/spreadsheets/18soCZy9H4ZGuu**********BeHlNY1lD8at-Pbjmf8c/values/Sheet1!A4?valueInputOption=RAW?key=AIzaSyAYJ***********pB-4iKZjYf4y0vhXP8OM
it gives me a 401 error.
Code to make PUT request:
using (WebClient wc = new WebClient())
{
    byte[] res = wc.UploadData(link, "PUT", Encoding.ASCII.GetBytes(textBox1.Text));
    MessageBox.Show(Encoding.Default.GetString(res));
}
Also, the spreadsheet is fully public, with permission to read and write for anyone without auth. My guess is that I can't use an API key to write data to the spreadsheet, and the only way to do this is using OAuth.
UPDATE:
So I've just tried Google.Apis.Sheets.v4 to write values, and now I'm almost 100% sure that an API key can't be used to write data to a Google Spreadsheet. Well, then I'll use OAuth 2.0.
Well, maybe you are correct and the problem here is the API_KEY itself.
If you check the Sheets API documentation, it is stated that every request your application sends to the Google Sheets API needs to identify your application to Google. There are two ways to identify your application: using an OAuth 2.0 token (which also authorizes the request) and/or using the application's API key. Here's how to determine which of those options to use:
If the request requires authorization (such as a request for an individual's private data), then the application must provide an OAuth 2.0 token with the request. The application may also provide the API key, but it doesn't have to.
If the request doesn't require authorization (such as a request for public data), then the application must provide either the API key or an OAuth 2.0 token, or both—whatever option is most convenient for you.
So in principle either the OAuth 2.0 token or the API key should work in your case, since the file is public. But given the problem with the PUT request you are doing, we can assume the API key is not working for it. We do have an alternative for that, though, and that is OAuth.
I also found here a related SO question that might help you.
For anyone still hoping for a simple answer, it seems there won't be one - any writing to a sheet, irrespective of the sheets permissions, will require OAuth2:
'This is intentional behavior. While public sheets are anonymously readable, anonymous edits aren't currently supported for a variety of reasons.
In this context, "anyone" == anyone with a google account.'
One option that wasn't mentioned here is to use a service account instead. Service accounts are like users, but without being attached to a person. Instead, they're attached to a project.
Service accounts have an email address as well as a private key. Both can be used to create a JWTClientAuth, and this can be used to authenticate the API while it's being instantiated or to authenticate each and every request.
The advantage of the service account is that it works like an API KEY -- no need to ask a user to copy a URL to the browser and then copy a code back into the application -- but because it can act as an authenticated user, the service account email address can be added to the Google Sheet as an editor. With this in place, the application has full write access to the sheet but without having to deal with authorization codes and refresh tokens and copy/pasting.
You can see a Python example, Python With Google Sheets Service Account Step By Step, and a Node.js example, Accessing Google APIs Using Service Account in Node.js. I followed these examples to get setup.
Since you're using C#, you may find Writing to Google Sheets API Using .NET and a Service Account to be helpful.
This method reads the service account credentials from the JSON file to then instantiate the SheetsService:
private void ConnectToGoogle() {
    GoogleCredential credential;

    // Put your credentials json file in the root of the solution and make sure copy to output dir property is set to always copy
    using (var stream = new FileStream(Path.Combine(HttpRuntime.BinDirectory, "credentials.json"),
                                       FileMode.Open, FileAccess.Read)) {
        credential = GoogleCredential.FromStream(stream).CreateScoped(_scopes);
    }

    // Create Google Sheets API service.
    _sheetsService = new SheetsService(new BaseClientService.Initializer() {
        HttpClientInitializer = credential,
        ApplicationName = _applicationName
    });
}
Afterwards, you can use the Google Sheets .NET Client Library to write the data.
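To round that out, here is a hedged sketch of an actual write once the service account's email address has been added to the sheet as an editor; the spreadsheet ID and range are placeholders:
using System.Collections.Generic;
using Google.Apis.Sheets.v4;
using Google.Apis.Sheets.v4.Data;

public static class SheetsWriter
{
    // Assumes the SheetsService was created as in ConnectToGoogle() above,
    // with SheetsService.Scope.Spreadsheets included in _scopes.
    public static void WriteCell(SheetsService sheetsService)
    {
        var valueRange = new ValueRange
        {
            Values = new List<IList<object>> { new List<object> { "written by the service account" } }
        };

        var update = sheetsService.Spreadsheets.Values.Update(
            valueRange, "your-spreadsheet-id", "Sheet1!A4"); // placeholder ID and range
        update.ValueInputOption =
            SpreadsheetsResource.ValuesResource.UpdateRequest.ValueInputOptionEnum.RAW;
        update.Execute();
    }
}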
What I have
I'm making a web service using C#.
In order to authenticate users, they have to send their name plus their encrypted password, which I check against a database.
Then, if a match is found, I create a string token (a randomly generated 10-character string) that is sent on subsequent requests while the session is alive, so the original credentials never need to be sent again.
What is my problem
Using this approach, my problem comes from the service lifetime.
Web services are not re-initialized on every request, but they don't live forever either, so at some point the service will be destroyed and initialized again.
At that point, my token list would be erased, along with all the live sessions it tracks.
So I'm stuck. I'm not sure how to proceed; maybe I'm overcomplicating this and there's a simpler way to authenticate users? Or maybe you have an idea of how not to lose all these live sessions without having to write them to a DB.
Thank you in advance
Update:
My goal
I aim to create a personal web service, built just for me and some friends. It isn't inside a company or anything like that, and we aren't on the same LAN either.
I want to add a bit of security to this service, so I wanted to add authentication to the WS, mainly to stop people pretending to be someone else and that kind of thing. So I created a user + password system.
Then, to avoid sending both in every WS request, I started writing the "token" approach described above.
Notice that I'm using the word token because of its similarity to the token systems used for these cases, but it's a system created completely from scratch, nothing professional, so don't assume anything complex about it that I haven't stated.
How my system works (or try to)
User -> Auth (user, pass_encrypted) -> WS -> DB (exist? OK)
WS -> token (randomly generated, 10char string) -> User
After that, on each WS request, the user sends the token instead of credentials.
On receiving it, the WS looks up the token in a List<structureToken>, obtaining the user making the call and (for example) the access level, in order to know whether the user has the rights to run that call.
Your current problem is that you want the same list to be persisted through restarts and, at the same time, not persisted to any physical media. You have to pick one of the two and live with it: if not persisting, forget the in-memory list and make sure the token can be validated by itself; if persisting, pick a storage mechanism and save your list of random tokens.
Since you are building a simple system without an actual need for proven, verifiable security, you can get some ideas from existing systems (like an STS and the way it creates tokens). Basically an STS signs information about the user (after validating them) and then encrypts it with the public key of the receiving party. So the particular server that is supposed to receive the token can decrypt it (as it has the private key), while everyone else may still pass it along but has to treat it as a non-verifiable black-box token.
The simplest version of this would be no encryption of the information, just basic signing. Proper signing requires a private/public key pair (so an external party can validate the signature), but since in your case both parties are the same service, you can just use SHA256. To prevent external callers from faking your signature, you need to include some private information in the hash, a "salt" value added before hashing. A random value hard-coded into the server code (or read from settings) would be good enough. You may also want to include an expiration as part of the signed value.
So your "token" could look like:
Name,expiration,Base64 of SHA256 of {Name + expiration + secret value}
Bob-2015-06-30-A23BDEDDC56
Since your server code has the "secret value", you can always re-compute the hash to verify whether a given token is indeed correct.
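A minimal sketch of that token scheme in C#; the secret value and the exact expiration format are placeholders, and as the notes below say, this is for learning rather than real security:
using System;
using System.Globalization;
using System.Security.Cryptography;
using System.Text;

public static class SelfValidatingToken
{
    // Placeholder "secret value"; in practice read it from server settings.
    private const string Secret = "server-only-secret-value";
    private const string ExpiryFormat = "yyyy-MM-dd-HH-mm";

    // Token layout: Name,expiration,Base64(SHA256(Name + expiration + secret))
    public static string Issue(string name, DateTime expiresUtc)
    {
        string expiration = expiresUtc.ToString(ExpiryFormat);
        return name + "," + expiration + "," + Sign(name, expiration); // name must not contain commas
    }

    public static bool Validate(string token, out string name)
    {
        name = null;
        var parts = token.Split(',');
        if (parts.Length != 3) return false;

        DateTime expiresUtc;
        if (!DateTime.TryParseExact(parts[1], ExpiryFormat, CultureInfo.InvariantCulture,
                DateTimeStyles.AssumeUniversal | DateTimeStyles.AdjustToUniversal, out expiresUtc))
            return false;

        if (expiresUtc < DateTime.UtcNow) return false;          // token has expired
        if (parts[2] != Sign(parts[0], parts[1])) return false;  // hash mismatch: forged or tampered

        name = parts[0];
        return true;
    }

    private static string Sign(string name, string expiration)
    {
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(name + expiration + Secret));
            return Convert.ToBase64String(hash);
        }
    }
}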
Notes:
Do not use this for any real services. Use an existing authentication system and make sure to review all security guidance related to its proper usage.
This approach also gives you a chance to learn some other concepts, e.g. key rotation (when your "secret value" needs to change, or, in real systems, the signing/encryption certs).
I have tried a lot of things and I admit defeat (I have read a lot of responses on here, but none have helped me so far). I am trying to set up signed URLs for files held on CloudFront. I am able to create signed URLs for S3, but I cannot get anything to work for CloudFront. For CloudFront I am using the following from the AWS SDK:
var url = AmazonCloudFrontUrlSigner.GetCannedSignedURL(
    AmazonCloudFrontUrlSigner.Protocol.http,
    "cdn.coffeebreakgrooves.com",
    privateKey,
    file,
    cloudFrontKeyPairID,
    DateTime.Now.AddDays(2));
A signed URL is generated, but I get Access Denied when following the link, which, from what I read, suggested I should set up an Origin Access Identity. So I then went to my distribution settings, set up an Origin Access Identity, and chose:
Restrict Bucket Access: Yes
Origin Access Identity: Use an Existing Identity
Grant Read Permissions: Yes, Update Bucket Policy
Then all files become publicly available on CloudFront, regardless of any ACL settings I have in S3 (so even if file.txt has no permissions for anyone in S3, it can still be accessed via CloudFront). I also can't tell whether the signed URLs work, because the download now works with or without the query string since the files have become publicly available. Essentially: how can I make my files private but downloadable with a signed URL (and is my signing method correct)? If I delete the generated bucket policy, access is restricted again. I think I need to know how to set the bucket policy so that the Origin Access Identity can only access the bucket with a signed URL... maybe.
Many thanks in advance for any help!
After a bit of a break and a rethink, here is where I was going wrong: it isn't possible to have some content secured and other content not secured within the same distribution. Either the whole distribution is secured or it isn't. Here is my solution.
1. Set up a new bucket for your secure items in AWS.
2. Add a new distribution in CloudFront pointing to the new bucket created in 1, choose 'Yes' for 'Restrict Viewer Access' and 'Yes' for 'Forward Query Strings' (this is only to add the ability to add a content disposition to specific downloads), and choose 'Self' for 'Trusted Signers'.
3. At the top of AWS, click on your name and choose 'Security Credentials', then choose 'Continue' as we chose 'Self' above.
4. Click on 'CloudFront Key Pairs' and choose 'Create New Key Pair'. Download the key files when offered (they won't be offered again); you need the private key. Also copy the Access Key ID, as you'll need that.
5. Go to your distributions, click on the i next to the secure distribution, click on the Origins tab, click 'Create Origin' or select the origin and choose Edit, then choose 'Yes' for Restrict Bucket Access, Create a New Identity, and Yes, Update Bucket Policy. This essentially means that CloudFront can authenticate against your bucket.
6. In your project, go to NuGet, search for 'AWS', and install the AWS SDK.
7. Copy the private key file (pk***.pem) to a folder above your website root (or somewhere relatively private).
8. Add some code like the following to generate a secure URL with a Content-Disposition header.
I have to say that I couldn't have solved this without the help of Torsten's post on https://forums.aws.amazon.com/thread.jspa?messageID=421768 which is in PHP but pointed me in the right direction:
string cloudFrontKeyPairID = "myaccesskeyidfrompoint4";
string pathtokey = HttpContext.Current.Request.MapPath("~/").Replace("wwwroot", "ssl")
                   + "pk-mykeyidfilenamesavedin4.pem";
FileInfo privateKey = new FileInfo(pathtokey);

string file = "folder/mytrack.mp3?response-content-disposition=" +
    HttpContext.Current.Server.UrlEncode("attachment;filename='a_filename_with_no_spaces.mp3'");
// I can't figure out how to do spaces or odd characters.

string url = AmazonCloudFrontUrlSigner.GetCannedSignedURL(
    AmazonCloudFrontUrlSigner.Protocol.http,
    "customcname.mydomain.com",
    privateKey,
    file,
    cloudFrontKeyPairID,
    DateTime.Now.AddDays(2));
I hope that helps someone; I will be using this as a personal resource anyway! Enabling the Origin Access Identity on an existing bucket which doesn't have 'Restrict Viewer Access' set essentially opens up permissions for all items in your bucket. This may or may not be desirable! If I have anything wrong, please let me know; this is all pretty new to me.
I'm looking at using Amazon S3 and SimpleDB in a desktop application.
The main issue I have is that I either need to store my aws credentials in the application or use some other scheme.
I'm guessing that storing them in the application is out of the question as they would be easily picked out.
Another option is to create a web service that creates the aws authentication signature but this has its own issues.
Does the signature require all the data from a file that's being uploaded? If so, I would have to transfer all the data twice.
There would then be a central point of failure, and avoiding that was one of the main reasons for using AWS.
Any ideas?
UPDATE:
I needed to make it a bit clearer that I'm wanting to store my AWS credentials in an application handed out to others. DPAPI or any other encryption would only stop people from simply using Reflector to get the credentials. Any encryption still needs a key, which is just as easy to get.
UPDATE 2 - Sept 2011
Amazon have released some details on using the AWS Security Token Service, which allows for authentication without disclosing your secret key. More details are available on this blog post.
Tim, you're indeed hitting on the two key approaches:
NOT GOOD ENOUGH: store the secret key "secretly" in the app. There is indeed a grave risk of someone just picking it out of the app code. Some mitigations might be to (a) use the DPAPI to store the key outside the app binary, or (b) obtain the key over the wire from your web service each time you need it (over SSL), but never store it locally. No mitigation can really slow down a competent attacker with a debugger, as the cleartext key must end up in the app's RAM.
BETTER: Push the content that needs to be protected to your web service and sign it there. The good news is that only the request name and timestamp need to be signed -- not all the uploaded bits (I guess Amazon doesn't want to spend the cycles on verifying all those bits either!). Below are the relevant code lines from Amazon's own "Introduction to AWS for C# Developers". Notice how Aws_GetSignature gets called only with "PutObject" and a timestamp? You could definitely implement the signature on your own web service without having to send the whole file and without compromising your key. In case you're wondering, Aws_GetSignature is a 9-line function that does a SHA1 hash on a concatenation of the constant string "AmazonS3", the operation name, and the RFC822 representation of the timestamp -- using your secret key.
DateTime timestamp = Aws_GetDatestamp();
string signature = Aws_GetSignature( "PutObject", timestamp );
byte[] data = UnicodeEncoding.ASCII.GetBytes( content );
service.PutObjectInline( "MainBucket", cAWSSecretKey, metadata,
data, content.Length, null,
StorageClass.STANDARD, true,
cAWSAccessKeyId, timestamp, true,
signature, null );
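For completeness, here is a hedged reconstruction of the Aws_GetSignature helper described above: an HMAC-SHA1, keyed with your secret key, over "AmazonS3" + operation name + timestamp. The exact timestamp formatting follows the description above; treat Amazon's own sample as the canonical version:
using System;
using System.Security.Cryptography;
using System.Text;

public static class AwsSoapSigning
{
    public static string Aws_GetSignature(string secretKey, string operation, DateTime timestamp)
    {
        // Only this short string is signed -- never the uploaded file bytes.
        string stringToSign = "AmazonS3" + operation + timestamp.ToUniversalTime().ToString("r"); // RFC822-style

        using (var hmac = new HMACSHA1(Encoding.UTF8.GetBytes(secretKey)))
        {
            byte[] signature = hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign));
            return Convert.ToBase64String(signature);
        }
    }
}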
EDIT: note that while you can keep the secret key portion of your Amazon identity hidden, the access key ID portion needs to be embedded in the request. Unless you send the file through your own web service, you'll have to embed it in the app.
The main issue I have is that I either need to store my aws credentials in the application or use some other scheme.
Does Windows have a system-wide service similar to Apple's Keychain Manager? If so, put your credentials there. If not, perhaps you can build a watered-down version of it for storing a strongly-encrypted version of your AWS credentials.
Does the signature require all the data from a file thats being uploaded?
The HMAC-SHA1 signature is a keyed hash computed over the HTTP request headers, not over the file contents. The signature is a hash value and will be very short relative to your data, only 20 bytes long.
You can encrypt the config file and/or use ProtectedData. Here's my blog post on both.
UPDATE: You might be able to encrypt your app.config as part of an install step. Sample here: http://www.codeproject.com/KB/security/encryptstrings.aspx. Not great, but the best I've found so far.
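For the ProtectedData route, a hedged sketch using the Windows DPAPI (requires a reference to System.Security; the entropy value is a placeholder). As the question's update points out, this only raises the bar against casual inspection, since the app itself must still be able to decrypt the value:
using System;
using System.Security.Cryptography;
using System.Text;

public static class CredentialProtection
{
    // Optional extra entropy; placeholder value, keep it out of obvious sight.
    private static readonly byte[] Entropy = Encoding.UTF8.GetBytes("app-specific-entropy");

    public static string Protect(string secret)
    {
        byte[] blob = ProtectedData.Protect(
            Encoding.UTF8.GetBytes(secret),
            Entropy,
            DataProtectionScope.CurrentUser); // bound to the current Windows user
        return Convert.ToBase64String(blob);
    }

    public static string Unprotect(string protectedBase64)
    {
        byte[] plain = ProtectedData.Unprotect(
            Convert.FromBase64String(protectedBase64),
            Entropy,
            DataProtectionScope.CurrentUser);
        return Encoding.UTF8.GetString(plain);
    }
}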
Will you let anyone that gets a hold of a copy of your program access the data on S3/SimpleDB? If not, you will need your own authentication scheme that's independent from AWS security.
In that case, you could implement a web service that accepts the credentials that you give your customers (a username/password for example, a digital certificate, etc) and then performs the S3/SimpleDB operations that your program requires. That way, the AWS credentials never leave AWS. If a particular user's credentials are compromised, you can cancel those credentials in your web service.