I'm using the Amazon SDK for .NET.
I have uploaded a file to a folder in my bucket, and now I want to get the URL of that file using this code:
GetPreSignedUrlRequest request = new GetPreSignedUrlRequest();
request.BucketName = "my-new-bucket2";
request.Key = "Images/Tulips.jpg";
request.Expires = DateTime.Now.AddHours(1);
request.Protocol = Protocol.HTTP;
string url = s3.GetPreSignedURL(request);
but this returns a URL with a key, expiration date, and signature. I actually want the URL without them. Is there no other method to get the plain URL?
**Things I tried**
I searched and found that I have to change the permission of my file, so I changed the permission while uploading:
request.CannedACL = S3CannedACL.PublicRead;
but it still returns the same URL:
http://my-new-bucket2.s3-us-west-2.amazonaws.com/Images/Tulips.jpg?AWSAccessKeyId=xxxxxxxxxxxx&Expires=1432715743&Signature=xxxxxxxxxxx%3D
It works when I remove the AWSAccessKeyId, Expires, and Signature parameters, but how can I get the URL without them? Or do I have to strip them manually?
This is by design. If you know the bucket name and the key, then you have everything you need to construct the URL. As an example, if the bucket name is yourbucketname and the key is this/is/your/key.jpg, the URL is:
https://yourbucketname.s3.amazonaws.com/this/is/your/key.jpg
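If you would rather build it in code, a minimal sketch (assuming a publicly readable object in a us-west-2 bucket, matching your example):
// Build the public object URL by hand; no SDK call needed.
// This only works if the object's ACL or bucket policy allows anonymous reads.
string bucketName = "my-new-bucket2";
string key = "Images/Tulips.jpg";
string publicUrl = string.Format("https://{0}.s3-us-west-2.amazonaws.com/{1}", bucketName, key);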
Hope that helps!
I just browsed their documentation and was not able to find a method that returns the absolute URL, although there may be one I could not see. For now, you can solve your problem by extracting the absolute URL from the result you already have:
GetPreSignedUrlRequest request = new GetPreSignedUrlRequest();
request.BucketName = "my-new-bucket2";
request.Key = "Images/Tulips.jpg";
request.Expires = DateTime.Now.AddHours(1);
request.Protocol = Protocol.HTTP;
string url = s3.GetPreSignedURL(request);
// Everything before the "?" is the plain object URL.
int index = url.IndexOf("?");
string absUrl = index > 0 ? url.Substring(0, index) : url;
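Alternatively, instead of slicing the string yourself, you can let System.Uri strip the query for you (a small sketch, same result):
// GetLeftPart(UriPartial.Path) keeps scheme, host, and path, dropping the
// AWSAccessKeyId/Expires/Signature query string.
var uri = new Uri(url);
string absUrl = uri.GetLeftPart(UriPartial.Path);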
Hope it helps :)
Related
I am building a Google document with the Google Docs API.
My doc is created from a template: copy the template to a new location, replace text, and replace images.
Everything is OK until I try to replace my images.
The images to put in the file are stored in a Google Drive folder.
So I have two questions.
The first one is: how do I use a replaceImage request with the viewing URL of my image?
Using the following code, I am facing a 400 error:
IDictionary<string, InlineObject> inlineObjects = doc.InlineObjects;
// Get the object ID
string imageObjectId = inlineObjects.First().Value.ObjectId;
BatchUpdateDocumentRequest batchUpdateRequest = new BatchUpdateDocumentRequest {
Requests = new List<Google.Apis.Docs.v1.Data.Request>()
};
var request = new Google.Apis.Docs.v1.Data.Request {
ReplaceImage = new ReplaceImageRequest() {
ImageObjectId = imageObjectId,
Uri = "https://docs.google.com/uc?export=view&id=1pmXP9TFolKMQoUlXHllvIrQPlZiDxId6"
}};
batchUpdateRequest.Requests.Add(request);
DocumentsResource.BatchUpdateRequest updateRequest =
_docSercive.Documents.BatchUpdate(batchUpdateRequest, fileId);
BatchUpdateDocumentResponse updateResponse = updateRequest.Execute();
Error:
Invalid requests[0].replaceImage: Access to the provided image was forbidden. [400]
Errors [
Message[Invalid requests[0].replaceImage: Access to the provided image was forbidden.] Location[ - ] Reason[badRequest] Domain[global]
]
And the second one is: how do I industrialize this by retrieving the image URL from the Google Drive API?
When I try right now, I can only access the image name and extension:
Google.Apis.Drive.v3.Data.File f = _service.Files.Get(fileId).Execute();
Am I supposed to use query parameters for this files.get call?
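For example, would requesting extra fields like this be the right approach? (webContentLink is just my guess at a field that might hold a usable URL.)
// Guessing here: ask files.get for more fields than the defaults return.
var getRequest = _service.Files.Get(fileId);
getRequest.Fields = "id, name, webContentLink";
Google.Apis.Drive.v3.Data.File f = getRequest.Execute();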
Thanks for your help!
I am using TransferUtilityDownload and TransferUtilityDownloadDirectory to download a single file and a full directory. Even though I am using the same bucket name format, it works for the single file but not for the directory, which returns 403 Access Denied (same problem with listing objects):
string bucketName = "my-bucket-us-east-1-prod";
string UnscheduledIn = "abc/butter/input_butter_11nov2019/unscheduled";
AmazonS3Client client = new AmazonS3Client(RegionEndpoint.USEast1);
// request for object download
var request = new TransferUtilityDownloadRequest();
// request for directory download
var drequest = new TransferUtilityDownloadDirectoryRequest();
//This request for single file download
request.BucketName = bucketName + "/" + UnscheduledIn;
request.FilePath = "D:\\input\\" + "test.csv";
request.Key = "test.csv";
//This request for directory download
drequest.BucketName = bucketName + "/" + UnscheduledIn;
drequest.S3Directory = "unscheduled";
drequest.LocalDirectory = "D:\\input\\";
drequest.DownloadFilesConcurrently = true;
TransferUtility fileTransferUtility = new TransferUtility(new AmazonS3Client(RegionEndpoint.USEast1));
// This one works
fileTransferUtility.Download(request);
// This one does not work
fileTransferUtility.DownloadDirectory(drequest);
A 403 Access Denied error is usually caused by a wrong bucket or directory name (when the request cannot find the bucket or directory; this is a known issue). However, the bucket name and directory name are correct. I wonder if I have the formatting wrong or am missing some properties?
A quick note: this version also returns the same 403 error:
//This request for directory download
drequest.BucketName = bucketName;
drequest.S3Directory = UnscheduledIn;
drequest.LocalDirectory = "D:\\input\\";
drequest.DownloadFilesConcurrently = true;
It looks like there is some issue with the bucket name and S3 directory path. Update your code with this piece of code:
//This request for directory download
drequest.BucketName = bucketName;
drequest.S3Directory = "/" + UnscheduledIn;
drequest.LocalDirectory = "D:\\input";
drequest.DownloadFilesConcurrently = true;
Update:
In general, a 403 Forbidden comes from the server when authentication fails or permission is denied.
Please check that your bucket policy allows the download; the statement below belongs inside the policy's Statement array:
{
    "Sid": "AllowAllS3ActionsInUserFolder",
    "Effect": "Allow",
    "Action": ["s3:*"],
    "Resource": [
        "arn:aws:s3:::my-bucket-us-east-1-prod/abc/butter/input_butter_11nov2019/*",
        "arn:aws:s3:::my-bucket-us-east-1-prod/abc/butter/input_butter_11nov2019/unscheduled/*"
    ]
}
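Also note that DownloadDirectory first has to list the keys under the prefix before it can fetch them, so the policy likely also needs s3:ListBucket on the bucket ARN itself (arn:aws:s3:::my-bucket-us-east-1-prod), not just s3:* on the object ARNs. A quick way to confirm it is a permissions problem is to run the listing directly; a sketch reusing the client and names from your code:
// If this throws AccessDenied, the policy is the problem, not the transfer call.
var listRequest = new ListObjectsV2Request
{
    BucketName = bucketName,       // bucket name only, no slashes
    Prefix = UnscheduledIn + "/"   // the "directory" is just a key prefix
};
ListObjectsV2Response listResponse = client.ListObjectsV2(listRequest);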
I'm trying to get an image hash using the Facebook Ads API.
I don't understand how to make the call.
I have the image URL as a string and the image itself as a byte[].
This is the example from the FB documentation:
curl -F 'test.jpg=@test.jpg' -F 'access_token=_' "https://graph.facebook.com/act_368811234/adimages"
What does test.jpg=@test.jpg mean? It's not something I've seen before.
You can find the relevant Facebook documentation at: https://developers.facebook.com/docs/reference/ads-api/adimage/
Thank you
The following part of the curl request means: post a parameter named test.jpg whose value is a local file, also called test.jpg, in the current directory (curl's @ prefix attaches a file):
test.jpg=@test.jpg
If you're using C#, you may want to take a look at the open-source library available from facebooksdk.net (note: it's not produced by Facebook):
http://facebooksdk.net/docs/making-synchronous-requests/
Using it, the upload should only take a few lines of code:
var fb = new FacebookClient("access_token");
string attachmentPath = @"C:\image.jpg";
dynamic result = fb.Post("act_YOURACCOUNTID/adimages",
    new
    {
        file = new FacebookMediaObject
        {
            ContentType = "image/jpeg",
            FileName = Path.GetFileName(attachmentPath)
        }.SetValue(File.ReadAllBytes(attachmentPath))
    }
);
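The hash you're after should come back in the response; a hedged sketch of reading it (the response shape below is from the legacy Ads API docs and may differ in current versions):
// Expected shape: {"images": {"image.jpg": {"hash": "...", "url": "..."}}}
dynamic image = result.images["image.jpg"];
string hash = image.hash;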
As you also tagged PHP: you can use the PHP SDK, which is produced by Facebook (https://github.com/facebook/facebook-php-sdk/), with the following code:
$facebook = new Facebook(array(
    'appId'  => 'YOUR_APPID',
    'secret' => 'YOUR_APPSECRET',
));
$facebook->setAccessToken("YOUR_ACCESS_TOKEN");
// File uploads must be enabled for the '@' file-path syntax to work.
$facebook->setFileUploadSupport(true);
$file = './test.jpg';
$args = array(
    basename($file) => '@' . realpath($file),
);
$response = $facebook->api('/act_YOURACTID/adimages', 'post', $args);
I have a Facebook Page Tab app and I'm trying to find out where visitors to the page tab are coming from. I've read at http://developers.facebook.com/docs/authentication/signed_request/ that you can get this from app_data in the signed request, but whenever I try getting the signed request, app_data isn't there.
I used FB.getLoginStatus to get the signed request while inside the tab on Facebook, but when I debug the signed request with http://developers.facebook.com/tools/echo I get the error "Bad Signature":
Your signed_request was probably not signed with our app_id of xxxxx. Here is the payload:
{
    "algorithm": "HMAC-SHA256",
    "code": "xxxx",
    "issued_at": xxxx,
    "user_id": "xxxx2"
}
I'm using the C# SDK with JavaScript.
You can decode the signed request with the code in this topic:
Decode Signed Request Without Authentication
// Requires: using System.Text; and using Newtonsoft.Json.Linq;
if (Request.Params["signed_request"] != null)
{
    // A signed_request is "<signature>.<base64url payload>"; take the payload part.
    string payload = Request.Params["signed_request"].Split('.')[1];
    var encoding = new UTF8Encoding();
    // Convert base64url to standard base64 and restore the padding.
    var base64 = payload.Replace("=", string.Empty).Replace('-', '+').Replace('_', '/');
    var base64JsonArray = Convert.FromBase64String(base64.PadRight(base64.Length + (4 - base64.Length % 4) % 4, '='));
    var json = encoding.GetString(base64JsonArray);
    var o = JObject.Parse(json);
    var lPid = Convert.ToString(o.SelectToken("page.id")).Replace("\"", "");
    var lLiked = Convert.ToString(o.SelectToken("page.liked")).Replace("\"", "");
    var lUserId = Convert.ToString(o.SelectToken("user_id")).Replace("\"", "");
}
It should be easy to get the app_data by adding:
var lAppData = Convert.ToString(o.SelectToken("app_data")).Replace("\"", "");
To have the app_data for your tab app, you need to add it to the redirect URL when acquiring permissions. Your redirect URL should look something like:
http://facebook.com/YOUR_PAGE?sk=app_YOUR_APP_ID&app_data=add,whatever,parameters,you,want,here
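If you put structured data into app_data when building that URL, it comes back URL-encoded; a small sketch of unpacking it (this assumes you chose to encode it as JSON, and references System.Web):
// app_data is whatever string you appended to the URL; decode it first.
string rawAppData = System.Web.HttpUtility.UrlDecode(lAppData);
// Parse only if you actually put JSON in there.
var appData = JObject.Parse(rawAppData);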
I can only guess that the reason you got this error is that you pasted your own signed request into the echo tool instead of the one the tool generated. The error occurs because your signed request is signed with your app_id, and you're trying to validate it with the echo tool, which has a different app_id. But that's just a guess :)
My primary language is PHP, but I hope I was able to help :)
Any idea how to upload a file to a Google Site from C#?
I am trying to upload but am getting a 403 error, even though I am using the same credentials to connect to the site and get the list of attachments and pages on it.
Any help would be greatly appreciated!
They most likely have an anti-CSRF scheme that stores temporary identifiers in the page and/or cookies, specifically to hinder bots.
You are most likely submitting a request without the proper CSRF tokens and getting rejected. I would recommend analyzing how they handle CSRF. After that, it will most likely boil down to making a WebRequest to the page so you can capture any cookies they send back, along with the form, so you can scrape out any relevant hidden fields. Then move those over to the POST request with which you're trying to send the file.
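A rough sketch of that flow (all URLs here are placeholders, not real Google Sites endpoints; requires System.Net and System.IO):
// 1) GET the page that hosts the upload form, keeping its cookies.
var cookies = new CookieContainer();
var get = (HttpWebRequest)WebRequest.Create("https://sites.google.com/your/page");
get.CookieContainer = cookies;
string html;
using (var resp = (HttpWebResponse)get.GetResponse())
using (var reader = new StreamReader(resp.GetResponseStream()))
{
    html = reader.ReadToEnd(); // scrape any hidden CSRF fields out of this
}
// 2) POST the file to the form's target, reusing the same cookies
//    (and sending the scraped tokens as form fields).
var post = (HttpWebRequest)WebRequest.Create("https://sites.google.com/your/upload");
post.Method = "POST";
post.CookieContainer = cookies; // cookies carry the session/CSRF state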
I figured out the problem and resolved it. Below is the complete function:
public bool UploadAttachment()
{
try
{
//AsyncSendData data = new AsyncSendData();
string parentUrl = Cabinets["Cabinet1"].ToString();
string parentID = parentUrl.Split('/')[7];
AtomEntry entry = new AtomEntry();
entry.Title.Text = "abc.jpg";
AtomCategory cat = new AtomCategory();
cat.Term = ATTACHMENT_TERM;
cat.Label = "attachment";
cat.Scheme = KIND_SCHEME;
entry.Categories.Add(cat);
AtomLink link = new AtomLink();
link.Rel = PARENT_REL;
link.HRef = parentUrl;
entry.Links.Add(link);
AtomContent content = new AtomContent();
FileInfo info = new FileInfo("C:\\Bluehills.txt");
FileStream stream = info.Open(FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite);
this.setUserCredentials(userName, password);
Uri postUri = new Uri(makeFeedUri("content"));
entry.Source = new AtomSource();
//this.EntrySend(postUri, entry, GDataRequestType.Insert);
// Send the request and receive the response:
AtomEntry insertedEntry = this.Insert(postUri, stream, (string)DocumentTypes["TXT"], "bluehills");
return true;
}
catch (Exception)
{
    // Swallow the exception and just report failure; log it here if you need diagnostics.
    return false;
}
}