I have a single project that contains classes to communicate with AWS. SQS is the only one that is not working. It is safe to assume that the access and secret keys are valid. I am also able to access this queue elsewhere, so I am 100% sure it exists.
I have created a super basic method and this is failing.
var Config = new AmazonSQSConfig() { ServiceURL = "https://sqs.eu-west-1.amazonaws.com/.....etc"};
var Client = new AmazonSQSClient(Config);
SendMessageRequest request = new SendMessageRequest() { MessageBody = "Hello", };
SendMessageResponse sendMessageResponse = Client.SendMessage(request);
When the final line, Client.SendMessage(request), runs, it throws a 403 exception with this error:
The request signature we calculated does not match the signature you
provided. Check your AWS Secret Access Key and signing method. Consult
the service documentation for details.
The code is so basic that I can't see where it could be wrong. The secret and access keys work for all other AWS communication, so this can't be the cause, and I am 100% sure the queue exists. What could be causing this?
This code works - see if you can use it instead:
using (var client = new AmazonSQSClient(Amazon.RegionEndpoint.USEast1))
{
    client.SendMessage(new SendMessageRequest { QueueUrl = _queueName, MessageBody = JsonConvert.SerializeObject(request) });
}
This seems to be a really weird one, but I resolved the issue by using the fragmented AWSSDK instead of the complete one. I was using the main SDK from NuGet that contains everything for AWS. I removed this and installed the Core, S3 and SQS parts of the SDK. The code immediately worked once I did this. No idea why it behaved this way, or why it worked for S3 and not SQS, but at least this was a fairly simple fix.
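If it helps anyone else, here is a rough end-to-end sketch of that modular-package setup, using only the AWSSDK.Core and AWSSDK.SQS packages. The region, queue URL and credential handling below are placeholders, not values from a real project:

using System;
using Amazon;
using Amazon.SQS;
using Amazon.SQS.Model;

class SqsSendExample
{
    static void Main()
    {
        // Credentials come from the default chain (credentials file, environment variables, etc.)
        var client = new AmazonSQSClient(RegionEndpoint.EUWest1);

        var request = new SendMessageRequest
        {
            QueueUrl = "https://sqs.eu-west-1.amazonaws.com/123456789012/my-queue", // placeholder queue URL
            MessageBody = "Hello"
        };

        SendMessageResponse response = client.SendMessage(request);
        Console.WriteLine(response.MessageId);
    }
}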
I am using the NuGet package Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction
I have created a Custom Vision application in the Custom Vision portal and obtained API keys and a project ID.
Whenever I try to make a request to the API, I always get the following exception thrown:
HttpOperationException: Operation returned an invalid status code
'NotFound'
Here is my code:
HttpClient httpClient = new HttpClient();
CustomVisionPredictionClient customVisionPredictionClient = new CustomVisionPredictionClient(httpClient, false)
{
    ApiKey = PredictionKey,
    Endpoint = PredictionEndpoint,
};
var result = customVisionPredictionClient.PredictImageAsync(CUSTOM_VISION_PROJECT_GUID, imageData);
I have tried several different endpoints:
https://southcentralus.api.cognitive.microsoft.com/customvision/v2.0/Prediction
https://southcentralus.api.cognitive.microsoft.com/customvision/Prediction/v1.0
https://southcentralus.api.cognitive.microsoft.com/customvision/v1.1/Prediction
though the one listed on the portal is the first in the list. I have also successfully exported my app on Azure, which gives me the second endpoint in the list, but with no more success.
I have also set a default iteration, as suggested in a similar issue that I found (CustomVision: Operation returned an invalid status code: 'NotFound').
I have tried this sample https://github.com/Microsoft/Cognitive-CustomVision-Windows/tree/master/Samples/CustomVision.Sample, which uses a deprecated Windows client, to at least ensure my project information is correct, and I was able to access the API.
Any insight would be appreciated.
For the .NET client SDK, you need to specify the base endpoint URL without the version or the rest of the path. The version is automatically added by the client SDK. In other words, you'll want (assuming SouthCentralUS is your region):
PredictionEndpoint = "https://southcentralus.api.cognitive.microsoft.com";
CustomVisionPredictionClient customVisionPredictionClient = new CustomVisionPredictionClient()
{
    ApiKey = PredictionKey,
    Endpoint = PredictionEndpoint,
};
var result = customVisionPredictionClient.PredictImageAsync(CUSTOM_VISION_PROJECT_GUID, imageData);
As an aside, note that unless you want to fine-tune the behavior, you don't need to pass an HttpClient object to the CustomVisionPredictionClient constructor.
If you need more sample code, please take a look at the QuickStart.
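If it's useful, here is a minimal sketch of the whole call, awaiting the prediction and printing the returned tags. The prediction key, endpoint, project GUID and image path are placeholders, and it assumes an SDK version that exposes PredictImageAsync:

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction;

class PredictionExample
{
    static async Task Main()
    {
        var client = new CustomVisionPredictionClient()
        {
            ApiKey = "<your prediction key>",
            Endpoint = "https://southcentralus.api.cognitive.microsoft.com"
        };

        var projectId = new Guid("00000000-0000-0000-0000-000000000000"); // your project GUID

        using (var imageData = File.OpenRead("test.jpg")) // path to a local test image
        {
            var result = await client.PredictImageAsync(projectId, imageData);

            foreach (var prediction in result.Predictions)
            {
                Console.WriteLine($"{prediction.TagName}: {prediction.Probability:P1}");
            }
        }
    }
}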
How to use the Prediction API
If you have an image URL:
your endpoint would be something like this
https://southcentralus.api.cognitive.microsoft.com/customvision/v2.0/Prediction/{Project-GUID}/url?iterationId={Iteration-ID}
Set the Prediction-Key header to: your prediction key
Set the Content-Type header to: application/json
Set the body to: {"Url": "https://example.com/image.png"}
Or, if you have an image file:
your endpoint would be something like
https://southcentralus.api.cognitive.microsoft.com/customvision/v2.0/Prediction/{ProjectGuid}/image?iterationId={Iteration-Id}
Set the Prediction-Key header to: your prediction key
Set the Content-Type header to: application/octet-stream
Set the body to: <image file>
Remember, you can mark an iteration as Default so you can send data to it without specifying an iteration id. You can then change which iteration your app is pointing to without having to update your app.
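If you would rather call the REST endpoint directly from C# instead of using the SDK, a minimal sketch for the image-URL case might look like this. The project GUID, iteration ID, prediction key and image URL are placeholders:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class PredictionRestExample
{
    static async Task Main()
    {
        var endpoint = "https://southcentralus.api.cognitive.microsoft.com/customvision/v2.0/Prediction/" +
                       "<project-guid>/url?iterationId=<iteration-id>";

        using (var http = new HttpClient())
        {
            http.DefaultRequestHeaders.Add("Prediction-Key", "<your prediction key>");

            var body = new StringContent("{\"Url\": \"https://example.com/image.png\"}",
                                         Encoding.UTF8, "application/json");

            HttpResponseMessage response = await http.PostAsync(endpoint, body);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}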
Check my other answer on a similar issue using Python:
Python custom vision predictor fails
Hope it helps.
So I created a bot in Azure and downloaded it. The free 1000 calls from LUIS reached their limit, so I created a subscription in the Azure portal (I did the Docker container thing) and followed this guide until step 6. When I click the endpoint URL and query directly in the browser, it works fine.
I added it to the bot via the Bot Emulator by clicking the + sign in Services and adding the bot model there. But when I run the bot I get the error in the title. I noticed that in the .bot file the authoring key and subscription key added by the Bot Emulator are the same.
So I changed the subscription key to one of the keys generated by Azure and still got the same error. I have tried resetting the authoring key (still the same) and deleting my luis.ai account and creating a new one (still the same email, because that is the one logged into the Azure portal), and still the same error.
Here are some pictures for reference and the error.
I also tried testing it in luis.ai and got this result.
But when I check, it is set to the new resource.
Here is a pic of the .bot file after adding LUIS via the Bot Emulator. It has the same authoring key and subscription key (still forbidden),
so I changed it to use the subscription key (still forbidden).
Here it is working properly when tested directly via the URL.
For reference:
the Azure portal
luis.ai
and the error
How I add LUIS to the bot:
Here is the code for the bot service.
using System;
using System.Collections.Generic;
using Microsoft.Bot.Builder.AI.Luis;
using Microsoft.Bot.Configuration;

namespace Microsoft.BotBuilderSamples
{
    public class BotServices
    {
        public BotServices(BotConfiguration botConfiguration)
        {
            foreach (var service in botConfiguration.Services)
            {
                switch (service.Type)
                {
                    case ServiceTypes.Luis:
                    {
                        var luis = (LuisService)service;
                        if (luis == null)
                        {
                            throw new InvalidOperationException("The LUIS service is not configured correctly in your '.bot' file.");
                        }

                        var endpoint = (luis.Region?.StartsWith("https://") ?? false) ? luis.Region : luis.GetEndpoint();
                        var app = new LuisApplication(luis.AppId, luis.AuthoringKey, endpoint);
                        var recognizer = new LuisRecognizer(app);
                        this.LuisServices.Add(luis.Name, recognizer);
                        break;
                    }
                }
            }
        }

        public Dictionary<string, LuisRecognizer> LuisServices { get; } = new Dictionary<string, LuisRecognizer>();
    }
}
I have been trying to solve this for 4 days already. Thanks!
Thank you for all of the images. That is a HUGE help! Here's the problem:
By default, your code looks for the AuthoringKey in this section (second line):
var endpoint = (luis.Region?.StartsWith("https://") ?? false) ? luis.Region : luis.GetEndpoint();
var app = new LuisApplication(luis.AppId, luis.AuthoringKey, endpoint);
var recognizer = new LuisRecognizer(app);
this.LuisServices.Add(luis.Name, recognizer);
Since your .bot file still has the authoringKey set to the one that starts with ad9c..., which has hit its limit, your bot keeps running into the 403 error.
So, in your .bot file, replace that authoringKey with one of your endpointKeys (they start with 12ccc... or b575...).
I understand your confusion with this, especially since it requires putting an endpoint key in your authoringKey property. I know there are some changes on the horizon to how LUIS bots will use keys, but those are probably a month or more out.
Alternatively, you can change:
var app = new LuisApplication(luis.AppId, luis.AuthoringKey, endpoint);
to:
var app = new LuisApplication(luis.AppId, luis.SubscriptionKey, endpoint);
Note: If you make either of these changes, LUIS can only query (which is usually fine), since Authoring Keys do everything else (see the references below).
References
These are not so much for you as for others who might come across this.
Authoring vs. Endpoint Keys
Key Limits
Troubleshooting LUIS 403 Errors
Recently, while working with Lex in C#, I referenced AWSCore.dll and AWSLex.dll, and I am still trying to find a method that exposes all the available Lex chatbots I created on the Amazon server.
var amazonPostRequest = new Amazon.Lex.Model.PostContentRequest();
var amazonPostResponse = new Amazon.Lex.Model.PostContentResponse();
I used both of these to get all the other information. The methods on the request for the bot name and alias are for setting values, and there is no method on the response for getting the available Lex chatbots on the server.
I don't believe that the Lex SDK supports this call directly.
Use the AWS Lex REST API to get a list of bots:
GET https://<your aws region endpoint>/bots/
https://docs.aws.amazon.com/lex/latest/dg/API_GetBots.html
After a long search I found the answer to my problem. It may help others.
First, we need to add the AWSSDK.LexModelBuildingService package through NuGet. This adds a reference to the DLL.
With that, all the methods are already exposed. We need to create both a GetBotsRequest and a GetBotsResponse object.
var botRequest = new Amazon.LexModelBuildingService.Model.GetBotsRequest();
var botResponse = new Amazon.LexModelBuildingService.Model.GetBotsResponse();
Then we need to create the Lex Model Building Service client:
var amazonmodel = new AmazonLexModelBuildingServiceClient("YourAccessKeyId", "YourSecretAccessKey", Amazon.RegionEndpoint.USEast1);
After that we can get the response from the built-in GetBots() method:
botResponse = amazonmodel.GetBots(botRequest);
We will get the list of bot metadata:
List<Amazon.LexModelBuildingService.Model.BotMetadata> bots = botResponse.Bots;
Every detail about each bot created is available in that list of bots.
Almost all the methods for getting details from the Lex configuration are available in the LexModelBuildingService DLL.
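Putting the pieces together, a rough sketch that lists the bot names looks like this. The access key, secret key and region are placeholders:

using System;
using Amazon;
using Amazon.LexModelBuildingService;
using Amazon.LexModelBuildingService.Model;

class ListLexBots
{
    static void Main()
    {
        var client = new AmazonLexModelBuildingServiceClient(
            "YourAccessKeyId", "YourSecretAccessKey", RegionEndpoint.USEast1);

        GetBotsResponse response = client.GetBots(new GetBotsRequest());

        // Print the name and status of every bot in the account/region
        foreach (BotMetadata bot in response.Bots)
        {
            Console.WriteLine($"{bot.Name} (status: {bot.Status})");
        }
    }
}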
Note:
In IAM (Identity and Access Management) in AWS, we need to grant access to the Lex components in the Policy section: either AWSLexFullAccess or at least arn:aws:lex:region:account-id:bot:* access in the policy.
I am using the Stripe.net SDK from NuGet. I always get the
The signature for the webhook is not present in the Stripe-Signature header.
exception from the StripeEventUtility.ConstructEvent method.
[HttpPost]
public void Test([FromBody] JObject incoming)
{
    var stripeEvent = StripeEventUtility.ConstructEvent(incoming.ToString(), Request.Headers["Stripe-Signature"], Constants.STRIPE_LISTENER_KEY);
}
The webhook key is correct, and the request headers contain the "Stripe-Signature" key.
I correctly receive incoming data from the webhook tester utility (using ngrok with Visual Studio).
The secureCompare method seems to be the culprit => StripeEventUtility.cs
I have tried manipulating the incoming data from Stripe (JObject, string, serializing...). The payload signature may be causing the problem.
Has anybody had the same problem?
As per @Josh's comment, I received this same error:
The signature for the webhook is not present in the Stripe-Signature header.
This was because I had incorrectly used the API secret (starting with sk_) to verify the HMAC on EventUtility.ConstructEvent.
Instead, Stripe webhook payloads are signed with the webhook signing secret (starting with whsec_), as per the docs.
The webhook signing secret can be obtained from the Developers -> Webhooks page.
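For reference, here is a minimal sketch of an ASP.NET Core handler using the signing secret. The route and the whsec_ value are placeholders, and the raw request body is read so the signature check sees exactly what Stripe sent:

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Stripe;

[Route("webhook")]
public class StripeWebhookController : Controller
{
    [HttpPost]
    public async Task<IActionResult> Index()
    {
        // Read the raw body; model binding would consume the stream and change the payload.
        string json;
        using (var reader = new StreamReader(Request.Body))
        {
            json = await reader.ReadToEndAsync();
        }

        var stripeEvent = EventUtility.ConstructEvent(
            json,
            Request.Headers["Stripe-Signature"],
            "whsec_XXXXXXXXXXXXXXXX"); // webhook signing secret, not the sk_ API key

        return Ok();
    }
}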
The error can also occur because you are using the secret from the Stripe dashboard. If you are using the Stripe CLI for testing, you need to use the temporary secret generated by the CLI instead.
To obtain it, run this:
stripe listen --print-secret
I'm not sure about the reason for this, but the JSON read from Request.Body has a slightly different structure than the JSON parsed with [FromBody] and serialized back to a string.
Also, you need to remove the [FromBody] JObject incoming parameter, because otherwise Request.Body will already have been consumed and will be empty.
The solution you need is:
[HttpPost]
public async Task Test()
{
    string bodyStr = "";
    using (var rd = new System.IO.StreamReader(Request.Body))
    {
        bodyStr = await rd.ReadToEndAsync();
    }
    var stripeEvent = StripeEventUtility.ConstructEvent(bodyStr, Request.Headers["Stripe-Signature"], Constants.STRIPE_LISTENER_KEY);
}
I was also receiving the same exception message when I looked at it in the debugger, but when I used Console.WriteLine(e.Message); I got a different exception message:
Received event with API version 2020-08-27, but Stripe.net 40.5.0 expects API version 2022-08-01. We recommend that you create a WebhookEndpoint with this API version. Otherwise, you can disable this exception by passing throwOnApiVersionMismatch: false to Stripe.EventUtility.ParseEvent or Stripe.EventUtility.ConstructEvent, but be wary that objects may be incorrectly deserialized.
I guess your best bet is to set throwOnApiVersionMismatch to false:
EventUtility.ParseEvent(json, header, secret, throwOnApiVersionMismatch: false)
The short question is whether this is possible and, if so, how?
Outline
I have a .NET application which currently uses a service account to access information across a Google Apps domain using the Google Drive API. This works fine using the google-api-dotnet-client library and code along the same lines as shown in the samples here - which are currently a very good basic example of what I'm doing.
What I want to do now is extend it so that, as well as using those APIs provided by the "new" google-api-dotnet-client library, it uses the older "GData" libraries, as provided via the older google-gdata library, specifically the Spreadsheets API (and perhaps more to come).
The Problem
This is where the difficulty arises. The former library does exactly what I want, as evidenced by the second link in the first paragraph above - and the fact that I have it doing it myself. HOWEVER... although the second library has been updated to support OAuth 2.0 in addition to OAuth 1.0 and the other older auth techniques, it does not - as far as I can tell from extensive Googling and trial-and-error - allow the "service account on behalf of all my users" operation which I need.
My question is whether I'm missing something (possibly a hard to find or undocumented something) which would allow me to do what I want. Failing that, is there any way I could force this behaviour and make these two libraries operate side by side?
The ideal solution
Ideally I would love some way of having the Google.GData.Spreadsheets.SpreadsheetsService instance take advantage of the Google.Apis.Authentication.Auth2Authenticator<AssertionFlowClient> instance I'm already using... somehow. Is such witchcraft possible? Am I missing the obvious?
Failing that, I'm happy to do the whole OAuth2 "assertion flow client" dance again if I have to, in some way that the older library can handle.
Help?
Other Thoughts
I have considered - and rejected for the time being - the option of starting from scratch and writing my own library to make this happen. This is for two reasons:
The gdata library already exists, and has been developed by many people likely cleverer than myself. I'm not so arrogant that I believe I can do better.
I'm not certain the OAuth2 with service account approach is even supported/allowed on these older APIs.
An alternate approach which I've been hoping to avoid but may have to fall back to depending on the answers here will be to use 2-legged OAuth 1.0 for portions of this. I'd prefer not to, as having parts of the app rely on one old auth method whilst other parts do it the nice new way just feels wrong to me. And there's that much more to go wrong...
Updates
I have considered the possibility of subclassing GDataRequestFactory and GDataRequest so I can make my own request factory and have that take the instance of Google.Apis.Authentication.Auth2Authenticator<AssertionFlowClient> (well, an instance of Google.Apis.Authentication.IAuthenticator anyway) which could step in to authenticate the request just before it's called. However... the constructor for GDataRequest is internal, which has stopped me.
It's really looking like this isn't meant to be.
For the sake of other folks coming across this question (now that the solution linked to in the accepted answer uses deprecated code), here's how I solved it:
First, start in "new API" land (use the Google.Apis.Auth nuget package) by setting up a ServiceAccountCredential following Google's Service Account example:
//In the old api, this accessed the main api accounts' sheets, not anymore
//** Important ** share spreadsheets with the Service Account by inviting the "serviceAccountEmail" address to the sheet
string serviceAccountEmail = "12345697-abcdefghijklmnop@developer.gserviceaccount.com";
var certificate = new X509Certificate2(@"key.p12", "notasecret", X509KeyStorageFlags.Exportable);
ServiceAccountCredential credential = new ServiceAccountCredential(
    new ServiceAccountCredential.Initializer(serviceAccountEmail)
    {
        Scopes = new[] { "https://spreadsheets.google.com/feeds", "https://docs.google.com/feeds" }
    }.FromCertificate(certificate));
Tell the credential to request an Access Token:
credential.RequestAccessTokenAsync(System.Threading.CancellationToken.None).Wait();
Now it's time to switch back to "old API" land (use the Google.GData.Spreadsheets nuget package). Start by constructing the SpreadsheetsService (similar to Google's example):
SpreadsheetsService service = new SpreadsheetsService("MySpreadsheetIntegration-v1");
To use Service Account authentication, we'll create an instance of the GDataRequestFactory and set a custom Authorization header:
var requestFactory = new GDataRequestFactory("My App User Agent");
requestFactory.CustomHeaders.Add(string.Format("Authorization: Bearer {0}", credential.Token.AccessToken));
Finally, set the SpreadsheetsService's RequestFactory property to this new factory:
service.RequestFactory = requestFactory;
And go ahead and use the SpreadsheetsService as you would had you authenticated using any other technique. (Tip: share spreadsheets with the Service Account by inviting the serviceAccountEmail address to the sheet)
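Put together, the whole flow looks roughly like this (the service account email and key file are the same placeholders as above):

using System;
using System.Security.Cryptography.X509Certificates;
using Google.Apis.Auth.OAuth2;
using Google.GData.Client;
using Google.GData.Spreadsheets;

class ServiceAccountSpreadsheets
{
    static void Main()
    {
        string serviceAccountEmail = "12345697-abcdefghijklmnop@developer.gserviceaccount.com";
        var certificate = new X509Certificate2(@"key.p12", "notasecret", X509KeyStorageFlags.Exportable);

        var credential = new ServiceAccountCredential(
            new ServiceAccountCredential.Initializer(serviceAccountEmail)
            {
                Scopes = new[] { "https://spreadsheets.google.com/feeds", "https://docs.google.com/feeds" }
            }.FromCertificate(certificate));

        credential.RequestAccessTokenAsync(System.Threading.CancellationToken.None).Wait();

        var service = new SpreadsheetsService("MySpreadsheetIntegration-v1");
        var requestFactory = new GDataRequestFactory("My App User Agent");
        requestFactory.CustomHeaders.Add(string.Format("Authorization: Bearer {0}", credential.Token.AccessToken));
        service.RequestFactory = requestFactory;

        // List the spreadsheets that have been shared with the service account.
        SpreadsheetFeed feed = service.Query(new SpreadsheetQuery());
        foreach (SpreadsheetEntry entry in feed.Entries)
        {
            Console.WriteLine(entry.Title.Text);
        }
    }
}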
I managed to solve this by subclassing GDataRequestFactory and creating my own implementation of the interfaces implemented by GDataRequest. This implementation wraps an instance of GDataRequest instantiated via reflection, and adds in the necessary code to perform authentication using an instance of IAuthenticator (in my case, Auth2Authenticator).
I wrote a blog post on it and added an example as a Gist:
Blog: Using Google's Spreadsheet API using .NET, OAuth 2.0 and a Service Account
Gist 4244834
Feel free to use this if it helps you (BSD licence).
Hey, I just stumbled across the same problem and produced a different solution:
Has anybody ever considered writing the parameters from the credentials object directly into an OAuth2Parameters object?
I did this and it worked nicely:
public class OAuthTest
{
    OAuth2Parameters param = new OAuth2Parameters();

    public OAuthTest()
    {
        Debug.WriteLine("Calling: AuthGoogleDataInterface()");
        bool init = AuthGoogleDataInterface();
        if (init)
        {
            GOAuth2RequestFactory requestFactory = new GOAuth2RequestFactory(null, "My App User Agent", this.param);
            //requestFactory.CustomHeaders.Add(string.Format("Authorization: Bearer {0}", credential.Token.AccessToken));
            var service = new SpreadsheetsService("MyService");
            service.RequestFactory = requestFactory;

            SpreadsheetQuery query = new SpreadsheetQuery();
            // Make a request to the API and get all spreadsheets.
            SpreadsheetFeed feed = service.Query(query);
            // Iterate through all of the spreadsheets returned
            foreach (SpreadsheetEntry entry in feed.Entries)
            {
                // Print the title of this spreadsheet to the screen
                Debug.WriteLine(entry.Title.Text);
            }
        }
        Debug.WriteLine(init);
    }

    private bool AuthGoogleDataInterface()
    {
        bool b_success;
        try
        {
            Console.WriteLine("New User Credential");
            // New User Credential
            UserCredential credential;
            using (var stream = new FileStream("client_secrets.json", FileMode.Open, FileAccess.Read))
            {
                GoogleClientSecrets GCSecrets = GoogleClientSecrets.Load(stream);
                string[] ArrScope = new[] { "https://spreadsheets.google.com/feeds", "https://docs.google.com/feeds" };

                credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
                    GCSecrets.Secrets,
                    ArrScope,
                    "user", CancellationToken.None,
                    new FileDataStore("My.cal")).Result;

                // Put the information generated for the credentials object into the
                // OAuth2Parameters object to access the spreadsheets.
                this.param.ClientId = GCSecrets.Secrets.ClientId; //CLIENT_ID;
                this.param.ClientSecret = GCSecrets.Secrets.ClientSecret; //CLIENT_SECRET;
                this.param.RedirectUri = "urn:ietf:wg:oauth:2.0:oob"; //REDIRECT_URI;
                this.param.Scope = string.Join(" ", ArrScope);
                this.param.AccessToken = credential.Token.AccessToken;
                this.param.RefreshToken = credential.Token.RefreshToken;
            }
            Debug.WriteLine("AuthGoogleDataInterface: Success");
            b_success = true;
        }
        catch (Exception e)
        {
            Debug.WriteLine(e.ToString());
            b_success = false;
        }
        return b_success;
    }
}