I've been having a little issue implementing AspNetCore.HealthChecks.UI correctly. I have two endpoints that run health checks using "live" and "ready" as tags. Both endpoints work as expected, but after adding the HealthChecksUI, the health check result displayed is always "Unhealthy: An error occurred while sending the request.", even though the endpoint returns "Healthy" from Postman. Please see the relevant code snippets and configuration below.
AppSettings Configuration
"HealthChecksUI": {
"HealthChecks": [
{
"Name": "Crowd Funding App",
"Uri": "http://localhost:5001/healthui"
}
],
"EvaluationTimeinSeconds": 10,
"MinimumSecondsBetweenFailureNotifications": 60
}
In the Configure method of the Startup class, I have the corresponding endpoint and UI registrations.
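A minimal sketch of what that wiring typically looks like (an assumption based on the HealthChecks.UI documentation, not the asker's exact code; the endpoint paths are assumed, and UIResponseWriter comes from the AspNetCore.HealthChecks.UI.Client package):

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // Two tagged endpoints (paths assumed).
    app.UseHealthChecks("/health/live", new HealthCheckOptions
    {
        Predicate = check => check.Tags.Contains("live")
    });
    app.UseHealthChecks("/health/ready", new HealthCheckOptions
    {
        Predicate = check => check.Tags.Contains("ready")
    });

    // The endpoint the UI polls (matches the "Uri" in the HealthChecksUI config above);
    // it must emit the detailed UI payload, not just plain "Healthy"/"Unhealthy" text.
    app.UseHealthChecks("/healthui", new HealthCheckOptions
    {
        Predicate = _ => true,
        ResponseWriter = UIResponseWriter.WriteHealthCheckUIResponse
    });

    app.UseHealthChecksUI();
}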
I cannot get an Endpoint to receive media-type text/csv.
I have a .NET Core 3.1 API app that needs an endpoint that receives a CSV. I figured adding this would be fine:
[HttpPost("CSV")]
[Consumes("text/csv")]
public async Task<IActionResult> InsertCsv([FromBody] string values)
{
return Ok($"here is the data!\r\n{values}");
}
However, sending the request in Postman returns this error:
{
  "type": "https://tools.ietf.org/html/rfc7231#section-6.5.13",
  "title": "Unsupported Media Type",
  "status": 415,
  "traceId": "|e2e74cae-4824ee592543d555."
}
Given that error, I figured I would have to add something in ConfigureServices, but I cannot find anything. I did find that for XML you need to add:
services.AddControllers().AddXmlDataContractSerializerFormatters();
I cannot figure out what to add for text/csv.
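For reference, a minimal sketch of one possible approach (an assumption, not a verified fix): register a custom TextInputFormatter that accepts text/csv and reads the body into a string, so [FromBody] string can bind.

using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Formatters;

// Reads a text/csv request body into a plain string.
public class CsvInputFormatter : TextInputFormatter
{
    public CsvInputFormatter()
    {
        SupportedMediaTypes.Add("text/csv");
        SupportedEncodings.Add(Encoding.UTF8);
        SupportedEncodings.Add(Encoding.Unicode);
    }

    protected override bool CanReadType(Type type) => type == typeof(string);

    public override async Task<InputFormatterResult> ReadRequestBodyAsync(
        InputFormatterContext context, Encoding encoding)
    {
        using var reader = new StreamReader(context.HttpContext.Request.Body, encoding);
        var content = await reader.ReadToEndAsync();
        return await InputFormatterResult.SuccessAsync(content);
    }
}

// In ConfigureServices:
// services.AddControllers(options => options.InputFormatters.Insert(0, new CsvInputFormatter()));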
I have a serverless web API (API Gateway + Lambda) that I have built in C# and deployed via Visual Studio. This is achieved via a serverless.yml file that auto-creates a CloudFormation template, then that template is applied to create the API stack.
Once my stack is deployed, I have gone into the AWS Console to enable caching on one of the path parameters, but get this error:
(error screenshot: https://ibb.co/B4wmRRj)
I'm aware of this post https://forums.aws.amazon.com/thread.jspa?messageID=711315, which details a similar but different issue where the user can't uncheck caching. My issue is that I can't enable it to begin with. I also don't understand the steps provided to resolve the issue in that post: there is mention of using the AWS CLI, but not which commands to use or what to do exactly.
I have also done some reading on how to enable caching through the serverless.yml template itself, or through CloudFormation, but the examples I find online don't seem to match the structure of my serverless file or the resulting CloudFormation template in any way (I can provide examples if required). I just want to be able to enable caching on path parameters. I have been able to enable caching globally on the API stage, but that won't help unless I can make the caching sensitive to different path parameters.
serverless.yml
"GetTableResponse" : {
"Type" : "AWS::Serverless::Function",
"Properties": {
"Handler": "AWSServerlessInSiteDataGw::AWSServerlessInSiteDataGw.Functions::GetTableResponse",
"Runtime": "dotnetcore2.0",
"CodeUri": "",
"MemorySize": 256,
"Timeout": 30,
"Role": null,
"Policies": [ "AWSLambdaBasicExecutionRole","AWSLambdaVPCAccessExecutionRole","AmazonSSMFullAccess"],
"Events": {
"PutResource": {
"Type": "Api",
"Properties": {
"Path": "kata/table/get/{tableid}",
"Method": "GET"
}
}
}
}
}
},
"Outputs" : {
"ApiURL" : {
"Description" : "API endpoint URL for Prod environment",
"Value" : { "Fn::Sub" : "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/" }
}
}
--Update Start--
The reason you are getting the "Invalid cache key parameter specified" error is that the path parameter section has not been explicitly declared. Although the console UI somehow extrapolated that there is a path parameter, it has not been explicitly called out in the API Gateway configuration.
I tested with the configuration below and was able to replicate the behavior in the console. To resolve it, follow the Point 1 section of the full answer.
functions:
  katatable:
    handler: handler.katatable
    events:
      - http:
          method: get
          path: kata/table/get/{tableid}
--Update End--
Here you go. I still don't have your exact serverless.yml, so I created a sample of my own that is similar to yours and tested it.
serverless.yml
functions:
  katatable:
    handler: handler.katatable
    events:
      - http:
          method: get
          path: kata/table/get/{tableid}
          request:
            parameters:
              paths:
                tableid: true

resources:
  Resources:
    ApiGatewayMethodKataTableGetTableidVarGet:
      Properties:
        Integration:
          CacheKeyParameters:
            - method.request.path.tableid
The above should ensure that the tableid path parameter is cached.
Explanation:
Point 1. You have to make sure that in your events, after the method and path, the section below is present; otherwise the resources section with CacheKeyParameters will fail. Note: the boolean true means the path parameter is required. Once you explicitly declare the path parameter, you should be able to enable caching via the console as well, even without the resources section.
request:
  parameters:
    paths:
      tableid: true
Point 2. The resources section tells API Gateway to enable caching on the tableid path parameter. This is nothing but the Serverless Framework's interpretation of CloudFormation template syntax. How did I know that I have to use ApiGatewayMethodKataTableGetTableidVarGet to make it work? Just read the guidelines and tip below to get the name.
https://serverless.com/framework/docs/providers/aws/guide/resources/
Tip: If you are unsure how a resource is named that you want to reference from your custom resources, you can issue a serverless package. This will create the CloudFormation template for your service in the .serverless folder (it is named cloudformation-template-update-stack.json). Just open the file and check for the generated resource name.
What does the above mean? First run serverless package without the resources section, find the .serverless folder in your project directory, and open the JSON file mentioned above. Look for AWS::ApiGateway::Method; you will get the exact normalized name (ApiGatewayMethodKataTableGetTableidVarGet) that you can use in the resources section.
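For illustration, the relevant entry in cloudformation-template-update-stack.json looks roughly like this (heavily trimmed and partly assumed; the generated properties will differ per project), and its logical name is what the resources override references:

"ApiGatewayMethodKataTableGetTableidVarGet": {
  "Type": "AWS::ApiGateway::Method",
  "Properties": {
    "HttpMethod": "GET",
    "RequestParameters": {
      "method.request.path.tableid": true
    },
    "Integration": {
      ...
    }
  }
}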
Here are some references I used.
https://medium.com/@dougmoscrop/i-set-up-api-gateway-caching-here-are-some-things-that-surprised-me-7526d954fbe6
https://serverless.com/framework/docs/providers/aws/events/apigateway#request-parameters
PS - If you still need CLI steps to enable it, let me know.
I've created an Azure function that looks like this (actually, the Microsoft template did most of the work!):
[FunctionName("Function1")]
public static void Run([ServiceBusTrigger("%queue-name%", AccessRights.Listen)]string myQueueItem, TraceWriter log)
{
log.Info($"C# ServiceBus queue trigger function processed message: {myQueueItem}");
}
My local.settings.json looks like this:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "..",
    "AzureWebJobsDashboard": "..",
    "AzureWebJobsServiceBus": "..",
    "queue-name": "testqueue"
  }
}
I then deployed this function. This is a strange SO question, because my problem is that this worked immediately, but I didn't expect it to. The function.json is here:
{
  "generatedBy": "Microsoft.NET.Sdk.Functions-1.0.0.0",
  "configurationSource": "attributes",
  "bindings": [
    {
      "type": "serviceBusTrigger",
      "queueName": "%queue-name%",
      "accessRights": "listen",
      "name": "myQueueItem"
    }
  ],
  "disabled": false,
  "scriptFile": "..\\bin\\FunctionApp8.dll",
  "entryPoint": "FunctionApp8.Function1.Run"
}
Clearly, the values in local.settings.json have been copied into the function settings, but I can't see them in the portal. My question is: where are these settings now stored (queue-name and AzureWebJobsServiceBus)?
EDIT:
My Application Settings for the function:
They'll be under the "Application settings" tab of the published function app in the Azure Portal (see picture).
There's a bit of documentation here! Note that most app settings are not published automatically, and require a bit of configuration either at the publish step or after publishing.
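If you want to sanity-check where a value is coming from at runtime, one option (a small assumed probe, not part of the original question) is to read it back as an environment variable, since function app settings are surfaced that way:

// Hypothetical probe function (Functions v1 style, matching the question).
// In Azure these values come from Application settings; locally, from local.settings.json.
[FunctionName("SettingsProbe")]
public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, TraceWriter log)
{
    log.Info($"queue-name = {Environment.GetEnvironmentVariable("queue-name")}");
    var serviceBus = Environment.GetEnvironmentVariable("AzureWebJobsServiceBus");
    log.Info($"AzureWebJobsServiceBus is set: {!string.IsNullOrEmpty(serviceBus)}");
}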
UPDATE: If two functions are listening for an event on the same queue, only one function will be fired. This can cause seemingly buggy behavior, as a function will appear to fire/not fire when expected.
In this case, the unexpected behavior came from competing functions and not from an unexpected connection string.
In Azure, the function settings are taken from the Application settings tab, the same as for an App Service.
You probably published them too; go check in the portal UI.
I have a simple Odata V4 service that returns persons.
This service is not accessible directly:
it is accessible through Apigee, i.e. via http://myApigeeServer/Person
Apigee then points to a load balancer, i.e. http://myLoadBalancer/Person
finally, there are actually two servers behind my load balancer's virtual IP, i.e. http://myFirstServer/Person and http://mySecondServer/Person
My problem is that the real final server URL is visible in the service response's "@odata.context" metadata, so a call to http://myApigeeServer/Person('foo') can lead to two different responses:
{
  "@odata.context": "http://myFirstServer/$metadata#Person/$entity",
  "FirstName": "John",
  "LastName": "Doe",
  "Phone@odata.type": "#Collection(String)",
  "Phone": [
    "+123456789"
  ],
}
Or
{
  "@odata.context": "http://mySecondServer/$metadata#Person/$entity",
  "FirstName": "John",
  "LastName": "Doe",
  "Phone@odata.type": "#Collection(String)",
  "Phone": [
    "+123456789"
  ],
}
I really must hide the final server names. So my question is: is it possible to totally remove the "@odata.context" metadata, or to customize it? In my case, customization would mean forcing the "@odata.context" value to use the Apigee URL.
[---EDIT----]
Based on the link proposed by Dylan Nicholson (thanks!), there is indeed a hook on the ODataMediaTypeFormatter that allows changing the service's base URI address.
But in my tests it didn't work, or at least I didn't manage to make it work for the @odata.context URI. So, going from link to link and test to test, I ended up with lencharest's solution, which works perfectly well: a custom URL helper that rewrites the base address of every link. I'll see in the future whether it's too drastic...
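As a rough illustration of that kind of rewriting (an assumed sketch, not necessarily the exact helper referenced above), a message handler can swap the request's host before OData generates its links, so @odata.context is built against the public Apigee address:

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Assumed sketch: rewrite the incoming request URI so OData builds
// @odata.context (and other links) against the public-facing host.
public class PublicBaseAddressHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var publicUri = new UriBuilder(request.RequestUri)
        {
            Scheme = Uri.UriSchemeHttp,
            Host = "myApigeeServer", // hypothetical public host from the question
            Port = 80
        };
        request.RequestUri = publicUri.Uri;

        return base.SendAsync(request, cancellationToken);
    }
}

// Registered once at startup, e.g.:
// config.MessageHandlers.Add(new PublicBaseAddressHandler());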
@Max
To suppress @odata.context, you can use "application/json;odata.metadata=none".
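For example, a client request with that media type in the Accept header (a small sketch; the URL is the one from the question):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Ask the OData service for a payload with no control information,
        // which removes @odata.context from the response body.
        client.DefaultRequestHeaders.Accept.Add(
            MediaTypeWithQualityHeaderValue.Parse("application/json;odata.metadata=none"));

        var json = await client.GetStringAsync("http://myApigeeServer/Person('foo')");
        Console.WriteLine(json);
    }
}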
So I am very new to SignalR, in fact I've only been using it for a couple of days now. Anyway, I am getting the error below when my application first starts up:
The code for the application in question is located in two projects, a Web API and a Single Page Application (SPA). The first one has my backend code (C#) and the second one my client-side code (AngularJS). I think the problem might be due to the fact that the projects in question run on different ports. The Web API, where my SignalR hub lives, is on port 60161 and the SPA is on 60813. My hub is declared like so:
public class ReportHub : Hub
{
    public void SendReportProgress(IList<ReportProgress> reportProgress)
    {
        this.Clients.All.broadcastReportProgress(reportProgress);
    }

    public override Task OnConnected()
    {
        this.Clients.All.newConnection();
        return base.OnConnected();
    }
}
and then in my Startup.cs file for my Web API I initialize SignalR like this:
public void Configuration(IAppBuilder app)
{
    HttpConfiguration config = new HttpConfiguration();
    config.Services.Replace(typeof(IHttpControllerActivator), new NinjectFactory());
    config.MessageHandlers.Add(new MessageHandler());

    // set up OAuth and Cors
    this.ConfigureOAuth(app);
    config.EnableCors();
    config.IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Always;

    // Setting up SignalR
    app.Map("/signalr", map =>
    {
        map.UseCors(CorsOptions.AllowAll);
        map.RunSignalR(new HubConfiguration { EnableJSONP = true });
    });

    // set up json formatters
    FormatterConfig.RegisterFormatters(config.Formatters);
    WebApiConfig.Register(config);
    app.UseWebApi(config);
}
For my client-side code I use an Angular SignalR library called angular-signalr-hub (Angular-signalr-hub). The client-side code follows:
angular
    .module("mainApp")
    .factory("reportHubService", ["$rootScope", "Hub", reportHubService]);

/// The factory function
function reportHubService($rootScope, Hub) {
    var vm = this;
    vm.reportName = "None";

    // Setting up the SignalR hub
    var hub = new Hub("reportHub", {
        listeners: {
            'newConnection': function (id) {
                vm.reportName = "SignalR connected!";
                $rootScope.$apply();
            },
            'broadcastReportProgress': function (reportProgress) {
                vm.reportName = reportProgress.reportName;
                $rootScope.$apply();
            }
        },
        errorHandler: function (error) {
        },
        hubDisconnected: function () {
            if (hub.connection.lastError) {
                hub.connection.start();
            }
        },
        transport: 'webSockets',
        logging: true
        //rootPath: 'http://localhost:60161/signalr'
    });
I did some googling yesterday and one of the suggestions I came upon was to set the SignalR URL to that of my Web API, which I did (the commented-out line above). When I uncomment the line in question, it does seem to do something, because if I now go to http://localhost:60161/signalr/hubs in my browser, it shows me the dynamically generated proxy file:
and when I run my application I no longer get the error above, but now it doesn't seem to connect. It gets to the negotiate line and it stops there:
I think it should look like this (this is from a SignalR tutorial I found):
In addition, none of my listeners (declared in my Angular code above) get called, so something is still not working quite right. There should be more lines in the log to the effect that the connection was successfully established, etc. What could be the problem here?
UPDATE: upon further debugging I found out the problem is most likely caused by the ProtocolVersion property differing between the client and the result here:
Because of that, it seems it just exits and fails to establish a connection.
I figured out what the problem was. My SignalR dependencies were out of date, and because of that my client and server versions differed. All I had to do was update (via NuGet Package Manager) all SignalR dependencies to the latest version, and now it works.
As a side note, SignalR was not very good at telling me what was wrong. In fact, no error message was displayed, unless of course there was some additional logging somewhere that had to be found or turned on, in addition to the logging I already had enabled. Either way, it either doesn't log certain errors or it makes it difficult to figure out how to turn on all logging. I had to go and debug the jQuery SignalR API to figure out what the problem was, which was a time-consuming endeavour.