ASP.NET Web API Versioning with Load Balancer (Layer 4) - C#

I'm currently in charge of developing a rather complex REST API with .NET and Web API.
The interface will later be consumed by various public and private (in-house) clients.
I've read a lot about how you should version your API, but opinions seem to differ widely.
The application runs behind a load balancer in a multi-server environment.
Now my problem is that the load balancer is hard-restricted and only supports layer 4 load balancing, so I'm not able to inspect the URL, headers, etc. of incoming requests to route them to the correct version of the application.
We don't want versioned API controllers in our code base, since we have a lot of external dependencies that need to be updated frequently, so we could break some functionality.
Currently the only solution seems to be to use subdomains for versioning, e.g.
ver1.api.domain.com
Is there anything against this approach? Do you have other solutions?

The issue with that versioning approach is that there will be duplicate entries for all resources, including unmodified ones.
In my opinion, a better approach is to include the version of a resource in the URI.
Let's look at a concrete example. Suppose there is a CarsController like the one below:
public class CarsController : ApiController
{
    [Route("cars/{id}")]
    public async Task<IHttpActionResult> Get(int id)
    {
        // DoSomething() stands in for the actual v1 implementation
        var result = DoSomething();
        return Ok(result);
    }
}
After the first release, when we introduce the second version of the Get method, we can do something like:
[Route("cars/{id}/v2")]
public async Task<IHttpActionResult> GetCarsVersion2(int id)
{
DoSomethingElse();
return Ok(result);
}
So existing clients still refer to the old URI /cars/{id}, whereas new clients can use the new URI /cars/{id}/v2.
In addition, if there aren't many differences between the two versions, the original implementation can be refactored to satisfy the new requirements, which in turn reduces code duplication.
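A minimal sketch of such a refactoring, keeping both routes on one controller and isolating the version-specific differences in a single helper (the Car type and the pricing flag are illustrative assumptions, not from the original post):
using System.Threading.Tasks;
using System.Web.Http;

public class Car
{
    public int Id { get; set; }
    public decimal Price { get; set; }
}

public class CarsController : ApiController
{
    [Route("cars/{id}")]
    public async Task<IHttpActionResult> Get(int id)
    {
        return Ok(await LoadCarAsync(id, newPricing: false));
    }

    [Route("cars/{id}/v2")]
    public async Task<IHttpActionResult> GetCarsVersion2(int id)
    {
        return Ok(await LoadCarAsync(id, newPricing: true));
    }

    // Shared implementation; only the version-specific behaviour branches here.
    private Task<Car> LoadCarAsync(int id, bool newPricing)
    {
        var car = new Car { Id = id, Price = newPricing ? 42000m : 40000m };
        return Task.FromResult(car);
    }
}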

Related

Implementing GraphQL in MVC solution - Never returns

I have added a few classes to send requests to and handle responses from an external GraphQL web API; they use the GraphQL package from NuGet as well as HttpClient to connect to the API. The classes live in one project of my solution ("providers"). I have also added test classes and methods to test the functionality of the service classes. The tests all work as expected - results come back and they match the same queries made in Postman. Having success with the tests, I added references to the service classes to the main project, which is an MVC web application, in order to supplement user search capabilities - so I'm calling a method from the project containing the API helper classes from within a controller in the MVC application. This is when I started having issues.
As an aside I did not have to add any of the GraphQL packages to the Test project, and it was able to work fine by just referencing the provider project that contains the api helper class. So it seems a bit like overkill in the first place that I had to add all that to the MVC project.
Initially, I was getting errors such as "ReadAsAsync doesn't exist" (this is a method of the System.Net.Http.HttpContent class I think?). This method was used in other classes that consume other external web services using HttpClient in the same project. In fact the method that was failing was nearly identical to ones in other classes that had been working for years at this point. So I thought it was strange it stopped working at this time. I eventually concluded it had something to do with all the dependencies added by adding the GraphQL nuget packages to the "providers" project.
I then added those same GraphQL packages to the MVC project to see if that resolved the issue. This introduced new issues where it couldn't find some System.IO libraries. I had seen this before in another project where I had to update the .NET version, and the solution in that case was to remove the offending dependentAssembly entries in the web.config file, so I did that here. Now I load the web application in debug mode and can click through to the page with the controller action that searches using the new API class. The original search actions still work - I am able to get results using them. But the new API method does not. It does not throw an exception. It does not return results. Execution simply stops, or seems to stop, after the call to the API happens. GraphQLHttpClient.SendQueryAsync just never comes back. It could just be that I'm not waiting long enough, but I gave it 10 minutes at one point for a query that runs in seconds in the test project, and also in seconds in Postman. The application remains operational, though; I can still use other functions of the site after clicking on this.
I guess the question is: how can I find out what the problem is here, or correct it in some way? I'm currently trying to remove different dependentAssembly entries in the MVC project, but that isn't really working out very well so far. Some of those entries are needed by the application, but not all of them.
** UPDATE **
I was able to work with some people on another site to resolve the issue.
Basically I was calling:
public static GqlResponse GetGqlResponse(string name)
{
    var gqlResponse = client.SendQueryAsync<GqlData>(gqlRequest).Result;
    return gqlResponse;
}
in a method in a class in one project, from an MVC controller action in a different project. This caused a deadlock:
public ActionResult Index()
{
    var gql = myotherproject.myclass.GetGqlResponse("moo");
    return View(gql);
}
The solution was not to try to make an asynchronous process completely synchronous; instead, I changed the whole code path to be asynchronous.
public class MyGqlClass
{
    public static async Task<GqlResponse> GetGqlResponse(string name)
    {
        // build the request, client, etc...
        var gqlResponse = await client.SendQueryAsync<GqlData>(gqlRequest).ConfigureAwait(false);
        // process the response...
        return gqlResponse;
    }
}
And in the MVC project:
public class MyController : BaseController
{
    public async Task<ActionResult> Index()
    {
        var gqlr = await project2.MyGqlClass.GetGqlResponse("moo");
        return View(gqlr);
    }
}
To reference for the future, hopefully this link stays alive: Don't Block on Async Code. The gist of it is that because I was using the .Result property of the Task object created by calling the async method, I was causing a deadlock within my web application (ASP.NET, .NET Framework) because of the way these sorts of web applications schedule work. Since this problem had never happened to me before, I was unprepared for it and did not know what was happening this time. Hopefully this solution helps some people who have a similar problem to the one I stated in my question.
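For anyone who wants the mechanism spelled out, here is the blocking version again with comments (a sketch for classic ASP.NET, which has a per-request SynchronizationContext; client and gqlRequest reuse the names from the code above):
public ActionResult Index()
{
    // SendQueryAsync starts, and without ConfigureAwait(false) its continuation
    // is scheduled to resume on the ASP.NET request context.
    var task = client.SendQueryAsync<GqlData>(gqlRequest);

    // .Result blocks the request thread, which owns that same context,
    // so the continuation can never run: the two wait on each other forever.
    var gqlResponse = task.Result;

    return View(gqlResponse);
}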

ASP.NET API version ranges

Our product is a client/server app that has multiple versions of the client out in the field, but only one server, running the latest version, that services all API calls. We have (or will have) hundreds of API endpoints, and I'm trying to figure out how best to handle versioning. What I'd like to do is avoid the laborious task of applying attributes to every single method, or copying entire controllers, every time we make a minor change.
I might be misinterpreting most documents/practices on this, but it seems like every time you bump your API version you have to go through and do all that work, which seems inefficient at best.
Instead, what I'd like to do is apply an attribute to each endpoint with the version of when it was written; then the client finds the closest version that is equal to or less than the client version.
For instance, if an endpoint was written at [ApiVersion("1.0")] then that is the attribute it gets. If we had to modify it, I'd copy the method, rename it, apply a RoutePrefix attribute so it gets properly hit and apply a new attribute with the version of our whole API (in this example I put 1.5).
Here is a simple example:
[HttpGet]
[ApiVersion("1.0")]
[Route("GetHeartBeat")]
public bool GetHeartBeat()
{
    return true;
}

[HttpGet]
[ApiVersion("1.5")]
[Route("GetHeartBeat")]
public bool GetHeartBeat2()
{
    return false;
}
This works with no problem when I use URL versioning:
/api/v1.0/GetHeartBeat
or
/api/v1.5/GetHeartBeat
but /api/v1.3/GetHeartBeat does not, since that version doesn't exist.
What I want to happen is that if I have a client running 1.3, it will find the closest version that is equal to or less than the client's version. So /api/v1.3/GetHeartBeat would still be received, and since 1.3 doesn't exist, it would fall back to the closest earlier version, in this case 1.0.
I can write a bunch of route logic to accomplish this, but I feel like there has to be an out-of-the-box solution, as I can't be the first person to try this. Is there a NuGet package that would accomplish this?
You're really asking two questions. How you map things on the server-side is an implementation detail and there are many options. Attributes are not a hard requirement to apply API Versioning metadata. You can use conventions, including your own conventions. API versions must be discrete. That is by design. An API version is much more like a media type. You cannot arbitrarily add a media type, nor an API version, and necessarily expect a client to understand it.
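As an aside, the conventions-based setup looks roughly like this (a sketch using the conventions builder from the ASP.NET API Versioning package; the controller name is an assumption):
services.AddApiVersioning(options =>
{
    // Versions applied by convention instead of attributes on every action
    options.Conventions.Controller<HeartBeatController>()
                       .HasApiVersion(new ApiVersion(1, 0))
                       .HasApiVersion(new ApiVersion(1, 5));
});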
Since you own both sides, you have some great avenues to make things work the way you want. The server should never assume what the client wants and the client should always have to explicitly ask the server what it wants. The easiest way to achieve your goal is to negotiate the API version. Ok, great. How?
I suspect not a lot of people are doing this today, but API Versioning baked in the necessary mechanics to achieve this very early on. There are many use cases, but the most common are for tooling (ex: client code generation) and client version negotiation. The first step is to enable reporting API versions:
services.AddApiVersioning(options => options.ReportApiVersions = true);
You can also apply [ReportApiVersions] to specific controllers or actions.
This will enable reporting the available API versions via the api-supported-versions and api-deprecated-versions HTTP headers. Remember that deprecated doesn't mean that it doesn't exist, it just means that it will be going away at some point; you control the policy. This information can be used by your client to log warnings about out-of-date versions or it can influence your client's decision in selecting an appropriate version.
Part of your challenge is versioning by URL segment. Yes, it's very popular, but it violates the Uniform Interface constraint. v1.api.com is an endpoint. v1.0/GetHeartBeat and v1.5/GetHeartBeat are identifiers. The two identified resources are almost certainly not different resources, but have different representations. Why does that matter? Changing the identifier (e.g. the URL) for every version results in a moving target for the client. Every other method of versioning would always use GetHeartBeat. I'm sure you're far too far down the road to make a change, but this leads into the solution.
It doesn't really matter which controller implementation you use, but you essentially need an action that does something like this:
[ApiController]
[Route("api/[controller]")]
public class GetHeartBeatController : ControllerBase
{
    [ReportApiVersions] // ← instead of ApiVersioningOptions.ReportApiVersions = true
    [ApiVersionNeutral] // ← allow any and all versions, including none at all
    [HttpOptions]
    public IActionResult Options()
    {
        // Allow is required by the spec; you may need additional information
        Response.Headers.Add("Allow", new StringValues(new[] { "GET", "OPTIONS" }));
        Response.GetTypedHeaders().CacheControl = new()
        {
            MaxAge = TimeSpan.FromDays(1d),
        };
        return Ok();
    }
}
Now, if your client sends:
OPTIONS api/getheartbeat HTTP/2
Host: localhost
You'll get back something like:
HTTP/2 200 OK
Cache-Control: max-age=86400
Api-Supported-Versions: 1.0, 1.5
If your client is running 1.3, it now has the knowledge necessary to select 1.0 from the list as the most appropriate API version. The Cache-Control header can be used as a way for the server to tell the client how long it can cache the result (but it doesn't have to). I presume API versions wouldn't change more often than once per day, so this seems like a reasonable approach.
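A minimal client-side sketch of that negotiation (the relative URL, the header parsing, and the selection logic below are assumptions about your client, not part of API Versioning itself):
using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

public static class VersionNegotiator
{
    // Sends OPTIONS, reads Api-Supported-Versions, and picks the highest
    // server version that is less than or equal to the client's own version.
    public static async Task<Version> NegotiateAsync(HttpClient http, Version clientVersion)
    {
        var request = new HttpRequestMessage(HttpMethod.Options, "api/getheartbeat");
        using var response = await http.SendAsync(request);

        var supported = response.Headers.TryGetValues("Api-Supported-Versions", out var values)
            ? values.SelectMany(v => v.Split(',')).Select(v => Version.Parse(v.Trim()))
            : Enumerable.Empty<Version>();

        // e.g. a 1.3 client against "1.0, 1.5" resolves to 1.0
        return supported
            .Where(v => v <= clientVersion)
            .OrderByDescending(v => v)
            .FirstOrDefault(); // null if nothing compatible is advertised
    }
}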
You didn't mention what type of client you have. If it's a browser-based client, you may have to do some additional work with this setup to make it play nice with CORS, if it's even required. Alternatively, you could achieve the same result by using the HEAD method. I'd argue that OPTIONS is more appropriate, but you might not find it worth the work to make it play with CORS should you run into any complications.

Understanding the request lifecycle and routing mechanism in ServiceStack

Background (you might want to skip this bit, it's here just in case you want context)
I saw from questions like ServiceStack CRUD Service routing Documentation that the documentation has a weird way of explaining something that takes what I'm used to (Web API and controller-based routing) to a message-oriented routing mechanism, requiring us to define a request, a response, and a service class with methods in order to handle each and every request.
I'm in the process of converting an existing code base from Web API + OData based services to ServiceStack to determine the difference / modelling changes that the two designs require.
The problem domain
There are many requests I currently make that don't really require any parameters (simple GET requests), and yet I'm forced to create, instantiate, and then pass to a service method a DTO in this situation, as without such a DTO I can't define the route.
Why? This is confusing!
What is the relationship between the DTO, the methods in a service, and the routing/handling of a request? I currently have lots of "service" classes in my existing stack that are basically ...
public class FooService : CRUDService<Foo> { /* specifics for Foos */ }

public abstract class CRUDService<T> : ICRUDService<T>
{
    public T GetById(object Id) { ... }
    public IEnumerable<T> GetAll() { ... }
    public T Add(T newT) { ... }
    public T Update(T newVersion) { ... }
    public bool Delete(object Id) { ... }
}
... how do I get from that, for 100 or so services, to a functional ServiceStack implementation? At the moment my understanding is that I can't pass scalar values to any of these methods; I must always pass a DTO, the Request DTO will define the route it handles, and I must have a different request and response DTO for every possible operation that my API can perform.
This leaves me thinking I should resort to T4 templating to generate the various DTOs, saving me the time of hand-cranking hundreds of basically empty DTOs for now.
My question(s)
It boils down to: how do I convert my codebase?
That said, the "sub-parts" of this question are really sub-questions like:
What's best practice here?
Am I missing something, or is there a lot of work for me basically building empty boilerplate DTOs?
How does ServiceStack wire all this stuff up?
I was told that it's "better than the black box of OData / EF", but at face value this appears to hide a ton of implementation details, unless I'm just confused about something in the design ethos.
Each Service in ServiceStack requires a concrete Request DTO, which is used to define your Service's contract.
As ServiceStack is a message-based services framework, the typed Request DTO is fundamental to how ServiceStack works: it "captures the Request" that Services are invoked with, and it is what gets passed down through all of ServiceStack's filters.
The Request DTO is also all that's needed to be able to invoke a Service from any client, including MQ clients.
And by following the Physical Project Structure that ServiceStack project templates are configured with, where all DTOs are kept in a dependency/impl-free ServiceModel project, ServiceStack's generic .NET Service Clients can reference the DTOs directly to enable an end-to-end typed API without code-gen, e.g.:
var response = client.Get(new MyRequest { ... });
The Request DTO, being a "message", uses a POCO to define its contract, which is better suited for versioning since POCOs can be extended without breaking existing classes.
And since the Request DTO is the definition and entry point for your Service, it's also what most of ServiceStack's other features are built around - needless to say, it's important.
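To make that relationship concrete, here is a rough sketch of how a Request DTO, its route, and a Service method line up (the Foo types below are illustrative, not from the question):
using ServiceStack;

// Request DTO: defines the route and the contract (lives in the ServiceModel project)
[Route("/foos/{Id}", "GET")]
public class GetFoo : IReturn<GetFooResponse>
{
    public int Id { get; set; }
}

public class GetFooResponse
{
    public Foo Result { get; set; }
}

public class Foo
{
    public int Id { get; set; }
}

// Service implementation: ServiceStack routes GET /foos/{Id} to this method
// because its parameter type is the GetFoo Request DTO.
public class FooService : Service
{
    public object Get(GetFoo request)
    {
        return new GetFooResponse { Result = new Foo { Id = request.Id } };
    }
}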
Sharp Script for code-generation
If you have so many CRUD Services that you wish to auto-generate DTOs for them, I'd recommend taking a look at #Script, a dynamic .NET scripting language that defaults to a template language mode using familiar JS syntax for expressions and handlebars-style syntax for blocks.
The stand-alone script support includes live preview, whose instant feedback makes it highly productive, and it also includes built-in support for querying databases.
Although OrmLite is a code-first ORM, it does include T4 support for initially generating data models.
AutoCRUD Preview
Since you're looking to generate a number of CRUD Services, you may want to check out the preview release of AutoCRUD that's now available on MyGet.
It's conceptually the same as AutoQuery, where you just need to provide the Request DTO definition for your DB table APIs and AutoQuery automatically provides the implementation for the Service.
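As a rough illustration of the AutoQuery-style approach (the Car table is a made-up example), the Request DTO is essentially the whole service definition and the implementation is provided for you:
using ServiceStack;

public class Car
{
    public int Id { get; set; }
    public string Model { get; set; }
}

// With AutoQuery, this Request DTO is the entire queryable service for the Car table.
[Route("/cars")]
public class QueryCars : QueryDb<Car> { }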

ASP.NET Web API: any downsides to asynchronous operations?

I'm setting up a Web API that will serve a client over our Intranet. For the sake of convenience for the developers working on the client, I am considering having the Web API adhere to an interface that will be shared with the client, something like what is shown below.
The purpose of using a shared interface mostly has to do with making changes to the Web API detectable by the client developers at compile time. Also, the client can leverage the interface to be used for wrappers around HttpClient instances that will be used to communicate with the Web API.
The client developers would like to use async and await throughout their implementation, and who am I to say "no"?
public interface IValueController
{
    Task<string> ReadAsync();
    string ReadSync();
}

[Route("api/v1/[controller]")]
public class ValueController : Controller, IValueController
{
    [HttpGet("async")]
    public Task<string> ReadAsync()
    {
        return Task.FromResult("async!");
    }

    [HttpGet("sync")]
    public string ReadSync()
    {
        return "sync!";
    }
}
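For context, the kind of HttpClient wrapper mentioned above might look roughly like this on the client side (a sketch; the relative URLs assume the route template above and an HttpClient with its BaseAddress already set):
using System.Net.Http;
using System.Threading.Tasks;

public class ValueClient : IValueController
{
    private readonly HttpClient _http;

    public ValueClient(HttpClient http)
    {
        _http = http;
    }

    // Asynchronous call straight through to the API
    public Task<string> ReadAsync()
    {
        return _http.GetStringAsync("api/v1/value/async");
    }

    // Synchronous variant; note that blocking like this can deadlock
    // in environments that have a SynchronizationContext
    public string ReadSync()
    {
        return _http.GetStringAsync("api/v1/value/sync").GetAwaiter().GetResult();
    }
}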
I'm not all that interested in providing both synchronous and asynchronous methods - one of them will have to do. The question is: is there any downside to defining the Web API operations as asynchronous? If not, I'm going all-in!
When to use async:
Async API calls.
Long-running database queries.
Tasks that are CPU-bound.
When you need to achieve parallelism.
When not to use it: for quick-running tasks or methods.
NOTE: Beware of deadlocks, as the compiler allows you to write and successfully compile async code even without understanding its basic concepts.
The purpose of using a shared interface mostly has to do with making changes to the Web API detectable by the client developers at compile time.
These days, it's more common to auto-generate an API description (e.g., Swagger), and then auto-generate clients from that (which can be .NET or other languages). Not only does this approach allow multiple clients (e.g., one Swagger client is HTML documentation for your APIs, complete with examples and the ability to invoke them right from the website), but it also handles the synchronous/asynchronous translation for you without requiring any kind of "async signature but really it's sync" code.
That said, if you wanted to implement asynchronous methods synchronously, there's nothing that would prevent it. The only gotcha I can think of is that if you're on full ASP.NET, then asynchronous actions cannot be child actions. This restriction no longer exists on ASP.NET Core.

REST based MVC site and/or WCF

I know there are actually a number of questions similar to this one, but I could not find one that exactly answers my question.
I am building a web application that will
obviously display data to the users :)
have a public API for authenticated users to use
later be ported to mobile devices
So, I am stuck on the design. I am going to use ASP.NET MVC for the website; however, I am not sure how to structure my architecture after that.
Should I:
make the website RESTful and act as the API
in my initial review, the GET returns the full view rather than just the data, which to me seems like it kills the idea of the public API
also, should I really be performing business logic in my controller? To be able to scale, wouldn't it be better to have a separate business logic layer on another server, or should I just consider pushing my MVC site to another server, which would solve the same problem? I am trying to create a SOLID design, so it also seems better to abstract this into a separate service (which I could just call as another class, but then I get back to the problem of scalability...)
make the website not be RESTful and create a RESTful WCF service that the website will use
make both the website and a WCF service RESTful; however, this seems redundant
I am fairly new to REST, so the problem could possibly be a misunderstanding on my part. Hopefully, I am explaining this well, but if not, please let me know if you need anything clarified.
I would make a separate business logic layer and a (restful) WCF layer on top of that. This decouples your BLL from your client. You could even have different clients use the same API (not saying you should, or will, but it gives you the flexibility). Ideally your service layer should not return your domain entities, but Data Transfer Objects (which you could map with Automapper), though it depends on the scope and specs of your project.
Putting it on another server makes it a different tier, tier <> layer.
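A minimal sketch of the DTO mapping idea using AutoMapper's classic configuration API (the Order types here are illustrative):
using AutoMapper;

public class Order            // domain entity (illustrative)
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class OrderDto         // what the service layer returns to clients
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public static class OrderMapping
{
    private static readonly IMapper Mapper =
        new MapperConfiguration(cfg => cfg.CreateMap<Order, OrderDto>()).CreateMapper();

    public static OrderDto ToDto(Order order)
    {
        return Mapper.Map<OrderDto>(order);
    }
}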
Plain and simple.... it would be easiest from a complexity standpoint to separate the website and your API. It's a bit cleaner IMO too.
However, here are some tips that you can do to make the process of handling both together a bit easier if you decide on going that route. (I'm currently doing this with a personal project I'm working on)
Keep your controller logic pretty bare. Judging on the fact that you want to make it SOLID you're probably already doing this.
Separate the model that is returned to the view from the actual model. I like to create models specific to views and have a way of transforming the model into this view specific model.
Make sure you version everything. You will probably want to allow and support old API requests coming in for quite some time.... especially on the phone.
Actually use REST to its fullest and not just as another name for HTTP. Most implementations miss the fact that with any type of response the state should be transferred with it (missing the ST). Allow self-discovery of actions both on the page and in the API responses. For instance, if you allow paging in a resource, always include the paging links in the API response or the web page; there's an entire Wikipedia page on this (HATEOAS). This immensely aids the decoupling, sometimes allowing you to automagically update clients with the latest version. A small sketch of what that can look like follows below.
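For example, a paged collection response might carry its own navigation links, so clients discover how to move through the data rather than constructing URLs themselves (a sketch; the property names are illustrative):
using System.Collections.Generic;

public class PagedResult<T>
{
    public IList<T> Items { get; set; }
    public int Page { get; set; }
    public int PageSize { get; set; }

    // Hypermedia links so the client can discover the next action
    public string SelfLink { get; set; }    // e.g. /api/cars?page=2
    public string NextLink { get; set; }    // e.g. /api/cars?page=3 (null on the last page)
    public string PrevLink { get; set; }    // e.g. /api/cars?page=1
}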
Now your controller action will probably look something like this pseudo-code:
public ActionResult MyAction(string param)
{
    // Do something with param
    var model = foo.baz(param);

    // Return the raw result for API requests
    if (isAPIRequest)
    {
        return WhateverResult(model);
    }

    // Otherwise hand a view-specific model to the view
    return View(model.AsViewSpecificModel());
}
One thing I've been toying with myself is making my own type of ActionResult that handles the return logic, so that it is not duplicated throughout the project.
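Such a custom result might look roughly like this for classic ASP.NET MVC (a sketch; treating a JSON Accept header as "this is an API request" is an assumption, as is the view-model mapping delegate):
using System;
using System.Linq;
using System.Web.Mvc;

public class ViewOrJsonResult : ActionResult
{
    private readonly object _model;
    private readonly Func<object, object> _toViewModel;

    public ViewOrJsonResult(object model, Func<object, object> toViewModel)
    {
        _model = model;
        _toViewModel = toViewModel;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        // Treat requests that accept JSON as API requests (assumption)
        var acceptsJson = context.HttpContext.Request.AcceptTypes != null
            && context.HttpContext.Request.AcceptTypes.Contains("application/json");

        ActionResult inner = acceptsJson
            ? (ActionResult)new JsonResult
              {
                  Data = _model,
                  JsonRequestBehavior = JsonRequestBehavior.AllowGet
              }
            : new ViewResult
              {
                  ViewData = new ViewDataDictionary(_toViewModel(_model)),
                  TempData = context.Controller.TempData
              };

        inner.ExecuteResult(context);
    }
}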
I would use the REST service for your website, as it won't add any significant overhead (assuming they're on the same server) and will greatly simplify your codebase. Instead of having two APIs, one private (as a DLL reference) and one public, you can "eat your own dogfood". The only caution you'll need to exercise is making sure you don't bend the public API to suit your own needs; instead, have a separate private API if needed.
You can use RestSharp or EasyHttp for the REST calls inside the MVC site.
ServiceStack will probably make the API task easier: you can use your existing domain objects and simply write a set of services that get/update/delete/create the objects, without needing to write two actions for everything in MVC.
