AngularJS and Web Service Interaction Best Practices - C#

I have a small website I implemented with AngularJS, C# and Entity Framework. The whole website is a Single Page Application and gets all of its data from one single C# web service.
My question deals with the interface that the C# web service should expose. For one, the service could provide the entities in a RESTful way, returning them directly or as DTOs. The other approach would be for the web service to return an object tailored to exactly one use case, so that the AngularJS controller only needs to invoke the web service once and can work with the returned model directly.
To clarify, please consider the following two snippets:
// The service returns DTOs, but has to be invoked multiple
// times from the AngularJS controller
public Order GetOrder(int orderId);
public List<Ticket> GetTickets(int orderId);
And
// The service returns the model directly
public OrderOverview GetOrderAndTickets(int orderId);
While the first example exposes a RESTful interface and works with the resource metaphor, it has the huge drawback of only returning parts of the data. The second example returns an object tailored to the needs of the MVC controller, but can most likely only be used in one MVC controller. Also, a lot of mapping needs to be done for common fields in the second scenario.
I found that I did both things from time to time in my web service and want to get some feedback about it. I do not care too much about performance, although multiple requests are of course problematic, and once they slow down the application too much, they need refactoring. What is the best way to design the web service interface?

I would advise going with the REST approach (general-purpose API design) rather than the single-purpose remote procedure call (RPC) approach. While RPC is going to be very quick at the beginning of your project, the number of endpoints usually becomes a liability when maintaining code. Now, if you are only ever going to have fewer than 20 types of server calls, I would say you can stick with that approach without getting bitten too badly. But if your project is going to live longer than a year, you'll probably end up with far more endpoints than 20.
With a REST-based service, you can always add an optional parameter describing which child records the resource should include, and return them for that particular call.

There is nothing wrong with a RESTful service returning child entities, or with an optional query-string parameter to toggle that behavior:
public OrderOverview GetOrder(int orderId, bool? includeTickets);
When returning a ticket within an order, have each ticket contain a property referring to the URL endpoint of that particular ticket (/api/tickets/{id} or whatever) so the client can then work with the ticket independently of the order.
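As a rough sketch of how that could look with ASP.NET Web API (OrderOverview, TicketDto, the repository fields, and the "DefaultApi" route name are assumptions standing in for the poster's own types and routing, not code from the question):

using System.Linq;
using System.Web.Http;

public class OrdersController : ApiController
{
    // Hypothetical data-access dependencies.
    private readonly IOrderRepository _orderRepository;
    private readonly ITicketRepository _ticketRepository;

    public OrdersController(IOrderRepository orderRepository, ITicketRepository ticketRepository)
    {
        _orderRepository = orderRepository;
        _ticketRepository = ticketRepository;
    }

    // GET api/orders/5?includeTickets=true
    public OrderOverview GetOrder(int orderId, bool includeTickets = false)
    {
        var overview = new OrderOverview { Order = _orderRepository.GetOrder(orderId) };

        if (includeTickets)
        {
            overview.Tickets = _ticketRepository.GetTickets(orderId)
                .Select(t => new TicketDto
                {
                    Id = t.Id,
                    // Self link so the client can work with the ticket on its own.
                    Url = Url.Link("DefaultApi", new { controller = "tickets", id = t.Id })
                })
                .ToList();
        }

        return overview;
    }
}

With includeTickets defaulting to false, the plain order resource stays lightweight, and the combined response is available on demand.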

In this specific case I would say it depends on how many tickets you have. Let's say you were to add pagination for the tickets; would you want to be getting the Order every time you fetch the next set of tickets?
You could always make multiple requests and resolve all the promises at once via $q.all().

The best practice is to wrap up HTTP calls in an Angular service that multiple Angular controllers can reference.
With that, I don't think two calls to the server are going to be a huge detriment to you. And you won't have to alter the web service, or add any new Angular services, when you want to add new views to your site.
Generally, APIs should be written independently of whatever consumes them. If you're pressed for time and you're sure you'll never need to consume the API from some other client, you could write it specifically for your web app, but the independent design is generally the way to go.

Related

Simulate request scope in non-Web code

Background: I need parts of my system to be able to push various status messages to some data structure so that they can be consumed by a caller, without passing the data structure into methods explicitly, and where the needs of the callers can differ.
Detail: my application has two (and conceivably more) heads, an ASP.NET MVC 5 web site and a Windows service. So normally, while the composition root of a web application would be the web site itself, I am using a separate composition root that both these "front ends" connect to--this allows them to share a common configuration, as almost all of their dependency injection will be 100% identical. Plus, for testing, I've decided to keep most of the code out of the web site as truly unit testing controllers is problematic.
So my code needs to be able to run outside of the context of any web request. Similarly, anything the service does on a schedule needs to be able to be run as an on-demand job from the web site. So most of the heavy-lifting code in my application is NOT in the web site or the service.
Now, back to the needs of my status messages:
Some status messages will be logged, but potentially more will be logged when run as a service. It's okay to queue the log items and save them at the end.
When, say, a job is run on-demand from the web site, fewer things may be logged because any issues the user can take care of will be displayed directly to the user, and for debug purposes we only care about outright errors happening. New messages need to be pushed to the web site immediately (probably through websockets).
Also, a job may be run in debug or verbose mode, so that more informational or warning messages are produced one time (say on the web) than would be the case another time (from the headless service). Code generating messages shouldn't worry about these details at all, unless something that would hurt performance in production needs to be placed inside compiler directives for debug mode.
Additionally, some of the code pushes errors, warnings, or information into the objects that are returned from a request. These are easy to handle. But other errors, warnings, or information (such as errors that prevent said requested objects from being fetched at all) need to bubble up outside of the normal return values.
Right now I'm using something that seems less than ideal: all my methods have to accept a parameter that they can modify in order to bubble up such errors. For example:
public IReadOnlyCollection<UsableItem> GetUsableItems(
    ReadOnlyHashSet<string> itemIds,
    List<StatusMessage> statusMessages
) {
    var resultItems = _itemService.Get(itemIds);

    var resultItemsByHasFrobDuplicate = resultItems
        .GroupBy(i => i.FrobId)
        .ToLookup(grp => grp.Count() > 1, grp => grp.ToList());

    statusMessages
        .AddRange(
            resultItemsByHasFrobDuplicate[true]
                .Select(items =>
                    $"{items[0].FrobId} is used by multiple items {string.Join(",", items.Select(i => i.usableItemId))}")
        );

    return resultItemsByHasFrobDuplicate[false]
        .Select(grp => grp.First())
        .ToList()
        .AsReadOnly();
}
So you can see here that while normally items can be in the return value from the method (and these items can even have their own status messages placed on them), others cannot—the calling code can't deal with duplicates and expects a collection of UsableItem objects that do NOT have duplicate FrobId values. The situation of the duplicates is unexpected and needs to bubble up to the user or the log.
The code would be greatly improved by being able to remove the statusMessages parameter and do something more like CurrentScope.PushMessage(message) and know that these messages will be properly handled based on their severity or other rules (the real messages are an object with several properties).
Oh, and I left something out in the code above. What I really have to do is:
_itemService.Get(itemIds, statusMessages); // -- take the darn parameter everywhere
Argh. That is not ideal.
I instantly thought of MiniProfiler.Current as similar, where it's available anywhere but it's scoped to the current request. But I don't understand how it is able to be static, yet segregate any Step calls between different requests so that a user doesn't get another user's steps in his output. Plus, doesn't it only work for MVC? I need this to work when there is no MVC, just non-web code.
Can anyone suggest a way to improve my code and not have to pass around a list to method after method? Something that will work with unit tests is also important, as I need to be able to set up a means to capture the bubbled errors in my mock within a unit test (or be able to do nothing at all if that's not the desired portion of the system to test).
P.S. I don't mind tactful criticism of my little ToLookup pattern above for separating duplicates. I use that technique a lot and would be interested in a better way.
I think you're just looking at this the wrong way. None of this actually involves or really is related to a request. You simply need some service you can inject which pushes messages out. How it does that is inconsequential, and the whole point of dependency injection is that the class with the dependency shouldn't know or care.
Create an interface for your messaging service:
public interface IMessagingService
{
    void PushMessage(string message);
}
Then, you should alter your class which contains GetUsableItems a bit to inject the messaging service into the constructor. In general, method injection (what you're doing currently by passing List<StatusMessage> into the method) is frowned upon.
public class MyAwesomeClass
{
    protected readonly IMessagingService messenger;

    public MyAwesomeClass(IMessagingService messenger)
    {
        this.messenger = messenger;
    }
}
Then, in your method:
messenger.PushMessage("My awesome message");
The implementation of this interface, then, will probably vary based on whether it's injected in the web app or the Windows service. Your web app will likely have an implementation that simply utilizes its own code to push messages, whereas the Windows service will likely need an implementation that utilizes HttpClient to make requests to your web app. Set up your DI container to inject the right implementation for the right application and you're done.
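As a rough illustration of that split (not from the original answer), one way the two implementations might look; the class names, the relay endpoint, and the delegate-based web version are assumptions:

using System;
using System.Net.Http;
using System.Text;

// Windows-service host: relay each message to the web application over HTTP.
public class HttpRelayMessagingService : IMessagingService
{
    private static readonly HttpClient Client = new HttpClient();
    private readonly string _endpoint;

    public HttpRelayMessagingService(string endpoint)
    {
        _endpoint = endpoint; // hypothetical, e.g. "https://myapp.example/api/status-messages"
    }

    public void PushMessage(string message)
    {
        // Fire-and-forget; the calling code stays oblivious to delivery details.
        Client.PostAsync(_endpoint, new StringContent(message, Encoding.UTF8, "text/plain"));
    }
}

// Web host: hand the message to whatever the site already uses to reach the
// browser or the log (websockets, a per-request buffer, etc.) via an injected delegate.
public class DelegatingMessagingService : IMessagingService
{
    private readonly Action<string> _push;

    public DelegatingMessagingService(Action<string> push)
    {
        _push = push;
    }

    public void PushMessage(string message)
    {
        _push(message);
    }
}

Either one can be swapped in at the composition root, and a unit-test double can simply collect the pushed messages into a list.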

Persist a variable in WCF application per instance

I am creating a WCF RESTful service and there is a need to persist a variable per user. Is there a way I can achieve this without having to pass the variable to all my calls?
I am trying to log the user's progress throughout the process: whether their request failed or succeeded, IP address, when they requested the action, failure time, etc.
Please note I am new to WCF, thanks in advance.
I recently worked on this (except it wasn't RESTful). You could transmit information through HTTP headers and extract that information on the service side. See http://trycatch.me/adding-custom-message-headers-to-a-wcf-service-using-inspectors-behaviors/
For the client ID itself I can suggest two places to put it. One is OperationContext.Current.IncomingMessageProperties. Another is CorrelationManager.StartLogicalOperation which allows you to define a logical operation - that could be the service request, beginning to end - or multiple operations - and retrieve a unique ID for each operation.
I would lean toward the latter because it's part of System.Diagnostics and can prevent dependencies on System.ServiceModel. (The name CorrelationManager even describes what you're trying to do.)
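As a minimal sketch (the wrapper class and operation name are invented here; only the System.Diagnostics calls are the real API), wrapping each request in a logical operation could look like this:

using System;
using System.Diagnostics;

public class LogicalOperationScope : IDisposable
{
    public LogicalOperationScope(string operationName)
    {
        // Gives this request its own activity id, readable from anywhere on the thread.
        Trace.CorrelationManager.ActivityId = Guid.NewGuid();
        Trace.CorrelationManager.StartLogicalOperation(operationName);
    }

    public void Dispose()
    {
        Trace.CorrelationManager.StopLogicalOperation();
    }
}

// Usage at the start of handling a service request:
// using (new LogicalOperationScope("HandleRequest"))
// {
//     // ... handle the request; logging code can read Trace.CorrelationManager.ActivityId ...
// }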
In either case I would look at interception. That's the ideal way to read the value (wherever you store it) without having to pollute the individual methods with knowledge of logging and client IDs. (I saw from your message that you're trying to avoid that direct dependency on client IDs.)
Here's some documentation on adding Windsor to your WCF service. (At some point I'll add some end-to-end documentation on my blog.) Then, when you're using Windsor to instantiate your services, you can also use it to instantiate the dependencies and put interceptors around them that will perform your logging before or after those dependencies do their work. Within those interceptors you can access or modify that stack of logical operations.
I'm not doing Windsor justice by throwing out a few links. I'd like to flesh it out with some blog posts. But I recommend looking into it. It's beneficial for lots of reasons, interception being just one of them. It helps with the way we compose services and dependencies.
Update - I added a blog post - how to add Windsor to a WCF service in five minutes.

Q: How to build the most basic service aggregation pattern?

I have a set of services I want to be able to access via one end point altogether.
Now, I want to build something in WCF myself rather than use an existing framework/software, so that option is out of the question.
Suppose I have 10 contracts, each representing the contract of an independent service that I want to "route" to; what direction should I go?
public partial class ServiceBus : ICardsService
{
    //Proxy
    CMSClient cards = new CMSClient();

    public int methodExample()
    {
        return cards.methodExample();
    }
}
So far I've tried using a partial class "ServiceBus" that implements each contract, but then I have more than a few (60+) recurrences of identical function signatures, so I think I should approach this from a different angle.
Anyone got an idea of what I should do, or what direction to research? Currently I'm trying to use a normal WCF service that's going to be configured with a lot of client endpoints directing to each of the services it routes to, and one endpoint for the 'application' to consume.
I'm rather new at WCF, so even if something seems too trivial to mention, please do mention it anyway.
Thanks in advance.
I have a set of services I want to be able to access via one end point altogether.
...
So far I've tried using a partial class "ServiceBus" that implements each contract
It's questionable whether this kind of "service aggregation" pattern should be achieved by condensing multiple endpoints into an uber facade endpoint. Even when implemented well, this will still result in a brittle single point of failure in your solution.
Suppose I have 10 contracts, each representing the contract of an independent service that I want to "route" to; what direction should I go?
Stated broadly, your aim seems to be to decouple the caller and service so that the caller makes a call and, based on the call context, the call is routed to the relevant service.
One approach would be to do this call mediation on the client side. This is an unusual approach; it would involve creating a "service bus" assembly containing the capability to dynamically call a service at run-time, based on some kind of configurable metadata.
The client code would consume the assembly in-process, and at run-time call into the assembly, which would then make a call to the metadata store, retrieving the contract, binding, and address information for the relevant service, construct a WCF channel, and return it to the client. The client can then happily make calls against the channel and dispose it when finished.
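As a rough sketch of that idea (IServiceMetadataStore and ServiceEndpointInfo are invented names for the configurable metadata store; ChannelFactory<T> is the standard WCF API):

using System.ServiceModel;
using System.ServiceModel.Channels;

// Hypothetical metadata store: maps a contract type to binding and address,
// read from a database or configuration file at run-time.
public interface IServiceMetadataStore
{
    ServiceEndpointInfo Resolve<TContract>();
}

public class ServiceEndpointInfo
{
    public Binding Binding { get; set; }
    public EndpointAddress Address { get; set; }
}

public class ClientSideServiceBus
{
    private readonly IServiceMetadataStore _metadata;

    public ClientSideServiceBus(IServiceMetadataStore metadata)
    {
        _metadata = metadata;
    }

    // Builds a channel for the requested contract at run-time; the caller
    // makes calls against it and disposes it when finished.
    public TContract GetChannel<TContract>()
    {
        var endpoint = _metadata.Resolve<TContract>();
        var factory = new ChannelFactory<TContract>(endpoint.Binding, endpoint.Address);
        return factory.CreateChannel();
    }
}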
An alternative is to do the call mediation remotely, and luckily WCF does provide a routing service for this kind of thing. This allows you to achieve the service aggregation pattern you are proposing, but in a way which is fully configurable, so your overall solution will be less brittle. You will still have a single point of failure, however, unless you load-balance the router service.
I'm not sure about making it client side as I can't access some of the applications (external APIs) that are connecting to our service
Well, any solution you choose will likely involve some consumer rewrite - this is almost unavoidable.
I need to make it simple for the programmers using our api
This does not rule out a client-side library approach. In fact, in some ways this will make it really easy for the developers: all they will need to do is grab a NuGet package, wire it up, and start calling it. However, I agree it's an unusual approach and would also generate a lot of work for you.
I want to implement the aggregation service with one endpoint for a few contracts
Then you need to find a way to avoid having to implement multiple duplicate (or redundant) service operations in a single service implementation.
The simplest way would probably be to define a completely new service contract which exposes only those operations distinct to each of the services, plus a single instance of each of the redundant operations. Then you would need some internal routing logic to call the backing service operations depending on what the caller wanted to do. On second thought, not so simple, I think.
Do you have any examples of a distinct service operation and a redundant one?

Web API: Practices to use different Configurations

Perhaps you can give me a hint about good practices: in order to learn a bit more about Web API, I'm trying to create a web service which helps with some work in TFS.
It would be very cool if the client could select which TFS instance he wants to use by somehow passing an object which contains the needed data, such as the TFS service URL etc. But this gives me some trouble:
I created a type called TFSConfiguration to somehow pass this information, but this has some drawbacks:
I can't use GET methods, since I'd need to pass this object via the body
Every method in every Controller needs to get this object passed
I (think I) can't use dependency injection, since I need to pass this TFS parameter to the layers behind the controllers
Other approaches would all hurt the open/closed principle, I guess, since the controller really doesn't know which concrete TFS is used.
Is there a good way to make this work? If not, what would be the best approach for such a scenario?
I can't use GET methods, since I'd need to pass this object via the body
The ModelBinder can bind from the URI.
Every method in every Controller needs to get this object passed
Or you let the user store it in the session with a call, and read it from the session in other calls.
I (think I) can't use Dependency injection, since I need to pass this TFS-Parameter to the Layers behind the Controllers
Why do you want to inject this?
You could create a POST endpoint that accepts a TfsConfiguration object and returns a token, such as a GUID, that is passed to GET endpoints via the URL or a custom header. The flow could be:
POST TfsConfiguration to api/tfstoken, which returns the token
Routes which require the token have URLs of the form api/tfstoken/...
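A minimal sketch of that flow, assuming ASP.NET Web API 2 with attribute routing enabled; the in-memory store and route templates are illustrative only (a real app would want expiration and persistence):

using System;
using System.Collections.Concurrent;
using System.Web.Http;

[RoutePrefix("api/tfstoken")]
public class TfsTokenController : ApiController
{
    // Illustrative in-memory store mapping tokens to configurations.
    private static readonly ConcurrentDictionary<Guid, TfsConfiguration> Store =
        new ConcurrentDictionary<Guid, TfsConfiguration>();

    // POST api/tfstoken  (body: TfsConfiguration)  ->  returns the token
    [Route("")]
    public IHttpActionResult Post(TfsConfiguration configuration)
    {
        var token = Guid.NewGuid();
        Store[token] = configuration;
        return Ok(token);
    }

    // GET api/tfstoken/{token}/projects  ->  uses the stored configuration
    [Route("{token:guid}/projects")]
    public IHttpActionResult GetProjects(Guid token)
    {
        TfsConfiguration configuration;
        if (!Store.TryGetValue(token, out configuration))
            return NotFound();

        // ... use configuration (TFS URL, credentials, ...) to query that TFS instance ...
        return Ok();
    }
}

Other GET actions can resolve the configuration from the token in the same way (from the route or a custom header), which keeps the TFS details out of every method signature.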

REST based MVC site and/or WCF

I know there are actually a number of questions similar to this one, but I could not find one that exactly answers my question.
I am building a web application that will
obviously display data to the users :)
have a public API for authenticated users to use
later be ported to mobile devices
So, I am stuck on the design. I am going to use ASP.NET MVC for the website; however, I am not sure how to structure my architecture after that.
Should I:
make the website RESTful and act as the API
in my initial review, the GET returns the full view rather than just the data, which to me seems like it kills the idea of the public API
also, should I really be performing business logic in my controller? To be able to scale, wouldn't it be better to have a separate business logic layer that is on another server, or would I just consider pushing my MVC site to another server and it will solve the same problem? I am trying to create a SOLID design, so it also seems better to abstract this to a separate service (which I could just call another class, but then I get back to the problem of scalability...)
make the website not be RESTful and create a RESTful WCF service that the website will use
make both the website and a WCF service that are restful, however this seems redundant
I am fairly new to REST, so the problem could possibly be a misunderstanding on my part. Hopefully, I am explaining this well, but if not, please let me know if you need anything clarified.
I would make a separate business logic layer and a (RESTful) WCF layer on top of that. This decouples your BLL from your client. You could even have different clients use the same API (not saying you should, or will, but it gives you the flexibility). Ideally your service layer should not return your domain entities, but Data Transfer Objects (which you could map with AutoMapper), though it depends on the scope and specs of your project.
Putting it on another server makes it a different tier, tier <> layer.
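For illustration, a small sketch of the entity-to-DTO mapping idea; the Order/OrderDto shapes are invented, and the static Mapper API shown assumes an older AutoMapper version (newer versions configure a MapperConfiguration instance instead):

using AutoMapper;

// Domain entity stays behind the service boundary.
public class Order
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public decimal InternalCostPrice { get; set; }   // not exposed to clients
}

// DTO returned by the service layer.
public class OrderDto
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
}

public static class MappingConfig
{
    public static void Register()
    {
        Mapper.CreateMap<Order, OrderDto>();
    }
}

// In a service operation:
// OrderDto dto = Mapper.Map<OrderDto>(order);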
Plain and simple.... it would be easiest from a complexity standpoint to separate the website and your API. It's a bit cleaner IMO too.
However, here are some tips that you can do to make the process of handling both together a bit easier if you decide on going that route. (I'm currently doing this with a personal project I'm working on)
Keep your controller logic pretty bare. Judging by the fact that you want to make it SOLID, you're probably already doing this.
Separate the model that is returned to the view from the actual model. I like to create models specific to views and have a way of transforming the model into this view specific model.
Make sure you version everything. You will probably want to allow and support old API requests coming in for quite some time.... especially on the phone.
Actually use REST to its fullest, and not just as another name for HTTP. Most implementations miss the fact that in any type of response the state should be transferred with it (that's the ST in REST). Allow self-discovery of actions both on the page and in the API responses; for instance, if you allow paging in a resource, always specify that in the API response or the web page. There's an entire Wikipedia page on this. This immensely aids decoupling, sometimes allowing you to automagically update clients with the latest version.
Now your controller action will probably look something like this pseudo-code:
MyAction(param) {
    // Do something with param
    model = foo.baz(param)

    // Return the result in the appropriate shape
    if (isAPIRequest) {
        return WhateverResult(model)
    }
    return View(model.AsViewSpecificModel())
}
One thing I've been toying with myself is making my own type of ActionResult that handles the return logic, so that it is not duplicated throughout the project.
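Purely as a sketch of that idea, assuming ASP.NET MVC 5 and a deliberately simplistic content-negotiation check (the class name is made up):

using System;
using System.Web.Mvc;

public class ViewOrJsonResult<TModel> : ActionResult
{
    private readonly TModel _model;
    private readonly string _viewName;

    public ViewOrJsonResult(TModel model, string viewName)
    {
        _model = model;
        _viewName = viewName;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        // Crude negotiation: JSON when the client asks for it, a view otherwise.
        var acceptTypes = context.HttpContext.Request.AcceptTypes;
        var wantsJson = acceptTypes != null &&
                        Array.Exists(acceptTypes, t => t.Contains("application/json"));

        ActionResult inner = wantsJson
            ? (ActionResult)new JsonResult
            {
                Data = _model,
                JsonRequestBehavior = JsonRequestBehavior.AllowGet
            }
            : new ViewResult
            {
                ViewName = _viewName,
                ViewData = new ViewDataDictionary(_model)
            };

        inner.ExecuteResult(context);
    }
}

An action could then simply end with return new ViewOrJsonResult<OrderViewModel>(model, "Index"), removing the branching shown in the pseudo-code above from every action.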
I would use the REST service for your website, as it won't add any significant overhead (assuming they're on the same server) and will greatly simplify your codebase. Instead of having two APIs, one private (as a DLL reference) and one public, you can "eat your own dogfood". The only caution you'll need to exercise is making sure you don't bend the public API to suit your own needs; instead, have a separate private API if needed.
You can use RestSharp or EasyHttp for the REST calls inside the MVC site.
ServiceStack will probably make the API task easier: you can use your existing domain objects and simply write a set of services that get/update/delete/create the objects, without needing to write two actions for everything in MVC.
