Is it possible to have multiple serilog loggers? Currently within my WebApi I can call Log.Information for example to log an information event, but is there a way that I can instead make different logs and call ExternalLog.Information or AuthenticationLog.Information from my controller? The purpose behind this is that my web api is currently working with multiple different databases for different yet interrelated projects, and I would like to store logs within each of these databases that pertain to them instead of needing to create an additional logging database if at all possible.
A better solution, which I figure is less likely, would be to map individual controllers to a log, so that any time a specific controller logs, it writes to the AuthenticationLog, for example.
I believe that the answer to this question is to use sub-loggers rather than separate loggers. I have found that you can use .WriteTo.Logger and filter further in there. I will accept this as the answer if nobody else has a better solution (and, of course, if I am able to get it to work). I need to be able to filter on the controller or action name; at this time I have a second Stack Overflow question out to figure out how to get that data: Serilog with Asp.net Web Api not using enricher
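For reference, a minimal sketch of the sub-logger approach. The property name "Controller" and the file sinks are placeholder assumptions — in practice you would push whatever property your enricher provides and point each sub-logger at the matching database sink:

```csharp
// Root pipeline with two sub-loggers, each filtered by a property and
// writing to its own sink. Property name and sinks are assumptions.
using Serilog;
using Serilog.Context;
using Serilog.Filters;

Log.Logger = new LoggerConfiguration()
    .Enrich.FromLogContext()
    // Events tagged Controller = "Authentication" go to the auth log/database
    .WriteTo.Logger(lc => lc
        .Filter.ByIncludingOnly(Matching.WithProperty<string>("Controller", "Authentication"))
        .WriteTo.File("logs/authentication.log"))
    // Events tagged Controller = "External" go to the external log/database
    .WriteTo.Logger(lc => lc
        .Filter.ByIncludingOnly(Matching.WithProperty<string>("Controller", "External"))
        .WriteTo.File("logs/external.log"))
    .CreateLogger();

// In a controller action, push the property so the filters can route the event:
using (LogContext.PushProperty("Controller", "Authentication"))
{
    Log.Information("User {UserId} signed in", 42);
}
```

With this shape you keep the single static Log, and routing happens in configuration rather than at each call site.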
I'm currently doing a project in .NET Core where I am going to have various layers, including a CrossCutting layer, but I can't get my head around this problem...
Right now I have a logger working fine, using Serilog. But my problem is that I want to have my logger created and configured inside the CrossCutting layer, and then injected into the Application layer (for now).
Is there any possible way to do that? There are so many articles explaining how to do the configuration through Program.cs, but what about inside a layer?
In order to correlate logs that belong to the same request, even across multiple applications, add a CorrelationId property to your logs.
Here you have Serilog best practices:
https://benfoster.io/blog/serilog-best-practices/
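As a sketch of that advice, assuming ASP.NET Core with Enrich.FromLogContext() configured, a small middleware can push the CorrelationId for the duration of each request. The header name "X-Correlation-Id" is an assumption — align it with whatever your other applications send:

```csharp
// Attach a CorrelationId property to every log event in the request scope.
// Requires .Enrich.FromLogContext() in the Serilog configuration.
using System;
using System.Linq;
using Serilog.Context;

app.Use(async (context, next) =>
{
    // Reuse the caller's id when present so logs correlate across apps
    var correlationId = context.Request.Headers["X-Correlation-Id"].FirstOrDefault()
                        ?? Guid.NewGuid().ToString();

    // Every Log.* call inside this scope carries the CorrelationId property
    using (LogContext.PushProperty("CorrelationId", correlationId))
    {
        await next();
    }
});
```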
I am creating a WCF RESTful service and there is a need to persist a variable per user; is there a way I can achieve this without having to pass the variable to all my calls?
I am trying to log the user's progress throughout the process - whether their request failed or succeeded, the IP address, when they requested the action, failure time, etc.
Please note I am new to WCF, thanks in advance.
I recently worked on this (except it wasn't RESTful). You could transmit information through HTTP headers and extract that information on the service side. See http://trycatch.me/adding-custom-message-headers-to-a-wcf-service-using-inspectors-behaviors/
For the client ID itself I can suggest two places to put it. One is OperationContext.Current.IncomingMessageProperties. Another is CorrelationManager.StartLogicalOperation which allows you to define a logical operation - that could be the service request, beginning to end - or multiple operations - and retrieve a unique ID for each operation.
I would lean toward the latter because it's part of System.Diagnostics and can prevent dependencies on System.ServiceModel. (The name CorrelationManager even describes what you're trying to do.)
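A minimal sketch of the CorrelationManager approach (the wrapper shape here is illustrative, not a prescribed pattern):

```csharp
// Start a logical operation at the service entry point; any code on the
// same thread (e.g. a logging interceptor) can read the current id.
using System;
using System.Diagnostics;

public static class RequestLogging
{
    public static void HandleRequest(Action body)
    {
        // Push a unique id for this request onto the logical-operation stack
        Trace.CorrelationManager.StartLogicalOperation(Guid.NewGuid());
        try
        {
            body(); // service work; logging in here can read CurrentOperationId
        }
        finally
        {
            Trace.CorrelationManager.StopLogicalOperation();
        }
    }

    // Called from logging code to stamp entries with the current operation id
    public static object CurrentOperationId =>
        Trace.CorrelationManager.LogicalOperationStack.Count > 0
            ? Trace.CorrelationManager.LogicalOperationStack.Peek()
            : null;
}
```

Because the stack lives in System.Diagnostics, the logging code never needs a reference to System.ServiceModel.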
In either case I would look at interception. That's the ideal way to read the value (wherever you store it) without having to pollute the individual methods with knowledge of logging and client IDs. (I saw from your message that you're trying to avoid that direct dependency on client IDs.)
Here's some documentation on adding Windsor to your WCF service. (At some point I'll add some end-to-end documentation on my blog.) Then, when you're using Windsor to instantiate your services, you can also use it to instantiate the dependencies and put interceptors around them that will perform your logging before or after those dependencies do their work. Within those interceptors you can access or modify that stack of logical operations.
I'm not doing Windsor justice by throwing out a few links. I'd like to flesh it out with some blog posts. But I recommend looking into it. It's beneficial for lots of reasons - interception just one. It helps with the way we compose services and dependencies.
Update - I added a blog post - how to add Windsor to a WCF service in five minutes.
I have a small website I implemented with AngularJS, C# and Entity Framework. The whole website is a Single Page Application and gets all of its data from one single C# web service.
My question deals with the interface that the C# web service should expose. For one, the service can provide the entities in a RESTful way, providing them directly or as DTOs. The other approach would be for the web service to return an object tailored to exactly one use case, so that the AngularJS controller only needs to invoke the web service once and can work with the returned model directly.
To clarify, please consider the following two snippets:
// The service returns DTOs, but has to be invoked multiple
// times from the AngularJS controller
public Order GetOrder(int orderId);
public List<Ticket> GetTickets(int orderId);
And
// The service returns the model directly
public OrderOverview GetOrderAndTickets(int orderId);
While the first example exposes a RESTful interface and works with the resource metaphor, it has the huge drawback of only returning parts of the data. The second example returns an object tailored to the needs of the MVC controller, but can most likely only be used in one MVC controller. Also, a lot of mapping needs to be done for common fields in the second scenario.
I found that I did both things from time to time in my web service and want to get some feedback about it. I do not care too much about performance, although multiple requests are of course problematic, and once they slow down the application too much, they need refactoring. What is the best way to design the web service interface?
I would advise going with the REST approach, general-purpose API design, rather than the single-purpose remote procedure call (RPC) approach. While RPC is going to be very quick at the beginning of your project, the number of endpoints usually becomes a liability when maintaining code. Now, if you are only ever going to have fewer than 20 types of server calls, I would say you can stick with this approach without getting bitten too badly. But if your project is going to live longer than a year, you'll probably end up with far more endpoints than 20.
With a REST-based service, you can always add an optional parameter describing which child records the resource should include, and return them for that particular call.
There is nothing wrong with a RESTful service returning child entities or having an optional querystring param to toggle that behavior
public OrderOverview GetOrder(int orderId, bool? includeTickets);
When returning a ticket within an order, have each ticket contain a property referring to the URL endpoint of that particular ticket (/api/tickets/{id} or whatever) so the client can then work with the ticket independent of the order
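As a sketch of that point, the DTOs might look like this (names and the route shape are illustrative):

```csharp
// Each child entity carries the URL of its own resource so the client
// can fetch or update it independently of the parent order.
using System.Collections.Generic;

public class TicketDto
{
    public int Id { get; set; }
    public string Seat { get; set; }

    // e.g. "/api/tickets/42" - built by the service when mapping the entity
    public string Url { get; set; }
}

public class OrderOverview
{
    public int OrderId { get; set; }
    public List<TicketDto> Tickets { get; set; } = new List<TicketDto>();
}
```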
In this specific case I would say it depends on how many tickets you have. Let's say you were to add pagination for the tickets, would you want to be getting the Order every time you get the next set of tickets?
You could always make multiple requests and resolve all the promises at once via $q.all().
The best practice is to wrap up HTTP calls in an Angular Service, that multiple angular controllers can reference.
With that, I don't think 2 calls to the server is going to be a huge detriment to you. And you won't have to alter the web service, or add any new angular services, when you want to add new views to your site.
Generally, APIs should be written independently of what's consuming them. If you're pressed for time and you're sure you'll never need to consume the API from some other client, you could write it specifically for your web app. But generally that's how it goes.
OK, so I've run into a situation I would like to resolve with minimum impact on our development group.
We are using log4net as our logging framework in a large-ish C# system (~40 production assemblies).
Now our support end wants to be able to correlate logged events with a database they maintain separately. A reasonable request.
In production our main log repository is the Windows Event-Log.
At the developer side our current pattern is this:
Whenever you want to log from a component, you instantiate a logger like this at the top of the class:
private static readonly ILog Log = LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
If you need stuff in the logging context, you put it in as-early-as-possible in the flow of every Thread, ie. at the receiving end of service calls etc.
Whenever you want to do logging, you simply do
Log.Warn(str, ex) - (or Info, Error etc)
Now we want to tie this log entry to a unique "eventId", and we can supply an extension method on ILog that will allow us to do Log.Warn(int, str, ex), where "int" is a number with these properties:
- It is "mapped" to a durable store.
- It points to one and only one Log entry.
- If the source code Log statement is removed, the Id is not reused for a new log statement.
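A sketch of such an extension method. The "EventID" property name is an assumption — log4net's EventLogAppender conventionally reads a property by that name to set the Windows event id, but check your appender configuration:

```csharp
// Attach an application-defined event id to a log4net call by stashing
// it in the thread context properties, which flow into the appender.
using System;
using log4net;

public static class LoggerExtensions
{
    public static void Warn(this ILog log, int eventId, string message, Exception ex)
    {
        // ThreadContext properties are visible to appender layouts/mappings
        ThreadContext.Properties["EventID"] = eventId;
        try
        {
            log.Warn(message, ex);
        }
        finally
        {
            // Don't leak the id into unrelated log statements on this thread
            ThreadContext.Properties.Remove("EventID");
        }
    }
}
```

This keeps the call-site pattern (Log.Warn(4711, str, ex)) close to the existing one; the hard part, as the question notes, is governing uniqueness of the ids themselves.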
My immediate solution would be to maintain a global enum, that would cover the set of possible "eventId"'s and just instruct the developers to "use them only once".
We would then proceed to do some sort of "intelligent" mapping between our namespaces and "CategoryId" - e.g. everything in the "BusinessLayer" namespace gets one categoryId assigned.
But I think there is something I'm missing....
Any thoughts would be appreciated on:
How do you use EventId and CategoryId in your large systems? (Or "What" do you use them for)
Does any of you have an example of a "dynamic" way of creating the EventId's, in such a way that you can maintain the simple approach to logging, that does not require the developer to supply a unique Id at code-statement level.
Sorry if my question is too broad; I am aware that I'm fishing a bit here.
I know there are actually a number of questions similar to this one, but I could not find one that exactly answers my question.
I am building a web application that will
obviously display data to the users :)
have a public API for authenticated users to use
later be ported to mobile devices
So, I am stuck on the design. I am going to use ASP.NET MVC for the website; however, I am not sure how to structure my architecture after that.
Should I:
- Make the website RESTful and have it act as the API?
  - In my initial review, the GET returns the full view rather than just the data, which to me seems to kill the idea of the public API.
  - Also, should I really be performing business logic in my controller? To be able to scale, wouldn't it be better to have a separate business logic layer on another server, or would pushing my MVC site to another server solve the same problem? I am trying to create a SOLID design, so it also seems better to abstract this into a separate service (which I could just call as another class, but then I get back to the problem of scalability...).
- Make the website not RESTful and create a RESTful WCF service that the website will use?
- Make both the website and a WCF service RESTful? (This seems redundant, though.)
I am fairly new to REST, so the problem could possibly be a misunderstanding on my part. Hopefully, I am explaining this well, but if not, please let me know if you need anything clarified.
I would make a separate business logic layer and a (RESTful) WCF layer on top of that. This decouples your BLL from your client. You could even have different clients use the same API (not saying you should, or will, but it gives you the flexibility). Ideally your service layer should not return your domain entities, but Data Transfer Objects (which you could map with AutoMapper), though it depends on the scope and specs of your project.
Putting it on another server makes it a different tier, tier <> layer.
Plain and simple.... it would be easiest from a complexity standpoint to separate the website and your API. It's a bit cleaner IMO too.
However, here are some tips that you can do to make the process of handling both together a bit easier if you decide on going that route. (I'm currently doing this with a personal project I'm working on)
Keep your controller logic pretty bare. Judging on the fact that you want to make it SOLID you're probably already doing this.
Separate the model that is returned to the view from the actual model. I like to create models specific to views and have a way of transforming the model into this view specific model.
Make sure you version everything. You will probably want to allow and support old API requests coming in for quite some time.... especially on the phone.
Actually use REST to its fullest and not just as another name for HTTP. Most implementations miss the fact that in any type of response the state should be transferred with it (missing the ST). Allow self-discovery of actions both on the page and in the API responses. For instance, if you allow paging in a resource, always specify that in the API response or the web page. There's an entire Wikipedia page on this (HATEOAS). This immensely aids with the decoupling, sometimes allowing you to automagically update clients with the latest version.
Now your controller action will probably look something like this pseudo-code:
MyAction(param) {
    // do something with param
    model = foo.baz(param)
    // return result
    if (isAPIRequest) {
        return WhateverResult(model)
    }
    return View(model.AsViewSpecificModel())
}
One thing I've been toying with myself is making my own type of ActionResult that handles the return logic, so that it is not duplicated throughout the project.
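As a sketch of that idea, targeting classic ASP.NET MVC (the class name and the content-negotiation check are illustrative assumptions):

```csharp
// A custom ActionResult that picks JSON or a view at execution time,
// so the "API or page?" branch is not repeated in every action.
using System;
using System.Web.Mvc;

public class ViewOrJsonResult : ActionResult
{
    private readonly object _model;
    private readonly string _viewName;

    public ViewOrJsonResult(object model, string viewName)
    {
        _model = model;
        _viewName = viewName;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        // Crude negotiation: treat an explicit JSON Accept header as an API call
        var accepts = context.HttpContext.Request.AcceptTypes;
        bool wantsJson = accepts != null &&
            Array.IndexOf(accepts, "application/json") >= 0;

        ActionResult inner = wantsJson
            ? (ActionResult)new JsonResult
              {
                  Data = _model,
                  JsonRequestBehavior = JsonRequestBehavior.AllowGet
              }
            : new ViewResult
              {
                  ViewName = _viewName,
                  ViewData = new ViewDataDictionary(_model)
              };

        inner.ExecuteResult(context);
    }
}
```

An action would then just end with `return new ViewOrJsonResult(model, "MyView");`.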
I would use the REST service for your website, as it won't add any significant overhead (assuming they're on the same server) and will greatly simplify your codebase. Instead of having 2 APIs: one private (as a DLL reference) and one public, you can "eat your own dogfood". The only caution you'll need to exercise is making sure you don't bend the public API to suit your own needs, but instead having a separate private API if needed.
You can use RestSharp or EasyHttp for the REST calls inside the MVC site.
ServiceStack will probably make the API task easier, you can use your existing domain objects, and simply write a set of services that get/update/delete/create the objects without needing to write 2 actions for everything in MVC.