Simulate request scope in non-Web code - C#

Background: I need parts of my system to be able to push various status messages to some data structure so that they can be consumed by a caller, without passing the data structure into methods explicitly, and where the needs of the callers can differ.
Detail: my application has two (and conceivably more) heads, an ASP.NET MVC 5 web site and a Windows service. So normally, while the composition root of a web application would be the web site itself, I am using a separate composition root that both these "front ends" connect to--this allows them to share a common configuration, as almost all of their dependency injection will be 100% identical. Plus, for testing, I've decided to keep most of the code out of the web site as truly unit testing controllers is problematic.
So my code needs to be able to run outside of the context of any web request. Similarly, anything the service does on a schedule needs to be able to be run as an on-demand job from the web site. So most of the heavy-lifting code in my application is NOT in the web site or the service.
Now, back to the needs of my status messages:
Some status messages will be logged, but potentially more will be logged when run as a service. It's okay to queue the log items and save them at the end.
When, say, a job is run on-demand from the web site, fewer things may be logged because any issues the user can take care of will be displayed directly to the user, and for debug purposes we only care about outright errors happening. New messages need to be pushed to the web site immediately (probably through websockets).
Also, a job may be run in debug or verbose mode, so that more informational or warning messages are produced on one run (say, on the web) than on another (from the headless service). Code generating messages shouldn't worry about these details at all, unless something that would hurt performance in production is placed inside compiler directives for debug mode.
Additionally, some of the code pushes errors, warnings, or information into the objects that are returned from a request. These are easy to handle. But other errors, warnings, or information (such as errors that prevent said requested objects from being fetched at all) need to bubble up outside of the normal return values.
Right now I'm using something that seems less than ideal: all my methods have to accept a parameter that they can modify in order to bubble up such errors. For example:
public IReadOnlyCollection<UsableItem> GetUsableItems(
    ReadOnlyHashSet<string> itemIds,
    List<StatusMessage> statusMessages
) {
    var resultItems = _itemService.Get(itemIds);
    var resultItemsByHasFrobDuplicate = resultItems
        .GroupBy(i => i.FrobId)
        .ToLookup(grp => grp.Count() > 1, grp => grp.ToList());
    statusMessages.AddRange(
        resultItemsByHasFrobDuplicate[true]
            .Select(items => new StatusMessage( // assuming a message-text constructor
                $"{items[0].FrobId} is used by multiple items: {string.Join(", ", items.Select(i => i.UsableItemId))}"))
    );
    return resultItemsByHasFrobDuplicate[false]
        .Select(items => items.First())
        .ToList()
        .AsReadOnly();
}
So you can see here that while normally items can be in the return value from the method (and these items can even have their own status messages placed on them), others cannot—the calling code can't deal with duplicates and expects a collection of UsableItem objects that do NOT have duplicate FrobId values. The situation of the duplicates is unexpected and needs to bubble up to the user or the log.
The code would be greatly improved by being able to remove the statusMessages parameter and do something more like CurrentScope.PushMessage(message) and know that these messages will be properly handled based on their severity or other rules (the real messages are an object with several properties).
Oh, and I left something out in the code above. What I really have to do is:
_itemService.Get(itemIds, statusMessages); // -- take the darn parameter everywhere
Argh. That is not ideal.
I instantly thought of MiniProfiler.Current as similar, where it's available anywhere but it's scoped to the current request. But I don't understand how it is able to be static, yet segregate any Step calls between different requests so that a user doesn't get another user's steps in his output. Plus, doesn't it only work for MVC? I need this to work when there is no MVC, just non-web code.
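(For context on how a static accessor can still be per-request: ambient state of this kind is usually built on CallContext or, in .NET 4.6+, AsyncLocal<T>, which keeps a separate value per logical call flow. A minimal sketch, assuming a hypothetical MessageScope class rather than anything MiniProfiler actually ships:)

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical ambient scope: the entry point is static, but the
// underlying AsyncLocal<T> stores a separate value per logical flow
// of execution, so two concurrent requests never see each other's
// messages.
public sealed class MessageScope : IDisposable
{
    private static readonly AsyncLocal<MessageScope> _current =
        new AsyncLocal<MessageScope>();

    public static MessageScope Current => _current.Value;

    public List<string> Messages { get; } = new List<string>();

    // Call once at the start of a request or job.
    public static MessageScope Begin()
    {
        var scope = new MessageScope();
        _current.Value = scope;
        return scope;
    }

    // Safe to call from anywhere; no-op when no scope is active,
    // which is handy for unit tests that don't care about messages.
    public static void PushMessage(string message)
    {
        Current?.Messages.Add(message);
    }

    public void Dispose() => _current.Value = null;
}
```

Callers would wrap a request or job in `using (var scope = MessageScope.Begin()) { ... }` and read `scope.Messages` at the end, while deeply nested code just calls `MessageScope.PushMessage(...)`.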
Can anyone suggest a way to improve my code and not have to pass around a list to method after method? Something that will work with unit tests is also important, as I need to be able to set up a means to capture the bubbled errors in my mock within a unit test (or be able to do nothing at all if that's not the desired portion of the system to test).
P.S. I don't mind tactful criticism of my little ToLookup pattern above for separating duplicates. I use that technique a lot and would be interested in a better way.
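(Regarding the P.S.: one common alternative to the two-key ToLookup is to group once and then partition the groups by size. A sketch with simplified stand-in types, since the real UsableItem isn't shown:)

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified stand-in for the real UsableItem; illustration only.
public class Item
{
    public string FrobId { get; set; }
    public string UsableItemId { get; set; }
}

public static class Partitioning
{
    // Group once, then split the groups by size: singleton groups are
    // kept as the usable items, larger groups are reported as duplicates.
    public static (List<Item> Unique, List<List<Item>> Duplicates) SplitByFrobId(
        IEnumerable<Item> items)
    {
        var groups = items.GroupBy(i => i.FrobId).ToList();
        return (
            groups.Where(g => g.Count() == 1).Select(g => g.First()).ToList(),
            groups.Where(g => g.Count() > 1).Select(g => g.ToList()).ToList()
        );
    }
}
```

This names both halves of the split explicitly instead of hiding them behind lookup keys `true` and `false`, at the cost of enumerating the groups twice.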

I think you're just looking at this the wrong way. None of this actually involves or really is related to a request. You simply need some service you can inject which pushes messages out. How it does that is inconsequential, and the whole point of dependency injection is that the class with the dependency shouldn't know or care.
Create an interface for your messaging service:
public interface IMessagingService
{
    void PushMessage(string message);
}
Then you should alter the class containing GetUsableItems a bit to inject the messaging service through its constructor. In general, method injection (what you're doing currently by passing List<StatusMessage> into the method) is frowned upon.
public class MyAwesomeClass
{
    protected readonly IMessagingService messenger;

    public MyAwesomeClass(IMessagingService messenger)
    {
        this.messenger = messenger;
    }

    // ... GetUsableItems and the other methods go here ...
}
Then, in your method:
messenger.PushMessage("My awesome message");
The implementation of this interface will then probably vary based on whether it's injected in the web app or the Windows service. Your web app will likely have an implementation that simply uses its own code to push messages, whereas the Windows service will likely need an implementation that uses HttpClient to make requests to your web app. Set up your DI container to inject the right implementation for the right application and you're done.
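To make that concrete, here is a sketch of two such implementations: one that buffers messages for immediate display (the web case) and one that queues them for a single flush to the log at the end of a job (the service case). The class names and the Console stand-in for the real logger are assumptions, not part of the question's codebase.

```csharp
using System;
using System.Collections.Generic;

public interface IMessagingService
{
    void PushMessage(string message);
}

// Hypothetical web implementation: keeps messages available so the
// site can show them to the user (or push them over websockets).
public class WebMessagingService : IMessagingService
{
    public List<string> Pushed { get; } = new List<string>();

    public void PushMessage(string message) => Pushed.Add(message);
}

// Hypothetical service implementation: queues messages during the job
// and writes them all out at the end, matching the "queue the log
// items and save them at the end" requirement.
public class QueuedLogMessagingService : IMessagingService
{
    private readonly Queue<string> _queue = new Queue<string>();

    public void PushMessage(string message) => _queue.Enqueue(message);

    // Returns how many messages were flushed.
    public int Flush()
    {
        var count = _queue.Count;
        while (_queue.Count > 0)
            Console.WriteLine(_queue.Dequeue()); // stand-in for the real logger
        return count;
    }
}
```

For unit tests, a third implementation (or a mock of IMessagingService) either captures the pushed messages for assertions or ignores them entirely, which covers both testing scenarios in the question.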

Related

Q: How to build the most basic service aggregation pattern?

I have a set of services I want to be able to access via one end point altogether.
Now, I want to build this in WCF myself rather than use an existing framework or product, so those are out of the question.
Suppose I have 10 contracts, each representing the contract of an independent service that I want to "route" to. What direction should I go?
public partial class ServiceBus : ICardsService
{
    // Proxy
    CMSClient cards = new CMSClient();

    public int methodExample()
    {
        return cards.methodExample();
    }
}
So far I've tried using a partial class "ServiceBus" that implements each contract, but then I have more than a few (60+) recurrences of identical function signatures, so I think I should approach this from a different angle.
Anyone got an idea of what I should do, or what direction to research? Currently I'm trying to use a normal WCF service that's configured with many client endpoints directing to each of the services it routes TO, and one endpoint for the 'application' to consume.
I'm rather new at wcf so anything that may seem too trivial to mention please do mention it anyway.
Thanks in advance.
I have a set of services I want to be able to access via one end point
altogether.
...
So far I've tried using a partial class "ServiceBus" that implements
each contract
It's questionable whether this kind of "service aggregation" pattern should be achieved by condensing multiple endpoints into an uber facade endpoint. Even when implemented well, this will still result in a brittle single failure point in your solution.
Suppose I have 10 contracts each representing a contract of an
independent service that I want to "route" to, what direction should I
go?
Stated broadly, your aim seems to be to decouple the caller and the services so that the caller makes a call and, based on the call context, the call is routed to the relevant service.
One approach would be to do this call mediation on the client side. This is an unusual approach but would involve creating a "service bus" assembly containing the capability to dynamically call a service at run-time, based on some kind of configurable metadata.
The client code would consume the assembly in-process, and at run-time call into the assembly, which would then make a call to the metadata store, retrieving the contract, binding, and address information for the relevant service, construct a WCF channel, and return it to the client. The client can then happily make calls against the channel and dispose it when finished.
An alternative is to do the call mediation remotely and luckily WCF does provide a routing service for this kind of thing. This allows you to achieve the service aggregation pattern you are proposing, but in a way which is fully configurable so your overall solution will be less brittle. You will still have a single failure point however, unless you load balance the router service.
I'm not sure about making it client side as I can't access some of the
applications (external apis) that are connecting to our service
Well, any solution you choose will likely involve some consumer rewrite - this is almost unavoidable.
I need to make it simple for the programmers using our api
This does not rule out a client side library approach. In fact in some ways this will make it really easy for the developers, all they will need to do is grab a nuget package, wire it up and start calling it. However I agree it's an unusual approach and would also generate a lot of work for you.
I want to implement the aggregation service with one endpoint for a
few contracts
Then you need to find a way to avoid having to implement multiple duplicate (or redundant) service operations in a single service implementation.
The simplest way would probably be to define a completely new service contract which exposes only those operations distinct to each of the services, and additionally a single instance of each of the redundant operations. Then you would need to have some internal routing logic to call the backing service operations depending on what the caller wanted to do. On second thoughts not so simple I think.
Do you have any examples of a distinct service operation and a redundant one?

AngularJS and Web Service Interaction Best Practices

I have a small website I implemented with AngularJS, C# and Entity Framework. The whole website is a Single Page Application and gets all of its data from one single C# web service.
My question deals with the interface that the C# web service should expose. On the one hand, the service can provide the entities in a RESTful way, returning them directly or as DTOs. The other approach would be for the web service to return an object tailored to exactly one use case, so that the AngularJS controller only needs to invoke the web service once and can work with the returned model directly.
To clarify, please consider the following two snippets:
// The service returns DTOs, but has to be invoked multiple
// times from the AngularJS controller
public Order GetOrder(int orderId);
public List<Ticket> GetTickets(int orderId);
And
// The service returns the model directly
public OrderOverview GetOrderAndTickets(int orderId);
While the first example exposes a RESTful interface and works with the resource metaphor, it has the huge drawback of only returning parts of the data. The second example returns an object tailored to the needs of the MVC controller, but can most likely only be used in one MVC controller. Also, a lot of mapping needs to be done for common fields in the second scenario.
I found that I did both things from time to time in my web service and want to get some feedback about it. I do not care too much about performance, although multiple requests are of course problematic, and once they slow down the application too much they need refactoring. What is the best way to design the web service interface?
I would advise going with the REST approach: general-purpose API design rather than the single-purpose remote procedure call (RPC) approach. While RPC is going to be very quick at the beginning of your project, the number of endpoints usually becomes a liability when maintaining code. Now, if you are only ever going to have fewer than 20 types of server calls, I would say you can stick with that approach without getting bitten too badly. But if your project is going to live longer than a year, you'll probably end up with far more endpoints than 20.
With a rest based service, you can always add an optional parameter to describe child records said resource contains, and return them for the particular call.
There is nothing wrong with a RESTful service returning child entities or having an optional querystring param to toggle that behavior
public OrderOverview GetOrder(int orderId, bool? includeTickets);
When returning a ticket within an order, have each ticket contain a property referring to the URL endpoint of that particular ticket (/api/tickets/{id} or whatever) so the client can then work with the ticket independent of the order
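On the DTO side, that self-link idea might look like the following sketch. The property and route names are assumptions for illustration, not from the question's codebase.

```csharp
using System.Collections.Generic;

public class TicketDto
{
    public int Id { get; set; }

    // Link back to the ticket resource, so the client can fetch or
    // update the ticket independently of the order it arrived with.
    public string Url => $"/api/tickets/{Id}";
}

public class OrderOverviewDto
{
    public int OrderId { get; set; }

    // Child tickets are embedded for the common case, but each one
    // carries its own URL for independent follow-up requests.
    public List<TicketDto> Tickets { get; set; } = new List<TicketDto>();
}
```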
In this specific case I would say it depends on how many tickets you have. Let's say you were to add pagination for the tickets, would you want to be getting the Order every time you get the next set of tickets?
You could always make multiple requests and resolve all the promises at once via $q.all().
The best practice is to wrap up HTTP calls in an Angular Service, that multiple angular controllers can reference.
With that, I don't think 2 calls to the server is going to be a huge detriment to you. And you won't have to alter the web service, or add any new angular services, when you want to add new views to your site.
Generally, APIs should be written without regard for what's consuming them. If you're pressed for time and you're sure you'll never need to consume it from some other client, you could write it specifically for your web app. But generally that's how it goes.

How to maintain Id's of log-entries in an agile project

OK, so I've run into a situation I would like to resolve with minimum impact on our development group.
We are using log4net as our logging framework in a largish C# system (~40 production assemblies).
Now our support end wants to be able to correlate logged events with a database they maintain separately. A reasonable request.
In production our main log repository is the Windows Event-Log.
At the developer side our current pattern is this:
Whenever you want to log from a component, you instantiate a logger like this at the top of the class:
private static readonly ILog Log = LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);
If you need stuff in the logging context, you put it in as-early-as-possible in the flow of every Thread, ie. at the receiving end of service calls etc.
Whenever you want to do logging, you simply do
Log.Warn(str, ex) - (or Info, Error etc)
Now we want to "fix" each log entry to a unique "eventId", and we can supply an extension method on the logger that will allow us to do Log.Warn(int, str, ex), where "int" is a number with these properties:
- It is "mapped" to a durable store.
- It points to one and only one log entry.
- If the source-code log statement is removed, the Id is not reused for a new log statement.
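The extension method itself is the easy part; a sketch against a simplified stand-in interface (the real one would target log4net's ILog, and with real log4net you might stash the id in ThreadContext.Properties so an appender can write it to a dedicated Event Log field):

```csharp
using System;

// Simplified stand-in for log4net's ILog, just to show the shape.
public interface ILog
{
    void Warn(object message, Exception exception);
}

// Demo implementation that captures the last message, for illustration.
public class CapturingLog : ILog
{
    public string Last { get; private set; }

    public void Warn(object message, Exception exception) =>
        Last = message?.ToString();
}

public static class EventIdLogExtensions
{
    // Prefixes the durable event id into the message text, so the
    // support database can correlate on it. Keeping the id durable
    // and unique is the hard part; this only covers the call site.
    public static void Warn(this ILog log, int eventId, string message, Exception ex = null)
    {
        log.Warn($"[EventId {eventId}] {message}", ex);
    }
}
```

The unsolved problem from the question remains how to allocate the ids; the extension method only fixes the call-site ergonomics.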
My immediate solution would be to maintain a global enum that would cover the set of possible "eventId"s and just instruct the developers to "use them only once".
We would then proceed to do some sort of "intelligent" mapping between our namespaces and "CategoryId" - e.g. everything in the "BusinessLayer" namespace gets one CategoryId assigned.
But I think there is something I'm missing....
Any thoughts would be appreciated on:
How do you use EventId and CategoryId in your large systems? (Or "What" do you use them for)
Does any of you have an example of a "dynamic" way of creating the EventId's, in such a way that you can maintain the simple approach to logging, that does not require the developer to supply a unique Id at code-statement level.
Sorry if my question is too broad; I am aware that I'm fishing a bit here.

Propagate Application Service as WCF Service

I have description of my Application Services using my fancy classes (ServiceDescription class that contains collection of ServiceMethod description, for simplification).
Now, I want to expose one Application Service as one WCF Service (one Contract). The current solution is very lame - I have console application that generates *.svc file for each Application Service (ServiceDescription). There is one method (Operation) generated for one ServiceMethod.
This works well but I would like to make it better. It could be improved using T4 template but I'm sure that there is still better way in WCF.
I would still like to have one *.svc file per one Application Service but I don't want to generate methods (for corresponding Application Service methods).
I'm sure that there must be some interfaces that allow discovering operations dynamically, at runtime. Maybe IContractBehavior...
Thanks.
EDIT1:
I don't want to use generic operation contract because I would like to have the ability to generate service proxy with all operations.
I'm sure that if I write WCF service and operations by hand then WCF uses reflection to discover the operations in the service.
Now, I would like to customize this point in order not to use reflection, just use my "operations discovering code" instead.
I think there is nothing wrong with static code generation in that case. In my opinion, it is a better solution than dynamic generation of contracts. Keep in mind that your contract is the only evidence you have/provide that a service is hosting a particular set of operations.
The main issue I see about the dynamic approach is about versioning and compatibility. If everything is dynamically generated, you may end up transparently pushing breaking changes into the system and create some problems with existing clients.
If you have a code-generation step, then when you plan changes to the application services it will be easier to remember that the changes you make to the services may have a huge impact.
But if you really want to dynamically handle messages, you could use a generic operation contract (with the Action property set to *), and manually route the messages to the application services.
Keep in mind that you would lose the ability to generate from the service a proxy containing a list of operations available.
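For reference, a catch-all contract of the kind described above is typically declared like this: untyped Message in, Message out, with the wildcard action so that every incoming message is dispatched to the single operation. The interface name is an assumption; the attribute usage is standard WCF.

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;

[ServiceContract]
public interface IUniversalContract
{
    // Action = "*" means this one operation receives every message
    // sent to the endpoint, regardless of its SOAP action; the
    // implementation then inspects the message and routes it to the
    // appropriate application service. ReplyAction = "*" lets the
    // implementation set the reply action itself.
    [OperationContract(Action = "*", ReplyAction = "*")]
    Message ProcessMessage(Message message);
}
```

This is exactly the trade-off the answer warns about: clients of this endpoint get no operation list from metadata, so no meaningful proxy can be generated.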

Seeking One-Size-Fits-All Context Based Storage

First off, I wish context based storage was consistent across the framework!
With that said, I'm looking for an elegant solution to make these properties safe across ASP.NET, WCF and any other multithreaded .NET code. The properties are located in some low-level tracing helpers (these are exposed via methods if you're wondering why they're internal).
I'd rather not have a dependency on unneeded assemblies (like System.Web, etc). I don't want to require anyone using this code to configure anything. I just want it to work ;) That may be too tall of an order though...
Anyone have any tricks up their sleeves? (I've seen Spring's implementation)
internal static string CurrentInstance
{
    get
    {
        return CallContext.LogicalGetData(currentInstanceSlotName) as string;
    }
    set
    {
        CallContext.LogicalSetData(currentInstanceSlotName, value);
    }
}

internal static Stack<ActivityState> AmbientActivityId
{
    get
    {
        Stack<ActivityState> stack = CallContext.LogicalGetData(ambientActivityStateSlotName) as Stack<ActivityState>;
        if (stack == null)
        {
            stack = new Stack<ActivityState>();
            CallContext.LogicalSetData(ambientActivityStateSlotName, stack);
        }
        return stack;
    }
}
Update
By safe I do not mean synchronized. Background on the issue here
Here is a link to (at least part of) NHibernate's "context" implementation:
https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk/nhibernate/src/NHibernate/Context/
It is not clear to me exactly where or how this comes into play in the context of NHibernate. That is, if I wanted to store some values in "the context" would I get "the context" from NHibernate and add my values? I don't use NHibernate, so I don't really know.
I suppose that you could look and determine for yourself if this kind of implementation would be useful to you. Apparently the idea would be to inject the desired implementation, depending on the type of application (ASP.NET, WCF, etc). That probably implies some configuration (maybe minimal if one were to use MEF to load "the" ICurrentSessionContext interface).
At any rate, I found this idea interesting when I found it some time ago while searching for information on CallContext.SetData/GetData/LogicalSetData/LogicalGetData, Thread.SetData/GetData, [ThreadStatic], etc.
Also, based on your use of CallContext.LogicalSetData rather than CallContext.SetData, I assume that you want to take advantage of the fact that information associated with the logical thread is propagated to child threads, as opposed to just wanting a "thread safe" place to store info. So, if you were to set (or push) the AmbientActivity in your app's startup and then not push any more activities, any subsequent threads would also be part of that same activity, since data stored via LogicalSetData is inherited by child threads.
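That flow-to-child-threads behavior is easy to demonstrate with AsyncLocal<T>, the modern equivalent of CallContext.LogicalSetData (available from .NET 4.6): a value set in the parent flow is visible inside a child task, while a child's own writes do not leak back to the parent. A minimal sketch:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class AmbientDemo
{
    // Per-logical-flow slot, analogous to a LogicalSetData slot.
    private static readonly AsyncLocal<string> _activity =
        new AsyncLocal<string>();

    public static string ReadFromChildTask()
    {
        _activity.Value = "activity-42";

        string seenInChild = null;
        // The ExecutionContext (including AsyncLocal values) flows
        // into the child task automatically.
        Task.Run(() => { seenInChild = _activity.Value; }).Wait();

        return seenInChild;
    }
}
```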
If you have learned anything in the meantime since you first asked this question I would be very interested in hearing about it. Even if you haven't, I would be interested in learning about what you are doing with the context.
At the moment, I am working on maintaining some context information for logging/tracing (similar to Trace.CorrelationManager.ActivityId and Trace.CorrelationManager.LogicalOperationStack and log4net/NLog context support). I would like to save some context (current app, current app instance, current activity (maybe nested)) for use in an app or WCF service AND I want to propagate it "automatically" across WCF service boundaries. This is to allow logging statements logged in a central repository to be correlated by client/activity/etc. We would be able to query and correlate all logging statements for a specific instance of a specific application. The logging statements could have been generated on the client or in one or more WCF services.
The WCF propagation of ActivityId is not necessarily sufficient for us because we want to propagate (or we think we do) more than just the ActivityId. Also, we want to propagate this information from Silverlight clients to WCF services and Trace.CorrelationManager is not available in Silverlight (at least not in 4.0, maybe something like it will be available in the future).
Currently I am prototyping the propagation of our "context" information using IClientMessageInspector and IDispatchMessageInspector. It looks like it will probably work ok for us.
Regarding a dependency on System.Web, the NHibernate implementation does have a "ReflectiveHttpContext" that uses reflection to access the HttpContext so there would not be a project dependency on System.Web. Obviously, System.Web would have to be available where the app is deployed if HttpContext is configured to be used.
I don't know that using CallContext is the right move here if the desire is simply to provide thread-safe access to your properties. If that is the case, the lock statement is all you need.
However, you have to make sure you are applying it correctly.
With CallContext, you are going to get thread-safe access because you will have separate stores when calls come in on different threads. However, that's very different from making access to a shared resource thread-safe.
If you want to share the same value across multiple threads, then the lock statement is the way to go. Otherwise, if you want specific values on a per-thread/call basis, use the CallContext, or use the static GetData/SetData methods on the Thread class, or the ThreadStatic attribute (or any number of thread-based storage mechanisms).
