Propagating W3C trace context over MassTransit publisher/consumer - C#

I'm trying to support propagating the W3C trace context traceId and spanId properties from HTTP calls -> publisher -> consumer -> HTTP call within MassTransit (just so they show up in the logs/Seq for now, though we're using Dynatrace), but I couldn't find anything out-of-the-box here:
https://masstransit-project.com/advanced/monitoring/diagnostic-source.html
If there is nothing available, I'll probably try to create something myself based on the following articles.
I found this one as an example for OpenTracing:
https://github.com/yesmarket/MassTransit.OpenTracing
And this as a reference for NServiceBus:
https://jimmybogard.com/building-end-to-end-diagnostics-and-tracing-a-primer-trace-context/
Unless anyone can suggest something that already exists?

EDIT 2:
The latest versions of MassTransit propagate the trace context by default.
Enable W3C tracing in your startup:
Activity.DefaultIdFormat = ActivityIdFormat.W3C;
and when configuring your bus, call:
bus.UseInstrumentation();
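Putting the two together, a minimal startup sketch might look like this (assuming a RabbitMQ transport; the host address is illustrative, and the exact placement of UseInstrumentation may vary by MassTransit version):

using System.Diagnostics;
using MassTransit;

// Switch .NET to the W3C trace context ID format before the bus is created:
Activity.DefaultIdFormat = ActivityIdFormat.W3C;

var busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.Host("localhost"); // illustrative host
    cfg.UseInstrumentation(); // DiagnosticSource-based trace propagation, as above
});

await busControl.StartAsync();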
EDIT:
Based on the library by Ryan Bartsch and the article by Jimmy Bogard, I created this package, which does what I need it to do:
https://github.com/zidad/MassTransit.ActivityTracing

I'm the author of the MassTransit.OpenTracing library you referenced, but I wrote it prior to the W3C recommendation, which looks to be quite recent (Feb 6, 2020).
My goal was distributed tracing through a microservice architecture that had both synchronous/HTTP and asynchronous message broker communications. For the HTTP stuff I was using OpenTracing.Contrib.NetCore, which 'uses .NET's DiagnosticSource to instrument its code'. For the asynchronous message broker communications, I was using RabbitMQ with MassTransit, but I didn't really understand the MassTransit DiagnosticSource approach suggested on the website (nor could I find any examples), so I decided to get into the nuts and bolts a bit and roll my own.
Long story short, it all worked as expected using Jaeger as the tracer. Interestingly, we (as in the company I work for) decided to also use Dynatrace, which operates at a much lower level and largely removes the need for handling this stuff in code. That said, the approach is not invalid (IMO), as not everyone can afford Dynatrace (or similar APM tools).
I'll try to upgrade this library to the W3C recommendation in the coming week or two. Let me know if you want to help with contribution/review (or if you want to go off in a different direction and roll your own, that's fine too)...

Dynatrace claims to integrate seamlessly with OpenTracing https://www.dynatrace.com/integrations/opentracing/ so if you use the library you mentioned and have your HTTP part instrumented with OpenTracing, it will work out of the box.
The only potential drawback is that in a service which receives an HTTP call and, within the context of handling it, sends or publishes a message via MassTransit, everything must be instrumented with OpenTracing, because it will start the child span using the OpenTracing API.
We do this with Datadog: we use the Datadog OpenTracing integration library and trace WebApi and HttpClient using the OpenTracing Contrib libraries. The automatic instrumentation didn't work for us, but it's not hard to use those libraries to instrument your app instead.
The usual flow is like:
Outside -> WebApi: start a span
WebApi -> MassTransit: start a child span, inject the context to headers
MassTransit -> consumer: extract the context, start a child span
and so on
Both bits, injection and extraction, are handled in the MassTransit.OpenTracing library, so there's nothing extra to do.
With the integration library your provider supplies for OpenTracing, it usually works like this:
Configure the tracer of the provider
Set the OpenTracing global tracer to the wrapper, using the integration library from your provider
When you create a span with OpenTracing, it will create a vendor-specific span and wrap it in OpenTracing span
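For illustration, the wiring might look like this (a sketch using the OpenTracing C# API; the vendor tracer itself comes from your provider's integration library and is simply passed in here):

using OpenTracing;
using OpenTracing.Util;

public static class TracingBootstrap
{
    // vendorTracer is built by your provider's integration library (Datadog, Jaeger, ...):
    public static void ConfigureTracing(ITracer vendorTracer)
    {
        // Register it as the OpenTracing global tracer (allowed only once per process):
        GlobalTracer.Register(vendorTracer);
    }

    public static void HandleMessage()
    {
        // Spans created through OpenTracing now wrap vendor-specific spans:
        using (IScope scope = GlobalTracer.Instance.BuildSpan("handle-message").StartActive(finishSpanOnDispose: true))
        {
            scope.Span.SetTag("messaging.system", "rabbitmq"); // illustrative tag
            // ... handle the message inside the span
        }
    }
}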

Related

Elsa workflows from client apps

I am considering using Elsa workflows for a project, but I couldn't find any examples or documentation on how to use it in client applications (Xamarin.Forms/Blazor WASM). My idea is basically to define workflows that also include screen transitions in the client apps. Is this a relevant scenario for Elsa, or am I not getting it? I understand that there is some REST API available, but I have no idea how to use it.
This great article explains how to use it in ASP.NET/backend scenarios https://sipkeschoorstra.medium.com/building-workflow-driven-net-core-applications-with-elsa-139523aa4c50
That's a great use case for Elsa and something I am planning to create a sample application and guide for. So far there are guides and samples about executing long-running "back-end" processes using Elsa, but there is no reason one couldn't also use it to implement application navigation logic, such as wizards consisting of steps implemented as individual screens.
So that's your answer: yes, it is a relevant scenario. But it is unfortunate that there are no concrete samples to point you to at the moment.
Barring any samples, here's how it might work in a client application:
The client application has Elsa services configured.
Whether you decide to store workflows within the app (as code or JSON) or on a remote Elsa Server instance doesn't matter - once you have a workflow in memory, you can execute it.
Since your workflows will be driving UI, you have to think about how tightly coupled the workflow will be with that UI. For example, a tightly coupled workflow might include activities that represent the views (names) to present, including transition configuration if that is something to be configured, and outcomes based on what buttons were clicked.
A loosely coupled workflow, on the other hand, might act more as a "conductor" or orchestrator of actions and events, where the workflow consists of nothing more than a bunch of primitives such as "SendCommand" and "Event Received". A "SendCommand" simply raises some application event with a task name that your application then handles; the "Event Received" activity works the other way around: your application fires instructions to Elsa, and Elsa drives the workflow. A task might be a "Navigate" instruction with the next view name provided as a parameter.
The "SendCommand" and "EventReceived" activities are very new and part of Elsa 2.1 preview packages. Right now they are directly coupled to webhook scenarios (where the commands are sent in the form of HTTP requests to an external application), but the goal is to have various strategies in place (HTTP out requests would just be one of them, another one might be a simple mediator pattern for in-process scenarios such as your client application one).
UPDATE
To retrieve workflows designed in the designer into your client app, you need to get the workflow definition via the following API endpoint:
http(s)://your-elsa-server/v1/workflow-definitions/{workflow-definition-id}/Published
What you'll get back is JSON representing the workflow definition, which you can deserialize using IContentSerializer.Deserialize<WorkflowDefinition>, giving you a WorkflowDefinition. But to actually run a workflow, you need a workflow blueprint. To turn the workflow definition into a blueprint, use IWorkflowBlueprintMaterializer.CreateWorkflowBlueprintAsync(WorkflowDefinition), which will give you a blueprint that can then be executed using e.g. IStartsWorkflow.StartWorkflowAsync(IWorkflowBlueprint).
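Put together, the steps above might look roughly like this (a sketch only: it assumes an HttpClient and an IServiceProvider with Elsa services registered are at hand, and the definition ID is illustrative):

// Fetch the published workflow definition (definition ID is illustrative):
var json = await httpClient.GetStringAsync(
    "https://your-elsa-server/v1/workflow-definitions/my-definition-id/Published");

// Deserialize the JSON into a WorkflowDefinition:
var serializer = serviceProvider.GetRequiredService<IContentSerializer>();
var definition = serializer.Deserialize<WorkflowDefinition>(json);

// Turn the definition into an executable blueprint:
var materializer = serviceProvider.GetRequiredService<IWorkflowBlueprintMaterializer>();
var blueprint = await materializer.CreateWorkflowBlueprintAsync(definition);

// Run it:
var starter = serviceProvider.GetRequiredService<IStartsWorkflow>();
var result = await starter.StartWorkflowAsync(blueprint);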
There are various other services that make it more convenient to construct and run workflows.
To make this as frictionless as possible for your client app, you could consider simply implementing IWorkflowProvider, of which we currently have 3 out of the box:
ProgrammaticWorkflowProvider: provides workflow blueprints based on the workflows coded with the fluent Workflow Builder API.
DatabaseWorkflowProvider: provides blueprints based on those stored in the database (JSON models stored by the designer).
StorageWorkflowProvider: provides blueprints based on JSON files stored on some hard drive or blob storage such as Azure Blob Storage.
What you might do, and in fact what I think we should provide out of the box now that you made me think of it, is create a fourth provider that uses the API endpoints to get workflows from.
Then your client app should not have to be bothered with invoking the Elsa API - the provider does it for you.
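Such a provider might look something like this (purely a sketch: the exact IWorkflowProvider contract shown here is an assumption, so verify it against the Elsa 2.x sources before building on it):

using System.Collections.Generic;
using System.Net.Http;
using System.Runtime.CompilerServices;
using System.Threading;

// NOTE: the provider contract (an IAsyncEnumerable of blueprints) is assumed here.
public class ApiWorkflowProvider : IWorkflowProvider
{
    private readonly HttpClient httpClient;
    private readonly IContentSerializer serializer;
    private readonly IWorkflowBlueprintMaterializer materializer;

    public ApiWorkflowProvider(HttpClient httpClient, IContentSerializer serializer, IWorkflowBlueprintMaterializer materializer)
    {
        this.httpClient = httpClient;
        this.serializer = serializer;
        this.materializer = materializer;
    }

    public async IAsyncEnumerable<IWorkflowBlueprint> GetWorkflowsAsync([EnumeratorCancellation] CancellationToken cancellationToken = default)
    {
        // Endpoint is illustrative; fetch the published definitions from the server:
        var json = await httpClient.GetStringAsync("v1/workflow-definitions");
        var definitions = serializer.Deserialize<WorkflowDefinition[]>(json);

        foreach (var definition in definitions)
            yield return await materializer.CreateWorkflowBlueprintAsync(definition, cancellationToken);
    }
}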

Capturing log output from library using Castle Core Logging

I'm currently depending on a library which utilizes the Castle.Core logging abstraction. I've dug through both that library's docs and Castle's, and can't seem to find a clear explanation of how to capture log output to pipe to our logging framework of choice (NLog, in this instance). I've also dug through a few posts which touch on the topic, but dismissed them as not applicable to this situation.
It should be noted that NLog works fine for the rest of the application. No errors seen in the internal logs. Just no output from this third party library.
I see the Castle.Core NLog integration, but that looks to be something to be utilized by the library depending on Castle, not one depending on the library.
So is it possible to capture log output from this library? Or do I need to reach out to the project for support?
If you own the process hosting the library, it is your responsibility to tell Castle.Core's logging abstraction which logging implementation to use.
Configure NLog in your application, then register NLog as the Castle logger factory, as explained in the documentation, by calling
container.AddFacility<LoggingFacility>(f => f.LogUsing(LoggerImplementation.NLog))
when creating your container.
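For example (a minimal sketch, assuming Castle Windsor plus the Castle.Core NLog integration package are installed and NLog is already configured):

using Castle.Facilities.Logging;
using Castle.Windsor;

var container = new WindsorContainer();

// Route everything logged through Castle's ILogger abstraction to NLog
// (newer Windsor versions use f.LogUsing<NLogFactory>() instead):
container.AddFacility<LoggingFacility>(f => f.LogUsing(LoggerImplementation.NLog));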
For your library (White) you should provide the logger factory by either setting it on the CoreAppXmlConfiguration instance, or supplying your own subclass instance when initializing the library's Application object.
See https://github.com/TestStack/White/blob/master/src/TestStack.White/Configuration/CoreAppXmlConfiguration.cs#L53

Persist a variable in WCF application per instance

I am creating a WCF RESTful service and there is a need to persist a variable per user. Is there a way I can achieve this without having to pass the variable to all my calls?
I am trying to log the user's activity throughout the process: whether their request failed or succeeded, the IP address, when they requested the action, failure time, etc.
Please note I am new to WCF, thanks in advance.
I recently worked on this (except it wasn't RESTful). You could transmit information through HTTP headers and extract that information on the service side. See http://trycatch.me/adding-custom-message-headers-to-a-wcf-service-using-inspectors-behaviors/
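The service-side extraction might look like this (a sketch of an IDispatchMessageInspector; the header name and namespace are illustrative):

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class ClientInfoInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        // Read a custom header from the incoming request (name/namespace illustrative):
        int index = request.Headers.FindHeader("ClientId", "http://schemas.example.com/headers");
        string clientId = index >= 0 ? request.Headers.GetHeader<string>(index) : null;

        // Stash it where the service (or a logging interceptor) can read it later:
        OperationContext.Current.IncomingMessageProperties["ClientId"] = clientId;
        return null; // correlationState, not needed here
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        // Nothing to do on the way out
    }
}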
For the client ID itself I can suggest two places to put it. One is OperationContext.Current.IncomingMessageProperties. Another is CorrelationManager.StartLogicalOperation which allows you to define a logical operation - that could be the service request, beginning to end - or multiple operations - and retrieve a unique ID for each operation.
I would lean toward the latter because it's part of System.Diagnostics and can prevent dependencies on System.ServiceModel. (The name CorrelationManager even describes what you're trying to do.)
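With CorrelationManager, the pattern is roughly this (a sketch; the operation name is illustrative):

using System;
using System.Diagnostics;

// Give this request a unique ID and push a logical operation onto the stack:
Trace.CorrelationManager.ActivityId = Guid.NewGuid();
Trace.CorrelationManager.StartLogicalOperation("HandleServiceRequest");
try
{
    // ... do the service work; trace listeners can emit the ActivityId and
    // the logical operation stack with every log entry
}
finally
{
    Trace.CorrelationManager.StopLogicalOperation();
}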
In either case I would look at interception. That's the ideal way to read the value (wherever you store it) without having to pollute the individual methods with knowledge of logging and client IDs. (I saw from your message that you're trying to avoid that direct dependency on client IDs.)
Here's some documentation on adding Windsor to your WCF service. (At some point I'll add some end-to-end documentation on my blog.) Then, when you're using Windsor to instantiate your services, you can also use it to instantiate the dependencies and put interceptors around them that will perform your logging before or after those dependencies do their work. Within those interceptors you can access or modify that stack of logical operations.
I'm not doing Windsor justice by throwing out a few links. I'd like to flesh it out with some blog posts. But I recommend looking into it. It's beneficial for lots of reasons - interception just one. It helps with the way we compose services and dependencies.
Update - I added a blog post - how to add Windsor to a WCF service in five minutes.

Abstracting out existence of service bus/distributed messaging?

I'm working on a system right now that is in a single process space; we are breaking this up into several processes, initially to run on the same box but ultimately to distribute across several separate machines. I'm leaning towards using an ESB (NServiceBus, Rhino ESB) or possibly rolling my own with WCF + queues to handle the pub/sub and request/response scenarios our app has.
However, I'm struggling with the abstraction: I don't want the various components to know they are talking over the bus. The current APIs connecting the various services translate pretty well to this kind of model, but I want to hide that from the client and server sides. Short of writing a lot of custom proxy code for the client and server, is there a better way to approach this? I realize WCF can auto-generate proxies based on the service definition, but I really like some of the other stuff I get with (say) rhino servicebus.
Ideally, I'd like to be able to swap out different implementations (with and without an ESB/messaging layer) just using IoC (knowing there would have to be limits enforced by convention on what can be passed across the interfaces), but I'm not sure where to go with that. I'd really prefer to not have to change every method call on the current interfaces into its own discrete message class, either.
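To make the idea concrete, here's the kind of thing I'm picturing (all names made up; IServiceBus stands in for e.g. Rhino ServiceBus): components depend only on an interface, and the container decides whether a call stays in-process or goes over the bus.

// The components only ever see this:
public interface IOrderService
{
    void PlaceOrder(string orderId);
}

// In-process implementation - a plain method call:
public class LocalOrderService : IOrderService
{
    public void PlaceOrder(string orderId) { /* do the work directly */ }
}

// Bus-backed implementation - hides the messaging layer behind the same interface:
public class PlaceOrderMessage
{
    public string OrderId { get; set; }
}

public class BusOrderService : IOrderService
{
    private readonly IServiceBus bus; // e.g. Rhino ServiceBus

    public BusOrderService(IServiceBus bus)
    {
        this.bus = bus;
    }

    public void PlaceOrder(string orderId)
    {
        bus.Send(new PlaceOrderMessage { OrderId = orderId });
    }
}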
Any resources/patterns/tools to help me do this? Please ask questions if I'm not clear. Thanks.
There may not be a single off-the-shelf solution or component that will help you here.
Problem 1:
The basic problem can be solved via an ESB, as it provides location transparency and service aggregation. A regular ESB mediates/brokers requests between service consumer and service provider.
Take a simple example:
Service_A depends on Service_B
Service_C depends on Service_B
Service_B depends on Service_D
In this scenario, the best way to progress is this:
Define contracts exposed by Service_B and Service_D as external dependencies (possibly as a web service, though an ESB supports multiple protocols) in services Service_A, Service_C and Service_B, and consume via an ESB.
In the ESB, to start with, route the services Service_B and Service_D to the same instance.
If you later migrate Service_D and Service_B to a different location as Service_Dx and Service_Bx, the ESB can be reconfigured to route to the new location. An ESB can also be configured to route to Service_B or Service_Bx based on some set of parameters (e.g., test data to Service_B and production data to Service_Bx).
Problem 2:
Solving this via IoC could be hard to achieve; there may also not be a need for it.
I presume the clients, instead of consuming from a known location, are injected with the whereabouts of the service location. In reality this transfers the configuration to the client side, so for every new client added to the system there needs to be separate configuration control. This might lead to logistical issues.
Please post your final solution; I'm very interested in your approach.

Easily mockable HTTP client framework for C#

In an upcoming project I'm going to write an application in C# which partly has to communicate with an HTTP server. I'm very fond of writing my code TDD-style, and I would just love it if I could mock all of the HTTP requests in my tests.
Does anyone here know about an easily mockable HTTP client framework?
P.S. I usually use Moq for mocks. If you know of some free mocking framework that would be better for mocking HTTP requests, that would be nice too.
DotNetOpenId, an open source project from which you may reuse code, uses HTTP wrapper classes through which all calls are made. During testing, a mock HTTP handler is injected so that the responses can be programmatically set before the call is made. It has another mode where it hosts its own ASP.NET site so that the full actual stack can be exercised.
This works well, although it hasn't been pulled out as a standalone solution. If you're interested in reusing it, here are some relevant links to the source. And you can ask for help integrating it at dotnetopenid@googlegroups.com.
Live one:
StandardWebRequestHandler.cs
Mocks: MockHttpRequest.cs, TestWebRequestHandler.cs
I suggest you use the framework's support for this, i.e. System.Net.WebRequest.
Define a really simple interface and a simple wrapper for the WebRequest. This way you will get what you want, and won't add an external dependency for something the framework already does well.
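Something along these lines (a minimal sketch; the interface and class names are made up):

using System.IO;
using System.Net;

public interface IHttpGateway
{
    string Get(string url);
}

// Thin wrapper over the framework's WebRequest:
public class WebRequestGateway : IHttpGateway
{
    public string Get(string url)
    {
        var request = WebRequest.Create(url);
        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();
        }
    }
}

In your tests you can then mock IHttpGateway with Moq and return canned responses.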
You could use WireMock.Net, which is a flexible library for stubbing and mocking HTTP responses using request-matching criteria.
It can also be used very easily in unit-test projects. Check the wiki for details.
The NuGet package is found here.
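A typical test setup looks like this (a short sketch based on WireMock.Net's documented API; the path and body are illustrative):

using WireMock.RequestBuilders;
using WireMock.ResponseBuilders;
using WireMock.Server;

var server = WireMockServer.Start();

server
    .Given(Request.Create().WithPath("/api/values").UsingGet())
    .RespondWith(Response.Create()
        .WithStatusCode(200)
        .WithBody("[1, 2, 3]"));

// Point the code under test at server.Urls[0], run the test, then:
server.Stop();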
I don't think there is actually any framework which handles the things you want to achieve. After all, only you know what each HTTP request should do. So basically you have two options:
Make the calls and use a dummy implementation on the other side. This could be a simple console application which returns dummy data. If you need more logic, I would consider using an object database; in my opinion they fit perfectly for these applications.
Use a mock implementation on the application side. If this implementation has much logic, don't use any mocking framework; create a custom mock class which has all the logic.
Hope this helps
