How do I avoid creating a loop with a Serilog sink that itself uses logging?
The problem is that the base classes MyTcpServer and MyTcpClient use Serilog.
But since TcpSink also uses those same classes, sending a log entry will loop indefinitely.
How do I prevent this?
static void Main()
{
    Serilog.Log.Logger = new LoggerConfiguration()
        .WriteTo.TcpSink()    // this is a TcpListener/server listening on port 1234
        .WriteTo.Console()
        .CreateLogger();

    MyTcpServer anotherServer = new MyTcpServer(4321);
}
public class MyTcpServer
{
    // this class contains Log.Verbose|Debug|Error calls
    private List<MyTcpClient> clients;
}

public class MyTcpClient
{
    // this class contains Log.Verbose|Debug|Error calls
}
public class TcpServerSink : ILogEventSink
{
    private readonly MyTcpServer server;

    public TcpServerSink(int port = 1234)
    {
        server = new MyTcpServer(port);
    }

    public void Emit(LogEvent logevent)
    {
        string str = Newtonsoft.Json.JsonConvert.SerializeObject(logevent);
        server.Send(str);
    }
}
There are only two options here:
1. Use MyTcpServer in TcpServerSink, but don't log to TcpServerSink.
2. Don't use MyTcpServer in TcpServerSink.
For the first solution, make MyTcpServer depend on an ILogger instance rather than on the static Log class. This way you can pass in whatever logger you want, or just disable logging inside your sink:
server = new MyTcpServer(SilentLogger.Instance, 1234);
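A minimal sketch of that dependency, assuming the rest of MyTcpServer stays as in the question (SilentLogger stands in for any no-op ILogger implementation):

using Serilog;

public class MyTcpServer
{
    private readonly ILogger log;

    public MyTcpServer(ILogger log, int port)
    {
        // All logging goes through the injected ILogger rather than the
        // static Serilog.Log, so the sink can hand in a silent logger.
        this.log = log;
        this.log.Debug("TCP server starting on port {Port}", port);
    }
}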
I personally prefer the second solution, because you should only send events related to your application logic to Serilog sinks, and TcpServerSink is not part of that logic. A common approach used in other Serilog sinks is the static SelfLog, which writes to some TextWriter, e.g.:
SelfLog.Enable(Console.Error);
You can then use this self-log to write diagnostic information about your sink. Also, instead of MyTcpServer, your sink should use something like a plain TcpClient. You can check a Splunk TcpSink for an example, or the sketch below.
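For illustration, a deliberately simplified sink along those lines (a real sink would keep the connection open and batch writes; the host/port defaults and the rendered-message format are assumptions):

using System;
using System.Net.Sockets;
using System.Text;
using Serilog.Core;
using Serilog.Debugging;
using Serilog.Events;

public class TcpSink : ILogEventSink
{
    private readonly string host;
    private readonly int port;

    public TcpSink(string host = "localhost", int port = 1234)
    {
        this.host = host;
        this.port = port;
    }

    public void Emit(LogEvent logEvent)
    {
        try
        {
            using (var client = new TcpClient())
            {
                client.Connect(host, port);
                byte[] bytes = Encoding.UTF8.GetBytes(logEvent.RenderMessage() + Environment.NewLine);
                client.GetStream().Write(bytes, 0, bytes.Length);
            }
        }
        catch (Exception ex)
        {
            // Failures are reported through SelfLog rather than Serilog itself,
            // so the sink can never feed events back into the logging pipeline.
            SelfLog.WriteLine("TcpSink failed to send a log event: {0}", ex);
        }
    }
}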
One option worth considering is to use Log.ForContext<MyTcpServer>() when logging within the TCP server:
Log.ForContext<MyTcpServer>().Information("Hello!");
and filter these messages out for the TCP sink:
// dotnet add package Serilog.Expressions
.WriteTo.Conditional(
    "SourceContext not like 'MyNamespace.MyTcpServer%'",
    wt => wt.TcpSink())
.WriteTo.Console()
This has the advantage of getting errors from the TCP sink through to the console, but the drawback that if you forget to use a contextual logger inside the TCP server you'll still get a stack overflow.
I have a simple scenario: I publish a message using IPublishEndpoint, and I want every microservice that registers a consumer for it to consume it independently of the other microservices, but JUST ONCE, not 10x. When I configure things as the documentation says, it does not behave as described: the message appears to be multiplied roughly consumer-count times, and each consumer fires not once but 3x in my case. Why?
Exact scenario: I have 3 independent microservices running in Docker as MVC projects held in one solution, interconnected by a core library where the contracts reside. Each project has its own implementation of IConsumer of the SAME contract class from the core library, and every project registers that consumer at startup against the same RabbitMQ instance and virtual host. For demonstration I have simplified the code to a minimum.
From the vague and confusing MassTransit documentation (https://masstransit-project.com/) I could not work out why it behaves like this, what I am doing wrong, or how to configure it properly. The documentation is very fragmented and does not explain what its main configuration methods actually do in RabbitMQ.
public interface ISystemVariableChanged
{
    /// <summary>Variable key that was modified.</summary>
    public string Key { get; set; }

    /// <summary>Full reload requested.</summary>
    public bool FullReload { get; set; }
}
3 consumers like this:
public class SystemVariableChangedConsumer : IConsumer<ISystemVariableChanged>
{
    private readonly ILogger<SystemVariableChangedConsumer> logger;

    public SystemVariableChangedConsumer(ILogger<SystemVariableChangedConsumer> logger)
    {
        this.logger = logger;
    }

    public async Task Consume(ConsumeContext<ISystemVariableChanged> context)
    {
        logger.LogInformation("Variable changed in /*ProjectName*/"); // differs per project
        await Task.CompletedTask;
    }
}
3x Startup like this:

services.AddMassTransit(bus =>
{
    bus.AddConsumer<SystemVariableChangedConsumer>();
    // bus.AddConsumer<SystemVariableChangedConsumer>().Endpoint(p => p.InstanceId = "/*3 different values*/"); // not working either

    bus.SetKebabCaseEndpointNameFormatter();

    bus.UsingRabbitMq((context, rabbit) =>
    {
        rabbit.Host(options.HostName, options.VirtualHost, h =>
        {
            h.Username(options.UserName);
            h.Password(options.Password);
        });

        rabbit.UseInMemoryOutbox();
        rabbit.UseJsonSerializer();
        rabbit.UseRetry(cfg => cfg.Incremental(options.RetryLimit, TimeSpan.FromSeconds(options.RetryTimeout), TimeSpan.FromSeconds(options.RetryTimeout)));

        // rabbit.ConfigureEndpoints(bus); // not working either

        // not working either
        rabbit.ReceiveEndpoint("system-variable-changed", endpoint =>
        {
            endpoint.ConfigureConsumer<SystemVariableChangedConsumer>(context);
        });
    });
});
I tried many setups and they all tend to behave the same wrong way (e.g. setting the endpoint instance ID, etc.).
Whether I use the ReceiveEndpoint method to configure each endpoint manually or ConfigureEndpoints to configure them all makes little difference.
I read various materials about this but they did not help with the MassTransit setup. This should be an absolutely basic use case that is easily achievable.
In the RabbitMQ console it created 1 interface exchange routing to 3 sub-exchanges (one created per consumer), each of those bound to a final queue.
I am looking for a clean solution, not hardcoded queue names.
Can anyone help me with the correct startup setup?
Thank you
This is all that is required:
services.AddMassTransit(bus =>
{
    // Assuming the same consumer class is used, in the same namespace.
    // If the consumers have different names/namespaces, InstanceId is not required.
    bus.AddConsumer<SystemVariableChangedConsumer>()
        .Endpoint(p => p.InstanceId = "/*3 different values*/");

    bus.SetKebabCaseEndpointNameFormatter();

    bus.UsingRabbitMq((context, rabbit) =>
    {
        rabbit.Host(options.HostName, options.VirtualHost, h =>
        {
            h.Username(options.UserName);
            h.Password(options.Password);
        });

        rabbit.UseMessageRetry(cfg => cfg.Incremental(options.RetryLimit, TimeSpan.FromSeconds(options.RetryTimeout), TimeSpan.FromSeconds(options.RetryTimeout)));
        rabbit.UseInMemoryOutbox();

        rabbit.ConfigureEndpoints(context);
    });
});
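With the kebab-case formatter, the InstanceId is appended to the endpoint name, so each service gets its own queue and the published message is consumed once per service. A sketch of what that might look like in one of the services (the service names here are made up):

// In the "billing" service; the other two services would use e.g. "reporting" and "audit".
// This should yield per-service queues along the lines of
// system-variable-changed-billing, system-variable-changed-reporting, ...
bus.AddConsumer<SystemVariableChangedConsumer>()
    .Endpoint(p => p.InstanceId = "billing");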
I'd suggest clearing your broker's entire exchange/queue binding history before running it, since previous bindings might be causing the redelivery issues. But RabbitMQ is usually good about preventing duplicate deliveries of the same message to the same exchange.
I am using .NET Core 3.1.
I have a simple publisher class whose purpose is to publish messages to a message broker. All the logic for the message broker is in a separate class library, and the publisher class itself needs to obtain a connection in order to publish a message, so it looks like this:
public class Publisher : IPublisher
{
    public void Publish(string subject, PublishMessage message)
    {
        var options = ConnectionFactory.GetDefaultOptions();

        using (var connection = new ConnectionFactory().CreateEncodedConnection(options))
        {
            connection.OnSerialize = jsonSerializer; // serializer delegate defined elsewhere
            connection.Publish(subject, message);
            connection.Flush();
        }
    }
}
By the way, new ConnectionFactory().CreateEncodedConnection(options) is native to the message broker's client library, so this is not a wrapper written by me.
However, in my web project I register this in the DI container like this:
services.AddSingleton<IPublisher, Publisher>();
My final goal is to share the same connection. I know that when the time comes, the DI container will dispose all disposable resources, but since I wrap the connection in a using block, does it always dispose the connection and create a new one for each message, or does the DI container manage to handle this somehow? And if not, how can I make it so that a connection is not created for each message?
services.AddSingleton<IPublisher, Publisher>(); creates one instance of the Publisher class when the application starts, but a connection will still be created and disposed every time you call the Publish method, because the connection is opened inside Publish and wrapped in a using block.
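A sketch of how Publisher could instead hold one connection for its singleton lifetime. This assumes CreateEncodedConnection returns a thread-safe, disposable connection type (called IEncodedConnection in the NATS C# client, which the question's code resembles), and that jsonSerializer is the delegate from the question:

public class Publisher : IPublisher, IDisposable
{
    private readonly IEncodedConnection connection;

    public Publisher()
    {
        // Open the connection once; the singleton lives for the app's lifetime.
        var options = ConnectionFactory.GetDefaultOptions();
        connection = new ConnectionFactory().CreateEncodedConnection(options);
        connection.OnSerialize = jsonSerializer; // as in the question
    }

    public void Publish(string subject, PublishMessage message)
    {
        // Reuse the shared connection instead of opening one per message.
        connection.Publish(subject, message);
        connection.Flush();
    }

    // The DI container disposes singletons on shutdown, closing the connection.
    public void Dispose() => connection.Dispose();
}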
Using .NET 4.6, I have a static Serilog helper class. I've stripped it down to the essentials as follows:
public static class SerilogHelper
{
    private static ILogger log;

    private static ILogger CreateLogger()
    {
        if (log == null)
        {
            string levelString = SSOSettingsFileManager.SSOSettingsFileReader.ReadString(
                "BizTalk.Common", "serilog.minimum-level");
            SerilogLevel level = (SerilogLevel)Enum.Parse(typeof(SerilogLevel), levelString);
            string conString = SSOSettingsFileManager.SSOSettingsFileReader.ReadString(
                "BizTalk.Common", "serilog.connection-string");

            var levelSwitch = new LoggingLevelSwitch();
            levelSwitch.MinimumLevel = (Serilog.Events.LogEventLevel)level;

            log = new LoggerConfiguration()
                .MinimumLevel.ControlledBy(levelSwitch)
                .WriteTo.MSSqlServer(connectionString: conString, tableName: "Serilog", autoCreateSqlTable: true)
                .WriteTo.RollingFile("log-{Date}.txt")
                .CreateLogger();
        }
        return log;
    }

    public static void WriteString(string content)
    {
        var logger = CreateLogger();
        logger.Information(content);
    }
}
I have the following unit test:
[TestMethod]
public void UN_TestSerilog1()
{
    Common.Components.Helpers.SerilogHelper.WriteString("Simple logging");
}
I've stepped through the debugger to be sure that the "level" variable is being set correctly - it's the enum value named "Debug", with a value of 1.
Although the SQL Server table is created OK, I don't see any rows inserted, nor any log .txt file.
I've also tried calling logger.Error(content), but there is still no output.
I've used the same helper code previously on a different site/project and it worked OK.
Where did I go wrong this time?
Serilog.Sinks.MSSqlServer is a "periodic batching" sink and, by default, it waits 5 seconds before sending the logs to the database. If your test ends before the sink has had a chance to write the messages to the database, they are simply lost...
You need to make sure you dispose the logger before your test runner ends, to force the sink to flush the logs to the database at that point. See Lifecycle of Loggers.
((IDisposable) logger).Dispose();
Of course, if you are sharing a static log instance across multiple tests, you can't just dispose the logger inside a single test, as that would mean the next test that runs won't have a logger to write to... In that case, you should look at your testing framework's support for executing code once before the test suite run starts, and once again when the test suite run ends.
I'm guessing you are using MSTest (because of the TestMethod attribute), so you probably want to look into AssemblyInitialize and AssemblyCleanup, which give you the opportunity to initialize the logger before all tests, and to clean up after all tests have finished running...
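A sketch of how that could look with MSTest; the CloseAndFlush method here is a hypothetical addition to SerilogHelper, not an existing member:

// Hypothetical addition to SerilogHelper:
public static void CloseAndFlush()
{
    (log as IDisposable)?.Dispose(); // forces the MSSqlServer sink to flush its batch
    log = null;
}

// In the test assembly:
[TestClass]
public class LoggingSetup
{
    [AssemblyCleanup]
    public static void Cleanup()
    {
        // Runs once, after every test in the assembly has finished.
        Common.Components.Helpers.SerilogHelper.CloseAndFlush();
    }
}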
You might be interested in other ideas for troubleshooting Serilog issues: Serilog MSSQL Sink doesn't write logs to database
I'm about to start using Hangfire in C# in an ASP.NET MVC web application, and I wonder how to create the right architecture.
We are going to use Hangfire as a message queue, so we can process (store in the database) the user data directly and then, in a separate process, notify other systems and send email later.
So our code now looks like this:
void Xy(Client newClient)
{
    _repository.save(newClient);
    _crmConnector.notify(newClient);
    mailer.Send(repository.GetMailInfo(), newClient);
}
And now we want to put the last two lines 'on the queue'.
So, following the example on the Hangfire site, we could do this:
var client = new BackgroundJobClient();
client.Enqueue(() => _crmConnector.notify(newClient));
client.Enqueue(() => mailer.Send(repository.GetMailInfo(), newClient));
But I was wondering whether that is the right solution.
I once read about putting items on a queue where they were called 'commands': classes created especially to wrap a task/command/thing-to-do so it can be put on a queue.
So for notifying the CRM connector this would then be:
client.Enqueue(() => new CrmNotifyCommand(newClient).Execute());
The CrmNotifyCommand would then receive the new client and have the knowledge to execute _crmConnector.notify(newClient).
In this case all items that are put on the queue (and executed by Hangfire) would be wrapped in a 'command'.
Such a command would then be a self-contained class which knows how to execute a piece of business functionality, for example the sketch below. When the command itself uses more than one other class it could also be considered a facade, I guess.
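For illustration, a minimal sketch of such a command (the names and the way dependencies are obtained here are made up):

public class CrmNotifyCommand
{
    private readonly Client client;

    public CrmNotifyCommand(Client client)
    {
        this.client = client;
    }

    public void Execute()
    {
        // The command encapsulates one piece of business functionality
        // and knows how to obtain whatever it needs to carry it out.
        var crmConnector = new CrmConnector();
        crmConnector.notify(client);
    }
}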
What do you think about such an architecture?
I once read about putting items on a queue and those were called 'commands', and they were classes especially created to wrap a task/command/thing-to-do and put it on a queue.
Yes, your intuition is correct.
You should encapsulate all dependencies and explicit functionality in a separate class, and tell Hangfire to simply execute a single method (or command).
Here is my example, which I derived from Blake Connally's Hangfire demo.
namespace HangfireDemo.Core.Demo
{
    public interface IDemoService
    {
        void RunDemoTask(PerformContext context);
    }

    public class DemoService : IDemoService
    {
        [DisplayName("Data Gathering Task Confluence Page")]
        public void RunDemoTask(PerformContext context)
        {
            Console.WriteLine("This is a task that ran from the demo service.");
            BackgroundJob.ContinueJobWith(context.BackgroundJob.Id, () => NextJob());
        }

        public void NextJob()
        {
            Console.WriteLine("This is my next task.");
        }
    }
}
And then separately, to schedule that command, you'd write something like the following (Hangfire substitutes the real PerformContext for the null argument when the job actually runs):
BackgroundJob.Enqueue("demo-job", () => this._demoService.RunDemoTask(null));
If you need further clarification, I encourage you to watch Blake Connally's Hangfire demo.
I was reading about the disadvantages of the singleton pattern. A valid use of a singleton suggested in many forums is a logging application. I was wondering why this is a valid use of the pattern. Aren't we maintaining state information in memory throughout the application?
Why not just use a function:
class Logger
{
    public static void Log(string message)
    {
        // Append to file
    }
}
To answer "why not just use a function": this code works incorrectly with multi-threaded logging. If two threads try to write to the same file at the same time, an exception can be thrown, and this is why it's good to use a singleton for logging. In this solution we have a thread-safe singleton container; other threads push messages (logs) into the container safely, and the container (usually backed by a thread-safe queue) writes the messages/logs to a file/db/etc. one by one.
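A minimal sketch of that idea, assuming a BlockingCollection as the thread-safe queue and a single background task draining it to a file:

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

public sealed class Logger
{
    public static Logger Instance { get; } = new Logger();

    private readonly BlockingCollection<string> queue = new BlockingCollection<string>();

    private Logger()
    {
        // One background consumer serializes all file writes, so producers
        // on any thread never touch the file directly.
        Task.Run(() =>
        {
            foreach (var message in queue.GetConsumingEnumerable())
            {
                File.AppendAllText("app.log", message + Environment.NewLine);
            }
        });
    }

    public void Log(string message) => queue.Add(message); // thread-safe
}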
It is better to declare an interface:

interface ILogger
{
    void Log(string message);
}
Then implement a specific type of logger:

class FileLogger : ILogger
{
    public void Log(string message)
    {
        // Append to file
    }
}

class EmptyLogger : ILogger
{
    public void Log(string message)
    {
        // Do nothing
    }
}
And inject it where required. You will inject EmptyLogger in tests. Using a singleton would make testing harder, because you'd have to write to a file in tests too. If you want to test whether a class makes the correct log entries, you can use a mock and define expectations (see the sketch after the injection example below).
About injection:

public class ClassThatUsesLogger
{
    private ILogger Logger { get; set; }

    public ClassThatUsesLogger(ILogger logger) { Logger = logger; }
}
ClassThatUsesLogger takes a FileLogger in production code:
classThatUsesLogger = new ClassThatUsesLogger(new FileLogger());
In tests it takes an EmptyLogger:
classThatUsesLogger = new ClassThatUsesLogger(new EmptyLogger());
You inject different loggers in different scenarios. There are better ways to handle injection, but you'll have to do some reading.
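For the mock-based test mentioned above, a sketch using Moq (an assumed choice of mocking library; DoWork is a hypothetical method that should produce a log entry):

// dotnet add package Moq
using Moq;

var mockLogger = new Mock<ILogger>();
var sut = new ClassThatUsesLogger(mockLogger.Object);

sut.DoWork(); // hypothetical method under test

// Verify the class made the expected log entry exactly once.
mockLogger.Verify(l => l.Log("work started"), Times.Once);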
EDIT: Remember you can still use a singleton in your code, as others suggested, but you should hide its usage behind an interface to loosen the coupling between a class and a specific implementation of logging.
I'm not sure what you are referring to when you ask about state information remaining in memory, but one reason to favour a singleton over a static class for logging is that a singleton still allows you to both
(1) program to abstractions (ILogger) and
(2) adhere to the dependency inversion principle by practicing dependency injection.
You can't inject your static logging method as a dependency (unless you want to pass something like Action<string> everywhere), but you can pass a singleton object around, and you can pass different implementations like NullLogger when writing unit tests.
A singleton logger implementation also allows you to control easily how often your logging is flushed to disk or the db. If you had multiple instances of the logger, they could all be trying to write at the same time, which could cause collisions or performance issues. The singleton allows this to be managed, so that you only flush to the store during quiet times and all your messages are kept in order.
In most circumstances the Singleton design pattern is not recommended, because it is a kind of global state, hides dependencies (making APIs less obvious), and is hard to test.
Logging is not one of those circumstances. This is because logging does not affect the execution of your code, as explained here: http://googletesting.blogspot.com/2008/08/root-cause-of-singletons.html :
your application does not behave any different whether or not a given logger is enabled. The information here flows one way: From your application into the logger.
You probably still don't want to use the Singleton pattern, though. Not quite, at least. This is because there's no reason to force a single instance of a logger. What if you wanted to have two log files, or two loggers that behaved differently and were used for different purposes?
So all you really want from a logger is for it to be easily accessible from everywhere when you need it. Basically, logging is a special circumstance where the best way to go is to have it globally accessible.
The easy way is to simply have a static field in your application that contains the instance of the logger:
public static readonly Logger LOGGER = new Logger();

Or if your logger is created by a factory:

public static readonly Logger LOGGER = new LoggerFactory().GetLogger("myLogger");

Or if your logger is created by a DI container:

public static readonly Logger LOGGER = Container.GetInstance("myLogger");

You could make your logger implementation configurable through a config file that you set to "mode = test" when you are testing, so that the logger can behave accordingly, either not logging or logging to the console:

public static readonly Logger LOGGER = new Logger("logConfig.cfg");
You could also make the logger's behavior configurable at runtime, so that when running tests you can simply set it up as such: LOGGER.SetMode("test");
Or, if you don't make the field readonly, you can simply replace the static LOGGER with a test logger or a mocked logger in the setup of your tests.
Something slightly fancier you can do that is close to a Singleton pattern, but not quite:

public class Logger
{
    private static Logger defaultLogger;

    public static Logger GetDefault()
    {
        if (defaultLogger == null)
        {
            throw new InvalidOperationException("No default logger was specified.");
        }
        return defaultLogger;
    }

    public static void SetDefault(Logger logger)
    {
        if (defaultLogger != null)
        {
            throw new InvalidOperationException("Default logger already specified.");
        }
        defaultLogger = logger;
    }

    public Logger()
    {
    }
}

public static void Main(string[] args)
{
    Logger.SetDefault(new Logger());
}
[TestMethod]
public void MyTest()
{
    Logger.SetDefault(new MockedLogger());
    // ... test stuff
}