NLog - Parameters not passed to query when writing to database - c#

I'm pretty new to NLog so please forgive my basic question.
I've inherited a WinForms application written by some contractors. I'm mainly a database developer, and although I've managed to do some development on the application, I sometimes struggle to track down error messages encountered by users, so I'm attempting to retrofit logging into the application via NLog. I first encountered this issue in NLog 4.4.5 and have since updated to 4.4.12, which hasn't solved the problem. I've verified that NLog is catching the errors correctly, as it will output to a text file, but when I try to direct it to a database target I can't get it to work.
This is my database table:
My problem is that I can only get errors written to the database if I don't pass any parameters to the insert statement (which is pretty useless). That is to say, the following in my NLog.config file works:
<target name="database" xsi:type="Database">
  <commandText>INSERT INTO [tblException] (DbVersionID, ExceptionDateTime) SELECT MAX(DbVersionID), GETDATE() FROM tblDbVersion</commandText>
  <dbProvider>System.Data.SqlServerCe.4.0</dbProvider>
  <connectionString>Data Source=${basedir}\Database.sdf</connectionString>
</target>
But this doesn't:
<target name="database" xsi:type="Database">
  <commandText>INSERT INTO [tblException] (DbVersionID, ExceptionDateTime, Message) SELECT MAX(DbVersionID), GETDATE(), @message FROM tblDbVersion</commandText>
  <parameter name="@message" layout="${message}" />
  <dbProvider>System.Data.SqlServerCe.4.0</dbProvider>
  <connectionString>Data Source=${basedir}\Database.sdf</connectionString>
</target>
I've enabled internal logging and the following is what I get:
2017-11-28 11:26:45.8063 Trace Executing Text: INSERT INTO [tblException] (DbVersionID, ExceptionDateTime, Message) SELECT MAX(DbVersionID), GETDATE(), @Message FROM tblDbVersion
2017-11-28 11:26:45.8063 Trace Parameter: '@message' = 'Test Error Message' (String)
2017-11-28 11:26:45.8063 Error Error when writing to database. Exception: System.Data.SqlServerCe.SqlCeException (0x80004005): A parameter is not allowed in this location. Ensure that the '@' sign is in a valid location or that parameters are valid at all in this SQL statement.
at System.Data.SqlServerCe.SqlCeCommand.ProcessResults(Int32 hr)
at System.Data.SqlServerCe.SqlCeCommand.CompileQueryPlan()
at System.Data.SqlServerCe.SqlCeCommand.ExecuteCommand(CommandBehavior behavior, String method, ResultSetOptions options)
at System.Data.SqlServerCe.SqlCeCommand.ExecuteNonQuery()
at NLog.Targets.DatabaseTarget.WriteEventToDatabase(LogEventInfo logEvent)
at NLog.Targets.DatabaseTarget.Write(LogEventInfo logEvent)
It appears that the parameter isn't being substituted into the query before it's run. I tried adding single quotes in the command text in case that would help, but it just resulted in the literal string '@message' being inserted into the database field.
I can't see anything that I've done differently to the examples, so any help would be appreciated.
Regards,
Alex

As per the comment from @pmcilreavy:
"Have you tried temporarily removing the MAX, replacing it with a literal, and changing to INSERT INTO .. (blah) VALUES (blah, @Message)? SqlCe has quite a few limitations and quirks compared to Sql Server, so it might be worth simplifying things where possible."
The issue appears to be that SQL CE doesn't support parameters in the SELECT list of a SELECT statement.
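A minimal rework along the lines of that comment might look like the following (untested sketch; the DbVersionID column is left out here and would need to be populated separately, e.g. by the application or a follow-up UPDATE, since SQL CE rejects parameters inside a SELECT):

```xml
<target name="database" xsi:type="Database">
  <!-- VALUES clause instead of SELECT: SQL CE accepts parameters here -->
  <commandText>INSERT INTO [tblException] (ExceptionDateTime, Message) VALUES (GETDATE(), @message)</commandText>
  <parameter name="@message" layout="${message}" />
  <dbProvider>System.Data.SqlServerCe.4.0</dbProvider>
  <connectionString>Data Source=${basedir}\Database.sdf</connectionString>
</target>
```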

Related

Custom Column(s) with MsSqlServer Sink and AppSettings

I've got an application that runs without problem with the File and Console Sinks and now I'm trying to add the MSSqlServer Sink.
Looking at the documentation on Github I've got my application to write to the SQL Database as well as the other sinks.
<add key="serilog:write-to:MSSqlServer.connectionString" value="Server=servername;Database=databasename;User Id=userid;Password=password;"/>
<add key="serilog:write-to:MSSqlServer.tableName" value="Logs"/>
<add key="serilog:write-to:MSSqlServer.autoCreateSqlTable" value="true"/>
One improvement is I'd like to add a custom column to the Logs table that Serilog uses to store a number indicating a unique RunId for my application. The idea being that a simple query would allow grouping by RunId to see all messages for that one run.
So I added the following, based on the documentation (and I haven't been able to find any other examples) as it seemed logical:
<add key="serilog:write-to:MSSqlServer.columnOptions.ColumnName" value="RunId"/>
<add key="serilog:write-to:MSSqlServer.columnOptions.PropertyName" value="RunId"/>
<add key="serilog:write-to:MSSqlServer.columnOptions.DataType" value="SqlDbType.Int"/>
<add key="serilog:write-to:MSSqlServer.columnOptions.DataLength" value="32"/>
and then in my code all I need to do is:
Log.Information("{RunId}{Message}", RunId, Message);
to see a new entry with {RunId} in the RunId column and {Message} in the Message column... however, every time I do this nothing is written to the RunId column; it remains NULL, whereas every log message from the console/file is duplicated in the table.
So it seems logging is working; it must be that the keys are wrong, and I'm really not sure what should be used.
Would anyone be able to point me in the direction I need to be going or where I've gone wrong?
Thank you.
Logging definitely was working, but after some digging I finally found out I had two issues:
Permissions on the database were insufficient for the user Serilog was using, and
the AppSettings settings I was using were wrong.
#1 was easy enough to fix: I created a dedicated Serilog account for this database and fixed the permissions as per the documentation.
#2 however was very frustrating but eventually I was able to get the following to work:
<configSections>
  <section name="MSSqlServerSettingsSection" type="Serilog.Configuration.MSSqlServerConfigurationSection, Serilog.Sinks.MSSqlServer"/>
</configSections>
<MSSqlServerSettingsSection DisableTriggers="false" ClusteredColumnstoreIndex="false" PrimaryKeyColumnName="Id">
  <!-- SinkOptions parameters -->
  <TableName Value="Logs"/>
  <Columns>
    <add ColumnName="RunId" DataType="int"/>
  </Columns>
</MSSqlServerSettingsSection>
On compile and execution, my program now creates the Logs table in the database with a RunId column, which I have been able to populate with a {RunId} expression in my Log.Debug() calls.
FWIW: I hope this is helpful, and if I can work out how, I'll see if I can add this to the documentation as an example of using AppSettings for this use case. Searching this question, most people seem to be using JSON rather than AppSettings.

The type initializer for 'System.Data.SqlClient.TdsParser' threw an exception error while calling Stored Procedure from Azure function

I have an Azure Event hub with readings from my smart electricity meter. I am trying to use an Azure Function to write the meter readings to an Azure SQL DB. I have created a target table in the Azure SQL DB and a Stored Procedure to parse a JSON and store the contents in the table. I have successfully tested the stored procedure.
When I call it from my Azure Function however I am getting an error: The type initializer for 'System.Data.SqlClient.TdsParser' threw an exception. For testing purposes, I have tried to execute a simple SQL select statement from my Azure Function, but that gives the same error. I am lost at the moment as I have tried many options without any luck. Here is the Azure function code:
#r "Microsoft.Azure.EventHubs"
using System;
using System.Text;
using System.Data;
using Microsoft.Azure.EventHubs;
using System.Data.SqlClient;
using System.Configuration;
using Dapper;
public static async Task Run(string events, ILogger log)
{
    var exceptions = new List<Exception>();
    try
    {
        if (String.IsNullOrWhiteSpace(events))
            return;
        try
        {
            string ConnString = Environment.GetEnvironmentVariable("SQLAZURECONNSTR_azure-db-connection-meterreadevents", EnvironmentVariableTarget.Process);
            using (SqlConnection conn = new SqlConnection(ConnString))
            {
                conn.Execute("dbo.ImportEvents", new { Events = events }, commandType: CommandType.StoredProcedure);
            }
        }
        catch (Exception ex)
        {
            log.LogInformation($"C# Event Hub trigger function exception: {ex.Message}");
        }
    }
    catch (Exception e)
    {
        // We need to keep processing the rest of the batch - capture this exception and continue.
        // Also, consider capturing details of the message that failed to process so it can be processed again later.
        exceptions.Add(e);
    }

    // Once processing of the batch is complete, if any messages in the batch failed to process,
    // throw an exception so that there is a record of the failure.
    if (exceptions.Count > 1)
        throw new AggregateException(exceptions);

    if (exceptions.Count == 1)
        throw exceptions.Single();
}
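(Aside: the error-handling tail of the function above can be checked in isolation. This is a minimal stand-alone sketch of just that tail, not the original function: a single failure is rethrown as-is, several failures are wrapped in an AggregateException.)

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Mimics the end of the Azure Function: rethrow one failure directly,
// wrap multiple failures in an AggregateException.
static void ThrowIfAny(List<Exception> exceptions)
{
    if (exceptions.Count > 1)
        throw new AggregateException(exceptions);
    if (exceptions.Count == 1)
        throw exceptions.Single();
}

try
{
    ThrowIfAny(new List<Exception> { new FormatException("a"), new InvalidOperationException("b") });
}
catch (AggregateException agg)
{
    // Both captured exceptions survive inside the aggregate.
    Console.WriteLine(agg.InnerExceptions.Count); // 2
}
```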
The events coming in are in JSON form as follows
{
    "current_consumption": 450,
    "back_low": 0.004,
    "current_back": 0,
    "total_high": 13466.338,
    "gas": 8063.749,
    "current_rate": "001",
    "total_low": 12074.859,
    "back_high": 0.011,
    "timestamp": "2020-02-29 22:21:14.087210"
}
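When the error is this generic, it can also help to confirm the payload itself parses cleanly before suspecting the database layer. A minimal stand-alone check using System.Text.Json (this is a debugging sketch, not part of the function; the sample payload mirrors the event above):

```csharp
using System;
using System.Text.Json;

// Sample event matching the shape shown above.
string payload = "{\"current_consumption\":450,\"back_low\":0.004,\"current_back\":0," +
                 "\"total_high\":13466.338,\"gas\":8063.749,\"current_rate\":\"001\"," +
                 "\"total_low\":12074.859,\"back_high\":0.011," +
                 "\"timestamp\":\"2020-02-29 22:21:14.087210\"}";

using JsonDocument doc = JsonDocument.Parse(payload);
JsonElement root = doc.RootElement;

// Spot-check fields that the stored procedure's OPENJSON WITH clause expects.
Console.WriteLine(root.GetProperty("current_consumption").GetInt32()); // 450
Console.WriteLine(root.GetProperty("current_rate").GetString());       // 001
```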
The stored procedure is as follows:
CREATE PROCEDURE [dbo].[ImportEvents]
    @Events NVARCHAR(MAX)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON

    -- Insert statements for procedure here
    INSERT INTO dbo.MeterReadEvents
    SELECT * FROM OPENJSON(@Events) WITH (timestamp datetime2, current_consumption int, current_rate nchar(3), current_back int, total_low numeric(8, 3), back_high numeric(8, 3), total_high numeric(8, 3), gas numeric(7, 3), back_low numeric(8, 3))
END
I have added a connection string of type SQL Azure and replaced {your password} with the actual password in the string. Any thoughts on how to fix this issue, or on how to get more logging, as the error is very general?
I managed to fix this exception by re-installing Microsoft.Data.SqlClient.SNI, then cleaning and rebuilding the project.
I managed to fix the issue by changing the Runtime version to ~2 in the Function App Settings.
Does this mean this is some bug in runtime version ~3 or should there be another way of fixing it in runtime version ~3?
I might be late to the party, but in my case the cause of the error was the "Target runtime" setting when publishing. I developed on a Windows machine but was transferring the files to a Linux machine; the solution was to change the target runtime to the correct one. Initially it was win-x64 (merely because I started off by deploying locally).
Try to connect to a local SQL Server, use SQL Profiler, and check what you are sending and what exactly SQL Server is trying to do with the command being executed.
It's very hard to replicate your code, because I obviously do not have your Azure SQL :)
So I would suggest executing each step of the stored procedure as direct queries.
See if that works, then try wrapping the statements into stored procedures called back-to-back, and get that to work.
Then combine the commands into a single command, and fiddle with it until you get it to work ;)
Get the simplest possible query to execute against the Azure SQL database, so you are sure your connection is valid (like a simple SELECT on something).
Because without more information, it is very difficult to assist you.
Pretty silly, but I got this after installing the EntityFrameworkCore NuGet package but not the EntityFrameworkCore.SqlServer NuGet package; I had the SqlServer version for Entity Framework 6 installed.
I had the same error with a VSTO application that was installed with a double-click in the file explorer. Windows did not copy all the files to the automatic location somewhere in ProgramData, so the application was simply incomplete!
The solution was to register the VSTO application manually in HKEY_CURRENT_USER and point the "Manifest" to the complete directory with all the files (like Microsoft.Data.SqlClient.dll, Microsoft.Data.SqlClient.SNI.x64.dll, etc.).
Those installation directories chosen automatically by Windows will give unexpected behaviour. :(
Some web service methods ran fine and others would fail with this same error. I ran a failing method in the browser on the box the web service was being served from and got a lengthy, helpful error message. One item was an InnerException that said:
<ExceptionMessage>Failed to load C:\sites\TXStockChecker.xxxxxx.com\bin\x64\SNI.dll</ExceptionMessage>
I noticed that the file was right where it was expected in my development environment, so I copied it to the matching directory on the production server, and now all the methods run as expected.

SharePoint 2010 Web Part Error - Exception from HRESULT: 0x80131904

A client of ours recently encountered a problem with a web part I wrote a while back. This web part is an advanced search which returns results based on information entered into a text box and the criteria selected from a drop down. This web part has been functional on other customer sites and the error which is now encountered by this one client could not be replicated, even after extensive testing on our development environment. This error only appears when the search column is a lookup field and works as expected on any other field type. I have looked around the web to find a resolution specific to my problem, but the majority of the cases refer to an SQL error of the Content Database being out of space, which I don't believe is the case in my instance.
Below is the full stack trace message we receive. Any help to resolve this problem would be very much appreciated!
Exception from HRESULT: 0x80131904
   at Microsoft.SharePoint.SPGlobal.HandleComException(COMException comEx)
   at Microsoft.SharePoint.Library.SPRequest.GetListItemDataWithCallback2(IListItemSqlClient pSqlClient, String bstrUrl, String bstrListName, String bstrViewName, String bstrViewXml, SAFEARRAYFLAGS fSafeArrayFlags, ISP2DSafeArrayWriter pSACallback, ISPDataCallback pPagingCallback, ISPDataCallback pPagingPrevCallback, ISPDataCallback pFilterLinkCallback, ISPDataCallback pSchemaCallback, ISPDataCallback pRowCountCallback, Boolean& pbMaximalView)
   at Microsoft.SharePoint.SPListItemCollection.EnsureListItemsData()
   at Microsoft.SharePoint.SPListItemCollection.GetEnumerator()
   at Biz_AdvancedListSearch_Module.AdvancedListSearch.AdvancedListSearch.btnSearch_Click(Object sender, ImageClickEventArgs e)
EDIT: The problem only arises when the lookup column uses the "contains" search criteria. I use CAML Queries to retrieve the data and using a console application, I determined that this was definitely possible with a lookup field.
A little old, but I had the same issue today. The comparers in the CAML query are case sensitive; in my case the comparer element had the wrong casing, and after correcting it, it works now!
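For reference, a typical case-correct Contains clause in a CAML query over a lookup field looks like this (the field name and value here are hypothetical; note the capitalized element names):

```xml
<Where>
  <Contains>
    <FieldRef Name="MyLookupField" />
    <Value Type="Lookup">search term</Value>
  </Contains>
</Where>
```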

TimeOut Exception in multicriteria/future using NHibernate

I use NHibernate 3.3.3 in my ASP.NET C# application with SqlServer 2008.
DetachedCriteria _pageCriteria = CriteriaTransformer.Clone(criteria)
.SetMaxResults(maxResult)
.SetFirstResult(firstResult);
_recordCount= _countCriteria.GetExecutableCriteria(session).FutureValue<int>();
var _pageCriteriaFuture = _pageCriteria.GetExecutableCriteria(session).Future<T>();
_pageCriteriaFuture.ToList();
If I try to execute the previous code I get a TimeOut error:
Failed to execute multi criteria[SQL:
SELECT count(*) as y0_ FROM Articoli this_ WHERE ((contains(this_.Oggetto, ?) or contains(this_.CorpoPlaintext, ?) or contains(this_.ParoleChiavi, ?) or contains(this_.SottoTitoloPlainText, ?)));
SELECT TOP (?) this_.Id as Id13_0_, this_.Corpo as Corpo13_0_, this_.CorpoPlaintext as CorpoPla3_13_0_, this_.Data as Data13_0_, this_.DataInserimento as DataInse5_13_0_, this_.LinkPagina as LinkPagina13_0_, this_.Numero as Numero13_0_, this_.Oggetto as Oggetto13_0_, this_.Tag as Tag13_0_, this_.NumeroVisualizzazioni as NumeroV10_13_0_, this_.IsConsigliatoRedazione as IsConsi11_13_0_, this_.ParoleChiavi as ParoleC12_13_0_, this_.SottoTitolo as SottoTi13_13_0_, this_.SottoTitoloPlainText as SottoTi14_13_0_, this_.idArticoloOld as idArtic15_13_0_, this_.IdUser as IdUser13_0_
FROM Articoli this_ WHERE ((contains(this_.Oggetto, ?) or contains(this_.CorpoPlaintext, ?) or contains(this_.ParoleChiavi, ?) or contains(this_.SottoTitoloPlainText, ?))) ORDER BY this_.DataInserimento desc;
]
Inner exception:
{"Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding."}
The point is that if I try to execute these steps separately:
_recordCount= _countCriteria.GetExecutableCriteria(session).FutureValue<int>().Value;
It works like a charm!!
Why do I get this error if I try to execute them in the same statement?
Is it a lock problem? I also tried to execute the same two-query command in SQL Server Management Studio and I get no errors!
In principle, the Future queries you've used are working for me, not only when I tried to reproduce the question, but even on a daily basis.
I did experience the same exception once. It was caused by the fact that there were two sessions open. The first, running in a transaction, was locking the table with an uncommitted update/insert. The second was asking for the result (Future) and a time-out stopped it. All that in the same request (horrible).
NOTES: From the snippet (I know it could be just a quick draft, but just in case it is a copy-paste), I am not sure about the naming conventions: is criteria from an outer scope? Is _recordCount a member of a DAO class? (var _pageCriteriaFuture is definitely a local variable, yet it is prefixed with _.) The _countCriteria appears in the snippet without initialization...
All that could mean that the parts are separated and some of them could already trigger opening/closing of another Session. So I would suggest turning on log4net and full logging for NHibernate, and checking when the transaction was opened. The answer could be there.

How to log complex synchronization process?

I'm developing a complicated distributed service that performs an iterative synchronization process. Every 10 seconds it synchronizes business entities in different information systems. One iteration consists of a bunch of third-party service calls to retrieve the current state of business objects (count of customers, goods, certain customer and goods details, etc.), queries to the local DB, and then computing the differences between them and smoothing out (synchronizing) those differences.
There are different types of iterations: fast (only changes in the set of objects) and slow (a full review of the data). Fast iterations run every 10 seconds; slow ones run once a day.
So, how can I log these processes using NLog? I'm using SQLite for storing data, but I'm stuck on the DB design for the logs.
So I want to log flow of every iteration:
1. Request for current state of objects to 3d party service
2. Query the local database for current state of objects
3. Get differences list
4. Invoke external service to commit insufficient data
5. Update local database for insufficient data
But there are so many kinds of info to log that I can't just put them into one TEXT field.
At the moment I'm using such structure for logs:
CREATE TABLE [Log] (
    [id] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
    [ts] TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    [iteration_id] VARCHAR,
    [request_response_pair] VARCHAR,
    [type] VARCHAR NOT NULL,
    [level] TEXT NOT NULL,
    [server_id] VARCHAR,
    [server_alias] VARCHAR,
    [description] TEXT,
    [error] TEXT);
So every service request and response goes into description, and request_response_pair is a key that links each response to its request.
Here is my NLog config:
<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" internalLogFile="D:\nlog.txt" internalLogLevel="Trace">
  <targets>
    <target name="Database" xsi:type="Database" keepConnection="false"
            useTransactions="false"
            dbProvider="System.Data.SQLite.SQLiteConnection, System.Data.SQLite, Version=1.0.82.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139"
            connectionString="Data Source=${basedir}\SyncLog.db;Version=3;"
            commandText="INSERT into Log(iteration_id, request_response_pair, type, level, server_id, server_alias, description, error) values(@Iteration_id, @Request_response_pair, @Type, @Loglevel, @server_id, @server_alias, @Description, @Error)">
      <parameter name="@Type" layout="${message}"/>
      <parameter name="@Loglevel" layout="${level:uppercase=true}"/>
      <parameter name="@Request_response_pair" layout="${event-context:item=request_response_pair}"/>
      <parameter name="@Iteration_id" layout="${event-context:item=iteration_id}"/>
      <parameter name="@server_id" layout="${event-context:item=server_id}"/>
      <parameter name="@server_alias" layout="${event-context:item=server_alias}"/>
      <parameter name="@Description" layout="${event-context:item=description}"/>
      <parameter name="@Error" layout="${event-context:item=error}"/>
    </target>
  </targets>
  <rules>
    <logger name="*" minlevel="Trace" writeTo="Database" />
  </rules>
</nlog>
Here is how I log:
namespace NLog
{
    public static class LoggerExtensions
    {
        public static void InfoEx(this Logger l, string message, Dictionary<string, object> contextParams)
        {
            LogEventInfo eventInfo = new LogEventInfo(LogLevel.Info, "", message);
            foreach (KeyValuePair<string, object> kvp in contextParams)
            {
                eventInfo.Properties.Add(kvp.Key, kvp.Value);
            }
            l.Log(eventInfo);
        }

        public static void InfoEx(this Logger l, string message, string server_id, string server_alias, Dictionary<string, object> contextParams = null)
        {
            Dictionary<string, object> p = new Dictionary<string, object>();
            p.Add("server_id", server_id);
            p.Add("server_alias", server_alias);
            if (contextParams != null)
            {
                foreach (KeyValuePair<string, object> kvp in contextParams)
                {
                    p.Add(kvp.Key, kvp.Value);
                }
            }
            l.InfoEx(message, p);
        }
    }
}
I know about logging levels, but I need all these verbose logs, so I log them as Info. I can't find any tutorial on how to log complicated, structured entries like this; only plain log messages.
I am assuming, you are talking about "Logs" for the typical "log things so we have something to look at if we need to inspect our workflow (errors/performance)". I assume that you do NOT mean logs as e.g. in "We need accounting information and the log is part of our domain data and is included in the workflow."
And from what I got from your post, you are worrying about the backend storage format of the logs, so that you can later process them and use them for said diagnostics?
Then I'd recommend that you make the logging code independent of the domain specifics.
Question: How will the logs you create be processed? Do you really need to access them all over the place, so that you need the database to provide you a structured view? Is it in any way relevant how fast you can filter your logs? Or will they end up in one big log-analyzer application anyway, one that is run only every second week when something bad happened?
In my opinion, the biggest reasons you want to avoid any domain specifics in the log are that "logs should work if things go wrong" and "logs should work after things changed".
Logs should work if things go wrong
If you have columns in your log table for domain-specific values like "Request_response_pair", and there is no pair, then writing the log itself might fail (e.g. if it's an indexed field). Of course, you can make sure to have only nullable columns and no restrictions in your DB design, but take a step back and ask: Why do you want the structure in your log database anyway? Logs are meant to work as reliably as possible, so any kind of template you press them into may restrict the use cases or prevent you from logging crucial information.
Logs should work after things changed
Especially if you need logs for detecting and fixing bugs or improving performance, you will regularly compare logs from "before the change" to logs from "after the change". If you need to change the structure of your log database because you changed your domain data, this is going to hurt you when you need to compare the logs.
True, if you make a data structure change, you probably still need to update some tools like log analyzer and such, but there is usually a large part of logging/analyzing code that is completely agnostic to the actual structure of the domain.
Many systems (including complex ones) can live with "just log one simple string" and later write tools to take the string apart again, if they need to filter or process the logs.
Other systems write logs as simple string key/value pairs. The log function itself is not domain-specific but just accepts a string dictionary and writes it out (or, even easier, a params string[] which should have an even number of parameters, using every second parameter as a key, if you aren't scared by that proposition :-D).
Of course, you will probably start writing another tooling layer on top of the base log functions that knows about domain-specific data structures, composes the string dictionary, and passes it on. You certainly don't want to copy the decomposing code all over the place. But make the base functions available at all the places where you might want to log something. It's really helpful if you do run into "strange" situations (exception handlers) where some information is missing.
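The params-array idea from the previous paragraph can be sketched like this (a hypothetical helper, not from any library):

```csharp
using System;
using System.Text;

// Composes a flat "key=value" log line from alternating key/value parameters,
// as described above; throws if the parameter count is odd.
static string BuildLogLine(params string[] kv)
{
    if (kv.Length % 2 != 0)
        throw new ArgumentException("Expected an even number of parameters (key/value pairs).");
    var sb = new StringBuilder();
    for (int i = 0; i < kv.Length; i += 2)
    {
        if (i > 0) sb.Append(' ');
        sb.Append(kv[i]).Append('=').Append(kv[i + 1]);
    }
    return sb.ToString();
}

Console.WriteLine(BuildLogLine("iteration_id", "42", "step", "get_diffs", "server_id", "srv01"));
// iteration_id=42 step=get_diffs server_id=srv01
```

The resulting strings remain domain-agnostic: a later analyzer can split them back into pairs without the log schema ever having to change.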
