My team has recently started using Fortify Static Code Analyzer (version 17.10.0 156) on our .NET code base (C# 6 and VB.NET), and we are experiencing some pain with the number of false positives it reports. For any given issue we can't know whether it is a false positive without looking at it, and we don't want any actual problems to get lost in the clutter.
We have a utilities library with a method ReadEmbeddedSql which extracts SQL from resources embedded in the assembly so it can be executed. Fortify flags any OracleCommand (from Oracle.ManagedDataAccess.Client) which executes the SQL returned from this method with a SQL Injection vulnerability.
This vulnerability is reported at the point the SQL is set on the command, be it via the constructor or via the CommandText property.
It does not do this if the ReadEmbeddedSql method is defined in the local assembly.
A pared-down listing of the source code which produces this result follows below. In the example code, ExecuteSomeSql() and ExecuteSomeSqlDifferently() are flagged with a vulnerability whereas ExecuteSomeLocalSql() is not.
For Analysis Evidence it only lists the line where the OracleCommand is created:
TestDao.cs:27 - OracleCommand()
RuleID: 31D4607A-A3FF-447C-908A-CA2BBE4CE4B7
In the details for the issue it provides:
On line 27 of TestDao.cs, the method ExecuteSomeSql() invokes a SQL query
built using input coming from an untrusted source. This call could
allow an attacker to modify the statement's meaning or to execute
arbitrary SQL commands.
A sample diagram presented by Fortify for this issue:
After much searching, I came across this post describing a similar problem and a proposed solution: Can I register a trusted source for SQL statements
I followed the instructions there, and verified them independently against a different version of the user guide (page 90), but the result is unchanged. I then added an additional 'SQL Injection Validation Rule', which is specifically described as one that "... identifies a function that properly validates data before using them in a SQL query."
Still to no avail.
EDIT:
I have played around with custom rules more, and have been able to determine that the CustomCleanseRules are actually being applied (they do remove other types of taint), but they are not removing some trust-specific flag Fortify applies to our in-house library.
Every value returned from any method of my libraries is distrusted, and none of the rules I've created seem to be able to remove this distrust.
Is there something I am doing wrong, or does Fortify just not work?
Is there a different type of rule needed to cleanse this general distrust?
Example Source code:
In library:
namespace Our.Utilities.Database
{
public abstract class BaseDao
{
protected string ReadEmbeddedSql(string key)
{
//... extract sql from assembly
return sql;
}
}
}
In application:
namespace Our.Application.DataAccess
{
public class TestDao: Our.Utilities.Database.BaseDao
{
public void ExecuteSomeSql()
{
//... connection is created
// Fortify does not trust sqlText returned from the library method.
var sqlText = ReadEmbeddedSql("sql.for.ExecuteSomeSql");
using(var someSqlCommand = new OracleCommand(sqlText, connection)) // Fortify flags creation of OracleCommand as a SQL Injection vulnerability.
{
someSqlCommand.ExecuteNonQuery();
}
}
public void ExecuteSomeSqlDifferently()
{
//... connection is created
// Fortify does not trust sqlText returned from the library method.
var sqlText = ReadEmbeddedSql("sql.for.ExecuteSomeSql");
using(var someSqlCommand = connection.CreateCommand())
{
someSqlCommand.CommandText = sqlText; // Fortify flags setting CommandText as a SQL Injection vulnerability.
someSqlCommand.ExecuteNonQuery();
}
}
public void ExecuteSomeLocalSql()
{
//... connection is created
var sqlText = ReadEmbeddedSqlLocallyDefined("sql.for.ExecuteSomeSql");
using(var someSqlCommand = new OracleCommand(sqlText, connection))
{
someSqlCommand.ExecuteNonQuery();
}
}
protected string ReadEmbeddedSqlLocallyDefined(string key)
{
//... extract sql from assembly
return sql;
}
}
}
XML of custom rules:
<?xml version="1.0" encoding="UTF-8"?>
<RulePack xmlns="xmlns://www.fortifysoftware.com/schema/rules">
<RulePackID>5A78FC44-4EEB-49C7-91DA-6564805C3F23</RulePackID>
<SKU>SKU-C:\local\path\to\custom\rules\Our-Utilities</SKU>
<Name><![CDATA[C:\local\path\to\custom\rules\Our-Utilities]]></Name>
<Version>1.0</Version>
<Description><![CDATA[]]></Description>
<Rules version="17.10">
<RuleDefinitions>
<DataflowCleanseRule formatVersion="17.10" language="dotnet">
<RuleID>7C49FEDA-AA67-490D-8820-684F3BDD58B7</RuleID>
<FunctionIdentifier>
<NamespaceName>
<Pattern>Our.Utilities.Database</Pattern>
</NamespaceName>
<ClassName>
<Pattern>BaseDao</Pattern>
</ClassName>
<FunctionName>
<Pattern>ReadSqlTemplate</Pattern>
</FunctionName>
<ApplyTo implements="true" overrides="true" extends="true"/>
</FunctionIdentifier>
<OutArguments>return</OutArguments>
</DataflowCleanseRule>
<DataflowCleanseRule formatVersion="17.10" language="dotnet">
<RuleID>14C423ED-5A51-4BA1-BAE1-075E566BE58D</RuleID>
<TaintFlags>+VALIDATED_SQL_INJECTION</TaintFlags>
<FunctionIdentifier>
<NamespaceName>
<Pattern>Our.Utilities.Database</Pattern>
</NamespaceName>
<ClassName>
<Pattern>BaseDao</Pattern>
</ClassName>
<FunctionName>
<Pattern>ReadSqlTemplate</Pattern>
</FunctionName>
<ApplyTo implements="true" overrides="true" extends="true"/>
</FunctionIdentifier>
<OutArguments>return</OutArguments>
</DataflowCleanseRule>
</RuleDefinitions>
</Rules>
</RulePack>
When I run your sample code (I do have to modify it, since it will not compile as is) with SCA 17.10 and the 2017Q3 rulepack (I also tried 2017Q2), I do not get the same SQL Injection Rule ID as you.
Looking at your Analysis Evidence, I assume the analyzer that found this was not Dataflow or Control Flow, but maybe Semantic or Structural?
You can see the type of analyzer that found the finding by looking at the summary tab:
Either way, I don't think a Custom Rule is what I would do here.
One option is to use a filter file.
This is a file that can contain:
RuleIDs
InstanceIDs
Categories
When this file is passed to the scan command, any finding that matches any of the entries in the filter file will be filtered out of the results.
You can find examples of using the filter file in <SCA Install Dir>\Samples\Advanced\filter
or you can look in the Fortify SCA Users Guide in Appendix B: Filtering the Analysis
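As a rough sketch (the build ID and file names here are made up; check the Users Guide appendix for the exact syntax your SCA version supports), the filter file is just a plain-text list, one entry per line, passed to the scan phase:

```shell
# filter.txt - one RuleID, InstanceID, or category per line;
# the RuleID below is the one reported in the question
echo '31D4607A-A3FF-447C-908A-CA2BBE4CE4B7' > filter.txt

# then pass it to the scan phase with -filter (hypothetical build ID "MyBuild"):
#   sourceanalyzer -b MyBuild -scan -filter filter.txt -f results.fpr
```

Any finding matching that RuleID is then dropped from results.fpr.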
*Note: Your analysis of using filters (in the comment) is spot on.
I'm trying to create an SSIS package using EzAPI 2012. The templates and tutorials that I've found work fine, but I am not able to find anything on creating a connection to an OLEDB source with SQL Authentication (username and password). Here's the part of the code that I'm using to create the OLEDB connections:
// Creating a connection to the database:
EzSqlOleDbCM srcConn = Activator.CreateInstance(typeof(EzSqlOleDbCM), new object[] { package }) as EzSqlOleDbCM;
srcConn.SetConnectionString(_CL_SourceServerName, _CL_SourceDBName);
srcConn.Name = "SourceDB";
EzSqlOleDbCM destConn = Activator.CreateInstance(typeof(EzSqlOleDbCM), new object[] { package }) as EzSqlOleDbCM;
destConn.SetConnectionString(_CL_DestServerName, _CL_DestDBName);
destConn.Name = "DestDB";
The names beginning with "_CL_" are variables. The EzSqlOleDbCM SetConnectionString method does not have parameters for username and password.
Thanks,
EzApi is great for the problems it solves. This is one of the many cases where it falls short. You can see in the source code that they have hard-coded the connection manager to use SSPI:
/// <summary>
/// OleDb connection manager for SQL Server
/// </summary>
public class EzSqlOleDbCM : EzOleDbConnectionManager
{
public EzSqlOleDbCM(EzPackage parent) : base(parent) { }
public EzSqlOleDbCM(EzPackage parent, ConnectionManager c) : base(parent, c) { }
public EzSqlOleDbCM(EzPackage parent, string name) : base(parent, name) { }
public EzSqlOleDbCM(EzProject parentProject, string streamName) : base(parentProject, streamName) { }
public EzSqlOleDbCM(EzProject parentProject, string streamName, string name) : base(parentProject, streamName, name) { }
public void SetConnectionString(string server, string db)
{
ConnectionString = string.Format("provider=sqlncli11;integrated security=sspi;database={0};server={1};OLE DB Services=-2;Auto Translate=False;Connect Timeout=300;",
db, server);
}
}
What can be done?
Modify the source code to accommodate user IDs and passwords
Do as much as you can in EzApi and then revert to using the base SSIS object model - see the LoadFromXML portion - but since this is a connection manager, that will pretty much be everything
I don't think you can fake your way through it by adding Expressions to the connection manager itself, as when it attempts to set metadata during development/object creation, the expressions won't yet be active
Give up on EzAPI - I know I have. I find using Biml far easier for the programmatic creation of SSIS packages. And it's supported, whereas EzAPI appears to be abandoned.
Here is the source code for the connection managers in EzAPI. Based on that, EzSqlOleDbCM (which is what you are using) extends EzOleDbConnectionManager, which in turn extends EzConnectionManager.
You are using the SetConnectionString method on EzSqlOleDbCM, which unfortunately is hard-coded to use only Windows Auth.
So here are the options I would try:
Option 1:
EzConnectionManager exposes a property called ConnectionString with a setter, so you can directly assign the connection string (with your user name and password) using this property and avoid the SetConnectionString method call. For example:
srcConn.ConnectionString = @"Provider=my_oledb_provider;Data Source=myServerAddress;Initial Catalog=myDataBase;User Id=myUsername;Password=myPassword";
Option 2:
Additionally, EzOleDbConnectionManager exposes the UserName and Password properties, so you can use these to specify your username and password. However, if you do this, you cannot use the SetConnectionString method, because it is hard-coded to use Windows Auth; you would have to overwrite the connection string again using the previous option.
srcConn.UserName = "my_user_name";
srcConn.Password = "my_password";
Option 3:
If you are using the EzAPI source code directly in your project, I would add another convenience method, say SetSqlAuthConnectionString, which accepts a user name and password:
public void SetSqlAuthConnectionString(string server, string db, string user, string pwd)
{
    // Notice that we are just setting the ConnectionString property.
    ConnectionString = string.Format("Provider=my_oledb_provider;Data Source={0};Initial Catalog={1};User Id={2};Password={3}", server, db, user, pwd);
}
Then you can use as follows:
srcConn.SetSqlAuthConnectionString(myServer, myDB, userName, password);
Alternatively, at an even higher level: I would also give Biml a shot, if that serves your needs. Also, I wrote a library similar to EzApi called Pegasus, which you can find here. You can look at the source code and re-use it, or use the code as is. Both EzAPI and Pegasus are wrappers around the SSIS object model API. You can use the SSIS API directly, but doing so is a pain, since you will write a lot of boilerplate code and repeat yourself. You are better off writing your own wrapper against the official API, or using something like EzAPI or Pegasus.
If SSIS is your day job and you create a lot of packages according to some patterns, then I definitely recommend you look into package automation. The development and testing time savings are huge. If you are unwilling to use third-party libraries (like EzApi or Pegasus), I recommend you take a look at the source code of those libraries and roll your own, or use Biml.
Let me know if you need any guidance/comments on package automation.
I created a library class in C# with this code, as you can see:
namespace ManagedCodeAndSQLServer
{
public class BaseFunctionClass
{
public BaseFunctionClass()
{
}
[SqlProcedure]
public static void GetMessage(SqlString strName, out SqlString strMessge)
{
strMessge = "Welcome," + strName + ", " + "your code is getting executed under CLR !";
}
}
}
I built this project with the UNSAFE Permission Set property, and I added the DLL to SQL Server using this code:
use master;
grant external access assembly to [sa];
use SampleCLR;
CREATE ASSEMBLY ManagedCodeAndSQLServer
AUTHORIZATION dbo
FROM 'd:\ManagedCodeAndSQLServer.dll'
WITH PERMISSION_SET = UNSAFE
GO
It added the assembly as part of my database.
I want to call the function as you can see:
CREATE PROCEDURE usp_UseHelloDotNetAssembly
@name nvarchar(200),
@msg nvarchar(MAX) OUTPUT
AS EXTERNAL NAME ManagedCodeAndSQLServer.[ManagedCodeAndSQLServer.
BaseFunctionClass].GetMessage
GO
But I get this error:
Msg 6505, Level 16, State 2, Procedure usp_UseHelloDotNetAssembly, Line 1
Could not find Type 'ManagedCodeAndSQLServer.
BaseFunctionClass' in assembly 'ManagedCodeAndSQLServer'.
The problem is subtle, and in an online format such as here, it can even appear to simply be a matter of formatting. But the issue is entirely contained within the square brackets:
AS EXTERNAL NAME ManagedCodeAndSQLServer.[ManagedCodeAndSQLServer.
BaseFunctionClass].GetMessage
There is an actual newline character after "ManagedCodeAndSQLServer." that should not be there. It is even reflected in the error message:
Msg 6505, Level 16, State 2, Procedure usp_UseHelloDotNetAssembly, Line 1
Could not find Type 'ManagedCodeAndSQLServer.
BaseFunctionClass' in assembly 'ManagedCodeAndSQLServer'.
which again looks like a matter of word-wrapping ;-), but isn't. If the newline is removed so that the T-SQL appears as follows:
AS EXTERNAL NAME ManagedCodeAndSQLServer.[ManagedCodeAndSQLServer.BaseFunctionClass].GetMessage;
then it will work just fine.
Some additional notes:
No need to ever grant sa anything. Any member of the sysadmin fixed server role already has all permissions.
There is no need for the code shown in the question to be marked as anything but SAFE. Please do not mark any Assembly as UNSAFE unless absolutely necessary.
There is no need to set the Database to TRUSTWORTHY ON. That is a security risk and should be avoided unless absolutely necessary.
If you are learning SQLCLR, please see the series I am writing on that topic on SQL Server Central: Stairway to SQLCLR (free registration is required to read their content, but it's worth it :-).
I am working on a program that will read in a text file and then insert areas of the text file into different columns on a database. The text file is generally set up like this:
"Intro information"
"more Intro information"
srvrmgr> "information about system"
srvrmgr> list parameters for component *ADMBatchProc*
"Headers"
*Name of record* *alias of record* *value of record*
The columns create a table containing all of the setting information for this component. Once all of the settings are listed, the file moves to another component and returns all the information for that component in a new table. I need to read in the component and the information in the tables without the headers or the other information. I will then need to be able to transfer that data into a database. The columns are fixed-width in each table within the file.
Any recommendations about how to approach this are welcome. I have never read in a file this complex, so I don't really know how to approach ignoring a lot of information while trying to get other information ready for a database. Also, the component value I am trying to gather always follows the word "component" on a line that starts with "srvrmgr".
The '*' represents areas that will be put into the database.
Siebel Enterprise Applications Siebel Server Manager, Version 8.1.1.11 [23030] LANG_INDEPENDENT
Copyright (c) 1994-2012, Oracle. All rights reserved.
The Programs (which include both the software and documentation) contain
proprietary information; they are provided under a license agreement containing
restrictions on use and disclosure and are also protected by copyright, patent,
and other intellectual and industrial property laws. Reverse engineering,
disassembly, or decompilation of the Programs, except to the extent required to
obtain interoperability with other independently created software or as specified
by law, is prohibited.
Oracle, JD Edwards, PeopleSoft, and Siebel are registered trademarks of
Oracle Corporation and/or its affiliates. Other names may be trademarks
of their respective owners.
If you have received this software in error, please notify Oracle Corporation
immediately at 1.800.ORACLE1.
Type "help" for list of commands, "help <topic>" for detailed help
Connected to 1 server(s) out of a total of 1 server(s) in the enterprise
srvrmgr> configure list parameters show PA_NAME,PA_ALIAS,PA_VALUE
srvrmgr>
srvrmgr> list parameters for component ADMBatchProc
PA_NAME PA_ALIAS PA_VALUE
---------------------------------------------------------------------- ------------------------------------- --------------------------------------------------------------------------------------------------------------------
ADM Data Type Name ADMDataType
ADM EAI Method Name ADMEAIMethod Upsert
ADM Deployment Filter ADMFilter
213 rows returned.
srvrmgr> list parameters for component ADMObjMgr_enu
PA_NAME PA_ALIAS PA_VALUE
---------------------------------------------------------------------- ------------------------------------- --------------------------------------------------------------------------------------------------------------------
AccessibleEnhanced AccessibleEnhanced False
This is the beginning of the text file. It a produced in a system called Siebel to show all of the settings for this environment. I need to pull the component name (there are multiple on the actual file but the ones shown here are 'ADMBatchProc' and 'ADMObjMgr_enu'), and then the data shown on the table below it that was created by Siebel. The rest of the information is irrelevant for the purpose of the task I need.
I would recommend using Test-Driven Development techniques in this case. I'm guessing that your possible variations of input format are near infinite.
Try this:
1) Create an interface that will represent the data operations or parsing logic you expect the application to perform. For example:
public interface IParserBehaviors {
void StartNextComponent();
void SetTableName(string tableName);
void DefineColumns(IEnumerable<string> columnNames);
void LoadNewDataRow(IEnumerable<object> rowValues);
DataTable ProduceTableForCurrentComponent();
// etc.
}
2) Gather as many small examples of discrete inputs that have well-defined behaviors as possible.
3) Inject a behaviors handler into your parser. For example:
public class Parser {
private const string COMPONENT_MARKER = "srvrmgr";
private readonly IParserBehaviors _behaviors;
public Parser(IParserBehaviors behaviors) {
_behaviors = behaviors;
}
public void ReadFile(string filename) {
// read the file line by line
foreach (string line in File.ReadLines(filename)) {
// maintain some state
if (line.StartsWith(COMPONENT_MARKER)) {
DataTable table = _behaviors.ProduceTableForCurrentComponent();
// save table to the database
_behaviors.StartNextComponent();
}
else if (/* condition */) {
// parse some text
_behaviors.LoadNewDataRow(values);
}
}
}
}
4) Create tests around the expected behaviors, using your preferred mocking framework. For example:
public void FileWithTwoComponents_StartsTwoNewComponents() {
string filename = "twocomponents.log";
Mock<IParserBehaviors> mockBehaviors = new Mock<IParserBehaviors>();
Parser parser = new Parser(mockBehaviors.Object);
parser.ReadFile(filename);
mockBehaviors.Verify(mock => mock.StartNextComponent(), Times.Exactly(2));
}
This way, you will be able to integrate components under controlled tests. When (not if) someone runs into a problem, you can distill which case wasn't covered, and add a test surrounding that behavior after extracting the case from the log being used. Separating concerns this way also allows your parsing logic to be independent of your data-operation logic. Parsing specific behaviors seems to be central to your application, so it seems like a perfect fit for creating some domain-specific interfaces.
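For the fixed-width tables themselves, here is a parsing sketch (an illustration, not part of the original answer; it assumes columns separated by a single space, with widths read off the dashed separator line rather than hard-coded):

```csharp
using System;
using System.Linq;

public static class FixedWidthParser
{
    // Derive column widths from the dashed separator line, e.g. "---- ------ ---"
    public static int[] WidthsFromSeparator(string separator)
    {
        return separator.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                        .Select(run => run.Length)
                        .ToArray();
    }

    // Cut one data row into trimmed fields using those widths.
    public static string[] ParseRow(string line, int[] widths)
    {
        var fields = new string[widths.Length];
        int pos = 0;
        for (int i = 0; i < widths.Length; i++)
        {
            int available = Math.Max(0, line.Length - pos);
            int len = Math.Min(widths[i], available);
            fields[i] = len > 0 ? line.Substring(pos, len).Trim() : "";
            pos += widths[i] + 1; // +1 skips the separating space
        }
        return fields;
    }
}
```

A row like "ADM EAI Method Name ... Upsert" then comes out as one field per PA_NAME/PA_ALIAS/PA_VALUE column, ready to load into your DataTable.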
You'll want to read the text file using a StreamReader:
using (StreamReader reader = new StreamReader(path))
{
    string contents = reader.ReadToEnd();
    Console.WriteLine(contents); // Displays your file - now you can decide how to manipulate it.
}
Perhaps then you'll use Regex to capture the data you'd like to insert.
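For example, a sketch of pulling the component name out of the "srvrmgr> list parameters for component ..." lines (the pattern here is my assumption based on the sample output above):

```csharp
using System.Text.RegularExpressions;

public static class ComponentLineParser
{
    private static readonly Regex ComponentPattern =
        new Regex(@"^srvrmgr>\s+list parameters for component\s+(\S+)");

    // Returns the component name, or null if the line is not a component header.
    public static string TryGetComponent(string line)
    {
        Match m = ComponentPattern.Match(line);
        return m.Success ? m.Groups[1].Value : null;
    }
}
```

Lines that don't start a component table (headers, separators, the license banner) simply return null and can be skipped.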
You might insert into the db like this:
using (TransactionScope transactionScope = new TransactionScope())
{
using (SqlConnection connection = new SqlConnection(connectionString))
{
connection.Open();
SqlCommand command1 = new SqlCommand(
    "INSERT INTO People ([FirstName], [LastName], [MiddleInitial]) VALUES('John', 'Doe', null)",
    connection);
SqlCommand command2 = new SqlCommand(
    "INSERT INTO People ([FirstName], [LastName], [MiddleInitial]) VALUES('Jane', 'Doe', null)",
    connection);
command1.ExecuteNonQuery();
command2.ExecuteNonQuery();
}
transactionScope.Complete();
}
Examples adapted from Wouter de Kort's C# 70-483.
I started with the solution here: http://social.technet.microsoft.com/wiki/contents/articles/20547.biztalk-server-dynamic-schema-resolver-real-scenario.aspx
which matches my scenario perfectly, except for the send port, but that isn't necessary. I need the receive port to choose the file and apply a schema to disassemble it. From there, the orchestration does the mapping, some of it custom, etc.
I've done everything in the tutorial but I keep getting the following error.
"There was a failure executing the receive pipeline... The body part is NULL"
The things I don't get from the tutorial, but don't believe should be an issue, are:
I created a new solution and project to make the custom pipeline component (reference figure 19), and thus the DLL file, meaning it is in its own namespace. However, it looks like in the tutorial they created the project within the main BizTalk solution (i.e. the one with the pipeline and the orchestration), and thus the namespace has "TechNetWiki.SchemaResolver." in it. Should I make the custom pipeline component have the namespace of my main solution? I'm assuming this shouldn't matter, because I should be able to use this component in other solutions, as it is meant to be generic to the business rules associated with the BizTalk application.
The other piece I don't have is in Figure 15: under the "THEN Action" they have it equal the destination schema they would like to disassemble to, but then they put #Src1 at the end of "http://TechNetWiki.SchemaResolver.Schemas.SRC1_FF#Src1". What is the #Src1 for?
In the sample you've linked to, the Probe method of the pipeline component is pushing the first 4 characters from the filename into a typed message that is then passed into the rules engine. It's those 4 characters that match the "SRC1" in the example.
string srcFileName = pInMsg.Context.Read("ReceivedFileName", "http://schemas.microsoft.com/BizTalk/2003/file-properties").ToString();
srcFileName = Path.GetFileName(srcFileName);
//Substring the first four characters to get the source code used to call the BRE API
string customerCode = srcFileName.Substring(0, 4);
//create an instance of the XML object
XmlDocument xmlDoc = new XmlDocument();
xmlDoc.LoadXml(string.Format(@"<ns0:Root xmlns:ns0='http://TechNetWiki.SchemaResolver.Schemas.SchemaResolverBRE'>
<SrcCode>{0}</SrcCode>
<MessageType></MessageType>
</ns0:Root>", customerCode));
//retrieve the source code from our cache dictionary if it is there
if (cachedSources.ContainsKey(customerCode))
{
messageType = cachedSources[customerCode];
}
else
{
TypedXmlDocument typedXmlDocument = new TypedXmlDocument("TechNetWiki.SchemaResolver.Schemas.SchemaResolverBRE", xmlDoc);
Microsoft.RuleEngine.Policy policy = new Microsoft.RuleEngine.Policy("SchemaResolverPolicy");
policy.Execute(typedXmlDocument);
So the matching rule is based on the first 4 characters of the filename. If one isn't matched, the Probe returns false - i.e. unrecognised.
The final part is that the message type is pushed into the returned message - this is made up of the namespace and the root schema node with a # separator - so your #Src1 is the root node.
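To illustrate (a trivial sketch; the namespace and root node are the ones from the article's Figure 15):

```csharp
public static class MessageTypeExample
{
    // BizTalk's message type is the schema's target namespace and
    // the root node name joined with '#'.
    public static string Build(string targetNamespace, string rootNode)
    {
        return targetNamespace + "#" + rootNode;
    }
}
```

So Build("http://TechNetWiki.SchemaResolver.Schemas.SRC1_FF", "Src1") produces exactly the "...#Src1" value you asked about.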
You need to implement IProbeMessage on the class.
I forgot to add IProbeMessage in the code in the article; it is updated now, but it was already there in the sample source code.
Src1 is the root node name of the schema. I mentioned in the article that the message type is TargetNamespace#Root.
I recommend downloading the sample code.
I hope this will help you.
I use a simple interceptor to intercept the SQL string that NHibernate generates, for logging purposes, and it works fine.
public class SessionManagerSQLInterceptor : EmptyInterceptor, IInterceptor
{
NHibernate.SqlCommand.SqlString IInterceptor.OnPrepareStatement(NHibernate.SqlCommand.SqlString sql)
{
NHSessionManager.Instance.NHibernateSQL = sql.ToString();
return sql;
}
}
This however captures the SQL statement without the parameter values; they are replaced by '?'
E.g.: .... WHERE USER0_.USERNAME = ?
The only alternative approach I found so far is using log4net's NHibernate.SQL appender, which logs SQL statements including parameter values, but that is not serving me well.
I need to use an interceptor so that, for example, when I catch an exception I can log the specific SQL statement that caused the persistence problem, including the values it contained, and log it, mail it, etc. This speeds up debugging a great deal compared to going into log files looking for the query that caused the problem.
How can I get the full SQL statements, including parameter values, that NHibernate generates at runtime?
Here is (roughly sketched) what I did:
Create a custom implementation of the IDbCommand interface which internally delegates all the real work to SqlCommand (assume it is called LoggingDbCommand for the purpose of discussion).
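A minimal sketch of that decorator (my illustration, assuming the classic System.Data IDbCommand surface; every member just forwards to the wrapped command, and the try/catch logging shown further below goes into the Execute* methods):

```csharp
using System;
using System.Data;

public class LoggingDbCommand : IDbCommand
{
    // m_command holds the inner, real command (e.g. a SqlCommand).
    private readonly IDbCommand m_command;
    public LoggingDbCommand(IDbCommand inner) { m_command = inner; }

    public string CommandText { get { return m_command.CommandText; } set { m_command.CommandText = value; } }
    public int CommandTimeout { get { return m_command.CommandTimeout; } set { m_command.CommandTimeout = value; } }
    public CommandType CommandType { get { return m_command.CommandType; } set { m_command.CommandType = value; } }
    public IDbConnection Connection { get { return m_command.Connection; } set { m_command.Connection = value; } }
    public IDataParameterCollection Parameters { get { return m_command.Parameters; } }
    public IDbTransaction Transaction { get { return m_command.Transaction; } set { m_command.Transaction = value; } }
    public UpdateRowSource UpdatedRowSource { get { return m_command.UpdatedRowSource; } set { m_command.UpdatedRowSource = value; } }

    public void Cancel() { m_command.Cancel(); }
    public IDbDataParameter CreateParameter() { return m_command.CreateParameter(); }
    public void Dispose() { m_command.Dispose(); }
    public int ExecuteNonQuery() { return m_command.ExecuteNonQuery(); }
    public IDataReader ExecuteReader() { return m_command.ExecuteReader(); }
    public IDataReader ExecuteReader(CommandBehavior behavior) { return m_command.ExecuteReader(behavior); }
    public object ExecuteScalar() { return m_command.ExecuteScalar(); }
    public void Prepare() { m_command.Prepare(); }
}
```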
Create a derived class of the NHibernate class SqlClientDriver. It should look something like this:
public class LoggingSqlClientDriver : SqlClientDriver
{
public override IDbCommand CreateCommand()
{
return new LoggingDbCommand(base.CreateCommand());
}
}
Register your Client Driver in the NHibernate Configuration (see NHibernate docs for details).
Mind you, I did all this for NHibernate 1.1.2 so there might be some changes required for newer versions. But I guess the idea itself will still be working.
OK, the real meat will be in your implementation of LoggingDbCommand. I will only draft you some example method implementations, but I guess you'll get the picture and can do likewise for the other Execute*() methods.:
public int ExecuteNonQuery()
{
try
{
// m_command holds the inner, true, SqlCommand object.
return m_command.ExecuteNonQuery();
}
catch
{
LogCommand();
throw; // pass exception on!
}
}
The guts are, of course, in the LogCommand() method, in which you have "full access" to all the details of the executed command:
The command text (with the parameter placeholders in it, as specified), through m_command.CommandText
The parameters and their values, through the m_command.Parameters collection
What is left to do (I've done it, but can't post it due to contracts - lame but true, sorry) is to assemble that information into a proper SQL string (hint: don't bother replacing the parameters in the command text; just list them underneath, like NHibernate's own logger does).
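Since that piece couldn't be posted, here is a rough sketch of my own (an assumption, not the withheld original): list the parameter values beneath the command text, the way NHibernate's own SQL logger does:

```csharp
using System.Data;
using System.Text;

public static class CommandLogger
{
    // Formats a command as its text followed by one "name = value" line
    // per parameter, ready to be logged or mailed.
    public static string Format(IDbCommand command)
    {
        var sb = new StringBuilder();
        sb.AppendLine(command.CommandText);
        foreach (IDataParameter p in command.Parameters)
        {
            sb.AppendFormat("  {0} = {1}", p.ParameterName, p.Value ?? "NULL");
            sb.AppendLine();
        }
        return sb.ToString();
    }
}
```

LogCommand() would then just pass the formatted string to whatever sink you use (the question's NHSessionManager, a logger, an e-mail, etc.).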
Sidebar: You might want to refrain from even attempting to log if the exception is something considered fatal (AccessViolationException, OOM, etc.), to make sure you don't make things worse by trying to log in the face of something already pretty catastrophic.
Example:
try
{
// ... same as above ...
}
catch (Exception ex)
{
if (!(ex is OutOfMemoryException || ex is AccessViolationException /* || others */))
LogCommand();
throw; // rethrow! original exception.
}
It is simpler (and works for all NH versions) to do this:
public class LoggingSqlClientDriver : SqlClientDriver
{
public override IDbCommand GenerateCommand(CommandType type, NHibernate.SqlCommand.SqlString sqlString, NHibernate.SqlTypes.SqlType[] parameterTypes)
{
SqlCommand command = (SqlCommand)base.GenerateCommand(type, sqlString, parameterTypes);
LogCommand(command);
        return command;
    }
}
Just an idea: could you implement a new log4net appender which takes all SQL statements (debug, with parameters) and holds the last one? When an error occurs, you can ask it for the last SQL statement and e-mail it.
Implement a custom ILoggerFactory, filter in LoggerFor for keyName equal to NHibernate.SQL, and set it via LoggerProvider.SetLoggersFactory. This works for the SQLite driver and should work for others.
By default, LoggerProvider creates a log4net logger via reflection if the log4net assembly is present. So the best implementation is a custom ILoggerFactory that delegates to the default one, except when the NHibernate.SQL log is requested.