I am using Dapper Extensions (DE) as my ORM. It is consumed in a Data Access Layer implemented using the Repository pattern. SQL Server Express is the back-end RDBMS.
DE automatically generates most of the queries for me. I want to log those auto-generated queries for debugging purposes.
There are two ways I can see to achieve this:
Get the SQL query generated by DE (before or after it is executed) and write it to the log. This is the preferred way for me, as I already have my logging module (using log4net) in place. The only thing I need is the SQL generated by DE.
Integrate DE with some logging tool. I read this answer. It looks possible using the MiniProfiler tool; but as I said above, I already have my logging module in place. I do not want to use another tool just for logging SQL queries.
How do I log/get the SQL queries auto-generated by Dapper Extensions without using any other logging tool?
The other, similar question is about Dapper. This question is about Dapper Extensions.
Looking at the comment from @MarcGravell and this question about doing the same with Dapper, MiniProfiler.Integrations is a better way to implement logging for Dapper Extensions.
The above linked question is about Dapper, but Dapper Extensions uses Dapper internally. So, if logging is implemented for Dapper, the same works for Dapper Extensions as well.
More details can be found on GitHub.
Sample code is as below:
var factory = new SqlServerDbConnectionFactory(connectionString);
CustomDbProfiler cp = new CustomDbProfiler();
using (var connection = DbConnectionFactoryHelper.New(factory, cp))
{
    // DB code
}
string log = cp.ProfilerContext.GetCommands();
You can use the built-in CustomDbProfiler via CustomDbProfiler.Current if that suits your need. cp.ProfilerContext.GetCommands() will return ALL the commands (successful and failed), no matter how many times you call the method. I am not sure, but it might be maintaining a concatenated string (a StringBuilder, maybe) internally. If that is the case, it may slow down performance. But in my case, logging is disabled by default; I only enable it when I need to debug something, so this is not a problem for me.
This may also raise a memory footprint issue if a single connection is used over a very large scope. To avoid this, make sure the CustomDbProfiler instance is disposed properly.
As mentioned in the question, I initially wanted to avoid this approach (using an external tool/library). But MiniProfiler.Integrations does NOT write the log itself; I can simply get all the generated queries and hand them to my logger module to dump into a file. That is why this now looks more suitable to me.
MiniProfiler.dll internally implements similar logic (in the StackExchange.Profiling.Data.ProfiledDbConnection and StackExchange.Profiling.Data.ProfiledDbCommand classes), which is mentioned here and here. So, if I decide (maybe in the future) to bypass MiniProfiler, I can use this implementation myself.
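For illustration, wiring a profiled connection by hand might look roughly like this (a sketch only; CustomDbProfiler stands in for any IDbProfiler implementation, and connectionString is an assumption):

```csharp
using System.Data.Common;
using System.Data.SqlClient;
using StackExchange.Profiling.Data;

// Wrap a plain SqlConnection so every command created from it goes
// through ProfiledDbCommand, where the generated SQL text is visible.
DbConnection inner = new SqlConnection(connectionString);
IDbProfiler profiler = new CustomDbProfiler(); // any IDbProfiler implementation
using (DbConnection connection = new ProfiledDbConnection(inner, profiler))
{
    // Dapper / Dapper Extensions calls go here; each command's
    // CommandText is reported to the profiler before execution.
}
```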
The Dapper Extensions project is open source; everyone knows that. I downloaded it from GitHub and modified it to meet my needs.
Dapper Extensions builds/generates SQL queries internally in the SqlGeneratorImpl class. There are multiple methods in this class that generate the various queries.
I added the following property to the DapperExtensions.DapperExtensions static class:
static readonly object _lock = new object();

static string lastGeneratedQuery;
public static string LastGeneratedQuery
{
    get
    {
        lock (_lock)
        {
            return lastGeneratedQuery;
        }
    }
    internal set
    {
        lock (_lock)
        {
            lastGeneratedQuery = value;
        }
    }
}
I also set this property in various methods of the SqlGeneratorImpl class. Following is an example of how I set it in the Select method:
public virtual string Select(IClassMapper classMap, IPredicate predicate, IList<ISort> sort, IDictionary<string, object> parameters)
{
    ......
    ......
    StringBuilder sql = new StringBuilder(string.Format("SELECT {0} FROM {1}",
    ......
    ......
    DapperExtensions.LastGeneratedQuery = sql.ToString();
    return sql.ToString();
}
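With the modified toolkit in place, reading the captured SQL from calling code could look like this (a sketch; the GetList call, the Person entity, and the log4net logger instance are illustrative):

```csharp
// Any Dapper Extensions call populates the property as a side effect...
var people = connection.GetList<Person>();

// ...so the captured SQL can be handed to the existing log4net module.
log.Debug(DapperExtensions.LastGeneratedQuery);
```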
Basic tests run well; I have not yet tested this thoroughly. I will update this answer in case of any change.
Please note that I do not recommend this as a standard solution; it is just a hack that works for my needs. I would really like to see this as a regular feature in the library. Please post an answer if you have a better solution; otherwise, please comment to improve the solution suggested here.
After this pull request was merged into the master branch, this is hopefully now available out of the box, with no need to download and modify the toolkit source code anymore. Note that I have not verified this.
Related
Background:
I'm in the process of integrating Dapper into an existing application that has a legacy DAL based on the DAAB of years gone by. This is a fairly large code base, so it has to be done incrementally. My first attempt is to take a method that calls a reader and populates a collection of objects.
What I'm noticing is that the Query extension method seems to be calling IDbCommand.ExecuteReader, and the legacy DAL implementation is being invoked instead of the Dapper version.
So my question is this: "Is there a way to explicitly force Dapper to use its implementation of IDbCommand.ExecuteReader and not the legacy version?"
Setup Connection: The open connection comes from the existing framework, and it completely wraps the IDbConnection interface, plus some. There is a lot of security around getting this connection, and other additions that seem to muddy the waters.
private static XyDbConnection GetConnection()
{
return XyDbConnection.GetConnection(ExternalFieldDataConstants.ConnectionName.SecureRepositoryConnection);
}
Callsite: Nothing terrible going on here, just setup of Dapper in a typical scenario.
private void GetFieldMappingTableDapper()
{
const string commandText = @"
SELECT
OD.ObjectDefId,
OD.Name,
FM.LOBField,
FM.XPath,
FM.XPathDataType,
FM.legacyOnly,
BA.NAME BusinessAreaName
FROM [FieldMapping] FM
JOIN [ObjectDef] OD
ON OD.ObjectDefId = FM.ObjectDefId
JOIN [BusinessArea] BA
ON BA.BusinessAreaId = OD.BusinessAreaId";
using (var conn = GetConnection())
{
_mappingsCache = conn.Query<FieldMappingEx>(commandText).ToList();
}
}
Pictures are worth a thousand words. (Screenshots omitted here.) The first showed setting up the call to Dapper; the second showed what happens when I step (F11) into the call; and the most interesting one showed the legacy command object being invoked from inside Dapper (it seems).
I've used Dapper for years, even with Oracle :), and this is the first time I've ever had a hiccup, period.
Thank you,
Stephen
Yes, Dapper calls the CreateCommand method on the connection. It has to; it doesn't know anything about what type of connection you are using. That is literally the only way a provider-agnostic library should create a command. Is there a different way you expected it to create an ADO.NET command?
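In essence, the provider-agnostic pattern looks like this (a sketch, not Dapper's actual source). Because the library only ever sees the interface, a wrapping connection's CreateCommand returns the wrapper's own command type, which is why the legacy ExecuteReader runs:

```csharp
using System.Data;

// A provider-agnostic helper can only do this: ask the connection
// for a command and execute it through the interface.
static IDataReader ExecuteQuery(IDbConnection connection, string sql)
{
    IDbCommand command = connection.CreateCommand();
    command.CommandText = sql;
    // If 'connection' is a wrapper type, this call dispatches to the
    // wrapper's ExecuteReader, not the underlying provider's.
    return command.ExecuteReader();
}
```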
I'm just looking to write a generic function in C# / .NET 4.0 to which I can send a database connection string and a query string, and get my results back.
Not being that well versed in C# and its various objects, I'm not really sure what the best options might be for returning this information.
I know there are DataTable and DataSet, and that's really about it. What I'm looking for is an object that is fairly efficient and whose data members I can easily access. I'm sure it wouldn't be a problem to write my own, but I know there has to be some other object in .NET 4.0 that I can use.
The problem with such a design is that it's very difficult to use best practices for security and to prevent SQL injection. Passing a raw SQL statement to a function pretty much eliminates the ability to defend against SQL injection unless you do it carefully.
As a general rule, if you're taking in any user input, you want to be sure that you're using either parameterized stored procedures or parameterized queries. The recommended pattern is to pass in a statement and an array of type object that contains the correct number of parameter values, and to build the parameter collection inside your function.
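As a plain ADO.NET illustration of that pattern (a sketch; the table, column, and parameter names are made up):

```csharp
using System.Data;
using System.Data.SqlClient;

// Build the parameter collection inside the function so callers
// never concatenate user input into the SQL text.
static DataTable GetCustomersByCity(string connectionString, string city)
{
    var table = new DataTable();
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT Id, Name FROM Customers WHERE City = @city", connection))
    {
        command.Parameters.AddWithValue("@city", city); // a value, not SQL text
        using (var adapter = new SqlDataAdapter(command))
        {
            adapter.Fill(table); // opens and closes the connection itself
        }
    }
    return table;
}
```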
We use the Microsoft Data Access Application Blocks from their Patterns and Practices library, which contain functionality that really makes this a lot easier.
When using this, the connection string is stored in app.config or web.config, and you pass in the NAME of the connection string. The Data Application Block code takes care of looking it up, creating the command and connection, and performing the query.
Our code looks like this when using this pattern:
public static DataSet ExecuteDataset(string CallingApp, string ConnectionStringName, string StoredProcName, object[] ParameterValues)
{
    DataSet ReturnValue = new DataSet();
    Microsoft.Practices.EnterpriseLibrary.Data.Database db = Microsoft.Practices.EnterpriseLibrary.Data.DatabaseFactory.CreateDatabase(ConnectionStringName);
    try
    {
        ReturnValue = db.ExecuteDataSet(StoredProcName, ParameterValues);
    }
    catch (Exception ex)
    {
        // a bunch of code for exception handling removed
    }
    return ReturnValue;
}
This is the code for getting a DataSet. There are plenty of other functions available, and this code might not work against the newest version of the Data Access Library; we're using a slightly older version, but this should give you an idea of how easy it is to use.
The new and improved method is not to do that, but to have a method that gets the data you want.
E.g. define a CustomerOrder class, and have a database module with a function called, say:
GetOpenOrdersForCustomer(int argCustomerID)
that returns a collection of them.
Look up Entity Framework POCO etc.
DataSet and DataTable are earlier incarnations of this sort of thing; they are pretty hefty, with a lot of baggage.
If you want/need to do a bit more hand-coding, interfaces like IDataReader, IDbCommand and IDbConnection will isolate you from a number of implementation details in your code, for instance whether your back end is SQL Server, MySQL, etc.
Even then, you are still better off not passing things like DataTable and SqlCommand around in your application code.
Too many chances to plaster the database schema all over the code, leaving you with a huge and soon-to-be-realised potential for technical debt.
Hide it all. In Code First POCO it's practically invisible.
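The pattern suggested above might be sketched like this (illustrative names only; CustomerOrder and the repository interface are assumptions):

```csharp
using System.Collections.Generic;

// A plain POCO: no DataTable, no schema details leak out of the data layer.
public class CustomerOrder
{
    public int OrderId { get; set; }
    public decimal Total { get; set; }
}

public interface IOrderRepository
{
    // Callers ask for data by intent, not by SQL text.
    IList<CustomerOrder> GetOpenOrdersForCustomer(int customerId);
}
```

The rest of the application depends only on IOrderRepository; the SQL (or EF mapping) lives behind it.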
Have you thought about using an ORM like NHibernate? http://nhforge.org/Default.aspx
It will do everything related to the database, and you will be working with nice object classes.
Otherwise, your function can return a DataTable, and you can write another function that takes the DataTable and extracts the data into an object class. This new function would look like:
public object PopulateData(DataTable table, Enum ClassType); // Or something similar
You can then use reflection to map DataTable columns to class members.
But I would recommend using an ORM.
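For completeness, a minimal reflection-based mapper along the lines described above might look like this (a sketch; it assumes property names match column names exactly):

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Reflection;

// Map each DataRow to a T by matching column names to writable
// properties via reflection.
static List<T> PopulateData<T>(DataTable table) where T : new()
{
    var result = new List<T>();
    PropertyInfo[] properties = typeof(T).GetProperties();
    foreach (DataRow row in table.Rows)
    {
        var item = new T();
        foreach (PropertyInfo property in properties)
        {
            if (table.Columns.Contains(property.Name)
                && property.CanWrite
                && row[property.Name] != DBNull.Value)
            {
                property.SetValue(item, row[property.Name], null);
            }
        }
        result.Add(item);
    }
    return result;
}
```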
It depends on the scenario you want to cover.
For example, in an MVC application it's fairly maintainable to deploy a data layer based on Entity Framework, because it's easily testable and the communication with the database is very straightforward, through simple objects.
Another approach could be to use an already-developed component, such as the Data Access Application Block from Enterprise Library, or perhaps to develop your own custom database factory... It just depends on the purpose of your development.
Could you give more details about what kind of application we are talking about? Is it an existing database or a new one?
Another question: what exactly do you expect to pass in as the query string? If you mean raw SQL statements, I will tell you that it's a very bad and dangerous practice that you should avoid.
Best regards.
You may want something like:
public static DbDataReader Query(
string connectionString, string selectCommand,
params KeyValuePair<string, object>[] parameters)
{
    var connection = new OleDbConnection(connectionString);
    var command = new OleDbCommand(selectCommand, connection);
    foreach (var p in parameters)
        command.Parameters.Add(new OleDbParameter(p.Key, p.Value));
    connection.Open();
    // CloseConnection makes disposing the returned reader also close the connection.
    var result = command.ExecuteReader(CommandBehavior.CloseConnection);
    return result;
}
OleDb is used here just as an example.
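A hypothetical call site might look like this (table and column names are made up; note that OleDb uses positional `?` parameters, and disposing the reader releases the connection):

```csharp
using (DbDataReader reader = Query(
    connectionString,
    "SELECT Name FROM People WHERE HeightInCm > ?",
    new KeyValuePair<string, object>("height", 180)))
{
    while (reader.Read())
        Console.WriteLine(reader.GetString(0));
}
```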
For a utility I'm working on, the client would like to be able to generate graphic reports on the data that has been collected. I can already generate a couple canned graphs (using ZedGraph, which is a very nice library); however, the utility would be much more flexible if the graphs were more programmable or configurable by the end-user.
TLDR version
I want users to be able to use something like SQL to safely extract and select data from a List of objects that I provide and can describe. What free tools or libraries will help me accomplish this?
Full version
I've given thought to using IronPython, IronRuby, and LuaInterface, but frankly they're all a bit overpowered for what I want to do. My classes are fairly simple, along the lines of:
class Person:
string Name;
int HeightInCm;
DateTime BirthDate;
Weight[] WeighIns;
class Weight:
int WeightInKg;
DateTime Date;
Person Owner;
(exact classes have been changed to protect the innocent).
To come up with the data for the graph, the user will choose whether it's a bar graph, scatter plot, etc., and then to actually obtain the data, I would like to obtain some kind of List from the user simply entering something SQL-ish along the lines of
SELECT Name, AVG(WeighIns) FROM People
SELECT WeightInKg, Owner.HeightInCm FROM Weights
And as a bonus, it would be nice if you could actually do operations as well:
SELECT WeightInKg, (Date - Owner.BirthDate) AS Age FROM Weights
The DSL doesn't have to be compliant SQL in any way; it doesn't even have to resemble SQL, but I can't think of a more efficient descriptive language for the task.
I'm fine filling in blanks; I don't expect a library to do everything for me. What I would expect to exist (but haven't been able to find in any way, shape, or form) is something like Fluent NHibernate (which I am already using in the project) where I can declare a mapping, something like
var personRequest = Request<Person>();
personRequest.Item("Name", (p => p.Name));
personRequest.Item("HeightInCm", (p => p.HeightInCm));
personRequest.Item("HeightInInches", (p => p.HeightInCm * CM_TO_INCHES));
// ...
var weightRequest = Request<Weight>();
weightRequest.Item("Owner", (w => w.Owner), personRequest); // Indicate a chain to personRequest
// ...
var people = Table<Person>("People", GetPeopleFromDatabase());
var weights = Table<Weight>("Weights", GetWeightsFromDatabase());
// ...
TryRunQuery(userInputQuery);
LINQ is so close to what I want to do, but AFAIK there's no way to sandbox it. I don't want to expose any unnecessary functionality to the end user; meaning I don't want the user to be able to send in and process:
from p in people select (p => { System.IO.File.Delete("C:\\something\\important"); return p.Name })
So does anyone know of any free .NET libraries that allow something like what I've described above? Or is there some way to sandbox LINQ? cs-script is close too, but it doesn't seem to offer sandboxing yet either. I'd be hesitant to expose the NHibernate interface either, as the user should have a read-only view of the data at this point in the usage.
I'm using C# 3.5, and pure .NET solutions would be preferred.
The bottom line is that I'm really trying to avoid writing my own parser for a subset of SQL that would only apply to this single project.
There is a way to sandbox LINQ or even C#: a sandboxed AppDomain. I would recommend you look into accepting and compiling LINQ in a locked-down domain.
Regarding NHibernate, perhaps you can pass the objects into the domain without exposing NHibernate at all (I don't know how NHibernate works). If that is not possible, perhaps the database connection used within the sandbox can be logged in as a user who is granted only SELECT permissions.
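On .NET Framework, setting up such a locked-down domain looks roughly like this (a sketch; how the compiled user query is loaded into the domain is left out):

```csharp
using System;
using System.Security;
using System.Security.Permissions;

// Create a domain that only has Execution permission: code inside it
// cannot touch the file system, network, registry, etc.
var setup = new AppDomainSetup
{
    ApplicationBase = AppDomain.CurrentDomain.BaseDirectory
};
var permissions = new PermissionSet(PermissionState.None);
permissions.AddPermission(
    new SecurityPermission(SecurityPermissionFlag.Execution));

AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, setup, permissions);
// Load and run the compiled user query inside 'sandbox'; a call like
// File.Delete there would throw a SecurityException.
```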
Maybe expressions will come in handy for you.
You could provide simple entry places for:
a) what to select - the user is expected to enter an expression only, probably member and arithmetic expressions - those are subclasses of the Expression class
b) how to filter things - again, only expressions are expected
c) ordering
d) joining?
Expressions don't let you do File.Delete, because you operate only on precise domain objects (which probably don't have this functionality). The only things you have to check are whether the parameters of the said expressions are of your domain types, and whether their return types are domain types (or generic types in the case of IEnumerable<> or IQueryable<>).
This might prove helpful.
I.e. expressions don't let you write multi-line statements.
Then you build your method chain in code, and voila - there comes the data.
I ended up using a little bit of a different approach. Instead of letting users pick arbitrary fields and make arbitrary graphs, I'm still presenting canned graphs, but I'm using Flee to let the user filter out exactly what data is used in the source of the graph. This works out nicely, because I ended up making a set of mappings from variable names to "accessors", and then using those mappings to inject variables into the user-entered filters. It ended up something like:
List<Mapping<Person>> mappings;
// ...
mappings.Add(new Mapping<Person>("Weight", p => p.Weight, "The person's weight (in pounds)"));
// ...
foreach (var m in mappings)
{
context.Variables[m.Name] = m.Accessor(p);
}
// ...
And you can even give an expression context an "owner" (think Ruby's instance_eval, where the context is executed with the scope of the specified object as this); then the user can even enter a filter like Weight > InputNum("The minimum weight to see"), and they will be prompted accordingly when the filter is executed, because I've defined a method InputNum in the owning class.
I feel like it was a good balance between effort involved and end result. I would recommend Flee to anyone who has a need to parse simple statements, especially if you need to extend those statements with your own variables and functions as well.
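For reference, a minimal Flee filter along these lines might look like this (a sketch; the namespace varies by Flee version, and the variable name and values are illustrative):

```csharp
using Flee.PublicTypes; // Ciloci.Flee in older releases

// Compile a user-entered boolean filter against named variables.
var context = new ExpressionContext();
context.Variables["Weight"] = 180;

IGenericExpression<bool> filter =
    context.CompileGeneric<bool>("Weight > 150");

bool passes = filter.Evaluate(); // evaluates the filter for Weight = 180
```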
I was wondering whether constantly reusing namespace names is valid per C# conventions/best practices.
I develop most of my programs in Java, and there I would have a package for implementations, e.g.:
com.ajravindiran.jolt.game.items.sql
com.ajravindiran.jolt.game.users.sql
com.ajravindiran.jolt.events.impl
com.ajravindiran.jolt.tasks.impl
Let's talk about com.ajravindiran.jolt.game.items.sql, which is closest to my situation. I recently wrote a library that wraps MySQL Connector/Net into an OODBMS.
So I have an interface called ISqlDataObject which has the following members:
bool Insert(SqlDatabaseClient client);
bool Delete(SqlDatabaseClient client);
bool Update(SqlDatabaseClient client);
bool Load(SqlDatabaseClient client);
and used like such:
public class SqlItem : Item, ISqlDataObject
{
public bool Load(SqlDatabaseClient client)
{
client.AddParameter("id", this.Id);
DataRow row = client.ReadDataRow("SELECT * FROM character_items WHERE item_uid = @id;");
this.Examine = (string)row["examine_quote"];
...
}
...
}
and called like:
SqlItem item = new SqlItem(itemId);
GameEngine.Database.Load(item);
Console.WriteLine(item.Examine);
So I was wondering whether it's OK to add the SQL editions of the items into something like JoltEnvironment.Game.Items.Sql, or should I just keep them at JoltEnvironment.Game.Items?
Thanks in advance, AJ Ravindiran.
For naming conventions and rules, see MSDN's Framework Guidelines on Names of Namespaces.
That being said, that won't cover this specific issue:
So i was wondering if it's ok to add the sql editions of the items into something like JoltEnvironment.Game.Items.Sql or should i just keep it at JoltEnvironment.Game.Items?
It is okay to do either, and the most appropriate one depends a bit on your specific needs.
If the game items will be used pervasively throughout the game, but the data access will only be used by a small portion, I would probably split it out into its own namespace (though probably not called Sql - I'd probably use Data or DataAccess, since you may eventually want to add non-SQL related information there, too).
If, however, you'll always use these classes along with the classes in the Items namespace, I'd probably leave them in a single namespace.
You're asking about naming conventions, and the answer is: it's really up to you.
I allow extra levels of hierarchy in a namespace if there will be multiple implementations. In your case, the .Sql is appropriate if there is some other storage mechanism that doesn't use SQL for queries - maybe XML/XPath. But if you don't have that, then the .Sql layer of naming doesn't seem necessary.
At that point, though, I'm wondering why you would use {games,users} at the prior level. It feels like the namespace is more naturally
JoltEnvironment.Game.Storage
..And the Fully-qualified type names would be
JoltEnvironment.Game.Storage.SqlItem
JoltEnvironment.Game.Storage.SqlUser
and so on.
If a namespace, like JoltEnvironment.Game.Items, has only one or two classes, it seems like it ought to be collapsed into a higher level namespace.
What are you calling SQL editions? Versions of SQL Server, or versions of database connections? If the latter, I would do something like:
JoltEnvironment.Game.Items.DataAccess.SQLServer
JoltEnvironment.Game.Items.DataAccess.MySQL
JoltEnvironment.Game.Items.DataAccess.Oracle
etc...
If the former, I thought ADO.NET would take care of that for you anyway, based on the provider, so everything under the same namespace would be OK.
What would be the best approach to allow users to define a WHERE-like constraints on objects which are defined like this:
Collection<object[]> data
Collection<string> columnNames
where object[] is a single row.
I was thinking about dynamically creating a strong-typed wrapper and just using Dynamic LINQ but maybe there is a simpler solution?
DataSets are not really an option, since the collections are rather huge (40,000+ records) and I don't want to create a DataTable and populate it every time I run a query.
What kind of queries do you need to run? If it's just equality, that's relatively easy:
public static IEnumerable<object[]> WhereEqual(
this IEnumerable<object[]> source,
Collection<string> columnNames,
string column,
object value)
{
int columnIndex = columnNames.IndexOf(column);
if (columnIndex == -1)
{
throw new ArgumentException();
}
return source.Where(row => Object.Equals(row[columnIndex], value));
}
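A hypothetical call, given the question's parallel collections (the column names and filter value are illustrative):

```csharp
var columnNames = new Collection<string> { "col1", "col2", "col3" };

// Lazily filters 'data' (a Collection<object[]>) to rows whose
// "col2" cell equals "xyz".
IEnumerable<object[]> matches = data.WhereEqual(columnNames, "col2", "xyz");
```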
If you need something more complicated, please give us an example of what you'd like to be able to write.
If I get your point, you'd like to let users write the WHERE clause externally - I mean, the users are real users and not developers, so you're seeking a solution for the bridge between the UI control and the where condition. I guessed this because you mentioned Dynamic LINQ.
So, if I'm correct, what you really want to do is:
give the user the ability to use column names
give the ability to describe a bool function (which will serve as the where criteria)
compose the query dynamically and run it
For this task let me propose: Rules from the System.Workflow.Activities.Rules namespace. For rules there are several designers available, not to mention the ones shipped with Visual Studio (for the web that's another question, but there are several for that too). I'd start with rules without workflow, then examine the examples on MSDN. It's a very flexible and customizable engine.
One other thing: LINQ is connected to this problem, as a function returning IQueryable can defer query execution; you can define a query in advance, and in another part of the code extend the returned queryable based on the user's condition (which can then be chained on with extension methods).
When just using object, LINQ isn't really going to help you very much... is it worth the pain? And Dynamic LINQ is certainly overkill. What is the expected way of using this? I can think of a few ways of adding basic Where operations.... but I'm not sure how helpful it would be.
How about embedding something like IronPython in your project? We use that to allow users to define their own expressions (filters and otherwise) inside a sandbox.
I'm thinking about something like this:
((col1 = "abc") or (col2 = "xyz")) and (col3 = "123")
Ultimately it would be nice to have support for LIKE operator with % wildcard.
Thank you all, guys - I've finally found it. It's called NQuery and it's available on CodePlex. Its documentation even contains an example with a binding to my very structure - a list of column names plus a list of object[] - plus a fully functional SQL query engine.
Just perfect.