Dapper: provide a default name for dynamic result sets with QueryMultiple - C#

TL;DR: Is there a way (using a type map or some other solution) to give columns in dynamic result sets a default name, such as "(No Column Name)", in Dapper when no column name is supplied?
I am writing a query editor that allows users to write and run user-supplied queries against MS SQL Server databases. I've been using Dapper for all our querying and it has been working beautifully for 99% of what we need. I've hit a snag though and I'm hoping someone has a solution.
The query editor is similar to SSMS. I don't know ahead of time what the script will look like, what shape or type the result set(s) will be, or even how many result sets will be returned. For this reason, I've been batching the scripts and using Dapper's QueryMultiple to read dynamic results from the GridReader. The results are then sent to a third-party UI data grid (WPF). The data grid knows how to consume dynamic data, and the only thing it requires to display a given row is at least one key-value pair with a non-null (but not necessarily unique) key and a nullable value. So far, so good.
The simplified version of the Dapper call looks something like this:
public async Task<IEnumerable<IEnumerable<T>>> QueryMultipleAsync<T>(string sql,
    object parameters,
    string connectionString,
    CommandType commandType = CommandType.Text,
    CancellationTokenSource cancellationTokenSource = null)
{
    using (IDbConnection con = _dbConnectionFactory.GetConnection(connectionString))
    {
        con.Open();
        var transaction = con.BeginTransaction();
        var sqlBatches = sql
            .ToUpperInvariant()
            .Split(new[] { " GO ", "\r\nGO ", "\n\nGO ", "\nGO\n", "\tGO ", "\rGO " }, StringSplitOptions.RemoveEmptyEntries);
        var batches = new List<CommandDefinition>();
        foreach (var batch in sqlBatches)
        {
            batches.Add(new CommandDefinition(batch, parameters, transaction, null, commandType, CommandFlags.Buffered, cancellationTokenSource.Token));
        }
        var resultSet = new List<List<T>>();
        foreach (var commandDefinition in batches)
        {
            using (GridReader reader = await con.QueryMultipleAsync(commandDefinition))
            {
                while (!reader.IsConsumed)
                {
                    try
                    {
                        var result = (await reader.ReadAsync<T>()).AsList();
                        if (result.FirstOrDefault() is IDynamicMetaObjectProvider)
                        {
                            (result as List<dynamic>).ConvertNullKeysToNoColumnName();
                        }
                        resultSet.Add(result);
                    }
                    catch (Exception e)
                    {
                        if (e.Message.Equals("No columns were selected"))
                        {
                            break;
                        }
                        else
                        {
                            throw;
                        }
                    }
                }
            }
        }
        try
        {
            transaction.Commit();
        }
        catch (Exception ex)
        {
            Trace.WriteLine(ex.ToString());
            if (transaction != null)
            {
                transaction.Rollback();
            }
        }
        return resultSet;
    }
}
public static IEnumerable<dynamic> ConvertNullKeysToNoColumnName(this IEnumerable<dynamic> rows)
{
    foreach (var row in rows)
    {
        if (row is IDictionary<string, object> rowDictionary)
        {
            rowDictionary.Where(x => string.IsNullOrEmpty(x.Key)).ToList().ForEach(x =>
            {
                var val = rowDictionary[x.Key];
                if (x.Value == val)
                {
                    rowDictionary.Remove(x);
                    rowDictionary.Add("(No Column Name)", val);
                }
                else
                {
                    Trace.WriteLine("Something went wrong");
                }
            });
        }
    }
    return rows;
}
This works with most queries (and for queries with only one unnamed result column), but the problem shows up when the user writes a query with more than one unnamed column, like this:
select COUNT(*), MAX(create_date) from sys.databases
In this case, Dapper returns a DapperRow that looks something like this:
{DapperRow, = '9', = '2/14/2020 9:51:54 AM'}
So the result set is exactly what the user asks for (i.e., values with no names or aliases) but I need to supply (non-unique) keys for all data in the grid...
My first thought was to simply change the null keys in the DapperRow object to a default value (like '(No Column Name)'), since DapperRow appears to be optimized for storage so that column keys are only stored once per result set, which is nice and would be a welcome performance bonus for queries with huge result sets. The DapperRow type is private, though. After searching around, I found that I could cast the DapperRow to an IDictionary<string, object> to access keys and values, and even set and remove values. That's where the ConvertNullKeysToNoColumnName extension method comes from. And it works... but only once.
Why? It appears that when a DapperRow containing multiple null or empty keys is cast to an IDictionary<string, object> and you call Remove(x) (where x is either the entire item or just the key of any single item with a null or empty key), all subsequent attempts to resolve other values with a null or empty key via the indexer item[key] fail to retrieve a value, even though the additional key-value pairs still exist in the object.
In other words, I can't remove or replace subsequent empty keys after the first one is removed.
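For illustration, a minimal repro of the behavior described above (a hypothetical sketch, not the production code; it assumes the unnamed columns come back with empty-string keys and that connection is any open SQL Server connection):
// Hypothetical repro: one row with two unnamed columns.
var row = connection.Query("select COUNT(*), MAX(create_date) from sys.databases").Single();
var dict = (IDictionary<string, object>)row;
// Both unnamed columns share the same empty key.
dict.Remove("");                                    // removes the first unnamed column
Console.WriteLine(dict.Count);                      // the second entry is still in the row...
Console.WriteLine(dict.TryGetValue("", out var v)); // ...but lookups by "" now return false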
Am I missing something obvious? Do I just need to alter the DapperRow via reflection and hope it doesn't have any weird side effects, or that the underlying data structure doesn't change later? Or do I take the performance/memory hit and copy/map the entire, potentially large, result set into a new sequence just to give empty keys a default value at runtime?

I suspect this is because the dynamic DapperRow object is actually not a 'normal' dictionary. It can have several entries with the same key. You can see this if you inspect the object in the debugger.
When you reference rowDictionary[x.Key], I suspect you will always get the first unnamed column.
If you call rowDictionary.Remove(""); rowDictionary.Remove("");, you actually only remove the first entry - the second is still present, even though rowDictionary.ContainsKey("") returns false.
You can Clear() and rebuild the entire dictionary.
At that point, you're really not gaining much by using a dynamic object.
if (row is IDictionary<string, object> rowDictionary)
{
    if (rowDictionary.ContainsKey(""))
    {
        var kvs = rowDictionary.ToList();
        rowDictionary.Clear();
        for (var i = 0; i < kvs.Count; ++i)
        {
            var kv = kvs[i];
            var key = kv.Key == "" ? $"(No Column <{i + 1}>)" : kv.Key;
            rowDictionary.Add(key, kv.Value);
        }
    }
}
Since you're working with unknown result structure, and just want to pass it to a grid view, I would consider using a DataTable instead.
You can still keep Dapper for parameter handling:
foreach (var commandDefinition in batches)
{
    using (var reader = await con.ExecuteReaderAsync(commandDefinition))
    {
        while (!reader.IsClosed)
        {
            var table = new DataTable();
            table.Load(reader);
            resultSet.Add(table);
        }
    }
}
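Depending on the provider, unnamed columns may come back from the reader with empty or auto-generated names. If the grid should still show "(No Column Name)", the captions on each loaded table could be adjusted before adding it to resultSet. A rough sketch (the auto-generated "Column1"-style names are an assumption, not guaranteed behavior):
// Sketch: give unnamed columns a friendlier caption for the grid.
// Assumption: unnamed columns come back with empty or auto-generated ("Column1", ...) names.
foreach (DataColumn column in table.Columns)
{
    if (string.IsNullOrEmpty(column.ColumnName) ||
        System.Text.RegularExpressions.Regex.IsMatch(column.ColumnName, @"^Column\d+$"))
    {
        column.Caption = "(No Column Name)";
    }
}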

Related

Dapper QueryMultiple Read result as IDictionary

I am using Dapper with the managed Oracle data provider and trying to use the QueryMultiple method. I have a stored procedure in an Oracle package that returns multiple recordsets. I want to reuse my code for multiple package methods, and the data coming back comes from different source tables, so columns and actual data will vary.
To handle this I have a class that uses an IEnumerable<string> for storing FieldNames and an IEnumerable<object[]> for storing the data for each row.
With just a single recordset I am able to use the ExecuteReader method to iterate the results and add them myself. However, I would like to use QueryMultiple to get everything in a single call. Right now I have two recordsets coming back but others may be added.
After a few different attempts I was able to get the following code to work. However, it seems like there should be a better way. Further below is a piece of code found in another SO question that seemed to be what I wanted, but I just couldn't get it to work. Does anyone have suggestions about the way I'm getting the FieldNames and Data in the code below, and whether there is a better way using the Dapper API? Loving Dapper, by the way. Thanks for any suggestions.
// class to load results into
public class Results
{
    public IEnumerable<string> FieldNames { get; set; }
    public DetailInfo Detail { get; set; }
    public IEnumerable<object[]> Data { get; set; }
}

// code to get the results from the database; field names and data vary
// method is private so that external callers cannot pass just any string as methodName
private Results GetTableResults(string methodName)
{
    var queryparams = new OracleDynamicParameters();
    queryparams.Add(name: "l_detail_cursor", dbType: OracleDbType.RefCursor, direction: ParameterDirection.Output);
    queryparams.Add(name: "l_data_cursor", dbType: OracleDbType.RefCursor, direction: ParameterDirection.Output);
    // ... other parameters go here, removed for example
    Results results = new Results();
    string sql = string.Format("{0}.GET_{1}_DETAILS", PackageName, methodName);
    using (IDbConnection db = new OracleConnection(this.ConnectionString))
    {
        db.Open();
        using (var multi = db.QueryMultiple(sql: sql, param: queryparams, commandType: CommandType.StoredProcedure))
        {
            // detail in first cursor, no problems here
            results.Detail = multi.Read<DetailInfo>().Single();
            // --------------------------------------------------
            // this is the code I'm trying to see if there is a better way to handle
            // --------------------------------------------------
            // data in second cursor
            var data = multi.Read().Select(dictionary =>
                // cast to IDictionary
                dictionary as IDictionary<string, object>
            );
            // pull from Keys
            results.FieldNames = data.First().Select(d => d.Key);
            // pull from values
            results.Data = data.Select(d => d.Values.ToArray());
            // --------------------------------------------------
        }
    }
    return results;
}
I was attempting to use something like the following, but I only get a runtime exception about splitOn needing to be specified. I tried using something like ROWNUM to give the recordset an "id", but that didn't seem to help.
var data = multi.Read<dynamic, dynamic, Tuple<dynamic, dynamic>>(
    (a, b) => Tuple.Create((object)a, (object)b)).ToList();
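For reference: the splitOn exception comes from Dapper's multi-mapping overloads, which split a single row across multiple types on an Id (or explicitly named) column; they are not intended for reading separate result sets, which GridReader.Read() already handles one call at a time. One way to generalize the working code above to any number of cursors is sketched below (an untested sketch against Oracle; the tables variable is made up for the example):
// Sketch: read every remaining cursor generically via the non-generic Read().
var tables = new List<Results>();
while (!multi.IsConsumed)
{
    var rows = multi.Read()
                    .Cast<IDictionary<string, object>>()
                    .ToList();
    tables.Add(new Results
    {
        FieldNames = rows.Count > 0 ? rows[0].Keys.ToList() : new List<string>(),
        Data = rows.Select(r => r.Values.ToArray()).ToList()
    });
}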

Entity Framework: Replacing an entire DbSet collection

I have a generic class that performs add/update on entities of type T. The AddOrUpdate() method takes in a DbSet collection to act on, as well as a list of items to add or update in the DbSet. ItemExists() is used to check whether an item already exists in the collection. If it does, we update; if not, we add. The method essentially compares the primary key of the item passed in with every single item in the table, and returns true (as well as the database object itself) if there's a match.
The code works fine for tables with a small number of records. For larger tables, however, the ItemExists() method is very inefficient. It uses a foreach loop that is itself inside another foreach loop in the caller, giving O(n^2).
An easier way would be to simply use contextDataSet.Contains(item), but that throws an exception saying "Unable to create a constant value of type ...", which makes sense since EF can't translate the class into a SQL query. So that's a no-go.
Now my actual question: is there a way to replace the entire DbSet<T> with the IEnumerable<T> that gets passed in? That IEnumerable is bound to a datagrid on the view and essentially includes all the items, so logically speaking, replacing the entire collection should be safe. Any help is greatly appreciated.
Code
public void AddOrUpdate<I, P>(Expression<Func<I, P>> dbSetExpression, IEnumerable<T> itemsToUpdate)
    where I : DbContext, new()
    where P : DbSet<T>
{
    DataFactory.PerformOperation<I>(c =>
    {
        if (m_primaryKey == null && !TryFindPrimaryKey(c))
        {
            throw new ArgumentException("Primary key cannot be null.");
        }
        // Get the table name from expression passed in.
        string dbsetName = ((MemberExpression)dbSetExpression.Body).Member.Name;
        var propertyInfo = c.GetType().GetProperties().Single(p => p.Name == dbsetName);
        // Get the values in the table.
        DbSet<T> contextDataSet = propertyInfo.GetValue(c) as DbSet<T>;
        foreach (var item in itemsToUpdate)
        {
            // If the primary key already exists, we're updating. Otherwise we're adding a new entity.
            T existingItem;
            if (ItemExists(contextDataSet, item, out existingItem) && existingItem != null)
            {
                c.Entry(existingItem).CurrentValues.SetValues(item);
            }
            else
            {
                contextDataSet.Add(item);
            }
        }
        c.SaveChanges();
    });
}
private bool ItemExists(DbSet<T> itemInDbSet, T itemInList, out T existingItem)
{
    foreach (var dbItem in itemInDbSet)
    {
        // Get the primary key value in the database.
        var dbValue = dbItem.GetType().GetProperties().Single(
            p => p.Name == m_primaryKey).GetValue(dbItem);
        // Get the primary key value from the item passed in.
        var itemValue =
            itemInList.GetType().GetProperties().Single(
                p => p.Name == m_primaryKey).GetValue(itemInList);
        // Compare the two values.
        if (dbValue.ToString() == itemValue.ToString())
        {
            existingItem = dbItem;
            return true;
        }
    }
    existingItem = null;
    return false;
}
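As a side note on the O(n^2) concern raised above: one common way to avoid re-scanning the table for every incoming item is to key the existing rows by primary key once and probe that lookup in memory. A rough sketch (not from the original post; it assumes the key values compare as strings, exactly as ItemExists does):
// Hypothetical sketch: build a lookup of existing rows keyed by primary key (as string),
// then probe it per incoming item instead of enumerating the DbSet each time.
var keyProperty = typeof(T).GetProperties().Single(p => p.Name == m_primaryKey);
var existingByKey = contextDataSet
    .AsEnumerable()
    .ToDictionary(e => keyProperty.GetValue(e).ToString());

foreach (var item in itemsToUpdate)
{
    var key = keyProperty.GetValue(item).ToString();
    if (existingByKey.TryGetValue(key, out var existingItem))
    {
        c.Entry(existingItem).CurrentValues.SetValues(item);
    }
    else
    {
        contextDataSet.Add(item);
    }
}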

EF and stored procedure returning dynamic result

I have a stored procedure with an input that defines the columns that are returned.
How can I iterate through the result?
I have tried solutions similar to this:
var selectColsParam = new System.Data.SqlClient.SqlParameter()
{
    ParameterName = "@SelectCols",
    Value = "Person.FirstName",
};
string sql = string.Format("dbo.DynamicResultSP {0} ", selectColsParam.ParameterName);
var result = db.Database.SqlQuery<List<dynamic>>(sql, selectColsParam);
At best, 'result' contains the correct number of rows which I can iterate through, but the 'row' itself is simply an object I can't seem to do anything with.
I don't need to know the column names but need to be able to iterate through the fields.
I know having a stored procedure that returns different columns depending on input is not considered good design; however, this is what I have to work with, so changing the SP is not an option.
Any help is appreciated
I ran into the same problem today, and had to fall back to SqlCommand rather than using the object context directly.
// Retrieve the connection from the object context
var entityConnection = this.ObjectContext.GetConnection() as EntityConnection;
var dbConnection = entityConnection.StoreConnection as SqlConnection;

// Create the command and associated parameters (dynamically passed in perhaps)
var command = new SqlCommand("dbo.DynamicResultSP", dbConnection);
command.CommandType = CommandType.StoredProcedure;
command.Parameters.AddWithValue("@SelectCols", "Person.FirstName");
////command.Parameters.AddWithValue("@AnotherParameter", "Parameter.SecondValue");
////command.Parameters.AddWithValue("@AThirdParameter", "YetAnotherValue");

string[] columnNames;
var results = new List<string[]>();

dbConnection.Open();
using (var reader = command.ExecuteReader())
{
    // Get the column names
    columnNames = new string[reader.FieldCount];
    for (int i = 0; i < reader.FieldCount; i++)
    {
        columnNames[i] = reader.GetName(i);
    }
    // Get the actual results
    while (reader.Read())
    {
        var result = new string[reader.FieldCount];
        for (int i = 0; i < reader.FieldCount; i++)
        {
            result[i] = reader[i].ToString();
        }
        results.Add(result);
    }
}
dbConnection.Close();
Now you should have access to both the names of the fields and the results.
Whilst it's not the prettiest solution, it gets the job done. Suggestions welcome.
I ran into a similar issue when testing JsonResults. You might be interested in a little extension I wrote that converts an object to a dynamic, which lets you hook into runtime dynamic binding on each object.
It might look like:
var result = db.Database.SqlQuery<List<object>>(sql, selectColsParam);
var dynamicResults = result.Select(o => o.AsDynamic()).ToList();
foreach (dynamic item in dynamicResults)
{
    // treat as a particular type based on the position in the list
}
It's also possible that you might simply need to convert each element to the proper type based on whatever logic you have to determine that using Convert or by directly casting.
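The AsDynamic extension itself isn't shown in the answer. A minimal sketch of one possible implementation (an assumption on my part: it simply copies public properties into an ExpandoObject) could look like this:
// Hypothetical sketch of an AsDynamic() extension: copy public properties into an ExpandoObject.
public static class DynamicExtensions
{
    public static dynamic AsDynamic(this object source)
    {
        IDictionary<string, object> expando = new System.Dynamic.ExpandoObject();
        foreach (var property in source.GetType().GetProperties())
        {
            expando[property.Name] = property.GetValue(source, null);
        }
        return expando;
    }
}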

Creating a completely dynamic query with RavenDB using LuceneQuery

I want one method that can query my entire RavenDB database.
My method signature looks like this:
public static DataTable GetData(string className, int amount, string orderByProperty, string filterByProperty, string filterByOperator, string filterCompare)
I figured I can accomplish all of the above with a dynamic LuceneQuery.
session.Advanced.LuceneQuery<dynamic>();
The problem is: Since I'm using dynamic in the type given, how do I ensure that the query only includes the types matching the className?
I'm looking for something like .WhereType(className) or .Where("type: " + className).
Solution
This returns the results of the correct type:
var type = Type.GetType("Business.Data.DTO." + className);
var tagName = RavenDb.GetTypeTagName(type);
using (var session = RavenDb.OpenSession())
{
    var result = session.Advanced
        .LuceneQuery<object, RavenDocumentsByEntityName>()
        .WhereEquals("Tag", tagName)
        .ToList();
}
Note, it is not possible to add additional "WhereEquals" or other filters to this, because nothing specific to that document type is included in the "RavenDocumentsByEntityName" index.
This means that this solution cannot be used for what I wanted to accomplish.
What I ended up doing
Although it doesn't fulfill my requirement completely, this is what I ended up doing:
public static List<T> GetData<T>(DataQuery query)
{
    using (var session = RavenDb.OpenSession())
    {
        var result = session.Advanced.LuceneQuery<T>();
        if (!string.IsNullOrEmpty(query.FilterByProperty))
        {
            if (query.FilterByOperator == "=")
            {
                result = result.WhereEquals(query.FilterByProperty, query.FilterCompare);
            }
            else if (query.FilterByOperator == "StartsWith")
            {
                result = result.WhereStartsWith(query.FilterByProperty, query.FilterCompare);
            }
            else if (query.FilterByOperator == "EndsWith")
            {
                result = result.WhereEndsWith(query.FilterByProperty, query.FilterCompare);
            }
        }
        if (!string.IsNullOrEmpty(query.OrderByProperty))
        {
            if (query.Descending)
            {
                result = result.OrderByDescending(query.OrderByProperty);
            }
            else
            {
                result = result.OrderBy(query.OrderByProperty);
            }
        }
        result = result.Skip(query.Skip).Take(query.Amount);
        return result.ToList();
    }
}
Although this is most certainly an anti-pattern, it's a neat way to just look at some data, if that's what you want. It's called very easily like this:
DataQuery query = new DataQuery
{
    Amount = int.Parse(txtAmount.Text),
    Skip = 0,
    FilterByProperty = ddlFilterBy.SelectedValue,
    FilterByOperator = ddlOperator.SelectedValue,
    FilterCompare = txtCompare.Text,
    OrderByProperty = ddlOrderBy.SelectedValue,
    Descending = chkDescending.Checked
};
grdData.DataSource = DataService.GetData<Server>(query);
grdData.DataBind();
"Server" is one of the classes/document types I'm working with, so the downside, where it isn't completely dynamic, is that I would have to define a call like that for each type.
I strongly suggest you don't go down this road. You are essentially attempting to hide the RavenDB Session object, which is very powerful and intended to be used directly.
Just looking at the signature of the method you want to create, the parameters are all very restrictive and make a lot of assumptions that might not be true for the data you're working on. And the return type - why would you return a DataTable? Maybe return an object or a dynamic, but nothing in Raven is structured in tables, so DataTable is a bad idea.
To answer the specific question, the type name comes from the Raven-Entity-Name metadata, which you would need to build an index over. This happens automatically when you index using the from docs.YourEntity syntax in an index. Raven does this behind the scenes when you use a dynamic index such as .Query<YourEntity> or .Advanced.LuceneQuery<YourEntity>.
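For illustration, a type-specific index that would make such fields filterable might look roughly like this (a sketch using RavenDB's AbstractIndexCreationTask API; the Servers_ByName index name and Name field are made up for the example):
// Hypothetical static index over a single document type; fields mapped here become queryable.
// Requires: using Raven.Client.Indexes;
public class Servers_ByName : AbstractIndexCreationTask<Server>
{
    public Servers_ByName()
    {
        Map = servers => from server in servers
                         select new { server.Name };
    }
}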
Still, you shouldn't do this.

How to create a custom property in a Linq-to-SQL entity class?

I have two tables, Studies and Series. Series is FK'd back to Studies, so one Study contains a variable number of Series.
Each Series item has a Deleted column indicating it has been logically deleted from the database.
I am trying to implement a Deleted property in the Study class that returns true only if all the contained Series are deleted.
I am using O/R Designer generated classes, so I added the following to the user modifiable partial class for the Study type:
public bool Deleted
{
    get
    {
        var nonDeletedSeries = from s in Series
                               where !s.Deleted
                               select s;
        return nonDeletedSeries.Count() == 0;
    }
    set
    {
        foreach (var series in Series)
        {
            series.Deleted = value;
        }
    }
}
This gives the exception "The member 'PiccoloDatabase.Study.Deleted' has no supported translation to SQL." when this simple query, which invokes the getter, is executed:
IQueryable<Study> dataQuery = dbCtxt.Studies;
dataQuery = dataQuery.Where((s) => !s.Deleted);
foreach (var study in dataQuery)
{
    ...
}
Based on this http://www.foliotek.com/devblog/using-custom-properties-inside-linq-to-sql-queries/, I tried the following approach:
static Expression<Func<Study, bool>> DeletedExpr = t => false;

public bool Deleted
{
    get
    {
        var nameFunc = DeletedExpr.Compile();
        return nameFunc(this);
    }
    set
    {
        // ... same as before
    }
}
I get the same exception (no supported translation to SQL) when a query is run. (The logic of the lambda expression is irrelevant at this point; I'm just trying to get past the exception.)
Am I missing some fundamental property or something to allow translation to SQL? I've read most of the posts on SO about this exception, but nothing seems to fit my case exactly.
I believe the point of LINQ-to-SQL is that your entities are mapped for you and must have corresponding columns in the database. It appears that you are trying to mix LINQ-to-Objects and LINQ-to-SQL.
If the Series table has a Deleted field in the database, and the Study table does not, but you would like to translate a logical Study.Deleted into SQL, then an extension method would be a way to go.
public static class StudyExtensions
{
    public static IQueryable<study> AllDeleted(this IQueryable<study> studies)
    {
        return studies.Where(study => !study.series.Any(series => !series.deleted));
    }
}

class Program
{
    public static void Main()
    {
        DBDataContext db = new DBDataContext();
        db.Log = Console.Out;
        var deletedStudies =
            from study in db.studies.AllDeleted()
            select study;
        foreach (var study in deletedStudies)
        {
            Console.WriteLine(study.name);
        }
    }
}
This maps your "deleted study" expression into SQL:
SELECT t0.study_id, t0.name
FROM study AS t0
WHERE NOT EXISTS(
    SELECT NULL AS EMPTY
    FROM series AS t1
    WHERE (NOT (t1.deleted = 1)) AND (t1.fk_study_id = t0.study_id)
)
Alternatively you could build actual expressions and inject them into your query, but that is an overkill.
If, however, neither Series nor Study has the Deleted field in the database (only in memory), then you need to first convert your query to an IEnumerable and only then access the Deleted property. Doing so would transfer records into memory before applying the predicate and could potentially be expensive. I.e.:
var deletedStudies =
    from study in db.studies.ToList()
    where study.Deleted
    select study;
foreach (var study in deletedStudies)
{
    Console.WriteLine(study.name);
}
When you make your query, you will want to use the statically defined Expression, not the property.
Effectively, instead of:
dataQuery = dataQuery.Where((s) => !s.Deleted);
Whenever you are making a Linq to SQL query, you will instead want to use:
dataQuery = dataQuery.Where(DeletedExpr);
Note that this will require that you can see DeletedExpr from dataQuery, so you will either need to move it out of your class, or expose it (i.e. make it public, in which case you would access it via the class definition: Study.DeletedExpr).
Also, an Expression is limited in that it cannot have a function body. So, DeletedExpr might look something like:
public static Expression<Func<Study, bool>> DeletedExpr = s => s.Series.Any(se => se.Deleted);
The property is added simply for convenience, so that you can also use it as a part of your code objects without needing to duplicate the code, i.e.
var s = new Study();
if (s.Deleted)
...
