Entity Framework for querying JSON strings in SQL Server - c#

I'm looking for anyone who's done anything along the lines of querying JSON strings with the Entity Framework.
I should give a little background about what I'm trying to do here. The database I'm using is for a workflow engine that I'm working on. It handles all the workflow data, and also allows you to store some custom data as a JSON string. The workflow engine handles the serializing and de-serializing of the JSON strings on a per-request basis, but if I wanted to query and filter based on values in the JSON string, I would have to pull the entire table into memory, de-serialize all the entries, and then filter. This is, for obvious reasons, not acceptable.
The reason for doing this is that we want a single workflow database that can be used by all applications that use this workflow engine, and we are trying to avoid cross-database views into separate application-specific databases to get the custom data. In most cases the custom request data stored as JSON is relatively simple and isn't needed when querying, which is by design. But when we do need to do custom searches, we need a way to parse these custom JSON objects. I'm hoping this can be done dynamically with Entity Framework so I don't have to write extra stored procedures for querying specific types of objects. Ideally I would have just one library that uses Entity Framework to allow querying of any JSON data structure.
I started with a database function I found that parses JSON and returns a flattened table containing the values (parent object id, name, value, and type), then imported that function into my entity model. Here's a link to where I got the code from; it's a pretty interesting article.
Consuming JSON Strings in SQL Server
Here's the basics of where I'm at.
using (var db = new WorkflowEntities())
{
    var objects = db.Requests.RequestData();
}
In the above code example, Request is my base workflow request object, RequestData() is an extension method on DbSet<Request>, and parseJSON is the name of my database function.
My plan is to write a series of extension methods that filter the IQueryable<parseJSON_result> results.
So, for example, if I have an object that looks like this:
RequestDetail : {
RequestNumber: '123',
RequestType: 1,
CustomerId: 1
}
I would be able to do something like
db.Request.RequestData().Where("RequestType", 1);
or something along those lines. The .Where method would take RequestData(), which is an IQueryable containing the parsed JSON, filter it, and return a new IQueryable result.
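Something like this is roughly what I'm picturing for the extension method (the Name and StringValue property names are just placeholders for whatever columns the imported parseJSON function actually exposes):
public static class RequestDataExtensions
{
    // Filters the flattened JSON rows by element name and value.
    public static IQueryable<parseJSON_result> Where(
        this IQueryable<parseJSON_result> source, string name, object value)
    {
        string stringValue = value == null ? null : value.ToString();
        return source.Where(r => r.Name == name && r.StringValue == stringValue);
    }
}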
So my question really is: has anyone done anything like this? If so, what kind of approach did you take? My original intent was to do something dictionary-style, but it seemed too difficult. Any thoughts, ideas, suggestions, or wisdom would be much appreciated. I worked on this for a while and feel like I didn't get very far, mostly because I can't decide how I want the syntax to look, and I'm not sure whether I should be doing more work on the database side.
This was my original idea for the syntax, but I couldn't use the [] operator without hydrating the object:
db.Request.Where(req => req.RequestData()["RequestType"] == 1).Select(req => req.RequestData()["CustomerInfo"]);
I know this is a pretty long post, so if you've read this far, thanks for just taking the time to read the whole thing.

As of SQL Server 2016, FOR JSON and OPENJSON are available, equivalent to FOR XML and OPENXML. You can also index expressions that reference JSON stored in NVARCHAR columns by creating computed columns on them.

This is a very late answer, but for anyone who is still searching...
As @Emyr says, SQL Server 2016 supports querying inside JSON columns using JSON_VALUE or OPENJSON.
Entity Framework still does not support this directly, but you can use the SqlQuery method to run a raw SQL command directly against the database, which can query inside JSON columns and saves you from fetching and deserializing every row just to run a simple filter.
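For example, with EF6 something along these lines would work (the Requests table, the RequestData column, and the JSON path are placeholders, not taken from the question):
int customerId = 1;
var requests = db.Database
    .SqlQuery<Request>(
        "SELECT * FROM Requests WHERE JSON_VALUE(RequestData, '$.CustomerId') = @p0",
        customerId.ToString())
    .ToList();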

What you could do is create a CLR SQL Server user-defined function and then use it from your query.
See this link: https://msdn.microsoft.com/en-us/library/ms131077.aspx
I would think that a table-valued function is more suited to your situation.
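A minimal sketch of the shape such a CLR table-valued function takes (the ParseJson helper is only a stand-in for whatever JSON parser you deploy alongside the assembly):
using System.Collections;
using System.Collections.Generic;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class JsonFunctions
{
    // Exposed to SQL Server as a table-valued function returning (name, value) rows.
    [SqlFunction(
        FillRowMethodName = "FillRow",
        TableDefinition = "name NVARCHAR(4000), value NVARCHAR(MAX)")]
    public static IEnumerable ParseJsonTvf(SqlString json)
    {
        if (json.IsNull)
            return new KeyValuePair<string, string>[0];
        return ParseJson(json.Value);
    }

    // Copies one row of the result set into the output columns.
    public static void FillRow(object row, out SqlString name, out SqlString value)
    {
        var pair = (KeyValuePair<string, string>)row;
        name = pair.Key;
        value = pair.Value;
    }

    // Placeholder: plug in whatever JSON parser you deploy with the assembly.
    private static IEnumerable<KeyValuePair<string, string>> ParseJson(string json)
    {
        yield break;
    }
}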

Related

AutoMapper, LINQ and Localization

Recently I have been playing with AutoMapper as a tool to populate our DTOs.
The cool thing about AutoMapper is its Project().To(), which lets us map an IQueryable so that we can select the fields we want according to our map.
But we have another scenario too: we want to be able to translate some of the values of an entity into another representation.
Suppose we have a string field whose value in the DB is 'Appartment' and we want to translate it to another language while we are selecting our DTOs.
I think that if we wanted to write this in SQL it would be something like this:
SELECT CASE BuildingType WHEN 'Appartment' THEN 'appartment in another language' WHEN 'Flat' THEN 'flat in another language' END AS BuildingType FROM Buildings
I know that we can define value resolvers in AutoMapper. The question I would like to ask is: can we use these resolvers in a Project().To() scenario? If the answer is yes, how should we use them (to return an expression instead of a value), and if not, is there any alternative approach to translate a DTO on the fly?
No, you can't use resolvers in LINQ projections.
I guess I'm a little confused - those translations look a little difficult in the SQL you have. Are your translations stored in a database, or do you hard code them in your SQL?
Typically my localized labels are stored either in resource files or specialized tables. If they're in tables, I don't necessarily return translated labels together with the data; I have two queries, and the translation query is almost certainly cached.
I would start with "What LINQ do I build to make the correct SQL" and then I can help with "how can I configure AutoMapper to build this LINQ?"
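For instance, if the translations are hard-coded, a conditional inside the projection is usually enough, and LINQ to Entities turns it into a CASE expression; BuildingDto and the literal strings below are only placeholders:
var dtos = db.Buildings
    .Select(b => new BuildingDto
    {
        // becomes CASE WHEN ... THEN ... END in the generated SQL
        BuildingType = b.BuildingType == "Appartment" ? "appartment in another language"
                     : b.BuildingType == "Flat" ? "flat in another language"
                     : b.BuildingType
    })
    .ToList();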

How would one store a Dictionary in a SQL Server User-Defined Type?

Context: SQL Server 2012
On one of Microsoft's CLR User-Defined Types pages, it says (in part)
Beginning with SQL Server 2005, you can use user-defined types (UDTs) to extend the scalar type system of the server, enabling storage of CLR objects in a SQL Server database.
Does this mean that I could turn a Dictionary into an UDT? If so, how? If not, why not?
Assuming I don't run into issues with System.Core and all that, I'm hoping that perhaps I can do something useful with JavaScriptSerializer like this for JSON parsing:
// JavaScriptSerializer lives in System.Web.Script.Serialization (System.Web.Extensions)
JavaScriptSerializer ser = new JavaScriptSerializer();
return ser.Deserialize<Dictionary<string, object>>("{'some':['deeply',{'nested':'json record'}]}");
I agree with Damien_The_Unbeliever. The easy, straightforward, and probably best way is a two-column table: key and value.
However, there is another option: use an XML column and just store your serialized dictionary there. Note that if you do want to change or query the data using SQL, your DML statements are going to be highly complex and inefficient compared with the simple two-column table approach.
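If you go the XML-column route, the serialization side could look something like this sketch (DataContractSerializer handles dictionaries out of the box; the class and method names are just illustrative):
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization;
using System.Text;
using System.Xml;

public static class DictionaryXml
{
    // Serialize a dictionary to XML for storage in an XML column.
    public static string ToXml(Dictionary<string, string> dict)
    {
        var serializer = new DataContractSerializer(typeof(Dictionary<string, string>));
        var sb = new StringBuilder();
        using (var writer = XmlWriter.Create(sb))
            serializer.WriteObject(writer, dict);
        return sb.ToString();
    }

    // Deserialize the XML column value back into a dictionary.
    public static Dictionary<string, string> FromXml(string xml)
    {
        var serializer = new DataContractSerializer(typeof(Dictionary<string, string>));
        using (var reader = XmlReader.Create(new StringReader(xml)))
            return (Dictionary<string, string>)serializer.ReadObject(reader);
    }
}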

LINQ to Entities - Entity Framework

I'm looking to get a better understanding of when to use IEnumerable versus IQueryable with LINQ to Entities.
With really basic calls to the database, IQueryable is way quicker, but when do I need to think about using an IEnumerable in its place?
When is an IEnumerable preferable to an IQueryable?
Basically, IQueryables are executed by a query provider (for example a database), and some operations cannot or should not be done by the database. For example, if you want to call a C# function (here, as an example, one that capitalizes a name correctly) using a value you got from the database, you might try something like:
db.Users.Select(x => Capitalize(x.Name)) // tries to make the database call Capitalize
        .ToList();
Since the Select is executed on an IQueryable, and the underlying database has no idea about your Capitalize function, the query will fail. What you can do instead is get the correct data from the database and convert the IQueryable to an IEnumerable (which is basically just a way to iterate through collections in memory) to do the rest of the operation locally, as in:
db.Users.Select(x => x.Name)        // gets only the name from the database
        .AsEnumerable()             // do the rest of the operations in memory
        .Select(x => Capitalize(x)) // capitalize in memory
        .ToList();
The most important thing when it comes to performance of IQueryable vs. IEnumerable from the side of EF, is that you should always try to filter the data using an IQueryable to get as little data as possible to convert to an IEnumerable. What the AsEnumerable call basically does is to tell the database "give me the data as it is filtered now", and if you didn't filter it, you'll get everything fetched to memory, even data you may not need.
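As a rule of thumb, push every filter and projection you can onto the IQueryable side and only switch over for the part the database cannot translate; IsActive below is just an illustrative column:
var names = db.Users
    .Where(u => u.IsActive)       // translated to SQL: only matching rows are fetched
    .Select(u => u.Name)          // translated to SQL: only the Name column comes back
    .AsEnumerable()               // everything below runs in memory
    .Select(n => Capitalize(n))   // safe: Capitalize runs locally
    .ToList();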
IEnumerable represents a sequence of elements which you enumerate one by one until you find the answer you need. For example, if I wanted all entities that had some property greater than 10, I'd need to go through each one in turn and return only those that matched. Pulling every row of a database table into memory in order to do this would probably not be a great idea.
IQueryable on the other hand represents a set of elements on which operations like filtering can be deferred to the underlying data source, so in the filtering case, if I were to implement IQueryable on top of a custom data source (or use LINQ to Entities!) then I could give the hard work of filtering / grouping etc to the data source (e.g. a database).
The major downside of IQueryable is that implementing it is pretty hard - queries are constructed as Expression trees which as the implementer you then have to parse in order to resolve the query. If you're not planning to write a provider though then this isn't going to hurt you.
Another aspect of IQueryable that it's worth being aware of (although this is really just a generic caveat about passing processing off to another system that may make different assumptions about the world) is that you may find things like string comparison work in the manner they are supported in the source system, not in the manner they are implemented by the consumer, e.g. if your source database is case-insensitive but your default comparison in .NET is case-sensitive.

Reading several tables from a single Entity Framework ExecuteStoreQuery request

I have a library which uses EF4 for accessing a SQL Server data store. For different reasons, I have to use SQL Server specific syntax to read data from the store (for free text search), so I have to create the SQL code by hand and send it through the ExecuteStoreQuery method.
This works fine, except that the query uses joins to pull in several tables besides the main one (the main one being the one I specify as the target entity set when calling ExecuteStoreQuery), and EF never fills the main entity's relationship properties with the other tables' data.
Is there anything special to do to fill up these relationships? Using other EF methods or using special table names in the query or something?
Thanks for your help.
Executing direct SQL follows a very simple rule: it uses columns from the result set to fill properties with the same names in the materialized entity. I think I read somewhere that this works only with the main entity you materialize (the entity type defined in ExecuteStoreQuery = no relations), but I can't find it now. I did several tests and it really doesn't populate any relations.
OK, so I'll write here what I ended up doing. It does not look like a perfect solution, but it does not seem that there is any perfect solution in this case.
As Ladislav pointed out, ExecuteStoreQuery (as well as the other "custom query" method, Translate) only maps the columns of the entity you specify, leaving all the other columns aside. Therefore I had to load the dependencies separately, like this:
// Execute the hand-written SQL against the MainEntities entity set
IEnumerable<MainEntity> result = context
    .ExecuteStoreQuery<MainEntity>(strQuery, "MainEntities", MergeOption.AppendOnly, someParams)
    .ToArray();

// Load relations, first method: explicitly load each reference
foreach (MainEntity e in result)
{
    if (!e.Relation1Reference.IsLoaded)
        e.Relation1Reference.Load();
    if (!e.Relation2Reference.IsLoaded)
        e.Relation2Reference.Load();
    // ...
}

// Load relations, second method: the main entity contains a navigation property
// pointing to a record in the OtherEntity set. Loading the whole related set into
// the context lets relationship fixup populate those navigation properties.
context.OtherEntities.ToList();
There. I think the choice between these two techniques depends on the number and size of the generated requests. The first technique generates a one-record request for every required related record, but no unnecessary records are loaded. The second technique uses fewer requests (one per table) but retrieves all the records, so it uses more memory.

LinqToSQL - no supported translation to SQL

I have been puzzling over a problem this morning with LinqToSQL. I'll try and summarise with the abbreviated example below to explain my point.
I have two DB tables:
table Parent
{
ParentId
}
table Child
{
ChildId
ParentId [FK]
Name
Age
}
These have LinqToSQL equivalent classes in my project; however, I have written two custom model classes that I want my UI to use instead of the LinqToSQL classes.
My data access from the front end goes through a service class, which in turn calls a repository class, which queries the data via linq.
At the repository level I return an IQueryable by:
return from data in _data.Children
       select new CustomModel.Child
       {
           ChildId = data.ChildId,
           ParentId = data.ParentId
       };
My service layer then adds an additional query restriction by parent before returning the list of children for that parent.
return _repository.GetAllChildren().Where(c => c.Parent.ParentId == parentId).ToList();
So at this point, I get the "method has no supported translation to SQL" error when I run everything, as the c.Parent property of my custom model cannot be converted. [The c.Parent property is an object reference to the linked parent model class.]
That all makes sense so my question is this:
Can you provide the querying process with some rules that will convert a predicate expression into the correct piece of SQL to run at the database, and therefore not trigger an error?
I haven't done much work with linq up to now so forgive my lack of experience if I haven't explained this well enough.
Also, for those commenting on my choice of architecture, I have changed it to get around this problem and I am just playing around with ideas at this stage. I'd like to know if there is an answer for future reference.
Many thanks if anyone can help.
Firstly, it begs the question: why is the repository returning the UI types? If the repo returned the database types, this wouldn't be an issue. Consider refactoring so that the repo deals only with the data model, and the UI does the translation at the end (after any composition).
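As a sketch of that refactoring (GetChildrenForParent is just an illustrative service method, not from the question):
// Repository: return the LINQ-to-SQL type so the query stays composable.
public IQueryable<Child> GetAllChildren()
{
    return _data.Children;
}

// Service: filter on the mapped column, project to the UI model as the last step.
public List<CustomModel.Child> GetChildrenForParent(int parentId)
{
    return _repository.GetAllChildren()
        .Where(c => c.ParentId == parentId)   // translatable: ParentId is a mapped column
        .Select(c => new CustomModel.Child
        {
            ChildId = c.ChildId,
            ParentId = c.ParentId
        })
        .ToList();
}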
If you mean "and have it translate down to the database" - then basically, no. Composable queries can only use types defined in the LINQ-to-SQL model, and a handful of supported standard functions. Something similar came up recently on a related question, see here.
For some scenarios (unusual logic, but using the types defined in the LINQ-to-SQL model), you can use UDFs at the database and write the logic yourself (in TSQL) - but only with LINQ-to-SQL (not EF).
If the volume isn't high, you can use LINQ-to-Objects for the last bit. Just add an .AsEnumerable() before the affected Where - this will do this bit of logic back in managed .NET code (but the predicate won't be used in the database query):
return _repository.GetAllChildren().AsEnumerable()
.Where(c => c.Parent.ParentId == parentId).ToList();
