I want to store a list of parameters (that will define how a document is generated on the web page) in a database.
There are a number of item (or document) types, and each type has its own set of parameters.
Is it a good idea to store all the parameters (key-value) as JSON in a table column?
Otherwise I would have to create a parameter table for every type, with a column for every parameter (10-30 params per type).
A note: I am not going to search by parameters or anything like that.
I will load the JSON string (if I choose JSON), deserialize it to an object and apply the parameters to the document as usual.
Since you have no requirement to search by parameters, JSON seems more robust to me, because you will have the object ready with its information as soon as you deserialize it, whereas if you store the parameters in tables and columns you will have to initialize the class members yourself. It will also have a performance benefit, as there is only one column to fetch based on your document type.
Conclusion: go with JSON data in the database.
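For illustration, here is a minimal sketch of the round trip using Json.NET (the ParameterStore class and the use of a plain Dictionary are assumptions for the example, not your actual schema):

using System.Collections.Generic;
using Newtonsoft.Json;

public static class ParameterStore
{
    // Serialize the key-value parameters into a string for a single text column.
    public static string ToJson(Dictionary<string, string> parameters)
    {
        return JsonConvert.SerializeObject(parameters);
    }

    // One column fetched, one call, and the object is ready to apply to the document.
    public static Dictionary<string, string> FromJson(string json)
    {
        return JsonConvert.DeserializeObject<Dictionary<string, string>>(json);
    }
}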
Sounds like you should have a look at http://sisodb.com. It does support querying, but that is something you could turn off, relying only on GetById.
Related
I have a (not quite valid) CSV file that contains rows of multiple types. Any record could be one of about 6 different types and each type has a different number of properties. The first part of any row contains the timestamp and the type of record, followed by a standard CSV of the data.
Example
1456057920 PERSON, Ted Danson, 123 Fake Street, 555-123-3214, blah
1476195120 PLACE, Detroit, Michigan, 12345
1440581532 THING, Bucket, Has holes, Not a good bucket
And to make matters more complex, I need to be able to do different things with the records depending on certain criteria. So a PERSON type can be automatically inserted into a DB without user input, but a THING type would be displayed on screen for the user to review and approve before adding to DB and continuing the parse, etc.
Normally, I would use a library like CsvHelper to map the records to a type, but in this case, since the types could be different and the first part uses a space instead of a comma, I don't know how to do that with a standard CSV library. So currently, on each loop iteration, I:
String split based off comma.
Split the first array item by the space.
Use a switch statement to determine the type and create the object.
Put that object into a List of type object.
Get confused as to where to go now, because I now have a list of various types and will have to use yet another switch or if to determine the next parts.
I don't really know for sure if I will actually need that List but I have a feeling the user will want the ability to manually flip through records in the file.
By this point, this is starting to make for very long, confusing code, and my gut feeling tells me there has to be a cleaner way to do this. I thought maybe using Type.GetType(string) would help simplify the code some, but this seems like it might be terribly inefficient in a loop with 10k+ records and might make things even more confusing. I then thought maybe making some interfaces might help, but I'm not the greatest at using interfaces in this context and I seem to end up in about this same situation.
So what would be a more manageable way to parse this file? Are there any C# parsing libraries out there that would be able to handle something like this?
You can implement an IRecord interface that has a Timestamp property and a Process method (perhaps others as well).
Then, implement concrete types for each type of record.
Use a switch statement to determine the type and create and populate the correct concrete type.
Place each object in a List<IRecord>, as in the sketch below.
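A minimal sketch of those steps (the concrete record fields are guesses based on the sample rows, not a confirmed schema):

using System;

public interface IRecord
{
    DateTimeOffset Timestamp { get; }
    void Process();
}

public class PersonRecord : IRecord
{
    public DateTimeOffset Timestamp { get; set; }
    public string Name { get; set; }
    public void Process() { /* auto-insert into the DB, no user input */ }
}

public class ThingRecord : IRecord
{
    public DateTimeOffset Timestamp { get; set; }
    public string Description { get; set; }
    public void Process() { /* queue for user review before inserting */ }
}

public static class RecordParser
{
    public static IRecord Parse(string line)
    {
        var fields = line.Split(',');
        var head = fields[0].Split(' ');   // e.g. "1456057920 PERSON"
        var ts = DateTimeOffset.FromUnixTimeSeconds(long.Parse(head[0]));

        switch (head[1])
        {
            case "PERSON": return new PersonRecord { Timestamp = ts, Name = fields[1].Trim() };
            case "THING":  return new ThingRecord { Timestamp = ts, Description = fields[1].Trim() };
            default: throw new FormatException("Unknown record type: " + head[1]);
        }
    }
}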
After that you can do whatever you need. Some examples:
Loop through each item and call Process() to handle it.
Use LINQ .OfType<{concrete type}> to segment the list. (Warning: with 10k records this would be slow, since it traverses the entire list for each concrete type.)
Use an overridden ToString method to give a single text representation of the IRecord
If using WPF, you can define a datatype template for each concrete type, bind an ItemsControl derivative to a collection of IRecords and your "detail" display (e.g. ListItem or separate ContentControl) will automagically display the item using the correct DataTemplate
Continuing from my comment: well, that depends. What you described is actually pretty good for starters; you can of course expand it into a series of factories, one for each object type, so that you move from an explicit switch to searching for the first factory that can parse a line. That might prove useful if you are looking to add more object types in the future: you just add another factory for the new kind of object.

Up to you if these objects should share a common interface. An interface is generally used to define a behavior, and that doesn't seem to be the case here. You need to ask yourself whether you actually need strongly typed objects at all. Maybe what you need is a simple class with an ObjectType property and a Dictionary of properties, with some helper methods for easy typed property access like GetBool, GetInt or a generic Get.
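To make that last idea concrete, a sketch of such a loosely typed record; every name here (GenericRecord, Get, GetBool, GetInt) is illustrative:

using System;
using System.Collections.Generic;

public class GenericRecord
{
    public string ObjectType { get; set; }   // e.g. "PERSON", "PLACE", "THING"
    public DateTimeOffset Timestamp { get; set; }
    public Dictionary<string, string> Properties { get; } = new Dictionary<string, string>();

    // Typed accessors over the raw string values.
    public int GetInt(string key) { return int.Parse(Properties[key]); }
    public bool GetBool(string key) { return bool.Parse(Properties[key]); }
    public T Get<T>(string key) { return (T)Convert.ChangeType(Properties[key], typeof(T)); }
}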
I load types dynamically through reflection, instantiate the classes, fill them with data and then save them to RavenDB. They all implement an interface IEntity.
In the RavenDB UI, I can see the classes correctly displayed and the meta data has the correct CLR type.
Now I want to get the data back out, change it and then save it.
What I'd like to do
I have the System.Type that matches the entity in RavenDB's CLR meta data, assuming that's called myType, I would like to do this:
session.Query(myType).ToList(); // session is IDocumentSession
But you can't do that, as RavenDB queries like so:
session.Query<T>();
I don't have the generic type T at compile time, I'm loading that dynamically.
Solution 1 - the Big Document
The first way I tried (for a proof of concept) was to wrap all the entities in a single data object and save that:
public class Data {
    public List<IEntity> Entities = new List<IEntity>();
}
Assuming the session is opened/closed properly and that I have an id:
var myDataObject = session.Load<Data>(Id);
myDataObject.Entities.First(); // or whatever query
As my types are all dynamic, specified in a dynamically loaded DLL, I need my custom json deserializer to perform the object creation. I specify that in the answer here.
I would rather not do this as loading the entire document in production would not be efficient.
Possible solution 2
I understand that Lucene can be used to query the type meta data and get the data out as a dynamic type. I can then do a horrible cast and make the changes.
Update after #Tung-Chau
Thank you to Tung; unfortunately, neither of the solutions works. Let me explain why.
I am storing the entities with:
session.Store(myDataObject);
In the database, that will produce a document with the name of myDataObject.GetType().Name. If you do:
var myDataObject = session.Load<IEntity>(Id);
Then it won't find the document because it is not saved with the name IEntity, it is saved with the name of the dynamic type.
Now the Lucene solution doesn't work either, but for a slightly more complex reason. When Lucene finds the type (which it does), Raven passes it to the custom JSON deserialiser I mentioned. The custom deserialiser does not have access to the metadata, so it does not know which type to reflect.
Is there a better way to retrieve data when you know the type but not at compile time?
Thank you.
If you have an ID (or IDs):
var myDataObject = session.Load<IEntity>(Id);
//change myDataObject. Use myDataObject.GetType() as you want
//....
session.SaveChanges();
For queries, a LuceneQuery is suitable:
var tag = documentStore.Conventions.GetTypeTagName(typeof(YourDataType));
var myDataObjects = session.Advanced
.LuceneQuery<IEntity, RavenDocumentsByEntityName>()
.WhereEquals("Tag", tag)
.AndAlso()
//....
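If you would rather stay on the strongly typed Query path, one possible approach (a sketch only; reflection details may vary between RavenDB client versions) is to close the generic Query<T>() over the runtime type:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using Raven.Client;   // IDocumentSession

public static class DynamicQueries
{
    public static List<object> QueryByRuntimeType(IDocumentSession session, Type myType)
    {
        // Find the parameterless Query<T>() overload (one generic argument, no parameters).
        MethodInfo openQuery = typeof(IDocumentSession).GetMethods()
            .First(m => m.Name == "Query"
                     && m.IsGenericMethodDefinition
                     && m.GetGenericArguments().Length == 1
                     && m.GetParameters().Length == 0);

        MethodInfo closedQuery = openQuery.MakeGenericMethod(myType);

        // The result is a queryable of myType; enumerate it untyped.
        var queryable = (IEnumerable)closedQuery.Invoke(session, null);
        return queryable.Cast<object>().ToList();
    }
}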
Hello fellow developers.
First of all I apologize beforehand for the wall of text that follows, but after a day going crazy on this, I need to call for help.
I've stumbled across a problem I cannot seem to solve. I'll try to describe the scenario in the best possible way.
Task at hand: in an existing Asp.Net Mvc application, create a lookup table for an integer field, and use the textual value from the lookup in the editing view. When saving, we must first check if the lookup already has a corresponding text value for the same Root ID. If there is, use that. Otherwise, create it and then use it.
The structure:
The data model is a graph of objects where we have the root object, a collection of level A child objects, and every level A child object has a collection of level B child objects, so something like this:
Root (with fields)
Level A child (with fields) x n
Level B child (with fields) x n
The field we have to handle is on the LevelB objects.
There is a single MVC view that handles all the data. For collection objects, the fields are named like levelA1levelB1MyField, levelA1levelB2MyField, etc., so every single field has a unique name during the post. When the post happens, all values are read through a formCollection parameter, which has on average 120-130 keys. The keys are isolated by splitting them and looping on the numerical part of the names; the values are read, parsed to the expected types and assigned to the object graph.
The datalayer part backing the object graph is all stored procedures, and all the mapping (both object to sproc and sproc to object) is hand written. There's a single stored procedure for the read part, which gets multiple datasets, and the method calling it reads the datasets and creates the object graph.
For the saving, there are multiple sprocs, mainly a "CreateRoot" and "UpdateRoot". When the code has to perform such tasks, the following happens:
For the create scenario, "CreateRoot" is called, then the sprocs "CreateLevelA" and "CreateLevelB" are called in a loop for each element in the graph;
For the update scenario, "UpdateRoot" is called, which internally deletes all "LevelA" and "LevelB" items; the code then recreates them by calling the aforementioned sprocs in a loop.
Last useful piece of information is that the "business objects graph" is used directly as a viewmodel in the view, instead of being mapped to a plain "html friendly" viewmodel. This is maybe what is causing me the most trouble.
So now the textbox on the view handles an "integer" field. That textbox must now accept a string, while the field on LevelB must remain an integer, only with a lookup table (with an FK of course), and the text field from the lookup must be used.
The approaches I tried with no success:
My first thought was to change the datatype of the MyField property from integer to string on the object, then change the sprocs accordingly and handle the join at sproc level: I'd have a consistent object for my view, and the read/write sprocs could translate from string to integer and vice versa. But I can't do that, because the join keys needed to retrieve the integer when writing are part of the Root item (as I stated in the first lines of this wall of text), which I don't know inside the CreateLevelB sproc, and changing the whole chain of calls to pass those parameters would have a huge impact on the rest of the application. So no good.
My next try was to keep things "as they are" and call some "translation methods": when reading, pass the integer to the view and call the translation method there to display the text value; when saving, use the posted text to retrieve the integer. The save part would work, as I'd have all the parameters I need, but for the read part I'd have to instantiate the data access layer and call its methods at view level, and there's no need to explain why that is a very bad choice, so I ruled this out too.
Now I'm out of options (or ideas anyway). Any suggestion to solve this is very welcome, and also if something is not clear enough just point it out and I will edit my post with more accurate information.
Thanks.
This is not a real answer, but you could rip out all the sprocs and use the updating facilities of an O/R mapper. That would resolve all the layering issues: you just update the data how you see fit and submit at the end.
I guess this would also make the questions around "should I use an int or a string" go away.
Edit: After reading your comment I thought of the following: do not implement alternative 1. You'd rather sacrifice code quality in the view than in the data storage model; the latter is more important and more centrally used.
I would not be too concerned with messing up the view by calling the DAL from it or the like. Changes in a view are localized and do not mess up the application's architecture. They just degrade the view.
Maybe you could create a view model in your controller and do the translations between DAL-model and view model? Or is that pattern not allowed?
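For what it's worth, a minimal sketch of that view-model idea; every name here (LevelBViewModel, ILookupRepository and its methods) is hypothetical:

using System.Web.Mvc;

public class LevelBViewModel
{
    public string MyFieldText { get; set; }   // what the textbox edits
}

public interface ILookupRepository   // hypothetical DAL abstraction
{
    string GetText(int myFieldId);
    int GetOrCreateId(int rootId, string text);   // find-or-create per Root, as required
}

public class RootController : Controller
{
    private readonly ILookupRepository _lookups;
    public RootController(ILookupRepository lookups) { _lookups = lookups; }

    public ActionResult Edit(int myFieldId)
    {
        // Read: translate the integer FK into the lookup text for the view.
        var vm = new LevelBViewModel { MyFieldText = _lookups.GetText(myFieldId) };
        return View(vm);
    }

    [HttpPost]
    public ActionResult Edit(int rootId, LevelBViewModel vm)
    {
        // Save: translate the posted text back into the integer FK,
        // then call UpdateRoot / CreateLevelB with it as before.
        int myFieldId = _lookups.GetOrCreateId(rootId, vm.MyFieldText);
        return RedirectToAction("Edit", new { myFieldId = myFieldId });
    }
}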
My issue is handling a number of objects deserialized from an XML document. The XML document should be set up for simplicity, so any developer can add an additional object without further modification to the code. However, each of these objects needs to be handled/updated separately, and specifically, some of the objects are of different sub-classes, which need to be handled differently. So what would be my simplest course of action that allows others to add objects via the XML, but still ensures the proper logic happens for each?
This is totally a bad idea, but if you want something constructive...
Model your XML document objects and include some kind of known syntax for specifying lambda expressions in them. So if you enter:
<BinaryExpression>
<NodeType>Add</NodeType>
<Left>3</Left>
<Right>4</Right>
</BinaryExpression>
Then when you read and compile the expression, you could run that code against the data of the XML object and do something (in this case, executing 3 + 4).
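A small sketch of that idea with System.Linq.Expressions, assuming the NodeType values map directly onto the ExpressionType enum:

using System;
using System.Linq.Expressions;
using System.Xml.Linq;

class ExpressionXmlDemo
{
    static void Main()
    {
        var xml = XElement.Parse(
            "<BinaryExpression><NodeType>Add</NodeType><Left>3</Left><Right>4</Right></BinaryExpression>");

        var nodeType = (ExpressionType)Enum.Parse(typeof(ExpressionType), (string)xml.Element("NodeType"));
        var left = Expression.Constant((int)xml.Element("Left"));
        var right = Expression.Constant((int)xml.Element("Right"));

        // Build and compile the expression, then execute it.
        var body = Expression.MakeBinary(nodeType, left, right);
        var compiled = Expression.Lambda<Func<int>>(body).Compile();

        Console.WriteLine(compiled());   // prints 7
    }
}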
I am writing a program that needs to read a set of records that describe the register map of a device I need to communicate with. Each record will have a handful of fields that describe the properties of each register.
I don't really need to edit or modify the data in my VB or C# program, though I would like to be able to display the data on a grid. I would like to store the data in a CSV file, or perhaps an XML file. I need to enable users to edit the data offline, preferably in Excel.
I am considering using a DataTable or a Collection of "Register" objects (which I would define).
I prototyped a DataTable, and found I can read/write XML easily using the built-in methods and can easily bind to a DataGridView. However, I was not able to find a way to retrieve info on a single register without using a query that returns a collection of rows, even though I defined a unique primary key column. The syntax to get a value from a column is also complex, though I could be missing something on both counts.
I'm tempted to use a collection of "Register" objects that I can access via a unique key. It would be a little more coding up front, but it seems like a cleaner solution overall. I should still be able to use LINQ to DataSet to query subsets of registers when I need them, but would also be able to grab a single field using the key value, something like this: Registers(keyValue).FieldName.
Which would be a cleaner approach to the problem?
Is there a way to read/write XML into a Collection without needing custom code?
Could this be accomplished using String for a key?
UPDATE: Sounds like the consensus is towards the collection of Register objects. Makes sense to me. I was leaning that way, and since nobody pointed out any DataTable features that would simplify accessing a single row, it looks like the collection is clearly the way to go. Thanks to those who weighed in.
I would be inclined not to use data sets. It would be better to work with objects and collections. Your code will be more maintainable/readable, composable, testable & reusable.
Given that you can do queries on the data set to return particular row, you might find that a LINQ query to turn the rows into objects may be all the custom code that you need.
Using a Dictionary<string, Register> for lookups is a good idea if you have a large number of items (say, greater than 1000). Otherwise a simple LINQ query should be fine.
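For example, the XmlSerializer round trip needs no hand-written parsing code, and a dictionary gives the Registers(keyValue).FieldName style of access from the question (the Register fields here are illustrative):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Xml.Serialization;

public class Register
{
    public string Name { get; set; }          // unique key
    public int Address { get; set; }
    public string Description { get; set; }
}

class RegisterMapDemo
{
    static void Main()
    {
        var registers = new List<Register>
        {
            new Register { Name = "CTRL", Address = 0x00, Description = "Control register" },
            new Register { Name = "STAT", Address = 0x04, Description = "Status register" }
        };

        // Write and read the whole collection as XML without custom code.
        var serializer = new XmlSerializer(typeof(List<Register>));
        using (var writer = new StreamWriter("registers.xml"))
            serializer.Serialize(writer, registers);

        List<Register> loaded;
        using (var reader = new StreamReader("registers.xml"))
            loaded = (List<Register>)serializer.Deserialize(reader);

        // Key-based access to a single register's field.
        var byName = loaded.ToDictionary(r => r.Name);
        Console.WriteLine(byName["CTRL"].Address);
    }
}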
It depends on how you define 'clean'.
A generic collection is potentially MUCH more lightweight than a DataTable. But on the other hand that doesn't seem to be too much of an issue for you. And unless you go into heavy reflection you'll have to write some code to read/write XML.
If you use a key I'd also recommend (in the case of the collection) to use a Dictionary. That way you have a Collection of the raw data and still can identify each entry through the key in the Dictionary.
I usually use DataTables if it's something quick and unlikely to be used in any other way. If it's something I can see evolving into an object that has its own use within the app (like the Register object you mentioned), I go with a custom class.
It might be a little extra code up front, but it saves converting from a DataTable to the collection in the future if you come up with something you would like to do based on an individual row, or if you want/need to add some sort of extra functionality to that element down the road.
I would go with the collection of objects so you can swap out the data access later if you need to.
You can serialize classes with an XML serializer by defining a Serialize attribute or something like that (it has been a while since I've done that, sorry for the vagueness). A DataSet or DataTable also works great with XML.
Both DS and DT have ReadXml and WriteXml methods. The XML must be in a predefined format, but it works seamlessly.
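A quick illustration of that built-in round trip (the table and column names are arbitrary):

using System.Data;

var dt = new DataTable("Registers");
dt.Columns.Add("Name", typeof(string));
dt.Columns.Add("Address", typeof(int));
dt.Rows.Add("CTRL", 0);

// WriteSchema embeds the schema so ReadXml can rebuild the table as-is.
dt.WriteXml("registers.xml", XmlWriteMode.WriteSchema);

var roundTrip = new DataTable();
roundTrip.ReadXml("registers.xml");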
Otherwise, I personally like collections or dictionaries; DS/DT are OK, but I like custom objects, and LINQ adds in some power.
HTH.