My program should:
Load a data table from a legacy raw data file.
Provide an interface to display, filter, graph, etc.
My approach is to create an in-memory database as a DataSource for binding the filter controls, results grid, graphs, etc.
Question:
What is the simplest way to define and populate this in-memory database?
Edit:
I only have a minimal knowledge of LINQ. In the past, I'd always been able to just drag a database table or query onto the form or web page, and Visual Studio would create the DataSet, DataTable, DataSource, etc. objects for me.
... where do I define this structure (an XML file, in code, a wizard, drag and drop)? what data objects do I need? etc.
You could create classes containing the necessary properties and then simply parse the file and populate those classes in memory. There you go: you've got an in-memory database.
I only have a minimal knowledge of LINQ
Here's a good start for you: http://code.msdn.microsoft.com/101-LINQ-Samples-3fb9811b
where do I define this structure (an XML file, in code, a wizard, drag and drop)?
If you want to store the data in memory, define strongly typed C# classes that match your data.
what data objects do I need?
That would depend entirely on what information your file contains and what you want to handle.
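For illustration, here is a minimal sketch of the classes-plus-parsing approach described above: a strongly typed class, a loader that parses the legacy file into a List, and the list used directly as a binding source. The Measurement class and the comma-delimited layout are assumptions; substitute your real fields and parsing.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    // Hypothetical record type: replace the properties with whatever
    // fields your legacy file actually contains.
    public class Measurement
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public double Value { get; set; }
    }

    public static class LegacyFileLoader
    {
        // Assumes a simple comma-delimited layout; adjust the parsing
        // to match the real legacy format.
        public static List<Measurement> Load(string path)
        {
            return File.ReadLines(path)
                .Select(line => line.Split(','))
                .Select(parts => new Measurement
                {
                    Id = int.Parse(parts[0]),
                    Name = parts[1],
                    Value = double.Parse(parts[2])
                })
                .ToList();
        }
    }

A List<Measurement> like this can be assigned directly to a BindingSource (or a grid's DataSource), and filtering or grouping for the graphs becomes a LINQ query over the list, e.g. data.Where(m => m.Value > threshold).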
Related
The situation: I have a list of queries written so that each selects data from its respective table. I want to create this list of queries as an SSIS object variable and iterate through each one, using the query as an OLE DB source in a DFT.
Is there any way to do this so that the DFT source component does not have an issue with the metadata being incorrect after we switch to a query using a different table than the first?
The destination will be changing as well. I know that you can delay validation, but I don't believe that helps with the changing metadata.
No, if the metadata is not the same for all queries, then you cannot use them in a single data flow task. The metadata for a DFT is set at design time and cannot change or "refresh" during a run. You're correct that delaying validation will not help with this.
You might want to look into Biml, which dynamically creates packages based on metadata.
Currently I'm working on an application to import data from different sources (CSV and XML). The core data in the files are the same (ID, Name, Coordinate, etc.) but the structures differ (XML: in nodes; CSV tables: in rows), and in the XML I have additional data which I need to keep together until the export. It is also important to say that I need only a few of the fields for visualization and modification, but I need all of them for the export.
Problem:
I'm looking for a good structure (a database or whatever) into which I can import the data at run time. I need to have a reference to the data in order to visualize and modify it. Afterwards I need to export the information to a user-specified file type (consider the image).
Approaches:
I defined a class for the CSV schema and mapped the necessary information from the XML to it. The problem occurs when I try to export the data, because I do not have all the data available in memory.
I defined a class for the XML schema and mapped the information from the CSV to it. The problem in this case is that the storage structure is based on the schema of the XML, so if the XML schema changes, I need to change the whole storage structure.
I'm now planning to implement a SQL database with Entity Framework. This is not the easiest way, but it seems to be state of the art and updateable. The thing is that I'm not very experienced with databases or Entity Framework, which is why I'd like to know whether this is a good way to solve the problem.
One last thing: I would like to store the imported data just once and to work with references to this single source. That way I can export the information from this source and be certain that I have the current data.
Question:
What is the common way to solve such storage problems? Did I miss a good approach? Thank you so much for your help!
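One common pattern for this kind of lossless round trip, sketched below with purely hypothetical names, is a small source-agnostic core class that carries the original record along with it:

    using System.Collections.Generic;
    using System.Xml.Linq;

    // Hypothetical common record: the core fields that are visualized
    // and modified, plus the untouched source data so nothing is lost
    // at export time.
    public class ImportedItem
    {
        public string Id { get; set; }
        public string Name { get; set; }
        public double X { get; set; }
        public double Y { get; set; }

        // For XML sources: keep the original node so the additional,
        // schema-dependent data survives until the export.
        public XElement RawXml { get; set; }

        // For CSV sources: keep the original columns keyed by header.
        public Dictionary<string, string> RawCsvFields { get; set; }
    }

The UI binds only to the core properties, while the exporter reads RawXml or RawCsvFields, so a change in the XML schema touches only the import and export code, not the in-memory structure.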
I am writing a GUI that will be integrated with SAP Business One. I'm having a difficult time determining how to load, edit, and save the data in the best way possible (reliable, fast, easy).
I have two tables, which are called Structures and StructureRows (not great names). A structure can contain other structures, generics, and specifics. The structure rows hold all of these items and have a type associated with them. The generics are placeholders for specifics, and the specifics are actual items in inventory.
A job will contain job metadata as well as n structures. On the screen where you edit the job, you can add structures and delete structures as well as edit the rows underneath them. For example, if you added Structure 1 to Job 1 and Structure 1 contains Generic 1, the user would be able to swap Generic 1 for a Specific.
I understand how to store the data, but I don't know what the best method to load and save the data is...
I see a few different options:
When someone adds a structure to a job, load the structure, and then recursively load any structures beneath it (the generics and specifics will already be loaded). I would put this all into an object model such as a List<Structure>, where each Structure object would hold lists of its child structures, generics, and specifics. When I save the changes back to the database, I would have to manually loop through the data and persist the changes.
Somehow load the data into a view in SQL and then group and order the DataTable/DataSet on the client side. Bind the data to a GridView so changes are automatically reflected in the DataSet. When you go to save, could SQL / ADO.NET handle this automatically? This seems like the ideal solution, but I don't know how to actually implement it...
The part that throws me off is being able to add a structure to a structure. If it weren't for this, I would select the specifics and generics from the StructureRows table and group them in the GUI based on the structure they belong to. I would have them in a DataTable and bind that to the GridView, so any changes would be persisted automatically to the DataTable, and then I could turn around and push them to SQL very easily...
Is loading and saving the data manually via an object model the only option I have? If not, how would you do it? I'm not sure if I'm just making it more complicated than it needs to be, or if this is actually difficult to do with C#, ADO.NET, and MS SQL.
The HierarchyID data type was introduced in SQL Server 2008 to handle this kind of thing. I haven't done it myself, but here's a place to start that gives a fair example of how to use it.
That being said, if you aren't wedded to your current tables and don't need to query out the individual elements (in other words, you are always dealing with the job as a whole), I'd be tempted to store the data for each job as XML. (If you were doing a web-based app, you could also go with JSON.) It preserves the hierarchy, and there are many tools in .NET for working with XML. There's also a built-in TreeView control for WinForms, and doubtless other third-party controls available.
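To give a feel for the XML route, here is a rough sketch of round-tripping a nested structure with LINQ to XML. StructureNode and the element names are invented for illustration, not taken from the poster's schema:

    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    // Hypothetical in-memory model for a job's hierarchy.
    public class StructureNode
    {
        public string Name { get; set; }
        public List<StructureNode> Children { get; } = new List<StructureNode>();
        public List<string> Rows { get; } = new List<string>(); // generics/specifics
    }

    public static class JobXml
    {
        // Recursively serialize the hierarchy; nested XElements
        // preserve the tree shape for free.
        public static XElement ToXml(StructureNode node)
        {
            return new XElement("Structure",
                new XAttribute("name", node.Name),
                node.Rows.Select(r => new XElement("Row", r)),
                node.Children.Select(ToXml));
        }

        public static StructureNode FromXml(XElement element)
        {
            var node = new StructureNode { Name = (string)element.Attribute("name") };
            node.Rows.AddRange(element.Elements("Row").Select(e => e.Value));
            node.Children.AddRange(element.Elements("Structure").Select(FromXml));
            return node;
        }
    }

Because XElement content flattens nested enumerables, the recursive ToXml call handles arbitrary depth with no extra plumbing.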
I have a C# WPF program that needs to display GridView and 2D graph data which initially will come from a hardware device. I also want to continuously (or at frequent periodic intervals) back up this data to an XML file on disk. What would be the easiest way to implement this in Visual Studio? Should I create an XML schema first, or use the DataSet designer? Should I bother with DataSets at all, or would it make sense to eliminate them and write my incoming data directly to XML?
I would recommend:
Plan the structure of the XML ahead of time. Create a simple empty file to help you along the way.
Create a data serialization provider as well as the interface that it will implement. In your case it will be an XML provider (who knows, you may need to save the data to a database in the future; you should plan ahead for that).
Write a custom class that serializes your POCO domain objects into XML using LINQ to XML.
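A minimal sketch of what such a provider might look like; the Sample class, the interface name, and the XML shape are all assumptions:

    using System;
    using System.Collections.Generic;
    using System.Globalization;
    using System.Linq;
    using System.Xml.Linq;

    // Hypothetical POCO for one sample coming from the device.
    public class Sample
    {
        public DateTime Timestamp { get; set; }
        public double Value { get; set; }
    }

    // The provider interface: a database-backed implementation could be
    // swapped in later without touching the rest of the application.
    public interface IDataSerializationProvider
    {
        void Save(IEnumerable<Sample> samples, string destination);
        List<Sample> Load(string source);
    }

    // XML implementation built on LINQ to XML.
    public class XmlDataProvider : IDataSerializationProvider
    {
        public void Save(IEnumerable<Sample> samples, string destination)
        {
            new XElement("Samples",
                samples.Select(s => new XElement("Sample",
                    new XAttribute("t", s.Timestamp.ToString("o")),
                    new XAttribute("v", s.Value))))
                .Save(destination);
        }

        public List<Sample> Load(string source)
        {
            return XElement.Load(source)
                .Elements("Sample")
                .Select(e => new Sample
                {
                    Timestamp = DateTime.Parse((string)e.Attribute("t"),
                        CultureInfo.InvariantCulture, DateTimeStyles.RoundtripKind),
                    Value = (double)e.Attribute("v")
                })
                .ToList();
        }
    }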
I am using SQL Server 2008 with NHibernate for an application. In the application I need to create multiple objects of an Info class and use them in multiple places. I also need to store those objects in the database.
There are multiple types of Info class.
To store these objects of the Info class, I have two options:
Store the serialized object of the class
Store the details of that class as strings
What is the advantage of storing the serialized object in the database over storing its values as multiple strings?
If you store the serialized object into the db, you:
Don't have to rebuild it from partial data (i.e., write your own deserializer or construct the objects from partial data)
Must create the object "manually"
May get faster access in some cases
Store redundant infrastructure data
May choose among multiple formats (XML, a custom format, blobs)
Have fully prepared serialized objects that are ready to be processed anywhere (sent over the network, stored on disk)
If you store the multiple strings, you:
Need to build the objects "manually"
May use the database data in different scenarios (from .NET, or to build other structures such as cubes)
Get much more compact data
May store the data in relational normalized form, which is (almost) always good practice
Can query the data
Get an overall more versatile use of the data
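For a concrete sense of the serialized option, a minimal XmlSerializer round trip follows; the Info properties here are invented for illustration:

    using System.IO;
    using System.Xml.Serialization;

    // Illustrative Info class; the real one will have its own fields.
    public class Info
    {
        public string Name { get; set; }
        public int Count { get; set; }
    }

    public static class InfoSerializer
    {
        static readonly XmlSerializer serializer = new XmlSerializer(typeof(Info));

        // Produces the string you would put into an xml (or nvarchar) column.
        public static string ToXml(Info info)
        {
            using (var writer = new StringWriter())
            {
                serializer.Serialize(writer, info);
                return writer.ToString();
            }
        }

        public static Info FromXml(string xml)
        {
            using (var reader = new StringReader(xml))
            {
                return (Info)serializer.Deserialize(reader);
            }
        }
    }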
I would definitely go for the relational normalized form to store the strings and then build the corresponding class builder in .NET.
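Such a class builder over a normalized table can be a few lines of plain ADO.NET; this sketch reuses the illustrative Info class above, and the table and column names are hypothetical:

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public static class InfoRepository
    {
        // Rebuilds Info objects from a normalized table;
        // "InfoTable" and its columns are placeholder names.
        public static List<Info> LoadAll(string connectionString)
        {
            var result = new List<Info>();
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT Name, [Count] FROM InfoTable", connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        result.Add(new Info
                        {
                            Name = reader.GetString(0),
                            Count = reader.GetInt32(1)
                        });
                    }
                }
            }
            return result;
        }
    }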
I would definitely store records and fields, not just a chunk of bytes (binary, text, or XML) representing the current status of your object.
It depends, of course, on the complexity of your business entities (in your case the Info class), but I would really avoid saving the serialized version of it in a single column.
If you explode all the properties into fields, you can query for records having certain values much more easily, and you can handle new columns and releases much more easily.
The most common issue with storing an object as a serialized stream is that it is difficult to search the properties of the object once it is stored, whereas if each property of the object is explicitly stored in its own strongly typed column, it can be indexed for searches, and you get the integrity benefit of strongly typed storage.
However, at least if the object is XML-serialized into an XML column in SQL Server, you can use technologies such as XQuery and OPENXML to ease your searches.
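For example, an xml column's exist() method can be called from an ordinary parameterized query; InfoTable and InfoXml below are placeholder names, and the XQuery path must match however your objects were serialized:

    using System.Data.SqlClient;

    public static class InfoSearch
    {
        // Counts rows whose serialized Info has a given Name element.
        public static int CountByName(string connectionString, string name)
        {
            const string sql =
                "SELECT COUNT(*) FROM InfoTable " +
                "WHERE InfoXml.exist('/Info[Name/text() = sql:variable(\"@name\")]') = 1";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@name", name);
                connection.Open();
                return (int)command.ExecuteScalar();
            }
        }
    }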
Serialized object (XML)
If you store the class as XML, you will be able to search the content of the class using XQuery. This is an easy approach if you want to search, with or without conditions. Moreover, you can create an index over the XML column; the XML index will make your application faster.
As strings
Use this if you don't have business logic that needs to look at the content of the class.