I need to save data between application executions and compare the old data with the new data before proceeding with the rest of the implementation. The data is the result of a query. I need to compare the count of the old data and the new data; if the new count is higher, I need to pick out the newly added data.
What is the best method I can use to implement this?
There are many possibilities, depending on your requirements, the size of the data and your skills. You could save the data for example:
in a well-known file
in a database
as session state
What would be preferable is not obvious from your description.
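For the "well-known file" option, here is a minimal sketch of the compare-on-next-run idea, assuming the query rows can be reduced to a small class with an Id and using Json.NET for the file format (the Record class and the file name are placeholders, not anything from your code):

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Newtonsoft.Json;

class Record
{
    public int Id { get; set; }
    public string Value { get; set; }
}

class SnapshotComparer
{
    const string SnapshotPath = "last-run.json";   // well-known file next to the exe

    public static void Run(IList<Record> currentRows)
    {
        // Load the snapshot written by the previous execution, if any.
        var previousRows = File.Exists(SnapshotPath)
            ? JsonConvert.DeserializeObject<List<Record>>(File.ReadAllText(SnapshotPath))
            : new List<Record>();

        if (currentRows.Count > previousRows.Count)
        {
            // Rows whose Ids were not present last time are treated as newly added.
            var knownIds = new HashSet<int>(previousRows.Select(r => r.Id));
            var newRows = currentRows.Where(r => !knownIds.Contains(r.Id)).ToList();
            // ... hand newRows to the rest of the implementation ...
        }

        // Persist the current result as the baseline for the next run.
        File.WriteAllText(SnapshotPath, JsonConvert.SerializeObject(currentRows));
    }
}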
I'm building a WinForms application with a local DB. I want to keep track of changes so that when somebody leaves the application, they can decide to save or discard them. I expect to have three tables, with about 1000 rows in each. On one window I have three grids that can be modified at any time. So far I have found a partial solution using the Local property on the context, e.g.
people.DataSource = dbctx.People.Local.Where(...).ToList();
Recently I tried to use a dynamic LINQ query, but I couldn't get it to work when accessing dbctx through Local. Is there something better, such as TransactionScope or IEditableObject? How would I use it?
If they close the WinForms application entirely (losing any in-memory storage options), then you will need to use a store of some sort (database, file, etc.). As you are already using a database, it sounds like your best option would be to store it in there.
If you are concerned about space, etc., you could simply serialize the in-memory objects (perhaps to JSON) and store them against the user record; then, the next time they log in or come back, you can deserialize the data back into memory and they can carry on.
If you choose this option I would recommend using Json.Net:
To serialize your in memory records use:
string json = JsonConvert.SerializeObject(people);
And deserialize back out:
var people = JsonConvert.DeserializeObject<List<People>>(json);
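A rough sketch of storing that JSON against the user record and restoring it the next session; the Users table, the SavedState column and the plain ADO.NET access are placeholders for illustration:

using System.Data.SqlClient;
using Newtonsoft.Json;

static class PendingChangesStore
{
    // Hypothetical table: Users(UserId int, SavedState nvarchar(max))
    public static void Save(string connectionString, int userId, object pendingChanges)
    {
        string json = JsonConvert.SerializeObject(pendingChanges);
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE Users SET SavedState = @state WHERE UserId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@state", json);
            cmd.Parameters.AddWithValue("@id", userId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    public static T Load<T>(string connectionString, int userId) where T : class
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT SavedState FROM Users WHERE UserId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", userId);
            conn.Open();
            var state = cmd.ExecuteScalar() as string;
            // Null means the user had no pending changes saved.
            return state == null ? null : JsonConvert.DeserializeObject<T>(state);
        }
    }
}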
I am writing a GUI that will be integrated with SAP Business One. I'm having a difficult time determining how to load, edit, and save the data in the best way possible (reliable, fast, easy).
I have two tables which are called Structures and StructureRows (not great names). A structure can contain other structures, generics, and specifics. The structure rows hold all of these items and have a type associated with them. The generics are placeholders for specifics, and the specifics are actual items in inventory.
A job will contain job metadata as well as n structures. On the screen where you edit the job, you can add structures and delete structures as well as edit the rows underneath them. For example, if you added Structure 1 to Job 1 and Structure 1 contains Generic 1, the user would be able to swap Generic 1 for a Specific.
I understand how to store the data, but I don't know what the best method to load and save the data is...
I see a few different options:
When someone adds a structure to a job, load the structure, and then recursively load any structures beneath it (the generics and specifics will already be loaded). I would put this all into an object model such as a List<Structure>, and each Structure object would have a List<Generic> and a List<Specific>. When I save the changes back to the database, I would have to manually loop through the data and persist the changes.
Somehow load the data into a view in SQL and then group and order the datatable/dataset on the client side. Bind the data to a GridView so changes are automatically reflected in the dataset. When you go to save, SQL / ADO.NET could handle this automatically? This seems like the ideal solution, but I don't know how to actually implement it...
The part that throws me off is being able to add a structure to a structure. If it wasn't for this, I would select the Specifics and Generics from the StructureRows table, and group them in the GUI based on the Structure they belong to. I would have them in a DataTable and bind that to the GridView so any changes were persisted automatically to the DataTable and then I could turn around and push them to SQL very easily...
Is loading and saving the data manually via an object model the only option I have? If not, how would you do it? I'm not sure if I'm just making it more complicated than it needs to be or if this is actually difficult to do with C#, ADO.NET and MS SQL.
The hierarchyid data type was introduced in SQL Server 2008 to handle this kind of thing. Haven't done it myself, but here's a place to start that gives a fair example of how to use it.
That being said, if you aren't wedded to your current tables, and don't need to query out the individual elements (in other words, you are always dealing with the job as a whole), I'd be tempted to store the data for each job as XML. (If you were doing a web-based app, you could also go with JSON.) It preserves the hierarchy and there are many tools in .NET for working with XML. There's also a built-in TreeView class for WinForms, and doubtless other third-party controls available.
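If you go the XML route, a minimal sketch of the idea could look like this; the Structure class and its property names are placeholders rather than your actual schema:

using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

// Illustrative shape only: a structure can contain child structures and rows.
public class Structure
{
    public Structure()
    {
        Children = new List<Structure>();
        Rows = new List<string>();
    }

    public string Name { get; set; }
    public List<Structure> Children { get; set; }
    public List<string> Rows { get; set; }
}

public static class JobXml
{
    // Serialize the whole hierarchy for a job into one XML string.
    public static string Save(Structure root)
    {
        var serializer = new XmlSerializer(typeof(Structure));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, root);
            return writer.ToString();
        }
    }

    // Rebuild the object graph from the stored XML.
    public static Structure Load(string xml)
    {
        var serializer = new XmlSerializer(typeof(Structure));
        using (var reader = new StringReader(xml))
        {
            return (Structure)serializer.Deserialize(reader);
        }
    }
}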
Hi experts :) I'm using WPF with SQL Server.
Problem 1: lots of data gets created and must be saved to the DB every second, but at the same time multiple parts of the program write to the same tables. Saving to the DB every second is not efficient, as DB calls are expensive; do you experts disagree, or what should I do? I'm not sure what the best thing to do is. When would XML or text files be more useful?
Problem 2: I have to retrieve data from the tables that problem 1 is saving to so I can show it on live graphs. Would this cause read/write problems?
Loading a lot of data with one-by-one inserts is not a good idea. Take a look at SqlBulkCopy.
A database handles concurrency very well; you can isolate the writes in a proper transaction so that readers only see the data once a complete write is done.
Considering that you have a time limit, you have to process the data in some way within 1 second.
I would suggest:
Problem 1: Save the data you generate in chunks pushed into a Stack<...> (or a similar in-memory collection). Then, from another thread, process that Stack<...> and save each chunk to the DB until the stack is empty. There is no guarantee that you will be able to save the data within 1 second, but at least you have it in memory (see the sketch below).
Problem 2: Having the data already in memory, you can achieve the maximum possible performance while staying within acceptable memory limits.
It's hard to suggest something really practical here, as performance is always strictly domain specific and cannot be described completely in a short question. But this solution can be taken as a basic guideline.
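A rough sketch of that approach, using a ConcurrentQueue instead of a Stack and SqlBulkCopy for the batched writes; the Sample class and the dbo.Samples table are made-up placeholders:

using System;
using System.Collections.Concurrent;
using System.Data;
using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;

public class Sample
{
    public DateTime SampleTime { get; set; }
    public double Value { get; set; }
}

public class SampleWriter
{
    // Producers enqueue rows as they are generated; nothing touches the DB on that path.
    private readonly ConcurrentQueue<Sample> _pending = new ConcurrentQueue<Sample>();

    public void Enqueue(Sample sample)
    {
        _pending.Enqueue(sample);
    }

    // Background loop: drain the queue once a second and bulk-insert the batch.
    public async Task FlushLoopAsync(string connectionString, CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            var batch = new DataTable();
            batch.Columns.Add("SampleTime", typeof(DateTime));
            batch.Columns.Add("Value", typeof(double));

            Sample sample;
            while (_pending.TryDequeue(out sample))
                batch.Rows.Add(sample.SampleTime, sample.Value);

            if (batch.Rows.Count > 0)
            {
                using (var bulk = new SqlBulkCopy(connectionString))
                {
                    bulk.DestinationTableName = "dbo.Samples"; // assumed table name
                    await bulk.WriteToServerAsync(batch);
                }
            }

            await Task.Delay(TimeSpan.FromSeconds(1), ct);
        }
    }
}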
1. You can use caching in order to persist your data; you can save it with the Cache class (a rough sketch follows after this list).
Link: http://msdn.microsoft.com/en-us/library/system.web.caching.cache.add.aspx
2. You shouldn't have a problem with the second scenario; you can use a Transaction to ensure that you only read committed data.
Link: http://msdn.microsoft.com/en-us/library/system.transactions.transaction.aspx
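A minimal sketch of point 1, assuming the application can reference System.Web (HttpRuntime.Cache exposes the Cache instance the linked page describes); the key name and expiry are arbitrary:

using System;
using System.Web;
using System.Web.Caching;

public static class LatestDataCache
{
    // Keep the most recent batch in memory so the graphs can read it without hitting the DB.
    public static void Store(object latestSamples)
    {
        HttpRuntime.Cache.Add(
            "latestSamples",               // cache key
            latestSamples,                 // the data to keep in memory
            null,                          // no cache dependency
            DateTime.Now.AddMinutes(5),    // absolute expiration
            Cache.NoSlidingExpiration,
            CacheItemPriority.Normal,
            null);                         // no removal callback
    }

    public static object Retrieve()
    {
        return HttpRuntime.Cache["latestSamples"];
    }
}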
Another option is to break up the table(s).
Are your other functions only writing to a subset of the fields of the records?
If so, you could have a 1-to-1 mapping between two tables: one for the initial data, the other for function A. It depends on how well you can partition the backend needs, but it can significantly reduce collisions.
The basic idea is to look at your tables more like objects. You have a base table, and then if a record is a type-1 thing you add a 1-to-1 link to the relevant table. The business functions around type 1 only need to write to that table, never the entity table.
A possibility, anyway.
I have a graph of data that I'm pulling from an OAuth source using several REST calls and storing relationally in a database. The data structure ends up having about 5-10 tables with several one-to-many relationships. I'd like to periodically go and re-retrieve that information to see if updates are necessary in my database.
Since I'm going to be doing this for many users and their data will likely not change very often, my goal is to minimize unnecessary load on my database. My strategy is to query the data from my OAuth provider, then hash the results and compare the hash to the last hash that I generated for the same dataset. If the hashes don't match, then I would simply start a transaction in the database, blow away all the data for that user, re-write the data, and close the transaction. This saves me the time of reading the data from the database and doing all the compare work to see what's changed: which rows were added, deleted, changed, etc.
So my question: if I glue all my data together in memory as one big string and use C#'s GetHashCode(), is that a fairly reliable mechanism to check whether my data has changed? Or are there any better techniques for skinning this cat?
Thanks
Yes, that's a fairly reliable mechanism to detect changes. I do not know the probability of collisions in the GetHashCode() method, but I'd assume it to be safe.
Better methods: Can't the data have a version stamp or timestamp that is set every time something changes?
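If you want something with a lower collision probability than the 32-bit value GetHashCode() returns, a sketch of hashing the glued-together string with SHA-256 instead (the class and method names here are just illustrative):

using System.Security.Cryptography;
using System.Text;

public static class DataFingerprint
{
    // Hash the concatenated data; store the hex string and compare it between runs.
    public static string Compute(string allDataAsString)
    {
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(allDataAsString));
            var sb = new StringBuilder(hash.Length * 2);
            foreach (byte b in hash)
                sb.Append(b.ToString("x2"));
            return sb.ToString();
        }
    }
}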
I am currently working on a web application that requires certain requests by users to be persisted. I have three choices:
Serialize each request object and store it as an XML text file.
Serialize the request object and store the XML text in the DB using a CLOB.
Store the requests in separate tables in the DB.
In my opinion I would go for option 2 (storing the serialized objects' XML text in the DB). I would do this because it would be so much easier to read from one column and then deserialize the objects to do some processing on them. I am using C# and ASP.NET MVC to write this application. I am fairly new to software development and would appreciate any help I can get.
Short answer: If option 2 fits your needs well, use it. There's nothing wrong with storing your data in the database.
The answer for this really depends on the details. What kind of data are you storing? How do you need to query it? How often will you need to query it?
Generally, I would say it's not a good idea to do either 1 or 2. The problem with option 2 is that it will be much harder to query for specific fields. If you're going to do a LIKE query and have it search a really long string, it's going to be an expensive operation and you'll likely run into perf issues later on.
If you really want to stay away from having to write code to read multiple columns to load your data, look into using an ORM like Linq to SQL. That will help load database tables into objects for you.
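As a rough illustration of the ORM suggestion, a minimal LINQ to SQL mapping might look like this, assuming a reference to System.Data.Linq; the Requests table and its columns are made up for the example:

using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

// Attribute-mapped entity; the ORM materializes rows into these objects for you.
[Table(Name = "Requests")]
public class Request
{
    [Column(IsPrimaryKey = true)]
    public int Id { get; set; }

    [Column]
    public string UserName { get; set; }

    [Column]
    public string Action { get; set; }
}

public static class RequestRepository
{
    public static Request[] LoadForUser(string connectionString, string userName)
    {
        using (var db = new DataContext(connectionString))
        {
            return db.GetTable<Request>()
                     .Where(r => r.UserName == userName)
                     .ToArray();
        }
    }
}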
I have designed a number of systems where storing 'some' object as serialized XML in the DB has proven the better choice. I have also learned lessons where storing objects in the DB as XML ended up causing more headaches down the road. So I came up with some questions that you have to answer yes to in order to be comfortable doing it:
Does the object need to be portable?
Is the data in the object encapsulated, i.e. not part of something else and not made up of something else?
In the future can number 2 change?
In SQL you can always create a table view using XQuery, but I would only recommend you do this if a) it's too late to change your mind, or b) you don't have that many objects to manage.
Serializing and storing objects in XML has some real benefits, especially for extensibility and agile development.
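As a rough illustration of option 2, a sketch that serializes a request to XML and inserts it into a CLOB-style column; the Requests table, the RequestXml column and the connection handling are assumptions for the example:

using System.Data.SqlClient;
using System.IO;
using System.Xml.Serialization;

public static class RequestStore
{
    // Serialize any public request type (with a parameterless constructor) to XML
    // and insert the text into an assumed Requests(RequestXml nvarchar(max)) table.
    public static void Save<TRequest>(string connectionString, TRequest request)
    {
        var serializer = new XmlSerializer(typeof(TRequest));
        string xml;
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, request);
            xml = writer.ToString();
        }

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO Requests (RequestXml) VALUES (@xml)", conn))
        {
            cmd.Parameters.AddWithValue("@xml", xml);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}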
If the number of these objects is large and each one isn't very big, I think using the database is a good idea.
Whether to store the data in a separate table or in the original table depends on how you would use the CLOB data together with the original table.
Go with option 2 if you will always need the CLOB data when you access the original table.
Otherwise go with option 3 to improve performance.
You also need to think about security and n-tier architecture. Storing serialized data in a database means your data will be on another server, which is ideal if the data needs to be secure, but it will also add network latency; storing the data in the filesystem will give you quicker I/O access but very limited searching ability.
I have a situation like this and I use the database. It also gets backed up properly with the rest of the related data.