More efficient than a DataTable - C#

We have a reporting tool that grabs a large number of records - at times it can be 1 million. We have been storing these in a DataTable. I wanted to know if there is a better object to store them in. I would need to be able to aggregate the data in various ways.
Update:
Yes, I personally believe we should not be getting that many records. This is not the direction I want to go.
Also, I am using Oracle.
Update 2:
Sorry for the delay, but there are always fires to put out here. The main issue was that they were running out of memory and getting memory errors. They had issues with the DataTable releasing from memory and also with binding it to a DataGridView. I guess what I was looking for was a lighter-weight object that wouldn't take as much space.
After thinking about it a little more, it really doesn't make sense to pull that much data, as diagonalbatman mentioned. Furthermore, if we have issues with just a few people using it, how is it going to scale?
Unfortunately, I have a boss who doesn't listen and an offshore team with too much of a "yes sir" attitude. They are serializing the raw data (as an XML file) and releasing the raw-data DataTable, which I think is not a good direction at all.
@diagonalbatman - Out of curiosity, do you have an example of this?

Why do you need to draw down 1 million records into your app?
Can you not do your reporting consolidation / aggregation on the DB? This would make better use of the DB's resources (after all, this is what an RDBMS is designed to do), and you can then focus your app on working with smaller, consolidated sets.
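For example, here is a minimal sketch of DB-side aggregation - the sales table, its region and amount columns, and the connection string are all invented, and I'm assuming the ODP.NET provider since you mentioned Oracle. The point is that only one grouped row per region crosses the wire instead of a million raw records:
using System;
using Oracle.DataAccess.Client; // ODP.NET; everything below is illustrative only

class ReportAggregates
{
    static void Main()
    {
        using (var conn = new OracleConnection("User Id=report;Password=secret;Data Source=mydb"))
        {
            conn.Open();

            // Let Oracle do the grouping: one row per region comes back.
            const string sql =
                "SELECT region, COUNT(*), SUM(amount) FROM sales GROUP BY region";

            using (var cmd = new OracleCommand(sql, conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1} rows, {2} total",
                        reader.GetString(0), reader.GetDecimal(1), reader.GetDecimal(2));
            }
        }
    }
}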

I would recommend you try several options and measure them, especially in light of your need to aggregate the data in various ways.
1) Can it be aggregated by proper queries on the database side? This is likely the best solution.
2) If you use POCOs, does LINQ improve upon your current memory and performance characteristics? Does LINQ allow you to do the aggregation you require?
Measure the characteristics you care about and try different options.
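For option 2, a quick sketch of what in-memory aggregation over POCOs looks like - the Record class and its fields are invented for illustration:
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical POCO standing in for one report row.
class Record
{
    public string Region { get; set; }
    public decimal Amount { get; set; }
}

class LinqAggregation
{
    static void Main()
    {
        var records = new List<Record>
        {
            new Record { Region = "East", Amount = 10m },
            new Record { Region = "East", Amount = 15m },
            new Record { Region = "West", Amount = 7m },
        };

        // Group and sum in memory; measure this against the DataTable version.
        var totals = records
            .GroupBy(r => r.Region)
            .Select(g => new { Region = g.Key, Count = g.Count(), Total = g.Sum(r => r.Amount) });

        foreach (var t in totals)
            Console.WriteLine("{0}: {1} rows, {2} total", t.Region, t.Count, t.Total);
    }
}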

What you want are Data Cubes. Depending on the type of database you have, you should look at building some Cubes.

Related

Dealing with Massive Graphs - Traveling Salesperson

I'm teaching myself how to program algorithms involving TSPs (Dijkstra, Kruskal) and I'm looking for some startup advice. I am working with C# and SQL. Ideally I'd like to be able to do this strictly in SQL, however I'm not sure if that is possible (I assume the runtime would be awful after 50 vertices).
So I guess the question is: can I do this in SQL only, and if so what is the best approach? If not, and I have to get C# involved, what would be the best approach there?
It is only advisable to do simple calculations in SQL, like calculating sums. Sums are faster in SQL because only the sums are returned instead of all the records. Complicated algorithms like the ones you have in mind must be done in your C# code! First, the SQL language is not suited for such problems; second, it is optimized for DB accesses, making it very slow for other types of use.
Read your data from your DB with SQL into an appropriate data structure in your C# program. Do all the TSP-related logic there and, if you want, store the result in the DB when finished.
I'm going to chime in for SQL. While it would not really be my first choice for working on TSP, it can still easily do this kind of thing - assuming, of course, that the data model is optimal for your efforts.
The first chore will be to define a data model that holds the information your algorithm needs, then populate with some sample data, then work out a query that can retrieve the arrays as needed.
Then you can decide if some simple SQL in that query would work for you, or perhaps an extension in the form of a stored procedure.
Finally, you may opt to pull it out into your alternate language of choice.
Well, I am not sure if SQL is the best option to accomplish this, but you could try using an adjacency matrix for the input. Many published algorithms are designed for this kind of input, and after that the only issue is translating the pseudocode into C#. Take a look at this:
http://en.wikipedia.org/wiki/Adjacency_matrix.
You would be using a two dimensional array to represent the matrix.
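For instance, a minimal sketch of the two-dimensional-array representation - the weights and the NoEdge sentinel are invented for illustration:
using System;

class AdjacencyMatrixDemo
{
    const int NoEdge = int.MaxValue; // sentinel for "no direct edge"

    static void Main()
    {
        // weight[i, j] = cost of travelling from vertex i to vertex j.
        // Symmetric here because this example graph is undirected.
        int[,] weight =
        {
            { 0,      4,      NoEdge },
            { 4,      0,      2      },
            { NoEdge, 2,      0      },
        };

        int n = weight.GetLength(0);
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (i != j && weight[i, j] != NoEdge)
                    Console.WriteLine("edge {0} -> {1}, cost {2}", i, j, weight[i, j]);
    }
}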

LinqToSQL Design Query / Worry

I wonder if somebody could point me in the right direction. I've recently started playing with LinqToSQL and love the strongly typed data objects etc.
I'm just struggling to understand the impact on database performance etc. For example, say I was developing a simple user profile page. The page shows basic information about the user, some information on their recent activity, and a list of unread notifications.
If I was developing a stored procedure for this page, I could create a single SP which returns multiple datatables covering all of the required information - resulting in a single db call.
However, using LinqToSQL, this could result in many calls - one for user info, at least one for activity, at least one for notifications; and if I then want further info on the notifications, this may result in further calls still - multiple DB calls.
Should I be worried about the number of DB calls happening as a result of using this design pattern? I.e., are the multiple DB handshakes etc. going to degrade my DB?
I'd appreciate your thoughts on this!
Thanks
David
LINQ to SQL can consume multiple results from a stored proc if you need to go that route. Unfortunately the designer has problems mapping them correctly, so you will probably need to create your mapping manually. See http://www.thinqlinq.com/Default/Using-LINQ-to-SQL-to-return-Multiple-Results.aspx.
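In case it helps, the manual mapping looks roughly like this - a sketch only, with the stored procedure name, entity classes, and columns all invented; the key pieces are the ResultType attributes and IMultipleResults:
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Reflection;

// Placeholder entity classes; real ones would map all of their columns.
public class User         { [Column] public string Name { get; set; } }
public class Activity     { [Column] public string Summary { get; set; } }
public class Notification { [Column] public string Message { get; set; } }

public class ProfileDataContext : DataContext
{
    public ProfileDataContext(string connection) : base(connection) { }

    // Hypothetical proc that SELECTs user info, activity, and
    // notifications as three result sets in one round trip.
    [Function(Name = "dbo.GetUserProfile")]
    [ResultType(typeof(User))]
    [ResultType(typeof(Activity))]
    [ResultType(typeof(Notification))]
    public IMultipleResults GetUserProfile([Parameter] int userId)
    {
        IExecuteResult result = this.ExecuteMethodCall(
            this, (MethodInfo)MethodInfo.GetCurrentMethod(), userId);
        return (IMultipleResults)result.ReturnValue;
    }
}

// Usage - read the result sets in the order the proc returns them:
// using (IMultipleResults results = db.GetUserProfile(42))
// {
//     var user          = results.GetResult<User>().ToList();
//     var activity      = results.GetResult<Activity>().ToList();
//     var notifications = results.GetResult<Notification>().ToList();
// }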
You can configure LINQ to SQL to eagerly load the child records if you know that you're going to need them for every parent record. Use the DataLoadOptions and .LoadWith to configure it.
You can also project an object graph with multiple child collections in the Select clause of a LINQ query to reduce the number of DB hits that you make.
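A rough sketch of the eager-loading option - this assumes the designer-generated model rather than the hand-written context above, so a Users table on the context and a Notifications association on the User entity, all names hypothetical:
using System;
using System.Data.Linq;

static class EagerLoadExample
{
    static void PrintNotificationCounts(ProfileDataContext db)
    {
        var options = new DataLoadOptions();
        options.LoadWith<User>(u => u.Notifications); // fetch children with each parent
        db.LoadOptions = options;                     // must be set before the first query

        foreach (var user in db.Users)                // one query instead of one per user
            Console.WriteLine("{0}: {1} notifications",
                user.Name, user.Notifications.Count);
    }
}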
Ultimately, you need to check a number of options to determine which route is the best performance for your situation. It's not a one size fits all scenario.
Is it worse from a performance standpoint? Yes, it usually is: multiple roundtrips are generally worse than a single one.
The real question is, do you mind? Is your application going to receive enough visits to warrant the added complexity of a stored procedure? Or do you value the simplicity of future modifications over raw performance?
In any case, if you need the performance, you can create a stored procedure and map it on your context. This will give you one single call, but return the data as objects.
Here is an article explaining a bit about that option:
linq-to-sql-returning-multiple-result-sets

Joins are for lazy people?

I recently had a discussion with another developer who claimed to me that JOINs (SQL) are useless. This is technically true, but he added that using joins is less efficient than making several requests and linking the tables in code (C# or Java).
For him joins are for lazy people that don't care about performance. Is this true? Should we avoid using joins?
No, we should avoid developers who hold such incredibly wrong opinions.
In many cases, a database join is several orders of magnitude faster than anything done via the client, because it avoids DB roundtrips, and the DB can use indexes to perform the join.
Off the top of my head, I can't even imagine a single scenario where a correctly used join would be slower than the equivalent client-side operation.
Edit: There are some rare cases where custom client code can do things more efficiently than a straightforward DB join (see comment by meriton). But this is very much the exception.
It sounds to me like your colleague would do well with a NoSQL document database or key-value store, which are themselves very good tools and a good fit for many problems.
However, a relational database is heavily optimised for working with sets. There are many, many ways of querying the data based on joins that are vastly more efficient than lots of round trips. This is where the versatility of an RDBMS comes from. You can achieve the same in a NoSQL store too, but you often end up building a separate structure suited to each different nature of query.
In short: I disagree. In an RDBMS, joins are fundamental. If you aren't using them, you aren't using it as an RDBMS.
Well, he is wrong in the general case.
Databases are able to optimize using a variety of methods, helped by optimizer hints, table indexes, foreign key relationships and possibly other database vendor specific information.
No, you shouldn't.
Databases are specifically designed to manipulate sets of data (obviously....). Therefore they are incredibly efficient at doing this. By doing what is essentially a manual join in his own code, he is attempting to take over the role of something specifically designed for the job. The chances of his code ever being as efficient as that in the database are very remote.
As an aside, without joins, what's the point in using a database? He may as well just use text files.
If "lazy" is defined as people who want to write less code, then I agree. If "lazy" is defined as people who want to have tools do what they are good at doing, I agree. So if he is merely agreeing with Larry Wall (regarding the attributes of good programmers), then I agree with him.
Ummm, joins are how relational databases relate tables to each other. I'm not sure what he's getting at.
How can making several calls to the database be more efficient than one call? Plus, SQL engines are optimized at doing this sort of thing.
Maybe your coworker is too lazy to learn SQL.
"This is technicaly true" - similarly, a SQL database is useless: what's the point in using one when you can get the same result by using a bunch of CSV files, and correlating them in code? Heck, any abstraction is for lazy people, let's go back to programming in machine code right on the hardware! ;)
Also, his asssertion is untrue in all but the most convoluted cases: RDBMSs are heavily optimized to make JOINs fast. Relational database management systems, right?
Yes, you should.
And you should use C++ instead of C# because of performance. C# is for lazy people.
No, no, no. You should use C instead of C++ because of performance. C++ is for lazy people.
No, no, no. You should use assembly instead of C because of performance. C is for lazy people.
Yes, I am joking. You can make faster programs without joins and you can make programs using less memory without joins. BUT in many cases, your development time is more important than CPU time and memory. Give up a little performance and enjoy your life. Don't waste your time chasing tiny bits of performance. And tell him, "Why don't you build a straight highway from your place to your office?"
The last company I worked for didn't use SQL joins either. Instead they moved this work to the application layer, which was designed to scale horizontally. The rationale for this design was to avoid work at the database layer; it is usually the database that becomes the bottleneck, and it's easier to replicate the application layer than the database. There could be other reasons, but this is the one I can recall now.
Yes, I agree that joins done at the application layer are inefficient compared to joins done by the database. There is also more network communication.
Please note that I'm not taking a hard stand on avoiding SQL joins.
Without joins, how are you going to relate order items to orders? That is the entire point of a relational database management system. Without joins there is no relational data and you might as well use text files to process data.
Sounds like he doesn't understand the concept, so he's trying to make it seem like joins are useless. He's the same type of person who thinks Excel is a database application.
Slap him silly and tell him to read more about databases. Making multiple connections and pulling data and merging the data via C# is the wrong way to do things.
I don't understand the logic of the statement "joins in SQL are useless".
Is it useful to filter and limit the data before working on it? As your other respondents have stated, this is what database engines do; it should be what they are good at.
Perhaps a lazy programmer would stick to technologies with which they were familiar and eschew other possibilities for non-technical reasons.
I leave it to you to decide.
Let's consider an example: a table with invoice records, and a related table with invoice line item records. Consider the client pseudo code:
foreach (var invoice in invoices)
{
    // One SELECT against the lines table per invoice: the classic N+1 problem.
    var invoiceLines = FindLinesFor(invoice);
    // ...
}
If you have 100,000 invoices with 10 lines each, this code will look up 10 invoice lines from a table of 1 million, and it will do that 100,000 times. As the table size increases, the number of select operations increases, and the cost of each select operation increases.
Because computers are fast, you may not notice a performance difference between the two approaches if you have several thousand records or fewer. Because the cost increase is more than linear, as the number of records increases (into the millions, say), you'll begin to notice a difference, and the difference will become less tolerable as the size of the data set grows.
The join, however, will use the table's indexes and merge the two data sets. This means that you're effectively scanning the second table once rather than randomly accessing it N times. If there's a foreign key defined, the database already has the links between the related records stored internally.
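In client code, the set-based equivalent is a single round trip - a sketch, with invented table and column names and connection string:
using System;
using System.Data.SqlClient;

class JoinInsteadOfLoop
{
    static void Main()
    {
        using (var conn = new SqlConnection("connection string here"))
        {
            conn.Open();

            // One round trip instead of 100,000: the database merges the
            // two tables via their indexes.
            const string sql =
                @"SELECT i.invoice_id, l.line_no, l.amount
                  FROM invoices i
                  JOIN invoice_lines l ON l.invoice_id = i.invoice_id
                  ORDER BY i.invoice_id, l.line_no";

            using (var cmd = new SqlCommand(sql, conn))
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("invoice {0}, line {1}: {2}",
                        reader.GetInt32(0), reader.GetInt32(1), reader.GetDecimal(2));
            }
        }
    }
}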
Imagine doing this yourself. You have an alphabetical list of students and a notebook with all the students' grade reports (one page per class). The notebook is sorted in order by the students' names, in the same order as the list. How would you prefer to proceed?
Read a name from the list.
Open the notebook.
Find the student's name.
Read the student's grades, turning pages until you reach the next student or the last page.
Close the notebook.
Repeat.
Or:
Open the notebook to the first page.
Read a name from the list.
Read any grades for that name from the notebook.
Repeat steps 2-3 until you get to the end
Close the notebook.
Sounds like a classic case of "I can write it better." In other words, he's seeing something that he sees as kind of a pain in the neck (writing a bunch of joins in SQL) and saying "I'm sure I can write that better and get better performance." You should ask him if he is a) smarter and b) more educated than the typical person that's knee deep in the Oracle or SQL Server optimization code. Odds are he isn't.
He is most certainly wrong. While there are definite pros to data manipulation within languages like C# or Java, joins are fastest in the database due to the nature of SQL itself.
SQL keeps detailed statistics regarding the data and, if you have created your indexes correctly, can very quickly find one record among a couple of million. Besides, why would you want to drag all your data into C# to do a join when you can just do it right at the database level?
The pros for using C# come into play when you need to do something iteratively. If you need to do some function for each row, it's likely faster to do so within C#, otherwise, joining data is optimized in the DB.
I will say that I have run into a case where it was faster breaking the query down and doing the joins in code. That being said, it was only with one particular version of MySQL that I had to do that. Everything else, the database is probably going to be faster (note that you may have to optimize the queries, but it will still be faster).
I suspect he has a limited view of what databases should be used for. One approach to maximise performance is to read the entire database into memory. In this situation you may get better performance, and you may want to perform joins in memory for efficiency. However, this is not really using a database as a database, IMHO.
No. Not only are joins better optimized in database code than in ad-hoc C#/Java, but usually several filtering techniques can be applied as well, which yields even better performance.
He is wrong; joins are what competent programmers use. There may be a few limited cases where his proposed method is more efficient (and in those I would probably be using a document database), but I can't see it if you have any decent amount of data. For example, take this query:
select t1.field1
from table1 t1
join table2 t2
on t1.id = t2.id
where t1.field2 = 'test'
Assume you have 10 million records in table1 and 1 million records in table2. Assume 9 million of the records in table1 meet the where clause, but only 15 of them are in table2 as well. You can run this SQL statement, which if properly indexed will take milliseconds and return 15 records across the network with only 1 column of data. Or you can send 10 million records with 2 columns of data, separately send another 1 million records with one column of data across the network, and combine them on the web server.
Or of course you could keep the entire contents of the database on the web server at all times which is just plain silly if you have more than a trivial amount of data and data that is continually changing. If you don't need the qualities of a relational database then don't use one. But if you do, then use it correctly.
I've heard this argument quite often during my career as a software developer. Almost every time it has been stated, the person making the claim didn't have much knowledge about relational database systems, the way they work and the way such systems should be used.
Yes, when used incorrectly, joins seem to be useless or even dangerous. But when used in the correct way, there is a lot of potential for the database implementation to perform optimizations and to "help" the developer retrieve the correct result most efficiently.
Don't forget that by using a JOIN you tell the database how you expect the pieces of data to relate to each other, and therefore give the database more information about what you are trying to do, making it better able to fit your needs.
So the answer is definitely: no, JOINs aren't useless at all!
This is "technically true" only in one case which is not used often in applications (when all the rows of all the tables in the join(s) are returned by the query). In most queries only a fraction of the rows of each table is returned. The database engine often uses indexes to eliminate the unwanted rows, sometimes even without reading the actual row as it can use the values stored in indexes. The database engine is itself written in C, C++, etc. and is at least as efficient as code written by a developer.
Unless I've seriously misunderstood, the logic in the question is very flawed.
If there are 20 rows in B for each A, 1,000 rows in A implies 20k rows in B.
There can't be just 100 rows in B unless there is a many-to-many table "AB" with 20k rows containing the mapping.
So to get all the information about which 20 of the 100 B rows map to each A row, you need table AB too. So this would be either:
3 result sets of 100, 1000, and 20k rows and a client JOIN
a single JOINed A-AB-B result set with 20k rows
So "JOIN" in the client does add any value when you examine the data. Not that it isn't a bad idea. If I was retrieving one object from the database than maybe it makes more sense to break it down into separate results sets. For a report type call, I'd flatten it out into one almost always.
In any case, I'd say there is almost no use for a cross join of this magnitude. It's a poor example.
You have to JOIN somewhere, and that's what RDBMS are good at. I'd not like to work with any client code monkey who thinks they can do better.
Afterthought:
To join in the client requires persistent objects such as DataTables (in .NET). If you have one flattened resultset, it can be consumed via something lighter like a DataReader. High volume = a lot of client resources used just to avoid a database JOIN.
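To illustrate that afterthought (a sketch with an invented query and connection string): filling a DataTable materializes every joined row in client memory, while a DataReader streams the same flattened resultset one row at a time:
using System.Data;
using System.Data.SqlClient;

class BufferVsStream
{
    static void Main()
    {
        const string connStr = "connection string here";
        const string sql =
            "SELECT o.id, d.line FROM orders o JOIN details d ON d.order_id = o.id";

        // Heavy: a DataTable holds every joined row in client memory at once.
        var table = new DataTable();
        using (var adapter = new SqlDataAdapter(sql, connStr))
            adapter.Fill(table);

        // Light: a DataReader streams the flattened resultset forward-only,
        // keeping one row in memory at a time.
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read()) { /* consume the row here */ }
        }
    }
}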

Virtual Database in Memory

Imagine the following:
I have a table of 57,000 items that I regularly use in my application to figure out things like targeting groups, etc.
Instead of querying the database 300,000 times a day for a table that hardly ever changes its data, is there a way to store its information in my application and query it in memory directly? Or do I have to create some sort of custom data type for each row and iterate through, testing each row, to check for the results I want?
After some googling, the closest thing I could find is an in-memory database.
thank you,
- theo
SQLite supports in-memory tables.
For 57,000 items that you will be querying against (and want immediately available), I would not recommend implementing just simple caching. For that many items I'd either recommend a distributed memory cache (even if it's only one machine) such as Memcache, Velocity, etc., or going with your initial idea of using an in-memory database.
Also, if you use any full-fledged ORM such as NHibernate, you can set it up to use clients for the distributed caching tools with almost no work. Many of the major clients have NHibernate implementations, including Memcache, Velocity and some others. This might be a better solution, as it only caches the data that is truly being used rather than all the data that might be needed.
Read up on Caching
It sounds like this is application level data rather than user level, so you should look into "Caching Application Data"
Here are some samples of caching datatables
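As one sketch of application-level caching - this uses System.Runtime.Caching.MemoryCache, though in an ASP.NET app HttpRuntime.Cache works similarly; the cache key, expiry, and loader method are invented:
using System;
using System.Runtime.Caching;

class AppDataCache
{
    // Stand-in for the real database query.
    static string[] LoadTargetingGroupsFromDb()
    {
        return new[] { "group-a", "group-b" };
    }

    static string[] GetTargetingGroups()
    {
        var cache = MemoryCache.Default;
        var groups = (string[])cache.Get("targeting-groups");
        if (groups == null)
        {
            groups = LoadTargetingGroupsFromDb();
            // Refresh once an hour instead of hitting the DB 300,000 times a day.
            cache.Set("targeting-groups", groups,
                new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddHours(1) });
        }
        return groups;
    }

    static void Main()
    {
        Console.WriteLine(string.Join(", ", GetTargetingGroups()));
    }
}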
If you only need to find rows using the same key all the time, a simple Dictionary&lt;Key, Value&gt; could very well be all that you need. To me, 57,000 items doesn't really sound like that much unless each row contains a huge amount of data. However, if you need to search by different columns, an in-memory database is most likely the way to go.
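A minimal sketch of the Dictionary approach, with an invented row type keyed by Id:
using System;
using System.Collections.Generic;

// Hypothetical row type for the 57,000-item table.
class TargetItem
{
    public int Id { get; set; }
    public string Name { get; set; }
}

class DictionaryLookup
{
    static void Main()
    {
        // Load once at startup (loading code omitted), then key by Id.
        var byId = new Dictionary<int, TargetItem>();
        byId[1] = new TargetItem { Id = 1, Name = "Example" };

        // Each lookup is O(1) and never touches the database.
        TargetItem item;
        if (byId.TryGetValue(1, out item))
            Console.WriteLine(item.Name);
    }
}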

Is it a good idea to store serialized objects in a Database instead of multiple xml text files?

I am currently working on a web application that requires certain requests by users to be persisted. I have three choices:
Serialize each request object and store it as an xml text file.
Serialize the request object and store this xml text in a DB using CLOB.
Store the requests in separate tables in the DB.
In my opinion I would go for option 2 (storing the serialized objects' XML text in the DB). I would do this because it would be so much easier to read from one column and then deserialize the objects to do some processing on them. I am using C# and ASP.NET MVC to write this application. I am fairly new to software development and would appreciate any help I can get.
Short answer: If option 2 fits your needs well, use it. There's nothing wrong with storing your data in the database.
The answer to this really depends on the details. What kind of data are you storing? How do you need to query it? How often will you need to query it?
Generally, I would say it's not a good idea to do 1 or 2. The problem with option 2 is that it will be much harder to query for specific fields. If you're going to do a LIKE query and have it search a really long string, it's going to be an expensive operation and you'll likely run into performance issues later on.
If you really want to stay away from having to write code to read multiple columns to load your data, look into using an ORM like Linq to SQL. That will help load database tables into objects for you.
I have designed a number of systems where storing 'some' objects as serialized XML in the DB has proven the better choice. I have also learned lessons where storing objects in the DB as XML ended up causing more headaches down the road. So I came up with some questions that you have to answer yes to in order to be comfortable doing it:
Does the object need to be portable?
Is the data in the object encapsulated, i.e. not part of something else and not made up of something else?
In the future can number 2 change?
In SQL you can always create a table view using XQuery, but I would only recommend you do this if a) it's too late to change your mind, or b) you don't have that many objects to manage.
Serializing and storing objects in XML has some real benefits, especially for extensibility and agile development.
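As a concrete illustration of option 2, the serialize/deserialize round trip might look like this - the UserRequest type is invented, and the resulting XML string is what you would put in the CLOB column:
using System;
using System.IO;
using System.Xml.Serialization;

// Hypothetical request type; XmlSerializer needs public properties
// and a parameterless constructor.
public class UserRequest
{
    public int UserId { get; set; }
    public string Action { get; set; }
    public DateTime RequestedAt { get; set; }
}

class XmlPersistence
{
    static string Serialize(UserRequest request)
    {
        var serializer = new XmlSerializer(typeof(UserRequest));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, request);
            return writer.ToString(); // store this string in the CLOB column
        }
    }

    static UserRequest Deserialize(string xml)
    {
        var serializer = new XmlSerializer(typeof(UserRequest));
        using (var reader = new StringReader(xml))
            return (UserRequest)serializer.Deserialize(reader);
    }

    static void Main()
    {
        var xml = Serialize(new UserRequest { UserId = 7, Action = "export", RequestedAt = DateTime.UtcNow });
        Console.WriteLine(Deserialize(xml).Action); // round-trips: prints "export"
    }
}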
If the number of these objects is large and each one isn't very large, I think using the database is a good idea.
Whether you store it in a separate table or in the original table depends on how you would use this CLOB data with the original table.
Go with option 2 if you will always need the CLOB data when you access the original table.
Otherwise go with option 3 to improve performance.
You also need to think about security and n-tier architecture. Storing serialized data in a database means your data will be on another server, which is ideal if the data needs to be secure, but adds network latency, whereas storing the data in the filesystem gives you quicker I/O access but very limited searching ability.
I have a situation like this and I use the database. It also gets backed up properly with the rest of the related data.
