Memory size of a list of int - c#

I have a list of int that's stored in the DB in the form of a string with commas in between (4345,324,24,2424,64567,33...). This string could become quite large and contain 2-3 thousand numbers. It's stored in the DB and used quite frequently.
I'm thinking that instead of reading it from the DB every time it's needed, it'd be better to store it in the session after it's loaded the first time.
How much memory would a list of 1,000 ints require? Does the memory size also depend on the values themselves, such that a larger int (234,332) takes more space than a smaller one (544)?
Is it better to read once and store in the session at the cost of memory, or to read often and discard from memory after each render?
Thanks for your suggestions.

I think you are heading in the wrong direction. Storing it in the DB will likely be the better option: not in comma-separated format, but as a table of int values.
Storing data in the session will reduce scalability significantly. You might start getting OutOfMemory exceptions and wonder why.
So my suggestion is: read from the DB when needed, apply appropriate indexes, and it will be very fast.
The way you are heading is:
Day #1, 1 user - Hmm, should I store data in Session, why not. Should work fast. No need to query DB. Also easy to do.
Day #10, 5 users - Need to store another data structure, will put this to the session too, why not? Session is very fast.
Day #50, 10 users - There is a control that is heavy to render; I will make it smart, render it once and then put it in the Session, and reuse it on every postback.
Day #100, 20 users - Sometimes the web site is slow, don't know why. But it is just sometimes, so not a big deal.
Day #150, 50 users - It has gotten slow. Need better CPU and memory? We need to buy a better server, the hardware is old.
Day #160, 60 users - Got a new server, works much faster. Problem solved.
Day #200, 100 users - Slow again, why? This is the newest, most expensive server!
Day #250, 150 users - The application pool is getting recycled all the time. Why? OutOfMemoryException? What is this? I will google it.
Day #300, 200 users - Users complain, we lose customers. I read about WinDbg, need to try using it.
Day #350, 200 users - Should we start using network load balancing? We can buy two servers! Bought a server, tried to use it, didn't work - too many dependencies on Session.
Day #400, 200 users - Can't get new customers, old customers go away. Started using WinDbg and found out that almost all the memory is used by Session.
Day #450, 200 users - Starting a big project called 'Get rid of Session'.
Day #500, 250 users - The server is so fast now.
I've been there, seen that. Basically, my advice: don't go this way.

An int in C# is always 4 bytes (no matter what the value). A list of 1,000 ints is therefore ~4,000 bytes. I say approximately because the list structure will add some overhead. A few thousand ints in a list shouldn't be a problem for a modern computer.
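If you want to sanity-check that estimate yourself, a rough measurement is possible with GC.GetTotalMemory. This is a minimal sketch, not a precise profiler; the exact number varies with runtime, GC state, and the list's initial capacity:

using System;
using System.Collections.Generic;

class ListMemoryEstimate
{
    static void Main()
    {
        // Rough, GC-dependent estimate: measure the managed heap before and after allocation.
        long before = GC.GetTotalMemory(true);

        var numbers = new List<int>(1000);
        for (int i = 0; i < 1000; i++)
            numbers.Add(i);

        long after = GC.GetTotalMemory(true);

        // Expect roughly 4,000 bytes for the int[] backing store plus a small
        // fixed overhead for the List<int> object and the array header.
        Console.WriteLine("Approximate bytes used: " + (after - before));

        GC.KeepAlive(numbers); // keep the list reachable until after the measurement
    }
}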

I would not recommend storing it in the session, since that's going to cause memory pressure. If you have a series of integers tied to a single record, it sounds like you have a missing many to one relationship - why not store the ints in a separate table with a foreign key to the original table?

Integers are of a fixed size in .NET. Assuming you store it in an array instead of a List (since you are probably not adding to or removing from it), it would take up roughly 32 bits * the number of elements. 1000 ints in an array = roughly 32000 bits, or a little under 4 KB.

An int usually takes 32 bits (4 bytes), so 1000 of them would take about 4KB.
It doesn't matter how large the number is. They're always stored in the same space.

Is this list of ints unique to a session? If not, cache it at the server level and set an expiration on it: one copy of the list for everyone.
context.Cache.Add(...
I do this and refresh it every 5 minutes with a large amount of data. This way it's pretty "fresh" but only 1 connection takes the hit to populate it.
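For illustration, here is a minimal sketch of server-level caching with an absolute expiration; the cache key, the five-minute interval, and the loadFromDb delegate are placeholders for whatever your actual lookup looks like:

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static class IdListCache
{
    public static List<int> GetIds(Func<List<int>> loadFromDb)
    {
        var cache = HttpRuntime.Cache;
        var ids = (List<int>)cache["AllItemIds"];   // hypothetical cache key
        if (ids == null)
        {
            ids = loadFromDb();
            // One shared copy for all users, refreshed at most every 5 minutes.
            cache.Insert(
                "AllItemIds",
                ids,
                null,                            // no cache dependency
                DateTime.UtcNow.AddMinutes(5),   // absolute expiration
                Cache.NoSlidingExpiration);
        }
        return ids;
    }
}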

Related

Finding Changes between 2 HUGE zone (text) files

I have access to the .com zone files. A zone file is a text file with a list of domain names and their nameservers. It follows a format such as:
mydomain NS ns.mynameserver.com.
mydomain NS ns2.mynameserver.com.
anotherdomain NS nameservers.com.
notinalphadomain NS ns.example.com.
notinalphadomain NS ns1.example.com.
notinalphadomain NS ns2.example.com.
As you can see, there can be multiple lines for each domain (when there are multiple nameservers), and the file is NOT in alpha order.
These files are about 7GB in size.
I'm trying to take the previous file and the new file, and compare them to find:
What domains have been Added
What domains have been Removed
What domains have had nameservers changed
Since 7GB is too much to load into memory, I obviously need to read it as a stream. The best method I've come up with so far is to make several passes over both files: one pass for each letter of the alphabet, loading, for example, all the domains that start with 'a' in the first pass.
Once I've got all the 'a' domains from the old and new file, I can do a pretty simple comparison in memory to find the changes.
The problem is, even reading char by char, and optimizing as much as I've been able to think of, each pass over the file takes about 200-300 seconds, including collecting all the domains for the current pass's letter. So I figure that in its current state I'm looking at about an hour to process the files, without even storing the changes in the database (which will take some more time). This is on a dual quad-core Xeon server, so throwing more horsepower at it isn't much of an option for me.
This timing may not be a dealbreaker, but I'm hoping someone has some bright ideas for how to speed things up... Admittedly I have not tried async IO yet, that's my next step.
Thanks in advance for any ideas!
Preparing your data may help, both in terms of the best kind of code: the unwritten kind, and in terms of execution speed.
cat yesterday-com-zone | tr A-Z a-z | sort > prepared-yesterday
cat today-com-zone | tr A-Z a-z | sort > prepared-today
Now, your program does a very simple differences algorithm, and you might even be able to use diff:
diff prepared-today prepared-yesterday
Edit:
And an alternative solution that removes some extra processing, at the possible cost of diff execution time. This also assumes the use of GnuWin32 CoreUtils:
sort -f <today-com-zone >prepared-today
sort -f <yesterday-com-zone >prepared-yesterday
diff -i prepared-today prepared-yesterday
The output from that will be a list of additions, removals, and changes. Not necessarily one change record per zone (consider what happens when two alphabetically adjacent domains are removed). You might need to play with the options to diff to force it to not check for as many lines of context, to avoid great swaths of false-positive changes.
You may need to write your program after all to take the two sorted input files and just run them in lock-step, per-zone. When a new zone is found in TODAY file, that's a new zone. When a "new" zone is found in YESTERDAY file (but missing in today), that's a removal. When the "same" zone is found in both files, then compare the NS records. That's either no-change, or a change in nameservers.
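If diff's output turns out to be awkward to post-process, the lock-step comparison is straightforward to write yourself. Here is a minimal sketch that assumes one "domain NS nameserver" record per line, both inputs already lower-cased and sorted in ordinal order, and the prepared-yesterday/prepared-today file names from the commands above; it is written against a recent C# compiler:

using System;
using System.Collections.Generic;
using System.IO;

class ZoneDiff
{
    // Reads the next domain and its set of NS records from a sorted zone file.
    static (string Domain, HashSet<string> Ns)? ReadZone(TextReader reader, ref string pending)
    {
        string line = pending ?? reader.ReadLine();
        pending = null;
        if (line == null) return null;

        string domain = line.Split(' ')[0];
        var ns = new HashSet<string> { line };

        while ((line = reader.ReadLine()) != null)
        {
            if (!line.StartsWith(domain + " ", StringComparison.Ordinal))
            {
                pending = line;   // belongs to the next domain; keep it for the next call
                break;
            }
            ns.Add(line);
        }
        return (domain, ns);
    }

    static void Main()
    {
        using var oldFile = new StreamReader("prepared-yesterday");
        using var newFile = new StreamReader("prepared-today");

        string oldPending = null, newPending = null;
        var oldZone = ReadZone(oldFile, ref oldPending);
        var newZone = ReadZone(newFile, ref newPending);

        while (oldZone != null || newZone != null)
        {
            int cmp = oldZone == null ? 1
                    : newZone == null ? -1
                    : string.CompareOrdinal(oldZone.Value.Domain, newZone.Value.Domain);

            if (cmp < 0)
            {
                Console.WriteLine("REMOVED " + oldZone.Value.Domain);
                oldZone = ReadZone(oldFile, ref oldPending);
            }
            else if (cmp > 0)
            {
                Console.WriteLine("ADDED " + newZone.Value.Domain);
                newZone = ReadZone(newFile, ref newPending);
            }
            else
            {
                if (!oldZone.Value.Ns.SetEquals(newZone.Value.Ns))
                    Console.WriteLine("CHANGED " + newZone.Value.Domain);
                oldZone = ReadZone(oldFile, ref oldPending);
                newZone = ReadZone(newFile, ref newPending);
            }
        }
    }
}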
The question has already been answered, but I'll provide a more detailed answer, with facts that are good for everyone to understand. I'll try to cover the existing solutions, and even how to distribute the work, with explanations of why things turned out as they did.
You have a 7 GB text file. Your disk lets us stream data at, let's be pessimistic, 20 MB/second. This can stream the whole thing in 350 seconds. That is under 6 minutes.
If we suppose that an average line is 70 characters, we have 100 million rows. If our disk spins at 6000 rpm, the average rotation takes 0.01 seconds, so grabbing a random piece of data off of disk can take anywhere from 0 to 0.01 seconds, and on average will take 0.005 seconds. This is called our seek time. If you know exactly where every record is, and seek to each line, it will take you 0.005 sec * 100,000,000 = 500,000 sec which is close to 6 days.
Lessons?
When working with data on disk you really want to avoid seeking. You want to stream data.
When possible, you don't want your data to be on disk.
Now the standard way to address this issue is to sort the data. A standard mergesort works by taking a block, sorting it, taking another block, sorting it, and then merging them together to get a larger block. The merge operation streams data in and writes a stream out, which is exactly the kind of access pattern that disks like. Now in theory with 100 million rows you'll need 27 passes with a mergesort. But in fact most of those passes easily fit in memory. Furthermore a clever implementation - which nsort seems to be - can compress intermediate data files to keep more passes in memory. This dataset is highly structured and compressible, in which case all of the intermediate data files should be able to fit in RAM. Therefore you entirely avoid disk except for reading and writing the data.
This is the solution you wound up with.
OK, so that tells us how to solve this problem. What more can be said?
Quite a bit. Let's analyze what happened with the database suggestions. The standard database has a table and some indexes. An index is just a structured data set that tells you where your data is in your table. So you walk the index (potentially doing multiple seeks, though in practice all but the last tend to be in RAM), which then tells you where your data is in the table, which you then have to seek to again to get the data. So grabbing a piece of data out of a large table potentially means 2 disk seeks. Furthermore writing a piece of data to a table means writing the data to the table, and updating the index. Which means writing in several places. That means more disk seeks.
As I explained at the beginning, disk seeks are bad. You don't want to do this. It is a disaster.
But, you ask, don't database people know this stuff? Well of course they do. They design databases to do what users ask them to do, and they don't control users. But they also design them to do the right thing when they can figure out what that is. If you're working with a decent database (eg Oracle or PostgreSQL, but not MySQL), the database will have a pretty good idea when it is going to be worse to use an index than it is to do a mergesort, and will choose to do the right thing. But it can only do that if it has all of the context, which is why it is so important to push work into the database rather than coding up a simple loop.
Furthermore the database is good about not writing all over the place until it needs to. In particular the database writes to something called a WAL log (write-ahead log; yes, I know the second "log" is redundant) and updates data in memory. When it gets around to it, it writes the in-memory changes to disk. This batches up writes and causes it to need to seek less. However there is a limit to how much can be batched. Thus maintaining indexes is an inherently expensive operation. That is why standard advice for large data loads in databases is to drop all indexes, load the table, then recreate the indexes.
But all this said, databases have limits. If you know the right way to solve a problem inside of a database, then I guarantee that using that solution without the overhead of the database is always going to be faster. The trick is that very few developers have the necessary knowledge to figure out the right solution. And even for those who do, it is much easier to have the database figure out how to do it reasonably well than it is to code up the perfect solution from scratch.
And the final bit. What if we have a cluster of machines available? The standard solution for that case (popularized by Google, which uses this heavily internally) is called MapReduce. What it is based on is the observation that merge sort, which is good for disk, is also really good for distributing work across multiple machines. Thus we really, really want to push work to a sort.
The trick that is used to do this is to do the work in 3 basic stages:
Take large body of data and emit a stream of key/value facts.
Sort the facts, partition them into key/value sets, and send them off for further processing.
Have a reducer that takes a key/values set and does something with them.
If need be the reducer can send the data into another MapReduce, and you can string along any set of these operations.
From the point of view of a user, the nice thing about this paradigm is that all you have to do is write a simple mapper (takes a piece of data - eg a line, and emits 0 or more key/value pairs) and a reducer (takes a key/values set, does something with it) and the gory details can be pushed off to your MapReduce framework. You don't have to be aware of the fact that it is using a sort under the hood. And it can even take care of such things as what to do if one of your worker machines dies in the middle of your job. If you're interested in playing with this, http://hadoop.apache.org/mapreduce/ is a widely available framework that will work with many other languages. (Yes, it is written in Java, but it doesn't care what language the mapper and reducer are written in.)
In your case your mapper could start with a piece of data in the form (filename, block_start), open that file, start at that block, and emit for each line a key/value pair of the form domain: (filename, registrar). The reducer would then get for a single domain the 1 or 2 files it came from with full details. It then only emits the facts of interest. Adds are that it is in the new but not the old. Drops are that it is in the old but not the new. Registrar changes are that it is in both but the registrar changed.
Assuming that your file is readily available in compressed form (so it can easily be copied to multiple clients) this can let you process your dataset much more quickly than any single machine could do it.
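To make the shape of that concrete, here is a minimal, framework-free sketch of what the mapper and reducer could look like for this problem. The "old"/"new" file labels, the value encoding, and the record format are assumptions for illustration, not Hadoop APIs:

using System.Collections.Generic;
using System.Linq;

static class ZoneMapReduce
{
    // Mapper: one input line -> zero or more (domain, "label:nameserver") pairs,
    // where label is "old" or "new" depending on which file is being read.
    public static IEnumerable<KeyValuePair<string, string>> Map(string fileLabel, string line)
    {
        var parts = line.Split(' ');                 // "domain NS nameserver."
        if (parts.Length >= 3)
            yield return new KeyValuePair<string, string>(parts[0], fileLabel + ":" + parts[2]);
    }

    // Reducer: one domain plus all of its values -> a change record, or null for no change.
    public static string Reduce(string domain, IEnumerable<string> values)
    {
        var oldNs = new HashSet<string>(values.Where(v => v.StartsWith("old:")).Select(v => v.Substring(4)));
        var newNs = new HashSet<string>(values.Where(v => v.StartsWith("new:")).Select(v => v.Substring(4)));

        if (oldNs.Count == 0) return "ADDED " + domain;
        if (newNs.Count == 0) return "REMOVED " + domain;
        return oldNs.SetEquals(newNs) ? null : "CHANGED " + domain;
    }
}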
This is very similar to a Google interview question that goes something like: "say you have a list of one million 32-bit integers that you want to print in ascending order, and the machine you are working on only has 2 MB of RAM; how would you approach the problem?".
The answer (or rather, one valid answer) is to break the list up into manageable chunks, sort each chunk, and then apply a merge operation to generate the final sorted list.
So I wonder if a similar approach could work here. As in, starting with the first list, read as much data as you can efficiently work with in memory at once. Sort it, and then write the sorted chunk out to disk. Repeat this until you have processed the entire file, and then merge the chunks to construct a single sorted dataset (this step is optional...you could just do the final comparison using all the sorted chunks from file 1 and all the sorted chunks from file 2).
Repeat the above steps for the second file, and then open your two sorted datasets and read through them one line at a time. If the lines match then advance both to the next line. Otherwise record the difference in your result-set (or output file) and then advance whichever file has the lexicographically "smaller" value to the next line, and repeat.
Not sure how fast it would be, but it's almost certainly faster than doing 26 passes through each file (you've got 1 pass to build the chunks, 1 pass to merge the chunks, and 1 pass to compare the sorted datasets).
That, or use a database.
You should read each file once and save them into a database. Then you can perform whatever analysis you need using database queries. Databases are designed to quickly handle and process large amounts of data like this.
It will still be fairly slow to read all of the data into the database the first time, but you won't have to read the files more than once.

what is the technique to process 1 million records from a database

This is more or less a design question. We have to process about 1 million rows and send an XML file to a third party. Initially we have to send all 1 million records; later we will send only the deltas.
Right now the stored procedure takes approximately 15 to 20 minutes to return the data. It's a console app right now. I know it's not a good way to get 1 million records at a time.
I want to know the following:
1) Is a console app in C# that connects to the database the right approach or not?
2) Are there any other ways of doing this?
Appreciate your guidance on this. There is no need for any coding; we just need some advice on how to proceed.
Thanks in advance.
My thoughts:
don't fetch all the data then process it; but process it as it arrives - via IDataReader or LINQ
use an equally streaming approach for the file; perhaps XmlWriter directly, or maybe XStreamingElement - in either case reading from the source above
this vastly reduces the amount of memory you need, and allows your machine to do something useful while waiting on the network IO
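For illustration, a minimal sketch of streaming rows from a data reader straight into an XmlWriter, so only one row is in memory at a time. The connection string, query, column types, and element names are placeholders:

using System.Data.SqlClient;
using System.Xml;

class StreamingExport
{
    static void Export(string connectionString, string outputPath)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Id, Name, Amount FROM dbo.Orders", connection))
        using (var writer = XmlWriter.Create(outputPath, new XmlWriterSettings { Indent = true }))
        {
            connection.Open();
            writer.WriteStartElement("Orders");

            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // One row in, one element out; nothing accumulates in memory.
                    writer.WriteStartElement("Order");
                    writer.WriteAttributeString("id", reader.GetInt32(0).ToString());
                    writer.WriteElementString("Name", reader.GetString(1));
                    writer.WriteElementString("Amount", reader.GetDecimal(2).ToString());
                    writer.WriteEndElement();
                }
            }

            writer.WriteEndElement();
        }
    }
}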
Re 1: Depends on your architecture. That simple. It is a VIABLE approach.
Re 2: Yes, tons. All viable. You could build a system service that handles data generation on request. You could have a web application.
In general, a console app will work fine, and 1 million rows in a result set are not exactly a lot either. Totally workable.
15-20 minutes is odd, though. Where is the time spent? 1 million rows to transfer and write out should not take more than 2-3 minutes.
1) Yeah, why not.
2) Yes.
Use cursors.
You will need to be a little more specific on what you are doing during the 15 to 20 minutes.
You are asking about the "right" way to do things - what are you optimising for?
Speed? A 15 - 20 minute stored proc sounds dangerous. What is it doing?
Maintenance / Readability? A console app will work. It would also be easier to test (unit testing etc) than a stored proc.
I have never liked long-running stored procedures because it's not easy to see progress. At least with a console app you can output something.
Trust me, 1 million records isn't a big deal for any well-known commercial database; it shouldn't take 15 to 20 minutes to return the records. Something else is wrong! Are you building the XML file in the stored procedure? If so, remove that and build the XML in C#. The SP has only one simple task: fetching data. It won't take long if you are not joining 1 million records against another 1 million records. Once the data comes into the application (a console application is fine in this case), build the XML, with LINQ to XML for example. If you are still not satisfied with the performance, make your code parallel.
EDIT: Your SP is time consuming, so you need to optimize it. An example: in the SP, T_Data with 1M records joins T_User with 1M records, which costs a lot of time. After optimization: in the SP, T_Data joins a single record in T_User (essentially a WHERE clause, which is very fast); in the C# code you get the records from T_User and, for each record, call the SP to get its data and build one piece of your XML. All of these can be processed concurrently. At the end, you merge all the pieces of XML into one.
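A minimal sketch of the "build pieces in parallel, merge at the end" idea; GetDataForUser is a hypothetical wrapper around the per-user stored procedure call, and the element names are placeholders:

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

class ParallelXmlBuild
{
    // Hypothetical: wraps the per-user stored procedure and returns its rows.
    static IEnumerable<KeyValuePair<int, string>> GetDataForUser(int userId)
    {
        yield break; // replace with the actual data access call
    }

    static XDocument BuildDocument(IEnumerable<int> userIds)
    {
        // Each user becomes one independent XML fragment, built in parallel.
        var pieces = userIds
            .AsParallel()
            .AsOrdered()
            .Select(id => new XElement("User",
                new XAttribute("id", id),
                GetDataForUser(id).Select(row =>
                    new XElement("Item", new XAttribute("id", row.Key), row.Value))))
            .ToList();

        return new XDocument(new XElement("Export", pieces));
    }
}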

Need a way to sort a 100 GB log file by date [closed]

So, for some strange reason I end up with a 100GB log file that is unsorted (actually it's partially sorted), while the algorithms that I'm attempting to apply require sorted data. A line in the log file looks like so
data <date> data data more data
I have access to C# 4.0 and about 4 GB of RAM on my workstation. I would imagine that merge-sort of some kind would be best here, but short of implementing these algorithms myself - I want to ask if there's some kind of a shortcut I could take.
Incidentally, parsing the date string with DateTime.Parse() is very slow and takes up a lot of CPU time; the chugging rate is a measly 10 MB/sec. Is there a faster way than the following?
public static DateTime Parse(string data)
{
    int year, month, day;
    int.TryParse(data.Substring(0, 4), out year);
    int.TryParse(data.Substring(5, 2), out month);
    int.TryParse(data.Substring(8, 2), out day);
    return new DateTime(year, month, day);
}
I wrote that to speed up DateTime.Parse() and it actually works well, but is still taking a bucket-load of cycles.
Note that for the current log-file I'm interested in hours, minutes and seconds also. I know that I can provide DateTime.Parse() with format, but that doesn't seem to speed it up all that much.
I'm looking for a nudge in the right direction, thanks in advance.
EDIT: Some people have suggested that I use string comparison in order to compare dates. That would work for the sorting phase, but I do need to parse the dates for the algorithms. I still have no idea how to sort a 100GB file with 4GB of free RAM, without doing it manually.
EDIT 2: Well, thanks to several suggestions that I use Windows sort, I found out that there's a similar tool for Linux. Basically you call sort and it fixes everything for you. As we speak it's doing something, and I hope it'll finish soon. The command I'm using is
sort -k 2b 2008.log > 2008.sorted.log
-k specifies that I want to sort on the second field, which is a date-time string in the usual YYYY-MM-DD hh:mm:ss.msec format. I must admit that the man pages are lacking in explaining all the options, but I found a lot of examples by running info coreutils 'sort invocation'.
I'll report back with results and timings. This part of the log is about 27GB. I am thinking of sorting 2009 and 2010 separately and then merging the results into a single file with the sort -m option.
Edit 3 Well, checking iotop suggests that it's reading in small chunks of the data file and then furiously doing something in order to process them. This process seems to be quite slow. =(
sort isn't using any memory, and only a single core. When it does read data from the drive it's not processing anything. Am I doing something wrong?
Edit 4 Three hours in and it's still doing the same thing. Now I'm at that stage where I want to try playing with the parameters of the function, but I'm three hours invested... I'll abort it in about 4 hours, and try to set it up for an overnight computation with smarter memory and space parameters...
Edit 5 Before I went home, I restarted the process with the following command:
sort -k 2b --buffer-size=60% -T ~/temp/ -T "/media/My Passport" 2010.log -o 2010.sorted.log
It returned this, this morning:
sort: write failed: /media/My Passport/sortQAUKdT: File too large
Wraawr! I thought I would just add as many hard drives as possible to speed this process up. Apparently adding a USB-drive was the worst idea ever. At the moment I can't even tell if it's about FAT/NTFS or some such, because fdisk is telling me that the USB drive is a "wrong device"... no kidding. I'll try to give it another go later, for now let's put this project into the maybe failed pile.
Final Notice
This time it worked, with the same command as above, but without the problematic external hard drive. Thank you all for your help!
Benchmarking
Using two workstation-grade hard disks (at least 70 MB/sec read/write IO) on the same SATA controller, it took me 162 minutes to sort a 30GB log file. I will need to sort another 52GB file tonight; I'll post how that goes.
Code like this is completely bound by how fast you can get the data off the disk. The file simply can never fit in the file system cache so you're always waiting on the disk to supply the data. You're doing fairly well at 10 MB/sec, optimizing the code is never going to have a discernible effect.
Get a faster disk. Defrag the one you've got as an intermediate step.
If a string sort will work for you, then just use the Windows SORT command. Sort the file and be done with it. It'll happily sort your 100GB file, and it's simple to use.
If you need to filter and convert the file, specifically the date field, then I would simply write a small conversion program that converts the date field into a zero-filled integer (like the number of seconds since 1970, or whatever you like), and rewrites the record. Then you can pipe (|) the output into the sort command, and you have a final, sorted file that's more readily parsed by your utility program.
I think the mistake you're making is simply trying to do this all in one go. 100GB of data is a lot, and it takes some time to copy, but it doesn't take THAT long. Since you have to sort it, you already have to deal with a copy of the file at some point (i.e. you need as much free space on your machine to handle both copies at some time), even with an external sorting routine like merge sort.
Writing a simple reformatter and piping it in to sort will save you a couple trips through the file, and save space on disk, since you'll inevitably just need the two copies.
I would also tweak the formatter into pulling only the fields I'm really interested in, and do all of the "heavy" parsing at that point, so that what you end up with is essentially a formatted file that is easily handled by your reporting routines. That way you'll save time later when potentially running your reports more than once.
Use a simple CSV or, even better, a fixed length file format for output if possible.
Make sure your date information, if you choose to use an integer, has all of the fields the same length. Otherwise the SORT utility won't sort them correctly (you end up with 1 10 2 3 instead of 1 2 3 10; you're better off with 01 02 03 10).
Edit --
Let's approach it from a different tack.
The biggest question is "do you need all this data". This relates to the earlier suggestion about doing the heavy parsing first. Obviously, the more you can reduce the initial set the better. For example, simply removing 10% of the data is 10GB.
Something I like to think about as a rule of thumb, especially when dealing with a lot of data: "If you have 1 Million of something, then every millisecond saved, is 20 minutes off the bottom line."
Normally, we really don't think in terms of milliseconds for our work, it's more "seat of the pants", "that feels faster". But the 1ms == 20min/million is a good measure to get a grasp of how much data you're dealing with, and how long stuff should/could take.
For your case, 100GB of data. With a swag of 100 bytes per record, you're talking 1 billion rows. 20,000 minutes per millisecond -- 5 1/2 hours. Gulp. (It's a rule of thumb; if you do the math it doesn't quite work out to this.)
So, you can appreciate the desire to reduce the raw data if at all possible.
That was one reason I deferred to the Windows SORT command. It's a basic process, but one affected by nuance, and one that can use some optimization. The folks who wrote SORT had the time and opportunity to make it "optimal" in many ways. Whether they did or did not, I can't say. But it's a fair assumption that they would put more time and attention into this process to make their SORT as good as practical, versus you, who are under a tight deadline.
There are 3rd party sorting utilities for large data sets, that probably (ideally) work better for that case. But, those are unavailable to you (you can get them but I don't think you wanted to rush out and get some other utility right away). So, SORT is our best guess for now.
That said, reducing the data set will gain more than any sort utility.
How much detail do you really need? And how much information are you really tracking? For example, if it were, say, web statistics, you may have 1000 pages on your site. But even with hourly numbers for a year, 365 * 24 * 1000, that's only 8.7M "buckets" of information -- a far cry from 1B.
So, is there any preprocessing you can do that does not require sorting? Summarizing the information into a coarser granularity? You can do that without sorting, simply using memory based hash maps. Even if you don't have "enough memory" to process all 100GB of data in one throw, you probably have enough to do it in chunks (5 chunks, 10 chunks), and write out the intermediary results.
You may also have a lot better luck splitting the data as well, into monthly or weekly file chunks. Maybe that's not easily done because the data is "mostly" sorted. But in that case, if it's by date, the offenders (i.e. the data that's out of sort) may well be clustered within the file, with the "out of order" stuff being just mixed up on the boundaries of the time periods (like around day transitions, maybe you have rows like 11:58pm, 11:59pm, 00:00am, 00:01am, 11:58pm, 00:02am). You might be able to leverage that heuristic as well.
The goal being that if you can somewhat deterministically determine the subset that's out of order, and break the file up in to chunks of "in order data" and "out of order data", your sorting task may be MUCH MUCH smaller. Sort the few rows that are out of order, and then you have a merge problem (much simpler than a sorting problem).
So, those are tactics you can take approaching the problem. Summarization is obviously the best one as anything that reduces this data load in any measurable, is likely worth the trouble. Of course it all boils down to what you really want from the data, clearly the reports will drive that. This is also a nice point about "pre-mature optimization". If they're not reporting on it, don't process it :).
Short answer - load the data into a relational database eg Sql Express, create an index, and use a cursor based solution eg DataReader to read each record off and write it to disk.
Why don't you try this relatively unknown tool from Microsoft called LogParser? It basically allows you to run an SQL query over a CSV file (or any other formatted text file).
It saves you the trouble of pumping it into a database, doing your sort, and pumping it back out again.
Just to answer your question about sorting a long file that doesn't fit into memory - you'll need to use an external sorting algorithm such as merge sort. The process is roughly the following:
Partition the input into several parts that fit into memory and can be sorted using standard in-memory sorting algorithms (e.g. 100 MB or larger - you'll need to keep ~4 parts in memory at once). Sort all the parts and write them back to disk.
Read two parts from the disk (they are both sorted) and merge them, which can be done just by simultaneously iterating over the two inputs. Write the merged data set to another place in the disk. Note that you don't need to read the whole part into memory - just read it/write it in blocks as you go.
Repeat merging of parts until you have only a single part (which will be sorted file with all the data from your original input data set).
You mentioned that the data is partially sorted already, so it would be a good idea to pick an algorithm for the in-memory sorting (in the first phase) that is efficient in this case. You can see some suggestions in this question (though I'm not sure if the answer will be the same for very large data sets - it also depends on how partially sorted the input is).
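For illustration, a minimal external merge sort sketch in C#. The chunk size, the key extraction (second whitespace-separated field, assumed to be an ISO 8601 date-time so ordinal comparison gives chronological order), and the temp-file handling are all assumptions; a production version would want error handling and smarter buffering:

using System.Collections.Generic;
using System.IO;
using System.Linq;

class ExternalSort
{
    // Assumed sort key: the second whitespace-separated field on each line.
    static string Key(string line)
    {
        return line.Split(' ')[1];
    }

    static void Sort(string inputPath, string outputPath, int linesPerChunk)
    {
        // Phase 1: split the input into sorted chunk files.
        var chunkFiles = new List<string>();
        using (var reader = new StreamReader(inputPath))
        {
            var buffer = new List<string>(linesPerChunk);
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                buffer.Add(line);
                if (buffer.Count == linesPerChunk)
                    FlushChunk(buffer, chunkFiles);
            }
            if (buffer.Count > 0)
                FlushChunk(buffer, chunkFiles);
        }

        // Phase 2: k-way merge of the sorted chunks.
        var readers = chunkFiles.Select(f => new StreamReader(f)).ToList();
        var heads = readers.Select(r => r.ReadLine()).ToList();
        using (var writer = new StreamWriter(outputPath))
        {
            while (true)
            {
                int smallest = -1;
                for (int i = 0; i < heads.Count; i++)
                    if (heads[i] != null &&
                        (smallest == -1 || string.CompareOrdinal(Key(heads[i]), Key(heads[smallest])) < 0))
                        smallest = i;

                if (smallest == -1) break;          // all chunks exhausted
                writer.WriteLine(heads[smallest]);
                heads[smallest] = readers[smallest].ReadLine();
            }
        }
        readers.ForEach(r => r.Dispose());
        chunkFiles.ForEach(File.Delete);
    }

    static void FlushChunk(List<string> buffer, List<string> chunkFiles)
    {
        buffer.Sort((a, b) => string.CompareOrdinal(Key(a), Key(b)));
        string path = Path.GetTempFileName();
        File.WriteAllLines(path, buffer);
        chunkFiles.Add(path);
        buffer.Clear();
    }
}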
The best way of optimising the parsing of the dates is to not parse them at all.
As the dates are in ISO 8601 format, you can just compare them as strings; no parsing is needed at all.
Regarding the sorting, you should be able to effectively use the fact that it's partially sorted. One approach could be to read the file and write into separate files divided on time ranges, for example daily or hourly. If you make each file small enough you can then read them into memory and sort them, and then just merge all the files.
Another approach could be to read the file and write the records that are in order into one file, and the other ones into another file. Sort the second file (possibly applying this process recursively if it's large) and then merge the two files together, i.e. a modified merge sort.
For sorting you could implement a file-based bucket sort:
Open input file
Read file line by line
Get date as string from line
Append line to file <date>.log
The result would be a separate log file for each day, or for each hour. Choose so that you get files of a size that you can easily sort.
The remaining task would be to sort the created files and possibly merge the files again.
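A minimal sketch of the bucket-splitting pass, assuming the date "YYYY-MM-DD" starts at a fixed column on each line (the offset and output directory are placeholders); keeping one writer open per bucket avoids opening and closing a file on every line, at the cost of one open file handle per day:

using System.Collections.Generic;
using System.IO;

class BucketSplit
{
    static void SplitByDay(string inputPath, string outputDir)
    {
        var writers = new Dictionary<string, StreamWriter>();
        try
        {
            foreach (string line in File.ReadLines(inputPath))
            {
                // Assumption: the date starts at column 5 of each line.
                string day = line.Substring(5, 10);

                StreamWriter writer;
                if (!writers.TryGetValue(day, out writer))
                {
                    writer = new StreamWriter(Path.Combine(outputDir, day + ".log"));
                    writers[day] = writer;
                }
                writer.WriteLine(line);
            }
        }
        finally
        {
            foreach (var writer in writers.Values)
                writer.Dispose();
        }
    }
}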
I do need to parse dates for the algorithms.
On *NIX, I generally would have first converted the dates into something simple, suitable for text comparison, and made it the first word on each line. It's too early for date/time object creation. My usual date presentation is YYYYMMDD-hhmmss.millis. Make it so that all files have the same date format.
I still have no idea how to sort a 100GB file with 4GB of free RAM, without doing it manually.
As you have figured it out already, merge sort is the only option.
So to me the task falls into the following steps:
dumb conversion to make dates sortable. Complexity: read/write sequentially 100GB.
split the data into chunks of usable size, e.g. 1GB, and sort every chunk using a plain quicksort before writing it to disk. Complexity: read/write sequentially 100GB; memory for the quicksort.
merge-sort the small files into one large. One can do it step-wise, using a program which takes two files and merges them into new one. Complexity: read/write sequentially 100GB log(N) times (where N is the number of files). HDD space requirement: 2*100GB (last merge of 2 x 50GB files into single 100GB file).
A program to automate the previous step: pick two (e.g. the smallest) files, start a program to sort-merge them into a new file, remove the two original files. Repeat until only one file remains.
(Optional) split the 100GB sorted file into smaller chunks of manageable size. After all you are going to do something with them. Number them sequentially or put first and last time stamps into the file name.
General concept: do not try to find a way to do it fast; piping 100GB through will take time anyway. Plan for the program at every step to run overnight as a batch, without your attention.
On Linux that is all doable with shell/sort/awk/Perl, and I do not think that it is a problem to write it all in any other programming language. This is potentially 4 programs - but all of them are rather simple to code.
Assuming that your log file only has 1-2% of the rows out of order, you could make a single pass through the complete log, outputting two files: one file that is in order and another file containing the 1-2% of rows that are out of order. Then sort the out-of-order rows in memory and perform a single merge of the formerly out-of-order rows with the in-order rows. This will be much faster than a full mergesort, which would do many more passes.
Assuming that your log file has no row more than N rows out of place, you could make a single pass through the log with a sorted queue N rows deep. Whenever you encounter a log row that is out of order, just insert it into the proper place in the queue. Since this only requires a single pass through the log, it's going to be as fast as you can get.
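A minimal sketch of that sliding-window idea, using a priority queue keyed on the date field. It assumes no row is more than windowSize rows out of place, that ordinal comparison of the date string gives chronological order, and that PriorityQueue is available (.NET 6+); on older frameworks a SortedList or similar would play the same role:

using System;
using System.Collections.Generic;
using System.IO;

class NearlySortedPass
{
    static void SortNearlySorted(string inputPath, string outputPath, int windowSize)
    {
        // Priority = the date field (assumed to be the second whitespace-separated token).
        var window = new PriorityQueue<string, string>(StringComparer.Ordinal);

        using (var writer = new StreamWriter(outputPath))
        {
            foreach (string line in File.ReadLines(inputPath))
            {
                window.Enqueue(line, line.Split(' ')[1]);
                if (window.Count > windowSize)
                    writer.WriteLine(window.Dequeue());   // the earliest date leaves the window
            }

            // Drain whatever is left at the end of the file.
            while (window.Count > 0)
                writer.WriteLine(window.Dequeue());
        }
    }
}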
Actually I don't have many ideas about the date conversion, but the things I would try to use to do it are:
A database with an index on the date column (to make it easy to search the data afterwards).
To insert into this database, use bulk insert.
And some way to parallelize the reading (I think parallel LINQ would be good, and it is very easy to use).
Lots of patience (the most important/hardest thing).
Pre-emptive comment: My answer only addresses the sub-problem of parsing date time values.
DateTime.Parse contains checks for all possible date formats. If you have a fixed format you can optimize the parsing quite well. A simple optimization would be to convert the characters directly:
using System;

class DateParserYyyyMmDd
{
    static void Main(string[] args)
    {
        string data = "2010-04-22";
        DateTime date = Parse(data);
    }

    struct Date
    {
        public int year;
        public int month;
        public int day;
    }

    static Date MyDate;

    // Converts the digits directly for a fixed yyyy-MM-dd layout,
    // skipping the format detection done by DateTime.Parse.
    static DateTime Parse(string data)
    {
        MyDate.year = (data[0] - '0') * 1000 + (data[1] - '0') * 100
                    + (data[2] - '0') * 10 + (data[3] - '0');
        MyDate.month = (data[5] - '0') * 10 + (data[6] - '0');
        MyDate.day = (data[8] - '0') * 10 + (data[9] - '0');
        return new DateTime(MyDate.year, MyDate.month, MyDate.day);
    }
}
Apart from whatever you are doing (probably, willw's suggestion is helpful), your parsing could be done over multiple threads provided you have multiple processors or processor cores.
Not really a solution, but just out of interest, one way to do it would be like this:
First break the file down into 1GB files
Then reading 2 files at a time, load the contents into a list of string and sort it
Write it back down to the individual files.
The problem is that you would need to read/write 100 files on each pass and do 100 passes to make sure that the data is sorted.
If my maths is correct: that is 10,000 GB read and 10,000 GB write; at an average 10MB/sec that is roughly 2,000,000 sec, which is about 23 days.
One way that might work is to scan the file once and write to smaller files, one for each time period, for example day or hour. Then sort these individual files.
You could try implementing a radix sort algorithm. Because radix sort scans the whole list sequentially and only a few times, it can help here by preventing a giant number of scans and seeks over your 100 GB file.
Radix sort classifies your records by one part per iteration. This part can be a digit, or a datetime part like year, month, or day. In this case you don't even need to convert the string to a DateTime; you can convert only the specific part to an int.
Edit:
For sorting purposes, you can create a temp binary file with only 2 columns: the DateTime (DateTime.ToBinary() as Int64) and the line's address in the source file (as Int64).
Then you get a much smaller file with fixed-size records, only 16 bytes per record, and you can sort it much faster (the IO operations will be faster, at least).
Once you've finished sorting the temp file, you can recreate the full sorted 100 GB log file from it.
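A minimal sketch of building such a 16-byte index file. The date position, format, newline length, and single-byte encoding are assumptions about the log layout; a second pass would sort the index by the first column and rewrite the log in that order:

using System;
using System.IO;

class IndexBuilder
{
    // Writes one 16-byte record (date as Int64, byte offset as Int64) per log line.
    // Assumes a single-byte encoding with no BOM, a fixed newline length, and the
    // date-time at a fixed position on each line.
    static void BuildIndex(string logPath, string indexPath, int newlineBytes)
    {
        using (var reader = new StreamReader(logPath))
        using (var index = new BinaryWriter(File.Create(indexPath)))
        {
            long offset = 0;
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // Assumption: "yyyy-MM-dd HH:mm:ss" starts at column 5.
                DateTime stamp = DateTime.ParseExact(
                    line.Substring(5, 19), "yyyy-MM-dd HH:mm:ss", null);

                index.Write(stamp.ToBinary());  // 8 bytes: the sort key
                index.Write(offset);            // 8 bytes: where the line starts in the log

                offset += reader.CurrentEncoding.GetByteCount(line) + newlineBytes;
            }
        }
    }
}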
Wow. First of all, that is a whole new level of documentation obsession.
My actual advice would be: try to consider how necessary this file really is.
About sorting, I have no idea if this will work or not, but you might want to try to build an enumerator that returns the data directly from the hard disk (not saving anything but a few pointers, maybe), and then try to use LINQ's OrderBy, which returns an IEnumerable as well, which you can, hopefully, enumerate and save directly back to the disk.
The only question is whether or not OrderBy keeps anything in RAM.
Boot up a Linux flavor from USB and use the while command to read the file. Utilize grep, filters and pipes to segregate the data. This can all be done in 3 lines of a BASH script. Grep will rip through the data in no time. I've grepped through 7 million lines in 45 seconds.

Optimal storage of data structure for fast lookup and persistence

Scenario
I have the following methods:
public void AddItemSecurity(int itemId, int[] userIds)
public int[] GetValidItemIds(int userId)
Initially I'm thinking of storage of the form:
itemId -> userId, userId, userId
and
userId -> itemId, itemId, itemId
AddItemSecurity is based on how I get data from a third party API, GetValidItemIds is how I want to use it at runtime.
There are potentially 2000 users and 10 million items.
Item ids are of the form 2007123456, 2010001234 (10 digits, where the first four represent the year).
AddItemSecurity does not have to perform super fast, but GetValidIds needs to be subsecond. Also, if there is an update on an existing itemId I need to remove that itemId for users no longer in the list.
I'm trying to think about how I should store this in an optimal fashion. Preferably on disk (with caching), but I want the code maintainable and clean.
If the item ids had started at 0, I thought about creating a byte array of length MaxItemId / 8 for each user, and setting a true/false bit if the item was present or not. That would limit the array length to a little over 1 MB per user and give fast lookups as well as an easy way to update the list per user. By persisting this as memory-mapped files with the .NET 4 framework, I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the id, stripping out the year, and storing an array per year could be a solution.
The ItemId -> UserId[] list can be serialized directly to disk and read/write with a normal FileStream in order to persist the list and diff it when there are changes.
Each time a new user is added, all the lists have to be updated as well, but this can be done nightly.
Question
Should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL Server will not perform fast enough, and it would add overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thoughts or insights on the matter are appreciated. And I want to try to solve it without adding too much hardware :)
[Update 2010-03-31]
I have now tested with SQL server 2008 under the following conditions.
Table with two columns (userid,itemid) both are Int
Clustered index on the two columns
Added ~800,000 items for 180 users - a total of 144 million rows
Allocated 4 GB RAM for SQL Server
Dual-core 2.66 GHz laptop
SSD disk
Use a SqlDataReader to read all itemid's into a List
Loop over all users
If I run one thread it averages 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still OK. From there on the results degrade. Adding a third thread brings a lot of the queries up to 2 seconds. A fourth thread brings them up to 4 seconds, and a fifth spikes some of the queries up to 50 seconds.
The CPU is maxed out while this is going on, even with one thread. My test app takes some of it due to the tight loop, and SQL the rest.
Which leads me to the conclusion that it won't scale very well. At least not on my tested hardware. Are there ways to optimize the database, say by storing an array of ints per user instead of one record per item? But that makes it harder to remove items.
[Update 2010-03-31 #2]
I did a quick test with the same data, putting it as bits in memory-mapped files. It performs much better. Six threads yield access times between 0.02s and 0.06s. Purely memory bound. The mapped files were mapped by one process and accessed by six others simultaneously. And whereas the SQL-based solution took 4 GB, the files on disk took 23 MB.
After much testing I ended up using Memory Mapped Files, marking them with the sparse bit (NTFS), using code from NTFS Sparse Files with C#.
Wikipedia has an explanation of what a sparse file is.
The benefit of using a sparse file is that I don't have to care about what range my ids are in. If I only write ids between 2006000000 and 2010999999, the file will only allocate 625,000 bytes from offset 250,750,000 in the file. All space up to that offset is unallocated in the file system. Each id is stored as a set bit in the file, sort of treated as a bit array. And if the id sequence suddenly changes, then it will allocate in another part of the file.
In order to retrieve which ids are set, I can perform an OS call to get the allocated parts of the sparse file, and then I check each bit in those sequences. Also, checking whether a particular id is set is very fast. If it falls outside the allocated blocks, then it's not there; if it falls within, it's merely one byte read and a bit-mask check to see if the correct bit is set.
So for the particular scenario where you have many id's which you want to check on with as much speed as possible, this is the most optimal way I've found so far.
And the good part is that the memory mapped files can be shared with Java as well (which turned out to be something needed). Java also has support for memory mapped files on Windows, and implementing the read/write logic is fairly trivial.
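A minimal sketch of the bit-per-id idea on top of a memory-mapped file, ignoring the NTFS sparse-file marking referenced above. The file path and capacity are placeholders, and in practice you would keep the mapping and accessor open rather than reopening them per call:

using System.IO;
using System.IO.MemoryMappedFiles;

class ItemBitmap
{
    // Enough bits for 10-digit ids up to ~2.1 billion (one bit per id).
    const long Capacity = 2100000000L / 8;

    // Sets the bit for an item id in the per-user bitmap file.
    public static void SetItem(string path, long itemId)
    {
        using (var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.OpenOrCreate, null, Capacity))
        using (var accessor = mmf.CreateViewAccessor())
        {
            long byteOffset = itemId / 8;
            byte current = accessor.ReadByte(byteOffset);
            accessor.Write(byteOffset, (byte)(current | (1 << (int)(itemId % 8))));
        }
    }

    // Checks whether the bit for an item id is set.
    public static bool HasItem(string path, long itemId)
    {
        using (var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.OpenOrCreate, null, Capacity))
        using (var accessor = mmf.CreateViewAccessor())
        {
            byte current = accessor.ReadByte(itemId / 8);
            return (current & (1 << (int)(itemId % 8))) != 0;
        }
    }
}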
I really think you should try a nice database before you make your decision. Something like this will be a challenge to maintain in the long run. Your user-base is actually quite small. SQL Server should be able to handle what you need without any problems.
2000 users isn't too bad but with 10 mil related items you really should consider putting this into a database. DBs do all the storage, persistence, indexing, caching etc. that you need and they perform very well.
They also allow for better scalability into the future. If you suddenly need to deal with two million users and billions of settings having a good db in place will make scaling a non-issue.

Storing Real Time data into 1000 files

I have a program that receives real-time data on 1000 topics. It receives, on average, 5000 messages per second. Each message consists of two strings: a topic and a message value. I'd like to save these strings along with a timestamp indicating the message arrival time.
I'm using 32 bit Windows XP on 'Core 2' hardware and programming in C#.
I'd like to save this data into 1000 files -- one for each topic. I know many people will want to tell me to save the data into a database, but I don't want to go down that road.
I've considered a few approaches:
1) Open up 1000 files and write into each one as the data arrives. I have two concerns about this. I don't know if it is possible to open up 1000 files simultaneously, and I don't know what effect this will have on disk fragmentation.
2) Write into one file and -- somehow -- process it later to produce 1000 files.
3) Keep it all in RAM until the end of the day and then write one file at a time. I think this would work well if I have enough ram although I might need to move to 64 bit to get over the 2 GB limit.
How would you approach this problem?
I can't imagine why you wouldn't want to use a database for this. This is what they were built for. They're pretty good at it.
If you're not willing to go that route, storing them in RAM and rotating them out to disk every hour might be an option but remember that if you trip over the power cable, you've lost a lot of data.
Seriously. Database it.
Edit: I should add that getting a robust, replicated and complete database-backed solution would take you less than a day if you had the hardware ready to go.
Doing this level of transaction protection in any other environment is going to take you weeks longer to set up and test.
Like n8wrl, I would also recommend a DB. But if you really dislike that option...
Let's find another solution ;-)
As a minimum, I would use two threads. The first is a worker, receiving all the data and putting each object (timestamp, two strings) into a queue.
Another thread checks this queue (perhaps notified via an event, or by checking the Count property). This thread dequeues each object, opens the specific file, writes it out, closes the file, and proceeds to the next item.
I would start with this first approach and take a look at the performance. If it sucks, do some measuring of where the problem is and try to fix it (put open files into a dictionary (name, StreamWriter), etc.).
But on the other side a DB would be so fine for this problem...
One table, four columns (id, timestamp, topic, message), one additional index on topic, ready.
First calculate the bandwidth! 5000 messages/sec at 2 KB each = 10 MB/sec. Each minute, 600 MB. Well, you could keep that in RAM. Then flush each hour.
Edit: corrected mistake. Sorry, my bad.
I agree with Oliver, but I'd suggest a modification: have 1000 queues, one for each topic/file. One thread receives the messages, timestamps them, then sticks them in the appropriate queue. The other simply rotates through the queues, seeing if they have data. If so, it reads the messages, then opens the corresponding file and writes the messages to it. After it closes the file, it moves to the next queue. One advantage of this is that you can add additional file-writing threads if one can't keep up with the traffic. I'd probably first try setting a write threshold, though (defer processing a queue until it's got N messages) to batch your writes. That way you don't get bogged down opening and closing a file to only write one or two messages.
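A minimal sketch of the receiver-plus-writer arrangement with per-topic queues; the batch threshold, sleep interval, timestamp format, and file layout are assumptions:

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading;

class TopicLogger
{
    private readonly ConcurrentDictionary<string, ConcurrentQueue<string>> _queues =
        new ConcurrentDictionary<string, ConcurrentQueue<string>>();

    // Called by the receiving thread: timestamp the message and queue it by topic.
    public void OnMessage(string topic, string value)
    {
        var queue = _queues.GetOrAdd(topic, _ => new ConcurrentQueue<string>());
        queue.Enqueue(DateTime.UtcNow.ToString("o") + "\t" + value);
    }

    // Run on a dedicated writer thread: rotate through the queues and flush any
    // topic that has accumulated at least batchSize messages, to batch the writes.
    public void WriterLoop(string outputDir, int batchSize, CancellationToken cancel)
    {
        while (!cancel.IsCancellationRequested)
        {
            foreach (var pair in _queues)
            {
                if (pair.Value.Count < batchSize) continue;

                using (var writer = File.AppendText(Path.Combine(outputDir, pair.Key + ".log")))
                {
                    string line;
                    while (pair.Value.TryDequeue(out line))
                        writer.WriteLine(line);
                }
            }
            Thread.Sleep(100);   // avoid spinning when everything is quiet
        }
    }
}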
I'd like to explore a bit more why you don't want to use a DB - they're GREAT at things like this! But on to your options...
1000 open file handles doesn't sound good. Forget disk fragmentation - O/S resources will suck.
This is close to db-ish-ness! Also sounds like more trouble than it's worth.
RAM = volatile. You spend all day accumulating data and have a power outage at 5pm.
How would I approach this? DB! Because then I can query index, analyze, etc. etc.
:)
I would agree with Kyle and go with a package product like PI. Be aware PI is quite expensive.
If you're looking for a custom solution, I'd go with Stephen's with some modifications. Have one server receive the messages and drop them into a queue. You can't use a file, though, to hand off the messages to the other process, because you're going to have locking issues constantly. Probably use something like MSMQ (MS Message Queuing), but I'm not sure about the speed of that.
I would also recommend using a db to store your data. You'll want to do bulk inserts of data into the db, though, as I think you would need some hefty hardware to allow SQL Server to do 5000 transactions a second. You're better off doing a bulk insert every, say, 10,000 messages that accumulate in the queue.
DATA SIZES:
Average message: ~50 bytes ->
smalldatetime = 4 bytes + topic (~10 characters, non-Unicode) = 10 bytes + message (~31 characters, non-Unicode) = 31 bytes.
50 * 5000 = 244 KB/sec -> 14 MB/min -> 858 MB/hour
Perhaps you don't want the overhead of a DB install?
In that case, you could try a filesystem-based database like sqlite:
SQLite is a software library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. SQLite is the most widely deployed SQL database engine in the world. The source code for SQLite is in the public domain.
I would make 2 separate programs: one to take the incoming requests, format them, and write them out to one single file, and another to read from that file and write the requests out. Doing things this way allows you to minimize the number of open file handles while still handling the incoming requests in real time. If you make the first program format its output correctly, then processing it into the individual files should be simple.
I'd keep a buffer of the incoming messages, and periodically write the 1000 files sequentially on a separate thread.
If you don't want to use a database (and I would, but assuming you don't), I'd write the records to a single file - append operations are as fast as they can be - and use a separate process/service to split the file up into the 1000 files. You could even roll the file over every X minutes, so that, for example, every 15 minutes you start a new file and the other process starts splitting the previous one up into 1000 separate files.
All this does beg the question of why not a DB, and why you need 1000 different files - you may have a very good reason - but then again, perhaps you should re-think your strategy and make sure the reasoning is sound before you go too far down this path.
I would look into purchasing a real-time data historian package, something like a PI System or Wonderware Data Historian. I have tried things like this with files and an MS SQL database before and it didn't turn out well (it was a customer requirement and I wouldn't suggest it). These products have APIs, and they even have packages where you can query the data just as if it were SQL.
It wouldn't allow me to post hyperlinks, so just google those two products and you will find information on them.
EDIT
If you do use a database like most people are suggesting I would recommend a table for each topic for historical data and consider table partitioning, indexes, and how long you are going to store the data.
For example, if you are going to store a day's worth and it's one table for each topic, you are looking at 5 updates a second x 60 seconds in a minute x 60 minutes in an hour x 24 hours = 432,000 records a day. After exporting the data, I would imagine that you would have to clear the data for the next day, which will cause a lock, so you will have to queue your writes to the database. Then, if you are going to rebuild the index so that you can do any querying on it, that will cause a schema modification lock, and you would need MS SQL Enterprise Edition for online index rebuilding. If you don't clear the data every day, you will have to make sure you have plenty of disk space to throw at it.
Basically what I'm saying weigh the cost of purchasing a reliable product against building your own.
