I have a table which stores timestamped readings from a collection of sensors; these readings are taken 14,400 times per day (every 6 seconds).
There are 4 sensors, and they share the same data table.
At the moment the schema is as follows:
id (int-PK)
time (DateTime)
sensor (int)
reading (int)
This works perfectly well, and I have the primary key set to autoincrement.
It seems silly to have this primary key at all, however, since I never refer to it. Would I be better off using a combination of time and sensor as a composite key?
If I did use a composite key, I assume my bytes per row would decrease too? This is relevant since the table is over 10 million rows, so any saving is worth it.
It seems win-win, but I wanted to see what the repercussions of this approach would be.
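For reference, here's a sketch of the composite-key version I have in mind (the table name dbo.SensorReading is just a placeholder; the columns are as above):
-- Hypothetical composite-key variant of the readings table
CREATE TABLE dbo.SensorReading
(
    [time]  datetime NOT NULL,
    sensor  int      NOT NULL,
    reading int      NOT NULL,
    CONSTRAINT PK_SensorReading PRIMARY KEY ([time], sensor)
);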
Composite indexes, and especially composite primary keys, should be avoided. The index is wider, and that is bad for performance (and memory usage). In my personal opinion, it's also bad design to have a composite primary key, since there is no longer a single, simple way of referencing your row.
My advice would be to stick to the design you have now.
You are currently using a surrogate key and are evaluating a move to a natural key.
Working with surrogate keys has advantages over natural keys, among them:
Immutability
Requirement changes
Performance
Compatibility
Uniformity
(From Wikipedia)
You can look for other posts about surrogate vs. natural keys on Stack Overflow.
But every design is different. As the database analyst, you should evaluate what the best decision is for your project.
Stick with the design; I've never had anything but problems putting a datetime in a PK. When your inserts start failing because of duplicates, you'll wish you hadn't done it.
If you want to save space, go with a tinyint for the sensor column (you have only 4 different values), and possibly something smaller for reading; I doubt the sensor can produce the 4 billion or so distinct values that an int can store, so most likely you can use a smallint or tinyint for it as well (see the sketch after the table below).
bigint    8 bytes  -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
int       4 bytes  -2,147,483,648 to 2,147,483,647
smallint  2 bytes  -32,768 to 32,767
tinyint   1 byte   0 to 255
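A minimal sketch of the trimmed-down table (the table name is made up; smallint for reading is an assumption about your sensor's range):
-- Keep the surrogate PK, shrink the other columns
CREATE TABLE dbo.SensorReading
(
    id      int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    [time]  datetime NOT NULL,
    sensor  tinyint  NOT NULL,  -- only 4 sensors, so 1 byte is plenty
    reading smallint NOT NULL   -- assumes readings fit in -32,768 to 32,767
);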
Using a composite primary key (or unique index) on 10M rows could easily eat up any storage space gained by removing the int PK (and then some). Also, referencing a row from this table would become a lot more difficult.
I always keep an int (or bigint if required) PK on any table. The storage space is normally relatively small compared to the rest of the data, and having an easy way of linking/referencing rows always in place makes life a lot easier with respect to enhancements and changes to your data model.
Thanks to the wonderful article The Cost of GUIDs as Primary Keys, we have the COMB GUID. Based on current implementations, there are two approaches:
use last 6 bytes for timestamp: GUIDs as fast primary keys under multiple databases
use last 8 bytes for timestamp by using windows tick: GUID COMB strategy in EF4.1 (CodeFirst)
We all know that with a 6-byte timestamp in the GUID, there are more random bytes left over to reduce the chance of collisions. However, more GUIDs sharing the same timestamp will be created, and those are not sequential at all. On that basis, the 8-byte timestamp would seem preferable.
So it seems a hard choice. The first article above, GUIDs as fast primary keys under multiple databases, says:
Before we continue, a short footnote about this approach: using a 1-millisecond-resolution timestamp means that GUIDs generated very close together might have the same timestamp value, and so will not be sequential. This might be a common occurrence for some applications, and in fact I experimented with some alternate approaches, such as using a higher-resolution timer such as System.Diagnostics.Stopwatch, or combining the timestamp with a "counter" that would guarantee the sequence continued until the timestamp updated. However, during testing I found that this made no discernible difference at all, even when dozens or even hundreds of GUIDs were being generated within the same one-millisecond window. This is consistent with what Jimmy Nilsson encountered during his testing with COMBs as well
I just wonder if someone who knows database internals could shed some light on the above observation. Is it because the database server keeps the data in memory and only writes to disk when it reaches a certain threshold, so that the reordering of rows inserted with non-sequential GUIDs sharing the same timestamp generally happens in memory, with minimal performance penalty?
Update:
Based on our testing, the COMB GUID did not reduce table fragmentation compared with a random GUID, contrary to what is claimed around the internet. It seems the only way right now is to have SQL Server generate the sequential GUID itself.
The article you referenced is from 2002 and is very old. Just use newsequentialid (available in SQL Server 2005 and up). This guarantees that each new id you generate is greater than the previous one, solving the index fragmentation/page split issue.
Another aspect I'd like to mention, though, that the writer of that article glossed over, is that using 16 bytes when you only need 4 is not a good idea. Let's say you have a table with 500,000 rows averaging 150 bytes not including the clustered column, and the table has 3 nonclustered indexes (which repeat the clustered column in each row), each in turn with rows averaging 4 bytes, 25 bytes, and 50 bytes not counting the clustered column.
The storage requirements at perfect 100% fill factor are then (all numbers in megabytes except where %):
Item   Clust     50     25      4   Total
----   -----  -----  -----  -----  ------
GUID    79.1   31.5   19.6    9.5   139.7
int     73.4   25.7   13.8    3.8   116.7
%imp    7.2%  18.4%  29.6%  60.0%   16.5%
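To make the arithmetic concrete, here is how the 4-byte index column works out (row count and sizes as stated above; 1 MB = 1,048,576 bytes; page overhead ignored):
GUID clustering key: 500,000 rows x (4 + 16) bytes = 10,000,000 bytes ≈ 9.5 MB
int clustering key:  500,000 rows x (4 + 4) bytes  =  4,000,000 bytes ≈ 3.8 MB  (60% smaller)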
In the nonclustered index having just one int column of 4 bytes (a common scenario), switching the clustered index to an int makes it 60% smaller! This translates directly into a 60% performance improvement for any scans on the table--and that's conservative, because with smaller rows, page splits will occur less often and the fragmentation will stay better longer.
Even in the clustered index itself, there's still a 7.2% performance improvement, which is not nothing.
What if you used GUIDs throughout your entire database, which had tables with a similar profile to this one, where switching to int would yield a 16.5% reduction in size, and the database itself was 1.397 terabytes in size? Your whole database would be 230 GB larger (refer to the Total column: 139.7 - 116.7). That translates into real money in the real world for high-availability storage, and it moves your disk purchase schedule earlier in time, which is harmful to your company's bottom line.
Do not use larger data types than necessary, ever. It's like adding weight to your car for no reason: you will pay for it (if not in speed, then in fuel economy).
UPDATE
Now that I know you are creating the GUID in your client-side code, I can see more clearly the nature of your problem. If you are able to defer creating the GUID until row insertion time, here's one way to accomplish that.
First, set a default for your CustomerID column:
ALTER TABLE dbo.Customer ADD CONSTRAINT DF_Customer_CustomerID
   DEFAULT (newsequentialid()) FOR CustomerID;
Now you don't have to specify what value to insert for CustomerID in any INSERT, and your query could look like this:
DECLARE @Name varchar(100) = 'Acme Spy Devices';
INSERT dbo.Customer (Name)
OUTPUT inserted.CustomerID -- a GUID
VALUES (@Name);
In this very simple example, you have inserted a new row to the Customer table, and returned a rowset to the client containing the just-created value, all in one query.
Note that you could not explicitly insert VALUES (newsequentialid(), @Name) instead: NEWSEQUENTIALID() can only be used in a DEFAULT constraint, so if you need to supply the value yourself you would have to fall back to NEWID().
I wonder about Guid duplication. I am creating a Guid to save to a database table as an entity's primary key.
Account account = new Account(Guid.NewGuid());
But I am confused. Could this cause duplicates in the database table, since I am manually creating the primary key and inserting it into the database?
The database engine does not generate the ids. After saving myriads of records, is there a possibility of duplicates?
Not really.
How much of "not really" depends on the GUID type, and on your understanding of probabilities.
For a "real" GUID, version 1, the value is guaranteed to be unique. It's formed by combining the MAC address of your network card (unique, unless you change it manually) with a timestamp.
A pseudo-random GUID, version 4, is not guaranteed to be unique, but it is extremely unlikely to get a collision anyway. You have 122 bits to work with, and 2^122 is a very big number. Like, really big. Using Guid.NewGuid() is fine - although it should be noted that the random numbers used to generate the GUID are not crypto-random.
Of course, different implementations of GUIDv4 will have markedly different entropies. If you just use Random to generate the numbers, you're nowhere near the 122-bit maximum. So don't think you can just write your own code to generate GUIDs; most such attempts end up with nothing more unique than Random.Next(), which is by far not good enough for a primary key in a database.
Note that GUIDs are commonly used in scenarios like replication, which are built entirely on independently generated GUIDs being unique.
the total number of unique such GUIDs is 2^122 (approximately 5.3×10^36). This number is so large that the probability of the same number being generated randomly twice is negligible
From Wiki
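To put a rough number on "negligible" (a standard birthday-problem estimate, nothing specific to GUIDs): you would need about sqrt(2 × ln 2 × 2^122) ≈ 2.7×10^18 random version-4 GUIDs before the probability of even a single collision reaches 50%. That is on the order of a billion GUIDs per second, every second, for the better part of a century.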
For your information, SQL Server can generate the Guid for you.
Make the data type of the ID column uniqueidentifier, then in the Properties pane set RowGuid to Yes.
P.S.
Make sure your ID is Primary Key.
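A T-SQL sketch of roughly what that designer setting amounts to (the table and column names here are made up; the DEFAULT is what actually supplies the value server-side):
CREATE TABLE dbo.Account
(
    ID   uniqueidentifier ROWGUIDCOL NOT NULL
         CONSTRAINT DF_Account_ID DEFAULT (NEWID())
         CONSTRAINT PK_Account PRIMARY KEY,
    Name nvarchar(100) NOT NULL
);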
I am migrating an old database (Oracle), and there are a few tables like CountryCode, DeptCode and RoleCodes whose primary key is a string (the code). I am thinking about adding a number column as the primary key because it would be faster in joins. These tables are not really big.
I am wondering whether the primary key for those tables should start from 1, or whether it could start from, say, 100 just to differentiate between the tables' PKs, although I don't think I would be showing them on reports.
For sequence-generated IDs, I would suggest starting at different values if it's easy to do (depends on your database etc). You shouldn't be using this to differentiate between them in code, but it can make testing more reasonable.
Before now, I've had a situation where I accidentally used the foreign key of one table as if it were the foreign key for another table. The tests passed because the IDs were coincidentally the same. After we discovered the problem, we changed the initial seeds and found the tests were a lot clearer.
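For example, in SQL Server you could simply seed each table's identity differently (a sketch; the seeds and column definitions here are arbitrary):
CREATE TABLE dbo.CountryCode
(
    CountryCodeId int IDENTITY(100, 1) NOT NULL PRIMARY KEY,
    Code          varchar(10) NOT NULL UNIQUE
);

CREATE TABLE dbo.DeptCode
(
    DeptCodeId int IDENTITY(1000, 1) NOT NULL PRIMARY KEY,
    Code       varchar(10) NOT NULL UNIQUE
);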
You shouldn't do it to differentiate between tables. That is just not practical.
Not all primary keys have to start at 1, as in the case of an order number.
The rationale you're using to switch to an integer primary key doesn't seem valid: the performance gain you'd see using an INT rather than the original codes (which I assume are strings) will be negligible. The PK is always indexed, and indexes on short strings or numerics are as good as instant. So unless you really need an INT, I'd be tempted to stick with the original data type and work with the original data; that also simplifies data migration (which is something that should be considered while doing any work).
It is very common for example in ERP systems to define number ranges that
represent a certain group of items.
This can be done either as positions within a larger number, e.g.
1234567890
where digits 4 - 6 represent the region code
and digits 7 - 8 represent the dept code ...
or, as I suspect in your case, as separate ranges at the same position, like
1000 - 1999 Region codes
2000 - 2999 DeptCode
3000 - 3999 RoleCode
Therefore: no, it does not necessarily start with 1.
Bigger ERP systems even have configuration sections for number ranges!
Now, from a database point of view:
Yes, your tables should always have a primary key!
Having one will tremendously improve performance in the average case.
(But in most database systems, if you do not provide one, the DBMS will create one internally that you cannot see or work with. Some DBMSs even create indices for it, but that's another story.)
I think the starting number or starting value of the primary key does not matter.
What is important is that the FK columns of the joining tables hold the same values as the PK of the main table.
A surrogate key can have any values, as long as they are unique. That's what makes it "surrogate" after all - values have no intrinsic meaning on their own, and shouldn't generally even be shown to the user. That being said, you could think about using different seeds, just for testing purposes, as Jon Skeet suggested.
That being said, do you really need to introduce a new (surrogate) key? The existing natural key could actually lead to fewer1 JOINs, and may be useful for clustering. While there are legitimate uses for surrogate keys, don't do it just because it is "fashionable"; always be aware of the trade-offs you are making and pick the right balance for your concrete needs.
1 It is automatically "propagated" down foreign keys, so you don't need to JOIN the child table to the parent just to get the natural key - the natural key is already in the child.
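A tiny illustration of that footnote (the table and column names are made up): with a natural key, the child table already carries the value you would otherwise join for.
-- Natural key: no join needed to filter cities by country code
CREATE TABLE dbo.Country
(
    CountryCode char(2)      NOT NULL PRIMARY KEY,
    Name        varchar(100) NOT NULL
);

CREATE TABLE dbo.City
(
    CityId      int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    CountryCode char(2)      NOT NULL REFERENCES dbo.Country (CountryCode),
    Name        varchar(100) NOT NULL
);

SELECT Name
FROM dbo.City
WHERE CountryCode = 'DE';  -- with a surrogate FK you would have to join dbo.Country here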
Doesn't matter what int the primary key starts from.
Assuming the codes aren't updated regularly, I don't believe an int will be any faster. It depends more heavily on whether the column is a varchar or a fixed, known size.
I personally always have a field named "Id" as the primary key of a table, defined as an int, or a bigint if necessary.
If the table matches up to an enumerated type, then I make sure the Id matches the EnumeratedType id, which can be any number - so no, it doesn't need to start from 1.
If it doesn't match an enumerated type, then I will usually use an auto-incrementing key starting from 1 but this is not always needed.
Note - that if the number of rows is small, then the difference between indexing on a number and on a varchar will be negligible.
Yes, it doesn't matter what integer it starts from; its main use is to identify each row uniquely and to form relationships with other tables.
I use VARCHAR throughout my app, and found something particularly confusing... Why do I need to define my SQL VARCHAR columns with a length, such as VARCHAR(50) or VARCHAR(1000)? Is the one and only purpose that this length constraint allows me to define my preferred maximum string length? Is there any performance difference or otherwise between VARCHAR(50) and VARCHAR(1000)?
That depends entirely on the internals of your DBMS. For example, if you index a varchar column, you will almost certainly get a keypart set to the maximum size.
That's because indexes have to be insanely efficient and you don't want to be mucking about with variable length fields in that case, since it will probably slow you down.
Even in the data area of the database, you may find that it simply allows for the largest size. I've seen proposals that just store a pointer in the row to an on-disk heap, but that means two disk reads per row, and I can't see that being a very good option for massive performance.
The sizes of your columns will affect performance in terms of things like how many records can be read in at one time, how many can fit in an n-ary tree index node, and so forth.
SQLite: Size limits are completely ignored.
PostgreSQL: VARCHAR(N) is essentially equivalent to TEXT CHECK (LENGTH(x) <= N). There is no performance advantage to declaring a maximum size.
MySQL: Determines whether the string length is stored as one byte or two bytes.
Oracle: Higher size limits have a performance disadvantage.
MS SQL Server: VARCHAR columns wider than the 900-byte index key limit cannot usefully be used as index keys.
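For instance, SQL Server's behaviour can be seen with a quick sketch (the table and index names are made up; the exact key size limit varies by version):
CREATE TABLE dbo.Demo
(
    ShortText varchar(50)   NULL,
    LongText  varchar(1000) NULL
);

CREATE INDEX IX_Demo_Short ON dbo.Demo (ShortText);  -- fine
CREATE INDEX IX_Demo_Long  ON dbo.Demo (LongText);   -- warns that keys may exceed the index key size limit,
                                                     -- and inserts of over-long values will then fail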
We have a large database of enquiries; each enquiry is referenced using a Guid. The Guid isn't very customer friendly, so we want to add an additional 5-digit "human id" (which is OK, as we very likely won't have more than 99,999 enquiries active at any time, and it's OK if a human id references multiple enquiries, as they aren't used for anything important).
1) Is there any way to have an IDENTITY column reset to 1 after 99999?
My current workaround to this is to use an INT IDENTITY(1,1) NOT NULL column and, when presenting a HumanId, take HumanId % 100000.
2) Is there any way to automatically "randomly distribute" the ids over [0..99999] so that two enquiries created one after the other don't get adjacent ids? I guess I'm looking for a reversible one-to-one hash function?
... Ideally I'd like to create this using T-SQL, automatically creating these ids when an enquiry is created.
If performance and concurrency aren't too much of an issue, you can use triggers and the MAX() function to calculate a "next human ID" value. You probably would want to keep your IDENTITY column as is and have the "human ID" in a separate column.
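A minimal sketch of that trigger approach (the table and column names are made up; it handles single-row inserts only and does not deal with concurrency or the wrap-around at 99999):
CREATE TRIGGER trg_Enquiry_HumanId
ON dbo.Enquiry
AFTER INSERT
AS
BEGIN
    -- Assign the next human id to the row that was just inserted
    UPDATE e
    SET    HumanId = (SELECT ISNULL(MAX(HumanId), 0) + 1 FROM dbo.Enquiry)
    FROM   dbo.Enquiry AS e
    JOIN   inserted    AS i ON i.EnquiryId = e.EnquiryId;
END;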
EDIT: On a side note, this sounds like a 'presentation layer' issue, which shouldn't be in your database. Your presentation layer of your application should have the code to worry about presenting a record in a human readable manner. Just a thought...
If you absolutely need to do this in the database, then why not derive your human-friendly value directly from the GUID column?
-- human_id doesn't have to be calculated when you retrieve the data
-- you could create a computed column on the table itself if you prefer
SELECT (CAST(your_guid_column AS BINARY(3)) % 100000) AS human_id
FROM your_table
This will give you a random-ish value between 0 and 99999, derived from the first 3 bytes of the GUID. If you want a larger, or smaller, range then adjust the divisor accordingly.
I would strongly recommend taking another look at your logic. Your approach has a few dangers, including:
It is always a bad idea to re-use IDs, even if the original record has become "obsolete": do you lose anything by continuing to grow IDs beyond 99999? The problem here is more likely to be with long-term maintenance, especially if there is any danger of the system evolving over time. Another thing to consider: is there any chance a user will take this reference number and use it to reference your system at some stage in the future?
With manually assigning a generated/random ID, you will need to ensure that multiple records are not assigned the same ID. There are a few options for doing this (for example, using transactions); however, you should ensure that the scope of the transactions is not going to leave you open to problems with concurrent transactions being blocked, which may cause issues such as poor performance. You may be best served by generating your ID externally (as SQL does not do random especially well) and then enforcing a unique constraint on your DB, perhaps in the way suggested by Firoz Ansari.
If you still want to reset the identity column, this can be done with the DBCC CHECKIDENT command.
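For example (the table name here is made up):
-- Reseed so that the next identity value handed out is 1 (assuming the table already has rows)
DBCC CHECKIDENT ('dbo.Enquiry', RESEED, 0);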
An example of generating random seeds in SQL server can be found here:
http://weblogs.sqlteam.com/jeffs/archive/2004/11/22/2927.aspx
You can create a composite primary key with two columns, say BatchId and HumanId.
Records in these columns will look like this:
BatchId, HumanId
1, 1
1, 2
1, 3
.
.
1, 99998
1, 99999
2, 1
2, 2
2, 3
Use MAX or ORDER BY ... DESC to get the next available HumanId for the current BatchId:
SELECT TOP 1 @NextHumanId = HumanId + 1
FROM [THAT_TABLE]
ORDER BY BatchId DESC, HumanId DESC;

IF @NextHumanId > 99999 SET @NextHumanId = 1;
Hope this helps.
You could have a table of available HumanIds: each time you add an enquiry you randomly pull a HumanId from the table (and DELETE it), and each time you delete the enquiry you add it back (by INSERTing it).
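A minimal sketch of that pool approach (the table and column names are made up):
-- Pool of unused human ids; populate it once with 0..99999
CREATE TABLE dbo.HumanIdPool
(
    HumanId int NOT NULL PRIMARY KEY
);

-- Claim a random id for a new enquiry and return it, in one statement
WITH pick AS
(
    SELECT TOP (1) HumanId
    FROM dbo.HumanIdPool
    ORDER BY NEWID()        -- random choice
)
DELETE FROM pick
OUTPUT deleted.HumanId;     -- the id you just claimed

-- When the enquiry is deleted, return its id to the pool, e.g.:
-- INSERT dbo.HumanIdPool (HumanId) VALUES (@ReleasedHumanId);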