I am wondering about GUID duplication. I am creating a Guid to use as an entity's primary key in a database table:
Account account = new Account(Guid.NewGuid());
But I am confused: does this risk creating a duplicate in the database table, since I am generating the primary key manually and inserting it into the database? The database engine does not generate the IDs. After saving myriads of records, is there a possibility of duplicates?
Not really.
How much of a "not really" that is depends on the GUID version, and on your understanding of probabilities.
A "real" GUID, version 1, is guaranteed to be unique: it's formed by combining the MAC address of your network card (unique, unless you change it manually) with a timestamp.
A pseudo-random GUID, version 4, is not guaranteed to be unique, but a collision is still extremely unlikely. You have 122 random bits to work with, and 2^122 is a very big number. Like, really big. Using Guid.NewGuid() is fine - although it should be noted that the random numbers used to generate the GUID are not cryptographically random.
Of course, different implementations of GUIDv4 can have markedly different entropy. If you just use Random to generate the numbers, you're nowhere near the 122-bit maximum. So don't think you can just write your own code to generate GUIDs; most such attempts end up with nothing more unique than Random.Next(), which is nowhere near good enough for a primary key in a database.
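To make the entropy point concrete, here is a minimal sketch (my own illustration, not code from the question) contrasting Guid.NewGuid() with a naive hand-rolled "GUID" built from Random; the naive version has far less effective entropy, which is exactly why it should not back a primary key:

```csharp
using System;

class GuidEntropyDemo
{
    static void Main()
    {
        // The right way: the framework generator draws on ~122 bits of randomness.
        Guid good = Guid.NewGuid();

        // A naive hand-rolled "GUID": 16 bytes from System.Random, which on older
        // .NET versions is seeded from the system clock and repeats far more easily.
        var rng = new Random();
        byte[] bytes = new byte[16];
        rng.NextBytes(bytes);
        Guid naive = new Guid(bytes);

        Console.WriteLine($"Framework GUID: {good}");
        Console.WriteLine($"Naive GUID:     {naive}"); // not even a valid version-4 GUID
    }
}
```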
Note that GUIDs are commonly used in scenarios like replication, which rely entirely on independently generated GUIDs being unique.
the total number of unique such GUIDs is 2^122 (approximately 5.3×10^36). This number is so large that the probability of the same number being generated randomly twice is negligible
From Wikipedia
For your information, SQL Server can generate the GUID for you.
Make the data type of the ID column uniqueidentifier, then in the column properties pane set RowGuid to Yes.
P.S.
Make sure your ID column is the primary key.
Thanks to the wonderful article The Cost of GUIDs as Primary Keys, we have the COMB GUID. Based on current implementations, there are two approaches:
use the last 6 bytes for the timestamp: GUIDs as fast primary keys under multiple databases
use the last 8 bytes for the timestamp, built from Windows ticks: GUID COMB strategy in EF4.1 (CodeFirst)
With a 6-byte timestamp, more bytes are left random, which reduces the chance of a GUID collision; however, more GUIDs end up sharing the same timestamp, and those are not sequential at all. On that basis, an 8-byte timestamp would seem preferable.
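For reference, here is a minimal sketch of the 6-byte variant (my own paraphrase of the approach in the first linked article, not code copied from it): the last 6 bytes of a random GUID are overwritten with a day count plus the time of day at ~1/300 s resolution, so values sort roughly by creation time under SQL Server's uniqueidentifier ordering.

```csharp
using System;

public static class CombGuid
{
    public static Guid NewComb()
    {
        byte[] guidBytes = Guid.NewGuid().ToByteArray();

        // Timestamp parts: days since the SQL Server epoch (1900-01-01) and the
        // time of day scaled to ~1/300 s, matching SQL Server datetime precision.
        DateTime now = DateTime.UtcNow;
        DateTime epoch = new DateTime(1900, 1, 1);
        byte[] daysBytes = BitConverter.GetBytes(new TimeSpan(now.Ticks - epoch.Ticks).Days);
        byte[] msecsBytes = BitConverter.GetBytes((long)(now.TimeOfDay.TotalMilliseconds / 3.333333));

        // BitConverter is little-endian on Windows; reverse so the most significant
        // bytes come first, then place the timestamp in the GUID's last 6 bytes
        // (the bytes SQL Server weighs most heavily when sorting uniqueidentifiers).
        Array.Reverse(daysBytes);
        Array.Reverse(msecsBytes);
        Array.Copy(daysBytes, daysBytes.Length - 2, guidBytes, guidBytes.Length - 6, 2);
        Array.Copy(msecsBytes, msecsBytes.Length - 4, guidBytes, guidBytes.Length - 4, 4);

        return new Guid(guidBytes);
    }
}
```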
So it seems a hard choice. The first article above, GUIDs as fast primary keys under multiple databases, says:
Before we continue, a short footnote about this approach: using a 1-millisecond-resolution timestamp means that GUIDs generated very close together might have the same timestamp value, and so will not be sequential. This might be a common occurrence for some applications, and in fact I experimented with some alternate approaches, such as using a higher-resolution timer such as System.Diagnostics.Stopwatch, or combining the timestamp with a "counter" that would guarantee the sequence continued until the timestamp updated. However, during testing I found that this made no discernible difference at all, even when dozens or even hundreds of GUIDs were being generated within the same one-millisecond window. This is consistent with what Jimmy Nilsson encountered during his testing with COMBs as well
I just wonder if someone who knows database internals could shed some light on the above observation. Is it because the database server keeps the data in memory and only writes to disk when it reaches a certain threshold, so the reordering of rows inserted with non-sequential GUIDs sharing the same timestamp generally happens in memory, with minimal performance penalty?
Update:
Based on our testing, the COMB GUID did not reduce table fragmentation compared with random GUIDs, contrary to what is claimed around the internet. It seems the only way right now is to have SQL Server generate the sequential GUID.
The article you referenced is from 2002 and is very old. Just use newsequentialid (available in SQL Server 2005 and up). This guarantees that each new id you generate is greater than the previous one, solving the index fragmentation/page split issue.
Another aspect I'd like to mention, though, that the writer of that article glossed over, is that using 16 bytes when you only need 4 is not a good idea. Let's say you have a table with 500,000 rows averaging 150 bytes not including the clustered column, and the table has 3 nonclustered indexes (which repeat the clustered column in each row), each in turn with rows averaging 4 bytes, 25 bytes, and 50 bytes not counting the clustered column.
The storage requirements at perfect 100% fill factor are then (all numbers in megabytes except where %):
Item Clust 50 25 4 Total
---- ----- ----- ----- ----- ------
GUID 79.1 31.5 19.6 9.5 139.7
int 73.4 25.7 13.8 3.8 116.7
%imp 7.2% 18.4% 29.6% 60.0% 16.5%
In the nonclustered index having just one int column of 4 bytes (a common scenario), switching the clustered key to an int makes the index 60% smaller! This translates directly into a 60% improvement for any scans on that index, and that's conservative: with smaller rows, page splits occur less often and fragmentation stays lower for longer.
Even in the clustered index itself, there's still a 7.2% improvement, which is not nothing at all.
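As a sanity check on the %imp column (my own arithmetic, not from the original answer): each index row is roughly its own columns plus the clustering key, so swapping a 16-byte GUID for a 4-byte int saves 12 bytes per row. The small differences from the table come from page and row overhead that this sketch ignores.

```csharp
using System;

class IndexSizeCheck
{
    static void Main()
    {
        // Average bytes per row *excluding* the clustering key, per the example above.
        var indexes = new (string Name, int RowBytes)[]
        {
            ("Clustered", 150), ("NC-50", 50), ("NC-25", 25), ("NC-4", 4)
        };

        foreach (var (name, rowBytes) in indexes)
        {
            double withGuid = rowBytes + 16;   // GUID clustering key
            double withInt  = rowBytes + 4;    // int clustering key
            double improvement = (withGuid - withInt) / withGuid * 100;
            // Ignores page headers and row overhead, so the figures are approximate.
            Console.WriteLine($"{name}: ~{improvement:F1}% smaller with int");
        }
    }
}
```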
What if you used GUIDs throughout your entire database, which had tables with a similar profile to this one (where switching to int yields a 16.5% reduction in size), and the database itself was 1.397 terabytes in size? Your whole database would be 230 GB larger (refer to the Total column: 139.7 vs 116.7). That translates into real money in the real world for high-availability storage. It moves your disk purchase schedule earlier in time, which is harmful to your company's bottom line.
Do not use larger data types than necessary, ever. It's like adding weight to your car for no reason: you will pay for it (if not in speed, then in fuel economy).
UPDATE
Now that I know you are creating the GUID in your client-side code, I can see more clearly the nature of your problem. If you are able to defer creating the GUID until row insertion time, here's one way to accomplish that.
First, set a default for your CustomerID column:
ALTER TABLE dbo.Customer ADD CONSTRAINT DF_Customer_CustomerID
   DEFAULT (newsequentialid()) FOR CustomerID;
Now you don't have to specify what value to insert for CustomerID in any INSERT, and your query could look like this:
DECLARE @Name varchar(100) = 'Acme Spy Devices';
INSERT dbo.Customer (Name)
OUTPUT inserted.CustomerID -- a GUID
VALUES (@Name);
In this very simple example, you have inserted a new row to the Customer table, and returned a rowset to the client containing the just-created value, all in one query.
If you wanted to insert an explicit value, you could use VALUES (newid(), @Name); note that newsequentialid() itself can only be used in a DEFAULT constraint, so it cannot be called directly in the INSERT.
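On the client side, reading that OUTPUT value back with plain ADO.NET could look like this sketch (the connection-string handling and method name are my own; the table and column names follow the example above, not the original question):

```csharp
using System;
using System.Data.SqlClient;

class InsertCustomer
{
    static Guid InsertAndReturnId(string connectionString, string name)
    {
        const string sql =
            "INSERT dbo.Customer (Name) OUTPUT inserted.CustomerID VALUES (@Name);";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@Name", name);
            connection.Open();

            // ExecuteScalar returns the single value produced by the OUTPUT clause:
            // the sequential GUID the DEFAULT constraint just generated.
            return (Guid)command.ExecuteScalar();
        }
    }
}
```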
I have a requirement to generate a semi-random code in C#/ASP.NET that has to be unique in the SQL Server database.
These codes need to be generated in batches of up to 100 codes per run.
Given the requirements, I'm not sure how I can do this without generating a code and then checking the database to see if it exists, which seems like a horrible way of doing it.
Here are the requirements:
Maximum 10 characters long (alpha-numeric only)
Must not be case sensitive
User can specify an optional 3 character prefix for the code
Must not violate 2 column unique constraint in the database, i.e. must be a unique "code text" within the "category" (CONSTRAINT ucCodes UNIQUE (ColumnCodeText, ColumnCategoryId))
So, given the 10-character limit, GUIDs are not an option. Given the case-insensitivity requirement, I think the mathematical probability of database collisions is fairly high.
At the same time, there are enough possible combinations that a straight look-up table in the DB would be prohibitive, I believe.
Is there a reasonably performant way of generating codes with these requirements that doesn't involve saving them to the DB one code at a time and waiting for a unique key violation to see if it goes through?
You have two options here.
You generate a new ID and insert it. If it throws a duplicate unique-key exception, try again until you succeed, or bail out if you run out of IDs. Performance will suffer if most of the IDs are already used up.
You pre-generate all the possible IDs and store them in a table. Whenever you need one, remove a random row and use its value as the ID. The database will take care of the concurrency for you, so uniqueness is guaranteed. If the first three letters are given as a prefix, you can simply add a WHERE clause to restrict the candidate rows to those matching that constraint.
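For the generation step in either option, a minimal sketch (my own illustration; the alphabet, RNG choice, and prefix handling are assumptions, not part of either answer) could look like this:

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;

static class CodeGenerator
{
    // Uppercase only, so codes compare equal regardless of the case a user types.
    private const string Alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";

    public static List<string> GenerateBatch(int count, string prefix = "")
    {
        prefix = (prefix ?? "").ToUpperInvariant();
        if (prefix.Length > 3)
            throw new ArgumentException("Prefix may be at most 3 characters.", nameof(prefix));

        int randomLength = 10 - prefix.Length;
        var codes = new HashSet<string>(StringComparer.OrdinalIgnoreCase);

        using (var rng = RandomNumberGenerator.Create())
        {
            byte[] buffer = new byte[randomLength];
            while (codes.Count < count)
            {
                rng.GetBytes(buffer);
                var chars = new char[randomLength];
                for (int i = 0; i < randomLength; i++)
                    chars[i] = Alphabet[buffer[i] % Alphabet.Length]; // slight modulo bias, fine for a sketch

                codes.Add(prefix + new string(chars));
            }
        }
        return new List<string>(codes);
    }
}
```

The generated batch still has to respect the two-column unique constraint at insert time; the codes are only statistically unlikely to collide.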
I am migrating an old database (Oracle), and there are a few tables like CountryCode, DeptCode and RoleCode whose primary key is a string (the code). I am thinking about adding a number column as the primary key because it would make joins faster. These tables are not really big.
I am wondering whether the primary key for those tables should start from 1, or whether it can start from, say, 100 just to differentiate between the tables' PKs, although I don't think I would be showing them on reports.
For sequence-generated IDs, I would suggest starting at different values if it's easy to do (depends on your database etc). You shouldn't be using this to differentiate between them in code, but it can make testing more reasonable.
Before now, I've had a situation where I accidentally used a foreign key for one table as if it were the foreign key for another table. The tests passed because the IDs were coincidentally the same. After we discovered the problem, we changed the initial seeds and found the tests were a lot clearer.
You shouldn't do it to differentiate between tables. That is just not practical.
Not all primary keys have to start at 1, as in the case of an order number.
The rationale you're using to switch to an integer primary key doesn't seem valid: the performance gain you'd see using an INT rather than the original codes (which I assume are strings) will be negligible. The PK is always indexed, and index lookups on short strings are practically as fast as on numerics. So unless you really need an INT, I'd be tempted to stick with the original data type and work with the original data; it simplifies data migration (which is something that should be considered whilst doing any work).
It is very common, for example in ERP systems, to define number ranges that represent a certain group of items.
This can be positional within a bigger number, e.g.
1234567890
where digits 4 - 6 represent the region code and digits 7 - 8 represent the dept code...
or, as I suspect in your case, separate ranges at the same position, like
1000 - 1999 Region codes
2000 - 2999 DeptCode
3000 - 3999 RoleCode
Therefore: no, it does not necessarily have to start with 1.
Bigger ERP systems even have configuration sections for number ranges!
Now, from a database point of view:
Yes, your tables should always have a primary key!
Having one will tremendously improve performance in the average case. (In most database systems, if you do not provide one, the DBMS will set one up behind the scenes that you cannot see or work with; some DBMSs even create indexes, but that's another story.)
I think it does not matter what starting number or starting value the primary key holds.
What is important is that the joining tables' FKs hold the same values as the PK of the main table.
A surrogate key can have any values, as long as they are unique. That's what makes it "surrogate" after all - values have no intrinsic meaning on their own, and shouldn't generally even be shown to the user. That being said, you could think about using different seeds, just for testing purposes, as Jon Skeet suggested.
That being said, do you really need to introduce a new (surrogate) key? The existing natural key could actually lead to fewer¹ JOINs, and may be useful for clustering. While there are legitimate uses for surrogate keys, don't do it just because it is "fashionable"; always be aware of the tradeoffs you are making and pick the right balance for your concrete needs.
¹ It is automatically "propagated" down foreign keys, so you don't need to JOIN the child table to the parent just to get the natural key; the natural key is already in the child.
It doesn't matter what int the primary key starts from.
Assuming the codes aren't updated regularly, I don't believe that an int will be any faster. It depends more heavily on whether the column is a varchar or a fixed-size type.
I personally always have a field named "Id" as the primary key of a table, defined as an int, or a bigint if necessary.
If the table matches up to an enumerated type, then I make sure the Id matches the enumerated type's id, which can be any number - so no, it doesn't need to start from 1.
If it doesn't match an enumerated type, then I will usually use an auto-incrementing key starting from 1, but this is not always needed.
Note that if the number of rows is small, the difference between indexing on a number and on a varchar will be negligible.
Yes, it doesn't matter what integer it starts from; its main use is to identify each row uniquely and to establish relationships with other tables.
I have a table which stores timestamped readings from a collection of sensors; these readings are taken 14,400 times per day (every 6 seconds).
There are 4 sensors, and they share their main data table.
At the moment the schema is as follows:
id (int-PK)
time (DateTime)
sensor (int)
reading (int)
This works perfectly well, and I have the primary key set to autoincrement.
It seems silly to have this primary key at all, however, since I never refer to it. Would I be better off using a combination of time and sensor as a composite key?
If I did use a composite key, I assume my bytes per row would be decreased too? This is relevant since the table is over 10m rows, so any saving is worth it.
It seems win-win, but I wanted to see what the repercussions of this approach would be.
Composite indexes, and especially composite primary keys, should be avoided. The index is wider, and this is bad for performance (and memory usage). In my personal opinion, it's also bad design to have a composite primary key, since there is no longer a single simple way of referencing your row.
My advice would be to stick to the design you have now.
At this time you are using a surrogate key, and you are evaluating a move to natural keys.
Working with surrogate keys has advantages over natural keys, which you can read about at the previous link:
Immutability
Requirement changes
Performance
Compatibility
Uniformity
(From wikipedia)
You can look for other posts about surrogate vs. natural keys on Stack Overflow.
But each design is different from the others. As the database analyst, you should evaluate what the best decision for your project is.
Stick with the design; I've never had anything but problems putting a datetime in a PK. When your inserts start failing because of duplicates, you'll wish you hadn't done it.
If you want to save space, go with a tinyint for the sensor column (you have only 4 different values), and possibly something smaller for reading: I doubt the sensor can record anywhere near the billions of distinct values that an int can store, so most likely you can use a smallint or tinyint for it (see the rough arithmetic after the table below).
bigint     8 bytes    -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
int        4 bytes    -2,147,483,648 to 2,147,483,647
smallint   2 bytes    -32,768 to 32,767
tinyint    1 byte     0 to 255
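A rough back-of-the-envelope illustration of the savings (my own arithmetic, ignoring row and page overhead): over 10 million rows, shrinking sensor from int to tinyint saves 3 bytes per row and reading from int to smallint saves 2 bytes, roughly 50 MB in total, which is already more than the ~40 MB you would reclaim by dropping the 4-byte int primary key.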
Using a composite primary key (or unique index) on 10M rows could easily eat up any storage space gained by removing the int PK (and more). Also, referencing a row in this table would become a lot more difficult.
I always keep an int (or bigint if required) PK on any table. The storage space is normally relatively small compared to the rest of the data, and having an easy way of linking/referencing rows always in place makes life a lot easier with respect to enhancements and changes to your data model.
I have a database that is part of a merge replication scheme and has a GUID as its PK. Specifically, the data type is uniqueidentifier, the default value is (newsequentialid()), and RowGUID is set to Yes. When I do an InsertOnSubmit(caseNote), I thought I would be able to leave CaseNoteID alone and the database would insert the next sequential GUID, as it does when you manually enter a new row in SSMS. Instead it sends 00000000-0000-0000-0000-000000000000. If I add CaseNoteID = Guid.NewGuid(), then I get a GUID, but not a sequential one (I'm pretty sure).
Is there a way to let SQL create the next sequential id on a LINQ InsertOnSubmit()?
For reference below is the code I am using to insert a new record into the database.
CaseNote caseNote = new CaseNote
{
CaseNoteID = Guid.NewGuid(),
TimeSpentUnits = Convert.ToDecimal(tbxTimeSpentUnits.Text),
IsCaseLog = chkIsCaseLog.Checked,
ContactDate = Convert.ToDateTime(datContactDate.Text),
ContactDetails = memContactDetails.Text
};
caseNotesDB.CaseNotes.InsertOnSubmit(caseNote);
caseNotesDB.SubmitChanges();
Based on one of the suggestions below, I enabled Auto Generated Value in LINQ for that column, and now I get the following error: "The target table of the DML statement cannot have any enabled triggers if the statement contains an OUTPUT clause without INTO clause."
Ideas?
In the Linq to Sql designer, set the Auto Generated Value property to true for that column.
This is equivalent to the IsDbGenerated property for a column. The only limitation is that you can't update the value using Linq.
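If you map the entity by hand instead of through the designer, a rough equivalent using attribute mapping (the table, column, and property names are assumed from the question; AutoSync makes LINQ to SQL read the generated value back on insert) would look something like this:

```csharp
using System;
using System.Data.Linq.Mapping;

[Table(Name = "dbo.CaseNotes")]
public partial class CaseNote
{
    // The database default (NEWSEQUENTIALID()) supplies the value;
    // IsDbGenerated + AutoSync make LINQ to SQL fetch it after the INSERT.
    [Column(IsPrimaryKey = true, IsDbGenerated = true,
            AutoSync = AutoSync.OnInsert, DbType = "UniqueIdentifier NOT NULL")]
    public Guid CaseNoteID { get; set; }

    [Column]
    public string ContactDetails { get; set; }
}
```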
From the top of the "Related" box on the right:
Sequential GUID in Linq-to-Sql?
If you really want the "next" value, use an int64 instead of a GUID. A COMB GUID will ensure that the GUIDs are ordered.
Regarding your "The target table of the DML statement cannot have any enabled triggers if the statement contains an OUTPUT clause without INTO clause" error, check out this MS KB article; it appears to be a bug in LINQ:
http://support.microsoft.com/kb/961073
You really need to do a couple of things.
Remove any assignment to the GUID-typed property
Change the column to auto-generated
Create a constraint in the database to default the column to NEWSEQUENTIALID()
Do InsertOnSubmit just like you were before
On insert into the table, the ID will be created and will be sequential. Performance comparison of NEWSEQUENTIALID() vs. other methods
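Putting those steps together, the insert from the question shrinks to roughly this (property and control names are taken from the question's own code; the key is simply never assigned):

```csharp
CaseNote caseNote = new CaseNote
{
    // CaseNoteID is intentionally not set; the NEWSEQUENTIALID() default
    // generates it and LINQ to SQL syncs it back after SubmitChanges().
    TimeSpentUnits = Convert.ToDecimal(tbxTimeSpentUnits.Text),
    IsCaseLog = chkIsCaseLog.Checked,
    ContactDate = Convert.ToDateTime(datContactDate.Text),
    ContactDetails = memContactDetails.Text
};

caseNotesDB.CaseNotes.InsertOnSubmit(caseNote);
caseNotesDB.SubmitChanges();

Guid newId = caseNote.CaseNoteID; // now holds the sequential GUID
```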
There is a bug in Linq2Sql when using an auto-generated (GUID/sequential GUID) primary key and having a trigger on the table; that is what is causing your error. There is a hotfix for the problem:
http://support.microsoft.com/default.aspx?scid=kb;en-us;961073&sd=rss&spid=2855
MassTransit uses a COMB GUID:
https://github.com/MassTransit/MassTransit/blob/master/src/MassTransit/NewId/NewId.cs
Is this what you're looking for?
From Wikipedia:
Sequential algorithms
GUIDs are commonly used as the primary key of database tables, and with that, often the table has a clustered index on that attribute. This presents a performance issue when inserting records because a fully random GUID means the record may need to be inserted anywhere within the table rather than merely appended near the end of it. As a way of mitigating this issue while still providing enough randomness to effectively prevent duplicate number collisions, several algorithms have been used to generate sequential GUIDs. The first technique, described by Jimmy Nilsson in August 2002[7] and referred to as a "COMB" ("combined guid/timestamp"), replaces the last 6 bytes of Data4 with the least-significant 6 bytes of the current system date/time. While this can result in GUIDs that are generated out of order within the same fraction of a second, his tests showed this had little real-world impact on insertion. One side effect of this approach is that the date and time of insertion can be easily extracted from the value later, if desired.
Starting with Microsoft SQL Server version 2005, Microsoft added a function to the Transact-SQL language called NEWSEQUENTIALID(),[8] which generates GUIDs that are guaranteed to increase in value, but may start with a lower number (still guaranteed unique) when the server restarts. This reduces the number of database table pages where insertions can occur, but does not guarantee that the values will always increase in value. The values returned by this function can be easily predicted, so this algorithm is not well-suited for generating obscure numbers for security or hashing purposes. In 2006, a programmer found that the SYS_GUID function provided by Oracle was returning sequential GUIDs on some platforms, but this appears to be a bug rather than a feature.[9]
You can handle the OnCreated() method in a partial class:
Partial Class CaseNote
    Private Sub OnCreated()
        CaseNoteID = Guid.NewGuid()
    End Sub
End Class