Proper use of a lookup table and adaptations for .NET [closed] - c#

I need to create a few lookup tables and I often see the following:
create table Languages
(
    Id int identity not null primary key,
    Code nvarchar(4) not null,
    Description nvarchar(120) not null
);
create table Posts
(
    Id int identity not null primary key,
    LanguageId int not null,
    Title nvarchar(400) not null
);
insert into Languages (Code, Description) values ('en', 'English'); -- Id 1 is generated by the identity
This way I am localizing Posts with language id ...
IMHO, this is not the best schema for a Languages table, because in a lookup table the PK should be meaningful, right?
So instead I would use the following:
create table Languages
(
    Code nvarchar(4) not null primary key,
    Description nvarchar(120) not null
);
create table Posts
(
    Id int identity not null primary key,
    LanguageCode nvarchar(4) not null,
    Title nvarchar(400) not null
);
insert into Languages (Code, Description) values ('en', 'English');
.NET applications usually work with the language code, so this way I can get a Post in English without using a join.
And with this approach I am also maintaining database data integrity ...
This could be applied to a Genders table with codes "M" and "F", a Countries table, a transaction types table (should I?), ...
However, I think it is common to use an int as the PK in lookup tables because it is easier to map to enums.
And now it is even possible to map to Flags enums, so a many-to-many relationship can be represented in an enum.
That helps in .NET code but in fact has limitations. A Languages table could never be mapped to a Flags enum ...
... A Flags enum can't have more than 64 items (Int64) because the values must be powers of two.
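For illustration, a minimal Flags enum sketch (the names are hypothetical) showing why each member must be a distinct power of two, which is what caps a 64-bit enum at 64 flags:
[Flags]
public enum LanguageFlags : long
{
    None    = 0,
    English = 1 << 0, // 1
    French  = 1 << 1, // 2
    German  = 1 << 2, // 4
    Spanish = 1 << 3  // 8
    // every additional flag needs the next power of two,
    // so a 64-bit backing type allows at most 64 distinct flags
}

// Combining flags packs a many-to-many style relationship into a single value:
// var supported = LanguageFlags.English | LanguageFlags.German;
// bool hasGerman = supported.HasFlag(LanguageFlags.German);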
A SOLUTION
I decided to find an approach that enforces database data integrity and still makes it possible to use enums, so I tried:
create table Languages
(
    Code nvarchar(4) not null primary key,
    [Key] int not null,
    Description nvarchar(120) not null
);
create table Posts
(
    Id int identity not null primary key,
    LanguageCode nvarchar(4) not null,
    Title nvarchar(400) not null
);
insert into Languages (Code, [Key], Description) values ('en', 1, 'English');
With this approach I have a meaningful language code, I avoid joins, and I can create an enum by parsing the Key:
public enum LanguageEnum {
    [Code("en")]
    English = 1
}
I can even preserve the code in an attribute. Or I can switch the code and description ...
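A minimal sketch of that idea (CodeAttribute and GetCode are hypothetical helpers I am introducing here, not framework types):
using System;

[AttributeUsage(AttributeTargets.Field)]
public sealed class CodeAttribute : Attribute
{
    public string Code { get; }
    public CodeAttribute(string code) => Code = code;
}

public static class LanguageEnumExtensions
{
    // Reads the [Code] attribute from an enum member, e.g. LanguageEnum.English -> "en".
    public static string GetCode(this Enum value)
    {
        var field = value.GetType().GetField(value.ToString());
        var attribute = (CodeAttribute)Attribute.GetCustomAttribute(field, typeof(CodeAttribute));
        return attribute != null ? attribute.Code : value.ToString();
    }
}
With that in place, LanguageEnum.English.GetCode() returns "en", which is the same value stored in the LanguageCode column of Posts.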
What about Flags enums? Well, I will not have Flags enums, but I can have a List ...
And when using a List I do not have the limitation of 64 items ...
To me all this makes sense, but would I apply it to a Roles table or a ProductsCategory table?
In my opinion I would apply it only to tables that will rarely change over time ... So:
Languages, Countries, Genders, ... Any other examples?
About the following I am not sure (they are intrinsic to the application):
PaymentsTypes, UserRoles
And to these I wouldn't apply it (they can be managed by a CMS):
ProductsCategories, ProductsColors
What do you think about my approach for Lookup tables?

The first way of doing it is correct, with an ID as a PK. (You can also set a unique index on the Code column.)
'PK should be meaningful, right?'
Nope. This is not a requirement; I have never heard of it in many, many years of DBMS work.
Bear in mind that most RDBMSs have optimisations for int keys and will look up an int PK faster than most other data types. That's one of the reasons why IDENTITY is used for so many PKs.
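If the table is mapped from .NET with EF Core, the unique index on Code can also be declared in the model (a sketch under the assumption that EF Core is in use; the class and property names are illustrative):
using Microsoft.EntityFrameworkCore;

public class Language
{
    public int Id { get; set; }
    public string Code { get; set; }
    public string Description { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Language> Languages { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Surrogate int PK plus a unique index on the natural key (Code).
        modelBuilder.Entity<Language>().HasKey(l => l.Id);
        modelBuilder.Entity<Language>().HasIndex(l => l.Code).IsUnique();
    }
}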

Related

Microsoft Sync Framework unique index error

I use the MS Sync Framework to sync my SQL Server instance with a local SQL CE file so that my Windows app can work offline.
I use GUIDs as keys. On my table I have a unique index on two columns: user_id and setting_id:
usersettings table
------------------
id PK -> I also tried it without this column. Same result
user_id FK
setting_id FK
value
Now I do the following:
I create a new record in this table in both databases - SQL Server and SQL CE - with the same user_id and setting_id.
This should work and merge the data together, since it can happen in real life. But when syncing I get an error saying the unique key constraint was violated - the key pair already exists in the table:
A duplicate value cannot be inserted into a unique index. [ Table name = user_settings,Constraint name = unique_userid_settingid ]
Why can't MS Sync handle that? It should not try to insert the key pair again; it should update the value if needed.
The issue is that if you add the same key pair to different copies of the table, they get different IDs (GUIDs) as primary keys in this usersettings table.
As this is simply a many-to-many table between Users and Settings, there is no need to have that ID as a PK (or even as a column at all).
Instead, just use a composite key made up of the two FKs, e.g.,
CREATE TABLE [dbo].[usersettings](
[user_id] [UNIQUEIDENTIFIER] NOT NULL,
[setting_id] [UNIQUEIDENTIFIER] NOT NULL,
[value] [varchar](50) NOT NULL,
CONSTRAINT [PK_usersettings] PRIMARY KEY CLUSTERED ([user_id] ASC, [setting_id] ASC) );
Of course, include appropriate field settings (e.g., if you use VARCHARs to store the IDs) and relevant FKs.
As the rows inserted should now be identical on the two copies, it should merge fine.
If you must have a single column as a unique identifier for the table, you could make it meaningful e.g.,
the PK (ID) becomes a varchar (72)
it gets filled with CONCAT(user_ID, setting_id)
As the User_ID and Setting_ID are FKs, you should already have them generated so concatenating them should be easy enough.
If you get the error during sync, it should appear as a conflict that you must resolve in code:
https://learn.microsoft.com/en-us/previous-versions/sql/synchronization/sync-framework-2.0/bb734542(v=sql.105)
I also see this in the manual: "By default, the following objects are not copied to the client database: FOREIGN KEY constraints, UNIQUE constraints, DEFAULT constraints, and the SQL Server ROWGUIDCOL property." This indicates poor support for your scenario.
I suggest you remove the unique constraint from the device table.

How to add sequential numbering to SQL tables from C# [closed]

I want to have my tables all contain a unique number for my tableID column.
Insert sequential number in MySQL
This is pretty much what I'm trying to accomplish but from my C# app.
EDIT: Adding the ID column with Primary Key and Auto Increment was all I needed to do. Thank you deterministicFail
From your error log:
ERROR 1067: Invalid default value for 'Status'
SQL Statement:
ALTER TABLE `lianowar_woodlandlumberbook`.`book`
CHANGE COLUMN `Customer_Ph` `Customer_Ph` VARCHAR(16) NOT NULL ,
CHANGE COLUMN `Status` `Status` VARCHAR(10) NOT NULL DEFAULT NULL ,
DROP PRIMARY KEY,
ADD PRIMARY KEY (`Customer_Name`, `Status`)
ERROR: Error when running failback script. Details follow.
ERROR 1050: Table 'book' already exists
SQL Statement:
CREATE TABLE `book` (
`Customer_Name` varchar(20) NOT NULL,
`Customer_Ph` varchar(16) DEFAULT NULL,
`Customer_Ph2` varchar(30) NOT NULL,
`Info_Taken_By` varchar(12) NOT NULL,
`Project_Type` varchar(20) NOT NULL,
`Project_Size` varchar(20) NOT NULL,
`Date_Taken` varchar(5) NOT NULL,
`Date_Needed` varchar(5) NOT NULL,
`Sales_Order` varchar(5) NOT NULL,
`Information` text NOT NULL,
`Status` varchar(10) DEFAULT NULL,
`tableID` varchar(5) DEFAULT NULL,
PRIMARY KEY (`Customer_Name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
You are trying to define a NOT NULL column and give it a default of NULL. You should also reconsider your datatypes: tableID should be a numeric datatype (by the way, the name isn't great; just id or bookId would be better).
To your question:
You should define the table like this
CREATE TABLE `book` (
`ID` INT NOT NULL AUTO_INCREMENT,
`Customer_Name` varchar(20) NOT NULL,
`Customer_Ph` varchar(16) DEFAULT NULL,
`Customer_Ph2` varchar(30) NOT NULL,
`Info_Taken_By` varchar(12) NOT NULL,
`Project_Type` varchar(20) NOT NULL,
`Project_Size` varchar(20) NOT NULL,
`Date_Taken` varchar(5) NOT NULL,
`Date_Needed` varchar(5) NOT NULL,
`Sales_Order` varchar(5) NOT NULL,
`Information` text NOT NULL,
`Status` varchar(10) DEFAULT NULL,
PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
I don't know which datatypes you really need, because I don't know the data you are going to store, but to use the primary key and AUTO_INCREMENT feature this will do the trick.
Don't do this from application code. Ever. Application code is poorly positioned to guarantee uniqueness, because you have a potential race condition between multiple clients trying to insert at about the same time. It will also be slower, because application code must first request a current value from the database before incrementing it, resulting in two separate database transactions.
The database, on the other hand, already has features to ensure atomicity and uniqueness, and can respond to requests in order, which positions it to do this job much faster and better. Indeed, pretty much every database out there, including MySQL, already has this feature built in.
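To connect that back to the C# side, insert the row without supplying the ID and let AUTO_INCREMENT assign it. A sketch using MySql.Data (the connection string is a placeholder and the remaining NOT NULL columns are omitted for brevity):
using MySql.Data.MySqlClient;

var connectionString = "server=localhost;database=lianowar_woodlandlumberbook;uid=...;pwd=...";

using (var connection = new MySqlConnection(connectionString))
{
    connection.Open();

    // The other NOT NULL columns from the table definition would need values too.
    var sql = "INSERT INTO book (Customer_Name, Information) VALUES (@name, @info)";
    using (var command = new MySqlCommand(sql, connection))
    {
        command.Parameters.AddWithValue("@name", "Some Customer");
        command.Parameters.AddWithValue("@info", "Some details");
        command.ExecuteNonQuery();

        // The database generated the ID; no counter is kept in application code.
        long newId = command.LastInsertedId;
    }
}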

Globalization of Database Stored Values

We are using resource files (.resx) to translate our .NET 4.5 MVC C# application to different languages. This works great for static text that is located in our views. However, we have values that are pulled from our SQL database that need to be translated as well.
An example: Dropdown list with values that are populated from a table in the database.
What is the best practice for translating these values in the database?
In the last multilingual application I designed, I used a table for languages, and for each table that had any string columns (char, varchar, etc.) I had a translation table.
Something along these lines:
CREATE TABLE TblLanguage
(
Language_Id int identity(1,1) PRIMARY KEY,
Language_EnglishName varchar(30),
Language_NativeName nvarchar(30),
CONSTRAINT UC_TblLanguage UNIQUE(Language_EnglishName)
)
CREATE TABLE TblSomeData (
SomeData_Id int identity(1,1) PRIMARY KEY,
SomeData_TextColumn varchar(50),
....
)
CREATE TABLE TblSomeData_T ( -- _T stands for translation
SomeData_T_SomeData_Id int FOREIGN KEY REFERENCES TblSomeData(SomeData_Id),
SomeData_T_Language_Id int FOREIGN KEY REFERENCES TblLanguage(Language_Id),
SomeData_T_TextColumn nvarchar(100),
PRIMARY KEY (SomeData_T_SomeData_Id, SomeData_T_Language_Id)
)
My application had English as its default (or main) language, so I kept the default language in the base table and only the translations in the translation tables. You could, of course, keep the string values only in the translation table if you want to. Note that this does not take the different date and number formats of each culture into account; that is handled in the presentation layer.
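At query time, the lookup can fall back to the base-table value when no translation exists. A sketch in plain ADO.NET against the schema above (the English-fallback behavior is my assumption about the design):
using System.Data.SqlClient;

// Returns the text for one TblSomeData row in the requested language,
// falling back to the default-language value stored in the base table.
string GetSomeDataText(string connectionString, int someDataId, int languageId)
{
    const string sql = @"
        SELECT COALESCE(t.SomeData_T_TextColumn, d.SomeData_TextColumn)
        FROM TblSomeData d
        LEFT JOIN TblSomeData_T t
            ON t.SomeData_T_SomeData_Id = d.SomeData_Id
           AND t.SomeData_T_Language_Id = @languageId
        WHERE d.SomeData_Id = @someDataId;";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        command.Parameters.AddWithValue("@languageId", languageId);
        command.Parameters.AddWithValue("@someDataId", someDataId);
        connection.Open();
        return (string)command.ExecuteScalar();
    }
}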

How to solve bad database design with Entity Framework?

I apologize for the strange question; it is hard to put into words. I am forced to work with a database of questionable design and I would like to solve the data access issues with Entity Framework. I am at a loss as to how to treat this type of design in an object-oriented way.
The Item table is the problem. It has fields that may contain different types of data, ranging from sizes to lot numbers to SO numbers, etc. The name of each field is determined by the ItemDef table, which links to an ItemDefValue table with the actual field names. The tables have been simplified for demonstration purposes.
Create Table Item
(
ItemKey int Primary Key not null,
ItemID1 varchar(100) null,
ItemID2 varchar(100) null,
ItemID3 varchar(100) null,
ItemID4 varchar(100) null,
ItemDefKey int not null --foreign key to ItemDef table
);
Create Table ItemDef
(
ItemDefKey int Primary Key not null,
CustomerKey int not null -- foreign key to Customer table
);
Create Table ItemDefValue
(
FieldCode smallint not null,
Title varchar(50) not null,
ItemDefKey int not null -- foreign key to ItemDef table
);
I have solved this problem with DataSets and DataTables by renaming columns based on the ItemDefValue, so I am not looking for a table-based solution. I would like to avoid this type of table-based logic, especially since I am not fond of DataSets and would rather accomplish data access using the Entity Framework.
I would appreciate advice from anyone who has dealt with this kind of problem before. I would specifically like suggestions on how to treat this kind of database design in an object-oriented way, preferably using the Entity Framework.
And if you think there is no other solution than to redesign the database, then I will take that advice as well.
Thanks.
Messy! A restructure would definitely be best.
But how about creating views that represent the way you'd like the tables to be organised at the object level, and then having EF use those views rather than the tables directly? You'd need to function-map the insert/update/delete operations to stored procedures that deal with the real tables, but at least from the EF side of things you'd be working with a decently organised set of entities rather than those tables ...
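If EF Core is an option, mapping an entity to such a view is reasonably painless. A sketch (the view name vw_ItemFlattened and its columns are hypothetical; writes would still go through stored procedures or separate commands against the real tables):
using Microsoft.EntityFrameworkCore;

// Hypothetical flattened shape exposed by a database view over Item/ItemDef/ItemDefValue.
public class ItemView
{
    public int ItemKey { get; set; }
    public string Size { get; set; }
    public string LotNumber { get; set; }
}

public class WarehouseContext : DbContext
{
    public DbSet<ItemView> Items { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<ItemView>(entity =>
        {
            entity.ToView("vw_ItemFlattened"); // read through the view, not the raw tables
            entity.HasKey(i => i.ItemKey);
        });
    }
}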

SQL: Associate a single type with multiple records of various other types

I am designing a database with many tables and want to add a general Note table. I want a Note object to be able to attach to several other tables, so one Note can be associated with a particular Contact, maybe a Job, and also a few different Equipment objects. I'd like to be able to filter Note objects by the particular objects they are associated with.
Well, here's one way:
CREATE TABLE NoteTables
(
TableID INT NOT NULL Identity(1,1),
TableName SysName NOT NULL,
CONSTRAINT PK_NoteTables PRIMARY KEY CLUSTERED(TableID)
)
GO
CREATE TABLE TableNotes
(
TableID INT NOT NULL,
RowID INT NOT NULL,
NoteID INT NOT NULL,
CONSTRAINT PK_NoteAttachments PRIMARY KEY CLUSTERED(TableID, RowID, NoteID)
)
GO
CREATE TABLE Notes
(
NoteID INT NOT NULL Identity(1,1),
Note NVARCHAR(MAX),
CONSTRAINT PK_Notes PRIMARY KEY CLUSTERED(NoteID)
)
Note that I am assuming SQL Server and the use of IDENTITY columns here (if Oracle, you can use Sequences instead).
The Notes table contains all of the notes and gives them an ID to use as both a reference and a primary key.
The NoteTables table just lists all of the tables that can have notes attached to their rows.
TableNotes links the notes to the tables and rows that they are attached to. Note that this design assumes that all of these tables have INT ID columns that can be used for unique referencing.
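For example, pulling every note attached to one row of one table might look like this from C# (an ADO.NET sketch; looking tables up by TableName is an assumption about how NoteTables is used):
using System.Collections.Generic;
using System.Data.SqlClient;

// Returns the note text for every note attached to a specific row of a specific table.
List<string> GetNotesFor(string connectionString, string tableName, int rowId)
{
    const string sql = @"
        SELECT n.Note
        FROM Notes n
        JOIN TableNotes tn ON tn.NoteID = n.NoteID
        JOIN NoteTables nt ON nt.TableID = tn.TableID
        WHERE nt.TableName = @tableName
          AND tn.RowID = @rowId;";

    var notes = new List<string>();
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        command.Parameters.AddWithValue("@tableName", tableName);
        command.Parameters.AddWithValue("@rowId", rowId);
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
                notes.Add(reader.GetString(0));
        }
    }
    return notes;
}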
You only need two tables. The structure is as simple as the following.
Note table:
NotePK | tableFK | note
And a table that lists all your tables.
Either you create one yourself (then you have full control but need to maintain it) or you use the system view sys.tables.
You can read it with SELECT * FROM sys.tables; the object_id column would then be your tableFK in the first table.
You can store as many notes as you like. If you want to retrieve the notes, simply query the note table and filter on your tableFK.
