I want each of my tables to contain a unique number in its tableID column.
Insert sequential number in MySQL
This is pretty much what I'm trying to accomplish but from my C# app.
EDIT: Adding the ID column with Primary Key and Auto Increment was all I needed to do. Thank you deterministicFail
From your error log:
ERROR 1067: Invalid default value for 'Status'
SQL Statement:
ALTER TABLE `lianowar_woodlandlumberbook`.`book`
CHANGE COLUMN `Customer_Ph` `Customer_Ph` VARCHAR(16) NOT NULL ,
CHANGE COLUMN `Status` `Status` VARCHAR(10) NOT NULL DEFAULT NULL ,
DROP PRIMARY KEY,
ADD PRIMARY KEY (`Customer_Name`, `Status`)
ERROR: Error when running failback script. Details follow.
ERROR 1050: Table 'book' already exists
SQL Statement:
CREATE TABLE `book` (
`Customer_Name` varchar(20) NOT NULL,
`Customer_Ph` varchar(16) DEFAULT NULL,
`Customer_Ph2` varchar(30) NOT NULL,
`Info_Taken_By` varchar(12) NOT NULL,
`Project_Type` varchar(20) NOT NULL,
`Project_Size` varchar(20) NOT NULL,
`Date_Taken` varchar(5) NOT NULL,
`Date_Needed` varchar(5) NOT NULL,
`Sales_Order` varchar(5) NOT NULL,
`Information` text NOT NULL,
`Status` varchar(10) DEFAULT NULL,
`tableID` varchar(5) DEFAULT NULL,
PRIMARY KEY (`Customer_Name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
You are trying to define a NOT NULL column and give it a default of NULL. You should also reconsider your datatypes: tableID should be a numeric type (and the name isn't great; plain id or bookId would be better).
As for your question: you should define the table like this:
CREATE TABLE `book` (
`ID` INT NOT NULL AUTO_INCREMENT,
`Customer_Name` varchar(20) NOT NULL,
`Customer_Ph` varchar(16) DEFAULT NULL,
`Customer_Ph2` varchar(30) NOT NULL,
`Info_Taken_By` varchar(12) NOT NULL,
`Project_Type` varchar(20) NOT NULL,
`Project_Size` varchar(20) NOT NULL,
`Date_Taken` varchar(5) NOT NULL,
`Date_Needed` varchar(5) NOT NULL,
`Sales_Order` varchar(5) NOT NULL,
`Information` text NOT NULL,
`Status` varchar(10) DEFAULT NULL,
PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
I don't know which datatypes you really need, because I don't know the data you are going to store, but to use the primary key and AUTO_INCREMENT features this will do the trick.
Don't do this from application code. Ever. Application code is poorly positioned to guarantee uniqueness, because you have a potential race condition between multiple clients trying to insert at about the same time. It will also be slower, because application code must first request a current value from the database before incrementing it, resulting in two separate database transactions.
The database, on the other hand, already has features to ensure atomicity and uniqueness, and can respond to requests in order, thus positioning it to do this job much faster and better. Indeed, pretty much every database out there, including MySql, already has this feature built in.
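As a sketch of that built-in feature (using a hypothetical minimal table, since the real book table has many other NOT NULL columns): with AUTO_INCREMENT the client never supplies the ID; MySQL assigns it atomically, and the same connection can read it back afterwards:

```sql
-- Hypothetical minimal table: the database owns ID generation.
CREATE TABLE demo_book (
    ID   INT NOT NULL AUTO_INCREMENT,
    Name VARCHAR(20) NOT NULL,
    PRIMARY KEY (ID)
);

-- No ID supplied; MySQL assigns the next value atomically.
INSERT INTO demo_book (Name) VALUES ('Smith');

-- LAST_INSERT_ID() is per-connection, so concurrent clients
-- never see each other's values.
SELECT LAST_INSERT_ID();
```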
We have written a C# ASP.NET Core MVC project which is hosted as an Azure App Service and is backed by an Azure-hosted SQL Server database.
I have the following sql Table as a part of the database:
CREATE TABLE [dbo].[Users] (
[UserID] INT IDENTITY (1, 1) NOT NULL,
[UserName] NVARCHAR (50) NOT NULL,
[Email] NVARCHAR (50) NOT NULL,
[Active] BIT DEFAULT ((1)) NOT NULL,
[TeamID] INT NULL,
[UserType] INT DEFAULT ((0)) NOT NULL,
[UserCpM] FLOAT (53) DEFAULT ((0)) NOT NULL,
[CpMSet] BIT DEFAULT ((0)) NOT NULL,
[CurrentProject] INT DEFAULT ((-2)) NOT NULL,
[CurrentNotes] NVARCHAR (MAX) NULL,
[TimeStarted] DATETIME NOT NULL,
[Failure] BIT DEFAULT ((0)) NOT NULL,
[Failure_Project] INT NULL,
[Failure_TimeStarted] DATETIME NULL,
[Failure_TimeEnded] DATETIME NULL,
[Failure_Notes] NVARCHAR (MAX) NULL,
PRIMARY KEY CLUSTERED ([UserID] ASC),
CONSTRAINT [FK_Users_ToTeams] FOREIGN KEY ([TeamID]) REFERENCES [dbo].[Teams] ([TeamID]),
CONSTRAINT [FK_Users_CurrentProject_ToProjects] FOREIGN KEY ([CurrentProject]) REFERENCES [dbo].[Projects] ([ProjectID]),
CONSTRAINT [FK_Users_FailureProject_ToProjects] FOREIGN KEY ([Failure_Project]) REFERENCES [dbo].[Projects] ([ProjectID])
);
The UserCpM field records an employee's cost-per-minute and as such is cleared before being displayed to any user who is not at the highest privilege level. Over the last couple of days this field has been randomly resetting itself to 0 for every row in the table.
I have hunted through the code and confirmed that on any view where the CpM is cleared it is re-set before the User is saved back to the database, so I'm fairly certain that isn't the cause of the issue.
Is there any reason the context would save its changes without being told to do so? Or, alternatively, is it possible that it tracks changes to anything set from it and transfers them back to the database?
Is there any reason the context would save its changes without being told to do so?
No, Azure SQL or SQL Server hosted on Azure will not modify your data.
Is it possible that it tracks changes to anything set from it and transfers them back to the database?
I suggest you create a log table and two triggers for your Users table. If any update is applied to the Users table, the triggers will write the operation time and the UserCpM change to the log table.
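Both trigger samples that follow assume a dbo.UserLog table roughly like this (a hypothetical sketch; the original post does not define it, so adjust names and types to your schema):

```sql
CREATE TABLE dbo.UserLog (
    LogID           INT IDENTITY (1, 1) PRIMARY KEY,
    UserID          INT        NOT NULL,
    OriginalUserCpM FLOAT (53) NULL,  -- value before the change (NULL on insert)
    NewUserCpM      FLOAT (53) NULL,  -- value after the change
    ModifiedDate    DATETIME   NOT NULL
);
```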
Here is an insert trigger code sample.
CREATE TRIGGER dbo.UsersInsert
ON [dbo].[Users]
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.UserLog (UserID, OriginalUserCpM, NewUserCpM, ModifiedDate)
    SELECT UserID, NULL, UserCpM, GETDATE() FROM inserted;
END
Here is an update trigger code sample.
CREATE TRIGGER dbo.UsersUpdate
ON [dbo].[Users]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.UserLog (UserID, OriginalUserCpM, NewUserCpM, ModifiedDate)
    SELECT UserID, UserCpM, NULL, GETDATE() FROM deleted;
    INSERT INTO dbo.UserLog (UserID, OriginalUserCpM, NewUserCpM, ModifiedDate)
    SELECT UserID, NULL, UserCpM, GETDATE() FROM inserted;
END
I currently have a Google Cloud SQL instance with some tables that I've created using the following format:
-- Create Hotel table with hotelNo primary key
CREATE TABLE IF NOT EXISTS Hotel(
hotelNo INT NOT NULL AUTO_INCREMENT,
hotelName VARCHAR(255) NOT NULL,
city VARCHAR(255) NOT NULL,
PRIMARY KEY(hotelNo)
);
-- Create Room table with roomNo and hotelNo for the composite key (primary)
CREATE TABLE IF NOT EXISTS Room(
roomNo INT NOT NULL AUTO_INCREMENT,
hotelNo INT NOT NULL,
roomType VARCHAR(255) NOT NULL,
price INT NOT NULL,
PRIMARY KEY(roomNo, hotelNo)
);
-- Create Booking table with hotelNo, guestNo and dateFrom for the composite key (primary)
CREATE TABLE IF NOT EXISTS Booking(
hotelNo INT NOT NULL,
guestNo INT NOT NULL,
dateFrom DATE NOT NULL,
dateTo DATE NOT NULL,
roomNo INT NOT NULL,
PRIMARY KEY(hotelNo, guestNo, dateFrom)
);
-- Create Guest table with guestNo primary key
CREATE TABLE IF NOT EXISTS Guest(
guestNo INT NOT NULL AUTO_INCREMENT,
guestName VARCHAR(255) NOT NULL,
guestAddress VARCHAR(255) NOT NULL,
PRIMARY KEY(guestNo)
);
I am building a small application for class that will allow me to connect to my database on the cloud and then perform basic queries against it. My tables are already loaded with some data so that I am able to carry out basic CRUD operations.
I was wondering how I could connect to this instance from my local machine using C#. I have activated the Google SQL API and downloaded the Google NuGet package, but am still having some difficulty figuring out the rest.
I have read some of Google's documentation on this but am hoping to find a more straightforward and simple answer from the community.
Any ideas on how I should move forward? At least for now, I'd like to just be able to simply connect and possibly do a SELECT * FROM Hotel query and see the results on the command prompt.
I'm trying to insert data into the database using LINQ. On the SQL Server side I wrote a sequence to generate a custom ID with a prefix. Here is what I did in my code:
ALLibraryDataClassesDataContext dbContext = new ALLibraryDataClassesDataContext();
dbContext.Books.InsertOnSubmit(book);
dbContext.SubmitChanges();
dbContext.Dispose();
In my book object I'm not setting the BookID, because it is generated by the sequence. When I execute
INSERT INTO library_db.dbo.Book([BookName]) VALUES ('SH')
this inserts the data with the auto-generated ID. But when I run the code, it throws a SqlException:
Cannot insert the value NULL into column 'BookId', table 'library_db.dbo.Book'; column does not allow nulls. INSERT fails.
EDITED
My sequence and table creation:
CREATE TABLE [dbo].[Book] (
[BookId] VARCHAR (50) NOT NULL,
[BookName] VARCHAR (50) NULL,
[AuthorName] VARCHAR (50) NULL,
[PublisherName] VARCHAR (50) NULL,
[PublishedDate] DATE NULL,
[price] MONEY NULL,
[PurchasedDate] DATE NULL,
[PurchasedBillNo] VARCHAR (50) NULL,
[CategoryId] VARCHAR (50) NULL,
[NoOfCopies] VARCHAR (50) NULL,
[FinePerDay] MONEY NULL,
[BookImage] VARCHAR (2000) NULL,
PRIMARY KEY CLUSTERED ([BookId] ASC),
CONSTRAINT [FK_Book_Category] FOREIGN KEY ([CategoryId]) REFERENCES [dbo].[Category] ([CategoryId])
);
GO
CREATE SEQUENCE dbo.BookId_Seq
    AS INT
    START WITH 1
    INCREMENT BY 1;
GO
ALTER TABLE dbo.Book
ADD CONSTRAINT Const_BookId_Seq
DEFAULT FORMAT((NEXT VALUE FOR dbo.BookId_Seq),'B0000#') FOR [BookId];
GO
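A quick way to verify that the default is wired up is to insert a row without a BookId and look at the generated key (a sketch against the table above; the value 'SH' comes from the question):

```sql
-- BookId is omitted, so the default draws the next sequence value
-- and FORMAT turns it into something like 'B00001'.
INSERT INTO library_db.dbo.Book ([BookName]) VALUES ('SH');

SELECT BookId, BookName
FROM library_db.dbo.Book
WHERE BookName = 'SH';
```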
What is the difference between running the query manually and running it through LINQ? And is there a way to insert data in code (LINQ) with a custom ID?
I'm not sure if you can do this with LINQ-to-SQL, but give it a try:
In the context designer, set "Auto Generated Value" = true for BookId. This tells LINQ-to-SQL to exclude the column from insert statements. Now the database will fall back to the default constraint, which doesn't happen when an insert statement supplies a value.
However, there may be one problem: when an auto-generated value is defined for a primary key column, LINQ-to-SQL will try to read the generated value afterwards with a SELECT statement looking like
SELECT CONVERT(Int,SCOPE_IDENTITY()) AS [value]
Obviously, drawing the value from a sequence will leave SCOPE_IDENTITY() undefined. At best this will prevent you from reading the generated key value from the book variable after SubmitChanges, but in the worst case it will cause an error that you can't work around.
I need to create a few lookup tables and I often see the following:
create table Languages
(
    Id int identity not null,
    Code nvarchar (4) not null,
    Description nvarchar (120) not null,
    primary key (Id)
);
create table Posts
(
    Id int identity not null,
    LanguageId int not null,
    Title nvarchar (400) not null,
    primary key (Id)
);
insert into Languages (Id, Code, Description) values (1, 'en', 'English');
This way I am localizing Posts with language id ...
IMHO, this is not the best schema for the Languages table, because in a lookup table the PK should be meaningful, right?
So instead I would use the following:
create table Languages
(
    Code nvarchar (4) not null,
    Description nvarchar (120) not null,
    primary key (Code)
);
create table Posts
(
    Id int identity not null,
    LanguageCode nvarchar (4) not null,
    Title nvarchar (400) not null,
    primary key (Id)
);
insert into Languages (Code, Description) values ('en', 'English');
.NET applications usually use the language code, so this way I can get a Post in English without using a join.
And with this approach I am also maintaining the database data integrity ...
This could be applied to a Genders table with codes 'M' and 'F', a countries table, a transaction types table (should I?), ...
However, I think it is common to use an int PK in lookup tables because it is easier to map to enums.
And now it is even possible to map to flags enums, and so have a many-to-many relationship in an enum.
That helps in .NET code but has limitations: a Languages table could never be mapped to a flags enum ...
... a flags enum can't have more than 64 items (Int64), because the values must be powers of two.
A SOLUTION
I decided to find an approach that enforces database data integrity and still makes possible to use enums so I tried:
create table Languages
(
    Code nvarchar (4) not null,
    [Key] int not null,
    Description nvarchar (120) not null,
    primary key (Code)
);
create table Posts
(
    Id int identity not null,
    LanguageCode nvarchar (4) not null,
    Title nvarchar (400) not null,
    primary key (Id)
);
insert into Languages (Code, [Key], Description) values ('en', 1, 'English');
With this approach I have a meaningful language code, I avoid joins, and I can create an enum by parsing the Key:
public enum LanguageEnum {
    [Code("en")]
    English = 1
}
I can even preserve the code in an attribute, or switch the code and description ...
What about flags enums? Well, I will not have flags enums, but I can have a List<LanguageEnum> ...
And when using a List I do not have the 64-item limitation ...
To me all this makes sense but would I apply it to a Roles table, or a ProductsCategory table?
In my opinion I would apply it only to tables that rarely change over time ... So:
Languages, Countries, Genders, ... Any other example?
About the following I am not sure (They are intrinsic to the application):
PaymentsTypes, UserRoles
And to these I wouldn't apply (They can be managed by a CMS):
ProductsCategories, ProductsColors
What do you think about my approach for Lookup tables?
The first way of doing it is correct, with an ID as a PK. (You can also set a unique index on the Code column.)
'PK should be meaningful, right?'
Nope, this is not a requirement; I have never heard of it in many, many years of DBMS work.
Bear in mind that most RDBMSs have optimisations for int keys and will look up an int PK faster than most other data types. That's one of the reasons why IDENTITY is used for so many PKs.
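If you keep the surrogate int PK, you can still enforce uniqueness of the natural key with a unique index, as suggested above (a sketch against the questioner's first Languages table; the index name is made up):

```sql
-- Enforce one row per language code alongside the surrogate int PK.
CREATE UNIQUE INDEX UX_Languages_Code ON Languages (Code);
```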
How can I avoid this problem:
Cannot insert the value NULL into column 'rowguid', column does not allow nulls. INSERT fails.
Sample query is:
insert into tablename (col1, col2, col3)
values('v1', 'v2', 'v3')
If you are not going to insert the value explicitly in the INSERT statement, and the column is NOT NULL, you need to specify a default value on the table.
The DEFAULT constraint is used to insert a default value into a
column. The default value will be added to all new records, if no
other value is specified.
It sounds like your table has been created with a ROWGUIDCOL but without the appropriate default value. Here is an example from the ASP.NET forums of a proper table definition using the feature; it should give you some help.
CREATE TABLE Globally_Unique_Data
(
    guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
    Employee_Name varchar(60),
    CONSTRAINT Guid_PK PRIMARY KEY (guid)
);
The NEWSEQUENTIALID() default will generate a GUID for you. Without it you will have to generate your own and include it in the insert. You don't have to use it as the primary key as in the example, but you have to supply it or use the default.
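Supplying the GUID yourself would look like this (a sketch using the table above; the employee name is made up):

```sql
-- Generate the GUID in the INSERT instead of relying on the default.
INSERT INTO Globally_Unique_Data (guid, Employee_Name)
VALUES (NEWID(), 'Jane Doe');
```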
There are two solutions to this problem.
One is to alter the 'rowguid' column to allow NULL; the other is to set default values for your parameters.
In your code you can set the values as
int v1 = 0;
string v2 = "";
Then pass these values to the query.
You can set default parameter values in the stored procedure as
@v1 int = 0,
@v2 varchar(50) = ' '
Follow this example:
CREATE TABLE [dbo].[Files](
[id] [int] IDENTITY(1,1) NOT NULL,
[IdFile] [uniqueidentifier] unique ROWGUIDCOL NOT NULL,
[Title] [nvarchar](max) NULL,
[File] [varbinary](max) FILESTREAM NULL,
CONSTRAINT [PK_Table_1] PRIMARY KEY CLUSTERED ([id] ASC)
);