I'm trying to insert data into the database using LINQ. On the SQL Server side I wrote a sequence to generate a custom id with a prefix. Here is what I did in my code:
ALLibraryDataClassesDataContext dbContext = new ALLibraryDataClassesDataContext();
dbContext.Books.InsertOnSubmit(book);
dbContext.SubmitChanges();
dbContext.Dispose();
In my book object I'm not setting the BookID because it is generated from the sequence. When I execute
INSERT INTO library_db.dbo.Book([BookName]) VALUES ('SH')
this inserts data with the auto-generated id. But when I run the code, it throws a SqlException:
Cannot insert the value NULL into column 'BookId', table 'library_db.dbo.Book'; column does not allow nulls. INSERT fails.
EDITED
My sequence and table creation,
CREATE TABLE [dbo].[Book] (
[BookId] VARCHAR (50) NOT NULL,
[BookName] VARCHAR (50) NULL,
[AuthorName] VARCHAR (50) NULL,
[PublisherName] VARCHAR (50) NULL,
[PublishedDate] DATE NULL,
[price] MONEY NULL,
[PurchasedDate] DATE NULL,
[PurchasedBillNo] VARCHAR (50) NULL,
[CategoryId] VARCHAR (50) NULL,
[NoOfCopies] VARCHAR (50) NULL,
[FinePerDay] MONEY NULL,
[BookImage] VARCHAR (2000) NULL,
PRIMARY KEY CLUSTERED ([BookId] ASC),
CONSTRAINT [FK_Book_Category] FOREIGN KEY ([CategoryId]) REFERENCES [dbo].[Category] ([CategoryId])
);
GO
CREATE SEQUENCE dbo.BookId_Seq AS INT
START WITH 1
INCREMENT BY 1;
GO
ALTER TABLE dbo.Book
ADD CONSTRAINT Const_BookId_Seq
DEFAULT FORMAT((NEXT VALUE FOR dbo.BookId_Seq),'B0000#') FOR [BookId];
GO
What is the difference between running a query manually and running it through LINQ? Is there a way to insert data with a custom id from code (LINQ)?
I'm not sure if you can do this with LINQ-to-SQL, but give it a try:
In the context designer, set "Auto Generated Value" = true for BookId. This tells LINQ-to-SQL to exclude the column from insert statements. Now the database will fall back to the default constraint, which doesn't happen when an insert statement supplies a value.
However, there may be one problem. When auto-generated value is defined for a primary key column, LINQ-to-SQL will try to read the generated value afterwards by a select statement looking like
SELECT CONVERT(Int,SCOPE_IDENTITY()) AS [value]
Obviously, drawing the value from a sequence will leave SCOPE_IDENTITY() undefined. At best this will prevent you from reading the generated key value from the book variable after SubmitChanges, but in the worst case it will cause an error that you can't work around.
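An alternative that sidesteps SCOPE_IDENTITY() entirely is to fetch the next sequence value yourself and assign the key before inserting. A sketch, reusing the DataContext, entity, and FORMAT pattern from the question (untested against your exact mapping):

```csharp
using System.Linq;

// Sketch: assign BookId client-side instead of relying on the default
// constraint. Assumes the ALLibraryDataClassesDataContext and Book
// entity from the question.
using (var dbContext = new ALLibraryDataClassesDataContext())
{
    // Ask the database for the next sequence value, formatted the same
    // way the default constraint formats it.
    string nextId = dbContext.ExecuteQuery<string>(
        "SELECT FORMAT((NEXT VALUE FOR dbo.BookId_Seq), 'B0000#')").Single();

    var book = new Book { BookId = nextId, BookName = "SH" };
    dbContext.Books.InsertOnSubmit(book);
    dbContext.SubmitChanges();
}
```

With this approach "Auto Generated Value" stays false, the insert statement supplies BookId explicitly, and the value is already available in the book variable after SubmitChanges.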
Related
I was working with Visual Studio 2017 (C#) and LocalDB 2016. I created a table in Server Explorer's database designer window with 10 columns and added X rows.
I have already set "Copy if newer" so the rows won't get deleted every time, but when I try to add a new column (an 11th column), all of my rows (even in other tables!) get deleted!
Why?
Here is my table code generated from the designer:
CREATE TABLE [dbo].[CustomerTable] (
[Id] INT IDENTITY (1, 1) NOT NULL,
[name] NVARCHAR (50) NULL,
[company] NVARCHAR (50) NULL,
[email] NVARCHAR (50) NULL,
[phone1] NVARCHAR (50) NULL,
[phone2] NVARCHAR (50) NULL,
[address] NVARCHAR (50) NULL,
[fax] NVARCHAR (50) NULL,
[phone3] NVARCHAR (50) NULL,
[date] DATETIME DEFAULT (getdate()) NULL,
PRIMARY KEY CLUSTERED ([Id] ASC)
);
UPDATE 1 :
ALTER TABLE dbo.CustomerTable ADD column_b VARCHAR(20) NULL, column_c INT NULL ;
Does that code remove the old columns (column_b and column_c) if they exist?
Is it a good idea to do that every time the app starts (for upgrade purposes)?
Try this Transact-SQL query
ALTER TABLE dbo.CustomerTable ADD column_b VARCHAR(20) NULL, column_c INT NULL ;
This query will add two columns to your table: first column_b (VARCHAR(20)), then column_c (INT).
To read more,
ALTER TABLE (Transact-SQL)
The query will not remove any existing columns; ALTER TABLE ... ADD only adds the columns you list. If a column with the same name already exists, the statement fails with an error rather than altering the existing column, so nothing is changed.
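If you do want to run this at every app start, it is worth guarding the ADD so a second run doesn't fail; a sketch:

```sql
-- Sketch: make the column addition idempotent so it is safe to run
-- at every application start. COL_LENGTH returns NULL when the
-- column does not exist yet.
IF COL_LENGTH('dbo.CustomerTable', 'column_b') IS NULL
    ALTER TABLE dbo.CustomerTable ADD column_b VARCHAR(20) NULL;

IF COL_LENGTH('dbo.CustomerTable', 'column_c') IS NULL
    ALTER TABLE dbo.CustomerTable ADD column_c INT NULL;
```

This way the upgrade script can be re-run without error, which is the usual pattern for schema migrations applied from application code.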
We have written a c# MVC ASP.NET CORE project which is hosted as an azure app-service and is driven by an azure hosted sql server.
I have the following sql Table as a part of the database:
CREATE TABLE [dbo].[Users] (
[UserID] INT IDENTITY (1, 1) NOT NULL,
[UserName] NVARCHAR (50) NOT NULL,
[Email] NVARCHAR (50) NOT NULL,
[Active] BIT DEFAULT ((1)) NOT NULL,
[TeamID] INT NULL,
[UserType] INT DEFAULT ((0)) NOT NULL,
[UserCpM] FLOAT (53) DEFAULT ((0)) NOT NULL,
[CpMSet] BIT DEFAULT ((0)) NOT NULL,
[CurrentProject] INT DEFAULT ((-2)) NOT NULL,
[CurrentNotes] NVARCHAR (MAX) NULL,
[TimeStarted] DATETIME NOT NULL,
[Failure] BIT DEFAULT ((0)) NOT NULL,
[Failure_Project] INT NULL,
[Failure_TimeStarted] DATETIME NULL,
[Failure_TimeEnded] DATETIME NULL,
[Failure_Notes] NVARCHAR (MAX) NULL,
PRIMARY KEY CLUSTERED ([UserID] ASC),
CONSTRAINT [FK_Users_ToTeams] FOREIGN KEY ([TeamID]) REFERENCES [dbo].[Teams] ([TeamID]),
CONSTRAINT [FK_Users_CurrentProject_ToProjects] FOREIGN KEY ([CurrentProject]) REFERENCES [dbo].[Projects] ([ProjectID]),
CONSTRAINT [FK_Users_FailureProject_ToProjects] FOREIGN KEY ([Failure_Project]) REFERENCES [dbo].[Projects] ([ProjectID])
);
The UserCpM field is an indication of an employee's cost per minute, and as such it is cleared before being displayed to any user who is not at the highest privilege level. Over the last couple of days this field has been randomly resetting itself to 0 for every row in the table.
I have hunted through the code and confirmed that on any view where the CpM is cleared, it is set again before the User is saved back to the database, so I'm fairly certain that isn't the cause of the issue.
Is there any reason the context would save its changes without being told to do so? Or, alternatively, is it possible that it tracks changes to anything set from it and transfers them back to the database?
Is there any reason that the context would save its changes without being told to do so?
No, Azure SQL or SQL Server hosted on Azure will not modify your data.
is it possible that it will track changes to anything set from it and transfer them back to the database.
I suggest you create a log table and two triggers on your Users table. Whenever an insert or update is applied to the Users table, the triggers will write the operation time and the UserCpM change to the log table.
Here is an insert trigger code sample.
CREATE TRIGGER dbo.UsersInsert
ON [dbo].[Users]
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.UserLog (UserID, OriginalUserCpM, NewUserCpM, ModifiedDate)
    SELECT UserID, NULL, UserCpM, GETDATE() FROM inserted;
END
Here is an update trigger code sample.
CREATE TRIGGER dbo.UsersUpdate
ON [dbo].[Users]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- One log row per user, with the old and new values side by side.
    INSERT INTO dbo.UserLog (UserID, OriginalUserCpM, NewUserCpM, ModifiedDate)
    SELECT d.UserID, d.UserCpM, i.UserCpM, GETDATE()
    FROM deleted AS d
    JOIN inserted AS i ON i.UserID = d.UserID;
END
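Both triggers assume a dbo.UserLog table exists; a minimal sketch (the name and columns are taken from the trigger bodies above, with types matched to the Users table):

```sql
-- Sketch of the log table the triggers write to; adjust types as needed.
CREATE TABLE dbo.UserLog (
    LogID           INT IDENTITY (1, 1) PRIMARY KEY,
    UserID          INT NOT NULL,
    OriginalUserCpM FLOAT (53) NULL,   -- value before the change (NULL on insert)
    NewUserCpM      FLOAT (53) NULL,   -- value after the change
    ModifiedDate    DATETIME NOT NULL
);
```

Once the log fills up, the ModifiedDate timestamps should point you at whichever process is zeroing the column.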
I have an Excel sheet, populated by an HR employee, with thousands of client records. It looks like this one:
User Friendly Excel Sheet Example Screenshot
My client's SQL Server table schema looks like this
CREATE TABLE [dbo].[Clients] (
[ID] INT IDENTITY (1, 1) NOT NULL,
[Name] NVARCHAR (100) NOT NULL,
[Photo] VARCHAR (200) NOT NULL,
[PolicyID] INT NOT NULL,
[BirthDay] DATE NOT NULL,
[Gender] BIT NOT NULL,
[Title] NVARCHAR (100) NULL,
[Nationality] NVARCHAR (100) NOT NULL,
[Relationship] NVARCHAR (50) NOT NULL,
[ClassID] INT NOT NULL,
[SponsorID] INT NULL,
[HRID] INT NOT NULL,
[Active] BIT CONSTRAINT [DF_Clients_Active] DEFAULT ((1)) NOT NULL,
[StartingDate] DATE NOT NULL,
[EndingDate] DATE NOT NULL,
[AddingDate] DATETIME NOT NULL,
[Creator] INT NOT NULL,
[UniqueID] NVARCHAR (50) NULL,
[PassportNo] NVARCHAR (50) NULL,
CONSTRAINT [PK_Clients] PRIMARY KEY CLUSTERED ([ID] ASC),
CONSTRAINT [FK_Clients_Clients] FOREIGN KEY ([SponsorID]) REFERENCES [dbo].[Clients] ([ID]),
CONSTRAINT [FK_Clients_Employees] FOREIGN KEY ([HRID]) REFERENCES [dbo].[Employees] ([ID]),
CONSTRAINT [FK_Clients_Employees1] FOREIGN KEY ([Creator]) REFERENCES [dbo].[Employees] ([ID]),
CONSTRAINT [FK_Clients_Policy] FOREIGN KEY ([PolicyID]) REFERENCES [dbo].[Policy] ([ID]),
CONSTRAINT [FK_Clients_Classes] FOREIGN KEY ([ClassID]) REFERENCES [dbo].[Classes] ([ID])
);
What is the best approach to achieve such inserts?
I've tried using SqlBulkCopy, but it doesn't allow any manipulation of the inserted rows.
I've also tried SqlDataAdapter.Update(DataTable), but that failed: I read the Excel sheet using ExcelDataReader, added some columns such as Creator and AddingDate at runtime, and when I ran Adapter.Update(ModifiedDatatable) it threw an exception:
Update requires a valid UpdateCommand when passed DataRow collection with modified rows
When I tried to use SqlBulkCopy to insert this Excel sheet it worked as expected
Excel Sheet with Foreign Keys Screenshot
But it's not right to force the end user to put foreign keys in the Excel sheet before importing.
Notice:
Sorry for uploading screenshots to Tinypic but I couldn't upload them here because of my Rep Points.
Thanks in advance
I would create an SSIS package in this scenario. SSIS can read from Excel, and you can have it query the database for the extra information to build a valid dataset that will not violate the FK constraints.
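If SSIS isn't an option, the same idea works with a staging table: bulk-load the raw Excel rows as-is, then resolve the foreign keys with lookup joins when copying into Clients. A sketch, where the staging column names and the lookup columns (PolicyName, ClassName, and so on) are illustrative assumptions, not from the real schema:

```sql
-- Sketch: ClientsStaging holds the raw Excel rows (loaded with
-- SqlBulkCopy); the joins translate user-friendly names into the
-- foreign-key IDs the Clients table requires.
INSERT INTO dbo.Clients (Name, PolicyID, ClassID, HRID, Creator, AddingDate,
                         Photo, BirthDay, Gender, Nationality, Relationship,
                         StartingDate, EndingDate)
SELECT s.Name, p.ID, c.ID, e.ID, @CreatorId, GETDATE(),
       s.Photo, s.BirthDay, s.Gender, s.Nationality, s.Relationship,
       s.StartingDate, s.EndingDate
FROM dbo.ClientsStaging AS s
JOIN dbo.Policy    AS p ON p.PolicyName = s.PolicyName
JOIN dbo.Classes   AS c ON c.ClassName  = s.ClassName
JOIN dbo.Employees AS e ON e.Name       = s.HRName;
```

This keeps the end user's sheet free of IDs while still satisfying the FK constraints, and rows that fail a lookup simply stay in staging where you can report on them.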
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 8 years ago.
I want to have my tables all contain a unique number for my tableID column.
Insert sequential number in MySQL
This is pretty much what I'm trying to accomplish but from my C# app.
EDIT: Adding the ID column with Primary Key and Auto Increment was all I needed to do. Thank you deterministicFail
From your error log:
ERROR 1067: Invalid default value for 'Status'
SQL Statement:
ALTER TABLE `lianowar_woodlandlumberbook`.`book`
CHANGE COLUMN `Customer_Ph` `Customer_Ph` VARCHAR(16) NOT NULL ,
CHANGE COLUMN `Status` `Status` VARCHAR(10) NOT NULL DEFAULT NULL ,
DROP PRIMARY KEY,
ADD PRIMARY KEY (`Customer_Name`, `Status`)
ERROR: Error when running failback script. Details follow.
ERROR 1050: Table 'book' already exists
SQL Statement:
CREATE TABLE `book` (
`Customer_Name` varchar(20) NOT NULL,
`Customer_Ph` varchar(16) DEFAULT NULL,
`Customer_Ph2` varchar(30) NOT NULL,
`Info_Taken_By` varchar(12) NOT NULL,
`Project_Type` varchar(20) NOT NULL,
`Project_Size` varchar(20) NOT NULL,
`Date_Taken` varchar(5) NOT NULL,
`Date_Needed` varchar(5) NOT NULL,
`Sales_Order` varchar(5) NOT NULL,
`Information` text NOT NULL,
`Status` varchar(10) DEFAULT NULL,
`tableID` varchar(5) DEFAULT NULL,
PRIMARY KEY (`Customer_Name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
You are trying to define a NOT NULL column and give it a default of NULL. You should also reconsider your datatypes: tableID should be a numeric datatype (and the name isn't great; plain id or bookId would be better).
To your question:
You should define the table like this
CREATE TABLE `book` (
`ID` INT NOT NULL AUTO_INCREMENT,
`Customer_Name` varchar(20) NOT NULL,
`Customer_Ph` varchar(16) DEFAULT NULL,
`Customer_Ph2` varchar(30) NOT NULL,
`Info_Taken_By` varchar(12) NOT NULL,
`Project_Type` varchar(20) NOT NULL,
`Project_Size` varchar(20) NOT NULL,
`Date_Taken` varchar(5) NOT NULL,
`Date_Needed` varchar(5) NOT NULL,
`Sales_Order` varchar(5) NOT NULL,
`Information` text NOT NULL,
`Status` varchar(10) DEFAULT NULL,
PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
I don't know which datatypes you really need, because I don't know the data you are going to store, but to use the primary key and AUTO_INCREMENT features this will do the trick.
Don't do this from application code. Ever. Application code is poorly positioned to guarantee uniqueness, because you have a potential race condition between multiple clients trying to insert at about the same time. It will also be slower, because application code must first request a current value from the database before incrementing it, resulting in two separate database transactions.
The database, on the other hand, already has features to ensure atomicity and uniqueness, and can respond to requests in order, thus positioning it to do this job much faster and better. Indeed, pretty much every database out there, including MySql, already has this feature built in.
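With the AUTO_INCREMENT column in place, the application simply omits the id on insert and asks MySQL what it assigned. A sketch against the book table defined above (the row values are illustrative):

```sql
-- Sketch: omit ID entirely; MySQL assigns the next value atomically.
INSERT INTO book (Customer_Name, Customer_Ph2, Info_Taken_By, Project_Type,
                  Project_Size, Date_Taken, Date_Needed, Sales_Order, Information)
VALUES ('Jane Doe', '555-0100', 'MG', 'Deck', 'Small',
        '05/01', '06/01', 'S0001', 'sample notes');

-- Returns the ID generated for this connection's last insert,
-- so concurrent clients never see each other's values.
SELECT LAST_INSERT_ID();
```

Because LAST_INSERT_ID() is scoped to the current connection, there is no race condition even with many clients inserting at once.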
How can I avoid this problem:
Cannot insert the value NULL into column 'rowguid', column does not allow nulls. INSERT fails.
Sample query is:
insert into tablename (col1, col2, col3)
values('v1', 'v2', 'v3')
If you are not going to insert the value explicitly in the INSERT statement, and the column is NOT NULL, you need to specify a default value on the table.
The DEFAULT constraint is used to insert a default value into a
column. The default value will be added to all new records, if no
other value is specified.
It sounds like your table has been created with a ROWGUIDCOL but without the appropriate default value. Here is an example, from the ASP.NET forums, of a proper table definition using the feature; it should give you some help.
CREATE TABLE Globally_Unique_Data (
    guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
    Employee_Name varchar(60),
    CONSTRAINT Guid_PK PRIMARY KEY (guid)
);
The NEWSEQUENTIALID() default will generate a GUID for you. Without it you will have to generate your own and include it in the insert. You don't have to use it as the primary key as in the example, but you have to supply it or use the default.
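With that default in place, an insert that never mentions the guid column succeeds; a quick sketch (the name value is illustrative):

```sql
-- Sketch: the guid column is omitted, so the NEWSEQUENTIALID()
-- default fills it in automatically.
INSERT INTO Globally_Unique_Data (Employee_Name)
VALUES ('Jane Doe');
```

The same pattern applies to your own table: either add an equivalent DEFAULT to the rowguid column, or generate and supply the GUID in every INSERT.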
There are two solutions to this problem.
One is to make the 'rowguid' column allow NULL; the other is to set default values for your parameters.
In your code you can set values as
int v1 = 0;
string v2 ="";
Then pass these values to query.
You can set default parameter values in the stored procedure as
@v1 int = 0,
@v2 varchar(50) = ' '
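In context, a procedure with defaulted parameters might look like this sketch (the procedure name is illustrative; tablename and the columns come from the question's query):

```sql
-- Sketch: callers may omit @v1/@v2 and the defaults apply,
-- so the NOT NULL columns always receive a value.
CREATE PROCEDURE dbo.InsertIntoTablename
    @v1 VARCHAR(50) = '',
    @v2 VARCHAR(50) = '',
    @v3 VARCHAR(50) = ''
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO tablename (col1, col2, col3)
    VALUES (@v1, @v2, @v3);
END
```

Note this only helps for ordinary NOT NULL columns; a ROWGUIDCOL still needs either a supplied GUID or a NEWSEQUENTIALID()/NEWID() default on the table itself.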
Follow this example:
CREATE TABLE [dbo].[Files](
[id] [int] IDENTITY(1,1) NOT NULL,
[IdFile] [uniqueidentifier] unique ROWGUIDCOL NOT NULL,
[Title] [nvarchar](max) NULL,
[File] [varbinary](max) FILESTREAM NULL,
CONSTRAINT [PK_Table_1] PRIMARY KEY CLUSTERED
(
[id] ASC
))