I have a LINQ query where I want to get the MAX of the Code column of my table, increase it, and insert a new record with the new Code, just like the IDENTITY feature of SQL Server, except that my Code column is char(5) and can contain both letters and digits.
My problem is that when inserting a new row, two concurrent processes can read the same max and insert records with an equal Code.
My code is:
var maxCode = db.Customers.Select(c=>c.Code).Max();
var anotherCustomer = db.Customers.Where(...).SingleOrDefault();
anotherCustomer.Code = GenerateNextCode(maxCode);
db.SubmitChanges();
I ran this code across 1000 threads, each updating 200 customers, and used a transaction with IsolationLevel.Serializable; after two or three executions, this error occurred:
using (var db = new DBModelDataContext())
{
    DbTransaction tran = null;
    try
    {
        db.Connection.Open();
        tran = db.Connection.BeginTransaction(IsolationLevel.Serializable);
        db.Transaction = tran;

        // ... read max code, generate next, update ...

        tran.Commit();
    }
    catch
    {
        if (tran != null)
            tran.Rollback();
    }
    finally
    {
        db.Connection.Close();
    }
}
error:
Transaction (Process ID 60) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Other IsolationLevels produce this error:
Row not found or changed.
Please help me, thank you.
UPDATE2: I have a .NET method that generates the new code, which is alphanumeric.
UPDATE3: My .NET function generates codes like this: 0000, 0001, 0002, ..., 0009, 000a, 000b, 000c, ..., 000z, 0010, 0011, 0012, ..., 0019, 001a, 001b, ..., 001z, ...
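A minimal sketch of what such an increment function might look like (the digit alphabet 0-9 followed by a-z matches the sequence above; the real implementation may differ in details):

private const string Digits = "0123456789abcdefghijklmnopqrstuvwxyz";

// Base-36 increment over a fixed-width code: bump the rightmost digit,
// carrying to the left when a digit wraps past 'z'.
public static string GenerateNextCode(string code)
{
    var chars = code.ToCharArray();
    for (int i = chars.Length - 1; i >= 0; i--)
    {
        int d = Digits.IndexOf(chars[i]);
        if (d < Digits.Length - 1)
        {
            chars[i] = Digits[d + 1]; // bump this digit and stop
            return new string(chars);
        }
        chars[i] = Digits[0]; // wrap to '0' and carry left
    }
    throw new OverflowException("Code space exhausted.");
}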
Avoid locking and continuous access to the same "slow access" resources:
At the start of your application (service), calculate the next id (max + 1, for example).
In some variable, reserve a block of values, for example 100 (how many depends on your ids' usage); you should lock access to this variable ONLY.
Use these ids.
Avoid IDENTITY columns (if a transaction rolls back, the id is still incremented).
Use a table that stores the last (or next) id for every table (or one row for all tables, as a variant); a sketch of this scheme follows below.
Good luck.
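As an illustration, here is a minimal sketch of such a block allocator (the table and column names dbo.TableIdGenerator, TableName and NextId are assumptions, matching the second solution below):

public class IdAllocator
{
    private readonly string _connectionString;
    private readonly object _sync = new object(); // the ONLY thing we lock
    private int _nextId;
    private int _remaining;

    public IdAllocator(string connectionString)
    {
        _connectionString = connectionString;
    }

    public int NextId()
    {
        lock (_sync)
        {
            if (_remaining == 0)
            {
                _nextId = ReserveBlock(100); // reserve 100 ids in one round-trip
                _remaining = 100;
            }
            _remaining--;
            return _nextId++;
        }
    }

    private int ReserveBlock(int blockSize)
    {
        // Atomically advance the counter; OUTPUT deleted.NextId returns the
        // value before the update, i.e. the first id of our block.
        const string sql = @"
            UPDATE dbo.TableIdGenerator
            SET NextId = NextId + @size
            OUTPUT deleted.NextId
            WHERE TableName = @table;";
        using (var con = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(sql, con))
        {
            cmd.Parameters.AddWithValue("@size", blockSize);
            cmd.Parameters.AddWithValue("@table", "Customers");
            con.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}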
For your web application, this is how to change and access Application state:
Application.Lock();
Application["IDS"] = <some ids>
Application.UnLock();
Second solution:
Use a stored procedure with code something like this:
declare @id int

update t set
    @id = id,
    id = @id + 1
from dbo.TableIdGenerator t
where t.TableName = 'my table name that id I need'

select @id
The update operation is atomic, so you can increment the id and return the current one in a single statement.
Don't forget to insert the initial (and only) record for each table's ids.
Third solution:
Use a CLR function.
If possible, try to rethink your database design. As you have already noticed, needing the Serializable isolation level, which can end up locking a whole table, is troublesome.
I assume that the 5-character unique incrementing value is a requirement, because if it is not, you should definitely just use an IDENTITY column. Assuming it is, here is an idea that might work.
Try to create a method that expresses that 5-character identifier as a number. How to do this depends on which characters are allowed in your identifier and which combinations are possible, but here are some examples: '00000' -> 0, '00009' -> 9, '0000z' -> 35, '00010' -> 36, '0001z' -> 71, 'zzzzz' -> 60466175. When you've found such a method, use an incrementing primary key for the table, and use a trigger that calculates the char identifier after you insert a record. When a trigger is not appropriate, you can also do this in .NET. Or you can choose not to store that 5-char value in your database at all, because it is calculated: define it in a view, or simply as a property on your domain entity.
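For example, assuming the allowed characters are 0-9 followed by a-z, the mapping is plain base-36 arithmetic. A sketch:

private const string Digits = "0123456789abcdefghijklmnopqrstuvwxyz";

// '00000' <-> 0, '0001z' <-> 71, 'zzzzz' <-> 60466175 (base 36, 5 digits).
public static int CodeToId(string code)
{
    int id = 0;
    foreach (char c in code)
        id = id * 36 + Digits.IndexOf(c);
    return id;
}

public static string IdToCode(int id)
{
    var chars = new char[5];
    for (int i = 4; i >= 0; i--)
    {
        chars[i] = Digits[id % 36];
        id /= 36;
    }
    return new string(chars);
}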
I have a solution, not complete, but it reduces the errors and problems. I have a file named lock.txt, and every attempt to take the lock must open this file, get the maximum code, generate the next one, update my table, and close the file. The file is just for opening and closing; there is no content in it.
public void DoTheJob()
{
    int tries = 0;
    while (true)
    {
        try
        {
            using (var sr = new StreamReader(@"c:\lock.txt"))
            {
                try
                {
                    // get the maximum code from my table
                    // generate next code
                    // update current record with the new code
                }
                catch (Exception ex)
                {
                    Logger.WriteError(ex);
                }
            }
            return;
        }
        catch
        {
            Thread.Sleep(2000); // wait for the lock for 2 seconds
            tries++;
            if (tries > 15)
                throw new Exception("Timeout, try again.");
        }
    }
}
Please say if this solution is correct.
Or use StreamWriter.
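Note that new StreamReader(path) opens the file with shared read access, so two processes can hold it open at the same time; for the file to behave as a lock you need an exclusive open. A sketch of that variant:

// FileShare.None gives us an exclusive handle; any concurrent opener
// gets an IOException until this handle is disposed.
using (var lockFile = new FileStream(@"c:\lock.txt",
    FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.None))
{
    // get the maximum code, generate the next one, update the record
}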
It would be really useful to see your GenerateNextCode function, because it could be a crucial piece of information. Why? Because I suspect it is in fact possible to change this function from
f(code) -> code
to
f(id) -> code
If so, you could redesign your table and the whole concept would become much easier.
But assuming it is really not possible, a quick solution: use a pool table with pregenerated codes, and simply use autoincremented ids in your main table.
Disadvantage: you have to use an extra join to retrieve the data. Personally I don't like it.
Another solution, the "normal" one: keep a lower isolation level and simply handle the exception (i.e. get the code again, calculate the new code again and save the data); see the sketch below. It is a pretty classic situation, web or no web, it does not matter.
Please note: you will get the same problem on concurrent edits of the same data, so in some sense you cannot avoid this kind of problem.
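A sketch of that retry loop, using the types from the question (the customerId key and the retry limit are assumptions):

// Re-read, re-generate and re-save until no other writer interferes.
for (int attempt = 0; attempt < 5; attempt++)
{
    try
    {
        using (var db = new DBModelDataContext())
        {
            var maxCode = db.Customers.Select(c => c.Code).Max();
            var customer = db.Customers.Single(c => c.Id == customerId);
            customer.Code = GenerateNextCode(maxCode);
            db.SubmitChanges(); // throws on "Row not found or changed"
            return;
        }
    }
    catch (ChangeConflictException)
    {
        // another writer won the race; loop and try again with fresh data
        // (a unique-index SqlException could be handled the same way)
    }
}
throw new Exception("Could not assign a unique code after 5 attempts.");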
EDIT:
So, I guessed right: this function is simply f(id) -> code. You can drop the code column and use an autoincrement id, then add a view where the code is calculated on the fly. Using a view as the way to retrieve data from a table is always a good idea (think of it as the getter of a property in C#).
If you are afraid of the CPU usage ;-) you can instead calculate the code while inserting records (use a trigger).
Of course the locking problems are not removed entirely (concurrent edits can still occur).
Here is my answer: not completely correct, but it runs without errors.
public static void WaitLock<T>() where T : class
{
    using (var db = GetDataContext())
    {
        var tableName = typeof(T).Name;
        var count = 0;
        while (true)
        {
            var recordsUpdated = db.ExecuteCommand(
                "UPDATE LockedTable SET IsLocked = 1 WHERE TableName = '" + tableName + "' AND IsLocked = 0");
            if (recordsUpdated <= 0)
            {
                Thread.Sleep(2000);
                count++;
                if (count > 50)
                    throw new Exception("Timeout getting lock on table: " + tableName);
            }
            else
            {
                break;
            }
        }
    }
}

public static void ReleaseLock<T>() where T : class
{
    using (var db = GetDataContext())
    {
        var tableName = typeof(T).Name;
        db.ExecuteCommand("UPDATE LockedTable SET IsLocked = 0 WHERE TableName = '" + tableName + "' AND IsLocked = 1");
    }
}
public static void GetContactCode(int id)
{
    int tries = 0;
    while (true)
    {
        try
        {
            WaitLock<Contact>();
            using (var db = GetDataContext())
            {
                try
                {
                    var ct = // get contact
                    var maxCode = // maximum code
                    ct.Code = // generate next
                    db.SubmitChanges();
                }
                catch
                {
                }
            }
            return;
        }
        catch
        {
            Thread.Sleep(2000);
            tries++;
            if (tries > 15)
                throw new Exception("Timeout, try again.");
        }
        finally
        {
            ReleaseLock<Contact>();
        }
    }
}
I am using Entity Framework Core 3.1.8 with SQL Server 2016.
Consider the following example (simplified for clarity):
Database table is defined as follows:
CREATE TABLE [dbo].[Product]
(
    [Id] INT IDENTITY(1,1) NOT NULL,
    [ProductName] NVARCHAR(500) NOT NULL,
    CONSTRAINT [PK_Product] PRIMARY KEY CLUSTERED (Id ASC) WITH (FILLFACTOR = 80),
    CONSTRAINT [UQ_ProductName] UNIQUE NONCLUSTERED (ProductName ASC) WITH (FILLFACTOR = 80)
)
And the following C# program:
using System;
using System.Linq;
using System.Reflection;

namespace CcTest
{
    class Program
    {
        static int Main(string[] args)
        {
            Product product = null;
            string newProductName = "Basketball";

            using (CcTestContext context = new CcTestContext())
            using (var transaction = context.Database.BeginTransaction())
            {
                try
                {
                    product = context.Product.Where(p => p.ProductName == newProductName).SingleOrDefault();
                    if (product is null)
                    {
                        product = new Product { ProductName = newProductName };
                        context.Product.Add(product);
                        context.SaveChanges();
                        transaction.Commit();
                    }
                }
                catch (Exception ex)
                {
                    transaction.Rollback();
                }
            }

            if (product is null)
                return -1;
            else
                return product.Id;
        }
    }
}
Everything works as expected during testing: a new product is inserted into the table if it didn't already exist, so I expected the [UQ_ProductName] constraint never to be hit, because everything is done as a single transaction.
However, in reality this code is part of the business logic behind a Web API. What happened was that 10 instances of this code, using the same new product name, got executed almost simultaneously (the execution times were within one hundredth of a second of each other; we save them in a log table). One of them succeeded (the new product name was inserted into the table), but the rest failed with the following exception:
Violation of UNIQUE KEY constraint 'UQ_ProductName'. Cannot insert duplicate key in object 'dbo.Product'. The duplicate key value is (Basketball). The statement has been terminated.
Why did this happen? Isn't this exactly what my use of a transaction was supposed to prevent? That is, I think checking whether a row with such a value already exists, and inserting it if it doesn't, should have been an atomic operation: after the first API call executed and the row was inserted, the rest of the API calls should have detected that the value already exists and not tried to insert a duplicate.
Can someone explain whether there is an error in my implementation? Clearly it is not working the way I expect.
TLDR: using a transaction (at any isolation level) alone will not solve the issue.
The root cause of the issue is perfectly explained here: https://stackoverflow.com/a/6173482/412352
When using a serializable transaction, SQL Server issues shared locks on the records / tables it reads. Shared locks don't allow other transactions to modify the locked data (those transactions block), but they do allow other transactions to read the data before the transaction that issued the locks starts modifying it. That is why the example doesn't work: concurrent reads are allowed under shared locks until the first transaction starts modifying data, and by then the other transactions have already decided to insert.
Below is the block of code that will reproduce issue all the time and fix is also there (commented out):
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

namespace CcTest
{
    class Program
    {
        static void Main(string[] args)
        {
            object Lock = new object();

            // The parallel loop below will reproduce the issue every time
            // (if lock (Lock) is commented out) when AddProduct() assigns
            // newProductName a value that doesn't currently exist in the database.
            Parallel.For(0, 10, index =>
            {
                //lock (Lock) // uncomment this to resolve the issue
                {
                    AddProduct(index);
                }
            });

            // The sequential loop below will always work as expected:
            //for (int index = 0; index < 10; index++)
            //    AddProduct(index);

            Console.ReadKey();
        }

        static void AddProduct(int index)
        {
            Product product = null;
            string newProductName = "Basketball"; // specify something that doesn't exist in the database table

            using (CcTestContext context = new CcTestContext())
            using (var transaction = context.Database.BeginTransaction())
            {
                try
                {
                    product = context.Product.FirstOrDefault(p => p.ProductName == newProductName);
                    if (product is null)
                    {
                        product = new Product { ProductName = newProductName };
                        context.Product.Add(product);
                        context.SaveChanges();
                        transaction.Commit();
                        Console.WriteLine($"API call #{index}, Time {DateTime.Now:ss:fffffff}: Product inserted. Id={product.Id}\n");
                    }
                    else
                    {
                        Console.WriteLine($"API call #{index}, Time {DateTime.Now:ss:fffffff}: Product already exists. Id={product.Id}\n");
                    }
                }
                catch (DbUpdateException dbuex)
                {
                    transaction.Rollback();
                    if (dbuex.InnerException != null)
                        Console.WriteLine($"API call #{index}, Time {DateTime.Now:ss:fffffff}: Exception DbUpdateException caught, Inner Exception Message: {dbuex.InnerException.Message}\n");
                    else
                        Console.WriteLine($"API call #{index}, Time {DateTime.Now:ss:fffffff}: Exception DbUpdateException caught, Exception Message: {dbuex.Message}\n");
                }
                catch (Exception ex)
                {
                    transaction.Rollback();
                    Console.WriteLine($"API call #{index}, Time {DateTime.Now:ss:fffffff}: Exception caught: {ex.Message}\n");
                }
            }
        }
    }
}
As you can see, one way to fix the issue is to place the code in a critical section.
Another approach is NOT to place the code in a critical section, but to catch the exception and check whether it is a DbUpdateException. Then you can check whether the inner error indicates a constraint violation and, if so, re-read the row from the database; see the sketch below.
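A sketch of that second approach, checking SQL Server error numbers 2627/2601 (unique constraint/index violation) instead of parsing the message text (SqlException comes from Microsoft.Data.SqlClient or System.Data.SqlClient depending on your setup):

try
{
    product = new Product { ProductName = newProductName };
    context.Product.Add(product);
    context.SaveChanges();
}
catch (DbUpdateException ex) when (ex.InnerException is SqlException sqlEx
    && (sqlEx.Number == 2627 || sqlEx.Number == 2601))
{
    // Another request inserted the same name first:
    // detach our failed copy and re-read the winner's row.
    context.Entry(product).State = EntityState.Detached;
    product = context.Product.Single(p => p.ProductName == newProductName);
}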
Yet another approach is to use raw SQL and specify SELECT locking hints:
https://weblogs.sqlteam.com/dang/2007/10/28/conditional-insertupdate-race-condition/
PLEASE NOTE: any of these approaches may have negative implications (such as a performance decrease).
Other useful pages to take a look at:
https://learn.microsoft.com/en-us/aspnet/mvc/overview/getting-started/getting-started-with-ef-using-mvc/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application#modify-the-department-controller
https://weblogs.sqlteam.com/dang/2009/01/31/upsert-race-condition-with-merge/
I suspect that this behavior is related to the transaction IsolationLevel (ReadCommitted by default for SQL Server in the EF Core provider).
I'd rather wait for someone with more expertise to give you a proper explanation, but you can read about isolation levels and read phenomena here.
Imagine what would happen if two threads ran this code: both query the table, then both are paused, then both resume and try to insert the value. Your current code structure cannot prevent these two threads from attempting the insert at the same time, and a database transaction alone won't help you here.
You could write raw SQL like this to reduce the risk of a duplicate:
insert into Product (ProductName)
select @newName
where not exists (
    select 1 from Product where ProductName = @newName
)
Though the database may encounter the exact same threading issue.
Or you could just wrap your code in a try catch and do it again. (see also Only inserting a row if it's not already there)
Or, if you are planning to update the record anyway:
update Product set ...
where ProductName = @newName

if @@rowcount = 0
begin
    insert into Product (ProductName) values (@newName)
end
The isolation level, ReadCommitted by default (MSSQL), does not prevent a non-repeatable read.
This means your product-name query does not hold a lock on the Product table, so you can do the same read later in the transaction and get a different value (because another thread modified or inserted it in the meantime).
What you want to use is either Snapshot isolation (better for application performance: other threads don't have to wait for your transaction to complete before they can read the data) or Serializable isolation.
You should be able to specify the isolation level via the BeginTransaction(IsolationLevel) overload if you reference the Microsoft.EntityFrameworkCore.Relational package; a sketch follows below.
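For example (a sketch; note that Snapshot isolation additionally requires ALLOW_SNAPSHOT_ISOLATION to be enabled on the database):

using System.Data; // for IsolationLevel

using (var transaction = context.Database.BeginTransaction(IsolationLevel.Snapshot))
{
    // ... query and insert as before ...
    transaction.Commit();
}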
I have a C# project which connects to a T-SQL database. The project runs multiple sequential UPDATE statements on a single table, e.g.:
private void updateRows()
{
    string update1 = "UPDATE table WITH (ROWLOCK) SET ... WHERE ...;";
    string update2 = "UPDATE table WITH (ROWLOCK) SET ... WHERE ...;";
    string update3 = "UPDATE table WITH (ROWLOCK) SET ... WHERE ...;";

    // execute updates one after the other
}
Ideally, these statements will be batched to avoid making multiple round-trips to/from the database:
string update = @"
    UPDATE table WITH (ROWLOCK) SET ... WHERE ...;
    GO
    UPDATE table WITH (ROWLOCK) SET ... WHERE ...;
    GO
    UPDATE table WITH (ROWLOCK) SET ... WHERE ...;
    GO
";
My question is: if the statements are batched, does this increase the chance of deadlock errors occurring due to table scans?
As there is less time between the statements, I imagine this could increase the chance of deadlocks: one update could take a row or page lock that has not yet been released by the time the next statement executes. If the statements were not batched, there would be more time for row or page locks to be released between updates, and therefore less chance of deadlocks.
I guess you're not going to like my answer; here are my 2 cents, let me try to explain.
First, your ROWLOCK hint might not work: you might end up getting a table lock if your DDL doesn't allow SQL Server to apply row locking in your transaction.
SQL Server likes set operations; it performs great when you update a large dataset at one time.
I had a similar issue: I needed to update large volumes of user transactions but had no spare IO in the system, so I ended up using an 'ETL-like' update.
In C# I use a bulk insert to get all the data into the database in one go. Here is my method.
protected void BulkImport(DataTable table, string tableName)
{
    if (!CanConnect)
        return;

    var options = SqlBulkCopyOptions.FireTriggers | SqlBulkCopyOptions.CheckConstraints |
                  SqlBulkCopyOptions.UseInternalTransaction;

    using (var bulkCopy = new SqlBulkCopy(_con.ConnectionString, options))
    {
        bulkCopy.DestinationTableName = tableName;
        bulkCopy.BulkCopyTimeout = 30;
        try
        {
            lock (table)
            {
                bulkCopy.WriteToServer(table);
                table.Rows.Clear();
                table.AcceptChanges();
            }
        }
        catch (Exception ex)
        {
            var msg = $"Error: Failed the writing to {tableName}, the error:{ex.Message}";
            Logger?.Enqueue(msg);
            try
            {
                var TE = Activator.CreateInstance(ex.GetType(), new object[] { $"{msg}, {ex.Message}", ex });
                Logger?.Enqueue(TE as Exception);
            }
            catch
            {
                Logger?.Enqueue(ex);
            }
        }
        finally
        {
            bulkCopy.Close();
        }
    }
}
Please note that DataTable is not thread safe; you need to lock the DataTable when interacting with it (inserting rows, clearing the table).
Then I dump the data into a staging table and use a MERGE statement to bring the data into the table where I need it.
I do 100k+ records per second across 50 or so tables and have had no performance or deadlock issues so far.
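For illustration, the staging-plus-merge step after BulkImport might look like this (the table and column names are assumptions):

// Fold the staged rows into the target table in one set-based statement.
const string mergeSql = @"
    MERGE dbo.UserTransactions AS t
    USING dbo.Staging_UserTransactions AS s ON t.Id = s.Id
    WHEN MATCHED THEN UPDATE SET t.Amount = s.Amount
    WHEN NOT MATCHED THEN INSERT (Id, Amount) VALUES (s.Id, s.Amount);
    TRUNCATE TABLE dbo.Staging_UserTransactions;";

using (var con = new SqlConnection(_con.ConnectionString))
using (var cmd = new SqlCommand(mergeSql, con))
{
    con.Open();
    cmd.ExecuteNonQuery();
}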
I have a one-column table to keep track of the number of visits. In Global.asax.cs I attempt to increment this value by 1 inside Application_Start, but the field doesn't get updated. I get no exceptions, but the number of rows affected is always zero.
I tried the same simple query in SSMS and got the same thing: 0 rows affected.
There is one int column in that table called NumVisits.
This is part of Application_Start in Global.asax.cs:
Application.Lock();
int iNumVisits = SomeClass.GetNumVisits();
SomeClass.UpdateNumVists(iNumVisits + 1);
Application.UnLock();
This is in SomeClass (BLL):
public static void UpdateNumVists(int iNumVisists)
{
    LocaleRepository oLocaleRepository = new LocaleRepository(new SqlDbContext());
    oLocaleRepository.UpdateNumVists(iNumVisists);
}
And this is in the DAL:
public void UpdateNumVists(int iNumVisits)
{
    int iRet = 0;
    try
    {
        dbContext.Open();
        List<SqlParameter> spParams = new List<SqlParameter>();
        string sQuery = "update Visits set NumVisits = @NumVisits";
        spParams.Add(dbContext.CreateSqlParam("@NumVisits", SqlDbType.Int, 0, iNumVisits));
        dbContext.ExecuteSqlNonQuery(sQuery, spParams, ref iRet);
    }
    catch (Exception e)
    {
        throw e;
    }
    finally
    {
        dbContext.Close();
    }
    return;
}
I use the following for all commands using ExecuteNonQuery:
public void ExecuteSqlNonQuery(string sQuery, List<SqlParameter> spParams, ref int iRet)
{
    using (SqlCommand command = new SqlCommand())
    {
        command.CommandType = CommandType.Text;
        command.Parameters.AddRange(spParams.ToArray<SqlParameter>());
        command.Connection = DbConnection;
        command.CommandText = sQuery;
        try
        {
            iRet = command.ExecuteNonQuery();
        }
        catch (Exception e)
        { }
    }
}
When the update command is executed, iRet is zero. I can't see why a simple update query would not work.
This is the create script I got from SSMS:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Visits](
    [NumVisits] [int] NULL
) ON [PRIMARY]
GO
In general there are a few possible reasons why an update would not happen.
First, if the field is an identity or calculated field, an ordinary update is not going to work. This doesn't look to be your case, but it is good to know for the future.
Next, if there is a trigger on the table, it may be preventing the update. SSMS doesn't necessarily script triggers out when you script a table, so I can't tell whether you have one.
Third, and most common: your application may not be sending what you expect as the update statement, or may not even be communicating with the database when you think it is. This is often a problem of nulls: if your variable is not properly populated, you may indeed be updating a null value to a null value. Run Profiler to capture exactly what is being sent when you try to do the update. Often, once you see the statement that is actually being run, you will see the problem; sometimes it is a missing space, sometimes a variable that was not populated when you thought it was, etc.
Another possibility is that the user running the code has no update rights on the table, though you should have gotten a message if that were the case.
If you have run Profiler, try running that exact statement in SSMS and see if it updates. Sometimes a better error bubbles up when you do that, especially if the error handling in your application code is not well designed.
Of course, if the table has no data, you need to do an insert, not an update. Or the update might not be finding any records to update: try doing a select using the same conditions, and it may turn out there is no record to update. A seeding sketch follows below.
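For instance, seeding the single row before updating removes that failure mode (a sketch reusing the question's helpers):

// With no row in Visits, a bare UPDATE affects 0 rows yet raises no error,
// which matches the symptoms; seed the row first, then update it.
string sQuery = @"IF NOT EXISTS (SELECT 1 FROM Visits)
                      INSERT INTO Visits (NumVisits) VALUES (0);
                  UPDATE Visits SET NumVisits = @NumVisits;";
spParams.Add(dbContext.CreateSqlParam("@NumVisits", SqlDbType.Int, 0, iNumVisits));
dbContext.ExecuteSqlNonQuery(sQuery, spParams, ref iRet);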
It seems like there is no data in the table. Are there any records in Visits at all?
I am working on a console application which gets 1 million records from a table on a different server, stores them in a DataTable, and sends the DataTable to SQL Server, where a MERGE statement inserts/updates our table. Getting the data takes 25 min and inserting takes 20 min. I am using SQL Server Management Studio 2012.
I would like to give feedback on the front screen by displaying "10000 records inserted/updated" for every 10000 records, or at a fixed delay of 10 seconds.
Is there a way to achieve this in SQL (for every 10000 records updated, send a message to the app) or in the console application? Below is the stored procedure I am using.
ALTER PROCEDURE [dbo].[sp_Data_Inserting_In_To_QueueTable]
    -- Add the parameters for the stored procedure here
    @Values AS [dbo].[Type_Table] READONLY
AS
BEGIN
    SET NOCOUNT ON;

    ;WITH CTE_data_codes AS (
        select [NO], [ID], [Name], [Code] from @Values where [NO] != '0'
    )
    MERGE tbl__QueueTable_Codes AS t
    -- @Values is the table-valued parameter in which data comes from the console application
    USING CTE_data_codes AS s
        ON (s.Code = t.Code and s.NO = t.NO and s.ID = t.ID)
    WHEN NOT MATCHED by target
        -- Newly added rows in @Values are inserted into tbl__QueueTable_Codes
        THEN INSERT (NO, Name, ID, Code, Deleted)
             VALUES (s.[NO], s.[Name], s.[ID], s.[Code], 0)
    WHEN NOT MATCHED by source
        -- The row has been deleted from @Values, so flag it as deleted ('1')
        THEN UPDATE
             SET t.Deleted = 1;
END
If you don't need the insert/merge to be sequential, you should add parallelism to your inserts.
In my case I process AVL GPS data, around 4000 records/min, and each record needs a near_road lookup to know where the car is. The function that finds the road is linear time, about 10 ms per record, so 4000 records take about 40 seconds.
So instead of sending one INSERT query for 4000 records, I use C# to send 4 x 1000, which takes ~10 seconds instead; see the sketch below.
In this QUESTION you can see how I split the AVL data into ranges and send them to a stored procedure, and one of the answers suggests using Parallel.ForEach in C#.
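A sketch of that batching idea (the batch count, the records collection and the InsertBatch routine are assumptions):

// Split the records into 4 batches and insert them concurrently.
var batches = records
    .Select((record, i) => new { record, i })
    .GroupBy(x => x.i % 4, x => x.record)
    .Select(g => g.ToList())
    .ToList();

Parallel.ForEach(batches, batch =>
{
    InsertBatch(batch); // hypothetical: one INSERT / stored-procedure call per batch
});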
This is possible. I'm not sure what you are actually using to get the data from the database, but the SqlDataReader class is effectively a forward-only stream that processes the query output row by row. You could monitor the count and call a delegate (Action) that prints your progress every time you get another N rows, by wrapping your method in a class that accepts the delegate as a parameter.
Here is an example (pseudocode, never actually checked) as I see it:
public class ReadWriteWithProgress
{
    public List<ClassWeAreReading> ReadData(Action<int> rowCountReporter)
    {
        List<ClassWeAreReading> result = new List<ClassWeAreReading>();
        using (SqlConnection connection = new SqlConnection("Server=localhost;Integrated Security=true;Initial Catalog=MyDatabase;"))
        {
            connection.Open();
            var queryToExecute = "SELECT Id, Name FROM dbo.Table";
            using (SqlCommand command = new SqlCommand(queryToExecute, connection))
            using (SqlDataReader dataReader = command.ExecuteReader())
            {
                int rowCounter = 0;
                while (dataReader.Read())
                {
                    var intermediateResult = new ClassWeAreReading();
                    intermediateResult.Id = (int)dataReader["Id"];
                    intermediateResult.Name = dataReader["Name"].ToString();
                    rowCounter++;
                    if (rowCounter % 1000 == 0 && rowCountReporter != null)
                    {
                        rowCountReporter(rowCounter);
                    }
                    result.Add(intermediateResult);
                }
            }
        }
        return result; // the original sketch was missing this return
    }

    public void LoadData(List<ClassWeAreReading> dataToLoad, Action<int> rowCountReporter)
    {
        int rowCounter = 0;
        // Iterate through the list (or any other collection you can iterate through)
        // and use the callback in the same manner as above:
        // ...
        rowCounter++;
        if (rowCounter % 1000 == 0 && rowCountReporter != null)
        {
            rowCountReporter(rowCounter);
        }
        // ...
    }
}
// Then you need this kind of method to use as a callback:
private static void PrintRowCount(int rowCount)
{
    Console.WriteLine("{0} rows transferred...", rowCount);
}

private static void PrintUpdateRowCount(int rowCount)
{
    Console.WriteLine("{0} rows written...", rowCount);
}
// And finally you can start your stuff and pass in the method:
static void Main(string[] args)
{
    var readerWithProgress = new ReadWriteWithProgress();
    var result = readerWithProgress.ReadData(PrintRowCount);
    readerWithProgress.LoadData(result, PrintUpdateRowCount);
}
I have three database tables:
Student{RollNo(primary key), Name etc.}
Book{Id(primary key),NoOfCopiesAvailable etc.}
IssuedBook{RollNo(Foreign Key referencing Student(RollNo)), Id(Foreign Key referencing Book(Id)}
Now, a student can't be issued more than 5 books (meaning a RollNo can't appear in the IssuedBook table more than 5 times, right?), and a book can only be issued if NoOfCopiesAvailable > 0.
So, I wrote the following trigger:
CREATE TRIGGER trigger_IssuedBook ON IssuedBook
INSTEAD OF INSERT
AS BEGIN
    IF ((SELECT NoOfCopiesAvailable FROM Book WHERE Id = (SELECT Id FROM INSERTED)) > 0
        AND (SELECT COUNT(RollNo) FROM IssuedBook WHERE RollNo = (SELECT RollNo FROM INSERTED)) < 5)
        INSERT INTO IssuedBook SELECT RollNo, Id FROM INSERTED
    ELSE
        -- Stuck here. I want to send a message/alert back to the application (C#),
        -- but don't know how to achieve that.
END
Can this be achieved? If yes, please let me know, and if possible provide the code. Or if there is any other way of doing this whole thing, please do share; I would be very thankful for your help. Thank you!
NOTE: Using a Windows Forms application coded in C# to update the database, SQL Server 2008.
Putting business logic inside a trigger isn't a good idea, and having the trigger communicate a specific error message back to the C# code is difficult. Also, your trigger isn't using a transaction, so it's possible a student could still end up with more than 5 books.
I would put both the SELECT COUNT and the INSERT statement inside a transaction, either in C# or in a stored procedure. With a sufficiently strict isolation level (or locking hints), that prevents another query from inserting a record until the transaction either succeeds or gets rolled back.
Here's some C# code with LINQ to get you started.
public static bool CheckoutBook(int bookID, int studentID)
{
    try
    {
        using (var dc = new LibraryDataContext())
        {
            dc.Connection.Open();
            using (var tran = dc.Connection.BeginTransaction())
            {
                dc.Transaction = tran; // make the DataContext use our transaction

                var bookCount = dc.StudentBook.Count(a => a.StudentID == studentID);
                if (bookCount < 5)
                {
                    var studentBook = new StudentBook();
                    studentBook.StudentID = studentID;
                    studentBook.BookID = bookID;
                    studentBook.CreateDate = DateTime.Now;
                    dc.StudentBook.InsertOnSubmit(studentBook);
                    dc.SubmitChanges();
                    tran.Commit();
                    return true;
                }
            }
        }
    }
    catch (Exception ex)
    {
        // the transaction failed for some reason; it is rolled back on dispose
    }
    return false;
}