We're currently migrating our old OR mapper to EF Core. Until now we used the Castle ActiveRecord OR mapper (http://www.castleproject.org/projects/activerecord) with the HiLo algorithm. The explanation is here:
https://github.com/castleproject-deprecated/ActiveRecord/blob/master/docs/primary-key-mapping.md
Now we want to switch to EF Core and try to use the same algorithm. But there isn't much explanation of how exactly the HiLo algorithm works in NHibernate/ActiveRecord, and I want to avoid Id collisions.
As far as I can see, the Hi value is stored in the database:
select next_hi from hibernate_unique_key
which currently has the value 746708.
I think the maxLo value is Int16.MaxValue.
In that case, the sequence for EF Core should be something like:
CREATE SEQUENCE [dbo].[DBSequenceHiLo]
AS [bigint]
START WITH (select next_hi from hibernate_unique_key + Int16.MaxValue)
INCREMENT BY Int16.MaxValue
MINVALUE -9223372036854775808
MAXVALUE 9223372036854775807
CACHE
GO
How exactly does the ActiveRecord HiLo algorithm work? What is the INCREMENT BY value? What is the START WITH value? The migration will take some time; is it possible to run both in parallel with the same HiLo algorithm?
As far as I know, it's not possible to use the exact same algorithm for ActiveRecord and EF Core: one works with a sequence and the other works with a table, so you can't use both OR mappers in parallel. But you can create a sequence for EF Core without ID collisions; you just can't use ActiveRecord afterwards.
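To illustrate the difference, here is a rough sketch of the two mechanisms; the exact statements ActiveRecord/NHibernate issues against hibernate_unique_key can differ by version, so treat this as an illustration only:
-- Table-based HiLo (ActiveRecord/NHibernate): read the current Hi value from a
-- table and increment it, which reserves a whole block of Ids for this client.
SELECT next_hi FROM hibernate_unique_key;
UPDATE hibernate_unique_key SET next_hi = next_hi + 1;
-- Sequence-based HiLo (EF Core): a native SQL Server sequence hands out the
-- block boundaries instead.
SELECT NEXT VALUE FOR [dbo].[DBSequenceHiLo];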
To get the INCREMENT BY value, just start the current app and create a DB entry with it. Stop the app, start it again, and create a second entry. Because you stopped the app, the Lo/cache is empty and it fetches the next Hi value, so the difference between those two IDs is the "INCREMENT BY" value of ActiveRecord. It was 2^17 in my case. The default should be 2^15, I think, but I haven't seen any documentation about it.
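Here is a minimal sketch of that check; the two Id values below are made-up placeholders that you replace with the Ids your app actually generated:
-- @id1 was created before the restart, @id2 right after it.
DECLARE @id1 bigint = 97845248;    -- placeholder value
DECLARE @id2 bigint = 97976320;    -- placeholder value
-- The gap is the block size ActiveRecord reserves per Hi value,
-- i.e. the INCREMENT BY you need for the EF Core sequence.
SELECT @id2 - @id1 AS IncrementBy; -- 131072 = 2^17 in this made-up example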
To get a start value, I created a SQL script that returns the highest Id in the database. Here is my script (it only works if your PK is named Id, and only on SQL Server):
DECLARE @tables TABLE(tablename nvarchar(max) NOT NULL);
DECLARE @name nvarchar(max)
DECLARE @maxid bigint
DECLARE @currentid bigint
DECLARE @query nvarchar(max);
DECLARE @sSQL nvarchar(500);
DECLARE @ParmDefinition nvarchar(500);
SET @maxid = 0
INSERT INTO @tables
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_CATALOG = 'BB_Vision'
WHILE (SELECT COUNT(*) FROM @tables) > 0
BEGIN
    SELECT TOP 1 @name = tablename FROM @tables
    IF EXISTS(SELECT 1 FROM sys.columns
              WHERE Name = N'Id'
              AND Object_ID = Object_ID(@name))
    BEGIN
        SELECT @sSQL = N'SELECT @retvalOUT = MAX(Id) FROM ' + @name;
        SET @ParmDefinition = N'@retvalOUT bigint OUTPUT';
        EXEC sp_executesql @sSQL, @ParmDefinition, @retvalOUT = @currentid OUTPUT;
        IF @currentid > @maxid
        BEGIN
            SET @maxid = @currentid
        END
    END
    DELETE @tables WHERE tablename = @name
END
SELECT @maxid + 1
Now you can create your EF Core sequence. Here is an explanation of how to use it:
https://www.talkingdotnet.com/use-hilo-to-generate-keys-with-entity-framework-core/
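Putting the pieces together, here is a hedged sketch of creating the sequence from the script output; the sequence name matches the one from the question, but the start and increment values below are placeholders you must replace with your own measurements:
-- Replace @maxid with the value returned by the script above and @increment
-- with the block size you measured for your ActiveRecord setup.
DECLARE @maxid bigint = 97976320;  -- placeholder
DECLARE @increment int = 131072;   -- placeholder (2^17 in my case)
DECLARE @sql nvarchar(max) =
      N'CREATE SEQUENCE [dbo].[DBSequenceHiLo] AS bigint'
    + N' START WITH ' + CAST(@maxid + @increment AS nvarchar(20))
    + N' INCREMENT BY ' + CAST(@increment AS nvarchar(20))
    + N' CACHE;';
-- CREATE SEQUENCE cannot take variables directly, hence dynamic SQL.
EXEC sp_executesql @sql;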
After that you shouldn't use ActiveRecord anymore, or you will have to create your sequence again with a higher start value.
Because the migration takes some time and you will most likely still implement features/bugfixes against the current OR mapper, it's a good idea to set your ActiveRecord Hi value to a much larger value on your local database. That way you can work with both on the same database, but I wouldn't use this in production:
update hibernate_unique_key set next_hi = next_hi + next_hi
I have a situation where I have a SQL Server 2016 database consisting of 77 tables (we'll call it MainDB). These tables hold records of a person and items attributed to them. The application retrieves what it needs by the GUID of the PersonID or the ItemID.
I have to query the entire DB for a person and place all the relevant data into a new database (we'll call it MainDB_Backup).
Right now I've started by writing a SQL script in SSMS consisting of:
BEGIN TRY
    BEGIN TRANSACTION
    DECLARE @PersonID uniqueidentifier;
    SELECT @PersonID = 'XXXXXXXX-XXXX-XXX-XXXX-XXXXXXXXXXXX'
    ... some more variables ...
    INSERT INTO MAINDB_BACKUP.dbo.Person
    SELECT * FROM MAINDB.dbo.Person
    WHERE MAINDB.dbo.Person.PersonID = @PersonID
    INSERT INTO MAINDB_BACKUP.dbo.PersonInfo
    SELECT * FROM MAINDB.dbo.PersonInfo
    WHERE MAINDB.dbo.PersonInfo.PersonID = @PersonID
    ... more insert into statements ...
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    DECLARE
        @ErrorMessage NVARCHAR(4000),
        @ErrorSeverity INT,
        @ErrorState INT;
    SELECT
        @ErrorMessage = ERROR_MESSAGE(),
        @ErrorSeverity = ERROR_SEVERITY(),
        @ErrorState = ERROR_STATE();
    RAISERROR (
        @ErrorMessage,
        @ErrorSeverity,
        @ErrorState
    );
    ROLLBACK TRANSACTION
END CATCH
I've been using the feature in SSMS that shows the tables that depend on a selected table and the tables the selected table depends on, in order to get the queries in the right order. The application itself does not write to all tables at the same time.
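For reference, roughly the same dependency information the SSMS feature shows can be pulled from the catalog views; this is only a sketch and not part of the script above:
-- Run in the MAINDB context: lists foreign key relationships so you can work
-- out a valid order for the per-table INSERT ... SELECT statements.
SELECT fk.name                              AS ForeignKeyName,
       OBJECT_NAME(fk.parent_object_id)     AS ReferencingTable,
       OBJECT_NAME(fk.referenced_object_id) AS ReferencedTable
FROM sys.foreign_keys AS fk
ORDER BY ReferencedTable, ReferencingTable;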
My question is: is there a better way of doing this? Also, would having individual statements for each table, as I have above, affect performance?
I have also considered using a stored procedure, but I'm working with this script first just to see if I can successfully move the data over.
I'm facing an issue in a stored procedure while performing update and delete operations in SQL Server 2008.
Issue description:
Two out of ten times, the record does not get updated in the table (table1).
The data (id) that gets deleted from table2 is not referenced as a foreign key from table1. When the responseid is sent to the stored proc, its value is the same in table1 and table2.
So the question here is: why does the update sometimes not happen, while the delete happens every time (the data in table2 gets deleted)?
Please suggest a way to eradicate this issue.
When tried with a sample application it works fine; the issue only occurs when there are multiple concurrent actions.
Here is the code:
DECLARE @statusid int, @sent bit = 1, @responseid int  -- values are passed in from the .NET code (@responseid type assumed)
IF (@sent = 1)
BEGIN
    DECLARE @newtran bit
    SET @newtran = 0
    IF (@@TRANCOUNT = 0)
    BEGIN
        SET @newtran = 1
        BEGIN TRAN
    END
    IF EXISTS (SELECT TOP 1 'x' FROM table1 WHERE responseid = @responseid)
    BEGIN
        UPDATE table1 SET statusid = @statusid WHERE responseid = @responseid
    END
    BEGIN
        DELETE FROM table2 WHERE id = @responseid
    END
    IF (@@ERROR <> 0)
    BEGIN
        IF (@newtran = 1 AND @@TRANCOUNT > 0)
        BEGIN
            SET @newtran = 0
            ROLLBACK TRAN
        END
        RAISERROR ('failed ', 16, 1)
        RETURN 600
    END
END -- if ends here
ELSE
BEGIN
    IF EXISTS (SELECT TOP 1 'x' FROM table1 WHERE responseid = @responseid)
    BEGIN
        UPDATE table1 SET status = 1 WHERE responseid = @responseid
    END
    BEGIN
        UPDATE table2 SET count = count + 1 WHERE id = @responseid
    END
    -- raising an error if error count <> 0
    BEGIN
    END
END
IF (@newtran = 1 AND @@TRANCOUNT > 0)
BEGIN
    -- commit transaction
END
I have an INSERT statement that was deadlocking when run through LINQ, so I placed it in a stored proc in case the surrounding statements were affecting it.
Now the stored proc is deadlocking. According to SQL Server Profiler, something about the INSERT statement is locking itself; it claims that two of those INSERT statements were waiting for the PK index to be freed.
Since I placed the code in the stored procedure, it now states that this stored proc has deadlocked with another instance of the same stored proc.
Here is the code. The SELECT statement is similar to the one LINQ used when it did its own query. I simply want to see if the item exists and, if not, insert it. I can find the record either by its PK or by some lookup values.
SET NOCOUNT ON;
BEGIN TRY
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    BEGIN TRANSACTION SPFindContractMachine
    DECLARE @id int;
    SET @id = (SELECT [m].pkID FROM Machines AS [m]
               WHERE ([m].[fkContract] = @fkContract) AND ((
                   (CASE
                       WHEN @bByID = 1 THEN
                           (CASE
                               WHEN [m].[pkID] = @nMachineID THEN 1
                               WHEN NOT ([m].[pkID] = @nMachineID) THEN 0
                               ELSE NULL
                           END)
                       ELSE
                           (CASE
                               WHEN ([m].[iA_Metric] = @lA) AND ([m].[iB_Metric] = @lB) AND ([m].[iC_Metric] = @lC) THEN 1
                               WHEN NOT (([m].[iA_Metric] = @lA) AND ([m].[iB_Metric] = @lB) AND ([m].[iC_Metric] = @lC)) THEN 0
                               ELSE NULL
                           END)
                   END)) = 1));
    IF (@id IS NULL)
    BEGIN
        INSERT INTO Machines(fkContract, iA_Metric, iB_Metric, iC_Metric, dteFirstAdded)
        VALUES (@fkContract, @lA, @lB, @lC, GETDATE());
        SET @id = SCOPE_IDENTITY();
    END
    COMMIT TRANSACTION SPFindContractMachine
    RETURN @id;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION SPFindContractMachine
END CATCH
Any procedure that follows the pattern:
BEGIN TRAN
check if row exists with SELECT
if row doesn't exist INSERT
COMMIT
is going to run into trouble in production, because there is nothing to prevent two threads from doing the check simultaneously and both reaching the conclusion that they should insert. In particular, under the SERIALIZABLE isolation level (as in your case), this pattern is guaranteed to deadlock.
A much better pattern is to rely on a database unique constraint, always INSERT, and catch duplicate key violation errors. This is also significantly more performant.
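A minimal sketch of that pattern, assuming a unique constraint or unique index already exists on the natural key (the index name and column list below are assumptions, not taken from the question):
-- Assumed, for illustration:
-- CREATE UNIQUE INDEX UX_Machines_Metrics
--     ON Machines(fkContract, iA_Metric, iB_Metric, iC_Metric);
DECLARE @id int;
BEGIN TRY
    INSERT INTO Machines (fkContract, iA_Metric, iB_Metric, iC_Metric, dteFirstAdded)
    VALUES (@fkContract, @lA, @lB, @lC, GETDATE());
    SET @id = SCOPE_IDENTITY();
END TRY
BEGIN CATCH
    -- 2627 = unique constraint violation, 2601 = unique index violation:
    -- in that case the row already exists, so just read its id.
    IF ERROR_NUMBER() IN (2627, 2601)
    BEGIN
        SELECT @id = pkID FROM Machines
        WHERE fkContract = @fkContract
          AND iA_Metric = @lA AND iB_Metric = @lB AND iC_Metric = @lC;
    END
    ELSE
    BEGIN
        THROW;  -- re-raise anything else (SQL Server 2012+; use RAISERROR on older versions)
    END
END CATCH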
Another alternative is to use the MERGE statement:
create procedure usp_getOrCreateByMachineID
    @nMachineId int output,
    @fkContract int,
    @lA int,
    @lB int,
    @lC int,
    @id int output
as
begin
    declare @idTable table (id int not null);
    merge Machines as target
    using (values (@nMachineId, @fkContract, @lA, @lB, @lC, GETDATE()))
        as source (MachineID, ContractID, lA, lB, lC, dteFirstAdded)
    on (source.MachineID = target.MachineID)
    when matched then
        update set @id = target.MachineID
    when not matched then
        insert (ContractID, iA_Metric, iB_Metric, iC_Metric, dteFirstAdded)
        values (source.ContractID, source.lA, source.lB, source.lC, source.dteFirstAdded)
    output inserted.MachineID into @idTable;
    select @id = id from @idTable;
end
go
create procedure usp_getOrCreateByMetrics
    @nMachineId int output,
    @fkContract int,
    @lA int,
    @lB int,
    @lC int,
    @id int output
as
begin
    declare @idTable table (id int not null);
    merge Machines as target
    using (values (@nMachineId, @fkContract, @lA, @lB, @lC, GETDATE()))
        as source (MachineID, ContractID, lA, lB, lC, dteFirstAdded)
    on (target.iA_Metric = source.lA
        and target.iB_Metric = source.lB
        and target.iC_Metric = source.lC)
    when matched then
        update set @id = target.MachineID
    when not matched then
        insert (ContractID, iA_Metric, iB_Metric, iC_Metric, dteFirstAdded)
        values (source.ContractID, source.lA, source.lB, source.lC, source.dteFirstAdded)
    output inserted.MachineID into @idTable;
    select @id = id from @idTable;
end
go
This example separates the two cases, since T-SQL queries should never attempt to resolve two different problems in one single query (the result is never optimizable). Since the two tasks at hand (get by machine id and get by metrics) are completely separate, they should be separate procedures, and the caller should call the appropriate one rather than passing a flag. This example shows how to achieve the (probably) desired result using MERGE, but of course a correct and optimal solution depends on the actual schema (table definition, indexes and constraints in place) and on the actual requirements (it is not clear what the procedure is expected to do if the criteria are already matched: not output an @id?).
By eliminating the SERIALIZABLE isolation level, this is no longer guaranteed to deadlock, but it may still deadlock. Solving the deadlock is, of course, completely dependent on the schema, which was not specified, so a solution to the deadlock cannot actually be provided in this context. There is the sledgehammer of locking all candidate rows (forcing UPDLOCK or even TABLOCKX), but such a solution would kill throughput under heavy use, so I cannot recommend it without knowing the use case.
Get rid of the transaction. It's not really helping you; instead, it is hurting you. That should clear up your problem.
How about this SQL? It moves the check for existing data and the insert into a single statement, so when two threads are running they're not deadlocked waiting for each other. At worst, thread two is blocked waiting for thread one, but as soon as thread one finishes, thread two can run.
BEGIN TRY
    BEGIN TRAN SPFindContractMachine
    INSERT INTO Machines (fkContract, iA_Metric, iB_Metric, iC_Metric, dteFirstAdded)
    SELECT @fkContract, @lA, @lB, @lC, GETDATE()
    WHERE NOT EXISTS (
        SELECT * FROM Machines
        WHERE fkContract = @fkContract
        AND ((@bByID = 1 AND pkID = @nMachineID)
            OR
            (@bByID <> 1 AND iA_Metric = @lA AND iB_Metric = @lB AND iC_Metric = @lC)))
    DECLARE @id INT
    SET @id = (
        SELECT pkID FROM Machines
        WHERE fkContract = @fkContract
        AND ((@bByID = 1 AND pkID = @nMachineID)
            OR
            (@bByID <> 1 AND iA_Metric = @lA AND iB_Metric = @lB AND iC_Metric = @lC)))
    COMMIT TRAN SPFindContractMachine
    RETURN @id
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN SPFindContractMachine
END CATCH
I also changed those CASE expressions to ORed clauses just because they were easier for me to read. If I recall my SQL theory correctly, the ORing might make this query a little slower.
I wonder if adding an UPDLOCK hint to the earlier SELECT(s) would fix this; it should avoid some deadlock scenarios by preventing another SPID from getting a read lock on the data you are about to mutate.
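A sketch of what that would look like on the original SELECT (shown here only for the lookup-by-metrics branch, for brevity); whether it actually removes the deadlock depends on the indexes in place:
-- Take an update lock and a range lock while checking for existence, so two
-- sessions cannot both pass the check and then both try to insert.
SET @id = (SELECT [m].pkID
           FROM Machines AS [m] WITH (UPDLOCK, HOLDLOCK)
           WHERE [m].[fkContract] = @fkContract
             AND [m].[iA_Metric] = @lA
             AND [m].[iB_Metric] = @lB
             AND [m].[iC_Metric] = @lC);
IF (@id IS NULL)
BEGIN
    INSERT INTO Machines(fkContract, iA_Metric, iB_Metric, iC_Metric, dteFirstAdded)
    VALUES (@fkContract, @lA, @lB, @lC, GETDATE());
    SET @id = SCOPE_IDENTITY();
END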
Hi, I have the following SP; however, when I use LINQ to SQL it generates two result sets. For my sanity I am trying to fathom out what in the stored procedure is doing this, as I would like it to return only a single result set. Can anyone help?
ALTER PROCEDURE [dbo].[CheckToken]
    @LoginId int
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    DECLARE @failures INT
    SET @failures = (SELECT COUNT(loginid) FROM auditerrorcode WHERE
                     errorid = 1012 AND loginid = @loginid
                     AND datecreated > DATEADD(hh, -1, getdate())
                    )
    IF @failures > 10 UPDATE [login] SET [IsDisabled] = 1 WHERE loginid = @loginid
    SELECT * FROM [Login] WHERE LoginId = @LoginId
END
Execute your procedure standalone to rule out that you're getting two rows because there really are two rows returned for the Id you are passing in. Do this in SQL Server Management Studio with:
EXEC dbo.CheckToken 999
Make sure to use the same @LoginID that you are passing from your .NET code.
Sorry guys...
I looked again at the generated DBML file and deleted the CheckToken method, which had multiple result sets defined. I then regenerated it, and now I get what I expected: one result set.
Looks like the mods I made to the SP have worked.