I have a program that uses the old LINQ to SQL to connect an ASP.NET application to a SQL Server DB.
The ASP.NET application and the SQL Server instance are on the same machine, and both environments are up to date (IIS 10, .NET Framework 4.8 and SQL Server 2019).
In the software I have to handle a virtual cart with the customer order. The cart has many fields; one of them is an nvarchar that contains the "cart document", a string that is typically a few KB but sometimes may reach a few MB (never more than 10MB).
When I update a document string in the range of 2-3MB, and then update the single row that contains it, the update operation is really, really slow (2-2.5s).
Here is the update code:
protected void Upsert(CartDto cart, bool isValidationUpsert = false)
{
    lock (_sync)
    {
        if ((cart?.Id ?? 0) <= 0)
            throw new ExtendedArgumentException("cartId");

        using (var dbContext = ServiceLocator.ConnectionProvider.Instace<CartDataContext>())
        {
            var repository = new CartRepository(dbContext);

            var existingCart = repository.Read(crt => crt.ID == cart.Id).FirstOrDefault();
            if (existingCart == null)
            {
                existingCart = new tbl_set_Cart();
                existingCart.Feed(cart);
                repository.Create(existingCart);
            }
            else
            {
                existingCart.Feed(cart);
                repository.Update(existingCart);
            }

            dbContext.SubmitChanges(); // <<--- this specific operation takes 2-2.5s; the previous instructions take negligible time
        }
    }
}
I have no idea why, nor how to improve performance in this scenario.
-- EDITED:
As suggested, I have profiled the operation on the DB and experienced the same delay (~2.5s) even if I run the SQL code directly on SQL Server (using SSMS to connect and execute the code).
Here is the SQL code and the performance statistics:
DECLARE @p0 AS INT = [cart_id];
DECLARE @p1 AS INT = [entry_count];
DECLARE @p2 AS NVARCHAR(MAX) = '..document..';
UPDATE [dbo].[tbl_set_Cart]
SET [ITEMS_COUNT] = @p1, [ITEMS] = @p2
WHERE [ID] = @p0
Here is my table schema; as you can see, it's very simple:
/****** Object: Table [dbo].[tbl_set_Cart] Script Date: 02/12/2021 15:44:07 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[tbl_set_Cart](
[ID] [int] NOT NULL,
[AS400_CUSTOMER_COD] [nvarchar](50) NOT NULL,
[AS400_LISTIN] [int] NOT NULL,
[VALUE] [nvarchar](max) NOT NULL,
[DELIVERY_COSTS] [nvarchar](max) NOT NULL,
[ITEMS_COUNT] [int] NOT NULL,
[ITEMS] [nvarchar](max) NOT NULL,
[KIND] [int] NOT NULL,
[CHECKOUT_INFO] [nvarchar](max) NOT NULL,
[ISSUES] [nvarchar](max) NOT NULL,
[LAST_CHECK] [datetime] NOT NULL,
[USER_ID] [int] NOT NULL,
[IMPERSONATED_USER_ID] [int] NOT NULL,
[OVERRIDE_PRICES] [bit] NOT NULL,
[HAS_ISSUE] [bit] NOT NULL,
[IS_CONFIRMED] [bit] NOT NULL,
[IS_COLLECTED] [bit] NOT NULL,
[_METADATA] [nvarchar](max) NOT NULL,
CONSTRAINT [PK_tbl_set_Cart] PRIMARY KEY CLUSTERED
(
[ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
After investigating the DB profiling more deeply with the help of DBA Stack Exchange users (here is the discussion: https://dba.stackexchange.com/questions/303400/sql-server-how-to-upload-big-json-into-column-performance-issue/303409#303409), it turned out to be an issue probably related to the disk.
Because some production systems hit the same problem as my development machine, I asked how to improve performance and received the beautiful tip of storing a compressed version of my data.
The data (in my scenario) is not so big that in-memory compression/decompression at runtime becomes too slow, and that dramatically reduces the time (LZMA used).
From 2.5s to ~0.3s, a really good improvement.
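For reference, a minimal sketch of the compress-before-store idea. GZipStream from System.IO.Compression is used here for brevity (my actual code uses an LZMA library, but the pattern is the same); the compressed bytes can be Base64-encoded if the column has to stay nvarchar(max):

using System.IO;
using System.IO.Compression;
using System.Text;

// Minimal sketch: compress the cart document before it is assigned to the row,
// decompress after reading it back. GZip is used here for brevity; the real
// code uses an LZMA library, but the pattern is identical.
public static class CartDocumentCompressor
{
    public static byte[] Compress(string document)
    {
        var raw = Encoding.UTF8.GetBytes(document);
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionLevel.Optimal))
                gzip.Write(raw, 0, raw.Length);
            return output.ToArray();
        }
    }

    public static string Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var output = new MemoryStream())
        {
            gzip.CopyTo(output);
            return Encoding.UTF8.GetString(output.ToArray());
        }
    }
}

// Usage (hypothetical): store Convert.ToBase64String(Compress(json)) in the
// nvarchar(max) column, or switch the column to varbinary(max) and store the bytes.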
Thanks to all for the precious help and tips.
Related
I have a stored procedure in SQL Server 2012:
CREATE procedure [dbo].[GetEmailThread]
(
@id int
)
AS
SELECT e.Id, e.Created, e.MailFrom, e.MailTo, e.Body, e.Sended, ip.Ip, e2.Sended as ReplySended, e2.Body as ReplyBody, e2.Id as ReplyId
FROM Emails e
LEFT JOIN IpAddress ip ON (ip.Id = e.IpId)
LEFT JOIN Emails e2 ON (e.Id = e2.ParentEmailId)
WHERE e.RealtyId = @id
ORDER BY e.Sended, ReplySended
Definition of the Emails table:
CREATE TABLE [dbo].[Emails](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Created] [datetime] NOT NULL,
[MailFrom] [nvarchar](100) NOT NULL,
[MailTo] [nvarchar](500) NOT NULL,
[Subject] [nvarchar](500) NULL,
[Body] [nvarchar](max) NULL,
[Sended] [datetime] NULL,
[Ip] [varchar](15) NULL,
[ReplyTo] [nvarchar](100) NULL,
[IpId] [int] NULL,
[RealtyId] [int] NULL,
[ParentEmailId] [int] NULL,
[IsBodyHtml] [bit] NOT NULL CONSTRAINT [DF_Emails_IsBodyHtml] DEFAULT ((0)),
And an Entity Framework model in C#. Whenever I right-click this Entity Framework model in the Visual Studio Model Browser and choose "Update Model from Database", the model gets updated, but the field MailTo in GetEmailThread_Result is always missing.
I need to add it manually to the model GetEmailThread_Result.cs to get it working.
Why? I cannot see anything special about this field. Why not MailFrom?
The solution was to manually edit the EF files in the project, as they were corrupted; for details see Updating Entity Framework Model.
I have two tables in SQL Server 2014, one with ~100M points and one with ~2000 polygons.
Each point intersects with only one of the polygons. The task is to assign the ID of the intersecting polygon to the point.
What is the best practice for doing this?
I have tried it in C#, loading two DataTables and going row by row through the points and, for each point, row by row through the polygons to find matches.
Boolean inside = (Boolean)polygon.STIntersects(point);
This is painfully slow, since I have to access each point separately and each polygon multiple times to check for the intersection. Any ideas are very welcome!
Create table statement for Points
CREATE TABLE [dbo].[ManyPoints](
[idNearByTimeLine] [int] IDENTITY(1,1) NOT NULL,
[msgID] [bigint] NOT NULL,
[userID] [bigint] NULL,
[createdAT] [datetime2](0) NULL,
[WGSLatitudeX] [numeric](9, 6) NULL,
[WGSLongitudeY] [numeric](9, 6) NULL,
[location] [geography] NULL
)
and Polygons
CREATE TABLE [dbo].[ManyPolygons](
[OBJECTID] [int] IDENTITY(1,1) NOT NULL,
[Shape] [geography] NULL,
[ID_0] [int] NULL,
[ISO] [nvarchar](3) NULL,
[NAME_0] [nvarchar](75) NULL,
[ID_1] [int] NULL,
[NAME_1] [nvarchar](75) NULL,
[ID_2] [int] NULL,
[NAME_2] [nvarchar](75) NULL,
[ID_3] [int] NULL,
[NAME_3] [nvarchar](75) NULL,
[NL_NAME_3] [nvarchar](75) NULL,
[VARNAME_3] [nvarchar](100) NULL,
[TYPE_3] [nvarchar](50) NULL,
[ENGTYPE_3] [nvarchar](50) NULL,
[ORIG_FID] [int] NULL,
)
Both tables have a spatial index, on "location" and "Shape" respectively.
Select idnearbytimeline, objectid
From dbo.manypoints as point
Join dbo.manypolygons as polygon
On point.location.STIntersects(polygon.shape) =1
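To actually assign the matching polygon ID to each point, the same join can be wrapped in an UPDATE and run from C# with no client-side timeout. A rough sketch only; the PolygonId column on ManyPoints and the connection string are assumptions (you would add the column first):

using System.Data.SqlClient;

class AssignPolygons
{
    static void Main()
    {
        // Writes the intersecting polygon's OBJECTID onto each point row.
        // Assumes dbo.ManyPoints has been extended with a PolygonId INT column.
        const string sql = @"
            UPDATE p
            SET    p.PolygonId = poly.OBJECTID
            FROM   dbo.ManyPoints   AS p
            JOIN   dbo.ManyPolygons AS poly
                   ON p.location.STIntersects(poly.Shape) = 1;";

        using (var connection = new SqlConnection("Server=.;Database=<<DATABASE>>;Integrated Security=true"))
        using (var command = new SqlCommand(sql, connection))
        {
            command.CommandTimeout = 0;   // no client-side timeout for the long-running update
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}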
I came up with another solution: a stored procedure which selects all points within a given polygon ID. Then I use a simple C# program to loop through all the polygons. However, this is still not optimal and painfully slow. Any tweaks that can be made easily?
USE [<<DATABASE>>]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[test] @ID INT
AS
SET IDENTITY_INSERT [weiboDEV].[dbo].[<<NEW TABLE>>] ON;
-- Select Points in Polygon (Geography)
DECLARE @Shape GEOGRAPHY = (select [Shape] from <<POLYGONS>> where OBJECTID = @ID);
DECLARE @SQLString2 NVARCHAR(500) = N'INSERT INTO <<NEW TABLE>>(<<YOUR COLUMNS>>) SELECT <<YOUR COLUMNS>> FROM <<POINTS>> WHERE ([location]).STWithin(@Shape) = 1;';
DECLARE @ParmDefinition NVARCHAR(500) = N'@ID INT, @Shape geography';
EXECUTE sp_executesql @SQLString2, @ParmDefinition, @ID, @Shape;
GO
I suggest you store the points of a single polygon as a comma-separated string, so each polygon can be covered by a single record.
It may be a simple/specific question, but I really need help with it. I have two tables, Entry and Comment, in a SQL Server database. I want to show the comment count in the entry table, and of course the comment count will increase when a comment is added. The two tables are connected like this:
Comment.EntryId = Entry.Id
Entry table:
CREATE TABLE [dbo].[Entry] (
[Id] INT IDENTITY (1, 1) NOT NULL,
[Subject] NVARCHAR (MAX) NOT NULL,
[Content] NVARCHAR (MAX) NOT NULL,
[Type] NVARCHAR (50) NOT NULL,
[SenderId] NVARCHAR (50) NOT NULL,
[Date] DATE NOT NULL,
[Department] NVARCHAR (50) NULL,
[Faculty] NVARCHAR (50) NULL,
[ViewCount] INT DEFAULT ((0)) NOT NULL,
[SupportCount] INT DEFAULT ((0)) NOT NULL,
[CommentCount] INT DEFAULT ((0)) NOT NULL,
PRIMARY KEY CLUSTERED ([Id] ASC)
);
Comment table:
CREATE TABLE [dbo].[Comment] (
[Id] INT IDENTITY (1, 1) NOT NULL,
[EntryId] INT NOT NULL,
[SenderId] NVARCHAR (50) NOT NULL,
[Date] DATETIME NOT NULL,
[Content] NVARCHAR (MAX) NOT NULL,
[SupportCount] INT NOT NULL,
PRIMARY KEY CLUSTERED ([Id] ASC)
);
I am showing the entries in a GridView in code-behind (C#). The question is: what should I write as a query to do this most efficiently? Thanks for the help.
Try this:
select e.Id,e.date,count(*) as NumComments
from Entry e
join comment c on c.entryId=e.id
group by e.id,e.date
If there might be no comments, try the following:
select e.Id,e.date,count(c.entryId) as NumComments
from Entry e
left join comment c on c.entryId=e.id
group by e.id,e.date
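For the code-behind part, executing that query and binding the result to the GridView could look roughly like this. A sketch only; the "MyDb" connection string name and the EntriesGrid control ID are assumptions:

// Sketch: run the aggregate query and bind the result to the GridView.
// "MyDb" and "EntriesGrid" are placeholder names, not from the original post.
using System.Configuration;
using System.Data;
using System.Data.SqlClient;

public partial class EntriesPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, System.EventArgs e)
    {
        if (IsPostBack) return;

        const string sql = @"
            select e.Id, e.Date, count(c.EntryId) as NumComments
            from Entry e
            left join Comment c on c.EntryId = e.Id
            group by e.Id, e.Date";

        using (var connection = new SqlConnection(
                   ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString))
        using (var adapter = new SqlDataAdapter(sql, connection))
        {
            var table = new DataTable();
            adapter.Fill(table);          // Fill opens and closes the connection itself
            EntriesGrid.DataSource = table;
            EntriesGrid.DataBind();
        }
    }
}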
You can use a left join for that purpose. Kindly be more specific about which fields you want in the GridView.
And why do you want CommentCount in the table? Most tables have that 1-to-many relation and we don't store the count. If you keep it in the table, you have to update the Entry table every time a comment is made.
I have an integer ID value. It is okay and takes values by default like 1, 2, 3... Now I want this value to look like 0001, 0002, 0003. How is this possible? Please help me.
USE [Companybook]
GO
/****** Object: Table [dbo].[Employees] Script Date: 12/05/2013 14:50:14 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Employees](
[EmployeeID] [int] IDENTITY(1,1) NOT NULL,
[LastName] [nchar](10) NULL,
[FirstName] [nchar](10) NULL,
[Country] [nchar](10) NULL,
CONSTRAINT [PK_Employees] PRIMARY KEY CLUSTERED
(
[EmployeeID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
Hi, you can't use 0001 as an identity value in SQL Server; the column only stores plain integers such as 1001, 2001, 3001...
But you can format the ID as you described when you retrieve it.
Please refer to these links:
http://social.msdn.microsoft.com/Forums/en-US/9ae39780-9e95-4e91-bd9f-9f9fc9232084/how-to-make-int-identity-field-showed-like-0001-
http://forums.asp.net/t/1625788.aspx
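In C# you can simply pad the value when you display it, for example (a small sketch, not tied to any particular data-access code):

// Display-only formatting: the EmployeeID column still stores a plain int.
int employeeId = 7;
string displayId = employeeId.ToString("D4");          // "0007"

// e.g. when reading from a DataRow:
// string displayId = ((int)row["EmployeeID"]).ToString("D4");

// Values above 9999 are simply shown with more digits (e.g. 12345 -> "12345").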
Ok guys (and gals), this one has been driving me nuts all night and I'm turning to your collective wisdom for help.
I'm using Fluent Nhibernate and Linq-To-NHibernate as my data access story and I have the following simplified DB structure:
CREATE TABLE [dbo].[Classes](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](100) NOT NULL,
[StartDate] [datetime2](7) NOT NULL,
[EndDate] [datetime2](7) NOT NULL,
CONSTRAINT [PK_Classes] PRIMARY KEY CLUSTERED
(
[Id] ASC
)
CREATE TABLE [dbo].[Sections](
[Id] [bigint] IDENTITY(1,1) NOT NULL,
[ClassId] [bigint] NOT NULL,
[InternalCode] [varchar](10) NOT NULL,
CONSTRAINT [PK_Sections] PRIMARY KEY CLUSTERED
(
[Id] ASC
)
CREATE TABLE [dbo].[SectionStudents](
[SectionId] [bigint] NOT NULL,
[UserId] [uniqueidentifier] NOT NULL,
CONSTRAINT [PK_SectionStudents] PRIMARY KEY CLUSTERED
(
[SectionId] ASC,
[UserId] ASC
)
CREATE TABLE [dbo].[aspnet_Users](
[ApplicationId] [uniqueidentifier] NOT NULL,
[UserId] [uniqueidentifier] NOT NULL,
[UserName] [nvarchar](256) NOT NULL,
[LoweredUserName] [nvarchar](256) NOT NULL,
[MobileAlias] [nvarchar](16) NULL,
[IsAnonymous] [bit] NOT NULL,
[LastActivityDate] [datetime] NOT NULL,
PRIMARY KEY NONCLUSTERED
(
[UserId] ASC
)
I omitted the foreign keys for brevity, but essentially this boils down to:
A Class can have many Sections.
A Section can belong to only 1 Class but can have many Students.
A Student (aspnet_Users) can belong to many Sections.
I've setup the corresponding Model classes and Fluent NHibernate Mapping classes, all that is working fine.
Here's where I'm getting stuck. I need to write a query which will return the sections a student is enrolled in based on the student's UserId and the dates of the class.
Here's what I've tried so far:
1.
var sections = (from s in this.Session.Linq<Sections>()
where s.Class.StartDate <= DateTime.UtcNow
&& s.Class.EndDate > DateTime.UtcNow
&& s.Students.First(f => f.UserId == userId) != null
select s);
2.
var sections = (from s in this.Session.Linq<Sections>()
where s.Class.StartDate <= DateTime.UtcNow
&& s.Class.EndDate > DateTime.UtcNow
&& s.Students.Where(w => w.UserId == userId).FirstOrDefault().Id == userId
select s);
Obviously, 2 above will fail miserably if there are no students matching userId for classes with the current date between their start and end dates... but I just wanted to try.
The filters for the Class StartDate and EndDate work fine, but the many-to-many relation with Students is proving to be difficult. Every time I try running the query I get an ArgumentNullException with the message:
Value cannot be null.
Parameter name: session
I've considered going down the path of making the SectionStudents relation a Model class with a reference to Section and a reference to Student instead of a many-to-many. I'd like to avoid that if I can, and I'm not even sure it would work that way.
Thanks in advance to anyone who can help.
Ryan
For anyone who cares, it looks like the following might work in the future if LINQ to NHibernate gains support for subqueries (or I could be totally off-base and this could be a limitation of the Criteria API, which LINQ to NHibernate uses):
var sections = (from s in session.Linq<Section>()
where s.Class.StartDate <= DateTime.UtcNow
&& s.Class.EndDate > DateTime.UtcNow
&& s.Students.First(f => f.UserId == userId) != null
select s);
However I currently receive the following exception in LINQPad when running this query:
Cannot use subqueries on a criteria
without a projection.
So for the time being I've separated this into 2 operations: first get the Student and the corresponding Sections, then filter those by Class date. Unfortunately, this results in 2 queries to the database, but it should be fine for my purposes.
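Roughly, the two-step version looks like this (a sketch only; the Student entity name and its Sections collection are assumptions based on the mappings described above):

// Sketch of the two-step workaround: load the student (and, lazily, the
// student's sections) first, then filter by the class dates in memory.
// "Student" / "Sections" names are assumptions based on the mappings above.
var student = session.Linq<Student>()
    .Where(u => u.UserId == userId)
    .FirstOrDefault();

var sections = student == null
    ? new List<Section>()
    : student.Sections
        .Where(s => s.Class.StartDate <= DateTime.UtcNow
                 && s.Class.EndDate > DateTime.UtcNow)
        .ToList();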