Greetings,
I am new to SQL Server. I have a table Transaction with the columns Debit, Credit and others, and I want to calculate a running balance, but I can't use a CTE query.
The expected result should look like this:
Debit   Credit  Balance
10,000  0       10,000
0       3,000   7,000
5,000   0       12,000
Previously I did this in MySQL using user variables, as below:
SELECT A.Debit, A.Credit, @b := @b + A.Debit - A.Credit AS balance
FROM (SELECT @b := 0.0) AS dummy
CROSS JOIN FinTrans A
But I am new to SQL Server. How do I do this in SQL Server?
Thanks in advance.
In SQL Server 2012+, you would use the ANSI-standard cumulative sum (window) function:
select ft.*,
sum(debit - credit) over (order by ??) as balance
from FinTrans ft;
SQL tables represent unordered sets. The ?? is for the column that specifies the ordering for your cumulative sum.
In fact, this might typically look like:
select ft.*,
       sum(debit - credit) over (partition by <account id column>
                                 order by <ordering column>
                                ) as balance
from FinTrans ft;
That is, this is how you would do the calculation for different accounts at the same time.
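To see the windowed running sum in action without a SQL Server instance, here is a minimal sketch using Python's built-in sqlite3 module (SQLite supports `SUM(...) OVER` since 3.25). The table and column names mirror the question; the `id` and `account_id` columns are assumptions standing in for the ordering and account columns.

```python
import sqlite3

# Miniature stand-in for the FinTrans table from the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE FinTrans (id INTEGER PRIMARY KEY, account_id INTEGER,
                           debit REAL, credit REAL);
    INSERT INTO FinTrans (account_id, debit, credit) VALUES
        (1, 10000, 0), (1, 0, 3000), (1, 5000, 0);
""")

# Running balance per account, ordered by the id column.
rows = conn.execute("""
    SELECT debit, credit,
           SUM(debit - credit) OVER (PARTITION BY account_id
                                     ORDER BY id) AS balance
    FROM FinTrans
""").fetchall()

for debit, credit, balance in rows:
    print(debit, credit, balance)
```

This reproduces the Debit/Credit/Balance table from the question: 10,000 then 7,000 then 12,000.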
I'm trying to show a quote of the day in my WinForms app. I have a total of 100 records. I created the table like this:
CREATE TABLE quote (
    quote_id numeric identity primary key,
    quote_quote varchar(500) not null,
    quote_from varchar(100) not null
)
Now I get the display I wanted in my WinForm, but I think it would be nicer if a new quote replaced the previous one each day, rather than a different random quote on every run. I currently query it like this:
SELECT TOP 1 quote_quote, quote_from FROM quote ORDER BY NEWID()
How can I make that query change its result only once a day? Or is there a better approach?
There are a few ways to do this. One option is the following, which will cycle through a new quote each day:
SELECT quote_quote, quote_from FROM QUOTE
ORDER BY QUOTE_ID
OFFSET (SELECT CAST(GETDATE() AS INT) % COUNT(*) FROM QUOTE) ROWS
FETCH NEXT 1 ROW ONLY;
The subquery SELECT CAST(GETDATE() AS INT) % COUNT(*) FROM QUOTE casts the current date to an integer (days since 1900-01-01) and takes it modulo the number of rows in the quote table. The result is a number between 0 and N-1, where N is the number of rows. The outer query skips that many rows and fetches just one; the effect is to cycle through the quotes one day at a time.
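The day-indexing idea can be sketched outside SQL Server too; here is a minimal Python/sqlite3 version (table shape as in the question, the sample quotes are made up) that picks a row by day number modulo the row count:

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE quote (quote_id INTEGER PRIMARY KEY,
                        quote_quote TEXT, quote_from TEXT);
    INSERT INTO quote (quote_quote, quote_from) VALUES
        ('Stay hungry.', 'A'), ('Less is more.', 'B'), ('Ship it.', 'C');
""")

def quote_of_the_day(conn, today):
    # Days since an arbitrary epoch, modulo the row count, picks a
    # stable row per calendar day and cycles through all quotes.
    n = conn.execute("SELECT COUNT(*) FROM quote").fetchone()[0]
    offset = today.toordinal() % n
    return conn.execute(
        "SELECT quote_quote, quote_from FROM quote "
        "ORDER BY quote_id LIMIT 1 OFFSET ?", (offset,)).fetchone()

# Consecutive days yield different quotes; after n days it wraps around.
q1 = quote_of_the_day(conn, date(2024, 1, 1))
q2 = quote_of_the_day(conn, date(2024, 1, 2))
```

The result is deterministic within a day, so every run on the same date shows the same quote.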
I have a table with data like:
ID Amount Status
1 15.00 Paid
2 3.00 Paid
3 10.00 Awaiting
4 12.00 Awaiting
The system looks at this table, which records payments, to see whether a customer has paid enough for a subscription. Once a day I need to check whether the customer has met this requirement.
The real solution looks nothing like the table above, as it is much more complex, but the issue remains the same and can be reduced to this:
I need to find a way to add the amounts up, but once the running total goes over 20, change the data in the table as follows:
ID Amount Status
1 15.00 Paid
2 3.00 Paid
3 2.00 Paid <= Customer has reached payment level
4 12.00 Cancelled <= Subsequent payment is cancelled
5 8.00 BForward <= Extra money is brought forward
Currently I am using a cursor, but performance is bad, as expected.
Does anyone know of a better way?
This generates the desired results. I'm not sure why you would want to update the original data, though (assuming this is transactional data):
Declare @Table table (ID int, Amount money, Status varchar(50))
Insert into @Table values
    (1, 15.00, 'Paid'),
    (2, 3.00, 'Paid'),
    (3, 10.00, 'Awaiting'),
    (4, 12.00, 'Awaiting')

;with cteBase as (
    Select *
          ,SumTotal = sum(Amount) over (Order By ID)
    From @Table
), cteExtended as (
    Select *
          ,Forward   = IIF(SumTotal > 20 and SumTotal - Amount < 20, SumTotal - 20, 0)
          ,Cancelled = IIF(SumTotal > 20 and SumTotal - Amount > 20, Amount, 0)
    From cteBase
)
Select ID, Amount, Status = 'Paid' from cteExtended Where Forward + Cancelled = 0
Union All
Select ID, Amount = Amount - Forward, Status = 'Paid' from cteExtended Where Forward > 0
Union All
Select ID, Amount, Status = 'Cancelled' from cteExtended Where Cancelled > 0
Union All
Select ID = (Select Count(*) from cteBase) + Row_Number() over (Order by ID), Amount = Forward, Status = 'BForward' from cteExtended Where Forward > 0
Order By ID
Returns
ID Amount Status
1 15.00 Paid
2 3.00 Paid
3 2.00 Paid
4 12.00 Cancelled
5 8.00 BForward
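For clarity, the same allocation can be expressed procedurally. This plain-Python sketch (the 20.00 threshold and the payment rows are taken from the example above; the synthetic ID for the brought-forward row mirrors the Count(*)+Row_Number() trick) walks the payments in ID order and produces the same result set:

```python
def allocate(payments, target=20.0):
    """payments: list of (id, amount) in ID order.
    Returns (id, amount, status) rows, including the brought-forward row."""
    rows, running = [], 0.0
    next_id = len(payments) + 1          # synthetic ID for the BForward row
    for pid, amount in payments:
        if running >= target:            # target already met: cancel payment
            rows.append((pid, amount, "Cancelled"))
        elif running + amount > target:  # this payment crosses the target
            used = target - running      # part that counts towards the target
            rows.append((pid, used, "Paid"))
            rows.append((next_id, amount - used, "BForward"))
            next_id += 1
            running += amount
        else:                            # still fully below the target
            rows.append((pid, amount, "Paid"))
            running += amount
    return sorted(rows, key=lambda r: r[0])

result = allocate([(1, 15.0), (2, 3.0), (3, 10.0), (4, 12.0)])
```

Like the CTE version, it reports payments rather than updating them, which sidesteps the question of mutating transactional data.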
I have a table with these columns:
DrugID, BatchNo, ManufactureDate, ExpireDate, Quantity.
Note: (DrugID, BatchNo) constitutes the primary key.
For example: there are 2 records as follow:
(101, 1234, 1-7-2014, 1-7-2016, 50)
(101, 7654, 1-7-2015, 1-7-2017, 80)
If, for example, a customer wants 80 items of the drug with DrugID = 101, how could I update the table so that the first record is exhausted and the second one remains with its quantity reduced to 50?
Any help, please?
The approach to solving this is to calculate the cumulative quantity of each drug. Then compare the cumulative amount to the desired amount to determine the rows that need updating.
The rest is a bit of arithmetic. In SQL Server 2012+, the code looks like:
declare @val int = 80;

with t as (
      select t.*,
             sum(qty) over (partition by drugid order by batchno) as cumeqty
      from t
     )
update t
    set qty = (case when cumeqty - @val < 0 then 0
                    else cumeqty - @val
               end)
    where (cumeqty - @val) < qty;
In earlier versions, you have to work a bit harder to get the cumulative quantity, but the idea is the same.
Here is an example of the code (or something very similar) working in SQL Fiddle.
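As a sanity check of the arithmetic, here is a plain-Python sketch of the same cumulative-quantity idea (batch numbers and quantities taken from the example; consuming batches in BatchNo order is an assumption, expiry date would work just as well):

```python
def take_stock(batches, wanted):
    """batches: list of (batch_no, qty), in the order they should be consumed.
    Returns the remaining (batch_no, qty) after taking `wanted` items."""
    remaining, cume = [], 0
    for batch_no, qty in batches:
        cume += qty                       # cumulative quantity so far
        left = max(cume - wanted, 0)      # stock left once this batch is tapped
        remaining.append((batch_no, min(left, qty)))
    return remaining

stock = take_stock([(1234, 50), (7654, 80)], wanted=80)
```

The first batch drops to zero and the second keeps 50 of its 80 items, matching what the UPDATE above does in place.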
I have a database with a table that has around 2,500,000 records. I am fetching around 150,000 records with two different queries to test. The first one returns results in between 30 seconds and 1 minute, but the second one takes between 3 and 4 minutes, which is very strange. The only difference is that the first one doesn't use a parameter and the second one does. I am running both from C#. For security reasons I want to use the parameterised one, but I can't understand why it takes so much time. Any help will be appreciated.
First query:
DECLARE @page INT = 3
DECLARE @pagesize INT = 300

string sql = "SELECT Col1,Col2,Col3 FROM
    (SELECT ROW_NUMBER() OVER(ORDER BY Col1) AS rownumber,Col1,Col2,Col3";
sql += " FROM my_table WHERE Col1 LIKE '" + letter + "%') as somex
    WHERE rownumber >= (@page-1)*(@pagesize)";
sql += " AND rownumber <= (@page)*@pagesize;
    SELECT COUNT(*) FROM my_table WHERE col1 LIKE '" + letter + "%'";
Second query:
DECLARE @page INT = 3
DECLARE @pagesize INT = 300
DECLARE @letter VARCHAR(10) = 'be'

string sql = "SELECT Col1,Col2,Col3 FROM
    (SELECT ROW_NUMBER() OVER(ORDER BY Col1) AS rownumber,Col1,Col2,Col3";
sql += " FROM my_table WHERE Col1 LIKE @letter+'%') as somex
    WHERE rownumber >= (@page-1)*(@pagesize)";
sql += " AND rownumber <= (@page)*@pagesize; SELECT COUNT(*)
    FROM my_table WHERE col1 LIKE @letter+'%'";
My server has 16 GB of RAM, 4 physical and 4 virtual CPUs, and SATA disks.
Edit: Col1 has both a clustered and a non-clustered index.
Progress: it turns out that these queries work well on another server, but that confuses me even more. Could it be some SQL Server setting?
As I said in a comment, it sounds like parameter sniffing, but in the interest of being helpful I thought I'd expand on that. There are a number of articles on the web that go into a lot more detail than I will, but the long and the short of parameter sniffing is that SQL Server has cached an execution plan, based on an earlier value of the parameter, that does not yield the best execution plan for the current value.

Suppose Col1 has a nonclustered index that does not include Col2 or Col3 as non-key columns. SQL Server then has two options: it can do a clustered index scan on my_table to get all the rows where Col1 LIKE @letter+'%', or it can seek the index on Col1 and then do a bookmark lookup on the clustered index to get the values for each row returned by the index. I can't remember off the top of my head at what estimated row count SQL Server switches between the two, but it is quite a low percentage, so I am fairly sure that if you are returning 150,000 records out of 2,500,000 the optimiser will go for a clustered index scan. However, if you were only returning a few hundred rows, a bookmark lookup would be preferable.

When you don't use parameters, SQL Server creates a new execution plan each time the query is executed, producing the best plan for that value (assuming your statistics are up to date). When you do use a parameter, the first time the query is run SQL Server creates a plan based on that particular parameter value and stores it for later use. Each subsequent time the query is run, SQL Server recognises that it is the same query and doesn't recompile it. This means that if the first run used a parameter value that returned a low number of rows, the bookmark lookup plan will be stored; if the query is then run with a value that returns a high number of rows, for which the optimal plan is a clustered index scan, it is still executed with the suboptimal bookmark lookup plan, resulting in a longer execution time. The reverse can of course also be true.

There are a number of ways to get around parameter sniffing, but since your query is not very complex the compile time will not be significant, especially in comparison to the 30 seconds you say this query takes even at its best, so I would use the OPTION (RECOMPILE) query hint:
SELECT Col1, Col2, Col3
FROM ( SELECT ROW_NUMBER() OVER(ORDER BY Col1) AS rownumber, Col1, Col2, Col3
       FROM my_table
       WHERE Col1 LIKE @letter+'%'
     ) as somex
WHERE rownumber >= (@page-1) * (@pagesize)
AND rownumber <= (@page) * @pagesize
OPTION (RECOMPILE);

SELECT COUNT(*)
FROM my_table
WHERE Col1 LIKE @letter+'%'
OPTION (RECOMPILE);
The reason this executed fine when you tried it on another server is that the first time it ran there, the parameterised query had to be compiled, and the plan generated suited the parameter value provided.
One final point: if you are using SQL Server 2012, you can use OFFSET/FETCH to do your paging:
SELECT Col1, Col2, Col3
FROM my_table
WHERE Col1 LIKE @letter+'%'
ORDER BY Col1
OFFSET (@page-1) * (@pagesize) ROWS FETCH NEXT @pagesize ROWS ONLY;
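The OFFSET/FETCH pattern maps directly onto LIMIT/OFFSET in other engines. Here is a small Python/sqlite3 sketch of the same paging (the table name, column and the 'be' prefix come from the question; the sample data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (Col1 TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO my_table VALUES (?)",
                 [("be%04d" % i,) for i in range(1000)])

def fetch_page(conn, letter, page, pagesize):
    # LIMIT/OFFSET plays the role of OFFSET ... ROWS FETCH NEXT ... ROWS ONLY,
    # and the prefix match is parameterised, as in the question.
    return conn.execute(
        "SELECT Col1 FROM my_table WHERE Col1 LIKE ? || '%' "
        "ORDER BY Col1 LIMIT ? OFFSET ?",
        (letter, pagesize, (page - 1) * pagesize)).fetchall()

page3 = fetch_page(conn, "be", page=3, pagesize=300)
```

Note that a stable ORDER BY is what makes the page boundaries deterministic, in SQLite and SQL Server alike.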
I'm using an MS Access .mdb database in my C# application. The database contains email messages (one row - one message).
I need to get a specified number of messages which are older than a specified datetime. Let's say, 30 messages before 2012-02-01 12:00:00. I've tried different queries but all of them give me errors. I have tried TOP, LIMIT and other clauses too:
"SELECT * FROM ( SELECT * FROM Mails WHERE (timeReceived < ?) ) LIMIT 0,30";
"SELECT * FROM Mails WHERE (timeReceived = ?) ORDER BY timeReceived DESC LIMIT ?";
etc.
Any hints appreciated.
You say you've tried the TOP clause, but it should work:
SELECT TOP 30 * FROM Mails WHERE timeReceived < '2012-02-01 12:00:00' ORDER BY timeReceived DESC
You must take this into account.
The top directive doesn't return the top n items, as one is easily led
to believe. Instead it returns at least n distinct items determined by
the ordering of the result.
Edit to clarify:
SELECT TOP 25
FirstName, LastName
FROM Students
WHERE GraduationYear = 2003
ORDER BY GradePointAverage DESC;
http://office.microsoft.com/en-us/access-help/results.aspx?qu=top&ex=1&origin=HA010256402
The TOP predicate does not choose between equal values. In the
preceding example, if the twenty-fifth and twenty-sixth highest grade
point averages are the same, the query will return 26 records.
So, no, rows with the same timestamp are not skipped. But if the 30th and 31st records (according to the ORDER BY clause) have the same timestamp, both will be returned and you get 31 records.
If you want to force 30 records to be returned, you need to include the primary key into the Order By to differentiate between tied values:
SELECT TOP 30 *
FROM Mails
WHERE timeReceived < '2012-02-01 12:00:00'
ORDER BY timeReceived DESC, MailID ASC
You can try this SQL out:
SELECT top 30 * FROM Mails WHERE timeReceived < #2012-02-01#
This should work (unverified):
SELECT top 30 *
FROM Mails
WHERE timeReceived < '2012-02-01 12:00:00'
ORDER BY timeReceived desc