I have 3 columns in my database. (1) Buy/Sell (2) ID (3) Date and time. For example:
buySel ID Date
1 234 12/12/2014
1 234 12/12/2014
2 234 12/12/2014
In buySell the number 1 represents a buy and 2 represents a sell. If the same ID, e.g. '234', is both bought and sold within the same day, this should return an error message.
This is what I have done in C#
string connectionString = "connection string goes here";
string Query = "SELECT COUNT(*) AS sum from databaseTable WHERE created_time >= DATEADD(hour, 9, CONVERT(DATETIME, CONVERT(DATE, GETDATE())))";
........
SqlDataReader myReader;
try
{
    con.Open();
    myReader = cmdg.ExecuteReader();
    while (myReader.Read())
    {
        if (myReader[0].ToString() != "0")
        {
            MessageBox.Show("Error " + myReader[0].ToString());
        }
    }
}
catch (Exception e)
{
    MessageBox.Show(e.Message);
}
I managed to compare it with today's date; however, how do I also compare against the buySell column and the ID column?
I'm not sure exactly what you want to return. The following will identify all the errors in your data, based on having a buy and sell in the same day:
select id, date
from databaseTable t
group by id, date
having sum(case when buysel = 1 then 1 else 0 end) > 0 and
sum(case when buysel = 2 then 1 else 0 end) > 0;
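To tie this back to the C# in the question, here is a minimal sketch (not part of the original answer) of running that check and raising the error message; the table and column names (databaseTable, buySel, ID, Date) are the ones assumed from the question, and the Date column is cast in case it also stores a time part.
string checkQuery = @"
    SELECT ID
    FROM databaseTable
    WHERE CAST([Date] AS DATE) = CAST(GETDATE() AS DATE)
    GROUP BY ID
    HAVING SUM(CASE WHEN buySel = 1 THEN 1 ELSE 0 END) > 0
       AND SUM(CASE WHEN buySel = 2 THEN 1 ELSE 0 END) > 0";

using (var con = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(checkQuery, con))
{
    con.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // Every row returned is an ID that was both bought (1) and sold (2) today.
        while (reader.Read())
        {
            MessageBox.Show("Error: ID " + reader["ID"] + " was both bought and sold today.");
        }
    }
}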
I like @GordonLinoff's answer, but haven't compared it performance-wise to what you would get from using EXISTS with correlated subqueries.
create table databaseTable (buySel TINYINT, ID INT, [Date] DATE)
insert into databaseTable values
(1,234,'12/12/2014'),
(1,234,'12/12/2014'),
(2,234,'12/12/2014')
select id,
       [Date]
from databaseTable a
where exists (select 1 from databaseTable b
              where b.id = a.id
                and b.[Date] = a.[Date]
                and b.buySel = 1)
  and exists (select 1 from databaseTable b
              where b.id = a.id
                and b.[Date] = a.[Date]
                and b.buySel = 2)
group by id,
         [Date]
In this query the group by serves only as a more efficient DISTINCT.
EDIT:
Since the above statement has been questioned, I figure I should examine it more closely. There is a lot of discussion here and on the web at large. I think the sum of the guidance is that GROUP BY is often more efficient than DISTINCT, but not always, and DISTINCT is the more intuitive syntax.
Huge performance difference when using group by vs distinct
When the performance of Distinct and Group By are different?
http://msmvps.com/blogs/robfarley/archive/2007/03/24/group-by-v-distinct-group-by-wins.aspx
Related
I have a table with five columns: description, opening balance, sale, sale return, receipt.
I want to merge opening balance and sale as "Debit", and sale return and receipt as "Credit".
How do I calculate a running total in a column named "Balance", where debit amounts are added and credit amounts are subtracted?
My attempt is
SELECT Description, (InvoiceAmount + OpeningBalance) as 'Dabit', (DrAmount + SaleReturn + BadDebtAmount) as 'credit', SUM (sale+ OpeningBalance-SaleReturn-recipt) over (ORDER BY id) AS RunningAgeTotal FROM tablename
You seem to be describing coalesce() and a window function:
select description,
coalesce(opening, sale) as debit,
coalesce(return, receipt) as credit,
sum(coalesce(opening, sale, 0) - coalesce(return, receipt, 0)) over (order by (case description when 'opening balance' then 1 when 'sale' then 2 when 'sale return' then 3 else 4 end)) as balance
from t
order by (case description when 'opening balance' then 1 when 'sale' then 2 when 'sale return' then 3 else 4 end);
At the expense of creating a temporary list, a LINQ version would be as follows.
Assuming your original source is a SQL database, you first need to bring the data into memory, e.g.
var records = OrderDetails
.OrderBy(a=>a.Date)
.Select(a => new
{
a.Description,
Debit = a.OpeningBalance + a.Sale,
Credit = a.SaleReturn + a.Receipt
}
)
.ToList();
Note the query needs to be sorted to ensure the rows come back in date order. You haven't mentioned any other fields, so I have just assumed there is a field called Date that can be used.
Once you have the data in memory, you can then add the Balance column, ie
decimal balance = 0;
var list = records.Select(a => new
{
a.Description,
a.Debit,
a.Credit,
Balance = balance += (a.Debit - a.Credit),
}).ToList();
Because you are introducing a local variable and initialising it outside the Linq statement, it is important that the query is not enumerated twice unless balance has been reset to zero. You can avoid this by using .ToList(); or .ToArray();
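To illustrate that double-enumeration point with a tiny, made-up in-memory example (not from the original data): the lazy query keeps mutating balance on every pass, while the materialised list is safe to re-read.
decimal balance = 0;
var source = new[]
{
    new { Description = "sale",    Debit = 100m, Credit = 0m },
    new { Description = "receipt", Debit = 0m,   Credit = 40m }
};

// Lazy query: the selector runs (and mutates 'balance') every time it is enumerated.
var lazy = source.Select(a => new { a.Description, Balance = balance += a.Debit - a.Credit });

// Materialise once: 'balance' ends at 60 and re-reading the list never touches it again.
var safe = lazy.ToList();
Console.WriteLine(safe.Last().Balance);  // 60
Console.WriteLine(safe.Last().Balance);  // still 60

// Enumerating 'lazy' again here would keep adding to 'balance' and produce wrong totals,
// which is why the answer materialises with ToList() (or resets balance to zero first).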
How would I go about doing this? Say I have a bunch of records with a terminal id of 4741, and I only want the very last record of that day. I am using SQL Server 2008 R2.
SELECT TOP 1000
[id]
,[data_atualizacao]
,[direcao]
,[velocidade]
,[latitude]
,[longitude]
,[nivel_bateria]
,[enum_status_gps]
,[id_terminal]
FROM
[TecnologiaGPS_V2].[dbo].[posicao_historico_terminal_82015]
WHERE
id_terminal = 4741
ORDER BY
data_atualizacao ASC
But say the above query returns the following; I only want the last one, the 19:57:58 row. I thought ORDER BY would have been enough, but it just brings them back in ascending order.
id_terminal 19:57:05
id_terminal 19:57:15
id_terminal 19:57:58
This is how I am getting my data back in C#:
using (SqlCommand cmd = new SqlCommand("SELECT * FROM motorista WHERE id = " + driverId + " ORDER BY data_atualizacao", connection))
{
    connection.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // Check if the reader has any rows at all before starting to read.
        if (reader.HasRows)
        {
            // Read advances to the next row.
            while (reader.Read())
            {
                motorista motorist = new motorista();
                // To avoid unexpected bugs, access columns by name.
                motorist.id = reader.GetInt32(reader.GetOrdinal("id"));
                motorist.nome = reader.GetString(reader.GetOrdinal("nome"));
                motorist.numero_registro = reader.GetString(reader.GetOrdinal("numero_registro"));
                motoristList.Add(motorist);
            }
            return motoristList.ToList();
        }
    }
}
So my question is: how do I get the most recent timestamp, and is my code sufficient for paging through a record set of six million entries, given that I will always have the terminal id to pass to the query?
Select top 1 *
from [TecnologiaGPS_V2].[dbo].[posicao_historico_terminal_82015]
where id_terminal = 4741
order by data_atualizacao desc
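If you run this from C#, a parameterised command is worth considering; the sketch below assumes the connection variable and the table/column names from the question, and avoids concatenating the id into the SQL string.
string sql = @"
    SELECT TOP 1 *
    FROM [TecnologiaGPS_V2].[dbo].[posicao_historico_terminal_82015]
    WHERE id_terminal = @idTerminal
    ORDER BY data_atualizacao DESC";

using (SqlCommand cmd = new SqlCommand(sql, connection))
{
    cmd.Parameters.Add("@idTerminal", SqlDbType.Int).Value = 4741;
    connection.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        if (reader.Read())
        {
            // The single most recent row for this terminal.
            DateTime latest = reader.GetDateTime(reader.GetOrdinal("data_atualizacao"));
        }
    }
}
Provided there is an index on (id_terminal, data_atualizacao), this should read only a handful of rows even against six million entries.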
select
[id], [data_atualizacao], [direcao], [velocidade],
[latitude], [longitude], [nivel_bateria],
[enum_status_gps],[id_terminal]
from
(select
[id],
row_number() over(partition by id_terminal order by data_atualizacao desc) rn,
[data_atualizacao], [direcao], [velocidade],
[latitude], [longitude], [nivel_bateria],
[enum_status_gps], [id_terminal]
from
[TecnologiaGPS_V2].[dbo].[posicao_historico_terminal_82015]
where
id_terminal = 4741) t
where
t.rn = 1
You can use a row_number window function to get the latest record for each terminal id.
Rather than sorting the data, you could also use an aggregate.
SELECT *
FROM
    [TecnologiaGPS_V2].[dbo].[posicao_historico_terminal_82015] AS t
WHERE
    t.id_terminal = 4741
    AND t.data_atualizacao = (SELECT MAX(data_atualizacao)
                              FROM [TecnologiaGPS_V2].[dbo].[posicao_historico_terminal_82015] AS subquery
                              WHERE subquery.id_terminal = t.id_terminal)
Doing an ORDER BY can cause extra work for the database server; sorts can be extremely expensive depending on the data set size. While you can also rely on an index to keep it sorted for you, you will pay for that in index maintenance. SQL Server is good at aggregations, so this may perform better overall. Also, based on your table definition, you are guaranteed one row.
select top 1 * from tablename where [condition] order by [column] desc;
To check, run each query one by one:
SELECT * FROM Orders where EmployeeID = 4 ;
SELECT top 1 * FROM Orders where EmployeeID = 4 order by OrderDate desc;
in
http://www.w3schools.com/sql/trysql.asp?filename=trysql_func_first&ss=-1
I have this table (table name is price):
value productId ShopID Timestamp
1.30 1 5 2015-05-30 05:20:28.000
1.20 1 5 2015-05-29 16:09:34.000
1.00 1 5 2015-05-29 16:09:43.000
1.20 1 5 2015-05-29 16:09:50.000
1.20 1 5 2015-05-29 16:09:58.000
1.00 2 5 2015-05-29 16:10:13.000
1.00 2 5 2015-05-29 16:10:17.000
1.00 1 6 2015-05-29 16:10:42.000
1.00 1 5 2015-05-30 15:02:44.000
1.30 1 5 2015-05-30 15:03:24.000
I want to get the value that has been entered the most times on the latest date, for each shop, for a single product. Note that I want to convert the timestamp to a date.
Right now I have this SQL query:
select count(*), value, Productid, shopid
from price
where shopid = 5
and productId = 1
and cast(floor(cast(Timestamp as float)) as datetime)
in (select max(cast(floor(cast(Timestamp as float)) as datetime)) as [date] from price)
group by value, Productid, shopid
order by count(*) desc
It returns an ordered count for a single product and a single shop, like this:
Count value Productid shopid
2 1.30 1 5
1 1.00 1 5
In the ideal case I would like to get only the row with the biggest count, but I guess this would be OK as well.
Up to this point I've been using QueryOver queries in my solution, but I guess anything that can be used in C# would be OK.
TIA
Update
Following the suggestions in the comments, I am trying to use native SQL. I've got this code to get the distinct shop ids, and this is working fine.
var shopIds =
_session.CreateSQLQuery("select distinct shopid from price where productid = " + productId).List();
Then I am trying to execute the main query for each shop, after which I would like to find a single instance of the most frequent price, like this:
List<Price> query;
foreach (var shopId in shopIds)
{
var querySubResult = _session.CreateSQLQuery("select value, productId, shopId " +
    "from price " +
    "where shopid = " + shopId + " " +
    "and productId = " + productId + " " +
    "and cast(floor(cast(Timestamp as float)) as datetime) " +
    "in (select max(cast(floor(cast(Timestamp as float)) as datetime)) as [date] from price) " +
    "group by value, Productid, shopid " +
    "order by count(*) desc")
.AddScalar("value", NHibernateUtil.Decimal)
.AddScalar("productId", NHibernateUtil.Int64)
.AddScalar("shopId", NHibernateUtil.Int64).List();
query.Add(_session.QueryOver<Price>()
.Where(x => x.Value == querySubResult[1].value)
.And(x => x.Product.ProductId == querySubResult[1].productId)
.And(x => x.Shop.ShopId == querySubResult[1].shopId));
}
As I understand it, AddScalar defines the output columns for the data returned by the SQL statement, but I can't access any of the variables in the list.
What is wrong? Should it be approached in a different way? Any help would be appreciated.
This is documented in NHibernate Reference, Scalar Queries:
This will return an IList of Object arrays (object[]) with scalar
values for each column in the (CATS) table.
Example:
var result = _session.CreateSQLQuery("...")
.AddScalar(...).AddScalar(...)
.List();
var row0property0 = ((object[])result[0])[0];
var row0property1 = ((object[])result[0])[1];
I think you could also do ...List<object[]> to avoid having to cast to object[] when accessing property values.
The greatest use of raw SQL queries in NHibernate is when you tell NHibernate to actually build entity instances from the returned data.
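For example, a minimal sketch of that entity-returning style, assuming Price is a mapped entity and the parameter names are purely illustrative:
var prices = _session
    .CreateSQLQuery("select * from price where shopId = :shopId and productId = :productId")
    .AddEntity(typeof(Price))           // hydrate mapped Price entities from the raw SQL
    .SetParameter("shopId", shopId)
    .SetParameter("productId", productId)
    .List<Price>();
NHibernate then builds Price instances directly, so there is no need to cast object[] elements or re-query with QueryOver afterwards.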
I am selecting records between two dates; when doing this I am getting repeated records. I have used the keyword DISTINCT but it's not working. This is how my query looks:
public List<i> searchTwoDates(string firstDate, string secondDate)
{
DateTime fdate = Convert.ToDateTime(firstDate);
string realfirstDate = fdate.ToLongDateString();
DateTime sdate = Convert.ToDateTime(secondDate);
string realsecondDate = sdate.ToLongDateString();
List<i> date = new List<i>();
SqlConnection conn = new SqlConnection(....);
SqlCommand command = new SqlCommand(@"SELECT DISTINCT * FROM TableName WHERE CAST(columnName AS DATE) > @columnName AND CAST(columnName AS DATE) < @columnName1 ORDER BY columnName1 DESC", conn);
command.Parameters.AddWithValue("@columnName", realfirstDate);
command.Parameters.AddWithValue("@columnName1", realsecondDate);
conn.Open();
SqlDataReader reader = command.ExecuteReader();
while (reader.Read())
{
Mod d = new Mod();
here i get my column names....
date.Add(d);
}
conn.Close();
return date;
}
I also have a unique ID in my database, so we could use that to retrieve unique records, but how would I write that?
Currently I am getting repeated records:
ID FName sName Date
1 John JAck 2013-9-07
2 Linda Bush 2013-10-07
3 Linda Bush 2013-11-07
This is what i want
ID FName sName Date
1 John JAck 2013-9-07
2 Linda Bush 2013-11-07
These are the records between 2013-9-07 and 2013-11-07. Within this range I don't want any repeated entries for the same person.
[migrated from comments]
You should use select distinct id, fname, sname from table. If you don't need the date, then this will work, no repetitions.
Try this
Select Max(ID),FName,sName,Max(Date)
FROM Table1
Where Date > 'SomeDate' And Date < 'SomeDate'
Group By FName,sName
Don't try using BETWEEN with DISTINCT like this:
SELECT DISTINCT ID,FName,sName,Date
FROM Table1
WHERE Date BETWEEN 'Date1' And 'Date2'
UPDATE
Because of the two different dates and IDs in the question above you should use grouping. This gives you:
SELECT MIN(ID), FName, sName, MIN(Date)
FROM Table1
WHERE Date BETWEEN 'Date1' AND 'Date2'
GROUP BY FName, sName
ORDER BY MIN(ID)
DISTINCT will not work as there are different IDs and dates for each record.
The above query will give you the first occurring ID. You can change the ORDER BY to Date to get the earliest, or, if you want the most recent, use MAX instead of MIN and ORDER BY ... DESC.
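If it helps, here is a hedged C# sketch of running that grouped query with typed date parameters instead of the ToLongDateString() strings from the question (Table1 and the column names are the placeholders used above; fdate and sdate are the DateTime values already parsed in your method):
string sql = @"
    SELECT MIN(ID) AS ID, FName, sName, MIN([Date]) AS [Date]
    FROM Table1
    WHERE [Date] BETWEEN @firstDate AND @secondDate
    GROUP BY FName, sName
    ORDER BY MIN(ID)";

using (var conn = new SqlConnection("...your connection string..."))
using (var command = new SqlCommand(sql, conn))
{
    command.Parameters.Add("@firstDate", SqlDbType.Date).Value = fdate;
    command.Parameters.Add("@secondDate", SqlDbType.Date).Value = sdate;
    conn.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // one row per person, with the smallest ID and earliest date in the range
        }
    }
}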
getName_as_Rows is an array which contains some names.
I want to set an int value to 1 if a record is found in the database.
for (int i = 0; i < 100; i++)
{
    using (var command = new SqlCommand("select some column from some table where column = @Value", con1))
    {
        command.Parameters.AddWithValue("@Value", getName_as_Rows[i]);
        con1.Open();
        command.ExecuteNonQuery();
    }
}
I am looking for:
bool recordExist;
If the above record exists, the bool should be 1 (true), else 0 (false), within the loop.
I have to do some other stuff if the record exists.
To avoid making N queries to the database, something that can be very expensive in terms of processing, network and so forth, I suggest you join only once, using a trick I learned. First you need a function in your database that splits a string into a table.
CREATE FUNCTION [DelimitedSplit8K]
--===== Define I/O parameters
(@pString VARCHAR(8000), @pDelimiter CHAR(1))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
--===== "Inline" CTE Driven "Tally Table" produces values from 0 up to 10,000...
-- enough to cover VARCHAR(8000)
WITH E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), --10E+1 or 10 rows
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS (--==== This provides the "zero base" and limits the number of rows right up front
-- for both a performance gain and prevention of accidental "overruns"
SELECT 0 UNION ALL
SELECT TOP (DATALENGTH(ISNULL(@pString,1))) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
),
cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
SELECT t.N+1
FROM cteTally t
WHERE (SUBSTRING(@pString,t.N,1) = @pDelimiter OR t.N = 0)
)
--===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY s.N1),
Item = SUBSTRING(@pString,s.N1,ISNULL(NULLIF(CHARINDEX(@pDelimiter,@pString,s.N1),0)-s.N1,8000))
FROM cteStart s
GO
Second, concatenate your 100 variables into 1 string:
"Value1", "Value 2", "Value 3"....
In SQL Server you can then just join the values with your table:
SELECT somecolumn FROM sometable t
INNER JOIN [DelimitedSplit8K](@DelimitedString, ',') v ON v.Item = t.somecolumn
So you find 100 strings at a time with only 1 query.
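On the C# side, a rough sketch of building that delimited string from getName_as_Rows and sending it as one parameter (sometable/somecolumn are the placeholders used above, con1 is the question's connection, and note the 8000-character limit of the split function):
// One round trip instead of 100: join the names with the delimiter the split function expects.
string delimited = string.Join(",", getName_as_Rows);

string sql = @"
    SELECT t.somecolumn
    FROM sometable t
    INNER JOIN dbo.DelimitedSplit8K(@names, ',') v ON v.Item = t.somecolumn";

using (var command = new SqlCommand(sql, con1))
{
    command.Parameters.Add("@names", SqlDbType.VarChar, 8000).Value = delimited;
    con1.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // every row returned here is a name that exists in the table
        }
    }
}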
Use var result = command.ExecuteScalar() and check if result != null
But a better option than looping would be to use a single SELECT statement like
SELECT COUNT(*) FROM TABLE WHERE COLUMNVAL >= 0 AND COLUMNVAL < 100
and run ExecuteScalar on that; if the value is > 0, then set your variable to 1.
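A minimal sketch of the ExecuteScalar approach inside the existing loop (variable names follow the question; con1 is assumed to be opened once before the loop):
bool recordExist = false;

for (int i = 0; i < 100; i++)
{
    using (var command = new SqlCommand("select some column from some table where column = @Value", con1))
    {
        command.Parameters.AddWithValue("@Value", getName_as_Rows[i]);

        // ExecuteScalar returns the first column of the first row, or null when no row matches.
        var result = command.ExecuteScalar();
        recordExist = result != null;

        if (recordExist)
        {
            // do the other stuff for this name
        }
    }
}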