I have a Students table with 7 columns: Reg_No (i.e. the student's register number), Mark1, Mark2, Mark3, Best1, Best2, and Total.
Reg_No, Mark1, Mark2, and Mark3 are retrieved from the database.
I am just looking for a way to select the two highest marks from Mark1, Mark2, and Mark3 and fill them into the Best1 and Best2 columns.
Finally, I should produce the sum of Mark1 and Mark2 in the Total column. Please suggest a way.
I am going to assume you would like an SQL answer.
SELECT Reg_No, Mark1, Mark2, MAX(Mark1) AS Best1, MAX(Mark2) AS Best2,
       SUM(Mark1 + Mark2) AS Total
FROM Students
GROUP BY Reg_No, Mark1, Mark2
This query probably isn't very useful, though, since it mixes aggregated data with the data to be aggregated. If you only need to see each unique student's best and total grades, a better query would be:
SELECT Reg_No, MAX(Mark1) AS Best1, MAX(Mark2) AS Best2,
       SUM(Mark1 + Mark2) AS Total
FROM Students
GROUP BY Reg_No
You need to use the GREATEST and LEAST functions, applied to the column values of each row.
Example:
select
    @m1 := 55 m1, @m2 := 42 m2, @m3 := 66 m3,
    @b1 := greatest( @m1, @m2, @m3 ) b1,
    @b2 := ( ( @total := @m1 + @m2 + @m3 )
             - ( @b1 + least( @m1, @m2, @m3 ) )
           ) b2,
    @total total;
+----+----+----+------+------+-------+
| m1 | m2 | m3 | b1 | b2 | total |
+----+----+----+------+------+-------+
| 55 | 42 | 66 | 66 | 55 | 163 |
+----+----+----+------+------+-------+
Try this on your students table:
select
    Reg_No, Mark1, Mark2, Mark3,
    @b1 := greatest( Mark1, Mark2, Mark3 ) Best1,
    @b2 := ( ( @total := Mark1 + Mark2 + Mark3 )
             - ( @b1 + least( Mark1, Mark2, Mark3 ) )
           ) Best2,
    @total Total
from students;
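Note that this Total is the sum of all three marks. If you actually want to persist the values into the Best1, Best2, and Total columns, here is a minimal UPDATE sketch that avoids user variables entirely (assignment order within a SELECT is not guaranteed, and the @var := ... syntax is deprecated in MySQL 8). It assumes Total should hold the sum of the two best marks:

UPDATE students
SET Best1 = GREATEST(Mark1, Mark2, Mark3),
    Best2 = Mark1 + Mark2 + Mark3
            - GREATEST(Mark1, Mark2, Mark3)
            - LEAST(Mark1, Mark2, Mark3),
    -- assumption: Total = Best1 + Best2, i.e. everything except the lowest mark
    Total = Mark1 + Mark2 + Mark3 - LEAST(Mark1, Mark2, Mark3);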
Refer to:
MySQL: GREATEST(value1,value2,...)
Return the largest argument
MySQL: LEAST(value1,value2,...)
Return the smallest argument
I am using MariaDB. I have a table that I create for every IoT device at the time of its first insertion, using a stored procedure. In case anyone wonders why I create a new table per device: each device publishes data every 5 seconds, and it is impossible for me to store all of it in a single table.
So, my table structure is like below:
CREATE TABLE IF NOT EXISTS `mqttpacket_',device_serial_number,'`(
`data_type_id` int(11) DEFAULT NULL,
`data_value` int(11) DEFAULT NULL,
`inserted_date` DATE DEFAULT NULL,
`inserted_time` TIME DEFAULT NULL,
FOREIGN KEY(data_type_id) REFERENCES datatypes(id),
INDEX `index_mqttpacket`(`data_type_id`,`inserted_date`)) ENGINE = INNODB;
I have a very long SELECT query, like the one below, to fetch the data for the selected types between the chosen dates and times.
SELECT mqttpacket_123.data_value, datatypes.data_name, datatypes.value_mult,
CONCAT(mqttpacket_123.inserted_date, ' ',
mqttpacket_123.inserted_time) AS 'inserted_date_time'
FROM mqttpacket_123
JOIN datatypes ON mqttpacket_123.data_type_id = datatypes.id
WHERE mqttpacket_123.data_type_id IN(1,2,3,4,5,6)
AND CASE WHEN mqttpacket_123.inserted_date = '2021-11-08'
THEN mqttpacket_123.inserted_time > '12:25:00'
WHEN mqttpacket_123.inserted_date = '2021-11-15'
THEN mqttpacket_123.inserted_time < '12:25:00'
ELSE (mqttpacket_123.inserted_date BETWEEN '2021-11-08'
AND '2021-11-15')
END;
This returns around 500k records shaped like the sample below:
| data_value | data_name | value_mult | inserted_date_time |
--------------------------------------------------------------------------------
| 271 | name_1 | 0.1 | 2021-11-08 12:25:04 |
| 106 | name_2 | 0.1 | 2021-11-08 12:25:04 |
| 66 | name_3 | 0.1 | 2021-11-08 12:25:04 |
| 285 | name_4 | 0.1 | 2021-11-08 12:25:04 |
| 61 | name_5 | 0.1 | 2021-11-08 12:25:04 |
| 454 | name_6 | 0.1 | 2021-11-08 12:25:04 |
| 299 | name_7 | 0.1 | 2021-11-08 12:25:04 |
Affected rows: 0 Found rows: 395,332 Warnings: 0 Duration for 1 query: 0.734 sec. (+ 7.547 sec. network)
I keep only the last 2 weeks' data in my tables and clean up older data, as I have a backup system.
However, loading the query result into a DataTable takes ~30 sec., which is roughly 4 times as long as the MySQL query itself.
Do you have any suggestions to improve this performance?
PS: I call this query from C# via the method below, which passes the query string to a stored procedure, SP_RunQuery, that executes it as-is.
public DataTable CallStoredProcedureRunQuery(string QueryString)
{
    DataTable dt = new DataTable();
    try
    {
        using (var conn = new MySqlConnection(_connectionString))
        {
            conn.Open();
            using (var cmd = new MySqlCommand("SP_RunQuery", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add("@query_string", MySqlDbType.VarChar).Value = QueryString;
                using (MySqlDataAdapter sda = new MySqlDataAdapter(cmd))
                {
                    sda.Fill(dt);
                }
            }
        }
    }
    catch (Exception ex)
    {
        IoTemplariLogger.tLogger.EXC("Call Stored Procedure for RunQuery failed.", ex);
    }
    return dt;
}
EDIT: Each of my sensors pushes a single MQTT packet containing ~50 different readings. There are 12 five-second intervals in a minute, so I receive ~600 rows per minute per device.
Data insertion is done asynchronously through a stored procedure: I push the JSON content along with the device_id, then iterate over the JSON to parse the values and insert them into the table.
PS. The following code is just for clarification. It works fine.
/*Dynamic SQL -- if the device is registered to the system but has no table, create it.*/
SET create_table_query = CONCAT('CREATE TABLE IF NOT EXISTS `mqttpacket_',device_serial_number,'`(`data_type_id` int(11) DEFAULT NULL, `data_value` int(11) DEFAULT NULL,`inserted_date` DATE DEFAULT NULL, `inserted_time` TIME DEFAULT NULL, FOREIGN KEY(data_type_id) REFERENCES datatypes(id), INDEX `index_mqttpacket`(`data_type_id`,`inserted_date`)) ENGINE = InnoDB;');
PREPARE stmt FROM create_table_query;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
/*Loop into coming value array. It is like: $.type_1,$.type_2,$.type_3, to iterate in the JSON. We reach each value like $.type_1*/
WHILE (LOCATE(',', value_array) > 0)
DO
SET arr_data_type_name = SUBSTRING_INDEX(value_array,',',1); /*pick first item of value array*/
SET value_array = SUBSTRING(value_array, LOCATE(',',value_array) + 1); /*remove picked first item from the value_array*/
SELECT JSON_EXTRACT(incoming_data, arr_data_type_name) INTO value_iteration; /*extract value of first item. $.type_1*/
SET arr_data_type_name := SUBSTRING_INDEX(arr_data_type_name, ".", -1); /*Remove the $ and the . to get pure data type name*/
/*Check whether the data type name exists in the table; if not, insert it and assign its id to lcl_data_type_id*/
IF (SELECT COUNT(id) FROM datatypes WHERE datatypes.data_name = arr_data_type_name) > 0 THEN
SELECT id INTO lcl_data_type_id FROM datatypes WHERE datatypes.data_name = arr_data_type_name LIMIT 1;
ELSE
SELECT devices.device_type_id INTO lcl_device_type FROM devices WHERE devices.id = lcl_device_id LIMIT 1;
INSERT INTO datatypes (datatypes.data_name,datatypes.description,datatypes.device_type_id,datatypes.value_mult ,datatypes.inserted_time) VALUES(arr_data_type_name,arr_data_type_name,lcl_device_type,0.1,NOW());
SELECT id INTO lcl_data_type_id FROM datatypes WHERE datatypes.data_name = arr_data_type_name LIMIT 1;
END IF;
/*Track which device has which data types, so that data types are not retrieved unnecessarily for the selected device*/
IF (SELECT COUNT(device_id) FROM devicedatatypes WHERE devicedatatypes.device_id = lcl_device_id AND devicedatatypes.datatype_id = lcl_data_type_id) < 1 THEN
INSERT INTO devicedatatypes (devicedatatypes.device_id, devicedatatypes.datatype_id) VALUES(lcl_device_id,lcl_data_type_id);
END IF;
SET lcl_insert_mqtt_query = CONCAT('INSERT INTO mqttpacket_',device_serial_number,'(data_type_id,data_value,inserted_date,inserted_time) VALUES(',lcl_data_type_id,',',value_iteration,',''',data_date,''',''',data_time,''');');
PREPARE stmt FROM lcl_insert_mqtt_query;
EXECUTE stmt;
SET affected_data_row_count = affected_data_row_count + 1;
END WHILE;
Here and here is extra information about the server and database, added in response to the comments.
The server has an SSD. Nothing significant runs on it besides my dotnet application and the database.
It is usually better to have a DATETIME column instead of splitting it into two (DATE and TIME) columns. That might simplify the WHERE clause.
Having one table per device is usually a bad idea. Instead, add a column for the device_id.
Not having a PRIMARY KEY is a bad idea. Do you ever get two readings in the same second for a specific device? Probably not.
Rolling those together plus some other likely changes, start by changing the table to
CREATE TABLE IF NOT EXISTS `mqttpacket`(
`device_serial_number` SMALLINT UNSIGNED NOT NULL,
`data_type_id` TINYINT UNSIGNED NOT NULL,
`data_value` SMALLINT NOT NULL,
`inserted_at` DATETIME NOT NULL,
FOREIGN KEY(data_type_id) REFERENCES datatypes(id),
PRIMARY KEY(device_serial_number, `data_type_id`,`inserted_at`)
) ENGINE = INNODB;
That PK will make the query faster.
This may be what you are looking for after the change to DATETIME:
AND inserted_at >= '2021-11-08 12:25:00'
AND inserted_at < '2021-11-08 12:25:00' + INTERVAL 7 DAY
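With the single table and a DATETIME column, the whole fetch collapses to one range predicate that the new primary key can serve. A sketch, reusing the question's device 123 and date bounds (untested; adjust the half-open interval to taste):

SELECT d.data_name, d.value_mult, m.data_value, m.inserted_at
FROM mqttpacket AS m
JOIN datatypes AS d ON d.id = m.data_type_id
WHERE m.device_serial_number = 123
  AND m.data_type_id IN (1,2,3,4,5,6)
  AND m.inserted_at >= '2021-11-08 12:25:00'
  AND m.inserted_at <  '2021-11-15 12:25:00';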
To keep 2 weeks' worth of data, DROP PARTITION is an efficient way to do the delete. I would use PARTITION BY RANGE(TO_DAYS(inserted_at)) and have 16 partitions, as discussed in http://mysql.rjweb.org/doc.php/partitionmaint
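A sketch of that layout with daily partitions (caveat: MySQL/MariaDB partitioned tables cannot have foreign keys, so the data_type_id FK above would have to be dropped first):

ALTER TABLE mqttpacket
PARTITION BY RANGE (TO_DAYS(inserted_at)) (
    PARTITION p20211108 VALUES LESS THAN (TO_DAYS('2021-11-09')),
    PARTITION p20211109 VALUES LESS THAN (TO_DAYS('2021-11-10')),
    -- ... one partition per day, 16 in total ...
    PARTITION pfuture VALUES LESS THAN MAXVALUE
);

-- The nightly cleanup then becomes a near-instant metadata operation:
ALTER TABLE mqttpacket DROP PARTITION p20211108;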
If you are inserting a thousand rows every 5 seconds: with table-per-device, you would need a thousand threads, each doing one insert, which would be a nightmare for the architecture. With a single table (as I suggest), if you can gather the 1000 rows together in one process at the same time, do one multi-row INSERT every 5 seconds, as sketched below. I discuss other high-speed ingestion techniques elsewhere.
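A minimal sketch of such a batched insert (the values are illustrative):

INSERT INTO mqttpacket (device_serial_number, data_type_id, data_value, inserted_at)
VALUES (123, 1, 271, '2021-11-08 12:25:04'),
       (123, 2, 106, '2021-11-08 12:25:04'),
       (124, 1, 454, '2021-11-08 12:25:04');
-- ... up to ~1000 rows per statement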
Rate Per Second = RPS
Suggestions to consider for the [mysqld] section of your instance:
innodb_io_capacity=500 # from 200 to use more of available SSD IOPS
innodb_log_file_size=256M # from 48M to reduce log rotation frequency
innodb_log_buffer_size=128M # from 16M to reduce log rotation avg 25 minutes
innodb_lru_scan_depth=100 # from 1024 to conserve 90% CPU cycles used for function
innodb_buffer_pool_size=10G # from 128M to reduce innodb_data_reads 85 RPS
innodb_change_buffer_max_size=50 # from 25 percent to expedite pages created 590 RPhr
Observation:
innodb_flush_method=O_DIRECT # from fsync for method typically used on LX systems
You should find these significantly improve task completion performance.
There are additional opportunities to tune Global Variables.
How can I remove certain words, like DUM or PRJ, from the beginning of a string if they exist, then split the string on the character _ and take the second part?
For example, from DUM_EI_AO_L_5864_Al Meena Tower I need to get AO, and from EI_AE_L_5864_Al radha Tower, AE.
Remove the prefixes, then find the positions of the first and second underscores, and take the substring between those two separators:
Oracle Setup:
CREATE TABLE your_table ( value ) AS
SELECT 'DUM_EI_AO_L_5864_Al Meena Tower' FROM DUAL UNION ALL
SELECT 'EI_AE_L_5864_Al radha Tower' FROM DUAL
Query:
SELECT value,
SUBSTR( replaced_value, first_separator + 1, second_separator - first_separator - 1 )
AS second_term
FROM (
SELECT value,
replaced_value,
INSTR( replaced_value, '_', 1, 1 ) AS first_separator,
INSTR( replaced_value, '_', 1, 2 ) AS second_separator
FROM (
SELECT value,
REPLACE(
REPLACE(
value,
'PRJ_'
),
'DUM_'
) AS replaced_value
FROM your_table
)
)
Output:
VALUE | SECOND_TERM
:------------------------------ | :----------
DUM_EI_AO_L_5864_Al Meena Tower | AO
EI_AE_L_5864_Al radha Tower | AE
Query 2:
You can also use a regular expression:
SELECT value,
REGEXP_SUBSTR( value, '(DUM_|PRJ_)?.*?_(.*?)_', 1, 1, NULL, 2 ) AS second_term
FROM your_table
Output:
VALUE | SECOND_TERM
:------------------------------ | :----------
DUM_EI_AO_L_5864_Al Meena Tower | AO
EI_AE_L_5864_Al radha Tower | AE
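The trailing 2 is REGEXP_SUBSTR's subexpr argument (available since Oracle 11g): it returns the second capture group, i.e. the token between the first and second underscores after the optional DUM_/PRJ_ prefix.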
Here is my actual data in Excel, which I can successfully read into a DataGridView in a C# Windows application.
Test | Energy |
---------------------
C018-3L-1 | 113 |
C018-3L-2 | 79 |
C018-3L-3 | 89 |
C018-3L-4 | 90 |
C018-3L-5 | 95 |
C021-3T-1 | 115 |
C021-3T-2 | 100 |
But now I want the data from the Excel file to appear in the DataGridView in the format below:
Test |Energy-1|Energy-2|Energy-3 |
------------------------------------
C018-3L |113 |79 |89 |
C018-3L |90 |95 |NULL |
C021-3T |115 |100 |NULL |
Here is my code:
private void TensileEnergyData_Load(object sender, EventArgs e)
{
    try
    {
        string sourcefilepath = ConfigurationManager.AppSettings["FilePath"].ToString();
        string[] files = Directory.GetFiles(sourcefilepath, "*.xlsx");
        foreach (string s in files)
        {
            string excelConnectionString = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + s + ";Extended Properties='Excel 12.0;HDR=YES';";
            // Create connection to Excel workbook
            using (OleDbConnection connection = new OleDbConnection(excelConnectionString))
            {
                connection.Open();
                da = new OleDbDataAdapter("Select Test, Energy FROM [Sheet1$]", connection);
                da.Fill(dtExcelData);
                connection.Close();
            }
        }
    }
    catch (Exception ex)
    {
        objDAL.SendExcepToDB(ex, "TensileEnergyData_Load");
        MessageBox.Show("Fail to read data...!!");
    }
    dataGridView1.Visible = true;
    dataGridView1.DataSource = dtExcelData;
}
How can I achieve this using Group By?
I'll provide a SQL-Server-based answer, since your closely related question asked for this, even though you did not tag this question with [sql-server]. Hope this helps.
This is a very good example of why you should never put more than one piece of information in one column. Store the parts in separate columns and this will be much easier.
Furthermore, this smells a bit: such issues are better solved in your presentation layer.
Nevertheless, it can be done:
DECLARE @tbl TABLE(Test VARCHAR(100), Energy INT);
INSERT INTO @tbl VALUES
 ('C018-3L-1',113)
,('C018-3L-2',79)
,('C018-3L-3',89)
,('C018-3L-4',90)
,('C018-3L-5',95)
,('C021-3T-1',115)
,('C021-3T-2',100);

SELECT p.*
FROM
(
    SELECT B.Code
          ,(B.Number - 1) / 3 AS Line
          ,CONCAT('Energy-', CASE B.Number % 3 WHEN 0 THEN 3 ELSE B.Number % 3 END) AS ColumnName
          ,Energy
    FROM @tbl t
    CROSS APPLY(SELECT LEN(t.Test) - CHARINDEX('-', REVERSE(t.Test))) A(PosLastHyphen)
    CROSS APPLY(SELECT LEFT(t.Test, PosLastHyphen) AS Code
                      ,CAST(SUBSTRING(t.Test, PosLastHyphen + 2, 10) AS INT) AS Number) B
) tbl
PIVOT
(
    MAX(Energy) FOR ColumnName IN([Energy-1],[Energy-2],[Energy-3])
) p
ORDER BY Code, Line;
The result
+---------+------+----------+----------+----------+
| Code | Line | Energy-1 | Energy-2 | Energy-3 |
+---------+------+----------+----------+----------+
| C018-3L | 0 | 113 | 79 | 89 |
+---------+------+----------+----------+----------+
| C018-3L | 1 | 90 | 95 | NULL |
+---------+------+----------+----------+----------+
| C021-3T | 0 | 115 | 100 | NULL |
+---------+------+----------+----------+----------+
Some explanations:
I use CROSS APPLY to split your code from the running number. Then I use integer division to compute the output row (Line) and the modulo operator % to spread the values over three columns.
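For example, 'C018-3L-4' yields Number = 4, so Line = (4 - 1) / 3 = 1 (the second output row for C018-3L), and 4 % 3 = 1 places its value, 90, in the Energy-1 column of that row, matching the expected output.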
I have alphanumeric values. After sorting with the SQL Server ORDER BY clause, I get the following result:
select *
from WO
where WOCode = AnyNumber
order by [ColumnName]
Result:
39660A1
39660A10
39660A11
39660A2
39660A3
39660A4
39660A5
39660A6
39660A7
39660A8
39660A9
Required result
39660A1
39660A2
39660A3
39660A4
39660A5
39660A6
39660A7
39660A8
39660A9
39660A10
39660A11
Here is a quick and dirty solution:
SELECT *
FROM table
ORDER BY LEN(Field) ASC, Field ASC
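This works here because every value shares the same prefix: sorting by length first puts shorter numeric suffixes before longer ones, and within a given length, plain string order matches numeric order.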
Assuming that the letter A is always in the same position and the characters after it are integers only, you can do this:
WITH CTE AS
(
SELECT
WOCode,
CAST(SUBSTRING(WOCode, CHARINDEX('A', WOCode) + 1,
LEN(WOCode) - CHARINDEX('A', WOCode) + 1) AS INT) AS DisplayOrder
FROM
WO
)
SELECT *
FROM CTE
ORDER BY DisplayOrder;
Results:
| WOCode |
|----------|
| 39660A1 |
| 39660A2 |
| 39660A3 |
| 39660A4 |
| 39660A5 |
| 39660A6 |
| 39660A7 |
| 39660A8 |
| 39660A9 |
| 39660A10 |
| 39660A11 |
You can also use TRY_CAST to avoid errors that might result from casting non-integer values (thanks to @zambonee for the suggestion):
WITH CTE AS
(
SELECT
WOCode,
CASE
    WHEN TRY_CAST(WOCode AS INT) IS NULL
    THEN TRY_CAST(SUBSTRING(WOCode,
         CHARINDEX('A', WOCode) + 1,
         LEN(WOCode) - CHARINDEX('A', WOCode) + 1) AS INT)
    ELSE 0
END AS DisplayOrder
FROM
WO
)
SELECT *
FROM CTE
ORDER BY DisplayOrder;
I have the following Python code I found on the internet. I would like to make a table in a SQL database with every IPv4 address there is. I don't code in Python, but it's what I found.
My questions are:
1: Is there T-SQL code I can use to generate the table? (One column, i.e. 0.0.0.0 through 255.255.255.255.)
2: How would I make this in C#, using the fastest method possible? I know printing the results slows the console application down by about 400%.
#!/usr/bin/env python
def generate_every_ip_address():
    # Yield every dotted-quad string from 0.0.0.0 through 255.255.255.255
    for octet_1 in range(256):
        for octet_2 in range(256):
            for octet_3 in range(256):
                for octet_4 in range(256):
                    yield "%d.%d.%d.%d" % (octet_1, octet_2, octet_3, octet_4)

for ip_address in generate_every_ip_address():
    print(ip_address)  # print() so this also runs under Python 3
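Note the scale before building this: 256^4 = 4,294,967,296 addresses, so materializing every address as its own row means over four billion rows and many gigabytes of storage.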
Would this work?
DECLARE @a INTEGER
DECLARE @b INTEGER
DECLARE @c INTEGER
DECLARE @d INTEGER
DECLARE @IPADDRESS nvarchar(50)

SET @a = 0
WHILE @a < 256
BEGIN
    SET @b = 0
    WHILE @b < 256
    BEGIN
        SET @c = 0
        WHILE @c < 256
        BEGIN
            SET @d = 0
            WHILE @d < 256
            BEGIN
                SET @IPADDRESS = CAST(@a AS nvarchar(3)) + '.' + CAST(@b AS nvarchar(3)) + '.' + CAST(@c AS nvarchar(3)) + '.' + CAST(@d AS nvarchar(3))
                PRINT @IPADDRESS
                SET @d = @d + 1
            END
            SET @c = @c + 1
        END
        SET @b = @b + 1
    END
    SET @a = @a + 1
END
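Note that this loop only PRINTs each address; to populate a table you would replace PRINT with an INSERT, and the set-based approach below will be far faster than four nested loops.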
Inserting in batches of 16,777,216 rows (256³, one batch per value of the first octet) would be quite straightforward using the following T-SQL.
DECLARE @Counter INT
SET @Counter = 0
SET NOCOUNT ON;

WHILE (@Counter <= 255)
BEGIN
    RAISERROR('Processing %d', 0, 1, @Counter) WITH NOWAIT;

    WITH Numbers (N)
    AS (SELECT CAST(number AS VARCHAR(3))
        FROM master.dbo.spt_values
        WHERE type = 'P'
          AND number BETWEEN 0 AND 255
       )
    INSERT INTO YourTable (IPAddress)
    -- @Counter must be cast to a string before concatenation
    SELECT CAST(@Counter AS VARCHAR(3)) + '.' + N1.N + '.' + N2.N + '.' + N3.N
    FROM Numbers N1,
         Numbers N2,
         Numbers N3

    SET @Counter = @Counter + 1
END
Please just use an int IDENTITY column to store each IP address. They're only 32 bits. Fill your table up with whatever else you're storing.
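If you do store each address as a 32-bit integer, converting it back to a dotted quad is simple arithmetic. A minimal T-SQL sketch (the example value is hypothetical; BIGINT is used because SQL Server's INT is signed, so addresses above 127.255.255.255 would not fit as positive INT values):

DECLARE @ip BIGINT = 3232235777;  -- 192.168.1.1

SELECT CAST(@ip / 16777216 % 256 AS VARCHAR(3)) + '.' +
       CAST(@ip / 65536 % 256 AS VARCHAR(3)) + '.' +
       CAST(@ip / 256 % 256 AS VARCHAR(3)) + '.' +
       CAST(@ip % 256 AS VARCHAR(3)) AS dotted_quad;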