I have a table that looks as follows (with a few more irrelevant rows):
| user_id | user_name | fk_role_id | password |
| 1       | us1       | 1          | 1234     |
| 2       | us2       | 2          | 1234     |
| 3       | us3       | 2          | 1234     |
| 4       | us4       | 4          | 1234     |
I need to create an SQL statement that counts the number of entries with fk_role_id = 1.
If there is more than one user with that fk_role_id, the statement may delete the user, but if there is only one user left with that fk_role_id it should fail, or give an error message stating that the user is the last one with that fk_role_id and therefore cannot be deleted.
So far I have not found anything that comes close to working, so hopefully someone here can help me.
SQL Server (2008 onwards):
with CTE as
(
    -- number the rows within each fk_role_id, lowest user_id first
    select MT.*,
           row_number() over (partition by fk_role_id order by user_id) as rn
    from MyTable MT
)
-- keep one row per fk_role_id and delete the rest
delete from CTE
where rn > 1;
Please try this statement:
DELETE TOP (1) FROM table_name
WHERE fk_role_id IN (SELECT fk_role_id
                     FROM table_name
                     GROUP BY fk_role_id
                     HAVING COUNT(fk_role_id) > 1)
Each time the statement is executed it deletes one row, until only one record per fk_role_id is left. For the last record the HAVING condition no longer matches, so it will not be deleted from your table.
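If you need to delete a specific user only when they are not the last one with their fk_role_id, a guarded DELETE is one way to express it. Here is a minimal sketch, assuming SQL Server and a table named Users (both assumptions; adjust the names to your schema):

DECLARE @user_id int = 3;

-- delete the user only if at least one other user shares their fk_role_id
DELETE FROM Users
WHERE user_id = @user_id
  AND fk_role_id IN (SELECT fk_role_id
                     FROM Users
                     GROUP BY fk_role_id
                     HAVING COUNT(*) > 1);

IF @@ROWCOUNT = 0
    -- either the user does not exist, or they are the last one with their role
    RAISERROR('User is the last one with this fk_role_id and cannot be deleted.', 16, 1);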
Here is the scenario:
Config Table:
+--------+-----------+-------+
| Prefix | Separator | Seed |
+--------+-----------+-------+
| A | # | 10000 |
+--------+-----------+-------+
Transaction Table:
+----+----------+------+
| Id | SerialNo | Col3 |
+----+----------+------+
| 1 | A#10000 | |
| 2 | A#10001 | |
+----+----------+------+
The Transaction table has a SerialNo column holding a sequential number generated based on the configuration table. The configuration table determines the prefix, the separator, and the seed value of the serial number.
In the above example the serial number would start at A#10000 and increment by 1.
But if, after a few months, someone updates the configuration table to:
+--------+-----------+-------+
| Prefix | Separator | Seed |
+--------+-----------+-------+
| B | # | 10000 |
+--------+-----------+-------+
Then the Transaction table is supposed to look something like this:
+----+----------+------+
| Id | SerialNo | Col3 |
+----+----------+------+
| 1 | A#13000 | |
| 2 | B#10001 | |
+----+----------+------+
However, there must be no duplicate serial numbers in the Transaction table at any given point in time.
If someone sets the prefix back to A and the seed to 10000, then the next serial number should not be A#10000, because it already exists. It should be A#13001.
One could simply write a SELECT query with MAX() and CONCAT(), but that could cause concurrency issues. I don't want duplicate serial numbers, and I also want this to be as performance-friendly as possible.
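For illustration, the naive MAX()/CONCAT() approach I mean would look roughly like this (a sketch only; TransactionTable is an assumed name, and nothing here stops two sessions from reading the same MAX at once):

SELECT c.Prefix + c.Separator +
       CAST(COALESCE(MAX(CAST(SUBSTRING(t.SerialNo,
                                        LEN(c.Prefix) + LEN(c.Separator) + 1,
                                        20) AS int)) + 1,
                     c.Seed) AS varchar(20)) AS NextSerialNo
FROM Config c
LEFT JOIN TransactionTable t
       ON t.SerialNo LIKE c.Prefix + c.Separator + '%'
GROUP BY c.Prefix, c.Separator, c.Seed;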
Another solution I came up with is to create a Windows service that keeps running and watching the table. Records would be inserted with NULL as the serial number, and the Windows service would then fill the serial number in. That way there would be no concurrency issues, but I am not sure how reliable it is, and there would be delays.
There will only be one entry in the configuration table at any given point in time.
You can solve the seed value problem quite easily in SQL Server. When someone updates the seed value back to 10000 you will need to do this via a stored procedure. The stored procedure then determines what the actual next available value should be because clearly 10000 could be the wrong value. The stored procedure then executes DBCC CHECKIDENT with the correct "new_reseed_value". Then when new records are inserted the server will handle the values again correctly.
Please look at this link for usage of the DBCC CHECKIDENT command: SQL Server DBCC CHECKIDENT.
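As a rough sketch of that stored procedure's core, assuming the numeric part of SerialNo comes from an identity column and the table is named TransactionTable (both assumptions):

DECLARE @next int;

-- find the highest number already used for the current prefix 'A'
SELECT @next = MAX(CAST(SUBSTRING(SerialNo, CHARINDEX('#', SerialNo) + 1, 20) AS int))
FROM TransactionTable
WHERE SerialNo LIKE 'A#%';

-- reseed so the next generated identity value is @next + 1
DBCC CHECKIDENT ('TransactionTable', RESEED, @next);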
I'm looking for a way to list all views in a database.
Initially I found and tried an answer on the MySQL forums:
SELECT table_name
FROM information_schema.views
WHERE information_schema.views.table_schema LIKE 'view%';
However, this doesn't work; it returns an empty set. (I know they're in there!)
These also fail:
mysql> use information_schema;
Database changed
mysql> select * from views;
ERROR 1102 (42000): Incorrect database name 'mysql.bak'
mysql> select * from tables;
ERROR 1102 (42000): Incorrect database name 'mysql.bak'
Why isn't this working?
SHOW FULL TABLES IN database_name WHERE TABLE_TYPE LIKE 'VIEW';
Here's a way to find all the views in every database on your instance:
SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.tables
WHERE TABLE_TYPE LIKE 'VIEW';
To complement the answers above, here is how to get more information about a specific view.
Even though there are two valid answers already:
SHOW FULL TABLES IN your_db_name WHERE TABLE_TYPE LIKE 'VIEW';
SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.TABLES
WHERE TABLE_TYPE LIKE 'VIEW' AND TABLE_SCHEMA LIKE 'your_db_name';
you can apply the following, which I think is better:
SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.VIEWS
WHERE TABLE_SCHEMA LIKE 'your_db_name';
It is better to work directly with information_schema.VIEWS (note that it is now VIEWS, not TABLES), because you can retrieve more data from it. Use DESC information_schema.VIEWS for the full list of fields:
+----------------------+---------------------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+----------------------+---------------------------------+------+-----+---------+-------+
| TABLE_CATALOG | varchar(64) | YES | | NULL | |
| TABLE_SCHEMA | varchar(64) | YES | | NULL | |
| TABLE_NAME | varchar(64) | YES | | NULL | |
| VIEW_DEFINITION | longtext | YES | | NULL | |
| CHECK_OPTION | enum('NONE','LOCAL','CASCADED') | YES | | NULL | |
| IS_UPDATABLE | enum('NO','YES') | YES | | NULL | |
| DEFINER | varchar(93) | YES | | NULL | |
| SECURITY_TYPE | varchar(7) | YES | | NULL | |
| CHARACTER_SET_CLIENT | varchar(64) | NO | | NULL | |
| COLLATION_CONNECTION | varchar(64) | NO | | NULL | |
+----------------------+---------------------------------+------+-----+---------+-------+
For example, note the VIEW_DEFINITION field, which you can put to use:
SELECT TABLE_SCHEMA, TABLE_NAME, VIEW_DEFINITION
FROM information_schema.VIEWS
WHERE TABLE_SCHEMA LIKE 'your_db_name';
Of course, there are more fields available for your consideration.
select * FROM information_schema.views\G
This will work.
USE INFORMATION_SCHEMA;
SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.tables
WHERE TABLE_TYPE LIKE 'VIEW';
Try moving that mysql.bak directory out of /var/lib/mysql to, say, /root/. It seems like MySQL is finding it, and that may be what causes the ERROR 1102 (42000): Incorrect database name 'mysql.bak' error.
The error you're seeing is probably due to a non-MySQL-created directory in MySQL's data directory. MySQL maps the database structure fairly directly onto the file system: databases are mapped to directories, and tables are files in those directories.
The name of the non-working database looks suspiciously like someone copied the mysql database directory to a backup at some point and left it in MySQL's data directory. This isn't a problem as long as you don't try to use that database for anything. Unfortunately, the information schema scans all of the databases it finds, discovers that this one isn't a real database, and gets upset.
The solution is to find the mysql.bak directory on the hard disk and move it well away from MySQL.
If you have created any views in a MySQL database, you can simply see them the same way you see all the tables in that database.
Write:
mysql> SHOW TABLES;
You will see the list of tables and views in your database.
Another way to find all views:
SELECT DISTINCT table_name FROM information_schema.TABLES WHERE table_type = 'VIEW'
I am struggling with a simple update statement in Oracle. The update itself has not changed in forever but the table has grown massively and the performance is now unacceptable.
Here is the lowdown:
70 columns
27 indexes (which I am not under any circumstances allowed to reduce)
50M rows
Update statement is just hitting one table.
Update statement:
update TABLE_NAME
set NAME = 'User input string',
    NO = NO,
    PLANNED_START_DATE = TO_DATE('3/2/2016','dd/mm/yyyy'),
    PLANNED_END_DATE = TO_DATE('3/2/2016','dd/mm/yyyy')
WHERE ID = 999999 /* pk on the table */
Execution Plan:
==================
Plan hash value: 2165476569
-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------
| 0 | UPDATE STATEMENT | | 1 | 245 | 1 (0)| 00:00:01 |
| 1 | UPDATE | TABLE_NAME | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| TABLE_NAME | 1 | 245 | 1 (0)| 00:00:01 |
|* 3 | INDEX UNIQUE SCAN | PK_INDEX | 1 | | 1 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("ID"=35133238)
==================================================
The update statement originates in a C# application but I am free to change the statement there.
Select statements still perform well thanks to all the indexes, but as I see it that is exactly what is wrong with the update: it has to go and update all those indexes.
We are licensed for partitioning but this table is NOT partitioned.
How can I improve the performance of this update statement without altering the table or its indexes?
Are you sure that column ID is the primary key? And is the primary key based on a unique index? Because in that case the CBO would use an INDEX UNIQUE SCAN. In your plan the CBO expected 188 rows using the filter ID (primary key) = value and uses an INDEX RANGE SCAN.
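One way to check both things from the data dictionary is sketched below (assuming you can query the USER_* views; 'TABLE_NAME' is a placeholder):

SELECT c.constraint_name,
       c.index_name,
       i.uniqueness
FROM user_constraints c
JOIN user_indexes i
  ON i.index_name = c.index_name
WHERE c.table_name = 'TABLE_NAME'
  AND c.constraint_type = 'P';

If UNIQUENESS comes back as NONUNIQUE, the primary key is enforced through a non-unique index, which would explain an INDEX RANGE SCAN in the plan.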
Hi guys, I hope you can help me with a SQL query I'm trying to write. I have achieved the table I want, but I have to build it up row by row. This is not efficient, and when I end up with 200 rows it slows the website down.
This is what my Table looks like:
FeedBack Table:
RatingID | RATING | TUTORIAL_ID | TUTORIAL_NAME
5716     | 4      | 993         | Test002
5717     | 3      | 993         | Test002
5777     | 1      | 994         | eClip3
5886     | 1      | 994         | eClip3
7127     | 4      | 1235        | FTIR001
7128     | 4      | 1235        | FTIR001
7169     | 3      | 1235        | FTIR001
7170     | 2      | 1235        | FTIR001
7131     | 4      | 1235        | FTIR001
7187     | 3      | 1235        | FTIR001
7132     | 3      | 1235        | FTIR001
What I want to produce is a table of all the unique tutorial names, together with the total number of times each tutorial received each rating: 1 (Not Useful), 2 (Somewhat Useful), 3 (Useful), 4 (Very Useful).
So the query should produce:
Tutorial Name | Not Useful | Somewhat Useful | Useful | Very Useful
Test002       | 0          | 0               | 1      | 1
eClip3        | 2          | 0               | 0      | 0
FTIR001       | 0          | 1               | 3      | 3
The table is shown on a webpage with C# behind it. Currently I loop through the table to find the list of individual tutorial names, and then for each clip name I run SELECT COUNT(*) ... WHERE rating = 1 etc., building the table up row by row.
What I really want to know is whether there is a way to do all of this at once. That would improve the efficiency of the website greatly, as there are 200+ tutorials I want to show data for.
select tutorial_name,
sum(case when rating=1 then 1 else 0 end) "not Useful",
sum(case when rating=2 then 1 else 0 end) "Somewhat Useful",
sum(case when rating=3 then 1 else 0 end) "Useful",
sum(case when rating=4 then 1 else 0 end) "Very Useful"
from feedback
group by tutorial_name;
is the query you need.
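An equivalent formulation, if you prefer COUNT: COUNT ignores NULLs, so the ELSE branch can simply be dropped.

select tutorial_name,
       count(case when rating = 1 then 1 end) "not Useful",
       count(case when rating = 2 then 1 end) "Somewhat Useful",
       count(case when rating = 3 then 1 end) "Useful",
       count(case when rating = 4 then 1 end) "Very Useful"
from feedback
group by tutorial_name;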
I have a SQL Server database that will contain many tables that all connect, each with a primary key. I have a Dictionary that keeps track of what the primary key fields are for each table. My task is to extract data every day from attribute-centric XML files and insert it into a master database. Each XML file has the same schema. I'm doing this by using an XmlReader and importing the data into a DataSet.
I can't use an AutoNumber for the keys. Let's say yesterday's XML file produced a DataTable similar to the following, and it was imported into a database
-------------------------------------
| Key | Column1 | Column2 | Column3 |
|-----------------------------------|
| 0 | dsfsfsd | sdfsrer | sdfsfsf |
|-----------------------------------|
| 1 | dertert | qweqweq | xczxsdf |
|-----------------------------------|
| 2 | prwersd | xzcsdfw | qwefkgs |
-------------------------------------
If today's XML file produces the following DataTable
-------------------------------------
| Key | Column1 | Column2 | Column3 |
|-----------------------------------|
| 0 | sesdfsd | hjghjgh | edrgffb |
|-----------------------------------|
| 1 | wrwerwr | zxcxfsd | pijghjh |
|-----------------------------------|
| 2 | vcbcvbv | vbnvnbn | bnvfgnf |
-------------------------------------
Then when I go to import the new data into the database using SqlBulkCopy, there will be duplicate keys. My solution was to use DateTime.Now.Ticks to generate unique keys; in theory, that should always produce a unique value.
However, for some reason DateTime.Now.Ticks is not unique. For example, 5 records in a row might all have the key 635387859864435908, and the next 7 records might have the key 635387859864592164, even though I am generating the value at different times. I suspect the cause is that my script calls DateTime.Now.Ticks several times before the underlying clock value updates.
Can anyone else think of a better way to generate keys?
It's possible that the value of DateTime.Now is cached for a small amount of time for performance reasons. We do something similar to this, and there are two possible options that we use:
Keep a list of the numbers you've already used on the server, and increment whenever you can determine that a number has already been used.
Convert the field to a string and append a GUID or some other random identifier on the end of it. A GUID can be created with System.Guid.NewGuid().ToString();
Obviously, neither of these plans makes the risk of collision zero, but they can help reduce it.
If you have a huge amount of data and you need a unique key for each row, just use a GUID.
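A minimal sketch of that on the database side, assuming SQL Server and placeholder table/column names (NEWSEQUENTIALID() is chosen over NEWID() here because random GUIDs fragment a clustered index):

CREATE TABLE ImportedData
(
    -- generated by the server, so the bulk copy never has to supply a key
    RowKey uniqueidentifier NOT NULL
        CONSTRAINT DF_ImportedData_RowKey DEFAULT NEWSEQUENTIALID()
        CONSTRAINT PK_ImportedData PRIMARY KEY,
    Column1 varchar(50),
    Column2 varchar(50),
    Column3 varchar(50)
);

If the keys must be known to your Dictionary before the insert, generate them in C# with System.Guid.NewGuid() instead.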
You could do something like the following to get a unique id (SQL Fiddle):
SELECT CONCAT(YEAR(GETDATE()),
              DATEDIFF(DAY, STR(YEAR(GETDATE()), 4) + '0101', GETDATE()) + 1,
              ROW_NUMBER() OVER (ORDER BY id DESC)) AS UniqueID
FROM supportContacts s
This would work if you only run the query once per day. If you ran it more than once per day, you would need to grab the seconds or something else (SQL Fiddle):
SELECT CONCAT(CurrYear, CurrJulian, CurrSeconds, Row) AS UniqueID
FROM
(
    SELECT
        YEAR(GETDATE()) AS CurrYear,
        DATEDIFF(DAY, STR(YEAR(GETDATE()), 4) + '0101', GETDATE()) + 1 AS CurrJulian,
        ROW_NUMBER() OVER (ORDER BY id DESC) AS Row,
        DATEDIFF(SECOND, LEFT(CONVERT(varchar(20), GETDATE(), 126), 10), GETDATE()) AS CurrSeconds
    FROM supportContacts s
) AS m