We have a table with a couple of ntext columns, and we're looking to delete a row without retrieving it first, if possible.
Another twist is that we don't know the identity value, only a couple of other values that identify the row uniquely, so examples such as the one below won't work as-is:
http://blogs.msdn.com/b/alexj/archive/2009/03/27/tip-9-deleting-an-object-without-retrieving-it.aspx
Hoping there's something newer in EF5 to address this without resorting to stored procs?
Use ExecuteStoreCommand
For example:
databaseContext.ExecuteStoreCommand("DELETE FROM [table] where ... ");
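A minimal parameterized sketch (table and column names are placeholders; with a DbContext in EF5 the equivalent call is context.Database.ExecuteSqlCommand):

// Deletes the row matched by the two identifying values without loading it first.
// [MyTable], [ColumnA] and [ColumnB] are assumed names; {0}/{1} are sent as parameters.
databaseContext.ExecuteStoreCommand(
    "DELETE FROM [MyTable] WHERE [ColumnA] = {0} AND [ColumnB] = {1}",
    valueA, valueB);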
I have to add sorting to an existing database created in SQL Server. The problem is that this database contains ntext columns, which are not supported by LINQ's OrderBy method. The database was written with a code-first approach, so I have access to the database template, but I can't look at the actual database running on the server.
I've tried to change the string-type properties by marking them as
[Column(TypeName = "nvarchar(MAX)")]
but then I got a
Sequence contains no matching element
exception which I don't know how to fix.
This is how I wanted to sort my data (I get the exception on the instruction below):
MyDatabase.MyTable.OrderBy(x => x.MyRow).Load();
Before I changed TypeName to nvarchar, I got this error:
Large objects (ntext and image) cannot be used in ORDER BY clauses
Can somebody help me fix things up to make it possible to sort the data from the database?
I'd appreciate any kind of help. Thanks in advance!
In T-SQL you can solve this in several ways; maybe you can adapt one of them:

ORDER BY CAST(MyTextCol AS nvarchar(max))

create a view over this table with that field cast as nvarchar(max), and use it instead of your table (even in the future)

ALTER TABLE myTable ALTER COLUMN myTextCol nvarchar(max)

The last one solves your problem and takes no time to run: it's just a metadata operation, so nothing is reorganized in your table for existing rows.
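In a code-first setup, a hedged sketch of the last option is to run the ALTER once against the live database and then use the original LINQ sort (names are taken from the question; the [Column] annotation is assumed removed so the model and database agree):

// One-off schema change; after this the column is no longer a large-object type.
MyDatabase.Database.ExecuteSqlCommand(
    "ALTER TABLE MyTable ALTER COLUMN MyRow nvarchar(max)");

// The sort that previously threw now succeeds.
MyDatabase.MyTable.OrderBy(x => x.MyRow).Load();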
I am developing a web app using ASP.NET, and I am making it compatible with both SQL Server and MySQL databases.
My concern: say I have a set of records in a table, and these records are referenced by other tables. If a user tries to delete a record from this table, I have to check whether the record is referenced by other tables; only if it isn't can the user delete it. I am using foreign keys for many tables, but not for others.
I want this check for every table. The method that comes to mind is to run some SELECT queries against the referencing tables before deleting a record, to see whether any referencing records exist. Is this the only approach? It seems like a headache when a table is referenced by a lot of tables. Can I use a flag or something?
Is there any better way to do this?
I think this might help you:
SELECT
table_name, column_name
FROM
information_schema.key_column_usage
WHERE
referenced_table_name = '<table>'
and referenced_column_name = '<primary key column>'
Please check this link too:
MySQL: How to I find all tables that have foreign keys that reference particular table.column AND have values for those foreign keys?
I think it is a little overkill, and not optimal for performance, to select tables and references to check before each delete. You would be making unnecessary database calls.
Since you tagged ASP.NET: are you using ADO.NET or something similar?
Why not make the normal delete call inside a try block and, in the catch, handle the error message received from the database? Something like:
try
{
    // attempt the normal delete here, e.g. deleteCommand.ExecuteNonQuery();
}
catch (SqlException sqlEx)
{
    if (sqlEx.Message.ToLower().Contains("foreign"))
    {
        return "your user friendly error message";
    }
}
In case you are using foreign keys to constrain the references, you can proceed in the following order.
Say you are using database test and are trying to delete a row from the emp table.
1) List all the tables, with their column names, that reference any column of the table we are going to remove a row from (emp in this case):
select
table_name,column_name,referenced_column_name
from
information_schema.KEY_COLUMN_USAGE
where
REFERENCED_TABLE_NAME = 'emp' and REFERENCED_TABLE_SCHEMA = 'test';
2) For each row of the result, look in the corresponding table_name.column_name for the value the emp row being removed has in referenced_column_name; if any match is found, the row is still referenced.
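A hedged C# sketch of both steps (assuming MySql.Data.MySqlClient; connectionString and empId are placeholders, empId being the referenced key value of the row to delete):

// Assumes: using System; using System.Collections.Generic;
// using MySql.Data.MySqlClient;
using (var conn = new MySqlConnection(connectionString))
{
    conn.Open();

    // Step 1: every column that references emp in schema test.
    var refs = new List<(string Table, string Column)>();
    using (var cmd = new MySqlCommand(
        "select table_name, column_name from information_schema.KEY_COLUMN_USAGE " +
        "where REFERENCED_TABLE_NAME = 'emp' and REFERENCED_TABLE_SCHEMA = 'test'", conn))
    using (var reader = cmd.ExecuteReader())
        while (reader.Read())
            refs.Add((reader.GetString(0), reader.GetString(1)));

    // Step 2: count rows in each referencing table that point at the emp row.
    bool isReferenced = false;
    foreach (var r in refs)
        using (var check = new MySqlCommand(
            "select count(*) from `" + r.Table + "` where `" + r.Column + "` = @id", conn))
        {
            // Table/column names come from information_schema, not user input.
            check.Parameters.AddWithValue("@id", empId);
            if (Convert.ToInt64(check.ExecuteScalar()) > 0) { isReferenced = true; break; }
        }
    // Delete only when isReferenced is false.
}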
I have created a table in MS SQL 2008 with one identity column (start value 1, increment 1) and 4 other columns. I am accessing this DB from C# ASP.NET and push data only for the non-identity columns; the identity column increments itself.
As of now I query for the identity value manually using the remaining four columns, but I run into a problem when other rows have the same values in all four columns: I don't get the exact value I am looking for.
So my question is: is there any way in C# to get the value of the newly created identity column whenever a new record is created?
Thanks.
You can use
SCOPE_IDENTITY()
which returns the identity value of the most recently inserted row in the current scope.
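A minimal ADO.NET sketch (assuming System.Data.SqlClient; table and column names are placeholders):

// Insert and read the new identity back in one round trip.
using (var cmd = new SqlCommand(
    "INSERT INTO MyTable (Col1, Col2, Col3, Col4) VALUES (@c1, @c2, @c3, @c4); " +
    "SELECT CAST(SCOPE_IDENTITY() AS int);", connection))
{
    cmd.Parameters.AddWithValue("@c1", v1);
    cmd.Parameters.AddWithValue("@c2", v2);
    cmd.Parameters.AddWithValue("@c3", v3);
    cmd.Parameters.AddWithValue("@c4", v4);
    int newId = (int)cmd.ExecuteScalar(); // identity of the row just inserted
}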
The answer to your question actually lies in SQL Server. You can run:
SELECT @@IDENTITY
after your insert to get the last inserted row identity.
http://technet.microsoft.com/en-us/library/aa933167(v=sql.80).aspx
EDIT BASED ON COMMENTS:
Consider using SCOPE_IDENTITY() as referenced here:
http://technet.microsoft.com/en-us/library/aa259185(v=sql.80).aspx
In SQL terms, you can OUTPUT the inserted records back if you wish. How you apply this from C# is up to you. Example:
INSERT INTO TABLE_A (SOMETHING, SOMETHINGELSE, RANDOMVAL3)
OUTPUT inserted.A_ID, inserted.SOMETHING, inserted.SOMETHINGELSE, inserted.RANDOMVAL3
SELECT 'ASD','DOSD', 123
But unless you're using MERGE, you can't use OUTPUT to return values from tables joined in an INSERT. That's another matter entirely, though.
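From C#, the OUTPUT row can be read back with ExecuteReader; a sketch reusing the names above (connection is assumed to be an open SqlConnection, and A_ID an int identity):

using (var cmd = new SqlCommand(
    "INSERT INTO TABLE_A (SOMETHING, SOMETHINGELSE, RANDOMVAL3) " +
    "OUTPUT inserted.A_ID, inserted.SOMETHING, inserted.SOMETHINGELSE, inserted.RANDOMVAL3 " +
    "VALUES (@a, @b, @c)", connection))
{
    cmd.Parameters.AddWithValue("@a", "ASD");
    cmd.Parameters.AddWithValue("@b", "DOSD");
    cmd.Parameters.AddWithValue("@c", 123);
    using (var reader = cmd.ExecuteReader())
        if (reader.Read())
        {
            int newId = reader.GetInt32(0); // A_ID of the inserted row
        }
}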
Also, it's hardly good practice to bounce this data between the application and the DB all the time, so I'd look to alternatives if possible.
I need to update a bit field in a table and set this field to true for a specific list of Ids in that table.
The Ids are passed in from an external process.
I guess in pure SQL the most efficient way would be to create a temp table and populate it with the Ids, then join the main table with this and set the bit field accordingly.
I could create a sproc that takes the Ids, but there could be 200-300,000 rows that need this flag set, so that's probably not the most efficient way. Using an IN statement has limitations with respect to the amount of data that can be passed, and on performance.
How can I achieve the above using the Entity Framework?
I guess it's possible to create a sproc that creates a temp table, but this would not exist from the model's perspective.
Is there a way to dynamically add entities at run time? [Or is this approach just going to cause headaches?]
I'm making the assumption above, though, that populating a temp table with 300,000 rows and doing a join would be quicker than calling a sproc 300,000 times :)
[The Ids are Guids]
Is there another approach that I should consider?
For data volumes like 300k rows, I would forget EF. I would do this by having a table such as:
BatchId RowId
Where RowId is the PK of the row we want to update, and BatchId just refers to this "run" of 300k rows (to allow multiple at once, etc.).
I would generate a new BatchId (this could be anything unique; a Guid leaps to mind) and use SqlBulkCopy to insert the records into this table, i.e.
100034 17
100034 22
...
100034 134556
I would then use a single sproc to do the join and update (and delete the batch from the table).
SqlBulkCopy is the fastest way of getting this volume of data to the server; you won't drown in round-trips. EF is object-oriented: nice for lots of scenarios, but not this one.
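A hedged sketch of the SqlBulkCopy step (the staging table name BatchRows and its two columns are assumed; the question says the row ids are Guids):

// Assumes: using System; using System.Data; using System.Data.SqlClient;
var table = new DataTable();
table.Columns.Add("BatchId", typeof(Guid));
table.Columns.Add("RowId", typeof(Guid));

var batchId = Guid.NewGuid(); // one unique id for this run of ~300k rows
foreach (var id in idsFromExternalProcess)
    table.Rows.Add(batchId, id);

// Stream the whole batch to the server in one shot.
using (var bulk = new SqlBulkCopy(connectionString))
{
    bulk.DestinationTableName = "BatchRows";
    bulk.WriteToServer(table);
}
// Then call the single sproc with @batchId to join, update and clean up.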
I'm accepting Marc's response as the answer, but I'd just like to give a little detail on how we implemented the requirement.
Marc's response helped greatly in the formulation of our solution.
We had an aim/guideline to keep within the Entity Framework and not use sprocs, and although our solution may not suit others, it has worked for us.
We created an Item table in the database with BatchId [uniqueidentifier] and ItemId [varchar] columns.
This table was added to the EF model, so we did not use temporary tables.
On upload, this table is populated with the Ids. [We find the inserts are quick enough using EF.]
We then use context.ExecuteStoreCommand to run SQL that joins the Item table to the main table and updates the bit field in the main table for the records that exist for the batch Id created specifically for that session.
We finally clear this table for that batch Id.
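A sketch of that update/cleanup step (table and column names are assumed placeholders):

// Joins the staged ids to the main table, sets the flag for this batch,
// then clears the staged rows; {0} is sent as a parameter.
context.ExecuteStoreCommand(
    @"UPDATE m SET m.MyFlag = 1
      FROM MainTable m
      INNER JOIN Item i ON i.ItemId = m.Id
      WHERE i.BatchId = {0};
      DELETE FROM Item WHERE BatchId = {0};", batchId);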
We got the performance we needed while keeping within our no-sproc goal. [Which not all of us agree with :) but it's a democracy.]
Our exact requirements are a little more complex, but insofar as we needed good update performance from the Entity Framework given our specific restrictions, it works fine.
Liam
Using the ADO.NET MySQL Connector, what is a good way to fetch lots of records (1000+) by primary key?
I have a table with just a few small columns, and a VARCHAR(128) primary key. Currently it has about 100k entries, but this will become more in the future.
In the beginning, I thought I would use the SQL IN statement:
SELECT * FROM `table` WHERE `id` IN ('key1', 'key2', [...], 'key1000')
But with this the query could become very long, and I would also have to manually escape quote characters in the keys, etc.
Now I use a MySQL MEMORY table (tempid INT, id VARCHAR(128)) to first upload all the keys with prepared INSERT statements. Then I make a join to select all the existing keys, after which I clean up the mess in the memory table.
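For reference, a sketch of that approach (assuming MySql.Data.MySqlClient and an open connection; a TEMPORARY table is scoped to the connection, so cleanup is automatic):

// Staging table for the keys.
using (var create = new MySqlCommand(
    "CREATE TEMPORARY TABLE keylist (id VARCHAR(128) PRIMARY KEY) ENGINE=MEMORY", conn))
    create.ExecuteNonQuery();

// Prepared, parameterized inserts: no manual escaping of the keys.
using (var insert = new MySqlCommand("INSERT INTO keylist (id) VALUES (@id)", conn))
{
    insert.Parameters.Add("@id", MySqlDbType.VarChar);
    insert.Prepare();
    foreach (var key in keys)
    {
        insert.Parameters["@id"].Value = key;
        insert.ExecuteNonQuery();
    }
}

// Join back to the data table to fetch the matching rows.
using (var select = new MySqlCommand(
    "SELECT t.* FROM `table` t INNER JOIN keylist k ON k.id = t.id", conn))
using (var reader = select.ExecuteReader())
{
    // read rows here
}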
Is there a better way to do this?
Note: OK, maybe it's not the best idea to have a string as the primary key, but the question would be the same if the VARCHAR column were a normal index.
Temporary table: So far it seems the solution is to put the data into a temporary table, and then JOIN, which is basically what I currently do (see above).
I've dealt with a similar situation in a payroll system where the user needed to generate reports based on a selection of employees (e.g. employees X, Y, Z... or employees that work in certain offices). I built a filter window with all the employees and all the attributes that could be considered filter criteria, and had that window save the selected employee ids in a filter table in the database. I did this because:
Generating SELECT queries with a dynamically generated IN filter is just ugly and highly impractical.
I could join that table in all my queries that needed to use the filter window.
It might not be the best solution out there, but it has served, and still serves, me very well.
If your primary keys follow some pattern, you can select where key like 'abc%'.
If you want to get them out 1000 at a time, in some kind of sequence, you may want to add another int column to your data table with a clustered index. This would do the same job as your current memory table: allow you to select by int range.
What is the nature of the primary key? Is it anything meaningful?
If you're concerned about performance, I definitely wouldn't recommend an IN clause. It's much better to do an INNER JOIN if you can.
You can either insert all the values into a temporary table first and join to that, or do a sub-select. Best is to actually profile the changes and figure out what works best for you.
Why not consider using a table-valued parameter to push the keys in the form of a DataTable and fetch the matching records back?
Or
Simply write a private method that concatenates all the key codes from a provided collection into a single string, and pass that string to the query.
I think it may solve your problem.
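For the concatenation route, a parameterized variant avoids the escaping problem mentioned in the question (a sketch assuming MySql.Data.MySqlClient, an open connection, and keys as an IList<string>):

// Assumes: using System.Linq;
// Builds "... WHERE `id` IN (@k0, @k1, ...)" with one parameter per key,
// so quoting/escaping is handled by the driver.
var names = keys.Select((k, i) => "@k" + i).ToArray();
using (var cmd = new MySqlCommand(
    "SELECT * FROM `table` WHERE `id` IN (" + string.Join(", ", names) + ")", conn))
{
    for (int i = 0; i < keys.Count; i++)
        cmd.Parameters.AddWithValue(names[i], keys[i]);
    using (var reader = cmd.ExecuteReader())
    {
        // read matching rows here
    }
}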