After searching through Google I came to know that the SQLSRV32 ODBC driver does not support MARS. What are the workarounds for this? One way, I guess, is to stop looping through the results of several SQL commands. But in my case I have to create 30-40 tables and insert about 400-500 rows of data at a time. Is it a good idea to open and close the connection for every single SQL command? Please help.
Don't open and close the connection for each statement; open the connection once and create multiple commands that all use that one connection. Inserting ~15,000 records shouldn't take too long. I don't know if ODBC has support for it, but you can also look into SQL Server's Bulk Copy functionality to do something like this.
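A minimal sketch of that pattern, assuming an ODBC connection string and a placeholder table (neither is from the original question):

```csharp
// A sketch, not the original code: open one ODBC connection and reuse it
// for every command. The connection string and table are placeholders.
using System.Data.Odbc;

class BatchInserter
{
    static void Main()
    {
        string connStr = "Driver={SQL Server};Server=myServer;Database=myDb;Trusted_Connection=yes;";
        using (var conn = new OdbcConnection(connStr))
        {
            conn.Open(); // open once
            for (int i = 0; i < 500; i++)
            {
                // ODBC uses positional '?' parameter markers
                using (var cmd = new OdbcCommand("INSERT INTO MyTable (Id) VALUES (?)", conn))
                {
                    cmd.Parameters.AddWithValue("@Id", i);
                    cmd.ExecuteNonQuery();
                }
            }
        } // one close instead of 500 open/close cycles
    }
}
```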
A final word about MARS. MARS only matters when you want to have multiple simultaneous queries on the same connection that are returning result sets. That isn't really an issue here as you are doing inserts.
Also, there isn't anything stopping you from running multiple threads to do the inserts. I would perhaps use one thread per table, with at most one thread per core. Parallel.ForEach could help out here.
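A hedged sketch of the Parallel.ForEach idea (requires .NET 4): one worker per table, each with its own connection, since ADO.NET connections are not thread-safe. The table list and InsertRowsFor helper are hypothetical:

```csharp
// A sketch: the scheduler decides how many tables run at once,
// roughly one worker per core. Names here are placeholders.
using System.Collections.Generic;
using System.Data.Odbc;
using System.Threading.Tasks;

class ParallelInserts
{
    static void Run(IEnumerable<string> tableNames, string connStr)
    {
        Parallel.ForEach(tableNames, tableName =>
        {
            using (var conn = new OdbcConnection(connStr)) // own connection per worker
            {
                conn.Open();
                InsertRowsFor(tableName, conn);
            }
        });
    }

    static void InsertRowsFor(string tableName, OdbcConnection conn)
    {
        /* the INSERT loop for this table */
    }
}
```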
I have a Windows service which has five threads. Each thread picks a different Excel file, then reads the Excel rows and inserts them into the database. Is it possible to INSERT in parallel? Currently I am using a single class with a lock for inserting.
If you are inserting and the key is created for you by the DBMS, then there should be no problem, and no need to lock.
This depends on your database. If your database is capable of handling multiple connections (which it should be nowadays), parallel inserts are fine.
It has nothing to do with the class where you do the insert, though. Any locking there is not really necessary (unless, of course, your database does not support multiple connections, which I seriously doubt).
Make sure it's in a transaction, and get rid of the lock! You should be fine...assuming whatever database you're using supports transactions.
Most modern databases will support multiple writers; it's safer to use a transaction in case anything goes wrong.
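To illustrate the answers above, a sketch of lock-free parallel inserts, with each worker thread owning its connection and transaction (table and column names are assumptions):

```csharp
// A sketch: each worker thread gets its own connection and wraps its
// inserts in a transaction, so no shared lock object is needed.
using System.Collections.Generic;
using System.Data.SqlClient;

class ExcelImportWorker
{
    // Called once per worker thread, one Excel file each.
    public static void ImportRows(string connStr, IEnumerable<object[]> rows)
    {
        using (var conn = new SqlConnection(connStr)) // own connection per thread
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                foreach (var row in rows)
                {
                    using (var cmd = new SqlCommand(
                        "INSERT INTO ImportedData (Col1, Col2) VALUES (@c1, @c2)", conn, tx))
                    {
                        cmd.Parameters.AddWithValue("@c1", row[0]);
                        cmd.Parameters.AddWithValue("@c2", row[1]);
                        cmd.ExecuteNonQuery();
                    }
                }
                tx.Commit(); // all-or-nothing per file
            }
        }
    }
}
```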
I have a DataReader... I use the result of the DataReader as a parameter for another DataReader that is connected to a command of stored-procedure type. It works fast for now, but I worry about the time when my database is filled with information. How can I speed things up? Thanks
Likely, your initial query could stand to join to the results generated by the sproc.
Essentially, you have 2 database round-trips instead of one. This may be a performance problem if you call this frequently and the result is small and you have already optimized both query and the stored procedure (so the round-trip overhead becomes significant relative to the actual useful work).
Benchmark and see if this piece of functionality is actually a bottleneck. If yes, you may try to "merge" these two operations at the SQL level, so they can be executed server-side in one go.
I'm not sure if this is related to your question, but keep in mind that (depending on your DBMS / ADO.NET provider) multiple active readers on the same connection may or may not be supported. Are you closing the first DbDataReader before opening the second one? If not, and you happen to switch to a different DBMS, there may be trouble. If memory serves me well, Oracle (ODP.NET) and DB2 support multiple readers, while MS SQL Server and PostgreSQL (Npgsql) don't.
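A sketch of the safe pattern, buffering the first reader's values so it is closed before the stored-procedure reader opens, avoiding any dependence on MARS (query, procedure, and column names are placeholders):

```csharp
// A sketch, assuming 'conn' is an open SqlConnection.
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

class TwoStepQuery
{
    static void Run(SqlConnection conn)
    {
        var ids = new List<int>();
        using (var cmd = new SqlCommand("SELECT Id FROM Items", conn))
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                ids.Add(reader.GetInt32(0));
        } // first reader is closed here

        foreach (int id in ids)
        {
            using (var cmd = new SqlCommand("dbo.GetItemDetails", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@Id", id);
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read()) { /* consume the details row */ }
                }
            }
        }
    }
}
```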
I have a GUI where different parts of the information shown are extracted from a database. In order for the GUI not to freeze up, I've tried putting the database queries in BackgroundWorkers. Because these access the database asynchronously, I get an exception telling me the database connection is already open and being used by another.
Is it possible to create a queue for database access?
I've looked into Task and ContinueWith, but since I code against .NET Framework 3.5 this is not an option.
What is the DB engine you're using? Most modern databases are optimized for concurrent operations, so there's no need to queue anything.
The thing you're apparently doing wrong is reusing the same IDbConnection instance across different threads. That's a no-no: each thread has to have its own instance.
I think your problem is in the way you get a connection to the database. If you want to fire separate queries you could use separate connections for separate requests. If you enable connection pooling this does not add a lot of overhead.
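A minimal sketch of that approach: a short-lived pooled connection per query, safe to call concurrently from several BackgroundWorkers (the query text and connection string are placeholders):

```csharp
// A sketch: let ADO.NET connection pooling absorb the cost of opening a
// connection per request, instead of sharing one SqlConnection across threads.
using System.Data;
using System.Data.SqlClient;

class GuiQueries
{
    public static DataTable LoadPanelData(string connStr, string query)
    {
        using (var conn = new SqlConnection(connStr)) // taken from the pool
        using (var cmd = new SqlCommand(query, conn))
        {
            conn.Open();
            var table = new DataTable();
            table.Load(cmd.ExecuteReader());
            return table;
        } // Dispose returns the connection to the pool rather than closing it
    }
}
```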
Try to use pooled connections. Also, as per your description, you're trying to open a connection on an unclosed connection object.
This is not a question about optimizing a SQL command. I'm wondering what ways there are to ensure that a SQL connection is kept open and ready to handle a command as efficiently as possible.
What I'm seeing right now is that I can execute a SQL command and that command will take ~1s; additional executions take ~300ms. This is after the command has previously been executed against the SQL server (from another application instance), so the SQL cache should be fully populated for the executed query prior to this application's initial execution. As long as I continually re-execute the query I see times of about 300ms, but if I leave the application idle for 5-10 minutes and return, the next request is back to ~1s (same as the initial request).
Is there a way, via the connection string or some property on the SqlConnection, to direct the framework to keep the connection hydrated and ready to handle queries efficiently?
Have you checked the execution plans for your procedures? Execution plans, I believe, are loaded into memory on the server and then get cleared after certain periods of time, or depending on what tables etc. are accessed in the procedures. We've had cases where simplifying stored procedures (perhaps splitting them) reduces the amount of work the database server has to do in calculating the plans, and ultimately reduces the cost of the first call. You can issue commands that force a stored procedure to recompile each time, to test whether you are reducing the initial call time.
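A hedged sketch of such a test, calling SQL Server's sp_recompile system procedure so the next execution pays the compilation cost again (the procedure name is a placeholder, and 'conn' is assumed open):

```csharp
// sp_recompile is a standard SQL Server system procedure that marks an
// object so its plan is rebuilt on the next call.
using System.Data.SqlClient;

class RecompileTest
{
    static void ForceRecompile(SqlConnection conn)
    {
        using (var cmd = new SqlCommand("EXEC sp_recompile 'dbo.MyProcedure'", conn))
        {
            cmd.ExecuteNonQuery(); // the next execution recompiles the plan
        }
    }
}
```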
We've had cases where the complexity of a stored procedure made the database server continually recompile based on different parameters, which drastically slowed it down; splitting the SP or simplifying large select statements into multiple update statements etc. helped a considerable amount.
Another idea is intermittently calling a simple GETDATE() or similar every so often so that the SQL server stays awake (hope that makes sense), much the same as keeping an ASP.NET app in memory in IIS.
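A sketch of that keep-alive idea, assuming a five-minute interval (the interval and connection string are placeholders; tune the interval to the pool's idle timeout):

```csharp
// A background timer that runs a trivial query on a pooled connection
// every few minutes, keeping the connection and caches warm.
using System;
using System.Data.SqlClient;
using System.Threading;

class KeepAlive
{
    // Keep a reference to the returned Timer so it is not garbage-collected.
    static Timer Start(string connStr)
    {
        return new Timer(_ =>
        {
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand("SELECT GETDATE()", conn))
            {
                conn.Open();
                cmd.ExecuteScalar(); // cheap round-trip
            }
        }, null, TimeSpan.Zero, TimeSpan.FromMinutes(5));
    }
}
```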
The default value for open connections in a .NET connection pool is zero.
You can adjust this value in your connection string to 1 or more:
"data source=dbserver;...Asynchronous Processing=true;Min Pool Size=1"
See more about these options in MSDN.
You keep it open by not closing it. :) But that's not advisable, since connection pooling will handle connection management for you. Do you have it enabled?
By default, connection pooling is enabled in ADO.NET; it is controlled through the connection string used by the application. More info in Using Connection Pooling with SQL Server.
If you use more than one database connection, it may be more efficient. Having one database connection means the best possible access speed is always limited to sequential execution, whereas having more than one connection gives the runtime an opportunity to overlap concurrent access a little more. I guess you're using .NET?
Also, if you're issuing the same SQL statement repeatedly, it's possible your database server is caching the result for a short period of time, therefore returning the result set more quickly.
I am writing an application that logs status updates (GPS locations) from devices to a database. The updates occur at a set interval for each device, which is currently every 3 seconds. I'm using a simple table in SQL Server 08 for storing each update.
I've noticed that running the inserts is an area of slowdown in my application. It's not a severe slowdown, but it is noticeable. Naturally, I'd like to write to the database as efficiently as possible. I have an idea to improve the performance and am looking for input and advice to see if it will help:
The status updates come in from an asynchronous Socket thread. In my current implementation, the database insert call is executed from this thread. I'm thinking I can create a queue for holding update data that the Socket thread can quickly add its update to and then go on its merry way. There would then be a separate thread whose sole responsibility would be checking the update queue and inserting the updates into the database.
Basically this whole process rests on the assumption that writing to the database from one location with a bunch of data all at once is more efficient than writing one row of data at a random time. Is my assumption correct, or way off base? Also, on the SQL side, is there a command to tell it to write a bunch of rows at once that would improve write performance?
This is how the database is being written to:
I'm using LinqToSQL in C#, so for each insert, I first create a DataContext instance. From the DataContext object I then call a stored procedure which inserts the location update.
The table is indexed by datetime, for the time of the update.
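For illustration, a minimal sketch of the queue-and-writer design described above (all type and method names are hypothetical):

```csharp
// A sketch only: the socket thread calls Enqueue and returns immediately;
// a dedicated thread drains the queue and writes whole batches.
using System.Collections.Generic;
using System.Threading;

class LocationUpdate { /* device id, position, timestamp */ }

class UpdateWriter
{
    private readonly Queue<LocationUpdate> _queue = new Queue<LocationUpdate>();
    private readonly object _sync = new object();

    // Called from the socket thread; cheap and non-blocking.
    public void Enqueue(LocationUpdate update)
    {
        lock (_sync) _queue.Enqueue(update);
    }

    // Runs on the dedicated writer thread.
    public void WriterLoop()
    {
        while (true)
        {
            List<LocationUpdate> batch;
            lock (_sync)
            {
                batch = new List<LocationUpdate>(_queue);
                _queue.Clear();
            }
            if (batch.Count > 0)
                InsertBatch(batch); // one round-trip for many rows
            Thread.Sleep(1000);     // batch roughly once per second
        }
    }

    private void InsertBatch(List<LocationUpdate> batch)
    {
        /* e.g. SqlBulkCopy or a multi-row INSERT, as discussed below */
    }
}
```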
Have a look at the SqlBulkCopy class - this allows you to use BCP to insert chunks of data very quickly.
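A minimal SqlBulkCopy sketch, assuming the buffered updates have been collected in a DataTable whose columns match the destination (table name and batch size are placeholders):

```csharp
// One bulk operation instead of N individual INSERTs.
using System.Data;
using System.Data.SqlClient;

class BulkWriter
{
    static void WriteBatch(string connStr, DataTable updates)
    {
        using (var bulk = new SqlBulkCopy(connStr))
        {
            bulk.DestinationTableName = "dbo.LocationUpdates";
            bulk.BatchSize = 500;        // rows sent per round-trip
            bulk.WriteToServer(updates); // streams the whole table
        }
    }
}
```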
Also, make sure your indexes are efficient. If you have a clustered index on anything that does not increase sequentially (integer, date) then you will suffer performance slowdowns as the pages are filled up.
Have you looked at MSMQ (Microsoft Message Queuing)? That seems to me an option worth taking a look at.
Yes, inserting in batches will typically be faster than separate inserts, given your description. Each insert requires a connection to be set up and packets to be transferred. If a single small insert takes one packet and you issue three of them, that's three round-trips; if the three inserts are small enough that they can all fit in one packet, batching them will help.
Quantifying it is difficult just based on your description - you'll need to do testing for that. For example, if you are keeping a dedicated connection open at all times anyway, as hova suggests, then you might see less of an impact.
Another area you might want to take a look at is whether you are setting up and tearing down a connection for each insert. That alone might make a performance improvement, negating the need for batching.
You'll also want to have as few indexes on the table as possible.
It sounds like a good idea. Why not give it a shot and see how it performs?
On the SQL side you'd want to have a look at making sure you are using parameterized queries.
Also batching your INSERT statements will certainly increase the performance.
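A hedged sketch combining both suggestions: one parameterized INSERT carrying several rows, which multi-row VALUES supports from SQL Server 2008, up to 1000 rows per statement ('conn' is assumed open; table and column names are assumptions):

```csharp
// The whole batch goes to the server in one parameterized round-trip.
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Text;

class BatchedInsert
{
    static void InsertBatch(SqlConnection conn, IList<KeyValuePair<int, string>> rows)
    {
        var sql = new StringBuilder("INSERT INTO LocationUpdates (DeviceId, Position) VALUES ");
        using (var cmd = new SqlCommand())
        {
            for (int i = 0; i < rows.Count; i++)
            {
                if (i > 0) sql.Append(", ");
                sql.AppendFormat("(@d{0}, @p{0})", i); // one parameter pair per row
                cmd.Parameters.AddWithValue("@d" + i, rows[i].Key);
                cmd.Parameters.AddWithValue("@p" + i, rows[i].Value);
            }
            cmd.Connection = conn;
            cmd.CommandText = sql.ToString();
            cmd.ExecuteNonQuery();
        }
    }
}
```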
Connection management is also key; of course, that depends on how the application is built and whether it depends on a connection always being there.
Aren't you afraid of losing data while you are collecting it for the batch copy?
I'm writing an application doing the same. At the start I will have to write data from 3.5k GPS devices. One device should send data each minute, but it can send faster. The target number of devices is 10.5k.
I'm wondering about insert performance too. For now I'm saving the received data to the database on every packet, using plain ADO.NET (IDbCommand) and a stored procedure. On my test server (Xeon 3.4 GHz and one 1 TB hard disk - a normal desktop ;) it currently takes 1 ms or less.
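For reference, a sketch of such a per-packet write through the plain ADO.NET interfaces ('conn' is assumed open; the procedure and parameter names are placeholders, not from the original post):

```csharp
// One stored-procedure call per received packet.
using System.Data;

class PacketWriter
{
    static void SavePacket(IDbConnection conn, int deviceId, string payload)
    {
        using (IDbCommand cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = "dbo.SaveGpsPacket";

            IDbDataParameter device = cmd.CreateParameter();
            device.ParameterName = "@DeviceId";
            device.Value = deviceId;
            cmd.Parameters.Add(device);

            IDbDataParameter data = cmd.CreateParameter();
            data.ParameterName = "@Payload";
            data.Value = payload;
            cmd.Parameters.Add(data);

            cmd.ExecuteNonQuery(); // one row per packet
        }
    }
}
```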
@GRIMUS - should I be wondering if there will be more devices?