Automatically manage SQL Server database size - C#

There are 2 .NET services which use 2 SQL Server databases. I am currently using SQL Express so the maximum database size is an issue.
When the size approaches the 10GB limit (or some record limit), I would like to automatically delete the oldest X amount of records to free up some space.
This is not a production environment and I REALLY don't need the old data, I just want to keep the data "fresh".
Should this be done at the service level? I can modify my services to periodically check spaceused and execute a manual "clean up" (whether it's a delete, archive, etc.). I'm not sure how to do this on the SQL level, however.

Since you are using SQL Express, you will need to do this at the service level on some schedule. You will first need to delete the rows out of the table(s) that you want to purge the data from. Something like:
delete BigOleTable where LoggedDate < dateadd(yy,-1,getdate())
that will get rid of stuff older than a year.
Then you will need to shrink the database, and how you do that depends on your recovery model. If you're in full recovery, you'll need to back up the transaction log and then issue a SHRINKDATABASE, as Tanner alluded to above.
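If you want to drive both steps from the service, here is a minimal C# sketch, assuming the simple recovery model, the BigOleTable/LoggedDate names from the DELETE above, and a connection string that points at the database you want to trim:

using System.Data.SqlClient;

class DatabaseCleanup
{
    public static void PurgeAndShrink(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Remove rows older than one year (same DELETE as above).
            using (var delete = new SqlCommand(
                "delete BigOleTable where LoggedDate < dateadd(yy, -1, getdate())", conn))
            {
                delete.CommandTimeout = 600;   // large deletes can take a while
                delete.ExecuteNonQuery();
            }

            // Reclaim the freed space so the file stays under the Express size limit.
            // DBCC SHRINKDATABASE (0) shrinks the current database.
            using (var shrink = new SqlCommand("DBCC SHRINKDATABASE (0)", conn))
            {
                shrink.CommandTimeout = 600;
                shrink.ExecuteNonQuery();
            }
        }
    }
}

If the database were in full recovery instead, you would back up the transaction log between the delete and the shrink, as noted above.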

You can create a job that does this or, better, use SSIS (see this: http://technet.microsoft.com/en-us/library/ms181153%28v=sql.105%29.aspx).
You can use the sp_spaceused procedure (http://technet.microsoft.com/en-us/library/ms188776.aspx) to query the space used, and if it exceeds a threshold you could delete data.
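A sketch of that threshold check from C#, assuming sp_spaceused's first result set reports database_size as a string in megabytes (the 9000 MB threshold is just an illustrative number):

using System.Data;
using System.Data.SqlClient;
using System.Globalization;

class SpaceMonitor
{
    const double ThresholdMb = 9000;   // start purging well before the 10 GB limit

    public static bool IsCleanupNeeded(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("sp_spaceused", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read()) return false;

                // First result set: database_name, database_size, unallocated space.
                string sizeText = reader["database_size"].ToString();   // e.g. "9500.00 MB"
                double sizeMb = double.Parse(sizeText.Split(' ')[0], CultureInfo.InvariantCulture);
                return sizeMb >= ThresholdMb;
            }
        }
    }
}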

Related

Insert Records in SQL Server from Oracle C#

I am working on a WinForms application in C# where I have to import some 200,000 records from an Oracle view and dump them into a SQL Server table. I am not sure how to approach this problem: should I use a DataTable to hold all these records and then store them in SQL Server, or should I use some DB link, which I am not familiar with?
Any suggestions/recommendations will be greatly appreciated. Thanks!
A lot of how you handle it depends on the data and how it is used.
If it is dynamic data that is often changing, then having a live link might be better, although there still may be a speed issue. Or if it changes some but not often, reloading it to SQL Server using an SSIS package might be a good option. If having two copies of the data is simpler (and it's data that doesn't change), then just doing a one-time copy over might be acceptable.
Setting up a DB link is not too hard and is recommended if you're going to build an SSIS package or access the data through SQL Server while leaving it in Oracle.
If you're using the data and getting it fresh each month, and not modifying it in SQL, then creating an SSIS task and a DB link would be a reasonable solution. Create the link and make sure it connects, then use SSIS to truncate your SQL table and reload it from Oracle. Run the package during a time the application is not using the SQL copy of the data, and make a job to run the package. It might be reasonable to back up the table before truncating, or to copy it to a temporary location, or to have some sort of process to recover in case of a problem loading the data. Something like the following:
Job:
  Step 1: Back up the table
  Step 2 (if step 1 succeeded): Run the SSIS package
    Truncate the table
    Reload the table using the DB link
  Step 3 (if step 2 failed): Restore from backup
Open 2 connections. Read from one and write to the other. There is an Oracle .Net driver. It's called ODP.NET.
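A sketch of that two-connection approach, streaming rows from the Oracle view with the managed ODP.NET driver into SQL Server via SqlBulkCopy; the view and destination table names (SOME_VIEW, dbo.TargetTable) are placeholders, and the columns are assumed to line up by position:

using Oracle.ManagedDataAccess.Client;   // ODP.NET managed driver
using System.Data.SqlClient;

class OracleToSqlCopy
{
    public static void Copy(string oracleConnStr, string sqlConnStr)
    {
        using (var source = new OracleConnection(oracleConnStr))
        using (var cmd = new OracleCommand("SELECT * FROM SOME_VIEW", source))
        {
            source.Open();
            using (var reader = cmd.ExecuteReader())
            using (var bulk = new SqlBulkCopy(sqlConnStr))
            {
                bulk.DestinationTableName = "dbo.TargetTable";
                bulk.BatchSize = 5000;        // keep transaction log pressure down
                bulk.BulkCopyTimeout = 0;     // no timeout for the 200,000-row load
                // Columns map by ordinal here; add bulk.ColumnMappings entries
                // if the view and the table don't line up.
                bulk.WriteToServer(reader);   // streams rows; no DataTable needed
            }
        }
    }
}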

Scrollable ODBC cursor in C#

I'm a C++ programmer and I'm not familiar with the .NET database model. I usually use IDataReader (OdbcDataReader, OleDbDataReader, or SqlDataReader) to read data from a database. Sometimes when I need a bulk of data I use a DataAdapter, but what should I do to achieve the functionality of scrollable cursors that exists in native libraries like ODBC?
Thanks to all of you for your answers, but I am in a situation where I can't accept them; of course this is my fault for not explaining my problem completely. I explained it in a comment on one of the answers that has now been removed.
I have to write a program that will act as a proxy between client side program and MSSQL, for this library I have following requirements:
My program should be compatible with MSSQL2000
I don't know all the tables and queries that will be sent by the user; I should simply add some information to them, make a log, etc., and then execute them against MSSQL, so it is really hard to use techniques based on ordered field(s) of the query or the primary key of the table. (All my work is in one database, but that database is huge and may change over time.)
Only a part of the data is needed by the client. Most DBMSs support LIMIT/OFFSET; unfortunately MSSQL does not, ROW_NUMBER does not exist in MSSQL 2000, and even if it were supported I would again need to understand the program logic, which requires parsing the SQL command. (Actually I wrote a parsing library with boost::spirit, but that's native code, and besides I'm not yet 100% sure about its functionality.)
I may have multiple clients, but most of the queries they send are one of a few predefined queries (users still send custom queries, of course, but those are about 30% of all queries). So I think I can open some scrollable cursors and respond to clients using those cursors and a custom cache.
The server machine and its MSSQL instance will be dedicated to my program, so I really want to use all of the power of the server and the DBMS to achieve this functionality.
So now:
What is the problem with using scrollable cursors, and why should I avoid them?
How can I use scrollable cursors in .NET?
In SQL Server you can write paged queries like the ones below; you handle the page number easily from the application. You do not need to create cursors for this task.
For SQL Server 2005 or higher
SELECT * FROM ( SELECT *, ROW_NUMBER() OVER (ORDER BY ID) AS ROW FROM TABLEA ) AS ALIAS
WHERE ROW > 40
AND ROW <= 50
For SQL Server 2000
SELECT TOP 10 T.* FROM TABLEA AS T WHERE T.ID NOT IN
( SELECT TOP 40 ID FROM TABLEA ORDER BY ID DESC )
ORDER BY T.ID DESC
PS: edited to include support for SQL Server 2000
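For what it's worth, here is a sketch of calling the 2005+ version of that paging query from C#, with the page number and page size supplied by the application (TABLEA and its ID column are the placeholder names used above):

using System.Collections.Generic;
using System.Data.SqlClient;

class Pager
{
    public static List<object[]> GetPage(string connStr, int pageNumber, int pageSize)
    {
        const string sql = @"
            SELECT * FROM
              ( SELECT *, ROW_NUMBER() OVER (ORDER BY ID) AS ROW FROM TABLEA ) AS ALIAS
            WHERE ROW > @first AND ROW <= @last";

        var rows = new List<object[]>();
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            // Page 1 with size 10 gives ROW 1..10, page 2 gives ROW 11..20, and so on.
            cmd.Parameters.AddWithValue("@first", (pageNumber - 1) * pageSize);
            cmd.Parameters.AddWithValue("@last", pageNumber * pageSize);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    var values = new object[reader.FieldCount];
                    reader.GetValues(values);
                    rows.Add(values);
                }
            }
        }
        return rows;
    }
}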
I usually use DataReader.Read() to skip all the rows that I do not want when doing paging on a DB which does not support paging.
If you don't want to build the SQL paged query yourself you are free to use my paging class: https://github.com/jgauffin/Griffin.Data/blob/master/src/Griffin.Data/BasicLayer/Paging/SqlServerPager.cs
When Microsoft designed the ADO.NET API, they made the decision to expose only firehose cursors (IDataReader etc). This may or may not actually pose a problem for you. You say that you want "functionality of scrollable cursors", but that can mean all sorts of things, not just paging, and each particular use case can be tackled in a variety of ways. For example:
Requirement: The user should be able to arbitrarily page up and down the resultset.
Retrieve only one page of data at a time, e.g. using the ROW_NUMBER() function. This is more efficient than scrolling through a cursor.
Requirement: I have an extremely large data set and I only want to process one row at a time to avoid running out of memory.
Use the firehose cursor provided by ADO.NET. Note that this is only practical if (a) you don't need to hit the database at all during the loop, or (b) you have MARS configured in your connection string.
Simulate a keyset cursor by retrieving the set of unique identifiers into an array, then loop through the array and read one row of data at a time (see the sketch after this list).
Requirement: I am doing a complicated calculation that involves moving forwards and backwards through the resultset.
You should be able to re-write your algorithm to eliminate this requirement. For example, read one set of rows, process them, read another set of rows, process them, etc.
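As an illustration of the simulated keyset cursor mentioned above, here is a rough sketch; the Orders/OrderID names are purely illustrative and not tied to any actual schema:

using System.Collections.Generic;
using System.Data.SqlClient;

class KeysetReader
{
    private readonly string _connStr;
    private readonly List<int> _keys = new List<int>();

    public KeysetReader(string connStr)
    {
        _connStr = connStr;
        // Read the full key list once; this is the "keyset".
        using (var conn = new SqlConnection(_connStr))
        using (var cmd = new SqlCommand("SELECT OrderID FROM Orders ORDER BY OrderID", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    _keys.Add(reader.GetInt32(0));
        }
    }

    public int Count { get { return _keys.Count; } }

    // Scroll to any position (forwards or backwards) by index into the key list.
    public object[] ReadRow(int position)
    {
        using (var conn = new SqlConnection(_connStr))
        using (var cmd = new SqlCommand("SELECT * FROM Orders WHERE OrderID = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", _keys[position]);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read()) return null;   // row deleted since the keys were read
                var values = new object[reader.FieldCount];
                reader.GetValues(values);
                return values;
            }
        }
    }
}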
UPDATE (more information provided in the question)
Your business requirements are asking too much. You have to handle arbitrary queries that assume the presence of scrollable cursors, but you can't provide scrollable cursors, and you can't re-write the client code to not use scrollable cursors. That's an impossible position to be in. I recommend you stick with what you currently have (C++ and ODBC) and don't bother trying to re-write it in .NET.
I don't think cursors will work for your particular case. The main reason is that you have 3 tiers. But let's take two steps back.
Most 3-tier applications have a stateless middle tier (your C++ code). Caching is fine, since it is really just an optimization and does not create any real state in the middle tier. The middle tier normally keeps a small number of open sessions to the database, because opening a DB session is expensive for the processor, and once a session is open a set amount of RAM is reserved at the database server. When a request is received by the middle tier, it is processed and handed on to the SQL database. An algorithm may be used to pick any of the open sessions, or it can even be done at random. In this model it is not possible to know which session will receive the next request. Cursors belong to the session that received the original query request, so you can't really expect that the receiving session will be the one that has your open cursor.
The 3-tier model I described is used mainly for web applications so they can scale to hundreds or thousands of clients, where SQL servers would never be able to open that many sessions. Microsoft ADO.NET already has many features to support the kind of architecture I described, so it is not very hard to implement, and the same approach is used even in non-web applications depending on the circumstances. You could potentially keep track of your sessions so you could open a single session per client, but I would first make sure that the use case justifies that. Know that open cursors can take up a lot of resources as well.
Cursors still have a place within a single transaction, it's just hard to keep them open so that the client application can fetch/update values within the result set.
What I would suggest is that you do the following within the query transaction: store in a separate table the primary key values of the main table in your query, and on that separate table include other values such as a session ID and a row number. Return a few of the first rows by linking to the new table in the original query, and in subsequent calls just query the corresponding rows again by linking to your new table. You will need an equivalent of a caching mechanism to purge old data and to refresh the result set according to your needs.

How to synchronize 2 databases

I want to sync specific records between 2 databases.
Let's suppose I have two databases:
1. Shop
2. Stock
Now let's suppose a user changes the price of a specific product in Stock. I want to change this product's price in Shop as well.
What I worked out (assuming the Internet connection is stable) is:
When a price changes in Stock, I invoke a web service; this service inserts an entry into a web data table of prices.
On the Shop side, I poll that web data table through the web service every 20 minutes, and if I find any new entry I update the relevant product price in Shop.
Another option I thought about was replication, but we are using the Express edition of SQL Server, and as far as I know the Express edition cannot act as a publisher.
Is my first option efficient for this purpose, or am I missing something and there is a better alternative to accomplish this?
You could have a trigger on the table, as pRime says above, but instead of writing directly to the other database, write the changes to a local "staging" table and then schedule a task every 20 minutes or so to send the updates to the second DB.
You could set up the second DB as a Linked Server.
This way you avoid effectively making the table the trigger is on read-only when the connection between the two DBs goes down.
You can create a trigger on the Stock table.
CREATE TRIGGER triggerName
ON [Stock].[dbo].[products]
AFTER UPDATE
AS
IF ( UPDATE (productPrice))
BEGIN
--insert to shop
END
GO
If you can't use the MS SQL Server replication feature for this situation (it requires a non-Express edition, as you already identified; see http://msdn.microsoft.com/en-us/library/ms151198.aspx), then another option is to use the MS Sync Framework (it can work with databases down to SQL CE, files, and even custom data sources) - see http://msdn.microsoft.com/en-us/library/bb726002.aspx .
If you really want to implement this in code yourself (I strongly recommend against that), then implement it as a "push" scenario:
DB triggers which fill staging tables
a Windows Service which checks for changes in the staging tables and applies them (a minimal sketch follows this list)
conflict resolution rules
complete logging of all this to be able to analyze discrepancies (just in case)
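A minimal sketch of that polling service, assuming a staging table Stock.dbo.PriceChanges (Id, ProductId, NewPrice, Processed) filled by the trigger and a Shop.dbo.Products table to update; conflict resolution and logging are left out:

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading;

class PriceSyncWorker
{
    public static void Run(string stockConnStr, string shopConnStr)
    {
        while (true)
        {
            // Collect the unprocessed staging rows first (Id, ProductId, NewPrice).
            var pending = new List<Tuple<int, int, decimal>>();
            using (var stock = new SqlConnection(stockConnStr))
            using (var cmd = new SqlCommand(
                "SELECT Id, ProductId, NewPrice FROM dbo.PriceChanges WHERE Processed = 0", stock))
            {
                stock.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        pending.Add(Tuple.Create(reader.GetInt32(0), reader.GetInt32(1), reader.GetDecimal(2)));
            }

            foreach (var change in pending)
            {
                // Apply the change to the Shop database.
                using (var shop = new SqlConnection(shopConnStr))
                using (var update = new SqlCommand(
                    "UPDATE dbo.Products SET Price = @price WHERE ProductId = @id", shop))
                {
                    update.Parameters.AddWithValue("@price", change.Item3);
                    update.Parameters.AddWithValue("@id", change.Item2);
                    shop.Open();
                    update.ExecuteNonQuery();
                }

                // Mark only the row we just applied as processed.
                using (var stock = new SqlConnection(stockConnStr))
                using (var mark = new SqlCommand(
                    "UPDATE dbo.PriceChanges SET Processed = 1 WHERE Id = @id", stock))
                {
                    mark.Parameters.AddWithValue("@id", change.Item1);
                    stock.Open();
                    mark.ExecuteNonQuery();
                }
            }

            Thread.Sleep(TimeSpan.FromMinutes(20));   // the 20-minute interval from the question
        }
    }
}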

What's the best way to compare large amounts of data between two different databases?

I have a desktop application receiving data from a web service and storing it inside a local PostgreSQL database (while the web service retrieves data from a SQL Server database). At the end of the process there will be a minimum of 2.5 million entries inside a table in my local database, but they will be received from the web service in batches of about 300 rows at a time, within a time frame of about 15 days.
What I need is a way to make sure that my local database has the exact same information the server's database has.
I'm thinking of creating some sort of checksum for each batch received and then, after all batches have been received, another checksum of the entire table, but I don't know if this is the best solution and, if it is, I don't know where to start to create it.
PS: TCP already handles integrity checks, so I don't even know if this is needed, but it is critical that the data are the same.
I can see how a checksum could possibly be useful, but the amount of transformation you're doing would probably make it impractical. You'd have to derive the checksum on either the original form of the data or on the transformed form; it wouldn't be valid on both.
You have some strange constraints (been there myself), so it's kind of hard to come up with a clear strategy without knowing all the details. Maybe one of the following suggestions would work.
A simple count(*) on the SQL Server side and on the PostgreSQL side after the migration is complete.
Dump out a list of keys from the SQL Server side and from the PostgreSQL side after the migration is complete, and then sort and compare those files.
If 1 and 2 aren't possible because of limited access to SQL Server, maybe dump out the results of the web service calls to a single file location as you go along, and then extract the same data from PostgreSQL at the end, and compare those files.
There are numerous tools available for comparing files if you choose options 2 or 3.
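As a rough sketch of suggestions 1 and 2 combined, assuming both tables expose a bigint key named id (the Entries/entries table names are placeholders, and the PostgreSQL side uses the Npgsql ADO.NET provider):

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using Npgsql;

class MigrationVerifier
{
    public static void Verify(string sqlServerConnStr, string postgresConnStr)
    {
        var sourceKeys = new HashSet<long>();
        var localKeys = new HashSet<long>();

        // Key dump from the SQL Server source.
        using (var conn = new SqlConnection(sqlServerConnStr))
        using (var cmd = new SqlCommand("SELECT id FROM dbo.Entries", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    sourceKeys.Add(reader.GetInt64(0));
        }

        // Key dump from the local PostgreSQL copy.
        using (var conn = new NpgsqlConnection(postgresConnStr))
        using (var cmd = new NpgsqlCommand("SELECT id FROM entries", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    localKeys.Add(reader.GetInt64(0));
        }

        // Suggestion 1: simple row counts.
        Console.WriteLine("Source rows: {0}, local rows: {1}", sourceKeys.Count, localKeys.Count);

        // Suggestion 2: keys present on one side but not the other pinpoint what is missing.
        var missingLocally = new HashSet<long>(sourceKeys);
        missingLocally.ExceptWith(localKeys);
        var extraLocally = new HashSet<long>(localKeys);
        extraLocally.ExceptWith(sourceKeys);

        Console.WriteLine("Missing locally: {0}, unexpected locally: {1}",
                          missingLocally.Count, extraLocally.Count);
    }
}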
Do you have control over the web service and the SQL Server DB? If you do, SQL Server Change Tracking should do the trick (see the MSDN documentation on Change Tracking). It will track every change (or just the changes you care about) on a per-table basis. Each time you synchronize, you just pass it your version number and it will return the changeset required to bring you up to date.
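A sketch of what the synchronization call could look like once Change Tracking is enabled on the database and on the source table (dbo.Entries with an integer key is a placeholder; the caller persists lastSyncVersion between runs):

using System;
using System.Data.SqlClient;

class ChangeTrackingSync
{
    public static long PullChanges(string connStr, long lastSyncVersion)
    {
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();

            // Capture the version to store for the next run.
            // Assumes change tracking is enabled on the database; otherwise this returns NULL.
            long currentVersion;
            using (var cmd = new SqlCommand("SELECT CHANGE_TRACKING_CURRENT_VERSION()", conn))
                currentVersion = (long)cmd.ExecuteScalar();

            const string sql = @"
                SELECT CT.id, CT.SYS_CHANGE_OPERATION
                FROM CHANGETABLE(CHANGES dbo.Entries, @lastVersion) AS CT";

            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@lastVersion", lastSyncVersion);
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // 'I' = insert, 'U' = update, 'D' = delete; fetch/apply each row as needed.
                        Console.WriteLine("{0}: {1}", reader.GetString(1), reader.GetInt32(0));
                    }
                }
            }

            return currentVersion;   // persist this as the next lastSyncVersion
        }
    }
}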

C# and Access 2000

I have developed a network application that has been in use in my company for the last few years.
At the start it managed information about users, rights, etc.
Over time it grew with other functionality, to the point that I have tables with, let's say, 10-20 columns and even 20,000 - 40,000 records.
I keep hearing that Access is not good for multi-user environments.
Second thing is the fact that when I try to read some records from the table over the network, the whole table has to be pulled to the client.
It happens because there is no database engine on the server side and data filtering is done on the client side.
I would migrate this project to the SQL Server but unfortunately it cannot be done in this case.
I was wondering if there is a more reliable solution for me than using an Access database while still staying with a single-file database system.
We have quite a huge system using dBase IV.
As far as I know, it is a fully multi-user database system.
Maybe it would be good to use it instead of Access?
What makes me unsure is the fact that dBase IV is much older than Access 2000.
I am not sure if it would be a good solution.
Maybe there are some other options?
If you're having problems with your Jet/ACE back end with the number of records you mentioned, it sounds like you have schema design problems or an inefficiently-structured application.
As I said in my comment to your original question, Jet does not retrieve full tables. This is a myth propagated by people who don't have a clue what they are talking about. If you have appropriate indexes, only the index pages will be requested from the file server (and then, only those pages needed to satisfy your criteria), and then the only data pages retrieved will be those that have the records that match the criteria in your request.
So, you should look at your indexing if you're seeing full table scans.
You don't mention your user population. If it's over 25 or so, you probably would benefit from upsizing your back end, especially if you're already comfortable with SQL Server.
But the problem you described for such tiny tables indicates a design error somewhere, either in your schema or in your application.
FWIW, I've had Access apps with Jet back ends with 100s of thousands of records in multiple tables, used by a dozen simultaneous users adding and updating records, and response time retrieving individual records and small data sets was nearly instantaneous (except for a few complex operations like checking newly entered records for duplication against existing data -- that's slower because it uses lots of LIKE comparisons and evaluation of expressions for comparison). What you're experiencing, while not an Access front end, is not commensurate with my long experience with Jet databases of all sizes.
You may wish to read this informative thread about Access: Is MS Access (JET) suitable for multiuser access?
For the record this answer is copied/edited from another question I answered.
Aristo,
You CAN use Access as your centralized data store.
It is simply NOT TRUE that Access will choke in multi-user scenarios -- at least up to 15-20 users.
It IS true that you need a good backup strategy with the Access data file. But last I checked you need a good backup strategy with SQL Server, too. (With the very important caveat that SQL Server can do "hot" backups but not Access.)
So...you CAN use Access as your data store. Then, if you can get beyond the company politics controlling your network, perhaps you could begin moving toward upfitting your current application to use SQL Server.
I recently answered another question on how to split your database into two files. Here is the link.
Creating the Front End MDE
Splitting your database file into a front end and a back end is sort of the key to making it more performant. (Assume, as David Fenton mentioned, that you have a reasonably good design.)
If I may mention one last thing...it is ridiculous that your company won't give you other deployment options. Surely there is someone there with some power who you can get to "imagine life without your application." I am just wondering if you have more power than you might realize.
Seth
The problems you experience with an Access Database shared amongst your users will be the same with any file based database.
A read will pull a lot of data into memory and writes are guarded with some type of file lock. Under your environment it sounds like you are going to have to make the best of what you have.
"Second thing is the fact that when I try to read some records from the table over the network, the whole table has to be pulled to the client. "
Actually, no. This is a common misstatement spread by folks who do not understand how Jet, the database engine inside Access, works. Pulling down all the records, or an excessive number of records, happens because you don't have all the fields used in the selection criteria or sorting in the index. We've also found that indexing yes/no (boolean) fields can make a huge difference in some queries.
What really happens is that Jet brings down only the index pages and data pages which are required. While this is more data than a server-side database engine would return, it is not the entire table.
I also have clients with 600K and 800K records in various tables and performance is just fine.
We have an Access database application that is used pretty heavily. I have had 23 users on all at the same time before without any issues. As long as they don't access the same record then I don't have any problems.
I do have a couple of forms that are used and updated by several different departments. For instance I have a Quoting form that contains 13 different tabs and 10-20 fields on each tab. Users are typically in a single record for minutes editing and looking for information. To avoid any write conflicts I call the below function any time a field is changed. As long as it is not a new record being entered, then it updates.
Function funSaveTheRecord()
    If ([chkNewRecord].Value = False And Me.Dirty) Then
        'To save the record, turn off the form's Dirty property
        Me.Dirty = False
    End If
End Function
The way I have everything set up is as follows:
PDC.mdb <-- Front end, stored on the user's machine. Every user has their own copy. Links to tables found in PDC_be.mdb. Contains all forms, reports, queries, macros, and modules. I created a form that I can use to toggle on/off the shift key bypass. Only I have access to it.
PDC_be.mdb <-- Back end, stored on the server. Contains all data. The only form and VBA it contains is to toggle on/off the shift key bypass. Only I have access to it.
Secured.mdw <-- Security file, stored on the server.
Then I put a shortcut on each user's desktop that ties the security file to the front end and also provides their login credentials.
This database has been running without error or corruption for over 6 years.
Access is not a flat file database system! It's a relational database system.
You can't use SQL Server Express?
Otherwise, MySQL is a good database.
But if you can't install ANYTHING (you should get into those politics sooner rather than later -- or it WILL be later), just use your existing database system.
Basically, Access cannot handle more than 5 people connected at the same time, or it will corrupt on you.
