I have two databases, one is an MS Access file, the other is a SQL Server database. I need to create a SELECT command that filters data from the SQL Server database based on the data in the Access database. What is the best way to accomplish this with ADO.NET?
Could I pull the required data from each database into two new tables, put these in a single DataSet, and then perform another SELECT command on the DataSet to combine the data?
Additional Information:
The Access database is not permanent. The Access file to use is set at runtime by the user.
Here's a bit of background information to explain why there are two databases. My company uses a CAD program to design buildings. The program stores materials used in the CAD model in an Access database. There is one file for each model. I am writing a program that will generate costing information for each model. This is based on current material prices stored in a SQL Server database.
My Solution
I ended up just importing the data from the Access database into a temporary table in the SQL Server database, performing all the necessary processing, and then removing the temporary table. It wasn't a pretty solution, but it worked.
You don't want to pull both datasets across if you don't have to. You are also going to have trouble implementing Tomalak's solution, since the file location may change and might not even be readily accessible to the server itself.
My guess is that your users set up an Access database with the people/products or whatever that they are interested in working with and that's why you need to select across the two databases. If that's the case, the Access table is probably smaller than the SQL Server table(s). Your best bet is to pull in the Access data, then use that to generate a filtered query to SQL Server so that you can minimize the data that is sent over the network.
So, the most important things are:
Filter the data ON THE SERVER so that you can minimize network traffic and also because the database is going to be faster at filtering than ADO.NET
If you have to choose a dataset to pull into your application, pull in the smaller dataset and then use that to filter the other table.
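For example, a minimal ADO.NET sketch of that approach might look like the following; the table and column names (Materials, MaterialCode, dbo.MaterialPrices), the file path, and the connection string are placeholders rather than anything from the question:

using System;
using System.Collections.Generic;
using System.Data;
using System.Data.OleDb;
using System.Data.SqlClient;
using System.Linq;

// Placeholder values; substitute your own.
string accessPath = @"C:\models\model1.mdb";   // chosen by the user at runtime
string sqlConnectionString = "Server=.;Database=Pricing;Integrated Security=true";

// 1. Pull the (small) key list out of the Access file.
var codes = new List<string>();
using (var accessConn = new OleDbConnection(
    "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + accessPath))
using (var accessCmd = new OleDbCommand("SELECT MaterialCode FROM Materials", accessConn))
{
    accessConn.Open();
    using (var reader = accessCmd.ExecuteReader())
        while (reader.Read())
            codes.Add(reader.GetString(0));
}

// 2. Use those keys to filter on the server with a parameterized IN (...) list.
//    (Handle the empty-list case separately; "IN ()" is not valid SQL.)
var paramNames = codes.Select((c, i) => "@p" + i).ToArray();
using (var sqlConn = new SqlConnection(sqlConnectionString))
using (var sqlCmd = new SqlCommand(
    "SELECT * FROM dbo.MaterialPrices WHERE MaterialCode IN (" +
    string.Join(", ", paramNames) + ")", sqlConn))
{
    for (int i = 0; i < codes.Count; i++)
        sqlCmd.Parameters.AddWithValue(paramNames[i], codes[i]);

    var prices = new DataTable();
    sqlConn.Open();
    new SqlDataAdapter(sqlCmd).Fill(prices);   // only the matching rows cross the network
}

This keeps the heavy filtering on SQL Server while only a short list of keys travels from the client.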
Assuming SQL Server can get to the Access databases, you could construct an OPENROWSET query across them.
SELECT a.*
FROM SqlTable AS a
JOIN OPENROWSET(
    'Microsoft.Jet.OLEDB.4.0',
    'C:\Program Files\Microsoft Office\OFFICE11\SAMPLES\Northwind.mdb';'admin';'',
    Orders
) AS b ON
    a.Id = b.Id
You would just change the path to the Access database at runtime to get to different MDBs.
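If you take this route from ADO.NET, the statement can be built at run time along these lines (a sketch only; OPENROWSET does not accept parameters for the provider string, so the user-supplied path has to be spliced in and should be validated first, and it must be reachable from the SQL Server machine):

using System.Data.SqlClient;

// mdbPath is supplied by the user at runtime; connectionString points at the SQL Server.
// Ad hoc distributed queries must be enabled on the server for OPENROWSET to work.
string sql =
    "SELECT a.* " +
    "FROM SqlTable AS a " +
    "JOIN OPENROWSET('Microsoft.Jet.OLEDB.4.0', '" +
    mdbPath.Replace("'", "''") + "';'admin';'', Orders) AS b " +
    "ON a.Id = b.Id";

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(sql, conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
        while (reader.Read())
        {
            // consume the joined rows
        }
}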
First you need to do something on the server - reference the Access DB as a "Linked Server".
Then you will be able to query it from within the SQL server, pulling out or stuffing in data however you like. This web page gives a nice overview on how to do it.
http://blogs.meetandplay.com/WTilton/archive/2005/04/22/318.aspx
If I read the question correctly, you are NOT attempting to cross reference across multiple databases.
You need merely to reference details about a particular FILE, which in this case, could contain:
primary key, parent file checksum (if it is a modification), file checksum, last known author, revision number, date of last change...
And then reference that primary key when adding information obtained from analysing that file with your program.
If you actually do need a distributed database, perhaps you would prefer to use a non-relational database such as LDAP.
If you can't use LDAP but must use a relational database, you might consider using GUIDs to ensure that your primary keys are good.
Since you don't give enough information, I'm going to have to make some assumptions.
Assuming:
The SQL Server and the Access Database are not on the same computer
The SQL Server cannot see the Access database over a file share, or it would be too difficult to set that up.
You don't need to do joins between the Access database and the SQL Server; you only need to use data from the Access database as lookup elements in your WHERE clause.
If the above assumptions are correct, then you can simply use ADO to open the Access database and retrieve the data you need, possibly in a dataset or datatable. Then extract the data you need and feed it to a different ADO query to your SQL Server in a dynamic Where clause, prepared statement, or via parameters to a stored procedure.
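As an illustration of the stored-procedure option, a sketch might look like this; the procedure name, parameter, and variables (dbo.GetPricesForModel, @ModelId, modelId, sqlConnectionString) are hypothetical, with modelId being a value extracted from the Access data as described above:

using System.Data;
using System.Data.SqlClient;

using (var conn = new SqlConnection(sqlConnectionString))
using (var cmd = new SqlCommand("dbo.GetPricesForModel", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;       // call the proc rather than ad hoc SQL
    cmd.Parameters.AddWithValue("@ModelId", modelId);    // value looked up in the Access data

    var result = new DataTable();
    conn.Open();
    new SqlDataAdapter(cmd).Fill(result);                // filtering happens on the server
}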
The other solutions people are giving all assume you need to do joins on your data or otherwise execute SQL which includes both databases. To do that, you have to use linked databases, or else import the data into a table (perhaps temporary).
Have you tried benchmarking what happens if you link from the Access front end to your SQL Server via ODBC and write your SQL as though both tables are local? You could then do a trace on the server to see exactly what Jet sends to the server. You might be surprised at how efficient Jet is with this kind of thing. If you're linking on a key field (e.g., an ID field, whether from the SQL Server or not), it would likely be the case that Jet would send a list of the IDs. Or you could write your SQL to do it that way (using IN (SELECT ...) in your WHERE clause).
Basically, how efficient things will be depends on where your WHERE clause is going to be executed. If, for instance, you are joining a local Jet table with a linked SQL Server table on a single field, and filtering the results based on values in the local table, it's very likely to be extremely efficient, in that the only thing Jet will send to the server is whatever is necessary to filter the SQL Server table.
Again, though, it's going to depend entirely on exactly what you're trying to do (i.e., which fields you're filtering on). But give Jet a chance to see if it is smart, as opposed to assuming off the bat that Jet will screw it up. It may very well require some tweaking to get Jet to work efficiently, but if you can keep all your logic client-side, you're better off than trying to muck around with tracking all the Access databases from the server.
Related
I have a C#.NET Windows-based application that uses a database in Microsoft SQL Server 2008. During deployment for the very first time to our client(s), we create a copy of our database and deploy it on the client(s)' remote server along with the UI application. The client database can be on SQL Server 2005 or higher.
Over time, the UI application and the associated database have gone through a lot of changes. Since this is a thick-client application, the client(s)' database is not in sync with our latest database, and unfortunately no one ever made notes of all the changes done. So my challenges are as follows:
How do I find any columns missing from a table in the client's database as compared to my database, if any?
How do I find any primary/unique constraints missing from a table in the client's database as compared to my database, if any?
How do I find any indexes missing from a table in the client's database as compared to my database, if any?
Please keep in mind the client(s)' database size may range from 10 to 100 GB, so I cannot plan to just drop all the client tables and recreate them.
You can use Data-tier applications. It's a built-in feature of SQL Server, so you don't need any extra tools.
You can extract a data-tier application from your database (in SSMS: right-click -> Tasks -> Extract Data-tier Application) to a DACPAC file, copy the file to the client's server, and use it to upgrade the DB there (or to generate an update script).
It also integrates nicely with SQL Server Data Tools.
For this task, you need software that compares SQL databases. Just as there is a lot of software to compare text, there is a lot to compare databases.
Personally, I use AdoptSQLDiff, but there are a bunch. RedGate has developed one as well, and I know others exist. Just type "SQL database compare" into Google to find them. You can probably get the job done within the trial period.
These tools show you which tables were added, deleted, or changed, and do the same for views, indexes, triggers, stored procedures, user-defined functions, and constraints. More importantly, they generate scripts to push the modifications into the target database. Very handy, but have a look at the generated script; it sometimes messes things up by deleting data, though that can be fixed easily.
There is also the option to compare the data in a specific table if you need to.
With SQL Server Management Studio, you can try selecting a database and then Tasks -> Generate Scripts, selecting the appropriate options.
Do the same thing for the two databases you want to compare. You will get two text files you can compare with a text comparison tool.
The comparison will highlight differences in the database structure.
Not the best way to do it, of course, but it can be a start. If the two databases are not too different, you should be able to handle the differences.
A better option is to use dedicated database comparison software. Such tools are meant to compare database structure, constraints, indexes, and so on. I've never used any of them, so I cannot give any advice on that.
If it is a one-time thing, use any database diff tool; VS2010+ has a built-in one that lets you get the differences for schema and data in two separate files.
If you want to solve the problem in your development process, you have a wide range of options for implementing database versioning.
If you are using EF, use Migrations; you can't beat that.
If you are only on SQL Server and never looking at another RDBMS, check out DAC (data-tier applications, mentioned by Jakub).
Otherwise, take a look at more generic solutions; among them I would recommend DB.UP, and if Python code works for you, check out Alembic, which allows you to write your migrations using a really nice Python API.
If nothing works for you, create a snapshot of the current database schema and start writing differential scripts that you can apply with a self-written tool or DB.UP.
I am not sure if this can help, but who knows.
So, is there any way to restore the server database in your local environment? If the answer is yes, you can try joining the system views of each database and comparing them.
I propose something like this (it was a quick solution, so please excuse the formatting and other rough edges).
USE [master]
GO
-- Collect column metadata from the local database
-- (replace LocalTable with the name of your local database)
SELECT
    LocalDataBaseTable.name AS TableName,
    LocalDataBaseTableColumns.name AS [Column],
    LocalDataBaseTypes.name AS DataType,
    LocalDataBaseTableColumns.max_length,
    LocalDataBaseTableColumns.[precision]
INTO #tmpLocalInfo
FROM LocalTable.sys.columns AS LocalDataBaseTableColumns
INNER JOIN LocalTable.sys.tables AS LocalDataBaseTable
    ON LocalDataBaseTableColumns.object_id = LocalDataBaseTable.object_id
INNER JOIN LocalTable.sys.types AS LocalDataBaseTypes
    ON LocalDataBaseTypes.user_type_id = LocalDataBaseTableColumns.user_type_id
-- Collect column metadata from the server (client) database
-- (replace ServerTable with the name of the server database)
SELECT
    ServerDataBaseTable.name AS TableName,
    ServerDataBaseTableColumns.name AS [Column],
    ServerDataBaseTypes.name AS DataType,
    ServerDataBaseTableColumns.max_length,
    ServerDataBaseTableColumns.[precision]
INTO #tmpServerInfo
FROM ServerTable.sys.columns AS ServerDataBaseTableColumns
INNER JOIN ServerTable.sys.tables AS ServerDataBaseTable
    ON ServerDataBaseTableColumns.object_id = ServerDataBaseTable.object_id
INNER JOIN ServerTable.sys.types AS ServerDataBaseTypes
    ON ServerDataBaseTypes.user_type_id = ServerDataBaseTableColumns.user_type_id
-- Columns that exist on the server but are missing locally
SELECT
    #tmpServerInfo.*
FROM #tmpLocalInfo
RIGHT OUTER JOIN #tmpServerInfo
    ON #tmpLocalInfo.TableName = #tmpServerInfo.TableName COLLATE DATABASE_DEFAULT
    AND #tmpLocalInfo.[Column] = #tmpServerInfo.[Column] COLLATE DATABASE_DEFAULT
WHERE #tmpLocalInfo.[Column] IS NULL
DROP TABLE #tmpLocalInfo
DROP TABLE #tmpServerInfo
This will return information about all the columns missing from your local database. The idea is to investigate the 'sys' views and find out whether there is a suitable solution for you there.
You can use this simple script, which shows you the differences between tables, views, indexes, etc.
Compalex is a free lightweight script to compare two database schemas. It supports MySQL, MS SQL Server and PostgreSQL.
Or look at this question: Compare two MySQL databases. That question is about comparing two MySQL schemas, but some of the listed tools support MSSQL or have an MSSQL version (for example http://www.liquibase.org/).
Another answer: What is the best tool to compare two SQL Server databases (schema and data)?
So I have two systems I often have to join together to display certain reports. I've got one system that stores metadata on documents; it's stored in SQL Server, usually keyed by part number. The list of part numbers I want to get documents for comes from an Oracle table in our ERP system. My current method goes like this:
Get data from ERP (Oracle) system into a DataTable.
Compile string[] of part numbers from a column.
Use an IN() statement to get all document information from docs (MSSQLSVR) system into another DataTable.
Add columns to ERP DataTable, loop through rows.
Fill out document info from docs DataTable, if(erpRow["ITEMNO"] == docRow["ITEMNO"])
This, to me, feels really inefficient. Now obviously I can't use one connection string to JOIN the two tables, or use a database link, so I assume there will have to be two calls, one to each database. Is there another way to join these two sets together?
I would suggest a Linked Server approach (http://msdn.microsoft.com/en-us/library/ms188279.aspx). Write a stored procedure on the SQL Server side that pulls the data over from an Oracle linked server, does the JOIN locally, and returns the combined data.
SQL Server has been designed to execute JOINs efficiently. No need to try to recreate that functionality in the app layer.
Since you've ruled out a database link, I would do the following:
Get data from ERP (Oracle) system into a DataTable.
Pass the DataTable as a parameter to SQL Server via a table-valued parameter (see the sketch after this list)
Return your data (no loops updating an older set)
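A sketch of steps 2 and 3 on the C# side might look like this; it assumes a user-defined table type already exists on the SQL Server, and the type name (dbo.PartNumberList), the dbo.Documents table, and the connection string are placeholders:

using System.Data;
using System.Data.SqlClient;

// Assumes a table type on the server such as:
//   CREATE TYPE dbo.PartNumberList AS TABLE (ItemNo nvarchar(50));
// erpTable is the DataTable already filled from Oracle in step 1.
var keys = new DataTable();
keys.Columns.Add("ItemNo", typeof(string));
foreach (DataRow row in erpTable.Rows)
    keys.Rows.Add(row["ITEMNO"]);

using (var conn = new SqlConnection(docsConnectionString))
using (var cmd = new SqlCommand(
    "SELECT d.* FROM dbo.Documents AS d JOIN @ItemNos AS k ON d.ITEMNO = k.ItemNo", conn))
{
    var p = cmd.Parameters.AddWithValue("@ItemNos", keys);
    p.SqlDbType = SqlDbType.Structured;       // mark the parameter as a TVP
    p.TypeName = "dbo.PartNumberList";

    var joined = new DataTable();
    conn.Open();
    new SqlDataAdapter(cmd).Fill(joined);     // the join runs on the server, no client-side loop
}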
I realize you can normally use the Upsizing Wizard in Access to convert this, but as this is a server-side process where we are getting the .mdb files from a third party on a daily basis, I have to be able to ingest these with a no-touch architecture.
Currently, I'm about to set out to write it all by hand (ugh), where I read the Access database through a data source and push it up into SQL Server through bulk inserts or Entity Framework. I really wish there were a better way to do this, though. I'm willing to entertain lots of creative methods, as there are a LOT of tables and a TON of data.
There are a number of methods that come to mind, which do all indeed involve custom programming, but should be relatively simple and straightforward to implement.
From another Access DB, open the source DB programmatically (i.e., with VBA). Create linked tables to the SQL Server backend in the source DB. Copy the data from the source DB to the linked tables (using INSERT INTO dest SELECT * FROM source).
Use OPENDATASOURCE or OPENROWSET with SQL Server to connect directly to the Access DB and copy the data. Again you can use INSERT INTO dest SELECT * FROM source to copy the data, or SELECT * INTO dest FROM source to create a new table from the source data. This involves tweaking some system settings on SQL Server, since it's not enabled by default, but a few Google searches should get you started.
From a .NET program, use SqlBulkCopy (the .NET class that provides bcp-style bulk loading) to upload data from the Access database; a sketch follows below. Just work with the data directly with ADO.NET, as there's no reason to build an entire EF layer just for migrating data from one source to another.
I have used variations of all three methods above in various projects, but for moving a large number of tables, I have found option #2 to be relatively efficient. It will involve some dynamic SQL code if your table names are dynamic on a daily basis, but if they are static, you should only have to write the logic once and use a parameter for the filename to read from.
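If the .NET route (option #3) ends up being the better fit, a minimal SqlBulkCopy sketch could look like this; the file path, table names, and connection string are placeholders, and the destination table is assumed to already exist with a compatible schema:

using System.Data.OleDb;
using System.Data.SqlClient;

using (var accessConn = new OleDbConnection(
    "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + mdbPath))   // mdbPath: today's .mdb file
using (var cmd = new OleDbCommand("SELECT * FROM SourceTable", accessConn))
{
    accessConn.Open();
    using (var reader = cmd.ExecuteReader())
    using (var bulk = new SqlBulkCopy(sqlConnectionString))
    {
        bulk.DestinationTableName = "dbo.SourceTable";
        bulk.BatchSize = 5000;          // tune for your data volume
        bulk.BulkCopyTimeout = 0;       // no timeout for large loads
        bulk.WriteToServer(reader);     // streams rows without loading them all into memory
    }
}

You would loop this over the list of tables, which you can read from the Access schema information or from your own configuration.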
I just started working with SQL and now what I need is to make a database for my C# application which will save user names and passwords. Think of it like a password reminder.
Anyway, what I think I need to do is: I need to create a SQL database that the application will use only to save data. It should not require SQL Server to be installed on the machine.
I searched the internet but found no result; all of them require the use of SQL Server. Could you please provide me the steps to do this, or any resource? Thanks a lot.
You need to decide how you want to save your data. You can save the data in an XML file, a plain text file, or anything else.
The reason you're seeing so many examples of people using SQL is because relational databases have already solved a lot of the issues you're likely to run into when storing and retrieving data. That means in most cases, it's much easier to use an SQL database (even a lightweight embedded one) than to try to come up with your own library for retrieving and saving data.
A note about saving passwords
Bear in mind that any data you save on a client's computer is going to be accessible to that client. You can use tricks to make it very difficult for someone to get to that data, but there's nothing your program can do that a clever hacker won't be able to mimic. The only way to avoid letting users see the stored passwords would be to make the user provide a "master" password that gets converted into a key that is used to encrypt the other passwords. This way, only users that know this master password would still be able to get the stored data.
Storing data in a relational database is not sufficient to prevent users from accessing that data.
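As an illustration of that master-password idea, here is a minimal sketch using the .NET cryptography classes; the salt handling and iteration count are illustrative only, and this is not a complete password-manager design:

using System.IO;
using System.Security.Cryptography;

// Derives an AES key from the master password with PBKDF2 and encrypts the payload
// (for example, your serialized password list). Store the salt alongside the output.
static byte[] Encrypt(string masterPassword, byte[] plainData, byte[] salt)
{
    using (var kdf = new Rfc2898DeriveBytes(masterPassword, salt, 100000))
    using (var aes = Aes.Create())
    {
        aes.Key = kdf.GetBytes(32);    // 256-bit key derived from the master password
        aes.GenerateIV();

        using (var ms = new MemoryStream())
        {
            ms.Write(aes.IV, 0, aes.IV.Length);   // prepend the IV so it can be read back later
            using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
            {
                cs.Write(plainData, 0, plainData.Length);
                cs.FlushFinalBlock();
                return ms.ToArray();              // IV + ciphertext
            }
        }
    }
}

Decryption reverses the steps: read the IV back, derive the same key from the master password and salt, and use aes.CreateDecryptor().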
You could use SQL Compact ( http://msdn.microsoft.com/en-us/data/ff687142 ) to avoid installing SQL Server.
You could also go for sqlite.org.
Your requirements of "I want a database" and "I don't want [database]-server" are difficult to match.
If you have a database without a database server, you have a lump of inaccessible data.
If you want to use a SQL database, you will need a SQL server. Or, you will have to re-implement one in order to use the database.
There are a number of small, open-source SQL servers available. I've worked with HSQL.
You might want to take a look at the SO question SQL-lite vs HSQL-db.
If all you want is one file that can't be read easily, try encrypting the data, and use either text or XML files for storage.
You do need a database to be installed on your computer if you want to use SQL to query the data. It need not be SQL Server, but definitely some kind of database. You can download MySQL, SQL Server Express, or any of the free database products available out there.
Once you have that up and running, querying the database from C# is fairly simple, and it may be easier for you to follow this tutorial, which has similar functionality to what you need.
You could always use a CSV (comma-separated values) file as your "database". I found a link that could help you with this: Query CSV using LINQ C# 2010
EDIT: When it comes to security, you can always use something like MD5 (though a stronger algorithm such as SHA-256, ideally with a salt, is preferable today). It can turn a password into an irreversible hash. This way no one will ever be able to see other people's passwords.
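For example, a small hashing sketch; the answer mentions MD5, but SHA-256 is used here instead as a stronger general-purpose hash, and the salt parameter is an addition for illustration:

using System;
using System.Security.Cryptography;
using System.Text;

// Hashes a password together with a per-user random salt; store the salt and the
// resulting hash, never the password itself.
static string HashPassword(string password, byte[] salt)
{
    using (var sha = SHA256.Create())
    {
        byte[] passwordBytes = Encoding.UTF8.GetBytes(password);
        byte[] input = new byte[salt.Length + passwordBytes.Length];
        Buffer.BlockCopy(salt, 0, input, 0, salt.Length);
        Buffer.BlockCopy(passwordBytes, 0, input, salt.Length, passwordBytes.Length);
        return Convert.ToBase64String(sha.ComputeHash(input));
    }
}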
Hopefully I can explain what I am trying to do.
I am writing a system to take data stored in SharePoint lists and push it into SQL tables. This is being done so the data from the lists can be joined with other data and reported on.
I need the system to be quite flexible so I want to store the mapping between the lists and SQL and then create any of the SQL that is missing.
So I would first have to check whether the SQL table I want exists and, if not, create it. Then check for all the columns I expect, create any missing ones, and then populate the table with the list data.
Getting the list data is no problem for me, and it isn't a problem for me to store my configuration information.
My issue is I'm not sure what .NET features to use when talking to the database. I was looking into Entity Framework and LINQ, but these seem to need fixed tables, which I don't have.
I am also looking at using the enterprise libraries (4.1) as I use these for event logging.
Ideally what I want to be able to do is build a datatable and then "compare" it to a SQL table and have the system update it as required.
Does anything like this exist, and what approach would you use?
These may help get you started :-
Codeplex - SPListSync
Synchronize information with other lists or a SQL Server table based on a linked column. This can be helpful when you have a list with companies and another list with contacts. The company information (e.g. business phone and address) can be copied to the linked contacts.
Exporting Data from SharePoint 2007 Lists to SQL Server via SSIS
SO - Easiest way to extract SharePoint list data to a separate SQL Server table?
Commercial
Simego - Data Synchronisation Studio
AxioWorks SQList
You need to study SQL Server Management Objects (SMO) a bit; through them you can interact with SQL Server directly and very easily. With SMO you can create a new table, stored procedure, etc., and also check for the pre-existence of any object.
Talking to the database was never so easy...
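For illustration, a minimal SMO sketch along those lines; the server, database, table, and column names are placeholders, and you would drive them from your stored list-to-table mapping (requires the SMO assemblies, e.g. Microsoft.SqlServer.Smo and Microsoft.SqlServer.ConnectionInfo):

using Microsoft.SqlServer.Management.Smo;

var server = new Server("localhost");
var db = server.Databases["ReportingDb"];

// Create the table if it does not exist yet.
var table = db.Tables["SharePointItems"];
if (table == null)
{
    table = new Table(db, "SharePointItems");
    table.Columns.Add(new Column(table, "Id", DataType.Int) { Nullable = false });
    table.Create();
}

// Add any expected column that is missing, then push the change to the server.
if (table.Columns["Title"] == null)
{
    table.Columns.Add(new Column(table, "Title", DataType.NVarChar(255)) { Nullable = true });
    table.Alter();
}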