A few things about what I intend my .NET application to do:
Query various enterprise-wide databases (I would not know the structure of these databases beforehand, in terms of which tables or fields to expect)
These databases could either be Oracle or SQL Server
Have an interface where the user could select the table and the fields within, set group by and where clauses, to extract a recordset
Insert records into an Access database based on the results of the above query. I know the structure of this Access database and I have some business logic to execute before inserting the records in a certain way.
I am not sure an ORM like NHibernate is really a solution to this requirement, as I have no information beforehand about the structure of the data (tables, fields) I will encounter. Is there a .NET library that will abstract the connection and querying part for me, and that works with both SQL Server and Oracle?
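For a requirement like this, an ORM is probably unnecessary; ADO.NET's provider factory model already abstracts the connection and command objects across SQL Server and Oracle. A minimal sketch, assuming the provider invariant names and connection strings shown are placeholders you would supply at runtime:

```csharp
using System.Data;
using System.Data.Common;

public static class GenericDb
{
    // providerName is e.g. "System.Data.SqlClient" or
    // "Oracle.ManagedDataAccess.Client" -- whatever is installed/registered.
    public static DataTable RunQuery(string providerName, string connString, string sql)
    {
        DbProviderFactory factory = DbProviderFactories.GetFactory(providerName);
        using (DbConnection conn = factory.CreateConnection())
        using (DbCommand cmd = conn.CreateCommand())
        {
            conn.ConnectionString = connString;
            cmd.CommandText = sql;
            conn.Open();

            var result = new DataTable();
            using (DbDataReader reader = cmd.ExecuteReader())
                result.Load(reader);
            return result;
        }
    }

    // Schema discovery: lets your UI list tables/columns it has never seen.
    public static DataTable ListTables(string providerName, string connString)
    {
        DbProviderFactory factory = DbProviderFactories.GetFactory(providerName);
        using (DbConnection conn = factory.CreateConnection())
        {
            conn.ConnectionString = connString;
            conn.Open();
            return conn.GetSchema("Tables"); // GetSchema("Columns") works similarly
        }
    }
}
```

Note that on .NET Framework the factories come from machine.config, while on .NET Core/5+ you have to register them first with `DbProviderFactories.RegisterFactory`.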
Related
I have Azure SQL database database1 in server Server1.database.windows.net
I need to retrieve some records from this database and insert them into a table in a different database on a different Azure server.
Do you think for this scenario it's better to do it using .NET or to use elastic queries?
Also, are there any limitations to elastic queries?
Your requirement is to query data from one database and then insert the returned rows into another (different) database. In my opinion, performing these operations via code in your program would completely meet your requirement; elastic query is not required for your scenario. Normally, the goal of elastic query is to facilitate querying scenarios where multiple databases contribute rows to a single overall result. You can find detailed information on the elastic query feature in this article.
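A minimal sketch of that in-code approach, streaming rows from the source database straight into the target with SqlBulkCopy (connection strings, table, and column names here are placeholders):

```csharp
using System;
using System.Data.SqlClient;

public static class CrossDbCopy
{
    public static void CopyRows(string sourceCs, string targetCs)
    {
        using (var source = new SqlConnection(sourceCs))
        using (var target = new SqlConnection(targetCs))
        {
            source.Open();
            target.Open();

            var select = new SqlCommand(
                "SELECT Id, Name FROM dbo.SourceTable WHERE Modified >= @since", source);
            select.Parameters.AddWithValue("@since", DateTime.UtcNow.AddDays(-1));

            using (SqlDataReader reader = select.ExecuteReader())
            using (var bulk = new SqlBulkCopy(target) { DestinationTableName = "dbo.TargetTable" })
            {
                // Streams rows from the reader; no intermediate DataTable needed.
                bulk.WriteToServer(reader);
            }
        }
    }
}
```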
Besides, sp_execute_remote can help us execute remote stored procedure calls or remote functions, which could be another approach for your scenario.
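A sketch of what that looks like, assuming an external data source (here called RemoteSrc) has already been defined on the database with CREATE EXTERNAL DATA SOURCE; all names are placeholders:

```sql
EXEC sp_execute_remote
    @data_source_name = N'RemoteSrc',
    @stmt = N'SELECT COUNT(*) FROM dbo.Orders';
```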
Are there any limitations to elastic queries?
Under "Preview limitations" section in this article, you can see:
Running your first elastic query can take up to a few minutes on the Standard performance tier. This time is necessary to load the elastic query functionality; loading performance improves with higher performance tiers.
Scripting of external data sources or external tables from SSMS or SSDT is not yet supported.
Import/Export for SQL DB does not yet support external data sources and external tables. If you need to use Import/Export, drop these objects before exporting and then re-create them after importing.
Elastic query currently only supports read-only access to external tables. You can, however, use full T-SQL functionality on the database where the external table is defined. This can be useful, e.g., to persist temporary results using SELECT INTO, or to define stored procedures on the elastic query database which refer to external tables.
Except for nvarchar(max), LOB types are not supported in external table definitions. As a workaround, you can create a view on the remote database that casts the LOB type into nvarchar(max), define your external table over the view instead of the base table and then cast it back into the original LOB type in your queries.
Column statistics over external tables are currently not supported. Table statistics are supported, but need to be created manually.
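The LOB workaround described above (a view that casts to nvarchar(max), with the external table defined over the view) can be sketched as follows; the data source, schema, and column names are placeholders:

```sql
-- On the remote database: a view that casts the LOB column down.
CREATE VIEW dbo.DocumentsView AS
SELECT Id, CAST(Body AS nvarchar(max)) AS Body
FROM dbo.Documents;

-- On the elastic query database: define the external table over the view.
CREATE EXTERNAL TABLE dbo.Documents (
    Id int,
    Body nvarchar(max)
)
WITH (DATA_SOURCE = RemoteSrc, SCHEMA_NAME = N'dbo', OBJECT_NAME = N'DocumentsView');

-- In queries, cast back to the original LOB type where needed:
SELECT Id, CAST(Body AS xml) AS Body FROM dbo.Documents;
```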
I am working in a company where we have hundreds of tables and queries in MS Access and several tables in our SQL Server. Sometimes we need to combine results from both databases.
When we try to do it by linking our SQL Server tables to MS Access using an ODBC connection, the results return very slowly, which causes our clients to complain. So, aside from the ODBC-linked-table approach, here is what I am looking for:
Being able to build a query over the SQL Server DB and the Access DB visually, by dragging and dropping tables
Being able to save these queries and re-run them using C#
Being able to use sub-queries (a query within another query)
My question is: is there any kind of application or API (paid or unpaid) that supports all of these requirements? Or is it possible to build such a thing?
SQL Server Management Studio does all this. But running a query that joins a SQL Server table with an Access table is not a good idea and will always be slow. You need to upload the Access table's data to a temporary table in SQL Server and join from there.
You can use VBA to import the recordset into a temp table and then do the join. If you're pulling down both datasets in a C# app and joining there, then yes, that will be relatively more efficient than joining in Access, but it's certainly not the same thing. Joining a large Access table with another large linked table will always be slow.
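The "upload, then join server-side" approach can be sketched in C# with SqlBulkCopy; the file path, connection strings, and column lists below are placeholders:

```csharp
using System.Data;
using System.Data.OleDb;
using System.Data.SqlClient;

public static class AccessToSqlJoin
{
    public static DataTable JoinViaTempTable(string accessPath, string sqlCs)
    {
        // 1. Read the Access table into memory.
        var accessRows = new DataTable();
        using (var access = new OleDbConnection(
            $@"Provider=Microsoft.ACE.OLEDB.12.0;Data Source={accessPath};"))
        using (var adapter = new OleDbDataAdapter("SELECT Id, Name FROM LocalTable", access))
        {
            adapter.Fill(accessRows);
        }

        // 2. Push it into a temp table and join on the server. The temp table
        //    lives as long as this connection, so everything uses one connection.
        using (var sql = new SqlConnection(sqlCs))
        {
            sql.Open();
            new SqlCommand("CREATE TABLE #AccessRows (Id int, Name nvarchar(255))", sql)
                .ExecuteNonQuery();

            using (var bulk = new SqlBulkCopy(sql) { DestinationTableName = "#AccessRows" })
                bulk.WriteToServer(accessRows);

            var result = new DataTable();
            using (var join = new SqlCommand(
                "SELECT s.* FROM dbo.ServerTable s JOIN #AccessRows a ON s.Id = a.Id", sql))
            using (var reader = join.ExecuteReader())
                result.Load(reader);
            return result;
        }
    }
}
```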
I have to develop a layer to retrieve data from a database (which can be SQL Server, Oracle, or IBM DB2). The queries (which are generic) are written by developers, but I can modify them in my layer. The tables can be huge (say > 1,000,000 rows), and there are a lot of joins (for example, I have a query with 35 joins; no way to reduce that).
So, I have to develop a pagination system to retrieve a "page" (say, 50 rows).
The layer (which is in a dll) is for desktop applications.
Important fact: queries are never ordered by ID.
The only way I found is to generate a unique row number (with the SQL Server ROW_NUMBER() function), but that won't work with Oracle because there are too many joins.
Does anyone know another way?
There are only two ways to do pagination code.
The first is database specific. Each of those databases have very different best practices with regards to paging through result sets. Which means that your layer is going to have to know what the underlying database is.
The second is to execute the query as is then just send the relevant records up the stream. This has obvious performance issues in that it would require your data layer to essentially grab all the records all of the time.
This is, IMHO, the primary reason why people shouldn't try to write database-agnostic code. At the end of the day there are enough differences between RDBMSs that it makes sense to have a pluggable data-layer architecture which can take advantage of the specific RDBMS it works with.
In short, there is no ANSI standard for this. For example:
MySQL uses the LIMIT keyword for paging.
Oracle has ROWNUM, which has to be combined with subqueries (not sure when it was introduced).
SQL Server 2008 has ROW_NUMBER, which should be used with a CTE.
SQL Server 2005 had an entirely different (and very complicated) way of paging in a query, which required several different procs and a function.
IBM DB2 has ROWNUMBER(), which also must be implemented as a subquery.
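The "pluggable per-RDBMS" idea can be made concrete with one small wrapper per dialect. A hedged sketch, where the dialect keys and the version cut-offs are assumptions, and the caller must supply some deterministic ORDER BY column (paging without one is never reliable):

```csharp
using System;

public static class Paging
{
    public static string PageQuery(string dialect, string innerSql, string orderBy,
                                   int offset, int pageSize)
    {
        switch (dialect)
        {
            case "mysql":
                return $"{innerSql} ORDER BY {orderBy} LIMIT {offset}, {pageSize}";

            case "oracle": // classic pre-12c ROWNUM pattern
                return $@"SELECT * FROM (
                            SELECT q.*, ROWNUM rn
                            FROM ({innerSql} ORDER BY {orderBy}) q
                            WHERE ROWNUM <= {offset + pageSize})
                          WHERE rn > {offset}";

            case "sqlserver": // ROW_NUMBER() in a CTE (2008-style)
                return $@"WITH q AS (
                            SELECT *, ROW_NUMBER() OVER (ORDER BY {orderBy}) AS rn
                            FROM ({innerSql}) src)
                          SELECT * FROM q
                          WHERE rn BETWEEN {offset + 1} AND {offset + pageSize}";

            case "db2": // ROWNUMBER() in a nested select
                return $@"SELECT * FROM (
                            SELECT src.*, ROWNUMBER() OVER (ORDER BY {orderBy}) AS rn
                            FROM ({innerSql}) src)
                          WHERE rn BETWEEN {offset + 1} AND {offset + pageSize}";

            default:
                throw new NotSupportedException(dialect);
        }
    }
}
```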
You can do LINQ on your object collection, if you want to do the paging on the application side.
list.Skip(numPages * itemsPerPage).Take(itemsPerPage)
This lets you skip to the specified page (numPages = 0 is page 1).
I have one SQL Server with multiple databases. Database1 has a table with a reference to IDs that are stored in a table in Database2. Not sure if it's possible, but could I configure NHibernate (Fluent NHibernate specifically) to hydrate an object by pulling data from multiple databases?
I'm not concerned about writing to these tables; I'm just trying to ORM the objects to display in a data-viewing application.
I realize this isn't an ideal database situation, but it's what I was given to work with.
The usual answer to DB-specific query structures, like cross-database queries, is to create a view on the "local" DB (the one NH connects to) that performs the cross-database query and returns the joined results. You can also have a repository per DB and develop some means of querying the records from each DB and joining them manually.
One thing that will also work: the table property of each mapping is just a string, and could be anything; NHibernate just takes it and plugs it in wherever it needs to reference the table name. So, you could try specifying the tables in the mappings using their fully-qualified names: ConnectedDB..LocalTable, OtherDB..RemoteTable. It might be considered a hack, but it's also rather elegant in a way: your program doesn't even have to know there are multiple databases in the persistence schema.
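With Fluent NHibernate that looks roughly like this (the database, table, and property names are placeholders; NHibernate emits the table string verbatim into its generated SQL):

```csharp
using FluentNHibernate.Mapping;

public class RemoteThing
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public class RemoteThingMap : ClassMap<RemoteThing>
{
    public RemoteThingMap()
    {
        Table("OtherDB..RemoteTable"); // fully-qualified cross-database name
        Id(x => x.Id);
        Map(x => x.Name);
    }
}
```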
I have two databases, one is an MS Access file, the other is a SQL Server database. I need to create a SELECT command that filters data from the SQL Server database based on the data in the Access database. What is the best way to accomplish this with ADO.NET?
Can I pull the required data from each database into two new tables, put these in a single DataSet, then perform another SELECT command on the DataSet to combine the data?
Additional Information:
The Access database is not permanent. The Access file to use is set at runtime by the user.
Here's a bit of background information to explain why there are two databases. My company uses a CAD program to design buildings. The program stores materials used in the CAD model in an Access database. There is one file for each model. I am writing a program that will generate costing information for each model. This is based on current material prices stored in a SQL Server database.
My Solution
I ended up just importing the data from the Access DB into a temporary table in the SQL Server DB, performing all the necessary processing, then removing the temporary table. It wasn't a pretty solution, but it worked.
You don't want to pull both datasets across if you don't have to. You are also going to have trouble implementing Tomalak's solution, since the file location may change and might not even be readily available to the server itself.
My guess is that your users set up an Access database with the people/products or whatever that they are interested in working with and that's why you need to select across the two databases. If that's the case, the Access table is probably smaller than the SQL Server table(s). Your best bet is to pull in the Access data, then use that to generate a filtered query to SQL Server so that you can minimize the data that is sent over the network.
So, the most important things are:
Filter the data ON THE SERVER so that you can minimize network traffic and also because the database is going to be faster at filtering than ADO.NET
If you have to choose a dataset to pull into your application, pull in the smaller dataset and then use that to filter the other table.
Assuming SQL Server can get to the Access database, you could construct an OPENROWSET query across them.
SELECT a.*
FROM SqlTable AS a
JOIN OPENROWSET(
'Microsoft.Jet.OLEDB.4.0',
'C:\Program Files\Microsoft Office\OFFICE11\SAMPLES\Northwind.mdb';'admin';'',
Orders
) AS b ON
a.Id = b.Id
You would just change the path to the Access database at runtime to get to different MDBs.
First you need to do something on the server - reference the Access DB as a "Linked Server".
Then you will be able to query it from within the SQL server, pulling out or stuffing in data however you like. This web page gives a nice overview on how to do it.
http://blogs.meetandplay.com/WTilton/archive/2005/04/22/318.aspx
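For reference, registering an Access file as a linked server looks roughly like this; the server name and file path are placeholders, and the Jet provider must be installed on the SQL Server machine:

```sql
EXEC sp_addlinkedserver
    @server = N'ACCESS_LINK',
    @provider = N'Microsoft.Jet.OLEDB.4.0',
    @srvproduct = N'Access',
    @datasrc = N'C:\Data\MyModel.mdb';

-- Then query it with four-part naming (catalog and schema left empty for Jet):
SELECT * FROM ACCESS_LINK...Orders;
```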
If I read the question correctly, you are NOT attempting to cross reference across multiple databases.
You merely need to reference details about a particular FILE, which in this case could contain:
primary key, parent file checksum (if it is a modification), file checksum, last known author, revision number, date of last change...
And then you reference that primary key when adding information obtained from analysing that file using your program.
If you actually do need a distributed database, perhaps you would prefer to use a non-relational database such as LDAP.
If you can't use LDAP but must use a relational database, you might consider using GUIDs to ensure that your primary keys are good.
Since you don't give enough information, I'm going to have to make some assumptions.
Assuming:
The SQL Server and the Access Database are not on the same computer
The SQL Server cannot see the Access database over a file share (or it would be too difficult to achieve this).
You don't need to do joins between the Access database and the SQL Server; you only need data from the Access database as lookup elements in your WHERE clause.
If the above assumptions are correct, then you can simply use ADO.NET to open the Access database and retrieve the data you need, possibly into a DataSet or DataTable. Then extract the data you need and feed it to a separate ADO.NET query against your SQL Server, via a dynamic WHERE clause, a prepared statement, or parameters to a stored procedure.
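A sketch of that two-step approach (the provider string, table, and column names are placeholders; the parameterized IN list assumes a modest number of ids, and the caller opens both connections):

```csharp
using System;
using System.Collections.Generic;
using System.Data.OleDb;
using System.Data.SqlClient;
using System.Linq;

public static class TwoStepLookup
{
    public static SqlDataReader QueryByAccessKeys(OleDbConnection access, SqlConnection sql)
    {
        // 1. Pull the lookup keys out of the Access database.
        var ids = new List<int>();
        using (var cmd = new OleDbCommand("SELECT MaterialId FROM Materials", access))
        using (var reader = cmd.ExecuteReader())
            while (reader.Read())
                ids.Add(reader.GetInt32(0));

        if (ids.Count == 0)
            throw new InvalidOperationException("No keys found in the Access database.");

        // 2. Feed them to SQL Server as parameters so filtering happens server-side.
        var names = ids.Select((_, i) => "@p" + i).ToList();
        var select = new SqlCommand(
            "SELECT * FROM dbo.MaterialPrices WHERE MaterialId IN (" +
            string.Join(",", names) + ")", sql);
        for (int i = 0; i < ids.Count; i++)
            select.Parameters.AddWithValue(names[i], ids[i]);

        return select.ExecuteReader();
    }
}
```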
The other solutions people are giving all assume you need to do joins on your data or otherwise execute SQL which includes both databases. To do that, you have to use linked databases, or else import the data into a table (perhaps temporary).
Have you tried benchmarking what happens if you link from the Access front end to your SQL Server via ODBC and write your SQL as though both tables are local? You could then run a trace on the server to see exactly what Jet sends to the server. You might be surprised at how efficient Jet is with this kind of thing. If you're linking on a key field (e.g., an ID field, whether from the SQL Server or not), it would likely be the case that Jet would send a list of the IDs. Or you could write your SQL to do it that way (using IN (SELECT ...) in your WHERE clause).
Basically, how efficient things will be depends on where your WHERE clause is going to be executed. If, for instance, you are joining a local Jet table with a linked SQL Server table on a single field, and filtering the results based on values in the local table, it's very likely to be extremely efficient, in that the only thing Jet will send to the server is whatever is necessary to filter the SQL Server table.
Again, though, it's going to depend entirely on exactly what you're trying to do (i.e., which fields you're filtering on). But give Jet a chance to show whether it's smart, rather than assuming off the bat that Jet will screw it up. It may very well require some tweaking to get Jet to work efficiently, but if you can keep all your logic client-side, you're better off than trying to muck around with tracking all the Access databases from the server.
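For example, the IN (SELECT ...) pattern in Access (Jet) SQL, where LinkedPrices stands in for the ODBC-linked SQL Server table and LocalMaterials for a local Jet table (both names are placeholders):

```sql
SELECT p.*
FROM LinkedPrices AS p
WHERE p.MaterialId IN (SELECT m.MaterialId FROM LocalMaterials AS m);
```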