I have a LINQ to SQL DataContext that queries four different tables, but I need to move one of those tables to another database. Is it possible to use one database and connection string for certain tables and a different one for another table?
So right now I have something like:
[global::System.Data.Linq.Mapping.DatabaseAttribute(Name="DATABASE1")]
public partial class DataClassesDataContext : System.Data.Linq.DataContext
{
    private static System.Data.Linq.Mapping.MappingSource mappingSource = new AttributeMappingSource();

    public DataClassesDataContext() :
        base(global::System.Configuration.ConfigurationManager.ConnectionStrings["DATABASE1ConnectionString"].ConnectionString, mappingSource)
    {
        OnCreated();
    }

    public DataClassesDataContext(string connection) :
        base(connection, mappingSource)
    {
        OnCreated();
    }
}
So right now that handles all four tables. I would like it to handle the first three tables and have another context for the last one. Is this possible?
Thanks!
Not directly; the most obvious thing would be to split the data-context into two separate data-context classes (and two dbml setups).
If you are careful, you could leave it as-is and just explicitly supply the connection string to each data-context instance, taking care not to touch the wrong tables; however, this is risky. In particular, leaving it intact means you might still have queries that try to join between tables that now live in different databases, which won't work.
The data-context here is only designed to work in a single database.
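For illustration, a minimal sketch of what the split might look like; the class names and the "DATABASE2ConnectionString" key are hypothetical, and in practice each dbml would generate its own context:

using System.Configuration;
using System.Data.Linq;

// One context per database.
public partial class MainDataContext : DataContext   // the three tables that stay
{
    public MainDataContext()
        : base(ConfigurationManager.ConnectionStrings["DATABASE1ConnectionString"].ConnectionString)
    { }
}

public partial class OtherDataContext : DataContext  // the table that moves
{
    public OtherDataContext()
        : base(ConfigurationManager.ConnectionStrings["DATABASE2ConnectionString"].ConnectionString)
    { }
}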
LINQ to SQL works best when all the data you need is in the same database. If you start moving tables to another database, cross-database joins can be a pain.
http://www.enderminh.com/blog/archive/2009/04/25/2654.aspx
I have had the same issue in the past, and the way I overcame it was to move the table as stated, then create a view in the original database that references the table.
There is one drawback: the view is then read-only. Going forward, though, I wouldn't recommend this approach; I would recommend a separate data context for each database.
We have tackled a similar situation by creating the LINQ to SQL context on a development database that has all the tables in one database, and then creating a synonym in the production database to point to the table(s) in the other database, and it all Just Works.
A brief outline of how it works:
Dev environment:
use [TheDatabase]
go
create table Table1
(
    -- stuff goes here
)
go
create table Table2
(
    -- stuff goes here
)
go
create table Table3
(
    -- stuff goes here
)
Production environment:
use [Database2]
go
create table Table3
(
    -- stuff goes here
)
go
use [Database1]
go
create table Table1
(
    -- stuff goes here
)
go
create table Table2
(
    -- stuff goes here
)
go
create synonym Table3 for Database2.dbo.Table3
Obviously, depending on your environment it might not work in your situation, but it has worked well for us.
I need to build a GUI to query a database, and I opted to use Entity Framework and the database-first approach.
The database has a flaw in its layout, in my opinion, and I wonder what my options are to correct this in the EF model (and how).
The database looks like this:
CREATE TABLE a (
idA int
)
CREATE TABLE b (
idB int
)
CREATE TABLE c (
idC int,
fkA int,
fkB int
)
The design issue I see is that items in B do not exist alone; they are always related to A. The following tables would make more sense:
CREATE TABLE a (
idA int
)
CREATE TABLE b (
idB int,
fkA int
)
CREATE TABLE c (
idC int,
fkB int
)
In words: c is set up to be a child of independent a and b, while in reality b is always a child of a, and c is a child of b (and of a in consequence).
How would I modify the generated model to change this, if it is possible at all? Using Visual Studio and the EDMX model editor, obviously; but what changes do I need to make to the model so that it still loads from the flawed database layout, but offers the corrected one to the GUI?
The GUI will only read data; there is no need to write anything at any time.
Thanks!
If you want to apply changes to the database you'll need more than read-only access. When the changes are made and saved in the database, you can update the model.
If you are using an ADO.NET .edmx file, updating is easy: select the .edmx file, choose "Update Model from Database", select the related tables, and follow the rest of the GUI.
If you need to do something from the Package Manager Console or from the dotnet console, it is wise to use a tutorial from here.
Update from OP
It's possible, but it could result in many unwanted side effects.
When you are getting the objects from the database, you should box them into new objects that represent the newly wanted structure. Another way to do this is to change the model: you'll need to add properties to the generated classes. Be warned that your code will suffer from lots of side effects.
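A rough sketch of the "box into new objects" idea, using the tables from the question; the context and the entity/property names are assumptions about what database-first would generate:

using System.Collections.Generic;
using System.Linq;

// Read-only view models exposing the corrected hierarchy: b under a, c under b.
public class BView
{
    public int IdB { get; set; }
    public List<int> ChildCIds { get; set; }
}

public class AView
{
    public int IdA { get; set; }
    public List<BView> Children { get; set; }
}

// Pull the flat c rows once, then reshape them in memory (the GUI only reads):
var rows = context.c.Select(x => new { x.idC, x.fkA, x.fkB }).ToList();

var aViews = rows
    .GroupBy(r => r.fkA)
    .Select(ga => new AView
    {
        IdA = ga.Key,
        Children = ga
            .GroupBy(r => r.fkB)
            .Select(gb => new BView
            {
                IdB = gb.Key,
                ChildCIds = gb.Select(r => r.idC).ToList()
            })
            .ToList()
    })
    .ToList();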
The product I'm working on will need to support different database types. At first it needs to support SQL Server and Oracle, but in the future it may need to support IBM DB2 and PostgreSQL.
The product will also need to work for different customers who might have slightly different schemas. For example, a column name on one client with SQL Server might be _ID, and on another client with Oracle it could be I_ID.
The general schema will be the same except for the column names; they could all potentially be mapped to the same object. But there may be some extra columns that are specific to each customer. These do not need to be mapped to an object, though; they can be retrieved in a Master-Detail scenario in a simpler way.
I wanted to use an ORM, as we will need to support different database providers. But as far as I can tell, ORMs are not good at creating mappings at runtime.
To summarize the requirements:
Column names may be different for each customer, but they are pretty much the same columns except for the names.
Database provider may be different for each customer.
There may be extra columns for each customer.
Edit: The program should be able to support a new database by changing its configuration at runtime.
What is a good way to create a data access for such specifications? Is there a way to do it with ORMs? Or do I need to write code specific to each database to support this scenario? Do I have any other option that would make it easier than using ADO.NET directly?
Edit: I think I wrote my question a bit too broadly and didn't explain it clearly, sorry about that. The problem is that I won't be creating the databases. They will already exist, and the program should be able to work with a new database after being configured at runtime. I have no control over the databases.
The other thing is, of course it is possible to do this by creating SQL statements in the program, but that is really cumbersome. All these providers have slightly different rules and different SQL implementations, so it is a lot of work. I was wondering if I could use something like an ORM to make it easier for me.
Edit 2: I am totally aware that this is a stupid way to do things and that it reflects bad design decisions. But I have spent so many hours trying to convince my company not to do it this way; they don't want to change their way of thinking because an intern tells them so. So any help would be appreciated.
Column names may be different for each customer, but they are pretty much the same columns except for the names.
Because of this requirement alone, you're going to have to build your SQL statements dynamically on your own, but it's really pretty straightforward. I would recommend building a table like this:
CREATE TABLE DataTable (
ID INT PRIMARY KEY NOT NULL,
Name SYSNAME NOT NULL
)
to store all of the tables in the database. Then build one like this:
CREATE TABLE DataTableField (
ID INT PRIMARY KEY NOT NULL,
DataTableID INT NOT NULL,
Name SYSNAME NOT NULL
)
to store the base names for the fields. You'll just have to pick a schema and call it the baseline. That's what goes in those two tables. Then you have a table like this:
CREATE TABLE Customer (
ID INT PRIMARY KEY NOT NULL,
Name VARCHAR(256) NOT NULL
)
to store all the unique customers you have using the product, and then finally a table like this:
CREATE TABLE CustomerDataTableField (
ID INT PRIMARY KEY NOT NULL,
CustomerID INT NOT NULL,
DataTableFieldID INT NOT NULL,
Name SYSNAME,
IsCustom BIT
)
to store the different field names for each customer. We'll discuss IsCustom in a minute.
Now you can leverage these tables to build your SQL statements dynamically. In C#, you might cache all this data up front when the application first loads and then use those data structures to build the SQL statements. But get started on that and if you have specific questions about that then create a new question, add the code you already have, and let us know where you're having trouble.
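Sketched out, the C# side might look something like this; all of the names here are hypothetical, and the column names must come from your own metadata tables (never from user input) so the dynamic SQL stays injection-safe:

using System.Collections.Generic;
using System.Linq;

// Cached per-customer metadata, loaded once from DataTable / DataTableField /
// CustomerDataTableField when the application starts.
public class FieldMap
{
    public string BaseName { get; set; }      // baseline name from DataTableField
    public string CustomerName { get; set; }  // override from CustomerDataTableField
}

public static class SqlBuilder
{
    // Alias each customer-specific column back to its baseline name, so the
    // rest of the application only ever sees the baseline schema.
    public static string BuildSelect(string tableName, IEnumerable<FieldMap> fields)
    {
        var columns = string.Join(", ",
            fields.Select(f => f.CustomerName + " AS " + f.BaseName));
        return "SELECT " + columns + " FROM " + tableName;
    }
}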
Database provider may be different for each customer.
Here you're going to need to use something like Dapper because it works with POCO classes (like what you'll be building) and it also simply extends the IDbConnection interface so it doesn't matter what concrete class you use (e.g. SqlConnection or OracleConnection), it works the same.
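For example (a sketch; MyTable and the dynamically built SQL come from the approach above):

using System.Collections.Generic;
using System.Data;
using Dapper;

public static class MyTableRepository
{
    // connection can be a SqlConnection, OracleConnection, etc.; Dapper's
    // Query<T> extends IDbConnection itself, so the calling code is identical.
    public static IEnumerable<MyTable> LoadAll(IDbConnection connection, string sql)
    {
        return connection.Query<MyTable>(sql);
    }
}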
There may be extra columns for each customer.
This is actually quite straightforward. Leverage the IsCustom field in the CustomerDataTableField table to add those fields to your dynamically built SQL statements. That solves the database side. Now, to solve the class side, I'm going to recommend you leverage partial classes. Consider a class like this:
public partial class MyTable
{
    public int ID { get; set; }
    public string Field1 { get; set; }
}
and that represents the baseline schema. Now, everything maps into those fields except those marked IsCustom, so we need to do something about those. Well, let's build an extension to this class:
public partial class MyTable
{
    public string Field2 { get; set; }
}
and so now when you build a new MyTable() it will always have these additional fields. But you don't want that for every customer, do you? Well, that's why we use partial classes: you keep these partial definitions in separate source files that only get compiled into the build for the right customer (all parts of a partial class have to end up in the same assembly, so this is a per-customer build rather than a separately installed DLL). Now you have a bunch of small, customer-specific extensions to the system, and they are easily developed, installed, and maintained.
I have the following scenario: there is a database that generates a new logTable every year. It started in 2001 and now has 11 tables. They all have the same structure, thus the same fields, indexes, PKs, etc.
I have some classes called managers that, as the name says, manage every operation on this DB. For each different table I have a manager, except for these log tables, for which I have only one manager.
I've read a lot and tried different things, like using ITable to get tables dynamically, or an interface that all my log tables implement. Unfortunately, I lose strongly typed properties, and with that I can't do any searches or updates or anything, since I can't use logTable.Where(q => q.ID == paramId).
Considering that those tables have the same structure, a query that searches logs from 2010 can be the exact one that searches logs from 2011 and on.
I'm only asking this because I wouldn't like to rewrite the same code for each table, since they are identical in structure.
EDIT
I'm using LINQ to SQL as my ORM. And these tables are used with all DB operations, not just select.
Consider putting all your logs in one table and using partitioning to maintain performance. If that is not feasible, you could create a view that unions all the log tables together and use that when selecting log data. That way, when you add a new log table, you just update the view to include it.
EDIT Further to the most recent comment:
Sounds like you need a new DBA if he won't let you create new SPs. Yes, I think you could define an ILogTable interface and then make your log table classes implement it, but that would not allow you to do GetTable<ILogTable>(). You would have to have some kind of DAL class with a method that creates a union query, e.g.
public IEnumerable<ILogTable> GetLogs()
{
    // Table classes are assumed to be named Logs2010/Logs2011 here,
    // since C# identifiers cannot start with a digit.
    var log2010 = from log in DBContext.Logs2010
                  select (ILogTable)log;
    var log2011 = from log in DBContext.Logs2011
                  select (ILogTable)log;
    return log2010.Concat(log2011);
}
Above code is completely untested and may fail horribly ;-)
Edited to keep #AS-CII happy ;-)
You might want to look into the CodePlex Fluent Linq to SQL project. I've never used it, but I'm familiar with the ideas from using similar mapping techniques in EF4. You could create a single object and map it dynamically to different tables using syntax such as:
public class LogMapping : Mapping<Log>
{
    public LogMapping(int year)
    {
        Named("Logs" + year);
        // Column mappings...
    }
}
As long as each of your queries returns the same shape, you can use ExecuteQuery<Log>("Select cols From LogTable" + instance). Just be aware that ExecuteQuery is one case where LINQ to SQL allows for SQL injection. I discuss how to parameterize ExecuteQuery at http://www.thinqlinq.com/Post.aspx/Title/Does-LINQ-to-SQL-eliminate-the-possibility-of-SQL-Injection.
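A hedged sketch of what that looks like: the {0} placeholder becomes a real SQL parameter, while the table-name suffix, which cannot be parameterized, is validated before being spliced in (the Log class and the Logs-plus-year naming are assumptions):

using System;
using System.Collections.Generic;
using System.Data.Linq;

public static IEnumerable<Log> GetLogs(DataContext context, int year, int minId)
{
    // Identifiers can't be parameters, so whitelist the year ourselves.
    if (year < 2001 || year > DateTime.Now.Year)
        throw new ArgumentOutOfRangeException("year");

    // {0} is translated into a real SQL parameter by ExecuteQuery.
    return context.ExecuteQuery<Log>(
        "SELECT * FROM Logs" + year + " WHERE ID > {0}", minId);
}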
Let's say my database contains multiple schemas, for example HumanRessources and Inventory.
Each of those schemas contains multiple tables. Do you usually split your DB into multiple edmx files, or just put everything in one single edmx?
I was thinking about creating an edmx for each schema, but I wonder how this will impact a unit of work pattern. Reading through some articles, the ObjectContext is the unit of work. By defining 2 edmx files, I will end up with 2 ObjectContexts, HumanRessourceContext and InventoryContext, meaning each will be its own unit of work. What if I want all modifications made to an entity in the HumanRessource context and an entity in the Inventory context to be ATOMIC? Can this be achieved with the unit of work pattern?
While this isn't an endorsement of splitting up the database by schema into EDMXs, you can make the updates atomic by using a TransactionScope:
using (TransactionScope trans = new TransactionScope())
{
    using (HumanResources hr = new HumanResources())
    {
        //...
        hr.SaveChanges();
    }

    using (Inventory inv = new Inventory())
    {
        //...
        inv.SaveChanges();
    }

    trans.Complete();
}
Obviously you can rearrange your context objects however you like (if you need to use them both at the same time, for instance) and you can alter the transaction isolation level to whatever is appropriate, but this should give you what you need to know in order to make your database changes atomic.
If your Inventory and HumanResources tables don't have any relationships between them, splitting up the tables into two edmx files is fine, though I don't know what benefit it would offer. If they do have direct or indirect relationships, you will run into problems trying to use those relationships. The simplest solution is to use a single EDM.
I have the following situation:
Customers contain projects, and projects contain licenses.
Because of archiving we won't delete anything; we use an IsDeleted flag instead. Otherwise I could have used cascade deletion.
Okay, I work with the repository pattern, so I call
customerRepository.Delete(customer);
But here the problem starts. The customer is set to IsDeleted = true, but then I would like to delete all the projects of that customer, and each project that gets deleted should delete all its licenses as well.
I would like to know if there is a proper solution for this.
It has to be performant, though.
Take note that this is a simplified version of the actual problem. A customer also has sites, which are also linked to licenses, but I just wanted to simplify the problem for you guys.
I'm working in a C# environment using SQL Server 2008 as the database.
edit: I'm using Enterprise Library to connect to the database
One option would be to do this in the database with triggers. I guess another option would be to use cascading updates, but that might not fit in with how your domain works.
Personally, I'd probably just bite the bullet and write C# code to do the setting of the IsDeleted-type field for me (if there was one and only one app accessing the DB).
I recommend just writing a stored procedure (or group of stored procedures) to encapsulate this logic, which would look something like this:
update Customer set isDeleted = 1
where CustomerId = @CustomerId

/* Say the Address table has a foreign key to Customer */
update Address set isDeleted = 1
where CustomerId = @CustomerId

/*
  To delete related records that also have child data,
  write and call other procedures to handle the details
*/
exec DeleteProjectByCustomer @CustomerId

/* ... etc ... */
Then call this procedure from customerRepository.Delete within a transaction.
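Since the question mentions Enterprise Library, the repository call might look roughly like this (the procedure and method names are assumptions; the ambient TransactionScope enlists the Data Access Application Block's connection):

using System.Data;
using System.Data.Common;
using System.Transactions;
using Microsoft.Practices.EnterpriseLibrary.Data;

public class CustomerRepository
{
    public void Delete(int customerId)
    {
        Database db = DatabaseFactory.CreateDatabase();

        using (TransactionScope scope = new TransactionScope())
        {
            // DeleteCustomer wraps the soft-delete updates shown above.
            DbCommand cmd = db.GetStoredProcCommand("DeleteCustomer");
            db.AddInParameter(cmd, "@CustomerId", DbType.Int32, customerId);
            db.ExecuteNonQuery(cmd);

            scope.Complete();
        }
    }
}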
This totally depends on your DAL. For instance, NHibernate mappings can be set up to cascade deletes to all these associated objects without extra code. I'm sure EF has something similar. How are you connecting to your DB?
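For instance, if it turned out to be NHibernate, a Fluent NHibernate mapping can declare the cascade declaratively; a sketch, assuming a Customer class with a Projects collection:

using FluentNHibernate.Mapping;

public class CustomerMap : ClassMap<Customer>
{
    public CustomerMap()
    {
        Id(x => x.Id);
        Map(x => x.IsDeleted);

        // Cascades real deletes from Customer to Projects; a Project mapping
        // would cascade to Licenses the same way. Soft deletes via IsDeleted
        // would still need your own logic or an NHibernate interceptor.
        HasMany(x => x.Projects)
            .Cascade.AllDeleteOrphan();
    }
}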
If your objects aren't persisted, then the .NET GC will sweep all your project objects away once there are no references to them. I presume from your question, though, that you are talking about removing them from the database?
If your relationships are fixed (i.e. a license is always related to a project, and a project to a customer), you can get away with not cascading the update at all. Since you're already dealing with the pain of soft deletes in your queries, you might as well add in the pain of checking the hierarchy:
SELECT [...] FROM License l
JOIN Project p ON l.ProjectID = p.ID
JOIN Customer c on p.CustomerID = c.ID
WHERE l.IsDeleted <> 1 AND p.IsDeleted <> 1 AND c.IsDeleted <> 1
This will add a performance burden only in the case where you have queries on a child table that don't join to the ancestor tables.
It has an additional merit over a cascading approach: it lets you undelete items without automatically undeleting their children. If I delete one of a project's licenses, then delete the project, then undelete the project, a cascading approach will lose the fact that I deleted that first license. This approach won't.
In your object model, you'd implement it like this:
private bool _IsDeleted;

public bool IsDeleted
{
    get
    {
        // Deleted if flagged itself, or if any ancestor is deleted.
        // (Note the parentheses: "_IsDeleted || (Parent == null) ? ..." would
        // parse the || before the ?: and return false whenever _IsDeleted is true.)
        return _IsDeleted || (Parent != null && Parent.IsDeleted);
    }
    set
    {
        _IsDeleted = value;
    }
}
...though you must be careful to actually store the private _IsDeleted value in the database, and not the value of IsDeleted.
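If the object model were mapped with LINQ to SQL, for example, the Storage parameter on ColumnAttribute does exactly this: it tells the runtime to read and write the backing field directly, bypassing the property getter. A sketch, with hypothetical Project/Customer types:

using System.Data.Linq.Mapping;

public class Customer
{
    public bool IsDeleted { get; set; }
}

public partial class Project
{
    private bool _IsDeleted;

    public Customer Parent { get; set; }  // set up by your association mapping

    // Storage persists the raw _IsDeleted field, not the computed property
    // that also consults the parent.
    [Column(Name = "IsDeleted", Storage = "_IsDeleted")]
    public bool IsDeleted
    {
        get { return _IsDeleted || (Parent != null && Parent.IsDeleted); }
        set { _IsDeleted = value; }
    }
}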