I'm fairly new to databases and SQL, and I'm struggling to understand how SQL Change Tracking and Microsoft Sync Framework work together.
I couldn't find many clear examples of how to sync databases with Microsoft Sync Framework, but luckily I found this site, modified the code, and got syncing working between my two databases. Here is the code I ended up with:
// Server connection
using (SqlConnection serverConn = new SqlConnection(serverConnectionString))
{
    if (serverConn.State == ConnectionState.Closed)
        serverConn.Open();

    // Client connection
    using (SqlConnection clientConn = new SqlConnection(clientConnectionString))
    {
        if (clientConn.State == ConnectionState.Closed)
            clientConn.Open();

        const string scopeName = "DifferentPKScope";

        // Provision server
        var serverProvision = new SqlSyncScopeProvisioning(serverConn);
        if (!serverProvision.ScopeExists(scopeName))
        {
            var serverScopeDesc = new DbSyncScopeDescription(scopeName);
            var serverTableDesc = SqlSyncDescriptionBuilder.GetDescriptionForTable(table, serverConn);

            // Add the table to the scope descriptor
            serverScopeDesc.Tables.Add(serverTableDesc);
            serverProvision.PopulateFromScopeDescription(serverScopeDesc);
            serverProvision.Apply();
        }

        // Provision client
        var clientProvision = new SqlSyncScopeProvisioning(clientConn);
        if (!clientProvision.ScopeExists(scopeName))
        {
            var clientScopeDesc = new DbSyncScopeDescription(scopeName);
            var clientTableDesc = SqlSyncDescriptionBuilder.GetDescriptionForTable(table, clientConn);

            // Add the table to the scope descriptor
            clientScopeDesc.Tables.Add(clientTableDesc);
            clientProvision.PopulateFromScopeDescription(clientScopeDesc);
            clientProvision.SetCreateTrackingTableDefault(DbSyncCreationOption.CreateOrUseExisting);
            clientProvision.Apply();
        }

        // Create the sync orchestrator
        var syncOrchestrator = new SyncOrchestrator();

        // Set up providers
        var localProvider = new SqlSyncProvider(scopeName, clientConn);
        var remoteProvider = new SqlSyncProvider(scopeName, serverConn);
        syncOrchestrator.LocalProvider = localProvider;
        syncOrchestrator.RemoteProvider = remoteProvider;

        // Set the direction of the sync session
        syncOrchestrator.Direction = direction;

        // Execute the synchronization process
        return syncOrchestrator.Synchronize();
    }
}
This way, any changes are synchronized between my two databases. But I wanted my C# app to automatically synchronize both databases when something changes, so I found something called Change Tracking here. I downloaded the example code, which provides a SynchronizationHelper that also creates tables in my databases called "{TableName}_tracking". This extra table tracks the changes, and indeed it does: whenever I change something in my database, the _tracking table is updated with the rows I changed, added or removed. But Change Tracking doesn't automatically synchronize my databases, it just keeps track of the changes in them, so what is its purpose?
With the first code, synchronization works but no _tracking table is created. Does it just synchronize everything in the table no matter what changed? If that's the case, should I be using Change Tracking for big databases?
Maybe this is something trivial, but I have been googling and testing a lot of code and I can't find a clear answer.
When you install Sync Framework, it comes with a help file that includes several walkthroughs of synchronizing databases. The first link you referred to and the second one use the same sync provider, and they both have tracking tables. Sync Framework supports using either the built-in SQL Change Tracking feature or a custom change-tracking mechanism that Sync Framework creates by itself (the _tracking tables).
Sync Framework sits outside of your database, and you need to invoke it in order to fire the synchronization. Change Tracking is what it says it is: tracking changes.
If you want the databases themselves to do the sync, you might want to check out SQL Replication instead.
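If all you need is "automatic enough" synchronization from your app, one option is simply to invoke your existing sync routine on a schedule. Here is a minimal sketch, assuming a hypothetical SyncTables method that wraps the provisioning/orchestration code from the question and returns the statistics from Synchronize(); the method name and interval are my own, not part of the Sync Framework API:

using System;
using System.Threading;
using Microsoft.Synchronization;

class PeriodicSync
{
    static void Main()
    {
        // Run the sync every 5 minutes; adjust the interval to taste.
        using (var timer = new Timer(_ =>
        {
            SyncOperationStatistics stats =
                SyncTables("MyTable", SyncDirectionOrder.UploadAndDownload);
            Console.WriteLine("Uploaded {0}, downloaded {1} changes",
                stats.UploadChangesTotal, stats.DownloadChangesTotal);
        }, null, TimeSpan.Zero, TimeSpan.FromMinutes(5)))
        {
            Console.WriteLine("Press Enter to stop syncing.");
            Console.ReadLine();
        }
    }

    // Placeholder: wrap the provisioning and SyncOrchestrator code from the question here.
    static SyncOperationStatistics SyncTables(string table, SyncDirectionOrder direction)
    {
        throw new NotImplementedException("Wrap the sync code from the question here.");
    }
}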
Related
I'm trying to test the newly supported transactions in Mongo DB with a simple example I wrote.
I'm using Mongo DB version 4.0.5 with driver version 2.8.1.
It's only a primary instance with no shards/replicas.
I must be missing something basic in the following code.
I create a Mongo client, session & database, then start a transaction, add a document and abort the transaction. After this code, I expect nothing to change in the database, but the document is added. When debugging I can also see the document right after the InsertOne() by using Robo 3T (Mongo client GUI).
Any idea what I am missing?
var client = new MongoClient("mongodb://localhost:27017");
var session = client.StartSession();
var database = session.Client.GetDatabase("myDatabase", new MongoDatabaseSettings
{
    GuidRepresentation = GuidRepresentation.Standard,
    ReadPreference = ReadPreference.Primary,
    WriteConcern = new WriteConcern(1,
        new MongoDB.Driver.Optional<TimeSpan?>(TimeSpan.FromSeconds(30))),
});
var entities = database.GetCollection<MyEntity>("test");

session.StartTransaction();

// After this line I can already see the document in the db collection using a Mongo
// client GUI (Robo 3T), although I expect not to see it until committing
entities.InsertOne(new MyEntity { Name = "Entity" });

// This does not have any effect
session.AbortTransaction();
Edit:
It's possible to run MongoDB as a 1-node replica set, although I'm not sure what the difference is between a standalone instance and a 1-node replica set.
See my post below.
In any case, to make the insert take part in the started transaction, InsertOne() must receive the session as a parameter:
entities.InsertOne(session, new MyEntity { Name = "Entity" });
With these two changes the transaction now works.
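For reference, here is a minimal corrected sketch combining both changes. It assumes the mongod instance is already running as a replica set (see my answer below); MyEntity here is just a stand-in with the Name property used above:

using System;
using MongoDB.Driver;

public class MyEntity
{
    public string Name { get; set; }
}

class Program
{
    static void Main()
    {
        var client = new MongoClient("mongodb://localhost:27017");
        var database = client.GetDatabase("myDatabase");
        var entities = database.GetCollection<MyEntity>("test");

        using (var session = client.StartSession())
        {
            session.StartTransaction();

            // The session must be passed so the write participates in the transaction.
            entities.InsertOne(session, new MyEntity { Name = "Entity" });

            // Aborting now discards the insert; CommitTransaction() would persist it.
            session.AbortTransaction();
        }
    }
}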
This is inherently a property of MongoDB itself. (More here and here)
Transactions are only available in a replica set setup
Why isn't it available for standalone instances?
With subdocuments and arrays, document databases (MongoDB) allow related data to be unified hierarchically inside a single data structure. The document can be updated with an atomic operation, giving it the same data integrity guarantees as a multi-table transaction in a relational database.
I found a solution, although I'm not sure what the consequences are; maybe someone can point them out:
It seems it's possible to run MongoDB as a 1-node replica set (instead of a standalone) by simply adding the following to the mongod.cfg file:
replication:
  replSetName: rs1
(The replica set still has to be initiated once, e.g. by running rs.initiate() in the mongo shell, before the node becomes a primary and accepts transactions.)
Also, thanks to the following link, the code should use the overload of InsertOne() that receives the session as the first parameter (see the edit in the original post):
multiple document transaction not working in c# using mongodb 4.08 community server
I am using Entity Framework and manipulating data in a SQL Server database via stored procedures (per client request).
Data is pulled from the database via stored procedures, and the results of these stored procedures populate a SQLite db in the WinForms application.
SQLite is used for additional querying and changing of data, and the changes are then pushed back to the SQL Server db via an update stored procedure when the user syncs.
All stored procedures are on SQL Server (no inline SQL in the application).
I am faced with the scenario where multiple users can potentially attempt to update the same field, which poses two problems for me.
If they call the same stored procedure at the same time (select or update).
I am not sure what my options are here from a programming level; I don't have rights to make server changes.
If the field they are trying to update has already been updated.
For problem 2 I am trying to build in a check by date-stamping the modification, i.e. when a user syncs, SQL Server adds that sync date to a date-modified column. If another user then tries to modify the same field, I want to compare the date modified in his SQLite db with the date modified in SQL Server: if SQL Server's date modified is more recent, keep the SQL Server values; if the syncing user's modified date is more recent, use his.
I have looked into Resolving optimistic concurrency with a condition where the client wins:
using (var context = new BloggingContext())
{
    var blog = context.Blogs.Find(1);
    blog.Name = "The New ADO.NET Blog";

    bool saveFailed;
    do
    {
        saveFailed = false;
        try
        {
            context.SaveChanges();
        }
        catch (DbUpdateConcurrencyException ex)
        {
            saveFailed = true;

            // Update original values from the database
            var entry = ex.Entries.Single();
            entry.OriginalValues.SetValues(entry.GetDatabaseValues());
        }
    } while (saveFailed);
}
but this seems to only work when you query the db directly with Entity Framework, not when you update via a stored procedure.
What can I use to perform these types of checks?
OK, this is probably not the best solution, but it is what I was able to come up with, and although not tested extensively, an initial once-over seems to be ok-ish.
I am not going to mark this as the answer, but it's what I got working based on my question above.
For calling the same stored procedure at the same time, I created a class for the transactions:
public class TransactionUtils
{
    public static TransactionScope CreateTransactionScope()
    {
        var transactionOptions = new TransactionOptions();
        transactionOptions.IsolationLevel = IsolationLevel.ReadCommitted;
        transactionOptions.Timeout = TransactionManager.DefaultTimeout;
        return new TransactionScope(TransactionScopeOption.Required, transactionOptions);
    }
}
and then in code use it as follows:
var newTransactionScope = TransactionUtils.CreateTransactionScope();
try
{
    using (newTransactionScope)
    {
        using (var dbContextTransaction = db_context.Database.BeginTransaction(/*System.Data.IsolationLevel.ReadCommitted*/))
        {
            try
            {
                db_context.Database.CommandTimeout = 3600;
                db_context.Database.SqlQuery<UpdateData>(
                    "UpdateProc @Param1, @Param2, @Param3, @Param4, @Param5, @Param6, @DateModified",
                    new SqlParameter("Param1", test1),
                    new SqlParameter("Param2", test2),
                    new SqlParameter("Param3", test3),
                    new SqlParameter("Param4", test4),
                    new SqlParameter("Param6", test5),
                    new SqlParameter("DateModified", DateTime.Now)).ToList();

                dbContextTransaction.Commit();
            }
            catch (TransactionAbortedException ex)
            {
                dbContextTransaction.Rollback();
                throw;
            }
        }

        // Mark the ambient transaction scope as complete so it commits when disposed.
        newTransactionScope.Complete();
    }
}
catch (TransactionAbortedException)
{
    // The ambient transaction was rolled back; handle or log as needed.
    throw;
}
As for issue 2 (concurrency):
I could not find a way to use the built-in concurrency checks between the data on SQL Server and the data I want to update from SQLite (two different contexts).
So I am storing a date-modified value in both SQL Server and SQLite:
the SQLite date modified is updated when the user modifies a record;
the SQL Server date modified is updated when a sync runs.
Before syncing, I query the SQL Server db for the date modified of the record to be updated and compare it with SQLite's date modified for that record in an if statement, and then either run the update stored procedure for that record or not.
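Here is a minimal sketch of that pre-sync check, assuming a hypothetical MyTable with Id and DateModified columns; the helper names are mine, not from the question:

using System;
using System.Data.SqlClient;

static class SyncCheck
{
    // Hypothetical: reads the server-side DateModified for one record.
    // conn is assumed to be an open SqlConnection to the SQL Server database.
    static DateTime? GetSqlServerDateModified(SqlConnection conn, int recordId)
    {
        using (var cmd = new SqlCommand(
            "SELECT DateModified FROM MyTable WHERE Id = @Id", conn))
        {
            cmd.Parameters.AddWithValue("@Id", recordId);
            var result = cmd.ExecuteScalar();
            return result == null || result == DBNull.Value ? (DateTime?)null : (DateTime)result;
        }
    }

    // Decide whether this record should be pushed to SQL Server.
    public static bool ShouldPushRecord(SqlConnection conn, int recordId, DateTime sqliteDateModified)
    {
        DateTime? serverDateModified = GetSqlServerDateModified(conn, recordId);

        // If the server copy is newer, keep the server values and skip the update proc.
        if (serverDateModified.HasValue && serverDateModified.Value >= sqliteDateModified)
            return false;

        // Otherwise the syncing user's change wins and the update proc can run.
        return true;
    }
}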
I'm trying to write some simple Extended Events management code in C#, but am fairly new to it. I am able to set up XEvent sessions in SSMS and was able to get the LINQ stream from a created session in C# using this example.
What I would like to do now, is to be able to query a given database for what sessions exist. I could manually query the sys.dm_xe* tables and create the mapped classes for those, but it looks like the classes already exist in the Microsoft.SqlServer.Management.XEvent namespace - so I'd hate to do a poor re-implementation if something already exists.
The specific table holding what sessions exist is sys.dm_xe_sessions.
Any example code or help is appreciated. Thanks!
The class to look for is XEStore in Microsoft.SqlServer.Management.XEvent. With it you can see which extended event sessions exist, as well as create new ones.
using (SqlConnection conn = new SqlConnection(connString))
{
    XEStore store = new XEStore(new SqlStoreConnection(conn));

    if (store.Sessions[sessionName] != null)
    {
        Console.WriteLine("dropping existing session");
        store.Sessions[sessionName].Drop();
    }

    Session s = store.CreateSession(sessionName);
    s.MaxMemory = 4096;
    s.MaxDispatchLatency = 30;
    s.EventRetentionMode = Session.EventRetentionModeEnum.AllowMultipleEventLoss;

    Event rpc = s.AddEvent("rpc_completed");
    rpc.AddAction("username");
    rpc.AddAction("database_name");
    rpc.AddAction("sql_text");
    rpc.PredicateExpression = @"sqlserver.username NOT LIKE '%testuser'";

    s.Create();
    s.Start();
    //s.Stop();
    //s.Drop();
}
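To simply list which sessions already exist on the server (the sys.dm_xe_sessions part of the question), a sketch along these lines should work, assuming the same XEStore setup as above:

using (SqlConnection conn = new SqlConnection(connString))
{
    XEStore store = new XEStore(new SqlStoreConnection(conn));

    // Enumerate every Extended Events session defined on the server.
    foreach (Session session in store.Sessions)
    {
        Console.WriteLine(session.Name);
    }
}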
EntityFramework migrations become useless after switching to a new context.
DbMigrator uses the list of pending migrations from the first database instance, which means no migrations are applied to the other databases, which then leads to errors during Seed().
C# .NET 4.5 MVC project with EF 6
MS SQL Server 2014, multiple instances of same database model.
CodeFirst approach with migrations.
DbContext initializer is set to null.
On application start we have custom db initialization to create and update databases. CreateDatabaseIfNotExists works as intended: new databases have all migrations applied. However, both the MigrateDatabaseToLatestVersion initializer and our custom one fail to update any database other than the first one on the list.
foreach (var connectionString in connectionStrings)
{
    using (var context = new ApplicationDbContext(connectionString))
    {
        // Create database
        var created = context.Database.CreateIfNotExists();

        var conf = new Workshop.Migrations.Configuration();
        var migrator = new DbMigrator(conf);
        migrator.Update();

        // Initial values
        conf.RunSeed(context);
    }
}
context.Database.CreateIfNotExists(); works correctly.
migrator.GetLocalMigrations() always returns the correct values.
migrator.GetPendingMigrations() returns an empty list after the first database.
migrator.GetDatabaseMigrations() mirrors the pending migrations: after the first database it contains the full list, even for empty databases.
Fetching data (context.xxx.ToList()) from the db instance confirms the connection is up and working, and points at the correct instance.
Forcing an update to the most recent migration with migrator.Update("migration_name"); changes nothing. From what I gather by reading the EF source code, it checks the pending migration list on its own, which gives it the same faulty results.
There seems to be some caching going on under the hood, but it eludes me how to reset it.
Is there a way to perform migrations on multiple databases or is it yet another "bug by design" in EF?
Edit:
The real problem is DbMigrator creating a new context for its own use. It does this via the default parameterless constructor, which in my case fell back to the default (first) connection string in web.config.
I do not see a good solution for this problem, but a primitive workaround in my case is to temporarily edit the default connection string:
var originalConStr = WebConfigurationManager.ConnectionStrings["ApplicationDbContext"].ConnectionString;
var setting = WebConfigurationManager.ConnectionStrings["ApplicationDbContext"];
var fi = typeof(ConfigurationElement).GetField("_bReadOnly", BindingFlags.Instance | BindingFlags.NonPublic);
//disable readonly flag on field
fi.SetValue(setting, false);
setting.ConnectionString = temporaryConnectionString; //now it works
//DO STUFF
setting.ConnectionString = originalConStr; //revert changes
Cheat from: How do I set a connection string config programatically in .net?
I still hope someone will find a real solution, so for now I will refrain from self-answering.
You need to correctly set the DbMigrationsConfiguration.TargetDatabase property, otherwise the migrator will use the default connection info.
So in theory you can do something like this:
conf.TargetDatabase = new System.Data.Entity.Infrastructure.DbConnectionInfo(...);
Unfortunately, the only two public constructors of DbConnectionInfo are:
public DbConnectionInfo(string connectionName)
connectionName: The name of the connection string in the application configuration.
and
public DbConnectionInfo(string connectionString, string providerInvariantName)
connectionString: The connection string to use for the connection.
providerInvariantName: The name of the provider to use for the connection. Use 'System.Data.SqlClient' for SQL Server.
I see you have the connection string, but I have no idea how you can get the providerInvariantName.
UPDATE: I didn't find a good "official" way of getting the needed information, so I've ended up using a hack that accesses internal members via reflection, but IMO it's still quite a bit safer than the workaround you have used:
var internalContext = context.GetType().GetProperty("InternalContext", BindingFlags.Instance | BindingFlags.NonPublic).GetValue(context);
var providerName = (string)internalContext.GetType().GetProperty("ProviderName").GetValue(internalContext);
var conf = new Workshop.Migrations.Configuration();
conf.TargetDatabase = new System.Data.Entity.Infrastructure.DbConnectionInfo(connectionString, providerName);
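Putting it together with your original loop, a sketch under these assumptions (your ApplicationDbContext, Workshop.Migrations.Configuration and RunSeed from the question, plus the reflection hack above for the provider name) might look like this:

foreach (var connectionString in connectionStrings)
{
    using (var context = new ApplicationDbContext(connectionString))
    {
        context.Database.CreateIfNotExists();

        // Grab the provider invariant name via the reflection hack shown above.
        var internalContext = context.GetType()
            .GetProperty("InternalContext", BindingFlags.Instance | BindingFlags.NonPublic)
            .GetValue(context);
        var providerName = (string)internalContext.GetType()
            .GetProperty("ProviderName").GetValue(internalContext);

        var conf = new Workshop.Migrations.Configuration
        {
            // Point the migrator at this database instead of the default connection string.
            TargetDatabase = new System.Data.Entity.Infrastructure.DbConnectionInfo(connectionString, providerName)
        };

        var migrator = new DbMigrator(conf);
        migrator.Update();

        // Initial values
        conf.RunSeed(context);
    }
}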
So Connections in Dynamics CRM provide a general purpose way of linking things together.
Internally the Connections entity has a Record1Id attribute and a Record2Id attribute, among other things.
When you create a connection via the UI, CRM actually "creates two entries in the Connection table in the database. Each entry allows you to search for the related record from the originating record or the related record."
That is, if you connect A and B, it saves two rows to the (behind the scenes) table:
one with Record1Id = A and Record2Id = B
and one with Record1Id = B and Record2Id = A
This is to make searching for connections easier. If you do an Advanced Find on connections, you only have to do the search 'one way round'.
So my question is:
When you create Connections via the API (late bound), which goes something like this:
Entity connection = new Entity("connection");
connection["record1id"] = new EntityReference("contact", someContactId);
connection["record1objecttypecode"] = new OptionSetValue(2);
connection["record1roleid"] = new EntityReference("connectionrole", someConnectionRoleId);
connection["record2id"] = new EntityReference("incident", someCaseId);
connection["record2objecttypecode"] = new OptionSetValue(122);
connection["record2roleid"] = new EntityReference("connectionrole", someOtherConnectionRoleId);
var newId = service.Create(connection);
... is it sufficient to create them 'one way round' as above, and then behind the scenes CRM will create connections in both directions?
... or do you need to manually create them in both directions? (by saving twice and swapping round the record1id record2id values, etc)
Or, in other words, does the CRM API for Connections encapsulate the 'its actually two connections behind the scenes' functionality, or do you need to manually handle that yourself?
You just need to create one connection record; CRM creates the reciprocal record behind the scenes. One thing to note is that I don't think you need to set the type codes as you are doing above; just setting the logical names in the entity references should be enough. Here is the sample from the SDK:
Connection newConnection = new Connection
{
    Record1Id = new EntityReference(Account.EntityLogicalName,
        _accountId),
    Record1RoleId = new EntityReference(ConnectionRole.EntityLogicalName,
        _connectionRoleId),
    Record2RoleId = new EntityReference(ConnectionRole.EntityLogicalName,
        _connectionRoleId),
    Record2Id = new EntityReference(Contact.EntityLogicalName,
        _contactId)
};

_connectionId = _serviceProxy.Create(newConnection);
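For a late-bound version along the lines of the code in the question, the equivalent would be something like the sketch below (dropping the record1objecttypecode/record2objecttypecode attributes, per the note above); the contact/incident IDs and connection role IDs are the placeholders from the question:

// Late-bound sketch: one Create call, no explicit type codes.
Entity connection = new Entity("connection");
connection["record1id"] = new EntityReference("contact", someContactId);
connection["record1roleid"] = new EntityReference("connectionrole", someConnectionRoleId);
connection["record2id"] = new EntityReference("incident", someCaseId);
connection["record2roleid"] = new EntityReference("connectionrole", someOtherConnectionRoleId);

// CRM stores the mirrored row (record1/record2 swapped) behind the scenes.
var newId = service.Create(connection);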