I'm running the code below to update some records based on a bank transaction history file that is sent to us each morning. It's pretty basic stuff but, for some reason, when I hit the end, dbContext.GetChangeSet() reports "0" for all actions.
public void ProcessBatchFile(string fileName)
{
    List<string[]> failed = new List<string[]>();
    int recCount = 0;
    DateTime dtStart = DateTime.Now;
    using (ePermitsDataContext dbContext = new ePermitsDataContext())
    {
        try
        {
            // A transaction must be begun before any data is read.
            dbContext.BeginTransaction();
            dbContext.ObjectTrackingEnabled = true;
            // Load all the records for this batch file.
            var batchRecords = (from b in dbContext.AmegyDailyFiles
                                where b.FileName == fileName
                                && b.BatchProcessed == false
                                && (b.FailReason == null || b.FailReason.Trim().Length < 1)
                                select b);
            // Loop through the loaded records.
            int paymentID;
            foreach (var r in batchRecords)
            {
                paymentID = 0;
                try
                {
                    // We have to 'parse' the primary key, since it's stored as a string value with leading zeros.
                    if (!int.TryParse(r.TransAct.TrimStart('0'), out paymentID))
                        throw new Exception("TransAct value is not a valid integer: " + r.TransAct);
                    // Store the parsed Int32 value in the original record and read the "real" record from the database.
                    r.OrderPaymentID = paymentID;
                    var orderPayment = this.GetOrderPayment(dbContext, paymentID);
                    // If we haven't processed this payment's "Payment Received", do it now.
                    if (string.IsNullOrWhiteSpace(orderPayment.AuthorizationCode))
                        this.PaymentReceived(orderPayment, r.AuthorizationNumber);
                    // Update the PaymentTypeDetailID (type of Credit Card--all other types will return NULL).
                    var paymentTypeDetail = dbContext.PaymentTypes.FirstOrDefault(w => w.PaymentType1 == r.PayType);
                    orderPayment.PaymentTypeDetailID = (paymentTypeDetail != null ? (int?)paymentTypeDetail.PaymentTypeID : null);
                    // Mark the batch record as processed.
                    r.BatchProcessed = true;
                    r.BatchProcessedDateTime = DateTime.Now;
                    dbContext.SubmitChanges();
                }
                catch (Exception ex)
                {
                    // If there's a problem, just record the error message and add it to the "failed" list for logging and notification.
                    if (paymentID > 0)
                        r.OrderPaymentID = paymentID;
                    r.BatchProcessed = false;
                    r.BatchProcessedDateTime = null;
                    r.FailReason = ex.Message;
                    failed.Add(new string[] { r.TransAct, ex.Message });
                    dbContext.SubmitChanges();
                }
                recCount++;
            }
            dbContext.CommitTransaction();
        }
        // Any transaction will already be committed if the process completed successfully. I just want to make
        // absolutely certain that there's no chance of leaving a transaction open.
        finally { dbContext.RollbackTransaction(); }
    }
    TimeSpan procTime = DateTime.Now.Subtract(dtStart);
    // Send an email notification that the processor completed.
    System.Text.StringBuilder sb = new System.Text.StringBuilder();
    sb.AppendFormat("<p>Processed {0} batch records from batch file '{1}'.</p>", recCount, fileName);
    if (failed.Count > 0)
    {
        sb.AppendFormat("<p>The following {0} records failed:</p>", failed.Count);
        sb.Append("<ul>");
        for (int i = 0; i < failed.Count; i++)
            sb.AppendFormat("<li>{0}: {1}</li>", failed[i][0], failed[i][1]);
        sb.Append("</ul>");
    }
    sb.AppendFormat("<p>Time taken: {0}:{1}:{2}.{3}</p>", procTime.Hours, procTime.Minutes, procTime.Seconds, procTime.Milliseconds);
    EMailHelper.SendAdminEmailNotification("Batch Processing Complete", sb.ToString(), true);
}
The dbContext.BeginTransaction() method is something I added to the DataContext just to make it easy to use explicit transactions. I'm fairly confident that this isn't the problem, since it's used extensively elsewhere in the application. Our database design makes it necessary to use explicit transactions for a few, specific operations, and the call to "PaymentReceived" happens to be one of them.
I have stepped through the code and confirmed that the Rollback() method on the transaction itself is not being called, and I have also checked dbContext.GetChangeSet() before the call to CommitTransaction() happens, with the same result.
I have included the BeginTransaction(), CommitTransaction() and RollbackTransaction() method bodies below, just for clarity.
/// <summary>
/// Begins a new explicit transaction on this context. This is useful if you need to perform a call to SubmitChanges multiple times due to "circular" foreign key linkage, but still want to maintain an atomic write.
/// </summary>
public void BeginTransaction()
{
    if (this.HasOpenTransaction)
        return;
    if (this.Connection.State != System.Data.ConnectionState.Open)
        this.Connection.Open();
    System.Data.Common.DbTransaction trans = this.Connection.BeginTransaction();
    this.Transaction = trans;
    this._openTrans = true;
}

/// <summary>
/// Commits the current transaction (if active) and submits all changes on this context.
/// </summary>
public void CommitTransaction()
{
    this.SubmitChanges();
    if (this.Transaction != null)
        this.Transaction.Commit();
    this._openTrans = false;
    this.RollbackTransaction(); // Since the transaction has already been committed, this just disposes and decouples the transaction object itself.
}

/// <summary>
/// Disposes and removes an existing transaction on this context. This is useful if you want to use the context again after an explicit transaction has been used.
/// </summary>
public void RollbackTransaction()
{
    // Kill/roll back the transaction, as necessary.
    try
    {
        if (this.Transaction != null)
        {
            if (this._openTrans)
                this.Transaction.Rollback();
            this.Transaction.Dispose();
            this.Transaction = null;
        }
        this._openTrans = false;
    }
    catch (ObjectDisposedException) { } // If this gets called after the object is disposed, we don't want to let it throw exceptions.
}
I just found the problem: my DBA didn't put a primary key on the table when he created it for me, so LINQ to SQL did not generate any of the "PropertyChanged" event/handler plumbing in the entity class, which is why the DataContext was not aware that changes were being made. Apparently, if your table has no primary key, LINQ to SQL won't track any changes to that table. That makes sense, but it would be nice if there were some kind of notification to that effect. I'm sure my DBA didn't think about it, because this table is just a way of tracking which line items from the text file have been processed and doesn't directly relate to any other tables.
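For anyone hitting the same symptom: a quick way to see whether LINQ to SQL considers an entity updatable is to check that its mapping declares a primary key. Here's a minimal sketch using attribute mapping (the table and column names are illustrative, not my actual schema; in a DBML-generated class the equivalent fix is setting the primary key in the designer):

```csharp
using System.Data.Linq.Mapping;

[Table(Name = "dbo.AmegyDailyFiles")]
public class AmegyDailyFile
{
    // Without IsPrimaryKey = true on at least one column, LINQ to SQL
    // treats the entity as read-only: no change-tracking plumbing is
    // generated, GetChangeSet() stays empty, and SubmitChanges() is a no-op.
    [Column(IsPrimaryKey = true)]
    public int RecordID { get; set; }

    [Column]
    public bool BatchProcessed { get; set; }
}
```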
I have a piece of code where multiple threads consume IDs from a shared ConcurrentBag<string>, like the following:
var ids = new ConcurrentBag<string>();
// List contains, let's say, 10 IDs.
var apiKey = ctx.ApiKey.FirstOrDefault();
Parallel.ForEach(ids, id =>
{
    try
    {
        // Perform API calls
    }
    catch (Exception ex)
    {
        if (ex.Message == "Expired")
        {
            // The idea is that only one thread should access the DB record to update it, not multiple ones.
            using (var ctx = new MyEntities())
            {
                var findApi = ctx.ApiKeys.Find(apiKey.RecordId);
                findApi.Expired = DateTime.Now.AddHours(1);
                findApi.FailedCalls += 1;
                ctx.SaveChanges();
            }
        }
    }
});
So in a situation like this, if I have a list of 10 IDs and 1 key being used for the API calls, once the key reaches its hourly call limit, I will catch the exception from the API and then flag the key not to be used for the next hour.
However, in the code I have pasted above, all 10 threads will access the record from the DB and increment the failed-call count 10 times, instead of just once.
So my question here is: how do I prevent all of the threads from updating the DB record, and instead allow only one thread to access the DB and update the record (increment the failed calls by 1)?
It looks like you only need to update apiKey.RecordId once if an error occurred, why not just track the fact that an error occurred and update once at the end? e.g.
var ids = new ConcurrentBag<string>();
// List contains, let's say, 10 IDs.
var apiKey = ctx.ApiKey.FirstOrDefault();
var expired = false;
Parallel.ForEach(ids, id =>
{
    try
    {
        // Perform API calls
    }
    catch (Exception ex)
    {
        if (ex.Message == "Expired")
        {
            expired = true;
        }
    }
});
if (expired)
{
    // Only this single code path touches the DB record, after all threads have finished.
    using (var ctx = new MyEntities())
    {
        var findApi = ctx.ApiKeys.Find(apiKey.RecordId);
        findApi.Expired = DateTime.Now.AddHours(1);
        findApi.FailedCalls += 1;
        ctx.SaveChanges();
    }
}
You are in a parallel loop, therefore the most likely behaviour is that each of the 10 threads is going to fire, try to connect to your API with the expired key, and then fail, throwing the exception.
There are a couple of reasonable solutions to this:
Check the key before you use it
Can you take the first run through the loop out of sequence? For example:
var ids = new ConcurrentBag<string>();
var apiKey = ctx.ApiKey.FirstOrDefault();
bool expired = true;
try
{
    // Perform API calls for the first id
    expired = false;
}
catch (Exception ex)
{
    // Log to database once
}
// Or grab another, newer key?
if (!expired)
{
    Parallel.ForEach(ids.Skip(1), id =>
    {
        // Perform API calls
    });
}
This would work reasonably well if the key is likely to have expired before you use it, but will remain valid while you are using it.
Hold on to the failures
If the key is possibly valid when you start but could expire while you are using it, you might want to capture that failure and then log it at the end.
var ids = new ConcurrentBag<string>();
var apiKey = ctx.ApiKey.FirstOrDefault();
// Assume the key hasn't expired - don't set this to false within the loops.
bool expired = false;
Parallel.ForEach(ids, id =>
{
    try
    {
        // Perform API calls
    }
    catch (Exception e)
    {
        if (e.Message == "Expired")
        {
            // Doesn't matter if many threads set this to true.
            expired = true;
        }
    }
});
if (expired)
{
    // Log to database once.
}
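If you do need the update to happen from inside the loop (rather than after it), you can still guarantee it runs exactly once with an interlocked one-shot flag. This is a generic sketch of the pattern, not the poster's actual code; the one-time DB update (open the context, bump FailedCalls, SaveChanges) would go where the counter increment is:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class OneShotFlagDemo
{
    static int expiredFlag;   // 0 = not handled yet, 1 = handled
    static int updateCount;   // stands in for the one-time DB update

    static void OnExpired()
    {
        // CompareExchange atomically flips 0 -> 1 and returns the old value,
        // so exactly one thread observes 0 and performs the update.
        if (Interlocked.CompareExchange(ref expiredFlag, 1, 0) == 0)
        {
            Interlocked.Increment(ref updateCount); // open context + SaveChanges here
        }
    }

    public static int Run()
    {
        expiredFlag = 0;
        updateCount = 0;
        Parallel.For(0, 10, i => OnExpired()); // 10 workers all report "Expired"
        return updateCount;
    }

    public static void Main()
    {
        Console.WriteLine(Run()); // prints 1
    }
}
```

Unlike a plain bool, this also works when the expensive follow-up must not be attempted twice concurrently.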
I've got a table in a database:

USERID   MONEY
--------------
1        500
The money value can be changed only by the logged-in user that owns the account. I've got a function like:
bool buy(int moneyToSpend)
{
    var moneyRow = db.UserMoney.Find(loggedinUserID);
    if (moneyRow.MONEY < moneyToSpend)
        return false;
    // code for placing order
    moneyRow.MONEY -= moneyToSpend;
    return true;
}
I know that MVC sessions are always synchronous, so there will never be two simultaneous calls to this function in one user session. But what if the user logs in to the site twice from different browsers? Will it still be a single-threaded session, or can I get two concurrent requests to this function?
And if there will be concurrency then how should I handle it with EF? Normally in ADO I would use MSSQL's "BEGIN WORK" for this type of situation, but I have no idea on how to make it with EF.
Thank you for your time!
I would suggest you use a RowVersion column to handle concurrent requests.
Good reference here: http://www.asp.net/mvc/overview/getting-started/getting-started-with-ef-using-mvc/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application
// in UserMoney.cs
[Timestamp]
public byte[] RowVersion { get; set; }

// in the model builder
modelBuilder.Entity<UserMoney>().Property(p => p.RowVersion).IsConcurrencyToken();

// The update logic
public bool Buy(int moneyToSpend, byte[] rowVersion)
{
    try
    {
        var moneyRow = db.UserMoney.Find(loggedinUserID);
        if (moneyRow.MONEY < moneyToSpend)
        {
            return false;
        }
        // code for placing order
        moneyRow.MONEY -= moneyToSpend;
        db.SaveChanges(); // the concurrency check happens on save
        return true;
    }
    catch (DbUpdateConcurrencyException ex)
    {
        var entry = ex.Entries.Single();
        var submittedUserMoney = (UserMoney)entry.Entity;
        var databaseValue = entry.GetDatabaseValues();
        if (databaseValue == null)
        {
            // this entry no longer exists in the db
        }
        else
        {
            // this entry still exists, but has a newer version in the db
            var userMoneyInDb = (UserMoney)databaseValue.ToObject();
        }
        return false;
    }
    catch (RetryLimitExceededException)
    {
        // probably put some logs here
        return false;
    }
}
I do not think this would be a major problem for you since, as far as I know, MSSQL will not allow asynchronous commits to the same record from the same thread; it has to finish one process before moving to the next one. But you can try something like this:
using (var db = new YourContext())
{
    var moneyRow = db.UserMoney.Find(loggedinUserID);
    moneyRow.MONEY -= moneyToSpend;
    bool saveFailed;
    do
    {
        saveFailed = false;
        try
        {
            db.SaveChanges();
        }
        catch (DbUpdateConcurrencyException ex)
        {
            saveFailed = true;
            // Update original values from the database
            var entry = ex.Entries.Single();
            entry.OriginalValues.SetValues(entry.GetDatabaseValues());
        }
    } while (saveFailed);
}
More can be found here: Optimistic Concurrency Patterns
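As for the "BEGIN WORK" part of the question: if you are on EF 6 or later, the context exposes explicit transactions directly, which is the closest EF equivalent of BEGIN TRANSACTION in raw SQL. A minimal sketch, reusing the question's MyEntities/UserMoney names (which are assumptions here):

```csharp
using (var db = new MyEntities())
using (var tx = db.Database.BeginTransaction())
{
    try
    {
        var moneyRow = db.UserMoney.Find(loggedinUserID);
        if (moneyRow.MONEY < moneyToSpend)
        {
            tx.Rollback();
            return false;
        }
        // code for placing order
        moneyRow.MONEY -= moneyToSpend;
        db.SaveChanges();
        tx.Commit(); // everything inside commits or rolls back atomically
        return true;
    }
    catch
    {
        tx.Rollback();
        throw;
    }
}
```

Note that an explicit transaction alone does not detect lost updates; combine it with the RowVersion approach above if two browsers may buy at the same time.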
I have a simple table:
IPAddress (PK, string)
Requests (int)
It's a flood limiter. Every minute the table's data is deleted. On every page request, the Requests count is incremented for the given IPAddress.
It works great, and our website performance has increased significantly as we do suffer some accidental/intentional effective DDOSes due to the nature of our product and website.
The only problem is, when an IP does send thousands of requests a minute to our website for whatever reason, we get these errors popping up:
Violation of PRIMARY KEY constraint 'PK_v2SiteIPRequests'. Cannot insert duplicate key in object 'dbo.v2SiteIPRequests'. The duplicate key value is ([IP_ADDRESS]). The statement has been terminated.
The code that makes the insert is:
/// <summary>
/// Call every time a page view is requested.
/// </summary>
private static void DoRequest(string ipAddress)
{
    using (var db = new MainContext())
    {
        var rec = db.v2SiteIPRequests.SingleOrDefault(c => c.IPAddress == ipAddress);
        if (rec == null)
        {
            var n = new v2SiteIPRequest { IPAddress = ipAddress, Requests = 1 };
            db.v2SiteIPRequests.InsertOnSubmit(n);
            db.SubmitChanges();
        }
        else
        {
            rec.Requests++;
            db.SubmitChanges();
            // Ban?
            if (rec.Requests >= Settings.MAX_REQUESTS_IN_INTERVAL)
            {
                BanIP(ipAddress);
            }
        }
    }
}
What's the best way to handle this exception, and why is it being thrown? Is a try catch best here?
If you get two requests simultaneously, the following happens:
Request one: is it in the database?
Request two: is it in the database?
Request one: No, not yet
Request two: No, not yet
Request one: INSERT
Request two: INSERT
Request one: WORKS
Request two: FAILS (already inserted a split second before)
There is nothing you can do here but catch the exception and handle it gracefully. Maybe by using a simple "try again" logic.
You've got a few race conditions there, especially when there are concurrent connections.
You may need to change approach: always store each request, then query whether there are more in the timeframe than permitted and take whatever action you need.
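Another option is to push the whole increment-or-insert into a single SQL batch, so there is no read-then-write gap in the C# code at all. A sketch using LINQ to SQL's DataContext.ExecuteCommand, with the table/column names taken from the question (two truly simultaneous first requests from one IP can still collide on the INSERT, so keep the PK-violation catch as a fallback):

```csharp
using (var db = new MainContext())
{
    // Try the UPDATE first; if no row exists yet for this IP, INSERT one.
    // Running both in one batch avoids the separate check-then-insert race.
    db.ExecuteCommand(
        @"UPDATE dbo.v2SiteIPRequests SET Requests = Requests + 1 WHERE IPAddress = {0};
          IF @@ROWCOUNT = 0
              INSERT INTO dbo.v2SiteIPRequests (IPAddress, Requests) VALUES ({0}, 1);",
        ipAddress);
}
```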
Here's the solution based on suggestions. It's ugly but works as far as I can tell.
/// <summary>
/// Call every time a page view is requested.
/// </summary>
private static void DoRequest(string ipAddress)
{
    using (var db = new MainContext())
    {
        var rec = db.v2SiteIPRequests.SingleOrDefault(c => c.IPAddress == ipAddress);
        if (rec == null)
        {
            // Catch the insert race condition for the PK violation. Especially susceptible when being hammered by requests from one IP.
            try
            {
                var n = new v2SiteIPRequest { IPAddress = ipAddress, Requests = 1 };
                db.v2SiteIPRequests.InsertOnSubmit(n);
                db.SubmitChanges();
            }
            catch (Exception e)
            {
                try
                {
                    // Can't reuse the original context, as it caches.
                    using (var db2 = new MainContext())
                    {
                        var rec2 = db2.v2SiteIPRequests.Single(c => c.IPAddress == ipAddress);
                        rec2.Requests++;
                        db2.SubmitChanges();
                        if (rec2.Requests >= Settings.MAX_REQUESTS_IN_INTERVAL)
                        {
                            BanIP(ipAddress);
                        }
                    }
                }
                catch (Exception ee)
                {
                    // Shouldn't reach here.
                    Error.Functions.NewError(ee);
                }
            }
        }
        else
        {
            rec.Requests++;
            db.SubmitChanges();
            // Ban?
            if (rec.Requests >= Settings.MAX_REQUESTS_IN_INTERVAL)
            {
                BanIP(ipAddress);
            }
        }
    }
}
In .NET 4.0 and Linq to SQL, I am trying to use a partial class to "trigger" changes from within an update method (an existing DBML method). For simplicity, imagine a table Things with columns Id and Value
The auto-generated DBML contains a method OnValueChanged. I'll extend that and, as an exercise, try to change one value in one other row:
public partial class Things
{
    partial void OnValueChanged()
    {
        MyAppDataContext dc = new MyAppDataContext();
        var q = from o in dc.GetTable<Things>() where o.Id == 13 select o;
        foreach (Things o in q)
        {
            o.Value = "1"; // try to change some other row
        }
        try
        {
            dc.SubmitChanges();
        }
        catch (Exception)
        {
            // SQL timeout occurs
        }
    }
}
A SQL timeout error occurs. I suspect that the DataContext is getting confused trying to SubmitChanges() before the current OnValueChanged() method has disposed of its DataContext, but I am not sure.
Mostly I cannot find an example of a good pattern for triggering updates against a DB within an existing DBML generated method.
Can anyone provide any pointers on why this doesn't work and how I can accomplish something that works OK? (I realize I can trigger in the SQL database, but do not want to take that route.)
Thanks!
First, you aren't disposing of the DataContext at all in your function. Wrap it in a using statement.
The actual issue is coming from the fact that you're recursively calling yourself by setting the Value property on the retrieved values. You're just running into the timeout before you can hit a StackOverflowException.
It's unclear what you're trying to do here; if you're trying to allow different behavior between when you set the Value property here versus anywhere else, then it's simple enough to use a flag. In your partial class, declare an internal instance boolean auto property called UpdatingValue, and set it to true on each item inside your foreach block before you update the value, then set it to false after you update the value. Then, as the first line in OnValueChanged, check to ensure that UpdatingValue is false.
Like this:
public partial class Things
{
    internal bool UpdatingValue { get; set; }

    partial void OnValueChanged()
    {
        if (UpdatingValue) return;
        using (MyAppDataContext dc = new MyAppDataContext())
        {
            var q = from o in dc.GetTable<Things>() where o.Id == 13 select o;
            foreach (Things o in q)
            {
                o.UpdatingValue = true;
                o.Value = "1"; // try to change some other row
                o.UpdatingValue = false;
            }
            dc.SubmitChanges();
        }
    }
}
I would suspect that you may have introduced infinite recursion by changing the values of Things in the OnValueChanged event handler of Things.
To me, a cleaner solution to your problem is not to generate your class in a DBML file, but instead use LinqToSql attributes on a class you create. By doing so you can do your "trigger" modifications in the setters of your properties/columns.
I had a similar issue. I don't think it is a bug in your code; I'm leaning toward a bug in how the SqlDependency works. I did the same thing as you, but I tested it incrementally. If the select statement returned 1-100 rows, then it worked fine. If the select statement returned 1000 rows, then I would get the SqlException (timeout).
It is not a stack overflow issue (at least not in this client code). Putting a break point at the OnValueChanged event handler reveals that it does not get called again while the SubmitChanges call is hanging.
It is possible that there is a requirement that the OnValueChanged call must return before you can call SubmitChanges. Maybe calling SubmitChanges on a different thread might help.
My solution was to wrap the code in a big try/catch block to catch the SqlException. If it happens, then I perform the same query, but I don't use an SqlDependency and don't attach it to the command. This does not hang the SubmitChanges call anymore. Then right after that, I recreate the SqlDependency and then make the query again, to reregister the dependency.
This is not ideal, but at least it will process all the rows eventually. The problem only occurs if there are a lot of rows to be selected, and if the program is working smoothly, this should not happen as it is constantly catching up.
public Constructor(string connString, CogTrkDBLog logWriter0)
{
    connectionString = connString;
    logWriter = logWriter0;
    using (SqlConnection conn = new SqlConnection(connString))
    {
        conn.Open();
        using (SqlCommand cmd = new SqlCommand("SELECT is_broker_enabled FROM sys.databases WHERE name = 'cogtrk'", conn))
        {
            bool r = (bool)cmd.ExecuteScalar();
            if (!r)
            {
                throw new Exception("is_broker_enabled was false");
            }
        }
    }
    if (!CanRequestNotifications())
    {
        throw new Exception("Not enough permission to run");
    }
    // Remove any existing dependency connection, then create a new one.
    SqlDependency.Stop(connectionString);
    SqlDependency.Start(connectionString);
    if (connection == null)
    {
        connection = new SqlConnection(connectionString);
        connection.Open();
    }
    if (command == null)
    {
        command = new SqlCommand(GetSQL(), connection);
    }
    GetData(false);
    GetData(true);
}

private string GetSQL()
{
    return "SELECT id, command, state, value " +
           " FROM dbo.commandqueue WHERE state = 0 ORDER BY id";
}

void dependency_OnChange(object sender, SqlNotificationEventArgs e)
{
    // Remove the handler, since it is only good
    // for a single notification.
    SqlDependency dependency = (SqlDependency)sender;
    dependency.OnChange -= dependency_OnChange;
    GetData(true);
}

void GetData(bool withDependency)
{
    lock (this)
    {
        bool repeat = false;
        do
        {
            repeat = false;
            try
            {
                GetDataRetry(withDependency);
            }
            catch (SqlException)
            {
                if (withDependency)
                {
                    GetDataRetry(false);
                    repeat = true;
                }
            }
        } while (repeat);
    }
}

private void GetDataRetry(bool withDependency)
{
    // Make sure the command object does not already have
    // a notification object associated with it.
    command.Notification = null;
    // Create and bind the SqlDependency object
    // to the command object.
    if (withDependency)
    {
        SqlDependency dependency = new SqlDependency(command);
        dependency.OnChange += dependency_OnChange;
    }
    Console.WriteLine("Getting a batch of commands");
    // Execute the command.
    using (SqlDataReader reader = command.ExecuteReader())
    {
        using (CommandQueueDb db = new CommandQueueDb(connectionString))
        {
            foreach (CommandEntry c in db.Translate<CommandEntry>(reader))
            {
                Console.WriteLine("id:" + c.id);
                c.state = 1;
                db.SubmitChanges();
            }
        }
    }
}
Let me preface this by saying I now realize how stupid I am and was. I have been developing for 1 year (to the day) and this was the first thing I wrote. I have now come back to it and I can't make heads or tails of it. It worked at one point on a very simple app, but that was a while ago.
Specifically, I am having problems with LocalDBConn, which uses out parameters, but for the life of me I can't remember why.
Guidance, pointers, refactoring, slaps upside the head are ALL welcome and appreciated!
public class MergeRepl
{
    // Declare necessary variables
    private string subscriberName;
    private string publisherName;
    private string publicationName;
    private string subscriptionDbName;
    private string publicationDbName;
    private MergePullSubscription mergeSubscription;
    private MergePublication mergePublication;
    private ServerConnection subscriberConn;
    private ServerConnection publisherConn;
    private Server theLocalSQLServer;
    private ReplicationDatabase localRepDB;

    public MergeRepl(string subscriber, string publisher, string publication, string subscriptionDB, string publicationDB)
    {
        subscriberName = subscriber;
        publisherName = publisher;
        publicationName = publication;
        subscriptionDbName = subscriptionDB;
        publicationDbName = publicationDB;
        // Create connections to the Publisher and Subscriber.
        subscriberConn = new ServerConnection(subscriberName);
        publisherConn = new ServerConnection(publisherName);
        // Define the pull mergeSubscription
        mergeSubscription = new MergePullSubscription
        {
            ConnectionContext = subscriberConn,
            DatabaseName = subscriptionDbName,
            PublisherName = publisherName,
            PublicationDBName = publicationDbName,
            PublicationName = publicationName
        };
        // Ensure that the publication exists and that it supports pull subscriptions.
        mergePublication = new MergePublication
        {
            Name = publicationName,
            DatabaseName = publicationDbName,
            ConnectionContext = publisherConn
        };
        // Create the local SQL Server instance
        theLocalSQLServer = new Server(subscriberConn);
        // Create a Replication DB object to initiate replication settings on the local DB
        localRepDB = new ReplicationDatabase(subscriptionDbName, subscriberConn);
        // Check that the database exists locally
        CreateDatabase(subscriptionDbName);
    }

    public void RunDataSync()
    {
        // Keep program from appearing 'Not Responding'
        ///// Application.DoEvents();
        // Do the needed databases exist on the local SQLExpress install?
        ///// CreateDatabase("ContactDB");
        try
        {
            // Connect to the Subscriber
            subscriberConn.Connect();
            // If the Subscription exists, then start the sync
            if (mergeSubscription.LoadProperties())
            {
                // Check that we have enough metadata to start the agent
                if (mergeSubscription.PublisherSecurity != null || mergeSubscription.DistributorSecurity != null)
                {
                    // Synchronously start the Merge Agent for the mergeSubscription
                    // lblStatus.Text = "Data Sync Started - Please Be Patient!";
                    mergeSubscription.SynchronizationAgent.Synchronize();
                }
                else
                {
                    throw new ApplicationException("There is insufficient metadata to synchronize the subscription. " +
                        "Recreate the subscription with the agent job or supply the required agent properties at run time.");
                }
            }
            else
            {
                // Do something here if the pull mergeSubscription does not exist
                // throw new ApplicationException(String.Format("A mergeSubscription to '{0}' does not exist on {1}", publicationName, subscriberName));
                CreateMergeSubscription();
            }
        }
        catch (Exception ex)
        {
            // Implement appropriate error handling here
            throw new ApplicationException("The subscription could not be synchronized. Verify that the subscription has been defined correctly.", ex);
            // CreateMergeSubscription();
        }
        finally
        {
            subscriberConn.Disconnect();
        }
    }

    public void CreateMergeSubscription()
    {
        // Keep program from appearing 'Not Responding'
        // Application.DoEvents();
        try
        {
            if (mergePublication.LoadProperties())
            {
                if ((mergePublication.Attributes & PublicationAttributes.AllowPull) == 0)
                {
                    mergePublication.Attributes |= PublicationAttributes.AllowPull;
                }
                // Make sure that the agent job for the mergeSubscription is created.
                mergeSubscription.CreateSyncAgentByDefault = true;
                // Create the pull mergeSubscription at the Subscriber.
                mergeSubscription.Create();
                Boolean registered = false;
                // Verify that the mergeSubscription is not already registered.
                foreach (MergeSubscription existing in mergePublication.EnumSubscriptions())
                {
                    if (existing.SubscriberName == subscriberName
                        && existing.SubscriptionDBName == subscriptionDbName
                        && existing.SubscriptionType == SubscriptionOption.Pull)
                    {
                        registered = true;
                    }
                }
                if (!registered)
                {
                    // Register the local mergeSubscription with the Publisher.
                    mergePublication.MakePullSubscriptionWellKnown(
                        subscriberName, subscriptionDbName,
                        SubscriptionSyncType.Automatic,
                        MergeSubscriberType.Local, 0);
                }
            }
            else
            {
                // Do something here if the publication does not exist.
                throw new ApplicationException(String.Format(
                    "The publication '{0}' does not exist on {1}.",
                    publicationName, publisherName));
            }
        }
        catch (Exception ex)
        {
            // Implement the appropriate error handling here.
            throw new ApplicationException(String.Format("The subscription to {0} could not be created.", publicationName), ex);
        }
        finally
        {
            publisherConn.Disconnect();
        }
    }

    /// <summary>
    /// This will make sure the needed database exists locally before allowing any interaction with it.
    /// </summary>
    /// <param name="whichDataBase">The name of the database to check for.</param>
    public void CreateDatabase(string whichDataBase)
    {
        Database db;
        LocalDBConn(whichDataBase, out theLocalSQLServer, out localRepDB, out db);
        if (!theLocalSQLServer.Databases.Contains(whichDataBase))
        {
            // Application.DoEvents();
            // Create the database on the instance of SQL Server.
            db = new Database(theLocalSQLServer, whichDataBase);
            db.Create();
        }
        localRepDB.Load();
        localRepDB.EnabledMergePublishing = false;
        localRepDB.CommitPropertyChanges();
        if (!mergeSubscription.LoadProperties())
        {
            CreateMergeSubscription();
        }
    }

    private void LocalDBConn(string databaseName, out Server server, out ReplicationDatabase replicationDatabase, out Database db)
    {
        db = server.Databases[replicationDatabase.Name];
    }

    /// <summary>
    /// Checks for the existence of the Publication. If there is one, it verifies that Allow Pull is set.
    /// </summary>
    /// <returns>True if the Publication is present. False if not.</returns>
    public bool CheckForPublication()
    {
        // If LoadProperties() returns TRUE then the Publication exists and is reachable
        if (mergePublication.LoadProperties())
            return true;
        if ((mergePublication.Attributes & PublicationAttributes.AllowPull) == 0)
        {
            mergePublication.Attributes |= PublicationAttributes.AllowPull;
        }
        return false;
    } // end CheckForPublication()

    /// <summary>
    /// Checks for the existence of a Subscription.
    /// </summary>
    /// <returns>True if a Subscription is present. False if not.</returns>
    public bool CheckForSubscription()
    {
        // Check for the existence of the Subscription
        return mergeSubscription.IsExistingObject;
    } // end CheckForSubscription()
}
Edit 1
Oops, I forgot the specific errors. On server and replicationDatabase I am getting: "Out parameter might not be initialized before accessing".
private void LocalDBConn(string databaseName, out Server server, out ReplicationDatabase replicationDatabase, out Database db)
{
    db = server.Databases[replicationDatabase.Name];
}
Looks like you'd be safe to remove the out modifiers.
I'm not sure how that even compiled, unless maybe it was with VS2003 and the compiler didn't check for this type of error.
From MSDN: Although variables passed as out arguments do not have to be initialized before being passed, the called method is required to assign a value before the method returns.
private void LocalDBConn(string databaseName, Server server,
    ReplicationDatabase replicationDatabase, out Database db)
{
    db = server.Databases[replicationDatabase.Name];
}
or
private Database LocalDBConn(string databaseName, Server server,
    ReplicationDatabase replicationDatabase)
{
    return server.Databases[replicationDatabase.Name];
}
Then update your code in CreateDatabase to:
Database db;
LocalDBConn(whichDataBase, theLocalSQLServer, localRepDB, out db);
or
Database db = LocalDBConn(whichDataBase, theLocalSQLServer, localRepDB);
private void LocalDBConn(string databaseName, out Server server, out ReplicationDatabase replicationDatabase, out Database db)
{
    db = server.Databases[replicationDatabase.Name];
}
This method will not build because the out parameters are not assigned before the method returns. An out parameter allows the caller to pass an uninitialized variable into the method, which the method must then assign through all paths before it returns (unless the method throws an exception). Basically, an out parameter is a statement by the method saying "I will assign this before returning". In LocalDBConn, you are not doing this.
In the C# 3.0 language specification, this is detailed in section 5.1.6.
All the compiler is telling you is that the Server and ReplicationDatabase are output parameters and yet you are not assigning anything to them before returning from the LocalDBConn method.
When using out parameters, remember that the incoming value is treated as unassigned inside the method, so you cannot read it before assigning it; and for reference types, the caller may still want to null-check the value after the call, since the method could legitimately assign null.
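A minimal, self-contained illustration of the rule (nothing to do with replication, just the out contract):

```csharp
using System;

public static class OutDemo
{
    // 'half' must be assigned on every path before the method returns;
    // the compiler rejects the method otherwise.
    public static bool TryHalve(int value, out int half)
    {
        if (value % 2 == 0)
        {
            half = value / 2;
            return true;
        }
        half = 0; // still required on the failure path
        return false;
    }

    public static void Main()
    {
        int h;
        Console.WriteLine(TryHalve(10, out h)); // True
        Console.WriteLine(h);                   // 5
    }
}
```

Delete the `half = 0;` line and you get exactly the "must be assigned before control leaves the method" family of errors the original LocalDBConn triggers for server and replicationDatabase.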