In the old API (1.X) you could tell whether the server was connected or not by using the State property on the MongoServer instance returned from MongoClient.GetServer:
public bool IsConnected
{
get
{
return _client.GetServer().State == MongoServerState.Connected;
}
}
However, GetServer is not part of the new API (2.0). How can the same check be achieved?
The more appropriate way is to check not a single server but the cluster (which may contain multiple servers), and you can access it directly from the MongoClient instance:
public bool IsClusterConnected
{
get
{
return _client.Cluster.Description.State == ClusterState.Connected;
}
}
If you would like to check a specific server that's also possible:
public bool IsServerConnected
{
get
{
return _client.Cluster.Description.Servers.Single().State == ServerState.Connected;
}
}
Keep in mind that the value is updated by the last operation so it may not be current. The only way to actually make sure there's a valid connection is to execute some kind of operation.
As noted by i3arnon, one has to perform some sort of operation on the database before the state is updated properly.
The act of enumerating the databases is sufficient to update the state.
This worked for me:
var databases = _client.ListDatabasesAsync().Result;
databases.MoveNextAsync().Wait(); // Force the driver to connect and update the cluster state.
if (_client.Cluster.Description.State == ClusterState.Connected)
{
// Database is connected.
}
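Alternatively, a lighter-weight way to force a round trip than enumerating databases is to run the ping command. This is a minimal sketch, assuming the 2.x driver and that the admin database is reachable with your credentials:
// Sketch: run "ping" to force a round trip, then inspect the cluster state.
var adminDb = _client.GetDatabase("admin");
adminDb.RunCommandAsync((Command<BsonDocument>)"{ ping: 1 }").Wait();
bool isConnected = _client.Cluster.Description.State == ClusterState.Connected;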
Related
I have a class that keeps track of Property Changes
public class Property
{
object _OriginalValue;
object _ProposedValue;
DateTime _ProposedDateTime;
List<Property> _History = new List<Property>();
public object OriginalValue
{
get
{
return _OriginalValue;
}
set
{
_OriginalValue = value;
}
}
public object ProposedValue
{
get
{
return _ProposedValue;
}
set
{
_ProposedDateTime = DateTime.Now;
_ProposedValue = value;
}
}
public bool IsDirty
{
get
{
// Use Equals so boxed values are compared by value rather than by reference.
return !Equals(OriginalValue, ProposedValue);
}
}
}
This Property class can be used by other classes, for example:
public class Customer
{
protected Property _FirstName = new Property();
public string FirstName
{
get
{
return (string)_FirstName.ProposedValue;
}
set
{
_FirstName.ProposedValue = value;
}
}
public object GetOriginalValue(Property property)
{
return property.OriginalValue;
}
}
The question is, is there a way to secure the original value when passing this to a client in an N-Tier architecture?
When a client passes a Customer back into the Service Boundary - by default you can't trust the client. You need to either reload the original values from the database or validate that the original values are untampered. Of course I'm assuming we're going to use business logic based on the current values in the customer to reject or allow an update operation.
Example:
User inserts record with Name Bob.
User fetches record with Name Bob and changes name to Ted. Original Value is Bob, proposed Value is Ted.
User sends Customer to Service to Update Customer.
Everything is good.
A business rule is now coded into the service that says: if the customer's name is Ted, allow the update; otherwise throw an "unable to update" exception.
User fetches record with name Ted.
User changes name to Darren.
User changes name back to Ted - system throws exception.
User fetches Ted. User cheats and uses a tool to change the OriginalPropertyValue on the client.
The server doesn't refetch the OriginalValue from the database and simply reads the OriginalValue coming from the client.
User bypasses business rule.
Actually, there are more issues with your approach than just checking whether the original value has been tampered with. For example, I suspect this is a multi-user environment where more than one user can edit the same object. That is, the original value might not be tampered with, yet it may have changed because someone else has already saved a new original value to the database.
I guess you're already applying some kind of optimistic or pessimistic locking on your data...
As for your actual concern, you probably need to sign the original value, and whenever those objects are stored back in the database, your application layer should check that the original value hasn't been tampered with (from Wikipedia):
Digital signatures are a standard element of most cryptographic protocol suites, and are commonly used for software distribution, financial transactions, contract management software, and in other cases where it is important to detect forgery or tampering.
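A minimal sketch of that idea, assuming a secret key that never leaves the service boundary (strictly speaking an HMAC is a message authentication code rather than a digital signature, but it serves the same tamper-detection purpose here; the class and key below are hypothetical):
using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical helper: the service signs the original value before sending the
// object to the client and verifies the signature when the object comes back.
public static class OriginalValueSigner
{
    // Must stay on the server; never ship it to the client.
    private static readonly byte[] SecretKey =
        Encoding.UTF8.GetBytes("replace-with-a-real-server-side-secret");

    public static string Sign(string originalValue)
    {
        using (var hmac = new HMACSHA256(SecretKey))
        {
            byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(originalValue));
            return Convert.ToBase64String(hash);
        }
    }

    public static bool Verify(string originalValue, string signature)
    {
        // A mismatch means the client changed the original value.
        return Sign(originalValue) == signature;
    }
}
The signature would travel alongside OriginalValue in the DTO, and the service would call Verify before trusting it; reloading the original values from the database remains the simpler and safer option when an extra round trip is acceptable.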
I'm new to C# and trying to understand how to work with Lazy<T>.
I need to handle concurrent requests by waiting for the result of an already running operation. Requests for data may come in simultaneously with the same or different credentials.
For each unique set of credentials there can be at most one GetDataInternal call in progress, with the result from that one call returned to all queued waiters when it is ready.
private readonly ConcurrentDictionary<Credential, Lazy<Data>> Cache
= new ConcurrentDictionary<Credential, Lazy<Data>>();
public Data GetData(Credential credential)
{
// This instance will be thrown away if a cached
// value with our "credential" key already exists.
Lazy<Data> newLazy = new Lazy<Data>(
() => GetDataInternal(credential),
LazyThreadSafetyMode.ExecutionAndPublication
);
Lazy<Data> lazy = Cache.GetOrAdd(credential, newLazy);
bool added = ReferenceEquals(newLazy, lazy); // If true, we won the race.
Data data;
try
{
// Wait for the GetDataInternal call to complete.
data = lazy.Value;
}
finally
{
// Only the thread which created the cache value
// is allowed to remove it, to prevent races.
if (added) {
Cache.TryRemove(credential, out lazy);
}
}
return data;
}
Is that the right way to use Lazy<T>, or is my code not safe?
Update:
Is it a good idea to use MemoryCache instead of ConcurrentDictionary? If so, how do I build the key, given that MemoryCache.Default.AddOrGetExisting() takes a string key?
This is correct. This is a standard pattern (except for the removal) and it's a really good cache because it prevents cache stampeding.
I'm not sure you want to remove from the cache when the computation is done because the computation will be redone over and over that way. If you don't need the removal you can simplify the code by basically deleting the second half.
Note that Lazy<T> has a problem in the case of an exception: the exception is stored and the factory will never be re-executed. The problem persists forever (until a human restarts the app). In my mind this makes Lazy<T> completely unsuitable for production use in most cases.
This means that a transient error such as a network issue can render the app unavailable permanently.
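One way to soften this, sketched below under the assumption that re-running the factory is acceptable: LazyThreadSafetyMode.PublicationOnly does not cache exceptions, so a transient failure does not poison the entry. The trade-off is that several threads may invoke GetDataInternal concurrently for the same credential, with only one result being published.
// Sketch: PublicationOnly does not cache exceptions, so a failed call can be retried.
// Trade-off: the factory may run more than once for the same key.
Lazy<Data> newLazy = new Lazy<Data>(
    () => GetDataInternal(credential),
    LazyThreadSafetyMode.PublicationOnly);
That partly defeats the one-call-per-credential requirement, which is why the code in the question instead removes the faulted entry in its finally block and accepts that waiters already holding that Lazy observe the stored exception once.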
This answer addresses the updated part of the original question. See usr's answer regarding thread-safety with Lazy<T> and its potential pitfalls.
I would like to know how to avoid using ConcurrentDictionary<TKey, TValue> and start using MemoryCache? How to implement MemoryCache.Default.AddOrGetExisting()?
If you're looking for a cache which has a mechanism for auto expiry, then MemoryCache is a good choice if you don't want to implement the mechanics yourself.
MemoryCache forces a string representation for the key, so you'll need a unique string representation of a credential, perhaps a given user id or a unique username.
You can either override ToString to return that unique identifier or simply use the property directly, and utilize MemoryCache like this:
public class Credential
{
public Credential(int userId)
{
UserId = userId;
}
public int UserId { get; private set; }
}
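If you prefer the ToString route mentioned above, the override inside Credential could look like this (a small sketch):
// Sketch: expose the unique identifier as the string cache key.
public override string ToString()
{
    return UserId.ToString();
}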
And now your method will look like this:
private const int EvictionIntervalMinutes = 10;
public Data GetData(Credential credential)
{
Lazy<Data> newLazy = new Lazy<Data>(
() => GetDataInternal(credential), LazyThreadSafetyMode.ExecutionAndPublication);
CacheItemPolicy evictionPolicy = new CacheItemPolicy
{
AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(EvictionIntervalMinutes)
};
var result = MemoryCache.Default.AddOrGetExisting(
new CacheItem(credential.UserId.ToString(), newLazy), evictionPolicy);
// AddOrGetExisting returns the pre-existing entry if there was one; when the key
// was new, either null or an item with a null Value comes back, so guard both.
return result != null && result.Value != null
? ((Lazy<Data>)result.Value).Value
: newLazy.Value;
}
MemoryCache provides you with a thread-safe implementation; this means that two threads accessing AddOrGetExisting will only cause a single cache item to be added or retrieved. Further, Lazy<T> with ExecutionAndPublication guarantees only a single unique invocation of the factory method.
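A hedged usage sketch (assuming the GetData and Credential definitions above, plus System.Linq and System.Threading.Tasks): concurrent callers with the same credential should share one GetDataInternal invocation while the entry is cached.
// Usage sketch: ten concurrent requests for the same credential.
var credential = new Credential(42);
var tasks = Enumerable.Range(0, 10)
    .Select(_ => Task.Run(() => GetData(credential)))
    .ToArray();
Task.WaitAll(tasks);
// All tasks observe the result of a single GetDataInternal call,
// until the 10-minute absolute expiration evicts the entry.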
We seem to have run into a weird issue where two concurrent requests to our service are actually using the same DB connection.
Our setup is ServiceStack + NHibernate + FluentNHibernate + MySQL. I have set up a small test that recreates the problem:
public class AppHost : AppHostBase
{
private ISessionFactory _sessionFactory;
public AppHost() : base("Lala Service", typeof(AppHost).Assembly)
{
}
public override void Configure(Container container)
{
_sessionFactory = Fluently.Configure()
.Database(MySQLConfiguration.Standard.ConnectionString(conn =>
conn.Server("localhost").Username("lala").Password("lala").Database("lala")))
.Mappings(mappings => mappings.AutoMappings.Add(
AutoMap.Assembly(GetType().Assembly).Where(t => t == typeof(Lala))
.Conventions.Add(DefaultLazy.Never(), DefaultCascade.All())))
.BuildSessionFactory();
container.Register(c => _sessionFactory.OpenSession()).ReusedWithin(ReuseScope.Request);
}
}
public class Lala
{
public int ID { get; set; }
public string Name { get; set; }
}
[Route("/lala")]
public class LalaRequest
{
}
public class LalaReseponse
{
}
public class LalaService : Service
{
private ISession _session;
public ISession Session1
{
get { return _session; }
set { _session = value; }
}
public LalaReseponse Get(LalaRequest request)
{
var lala = new Lala
{
Name = Guid.NewGuid().ToString()
};
_session.Persist(lala);
_session.Flush();
lala.Name += " XXX";
_session.Flush();
return new LalaReseponse();
}
}
Then I hit this service 10 times concurrently via Ajax, like so:
<script type="text/javascript">
for (i = 0; i < 10; i++) {
console.log("aa");
$.ajax({
url: '/lala',
dataType: 'json',
cache: false
});
}
</script>
The result is consistently:
Number of connections open < 10.
Not all records updated.
On occasion, a StaleObjectStateException is thrown if I delete records.
The reason behind this is that the connections are reused by two concurrent requests, and then LAST_INSERT_ID() gives the ID of the wrong row, so two requests are updating the same row.
In short: it's a complete mess and it's clearly sharing the DB connection between requests.
The question is: Why? How should I have configured things so that each request gets its own connection from the connection pool?
Finally solved it, what a day-waster!
The source of the problem is NHibernate's connection release mode:
11.7. Connection Release Modes
The legacy (1.0.x) behavior of NHibernate in regards to ADO.NET connection management was that a ISession would obtain a connection when it was first needed and then hold unto that connection until the session was closed. NHibernate introduced the notion of connection release modes to tell a session how to handle its ADO.NET connections.
...
The different release modes are identified by the enumerated values of NHibernate.ConnectionReleaseMode:
OnClose - is essentially the legacy behavior described above. The NHibernate session obtains a connection when it first needs to perform some database access and holds unto that connection until the session is closed.
AfterTransaction - says to release connections after a NHibernate.ITransaction has completed.
The configuration parameter hibernate.connection.release_mode is used to specify which release mode to use.
...
after_transaction - says to use ConnectionReleaseMode.AfterTransaction. Note that with ConnectionReleaseMode.AfterTransaction, if a session is considered to be in auto-commit mode (i.e. no transaction was started) connections will be released after every operation.
This got entangled with MySQL Connector/NET's default connection pooling and effectively meant that connections were swapped between concurrent requests, as one request released the connection back to the pool and the other acquired it.
However, I think that the fact that NHibernate calls LAST_INSERT_ID() after releasing and re-acquiring the connection is a bug. It should call LAST_INSERT_ID() inside the same "operation".
Anyway, solutions:
Use transactions, which is what we normally do (a sketch follows below), or
If you can't or don't want to use transactions in a certain context for some reason (which is what happened to us today), set the connection release mode to "on close". With FluentNHibernate that would be:
.ExposeConfiguration(cfg =>
cfg.SetProperty("connection.release_mode", "on_close"));
And from here on the connection is bound to the session even if there is no transaction.
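For the first option, wrapping the unit of work in an NHibernate transaction keeps the connection bound for its duration even with the default release mode. A minimal sketch of the service method from above (same names as in the question):
public LalaReseponse Get(LalaRequest request)
{
    // With a transaction, AfterTransaction release mode only returns the
    // connection to the pool when the transaction completes.
    using (var tx = _session.BeginTransaction())
    {
        var lala = new Lala { Name = Guid.NewGuid().ToString() };
        _session.Persist(lala);
        _session.Flush();

        lala.Name += " XXX";
        _session.Flush();

        tx.Commit();
    }
    return new LalaReseponse();
}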
I've been working with C# for three days now, so please excuse my noobish question.
I'm trying to build my own database class which will contain, among other things, a method for sending select statements.
How do I prevent this method from being called when connect() wasn't invoked on the object beforehand? I thought about a simple boolean variable, but in my opinion that is a very ugly solution.
Separate out the idea of "something which is capable of connecting" from "a live connection which is capable of running a query".
Make your Connect method (you should really start following .NET naming conventions by the way) return something which you can query.
That way each class is responsible for one job, and you avoid the concept of trying to query before connecting from even being represented in code.
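A minimal sketch of that shape (all names here are hypothetical, not a specific library): only the object returned by Connect knows how to run queries, so "query before connect" cannot even be expressed.
// Hypothetical sketch: the connector can only hand out a live session,
// and only the live session exposes query methods.
public class DatabaseConnector
{
    private readonly string _connectionString;

    public DatabaseConnector(string connectionString)
    {
        _connectionString = connectionString;
    }

    public DatabaseSession Connect()
    {
        // ... open the underlying connection here ...
        return new DatabaseSession();
    }
}

public class DatabaseSession : IDisposable
{
    public IEnumerable<string> Select(string sql)
    {
        // ... run the query on the already-open connection ...
        yield break;
    }

    public void Dispose()
    {
        // ... close the underlying connection ...
    }
}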
You could have a State field that stores the state and use that. In this simple example I've got a boolean Connected (True / False) but you could have an enum with Connected, Disconnected, Faulted etc.
private bool _Connected = false;
public bool Connect()
{
// ... try to open the connection and set "success" accordingly ...
if (success)
{
_Connected = true;
}
return _Connected;
}
public bool Disconnect()
{
// ... close the connection ...
_Connected = false;
return true;
}
public IEnumerable<Data> GetData()
{
if (!_Connected)
{
// Handle not connected here...
}
// ...
}
There is no need for a boolean.
Just do something like:
if (con == null || !con.IsOpen())
{
// throw an exception or return
}
Simple, and it always prevents you from accessing the con object if something goes wrong.
I'm dealing with a curious scenario.
I'm using Entity Framework to save (insert/update) to a SQL database in a multithreaded environment. The problem is that I need to query the database to see whether a record with a particular key has already been created, in order to set a field value (executing), or whether it is new, in order to set a different value (pending). Those records are identified by a unique GUID.
I've solved this problem with a lock, since I know the entity will not be present in any other process; in other words, I will not have the same GUID in different processes, and it seems to be working fine. It looks something like this:
static readonly object LockableObject = new object();
static void SaveElement(Entity e)
{
lock(LockableObject)
{
Entity e2 = Repository.FindByKey(e);
if (e2 == null)
{
Repository.Insert(e);
}
else
{
Repository.Update(e);
}
}
}
But this implies that when I have a huge amount of requests to be saved, they will be queued.
I wonder if there is something like this (please take it just as an idea):
static void SaveElement(Entity e)
{
using (var protector = new ThisWouldBeAClassToProtectBasedOnACondition(e => e.UniqueId))
{
Entity e2 = Repository.FindByKey(e);
if (e2 == null)
{
Repository.Insert(e);
}
else
{
Repository.Update(e);
}
}
}
The idea would be to have a kind of protection that is based on a condition, so that each entity e would have its own lock based on its e.UniqueId property.
Any idea?
Don't use application-locks where database transactions or constraints are needed.
The use of a lock to prevent duplicate entries in a database is not a good idea. It limits the scalability of your application by forcing only a single instance to ever exist that can add or update such records. Or worse, someone will eventually try to scale the application to multiple processes or servers, and it will cause data corruption (since locks are local to a single process).
What you should consider instead is using a combination of unique constraints in the database and transactions to ensure that no two attempts to add the same entry can both succeed. One will succeed; the other will be forced to roll back.
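A hedged sketch of that approach, assuming a unique index on the GUID column and that Repository.Insert surfaces Entity Framework's DbUpdateException (both assumptions, not shown in the question): insert optimistically and treat a duplicate-key failure as "someone else got there first".
// Sketch: let the database's UNIQUE constraint arbitrate instead of an app lock.
static void SaveElement(Entity e)
{
    try
    {
        Repository.Insert(e); // attempt the insert first
    }
    catch (DbUpdateException)
    {
        // In production, inspect the inner SqlException (error 2627/2601)
        // to confirm it is a unique-constraint violation before falling back.
        Repository.Update(e); // the row already exists; take the update path
    }
}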
This might work for you: you can just lock on the instance of e:
lock(e)
{
Entity e2 = Repository.FindByKey(e);
if (e2 == null)
{
Repository.Insert(e);
}
else
{
Repository.Update(e);
}
}
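Keep in mind that locking on e only serializes callers that share the same Entity instance; if each request materializes its own instance for the same GUID, the lock won't help. A per-key lock object is one way to get the behavior the question asks for. A sketch, assuming Entity.UniqueId is a Guid, and with the same in-process scalability caveats raised above:
// Sketch: one lock object per UniqueId, shared through a ConcurrentDictionary.
// Note: entries are never evicted in this sketch.
static readonly ConcurrentDictionary<Guid, object> KeyLocks =
    new ConcurrentDictionary<Guid, object>();

static void SaveElement(Entity e)
{
    object keyLock = KeyLocks.GetOrAdd(e.UniqueId, _ => new object());
    lock (keyLock)
    {
        Entity e2 = Repository.FindByKey(e);
        if (e2 == null)
        {
            Repository.Insert(e);
        }
        else
        {
            Repository.Update(e);
        }
    }
}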