ExecuteReaderAsync and Autofac - C#

I have created an OracleUnitOfWork and Repository class like this:
public class OracleUnitOfWork : IOracleUnitOfWork
{
// Private properties
private readonly OracleConnection _connection;
private readonly OracleCommand _command;
private readonly Dictionary<Type, object> _repositories;
private Object thisLock = new Object();
/// <summary>
/// Default constructor
/// </summary>
/// <param name="config">The Cormar config class</param>
public OracleUnitOfWork(CormarConfig config)
{
// Create instances for our private properties
this._repositories = new Dictionary<Type, object>();
this._connection = new OracleConnection(config.ConnectionString);
this._command = new OracleCommand
{
Connection = this._connection,
CommandType = CommandType.StoredProcedure,
BindByName = true
};
// Open our connection
this._connection.Open();
}
/// <summary>
/// Gets the entity repository
/// </summary>
/// <typeparam name="T">The entity model</typeparam>
/// <returns></returns>
public IRepository<T> GetRepository<T>() where T : class, new()
{
// Lock the thread so we can't execute at the same time
lock (thisLock)
{
// If our repositories have a matching repository, return it
if (_repositories.Keys.Contains(typeof(T)))
return _repositories[typeof(T)] as IRepository<T>;
// Create a new repository for our entity
var repository = new Repository<T>(this._command);
// Add to our list of repositories
_repositories.Add(typeof(T), repository);
// Return our repository
return repository;
}
}
/// <summary>
/// Dispose
/// </summary>
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
/// <summary>
/// Disposes of any attached resources
/// </summary>
/// <param name="disposing">A boolean indicating whether the object is being disposed</param>
protected virtual void Dispose(bool disposing)
{
// If we are disposing
if (disposing)
{
// Close our connection
this._connection.Close();
this._connection.Dispose();
this._command.Dispose();
}
}
}
public class Repository<T> : IRepository<T> where T : class, new()
{
// private properties
private readonly OracleCommand _command;
private Object thisLock = new Object();
/// <summary>
/// Default constructor
/// </summary>
/// <param name="command"></param>
public Repository(OracleCommand command)
{
this._command = command;
}
/// <summary>
/// Returns the datareader for the stored procedure
/// </summary>
/// <param name="storedProcedureName">The name of the SPROC to execute</param>
/// <param name="parameters">The parameters needed for the SPROC</param>
/// <returns></returns>
public async Task<IDataReader> ExecuteReaderAsync(string storedProcedureName, IList<OracleParameter> parameters = null)
{
// Set up our command
this.InitiateCommand(storedProcedureName, parameters?.ToArray());
// Return our data reader
return await this._command.ExecuteReaderAsync();
}
/// <summary>
/// Create, updates or deletes an entity
/// </summary>
/// <param name="storedProcedureName">The name of the SPROC to execute</param>
/// <param name="parameters">The parameters needed for the SPROC</param>
public async Task CreateUpdateOrDeleteAsync(string storedProcedureName, IList<OracleParameter> parameters = null)
{
// Set up our command
this.InitiateCommand(storedProcedureName, parameters?.ToArray());
// Execute our command
await this._command.ExecuteNonQueryAsync();
}
/// <summary>
/// Initiates the command object
/// </summary>
/// <param name="storedProcedureName">The name of the SPROC to execute</param>
/// <param name="parameters">An array of parameters</param>
private void InitiateCommand(string storedProcedureName, OracleParameter[] parameters)
{
// Lock the thread so we can't execute at the same time
lock (thisLock)
{
// Set up the command object
this._command.CommandTimeout = 1800;
this._command.FetchSize = 1000;
this._command.CommandText = storedProcedureName;
this._command.Parameters.Clear();
// If we have any parameters
if (parameters != null)
{
// Assign each parameter to the command object
this._command.Parameters.AddRange(parameters);
}
}
}
}
In my AutoFac module, I register the OracleUnitOfWork as a single instance like this:
builder.RegisterType<OracleUnitOfWork>().As<IOracleUnitOfWork>().SingleInstance();
For most queries, this is fine, but I seem to have a problem when trying to execute multiple queries simultaneously. It errors out on the ExecuteReaderAsync method in my repository and states:
Object reference not set to an instance of an object.
Sometimes I even get this error:
Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
But I can't figure out how to fix the issue.
Prior to this I was getting issues with the GetRepository method, but adding the locking fixed that. I can't do the same to the ExecuteReaderAsync method, because then it would no longer be asynchronous, and I need it to be.
Does anyone know how to solve this issue?

For most queries, this is fine, but I seem to have a problem when
trying to execute multiple queries simultaneously.
You have a race condition: you're accessing the same references across multiple threads and getting "spooky" behaviour.
You can't share a mutable singleton across multiple threads like that; it will break. Either bite the bullet and use a lock, or rethink your approach (i.e. don't use a singleton).
Just remember, locks will kill your multi-threaded performance if not used correctly.
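If you do keep the singleton and need to serialise access, an async-friendly alternative to lock is SemaphoreSlim, which you can await without blocking the thread. A rough sketch only (note the returned reader still ties up the shared connection until it is disposed, so per-request connections are usually the cleaner fix):
private readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);
public async Task<IDataReader> ExecuteReaderAsync(string storedProcedureName, IList<OracleParameter> parameters = null)
{
    // Wait asynchronously instead of blocking with a lock statement
    await _gate.WaitAsync();
    try
    {
        this.InitiateCommand(storedProcedureName, parameters?.ToArray());
        return await this._command.ExecuteReaderAsync();
    }
    finally
    {
        // Release so the next caller can use the shared command
        _gate.Release();
    }
}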

Kushan is right that OracleConnection is not thread safe. If you really need to execute multiple queries at the same time and you can live with the overhead, you can remove SingleInstance() and allow multiple instances to be built.
This way, each of your threads can obtain a new instance of OracleUnitOfWork and do its work independently (fetch data, perform changes, persist changes).
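If you go that route, the registration change is small. A sketch (pick the scope that matches how you resolve IOracleUnitOfWork):
// Build a fresh OracleUnitOfWork (and therefore a fresh connection/command)
// per lifetime scope, e.g. per web request, instead of one shared instance.
builder.RegisterType<OracleUnitOfWork>()
       .As<IOracleUnitOfWork>()
       .InstancePerLifetimeScope(); // or .InstancePerDependency() for one per resolve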

Related

Avoid non-readonly static fields - Immutability NDepend

I am using NDepend for code analysis and got this warning:
https://www.ndepend.com/default-rules/NDepend-Rules-Explorer.html?ruleid=ND1901#!
This rule warns about static fields that are not declared as read-only.
In Object-Oriented-Programming the natural artifact to hold states that can be modified is instance fields. Such mutable static fields create confusion about the expected state at runtime and impairs the code testability since the same mutable state is re-used for each test.
My code is as follows:
using Cosmonaut;
using Microsoft.Azure.Documents.Client;
using System.Configuration;
using LuloWebApi.Entities;
namespace LuloWebApi.Components
{
/// <summary>
/// Main class that encapsulates the creation of instances to connect to Cosmos DB
/// </summary>
public sealed class CosmosStoreHolder
{
/// <summary>
/// Property to be initiated only once in the constructor (singleton)
/// </summary>
private static CosmosStoreHolder instance = null;
/// <summary>
/// To block multiple instance creation
/// </summary>
private static readonly object padlock = new object();
/// <summary>
/// CosmosStore object to get tenants information
/// </summary>
public Cosmonaut.ICosmosStore<SharepointTenant> CosmosStoreTenant { get; }
/// <summary>
/// CosmosStore object to get site collection information
/// </summary>
public Cosmonaut.ICosmosStore<SiteCollection> CosmosStoreSiteCollection { get; }
/// <summary>
/// CosmosStore object to get page templates information
/// </summary>
public Cosmonaut.ICosmosStore<PageTemplate> CosmosStorePageTemplate { get; }
/// <summary>
/// CosmosStore object to get pages information
/// </summary>
public Cosmonaut.ICosmosStore<Page> CosmosStorePage { get; }
/// <summary>
/// CosmosStore object to get roles information
/// </summary>
public Cosmonaut.ICosmosStore<Role> CosmosStoreRole { get; }
/// <summary>
/// CosmosStore object to get clients information
/// </summary>
public Cosmonaut.ICosmosStore<Client> CosmosStoreClient { get; }
/// <summary>
/// CosmosStore object to get users information
/// </summary>
public Cosmonaut.ICosmosStore<User> CosmosStoreUser { get; }
/// <summary>
/// CosmosStore object to get partners information
/// </summary>
public Cosmonaut.ICosmosStore<Partner> CosmosStorePartner { get; }
/// <summary>
/// CosmosStore object to get super administrators information
/// </summary>
public Cosmonaut.ICosmosStore<SuperAdministrator> CosmosStoreSuperAdministrator { get; }
/// <summary>
/// Constructor
/// </summary>
CosmosStoreHolder()
{
CosmosStoreSettings settings = new Cosmonaut.CosmosStoreSettings(ConfigurationManager.AppSettings["database"].ToString(),
ConfigurationManager.AppSettings["endpoint"].ToString(),
ConfigurationManager.AppSettings["authKey"].ToString());
settings.ConnectionPolicy = new ConnectionPolicy
{
ConnectionMode = ConnectionMode.Direct,
ConnectionProtocol = Protocol.Tcp
};
CosmosStoreTenant = new CosmosStore<SharepointTenant>(settings);
CosmosStoreSiteCollection = new CosmosStore<SiteCollection>(settings);
CosmosStorePageTemplate = new CosmosStore<PageTemplate>(settings);
CosmosStorePage = new CosmosStore<Page>(settings);
CosmosStoreRole = new CosmosStore<Role>(settings);
CosmosStoreClient = new CosmosStore<Client>(settings);
CosmosStoreUser = new CosmosStore<User>(settings);
CosmosStorePartner = new CosmosStore<Partner>(settings);
CosmosStoreSuperAdministrator = new CosmosStore<SuperAdministrator>(settings);
}
/// <summary>
/// Instance access, singleton
/// </summary>
public static CosmosStoreHolder Instance
{
get
{
lock (padlock)
{
if (instance == null)
{
instance = new CosmosStoreHolder();
}
return instance;
}
}
}
}
}
However I am not sure how to fix this warning.
This is a guide, not a hard rule. Usually, non-readonly static fields are hard to intuit about. But in this case you're doing lazy deferred loading, so... a lock and mutate is indeed one way of achieving that, without causing it to be loaded prematurely.
So one pragmatic fix is simply to ignore/override the warning.
Another approach, however, is to move the field to another type where it is readonly, and rely on the deferred .cctor semantics:
public static CosmosStoreHolder Instance {
[MethodImpl(MethodImplOptions.NoInlining)]
get => DeferredHolder.Instance;
}
private static class DeferredHolder {
internal static readonly CosmosStoreHolder Instance = new CosmosStoreHolder();
}
Then you don't even need the lock semantics (.cctor deals with that for you).
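Another option, if you prefer to avoid the nested class, is Lazy<T>, which also gives you deferred, thread-safe initialisation without a mutable static field or an explicit lock. A sketch, assuming the private constructor stays as it is:
// Lazy<T> defaults to LazyThreadSafetyMode.ExecutionAndPublication,
// so the instance is created exactly once even under concurrent access.
private static readonly Lazy<CosmosStoreHolder> lazyInstance =
    new Lazy<CosmosStoreHolder>(() => new CosmosStoreHolder());
public static CosmosStoreHolder Instance
{
    get { return lazyInstance.Value; }
}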

How do I create an interactive Powershell instance from C#?

I have a PowerShell script that requires user interaction. I can call powershell.exe from C# using System.Diagnostics.Process and pass the script's path as a parameter, but I would like the script to be an embedded resource of the project. I tried creating a Runspace (see below) and running the script, but because the script requires user interaction I receive an exception.
var assembly = Assembly.GetExecutingAssembly();
var resourceName = "mynamespace.myscriptfile.ps1";
string result = "";
using (Stream stream = assembly.GetManifestResourceStream(resourceName))
using (StreamReader reader = new StreamReader(stream))
{
result = reader.ReadToEnd();
Console.WriteLine(result);
}
//Create Powershell Runspace
Runspace runspace = RunspaceFactory.CreateRunspace();
runspace.Open();
// Create pipeline and add commands
Pipeline pipeline = runspace.CreatePipeline();
pipeline.Commands.AddScript(result);
// Execute Script
Collection<PSObject> results = new Collection<PSObject>();
try
{
results = pipeline.Invoke();
}
catch (Exception ex)
{
results.Add(new PSObject((object)ex.Message));
}
runspace.Close();
Console.ReadKey();
Is there a way to either pass the embedded resource to powershell.exe using System.Diagnostics.Process, or is there a way to invoke the script from C# so that the user can interact?
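For the first option, what I have in mind is roughly this (just a sketch: write the embedded script text out to a temporary .ps1 file and hand that to powershell.exe):
// 'result' already holds the embedded script text read above.
var tempScript = Path.Combine(Path.GetTempPath(), "myscriptfile.ps1");
File.WriteAllText(tempScript, result);
var startInfo = new ProcessStartInfo
{
    FileName = "powershell.exe",
    Arguments = "-NoProfile -ExecutionPolicy Bypass -File \"" + tempScript + "\"",
    UseShellExecute = false // share the parent console so the user can interact
};
using (var process = Process.Start(startInfo))
{
    process.WaitForExit();
}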
UPDATE:
It seems to me that, if I use an implementation of the abstract class PSHost along with the PSHostUserInterface property correctly, I may be able to create a Runspace that takes the PSHost implementation as a parameter and uses the native PowerShell console. I have been trying to test the idea, but I'm not quite sure how to implement the abstract class.
Below is a sample of code that I obtained from Microsoft. I am confused about a couple of things. If it matters, I will be creating the Runspace in a console application (namespace WebRequirements) inside the Program class.
private Host01 program; (would Host01 be Program in my case?)
PSHostUserInterface (is this where I would dictate that I want to use the native PowerShell host, and if so, how would I do that?)
internal class MyHost : PSHost
{
/// <summary>
/// A reference to the PSHost implementation.
/// </summary>
private Host01 program;
/// <summary>
/// The culture information of the thread that created
/// this object.
/// </summary>
private CultureInfo originalCultureInfo =
System.Threading.Thread.CurrentThread.CurrentCulture;
/// <summary>
/// The UI culture information of the thread that created
/// this object.
/// </summary>
private CultureInfo originalUICultureInfo =
System.Threading.Thread.CurrentThread.CurrentUICulture;
/// <summary>
/// The identifier of this PSHost implementation.
/// </summary>
private Guid myId = Guid.NewGuid();
/// <summary>
/// Initializes a new instance of the MyHost class. Keep
/// a reference to the host application object so that it
/// can be informed of when to exit.
/// </summary>
/// <param name="program">
/// A reference to the host application object.
/// </param>
public MyHost(Host01 program)
{
this.program = program;
}
/// <summary>
/// Return the culture information to use. This implementation
/// returns a snapshot of the culture information of the thread
/// that created this object.
/// </summary>
public override System.Globalization.CultureInfo CurrentCulture
{
get { return this.originalCultureInfo; }
}
/// <summary>
/// Return the UI culture information to use. This implementation
/// returns a snapshot of the UI culture information of the thread
/// that created this object.
/// </summary>
public override System.Globalization.CultureInfo CurrentUICulture
{
get { return this.originalUICultureInfo; }
}
/// <summary>
/// This implementation always returns the GUID allocated at
/// instantiation time.
/// </summary>
public override Guid InstanceId
{
get { return this.myId; }
}
/// <summary>
/// Return a string that contains the name of the host implementation.
/// Keep in mind that this string may be used by script writers to
/// identify when your host is being used.
/// </summary>
public override string Name
{
get { return "MySampleConsoleHostImplementation"; }
}
/// <summary>
/// This sample does not implement a PSHostUserInterface component so
/// this property simply returns null.
/// </summary>
public override PSHostUserInterface UI
{
get { return null; }
}
/// <summary>
/// Return the version object for this application. Typically this
/// should match the version resource in the application.
/// </summary>
public override Version Version
{
get { return new Version(1, 0, 0, 0); }
}
/// <summary>
/// Not implemented by this example class. The call fails with
/// a NotImplementedException exception.
/// </summary>
public override void EnterNestedPrompt()
{
throw new NotImplementedException(
"The method or operation is not implemented.");
}
/// <summary>
/// Not implemented by this example class. The call fails
/// with a NotImplementedException exception.
/// </summary>
public override void ExitNestedPrompt()
{
throw new NotImplementedException(
"The method or operation is not implemented.");
}
/// <summary>
/// This API is called before an external application process is
/// started. Typically it is used to save state so the parent can
/// restore state that has been modified by a child process (after
/// the child exits). In this example, this functionality is not
/// needed so the method returns nothing.
/// </summary>
public override void NotifyBeginApplication()
{
return;
}
/// <summary>
/// This API is called after an external application process finishes.
/// Typically it is used to restore state that a child process may
/// have altered. In this example, this functionality is not
/// needed so the method returns nothing.
/// </summary>
public override void NotifyEndApplication()
{
return;
}
/// <summary>
/// Indicate to the host application that exit has
/// been requested. Pass the exit code that the host
/// application should use when exiting the process.
/// </summary>
/// <param name="exitCode">The exit code to use.</param>
public override void SetShouldExit(int exitCode)
{
this.program.ShouldExit = true;
this.program.ExitCode = exitCode;
}
}

ASP.NET MVC Transfer HTML Table Contents to Another View

Is this do-able?
Here's my situation; please suggest an easier or more efficient way if what I'm trying to do isn't advisable.
We're talking about a report generation page here. First, I have a stored procedure that takes a really long time to finish executing if no filters/conditions are set, meaning it is a view-all; this stored proc returns a list. This list then populates a table in my view. It could be just 10 or up to thousands of records, but the execution is long because it computes this and that against thousands of records. To make it short, I won't alter my stored procedure.
Now, from this first view, I have a printable version button which calls another page with the same contents but a print-friendly layout. I don't want to execute the painful stored proc again to get the same list; I want to re-use what was already generated. How can I do this?
From what I understand you are thinking of implementing some way of caching the data that you calculated through a painfully slow stored procedure?
One option would be to implement a CacheManager and cache the results for a certain period of time:
/// <summary>
/// Cache Manager Singleton
/// </summary>
public class CacheManager
{
/// <summary>
/// The instance
/// </summary>
private static MemoryCache instance = null;
/// <summary>
/// Gets the instance of memoryCache.
/// </summary>
/// <value>The instance of memoryCache.</value>
public static MemoryCache Instance
{
get
{
if (instance == null)
{
instance = new MemoryCache();
}
return instance;
}
}
}
/// <summary>
/// Cache Manager
/// </summary>
public class MemoryCache
{
/// <summary>
/// Gets the expiration date of the object
/// </summary>
/// <value>The no absolute expiration.</value>
public DateTime NoAbsoluteExpiration
{
get { return DateTime.MaxValue; }
}
/// <summary>
/// Retrieve the object in cache
/// If the object doesn't exist in cache or is obsolete, getItemCallback method is called to fill the object
/// </summary>
/// <typeparam name="T">Object Type to put or get in cache</typeparam>
/// <param name="httpContext">Http Context</param>
/// <param name="cacheId">Object identifier in cache - Must be unique</param>
/// <param name="getItemCallback">Callback method to fill the object</param>
/// <param name="slidingExpiration">Expiration date</param>
/// <returns>Object put in cache</returns>
public T Get<T>(string cacheId, Func<T> getItemCallback, TimeSpan? slidingExpiration = null) where T : class
{
T item = HttpRuntime.Cache.Get(cacheId) as T;
if (item == null)
{
item = getItemCallback();
if (slidingExpiration == null)
{
slidingExpiration = TimeSpan.FromSeconds(30);
}
HttpRuntime.Cache.Insert(cacheId, item, null, this.NoAbsoluteExpiration, slidingExpiration.Value);
}
return item;
}
/// <summary>
/// Retrieve the object in cache
/// If the object doesn't exist in cache or is obsolete, null is returned
/// </summary>
/// <typeparam name="T">Object Type to put or get in cache</typeparam>
/// <param name="httpContext">Http Context</param>
/// <param name="cacheId">Object identifier in cache - Must be unique</param>
/// <returns>Object put in cache</returns>
public T Get<T>(string cacheId) where T : class
{
T item = HttpRuntime.Cache.Get(cacheId) as T;
if (item == null)
{
return null;
}
return item;
}
/// <summary>
/// Delete an object using its unique id
/// </summary>
/// <param name="cacheId">Object identifier in cache</param>
/// <returns><c>true</c> if the item was found and removed, <c>false</c> otherwise</returns>
public bool Clear(string cacheId)
{
var item = HttpRuntime.Cache.Get(cacheId);
if (item != null)
{
HttpRuntime.Cache.Remove(cacheId);
return true;
}
return false;
}
/// <summary>
/// Delete all objects in cache, or only those whose key matches the filter
/// </summary>
/// <param name="filter">Optional filter; when null everything is removed, otherwise only keys containing the filter (or all keys when "*") are removed</param>
public void ClearAll(string filter = null)
{
var item = HttpRuntime.Cache.GetEnumerator();
while (item.MoveNext())
{
DictionaryEntry entry = (DictionaryEntry)item.Current;
var key = entry.Key.ToString();
if (filter != null && (key.ToLower().Contains(filter.ToLower()) || filter == "*" )) //if filter, only delete if the key contains the filter value
{
HttpRuntime.Cache.Remove(entry.Key.ToString());
}
else if (filter == null) //no filter, delete everything
{
HttpRuntime.Cache.Remove(entry.Key.ToString());
}
}
}
}
Note: I didn't write this myself but can't find the original source.
This is how you use it:
// Retrieve the object from cache if it exists; otherwise the callback is invoked to create, cache and return it
UserObject userObject = CacheManager.Instance.Get("singleId", () =>
{
return new UserObject()
{
Id = Guid.NewGuid()
};
});
// Retrieve the object from cache if it exists, return null otherwise
UserObject userObjectRetrieved = CacheManager.Instance.Get<UserObject>("singleId");
// Remove the object from cache
CacheManager.Instance.Clear("singleId");
// Remove all objects from cache
CacheManager.Instance.ClearAll();
I'm not sure how your view is designed, but you could also populate the printable version at the same time and hide it until the button is clicked.
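As a sketch of how the report actions might use the cache (ReportService, GetReportRows and the 10-minute window are placeholders for whatever wraps your stored procedure):
public ActionResult Report(string filter)
{
    // Only the first call within the sliding window runs the slow stored procedure.
    var rows = CacheManager.Instance.Get(
        "report_" + (filter ?? "all"),
        () => ReportService.GetReportRows(filter),
        TimeSpan.FromMinutes(10));
    return View(rows);
}
public ActionResult PrintableReport(string filter)
{
    // Re-uses the rows cached by Report() instead of hitting the database again.
    var rows = CacheManager.Instance.Get(
        "report_" + (filter ?? "all"),
        () => ReportService.GetReportRows(filter),
        TimeSpan.FromMinutes(10));
    return View("PrintableReport", rows);
}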

Caching strategy with an abstract Type Mapper implementation on top of Dapper

Overview
We use Dapper to execute stored procedures in our internal applications. I need to build out a set of APIs that sit on top of Dapper so the enterprise can avoid being tightly coupled to Dapper. I wrote the set of APIs and have them working and performing great.
A simple example of the usage is:
private async Task Delete()
{
// Get an instance of the graph builder from our factory.
IGraph graphBuilder = EntityGraphFactory.CreateEntityGraph();
// Associate the builder with a stored procedure, and map an entity instance to it.
// We provide the graph with the entity's primary key and value.
graphBuilder.MapToProcedure("DeleteAddress").MapEntity(this.CustomerAddress)
.DefineProperty(address => address.AddressId).IsKey();
// Get an instance of our repository and delete the entity defined in the graph.
IGraphRepository repository = GraphRepositoryFactory.CreateRepository();
await repository.DeleteAsync(graphBuilder);
this.CustomerAddress = new Address();
}
The problem
The challenge I now have is caching. I want the repository to handle the caching for us automatically. When we query for lookup data like this:
private async Task RestoreAddress()
{
IGraph graphBuilder = EntityGraphFactory.CreateEntityGraph();
IGraphRepository repository = GraphRepositoryFactory.CreateRepository();
// Map ourself to a stored procedure. Tell the graph we are going to
// take the entered Id, and pass it in to the stored procedure as a
// "AddressId" parameter.
// We then define each of the properties that the returned rows
// must map back to, renaming the columns to their associated properties.
graphBuilder.MapToProcedure("GetAddressbyId")
.MapFromEntity(this.CustomerAddress)
.DefineProperty(address => address.AddressId.ToString())
.MapToEntity<Address>()
.DefineProperty(address => address.Street == "AddressLine1")
.DefineProperty(address => address.City)
.DefineProperty(address => address.RowGuid.ToString() == "rowguid")
.DefineProperty(address => address.LastModified.ToString() == "ModifiedDate")
.DefineProperty(address => address.PostalCode)
.DefineProperty(address => address.AddressId)
.MapToEntity<StateProvince>()
.DefineProperty(province => province.StateProvinceId.ToString() == "StateProvinceId");
IEnumerable<Address> addresses = await repository.GetAsync<Address>(graphBuilder);
this.CustomerAddress = addresses.FirstOrDefault() ?? new Address();
this.SelectedProvince = this.Provinces.FirstOrDefault(
province => province.StateProvinceId == this.CustomerAddress.StateProvinceId);
}
The addresses in this example are lookup data that won't change during the runtime of the app, not until a sync is performed, at which point the cache could be cleared. The issue, though, is that I'm not sure how to go about the caching. In this example I am executing GetAddressById, but I could have executed GetAddressByStateId or GetAllAddresses, and then I don't know what data was already fetched and what still needs to be fetched.
Potential solutions
I have a few ideas on how to go about doing this, but I'm not sure if they're going to cause conflicts or issues if I were to implement them. So before I outline them, I want to show you the implementation of the IGraph interface.
/// <summary>
/// Exposes methods for retrieving mapping information and entity definitions.
/// </summary>
internal class Graph : IGraph
{
/// <summary>
/// Initializes a new instance of the <see cref="Graph"/> class.
/// </summary>
internal Graph()
{
this.ProcedureMapping = new ProcedureBuilder(this);
this.GraphMap = new Dictionary<Type, List<PropertyDefinition>>();
}
/// <summary>
/// Gets the graph definitions created for each Type registered with it.
/// </summary>
internal Dictionary<Type, List<PropertyDefinition>> GraphMap { get; private set; }
/// <summary>
/// Gets or sets the key used by the graph as its primary key.
/// </summary>
internal PropertyDefinition RootKey { get; set; }
/// <summary>
/// Gets the procedure mapping.
/// </summary>
internal ProcedureBuilder ProcedureMapping { get; private set; }
/// <summary>
/// Gets the graph generated for the given entity
/// </summary>
/// <typeparam name="TEntity">The entity type to retrieve definition information from.</typeparam>
/// <returns>
/// Returns a collection of PropertyDefinition objects
/// </returns>
public IEnumerable<PropertyDefinition> GetEntityGraph<TEntity>() where TEntity : class, new()
{
return this.GetEntityGraph(typeof(TEntity));
}
/// <summary>
/// Gets a collection of PropertyDefinition objects that make up the data graph for the entity specified.
/// </summary>
/// <param name="entityType">The entity type to retrieve definition information from.</param>
/// <returns>
/// Returns a collection of PropertyDefinition objects
/// </returns>
public IEnumerable<PropertyDefinition> GetEntityGraph(Type entityType)
{
if (GraphMap.ContainsKey(entityType))
{
return GraphMap[entityType];
}
return Enumerable.Empty<PropertyDefinition>();
}
/// <summary>
/// Gets the graph generated by the graph for all entities graphed on it.
/// </summary>
/// <returns>
/// Returns a dictionary where the key is a mapped type and the value is its definition data.
/// </returns>
public Dictionary<Type, IEnumerable<PropertyDefinition>> GetBuilderGraph()
{
// Return a new dictionary containing the same values. This prevents someone from adding content to the
// dictionary we hold internally.
return this.GraphMap.ToDictionary(keySelector => keySelector.Key, valueSelector => valueSelector.Value as IEnumerable<PropertyDefinition>);
}
/// <summary>
/// Resets the graph so that it may be used in a fresh state.
/// </summary>
public void ClearGraph()
{
this.GraphMap.Clear();
this.RootKey = null;
this.ProcedureMapping = new ProcedureBuilder(this);
}
/// <summary>
/// Gets the primary key defined for this data graph.
/// </summary>
/// <returns>Returns the PropertyDefinition associated as the Builder Key.</returns>
public PropertyDefinition GetKey()
{
return this.RootKey;
}
/// <summary>
/// Gets the stored procedure for the operation type provided.
/// </summary>
/// <param name="operationType">Type of operation the procedure will perform when executed.</param>
/// <returns>
/// Returns the ProcedureDefinition mapped to this graph for the given operation type.
/// </returns>
public ProcedureDefinition GetProcedureForOperation(ProcedureOperationType operationType)
{
string procedureName = this.ProcedureMapping.ProcedureMap[operationType];
return new ProcedureDefinition(operationType, procedureName);
}
/// <summary>
/// Gets all of the associated stored procedure mappings.
/// </summary>
/// <returns>
/// Returns a collection of ProcedureDefinition objects mapped to this data graph.
/// </returns>
public IEnumerable<ProcedureDefinition> GetProcedureMappings()
{
// Convert the builder's dictionary mapping of operation types to stored procedure names into a collection of ProcedureDefinition objects.
return this.ProcedureMapping.ProcedureMap
.Where(kvPair => !string.IsNullOrEmpty(kvPair.Value))
.Select(kvPair => new ProcedureDefinition(kvPair.Key, kvPair.Value));
}
/// <summary>
/// Maps the data defined in this graph to a stored procedure.
/// </summary>
/// <param name="procedureName">Name of the procedure responsible for receiving the data in this graph.</param>
/// <returns>
/// Returns the data graph.
/// </returns>
public IGraph MapToProcedure(string procedureName)
{
this.ProcedureMapping.DefineForAllOperations(procedureName);
return this;
}
/// <summary>
/// Allows for mapping the data in this graph to different stored procedures.
/// </summary>
/// <returns>
/// Returns an instance of IProcedureBuilder used to perform the mapping operation
/// </returns>
public IProcedureBuilder MapToProcedure()
{
return this.ProcedureMapping;
}
/// <summary>
/// Defines what Entity will be used to building out property definitions
/// </summary>
/// <typeparam name="TEntity">The type of the entity to use during the building process.</typeparam>
/// <returns>
/// Returns an instance of IEntityDefinition that can be used for building out the entity definition
/// </returns>
public IPropertyBuilderForInput<TEntity> MapFromEntity<TEntity>() where TEntity : class, new()
{
this.CreateDefinition<TEntity>();
return new PropertyBuilder<TEntity>(this, DefinitionDirection.In);
}
/// <summary>
/// Defines what Entity will be used to building out property definitions
/// </summary>
/// <typeparam name="TEntity">The type of the entity to use during the building process.</typeparam>
/// <param name="entity">An existing instance of the entity used during the building process.</param>
/// <returns>
/// Returns an instance of IEntityDefinition that can be used for building out the entity definition
/// </returns>
public IPropertyBuilderForInput<TEntity> MapFromEntity<TEntity>(TEntity entity) where TEntity : class, new()
{
this.CreateDefinition<TEntity>();
return new PropertyBuilder<TEntity>(this, DefinitionDirection.In, entity);
}
public IPropertyBuilderForOutput<TEntity> MapToEntity<TEntity>() where TEntity : class, new()
{
this.CreateDefinition<TEntity>();
return new PropertyBuilder<TEntity>(this, DefinitionDirection.Out);
}
public IPropertyBuilderForInput<TEntity> MapEntity<TEntity>() where TEntity : class, new()
{
this.CreateDefinition<TEntity>();
return new PropertyBuilder<TEntity>(this, DefinitionDirection.Both);
}
public IPropertyBuilderForInput<TEntity> MapEntity<TEntity>(TEntity entity) where TEntity : class, new()
{
this.CreateDefinition<TEntity>();
return new PropertyBuilder<TEntity>(this, DefinitionDirection.Both, entity);
}
private void CreateDefinition<TEntity>() where TEntity : class, new()
{
// A definition has already been created, so return.
if (this.GraphMap.ContainsKey(typeof(TEntity)))
{
return;
}
this.GraphMap.Add(typeof(TEntity), new List<PropertyDefinition>());
}
}
The point of this class is to let you map a Type to it and then use the interfaces returned by the MapEntity methods to define properties and their characteristics. The repository is then given the builder and pulls the mappings from it, generating a Dapper DynamicParameters collection. The map is also used in a custom Dapper type mapper.
Since I am only caching things that are queried, I'll save some page-space and just share the query method on my repository, and its TypeMapper.
public async Task<IEnumerable<TEntity>> GetAsync<TEntity>(IGraph builder, IDataContext context = null)
{
IEnumerable<TEntity> items = null;
DynamicParameters parameters = this.GetParametersFromDefinition(builder, DefinitionDirection.In);
// Setup our mapping of the return results.
this.SetupSqlMapper<TEntity>(builder);
ProcedureDefinition mapping = builder.GetProcedureForOperation(ProcedureOperationType.Select);
// Query the database
await this.SetupConnection(
context,
async (connection, transaction) => items = await connection.QueryAsync<TEntity>(
mapping.StoredProcedure,
parameters,
commandType: CommandType.StoredProcedure,
transaction: transaction));
return items;
}
private async Task SetupConnection(IDataContext context, Func<IDbConnection, IDbTransaction, Task> communicateWithDatabase)
{
SqlDataContext connectionContext = await this.CreateConnectionContext(context);
IDbConnection databaseConnection = await connectionContext.GetConnection();
// Fetch the transaction, if any, associated with the context. If none exists, null is returned and passed
// in to the ExecuteAsync method.
IDbTransaction transaction = connectionContext.GetTransaction();
try
{
await communicateWithDatabase(databaseConnection, transaction);
}
catch (Exception)
{
this.RollbackChanges(connectionContext);
throw;
}
// If we are given a shared connection, we are not responsible for closing it.
if (context == null)
{
this.CloseConnection(connectionContext);
}
}
private DynamicParameters GetParametersFromDefinition(IGraph builder, DefinitionDirection direction)
{
// Fetch the model definition, then loop through each property we are saving and add it
// to a Dapper DynamicParameters collection.
Dictionary<Type, IEnumerable<PropertyDefinition>> definition = builder.GetBuilderGraph();
var parameters = new DynamicParameters();
foreach (var pair in definition)
{
IEnumerable<PropertyDefinition> properties =
pair.Value.Where(property => property.Direction == direction || property.Direction == DefinitionDirection.Both);
foreach (PropertyDefinition data in properties)
{
parameters.Add(data.ResolvedName, data.PropertyValue);
}
}
return parameters;
}
/// <summary>
/// Sets up the Dapper SQL Type mapper.
/// </summary>
/// <param name="type">The type we want to map the build definition to.</param>
/// <param name="graph">The graph.</param>
private void SetupSqlMapper(Type type, IGraph builder)
{
SqlMapper.SetTypeMap(
type,
new CustomPropertyTypeMap(type, (typeToMap, columnName) =>
{
// Grab all of the property definitions on the entity defined with the IGraph
IEnumerable<PropertyDefinition> entityDefinition = builder.GetEntityGraph(typeToMap);
PropertyInfo propertyForColumn;
// Lookup a PropertyDefinition definition from the IGraph that can map to the columnName provided by the database.
PropertyDefinition propertyData = null;
if (this.dataStoreConfig.EnableSensitiveCasing)
{
propertyData = entityDefinition.FirstOrDefault(
definition => definition.ResolvedName.Equals(columnName) || definition.Property.Name.Equals(columnName));
}
else
{
propertyData = entityDefinition.FirstOrDefault(
definition =>
definition.ResolvedName.ToLower().Equals(columnName.ToLower()) ||
definition.Property.Name.ToLower().Equals(columnName.ToLower()));
}
// If a mapping definition was not found, use the TypePool to fetch the property info from the type cache.
// Otherwise we assign the property from the definition mapping.
if (propertyData == null)
{
propertyForColumn = this.dataStoreConfig.EnableSensitiveCasing
? TypePool.GetProperty(typeToMap, (info) => info.Name.Equals(columnName))
: TypePool.GetProperty(typeToMap, (info) => info.Name.ToLower().Equals(columnName.ToLower()));
}
else
{
propertyForColumn = propertyData.Property;
}
if (propertyForColumn == null)
{
Debug.WriteLine(string.Format("The column {0} could not be mapped to the Type {1}. It does not have a definition associated with the data graph, nor a property with a matching name.", columnName, typeToMap.Name));
}
return propertyForColumn;
}));
}
There are a couple of different paths I'm considering here.
Override GetHashCode() on my IGraph implementation and have it return a hash of the GraphMap dictionary and RootKey properties. Then, in the repository, I ask the graph for its hash code, query the database and cache the returned results in a Dictionary of <HashCode, ResultSet>. The next time I create the builder, or re-use an existing one, the repository could do a key lookup and return the cached results.
Is that approach safe? Can I call GraphMap.GetHashCode() and rest assured that the hash will be based on the contents of the dictionary, and therefore (mostly) unique? Or would I have to iterate over each item in the dictionary, combining the hash codes of its members, to prevent hash code collisions? Something like the sketch below is what I mean by combining them manually.
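A rough sketch of that manual combination (this assumes PropertyDefinition has a meaningful GetHashCode of its own, and ignores the fact that Dictionary iteration order would also affect the result):
public override int GetHashCode()
{
    unchecked
    {
        // Start from the root key, then fold in every mapped type and its definitions.
        int hash = this.RootKey != null ? this.RootKey.GetHashCode() : 17;
        foreach (var pair in this.GraphMap)
        {
            hash = (hash * 31) + pair.Key.GetHashCode();
            foreach (var definition in pair.Value)
            {
                hash = (hash * 31) + definition.GetHashCode();
            }
        }
        return hash;
    }
}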
Cache the expression used to generate the map. Within my repository, I can generate a hash based on the hash code of each expression used on the builder. That way, if the same series of expressions is ever used to build the mapping, the repository would know and could return the previously fetched data.
Are hash codes a safe way to go, or should I be exploring different routes? Is there an industry-standard way of going about this?

Data Caching in ASP.Net

I need to fill some dropdown boxes from reference data (e.g. a city list, country list, etc.) in various web forms. I think we should cache this data in our application so that we don't hit the database on every form. I am new to caching and ASP.NET. Please suggest how to do this.
I always add the following class to my projects, which gives me easy access to the Cache object. Implementing this, following Hasan Khan's answer, would be a good way to go.
public static class CacheHelper
{
/// <summary>
/// Insert value into the cache using
/// appropriate name/value pairs
/// </summary>
/// <typeparam name="T">Type of cached item</typeparam>
/// <param name="o">Item to be cached</param>
/// <param name="key">Name of item</param>
public static void Add<T>(T o, string key, double Timeout)
{
HttpContext.Current.Cache.Insert(
key,
o,
null,
DateTime.Now.AddMinutes(Timeout),
System.Web.Caching.Cache.NoSlidingExpiration);
}
/// <summary>
/// Remove item from cache
/// </summary>
/// <param name="key">Name of cached item</param>
public static void Clear(string key)
{
HttpContext.Current.Cache.Remove(key);
}
/// <summary>
/// Check for item in cache
/// </summary>
/// <param name="key">Name of cached item</param>
/// <returns></returns>
public static bool Exists(string key)
{
return HttpContext.Current.Cache[key] != null;
}
/// <summary>
/// Retrieve cached item
/// </summary>
/// <typeparam name="T">Type of cached item</typeparam>
/// <param name="key">Name of cached item</param>
/// <param name="value">Cached value. Default(T) if item doesn't exist.</param>
/// <returns>Cached item as type</returns>
public static bool Get<T>(string key, out T value)
{
try
{
if (!Exists(key))
{
value = default(T);
return false;
}
value = (T)HttpContext.Current.Cache[key];
}
catch
{
value = default(T);
return false;
}
return true;
}
}
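Hypothetical usage for the dropdown scenario (City, GetCitiesFromDb and ddlCity are placeholders for your own types, data access and controls):
List<City> cities;
// Try the cache first; fall back to the database and cache the result for 60 minutes.
if (!CacheHelper.Get("CityList", out cities))
{
    cities = GetCitiesFromDb();
    CacheHelper.Add(cities, "CityList", 60);
}
ddlCity.DataSource = cities;
ddlCity.DataTextField = "Name";
ddlCity.DataValueField = "Id";
ddlCity.DataBind();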
From another question of yours, I read that you're using a 3-layer architecture with data access, business and presentation layers.
So I assume that you have some data access class. The ideal thing to do would be to have a cached implementation of the same class and do the caching in that.
E.g. if you have an interface IUserRepository, then a UserRepository class would implement it and add/delete/update entries in the db. You can also have a CachedUserRepository which wraps an instance of UserRepository; on get methods it first looks into the cache against some key (derived from the method parameters) and, if the item is found, returns it; otherwise it calls the method on the inner object, gets the data, adds it to the cache and then returns it. A sketch follows below.
Your CachedUserRepository will also need access to the cache object, obviously. You can look at http://msdn.microsoft.com/en-us/library/18c1wd61(v=vs.85).aspx for details on how to use the Cache object.
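A rough sketch of that decorator, re-using the CacheHelper from the answer above (IUserRepository, User and the method shapes are assumptions, not an existing API):
public class CachedUserRepository : IUserRepository
{
    private readonly IUserRepository inner;
    public CachedUserRepository(IUserRepository inner)
    {
        this.inner = inner;
    }
    public User GetById(int id)
    {
        string key = "User_" + id;
        User user;
        // Serve from cache when possible; otherwise hit the real repository and cache the result.
        if (!CacheHelper.Get(key, out user))
        {
            user = inner.GetById(id);
            CacheHelper.Add(user, key, 30); // cache for 30 minutes
        }
        return user;
    }
    public void Update(User user)
    {
        // Writes go straight through and invalidate the stale cache entry.
        inner.Update(user);
        CacheHelper.Clear("User_" + user.Id);
    }
}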
