I am creating an SQLite Database in Visual Studio with Xamarin in C#.
I should note that this is for android only.
I am trying to insert data into the SQLite database, but I am unsure how.
I have been following this but I'm still unsure.
Here is the method I am trying to create.
/// <summary>
/// Insert a single ping group into the SQLite ping database.
/// </summary>
/// <param name="pingGroup"></param>
public void AddUnsynchronizedPing(PingGroup pingGroup)
{
    // TODO: Add the passed ping group parameter into the SQLite database as new/unsynchronized.
    if (pingGroup != null)
    {
        // Add ping group to the database.
        // Add pings to the database.
        // Maybe one step, maybe done separately.
        // If done separately, must set Ping.PingGroupID to ID of original ping group.
    }
}
For context, here is the entire class.
using System;
using System.Collections.Generic;
using System.IO;
using SQLite;

namespace BB.Mobile
{
    /// <summary>
    /// A class to provide a single interface for interacting with all SQLite data operations for stored tracking points.
    /// </summary>
    class DataManager
    {
        private SQLiteConnection db = null;

        public DataManager()
        {
            if (this.db == null)
            {
                string dbPath = Path.Combine(
                    System.Environment.GetFolderPath(System.Environment.SpecialFolder.Personal),
                    "bb.db3");
                db = new SQLiteConnection(dbPath);
                db.CreateTable<Ping>();
                db.CreateTable<PingGroup>();
            }
        }

        /// <summary>
        /// Will compile and return all matching unsynchronized ping data from the SQLite database.
        /// </summary>
        /// <returns></returns>
        public List<PingGroup> GetUnsynchronizedPings()
        {
            List<PingGroup> unsynchronizedPings = new List<PingGroup>();
            // TODO: Retrieve all unsynchronized pings from the SQLite database and return them to the caller.
            //var pGroup = db.Get<PingGroup>();
            //var pGroupList = db.List<PingGroup>();
            var pGroups = db.Table<PingGroup>();
            foreach (var pGroup in pGroups)
            {
            }
            return unsynchronizedPings;
        }

        /// <summary>
        /// Insert a single ping group into the SQLite ping database.
        /// </summary>
        /// <param name="pingGroup"></param>
        public void AddUnsynchronizedPing(PingGroup pingGroup)
        {
            // TODO: Add the passed ping group parameter into the SQLite database as new/unsynchronized.
            if (pingGroup != null)
            {
                // Add ping group to the database.
                // Add pings to the database.
                // Maybe one step, maybe done separately.
                // If done separately, must set Ping.PingGroupID to ID of original ping group.
            }
        }

        /// <summary>
        /// Mark all open and unsynchronized pings in the database as synchronized.
        /// </summary>
        public void SetAllPingsSynchronized()
        {
            db.DeleteAll<PingGroup>();
            db.DeleteAll<Ping>();
        }
    }
}
Thank you in advance.
To insert an object into the SQLite database, you can just use something like:
void InsertPing(Ping p)
{
    db.Insert(p);
}

void InsertGroupOfPings(IEnumerable<Ping> pings)
{
    db.InsertAll(pings);
}
and to retrieve objects (for example):
List<Ping> GetPings()
{
    // I assume here that the Ping object has a property named Synchronized
    return db.Query<Ping>("select * from Ping where Synchronized = 0");
}
The SQLite library creates its tables according to your class definitions, so you can think about the properties of the class as of columns inside the table.
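Applying that to the AddUnsynchronizedPing stub from the question, a minimal sketch might look like the following. Note that the PingGroup.ID and PingGroup.Pings property names are assumptions (the question only hints at them via the comment about Ping.PingGroupID):

```csharp
public void AddUnsynchronizedPing(PingGroup pingGroup)
{
    if (pingGroup != null)
    {
        // Insert the group first so SQLite assigns its auto-increment primary key.
        db.Insert(pingGroup);

        // Point each child ping at the new group ID, then insert them in one batch.
        // "ID" and "Pings" are assumed property names here.
        foreach (var ping in pingGroup.Pings)
        {
            ping.PingGroupID = pingGroup.ID;
        }
        db.InsertAll(pingGroup.Pings);
    }
}
```

This assumes the ID property is marked with sqlite-net's [PrimaryKey, AutoIncrement] attributes so that db.Insert populates it after the insert.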
***Just for learning purposes***
I recently learned about caching and cache mechanisms, and I generally understand that caching improves system response performance and reduces the number of interactions with the database.
Based on a conversation with someone else, I got the general idea that we can create an independent library that caches the data retrieved from the database; whenever we need that data in our business layer, we can retrieve it from the cache layer instead.
They also mentioned, without much detail, that the database can update the cache layer automatically when the data in the database is refreshed, e.g. by updates, inserts, and deletes.
So my question is: how does the database know about the cache layer and update it proactively and automatically? Can anybody share something with me? Are there any existing frameworks or open source solutions?
I would much appreciate your kind help, and I'm looking forward to hearing from you.
Try this third party cache: CacheCrow, it is a simple LFU based cache.
Install using powershell command in visual studio: Install-Package CacheCrow
Code Snippet:
// initialization of singleton class
ICacheCrow<string, string> cache = CacheCrow<string, string>.Initialize(1000);

// adding value to cache
cache.Add("#12", "Jack");

// searching value in cache
var flag = cache.LookUp("#12");
if (flag)
{
    Console.WriteLine("Found");
}

// removing value
var value = cache.Remove("#12");
For more information you can visit: https://github.com/RishabKumar/CacheCrow
Jacob,
Let me give you an example...
In the data layer, when we are going to retrieve a list of objects that should be cached from the database, we could do something like this:
if (!CacheHelper.Get("AllRoles", out entities))
{
    var items = _context.Set<Roles>().ToList();
    entities = items;
    var cachableEntities = entities.ToList();
    CacheHelper.Add(cachableEntities, "AllRoles");
}
return entities;
You'll notice that I have a CacheHelper that will search the cache for the key "AllRoles". If it finds the cache, it will return the entities from the cache. If it can't find it, it will get the data from the database and create the cache with that key.
Additionally, every time we add, delete, or change an item in this table, we can simply destroy this cache:
CacheHelper.Clear(CacheKey);
So, answering the question: in this sample the database doesn't know when to recreate the cache; the application logic does.
Here is a sample CacheHelper you may use:
using System;
using System.Collections.Generic;
using System.Web;

namespace Core.Helpers
{
    public static class CacheHelper
    {
        public static List<string> GetCacheKeys()
        {
            List<string> keys = new List<string>();
            // retrieve application Cache enumerator
            var enumerator = System.Web.HttpRuntime.Cache.GetEnumerator();
            while (enumerator.MoveNext())
            {
                keys.Add(enumerator.Key.ToString());
            }
            return keys;
        }

        /// <summary>
        /// Insert value into the cache using
        /// appropriate name/value pairs
        /// </summary>
        /// <typeparam name="T">Type of cached item</typeparam>
        /// <param name="o">Item to be cached</param>
        /// <param name="key">Name of item</param>
        public static void Add<T>(T o, string key)
        {
            // NOTE: Apply expiration parameters as you see fit.
            // I typically pull from configuration file.
            // In this example, I want an absolute
            // timeout so changes will always be reflected
            // at that time. Hence, the NoSlidingExpiration.
            if (HttpContext.Current != null)
                HttpContext.Current.Cache.Insert(
                    key,
                    o,
                    null,
                    DateTime.Now.AddMinutes(1440),
                    System.Web.Caching.Cache.NoSlidingExpiration);
        }

        /// <summary>
        /// Remove item from cache
        /// </summary>
        /// <param name="key">Name of cached item</param>
        public static void Clear(string key)
        {
            if (HttpContext.Current != null)
                HttpContext.Current.Cache.Remove(key);
        }

        /// <summary>
        /// Check for item in cache
        /// </summary>
        /// <param name="key">Name of cached item</param>
        /// <returns></returns>
        public static bool Exists(string key)
        {
            var exists = HttpContext.Current != null && HttpContext.Current.Cache[key] != null;
            return exists;
        }

        /// <summary>
        /// Retrieve cached item
        /// </summary>
        /// <typeparam name="T">Type of cached item</typeparam>
        /// <param name="key">Name of cached item</param>
        /// <param name="value">Cached value. Default(T) if
        /// item doesn't exist.</param>
        /// <returns>Cached item as type</returns>
        public static bool Get<T>(string key, out T value)
        {
            try
            {
                if (!Exists(key))
                {
                    value = default(T);
                    return false;
                }
                value = (T)HttpContext.Current.Cache[key];
            }
            catch
            {
                value = default(T);
                return false;
            }
            return true;
        }
    }
}
I'm developing a Windows Forms project with Entity Framework 6.0. I use the Database-First approach and created the edmx file using the Visual Studio wizards. Like everyone else, I also produced pre-generated views to improve performance, using Entity Framework Power Tools. It works and produces a my_EF.views.cs file.
But my first query still takes a long time.
I suspect my project file structure might be the problem. In my Solution Explorer I have two projects: "MyApp.GUI" (Windows Forms project) and "MyApp.DataAccess" (class library project). I add a reference to the "MyApp.DataAccess" dll in "MyApp.GUI".
My Entity Framework edmx file is located in the "MyApp.DataAccess" project. I don't know where to put the pre-generated view file: in the "MyApp.DataAccess" class library or in the "MyApp.GUI" Windows Forms project. Currently the pre-generated view file is in the "MyApp.DataAccess" class library.
Is my problem associated with my project file structure? Or maybe the pre-generated view file (produced by Entity Framework Power Tools) does not work? This is my pre-generated view file. Please suggest possible solutions.
//------------------------------------------------------------------------------
// <auto-generated>
// This code was generated by a tool.
//
// Changes to this file may cause incorrect behavior and will be lost if
// the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------
using System.Data.Entity.Infrastructure.MappingViews;
[assembly: DbMappingViewCacheTypeAttribute(
typeof(TheWayPointOfSale.DataAccess.TheWayPOSEntities),
typeof(Edm_EntityMappingGeneratedViews.ViewsForBaseEntitySetsa6f884d523752b7a79e1bab1a97ba058ffdac6b45a04cc850e30f503a4e1fc49))]
namespace Edm_EntityMappingGeneratedViews
{
using System;
using System.CodeDom.Compiler;
using System.Data.Entity.Core.Metadata.Edm;
/// <summary>
/// Implements a mapping view cache.
/// </summary>
[GeneratedCode("Entity Framework Power Tools", "0.9.0.0")]
internal sealed class ViewsForBaseEntitySetsa6f884d523752b7a79e1bab1a97ba058ffdac6b45a04cc850e30f503a4e1fc49 : DbMappingViewCache
{
/// <summary>
/// Gets a hash value computed over the mapping closure.
/// </summary>
public override string MappingHashValue
{
get { return "a6f884d523752b7a79e1bab1a97ba058ffdac6b45a04cc850e30f503a4e1fc49"; }
}
/// <summary>
/// Gets a view corresponding to the specified extent.
/// </summary>
/// <param name="extent">The extent.</param>
/// <returns>The mapping view, or null if the extent is not associated with a mapping view.</returns>
public override DbMappingView GetView(EntitySetBase extent)
{
if (extent == null)
{
throw new ArgumentNullException("extent");
}
var extentName = extent.EntityContainer.Name + "." + extent.Name;
if (extentName == "TheWayPOSModelStoreContainer.Products")
{
return GetView0();
}
if (extentName == "TheWayPOSModelStoreContainer.Um")
{
return GetView1();
}
if (extentName == "TheWayPOSModelStoreContainer.Products_Um")
{
return GetView2();
}
if (extentName == "TheWayPOSEntities.Products")
{
return GetView3();
}
if (extentName == "TheWayPOSEntities.Ums")
{
return GetView4();
}
if (extentName == "TheWayPOSEntities.Products_Um")
{
return GetView5();
}
return null;
}
/// <summary>
/// Gets the view for TheWayPOSModelStoreContainer.Products.
/// </summary>
/// <returns>The mapping view.</returns>
private static DbMappingView GetView0()
{
return new DbMappingView(@"
SELECT VALUE -- Constructing Products
[TheWayPOSModel.Store.Products](T1.[Products.product_code], T1.[Products.product_name], T1.[Products.buying_price], T1.[Products.discount_percentage], T1.[Products.retail_price], T1.[Products.wholesale_price], T1.Products_supplier, T1.[Products.created_at], T1.[Products.updated_at])
FROM (
SELECT
T.product_code AS [Products.product_code],
T.product_name AS [Products.product_name],
T.buying_price AS [Products.buying_price],
T.discount_percentage AS [Products.discount_percentage],
T.retail_price AS [Products.retail_price],
T.wholesale_price AS [Products.wholesale_price],
T.supplier AS Products_supplier,
T.created_at AS [Products.created_at],
T.updated_at AS [Products.updated_at],
True AS _from0
FROM TheWayPOSEntities.Products AS T
) AS T1");
}
/// <summary>
/// Gets the view for TheWayPOSModelStoreContainer.Um.
/// </summary>
/// <returns>The mapping view.</returns>
private static DbMappingView GetView1()
{
return new DbMappingView(@"
SELECT VALUE -- Constructing Um
[TheWayPOSModel.Store.Um](T1.[Um.um_code], T1.[Um.um_shortname], T1.[Um.um_fullname], T1.Um_disposable, T1.[Um.disposed_um_code], T1.[Um.disposed_um_qty], T1.[Um.depend_on_product], T1.[Um.created_at], T1.[Um.updated_at])
FROM (
SELECT
T.um_code AS [Um.um_code],
T.um_shortname AS [Um.um_shortname],
T.um_fullname AS [Um.um_fullname],
T.disposable AS Um_disposable,
T.disposed_um_code AS [Um.disposed_um_code],
T.disposed_um_qty AS [Um.disposed_um_qty],
T.depend_on_product AS [Um.depend_on_product],
T.created_at AS [Um.created_at],
T.updated_at AS [Um.updated_at],
True AS _from0
FROM TheWayPOSEntities.Ums AS T
) AS T1");
}
/// <summary>
/// Gets the view for TheWayPOSModelStoreContainer.Products_Um.
/// </summary>
/// <returns>The mapping view.</returns>
private static DbMappingView GetView2()
{
return new DbMappingView(@"
SELECT VALUE -- Constructing Products_Um
[TheWayPOSModel.Store.Products_Um](T1.[Products_Um.id], T1.[Products_Um.product_code], T1.[Products_Um.um_code], T1.[Products_Um.disposed_um_code], T1.[Products_Um.disposed_um_qty])
FROM (
SELECT
T.id AS [Products_Um.id],
T.product_code AS [Products_Um.product_code],
T.um_code AS [Products_Um.um_code],
T.disposed_um_code AS [Products_Um.disposed_um_code],
T.disposed_um_qty AS [Products_Um.disposed_um_qty],
True AS _from0
FROM TheWayPOSEntities.Products_Um AS T
) AS T1");
}
/// <summary>
/// Gets the view for TheWayPOSEntities.Products.
/// </summary>
/// <returns>The mapping view.</returns>
private static DbMappingView GetView3()
{
return new DbMappingView(@"
SELECT VALUE -- Constructing Products
[TheWayPOSModel.Product](T1.[Product.product_code], T1.[Product.product_name], T1.[Product.buying_price], T1.[Product.discount_percentage], T1.[Product.retail_price], T1.[Product.wholesale_price], T1.Product_supplier, T1.[Product.created_at], T1.[Product.updated_at])
FROM (
SELECT
T.product_code AS [Product.product_code],
T.product_name AS [Product.product_name],
T.buying_price AS [Product.buying_price],
T.discount_percentage AS [Product.discount_percentage],
T.retail_price AS [Product.retail_price],
T.wholesale_price AS [Product.wholesale_price],
T.supplier AS Product_supplier,
T.created_at AS [Product.created_at],
T.updated_at AS [Product.updated_at],
True AS _from0
FROM TheWayPOSModelStoreContainer.Products AS T
) AS T1");
}
/// <summary>
/// Gets the view for TheWayPOSEntities.Ums.
/// </summary>
/// <returns>The mapping view.</returns>
private static DbMappingView GetView4()
{
return new DbMappingView(@"
SELECT VALUE -- Constructing Ums
[TheWayPOSModel.Um](T1.[Um.um_code], T1.[Um.um_shortname], T1.[Um.um_fullname], T1.Um_disposable, T1.[Um.disposed_um_code], T1.[Um.disposed_um_qty], T1.[Um.depend_on_product], T1.[Um.created_at], T1.[Um.updated_at])
FROM (
SELECT
T.um_code AS [Um.um_code],
T.um_shortname AS [Um.um_shortname],
T.um_fullname AS [Um.um_fullname],
T.disposable AS Um_disposable,
T.disposed_um_code AS [Um.disposed_um_code],
T.disposed_um_qty AS [Um.disposed_um_qty],
T.depend_on_product AS [Um.depend_on_product],
T.created_at AS [Um.created_at],
T.updated_at AS [Um.updated_at],
True AS _from0
FROM TheWayPOSModelStoreContainer.Um AS T
) AS T1");
}
/// <summary>
/// Gets the view for TheWayPOSEntities.Products_Um.
/// </summary>
/// <returns>The mapping view.</returns>
private static DbMappingView GetView5()
{
return new DbMappingView(@"
SELECT VALUE -- Constructing Products_Um
[TheWayPOSModel.Products_Um](T1.[Products_Um.id], T1.[Products_Um.product_code], T1.[Products_Um.um_code], T1.[Products_Um.disposed_um_code], T1.[Products_Um.disposed_um_qty])
FROM (
SELECT
T.id AS [Products_Um.id],
T.product_code AS [Products_Um.product_code],
T.um_code AS [Products_Um.um_code],
T.disposed_um_code AS [Products_Um.disposed_um_code],
T.disposed_um_qty AS [Products_Um.disposed_um_qty],
True AS _from0
FROM TheWayPOSModelStoreContainer.Products_Um AS T
) AS T1");
}
}
}
I know this information may not be worth a full answer, but for other newbies like me I would like to share my experience.
Following Maarten's comment, I put breakpoints in each of the pre-generated view files. The pre-generated view in the DataAccess class library is the one hit by the breakpoint. So, in situations like mine, it is generally safe to place the pre-generated view file in the DataAccess class library where the edmx file is located.
Well, as the title says. I'd like to use a script component destination, and then utilize LINQ to select which rows to process for output.
For a bit more background, I have this ugly merged thing with a one-to-many relationship. The rows look sort of like:
[ID] [Title] [OneToManyDataID]
1 Item one 2
1 Item one 4
1 Item one 3
3 Item two 1
3 Item two 5
We'll call the objects [Item], which has the ID and Title columns and [OneToMany]
I was hoping I could throw the entire thing to a script component destination, and then use LINQ to do something like group by the item and only take the data from the highest OneToMany object. Sort of like:
foreach (var item in Data.GroupBy(d => d.Item).Select(d => new { Item = d.Key }))
{
    // Then pick out the highest OneToMany ID for that row to use with it.
}
I realize there are probably better LINQ queries to accomplish this, but the point is, the script component in SSIS seems to only allow working with it on a per-row basis, with the predefined ProcessInputRow-method. Where I'd like to determine exactly which rows are processed and what properties are passed to that method.
How would I go about doing this?
To restate your problem: how can I make a Script Transformation stop processing row-by-row? By default, a script transformation is a synchronous component: 1 row in, 1 row out. You'll want to change that to an asynchronous component: 1 row in, 0 to many rows out.
On your Script Transformation Editor, the Inputs and Outputs tab, for your output collection Output 0 change the value of SynchronousInputID from whatever it is to None.
Don't cast stones at my LINQ code; I trust you can handle making it work right. The intention of this code block is to demonstrate how you would collect your rows for processing and then pass them on to a downstream consumer after modifying them. I commented the methods to help you understand what each one does in the script component life cycle, but if you'd rather read MSDN, they know a bit more than I do ;)
using System;
using System.Data;
using System.Linq;
using System.Collections.Generic;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;

[Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute]
public class ScriptMain : UserComponent
{
    /// <summary>
    /// Our LINQ-able thing.
    /// </summary>
    List<Data> data;

    /// <summary>
    /// Do our pre-execute tasks; in particular, we will instantiate
    /// our collection.
    /// </summary>
    public override void PreExecute()
    {
        base.PreExecute();
        this.data = new List<Data>();
    }

    /// <summary>
    /// This method is called once the last row has hit.
    /// Since we can only find the highest OneToManyDataID
    /// after receiving all the rows, this is the only time we can
    /// send rows to the output buffer.
    /// </summary>
    public override void FinishOutputs()
    {
        base.FinishOutputs();
        CreateNewOutputRows();
    }

    /// <summary>
    /// Accumulate all the input rows into an internal LINQ-able
    /// collection
    /// </summary>
    /// <param name="Row">The buffer holding the current row</param>
    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        // there is probably a more graceful mechanism of spinning
        // up this struct.
        // You must also worry about fields that have null types.
        Data d = new Data();
        d.ID = Row.ID;
        d.Title = Row.Title;
        d.OneToManyId = Row.OneToManyDataID;
        this.data.Add(d);
    }

    /// <summary>
    /// This is the process to generate new rows. As we only want to
    /// generate rows once all the rows have arrived, only call this
    /// at the point our internal collection has accumulated all the
    /// input rows.
    /// </summary>
    public override void CreateNewOutputRows()
    {
        foreach (var item in this.data.GroupBy(d => d.ID).Select(d => new { Item = d.Key }))
        {
            // Then pick out the highest OneToMany ID for that row to use with it.
            // Magic happens
            // I don't "get" LINQ so I can't implement the poster's action
            int id = 0;
            int maxOneToManyID = 2;
            string title = string.Empty;
            id = item.Item;

            Output0Buffer.AddRow();
            Output0Buffer.ID = id;
            Output0Buffer.OneToManyDataID = maxOneToManyID;
            Output0Buffer.Title = title;
        }
    }
}

/// <summary>
/// I think this works well enough to demo
/// </summary>
public struct Data
{
    public int ID { get; set; }
    public string Title { get; set; }
    public int OneToManyId { get; set; }
}
Configuration of the Script Transformation
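The grouping the comments above leave open could be sketched like this, keeping the answer's Data struct. This simply takes the highest OneToManyId per ID/Title pair, which is my reading of what the poster wants:

```csharp
var collapsed = this.data
    .GroupBy(d => new { d.ID, d.Title })
    .Select(g => new Data
    {
        ID = g.Key.ID,
        Title = g.Key.Title,
        // Keep only the highest OneToMany ID in each group.
        OneToManyId = g.Max(d => d.OneToManyId)
    });

foreach (var item in collapsed)
{
    Output0Buffer.AddRow();
    Output0Buffer.ID = item.ID;
    Output0Buffer.OneToManyDataID = item.OneToManyId;
    Output0Buffer.Title = item.Title;
}
```

This body would replace the placeholder loop in CreateNewOutputRows.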
Please see this first:
Good Coding Practices
So, this is my design.
1. Website
2. Business Logic Layer
3. DALFacade (we use a DALFacade to hide the data access, because we use 2 different stores, SQL Server and DB2)
4. DAL
In the DAL we use the unit of work pattern and the repository pattern.
The big question here is: will the code below run OK for transactions that are created from the business logic?
Page:
public partial class NewBonusRequest : System.Web.UI.Page
{
    #region Constructor and Instantiation of Business Logic
    /// <summary>
    /// Property that holds the Business Logic type to call methods
    /// </summary>
    public IRequestBL RequestBL { get; private set; }

    /// <summary>
    /// The default constructor will use the default implementation of the business logic interface
    /// </summary>
    public NewBonusRequest()
        : this(new RequestBL())
    {
    }

    /// <summary>
    /// The constructor accepts an IRequestBL type
    /// </summary>
    /// <param name="requestBL">IRequestBL type</param>
    public NewBonusRequest(IRequestBL requestBL)
    {
        RequestBL = requestBL;
    }
    #endregion

    protected void PageLoad(object sender, EventArgs e)
    {
        if (!Page.IsPostBack)
        {
        }
    }

    #region Control Events
    protected void BtnSubmitRequestClick(object sender, EventArgs e)
    {
        var request = new Request
        {
            IsOnHold = true
            // All other properties go here.
        };
        RequestBL.Save(request);
    }
    #endregion
}
Business Logic Code:
public interface IRequestBL
{
    void Save(Request request);
}

/// <summary>
/// Class in charge of the business logic for EcoBonusRequest
/// </summary>
public class RequestBL : IRequestBL
{
    /// <summary>
    /// Saves a new EcoBonus request into the database and evaluates business rules here
    /// </summary>
    /// <param name="request">Request entity</param>
    public void Save(Request request)
    {
        using (var scope = new TransactionScope())
        {
            Request.Save(request);
            // Call to other DALFacade methods that insert data in different tables
            // OtherObject.Save(otherobject)
            scope.Complete();
        }
    }
}
Yes, within the same thread EF will properly honor an ambient transaction scope if it exists; EF will not create a new transaction if it is already inside one.
However, you must be careful, because if you query your database outside a transaction you can get dirty reads. EF does not wrap reads in a transaction when none exists, but it does create a new transaction while saving changes if one does not already exist.
In your code you are only saving changes inside the transaction; you should also be careful while reading, and encapsulate your queries in a scope as well, in smaller units.
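A minimal sketch of that advice, with hypothetical context and entity names, keeping both the read and the write inside the same scope:

```csharp
using (var scope = new TransactionScope())
{
    // Reads executed inside the scope participate in the ambient
    // transaction, so they cannot see uncommitted concurrent writes.
    var pending = context.Requests.Where(r => r.IsOnHold).ToList();

    foreach (var request in pending)
    {
        request.IsOnHold = false;
    }

    // SaveChanges enlists in the ambient transaction instead of
    // creating its own.
    context.SaveChanges();
    scope.Complete();
}
```

Here `context` and the `Requests` set are placeholders for whatever EF context the DAL exposes; the point is only that reads and writes share one TransactionScope.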
I created a simple Caching Data Access Layer that has caching using the Enterprise Library Caching Application Block and also makes use of SQL Query Notification - therefore not supporting any queries which are not valid for query notification.
Background: This was put in place after the application was developed in order to lighten the load on the database and speed up the application. The main use of this DAL is for pulling data that is not expected to change very often such as data in Look Up Tables (presented in drop downs on the UI, etc).
It is mainly used like the following example:
var cachingDal = new CachingDataAccessLayer();
var productTypes = cachingDal.LoadData<ProductType>();
Where ProductType is a Linq to SQL table. I am curious to see what people think of the implementation I came up with and if it is horrible or amazing.
Here's the code. I'm looking for any suggestions, criticisms, etc. Keep in mind I didn't choose the technology and am building on top of an existing system, so switching data access strategies is not really my call.
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;
using Microsoft.Practices.EnterpriseLibrary.Caching;
using Microsoft.Practices.EnterpriseLibrary.Logging;
using MyDatabase;
public class CachingDataAccessLayer
{
#region Cache Keys
private const string CacheManagerName = "CachingDataAccessLayer";
#endregion
#region Database
/// <summary>
/// Instantiate new MyDataContext
/// </summary>
/// <returns></returns>
private MyDataContext DatabaseConnection()
{
// instantiate database connection
var database = new MyDataContext(Constants.DatabaseConnectionString);
// set transaction isolation level to read committed
database.ExecuteQuery(typeof(string), "SET TRANSACTION ISOLATION LEVEL READ COMMITTED");
return database;
}
#endregion
#region Generic Data Access with Caching
/// <summary>
/// Calls .Exists on list using predicate and if it evaluates to false, adds records to list using predicate.
/// </summary>
/// <typeparam name="TEntity">Database table</typeparam>
/// <param name="list">List to add records to</param>
/// <param name="predicate">The delegate that defines the conditions of the elements to search for.</param>
public void AddRecordsIfNeeded<TEntity>(ref List<TEntity> list, Predicate<TEntity> predicate) where TEntity : class
{
// check if items are in list based on predicate and if not, add them to the list
if (!list.Exists(predicate))
{
list.AddRange(LoadData<TEntity>(predicate.Invoke));
}
}
/// <summary>
/// Retrieve all records of type TEntity from the cache if available with filter Active = true (if Active property exists).<br/>
/// If data is not available in cache go directly to the database.<br/>
/// In addition, sets up query notification and refreshes cache on database change.
/// </summary>
/// <typeparam name="TEntity">Database table to retrieve.</typeparam>
/// <returns>returns List of TEntity</returns>
public List<TEntity> LoadData<TEntity>() where TEntity : class
{
// default filter is no filter
Func<TEntity, bool> predicate = delegate { return true; };
// check for active property
var activeProperty = typeof (TEntity).GetProperty("Active");
// if active property exists and is a boolean, set predicate to filter Active == true
if (activeProperty != null)
if (activeProperty.PropertyType.FullName == typeof (bool).FullName)
predicate = (x => (bool) activeProperty.GetValue(x, null));
// load data & return
return LoadData(predicate);
}
/// <summary>
/// Retrieve all records of type TEntity from the cache if available.<br/>
/// If data is not available in cache go directly to the database.<br/>
/// In addition, sets up query notification and refreshes cache on database change.
/// </summary>
/// <typeparam name="TEntity">Database table to retrieve.</typeparam>
/// <param name="predicate">A function to test each element for a condition.</param>
/// <returns>returns List of TEntity</returns>
public List<TEntity> LoadData<TEntity>(Func<TEntity, bool> predicate) where TEntity : class
{
// default is to not refresh cache
return LoadData(predicate, false);
}
/// <summary>
/// Retrieve all records of type TEntity from the cache if available.<br/>
/// If data is not available in cache or refreshCache is set to true go directly to the database.<br/>
/// In addition, sets up query notification and refreshes cache on database change.
/// </summary>
/// <typeparam name="TEntity">Database table to retrieve.</typeparam>
/// <param name="predicate">A function to test each element for a condition.</param>
/// <param name="refreshCache">If true, ignore cache and go directly to the database and update cache.</param>
/// <returns></returns>
public List<TEntity> LoadData<TEntity>(Func<TEntity, bool> predicate, bool refreshCache) where TEntity : class
{
// instantiate database connection
using (var database = DatabaseConnection())
{
// instantiate the cache
var cache = CacheFactory.GetCacheManager(CacheManagerName);
// get cache key name
var cacheKey = typeof(TEntity).Name;
// if the value is in the cache, return it
if (cache.Contains(cacheKey) && !refreshCache)
// get data from cache, filter it and return results
return (cache.GetData(cacheKey) as List<TEntity>).Where(predicate).ToList();
// retrieve the data from the database
var data = from x in database.GetTable<TEntity>()
select x;
// if value is in cache, remove it
if (cache.Contains(cacheKey))
cache.Remove(cacheKey);
// add unfiltered results to cache
cache.Add(cacheKey, data.ToList());
Logger.Write(string.Format("Added {0} to cache {1} with key '{2}'", typeof(TEntity).Name, CacheManagerName, cacheKey));
// set up query notification
SetUpQueryNotification<TEntity>();
// return filtered results
return data.Where(predicate).ToList();
}
}
#endregion
#region Query Notification
public void SetUpQueryNotification<TEntity>() where TEntity : class
{
    // get database connection (and dispose it when done)
    using (var database = DatabaseConnection())
    // set up query notification
    using (var sqlConnection = new SqlConnection(Constants.DatabaseConnectionString))
    {
        // linq query
        var query = from t in database.GetTable<TEntity>()
                    select t;
        var command = database.GetCommand(query);
        // create sql command
        using (var sqlCommand = new SqlCommand(command.CommandText, sqlConnection))
        {
            // get query parameters
            var sqlCmdParameters = command.Parameters;
            // add query parameters to dependency query
            foreach (SqlParameter parameter in sqlCmdParameters)
            {
                sqlCommand.Parameters.Add(new SqlParameter(parameter.ParameterName, parameter.SqlValue));
            }
            // create sql dependency
            var sqlDependency = new SqlDependency(sqlCommand);
            // set up query notification
            sqlDependency.OnChange += sqlDependency_OnChange<TEntity>;
            // open connection to database
            sqlConnection.Open();
            // need to execute query to make query notification work
            sqlCommand.ExecuteNonQuery();
        }
    }
    Logger.Write(string.Format("Query notification set up for {0}", typeof(TEntity).Name));
}
/// <summary>
/// Calls LoadData of type TEntity with refreshCache param set to true.
/// </summary>
/// <typeparam name="TEntity">Database table to refresh.</typeparam>
void RefreshCache<TEntity>() where TEntity : class
{
// refresh cache
LoadData<TEntity>(delegate { return true; }, true);
}
/// <summary>
/// Refreshes data in cache for type TEntity if type is Delete, Insert or Update.<br/>
/// Also re-sets up query notification since query notification only fires once.
/// </summary>
/// <typeparam name="TEntity">Database table</typeparam>
void sqlDependency_OnChange<TEntity>(object sender, SqlNotificationEventArgs e) where TEntity : class
{
var sqlDependency = sender as SqlDependency;
// this should never happen
if (sqlDependency == null)
return;
// query notification only happens once, so remove it, it will be set up again in LoadData
sqlDependency.OnChange -= sqlDependency_OnChange<TEntity>;
// if the data is changed (delete, insert, update), refresh cache & set up query notification
// otherwise, just set up query notification
if (e.Info == SqlNotificationInfo.Delete || e.Info == SqlNotificationInfo.Insert || e.Info == SqlNotificationInfo.Update)
{
// refresh cache & set up query notification
Logger.Write(string.Format("sqlDependency_OnChange (Info: {0}, Source: {1}, Type: {2}). Refreshing cache for {3}", e.Info, e.Source, e.Type, typeof(TEntity).Name));
RefreshCache<TEntity>();
}
else
{
// set up query notification
SetUpQueryNotification<TEntity>();
}
}
#endregion
}
Personally, I'd suggest using the Repository pattern, where you have an IRepository<T> interface.
Then, in real terms you could use an IoC container to provide your app with a CacheRepository for some static types that uses the caching system in the first instance and either automatically delegates on to a LinqToSqlRepository where data isn't found, or returns null and allows you to deal with populating the cache yourself.
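A rough sketch of that decorator idea, with all names hypothetical. The cache-backed repository answers from its own store when it can and delegates to an inner (e.g. LINQ to SQL) repository on a miss:

```csharp
public interface IRepository<T> where T : class
{
    List<T> GetAll();
}

// Caches results per entity type and falls back to an inner repository on a miss.
public class CacheRepository<T> : IRepository<T> where T : class
{
    private readonly IRepository<T> inner;
    private readonly Dictionary<string, List<T>> cache = new Dictionary<string, List<T>>();

    public CacheRepository(IRepository<T> inner)
    {
        this.inner = inner;
    }

    public List<T> GetAll()
    {
        var key = typeof(T).Name;
        List<T> items;
        if (!cache.TryGetValue(key, out items))
        {
            // Cache miss: delegate to the underlying repository and remember the result.
            items = inner.GetAll();
            cache[key] = items;
        }
        return items;
    }
}
```

An IoC container would then hand out CacheRepository<ProductType> wired around the real repository for the static lookup types, so calling code never knows whether it hit the cache.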
If the data isn't expected to change very much and it's used for the UI such as for drop-downs etc., why not cache the data on client's machine? We did this for an application we built a while back. We cached almost all of our "look-up" type data in files on the client machine and then built a mechanism to invalidate the data when it was modified in the database. This was very fast and worked well for us.
BTW, are you aware that L2S does its own caching?