Unit testing a class that requires file data? - C#

I'm working on expanding our unit test suite, and I've come across a specific class that I'm trying to figure out how to mock. I have a method that accepts a byte[] as a parameter. In a perfect world, this byte array will always be a PDF file that contains a form of some sort. The method then extracts all of the form fields from that PDF and returns them.
How can I mock up logic that is dependent on file data? My only real ideas are to include the PDF in the project and use IO to read the test file, or to attempt to generate a PDF form on the fly and then extract its fields.
Here is the code for the PDF extractor:
public class PdfFormExtractor : IDisposable
{
    private readonly PdfReader _pdfReader;
    private readonly MemoryStream _newPdf;
    private readonly PdfStamper _pdfStamper;

    public PdfFormExtractor(byte[] pdf)
    {
        _pdfReader = new PdfReader(pdf);
        _newPdf = new MemoryStream();
        _pdfStamper = new PdfStamper(_pdfReader, _newPdf);
    }

    public FormDto ExtractForm()
    {
        var pdfFormFields = _pdfStamper.AcroFields;

        var form = new FormDto()
        {
            Fields = pdfFormFields.Fields.Select(n => new FormFieldDto
            {
                Name = n.Key,
                Label = n.Key
            }).ToList()
        };

        return form;
    }

    #region IDisposable Support
    // disposable implementation
    #endregion
}

Use Resource files.
In Visual Studio, just create a resource file in your test project to contain all the files you want to use in your tests.
Open the resx and you will see the usual list of strings. But you're not limited to strings: you can select "Files" in the top-left dropdown and then drag and drop files INTO the resx file.
When you do, pay attention to the added file's properties: you can choose to interpret the file as binary (a byte[] is exposed, as in your use case) or as text (with an encoding, which exposes a string).
Then, in your test you can just reference the strongly typed resource class and get a strongly typed byte[] with the contents of your test file.
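For the PDF case in the question, a test might look something like this (a minimal sketch assuming an MSTest project with a resource file named TestResources that contains a binary file entry called SamplePdfForm; those names are illustrative):
[TestMethod]
public void ExtractForm_ReturnsFieldsFromEmbeddedPdf()
{
    // TestResources.SamplePdfForm is the hypothetical byte[] exposed by the resx entry
    byte[] pdfBytes = TestResources.SamplePdfForm;

    using (var extractor = new PdfFormExtractor(pdfBytes))
    {
        FormDto form = extractor.ExtractForm();

        Assert.IsNotNull(form);
        Assert.IsTrue(form.Fields.Count > 0);
    }
}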
This strategy has a lot of applications when testing complex scenarios, especially when paired with a smart enough serializer/deserializer (like Json.NET).
You can serialize any complex data structure as JSON, then in your tests reference it as a string (exposed directly by the resource file's class), deserialize it with a simple JsonConvert.DeserializeObject, and run your test on the business logic directly.
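For instance, with Json.NET and a text file entry in the same resource file (the resource name below is illustrative):
// "ExpectedFormJson" is a hypothetical text entry in TestResources.resx
FormDto expected = JsonConvert.DeserializeObject<FormDto>(TestResources.ExpectedFormJson);
// ...compare 'expected' against the FormDto produced by the code under test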

You can use Microsoft Fakes to generate a fakes assembly for the *.dll you depend on. With Fakes you can redirect the behaviour of almost any property or method.
As an example, I faked the SqlConnection class, which is normally very hard to mock:
Right-click the assembly you want to fake under References (in my case, System.Data).
Choose "Add Fakes Assembly".
This generates shims and stubs for that assembly.
Wrap the test body in using (ShimsContext.Create()); everything inside that scope will behave the way you set it up.
public void ExtractFormTest()
{
    using (ShimsContext.Create())
    {
        #region FakeIt
        System.Data.SqlClient.Fakes.ShimSqlConnection.AllInstances.Open = (SqlConnection sqlConnection) =>
        {
            Console.WriteLine("Opened a session with Virtual Sql Server");
        };

        System.Data.SqlClient.Fakes.ShimSqlConnection.AllInstances.Close = (SqlConnection sqlConnection) =>
        {
            Console.WriteLine("Closed the session with Virtual Sql Server");
        };

        System.Data.SqlClient.Fakes.ShimSqlCommand.AllInstances.ExecuteNonQuery = (SqlCommand sqlCommand) =>
        {
            if (sqlCommand.CommandText.ToLower().Contains("truncate table"))
            {
                Console.WriteLine("Ran " + sqlCommand.CommandText + " at Virtual Sql Server");
                return 1;
            }
            return 0;
        };

        System.Data.SqlClient.Fakes.ShimSqlBulkCopy.AllInstances.WriteToServerDataTable = (SqlBulkCopy sqlBulkCopy, DataTable dataTable) =>
        {
            Console.WriteLine("Written #" + dataTable.Rows.Count + " records to Virtual Sql Server");
        };

        System.Data.Common.Fakes.ShimDbDataAdapter.AllInstances.FillDataSet = (DbDataAdapter dbDataAdapter, DataSet dataSet) =>
        {
            // Build a DataTable from a local flat file instead of hitting SQL Server
            var dataTable = DataTableHelper.LoadFlatfileIntoDataTable(
                Path.Combine(dailyEmailFlatfilesDirectory, "Flatfile.txt"),
                flatfileDelimiter, flatfileDataTableFields, regexPatternMdmValidEmail, traceWriter);

            if (dbDataAdapter.SelectCommand.CommandText.Equals(mdmSqlStorProcForSpFlatfileData))
            {
                while (dataTable.Rows.Count > 1000)
                    dataTable.Rows.RemoveAt(0);
            }
            else if (dbDataAdapter.SelectCommand.CommandText.Equals(mdmSqlStorProcForStFlatfileData))
            {
                while (dataTable.Rows.Count > 72)
                    dataTable.Rows.RemoveAt(0);
            }

            // Fill the caller's DataSet with the fake table
            dataSet.Tables.Add(dataTable);
            return 1;
        };
        #endregion

        #region Act
        FormDto formDto = ExtractForm();
        #endregion

        #region Assert
        // Up to the scope of your method and acceptance criteria
        #endregion
    }
}
Hope this helps!

Related

How do I refactor this C# SQL query to do a unit test?

I have this code that queries a database. I want to put the actual database code into a separate class so I can reuse it in other places. That would leave just the read of the PassResult value, so I can write a unit test for the code without the SQL code running. I am having trouble finding references on how to make this kind of code unit testable. Could someone help out?
using System;
using System.Data;
using System.Data.SqlClient;

namespace CS_UI_Final_Inspection
{
    public class CalibrationTestCheck
    {
        // declare the variables
        private bool _calibrationTestPass = false;
        private string _connectionString = string.Empty;

        public bool CheckCalibrationTestResults(string serialNumber, IDeviceInfo deviceInfo, string mapID)
        {
            // get database location
            DhrLocationPull dhrLocation = new DhrLocationPull();
            _connectionString = dhrLocation.PullDhrLocation();

            // build the query
            SqlConnection calibrationCheckConnection = new SqlConnection(_connectionString);
            SqlCommand calibrationCheckCommand = new SqlCommand(
                "[MfgFloor].[GetLatestTestResultsForDeviceByTestType]",
                calibrationCheckConnection);

            // build the stored proc
            calibrationCheckCommand.CommandType = CommandType.StoredProcedure;
            calibrationCheckCommand.Parameters.Add(new SqlParameter("@SerialNumber", serialNumber));
            calibrationCheckCommand.Parameters.Add(new SqlParameter("@DeviceTypeID", mapID));
            calibrationCheckCommand.Parameters.Add(new SqlParameter("@TestDataMapTypeID", "C"));
            calibrationCheckCommand.Connection.Open();

            SqlDataReader calibrationCheckReader = calibrationCheckCommand.ExecuteReader();

            // is there data?
            if (calibrationCheckReader.HasRows)
            {
                // read the data
                calibrationCheckReader.Read();
                try
                {
                    _calibrationTestPass = (bool)calibrationCheckReader["PassResult"];
                }
                catch (InvalidOperationException)
                {
                    // means last element was not filled in
                }
                finally
                {
                    // close refs
                    calibrationCheckReader.Close();
                    calibrationCheckCommand.Connection.Close();
                    calibrationCheckConnection.Close();
                    calibrationCheckReader.Dispose();
                    calibrationCheckCommand.Dispose();
                    calibrationCheckConnection.Dispose();
                }
            }

            return _calibrationTestPass;
        }
    }
}
Create an interface for the database access and implement it.
Move all code to be tested to use the interface (exposing any methods/properties required through the interface).
Have the constructor or method being tested take the interface as a parameter (see the sketch below).
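A minimal sketch of what that might look like for the calibration check (the interface and member names are illustrative, not from the original code):
public interface ICalibrationResultRepository
{
    bool GetLatestPassResult(string serialNumber, string mapId);
}

// The class under test now depends only on the interface, so a test can
// supply a hand-rolled fake or a mock instead of a real SQL connection.
public class CalibrationTestCheck
{
    private readonly ICalibrationResultRepository _repository;

    public CalibrationTestCheck(ICalibrationResultRepository repository)
    {
        _repository = repository;
    }

    public bool CheckCalibrationTestResults(string serialNumber, string mapId)
    {
        return _repository.GetLatestPassResult(serialNumber, mapId);
    }
}
A SqlCalibrationResultRepository implementing the interface would then hold all of the ADO.NET code from the original method.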
Roy Osherove is a good resource on this; he wrote a great book called "The Art of Unit Testing". His website can be found here: http://osherove.com/

Keeping track of user customizations - C#

Good evening. I have an application that has a drop-down list; this drop-down list is meant to be a list of commonly visited websites which can be altered by the user.
My question is: how can I store these values in such a manner that would allow the users to change them?
For example, I as the user decide I want Google to be my first website and YouTube to be my second.
I have considered making a "settings" file, but is it practical to put 20+ websites into a settings file and then load them at startup? Or a local database, but that may be overkill for this simple need.
Please point me in the right direction.
Given that you have already excluded a database (probably for the right reasons, as it may be overkill for a small app), I'd recommend writing the data to a local file, but not as plain text.
Preferably, serialize it as either XML or JSON.
This approach has at least two benefits:
More complex data can be stored in the future. For example, while the order can be implicit, it can also be made explicit, or you can store additional data such as the last time a URL was used.
Structured data is easier to validate against random corruption. If it were a plain text file, it would be much harder to ensure its integrity.
The best option is to use the power of a serializer and deserializer in C#, which lets you work with the file in an object-oriented way, while not having to worry about the mechanics of reading and writing files.
Here is the sample code I quickly wrote for you.
using System;
using System.IO;
using System.Collections;
using System.Xml.Serialization;

namespace ConsoleApplication3
{
    // Simple data item; XmlSerializer needs public members and a parameterless constructor
    public class Url
    {
        public string Address { get; set; }
        public int Order { get; set; }
    }

    public class UrlSerializer
    {
        private static void Write(string filename)
        {
            URLCollection urls = new URLCollection();
            urls.Add(new Url { Address = "http://www.google.com", Order = 1 });
            urls.Add(new Url { Address = "http://www.yahoo.com", Order = 2 });

            XmlSerializer x = new XmlSerializer(typeof(URLCollection));
            using (TextWriter writer = new StreamWriter(filename))
            {
                x.Serialize(writer, urls);
            }
        }

        private static URLCollection Read(string filename)
        {
            var x = new XmlSerializer(typeof(URLCollection));
            using (TextReader reader = new StreamReader(filename))
            {
                var urls = (URLCollection)x.Deserialize(reader);
                return urls;
            }
        }
    }

    public class URLCollection : ICollection
    {
        public string CollectionName;
        private ArrayList _urls = new ArrayList();

        public Url this[int index]
        {
            get { return (Url)_urls[index]; }
        }

        public void CopyTo(Array a, int index)
        {
            _urls.CopyTo(a, index);
        }

        public int Count
        {
            get { return _urls.Count; }
        }

        public object SyncRoot
        {
            get { return this; }
        }

        public bool IsSynchronized
        {
            get { return false; }
        }

        public IEnumerator GetEnumerator()
        {
            return _urls.GetEnumerator();
        }

        public void Add(Url url)
        {
            if (url == null) throw new ArgumentNullException("url");
            _urls.Add(url);
        }
    }
}
You clearly need some sort of persistence, for which there are a few options:
Local database - As you have noted, total overkill. You are just storing a list, not relational data.
Simple text file - Pretty easy, but maybe not the most "professional" way. Using XML serialization to this file would allow for complex data types.
Settings file - Are these preferences really settings? If they are, then this makes sense.
The Registry - This is great for settings you don't want your users to ever manually mess with. Probably not the best option for a significant amount of data, though.
I would go with number 2. It doesn't sound like you need any fancy encoding or security, so just store everything in a text file. *.ini files tend to meet this description, but you can use any extension you want. A settings file doesn't seem like the right place for this scenario.
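A quick sketch of option 2, assuming one URL per line and an example file location (both are just illustrative choices):
// Where to keep the list; any writable path works
string path = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
    "MyApp", "favorites.txt");
Directory.CreateDirectory(Path.GetDirectoryName(path));

// Save the list in the order the user chose
File.WriteAllLines(path, new[] { "http://www.google.com", "http://www.youtube.com" });

// Load it back at startup
string[] sites = File.Exists(path) ? File.ReadAllLines(path) : new string[0];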

Files, strings and save

I've been having trouble trying to figure this out. When I think I have it, I get told no. Here is a picture of it.
I am working on the save button. After the user adds the first name, last name and job title, they can save it. If a user loads the file and it comes up in the listbox, that person should be able to click on the name, hit the edit button, and edit it. I have code, but I was informed it looked wacky and that the string should have the first name, last name and job title.
This is getting me really confused as I am learning C#. I know how to use SaveFileDialog, but I am not allowed to use it here. Here is what I am supposed to be doing:
When the user clicks the "Save" button, write the selected record to
the file specified in txtFilePath (absolute path, not relative) without
truncating the values currently inside.
I am still working on my code since I was told it would be better if the file writes records in groups of three strings. But this is the code I have right now.
private void Save_Click(object sender, EventArgs e)
{
    string path = txtFilePath.Text;

    if (File.Exists(path))
    {
        using (StreamWriter sw = File.CreateText(path))
        {
            foreach (Employee employee in employeeList.Items)
                sw.WriteLine(employee);
        }
    }
    else
        try
        {
            StreamWriter sw = File.AppendText(path);
            foreach (var item in employeeList.Items)
                sw.WriteLine(item.ToString());
        }
        catch
        {
            MessageBox.Show("Please enter something in");
        }
}
Now I can not use a save or open file dialog. The user should be able to open any file on the C, E, or F drive, or wherever it is. I was also told it should be obj. Also, the program should handle any exceptions that arise.
I know this might be a noobie question, but my mind is stuck as I am still learning how to code with C#. I have been searching and reading, but I am not finding something to help me understand how to put all this into one piece of code. If someone might be able to help or even point to a better web site, I would appreciate it.
There are many, many ways to store data in a file. This code demonstrates 4 methods that are pretty easy to use. But the point is that you should probably be splitting up your data into separate pieces rather than storing them as one long string.
public class MyPublicData
{
    public int id;
    public string value;
}

[Serializable()]
class MyEncapsulatedData
{
    private DateTime created;
    private int length;

    public MyEncapsulatedData(int length)
    {
        created = DateTime.Now;
        this.length = length;
    }

    public DateTime ExpirationDate
    {
        get { return created.AddDays(length); }
    }
}

class Program
{
    static void Main(string[] args)
    {
        string testpath = System.IO.Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.Desktop), "TestFile");

        // Method 1: Automatic XML serialization
        // Requires that the type being serialized and all its serializable members are public
        System.Xml.Serialization.XmlSerializer xs =
            new System.Xml.Serialization.XmlSerializer(typeof(MyPublicData));
        MyPublicData o1 = new MyPublicData() { id = 3141, value = "a test object" };
        MyEncapsulatedData o2 = new MyEncapsulatedData(7);
        using (System.IO.StreamWriter w = new System.IO.StreamWriter(testpath + ".xml"))
        {
            xs.Serialize(w, o1);
        }

        // Method 2: Manual XML serialization
        System.Xml.XmlWriter xw = System.Xml.XmlWriter.Create(testpath + "1.xml");
        xw.WriteStartElement("MyPublicData");
        xw.WriteStartAttribute("id");
        xw.WriteValue(o1.id);
        xw.WriteEndAttribute();
        xw.WriteAttributeString("value", o1.value);
        xw.WriteEndElement();
        xw.Close();

        // Method 3: Automatic binary serialization
        // Requires that the type being serialized be marked with the "Serializable" attribute
        using (System.IO.FileStream f = new System.IO.FileStream(testpath + ".bin", System.IO.FileMode.Create))
        {
            System.Runtime.Serialization.Formatters.Binary.BinaryFormatter bf =
                new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
            bf.Serialize(f, o2);
        }

        // Demonstrate how automatic binary deserialization works
        // and prove that it handles objects with private members
        using (System.IO.FileStream f = new System.IO.FileStream(testpath + ".bin", System.IO.FileMode.Open))
        {
            System.Runtime.Serialization.Formatters.Binary.BinaryFormatter bf =
                new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
            MyEncapsulatedData o3 = (MyEncapsulatedData)bf.Deserialize(f);
            Console.WriteLine(o3.ExpirationDate.ToString());
        }

        // Method 4: Manual binary serialization
        using (System.IO.FileStream f = new System.IO.FileStream(testpath + "1.bin", System.IO.FileMode.Create))
        {
            using (System.IO.BinaryWriter w = new System.IO.BinaryWriter(f))
            {
                w.Write(o1.id);
                w.Write(o1.value);
            }
        }

        // Demonstrate how manual binary deserialization works
        using (System.IO.FileStream f = new System.IO.FileStream(testpath + "1.bin", System.IO.FileMode.Open))
        {
            using (System.IO.BinaryReader r = new System.IO.BinaryReader(f))
            {
                MyPublicData o4 = new MyPublicData() { id = r.ReadInt32(), value = r.ReadString() };
                Console.WriteLine("{0}: {1}", o4.id, o4.value);
            }
        }
    }
}
As you are writing the employee objects with WriteLine, the underlying ToString() is being invoked. What you have to do first is customize that ToString() method to fit your needs, like this:
public class Employee
{
    public string FirstName;
    public string LastName;
    public string JobTitle;

    // all other declarations here
    // ...

    // Override ToString()
    public override string ToString()
    {
        return string.Format("'{0}', '{1}', '{2}'", this.FirstName, this.LastName, this.JobTitle);
    }
}
This way, your writing code stays clean and readable.
By the way, there is no reverse equivalent of ToString(), but to follow .NET conventions, I suggest you implement an Employee method like:
public static Employee Parse(string s)
{
    // your code here, return a new Employee object
}
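For completeness, a rough sketch of a Parse that reverses the ToString() format above (it assumes the field values themselves contain no commas or quotes):
public static Employee Parse(string line)
{
    // Expects the format produced by ToString(): 'First', 'Last', 'Title'
    string[] parts = line.Split(',');

    return new Employee
    {
        FirstName = parts[0].Trim().Trim('\''),
        LastName = parts[1].Trim().Trim('\''),
        JobTitle = parts[2].Trim().Trim('\'')
    };
}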
You have to determine a way of saving that suits your needs. A simple way to store this info could be CSV:
"Firstname1","Lastname 1", "Jobtitle1"
" Firstname2", "Lastname2","Jobtitle2 "
As you can see, the data won't be truncated, since the quote character (") is used to mark the string boundaries.
As shown in this question, using CsvHelper might be an option. But given this is homework and the constraints therein, you might have to create this method yourself. You could put this in Employee (or make it override ToString()) that does something along those lines:
public String GetAsCSV(String firstName, String lastName, String jobTitle)
{
    return String.Format("\"{0}\",\"{1}\",\"{2}\"", firstName, lastName, jobTitle);
}
I'll leave the way how to read the data back in as an exercise to you. ;-)
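That said, reading the records back in might look roughly like this (a sketch that assumes one record per line in the quoted format above, with no embedded quotes in the values; employeeList and txtFilePath come from the question's form):
foreach (string line in File.ReadAllLines(txtFilePath.Text))
{
    // Split on the "," sequence that separates the quoted fields
    string[] parts = line.Split(new[] { "\",\"" }, StringSplitOptions.None);
    if (parts.Length != 3)
        continue;

    employeeList.Items.Add(new Employee
    {
        FirstName = parts[0].TrimStart('"'),
        LastName = parts[1],
        JobTitle = parts[2].TrimEnd('"')
    });
}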

Sterling database created from Windows Console app won't read inside WP7 app

I have created a Sterling database inside a standard Windows console app, then added that database file as a resource inside a WP7 app. I find that the database reading code causes an ArgumentNullException when accessing the LazyValue.Value member.
Here's the database creation code, excluding the model 'Venue'.
public class TestDatabaseInstance : BaseDatabaseInstance
{
    public override string Name
    {
        get
        {
            return "TestDatabase";
        }
    }

    protected override List<ITableDefinition> RegisterTables()
    {
        return new List<ITableDefinition>
        {
            CreateTableDefinition<Venue, int>(x => x.VenueId)
        };
    }
}

class Program
{
    static void Main(string[] args)
    {
        //CreateData();
        LoadData();
    }

    private static void CreateData()
    {
        using (SterlingEngine engine = new SterlingEngine())
        {
            engine.Activate();
            var databaseInstance = engine.SterlingDatabase.RegisterDatabase<TestDatabaseInstance>();

            for (int i = 100; i < 1000; i++)
            {
                var venue = new Venue();
                venue.Name = "test";
                venue.AddressLine1 = "this is an address";
                venue.VenueId = i;
                var key = databaseInstance.Save<Venue>(venue);
            }

            FileStream fs = File.Open("c:\\myvenuedata.dat", FileMode.CreateNew, FileAccess.ReadWrite, FileShare.Write);
            using (var binaryWriter = new BinaryWriter(fs))
            {
                engine.SterlingDatabase.Backup<TestDatabaseInstance>(binaryWriter);
            }

            databaseInstance = null;
        }
    }

    private static void LoadData()
    {
        SterlingEngine engine = new SterlingEngine();
        var fs = File.Open("c:\\myvenuedata.dat", FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite);

        engine.Activate();
        var databaseInstance = engine.SterlingDatabase.RegisterDatabase<TestDatabaseInstance>();
        engine.SterlingDatabase.Restore<TestDatabaseInstance>(new BinaryReader(fs));
        engine.Dispose();

        engine = new SterlingEngine();
        engine.Activate();

        // THIS LINE WORKS FINE IN MY CONSOLE APP
        databaseInstance.Query<Venue, int>().ForEach(x => Console.WriteLine(x.LazyValue.Value.Name));
    }
}
Then if I put the equivalent code inside the WP7 app:
SterlingEngine engine = new SterlingEngine();
StreamResourceInfo sri = Application.GetResourceStream(new Uri("/SterlingDBReader;component/myvenuedata.dat", UriKind.RelativeOrAbsolute));
var fs = sri.Stream;
engine.Activate();
var databaseInstance = engine.SterlingDatabase.RegisterDatabase<TestDatabaseInstance>();
engine.SterlingDatabase.Restore<TestDatabaseInstance>(new BinaryReader(fs));
engine.Dispose();
engine = new SterlingEngine();
engine.Activate();
// **Errors with ArgumentNullException here because x.LazyValue.Value IS NULL.**
databaseInstance.Query<Venue, int>().ForEach(x => Debug.WriteLine(x.LazyValue.Value.Name));
The only differences are:
the parent namespace of the reader WP7 app is different from the console app's
it's a WP7 app reading a data file that was created by a console app
it loads the data file as an application resource rather than from a file on disk
Any ideas?
thanks
Kris
Currently Sterling stores types using the fully qualified assembly type name. That means the referenced classes should be in the exact same project, preferably a shared Silverlight 3 DLL. If you are just linking the files and recompiling, it won't work because of this. The goal is to change this in version 2.0 to improve the type checking, but that's the case for now.
As far as I know, the type is an important part of the storage mechanism for Sterling, so you'd need to make sure that the namespaces and type names of the types stored in the database match exactly.
I have no idea if the scenario you're suggesting is a supported one, though the restore approach sounds like it should work. I'd recommend asking on CodePlex.
Given the exception you're getting, it sounds like the type names might not match properly, hence LazyValue.Value being null.
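One quick way to check is to print the assembly-qualified name from both projects and compare them, since that is what Sterling keys on:
// Run this in both the console app and the WP7 app; the two strings must match
System.Diagnostics.Debug.WriteLine(typeof(Venue).AssemblyQualifiedName);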

Store the cache data locally

I'm developing a C# WinForms application; it is a client that connects to a web service to get data. The data returned by the web service is a DataTable, and the client displays it in a DataGridView.
My problem is that the client takes a long time to get all the data from the server (the web service is not local to the client), so I have to use a thread to get the data. This is my model:
Client creates a thread to get data -> thread completes and sends an event to the client -> client displays the data in the DataGridView on a form.
However, when the user closes the form and opens it again later, the client must get the data again. This makes the client slow.
So, I am thinking about cached data:
Client <---get/add/edit/delete---> Cached Data ---get/add/edit/delete---> Server (web service)
Please give me some suggestions.
For example: should the cached data live in another application on the same host as the client, or run inside the client itself?
Please give me some techniques to implement this solution.
If you have any examples, please share them.
Thanks.
UPDATE: Hello everyone, maybe you think my problem is too broad. I only want to cache data for the client's lifetime. I think the cached data should be stored in memory, and when the client wants to get data, it will check the cache first.
If you're using C# 2.0 and you're prepared to ship System.Web as a dependency, then you can use the ASP.NET cache:
using System.Web;
using System.Web.Caching;

Cache webCache;
object cachedObject;
object webServiceResult;

// HttpRuntime.Cache works outside of a web application
// (HttpContext.Current is null in a WinForms client)
webCache = HttpRuntime.Cache;

// See if there's a cached item already
cachedObject = webCache.Get("MyCacheItem");

if (cachedObject == null)
{
    // If there's nothing in the cache, call the web service to get a new item
    webServiceResult = new Object();

    // Cache the web service result for five minutes
    webCache.Add("MyCacheItem", webServiceResult, null, DateTime.Now.AddMinutes(5),
        Cache.NoSlidingExpiration, System.Web.Caching.CacheItemPriority.Normal, null);
}
else
{
    // Item already in the cache - cast it to the right type
    webServiceResult = (object)cachedObject;
}
If you're not prepared to ship System.Web, then you might want to take a look at the Enterprise Library Caching block.
If you're on .NET 4.0, however, caching has been pushed into the System.Runtime.Caching namespace. To use this, you'll need to add a reference to System.Runtime.Caching, and then your code will look something like this:
using System.Runtime.Caching;

MemoryCache cache;
object cachedObject;
object webServiceResult;

cache = new MemoryCache("StackOverflow");
cachedObject = cache.Get("MyCacheItem");

if (cachedObject == null)
{
    // Call the web service
    webServiceResult = new Object();
    cache.Add("MyCacheItem", webServiceResult, DateTime.Now.AddMinutes(5));
}
else
{
    webServiceResult = (object)cachedObject;
}
All these caches run in-process to the client. Because your data is coming from a web service, as Adam says, you're going to have difficulty determining the freshness of the data - you'll have to make a judgement call on how often the data changes and how long you cache the data for.
Do you have the ability to make changes or additions to the web service?
If you can, Sync Services may be an option for you. You can define which tables are synchronised, and all the sync plumbing is managed for you.
Check out
http://msdn.microsoft.com/en-us/sync/default.aspx
and shout if you need more information.
You might try the Enterprise Library's Caching Application Block. It's easy to use, stores in memory and, if you ever need to later, it supports adding a backup location for persisting beyond the life of the application (such as to a database, isolated storage, file, etc.) and even encryption too.
Use EntLib 3.1 if you're stuck with .NET 2.0. There's not much new (for caching, at least) in the newer EntLibs aside from better customization support.
Identify which objects you would like to serialize, and cache them to isolated storage. Specify the level of data isolation you would like (application level, user level, etc.).
Example:
You could create a generic serializer; a very basic sample would look like this:
public class SampleDataSerializer
{
    public static void Deserialize<T>(out T data, Stream stm)
    {
        var xs = new XmlSerializer(typeof(T));
        data = (T)xs.Deserialize(stm);
    }

    public static void Serialize<T>(T data, Stream stm)
    {
        try
        {
            var xs = new XmlSerializer(typeof(T));
            xs.Serialize(stm, data);
        }
        catch (Exception)
        {
            throw;
        }
    }
}
Note that you probably should put in some overloads to the Serialize and Deserialize methods to accommodate readers, or any other types you are actually using in your app (e.g., XmlDocuments, etc.).
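For example, a reader-based overload might look like this (just an illustrative addition, not part of the original sample):
// Deserialize from a TextReader instead of a Stream
public static void Deserialize<T>(out T data, TextReader reader)
{
    var xs = new XmlSerializer(typeof(T));
    data = (T)xs.Deserialize(reader);
}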
The operation to save to IsolatedStorage can be handled by a utility class (example below):
public class SampleIsolatedStorageManager : IDisposable
{
    private string filename;
    private string directoryname;
    IsolatedStorageFile isf;

    public SampleIsolatedStorageManager()
    {
        filename = string.Empty;
        directoryname = string.Empty;

        // create an ISF scoped to domain user...
        isf = IsolatedStorageFile.GetStore(IsolatedStorageScope.User |
            IsolatedStorageScope.Assembly | IsolatedStorageScope.Domain,
            typeof(System.Security.Policy.Url), typeof(System.Security.Policy.Url));
    }

    public void Save<T>(T parm)
    {
        using (IsolatedStorageFileStream stm = GetStreamByStoredType<T>(FileMode.Create))
        {
            SampleDataSerializer.Serialize<T>(parm, stm);
        }
    }

    public T Restore<T>() where T : new()
    {
        try
        {
            if (GetFileNameByType<T>().Length > 0)
            {
                T result = new T();

                using (IsolatedStorageFileStream stm = GetStreamByStoredType<T>(FileMode.Open))
                {
                    SampleDataSerializer.Deserialize<T>(out result, stm);
                }

                return result;
            }
            else
            {
                return default(T);
            }
        }
        catch
        {
            try
            {
                Clear<T>();
            }
            catch
            {
            }

            return default(T);
        }
    }

    public void Clear<T>()
    {
        if (isf.GetFileNames(GetFileNameByType<T>()).Length > 0)
        {
            isf.DeleteFile(GetFileNameByType<T>());
        }
    }

    private string GetFileNameByType<T>()
    {
        return typeof(T).Name + ".cache";
    }

    private IsolatedStorageFileStream GetStreamByStoredType<T>(FileMode mode)
    {
        var stm = new IsolatedStorageFileStream(GetFileNameByType<T>(), mode, isf);
        return stm;
    }

    #region IDisposable Members

    public void Dispose()
    {
        isf.Close();
    }

    #endregion
}
Finally, remember to add the following using clauses:
using System.IO;
using System.IO.IsolatedStorage;
using System.Xml.Serialization;
The actual code to use the classes above could look like this:
var myClass = new MyClass();
myClass.name = "something";

using (var mgr = new SampleIsolatedStorageManager())
{
    mgr.Save<MyClass>(myClass);
}
This will save the instance you specify to be saved to the isolated storage. To retrieve the instance, simply call:
using (var mgr = new SampleIsolatedStorageManager())
{
    mgr.Restore<MyClass>();
}
Note: the sample I've provided only supports one serialized instance per type. I'm not sure if you need more than that. Make whatever modifications you need to support further functionalities.
HTH!
You can serialise the DataTable to file:
http://forums.asp.net/t/1441971.aspx
Your only concern then is deciding when the cache has gone stale. Perhaps timestamp the file?
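For example, something along these lines could decide when to refresh (cachePath and the 30-minute cutoff are illustrative choices):
TimeSpan maxAge = TimeSpan.FromMinutes(30);   // tune to how often the data changes
bool isStale = !File.Exists(cachePath) ||
               DateTime.UtcNow - File.GetLastWriteTimeUtc(cachePath) > maxAge;

if (isStale)
{
    // re-fetch from the web service and overwrite the cached file
}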
In our implementation every row in the database has a last-updated timestamp. Every time our client application accesses a table we select the latest last-updated timestamp from the cache and send that value to the server. The server responds with all the rows that have newer timestamps.
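A sketch of that pattern on the client side, using the DataTable LINQ extensions (the service method and column name are hypothetical):
// Find the newest timestamp we already have in the cached DataTable
DateTime lastUpdated = cachedTable.Rows.Count > 0
    ? cachedTable.AsEnumerable().Max(r => r.Field<DateTime>("LastUpdated"))
    : DateTime.MinValue;

// Ask the server only for rows changed since then, and merge them into the cache
DataTable changes = service.GetRowsUpdatedSince(lastUpdated);   // hypothetical web service call
cachedTable.Merge(changes);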
