Good evening. I have an application with a drop-down list of commonly visited websites, which the user should be able to alter.
My question is: how can I store these values in such a manner that the users can change them?
Example: I, as the user, decide I want Google to be my first website and YouTube to be my second.
I have considered a "settings" file, but is it practical to put 20+ websites into a settings file and load them at startup? A local database seems like overkill for such a simple need.
Please point me in the right direction.
Given that you have already excluded a database (probably for the right reasons; it may be overkill for a small app), I'd recommend writing the data to a local file, but not as plain text: serialize it as XML or JSON instead.
This approach has at least two benefits:
More complex data can be stored in the future. For example, while the order can be implicit, it can be made explicit, and additional data such as the last time a URL was used can be added.
Structured data is easier to validate against random corruption. With a plain text file, it is much harder to ensure integrity.
The best option is to use the power of the Serializer and Deserializer in C#, which lets you work with the file in an object-oriented way without worrying about the mechanics of file storage.
Here is the sample code I quickly wrote for you.
using System;
using System.Collections;
using System.IO;
using System.Xml.Serialization;

namespace ConsoleApplication3
{
    // A URL entry with an explicit order.
    public class Url
    {
        public string Address { get; set; }
        public int Order { get; set; }
    }

    public class UrlSerializer
    {
        public static void Write(string filename)
        {
            URLCollection urls = new URLCollection();
            urls.Add(new Url { Address = "http://www.google.com", Order = 1 });
            urls.Add(new Url { Address = "http://www.yahoo.com", Order = 2 });

            var x = new XmlSerializer(typeof(URLCollection));
            using (TextWriter writer = new StreamWriter(filename))
            {
                x.Serialize(writer, urls);
            }
        }

        public static URLCollection Read(string filename)
        {
            var x = new XmlSerializer(typeof(URLCollection));
            using (TextReader reader = new StreamReader(filename))
            {
                return (URLCollection)x.Deserialize(reader);
            }
        }
    }

    public class URLCollection : ICollection
    {
        public string CollectionName;
        private ArrayList _urls = new ArrayList();

        // XmlSerializer requires an indexer and an Add method on ICollection implementers.
        public Url this[int index]
        {
            get { return (Url)_urls[index]; }
        }

        public void CopyTo(Array a, int index)
        {
            _urls.CopyTo(a, index);
        }

        public int Count
        {
            get { return _urls.Count; }
        }

        public object SyncRoot
        {
            get { return this; }
        }

        public bool IsSynchronized
        {
            get { return false; }
        }

        public IEnumerator GetEnumerator()
        {
            return _urls.GetEnumerator();
        }

        public void Add(Url url)
        {
            if (url == null) throw new ArgumentNullException("url");
            _urls.Add(url);
        }
    }
}
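With Write and Read exposed, usage is just a couple of lines. A small sketch (the file name is an assumption; any writable path works):

UrlSerializer.Write("urls.xml");                      // persist the list
URLCollection urls = UrlSerializer.Read("urls.xml");  // load it back at startup
Console.WriteLine(urls[0].Address);                   // http://www.google.com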
You clearly need some sort of persistence, for which there are a few options:
Local database
- As you have noted, total overkill. You are just storing a list, not relational data.
Simple text file
- Pretty easy, but maybe not the most "professional" way. Using XML serialization to this file would allow for complex data types.
Settings file
- Are these preferences really settings? If they are, then this makes sense.
The Registry
- This is great for settings you don't want your users to ever manually mess with. Probably not the best option for a significant amount of data, though.
I would go with number 2. It doesn't sound like you need any fancy encoding or security, so just store everything in a text file. *.ini files tend to meet this description, but you can use any extension you want. A settings file doesn't seem like the right place for this scenario.
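For the simplest version of option 2, a sketch along these lines would do (the "sites.txt" name is an assumption; one URL per line, and the file's order is the user's order):

using System.IO;

// Save the list in the user's preferred order.
File.WriteAllLines("sites.txt", new[] { "http://www.google.com", "http://www.youtube.com" });

// Load it back at startup; a missing file just means an empty list.
string[] sites = File.Exists("sites.txt") ? File.ReadAllLines("sites.txt") : new string[0];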
I'm using protobuf-net in a C# application to load and save my program's 'project files'. At save time, the program creates a ProjectData object and adds many different objects to it; see the general principle below.
static ProjectData packProjectData()
{
    ProjectData projectData = new ProjectData();
    projectData.projectName = ProjectHandler.projectName;
    foreach (KeyValuePair<int, Module> kvp in DataHandler.modulesDict)
    {
        projectData.modules.Add(serializeModule(kvp.Value));
    }
    return projectData;
}

[ProtoContract]
public class ProjectData
{
    [ProtoMember(1)]
    public List<SEModule> modules = new List<SEModule>();

    [ProtoMember(2)]
    public string projectName = "";
}
Once this is created, it's zipped and saved to disk. The problem I am having is that when the number of modules gets very big (40,000+), a System.OutOfMemoryException is thrown during the packProjectData function.
I've seen questions like this asked before, but they do not contain a clear answer to the problem. If anyone can give me either a specific solution or a general principle to follow, that would be greatly appreciated.
What sort of size are we talking about here? Most likely this is due to the buffering required for the length prefix - something that v3 will address, but for now, if the file is huge, a pragmatic workaround might be:
[ProtoContract]
public class ProjectData
{
    [ProtoMember(1, DataFormat = DataFormat.Grouped)]
    public List<SEModule> modules = new List<SEModule>();

    [ProtoMember(2)]
    public string projectName = "";
}
This changes the internal encoding format of the SEModule items so that no length prefix is required. The same approach may also be useful for some elements inside SEModule, but I can't see that type to comment.
Note that this changes the data layout, so it should be considered a breaking change.
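Separately, since the question mentions zipping the result before it hits the disk, it may help to serialize straight into the compressed file stream rather than through an intermediate in-memory buffer. A sketch (the file name is an assumption):

using System.IO;
using System.IO.Compression;
using ProtoBuf;

using (var file = File.Create("project.bin"))
using (var gzip = new GZipStream(file, CompressionMode.Compress))
{
    // Stream the object graph directly to disk; only protobuf-net's
    // own working buffers stay in memory.
    Serializer.Serialize(gzip, packProjectData());
}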
This may be a noob question, but I need some help. I have written two simple methods in C#: ReadCsv_IT and GetTranslation. The ReadCsv_IT method reads from a csv file. The GetTranslation method calls the ReadCsv_IT method and returns the translated input (string key).
My problem is that in the future GetTranslation will be called many times, and I obviously don't want to read the .csv file on every request. So I was thinking about caching the data in memory to optimize my program, so that I don't have to read the .csv file on every request. But I am not sure how to do it or what else I could do to optimize. Can anyone please help?
public string ReadCsv_IT(string key)
{
    string newKey = "";
    using (var streamReader = new StreamReader(@"MyResource.csv"))
    {
        CsvReader csv = new CsvReader(streamReader);
        csv.Configuration.Delimiter = ";";
        List<DataRecord> rec = csv.GetRecords<DataRecord>().ToList();
        DataRecord record = rec.FirstOrDefault(a => a.ORIGTITLE == key);
        if (record != null)
        {
            // Does the localization with the help of the .csv file.
        }
    }
    return newKey;
}
Here is the GetTranslation Method:
public string GetTranslation(string key, string culture = null)
{
    string result = "";
    if (culture == null)
    {
        culture = Thread.CurrentThread.CurrentCulture.Name;
    }
    if (culture == "it-IT")
    {
        result = ReadCsv_IT(key);
    }
    return result;
}
Here is also the class DataRecord.
class DataRecord
{
    public string ORIGTITLE { get; set; }
    public string REPLACETITLE { get; set; }
    public string ORIGTOOLTIP { get; set; }
}
Two options IMO:
First: turn your stream into an object. In other words, make the stream a class-level member, so you keep referring to that one stream object instead of reopening the file on every call.
Second: initialize your stream in the scope that calls GetTranslation, and pass it on as an argument to GetTranslation and ReadCsv_IT.
Brecht C and Thom Hubers have already given you good advice. I would like to add one more point, though: using csv files for localization in .NET is not really a good idea. Microsoft recommends using a resource-based approach (this article is a good starting point). It seems to me that you are trying to write code for something that is already built into .NET.
From a translation point of view, csv files are not the best possible format either. First of all, they are not really standardized: many tools have slightly different ways of handling commas, quotes, and line breaks that are part of the translated text. Besides, translators will be tempted to open them in Excel, and (unless handled with caution) Excel will write out translations in whatever encoding it deems best.
If the project you are working on is for learning, feel free to go ahead with it; but if you are developing software that will be used by customers, updated, translated into several target languages, and redeployed, I would recommend reconsidering your internationalization approach.
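For reference, the built-in route looks roughly like this (the names are assumptions: a "Strings.resx" with a culture-specific "Strings.it-IT.resx" beside it, in a project whose root namespace is MyApp):

using System.Globalization;
using System.Resources;

// Looks up "SomeKey" in Strings.it-IT.resx, falling back to Strings.resx.
var rm = new ResourceManager("MyApp.Strings", typeof(Program).Assembly);
string translated = rm.GetString("SomeKey", new CultureInfo("it-IT"));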
@Brecht C is right; use that answer to start. When a variable has to be cached to be used by multiple threads or instances, take a look at InMemoryCache, or at Redis when performance and distribution over several clients become an issue.
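To make the first suggestion concrete, here is a minimal sketch of reading the csv once and serving every later lookup from a dictionary. It assumes CsvHelper as in the question, and that REPLACETITLE holds the translated title:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using CsvHelper;

public static class TranslationCache
{
    // Lazy: the file is parsed once, on first use, and kept in memory
    // for the lifetime of the process.
    private static readonly Lazy<Dictionary<string, string>> Translations =
        new Lazy<Dictionary<string, string>>(LoadCsv);

    private static Dictionary<string, string> LoadCsv()
    {
        using (var streamReader = new StreamReader(@"MyResource.csv"))
        {
            var csv = new CsvReader(streamReader);
            csv.Configuration.Delimiter = ";";
            return csv.GetRecords<DataRecord>()
                      .ToDictionary(r => r.ORIGTITLE, r => r.REPLACETITLE);
        }
    }

    public static string GetTranslation(string key)
    {
        string value;
        // Fall back to the original key when no translation is found.
        return Translations.Value.TryGetValue(key, out value) ? value : key;
    }
}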
I'm coming from a SQL Server background and experimenting with Redis in .NET using ServiceStack. I don't mean for Redis to be a full replacement for SQL Server; I just wanted to get a basic idea of how to use it so I could see where we might make good use of it.
I'm struggling with what I think is a pretty basic issue. We have a list of items that are maintained in a couple of different data stores. For the sake of simplicity, assume the definition of an item is basic: an integer id and a string name. I'm trying to do the following:
Store an item
Retrieve an item if we only know its id
Overwrite an existing item if we only know its id
Show all the items for that specific type
And here's some of the code I've put together:
public class DocumentRepositoryRedis
{
    private static string DOCUMENT_ID_KEY_BASE = "document::id::";

    public IQueryable<Document> GetAllDocuments()
    {
        IEnumerable<Document> documentsFromRedis;
        using (var documents = new RedisClient("localhost").As<Document>())
        {
            documentsFromRedis = documents.GetAll();
        }
        return documentsFromRedis.AsQueryable();
    }

    public Document GetDocument(int id)
    {
        Document document = null;
        using (var redisDocuments = new RedisClient("localhost").As<Document>())
        {
            var documentKey = GetKeyByID(id);
            if (documentKey != null)
                document = redisDocuments.GetValue(documentKey);
        }
        return document;
    }

    public void SaveDocument(Document document)
    {
        using (var redisDocuments = new RedisClient("localhost").As<Document>())
        {
            var documentKey = GetKeyByID(document.ID);
            redisDocuments.SetEntry(documentKey, document);
        }
    }

    private string GetKeyByID(int id)
    {
        return DOCUMENT_ID_KEY_BASE + id.ToString();
    }
}
It all seems to work - except for GetAllDocuments. That's returning 0 documents, regardless of how many documents I have stored. What am I doing wrong?
The typed Redis client also gives you access to the non-typed methods - since Redis ultimately doesn't know or care about your object types. So when you use the client.SetEntry() method, it bypasses some of the typed client's features and just stores the object by a key. You'll want to use the client.Store method since it goes ahead and creates a SET in Redis with all the object IDs related to your type. This SET is important because it's what the GetAll method relies on to serve back all the objects to you. The client.Store method does infer the ID automatically so you'll want to play around with it.
You'd change your GetDocument(int id) method to use client.GetById(id), and your SaveDocument(Document document) method to use client.Store(document). You won't need your GetKeyByID() method anymore. I believe your Document object will need an "Id" property for the typed client to infer your object ID.
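A sketch of the repository rewritten around those typed-client methods (assuming Document exposes an Id property):

using System.Collections.Generic;
using ServiceStack.Redis;

public class DocumentRepositoryRedis
{
    public IList<Document> GetAllDocuments()
    {
        using (var redisDocuments = new RedisClient("localhost").As<Document>())
        {
            // GetAll works because Store maintains the ids SET behind the scenes.
            return redisDocuments.GetAll();
        }
    }

    public Document GetDocument(int id)
    {
        using (var redisDocuments = new RedisClient("localhost").As<Document>())
        {
            return redisDocuments.GetById(id);
        }
    }

    public void SaveDocument(Document document)
    {
        using (var redisDocuments = new RedisClient("localhost").As<Document>())
        {
            redisDocuments.Store(document); // infers the key from document.Id
        }
    }
}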
I found something similar to what I need here:
http://www.codeproject.com/KB/cs/PropertiesSettings.aspx
But it does not quite do it for me. The user settings are stored in some far-away location such as C:\documents and settings\[username]\local settings\application data\[your application], but I do not have access to those folders, and I cannot copy the settings file from one computer to another or delete the file altogether. Also, it would be super-convenient to have the settings xml file right next to the app, and to copy/ship both. This is for demo-ware (which is a legitimate type of coding task) and will be used by non-technical people in the field. I need to build this quickly, so I want to reuse an existing library rather than write my own. It needs to be easy to use and portable. The last thing I want is to get a call at midnight saying that settings do not persist when edited through the settings dialog that I will have built.
So, user settings are stored god knows where, and application settings are read-only (no go). Is there anything else I can do? I think the app.config file has multiple purposes, and I think I once saw it being used the way I want; I just cannot find the link.
Let me know if something is not clear.
You could create a class that holds your settings and then XML-serialize it:
public class Settings
{
    public string Setting1 { get; set; }
    public int Setting2 { get; set; }
}

// Keep the file next to the executable so it can be shipped with the app.
static readonly string SettingsFilePath =
    Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "settings.xml");

static void SaveSettings(Settings settings)
{
    var serializer = new XmlSerializer(typeof(Settings));
    // File.Create truncates any existing file, so a shorter payload
    // doesn't leave stale bytes at the end (File.OpenWrite would).
    using (var stream = File.Create(SettingsFilePath))
    {
        serializer.Serialize(stream, settings);
    }
}

static Settings LoadSettings()
{
    if (!File.Exists(SettingsFilePath))
        return new Settings();
    var serializer = new XmlSerializer(typeof(Settings));
    using (var stream = File.OpenRead(SettingsFilePath))
    {
        return (Settings)serializer.Deserialize(stream);
    }
}
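Usage is then as simple as this (the property values are just placeholders):

var settings = LoadSettings();               // returns defaults if no file yet
settings.Setting1 = "http://www.google.com";
settings.Setting2 = 42;
SaveSettings(settings);                      // writes settings.xml next to the exe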
I'm developing a C# WinForms application; it is a client that connects to a web service to get data. The data returned by the web service is a DataTable, which the client displays in a DataGridView.
My problem is that the client takes a long time to get all the data from the server (the web service is not local to the client), so I have to use a thread to get the data. This is my model:
Client creates a thread to get data -> thread completes and sends an event to the client -> client displays the data in the DataGridView on a form.
However, when the user closes the form and opens it again later, the client must fetch all the data again, which makes it slow.
So, I am thinking about cached data:
Client <---get/add/edit/delete---> Cached Data <---get/add/edit/delete---> Server (web service)
Please give me some suggestions. For example, should the cache live in a separate application on the same host as the client, or run inside the client itself?
Please give me some techniques to implement this solution, and examples if you have any.
Thanks.
UPDATE: Hello everyone, maybe I made my problem sound bigger than it is. I only want to cache data for the client's lifetime. I think the cached data should be stored in memory, and when the client wants data, it should check the cache first.
If you're using C# 2.0 and you're prepared to ship System.Web as a dependency, then you can use the ASP.NET cache:
using System.Web;
using System.Web.Caching;
// HttpRuntime.Cache works outside a web request, unlike HttpContext.Current.Cache,
// which is null in a client application.
Cache webCache = HttpRuntime.Cache;
object cachedObject;
object webServiceResult;

// See if there's a cached item already
cachedObject = webCache.Get("MyCacheItem");
if (cachedObject == null)
{
    // If there's nothing in the cache, call the web service to get a new item
    webServiceResult = new Object();
    // Cache the web service result for five minutes
    webCache.Add("MyCacheItem", webServiceResult, null, DateTime.Now.AddMinutes(5),
        Cache.NoSlidingExpiration, System.Web.Caching.CacheItemPriority.Normal, null);
}
else
{
    // Item already in the cache - cast it to the right type
    webServiceResult = (object)cachedObject;
}
If you're not prepared to ship System.Web, then you might want to take a look at the Enterprise Library Caching block.
If you're on .NET 4.0, however, caching has been pushed into the System.Runtime.Caching namespace. To use this, you'll need to add a reference to System.Runtime.Caching, and then your code will look something like this:
using System.Runtime.Caching;
MemoryCache cache;
object cachedObject;
object webServiceResult;

cache = new MemoryCache("StackOverflow");
cachedObject = cache.Get("MyCacheItem");
if (cachedObject == null)
{
    // Call the web service
    webServiceResult = new Object();
    cache.Add("MyCacheItem", webServiceResult, DateTime.Now.AddMinutes(5));
}
else
{
    webServiceResult = (object)cachedObject;
}
All these caches run in-process to the client. Because your data is coming from a web service, as Adam says, you're going to have difficulty determining the freshness of the data - you'll have to make a judgement call on how often the data changes and how long you cache the data for.
Do you have the ability to make changes or additions to the web service?
If you can, Sync Services may be an option for you. You can define which tables are synchronized, and all the sync plumbing is managed for you.
Check out
http://msdn.microsoft.com/en-us/sync/default.aspx
and shout if you need more information.
You might try the Enterprise Library's Caching Application Block. It's easy to use, stores in memory and, if you ever need to later, it supports adding a backup location for persisting beyond the life of the application (such as to a database, isolated storage, file, etc.) and even encryption too.
Use EntLib 3.1 if you're stuck with .NET 2.0. There's not much new (for caching, at least) in the newer EntLibs aside from better customization support.
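By way of illustration, a minimal sketch of the Caching Application Block (assuming EntLib 3.1 is referenced and a default cache manager is configured in app.config; CallWebService is a hypothetical fetch):

using Microsoft.Practices.EnterpriseLibrary.Caching;

ICacheManager cache = CacheFactory.GetCacheManager();

object result = cache.GetData("MyCacheItem");
if (result == null)
{
    result = CallWebService(); // hypothetical slow call
    cache.Add("MyCacheItem", result);
}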
Identify which objects you would like to serialize, and cache to isolated storage. Specify the level of data isolation you would like (application level, user level, etc).
Example:
You could create a generic serializer; a very basic sample would look like this:
public class SampleDataSerializer
{
    public static void Deserialize<T>(out T data, Stream stm)
    {
        var xs = new XmlSerializer(typeof(T));
        data = (T)xs.Deserialize(stm);
    }

    public static void Serialize<T>(T data, Stream stm)
    {
        var xs = new XmlSerializer(typeof(T));
        xs.Serialize(stm, data);
    }
}
Note that you probably should put in some overloads to the Serialize and Deserialize methods to accommodate readers, or any other types you are actually using in your app (e.g., XmlDocuments, etc.).
The operation to save to IsolatedStorage can be handled by a utility class (example below):
public class SampleIsolatedStorageManager : IDisposable
{
    private string filename;
    private string directoryname;
    IsolatedStorageFile isf;

    public SampleIsolatedStorageManager()
    {
        filename = string.Empty;
        directoryname = string.Empty;
        // create an ISF scoped to domain user...
        isf = IsolatedStorageFile.GetStore(IsolatedStorageScope.User |
            IsolatedStorageScope.Assembly | IsolatedStorageScope.Domain,
            typeof(System.Security.Policy.Url), typeof(System.Security.Policy.Url));
    }

    public void Save<T>(T parm)
    {
        using (IsolatedStorageFileStream stm = GetStreamByStoredType<T>(FileMode.Create))
        {
            SampleDataSerializer.Serialize<T>(parm, stm);
        }
    }

    public T Restore<T>() where T : new()
    {
        try
        {
            // Check whether a cache file for this type actually exists.
            if (isf.GetFileNames(GetFileNameByType<T>()).Length > 0)
            {
                T result = new T();
                using (IsolatedStorageFileStream stm = GetStreamByStoredType<T>(FileMode.Open))
                {
                    SampleDataSerializer.Deserialize<T>(out result, stm);
                }
                return result;
            }
            else
            {
                return default(T);
            }
        }
        catch
        {
            // A corrupt cache file is discarded rather than propagated.
            try
            {
                Clear<T>();
            }
            catch
            {
            }
            return default(T);
        }
    }

    public void Clear<T>()
    {
        if (isf.GetFileNames(GetFileNameByType<T>()).Length > 0)
        {
            isf.DeleteFile(GetFileNameByType<T>());
        }
    }

    private string GetFileNameByType<T>()
    {
        return typeof(T).Name + ".cache";
    }

    private IsolatedStorageFileStream GetStreamByStoredType<T>(FileMode mode)
    {
        var stm = new IsolatedStorageFileStream(GetFileNameByType<T>(), mode, isf);
        return stm;
    }

    #region IDisposable Members
    public void Dispose()
    {
        isf.Close();
    }
    #endregion
}
Finally, remember to add the following using clauses:
using System.IO;
using System.IO.IsolatedStorage;
using System.Xml.Serialization;
The actual code to use the classes above could look like this:
var myClass = new MyClass();
myClass.name = "something";

using (var mgr = new SampleIsolatedStorageManager())
{
    mgr.Save<MyClass>(myClass);
}
This saves the specified instance to isolated storage. To retrieve the instance, simply call:
using (var mgr = new SampleIsolatedStorageManager())
{
    myClass = mgr.Restore<MyClass>();
}
Note: the sample I've provided only supports one serialized instance per type. I'm not sure if you need more than that. Make whatever modifications you need to support further functionalities.
HTH!
You can serialise the DataTable to file:
http://forums.asp.net/t/1441971.aspx
Your only concern then is deciding when the cache has gone stale. Perhaps timestamp the file?
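If you go that route, a sketch of the timestamp check might look like this (the path, the five-minute window, and FetchFromWebService are all assumptions):

using System;
using System.Data;
using System.IO;

string cachePath = "table.cache.xml";
TimeSpan maxAge = TimeSpan.FromMinutes(5);

DataTable table;
if (File.Exists(cachePath) &&
    DateTime.UtcNow - File.GetLastWriteTimeUtc(cachePath) < maxAge)
{
    table = new DataTable();
    table.ReadXml(cachePath);                            // fresh enough: load from disk
}
else
{
    table = FetchFromWebService();                       // hypothetical slow call
    table.WriteXml(cachePath, XmlWriteMode.WriteSchema); // schema needed for ReadXml
}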
In our implementation every row in the database has a last-updated timestamp. Every time our client application accesses a table we select the latest last-updated timestamp from the cache and send that value to the server. The server responds with all the rows that have newer timestamps.
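A hedged sketch of the client side of that exchange (the service method and the "LastUpdated" column are hypothetical; requires a reference to System.Data.DataSetExtensions):

// Send the newest timestamp we have; the server returns only newer rows.
DateTime newest = cachedTable.AsEnumerable()
    .Max(r => r.Field<DateTime>("LastUpdated"));

DataTable delta = service.GetRowsUpdatedSince(newest);  // hypothetical service call
cachedTable.Merge(delta);                               // fold the changes into the cache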