I am working on a Silverlight client and associated ASP.NET web services (not WCF), and I need to implement some features containing user preferences such as a "favourite items" system and whether they'd like word-wrapping or not. In order to make a pleasant (rather than infuriating) user experience, I want to persist these settings across sessions. A brief investigation suggests that there are two main possibilities.
1. Silverlight isolated storage
2. ASP.NET-accessible database
I realise that option 2 is probably the best option as it ensures that even if a user disables isolated storage for Silverlight, their preferences still persist, but I would like to avoid the burden of maintaining a database at this time, and I like the idea that the preferences are available for loading and editing even when server connectivity is unavailable. However, I am open to reasoned arguments why it might be preferable to take this hit now rather than later.
What I am looking for is suggestions on the best way to implement settings persistence, in either scenario. For example, if isolated storage is used, should I use an XML format, or some other file layout for persisting the settings; if the database approach is used, do I have to design a settings table or is there a built-in mechanism in ASP.NET to support this, and how do I serve the preferences to the client?
So:
Which solution is the better solution for user preference persistence? How might settings be persisted in that solution, and how might the client access and update them?
Prior Research
Note that I have conducted a little prior research on the matter and found the following links, which seem to advocate either solution depending on which article you read.
http://www.ddj.com/windows/208300036
http://tinesware.blogspot.com/2008/12/persisting-user-settings-in-silverlight.html
Update
It turns out that Microsoft have provided settings persistence in isolated storage as a built-in part of Silverlight (I somehow missed it until after implementing an alternative). My answer below has more details on this.
I'm keeping the question open: even though Microsoft provides client-side settings persistence, it doesn't necessarily mean this is the best approach for persisting user preferences, and I'd like to canvass more opinions and suggestions on this.
After investigating some more and implementing my own XML file-based settings persistence using IsolatedStorage, I discovered the IsolatedStorageSettings class and the IsolatedStorageSettings.ApplicationSettings object, a key/value collection specifically for storing user-specific application settings.
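For anyone after a concrete starting point, here is a minimal sketch of the pattern; the WordWrap key and the helper class are just illustrative names, not part of the API:

using System.IO.IsolatedStorage;

public static class PreferenceStore
{
    public static void SaveWordWrap(bool wordWrap)
    {
        // ApplicationSettings is a key/value collection backed by isolated storage.
        IsolatedStorageSettings settings = IsolatedStorageSettings.ApplicationSettings;
        settings["WordWrap"] = wordWrap; // adds or overwrites the entry
        settings.Save();                 // persist explicitly
    }

    public static bool LoadWordWrap()
    {
        bool wordWrap;
        if (!IsolatedStorageSettings.ApplicationSettings.TryGetValue("WordWrap", out wordWrap))
            wordWrap = false; // default when nothing has been stored yet
        return wordWrap;
    }
}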
It all seems obvious now. Of course, in the long term, a mechanism for backing up and restoring settings using a server database would be a good enhancement to this client-side settings persistence.
I think in general the default would be to store on the server; only when there are specific compelling reasons to attempt to store on the client should we do so. The more you rely on storing in a medium you can't control, the more risk you take on.
That having been said, and setting myself on the "database" side of the argument, I would ask what the downside of a database is? You mentioned using XML - is your data only semi-structured? If so, why not store XML in a SQL database? Setting up something this simple would not generally be considered a "burden" by most standards. A simple web service could act as the go-between for your Silverlight client and the settings database, as sketched below.
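For what it's worth, a bare-bones ASMX go-between might look like this; the method names and the settings table are hypothetical:

using System.Web.Services;

[WebService(Namespace = "http://example.com/settings/")]
public class SettingsService : WebService
{
    [WebMethod]
    public void SaveSetting(string userId, string key, string value)
    {
        // Upsert (userId, key, value) into a settings table here.
    }

    [WebMethod]
    public string GetSetting(string userId, string key)
    {
        // Look up and return the stored value here.
        return null;
    }
}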
If it is an important feature for you that users have access to their preferences while offline, then it looks like isolated storage is the way to go for you. If it's more important that users be able to save preferences even if they have turned off isolated storage (is this really a problem? I'd be tempted to call YAGNI on this, but I'm not terribly experienced with the Silverlight platform...) then you need to host a database. If both are important, then you're probably looking at some kind of hybrid solution; using isolated storage if available, then falling back to a database.
In other words, I think the needs of your application are more important than some abstract best practice.
Related
I have a classic ASP and an ASP.NET application. I want to share the session between classic ASP and ASP.NET, back and forth. My solution was to share the session by iterating over the session names and putting them in hidden objects. This works fine, but I don't know whether it is the best approach to sharing the session. I read some articles: some use XML and pass it through form posting, some use cookies, and others use a database. Of these approaches, which is the best to implement, or is there another, better solution?
Unfortunately your description of what you've already done is not very clear:
I made a solution to share the session by iterating the session names and put it in the hidden objects.
So, I'm going to go back one step and approach it from there. Before that, however, I should say that I don't think integrating two systems on this level is a good idea at all; you are trying to make two systems work together by replacing some of the piping that they run on, in such a way that they don't really notice. This might open you up for errors later on, when something changes and the relevant change is not made in the other system. I also would advise against the heavy use of sessions in general, although many ASP systems did abuse it.
Basically, both classic ASP and ASP.NET sessions work similarly: they use a session identifier (either stored automatically as a cookie - usually the default - or passed via the URL), and they allow you to store key/value pairs of objects against this session identifier.
If you want to share the objects in session, you can either pass the information directly between the two applications when they interact (this would be the "form posting" approach), or store it in an external place, such as the database. I have some issues with the form posting approach, namely:
It is passing data which should be strictly on the server to the browser and back again, which has both security and bandwidth implications (you would probably want to at least encrypt the data);
It assumes that you have a clear handover or boundary between the two applications. This may or may not be the case.
I don't really see too many benefits to this approach, apart from the fact that it's probably the simplest, as long as you have limited interactions (say, a few fields, and only one or two pages that interact). The same argument goes for any other approach that passes all of the session info to the browser and back to the server (e.g. storing everything in cookies).
Other approaches will assume that you have a place to store your session, let's assume a database for now. With a database, we have a single place where the data exists, and each application can simply read and write to this single place. If you have a large number of different interactions, and / or the systems don't have a clear handover to each other, this is clearly an easier approach.
What remains is a question of how to encode the different types of data and how to store it. This is essentially a question of encoding, and you would need to be fairly clear on the different data types being used on each side (you should make sure that the ASP.Net code uses types that are at least logically equivalent to the ASP data types).
To go about implementing this, I would create a database implementation which supports how I want to store my session data. It could be a simple key/value pair table, along with data type information, or it could be more normalised than that; it's your decision. After this, I would create a new implementation of Session for each application which makes use of this database. All that is left is to replace all references to the standard Session objects with the new Session that you've created, and everything should work. This is probably similar to what you've done, but I am specifically mentioning the use of a database as a data store; a rough sketch follows.
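As a rough illustration of the key/value approach (the table layout and class name are mine, not a standard API), the ASP.NET side could wrap the shared table like this; the classic ASP side would need an equivalent implementation against the same table:

using System.Data.SqlClient;

// Wraps a shared key/value table such as:
//   SharedSession(SessionId varchar(64), Name varchar(64), Value nvarchar(max))
public class SharedSessionStore
{
    private readonly string _connectionString;

    public SharedSessionStore(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void Set(string sessionId, string name, string value)
    {
        using (SqlConnection conn = new SqlConnection(_connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "UPDATE SharedSession SET Value = @v WHERE SessionId = @s AND Name = @n; " +
            "IF @@ROWCOUNT = 0 INSERT INTO SharedSession (SessionId, Name, Value) " +
            "VALUES (@s, @n, @v)", conn))
        {
            cmd.Parameters.AddWithValue("@s", sessionId);
            cmd.Parameters.AddWithValue("@n", name);
            cmd.Parameters.AddWithValue("@v", value);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }

    public string Get(string sessionId, string name)
    {
        using (SqlConnection conn = new SqlConnection(_connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT Value FROM SharedSession WHERE SessionId = @s AND Name = @n", conn))
        {
            cmd.Parameters.AddWithValue("@s", sessionId);
            cmd.Parameters.AddWithValue("@n", name);
            conn.Open();
            return (string)cmd.ExecuteScalar();
        }
    }
}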
Performance issues and the cleaning of old sessions should be relatively easy to work out, although this implementation will definitely be slower than an in-memory implementation.
I know it's not exactly an answer to your question, but I have worked on and integrated a few .NET and classic ASP applications myself; I just wanted to give you some comments from my experience:
It can be confusing if it's not well documented.
On some of the systems that were not properly documented, when we re-deployed we missed out a .NET app that handled part of the functionality.
The most solid and best to work with (in my experience) were the ones where we picked one application to be the "core", with other parts making calls to this core to get work done.
For example, creating .NET applications (desktop, web, handheld) that called classic ASP web pages that were effectively providing a simple XML web service - seeing as the classic ASP apps tended to be the legacy "core" part of the system. (XML web services are fun to write in classic ASP - seriously, they are fast and easy to code - no type-casting issues ;-) )
I don't know what you are doing here specifically, but I found that this approach made the structure of the code more intuitive and actually simplified many odd associated issues beyond sessions; e.g., one app had an Access DB as part of it, and having only classic ASP access it removed concurrency issues.
This may not be of any use, but it's based on experience working with around 10 operational management systems that integrated .NET and classic ASP in various ways.
I can't decide whether to keep the help desk application in the same database as the rest of the corporate applications or completely separate it.
The help desk application can log support requests from a phone call, email, or the website.
We can get questions sent to us from registered customers and non-registered customers.
The only reason to keep the help desk application in the same database is so that we can share the user base. But then again we can have the user create a new account for support or sync the user accounts with the help desk application.
If we separate the help desk application, our database backup will be smaller. Or we can just keep the help desk application in the same database, which makes development/integration a lot easier overall, having only one database to back up. (Maybe larger, but still one database with everything.)
What to do?
I think this is a subjective answer, but I would keep the help desk system as a separate entity, unless there is a good business reason to use the same user base.
This is mostly based on what I've seen in professional helpdesk call logging/ticket software, but I do have another compelling reason: security. The logic is as follows:
Generally, a helpdesk ticketing system needs less sensitive information than other business systems (accounting, shopping, CRM, etc.). Your technicians will likely need to know how to contact a customer, but probably won't need to store full addresses, birth dates, etc. All of the following is based on an assumption: that your existing customer data contains sensitive or personally identifiable data that would not be needed by your ticketing system.
Principle 1: Reducing the attack surface area by limiting the stored data. Generally, I subscribe to the principle that you should ONLY collect the data you absolutely need. Having less sensitive information available means less that an attacker can steal.
Principle 2: Reducing the surface area by minimizing avenues of attack into existing sensitive data. Assuming you already have a large user base, and assuming that you're already storing potentially useful data about your customers, adding another application with hooks into that data is just adding further avenues of attack into the existing customer base. This leads me to...
Principle 3: Least privilege. The user you set up for the helpdesk software database should have access ONLY to the data absolutely needed by your helpdesk analysts. Accomplishing this is easier if you design your database with a specific set of needs in mind. It's a lot more difficult from a maintenance standpoint to have to set up views and stored procedures over sensitive data in order to only allow access to the non-sensitive data than it is to have a database designed to have only the data that you need.
Of course, I may be over-thinking it. And there are other compelling reasons for going either route. I'm just trying to give you something to think about.
This will definitely be a subjective answer based upon your environment. You have to weigh the benefits/drawbacks of one choice with the benefits/drawbacks of the other choice. However, my opinion would be that the best benefits will be found in separating the two databases. I really don't like to have one database with two purposes. Instead look to create a database with one purpose only. Here are the benefits I see to doing this:
Portability - if you decide to move the helpdesk to a different server, you can without issue. The same is true if you want to move the corporate database somewhere else.
Separation of concerns - each database is designed for its own purpose. The security of one won't interfere with the security of the other.
Backup policies - Currently, you can only have one backup policy for both systems since they are in the same database. If you split them, you could back up one more often than the other (and the backup would be smaller/faster).
The drawbacks I see (not being able to access the corporate data as easily) actually come out as a positive in my mind. Accessing the data from the corporate database sounds good but it can be a security issue (also a maintainability issue). Instead, this way you can limit how much access (and what type of access) is granted to the helpdesk system. Databases can access each other fairly easily so it won't be that inconvenient and it will allow you to add a nice security barrier between your corporate data and your helpdesk data.
What is the best way to log actions, activities, etc. done in an ASP.NET application? Also, which storage is best for logging these? XML? DB?
Thank you very much.
The answer I hate most, actually applies here: "it depends". Specifically, it depends on several things:
Who is the logging information for? Is it intended for business users (i.e., are there actual business requirements), is the information oriented at application management, do you need insight in frequently used features, etc.
What is the granularity of the logged info? For instance: do you only need to know if the search function was used, do you want to know the search query or do you also need info on the actual search results?
How accurate & complete does the info have to be? Audit trail requirements are usually very tight, technical ones often less so.
Do you want to be able to roll back the actions/activities? And if so, who is going to do that (business user, support personnel)?
What does your deployment look like? If you have a single server, logging to text files or XML is more feasible than if you have a farm/load balanced environment.
For application logging, look at well-known providers such as log4net or the Enterprise Library logging application block; both allow you to configure where you want to log to (text file, database, etc).
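As a quick taste of log4net (the logger and class names here are arbitrary; the appenders that decide where entries go live in the config file):

using log4net;

public class SearchPage
{
    // One logger per class is the usual convention.
    private static readonly ILog Log = LogManager.GetLogger(typeof(SearchPage));

    public void Search(string query)
    {
        Log.InfoFormat("Search executed: {0}", query);
        try
        {
            // ...perform the search...
        }
        catch (System.Exception ex)
        {
            Log.Error("Search failed", ex);
            throw;
        }
    }
}
// Bootstrap once, e.g. in Global.asax Application_Start:
//   log4net.Config.XmlConfigurator.Configure();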
For logging database actions, I suggest a solution in the database. Several editions of SQL Server 2008 have built-in support for auditing; Oracle has had this for years, IIANM.
PostSharp probably. Log to a DB.
-- Edit:
This is for logging all code actions. To log all DB actions, I'd use triggers.
I would use log4net, because you can configure and change the output (file, mail, DB, ...) in the config file, so you do not have to rebuild your code.
If you don't need a complex auditing system, but just logging of what your code is doing, I recommend using the tracing system integrated into the .NET Framework and ASP.NET.
Using very simple classes in the framework, your code emits traces, and then, via configuration files, you can send them to different storage systems (file, database, Windows event log, ...). You can even write your own trace listener to store them anywhere else.
http://msdn.microsoft.com/en-us/library/system.diagnostics.trace.aspx
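At its simplest, the emitting side looks something like this; where the output ends up (file, event log, ...) is decided by the TraceListeners you register in configuration:

using System.Diagnostics;

public class OrderProcessor
{
    public void Process(int orderId)
    {
        // Emit a trace message with a category; listeners decide where it goes.
        Trace.WriteLine("Processing order " + orderId, "OrderProcessor");

        // ...do the actual work...

        // Conditional tracing is also built in.
        Trace.WriteLineIf(orderId < 0, "Suspicious order id: " + orderId, "OrderProcessor");
    }
}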
Related:
Storing Images in DB - Yea or Nay?
After reading the above question, it seems the preferred method for image storage with databases is to store only the filepath within the database. However, most of these answers seem to focus on web servers.
In my case, I'm developing a desktop application that will be used across multiple computers within an intranet. A dedicated server will host the database, containing information related to performing tests on various equipment.
Images need to be stored on the server in some way. Would storing the images in the database be the correct approach in this case, or even the only approach?
Pros:
Backup is limited to only the database.
No need to open up the server's file system to the network.
Single protocol for server information access.
Protected file access. (User can't go in and delete all the images)
Cons:
Performance issues in the future if there are too many images.
Edit: As stated in the tags, the application is being written in C#/.NET. If writing the images to the file system is an option in this case, I could use some help understanding how this is done.
Edit 2: As elaborated some in the comments below, for now I'm assuming a MySQL database, although the FileStream capabilities of SQL Server 2008 could potentially change that.
Also, in my case, images will be added often and can be considered read-only after that point, since they should never be changed and will just be read out when needed. Images will likely be small (~70k each). I'm also considering storing some other binary-format files on the server (~20k each), to which I can likely apply the same storing and retrieving approach.
I'd suggest keeping those files on disk in the file system, rather than in the database. File system for files, databases for relational data, etc.
Deliver by Web Service
Consider delivering those images to your desktop app by hosting a web service/app on that DB machine whose job is to serve only images. Set up a web server on that machine with an ASP.NET application, and have an .ashx handler take requests and stream back the binary image. Something like this:
http://myserver/myapp/GetImage.ashx?CustomerID=123&ImageID=456
Security
If intranet security is an issue, this would be the point where you could ensure that the user is authenticated and authorized for read access to the image. Audit trails could be implemented here as well.
File System Security
Regarding security on those images, consider that NTFS gives you a lot of measures to ensure that only those who are authorized can read/delete/put files as required. The task then would be to define those roles and implement Windows security groups.
Future Needs
This approach allows you to securely consume those images from anywhere on the intranet. Perhaps this app would be migrated to a web application at some point? Perhaps a feature request comes from the customer where a web solution is appropriate?
This might sound like overkill rather than reading a blob from the database, but it's great from a security perspective. Consider your customers' and patients' expectations on privacy and security.
<%@ WebHandler Language="C#" Class="Handler" %>

using System.IO;
using System.Web;

public class Handler : IHttpHandler {
    public void ProcessRequest(HttpContext context)
    {
        // Go to the DB and get the path for this ID.
        string filePath = GetImagePath(context.Request.QueryString["ImageID"]);
        // Now you have the path on disk; read the file.
        byte[] imgBytes = GetBytesFromDisk(filePath);
        // Send it back as a byte stream.
        context.Response.ContentType = "image/jpeg";
        context.Response.BinaryWrite(imgBytes);
    }
    public bool IsReusable { get { return false; } }
    // Stubs: look up the stored path in the DB, then read the file from disk.
    private string GetImagePath(string imageId) { /* DB lookup goes here */ return null; }
    private byte[] GetBytesFromDisk(string path) { return File.ReadAllBytes(path); }
}
I think the answer is that there is no right answer. As with most things in programming (and life), It DEPENDS.
Here are some Pros and Cons of storing in DB:
Pros
Easy backup, management and one stop shop for data in your application
Fewer dependencies in your app and fewer moving parts (KISS principle)
Works fine on small files under 1GB.
Hey, it's a DB, so saves can be done inside transactions and rolled back if there are network problems
SharePoint and TFS store everything in the DB and work just fine - even the big boys do it
Security can be easily controlled by the app and not involve file/folder permissions
Cons
Eats up db space
Can potentially affect performance if not done right
Not such a great idea if you're always storing large files (>1GB), unless you're using FILESTREAM in SQL Server 2008
Requires you to implement a decent caching strategy (although you would probably want this anyway)
The file system feels more natural than the DB and makes it easier to manually replace/view files
I guess when it comes to your situation, I would lean towards the simplicity of storing in the DB.
From an architecture perspective, you'll get the best performance by splitting the solution into two pieces: a database server, and an image server.
You would do this both in order to keep row sizes small and also to separate your transactional environment from content. Relational databases in the vein of SQL Server and MySQL will support big BLOBs but aren't optimized for them.
Most people equate "image server" to "web server" because they work on web applications and therefore have a de facto image repository (a directory on a local disk). However, this does not have to be the case. Images can be served from any location over any protocol.
You mentioned a C#/.NET platform and an intranet. Can we assume a Windows environment, possibly Active Directory?
If so, a plain vanilla file server could be your image server. Set up a file share, set read/create (but not modify/delete) permissions on it for all users of this app, store the UNC path somewhere in the database (so you don't have to redeploy the app if you decide to relocate it), and have your client application generate a unique, relative path using something reliable like a Guid.
It's not as elegant as a web service (which is my preferred approach), nor quite as maintenance-free as the pure-database approach, but my impression of this topic is that you're on a tight budget with a short delivery deadline, and a Windows or NFS file server is cheaper, easier, and faster to set up and maintain (including backups) than a full-fledged web server, so it might be just what you're looking for here.
Most businesses already have a file server, so usually this won't require any new infrastructure whatsoever. But even if you don't, I've seen file servers run off old reconditioned workstations - it's not fancy, but in a low-traffic environment it gets the job done.
If you choose this approach, I would suggest some kind of directory structure on the file share to simplify backups, archiving, etc. For example:
\\ImageServer\MyAppRepository\yyyy-mm\{image-file-name-or-guid}.{ext}.
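As a rough sketch of the client side of that (the share name comes from the example above; the helper class itself is hypothetical):

using System;
using System.IO;

public static class ImageStore
{
    // In practice, read this UNC root from the database, as suggested above.
    private const string RepositoryRoot = @"\\ImageServer\MyAppRepository";

    public static string SaveImage(string localFile)
    {
        // yyyy-MM folder plus a Guid file name, matching the example layout.
        string folder = DateTime.Now.ToString("yyyy-MM");
        Directory.CreateDirectory(Path.Combine(RepositoryRoot, folder)); // no-op if present
        string relativePath = Path.Combine(folder,
            Guid.NewGuid() + Path.GetExtension(localFile));
        File.Copy(localFile, Path.Combine(RepositoryRoot, relativePath));
        return relativePath; // store this relative path in the database
    }
}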
Hope that helps.
How many images are we talking about? Are they unique/updated frequently? If not, can you package the images with the client that you are going to distribute to multiple computers?
Personally, I would avoid storing images in the database, and instead as you said store the file paths.
If you have read through all of the other similar questions (This, this, and this) but are still asking if this is a good idea, then maybe your problem is different enough that this would be a good idea.
My company developed a Windows Forms C# application that stores images in a database, and it has worked out pretty well. We have been actively using it since 2003 and have about 150 gigs of data in the system.
First, let me say that this is NOT the optimal performance architecture. We have had some problems with keeping the database statistics up to date and keeping the indexes tuned correctly. We basically have to re-index the system monthly. You need to be aware that the built-in optimization system of most RDBMS servers is not set up for large collections of binary objects.
The reason we chose to put the images in the database is database-level replication. Our system is spread across seven offices in five states, and I needed to sync the data to each site. So, I put up a VPN between each site and our corporate office and set up SQL merge replication on the database. In this way, I can sync the data and images at the same time with only one channel open between offices.
So, I would say that images in the database is not the optimal solution in most cases but it worked out for our requirements.
I don't think it matters where the images are stored. Pick the simplest approach that will work. But you should have an architecture where you can change the approach if it proves to be the wrong one.
To accomplish this, I would put the data and the image storage both behind a web services interface. Pick a technology - doesn't matter. All access to the data (and images) would be the same way - through the web service.
By doing this, you have decoupled where the data is stored from the desktop application. The desktop app doesn't care. All it knows is that the server at a certain address can get it the data.
Then store the data and the images wherever you want. Choose the simplest thing for you. If you end up having issues, then (and only then) should you add additional complexity in order to solve the problem. The good news is that the additional complexity and work shouldn't affect the desktop applications at all. You can make the changes on the server without having to deploy a new version of the desktop applications.
If you're looking for alternatives, one of my favorites is a ten-line HTTP POST file upload handler (PHP, .NET, Java, etc.) plus one web server. The script validates the max file size, possibly extracts the width & height, and inserts a row into the database. Retrieval need not go through the script - standard file hosting will work. This would require you to open port 80, but you needn't complicate it with SOAP or anything; a regular upload handler would do the job, along the lines of the sketch below.
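A sketch of such a handler in C# (the field name, size limit, and target folder are all placeholders):

<%@ WebHandler Language="C#" Class="UploadHandler" %>

using System;
using System.IO;
using System.Web;

public class UploadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        HttpPostedFile file = context.Request.Files["image"];
        if (file == null || file.ContentLength > 1024 * 1024) // reject > 1 MB, say
        {
            context.Response.StatusCode = 400;
            return;
        }
        string name = Guid.NewGuid() + Path.GetExtension(file.FileName);
        file.SaveAs(Path.Combine(context.Server.MapPath("~/images"), name));
        // ...insert a row (name, size, dimensions) into the database here...
        context.Response.Write(name);
    }

    public bool IsReusable { get { return false; } }
}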
Then there's WebDAV, along the same lines. Of course, with this method, you'd have to monitor the filesystem and adjust the database accordingly. You could use a polling service or hook into file system events. Actually, you could also inject an ISAPI filter or Apache handler to perform the database updates.
You could use FTP. Add an extension to ProFTPd that will update the database and keep everything in sync.
Lots of ways to avoid putting image data into tables.
If you opt for the database solution, just be sure to segment your BLOBs into separate tables. Separate table spaces / devices / partitions, if you can. Or, use Oracle and ignore everything I've said.
Use Amazon S3 storage for your images
Just store the GUID or other file name in the DB
Amazon is simple, fast, cheap, secure, etc.
It scales fine, and optionally provides CDN-like edge services directly from S3
Storing images in the DB always seems to turn into a nightmare over time
It seems to me that what you want to do is something like what Infovark do.
They use Firebird for this, and I'll give you a link on Firebird and storing images.
You should try MS SQL 2008; it comes with a FILESTREAM type, which automatically stores blobs in the file system.
I plan to store all my config settings in my application's app.config file (using the ConfigurationManager.AppSettings property). As the user changes settings using the app's UI (clicking checkboxes, choosing radio buttons, etc.), I plan to write those changes out to AppSettings. At the same time, while the program is running, I plan to access AppSettings constantly from a process that will be constantly processing data. Changes to settings via the UI need to affect the data processing in real time, which is why the process will be accessing AppSettings constantly.
Is this a good idea with regard to performance? Using AppSettings is supposed to be "the right way" to store and access configuration settings when writing .Net apps, but I worry that this method wasn't intended for a constant load (at least in terms of settings being constantly read).
If anyone has experience with this, I would greatly appreciate the input.
Update: I should probably clarify a few points.
This is not a web application, so connecting a database to the application might be overkill simply for storing configuration settings. This is a Windows Forms application.
According to the MSDN documentation, the ConfigurationManager is for storing not just application-level settings, but user settings as well. (Especially important if, for instance, the application is installed as a partial-trust application.)
Update 2: I accepted lomaxx's answer because Properties does indeed look like a good solution, without having to add any additional layers to my application (such as a database). When using Properties, it already does all the caching that others suggested. This means any changes and subsequent reads are all done in memory, making it extremely fast. Properties only writes the changes to disk when you explicitly tell it to. This means I can make changes to the config settings on-the-fly at run time and then only do a final save out to disk when the program exits.
Just to verify it would actually be able to handle the load I need, I did some testing on my laptop and was able to do 750,000 reads and 7,500 writes per second using Properties. That is so far above and beyond what my application will ever even come close to needing that I feel quite safe in using Properties without impacting performance.
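For anyone curious, the pattern looks roughly like this, assuming a user-scoped bool setting named WordWrap has been defined in the project's Settings designer (which generates the Properties.Settings class):

using System;

class SettingsDemo
{
    static void Main()
    {
        // Reads and writes hit the in-memory copy, which is why it is so fast.
        Properties.Settings.Default.WordWrap = true;
        Console.WriteLine(Properties.Settings.Default.WordWrap);

        // Persist to disk only when you choose to, e.g. on application exit.
        Properties.Settings.Default.Save();
    }
}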
Since you're using a WinForms app, if it's in .NET 2.0 there's actually a user settings system (called Properties) that is designed for this purpose. This article on MSDN has a pretty good introduction to it.
If you're still worried about performance, then take a look at SQL Server Compact Edition, which is similar to SQLite but is the Microsoft offering; I've found it plays very nicely with WinForms, and there's even the ability to make it work with LINQ.
Check out SQLite, it seems like a good option for this particular scenario.
Dylan,
Don't use the application config file for this purpose, use a SQL DB (SQLite, MySQL, MSSQL, whatever) because you'll have to worry less about concurrency issues during reads and writes to the config file.
You'll also have better flexibility in the type of data you want to store. The appSettings section is just a key/value list which you may outgrow as time passes and as the app matures. You could use custom config sections but then you're into a new problem area when it comes to the design.
The appSettings section isn't really meant for what you are trying to do.
When your .NET application starts, it reads in the app.config file and caches its contents in memory. For that reason, after you write to the app.config file, you'd have to somehow force the runtime to re-parse it so it can cache the settings again. This is unnecessary overhead.
The best approach would be to use a database to store your configuration settings.
Barring the use of a database, you could easily set up an external XML configuration file. When your application starts, you could cache its contents in a NameValueCollection or Hashtable object. As you change/add settings, you would do it to that cached copy. When your application shuts down, or at an appropriate time interval, you can write the cache contents back out to file - something like the sketch below.
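A sketch of that approach (using a Dictionary rather than a NameValueCollection for brevity, with an invented file layout):

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// Loads settings once, serves them from memory, writes back on Save().
// Expected layout: <settings><setting key="..." value="..."/></settings>
public class XmlSettingsCache
{
    private readonly string _path;
    private readonly Dictionary<string, string> _cache;

    public XmlSettingsCache(string path)
    {
        _path = path;
        _cache = XDocument.Load(path).Root.Elements("setting")
            .ToDictionary(e => (string)e.Attribute("key"),
                          e => (string)e.Attribute("value"));
    }

    public string this[string key]
    {
        get { string v; _cache.TryGetValue(key, out v); return v; }
        set { _cache[key] = value; }
    }

    public void Save()
    {
        new XDocument(new XElement("settings",
            _cache.Select(kv => new XElement("setting",
                new XAttribute("key", kv.Key),
                new XAttribute("value", kv.Value)))))
            .Save(_path);
    }
}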
Someone correct me if I'm wrong, but I don't think that AppSettings is typically meant to be used for these type of configuration settings. Normally you would only put in settings that remain fairly static (database connection strings, file paths, etc.). If you want to store customizable user settings, it would be better to create a separate preferences file, or ideally store those settings in a database.
I would not use config files for storing user data. Use a db.
Could I ask why you're not saving the user's settings in a database?
Generally, I save application settings that are changed very infrequently in the appSettings section (the default email address error logs are sent to, the number of minutes after which you are automatically logged out, etc.) The scope of this really is at the application, not at the user, and is generally used for deployment settings.
One thing I would look at doing is caching the appSettings on a read, then flushing the settings from the cache on a write, which should minimize the amount of actual load the server has to deal with when processing the appSettings.
Also, if possible, look at breaking the appSettings up into configSections so you can read, write, and cache related settings.
Having said all that, I would seriously consider looking at storing these values in a database as you seem to actually be storing user preferences, and not application settings.