I'm working on a cloud-hosted ZipFile creation service.
This is a cross-origin WebApi2 service that provides zip files for a file system that cannot host any server-side code.
The basic operation goes like this:
User makes a POST request with a string[] of Urls that correlate to file locations
WebApi reads the array into memory, and creates a ticket number
WebApi returns the ticket number to the user
AJAX callback then redirects the user to a web address with the ticket number appended, which returns the zip file in the HttpResponseMessage
In order to handle the ticket system, my design approach was to set up a Global Dictionary that paired a randomly generated 10 digit number to a List<String> value, and the dictionary was paired to a Queue storing 10,000 entries at a time. (Reference here)
This is partially due to the fact that WebApi does not support Cache
When I make my AJAX call locally, it works 100% of the time. When I make the call remotely, it works about 20% of the time.
When it fails, this is the error I get:
The given key was not present in the dictionary.
Meaning, the ticket number was not found in the Global Dictionary Object.
I've implemented quite a few Lazy Singletons in the last few months, and I've never run into this.
Where did I go wrong?
//Initial POST request, sent to the service with the string[]
public string Post([FromBody]string value)
{
try
{
var urlList = new JavaScriptSerializer().Deserialize<List<string>>(value);
var helper = new Helper();
var random = helper.GenerateNumber(10);
CacheDictionary<String, List<String>>.Instance.Add(random, urlList);
return random;
}
catch (Exception ex)
{
return ex.Message;
}
}
//Response, cut off where the error occurs
public async Task<HttpResponseMessage> Get(string id)
{
try
{
var urlList = CacheDictionary<String, List<String>>.Instance[id];
}
catch (Exception ex)
{
var response = new HttpResponseMessage(HttpStatusCode.InternalServerError)
{
Content = new StringContent(ex.Message)
};
return response;
}
}
//CacheDictionary in its Lazy Singleton form:
public class CacheDictionary<TKey, TValue>
{
private Dictionary<TKey, TValue> dictionary;
private Queue<TKey> keys;
private int capacity;
private static readonly Lazy<CacheDictionary<String, List<String>>> lazy =
new Lazy<CacheDictionary<String, List<String>>>(() => new CacheDictionary<String, List<String>>(10000));
public static CacheDictionary<String, List<String>> Instance { get { return lazy.Value; } }
private CacheDictionary(int capacity)
{
this.keys = new Queue<TKey>(capacity);
this.capacity = capacity;
this.dictionary = new Dictionary<TKey, TValue>(capacity);
}
public void Add(TKey key, TValue value)
{
if (dictionary.Count == capacity)
{
var oldestKey = keys.Dequeue();
dictionary.Remove(oldestKey);
}
dictionary.Add(key, value);
keys.Enqueue(key);
}
public TValue this[TKey key]
{
get { return dictionary[key]; }
}
}
More Error Detail
at System.Collections.Generic.Dictionary`2.get_Item(TKey key)
at ZipperUpper.Models.CacheDictionary`2.get_Item(TKey key)
I think you'll find it's to do with where you're locating your global dictionary. For example, if this were a web farm and your dictionary were in Session, one instance of your app could access a different Session from another instance unless Session state handling was set up correctly. In your case it's in the cloud, so you need to make the same kind of provision for related requests and responses being handled by different machines. As it stands, one machine can hand out the ticket number while another receives the AJAX redirect but doesn't have that key in its own "global" dictionary.
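One way to make that provision, as a minimal sketch: put the ticket map behind an interface whose implementation lives in storage that every instance shares (Redis, SQL Server, a hosted cache, etc.). The interface and member names here are illustrative, not an existing API:

```csharp
using System.Collections.Generic;

// Illustrative interface -- back it with Redis, SQL, or any store
// that every instance of the service can reach. The in-memory
// CacheDictionary would become just one (single-machine) implementation.
public interface ITicketStore
{
    void Add(string ticket, List<string> urls);
    bool TryGet(string ticket, out List<string> urls);
}
```

Post would then call store.Add(random, urlList) and Get would call store.TryGet(id, out urlList), so it no longer matters which machine serves the redirect.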
Related
I am working with Windows services to get some data from an API:
service#1 gets data from "http://api.provider.com/Entity1"
service#2 gets data from "http://api.provider.com/Entity2"
and I have both these services in one .csproj, and I use a singleton HttpClient to retrieve data from the API:
public sealed class Client : HttpClient{
private static readonly object padlock = new object();
private static Client instance = null;
public static Client Instance
{
get
{
if (instance == null)
{
lock (padlock)
{
if (instance == null)
{
instance = new Client();
}
}
}
return instance;
}
}
private Client()
{
DefaultRequestHeaders.Accept.Clear();
DefaultRequestHeaders.Accept.Add(MediaTypeWithQualityHeaderValue.Parse("application/json"));
DefaultRequestHeaders.Add("...", "...");
}
public async Task<string> Get(string url)
{
var result = await GetStringAsync(url);
return result;
}}
but these processes run in parallel, so the singleton class is shared statically between the two. I then have this class as the consumer of the first:
class APIHAndler{
public List<obj1> f1()
{
var jsonResult = Client.Instance.Get(url1).Result;
//make list of obj1 out of json
}
public List<obj2> f2()
{
var jsonResult = Client.Instance.Get(url2).Result;
//make list of obj2 out of json
}
}
What I did is create one instance of the APIHAndler class in each of my services and call f1 and f2 based on the ongoing business, and I get this error:
Response status code does not indicate success: 429 (Too Many Requests).
I think it is probably due to having two different connections open at the same time, but I don't know how to avoid this. If you can help me fix this or suggest a better solution, I will be very happy to hear about it.
I don't know how to avoid this
The service you're calling should have documentation that explains what causes a 429. Sometimes it's only a single request at a time; more often it's a certain number of requests within a time window. Either way, you'll need to throttle your requests, and you can build throttling with a SemaphoreSlim.
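As a minimal sketch of such throttling (the concurrency limit of 2 is an assumption -- use whatever the provider's documentation allows):

```csharp
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public static class ThrottledClient
{
    private static readonly HttpClient _client = new HttpClient();

    // Allow at most 2 requests in flight at once (assumed limit).
    private static readonly SemaphoreSlim _throttle = new SemaphoreSlim(2, 2);

    public static async Task<string> GetAsync(string url)
    {
        await _throttle.WaitAsync(); // blocks asynchronously when the limit is reached
        try
        {
            return await _client.GetStringAsync(url);
        }
        finally
        {
            _throttle.Release(); // always free the slot, even on failure
        }
    }
}
```

If the API's limit is requests-per-time-window rather than concurrency, hold the slot for the whole window instead (e.g. await Task.Delay(window) before calling Release).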
I am stuck in a scenario.
My code is like below:
Update: it's not about how to use the data cache; I am already using it and it's working. It's about extending it so the method doesn't make calls in the window between expiry and getting new data from the external source.
var cachedData = (string)this.GetDataFromCache(cache, cacheKey);
if (String.IsNullOrEmpty(cachedData))
{
// get the data. It takes 100ms
SetDataIntoCache(cache, cacheKey, cachedData, DateTime.Now.AddMilliseconds(500));
}
So users hit the cache and get data from it; when the item has expired, the code calls the service, gets fresh data, and saves it to the cache. The problem is that whenever there is a pending (ongoing) request, another request is still sent because the object has expired. In the end there should be at most 2-3 calls per second, but there are 10-20 calls per second to the external service.
Is there any optimal way of doing this so there is no conflict between request times, other than creating my own custom class with arrays and timestamps etc.?
By the way, the code for saving to the cache is:
private void SetDataIntoCache(ObjectCache cacheStore, string cacheKey, object target, DateTime slidingExpirationDuration)
{
CacheItemPolicy cacheItemPolicy = new CacheItemPolicy();
cacheItemPolicy.AbsoluteExpiration = slidingExpirationDuration;
cacheStore.Add(cacheKey, target, cacheItemPolicy);
}
Use Double-checked locking pattern:
var cachedItem = (string)this.GetDataFromCache(cache, cacheKey);
if (String.IsNullOrEmpty(cachedItem)) { // if no cache yet, or it has expired
lock (_lock) { // we lock only in this case
// you have to check once more -- another thread might have put the item in the cache already
cachedItem = (string)this.GetDataFromCache(cache, cacheKey);
if (String.IsNullOrEmpty(cachedItem)) {
// get the data (takes ~100ms), then cache it
SetDataIntoCache(cache, cacheKey, cachedItem, DateTime.Now.AddMilliseconds(500));
}
}
}
This way, while there is an item in your cache (so, not expired yet), all requests will be completed without locking. But if there is no cache entry yet, or it expired - only one thread will get data and put it into the cache.
Make sure you understand that pattern, because there are some caveats while implementing it in .NET.
As noted in comments, it is not necessary to use one "global" lock object to protect every single cache access. Suppose you have two methods in your code, and each of those methods caches an object using its own cache key (but still using the same cache). Then you should use two separate lock objects, because with one "global" lock object, calls to one method would unnecessarily wait for calls to the other, even though they never work with the same cache keys.
I have adapted the solution from Micro Caching in .NET for use with the System.Runtime.Caching.ObjectCache for MvcSiteMapProvider. The full implementation has an ICacheProvider interface that allows swapping between System.Runtime.Caching and System.Web.Caching, but this is a cut down version that should meet your needs.
The most compelling feature of this pattern is that it uses a lightweight version of a lazy lock to ensure that the data is loaded from the data source only once after the cache expires, regardless of how many concurrent threads are attempting to load the data.
using System;
using System.Runtime.Caching;
using System.Threading;
public interface IMicroCache<T>
{
bool Contains(string key);
T GetOrAdd(string key, Func<T> loadFunction, Func<CacheItemPolicy> getCacheItemPolicyFunction);
void Remove(string key);
}
public class MicroCache<T> : IMicroCache<T>
{
public MicroCache(ObjectCache objectCache)
{
if (objectCache == null)
throw new ArgumentNullException("objectCache");
this.cache = objectCache;
}
private readonly ObjectCache cache;
private ReaderWriterLockSlim synclock = new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);
public bool Contains(string key)
{
synclock.EnterReadLock();
try
{
return this.cache.Contains(key);
}
finally
{
synclock.ExitReadLock();
}
}
public T GetOrAdd(string key, Func<T> loadFunction, Func<CacheItemPolicy> getCacheItemPolicyFunction)
{
LazyLock<T> lazy;
bool success;
synclock.EnterReadLock();
try
{
success = this.TryGetValue(key, out lazy);
}
finally
{
synclock.ExitReadLock();
}
if (!success)
{
synclock.EnterWriteLock();
try
{
if (!this.TryGetValue(key, out lazy))
{
lazy = new LazyLock<T>();
var policy = getCacheItemPolicyFunction();
this.cache.Add(key, lazy, policy);
}
}
finally
{
synclock.ExitWriteLock();
}
}
return lazy.Get(loadFunction);
}
public void Remove(string key)
{
synclock.EnterWriteLock();
try
{
this.cache.Remove(key);
}
finally
{
synclock.ExitWriteLock();
}
}
private bool TryGetValue(string key, out LazyLock<T> value)
{
value = (LazyLock<T>)this.cache.Get(key);
if (value != null)
{
return true;
}
return false;
}
private sealed class LazyLock<T>
{
private volatile bool got;
private T value;
public T Get(Func<T> activator)
{
if (!got)
{
if (activator == null)
{
return default(T);
}
lock (this)
{
if (!got)
{
value = activator();
got = true;
}
}
}
return value;
}
}
}
Usage
// Load the cache as a static singleton so all of the threads
// use the same instance.
private static IMicroCache<string> stringCache =
new MicroCache<string>(System.Runtime.Caching.MemoryCache.Default);
public string GetData(string key)
{
return stringCache.GetOrAdd(
key,
() => LoadData(key),
() => LoadCacheItemPolicy(key));
}
private string LoadData(string key)
{
// Load data from persistent source here
return "some loaded string";
}
private CacheItemPolicy LoadCacheItemPolicy(string key)
{
var policy = new CacheItemPolicy();
// This ensures the cache will survive application
// pool restarts in ASP.NET/MVC
policy.Priority = CacheItemPriority.NotRemovable;
policy.AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(1);
// Load Dependencies
// policy.ChangeMonitors.Add(new HostFileChangeMonitor(new string[] { fileName }));
return policy;
}
NOTE: As was previously mentioned, you are probably not gaining anything by caching a value that takes 100ms to retrieve for only 500ms. You should most likely choose a longer time period to hold items in the cache. Are the items really so volatile in the data source that they could change that quickly? If so, maybe you should look at using a ChangeMonitor to invalidate stale data, so you don't spend so much CPU time loading the cache. Then you could change the cache time to minutes instead of milliseconds.
You will have to use locking to make sure a request is not sent when the cache has expired and another thread is already getting the data from the remote/slow service. It will look something like this (there are better implementations out there that are easier to use, but they require separate classes):
private static readonly object _Lock = new object();
...
var cachedData = (string)this.GetDataFromCache(cache, cacheKey);
if (cachedData == null)
{
lock (_Lock)
{
cachedData = (string)this.GetDataFromCache(cache, cacheKey);
if (String.IsNullOrEmpty(cachedData))
{
// get the data -- takes ~100ms
SetDataIntoCache(cache, cacheKey, cachedData, DateTime.Now.AddMilliseconds(500));
}
}
}
return cachedData;
Also, you want to make sure your service doesn't return null, as the code will then assume no cached value exists and will fetch the data on every request. That is why more advanced implementations typically use something like a CacheObject wrapper, which supports storing null values.
By the way, 500 milliseconds is too short a time to cache; you will spend a lot of CPU cycles just adding and removing cache entries, and entries will often be evicted before any other request can benefit from them. You should profile your code to see whether caching actually helps.
Remember, a cache involves a lot of work -- locking, hashing, and moving data around -- which costs a good number of CPU cycles. And although each cycle is cheap, on a multi-threaded, multi-connection server the CPU has plenty of other things to do.
Original Answer https://stackoverflow.com/a/16446943/85597
private string GetDataFromCache(
ObjectCache cache,
string key,
Func<string> valueFactory)
{
var newValue = new Lazy<string>(valueFactory);
//The line below returns existing item or adds
// the new value if it doesn't exist
var value = cache.AddOrGetExisting(key, newValue, DateTimeOffset.Now.AddMilliseconds(500)) as Lazy<string>;
// Lazy<T> handles the locking itself
return (value ?? newValue).Value;
}
// usage...
var cachedData = this.GetDataFromCache(cache, cacheKey, () => {
// get the data...
// this value factory will be called only once;
// Lazy will automatically do the necessary locking
return data;
});
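One caveat worth noting with this approach: by default, Lazy&lt;T&gt; caches an exception thrown by the value factory, so a transient failure would be replayed to every caller until the cache entry expires. If that matters, the Lazy can be constructed with LazyThreadSafetyMode.PublicationOnly, which does not cache exceptions (at the cost that several racing threads may each run the factory, with only one result being published):

```csharp
// In GetDataFromCache, construct the Lazy like this instead
// (requires using System.Threading). A factory that throws can
// then be retried by the next caller rather than the exception
// being cached for the lifetime of the entry.
var newValue = new Lazy<string>(valueFactory,
    LazyThreadSafetyMode.PublicationOnly);
```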
I'm writing a C#-based web application using SignalR. So far I have a 'lobby' area (where open communication is allowed) and a 'session' area (where groups of 5 people can engage in private conversation, and any server interactions are only shown to the group).
What I'd like to do is create a 'logging' object in memory - one for each session (so if there are three groups of five people, I'd have three logging objects).
The 'session' area inherits from Hubs (and IDisconnect), and has several methods (Join, Send, Disconnect, etc.). The methods pass data back to the JavaScript client, which calls client-side JS functions. I've tried using a constructor method:
public class Session : Hub, IDisconnect
{
public class Logger
{
public List<Tuple<string, string, DateTime>> Log;
public List<Tuple<string, string, DateTime>> AddEvent(string evt, string msg, DateTime time)
{
if (Log == null)
{
Log = new List<Tuple<string, string, DateTime>>();
}
Log.Add(new Tuple<string, string, DateTime>(evt, msg, time));
return Log;
}
}
public Logger eventLog = new Logger();
public Session()
{
eventLog = new Logger();
eventLog.AddEvent("LOGGER INITIALIZED", "Logging started", DateTime.Now);
}
public Task Join(string group)
{
eventLog.AddEvent("CONNECT", "User connect", DateTime.Now);
return Groups.Add(Context.ConnectionId, group);
}
public Task Send(string group, string message)
{
eventLog.AddEvent("CHAT", "Message Sent", DateTime.Now);
return Clients[group].addMessage(message);
}
public Task Interact(string group, string payload)
{
// deserialise the data
// pass the data to the worker
// broadcast the interaction to everyone in the group
eventLog.AddEvent("INTERACTION", "User interacted", DateTime.Now);
return Clients[group].interactionMade(payload);
}
public Task Disconnect()
{
// grab someone from the lobby?
eventLog.AddEvent("DISCONNECT","User disconnect",DateTime.Now);
return Clients.leave(Context.ConnectionId);
}
}
But this results in the Logger being recreated every time a user interacts with the server.
Does anyone know how I'd be able to create one Logger per new session, and add elements to it? Or is there a simpler way to do this and I'm just overthinking the problem?
Hubs are created and disposed of all the time! Never ever put data in them that you expect to last (unless it's static).
I'd recommend creating your logger object as its own class (not extending Hub/IDisconnect).
Once you have that create a static ConcurrentDictionary on the hub which maps SignalR groups (use these to represent your sessions) to loggers.
When a "Join" method is triggered on your hub, it's as easy as looking up the group that the connection is in and sending the logging data to that group's logger.
Check out https://github.com/davidfowl/JabbR when it comes to making "rooms" and other sorts of groupings via SignalR.
Hope this helps!
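A minimal sketch of that setup, reusing the question's Logger class (the other hub methods are elided):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class Session : Hub, IDisconnect
{
    // Static, so it outlives the hub instances SignalR creates and
    // disposes per call. One Logger per group (i.e. per session).
    private static readonly ConcurrentDictionary<string, Logger> Loggers =
        new ConcurrentDictionary<string, Logger>();

    public Task Join(string group)
    {
        // GetOrAdd creates the logger only for the group's first member;
        // later members get the existing instance.
        var log = Loggers.GetOrAdd(group, g => new Logger());
        log.AddEvent("CONNECT", "User connect", DateTime.Now);
        return Groups.Add(Context.ConnectionId, group);
    }
}
```

The other methods (Send, Interact, Disconnect) would look up the same dictionary by group name instead of touching an instance field.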
I'm rethinking a current WCF service we're using right now. We do A LOT of loading XML to various databases. In some cases, we can store it as XML data, and in others, we need to store it as rowsets.
So I'm redesigning this service to accept different providers. My first thought, classic abstract factory, but now I'm having my doubts. Essentially, the service class has one operation contract method, Load. But to me, it seems silly to new-up provider instances every time Load is called.
Currently:
// Obviously incomplete example:
public class XmlLoaderService : IXmlLoaderService
{
readonly IXmlLoaderFactory _xmlLoaderFactory;
readonly IXmlLoader _xmlLoader;
public XmlLoaderService(ProviderConfiguration configuration)
{
// factory call sketched in; the original snippet is incomplete here
_xmlLoader = _xmlLoaderFactory.Create(configuration);
}
public void Load(Request request)
{
_xmlLoader.Load(request);
}
}
I'm thinking about changing to:
public class XmlLoaderService : IXmlLoaderService
{
static readonly IDictionary<int, IXmlLoader> _providerDictionary;
static XmlLoaderService()
{
_providerDictionary = PopulateDictionaryFromConfig();
}
public void Load(Request request)
{
// Request will always supply an int that identifies the
// request type, can be used as key in provider dictionary
var xmlLoader = _providerDictionary[request.RequestType];
xmlLoader.Load(request);
}
}
Is this a good approach? I like the idea of caching the providers, seems more efficient to me... though, I tend to overlook the obvious sometimes. Let me know your thoughts!
Why can't you use both? Pass in your dependency into the Load method and if the type is already cached use the cached instance.
public void Load(Request request)
{
// Request will always supply an int that identifies the
// request type, can be used as key in provider dictionary
IXmlLoader xmlLoader;
if(_providerDictionary.ContainsKey(request.RequestType))
{
xmlLoader = _providerDictionary[request.RequestType];
}
else
{
xmlLoader = //acquire from factory
_providerDictionary.Add(request.RequestType, xmlLoader);
}
xmlLoader.Load(request);
}
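One caveat with the check-then-add above: in a WCF service, Load can run on many threads at once, and Dictionary is not safe for concurrent writes. A sketch of the same idea using ConcurrentDictionary (the _xmlLoaderFactory.Create call is an assumption standing in for however the loaders actually get built):

```csharp
using System.Collections.Concurrent;

static readonly ConcurrentDictionary<int, IXmlLoader> _providerDictionary =
    new ConcurrentDictionary<int, IXmlLoader>();

public void Load(Request request)
{
    // GetOrAdd makes the lookup-or-create atomic with respect to the
    // dictionary; note the value factory itself may race on a first
    // miss, but only one created loader is kept and returned.
    var xmlLoader = _providerDictionary.GetOrAdd(
        request.RequestType,
        type => _xmlLoaderFactory.Create(type));
    xmlLoader.Load(request);
}
```

This keeps the lazy caching you wanted without the lost-update risk of ContainsKey followed by Add.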
I was wondering what the best implementation for a global error (doesn't have to be errors, can also be success messages) handler would be? Let me break it down for you with an example:
User tries to delete a record
Deletion fails and an error is logged
User redirects to another page
Display error message for user (using a HtmlHelper or something, don't want it to be a specific error page)
I'm just curious what you guys think. I've been considering TempData, ViewData and Session but they all have their pros and cons.
TIA!
UPDATE:
I'll show an example what I exactly mean, maybe I wasn't clear enough.
This is an example of a method that adds a message when user deletes a record.
If user succeeds, user redirects to another page
public ActionResult DeleteRecord(Record recordToDelete)
{
// If user succeeds deleting the record
if (_service.DeleteRecord(recordToDelete))
{
// Add success message
MessageHandler.AddMessage(Status.SUCCESS, "A message to user");
// And redirect to list view
return RedirectToAction("RecordsList");
}
else
{
// Else return records details view
return View("RecordDetails", recordToDelete);
}
}
And in the view "RecordsList", it would be kinda cool to show all messages (both error and success messages) in a HtmlHelper or something.
<%= Html.RenderAllMessages %>
This can be achieved in many ways, I'm just curious what you guys would do.
UPDATE 2:
I have created a custom error (message) handler. You can see the code if you scroll down.
Just for fun, I created my own custom error (message) handler that works pretty much like TempData, but with the small difference that this handler is accessible all over the application.
I'm not going to explain every single step of the code, but to sum it up, I used an IHttpModule to fire a method on every request, and Session to save the data. Below is the code; feel free to edit it or suggest improvements.
Web.config (Define module)
<httpModules>
<add name="ErrorManagerModule" type="ErrorManagerNamespace.ErrorManager"/>
</httpModules>
<system.webServer>
<modules runAllManagedModulesForAllRequests="true">
<add name="ErrorManagerModule" type="ErrorManagerNamespace.ErrorManager"/>
</modules>
</system.webServer>
ErrorManager.cs (Error manager handler code)
public class ErrorManager : IRequiresSessionState, IHttpModule
{
private const string SessionKey = "ERROR_MANAGER_SESSION_KEY";
public enum Type
{
None,
Warning,
Success,
Error
}
/*
*
* Public methods
*
*/
public void Dispose()
{
}
public void Init(HttpApplication context)
{
context.AcquireRequestState += new EventHandler(Initiliaze);
}
public static IList<ErrorModel> GetErrors(ErrorManager.Type type = Type.None)
{
// Get all errors from session
var errors = GetErrorData();
// Destroy Keep alive
// Decrease all errors request count
foreach (var error in errors.Where(o => type == ErrorManager.Type.None || o.ErrorType == type).ToList())
{
error.KeepAlive = false;
error.IsRead = true;
}
// Save errors to session
SaveErrorData(errors);
//return errors;
return errors.Where(o => type == ErrorManager.Type.None || o.ErrorType == type).ToList();
}
public static void Add(ErrorModel error)
{
// Get all errors from session
var errors = GetErrorData();
var result = errors.Where(o => o.Key.Equals(error.Key, StringComparison.OrdinalIgnoreCase)).FirstOrDefault();
// Add error to collection
error.IsRead = false;
// Error with key is already associated
// Remove old error from collection
if (result != null)
errors.Remove(result);
// Add new to collection
// Save errors to session
errors.Add(error);
SaveErrorData(errors);
}
public static void Add(string key, object value, ErrorManager.Type type = Type.None, bool keepAlive = false)
{
// Create new error
Add(new ErrorModel()
{
IsRead = false,
Key = key,
Value = value,
KeepAlive = keepAlive,
ErrorType = type
});
}
public static void Remove(string key)
{
// Get all errors from session
var errors = GetErrorData();
var result = errors.Where(o => o.Key.Equals(key, StringComparison.OrdinalIgnoreCase)).FirstOrDefault();
// Error with key is in collection
// Remove old error
if (result != null)
errors.Remove(result);
// Save errors to session
SaveErrorData(errors);
}
public static void Clear()
{
// Clear all errors
HttpContext.Current.Session.Remove(SessionKey);
}
/*
*
* Private methods
*
*/
private void Initiliaze(object o, EventArgs e)
{
// Get context
var context = ((HttpApplication)o).Context;
// If session is ready
if (context.Handler is IRequiresSessionState ||
context.Handler is IReadOnlySessionState)
{
// Load all errors from session
LoadErrorData();
}
}
private static void LoadErrorData()
{
// Get all errors from session
var errors = GetErrorData().Where(o => !o.IsRead).ToList();
// If KeepAlive is set to false
// Mark error as read
foreach (var error in errors)
{
if (error.KeepAlive == false)
error.IsRead = true;
}
// Save errors to session
SaveErrorData(errors);
}
private static void SaveErrorData(IList<ErrorModel> errors)
{
// Make sure to remove any old errors
HttpContext.Current.Session.Remove(SessionKey);
HttpContext.Current.Session.Add(SessionKey, errors);
}
private static IList<ErrorModel> GetErrorData()
{
// Get all errors from session
return HttpContext.Current.Session[SessionKey]
as IList<ErrorModel> ??
new List<ErrorModel>();
}
/*
*
* Model
*
*/
public class ErrorModel
{
public string Key { get; set; }
public object Value { get; set; }
public bool KeepAlive { get; set; }
internal bool IsRead { get; set; }
public Type ErrorType { get; set; }
}
HtmlHelperExtension.cs (An extension method for rendering the errors)
public static class HtmlHelperExtension
{
public static string RenderMessages(this HtmlHelper obj, ErrorManager.Type type = ErrorManager.Type.None, object htmlAttributes = null)
{
var builder = new TagBuilder("ul");
var errors = ErrorManager.GetErrors(type);
// If there are no errors
// Return empty string
if (errors.Count == 0)
return string.Empty;
// Merge html attributes
builder.MergeAttributes(new RouteValueDictionary(htmlAttributes), true);
// Loop all errors
foreach (var error in errors)
{
builder.InnerHtml += String.Format("<li class=\"{0}\"><span>{1}</span></li>",
error.ErrorType.ToString().ToLower(),
error.Value as string);
}
return builder.ToString();
}
}
Usage for creating errors
// This will only be available for one request
ErrorManager.Add("Key", "An error message", ErrorManager.Type.Error);
// This will be available for multiple requests
// When error is read, it will be removed
ErrorManager.Add("Key", "An error message", ErrorManager.Type.Error, true);
// Remove an error
ErrorManager.Remove("AnotherKey");
// Clear all error
ErrorManager.Clear();
Usage for rendering errors
// This will render all errors
<%= Html.RenderMessages() %>
// This will just render all errors with type "Error"
<%= Html.RenderMessages(ErrorManager.Type.Error) %>
I'm confused by these steps:
Deletion fails and an error is logged
User redirects to another page
Why would you redirect the user when an error occurs? That doesn't make any sense, unless I'm misunderstanding something.
Generally, I follow these guidelines:
Error with form submission (e.g. HTTP POST): check ModelState.IsValid, return the same View, and render the error out with #Html.ValidationSummary()
Error with AJAX call: return JsonResult (like #Tomas says), and use basic client-side scripting to inspect the JSON and show the result
Error with domain/business logic: throw custom exceptions and let the controller catch them and add to ModelState as above
I prefer writing my server layer as an API emitting JSON - in ASP.NET MVC that's really simple: you just create a bunch of nested anonymous objects and return Json(data);. The JSON object is then consumed by the client layer, which consists of HTML, CSS and JavaScript (I use jQuery a lot, but you might prefer other tools).
Since JavaScript is dynamic, it is then really easy to just have a status property on the data object, and the client-side script can interpret it and display status or error messages as needed.
For example, consider the following action method:
public ActionResult ListStuff()
{
var stuff = Repo.GetStuff();
return Json(new { status = "OK", thestuff = stuff });
}
This will return JSON in the following format:
{ "status": "OK", "thestuff": [{ ... }, { ... }] }
where ... is a placeholder for the properties of stuff. Now, if I want error handling, I can just do
try
{
var stuff = Repo.GetStuff();
return Json(new { status = "OK", thestuff = stuff});
}
catch (Exception ex)
{
Log.Error(ex);
return Json(new { status = "Fail", reason = ex.Message });
}
Since JavaScript is dynamic, it doesn't matter that the two anonymous objects don't have the same properties. Based on the value of status, I'll only look for properties that are actually there.
This can be implemented even better if you create your own action result classes, which extend JsonResult and add the status property automatically. For example, you can create one for failed requests that takes an exception in the constructor, and one for successful requests that takes an anonymous object.
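A sketch of what those subclasses could look like (the class names are made up for illustration, not an established API):

```csharp
using System;
using System.Web.Mvc;

// Stamps status = "OK" onto the payload automatically,
// mirroring the { status = "OK", ... } shape used above.
public class SuccessJsonResult : JsonResult
{
    public SuccessJsonResult(object payload)
    {
        Data = new { status = "OK", payload };
    }
}

// Stamps status = "Fail" plus the exception message.
public class FailureJsonResult : JsonResult
{
    public FailureJsonResult(Exception ex)
    {
        Data = new { status = "Fail", reason = ex.Message };
    }
}
```

The catch block in the action then shrinks to return new FailureJsonResult(ex);.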
If all you're going to do is redirect the user to another page, then you can use any ActionMethod to do so and just redirect to it.
If you want a global error, such as a 500 or 403 or some other error, then the MVC 3 default template creates an _Error.cshtml page for you and registers the error handler in the global.asax.
If you want to catch specific errors, then you can register additional handlers in the same place and tell the system which Error page to use for that error.
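In MVC 3 terms, "register additional handlers in the same place" can look like this in Global.asax.cs (the exception type and view names are examples, not a required mapping):

```csharp
using System;
using System.Web.Mvc;

public static class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        // Specific exception -> specific error view; more specific
        // handlers should come first via the Order property.
        filters.Add(new HandleErrorAttribute
        {
            ExceptionType = typeof(UnauthorizedAccessException),
            View = "Forbidden",
            Order = 1
        });

        // Fallback: everything else goes to the default "Error" view.
        filters.Add(new HandleErrorAttribute { Order = 2 });
    }
}
```

Remember that HandleErrorAttribute only takes effect when customErrors is enabled in Web.config.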