I have a Windows 8 application. This application has several custom-defined classes, and I need to store instances of these classes in Isolated Storage. From my understanding, Isolated Storage has been replaced with ApplicationDataContainer. Currently, I'm trying the following:
public class MyClass
{
private HttpClient service = new HttpClient();
public string FirstName { get; set; }
public DateTime? BirthDate { get; set; }
public int Gender { get; set; }
public async Task Save()
{
// Do stuff...
}
}
...
MyClass myInstance = new MyClass();
// do stuff...
try {
ApplicationDataContainer storage = ApplicationData.Current.LocalSettings;
if (storage != null)
{
if (storage.Values.ContainsKey("MyKey"))
storage.Values["MyKey"] = myInstance;
else
storage.Values.Add("MyKey", myInstance);
}
} catch (Exception ex)
{
MessageDialog dialog = new MessageDialog("Unable to save to isolated storage");
dialog.ShowAsync();
}
What am I missing? Why is an exception always being thrown? The exception is not very descriptive; it's just a generic System.Exception and the message doesn't help either. Can someone please help me?
Thank you
The exception I get from the code above seems pretty clear:
Data of this type is not supported.
Per Accessing app data with the Windows Runtime
The Windows Runtime data types are supported for app settings.
Note that there is no binary type. If you need to store binary data, use an application file.
You can use the ApplicationDataCompositeValue class to group settings that must be treated atomically (but they still need to be supported runtime data types). Scenario 4 of the Application Data Sample covers this.
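For illustration, a minimal sketch of that composite approach applied to the simple members of MyClass (strings, numbers and DateTimeOffset are supported setting types; the private HttpClient field cannot be stored in settings at all):
// Sketch: store the supported simple members as one composite setting.
// ApplicationDataCompositeValue lives in Windows.Storage.
var composite = new ApplicationDataCompositeValue();
composite["FirstName"] = myInstance.FirstName;   // assumes FirstName is not null
if (myInstance.BirthDate.HasValue)
    composite["BirthDate"] = new DateTimeOffset(myInstance.BirthDate.Value);
composite["Gender"] = myInstance.Gender;
ApplicationData.Current.LocalSettings.Values["MyKey"] = composite;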
In your specific case though, you may want to consider serializing to a file and using app file storage versus settings.
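For example, here is a minimal sketch of that file-based approach, assuming the public properties of MyClass are all you need to round-trip (the private HttpClient field is not serialized):
// Sketch: persist MyClass as a JSON file in the app's local folder instead of LocalSettings.
using System.IO;
using System.Runtime.Serialization.Json;
using System.Threading.Tasks;
using Windows.Storage;

public static async Task SaveToFileAsync(MyClass instance)
{
    var serializer = new DataContractJsonSerializer(typeof(MyClass));
    using (var memory = new MemoryStream())
    {
        serializer.WriteObject(memory, instance);
        memory.Position = 0;
        string json = new StreamReader(memory).ReadToEnd();

        StorageFile file = await ApplicationData.Current.LocalFolder
            .CreateFileAsync("MyKey.json", CreationCollisionOption.ReplaceExisting);
        await FileIO.WriteTextAsync(file, json);
    }
}
Reading it back is the mirror image: FileIO.ReadTextAsync, copy the text into a MemoryStream, and call serializer.ReadObject on it.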
I have implemented my ModelStorage framework for this scenario.
Related
Because a partner we are trying to set up FHIR communications with is using an in-between version of the FHIR schema, they are sending and expecting a Practitioner/PractitionerRoleComponent that has an organization element, instead of the managingOrganization that the FHIR .NET API expects.
I've subclassed Practitioner and PractitionerRoleComponent, and the objects are created fine, so Practitioner now has a custom "THXPractitionerRole" in our case. No errors are encountered, and I have all the attributes in place in the subclass (see below).
However, when I serialize to XML, the result has no PractitionerRole at all; it almost seems like the serializer is just totally ignoring it. I'm going to guess that the FHIR .NET serializers have some sort of check to make sure they only serialize valid FHIR types? Or is there anything missing from the subclass that might be preventing it from working?
The API I'm talking about is here: https://github.com/ewoutkramer/fhir-net-api/tree/develop
The goal is to be able to have a Practitioner/practitionerRole/organization element in the resulting XML/Json.
[FhirType("Practitioner", IsResource = true)]
[DataContract]
public partial class THXPractitioner : Hl7.Fhir.Model.Practitioner, System.ComponentModel.INotifyPropertyChanged
{
[FhirElement("practitionerRole", Order = 170)]
[Cardinality(Min = 0, Max = -1)]
[DataMember]
public List<THXPractitionerRoleComponent> THXPractitionerRole
{
get { if (_PractitionerRole == null) _PractitionerRole = new List<THXPractitionerRoleComponent>(); return _PractitionerRole; }
set { _PractitionerRole = value; OnPropertyChanged("PractitionerRole"); }
}
private List<THXPractitionerRoleComponent> _PractitionerRole;
[FhirType("PractitionerRoleComponent")]
[DataContract]
public partial class THXPractitionerRoleComponent : Hl7.Fhir.Model.Practitioner.PractitionerRoleComponent, System.ComponentModel.INotifyPropertyChanged, IBackboneElement
{
[NotMapped]
public override string TypeName { get { return "PractitionerRoleComponent"; } }
/// <summary>
/// Organization where the roles are performed
/// </summary>
[FhirElement("organization", Order = 40)]
[References("Organization")]
[DataMember]
public ResourceReference organization
{
get { return _organization; }
set { _organization = value; OnPropertyChanged("organization");}
}
private ResourceReference _organization;
    }
}
Here's where it gets called:
fhirpractitioner.THXPractitionerRole = new List<Model.THXPractitioner.THXPractitionerRoleComponent>()
{
new Model.THXPractitioner.THXPractitionerRoleComponent()
{
Extension = new List<Extension>()
{
new Extension()
{
Url = "[My Url]",
}
},
organization = new ResourceReference()
{
Reference = "asdfasfd"
,Display = "organization"
,DisplayElement= new FhirString("organization")
}
}
};
Thanks.
A colleague of mine ended up finding this "issue" on GitHub for the project:
https://github.com/ewoutkramer/fhir-net-api/issues/337
So, I ended up taking a copy of the library, following the ideas suggested in the issue thread, and recompiling. We now have a custom library.
From the issue:
the only way to handle custom resources currently is to create a StructureDefinition for it, add it to the profiles-resources.xml file in the Model/Source directory, and rerun the T4 templates - you'll get your own version of the .NET library, with a POCO for your own resources....
I did this; I had to delete the Generated/Practitioner.cs file before running Template-Model.tt via the Run Custom Tool context menu option in Visual Studio. Once that completed, Practitioner.cs was regenerated with our new/custom resource, and the library was able to serialize it into the XML we needed.
Since there is no official FHIR release that has what you need, and therefore no version of the library that you can use, we think your best option would be to fork the source of the library (see https://github.com/ewoutkramer/fhir-net-api). You can then look up other resources to see the code for their components, and alter Practitioner to include the PractitionerRoleComponent. Build the solution, and you will be able to use that as your library instead of the official one.
I'm writing an add-in for another piece of software through its API. The classes returned by the API can only be accessed through the native software and the API, so I am writing my own stand-alone POCO/DTO objects which map to the API classes. I'm working on a feature which will read in a native file and return a collection of these POCO objects which I can store elsewhere. Currently I'm using JSON.NET to serialize these classes to JSON, if that matters.
For example I might have a DTO like this
public class MyPersonDTO
{
public string Name {get; set;}
public string Age {get; set;}
public string Address {get; set;}
}
...and a method like this to read the native "Persons" into my DTO objects:
public static class MyDocReader
{
public static IList<MyPersonDTO> GetPersons(NativeDocument doc)
{
//Code to read Persons from doc and return MyPersonDTOs
}
}
I have unit tests set up with a test file; however, I keep running into unexpected problems when running my export on other files. Sometimes native objects will have unexpected values, or there will be flat-out bugs in the API which throw exceptions when there is no reason to.
Currently when something "exceptional" happens I just log the exception and the export fails. But I've decided that I'd rather export what I can, and record the errors somewhere.
The easiest option would be to just log and swallow the exceptions and return what I can, however then there would be no way for my calling code to know when there was a problem.
One option I'm considering is returning a dictionary of errors as a separate out parameter. The key would identify the property which could not be read, and the value would contain the details of the exception/error.
public static class MyDocReader
{
public static IList<MyPersonDTO> GetPersons(NativeDocument doc, out IDictionary<string, string> errors)
{
//Code to read persons from doc
}
}
Alternatively I was also considering just storing the errors in the return object itself. This inflates the size of my object, but has the added benefit of storing the errors directly with my objects. So later if someone's export generates an error, I don't have to worry about tracking down the correct log file on their computer.
public class MyPersonDTO
{
public string Name {get; set;}
public string Age {get; set;}
public string Address {get; set;}
public IDictionary<string, string> Errors {get; set;}
}
How is this typically handled? Is there another option for reporting the errors along with the return values that I'm not considering?
Instead of returning errors as part of the entities you could wrap the result in a reply or response message. Errors could then be a part of the response message instead of the entities.
The advantage of this is that the entities are clean
The downside is that it will be harder to map the errors back to offending entities/attributes.
When sending batches of entities this downside can be a big problem. When the API is more single entity oriented it wouldn't matter that much.
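A rough sketch of that wrapper idea, with illustrative names (GetPersonsResponse is not an existing type):
using System.Collections.Generic;

// Sketch: return results and errors together in one response object.
public class GetPersonsResponse
{
    public GetPersonsResponse()
    {
        Persons = new List<MyPersonDTO>();
        Errors = new Dictionary<string, string>();
    }

    public IList<MyPersonDTO> Persons { get; private set; }
    public IDictionary<string, string> Errors { get; private set; }
    public bool HasErrors { get { return Errors.Count > 0; } }
}

public static class MyDocReader
{
    public static GetPersonsResponse GetPersons(NativeDocument doc)
    {
        var response = new GetPersonsResponse();
        // read persons from doc into response.Persons; on a recoverable failure,
        // record the offending property and the error details in response.Errors
        // instead of throwing
        return response;
    }
}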
In principle, if something goes wrong in the API (and cannot be recovered from), the calling code must be aware that an exception has occurred, so it can have a strategy in place to deal with it.
Therefore, the approach that comes to my mind is influenced by the same philosophy:
1. Define your own exception, let's say IncompleteReadException.
This exception has an IList<MyPersonDTO> property to store the records read before the exception occurred.
public class IncompleteReadException : Exception
{
public IList<MyPersonDTO> RecordsRead { get; private set; }
public IncompleteReadException(string message, IList<MyPersonDTO> recordsRead, Exception innerException) : base(message, innerException)
{
this.RecordsRead = recordsRead;
}
}
2. When an exception occurs while reading, catch the original exception, wrap it in this one, and throw the IncompleteReadException.
This will allow the calling code (the application code) to have a strategy in place to deal with the situation when incomplete data is read.
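On the calling side that could look something like this (logger stands in for whatever logging you use):
// Sketch: keep whatever was read, then report the partial failure.
IList<MyPersonDTO> persons;
try
{
    persons = MyDocReader.GetPersons(doc);
}
catch (IncompleteReadException ex)
{
    persons = ex.RecordsRead;   // the partial results are still usable
    logger.Error("Export completed partially: " + ex.InnerException.Message);
}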
Instead of throwing exceptions throughout your code, you can return some extra information together with whatever you want to return:
public (ErrMsg Msg, int? Result) Divide(int x, int y)
{
ErrMsg msg = new ErrMsg();
try
{
if(x == 0){
msg = new ErrMsg{Severity = Severity.Warning, Text = "X is zero - result will always be zero"};
return (msg, x/y);
}
else
{
msg = new ErrMsg{Severity = Severity.Info, Text = "All is well"};
return (msg, x/y);
}
}
catch (System.Exception ex)
{
logger.Error(ex);
msg = new ErrMsg{Severity=Severity.Error, Text = ex.Message};
return (msg, null);
}
}
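The ErrMsg and Severity types used above aren't spelled out in the answer; a minimal version could be as simple as:
// Minimal supporting types for the sketch above (names as used there).
public enum Severity { Info, Warning, Error }

public class ErrMsg
{
    public Severity Severity { get; set; }
    public string Text { get; set; }
}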
Recently we learned about AppDomain recycling in IIS and how it resets static variables to their default values (nulls, 0s, etc.).
We use some static variables that are initialized in a static constructor (first-time initialization of configuration values like "number of decimal places", "administrator email", etc. that are retrieved from the DB) and are then only read for the rest of the website's execution.
What's the best way of solving this problem? Some possible ideas:
Checking if the variable is null/0 at each retrieval (don't like it because of a possible performance impact + the time spent to add this check to each variable + the code overhead added to the project)
Somehow preventing AppDomain recycling (this reset doesn't happen to static variables in Windows Forms; shouldn't it work similarly, since it's the same language in both environments, at least as far as how static variables are managed?)
Using some other way of holding these variables (but we think that, for values used as a global reference for all users, static variables were the best option performance/coding-wise)
Subscribing to an event that is triggered on AppDomain recycling, so we can reinitialize all those variables (maybe the best option if recycling can't be prevented...)
Ideas?
I would go with the approach that you don't like.
Checking if the variable is null/0 at each retrieval (don't like it because of a possible performance impact + the time spent to add this check to each variable + the code overhead added to the project)
I think it's faster than retrieving it from web.config.
You get a typed object.
It's not a performance impact, as you are not going to the database on every retrieval request; you'll go to the database (or any other source) only when you find the current value set to its default value.
Here is the null check wrapped into code:
public interface IMyConfig {
string Var1 { get; }
string Var2 { get; }
}
public class MyConfig : IMyConfig {
private string _Var1;
private string _Var2;
public string Var1 { get { return _Var1; } }
public string Var2 { get { return _Var2; } }
private static object s_SyncRoot = new object();
private static IMyConfig s_Instance;
private MyConfig() {
// load _Var1, _Var2 variables from db here
}
public static IMyConfig Instance {
    get {
        if (s_Instance != null) {
            return s_Instance;
        }
        lock (s_SyncRoot) {
            // re-check inside the lock so the instance is only created once
            if (s_Instance == null) {
                s_Instance = new MyConfig();
            }
        }
        return s_Instance;
    }
}
}
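Call sites then just read from the singleton; the database is queried once per AppDomain and again automatically after a recycle, which is what the question is after:
string adminEmail = MyConfig.Instance.Var2;   // loads from the DB only on first access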
Is there any reason why you can't store these values in your web.config file and use ConfiguationManager.AppSettings to retrieve them?
ConfigurationManager.AppSettings["MySetting"] ?? "defaultvalue";
In view of your edit, why not cache the required values when they're first retrieved?
var val = HttpContext.Current.Cache["MySetting"];
if (val == null)
{
    // val = ... database retrieval logic goes here ...
    HttpContext.Current.Cache["MySetting"] = val;
}
It sounds like you need a write-through (or write-behind) cache, which can be done with static variables.
Whenever a user changes the value, write it back to the database. Then, whenever the AppPool is recycled (which is a normal occurrence and shouldn't be avoided), the static constructors can read the current values from the database.
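A minimal sketch of that idea, using a hypothetical SettingsDb data-access helper:
// Sketch: a write-through static cache; the static constructor reloads after every recycle.
public static class SiteSettings
{
    private static int _decimalPlaces;

    static SiteSettings()
    {
        // runs once per AppDomain, i.e. again after every AppPool recycle
        _decimalPlaces = SettingsDb.LoadDecimalPlaces();   // SettingsDb is hypothetical
    }

    public static int DecimalPlaces
    {
        get { return _decimalPlaces; }
        set
        {
            _decimalPlaces = value;                  // update the in-memory copy
            SettingsDb.SaveDecimalPlaces(value);     // write through to the database
        }
    }
}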
One thing you'll have to consider: If you ever scale out to a web farm, you'll need to have some sort of "trigger" when a shared variable changes so the other servers on the farm can know to retrieve the new values from the server.
Comments on other parts of your question:
(don't like [checking if the variable is null/0 at each retrieval] because of a possible performance impact + the time spent to add this check to each variable + the code overhead added to the project)
If you use a write-through cache you won't need this, but in either case the time spent checking a static variable for 0 or null should be negligible.
[AppDomain recycling] doesn't happen in Windows forms with static variables, shouldn't it work similarly as being the same language in both environments?
No, WebForms and WinForms are completely different platforms with different operating models. Web sites should be able to respond to many concurrent users (potentially millions); WinForms applications are built for single-user access.
I've resolved this kind of issue by following a pattern similar to this one. It enabled me to handle circumstances where the data could change. I set up my ISiteSettingRepository in the bootstrapper. In one application I get the configuration from an XML file, but in others I get it from the database, as and when I need it.
public class ApplicationSettings
{
public ApplicationSettings()
{
}
public ApplicationSettings(ApplicationSettings settings)
{
ApplicationName = settings.ApplicationName;
EncryptionAlgorithm = settings.EncryptionAlgorithm;
EncryptionKey = settings.EncryptionKey;
HashAlgorithm = settings.HashAlgorithm;
HashKey = settings.HashKey;
Duration = settings.Duration;
BaseUrl = settings.BaseUrl;
Id = settings.Id;
}
public string ApplicationName { get; set; }
public string EncryptionAlgorithm { get; set; }
public string EncryptionKey { get; set; }
public string HashAlgorithm { get; set; }
public string HashKey { get; set; }
public int Duration { get; set; }
public string BaseUrl { get; set; }
public Guid Id { get; set; }
}
Then a "Service" Interface to
public interface IApplicaitonSettingsService
{
ApplicationSettings Get();
}
public class ApplicationSettingsService : IApplicaitonSettingsService
{
private readonly ISiteSettingRepository _repository;
public ApplicationSettingsService(ISiteSettingRepository repository)
{
_repository = repository;
}
public ApplicationSettings Get()
{
    SiteSetting setting = _repository.GetAll();
    // the SiteSetting returned by the repository is assumed to convert
    // (or be mapped) to ApplicationSettings here
    return setting;
}
}
I would take a totally different approach, one that doesn't involve anything static.
First create a class to strongly-type the configuration settings you're after:
public class MyConfig
{
    public int DecimalPlaces { get; set; }
    public string AdministratorEmail { get; set; }
    //...
}
Then abstract away the persistence layer by creating some repository:
public interface IMyConfigRepository
{
MyConfig Load();
void Save(MyConfig settings);
}
The classes that can read and write these settings can then statically declare that they depend on an implementation of this repository:
public class SomeClass
{
private readonly IMyConfigRepository _repo;
public SomeClass(IMyConfigRepository repo)
{
_repo = repo;
}
public void DoSomethingThatNeedsTheConfigSettings()
{
var settings = _repo.Load();
//...
}
}
Now implement the repository interface the way you want (today you want the settings in a database, tomorrow it might be serializing to an .xml file, and next year a cloud service), and the config class as you need it.
And you're set: all you need now is a way to bind the interface to its implementation. Here's a Ninject example (written in a NinjectModule-derived class' Load method override):
Bind<IMyConfigRepository>().To<MyConfigSqlRepository>();
Then, you can just swap the implementation for a MyConfigCloudRepository or a MyConfigXmlRepository implementation when/if you ever need one.
Being an ASP.NET application, just make sure you wire up those dependencies in your Global.asax file (at app start-up), and then any class that has an IMyConfigRepository constructor parameter will be injected with a MyConfigSqlRepository, which will give you MyConfig objects that you can load and save as you please.
If you're not using an IoC container, then you would just new up the MyConfigSqlRepository at app start-up, and manually inject the instance into the constructors of the types that need it.
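For example, here is a rough sketch of that manual wiring in Global.asax (MyConfigSqlRepository being whatever IMyConfigRepository implementation you wrote):
// Sketch: composition root without an IoC container.
protected void Application_Start(object sender, EventArgs e)
{
    IMyConfigRepository configRepository = new MyConfigSqlRepository();

    // hand the repository to the types that need it, e.g.:
    var worker = new SomeClass(configRepository);

    // keep configRepository (or the composed objects) somewhere reachable,
    // such as a static property on this Global application class
}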
The only thing with this approach is that, if you don't already have a dependency-injection-friendly app structure, it might mean extensive refactoring to decouple objects and eliminate the newing-up of dependencies. That refactoring makes unit tests much easier to keep focused on a single aspect and makes dependencies much easier to mock, among other advantages.
I am making an application in C#. I am using .NET Remoting to call a method of a Windows application from a web application. For communication between the Windows and web applications, I made one remoting object in which I declared one method. In the Windows application I have a collection of one class, and that class is declared in the remote object.
Now my problem is that whenever I call the interface method, the collection count becomes zero. Before calling that method, it contains some data.
Also, whenever I insert hard-coded values it works, but whenever I insert runtime values it gives a problem. I am using threading to insert the data into the collection.
The remote object has two components, StreamDataInfo.cs and IRemoteStreamData.cs. These are two different classes in one class library.
namespace StreamDataService
{
public interface IRemoteStreamData
{
List<string> GetPatientHistory(string BedID);
}
}
namespace StreamDataService
{
[Serializable]
public class StreamDataInfo : MarshalByRefObject
{
public string m_PortNumber { get; set; }
public string m_BedID { get; set; }
public List<string> m_StreamData { get; set; }
}
}
And in the server application I wrote the interface method as:
public List<string> GetPatientHistory(string PortNumber)
{
StreamDataInfo objStreamDataInfo = new StreamDataInfo();
lock (this)
{
objStreamDataInfo = (from temp in listStreamDataInfo
where temp.m_PortNumber.Equals(PortNumber.ToString())
select temp).SingleOrDefault();
}
return objStreamDataInfo.m_StreamData;
}
Please help me. Thanks in advance.
Generic collections are not supported in remoting. You can either use arrays or try your own implementation (a VB sample is here).
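For instance, a hedged sketch of the array approach applied to the posted code (syncRoot is an illustrative private lock object used instead of lock(this)):
// Sketch: expose an array instead of List<string> across the remoting boundary.
public interface IRemoteStreamData
{
    string[] GetPatientHistory(string BedID);
}

// server-side implementation
public string[] GetPatientHistory(string PortNumber)
{
    StreamDataInfo info;
    lock (syncRoot)
    {
        info = (from temp in listStreamDataInfo
                where temp.m_PortNumber.Equals(PortNumber)
                select temp).SingleOrDefault();
    }
    return info == null ? new string[0] : info.m_StreamData.ToArray();
}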
I'm trying to pass a complex object via Windows Communication Foundation, but I get read errors. I'm able to binary-format the object to a file and then reload and deserialize it. All the component/referenced classes are marked with the Serializable attribute. As a workaround I have serialized the object to a memory stream, passed the memory stream over WCF and then deserialized it at the other end. Although I could live with this solution, it doesn't seem very neat. I can't seem to work out what the criteria are for being able to read from the proxy. Relatively simple objects, even ones that include a reference to another class, can be passed and read without any attribute at all. Any advice welcomed.
Edit: Unrecognised error 109 (0x6d), System.IO.IOException: The read operation failed.
Edit: as requested, here's the class and the base class. It's pretty complicated, which is why I didn't include the code at the start, but it binary-serializes fine.
[Serializable]
public class View : Descrip
{
//MsgSentCoreDel msgHandler;
public Charac playerCharac { get; internal set;}
KeyList<UnitV> unitVs;
public override IReadList<Unit> units { get { return unitVs; } }
public View(Scen scen, Charac playerCharacI /* , MsgSentCoreDel msgHandlerI */)
{
playerCharac = playerCharacI;
//msgHandler = msgHandlerI;
DateTime dateTimeI = scen.dateTime;
polities = new PolityList(this, scen.polities);
characs = new CharacList(this, scen.characs);
unitVs = new KeyList<UnitV>();
scen.unitCs.ForEach(i => unitVs.Add(new UnitV(this, i)));
if (scen.map is MapFlat)
map = new MapFlat(this, scen.map as MapFlat);
else
throw new Exception("Unknown map type in View constructor");
map.Copy(scen.map);
}
public void SendMsg(MsgCore msg)
{
msg.dateT = dateTime;
//msgHandler(msg);
}
}
And here's the base class:
[Serializable]
public abstract class Descrip
{
public DateTime dateTime { get; set; }
public MapStrat map { get; set; }
public CharacList characs { get; protected set; }
public PolityList polities { get; protected set; }
public abstract IReadList<Unit> units { get; }
public GridElList<Hex> hexs { get { return map.hexs; } }
public GridElList<HexSide> sides { get { return map.sides; } }
public Polity noPolity { get { return polities.none; } }
public double hexScale {get { return map.HexScale;}}
protected Descrip ()
{
}
public MapArea newMapArea()
{
return new MapArea(this, true);
}
}
I suggest that you take a look at the MSDN documentation for DataContracts in WCF since that provides some very helpful guidance.
Update
Based on the provided code and exception information, there are two areas of suspicion:
1) Collections and Dictionaries, especially those that are generics-based, always give the WCF client a hard time since it will not differentiate between two of these types of objects with what it considers to be the same signature. This will usually result in a deserialization error on the client, though, so this may not be your problem.
If it is your problem, I have outlined some of the steps to take on the client in my answer to this question.
2) You could have, somewhere in your hierarchy, a class that is not serializable.
If your WCF service is hosted in IIS, then the most invaluable tool that I have found for tracking down this kind of issue is the built-in WCF logger. To enable this logging, add the following to your web.config file in the main configuration section:
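A typical trace configuration looks something like this (adjust the initializeData path to wherever you want the .svclog written):
<system.diagnostics>
  <sources>
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <add name="traceListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="c:\logs\WcfTrace.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>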
After you have generated the error, double-click on the svclog file and the Microsoft Service Trace Viewer will be launched. The items in red on the left-hand side are where exceptions occur and after selecting one, you can drill into its detail on the right hand side and it usually tells you exactly which item it had a problem with. Once we found this tool, tracking down these issues went from hours to minutes.
You should use the DataContract and DataMember attributes to be explicit about which fields WCF should serialise, or else implement ISerializable and write the (de-)serialisation yourself.
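A hedged sketch of what that looks like on the posted classes (only a few members shown; every member you want on the wire needs [DataMember], and the serializer needs to be told about the derived type):
// Sketch: opt-in serialization with data contracts (using System.Runtime.Serialization).
[DataContract]
[KnownType(typeof(View))]
public abstract class Descrip
{
    [DataMember] public DateTime dateTime { get; set; }
    [DataMember] public MapStrat map { get; set; }
    // ... mark the remaining members you want serialized with [DataMember]
}

[DataContract]
public class View : Descrip
{
    [DataMember] public Charac playerCharac { get; internal set; }
    // ...
}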