Can we deserialize an InstallState file? - C#

I am trying to pass the information in the InstallState file to the installer class, which will then uninstall.
But before passing it, I need to convert the info to a System.Collections.IDictionary savedState.
For this, is it possible to deserialize the InstallState file?
Screenshot of the InstallState file

If you use the AssemblyInstaller class, it appears (although this doesn't seem to be documented) that it will, in general, ignore any passed savedState parameter and will instead deal with the INSTALLSTATE file directly (writing it during install, reading it during uninstall).
If you're unable to use it for some reason, you can probably use a disassembly tool to extract the necessary code from its Uninstall method to perform the deserialization. I believe (and it appears so from the decompiled source) that the specific serialization methods used vary between .NET versions, so I'd recommend decompiling whichever .NET version you're currently working with.
This is the Uninstall method, decompiled from System.Configuration.Install (File Version 4.6.1590.0):
public override void Uninstall(IDictionary savedState)
{
    this.PrintStartText(Res.GetString("InstallActivityUninstalling"));
    if (!this.initialized)
    {
        this.InitializeFromAssembly();
    }
    string installStatePath = this.GetInstallStatePath(this.Path);
    if ((installStatePath != null) && File.Exists(installStatePath))
    {
        FileStream input = new FileStream(installStatePath, FileMode.Open, FileAccess.Read);
        XmlReaderSettings settings = new XmlReaderSettings {
            CheckCharacters = false,
            CloseInput = false
        };
        XmlReader reader = null;
        if (input != null)
        {
            reader = XmlReader.Create(input, settings);
        }
        try
        {
            if (reader != null)
            {
                NetDataContractSerializer serializer = new NetDataContractSerializer();
                savedState = (Hashtable) serializer.ReadObject(reader);
            }
            goto Label_00C6;
        }
        catch
        {
            object[] args = new object[] { this.Path, installStatePath };
            base.Context.LogMessage(Res.GetString("InstallSavedStateFileCorruptedWarning", args));
            savedState = null;
            goto Label_00C6;
        }
        finally
        {
            if (reader != null)
            {
                reader.Close();
            }
            if (input != null)
            {
                input.Close();
            }
        }
    }
    savedState = null;
Label_00C6:
    base.Uninstall(savedState);
    if ((installStatePath != null) && (installStatePath.Length != 0))
    {
        try
        {
            File.Delete(installStatePath);
        }
        catch
        {
            object[] objArray2 = new object[] { installStatePath };
            throw new InvalidOperationException(Res.GetString("InstallUnableDeleteFile", objArray2));
        }
    }
}
You'll notice that it doesn't use whatever was passed to it as savedState - by the time it uses that variable for anything (here, passing it to its base class), it has either overwritten it from the INSTALLSTATE file or assigned null to it.
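If you do need to read the file yourself, here is a minimal sketch distilled from the decompiled method above. The class and method names are placeholders of mine; it assumes the .InstallState file was written by the same .NET version you're running against, since the on-disk format (NetDataContractSerializer over XML) is undocumented and has changed between versions:
using System.Collections;
using System.IO;
using System.Runtime.Serialization;
using System.Xml;

static class InstallStateReader
{
    // Reads an .InstallState file back into the IDictionary shape expected by
    // Installer.Uninstall(), mirroring the decompiled logic above.
    public static IDictionary ReadSavedState(string installStatePath)
    {
        var settings = new XmlReaderSettings { CheckCharacters = false, CloseInput = false };
        using (var input = new FileStream(installStatePath, FileMode.Open, FileAccess.Read))
        using (var reader = XmlReader.Create(input, settings))
        {
            var serializer = new NetDataContractSerializer();
            return (Hashtable)serializer.ReadObject(reader);
        }
    }
}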

Related

Compare CSV Header to Map Class

I have a process whereby we have written a class to import a large (ish) CSV into our app using CsvHelper (https://joshclose.github.io/CsvHelper).
I would like to compare the header to the Map to ensure the header's integrity. We get the CSV file from a 3rd party and I want to ensure it doesn't change over time and thought the best way to do this would be to compare it against the map.
We have a class set up as so (trimmed):
public class VisitExport
{
    public int? Count { get; set; }
    public string CustomerName { get; set; }
    public string CustomerAddress { get; set; }
}
And its corresponding map (also trimmed):
public class VisitMap : ClassMap<VisitExport>
{
    public VisitMap()
    {
        Map(m => m.Count).Name("Count");
        Map(m => m.CustomerName).Name("Customer Name");
        Map(m => m.CustomerAddress).Name("Customer Address");
    }
}
This is the code I have for reading the CSV file and it works great. I have a try/catch in place for errors but ideally, if it fails specifically because of a header mismatch, I'd like to handle that specifically.
private void fileLoadedLink_LinkClicked(object sender, LinkLabelLinkClickedEventArgs e)
{
    try
    {
        var filePath = string.Empty;
        data = new List<VisitExport>();
        using (OpenFileDialog openFileDialog = new OpenFileDialog())
        {
            openFileDialog.InitialDirectory = new KnownFolder(KnownFolderType.Downloads).Path;
            openFileDialog.Filter = "csv files (*.csv)|*.csv";
            openFileDialog.FilterIndex = 2;
            openFileDialog.RestoreDirectory = true;
            if (openFileDialog.ShowDialog() == DialogResult.OK)
            {
                filePath = openFileDialog.FileName;
                var fileStream = openFileDialog.OpenFile();
                var culture = CultureInfo.GetCultureInfo("en-GB");
                using (StreamReader reader = new StreamReader(fileStream))
                using (var readCsv = new CsvReader(reader, culture))
                {
                    var map = new VisitMap();
                    readCsv.Context.RegisterClassMap(map);
                    var fileContent = readCsv.GetRecords<VisitExport>();
                    data = fileContent.ToList();
                    fileLoadedLink.Text = filePath;
                    viewModel.IsFileLoaded = true;
                }
            }
        }
    }
    catch (CsvHelperException ex)
    {
        Console.WriteLine(ex.InnerException != null ? ex.InnerException.Message : ex.Message);
        fileLoadedLink.Text = "Error loading file.";
        viewModel.IsFileLoaded = false;
    }
}
Is there a way of comparing the CSV header to my map?
There are two basic cases for CSV files with headers: missing CSV columns, and extra CSV columns. The first is already detected by CsvHelper while the detection of the second is not implemented out of the box and requires subclassing of CsvReader.
(As CsvHelper maps CSV columns to model properties by name, permuting the order of the columns in the CSV file would not be considered a breaking change.)
Note that this only applies to CSV files that actually contain headers. Since you are not setting CsvConfiguration.HasHeaderRecord = false I assume that this applies to your use case.
Details about each of the two cases follow.
Missing CSV columns.
Currently CsvHelper already throws an exception by default in such situations. When unmapped data model properties are found, CsvConfiguration.HeaderValidated is invoked. By default this is set to ConfigurationFunctions.HeaderValidated whose current behavior is to throw a HeaderValidationException if there are any unmapped model properties. You can replace or extend HeaderValidated with logic of your own if you prefer:
var culture = CultureInfo.GetCultureInfo("en-GB");
var config = new CsvConfiguration(culture)
{
    HeaderValidated = (args) =>
    {
        // Add additional logic as required here
        ConfigurationFunctions.HeaderValidated(args);
    },
};

using (var readCsv = new CsvReader(reader, config))
{
    // Remainder unchanged
Demo fiddle #1 here.
Extra CSV columns.
Currently CsvHelper does not inform the application when this happens. See Throw if csv contains unexpected columns #1032 which confirms that this is not implemented out of the box.
In a GitHub comment, user leopignataro suggests a workaround, which is to subclass CsvReader and add the necessary validation logic oneself. However, the version shown in the comment doesn't seem to handle duplicated column names or embedded references. The following subclass of CsvReader should do this correctly. It is based on the logic in CsvReader.ValidateHeader(ClassMap map, List<InvalidHeader> invalidHeaders). It recursively walks the incoming ClassMap, attempts to find a CSV header corresponding to each member or constructor parameter, and flags the index of each one that is mapped. Afterwards, if there are any unmapped headers, the supplied Action<CsvContext, List<string>> OnUnmappedCsvHeaders is invoked to notify the application of the problem and throw some exception if desired:
public class ValidatingCsvReader : CsvReader
{
    public ValidatingCsvReader(TextReader reader, CultureInfo culture, bool leaveOpen = false) : this(new CsvParser(reader, culture, leaveOpen)) { }

    public ValidatingCsvReader(TextReader reader, CsvConfiguration configuration) : this(new CsvParser(reader, configuration)) { }

    public ValidatingCsvReader(IParser parser) : base(parser) { }

    public Action<CsvContext, List<string>> OnUnmappedCsvHeaders { get; set; }

    public override void ValidateHeader(Type type)
    {
        base.ValidateHeader(type);
        var headerRecord = HeaderRecord;
        var mapped = new BitArray(headerRecord.Length);
        var map = Context.Maps[type];
        FlagMappedHeaders(map, mapped);
        var unmappedHeaders = Enumerable.Range(0, headerRecord.Length).Where(i => !mapped[i]).Select(i => headerRecord[i]).ToList();
        if (unmappedHeaders.Count > 0)
        {
            OnUnmappedCsvHeaders?.Invoke(Context, unmappedHeaders);
        }
    }

    protected virtual void FlagMappedHeaders(ClassMap map, BitArray mapped)
    {
        // Logic adapted from https://github.com/JoshClose/CsvHelper/blob/0d753ff09294b425e4bc5ab346145702eeeb1b6f/src/CsvHelper/CsvReader.cs#L157
        // By https://github.com/JoshClose
        foreach (var parameter in map.ParameterMaps)
        {
            if (parameter.Data.Ignore)
                continue;
            if (parameter.Data.IsConstantSet)
                // If ConvertUsing and Constant don't require a header.
                continue;
            if (parameter.Data.IsIndexSet && !parameter.Data.IsNameSet)
                // If there is only an index set, we don't want to validate the header name.
                continue;
            if (parameter.ConstructorTypeMap != null)
            {
                FlagMappedHeaders(parameter.ConstructorTypeMap, mapped);
            }
            else if (parameter.ReferenceMap != null)
            {
                FlagMappedHeaders(parameter.ReferenceMap.Data.Mapping, mapped);
            }
            else
            {
                var index = GetFieldIndex(parameter.Data.Names.ToArray(), parameter.Data.NameIndex, true);
                if (index >= 0)
                    mapped.Set(index, true);
            }
        }
        foreach (var memberMap in map.MemberMaps)
        {
            if (memberMap.Data.Ignore || !CanRead(memberMap))
                continue;
            if (memberMap.Data.ReadingConvertExpression != null || memberMap.Data.IsConstantSet)
                // If ConvertUsing and Constant don't require a header.
                continue;
            if (memberMap.Data.IsIndexSet && !memberMap.Data.IsNameSet)
                // If there is only an index set, we don't want to validate the header name.
                continue;
            var index = GetFieldIndex(memberMap.Data.Names.ToArray(), memberMap.Data.NameIndex, true);
            if (index >= 0)
                mapped.Set(index, true);
        }
        foreach (var referenceMap in map.ReferenceMaps)
        {
            if (!CanRead(referenceMap))
                continue;
            FlagMappedHeaders(referenceMap.Data.Mapping, mapped);
        }
    }
}
And then in your code, handle the OnUnmappedCsvHeaders callback however you would like, such as by throwing a CsvHelperException or some other custom exception:
using (var readCsv = new ValidatingCsvReader(reader, culture)
{
    OnUnmappedCsvHeaders = (context, headers) => throw new CsvHelperException(context, string.Format("Unmapped CSV headers: \"{0}\"", string.Join(",", headers))),
})
Demo fiddles:
#2 (your model).
#3 (with external references).
#4 (duplicate names).
#5 (using the auto-generated map).
This could use additional testing, e.g. for data models with parameterized constructors and additional, mutable properties.
How about catching HeaderValidationException before catching CsvHelperException?
catch (HeaderValidationException ex)
{
    var message = ex.Message.Split('\n')[0];
    var currentHeader = ex.Context.Reader.HeaderRecord;
    message += $"{Environment.NewLine}Header: \"{string.Join(",", currentHeader)}\"";
    Console.WriteLine(message);
    fileLoadedLink.Text = "Error loading file.";
    viewModel.IsFileLoaded = false;
}
catch (CsvHelperException ex)
{
    Console.WriteLine(ex.InnerException != null ? ex.InnerException.Message : ex.Message);
    fileLoadedLink.Text = "Error loading file.";
    viewModel.IsFileLoaded = false;
}

High memory usage of a Windows Forms method

I have a C# Windows Forms application which uses too much memory. The piece of code that is the problem is this:
private void mainTimer_Tick(object sender, EventArgs e)
{
    try
    {
        if (DateTime.Now.DayOfWeek == DayOfWeek.Saturday)
        {
            if (File.Exists(Globals.pathNotifFile + "1"))
            {
                File.Delete(Globals.pathNotifFile + "1");
                File.Move(Globals.pathNotifFile, Globals.pathNotifFile + "1");
            }
            File.Move(Globals.pathNotifFile, Globals.pathNotifFile + "1");
        }
        if (DateTime.Now.DayOfWeek == DayOfWeek.Sunday)
        {
            return;
        }
        if (Globals.shotsFired != true)
        {
            CreateDLclient();
            Globals.shotsFired = true;
        }
        if (Globals.pathNotifFile == null)
        {
            return;
        }
        var data = Deserialize();
        foreach (var notifyData in data.@new) // "#new" in the original presumably means the verbatim identifier @new ("new" is a C# keyword)
        {
            if (notifyData.Status == "1" || notifyData.Status == string.Empty)
            {
                if (DateTime.Now >= Convert.ToDateTime(notifyData.DateTime))
                {
                    if (notifyData.Message != string.Empty)
                    {
                        notifyData.Status = SendMessageToUser(notifyData.Message, notifyData.Company, notifyData.EmojiCode);
                        Serialize(data);
                    }
                    else
                    {
                        notifyData.Status = "3";
                        Serialize(data);
                    }
                }
                else if (DateTime.Now >= Convert.ToDateTime(notifyData.DateTime).AddMinutes(5))
                {
                    if (notifyData.Message != string.Empty)
                    {
                        notifyData.Status = SendMessageToUser(notifyData.Message, notifyData.Company, notifyData.EmojiCode);
                        Serialize(data);
                    }
                    else
                    {
                        notifyData.Status = "3";
                        Serialize(data);
                    }
                }
            }
        }
    }
    catch
    {
        // The original snippet was truncated here; a catch (or finally)
        // block is required for the try above to compile.
    }
}
It causes a huge problem and the application crashes with 'out of memory'. Can somebody give me advice on how to reduce the memory usage? I've tried to invoke the GC (I know it is not a good idea), but it didn't help.
Thank you in advance.
You have not provided any info on which serializer you use in your program, but I am quite inclined to think it is XmlSerializer, because it is prone to memory leaks and you said in your comment that the program crashes after working more than 10-12 hours.
XmlSerializer uses assembly generation, and assemblies cannot be collected. As far as I know it does some caching for re-use, but only for simple cases.
So if you have code like the following, which is called pretty often:
XmlSerializer xml = new XmlSerializer(typeof(MyObject), ....
then you will get an out-of-memory exception sooner or later.
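For illustration, here is a minimal sketch of the leaky pattern, with MyObject as a stand-in type. Per the MSDN topic referenced below, only the (Type) and (Type, String) constructor overloads cache the generated assembly:
// Leaks: overloads other than (Type) and (Type, defaultNamespace) generate
// a fresh dynamic assembly on each call, and assemblies are never unloaded
// from the process.
var leaky = new XmlSerializer(typeof(MyObject), new XmlRootAttribute("root"));

// Cached: these overloads reuse the generated assembly across calls.
var cached = new XmlSerializer(typeof(MyObject));
var cachedWithNamespace = new XmlSerializer(typeof(MyObject), "urn:example");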
Edit: How to avoid the memory leak from XmlSerializer:
Please have a look at the topic Dynamically Generated Assemblies in the following link → MSDN Link.
If I summarize what is written there, you have a couple of ways:
1) Use only the following constructors, which cache the dynamically generated assembly:
XmlSerializer.XmlSerializer(Type)
XmlSerializer.XmlSerializer(Type, String)
2) Using a dictionary or hashtable, create your own caching:
private Dictionary<Tuple<Type, XmlRootAttribute>, XmlSerializer> cacheSerializer = new Dictionary<Tuple<Type, XmlRootAttribute>, XmlSerializer>();

public XmlSerializer GetXmlSerializer(Type type, XmlRootAttribute root)
{
    var key = Tuple.Create(type, root);
    XmlSerializer xmlSerializer;
    if (cacheSerializer.TryGetValue(key, out xmlSerializer))
    {
        return xmlSerializer;
    }
    xmlSerializer = new XmlSerializer(type, root);
    cacheSerializer.Add(key, xmlSerializer);
    return xmlSerializer;
}
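A hedged usage sketch, assuming it lives in the same class as GetXmlSerializer (MyObject and WriteToStream are stand-ins). Note that XmlRootAttribute does not override Equals, so the Tuple key compares it by reference; a single shared instance must be reused or the cache will never hit:
// Reuse one XmlRootAttribute instance so the Tuple key matches on later calls.
private static readonly XmlRootAttribute Root = new XmlRootAttribute("myObject");

public void WriteToStream(MyObject obj, Stream stream)
{
    // Hits the cache after the first call instead of generating a new assembly.
    XmlSerializer serializer = GetXmlSerializer(typeof(MyObject), Root);
    serializer.Serialize(stream, obj);
}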

JObject.SelectToken Equivalent in .NET

I need to remove the outer node of a JSON document. For example:
{
    app: {
        ...
    }
}
Any ideas on how to remove the outer node, so we get only
{
    ...
}
WITHOUT using JSON.NET, only tools in the .NET Framework (C#).
In Json.NET I used:
JObject.Parse(json).SelectToken("app").ToString();
Alternatively, any configuration of the DataContractJsonSerializer so that it ignores the root when deserializing would also work. The way I do the deserialization now is:
protected T DeserializeJsonString<T>(string jsonString)
{
    T tempObject = default(T);
    using (var memoryStream = new MemoryStream(Encoding.Unicode.GetBytes(jsonString)))
    {
        var serializer = new DataContractJsonSerializer(typeof(T));
        tempObject = (T)serializer.ReadObject(memoryStream);
    }
    return tempObject;
}
Note that the root object's property name can differ from case to case. For example it can be "transaction".
Thanks for any suggestion.
There is no equivalent to SelectToken built into .Net. But if you simply want to unwrap an outer root node and do not know the node name in advance, you have the following options.
If you are using .Net 4.5 or later, you can deserialize to a Dictionary<string, T> with DataContractJsonSerializer.UseSimpleDictionaryFormat = true:
protected T DeserializeNestedJsonString<T>(string jsonString)
{
    using (var memoryStream = new MemoryStream(Encoding.Unicode.GetBytes(jsonString)))
    {
        var serializer = new DataContractJsonSerializer(typeof(Dictionary<string, T>));
        serializer.UseSimpleDictionaryFormat = true;
        var dictionary = (Dictionary<string, T>)serializer.ReadObject(memoryStream);
        if (dictionary == null || dictionary.Count == 0)
            return default(T);
        else if (dictionary.Count == 1)
            return dictionary.Values.Single();
        else
        {
            throw new InvalidOperationException("Root object has too many properties");
        }
    }
}
Note that if your root object contains more than one property, you cannot deserialize to a Dictionary<TKey, TValue> to get the first property since the order of the items in this class is undefined.
On any version of .Net that supports the data contract serializers, you can take advantage of the fact that DataContractJsonSerializer inherits from XmlObjectSerializer to call JsonReaderWriterFactory.CreateJsonReader() to create an XmlReader that actually reads JSON, then skip forward to the first nested "element":
protected T DeserializeNestedJsonStringWithReader<T>(string jsonString)
{
    var reader = JsonReaderWriterFactory.CreateJsonReader(Encoding.Unicode.GetBytes(jsonString), System.Xml.XmlDictionaryReaderQuotas.Max);
    int elementCount = 0;
    while (reader.Read())
    {
        if (reader.NodeType == System.Xml.XmlNodeType.Element)
            elementCount++;
        if (elementCount == 2) // At elementCount == 1 there is a synthetic "root" element
        {
            var serializer = new DataContractJsonSerializer(typeof(T));
            return (T)serializer.ReadObject(reader, false);
        }
    }
    return default(T);
}
This technique looks odd (parsing JSON with an XmlReader?), but with some extra work it should be possible to extend this idea to create SAX-like parsing functionality for JSON that is similar to SelectToken(), skipping forward in the JSON until a desired property is found, then deserializing its value.
For instance, to select and deserialize specific named properties, rather than just the first root property, the following can be used:
public static class DataContractJsonSerializerExtensions
{
    public static T DeserializeNestedJsonProperty<T>(string jsonString, string rootPropertyName)
    {
        // Check for count == 2 because there is a synthetic <root> element at the top.
        Predicate<Stack<string>> match = s => s.Count == 2 && s.Peek() == rootPropertyName;
        return DeserializeNestedJsonProperties<T>(jsonString, match).FirstOrDefault();
    }

    public static IEnumerable<T> DeserializeNestedJsonProperties<T>(string jsonString, Predicate<Stack<string>> match)
    {
        DataContractJsonSerializer serializer = null;
        using (var reader = JsonReaderWriterFactory.CreateJsonReader(Encoding.UTF8.GetBytes(jsonString), XmlDictionaryReaderQuotas.Max))
        {
            var stack = new Stack<string>();
            while (reader.Read())
            {
                if (reader.NodeType == System.Xml.XmlNodeType.Element)
                {
                    stack.Push(reader.Name);
                    if (match(stack))
                    {
                        serializer = serializer ?? new DataContractJsonSerializer(typeof(T));
                        yield return (T)serializer.ReadObject(reader, false);
                    }
                    if (reader.IsEmptyElement)
                        stack.Pop();
                }
                else if (reader.NodeType == XmlNodeType.EndElement)
                {
                    stack.Pop();
                }
            }
        }
    }
}
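For instance, with the question's JSON, something along these lines should return just the inner object. The App data contract here is a hypothetical stand-in for whatever the inner object actually contains:
// Hypothetical model for the inner object of {"app": {...}}.
[DataContract]
public class App
{
    [DataMember(Name = "name")]
    public string Name { get; set; }
}

// Selects the value of the root "app" property and deserializes it:
App app = DataContractJsonSerializerExtensions.DeserializeNestedJsonProperty<App>(jsonString, "app");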
See Mapping Between JSON and XML for details on how JsonReaderWriterFactory maps JSON to XML.

Strategy for splitting a large JSON file

I'm trying to split very large JSON files into smaller files for a given array. For example:
{
    "headerName1": "headerVal1",
    "headerName2": "headerVal2",
    "headerName3": [
        { "element1Name1": "element1Value1" },
        { "element2Name1": "element2Value1" },
        { "element3Name1": "element3Value1" },
        { "element4Name1": "element4Value1" },
        { "element5Name1": "element5Value1" },
        { "element6Name1": "element6Value1" }
    ]
}
...down to { "elementNName1": "elementNValue1" } where N is a large number
The user provides the name which represents the array to be split (in this example "headerName3") and the number of array objects per file, e.g. 1,000,000
This would result in N files each containing the top name:value pairs (headerName1, headerName2) and up to 1,000,000 of the headerName3 objects in each file.
I'm using the excellent Newtonsoft Json.NET and understand that I need to do this using a stream.
So far I have looked at reading in JToken objects to establish where PropertyName == "headerName3" occurs, but what I would like to do is then read in the entire JSON object for each object in the array, rather than continuing to parse everything into JTokens.
Here's a snippet of the code I am building so far:
using (StreamReader oSR = File.OpenText(strInput))
{
    using (var reader = new JsonTextReader(oSR))
    {
        while (reader.Read())
        {
            if (reader.TokenType == JsonToken.StartObject)
            {
                intObjectCount++;
            }
            else if (reader.TokenType == JsonToken.EndObject)
            {
                intObjectCount--;
                if (intObjectCount == 1)
                {
                    intArrayRecordCount++;
                    // Here I want to read the entire object for this record into an untyped JSON object
                    if (intArrayRecordCount % 1000000 == 0)
                    {
                        // write these to the split file
                    }
                }
            }
        }
    }
}
I don't know - and in fact, and am not concerned with - the structure of the JSON itself, and the objects can be of varying structures within the array. I am therefore not serializing to classes.
Is this the right approach? Is there a set of methods in the JSON.net library I can easily use to perform such operation?
Any help appreciated.
You can use JsonWriter.WriteToken(JsonReader reader, true) to stream individual array entries and their descendants from a JsonReader to a JsonWriter. You can also use JProperty.Load(JsonReader reader) and JProperty.WriteTo(JsonWriter writer) to read and write entire properties and their descendants.
Using these methods, you can create a state machine that parses the JSON file, iterates through the root object, loads "prefix" and "postfix" properties, splits the array property, and writes the prefix, array slice, and postfix properties out to new file(s).
Here's a prototype implementation that takes a TextReader and a callback function to create sequential output TextWriter objects for the split file:
enum SplitState
{
    InPrefix,
    InSplitProperty,
    InSplitArray,
    InPostfix,
}

public static void SplitJson(TextReader textReader, string tokenName, long maxItems, Func<int, TextWriter> createStream, Formatting formatting)
{
    List<JProperty> prefixProperties = new List<JProperty>();
    List<JProperty> postFixProperties = new List<JProperty>();
    List<JsonWriter> writers = new List<JsonWriter>();

    SplitState state = SplitState.InPrefix;
    long count = 0;

    try
    {
        using (var reader = new JsonTextReader(textReader))
        {
            bool doRead = true;
            while (doRead ? reader.Read() : true)
            {
                doRead = true;
                if (reader.TokenType == JsonToken.Comment || reader.TokenType == JsonToken.None)
                    continue;
                if (reader.Depth == 0)
                {
                    if (reader.TokenType != JsonToken.StartObject && reader.TokenType != JsonToken.EndObject)
                        throw new JsonException("JSON root container is not an Object");
                }
                else if (reader.Depth == 1 && reader.TokenType == JsonToken.PropertyName)
                {
                    if ((string)reader.Value == tokenName)
                    {
                        state = SplitState.InSplitProperty;
                    }
                    else
                    {
                        if (state == SplitState.InSplitProperty)
                            state = SplitState.InPostfix;
                        var property = JProperty.Load(reader);
                        doRead = false; // JProperty.Load() will have already advanced the reader.
                        if (state == SplitState.InPrefix)
                        {
                            prefixProperties.Add(property);
                        }
                        else
                        {
                            postFixProperties.Add(property);
                        }
                    }
                }
                else if (reader.Depth == 1 && reader.TokenType == JsonToken.StartArray && state == SplitState.InSplitProperty)
                {
                    state = SplitState.InSplitArray;
                }
                else if (reader.Depth == 1 && reader.TokenType == JsonToken.EndArray && state == SplitState.InSplitArray)
                {
                    state = SplitState.InSplitProperty;
                }
                else if (state == SplitState.InSplitArray && reader.Depth == 2)
                {
                    if (count % maxItems == 0)
                    {
                        var writer = new JsonTextWriter(createStream(writers.Count)) { Formatting = formatting };
                        writers.Add(writer);
                        writer.WriteStartObject();
                        foreach (var property in prefixProperties)
                            property.WriteTo(writer);
                        writer.WritePropertyName(tokenName);
                        writer.WriteStartArray();
                    }
                    count++;
                    writers.Last().WriteToken(reader, true);
                }
                else
                {
                    throw new JsonException("Internal error");
                }
            }
        }
        foreach (var writer in writers)
            using (writer)
            {
                writer.WriteEndArray();
                foreach (var property in postFixProperties)
                    property.WriteTo(writer);
                writer.WriteEndObject();
            }
    }
    finally
    {
        // Make sure files are closed in the event of an exception.
        foreach (var writer in writers)
            using (writer)
            {
            }
    }
}
This method leaves all the files open until the end in case "postfix" properties, appearing after the array property, need to be appended. Be aware that there is a limit of 16384 open files at one time, so if you need to create more split files, this won't work. If postfix properties are never encountered in practice, you can just close each file before opening the next and throw an exception in case any postfix properties are found. Otherwise you may need to parse the large file in two passes or close and reopen the split files to append them.
Here is an example of how to use the method with an in-memory JSON string:
private static void TestSplitJson(string json, string tokenName)
{
    var builders = new List<StringBuilder>();
    using (var reader = new StringReader(json))
    {
        SplitJson(reader, tokenName, 2, i => { builders.Add(new StringBuilder()); return new StringWriter(builders.Last()); }, Formatting.Indented);
    }
    foreach (var s in builders.Select(b => b.ToString()))
    {
        Console.WriteLine(s);
    }
}
Prototype fiddle.

WPF: How do I validate that a number from memory is not duplicated in a db table?

I'm having trouble figuring out how to determine if a number is duplicated.
Right now, the process is: when the user clicks on a button to browse for an xml file, the xml file gets deserialized, stored in the db, and the data gets shown on a DataGrid on the view.
So, I added a confirmation dialog: when the user clicks on browse, the code checks whether the lot_number being deserialized is a duplicate of one inside a column from a table in the database. I only want the user to be able to add lot numbers to the db that are not duplicates.
Here's my code so far:
public void DeSerializationStream(string filePath)
{
    XmlRootAttribute xRoot = new XmlRootAttribute();
    xRoot.ElementName = "lot_information";
    xRoot.IsNullable = false;

    // Create an instance of the LotInformation class.
    var lot = new LotInformation();

    // Create an instance of StreamReader.
    TextReader txtReader = new StreamReader(filePath);

    // Create an instance of the XmlSerializer class.
    XmlSerializer xmlSerializer = new XmlSerializer(typeof(LotInformation), xRoot);

    // Deserialize from the StreamReader.
    lot = (LotInformation)xmlSerializer.Deserialize(txtReader);

    // Close the stream reader.
    txtReader.Close();
}
public void ReadLot(LotInformation lot)
{
    try
    {
        using (var db = new DDataContext())
        {
            var lotNumDb = db.LotInformation.FirstOrDefault(r => r.lot_number.Equals(r.lot_number));
            if (lotNumDb != null || lotNumDb.lot_number.ToString().Equals(lot.lot_number))
            {
                confirmationWindow.Message = LanguageResources.Resource.Sample_Exists_Already;
                dialogService.ShowDialog(LanguageResources.Resource.Error, confirmationWindow);
            }
            else
            {
                Console.WriteLine("lot does not exist. yay");
            }
            DateTime ExpirationDate = lot.exp_date;
            if (ExpirationDate != null)
            {
                if (DateTime.Compare(ExpirationDate, DateTime.Now) > 0)
                {
                    try
                    {
                        LotInformation lotInfo = db.LotInformation.FirstOrDefault(r => r.lot_number.Equals(lotNumber));
                    }
                    catch (InvalidOperationException e)
                    {
                        // TODO: Add a Dialog Here
                    }
                }
                else
                {
                    Console.WriteLine(ExpirationDate);
                    errorWindow.Message = LanguageResources.Resource.Lot_Expired;
                    dialogService.ShowDialog(LanguageResources.Resource.Error, errorWindow);
                }
            }
            else
            {
                errorWindow.Message = LanguageResources.Resource.Lot_Not_In_Database;
                dialogService.ShowDialog(LanguageResources.Resource.Error, errorWindow);
            }
        }
    }
    catch
    {
        errorWindow.Message = LanguageResources.Resource.Database_Error;
        dialogService.ShowDialog(LanguageResources.Resource.Error, errorWindow);
        logger.writeErrLog(LanguageResources.Resource.Database_Error);
    }
}
I think I'm just having problems with when to grab the lot_number in this process.
The part below gives me trouble. It keeps showing the 'Sample Exists Already' message for unique lot numbers that I'm uploading, and I'm not sure why. I think it's a problem with my LINQ query, but I'm not sure how to fix it. Any ideas?
var lotNumDb = db.LotInformation.FirstOrDefault(r => r.lot_number.Equals(r.lot_number));
if (lotNumDb != null || lotNumDb.lot_number.ToString().Equals(lot.lot_number))
{
    confirmationWindow.Message = LanguageResources.Resource.Sample_Exists_Already;
    dialogService.ShowDialog(LanguageResources.Resource.Error, confirmationWindow);
}
else
{
    Console.WriteLine("lot does not exist. yay");
}
You can't use it like this:
db.LotInformation.FirstOrDefault(r => r.lot_number.Equals(r.lot_number))
That predicate compares each row's lot_number to itself, which is always true, so the query simply returns the first record in the table. It should probably be:
db.LotInformation.FirstOrDefault(r => r.lot_number.Equals(lot.lot_number))
or
db.LotInformation.FirstOrDefault(r => r.lot_number.Equals(a string))
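For a pure existence check, a minimal sketch along these lines may be cleaner (DDataContext, lot, and the dialog fields are the names from the question; Any() avoids fetching the whole row):
bool isDuplicate;
using (var db = new DDataContext())
{
    // Translates to an EXISTS query rather than materializing a row.
    isDuplicate = db.LotInformation.Any(r => r.lot_number == lot.lot_number);
}

if (isDuplicate)
{
    confirmationWindow.Message = LanguageResources.Resource.Sample_Exists_Already;
    dialogService.ShowDialog(LanguageResources.Resource.Error, confirmationWindow);
}
else
{
    Console.WriteLine("lot does not exist. yay");
}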
