I have been working on an application which allows the user to make a label template for printing purposes by adding label controls to a panel (which I use as a container). I have reached the point where I need to be able to save the template to a file which I can load into memory later for printing. Since the form is not serializable, does anyone have suggestions on how I can save the form or container (with the added label controls) to a file that can be reused later?
Thanks.
I wouldn't directly serialize a form to a file. It sounds like you need to create a class that will hold the state of the user's work. You should then serialize that class to and from a file. There are built-in methods for that using either binary or XML serialization.
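For example, a minimal sketch of that idea using XmlSerializer might look like this (the class names, fields and file path are only placeholders, not a prescribed design):
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public class LabelState
{
    public string Text;
    public int Top;
    public int Left;
}

public class LabelTemplate
{
    public List<LabelState> Labels = new List<LabelState>();
}

// Save the state class to a file
public static void SaveTemplate(LabelTemplate template, string path)
{
    var serializer = new XmlSerializer(typeof(LabelTemplate));
    using (var stream = File.Create(path))
        serializer.Serialize(stream, template);
}

// Load it back later for printing
public static LabelTemplate LoadTemplate(string path)
{
    var serializer = new XmlSerializer(typeof(LabelTemplate));
    using (var stream = File.OpenRead(path))
        return (LabelTemplate)serializer.Deserialize(stream);
}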
Create a struct that contains enough information (and no more) about each Label that you can reconstitute the Label from it.
Write a method that takes a List<MyStruct> and populates a Panel from your structs.
Write methods to serialize and deserialize this list.
Encapsulate the whole thing in a class.
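A rough sketch of steps 1 and 2, assuming the struct only needs text and position (adjust the fields to whatever your labels actually use):
public struct MyStruct
{
    public string Text;
    public int Top;
    public int Left;
}

// Step 2: rebuild a panel from the saved structs
public void PopulatePanel(Panel panel, List<MyStruct> labels)
{
    panel.Controls.Clear();
    foreach (MyStruct s in labels)
    {
        panel.Controls.Add(new Label { Text = s.Text, Top = s.Top, Left = s.Left, AutoSize = true });
    }
}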
Try this. It uses the ISerializationSurrogate interface to get around the problem of the form object not being serializable:
How to serialize an object which is NOT marked as 'Serializable' using a surrogate.
http://www.codeproject.com/KB/dotnet/Surrogate_Serialization.aspx
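The gist of the surrogate approach is roughly the following; this is a sketch for a single Label rather than the whole form, not the article's exact code, and the property list is just an example:
using System.Runtime.Serialization;
using System.Runtime.Serialization.Formatters.Binary;
using System.Windows.Forms;

// Tells the formatter how to take a Label apart and put it back together
class LabelSurrogate : ISerializationSurrogate
{
    public void GetObjectData(object obj, SerializationInfo info, StreamingContext context)
    {
        var label = (Label)obj;
        info.AddValue("Text", label.Text);
        info.AddValue("Top", label.Top);
        info.AddValue("Left", label.Left);
    }

    public object SetObjectData(object obj, SerializationInfo info, StreamingContext context, ISurrogateSelector selector)
    {
        var label = (Label)obj;
        label.Text = info.GetString("Text");
        label.Top = info.GetInt32("Top");
        label.Left = info.GetInt32("Left");
        return label;
    }
}

// Register the surrogate with a BinaryFormatter before serializing
var selector = new SurrogateSelector();
selector.AddSurrogate(typeof(Label), new StreamingContext(StreamingContextStates.All), new LabelSurrogate());
var formatter = new BinaryFormatter { SurrogateSelector = selector };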
Personally, I would serialize it as JSON.
When bringing it back, you can use a generic method that loops through and sets the properties through reflection. Also note that the library I've linked to will automatically serialize objects that you pass to it.
JSON
JSON.NET
[{ "Label": [{"Top": 102}, {"Left": 105}, {"Text": "blah, blah"}] }]
From JSON.NET
Product product = new Product();
product.Name = "Apple";
product.Expiry = new DateTime(2008, 12, 28);
product.Price = 3.99M;
product.Sizes = new string[] { "Small", "Medium", "Large" };
string json = JsonConvert.SerializeObject(product);
//{
// "Name": "Apple",
// "Expiry": new Date(1230422400000),
// "Price": 3.99,
// "Sizes": [
// "Small",
// "Medium",
// "Large"
// ]
//}
Product deserializedProduct = JsonConvert.DeserializeObject<Product>(json);
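For the label case, the "set the properties through reflection" idea could look roughly like this; it's only a sketch, assuming each label was saved as a dictionary of property names and values, and that panel is the container from the question:
using System.Reflection;

// Apply saved name/value pairs back onto a control via reflection
public static void ApplyProperties(Control target, Dictionary<string, object> values)
{
    foreach (var pair in values)
    {
        PropertyInfo prop = target.GetType().GetProperty(pair.Key);
        if (prop != null && prop.CanWrite)
        {
            prop.SetValue(target, Convert.ChangeType(pair.Value, prop.PropertyType), null);
        }
    }
}

// e.g. after deserializing with JSON.NET
var saved = JsonConvert.DeserializeObject<List<Dictionary<string, object>>>(json);
foreach (var values in saved)
{
    var label = new Label();
    ApplyProperties(label, values);
    panel.Controls.Add(label);
}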
You can get the position, size and other properties of the form's controls at runtime and save that state in an XML or JSON file.
This isn't trivial, but personally I would set up a function that can be called recursively to add nodes to an XML document.
I don't have actual code, but the pseudo-code looks like this (you will need to do some clean-up, because I'm doing this off the top of my head without the aid of IntelliSense):
XmlDocument doc;

void SaveForm()
{
    doc = new XmlDocument();
    doc.AppendChild(doc.CreateElement("FormInfo"));
    foreach (Control ctrl in this.Controls)
    {
        AddControlToXml(ctrl, doc.DocumentElement);
    }
    doc.Save("template.xml");
}

void AddControlToXml(Control ctrl, XmlNode currentNode)
{
    // One element per control; add whatever other properties you need (Text, Top, Left, ...) the same way
    XmlElement n = doc.CreateElement("Control");
    n.SetAttribute("Name", ctrl.Name);
    currentNode.AppendChild(n);
    foreach (Control ctrl2 in ctrl.Controls)
    {
        AddControlToXml(ctrl2, n);
    }
}
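To load the template back later, a matching routine could walk the saved document the same way; this is only a sketch, assuming the file name used above and that you also stored the properties you care about (Text, Top, Left, ...) as attributes:
void LoadForm()
{
    XmlDocument doc = new XmlDocument();
    doc.Load("template.xml");
    foreach (XmlElement node in doc.DocumentElement.ChildNodes)
    {
        var label = new Label();
        label.Name = node.GetAttribute("Name");
        // read back whatever other attributes you stored (Text, Top, Left, ...)
        this.Controls.Add(label);
    }
}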
Related
I am trying to utilize Azure Cognitive Services to perform basic document extraction.
My intent is to input PDFs and DOCXs (and possibly some other files) into the Cognitive Engine for parsing, but unfortunately, the implementation of this is not as simple as it seems.
According to the documentation (https://learn.microsoft.com/en-us/azure/search/cognitive-search-skill-document-extraction#sample-definition), I must define the skill and then I should be able to input files, but there are no examples of how this should be done.
So far I have been able to define the skill, but I am still not sure where I should be dropping the files.
Please see my code below, as it seeks to replicate the same data structure shown in the example code (albeit using the C# library).
public static DocumentExtractionSkill CreateDocumentExtractionSkill()
{
List<InputFieldMappingEntry> inputMappings = new List<InputFieldMappingEntry>
{
new("file_data") {Source = "/document/file_data"}
};
List<OutputFieldMappingEntry> outputMappings = new List<OutputFieldMappingEntry>
{
new("content") {TargetName = "extracted_content"}
};
DocumentExtractionSkill des = new DocumentExtractionSkill(inputMappings, outputMappings)
{
Description = "Extract text (plain and structured) from image",
ParsingMode = BlobIndexerParsingMode.Text,
DataToExtract = BlobIndexerDataToExtract.ContentAndMetadata,
Context = "/document",
};
return des;
}
And then I build on this skill like so:
_indexerClient = new SearchIndexerClient(new Uri(Environment.GetEnvironmentVariable("SearchEndpoint")), new AzureKeyCredential(Environment.GetEnvironmentVariable("SearchKey")));
List<SearchIndexerSkill> skills = new List<SearchIndexerSkill> { Skills.DocExtractionSkill.CreateDocumentExtractionSkill() };
SearchIndexerSkillset skillset = new SearchIndexerSkillset("DocumentSkillset", skills)
{
Description = "Document Cracker Skillset",
CognitiveServicesAccount = new CognitiveServicesAccountKey(Environment.GetEnvironmentVariable("CognitiveServicesKey"))
};
await _indexerClient.CreateOrUpdateSkillsetAsync(skillset);
And... then what?
There is no clear method that would fit what I believe is the next stage: actually parsing documents.
What is the next step from here to begin dumping files into the _indexerClient (of type SearchIndexerClient)?
The next stage shown in the documentation is:
{
"values": [
{
"recordId": "1",
"data":
{
"file_data": {
"$type": "file",
"data": "aGVsbG8="
}
}
}
]
}
It is not clear where I would be doing this.
According to the document you mentioned, they are actually trying to get the output through Postman. They use a GET method to receive the extracted document content by sending a JSON request to the mentioned URL (i.e. the cognitive skill URL), and the files/documents need to be uploaded to your storage account in order to be extracted.
You can follow this tutorial to get more insight.
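In C#, that wiring looks roughly like the sketch below: a blob data source pointing at the container holding your files, plus an indexer that references the skillset created above. The names, the target index and the environment variables are placeholders, so check them against the Azure.Search.Documents documentation:
// Placeholder names and environment variables; adjust to your setup
var dataSource = new SearchIndexerDataSourceConnection(
    "documents-datasource",
    SearchIndexerDataSourceType.AzureBlob,
    Environment.GetEnvironmentVariable("BlobConnectionString"),
    new SearchIndexerDataContainer("documents"));
await _indexerClient.CreateOrUpdateDataSourceConnectionAsync(dataSource);

// The indexer ties the data source, the skillset and a target index together
var indexer = new SearchIndexer("documents-indexer", dataSource.Name, "documents-index")
{
    SkillsetName = skillset.Name
};
await _indexerClient.CreateOrUpdateIndexerAsync(indexer);

// Run (or re-run) the indexer to process the blobs in the container
await _indexerClient.RunIndexerAsync(indexer.Name);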
I am trying to use a tree view in a WinForms app with C# to allow selection of servers in data centers for a small inventory of applications. I am using the following .json as a locally stored, hard-coded file of the application inventory.
{
"App1": {
"DataCenter1": [ "DC1_serverA", "DC1_serverB", "DC1_serverC" ],
"DataCenter2": [ "DC2_serverX", "DC2_serverY", "DC2_serverZ" ]
},
"App2": {
"DataCenter1": [ "DC1_serverQ", "DC1_serverR", "DC1_serverT" ],
"CDC2": [ "DC2_serverM", "DC2_serverN", "DC2_serverP" ]
}
}
I am using C# to iterate over this JSON to dynamically create the tree view, allowing users to select which app / data center / server they want. My high-level code, where I can't figure out the leaf-level logic to extract the data, is this:
dynamic dynJson = JsonConvert.DeserializeObject(File.ReadAllText(@"servers.json"));
foreach (var item in dynJson)
{
TreeNode treeNodeCI = new TreeNode(item);
treCIListing.Nodes.Add(treeNodeCI);
}
I seem to be using the wrong handle to get at the items in the JSON. I can edit the .json file format to make it more suitable and easier to turn into a UI of selectable items in C#, like the attached picture.
I recommend doing it recursively, for example (since JsonConvert.DeserializeObject gives you back a JToken tree, the recursion can walk that directly):
private void FillTree(TreeNode parentNode, JToken token)
{
    if (token is JObject obj)
    {
        foreach (JProperty prop in obj.Properties())
        {
            FillTree(parentNode.Nodes.Add(prop.Name), prop.Value);
        }
    }
    else if (token is JArray array)
    {
        foreach (JToken item in array)
        {
            parentNode.Nodes.Add(item.ToString()); // leaf: the server name
        }
    }
}
then the initial call should be something like:
JObject dynJson = (JObject)JsonConvert.DeserializeObject(File.ReadAllText(@"servers.json"));
foreach (JProperty app in dynJson.Properties())
    FillTree(treCIListing.Nodes.Add(app.Name), app.Value);
I'm using the generic JToken/TreeNode types here, but you can handle the types more specifically for your own data.
I'm creating a piece of software to which I added a profiles feature, where the user can create a profile to load their information faster. To store this information, I'm using a JSON file, which contains as many objects as there are profiles.
Here is the format of the JSON file when it contains a profile (not the actual one, just an example):
{
"Profile-name": {
"form_email": "example#example.com",
//Many other informations...
}
}
Here is the code I'm using to write the JSON and its content:
string json = File.ReadAllText("profiles.json");
dynamic profiles = JsonConvert.DeserializeObject(json);
if (profiles == null)
{
File.WriteAllText(jsonFilePath, "{}");
json = File.ReadAllText(jsonFilePath);
profiles = JsonConvert.DeserializeObject<Dictionary<string, Profile_Name>>(json);
}
profiles.Add(profile_name.Text, new Profile_Name { form_email = form_email.Text });
var newJson = JsonConvert.SerializeObject(profiles, Formatting.Indented);
File.WriteAllText(jsonFilePath, newJson);
profile_tr.Nodes.Add(profile_name.Text, profile_name.Text);
debug_tb.Text += newJson;
When the profiles.json file is completely empty, the profile is successfully written, but when I try to ADD a profile while another one already exists, I get this error:
The best overloaded method match for 'Newtonsoft.Json.Linq.JObject.Add(string, Newtonsoft.Json.Linq.JToken)' has some invalid arguments on the profiles.Add(); line.
By the way, you may notice that I have to add the {} to the file in a non-trivial way when it's empty; maybe that has something to do with the error?
The expected output would be this JSON file :
{
"Profile-name": {
"form_email": "example#example.com",
//Many other informations...
},
"Second-profile": {
"form_email": "anotherexample#example.com"
//Some other informations...
}
}
Okay, so I found it by reading my code again: I just replaced dynamic profiles = JsonConvert.DeserializeObject(json); with dynamic profiles = JsonConvert.DeserializeObject<Dictionary<string, Profile_Name>>(json);.
But that still doesn't fix the non-trivial way I use to add the {} to my file...
The object the first DeserializeObject method returns is actually a JObject, but below you deserialize it as a Dictionary. You shouldn't mix the types; choose one or the other.
If you use the JObject then to add objects you need to convert them to JObjects:
profiles.Add(profile_name.Text, JObject.FromObject(new Profile_Name { form_email = form_email.Text }));
In both cases, when the profile is null you just need to initialize it:
if (profiles == null)
{
profiles = new JObject(); // or new Dictionary<string, Profile_Name>();
}
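Putting it together with the dictionary approach, the whole write path could look roughly like this; it's a sketch that reuses jsonFilePath and the Profile_Name class from the question, and it also removes the need to write {} into an empty file by hand:
Dictionary<string, Profile_Name> profiles = null;
if (File.Exists(jsonFilePath))
{
    profiles = JsonConvert.DeserializeObject<Dictionary<string, Profile_Name>>(File.ReadAllText(jsonFilePath));
}
if (profiles == null) // missing, empty or "null" file
{
    profiles = new Dictionary<string, Profile_Name>();
}

profiles.Add(profile_name.Text, new Profile_Name { form_email = form_email.Text });
File.WriteAllText(jsonFilePath, JsonConvert.SerializeObject(profiles, Formatting.Indented));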
For a project I am currently working on I'm required to use the clipboard to a certain extent.
What I need:
Save text and some additional application-specific data to the clipboard. The text is supposed to be usable with CTRL + V within other applications, while the application data should usually be omitted, as it is mostly used for referencing stuff (like quotes and so on).
What I tried:
Copying a custom object to the clipboard and overriding the ToString method, which was a little naive of me to think would work:
[Serializable]
public class TestData {
public string txt;
public string additionalStuffs;
public override string ToString() {
return txt;
}
}
Clipboard.SetData( "TestData", new TestData() { txt = "This is a text", additionalStuffs = "Stuffs" } );
I would now need the txt to be pastable into other applications as a string, while the other data is ignored unless pasted in my application, for the sake of being readable and easy for the user to use.
Can any of you explain how I need to approach this problem? Is there even a way to do that?
Okay, after a little more trial and error with the documentation, I actually found a solution.
For everyone having the same problem: the trick is using a DataObject like the following:
[Serializable]
public class TestData {
public string Whatever;
}
IDataObject dataObject = new DataObject();
dataObject.SetData( "System.String", "Test" );
dataObject.SetData( "Text", "Test" );
dataObject.SetData( "UnicodeText", "Test" );
dataObject.SetData( "OEMText", "Test" );
dataObject.SetData( "TestData", new TestData() { Whatever = "NONONONONO", } );
Clipboard.SetDataObject( dataObject );
Using this construct you can set the text using multiple "data types", so whatever format the application you want to paste into requires, you have a value supplied. This way only the text shows up when pasting, but the additional data is also hidden inside.
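Reading it back inside your own application is then just the reverse; here is a small sketch using the same "TestData" format name as above:
IDataObject pasted = Clipboard.GetDataObject();
if (pasted != null && pasted.GetDataPresent("TestData"))
{
    // Pasting inside our own application: the hidden payload is available
    TestData data = (TestData)pasted.GetData("TestData");
    // ... use data.Whatever here
}
else if (pasted != null && pasted.GetDataPresent(DataFormats.UnicodeText))
{
    // Anything copied elsewhere: fall back to the plain text
    string text = (string)pasted.GetData(DataFormats.UnicodeText);
}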
Sorry for putting this question up without researching to the end. Have a great day!
I have the below JSON (snipped for space). As you can see, in "test" and "tooltip" I have a property, "formatter", that needs to contain a function (note this JSON is read in from an XML file and converted to JSON in .NET):
{
"test": {
"formatter": function(){return '<b>'+ this.point.name +'<\/b>: '+ this.y +' %';}
},
"title": {
"align": "center",
"text": "Your chart title here"
},
"tooltip": {
"formatter": function(){return '<b>'+ this.point.name +'<\/b>: '+ this.y +' %';}
}
}
Unfortunately, I'm getting an error on the ASPX page that produces the JSON file:
There was an error parsing the JSON document. The document may not be well-formed.
This error is due to the fact that the bit after "formatter" is not in quotation marks, as the parser expects a string. But if I put quotation marks around it, then the front-end HTML page that uses the JSON doesn't see the function.
Is it possible to pass this as a function and not a string?
Many thanks.
Edit:
Thanks for the quick replies. As I said, I know the above isn't correct JSON because the "function(){...}" part isn't in quote marks. The front end that reads the JSON file is third-party, so I was wondering how I could pass the function through. I understand the problems of injection (from a SQL point of view) and understand why it's not possible in JSON (I haven't worked with JSON before).
If you passed it as a string you could use JavaScript's eval function, but eval is evil.
What about meeting it halfway and using object notation format?
This is a template jQuery plugin that I use at work; the $.fn.extend shows this notation format.
/*jslint browser: true */
/*global window: true, jQuery: true, $: true */
(function($) {
var MyPlugin = function(elem, options) {
// This lets us pass multiple optional parameters to your plugin
var defaults = {
'text' : '<b>Hello, World!</b>',
'anotherOption' : 'Test Plugin'
};
// This merges the passed options with the defaults
// so we always have a value
this.options = $.extend(defaults, options);
this.element = elem;
};
// Use function prototypes, it's a lot faster.
// Loads of sources as to why on the 'tinternet
MyPlugin.prototype.Setup = function()
{
// run Init code
$(this.element).html(this.options.text);
};
// This actually registers a plugin into jQuery
$.fn.extend({
// by adding a jquery.testPlugin function that takes a
// variable list of options
testPlugin: function(options) {
// and this handles that you may be running
// this over multiple elements
return this.each(function() {
var o = options;
// You can use element.Data to cache
// your plugin activation stopping
// running it again;
// this is probably the easiest way to
// check that your calls won't walk all
// over the dom.
var element = $(this);
if (element.data('someIdentifier'))
{
return;
}
// Initialise our plugin
var obj = new MyPlugin(element, o);
// Cache it to the DOM object
element.data('someIdentifier', obj);
// Call our Setup function as mentioned above.
obj.Setup();
});
}
});
})(jQuery);