I'm new to coding and I've started working on a task in C#. I need to develop code that gets file information (file type, file size, owner name) for a given directory path.
Now, to save time, I thought of building a dictionary that stores every SID and its corresponding owner. That way the code won't translate the SID for each file to get the owner name each time; instead it will get the file's SID and map it to its owner using the prebuilt dictionary. This dictionary will be built once and updated only when a new owner joins.
Does anyone know how to create a dictionary that can be used separately?
Here's the code I'm working on --
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
//using System.IO;
using System.Security.AccessControl;
using System.Security.Principal;
using System.Diagnostics;
using System.Collections;
using Alphaleonis.Win32.Filesystem;
// AlphaFS is an external library used to allow long paths.
namespace Get_SID_Owner_Info
{
internal class Program
{
private static void Main()
{
Console.Write("Please enter the Directory Path -- ");
string foldr = Console.ReadLine();
Console.Write("Please enter the result path location e.g. D:\\Nolder\\Outfile.csv -- ");
string outfile = Console.ReadLine();
//string foldr = "D:\\Ansys_Training";
//string outfile = "D:\\Get_SID_Owner.csv";
var watch = new System.Diagnostics.Stopwatch();
watch.Start();
int i = 0;
IdentityReference[] SID_store = new IdentityReference[1000000];
IdentityReference[] Owner_store = new IdentityReference[1000000];
if (File.Exists(outfile))
{
File.Delete(outfile);
}
// Create a new file
using (System.IO.StreamWriter sw = File.CreateText(outfile))
{
sw.WriteLine("{0},{1}", "SID", "Owner Name");
DirectoryInfo tempWithoutMac = new DirectoryInfo(foldr);
foreach (FileInfo fi in tempWithoutMac.GetFiles())
{
// SID --
FileSecurity fs = File.GetAccessControl(fi.FullName);
IdentityReference SID = fs.GetOwner(typeof(SecurityIdentifier));
// Owner --
IdentityReference Owner = SID.Translate(typeof(NTAccount));
SID_store[i] = SID;
Owner_store[i] = Owner;
i = i + 1;
}
foreach (string d in System.IO.Directory.GetDirectories(foldr, "*", System.IO.SearchOption.AllDirectories))
{
tempWithoutMac = new DirectoryInfo(d);
foreach (FileInfo fi in tempWithoutMac.GetFiles())
{
// SID --
FileSecurity fs = File.GetAccessControl(fi.FullName);
IdentityReference SID = fs.GetOwner(typeof(SecurityIdentifier));
// Owner --
IdentityReference Owner = SID.Translate(typeof(NTAccount));
SID_store[i] = SID;
Owner_store[i] = Owner;
i = i + 1;
}
}
IdentityReference[] SID_store2 = new IdentityReference[i];
IdentityReference[] Owner_store2 = new IdentityReference[i];
for (int j = 0; j < i; j++)
{
SID_store2[j] = SID_store[j];
Owner_store2[j] = Owner_store[j];
}
var SID_Unique = SID_store2.Distinct().ToList(); // Contains Unique SID's for the given directory --
var Owner_Unique = Owner_store2.Distinct().ToList();
Dictionary<IdentityReference, IdentityReference> SID_Owner_Data = new Dictionary<IdentityReference, IdentityReference>();
for (int j = 0; j < SID_Unique.Count; j++) // SID to Owner conversion for the Unique SID's --
{
SID_Owner_Data.Add(SID_Unique[j], Owner_Unique[j]);
Console.WriteLine(SID_Unique[j]);
Console.WriteLine(Owner_Unique[j]);
}
Console.WriteLine(SID_Unique.Count);
for (int k = 0; k < SID_Unique.Count; k++)
{
sw.WriteLine("{0},{1}", SID_Unique[k], Owner_Unique[k]);
}
}
watch.Stop();
Console.WriteLine($"Execution Time: {watch.ElapsedMilliseconds} ms");
Console.ReadKey();
}
}
}
If I understand correctly, you want to be able to resolve an SID to its owner, but through a caching mechanism so that each SID is resolved only once. This is trivial to do with a ConcurrentDictionary and GetOrAdd.
ConcurrentDictionary<IdentityReference, IdentityReference> _cache = new ConcurrentDictionary<IdentityReference, IdentityReference>();

IdentityReference GetTranslationWithCache(IdentityReference SID)
{
    return _cache.GetOrAdd(SID, sid => sid.Translate(typeof(NTAccount)));
}
In this example, GetOrAdd will search the cache for the SID and return the corresponding translation if it is found. If it is not found, the delegate (sid => sid.Translate(typeof(NTAccount))) is called to produce the value, and the key/value pair is added to the dictionary. This makes it good for use as a cache. As a bonus, the dictionary is thread-safe, so you can populate it from multiple threads to improve performance; note that under contention GetOrAdd may invoke the factory more than once for a given SID, but only one result is kept, so the dictionary still ends up with a single entry per SID.
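For example, the question's inner loop could then be reduced to something like this (a sketch based on the question's code; here each file's SID and owner are written out as they are resolved):
// Sketch: resolve owners through the cache while walking the files.
foreach (FileInfo fi in tempWithoutMac.GetFiles())
{
    FileSecurity fs = File.GetAccessControl(fi.FullName);
    IdentityReference sid = fs.GetOwner(typeof(SecurityIdentifier));
    IdentityReference owner = GetTranslationWithCache(sid); // Translate only runs for unseen SIDs
    sw.WriteLine("{0},{1}", sid, owner);
}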
I'm currently working on a C# form. Basically, I have a lot of log files, and most of them contain duplicate lines across files. This form is supposed to concatenate many of those files into one file and then delete all the duplicates, so that I end up with one log file without duplicates. I've already made this work by taking two files, concatenating them, deleting all the duplicates, and repeating the process until I have no more files. Here is the function I made for this:
private static void DeleteAllDuplicatesFastWithMemoryManagement(HashSet<string>[] path_list, string parent_path, ProgressBar pBar1, BackgroundWorker backgroundWorker1)
{
for (int j = 0; j < path_list.Length; j++)
{
HashSet<string>.Enumerator em = path_list[j].GetEnumerator();
List<string> LogFile = new List<string>();
while (em.MoveNext())
{
var secondLogFile = File.ReadAllLines(em.Current);
LogFile = LogFile.Concat(secondLogFile).ToList();
LogFile = LogFile.Distinct().ToList();
backgroundWorker1.ReportProgress(1);
}
LogFile = LogFile.Distinct().ToList();
string new_path = parent_path + "/new_data/probe." + j + ".log";
File.WriteAllLines(new_path, LogFile.Distinct().ToArray());
}
}
path_list contains all the paths to the files I need to process.
path_list[0] contains all the probe.0.log files
path_list[1] contains all the probe.1.log files ...
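For illustration, path_list could be built with a helper like this (a hypothetical sketch, not my exact code):
// Sketch: collect every "probe.N.log" under a root folder into one HashSet per N.
HashSet<string>[] BuildPathList(string root, int probeCount)
{
    var pathList = new HashSet<string>[probeCount];
    for (int j = 0; j < probeCount; j++)
    {
        pathList[j] = new HashSet<string>(
            Directory.EnumerateFiles(root, "probe." + j + ".log", SearchOption.AllDirectories));
    }
    return pathList;
}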
Here is the idea I have for my problem, but I have no idea how to code it:
private static void DeleteAllDuplicatesFastWithMemoryManagement(HashSet<string>[] path_list, string parent_path, ProgressBar pBar1, BackgroundWorker backgroundWorker1)
{
for (int j = 0; j < path_list.Length; j++)
{
HashSet<string>.Enumerator em = path_list[j].GetEnumerator();
List<string> LogFile = new List<string>();
while (em.MoveNext())
{
// how I see it
if (currentMemoryUsage + newfile.Length > maximumProcessMemory) {
LogFile = LogFile.Distinct().ToList();
}
//end
var secondLogFile = File.ReadAllLines(em.Current);
LogFile = LogFile.Concat(secondLogFile).ToList();
LogFile = LogFile.Distinct().ToList();
backgroundWorker1.ReportProgress(1);
}
LogFile = LogFile.Distinct().ToList();
string new_path = parent_path + "/new_data/probe." + j + ".log";
File.WriteAllLines(new_path, LogFile.Distinct().ToArray());
}
}
I think this method will be much quicker, and it will adjust to any computer's specs. Can anyone help me make this work, or tell me if I'm wrong?
You are creating far too many lists, arrays, and Distinct() calls.
Just combine everything in a HashSet, then write it out:
private static void CombineNoDuplicates(HashSet<string>[] path_list, string parent_path, BackgroundWorker backgroundWorker1)
{
    var logFile = new HashSet<string>(1000); // pre-size your hashset to a suitable size
    for (int j = 0; j < path_list.Length; j++)
    {
        logFile.Clear();
        foreach (var path in path_list[j])
        {
            var lines = File.ReadLines(path);
            logFile.UnionWith(lines);
            backgroundWorker1.ReportProgress(1);
        }
        string new_path = Path.Combine(parent_path, "new_data", "probe." + j + ".log");
        File.WriteAllLines(new_path, logFile);
    }
}
Ideally you should use async instead of BackgroundWorker which is deprecated. This also means you don't need to store a whole file in memory at once, except for the first one.
private static async Task CombineNoDuplicatesAsync(HashSet<string>[] path_list, string parent_path)
{
    var logFile = new HashSet<string>(1000); // pre-size your hashset to a suitable size
    for (int j = 0; j < path_list.Length; j++)
    {
        logFile.Clear();
        foreach (var path in path_list[j])
        {
            using (var sr = new StreamReader(path))
            {
                string line;
                while ((line = await sr.ReadLineAsync()) != null)
                {
                    logFile.Add(line);
                }
            }
        }
        string new_path = Path.Combine(parent_path, "new_data", "probe." + j + ".log");
        await File.WriteAllLinesAsync(new_path, logFile);
    }
}
If you are willing to risk colliding hash codes, you could cut down your memory usage even further by putting only the strings' hashes in a HashSet; then you can fully stream all files.
Caveat: colliding hash codes are a distinct possibility, especially with many strings; with a 32-bit GetHashCode, the birthday bound means roughly 77,000 distinct lines already give about a 50% chance of at least one collision. Analyze your data to see if you can take this risk.
private static async Task CombineNoDuplicatesAsync(HashSet<string>[] path_list, string parent_path)
{
    var hashes = new HashSet<int>(1000); // pre-size your hashset to a suitable size
    for (int j = 0; j < path_list.Length; j++)
    {
        hashes.Clear();
        string new_path = Path.Combine(parent_path, "new_data", "probe." + j + ".log");
        using (var output = new StreamWriter(new_path))
        {
            foreach (var path in path_list[j])
            {
                using (var sr = new StreamReader(path))
                {
                    string line;
                    while ((line = await sr.ReadLineAsync()) != null)
                    {
                        if (hashes.Add(line.GetHashCode()))
                            await output.WriteLineAsync(line);
                    }
                }
            }
        }
    }
}
You can get even more performance by reading the files as raw bytes and parsing the lines from Span<byte>; I will leave that as an exercise for the reader, as it's quite complex.
Assuming your log files already contain lines that are sorted in chronological order[1], we can effectively treat them as intermediate files for a multi-file sort and perform merging/duplicate elimination in one go.
It would be a new class, something like this:
internal class LogFileMerger : IEnumerable<string>
{
private readonly List<IEnumerator<string>> _files;
public LogFileMerger(HashSet<string> fileNames)
{
    // Prime each enumerator so Current is valid, and drop any empty files up front.
    _files = fileNames.Select(fn => File.ReadLines(fn).GetEnumerator())
                      .Where(e => e.MoveNext())
                      .ToList();
}
public IEnumerator<string> GetEnumerator()
{
while (_files.Count > 0)
{
var candidates = _files.Select(e => e.Current);
var nextLine = candidates.OrderBy(c => c).First();
for (int i = _files.Count - 1; i >= 0; i--)
{
while (_files[i].Current == nextLine)
{
if (!_files[i].MoveNext())
{
_files.RemoveAt(i);
break;
}
}
}
yield return nextLine;
}
}
IEnumerator IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
}
You can create a LogFileMerger using the set of input log file names and pass it directly as the IEnumerable<string> to some method like File.WriteAllLines. Using File.ReadLines should mean that the amount of memory being used for each input file is just a small buffer on each file, and we never attempt to have all of the data from any of the files loaded at any time.
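For example, a minimal usage sketch (file names invented for illustration):
// Hypothetical usage: merge several sorted probe.0.log files into one deduplicated file.
var inputs = new HashSet<string> { @"C:\logs\a\probe.0.log", @"C:\logs\b\probe.0.log" };
File.WriteAllLines(@"C:\logs\new_data\probe.0.log", new LogFileMerger(inputs));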
(You may want to adjust the OrderBy and comparison operations above if there are requirements around case insensitivity, but none are evident in the question.)
(Note also that this class cannot be enumerated multiple times in the current design. That could be adjusted by storing the paths instead of the open enumerators in the class field and making the list of open enumerators a local inside GetEnumerator)
[1] If this is not the case, it may be more sensible to sort each file first so that this assumption is met, and then proceed with this plan.
I have a BIM model in IFC format and I want to add a new property, say cost, to every object in the model using Xbim. I am building a .NET application. The following code works well, except, the property is also added to storeys, buildings and sites - and I only want to add it to the lowest-level objects that nest no other objects.
To begin with, I have tried various methods to print the "related objects" of each object, thinking that I could filter out any objects with non-null related objects. This led me to look at IfcRelDefinesByType.RelatedObjects (http://docs.xbim.net/XbimDocs/html/7fb93e55-dcf7-f6da-0e08-f8b5a70accf2.htm), thinking that RelatedObjects (https://standards.buildingsmart.org/IFC/RELEASE/IFC2x3/FINAL/HTML/ifckernel/lexical/ifcreldecomposes.htm) would contain this information.
But I have not managed to turn this documentation into working code.
Here is my code:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using Xbim.Ifc;
using Xbim.Ifc2x3.Interfaces;
using Xbim.Ifc4.Kernel;
using Xbim.Ifc4.MeasureResource;
using Xbim.Ifc4.PropertyResource;
using Xbim.Ifc4.Interfaces;
using IIfcProject = Xbim.Ifc4.Interfaces.IIfcProject;
namespace MyPlugin0._1
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
outputBox.AppendText("Plugin launched successfully");
}
private void button1_Click(object sender, EventArgs e)
{
// Setup the editor
var editor = new XbimEditorCredentials
{
ApplicationDevelopersName = "O",
ApplicationFullName = "MyPlugin",
ApplicationIdentifier = "99990100",
ApplicationVersion = "0.1",
EditorsFamilyName = "B",
EditorsGivenName = "O",
EditorsOrganisationName = "MyWorkplace"
};
// Choose an IFC file to work with
OpenFileDialog dialog = new OpenFileDialog();
dialog.ShowDialog();
string filename = dialog.FileName;
string newLine = Environment.NewLine;
// Check if the file is valid and continue
if (!filename.ToLower().EndsWith(".ifc"))
{
// Output error if the file is the wrong format
outputBox.AppendText(newLine + "Error: select an .ifc-file");
}
else
{
// Open the selected file (## Not sure what the response is to a corrupt/invalid .ifc-file)
using (var model = IfcStore.Open(filename, editor, 1.0))
{
// Output success when the file has been opened
string reversedName = Form1.ReversedString(filename);
int filenameShortLength = reversedName.IndexOf("\\");
string filenameShort = filename.Substring(filename.Length - filenameShortLength, filenameShortLength);
outputBox.AppendText(newLine + filenameShort + " opened successfully for editing");
////////////////////////////////////////////////////////////////////
// Get all the objects in the model ( ### lowest level only??? ###)
var objs = model.Instances.OfType<IfcObjectDefinition>();
////////////////////////////////////////////////////////////////////
// Create and store a new property
using (var txn = model.BeginTransaction("Store Costs"))
{
// Iterate over all the walls to initiate the Point Source property
foreach (var obj in objs)
{
// Create new property set to host properties
var pSetRel = model.Instances.New<IfcRelDefinesByProperties>(r =>
{
r.GlobalId = Guid.NewGuid();
r.RelatingPropertyDefinition = model.Instances.New<IfcPropertySet>(pSet =>
{
pSet.Name = "Economy";
pSet.HasProperties.Add(model.Instances.New<IfcPropertySingleValue>(p =>
{
p.Name = "Cost";
p.NominalValue = new IfcMonetaryMeasure(200.00); // Default Currency set on IfcProject
}));
});
});
// Add property to the object
pSetRel.RelatedObjects.Add(obj);
// Rename the object
outputBox.AppendText(newLine + "Cost property added to " + obj.Name);
obj.Name += "_withCost";
//outputBox.AppendText(newLine + obj.OwnerHistory.ToString());
}
// Commit changes to this model
txn.Commit();
};
// Save the changed model with a new name. Does not overwrite existing files but generates a unique name
string newFilename = filenameShort.Substring(0, filenameShort.Length - 4) + "_Modified.IFC";
int i = 1;
while (File.Exists(newFilename))
{
newFilename = filenameShort.Substring(0, filenameShort.Length - 4) + "_Modified(" + i.ToString() + ").IFC";
i += 1;
}
model.SaveAs(newFilename); // (!) Gets stored in the project folder > bin > Debug
outputBox.AppendText(newLine + newFilename + " has been saved");
};
}
}
// Reverse string-function
static string ReversedString(string text)
{
if (text == null) return null;
char[] array = text.ToCharArray();
Array.Reverse(array);
return new String(array);
}
private void Form1_Load(object sender, EventArgs e)
{
}
}
}
You're starting out by getting too broad a set of elements in the model. Pretty much everything in an IFC model will be classed as (or derived from) an instance of IfcObjectDefinition, including spatial concepts (spaces, levels, zones, etc.) as well as more abstract concepts such as Actors (people) and Resources.
You'd be better off filtering objs down to more specific types such as IfcElement or IfcBuildingElement, or even the more real-world elements below them (IfcWindow, IfcDoor, etc.):
// Get all the building elements in the model
var objs = model.Instances.OfType<IfcBuildingElement>();
You could also filter by more specific clauses than just their type, using the other IFC relationships.
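For instance, to keep only the "lowest-level" elements that nest no other objects, something along these lines might work. This is a sketch, not tested against your model: it assumes xbim exposes the IsDecomposedBy inverse property (from IfcRelAggregates) on the element interfaces, so verify it against your xbim version:
// Sketch: building elements that are not decomposed into sub-elements
// (assumes the IsDecomposedBy inverse property is available).
var leafObjs = model.Instances.OfType<IIfcBuildingElement>()
                    .Where(o => !o.IsDecomposedBy.Any());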
I'm trying to transfer data from a treenode (at least I think that's what it is) which contains much more data than I need. It would be very difficult for me to manipulate the data within the treenode. I would much rather have an array which provides me with only the necessary data for data manipulation.
I would like my array to have the following variables:
1. BookmarkNumber (integer)
2. Date (string)
3. DocumentType (string)
4. BookmarkPageNumberString (string)
5. BookmarkPageNumberInteger (integer)
I would like to fill the above-defined variables from the data in the variable book_mark (as can be seen in my code).
I've been wrestling with this for two days. Any help would be much appreciated. I'm pretty sure the question isn't phrased correctly, so please ask questions and I'll explain further if needed.
Thanks so much
BTW, what I'm trying to do is create a Windows Forms program which parses a PDF file containing multiple bookmarks into discrete PDF files, one per bookmark/chapter, saving each file in the correct folder with the correct naming convention; the folder and naming convention depend on the PDF name and the title of the bookmark/chapter being parsed.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using System.IO;
using itextsharp.pdfa;
using iTextSharp.awt;
using iTextSharp.testutils;
using iTextSharp.text;
using iTextSharp.xmp;
using iTextSharp.xtra;
namespace WindowsFormsApplication1
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void ChooseImageFileWrapper_Click(object sender, EventArgs e)
{
OpenFileDialog openFileDialog1 = new OpenFileDialog();
openFileDialog1.InitialDirectory = GlobalVariables.InitialDirectory;
openFileDialog1.Filter = "Pdf Files|*.pdf";
openFileDialog1.RestoreDirectory = true;
openFileDialog1.Title = "Image File Wrapper Chooser";
if (openFileDialog1.ShowDialog() == DialogResult.OK)
{
try
{
GlobalVariables.ImageFileWrapperPath = openFileDialog1.FileName;
}
catch (Exception ex)
{
MessageBox.Show("Error: Could not read file from disk. Original error: " + ex.Message);
}
}
ImageFileWrapperPath.Text = GlobalVariables.ImageFileWrapperPath;
}
private void ImageFileWrapperPath_TextChanged(object sender, EventArgs e)
{
}
private void button2_Click(object sender, EventArgs e)
{
iTextSharp.text.pdf.PdfReader pdfReader = new iTextSharp.text.pdf.PdfReader(GlobalVariables.ImageFileWrapperPath);
IList<Dictionary<string, object>> book_mark = iTextSharp.text.pdf.SimpleBookmark.GetBookmark(pdfReader);
List<ImageFileWrapperBookmarks> IFWBookmarks = new List<ImageFileWrapperBookmarks>();
foreach (Dictionary<string, object> bk in book_mark) // bk is a single instance of book_mark
{
ImageFileWrapperBookmarks.BookmarkNumber = ImageFileWrapperBookmarks.BookmarkNumber + 1;
foreach (KeyValuePair<string, object> kvr in bk) // kvr is the key/value in bk
{
if (kvr.Key == "Kids" || kvr.Key == "kids")
{
//create recursive program for children
}
else if (kvr.Key == "Title" || kvr.Key == "title")
{
}
else if (kvr.Key == "Page" || kvr.Key == "page")
{
}
}
}
MessageBox.Show(GlobalVariables.ImageFileWrapperPath);
}
}
}
Here's one way to parse a PDF and create a data structure similar to what you describe. First the data structure:
public class BookMark
{
static int _number;
public BookMark() { Number = ++_number; }
public int Number { get; private set; }
public string Title { get; set; }
public string PageNumberString { get; set; }
public int PageNumberInteger { get; set; }
public static void ResetNumber() { _number = 0; }
// bookmarks title may have illegal filename character(s)
public string GetFileName()
{
var fileTitle = Regex.Replace(
    Regex.Replace(Title, @"\s+", "-"),
    @"[^-\w]", ""
);
return string.Format("{0:D4}-{1}.pdf", Number, fileTitle);
}
}
A method to create a list of Bookmark (above):
List<BookMark> ParseBookMarks(IList<Dictionary<string, object>> bookmarks)
{
int page;
var result = new List<BookMark>();
foreach (var bookmark in bookmarks)
{
// add top-level bookmarks
var stringPage = bookmark["Page"].ToString();
if (Int32.TryParse(stringPage.Split()[0], out page))
{
result.Add(new BookMark() {
Title = bookmark["Title"].ToString(),
PageNumberString = stringPage,
PageNumberInteger = page
});
}
// recurse
if (bookmark.ContainsKey("Kids"))
{
var kids = bookmark["Kids"] as IList<Dictionary<string, object>>;
if (kids != null && kids.Count > 0)
{
result.AddRange(ParseBookMarks(kids));
}
}
}
return result;
}
Call method above like this to dump the results to a text file:
void DumpResults(string path, string outputTextFile)
{
using (var reader = new PdfReader(path))
{
// need this call to parse page numbers
reader.ConsolidateNamedDestinations();
var bookmarks = ParseBookMarks(SimpleBookmark.GetBookmark(reader));
var sb = new StringBuilder();
foreach (var bookmark in bookmarks)
{
sb.AppendLine(string.Format(
"{0, -4}{1, -100}{2, -25}{3}",
bookmark.Number, bookmark.Title,
bookmark.PageNumberString, bookmark.PageNumberInteger
));
}
File.WriteAllText(outputTextFile, sb.ToString());
}
}
The bigger problem is how to extract each bookmark into a separate file. If every bookmark starts a new page it's easy:
1. Iterate over the return value of ParseBookMarks().
2. Select a page range that begins at the current bookmark's PageNumberInteger and ends at the next bookmark's PageNumberInteger - 1.
3. Use that page range to create a separate file.
Something like this:
void ProcessPdf(string path)
{
using (var reader = new PdfReader(path))
{
// need this call to parse page numbers
reader.ConsolidateNamedDestinations();
var bookmarks = ParseBookMarks(SimpleBookmark.GetBookmark(reader));
for (int i = 0; i < bookmarks.Count; ++i)
{
int page = bookmarks[i].PageNumberInteger;
int nextPage = i + 1 < bookmarks.Count
// if not top of page will be missing content
? bookmarks[i + 1].PageNumberInteger - 1
/* alternative is to potentially add redundant content:
? bookmarks[i + 1].PageNumberInteger
*/
: reader.NumberOfPages;
string range = string.Format("{0}-{1}", page, nextPage);
// DEMO!
if (i < 10)
{
var outputPath = Path.Combine(OUTPUT_DIR, bookmarks[i].GetFileName());
using (var readerCopy = new PdfReader(reader))
{
var number = bookmarks[i].Number;
readerCopy.SelectPages(range);
using (FileStream stream = new FileStream(outputPath, FileMode.Create))
{
using (var document = new Document())
{
using (var copy = new PdfCopy(document, stream))
{
document.Open();
int n = readerCopy.NumberOfPages;
for (int j = 0; j < n; )
{
copy.AddPage(copy.GetImportedPage(readerCopy, ++j));
}
}
}
}
}
}
}
}
}
The problem is that it's highly unlikely all bookmarks are going to be at the top of every page of the PDF. To see what I mean, experiment with commenting / uncommenting the bookmarks[i + 1].PageNumberInteger lines.
I'm writing a Windows app in C#. I have a custom data type that I need to write as raw data to a binary file (not text/string based), and then open that file later back into that custom data type.
For example:
Matrix<float> dbDescs = ConcatDescriptors(dbDescsList);
I need to save dbDescs to file blah.xyz and then restore it as Matrix<float> later. Anyone have any examples? Thanks!
As I've mentioned, the options are overwhelming, and this question invites a ton of opinions as to which one is best. With that being said, BinaryFormatter could prove useful here, as it serializes and deserializes objects (along with graphs of connected objects) in binary.
Here's the MSDN link that explains the usage: https://msdn.microsoft.com/en-us/library/system.runtime.serialization.formatters.binary.binaryformatter(v=vs.110).aspx
Just in case that link fails down the line and because I'm too lazy to provide my own example, here's an example from MSDN:
using System;
using System.IO;
using System.Collections;
using System.Runtime.Serialization.Formatters.Binary;
using System.Runtime.Serialization;
public class App
{
[STAThread]
static void Main()
{
Serialize();
Deserialize();
}
static void Serialize()
{
// Create a hashtable of values that will eventually be serialized.
Hashtable addresses = new Hashtable();
addresses.Add("Jeff", "123 Main Street, Redmond, WA 98052");
addresses.Add("Fred", "987 Pine Road, Phila., PA 19116");
addresses.Add("Mary", "PO Box 112233, Palo Alto, CA 94301");
// To serialize the hashtable and its key/value pairs,
// you must first open a stream for writing.
// In this case, use a file stream.
FileStream fs = new FileStream("DataFile.dat", FileMode.Create);
// Construct a BinaryFormatter and use it to serialize the data to the stream.
BinaryFormatter formatter = new BinaryFormatter();
try
{
formatter.Serialize(fs, addresses);
}
catch (SerializationException e)
{
Console.WriteLine("Failed to serialize. Reason: " + e.Message);
throw;
}
finally
{
fs.Close();
}
}
static void Deserialize()
{
// Declare the hashtable reference.
Hashtable addresses = null;
// Open the file containing the data that you want to deserialize.
FileStream fs = new FileStream("DataFile.dat", FileMode.Open);
try
{
BinaryFormatter formatter = new BinaryFormatter();
// Deserialize the hashtable from the file and
// assign the reference to the local variable.
addresses = (Hashtable) formatter.Deserialize(fs);
}
catch (SerializationException e)
{
Console.WriteLine("Failed to deserialize. Reason: " + e.Message);
throw;
}
finally
{
fs.Close();
}
// To prove that the table deserialized correctly,
// display the key/value pairs.
foreach (DictionaryEntry de in addresses)
{
Console.WriteLine("{0} lives at {1}.", de.Key, de.Value);
}
}
}
Consider the Json.NET package (you can add it to your project via NuGet, which is the better way, or download it directly from the project's website).
JSON is just a string (text) that holds the values of complex objects. It lets you turn many (not all) objects into savable files easily, which can then be read back. To serialize into JSON with Json.NET:
Product product = new Product();
product.Name = "Apple";
product.Expiry = new DateTime(2008, 12, 28);
product.Sizes = new string[] { "Small" };
string json = JsonConvert.SerializeObject(product);
And then to deserialize:
var product = JsonConvert.DeserializeObject<Product>(json);
To write the json to a file:
using (StreamWriter writer = new StreamWriter(@"C:/file.txt"))
{
writer.WriteLine(json);
}
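And to restore the object later, read the file back and deserialize (a small sketch; the path matches the write example above):
// Read the saved JSON and rebuild the object.
string savedJson = File.ReadAllText(@"C:/file.txt");
Product restored = JsonConvert.DeserializeObject<Product>(savedJson);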
I am not a web developer, but as far as I know JSON is text-based, not binary. So here is a genuinely binary answer. Hope this helps!
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace BinarySerializerSample
{
class Program
{
public static void WriteValues(string fName, double[] vals)
{
using (BinaryWriter writer = new BinaryWriter(File.Open(fName, FileMode.Create)))
{
int len = vals.Length;
for (int i = 0; i < len; i++)
writer.Write(vals[i]);
}
}
public static double[] ReadValues(string fName, int len)
{
double [] vals = new double[len];
using (BinaryReader reader = new BinaryReader(File.Open(fName, FileMode.Open)))
{
for (int i = 0; i < len; i++)
vals[i] = reader.ReadDouble();
}
return vals;
}
static void Main(string[] args)
{
const double MAX_TO_VARY = 100.0;
const int NUM_ITEMS = 100;
const string FILE_NAME = "dblToTestx.bin";
double[] dblToWrite = new double[NUM_ITEMS];
Random r = new Random();
for (int i = 0; i < NUM_ITEMS; i++)
dblToWrite[i] = r.NextDouble() * MAX_TO_VARY;
WriteValues(FILE_NAME, dblToWrite);
double[] dblToRead ;
dblToRead = ReadValues(FILE_NAME, NUM_ITEMS);
int j = 0;
bool areEqual = true;
while (areEqual && j < NUM_ITEMS)
{
areEqual = dblToRead[j] == dblToWrite[j];
++j;
}
if (areEqual)
Console.WriteLine("Test Passed: Press any Key to Exit");
else
Console.WriteLine("Test Failed: Press any Key to Exit");
Console.Read();
}
}
}
How can I use WMIUserID, WMIPassword, WMIAlternateCredentials using C#?
Also, is it possible to get a remote computer's Administrator password?
Please try to explain with examples.
Thanks.
Here is some example code:
using System;
using System.Text;
using System.Threading;
using Microsoft.Management.Infrastructure;
using Microsoft.Management.Infrastructure.Options;
using System.Security;
namespace SMAPIQuery
{
class Program
{
static void Main(string[] args)
{
string computer = "Computer_B";
string domain = "DOMAIN";
string username = "AdminUserName";
string plaintextpassword;
Console.WriteLine("Enter password:");
plaintextpassword = Console.ReadLine();
SecureString securepassword = new SecureString();
foreach (char c in plaintextpassword)
{
securepassword.AppendChar(c);
}
// create Credentials
CimCredential Credentials = new CimCredential(PasswordAuthenticationMechanism.Default,
domain,
username,
securepassword);
// create SessionOptions using Credentials
WSManSessionOptions SessionOptions = new WSManSessionOptions();
SessionOptions.AddDestinationCredentials(Credentials);
// create Session using computer, SessionOptions
CimSession Session = CimSession.Create(computer, SessionOptions);
var allVolumes = Session.QueryInstances(#"root\cimv2", "WQL", "SELECT * FROM Win32_Volume");
var allPDisks = Session.QueryInstances(#"root\cimv2", "WQL", "SELECT * FROM Win32_DiskDrive");
// Loop through all volumes
foreach (CimInstance oneVolume in allVolumes)
{
// Show volume information
if (oneVolume.CimInstanceProperties["DriveLetter"].ToString()[0] > ' ' )
{
Console.WriteLine("Volume ‘{0}’ has {1} bytes total, {2} bytes available",
oneVolume.CimInstanceProperties["DriveLetter"],
oneVolume.CimInstanceProperties["Size"],
oneVolume.CimInstanceProperties["SizeRemaining"]);
}
}
// Loop through all physical disks
foreach (CimInstance onePDisk in allPDisks)
{
// Show physical disk information
Console.WriteLine("Disk {0} is model {1}, serial number {2}",
onePDisk.CimInstanceProperties["DeviceId"],
onePDisk.CimInstanceProperties["Model"].ToString().TrimEnd(),
onePDisk.CimInstanceProperties["SerialNumber"]);
}
Console.ReadLine();
}
}
}