I have been tasked with overwriting all the free space on a few laptops three times. I know there are some alternatives, but I like to know how things work and whether I can do it myself in C#.
1) yes, I know there are plenty of freeware applications that will do this
2) no, we don't need to conform to any specific government standard
Where do I look for ideas on how to start this?
Thanks if you can point me in the right direction.
Can it be achieved using C#? If so, how?
Simple algorithm:
Create a large file full of arbitrary data (for performance, it is best to use a pre-created file rather than regenerating random data each time; test it out.)
Create a sensible folder and file naming scheme so you can track the files. Your app should track them too, but if it crashes, especially near the end of the first few test runs, you'll want to be able to find and clean up your handiwork easily.
Write it to the HDD until it's full
Delete the files you created
Repeat above steps two more times
Update: more advanced considerations on wiping, per the subsequent discussion (see the sketch below):
On the first pass, write files filled with 0x00 bytes (all bits off)
On the second pass, write 0xFF bytes (all bits on)
On the last pass, repeat with 0x00
The above ignores a few details, such as the best file size, which depends on your file system anyway. You might also see different behavior from the OS as the drive nears full...
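A minimal sketch of those steps, assuming the pass patterns above; the folder path, chunk size, and per-file cap are arbitrary choices, and a real tool would need more care around errors and near-full-disk behavior:
using System;
using System.IO;
class FreeSpaceWiper
{
    static void Main()
    {
        // Three passes per the scheme above: all bits off, all bits on, off again.
        WipePass(@"D:\wipe", 0x00);
        WipePass(@"D:\wipe", 0xFF);
        WipePass(@"D:\wipe", 0x00);
    }
    // Fill the drive's free space with files of one repeated byte value,
    // then delete them.
    static void WipePass(string folder, byte fill)
    {
        Directory.CreateDirectory(folder);
        byte[] buffer = new byte[4 * 1024 * 1024];      // write in 4 MB chunks
        for (int i = 0; i < buffer.Length; i++) buffer[i] = fill;
        const long maxFileSize = 1L << 30;              // cap each file at 1 GB
        try
        {
            for (int fileIndex = 0; ; fileIndex++)
            {
                string path = Path.Combine(folder, $"wipe_{fileIndex:D6}.bin");
                using (var fs = new FileStream(path, FileMode.CreateNew))
                {
                    while (fs.Length < maxFileSize)
                        fs.Write(buffer, 0, buffer.Length); // throws IOException once the disk is full
                }
            }
        }
        catch (IOException)
        {
            // Disk full: the expected exit from the fill loop.
        }
        finally
        {
            Directory.Delete(folder, recursive: true);  // delete the wipe files
        }
    }
}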
This is really dangerous, but...
You can use the Defrag APIs (there is a C# wrapper for them) to get hold of the drive 'map', specifically target the free space, and write junk to those parts of the disk.
Check the SDelete documentation, maybe you can get a clue there.
You're going to have to do some low-level manipulation, so you'll certainly have to talk to the Win32 API. I haven't done this sort of thing, so I can't give you specifics, but a good place to start looking might be the Win32 API reference: http://msdn.microsoft.com/en-us/library/aa383749%28VS.85%29.aspx
I'm really not an expert in this field at all, but it seems to my naive understanding that you'll need to:
1) get info on where the filesystem starts & stops
2) using the non-deleted files as a reference, get a list of physical locations of what should be free space
3) write 0's to those locations
Maybe this isn't a great answer since I'm not an expert in the field, but it was a bit too long for a comment ;) I hope that helps a little.
System.Diagnostics.Process.Start("cipher.exe", @"/w:C:\");
This runs asynchronously by default, but you get the idea.
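A slightly fuller sketch, if you want to block until the wipe finishes (cipher /w is documented to make three passes over the free space: zeros, ones, then random data):
using System.Diagnostics;
// Run cipher.exe against the C: volume and wait for it to finish.
var psi = new ProcessStartInfo("cipher.exe", @"/w:C:\")
{
    UseShellExecute = false
};
using (var process = Process.Start(psi))
{
    process.WaitForExit();
}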
This code is from The Code Project, I think. I'm unsure where the original article is, but it does what you asked for:
Based on comments, I clearly need to spoon-feed a bit more...
You can do this very simply, based on your requirements.
Method 1: make one large file that fills the remaining free space on the drive, then wipe that file.
Method 2: make several files until the drive is full (this might be better if you want to use the machine while the wipe is running). You can then wipe each file in turn, so the total time the system spends with a full hard drive is shorter than with method 1, but it will likely be a bit slower and take a bit more code.
The advantages of this approach: the code is easy to write, and you don't have to play with low-level APIs that can screw you over.
using System;
using System.IO;
using System.Security.Cryptography;
namespace QuickStarterShared
{
public class Wipe
{
/// <summary>
/// Deletes a file in a secure way by overwriting it with
/// random garbage data n times.
/// </summary>
/// <param name="filename">Full path of the file to be deleted</param>
/// <param name="timesToWrite">Specifies the number of times the file should be overwritten</param>
public void WipeFile(string filename, int timesToWrite)
{
#if !DEBUG
try
{
#endif
if (File.Exists(filename))
{
// Set the files attributes to normal in case it's read-only.
File.SetAttributes(filename, FileAttributes.Normal);
// Calculate the total number of sectors in the file.
double sectors = Math.Ceiling(new FileInfo(filename).Length/512.0);
// Create a dummy-buffer the size of a sector.
byte[] dummyBuffer = new byte[512];
// Create a cryptographic Random Number Generator.
// This is what I use to create the garbage data.
RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
// Open a FileStream to the file; 'using' guarantees it is closed even on error.
using (FileStream inputStream = new FileStream(filename, FileMode.Open))
{
for (int currentPass = 0; currentPass < timesToWrite; currentPass++)
{
// Go to the beginning of the stream
inputStream.Position = 0;
// Loop all sectors
for (int sectorsWritten = 0; sectorsWritten < sectors; sectorsWritten++)
{
// Fill the dummy-buffer with random data
rng.GetBytes(dummyBuffer);
// Write it to the stream
inputStream.Write(dummyBuffer, 0, dummyBuffer.Length);
}
}
// Truncate the file to 0 bytes.
// This will hide the original file-length if you try to recover the file.
inputStream.SetLength(0);
}
// As an extra precaution I change the dates of the file so the
// original dates are hidden if you try to recover the file.
DateTime dt = new DateTime(2037, 1, 1, 0, 0, 0);
File.SetCreationTime(filename, dt);
File.SetLastAccessTime(filename, dt);
File.SetLastWriteTime(filename, dt);
File.SetCreationTimeUtc(filename, dt);
File.SetLastAccessTimeUtc(filename, dt);
File.SetLastWriteTimeUtc(filename, dt);
// Finally, delete the file
File.Delete(filename);
}
#if !DEBUG
}
catch (Exception e)
{
// Don't swallow failures silently; at least surface the error.
Console.Error.WriteLine(e);
}
#endif
}
}
#region Events
#region PassInfo
public delegate void PassInfoEventHandler(PassInfoEventArgs e);
public class PassInfoEventArgs : EventArgs
{
private readonly int cPass;
private readonly int tPass;
public PassInfoEventArgs(int currentPass, int totalPasses)
{
cPass = currentPass;
tPass = totalPasses;
}
/// <summary> Get the current pass </summary>
public int CurrentPass { get { return cPass; } }
/// <summary> Get the total number of passes to be run </summary>
public int TotalPasses { get { return tPass; } }
}
#endregion
#region SectorInfo
public delegate void SectorInfoEventHandler(SectorInfoEventArgs e);
public class SectorInfoEventArgs : EventArgs
{
private readonly int cSector;
private readonly int tSectors;
public SectorInfoEventArgs(int currentSector, int totalSectors)
{
cSector = currentSector;
tSectors = totalSectors;
}
/// <summary> Get the current sector </summary>
public int CurrentSector { get { return cSector; } }
/// <summary> Get the total number of sectors to be run </summary>
public int TotalSectors { get { return tSectors; } }
}
#endregion
#region WipeDone
public delegate void WipeDoneEventHandler(WipeDoneEventArgs e);
public class WipeDoneEventArgs : EventArgs
{
}
#endregion
#region WipeError
public delegate void WipeErrorEventHandler(WipeErrorEventArgs e);
public class WipeErrorEventArgs : EventArgs
{
private readonly Exception e;
public WipeErrorEventArgs(Exception error)
{
e = error;
}
public Exception WipeError{get{ return e;}}
}
#endregion
#endregion
}
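Hypothetical usage of the class above; the path and pass count are placeholders:
var wiper = new QuickStarterShared.Wipe();
wiper.WipeFile(@"C:\temp\secret.txt", 3); // overwrite three times, then delete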
I am connecting my application to a stock market live data provider using a web socket. When the market is live and the socket is open, it sends me nearly 45,000 lines a minute. I deserialize it line by line, write each line to a text file, and also read the text file and remove its first line. Handling the other work alongside the socket therefore becomes slow. Can you please help me make this process fast, on the order of 25,000 lines a minute?
string filePath = @"D:\Aggregate_Minute_AAPL.txt";
var records = (from line in File.ReadLines(filePath).AsParallel()
select line);
List<string> str = records.ToList();
str.ForEach(x =>
{
string result = x;
result = result.TrimStart('[').TrimEnd(']');
var jsonString = Newtonsoft.Json.JsonConvert.DeserializeObject<List<LiveAMData>>(x);
foreach (var item in jsonString)
{
string value = "";
string dirPath = @"D:\COMB1\MinuteAggregates";
string[] fileNames = null;
fileNames = System.IO.Directory.GetFiles(dirPath, item.sym+"_*.txt", System.IO.SearchOption.AllDirectories);
if(fileNames.Length > 0)
{
string _fileName = fileNames[0];
var lineList = System.IO.File.ReadAllLines(_fileName).ToList();
lineList.RemoveAt(0);
var _item = lineList[lineList.Count - 1];
if (!_item.Contains(item.sym))
{
lineList.RemoveAt(lineList.Count - 1);
}
System.IO.File.WriteAllLines((_fileName), lineList.ToArray());
value = $"{item.sym},{item.s},{item.o},{item.h},{item.c},{item.l},{item.v}{Environment.NewLine}";
using (System.IO.StreamWriter sw = System.IO.File.AppendText(_fileName))
{
sw.Write(value);
}
}
}
});
How can I make this process fast? When the application performs this work it only handles around 3,000 to 4,000 lines, and when this process is not running it executes 25,000 lines per minute. So how can I increase the throughput with all this code?
First you need to clean up your code to gain more visibility. I did a quick refactor, and this is what I got:
const string FilePath = @"D:\Aggregate_Minute_AAPL.txt";
class SomeClass
{
public string Sym { get; set; }
public string Other { get; set; }
}
private void Something() {
File
.ReadLines(FilePath)
.AsParallel()
.Select(x => x.TrimStart('[').TrimEnd(']'))
.Select(JsonConvert.DeserializeObject<List<SomeClass>>)
.ForAll(WriteRecord);
}
private const string DirPath = @"D:\COMB1\MinuteAggregates";
private const string Separator = @",";
private void WriteRecord(List<SomeClass> data)
{
foreach (var item in data)
{
var fileNames = Directory
.GetFiles(DirPath, item.Sym+"_*.txt", SearchOption.AllDirectories);
foreach (var fileName in fileNames)
{
var fileLines = File.ReadAllLines(fileName)
.Skip(1).ToList();
var lastLine = fileLines.Last();
if (!lastLine.Contains(item.Sym))
{
fileLines.RemoveAt(fileLines.Count - 1);
}
fileLines.Add(
new StringBuilder()
.Append(item.Sym)
.Append(Separator)
.Append(item.Other)
.Append(Environment.NewLine)
.ToString()
);
File.WriteAllLines(fileName, fileLines);
}
}
}
From here it should be easier to play with List.AsParallel to check how, and with which parameters, the code runs fastest.
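For example, WithDegreeOfParallelism is the main knob worth experimenting with (the 4 below is an arbitrary starting point, not a recommendation):
File
    .ReadLines(FilePath)
    .AsParallel()
    .WithDegreeOfParallelism(4) // tune against your disk and CPU
    .Select(x => x.TrimStart('[').TrimEnd(']'))
    .Select(JsonConvert.DeserializeObject<List<SomeClass>>)
    .ForAll(WriteRecord);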
Also:
You are opening the output file twice (once to rewrite the lines, once to append)
The removes are also somewhat expensive, and removing at index 0 is the most costly (though with few elements it may not make much difference)
if (fileNames.Length > 0) is unnecessary; just use the loop, and if the list is empty the loop body will simply be skipped
You can try StringBuilder instead of string interpolation
I hope these hints help you improve your time, and that I haven't forgotten anything.
Edit
We have nearly 10,000 files in our directory. So when the process is
running, it throws an error that the process cannot access the file
because it is being used by another process.
Well, is there a possibility that duplicate file names come through in your process lines?
If that is the case, you could try a simple approach: a retry after some milliseconds, something like
private const int SleepMillis = 5;
private const int MaxRetries = 3;
public void WriteFile(string fileName, string[] fileLines, int retries = 0)
{
try
{
File.WriteAllLines(fileName, fileLines);
}
catch(Exception e) //Catch the special type if you can
{
if (retries >= MaxRetries)
{
Console.WriteLine("Too many tries with no success");
throw; // rethrow exception
}
Thread.Sleep(SleepMillis);
WriteFile(fileName, fileLines, ++retries); // try again
}
}
I tried to keep it simple, but there are some notes:
- If you can make your methods async, it could be an improvement to swap the sleep for a Task.Delay, but you need to know and understand well how async works
- If the collision happens a lot, then you should try another approach, something like a concurrent map with semaphores (see the sketch below)
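A rough sketch of that per-file locking idea, assuming the file name is the collision key; a SemaphoreSlim per file serializes writers to the same file while unrelated files proceed in parallel:
using System.Collections.Concurrent;
using System.IO;
using System.Threading;
public static class SynchronizedFileWriter
{
    // One gate per file name; GetOrAdd makes the lookup thread-safe.
    private static readonly ConcurrentDictionary<string, SemaphoreSlim> Locks =
        new ConcurrentDictionary<string, SemaphoreSlim>();
    public static void WriteFile(string fileName, string[] fileLines)
    {
        SemaphoreSlim gate = Locks.GetOrAdd(fileName, _ => new SemaphoreSlim(1, 1));
        gate.Wait();
        try
        {
            File.WriteAllLines(fileName, fileLines);
        }
        finally
        {
            gate.Release();
        }
    }
}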
Second edit
In the real scenario I am connecting to a websocket and receiving 70,000 to
100,000 (1 lakh) records every minute, after which I am bifurcating those
records by their live streaming data and storing each in its own file. And
that becomes slower when I apply your concept with 11,000 files.
It is a hard problem. From what I understand, you're talking about roughly 1,166 records per second; at that rate, little details can become big bottlenecks.
At that point I think it is better to consider other solutions; it could be too much I/O for the disk, too many threads or too few, the network...
You should start by profiling the app to check where it spends the most time, and focus on that area. How many resources is it using? How many resources do you have? How do the memory, processor, garbage collector, and network behave? Do you have an SSD?
You need a clear view of what is slowing you down so you can attack it directly; that will depend on a lot of things, and it will be hard to help with that part :(.
There are tons of tools for profiling C# apps, and many ways to attack this problem (spread the load across several servers, use something like Redis to save data really quickly, use an event store so you can work with events...).
I am currently using the VirusTotal.NET NuGet package in my C# MVC project to scan uploaded files. I am using the example given here: https://github.com/Genbox/VirusTotal.NET
VirusTotal virusTotal = new VirusTotal("YOUR API KEY HERE");
//Use HTTPS instead of HTTP
virusTotal.UseTLS = true;
//Create the EICAR test virus. See http://www.eicar.org/86-0-Intended-use.html
byte[] eicar =
Encoding.ASCII.GetBytes(@"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*");
//Check if the file has been scanned before.
FileReport report = await virusTotal.GetFileReportAsync(eicar);
Console.WriteLine("Seen before: " + (report.ResponseCode == FileReportResponseCode.Present ? "Yes" : "No"));
I am loading the byte array of the uploaded file into the eicar variable in the above code. According to the given example, it reports whether the file has been scanned before. But what I actually need is whether the file is infected or not. Can anyone suggest a solution?
Checking out the UrlReport class, the report you get back has a lot more info than just the response code used in their code sample. Three fields look interesting:
/// <summary>
/// How many engines flagged this resource.
/// </summary>
public int Positives { get; set; }
/// <summary>
/// The scan results from each engine.
/// </summary>
public Dictionary<string, UrlScanEngine> Scans { get; set; }
/// <summary>
/// How many engines scanned this resource.
/// </summary>
public int Total { get; set; }
This may give you the results you're looking for. VirusTotal actually returns results for multiple scan engines, some of which might detect a virus and some which might not.
Console.WriteLine($"{report.Positives} out of {report.Total} scan engines detected a virus.");
You could do anything you want with that data, like calculate the percentage:
var result = 100m * report.Positives / report.Total;
Console.WriteLine($"{result}% of scan engines detected a virus.");
Or just treat a majority of positive scan engine results as an overall positive result:
var result = Math.Round(report.Positives / Convert.ToDecimal(report.Total));
Console.WriteLine($"Virus {(result == 0 ? "not detected" : "detected")}");
The user is asked to type in the serial number on the device he's using. Then the program uses this serial number for all the functions. This was made so that the user can easily replace said device, without any technical help - just typing the new serial number in the application.
However, the way I've done it, the user needs to type in the serial number each time the program is opened, and it's kind of tedious.
Is there a way to store the last entered serial number, so that it loads the next time the program is run?
I have checked this link. While it seems promising, it hasn't solved the problem for me. I'll explain with my code below.
Here is the code asking for the user input serial number:
byte[] xbee;
var xbee_serienr = prop1_serienr.Text;
xbee = new byte[xbee_serienr.Length / 2];
for (var i = 0; i < xbee.Length; i++)
{
xbee[i] = byte.Parse(xbee_serienr.Substring(i * 2, 2), NumberStyles.HexNumber);
}
I tried the aforementioned link, and saved the setting like so:
Name: prop1_serienr, Type: string, Scope: user, Value: 0013A20040A65E23
And then use it in the code like so:
prop1_serienr = Xbee.Properties.Settings.Default.prop1_serienr;
//keep in mind I made the silly decision using Xbee as namespace and xbee as a variable
But the prop1_serienr remains empty this way.
Any tips or guidelines on how to make this easier than having to type it every time the program starts would be greatly appreciated. If that's my only option I will resort to hard coding the serial numbers and then change the code every time a device is changed.
Hard coding the serial numbers is really not an option, especially when something as simple as saving a serial number is not very complicated at all (though like all things, complicated it can become, if you let it).
The very easy approach:
public partial class Form1 : Form
{
private byte[] _xbee;
public Form1()
{
if (!File.Exists("serial.txt"))
{
// Dispose the stream File.Create returns, or the file stays locked.
File.Create("serial.txt").Dispose();
}
else
{
_xbee = File.ReadAllBytes("serial.txt");
}
InitializeComponent();
}
private void btnSaveSerial_Click(object sender, EventArgs e)
{
byte[] xbee;
var xbee_serienr = prop1_serienr.Text;
xbee = new byte[xbee_serienr.Length / 2];
for (var i = 0; i < xbee.Length; i++)
{
xbee[i] = byte.Parse(xbee_serienr.Substring(i * 2, 2), NumberStyles.HexNumber);
}
_xbee = xbee;
File.WriteAllBytes("serial.txt", xbee);
}
}
It reads the bytes from the file at startup (if the file exists).
It writes the bytes to the file when the user has changed the serial (and clicked on a button to save it).
As I've said, you can make this as easy or as complicated as you like, but this should get you going.
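As an aside, the Properties.Settings approach from the question usually stays empty for one specific reason: Save() is never called, so the value is not persisted across runs. A sketch, assuming the user-scoped string setting prop1_serienr from the question:
// On save: user-scoped settings are writable at runtime; application-scoped are not.
Xbee.Properties.Settings.Default.prop1_serienr = prop1_serienr.Text;
Xbee.Properties.Settings.Default.Save(); // without Save() the value is lost on exit
// On the next start, read it back into the text box:
prop1_serienr.Text = Xbee.Properties.Settings.Default.prop1_serienr;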
There is no shortage of string-search performance questions out there, yet I still can't make heads or tails of what the best approach is.
Long story short, I have committed to moving from 4NT to PowerShell. In leaving 4NT I am going to miss the super quick console string-searching utility that came with it, called FFIND. I have decided to use my rudimentary C# programming skills to try and create my own utility to use in PowerShell that is just as quick.
So far, on a string search across hundreds of directories and a few thousand files, some of which are quite large, FFIND takes 2.4 seconds and my utility 4.4 seconds... after I have run mine at least once????
The first time I run them, FFIND takes about the same time, but mine takes over a minute? What is this? Loading of libraries? File indexing? Am I doing something wrong in my code? I don't mind waiting a little longer, but the difference is extreme enough that if there is a better language or approach, I would rather start down that path now before I get too invested.
Do I need to pick another language to write a string search that will be lightning fast?
I need this utility to search through thousands of files for strings in web code, C# code, and another proprietary language that uses text files. I also need to be able to use it to find strings in very large (MB-sized) log files.
class Program
{
public static int linecounter;
public static int filecounter;
static void Main(string[] args)
{
//
//INIT
//
filecounter = 0;
linecounter = 0;
string word;
// Read properties from application settings.
string filelocation = Properties.Settings.Default.FavOne;
// Set Args from console.
word = args[0];
//
//Recursive search for sub folders and files
//
string startDIR;
string filename;
startDIR = Environment.CurrentDirectory;
//startDIR = "c:\\SearchStringTestDIR\\";
filename = args[1];
DirSearch(startDIR, word, filename);
Console.WriteLine(filecounter + " " + "Files found");
Console.WriteLine(linecounter + " " + "Lines found");
Console.ReadKey();
}
static void DirSearch(string dir, string word, string filename)
{
string fileline;
string ColorOne = Properties.Settings.Default.ColorOne;
string ColorTwo = Properties.Settings.Default.ColorTwo;
ConsoleColor valuecolorone = (ConsoleColor)Enum.Parse(typeof(ConsoleColor), ColorOne);
ConsoleColor valuecolortwo = (ConsoleColor)Enum.Parse(typeof(ConsoleColor), ColorTwo);
try
{
foreach (string f in Directory.GetFiles(dir, filename))
{
using (StreamReader file = new StreamReader(f))
{
bool t = true;
int counter = 1;
while ((fileline = file.ReadLine()) != null)
{
if (fileline.Contains(word))
{
if (t)
{
t = false;
filecounter++;
Console.ForegroundColor = valuecolorone;
Console.WriteLine(" ");
Console.WriteLine(f);
Console.ForegroundColor = valuecolortwo;
}
linecounter++;
Console.WriteLine(counter.ToString() + ". " + fileline);
}
counter++;
}
}
}
foreach (string d in Directory.GetDirectories(dir))
{
//Console.WriteLine(d);
DirSearch(d,word,filename);
}
}
catch (System.Exception ex)
{
Console.WriteLine(ex.Message);
}
}
}
If you want to speed up your code, run a performance analysis and see what is taking the most time. I can almost guarantee the longest step here will be
fileline.Contains(word)
This function is called on every line of the file, on every file. Naively searching for a word in a string can take len(string) * len(word) comparisons.
You could code your own Contains method that uses a faster string-comparison algorithm (search for "fast exact string matching"). You could try using a regex and seeing if that gives you a performance enhancement. But I think the simplest optimization you can try is:
Don't read every line. Read the entire content of the file into one large string.
StreamReader streamReader = new StreamReader(filePath, Encoding.UTF8);
string text = streamReader.ReadToEnd();
Run Contains on this.
If you need all the matches in a file, then you need to use something like Regex.Matches(string, string).
After you have used the regex to get all the matches for a single file, you can iterate over the match collection (if there are any matches). For each match, you can recover the line of the original file by reading backward and forward from the match's Index property to the nearest '\n' characters; the string between those two newlines is your line.
This will be much faster, I guarantee it.
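A sketch of that approach; Regex.Escape keeps the search word literal, and the line is recovered by scanning to the surrounding newlines:
using System;
using System.IO;
using System.Text.RegularExpressions;
static void PrintMatchingLines(string filePath, string word)
{
    string text = File.ReadAllText(filePath);
    foreach (Match m in Regex.Matches(text, Regex.Escape(word)))
    {
        // Scan backward and forward from the match to the enclosing newlines.
        int start = text.LastIndexOf('\n', m.Index) + 1; // 0 when the match is on the first line
        int end = text.IndexOf('\n', m.Index);
        if (end < 0) end = text.Length;                  // last line may have no trailing newline
        Console.WriteLine(text.Substring(start, end - start));
    }
}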
If you want to go even further, some things I've noticed are :
Remove the try/catch statement from around the loop. Only use it exactly where you need it; I would not use it at all here.
Also make sure ngen has run on your system. Most setups have this, but sometimes it hasn't run; you can see the process in Process Explorer. Ngen generates a native image of the C# managed bytecode so the code does not have to be JIT-compiled each time, but can run natively. This speeds up C# a lot.
EDIT
Other points:
Why is there a difference between first and subsequent run times? It looks like caching: the OS may have cached the requests for the directories and files and the loading of the program itself; one usually sees speedups after a first run. Ngen could also be playing a part here, generating the native image after the first run and storing it in the native image cache.
In general, I find C# performance too variable for my liking. If the suggested optimizations are not satisfactory and you want more consistent performance results, try another language, one that is not 'managed'. C is probably the best for your needs.
Hi guys, I have a dictionary which has to be shared between two different exe files. The first application creates a key and stores it in the dictionary, then the other application creates a key and stores it in the dictionary.
At the moment I do this:
private static void WriteToFile(Dictionary<string, byte[]> dictionary, string path)
{
Contract.Requires(dictionary != null);
Contract.Requires(!string.IsNullOrEmpty(path));
if (timestamp != File.GetLastWriteTime(DatabasePath))
{
// FileMode.Create truncates the file; File.OpenWrite would leave stale
// bytes at the end if the new content is shorter than the old.
using (FileStream fs = new FileStream(path, FileMode.Create))
using (var writer = new BinaryWriter(fs))
{
// Put count.
writer.Write(dictionary.Count);
// Write pairs.
foreach (var pair in dictionary)
{
writer.Write(pair.Key);
writer.Write(pair.Value);
}
timestamp = DateTime.Now;
File.SetLastWriteTime(DatabasePath, timestamp);
}
}
}
/// <summary>
/// This is used to read a dictionary from a file
/// http://www.dotnetperls.com/dictionary-binary
/// </summary>
/// <param name="path">The path to the file</param>
/// <returns>The dictionary read from the file</returns>
private static Dictionary<string, byte[]> ReadFromFile(string path)
{
Contract.Requires(!string.IsNullOrEmpty(path));
var result = new Dictionary<string, byte[]>();
using (FileStream fs = File.OpenRead(path))
using (var reader = new BinaryReader(fs))
{
// Determine the amount of key value pairs to read.
int count = reader.ReadInt32();
// Read in all the pairs.
for (int i = 0; i < count; i++)
{
string key = reader.ReadString();
//// The byte value is hardcoded as the keysize is consistent
byte[] value = reader.ReadBytes(513);
result[key] = value;
}
}
return result;
}
Then when I want to store a key I call this method:
public static bool StoreKey(byte[] publicKey, string uniqueIdentifier)
{
Contract.Requires(ValidPublicKeyBlob(publicKey));
Contract.Requires(publicKey != null);
Contract.Requires(uniqueIdentifier != null);
Contract.Requires(uniqueIdentifier != string.Empty);
bool success = false;
if (File.Exists(DatabasePath))
{
keyCollection = ReadFromFile(DatabasePath);
}
if (!keyCollection.ContainsKey(uniqueIdentifier))
{
if (!keyCollection.ContainsValue(publicKey))
{
keyCollection.Add(uniqueIdentifier, publicKey);
success = true;
WriteToFile(keyCollection, DatabasePath);
}
}
return success;
}
When the programs generate keys and we then try to access them, the dictionary only has one key. What am I doing wrong? Each key and string is stored perfectly on its own, but I'm afraid the programs are overwriting each other's files or something.
Thank you very much in advance, any help is greatly appreciated.
PS: DatabasePath is the path where I want to save the file, created as a field.
It is hard to say exactly what is going on, since you've not provided information on how many items are in the dictionary and so on, but it seems like you've encountered some kind of file-access issue from accessing the same file from multiple processes.
You can use a named Mutex as a cross-process synchronization object: before accessing the file, each process waits until the Mutex handle is released so it can acquire ownership, and the other process waits in turn before accessing the file.
// Create a mutex
Mutex mutex = new Mutex(false, "DictionaryAccessMutex");
// Acquire an ownership
mutex.WaitOne();
// Release
mutex.ReleaseMutex();
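In practice the ownership should cover the whole read-modify-write cycle and be released in a finally block; a minimal sketch using the methods from the question:
using System.Threading;
// Both exes must create the mutex with the same name to share it.
using (var mutex = new Mutex(false, "DictionaryAccessMutex"))
{
    mutex.WaitOne(); // block until the other process releases the file
    try
    {
        var keys = ReadFromFile(DatabasePath);
        // ... add or update entries in keys ...
        WriteToFile(keys, DatabasePath);
    }
    finally
    {
        mutex.ReleaseMutex();
    }
}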
EDIT: New finding
Also, you are trying to write immediately after the read, so perhaps the file system operation has not completed yet and the write fails. I'm not 100% sure about this; the .NET managed classes like File/StreamReader/etc. probably handle such cases already, but I believe it is worth double-checking since it is not 100% clear what happened. Try adding a small delay, like Thread.Sleep(500), between the read and write operations.
EDIT: One more thing you can do is download the Process Monitor SysInternals utility and see which operations fail when accessing the given file. Add a filter Path = file name and you will be able to see what is going on at the low level.
Writing to a file in parallel is generally not the best idea. You have two options here:
Use a mutex for cross-process synchronization to regulate access to the file.
Forward all write requests to a third process that has exclusive ownership of the file and does the actual writing.
So Process 1 loads the dictionary, adds an item, calls write.
So Process 2 loads the dictionary, adds an item, calls write.
You get whichever one writes second, and you don't know which one it will be.
Trying to make this work is way more trouble than it's worth, and it will be as future-proof as an inflatable dartboard.
Use a mutex at a push, or a third process to maintain the dictionary.