As part of cleaning up config files in a build script, we have something like this:
Regex.IsMatch(LongStringOfFilecontents, @"Password=""[0-9a-zA-Z]*""")
and
Regex.IsMatch(LongStringOfFilecontents, @"Password2=""[0-9a-zA-Z]*""")
When a match is found, the passwords are replaced with a dummy value before the app is released.
The problem is that it now finds "Password" but not "Password2" or "Password1".
This C# .NET 3.5 code has been in use for several years, has been run hundreds of times, and has not been changed. As recently as a few days ago it was run successfully. As of this morning it chokes on "Password2". The config file really does contain both Password="some arbitrary value" and Password2="some arbitrary value".
I suspected that "d2" might be taken as a pattern, but it is not inside a {}, and as mentioned, it has behaved correctly for several years.
I have tested against a possible timeout, and that does not seem to be the issue. I have tried the RegexOptions.IgnoreCase option, which should not matter anyway ([a-zA-Z], right?), and that also has no effect.
It fails on two different (Win 7 Professional, 64 bit, SP1) machines, but works as expected on an XP machine (SP 3).
Unless this was the result of this morning's Windows 7 Automatic Update, I'm baffled.
Here's the complete context:
ReplaceInFile(filename, @"Password1=""[0-9a-zA-Z]*""", @"Password1=""REPLACE_ME""");
private static bool ReplaceInFile(string filename, string regexp, string replacement)
{
    try
    {
        if (File.Exists(filename))
        {
            string oldContents = null;
            using (StreamReader reader = new StreamReader(filename, true))
            {
                oldContents = reader.ReadToEnd();
            }
            if (Regex.IsMatch(oldContents, regexp))
            {
                string newContents = Regex.Replace(oldContents, regexp, replacement);
                if (oldContents != newContents)
                {
                    File.WriteAllText(filename, newContents);
                    return true;
                }
            }
            else
            {
                BuildFailed("DID NOT FIND " + regexp + " in " + filename + " Case-SeNsiTive?");
            }
        }
        return false;
    }
    catch (Exception ex)
    {
        BuildFailed(ex.Message);
        return false;
    }
}
And here's a small portion of the large file that is being examined:
<Kirk Enabled="1" Type="8000" Password1="test_pwd" Password2="dev_pwd" UserName1="admin" UserName2="GW-DECT/admin" DutyCycle="1" TcpPort="10000" ServerIP="localhost" />
I think your problem is more likely to be that the contents of the password do not match [0-9a-zA-Z], rather than anything with the Password= or Password2= part. Most likely there is a non-alphanumeric character in that password.
Give this regex a try (testing on RegexPal.com seemed to work). I did escape the equal sign.
#"Password[1-3]\=""[0-9a-zA-Z]*"""
Here was my test text:
Password1="Blah"
Password2="blah2"
Password3="045and2"
I have no coding experience, but I have been trying to fix a program that broke many years ago. I've been fumbling through fixing things, but I have stumbled upon a piece that I can't fix. From what I've gathered, you get Alexa to append a Dropbox file, and the program reads that file looking for the change and, depending on what it is, executes a certain command based on a customizable list in an XML document.
I've gotten this to work about five times in the hundreds of attempts I've made; every other time it crashes and Visual Studio gives me: "System.IO.IOException: 'The process cannot access the file 'C:\Users\\"User"\Dropbox\controlcomputer\controlfile.txt' because it is being used by another process.'"
This is the file that Dropbox appends, and this only happens when I append the file; otherwise the program works fine and I can navigate it.
I believe this is the code that handles this as this is the only mention of StreamReader in all of the code:
public static void launchTaskControlFile(string path)
{
    int num = 0;
    StreamReader streamReader = new StreamReader(path);
    string str = "";
    while (true)
    {
        string str1 = streamReader.ReadLine();
        string str2 = str1;
        if (str1 == null)
        {
            break;
        }
        str = str2.TrimStart(new char[] { '#' });
        num++;
    }
    streamReader.Close();
    if (str.Contains("Google"))
    {
        MainWindow.googleSearch(str);
    }
    else if (str.Contains("LockDown") && Settings.Default.lockdownEnabled)
    {
        MainWindow.executeLock();
    }
    else if (str.Contains("Shutdown") && Settings.Default.shutdownEnabled)
    {
        MainWindow.executeShutdown();
    }
    else if (str.Contains("Restart") && Settings.Default.restartEnabled)
    {
        MainWindow.executeRestart();
    }
    else if (!str.Contains("Password"))
    {
        MainWindow.launchApplication(str);
    }
    else
    {
        SendKeys.SendWait(" ");
        Thread.Sleep(500);
        string str3 = "potato";
        for (int i = 0; i < str3.Length; i++)
        {
            SendKeys.SendWait(str3[i].ToString());
        }
    }
    Console.ReadLine();
}
I've searched online but have no idea how I could apply anything I've found to this. Once again, before working on this I had no coding experience, so act like you're talking to a toddler.
Sorry if anything I added here is unnecessary; I'm just trying to be thorough. Any help would be appreciated.
I set up a try/delay pattern like Adriano Repetti said, and it seems to be working. Doing that flat out would only keep it from crashing, so I had to add a loop around it and set the loop to stop when a variable hit 1, which happens whenever any command type is triggered. That takes it out of the loop and sets the integer back to 0, triggering the loop again. That seems to be working now.
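For reference, here is a minimal sketch of that try/delay pattern, assuming the IOException comes from Dropbox briefly locking the file; the helper name, attempt count, and delay are illustrative (not from the original program), and it assumes using System.IO and System.Threading:
// Open the file with retries, waiting out the other process's lock.
private static string ReadLockedFile(string path, int maxAttempts = 10)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            // FileShare.ReadWrite lets us read while Dropbox still has the file open.
            using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            using (var reader = new StreamReader(stream))
            {
                return reader.ReadToEnd();
            }
        }
        catch (IOException) when (attempt < maxAttempts)
        {
            Thread.Sleep(200); // file still locked; wait and try again
        }
    }
}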
I have the following upload code using Unity's UnityWebRequest API (Unity 2019.2.13f1):
public IEnumerator UploadJobFile(string jobId, string path)
{
    if (!File.Exists(path))
    {
        Debug.LogError("The given file to upload does not exist. Please re-create the recording and try again.");
        yield break;
    }
    UnityWebRequest upload = new UnityWebRequest(hostURL + "/jobs/upload/" + jobId);
    upload.uploadHandler = new UploadHandlerFile(path);
    upload.downloadHandler = new DownloadHandlerBuffer();
    upload.method = UnityWebRequest.kHttpVerbPOST;
    upload.SetRequestHeader("filename", Path.GetFileName(path));
    UnityWebRequestAsyncOperation op = upload.SendWebRequest();
    while (!upload.isDone)
    {
        //Debug.Log("Uploading file...");
        Debug.Log("Uploading file. Progress " + (int)(upload.uploadProgress * 100f) + "%"); // <-----------------
        yield return null;
    }
    if (upload.isNetworkError || upload.isHttpError)
    {
        Debug.LogError("Upload error:\n" + upload.error);
    }
    else
    {
        Debug.Log("Upload success");
    }
    // this is needed to clear resources on the file
    upload.Dispose();
}
string hostURL = "http://localhost:8080";
string jobId = "manualUploadTest";
string path = "E:/Videos/short.mp4";

void Update()
{
    if (Input.GetKeyDown(KeyCode.O))
    {
        Debug.Log("O key was pressed.");
        StartCoroutine(UploadAndTest(jobId, path));
    }
}
And the files I receive on the server side arrive broken, especially if they are larger (30 MB or more). They are missing bytes in the end and sometimes have entire byte blocks duplicated in the middle.
This happens both when testing client and server on the same machine or when running on different machines.
The server does not complain - from its perspective, no transport errors happened.
I noticed that if I comment out the access to upload.uploadProgress (and, e.g., instead use the commented-out debug line above it, which just prints a string literal), the files stay intact. Ditching the while loop altogether and replacing it with yield return op also works.
I tested this strange behavior repeatedly in an outer loop - usually after at most 8 repetitions with the "faulty" code, the file appears broken. If I use the "correct" variant, 100 uploads (update: 500) in a row were successful.
Does upload.uploadProgress have side effects? For what it's worth, the same happens if I print op.progress instead - the files are also broken.
This sounds like a real bug. uploadProgress obviously should not have side effects.
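Until the engine bug is fixed, a workaround consistent with the asker's own findings is to stop polling the progress property and yield on the request operation instead. A minimal sketch of the changed portion of UploadJobFile:
UnityWebRequestAsyncOperation op = upload.SendWebRequest();

// Instead of a while (!upload.isDone) loop that reads uploadProgress,
// wait on the operation itself; the coroutine resumes when it completes.
yield return op;

if (upload.isNetworkError || upload.isHttpError)
    Debug.LogError("Upload error:\n" + upload.error);
else
    Debug.Log("Upload success");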
I have an executable file that processes a large number (1000+) of strings and adds each one to a List of strings. It is coded in C#, compiled in Visual Studio 2017 on a Windows machine, then exported to and run on a Linux machine with Mono. Oddly enough, writing all of the strings to a text file works just fine, but adding them to the list causes the program's user interface to freeze and become unresponsive.
Here's my code:
client.BigDB.LoadRange("Clans", "ByName", null, startAt + "0000000000", stopAt + "zzzzzzzzzz", 1000, delegate (DatabaseObject[] o)
{
    foreach (DatabaseObject obj in o)
    {
        //this section here does not work as intended
        //string ClanName = obj.GetString("name");
        //ClanNames.Add(ClanName);
        //main.ui.AppendTestBox(ClanName);
        //Clans++;

        //but this section works perfectly
        using (StreamWriter w = File.AppendText("ClanNameList.txt"))
        {
            w.Write(obj.GetString("name") + Environment.NewLine);
        }
    }
});
After inspecting the output file, I suspect that it is getting caught on the following string: "AK Union, Local 47". It processed every previous kind of character without problems, but it appears not to like commas for some reason. How do I get around this, if that's actually what's going on?
I did try to search for this problem on Google and this site, but the search results are wildly unhelpful and quite unrelated to what I need :(
I can't see exactly what your problem is, though one thing I would suggest is updating the UI once via a StringBuilder, not 1000 times:
client.BigDB.LoadRange("Clans", "ByName", null, startAt + "0000000000", stopAt + "zzzzzzzzzz", 1000, delegate (DatabaseObject[] o)
{
    var sb = new StringBuilder();
    foreach (DatabaseObject obj in o)
    {
        var name = obj.GetString("name");
        ClanNames.Add(name);
        sb.AppendLine(name); // AppendLine keeps the per-name line breaks the file version had
    }
    // One UI update for the whole batch instead of 1000 separate ones.
    main.ui.AppendTestBox(sb.ToString());
});
StringBuilder.Append Method (System.Text)
I have an app that reads from text files to determine which reports should be generated. It works as it should most of the time, but once in a while, the program deletes one of the text files it reads from/writes to. Then an exception is thrown ("Could not find file") and progress ceases.
Here is some pertinent code.
First, reading from the file:
List<String> delPerfRecords = ReadFileContents(DelPerfFile);
. . .
private static List<String> ReadFileContents(string fileName)
{
    List<String> fileContents = new List<string>();
    try
    {
        fileContents = File.ReadAllLines(fileName).ToList();
    }
    catch (Exception ex)
    {
        RoboReporterConstsAndUtils.HandleException(ex);
    }
    return fileContents;
}
Then, writing to the file -- it marks the record/line in that file as having been processed, so that the same report is not re-generated the next time the file is examined:
MarkAsProcessed(DelPerfFile, qrRecord);
. . .
private static void MarkAsProcessed(string fileToUpdate, string qrRecord)
{
    try
    {
        var fileContents = File.ReadAllLines(fileToUpdate).ToList();
        for (int i = 0; i < fileContents.Count; i++)
        {
            if (fileContents[i] == qrRecord)
            {
                fileContents[i] = string.Format("{0}{1} {2}",
                    qrRecord, RoboReporterConstsAndUtils.COMPLETED_FLAG, DateTime.Now);
            }
        }
        // Will this automatically overwrite the existing?
        File.Delete(fileToUpdate);
        File.WriteAllLines(fileToUpdate, fileContents);
    }
    catch (Exception ex)
    {
        RoboReporterConstsAndUtils.HandleException(ex);
    }
}
So I do delete the file, but immediately replace it:
File.Delete(fileToUpdate);
File.WriteAllLines(fileToUpdate, fileContents);
The files being read have contents such as this:
Opas,20170110,20161127,20161231-COMPLETED 1/10/2017 12:33:27 AM
Opas,20170209,20170101,20170128-COMPLETED 2/9/2017 11:26:04 AM
Opas,20170309,20170129,20170225-COMPLETED
Opas,20170409,20170226,20170401
If "-COMPLETED" appears at the end of the record/row/line, it is ignored - will not be processed.
Also, if the second element (at index 1) is a date in the future, it will not be processed (yet).
So, for the examples shown above, the first three have already been done and will be subsequently ignored. The fourth one will not be acted on until on or after April 9th, 2017 (at which time the data within the date range of the last two dates will be retrieved). A sketch of these two skip rules follows.
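A minimal sketch of those rules, for illustration only (the helper name and field handling are assumptions; the real code uses ConvertCRVRecordToQueuedReport and a QueuedReports type, and this assumes using System.Globalization):
// Sketch: a row is processed only if it is not marked completed
// and its generation date (field index 1, yyyyMMdd) is not in the future.
static bool ShouldProcess(string record)
{
    if (record.Contains("-COMPLETED"))
        return false;

    string[] fields = record.Split(',');
    DateTime dateToGenerate = DateTime.ParseExact(
        fields[1], "yyyyMMdd", CultureInfo.InvariantCulture);

    return dateToGenerate <= DateTime.Today;
}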
Why is the file sometimes deleted? What can I do to prevent it from ever happening?
If helpful, in more context, the logic is like so:
internal static string GenerateAndSaveDelPerfReports()
{
    string allUnitsProcessed = String.Empty;
    bool success = false;
    try
    {
        List<String> delPerfRecords = ReadFileContents(DelPerfFile);
        List<QueuedReports> qrList = new List<QueuedReports>();
        foreach (string qrRecord in delPerfRecords)
        {
            var qr = ConvertCRVRecordToQueuedReport(qrRecord);
            // Rows that have already been processed return null
            if (null == qr) continue;
            // If the report has not yet been run, and it is due, add it to the list
            if (qr.DateToGenerate <= DateTime.Today)
            {
                var unit = qr.Unit;
                qrList.Add(qr);
                MarkAsProcessed(DelPerfFile, qrRecord);
                if (String.IsNullOrWhiteSpace(allUnitsProcessed))
                {
                    allUnitsProcessed = unit;
                }
                else if (!allUnitsProcessed.Contains(unit))
                {
                    allUnitsProcessed = allUnitsProcessed + " and " + unit;
                }
            }
        }
        foreach (QueuedReports qrs in qrList)
        {
            GenerateAndSaveDelPerfReport(qrs);
            success = true;
        }
    }
    catch
    {
        success = false;
    }
    if (success)
    {
        return String.Format("Delivery Performance report[s] generated for {0} by RoboReporter2017", allUnitsProcessed);
    }
    return String.Empty;
}
How can I ironclad this code to prevent the files from being periodically trashed?
UPDATE
I can't really test this, because the problem occurs so infrequently, but I wonder if adding a "pause" between the File.Delete() and the File.WriteAllLines() would solve the problem?
UPDATE 2
I'm not absolutely sure what the answer to my question is, so I won't add this as an answer, but my guess is that the File.Delete() and File.WriteAllLines() calls were occurring so close together that the delete was sometimes hitting both the old and the new copy of the file.
If so, a pause between the two calls might have solved the problem 99.42% of the time, but from what I found here, it seems the File.Delete() is redundant/superfluous anyway, so I tested with File.Delete() commented out and it worked fine; I'm just doing without that occasionally problematic call now. I expect that to solve the issue.
// Will this automatically overwrite the existing?
File.Delete(fileToUpdate);
File.WriteAllLines(fileToUpdate, fileContents);
I would simply add an extra parameter to WriteAllLines() (which could default to false) to tell the function to open the file in overwrite mode, and not call File.Delete() at all.
Do you currently check the return value of the file open?
Update: OK, it looks like WriteAllLines() is a .NET Framework function and therefore cannot be changed, so I deleted this answer. However, this now shows up in the comments as a proposed solution on another forum:
"just use something like File.WriteAllText where if the file exists,
the data is just overwritten, if the file does not exist it will be
created."
And this was exactly what I meant (while thinking WriteAllLines() was a user-defined function), because I've had similar problems in the past.
So a solution like that could solve some tricky problems (instead of deleting and quickly reopening, just overwrite the file) - it's also less work for the OS, and possibly less file/disk fragmentation.
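For reference, a minimal sketch of MarkAsProcessed with the Delete call removed; File.WriteAllLines creates the file if it is missing and overwrites it if it exists, so nothing else needs to change:
private static void MarkAsProcessed(string fileToUpdate, string qrRecord)
{
    try
    {
        var fileContents = File.ReadAllLines(fileToUpdate).ToList();
        for (int i = 0; i < fileContents.Count; i++)
        {
            if (fileContents[i] == qrRecord)
            {
                fileContents[i] = string.Format("{0}{1} {2}",
                    qrRecord, RoboReporterConstsAndUtils.COMPLETED_FLAG, DateTime.Now);
            }
        }
        // WriteAllLines overwrites in place; with no Delete, there is no
        // window in which the file does not exist.
        File.WriteAllLines(fileToUpdate, fileContents);
    }
    catch (Exception ex)
    {
        RoboReporterConstsAndUtils.HandleException(ex);
    }
}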
No shortage of string-search performance questions out there, yet I still cannot make heads or tails of what the best approach is.
Long story short, I have committed to moving from 4NT to PowerShell. In leaving 4NT I am going to miss the super quick console string-searching utility that came with it, called FFIND. I have decided to use my rudimentary C# programming skills to try and create my own utility to use in PowerShell that is just as quick.
So far, the results of a string search across hundreds of directories and a few thousand files, some of which are quite large, are: FFIND 2.4 seconds and my utility 4.4 seconds... after I have run mine at least once????
The first time I run them, FFIND finishes in near the same time, but mine takes over a minute? What is this? Loading of libraries? File indexing? Am I doing something wrong in my code? I do not mind waiting a little longer, but the difference is extreme enough that if there is a better language or approach, I would rather start down that path now before I get too invested.
Do I need to pick another language to write a string search that will be lightning fast?
I need this utility to search through thousands of files for strings in web code, C# code, and another proprietary language that uses text files. I also need to be able to use it to find strings in very large log files, megabytes in size.
class Program
{
    public static int linecounter;
    public static int filecounter;

    static void Main(string[] args)
    {
        //
        // INIT
        //
        filecounter = 0;
        linecounter = 0;
        string word;
        // Read properties from application settings.
        string filelocation = Properties.Settings.Default.FavOne;
        // Set Args from console.
        word = args[0];
        //
        // Recursive search for sub folders and files
        //
        string startDIR;
        string filename;
        startDIR = Environment.CurrentDirectory;
        //startDIR = "c:\\SearchStringTestDIR\\";
        filename = args[1];
        DirSearch(startDIR, word, filename);
        Console.WriteLine(filecounter + " " + "Files found");
        Console.WriteLine(linecounter + " " + "Lines found");
        Console.ReadKey();
    }

    static void DirSearch(string dir, string word, string filename)
    {
        string fileline;
        string ColorOne = Properties.Settings.Default.ColorOne;
        string ColorTwo = Properties.Settings.Default.ColorTwo;
        ConsoleColor valuecolorone = (ConsoleColor)Enum.Parse(typeof(ConsoleColor), ColorOne);
        ConsoleColor valuecolortwo = (ConsoleColor)Enum.Parse(typeof(ConsoleColor), ColorTwo);
        try
        {
            foreach (string f in Directory.GetFiles(dir, filename))
            {
                StreamReader file = new StreamReader(f);
                bool t = true;
                int counter = 1;
                while ((fileline = file.ReadLine()) != null)
                {
                    if (fileline.Contains(word))
                    {
                        if (t)
                        {
                            t = false;
                            filecounter++;
                            Console.ForegroundColor = valuecolorone;
                            Console.WriteLine(" ");
                            Console.WriteLine(f);
                            Console.ForegroundColor = valuecolortwo;
                        }
                        linecounter++;
                        Console.WriteLine(counter.ToString() + ". " + fileline);
                    }
                    counter++;
                }
                file.Close();
                file = null;
            }
            foreach (string d in Directory.GetDirectories(dir))
            {
                //Console.WriteLine(d);
                DirSearch(d, word, filename);
            }
        }
        catch (System.Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}
If you want to speed up your code, run a performance analysis and see what is taking the most time. I can almost guarantee the longest step here will be
fileline.Contains(word)
This function is called on every line of the file, for every file. Naively searching for a word in a string can take len(string) * len(word) comparisons.
You could code your own Contains method that uses a faster string-comparison algorithm. Google for "fast string exact matching". You could try using a regex and see whether that gives you a performance enhancement. But I think the simplest optimization you can try is:
Don't read every line. Instead, make one large string of the entire content of the file:
StreamReader streamReader = new StreamReader(filePath, Encoding.UTF8);
string text = streamReader.ReadToEnd();
Run Contains on this.
If you need all the matches in a file, then you need to use something like Regex.Matches(string, string).
After you have used the regex to get all the matches for a single file, you can iterate over the match collection (if there are any matches). For each match, you can recover the line of the original file by reading backward and forward from the match's Index property until you find a '\n' character in each direction; the string between those two newlines is your line.
This will be much faster, I guarantee it.
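A minimal sketch of that whole-file approach, assuming the search word is a literal (Regex.Escape guards against metacharacters in it):
using System;
using System.IO;
using System.Text.RegularExpressions;

static void SearchFile(string filePath, string word)
{
    // Read the whole file at once instead of line by line.
    string text = File.ReadAllText(filePath);

    foreach (Match m in Regex.Matches(text, Regex.Escape(word)))
    {
        // Walk backward and forward from the match to the enclosing
        // newlines to recover the original line for display.
        int start = text.LastIndexOf('\n', m.Index) + 1;
        int end = text.IndexOf('\n', m.Index);
        if (end < 0) end = text.Length;

        Console.WriteLine(text.Substring(start, end - start).TrimEnd('\r'));
    }
}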
If you want to go even further, some things I've noticed are :
Remove the try/catch statement from around the loop. Only use it exactly where you need it; I would not use it at all.
Also make sure ngen has run on your system. Most setups usually have this, but sometimes ngen has not been run; you can see the process in Process Explorer. Ngen generates a native image of the C# managed bytecode so the code does not have to be JIT-compiled each time it runs. This speeds up C# a lot.
EDIT
Other points:
Why is there a difference between first and subsequent run times? It looks like caching: the OS could have cached the directory listings, the files, and the loaded program images, and one usually sees speedups after a first run. Ngen could also be playing a part here, generating the native image after compilation on the first run and then storing it in the native image cache.
In general, I find C# performance too variable for my liking. If the suggested optimizations are not satisfactory and you want more consistent performance results, try another language -- one that is not 'managed'. C is probably the best for your needs.