I wrote code that reads many text files and groups them into one file called all.txt; after that, I read all.txt to count the word frequencies, and the result appears in a RichTextBox. The code works, but when I run the program, part of the result appears and then the program hangs without responding. I think it may be memory related; my computer has 4 GB of RAM. Any help would be appreciated.
Note: my code works well on small text files. Here's part of my code:
StreamWriter w = new StreamWriter(@"C:\documents\all.txt");
w.Write(all);
w.Close();
If your problem is reading a large file, try using MemoryMappedFile, as indicated here.
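The linked answer isn't reproduced above, but a minimal sketch of the idea, assuming the all.txt path from the question, could look like this (the word-frequency work is a placeholder):

using System.IO;
using System.IO.MemoryMappedFiles;

// Map the file and stream it line by line instead of loading it all at once.
using (var mmf = MemoryMappedFile.CreateFromFile(@"C:\documents\all.txt", FileMode.Open))
using (var stream = mmf.CreateViewStream())
using (var reader = new StreamReader(stream))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        // update word-frequency counts for this line here
    }
}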
I have a routine that reads XPS documents and chops off pages to separate documents. Originally it read one document, decided how to chop it, closed it and wrote out the new files.
Features were added; this was causing headaches with cleaning up old files before running the routine, so I changed to saving all the chopped pieces and writing them out at the end.
ChoppedXPS is a dictionary: the key is the filename, the value is the FixedDocument prepared from the original:
foreach (String OneReport in ChoppedXPS.Keys)
{
    File.Delete(OneReport);
    using (XpsDocument TargetFile = new XpsDocument(OneReport, FileAccess.ReadWrite))
    {
        XpsDocumentWriter Writer = XpsDocument.CreateXpsDocumentWriter(TargetFile);
        Writer.Write(ChoppedXPS[OneReport]);
        Logger($"{OneReport} written to disk", 2);
    }
    Application.DoEvents();
}
If the FixedDocument being written out here contains graphics, the source file is opened by the Writer.Write line and left open until the program is closed.
The XpsDocumentWriter does not seem to implement anything that can be used to clean it up.
(Yeah, that Application.DoEvents is ugly--this is an in-house program used by two people, so it's not worth the hassle of making this run in the background, and without it a big enough task can cause Windows to decide the program is non-responsive and kill it.)
.NET 4.5, using some C# 8.0 features.
I found a workaround for this problem. I'm not going to try to post the whole thing as I had to change the whole data handling but the heart of it:
using (XpsDocument Source = new XpsDocument(SourceFile, FileAccess.Read))
{
[the using loop from my question]
}
I'm still hoping for an explanation and something more appropriate than this approach.
Yes--this produces a warning that Source is unused, but the compiler isn't eliminating it, so it does work.
Recently I used Android TTS. I save the file as MP3 and play it using MediaPlayer so users can pause/resume etc.
It all works fine except when I have a large text; then it just does not work.
I read that Android TTS has a limit of 4,000 characters? What should I do to tackle a large amount of text?
The following is the code I am using to save the MP3:
Android.Speech.Tts.TextToSpeech textToSpeech;
...
textToSpeech = new Android.Speech.Tts.TextToSpeech(this, this, "com.google.android.tts");
...
textToSpeech.SynthesizeToFile(ReadableText, null, new Java.IO.File(System.IO.Path.Combine(documentsPath, ID + "_audio.mp3")), ID);
The following is the code I am using to play back the audio:
MediaPlayer MP = new MediaPlayer();
...
MP.SetDataSource(System.IO.Path.Combine(documentsPath, ID + "_audio.mp3"));
MP.Prepare();
MP.Start();
It works for a small amount of text but not for large text.
The file gets saved (most likely just as a corrupt file), because when I play it I get the following error:
setDataSourceFD failed: status=0x80000000
A Java solution is also acceptable.
FYI - the question is about the max text size, as I can generate the file for smaller text.
Cheers
In Android AOSP (at least since API 18), TextToSpeech.MaxSpeechInputLength is set to 4000.
Note: OEMs could change this value in their OS image, so it would be wise to check the value and not make any assumptions.
Note: You are naming the output with an .mp3 extension, but by default the files created will be .wav formatted; some speech engines do support other formats/bitrates/etc., but you are passing null for the parameters.
Unless you want to deal with properly joining multiple wave files, I would recommend that you break your text into smaller parts and synthesize multiple files.
You can then play these back in sequence (using the MediaPlayer Completion event|listener).
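A rough sketch of that approach, assuming Xamarin.Android as in the question (documentsPath, ID, ReadableText and textToSpeech come from the question; the chunking itself is illustrative and splits at arbitrary character boundaries, so splitting on sentence ends would sound better):

// Chunk the text at the engine's reported limit rather than assuming 4000.
int max = Android.Speech.Tts.TextToSpeech.MaxSpeechInputLength;
var files = new List<string>();
for (int i = 0, n = 0; i < ReadableText.Length; i += max, n++)
{
    string chunk = ReadableText.Substring(i, Math.Min(max, ReadableText.Length - i));
    string path = System.IO.Path.Combine(documentsPath, $"{ID}_audio_{n}.wav");
    textToSpeech.SynthesizeToFile(chunk, null, new Java.IO.File(path), $"{ID}_{n}");
    files.Add(path);
}
// Synthesis is asynchronous, so wait for each utterance to finish (e.g. via an
// UtteranceProgressListener) before playing, then chain playback by starting
// the next file from the MediaPlayer Completion event.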
This is a tricky question. I suspect it will require some advanced knowledge of file systems to answer.
I have a WPF application, "App1," targeting .NET framework 4.0. It has a Settings.settings file that generates a standard App1.exe.config file where default settings are stored. When the user modifies settings, the modifications go in AppData\Roaming\MyCompany\App1\X.X.0.0\user.config. This is all standard .NET behavior. However, on occasion, we've discovered that the user.config file on a customer's machine isn't what it's supposed to be, which causes the application to crash.
The problem looks like this: user.config is about the size it should be if it were filled with XML, but instead of XML it's just a bunch of NUL characters. It's character 0 repeated over and over again. We have no information about what had occurred leading up to this file modification.
We can fix that problem on a customer's device if we just delete user.config because the Common Language Runtime will just generate a new one. They'll lose the changes they've made to the settings, but the changes can be made again.
However, I've encountered this problem in another WPF application, "App2," with another XML file, info.xml. This time it's different because the file is generated by my own code rather than by the CLR. The common themes are that both are C# WPF applications, both are XML files, and in both cases we are completely unable to reproduce the problem in our testing. Could this have something to do with the way C# applications interact with XML files or files in general?
Not only can we not reproduce the problem in our current applications, but I can't even reproduce the problem by writing custom code that generates errors on purpose. I can't find a single XML serialization error or file access error that results in a file that's filled with nulls. So what could be going on?
App1 accesses user.config by calling Upgrade() and Save() and by getting and setting the properties. For example:
if (Settings.Default.UpgradeRequired)
{
    Settings.Default.Upgrade();
    Settings.Default.UpgradeRequired = false;
    Settings.Default.Save();
}
App2 accesses info.xml by serializing and deserializing the XML:
public Info Deserialize(string xmlFile)
{
    if (File.Exists(xmlFile) == false)
    {
        return null;
    }

    XmlSerializer xmlReadSerializer = new XmlSerializer(typeof(Info));
    Info overview = null;
    using (StreamReader file = new StreamReader(xmlFile))
    {
        overview = (Info)xmlReadSerializer.Deserialize(file);
        file.Close();
    }
    return overview;
}
public void Serialize(Info infoObject, string fileName)
{
    XmlSerializer writer = new XmlSerializer(typeof(Info));
    using (StreamWriter fileWrite = new StreamWriter(fileName))
    {
        writer.Serialize(fileWrite, infoObject);
        fileWrite.Close();
    }
}
We've encountered the problem on both Windows 7 and Windows 10. When researching the problem, I came across this post where the same XML problem was encountered in Windows 8.1: Saved files sometime only contains NUL-characters
Is there something I could change in my code to prevent this, or is the problem too deep within the behavior of .NET?
It seems to me that there are three possibilities:
1) The CLR is writing null characters to the XML files.
2) The file's memory address pointer gets switched to another location without moving the file contents.
3) The file system attempts to move the file to another memory address and the file contents get moved, but the pointer doesn't get updated.
I feel like 2 and 3 are more likely than 1. This is why I said it may require advanced knowledge of file systems.
I would greatly appreciate any information that might help me reproduce, fix, or work around the problem. Thank you!
It's well known that this can happen if there is power loss. This occurs after a cached write that extends a file (it can be a new or existing file), and power loss occurs shortly thereafter. In this scenario the file has 3 expected possible states when the machine comes back up:
1) The file doesn't exist at all or has its original length, as if the write never happened.
2) The file has the expected length as if the write happened, but the data is zeros.
3) The file has the expected length and the correct data that was written.
State 2 is what you are describing. It occurs because when you do the cached write, NTFS initially just extends the file size accordingly but leaves VDL (valid data length) untouched. Data beyond VDL always reads back as zeros. The data you were intending to write is sitting in memory in the file cache. It will eventually get written to disk, usually within a few seconds, and following that VDL will get advanced on disk to reflect the data written. If power loss occurs before the data is written or before VDL gets increased, you will end up in state 2.
This is fairly easy to repro, for example by copying a file (the copy engine uses cached writes), and then immediately pulling the power plug on your computer.
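If you want to narrow the window for state 2 when writing your own files, one mitigation sketch (my suggestion, with placeholder names) is to write to a temporary file, force it through the cache with FileStream.Flush(true), which calls FlushFileBuffers, and then swap it into place:

string temp = fileName + ".tmp";
using (var fs = new FileStream(temp, FileMode.Create, FileAccess.Write))
using (var writer = new StreamWriter(fs))
{
    writer.Write(xmlContent);    // xmlContent and fileName are placeholders
    writer.Flush();              // push StreamWriter's buffer into the FileStream
    fs.Flush(flushToDisk: true); // ask Windows to write the cached data to disk
}
// File.Replace requires the destination to exist; fall back to File.Move otherwise.
File.Replace(temp, fileName, fileName + ".bak");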
I had a similar problem, and I was able to trace it to a corrupted HDD.
Description of my problem (all related information):
Disks attached to the mainboard (SATA):
SSD (system),
3 * HDD.
One of the HDDs had bad blocks, and there were even problems reading the disk structure (directories and file listing).
Operating system: Windows 7 x64
File system (on all disks): NTFS
When the system tried to read or write to the corrupted disk (a user request, an automatic scan, or any other reason) and the attempt failed, all write operations (to the other disks) were incorrect. The files created on the system disk (mostly configuration files written by other applications) were valid on a direct check of their content, probably because the data was cached in RAM.
Unfortunately, after a restart, all the files written after the failed read/write access to the corrupted drive had the correct size, but the content of the files was all zero bytes (exactly like in your case).
Try to rule out hardware-related problems. You can try copying the file (after a change) to a different machine (upload it to the web/FTP), or saving fixed, known content to a file. If the copy on the other machine is correct, or if the known-content file ends up 'empty', the cause is probably the local machine. Try swapping hardware components or reinstalling the system.
There is no documented reason for this behavior; it happens to users, but nobody can tell the origin of these odd conditions.
It might be a CLR problem, although that is very unlikely: the CLR doesn't just write null characters, and an XML document cannot contain null characters unless xsi:nil is defined for the nodes.
Anyway, the only documented way to fix this is to delete the corrupted file, using code like this:
try
{
    ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.PerUserRoamingAndLocal);
}
catch (ConfigurationErrorsException ex)
{
    string filename = ex.Filename;
    _logger.Error(ex, "Cannot open config file");

    if (File.Exists(filename) == true)
    {
        _logger.Error("Config file {0} content:\n{1}", filename, File.ReadAllText(filename));
        File.Delete(filename);
        _logger.Error("Config file deleted");
        Properties.Settings.Default.Upgrade();
        // Properties.Settings.Default.Reload();
        // you could optionally restart the app instead
    }
    else
    {
        _logger.Error("Config file {0} does not exist", filename);
    }
}
It will restore the user.config, again without null values, via the Properties.Settings.Default.Upgrade() call.
I ran into a similar issue, but on a server. The server restarted while a program was writing to a file, which caused the file to contain all null characters and become unusable to the program writing to and reading from it.
The file contained nothing but NUL characters, the logs showed that the server had restarted, and the corrupted file's last-modified time matched the time of the restart.
I have the same problem: there is an extra NUL character at the end of the serialized XML file.
I am using XmlWriter like this:
using (var stringWriter = new Utf8StringWriter())
{
    using (var xmlWriter = XmlWriter.Create(stringWriter, new XmlWriterSettings
    {
        Indent = true,
        IndentChars = "\t",
        NewLineChars = "\r\n",
        NewLineHandling = NewLineHandling.Replace
    }))
    {
        xmlSerializer.Serialize(xmlWriter, data, nameSpaces);
        xml = stringWriter.ToString();
        var xmlDocument = new XmlDocument();
        xmlDocument.LoadXml(xml);
        if (removeEmptyNodes)
        {
            RemoveEmptyNodes(xmlDocument);
        }
        xml = xmlDocument.InnerXml;
    }
}
I have a .txt file that I process into a List in my program.
I would like to somehow save that List and include it in the program itself, so that it loads every time the program starts and I don't have to process it from the .txt file each time.
It's more complicated than just "int x = 3;" because it has about 10k lines, and I don't want to copy-paste all that at the beginning.
I've looked all over but haven't found anything similar; any ideas, guys?
Also, if there's a solution, can it work with any type (arrays, Dictionaries)?
As requested, the code is:
var text = System.IO.File.ReadAllText(@"C:\Users\jazz7\Desktop\links_zg.txt");
EDIT
Joe suggested the solution:
I included the file within the project, set its Build Action to Embedded Resource in Properties, and used this code:
private string linkovi = "";
...
var assembly = Assembly.GetExecutingAssembly();
var resourceName = "WindowsFormsApplication4.links_zg.txt";
using (Stream stream = assembly.GetManifestResourceStream(resourceName))
using (StreamReader reader = new StreamReader(stream))
{
    linkovi = reader.ReadToEnd();
}
The string linkovi now contains the contents of the txt file, which is embedded within the application. Thanks all!
You could store the file as a resource in your executable file.
This KB article describes how to do it.
Fundamentally, you've got to choose between storing your data in memory or storing it on the hard drive. The former cuts your loading time but might use an unacceptable amount of memory, whilst the latter is slower, as you've identified. Either way, your data has to be stored somewhere.
Do you need to load all of the data at once? If the loading time is the issue, you could process the file line by line; while this would be slower overall, you would have access to some usable data sooner.
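For instance, File.ReadLines streams the file lazily, so lines become available as they are read (a sketch using the path from the question; the processing is a placeholder):

foreach (string line in System.IO.File.ReadLines(@"C:\Users\jazz7\Desktop\links_zg.txt"))
{
    // handle one line at a time; the rest of the file stays on disk
}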
I made a program in C# that processes about 30 zipped folders containing about 35,000 files in total. My purpose is to read every single file and process its information. As of now, my code extracts all the folders and then reads the files. The problem with this process is that it takes about 15-20 minutes, which is a lot.
I am using the following code to extract files:
void ExtractFile(string zipfile, string path)
{
    ZipFile zip = ZipFile.Read(zipfile);
    zip.ExtractAll(path);
}
The extraction part is the one that takes the most time. I need to reduce this time. Is there a way I can read the contents of the files inside the zipped folders without extracting them? Or does anyone know another way to reduce the time this code takes?
Thanks in advance
You could try reading each entry into a memory stream instead of to the file system:
ZipFile zip = ZipFile.Read(zipfile);
foreach (ZipEntry entry in zip.Entries)
{
    using (MemoryStream ms = new MemoryStream())
    {
        entry.Extract(ms);
        ms.Seek(0, SeekOrigin.Begin);
        // read from the stream
    }
}
Maybe instead of extracting it to the hard disk, you should try reading it without extraction: open the archive with ZipFile.OpenRead, and then you would have to use the ZipArchiveEntry.Open method.
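A sketch of that suggestion using System.IO.Compression (available from .NET 4.5; the processing inside the loop is a placeholder):

using System.IO;
using System.IO.Compression;

using (ZipArchive archive = ZipFile.OpenRead(zipfile))
{
    foreach (ZipArchiveEntry entry in archive.Entries)
    {
        // Open() streams the entry's decompressed bytes without touching disk.
        using (var reader = new StreamReader(entry.Open()))
        {
            string contents = reader.ReadToEnd();
            // process contents here
        }
    }
}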
Also have a look at the CodeFluent Runtime tool, which claims better performance.
Try breaking the work into separate async methods, started one by one, if any single operation takes longer than 50 ms: http://msdn.microsoft.com/en-us/library/hh191443.aspx
If we have, for example, 10 operations that are called one after another, with async/await we can run them in parallel, and the overall time will depend only on the server's capacity.
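A minimal sketch of that idea (DoOneAsync is a placeholder for whatever processes a single unit of work):

// Start all operations first, then await them together; the total time is
// roughly that of the slowest operation rather than the sum of all of them.
var tasks = new List<Task>();
for (int i = 0; i < 10; i++)
{
    tasks.Add(DoOneAsync(i));
}
await Task.WhenAll(tasks);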