I want to search for an element value in all the XML files (assume 200+) in a folder using C#.
My scenario is that each file contains multiple item tags, so I have to check all item tags for the user-selected search value, e.g. ABC123.
Currently I am using a foreach loop and it's taking a long time.
Could you please suggest a better option to get the result much faster?
Following is my current code implementation.
string[] arrFiles = Directory.GetFiles(temFolder, "*.xml");
foreach (string file in arrFiles)
{
    XmlDocument doc = new XmlDocument();
    doc.Load(file);
    XmlNodeList lstEquip = doc.SelectNodes("scene/PackedUnit/Items/ItemCode");
    foreach (XmlNode xnEquip in lstEquip)
    {
        if (xnEquip.InnerText.ToUpper() == equipCode.ToUpper())
        {
            string fileName = Path.GetFileNameWithoutExtension(file);
            lstSubContainers.Add(fileName);
            break;
        }
    }
}
Well, the first thing to work out is why it's taking a long time. Measure where the time actually goes (disk I/O, parsing, the search itself) before reaching for a fix.
One option is to parallelize the operation, using a pool of tasks each working on a single document at a time. In an ideal world you'd probably read from the files on a single thread (to prevent thrashing) and supply the files to the pool as you read them - but just reading in multiple threads is probably a good starting point. Using .NET 4's Parallel Extensions libraries would make this reasonably straightforward.
Personally I like the LINQ to XML API for querying, rather than using the "old" XmlElement etc API, but it's up to you. I wouldn't expect it to make much difference. Using XmlReader instead could be faster, avoiding creating as much garbage - but I would try to find out where the time is going in the "simple" code first. (I personally find XmlReader rather harder to use correctly than the "whole document in memory" APIs.)
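Putting those two suggestions together, here is a minimal sketch combining .NET 4's Parallel.ForEach with LINQ to XML. The folder and element names come from the question; note that Descendants("ItemCode") matches the element at any depth, unlike the original absolute XPath, so adjust if that matters.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using System.Xml.Linq;

// Sketch: search all *.xml files in a folder for a matching <ItemCode>,
// parsing files on worker threads. ConcurrentBag is safe to add to
// from multiple threads.
List<string> FindFilesContaining(string folder, string equipCode)
{
    var matches = new ConcurrentBag<string>();
    Parallel.ForEach(Directory.EnumerateFiles(folder, "*.xml"), file =>
    {
        var doc = XDocument.Load(file);
        if (doc.Descendants("ItemCode").Any(e =>
                string.Equals(e.Value, equipCode, StringComparison.OrdinalIgnoreCase)))
        {
            matches.Add(Path.GetFileNameWithoutExtension(file));
        }
    });
    return matches.ToList();
}
```

On a multi-core machine this helps most when parsing dominates; if the disk is the bottleneck, parallelism buys little, which is why measuring first matters.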
If you're doing forward-only reading and not manipulating the XML in any way, switching to an XmlReader should speed up the processing, although I can't imagine it will really make a massive difference (maybe a second or two at most) with the file sizes you have.
I've recently had to parse a 250 MB XML file using LINQ to XML in Silverlight (a test app) and that took seconds to do. What machine are you running on?
Related
What is the best approach for reading large numbers of XML files (I need to read 8000 of them) and doing some computations on them, with the best speed? Is it OK to use an XmlReader and return the nodes I'm interested in in a list? Or is it faster to also do some computations on each node while reading it? I tried the second approach (returning the nodes in a list, as values), because I tried to write my application with as many modules as possible. I am using C#, but this is not relevant.
Thank you.
Is it OK to use an XmlReader and return the nodes I'm interested in in a list? Or is it faster to also do some computations on each node while reading it?
I can't say whether returning a list is ok or not, because I don't know how large each file is, which would be more important in this regard than the number of XML documents.
However, it certainly could be very expensive, if an XML document, and hence the list produced, were very large.
Conversely, reading each node and calculating as you go will certainly start producing results sooner, and will use less memory, making it faster by a degree ranging from negligible to so considerable that other approaches become infeasible, depending on just how large the source data is. It's the approach I take if I either have a strong concern about performance or a good reason to suspect such a large dataset.
Somewhere between the two, is the approach of an IEnumerable<T> implementation that yields objects as it reads, along the lines of:
public IEnumerable<SomeObject> ExtractFromXml(XmlReader rdr)
{
    using (rdr)
    {
        while (rdr.Read())
        {
            if (rdr.NodeType == XmlNodeType.Element
                && rdr.LocalName == "thatElementYouReallyCareAbout")
            {
                var current = /* code to create a SomeObject from the XML goes here */;
                yield return current;
            }
        }
    }
}
As with producing a list, this separates the code doing the calculation from that which parses the XML, but because you can start enumerating through it with a foreach before it has finished that parsing, the memory use can be less, as will the time to start the calculation. This makes little difference with small documents, but a lot if they are large.
The best solution I have personally come up with for dealing with XML files is to take advantage of .NET's XmlSerializer class. You can define a model for your XML and create a List of that model in which to keep your XML data, then:
using (StreamWriter sw = new StreamWriter("OutPutPath"))
{
    new XmlSerializer(typeof(List<Model>)).Serialize(sw, Models);
    sw.WriteLine();
}
To read the data back, you can open the file, deserialize it, and assign the result back to the model by calling the Deserialize method.
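To round out the answer, here is the read side next to the write side. The sketch serializes a List&lt;string&gt; so it stays self-contained; in practice you would declare a Model class whose public properties mirror your element names and use List&lt;Model&gt; instead.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

// Write side: serialize the list to a file. List<string> stands in
// for the answer's List<Model> so the sketch is self-contained.
var serializer = new XmlSerializer(typeof(List<string>));
var models = new List<string> { "ABC123", "XYZ789" };
string path = Path.Combine(Path.GetTempPath(), "models.xml");
using (var sw = new StreamWriter(path))
{
    serializer.Serialize(sw, models);
}

// Read side: deserialize the file back into the same list type.
List<string> roundTripped;
using (var sr = new StreamReader(path))
{
    roundTripped = (List<string>)serializer.Deserialize(sr);
}
```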
Is there some way I can combine two XmlDocuments without holding the first in memory?
I have to cycle through a list of up to a hundred large (~300MB) XML files, appending to each up to 1000 nodes, repeating the whole process several times (as the new node list is cleared to save memory). Currently I load the whole XmlDocument into memory before appending new nodes, which is currently not tenable.
What would you say is the best way to go about this? I have a few ideas but I'm not sure which is best:
Never load the whole XmlDocument; instead use XmlReader and XmlWriter simultaneously to write to a temp file, which is subsequently renamed.
Make an XmlDocument for the new nodes only, and then manually write it to the existing file (i.e. file.WriteLine("<node>\n"))
Something else?
Any help will be much appreciated.
Edit Some more details in answer to some of the comments:
The program parses several large logs into XML, grouping into different files by source. It only needs to run once a day, and once the XML is written there is a lightweight proprietary reader program which gives reports on the data. The program only needs to run once a day so can be slow, but runs on a server which performs other actions, mainly file compression and transfer, which cannot be affected too much.
A database would probably be easier, but the company isn't going to do this any time soon!
As it is, the program runs on the dev machine using a few GB of memory at most, but throws out-of-memory exceptions when run on the server.
Final Edit
The task is quite low-priority, which is why it would only cost extra to get a database (though I will look into Mongo).
The file will only be appended to, and won't grow indefinitely - each final file is only for a day's worth of the log, and then new files are generated the following day.
I'll probably use the XmlReader/Writer method since it will be easiest to ensure XML validity, but I have taken all your comments/answers into consideration. I know that having XML files this large is not a particularly good solution, but it's what I'm limited to, so thanks for all the help given.
If you wish to be completely certain of the XML structure, using XmlWriter and XmlReader is the best way to go.
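As a hedged sketch of that XmlReader/XmlWriter temp-file idea (option 1 in the question): stream the existing document through to a temp file, inject the new nodes just before the closing root tag, and swap the files. It assumes the new nodes arrive as an already well-formed XML string, and it deliberately ignores root attributes and namespaces to stay short.

```csharp
using System;
using System.IO;
using System.Xml;

// Copy an existing document to a temp file node by node, append new
// fragments before the closing root tag, then replace the original.
// Only one subtree is in memory at a time.
void AppendToRoot(string path, string newFragments)
{
    string temp = path + ".tmp";
    using (var reader = XmlReader.Create(path))
    using (var writer = XmlWriter.Create(temp))
    {
        reader.MoveToContent();                     // position on the root element
        writer.WriteStartElement(reader.LocalName); // note: drops root attributes
        reader.Read();                              // move to the root's first child
        while (reader.NodeType != XmlNodeType.None
               && !(reader.NodeType == XmlNodeType.EndElement && reader.Depth == 0))
        {
            writer.WriteNode(reader, true);         // copies one subtree, advancing the reader
        }
        writer.WriteRaw(newFragments);              // append the new nodes
        writer.WriteEndElement();
    }
    File.Delete(path);
    File.Move(temp, path);
}
```

The rename-at-the-end step means a crash mid-write leaves the original file intact, which is the main reason to prefer this over appending in place.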
However, for absolutely highest possible performance, you may be able to recreate this code quickly using direct string functions. You could do this, although you'd lose the ability to verify the XML structure - if one file had an error you wouldn't be able to correct it:
using (StreamWriter sw = new StreamWriter("out.xml"))
{
    foreach (string filename in files)
    {
        sw.Write(String.Format(@"<inputfile name=""{0}"">", filename));
        using (StreamReader sr = new StreamReader(filename))
        {
            // Using .NET 4's CopyTo(); alternatively try http://bit.ly/RiovFX
            if (max_performance)
            {
                sr.CopyTo(sw);
            }
            else
            {
                string line;
                while ((line = sr.ReadLine()) != null)
                {
                    // parse the line and make any modifications you want
                    sw.Write(line);
                    sw.Write("\n");
                }
            }
        }
        sw.Write("</inputfile>");
    }
}
Depending on the way your input XML files are structured, you might opt to remove the XML headers, maybe the document element, or a few other unnecessary structures. You could do that by parsing the file line by line.
I have a folder with 400k+ XML documents and many more to come, each file named 'ID'.xml, and each belonging to a specific user. In a SQL Server database I have the 'ID' from the XML file matched with a userID, which is how I interconnect the XML document with the user. A user can have any number of XML documents attached (but let's say a maximum of 10k documents).
All XML-documents have a few common elements, but the structure can vary a little.
Now, each user will need to search the XML documents belonging to her, and what I've tried so far (looping through each file and reading it with a StreamReader) is too slow. I don't care if it reads and matches the whole file with attributes and so on, or just the text in each element. What should be returned in the first place is a list of the IDs from the filenames.
What is the fastest and smartest methods here, if any?
I think LINQ-to-XML is probably the direction you want to go.
Assuming you know the names of the tags that you want, you would be able to do a search for those particular elements and return the values.
var xDoc = XDocument.Load("yourFile.xml");
var result = from dec in xDoc.Descendants()
             where dec.Name == "tagName"
             select dec.Value;
result would then contain an IEnumerable of the values of any XML tags whose name matches "tagName".
The query could also be written like this:
var result = from dec in xDoc.Descendants("tagName")
             select dec.Value;
or this:
var result = xDoc.Descendants("tagName").Select(tag => tag.Value);
The output would be the same, it is just a different way to filter based on the element name.
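A quick self-contained check of that equivalence, using XDocument.Parse on an inline document instead of a file:

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

// The three query forms from the answer, run against the same document.
var xDoc = XDocument.Parse(
    "<root><tagName>a</tagName><other><tagName>b</tagName></other></root>");

var q1 = (from dec in xDoc.Descendants()
          where dec.Name == "tagName"
          select dec.Value).ToList();

var q2 = (from dec in xDoc.Descendants("tagName")
          select dec.Value).ToList();

var q3 = xDoc.Descendants("tagName").Select(tag => tag.Value).ToList();
```

All three walk every descendant in document order; the filtered overload of Descendants is simply the tidier way to express the name test.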
You'll have to open each file that contains relevant data, and if you don't know which files contain it, you'll have to open all that may match. So the only performance gain would be in the parsing routine.
When parsing XML, if speed is the requirement, you could use the XmlReader, as it performs far better than the other parsers (most read the entire XML file before you can query them). The fact that it is forward-only should not be a limitation in this case.
If parsing takes about as long as the disk I/O, you could try parsing files in parallel, so one thread could wait for a file to be read while the other parses the loaded data. I don't think you can make that big a win there, though.
Also what is "too slow" and what is acceptable? Would this solution of many files become slower over time?
Use LINQ to XML.
Check out this article over at MSDN.
XDocument doc = XDocument.Load(@"C:\file.xml");
And don't forget that reading so many files will always be slow; you may try writing a multi-threaded program...
If I understood correctly, you don't want to open each XML file for a particular user because it's too slow, whether you are using LINQ to XML or some other method.
Have you considered saving some values (tags) in both the XML file and a relational database, together with the XML ID?
In that case you could search for the values in the DB first and select only the XML files that contain the searched values.
for example:
ID, tagName1, tagName2
xmlDocID, value1, value2
My other question is: why have you chosen to store the XML documents in the file system? If you are using SQL Server 2005/2008, it has very good support for storing and searching through XML columns (and even indexing some values in the XML).
Are you just looking for files that have a specific string in the content somewhere?
WARNING - Not a pure .NET solution. If this scares you, then stick with the other answers. :)
If that's what you're doing, another alternative is to get something like grep to do the heavy lifting for you. Shell out to that with the "-l" argument to specify that you are only interested in filenames and you are on to a winner. (for more usage examples, see this link)
L.B has already made a valid point.
This is a case where Lucene.Net (or any indexer) would be a must. It would give you steady (very fast) performance across all searches, and handling a very large amount of arbitrary data is one of the primary benefits of indexers.
Or is there any reason why you wouldn't use Lucene?
Lucene.NET (and Lucene) support incremental indexing. If you can re-open the index for reading every so often, then you can keep adding documents to the index all day long -- your searches will be up-to-date with the last time you re-opened the index for searching.
I need to iterate through a large XML file (~2GB) and selectively copy certain nodes to one or more separate XML files.
My first thought is to use XPath to iterate through matching nodes and for each node test which other file(s) the node should be copied to, like this:
var doc = new XPathDocument(@"C:\Some\Path.xml");
var nav = doc.CreateNavigator();
var nodeIter = nav.Select("//NodesOfInterest");
while (nodeIter.MoveNext())
{
    foreach (Thing thing in ThingsThatMightGetNodes)
    {
        if (thing.AllowedToHaveNode(nodeIter.Current))
        {
            thing.WorkingXmlDoc.AppendChild(... nodeIter.Current ...);
        }
    }
}
In this implementation, Thing defines public System.Xml.XmlDocument WorkingXmlDoc to hold nodes that it is AllowedToHave(). I don't understand, though, how to create a new XmlNode that is a copy of nodeIter.Current.
If there's a better approach I would be glad to hear it as well.
Evaluation of an XPath expression requires that the whole XML document (XML Infoset) be in RAM.
For an XML file whose textual representation exceeds 2GB, typically more than 10GB of RAM should be available just to hold the XML document.
Therefore, while not impossible, it may be preferable (especially on a server that must have resources quickly available for many requests) to use another technique.
The XmlReader-based classes are an excellent tool for this scenario. They are fast, forward-only, and don't require retaining the read nodes in memory. Also, your logic will remain almost the same.
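One way to sketch that with XmlReader, including the node-copying part the question asks about: ReadSubtree pulls out each matching element without loading the whole file, and XmlDocument.ImportNode makes the copy that AppendChild requires. The element name NodesOfInterest comes from the question's XPath; the rest is illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml;

// Stream the large file and yield each matching element as a node
// copy owned by the target document, so it can be appended directly.
IEnumerable<XmlNode> ReadNodesOfInterest(string path, XmlDocument targetDoc)
{
    using (var reader = XmlReader.Create(path))
    {
        while (reader.ReadToFollowing("NodesOfInterest"))
        {
            using (var subtree = reader.ReadSubtree())
            {
                // Load just this subtree; ImportNode then creates a copy
                // that belongs to targetDoc rather than to the fragment.
                var fragment = new XmlDocument();
                fragment.Load(subtree);
                yield return targetDoc.ImportNode(fragment.DocumentElement, true);
            }
        }
    }
}
```

thing.WorkingXmlDoc.AppendChild(...) can consume these directly, and only one matching subtree is ever in memory at a time.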
You should consider LINQ to XML. Check this blog post for details and examples:
http://james.newtonking.com/archive/2007/12/11/linq-to-xml-over-large-documents.aspx
Try an XQuery processor that implements document projection (an idea first published by Marian and Siméon). It's implemented in a number of processors, including Saxon-EE. Basically, if you run a query such as //x, it will filter the input event stream and build a tree that contains only the information needed to handle this query; it will then execute the query in the normal way, but against a much smaller tree. If this is a small part of the total document, you can easily reduce the memory requirement by 95% or so.
I am developing a program to log data from an incoming serial communication. I have to invoke the serial box by sending a command to receive something. All this works fine, but I have a problem.
The program has to run on a netbook (approx. 1.5 GHz, 2 GB RAM), and it can't keep up when I ask it to save this information to an XML file.
I am only getting communication every 5 seconds, and I am not reading the file anywhere.
I use xml.Save(filename) to save the file.
Is there another, better way to save the information to my XML, or should I use an alternative?
If I should use an alternative, which should it be?
Edit:
Added some code:
XmlDocument xml = new XmlDocument();
xml.Load(logFile);
XmlNode p = xml.GetElementsByTagName("records")[0];
for (int i = 0; i < newDat.Length; i++)
{
    XmlNode q = xml.CreateElement("record");
    XmlNode a = xml.CreateElement("time");
    XmlNode b = xml.CreateElement("temp");
    XmlNode c = xml.CreateElement("addr");
    a.AppendChild(xml.CreateTextNode(outDat[i, 0]));
    b.AppendChild(xml.CreateTextNode(outDat[i, 1]));
    c.AppendChild(xml.CreateTextNode(outDat[i, 2]));
    sendTime = outDat[i, 0];
    points.Add(outDat[i, 2], outDat[i, 1]);
    q.AppendChild(a);
    q.AppendChild(b);
    q.AppendChild(c);
    p.AppendChild(q);
}
xml.Save(this.logFile);
This is the XML related code, running once every 5 seconds. I am reading (I get no error), adding some childs, and then saving it again. It is when I save that I get the error.
You may want to look at using an XmlWriter and building the XML file by hand. That would allow you to open a file and keep it open for the duration of the logging, appending one XML fragment at a time as you read in data. The XmlWriter class is optimized for forward-only writing to an XML stream.
The above approach should be much faster when compared to using the Save method to serialize (save) a full XML document each time you read data and when you really only want to append a new fragment at the end.
EDIT
Based on the code sample you posted, it's the Load and Save that's causing the unnecessary performance bottleneck. Every time you add a log entry you are essentially loading the full XML document and, behind the scenes, parsing it into a full-blown XML tree. Then you modify the tree (by adding nodes) and serialize it all to disk again. This is very counterproductive.
My proposed solution is really the way to go: create and open the log file only once, then use an XmlWriter to write out the XML elements one by one each time you read new data. This way you're not holding the full contents of the XML log in memory, and you're only appending small chunks of data at the end of a file, which should be unnoticeable in terms of overhead. At the end, simply close the root XML tag, close the XmlWriter and close the file. That's it! This is guaranteed not to slow down your UI even if you implement it synchronously, on the UI thread.
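A minimal sketch of that approach, reusing the records/record/time/temp/addr element names from the question's code. The writer is opened once for the whole session; each record is a small append followed by a Flush:

```csharp
using System;
using System.IO;
using System.Xml;

// Open the writer once for the whole logging session; each record is
// then a small append plus a Flush, not a full document re-save.
string logPath = Path.Combine(Path.GetTempPath(), "serial-log.xml");
var writer = XmlWriter.Create(logPath, new XmlWriterSettings { Indent = true });
writer.WriteStartElement("records");

void WriteRecord(string time, string temp, string addr)
{
    writer.WriteStartElement("record");
    writer.WriteElementString("time", time);
    writer.WriteElementString("temp", temp);
    writer.WriteElementString("addr", addr);
    writer.WriteEndElement();
    writer.Flush(); // push the new fragment to disk every cycle
}

void CloseLog()
{
    writer.WriteEndElement(); // </records>
    writer.Dispose();
}
```

Call WriteRecord every 5 seconds as data arrives, and CloseLog when the session ends; the document is only complete after CloseLog, which is the main trade-off of this design.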
While not a direct answer to your question, it sounds like you're doing everything in a very linear way:
Receive command
Modify in memory XML
Save in memory XML to disk
GoTo 1
I would suggest you look into using some threading, or possibly Tasks, to make this more asynchronous. This would certainly be more difficult, and you would have to wrestle with task synchronization, but in the long run it's going to perform a lot better.
I would look at having a thread (possibly the main thread, not sure if you're using WinForms, a console app or what) that receives the command, and posts the "changes" to a holding class. Then have a second thread, which periodically polls this holding class and checks it for a "Dirty" state. When it detects this state, it grabs a copy of the XML and saves it to disk.
This allows your serial communication to continue uninterrupted, regardless of how poorly the hardware you're running on performs.
Normally for log files one picks an append-friendly format; otherwise you have to re-parse the whole file every time you need to append a new record and save the result. Plain-text CSV is likely the simplest option.
One other option, if you need an XML-like file, is to store a list of XML fragments instead of a full XML document. This way you can still use the XML APIs (XmlReader can read fragments when you specify ConformanceLevel.Fragment in the XmlReaderSettings passed to XmlReader.Create), but you don't need to re-read the whole document to append a new entry; a simple file-level append is enough. WCF logs, for example, are written this way.
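A minimal sketch of that fragment approach: each record is appended as a standalone fragment with a plain file append, and reading back uses ConformanceLevel.Fragment because the file has no single root. The element names here are made up for illustration.

```csharp
using System;
using System.IO;
using System.Xml;

// Append-friendly logging: each record is a standalone XML fragment,
// so adding one is a plain file append with no re-parsing of what is
// already on disk.
string logPath = Path.Combine(Path.GetTempPath(), "fragment-log.xml");
File.Delete(logPath); // start clean for the sketch

void AppendRecord(string temp)
{
    File.AppendAllText(logPath, $"<record><temp>{temp}</temp></record>\n");
}

// Reading back needs ConformanceLevel.Fragment, because the file has
// no single root element.
int CountRecords()
{
    var settings = new XmlReaderSettings { ConformanceLevel = ConformanceLevel.Fragment };
    int count = 0;
    using (var reader = XmlReader.Create(logPath, settings))
    {
        while (reader.Read())
        {
            if (reader.NodeType == XmlNodeType.Element && reader.LocalName == "record")
                count++;
        }
    }
    return count;
}
```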
The answer from @Miky Dinescu is one technique for doing this if your output must be an XML-formatted file. The reason is that you are asking it to completely load and re-parse the entire XML file every single time you add another entry. Loading and parsing the XML file becomes more and more I/O-, memory-, and CPU-intensive as the file grows, so it doesn't take long before that overhead overwhelms any hardware that must run within a very limited time frame. Otherwise, you would need to rethink your whole process and could simply buffer all the data into an in-memory buffer which you write out (flush) at a much more leisurely pace.
I made this work, although I do not believe it is the "best practice" method.
I have another class where I keep my XmlDocument alive at all times, and I try to save every time data is added. If saving fails, it simply waits and saves the next time.
I will suggest that others look at Miky Dinescu's suggestion. I just felt I was in too deep to change how I save data.