I have a particular situation where my client requires me to periodically import an MS Access database into the MySQL database of his website (so it's a remote database).
Because the hosting plan is a shared hosting (not a VPS) and there is no ODBC support on the host, the only way to do it is through PHP running an SQL query.
My current idea is this one (the client obviously runs MS Windows):
Create a small C# application that converts the MS Access database into a big SQL query written to a file (a rough sketch of this step follows the list)
The application will then use FTP info to send the file into a specified directory on the website
A PHP script will then run periodically (e.g. every 30 minutes), check whether the file exists, and if so import it into the database
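To illustrate step 1, here is a rough sketch of the converter, assuming the Access file can be opened through the ACE OLE DB provider on the client's machine; the file paths are placeholders and the naive value quoting would need hardening (dates, binary columns, escaping) before real use:

using System;
using System.Data;
using System.Data.OleDb;
using System.IO;

class AccessToSqlDump
{
    static void Main()
    {
        string accessFile = @"C:\data\client.accdb";   // placeholder path
        string outputFile = @"C:\data\upload.sql";     // one query per line, for the PHP importer

        string connStr = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + accessFile;
        using (var conn = new OleDbConnection(connStr))
        using (var writer = new StreamWriter(outputFile))
        {
            conn.Open();
            // enumerate the user tables (skips system tables)
            DataTable tables = conn.GetSchema("Tables");
            foreach (DataRow table in tables.Rows)
            {
                if ((string)table["TABLE_TYPE"] != "TABLE") continue;
                string tableName = (string)table["TABLE_NAME"];

                using (var cmd = new OleDbCommand("SELECT * FROM [" + tableName + "]", conn))
                using (OleDbDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        var values = new string[reader.FieldCount];
                        for (int i = 0; i < reader.FieldCount; i++)
                        {
                            object v = reader.GetValue(i);
                            // naive quoting; good enough for a sketch, not for production data
                            values[i] = v == DBNull.Value
                                ? "NULL"
                                : "'" + v.ToString().Replace("'", "''") + "'";
                        }
                        writer.WriteLine("INSERT INTO `" + tableName + "` VALUES (" + string.Join(", ", values) + ");");
                    }
                }
            }
        }
        // the resulting file can then be pushed to the website with FtpWebRequest or any FTP library
    }
}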
I know it's not the best approach, so I'm asking this question to find a better workaround for the problem. The client has already said that he wants to keep using his MS Access database.
The biggest problem I have is that scripts can only run for 30 seconds, which is obviously a problem when importing data.
To work around the 30-second limit, call your script repeatedly, and keep track of your progress. Here's one rough idea:
// assumes a database connection has already been opened (mysql_connect / mysql_select_db)
if (!file_exists('upload.sql')) exit();

$max = 2000; // the maximum number of queries you want to execute per run

// resume from where the previous run stopped
if (file_exists('progress.txt')) {
    $progress = (int) file_get_contents('progress.txt');
} else {
    $progress = 0;
}

// load the file into an array, expecting one query per line
$file = file('upload.sql');

foreach ($file as $current => $query) {
    if ($current < $progress) continue;        // skip the ones we've already done
    if ($current - $progress >= $max) break;   // stop before we hit the max
    mysql_query($query);
}

// did we finish the file?
if ($current == count($file) - 1) {
    unlink('progress.txt');
    unlink('upload.sql');
} else {
    file_put_contents('progress.txt', $current);
}
Hope you guys are doing fine.
OK, I am using the MySQL.Data client/library to access and use a MySQL database. I have been using it happily for some time on quite a few projects, but I'm suddenly facing a new issue that is holding up my current project. :(
The current project makes a lot of DB queries, and I am facing the following exception:
Can't create more than max_prepared_stmt_count statements (current value: 16382)
I am closing and disposing the DB engine/connection every time I am done with it, but I'm still confused about why I keep getting this error.
Here is some sample code just to give you an idea (unnecessary parts trimmed out):
//this loop calls an API with pagination and gets the API response for each page
while(ContinueSalesOrderPage(apiClient, ref pageNum, days, out string response, window) == true)
{
//this handles the API data for the current page; it's normally 500 entries per page, and it throws the error on the 4th page
KeyValueTag error = HandleSalesOrderPageData(response, pageNum, out int numOrders, window);
}
private KeyValueTag HandleSalesOrderPageData(string response, int pageNum, out int numOrders, WaitWindow window)
{
numOrders = json.ArrayOf("List").Size; //'json' is the parsed API response (parsing code trimmed)
//init db
DatabaseWriter dbEngine = new DatabaseWriter()
{
Host = dbHost,
Name = dbName,
User = dbUser,
Password = dbPass,
};
//connecting to database
bool pass = dbEngine.Connect();
//loop through all the entries for the page; generally there are 500 entries
for(int orderLoop = 0; orderLoop < numOrders; orderLoop++)
{
//this actually handles the queries; per iteration there could be 3 to 10+ insert/update queries using prepared statements
KeyValueTag error = InsertOrUpdateSalesOrder(dbEngine, item, config, pageNum, orderLoop, numOrders, window);
}
//here, as you can see, I disconnect from the db engine; the Disconnect method below closes the db connection as well
dbEngine.Disconnect();
}
//code from the DatabaseWriter class; as you can see this method closes and disposes the database connection properly
public void Disconnect()
{
_CMD.Dispose();
_engine.Close();
_engine.Dispose();
}
So, as you can see, I close/dispose the database connection for each page's processing, but it still shows me that error on the 4th page. FYI, the 4th page's data is not the issue; I checked that. If I skip the other pages and only process the 4th page, it processes successfully.
After some more digging on Google, I found that prepared statements are kept on the database server and need to be closed/deallocated, but I can't find any way to do that using the MySQL.Data client. :(
The following page says:
https://dev.mysql.com/doc/refman/8.0/en/sql-prepared-statements.html
A prepared statement is specific to the session in which it was created. If you terminate a session without deallocating a previously prepared statement, the server deallocates it automatically.
But that seems incorrect, as I'm hitting the error even though I close the connection on each loop.
So I am at a dead end and looking for some help here.
Thanks in advance,
best regards
From the official docs, the role of max_prepared_stmt_count is
This variable limits the total number of prepared statements in the server.
Therefore, you need to increase the value of this variable in your MySQL server's configuration, so as to raise the maximum number of allowed prepared statements:
Open the my.cnf file
Under the mysqld section there is a variable max_prepared_stmt_count. Edit the value accordingly (remember the upper limit of this value is 1048576).
Save and close the file. Restart the MySQL service for the changes to take effect.
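For example, the relevant part of my.cnf could look like this (the value shown is only an example):

[mysqld]
max_prepared_stmt_count = 65536

The variable is also dynamic, so SET GLOBAL max_prepared_stmt_count = 65536; applies it immediately, while the my.cnf entry makes the change survive a restart.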
You're probably running into bug 77421 in MySql.Data: by default, it doesn't reset connections.
This means that temporary tables, user-declared variables, and prepared statements are never cleared on the server.
You can fix this by adding Connection Reset = True; to your connection string.
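For example (the server, database and credential parts below are placeholders; the relevant piece is the Connection Reset option):

//using MySql.Data.MySqlClient;
var connectionString = "Server=dbHost;Database=dbName;Uid=dbUser;Pwd=dbPass;Connection Reset=True;";
using (var connection = new MySqlConnection(connectionString))
{
    connection.Open();
    // session state, including server-side prepared statements, is now cleared
    // whenever a pooled connection is handed back out
}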
Another fix would be to switch to MySqlConnector, an alternative ADO.NET provider for MySQL that fixes this and other bugs. (Disclaimer: I'm the lead author.)
Scenario: I want to write my own autocomplete API for addresses, just like the one Google is offering (very basic: street, house number, city, postcode, country). It is intended for private use and training purposes only. I want to cover about 1 million addresses for a start.
Technology used: .NET Framework (not Core), C#, Visual Studio, OSMSharp, Microsoft SQL Server, Web API 2 (although I will probably switch to ASP.NET Core in the future).
Approach:
Set up the project (Web API 2 or a console project for demo purposes)
Download the relevant file from OpenStreetMap using DownloadClient() (https://download.geofabrik.de/)
Read in the file using OSMSharp and filter out the relevant data.
Convert the filtered data to a DataTable.
Use the DataTable to feed the SqlBulkCopy method and import the data into the database.
Problem: Step 4 is taking way too long. For a file like "Regierungsbezirk Köln" in the osm.pbf format, which is about 160 MB (the uncompressed OSM file is about 2.8 GB), we're talking about 4-5 hours. I want to optimize this. The bulk copy of the DataTable into the database, on the other hand (about 1 million rows), takes just about 5 seconds. (Woah. Amazing.)
Minimal Reproduction: https://github.com/Cr3pit0/OSM2Database-Minimal-Reproduction
What I tried:
Use a stored procedure in SQL Server. This comes with a whole different set of problems, and I didn't quite manage to get it working (mainly because the uncompressed osm.pbf file is over 2 GB and SQL Server doesn't like that).
Come up with a different approach to filter and convert the data from the file to a DataTable (or CSV).
Use the Overpass API, although I read somewhere that the Overpass API is not intended for data sets above 10,000 entries.
Ask the Jedi grandmasters on Stack Overflow for help. (Currently in process ... :D)
Code Extract:
public static DataTable getDataTable_fromOSMFile(string FileDownloadPath)
{
Console.WriteLine("Finished Downloading. Reading File into Stream...");
using (var fileStream = new FileInfo(FileDownloadPath).OpenRead())
{
PBFOsmStreamSource source = new PBFOsmStreamSource(fileStream);
if (source.Any() == false)
{
return new DataTable();
}
Console.WriteLine("Finished Reading File into Stream. Filtering and Formatting RawData to Addresses...");
Console.WriteLine();
DataTable dataTable = convertAdressList_toDataTable(
source.Where(x => x.Type == OsmGeoType.Way && x.Tags.Count > 0 && x.Tags.ContainsKey("addr:street"))
.Select(Address.fromOSMGeo)
.Distinct(new AddressComparer())
);
return dataTable;
}
}
private static DataTable convertAdressList_toDataTable(IEnumerable<Address> addresses)
{
DataTable dataTable = new DataTable();
if (addresses.Any() == false)
{
return dataTable;
}
dataTable.Columns.Add("Id");
dataTable.Columns.Add("Street");
dataTable.Columns.Add("Housenumber");
dataTable.Columns.Add("City");
dataTable.Columns.Add("Postcode");
dataTable.Columns.Add("Country");
Int32 counter = 0;
Console.WriteLine("Finished Filtering and Formatting. Writing Addresses From Stream to a DataTable Class for the Database-SQLBulkCopy-Process ");
foreach (Address address in addresses)
{
dataTable.Rows.Add(counter + 1, address.Street, address.Housenumber, address.City, address.Postcode, address.Country);
counter++;
if (counter % 10000 == 0 && counter != 0)
{
Console.WriteLine("Wrote " + counter + " Rows From Stream to DataTable.");
}
}
return dataTable;
}
Okay, I think I got it. I'm down to about 12 minutes for a file size of about 600 MB and about 3.1 million rows of data after filtering.
The first thing I tried was to replace the logic that populates my DataTable with FastMember. This worked, but didn't give the performance increase I was hoping for (I cancelled the process after 3 hours...). After more research I stumbled upon an old project called "osm2mssql" (https://archive.codeplex.com/?p=osm2mssql). I used a small part of its code, which reads the data directly from the osm.pbf file, and modified it to my use case (which is to extract address data from ways). I did actually use FastMember to write an IEnumerable<Address> to the DataTable, but I don't need OSMSharp and whatever extra dependencies it has anymore. So thank you very much for the suggestion of FastMember; I will certainly keep that library in mind for future projects.
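For reference, this is roughly what the FastMember part can look like (a sketch; the property names on Address and the destination table name are assumptions based on the columns above). ObjectReader can also feed SqlBulkCopy directly, which skips the intermediate DataTable entirely:

using System.Collections.Generic;
using System.Data.SqlClient;
using FastMember;

static void BulkInsertAddresses(IEnumerable<Address> addresses, string connectionString)
{
    using (var bulkCopy = new SqlBulkCopy(connectionString))
    using (var reader = ObjectReader.Create(addresses,
        "Id", "Street", "Housenumber", "City", "Postcode", "Country"))
    {
        bulkCopy.DestinationTableName = "dbo.Addresses"; // placeholder table name
        bulkCopy.BatchSize = 10000;
        // ObjectReader exposes the IEnumerable as an IDataReader, so rows are streamed
        // to SQL Server without building a DataTable in memory first
        bulkCopy.WriteToServer(reader);
    }
}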
For those who are interested, I updated my GitHub project accordingly (https://github.com/Cr3pit0/OSM2Database-Minimal-Reproduction), although I didn't test it thoroughly, because I have moved on from the test project to the real deal, which is a Web API.
I'm quite sure it can be further optimized, but I don't think I care at the moment. Twelve minutes for a method that might be called once a month to update the whole database is fine, I guess. Now I can move on to optimizing my queries for the autocomplete.
So thank you very much to whoever wrote "osm2mssql".
Introduction to the task at hand (can be skipped if impatient)
The company I work for is not a software company, but focuses on mechanical and thermodynamic engineering problems.
To help solve their system design challenges, they have developed a software for calculating the system impact of replacing individual components.
The software is quite old, written in FORTRAN and has evolved over a period of 30 years, which means that we cannot quickly re-write it or update it.
As you may imagine, the way this software is installed has also evolved, but significantly more slowly than the rest of the system. Packaging is done by a batch script that gathers files from different places and puts them in a folder, which is then compiled into an ISO, burned to a CD, and shipped by mail.
You young programmers (I am 30) may expect a program to load DLLs, but otherwise be fairly self-contained after linking, even if the code is made up of several classes from different namespaces etc.
In FORTRAN 70, however... not so much. Which means that the software itself consists of an alarming number of calls to prebuilt modules (read: separate programs).
We need to be able to distribute via the internet, as any other modern company has been able to for a while. To do this we could just make the *.iso downloadable, right?
Well, unfortunately no; the ISO contains several files which are user specific.
As you may imagine, with thousands of users that would mean thousands of ISOs that are nearly identical.
Also, we want to convert the old FORTRAN-based installation software into a real installation package, and all our other (and more modern) programs are C# programs packaged as MSIs.
But the compile time for a single MSI with this old software on our server is close to 10 seconds, so it is simply not an option for us to build the MSI when the user requests it (if multiple users make requests at the same time, the server won't be able to complete them before the requests time out).
Nor can we prebuild the user-specific MSIs and cache them, as we would run out of memory on the server (a total of ~15 GB per released version).
Task description, tl;dr:
Here is what I thought I would do (inspired by comments from Christopher Painter):
Create a base MSI, with dummy files instead of the user-specific files
Create a cab file for each user, with the user-specific files
At request time, inject the user-specific cab file into a temporary copy of the base MSI using the "_Streams" table.
Insert a reference into the Media table with a new 'DiskId' and a 'LastSequence' corresponding to the extra files, and the name of the injected cab file.
Update the File table with the name of the user-specific file in the new cab file, a new Sequence number (within the new cab file's sequence range), and the file size.
Question
My code fails to do the task just described. I can read from the MSI just fine, but the cabinet file is never inserted.
Also:
If I open the MSI in DIRECT mode it corrupts the Media table, and if I open it in TRANSACTION mode it fails to change anything at all.
In direct mode the existing line in the Media table is replaced with:
DiskId: 1
LastSequence: -2145157118
Cabinet: "Name of action to invoke, either in the engine or the handler DLL."
What Am I doing wrong ?
Below I have provided the snippets involved with injecting the new cab file.
snippet 1
public string createCabinetFileForMSI(string workdir, List<string> filesToArchive)
{
//create temporary cabinet file at this path:
string GUID = Guid.NewGuid().ToString();
string cabFile = GUID + ".cab";
string cabFilePath = Path.Combine(workdir, cabFile);
//create an instance of Microsoft.Deployment.Compression.Cab.CabInfo
//which provides file-based operations on the cabinet file
CabInfo cab = new CabInfo(cabFilePath);
//create a list with files and add them to a cab file
//now an argument, but previously this was used as a test:
//List<string> filesToArchive = new List<string>() { @"C:\file1", @"C:\file2" };
cab.PackFiles(workdir, filesToArchive, filesToArchive);
//we will need the path of this file when adding it to an msi..
return cabFile;
}
snippet 2
public int insertCabFileAsNewMediaInMSI(string cabFilePath, string pathToMSIFile, int numberOfFilesInCabinet = -1)
{
//open the MSI package for editing
pkg = new InstallPackage(pathToMSIFile, DatabaseOpenMode.Direct); //opened in Direct mode here; I have also tried Transact - Direct corrupted the Media table when writing, Transact changed nothing (see the question)
return insertCabFileAsNewMediaInMSI(cabFilePath, numberOfFilesInCabinet);
}
snippet 3
public int insertCabFileAsNewMediaInMSI(string cabFilePath, int numberOfFilesInCabinet = -1)
{
if (pkg == null)
{
throw new Exception("Cannot insert cabinet file into non-existing MSI package. Please Supply a path to the MSI package");
}
int numberOfFilesToAdd = numberOfFilesInCabinet;
if (numberOfFilesInCabinet < 0)
{
CabInfo cab = new CabInfo(cabFilePath);
numberOfFilesToAdd = cab.GetFiles().Count;
}
//create a cab file record as a stream (embeddable into an MSI)
Record cabRec = new Record(1);
cabRec.SetStream(1, cabFilePath);
/*The Media table describes the set of disks that make up the source media for the installation.
we want to add one, after all the others
DiskId - Determines the sort order for the table. This number must be equal to or greater than 1,
for our new cab file, it must be greater than the existing ones...
*/
//the limited SQL dialect in the MSI does not support "ORDER BY ... DESC", but does support ORDER BY
IList<int> mediaIDs = pkg.ExecuteIntegerQuery("SELECT `DiskId` FROM `Media` ORDER BY `DiskId`");
int lastIndex = mediaIDs.Count - 1;
int DiskId = mediaIDs.ElementAt(lastIndex) + 1;
//the WiX naming convention for embedded cab files is "cab" + DiskId + ".cab"; the leading '#' in the Media table marks the cabinet as embedded
string mediaCabinet = "cab" + DiskId.ToString() + ".cab";
//The _Streams table lists embedded OLE data streams.
//This is a temporary table, created only when referenced by a SQL statement.
string query = "INSERT INTO `_Streams` (`Name`, `Data`) VALUES ('" + mediaCabinet + "', ?)";
pkg.Execute(query, cabRec);
Console.WriteLine(query);
/*LastSequence - File sequence number for the last file for this new media.
The numbers in the LastSequence column specify which of the files in the File table
are found on a particular source disk.
Each source disk contains all files with sequence numbers (as shown in the Sequence column of the File table)
less than or equal to the value in the LastSequence column, and greater than the LastSequence value of the previous disk
(or greater than 0, for the first entry in the Media table).
This number must be non-negative; the maximum limit is 32767 files.
/MSDN
*/
IList<int> sequences = pkg.ExecuteIntegerQuery("SELECT `LastSequence` FROM `Media` ORDER BY `LastSequence`");
lastIndex = sequences.Count - 1;
int LastSequence = sequences.ElementAt(lastIndex) + numberOfFilesToAdd;
query = "INSERT INTO `Media` (`DiskId`, `LastSequence`, `Cabinet`) VALUES (" + DiskId.ToString() + "," + LastSequence.ToString() + ",'#" + mediaCabinet + "')";
Console.WriteLine(query);
pkg.Execute(query);
return DiskId;
}
Update: stupid me, I forgot about "committing" in transaction mode - but now it does the same as in direct mode, so no real changes to the question.
I will answer this myself, since I just learned something about DIRECT mode that I didn't know before, and I want to keep it here to allow for the eventual re-google.
Apparently we only successfully update the MSI if we close the database handle before the program eventually crashes/exits.
For the purpose of answering the question, this destructor should do it.
~className()
{
if (pkg != null)
{
try
{
pkg.Close();
}
catch (Exception ex)
{
//rollback not included as we edit directly?
//do nothing..
//atm. we just don't want to break anything if database was already closed, without dereferencing
}
}
}
After adding the correct close statement, the MSI grew in size
(and a media row was added to the Media table :) )
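For completeness, here is a sketch of the deterministic alternative to the destructor, using the method and field names from the snippets above (in Transact mode the Commit call is what persists the changes; in Direct mode the writes land in the file once the handle is released):

try
{
    insertCabFileAsNewMediaInMSI(cabFilePath, pathToMSIFile);
}
finally
{
    if (pkg != null)
    {
        pkg.Commit(); // persist pending changes (required in Transact mode)
        pkg.Close();  // release the database handle so the MSI file is actually written
    }
}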
I will post the entire class for solving this task, when it's done and tested, but I'll do it in the related question on SO.
Last year I developed an ASP.NET application implementing the MVP pattern.
The site is not very large (about 9,000 views/day).
It is a common application which just displays articles and supports scheduling (via datetime), votes and views, sections and categories.
Since then I have created more than 15 sites in the same style (the database mechanism was built with the same logic).
What I did was:
Every time a request arrives I have to fetch articles, sections, categories, views and votes from my database and display them to the user... like all other web apps.
My database objects are something like the following:
public class MyObjectDatabaseManager{
public static string Table = DBTables.ArticlesTable;
public static string ConnectionString = ApplicationManager.ConnectionString;
public bool insertMyObject(MyObject myObject){/*.....*/}
public bool updateMyObject(MyObject myObject){/*.....*/}
public bool deleteMyObject(MyObject myObject){/*.....*/}
public MyObject getMyObject(int MyObjectID){/**/}
public List<MyObject> getMyObjects( int limit, int page, bool OrderBy, bool ASC){/*...*/}
}
Whenever I want to communicate with the database I do something like the following:
MySqlConnection myConnection = new MySqlConnection(ConnectionString);
try
{
myConnection.Open();
MySqlCommand cmd = new MySqlCommand(myQuery,myConnection);
cmd.Parameters.AddWithValue(...);
cmd.ExecuteReader(); /* OR */ ExecuteNonQuery();
}catch(Exception){}
finally
{
if (myConnection != null)
{
myConnection.Close();
myConnection.Dispose();
}
}
Two months later I ran into trouble.
The performance started to drop and the database started to return errors: max_user_connections.
Then I thought: "Let's cache the page."
And I started to use output cache for the pages
(not a very sophisticated idea...).
Twelve months later my friend asked me to create a "live" article...
an article that can be updated without any delay (i.e. not served from the output cache...).
Then it came to my mind: "Why use cache at all? Joomla etc. don't."
So... I removed the magic "Output cache" directive...
and then I ran into the same problem again...
MAX_USER_CONNECTIONS! :/
What am I doing wrong?
I know that my code communicates a lot with the database, but...
what about connection pooling?
Sorry for my English.
Please... help :/
I have no idea how to figure it out. :/
Thank you.
I'm running on a shared hosting package.
My DB is over 60 MB in size.
I have more than 6,000 rows in some tables, like articles.
My hosting provider gives me 25 connections to the database (a very large number, in my opinion).
Your code looks fine to me, although from a style perspective I prefer "using" to "try / finally / Dispose()".
One thing to check is to make sure that the connection strings you're using are identical everywhere in your code. Most DB drivers do connection pooling based on comparing the connection strings.
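For example, here is a sketch of the using pattern (the query, the parameter name and myObjectID are placeholders; the point is that the connection and command are always disposed, and the same connection string is reused so pooling kicks in):

//using MySql.Data.MySqlClient;
using (var myConnection = new MySqlConnection(ApplicationManager.ConnectionString))
using (var cmd = new MySqlCommand(myQuery, myConnection))
{
    myConnection.Open();
    cmd.Parameters.AddWithValue("@id", myObjectID);
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // map the row to MyObject here
        }
    }
} // the connection is closed and returned to the pool here, even if an exception is thrown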
You may need to increase the max_connections variable in your mysql config.
See:
http://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html
Actually, Max #/connections is an OS-level configuration.
For example, under NT/XP, it was configurable in the registry, under HKLM, ..., TcpIp, Parameters, TcpNumConnections:
http://smallvoid.com/article/winnt-tcpip-max-limit.html
More important, you want to maximize the number of "ephemeral ports" available for opening new connections:
http://www.ncftp.com/ncftpd/doc/misc/ephemeral_ports.html
Windows:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
On the Edit menu, click Add Value, and then add the following registry value:
Value Name: MaxUserPort
Data Type: REG_DWORD
Value: 65534
Linux:
sudo sysctl -w net.ipv4.ip_local_port_range="1024 64000"
I am working on an application which reads the event logs (Application) from remote machines. I am making use of the EventLog class in .NET and then iterating over the log entries, but this is very slow. In some cases, some machines have 40,000+ log entries and it takes hours to iterate through them.
What is the best way to accomplish this task? Are there any other classes in .NET, or any other technology, that would be faster?
Man, I feel your pain. We had the exact same issue in our app.
Your solution has a branch depending on what server version you're running on and what server version your "target" machine is running on.
If you're both on Vista or Windows Server 2008, you're in luck. You should look at System.Diagnostics.Eventing.Reader.EventLogQuery and System.Diagnostics.Eventing.Reader.EventLogReader. These are new in .net 3.5.
Basically, you can build a query in XML and ship it over to run on the remote computer. Maybe you're just searching for events of a specific type, or maybe just new events from a specific point in time. The search runs on the remote machine, and then you just get back the matching events. The new classes are much faster than the old .net 2.0 way, but again, they are only supported on Vista or Windows Server 2008.
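Here is a rough sketch of what a remote query can look like (the machine name and the XPath filter are placeholders; this one asks the remote host for error-level events from the last 24 hours, so only the matches travel over the wire):

//using System.Diagnostics.Eventing.Reader;
EventLogSession session = new EventLogSession("RemoteMachineName"); // placeholder host
// error-level (Level=2) events from the last 24 hours (timediff is in milliseconds)
EventLogQuery query = new EventLogQuery("Application", PathType.LogName,
    "*[System[(Level=2) and TimeCreated[timediff(@SystemTime) <= 86400000]]]");
query.Session = session; // the query is evaluated on the remote machine
using (EventLogReader reader = new EventLogReader(query))
{
    for (EventRecord record = reader.ReadEvent(); record != null; record = reader.ReadEvent())
    {
        // only the matching events come back; FormatDescription() renders the message if needed
        Console.WriteLine(record.FormatDescription());
    }
}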
For our app, when the target is NOT on Vista/Win2008, we downloaded the raw .evt file from the remote system and then parsed the file using its binary format. There are several sources of information about the event log format for .evt files (pre-Vista), including an article I recall on codeproject.com that had some C# code.
Vista and Windows Server 2008 machines use a new .evtx format, so you can't use the same binary parsing approach across all versions. But the new EventLogQuery and EventLogReader classes are so fast that you won't have to; it's now perfectly speedy to just use the built-in classes.
Event Log Reader is horribly slow... too slow. WTF Microsoft?
Use LogParser 2.2 - Search for C# and LogParser on the Internet (or you can use the log parser commands from the command line). I don't want to duplicate the work already contributed by others.
I pull the log from the remote system by having the log exported as an EVTX file. I then copy the file from the remote system. This process is really quick - even with a network that spans the planet (I had issues with having the log exported to a network resource). Once you have it local, you can do your searches and processing.
There are multiple reasons for wanting the EVTX; I won't get into why we do this.
The following is a working example of the code to save a copy of the log as an EVTX:
(Notes: "device" is the network host name or IP. "LogName" is the name of the log desired: "System", "Security", or "Application". outputPathOnRemoteSystem is the path on the remote computer, such as "c:\temp\%hostname%.%LogName%.%YYYYMMDD_HH.MM%.evtx".)
static public bool DumpLog(string device, string LogName, string outputPathOnRemoteSystem, out string errMessage)
{
bool wasExported = false;
string errorMessage = "";
try
{
System.Diagnostics.Eventing.Reader.EventLogSession els = new System.Diagnostics.Eventing.Reader.EventLogSession(device);
els.ExportLogAndMessages(LogName, PathType.LogName, "*", outputPathOnRemoteSystem);
wasExported = true;
}
catch (UnauthorizedAccessException e)
{
errorMessage = "Unauthorized - Access Denied: " + e.Message;
}
catch (EventLogNotFoundException e)
{
errorMessage = "Event Log Not Found: " + e.Message;
}
catch (EventLogException e)
{
errorMessage = "Export Failed: " + e.Message + ", Log: " + LogName + ", Device: " + device;
}
errMessage = errorMessage;
return wasExported;
}
A good Explanation/Example can be found on MSDN.
EventLogSession session = new EventLogSession(Environment.MachineName);
// [System/Level=2] selects only the error-level events
// Where "Log" is the name of the log you want to get data from.
EventLogQuery query = new EventLogQuery("Log", PathType.LogName, "*[System/Level=2]");
query.Session = session; // attach the session created above (pass a remote machine name to EventLogSession to query a remote host)
EventLogReader reader = new EventLogReader(query);
for (EventRecord eventInstance = reader.ReadEvent();
null != eventInstance;
eventInstance = reader.ReadEvent())
{
// Output or save your event data here.
}
When waiting 5-20 minutes with the old code this one does it in less than 10 seconds.
Maybe WMI can help you:
WMI with C#
Have you tried using the remoting features in powershell 2.0? They allow you to execute cmdlets (like ones to read event logs) on remote machines and return the results (as objects, of course) to the calling session.
You could place a program on those machines that saves the log to a file and sends it to your web application. I think that would be a lot faster, since you can do the looping locally, but I'm not sure how to do it, so I can't give you any code. :(
I recently did such a thing via a WCF callback interface. However, my clients already interacted with the server through WCF, so adding a WCF callback was easy in my project; full code with examples is available here.
Just had the same issue and want to share my solution. It makes a search through the Application, System and Security event logs about 100 times faster (using EventLogQuery) compared to the 260 seconds it took before (using EventLog).
And it does this in a way that makes it possible to check whether the event message contains a pattern, or to run any other check, without requiring FormatDescription().
My trick is to use the same mechanism as PowerShell's Get-WinEvent does, and then run the result check on each returned event.
Here is my code to find all events within the last 4 days where the event message contains a filter pattern.
string[] eventLogSources = {"Application", "System", "Security"};
var messagePattern = "*Your Message Search Pattern*";
var timeStamp = DateTime.Now.AddDays(-4);
var matchingEvents = new List<EventRecord>();
foreach (var eventLogSource in eventLogSources)
{
var i = 0;
var query = string.Format("*[System[TimeCreated[@SystemTime >= '{0}']]]",
timeStamp.ToUniversalTime().ToString("o"));
var elq = new EventLogQuery(eventLogSource, PathType.LogName, query);
var elr = new EventLogReader(elq);
EventRecord entryEventRecord;
while ((entryEventRecord = elr.ReadEvent()) != null)
{
if ((entryEventRecord.Properties)
.FirstOrDefault(x => (x.Value.ToString()).Contains(messagePattern)) != null)
{
matchingEvents.Add(entryEventRecord);
i++;
}
}
}
Maybe the remote computers could do a little bit of the computing themselves, so your server would only deal with relevant information. It would be a kind of cluster, using the remote computers to do some light filtering while the server does the analysis part.