Parse IIS log file - is there an alternative to LogParser - c#

I need to parse an IIS log file. Is there any alternative to LogParser, a simple class to query a log file?
I only need to know how many requests I received between two dates.
Here is an example of an IIS log file:
#Software: Microsoft Internet Information Services 7.5
#Version: 1.0
#Date: 2014-08-26 12:20:57
#Fields: date time s-sitename s-computername s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs-version cs(User-Agent) cs(Cookie) cs(Referer) cs-host sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
2014-08-26 12:20:57 W3SVC1 QXXXSXXXX 172.25.161.53 POST /XXXX/XXX/XXXX/XXXXX/1.0/XXXX/XXXXXXXX/xxxxxx.svc - 443 - 999.99.999.999 HTTP/1.1 - - - xxxx.xxxx.xxx.xxx.xxxx.xxxx.xxx.com 200 0 0 4302 5562 1560

You can use Tx (LINQ to Logs and Traces); you can install it via NuGet and use it like this:
var iisLog = W3CEnumerable.FromFile(pathToLog);
int nbOfLogsForLastHour = iisLog.Where(x => x.dateTime > DateTime.Now.AddHours(-1)).Count();
If the log file is used by another process, you can use W3CEnumerable.FromStream
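To get the count between two specific dates (the original question), here is a small sketch along the same lines; the dateTime field is the one shown above, and the boundary dates are just placeholders:
// Sketch: count requests between two dates; fromDate/toDate are placeholder bounds.
var fromDate = new DateTime(2014, 8, 26, 0, 0, 0);
var toDate = new DateTime(2014, 8, 27, 0, 0, 0);
var iisLog = W3CEnumerable.FromFile(pathToLog);
int requestCount = iisLog.Count(x => x.dateTime >= fromDate && x.dateTime < toDate);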

It's 2017 and LogParser is still closed source. Moreover, all the instrumentation provided by cloud solutions appears to be making the need to parse IIS logs a thing of the past. But since I am also dealing with legacy apps, I wrote this simple parser using .NET Core.
using System;
using System.IO;
using W3CParser.Extensions;
using W3CParser.Instrumentation;
using W3CParser.Parser;

namespace W3CParser
{
    class Program
    {
        static void Main(string[] args)
        {
            var reader = new W3CReader(File.OpenText(args.Length > 0 ? args[0] : "Data/foobar.log"));

            using (new ConsoleAutoStopWatch())
            {
                foreach (var @event in reader.Read())
                {
                    Console.WriteLine("{0} ({1}):{2}/{3} {4} (bytes sent)",
                                      @event.Status.ToString().Red().Bold(),
                                      @event.ToLocalTime(),
                                      @event.UriStem.Green(),
                                      @event.UriQuery,
                                      @event.BytesSent);
                }
            }
        }
    }
}
Source code: https://github.com/alexnolasco/32120528

You can use IISLogParser, installing it via NuGet; it has support for large files (> 1 GB):
List<IISLogEvent> logs = new List<IISLogEvent>();
using (ParserEngine parser = new ParserEngine([filepath]))
{
    while (parser.MissingRecords)
    {
        logs = parser.ParseLog().ToList();
    }
}
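To answer the original date-range question with this library, here is a rough sketch; note that the name of the timestamp property (DateTimeEvent below) is an assumption, so check the IISLogEvent class of the package version you install:
// Sketch: count events in a date range while reading a large file in chunks.
// Assumption: IISLogEvent exposes its timestamp as DateTimeEvent.
var fromDate = new DateTime(2014, 8, 26);
var toDate = new DateTime(2014, 8, 27);
int requestCount = 0;
using (ParserEngine parser = new ParserEngine([filepath]))
{
    while (parser.MissingRecords)
    {
        // ParseLog() returns the next chunk of records on each call.
        requestCount += parser.ParseLog()
            .Count(x => x.DateTimeEvent >= fromDate && x.DateTimeEvent < toDate);
    }
}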

If you're dealing with large volumes and/or dispersed locations of IIS log files, then SpectX is a handy tool for this because you don't have to ingest the logs and can run queries directly on multiple raw files. Average processing speed is around 350 MB/sec per core.
It's not open source but the full-functionality 30-day trial is free.
Tutorials:
Parsing IIS logs.
Analyzing IIS logs - 20 sample queries.
To filter a time period, sort the logs by date and filter the period you need, e.g.:
| sort(date_time desc)
| filter(date_time > T('2019-11-01 08:48:20.000 +0200'))
| filter(date_time < T('2019-11-05 11:48:20.000 +0200'));

I use the filter feature of the CMTrace.exe tool (refer to the screenshot):

Related

Maintaining a (single) stable filename using Serilog File sink

I am logging the data like below:
.WriteTo.File("log.txt", rollOnFileSizeLimit: true, retainedFileCountLimit: 1,
    fileSizeLimitBytes: 10)
This creates a file set like this:
log.txt => log_001.txt => log_002.txt
Since I set retainedFileCountLimit = 1, log.txt gets deleted and then only log_001.txt remains, and so on.
Is there any way to keep writing to a single log.txt file instead of producing files with increasing numeric suffixes?
There is a forked sink, Serilog.Sinks.PersistentFile, that does this.
But it's a very debatable ask, which is why it's not supported by the standard Serilog.Sinks.File for the foreseeable future.
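A rough sketch of what the configuration might look like with that forked sink; the PersistentFile extension method and its parameter names are assumptions based on its README, so verify them against the package before relying on this:
// Sketch only: assumes Serilog.Sinks.PersistentFile mirrors the WriteTo.File() API
// and keeps writing to "log.txt" itself, rolling old content out to suffixed files.
Log.Logger = new LoggerConfiguration()
    .WriteTo.PersistentFile("log.txt",
        rollOnFileSizeLimit: true,
        retainedFileCountLimit: 1,
        fileSizeLimitBytes: 10)
    .CreateLogger();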

Why ManagementObjectSearcher call is insanely slow (30sec-2mins) [duplicate]

I am enumerating installed applications using WMI, and this block is taking a relatively long time to complete no matter how I structure it. It takes 13 seconds in my environment every time. Is there a better (faster) way to check if a program is installed? (I'm using iTunes as an example program to check for)
private static string Timestamp
{
    get { return DateTime.Now.ToString("HH:mm:ss.ffff"); }
}

private static void LoadInstalledPrograms()
{
    List<string> installedPrograms = new List<string>();
    Console.WriteLine("0 - {0}", Timestamp);

    ManagementObjectSearcher mos = new ManagementObjectSearcher("SELECT * FROM Win32_Product");
    Console.WriteLine("1 - {0}", Timestamp);

    ManagementObjectCollection managementObjectCollection = mos.Get();
    Console.WriteLine("2 - {0}", Timestamp);

    foreach (ManagementObject mo in managementObjectCollection)
    {
        installedPrograms.Add(mo["Name"].ToString());
    }
    Console.WriteLine("3 - {0}", Timestamp);
    Console.WriteLine("Length - {0}", installedPrograms.Count);
}
SELECT * FROM Win32_Product
0 - 08:08:51.3762
1 - 08:08:51.3942
2 - 08:08:51.4012
3 - 08:09:04.8326
Length - 300
SELECT * FROM Win32_Product WHERE name = 'iTunes'
0 - 08:14:17.6529
1 - 08:14:17.6709
2 - 08:14:17.6779
3 - 08:14:31.0332
Length - 1
SELECT * FROM Win32_Product WHERE name LIKE 'iTunes'
0 - 08:16:38.2719
1 - 08:16:38.2899
2 - 08:16:38.2999
3 - 08:16:51.5113
Length - 1
SELECT name FROM Win32_Product WHERE name LIKE 'iTunes'
0 - 08:19:53.9144
1 - 08:19:53.9324
2 - 08:19:53.9394
3 - 08:20:07.2794
Length - 1
If you query "Win32_product" the msi-installer checks and validates every product.
The KB article http://support.microsoft.com/kb/974524 shows:
Win32_product Class is not query optimized. Queries such as “select * from Win32_Product where (name like 'Sniffer%')” require WMI to use the MSI provider to enumerate all of the installed products and then parse the full list sequentially to handle the “where” clause. This process also initiates a consistency check of packages installed, verifying and repairing the install. With an account with only user privileges, as the user account may not have access to quite a few locations, may cause delay in application launch and an event 11708 stating an installation failure.
Win32reg_AddRemovePrograms is a much lighter and effective way to do this, which avoids the calls to do a resiliency check, especially in a locked down environment. So when using Win32reg_AddRemovePrograms we will not be calling on msiprov.dll and will not be initiating a resiliency check.
So be careful with "Win32_product".
Update: nice article https://sdmsoftware.com/group-policy-blog/wmi/why-win32_product-is-bad-news/
WMI is taking its time, as you already noticed. Iterating through the registry might do the trick for you.
You might have a look at Get installed applications in a system here on Stack Overflow, where both methods are mentioned.
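As a rough illustration of the registry route (not the exact code from the linked answer), enumerating the Uninstall keys looks roughly like this; note that per-user installs under HKEY_CURRENT_USER are not covered here:
using Microsoft.Win32;
using System.Collections.Generic;

static List<string> GetInstalledProgramsFromRegistry()
{
    var results = new List<string>();
    // 64-bit and 32-bit uninstall locations under HKEY_LOCAL_MACHINE.
    string[] uninstallKeys =
    {
        @"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
        @"SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall"
    };
    foreach (var keyPath in uninstallKeys)
    {
        using (var key = Registry.LocalMachine.OpenSubKey(keyPath))
        {
            if (key == null) continue;
            foreach (var subKeyName in key.GetSubKeyNames())
            {
                using (var subKey = key.OpenSubKey(subKeyName))
                {
                    var displayName = subKey?.GetValue("DisplayName") as string;
                    if (!string.IsNullOrEmpty(displayName))
                        results.Add(displayName);
                }
            }
        }
    }
    return results;
}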
As Bernhard points out, WMI use of Win32_Product initiates an integrity check of the package estate, and will hence be quite slow to use - and in special cases it can trigger an MSI self-repair (I have never seen this happen on my machines).
Instead of WMI, you can use the MSI automation interface directly to enumerate the applications installed via Windows Installer packages (MSI files) on the machine. This is very quick and doesn't touch WMI at all.
See this example: how to find out which products are installed - newer product are already installed MSI windows (full blown, but basic and easy to understand VBScript example - do check it out). There are many properties you can retrieve for each product, please consult the MSDN documentation for the MSI automation interface. The linked sample VBScript code and the MSDN documentation taken together should help you get going quickly I hope.
P.S: I know this is an old question, but this issue keeps coming up (specifically the slowness of WMI) - just for future reference.
As mentioned here, the registry is not reliable and WMI is slow. Thus for me the best option was using the Windows Installer API. Add a reference to msi.dll and then adapt the following code to your needs:
public static string GetVersionOfInstalledApplication(string queryName)
{
    Type type = Type.GetTypeFromProgID("WindowsInstaller.Installer");
    Installer installer = Activator.CreateInstance(type) as Installer;

    StringList products = installer.Products;
    foreach (string productGuid in products)
    {
        string currName = installer.ProductInfo[productGuid, "ProductName"];
        string currVersion = installer.ProductInfo[productGuid, "VersionString"];
        if (currName == queryName)
        {
            return currVersion;
        }
    }
    return null;
}
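For example, a call like var version = GetVersionOfInstalledApplication("iTunes"); returns the version string of the installed product, or null if nothing matches. The Installer and StringList types come from the msi.dll COM interop (the WindowsInstaller namespace), which is why the reference mentioned above is needed.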
You should use SELECT Name FROM Win32_Product in the WMI query; it works for me.
SELECT * loads all data members, so using it takes much longer.
PowerShell 5.1 has Get-Package instead.
get-package *chrome*
Name Version Source ProviderName
---- ------- ------ ------------
Google Chrome 109.0.5414.75 msi

Shutting down VM returns all VM states as unknown

I am using the methods below to shut down and query the role instances. When I shut down a VM, all other role instances are returned with a status of RoleStateUnknown. After about a couple of minutes I can query again and get the actual status. How can I get the actual status in real time using the Azure Management APIs? Or is this an issue with how the VMs are configured? They are configured with the same storage location and the same virtual network.
The code shown was based off the template for Deploy and Manage Virtual Machines in Visual Studio 2015.
The call to shutdown the VM:
var shutdownParams = new VirtualMachineShutdownParameters();
if (deallocate)//deallocate is true in this instance
shutdownParams.PostShutdownAction = PostShutdownAction.StoppedDeallocated; // Fully deallocate resources and stop billing
else
shutdownParams.PostShutdownAction = PostShutdownAction.Stopped; // Just put the machine in stopped state, keeping resources allocated
await _computeManagementClient.VirtualMachines.ShutdownAsync(_parameters.CloudServiceName, _parameters.CloudServiceName, vmName, shutdownParams);
The call to query for all role instances
XXX_VirtualMachine is a class that holds the name and instance status:
internal List<XXX_VirtualMachine> GetAllVirtualMachines()
{
    List<XXX_VirtualMachine> vmList = new List<XXX_VirtualMachine>();
    try
    {
        DeploymentGetResponse deployment;
        deployment = _computeManagementClient.Deployments.GetByName(_parameters.CloudServiceName, _parameters.CloudServiceName);
        for (int i = 0; i < deployment.RoleInstances.Count; i++)
        {
            vmList.Add(new XXX_VirtualMachine(deployment.RoleInstances[i].InstanceName, deployment.RoleInstances[i]));
        }
    }
    catch (Exception e)
    {
        System.Windows.Forms.MessageBox.Show(e.Message);
    }
    return vmList;
}
So I finally got around to giving this a kick! (apologies for the delay, people kept expecting that work stuff - inconsiderate fools!)
Firstly, this isn't really an answer, just an exploration of the problem. You probably know all of this already, but maybe someone reading it will see something I've missed.
I've created three VMs in a single cloud service and, lo and behold, it did exactly what you predicted when you shut one down.
First off, both portals appear to give reliable answers, even when the .NET request is reporting RoleStateUnknown.
Looking at the XML that comes out of the request to
https://management.core.windows.net/{subscriptionid}/services/hostedservices/vm01-u3rzv2q6/deploymentslots/Production
we get:
<RoleInstance>
<RoleName>vm01</RoleName>
<InstanceName>vm01</InstanceName>
<InstanceStatus>RoleStateUnknown</InstanceStatus>
<InstanceSize>Basic_A1</InstanceSize>
<InstanceStateDetails />
<PowerState>Started</PowerState>
I then fired up PowerShell to see if that was doing the same, which it was (not unexpected since it calls the same REST endpoint), with Get-AzureVm returning:
ServiceName Name Status
----------- ---- ------
vm01-u3rzv2q6 vm01 CreatingVM
vm01-u3rzv2q6 vm02 RoleStateUnknown
vm01-u3rzv2q6 vm03 RoleStateUnknown
At the appropriate times, which again is as observed.
Wondering what the timing was, I then ran this:
while ($true) { (get-azurevm -ServiceName vm01-u3rzv2q6 -Name vm01).InstanceStatus ; get-azurevm ; (date).DateTime }
ReadyRole
vm01-u3rzv2q6 vm01 ReadyRole
vm01-u3rzv2q6 vm02 ReadyRole
vm01-u3rzv2q6 vm03 ReadyRole
07 March 2016 04:31:01
07 March 2016 04:31:36
StoppedDeallocated
vm01-u3rzv2q6 vm01 Stoppe...
vm01-u3rzv2q6 vm02 RoleSt...
vm01-u3rzv2q6 vm03 RoleSt...
07 March 2016 04:31:49
07 March 2016 04:33:44
StoppedDeallocated
vm01-u3rzv2q6 vm01 Stoppe...
vm01-u3rzv2q6 vm02 ReadyRole
vm01-u3rzv2q6 vm03 ReadyRole
07 March 2016 04:33:52
So it seems that the machine shuts down, and then a process must begin to update the cloud service, which takes away its ability to report its status for what seems to be exactly two minutes.
Somewhere in the API there must be a place where it is reported properly, because the portals don't have this problem.
I spent a while down a blind alley looking for an 'InstanceView' for the VM, but it seems that doesn't exist for classic deployments.
My next thought is to put together a simple REST client that takes a management certificate and see if the URI can be hacked around a bit to give anything more interesting. (It's got to be there somewhere!)
What may be useful is that the PowerState isn't affected by this problem. So you could potentially have a secondary check on that while you have the RoleStateUnknown error; it's far, far from perfect, but depending on what you're looking to do it might work.
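As a rough sketch of that fallback, using the same deployment object as the question's GetAllVirtualMachines (the property names mirror the XML elements above; check the RoleInstance class of the management library you're using):
// Sketch: fall back to PowerState while InstanceStatus reports RoleStateUnknown.
foreach (var roleInstance in deployment.RoleInstances)
{
    var status = roleInstance.InstanceStatus;
    if (status == "RoleStateUnknown")
    {
        // PowerState kept reporting Started/Stopped during the ~2 minute window.
        status = string.Format("Unknown (power state: {0})", roleInstance.PowerState);
    }
    Console.WriteLine("{0}: {1}", roleInstance.InstanceName, status);
}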
Failing that, I'd say it is clearly a bug in Azure, and a support call could definitely be raised for it.

Massive performance problems running applications in hyper v

Update: See bottom: added a CreateArticleRelations code example.
OK, this is a tricky one. I am experiencing a massive performance problem in Hyper-V when it comes to the preview and production environments.
First of all, here is the setup.
.NET 4.0 on all servers.
Preview:
Webserver: virtual machine, 8 GB RAM, 4 CPUs, Windows Server 2008 R2 (x64)
Database: virtual machine, 6 GB RAM, 2 CPUs, Windows Server 2008 R2 (x64)
Production:
Webserver: virtual machine, 8 GB RAM, 4 CPUs, Windows Server 2008 R2 (x64)
Database: physical machine, 48 GB RAM, 16 CPUs, Windows Server 2008 R2 (x64)
This is a B2B shop running on these servers, and when running the integration for one product the results are mind-blowing to me. I will provide pictures.
Running it for one product in preview takes 83 seconds for everything to update.
Running it for one product in production takes 301 seconds (!) for everything to update. Same product!
If I run this locally it takes about 40 seconds to complete.
I have run dotTrace profiling remotely on both servers to actually see what is taking the time. I use Enterprise Library for logging, Umbraco as the CMS and uCommerce as the commerce platform. Look at the picture example below of one finding I have seen.
First of all, CreateArticleRelations takes 140 seconds on the production server but only 46 on preview. Same product, same data. Then to the really funky stuff: on the production run, enterprise logging is at the top taking 64 seconds, but on the preview run it is so far down the list that it takes practically no time at all.
The implementation of the logging looks like this:
private string LogMessage(string message, int priority, string category, TraceEventType severity, int code, int skipFrames = 2)
{
    // Check if "code" exists in eCommerceB2B.LogExceptions
    var dbf = new ThirdPartSources();
    var exeptions = dbf.GetLogExeptions();
    foreach (var exeption in exeptions)
    {
        if (code.ToString() == exeption)
            return DateTime.Now.ToString();
    }

    try
    {
        var stack = new StackTrace(skipFrames);
        if (_logWriter.IsLoggingEnabled())
        {
            var logEntry = new LogEntry
            {
                Title = stack.GetFrame(0).GetMethod().ReflectedType.FullName + " " +
                        stack.GetFrame(0).GetMethod(),
                Message = message,
                Priority = priority,
                Severity = severity,
                TimeStamp = DateTime.Now,
                EventId = code
            };
            logEntry.Categories.Add(category);
            _logWriter.Write(logEntry);
            return logEntry.TimeStampString;
        }
    }
    catch
    {
        // (closing braces and this catch restored; the original snippet was cut off here)
    }
    return null; // fallback added so the truncated snippet compiles
}
Logging is set up to a rolling flat file and a database. I have tried to disable the logging, and that saves about 20 seconds, but LogMessage is still up at the top.
This has blown my mind for days and I can't seem to find a solution. Sure, I can remove logging completely, but I want to find the cause of the problem.
What bothers me is that, for example, running one method (CreateArticleRelations) takes almost 4 times as long on the production server. The CPU level is never over 30% and 5 GB of RAM is available. The application is run as a console application.
Someone please save me! :) I can provide more data if needed. My bet is that it has something to do with the virtual server, but I have no idea what to check.
Update:
Per a comment, I tried to comment out LogMessage completely. It saves about 100 seconds in total, which tells me that something is terribly wrong. It still takes 169 seconds to create the relations vs 46 seconds in preview, and in preview logging is still enabled. What can be wrong with Enterprise Library to make it behave this way? And still, why is the code running 4x slower on the production server? See the image after I removed LogMessage. It is from PRODUCTION.
CreateArticleRelations
private void CreateArticleRelations(Product uCommerceProduct, IStatelessSession session)
{
var catalogues = _jsbArticleRepository.GetCustomerSpecificCatalogue(uCommerceProduct.Sku).CustomerRelations;
var defaultCampaignName = _configurationManager.GetValue(ConfigurationKeys.UCommerceDefaultCampaignName);
var optionalArticleCampaignName = _configurationManager.GetValue(ConfigurationKeys.UCommerceDefaultOptionalArticleCampaignName);
var categoryRelations =
session.Query<CategoryProductRelation>()
.Fetch(x => x.Category)
.Fetch(x => x.Product)
.Where(
x =>
x.Category.Definition.Name == _customerArticleCategory && x.Product.Sku == uCommerceProduct.Sku)
.ToList();
var relationsAlreadyAdded = _categoryRepository.RemoveCataloguesNotInRelation(catalogues, categoryRelations,
session);
_categoryRepository.SetArticleCategories(session, relationsAlreadyAdded, uCommerceProduct, catalogues,
_customerCategories, _customerArticleCategory,
_customerArticleDefinition);
//set campaigns and optional article
foreach (var jsbArticleCustomerRelation in catalogues)
{
// Article is in campaign for just this user
if (jsbArticleCustomerRelation.HasCampaign)
{
_campaignRepository.CreateCampaignAndAddProduct(session, jsbArticleCustomerRelation, defaultCampaignName);
}
else // remove the article from campaign for user if exists
{
_campaignRepository.DeleteProductFromCampaign(session, jsbArticleCustomerRelation, defaultCampaignName);
}
// optional article
if(jsbArticleCustomerRelation.IsOptionalArticle)
{
_campaignRepository.CreateCampaignAndAddProduct(session, jsbArticleCustomerRelation, optionalArticleCampaignName);
}
else
{
_campaignRepository.DeleteProductFromCampaign(session, jsbArticleCustomerRelation, optionalArticleCampaignName);
}
}
}
We hit the database in some way on almost every row here. For example, in DeleteProductFromCampaign the following code takes 43 seconds in the preview environment and 169 seconds in the production environment.
public void DeleteProductFromCampaign(IStatelessSession session, JSBArticleCustomerRelation jsbArticleCustomerRelation, string campaignName)
{
var productTarget =
session.Query<ProductTarget>()
.FirstOrDefault(
x =>
x.CampaignItem.Campaign.Name == jsbArticleCustomerRelation.CustomerNumber &&
x.CampaignItem.Name == campaignName &&
x.Sku == jsbArticleCustomerRelation.ArticleNumber);
if (productTarget != null)
{
session.Delete(productTarget);
}
}
So this code, for example, runs 4x slower on the production server. The biggest difference between the servers is that the production (physical) database server is set up with numerous instances and I am using one of them (20 GB of RAM).

What is the Fastest way to read event log on remote machine?

I am working on an application which reads event logs (Application) from remote machines. I am making use of the EventLog class in .NET and then iterating over the log entries, but this is very slow. In some cases, some machines have 40,000+ log entries and it takes hours to iterate through them.
What is the best way to accomplish this task? Are there any other classes in .NET which are faster, or any other technology?
Man, I feel your pain. We had the exact same issue in our app.
Your solution has a branch depending on what server version you're running on and what server version your "target" machine is running on.
If you're both on Vista or Windows Server 2008, you're in luck. You should look at System.Diagnostics.Eventing.Reader.EventLogQuery and System.Diagnostics.Eventing.Reader.EventLogReader. These are new in .net 3.5.
Basically, you can build a query in XML and ship it over to run on the remote computer. Maybe you're just searching for events of a specific type, or maybe just new events from a specific point in time. The search runs on the remote machine, and then you just get back the matching events. The new classes are much faster than the old .net 2.0 way, but again, they are only supported on Vista or Windows Server 2008.
For our app when the target is NOT on Vista/Win2008, we downloaded the raw .evt file from the remote system, and then parsed the file using its binary format. There are several sources of data about the event log format for .evt files (pre-Vista), including link text and an article I recall on codeproject.com that had some c# code.
Vista and Windows Server 2008 machines use the new .evtx format, so you can't use the same binary parsing approach across all versions. But the new EventLogQuery and EventLogReader classes are so fast that you won't have to. It's now perfectly speedy to just use the built-in classes.
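For the remote case, the query can be pointed at the target machine via an EventLogSession, roughly like this (uses the current credentials; the machine name is a placeholder):
// Sketch: run a structured query against a remote machine's Application log.
// Requires System.Diagnostics.Eventing.Reader (.NET 3.5+, Vista/Server 2008+ on both ends).
var session = new EventLogSession("RemoteMachineName"); // placeholder host name
var query = new EventLogQuery("Application", PathType.LogName, "*[System/Level=2]")
{
    Session = session
};
using (var reader = new EventLogReader(query))
{
    for (EventRecord record = reader.ReadEvent(); record != null; record = reader.ReadEvent())
    {
        Console.WriteLine("{0} {1}", record.TimeCreated, record.Id);
    }
}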
Event Log Reader is horribly slow... too slow. WTF Microsoft?
Use LogParser 2.2 - Search for C# and LogParser on the Internet (or you can use the log parser commands from the command line). I don't want to duplicate the work already contributed by others.
I pull the log from the remote system by having the log exported as an EVTX file. I then copy the file from the remote system. This process is really quick, even with a network that spans the planet (I had issues with having the log exported to a network resource). Once you have it locally, you can do your searches and processing.
There are multiple reasons for having the EVTX - I won't get into the reasons why we do this.
The following is a working example of the code to save a copy of the log as an EVTX:
(Notes: "device" is the network host name or IP. "LogName" is the name of the log desired: "System", "Security", or "Application". outputPathOnRemoteSystem is the path on the remote computer, such as "c:\temp\%hostname%.%LogName%.%YYYYMMDD_HH.MM%.evtx".)
static public bool DumpLog(string device, string LogName, string outputPathOnRemoteSystem, out string errMessage)
{
    bool wasExported = false;
    string errorMessage = "";
    try
    {
        System.Diagnostics.Eventing.Reader.EventLogSession els = new System.Diagnostics.Eventing.Reader.EventLogSession(device);
        els.ExportLogAndMessages(LogName, PathType.LogName, "*", outputPathOnRemoteSystem);
        wasExported = true;
    }
    catch (UnauthorizedAccessException e)
    {
        errorMessage = "Unauthorized - Access Denied: " + e.Message;
    }
    catch (EventLogNotFoundException e)
    {
        errorMessage = "Event Log Not Found: " + e.Message;
    }
    catch (EventLogException e)
    {
        errorMessage = "Export Failed: " + e.Message + ", Log: " + LogName + ", Device: " + device;
    }
    errMessage = errorMessage;
    return wasExported;
}
A good explanation/example can be found on MSDN.
EventLogSession session = new EventLogSession(Environment.MachineName);

// [System/Level=2] filters out the errors
// Where "Log" is the log you want to get data from.
EventLogQuery query = new EventLogQuery("Log", PathType.LogName, "*[System/Level=2]");
EventLogReader reader = new EventLogReader(query);

for (EventRecord eventInstance = reader.ReadEvent();
     null != eventInstance;
     eventInstance = reader.ReadEvent())
{
    // Output or save your event data here.
}
Where the old code took 5-20 minutes, this one does it in less than 10 seconds.
Maybe WMI can help you:
WMI with C#
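If you do try WMI, the Win32_NTLogEvent class can be queried on the remote machine; here is a rough sketch (this can still be slow for large logs, and DCOM permissions must allow remote access):
// Sketch: read Application log entries from a remote machine via WMI.
var scope = new ManagementScope(@"\\RemoteMachine\root\cimv2"); // placeholder host name
scope.Connect();
var query = new ObjectQuery("SELECT * FROM Win32_NTLogEvent WHERE Logfile = 'Application'");
using (var searcher = new ManagementObjectSearcher(scope, query))
{
    foreach (ManagementObject logEvent in searcher.Get())
    {
        Console.WriteLine("{0} {1}", logEvent["TimeGenerated"], logEvent["Message"]);
    }
}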
Have you tried using the remoting features in powershell 2.0? They allow you to execute cmdlets (like ones to read event logs) on remote machines and return the results (as objects, of course) to the calling session.
You could place a program on those machines that saves the log to a file and sends it to your web application. I think that would be a lot faster, as you can do the looping locally, but I'm not sure how to do it, so I can't give you any code :(
I recently did such a thing via a WCF callback interface. However, my clients interacted with the server through WCF, and adding a WCF callback was easy in my project; full code with examples is available here.
Just had the same issue and want to share my solution. It speeds up a search through the Application, System and Security event logs from 260 seconds (using EventLog) to about 100 times faster (using EventLogQuery).
And this in a way where it is possible to check whether the event message contains a pattern, or run any other check, without needing FormatDescription().
My trick is to use the same mechanism as PowerShell's Get-WinEvent does and then pass the result through the check.
Here is my code to find all events within the last 4 days where the event message contains a filter pattern.
string[] eventLogSources = {"Application", "System", "Security"};
var messagePattern = "*Your Message Search Pattern*";
var timeStamp = DateTime.Now.AddDays(-4);
var matchingEvents = new List<EventRecord>();

foreach (var eventLogSource in eventLogSources)
{
    var i = 0;
    var query = string.Format("*[System[TimeCreated[@SystemTime >= '{0}']]]",
                              timeStamp.ToUniversalTime().ToString("o"));
    var elq = new EventLogQuery(eventLogSource, PathType.LogName, query);
    var elr = new EventLogReader(elq);

    EventRecord entryEventRecord;
    while ((entryEventRecord = elr.ReadEvent()) != null)
    {
        if ((entryEventRecord.Properties)
            .FirstOrDefault(x => (x.Value.ToString()).Contains(messagePattern)) != null)
        {
            matchingEvents.Add(entryEventRecord);
            i++;
        }
    }
}
Maybe the remote computers could do a little bit of the computing, so that your server only deals with relevant information. It would be a kind of cluster, using the remote computers to do some light filtering while the server does the analysis part.
