Massive performance problems running applications in Hyper-V - C#

Update: See bottom: added CreateArticleRelations code example.
OK, this is a tricky one. I am experiencing a massive performance problem in Hyper-V when it comes to the preview and production environments.
First of all, here is the setup.
.NET 4.0 on all servers.
Preview:
Webserver: virtual machine, 8 GB RAM, 4 CPUs, Windows Server 2008 R2 (64-bit)
Database: virtual machine, 6 GB RAM, 2 CPUs, Windows Server 2008 R2 (64-bit)
Production:
Webserver: virtual machine, 8 GB RAM, 4 CPUs, Windows Server 2008 R2 (64-bit)
Database: physical machine, 48 GB RAM, 16 CPUs, Windows Server 2008 R2 (64-bit)
This is a B2B shop running on these servers, and when running the integration for one product the results are mind-blowing to me. I will provide pictures.
Running the integration for one product in preview takes 83 seconds for everything to update.
Running it for one product in production takes 301 seconds (!) for everything to update. Same product!
If I run it locally it takes about 40 seconds to complete.
I have run dotTrace remote profiling on both servers to see what is actually taking time. I use Enterprise Library for logging, Umbraco as CMS and uCommerce as the commerce platform. Look at the picture example below of one finding I have seen.
First of all, CreateArticleRelations takes 140 seconds on the production server but only 46 on preview. Same product, same data. Then to the really funky stuff: on the production run, Enterprise Library logging is at the top taking 64 seconds, but on the preview run it is so far down the list that it takes practically no time at all.
The implementation of the logging looks like this.
private string LogMessage(string message, int priority, string category, TraceEventType severity, int code, int skipFrames = 2)
{
    // Check if "code" exists in eCommerceB2B.LogExceptions.
    // Note: this fetches the exclusion list from the database on every call.
    var dbf = new ThirdPartSources();
    var exceptions = dbf.GetLogExeptions();

    foreach (var exception in exceptions)
    {
        if (code.ToString() == exception)
            return DateTime.Now.ToString();
    }

    try
    {
        var stack = new StackTrace(skipFrames);
        if (_logWriter.IsLoggingEnabled())
        {
            var logEntry = new LogEntry
            {
                Title =
                    stack.GetFrame(0).GetMethod().ReflectedType.FullName + " " +
                    stack.GetFrame(0).GetMethod(),
                Message = message,
                Priority = priority,
                Severity = severity,
                TimeStamp = DateTime.Now,
                EventId = code
            };
            logEntry.Categories.Add(category);
            _logWriter.Write(logEntry);
            return logEntry.TimeStampString;
        }
    }
    catch
    {
        // The original snippet was truncated here; this catch and the
        // closing braces are reconstructed so the method compiles.
    }

    return DateTime.Now.ToString();
}
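One thing that stands out in this method: GetLogExeptions() goes to the database on every single call, even before the logging-enabled check. A minimal sketch of caching that list once per process (assuming the exclusion list rarely changes; ThirdPartSources and GetLogExeptions are the names from the snippet above):

// Cached exclusion list, loaded once instead of once per log call.
// Lazy<T> makes the one-time initialization thread-safe.
private static readonly Lazy<HashSet<string>> _logExceptionCodes =
    new Lazy<HashSet<string>>(() =>
        new HashSet<string>(new ThirdPartSources().GetLogExeptions()));

private static bool IsExcluded(int code)
{
    // O(1) in-memory lookup instead of a database round-trip plus a linear scan.
    return _logExceptionCodes.Value.Contains(code.ToString());
}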
Logging is set up to a rolling flat file and a database. I have tried disabling the logging, and that saves about 20 seconds, but LogMessage is still at the top.
This has blown my mind for days and I can't seem to find a solution. Sure, I could remove logging completely, but I want to find the cause of the problem.
What bothers me is that, for example, running one method (CreateArticleRelations) takes almost four times as long on the production server. The CPU level never goes above 30% and 5 GB of RAM is available. The application is run as a console application.
Someone please save me! :) I can provide more data if needed. My bet is that it has something to do with the virtual server, but I have no idea what to check.
Update:
Per a comment, I tried commenting out LogMessage completely. It saves about 100 seconds in total, which tells me that something is terribly wrong. It still takes 169 seconds to create the relations vs 46 seconds in preview, and in preview logging is still enabled. What can be wrong with Enterprise Library to make it behave this way? And still, why is the code running 4x slower on the production server? See the image after I removed LogMessage. It is from PRODUCTION.
CreateArticleRelations
private void CreateArticleRelations(Product uCommerceProduct, IStatelessSession session)
{
    var catalogues = _jsbArticleRepository.GetCustomerSpecificCatalogue(uCommerceProduct.Sku).CustomerRelations;
    var defaultCampaignName = _configurationManager.GetValue(ConfigurationKeys.UCommerceDefaultCampaignName);
    var optionalArticleCampaignName = _configurationManager.GetValue(ConfigurationKeys.UCommerceDefaultOptionalArticleCampaignName);

    var categoryRelations =
        session.Query<CategoryProductRelation>()
            .Fetch(x => x.Category)
            .Fetch(x => x.Product)
            .Where(
                x =>
                    x.Category.Definition.Name == _customerArticleCategory && x.Product.Sku == uCommerceProduct.Sku)
            .ToList();

    var relationsAlreadyAdded = _categoryRepository.RemoveCataloguesNotInRelation(catalogues, categoryRelations,
        session);

    _categoryRepository.SetArticleCategories(session, relationsAlreadyAdded, uCommerceProduct, catalogues,
        _customerCategories, _customerArticleCategory,
        _customerArticleDefinition);

    // Set campaigns and optional articles
    foreach (var jsbArticleCustomerRelation in catalogues)
    {
        // Article is in a campaign for just this user
        if (jsbArticleCustomerRelation.HasCampaign)
        {
            _campaignRepository.CreateCampaignAndAddProduct(session, jsbArticleCustomerRelation, defaultCampaignName);
        }
        else // remove the article from the campaign for this user if it exists
        {
            _campaignRepository.DeleteProductFromCampaign(session, jsbArticleCustomerRelation, defaultCampaignName);
        }

        // Optional article
        if (jsbArticleCustomerRelation.IsOptionalArticle)
        {
            _campaignRepository.CreateCampaignAndAddProduct(session, jsbArticleCustomerRelation, optionalArticleCampaignName);
        }
        else
        {
            _campaignRepository.DeleteProductFromCampaign(session, jsbArticleCustomerRelation, optionalArticleCampaignName);
        }
    }
}
We hit the database on almost every row here in some way. For example, the following code in DeleteProductFromCampaign takes 43 seconds in the preview environment and 169 seconds in the production environment.
public void DeleteProductFromCampaign(IStatelessSession session, JSBArticleCustomerRelation jsbArticleCustomerRelation, string campaignName)
{
    var productTarget =
        session.Query<ProductTarget>()
            .FirstOrDefault(
                x =>
                    x.CampaignItem.Campaign.Name == jsbArticleCustomerRelation.CustomerNumber &&
                    x.CampaignItem.Name == campaignName &&
                    x.Sku == jsbArticleCustomerRelation.ArticleNumber);

    if (productTarget != null)
    {
        session.Delete(productTarget);
    }
}
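Given that pattern, one option would be to batch the per-relation lookups into a single query per product and campaign. A rough sketch, not drop-in code (it assumes NHibernate's LINQ provider with Fetch/ThenFetch available on the stateless session, as the query in CreateArticleRelations suggests):

// Hypothetical batched variant: one query for all targets of the given
// articles in a campaign, then the per-customer check happens in memory.
public void DeleteProductsFromCampaign(IStatelessSession session,
    IList<JSBArticleCustomerRelation> relations, string campaignName)
{
    var articleNumbers = relations.Select(r => r.ArticleNumber).ToList();

    var targets = session.Query<ProductTarget>()
        .Where(x => x.CampaignItem.Name == campaignName &&
                    articleNumbers.Contains(x.Sku))
        .Fetch(x => x.CampaignItem).ThenFetch(ci => ci.Campaign) // eager-load for the check below
        .ToList();

    foreach (var target in targets)
    {
        // Same customer/article match the original ran one query at a time.
        bool matches = relations.Any(r =>
            r.CustomerNumber == target.CampaignItem.Campaign.Name &&
            r.ArticleNumber == target.Sku);

        if (matches)
            session.Delete(target);
    }
}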
So this code, for example, runs 4x slower on the production server. The biggest difference between the servers is that the production (physical) database server is set up with numerous database instances and I am using one of them (20 GB of RAM).
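Since the code itself is identical in both environments, the per-round-trip latency between web server and database is a prime suspect: an N+1 query pattern multiplies any extra milliseconds by thousands. A quick probe to run from each web server and compare (the connection string is a placeholder):

using System;
using System.Data.SqlClient;
using System.Diagnostics;

static class DbLatencyProbe
{
    static void Main()
    {
        const int iterations = 1000;
        using (var conn = new SqlConnection(
            @"Server=PRODDB\Instance1;Database=master;Integrated Security=true"))
        {
            conn.Open();
            using (var cmd = new SqlCommand("SELECT 1", conn))
            {
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                    cmd.ExecuteScalar(); // trivial round-trip, measures latency only
                sw.Stop();
                Console.WriteLine("Avg round-trip: {0:F2} ms",
                    sw.Elapsed.TotalMilliseconds / iterations);
            }
        }
    }
}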

Related

Shutting down VM returns all VM states as unknown

When I use the methods below to shut down and query the role instances, shutting down one VM causes all other role instances to be returned with a status of RoleStateUnknown. After about a couple of minutes I can query again and get the actual status. How can I get the actual status in real time using the Azure Management APIs? Or is this an issue with how the VMs are configured? They are configured with the same storage location and the same virtual network.
The code shown was based off the template for Deploy and Manage Virtual Machines in Visual Studio 2015.
The call to shut down the VM:
var shutdownParams = new VirtualMachineShutdownParameters();

if (deallocate) // deallocate is true in this instance
    shutdownParams.PostShutdownAction = PostShutdownAction.StoppedDeallocated; // Fully deallocate resources and stop billing
else
    shutdownParams.PostShutdownAction = PostShutdownAction.Stopped; // Just put the machine in stopped state, keeping resources allocated

await _computeManagementClient.VirtualMachines.ShutdownAsync(_parameters.CloudServiceName, _parameters.CloudServiceName, vmName, shutdownParams);
The call to query for all role instances
XXX_VirtualMachine is a class that holds the name and instance status:
internal List<XXX_VirtualMachine> GetAllVirtualMachines()
{
    List<XXX_VirtualMachine> vmList = new List<XXX_VirtualMachine>();
    try
    {
        DeploymentGetResponse deployment;
        deployment = _computeManagementClient.Deployments.GetByName(_parameters.CloudServiceName, _parameters.CloudServiceName);
        for (int i = 0; i < deployment.RoleInstances.Count; i++)
        {
            vmList.Add(new XXX_VirtualMachine(deployment.RoleInstances[i].InstanceName, deployment.RoleInstances[i]));
        }
    }
    catch (Exception e)
    {
        System.Windows.Forms.MessageBox.Show(e.Message);
    }
    return vmList;
}
So I finally got around to giving this a kick! (Apologies for the delay; people kept expecting that work stuff - inconsiderate fools!)
Firstly, this isn't really an answer, just an exploration of the problem - you probably know all of this already, but maybe someone reading it will see something I've missed.
I've created three VMs in a single Cloud Service and, lo and behold, it did exactly what you predicted when you shut one down.
Firstly, both portals appear to give reliable answers, even when the .NET request is reporting RoleStateUnknown.
Looking at the XML that comes out of the request to
https://management.core.windows.net/{subscriptionid}/services/hostedservices/vm01-u3rzv2q6/deploymentslots/Production
we get
<RoleInstance>
  <RoleName>vm01</RoleName>
  <InstanceName>vm01</InstanceName>
  <InstanceStatus>RoleStateUnknown</InstanceStatus>
  <InstanceSize>Basic_A1</InstanceSize>
  <InstanceStateDetails />
  <PowerState>Started</PowerState>
</RoleInstance>
I then fired up PowerShell to see if it was doing the same, which it was (not unexpected, since it calls the same REST endpoint), with Get-AzureVm returning
ServiceName Name Status
----------- ---- ------
vm01-u3rzv2q6 vm01 CreatingVM
vm01-u3rzv2q6 vm02 RoleStateUnknown
vm01-u3rzv2q6 vm03 RoleStateUnknown
At the appropriate times which, again, is as observed.
Wondering what the timing was, I then ran this
while ($true) { (get-azurevm -ServiceName vm01-u3rzv2q6 -Name vm01).InstanceStatus ; get-azurevm ; (date).DateTime }
ReadyRole
vm01-u3rzv2q6 vm01 ReadyRole
vm01-u3rzv2q6 vm02 ReadyRole
vm01-u3rzv2q6 vm03 ReadyRole
07 March 2016 04:31:01
07 March 2016 04:31:36
StoppedDeallocated
vm01-u3rzv2q6 vm01 Stoppe...
vm01-u3rzv2q6 vm02 RoleSt...
vm01-u3rzv2q6 vm03 RoleSt...
07 March 2016 04:31:49
07 March 2016 04:33:44
StoppedDeallocated
vm01-u3rzv2q6 vm01 Stoppe...
vm01-u3rzv2q6 vm02 ReadyRole
vm01-u3rzv2q6 vm03 ReadyRole
07 March 2016 04:33:52
So it seems that the machine shuts down, then a process must begin to update the cloud service, which takes away its ability to report status for what seems to be exactly two minutes.
Somewhere in the API there must be a place where this is reported properly, because the portals don't have this problem.
I spent a while down a blind alley looking for an 'InstanceView' for the VM, but it seems that doesn't exist for classic deployments.
My next thought is to put together a simple REST client that takes a management certificate and see if the URI can be hacked around a bit to give anything more interesting. (It's got to be there somewhere!)
What may be useful is that the PowerState isn't affected by this problem. So you could potentially have a secondary check on that while you have the RoleStateUnknown error. It's far from perfect, but depending on what you're looking to do it might work.
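A rough sketch of that fallback (the property names and types on RoleInstance are as I remember the classic management model; treat them as assumptions):

// Hedged sketch: fall back to PowerState while InstanceStatus is stuck on
// RoleStateUnknown during the ~2 minute window observed above.
// Assumes RoleInstance.InstanceStatus and RoleInstance.PowerState from the
// classic Microsoft.WindowsAzure.Management.Compute models.
static string EffectiveStatus(RoleInstance instance)
{
    if (instance.InstanceStatus == "RoleStateUnknown")
    {
        // PowerState keeps reporting Started/Stopped through the window,
        // so use it as a coarse stand-in until the real status returns.
        return "Unknown (PowerState: " + instance.PowerState + ")";
    }
    return instance.InstanceStatus;
}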
Failing that, I'd say it is clearly a bug in Azure, and a support call could definitely be raised for it.

C#: ImageList in foreach loop causes application to crash on terminal server

I'm part of a (small) development team who took over the development and support of a C# .NET program. It has ~300,000 LOC, and the software design makes it impossible to change anything big without causing millions of side effects. It's like trying to turn a jungle full of poisonous snakes into a nice little garden. Using only a small pair of scissors.
The application is a big WinForms application with database access.
About the problem: a customer received our software and cannot run it. Unlike other customers, they have multiple Windows Server 2008 R1 terminal servers and installed the software on a network drive. Their users connect to one of the terminal servers and run our application (and others, like Office etc.) from the network drive. Our application, however, crashes after ~5 seconds without any notice. Our loading screen appears and closes again. The application produces a log file which shows this exception:
2014-08-04 11:15:23 [3372] ERROR – An exception occurred:
'OutOfMemoryException': Not enough memory.
System.Drawing
at System.Drawing.Graphics.CheckErrorStatus(Int32 status)
at System.Drawing.Graphics.DrawImage(Image image, Rectangle destRect, Int32 srcX, Int32 srcY, Int32 srcWidth, Int32 srcHeight, GraphicsUnit srcUnit, ImageAttributes imageAttrs, DrawImageAbort callback, IntPtr callbackData)
at System.Drawing.Bitmap.MakeTransparent(Color transparentColor)
at System.Windows.Forms.ImageList.CreateBitmap(Original original, Boolean& ownsBitmap)
at System.Windows.Forms.ImageList.CreateHandle()
at System.Windows.Forms.ImageList.get_Handle()
at System.Windows.Forms.ImageList.GetBitmap(Int32 index)
at System.Windows.Forms.ImageList.ImageCollection.GetEnumerator()
at <our application>.InitializeHtmlResources()
The method that calls ImageCollection.GetEnumerator() is this:
void InitializeHtmlResources()
{
    string baseDirPath = ... // It's in AppData/Local/<ourFolder>/Icons
    int index = -1;
    foreach (Image image in UIResources.Icons.Images)
    {
        index += 1;
        image.Save(baseDirPath + Path.DirectorySeparatorChar + index.ToString(), ImageFormat.Png);
    }
}
Those images are stored inside a Resource.Icons.dll. The icons have their own project in the solution, which consists of a few helper classes (like UIResources) and contains a folder with every icon we use, plus an XML file where they are all listed, named and indexed. UIResources is a static class that allows access to the icons and the image list. This is how the property "Images" is initialized:
...
// Code snippet of Images Initialization
var ilist = new ImageList();
ilist.ColorDepth = ColorDepth.Depth32Bit;
ilist.ImageSize = new Size(16, 16);
ilist.TransparentColor = Color.Fuchsia;
UIResources.Icons.Images = ilist;
...
This method is used to extract the icons from the DLL file.
static IEnumerable<IconInfo> GetIcons()
{
    XDocument doc;
    using (var stream = GetResourceStream("Our.Namespace.Resources.Icons.Icons.xml"))
    {
        doc = XDocument.Load(stream);
    }

    // ReSharper disable once PossibleNullReferenceException
    foreach (var elem in doc.Element("Icons").Elements("Icon"))
    {
        int index = (int)elem.Attribute("Index");
        var bmp = ReadIcon("Icons", (string)elem.Attribute("FileName"));
        string name = (string)elem.Attribute("Name");
        yield return new IconInfo(bmp, index, name);
    }
}

static Bitmap ReadIcon(string kind, string fileName)
{
    using (var stream = GetResourceStream("Our.Namespace.Resources." + kind + "." + fileName))
    {
        return new Bitmap(stream);
    }
}

static Stream GetResourceStream(string resourceName)
{
    return typeof(IconProvider).Assembly.GetManifestResourceStream(resourceName);
}
IconInfo is only a record containing the values.
And finally, the ImageList is filled with values:
foreach (var icon in GetIcons())
UIResources.Icons.Images.Add(icon.Name, icon.Image);
The application works fine. But when that customer runs the software on their terminal server via Remote Desktop, the application crashes and throws an OutOfMemoryException in InitializeHtmlResources() right at the foreach loop (when accessing the enumerator of the ImageList).
The confusing part: an OutOfMemoryException is thrown, although the memory is neither full nor is the 2 GB limit for 32-bit applications reached. The app peaks at 120 MB during loading.
I have absolutely no idea how this error is caused and have spent the last 2-3 days trying to find a solution. I haven't found one.
I appreciate every bit of advice you can give me.
EDIT:
I tried disabling the InitializeHtmlResources method. This enabled the application to start. However: after working with the application for a few seconds, an OutOfMemoryException appeared anyway. The cause is another ImageList accessor.
It works fine with Server 2012. We created a VM with Windows Server 2008 and the error happens there too.
We found the issue! Instead of
UIResources.Icons.Images.Add(icon.Name, icon.Image);
to fill the ImageList, we now use
UIResources.Icons.Images.Add(icon.Name, new Bitmap(icon.Image));
Now our application works on Windows Server 2008 :-)
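A likely explanation, based on documented GDI+ behavior rather than on this specific app: Bitmap(Stream) requires the stream to stay open for the lifetime of the bitmap, but ReadIcon disposes it immediately, and GDI+ reports many unrelated failures as OutOfMemoryException. Copying via new Bitmap(...) detaches the image from the disposed stream. The same fix could live inside ReadIcon:

// Sketch: copy the stream-backed bitmap into a self-contained one, so nothing
// references the resource stream after it is disposed.
static Bitmap ReadIcon(string kind, string fileName)
{
    using (var stream = GetResourceStream("Our.Namespace.Resources." + kind + "." + fileName))
    using (var streamBacked = new Bitmap(stream))
    {
        return new Bitmap(streamBacked); // deep copy, independent of the stream
    }
}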
Is it possible it's this?
Memory limitations of 32-bit apps on 64-bit Terminal Server 2008 Standard

Why do I get Ice::MemoryLimitException, even with Ice.MessageSizeMax=2000000

Hi, I have written a C# client/server application using the ZeroC Ice communication library (v3.4.2).
I am transferring a sequence of objects from the server, which are then displayed in the client in a tabular format. Simple enough.
I defined the following Slice types:
enum DrawType { All, Instant, Raffle };

struct TicketSoldSummary {
    int scheduleId;
    DrawType dType;
    string drawName;
    long startDate;
    long endDate;
    string winningNumbers;
    int numTicket;
    string status;
};

sequence<TicketSoldSummary> TicketSoldSummaryList;

interface IReportManager {
    [..]
    TicketSoldSummaryList getTicketSoldSummary(long startTime, long endTime);
};
When I call this method it usually works fine, but occasionally (approx. 25% of the time) the caller gets an Ice::MemoryLimitException. We are usually running 2-3 clients at a time.
I searched the Internet for answers and was told to increase Ice.MessageSizeMax, which I did. I have increased MessageSizeMax right up to 2,000,000 KB, but it made no difference. I just did a test with 31,000 records (approximately 1.8 MB of data) and still get Ice::MemoryLimitException. 1.8 MB is not very big!
Am I doing something wrong or is there a bug in Zeroc Ice?
Thanks so much to anyone that can offer some help.
I believe MessageSizeMax needs to be configured on the client as well as the server side. Also have tracing enabled with the max value (3) and check the size of the messages on the wire.
Turn on Ice.Warn.Connections on the server side and look at the logs. Also make sure the client max message size gets applied correctly. I set Ice.MessageSizeMax on the client as below:
Ice.Properties properties = Ice.Util.createProperties();
properties.setProperty("Ice.MessageSizeMax", "2097152"); // 2 GB, expressed in KB
Ice.InitializationData initData = new Ice.InitializationData();
initData.properties = properties;
Ice.Communicator communicator = Ice.Util.initialize(initData);
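For completeness, a sketch of the matching server-side initialization (same Ice for .NET API as the client snippet above; per the advice here, the limit should be raised on both peers):

// Hypothetical server-side bootstrap mirroring the client settings above.
Ice.Properties serverProps = Ice.Util.createProperties();
serverProps.setProperty("Ice.MessageSizeMax", "2097152"); // in KB, i.e. 2 GB
serverProps.setProperty("Ice.Warn.Connections", "1");     // log connection warnings
Ice.InitializationData serverInit = new Ice.InitializationData();
serverInit.properties = serverProps;
Ice.Communicator serverCommunicator = Ice.Util.initialize(serverInit);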

Use C# to interact with Windows Update

Is there any API for writing a C# program that can interface with Windows Update and use it to selectively install certain updates?
I'm thinking somewhere along the lines of storing a list of approved updates in a central repository. The client-side applications (which would have to be installed once) would interface with Windows Update to determine what updates are available, then install the ones that are on the approved list. That way the updates are still applied automatically from a client-side perspective, but I can select which updates are applied.
This is not my role in the company, by the way; I was really just wondering if there is an API for Windows Update and how to use it.
Add a Reference to WUApiLib to your C# project.
using WUApiLib;

protected override void OnLoad(EventArgs e)
{
    base.OnLoad(e);
    UpdateSession uSession = new UpdateSession();
    IUpdateSearcher uSearcher = uSession.CreateUpdateSearcher();
    uSearcher.Online = false;
    try
    {
        ISearchResult sResult = uSearcher.Search("IsInstalled=1 And IsHidden=0");
        textBox1.Text = "Found " + sResult.Updates.Count + " updates" + Environment.NewLine;
        foreach (IUpdate update in sResult.Updates)
        {
            textBox1.AppendText(update.Title + Environment.NewLine);
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine("Something went wrong: " + ex.Message);
    }
}
Given you have a form with a TextBox this will give you a list of the currently installed updates. See http://msdn.microsoft.com/en-us/library/aa387102(VS.85).aspx for more documentation.
This will, however, not allow you to find KB hotfixes which are not distributed via Windows Update.
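To go from listing updates to actually installing an approved subset, the same session exposes a downloader and an installer. A rough sketch along those lines (the KB numbers are placeholders, and it needs to run elevated):

using System;
using System.Collections.Generic;
using WUApiLib;

// Sketch: search online for pending updates, keep those whose KB numbers are
// on an approved list, then download and install them.
UpdateSession session = new UpdateSession();
IUpdateSearcher searcher = session.CreateUpdateSearcher();
ISearchResult result = searcher.Search("IsInstalled=0 And IsHidden=0");

var approvedKbs = new HashSet<string> { "2267602", "890830" }; // placeholder KB IDs

UpdateCollection approved = new UpdateCollection();
foreach (IUpdate update in result.Updates)
{
    foreach (string kb in update.KBArticleIDs)
    {
        if (approvedKbs.Contains(kb))
        {
            approved.Add(update);
            break;
        }
    }
}

if (approved.Count > 0)
{
    UpdateDownloader downloader = session.CreateUpdateDownloader();
    downloader.Updates = approved;
    downloader.Download();

    IUpdateInstaller installer = session.CreateUpdateInstaller();
    installer.Updates = approved;
    IInstallationResult installResult = installer.Install();
    Console.WriteLine("Install result: " + installResult.ResultCode);
}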
The easiest way to do what you want is using WSUS. It's free and basically lets you set up your own local Windows Update server where you decide which updates are "approved" for your computers. Neither the WSUS server nor the clients need to be in a domain, though it makes it easier to configure the clients if they are. If you have different sets of machines that need different sets of updates approved, that's also supported.
Not only does this accomplish your stated goal, it saves your overall network bandwidth as well by only downloading the updates once from the WSUS server.
If in your context you're allowed to use Windows Server Update Services (WSUS), it will give you access to the Microsoft.UpdateServices.Administration namespace.
From there, you should be able to do some nice things :)
P-L is right. I tried the Christoph Grimmer-Die method first, and in some cases it was not working. I guess it was due to different versions of .NET or OS architectures (32 or 64 bits).
Then, to be sure that my program always gets the Windows Update waiting list of each computer in my domain, I did the following:
Install a server with WSUS (may save some internet bandwidth): http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=5216
Add all your workstations & servers to your WSUS server
Get the SimpleImpersonation lib to run this program with different admin rights (optional)
Install only the administration console component on your dev workstation and run the following program:
It will print to the console all Windows updates with UpdateInstallationStates.Downloaded
using System;
using Microsoft.UpdateServices.Administration;
using SimpleImpersonation;

namespace MAJSRS_CalendarChecker
{
    class WSUS
    {
        public WSUS()
        {
            // I use impersonation to use another logon than mine. Remove the following "using" if not needed.
            using (Impersonation.LogonUser("mydomain.local", "admin_account_wsus", "Password", LogonType.Batch))
            {
                ComputerTargetScope scope = new ComputerTargetScope();
                IUpdateServer server = AdminProxy.GetUpdateServer("wsus_server.mydomain.local", false, 80);
                ComputerTargetCollection targets = server.GetComputerTargets(scope);

                // Search
                targets = server.SearchComputerTargets("any_server_name_or_ip");

                // To get only one server, use the FindTarget method
                IComputerTarget target = FindTarget(targets, "any_server_name_or_ip");
                Console.WriteLine(target.FullDomainName);

                IUpdateSummary summary = target.GetUpdateInstallationSummary();
                UpdateScope _updateScope = new UpdateScope();
                // See UpdateInstallationStates for all the other property criteria
                _updateScope.IncludedInstallationStates = UpdateInstallationStates.Downloaded;

                UpdateInstallationInfoCollection updatesInfo = target.GetUpdateInstallationInfoPerUpdate(_updateScope);
                int updateCount = updatesInfo.Count;

                foreach (IUpdateInstallationInfo updateInfo in updatesInfo)
                {
                    Console.WriteLine(updateInfo.GetUpdate().Title);
                }
            }
        }

        public IComputerTarget FindTarget(ComputerTargetCollection coll, string computername)
        {
            foreach (IComputerTarget target in coll)
            {
                if (target.FullDomainName.Contains(computername.ToLower()))
                    return target;
            }
            return null;
        }
    }
}

What is the Fastest way to read event log on remote machine?

I am working on an application which reads the event logs (Application) of remote machines. I am making use of the EventLog class in .NET and then iterating over the log entries, but this is very slow. In some cases, machines have 40,000+ log entries and it takes hours to iterate through them.
What is the best way to accomplish this task? Are there any other classes in .NET which are faster, or any other technology?
Man, I feel your pain. We had the exact same issue in our app.
Your solution has a branch depending on what server version you're running on and what server version your "target" machine is running on.
If you're both on Vista or Windows Server 2008, you're in luck. You should look at System.Diagnostics.Eventing.Reader.EventLogQuery and System.Diagnostics.Eventing.Reader.EventLogReader. These are new in .NET 3.5.
Basically, you can build a query in XML and ship it over to run on the remote computer. Maybe you're just searching for events of a specific type, or maybe just new events from a specific point in time. The search runs on the remote machine, and then you just get back the matching events. The new classes are much faster than the old .NET 2.0 way, but again, they are only supported on Vista or Windows Server 2008.
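A minimal sketch of that remote query path, under the assumption that your account has Event Log read rights on the target ("remote-machine" is a placeholder):

using System;
using System.Diagnostics.Eventing.Reader;

class RemoteErrorDump
{
    static void Main()
    {
        // There is also an EventLogSession overload that accepts explicit
        // credentials if the current account has no rights on the target.
        var session = new EventLogSession("remote-machine");

        // The XPath filter runs on the remote side: only Level=2 (Error)
        // events come back over the network.
        var query = new EventLogQuery("Application", PathType.LogName,
                                      "*[System[(Level=2)]]")
        {
            Session = session
        };

        using (var reader = new EventLogReader(query))
        {
            for (EventRecord record = reader.ReadEvent();
                 record != null;
                 record = reader.ReadEvent())
            {
                Console.WriteLine("{0}  {1}", record.TimeCreated, record.Id);
            }
        }
    }
}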
For our app, when the target is NOT on Vista/Win2008, we downloaded the raw .evt file from the remote system and then parsed the file using its binary format. There are several sources of data about the event log format for .evt files (pre-Vista), including link text and an article I recall on codeproject.com that had some C# code.
Vista and Windows Server 2008 machines use a new .evtx format, so you can't use the same binary parsing approach across all versions. But the new EventLogQuery and EventLogReader classes are so fast that you won't have to. It's now perfectly speedy to just use the built-in classes.
Event Log Reader is horribly slow... too slow. WTF Microsoft?
Use LogParser 2.2 - Search for C# and LogParser on the Internet (or you can use the log parser commands from the command line). I don't want to duplicate the work already contributed by others.
I pull the log from the remote system by having the log exported as an EVTX file. I then copy the file from the remote system. This process is really quick - even with a network that spans the planet (I had issues with having the log exported to a network resource). Once you have it locally, you can do your searches and processing.
There are multiple reasons for having the EVTX - I won't get into the reasons why we do this.
The following is a working example of the code to save a copy of the log as an EVTX:
(Notes: "device" is the network host name or IP. "LogName" is the name of the log desired: "System", "Security", or "Application". outputPathOnRemoteSystem is the path on the remote computer, such as "c:\temp\%hostname%.%LogName%.%YYYYMMDD_HH.MM%.evtx".)
static public bool DumpLog(string device, string LogName, string outputPathOnRemoteSystem, out string errMessage)
{
    bool wasExported = false;
    string errorMessage = "";
    try
    {
        System.Diagnostics.Eventing.Reader.EventLogSession els = new System.Diagnostics.Eventing.Reader.EventLogSession(device);
        els.ExportLogAndMessages(LogName, PathType.LogName, "*", outputPathOnRemoteSystem);
        wasExported = true;
    }
    catch (UnauthorizedAccessException e)
    {
        errorMessage = "Unauthorized - Access Denied: " + e.Message;
    }
    catch (EventLogNotFoundException e)
    {
        errorMessage = "Event Log Not Found: " + e.Message;
    }
    catch (EventLogException e)
    {
        errorMessage = "Export Failed: " + e.Message + ", Log: " + LogName + ", Device: " + device;
    }
    errMessage = errorMessage;
    return wasExported;
}
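For reference, a call might look like this (host and path are placeholders):

// Hypothetical usage: export the Application log from a remote host.
string error;
bool exported = DumpLog("192.168.1.50", "Application",
                        @"c:\temp\server01.Application.evtx", out error);
if (!exported)
    Console.WriteLine(error);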
A good Explanation/Example can be found on MSDN.
EventLogSession session = new EventLogSession(Environment.MachineName);

// [System/Level=2] keeps only the errors.
// "Log" is the name of the log you want to get data from.
EventLogQuery query = new EventLogQuery("Log", PathType.LogName, "*[System/Level=2]");
EventLogReader reader = new EventLogReader(query);

for (EventRecord eventInstance = reader.ReadEvent();
     null != eventInstance;
     eventInstance = reader.ReadEvent())
{
    // Output or save your event data here.
}
Where we waited 5-20 minutes with the old code, this one does it in less than 10 seconds.
Maybe WMI can help you:
WMI with C#
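A minimal sketch of that route, assuming a reference to System.Management and that the remote machine allows DCOM/WMI access; the WQL filter runs remotely, so only matching rows cross the network:

using System;
using System.Management;

class WmiEventLogDump
{
    static void Main()
    {
        // "RemoteMachine" is a placeholder host name.
        var scope = new ManagementScope(@"\\RemoteMachine\root\cimv2");
        scope.Connect();

        var wql = new ObjectQuery(
            "SELECT * FROM Win32_NTLogEvent " +
            "WHERE Logfile = 'Application' AND Type = 'Error'");

        using (var searcher = new ManagementObjectSearcher(scope, wql))
        {
            foreach (ManagementObject evt in searcher.Get())
                Console.WriteLine("{0}: {1}", evt["TimeGenerated"], evt["Message"]);
        }
    }
}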
Have you tried using the remoting features in PowerShell 2.0? They allow you to execute cmdlets (like the ones to read event logs) on remote machines and return the results (as objects, of course) to the calling session.
You could place a program on those machines that saves the log to a file and sends it to your web application. I think that would be a lot faster, as you can do the looping locally, but I'm not sure how to do it, so I can't give you any code :(
I recently did such a thing via a WCF callback interface. However, my clients interacted with the server through WCF anyway, so adding a WCF callback was easy in my project; full code with examples is available here
Just had the same issue and want to share my solution. It takes a search through the Application, System and Security event logs from 260 seconds (using EventLog) to about 100 times faster (using EventLogQuery).
And this in a way where it is possible to check whether the event message contains a pattern, or run any other check, without the requirement of FormatDescription().
My trick is to use the same mechanism as PowerShell's Get-WinEvent does and then pass it through the result check.
Here is my code to find all events within the last 4 days where the event message contains a filter pattern.
string[] eventLogSources = { "Application", "System", "Security" };
var messagePattern = "*Your Message Search Pattern*";
var timeStamp = DateTime.Now.AddDays(-4);
var matchingEvents = new List<EventRecord>();

foreach (var eventLogSource in eventLogSources)
{
    var i = 0;
    var query = string.Format("*[System[TimeCreated[@SystemTime >= '{0}']]]",
                              timeStamp.ToUniversalTime().ToString("o"));
    var elq = new EventLogQuery(eventLogSource, PathType.LogName, query);
    var elr = new EventLogReader(elq);
    EventRecord entryEventRecord;
    while ((entryEventRecord = elr.ReadEvent()) != null)
    {
        if ((entryEventRecord.Properties)
            .FirstOrDefault(x => (x.Value.ToString()).Contains(messagePattern)) != null)
        {
            matchingEvents.Add(entryEventRecord);
            i++;
        }
    }
}
Maybe the remote computers could do a little bit of the computing. That way your server would only deal with relevant information. It would be a kind of cluster, using the remote computers to do some light filtering and the server to do the analysis part.
