I recently had to log the time it takes to write a file. I did this on Windows Server 2012 on an Azure VM. When I looked at the logs, most of the times were 0 ms and only a very few were 15.something ms. I thought I had done something wrong with the measurement and tested with this piece of code:
using System;
using System.IO;

class Program
{
    static void Main(string[] args)
    {
        string dir = System.Environment.GetFolderPath(System.Environment.SpecialFolder.ApplicationData) + "\\TEST";
        if (!Directory.Exists(dir))
        {
            Directory.CreateDirectory(dir);
        }

        for (int i = 0; i < 10; i++)
        {
            // time a single small write and append the elapsed milliseconds to a log file
            DateTime start = DateTime.Now;
            File.WriteAllText(dir + @"\test.txt", "test");
            DateTime end = DateTime.Now;
            TimeSpan duration = end - start;
            double ms = duration.TotalMilliseconds;
            File.AppendAllText(dir + @"\log.txt", Convert.ToString(ms) + System.Environment.NewLine);
        }
    }
}
Output on my PC (OS locale set to German, so the comma is a decimal mark, not a thousands separator, making the first number 5 ms):
5,0037 1,001 0,9976 1,001 1,001 2,002 0,9956 2,002 1,0003 0,9996
Output on the remote machine:
0 0 0 0 0 0 0 0 0 0
And sometimes (OS locale set to English, so the point is the decimal mark, making the relevant number 15.5 ms):
0 0 0 0 0 0 15.4914 0 0 0
What's the reason for that? Is this some write cache, either on Windows Server 2012 or on the Azure VM?
Additional info:
My PC has a single SSD; the remote machine is configured with the OS and data on separate drives (not sure what type).
What drive are you testing this on? OS drive/temp drive/attached drive... that matters. As you mentioned, the cache setting matters as well; best to start with an attached drive and no cache.
The allocation unit size the drive is formatted with matters. For example, if you chunk up the files in 64 KB blocks and do 64 KB reads/writes, that is faster.
Azure drives on storage accounts have a 20-minute warm-up phase where they can be "slow" when uninitialized or when not used for 20 minutes.
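Also worth ruling out timer resolution: DateTime.Now typically advances only about every 15.6 ms on Windows, which would line up with seeing mostly 0 and the occasional 15.something. A minimal sketch that times the same write with Stopwatch instead (same directory setup as the question's code; the loop count is arbitrary):

using System;
using System.Diagnostics;
using System.IO;

class StopwatchTiming
{
    static void Main()
    {
        string dir = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData), "TEST");
        Directory.CreateDirectory(dir); // no-op if the directory already exists

        for (int i = 0; i < 10; i++)
        {
            // Stopwatch uses the high-resolution performance counter,
            // so it is not limited to the ~15.6 ms system timer tick.
            var sw = Stopwatch.StartNew();
            File.WriteAllText(Path.Combine(dir, "test.txt"), "test");
            sw.Stop();

            File.AppendAllText(Path.Combine(dir, "log.txt"),
                sw.Elapsed.TotalMilliseconds + Environment.NewLine);
        }
    }
}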
Related
I am using the methods below to shut down and query the role instances. When I shut down a VM, all the other role instances are returned with a status of RoleStateUnknown. After about a couple of minutes I can query again and get the actual status. How can I get the actual status in real time using the Azure Management APIs? Or is this an issue with how the VMs are configured? They are configured with the same storage location and the same virtual network.
The code shown was based on the template for Deploy and Manage Virtual Machines in Visual Studio 2015.
The call to shutdown the VM:
var shutdownParams = new VirtualMachineShutdownParameters();
if (deallocate)//deallocate is true in this instance
shutdownParams.PostShutdownAction = PostShutdownAction.StoppedDeallocated; // Fully deallocate resources and stop billing
else
shutdownParams.PostShutdownAction = PostShutdownAction.Stopped; // Just put the machine in stopped state, keeping resources allocated
await _computeManagementClient.VirtualMachines.ShutdownAsync(_parameters.CloudServiceName, _parameters.CloudServiceName, vmName, shutdownParams);
The call to query for all role instances
XXX_VirtualMachine is a class that holds the name and instance status:
internal List<XXX_VirtualMachine> GetAllVirtualMachines()
{
List<XXX_VirtualMachine> vmList = new List<XXX_VirtualMachine>();
try
{
DeploymentGetResponse deployment;
deployment = _computeManagementClient.Deployments.GetByName(_parameters.CloudServiceName, _parameters.CloudServiceName);
for (int i = 0; i < deployment.RoleInstances.Count; i++)
{
vmList.Add(new XXX_VirtualMachine(deployment.RoleInstances[i].InstanceName, deployment.RoleInstances[i]));
}
}
catch (Exception e)
{
System.Windows.Forms.MessageBox.Show(e.Message);
}
return vmList;
}
So I finally got around to giving this a kick! (apologies for the delay, people kept expecting that work stuff - inconsiderate fools!)
Firstly, this isn't really an answer, just an exploration of the problem, and you probably know all of this already, but maybe someone reading it will see something I've missed.
I've created three VMs in a single Cloud Service, and lo and behold, it did exactly what you predicted when you shut one down.
Firstly, both portals appear to be giving reliable answers, even when the .NET request is reporting RoleStateUnknown.
Looking at the XML that comes out of the request to
https://management.core.windows.net/{subscriptionid}/services/hostedservices/vm01-u3rzv2q6/deploymentslots/Production
we get
<RoleInstance>
  <RoleName>vm01</RoleName>
  <InstanceName>vm01</InstanceName>
  <InstanceStatus>RoleStateUnknown</InstanceStatus>
  <InstanceSize>Basic_A1</InstanceSize>
  <InstanceStateDetails />
  <PowerState>Started</PowerState>
</RoleInstance>
I then fired up PowerShell to see if that was doing the same, which it was (not unexpected, since it calls the same REST endpoint), with Get-AzureVM returning
ServiceName Name Status
----------- ---- ------
vm01-u3rzv2q6 vm01 CreatingVM
vm01-u3rzv2q6 vm02 RoleStateUnknown
vm01-u3rzv2q6 vm03 RoleStateUnknown
These came back at the appropriate times, which again matches what you described.
Wondering what the timing was, I then ran this
while ($true) { (get-azurevm -ServiceName vm01-u3rzv2q6 -Name vm01).InstanceStatus ; get-azurevm ; (date).DateTime }
ReadyRole
vm01-u3rzv2q6 vm01 ReadyRole
vm01-u3rzv2q6 vm02 ReadyRole
vm01-u3rzv2q6 vm03 ReadyRole
07 March 2016 04:31:01
07 March 2016 04:31:36
StoppedDeallocated
vm01-u3rzv2q6 vm01 Stoppe...
vm01-u3rzv2q6 vm02 RoleSt...
vm01-u3rzv2q6 vm03 RoleSt...
07 March 2016 04:31:49
07 March 2016 04:33:44
StoppedDeallocated
vm01-u3rzv2q6 vm01 Stoppe...
vm01-u3rzv2q6 vm02 ReadyRole
vm01-u3rzv2q6 vm03 ReadyRole
07 March 2016 04:33:52
So it seems that the machine shuts down, then a process must begin to update the cloud service, which takes away its ability to report its status for what seems to be exactly two minutes.
Somewhere in the API there must be a place where it is reported properly, because the portals don't have this problem.
I spent a while down a blind alley looking for an 'InstanceView' for the VM, but it seems that doesn't exist for classic deployments.
My next thought is to put together a simple REST client that takes a management certificate and see if the URI can be hacked around a bit to give anything more interesting (it's got to be there somewhere!).
What may be useful is that the PowerState isn't affected by this problem, so you could potentially have a secondary check on that while you get the RoleStateUnknown status. It's far from perfect, but depending on what you're looking to do it might work.
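A rough sketch of that fallback, reusing the RoleInstance shape from the question's code (the string values are taken from the output above; treating Started plus RoleStateUnknown as "probably still ready" is my assumption, not a documented contract):

// Fall back to PowerState when the instance status is the transient RoleStateUnknown.
internal static string GetEffectiveStatus(RoleInstance instance)
{
    if (instance.InstanceStatus == "RoleStateUnknown" &&
        instance.PowerState.ToString() == "Started") // PowerState rendered as text, as in the XML above
    {
        return "ReadyRole (assumed from PowerState)";
    }

    return instance.InstanceStatus;
}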
Failing that, I'd say it is clearly a bug in Azure, and a support call could definitely be raised for it.
I need to parse an IIS log file. Is there any alternative to LogParser, e.g. a simple class to query a log file?
I only need to know how many requests I received between two dates.
Here is an example of an IIS log file:
#Software: Microsoft Internet Information Services 7.5
#Version: 1.0
#Date: 2014-08-26 12:20:57
#Fields: date time s-sitename s-computername s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs-version cs(User-Agent) cs(Cookie) cs(Referer) cs-host sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken
2014-08-26 12:20:57 W3SVC1 QXXXSXXXX 172.25.161.53 POST /XXXX/XXX/XXXX/XXXXX/1.0/XXXX/XXXXXXXX/xxxxxx.svc - 443 - 999.99.999.999 HTTP/1.1 - - - xxxx.xxxx.xxx.xxx.xxxx.xxxx.xxx.com 200 0 0 4302 5562 1560
You can use Tx (LINQ to Logs and Traces), which you can install via NuGet, and use it like this:
var iisLog = W3CEnumerable.FromFile(pathToLog);
int nbOfLogsForLastHour = iisLog.Where(x => x.dateTime > DateTime.Now.AddHours(-1)).Count();
If the log file is used by another process, you can use W3CEnumerable.FromStream
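To get the original "requests between two dates" count with the same API, something like this should work (a sketch only; W3CEnumerable and the dateTime field are the ones used above, while the namespace, log path and date range are assumptions):

using System;
using System.Linq;
using Tx.Windows; // assumed namespace for W3CEnumerable

class CountRequests
{
    static void Main()
    {
        var fromDate = new DateTime(2014, 8, 26, 0, 0, 0);
        var toDate = new DateTime(2014, 8, 27, 0, 0, 0);

        // Count the entries whose timestamp falls inside [fromDate, toDate)
        int count = W3CEnumerable
            .FromFile(@"C:\inetpub\logs\LogFiles\W3SVC1\u_ex140826.log") // hypothetical path
            .Count(x => x.dateTime >= fromDate && x.dateTime < toDate);

        Console.WriteLine("{0} requests between {1} and {2}", count, fromDate, toDate);
    }
}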
It's 2017 and LogParser is still closed source. Moreover, all the instrumentation provided by cloud solutions appears to be making the need to parse IIS logs a thing of the past. But since I am also dealing with legacy apps, I wrote this simple parser using .NET Core.
using System;
using System.IO;
using W3CParser.Extensions;
using W3CParser.Instrumentation;
using W3CParser.Parser;

namespace W3CParser
{
    class Program
    {
        static void Main(string[] args)
        {
            var reader = new W3CReader(File.OpenText(args.Length > 0 ? args[0] : "Data/foobar.log"));

            using (new ConsoleAutoStopWatch())
            {
                foreach (var @event in reader.Read())
                {
                    Console.WriteLine("{0} ({1}):{2}/{3} {4} (bytes sent)",
                                      @event.Status.ToString().Red().Bold(),
                                      @event.ToLocalTime(),
                                      @event.UriStem.Green(),
                                      @event.UriQuery,
                                      @event.BytesSent);
                }
            }
        }
    }
}
Source code: https://github.com/alexnolasco/32120528
You can use IISLogParser and install it via NuGet; it has support for large files (> 1 GB).
List<IISLogEvent> logs = new List<IISLogEvent>();
using (ParserEngine parser = new ParserEngine([filepath]))
{
while (parser.MissingRecords)
{
logs = parser.ParseLog().ToList();
}
}
If you're dealing with large volumes and/or dispersed locations of IIS log files, then SpectX is a handy tool for this, because you don't have to ingest the logs and can run queries directly on multiple raw files. Average processing speed per core: 350 MB/sec.
It's not open source but the full-functionality 30-day trial is free.
Tutorials:
Parsing IIS logs.
Analyzing IIS logs - 20 sample queries.
To filter a time period, sort the logs by date and filter the period you need, e.g:
| sort(date_time desc)
| filter(date_time > T('2019-11-01 08:48:20.000 +0200'))
| filter(date_time < T('2019-11-05 11:48:20.000 +0200'));
I use the filter feature of the CMTrace.exe tool (refer to the screenshot).
Update: see the bottom; added a CreateArticleRelations code example.
OK, this is a tricky one. I am experiencing a massive performance problem in Hyper-V when it comes to the preview and production environments.
First of all, here is the setup.
.NET 4.0 on all servers.
Preview:
Webserver: virtual machine, 8 GB RAM, 4 CPUs, Windows Server 2008 R2 (64-bit)
Database: virtual server, 6 GB RAM, 2 CPUs, Windows Server 2008 R2 (64-bit)
Production:
Webserver: virtual machine, 8 GB RAM, 4 CPUs, Windows Server 2008 R2 (64-bit)
Database: physical machine, 48 GB RAM, 16 CPUs, Windows Server 2008 R2 (64-bit)
This is a B2B shop running on these, and when running the integration for one product the results are mind-blowing to me. I will provide pictures.
Running for one product in preview takes 83 seconds for everything to update.
Running for one product in production takes 301 seconds (!) for everything to update. Same product!
If I run this locally it takes about 40 seconds to complete.
I have run dotTrace profiling remotely on both servers to see what is actually taking time. I use Enterprise Library for logging, Umbraco for the CMS and uCommerce as the commerce platform. Look at the picture example below of one finding.
First of all, CreateArticleRelations takes 140 seconds on the production server but only 46 on the preview server. Same product, same data. Then to the really funky stuff: on the production run, at the top, we see the Enterprise Library logging taking 64 seconds, but on the preview run it is so far down the list that it takes practically no time at all.
The implementation of the logging looks like this.
private string LogMessage(string message, int priority, string category, TraceEventType severity, int code, int skipFrames = 2)
{
    //Check if "code" exists in eCommerceB2B.LogExceptions
    var dbf = new ThirdPartSources();
    var exeptions = dbf.GetLogExeptions();

    foreach (var exeption in exeptions)
    {
        if (code.ToString() == exeption)
            return DateTime.Now.ToString();
    }

    try
    {
        var stack = new StackTrace(skipFrames);
        if (_logWriter.IsLoggingEnabled())
        {
            var logEntry = new LogEntry
            {
                Title =
                    stack.GetFrame(0).GetMethod().ReflectedType.FullName + " " +
                    stack.GetFrame(0).GetMethod(),
                Message = message,
                Priority = priority,
                Severity = severity,
                TimeStamp = DateTime.Now,
                EventId = code
            };
            logEntry.Categories.Add(category);
            _logWriter.Write(logEntry);
            return logEntry.TimeStampString;
        }
    }
    catch
    {
        // the catch block and the final return were cut off in the original post;
        // closed here only so the snippet is well-formed
    }
    return DateTime.Now.ToString();
}
Logging is set up to go to a rolling flat file and a database. I have tried to disable the logging and that saves about 20 seconds, but LogMessage is still at the top.
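One thing the snippet itself shows (an observation on the code above, not something from the profiler): the ThirdPartSources lookup and the StackTrace are built on every call, before the IsLoggingEnabled check, so they run even when logging is disabled. A minimal sketch of caching that exclusion list once, keeping the names from the code above and assuming GetLogExeptions() returns a sequence of strings as the foreach implies:

// Cached exclusion list so GetLogExeptions() hits the database once
// instead of on every LogMessage call; Lazy<T> makes the first load thread-safe.
private static readonly Lazy<HashSet<string>> _logExclusions =
    new Lazy<HashSet<string>>(() => new HashSet<string>(new ThirdPartSources().GetLogExeptions()));

private static bool IsExcluded(int code)
{
    return _logExclusions.Value.Contains(code.ToString());
}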
This has blown my mind for days and I can't seem to find a solution. Sure, I can remove logging completely, but I want to find the cause of the problem.
What bothers me is that, for example, running one method (CreateArticleRelations) takes almost 4 times as long on the production server. The CPU level is never over 30% and 5 GB of RAM is available. The application is run as a console application.
Someone please save me! :) I can provide more data if needed. My bet is that it has something to do with the virtual server, but I have no idea what to check.
Update:
Per the comment, I tried to comment out LogMessage completely. It saves about 100 seconds in total, which tells me that something is terribly wrong. It is still taking 169 seconds to create the relations vs 46 seconds in preview, and in preview logging is still enabled. What can be wrong with Enterprise Library to make it behave this way? And still, why is the code running 4x slower on the production server? See the image after I removed LogMessage; it is from PRODUCTION.
CreateArticleRelations
private void CreateArticleRelations(Product uCommerceProduct, IStatelessSession session)
{
var catalogues = _jsbArticleRepository.GetCustomerSpecificCatalogue(uCommerceProduct.Sku).CustomerRelations;
var defaultCampaignName = _configurationManager.GetValue(ConfigurationKeys.UCommerceDefaultCampaignName);
var optionalArticleCampaignName = _configurationManager.GetValue(ConfigurationKeys.UCommerceDefaultOptionalArticleCampaignName);
var categoryRelations =
session.Query<CategoryProductRelation>()
.Fetch(x => x.Category)
.Fetch(x => x.Product)
.Where(
x =>
x.Category.Definition.Name == _customerArticleCategory && x.Product.Sku == uCommerceProduct.Sku)
.ToList();
var relationsAlreadyAdded = _categoryRepository.RemoveCataloguesNotInRelation(catalogues, categoryRelations,
session);
_categoryRepository.SetArticleCategories(session, relationsAlreadyAdded, uCommerceProduct, catalogues,
_customerCategories, _customerArticleCategory,
_customerArticleDefinition);
//set campaigns and optional article
foreach (var jsbArticleCustomerRelation in catalogues)
{
// Article is in campaign for just this user
if (jsbArticleCustomerRelation.HasCampaign)
{
_campaignRepository.CreateCampaignAndAddProduct(session, jsbArticleCustomerRelation, defaultCampaignName);
}
else // remove the article from campaign for user if exists
{
_campaignRepository.DeleteProductFromCampaign(session, jsbArticleCustomerRelation, defaultCampaignName);
}
// optional article
if(jsbArticleCustomerRelation.IsOptionalArticle)
{
_campaignRepository.CreateCampaignAndAddProduct(session, jsbArticleCustomerRelation, optionalArticleCampaignName);
}
else
{
_campaignRepository.DeleteProductFromCampaign(session, jsbArticleCustomerRelation, optionalArticleCampaignName);
}
}
}
We hit the database on almost every row here in some way. For example, in DeleteProductFromCampaign the following code takes 43 seconds in the preview environment and 169 seconds in the production environment.
public void DeleteProductFromCampaign(IStatelessSession session, JSBArticleCustomerRelation jsbArticleCustomerRelation, string campaignName)
{
var productTarget =
session.Query<ProductTarget>()
.FirstOrDefault(
x =>
x.CampaignItem.Campaign.Name == jsbArticleCustomerRelation.CustomerNumber &&
x.CampaignItem.Name == campaignName &&
x.Sku == jsbArticleCustomerRelation.ArticleNumber);
if (productTarget != null)
{
session.Delete(productTarget);
}
}
So this code, for example, runs 4x slower on the production server. The biggest difference between the servers is that the production (physical) server is set up with numerous instances and I am using one of them (20 GB of RAM).
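For what it's worth, here is a sketch of the kind of change that would cut down the per-row round trips described above. It reuses the IStatelessSession query pattern from the code, but the batched shape (one query per campaign instead of one per article) is an illustration of mine, not something from the original post:

// Fetch every ProductTarget for this customer/campaign in one query,
// then delete from the in-memory list instead of querying once per article.
public void DeleteProductsFromCampaign(IStatelessSession session,
                                       string customerNumber,
                                       string campaignName,
                                       IList<string> articleNumbers)
{
    var targets = session.Query<ProductTarget>()
                         .Where(x => x.CampaignItem.Campaign.Name == customerNumber &&
                                     x.CampaignItem.Name == campaignName &&
                                     articleNumbers.Contains(x.Sku))
                         .ToList();

    foreach (var target in targets)
    {
        session.Delete(target);
    }
}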
I am saving some of the rendered HTML of a web site by overriding the Render method and using HtmlAgilityPack. Here is the code:
protected override void Render(HtmlTextWriter writer)
{
using (HtmlTextWriter htmlwriter = new HtmlTextWriter(new StringWriter()))
{
base.Render(htmlwriter);
string output = htmlwriter.InnerWriter.ToString();
var doc = new HtmlDocument();
doc.LoadHtml(output);
doc.Save(currDir + "\\" + reportDir + "\\dashboardTable.html");
}
}
However, some process does not let go of the saved file and I am unable to delete it from the server. Does anyone know of an HtmlAgilityPack issue that would cause this?
Any advice is appreciated.
Regards.
EDIT:
I have tried both of the methods suggested. I can't tell if they are the solution yet because my app is frozen on the server due to the files I can't delete. However, when I use these solutions on my own machine, the rendered HTML does not save as an HTML table anymore but rather like this:
INCIDENT MANAGEMENT
Jul '12 F'12
Trend F'12 2011
(avg)
Severe Incidents (Sev1/2): 3 2.1 4.16
Severe Avoidable Incidents (Sev1/2): 1 1.3 1.91
Incidents (Sev3): 669 482 460.92
Incidents (Sev4) - No business Impact: 1012 808 793
Proactive Tickets Opened: 15 19.3 14
Proactive Tickets Resolved/Closed: 14 17.3 11
CHANGE MANAGEMENT
Total Planned Changes: 531 560 583.58
Change Success Rate (%): 99.5 99.4 99
Non-Remedial Urgent Changes: 6 11 47.08
PROBLEM MANAGEMENT
New PIRs: 2 1.4 2
Closed PIRs: 0 2 3
Overdue Action items: 2 3.2 0
COMPLIANCE MEASUREMENTS
Jul Trend Jun
Total Number of Perimeter Devices: 250 258
Perimeter Devices - Non Compliant: 36 31
Total Number of Internal Devices: 6676 6632
Internal Devices - Non Compliant: 173 160
Unauthorized Perimeter Changes: 0 0
Unauthorized Internal Changes 0 0
LEGEND
ISP LINKS
July June Trend
SOC CPO DRP SOC CPO DRP
40% 34% 74% 39% 35% 74%
BELL MPLS HEAD ENDS
July June Trend
SOC CPO SOC CPO
8% 5% 7% 10% 8% 5.5% 7% 10%
ENTERPRISE NETWORK (# of issues called out)
July June Trend
CORE FW/DMZ CORE FW/DMZ
1 0 1 0
US & INTL (# of issues called out)
July June Trend
US Intl US Intl
2 2 2 3
LINE OF BUSINESS BELL WAN MPLS
<> 50%-65% >65% <> 50%-65% >65% Trend
Retail: 2272 0 1 2269 4 0
Business Banking: 59 1 0 60 0 0
Wealth: 122 2 0 121 2 1
Corporate: 51 0 0 49 2 0
Remote ATM: 280 0 0 280 0 0
TOOLS
Version Currency Vulnerability Status Health Status
Key Messages:
where only the text data has been saved and all of the HTML and CSS is missing. If I just use doc.Save() I get an exact representation of the table as it displays on the website.
Try this instead. Maybe the Save method isn't closing the underlying stream.
using (FileStream stream = File.OpenWrite(currDir + "\\" + reportDir + "\\dashboardTable.html"))
{
    doc.Save(stream);
    stream.Close();
}
Edit
Per @L.B's comments, it appears that HtmlAgilityPack does use a using block as in my example, so it will ensure that the stream gets closed.
Thus, as I suggested at the end of my original answer, this must be a server environment problem.
Original Answer
This may be some sort of bug with HtmlAgilityPack - you may want to report it to the developers.
However to eliminate that possibility you may want to consider explicitly controlling the creation of the StreamWriter for the file so you are explicitly closing it yourself. Replace this line:
doc.Save(currDir + "\\" + reportDir + "\\dashboardTable.html");
With the following:
using (StreamWriter fileWriter = new StreamWriter(currDir + "\\" + reportDir + "\\dashboardTable.html"))
{
doc.Save(fileWriter);
fileWriter.Close();
}
If the issue still persists even with this change, then that would suggest an issue with your server environment rather than an issue with HtmlAgilityPack. By the way, to test whether this change makes a difference, you should start from a clean server environment rather than one where you are already having trouble deleting the file in question.
Hi, I have written a C# client/server application using the ZeroC Ice communication library (v3.4.2).
I am transferring a sequence of objects from the server, which are then displayed in the client in a tabular format. Simple enough.
I defined the following Slice types:
enum DrawType { All, Instant, Raffle };
struct TicketSoldSummary {
int scheduleId;
DrawType dType;
string drawName;
long startDate;
long endDate;
string winningNumbers;
int numTicket;
string status;
};
sequence<TicketSoldSummary> TicketSoldSummaryList;
interface IReportManager {
[..]
TicketSoldSummaryList getTicketSoldSummary(long startTime, long endTime);
};
When I call this method it usually works fine, but occasionally (approx. 25% of the time) the caller gets an Ice::MemoryLimitException. We are usually running 2-3 clients at a time.
I searched on the Internet for answers and was told to increase Ice.MessageSizeMax, which I did. I have increased MessageSizeMax right up to 2,000,000 KB, but it made no difference. I just did a test with 31,000 records (approximately 1.8 MB of data) and still get Ice.MemoryLimitException. 1.8 MB is not very big!
Am I doing something wrong or is there a bug in Zeroc Ice?
Thanks so much to anyone that can offer some help.
I believe MessageSizeMax needs to be configured on the client as well as on the server side. Also enable tracing with the maximum value (3) and check the size of the messages on the wire.
Turn on Ice.Warn.Connections on the server side and check the logs. Also make sure the client max message size gets applied correctly. I set Ice.MessageSizeMax on the client as below:
Ice.Properties properties = Ice.Util.createProperties();
properties.setProperty("Ice.MessageSizeMax", "2097152"); // 2 GB, expressed in KB
Ice.InitializationData initData = new Ice.InitializationData();
initData.properties = properties;
Ice.Communicator communicator = Ice.Util.initialize(initData);
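As a quick sanity check that the client value actually took effect (this assumes the standard Ice.Properties accessors; the property has to be set before the communicator is created, as in the snippet above):

// Read back the effective limit; Ice.MessageSizeMax is expressed in kilobytes.
int maxKb = communicator.getProperties().getPropertyAsInt("Ice.MessageSizeMax");
Console.WriteLine("Effective Ice.MessageSizeMax: {0} KB", maxKb);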