NewRelic ignore a single page from monitoring - c#

I have a site hosted on AppHarbor (free version) with the New Relic free add-on. I set up availability monitoring to run against my homepage.
Now, I'm getting a bunch of errors because my REST api page is returning errors. I want NewRelic to completely ignore this page.
How do I have NewRelic ignore this page?

It sounds like you want to investigate DisableBrowserMonitoring() in the New Relic .NET agent API.
If you only want to turn off the RUM feature for some applications (the app/website being monitored), you can use the DisableBrowserMonitoring() call in the New Relic .NET agent API mentioned above. This disables the automatic insertion of the browser monitoring script for specific pages. Currently this is only supported for web applications, though we have seen it work with static pages as well. Add this call to any pages you do not wish to instrument with page load timing (sometimes referred to as real user monitoring or RUM). More information, recommendations, and a usage example are here: http://docs.newrelic.com/docs/agents/net-agent/features/net-agent-api#disable_browser.
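For example, a minimal sketch of making that call from a page's code-behind (the page name here is hypothetical; it assumes the New Relic agent API assembly, NewRelic.Api.Agent, is referenced):

using System;
using System.Web.UI;

// Hypothetical REST/API status page we don't want instrumented with RUM.
public partial class ApiStatus : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Ask the New Relic .NET agent not to inject the browser monitoring
        // (RUM) JavaScript into this response.
        NewRelic.Api.Agent.NewRelic.DisableBrowserMonitoring();
    }
}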
Another solution is to use the browserMonitoring element, a child of the configuration element. browserMonitoring configures page load timing (sometimes referred to as real user monitoring or RUM) in your .NET application. Page load timing gives you insight into your end users' performance experience. This is accomplished by measuring the time it takes for your users' browsers to download and render your web pages, by injecting a small amount of JavaScript code into the header and footer of each page. More information: https://docs.newrelic.com/docs/agents/net-agent/installation-configuration/net-agent-configuration#browsermon-autoInstrument
<browserMonitoring autoInstrument="true">
  <attributes enabled="true">
    <exclude>myApiKey.*</exclude>
    <include>myApiKey.foo</include>
  </attributes>
</browserMonitoring>
The config file method lets you filter without having to change code. However, be careful with the exclude option: the value is a regular expression, and an overly complex pattern (which it shouldn't be) can affect performance. A plain, simple expression that just looks for a page is fast.
I think the API call might perform better, but that is totally subjective, and I wanted to give you both options.
Note that after any configuration change you will need to run iisreset as administrator and exercise your app for a while to see the changes reflected on your New Relic dashboard.

Related

Multiple AJAX calls - single aspx page or multiple aspx pages for better performance

I am currently rewriting a large website with the goal of replacing a large number of page/form submittals with AJAX calls. The goal is to reduce the number of server round trips and all the state handling on pages that are rich with client-side logic.
Having spent some time considering the best way forward with regard to performance, my question is now the following:
Will it lead to better performance to have a single aspx page that is used for all AJAX calls, or to have a separate aspx page for every use of AJAX on a given webpage?
Thank you very much for any insights
Lars Kjeldsen
Performance-wise, either approach can be made to work on a similar order of magnitude.
Maintenance-wise, I prefer to have separate pages for each logical part of your site. Again, either can work, but I've seen more people make a mess of things with "monolithic" approaches. With a single page you'll need a good amount of skill structuring your scripts and client-side logic; done well, there isn't a problem. However, I just see more people getting it right when they use separate pages for separate parts of the site.
If you take a look at the site http://battlelog.battlefield.com/ (you'll have to create an account), you'll notice a few things about it.
It never refreshes the page as you navigate the website. (Using JSON to transmit new data)
It updates the URL and keeps track of where you are.
You can use the updated URL and immediately navigate to that portion of the web-application. (In this case it returns the HTML page)
Here's a full write up on the website.
Personally, I like this approach from a technology/performance perspective, but I don't know what impact it will have on SEO, since this design relies on the HTML5 History state mechanism in JavaScript.
Here's an article on SEO and JavaScript, but you'll have to do more research.
NOTE: History.js provides graceful degradation for Browsers that do not support History state.

Recommended Approach to Creating a Facade Over a Web Service API

History: There is a web service I use that is based half on the latest MISMO DTD (for the property in question); the other half is reports that are to be run against that property. All this is bundled up into one big piece of XML and POSTed to the endpoint, and the response is, well, you guessed it... XML, and lots of it.
The reporting aspect of it is the problem. Say there are 100 different reports that can be run; you can ask for a single report or any combination of reports. Right now all these flags are attributes of one node, and you turn them on or off by setting the report attribute to Y(es) or N(o) (e.g. <someNode _fooReport="Y" _barReport="Y" .... />).
Question: I'm interested in hearing your input on how to design a cleaner web API on top of this, one that will simplify choosing the reports, for starters. I'm using C# 4.0.
Regards,
Stephen
To design a clean API you need to drop the notion of running multiple reports by specific names.
You should instead focus on tagging and grouping your reports by a keyword. If your end user needs to have reports about everything to do with some keyword then allow that user to pass in the keyword.
What you would end up with is a search API over your reports. This can range from something as simple as a list of keywords to something as large as Google, depending on the amount of choice you want to offer your clients.
From a URL API design point of view, I would consider having something like
url/reports/[id OR name]
and
url/reports/?search=[query]
This will result in a cleaner web API.
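As an illustration only (the class, method, and report names below are hypothetical, not part of the vendor's API), a C# 4.0 facade could accept report names or keywords and translate the selection into the legacy Y/N attribute flags internally:

using System.Collections.Generic;
using System.Xml.Linq;

// Hypothetical facade: callers pick reports by name (or via keyword search),
// and the facade builds the underlying Y/N attribute soup for the request XML.
public class ReportRequestBuilder
{
    private readonly HashSet<string> _selected = new HashSet<string>();

    public ReportRequestBuilder Add(params string[] reportNames)
    {
        foreach (var name in reportNames)
            _selected.Add(name);
        return this;
    }

    // Emit <someNode _fooReport="Y" _barReport="N" ... /> for the service call.
    public XElement ToRequestNode(IEnumerable<string> allKnownReports)
    {
        var node = new XElement("someNode");
        foreach (var report in allKnownReports)
            node.Add(new XAttribute("_" + report, _selected.Contains(report) ? "Y" : "N"));
        return node;
    }
}

A keyword search endpoint (url/reports/?search=...) would then just resolve the query to a set of report names and hand them to Add().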

Monitoring .NET ASP.NET Applications

I have a number of applications running on top of ASP.NET I want to monitor. The main things I care about are:
Exceptions: We currently have some custom code which emails us when an exception occurs. If the application is failing hard, it will crash our Outlook... I know (and use) ELMAH, which partly solves the problem; however, it is still just a big table of exceptions with a pretty(ish) UI. I want something that makes sense of all of these exceptions (e.g. groups exceptions, alerts when new ones occur, tells me which common ones I should fix, etc.)
Logging: We currently log to files which are then accessible via a shared folder which devs grep and tail. Does anyone know of better ways of presenting this information? In an ideal world I want to associate it with exceptions.
Performance: Request times, memory usage, CPU, etc.; whatever stats I can get
I'm guessing this is probably going to be solved by a number of tools, has anyone got any suggestions?
You should take a look at Gibraltar. I haven't used it myself, but it looks very good! It also works with NLog and log4net, so if you use those you are in luck!
Well, we have exactly the same current solution. Emails upon emails litter my inbox and mostly get ignored. Over the holidays an error caused everyone in dev to hit their inbox limit ;)
So where are we headed with a solution?
We already generate classes for all our exceptions; they rarely get used from more than one place. This was essentially step one, but we started the code base this way.
We modified the generator to create unique HRESULT values for all exceptions.
Again we added a generator to create a .MC message resource file for every exception.
Now every exception can write itself to the Windows Event Log, and thus we removed all emails etc, and rely on the event log.
Now with an event log full of information, including unique message codes and parameters for each exception, we can use off-the-shelf tools to aggregate, monitor, and alert.
The exception generator (before modifications above) is located here:
http://csharptest.net/browse/src/Tools/Generators
It integrates with Visual Studio by replacing the ResX generator with this:
http://csharptest.net/browse/src/Tools/CmdTool
I have not yet published the MC generator, or the HRESULT generation; however, it will be available in the above locations when I get the time.
-- UPDATE --
All the tools and source are now available online for this. So where do I go from here?
Download the source or binaries from: http://code.google.com/p/csharptest-net/
Take a look at the help for CmdTool.exe Visual Studio Integration
Then review the help on Generators for ResX and MC files, there are several ways to generate MC files or complete message DLLs from ResX files. Pick the approach that fits you best.
Run CmdTool.exe REGISTER to register the tool with Visual Studio
Create a ResX file as normal, then change the custom tool to CmdTool
You will need to add some entries to the resx file. At minimum, create the following:
".AutoLog" = true
".NextMessageId" = 1
".EventSource" = "HelloWorld"
"MyCustomException" = "Some exception text"
Other settings are demonstrated by the NUnit test: http://csharptest.net/browse/src/Tools/Generators/Test/TestResXAutoLog.cs#80
Now you should have an exception class being generated that writes log events. You will need to build the MC file as a pre/post build action with something like:
CSharpTest.Net.Generators.exe RESXTOMESSAGEDLL /output=MyCustomMessages.dll /input=TheProjectWithAResX.csproj
Lastly, you need to register it: run the framework's InstallUtil.exe MyCustomMessages.dll.
That should get you started until I get time to document it all.
One suggestion from Ryans Roberts I really like is Exceptioneer, which seems to solve my exception woes at least.
I would first go for log4net. The SmtpAppender can wait for N exceptions to accumulate before sending an email, which avoids crashing Outlook. And log4net also logs to log files that can be stored on network drives, read with cat and grep, etc.
As for stats, you can do health/performance logging with the same tools, i.e. spawn a thread that logs CPU usage, etc. every minute.
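A minimal sketch of that idea with log4net and the standard Windows performance counters (the logger name and sampling interval are arbitrary):

using System;
using System.Diagnostics;
using System.Threading;
using log4net;

// Background health logger: samples CPU and available memory once a minute.
public static class HealthLogger
{
    private static readonly ILog Log = LogManager.GetLogger("Health");

    public static void Start()
    {
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        var mem = new PerformanceCounter("Memory", "Available MBytes");

        var worker = new Thread(() =>
        {
            while (true)
            {
                Log.InfoFormat("CPU: {0:0.0}%  Free memory: {1} MB",
                               cpu.NextValue(), mem.NextValue());
                Thread.Sleep(TimeSpan.FromMinutes(1));
            }
        }) { IsBackground = true };
        worker.Start();
    }
}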
I don't have a concrete answer for the first part of question, since it implies automated log analysis and coalescence. At university, we made a tool that is designed to do part of these things but doesn't apply to your scenario (but it's two-way integrated with log4net).
In terms of handled exceptions or just typical logging, l4ndash is worth a look. I always set up our log4net to not only write out text files but also append to the database. That way l4ndash can analyse it easily. It'll group your errors and let you see where bad things are occurring a lot. You get one free dev license.
With ELMAH we just pull down the logs periodically. It can export as CSV, and then we use Excel to filter/group the data. It's not ideal, but it works. It would be nice to write a script to automate this a bit more. I've not seen much else out there for ELMAH.
You can get some metrics on request times (and anything else that's saved) by running LogParser over the IIS logs.
We have built a simple monitoring app that sits on the desktop and flashes up red when there is either an exception written to the event log from one of the apps on the server or it writes an error to the error log table on the database. It also monitors the database health, checking fragmentation and the like.
The only problem we have with this is that it can be a little intrusive on the desktop, as it keeps popping up with a red message box if there is a problem. However, it does encourage you to fix it ASAP.
We currently have this running on several of the developers' machines. The improvement we are thinking of making is to have one monitoring app running on a server that then publishes an RSS feed, so that the app is only checking once, in one place, but we can consume the information from anywhere using whichever method we choose at the time (such as through our phones when we aren't in the office).
You can have an RSS feed that selects from your Exceptions table (and other things).
Then you can subscribe to the RSS feed in MS Outlook or on any smart phone. I use an RSS feed reader called NewsRob because it alerts me when there is something new.
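One way to sketch the feed end of that (the table access and type names are hypothetical) is an HTTP handler that emits the latest exception rows with SyndicationFeed:

using System;
using System.Collections.Generic;
using System.ServiceModel.Syndication;
using System.Web;
using System.Xml;

// Sketch of an RSS endpoint over an exceptions table.
// GetLatestExceptions() stands in for whatever data access you already have.
public class ExceptionsFeedHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        var items = new List<SyndicationItem>();
        foreach (var ex in GetLatestExceptions())
            items.Add(new SyndicationItem(ex.Message, ex.Details, null, ex.Id.ToString(), ex.LoggedAt));

        var feed = new SyndicationFeed("Application exceptions", "Latest logged exceptions", null, items);

        context.Response.ContentType = "application/rss+xml";
        using (var writer = XmlWriter.Create(context.Response.Output))
            new Rss20FeedFormatter(feed).WriteTo(writer);
    }

    public bool IsReusable { get { return true; } }

    // Hypothetical: query your exception log table here.
    private IEnumerable<LoggedException> GetLatestExceptions() { yield break; }
}

public class LoggedException
{
    public int Id { get; set; }
    public string Message { get; set; }
    public string Details { get; set; }
    public DateTimeOffset LoggedAt { get; set; }
}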
I blog about how to do this HERE.
As a related step, I found a way to notify myself when something DIDN'T happen. That blog is HERE.

Options for storing Session state with SharePoint

I have written a user control for our SharePoint site that builds an HTML menu - this has been injected into the master page and as such ends up rendering on all pages that use it. There are some pretty computationally expensive calls made while generating this HTML and caching is the logical choice for keeping the page loads snappy. The HttpRuntime Cache has worked perfectly for this so far.
Now we are embarking on version 1.1 of this user control, and a new requirement has crept in to allow per-user customization of the menu. Not a problem, except that I can no longer blindly use the HttpRuntime Cache object - or at least, not use it without prefixing the key with the user id and making it user-specific in some way.
Ideally I would like to be able to use the ASP.NET Session collection to store the user-specific code. I certainly don't need it hanging around in the cache if the user isn't active, and further, this really is kind of session-specific data. I have looked at several options, including putting it in the ViewState or enabling Session management (by default it is disabled for a good reason), but I am really not all that happy with either of them.
So my question is: how should I go about caching output like this on a per-user basis? Right now, my best bet seems to be to include the user id in the Cache key and give it a sliding expiration.
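That can be as simple as something like this sketch (the key prefix and timeout are arbitrary):

using System;
using System.Web;
using System.Web.Caching;

// Per-user cache of the rendered menu HTML, with a sliding expiration so
// entries fall out once a user has been inactive for a while.
public static class MenuCache
{
    public static string GetMenuHtml(string userId, Func<string> buildMenu)
    {
        string key = "MenuHtml:" + userId;                    // user-specific key
        var cached = HttpRuntime.Cache[key] as string;
        if (cached != null)
            return cached;

        string html = buildMenu();                            // the expensive part
        HttpRuntime.Cache.Insert(key, html, null,
                                 Cache.NoAbsoluteExpiration,
                                 TimeSpan.FromMinutes(20));   // sliding expiration
        return html;
    }
}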
It's worth pointing out that I believe the 'end of days' link provided is relevant for SPS 2003, not MOSS 2007 - as far as I know, MOSS's integration with ASP.NET means that the mentioned issue is not a problem in MOSS.
I use ViewState on a fairly large MOSS 2007 deployment (1000+ users) for custom webparts and pages, and I haven't noticed a detrimental effect on the deployment performance at all.
Just use it, is my advice.
I don't understand why you wouldn't use SharePoint's built-in web part caching mechanism for this on a per-user basis (Personal) - it's exactly what it's designed for.
http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.webpartpages.webpart.partcacheread.aspx

ASP.NET - Common Gotchas [closed]

When I am working with ASP.NET, I find that there are always unexpected things I run into that take forever to debug. I figure that having a consolidated list of these would be great for those "weird error" circumstances, plus to expand our knowledge of oddness in the platform.
So: answer with one of your "Gotcha"s!
I'll start:
Under ASP.NET (VB), performing a Response.Redirect inside a try/catch block does not stop execution of the current Response, which can lead to two concurrent Responses executing against the same Session.
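A sketch of the usual workaround (shown in C#; the page and handler names are made up): redirect without ending the response, then complete the request explicitly, so no ThreadAbortException is thrown inside the try/catch.

using System;
using System.Web.UI;

public partial class CheckoutPage : Page
{
    protected void SubmitButton_Click(object sender, EventArgs e)
    {
        try
        {
            // ... work that might throw ...

            // false = do not call Response.End(), so no ThreadAbortException
            // is raised; the catch below only sees real failures.
            Response.Redirect("~/Default.aspx", false);
            Context.ApplicationInstance.CompleteRequest();
        }
        catch (Exception ex)
        {
            Trace.Warn("Checkout", "Submit failed", ex);
        }
    }
}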
Don't dynamically add controls after the page init event as it will screw up the viewstate tree.
Viewstate ... if you are using it ... can get out of control if you are not paying attention to it.
The whole life-cycle thing in general.
Not that I see anything wrong with it, it's just that you'd be amazed at the number of people who start working on large ASP.Net projects before understanding it, rather than vice versa. Hence, it becomes a gotcha.
Note that I said large projects: I think the best way to come to terms with the life cycle is to work on a few smaller projects yourself first, where it doesn't matter so much if you screw them up.
Life cycle of custom controls does not match up perfectly with page life cycle events of same name.
Page_Load is run before control handlers. So you can't make changes in an event handler and then use those changes in the page load. This becomes an issue when you have controls in a master page (such as a login control). You can get around the issue by redirecting, but it's definitely a gotcha.
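A small sketch of the usual workaround (control and page names are made up): read the state in PreRender, which runs after control event handlers, rather than in Page_Load, which runs before them.

using System;
using System.Web.UI;

public partial class Dashboard : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Runs BEFORE control event handlers on postback, so anything a
        // handler (e.g. a master-page login control) changes is not visible yet.
    }

    protected void Page_PreRender(object sender, EventArgs e)
    {
        // Runs AFTER control event handlers, so state they changed is safe to read.
        WelcomeLabel.Text = "Hello, " + Context.User.Identity.Name;
    }
}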
Having to jump through hoops to get the .ClientID property into javascript.
It'd be nice if the render phase of the lifecycle created a script that set up a var for each server control with the same name as the control that was automatically initialized to the clientID value. Or maybe have some way to easily trigger this action.
Hmm... I bet I could set up a method for this on my own via reflection.
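Roughly what that could look like (a sketch; it walks the control tree rather than using reflection, and the class name is arbitrary): register a startup script that declares a JavaScript var per server control, initialized to its ClientID.

using System.Text;
using System.Web.UI;

// Emits "var SomeControlID = 'ctl00_..._SomeControlID';" for every control on
// the page, so client script can find server controls without hard-coded IDs.
public static class ClientIdScript
{
    public static void Register(Page page)
    {
        var script = new StringBuilder();
        AppendIds(page, script);
        page.ClientScript.RegisterStartupScript(typeof(ClientIdScript),
                                                "clientIds", script.ToString(), true);
    }

    private static void AppendIds(Control parent, StringBuilder script)
    {
        foreach (Control child in parent.Controls)
        {
            if (!string.IsNullOrEmpty(child.ID))
                script.AppendFormat("var {0} = '{1}';\n", child.ID, child.ClientID);
            AppendIds(child, script);
        }
    }
}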
Don't edit your web.config with Notepad if it contains accented characters; it will save the file with the wrong encoding. It will look the same, though. Your application just will not run.
I just learned this today: the Bind() method, as used with GridViews and ListViews, doesn't exist. It's actually hiding some Reflector magic that turns it into an Eval() and some kind of variable assignment.
The upshot of this is that calls like:
<%# FormatNameHelper(Bind("Name")) %>
that look perfectly valid will fail. See this blog post for more details.
Debugging is a very cool feature of ASP.Net, but as soon as you change some code in the app_code folder, you trigger a re-build of the application, leading to all sessions being lost.
This can get very annoying while debugging a website, but you can easily prevent it using StateServer mode: it's just a service to start and a line to change in the web.config:
refer to msdn : http://msdn.microsoft.com/en-us/library/ms178586.aspx
InProc mode, which stores session state in memory on the Web server. This is the default.
StateServer mode, which stores session state in a separate process called the ASP.NET state service. This ensures that session state is preserved if the Web application is restarted and also makes session state available to multiple Web servers in a Web farm.
SQL Server ...
Custom ...
Off!
If you are running Classic ASP applications in the same virtual directory as your ASP.NET application, the first hit on the application must be on an ASP.NET page. This ensures that the AppPool is built with the right context configurations. If the first page to be hit is a Classic ASP page, the results may vary from application to application. In general the AppPool is configured to use the latest framework.
Making a repeater-like control, and not knowing about INamingContainer.
You have to worry about session timeouts for applications where the user might take a long time.
You also have to worry about upload timeouts for large applications.
Validators may not always scroll your page to the scene of the data entry error (so the user may never see it and will only wonder why the submit button won't work).
If the user enters HTML symbols such as <, > (for example, P > 3.14), or an inadvertent <br> from copy-pasting from another page, ASP.NET will reject the page and display an error.
null.ToString() produces a big fat error. Check carefully.
Session pool sharing across multiple applications is a disaster silently waiting to happen
Moving applications around on machines with different environments is a migraine that involves web.config and many potential hours of google
ASP.NET and MySQL are prone to caching problems if you use stored procedures
AJAX can make a mess, too:
There are situations where the client can bypass page validation (especially by pressing ENTER instead of pressing the submit button). You can fix it by calling if(! Page.IsValid) { return ; }
ASP buttons usually don't work correctly inside of UpdatePanels
The more content in your UpdatePanel, the more data is asynchronously transmitted, so the longer it takes to load
If your AJAX panel has a problem or error of some kind, it "locks up" and doesn't respond to events inside it anymore
Custom controls are only supported by the designer when building the control or when building the page that uses the control, but not both.
When using a gridview without a datasource control (i.e. binding a dataset straight to the control) you need to manually implement sorting and paging events as shown here:
http://ryanolshan.com/technology/gridview-without-datasourcecontrol-datasource/
Linq: If you are using LINQ to SQL and you call SubmitChanges() on the data context and it throws an exception (e.g. duplicate key or other constraint violation), the offending object values remain in memory while you are debugging, and they will be resubmitted every time you subsequently call SubmitChanges().
Now here's the real kicker: the bad values will remain in memory even if you push the "stop" button in your IDE and restart! I don't understand why anyone thought this was a good idea - but that little ASP.NET icon that pops up in your system tray stays running, and it appears to save your object cache. If you want to flush your memory space, you have to right-click that icon and forcibly shut it down! GOTCHA!
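At the code level, one hedged way to limit the damage (context and entity names below are hypothetical) is to treat each DataContext as a short-lived unit of work and discard it after a failed SubmitChanges() instead of reusing it:

using System;

// Hypothetical LINQ to SQL usage: one short-lived DataContext per unit of work.
// If SubmitChanges() fails, the context (and its tracked, offending values) is
// disposed rather than reused, so nothing stale is resubmitted later.
public static class CustomerWriter
{
    public static void Save(Customer customer)            // Customer: hypothetical entity
    {
        using (var db = new ShopDataContext())             // ShopDataContext: hypothetical
        {
            db.Customers.InsertOnSubmit(customer);
            try
            {
                db.SubmitChanges();
            }
            catch (Exception)
            {
                // Log/translate the failure here; do not keep calling
                // SubmitChanges() on this same context instance.
                throw;
            }
        }
    }
}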
You can't reference anything at all above the application's root folder.
All the code I have to maintain that still looks like it was written in vb6, showing complete ignorance of the newer styles.
I'm talking things like CreateObject(), excessive <% %> blocks, And/Or instead of AndAlso/OrElse, Len() instead of .Length(), s/o Hungarian prefix warts, Dim MyVariable with no type, functions with no return type... I could go on.
Being unaware of heaps of existing and extensible functionality in the framework. Things often redone are membership, roles, authorization, site maps. Then there are the controls and the associated tags which can be customized to alleviate issues with the client IDs among others. Also simple things like not knowing to properly use the .config file to auto import namespaces into templates, and being able to do that on a directory basis. Less known things like tag expressions can be valuable at times as well. Surely, as with all frameworks, there is a learning curve and always something left to be desired, however more often than not it is better to customize and extend an existing framework instead of rolling your own.
Not a pure ASP.NET thing, but ...
I was trying to use either a) nested SELECT or b) WITH clause and just could not get it to work, but people who were obviously more knowledgeable (including someone I work with) told me the syntax was fine. TURNS OUT ...
I was not able to use either of those with OLEDB.
OLEDB query to SQL Server fails
(Also, I was bit by the response.redirect() in the try ... catch 'feature' mentioned in the OP! Great thread!)
Databound controls inside an INamingContainer control must not be placed inside templated controls such as FormView. See this bug report for an example. Since INamingContainer controls create their own namespace for their contained controls, two-way databinding using Bind() will not work properly. When loading the values everything will look fine (because loading is done with Eval()); it is not until you try to post back the values that they will mysteriously fail to land in the database.
This SO question demonstrates the issue well: AJAX Tabcontainer inside formview not inserting values
(VB.NET) If you send an Object via a Property's Get accessor into a function with a ByRef keyword, it will actually attempt to update the object using the Set accessor for the Property.
Ex:
Sub UpdateName(ByRef aName As String)
Calling UpdateName(Employee.Name) will attempt to update the name by using the Set accessor on the Name property of Employee.

Categories

Resources