I have a site hosted on AppHarbor (free version) with the New Relic free add-on. I set up availability monitoring to hit my homepage.
Now I'm getting a bunch of errors because my REST API page is returning errors. I want New Relic to ignore this page completely.
How do I have New Relic ignore this page?
It sounds like you want to investigate DisableBrowserMonitoring() in the New Relic .NET agent API.
If you only want to turn off the RUM feature for some applications (the app/website being monitored), you can use the DisableBrowserMonitoring() call in the New Relic .NET agent API mentioned above. It disables the automatic insertion of the browser monitoring script for specific pages. Currently this is only supported for web applications, although we have seen it work with static pages as well. Add the call to any page you do not wish to instrument with page load timing (sometimes referred to as real user monitoring, or RUM). More information, recommendations, and an example of how to use it are here: http://docs.newrelic.com/docs/agents/net-agent/features/net-agent-api#disable_browser.
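For example, the call can go in the code-behind of the page you want the agent to skip. A minimal sketch, assuming the NewRelic.Api.Agent assembly is referenced (the page class name here is made up):

using System;
using System.Web.UI;

public partial class RestApi : Page   // hypothetical page for the REST endpoint
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Tells the New Relic agent not to inject the RUM/browser-monitoring
        // script into this response.
        NewRelic.Api.Agent.NewRelic.DisableBrowserMonitoring();
    }
}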
Another solution is to use the browserMonitoring child element of the configuration element. browserMonitoring configures page load timing (sometimes referred to as real user monitoring, or RUM) in your .NET application. Page load timing gives you insight into your end users' performance experience by measuring how long their browsers take to download and render your webpages; the agent does this by injecting a small amount of JavaScript into the header and footer of each page. More information: https://docs.newrelic.com/docs/agents/net-agent/installation-configuration/net-agent-configuration#browsermon-autoInstrument
<browserMonitoring autoInstrument="true">
  <attributes enabled="true">
    <exclude>myApiKey.*</exclude>
    <include>myApiKey.foo</include>
  </attributes>
</browserMonitoring>
The config file method lets you filter without having to change code. Be careful with the config option for excluding paths, though: you are putting a regular expression in there, and a complex one (which it shouldn't be) could affect performance. A plain, simple regex that just matches a page is fast.
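For path-based exclusion specifically, I believe newer agent versions also support a requestPathsExcluded block under browserMonitoring in newrelic.config, which takes one regex per path; a rough sketch (the api path is just an assumption about where your REST pages live):

<browserMonitoring autoInstrument="true">
  <requestPathsExcluded>
    <path regex="api/.*" />
  </requestPathsExcluded>
</browserMonitoring>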
I think that the API calls might perform better but that is totally subjective, and I wanted to give you both options.
Note that after any configuration change you will need to run iisreset as an administrator and exercise your app for a while to see the changes reflected on your New Relic dashboard.
I am currently rewriting a large website with the goal of replacing a large number of page/form submittals with AJAX calls. The aim is to reduce the number of server round trips and all the state handling on pages with rich client-side content.
Having spent some time considering the best way forward with regard to performance, my question is the following.
Will it lead to better performance to have a single aspx page that is used for all AJAX calls, or is it better to have an aspx page for every use of AJAX on a given webpage?
Thank you very much for any insights
Lars Kjeldsen
Performance-wise, either approach can be made to work on a similar order of magnitude.
Maintenance-wise, I prefer to have separate pages for each logical part of your site. Again, either can work, but I've seen more people make a mess of things with "monolithic" approaches. With a single page you need a good amount of skill structuring your scripts and client-side logic; done well it isn't a problem, but I simply see more people get it right when they use separate pages for separate parts of the site.
If you take a look at the site http://battlelog.battlefield.com/ (you'll have to create an account), you'll notice a few things about it.
It never refreshes the page as you navigate the website. (Using JSON to transmit new data)
It updates the URL and keeps track of where you are.
You can use the updated URL and immediately navigate to that portion of the web-application. (In this case it returns the HTML page)
Here's a full write up on the website.
Personally, I like this approach from a technology/performance perspective, but I don't know what impact it will have on SEO, since this design relies on the HTML5 history state mechanism in JavaScript.
Here's an article on SEO and JavaScript, but you'll have to do more research.
NOTE: History.js provides graceful degradation for Browsers that do not support History state.
The application in question is fairly extensive, with many different access roles (Customer Service, HR, Admins, etc.). Access is tiered, so each role inherits the access of the one below it: HR has read-only access, CS has edit abilities, and Admins have full control. Menu bar and button enabled/visible attributes are controlled by an outside library that handles all role-based access via reflection. The man who wrote this was an evil genius.
That being said, I'd like eventually to remove it. The knowledge base on how it works left with him years ago, and development on this application is starting to stagnate since the documentation on the security 'suite' is awful. Everything is stored within a database, down to label visibility for each label. It's a bit overboard and not refactor-friendly.
I've spent a solid amount of time looking into windows forms security. We're running our own user/roles for this app rather than Active Directory. I'd like to use User/Principal, since that looks like the best option. If there's another option, I'm open to advice, I'd like to see this done the right way since we're considering a full rewrite (unrelated to this).
All the searching I've done through MSDN and other websites has led me to believe that I can only control flow through methods and classes based on roles, not as granular as "enable this button" or "hide this menu bar."
Is there a better way than doing something along the lines of:
btnA.Visible = Thread.CurrentPrincipal.IsInRole("HR");
btnA.Enabled = Thread.CurrentPrincipal.IsInRole("CS") ||
Thread.CurrentPrincipal.IsInRole("ADMIN");
Is there a better way in general? What's the best way to handle this?
That's pretty close to the way that we do it, both in WinForms and in our ASP.NET applications. The one difference is that we store the role names in a database so that they are easier to maintain and upgrade than hardcoded constants.
While it lacks the sexiness of some sort of automatic binding (which it seems that you are looking for), it's solid and has not been troublesome to deal with. However, our application does not have a tremendous variation between users. For the most part, if a user can get access to part of the application they can perform most of the actions.
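As a concrete illustration of that approach, a small helper can centralize the role checks so each form only declares which roles may see or use a control. This is just a sketch with invented names, not an established library:

using System.Threading;
using System.Windows.Forms;

public static class RoleUi
{
    // True if the current principal is in any of the given roles.
    public static bool IsInAny(params string[] roles)
    {
        foreach (string role in roles)
        {
            if (Thread.CurrentPrincipal.IsInRole(role))
                return true;
        }
        return false;
    }

    // Apply visibility/enabled rules to a control in one place.
    public static void Apply(Control control, string[] visibleTo, string[] enabledFor)
    {
        control.Visible = IsInAny(visibleTo);
        control.Enabled = IsInAny(enabledFor);
    }
}

// Usage, mirroring the snippet in the question (role names could come
// from a database table instead of string literals):
// RoleUi.Apply(btnA,
//     visibleTo:  new[] { "HR", "CS", "ADMIN" },
//     enabledFor: new[] { "CS", "ADMIN" });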
I've created a web authentication app using C# and ASP.NET and want to bounce it off you to see how secure you think it is. All navigation is done over HTTPS.
User Registration
The user enters 3 data points (SSN, last name, and DOB). If that combination is found in our system, a session variable is set and the user is taken to the next page.
If the session variable from #1 is set, proceed and ask for username, password, security question and answer, etc. Use LINQ to save the data, and verify the session variable before the actual save. The password and security answer are hashed with a salt using SHA (a sketch of this step follows the description below). Validation controls and textbox limits restrict input.
Reset password
Same as #1 in registration, but also includes the username. If OK, set the step 1 session variable.
If the step 1 session variable is set, ask the security question (up to 3 attempts). Salt and hash the answer and compare it to the salted hash in the database. If it matches, set the step 2 session variable. (Use validation controls and textbox limits to restrict input.)
Check for the step 2 session variable. Ask for the new password, then salt, hash, and save it using LINQ.
Login (use validation controls and textbox limits to limit input)
Gather the username and password, look up the stored salt/hash for that username, hash the supplied password with the same salt, and check whether it matches. If it does, instantiate the user objects and pass them to the default page.
All pages inherit from a master page. The master page has code that verifies the user object is a valid instance; if it isn't, logoff is called, which redirects to the main login page.
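For reference, the "salt and SHA" step described above usually looks something like the sketch below (SHA-256 with a random per-user salt; a key-stretching scheme such as Rfc2898DeriveBytes/PBKDF2 would be a stronger choice than a single hash pass):

using System;
using System.Security.Cryptography;
using System.Text;

public static class PasswordHasher
{
    // Generate a random salt to store alongside the hash.
    public static byte[] CreateSalt()
    {
        byte[] salt = new byte[16];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(salt);
        }
        return salt;
    }

    // Hash the password (or security answer) together with its salt.
    public static string Hash(string secret, byte[] salt)
    {
        byte[] data = Encoding.UTF8.GetBytes(secret);
        byte[] salted = new byte[salt.Length + data.Length];
        Buffer.BlockCopy(salt, 0, salted, 0, salt.Length);
        Buffer.BlockCopy(data, 0, salted, salt.Length, data.Length);
        using (var sha = SHA256.Create())
        {
            return Convert.ToBase64String(sha.ComputeHash(salted));
        }
    }
}

// Verification at login: recompute Hash(enteredPassword, storedSalt) and
// compare it to the stored hash string.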
Kind of wordy but wanted to be clear.
Am I missing anything here? I wanted to use MS's forms auth but decided to roll my own as I had some issues getting some of the custom stuff I wanted done using FBA. By using session variables as step completion markers, does that adequately prevent session stealing or bookmarking? Is there a better way to do this?
Thoughts please?
What aspect of either ASP.NET Forms Authentication or the Membership Provider bits didn't fit your needs? I've found both to be very flexible in many different scenarios.
Rolling your own is usually going to make life hard in the future, especially when you need to start making changes. Also, using a master page to verify a user's logon state might be fine for now, but when you add more master pages you start needing to replicate the same blob of code in every one and keep it all consistent. That can become a maintenance nightmare somewhere down the road.
If you're not using the ready-baked authentication tools in the framework, you should be plumbing this kind of thing in somewhere else, such as in an HttpModule.
I think you should revisit what you're doing. Take a look at implementing your own custom IIdentity if you need to hang user-specific data/objects off a user object, then wrap it in a custom IPrincipal that you attach to Context.User in ASP.NET.
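A rough sketch of that suggestion, with illustrative names only: a custom IIdentity carrying your extra user data, wrapped in a custom IPrincipal and attached to the context during authentication (an HttpModule or Global.asax AuthenticateRequest handler is a common place):

using System;
using System.Security.Principal;
using System.Web;

// Identity that carries app-specific user data loaded from your database.
public class AppIdentity : IIdentity
{
    private readonly string name;
    private readonly bool isAuthenticated;

    public AppIdentity(string name, bool isAuthenticated)
    {
        this.name = name;
        this.isAuthenticated = isAuthenticated;
    }

    public string Name { get { return name; } }
    public bool IsAuthenticated { get { return isAuthenticated; } }
    public string AuthenticationType { get { return "Custom"; } }

    // Extra user data can hang off the identity here.
    public int UserId { get; set; }
}

// Principal that answers role questions from your own user/role store.
public class AppPrincipal : IPrincipal
{
    private readonly AppIdentity identity;
    private readonly string[] roles;

    public AppPrincipal(AppIdentity identity, string[] roles)
    {
        this.identity = identity;
        this.roles = roles;
    }

    public IIdentity Identity { get { return identity; } }

    public bool IsInRole(string role)
    {
        return Array.IndexOf(roles, role) >= 0;
    }
}

// In an HttpModule's AuthenticateRequest handler (or Global.asax):
//   var id = new AppIdentity(username, true);
//   HttpContext.Current.User = new AppPrincipal(id, rolesFromDatabase);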
@asp316 and @Jack (comment): I would advise grabbing these two books:
Developing More-Secure Microsoft® ASP.NET 2.0 Applications by Dominick Baier
Professional ASP.NET 2.0 Security, Membership, and Role Management by Stefan Schackow
You'll be surprised how flexible the built in security infrastructure in .NET really is. There's a lot more to it than just adding a <authentication mode="Forms"> setting to your web.config and slapping a <asp:login runat="server"/> control on a page.
The thing about "rolling your own" is that it's very easy to get it wrong in subtle ways such that it appears to work. You then, of course, deploy this code and move on to other things with no clue that anything is wrong. After all, it passed all your tests.
A year later it turns out your site was hacked six months previously, and you never even knew it until just then.
Much better to find a way to rely on an implementation written by security experts.
I think you're pretty well set; I would also lock a user out for a time after a certain number of bad login attempts (an hour after 5 bad tries?) and check the time between requests (the AjaxControlToolkit NoBot works wonders here, in my experience).
One option, rather than putting the security code in your master page, is to implement an interface on the pages (or master page); that way, if you ever do expand to multiple master pages, or have pages outside the master, you can keep using the same security-checking code.
Depending on your requirements, I'd shy away from (required) security questions; I always forget mine. You're already checking for SSN, BDay, Last Name, username, and password; anyone who knows all of this probably can guess your mother's maiden name.
[edit]
Also, I do think it's OK to roll your own, as long as you vet it like crazy. Throw some other people at it and see how it holds up. I totally understand the inflexibility of the ASP.NET control options; the built-in controls will probably be more secure (although you should never blindly trust anything, especially something whose inner workings you can't see inside the magical black box), but sometimes you just have to roll your own when they aren't flexible enough.
When I am working with ASP.NET, I find that there are always unexpected things I run into that take forever to debug. I figure that having a consolidated list of these would be great for those "weird error" circumstances, plus to expand our knowledge of oddness in the platform.
So: answer with one of your "Gotcha"s!
I'll start:
Under ASP.NET (VB), performing a Response.Redirect inside a try/catch block does not stop execution of the current Response, which can lead to two concurrent Responses executing against the same Session.
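The usual way around it is to skip the implicit Response.End() (which works by throwing a ThreadAbortException) and end the request explicitly; a minimal sketch:

// Instead of Response.Redirect(url) inside the try/catch:
Response.Redirect("~/Error.aspx", false);       // false = don't call Response.End()
Context.ApplicationInstance.CompleteRequest();  // bypass the rest of the pipeline
return;                                         // make sure no further code runs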
Don't dynamically add controls after the page init event as it will screw up the viewstate tree.
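In practice that means re-creating dynamic controls on every request, no later than Init, so view state can be loaded back into them; a small sketch (control names are made up):

protected void Page_Init(object sender, EventArgs e)
{
    // Recreate the dynamic control on every request, including postbacks,
    // with the same ID so its view state is restored correctly.
    TextBox box = new TextBox();
    box.ID = "DynamicBox";
    DynamicArea.Controls.Add(box);   // DynamicArea is a PlaceHolder on the page
}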
ViewState, if you are using it, can get out of control if you are not paying attention to it.
The whole life-cycle thing in general.
Not that I see anything wrong with it, it's just that you'd be amazed at the number of people who start working on large ASP.Net projects before understanding it, rather than vice versa. Hence, it becomes a gotcha.
Note that I said large projects: I think the best way to come to terms with the life cycle is to work on a few smaller projects yourself first, where it doesn't matter so much if you screw them up.
Life cycle of custom controls does not match up perfectly with page life cycle events of same name.
Page_Load is run before control handlers. So you can't make changes in an event handler and then use those changes in the page load. This becomes an issue when you have controls in a master page (such as a login control). You can get around the issue by redirecting, but it's definitely a gotcha.
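Besides redirecting, another way around it is to move the dependent logic to a later phase such as PreRender, which runs after the control event handlers; a sketch with made-up control names:

protected void Page_Load(object sender, EventArgs e)
{
    // On a postback this runs *before* btnLogin_Click, so nothing the
    // click handler changes is visible here yet.
}

protected void btnLogin_Click(object sender, EventArgs e)
{
    Session["UserName"] = txtUser.Text;
}

protected void Page_PreRender(object sender, EventArgs e)
{
    // PreRender runs after the event handlers, so the value set in the
    // click handler is available here.
    lblGreeting.Text = "Hello, " + (Session["UserName"] ?? "guest");
}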
Having to jump through hoops to get the .ClientID property into javascript.
It'd be nice if the render phase of the lifecycle emitted a script that set up a JavaScript var for each server control, named after the control and automatically initialized to its ClientID value. Or maybe there'd be some way to easily trigger this action.
Hmm... I bet I could set up a method for this on my own via reflection.
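The common manual workaround is to emit the ClientID into the script with an inline expression; a small sketch (control and variable names are made up):

<asp:TextBox ID="txtSearch" runat="server" />
<script type="text/javascript">
    // The inline server expression below writes the rendered client-side id
    // into the page at render time.
    var searchBox = document.getElementById('<%= txtSearch.ClientID %>');
</script>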
Don't edit your web.config with Notepad if it contains accented characters: Notepad will save the file with the wrong encoding. It will look the same, but your application will not run.
I just learned this today: the Bind() method, as used with GridViews and ListViews, doesn't really exist. It hides some compiler magic (visible if you poke around with Reflector) that turns it into an Eval() and some kind of variable assignment.
The upshot of this is that calls like:
<%# FormatNameHelper(Bind("Name")) %>
that look perfectly valid will fail. See this blog post for more details.
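If you only need the formatted value for display (one-way binding), the same call with Eval() does work, since Eval() really is a method returning object, at the cost of losing two-way binding; this assumes FormatNameHelper accepts an object:

<%# FormatNameHelper(Eval("Name")) %>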
Debugging is a very cool feature of ASP.NET, but as soon as you change some code in the App_Code folder you trigger a rebuild of the application, and all sessions are lost.
This can get very annoying while debugging a website, but you can easily prevent it by using StateServer mode: it's just a service to start and a line to change in web.config (a sample is shown after the list below).
Refer to MSDN: http://msdn.microsoft.com/en-us/library/ms178586.aspx
InProc mode, which stores session state in memory on the Web server. This is the default.
StateServer mode, which stores session state in a separate process called the ASP.NET state service. This ensures that session state is preserved if the Web application is restarted and also makes session state available to multiple Web servers in a Web farm.
SQL Server ...
Custom ...
Off!
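The web.config line in question is the sessionState element; a sketch of the StateServer variant (42424 is the default port of the ASP.NET State Service, which has to be running):

<!-- requires the ASP.NET State Service (aspnet_state) to be started -->
<sessionState mode="StateServer"
              stateConnectionString="tcpip=127.0.0.1:42424"
              timeout="20" />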
If you are running Classic ASP applications in the same virtual directory as your ASP.NET application, the first hit on the application must be on an ASP.NET page. This ensures that the AppPool is built with the right context configuration. If the first page hit is a Classic ASP page, the results may vary from application to application. In general the AppPool is configured to use the latest framework.
Making a repeater-like control, and not knowing about INamingContainer.
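INamingContainer is just a marker interface; implementing it on the container control is enough to give child controls a unique ID namespace, so repeated items don't collide and postback events route to the right child. A bare-bones sketch:

using System.Web.UI;
using System.Web.UI.WebControls;

// The marker interface has no members to implement; ASP.NET sees it and
// prefixes the ClientIDs of any child controls with this control's ID.
public class MyRepeaterItem : WebControl, INamingContainer
{
    // Child controls get created in CreateChildControls() as usual.
}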
You have to worry about session timeouts for applications where the user might take a long time.
You also have to worry about upload timeouts for large files, too.
Validators may not always scroll your page to the scene of the data-entry error (so the user may never see it and will only wonder why the submit button won't work).
If the user enters HTML symbols such as < or > (for example, P > 3.14), or an inadvertent <br> from copy-pasting from another page, ASP.NET will reject the page and display an error.
null.ToString() produces a big fat error. Check carefully.
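A null-safe alternative, for reference:

object value = null;   // e.g. a missing DataRow field or unset session entry
// Convert.ToString returns an empty string for a null object instead of
// throwing a NullReferenceException the way value.ToString() would.
string text = Convert.ToString(value);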
Session pool sharing across multiple applications is a disaster silently waiting to happen
Moving applications around on machines with different environments is a migraine that involves web.config and many potential hours of google
ASP.NET and MySQL are prone to caching problems if you use stored procedures
AJAX can make a mess, too:
There are situations where the client can bypass page validation (especially by pressing ENTER instead of clicking the submit button). You can guard against it by checking if (!Page.IsValid) { return; } in your handler.
ASP buttons usually don't work correctly inside of UpdatePanels
The more content in your UpdatePanel, the more data is asynchronously transmitted, so the longer it takes to load
If your AJAX panel has a problem or error of some kind, it "locks up" and doesn't respond to events inside it anymore
Custom controls are only supported by the designer when building the control or when building the page that uses the control, but not both.
When using a gridview without a datasource control (i.e. binding a dataset straight to the control) you need to manually implement sorting and paging events as shown here:
http://ryanolshan.com/technology/gridview-without-datasourcecontrol-datasource/
LINQ: if you are using LINQ to SQL and a call to SubmitChanges() on the data context throws an exception (e.g. a duplicate key or other constraint violation), the offending object values remain in memory while you are debugging and will be resubmitted every time you subsequently call SubmitChanges().
Now here's the real kicker: the bad values will remain in memory even if you push the "stop" button in your IDE and restart! I don't understand why anyone thought this was a good idea - but that little ASP.NET icon that pops up in your system tray stays running, and it appears to save your object cache. If you want to flush your memory space, you have to right-click that icon and forcibly shut it down! GOTCHA!
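The practical defense is to treat the DataContext as a short-lived unit of work and discard it after a failed SubmitChanges() instead of retrying on the same instance; a rough sketch with invented type names:

using System.Data.SqlClient;

public class OrderSaver
{
    // MyDataContext and Order are hypothetical LINQ to SQL types.
    public void Save()
    {
        using (var db = new MyDataContext())
        {
            var order = new Order();
            db.Orders.InsertOnSubmit(order);
            try
            {
                db.SubmitChanges();
            }
            catch (SqlException)
            {
                // Don't keep reusing this context: the failed objects stay in
                // its change tracker and will be resubmitted on the next call.
                // Let the using block dispose it, and start over with a fresh
                // DataContext if you need to retry.
                throw;
            }
        }
    }
}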
You can't reference anything at all above the application's root folder.
All the code I have to maintain that still looks like it was written in VB6, showing complete ignorance of the newer styles.
I'm talking about things like CreateObject(), excessive <% %> blocks, And/Or instead of AndAlso/OrElse, Len() instead of .Length, s/o Hungarian prefix warts, Dim MyVariable with no type, functions with no return type... I could go on.
Being unaware of heaps of existing and extensible functionality in the framework. Things often redone are membership, roles, authorization, site maps. Then there are the controls and the associated tags which can be customized to alleviate issues with the client IDs among others. Also simple things like not knowing to properly use the .config file to auto import namespaces into templates, and being able to do that on a directory basis. Less known things like tag expressions can be valuable at times as well. Surely, as with all frameworks, there is a learning curve and always something left to be desired, however more often than not it is better to customize and extend an existing framework instead of rolling your own.
Not a pure ASP.NET thing, but ...
I was trying to use either a) nested SELECT or b) WITH clause and just could not get it to work, but people who were obviously more knowledgeable (including someone I work with) told me the syntax was fine. TURNS OUT ...
Was not able to use either of those with OLEDB.
OLEDB query to SQL Server fails
(Also, I was bit by the response.redirect() in the try ... catch 'feature' mentioned in the OP! Great thread!)
Databound controls inside an INamingContainer control must not be placed inside templated controls such as FormView. See this bug report for an example. Since INamingContainer controls create their own namespace for their contained controls, two-way databinding using Bind() will not work properly. Everything will look fine when the values are loaded (because loading is done with Eval()); it is not until you try to post back the values that they mysteriously fail to land in the database.
This SO question demonstrates the issue well: AJAX Tabcontainer inside formview not inserting values
(VB.NET) If you pass an object via a property's Get accessor into a method parameter declared ByRef, the runtime will actually attempt to update the object using the Set accessor of that property.
Example:
Sub UpdateName(ByRef aName As String)
Calling UpdateName(Employee.Name) will attempt to update the name by calling the Set accessor on the Name property of Employee.