Validating multiple Captchas on a single page - c#

I am building a comments section for a blog and I would really like some help/guidance with it, specifically around the reCAPTCHA. I have questions around:
Adding a variable number of captchas to one page
Validating that the user has completed the captcha before posting to the server
A blog post can have any number of comments, and for each comment left I want to add a reply form. I've tried to mock this up in a .NET Fiddle (https://dotnetfiddle.net/3GruG1) to demo it. In my application the form is located in a partial, and when the foreach over the blog comments is processed, it loads a form in at the bottom. The fiddle is purely for demo purposes and is not how it's implemented.
To try and limit spam submissions I would like to add a reCAPTCHA to each form. I've read that you can have multiple v2 reCAPTCHAs in a single view, but can anyone suggest the best way to achieve this with a variable number of forms per page? I am generating a GUID in the partial, so maybe I can lean on that?
I assume this is achievable in JS, but my client-side skills are weak, and before going down that route I wanted to get people's opinions on the best approach.
I would also like to validate that the user has interacted with the captcha. I played around with disabling the submit button until the box was checked, but I'm unsure whether that is good for UX.
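For what it's worth, my understanding is that each form posts its own g-recaptcha-response value, which the server then verifies against Google's siteverify endpoint, so the server side should look roughly the same however many widgets are on the page. Something like this sketch is what I had in mind (RecaptchaVerifier and the secret constant are placeholder names, not working code from my app):

    // Minimal sketch of server-side verification for one posted form.
    // RecaptchaSecret is a placeholder for the real secret key.
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Script.Serialization;

    public static class RecaptchaVerifier
    {
        private const string RecaptchaSecret = "your-secret-key";

        public static async Task<bool> IsValidAsync(string recaptchaResponse)
        {
            // An empty response means the user never completed the widget.
            if (string.IsNullOrEmpty(recaptchaResponse))
                return false;

            using (var client = new HttpClient())
            {
                var content = new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    { "secret", RecaptchaSecret },
                    { "response", recaptchaResponse }
                });

                var reply = await client.PostAsync(
                    "https://www.google.com/recaptcha/api/siteverify", content);
                var json = await reply.Content.ReadAsStringAsync();

                var result = new JavaScriptSerializer()
                    .Deserialize<Dictionary<string, object>>(json);
                return result.ContainsKey("success") && (bool)result["success"];
            }
        }
    }

As I understand it, the client side is then a matter of calling grecaptcha.render explicitly for each widget container (the v2 API supports explicit rendering), and the GUID from my partial could name each container; it's that JS part I'm least sure about.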
Any thoughts, please?

Related

IE shows a previously cached version of my page

My scenario is this: the user selects the list of reports they wish to print, and once they select and click a button, I open up another page with the selected reports ready for printing. I am using a session variable to pass the reports from one page to the other.
The first time you try it, it works fine; the second time, it opens the report window with the previously selected reports. I have to refresh the page to make sure it loads the latest selections.
Is there a way to get the latest value from the session every time you use it? Or is there a better way to solve this problem? Open for suggestions...
Thanks
C#, ASP.NET, IE7/IE8
After doing some more checking, COMET might help here.
The idea is that you can have code in your second page which keeps checking the server for updated values every few seconds; if it finds updated values, it refreshes itself.
There are two very good links explaining the implementation:
Scalable COMET Combined with ASP.NET
Scalable COMET Combined with ASP.NET - Part 2
The first link explains what COMET is and how it ties in with ASP.NET, the second link has an example using a chat room. However, I'm sure the code querying for updates will be pretty generic and can be applied to your scenario.
I have never implemented COMET myself, so I'm not sure how complex it is or how easily it would fit into your solution.
Maybe someone developing the SO application is able to resolve this issue for you. SO uses a real-time feature for the notifications on a page, e.g. you are in the middle of writing an answer and a message pops up in your client letting you know someone else has added an answer, with a link to refresh.
The proper fix is to set the caching directives on the HTTP response correctly, so that the cached response is not reused without validation from the server.
When you fail to specify the cache lifetime, the client has to "guess" how long the response is good for, and the browser's guess probably isn't what you want. See http://blogs.msdn.com/b/ie/archive/2010/07/14/caching-improvements-in-internet-explorer-9.aspx
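In ASP.NET that might look like the following in the report page's code-behind (a minimal sketch using the standard HttpCachePolicy API; ReportPage is a placeholder name):

    using System;
    using System.Web;

    public partial class ReportPage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Tell the browser (and intermediate proxies) not to reuse
            // a cached copy of this response without revalidating.
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.Cache.SetNoStore();
            Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(-1));
        }
    }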
It's better to use URL parameters, so the page always reflects the values of the parameters in the request rather than a stale cached version.

Prevention of bots automating the payment process in a web application

I have a .net c# web application that allows users to purchase products.
My site has a payment page, with input fields etc.
I have had some attacks recently via bots automating the submission of payments just to validate credit card authorization.
So I need to change my page so that bots can't do this, and I am looking for advice on how. I have started by changing the field names so that they are different each time the page loads, via a hash. Any other tips?
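To illustrate what I mean by "via a hash", the idea is roughly this (a sketch; FieldNameHasher is just an illustrative name, and I key the hash off something the server can recompute on postback, such as the session ID):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    public static class FieldNameHasher
    {
        // Derive a per-visitor field name so a bot cannot hard-code it.
        public static string For(string baseName, string sessionId)
        {
            using (var sha = SHA256.Create())
            {
                var bytes = sha.ComputeHash(
                    Encoding.UTF8.GetBytes(baseName + ":" + sessionId));
                // The prefix keeps the result a valid HTML name attribute.
                return "f" + BitConverter.ToString(bytes)
                                         .Replace("-", "")
                                         .Substring(0, 16);
            }
        }
    }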
CAPTCHA will help to stop this. It will require the user to complete a validation check before the page will go through. Here is a sample implementation of how CAPTCHA can be implemented in ASP.NET.
A "captcha" is the standard way of preventing bot-submitted forms. reCAPTCHA is free, works well, and actually helps digitize books through its use.
I tried the following approach once and it gave me good results.
In short, the idea is to create an invisible field, name it so that a robot will mistake it for a real field, and, on the server side, check the value of this field. If it is populated, then it was definitely a robot and you can safely ignore the request.
For example:
Rename your FirstName field to, say, EmanTsrif.
Add another field <input id="FirstName" class="trap_for_robots">.
Define the CSS class trap_for_robots as {display: none} (preferably in a standalone .css file; don't use style="display:none"!).
In your codebehind check if (FirstName.Text != "") { //do nothing, log something }.
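Putting those steps together, the code-behind might look like this (a sketch; it assumes FirstName and EmanTsrif are server-side TextBox controls, and LogSuspectedBot/ProcessPayment are illustrative placeholders, not part of any library):

    protected void Submit_Click(object sender, EventArgs e)
    {
        // A human never sees the hidden FirstName input, so any value
        // in it must have been filled in by a bot.
        if (FirstName.Text != "")
        {
            LogSuspectedBot(Request.UserHostAddress); // illustrative helper
            return; // fail silently; don't tell the bot what went wrong
        }

        // The real first name lives in the renamed field.
        ProcessPayment(EmanTsrif.Text); // illustrative helper
    }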
I can understand that you are trying to find a solution which does not involve human interaction, in order to keep the user experience as good as possible.
Since the evildoers are using your site to check credit card validity, you are probably dealing with a more targeted misuse of your resources, as opposed to the common blocking scenarios for automated processes like comment spam bots and the like. Depending on how valuable "your service" is to the people who are exploiting your website, locking them out without requiring human interaction might only work until they figure out what you changed.
Alternating field names isn't going to stop them from populating the fields by order of appearance on your site, for example.
Solutions like having JavaScript populate a hidden form field are only good as long as the bot does not speak JavaScript.
I would suggest using all the techniques you find when searching for CAPTCHA alternatives, applying them in random combinations on each request, and hoping that another site is less secure so the attackers start using a different site to get what they need.
If all of that doesn't help, you can still fall back to a solution that involves human interaction.
The AJAX Control Toolkit has a "NoBot" control you can use:
http://www.asp.net/ajax/ajaxcontroltoolkit/samples/NoBot/NoBot.aspx
Have a look at this Wikipedia article and read the "Technical measures to stop bots" section; it gives a number of measures to stop bots.

How to make the browser back button take you back through AJAX calls?

I have a page with a lot of dynamically generated check boxes on it. As the users click these check boxes a lot of content on the page changes dynamically via ajax. The end users are complaining that after hitting submit and then hitting the back button to change something, their selections are blown away and they have to do it all over again.
I have seen a few sites (Gmail, Facebook, etc.) use the hash symbol in the URL to hack the back button so that it performs AJAX calls instead of going back to the previous full page request. I would like to do this to modify the URL before the page submits, so that hitting the back button will load their previously selected fields.
For instance:
In Gmail if I am viewing my inbox then my URL looks like this: https://mail.google.com/mail/?shva=1#inbox
Then if I click "Sent Mail" an AJAX call is performed and my URL is modified to look like this: https://mail.google.com/mail/?shva=1#sent
I really like this behavior and want to duplicate it. How is this accomplished?
Do your links actually trigger any JavaScript, or do they just link to the URL with the appropriate hash information?
How do you read the hash information in JavaScript?
How does this type of navigation affect search engines? Would a search engine know that two URLs that are the same except for the information after the hash are actually different URLs, and index them as such?
What are some other pros and cons of this technique that I should take into consideration?
NOTE: I am using C# with ASP.NET Web Forms and ASP.NET MVC 3.0 in case that matters at all.
To manipulate the hash fragment, look at location.hash (JavaScript).
You'll also be interested in the new push/pop state stuff in HTML 5. https://developer.mozilla.org/en/DOM/Manipulating_the_browser_history.
GitHub has done some pretty cool things with this. Check out their blog entry on their tree slider feature at https://github.com/blog/760-the-tree-slider.
There's also the jQuery history plugin at http://tkyk.github.com/jquery-history-plugin/. (EDIT: I see Joe beat me to this one.)
Take a look at the jQuery history plugin (http://tkyk.github.com/jquery-history-plugin/). I have used it in the past, and it just might do what you want.
JQuery plugin:
http://tkyk.github.com/jquery-history-plugin/
Another jQuery library that I have used in the past:
jQuery BBQ: Back Button & Query Library
Also, a more scaled-down version of the previous one, if you don't need all its features and just want the hashchange event for all browsers:
jQuery hashchange event
NOTE: Just as a brief intro to the above libraries: the hashchange event is supported natively by newer (HTML5-capable) browsers, in which case the scripts simply bind to that event. For older browsers that don't support it, the script creates a polling loop to simulate the event. In either case you can bind to the event and handle it appropriately.
EDIT: To answer your questions:
The links do not trigger JavaScript; they simply change the URL with the hash. The hashchange event monitors this action, and when the hash changes (which is logged in the browser's history stack) the event fires.
location.hash is used to read the hash value, plus whatever parsing you need from that point.
I'm probably not SEO-savvy enough to give you a complete answer on that, but I'm fairly sure search engines DO NOT index hashes.
The pros of this technique are usability: your users will be able to properly use their back buttons, and any history.back(0) JavaScript calls will also work properly (I don't like them, but people use them). The cons are that, as you're initially developing, you can get some quirky bugs depending on how your code is written. All in all, though, I think the plugins take much of the legwork out of the process, and it is a great method for usability purposes.

WebCrawling Dynamic Links

Does anybody have any ideas on crawling websites that have dynamic pages/queries? I mean, if I click a certain link, it has different values every time I reload it in a web browser, and my web crawler cannot download the contents of these pages. Please advise.
It works the same way whether the page is dynamic or not. A crawler really only needs three things:
The URL
The data it sends to the server, if it is a POST request
The cookies, if authentication is required
That's all.
The common problems when writing a crawler:
Mis-guessing the default page (index.html, index.php, default.aspx, etc.); actually it will work without it for both methods (POST/GET)
A form field name that is not written exactly as the server expects
The ASP.NET ViewState hidden field (__VIEWSTATE), which has to be posted back, but that can be achieved easily
Pages generated dynamically by JavaScript; this is the hardest part, and in most cases even Google still has problems with it
Hope that helps.
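As a concrete illustration of those three ingredients, a GET-then-POST round trip might look roughly like this (a sketch; the URLs and form field values are placeholders):

    using System;
    using System.Collections.Generic;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    class CrawlerSketch
    {
        static async Task Main()
        {
            // 3. Cookies: captured and replayed automatically by the
            //    CookieContainer, which covers the authenticated case.
            var handler = new HttpClientHandler { CookieContainer = new CookieContainer() };

            using (var client = new HttpClient(handler))
            {
                // 1. The URL: a plain GET is just this. A real crawler
                //    would parse 'html' for the form's hidden fields.
                string html = await client.GetStringAsync("http://example.com/page");

                // 2. The POST data, including any hidden fields (such as
                //    __VIEWSTATE) scraped out of the fetched form.
                var form = new FormUrlEncodedContent(new Dictionary<string, string>
                {
                    { "__VIEWSTATE", "...value scraped from the page..." },
                    { "SearchTerm", "example" }
                });
                var response = await client.PostAsync("http://example.com/search", form);
                Console.WriteLine(await response.Content.ReadAsStringAsync());
            }
        }
    }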
You might want to look at this question, which details how to write a crawler, or look at the source code for http://searcharoo.net/, which contains a good crawler (see here).

jQuery Core/Data or Custom Attributes(Data-Driven)

A similar question was asked here about storing information in a given HTML element.
I'm still green with jQuery, but I'm looking for the best way to store information on the page. I have a Repeater that holds one image per item. These images are clickable and can fire a given jQuery event. The issue I'm having is that the objects the Repeater is bound to hold some specific information (such as "Subtext", "LargerImage", etc.) which I would like to be accessible from the page.
Core/Data in jQuery accomplishes this just fine; however, we would still need to build the jQuery statement from C#, as all the data is stored on the server. To clarify a bit, this is storing information on the page from a database, which is a bit different from arbitrary information being made available through jQuery.
I'm not restricting this question to "how to bind a custom attribute to an element", because I did come across the idea of generating a JS struct from the C# code-behind to store information, but I'm avoiding (or trying to avoid) any code-generating-code scenarios.
Custom attributes from HTML5 (e.g. "data-subtext") are also a possibility, as I can easily add those from the ItemDataBound event:
sampleImageElement.Attributes.Add("data-subtext", "And this what the image is about");
I'm a bit confused about browser support for this specific attribute, though, and whether it is even best practice so early in the game. If custom attributes are the way to go, that's an easy change to make happen. If jQuery can accomplish the same, I'd love to be pointed that way, at least for my own understanding.
Any thoughts are greatly appreciated.
I'm answering this question only for the record-keeping purposes of Stack Overflow, as this is the solution I've moved forward with for this scenario. An AJAX call is completely warranted for any larger dataset and is a direction I would definitely go otherwise.
I went ahead with the "data-" attributes from the HTML5 spec, read via the jQuery metadata plugin.
I wrote a short extension method on the System.Web.UI.AttributeCollection class called "AddMetaData", which accepts an IList as well as a string key, to ease attaching the data to a given page element.
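For illustration, the extension method is along these lines (a sketch of my own helper; serializing with the in-framework JavaScriptSerializer is a design choice, not something the metadata plugin requires):

    using System.Collections;
    using System.Web.Script.Serialization;
    using System.Web.UI;

    public static class AttributeCollectionExtensions
    {
        // Serialize server-side objects into a data-* attribute that
        // client script (e.g. the metadata plugin) can read back.
        public static void AddMetaData(this AttributeCollection attributes,
                                       string key, IList values)
        {
            var json = new JavaScriptSerializer().Serialize(values);
            attributes.Add("data-" + key, json);
        }
    }

From the ItemDataBound handler it is then a one-liner, e.g. sampleImageElement.Attributes.AddMetaData("subtext", subtextValues), where subtextValues is whatever list the item is bound to.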
I'm not marking this as the answer just yet, as there might be some community feedback on my own direction.
To clarify what happens in ASP.NET, once the page is served to the client, the objects that the Repeater is bound to on the server are destroyed and are then recreated upon each page postback.
It sounds like you want to achieve some kind of tooltip effect where the contents are retrieved from the server through AJAX? There are numerous tooltip options available that can be used to do this:
jQuery Tooltip plugin
Random.Next()'s jQuery AJAX tooltip
dhtml goodies AJAX tooltip
clueTip
You could then set up a web service or page method to retrieve the relevant data from your data source.
Of course, you could have the content rendered in the HTML sent to the client when the request is processed, and simply hide this markup. Then write your own plugin to display the markup in the form you require.
