I have a C# application that searches on Google. After a few hits, I see the captcha message.
To solve this, I open Internet Explorer, go to the same page, and I'm presented with the captcha as well. I complete it and then it's all good; the search results are shown.
But in my C# application, when I hit the same URL, I still see the captcha. Why is that, and how could I bypass it? I am confused: I've completed the captcha (using IE), so why do I see it again on the next request in C# but not from the browser?
I just need to be pointed in the right direction, or given some ideas or suggestions.
I don't have any knowledge of how Google does it, but I've seen websites which track how often you use them based on:
IP Address
User-Agent String
Cookies
You can spoof number 2 so it's the same as in Internet Explorer, just in case that's what they're checking.
Number 3 is easy to check, I suppose, and you can transmit the cookie if there is one.
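If you go the HttpWebRequest route, both of those are straightforward to set. A minimal sketch (the User-Agent string and URL are just placeholder values):

// A minimal sketch of sending a request with a spoofed User-Agent and a
// shared cookie container. URL and header values are placeholders.
using System;
using System.IO;
using System.Net;

class SpoofedRequestExample
{
    static void Main()
    {
        // Reuse one container across requests so cookies set by the server
        // (including any captcha-clearance cookie) are sent back.
        var cookies = new CookieContainer();

        var request = (HttpWebRequest)WebRequest.Create("http://www.google.com/search?q=test");
        request.UserAgent = "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1)"; // pretend to be IE
        request.CookieContainer = cookies;

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}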
Google wants to prevent other people from sending requests from their own applications: no ads are shown to them, and heavy automated traffic can look like an attack. You have two options:
1. Make your application behave the way a browser does, for example by sending the same User-Agent and cookies.
2. Use an API from Google. I'm fairly sure Google provides a search API for exactly this reason, but I don't have more detailed information.
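For option 2, one candidate is Google's Custom Search JSON API. A rough sketch of calling it (the API key and search engine ID are placeholders you would register with Google):

// A rough sketch of querying the Custom Search JSON API instead of
// scraping the HTML results page. API_KEY and SEARCH_ENGINE_ID are
// placeholder credentials obtained from Google.
using System;
using System.Net;

class CustomSearchExample
{
    static void Main()
    {
        string url = string.Format(
            "https://www.googleapis.com/customsearch/v1?key={0}&cx={1}&q={2}",
            "API_KEY", "SEARCH_ENGINE_ID", Uri.EscapeDataString("test query"));

        using (var client = new WebClient())
        {
            // The response is JSON; parse it with your JSON library of choice.
            Console.WriteLine(client.DownloadString(url));
        }
    }
}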
I know the question sounds too vague, so let me explain exactly what I want to implement.
I have a web application that many users log into to submit a request. A request in my project is a form that accepts some information from the user; when they click Submit, it appears on the administrator's page. The admin can then grant or decline the request, and of course the result needs to be sent to the user's 'Pending Requests' page.
This process is all about time, so I need a clean and efficient way to show the admin the requests instantly, and for the user to see the admin's response instantly (kind of like Facebook's notification system).
I hope my problem is now clear. I understand that there are many ways to implement this, and I have very little knowledge of them. I just want you guys to recommend an efficient way, because I'm sure the good ways to do this are limited.
Thanks in advance everybody :)
I suggest you take a look at SignalR (https://github.com/SignalR/SignalR). It is a framework developed by a few MS developers for doing long polling/notifications from the server.
Link for webforms walkthrough - http://www.infinitelooping.com/blog/2011/10/17/using-signalr/.
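A rough sketch of what a hub for your scenario might look like (the hub and method names are made up for illustration, and the exact API surface depends on which SignalR version you use):

// Sketch of a SignalR 2-style hub plus a server-side broadcast helper.
// RequestHub, RequestNotifier, and requestStatusChanged are illustrative
// names, not part of any existing API.
using Microsoft.AspNet.SignalR;

public class RequestHub : Hub
{
    // Browsers connect to this hub; no server methods are needed for a
    // broadcast-only notification scenario.
}

public static class RequestNotifier
{
    // Call this from the code that grants or declines a request.
    public static void NotifyStatusChanged(int requestId, string status)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<RequestHub>();

        // Invokes the JavaScript handler 'requestStatusChanged' on every
        // connected client, pushing the update without polling.
        context.Clients.All.requestStatusChanged(requestId, status);
    }
}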
You could also look into using the Timer control. It's a server control for ASP.NET AJAX applications that causes a postback at a fixed interval, typically inside an UpdatePanel. Here's a simple tutorial:
http://ajax.net-tutorials.com/controls/timer-control/
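A minimal sketch of what that might look like (the grid, interval, and LoadPendingRequests method are placeholders for your own page and data access):

<%-- Every 5 seconds the Timer posts back and the UpdatePanel re-binds. --%>
<asp:ScriptManager ID="ScriptManager1" runat="server" />

<asp:UpdatePanel ID="RequestsPanel" runat="server">
    <ContentTemplate>
        <asp:Timer ID="RefreshTimer" runat="server"
                   Interval="5000" OnTick="RefreshTimer_Tick" />
        <asp:GridView ID="RequestsGrid" runat="server" />
    </ContentTemplate>
</asp:UpdatePanel>

<script runat="server">
    protected void RefreshTimer_Tick(object sender, EventArgs e)
    {
        RequestsGrid.DataSource = LoadPendingRequests();
        RequestsGrid.DataBind();
    }

    private object LoadPendingRequests()
    {
        // Placeholder: query your database for un-approved requests here.
        return new[] { new { Id = 1, Status = "Pending" } };
    }
</script>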
What you're talking about is a 'push' notification, where the server would pass a notification to the client (a browser) without the client requesting anything.
This isn't something HTTP is naturally capable of; however, have a read about Comet - it will give you an idea of the current state of what's possible.
You may opt for creating a 'heartbeat' on the client side - a polling mechanism which hits the server every x seconds and updates the page when new content is found.
I need a clean and efficient way to show the admin the requests instantly and for the user to see the admin's response instantly.
Instantly is a very strong term and isn't usually very scalable.
For some ideas on how you might implement this I'd recommend you take a look at Wikipedia's Comet Programming page
When a user submits a request, I assume that the request is first stored in the database. So on the admin part you use AJAX which periodically pulls updated data from the database (for un-approved requests); do some Google searching on AJAX auto-update, JavaScript's setTimeout, or similar functions. The same process applies to the user part.
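On the server side, the endpoint your AJAX poll hits could be as simple as a static page method (a sketch; the page class and hard-coded data are placeholders, and you'd set EnablePageMethods on the ScriptManager to call it from JavaScript):

// Sketch of a page method an AJAX poll could call every few seconds.
// PendingRequestsPage and the returned data are placeholders for your
// actual page and database query.
using System.Collections.Generic;
using System.Web.Services;

public partial class PendingRequestsPage : System.Web.UI.Page
{
    [WebMethod]
    public static List<string> GetPendingRequests()
    {
        // Placeholder: query the database for requests with no decision yet.
        return new List<string> { "Request #1 (pending)", "Request #2 (pending)" };
    }
}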
Hey guys, I'm trying to create a website that can help a user purchase items from other websites. What would be the best way to go about doing this?
I know most of the sites I'm using send their information using a form POST, but I'm having trouble finding the exact POST packet in Fiddler (I'm assuming it's encrypted?), and I know that a lot of the sites use login credentials, so that complicates things a bit.
Is there any way I could use WebKit or something to handle all the HTTP stuff, and just pass JavaScript to fill in the forms? Or is there an even simpler way to create proper POST packets and use a WebRequest?
Thank you!
1) get permission
2) use their published API
If the sites do not have an API but allow you to use their server process, copy their forms to your site and use POST. You can post from your server with credentials using, for example, cURL.
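In C#, the rough equivalent of a cURL post with credentials might look like this (the URL and form field names are placeholders):

// A rough sketch of posting a form from your server with credentials.
// URL, field names, and values are placeholders.
using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

class ServerSidePostExample
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            var form = new NameValueCollection
            {
                { "username", "myuser" },     // placeholder credentials
                { "password", "mypassword" },
                { "item_id", "12345" }        // placeholder form field
            };

            byte[] response = client.UploadValues(
                "https://example.com/cart/checkout", "POST", form);

            Console.WriteLine(Encoding.UTF8.GetString(response));
        }
    }
}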
Usually shopping carts and credit-card transactions use SSL, and you have to log in to the site. So I think it's not so simple to bridge with JavaScript or a simple WebRequest.
There's no standard, simple way to do this!
You're heading for a world of hurt.
First, you should check if what you're trying to do is legal. Does the web site allow "proxy orders"? Or are they forbidden by their EULA?
Second, you'll have to handle the user's confidential data (username, password, credit card number), and especially credit card numbers are calling for troubles.
Third, how are you planning to implement payment methods like PayPal? You're going to collect the user's PayPal credentials in order to make payments on their behalf? (See point number two if answer is yes.)
Fourth, since you have to fake HTTP requests, your tool will break as soon as the web site changes a single field. How are you planning to handle that?
Or are you trying to automate only the first steps of the orders and not the payment?
I'm trying to automate the download of a file from a website. Normally, to download the file, I log in with a username and password, navigate to a particular screen, then click a button.
I've been trying to watch the sequence of POSTs using Chrome's developer mode, and then replicate all the steps using the .NET WebClient class, but without success. I've derived from the WebClient class and added cookie handling, which seems to be working. I go to the login page and post using WebClient.UploadValues. About half the time it seems to work. The next step appears to make another POST action to a reporting URL. Once again I use WebClient.UploadValues, but the response from the server is a page showing an internal error.
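(The cookie handling I mean follows the common pattern of overriding GetWebRequest to attach a shared CookieContainer; a minimal sketch, with illustrative names:)

// Sketch of a WebClient subclass that persists cookies across requests,
// so the session survives between the login POST and later calls.
using System;
using System.Net;

public class CookieAwareWebClient : WebClient
{
    private readonly CookieContainer _cookies = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);
        var httpRequest = request as HttpWebRequest;
        if (httpRequest != null)
        {
            httpRequest.CookieContainer = _cookies;
        }
        return request;
    }
}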
I have a couple of questions.
1) Are there better tools than hand coding C# code to replicate a bunch of web browser interactions? I really only care about being able to download the file at a particular time each day onto a Windows box.
2) The WebClient does not seem to be the best class to use for this. Perhaps it's a bit too simplistic. I tried using HttpWebRequest, but it has no facilities for encoding POST requests. Any other recommendations?
3) Although Chrome's developer plugin appears to show all interaction, I find it a bit cumbersome to use. I'd be interested in seeing all of the raw communication (unencrypted, though; the site is only accessed via HTTPS), so I can see if I'm really replicating all of the steps.
I can even post the exact code I'm using. The site I'm pulling data from, specifically, is the Standard and Poor's website. They have the ability to create custom reports for downloading historical data, which I need for reporting, not republishing.
Using IE to download the file would be much easier than writing C# / Perl / Java code to replicate the HTTP requests.
Reason is, even a slight change in JavaScript code can break the flow.
With IE, you can automate it using COM. The following VBA example opens IE and performs a Google search:
Sub Search_Google()
    Dim IE As Object
    Set IE = CreateObject("InternetExplorer.Application")

    IE.Navigate "http://www.google.com" ' load the web page google.com

    While IE.Busy
        DoEvents ' wait until IE is done loading the page
    Wend

    ' Put the query into the search text box (named "q")
    IE.Document.all("q").Value = "what you want to put in the text box"

    ' Click the button named "btnG", which is Google's "Google Search" button
    IE.Document.all("btnG").Click

    While IE.Busy
        DoEvents ' wait until IE is done loading the page
    Wend
End Sub
3) Although Chrome's developer plugin appears to show all interaction, I find it a bit cumbersome to use. I'd be interested in seeing all of the raw communication (unencrypted, though; the site is only accessed via HTTPS), so I can see if I'm really replicating all of the steps.
For this you can use Fiddler to view all the interaction going on and the raw data going back and forth. To make it work with HTTPS you will need to install Fiddler's root certificate to enable decryption of the traffic.
I am thinking about working with remote data, i.e. receiving data from and sending data to external web sites. There are plenty of working examples on the web, for instance free online web tools like web-stats counters, or Google's AdSense: such services generate some code for publishers, the publisher puts the generated code in the BODY of their web page (HTML file), and after that the system works. We can get visit counts for home pages, click counts on advertisements, and so on.
Now this is my question: how do such systems work, and how can I investigate them to find out how to program them myself? Can you suggest some keywords? Which titles should I look for, and which technologies are relevant to this kind of programming? Essentially, I want to find relevant references so I can learn and start experimenting with these systems. If my question is not clear I will explain it more if you want. Help me, I am confused.
Consider that I am a programmer who wants to build such systems, not just use them.
There are a few different ways to track clicks.
Redirection Tracking
One is to link the advertisement (or any link) to a redirection script. You would normally pass it some sort of ID so it knows which URL it should forward to. But before redirecting the user to that page, it can first record the click in a database, storing the user's IP, a timestamp, browser information, etc. It will then forward the user (without them really noticing) to the specified URL.
Advertisement ---> Redirection Script (records click) ---> Landing Page
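A minimal sketch of such a redirection script as an ASP.NET generic handler (the query parameter, lookup, and RecordClick helper are placeholders for your own data access):

// Sketch of a redirection-tracking .ashx handler: record the click,
// then forward the user to the landing page. All names are illustrative.
using System;
using System.Web;

public class RedirectTracker : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string adId = context.Request.QueryString["id"];

        // Record the click before the user ever sees the landing page.
        RecordClick(adId,
                    context.Request.UserHostAddress, // user's IP
                    context.Request.UserAgent,       // browser information
                    DateTime.UtcNow);                // timestamp

        // Forward the user on; they barely notice the intermediate hop.
        context.Response.Redirect(LookUpTargetUrl(adId));
    }

    public bool IsReusable { get { return true; } }

    private static string LookUpTargetUrl(string adId)
    {
        return "http://example.com/landing-page"; // placeholder lookup
    }

    private static void RecordClick(string adId, string ip, string ua, DateTime when)
    {
        // Placeholder: INSERT a row into your clicks table here.
    }
}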
Pixel Tracking
Another way to do it is to use pixel tracking. This is where you put a "pixel" or a piece of JavaScript code into the body of a webpage. The pixel is just an image (or a script posing as an image) which will then be requested by the user visiting the page. The tracker which hosts the pixel can record the relevant information from that image request. Some systems will use JavaScript instead of an image (or they use both) to track clicks. This may allow them to gain slightly more information using JavaScript's functions.
Advertisement ---> Landing Page ---> User requests pixel (records click)
Here is an example of a pixel: <img src="http://tracker.mydomain.com?id=55&type=png" />
I threw in the png at the end because some systems might require a valid image filetype.
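On the tracker side, the pixel endpoint just needs to record the request and return a valid image. A minimal sketch (the RecordImpression helper is a placeholder):

// Sketch of a tracker-side pixel endpoint: log the request, then return
// a 1x1 transparent GIF so the <img> tag renders harmlessly.
using System;
using System.Web;

public class PixelTracker : IHttpHandler
{
    // The classic 1x1 transparent GIF, base64-encoded.
    private static readonly byte[] TransparentGif = Convert.FromBase64String(
        "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7");

    public void ProcessRequest(HttpContext context)
    {
        string id = context.Request.QueryString["id"];

        // Placeholder: store the IP, User-Agent, referrer, timestamp, etc.
        RecordImpression(id, context.Request.UserHostAddress,
                         context.Request.UserAgent, DateTime.UtcNow);

        context.Response.ContentType = "image/gif";
        context.Response.BinaryWrite(TransparentGif);
    }

    public bool IsReusable { get { return true; } }

    private static void RecordImpression(string id, string ip, string ua, DateTime when)
    {
        // Placeholder for your database insert.
    }
}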
Hidden Tracking
If you do not want the user to know what the tracker is you can put code on your landing page to pass data to your tracker. This would be done on the backend (server side) so it is invisible to the user. Essentially you can just "request" the tracker URL while passing relevant data via the GET parameters. The tracker would then record that data with very limited server load on the landing page's server.
Advertisement ---> Landing Page requests tracker URL and concurrently renders page
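Server-side, that request could be as simple as this sketch (the tracker URL and parameters are placeholders; in production you would likely fire it asynchronously so it cannot slow down page rendering):

// Sketch of the hidden (server-side) variant: the landing page's backend
// calls the tracker while rendering, so nothing is visible to the user.
using System;
using System.Net;

public static class HiddenTracker
{
    public static void Track(string adId, string visitorIp)
    {
        string url = string.Format(
            "http://tracker.mydomain.com/track?id={0}&ip={1}",
            Uri.EscapeDataString(adId), Uri.EscapeDataString(visitorIp));

        using (var client = new WebClient())
        {
            client.DownloadString(url); // the tracker records the hit
        }
    }
}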
Your question really isn't clear I'm afraid.
Are you trying to find out information on who uses your site, how many clicks you get and so on? Something like Google Analytics might be what you are after - take a look here: http://www.google.com/analytics/
EDIT: Adding more info in response to comment.
Ah, OK, so you want to know how Google tracks clicks on sites when those sites use Google ads? Well, a full discussion on how Google AdSense works is well beyond me I'm afraid - you'll probably find some useful info on Google itself and on Wikipedia.
In a nutshell, and at a very basic level, Google ads work by actually directing the click to Google first. If you look at the URL for a Google ad (on this site, for example), you will see it starts with "http://googleads.g.doubleclick.net..." (Google owns DoubleClick). The URL also contains a lot of other information which allows Google to detect where the click came from and where to redirect you to see the actual web site being advertised.
Google Analytics is slightly different in that it is a small chunk of JavaScript you run in your page, but that too basically reports back to Google that the page was visited, when you landed there, and how long you spent on the page.
Like I said a full discussion of this is beyond me I'm afraid, sorry.
I'm trying to build a C# console application to automate grabbing certain files from our website, mostly to save myself clicks and - frankly - just to have done it. But I've hit a snag for which I've been unable to find a working solution.
The website to which I'm trying to connect uses ASP.NET forms authentication, and I cannot figure out how to authenticate myself with it. This application is a complete hack, so I can hard code my username and password or any other needed auth info, and the solution itself doesn't need to be viable enough to release to general users. In other words, if the only possible solution is a hack, I'm fine with that.
Basically, I'm trying to use HttpWebRequest to pull the site that has the list of files, iterating through that list and then downloading what I need. So the actual work on the site is fairly trivial once I can get the website to consider me authorized.
I have dealt with something similar, and the hardest part is figuring out exactly what you need to "fake" to get authorized. In my case it was authorizing into some Lotus Notes web service, but the details are unimportant; the method is the same.
Essentially, we need to record a regular user session. I would recommend Fiddler (http://www.fiddler2.com), but if you're on Linux or something, you'll need to use Wireshark to figure some of the things out. I'm not sure if there is a Firefox plugin that could be used.
Anyway, start up IE, then start up Fiddler. Complete the login process.
Stop what you're doing. Switch to the Fiddler pane and examine the recorded sessions in detail. It should give you exactly what you need to fake using WebRequests.
This page should get you started. You need to first make a request to the login page, then save the cookies to a container that you include in all later requests. That should keep you logged in and able to retrieve the files.
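A sketch of that flow (the URL, form field names, and credentials are placeholders; with ASP.NET forms authentication you will usually also need to echo back hidden fields like __VIEWSTATE and __EVENTVALIDATION from the real login page):

// Sketch: POST the login form, capture the auth cookie in a shared
// CookieContainer, then reuse it to download a protected file.
// All URLs, field names, and values are placeholders.
using System;
using System.IO;
using System.Net;
using System.Text;

class FormsAuthDownloader
{
    static void Main()
    {
        var cookies = new CookieContainer();

        // 1) POST the login form; the auth cookie lands in 'cookies'.
        var login = (HttpWebRequest)WebRequest.Create("https://example.com/Login.aspx");
        login.Method = "POST";
        login.ContentType = "application/x-www-form-urlencoded";
        login.CookieContainer = cookies;

        byte[] body = Encoding.UTF8.GetBytes("UserName=myuser&Password=mypassword");
        login.ContentLength = body.Length;
        using (Stream stream = login.GetRequestStream())
        {
            stream.Write(body, 0, body.Length);
        }
        using (login.GetResponse()) { }

        // 2) Request the protected file with the same cookie container.
        var download = (HttpWebRequest)WebRequest.Create("https://example.com/files/report.csv");
        download.CookieContainer = cookies;
        using (var response = (HttpWebResponse)download.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}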