.NET, C#: post to a PHP page

I have a website that is primarily PHP, but we have built some new pages in C#/.NET. I need to be able to POST (I think) to the PHP page.
The PHP page has a login form that takes a login name and password. I want my .NET page to have a login that redirects to the PHP page so that, once there, the user is already logged in.
I believe I can POST the data to the PHP page... is that correct?
If so, can someone share a code snippet that points me in the right direction?
Thanks for the help!

I believe there is a cURL equivalent for C#; the easy way would be to set that up if you can. There is some information on using cURL with .NET online, and a few other resources on Google for "c# curl".
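For what it's worth, here is a rough C# equivalent of a cURL form post, using WebClient. This is only a sketch: the URL and the field names ("login", "password") are placeholders and must match whatever the PHP login script actually expects.

using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

class PhpLoginPost
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Field names are assumptions - match them to the PHP form.
            var fields = new NameValueCollection
            {
                { "login", "someuser" },
                { "password", "somepass" }
            };
            // UploadValues sends an application/x-www-form-urlencoded POST,
            // the same kind of request an HTML form submit produces.
            byte[] response = client.UploadValues("http://example.com/login.php", "POST", fields);
            Console.WriteLine(Encoding.UTF8.GetString(response));
        }
    }
}

One caveat: a server-side POST like this logs the server in, not the visitor's browser; the PHP session cookie ends up on the C# side. If the goal is for the user's browser to arrive at the PHP page already logged in, the POST has to come from the browser itself (a form submit, or the Ajax approach below).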

There are lots and lots of ways. I prefer JavaScript (with jQuery) Ajax calls:
// the field names must match what the PHP script expects
$.post('login.php', { login: 'name', password: 'pass' }, function(data) {
    alert("returned " + data + " from php page");
});
See the jQuery documentation for more details.
If you need to do this server-side (in the C# code-behind), let me know and I will write up some sample code.

There's no difference between submitting a form post to a PHP script, a C# application, or any other program written in any language: how the data is transferred is defined by Internet standards.
Of course, different implementations may have slightly different behaviors. For instance, PHP treats any input whose name contains square brackets as part of an array.
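For example, posting two fields that share the bracketed name "tags[]" will surface on the PHP side as a single array $_POST['tags']. A small sketch (the URL and field name are invented for illustration):

using System;
using System.Collections.Generic;
using System.Net.Http;

class BracketPostExample
{
    static void Main()
    {
        // PHP will expose these two values as $_POST['tags'] => array('php', 'csharp')
        var fields = new List<KeyValuePair<string, string>>
        {
            new KeyValuePair<string, string>("tags[]", "php"),
            new KeyValuePair<string, string>("tags[]", "csharp")
        };
        using (var client = new HttpClient())
        {
            var content = new FormUrlEncodedContent(fields);
            var response = client.PostAsync("http://example.com/handler.php", content).Result;
            Console.WriteLine(response.StatusCode);
        }
    }
}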

Related

Firing hCaptcha callback function for bypass token

I'm trying to bypass the hCaptcha in Discord account registration using Selenium WebDriver in C#. I'm using the CapMonster Cloud API to solve the captcha itself, and as a response I get a bypass token.
The problem I currently have is that I can't locate the callback function that I need to call/submit in order to pass the hCaptcha.
I'm setting the bypass token into the "g-recaptcha-response" and "h-captcha-response" textareas, but I can't find a way to locate and call the callback function. There is no form to be submitted.
"using selenium webDriver in C#"
10/10 would recommend doing Discord captcha bypasses using PuppeteerExtraSharp/ExtraStealth (as Selenium has some obvious tracers).
Puppeteer has a lot more freedom in its API, and 2Captcha is a much more popular method for solving hCaptchas.
I know this doesn't answer your question, but I hope you look into this as a potentially better alternative if you do not receive a more traditional answer.
You can do that with the Anti-Captcha.com plugin, which will do the job automatically. It injects its own callbacks, so when a token is ready it submits the form. If you ever have problems with the plugin, the support guys there will help you out.
Web communication has to happen through one of the standard HTTP request methods.
So if anything is being sent and received between a browser and a server, it has to use one of those methods. The most common methods are POST and GET.
The statement "There is no form to be submitted" is somewhat confusing. A form is just a display of fields to collect data from a user. When a website does not need user input, it does not show a form; it captures the required data and sends a POST request to the server (without the user ever noticing), in the same manner a form would have sent the data. This is normal behavior for almost all major websites; Google Analytics tracking codes are one example.
So what you need to look for is the request (usually POST, sometimes PUT, maybe GET - it depends) where the data you are targeting is sent or received.
In your case there is indeed a form which displays the captcha (that is how you see it), and an associated POST request which does what you need.
The URL of the POST request that fetches the captcha is: POST /getcaptcha?s=xxxxxxxx-xxxe-xxxx-xxxx-xxxxxxxxxxxx HTTP/3
The URL where the token is sent is: POST /api/v9/auth/register HTTP/3
These basics apply to any web communication and not just the website in question.
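As a generic illustration of the "POST without a form" point, here is a minimal sketch in C#. The endpoint and the payload are placeholders, not Discord's actual contract; the point is only that a client can send the same bytes a form would, with no form anywhere:

using System;
using System.Net.Http;
using System.Text;

class FormlessPost
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Placeholder JSON body - substitute whatever the target API expects.
            var content = new StringContent("{\"captcha_key\":\"TOKEN-GOES-HERE\"}",
                                            Encoding.UTF8, "application/json");
            var response = client.PostAsync("https://example.com/api/endpoint", content).Result;
            Console.WriteLine((int)response.StatusCode);
        }
    }
}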

accessing websites using C#

I have a problem here. Assume there's a basic calculator implemented in JavaScript hosted on a website (I googled for an example and found this one: http://www.unitsconverter.net/calculator/ ). What I want to do is make a program that opens this website, enters some values, and gets the result back. So, with our website calculator, the program would:
- open the website
- enters an operand
- enters an operation
- enters an operand
- retrieve the result
Note: this should be done without showing anything to the user (the browser, for example).
I did some searching and found HttpWebRequest and HttpWebResponse. But I think those are used to post data to the server, which means the file I'm sending data to must be PHP, ASPX, or JSP. JavaScript is client-side, so I think they are kind of useless to me in this case.
Any help?
Update:
I have managed to develop the web bot using the WebBrowser control (found in System.Windows.Forms).
Here's a sample of the code:

// Loads the page specified in the string. You can also set
// webBrowser1.ScriptErrorsSuppressed = true; to suppress script error dialogs.
webBrowser1.Navigate("LinkOfTheSiteYouWant");
webBrowser1.Document.GetElementById("ElementId").SetAttribute("HTMLattribute", "valueToBeSet");

Those are the main methods I used to do what I wanted.
I found this video useful: http://www.youtube.com/watch?v=5P2KvFN_aLY
I guess you could use something like WatiN to pipe the user's input/output from your app to the website and return the results, but as another commenter pointed out, the value of doing this when you could just write your own calculator rather escapes me.
You'll need a JavaScript interpreter (engine) to actually run the JavaScript code on the page.
https://www.google.com/search?q=c%23+javascript+engine
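For instance, a minimal sketch using Jint, one open-source JavaScript engine for .NET (the API below follows its documentation; verify it against the version you install):

using System;
using Jint;  // NuGet package "Jint"

class JsCalculator
{
    static void Main()
    {
        var engine = new Engine();
        // Evaluate an expression the same way the page's own script would.
        double result = engine.Evaluate("(12 + 8) * 2").AsNumber();
        Console.WriteLine(result);  // 40
    }
}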
What you're looking for is something more akin to a web service. The page you provided doesn't seem to accept any data in an HTTP POST, and doesn't have any meaningful information in the source that you could scrape. If, for example, you wanted to programmatically search eBay auctions, you could figure out how to pass the data to it correctly, e.g.:
http://www.ebay.com/sch/i.html?_nkw=http+for+dummies&_sacat=267&_odkw=http+for+dummies&_osacat=0
and then look through the HTTP response for the information you're after. You'd probably need to create a regular expression to match the markup you're looking for; for example, if you wanted to know how many results there were, you'd search the HTTP response for this bit of markup:
<div class="alt w"><div class="cnt">Your search returned <b>0 items.</b></div></div>
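A rough sketch of that fetch-and-match approach in C#. The regex is tied to the markup fragment above, so treat it as an assumption about the page; screen-scraping like this breaks whenever the site changes its HTML:

using System;
using System.Net;
using System.Text.RegularExpressions;

class ScrapeSketch
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            string html = client.DownloadString(
                "http://www.ebay.com/sch/i.html?_nkw=http+for+dummies");
            // Capture the number inside "Your search returned <b>N items.</b>"
            var match = Regex.Match(html, @"Your search returned <b>([\d,]+) items?\.</b>");
            Console.WriteLine(match.Success ? match.Groups[1].Value : "pattern not found");
        }
    }
}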
As far as the client-side/JavaScript stuff goes, you just plain aren't going to be able to do anything like what you're going for.
It is a matter of API: does the remote website expose an API for the required functionality?
Web resources that expose an interactive API are called web services. There are tons of examples (Google Maps, for instance).
You can access the API - depending on the terms and conditions of the service - through a client. The nature of the client depends on the kind of web service you are accessing.
A SOAP-based service is based on the SOAP protocol.
A REST-based service is based on REST principles.
So, if there is an accessible web service called "Calculator", then you can access the service and, for instance, invoke its sum method.
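Calling such a service might look like this (hedged sketch: the URL and parameters are invented for illustration; no such public calculator service is implied):

using System;
using System.Net;

class CalculatorClient
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Hypothetical REST endpoint exposing a "sum" operation.
            string result = client.DownloadString(
                "https://api.example.com/calculator/sum?a=2&b=3");
            Console.WriteLine(result);  // e.g. "5"
        }
    }
}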
In your example, the calculator is a JavaScript implementation, so it is not a web service and cannot be accessed via HTTP requests. However, its implementation is still accessible: it is the JavaScript file where the calculator is implemented. You can always include that file in your own website and access its functions via JavaScript (always mind the terms and conditions!).
A very common example is the jQuery library hosted on Google's CDN.

Return PHP page from ASP route handler

I would like to return a PHP page from a route handler like so:
return BuildManager.CreateInstanceFromVirtualPath("/redirects.php", typeof(Page)) as Page;
This requires an extra buildProvider and returns a page with PHP directives unprocessed, so I can view all the PHP code with View Source. How can I tell it to process the code rather than just output the page?
It's unusual to want ASP.NET and PHP on the same server. The answer is that you need to install the PHP interpreter on your server.
I presume you're using IIS. This seems like a good place to start: http://php.iis.net/
You would have to call it as a URL, using cURL or something similar. I'm not sure what the C# equivalent is.
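The C# equivalent of that idea is an HTTP fetch: request the PHP page over HTTP so the web server runs the interpreter, then relay the rendered output. A minimal sketch, assuming PHP is already installed and the page is reachable at the URL shown (the route-handler wiring is omitted):

using System.Net;
using System.Web;

public class PhpRelayHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        using (var client = new WebClient())
        {
            // Going through HTTP means the PHP interpreter processes the page
            // before we ever see it - no raw PHP source leaks to View Source.
            string html = client.DownloadString("http://localhost/redirects.php");
            context.Response.ContentType = "text/html";
            context.Response.Write(html);
        }
    }
}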

redirect to another page from Silverlight

Inside a Silverlight page, I want to redirect to another ASPX page in the same web site, using the POST method to send some additional header information. Any ideas how to implement this? Any samples are appreciated. :-)
I am using VSTS 2008 + .NET 3.5 + Silverlight 2.0 + C#.
My suggestion would be to have a Visibility=hidden button on the page, and then use JavaScript to retrieve it and .click() it. That way you get to do a POST without all the work that this guy went through:
http://mentaljetsam.wordpress.com/2008/06/02/using-javascript-to-post-data-between-pages/
Especially considering it's a bear to craft a POST to ASP.NET through JavaScript, since ASP.NET requires pesky things like viewstate.
I think you're looking for HtmlPage.Document.Submit() .
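A minimal sketch of the hidden-form idea via Silverlight's HTML bridge (System.Windows.Browser). The target URL and field name are placeholders; note that real custom headers can't be set this way - the extra information has to travel as ordinary form fields:

using System.Windows.Browser;

public static class PostRedirect
{
    public static void PostTo(string url)
    {
        HtmlDocument doc = HtmlPage.Document;

        HtmlElement form = doc.CreateElement("form");
        form.SetAttribute("method", "POST");
        form.SetAttribute("action", url);          // e.g. "Target.aspx"

        HtmlElement field = doc.CreateElement("input");
        field.SetAttribute("type", "hidden");
        field.SetAttribute("name", "extraInfo");   // placeholder field name
        field.SetAttribute("value", "some-value");

        form.AppendChild(field);
        doc.Body.AppendChild(form);
        form.Invoke("submit");                     // calls the DOM form's submit()
    }
}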

What is the best way to crawl login-based sites?

I have to automate a file-download activity from a website (similar to, let's say, yahoomail.com). To reach the page that has the download link, I have to log in, jump from page to page to provide some parameters like dates etc., and finally click the download link.
I am thinking of three approaches:
- Using WatiN and developing a Windows service that periodically executes some WatiN code to traverse the pages and download the file.
- Using AutoIt (I don't know much about it).
- Using a simple HTML parsing technique (this raises several questions, e.g., how do I maintain a session after logging in? how do I log out afterwards?).
I use Scrapy (scrapy.org), a Python library. It's quite good actually: spiders are easy to write, and it's very extensive in its functionality. Scraping sites after login is available in the package.
Here is an example of a spider that would crawl a site after authentication:
from scrapy.spider import BaseSpider
from scrapy.http import FormRequest
from scrapy import log

class LoginSpider(BaseSpider):
    domain_name = 'example.com'
    start_urls = ['http://www.example.com/users/login.php']

    def parse(self, response):
        # submit the login form found on the start page
        return [FormRequest.from_response(response,
                    formdata={'username': 'john', 'password': 'secret'},
                    callback=self.after_login)]

    def after_login(self, response):
        # check that the login succeeded before going on
        if "authentication failed" in response.body:
            self.log("Login failed", level=log.ERROR)
            return
        # continue scraping with the authenticated session...
I have used mechanize for Python with success for a few things. It's easy to use and supports HTTP authentication, form handling, cookies, automatic HTTP redirection (30X), and so on. Basically the only thing missing is JavaScript, but if you need to rely on JS you're pretty much screwed anyway.
Try a Selenium script, automated with Selenium Remote Control.
Free Download Manager is great for crawling, and you could use wget.
