I have been trying to establish an integration with Twitter; what I need are just the features below.
I should be able to extract all the tweets for my account and display them in my application.
I should be able to post a status when I enter text in my textbox and click Submit.
I have tried multiple approaches, and even used third-party libraries like TweetSharp (it worked pretty well). As I need to deliver this to a client, I don't want to use any third-party tools, since there will be no one to provide support if any issues come up.
First of all, it should be completely free. So I tried using OAuth as explained in the link below. I updated the Twitter API version to 1.1 and added my consumer key, consumer secret, access token, and access token secret. I executed the application, and to my surprise my message was posted to Twitter. But when I changed the status and executed again, it stopped working and kept giving me an unauthorized error.
The example I am following might not be complete; maybe I need to regenerate an access key or do something else. I am confused. Can you please help me with how to proceed further, ideally with a link to a post with complete code?
http://www.codeproject.com/Articles/247336/Twitter-OAuth-authentication-using-Net
Thank you.
Every status must be unique, or you'll receive an error. For this reason, I append the date/time to the end of the tweet when testing.
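For example, a minimal sketch of what I mean (statusText is a placeholder for your tweet body; the exact format doesn't matter as long as the text changes between runs):
// Append the current date/time so each test tweet is unique and Twitter's
// duplicate-status check doesn't reject it with an error.
string statusText = "Testing my Twitter integration";
string uniqueStatus = statusText + " " + DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ss");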
BTW, if you ever change your mind about 3rd party libraries, check out LINQ to Twitter.
Just for Info:
The new API is stricter about sending headers and creating the signature.
Make sure you fulfill these requirements:
Header values should be in sorted (lexicographic) order.
GET request:
The header should not include any query string values.
The signature base should contain all values, including query string values.
POST request:
The header should include the POST parameter values.
The signature base should contain all values, including the parameter values.
The POST request URL should have all parameter values appended as a query string, and the request should be sent to that URL.
This should solve most authorization issues.
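As a rough illustration of those sorting and signature-base rules for a POST to statuses/update (just a sketch, not a complete OAuth 1.0a implementation; all the credential values are placeholders):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Placeholder credentials - substitute your own.
string consumerKey = "...", consumerSecret = "...", accessToken = "...", tokenSecret = "...";
string nonce = Guid.NewGuid().ToString("N");
string timestamp = DateTimeOffset.UtcNow.ToUnixTimeSeconds().ToString();
string statusText = "Hello world";

// Every value that goes into the signature: the oauth_* parameters plus all
// query string / POST parameters, kept in lexicographic (ordinal) order.
var parameters = new SortedDictionary<string, string>(StringComparer.Ordinal)
{
    { "oauth_consumer_key", consumerKey },
    { "oauth_nonce", nonce },
    { "oauth_signature_method", "HMAC-SHA1" },
    { "oauth_timestamp", timestamp },
    { "oauth_token", accessToken },
    { "oauth_version", "1.0" },
    { "status", statusText }   // the POST parameter for statuses/update
};

string parameterString = string.Join("&",
    parameters.Select(p => Uri.EscapeDataString(p.Key) + "=" + Uri.EscapeDataString(p.Value)));

string signatureBase = "POST&"
    + Uri.EscapeDataString("https://api.twitter.com/1.1/statuses/update.json") + "&"
    + Uri.EscapeDataString(parameterString);

// Sign with HMAC-SHA1 using "consumerSecret&tokenSecret" as the key; the result
// becomes the oauth_signature value in the Authorization header.
string oauthSignature;
using (var hmac = new HMACSHA1(Encoding.ASCII.GetBytes(
    Uri.EscapeDataString(consumerSecret) + "&" + Uri.EscapeDataString(tokenSecret))))
{
    oauthSignature = Convert.ToBase64String(hmac.ComputeHash(Encoding.ASCII.GetBytes(signatureBase)));
}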
I'm trying to bypass the hCaptcha in the Discord account registration using Selenium WebDriver in C#. I'm using the CapMonster Cloud API for solving the captcha itself, and as a response I'm getting a bypass token.
The problem that I currently have is that I can't locate the callback function that I need to call/submit, in order to pass the hCaptcha.
I'm setting the bypass token into "g-recaptcha-response" and "h-captcha-response" textareas, but can't find a way to locate and call the callback function. There is no form to be submitted.
using selenium webDriver in C#
10/10 would recommend doing Discord captcha bypasses using:
PuppeteerExtraSharp/ExtraStealth
(as Selenium has some obvious tracers)
Puppeteer has a lot more freedom in its API, and 2Captcha is a much more popular method for solving hCaptchas.
I know this doesn't answer your question, but I hope you look into this as a potentially better alternative if you do not receive a more traditional answer.
You can do that with the Anti-Captcha.com plugin, which will do the job automatically. It injects its own callbacks, so when a token is ready it submits the form. If you ever have problems with the plugin, the support team there will help you out.
Web communication has to happen using one of the methods defined on this page.
So anything sent between a server and the browser has to use one of those methods. Generally the most common methods are POST and GET.
The statement "There is no form to be submitted" is somewhat confusing. A form is just a display of fields to collect data from a user. If a website does not need user input, it does not show a form. It instead captures the required data and sends a POST request to the server (without the user ever noticing), in a manner similar to how a form would have sent the data. This is normal behavior for almost all major websites; an example is the Google Analytics tracking code.
So what you need to look for is a POST request (or possibly PUT, or maybe GET; it depends) where the data you are targeting is sent or received.
In your case there is indeed a form that displays the captcha (that is how you see it) and an associated POST request that does what you need.
The URL for the POST request that fetches the captcha is POST /getcaptcha?s=xxxxxxxx-xxxe-xxxx-xxxx-xxxxxxxxxxxx HTTP/3
The URL where it is submitted is POST /api/v9/auth/register HTTP/3
These basics apply to any web communication and not just the website in question.
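To make the "send a POST request without the user ever noticing" part concrete, here is a rough sketch of submitting form-style data programmatically with HttpClient (the URL and field names are placeholders, not any site's actual contract):
using System.Collections.Generic;
using System.Net.Http;

// (inside an async method)
using (var client = new HttpClient())
{
    // The same key/value pairs an HTML form would submit, sent with no visible form.
    var formData = new FormUrlEncodedContent(new Dictionary<string, string>
    {
        { "name", "some value" },          // placeholder field
        { "email", "user@example.com" }    // placeholder field
    });

    HttpResponseMessage response = await client.PostAsync("https://example.com/api/endpoint", formData);
    string body = await response.Content.ReadAsStringAsync();
}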
Maybe the title is not that clear, but...
I have created a POST query that works in Postman using OAuth 1.0 authentication.
My calls are made to the URL:
https://lo.enghist.liveperson.net/abc/api/def/1234567/ghi/search
How does Postman know all the other URLs, such as the request token URL, etc.?
I'm trying to rewrite it in a custom C# app but have no idea how to track what happens when I click Send; if I go to the Developer Console I only see the final request with the final params that were obtained somewhere.
Is it always something default like:
https://lo.enghist.liveperson.net/oauth/request_token
Answering myself:
I didn't correctly understand OAuth 1.0. I first thought there was a separate URL that we call to receive a token, which we then use to make the final call. That is not the case: we build the authorization value ourselves from the secrets, a nonce (random string), and a few other parameters; it's all hashed and sent to the web service, which performs the same calculation and compares the two values.
Postman now provides you with code: below the "Send" button there is a "Code" link that generates snippets in many languages, one of them being C# using RestSharp.
Regarding the above, it sadly shows a semi-working solution; quite a lot of logic is skipped and all the values are precalculated, so I thought I needed to calculate them myself even though RestSharp can do that for you. Please check my final working code here:
https://stackoverflow.com/a/64819771/1619684
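For reference, here is a minimal sketch of letting RestSharp handle the OAuth 1.0 signing for you (the credentials and request body are placeholders; the exact API differs between RestSharp versions, this matches the older pre-v107 style):
using RestSharp;
using RestSharp.Authenticators;

// Placeholder credentials - use the same consumer key/secret and token/secret as in Postman.
var client = new RestClient("https://lo.enghist.liveperson.net");
client.Authenticator = OAuth1Authenticator.ForProtectedResource(
    "consumerKey", "consumerSecret", "accessToken", "accessTokenSecret");

var request = new RestRequest("abc/api/def/1234567/ghi/search", Method.POST);
request.AddJsonBody(new { keyword = "test" });   // placeholder body

// RestSharp generates the nonce, timestamp, signature base and HMAC for each call.
IRestResponse response = client.Execute(request);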
So I've been working on an API that, following REST, allows the user to get data from a database. Pretty simple: the URL is along the lines of local:port/projects/[id of project]. The API returns some XML with 4 or 5 results.
What I'm having trouble with is PUT. As far as I understand it, I should use the same URL but with the PUT request method, and include the data that I want sent as a parameter. The problem seems to be that when I run the PUT, it just returns the same data as a GET.
I'm using the following site to test this: wst dot mytechlabs dot com (won't let me post two links here).
The code for my controller is located here: http://pastebin.com/3HXXR4YY
Thanks in advance, I'll monitor this, so let me know if I forgot any info that would help.
I'm trying to query a delivery company's consignment status page, but it uses ASP.NET viewstate, and when the viewstate values are not supplied as parameters it does not return a result.
How can I reliably either:
Not submit the values, or submit blank values
Submit a constant value that is reliable.
The resource in question is http:// 61.9.216.242 /xlcoads/contrack.aspx
I've tried using cURL and been successful but I don't know if I need to change viewstate etc.
I've also contacted the company, without luck, about having a more RESTful version of the site made available.
First of all, you are trying to do something the site doesn't support, so there will not be any standard method for doing it.
__VIEWSTATE and __EVENTVALIDATION are used by .NET to provide a sense of statefulness over a stateless protocol (HTTP), but it is easy to work around.
Your friends will be Firefox and Firebug. Submit 4-5 different requests and inspect the sent data in Firebug; you will be able to figure out which values are constant and which change. Once you have figured that out, use WebRequest to fetch the page, extract the viewstate and other values as needed, then make another WebRequest to submit the modified data along with your search string.
And yes, I use this method for a site with the same problem.
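A rough sketch of that approach (using HttpClient and a regex here instead of WebRequest; the consignment field name is a guess, so check the actual form's control names in Firebug):
using System.Collections.Generic;
using System.Net.Http;
using System.Text.RegularExpressions;

// (inside an async method)
using (var client = new HttpClient())
{
    // 1. GET the page to obtain the current __VIEWSTATE / __EVENTVALIDATION values.
    string html = await client.GetStringAsync("http://61.9.216.242/xlcoads/contrack.aspx");
    string viewState = Regex.Match(html, "id=\"__VIEWSTATE\" value=\"([^\"]*)\"").Groups[1].Value;
    string eventValidation = Regex.Match(html, "id=\"__EVENTVALIDATION\" value=\"([^\"]*)\"").Groups[1].Value;

    // 2. POST the hidden fields back along with the search data.
    var form = new FormUrlEncodedContent(new Dictionary<string, string>
    {
        { "__VIEWSTATE", viewState },
        { "__EVENTVALIDATION", eventValidation },
        { "txtConsignment", "123456" }   // placeholder control name and value
    });
    HttpResponseMessage response = await client.PostAsync("http://61.9.216.242/xlcoads/contrack.aspx", form);
    string result = await response.Content.ReadAsStringAsync();
}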
Similar questions have been asked about the nature of when to use POST and when to use GET in an AJAX request
Here:
What are the advantages of using a GET request over a POST request?
and here: GET vs. POST ajax requests: When and how to use either?
However, I want to make it clear that that is not exactly what I am asking. I understand idempotence, sensitive data, the ability of browsers to retry in the event of an error, and the ability of the browser to cache query string data.
My real scenario is such that I want to prevent my users from being able to simply enter in the URL to my "Compute.cshtml" file (i.e. the file on the server that my jQuery $.ajax function posts to).
I am in a WebMatrix C#.NET Web Pages environment and I have tried preceding the file name with an underscore (_), but apparently an AJAX request falls under the same restriction that the underscore is designed to enforce, and it, of course, breaks the request.
So if I use POST I can simply use this logic:
if (!IsPost) //if this is not a post...
{
    Response.Redirect("~/"); //...redirect back to home page.
}
If I use GET, I suppose I can send additional data like a string containing the value "AccessGranted" and check it on the other side to see if it equals this value and redirect if not, but this could be easily duplicated through typing in the address bar (not that the data is sensitive on the other side, but...).
Anyway, I suppose I am asking if it is okay to always use POST to handle this logic or what the appropriate way to handle my situation is in regards to using GET or POST with AJAX in a WebMatrix C#.net web-pages environment.
My advice is, don't try to stop them. It's harmless.
You won't have direct links to it, so it won't really come up. (You might want your robots.txt to exclude the whole /api directory, for Google's sake).
It is data they have access to anyway (otherwise you need server-side trimming), so you can't be exposing anything dangerous or sensitive.
The advantages of using GET for GET-like requests are many, as you linked to (caching, semantics, etc.).
So what's the harm in having that url be accessible via direct browser entry? They can POST directly too, if they're crafty enough, using Fiddler "compose" for example. And having the GETs be accessible via url is useful for debugging.
EDIT: See sites like http://www.robotstxt.org/orig.html for lots of details, but a robots.txt that excluded search engines from your web services directory called /api would look like this:
User-agent: *
Disallow: /api/
Similar to IsPost, you can use IsAjax to determine whether the request was initiated by the XmlHttpRequest object in most browsers.
if (!IsAjax)
{
    Response.Redirect("~/WhatDoYouThinkYoureDoing.cshtml");
}
It checks the request to see whether it has an X-Requested-With header with the value XMLHttpRequest, or whether there is an item in the Request object with the key X-Requested-With whose value is XMLHttpRequest.
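In other words, roughly (a sketch of the check described above, not the framework's actual source):
// True when the request was made via XMLHttpRequest (e.g. jQuery's $.ajax),
// false for a URL typed directly into the address bar.
bool looksLikeAjax =
    Request.Headers["X-Requested-With"] == "XMLHttpRequest" ||
    Request["X-Requested-With"] == "XMLHttpRequest";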
One way to detect whether the page was requested directly rather than via your AJAX call is to check for the presence of the HTTP Referer header. Directly typed URLs won't send a referrer, but you still won't be able to differentiate the AJAX call from a simple anchor link.
(Just keep in mind that some browsers don't send the header for XHR requests.)
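A minimal sketch of that referrer check (with the caveats above in mind):
// A directly typed URL normally arrives with no referrer at all.
if (Request.UrlReferrer == null)
{
    Response.Redirect("~/");   // treat it as a direct visit and send the user home
}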