It's a possible duplicate of this, and I have seen Is there an existing Google+ API?
Now, my question: I have seen the Google+ History API and also the Google+ API.
I have never worked with APIs before and don't know where to start. All I know is that I have to write code that posts to the Google+ page for my brand, and I understand I need an access token to do this. Can someone tell me where I should start and what I need to know before understanding and implementing this?
I know there are a number of websites that let us post to a Google+ page, for example HootSuite, which posts to all the different networking sites at once, and it does the same for Google+ as well. So I am assuming there is definitely a workaround to do this. Can someone help me figure out where to start?
Thanks!
PS: Let me know if I am not clear or my question is too vague!
There is currently no publicly documented API that lets you automatically post to your Google+ page.
There are some tools (such as HootSuite) that do allow this, however, and since you have never used an API, this may be a good path for you to investigate.
The API that HootSuite is using is slowly opening up to other vendors. See https://plus.google.com/u/0/104946722942277428266/posts/LUi2ZNyRHag for more information about what is coming and how you can sign up to request access to this.
Glancing at the API really quickly, I'm not seeing what call you would make to create a post; I'm only seeing a read API so far...
But if you can find where to send the data, you need to look into how to HTTP POST data to that URL.
Your main options built into C# are WebClient.UploadString() or an HttpWebRequest with the Method property set to POST.
Then you upload the JSON object specified to create the post. There are probably frameworks for AJAX or JSON that would make it easier, but for just making a single type of post, doing it manually wouldn't be too hard.
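For illustration, here is a minimal sketch of such a POST using WebClient.UploadString(). The URL, JSON payload, and token are hypothetical placeholders, since the real write endpoint isn't publicly documented:

using System;
using System.Net;

class PostExample
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Both headers are assumptions about what a typical JSON API expects.
            client.Headers[HttpRequestHeader.ContentType] = "application/json";
            client.Headers[HttpRequestHeader.Authorization] = "Bearer YOUR_ACCESS_TOKEN"; // placeholder

            string json = "{\"content\":\"Hello from my app\"}"; // hypothetical payload
            // UploadString sends an HTTP POST by default.
            string response = client.UploadString("https://example.com/api/posts", json); // hypothetical URL

            Console.WriteLine(response);
        }
    }
}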
I am trying to understand what exactly a REST-based API is. From what I understand, it is just a convention for writing the functions within the API? All functions should be of either GET/POST/DELETE/PUT form? So, for example, a function in a REST API could be:
public string getLastName(User x)
{
    return x.lastName;
}
I am mainly confused about how JSON/XML play a role in this?
It's more than just a convention. The concept behind a REST API is that you access it using the HTTP verbs, and that the functions those verbs are mapped to perform the described actions.
For example:
GET will return data to the caller/sender
DELETE will delete a record
And it goes further, but a lot of it is based on relying on HTTP to provide a level of consistency. For example, in a RESTful service, you might use the Accept HTTP header to request a JSON response or an XML response by supplying the application/json or application/xml values, respectively. This is just a simple example, and it is up to the implementer to decide how their API will work, but it highlights the importance placed on leveraging HTTP.
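As an illustration, here is a small sketch of requesting the same resource as JSON or XML just by changing the Accept header; the URL is a hypothetical placeholder:

using System;
using System.IO;
using System.Net;

class AcceptExample
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("https://example.com/api/users/42"); // hypothetical URL
        // Ask for JSON; change this to "application/xml" for an XML representation.
        request.Accept = "application/json";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}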
Why JSON/XML?
Along the same lines, JSON and XML are used because they are widespread, standard ways of representing and transmitting data over the web. JSON (JavaScript Object Notation) is very common for data transfer (especially on GET requests) because most requests come from JavaScript, and JS can consume JSON directly without the parsing required when dealing with XML. On the other hand, XML provides its own benefits, such as the ability to use schemas and namespaces. You may already be aware of the benefits and drawbacks of each, but that's a separate discussion. The main point is that JSON/XML are the primary ways of communicating data in a REST API because both are de facto standards of the web.
There are lots of good resources for more information, this MSDN article may be helpful: http://msdn.microsoft.com/en-us/library/dd203052.aspx
There's a lot of confusion and misconceptions around REST. Unfortunately, it's a lot more common to find applications that are doing the exact opposite of what REST means and calling themselves RESTful than real REST applications.
From what I understand it is just a convention for writing the functions within the API?
No, REST is not just a convention for writing functions within the API, nor is it directly related to SOAP or HTTP as other answers here say. REST is a software architectural style inspired by the successful design decisions made for the web itself. To put it in simple terms, a REST API should work the way a website does.
When you enter a website, you go to a homepage having an idea what the website is about, and the HTML document will have hyperlinks pointing you to the resources you need. The only out-of-band information is the media-types of the resources themselves, not how to find them. For instance, when you enter StackOverflow, you know what questions and answers are, and you look for links pointing you to them. How your browser renders those links, and how you choose and follow them, isn't different from other websites, like an email or news website. What makes it different is the media-type of the resources you're after.
That's how a REST API should work. Clients should not depend on any out-of-band information other than detailing what the resources do. They should connect to a home page that returns them links they should follow to do whatever they need. If an API doesn't do this, then it's not REST. Period.
I like to call those APIs "street-REST", because people often implement them by copying what they see in other APIs that also call themselves REST, and by what other people talk about REST.
All functions should be of either GET/POST/DELETE/PUT form?
That's a common confusion, and you'll see a lot of it, including people conflating REST with CRUD operations, or claiming that REST doesn't allow any other verbs, etc. That's bull.
REST is independent of any particular protocol, so it doesn't make sense to say functions should follow HTTP methods. REST constrains your application to a uniform interface, meaning that whatever protocol you're using, you must stick to its standardized behavior as much as possible. If you're implementing a REST application over HTTP, this means your API must stick to the HTTP methods for client-server interaction, meaning you can't invent other HTTP methods as some applications using HTTP do. If you change the communication protocol, clients need to know that before entering your API, and that's more out-of-band information.
How you implement this on your code is irrelevant to REST. REST isn't a development pattern or philosophy, but an architectural style.
I am mainly confused about how JSON/XML play a role in this?
There's a lot of confusion on this too. In a REST application you should define abstract entities with states describing all the behavior you need. The API will serve as a channel to transfer media-type representations of those entities between client and server. REST means Representational State Transfer. The URI the client requests is an identifier for that resource, and the metadata of that request tells the server which media-types the client is prepared to accept. JSON/XML don't play any direct role in REST at all. They are simply representation media-types that are easier for computers to parse and obtain information from, in contrast to formats like text/html, which is meant to be rendered for human visualization by a browser.
For example, take StackOverflow itself. What's a question, in the abstract sense? It's a request for information. How is that abstract resource formally defined? There's an author, there's the actual text of the question being asked, there are the upvotes and downvotes, the comments and possible answers, etc., all of which are also abstract resources with their own definitions. The actual data is stored in a database somewhere, and when you request your homepage, it returns links with URIs identifying those questions. Take this question, for instance: it has the URI http://stackoverflow.com/questions/24092517. When I want to read that question as a pretty document rendered in my browser, I request that URI, telling the server through the Accept header that I want a text/html representation, and my browser knows how to render the HTML into a pretty page. On the other hand, when I want to request that question to store it back in a database, for instance, I don't need all the cute stuff required to render a pretty document page, so I ask for a format that's easier to parse and doesn't contain a lot of unnecessary information, like JSON or XML.
Most people who build street-REST APIs understand this point, but they miss the most interesting part, which is that you're not limited to media-types that already exist. You can create private media-types that only exist within your API, or your company's ecosystem of applications. So, for instance, instead of calling the media type of all JSON documents application/json, no matter the content of them, we could have custom media-types that reflect more accurately that particular type of resource. So, we could have a application/vnd.stackoverflow.question.v1+json for StackOverflow questions represented in a JSON format. Once you do that, instead of wasting time documenting operations already standardized by the communication protocol, you should focus on documenting that custom media-type and how to interact with it, independently of the communication protocol. Once you have that clear, clients can interact with your services using any protocol available.
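Here is a small sketch of what requesting such a custom media-type might look like from a client; the media-type name follows the hypothetical StackOverflow example above:

using System;
using System.Net.Http;
using System.Net.Http.Headers;

class VendorMediaTypeExample
{
    static void Main()
    {
        using (var client = new HttpClient())
        {
            // Ask specifically for the (hypothetical) vendor media-type.
            client.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("application/vnd.stackoverflow.question.v1+json"));

            string body = client.GetStringAsync("http://stackoverflow.com/questions/24092517").Result;
            Console.WriteLine(body);
        }
    }
}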
If you understand these three main points, you understand what REST is about. By using hyperlinks as the engine of your application state you're not tied to any particular point in time of your implementation. Your server can evolve at will, you can change URIs as much as you want, and clients won't break. By sticking to the standardized protocol, it's easier for generic clients to make use of your API, not to mention that it's easier for developers to understand how to integrate if they already know that you won't break the protocol. By focusing your design and documentation efforts on your media-types, not on protocol details and URI semantics, you're avoiding introducing more out-of-band information needed to drive your API, and your clients are also more resilient to changes.
A REST API acts as a middleman delivering data between a web service and an application that wants to use some resource from that service in its operations. This is where JSON comes into play: it handles the data transfer over the channel. When your application asks for a list of resources, the server actually gets complex data or a queryset from the database, turns it into simple data types through serialization, and then into JSON to traverse the channel.
A RESTful API is much more than just a convention for writing functions.
The abbreviation REST stands for "REpresentational State Transfer".
REST APIs are used to call resources and allow the software to communicate based on standardized principles, properties, and constraints. REST APIs operate on a simple request/response system. You can send a request using these HTTP methods:
GET
POST
PUT
PATCH
DELETE
Also, REST API requests can include endpoints, headers, URL parameters, and a request body.
The endpoint (or route) is the URL you request. The path determines the resource you're requesting. Think of it like an automated answering machine that asks you to press 1 for one service, press 2 for another service, 3 for yet another service, and so on.
I am mainly confused about how JSON/XML play a role in this?
When you send a request to an endpoint, it responds with the relevant data, which is generally formatted as JSON, XML, plain text, images, HTML, and more.
I am mainly confused about how JSON/XML play a role in this?
JSON and XML are data-interchange formats. There are others, but over the years these two became the most popular because of the low latency they provide. XML is probably still a little more popular than JSON, but JSON is more compact.
Another main reason to use them is that they are both supported by almost all languages and their frameworks.
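For instance, here is the same hypothetical record in both formats; note how much more compact the JSON version is:

{ "id": 42, "lastName": "Goethe" }

<user>
  <id>42</id>
  <lastName>Goethe</lastName>
</user>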
Let's say you have an ASP.NET MVC 4 Web API project. One of your resources, when called by URL, waits while it gathers performance-monitoring data for a specified amount of time and then returns it all in JSON form once it has completed. However, between entering the URL and the process completing, is there a way to return data dynamically, i.e., at each second as performance data is retrieved, and display it in the browser?
Here's the issue:
Calling an API resource via URL is static as far as I know and as far as anyone seems to know. Meaning, the JSON won't appear until the resource has retrieved all of its information, which is not what I want. I want to be able to constantly update the JSON in the browser WHILE the API resource is retrieving data.
Since I'm working in a repository class and a controller class, Javascript is not an option. I've tried using SignalR, but apparently that does not work in this scenario, especially since I'm not able to use Javascript.
Is there any possible way to get real-time data with a URL call to the API?
Case in point:
Google Maps.
The only way you can call the Google Maps API via URL is if you want a "static" map, that displays a single image of a specific location. No interaction of any kind. If you want a dynamic, "real time" map, you need to build a web application and consume the API resource in your application with Javascript in a view page. There is no way to call it via URL.
You can put together an old-school ASP.NET IHttpHandler implementation regardless of the MVC controllers and routing pipeline: http://support.microsoft.com/kb/308001. You would then have full access to the response stream, and you can buffer or not as you see fit. You've got to consider, though, whether you want to tie up a worker thread for that long, and if you're planning on streaming more or less continuously, then you definitely want to use IHttpAsyncHandler while you await further responses from your repo.
Having said that, Web API supports Async too but it's a little more sophisticated. If you plan on sending back data as-and-when, then I'd strongly recommend you take another look at SignalR which does all this out of the box if you are planning on having some JavaScript client side eventually. It's much, much easier.
If you really want to write Async stuff in Web API though, here's a couple of resources that may help you;
http://blogs.msdn.com/b/henrikn/archive/2012/02/24/async-actions-in-asp-net-web-api.aspx
And this one looks like exactly what you need;
http://blogs.msdn.com/b/henrikn/archive/2012/04/23/using-cookies-with-asp-net-web-api.aspx
To use the PushStreamContent() class in that example, though, note that you won't find it in your System.Net.Http.dll; you'll need to get it from the Web API stack nightly builds at http://aspnetwebstack.codeplex.com/SourceControl/list/changesets
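For reference, here is a minimal sketch of what such a streaming action might look like; the controller name and the fake per-second sampling are placeholders, and it assumes the nightly-build bits mentioned above:

using System.IO;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Web.Http;

public class PerfController : ApiController
{
    public HttpResponseMessage Get()
    {
        var response = Request.CreateResponse(HttpStatusCode.OK);
        response.Content = new PushStreamContent((stream, content, context) =>
        {
            using (var writer = new StreamWriter(stream))
            {
                for (int second = 0; second < 10; second++)
                {
                    // Write one JSON line per sample and flush it to the client
                    // immediately instead of buffering the whole response.
                    writer.WriteLine("{\"second\": " + second + "}");
                    writer.Flush();
                    Thread.Sleep(1000); // stand-in for gathering real perf data
                }
            } // disposing the writer ends the response
        }, "application/json");
        return response;
    }
}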
YMMV - good luck!
I think what you're asking for is a sort of streaming mechanism over HTTP. Of course doing that would require sending a response of unknown content length.
This question deals with that sort of chunked transfer encoding which is probably part of the solution. Not knowing what is on the client side, I can't say how it would deal with the JSON you want to push through.
Great question.
You can certainly start streaming the response back to the browser as soon as you want. It's normally buffered, but it doesn't have to be; I've used this trick in the past. In fact, SignalR does something similar in some operational modes, although I should add (now that I've re-read your question) that although HTTP supports this, it won't be obvious by default from a Web API controller. I think you'll need to get a little lower into the response handling so you can flush the buffer, rather than simply returning a POCO from your web method, if that's what you mean.
Essentially, you'll be wanting to write and flush the buffer after you've gathered each piece of information, so I don't think you'll be able to do that with the typical model. I think you'll need a custom message handler http://www.asp.net/web-api/overview/working-with-http/http-message-handlers to get at the payload in order to do that.
I'm curious though, you say you want to send back JSON but you're not allowed JavaScript?
When I try to download a Google Search result page using HttpWebRequest in C#, everything works very well if I use simple search terms, like
http://www.google.com/search?q=stackoverflow
But when I try to make it more complex, for example
http://www.google.com/search?q=inurl%3A%22goethe%22%20filetype%3Apdf
which means
inurl:"goethe" filetype:pdf
I will receive a 503 error because Google thinks I'm a bot. Is there any workaround?
Edit: UserAgent is set to "Mozilla/5.0".
well.. if your search is done programmatically, then Google just so happens to be right.. you ARE a bot :-)
cheers!
I don't believe it has much to do with how complex your query happens to be. The only thing that really matters is if they think that you're a bot. If you're submitting queries at a very high rate, then Google will think you're a bot so there are several possible solutions:
Reduce the rate at which you're sending queries.
Use proxies to make multiple queries.
Additionally, it's important to note that if you make web requests without saving cookies, then that might be another "signal" for Google to think that you're a bot. You should also be very careful not to get the proxies blocked by Google, because you're scraping the big G. It's hard to find free proxies and if you abuse them, then they'll get shut down so be a good citizen!
Good luck!
Try Google Custom Search APIs and Tools. This will allow you to retrieve search results without fear of being denied access (up to a limit).
Alternatively, mimic all nuances of a typical search query. For example, in my browser, searching for inurl:"goethe" filetype:pdf results in this URL being requested.
Then there are cookies and other HTTP headers. Make it look a lot more like a browser is requesting it.
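A sketch of what "looking more like a browser" can mean in C#: a realistic User-Agent, an Accept header, and a CookieContainer so cookies persist across requests. The User-Agent string here is just an example value:

using System;
using System.IO;
using System.Net;

class SearchFetcher
{
    static void Main()
    {
        var cookies = new CookieContainer(); // reuse across requests so cookies persist

        var request = (HttpWebRequest)WebRequest.Create(
            "http://www.google.com/search?q=inurl%3A%22goethe%22%20filetype%3Apdf");
        request.CookieContainer = cookies;
        request.UserAgent = "Mozilla/5.0 (Windows NT 6.1; rv:25.0) Gecko/20100101 Firefox/25.0"; // example UA
        request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}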
I have been looking around for hours trying to find a clear, simple solution to my question and have yet to find a good answer. I am trying to do a URL request in Flash to my NOPCommerce site. I want to pass a GET value to my .cs file, which I'll then use to grab specific information and return it back to Flash. How would I set up the C# or ASP.NET side of things? If anyone could give me an example of what I am looking for, I would greatly appreciate it.
I don't know if I am supposed to use a .aspx, .cs or .ascx file.
Thanks,
Brennan
I found it to be extremely simple with web services in as3. Here is a link to see what I mean
As3 Web Services
Use the HttpWebRequest class to GET the variables, do the magic and return a result by invoking the HttpWebRequest again.
Examples and usage here:
http://www.csharp-station.com/HowTo/HttpWebFetch.aspx
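On the server side, one simple way to receive that GET value (a sketch, not specific to NOPCommerce; the handler name and parameter are hypothetical) is a generic .ashx handler that reads the query string and writes back something Flash can parse:

using System.Web;

public class GetProduct : IHttpHandler // would be requested as GetProduct.ashx?id=42
{
    public void ProcessRequest(HttpContext context)
    {
        // Read the GET value sent from Flash.
        string id = context.Request.QueryString["id"];

        // Look up whatever you need with that value, then write the result.
        context.Response.ContentType = "text/plain";
        context.Response.Write("You asked for item " + id);
    }

    public bool IsReusable { get { return false; } }
}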
You have a few options for server-side communication with flash.
1. Flash remoting. This is the most popular because it's the most performant, but not the easiest to understand at first glance. It transfers data in a binary format. Available libraries are Weborb and Fluorine.
2. Web services, as mentioned in a previous post.
3. Ajax/JSON. I think with Flash Player 11.3, JSON decoding is native in the player now.
4. Straight-up HTTP request.
5. Sockets (not recommended for beginners).
To answer your question as you asked it, though, for all but #4, you'd be using a CS file to retrieve your data. For #4, you'd most likely be using an .aspx page, but it could be a combination of .aspx and .ascx files.
My recommendation is that you do some research on each of these methods to decide what would work best with your development environment, required level of security, and project. Then, ask specific questions about each method as necessary.
Good Luck!
I've been entrusted with an idiotic and retarded task by my boss.
The task is: given a web application that returns a table with pagination, write software that "reads and parses it", since there is nothing like a web service providing the raw data. It's like a "spider" or "crawler" application to steal data that is not meant to be accessed programmatically.
Now the thing: the application is made with the standard ASPX WebForms engine, so there's nothing like standard URLs or posts; instead there's the dreadful postback engine, crowded with JavaScript and inaccessible HTML. The pagination links call the infamous javascript:__doPostBack(param, param), so I think it wouldn't even work if I tried to simulate clicks on those links.
There are also inputs to filter the results and they are also part of the postback mechanism, so I can't simulate a regular post to get the results.
I was forced to do something like this in the past, but it was on a standard-like website with parameters in the querystring like pagesize and pagenumber so I was able to sort it out.
Does anyone have a vague idea if this is doable, or should I tell my boss to quit asking me to do this retarded stuff?
EDIT: maybe I was a bit unclear about what I have to achieve. I have to parse, extract and convert that data in another format - let's say excel - and not just read it. And this stuff must be automated without user input. I don't think Selenium would cut it.
EDIT: I just blogged about this situation. If anyone is interested can check my post at http://matteomosca.com/archive/2010/09/14/unethical-programming.aspx and comment about that.
Stop disregarding the tools suggested.
No, the parser you'd have to write isn't the obstacle: WatiN and Selenium will both work in that scenario.
PS: Had you mentioned anything about needing to extract the data from Flash/Flex/Silverlight or similar, this would be a different answer.
BTW, the reason to proceed or not is definitely not technical, but ethical and maybe even legal. See my comment on the question for my opinion on this.
WatiN will help you navigate the site from the perspective of the UI and grab the HTML for you, and you can find information on .NET DOM parsers here.
Already commented, but I think this is actually an answer.
You need a tool that can click client-side links and wait while the page reloads.
Tools like Selenium can do that.
Also (from comments): WatiN, Watir.
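For what it's worth, a minimal sketch with the Selenium WebDriver C# bindings, clicking WebForms pager links (which fire __doPostBack for you); the URL and selectors are hypothetical placeholders:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class GridScraper
{
    static void Main()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("http://example.com/grid.aspx"); // hypothetical URL

            for (int page = 1; page <= 5; page++)
            {
                // Read the current page's rows (the selector is a placeholder).
                foreach (IWebElement row in driver.FindElements(By.CssSelector("#grid tr")))
                {
                    Console.WriteLine(row.Text);
                }

                // Clicking the pager link triggers __doPostBack and reloads the page;
                // a WebDriverWait may be needed on slow pages.
                driver.FindElement(By.LinkText((page + 1).ToString())).Click();
            }
        }
    }
}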
@Insane, the CDC's website has this exact problem, and the data is public (we taxpayers have paid for it). I'm trying to get the survey and question data from http://wwwn.cdc.gov/qbank/Survey.aspx and it's absurdly difficult. Not illegal or unethical, just a terrible implementation that appears to be intentionally making it difficult to get the data (it's also inaccessible to search engines).
I think Selenium is going to work for us, thanks for the suggestion.