For the past year and a half I have been using this method:
string page;
using (var wc = new WebClient())
{
    page = wc.DownloadString("http://WWW.LINKEDINPROFILEIWANTTOOPEN.COM");
}
and I haven't had any problems, downloading thousands of profiles a day and then analyzing them with a C# web service that looks through the HTML for the tags I'm after. Today I was presented with this error:
"System.Net.WebException: The remote server returned an error: (999)
Request denied."
I'm guessing LinkedIn has locked me out because they want me to use the API instead. After looking through the documentation for the last hour or so, it seems the API is set up around building your own LinkedIn app. I just want the standard HTML, or just the basic fields such as the name and the various positions they have held over time. It also wants me to log in as myself, which seems unnecessary.
Is the only way around this problem to sign up for the API, authorize myself, and get the fields through the API, or does anyone know another way to simply download the HTML of a public profile for which I have the URL?
Related question:
I have XML data and I need to download it to a string. But C# returns an error like "The remote address returned an error", even though the site is alive and works in my Firefox. How can I download this data?
My code:
using (var client = new WebClient())
{
    string y = client.DownloadString("http://dizilab.com/diziler.xml");
    MessageBox.Show(y);
}
According to a CloudFlare employee who answered this question: cURL - Load a site with CloudFlare protection
If you own the hosted site, you can whitelist your calling domain to allow access; otherwise you "supposedly" have no way of getting around this protection. However, there is a second answer there that offers an option you may find useful if you're familiar with cURL, sketched below.
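If you want to try that route from C#, a rough equivalent of the cURL approach would look something like the following. The cookie names are the ones CloudFlare used at the time; the User-Agent and cookie values are placeholders you would copy from your own browser session, and whether this gets through depends on the site's protection settings.
using System.IO;
using System.Net;

var request = (HttpWebRequest)WebRequest.Create("http://dizilab.com/diziler.xml");
// Send the same User-Agent your browser used when it passed the CloudFlare check.
request.UserAgent = "Mozilla/5.0 (Windows NT 6.1; rv:30.0) Gecko/20100101 Firefox/30.0";
// Reuse the cookies CloudFlare set in that browser session (values are placeholders).
request.CookieContainer = new CookieContainer();
request.CookieContainer.Add(new Cookie("__cfduid", "value-from-your-browser", "/", "dizilab.com"));
request.CookieContainer.Add(new Cookie("cf_clearance", "value-from-your-browser", "/", "dizilab.com"));

using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string xml = reader.ReadToEnd();
}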
I'm writing an app to ensure my website is always up to date with our suppliers' products. I can get the categories but not the subcategories.
Basically, a web request to "xxxx/products/8-propagation/?sub-category=96" always returns the content of "xxxx/products/8-propagation/". I used the console in Firefox to see which headers are sent when browsing; I didn't see anything in particular, but I emulated them anyway.
Is there any way to retrieve PHP requests from URLs, or is this something that can only be handled server-side?
I have tried numerous ways of doing this, all with the same result. One of my attempts is sketched below for reference.
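For example, one attempt looked roughly like this (the User-Agent and Accept values are illustrative; I copied the real headers from the Firefox console):
using System.IO;
using System.Net;

var request = (HttpWebRequest)WebRequest.Create("http://xxxx/products/8-propagation/?sub-category=96");
request.UserAgent = "Mozilla/5.0 (Windows NT 6.1; rv:30.0) Gecko/20100101 Firefox/30.0";
request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
request.CookieContainer = new CookieContainer(); // keep any session cookies the site sets

using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    // This is still the HTML of /products/8-propagation/ with no sub-category filter applied.
    string html = reader.ReadToEnd();
}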
Show us your server-side code. I think this is a problem with the routing in your controller.
I'm working on a continuing API project. The current issue at hand is being able to download my data from the AtTask server in precisely the folder structure it exists in on the AtTask servers. I've got the folder creation working nicely; the data types between Document, Document Folder, and Document Version seem to be pretty clear. I am a little disillusioned by the fact that the extension isn't in the Document object (and that I have to refer to the Document VERSION for it)... but I can see some of the reasoning for that from a design perspective.
The issue I'm running into now is that I need to get the file content. I originally thought from the API documentation that I'd be able to get to the file contents the same way the documentation recommends uploading them: through the handle. Unfortunately, neither document nor docv seems to let me access the handle except to write a new file.
So that leaves the "download URL" as the remaining option. If I build the URL strings from the API calls using my browser, I get a URL of the form https://attaskURL/document/download?ID=xxxx (and can also get the version ID and such). If I paste the URL into the browser where I'm logged in to the AtTask user interface, it works fine and I can download the file. If, instead, I use my C# code to do so, I get the login page returned as a stream instead of my actual file, because I'm not authenticated. I've tried creating a network credential with the username and password and attaching it to the request, but to no avail.
I imagine there are a couple of ways to solve this problem: the easy one being to find a way to "log in" to the download site through code (which doesn't seem to use the usual network credential object in C#), OR to find a way to access the file contents through the API.
Appreciate your thoughts!
It looks like you can use the download URL if you put a session ID in the URL. The details on getting a session ID are here (basically you just call login, and a session ID is returned in JSON):
http://developers.attask.com/api-docs/#Authentication
Then cram it on the end of your document download URL:
https://yourcompany.attask-ondemand.com/document/download?ID=xxxx&sessionID=abc1234
I've given this a quick test and I'm able to access a document.
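A minimal sketch of the two steps in C# (the login endpoint and JSON shape follow the linked docs; the host, credentials, and document ID are placeholders, and a real JSON parser would be better than the regex):
using System.Net;
using System.Text.RegularExpressions;

using (var client = new WebClient())
{
    // Step 1: call login; the response is JSON containing a sessionID.
    string loginJson = client.UploadString(
        "https://yourcompany.attask-ondemand.com/attask/api/login" +
        "?username=me@example.com&password=secret", "");
    string sessionId = Regex.Match(loginJson, "\"sessionID\":\"([^\"]+)\"").Groups[1].Value;

    // Step 2: cram the session ID onto the end of the download URL and fetch the file.
    client.DownloadFile(
        "https://yourcompany.attask-ondemand.com/document/download?ID=xxxx&sessionID=" + sessionId,
        "document.bin");
}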
You can use the downloadURL and a sessionID IF you are not using SAML authentication.
I have tried it both ways and using SAML will redirect you to the login page.
I'm trying to parse through and obtain my albums (from my personal account, not my app) from Facebook using the Facebook C# SDK. My goal is to grab the 10-12 most recent photos on my account; however, I understand I have to grab the albums first.
So, I've tried numerous things and ended up with the following URL, which returns a 400 Bad Request:
https://graph.facebook.com/{my_user_id}/albums?access_token={my_access_token}
The token was obtained by calling:
https://graph.facebook.com/oauth/access_token?client_id={0}&client_secret={1}&grant_type={2}&scope={3}
Any ideas why I'd be getting the 400?
When using grant_type=client_credentials you're requesting an app access token. This will allow you to perform various administrative actions for your application. See "App Login" in http://developers.facebook.com/docs/authentication/.
However, when using the user parts of the Graph API you need to perform a User Login using the OAuth Dialog (sketched below). There are different ways of doing this, such as with the JavaScript SDK, which should be straightforward to use.
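For reference, the dialog flow looks roughly like this ({app_id}, {app_secret}, {redirect_uri}, and {code} are placeholders). First send the user to:
https://www.facebook.com/dialog/oauth?client_id={app_id}&redirect_uri={redirect_uri}&scope=user_photos
Facebook then redirects back to your redirect URI with a code parameter, which you exchange for a user access token:
https://graph.facebook.com/oauth/access_token?client_id={app_id}&redirect_uri={redirect_uri}&client_secret={app_secret}&code={code}
That user token, not the app token from client_credentials, is what the /albums call expects.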
I've not found a nice way of doing this in a standalone web app using the Facebook C# SDK without the JavaScript SDK (it's easy in a canvas app using the CanvasAuthorize attribute).
Here's an example of how to do it in a WinForms app: http://blog.prabir.me/post/Facebook-CSharp-SDK-Writing-your-first-Facebook-Application.aspx. It might work in an ASP.NET app if you could use the WebBrowser control. I've tried with WebClient but didn't have any luck.
Update
Looking at the sample here http://facebooksdk.codeplex.com/SourceControl/changeset/view/534da45e108f#Samples%2fCSMvcWebsite%2fControllers%2fHomeController.cs it looks like you should be able to use the FacebookAuthorize attribute in a standalone site.
Error code 400 means that the request was not correctly formatted. Verify that the final URL looks OK and try it in a browser.
The request could not be understood by the server due to malformed syntax. The client SHOULD NOT repeat the request without modifications.
E.g., if you try the following: https://graph.facebook.com/someuser/albums?access_token=1234 you would be presented with the following:
{
  "error": {
    "type": "OAuthException",
    "message": "Invalid OAuth access token."
  }
}
If you provide a valid token and a real user, the result will probably look a bit different, but in your case you get a 400 because there is something wrong with the request.
I have a method that searches for movies on IMDB. The problem is, I only handle the case where the site returns a page with movie OPTIONS. If the site automatically finds the movie in question and goes straight to it, my program breaks.
Is there a way for me to check, in C#, which URL the returned source code actually came from?
I think maybe you're trying to parse the page instead of using a web service to access the information. Parsing a page of dynamic content is difficult; if you want to do it, you must create a parser capable of handling situations like the one you describe.
You can try these links
Imdb Services
IMDB API
AllowAutoRedirect = false;
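A minimal sketch of how that helps (the search URL is illustrative): with auto-redirects disabled, an exact match comes back as a 3xx status whose Location header is the movie page, while a multiple-match search returns the normal options page.
using System.Net;

var request = (HttpWebRequest)WebRequest.Create("http://www.imdb.com/find?q=some+movie");
request.AllowAutoRedirect = false; // don't follow the redirect IMDB issues on an exact match

using (var response = (HttpWebResponse)request.GetResponse())
{
    if ((int)response.StatusCode >= 300 && (int)response.StatusCode < 400)
    {
        // Exact match: the redirect target is the movie page itself.
        string movieUrl = response.Headers["Location"];
    }
    else
    {
        // Normal 200: the body is the list of movie OPTIONS your parser already handles.
    }
}
Alternatively, leave redirects on and compare response.ResponseUri to the URL you requested; if they differ, you were redirected to a single movie page.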