I know this question has been posted a couple of times before, but I didn't get a clear answer or find a solution. I'm simply using WebClient.DownloadString on a website that uses SSL. Whenever I run my program, I get a "404 not found" error. I tried my program on a website that doesn't use SSL, and it worked perfectly.
Here's my code:
System.Net.WebClient webClient = new System.Net.WebClient();
string webData = webClient.DownloadString("https://example.com?user=" + listBox1.Items[currentIndex]);
I'm trying to make my program compatible with websites that use SSL. Does anyone have any examples on how to do this? Thanks, all help is appreciated.
System.Net.WebClient.DownloadString() does support websites that use SSL. If the server returned a 404 error, that means either the resource you're requesting doesn't exist or the web server is not configured to handle SSL requests the way you expect.
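If you want to see what the server actually said in that 404, one option (a minimal sketch, assuming .NET Framework 4.x; the URL and query value are placeholders) is to catch the WebException and read the error page the server sent back:

using System;
using System.IO;
using System.Net;

class Program
{
    static void Main()
    {
        var webClient = new WebClient();
        try
        {
            string webData = webClient.DownloadString("https://example.com?user=someUser");
            Console.WriteLine(webData);
        }
        catch (WebException ex)
        {
            // For HTTP errors (404, 500, ...) the server's response is attached
            // to the exception; its body often explains what went wrong.
            if (ex.Response != null)
            {
                using (var reader = new StreamReader(ex.Response.GetResponseStream()))
                {
                    Console.WriteLine(reader.ReadToEnd());
                }
            }
        }
    }
}

If the body turns out to be a genuine "not found" page, the problem is the URL rather than SSL.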
Related
I have XML data and I need to download it into a string, but C# returns an error like "The remote server returned an error", even though the site is up and works in my Firefox. How do I download this data?
My code:
WebClient x = new WebClient();
string y= x.DownloadString("http://dizilab.com/diziler.xml");
MessageBox.Show(y);
According to a CloudFlare employee who answered this question: cURL - Load a site with CloudFlare protection
If you own the hosted site, you can whitelist your calling domain to allow access, otherwise you "supposedly" have no way of getting around this protection. However, there is a second answer that offers an option that you may find useful if you're familiar with cURL.
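If you want to experiment with that second option from C# instead of cURL, the usual starting point is to send browser-like headers. This is only a sketch based on the assumption that the block is triggered by the missing headers; the header values are examples, and the request may still be stopped by CloudFlare's JavaScript challenge:

using System;
using System.Net;

class Program
{
    static void Main()
    {
        var client = new WebClient();
        // Pretend to be a normal browser; requests with no User-Agent are
        // commonly rejected. The exact strings here are just examples.
        client.Headers[HttpRequestHeader.UserAgent] =
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:60.0) Gecko/20100101 Firefox/60.0";
        client.Headers[HttpRequestHeader.Accept] =
            "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";

        string xml = client.DownloadString("http://dizilab.com/diziler.xml");
        Console.WriteLine(xml);
    }
}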
I'm quite a noob and have only been developing 'seriously' (as my job) for barely a couple of months, so I apologize in advance for my ignorance.
There is this web service that I need to consume from a C# client on an ASPX page I want to develop, but first I need to understand the web service. I'm not sure which language it's written in, but I think it's Perl, since the web service's URL looks like this: "http://wschsol.mideplan.cl/mod_perl/xml/fps-by-rut". This web service was developed by other people whom I cannot contact right now, and it is running on a Linux server to which I don't have access either.
The web service's job is pretty simple: it receives a person's national ID number and returns some information about that person in XML format, which I want to show on my client ASPX page with some grids and such.
I have read around the internet that it's possible to see a description of a web service and its methods by appending ?WSDL after the common ".asmx" extension, but in this case there is no extension, so I can't use ?WSDL. I'm guessing that maybe "fps-by-rut" is only a web method, and not the web service itself. So the question is: how do I use the web service?
Since I know what kind of request is expected (a person's ID), I tried to manually add an ID to the URL through the browser (like this: "http://wschsol.mideplan.cl/mod_perl/xml/fps-by-rut?rut=6985462-1"), and it responds normally in XML format.
I tried to add a web service reference for it in my project, but when I pasted the whole URL and clicked "go", it said it needs credentials. I have these credentials, a user and password, but they are not working. What confuses me is that there is another client for this same web service, written in classic ASP by the guy before me here, and I can access that code; the line on which he calls the web service looks like this:
xml.Open "GET", "http://wschsol.mideplan.cl/mod_perl/xml/fps-by-rut?rut="&rsVac(0), False,"user","password"
I have censored the "user" and "password" strings since those are the actual credentials. This classic ASP client works fine with those credentials. I tried to use them when creating the reference, but they are not working. Even more, when I manually added the ID through the browser it asked me for credentials, and they worked there too...
Am I going the wrong way? Please, guys, I need guidance. If there is a course out there I can read that helps me understand all of this web services stuff, I'd be hugely grateful. Or if someone can tell me which way to go; I'm pretty sure I'm heading in the wrong direction...
Thanks in advance for any help!!!
WSDL is used for SOAP, and neither of us knows whether this is a SOAP service.
You should just use an HttpClient and make your GET calls to the API.
You can use something like this:
var client = new HttpClient();
client.BaseAddress = new Uri("http://wschsol.mideplan.cl");
var httpResponse = await client.GetStringAsync("mod_perl/xml/fps-by-rut?rut=<InsertParamHere>");
Edit: for authorization, you have to add this:
var byteArray = Encoding.ASCII.GetBytes("username:password1234");
client.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Basic", Convert.ToBase64String(byteArray));
Snippet source: Simple C# .NET 4.5 HTTPClient Request Using Basic Auth and Proxy
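Putting the two snippets together, a minimal self-contained sketch might look like this (it assumes a C# compiler with async Main support; the credentials are placeholders and the rut value is just the example from the question):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        var client = new HttpClient();
        client.BaseAddress = new Uri("http://wschsol.mideplan.cl");

        // Basic authentication: "user:password" encoded as Base64.
        var byteArray = Encoding.ASCII.GetBytes("username:password1234");
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", Convert.ToBase64String(byteArray));

        // Example rut value taken from the question.
        string xml = await client.GetStringAsync("mod_perl/xml/fps-by-rut?rut=6985462-1");
        Console.WriteLine(xml);
    }
}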
I am busy working on a mobile application that retrieves data from a web service.
Of course, everything is working perfectly; I get everything I need and I can consume the services without much effort... on the emulator.
However, when I move over to testing this application on the device, instead of getting back the data that I am expecting I am getting a website returned. How am I supposed to handle this?
Currently I am using this to call my service (using System.Net; I don't know if this is what I should be using on Windows Phone 7 either):
WebClient proxy = new WebClient();
string strURI = "http://www.google.co.za";
proxy.OpenReadCompleted +=
new OpenReadCompletedEventHandler(proxy_openreadcompleted);
proxy.OpenReadAsync(new Uri(strURI));
Please note: I am not really calling Google; it is just an example. Anyway, I am expecting my JSON to be returned, but instead I am getting a message from the service provider to change mobile options. I can put this into isolated storage and render it with a browser, but I do not know the source of the message, so when you click a button the forms use relative URLs, and instead of doing what it is intended to do I just see what it is trying to call.
Is there any way to get the source of the response? I am looking for a source like http://vodafonelive.mobi/ or something like that. If someone can tell me what to do I would greatly appreciate it. My current thinking is that if I can identify the source, I can create a web browser task so that my application does not need to handle this; however, since I am calling a specific source, I don't know how to identify where the response is coming from.
Any help is appreciated.
This is most likely due to differences between the user agent of the emulator and that of the device. Check what is sent, or set it explicitly, to ensure the two behave the same way and the server doesn't try to redirect to a different location.
Alternatively, it could be a mobile operator proxy being "helpful" and adjusting the request that goes over their network.
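A quick way to test the first theory is to pin the User-Agent yourself and compare the two results. Here is a sketch using the plain .NET HttpWebRequest; note this is an assumption, since on Windows Phone 7 the User-Agent header may be restricted by the platform, so check whether your stack lets you set it:

using System;
using System.Net;

class Program
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://www.google.co.za");
        // Use the same User-Agent string the device's browser sends
        // (this value is only an example).
        request.UserAgent = "Mozilla/5.0 (compatible; MSIE 9.0; Windows Phone OS 7.5; Trident/5.0)";

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // ResponseUri shows where the content really came from,
            // e.g. whether an operator proxy redirected the request.
            Console.WriteLine(response.ResponseUri);
        }
    }
}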
OK, so after spending some time on this, I finally found a way to get the response URI (where the response was coming from) by using another method to call the service.
So basically this is the call:
WebRequest request = HttpWebRequest.Create(strURI);
var result = (IAsyncResult)request.BeginGetResponse(ResponseCallback, request);
Then, in the ResponseCallback function, I do something like this:
WebRequest request = (HttpWebRequest)result.AsyncState;
WebResponse response = request.EndGetResponse(result);
This then allows me to check the response URI (the source of the intercepted message) and have the native browser handle the HTML that I was not expecting:
WebBrowserTask webBrowserTask = new WebBrowserTask();
webBrowserTask.Uri = response.ResponseUri;
Thanks for the help though, hopefully this will help someone with a similar issue.
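For reference, here are the pieces above combined into one sketch. The class and method names are mine, the WebBrowserTask.Uri property assumes Windows Phone 7.1 (earlier builds use the URL string property), and launching the task is marshalled back to the UI thread:

using System;
using System.Net;
using System.Windows;
using Microsoft.Phone.Tasks;

public class ServiceCaller
{
    public void CallService(string strURI)
    {
        WebRequest request = HttpWebRequest.Create(strURI);
        request.BeginGetResponse(ResponseCallback, request);
    }

    private void ResponseCallback(IAsyncResult result)
    {
        var request = (HttpWebRequest)result.AsyncState;
        WebResponse response = request.EndGetResponse(result);

        // ResponseUri is the source of whatever came back, including an
        // operator's interception page, so hand it to the native browser.
        Deployment.Current.Dispatcher.BeginInvoke(() =>
        {
            var webBrowserTask = new WebBrowserTask();
            webBrowserTask.Uri = response.ResponseUri;
            webBrowserTask.Show();
        });
    }
}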
We have an in-house application that occasionally pulls some data from some of Amazon's pages. (We know they have APIs for certain operations... what we're doing requires some custom info not included in the APIs.) We have never had a problem pulling their pages, but suddenly Amazon is returning "(503) Server Unavailable" on pretty much every request, and this has been happening for several days, so we doubt it's a temporary thing. Even something as simple as this:
System.Net.WebClient client = new System.Net.WebClient();
string data = client.DownloadString(new Uri("http://www.amazon.com/Bose-Companion-multimedia-speaker-Graphite/dp/B000HZBR64/"));
The strange thing is that these pages load just fine in a web browser, but any time we try to pull them through code, it is failing.
What could cause these functions to fail? Is it possible that they changed something on their end and that we need to do some custom logic with our calls?
After some further testing, it turns out that this was happening because Amazon needs the Accept parameter of the HttpWebRequest to be specifically set. When setting it to:
request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";
Everything worked fine. This is a recent change, so they must have altered something on their end.
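For context, here is a minimal sketch of the whole request with that Accept header set; it assumes you switch from WebClient to HttpWebRequest so the header can be set directly (the URL is the one from the question):

using System;
using System.IO;
using System.Net;

class Program
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create(
            "http://www.amazon.com/Bose-Companion-multimedia-speaker-Graphite/dp/B000HZBR64/");

        // Without an explicit Accept header the server was answering 503.
        request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string data = reader.ReadToEnd();
            Console.WriteLine(data.Length);
        }
    }
}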
Check the User-Agent of your request and make it the same as your browser's. Also check whether you have set a proxy for your app; maybe your browser and your app are using different proxies.
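If you would rather stay with WebClient, the same checks look roughly like this; the User-Agent string is only an example (copy the one your browser actually sends), and clearing the proxy is an assumption to rule out a proxy mismatch:

using System;
using System.Net;

class Program
{
    static void Main()
    {
        var client = new WebClient();
        client.Headers[HttpRequestHeader.UserAgent] =
            "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36";
        client.Proxy = null; // go direct; compare with whatever proxy the browser uses

        string data = client.DownloadString(
            "http://www.amazon.com/Bose-Companion-multimedia-speaker-Graphite/dp/B000HZBR64/");
        Console.WriteLine(data.Length);
    }
}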
I am trying to download a file from rapidshare using System.Net.WebClient in C#.
I wanted to implement authorization using the HTTP header field "Authorization: Basic".
I do it with the following code:
WebClient webClient = new WebClient();
webClient.Headers.Add(HttpRequestHeader.Authorization,
    "Basic " +
    Convert.ToBase64String(System.Text.ASCIIEncoding.ASCII.GetBytes(_userPass)));
The problem is that when I access RapidShare I get redirected to a subdomain of RapidShare, and this Authorization field (unlike "Cookie") isn't added to the header in the second (redirected) request.
This blocks me from authenticating with the server.
How can I make the class pass the authorization header with the redirected request, or is there a better way to pass on authorization?
Is there a better, more "correct" way to do this, maybe with a different library?
All help will be very appreciated.
I think that rapidshare uses cookies to authenticate users and allow direct downloads...
EDIT: I did some digging on Google and found a "Simple Rapidshare Download Class". Maybe this would be helpful to you?
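If you do want to keep sending the Basic Authorization header across the redirect, one option (a sketch, not RapidShare-specific; the URL and credentials below are placeholders) is to turn off automatic redirects and follow the Location header yourself, re-attaching the header on the second request:

using System;
using System.IO;
using System.Net;
using System.Text;

class Program
{
    static string DownloadWithAuth(string url, string userPass)
    {
        string authHeader = "Basic " + Convert.ToBase64String(Encoding.ASCII.GetBytes(userPass));

        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Headers[HttpRequestHeader.Authorization] = authHeader;
        request.AllowAutoRedirect = false; // handle the redirect ourselves

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            int status = (int)response.StatusCode;
            if (status >= 300 && status < 400)
            {
                // Follow the redirect manually so the Authorization header can be
                // added again (it is dropped on automatic redirects by default).
                var location = new Uri(new Uri(url), response.Headers[HttpResponseHeader.Location]);
                var redirected = (HttpWebRequest)WebRequest.Create(location);
                redirected.Headers[HttpRequestHeader.Authorization] = authHeader;

                using (var second = (HttpWebResponse)redirected.GetResponse())
                using (var reader = new StreamReader(second.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }

            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }
    }

    static void Main()
    {
        // Placeholder URL and credentials.
        Console.WriteLine(DownloadWithAuth("https://rapidshare.com/somefile", "user:password"));
    }
}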