WebUrl Download - c#

I am using C#.
I am trying to see if the content at a web URL contains a given string.
I am currently using:
string result;
using (var client = new WebClient())
{
    result = client.DownloadString(WebUrl);
}
if (result.Contains(searchString))
{
    return true;
}
I am doing this for about 300 URLs. Is there a faster way, or do I have to download each web page?
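One commonly suggested direction (a sketch under assumptions, not a verified answer): you generally do have to fetch each page to inspect its content, but the downloads can run concurrently instead of one at a time. The names urls and searchString below are illustrative.

// Sketch: download pages concurrently and return the URLs whose content
// contains the search string. urls and searchString are illustrative names.
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

static async Task<string[]> FindMatchesAsync(string[] urls, string searchString)
{
    using (var client = new HttpClient())
    {
        var tasks = urls.Select(async url =>
        {
            string body = await client.GetStringAsync(url);
            return body.Contains(searchString) ? url : null;
        });
        var results = await Task.WhenAll(tasks);
        return results.Where(u => u != null).ToArray();
    }
}

In practice you may want to cap the concurrency (for example with a SemaphoreSlim) so that 300 simultaneous requests don't overwhelm the client or the target servers.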

Related

WebClient does not understand blob:http URIs

I am trying to download data from a blob:http link; the problem is that WebClient complains about the URL formatting. The format blob:http://localhost/7420f6fc-9c83-43a3-aa53-4a68ebec9518 is not known to WebClient. Is there another way to download this data without using Azure calls?
using (var client = new WebClient())
{
    // NotSupportedException: The URI prefix is not recognized.
    var model = client.DownloadData(new Uri("blob:http://localhost/7420f6fc-9c83-43a3-aa53-4a68ebec9518"));

    // Also tried the string overload:
    model = client.DownloadData("blob:http://localhost/7420f6fc-9c83-43a3-aa53-4a68ebec9518");
}

Response URL is returning previous Imgur image URL instead of current image URL

I've built a new Unity project that makes use of C# and System.Net to access Imgur's API. I'm able to pull images into my application from anywhere, and I can upload screenshots to Imgur through my new application's client ID. I'm able to get a decent amount of response data on a successful upload, but the URL in the response is always outdated by 1. If I just uploaded Screenshot D, the URL I get back will link to Screenshot C, and so on.
Here is what the response data looks like:
This is the result I've gotten across all attempts, which includes using w.UploadValuesAsync() and trying both Anonymous and OAuth2 (no callback) versions of Imgur applications.
Here is the bulk of my code, which is sourced from here.
public void UploadThatScreenshot()
{
    StartCoroutine(AppScreenshotUpload());
}

IEnumerator AppScreenshotUpload()
{
    yield return new WaitForEndOfFrame();
    ScreenCapture.CaptureScreenshot(Application.persistentDataPath + filename);

    // Make sure that the file saved properly
    float startTime = Time.time;
    while (false == File.Exists(Application.persistentDataPath + filename))
    {
        if (Time.time - startTime > 5.0f)
        {
            yield break;
        }
        yield return null;
    }

    // Read the saved file back into bytes
    byte[] rawImage = File.ReadAllBytes(Application.persistentDataPath + filename);

    // Before we try uploading it to Imgur we need a server certificate validation callback
    ServicePointManager.ServerCertificateValidationCallback = MyRemoteCertificateValidationCallback;

    // Attempt to upload the image
    using (var w = new WebClient())
    {
        string clientID = "a95bb1ad4f8bcb8";
        w.Headers.Add("Authorization", "Client-ID " + clientID);
        var values = new NameValueCollection
        {
            { "image", Convert.ToBase64String(rawImage) },
            { "type", "base64" },
        };
        byte[] response = w.UploadValues("https://api.imgur.com/3/image.xml", values);
        string returnResponse = XDocument.Load(new MemoryStream(response)).ToString();
        Debug.Log(returnResponse);
    }
}
Why is the URL I get back from w.UploadValues(...) always behind by one?
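One possible explanation, offered only as an assumption since the question does not confirm it: ScreenCapture.CaptureScreenshot writes the file asynchronously, and because the screenshot is saved under the same filename every time, File.Exists(...) succeeds immediately whenever a previous screenshot is still on disk, so the coroutine reads and uploads the stale file from the previous run. A minimal sketch of one workaround is to delete any old file before capturing:

// Sketch (assumption): remove the previous screenshot so File.Exists only
// becomes true once the new capture has actually been written to disk.
string path = Application.persistentDataPath + filename;
if (File.Exists(path))
{
    File.Delete(path);
}
ScreenCapture.CaptureScreenshot(path);
// ...then wait for File.Exists(path) as before, and read the bytes.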

How to download a file from a URL which is passed as a query string?

I am passing a URL which has another URL in the query string. I have shown an example below:
https://www.aaa.com/triBECKML/kmlHelper.htm?https://lkshd.ty.etys.nux/incoming/triBEC/final_year_base_data/KMLS/NetAverages.kml
I have tried WebClient to download the file, but it only downloads an empty .kml. When I call the method with just the second URL (the one in the query string), the file downloads smoothly.
using (var client = new WebClient())
{
    client.DownloadFile(url, destinationPath);
}
You can try splitting the string and using only the second part.
Example:
string[] urls = url.Split('?');
using (var client = new WebClient())
{
    client.DownloadFile(urls[1], destinationPath);
}
This splits the URL string and puts the first URL and the second into an array; you can then use whichever one you need.
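One caveat with a plain Split('?'): if the inner URL ever contains its own query string, it would be split apart as well. A slightly more defensive sketch (variable names are illustrative) limits the split to the first '?':

// Sketch: split only on the first '?', so a query string inside the
// inner URL is preserved. url and destinationPath are illustrative.
string[] parts = url.Split(new[] { '?' }, 2);
string innerUrl = parts.Length > 1 ? parts[1] : url;

using (var client = new WebClient())
{
    client.DownloadFile(innerUrl, destinationPath);
}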

Uri ignores special characters

I have a string that represents a URL, and I want to download its content via C#.
My URL contains the right-to-left mark and the left-to-right mark (%E2%80%8F and %E2%80%8E).
When I paste the URL into a browser, the file is shown.
When I use the code in .NET, I get an error, because .NET ignores those marks and doesn't send them in the request.
This is the query string I got:
/MBA%20%281%29/%D7%90%D7%A1%D7%98%D7%A8%D7%98%D7%92%D7%99%D7%94%20%D7%A2%D7%A1%D7%A7%D7%99%D7%AA%20%D7%AA%D7%97%D7%A8%D7%95%D7%AA%D7%99%D7%AA%20%28%E2%80%8F%EF%BB%BF13015%E2%80%8E%29%E2%80%8F/%D7%93%D7%A4%D7%99%20%D7%A0%D7%95%D7%A1%D7%97%D7%90%D7%95%D7%AA%20%D7%9C%D7%9E%D7%91%D7%97%D7%9F/%D7%90%D7%A1%D7%98%D7%A8%D7%98%D7%92%D7%99%D7%94%20%D7%93%D7%A3%20%D7%A0%D7%95%D7%A1%D7%97%D7%90%D7%95%D7%AA-2%20%D7%A2%D7%9E-%203%20%D7%A2%D7%9E%D7%95%D7%93%D7%95%D7%AA%20%D7%9C%D7%A8%D7%95%D7%97%D7%91.pdf
Using Fiddler I can see that .NET sends:
/MBA%20(1)/%D7%90%D7%A1%D7%98%D7%A8%D7%98%D7%92%D7%99%D7%94%20%D7%A2%D7%A1%D7%A7%D7%99%D7%AA%20%D7%AA%D7%97%D7%A8%D7%95%D7%AA%D7%99%D7%AA%20(%EF%BB%BF13015)/%D7%93%D7%A4%D7%99%20%D7%A0%D7%95%D7%A1%D7%97%D7%90%D7%95%D7%AA%20%D7%9C%D7%9E%D7%91%D7%97%D7%9F/%D7%90%D7%A1%D7%98%D7%A8%D7%98%D7%92%D7%99%D7%94%20%D7%93%D7%A3%20%D7%A0%D7%95%D7%A1%D7%97%D7%90%D7%95%D7%AA-2%20%D7%A2%D7%9E-%203%20%D7%A2%D7%9E%D7%95%D7%93%D7%95%D7%AA%20%D7%9C%D7%A8%D7%95%D7%97%D7%91.pdf
You'll notice that the marks are gone.
Any idea how to solve it?
Here is some code:
var s = "https://dl.dropboxusercontent.com/1/view/4fsaouo0ob52xkz/MBA%20%281%29/%D7%90%D7%A1%D7%98%D7%A8%D7%98%D7%92%D7%99%D7%94%20%D7%A2%D7%A1%D7%A7%D7%99%D7%AA%20%D7%AA%D7%97%D7%A8%D7%95%D7%AA%D7%99%D7%AA%20%28%E2%80%8F%EF%BB%BF13015%E2%80%8E%29%E2%80%8F/%D7%93%D7%A4%D7%99%20%D7%A0%D7%95%D7%A1%D7%97%D7%90%D7%95%D7%AA%20%D7%9C%D7%9E%D7%91%D7%97%D7%9F/%D7%90%D7%A1%D7%98%D7%A8%D7%98%D7%92%D7%99%D7%94%20%D7%93%D7%A3%20%D7%A0%D7%95%D7%A1%D7%97%D7%90%D7%95%D7%AA-2%20%D7%A2%D7%9E-%203%20%D7%A2%D7%9E%D7%95%D7%93%D7%95%D7%AA%20%D7%9C%D7%A8%D7%95%D7%97%D7%91.pdf";
using (HttpClient client = new HttpClient())
{
    using (var sr = client.GetAsync(s).Result)
    {
        Console.WriteLine(sr.Headers);
    }
}
I tried to create the Uri manually and pass it to the HttpClient, with the same result.
The sr response gets a 403 error code instead of 200 (the link I provided is invalid, but the end result is the same).
Try this:
var uri = "https://dl.dropboxusercontent.com/1/view/4fsaouo0ob52xkz/MBA%20%281%29/%D7%90%D7%A1%D7%98%D7%A8%D7%98%D7%92%D7%99%D7%94%20%D7%A2%D7%A1%D7%A7%D7%99%D7%AA%20%D7%AA%D7%97%D7%A8%D7%95%D7%AA%D7%99%D7%AA%20%28%E2%80%8F%EF%BB%BF13015%E2%80%8E%29%E2%80%8F/%D7%93%D7%A4%D7%99%20%D7%A0%D7%95%D7%A1%D7%97%D7%90%D7%95%D7%AA%20%D7%9C%D7%9E%D7%91%D7%97%D7%9F/%D7%90%D7%A1%D7%98%D7%A8%D7%98%D7%92%D7%99%D7%94%20%D7%93%D7%A3%20%D7%A0%D7%95%D7%A1%D7%97%D7%90%D7%95%D7%AA-2%20%D7%A2%D7%9E-%203%20%D7%A2%D7%9E%D7%95%D7%93%D7%95%D7%AA%20%D7%9C%D7%A8%D7%95%D7%97%D7%91.pdf";
string url = System.Web.HttpUtility.UrlDecode(uri);
//objDataToXml.GenerateXml();
using (HttpClient client = new HttpClient())
{
    using (var sr = client.GetAsync(url).Result)
    {
        Console.WriteLine(sr.Headers);
    }
}
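Note that decoding the URL before sending it changes which characters actually go on the wire, so it may or may not match what the server expects. To check how the Uri class will canonicalize a given string before it is sent, a small diagnostic sketch (not a fix) is to compare the original string with Uri.AbsoluteUri:

// Diagnostic sketch: compare the raw string with the form the Uri class
// will actually use, to see whether the %E2%80%8F / %E2%80%8E marks survive.
var original = s; // the encoded URL string from the question
var canonical = new Uri(original, UriKind.Absolute).AbsoluteUri;

Console.WriteLine(original);
Console.WriteLine(canonical);
Console.WriteLine(original == canonical
    ? "Uri preserves the string as-is."
    : "Uri rewrites the string before sending it.");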

How can I get website text without using a web browser?

I tried to use a WebBrowser control and get the text from it.
But instead of getting the text, it downloads the file to my computer.
How can I get this text without using it?
Thanks
You can use a WebClient:
string output = string.Empty;
using (WebClient wc = new WebClient())
{
    output = wc.DownloadString("http://stackoverflow.com");
}
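On newer .NET versions WebClient is considered legacy, and HttpClient is the usual replacement. A roughly equivalent sketch (the URL is just the example from the answer above):

// Sketch: the same download using HttpClient instead of WebClient.
using System.Net.Http;
using System.Threading.Tasks;

static async Task<string> GetPageTextAsync(string url)
{
    using (var client = new HttpClient())
    {
        return await client.GetStringAsync(url);
    }
}

// Usage: string output = await GetPageTextAsync("http://stackoverflow.com");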
