I need to fill my PictureBox (or Panel) with a Google map, but it won't fill it whenever I change the size in the URL. How can I do that, or is there a better provider (MSN, OpenStreetMap, etc.)?
string urlmaps = "http://maps.googleapis.com/maps/api/staticmap?center=43.56,4.48&size=600x600&sensor=true&format=png&maptype=roadmap&zoom=10";
var request = WebRequest.Create(urlmaps);
using (var response = request.GetResponse())
using (var stream = response.GetResponseStream())
{
    // Decode the downloaded PNG and show it in the PictureBox
    pictureBox1.Image = Bitmap.FromStream(stream);
}
You can use PictureBox.Load(string url) to load the remote image directly into the picture box.
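For instance, a minimal sketch of that approach, building the size parameter from the control's current dimensions so the downloaded image matches the control (the URL parameters mirror the question's example; the SizeMode setting is only a suggestion, not something from the original post):

string url = string.Format(
    "http://maps.googleapis.com/maps/api/staticmap?center=43.56,4.48&size={0}x{1}&sensor=true&format=png&maptype=roadmap&zoom=10",
    pictureBox1.Width, pictureBox1.Height);

pictureBox1.SizeMode = PictureBoxSizeMode.StretchImage; // scale the bitmap to fill the control
pictureBox1.Load(url);                                  // downloads and assigns the image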
I am writing a proxy for a site using ASP.NET Core 2.0. My proxy works fine as long as all it does is relay the HttpResponseMessage to the browser; it is based on this example. But I need to make some changes to the site content, for instance, some of the href attributes contain an absolute reference, so when I click them from my proxy I end up on the original site, which is a problem.
When I first tried to get the site HTML as a string, I got only special characters. After some searching, I found this solution. It fits my case well, and I successfully get the site content as a string, which starts with "<!DOCTYPE html>...". After the changes are done I need to send this string to my browser, and here I have a problem. I work with the HttpResponseMessage in the following way:
using (var responseStream = await responseMessage.Content.ReadAsStreamAsync())
{
    string str;
    using (var gZipStream = new GZipStream(responseStream, CompressionMode.Decompress))
    using (var streamReader = new StreamReader(gZipStream))
    {
        str = await streamReader.ReadToEndAsync();
        //some string changes...
    }
    var bytes = Encoding.UTF8.GetBytes(str);
    using (var msi = new MemoryStream(bytes))
    using (var mso = new MemoryStream())
    {
        using (var gZipStream = new GZipStream(mso, CompressionMode.Compress))
        {
            await msi.CopyToAsync(gZipStream);
            await gZipStream.CopyToAsync(response.Body, StreamCopyBufferSize, context.RequestAborted);
        }
    }
    //the next line works, but I don't change the content this way
    //await responseStream.CopyToAsync(response.Body, StreamCopyBufferSize, context.RequestAborted);
}
I create a GZipStream, successfully copy the MemoryStream into it, and then I want to copy the GZipStream into response.Body (HttpResponse) using CopyToAsync. On that line I get a NotSupportedException with the message
GZipStream does not support reading
I found out that after compression into the GZipStream, gZipStream.CanRead is false for some reason. I also tried to copy msi into response.Body; that doesn't throw any exceptions, but in the browser I get an empty page (the document response in the Network tab of the browser console is also empty).
I hope someone will be able to tell me what I am doing wrong.
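For context, a GZipStream created with CompressionMode.Compress is write-only (hence CanRead being false), and the compressed bytes only end up in the underlying stream once the GZipStream is flushed or disposed. A minimal sketch of how the last part of the snippet above is often rewritten; the variable names mirror the question, and this is an assumption about the fix, not code from the original post:

var bytes = Encoding.UTF8.GetBytes(str);
using (var mso = new MemoryStream())
{
    // leaveOpen: true keeps mso usable after the GZipStream is disposed
    using (var gZipStream = new GZipStream(mso, CompressionMode.Compress, leaveOpen: true))
    {
        await gZipStream.WriteAsync(bytes, 0, bytes.Length);
    } // disposing the GZipStream flushes the remaining compressed data into mso

    mso.Position = 0;
    await mso.CopyToAsync(response.Body, StreamCopyBufferSize, context.RequestAborted);
}

If the original Content-Length header was copied over to the response, it would also need to be removed or updated, because the re-compressed body will have a different length.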
I am trying to read the content of a page from a URL using the code below in MVC C#:
var webRequest = WebRequest.Create(@"https://example.com/aa/aa");
webRequest.Method = "GET";
using (var response = webRequest.GetResponse())
using (var content = response.GetResponseStream())
using (var reader = new StreamReader(content))
{
    var strContent = reader.ReadToEnd();
}
but I did not receive any response (the call never returned, so strContent was never set).
However, when I run the same code with the URL https://google.com/, it works fine.
I checked the source of both pages and found that https://google.com/ has a proper doctype and tags declared, but the page I am hitting seems to be a properties file with no tags or doctype defined.
Any help will be appreciated.
using (var client = new WebClient())
{
    // DownloadString expects an absolute URI, including the scheme
    string data = client.DownloadString("http://www.yourUrl.com");
}
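If the GET to that particular URL hangs while the same code works for google.com, one common cause is that the server is waiting for request headers it expects, such as a User-Agent. A hedged sketch, reusing the question's example URL, that sets a User-Agent and a timeout so the call fails fast instead of blocking:

var webRequest = (HttpWebRequest)WebRequest.Create(@"https://example.com/aa/aa");
webRequest.Method = "GET";
webRequest.UserAgent = "Mozilla/5.0";   // some servers stall or reject requests without one
webRequest.Timeout = 10000;             // give up after 10 seconds instead of hanging

using (var response = webRequest.GetResponse())
using (var content = response.GetResponseStream())
using (var reader = new StreamReader(content))
{
    var strContent = reader.ReadToEnd();
}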
I have tried both WebClient's DownloadString and WebRequest + Stream to scrape a page (this one) and get the raw paste data from it. I have scoured the net but have found no answers.
I have this code:
WebRequest request = WebRequest.Create("http://pastebin.com/raw.php?i=" + textBox1.Text);
WebResponse response = request.GetResponse();
Stream data = response.GetResponseStream();
string pasteContent = "";
using (StreamReader sr = new StreamReader(data))
{
    pasteContent = sr.ReadToEnd();
}
new Note().txtMain.Text += pasteContent;
new Note().txtMain.Refresh();
I have multiple forms, so I am editing Note's txtMain textbox to add the paste content, but it seems to return nothing no matter which method I use. I know cross-form editing works because I have multiple other things that already write to it.
How can I scrape the raw data?
Thank you VERY much,
P
There is no problem with downloading the content of the site. You simply don't use the instance of the Note class that you created.
var note = new Note();
note.txtMain.Text += pasteContent;
note.Show();
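Putting the download and the display together, a minimal sketch of the whole flow (it assumes, as the question does, that Note exposes txtMain publicly):

string pasteContent;
using (var client = new WebClient())
{
    // Download the raw paste text as a single string
    pasteContent = client.DownloadString("http://pastebin.com/raw.php?i=" + textBox1.Text);
}

// Keep a reference to the form instance you created, then show it
var note = new Note();
note.txtMain.Text += pasteContent;
note.Show();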
I need to have access to the HTML of a Facebook page, to extract some data from it. So, I need to create a WebRequest.
My code worked well for other sites, but for Facebook, I must be logged in to be able to access the HTML.
How can I use Firefox data for creating a WebRequest for Facebook page?
I tried this:
List<string> HTML_code = new List<string>();
WebRequest request = WebRequest.Create(URL);
using (WebResponse response = request.GetResponse())
using (StreamReader stream = new StreamReader(response.GetResponseStream()))
{
    string line;
    while ((line = stream.ReadLine()) != null)
    {
        HTML_code.Add(line);
    }
}
...but the resulting HTML is that of the Facebook home page when I am not logged in.
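For what the question literally asks (reusing an existing browser login), one common technique is to attach the session cookies from a logged-in browser to the request via a CookieContainer. This is a hedged sketch rather than anything from the original post; the cookie name and value are placeholders you would copy from your own browser session:

var request = (HttpWebRequest)WebRequest.Create(URL);
request.CookieContainer = new CookieContainer();

// Placeholder cookie: copy the real name/value pairs from the browser's
// logged-in Facebook session (for example via the browser's developer tools).
request.CookieContainer.Add(new Cookie("cookie_name", "cookie_value", "/", ".facebook.com"));

using (var response = request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string html = reader.ReadToEnd();
}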
If what you are trying to do is retrieve the number of likes from a Facebook page, you can use Facebook's Graph API service. Just to keep it simple, this is what I basically did in the code:
Retrieve the Facebook page's data. In this case I used the Coke page's data since it was an example FB had listed.
Parse the returned JSON using Json.NET. There are other ways to do this, but this just keeps it simple, and you can get Json.NET over at CodePlex. The documentation that I looked at for my code was this page in the docs. Their documentation will also help you with parsing and serializing even more JSON if you need to.
That basically translates into the code below. Just note that I left out all the fancy exception handling to keep it simple, as networking is not always reliable! Also don't forget to include the Json.NET library in your project!
Usings:
using System.IO;
using System.Net;
using Newtonsoft.Json.Linq;
Code:
string url = "https://graph.facebook.com/cocacola";
WebClient client = new WebClient();
string jsonData = string.Empty;
// Load the Facebook page info
Console.WriteLine("Connecting to Facebook...");
using (Stream data = client.OpenRead(url))
{
    using (StreamReader reader = new StreamReader(data))
    {
        jsonData = reader.ReadToEnd();
    }
}
// Get number of likes from Json data
JObject jsonParsed = JObject.Parse(jsonData);
int likes = (int)jsonParsed.SelectToken("likes");
// Write out the result
Console.WriteLine("Number of Likes: " + likes);
I have a page, http://www.mysite.com/image.aspx, that I want to have display an image instead of rendering HTML.
I have the ContentType of the page set to image/png, and here's my code:
using (Bitmap image = new Bitmap("http://www.google.com/images/img.png"))
{
    using (MemoryStream ms = new MemoryStream())
    {
        image.Save(ms, System.Drawing.Imaging.ImageFormat.Png);
        ms.WriteTo(Response.OutputStream);
    }
}
But I get an error saying:
URI formats are not supported.
How can I load an external image and render it to the page?
You can't load a Bitmap using a URI; it has to be a local file on your machine.
If you want to load an image from the web and then render it, you need to make a web request to that specific resource and then write the bytes to the output stream, as you are doing.
AKA
WebRequest webRequest = WebRequest.Create("http://www.google.com/images/img.png");
using (WebResponse response = webRequest.GetResponse())
using (Stream responseStream = response.GetResponseStream())
using (MemoryStream stream = new MemoryStream())
{
    // Buffer the downloaded bytes, then write them to the page's output stream
    responseStream.CopyTo(stream);
    stream.WriteTo(Response.OutputStream);
}
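A shorter alternative along the same lines is to let WebClient download the bytes in one call and write them straight to the response; a sketch, using the same placeholder image URL as above:

using (var client = new WebClient())
{
    // DownloadData returns the raw PNG bytes
    byte[] imageBytes = client.DownloadData("http://www.google.com/images/img.png");
    Response.OutputStream.Write(imageBytes, 0, imageBytes.Length);
}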