How to save a dynamic image in WatiN [duplicate] - c#

This question already has answers here:
How do I get a bitmap of the WatiN image element?
(5 answers)
Closed 8 years ago.
I am trying to save an image that changes dynamically with each request.
I tried WatiN and HttpWebRequest (each request returns a new image):
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.test.com");
request.AllowAutoRedirect = false;
WebResponse response = request.GetResponse();
using (Stream stream = response.GetResponseStream())
using (FileStream fs = File.OpenWrite(ImageCodePath))
{
byte[] bytes = new byte[1024];
int count;
while ((count = stream.Read(bytes, 0, bytes.Length)) != 0)
{
fs.Write(bytes, 0, count);
}
}
and URLDownloadToFile from urlmon.dll (also returns a new image):
[DllImport("urlmon.dll", CharSet = CharSet.Auto, SetLastError = true)]
static extern Int32 URLDownloadToFile(IntPtr pCaller, string szURL, string szFileName, Int32 dwReserved, IntPtr lpfnCB);
URLDownloadToFile(IntPtr.Zero, "https://test.reporterImages.php?MAIN_THEME=1", ImageCodePath, 0, IntPtr.Zero);
I looked in all the temp folders and still can't find the image.
Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.InternetCache),"Content.IE5");
Every time I try to save it, the server builds a new image and returns it. If I right-click the image and choose:
Save picture as...
it saves the picture I am looking at. I need to somehow implement this method (right-click, Save picture as...) with WatiN in IE, or somehow download the image that is already in my HTML page with an HttpRequest, without another round trip to the server.
Does anyone know how I can do this?

As I understand it, the idea is to capture the CAPTCHA image currently shown on the page in the browser, in order to bypass it with some text recognition (which is, by the way, a strange idea). Getting the image URL is not a problem, since it is always the same. In that case you can use an API to access the browser cache.
Specifically for IE: FindFirstUrlCacheEntry/FindNextUrlCacheEntry (from wininet.dll).
This can help if your application hosts a WebBrowser control.
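A minimal sketch of that lookup, assuming the image URL is known up front. It uses the simpler wininet call GetUrlCacheEntryInfo instead of enumerating the cache with FindFirstUrlCacheEntry/FindNextUrlCacheEntry, and it only declares the struct fields it actually reads:
// Only the leading fields of INTERNET_CACHE_ENTRY_INFO that we need.
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Auto)]
struct INTERNET_CACHE_ENTRY_INFO
{
    public uint dwStructSize;
    public IntPtr lpszSourceUrlName;
    public IntPtr lpszLocalFileName; // path of the cached copy on disk
    // remaining fields omitted; the buffer we allocate is large enough
}

[DllImport("wininet.dll", SetLastError = true, CharSet = CharSet.Auto)]
static extern bool GetUrlCacheEntryInfo(string lpszUrlName, IntPtr lpCacheEntryInfo, ref int lpcbCacheEntryInfo);

// Returns the local file holding the cached copy of 'url', or null if it is not cached.
static string GetCachedFile(string url)
{
    int size = 0;
    GetUrlCacheEntryInfo(url, IntPtr.Zero, ref size); // first call just reports the required buffer size
    if (size == 0) return null;

    IntPtr buffer = Marshal.AllocHGlobal(size);
    try
    {
        if (!GetUrlCacheEntryInfo(url, buffer, ref size)) return null;
        var info = (INTERNET_CACHE_ENTRY_INFO)Marshal.PtrToStructure(buffer, typeof(INTERNET_CACHE_ENTRY_INFO));
        return Marshal.PtrToStringAuto(info.lpszLocalFileName);
    }
    finally
    {
        Marshal.FreeHGlobal(buffer);
    }
}

// Usage: copy the cached CAPTCHA image to a known location.
// string cached = GetCachedFile(imageUrl);
// if (cached != null) File.Copy(cached, ImageCodePath, true);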

Use WebClient:
using (var client = new WebClient())
{
client.DownloadFile("http://cdn.sstatic.net/stackoverflow/img/sprites.png?v=3c6263c3453b", "so-sprites.png");
}
If you provide a URL to an HTML page that contains the img you want (rather than a direct link to the image), you can still use a WebClient with two consecutive requests: the first scrapes the image URL from the page source (not required if the URL is static), and the second actually downloads the image.
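For the general case, a rough sketch of that two-request approach (the page URL, the regex, and the file name are placeholders; for anything non-trivial an HTML parser such as HtmlAgilityPack is a better idea than a regex):
using (var client = new WebClient())
{
    // Request 1: fetch the page markup and pull the image URL out of the first img tag.
    string html = client.DownloadString("http://example.com/page-with-image.html");
    string imageUrl = Regex.Match(html, "<img[^>]+src=\"([^\"]+)\"", RegexOptions.IgnoreCase).Groups[1].Value;
    // (a relative src would need to be combined with the page's base URL)

    // Request 2: download the image itself.
    client.DownloadFile(imageUrl, "downloaded-image.png");
}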
Edit:
Sorry, this was not clear from the start. So you host a browser in your app, you see the actual image in it, and you want to save the image you are seeing. Of course, making another request won't work, because another image will be generated.
The solution depends on the web browser control that you use.
For WebBrowser (System.Windows.Forms) you can use IHTMLDocument2. See example here:
Saving image shown in web browser
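For reference, the usual approach (copying the rendered element to the clipboard via a control range) looks roughly like the sketch below, assuming a WinForms WebBrowser and a reference to Microsoft.mshtml; the element id is a placeholder. Because it grabs the already-rendered image, no second request is made and the server never gets a chance to generate a new one:
void SaveRenderedImage(WebBrowser browser, string elementId, string path)
{
    HtmlElement img = browser.Document.GetElementById(elementId);

    // Select the img element and copy its rendered bitmap to the clipboard.
    var body = (HTMLBody)browser.Document.Body.DomElement;
    var range = (IHTMLControlRange)body.createControlRange();
    range.add((IHTMLControlElement)img.DomElement);
    range.execCommand("Copy", false, null);

    // Read the bitmap back from the clipboard and save it.
    if (Clipboard.ContainsImage())
    {
        using (Image image = Clipboard.GetImage())
        {
            image.Save(path);
        }
    }
}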

Related

C#.Net Download Image from URL, Crop, and Upload without Saving or Displaying

I have a large number of images on a Web server that need to be cropped. I would like to automate this process.
So my thought is to create a routine that, given the URL of the image, downloads the image, crops it, then uploads it back to the server (as a different file). I don't want to save the image locally, and I don't want to display the image to the screen.
I already have a project in C#.Net that I'd like to do this in, but I could do .Net Core if I have to.
I have looked around, but all the information I could find for downloading an image involves saving the file locally, and all the information I could find about cropping involves displaying the image to the screen.
Is there a way to do what I need?
It's perfectly possible to issue a GET request to a URL and have the response returned to you as a byte[] using HttpClient.GetByteArrayAsync. With that binary content, you can read it into an Image using Image.FromStream.
Once you have that Image object, you can use the answer from here to do your cropping.
//Note: You only want a single HttpClient in your application
//and re-use it where possible to avoid socket exhaustion issues
using (var httpClient = new HttpClient())
{
//Issue the GET request to a URL and read the response into a
//byte array that we can wrap in a stream to load the image
var imageContent = await httpClient.GetByteArrayAsync("<your image url>");
using (var imageBuffer = new MemoryStream(imageContent))
{
var image = Image.FromStream(imageBuffer);
//Do something with image
}
}
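To finish the job without touching the disk, the crop-and-re-encode step could look roughly like this (System.Drawing; the 200x200 crop rectangle is an arbitrary example, imageBuffer is the MemoryStream from the snippet above, and uploadBytes is what you would hand to your upload call):
using (var image = Image.FromStream(imageBuffer))
using (var cropped = new Bitmap(200, 200))
using (var g = Graphics.FromImage(cropped))
{
    // Copy a 200x200 region starting at (50, 50) in the source into the new bitmap.
    g.DrawImage(image, new Rectangle(0, 0, 200, 200), new Rectangle(50, 50, 200, 200), GraphicsUnit.Pixel);

    using (var output = new MemoryStream())
    {
        cropped.Save(output, ImageFormat.Png);
        byte[] uploadBytes = output.ToArray(); // upload this, e.g. via HttpClient.PostAsync
    }
}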

c# windows forms, load first google image in app itself

I am wondering if it's possible to display the first image of a Google image search in a Visual Studio Windows Forms app.
The way I imagine this would work is that a person enters a string, the app googles the string, copies the first image, and then displays it in the app itself.
Thank you.
EDIT: Please consider that I am a beginner in C# programming, so if you are going to use some difficult code or suggest some APIs, could you please explain in more detail how to do so. Thank you.
Short answer, Yes.
We know the URL to get an image is
https://www.google.co.uk/search?q=plane&tbm=isch&site=imghp
On the form, create a PictureBox (call it pbImage), a TextBox (call it tbSearch), and a Button (call it btnLookup).
Using the NuGet Package Manager (Tools -> NuGet... -> Manage...), select Browse, search for HtmlAgilityPack, tick your project on the right, and then click Install.
When we send a request to Google using System.Net.WebClient, no JavaScript is executed (although this can be achieved with some trickery using the WinForms web browser control).
Because there is no JavaScript, the page is rendered differently from what you are used to. Inspecting the page without JavaScript shows the following structure:
Within the document body there is a table with a class called 'images_table'.
Within that we can find several img elements.
Here is a code listing:
private void btnLookup_Click(object sender, EventArgs e)
{
string templateUrl = @"https://www.google.co.uk/search?q={0}&tbm=isch&site=imghp";
//check that we have a term to search for.
if (string.IsNullOrEmpty(tbSearch.Text))
{
MessageBox.Show("Please supply a search term"); return;
}
else
{
using (WebClient wc = new WebClient())
{
//let's pretend we are IE8 on Vista.
wc.Headers.Add("user-agent", "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)");
string result = wc.DownloadString(String.Format(templateUrl, new object[] { tbSearch.Text }));
//we have valid markup, this will change from time to time as google updates.
if (result.Contains("images_table"))
{
HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();
doc.LoadHtml(result);
//let's create a LINQ query to find all the img elements stored in that images_table class.
/*
 * Essentially we search for the table called images_table, and then take all images that have a valid src containing "images?",
 * which is the string used by google,
 * eg https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcQmGxh15UUyzV_HGuGZXUxxnnc6LuqLMgHR9ssUu1uRwy0Oab9OeK1wCw
 */
var imgList = from tables in doc.DocumentNode.Descendants("table")
from img in tables.Descendants("img")
where tables.Attributes["class"] != null && tables.Attributes["class"].Value == "images_table"
&& img.Attributes["src"] != null && img.Attributes["src"].Value.Contains("images?")
select img;
byte[] downloadedData = wc.DownloadData(imgList.First().Attributes["src"].Value);
if (downloadedData != null)
{
//wrap the downloaded data in a stream (the stream is created over the downloaded bytes,
//so there is no need to Write them again, and the position stays at 0)
System.IO.MemoryStream ms = new System.IO.MemoryStream(downloadedData, 0, downloadedData.Length);
//load an image from that stream (keep the stream open while the image is in use)
pbImage.Image = Image.FromStream(ms);
}
}
}
}
}
Using System.Net.WebClient, a request is sent to Google using the URL in the template string. Adding headers makes the request look more genuine. WebClient downloads the markup, which is stored in result.
An HtmlAgilityPack.HtmlDocument object is created and loaded with the data stored in result.
A LINQ query obtains the img elements; taking the first one in that list, we download its data into a byte array.
With that data a MemoryStream is created, and the stream is loaded into the PictureBox's Image. Note that GDI+ needs the stream to remain open for as long as the image is in use, so don't dispose it while the image is displayed.

How to display the skype user photo from HTTP Response using UCWA API?

I am working on Universal Windows applications. In my current project I used the Unified Communications Web API (UCWA) to display the Skype user's status, and that works fine, but when I try to display the Skype user's photo I get stuck.
I followed the link below to display the photo:
https://msdn.microsoft.com/en-us/skype/ucwa/getmyphoto
I get a response code of 200 OK for my GET request, but I don't know how to display the image from the response.
Please tell me how to resolve it.
-Pradeep
I got the result. After getting the HTTP response, I convert the response content to a stream using the line below.
var presenceJsonStr = await httpResponseMessage.Content.ReadAsStreamAsync();
This is the code to display the image
var photo = await AuthenticationHelper.Photo();
// Create a .NET memory stream.
var memStream = new MemoryStream();
// Convert the stream to the memory stream, because a memory stream supports seeking.
await photo.CopyToAsync(memStream);
// Set the start position.
memStream.Position = 0;
// Create a new bitmap image.
var bitmap = new BitmapImage();
// Set the bitmap source to the stream, which is converted to a IRandomAccessStream.
bitmap.SetSource(memStream.AsRandomAccessStream());
// Set the image control source to the bitmap.
imagePreivew.ImageSource = bitmap;
Assuming you put an Accept header specifying an image type, you should be able to look at the Content-Length header to determine if the user has an image set on the server. If the length is zero you should consider providing a default image to be displayed. If not, I would suggest taking a look at Convert a Bitmapimage into a Byte Array and Vice Versa in UWP Platform as you should treat the response body as a byte array with its length defined by the Content-Length header.
If for some reason no Accept header was provided, the response body is not an image/* type, and looks like a string then you might be dealing with a Base64 encoded image. This case should be much less likely to deal with, but if you need advice I would suggest looking at Reading and Writing Base64 in the Windows Runtime.
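If you do end up with a Base64 string, a hedged sketch of the decode-and-display step could look like this (httpResponseMessage and imagePreivew are the names used in the code above; everything else is an assumption about your setup):
// Decode the Base64 payload into raw image bytes.
string base64Photo = await httpResponseMessage.Content.ReadAsStringAsync();
byte[] photoBytes = Convert.FromBase64String(base64Photo);

// Load the bytes into a BitmapImage via a seekable stream.
var bitmap = new BitmapImage();
using (var stream = new MemoryStream(photoBytes))
{
    await bitmap.SetSourceAsync(stream.AsRandomAccessStream());
}
imagePreivew.ImageSource = bitmap;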
You can directly use the URL generated for the user photo resource. Just set the URL of the image as the source of the Image container; your application will load it automatically.
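A minimal sketch of that approach, assuming photoUrl holds the href of the photo resource and that the image can be fetched without extra authentication headers:
// Point the Image control straight at the photo resource URL.
imagePreivew.ImageSource = new BitmapImage(new Uri(photoUrl));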

how to save/download pdf embedded in web page without a pdf filename

I'm writing a web scraping program in C#. So far, I have been able to log in to the website, save the cookie, and return the source code of another page. From this source code, I get a link that takes me to a PDF, but the page doesn't end with a .pdf extension. In the browser, this page shows the PDF, and there are controls in the browser, including a save button.
I believe the PDF page was created with ColdFusion, as it has .cfm, CFID, and CFTOKEN in the URL.
How do I save this pdf file programmatically?
Two answers have suggested I save the binary stream as a PDF. How do I get the binary data in the first place? I have tried the following:
byte[] result;
byte[] buffer = new byte[4096];
WebRequest wr = WebRequest.Create(billURL);
using (WebResponse response = wr.GetResponse())
{
using (Stream responseStream = response.GetResponseStream())
{
using (MemoryStream memoryStream = new MemoryStream())
{
int count = 0;
do
{
count = responseStream.Read(buffer, 0, buffer.Length);
memoryStream.Write(buffer, 0, count);
} while (count != 0);
result = memoryStream.ToArray();
}
}
}
Do I then want to save result as a pdf, or am I doing something wrong there?
The common way in ColdFusion to stream a PDF to the browser is this:
<cfheader name="Content-Disposition" value="attachment;filename=#PDFFileName#">
<cfcontent type="application/pdf" reset="true" variable="#toBinary(PDFinMemory)#">
Use a C# WebRequest to request the URL of the PDF, then check the response headers for a Content-Type of 'application/pdf'. If it matches, save the binary stream to a PDF file on disk.
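A hedged sketch of that check layered on the question's download code (billURL is from the question; the output path is a placeholder):
WebRequest wr = WebRequest.Create(billURL);
using (WebResponse response = wr.GetResponse())
{
    // Only treat the response as a PDF if the server says it is one.
    if (response.ContentType != null &&
        response.ContentType.StartsWith("application/pdf", StringComparison.OrdinalIgnoreCase))
    {
        using (Stream responseStream = response.GetResponseStream())
        using (FileStream file = File.Create(@"C:\temp\bill.pdf"))
        {
            responseStream.CopyTo(file); // .NET 4.0+; otherwise use the read/write loop from the question
        }
    }
}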
Assuming the CFID and CFTOKEN are not really needed (you can test the URL without them and see if it still fetches the PDF successfully):
Use WebRequest to make a GET request to that URL (see: http://support.microsoft.com/kb/307023)
Save the binary stream as a PDF File.
I get a link that takes me to a pdf, but the page doesn't end with
.pdf extension ..
How do I get the binary data in the first place?
In addition to the other suggestions, one small clarification: the file extension does not really matter; what is important is the content. A .cfm script can return any content type, not just text/html, so it can serve a PDF, an image, and so on. As long as your link returns the type application/pdf, you should get back a binary stream (i.e. the PDF) that you can save to a file. The original file name can be obtained from the WebResponse headers.
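For the original file name, a rough sketch that pulls it out of the Content-Disposition header set by the ColdFusion snippet above (response and result are the WebResponse and byte array from the question's code; the fallback name and folder are placeholders):
// Read the header while the WebResponse is still open.
string fileName = "document.pdf";
string disposition = response.Headers["Content-Disposition"];
if (!string.IsNullOrEmpty(disposition))
{
    // Typical value: attachment;filename=report.pdf
    Match m = Regex.Match(disposition, @"filename=""?([^"";]+)""?", RegexOptions.IgnoreCase);
    if (m.Success)
        fileName = m.Groups[1].Value;
}
File.WriteAllBytes(Path.Combine(@"C:\temp", fileName), result);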

How can I create a screenshot of a http website and save it on my server?

How can I create a screenshot of an HTTP website and save it on my server, using .NET?
byte[] byteArray = Encoding.ASCII.GetBytes( resp.BodyStr ); // this is the page's HTML text, not image data
MemoryStream stream = new MemoryStream( byteArray );
pictureBox1.Image = Image.FromStream(stream); // fails because the stream does not contain a valid image
stream.Close();
I have tried the above code but it's not working
Edit: Someone (not the author) completely changed the question in the OP. My answer was right for that question, and now I'm getting marked down for it.
Use the WebClient class to download the image from the web
Construct a MemoryStream for the data
Use the Image.FromStream method to load the stream into an image for your forms
Edit: Here is a wonderful answer for you. Check the accepted answer for exactly what you're after: Using WebClient to get Remote Images Produces Grainy GIFs and Can't Handle PNG+BMP
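A compact sketch of the three steps above (the URL is a placeholder; note that GDI+ needs the stream to stay open for as long as the Image is in use, so don't dispose it while the picture is displayed):
using (var client = new WebClient())
{
    byte[] data = client.DownloadData("http://example.com/image.png");
    var stream = new MemoryStream(data); // keep this stream open while the image is displayed
    pictureBox1.Image = Image.FromStream(stream);
}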
