I am trying to get an image via HttpWebRequest and show it in a WinForms picture box. While watching the request in Fiddler's ImageView tab I can see that the image comes back correctly, but while reading the stream I get a "Stream was not readable" error on
Image img = Image.FromStream(stream);
What am I missing?
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("[URL here]");
HttpWebResponse response = (HttpWebResponse)req.GetResponse();
Stream stream = response.GetResponseStream();
Image img = Image.FromStream(stream); // ERROR occurs here
stream.Close();
After some digging I found an answer here: C# gif Image to MemoryStream and back (lose animation):
HttpWebRequest req = (HttpWebRequest)WebRequest.Create("[URL here]");
HttpWebResponse response = (HttpWebResponse)req.GetResponse();
Stream stream = response.GetResponseStream();
MemoryStream memoryStream = new MemoryStream();
stream.CopyTo(memoryStream);
memoryStream.Position = 0;
stream = memoryStream;
Image img = Image.FromStream(stream);
stream.Close();
Related
I use this code to capture an image from an IP camera:
HttpWebRequest reqs = (HttpWebRequest)WebRequest.Create("http://" + ip + snapshotCommand);
reqs.Method = "POST";
reqs.Timeout = 4000;
reqs.Credentials = new NetworkCredential(user, pass);
reqs.PreAuthenticate = true;
HttpWebResponse resp = (HttpWebResponse)reqs.GetResponse();
if (resp != null)
{
    Stream stm = resp.GetResponseStream();
    img = new Bitmap(stm);
    stm.Close();
}
But the stream threw an exception because CanSeek and CanWrite are false.
I tried many ways, for example CopyTo(MemoryStream), but the problem still persists.
Would you please help me with that?
This is the code using MemoryStream:
Stream stm = resp.GetResponseStream();
MemoryStream ms = new MemoryStream();
stm.CopyTo(ms);
ms.Position = 0;
And this "ms" for ReadTimeout & WriteTimeout threw:
Message "Timeouts are not supported on this stream."
Because canTimeout() is false for MemoryStream too.
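(That message is what the debugger shows when it evaluates ReadTimeout/WriteTimeout on a stream whose CanTimeout is false; it only matters if code actually sets those properties. A hypothetical guard would look like this, with the 4000 ms value purely as an example:)
// Only touch the timeout properties on streams that actually support them.
if (ms.CanTimeout)
{
    ms.ReadTimeout = 4000;
    ms.WriteTimeout = 4000;
}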
Finally I found this solution, and it works well:
https://stackoverflow.com/a/2368505/492628
You should be able to copy the stream into a memory stream if it isn't seekable.
Here's a post that might help.
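Roughly, the snapshot code reworked that way would look like this (a sketch based on the reqs/img variables from the question, not tested against your camera):
using (HttpWebResponse resp = (HttpWebResponse)reqs.GetResponse())
using (Stream stm = resp.GetResponseStream())
{
    // Buffer the non-seekable response stream so Bitmap can read it freely.
    var ms = new MemoryStream();
    stm.CopyTo(ms);
    ms.Position = 0;

    // Keep the MemoryStream alive for as long as the Bitmap is in use.
    img = new Bitmap(ms);
}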
public class PrintPage
{
    public void buildPdf(string url)
    {
        Bitmap bmp = PrintHelpPage(url);
        Document dc = new Document();
        PdfWriter pdfWrt = PdfWriter.GetInstance(dc, new FileStream(@"D:/Experiment/Sample.pdf", FileMode.Create));
        dc.Open();
        iTextSharp.text.Image pdfImage = iTextSharp.text.Image.GetInstance(bmp, System.Drawing.Imaging.ImageFormat.Jpeg);
        dc.Add(pdfImage);
        dc.Close();
    }

    private Bitmap PrintHelpPage(string url)
    {
        if (url.ToUpper() == "DEFAULT")
        {
            url = @"https://www.google.com";
        }
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        System.Text.Encoding Enc = System.Text.Encoding.GetEncoding(response.CharacterSet);
        StreamReader sr = new StreamReader(response.GetResponseStream(), Enc);
        string sDoc = sr.ReadToEnd();
        sr.Close();
        byte[] by = Encoding.ASCII.GetBytes(sDoc);
        Bitmap bm = ByteToImage(by);
        return bm;
    }

    public static Bitmap ByteToImage(byte[] blob)
    {
        using (MemoryStream mStream = new MemoryStream())
        {
            mStream.Write(blob, 0, blob.Length);
            mStream.Seek(0, SeekOrigin.Begin);
            Bitmap bm = new Bitmap(mStream);
            return bm;
        }
    }
}
EDIT: Subsequent to your comment:
actually I am trying to capture whole page as a picture of a random website to convert as PDF
Then you're going about it the wrong way. You'll need to start a browser (e.g. a System.Windows.Forms.WebBrowser) and somehow do a screen capture. This will be non-trivial. It's also important that you understand why your current approach doesn't work - it suggests a fundamental misunderstanding of how the web works.
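For illustration only, the usual WinForms approach is along these lines (a sketch, not production code; it assumes an STA thread, and DrawToBitmap support for WebBrowser is unofficial, so results vary by page):
// Requires System.Drawing and System.Windows.Forms; must run on an STA thread.
static Bitmap CapturePage(string url)
{
    using (var browser = new WebBrowser())
    {
        browser.ScrollBarsEnabled = false;
        browser.ScriptErrorsSuppressed = true;
        browser.Navigate(url);

        // Pump messages until the page has finished loading.
        while (browser.ReadyState != WebBrowserReadyState.Complete)
            Application.DoEvents();

        // Size the control to the full document so the whole page is drawn.
        browser.Width = browser.Document.Body.ScrollRectangle.Width;
        browser.Height = browser.Document.Body.ScrollRectangle.Height;

        var bitmap = new Bitmap(browser.Width, browser.Height);
        browser.DrawToBitmap(bitmap, new Rectangle(0, 0, browser.Width, browser.Height));
        return bitmap;
    }
}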
Original answer
This is your most fundamental problem:
System.Text.Encoding Enc = System.Text.Encoding.GetEncoding(response.CharacterSet);
StreamReader sr = new StreamReader(response.GetResponseStream(), Enc);
string sDoc = sr.ReadToEnd();
sr.Close();
byte[] by = Encoding.ASCII.GetBytes(sDoc);
You're reading an image as if it were a text file. It won't be. You'll be losing data like this.
Additionally, you're closing the memory stream that you're passing into the Bitmap constructor - you shouldn't do that.
You should just copy the response stream directly into a MemoryStream, and use that for the Bitmap:
MemoryStream stream = new MemoryStream();
using (var input = response.GetResponseStream())
{
input.CopyTo(stream);
}
stream.Position = 0;
Bitmap bitmap = new Bitmap(stream);
Oh, and you should also use a using statement for the response, otherwise that won't get disposed, which can cause timeouts for future requests due to connection pooling.
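In other words, something like this (the same copy as above, just wrapped; `request` is assumed to be the HttpWebRequest you already created):
using (var response = (HttpWebResponse)request.GetResponse())
using (var input = response.GetResponseStream())
{
    MemoryStream stream = new MemoryStream();
    input.CopyTo(stream);
    stream.Position = 0;
    Bitmap bitmap = new Bitmap(stream);
    // Use the bitmap here; keep 'stream' alive while the bitmap is in use.
}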
HttpRequest req = new HttpRequest(imageName, "http://panonest.com", "");
var imgSrc = req.MapPath("~/view/vacantapredeal/vacantapredeal.jpg");
Bitmap img = new Bitmap(imgSrc);
How should I do this? I get a "Parameter is not valid" exception, which is thrown by the Bitmap constructor.
Here is another way to do it:
WebClient MyWebClient = new WebClient();
byte[] BytesImage = MyWebClient.DownloadData("http://www.google.com/intl/en_com/images/srpr/logo3w.png");
System.IO.MemoryStream iStream = new System.IO.MemoryStream(BytesImage);
System.Drawing.Bitmap b = new System.Drawing.Bitmap(iStream);
Good luck!
If you are just loading the image from your local server you can do it easily using System.Drawing.Image:
System.Drawing.Bitmap bmp = new System.Drawing.Bitmap(
    System.Drawing.Image.FromFile(MapPath("~/view/vacantapredeal/vacantapredeal.jpg")));
If this is an image on a remote server, then according to MSDN, you need to do something like:
System.Net.WebRequest request = System.Net.WebRequest.Create("http://panonest.com" + imageName);
System.Net.WebResponse response = request.GetResponse();
System.IO.Stream responseStream = response.GetResponseStream();
Bitmap bitmap2 = new Bitmap(responseStream);
bitmap2.Save(MapPath("~/view/vacantapredeal/vacantapredeal.jpg")); // map the virtual path to a physical one before saving
I have an external link to an image which I want to stream, but I get this error when I try:
"URI formats are not supported."
This is what I tried:
Stream fileStream = new FileStream("http://www.lokeshdhakar.com/projects/lightbox2/images/image-2.jpg", FileMode.Open);
byte[] fileContent = new byte[fileStream.Length];
Can anyone shed some light on this?
Thanks
The FileStream constructor you are using must be given a path on your local hard drive, not an external URL.
You are probably looking for this:
string url = "http://www.lokeshdhakar.com/projects/lightbox2/images/image-2.jpg";
HttpWebRequest httpWebRequest = (HttpWebRequest)HttpWebRequest.Create(url);
HttpWebResponse httpWebReponse = (HttpWebResponse)httpWebRequest.GetResponse();
Stream stream = httpWebReponse.GetResponseStream();
Probably also for this:
Image pic = Image.FromStream(stream);
MemoryStream ms = new MemoryStream();
pic.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
Byte[] arr = ms.ToArray();
FileStream doesn't support opening files over the internet.
Try this:
var webClient = new WebClient();
using (var fileStream = webClient.OpenRead("http://www.lokeshdhakar.com/projects/lightbox2/images/image-2.jpg"))
using (var memoryStream = new MemoryStream())
{
    // The response stream doesn't support Length, so buffer it first.
    fileStream.CopyTo(memoryStream);
    byte[] fileContent = memoryStream.ToArray();
}
In C#.NET, I want to fetch data from a URL and save it to a file in binary.
Using HttpWebRequest/StreamReader to read into a string and saving with StreamWriter works fine for ASCII, but non-ASCII characters get mangled because the framework thinks it has to worry about encodings and converts the data to or from Unicode.
What is the easiest way to GET data from a URL and save it to a file, binary, as-is?
// This code works, but for ASCII only
String url = "url...";
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);

// execute the request
HttpWebResponse response = (HttpWebResponse)request.GetResponse();

// we will read data via the response stream
Stream ReceiveStream = response.GetResponseStream();
StreamReader readStream = new StreamReader(ReceiveStream);
string contents = readStream.ReadToEnd();

string filename = @"...";

// create a writer and open the file
TextWriter tw = new StreamWriter(filename);
tw.Write(contents.Substring(5));
tw.Close();
Minimalist answer:
using (WebClient client = new WebClient()) {
client.DownloadFile(url, filePath);
}
Or in PowerShell (suggested in an anonymous edit):
$client = New-Object System.Net.WebClient
$client.DownloadFile($URL, $Filename)
Just don't use any StreamReader or TextWriter. Save into a file with a raw FileStream.
String url = ...;
HttpWebRequest request = (HttpWebRequest) WebRequest.Create(url);
// execute the request
HttpWebResponse response = (HttpWebResponse) request.GetResponse();
// we will read data via the response stream
Stream ReceiveStream = response.GetResponseStream();
string filename = ...;
byte[] buffer = new byte[1024];
FileStream outFile = new FileStream(filename, FileMode.Create);
int bytesRead;
while ((bytesRead = ReceiveStream.Read(buffer, 0, buffer.Length)) != 0)
    outFile.Write(buffer, 0, bytesRead);
// Or use a using statement instead
outFile.Close();
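The "using statement instead" variant mentioned in the comment would look roughly like this (same logic, just with deterministic disposal; the url and filename placeholders are still yours to fill in):
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (Stream receiveStream = response.GetResponseStream())
using (FileStream outFile = new FileStream(filename, FileMode.Create))
{
    byte[] buffer = new byte[1024];
    int bytesRead;
    while ((bytesRead = receiveStream.Read(buffer, 0, buffer.Length)) != 0)
        outFile.Write(buffer, 0, bytesRead);
}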
This is what I use:
string sUrl = "http://your.com/xml.file.xml";

// XmlTextReader can also read straight from the URL, if you only need to parse it
XmlTextReader rssReader = new XmlTextReader(sUrl);
XmlDocument rssDoc = new XmlDocument();

WebRequest wrGETURL = WebRequest.Create(sUrl);
WebResponse wr = wrGETURL.GetResponse();
Stream receiveStream = wr.GetResponseStream();
StreamReader reader = new StreamReader(receiveStream, Encoding.UTF8);
string content = reader.ReadToEnd();

XmlDocument content2 = new XmlDocument();
content2.LoadXml(content);
content2.Save("direct.xml");