I am trying to make a CLR assembly with .NET 2.0 integrated into MS SQL Server 2008. I call an API and I should receive a .zip file as the response. I store the response in a Stream and I want to export it to a physical .zip file.
I tried exporting the file with C# and with SQL (BCP or OLE) and both attempts resulted in a corrupted file. So I believe I am doing something wrong in the making of the stream.
The C# code is the following:
private static byte[] GetStreamFileResult(Cookie loginCookie, Guid fileGuid, String baseUri)
{
byte[] output = null;
//String result = null;
string url = "some url";
CookieContainer cookies = new CookieContainer();
cookies.Add(new Uri(url), loginCookie);
WebRequest request = WebRequest.Create(url);
(request as HttpWebRequest).CookieContainer = cookies;
WebResponse response = request.GetResponse();
HttpWebResponse resp = response as HttpWebResponse;
Stream dataStream = response.GetResponseStream();
using (MemoryStream ms = new MemoryStream())
{
CopyStream(dataStream, ms);
output = ms.ToArray();
}
dataStream.Close();
response.Close();
return output;
}
The C# code to export the zip is the following:
File.WriteAllBytes("C:\\folder\\t.zip", stream); // Requires System.IO
The copy from stream to stream:
public static void CopyStream(Stream input, Stream output)
{
if (input != null)
{
using (StreamReader reader = new StreamReader(input))
using (StreamWriter writer = new StreamWriter(output))
{
writer.Write(reader.ReadToEnd());
}
}
}
Your CopyStream is broken. You need to talk binary. You're currently treating binary zip data as though it were text:
byte[] buffer = new byte[2048];
int bytesRead;
while((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0) {
output.Write(buffer, 0, bytesRead);
}
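For completeness, a binary-safe CopyStream built around that loop might look like the sketch below (the 4096-byte buffer size is arbitrary):
public static void CopyStream(Stream input, Stream output)
{
    if (input == null)
        return;

    // Copy raw bytes; no StreamReader/StreamWriter, so the zip data is never
    // pushed through a text encoding.
    byte[] buffer = new byte[4096];
    int bytesRead;
    while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, bytesRead);
    }
}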
Related
I am calling a third-party service which returns a PDF file as an IO.Stream. I need to convert it to a MemoryStream and save it to a PDF file.
cRequestString = ".....";//You need to set up your own URL here.
//Make the API call
try
{
byte[] bHeaderBytes = System.Text.Encoding.UTF8.GetBytes(GetUserPasswordString()); //user and password for the third-party call.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(cRequestString);
request.Method = WebRequestMethods.Http.Get;
request.PreAuthenticate = true;
request.ContentType = "application/pdf";
request.Accept = "application/pdf";
request.Headers.Add("Authorization", "Basic " + Convert.ToBase64String(bHeaderBytes));
MemoryStream memStream;
WebResponse response = request.GetResponse();
using (Stream stream = response.GetResponseStream())
using (StreamReader reader = new StreamReader(stream))
{
memStream = new MemoryStream();
//read the response in small blocks
byte[] buffer = new Byte[1023];
int byteCount;
do
{
byteCount = stream.Read(buffer, 0, buffer.Length);
memStream.Write(buffer, 0, byteCount);
} while (byteCount > 0);
}
memStream.Seek(0, SeekOrigin.Begin);//set position to beginning
return memStream;
}
catch
{
return null;
}
//save MemoryStream to local pdf file
private void SavePDFFile(string cReportName, MemoryStream pdfStream)
{
//Check file exists, delete
if (File.Exists(cReportName))
{
File.Delete(cReportName);
}
using (FileStream file = new FileStream(cReportName, FileMode.Create, FileAccess.Write))
{
byte[] bytes = new byte[pdfStream.Length];
pdfStream.Read(bytes, 0, (int)pdfStream.Length);
file.Write(bytes, 0, bytes.Length);
pdfStream.Close();
}
}
You could do the following.
using (Stream stream = response.GetResponseStream())
using (MemoryStream memStream = new MemoryStream())
{
stream.CopyTo(memStream);
// TODO : Rest of your task
}
More details on Stream.CopyTo on MSDN
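If the end goal is just the file on disk, you can also skip the MemoryStream and copy the response straight into a FileStream (a sketch; Stream.CopyTo needs .NET 4.0 or later, and the target path is only an example):
using (Stream stream = response.GetResponseStream())
using (FileStream file = new FileStream(@"C:\temp\report.pdf", FileMode.Create, FileAccess.Write))
{
    // Copies the raw bytes; no StreamReader, so nothing is re-encoded as text.
    stream.CopyTo(file);
}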
I have some XML data which I would like to compress using GZipStream and upload to a web service. I would like to create the gzip file in memory instead of creating it on the local disk. I have tried the following:
public string class1(string url, string xml)
{
byte[] data = Encoding.ASCII.GetBytes(xml);
MemoryStream memory = new MemoryStream();
GZipStream gzip = new GZipStream(memory, CompressionMode.Compress, true);
gzip.Write(data, 0, data.Length);
byte[] zip=memory.ToArray();
HttpWebRequest wReq = (HttpWebRequest)WebRequest.Create(url);
wReq.Method = "POST";
wReq.ContentType = "application/zip";
var reqStream = wReq.GetRequestStream();
reqStream.Write(zip,0,zip.Length);
reqStream.Close();
var wRes = wReq.GetResponse();
var resStream = wRes.GetResponseStream();
var resgzip = new GZipStream(resStream, CompressionMode.Decompress);
var reader = new StreamReader(resgzip);
var textResponse = reader.ReadToEnd();
reader.Close();
resStream.Close();
wRes.Close();
return textResponse;
}
After writing the data to the web service, the server unzips the file and processes it. While the server decompresses the data, an exception is thrown on the server: "Premature end of file". Please help me with this.
Add the method below to convert a Stream to a MemoryStream:
public static MemoryStream Read(Stream stream)
{
MemoryStream memStream = new MemoryStream();
byte[] readBuffer = new byte[4096];
int bytesRead;
while ((bytesRead = stream.Read(readBuffer, 0, readBuffer.Length)) > 0)
memStream.Write(readBuffer, 0, bytesRead);
memStream.Position = 0; // rewind so callers (here, GZipStream) read from the start
return memStream;
}
Then call it as below:
var wRes = wReq.GetResponse();
var memstream = Read(wRes.GetResponseStream());
var resgzip = new GZipStream(memstream, CompressionMode.Decompress);
var reader = new StreamReader(resgzip);
var textResponse = reader.ReadToEnd();
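As a side note on the "Premature end of file" error in the original question: GZipStream buffers the compressed data, and memory.ToArray() is being called before the final compressed block has been written. Closing the GZipStream before reading the bytes should address that; a minimal sketch using the same variable names as the question:
byte[] data = Encoding.ASCII.GetBytes(xml);
byte[] zip;
using (MemoryStream memory = new MemoryStream())
{
    // The "true" argument leaves the MemoryStream open after the GZipStream
    // is disposed, so its contents can still be read afterwards.
    using (GZipStream gzip = new GZipStream(memory, CompressionMode.Compress, true))
    {
        gzip.Write(data, 0, data.Length);
    } // disposing the GZipStream flushes the final compressed block
    zip = memory.ToArray();
}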
I'm trying to download a .torrent file (not the contents of the torrent itself) in my .NET application.
Using the following code works for other files, but not for .torrent files. The resulting file is about 1-3 KB smaller than if I download the file via a browser. When I open it in a torrent client, it says the torrent is corrupt.
WebClient web = new WebClient();
web.Headers.Add("Content-Type", "application/x-bittorrent");
web.DownloadFile("http://kat.ph/torrents/linux-mint-12-gnome-mate-dvd-64-bit-t6008958/", "test.torrent");
Opening the URL in a browser results in the file being downloaded correctly.
Any ideas as to why this would happen? Are there any good alternatives to WebClient that would download the file correctly?
EDIT: I've tried this as well as WebClient, and it results in the same thing:
private void DownloadFile(string url, string file)
{
byte[] result;
byte[] buffer = new byte[4096];
WebRequest wr = WebRequest.Create(url);
wr.ContentType = "application/x-bittorrent";
using (WebResponse response = wr.GetResponse())
{
using (Stream responseStream = response.GetResponseStream())
{
using (MemoryStream memoryStream = new MemoryStream())
{
int count = 0;
do
{
count = responseStream.Read(buffer, 0, buffer.Length);
memoryStream.Write(buffer, 0, count);
} while (count != 0);
result = memoryStream.ToArray();
using (BinaryWriter writer = new BinaryWriter(new FileStream(file, FileMode.Create)))
{
writer.Write(result);
}
}
}
}
}
The problem is that the server returns content compressed with gzip, and you are writing this compressed content straight to the file. For such cases you should check the "Content-Encoding" header and use the proper stream reader to decompress the source.
I modified your function to handle gzipped content:
private void DownloadFile(string url, string file)
{
byte[] result;
byte[] buffer = new byte[4096];
WebRequest wr = WebRequest.Create(url);
wr.ContentType = "application/x-bittorrent";
using (WebResponse response = wr.GetResponse())
{
bool gzip = response.Headers["Content-Encoding"] == "gzip";
var responseStream = gzip
? new GZipStream(response.GetResponseStream(), CompressionMode.Decompress)
: response.GetResponseStream();
using (MemoryStream memoryStream = new MemoryStream())
{
int count = 0;
do
{
count = responseStream.Read(buffer, 0, buffer.Length);
memoryStream.Write(buffer, 0, count);
} while (count != 0);
result = memoryStream.ToArray();
using (BinaryWriter writer = new BinaryWriter(new FileStream(file, FileMode.Create)))
{
writer.Write(result);
}
}
}
}
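Alternatively, HttpWebRequest can decompress the response for you, so the download loop never sees the gzipped bytes (a sketch; only the request setup changes):
HttpWebRequest wr = (HttpWebRequest)WebRequest.Create(url);
wr.ContentType = "application/x-bittorrent";
// Let the framework transparently decompress gzip/deflate responses.
wr.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;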
Hi all:
I have created a .zip file on my web site, say 1110_1200_events.zip. I used the following code to return .zip files as a FileResult.
public FileResult GetEvents()
{
string fileName = "1020_1200_events.zip",
filePath = Server.MapPath("~/public/Event/" + fileName);
return File(filePath, "application/zip", fileName);
}
The problem is that if I use a WebRequest to read the stream of the file, I get an I/O exception at webResponse.GetResponseStream().Read(buffer, 0, buffer.Length): "Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host."
Below is the code snippet. How do I work around it? Thank you.
var webRequest = WebRequest.Create(GetSPIUrl() + "SB/GetEvents");
webRequest.Method = "POST";
webRequest.ContentType = "application/zip";
StreamWriter writer = new StreamWriter(webRequest.GetRequestStream());
writer.WriteLine();
writer.Close();
// Send the data to the webserver
var webResponse = webRequest.GetResponse();
var reader = new StreamReader(webResponse.GetResponseStream(), Encoding.UTF8);
FileInfo fi = new FileInfo("myData.zip");
using (FileStream fs = fi.OpenWrite())
{
byte[] buffer = new byte[8 * 1024];
int len;
while ((len = webResponse.GetResponseStream().Read(buffer, 0, buffer.Length)) > 0)
{
fs.Write(buffer, 0, len);
}
}
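Two client-side oddities in the snippet are worth ruling out first: the response stream is wrapped in a StreamReader that is never used, and GetResponseStream() is called again on every loop iteration. Below is a sketch that gets the stream once and copies it in binary blocks (it may not cure the connection reset if the server itself aborts the response, but it removes those issues):
var webRequest = WebRequest.Create(GetSPIUrl() + "SB/GetEvents");
webRequest.Method = "POST";
webRequest.ContentType = "application/zip";
using (var writer = new StreamWriter(webRequest.GetRequestStream()))
{
    writer.WriteLine(); // empty body, as in the original snippet
}

using (var webResponse = webRequest.GetResponse())
using (var responseStream = webResponse.GetResponseStream())
using (var fs = new FileStream("myData.zip", FileMode.Create, FileAccess.Write))
{
    byte[] buffer = new byte[8 * 1024];
    int len;
    while ((len = responseStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        fs.Write(buffer, 0, len);
    }
}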
In C#/.NET, I want to fetch data from a URL and save it to a file in binary.
Using HttpWebRequest/StreamReader to read into a string and saving with StreamWriter works fine for ASCII, but non-ASCII characters get mangled because the system thinks it has to worry about encodings, converting to or from Unicode or whatever.
What is the easiest way to GET data from a URL and save it to a file in binary, as-is?
// This code works, but for ASCII only
String url = "url...";
HttpWebRequest request = (HttpWebRequest)
WebRequest.Create(url);
// execute the request
HttpWebResponse response = (HttpWebResponse)
request.GetResponse();
// we will read data via the response stream
Stream ReceiveStream = response.GetResponseStream();
StreamReader readStream = new StreamReader( ReceiveStream );
string contents = readStream.ReadToEnd();
string filename = @"...";
// create a writer and open the file
TextWriter tw = new StreamWriter(filename);
tw.Write(contents.Substring(5));
tw.Close();
Minimalist answer:
using (WebClient client = new WebClient()) {
client.DownloadFile(url, filePath);
}
Or in PowerShell (suggested in an anonymous edit):
$client = New-Object System.Net.WebClient
$client.DownloadFile($URL, $Filename)
Just don't use any StreamReader or TextWriter. Save into a file with a raw FileStream.
String url = ...;
HttpWebRequest request = (HttpWebRequest) WebRequest.Create(url);
// execute the request
HttpWebResponse response = (HttpWebResponse) request.GetResponse();
// we will read data via the response stream
Stream ReceiveStream = response.GetResponseStream();
string filename = ...;
byte[] buffer = new byte[1024];
FileStream outFile = new FileStream(filename, FileMode.Create);
int bytesRead;
while((bytesRead = ReceiveStream.Read(buffer, 0, buffer.Length)) != 0)
outFile.Write(buffer, 0, bytesRead);
// Or wrap the streams in using statements instead of closing them manually
outFile.Close();
ReceiveStream.Close();
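For reference, the same copy with the streams wrapped in using blocks, as the comment above suggests (a sketch; url and filename as before):
using (Stream receiveStream = response.GetResponseStream())
using (FileStream outFile = new FileStream(filename, FileMode.Create))
{
    byte[] buffer = new byte[1024];
    int bytesRead;
    // Raw byte copy: no StreamReader, so nothing is decoded or re-encoded.
    while ((bytesRead = receiveStream.Read(buffer, 0, buffer.Length)) != 0)
    {
        outFile.Write(buffer, 0, bytesRead);
    }
}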
This is what I use:
string sUrl = "http://your.com/xml.file.xml";
WebRequest wrGETURL = WebRequest.Create(sUrl);
using (WebResponse wr = wrGETURL.GetResponse())
using (Stream receiveStream = wr.GetResponseStream())
using (StreamReader reader = new StreamReader(receiveStream, Encoding.UTF8))
{
string content = reader.ReadToEnd();
XmlDocument content2 = new XmlDocument();
content2.LoadXml(content);
content2.Save("direct.xml");
}