How to set custom location for rdlc report on IIS? - c#

Although I can create the RDLC report in debug mode, under IIS I get the error "Access to the path 'C:\xxx.xlsx' is denied." After looking on the web for a workaround, I see that many solutions suggest giving the IIS user permission to the C: drive. However, it does not seem wise to grant permission to an entire drive just to render a report. So, how can I change this render location, e.g. to C:\inetpub\MyApplication? On the other hand, I think no settings are needed on the reporting side, i.e. ReportViewer.ProcessingMode = ProcessingMode.Local; or the rdlc properties "Build Action" and "Copy to Output Directory"?
Note: I do not want the reports to be rendered on the clients' machines, as some of them have no rights to write to any location under C:\, and I think generating the reports in a location on IIS is much better. Isn't it?
So, what is the best solution in this situation?
Update: How can I modify this method so that it just reads the stream as Excel without writing it to disk?
public static void StreamToProcess(Stream readStream)
{
    // Write the incoming stream to a temporary .xlsx file in the user's IE cache folder.
    var filePath = String.Format("{0}\\{1}.{2}",
        Environment.GetFolderPath(Environment.SpecialFolder.InternetCache), "MyFile", "xlsx");
    var writeStream = new FileStream(filePath, FileMode.Create, FileAccess.Write);
    const int length = 16384;
    var buffer = new Byte[length];
    var bytesRead = readStream.Read(buffer, 0, length);
    while (bytesRead > 0)
    {
        writeStream.Write(buffer, 0, bytesRead);
        bytesRead = readStream.Read(buffer, 0, length);
    }
    readStream.Close();
    writeStream.Close();
    // Open the file that was just written.
    Process.Start(filePath);
}

Here is how we render Excel files from an RDLC without saving them to a server folder. Just call the action and the file will download in the user's browser.
public FileStreamResult ExcelReport(int type)
{
    var body = _db.MyObjects.Where(x => x.Type == type);
    ReportDataSource rdsBody = new ReportDataSource("MyReport", body);
    ReportViewer viewer = new ReportViewer
    {
        ProcessingMode = ProcessingMode.Local
    };
    viewer.LocalReport.ReportPath = Server.MapPath(@"~\bin\MyReport.rdlc");
    viewer.LocalReport.DataSources.Clear();
    viewer.LocalReport.DataSources.Add(rdsBody);
    viewer.LocalReport.EnableHyperlinks = true;
    string filename = string.Format("MyReport_{0}.xls", type);
    byte[] bytes = viewer.LocalReport.Render("Excel");
    var stream = new MemoryStream(bytes);
    return File(stream, "application/ms-excel", filename);
}
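As a small refinement (not in the original answer), LocalReport.Render has an overload that reports the MIME type and file extension for the chosen format, so the last three lines of the action above could instead be written as:

string mimeType, encoding, extension;
string[] streams;
Microsoft.Reporting.WebForms.Warning[] warnings;

// Render hands back the correct MIME type and extension for the requested format,
// so neither has to be hard-coded and nothing is written to disk.
byte[] bytes = viewer.LocalReport.Render(
    "Excel", null, out mimeType, out encoding, out extension, out streams, out warnings);

return File(new MemoryStream(bytes), mimeType, string.Format("MyReport_{0}.{1}", type, extension));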

Related

How to create ShareFileClient without specifying its size, to be able to stream data to it?

I would like to create a new file in an Azure FileShare resource with the .NET API. I cannot specify its size at the beginning (because it will be filled with data later, e.g. a csv file filled with lines, up to a couple of GBs).
I was trying to use something like this in my code:
using Azure.Storage.Files.Shares;

ShareFileClient fileShare = new ShareFileClient(
    "MYConnectionString",
    "FileShareName",
    "TestDirectoryName");
if (!fileShare.Exists())
{
    fileShare.Create(0);
}
var stream = fileShare.OpenWrite(true, 0);
[Edit]
I got an exception: System.ArgumentException: 'options.MaxSize must be set if overwrite is set to true'. Is there any way to avoid specifying this size?
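One way to satisfy that check, assuming an upper bound is acceptable, is to pass ShareFileOpenWriteOptions with MaxSize set; the 1 GiB cap below is an arbitrary example:

// Sketch: OpenWrite with overwrite requires an explicit maximum size.
// The 1 GiB cap is an assumption; pick a bound that fits your data.
var options = new ShareFileOpenWriteOptions
{
    MaxSize = 1024L * 1024 * 1024
};
using (var stream = fileShare.OpenWrite(overwrite: true, position: 0, options: options))
{
    // write data here
}

Since an Azure file is a fixed-size resource, you may still need to trim it to the bytes actually written, via SetHttpHeaders as discussed below.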
Please try the latest package Azure.Storage.Files.Shares, version 12.5.0.
Note that in new ShareFileClient(), the last parameter is the directory name plus the file name.
See the code below:
ShareFileClient f2 = new ShareFileClient(connectionString, shareName, dirName + "/test.txt");
if (!f2.Exists())
{
    f2.Create(0);
}
The file with size 0 can be created, as the test confirmed.
Micro$oft's infinite wisdom: resizing is hidden in the SetHttpHeaders method.
public void AppendFile(string filePath, byte[] data)
{
    // connectionString and shareClient are assumed to be fields on the containing class.
    ShareFileClient fileShare = new ShareFileClient(connectionString, shareClient.Name, filePath);
    if (!fileShare.Exists())
        fileShare.Create(0);

    for (int i = 0; i < 10; i++)
    {
        var properties = fileShare.GetProperties();
        var openOptions = new ShareFileOpenWriteOptions();
        // Grow the file first: the "resize" is done via SetHttpHeaders(newSize).
        fileShare.SetHttpHeaders(properties.Value.ContentLength + data.Length);
        // Then open at the old end of the file and append.
        using (var stream = fileShare.OpenWrite(false, properties.Value.ContentLength, openOptions))
        {
            stream.Write(data, 0, data.Length);
            stream.Flush();
        }
    }
}
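A quick usage sketch (the path and payload here are made up):

// Hypothetical call: appends "hello world" ten times (per the loop above) to dir/log.txt on the share.
var payload = System.Text.Encoding.UTF8.GetBytes("hello world\n");
AppendFile("dir/log.txt", payload);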

Error trying to upload file via FTP with C#

I pass my model and a file that I want to save to a specific path into my method, but I get the following error:
Could not find file 'C:\img\iis2.png' : C\\Program Files (x86)\img\iis2.png
I have already tried several examples that I found on the web, but so far nothing has worked for me.
string path = @"..\img\";
Code:
public ActionResult Guardar_registro(Models.CascadingModelLevantamiento model, HttpPostedFileBase file)
{
    try
    {
        // fileName, path, and ftpURI are assumed to be defined elsewhere in the controller.
        FtpWebRequest request = (FtpWebRequest)FtpWebRequest.Create(ftpURI + "/" +
            Path.GetFileName(fileName));
        WebRequest ftpRequest = WebRequest.Create(ftpURI);
        ftpRequest.Method = WebRequestMethods.Ftp.MakeDirectory;
        ftpRequest.Credentials = new NetworkCredential("xxx#xxxx", "xxxx*");
        FileInfo fileInfo = new FileInfo(path + fileName);
        FileStream fileStream = fileInfo.OpenRead();
        int bufferLength = 2048;
        byte[] buffer = new byte[bufferLength];
        Stream uploadStream = request.GetRequestStream();
        int contentLength = fileStream.Read(buffer, 0, bufferLength);
        while (contentLength != 0)
        {
            uploadStream.Write(buffer, 0, contentLength);
            contentLength = fileStream.Read(buffer, 0, bufferLength);
        }
        uploadStream.Close();
        fileStream.Close();
        request = null;
        // return on success
        return View("../Levantamiento/Levantamiento");
    }
    catch (Exception e)
    {
        ViewBag.Message = "Hubo un error";
        return View("../Levantamiento/Levantamiento");
    }
}
I'm a little new to this, but I'd appreciate any help in solving this error.
I'm too new to comment to ask for clarification on what actually sets fileName, but the error states what's wrong: the file doesn't exist where the code is being told to look.
Assuming the file does exist where you want it, you're likely looking for something within System.IO.Path.
Add using System.IO; at the top of your class file to use any of these.
If it's in a subfolder within the location of your executable, you want something like Path.Combine(Directory.GetCurrentDirectory(), "img", fileName);
If it's in a specific folder like C:\img\iis2.png, you want something like Path.Combine(@"C:\img", "iis2.png");
There are a few examples on the Path.Combine documentation page.
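Putting that together, a minimal sketch of resolving the local file before the FTP upload (fileName mirrors the question and is assumed to be set by the caller):

using System.IO;

// Resolve the local file relative to the running application instead of a
// relative string like "..\img\", whose meaning depends on the process
// working directory under IIS.
string localPath = Path.Combine(Directory.GetCurrentDirectory(), "img", fileName);
if (!File.Exists(localPath))
{
    throw new FileNotFoundException("Source image not found", localPath);
}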

C# - Having problems with processing a Stream more than once

I have an Android mobile application that has functionality to set a profile picture.
I send a variable containing the path of the image to a method that does the following:
string imagePath = _ProfilePicture.GetTag (Resource.String.profile_picture_path).ToString ();
byte[] imageBytes = System.IO.File.ReadAllBytes(imagePath);
Stream imageStream = new MemoryStream(imageBytes);
After this block of code I send the imageStream variable to UploadUserProfilePicture(imageStream);, which is located on the WCF service.
Currently it only sends the stream, and because we cannot send another parameter containing the extension, we save all images as png. I have, however, found a library that requires the stream to be parsed to bytes; based on those bytes the file type can be retrieved.
However, when I then try to use the same stream to write the file to a location on the server, the position is at the end, so the file created is always 0 bytes.
I have tried:
Doing the conversion to bytes in another method and only returning the fileType; however, the original stream's position was still at the end.
The CopyTo function gave me the same results.
I tried using the Seek function to set the position back to zero, but I get a NotSupportedException.
I tried this as well:
string content;
var reader = new StreamReader(image);
content = reader.ReadToEnd();
image.Dispose();
image = new MemoryStream(Encoding.UTF8.GetBytes(content));
^ this seems to corrupt the stream, as I can no longer get the FileType or write the image to the location above. (Round-tripping binary image data through a UTF-8 StreamReader is lossy, which would explain the corruption.)
I have also had a look at: How to read a Stream and reset its position to zero even if stream.CanSeek == false
This is the method on the WCF Service:
public Result UploadUserProfilePicture(Stream image)
{
    try
    {
        FileType fileType = CommonMethods.ReadToEnd(image).GetFileType();
        Guid guid = Guid.NewGuid();
        string imageName = guid.ToString() + "." + fileType.Extension;
        var buf = new byte[1024];
        var path = Path.Combine(@"C:\" + imageName);
        int len = 0;
        using (var fs = File.Create(path))
        {
            while ((len = image.Read(buf, 0, buf.Length)) > 0)
            {
                fs.Write(buf, 0, len);
            }
        }
        return new Result
        {
            Success = true,
            Message = imageName
        };
    }
    catch (Exception ex)
    {
        return new Result
        {
            Success = false,
            Message = ex.ToString()
        };
    }
}
Link to the library used: https://github.com/Muraad/Mime-Detective
The CommonMethods.ReadToEnd(image) method comes from the answer to: How to convert a Stream into a byte[] in C#?
I hope this is enough information about my problem.
On the server side, you receive a stream from WCF that does not support seek operations. You can, however, read the stream into memory, since the GetFileType method requires a byte array as its input parameter. Then, instead of accessing the original stream again, you can write the bytes of the array to disk very easily using the File.WriteAllBytes method:
public Result UploadUserProfilePicture(Stream image)
{
    try
    {
        // Store the bytes in a variable
        var bytes = CommonMethods.ReadToEnd(image);
        FileType fileType = bytes.GetFileType();
        Guid guid = Guid.NewGuid();
        string imageName = guid.ToString() + "." + fileType.Extension;
        var path = Path.Combine(@"C:\" + imageName);
        File.WriteAllBytes(path, bytes);
        return new Result
        {
            Success = true,
            Message = imageName
        };
    }
    catch (Exception ex)
    {
        return new Result
        {
            Success = false,
            Message = ex.ToString()
        };
    }
}
Please note that this means you store a possibly large number of bytes in memory, just as you already did before. It would be better to use the stream without reading all of its bytes into memory, so looking for an alternative to the GetFileType method that can handle a stream is well worth the time. You could then first save the image to a temporary file and then open a new FileStream to discover the correct file type, so that you can rename the file.
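For illustration, a rough sketch of that temp-file approach, assuming a hypothetical DetectExtension helper that can identify the type from a seekable stream:

public string SaveUploadedImage(Stream image)
{
    // 1) Stream the upload straight to a temp file (no full in-memory copy).
    string tempPath = Path.GetTempFileName();
    using (var fs = File.Create(tempPath))
    {
        image.CopyTo(fs);
    }

    // 2) Reopen the temp file, which is seekable, to sniff the file type.
    string extension;
    using (var fs = File.OpenRead(tempPath))
    {
        extension = DetectExtension(fs); // hypothetical stream-based detector
    }

    // 3) Rename the temp file to its final name.
    string imageName = Guid.NewGuid().ToString() + "." + extension;
    File.Move(tempPath, Path.Combine(@"C:\", imageName));
    return imageName;
}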

Upload > 5 MB files to sharepoint 2013 programmatically

I am having trouble uploading large files to my SharePoint 2013/Office 365 site. I am using Visual Studio 2010 and .NET 4.0.
I have tried code from these questions:
SP2010 Client Object Model 3 MB limit - updating maxReceivedMessageSize doesn't get applied
maximum file upload size in sharepoint
Upload large files 100mb+ to Sharepoint 2010 via c# Web Service
How to download/upload files from/to SharePoint 2013 using CSOM?
But nothing is working. So I need a little help. Here is code that I have tried:
1: (I have also tried using SharePointOnlineCredentials instead of NetworkCredential for this one.)
#region 403 forbidden
byte[] content = System.IO.File.ReadAllBytes(fileInfo.FullName);
System.Net.WebClient webclient = new System.Net.WebClient();
System.Uri uri = new Uri(sharePointSite + directory + fileInfo.Name);
webclient.Credentials = new NetworkCredential(user, password.ToString(), sharePointSite + "Documents");
webclient.UploadData(uri, "PUT", content);
#endregion
2:
#region 500 Internal Server Error
using (var fs = new FileStream(fileInfo.FullName, FileMode.Open))
{
    Microsoft.SharePoint.Client.File.SaveBinaryDirect(
        context,
        web.ServerRelativeUrl + "/" + directory,
        fs,
        true);
}
#endregion
I have gotten smaller file uploads to work with:
#region File upload for smaller files
Folder folder = context.Web.GetFolderByServerRelativeUrl(web.ServerRelativeUrl + directory);
web.Context.Load(folder);
context.ExecuteQuery();
FileCreationInformation fci = new FileCreationInformation();
fci.Content = System.IO.File.ReadAllBytes(fileInfo.FullName);
fciURL = sharePointSite + directory;
fciURL += (fciURL[fciURL.Length - 1] == '/') ? fileInfo.Name : "/" + fileInfo.Name;
fci.Url = fciURL;
fci.Overwrite = true;
Microsoft.SharePoint.Client.FileCollection documentfiles = folder.Files;
context.Load(documentfiles);
context.ExecuteQuery();
Microsoft.SharePoint.Client.File file = documentfiles.Add(fci);
context.Load(file);
context.ExecuteQuery();
#endregion
My Using Statement:
using (Microsoft.SharePoint.Client.ClientContext context = new Microsoft.SharePoint.Client.ClientContext(sharePointSite))
{
    //string fciURL = "";
    exception = "";
    context.Credentials = new Microsoft.SharePoint.Client.SharePointOnlineCredentials(user, password);
    Web web = context.Web;
    web.Context.Credentials = context.Credentials;
    if (!web.IsPropertyAvailable("ServerRelativeUrl"))
    {
        web.Context.Load(web, w => w.ServerRelativeUrl);
        web.Context.ExecuteQuery();
    }
    //upload large file
}
The solution I went with:
MemoryStream destStream;
using (System.IO.FileStream fInfo = new FileStream(fileInfo.FullName, FileMode.Open))
{
    byte[] buffer = new byte[16 * 1024];
    byte[] byteArr;
    using (MemoryStream ms = new MemoryStream())
    {
        int read;
        while ((read = fInfo.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }
        byteArr = ms.ToArray();
    }
    destStream = new MemoryStream(byteArr);
    Microsoft.SharePoint.Client.File.SaveBinaryDirect(
        context,
        serverRelativeURL + directory + fileInfo.Name,
        destStream,
        true);
    context.ExecuteQuery();
    results = "File Uploaded";
    return true;
}
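As a side note, SaveBinaryDirect accepts any readable stream and performs the upload immediately, so the intermediate MemoryStream (and the ExecuteQuery call) are not strictly needed; a trimmed sketch using the same variables:

using (var fs = new FileStream(fileInfo.FullName, FileMode.Open))
{
    // SaveBinaryDirect streams the file content directly, so the whole file
    // never has to be buffered in memory first.
    Microsoft.SharePoint.Client.File.SaveBinaryDirect(
        context,
        serverRelativeURL + directory + fileInfo.Name,
        fs,
        true);
}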
The problem with your code snippet number 2 is that you missed the file name:

using (var fs = new FileStream(fileInfo.FullName, FileMode.Open))
{
    Microsoft.SharePoint.Client.File.SaveBinaryDirect(
        context,
        serverRelativeURL + directory + fileInfo.Name, // <-- the missing file name
        fs,
        true);
}
My research on the subject showed that using FrontPage Remote Procedure Calls was the most advantageous way of reliably uploading large files.
This is because FrontPage RPC supports file fragmentation, which helps avoid OutOfMemory exceptions caused by Windows needing to allocate the entire file in contiguous memory.
It also supports sending metadata, which is useful in pretty much any file-upload application. One major advantage is that you can specify the correct content type up front, without a user having to log in and change it later (with all the other methods I tried, it would just be set to the default type).
See my answer on the SharePoint StackExchange for further detail on implementing FrontPage RPC.

how to move files from one ftp to another

I need to move files from one FTP server to another (currently using FtpWebRequest). Both require authentication and have different settings (timeout, ASCII mode, active mode, etc.). Is downloading the files from one to a local server and then uploading them to the other significantly slower than copying them directly between servers (if that even exists; how would you do it, RenameTo?)? It feels like a direct copy should be faster, but I'm not sure; I have little understanding of how file copying and downloading work.
The files are all .txt or .csv and mostly around 3-10 MB each, so quite a bit of data.
You can copy a file from FTP server A to FTP server B using FXP. Both servers and the client have to support that feature.
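If FXP is not available, the usual fallback is to pipe the download stream from server A straight into the upload stream to server B, so the file never touches the local disk; a minimal sketch with made-up URIs and credentials:

using System.Net;

// Download request from server A.
var download = (FtpWebRequest)WebRequest.Create("ftp://serverA/path/file.csv");
download.Method = WebRequestMethods.Ftp.DownloadFile;
download.Credentials = new NetworkCredential("userA", "passA");

// Upload request to server B.
var upload = (FtpWebRequest)WebRequest.Create("ftp://serverB/path/file.csv");
upload.Method = WebRequestMethods.Ftp.UploadFile;
upload.Credentials = new NetworkCredential("userB", "passB");

using (var response = (FtpWebResponse)download.GetResponse())
using (var source = response.GetResponseStream())
using (var target = upload.GetRequestStream())
{
    // Stream the bytes through without buffering the whole file.
    source.CopyTo(target);
}

This still pays for one download plus one upload, but it avoids the extra disk round-trip on the middle machine.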
Sometimes we need to download or upload files from an FTP server. Here is a good example of an FTP operation in C#.
You can use this; it will help you build a C# program that fulfills your requirements.
File download from an FTP server:
public void DownloadFile(string HostURL, string UserName, string Password, string SourceDirectory, string FileName, string LocalDirectory)
{
    if (!File.Exists(LocalDirectory + FileName))
    {
        try
        {
            FtpWebRequest requestFileDownload = (FtpWebRequest)WebRequest.Create(HostURL + "/" + SourceDirectory + "/" + FileName);
            requestFileDownload.Credentials = new NetworkCredential(UserName, Password);
            requestFileDownload.Method = WebRequestMethods.Ftp.DownloadFile;
            FtpWebResponse responseFileDownload = (FtpWebResponse)requestFileDownload.GetResponse();
            Stream responseStream = responseFileDownload.GetResponseStream();
            FileStream writeStream = new FileStream(LocalDirectory + FileName, FileMode.Create);
            int Length = 2048;
            Byte[] buffer = new Byte[Length];
            int bytesRead = responseStream.Read(buffer, 0, Length);
            while (bytesRead > 0)
            {
                writeStream.Write(buffer, 0, bytesRead);
                bytesRead = responseStream.Read(buffer, 0, Length);
            }
            responseStream.Close();
            writeStream.Close();
            requestFileDownload = null;
            responseFileDownload = null;
        }
        catch (Exception ex)
        {
            throw;
        }
    }
}
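The example above only covers the download direction; a matching upload sketch in the same style (not from the original answer) might look like this:

public void UploadFile(string HostURL, string UserName, string Password, string TargetDirectory, string FileName, string LocalDirectory)
{
    FtpWebRequest requestFileUpload = (FtpWebRequest)WebRequest.Create(HostURL + "/" + TargetDirectory + "/" + FileName);
    requestFileUpload.Credentials = new NetworkCredential(UserName, Password);
    requestFileUpload.Method = WebRequestMethods.Ftp.UploadFile;

    using (FileStream readStream = new FileStream(LocalDirectory + FileName, FileMode.Open))
    using (Stream uploadStream = requestFileUpload.GetRequestStream())
    {
        // Mirror of the download loop: copy the local file into the request stream.
        readStream.CopyTo(uploadStream);
    }

    using (FtpWebResponse response = (FtpWebResponse)requestFileUpload.GetResponse())
    {
        // response.StatusDescription can be inspected for the server's reply.
    }
}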
Some Good Examples
Hope it will help you.
