I am using a FileStream (with FileMode.Create) to upload a .csv file to a server and then read it into a SQL database. Once it is read in, I just delete the file from the folder it was written to. The goal is just to get the file into the database. This runs fine locally, but I cannot get write access on the new server, so I get an UnauthorizedAccessException. I don't think it is necessary to upload the file to the server just to read it into the SQL table, but I am having trouble adjusting the code.
[HttpPost]
public ActionResult UploadValidationTable(HttpPostedFileBase csvFile)
{
    var inputFileDescription = new CsvFileDescription
    {
        SeparatorChar = ',',
        FirstLineHasColumnNames = true
    };
    var cc = new CsvContext();
    var filePath = uploadFile(csvFile.InputStream);
    var model = cc.Read<Credit>(filePath, inputFileDescription);
    try
    {
        var entity = new Entities();
        foreach (var item in model)
        {
            var tc = new TemporaryCsvUpload
            {
                Id = item.Id,
                Amount = item.Amount,
                Date = item.Date,
                Number = item.Number,
                ReasonId = item.ReasonId,
                Notes = item.Notes
            };
            entity.TemporaryCsvUploads.Add(tc);
        }
        entity.SaveChanges();
        System.IO.File.Delete(filePath);
Here is the uploadFile method:
private string uploadFile(Stream serverFileStream)
{
    const string directory = "~/Content/CSVUploads";
    var directoryExists = Directory.Exists(Server.MapPath(directory));
    if (!directoryExists)
    {
        Directory.CreateDirectory(Server.MapPath(directory));
    }
    var targetFolder = Server.MapPath(directory);
    var filename = Path.Combine(targetFolder, Guid.NewGuid() + ".csv");
    try
    {
        const int length = 256;
        var buffer = new byte[length];
        // write the required bytes
        using (var fs = new FileStream(filename, FileMode.Create))
        {
            int bytesRead;
            do
            {
                bytesRead = serverFileStream.Read(buffer, 0, length);
                fs.Write(buffer, 0, bytesRead);
            } while (bytesRead == length);
        }
        serverFileStream.Dispose();
        return filename;
    }
    catch (Exception)
    {
        return string.Empty;
    }
}
To sum it up, I am uploading a .csv file to a temporary location, reading it into an object, reading it into a database, then deleting the .csv file out of the temporary location. I am using Linq2Csv to create the object. Can I do this without uploading the file to the server (because I can't get write access)?
According to the LINQ to CSV library documentation (http://www.codeproject.com/Articles/25133/LINQ-to-CSV-library), you can read from a StreamReader:
Read<T>(StreamReader stream)
Read<T>(StreamReader stream, CsvFileDescription fileDescription)
So instead of writing the upload to disk at all, you can probably wrap the uploaded stream in a StreamReader (or build the content with a StringBuilder; see "Write StringBuilder to Stream" and "How to take a stringbuilder and convert it to a streamReader?") and pass that straight to your CsvContext.
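Putting that together, here is a minimal sketch of the action reading straight from the request stream, so nothing is ever written to the server's disk. It assumes Credit maps your CSV columns as in the question; the final redirect is a placeholder. Note that LINQ to CSV reads lazily, so enumerate the rows before the reader is disposed.
[HttpPost]
public ActionResult UploadValidationTable(HttpPostedFileBase csvFile)
{
    var inputFileDescription = new CsvFileDescription
    {
        SeparatorChar = ',',
        FirstLineHasColumnNames = true
    };
    var cc = new CsvContext();
    // Wrap the uploaded stream directly; no temporary file is created.
    using (var reader = new StreamReader(csvFile.InputStream))
    {
        var model = cc.Read<Credit>(reader, inputFileDescription);
        var entity = new Entities();
        foreach (var item in model) // enumerated while the reader is still open
        {
            entity.TemporaryCsvUploads.Add(new TemporaryCsvUpload
            {
                Id = item.Id,
                Amount = item.Amount,
                Date = item.Date,
                Number = item.Number,
                ReasonId = item.ReasonId,
                Notes = item.Notes
            });
        }
        entity.SaveChanges();
    }
    return RedirectToAction("Index"); // placeholder
}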
So I've managed to push basic files with the API by following the provided sample code.
I do this by reading the files line by line, converting them to a string, and sending them as raw text in the GitCommitRef, as the example did. However, I'm unsure how to push more complex files that can't easily be read and converted to a string, such as DLLs.
Is there a way to push files such as these using C#?
Below is the code I use to create the commit:
GitCommitRef commit = new GitCommitRef()
{
    Comment = "Add a sample file",
    Changes = new GitChange[]
    {
        new GitChange()
        {
            ChangeType = VersionControlChangeType.Add,
            Item = new GitItem() { Path = "/TESTFOLDER/" + fileName, GitObjectType = GitObjectType.Blob, IsFolder = false },
            NewContent = new ItemContent()
            {
                Content = Utilities.ReadFile(fileNamePath),
                ContentType = ItemContentType.RawText
            }
        }
    }
};
DLLs and other files that cannot easily be read into a string can be pushed by reading the file into a byte array, converting that array into a base64 string, and pushing that string with a content type of Base64Encoded.
Push the base64 string:
new GitChange()
{
    ChangeType = VersionControlChangeType.Add,
    Item = new GitItem() { Path = "/TESTFOLDER/" + fileName2, GitObjectType = GitObjectType.Blob, IsFolder = false },
    NewContent = new ItemContent()
    {
        // ReadFileAsBytes (shown below) already returns a base64 string.
        Content = Utilities.ReadFileAsBytes(fileNamePath2),
        ContentType = ItemContentType.Base64Encoded
    }
}
Get the string as base64:
using (System.IO.FileStream stream = new System.IO.FileStream(path, System.IO.FileMode.Open))
{
    byte[] arr = new byte[stream.Length];
    int numBytesRead = 0;
    int numBytesToRead = (int)stream.Length;
    while (numBytesToRead > 0)
    {
        // Read can return fewer bytes than requested, so advance the offset each pass.
        int n = stream.Read(arr, numBytesRead, numBytesToRead);
        if (n == 0)
        {
            break;
        }
        numBytesRead += n;
        numBytesToRead -= n;
    }
    return Convert.ToBase64String(arr);
}
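As a side note, since the code above already assumes the whole file fits in memory, the read-and-encode step can be collapsed into a single standard BCL call:
return Convert.ToBase64String(System.IO.File.ReadAllBytes(path));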
I am trying to implement a "Download All" button that will zip up a selection of files from the server and return them as a zip file download. With the code below, the zip file is created and the expected files are inside with the expected filenames, but the contents of the zipped files appear to be corrupted.
public ActionResult DownloadAll(Guid id)
{
    var assets = db.InviteAssets.Include(i => i.AssetPages).Where(w => w.InviteID == id).ToList();
    var cd = new System.Net.Mime.ContentDisposition
    {
        // for example foo.bak
        FileName = "allAssets.zip",
        // always prompt the user for downloading, set to true if you want
        // the browser to try to show the file inline
        Inline = false,
    };
    Response.AppendHeader("Content-Disposition", cd.ToString());
    using (var memoryStream = new MemoryStream())
    {
        using (var archive = new ZipArchive(memoryStream, ZipArchiveMode.Create, true))
        {
            foreach (var asset in assets)
            {
                string path, extension, name;
                if (asset.AssetType != AssetType.PDF)
                {
                    path = asset.AssetPages.First(f => f.PageNumber == 1).FilePath;
                }
                else
                {
                    path = string.Format("/Content/Assets/asset_{0}.pdf", asset.ID);
                }
                extension = path.Substring(path.IndexOf('.'));
                name = "asset" + asset.Order + extension;
                var file = archive.CreateEntry(name);
                using (var streamWriter = new StreamWriter(file.Open()))
                {
                    using (var fileStream = System.IO.File.Open(Server.MapPath("~" + path), FileMode.Open))
                    {
                        int filelength = (int)fileStream.Length;
                        var filedata = new byte[fileStream.Length];
                        streamWriter.Write(fileStream.Read(filedata, 0, filelength));
                    }
                }
            }
        }
        return File(memoryStream.ToArray(), "application/json", "allAssets.zip");
    }
}
I'm thinking my issue is therefore with this section:
using (var streamWriter = new StreamWriter(file.Open()))
{
    using (var fileStream = System.IO.File.Open(Server.MapPath("~" + path), FileMode.Open))
    {
        int filelength = (int)fileStream.Length;
        var filedata = new byte[fileStream.Length];
        streamWriter.Write(fileStream.Read(filedata, 0, filelength));
    }
}
I keep reading examples that use a method archive.CreateEntryFromFile(filePath, fileName), but no such method is recognised. Has this been deprecated, or does it require a higher version of the .NET Framework?
Thanks in advance.
The problem is here:
streamWriter.Write(fileStream.Read(filedata, 0, filelength));
You’re reading the file contents into filedata, but at the same time you’re writing the return value of Read into the archive, which is a single int. You need to read and write in two separate steps, and since this is binary data it should go to the underlying stream rather than through the StreamWriter:
fileStream.Read(filedata, 0, filelength);
streamWriter.BaseStream.Write(filedata, 0, filelength);
Or you can use the CreateEntryFromFile extension method (defined on the System.IO.Compression.ZipFileExtensions class, in the System.IO.Compression.FileSystem assembly).
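As an aside, the StreamWriter wrapper can be dropped entirely: open the entry's stream and let Stream.CopyTo do the buffered copy. A sketch using the same variables as the question:
var file = archive.CreateEntry(name);
using (var entryStream = file.Open())
using (var fileStream = System.IO.File.Open(Server.MapPath("~" + path), FileMode.Open))
{
    // CopyTo handles buffering and partial reads.
    fileStream.CopyTo(entryStream);
}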
I discovered that the reason I couldn't see the CreateEntryFromFile method was because I had not included a reference to System.IO.Compression.FileSystem. Once I added that, I could use CreateEntryFromFile which worked fine.
So now I have: archive.CreateEntryFromFile(Server.MapPath("~" + path), name);
Instead of:
var file = archive.CreateEntry(name);
using (var streamWriter = new StreamWriter(file.Open()))
{
    using (var fileStream = System.IO.File.Open(Server.MapPath("~" + path), FileMode.Open))
    {
        int filelength = (int)fileStream.Length;
        var filedata = new byte[fileStream.Length];
        fileStream.Read(filedata, 0, filelength);
        streamWriter.Write(filedata);
    }
}
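One more small thing worth fixing while you're in there (my observation, not part of the original answers): the response declares the zip with the wrong MIME type. application/zip (or application/octet-stream) is more accurate than application/json:
return File(memoryStream.ToArray(), "application/zip", "allAssets.zip");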
I am trying to download files from a SharePoint library using the client object model. I seem to be able to access the files using OpenBinaryStream() and then executing the query, but when I try to access the stream, it is a stream of Length = 0. I've seen many examples and I've tried several, but I can't get the files to download. I've uploaded successfully, and credentials and permissions aren't the problem. Anyone have any thoughts?
public SharepointFileContainer DownloadFolder(bool includeSubfolders, params object[] path)
{
    try
    {
        List<string> pathStrings = new List<string>();
        foreach (object o in path)
            pathStrings.Add(o.ToString());
        var docs = _context.Web.Lists.GetByTitle(Library);
        _context.Load(docs);
        _context.ExecuteQuery();
        var rootFolder = docs.RootFolder;
        _context.Load(rootFolder);
        _context.ExecuteQuery();
        var folder = GetFolder(rootFolder, pathStrings);
        var files = folder.Files;
        _context.Load(files);
        _context.ExecuteQuery();
        SharepointFileContainer remoteFiles = new SharepointFileContainer();
        foreach (Sharepoint.File f in files)
        {
            _context.Load(f);
            var file = f.OpenBinaryStream();
            _context.ExecuteQuery();
            var memoryStream = new MemoryStream();
            file.Value.CopyTo(memoryStream);
            remoteFiles.Files.Add(f.Name, memoryStream);
        }
        ...
}
SharepointFileContainer is just a custom class for my calling application to dispose of the streams when it has finished processing them. GetFolder is a recursive method to drill down the given folder path. I've had problems with providing the direct url and have had the most success with this.
My big question is why "file.Value" is a Stream with a Length == 0?
Thanks in advance!
EDIT:
Thanks for your input so far... unfortunately I'm experiencing the same problem. Both solutions pitched make use of OpenBinaryDirect, but the Stream on the resulting FileInformation object is likewise empty.
I'm still getting a file with 0 bytes downloaded.
You need to get the list item of the file (as a ListItem object) and then use its File property. Something like:
//...
// Previous code
//...
var docs = clientContext.Web.Lists.GetByTitle(Library);
var listItem = docs.GetItemById(listItemId);
clientContext.Load(docs);
clientContext.Load(listItem, i => i.File);
clientContext.ExecuteQuery();
var fileRef = listItem.File.ServerRelativeUrl;
var fileInfo = Microsoft.SharePoint.Client.File.OpenBinaryDirect(clientContext, fileRef);
var fileName = Path.Combine(filePath, (string)listItem.File.Name);
using (var fileStream = System.IO.File.Create(fileName))
{
    fileInfo.Stream.CopyTo(fileStream);
}
After that you can do whatever you need with the stream. The example above just saves it to the specified path, but you could also stream it to the browser, etc.
We can use the following code to get the memory stream.
var fileInformation = Microsoft.SharePoint.Client.File.OpenBinaryDirect(clientContext, file.ServerRelativeUrl);
if (fileInformation != null && fileInformation.Stream != null)
{
    using (MemoryStream memoryStream = new MemoryStream())
    {
        byte[] buffer = new byte[32768];
        int bytesRead;
        do
        {
            bytesRead = fileInformation.Stream.Read(buffer, 0, buffer.Length);
            memoryStream.Write(buffer, 0, bytesRead);
        } while (bytesRead != 0);
    }
}
Reference: https://praveenkasireddy.wordpress.com/2012/11/11/download-document-from-document-set-using-client-object-model-om/
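The manual loop above can also be replaced with Stream.CopyTo, which performs the same buffered copy; a sketch using the same fileInformation variable:
using (MemoryStream memoryStream = new MemoryStream())
{
    fileInformation.Stream.CopyTo(memoryStream);
    memoryStream.Position = 0; // rewind before handing it to a consumer
}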
Elmah has recently reported this bug:
Microsoft.SharePoint.Client.ServerException: The request message is too big. The server does not allow messages larger than 5242880 bytes.
The code where it fell over was:
public SharepointFileInfo Save(byte[] file, string fileName)
{
    using (var context = new ClientContext(this.SharepointServer))
    {
        context.Credentials = new NetworkCredential(this.UserName, this.Password, this.Domain);
        var list = context.Web.Lists.GetByTitle(this.DocumentLibrary);
        var fileCreationInformation = new FileCreationInformation
        {
            Content = file,
            Overwrite = true,
            Url = fileName
        };
        var uploadFile = list.RootFolder.Files.Add(fileCreationInformation);
        var listItem = uploadFile.ListItemAllFields;
        listItem.Update();
        context.ExecuteQuery();
        if (this.Metadata.Count > 0)
        {
            this.SaveMetadata(uploadFile, context);
        }
        return GetSharepointFileInfo(context, list, uploadFile);
    }
}
I am using Sharepoint 2013.
How do I fix this?
This is expected behaviour. You are using the classic API (new FileCreationInformation [...] context.ExecuteQuery()), which sends the whole file to the server in a single HTTP request. Your file is over 5 MB, so IIS receives a request larger than its limit and rejects it.
To upload a large file to SharePoint you need to use:
File.SaveBinaryDirect
(with this you don't need to change any server settings ;) )
using (FileStream fs = new FileStream(filePath, FileMode.Open))
{
    Microsoft.SharePoint.Client.File.SaveBinaryDirect(ctx, string.Format("/{0}/{1}", libraryName, System.IO.Path.GetFileName(filePath)), fs, true);
}
Check this link to see how to upload a file to SharePoint using CSOM:
Upload large files sample app for SharePoint
Good luck!
There are several approaches for doing this (uploading a file with metadata). I propose two methods, one simple and one more complex.
In two steps (simple):
Upload the file with File.SaveBinaryDirect.
Get the SPFile with CSOM by the file URL, using the SP.Web.getFileByServerRelativeUrl and File.listItemAllFields methods.
Here is an example (get listitem by file URL), with a sketch just below:
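A minimal sketch of the two-step approach, assuming ctx is your ClientContext and serverRelativeUrl is the same URL you passed to SaveBinaryDirect; the metadata column name is hypothetical:
var file = ctx.Web.GetFileByServerRelativeUrl(serverRelativeUrl);
var listItem = file.ListItemAllFields;
ctx.Load(listItem);
ctx.ExecuteQuery();
listItem["MyMetadataField"] = "some value"; // hypothetical column name
listItem.Update();
ctx.ExecuteQuery();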
With FileCreationInformation (more complex):
You need to use File.StartUpload, File.ContinueUpload and File.FinishUpload.
The code below is from the last part of the Microsoft tutorial, not mine:
public Microsoft.SharePoint.Client.File UploadFileSlicePerSlice(ClientContext ctx, string libraryName, string fileName, int fileChunkSizeInMB = 3)
{
    // Each sliced upload requires a unique ID.
    Guid uploadId = Guid.NewGuid();
    // Get the name of the file.
    string uniqueFileName = Path.GetFileName(fileName);
    // Ensure that target library exists, and create it if it is missing.
    if (!LibraryExists(ctx, ctx.Web, libraryName))
    {
        CreateLibrary(ctx, ctx.Web, libraryName);
    }
    // Get the folder to upload into.
    List docs = ctx.Web.Lists.GetByTitle(libraryName);
    ctx.Load(docs, l => l.RootFolder);
    // Get the information about the folder that will hold the file.
    ctx.Load(docs.RootFolder, f => f.ServerRelativeUrl);
    ctx.ExecuteQuery();
    // File object.
    Microsoft.SharePoint.Client.File uploadFile;
    // Calculate block size in bytes.
    int blockSize = fileChunkSizeInMB * 1024 * 1024;
    // Get the information about the folder that will hold the file.
    ctx.Load(docs.RootFolder, f => f.ServerRelativeUrl);
    ctx.ExecuteQuery();
    // Get the size of the file.
    long fileSize = new FileInfo(fileName).Length;
    if (fileSize <= blockSize)
    {
        // Use regular approach.
        using (FileStream fs = new FileStream(fileName, FileMode.Open))
        {
            FileCreationInformation fileInfo = new FileCreationInformation();
            fileInfo.ContentStream = fs;
            fileInfo.Url = uniqueFileName;
            fileInfo.Overwrite = true;
            uploadFile = docs.RootFolder.Files.Add(fileInfo);
            ctx.Load(uploadFile);
            ctx.ExecuteQuery();
            // Return the file object for the uploaded file.
            return uploadFile;
        }
    }
    else
    {
        // Use large file upload approach.
        ClientResult<long> bytesUploaded = null;
        FileStream fs = null;
        try
        {
            fs = System.IO.File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
            using (BinaryReader br = new BinaryReader(fs))
            {
                byte[] buffer = new byte[blockSize];
                Byte[] lastBuffer = null;
                long fileoffset = 0;
                long totalBytesRead = 0;
                int bytesRead;
                bool first = true;
                bool last = false;
                // Read data from file system in blocks.
                while ((bytesRead = br.Read(buffer, 0, buffer.Length)) > 0)
                {
                    totalBytesRead = totalBytesRead + bytesRead;
                    // You've reached the end of the file.
                    if (totalBytesRead == fileSize)
                    {
                        last = true;
                        // Copy to a new buffer that has the correct size.
                        lastBuffer = new byte[bytesRead];
                        Array.Copy(buffer, 0, lastBuffer, 0, bytesRead);
                    }
                    if (first)
                    {
                        using (MemoryStream contentStream = new MemoryStream())
                        {
                            // Add an empty file.
                            FileCreationInformation fileInfo = new FileCreationInformation();
                            fileInfo.ContentStream = contentStream;
                            fileInfo.Url = uniqueFileName;
                            fileInfo.Overwrite = true;
                            uploadFile = docs.RootFolder.Files.Add(fileInfo);
                            // Start upload by uploading the first slice.
                            using (MemoryStream s = new MemoryStream(buffer))
                            {
                                // Call the start upload method on the first slice.
                                bytesUploaded = uploadFile.StartUpload(uploadId, s);
                                ctx.ExecuteQuery();
                                // fileoffset is the pointer where the next slice will be added.
                                fileoffset = bytesUploaded.Value;
                            }
                            // You can only start the upload once.
                            first = false;
                        }
                    }
                    else
                    {
                        // Get a reference to your file.
                        uploadFile = ctx.Web.GetFileByServerRelativeUrl(docs.RootFolder.ServerRelativeUrl + System.IO.Path.AltDirectorySeparatorChar + uniqueFileName);
                        if (last)
                        {
                            // Is this the last slice of data?
                            using (MemoryStream s = new MemoryStream(lastBuffer))
                            {
                                // End sliced upload by calling FinishUpload.
                                uploadFile = uploadFile.FinishUpload(uploadId, fileoffset, s);
                                ctx.ExecuteQuery();
                                // Return the file object for the uploaded file.
                                return uploadFile;
                            }
                        }
                        else
                        {
                            using (MemoryStream s = new MemoryStream(buffer))
                            {
                                // Continue sliced upload.
                                bytesUploaded = uploadFile.ContinueUpload(uploadId, fileoffset, s);
                                ctx.ExecuteQuery();
                                // Update fileoffset for the next slice.
                                fileoffset = bytesUploaded.Value;
                            }
                        }
                    }
                } // while ((bytesRead = br.Read(buffer, 0, buffer.Length)) > 0)
            }
        }
        finally
        {
            if (fs != null)
            {
                fs.Dispose();
            }
        }
    }
    return null;
}
Hope this helps you.
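For completeness, a hypothetical call site for the method above (the site URL, credentials and path are placeholders; LibraryExists and CreateLibrary come from the same tutorial):
using (var ctx = new ClientContext("https://sharepoint.example.com/sites/mysite"))
{
    ctx.Credentials = new NetworkCredential("user", "password", "domain");
    // Files larger than 3 MB are uploaded in 3 MB slices; smaller ones go in a single request.
    var uploaded = UploadFileSlicePerSlice(ctx, "Documents", @"C:\temp\large-file.bin");
}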
I am using the SharpZipLib open source .net library from www.icsharpcode.net
My goal is to unzip an xml file and read it into a dataset. However I get the following error reading the file into a dataset: "Data at the root level is invalid. Line 1, position 1."
I believe what is happening is that the unzipping code is not releasing the file, for the following reasons:
1.) If I unzip the file, exit the application, and restart the app, I CAN read the unzipped file into a dataset.
2.) If I read in the xml file right after writing it out (no zipping), it works fine.
3.) If I write the dataset to xml, zip it up, unzip it, then attempt to read it back in, I get the exception.
The code below is pretty straight forward. UnZipFile will return the name of the file just unzipped. Right below this call is the call to read it into a dataset. The variable fileToRead is the full path to the newly unzipped xml file.
string fileToRead = UnZipFile(filepath, DOViewerUploadStoreArea);
ds.ReadXml(fileToRead);
private string UnZipFile(string file, string dirToUnzipTo)
{
    string unzippedfile = "";
    try
    {
        ZipInputStream s = new ZipInputStream(File.OpenRead(file));
        ZipEntry myEntry;
        string tmpEntry = String.Empty;
        while ((myEntry = s.GetNextEntry()) != null)
        {
            string directoryName = dirToUnzipTo;
            string fileName = Path.GetFileName(myEntry.Name);
            string fileWDir = directoryName + fileName;
            unzippedfile = fileWDir;
            FileStream streamWriter = File.Create(fileWDir);
            int size = 4096;
            byte[] data = new byte[4096];
            while (true)
            {
                size = s.Read(data, 0, data.Length);
                if (size > 0) { streamWriter.Write(data, 0, size); }
                else { break; }
            }
            streamWriter.Close();
        }
        s.Close();
    }
    catch (Exception ex)
    {
        LogStatus.WriteErrorLog(ex, "ERROR", "DOViewer.UnZipFile");
    }
    return (unzippedfile);
}
Well, what does the final file look like compared to the original? You don't show the zipping code, which might be part of the puzzle, especially as you are partially swallowing the exception.
I would also try ensuring everything IDisposable is Dispose()d, ideally via using; also, in case the problem is with path construction, use Path.Combine. And note that if myEntry.Name contains sub-directories, you will need to create them manually.
Here's what I have - it works for unzipping ICSharpCode.SharpZipLib.dll:
private string UnZipFile(string file, string dirToUnzipTo)
{
    string unzippedfile = "";
    try
    {
        using (Stream inStream = File.OpenRead(file))
        using (ZipInputStream s = new ZipInputStream(inStream))
        {
            ZipEntry myEntry;
            byte[] data = new byte[4096];
            while ((myEntry = s.GetNextEntry()) != null)
            {
                string fileWDir = Path.Combine(dirToUnzipTo, myEntry.Name);
                string dir = Path.GetDirectoryName(fileWDir);
                // note only supports a single level of sub-directories...
                if (!Directory.Exists(dir)) Directory.CreateDirectory(dir);
                unzippedfile = fileWDir; // note; returns last file if multiple
                using (FileStream outStream = File.Create(fileWDir))
                {
                    int size;
                    while ((size = s.Read(data, 0, data.Length)) > 0)
                    {
                        outStream.Write(data, 0, size);
                    }
                    outStream.Close();
                }
            }
            s.Close();
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
    return (unzippedfile);
}
It could also be that the problem is either in the code that writes the zip, or the code that reads the generated file.
I compared the original with the final using TextPad and they are identical.
I also rewrote the code to take advantage of using blocks; here is the code.
My issue seems to be centered around file locking or something similar: if I unzip the file, quit the application, then start it up again, it reads fine.
private string UnZipFile(string file, string dirToUnzipTo)
{
    string unzippedfile = "";
    try
    {
        using (ZipInputStream s = new ZipInputStream(File.OpenRead(file)))
        {
            ZipEntry theEntry;
            while ((theEntry = s.GetNextEntry()) != null)
            {
                string directoryName = dirToUnzipTo;
                string fileName = Path.GetFileName(theEntry.Name);
                string fileWDir = directoryName + fileName;
                unzippedfile = fileWDir;
                if (fileName != String.Empty)
                {
                    using (FileStream streamWriter = File.Create(fileWDir))
                    {
                        int size = 2048;
                        byte[] data = new byte[2048];
                        while (true)
                        {
                            size = s.Read(data, 0, data.Length);
                            if (size > 0)
                            {
                                streamWriter.Write(data, 0, size);
                            }
                            else
                            {
                                break;
                            }
                        }
                    }
                }
            }
        }
    }
    catch (Exception ex)
    {
        LogStatus.WriteErrorLog(ex, "ERROR", "DOViewer.UnZipFile");
    }
    return (unzippedfile);
}
This is a lot simpler to do with DotNetZip.
using (ZipFile zip = ZipFile.Read(ExistingZipFile))
{
    zip.ExtractAll(TargetDirectory);
}
If you want to decide on which files to extract ....
using (ZipFile zip = ZipFile.Read(ExistingZipFile))
{
    foreach (ZipEntry e in zip)
    {
        if (wantThisFile(e.FileName)) e.Extract(TargetDirectory);
    }
}
If you would like to overwrite existing files during extraction:
using (ZipFile zip = ZipFile.Read(ExistingZipFile))
{
    zip.ExtractAll(TargetDirectory, ExtractExistingFileAction.OverwriteSilently);
}
Or, to extract password-protected entries:
using (ZipFile zip = ZipFile.Read(ExistingZipFile))
{
    zip.Password = "Shhhh, Very Secret!";
    zip.ExtractAll(TargetDirectory, ExtractExistingFileAction.OverwriteSilently);
}