Umbraco Adding base64 File with SetValue - c#

I'll explain the problem right away, but first of all...is this achievable?
I have a Document Type in Umbraco where I store data from a Form. I can store everything except the file.
...
content.SetValue("notes", item.Notes);
content.SetValue("curriculum", item.Curriculum); /*this is the file*/
...
I'm adding items like this, where SetValue comes from the namespace Umbraco.Core.Models and has the signature void SetValue(string propertyTypeAlias, object value).
And the error returned is the following:
"String or binary data would be truncated. The statement has been terminated."
Did I misunderstand something? Shouldn't I be sending the base64? I'm adding the image to a media folder, where a sub-folder with a sequential number is created. If I point to an existing folder it appends the file just fine, but if I point to a new media sub-folder it also returns an error. Any ideas on how I should approach this?
Thanks in advance
Edit 1: After Cryothic's answer I've updated my code to the following:
byte[] tempByte = Convert.FromBase64String(item.Curriculum);
var mediaFile = _mediaService.CreateMedia(item.cvExtension, -1, Constants.Conventions.MediaTypes.File);
Stream fileStream = new MemoryStream(tempByte);
var fileName = Path.GetFileNameWithoutExtension(item.cvExtension);
mediaFile.SetValue("umbracoFile", fileName, fileStream);
_mediaService.Save(mediaFile);
and the error happens at mediaFile.SetValue(...).
If I upload a file from Umbraco it goes to "http://localhost:3295/media/1679/test.txt" and the next one would go to "http://localhost:3295/media/1680/test.txt". Where do I tell my request that it has to add to the /media folder and increment? Do I just point to the media folder and let Umbraco handle the incrementing part?
If I change the SetValue call to mediaFile.SetValue("curriculum", fileName, fileStream); the request succeeds, but the file is not added to the content itself and it ends up at "http://localhost:3295/umbraco/media" instead of "http://localhost:3295/media".
If I try content.SetValue("curriculum", item.cvExtension); the file is added to the content, but with the path "http://localhost:3295/umbraco/test.txt".
I don't quite understand how Umbraco inserts files into the media folder (outside Umbraco) and how you get the path created by the media service into the content service.

Do you need to save base64?
I have done something like that, but using the MediaService.
My project had the option to upload multiple images across multiple wizard steps, and I needed to save them all at once. So I looped through the uploaded files (HttpFileCollection) per step. acceptedFiletypes is a string list with the MIME types I'd allow.
for (int i = 0; i < files.Count; i++) {
    byte[] fileData = null;
    UploadedFile uf = null;
    try {
        if (acceptedFiletypes.Contains(files[i].ContentType)) {
            using (var binaryReader = new BinaryReader(files[i].InputStream)) {
                fileData = binaryReader.ReadBytes(files[i].ContentLength);
            }
            if (fileData.Length > 0) {
                uf = new UploadedFile {
                    FileName = files[i].FileName,
                    FileType = fileType,
                    FileData = fileData
                };
            }
        }
    }
    catch { }
    if (uf != null) {
        projectData.UploadedFiles.Add(uf);
    }
}
After the last step, I would loop through my projectData.UploadedFiles and do the following.
var service = Umbraco.Core.ApplicationContext.Current.Services.MediaService;
var mediaTypeAlias = "Image";
var mediaItem = service.CreateMedia(fileName, parentFolderID, mediaTypeAlias);
Stream fileStream = new MemoryStream(file.FileData);
mediaItem.SetValue("umbracoFile", fileName, fileStream);
service.Save(mediaItem);
I also had a check to see whether the uploaded filename ended in ".pdf". In that case I'd change the mediaTypeAlias to "File".
I hope this helps.
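For the base64 case from the question, the two parts can be combined: decode the string, create and save the media item, then point the document's property at the saved media item. Below is a minimal sketch under a few assumptions: Umbraco 7.6+ where media is referenced by UDI (older versions and other property editors may expect the media node ID instead), _mediaService/_contentService/content/item as in the question, and a picker-style "curriculum" property.
// Decode the base64 payload from the form and wrap it in a stream
byte[] fileBytes = Convert.FromBase64String(item.Curriculum);
using (var fileStream = new MemoryStream(fileBytes))
{
    // Create the media item under the media root (-1); Umbraco creates the
    // numbered /media/xxxx/ sub-folder itself when the item is saved
    var mediaFile = _mediaService.CreateMedia(item.cvExtension, -1, Constants.Conventions.MediaTypes.File);

    // "umbracoFile" is the conventional upload property on the File media type;
    // pass the file name including its extension so Umbraco can work out the file type
    mediaFile.SetValue("umbracoFile", item.cvExtension, fileStream);
    _mediaService.Save(mediaFile);

    // Link the saved media item to the document (assumption: UDI-based picker, Umbraco 7.6+)
    content.SetValue("curriculum", mediaFile.GetUdi().ToString());
    _contentService.Save(content);
}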

Related

Upload Csv to solution explorer

I'm trying to build an application in which a user can upload a CSV file and convert it into XML. Currently my controller can create a .txt file in a temp folder. However, when I open the .txt file it comes out corrupted.
I have two questions
1. How can I make it so that the file displays properly i.e. as items separated by commas?
2. How can I change my code to make the file upload into my solution explorer
Here is the relevant controller code:
[HttpPost("UploadFiles")]
public async Task<IActionResult> FileUpload(List<IFormFile> files)
{
    long size = files.Sum(f => f.Length);
    var filePaths = new List<string>();
    foreach (var formFile in files)
    {
        if (formFile.Length > 0)
        {
            var filePath = Path.GetTempPath() + Guid.NewGuid().ToString() + ".txt";
            filePaths.Add(filePath);
            using (var stream = new FileStream(filePath, FileMode.Create, FileAccess.ReadWrite))
            {
                await formFile.CopyToAsync(stream);
            }
        }
    }
    return Ok(new { count = files.Count, size, filePaths });
}
Any suggestions would be much appreciated
Thanks in advance
If the file comes out corrupted, I suspect the conversion is the problem. For now, try uploading it without any conversion.
For the second question you can do the following:
var filePath = Path.Combine(AppContext.BaseDirectory, $"{Guid.NewGuid().ToString()}.csv"); // or whatever extension the uploaded file actually has; don't change the original extension
This will store the file in the application's base directory, i.e. either the "bin" output directory or the directory where the source code is located.
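Putting the two suggestions together with the original action, a rough sketch might look like this (it keeps the uploaded file's own extension, skips any conversion, and writes to the application's base directory):
[HttpPost("UploadFiles")]
public async Task<IActionResult> FileUpload(List<IFormFile> files)
{
    long size = files.Sum(f => f.Length);
    var filePaths = new List<string>();
    foreach (var formFile in files)
    {
        if (formFile.Length > 0)
        {
            // Keep the original extension (e.g. ".csv") instead of forcing ".txt"
            var extension = Path.GetExtension(formFile.FileName);
            var filePath = Path.Combine(AppContext.BaseDirectory, $"{Guid.NewGuid()}{extension}");
            filePaths.Add(filePath);

            // Copy the raw upload to disk without any conversion
            using (var stream = new FileStream(filePath, FileMode.Create))
            {
                await formFile.CopyToAsync(stream);
            }
        }
    }
    return Ok(new { count = files.Count, size, filePaths });
}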

Converting multiple byte arrays to XML, zipping them and then downloading them using ASP.NET MVC

I'm trying to achieve the following in ASP.NET MVC:
Select a number of files in a view and return their id strings to my HomeController
Using those id strings, retrieve a List<byte[]> from my database, where I've stored XML files as blob types
Then convert each single byte[] back to an XML file, zip all of them and download them
Here's my most successful approach so far:
[HttpPost]
public ActionResult Download(List<string> fileIDs)
{
    List<byte[]> blobs = new List<byte[]>();
    foreach (var id in fileIDs)
    {
        //db.GetSelectedBlob returns a byte[] from the database
        //that matches the input parameter 'id'
        byte[] blob = db.GetSelectedBlob(id);
        blobs.Add(blob);
    }
    int counter = 0;
    MemoryStream mso = new MemoryStream();
    using (var zip = new ZipArchive(mso, ZipArchiveMode.Create, true))
    {
        foreach (var blob in blobs)
        {
            var file = File(blob, MediaTypeNames.Text.Xml, "UBL" + counter);
            ZipArchiveEntry fileInArchive = zip.CreateEntry(file.FileDownloadName, CompressionLevel.Optimal);
            counter++;
            using (Stream entryStream = fileInArchive.Open())
            using (var fileToCompressedStream = new MemoryStream(blob))
            {
                fileToCompressedStream.CopyTo(entryStream);
            }
        }
    }
    return File(mso.GetBuffer(), "application/zip", "UBL.zip");
}
The zip function works (though not entirely perfectly) and the downloaded archive contains the correct number of files matching my database query, but they show up with a generic file type.
So, the problem I want to resolve is the following:
The current file type of my files is 'File', which I want to change to 'XML File'. I've tried to incorporate several approaches that I've found on Stack Overflow but I can't seem to get the desired result.
Thanks in advance for your help guys!
Make sure your files have a .xml extension:
var file = File(blob, MediaTypeNames.Text.Xml, "UBL" + counter + ".xml");
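For context, the extension can also be applied directly when the entry is created, so the intermediate File(...) result isn't needed for naming. Below is a minimal sketch of the adjusted loop, using the same variables as the question (ToArray() is used instead of GetBuffer() so only the bytes actually written end up in the download):
int counter = 0;
var mso = new MemoryStream();
using (var zip = new ZipArchive(mso, ZipArchiveMode.Create, true))
{
    foreach (var blob in blobs)
    {
        // Give the entry a .xml extension so the extracted files are recognised as XML
        ZipArchiveEntry entry = zip.CreateEntry("UBL" + counter + ".xml", CompressionLevel.Optimal);
        counter++;
        using (Stream entryStream = entry.Open())
        using (var blobStream = new MemoryStream(blob))
        {
            blobStream.CopyTo(entryStream);
        }
    }
}
return File(mso.ToArray(), "application/zip", "UBL.zip");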

Get file path from inputstream?

I am trying to get the last modified date from a file, but I need its path. Could someone please show me how I can get the file path?
[HttpGet]
public string uploadfile(string token, string filenameP, DateTime modDate, HttpPostedFileBase file)
{
    MemoryStream target = new MemoryStream();
    file.InputStream.CopyTo(target);
    byte[] data = target.ToArray();
    //ModDate = File.GetLastWriteTimeUtc("Path");
}
You are creating a new file on the server when you upload. The last modified date will be "now" (the time the file is created). There is no way to snoop the user's machine to get this information (which is not part of the file itself). Can't be done with an HTTP form upload.
Now, some file types may contain metadata in the file which may have pertinent information. If you know the file type and it does contain such metadata then you can open the file and have a look.
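For example, JPEG uploads often carry an EXIF "date taken" tag that can be read once the upload has been buffered. A rough sketch, assuming full .NET Framework with System.Drawing available (tag 0x9003 is EXIF DateTimeOriginal; the helper name is hypothetical and many files simply won't have the tag):
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Globalization;
using System.IO;
using System.Text;

static DateTime? TryGetExifDateTaken(Stream imageStream)
{
    try
    {
        using (var image = Image.FromStream(imageStream))
        {
            // 0x9003 = EXIF DateTimeOriginal, stored as "yyyy:MM:dd HH:mm:ss"
            PropertyItem tag = image.GetPropertyItem(0x9003);
            string raw = Encoding.ASCII.GetString(tag.Value).TrimEnd('\0');
            return DateTime.ParseExact(raw, "yyyy:MM:dd HH:mm:ss", CultureInfo.InvariantCulture);
        }
    }
    catch (Exception)
    {
        // Not an image, no EXIF block, or an unreadable tag
        return null;
    }
}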
You just don't. Most (if not all) browsers do not provide this information in internet scenarios, for security reasons.
You can read the date with JavaScript (HTML5) and send it as a hidden input field of the form.
Something like
<script>
    function handleFileSelect(evt) {
        var files = evt.target.files; // FileList object
        // files is a FileList of File objects. List some properties.
        var output = [];
        for (var i = 0, f; f = files[i]; i++) {
            output.push('<li>' + (f.lastModifiedDate ? f.lastModifiedDate.toLocaleDateString() : 'n/a') + '</li>');
        }
        document.getElementById('list').innerHTML = '<ul>' + output.join('') + '</ul>';
    }
    document.getElementById('files').addEventListener('change', handleFileSelect, false);
</script>
http://www.html5rocks.com/en/tutorials/file/dndfiles/

DotNetZip: How to extract files, but ignoring the path in the zipfile?

I'm trying to extract files to a given folder, ignoring the path in the zipfile, but there doesn't seem to be a way.
This seems a fairly basic requirement given all the other good stuff implemented in there.
What am I missing?
The code is:
using (Ionic.Zip.ZipFile zf = Ionic.Zip.ZipFile.Read(zipPath))
{
    zf.ExtractAll(appPath);
}
While you can't specify it for a specific call to Extract() or ExtractAll(), the ZipFile class has a FlattenFoldersOnExtract field. When set to true, it flattens all the extracted files into one folder:
var flattenFoldersOnExtract = zip.FlattenFoldersOnExtract;
zip.FlattenFoldersOnExtract = true;
zip.ExtractAll();
zip.FlattenFoldersOnExtract = flattenFoldersOnExtract;
You'll need to remove the directory part of the filename just prior to unzipping...
using (var zf = Ionic.Zip.ZipFile.Read(zipPath))
{
    zf.ToList().ForEach(entry =>
    {
        entry.FileName = System.IO.Path.GetFileName(entry.FileName);
        entry.Extract(appPath);
    });
}
You can use the overload that takes a stream as a parameter. That way you have full control over the path the files are extracted to.
Example:
using (ZipFile zip = new ZipFile(ZipPath))
{
    foreach (ZipEntry e in zip)
    {
        string newPath = Path.Combine(FolderToExtractTo, e.FileName);
        if (e.IsDirectory)
        {
            Directory.CreateDirectory(newPath);
        }
        else
        {
            using (FileStream stream = new FileStream(newPath, FileMode.Create))
                e.Extract(stream);
        }
    }
}
That will fail if there are two files with the same filename. For example:
files\additionalfiles\file1.txt
temp\file1.txt
The first file will be renamed to file1.txt in the zip file, and when the second file is about to be renamed an exception is thrown saying that an item with the same key already exists.
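One way around that, while keeping the rename-and-flatten approach, is to make the flattened names unique before extracting. A rough sketch (requires System.Linq; the suffixing scheme is just one option):
using (var zf = Ionic.Zip.ZipFile.Read(zipPath))
{
    var usedNames = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
    foreach (var entry in zf.Where(e => !e.IsDirectory).ToList())
    {
        var baseName = System.IO.Path.GetFileNameWithoutExtension(entry.FileName);
        var extension = System.IO.Path.GetExtension(entry.FileName);
        var flatName = baseName + extension;

        // Append a counter when two entries would flatten to the same name
        int suffix = 1;
        while (!usedNames.Add(flatName))
        {
            flatName = baseName + "_" + suffix++ + extension;
        }

        entry.FileName = flatName;
        entry.Extract(appPath);
    }
}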

Why can't this file be deleted after using C1ZipFile?

The following code gives me a System.IO.IOException with the message 'The process cannot access the file'.
private void UnPackLegacyStats()
{
    DirectoryInfo oDirectory;
    XmlDocument oStatsXml;
    //Get the directory
    oDirectory = new DirectoryInfo(msLegacyStatZipsPath);
    //Check if the directory exists
    if (oDirectory.Exists)
    {
        //Loop files
        foreach (FileInfo oFile in oDirectory.GetFiles())
        {
            //Check if file is a zip file
            if (C1ZipFile.IsZipFile(oFile.FullName))
            {
                //Open the zip file
                using (C1ZipFile oZipFile = new C1ZipFile(oFile.FullName, false))
                {
                    //Check if the zip contains the stats
                    if (oZipFile.Entries.Contains("Stats.xml"))
                    {
                        //Get the stats as a stream
                        using (Stream oStatsStream = oZipFile.Entries["Stats.xml"].OpenReader())
                        {
                            //Load the stats as xml
                            oStatsXml = new XmlDocument();
                            oStatsXml.Load(oStatsStream);
                            //Close the stream
                            oStatsStream.Close();
                        }
                        //Loop hit elements
                        foreach (XmlElement oHitElement in oStatsXml.SelectNodes("/*/hits"))
                        {
                            //Do stuff
                        }
                    }
                    //Close the file
                    oZipFile.Close();
                }
            }
            //Delete the file
            oFile.Delete();
        }
    }
}
I am struggling to see where the file could still be locked. All objects that could be holding onto a handle to the file are in using blocks and are explicitly closed.
Is it something to do with using FileInfo objects rather than the strings returned by the static GetFiles method?
Any ideas?
I do not see problems in your code; everything looks OK. To check whether the problem lies in C1ZipFile, I suggest you initialize the zip from a stream instead of from a file, so that you close the stream explicitly:
//Open the zip file
using (Stream ZipStream = oFile.OpenRead())
using (C1ZipFile oZipFile = new C1ZipFile(ZipStream, false))
{
    // ...
Several other suggestions:
You do not need to call the Close() methods inside using (...) blocks; remove them.
Move the XML processing (the "loop hit elements" part) outside the zip processing, i.e. after the zip file is closed, so you keep the file open for as short a time as possible.
I assume you're getting the error on the oFile.Delete call. I was able to reproduce this error. Interestingly, the error only occurs when the file is not a zip file. Is this the behavior you are seeing?
It appears that the C1ZipFile.IsZipFile call is not releasing the file when it's not a zip file. I was able to avoid this problem by using a FileStream instead of passing the file path as a string (the IsZipFile function accepts either).
So the following modification to your code seems to work:
if (oDirectory.Exists)
{
    //Loop files
    foreach (FileInfo oFile in oDirectory.GetFiles())
    {
        using (FileStream oStream = new FileStream(oFile.FullName, FileMode.Open))
        {
            //Check if file is a zip file
            if (C1ZipFile.IsZipFile(oStream))
            {
                // ...
            }
        }
        //Delete the file
        oFile.Delete();
    }
}
In response to the original question in the subject: I don't know if it's possible to know whether a file can be deleted without attempting to delete it. You could always write a function that attempts to delete the file, catches the error if it can't, and returns a boolean indicating whether the delete succeeded.
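A minimal sketch of such a helper (the name is hypothetical):
// Tries to delete the file and reports whether it succeeded,
// instead of letting the exception bubble up
private static bool TryDeleteFile(string path)
{
    try
    {
        System.IO.File.Delete(path);
        return true;
    }
    catch (IOException)
    {
        // Still locked by another handle or process
        return false;
    }
    catch (UnauthorizedAccessException)
    {
        // No permission, or the file is read-only
        return false;
    }
}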
I'm just guessing: are you sure that oZipFile.Close() is enough? Perhaps you have to call oZipFile.Dispose() or oZipFile.Finalize() to be sure it has actually released the resources.
More than likely it's not being disposed. Any time you access something outside of managed code (streams, files, etc.) you MUST dispose of it. I learned that the hard way with ASP.NET and image files; it will fill up your memory, crash your server, etc.
In the interest of completeness, I am posting my working code, as the changes came from more than one source.
private void UnPackLegacyStats()
{
    DirectoryInfo oDirectory;
    XmlDocument oStatsXml;
    //Get the directory
    oDirectory = new DirectoryInfo(msLegacyStatZipsPath);
    //Check if the directory exists
    if (oDirectory.Exists)
    {
        //Loop files
        foreach (FileInfo oFile in oDirectory.GetFiles())
        {
            //Set empty xml
            oStatsXml = null;
            //Load file into a stream
            using (Stream oFileStream = oFile.OpenRead())
            {
                //Check if file is a zip file
                if (C1ZipFile.IsZipFile(oFileStream))
                {
                    //Open the zip file
                    using (C1ZipFile oZipFile = new C1ZipFile(oFileStream, false))
                    {
                        //Check if the zip contains the stats
                        if (oZipFile.Entries.Contains("Stats.xml"))
                        {
                            //Get the stats as a stream
                            using (Stream oStatsStream = oZipFile.Entries["Stats.xml"].OpenReader())
                            {
                                //Load the stats as xml
                                oStatsXml = new XmlDocument();
                                oStatsXml.Load(oStatsStream);
                            }
                        }
                    }
                }
            }
            //Check if we have stats
            if (oStatsXml != null)
            {
                //Process XML here
            }
            //Delete the file
            oFile.Delete();
        }
    }
}
The main lesson I learned from this is to manage file access in one place in the calling code, rather than letting other components manage their own file access. This is most appropriate when you want to use the file again after the other component has finished its task.
Although this takes a little more code, you can clearly see where the stream is disposed (at the end of the using block), compared to having to trust that a component has correctly disposed of the stream.
