I have a little app that has been working for a month now. I made absolutely no changes to the code of this part, and from one moment to the next the app stopped working.
I even tried old files that I had already imported successfully in the past. No permissions issue. Same drive. I can open all of the files. No changes at all in the files.
The error is always:
CsvHelper.HeaderValidationException: Header with name 'Betrag der Rate' was not found.
How can I solve this?
if (filename != string.Empty)
{
    using (var reader = new StreamReader(filename))
    using (var csv = new CsvReader(reader, CultureInfo.InvariantCulture))
    {
        csv.Configuration.RegisterClassMap<LastschriftMap>();
        csv.Configuration.HasHeaderRecord = true;
        var records = csv.GetRecords<Lastschriften>();
        alleLastschriften = records.ToList();
    }
}
If this is reading German data, try using CultureInfo.CreateSpecificCulture("de-DE") instead of CultureInfo.InvariantCulture, or use CultureInfo.CurrentCulture, which was the default in older versions of CsvHelper.
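For example, a minimal sketch of the same read using the German culture, reusing the Lastschriften/LastschriftMap types and the alleLastschriften variable from the question (in the CsvHelper versions where the default delimiter comes from the culture's list separator, de-DE also switches the expected delimiter to ';', which can be exactly why a header stops being found):

if (filename != string.Empty)
{
    using (var reader = new StreamReader(filename))
    using (var csv = new CsvReader(reader, CultureInfo.CreateSpecificCulture("de-DE")))
    {
        csv.Configuration.RegisterClassMap<LastschriftMap>();
        csv.Configuration.HasHeaderRecord = true;
        // GetRecords is lazy; ToList() materializes it while the reader is still open
        alleLastschriften = csv.GetRecords<Lastschriften>().ToList();
    }
}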
I'm essentially trying to download a file from SharePoint if it has a newer copy, or push mine up if mine is newer. I'm never concerned with merging changes in this application; newer is simply better, as the data is all a raw query from a server. So that said, here is what I am doing to compare which file is newer and upload or download accordingly.

The problem is that every upload and every download sets the "modified date" to the exact time I did the transfer. I've found I can get the modified date from SharePoint and overwrite the Windows timestamp, but I'm not sure how to go the other way. I'd also accept a way to retain the value in the first place, depending on the impact on my current code. So how can I either retain or modify it?
using (var ctx = new ClientContext(DataSharepointSite))
{
    ctx.AuthenticationMode = ClientAuthenticationMode.Default;
    ctx.Credentials = GetSharepointCredentials();
    var file = ctx.Web.GetFileByServerRelativeUrl(sharepointFile);
    var fileData = file.OpenBinaryStream();
    ctx.Load(file);
    ctx.ExecuteQuery();
    using (var sr = new StreamReader(fileData.Value))
    {
        DateTime lastDataUpdate = System.IO.File.GetLastWriteTime(localFile).ToUniversalTime();
        DateTime ShareDate = file.TimeLastModified.ToUniversalTime();
        // If SharePoint has a newer file, download it
        if (ShareDate > lastDataUpdate)
        {
            using (FileStream fs = new FileStream(localFile, FileMode.Create))
            {
                sr.BaseStream.CopyTo(fs);
            }
            // Forced to overwrite the modified date to match SharePoint
            System.IO.File.SetLastWriteTime(localFile, ShareDate);
        }
        // else if the local copy is newer, push it to SharePoint
        else if (ShareDate < lastDataUpdate)
        {
            using (FileStream fs = new FileStream(localFile, FileMode.Open))
            {
                // This, too, gets a new "modified date", so the next compare will re-download
                Microsoft.SharePoint.Client.File.SaveBinaryDirect(ctx, sharepointFile, fs, true);
            }
        }
    }
}
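To push the timestamp the other way, write the local file's modified date back to the SharePoint item after the upload: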
// Overwrite SharePoint's Modified value with the local file's timestamp
file.ListItemAllFields["Modified"] = System.IO.File.GetLastWriteTime(localFile);
file.ListItemAllFields.Update();
ctx.ExecuteQuery();
I have some txt files which I want to use as TextAssets. I need these files to be usable at runtime by my other scripts, but I cannot figure out a way to make this work.
I know that I should be using the Assets/Resources or the StreamingAssets folder, but for some reason things are not working properly. Is there a way to make it all work with StreamWriters and FileStreams? What about TextAssets assigned in the Unity Editor, can those also be set up as streaming?
Some examples of code that uses my assets:
public void TaskOnClick() // getting multi-values
{
    string filename = "Assets/Resources/TempoText/multi-export.txt";
    using (StreamWriter writeFile = new StreamWriter(filename, false))
    {
        writeFile.AutoFlush = true;
        Console.SetOut(writeFile);
        foreach (string inputJson in File.ReadLines("Assets/Resources/TempoText/multi-import.txt"))
        {
            string temperature = GetTemperatureByRegex(inputJson);
            Debug.Log(temperature);
            writeFile.WriteLine(temperature);
        }
    }
    File.Copy("Assets/Resources/TempoText/multi-export.txt", "Assets/Resources/multi-export.txt", true);
}
//or
FileStream filestream = new FileStream("Assets/Resources/TempoText/multi-import.txt", FileMode.Create, FileAccess.ReadWrite, FileShare.ReadWrite);
using (var writeFile = new StreamWriter(filestream))
{
    // displays last 10 entries
    var document = collection.Find(new BsonDocument()).Sort(sort).Limit(limit: limit).ForEachAsync(d => Console.WriteLine(d));
    Debug.Log(document.ToString());
    writeFile.AutoFlush = true;
    Console.SetOut(writeFile);
    writeFile.Write(document.ToString());
}
All help greatly appreciated; I've basically messed up big time, since I only found out about this now, when I built everything as is...
Edit: I got the StreamWriters to do everything nicely with Application.persistentDataPath! Now I'm stuck with a problem that I already struggled with before: how to assign a TextAsset so it gets its file from a fixed path...
public TextAsset textFile;
Wondering how to set this up so it gets its .txt from Application.persistentDataPath.
Application.persistentDataPath is what was needed all along. Something nobody ever mentioned anywhere I looked. I hope somebody will be able to find the correct way using this mess of a question and its lackluster answer.
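For anyone who finds this later, here is a minimal sketch of the pattern (the class and file names are placeholders, not from the original project): a TextAsset can only reference a file imported in the editor at build time, while anything created or modified at runtime has to be read and written with plain System.IO calls under Application.persistentDataPath.

using System.IO;
using UnityEngine;

public class TempoTextStore : MonoBehaviour
{
    // Resolve a file name to the writable per-user data folder
    private static string PathFor(string fileName) =>
        Path.Combine(Application.persistentDataPath, fileName);

    // Append one line to a runtime text file, creating it if needed
    public static void AppendLine(string fileName, string line)
    {
        File.AppendAllText(PathFor(fileName), line + System.Environment.NewLine);
    }

    // Read the whole file back, or an empty string if it doesn't exist yet
    public static string ReadAll(string fileName)
    {
        string path = PathFor(fileName);
        return File.Exists(path) ? File.ReadAllText(path) : string.Empty;
    }
}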
A little background on the problem:
We have an ASP.NET MVC5 application where we use FlexMonster to show the data in a grid. The data source is a stored procedure that brings all the data into the UI grid, and once the user clicks the export button, the report is exported to Excel. However, in some cases the export to Excel fails.
Some of the data contains invalid characters, and it is not possible/feasible to fix the source, as suggested here.
My approach so far:
The EPPlus library fails on initializing the workbook, as the input Excel file contains some invalid XML characters. I found that the file is dumped with an invalid character in it, and I looked into the possible approaches.
First, I identified the problematic character in the Excel file. I tried replacing the invalid character with a blank space manually using Notepad++, and EPPlus could then read the file successfully.
Now, using the approaches given in the other SO threads here and here, I am replacing all possible occurrences of invalid chars. At the moment I am using the XmlConvert.IsXmlChar method to find the problematic XML characters and replace them with a blank space.
I created a sample program to experiment on the problematic Excel sheet.
// in the main method
string readFile = File.ReadAllText(filePath);
string content = RemoveInvalidXmlChars(readFile);
File.WriteAllText(filePath, content);

// removal of invalid characters
static string RemoveInvalidXmlChars(string inputText)
{
    StringBuilder withoutInvalidXmlCharsBuilder = new StringBuilder();
    int firstOccurrenceOfRealData = inputText.IndexOf("<t>");
    int lastOccurrenceOfRealData = inputText.LastIndexOf("</t>");
    if (firstOccurrenceOfRealData < 0 ||
        lastOccurrenceOfRealData < 0 ||
        firstOccurrenceOfRealData > lastOccurrenceOfRealData)
        return inputText;

    // Copy everything before the first <t> tag unchanged
    withoutInvalidXmlCharsBuilder.Append(inputText.Substring(0, firstOccurrenceOfRealData));
    int remaining = lastOccurrenceOfRealData - firstOccurrenceOfRealData;
    string textToCheckFor = inputText.Substring(firstOccurrenceOfRealData, remaining);

    // Replace every invalid XML character in the data region with a blank space
    foreach (char c in textToCheckFor)
    {
        withoutInvalidXmlCharsBuilder.Append(XmlConvert.IsXmlChar(c) ? c : ' ');
    }
    withoutInvalidXmlCharsBuilder.Append(inputText.Substring(lastOccurrenceOfRealData));
    return withoutInvalidXmlCharsBuilder.ToString();
}
If I replace the problematic character manually using Notepad++, the file opens fine in MS Excel. The above-mentioned code successfully replaces the same invalid character and writes the content back to the file. However, when I try to open that Excel file in MS Excel, it reports that the file may have been corrupted and no content is displayed (snapshots below). Moreover, the following code
var excelPackage = new ExcelPackage(new FileInfo(filePath));
on the file that I updated via Notepad++ throws the following exception:
CRC error: the file being extracted appears to be corrupted. Expected 0x7478AABE, Actual 0xE9191E00
My Questions:
Is my approach of modifying the content this way correct?
If yes, how can I write the updated string back to an Excel file?
If my approach is wrong, how can I get rid of the invalid XML chars?
Errors shown when opening the file (without the invalid XML char):
(Screenshot: the first pop-up)
(Screenshot: the error shown after clicking Yes)
Thanks in advance!
It does sound like a binary (presumably XLSX) file based on your last comment. To confirm, open the file created by FlexMonster with 7-Zip. If it opens properly and you see a bunch of XML files in folders, it's an XLSX.
In that case, a search/replace on the binary file sounds like a very bad idea. It might work on the XML parts, but it might also replace legitimate chars in other parts. I think the better approach would be to do as #PanagiotisKanavos suggests and use ZipArchive. But you have to rebuild the archive in the right order, otherwise Excel complains. Similar to how it was done here https://stackoverflow.com/a/33312038/1324284, you could do something like this:
public static void ReplaceXmlString(this ZipArchive xlsxZip, FileInfo outFile, string oldString, string newstring)
{
    using (var outStream = outFile.Open(FileMode.Create, FileAccess.ReadWrite))
    using (var copiedzip = new ZipArchive(outStream, ZipArchiveMode.Update))
    {
        // Go through each file in the zip one by one and copy it over to the new file - entries need to be in order
        foreach (var entry in xlsxZip.Entries)
        {
            var newentry = copiedzip.CreateEntry(entry.FullName);
            var newstream = newentry.Open();
            var orgstream = entry.Open();

            // Copy non-xml files over unchanged
            if (!entry.Name.EndsWith(".xml"))
            {
                orgstream.CopyTo(newstream);
            }
            else
            {
                // Load the xml document to manipulate
                var xdoc = new XmlDocument();
                xdoc.Load(orgstream);
                var xml = xdoc.OuterXml.Replace(oldString, newstring);
                xdoc = new XmlDocument();
                xdoc.LoadXml(xml);
                xdoc.Save(newstream);
            }
            orgstream.Close();
            newstream.Flush();
            newstream.Close();
        }
    }
}
When it is used like this:
[TestMethod]
public void ReplaceXmlTest()
{
    var datatable = new DataTable("tblData");
    datatable.Columns.AddRange(new[]
    {
        new DataColumn("Col1", typeof (int)),
        new DataColumn("Col2", typeof (int)),
        new DataColumn("Col3", typeof (string))
    });
    for (var i = 0; i < 10; i++)
    {
        var row = datatable.NewRow();
        row[0] = i;
        row[1] = i * 10;
        row[2] = i % 2 == 0 ? "ABCD" : "AXCD";
        datatable.Rows.Add(row);
    }

    using (var pck = new ExcelPackage())
    {
        var workbook = pck.Workbook;
        var worksheet = workbook.Worksheets.Add("source");
        worksheet.Cells.LoadFromDataTable(datatable, true);
        worksheet.Tables.Add(worksheet.Cells["A1:C11"], "Table1");

        // Now simulate the copy/open of the excel file as a zip archive
        using (var orginalzip = new ZipArchive(new MemoryStream(pck.GetAsByteArray()), ZipArchiveMode.Read))
        {
            var fi = new FileInfo(@"c:\temp\ReplaceXmlTest.xlsx");
            if (fi.Exists)
                fi.Delete();
            orginalzip.ReplaceXmlString(fi, "AXCD", "REPLACED!!");
        }
    }
}
Gives this (screenshot of the resulting worksheet, with the "AXCD" values replaced by "REPLACED!!"):
Just keep in mind that this is completely brute force. Anything you can do to make the file filter smarter, rather than simply processing ALL the xml files, would be a very good thing. Maybe limit it to sharedStrings.xml if that is where the problem lies, or to the xml files in the worksheets folder. Hard to say without knowing more about the data.
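For example, the xml branch above could be gated on the entry's full name, so only the shared strings part is rewritten and every other entry is copied verbatim (this assumes, as a guess, that the bad characters live in the shared strings):

// Hypothetical filter: rewrite only xl/sharedStrings.xml, copy all other entries unchanged
if (!entry.FullName.Equals("xl/sharedStrings.xml", StringComparison.OrdinalIgnoreCase))
{
    orgstream.CopyTo(newstream);
}
else
{
    // ...same XmlDocument load/replace/save as above...
}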
I'm trying out a project with ASP.NET MVC and have a large CSV file that I want to save to LocalDB.
I have been following this tutorial (and the ones before that are about MVC): https://learn.microsoft.com/en-us/aspnet/mvc/overview/getting-started/introduction/creating-a-connection-string
Now I want to add data to the database that I have set up, and I would like to read this data from a csv file and then save it to my database.
I have tried this: https://www.aspsnippets.com/Articles/Upload-Read-and-Display-CSV-file-Text-File-data-in-ASPNet-MVC.aspx
but when I try to upload my file I get an error saying that the file is too large.
Ideally this would be automated, so that when I start my application the database gets populated with the data from my csv file (and if it is already populated, it is not done again); or failing that, just some way of coding it so that I can add the data from my csv file to the database (LocalDB).
protected override void Seed(ProductsDBContext context)
{
    Assembly assembly = Assembly.GetExecutingAssembly();
    string resourceName = "WebbApplication.App_Data.SeedData.price_detail.csv";
    using (Stream stream = assembly.GetManifestResourceStream(resourceName))
    {
        using (StreamReader reader = new StreamReader(stream, Encoding.UTF8))
        {
            CsvReader csvReader = new CsvReader(reader);
            var products = csvReader.GetRecords<PriceDetail>().ToArray();
            context.PriceDetails.AddOrUpdate(c => c.PriceValueId, products);
        }
    }
}
Your second link includes the following line:
string csvData = System.IO.File.ReadAllText(filePath);
If you are getting an OutOfMemoryException, then you should not load the entire file into memory at once - i.e. do not read all of the text.
StreamReader has a built-in method to handle this:
string line;
using (var file = new System.IO.StreamReader("WebbApplication.App_Data.SeedData.price_detail.csv"))
{
    while ((line = file.ReadLine()) != null)
    {
        System.Console.WriteLine(line);
        // Replace with your operation below
    }
}
Potentially the same problem solved at this question.
With Cinchoo ETL, an open source library, you can bulk load a CSV file into SQL Server with a few lines of code.
using (var p = new ChoCSVReader(** YOUR CSV FILE **)
    .WithFirstLineHeader()
    )
{
    p.Bcp("** ConnectionString **", "** tablename **");
}
For more information, please see the CodeProject article.
Hope it helps.
I'm trying to write some code which, given a path to an item in the TFS repository and two revisions, computes the difference between the contents the file had at those two moments. For now the code looks like this:
using (var projectCollection = new TfsTeamProjectCollection(new Uri(repositoryUrl)))
{
    projectCollection.EnsureAuthenticated();
    var versionControlServer = (VersionControlServer)projectCollection.GetService(typeof(VersionControlServer));

    string path = "$/MyProject/path/to/file.xml";
    var before = new DiffItemVersionedFile(versionControlServer, path, VersionSpec.ParseSingleSpec(minRevision.ToString(), null));
    var after = new DiffItemVersionedFile(versionControlServer, path, VersionSpec.ParseSingleSpec(maxRevision.ToString(), null));

    using (var stream = new MemoryStream())
    using (var writer = new StreamWriter(stream))
    {
        var options = new DiffOptions();
        options.Flags = DiffOptionFlags.EnablePreambleHandling;
        options.OutputType = DiffOutputType.Unified;
        options.TargetEncoding = Encoding.UTF8;
        options.SourceEncoding = Encoding.UTF8;
        options.StreamWriter = writer;

        Difference.DiffFiles(versionControlServer, before, after, options, path, true);
        writer.Flush();

        var reader = new StreamReader(stream);
        var diff = reader.ReadToEnd();
    }
}
But once this code has executed, the variable diff is an empty string, even though I know for sure the file was modified between minRevision and maxRevision.
This code will also throw an exception if the file didn't exist at minRevision or was deleted at maxRevision, but that seems like a problem to solve later, once I get this working for files which were only edited.
EDIT
Having checked the temp files, I'm sure both versions of the file are downloaded correctly. Something is wrong with the computation of the diff, with writing the diff to the stream, or with copying the diff into a string.
Solved. The problem was the reader. After I changed the last two lines to
var diff = Encoding.UTF8.GetString(stream.ToArray());
I got some diff at last.
I know you accepted your answer, and this was asked in 2012, but I recently had to do the same thing and much prefer using a StreamReader over .ToArray().
The answer is that you have to reset the MemoryStream's position before you start reading from it. Add
stream.Position = 0;
right after you flush the writer.
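Applied to the code in the question, the last few lines become:

writer.Flush();
stream.Position = 0; // rewind the MemoryStream so ReadToEnd() doesn't start at the end
var reader = new StreamReader(stream);
var diff = reader.ReadToEnd();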