I am trying to rename an element inside an application.properties file using C#.
This is currently being accomplished via a batch file, but it would be good if I could achieve this directly with C#.
For example, inside my application.properties file I have the value 'name'.
With a batch file run from within C#, I can update the element 'name' in this file on the fly.
I'm assuming there is a way to accomplish this in C# without the need for a batch file.
Managed to solve this using BinaryWriter.
using (BinaryWriter bw = new BinaryWriter(File.Open(ApppropFile, FileMode.Open)))
{
    // write new data
    string strNewData = "NEW DATA";
    byte[] byteNewData = new byte[strNewData.Length];

    // copy contents of string to byte array
    for (var i = 0; i < strNewData.Length; i++)
    {
        byteNewData[i] = Convert.ToByte(strNewData[i]);
    }

    // write new data to file
    bw.Seek(141, SeekOrigin.Begin); // seek to position
    bw.Write(byteNewData, 0, byteNewData.Length);
}
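As an aside, the Seek to a hard-coded byte offset (141) only works while the replacement text is exactly the same length as the original and the file layout never changes. A more robust, line-based sketch (the key name "name" and the file path below are assumptions, not taken from the original file):

using System;
using System.IO;
using System.Linq;

// Rewrite the line whose key is "name"; keep every other line unchanged
string propFile = "application.properties"; // assumed path
string[] updated = File.ReadAllLines(propFile)
    .Select(line => line.StartsWith("name=") ? "name=NEW DATA" : line)
    .ToArray();
File.WriteAllLines(propFile, updated);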
I have a CSV file containing the following columns -
Key,Value
First,Line
Second,Line
Third,Line
I want to add a new key-value pair to this file using C#, provided the key is not already present. What would be the best way to do this? Would I have to traverse the file line by line and check the keys, or is there a better way?
I am not using the CSVHelper package or any other CSV writer.
You could do this:
string path = @"PathToFile.csv";
string content = string.Empty;

using (StreamReader reader = new StreamReader(path))
{
    content = reader.ReadToEnd();
}

if (!content.Contains("YourKey"))
{
    using (StreamWriter sw = new StreamWriter(path))
    {
        sw.WriteLine(content + "\nYourKey,YourValue");
    }
}
Read the file into a string variable, check whether the key exists, and if it doesn't, write the content back to the file along with your new key. As the file grows it will take longer and longer to search the whole file, but it will work well for a couple of thousand lines.
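Since Contains matches the key anywhere in the text (a value, or a substring of another key, would also match), a sketch that checks only the key column might look like this; the path, key and value are placeholders:

using System;
using System.IO;
using System.Linq;

string path = @"PathToFile.csv";
string newKey = "Fourth";
string newValue = "Line";

// Compare only the text before the first comma on each line, skipping the header row
bool keyExists = File.ReadLines(path)
    .Skip(1)
    .Select(line => line.Split(',')[0])
    .Any(key => key.Equals(newKey, StringComparison.OrdinalIgnoreCase));

if (!keyExists)
    File.AppendAllText(path, Environment.NewLine + newKey + "," + newValue);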
A little background on problem:
We have an ASP.NET MVC5 application where we use FlexMonster to show the data in a grid. The data source is a stored procedure that brings all the data into the UI grid, and once the user clicks the export button, it exports the report to Excel. However, in some cases the export to Excel fails.
Some of the data contains invalid characters, and it is not possible/feasible to fix the source as suggested here.
My approach so far:
The EPPlus library fails on initializing the workbook because the input Excel file contains some invalid XML characters. I could see that the file was dumped with an invalid character in it, so I looked into possible approaches.
First, I identified the problematic character in the Excel file. I tried replacing the invalid character with a blank space manually using Notepad++, and EPPlus could then read the file successfully.
Now, using the approaches given in the other SO threads here and here, I replaced all possible occurrences of invalid chars. At the moment I am using the XmlConvert.IsXmlChar method to find the problematic XML characters and replace them with a blank space.
I created a sample program where I am trying to work on the problematic Excel sheet.
// in main method
string readFile = File.ReadAllText(filePath);
string content = RemoveInvalidXmlChars(readFile);
File.WriteAllText(filePath, content);

// removal of invalid characters
static string RemoveInvalidXmlChars(string inputText)
{
    StringBuilder withoutInvalidXmlCharsBuilder = new StringBuilder();
    int firstOccurenceOfRealData = inputText.IndexOf("<t>");
    int lastOccurenceOfRealData = inputText.LastIndexOf("</t>");

    if (firstOccurenceOfRealData < 0 ||
        lastOccurenceOfRealData < 0 ||
        firstOccurenceOfRealData > lastOccurenceOfRealData)
        return inputText;

    withoutInvalidXmlCharsBuilder.Append(inputText.Substring(0, firstOccurenceOfRealData));

    int remaining = lastOccurenceOfRealData - firstOccurenceOfRealData;
    string textToCheckFor = inputText.Substring(firstOccurenceOfRealData, remaining);

    // replace any char that is not a valid XML character with a blank space
    foreach (char c in textToCheckFor)
    {
        withoutInvalidXmlCharsBuilder.Append(XmlConvert.IsXmlChar(c) ? c : ' ');
    }

    withoutInvalidXmlCharsBuilder.Append(inputText.Substring(lastOccurenceOfRealData));
    return withoutInvalidXmlCharsBuilder.ToString();
}
If I replace the problematic character manually using Notepad++, the file opens fine in MS Excel. The above-mentioned code successfully replaces the same invalid character and writes the content back to the file. However, when I try to open that file in MS Excel, it throws an error saying the file may have been corrupted, and no content is displayed (snapshots below). Moreover, the following code
var excelPackage = new ExcelPackage(new FileInfo(filePath));
on the file that I updated via Notepad++ throws the following exception:
"CRC error: the file being extracted appears to be corrupted. Expected 0x7478AABE, Actual 0xE9191E00"
My Questions:
Is my approach of modifying the content this way correct?
If yes, how can I write the updated string back to an Excel file?
If my approach is wrong, how can I get rid of the invalid XML chars?
Errors shown on opening the file (without the invalid XML char): first a pop-up prompt, then another error after clicking Yes (screenshots omitted).
Thanks in advance!
It does sound like a binary (presumably XLSX) file based on your last comment. To confirm, open the file created by FlexMonster with 7-Zip. If it opens properly and you see a bunch of XML files in folders, it's an XLSX.
In that case, a search/replace on a binary file sounds like a very bad idea. It might work on the XML parts but might also replace legit chars in other parts. I think the better approach would be to do as @PanagiotisKanavos suggests and use ZipArchive. But you have to rebuild it in the right order, otherwise Excel complains. Similar to how it was done here https://stackoverflow.com/a/33312038/1324284, you could do something like this:
public static void ReplaceXmlString(this ZipArchive xlsxZip, FileInfo outFile, string oldString, string newstring)
{
    using (var outStream = outFile.Open(FileMode.Create, FileAccess.ReadWrite))
    using (var copiedzip = new ZipArchive(outStream, ZipArchiveMode.Update))
    {
        // Go through each file in the zip one by one and copy over to the new file - entries need to be in order
        foreach (var entry in xlsxZip.Entries)
        {
            var newentry = copiedzip.CreateEntry(entry.FullName);
            var newstream = newentry.Open();
            var orgstream = entry.Open();

            // Copy non-xml files over
            if (!entry.Name.EndsWith(".xml"))
            {
                orgstream.CopyTo(newstream);
            }
            else
            {
                // Load the xml document to manipulate
                var xdoc = new XmlDocument();
                xdoc.Load(orgstream);
                var xml = xdoc.OuterXml.Replace(oldString, newstring);
                xdoc = new XmlDocument();
                xdoc.LoadXml(xml);
                xdoc.Save(newstream);
            }

            orgstream.Close();
            newstream.Flush();
            newstream.Close();
        }
    }
}
When it is used like this:
[TestMethod]
public void ReplaceXmlTest()
{
    var datatable = new DataTable("tblData");
    datatable.Columns.AddRange(new[]
    {
        new DataColumn("Col1", typeof(int)),
        new DataColumn("Col2", typeof(int)),
        new DataColumn("Col3", typeof(string))
    });

    for (var i = 0; i < 10; i++)
    {
        var row = datatable.NewRow();
        row[0] = i;
        row[1] = i * 10;
        row[2] = i % 2 == 0 ? "ABCD" : "AXCD";
        datatable.Rows.Add(row);
    }

    using (var pck = new ExcelPackage())
    {
        var workbook = pck.Workbook;
        var worksheet = workbook.Worksheets.Add("source");
        worksheet.Cells.LoadFromDataTable(datatable, true);
        worksheet.Tables.Add(worksheet.Cells["A1:C11"], "Table1");

        // Now simulate the copy/open of the excel file into a zip archive
        using (var orginalzip = new ZipArchive(new MemoryStream(pck.GetAsByteArray()), ZipArchiveMode.Read))
        {
            var fi = new FileInfo(@"c:\temp\ReplaceXmlTest.xlsx");
            if (fi.Exists)
                fi.Delete();

            orginalzip.ReplaceXmlString(fi, "AXCD", "REPLACED!!");
        }
    }
}
Gives this (output screenshot omitted): a copy of the workbook in which every "AXCD" value has been replaced with "REPLACED!!".
Just keep in mind that this is completely brute force. Anything you can do to make the file filter smarter, rather than simply processing ALL xml files, would be a very good thing. Maybe limit it to the sharedStrings.xml file if that is where the problem lies, or to the xml files in the worksheet folders; hard to say without knowing more about the data.
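For example, if the bad characters live in the cell text, the check inside the foreach loop of ReplaceXmlString above could be narrowed so that only the shared-strings part is parsed as XML (xl/sharedStrings.xml is the standard location for cell text in an .xlsx; everything else is copied through untouched). This is only a sketch of the tighter filter, reusing the variables from the method above, not a drop-in replacement for the whole method:

// Narrower filter: only the shared strings part is treated as XML to clean
if (!entry.FullName.Equals("xl/sharedStrings.xml", StringComparison.OrdinalIgnoreCase))
{
    // copy every other entry over verbatim
    orgstream.CopyTo(newstream);
}
else
{
    // same load/replace/save logic as in the method above
    var xdoc = new XmlDocument();
    xdoc.Load(orgstream);
    var xml = xdoc.OuterXml.Replace(oldString, newstring);
    xdoc = new XmlDocument();
    xdoc.LoadXml(xml);
    xdoc.Save(newstream);
}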
I have converted a .zip file into a byte[], and now I am trying to convert the byte[] back to the original .zip file. I am running out of options. Can anyone give me a pointer on how I can achieve this?
You want the System.IO.Compression.ZipArchive class:
using (ZipArchive zip = ZipFile.Open("test.zip", ZipArchiveMode.Create))
{
    var entry = zip.CreateEntry("File Name.txt");
    using (StreamWriter sw = new StreamWriter(entry.Open()))
    {
        sw.Write("Some Text");
    }
}

using (ZipArchive zip = ZipFile.Open("test.zip", ZipArchiveMode.Read))
{
    foreach (ZipArchiveEntry entry in zip.Entries)
    {
        using (StreamReader sr = new StreamReader(entry.Open()))
        {
            var result = sr.ReadToEnd();
        }
    }
}
You probably don't want to read the raw zip file into a byte array first and then try to decompress it yourself. Instead, access it through the ZipArchive class shown above.
Note the use of ZipArchive.Entries to access the sub-files stored in the single zip archive; this tripped me up when first learning to use zip files.
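If you already have the archive as a byte[] (as in the question), you don't need to write it to disk before reading it: wrap the bytes in a MemoryStream and hand that to ZipArchive, or simply write the bytes back out to recreate the original .zip. A minimal sketch, assuming zipBytes holds the bytes you converted the .zip into:

using System.IO;
using System.IO.Compression;

byte[] zipBytes = File.ReadAllBytes("test.zip"); // stand-in for your existing byte[]

// Writing the bytes back out restores the original .zip file unchanged
File.WriteAllBytes("restored.zip", zipBytes);

// Or read the entries straight from memory without touching disk
using (var ms = new MemoryStream(zipBytes))
using (var zip = new ZipArchive(ms, ZipArchiveMode.Read))
{
    foreach (ZipArchiveEntry entry in zip.Entries)
    {
        using (var sr = new StreamReader(entry.Open()))
        {
            string content = sr.ReadToEnd();
        }
    }
}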
I'm working on a log program which dumps data into a gzip archive.
The first entry would look like this:
using (var fs = File.OpenWrite(logFile))
{
    using (var gs = new GZipStream(fs, CompressionMode.Compress))
    {
        using (var sw = new StreamWriter(gs))
        {
            sw.WriteLine(logEntry);
        }
    }
}
Now I want to add further lines to that file without having to re-read all of its content and re-write it, in such a way that the result can still be read with a single GZipStream.
What is the best way to do that?
You can use gzlog.h and gzlog.c from the zlib distribution in the examples directory. They do exactly what you're looking for.
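In C#, the same idea (a gzip file may consist of several concatenated members) can be sketched by opening the log with FileMode.Append and writing each entry as its own gzip member. This is only a sketch: tools like gzip/zcat read concatenated members fine, but some versions of .NET's GZipStream stop after the first member when decompressing, so verify the read side on your target runtime:

using System.IO;
using System.IO.Compression;

static void AppendLogEntry(string logFile, string logEntry)
{
    // Each call appends a new, self-contained gzip member to the end of the file
    using (var fs = new FileStream(logFile, FileMode.Append, FileAccess.Write))
    using (var gs = new GZipStream(fs, CompressionMode.Compress))
    using (var sw = new StreamWriter(gs))
    {
        sw.WriteLine(logEntry);
    }
}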
I have stored a txt file in a SQL Server database.
I need to read the txt file line by line to get the content in it.
My code:
DataTable dtDeleteFolderFile = new DataTable();
dtDeleteFolderFile = objutility.GetData("GetTxtFileonFileName", new object[] { ddlSelectFile.SelectedItem.Text }).Tables[0];

foreach (DataRow dr in dtDeleteFolderFile.Rows)
{
    name = dr["FileName"].ToString();
    records = Convert.ToInt32(dr["NoOfRecords"].ToString());
    bytes = (Byte[])dr["Data"];
}

FileStream readfile = new FileStream(Server.MapPath("txtfiles/" + name), FileMode.Open);
StreamReader streamreader = new StreamReader(readfile);
string line = "";
line = streamreader.ReadLine();
But here I have used a FileStream to read from a particular path, whereas I have saved the txt file in byte format in my database. How can I read the txt file content from the byte[] value instead of using the path?
Given the fact that you have the file in a byte array, you can make use of the MemoryStream class.
Something like
using (MemoryStream m = new MemoryStream(buffer))
using (StreamReader sr = new StreamReader(m))
{
    while (!sr.EndOfStream)
    {
        string s = sr.ReadLine();
    }
}
Also make sure to use using Statement (C# Reference)
Defines a scope, outside of which an object or objects will be disposed. The using statement allows the programmer to specify when objects that use resources should release them. The object provided to the using statement must implement the IDisposable interface. This interface provides the Dispose method, which should release the object's resources.
You could try something like this at the end of your foreach:
String txtFileContent = Encoding.Unicode.GetString((Byte[])dr["Data"]);
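One caveat with that line: Encoding.Unicode is UTF-16, so it only produces readable text if the bytes were stored that way; a plain text file is more likely UTF-8 or ASCII, in which case Encoding.UTF8 is the better guess. Decoding and then reading line by line (dr being the DataRow from the question's foreach) might look like this:

using System.IO;
using System.Text;

byte[] bytes = (byte[])dr["Data"];
// Pick the encoding the file was originally saved with; UTF8 also covers plain ASCII
string txtFileContent = Encoding.UTF8.GetString(bytes);

using (var reader = new StringReader(txtFileContent))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        // process each line of the stored txt file
    }
}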