This question already has answers here:
Windows filesystem: Creation time of a file doesn't change when the file is deleted and created again
(2 answers)
Closed 9 years ago.
I have a logging class. It creates a new log.txt file if one isn't present and writes messages to that file. I also have a method that checks the file's size and creation time against local settings. If the difference between log.txt's creation time and the current time exceeds the MaxLogHours setting, the file is copied to a local archive folder and deleted. A new log.txt is then created by the process above the next time a log message is sent to the class.
This works great, except that the FileInfo.CreationTime for my log.txt file is always the same - 7/17/2012 12:05:18 PM - no matter what I do. Whether I delete the file manually or the program deletes it, the creation time never changes. What is going on here? I also timestamp the archived copies, but that doesn't help either. Does Windows think the file is the same one because it has the same filename? I'd appreciate any help, thanks!
Archive method:
public static void ArchiveLog(Settings s)
{
    FileInfo fi = new FileInfo(AppDomain.CurrentDomain.BaseDirectory + "\\log.txt");
    string archiveDir = AppDomain.CurrentDomain.BaseDirectory + "\\archive";
    TimeSpan ts = DateTime.Now - fi.CreationTime;
    if ((s.MaxLogKB != 0 && fi.Length >= s.MaxLogKB * 1000) ||
        (s.MaxLogHours != 0 && ts.TotalHours >= s.MaxLogHours))
    {
        if (!Directory.Exists(archiveDir))
        {
            Directory.CreateDirectory(archiveDir);
        }
        string archiveFile = archiveDir + "\\log" + string.Format("{0:MMddyyhhmmss}", DateTime.Now) + ".txt";
        File.Copy(AppDomain.CurrentDomain.BaseDirectory + "\\log.txt", archiveFile);
        File.Delete(AppDomain.CurrentDomain.BaseDirectory + "\\log.txt");
    }
}
Writing/Creating the log:
public static void MsgLog(string Msg, bool IsStandardMsg = true)
{
    try
    {
        using (StreamWriter sw = new StreamWriter(Directory.GetCurrentDirectory() + "\\log.txt", true))
        {
            sw.WriteLine("Msg at " + DateTime.Now + " - " + Msg);
            Console.Out.WriteLine(Msg);
        }
    }
    catch (Exception ex)
    {
        Console.Out.WriteLine(ex.Message);
    }
}
This may happen; it is noted in the documentation for FileSystemInfo.CreationTime:
This method may return an inaccurate value, because it uses native
functions whose values may not be continuously updated by the
operating system.
I think the problem is that you are using FileInfo.CreationTime without first checking whether the file still exists. Run this POC - it will always print "After delete CreationTime: 1/1/1601 12:00:00 AM", because the file no longer exists and FileInfo.CreationTime was never read before the delete. However, if you uncomment the line:
//Console.WriteLine("Before delete CreationTime: {0}", fi.CreationTime);
in the code below, then strangely both calls return the correct, updated value.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading;

namespace ConsoleApplication17088573
{
    class Program
    {
        static void Main(string[] args)
        {
            for (int i = 0; i < 10; i++)
            {
                string fname = "testlog.txt";
                using (var fl = File.Create(fname))
                {
                    using (var sw = new StreamWriter(fl))
                    {
                        sw.WriteLine("Current datetime is {0}", DateTime.Now);
                    }
                }
                var fi = new FileInfo(fname);
                //Console.WriteLine("Before delete CreationTime: {0}", fi.CreationTime);
                File.Delete(fname);
                Console.WriteLine("After delete CreationTime: {0}", fi.CreationTime);
                Thread.Sleep(1000);
            }
        }
    }
}
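If the goal is simply to make the archive check reliable, another option is to stamp the creation time explicitly whenever a brand-new log file is started, and to call Refresh() before reading the metadata. A minimal sketch, assuming the same log.txt location as in the question (File.SetCreationTime and FileInfo.Refresh are standard System.IO members):

string logPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "log.txt");
if (!File.Exists(logPath))
{
    // Create the file, then stamp its creation time explicitly so a recycled
    // directory entry with a stale timestamp cannot confuse the archive check.
    using (File.Create(logPath)) { }
    File.SetCreationTime(logPath, DateTime.Now);
}

FileInfo fi = new FileInfo(logPath);
fi.Refresh();  // re-read the metadata rather than relying on cached values
TimeSpan age = DateTime.Now - fi.CreationTime;
Console.WriteLine("Log age in hours: {0}", age.TotalHours);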
Related
I have a requirement where my scheduler runs twice a day, once in the morning and once in the evening. When I run my current code, it stores the file in a folder.
When I run the same application again in the evening, the file saved earlier in the morning gets overwritten, which I don't want to happen. I want to keep both files. What should I do?
Below is my current code. Please give me suggestions.
public void ExportExcel(string strWorkbookName, DataSet ds)
{
    string strDateFolder = "";
    string strFileName = ConfigurationManager.AppSettings["FileName"].ToString();
    try
    {
        using (XLWorkbook wb = new XLWorkbook())
        {
            strDateFolder = DateTime.Now.ToString("dd-MM-yyyy");
            if (Directory.Exists(strDateFolder))
            {
                Directory.CreateDirectory(strDateFolder);
            }
            wb.Worksheets.Add(ds);
            wb.SaveAs(ConfigurationRead.GetAppSetting("ReportDirectory") + "\\" + strDateFolder + "\\" + strFileName);
        }
    }
    catch (Exception)
    {
        throw;
    }
}
UPDATE
Also, I want to delete each created folder after 7 days. Is that also possible?
strDateFolder will contain the same value for both runs because it only includes the date. You may want to add the time as well so that a different folder (and therefore a different file) is created, like this:
strDateFolder = DateTime.Now.ToString("dd-MM-yyyy-hh");
Then, the code below effectively says: if this directory exists, create it.
if (Directory.Exists(strDateFolder))
{
    Directory.CreateDirectory(strDateFolder);
}
You can use just this instead, because CreateDirectory only creates the directory if it does not already exist:
Directory.CreateDirectory(strDateFolder);
Update from post:
This deletes folders that are 7 or more days old. Note that the format string passed to TryParseExact must match the format used to name the folders:
CultureInfo enUS = new CultureInfo("en-US");
string path = ConfigurationRead.GetAppSetting("ReportDirectory");
DateTime cutoffDate = DateTime.Now.AddDays(-7);
foreach (string s in Directory.GetDirectories(path))
{
    // Strip the parent path so only the folder name itself is parsed.
    string folderName = Path.GetFileName(s);
    if (DateTime.TryParseExact(folderName, "dd-MM-yyyy-hh", enUS, DateTimeStyles.AssumeLocal, out DateTime td))
    {
        if (td <= cutoffDate)
        {
            Directory.Delete(s, true);
        }
    }
}
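Putting the two suggestions together, the save method could look roughly like this (a sketch based on the question's code, assuming the same ReportDirectory and FileName settings and the dd-MM-yyyy-hh folder naming from above):

public void ExportExcel(string strWorkbookName, DataSet ds)
{
    string strFileName = ConfigurationManager.AppSettings["FileName"].ToString();
    string strDateFolder = DateTime.Now.ToString("dd-MM-yyyy-hh");  // date plus hour, so morning and evening runs get different folders
    string targetDir = Path.Combine(ConfigurationRead.GetAppSetting("ReportDirectory"), strDateFolder);

    Directory.CreateDirectory(targetDir);  // no-op if the folder already exists

    using (XLWorkbook wb = new XLWorkbook())
    {
        wb.Worksheets.Add(ds);
        wb.SaveAs(Path.Combine(targetDir, strFileName));
    }
}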
I'd really like some advice on this, as I have been running into this issue quite a bit now. I have a couple of applications, both big and small, where I need to do some work with Netezza. Unfortunately, what seems to be a common issue with .NET and Netezza is that Netezza takes a SQL command and executes it (I have confirmed this in the log), but periodically never sends a response back, and the OLEDB connection in my C# app just sits there and times out. In the Netezza log I can also see that my session stays open, because my app is still waiting for NZ to send something back. This seems to happen only with a connection that executes more than one command.
Anyway, below is some code, and I want some advice on how to mitigate this issue. I currently put in a retry count, but I'd really like something more fail-safe. Does anyone have advice on how to deal with an issue like this, where you may never receive a response?
This particular loop is about 135 record updates and normally takes only a minute. The missing response isn't tied to any specific record; it's completely random and happens in other applications as well.
Any advice would be appreciated! Thank you!
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Data;
using System.Data.OleDb;
using System.Data.Sql;
using System.Data.SqlTypes;
using System.Data.SqlClient;
using System.Security.Cryptography;
using System.IO;
namespace RemoveVoidedInvoices
{
class UpdateNetezza
{
public bool NetezzaWorkFailure = false;
private void NetezzaWorkFailed()
{
NetezzaWorkFailure = true;
}
public void updateCounts(List<RecordCounts> recordCounts)
{
string connString = string.Format("Provider=NZOLEDB;Data Source={0};Initial Catalog=EBIDW;User ID=MYUSERNAME;Password={1}", Environment, passWord);
OleDbConnection netezzaConn = null;
//Due to timeout issues I am making a quick timespan entry so that I can keep track in the log of how long each day the bulk update took
DateTime Prequery = DateTime.Now;
int retrycount = 0;
try
{
netezzaConn = new OleDbConnection(connString);
netezzaConn.Open();
for (int i = 0; i < recordCounts.Count; i++)
{
try
{
if (recordCounts[i].RecordCount.ToString() != recordCounts[i].OrigCount.ToString())
{
string updateStatement = string.Format("UPDATE fct_ourtable SET LINESWRITTENTOFILE = {0} where EXTRACTFILENAME = '{1}' and LINESWRITTENTOFILE = {2}", recordCounts[i].RecordCount.ToString(), recordCounts[i].FileName, recordCounts[i].OrigCount.ToString());
Console.WriteLine("Executing query : " + updateStatement);
Console.WriteLine("Query start-time - " + DateTime.Now.ToString());
OleDbCommand exe = new OleDbCommand(updateStatement, netezzaConn);
exe.CommandTimeout = 2000;
int rowsUpdated;
rowsUpdated = exe.ExecuteNonQuery();
Console.WriteLine("Rows Updated = " + rowsUpdated.ToString());
Console.WriteLine("Query end-time - " + DateTime.Now.ToString());
Console.WriteLine();
}
else
{
Console.WriteLine("No records were removed from the file : " + recordCounts[i].FileName + ". Not updating Netezza.");
Console.WriteLine();
}
}
catch (OleDbException oledbex)
{
retrycount++;
if (retrycount > 3)
{
Console.WriteLine("Maximum number of retrys met. Canceling now.");
throw new System.Exception();
}
else
{
i = i - 1;
Console.WriteLine("Timeout on Query, retrying");
}
}
}
}
catch (Exception ex)
{
Console.WriteLine(ex);
NetezzaWorkFailed();
}
finally
{
if (netezzaConn != null)
{
netezzaConn.Close();
netezzaConn = null;
}
TimeSpan duration = DateTime.Now.Subtract(Prequery);
Console.WriteLine("Query Time: " + duration);
}
}
}
}
This may be more appropriate as a comment but I don't have enough rep.
I haven't used OleDb much, but we run queries in a similar manner to what you're doing using ODBC and haven't had any issues. It would be worth trying, at least.
A few comments on your snippet, which I doubt will have much effect on your problem but will help clarify and shorten the code.
Call Dispose instead of Close on your connection.
Also Dispose your command object.
Use parameters instead of formatting a query string (this works for both OleDb and ODBC). Create your OleDbCommand outside the loop and add your three parameters. Inside the loop, set the Value of each parameter and execute the query as you do already; see the sketch below.
Use using blocks instead of try-catch to avoid the explicit calls to Dispose. You will still need a try-catch block inside or outside the using block if you want to handle exceptions.
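To illustrate the last two points, here is a rough sketch using the same fct_ourtable update and RecordCounts fields from the question (not tested against Netezza; OleDb uses positional '?' placeholders, and the same pattern works with OdbcConnection/OdbcCommand if you try ODBC instead):

using (var conn = new OleDbConnection(connString))
using (var cmd = new OleDbCommand(
    "UPDATE fct_ourtable SET LINESWRITTENTOFILE = ? WHERE EXTRACTFILENAME = ? AND LINESWRITTENTOFILE = ?", conn))
{
    cmd.CommandTimeout = 2000;
    // Create the three parameters once, outside the loop.
    cmd.Parameters.Add("@newCount", OleDbType.Integer);
    cmd.Parameters.Add("@fileName", OleDbType.VarChar, 255);
    cmd.Parameters.Add("@origCount", OleDbType.Integer);

    conn.Open();
    foreach (RecordCounts rc in recordCounts)
    {
        if (rc.RecordCount.ToString() == rc.OrigCount.ToString())
        {
            continue;  // nothing changed for this file, skip the update
        }
        // Only the parameter values change per iteration; the command object is reused.
        cmd.Parameters[0].Value = rc.RecordCount;
        cmd.Parameters[1].Value = rc.FileName;
        cmd.Parameters[2].Value = rc.OrigCount;
        int rowsUpdated = cmd.ExecuteNonQuery();
        Console.WriteLine("Rows updated: {0}", rowsUpdated);
    }
}  // leaving the using blocks disposes the command and the connection

This won't by itself stop Netezza from occasionally never responding, so you would still keep a timeout and retry policy around ExecuteNonQuery, but it removes the per-iteration command construction and the string formatting.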
Some code I'm working with occasionally needs to refer to long UNC paths (e.g. \\?\UNC\MachineName\Path), but we've discovered that no matter where the directory is located, even on the same machine, it's much slower when accessing through the UNC path than the local path.
For example, we've written some benchmarking code that writes a string of gibberish to a file, then later read it back, multiple times. I'm testing it with 6 different ways to access the same shared directory on my dev machine, with the code running on the same machine:
C:\Temp
\\MachineName\Temp
\\?\C:\Temp
\\?\UNC\MachineName\Temp
\\127.0.0.1\Temp
\\?\UNC\127.0.0.1\Temp
And here are the results:
Testing: C:\Temp
Wrote 1000 files to C:\Temp in 861.0647 ms
Read 1000 files from C:\Temp in 60.0744 ms
Testing: \\MachineName\Temp
Wrote 1000 files to \\MachineName\Temp in 2270.2051 ms
Read 1000 files from \\MachineName\Temp in 1655.0815 ms
Testing: \\?\C:\Temp
Wrote 1000 files to \\?\C:\Temp in 916.0596 ms
Read 1000 files from \\?\C:\Temp in 60.0517 ms
Testing: \\?\UNC\MachineName\Temp
Wrote 1000 files to \\?\UNC\MachineName\Temp in 2499.3235 ms
Read 1000 files from \\?\UNC\MachineName\Temp in 1684.2291 ms
Testing: \\127.0.0.1\Temp
Wrote 1000 files to \\127.0.0.1\Temp in 2516.2847 ms
Read 1000 files from \\127.0.0.1\Temp in 1721.1925 ms
Testing: \\?\UNC\127.0.0.1\Temp
Wrote 1000 files to \\?\UNC\127.0.0.1\Temp in 2499.3211 ms
Read 1000 files from \\?\UNC\127.0.0.1\Temp in 1678.18 ms
I tried the IP address to rule out a DNS issue. Could it be checking credentials or permissions on each file access? If so, is there a way to cache that? Does it just assume that, since it's a UNC path, it should do everything over TCP/IP instead of accessing the disk directly? Is there something wrong with the code we're using for the reads/writes? I've ripped out the pertinent parts for benchmarking, shown below:
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.InteropServices;
using System.Text;
using Microsoft.Win32.SafeHandles;
using Util.FileSystem;
namespace UNCWriteTest {
internal class Program {
[DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
public static extern bool DeleteFile(string path); // File.Delete doesn't handle \\?\UNC\ paths
private const int N = 1000;
private const string TextToSerialize =
"asd;lgviajsmfopajwf0923p84jtmpq93worjgfq0394jktp9orgjawefuogahejngfmliqwegfnailsjdhfmasodfhnasjldgifvsdkuhjsmdofasldhjfasolfgiasngouahfmp9284jfqp92384fhjwp90c8jkp04jk34pofj4eo9aWIUEgjaoswdfg8jmp409c8jmwoeifulhnjq34lotgfhnq34g";
private static readonly byte[] _Buffer = Encoding.UTF8.GetBytes(TextToSerialize);
public static string WriteFile(string basedir) {
string fileName = Path.Combine(basedir, string.Format("{0}.tmp", Guid.NewGuid()));
try {
IntPtr writeHandle = NativeFileHandler.CreateFile(
fileName,
NativeFileHandler.EFileAccess.GenericWrite,
NativeFileHandler.EFileShare.None,
IntPtr.Zero,
NativeFileHandler.ECreationDisposition.New,
NativeFileHandler.EFileAttributes.Normal,
IntPtr.Zero);
// if file was locked
int fileError = Marshal.GetLastWin32Error();
if ((fileError == 32 /* ERROR_SHARING_VIOLATION */) || (fileError == 80 /* ERROR_FILE_EXISTS */)) {
throw new Exception("oopsy");
}
using (var h = new SafeFileHandle(writeHandle, true)) {
using (var fs = new FileStream(h, FileAccess.Write, NativeFileHandler.DiskPageSize)) {
fs.Write(_Buffer, 0, _Buffer.Length);
}
}
}
catch (IOException) {
throw;
}
catch (Exception ex) {
throw new InvalidOperationException(" code " + Marshal.GetLastWin32Error(), ex);
}
return fileName;
}
public static void ReadFile(string fileName) {
var fileHandle =
new SafeFileHandle(
NativeFileHandler.CreateFile(fileName, NativeFileHandler.EFileAccess.GenericRead, NativeFileHandler.EFileShare.Read, IntPtr.Zero,
NativeFileHandler.ECreationDisposition.OpenExisting, NativeFileHandler.EFileAttributes.Normal, IntPtr.Zero), true);
using (fileHandle) {
//check the handle here to get a bit cleaner exception semantics
if (fileHandle.IsInvalid) {
//ms-help://MS.MSSDK.1033/MS.WinSDK.1033/debug/base/system_error_codes__0-499_.htm
int errorCode = Marshal.GetLastWin32Error();
//now that we've taken more than our allotted share of time, throw the exception
throw new IOException(string.Format("file read failed on {0} with error code {1}", fileName, errorCode));
}
//we have a valid handle and can actually read a stream, exceptions from serialization bubble out
using (var fs = new FileStream(fileHandle, FileAccess.Read, 1*NativeFileHandler.DiskPageSize)) {
//if serialization fails, we'll just let the normal serialization exception flow out
var foo = new byte[256];
fs.Read(foo, 0, 256);
}
}
}
public static string[] TestWrites(string baseDir) {
try {
var fileNames = new List<string>();
DateTime start = DateTime.UtcNow;
for (int i = 0; i < N; i++) {
fileNames.Add(WriteFile(baseDir));
}
DateTime end = DateTime.UtcNow;
Console.Out.WriteLine("Wrote {0} files to {1} in {2} ms", N, baseDir, end.Subtract(start).TotalMilliseconds);
return fileNames.ToArray();
}
catch (Exception e) {
Console.Out.WriteLine("Failed to write for " + baseDir + " Exception: " + e.Message);
return new string[] {};
}
}
public static void TestReads(string baseDir, string[] fileNames) {
try {
DateTime start = DateTime.UtcNow;
for (int i = 0; i < N; i++) {
ReadFile(fileNames[i%fileNames.Length]);
}
DateTime end = DateTime.UtcNow;
Console.Out.WriteLine("Read {0} files from {1} in {2} ms", N, baseDir, end.Subtract(start).TotalMilliseconds);
}
catch (Exception e) {
Console.Out.WriteLine("Failed to read for " + baseDir + " Exception: " + e.Message);
}
}
private static void Main(string[] args) {
foreach (string baseDir in args) {
Console.Out.WriteLine("Testing: {0}", baseDir);
string[] fileNames = TestWrites(baseDir);
TestReads(baseDir, fileNames);
foreach (string fileName in fileNames) {
DeleteFile(fileName);
}
}
}
}
}
This doesn't surprise me. You're writing and reading a fairly small amount of data, so the file system cache is probably minimizing the impact of physical disk I/O; the bottleneck is going to be the CPU. I'm not certain whether the traffic goes through the TCP/IP stack or not, but at a minimum the SMB protocol is involved. For one thing, that means requests are passed back and forth between the SMB client process and the SMB server process, so you get context switching among three distinct processes, including your own. Using the local file system path, you switch into kernel mode and back, but no other process is involved. Context switching is much slower than the transition to and from kernel mode.
There are likely to be two distinct additional overheads: one per file and one per kilobyte of data. In this particular test the per-file SMB overhead is likely to be dominant. Because the amount of data involved also affects the impact of physical disk I/O, you may find that this is only really a problem when dealing with lots of small files.
I am trying to create a file with a FileInfo object and I am getting strange behavior.
Here is the gist of what I am doing -
public void CreateLog()
{
    FileInfo LogFile = new FileInfo("");
    if (!LogFile.Directory.Exists) { LogFile.Directory.Create(); }
    if (!LogFile.Exists) { LogFile.Create(); }
    if (LogFile.Length == 0)
    {
        using (StreamWriter Writer = LogFile.AppendText())
        {
            Writer.WriteLine("Quotes for " + Instrument.InstrumentID);
            Writer.WriteLine("Time,Bid Size,Bid Price,Ask Price,Ask Size");
        }
    }
}
However, when it checks the length of the log file, it says that the file does not exist (I checked - it does exist).
When I substitute LogFile.Length with the following:
File.ReadAllLines(LogFile.FullName).Length;
Then I get an exception that says that it cannot access the file because something else is already accessing it.
BUT, if I do a Thread.Sleep(500) before I do ReadAllLines, then it seems to work fine.
What am I missing?
LogFile.Create() returns an open stream, so calling it like that leaves the file locked. Wrap the call in a using block so the stream is disposed right away, like this:
using (LogFile.Create()) { }
After that you can use the file again.
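Putting that together, a minimal sketch of the method with the handle disposed and the cached FileInfo state refreshed before Length is read (FileInfo.Refresh() is a standard call; LogFilePath is a placeholder for whatever path the real code uses):

public void CreateLog()
{
    FileInfo LogFile = new FileInfo(LogFilePath);  // LogFilePath: placeholder for the real log path
    if (!LogFile.Directory.Exists) { LogFile.Directory.Create(); }
    if (!LogFile.Exists)
    {
        using (LogFile.Create()) { }  // dispose the returned stream so the file is not left locked
    }

    LogFile.Refresh();  // re-read Exists/Length instead of using the values cached at construction
    if (LogFile.Length == 0)
    {
        using (StreamWriter Writer = LogFile.AppendText())
        {
            Writer.WriteLine("Quotes for " + Instrument.InstrumentID);
            Writer.WriteLine("Time,Bid Size,Bid Price,Ask Price,Ask Size");
        }
    }
}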
Right now, all I am using to calculate the size are the files in the folders. I do not think this is all of it, because the content database is about 15 GB, while the size I calculate for all the files is only around 10 GB. Does anyone know what I may be missing?
Here is the code I have so far.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.SharePoint;
using System.Globalization;
namespace WebSizeTesting
{
class Program
{
static void Main(string[] args)
{
long SiteCollectionBytes = 0;
using (SPSite mainSite = new SPSite("http://sharepoint-test"))
{
// loop through the websites
foreach (SPWeb web in mainSite.AllWebs)
{
long webBytes = GetSPFolderSize(web.RootFolder);
// Add in size of each web site's recycle bin
webBytes += web.RecycleBin.OfType<SPRecycleBinItem>().Select(item => item.Size).ToArray<long>().Sum();
Console.WriteLine("Url: {0}, Size: {1}", web.Url, ConvertBytesToDisplayText( webBytes ));
SiteCollectionBytes += webBytes;
}
long siteCollectionRecycleBinBytes = mainSite.RecycleBin.OfType<SPRecycleBinItem>().Select(item => item.Size).ToArray<long>().Sum();
Console.WriteLine("Site Collection Recycle Bin: " + ConvertBytesToDisplayText(siteCollectionRecycleBinBytes));
SiteCollectionBytes += siteCollectionRecycleBinBytes;
}
Console.WriteLine("Total Size: " + ConvertBytesToDisplayText(SiteCollectionBytes));
Console.ReadKey();
}
public static long GetSPFolderSize(SPFolder folder)
{
long byteCount = 0;
// calculate the files in the immediate folder
foreach (SPFile file in folder.Files)
{
byteCount += file.TotalLength;
// also include file versions
foreach (SPFileVersion fileVersion in file.Versions)
{
byteCount += fileVersion.Size;
}
}
// Handle sub folders
foreach (SPFolder subFolder in folder.SubFolders)
{
byteCount += GetSPFolderSize(subFolder);
}
return byteCount;
}
public static string ConvertBytesToDisplayText(long byteCount)
{
string result = "";
if (byteCount > Math.Pow(1024, 3))
{
// display as gb
result = (byteCount / Math.Pow(1024, 3)).ToString("#,#.##", CultureInfo.InvariantCulture) + " GB";
}
else if (byteCount > Math.Pow(1024, 2))
{
// display as mb
result = (byteCount / Math.Pow(1024, 2)).ToString("#,#.##", CultureInfo.InvariantCulture) + " MB";
}
else if (byteCount > 1024)
{
// display as kb
result = (byteCount / 1024).ToString("#,#.##", CultureInfo.InvariantCulture) + " KB";
}
else
{
// display as bytes
result = byteCount.ToString("#,#.##", CultureInfo.InvariantCulture) + " Bytes";
}
return result;
}
}
}
Edit 2:15 pm 3/1/2010 CST: I added the ability to count file versions as part of the size, as suggested by Goyuix in the post below. The total is still off from the physical database size by a considerable amount.
Edit 8:38 am 3/3/2010 CST: I added the calculation of the recycle bin size for each web and for the site collection recycle bin, as suggested by ArjanP. Also, I want to add that I am very open to more efficient ways of doing this.
Did you consider the Trash Can? There will be cans for Webs and the Site Collection, all taking up space in the content database.
There will always be 'overhead' in a content database; every 'empty' web already consumes a number of bytes. 30% seems like a lot, but it is not excessive; it depends on the ratio of content to the number of webs.
The content database also stores configuration information, such as which lists actually exist, features, permissions, etc. While that probably would not account for 5 GB of data, it is something to consider. Additionally, each file is typically associated with an SPListItem that may contain metadata for that file.
Do you have versioning turned on for any of the lists / libraries? If so, you will also need to check the SPListItem.Versions property for each version.
I'm not quite sure your code considers list item attachments either; see the sketch below.
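If you want to count them, something along these lines could be added (a rough sketch, not tested; it assumes SPListItem.Attachments exposes attachment file names under Attachments.UrlPrefix and that SPFile.Length returns the size in bytes):

public static long GetAttachmentsSize(SPWeb web)
{
    long byteCount = 0;
    foreach (SPList list in web.Lists)
    {
        if (!list.EnableAttachments)
        {
            continue;  // document libraries and some list types do not support attachments
        }
        foreach (SPListItem item in list.Items)
        {
            // Attachments only stores file names; build the full URL from the
            // attachment folder prefix and fetch the file to read its size.
            foreach (string fileName in item.Attachments)
            {
                SPFile attachment = web.GetFile(item.Attachments.UrlPrefix + fileName);
                byteCount += attachment.Length;
            }
        }
    }
    return byteCount;
}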