Show the last 50 lines of a log file on a web page - C#

I have developed an ASP.NET C# web page that allows users to download or view server logs.
Once the server and log date are selected, they have the option to either open the log in Notepad++ or view part of it in a textbox.
Part of the requirement is to show only the last 50 lines of the log in the textbox; this is the only part I'm not sure of. Can anyone point me in the right direction?
At the moment I'm building up the path and then setting the Text property of the textbox as follows:
_PathFrom = @"\\" + ddlServer.SelectedItem.Value + @"\Logs\" + AppOrSession.SelectedItem.Value + @"\" + ddlKernel.SelectedItem.Value + @"\" + txtLogName.Text;
WebClient MyClient = new WebClient();
_Log = MyClient.DownloadString(_PathFrom);
txtLog.Text = _Log;
thanks

Use this method to pick the last 50 lines from the file and display them on the front end:
public static IList<string> GetLog(string logname, string numrows)
{
    int lineCnt = 1;
    List<string> lines = new List<string>();
    int maxLines;
    if (!int.TryParse(numrows, out maxLines))
    {
        maxLines = 50;
    }

    string logFile = HttpContext.Current.Server.MapPath("~/" + logname);

    BackwardReader br = new BackwardReader(logFile);
    while (!br.SOF)
    {
        string line = br.Readline();
        lines.Add(line + System.Environment.NewLine);
        if (lineCnt == maxLines) break;
        lineCnt++;
    }
    lines.Reverse();

    return lines;
}
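Note that BackwardReader above is a custom helper class that reads the file from the end; it is not part of the .NET base class library. If the logs are small enough to read in full, a minimal alternative sketch, assuming the file fits comfortably in memory and the UNC path built earlier can be opened with read sharing, could be:

// Simpler sketch: read the whole file and keep only the last maxLines lines.
// Assumes the log fits in memory and may still be written to by the logger.
public static string GetLastLines(string path, int maxLines)
{
    var lines = new List<string>();
    using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
    using (var reader = new StreamReader(fs))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            lines.Add(line);
        }
    }
    int skip = Math.Max(0, lines.Count - maxLines);
    return string.Join(Environment.NewLine, lines.GetRange(skip, lines.Count - skip));
}

It could then be wired up as txtLog.Text = GetLastLines(_PathFrom, 50); instead of WebClient.DownloadString, which always pulls the entire file into a single string anyway.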

Related

Razor, display a File Save dialog, and get folder / filename from it

I have a method that gets called when a button is pressed, to save the contents of a log file to a .csv file.
I have hard-coded the folder and filename, but I would like this to pop up a file save dialog, take the folder and filename from that, and populate the 'path' variable with what is returned. Here's my current code extract.
public void OnPostExport()
{
    List<LogModel> logs = _logs;
    //Insert the Column Names.
    //logs.Insert(0, new LogModel {"W"});
    StringBuilder sb = new StringBuilder();
    string path = "c:\\temp\\steve.csv";
    try
    {
        sb.AppendLine("Date,OriginLocationID,MagnumCount,Deleted,Location");
        foreach (var log in logs)
        {
            sb.AppendLine(log.Date + "," + log.OriginLocationId + "," + log.Location);
        }
        File.WriteAllText(path, sb.ToString());
        _notification = "Exported log history to " + path;
    }
    catch
    {
        _notification = "Failed to export log history";
    }
}
I tried a FileSaveDialog, but that's only available in WinForms.
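A server-side save dialog isn't possible here: OnPostExport runs on the server, so the usual approach is to return the CSV as a file result and let the browser show its own save prompt. A minimal sketch, assuming an ASP.NET Core Razor Pages PageModel and the _logs field and LogModel type from the question:

// Sketch: return the CSV as a download instead of writing to a hard-coded path.
// The browser's own "Save As" prompt then lets the user pick folder and file name.
public IActionResult OnPostExport()
{
    var sb = new System.Text.StringBuilder();
    sb.AppendLine("Date,OriginLocationID,MagnumCount,Deleted,Location");
    foreach (var log in _logs)
    {
        sb.AppendLine(log.Date + "," + log.OriginLocationId + "," + log.Location);
    }
    byte[] bytes = System.Text.Encoding.UTF8.GetBytes(sb.ToString());
    return File(bytes, "text/csv", "logs.csv"); // "logs.csv" is just a suggested name
}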

While loop int does not update

I have a script that scrapes a website, but the while loop does not work. The script downloads a page and then checks whether it is an HTML page or an image.
If the downloaded item is an HTML file, it saves it and adds 1 to i and to the URL number.
Problem: the URL does not change, even though I think it should with this code.
int i = 0;
while (i < 5)
{
    using var client = new WebClient();
    client.Headers.Add("User-Agent", "C# console program");
    int urlnumb = 1;
    string url = "http://localhost:7211/database/resource/pk/" + urlnumb;
    string content = client.DownloadString(url);
    string htmldefiner = "html";
    if (content.Contains(htmldefiner))
    {
        string savedirectory = @"C:/Temp/" + i + ".html";
        System.IO.File.WriteAllText(savedirectory, content);
        i++;
        urlnumb++;
        File.WriteAllText(@"C:/Temp/" + i + ".txt", url);
    }
    else
    {
        urlnumb++;
    }
}
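The likely cause is that urlnumb is declared inside the loop body, so it is reset to 1 on every iteration and the URL never changes. A sketch of the fix, keeping the rest of the logic from the question, is to move the counter (and the client) outside the loop:

// Sketch: declare the counters outside the loop so they survive between iterations.
int i = 0;
int urlnumb = 1;
using var client = new WebClient();
client.Headers.Add("User-Agent", "C# console program");
while (i < 5)
{
    string url = "http://localhost:7211/database/resource/pk/" + urlnumb;
    string content = client.DownloadString(url);
    if (content.Contains("html"))
    {
        System.IO.File.WriteAllText(@"C:/Temp/" + i + ".html", content);
        System.IO.File.WriteAllText(@"C:/Temp/" + i + ".txt", url);
        i++;      // only count saved HTML pages
    }
    urlnumb++;    // always advance to the next URL
}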

ASP.NET MVC delete file issue

I have an issue with files.
I am building an image importer: clients put their files on an FTP server and can then import them into the application.
During the import process I copy the file from the FTP folder to another folder with File.Copy:
public List<Visuel> ImportVisuel(int galerieId, string[] images)
{
    Galerie targetGalerie = MemoryCache.GetGaleriById(galerieId);
    List<FormatImage> listeFormats = MemoryCache.FormatImageToList();
    int i = 0;
    List<Visuel> visuelAddList = new List<Visuel>();
    List<Visuel> visuelUpdateList = new List<Visuel>();
    List<Visuel> returnList = new List<Visuel>();
    foreach (string item in images)
    {
        i++;
        Progress.ImportProgress[Progress.Guid] = "Image " + i + " sur " + images.Count() + " importées";
        string extension = Path.GetExtension(item);
        string fileName = Path.GetFileName(item);
        string originalPath = HttpContext.Current.Request.PhysicalApplicationPath + "Uploads\\";
        string destinationPath = HttpContext.Current.Server.MapPath("~/Images/Catalogue") + "\\";
        Visuel importImage = MemoryCache.GetVisuelByFilName(fileName);
        bool update = true;
        if (importImage == null) { importImage = new Visuel(); update = false; }
        Size imageSize = importImage.GetJpegImageSize(originalPath + fileName);
        FormatImage format = listeFormats.Where(f => f.width == imageSize.Width && f.height == imageSize.Height).FirstOrDefault();
        string saveFileName = Guid.NewGuid() + extension;
        File.Copy(originalPath + fileName, destinationPath + saveFileName);
        if (format != null)
        {
            importImage.format = format;
            switch (format.key)
            {
                case "Catalogue":
                    importImage.fileName = saveFileName;
                    importImage.originalFileName = fileName;
                    importImage.dossier = targetGalerie;
                    importImage.dossier_id = targetGalerie.id;
                    importImage.filePath = "Images/Catalogue/";
                    importImage.largeur = imageSize.Width;
                    importImage.hauteur = imageSize.Height;
                    importImage.isRoot = true;
                    if (update == false) { MemoryCache.Add(ref importImage); returnList.Add(importImage); }
                    if (update == true) visuelUpdateList.Add(importImage);
                    foreach (FormatImage f in listeFormats)
                    {
                        if (f.key.StartsWith("Catalogue_"))
                        {
                            string[] keys = f.key.Split('_');
                            string destinationFileName = saveFileName.Insert(saveFileName.IndexOf('.'), "-" + keys[1].ToString());
                            string destinationFileNameDeclinaison = destinationPath + destinationFileName;
                            VisuelResizer declinaison = new VisuelResizer();
                            declinaison.Save(originalPath + fileName, f.width, f.height, 1000, destinationFileNameDeclinaison);
                            Visuel visuel = MemoryCache.GetVisuelByFilName(fileName.Insert(fileName.IndexOf('.'), "-" + keys[1].ToString()));
                            update = true;
                            if (visuel == null) { visuel = new Visuel(); update = false; }
                            visuel.parent = importImage;
                            visuel.filePath = "Images/Catalogue/";
                            visuel.fileName = destinationFileName;
                            visuel.originalFileName = string.Empty;
                            visuel.format = f;
                            //visuel.dossier = targetGalerie; not needed for the resized variants
                            visuel.largeur = f.width;
                            visuel.hauteur = f.height;
                            if (update == false)
                            {
                                visuelAddList.Add(visuel);
                            }
                            else
                            {
                                visuelUpdateList.Add(visuel);
                            }
                            //importImage.declinaisons.Add(visuel);
                        }
                    }
                    break;
            }
        }
    }
    MemoryCache.Add(ref visuelAddList);
    // FUNCTION still to be implemented
    MemoryCache.Update(ref visuelUpdateList);
    return returnList;
}
After some processing on the copy (the original file is no longer used), the client gets a pop-up asking whether he wants to delete the original files in the FTP folder. If he clicks OK, another method on the same controller is called, and this method uses:
public void DeleteImageFile(string[] files)
{
    for (int i = 0; i < files.Length; i++)
    {
        File.Delete(HttpContext.Current.Request.PhysicalApplicationPath + files[i].Replace(@"/", @"\"));
    }
}
This method works fine and deletes the right files when I use it in other contexts.
But here I get an error message:
The process cannot access the file ... because it is being used by another process.
Does anyone have an idea?
Thank you.
Here's a screenshot from Process Explorer.
There are a couple of things you can do here.
1) If you can repro it, use Process Explorer at that moment to see which process is locking the file; if it is your own process, make sure you close the file handle once your work is done.
2) Wrap the delete statement in a try/catch and retry after a few seconds to see whether the file handle has been released (see the sketch after this list).
3) If you can do it offline, put the paths in a queue and perform the deletion later.
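A minimal sketch of option 2, assuming a short, bounded wait is acceptable (the attempt count and delay are arbitrary illustration values):

// Try the delete a few times, pausing between attempts while the handle is released.
private static bool TryDeleteWithRetry(string path, int maxAttempts = 3)
{
    for (int attempt = 1; attempt <= maxAttempts; attempt++)
    {
        try
        {
            File.Delete(path);
            return true;
        }
        catch (IOException)
        {
            // Still locked by another process; wait and try again.
            Thread.Sleep(TimeSpan.FromSeconds(2));
        }
    }
    return false; // still locked after all attempts
}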
You can solve this by using C# locks. Just wrap your code in a lock statement so that your threads wait for each other to finish processing.
I found the solution:
In my import method there is a call to this method:
public void Save(string originalFile, int maxWidth, int maxHeight, int quality, string filePath)
{
    Bitmap image = new Bitmap(originalFile);
    Save(ref image, maxWidth, maxHeight, quality, filePath);
}
The Bitmap keeps the file open, which blocks the delete. I just added
image.Dispose();
to the method and it works fine.
Thank you for your help, and thank you for Process Explorer. Very useful tool.
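An equivalent, slightly safer sketch is to dispose the Bitmap in a finally block, so the source file handle is released even if the resize throws. This assumes the inner Save overload keeps the same ref Bitmap signature as in the question (a using variable can't be passed by ref, hence try/finally):

public void Save(string originalFile, int maxWidth, int maxHeight, int quality, string filePath)
{
    Bitmap image = new Bitmap(originalFile);
    try
    {
        Save(ref image, maxWidth, maxHeight, quality, filePath);
    }
    finally
    {
        image.Dispose(); // releases the handle on originalFile even on failure
    }
}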

SharpSVN not iterating over all subdirectories and files

Following the code found here:
How to check if file is under source control in SharpSvn?
I'm trying to make a small utility application that will iterate over a designated folder and print out the status of all the files.
private void btnCheckSVN_Click(object sender, EventArgs e)
{
    ParseSVNResults(CheckSVN());
}

private Collection<SvnStatusEventArgs> CheckSVN()
{
    string path = @"C:\AMG\trunk\AMC";
    if (!Directory.Exists(path))
        return null;

    DevExpress.Utils.WaitDialogForm wait = new DevExpress.Utils.WaitDialogForm();
    wait.Caption = "Please wait, loading SVN file statuses. This may take a moment.";
    wait.Caption += Environment.NewLine + path;
    wait.Show();

    SvnClient client = new SvnClient();
    SvnStatusArgs sa = new SvnStatusArgs();
    sa.Depth = SvnDepth.Infinity;
    Collection<SvnStatusEventArgs> statuses;
    client.GetStatus(path, sa, out statuses);

    wait.Close();
    return statuses;
}

private void ParseSVNResults(Collection<SvnStatusEventArgs> results)
{
    if (results == null)
        return;

    int modified = 0;
    int unversioned = 0;
    foreach (SvnStatusEventArgs item in results)
    {
        memoEditSVNFiles.Text += item.LocalContentStatus.ToString() + " -- " + item.Path + Environment.NewLine;
        if (item.LocalContentStatus.ToString() == "Modified")
            modified++;
        else if (item.LocalContentStatus.ToString() == "NotVersioned")
            unversioned++;
    }
    memoEditSVNFiles.Text += Environment.NewLine + "Modified: " + modified + Environment.NewLine;
    memoEditSVNFiles.Text += "Not Versioned: " + unversioned + Environment.NewLine;
    memoEditSVNFiles.Text += "Total: " + results.Count;
}
When the code executes, I get a total of 147 Files & Folders. The actual folder has a few thousand files. Is it possible I'm looking at too many files and SharpSVN just quits after a while?
Edit: I just tried creating about 100 text files, putting ~30 into each of 3 folders and 'nesting' them. So I've got:
C:\AMG\trunk\test which has ~30 files
C:\AMG\trunk\test\Folder1 which has ~30 files
C:\AMG\trunk\test\Folder1\Sub which has another 30
Without committing this to the repository, when I run the above code on C:\AMG\trunk\test instead of the path given in my code snippet, the output says 1 total file.
It turns out the SvnStatusArgs class has a RetrieveAllEntries boolean flag that defaults to false.
As the name implies, setting it to true returns every file, whether it is modified, unversioned, or up to date.
One extra line in the CheckSVN() method from my original post:
SvnClient client = new SvnClient();
SvnStatusArgs sa = new SvnStatusArgs();
sa.Depth = SvnDepth.Infinity;
sa.RetrieveAllEntries = true; //the new line
Collection<SvnStatusEventArgs> statuses;
client.GetStatus(path, sa, out statuses);
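For completeness, a sketch of CheckSVN() with the flag in place, also wrapping the client in a using block since SharpSvn's SvnClient is disposable (the WaitDialogForm from the original is omitted here for brevity):

private Collection<SvnStatusEventArgs> CheckSVN()
{
    string path = @"C:\AMG\trunk\AMC";
    if (!Directory.Exists(path))
        return null;

    using (SvnClient client = new SvnClient())
    {
        SvnStatusArgs sa = new SvnStatusArgs();
        sa.Depth = SvnDepth.Infinity;
        sa.RetrieveAllEntries = true; // also report unmodified, up-to-date entries

        Collection<SvnStatusEventArgs> statuses;
        client.GetStatus(path, sa, out statuses);
        return statuses;
    }
}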

Getting "The process cannot access the file because it is being used by another process" when saving multiple files from the request stream

I am using the HTML5 canvas element and the new HTML5 file I/O functions to drop multiple files on it and have them upload. It works fine, but now I need to generate a new filename if there are no files in the destination directory (it's a 7-digit integer), or else take the name of the last uploaded file, convert it to Int32, and increment it by one for every new file uploaded to the same directory. This is where GetFileName(dir) comes in. The first image always uploads fine, but the problem begins once the second file is saved and the process hits ImageJob.Build(). I presume this is because, while the new file is being written, GetFileName() runs for the second file in line and checks for the last written file, which is still being written, and this creates the conflict. How can I fix this? Maybe I can somehow iterate with a foreach over the Request.InputStream data, or implement some kind of process watch that waits for the write to finish?
Update: I tried using TempData to store the generated filename and just increment the int value in TempData for each subsequent file name. That seems to do better and gets more images in, but it still errors at some point. TempData isn't meant for this anyway, since it gets erased after each read, and reassigning to it does not help. Maybe I'll try storing it in the session.
The process cannot access the file 'C:\Users\Admin\Documents\Visual Studio
2010\Projects\myproj\myproj\Content\photoAlbums\59\31\9337822.jpg'
because it is being used by another process.
public PartialViewResult Upload()
{
    string fileName = Request.Headers["filename"];
    string catid = Request.Headers["catid"];
    string pageid = Request.Headers["pageid"];
    string albumname = Request.Headers["albumname"];
    var dir = "~/Content/photoAlbums/" + catid + "/" + pageid + "/" + (albumname ?? null);
    var noex = GetFileName(dir);
    var extension = ".jpg";
    string thumbFile = noex + "_t" + extension;
    fileName = noex + extension;

    byte[] file = new byte[Request.ContentLength];
    Request.InputStream.Read(file, 0, Request.ContentLength);

    string imgdir;
    string thumbimgdir;
    string imageurl;
    if (albumname != null)
    {
        imgdir = Server.MapPath("~/Content/photoAlbums/" + catid + "/" + pageid + "/" + albumname + "/" + fileName);
        thumbimgdir = Server.MapPath("~/Content/photoAlbums/" + catid + "/" + pageid + "/" + albumname + "/" + thumbFile);
        imageurl = "/Content/photoAlbums/" + catid + "/" + pageid + "/" + albumname + "/" + thumbFile;
    }
    else
    {
        imgdir = Server.MapPath("~/Content/photoAlbums/" + catid + "/" + pageid + "/" + fileName);
        thumbimgdir = Server.MapPath("~/Content/photoAlbums/" + catid + "/" + pageid + "/" + thumbFile);
        imageurl = "/Content/photoAlbums/" + catid + "/" + pageid + "/" + thumbFile;
    }

    ImageJob b = new ImageJob(file, imgdir, new ResizeSettings("maxwidth=1024&maxheight=768&format=jpg")); b.CreateParentDirectory = true; b.Build();
    ImageJob a = new ImageJob(file, thumbimgdir, new ResizeSettings("w=100&h=100&mode=crop&format=jpg")); a.CreateParentDirectory = true; a.Build();

    ViewBag.CatID = catid;
    ViewBag.PageID = pageid;
    ViewBag.FileName = fileName;
    return PartialView("AlbumImage", imageurl);
}
public string GetFileName(string dir)
{
    var FullPath = Server.MapPath(dir);
    var dinfo = new DirectoryInfo(FullPath);
    string FileName;
    if (dinfo.Exists)
    {
        var Filex = dinfo.EnumerateFiles().OrderBy(x => x.Name).LastOrDefault();
        FileName = Filex != null ? Path.GetFileNameWithoutExtension(Filex.Name) : null;
        if (FileName != null)
        {
            FileName = FileName.Contains("_t") ? FileName.Substring(0, FileName.Length - 2) : FileName;
            int fnum;
            Int32.TryParse(FileName, out fnum);
            FileName = (fnum + 1).ToString();
            if (fnum > 999999) { return FileName; } // Check that TryParse produced a valid int
            else
            {
                var random = new Random();
                FileName = random.Next(1000000, 9999000).ToString();
            }
        }
        else
        {
            var random = new Random();
            FileName = random.Next(1000000, 9999000).ToString();
        }
    }
    else
    {
        var random = new Random();
        FileName = random.Next(1000000, 9999000).ToString();
    }
    return FileName;
}
You simply cannot use the Random class if you want to generate unique filenames. It uses the current time as the seed, so two exactly concurrent requests will always produce the same 'random' number.
You could use a cryptographic random number generator, but you would still have to ensure that (a) only one thread generates it at a time, and (b) the identifier is long enough to avoid birthday-paradox collisions.
Thus, I suggest that everyone use GUID identifiers for their uploads, as they solve all of the above issues inherently (I believe an OS-level lock is used to prevent duplicates).
Your method also doesn't handle multiple file uploads per-request, although that may be intentional. You can support those by looping through Request.Files and passing each HttpPostedFile instance directly into the ImageJob.
Here's a simplified version of your code that uses GUIDs and won't encounter concurrency issues.
public PartialViewResult Upload()
{
    string albumname = Request.Headers["albumname"];
    string baseDir = "~/Content/photoAlbums/" + Request.Headers["catid"] + "/" + Request.Headers["pageid"] + "/" + (albumname != null ? albumname + "/" : "");

    byte[] file = new byte[Request.ContentLength];
    Request.InputStream.Read(file, 0, Request.ContentLength);

    ImageJob b = new ImageJob(file, baseDir + "<guid>.<ext>", new ResizeSettings("maxwidth=1024&maxheight=768&format=jpg")); b.CreateParentDirectory = true; b.Build();
    ImageJob a = new ImageJob(file, baseDir + "<guid>_t.<ext>", new ResizeSettings("w=100&h=100&mode=crop&format=jpg")); a.CreateParentDirectory = true; a.Build();

    // Want both to have the same GUID? Pull it from the previous job.
    //string ext = PathUtils.GetExtension(b.FinalPath);
    //ImageJob a = new ImageJob(file, PathUtils.RemoveExtension(b.FinalPath) + "_t." + ext, new ResizeSettings("w=100&h=100&mode=crop&format=jpg")); a.CreateParentDirectory = true; a.Build();

    ViewBag.CatID = Request.Headers["catid"];
    ViewBag.PageID = Request.Headers["pageid"];
    ViewBag.FileName = Request.Headers["filename"];
    return PartialView("AlbumImage", PathUtils.GuessVirtualPath(a.FinalPath));
}
If the process is relatively quick (small files) you could go in a loop, check for that exception, sleep the thread for a couple of seconds, and try again (up to a maximum number of iterations). One caveat is that if the upload is asynchronous you might miss a file.
A couple of other suggestions:
Make GetFileName a private method so that it can't be triggered from the web.
The OrderBy in the Filex query might not do what you expect once the names reach 8 digits (possible if the first Random() value is very high).
The Random() should probably be seeded to produce better randomness.
