This application runs seamlessly in Visual Studio, but the error is encountered in the installed version created by my installer. I think I have pinned down what the issue is. When a POST is received, it is handled in a way that kicks off a separate, decoupled process, which eventually gets aborted when the webpage is disposed/closed.
The program flow is as follows:
1. POST received: context.Request.HttpMethod == "POST"
2. Pertinent XML info is extracted and written to disk
3. csfireEyeHandler.DonJobOnLastIp() is called
4. A monitor running in the background picks up on the file-creation event (OnChanged) and starts running services based on the XML doc
5. FileAdded --> readerRef.ReadInServices(e.FullPath, false)
The problem is that after the POST is handled, the services abort with a ThreadAbortException. If a delay is placed after handler.ProcessRequest(context), the services finish, I presume because the page is still open. I cannot figure out how to properly handle this situation, and it's terribly difficult to debug because I cannot get the error to occur in VS.
public partial class fireEye : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
HttpContext context = Context;
fireEyeHandler handler = new fireEyeHandler();
handler.ProcessRequest(context);
}
}
public class fireEyeHandler : IHttpHandler
{
public void ProcessRequest(HttpContext context)
{
if (context.Request.HttpMethod == "POST")
{
var extension = context.Request.Url.AbsolutePath.Split('/')[2].ToLower();
var stream = context.Request.InputStream;
var buffer = new byte[stream.Length];
stream.Read(buffer, 0, buffer.Length);
var xml = Encoding.UTF8.GetString(buffer);
FileManage.WriteToFile(xml, @"C:\ECC_output\fireEye.xml");
var csfireEyeHandler = new FireEyeService { config = extension + ".config" };
csfireEyeHandler.Load();
csfireEyeHandler.DonJobOnLastIp();
context.Response.StatusCode = 202;
}
}
public bool IsReusable
{
get { return false; }
}
}
public class Monitor
{
bool monitorIsActive;
readonly XmlRead readerRef; // Reference to the xml reader
readonly FileSystemWatcher watch;
public bool monitorRunning;
public Monitor(XmlRead reader)
{
watch = new FileSystemWatcher();
readerRef = reader;
try
{
watch.Path = @"C:\ECC_temp"; //directory to monitor
}
catch (ArgumentException ex)
{
Report.LogLine (ex.Message);
return;
}
watch.NotifyFilter = NotifyFilters.LastAccess | NotifyFilters.LastWrite | NotifyFilters.FileName | NotifyFilters.DirectoryName;
watch.Filter = "*.xml";
monitorIsActive = true;
watch.Created += OnChanged;
watch.Deleted += OnChanged;
watch.Renamed += OnRenamed;
watch.EnableRaisingEvents = true;
}
/// <summary>
/// Toggles on/off if a directory is being monitored
/// </summary>
public void ToggleMonitor()
{
monitorIsActive = !monitorIsActive;
var monitorState = monitorIsActive ? "on" : "off";
Report.LogLine ("Monitor is " + monitorState);
}
/// <summary>
/// File has been added to the directory
/// </summary>
public bool FileAdded(FileSystemEventArgs e, XmlDocument xmlDocument)
{
try
{
var date = string.Format ("<br>\r\n**********************Report {0:yyyy MM-dd hh:mm tt}**********************", DateTime.Now);
Report.LogLine(date);
readerRef.Validate(e.FullPath, false);
readerRef.ReadInServices(e.FullPath, false);
Report.CreateReport();
}
catch (Exception exception)
{
Report.LogLine(exception.Message + " id:6");
Report.CreateReport();
return true;
}
return true;
}
/// <summary>
/// When a file is added, renamed or deleted, OnChanged is called and the appropriate action is taken
/// </summary>
private void OnChanged(object source, FileSystemEventArgs e)
{
monitorRunning = true;
while (true)
{
if (e.ChangeType == WatcherChangeTypes.Created || e.ChangeType == WatcherChangeTypes.Renamed)
{
var xmlDocument = new XmlDocument();
try
{
xmlDocument.Load(e.FullPath);
}
catch (IOException)
{
    Thread.Sleep(100);
    continue; // the file is still locked; retry the load
}
if (FileAdded(e, xmlDocument))
{
break;
}
}
}
monitorRunning = false;
}
}
Unless your application knows about this Monitor and can detect that it's currently handling a file-change event (and hold off returning from the POST until it finishes), there's nothing you can do. I would suggest not coupling the web application to this Monitor (you haven't explained its purpose): pull it out of the web application and put it in a Windows service of some sort. Or explain in more detail why this Monitor is in the web application.
The first problem is that FileSystemWatcher can fire as soon as the file is created, which can often be while the ASP.NET process is still in the middle of writing the document.
I'd expect this to cause an XML read exception or an access-denied exception, however, not necessarily a ThreadAbortException. ThreadAbortExceptions usually only happen if someone calls Thread.Abort, which you should never, ever do.
Anyway, you can get around this by using System.IO.FileStream to write your file and telling Windows to lock it while it's being written to.
When you open the file to write it, specify FileShare.None. This will prevent the monitor from reading the file until the ASP.NET process is done writing.
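For example, a minimal sketch of the write side, assuming FileManage.WriteToFile can be reworked along these lines (path and encoding taken from the question):
// Write the XML with an exclusive lock; readers get an IOException
// until the stream is closed. FileShare.None is the important part.
using (var fs = new FileStream(@"C:\ECC_output\fireEye.xml",
                               FileMode.Create, FileAccess.Write, FileShare.None))
{
    var bytes = Encoding.UTF8.GetBytes(xml);
    fs.Write(bytes, 0, bytes.Length);
} // the lock is released here; only now can the monitor read the file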
In your Monitor, add a retry loop with a small sleep in it when opening the file. You should get an access denied exception (which I think will be an IOException) repeatedly until the file is ready for you to read.
I can't really help any more than that without understanding what your csFireEyeHandler thing is supposed to do. Are you expecting the monitor service to finish processing the file in the middle of your ASP.NET request? That is unlikely to happen.
What I'd expect the workflow to be:
ASP PAGE                              MONITOR PROCESS
> File uploaded                       |
> Begin write to disk                 |
|                                     > Try to open file
|                                     > Fail to read file and retry
> Finish write to disk                |
|                                     > Open file
|                                     > Begin processing file
> csFireEyeHandler.Load               |
> csFireEyeHandler.DonJob             |
> RETURN                              |
                                      > Finish processing file (Report.CreateReport)
If in fact you need the fireEyeHandler to wait for the background service, there are several ways you can do this... but why not just process the file in the fireEyeHandler?
According to my interpretation of your question, you don't want to wait until a file-monitor event has occurred before you send the response to the client. You want to send the success status (202 in your code) immediately and process in the background.
This is an anti-pattern in ASP.NET, because the runtime does not guarantee that a worker process stays alive once all running HTTP requests have exited. In that sense, ASP.NET is conforming to the specification: it can abort your custom threads at any time.
It would be easier to get this right if you waited until the "service processing" is completed before sending the HTTP response.
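If the response really must be sent first, one option on .NET 4.5.2 or later is to at least register the background work with the runtime via HostingEnvironment.QueueBackgroundWorkItem. This is only a sketch - it delays graceful shutdowns rather than surviving hard recycles - and DonJobOnLastIp stands in for the question's background work:
// requires System.Web.Hosting (.NET 4.5.2+)
HostingEnvironment.QueueBackgroundWorkItem(cancellationToken =>
{
    // The runtime tries to delay AppDomain shutdown until this returns,
    // and signals cancellationToken when it can no longer wait.
    csfireEyeHandler.DonJobOnLastIp();
});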
Related
I'm writing a FileSystemWatcher which is to copy images from folder A to folder B whenever an image is uploaded to folder A. I'm trying to run this as a Windows service on the server PC, but I'm having some issues where my files are locked when they are to be copied. I think I've found the root of my issue, but I'm not having any luck solving it. When I run my Windows service it always ends unexpectedly at either the first or the second picture upload. The error message I'm getting says: The process cannot access the file 'filepath' because it is being used by another process.
Relevant parts of my code:
public void WatchForChanges()
{
FileSystemWatcher watcher = new FileSystemWatcher();
watcher.Path = Program.SourceFolder;
watcher.Created += new FileSystemEventHandler(OnImageAdded);
watcher.EnableRaisingEvents = true;
watcher.IncludeSubdirectories = true;
}
public void OnImageAdded(object source, FileSystemEventArgs e)
{
FileInfo file = new FileInfo(e.FullPath);
ImageHandler handler = new ImageHandler();
if (handler.IsImage(file))
{
handler.CopyImage(file);
}
}
and, my CopyImage method, which includes one of my proposed solutions to this problem, utilizing a while loop that catches the error and retries the copying of the image:
public void CopyImage(FileSystemInfo file)
{
// code that sets folder paths
// code that sets folder paths
bool retry = true;
if (!Directory.Exists(targetFolderPath))
{
Directory.CreateDirectory(targetFolderPath);
}
while (retry)
{
try
{
File.Copy(file.FullName, targetPath, true);
retry = false;
}
catch (Exception e)
{
Thread.Sleep(2000);
}
}
}
but this CopyImage solution just keeps on copying the same file, which is not ideal in my case. I wish it were enough, but sadly I've got a queue of images waiting.
The image file is probably being created by another application that holds an exclusive lock blocking both reading and writing by external processes (for more information, read this, especially the paragraph related to Microsoft Windows). You have to either:
stop/kill the process which is using the file;
wait until the file isn't being used anymore.
Since the other process is probably writing the file at the moment you try to copy it with your application, the first option is by no means recommendable. It could also be an anti-virus program checking the new file, and even in that case the first option would not be recommendable.
You could try to integrate the following code into your CopyImage method so that your application will wait until the file will be no longer in use before proceeding:
private Boolean WaitForFile(String filePath)
{
Int32 tries = 0;
while (true)
{
++tries;
Boolean wait = false;
FileStream stream = null;
try
{
stream = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.None);
break;
}
catch (Exception ex)
{
Logger.LogWarning("CopyImage({0}) failed to get an exclusive lock: {1}", filePath, ex.ToString());
if (tries > 10)
{
Logger.LogWarning("CopyImage({0}) skipped the file after 10 tries.", filePath);
return false;
}
wait = true;
}
finally
{
if (stream != null)
stream.Close();
}
if (wait)
Thread.Sleep(250);
}
Logger.LogWarning("CopyImage({0}) got an exclusive lock after {1} tries.", filePath, tries);
return true;
}
While it seems straightforward, it's really unsatisfactorily complex.
The problem is that the application writing the file isn't done with it when you get the notification, so you have a concurrency problem. There is no great way to know when the file closes. Well, one way is to subscribe to journal events - which is what FileSystemWatcher does - but this is fairly involved and requires a lot of moving parts. Going this route, you can be notified when the file closes. If you're interested, see https://msdn.microsoft.com/en-us/library/windows/desktop/aa363798(v=vs.85).aspx.
I'd divide the work into two parts. I think I'd start a ThreadPool thread to do the work, and have it read its work from a list that the FileSystemWatcher's event handler writes to. That way, the event handler returns quickly. The ThreadPool thread would go through its list, attempting to get an exclusive lock (similar to Tommaso's code) on each file. If it can't, it just moves on to the next file. Every time it successfully copies, it removes that file from the list.
You need to be concerned about thread safety...so you'd want to make a static object to coordinate writes to the list. Both the event handler and the ThreadPool thread would hold the lock while writing.
Here's a scaffold of the whole approach:
internal sealed class Copier: IDisposable
{
static object sync = new object();
bool quit;
FileSystemWatcher watcher;
List<string> work;
internal Copier( string pathToWatch )
{
work = new List<string>();
watcher = new FileSystemWatcher();
watcher.Path = pathToWatch;
watcher.Created += QueueWork;
watcher.EnableRaisingEvents = true;
ThreadPool.QueueUserWorkItem( TryCopy );
}
public void Dispose()
{
lock( sync ) quit = true;
}
void QueueWork( object source, FileSystemEventArgs args )
{
lock ( sync )
{
work.Add( args.FullPath );
}
}
void TryCopy( object args )
{
List<string> localWork;
while( true )
{
lock ( sync )
{
if ( quit ) return; //--> we've been disposed
localWork = new List<string>( work );
}
foreach( var fileName in localWork )
{
var locked = true;
try
{
using ( var throwAway = new FileStream
    ( fileName,
      FileMode.Open,
      FileAccess.Read,
      FileShare.None
    )
) { } //--> no-op - will throw if we can't get exclusive read
locked = false;
}
catch { }
if (!locked )
{
File.Copy( fileName, ... );
lock( sync ) work.Remove( fileName );
}
}
Thread.Sleep( 100 ); //--> don't spin hot between passes
}
}
}
Not tested - I wrote it right here in the answer - but it, or something like it, will cover the bases.
When a file is created (FileSystemWatcher_Created) in one directory, I copy it to another. But when I create a big (>10MB) file, the copy fails because it starts while the file is not yet finished being created...
This causes Cannot copy the file, because it's used by another process to be raised. ;(
Any help?
class Program
{
static void Main(string[] args)
{
string path = @"D:\levan\FolderListenerTest\ListenedFolder";
FileSystemWatcher listener;
listener = new FileSystemWatcher(path);
listener.Created += new FileSystemEventHandler(listener_Created);
listener.EnableRaisingEvents = true;
while (Console.ReadLine() != "exit") ;
}
public static void listener_Created(object sender, FileSystemEventArgs e)
{
Console.WriteLine
(
"File Created:\n"
+ "ChangeType: " + e.ChangeType
+ "\nName: " + e.Name
+ "\nFullPath: " + e.FullPath
);
File.Copy(e.FullPath, @"D:\levan\FolderListenerTest\CopiedFilesFolder\" + e.Name);
Console.Read();
}
}
There is only a workaround for the issue you are facing.
Check whether the file is in use before starting the copy: call the following function in a loop until it returns False.
1st Method, copied directly from this answer:
private bool IsFileLocked(FileInfo file)
{
FileStream stream = null;
try
{
stream = file.Open(FileMode.Open, FileAccess.ReadWrite, FileShare.None);
}
catch (IOException)
{
//the file is unavailable because it is:
//still being written to
//or being processed by another thread
//or does not exist (has already been processed)
return true;
}
finally
{
if (stream != null)
stream.Close();
}
//file is not locked
return false;
}
2nd Method:
const int ERROR_SHARING_VIOLATION = 32;
const int ERROR_LOCK_VIOLATION = 33;
private bool IsFileLocked(string file)
{
//check that problem is not in destination file
if (File.Exists(file) == true)
{
FileStream stream = null;
try
{
stream = File.Open(file, FileMode.Open, FileAccess.ReadWrite, FileShare.None);
}
catch (Exception ex2)
{
//_log.WriteLog(ex2, "Error in checking whether file is locked " + file);
int errorCode = Marshal.GetHRForException(ex2) & ((1 << 16) - 1);
if ((ex2 is IOException) && (errorCode == ERROR_SHARING_VIOLATION || errorCode == ERROR_LOCK_VIOLATION))
{
return true;
}
}
finally
{
if (stream != null)
stream.Close();
}
}
return false;
}
From the documentation for FileSystemWatcher:
The OnCreated event is raised as soon as a file is created. If a file
is being copied or transferred into a watched directory, the
OnCreated event will be raised immediately, followed by one or more
OnChanged events.
So, if the copy fails (catch the exception), add the file to a list of files that still need to be moved, and attempt the copy during the OnChanged event. Eventually, it should work.
Something like (incomplete; catch specific exceptions, initialize variables, etc):
public static void listener_Created(object sender, FileSystemEventArgs e)
{
Console.WriteLine
(
"File Created:\n"
+ "ChangeType: " + e.ChangeType
+ "\nName: " + e.Name
+ "\nFullPath: " + e.FullPath
);
try {
File.Copy(e.FullPath, @"D:\levani\FolderListenerTest\CopiedFilesFolder\" + e.Name);
}
catch {
_waitingForClose.Add(e.FullPath);
}
Console.Read();
}
public static void listener_Changed(object sender, FileSystemEventArgs e)
{
if (_waitingForClose.Contains(e.FullPath))
{
try {
File.Copy(...);
_waitingForClose.Remove(e.FullPath);
}
catch {}
}
}
It's an old thread, but I'll add some info for other people.
I experienced a similar issue with a program that writes PDF files; sometimes they take 30 seconds to render, which is the same period my watcher_FileCreated handler waited before copying the file.
The files were not locked.
In this case I checked the size of the PDF, waited 2 seconds, and compared the new size; if they were unequal, the thread would sleep for 30 seconds and try again.
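A minimal sketch of that size-comparison wait, using the intervals from the text (tune them to taste):
// Block until the file's length stops changing; the writer never locks the
// file, so polling the size is the only signal available.
var info = new FileInfo(path);
long size;
do
{
    size = info.Length;
    Thread.Sleep(2000);        // wait before re-checking
    info.Refresh();            // FileInfo caches Length; force a re-read
    if (info.Length != size)
        Thread.Sleep(30000);   // still rendering; back off
} while (info.Length != size);
// the file has now been stable for at least 2 seconds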
You're actually in luck - the program writing the file locks it, so you can't open it. If it hadn't locked it, you would have copied a partial file, without having any idea there's a problem.
When you can't access a file, you can assume it's still in use (better yet - try to open it in exclusive mode, and see if someone else is currently opening it, instead of guessing from the failure of File.Copy). If the file is locked, you'll have to copy it at some other time. If it's not locked, you can copy it (there's slight potential for a race condition here).
When is that 'other time'? I don't remember when FileSystemWatcher sends multiple events per file - check it out; it might be enough for you to simply ignore the event and wait for another one. If not, you can always set up a timer and recheck the file in 5 seconds.
Well, you already gave the answer yourself: you have to wait for the creation of the file to finish. One way to do this is to check whether the file is still in use. An example of this can be found here: Is there a way to check if a file is in use?
Note that you will have to modify this code for it to work in your situation. You might want something along these lines:
public static void listener_Created(object sender, FileSystemEventArgs e)
{
    while (CheckFileInUse(e.FullPath))   // see the linked question for an implementation
        Thread.Sleep(1000);
    CopyFile(e.FullPath);
}
Obviously you should protect yourself from an infinite while just in case the owner application never releases the lock. Also, it might be worth checking out the other events from FileSystemWatcher you can subscribe to. There might be an event which you can use to circumvent this whole problem.
When the file is written in binary, byte by byte, creating a FileStream as in the above solutions does not work, because the file exists and is readable while every byte is still being written. In this situation you need a different workaround, like this:
Do this when the file is created, or whenever you want to start processing it:
long fileSize = 0;
currentFile = new FileInfo(path);
while (fileSize < currentFile.Length)//check size is stable or increased
{
fileSize = currentFile.Length;//get current size
System.Threading.Thread.Sleep(500);//wait a moment for processing copy
currentFile.Refresh();//refresh length value
}
//Now file is ready for any process!
So, having glanced quickly through some of these and other similar questions, I went on a merry goose chase this afternoon trying to solve a problem with two separate programs using a file as a synchronization (and also file save) method. A bit of an unusual situation, but it definitely highlighted for me the problems with the 'check if the file is locked, then open it if it's not' approach.
The problem is this: the file can become locked between the time you check it and the time you actually open it. It's really hard to track down the sporadic Cannot copy the file, because it's used by another process error if you aren't looking for it.
The basic resolution is to just try to open the file inside a catch block, so that if it's locked you can try again. That way there is no elapsed time between the check and the opening; the OS does them at the same time.
The code here uses File.Copy, but it works just as well with any of the static methods of the File class: File.Open, File.ReadAllText, File.WriteAllText, etc.
/// <param name="timeout">how long to keep trying in milliseconds</param>
static void safeCopy(string src, string dst, int timeout)
{
while (timeout > 0)
{
try
{
File.Copy(src, dst);
//don't forget to either return from the function or break out of the while loop
break;
}
catch (IOException)
{
//you could do the sleep in here, but it's probably a good idea to exit the error handler as soon as possible
}
Thread.Sleep(100);
//if it's a very long wait this will accumulate very small errors.
//For most things it's probably fine, but if you need precision over a long time span, consider
// using some sort of timer or DateTime.Now as a better alternative
timeout -= 100;
}
}
Another small note on parallelism:
This is a synchronous method, which will block its thread both while waiting and while working. This is the simplest approach, but if the file remains locked for a long time your program may become unresponsive. Parallelism is too big a topic to go into in depth here (and the number of ways you could set up asynchronous read/write is kind of preposterous), but here is one way it could be parallelized.
public class FileEx
{
public static async void CopyWaitAsync(string src, string dst, int timeout, Action doWhenDone)
{
while (timeout > 0)
{
try
{
File.Copy(src, dst);
doWhenDone();
break;
}
catch (IOException) { }
await Task.Delay(100);
timeout -= 100;
}
}
public static async Task<string> ReadAllTextWaitAsync(string filePath, int timeout)
{
while (timeout > 0)
{
try {
return File.ReadAllText(filePath);
}
catch (IOException) { }
await Task.Delay(100);
timeout -= 100;
}
return "";
}
public static async void WriteAllTextWaitAsync(string filePath, string contents, int timeout)
{
while (timeout > 0)
{
try
{
File.WriteAllText(filePath, contents);
return;
}
catch (IOException) { }
await Task.Delay(100);
timeout -= 100;
}
}
}
And here is how it could be used:
public static void Main()
{
test_FileEx();
Console.WriteLine("Me First!");
}
public static async void test_FileEx()
{
await Task.Delay(1);
//you can do this, but it gives a compiler warning because it can potentially return immediately without finishing the copy
//As a side note, if the file is not locked this will not return until the copy operation completes. Async functions run synchronously
//until the first 'await'. See the documentation for async: https://msdn.microsoft.com/en-us/library/hh156513.aspx
CopyWaitAsync("file1.txt", "file1.bat", 1000);
//this is the normal way of using this kind of async function. Execution of the following lines will always occur AFTER the copy finishes
await CopyWaitAsync("file1.txt", "file1.readme", 1000);
Console.WriteLine("file1.txt copied to file1.readme");
//The following line doesn't cause a compiler error, but it doesn't make any sense either.
ReadAllTextWaitAsync("file1.readme", 1000);
//To get the return value of the function, you have to use this function with the await keyword
string text = await ReadAllTextWaitAsync("file1.readme", 1000);
Console.WriteLine("file1.readme says: " + text);
}
//Output:
//Me First!
//file1.txt copied to file1.readme
//file1.readme says: Text to be duplicated!
You can use the following code to check if the file can be opened with exclusive access (that is, it is not opened by another application). If the file isn't closed, you could wait a few moments and check again until the file is closed and you can safely copy it.
You should still check if File.Copy fails, because another application may open the file between the moment you check the file and the moment you copy it.
public static bool IsFileClosed(string filename)
{
try
{
using (var inputStream = File.Open(filename, FileMode.Open, FileAccess.Read, FileShare.None))
{
return true;
}
}
catch (IOException)
{
return false;
}
}
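A hypothetical usage, with sourcePath and destPath standing in for your real paths:
// Poll until the writer has closed the file, then copy. File.Copy stays in a
// try because another process can still grab the file between check and copy.
while (!IsFileClosed(sourcePath))
    Thread.Sleep(250);
try
{
    File.Copy(sourcePath, destPath, true);
}
catch (IOException)
{
    // the file was reopened between the check and the copy; retry or log
}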
I would like to add an answer here, because this worked for me. I used time delays, while loops, everything I could think of.
I had the Windows Explorer window of the output folder open. I closed it, and everything worked like a charm.
I hope this helps someone.
I can't work out how to resume an interrupted upload in V3 of the C# YouTube API.
My existing code uses V1 and works fine but I'm switching to V3.
If I call UploadAsync() without changing anything, it starts from the beginning. Using Fiddler, I can see the protocol given here is not followed and the upload restarts.
I've tried setting the position within the stream as per V1 but there is no ResumeAsync() method available.
The Python example uses NextChunk but the SendNextChunk method is protected and not available in C#.
In the code below, both UploadVideo() and Resume() work fine if I leave them to completion but the entire video is uploaded instead of just the remaining parts.
How do I resume an interrupted upload using google.apis.youtube.v3?
Here is the C# code I have tried so far.
private ResumableUpload<Video> UploadVideo(
YouTubeService youTubeService, Video video, Stream stream, UserCredential userCredentials)
{
var resumableUpload = youTubeService.Videos.Insert(video,
"snippet,status,contentDetails", stream, "video/*");
resumableUpload.OauthToken = userCredentials.Token.AccessToken;
resumableUpload.ChunkSize = 256 * 1024;
resumableUpload.ProgressChanged += resumableUpload_ProgressChanged;
resumableUpload.ResponseReceived += resumableUpload_ResponseReceived;
resumableUpload.UploadAsync();
return resumableUpload;
}
private void Resume(ResumableUpload<Video> resumableUpload)
{
//I tried seeking like V1 but it doesn't work
//if (resumableUpload.ContentStream.CanSeek)
// resumableUpload.ContentStream.Seek(resumableUpload.ContentStream.Position, SeekOrigin.Begin);
resumableUpload.UploadAsync(); // <----This restarts the upload
}
void resumableUpload_ResponseReceived(Video obj)
{
Debug.WriteLine("Video status: {0}", obj.Status.UploadStatus);
}
void resumableUpload_ProgressChanged(IUploadProgress obj)
{
Debug.WriteLine("Position: {0}", (resumableUploadTest == null) ? 0 : resumableUploadTest.ContentStream.Position);
Debug.WriteLine("Status: {0}", obj.Status);
Debug.WriteLine("Bytes sent: {0}", obj.BytesSent);
}
private void button2_Click(object sender, EventArgs e)
{
Resume(resumableUploadTest);
}
Any solution/suggestion/demo or a link to the "google.apis.youtube.v3" source code will be very helpful.
Thanks in Advance !
EDIT: New information
I'm still working on this and I believe the API simply isn't finished. Either that or I'm missing something simple.
I still can't find the "google.apis.youtube.v3" source code so I downloaded the latest "google-api-dotnet-client" source code. This contains the ResumableUpload class used by the YouTube API.
I managed to successfully continue an upload by skipping the first four lines of code in the UploadAsync() method. I created a new method called ResumeAsync(), a copy of UploadAsync() with the first four lines of initialization code removed. Everything worked and the upload resumed from where it was and completed.
I'd rather not be changing code in the API so if anyone knows how I should be using this, let me know.
I'll keep plugging away and see if I can work it out.
This is the original UploadAsync() method and my ResumeAsync() hack.
public async Task<IUploadProgress> UploadAsync(CancellationToken cancellationToken)
{
try
{
BytesServerReceived = 0;
UpdateProgress(new ResumableUploadProgress(UploadStatus.Starting, 0));
// Check if the stream length is known.
StreamLength = ContentStream.CanSeek ? ContentStream.Length : UnknownSize;
UploadUri = await InitializeUpload(cancellationToken).ConfigureAwait(false);
Logger.Debug("MediaUpload[{0}] - Start uploading...", UploadUri);
using (var callback = new ServerErrorCallback(this))
{
while (!await SendNextChunkAsync(ContentStream, cancellationToken).ConfigureAwait(false))
{
UpdateProgress(new ResumableUploadProgress(UploadStatus.Uploading, BytesServerReceived));
}
UpdateProgress(new ResumableUploadProgress(UploadStatus.Completed, BytesServerReceived));
}
}
catch (TaskCanceledException ex)
{
Logger.Error(ex, "MediaUpload[{0}] - Task was canceled", UploadUri);
UpdateProgress(new ResumableUploadProgress(ex, BytesServerReceived));
throw ex;
}
catch (Exception ex)
{
Logger.Error(ex, "MediaUpload[{0}] - Exception occurred while uploading media", UploadUri);
UpdateProgress(new ResumableUploadProgress(ex, BytesServerReceived));
}
return Progress;
}
public async Task<IUploadProgress> ResumeAsync(CancellationToken cancellationToken)
{
try
{
using (var callback = new ServerErrorCallback(this))
{
while (!await SendNextChunkAsync(ContentStream, cancellationToken).ConfigureAwait(false))
{
UpdateProgress(new ResumableUploadProgress(UploadStatus.Uploading, BytesServerReceived));
}
UpdateProgress(new ResumableUploadProgress(UploadStatus.Completed, BytesServerReceived));
}
}
catch (TaskCanceledException ex)
{
UpdateProgress(new ResumableUploadProgress(ex, BytesServerReceived));
throw ex;
}
catch (Exception ex)
{
UpdateProgress(new ResumableUploadProgress(ex, BytesServerReceived));
}
return Progress;
}
These are the Fiddler records showing the upload resuming.
After a fair bit of deliberation, I've decided to modify the API code. My solution maintains backwards compatibility.
I've documented my changes below but I don't recommend using them.
In the UploadAsync() method in the ResumableUpload Class in "Google.Apis.Upload", I replaced this code.
BytesServerReceived = 0;
UpdateProgress(new ResumableUploadProgress(UploadStatus.Starting, 0));
// Check if the stream length is known.
StreamLength = ContentStream.CanSeek ? ContentStream.Length : UnknownSize;
UploadUri = await InitializeUpload(cancellationToken).ConfigureAwait(false);
with this code
UpdateProgress(new ResumableUploadProgress(
BytesServerReceived == 0 ? UploadStatus.Starting : UploadStatus.Resuming, BytesServerReceived));
StreamLength = ContentStream.CanSeek ? ContentStream.Length : UnknownSize;
if (UploadUri == null) UploadUri = await InitializeUpload(cancellationToken).ConfigureAwait(false);
I also made the UploadUri and BytesServerReceived properties public. This allows an upload to be continued after the ResumableUpload object has been destroyed or after an application restart.
You just recreate the ResumableUpload as per normal, set these two fields and call UploadAsync() to resume an upload. Both fields need to be saved during the original upload.
public Uri UploadUri { get; set; }
public long BytesServerReceived { get; set; }
I also added "Resuming" to the UploadStatus enum in the IUploadProgress class.
public enum UploadStatus
{
/// <summary>
/// The upload has not started.
/// </summary>
NotStarted,
/// <summary>
/// The upload is initializing.
/// </summary>
Starting,
/// <summary>
/// Data is being uploaded.
/// </summary>
Uploading,
/// <summary>
/// Upload is being resumed.
/// </summary>
Resuming,
/// <summary>
/// The upload completed successfully.
/// </summary>
Completed,
/// <summary>
/// The upload failed.
/// </summary>
Failed
};
Nothing has changed for starting an upload.
Provided the ResumableUpload Oject and streams have not been destroyed, call UploadAsync() again to resume an interrupted upload.
If they have been destroyed, create new objects and set the UploadUri and BytesServerReceived properties. These two properties can be saved during the original upload. The video details and content stream can be configured as per normal.
These few changes allow an upload to be resumed even after restarting your application or rebooting. I'm not sure how long before an upload expires but I'll report back when I've done some more testing with my real application.
Just for completeness, this is the test code I've been using, which happily resumes an interrupted upload after restarting the application multiple times during an upload. The only difference between resuming and restarting, is setting the UploadUri and BytesServerReceived properties.
resumableUploadTest = youTubeService.Videos.Insert(video, "snippet,status,contentDetails", fileStream, "video/*");
if (resume)
{
resumableUploadTest.UploadUri = Settings.Default.UploadUri;
resumableUploadTest.BytesServerReceived = Settings.Default.BytesServerReceived;
}
resumableUploadTest.ChunkSize = ResumableUpload<Video>.MinimumChunkSize;
resumableUploadTest.ProgressChanged += resumableUpload_ProgressChanged;
resumableUploadTest.UploadAsync();
I hope this helps someone. It took me much longer than expected to work it out and I'm still hoping I've missed something simple. I messed around for ages trying to add my own error handlers but the API does all that for you. The API does recover from minor short hiccups but not from an application restart, reboot or prolonged outage.
Cheers. Mick.
This issue has been resolved in version "1.8.0.960-rc" of the Google.Apis.YouTube.v3 Client Library.
They've added a new method called ResumeAsync and it works fine. I wish I'd known they were working on it.
One minor issue I needed to resolve was resuming an upload after restarting the application or rebooting. The current api does not allow for this but two minor changes resolved the issue.
I added a new signature for the ResumeAsync method, which accepts and sets the original UploadUri. The StreamLength property needs to be initialised to avoid an overflow error.
public Task<IUploadProgress> ResumeAsync(Uri uploadUri, CancellationToken cancellationToken)
{
UploadUri = uploadUri;
StreamLength = ContentStream.CanSeek ? ContentStream.Length : UnknownSize;
return ResumeAsync(cancellationToken);
}
I also exposed the getter for UploadUri so it can be saved from the calling application.
public Uri UploadUri { get; private set; }
I've managed to get this to work using reflection and avoided the need to modify the API at all. For completeness, I'll document the process but it isn't recommended. Setting private properties in the resumable upload object is not a great idea.
When your resumeable upload object has been destroyed after an application restart or reboot, you can still resume an upload using version "1.8.0.960-rc" of the Google.Apis.YouTube.v3 Client Library.
private static void SetPrivateProperty<T>(Object obj, string propertyName, object value)
{
var propertyInfo = typeof(T).GetProperty(propertyName, BindingFlags.NonPublic | BindingFlags.Instance);
if (propertyInfo == null) return;
propertyInfo.SetValue(obj, value, null);
}
private static object GetPrivateProperty<T>(Object obj, string propertyName)
{
if (obj == null) return null;
var propertyInfo = typeof(T).GetProperty(propertyName, BindingFlags.NonPublic | BindingFlags.Instance);
return propertyInfo == null ? null : propertyInfo.GetValue(obj, null);
}
You need to save the UploadUri during the ProgressChanged event.
Upload.ResumeUri = GetPrivateProperty<ResumableUpload<Video>>(InsertMediaUpload, "UploadUri") as Uri;
You need to set the UploadUri and StreamLength before calling ResumeAsync.
private const long UnknownSize = -1;
SetPrivateProperty<ResumableUpload<Video>>(InsertMediaUpload, "UploadUri", Upload.ResumeUri);
SetPrivateProperty<ResumableUpload<Video>>(InsertMediaUpload, "StreamLength", fileStream.CanSeek ? fileStream.Length : Constants.UnknownSize);
Task = InsertMediaUpload.ResumeAsync(CancellationTokenSource.Token);
I've got a problem with the infamous message "The thread xxx has exited with code 0 (0x0)".
In my code I have a main class called "Load" that starts with a Windows Form load event:
public class Load
{
public Load()
{
Device[] devices = GetDevices(); // Get an array of devices from an external source
for (int i = 0; i < devices.Length; i++)
{
DeviceDiagnosticCtrl deviceDiagnostic = new DeviceDiagnosticCtrl(devices[i].name);
}
}
}
Inside the constructor, for each generic device read from an external source, I initialize a custom diagnostic class that runs a thread:
public class DeviceDiagnosticCtrl
{
private Thread diagnosticController;
private volatile bool diagnosticControllerIsRunning = false;
public DeviceDiagnosticCtrl(string _name)
{
// Thread initialization
this.diagnosticController = new Thread(new ThreadStart(this.CheckDiagnostic));
this.diagnosticController.Start();
this.diagnosticControllerIsRunning = true;
}
private void CheckDiagnostic()
{
while (this.diagnosticControllerIsRunning)
{
try
{
// Custom 'Poll' message class used to request diagnostic to specific device
Poll poll = new Poll();
// Generic Message result to diagnostic request
IGenericMessage genericResult;
// Use a custom driver to send diagnostic request
SendSyncMessageResult res = this.customDriver.SendSyncMessage(poll, out genericResult);
switch (res)
{
case SendSyncMessageResult.GOOD:
{
// Log result
}
break;
case SendSyncMessageResult.EXCEPTION:
{
// Log result
}
break;
}
Thread.Sleep(this.customDriver.PollScantime);
}
catch (Exception ex)
{
// Log exception
}
}
}
}
When I run the above code in debug mode I always read 8 devices from external source, and for each of them I continuously run a managed thread to retrieve diagnostic.
My problem is that randomly one or more of the 8 threads I expect from the code above exit with code 0, without any exception.
I've started and restarted the code in debug mode many times, and almost every time one of the threads exits.
I've read somewhere (i.e. this SO question) that it could depend on garbage collector action, but I'm not sure that this is my case - or how to prevent it.
Do someone see something strange/wrong in the sample code I posted above? Any suggestion?
'while (this.diagnosticControllerIsRunning)' is quite likely to fail immediately, in which case the thread drops out. It's no good starting the thread and THEN setting 'this.diagnosticControllerIsRunning = true;' - you're quite likely to be too late.
Bolt/stable-door. Something like:
do
{
    // lengthy stuff with Sleep() in it
}
while (this.diagnosticControllerRun);
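Applied to the constructor from the question, a hedged version with the ordering fixed looks like this:
// Set the flag before starting the thread; otherwise CheckDiagnostic can
// run, observe 'false', and exit before the constructor assigns 'true'.
this.diagnosticControllerIsRunning = true;
this.diagnosticController = new Thread(new ThreadStart(this.CheckDiagnostic));
this.diagnosticController.Start();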
Copied from Here
Right click in the Output window when you're running your program and
uncheck all of the messages you don't want to see (like Thread Exit
messages).
I am developing a .NET application where I am using the FileSystemWatcher class with its Created event attached to a folder. I have to act on this event (i.e. copy the file to some other location). When I put a large file into the watched folder, the event is raised immediately, even though the file copy process has not yet completed. I don't want to check this with the file open method.
Is there any way to get notified that the file copy into the watch folder has completed, so that only then does my event fire?
It is indeed a bummer that FileSystemWatcher (and the underlying ReadDirectoryChangesW API) provide no way to get notified when a new file has been fully created.
The best and safest way around this that I've come across so far (and that doesn't rely on timers) goes like this:
Upon receiving the Created event, start a thread that, in a loop, checks whether the file is still locked (using an appropriate retry interval and maximum retry count). The only way to check whether a file is locked is to try to open it with exclusive access: if that succeeds (no IOException is thrown), the file is done copying, and your thread can raise an appropriate event (e.g. FileCopyCompleted).
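A minimal sketch of that loop; FileCopyCompleted, maxRetries and retryInterval are made-up names, not from the question:
// requires System.IO and System.Threading
class CopyWatcher
{
    public event FileSystemEventHandler FileCopyCompleted;
    const int maxRetries = 60;
    static readonly TimeSpan retryInterval = TimeSpan.FromSeconds(1);

    public void OnCreated(object sender, FileSystemEventArgs e)
    {
        var worker = new Thread(() =>
        {
            for (int attempt = 0; attempt < maxRetries; attempt++)
            {
                try
                {
                    // Exclusive open only succeeds once the writer is done.
                    using (File.Open(e.FullPath, FileMode.Open,
                                     FileAccess.Read, FileShare.None)) { }
                    var handler = FileCopyCompleted;
                    if (handler != null) handler(this, e);
                    return;
                }
                catch (IOException)
                {
                    Thread.Sleep(retryInterval); // still locked; wait and retry
                }
            }
        });
        worker.IsBackground = true;
        worker.Start();
    }
}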
I have had the exact same problem, and solved it this way:
Set FileSystemWatcher to notify when files are created and when they are modified.
When a notification comes in:
a. If there is no timer set for this filename (see below), set a timer to expire in a suitable interval (I commonly use 1 second).
b. If there is a timer set for this filename, cancel the timer and set a new one to expire in the same interval.
When a timer expires, you know that the associated file has been created or modified and then left untouched for the time interval. This means the copy/modify is probably done and you can now process it; a sketch of this scheme follows below.
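A hedged sketch of that per-file debounce using System.Threading.Timer; the one-second settle interval and the ProcessFile callback are placeholders:
// requires System.Collections.Generic, System.IO and System.Threading
class SettleWatcher
{
    readonly object sync = new object();
    readonly Dictionary<string, Timer> timers = new Dictionary<string, Timer>();
    static readonly TimeSpan settle = TimeSpan.FromSeconds(1);

    public SettleWatcher(string path)
    {
        var watcher = new FileSystemWatcher(path);
        watcher.Created += OnTouched;
        watcher.Changed += OnTouched;
        watcher.EnableRaisingEvents = true;
    }

    void OnTouched(object sender, FileSystemEventArgs e)
    {
        lock (sync)
        {
            Timer existing;
            if (timers.TryGetValue(e.FullPath, out existing))
                existing.Change(settle, Timeout.InfiniteTimeSpan); // reset the clock
            else
                timers[e.FullPath] = new Timer(
                    _ => OnSettled(e.FullPath), null, settle, Timeout.InfiniteTimeSpan);
        }
    }

    void OnSettled(string path)
    {
        lock (sync)
        {
            timers[path].Dispose();
            timers.Remove(path);
        }
        ProcessFile(path); // untouched for the settle interval; probably done
    }

    static void ProcessFile(string path) { /* copy or process the file here */ }
}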
You could listen for the modified event, and start a timer. If the modified event is raised again, reset the timer. When the timer has reached a certain value without the modify event being raised you can try to perform the copy.
I subscribe to the Changed- and Renamed-event and try to rename the file on every Changed-event catching the IOExceptions. If the rename succeeds, the copy has finished and the Rename-event is fired only once.
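A minimal sketch of that rename probe; the .ready suffix is made up, and the consumer would watch for the Renamed event:
void OnChanged(object sender, FileSystemEventArgs e)
{
    try
    {
        // The rename only succeeds once the writer has closed the file,
        // so the Renamed event fires exactly once, when the copy is done.
        File.Move(e.FullPath, e.FullPath + ".ready");
    }
    catch (IOException)
    {
        // still being written; a later Changed event will retry
    }
}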
There are three issues with FileSystemWatcher. The first is that it can send out duplicate creation events, so you check for that with something like:
this.watcher.Created += (s, e) =>
{
if (!this.seen.ContainsKey(e.FullPath)
|| (DateTime.Now - this.seen[e.FullPath]) > this.seenInterval)
{
this.seen[e.FullPath] = DateTime.Now;
ThreadPool.QueueUserWorkItem(
this.WaitForCreatingProcessToCloseFileThenDoStuff, e.FullPath);
}
};
where this.seen is a Dictionary<string, DateTime> and this.seenInterval is a TimeSpan.
Next, you have to wait for the file's creator to finish writing it (the issue raised in the question). And third, you must be careful, because sometimes the file-creation event is raised before the file can be opened, giving you a FileNotFoundException - but the file can also be removed before you can get hold of it, which likewise gives a FileNotFoundException.
private void WaitForCreatingProcessToCloseFileThenDoStuff(object threadContext)
{
// Make sure the just-found file is done being
// written by repeatedly attempting to open it
// for exclusive access.
var path = (string)threadContext;
DateTime started = DateTime.Now;
DateTime lastLengthChange = DateTime.Now;
long lastLength = 0;
var noGrowthLimit = new TimeSpan(0, 5, 0);
var notFoundLimit = new TimeSpan(0, 0, 1);
for (int tries = 0;; ++tries)
{
try
{
using (var fileStream = new FileStream(
path, FileMode.Open, FileAccess.ReadWrite, FileShare.None))
{
// Do Stuff
}
break;
}
catch (FileNotFoundException)
{
// Sometimes the file appears before it is there.
if (DateTime.Now - started > notFoundLimit)
{
// Should be there by now
break;
}
}
catch (IOException ex)
{
// mask in severity, customer, and code
var hr = (int)(ex.HResult & 0xA000FFFF);
if (hr != 0x80000020 && hr != 0x80000021)
{
// not a share violation or a lock violation
throw;
}
}
try
{
var fi = new FileInfo(path);
if (fi.Length > lastLength)
{
lastLength = fi.Length;
lastLengthChange = DateTime.Now;
}
}
catch (Exception)
{
    // ignore transient errors while checking the file length
}
// still locked
if (DateTime.Now - lastLengthChange > noGrowthLimit)
{
// 5 minutes, still locked, no growth.
break;
}
Thread.Sleep(111);
}
You can, of course, set your own timeouts. This code leaves enough time for a 5 minute hang. Real code would also have a flag to exit the thread if requested.
This answer is a bit late, but if possible I'd get the source process to copy a small marker file after the large file or files and use the FileWatcher on that.
Try setting the NotifyFilter:
myWatcher.NotifyFilter = NotifyFilters.LastAccess | NotifyFilters.LastWrite;