Code gets stuck at Microsoft.SharePoint.Client.File.OpenBinaryDirect method - C#

I am trying to download a file via its ServerRelativeUrl using the OpenBinaryDirect method, and it generally works fine. But often, irrespective of size, extension, or any other file metadata, it just gets stuck.
Please see the code below. It hangs at one of two lines (more often at ExecuteQuery()) and throws an operation-timeout error. I later tried setting timeouts (the RequestTimeout and WriteTimeout lines below), but then it hung indefinitely until the main thread was killed.
clientContext.RequestTimeout = -1;
FileInformation fileInfo = Microsoft.SharePoint.Client.File.OpenBinaryDirect(clientContext, file.ServerRelativeUrl);
clientContext.ExecuteQuery();
using (var fileStream = new FileStream(location, FileMode.Create))
{
    fileInfo.Stream.WriteTimeout = -1;
    fileInfo.Stream.CopyTo(fileStream);
}
fileInfo.Stream.Dispose();
The size of the file in Explorer remains 0 KB.
Can anyone please help me out with this?
(Screenshot of the error: https://i.stack.imgur.com/bKWau.png)

I may have had a similar problem: I got timeouts after downloading a lot of files. I solved this by disposing the fileInfo object, putting it into a using statement:
using (var fileInfo = File.OpenBinaryDirect(_context, relativeUrl))
{
    if (Directory.Exists(targetDirectory))
    {
        targetDirectory = Path.Combine(targetDirectory,
            Path.GetFileName(new Uri(relativeUrl).LocalPath));
    }
    using (Stream destination = System.IO.File.Create(targetDirectory))
    {
        // Copy the SharePoint stream to disk byte by byte.
        for (var a = fileInfo.Stream.ReadByte(); a != -1; a = fileInfo.Stream.ReadByte())
            destination.WriteByte((byte)a);
    }
}
After that it worked. It seems that if the object is not disposed, you run into a connection limit.

I just migrated ~100 files from on-premises to SharePoint Online using the same method.
Some differences in my solution:
clientContext.ExecuteQuery();
is only required to ensure file.ServerRelativeUrl is initialized, so you can remove it from this position and load the property beforehand.
As I had to download and upload multiple dependent files, I copied each SharePoint binary stream into a MemoryStream with async methods and disposed the SP streams as fast as possible; I think there is a connection limit.
I disposed the fileInfo itself instead of the inner Stream object. (A minimal sketch of this ordering follows below.)
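A minimal sketch of that reordering, assuming it runs inside an async method (the variable names clientContext and file follow the question; the later re-upload from the MemoryStream is omitted):
// Load ServerRelativeUrl up front, in its own round trip.
clientContext.Load(file, f => f.ServerRelativeUrl);
clientContext.ExecuteQuery();

// Copy the SharePoint stream into memory and dispose the SP object
// as quickly as possible, to stay under the connection limit.
var buffer = new MemoryStream();
using (var fileInfo = Microsoft.SharePoint.Client.File.OpenBinaryDirect(clientContext, file.ServerRelativeUrl))
{
    await fileInfo.Stream.CopyToAsync(buffer);
}
buffer.Position = 0; // rewind before uploading the copy elsewhere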

Related

Returning a file download in async in EmbedIO (async and disposing of streams)

While this particular question is about EmbedIO and FileResponseAsync, it probably has more to do with the handling of streams and async Tasks in C#.
I am using EmbedIO (I needed a quick-and-dirty web server, and so far it has worked like a charm; however, the lack of documentation is a bit frustrating), but I am attempting to return a file with the following code:
var file = new FileInfo(Path.Combine(FileLocations.TemplatePath, templateFile.FilePath));
string fileExtension = Path.GetExtension(templateFile.FilePath);
return this.FileResponseAsync(file, MimeTypes.DefaultMimeTypes.Value.ContainsKey(fileExtension)
    ? MimeTypes.DefaultMimeTypes.Value[fileExtension]
    : "application/octet-stream");
I get the following error:
Message
Failing module name: Web API Module
Cannot access a closed file.
Stack Trace
at System.IO.__Error.FileNotOpen()
Which makes sense, since in the EmbedIO code FileResponseAsync looks like:
using (FileStream fileStream = file.OpenRead())
    return context.BinaryResponseAsync(fileStream, ct, useGzip);
and the FileStream will be disposed as soon as BinaryResponseAsync returns its Task, not when that Task completes. I've solved the problem by changing my code to not dispose of the FileStream:
var fileStream = file.OpenRead();
return this.BinaryResponseAsync(fileStream);
While this works, it seems wrong to rely on garbage collection to dispose of these streams at some later date. How are resources like this (not only in EmbedIO, but in this modern async world) supposed to be handled?
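One common pattern is to make the wrapping method async and await the response inside the using block, so disposal happens only after the write has completed. A minimal sketch, assuming BinaryResponseAsync returns a Task<bool> as EmbedIO module handlers typically do (SendFileAsync is a hypothetical wrapper name):
// Awaiting inside the using keeps the stream alive until the
// response has been fully written; Dispose then runs afterwards.
public async Task<bool> SendFileAsync(FileInfo file)
{
    using (FileStream fileStream = file.OpenRead())
    {
        return await this.BinaryResponseAsync(fileStream);
    }
}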

IO Exception in C#

In my C# application, developed in Visual Studio 2012, I created a file with this command:
System.IO.File.Create("config.conf");
After that, on the next line, I want to use the file with this command:
System.IO.StreamReader rd = new System.IO.StreamReader("config.conf");
But I get this exception:
"The process cannot access the file '\config.conf' because it is being used by another process."
I used Thread.Sleep(2000) to make the application wait, but it still doesn't help.
I would appreciate any help.
File.Create creates the file and returns a FileStream holding the file open.
You can do this:
System.IO.File.Create("config.conf").Dispose();
by disposing of the returned stream object, you close the file.
Or you can do this:
using (var stream = File.Create("config.conf"))
using (var rd = new StreamReader(stream))
{
    // ... rest of your code here
}
Additionally, since disposing of the StreamReader will also dispose of the underlying stream, you can reduce this to just:
using (var rd = new StreamReader(File.Create("config.conf")))
{
    // ... rest of your code here
}
Final question: Why are you opening a newly created stream for reading? It will contain nothing, so there's nothing to read.
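If the goal is to create a default config on first run and then read it back, here is a hedged sketch (the Exists check and the default content are assumptions, not from the question):
// Create the file with default content only if it doesn't exist yet;
// File.WriteAllText opens, writes, and closes the file itself.
if (!System.IO.File.Exists("config.conf"))
{
    System.IO.File.WriteAllText("config.conf", "# default settings");
}
using (var rd = new System.IO.StreamReader("config.conf"))
{
    string contents = rd.ReadToEnd();
}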
using (var conf = System.IO.File.Create("config.conf"))
{
    using (var rd = new System.IO.StreamReader(conf))
    {
        // Do whatever you want to do with the file here
    }
}
The problem is that File.Create returns a stream to the file. That is: The file is already opened for you!
You could do this:
using (System.IO.StreamReader rd = new System.IO.StreamReader(System.IO.File.Create("config.conf")))
{
    // ...
}
By the way, this does not really make sense. What do you expect an empty, newly created file to contain?
When working with files, it is always a good idea to dispose of the file once you are done.
This can be done with two different techniques; the most popular one is a using statement:
using (FileStream fileStream = File.Create(fileNamePath))
{
    // insert logic here, for example:
    fileStream.SetLength(fileSize);
}
The other is calling the .Dispose() method directly.
Close the file if it is opened in notepad or something similar.

Using C# File class to edit a file, but it throws an exception

I am using the File class to edit an HTML file. I need to delete a line of code from it. The way I am doing it is:
if (selectedFileType.Equals("html"))
{
    string contentsOfHtml = File.ReadAllText(paramExportFilePath);
    // delete the part that I don't want
    string deletedElement = "string I need to delete";
    contentsOfHtml = contentsOfHtml.Replace(deletedElement, "");
    File.WriteAllText(paramExportFilePath, contentsOfHtml);
}
However, it throws the exception: The process cannot access the file 'path\to\file.html' because it is being used by another process.
I am worried this is happening because either File.ReadAllText or File.WriteAllText is keeping the file open, even though the documentation specifies that both close the file. Does anyone know what could be causing this?
If this is a file on a live site, then there's a good chance the web server has a lock on it.
Assuming you're working in Windows, try using Process Explorer to see what has a lock on the file.
Whenever you are dealing with Stream-based objects, you are better off wrapping them in using statements:
String s1;
using (StreamReader r = new StreamReader(paramExportFilePath, Encoding.ASCII))
{
    s1 = r.ReadToEnd();
}
String s2 = s1.Replace("string to delete", "replacement string");
using (StreamWriter w = new StreamWriter(paramExportFilePath, false, Encoding.ASCII))
{
    w.Write(s2);
}
The using statements ensure that objects are properly closed and, more importantly, disposed.
Note: replace Encoding.ASCII with whatever you like (perhaps UTF8 if it's HTML code).

StreamWriter not writing out the last few characters to a file

We are having an issue with one server and its utilization of the StreamWriter class. Has anyone experienced something similar to the issue below? If so, what was the solution?
using (StreamWriter logWriter = File.CreateText(logFileName))
{
    for (int i = 0; i < 500; i++)
        logWriter.WriteLine("Process completed successfully.");
}
When writing out the file the following output is generated:
Process completed successfully.
... (497 more lines)
Process completed successfully.
Process completed s
I tried adding logWriter.Flush() before closing, without any help. The more lines of text I write out, the more data loss occurs.
I had a very similar issue myself. I found that if I enabled AutoFlush before doing any writes to the stream, it started working as expected:
logWriter.AutoFlush = true;
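Applied to the question's loop, that would look like the sketch below (AutoFlush makes the writer flush after every write, at some cost in throughput):
using (StreamWriter logWriter = File.CreateText(logFileName))
{
    logWriter.AutoFlush = true; // flush after every WriteLine
    for (int i = 0; i < 500; i++)
        logWriter.WriteLine("Process completed successfully.");
}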
Sometimes even when you call Flush(), it won't do the magic, because Flush() writes most of the data in the stream but can leave the last block of its buffer behind. Try:
try
{
    // ... write method
    // (I don't recommend using 'using' for this unmanaged resource)
}
finally
{
    stream.Flush();
    stream.Close();
    stream.Dispose();
}
Cannot reproduce this.
Under normal conditions, this should not and will not fail.
Is this the actual code that fails? The text "Process completed" suggests it's an extract.
Any threading involved?
Network drive or local?
etc.
This certainly appears to be a "flushing" problem to me, even though you say you added a call to Flush(). The problem may be that your StreamWriter is just a wrapper for an underlying FileStream object.
I don't typically use the File.CreateText method to create a stream for writing to a file; I usually create my own FileStream and then wrap it with a StreamWriter if desired. Regardless, I've run into situations where I've needed to call Flush on both the StreamWriter and the FileStream, so I imagine that is your problem.
Try adding the following code:
logWriter.Flush();
if (logWriter.BaseStream != null)
    logWriter.BaseStream.Flush();
In my case, this is what I found with the output file:
Case 1: without Flush() and without Close():
Character length = 23,371,776
Case 2: with Flush() and without Close():
logWriter.Flush();
Character length = 23,371,201
Case 3: when properly closed:
logWriter.Close();
Character length = 23,375,887 (as required)
So, in order to get the proper result, you always need to close the writer instance.
I faced the same problem. The following worked for me:
using (StreamWriter tw = new StreamWriter(@"D:\Users\asbalach\Desktop\NaturalOrder\NatOrd.txt"))
{
    tw.Write(abc.ToString()); // + Environment.NewLine);
}
Using framework 4.6.1 and under heavy stress, it still has this problem. I'm not sure why it does this, though I found a way to solve it very differently (which strengthens my feeling that it is indeed a .NET bug).
In my case I tried to write huge jagged arrays to disk (video caching).
Since the jagged array is quite large, it had to do a lot of repeated writes to store a large set of video frames, and although they were uncompressed and each cache file got exactly 1000 frames, the logged cache files all had different sizes.
I had the problem when I used this:
// Note: generateLogfileName is just a function that creates a filename.
using (FileStream fs = new FileStream(generateLogfileName(), FileMode.OpenOrCreate))
{
    using (StreamWriter sw = new StreamWriter(fs))
    {
        // do your stuff, but it will be unreliable
    }
}
However, when I provided it an Encoding type, all logged files got an equal size, and the problem was gone:
using (FileStream fs = new FileStream(generateLogfileName(), FileMode.OpenOrCreate))
{
    using (StreamWriter sw = new StreamWriter(fs, Encoding.Unicode))
    {
        // all data written correctly, no data lost
    }
}
Note: also read the file back with the same encoding type!
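For example, a matching read might look like this sketch (cacheFilePath is an assumed variable holding the path of a file written above):
// Read the file back with the same encoding it was written with.
using (var sr = new StreamReader(cacheFilePath, Encoding.Unicode))
{
    string contents = sr.ReadToEnd();
}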
This did the trick for me:
streamWriter.Flush();

Where is the leak in my code?

Here is my code, which opens an XML file (old.xml), filters invalid characters, and writes to another XML file (abc.xml). Finally, I load the XML (abc.xml) again. When the following line executes, there is an exception saying the XML file is being used by another process:
xDoc.Load("C:\\abc.xml");
Does anyone have any ideas what is wrong? Are there any leaks in my code, and why? (I am using the "using" keyword all the time, so I'm confused to see leaks...)
Here is my whole code. I am using C# + VSTS 2008 under Windows Vista x64.
// Create an instance of StreamReader to read from a file.
// The using statement also closes the StreamReader.
Encoding encoding = Encoding.GetEncoding("utf-8", new EncoderReplacementFallback(String.Empty), new DecoderReplacementFallback(String.Empty));
using (TextWriter writer = new StreamWriter(new FileStream("C:\\abc.xml", FileMode.Create), Encoding.UTF8))
{
    using (StreamReader sr = new StreamReader("C:\\old.xml", encoding))
    {
        int bufferSize = 10 * 1024 * 1024; // could be anything
        char[] buffer = new char[bufferSize];
        // Read from the file until the end of the file is reached.
        int actualsize = sr.Read(buffer, 0, bufferSize);
        writer.Write(buffer, 0, actualsize);
        while (actualsize > 0)
        {
            actualsize = sr.Read(buffer, 0, bufferSize);
            writer.Write(buffer, 0, actualsize);
        }
    }
}
try
{
    XmlDocument xDoc = new XmlDocument();
    xDoc.Load("C:\\abc.xml");
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
}
EDIT 1: I have tried changing the buffer size from 10 MB to 1 MB, and it works! I am so confused; any ideas?
EDIT 2: I find this issue is very easy to reproduce when the input old XML file is very big, like 100 MB or so. I suspect it may be a known .NET bug. I am going to use tools like Process Explorer / Process Monitor to see which process locks the file and keeps it from being accessed by XmlDocument.Load.
That works fine for me.
Purely a guess, but maybe a virus checker is scanning the file?
To investigate, try disabling your virus checker and see if it works (and then re-enable your virus checker).
As an aside, there is one way it can leave the file open: if the StreamReader constructor throws an exception; but then you won't reach the XmlDocument stuff anyway... but consider:
using (FileStream fs = new FileStream("C:\\abc.xml", FileMode.Create))
using (TextWriter writer = new StreamWriter(fs, Encoding.UTF8))
{
    // ...
}
Now fs is disposed in the edge-case where new StreamWriter(...) throws. However, I do not believe that this is the problem here.
Are you running a FileSystemWatcher on the root, perhaps?
You can also use ProcessMonitor to see who accesses that file.
The problem is your char[], which seems to be too big. If it is too big, it is allocated on the large object heap rather than the normal heap. Since the large object heap is not compacted while the software is running, the once-allocated space there may not be reused, which looks like a memory leak. Try splitting up your array into smaller chunks, as in the sketch below.
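A sketch of that suggestion applied to the question's copy loop (the 32K-char size is an assumption; anything under the ~85,000-byte large-object threshold keeps the array off the LOH):
int bufferSize = 32 * 1024; // 32K chars = 64 KB, small enough to avoid the LOH
char[] buffer = new char[bufferSize];
int actualsize;
while ((actualsize = sr.Read(buffer, 0, bufferSize)) > 0)
{
    writer.Write(buffer, 0, actualsize);
}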
I second Leppie's suggestion to use ProcessMonitor (or equivalent) to see for sure who is locking the file. Anything else is just speculation.
Your buffer isn't being deallocated, is it?
Have you checked that no other process tries to access the file?
Code works fine. Just checked.
using will call Dispose, but will Dispose call Close on the writing stream? If it does not, the system may still consider the file to be open for writing.
I'd try putting in a Close of the writer just before the end of its using block, as sketched below.
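A sketch of that suggestion (the copy loop from the question is elided as a comment):
using (TextWriter writer = new StreamWriter(new FileStream("C:\\abc.xml", FileMode.Create), Encoding.UTF8))
{
    // ... the copy loop from the question ...
    writer.Close(); // explicit close before Dispose runs at the end of the block
}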
Edit: I just tried out the code myself as well. It compiled and ran without the problem you are seeing. Try turning off virus scanners, as others have mentioned, and make sure you don't have a window somewhere with the file open.
The fact that it works for some people and not for others makes me think that the file isn't being closed. Close the writer before trying to load the file.
My bet is that you have some antivirus solution running, which locks the file after it is closed. To verify, try adding a delay (say, 1 second) before loading the file. If that works, you probably found the cause.
Run Process Explorer and make sure it's your program locking the file first.
