I am writing a messaging app in C# that runs off a shared file server on a network. Multiple users run the program, and each instance accesses a file shared between the computers. Hence, I need to use StreamReader/StreamWriter to access the file from multiple programs at once (EDIT: I now know this isn't a good way to do it, but it's what I needed at the time). So how can I access a single file from multiple programs without getting errors about the file being in use?
I think your approach will lead to problems in the future. I'd consider leveraging Redis pub/sub if I were you.
But, since you asked... (I wrote a blog post on this: http://procbits.com/2011/02/18/streamwriter-share-read-access-in-another-process/ )
Generator of chat data:
var fs = File.Open(@"C:\messages.txt", FileMode.Append, FileAccess.Write, FileShare.Read);
var sw = new StreamWriter(fs);
sw.AutoFlush = true;
Somewhere else in your app or another app...
Readers of chat data:
var fs = File.Open(@"C:\messages.txt", FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
var sr = new StreamReader(fs);
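Put together, a runnable sketch of the two halves (the path here is a placeholder; the question used C:\messages.txt on the share):

```csharp
using System;
using System.IO;

// Placeholder path; in the question this would be the shared C:\messages.txt.
var path = Path.Combine(Path.GetTempPath(), $"messages-{Guid.NewGuid():N}.txt");

// Writer: append-only, but FileShare.Read lets other processes read
// while this handle stays open.
using (var wfs = File.Open(path, FileMode.Append, FileAccess.Write, FileShare.Read))
using (var sw = new StreamWriter(wfs) { AutoFlush = true })
{
    sw.WriteLine("hello from writer");

    // Reader: opened while the writer still holds the file.
    // FileShare.ReadWrite is what lets this open succeed despite the writer.
    using var rfs = File.Open(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
    using var sr = new StreamReader(rfs);
    Console.WriteLine(sr.ReadToEnd());
}
```

The key is that each side's FileShare value must permit the access the *other* side has already taken, or the second open throws "file in use".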
If you need a single file where multiple users/programs/entities should read and write without disturbing each other, I would suggest considering (among other solutions) SQLite as a simple DB backend. No installation or service setup is needed. Just use its C# DLLs and, basically, you will get what you need.
One user writes to the db file (INSERT) while another reads (SELECT) from it.
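A minimal sketch of that idea, assuming the Microsoft.Data.Sqlite NuGet package (any ADO.NET SQLite provider works the same way; the file name is a placeholder):

```csharp
using System;
using Microsoft.Data.Sqlite;

// The .db file would live on the shared location instead of locally.
var cs = "Data Source=messages.db";

using var conn = new SqliteConnection(cs);
conn.Open();

using (var create = conn.CreateCommand())
{
    create.CommandText = "CREATE TABLE IF NOT EXISTS messages (id INTEGER PRIMARY KEY, body TEXT)";
    create.ExecuteNonQuery();
}

// Writer side: INSERT a message.
using (var insert = conn.CreateCommand())
{
    insert.CommandText = "INSERT INTO messages (body) VALUES ($body)";
    insert.Parameters.AddWithValue("$body", "hello");
    insert.ExecuteNonQuery();
}

// Reader side: SELECT (in practice another process with its own connection).
using (var select = conn.CreateCommand())
{
    select.CommandText = "SELECT body FROM messages";
    using var reader = select.ExecuteReader();
    while (reader.Read())
        Console.WriteLine(reader.GetString(0));
}
```

SQLite handles the file locking for you at the database level, so your code never has to juggle FileShare flags.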
I think you should think twice about using a text file as a means of communication between peers.
It's like asking for trouble.
Please take a look at using a P2P solution instead:
Peer Channel Chat
A simple peer to peer chat application using WCF netPeerTcpBinding
That will give you a much more fitting architecture for your requirements.
I have a desktop application and below is the flow that is to be followed.
During app initialization, an API should be hit and an Excel file downloaded to a shared location. After the download is complete the app should read the Excel file. This wouldn't be a problem with a single instance of the app running. But since this is a desktop app, multiple instances (on different computers) are run, and each instance downloads the file during initialization. I'm using the OLE DB engine to read the file, and the file is being locked: there's an error, "The ole db engine cannot read the file because it is opened by another user", while another instance of the app is open. How do I prevent this?
if (response.Result.IsSuccessStatusCode)
{
    using (Stream streamToWriteTo = new FileStream(pathToDownloadReport, FileMode.Create, FileAccess.Write, FileShare.Read))
    {
        response.Result.Content.CopyToAsync(streamToWriteTo).Wait();
    }
}
If you want concurrent access to a file, you need to make sure every client takes only a read lock on it. With OleDb you can open the file with read-only access via the connection string:
"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=\\path\to\your\excelfile.xlsx;Extended Properties=\"Excel 12.0;IMEX=1;ReadOnly=true;\""
You still need to make sure no one opens the file in Excel.
Since you only can have read-only access to your file, you might as well make a local copy of the file and open that instead. That way locking won't be a problem.
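The local-copy approach can be sketched like this (paths are placeholders, and a temp file stands in for the network share; the OleDb read at the end is only indicated in a comment):

```csharp
using System;
using System.IO;

// Stand-in for the shared workbook, e.g. \\server\share\report.xlsx.
var shared = Path.Combine(Path.GetTempPath(), $"report-{Guid.NewGuid():N}-src.xlsx");
File.WriteAllText(shared, "fake workbook");

// Unique local copy per app instance, so instances never contend.
var local = Path.Combine(Path.GetTempPath(), $"report-{Guid.NewGuid():N}.xlsx");

// Copy with a read-only, shared stream so other instances can copy at the same time.
using (var src = new FileStream(shared, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
using (var dst = File.Create(local))
{
    src.CopyTo(dst);
}

// Now point the OleDb connection string's Data Source at `local`;
// the file on the share is never held open by the OLE DB engine.
Console.WriteLine(File.Exists(local));
```

Each instance reads its own copy, so the "opened by another user" error cannot occur against the shared original.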
I'm developing a progress tracking and monitoring type of application (C# .NET 4.5). A single file gets both written and read from a network location.
I'm having trouble (unresponsive UI / crashes) reading and writing that file in cases such as:
if the network location is momentarily not responding,
if the network location is reached over the internet and there is considerable lag,
at startup, while the client firewall kicks in and grants delayed access to network resources.
So I'm in need of a more robust way of reading and writing than:
using (StreamWriter wfile = File.AppendText(path))
{
//...
}
using (StreamReader rfile = new StreamReader(path))
{
//...
}
Async methods seem to cause conflicts between the reader and writer threads. What is the best practice, and what are your suggestions on this issue? Thanks
You need a service to resolve this, where the service does the reading and writing of the file; Windows does not handle such conflicts for you. If you don't want to create a service, use a database like SQL Server, which resolves the conflicts automatically. (By the way, SQL Server is itself a service.)
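Whatever sits behind the share, wrapping the file operations in a bounded retry loop keeps a momentary network hiccup from crashing the app. A minimal sketch (retry counts, delays and the path are arbitrary placeholders):

```csharp
using System;
using System.IO;
using System.Threading;

// Retry an I/O action a few times with a simple linear backoff
// before letting the exception propagate.
static void RetryIO(Action action, int attempts = 5, int delayMs = 200)
{
    for (int i = 1; ; i++)
    {
        try { action(); return; }
        catch (IOException) when (i < attempts)
        {
            Thread.Sleep(delayMs * i);
        }
    }
}

var path = Path.Combine(Path.GetTempPath(), $"progress-{Guid.NewGuid():N}.log");

// Writes and reads go through the retry wrapper instead of failing outright.
RetryIO(() => File.AppendAllText(path, "step 1 done" + Environment.NewLine));
string text = "";
RetryIO(() => text = File.ReadAllText(path));
Console.WriteLine(text);
```

Run these calls off the UI thread (e.g. inside Task.Run) so a slow share blocks a worker thread rather than the interface.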
I have a web services project. I'm trying to write a log for each method using StreamWriter; on my local machine everything works fine.
Something like this:
static StreamWriter sw;
try
{
    if (File.Exists(Directorio + FILE_NAME))
    {
        sw = File.AppendText(Directorio + FILE_NAME);
    }
    else
    {
        sw = File.CreateText(Directorio + FILE_NAME);
        sw.WriteLine("---LOG ");
    }
    sw.WriteLine(Date);
    sw.WriteLine(Header);
    sw.WriteLine();
    sw.Close();
}
catch (Exception) { }
But when it's uploaded to the server, it sometimes throws an error saying it can't write because the file is in use. But I close it every time, and I thought that with the try/catch it would ignore that part and continue with the method, because I don't want to affect the processing of each method.
I know this is little information and I can't reproduce my problem here, but I hope that someone who has had an error like this could give me a hint.
Web servers typically handle multiple requests at once. The occasional error is most likely due to one request trying to log while another request is already logging, and not yet done.
If you wish to use your own logging framework, you will need to coordinate writes to the file.
You could also use one of the exceptional, open-source logging frameworks such as NLog.
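Coordinating the writes can be as simple as serializing them behind a process-wide lock and opening/closing the file inside it. A minimal sketch (path and message format are placeholders):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

var logPath = Path.Combine(Path.GetTempPath(), $"service-{Guid.NewGuid():N}.log");
var logLock = new object();

// Only one request thread appends at a time, so the file is never
// "in use" by a competing writer within this process.
void Log(string message)
{
    lock (logLock)
    {
        File.AppendAllText(logPath, $"{DateTime.UtcNow:O} {message}{Environment.NewLine}");
    }
}

// Simulate concurrent requests all logging at once.
Parallel.For(0, 20, i => Log($"request {i}"));
Console.WriteLine(File.ReadAllLines(logPath).Length); // prints 20
```

Note the lock only serializes writers inside one process; if the site runs in several worker processes, you'd need a named Mutex, or a framework like NLog that handles this for you.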
This could be due to multiple requests coming into the web server: one request trying to write to the log file while another is trying to open it. A possible fix is thread synchronization, though done naively it can degrade the performance of the web service. Alternatively, I'd recommend using NLog (http://nlog-project.org/); I've used it in several projects without any issues.
I have a question regarding the FTP library in C#. I need to download 9000 txt files from an FTP server. station.ToUpper() is the file name, so for every file I need a new FTP connection. One file takes around one second, and each txt file contains only two lines, so all the files together take around an hour and a half. Is there a better / faster solution?
// Get the object used to communicate with the server.
FtpWebRequest request = (FtpWebRequest)WebRequest.Create(ftpAddress + station.ToUpper());
//request.UsePassive = false;
request.Method = WebRequestMethods.Ftp.DownloadFile;
// This example assumes the FTP site uses anonymous logon.
request.Credentials = new NetworkCredential("anonymous", "janeDoe@contoso.com");
FtpWebResponse response = (FtpWebResponse)request.GetResponse();
Stream responseStream = response.GetResponseStream();
StreamReader reader = new StreamReader(responseStream);
There's not much you're doing wrong in this code, except for the fact you're not calling Dispose() on your streams or response objects. Do that first to make sure you're not somehow running out of resources on the client or something.
Other than that, you don't have too many options here, and a lot depends upon what you can do on the server side.
First, you could try to use threading to download a bunch of files at once. You'll need to experiment with how this affects your throughput. It will probably scale linearly for a while, then fall off. If you open up too many connections, you could anger the maintainer of the server, or it could start denying you connections. Be conservative.
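A sketch of that threaded approach, throttled with a SemaphoreSlim (the server address, credentials and station list are placeholders; keep the concurrency limit conservative):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Threading;
using System.Threading.Tasks;

var stations = new List<string> { "ABC", "DEF", "GHI" };   // 9000 in reality
var limiter  = new SemaphoreSlim(4);                        // max parallel connections

var tasks = new List<Task>();
foreach (var station in stations)
{
    tasks.Add(Task.Run(async () =>
    {
        await limiter.WaitAsync();                          // throttle connection count
        try
        {
            var request = (FtpWebRequest)WebRequest.Create("ftp://example.com/" + station.ToUpper());
            request.Method = WebRequestMethods.Ftp.DownloadFile;
            request.Credentials = new NetworkCredential("anonymous", "janeDoe@contoso.com");

            using var response = (FtpWebResponse)await request.GetResponseAsync();
            using var reader = new StreamReader(response.GetResponseStream());
            File.WriteAllText(station + ".txt", await reader.ReadToEndAsync());
        }
        finally { limiter.Release(); }
    }));
}
await Task.WhenAll(tasks);
```

With tiny two-line files, most of the time per file is connection overhead, so even four parallel connections should cut the total wall-clock time substantially.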
Optimally, the files would be zipped (.ZIP or .TGZ) on the server. This will likely not be an option if you don't have more control over the process.
Use the MGET command to avoid re-establishing a connection each time. The System.Net client does not support MGET, so you would have to use a third party library or script ftp.exe. Regardless of the client you choose, the FTP log would look like the following:
USER anonymous
PASS janeDoe@contoso.com
CWD path/to/file
// to get 3 named files
MGET file1.txt file2.txt file3.txt
// or to get all files matching a pattern
MGET *.txt
The file transfers will use the same control session, avoiding login and other network overhead.
One library which could be of interest is FTPLib, which avoids tearing down the channel on each command. Be careful, however: FTPLib is based on WinInet, which is not allowed for use in an NT service.
I would also take a look at LumiSoft, an open source project with a friendly license, and DotNetFtpLib, though I have used neither and cannot speak to their stability or featureset. On the scripting side, take a look at "Using FTP Batch Scripts".
So, I'm actually trying to setup a Wopi Host for a Web project.
I've been working with this sample (the one from Shawn Cicoria, if anyone knows this), and he provides a whole code sample which tells you how to build the links to use your Office Web App servers with some files.
My problem here is that his sample works with files that are ON the OWA server, and I need it to work with online files (like http://myserv/res/test.docx). So when he reads his file content, he's using this:
var stream = new FileStream(myFile, FileMode.Open, FileAccess.Read);
responseMessage.Content = new StreamContent(stream);
But that doesn't work on "http" files, so I changed it to this:
byte[] tmp;
using (WebClient client = new WebClient())
{
client.Credentials = CredentialCache.DefaultNetworkCredentials;
tmp = client.DownloadData(name);
}
responseMessage.Content = new ByteArrayContent(tmp);
which does compile. With this sample, I managed to open Excel files in my Office Web App, but Word and PowerPoint files aren't opened. So, here's my question:
Is there a difference between these two methods which could alter the content of the files I'm reading, despite the fact that WebClient allows "online reading"?
Sorry for the unclear post, it's not that easy to explain such a problem x) I did my best.
Thanks for your help!
Is there a difference between these two methods, which could alter the content of the files that I'm reading, despite the fact that the WebClient allows "online reading"
FileStream opens a handle to a file located on a local disk, or on a remote disk elsewhere on the network. When you open a FileStream, you're directly manipulating that particular file.
WebClient, on the other hand, is a wrapper around the HTTP protocol. Its responsibility is to construct HTTP request and response messages, allowing you to work with them conveniently. It has no direct knowledge of a resource such as a file, or of where it's located. All it knows is how to construct a message complying with the specification, send a request, and expect a response.
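If the byte-array round trip turns out to matter, one alternative worth trying is streaming the HTTP body straight through with HttpClient instead of buffering it with WebClient. A sketch, with `name` and `responseMessage` standing in for the variables from the question:

```csharp
using System.Net.Http;

var name = "http://myserv/res/test.docx";       // placeholder URL from the question
var responseMessage = new HttpResponseMessage(); // stands in for the WOPI response

using var client = new HttpClient();
var upstream = await client.GetAsync(name, HttpCompletionOption.ResponseHeadersRead);
upstream.EnsureSuccessStatusCode();

// Stream the body through instead of buffering it in a byte[],
// so the bytes reach the WOPI client untouched and unbuffered.
responseMessage.Content = new StreamContent(await upstream.Content.ReadAsStreamAsync());
```

This keeps the behavior closest to the original FileStream + StreamContent version while still supporting "online" files.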