I have a multi-threaded application which needs to send a sequence of data to an external device via a serial port. The sequence of data is a typical command/response protocol (i.e. a given thread sends a sequence of bytes, then waits to read a response, which is typically an ACK, and then it might send another sequence).
What we are looking to do is declare that a sequence of code has exclusive access to this resource until it is done, and if another thread wants access to the same external resource, it waits.
This seems like what lock does, but all the examples I have seen show lock being used to protect a specific block of code, not to serialize access to a resource.
So, programmatically, can I have
Object serialPortLock = new Object();
and in different parts of my program use a construct that looks like:
lock (serialPortLock)
{
// my turn to do something that is not the same as what
// someone else wants to do but it acts on the same resource
}
The C# documentation talks about using Mutex as a more robust version of lock. Is that what's required here?
Yes, your pattern is correct, as long as your program is the only software accessing the serial port.
You have not posted your entire code. If the class that contains serialPortLock has multiple instances, then you MUST make serialPortLock static. This is normally best practice anyway.
class MySerialPort
{
static object synchLock = new object();
public void DoSomething()
{
lock (synchLock)
{
// whatever
}
}
}
Locking should work fine in the case you've suggested as long as you are locking around all access to any of the object instances that point at the external resource.
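As a minimal sketch of that pattern (class and method names are hypothetical, and the actual port I/O is stubbed out): the key point is that the lock is held for the whole command/response exchange, not just the write, so another thread cannot interleave its bytes into the conversation.

```csharp
using System;

class SerialDevice
{
    // One static lock object guards the whole exchange, even if the
    // class has multiple instances pointing at the same physical port.
    static readonly object serialPortLock = new object();

    public string SendCommand(string cmd)
    {
        lock (serialPortLock)
        {
            // Write the command, then read the response while still
            // holding the lock, so the exchange is atomic with respect
            // to other threads.
            Write(cmd);
            return ReadResponse();
        }
    }

    void Write(string cmd) { /* port.Write(...) would go here */ }
    string ReadResponse() { return "ACK"; /* port.ReadLine(...) */ }
}
```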
Related
I've received some C# code from a colleague for interacting with a cRIO device connected over Ethernet. I'm trying to improve the code quality to make it a bit more comprehensible for future users; however, I'm struggling a little to extract some relevant information from the API documentation. My main question is whether any problems would be caused by leaving a NetworkVariableManager in the Connected state.
Right now the code uses a class which looks something like
public class RIOVar<T>
{
public readonly string location;
public RIOVar(string location)
{
this.location = location;
}
public T Get()
{
using(NetworkVariableReader<T> reader = new NetworkVariableReader<T>(location) )
{
reader.Connect();
return reader.ReadData().GetValue();
}
}
public void Write(T value)
{
using(NetworkVariableWriter<T> writer = new NetworkVariableWriter<T>(location) )
{
writer.Connect();
writer.WriteValue(value);
}
}
}
The actual class does a lot more than this, but the part that actually communicates with the cRIO basically boils down to these two methods and the location data member.
What I'm wondering is whether it would be better to instead have the reader and writer as class members and Connect them in the constructor (at the point they are constructed, the connection should be possible). What I don't know is whether this would have some adverse effect on the way the computer and the RIO communicate with each other (maybe a connected manager uses some resource, or the program must maintain some sort of register...?), in which case the approach here, of having the manager connected only for the read/write operation, is the better design.
Keeping a variable connected keeps its backing resources in memory:
threads
sockets
data buffers
These resources are listed in the online help, but it's unclear to me if that list is complete:
NationalInstruments.NetworkVariable uses multiple threads to implement the reading and writing infrastructure. When reading or writing in a tight loop insert a Sleep call, passing 0, to allow a context switch to occur thereby giving the network variable threads time to execute.
... snip ...
NationalInstruments.NetworkVariable shares resources such as sockets and data buffers among connections that refer to the same network variable in the same program.
I'd expect better runtime performance by connecting/disconnecting as infrequently as possible: for example, connect while the network is reachable, and disconnect when it isn't.
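A hedged sketch of that long-lived-connection variant (the class name is hypothetical; the reader/writer API calls are the same ones used in the question, and whether Connect may run in the constructor depends on the cRIO being reachable at that point):

```csharp
public class ConnectedRIOVar<T> : IDisposable
{
    private readonly NetworkVariableReader<T> reader;
    private readonly NetworkVariableWriter<T> writer;

    public ConnectedRIOVar(string location)
    {
        reader = new NetworkVariableReader<T>(location);
        writer = new NetworkVariableWriter<T>(location);
        reader.Connect();   // keeps the backing threads/sockets/buffers alive
        writer.Connect();
    }

    public T Get() { return reader.ReadData().GetValue(); }
    public void Write(T value) { writer.WriteValue(value); }

    public void Dispose()
    {
        // Disposing releases the connection's backing resources.
        reader.Dispose();
        writer.Dispose();
    }
}
```

The trade-off is exactly the one quoted from the help: while connected, the variable's threads, sockets, and buffers stay allocated, in exchange for not paying the connect cost on every read/write.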
I'm working on a P2P application in C#.
It's a file transfer with file splitting, plus a text chat.
On a client there are two threads: one for listening, one for sending.
When I send a file, it's first split into, let's say, 10 pieces; these 10 pieces are added to a send queue in the client, which then starts sending file chunk 1.
But now I want to send a message through the same pipe.
My idea is to insert that message into the send list before file chunk 2.
What kind of threading construct do I need for two threads to work on the same list?
I have accounted for the objects being received this way.
My initial idea for the send function was something along these lines:
public void Send()
{
while (IsConnected())
{
if (unSentObjects.Count > 1)
{
Task sendTask = new Task(() => SendObj(unSentObjects[0]));
sendTask.Start();
}
}
}
You could use a synchronization object such as a mutex to prevent race conditions or simultaneous writes/reads to the same file. Basically, only one thread will be able to access the object at a time.
If the data is global to the threads and they are all in one process, you can use the synchronization object simply to signal when to use the shared global data and when not to. Other than that, using the shared global data is exactly the same; you are just directing traffic around its use.
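One simple way to let two threads work on the same send list without manual locking is to replace the list with thread-safe queues. A sketch (all names hypothetical) that also gives chat messages priority over file chunks, rather than inserting into the middle of a list:

```csharp
using System.Collections.Concurrent;

class SendPipeline
{
    // Two thread-safe queues: the UI thread enqueues, the single send
    // thread dequeues; no explicit lock is needed.
    readonly ConcurrentQueue<byte[]> messages = new ConcurrentQueue<byte[]>();
    readonly ConcurrentQueue<byte[]> chunks = new ConcurrentQueue<byte[]>();

    public void EnqueueMessage(byte[] m) { messages.Enqueue(m); }
    public void EnqueueChunk(byte[] c) { chunks.Enqueue(c); }

    // Called in a loop by the send thread: drain any pending chat
    // messages first, then send the next file chunk.
    public byte[] NextToSend()
    {
        byte[] item;
        if (messages.TryDequeue(out item)) return item;
        if (chunks.TryDequeue(out item)) return item;
        return null;  // nothing to send right now
    }
}
```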
I have a Web API project which is accessed by multiple users (I mean a really, really large number of users). When my project is accessed from the frontend (a web page using HTML5) and a user does something like updating or retrieving data, the backend app (the Web API) writes to a single log file (a .log file whose content is JSON).
The problem is that when it is accessed by multiple users, the frontend becomes unresponsive (always loading). The bottleneck is the writing of the log file (a single log file being accessed by very many users). I heard that a multi-threading technique can solve the problem, but I don't know which method to use, so I would appreciate any help.
Here is my code (sorry for any typos; I'm using my smartphone and the mobile version of Stack Overflow):
public static void JsonInputLogging<T>(T m, string methodName)
{
using (MemoryStream ms = new MemoryStream())
{
DataContractJsonSerializer ser =
new DataContractJsonSerializer(typeof(T));
ser.WriteObject(ms, m);
string jsonString = Encoding.UTF8.GetString(ms.ToArray());
logging("MethodName: " + methodName + Environment.NewLine + jsonString);
}
}
public static void logging(string message)
{
string pathLogFile = @"D:\jsoninput.log";
FileInfo jsonInputFile = new FileInfo(pathLogFile);
if (jsonInputFile.Exists)
{
long fileLength = jsonInputFile.Length;
if (fileLength > 1000000)
{
File.Move(pathLogFile, /* some new path */);
}
}
File.AppendAllText(pathLogFile, /* some text */);
}
You have to understand some internals here first. For every [x] users, ASP.NET will use a single worker process. One worker process holds multiple threads. If you're using multiple instances in the cloud it's even worse, because then you also have multiple server instances (I assume this isn't the case here).
A few problems here:
You have multiple users and therefore multiple threads.
Multiple threads can deadlock each other writing the files.
You have multiple appdomains and therefore multiple processes.
Multiple processes can lock each other out of the file.
Opening and locking files
File.Open has a few flags for locking. You can basically lock files exclusively per process, which is a good idea in this case. A two-step approach with Exists and Open won't help, because in between the two calls another worker process might do something. Basically the idea is to call Open with write-exclusive access, and if it fails, try again with another filename.
This basically solves the issue with multiple processes.
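A rough sketch of that open-exclusive-or-retry idea (the class and method names are hypothetical): FileShare.None means no other process can open the file while we hold it, and on failure we fall back to a numbered filename.

```csharp
using System.IO;

class LogFiles
{
    // Open basePath for exclusive appending; if another process already
    // holds it, fall back to basePath.1, basePath.2, ...
    public static FileStream OpenExclusive(string basePath)
    {
        for (int attempt = 0; ; attempt++)
        {
            string path = attempt == 0 ? basePath : basePath + "." + attempt;
            try
            {
                return new FileStream(path, FileMode.Append,
                                      FileAccess.Write, FileShare.None);
            }
            catch (IOException)
            {
                // File is locked by another process; try the next name.
            }
        }
    }
}
```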
Writing from multiple threads
File access should be single-threaded. Instead of writing your data to the file directly, use one dedicated thread to do the file access, and have the multiple request threads tell that thread what to write.
If you have more log requests than you can handle, you're in trouble either way. In that case, the best way to handle it for logging, in my opinion, is to simply drop the data. In other words, make the logger somewhat lossy to make life better for your users. You can use the queue for that as well.
I usually use a ConcurrentQueue for this and a separate thread that works away all the logged data.
This is basically how to do this:
// Starts the worker thread that gets rid of the queue:
internal void Start()
{
loggingWorker = new Thread(LogHandler)
{
Name = "Logging worker thread",
IsBackground = true,
Priority = ThreadPriority.BelowNormal
};
loggingWorker.Start();
}
We also need something to do the actual work and some variables that are shared:
private Thread loggingWorker = null;
private int loggingWorkerState = 0;
private ManualResetEventSlim waiter = new ManualResetEventSlim();
private ConcurrentQueue<Tuple<LogMessageHandler, string>> queue =
new ConcurrentQueue<Tuple<LogMessageHandler, string>>();
private void LogHandler(object o)
{
Interlocked.Exchange(ref loggingWorkerState, 1);
while (Interlocked.CompareExchange(ref loggingWorkerState, 1, 1) == 1)
{
waiter.Wait(TimeSpan.FromSeconds(10.0));
waiter.Reset();
Tuple<LogMessageHandler, string> item;
while (queue.TryDequeue(out item))
{
writeToFile(item.Item1, item.Item2);
}
}
}
Basically this code enables you to work away all the items from a single thread using a queue that's shared across threads. Note that ConcurrentQueue doesn't use locks for TryDequeue, so clients won't feel any pain because of this.
Last thing that's needed is to add stuff to the queue. That's the easy part:
public void Add(LogMessageHandler l, string msg)
{
if (queue.Count < MaxLogQueueSize)
{
queue.Enqueue(new Tuple<LogMessageHandler, string>(l, msg));
waiter.Set();
}
}
This code will be called from multiple threads. It's not 100% correct, because Count and Enqueue are not executed atomically as a pair - but for our purposes it's good enough. Enqueue doesn't take a lock either, and the waiter ensures the items are removed by the other thread.
Wrap all this in a singleton pattern, add some more logic to it, and your problem should be solved.
That can be problematic, since every client request is handled by a new thread by default anyway. You need some "root" object that is known across the project (I don't think you can achieve this with a static class), so you can lock on it before you access the log file. However, note that this will basically serialize the requests, and will probably have a very bad effect on performance.
No, multi-threading does not solve your problem. How are multiple threads supposed to write to the same file at the same time? You would need to take care of data consistency, and I don't think that's the actual problem here.
What you are looking for is asynchronous programming. The reason your GUI becomes unresponsive is that it waits for the tasks to complete. If you know the logger is your bottleneck, then use async to your advantage: fire the log method and forget about the outcome; just write the file.
Actually, I don't really think your logger is the problem. Are you sure there is no other logic that blocks you?
I am designing and developing an API where multiple threads download files from the net and then write them to disk.
If it is used incorrectly, it could happen that the same file is downloaded and written by more than one thread, which will lead to an exception at the moment of writing to disk.
I would like to avoid this problem with a lock() { ... } around the part that writes the file, but obviously I don't want to lock on a global object - just on something related to that specific file, so that not all threads are blocked whenever a file is written.
I hope this question is understandable.
So what you want is to synchronize a bunch of actions based on a given key. In this case, that key can be an absolute file name. We can implement this as a dictionary that maps a key to some synchronization object. This could be either an object to lock on, if we want a blocking synchronization mechanism, or a Task, if we want to represent an asynchronous way of running the code when appropriate; I went with the latter. I also went with a ConcurrentDictionary, to let it handle the synchronization rather than handling it manually, and used Lazy to ensure that each task is created exactly once:
public class KeyedSynchronizer<TKey>
{
private ConcurrentDictionary<TKey, Lazy<Task>> dictionary;
public KeyedSynchronizer(IEqualityComparer<TKey> comparer = null)
{
dictionary = new ConcurrentDictionary<TKey, Lazy<Task>>(
comparer ?? EqualityComparer<TKey>.Default);
}
public Task ActOnKey(TKey key, Action action)
{
var dictionaryValue = dictionary.AddOrUpdate(key,
new Lazy<Task>(() => Task.Run(action)),
(_, task) => new Lazy<Task>(() =>
task.Value.ContinueWith(t => action())));
return dictionaryValue.Value;
}
public static readonly KeyedSynchronizer<TKey> Default =
new KeyedSynchronizer<TKey>();
}
You can now create an instance of this synchronizer and then specify actions along with the keys (files) they correspond to. You can be confident that an action won't be executed until any previous actions on that file have completed. If you want to wait until the action completes, you can Wait on the task; if you have no need to wait, you can simply not. This also allows you to do your processing asynchronously, by awaiting the task.
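For example (the file names and the WriteFile action below are hypothetical stand-ins for the real download code), two actions on the same key are chained, while a different key proceeds independently:

```csharp
var sync = KeyedSynchronizer<string>.Default;

// Serialized: the second action runs only after the first completes,
// because both use the same key.
Task first  = sync.ActOnKey(@"C:\downloads\a.dat", () => WriteFile("a"));
Task second = sync.ActOnKey(@"C:\downloads\a.dat", () => WriteFile("a"));

// Independent: a different key is not blocked by the two above.
Task other  = sync.ActOnKey(@"C:\downloads\b.dat", () => WriteFile("b"));

Task.WaitAll(first, second, other);
```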
You may consider using ReaderWriterLockSlim
http://msdn.microsoft.com/en-us/library/system.threading.readerwriterlockslim.aspx
private ReaderWriterLockSlim fileLock = new ReaderWriterLockSlim();
fileLock.EnterWriteLock();
try
{
//write your file here
}
finally
{
fileLock.ExitWriteLock();
}
I had a similar situation, and resolved it by lock()ing on the StreamWriter object in question:
private Dictionary<string, StreamWriter> _writers; // Consider using a thread-safe dictionary
void WriteContent(string file, string content)
{
StreamWriter writer;
if (_writers.TryGetValue(file, out writer))
lock (writer)
writer.Write(content);
// Else handle missing writer
}
That's from memory, it may not compile. I'd read up on Andrew's solution (I will be), as it may be more exactly what you need... but this is super-simple, if you just want a quick-and-dirty.
I'll make it an answer with some explanation.
Windows already has something like what you want. The idea behind it is simple: allow multiple processes to access the same file and carry out all their read/write operations so that 1) all processes operate on the most recent data in the file, and 2) multiple writes or reads occur without waiting (where possible).
It's called Memory-Mapped Files. I was using it mostly for IPC (without a backing file), so I can't provide an example from my own code, but there should be some around.
You could mimic MMF behavior by using a buffer and a layer on top of it that redirects all read/write operations to that buffer and periodically flushes the updated content to the physical file.
P.S.: also look into file sharing (opening a file for shared reading/writing).
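A minimal sketch of the memory-mapped file API (the file name is hypothetical): the OS keeps one coherent view of the mapping, so multiple processes that map the same file all see the most recent data, and the pages reach the physical file when the view is disposed.

```csharp
using System;
using System.IO;
using System.IO.MemoryMappedFiles;
using System.Text;

class MmfSketch
{
    public static void Run()
    {
        string path = Path.Combine(Path.GetTempPath(), "mmf-demo.bin");
        File.WriteAllBytes(path, new byte[64]);  // backing file must exist

        using (var mmf = MemoryMappedFile.CreateFromFile(path, FileMode.Open))
        using (var accessor = mmf.CreateViewAccessor())
        {
            // Write through the mapping rather than a FileStream.
            byte[] data = Encoding.ASCII.GetBytes("hello");
            accessor.WriteArray(0, data, 0, data.Length);
        }
        // At this point the bytes are visible in the file on disk.
    }
}
```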
I have created a webservice in .net 2.0, C#. I need to log some information to a file whenever different methods are called by the web service clients.
The problem comes when one user process is writing to a file and another process tries to write to it. I get the following error:
The process cannot access the file because it is being used by another process.
The solutions that I have tried to implement in C# and failed are as below.
Implemented singleton class that contains code that writes to a file.
Used lock statement to wrap the code that writes to the file.
I have also tried to use open source logger log4net but it also is not a perfect solution.
I know about logging to system event logger, but I do not have that choice.
I want to know if there exists a perfect and complete solution to such a problem?
The locking is probably failing because your webservice is being run by more than one worker process.
You could protect the access with a named mutex, which is shared across processes, unlike the locks you get by using lock(someobject) {...}:
Mutex mutex = new Mutex(false, "mymutex");
mutex.WaitOne();
try
{
// access file
}
finally { mutex.ReleaseMutex(); }
You don't say how your web service is hosted, so I'll assume it's in IIS. I don't think the file should be accessed by multiple processes unless your service runs in multiple application pools. Nevertheless, I guess you could get this error when multiple threads in one process are trying to write.
I think I'd go for the solution you suggest yourself, Pradeep, build a single object that does all the writing to the log file. Inside that object I'd have a Queue into which all data to be logged gets written. I'd have a separate thread reading from this queue and writing to the log file. In a thread-pooled hosting environment like IIS, it doesn't seem too nice to create another thread, but it's only one... Bear in mind that the in-memory queue will not survive IIS resets; you might lose some entries that are "in-flight" when the IIS process goes down.
Other alternatives certainly include using a separate process (such as a Service) to write to the file, but that has extra deployment overhead and IPC costs. If that doesn't work for you, go with the singleton.
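A rough sketch of that single-writer design, using only .NET 2.0-era types (all names are hypothetical): a locked Queue plus Monitor.Wait/Pulse, with the logger thread owning the file exclusively.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;

class QueueLogger
{
    readonly Queue<string> queue = new Queue<string>();
    readonly object gate = new object();
    readonly Thread worker;
    readonly string path;

    public QueueLogger(string path)
    {
        this.path = path;
        worker = new Thread(Run);
        worker.IsBackground = true;  // don't keep the process alive
        worker.Start();
    }

    // Called from any request thread: hold the lock only long enough
    // to enqueue, then wake the writer.
    public void Log(string message)
    {
        lock (gate)
        {
            queue.Enqueue(message);
            Monitor.Pulse(gate);
        }
    }

    // Only this thread ever touches the file.
    void Run()
    {
        while (true)
        {
            string message;
            lock (gate)
            {
                while (queue.Count == 0) Monitor.Wait(gate);
                message = queue.Dequeue();
            }
            File.AppendAllText(path, message + Environment.NewLine);
        }
    }
}
```

As noted above, the in-memory queue will not survive an IIS reset, so entries that are in flight when the process goes down are lost.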
Maybe write a "queue line" of sorts for writing to the file: when you try to write, keep checking whether the file is locked; if it is, keep waiting; if it isn't locked, write to it.
You could push the results onto an MSMQ Queue and have a windows service pick the items off of the queue and log them. It's a little heavy, but it should work.
Joel and charles. That was quick! :)
Joel: When you say "queue line" do you mean creating a separate thread that runs in a loop to keep checking the queue as well as write to a file when it is not locked?
Charles: I know about MSMQ and windows service combination, but like I said I have no choice other than writing to a file from within the web service :)
thanks
pradeep_tp
The trouble with all the approaches tried so far is that multiple threads can enter the code.
That is, multiple threads try to acquire and use the file handle - hence the errors. You need a single thread, outside of the worker threads, to do the work, with a single file handle held open.
Probably the easiest thing to do would be to create a thread during application start in Global.asax and have it listen to a synchronized in-memory queue (System.Collections.Generic.Queue<T>). Have that thread open and own the lifetime of the file handle; only that thread can write to the file.
Client requests in ASP will lock the queue momentarily, push the new logging message onto the queue, then unlock.
The logger thread will poll the queue periodically for new messages; when messages arrive on the queue, the thread will read and dispatch the data into the file.
To show what I am trying to do in my code, the following is the singleton class I have implemented in C#:
public sealed class FileWriteTest
{
private static volatile FileWriteTest instance;
private static object syncRoot = new Object();
private static Queue logMessages = new Queue();
private static ErrorLogger oNetLogger = new ErrorLogger();
private FileWriteTest() { }
public static FileWriteTest Instance
{
get
{
if (instance == null)
{
lock (syncRoot)
{
if (instance == null)
{
instance = new FileWriteTest();
Thread MyThread = new Thread(new ThreadStart(StartCollectingLogs));
MyThread.Start();
}
}
}
return instance;
}
}
private static void StartCollectingLogs()
{
//Infinite loop
while (true)
{
cdoLogMessage objMessage = new cdoLogMessage();
if (logMessages.Count != 0)
{
objMessage = (cdoLogMessage)logMessages.Dequeue();
oNetLogger.WriteLog(objMessage.LogText, objMessage.SeverityLevel);
}
}
}
public void WriteLog(string logText, SeverityLevel errorSeverity)
{
cdoLogMessage objMessage = new cdoLogMessage();
objMessage.LogText = logText;
objMessage.SeverityLevel = errorSeverity;
logMessages.Enqueue(objMessage);
}
}
When I run this code in debug mode (simulating just one user access), I get a "stack overflow" error at the line where the queue is dequeued.
Note: In the above code ErrorLogger is a class that has code to write to the File. objMessage is an entity class to carry the log message.
Alternatively, you might want to do error logging into the database (if you're using one)
Koth,
I have implemented a Mutex lock, which has removed the "stack overflow" error. I still have to do load testing before I can conclude whether it works fine in all cases.
I was reading about Mutex objects on one of the websites, which says that a Mutex affects performance. I want to know one thing about locking with a Mutex.
Suppose user Process 1 is writing to a file and at the same time user Process 2 tries to write to the same file. Since Process 1 has put a lock on the code block, will Process 2 keep trying, or just die after the first attempt itself?
thanks
pradeep_tp
It will wait until the mutex is released.
Joel: When you say "queue line" do you mean creating a separate thread that runs in a loop to keep checking the queue as well as write to a file when it is not locked?
Yeah, that's basically what I was thinking: have another thread with a while loop that waits until it can get access to the file, saves, then ends.
But you would have to do it in a way where the first thread to start looking gets access first. Which is why I say queue.