I have a quick question for you all. This is my first time dealing with serial I/O, and I was wondering what the most efficient method is. I'll be reading in from a laser over RS-232.
From what I can tell from my research, I can accomplish this by using a background worker to fill a buffer from the serial port, then have the main thread access that buffer to plot and analyze the points. Is there a more efficient/reliable way to do this, or is this approach going to be my best bet?
thanks!
How you read from the device depends on how data is retrieved from it. I have an application that gets data from a strain gauge (attached to a meter). This meter autonomously spits out readings, so I don't have to poll it. Because of this, I simply use the SerialPort's DataReceived event and add the data to an array.
I use the Microsoft Chart Control (built into .NET 4.0), which lives on the UI thread, so it has to be updated from the UI thread. The data comes in pretty fast (~100 points per second), so what I do is build a 100-point array and BeginInvoke (WinForms) that array off to the UI thread while starting a new array. This way the UI is updated periodically (~1 update per second).
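For reference, a minimal sketch of that pattern might look like the following; the port settings, the line-based text protocol and the chart name (chart1) are just assumptions:

// Rough sketch only - port settings, parsing and the chart name are assumptions.
private SerialPort _port;
private List<double> _batch = new List<double>();

private void Form1_Load(object sender, EventArgs e)
{
    _port = new SerialPort("COM3", 9600);        // placeholder settings
    _port.DataReceived += Port_DataReceived;     // raised on a ThreadPool thread
    _port.Open();
}

private void Port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    // Runs off the UI thread. If callbacks can overlap on your device,
    // guard _batch with a lock.
    while (_port.BytesToRead > 0)
    {
        string line = _port.ReadLine();          // assumes newline-terminated readings
        double value;
        if (double.TryParse(line, out value))
            _batch.Add(value);
    }

    if (_batch.Count >= 100)                     // ~1 UI update per second at ~100 Hz
    {
        double[] points = _batch.ToArray();
        _batch.Clear();
        BeginInvoke((Action)(() =>               // marshal the whole batch to the UI thread
        {
            foreach (double p in points)
                chart1.Series[0].Points.AddY(p);
        }));
    }
}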
If, on the other hand, your device requires polling (you have to ask it for data each time), then you'll want a dedicated thread that sits in a tight loop polling.
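A polling loop could be as simple as the sketch below; the "MEAS?" command, the 10 ms pacing and the parsing are made up:

// Sketch of a dedicated polling thread - command, timing and parsing are assumptions.
private volatile bool _polling = true;

private void StartPolling()
{
    var t = new Thread(PollLoop);
    t.IsBackground = true;        // don't keep the process alive on shutdown
    t.Start();
}

private void PollLoop()
{
    while (_polling)
    {
        _port.WriteLine("MEAS?");             // hypothetical "send me a reading" command
        string reply = _port.ReadLine();
        // ...parse the reply and batch it off to the UI as in the sketch above...
        Thread.Sleep(10);                     // pace the loop to suit your device
    }
}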
Related
I'm currently busy writing some C# code to interface with an Arduino. The code periodically samples audio and transmits data to physically represent the audio levels. I've created a WPF interface for this program, but I'm pretty much at a dead end with the final steps. In my interface I would like to be able to change parameters for what is being transmitted, as well as display feedback read from the COM port.
I don't really understand how to properly create threads in C#. I can imagine how you would create something separate to manage the I/O here, as the COM port can only be operated on by a single process. How would I simultaneously run a loop to sample my audio and send it, and another loop to read the serial port, all while staying off the WPF UI thread so the UI doesn't freeze?
Any tips on proper practice for creating these threads safely and efficiently are massively appreciated too!
Thanks
The simplest model is to use a few separate Threads. Each thread runs an infinite loop defined in a method (often called a ThreadProc), so each Thread is like a separate program, except that it runs in the same Process and can interact directly with the UI.
These background threads can interact with the WPF UI through the UI Elements' dispatchers. See Threading Model - WPF
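A minimal sketch of that model, assuming a WPF window with a TextBlock named feedbackText, a SerialPort field, and a made-up SampleAudioLevel() helper:

// Sketch only - port settings, protocol and SampleAudioLevel() are assumptions.
private SerialPort _port = new SerialPort("COM4", 115200);
private volatile bool _running = true;

private void StartWorkers()
{
    _port.Open();
    new Thread(SendLoop) { IsBackground = true }.Start();
    new Thread(ReadLoop) { IsBackground = true }.Start();
}

private void SendLoop()
{
    while (_running)
    {
        byte level = SampleAudioLevel();          // hypothetical audio-sampling helper
        _port.Write(new[] { level }, 0, 1);
        Thread.Sleep(20);                         // ~50 updates per second
    }
}

private void ReadLoop()
{
    while (_running)
    {
        string line = _port.ReadLine();           // blocks until the Arduino sends a line
        // Marshal the UI update onto the WPF UI thread via the Dispatcher.
        Dispatcher.BeginInvoke((Action)(() => feedbackText.Text = line));
    }
}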
Let me briefly explain my problem. I want to read data from a sensor in my C# solution. To do that I use an event, which pulls the data from the sensor at a very fast rate. The data is then saved to a SQL database.
To achieve that and maximize performance, I registered the event in Task A, which now frequently polls the data from my sensor, let's say 1000 samples per second to give you an idea.
The data is saved in a BlockingCollection backed by a queue.
What I want to do now is to start a second task, which saves my data to the SQL database, but only if there are more than 5000 samples in my BlockingCollection queue. How do I achieve that?
I tried to start the second task from my event handler, which runs on the first task, but ran into a few problems.
A) Sometimes the second task would not start in my event
B) I got an exception that the second task was still running (because the event triggered so fast that it wasn't done yet, I guess)
Is there any good way to do this?
Best regards
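For what it's worth, the arrangement described above (one task producing into a BlockingCollection, one long-running consumer draining it in batches of 5000) might be sketched roughly like this; the Sample type, the sensor event and SaveBatchToSql are all placeholders:

// Sketch - needs System.Collections.Concurrent and System.Threading.Tasks.
private readonly BlockingCollection<Sample> _samples =
    new BlockingCollection<Sample>(new ConcurrentQueue<Sample>());

public void Start()
{
    // Producer: the sensor event only enqueues; it never starts tasks itself.
    _sensor.SampleReady += (s, e) => _samples.Add(e.Sample);      // hypothetical event

    // Consumer: one long-running task, started once, outside the event handler.
    Task.Factory.StartNew(ConsumeLoop, TaskCreationOptions.LongRunning);
}

private void ConsumeLoop()
{
    var batch = new List<Sample>(5000);
    foreach (var sample in _samples.GetConsumingEnumerable())
    {
        batch.Add(sample);
        if (batch.Count >= 5000)
        {
            SaveBatchToSql(batch);        // hypothetical bulk insert
            batch.Clear();
        }
    }
}

Starting the consumer once and letting it block on the collection avoids both of the problems described (the task failing to start, and the task already running).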
There are two BackgroundWorkers in my project:
BGW1: the first worker is used to read data from a controller and convert the data into the right format
BGW2: the second worker is used to pass the converted data and objects to the GUI using the ReportProgress functionality
The entire process needs to be as close to real time as possible, and the messages arrive approximately every 0.5 ms. The main thread gets overwhelmed pretty quickly when it has to update 800 points every 5-10 ms.
This causes the GUI to become unresponsive if I update at a rate faster than 10 fps.
A solution I found online is this:
Alternate Way of Multithreaded GUI
I have tried to adapt this methodology to my background workers by setting
// Prevent the framework from checking what thread the GUI is updated from.
Control.CheckForIllegalCrossThreadCalls = false;
in the main form. This allows me to update the GUI from a separate thread rather than the main thread, from what I understand.
Using this line in the main form should mean that I can access the GUI elements from threads other than the main thread, and I don't need to use ReportProgress to update the Chart, so I tried updating the Chart from the DoWork portion of BGW2.
The update works from DoWork, but it seems to still just hand the data over to the main thread, which then updates the chart, resulting in an unusable GUI again.
Do I have to get rid of the BackgroundWorkers completely and use only Threads for the solution from the link to work? Or is there some sort of trick to getting this method to work with BackgroundWorkers?
Well, don't update that often. Just stick to a fixed refresh rate, and use a ConcurrentQueue to pass the data points between the BackgroundWorker that reads the data, and the GUI that renders it. A simple Timer should work well enough - every five seconds, read everything out of the ConcurrentQueue and update the chart.
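Something along these lines, assuming a WinForms Chart named chart1 and one double per data point (all names and the interval are placeholders):

// Sketch - needs System.Collections.Concurrent; names and intervals are assumptions.
private readonly ConcurrentQueue<double> _pending = new ConcurrentQueue<double>();

// Called from the BackgroundWorker's DoWork as data arrives (any thread is fine).
private void OnDataRead(double value)
{
    _pending.Enqueue(value);
}

// A System.Windows.Forms.Timer ticks on the UI thread at a fixed refresh rate.
private void refreshTimer_Tick(object sender, EventArgs e)
{
    double value;
    var points = chart1.Series[0].Points;
    while (_pending.TryDequeue(out value))
        points.AddY(value);
    // Optionally trim old points here to keep the chart light.
}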
Don't update the UI from multiple threads. There's a reason why the checks are there.
One background worker really is enough.
The expensive parts of this operation are:
1. Synchronizing back to the UI thread
2. Blocking the worker while you do it.
To solve the performance issue, minimise #1: don't post every item, post many items every x milliseconds.
In fact, I'd recommend not using a background worker at all - the ReportProgress event blocks your worker thread.
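A sketch of that idea, posting a batch at a fixed interval from the worker instead of reporting per item (the reader, the _running flag and theMainForm.Chart are assumptions):

// Sketch - the reader, _running flag and theMainForm.Chart property are assumptions.
private volatile bool _running = true;

private void WorkerLoop()
{
    var batch = new List<DataPoint>();
    var clock = Stopwatch.StartNew();           // System.Diagnostics

    while (_running)
    {
        batch.Add(ReadAndConvertNextMessage()); // hypothetical read/convert step

        if (clock.ElapsedMilliseconds >= 100)   // post roughly 10 batches per second
        {
            var toPost = batch.ToArray();
            batch.Clear();
            clock.Restart();

            // Fire-and-forget: the worker keeps reading while the UI draws the batch.
            theMainForm.BeginInvoke((Action)(() =>
            {
                foreach (var p in toPost)
                    theMainForm.Chart.Series[0].Points.Add(p);
            }));
        }
    }
}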
Have you tried forcing the pending events to be handled? This will empty the event queue so the form becomes responsive to the user. Nevertheless, you'd be better off updating the GUI at a fixed rate, as pointed out by Luaan.
I have a series of calculations that need to be processed - the calculations and the order they run are all defined by the user on the UI.
If they just ran one after another, it wouldn't be too hard. However, some of the calculations need to be processed concurrently, and all calculations must be individually pausable at any time. I also need to be able to re-arrange the order or add new calculations to be processed at any time. So whatever I do must be flexible enough to handle this.
On the UI, imagine a listbox (a queue, if you like) of usercontrols - with each usercontrol displaying the name of the calculation and a pause button. And I can add calculations to this list at any time during processing.
What is the best way to do this?
Should I be running each calculation in its own thread? If so, how should I store the list of running processes? How will I pass the queue to the calculation processor? How will I be able to ensure that every time the queue changes (new ordering or new calculation) the calculation processor will be made aware of this?
My initial thoughts were to have:
CalcProcessor class
CalcCalculation class
In CalcProcessor have 2 Lists of CalcCalculations. One being the "queue" as shown on the UI (perhaps a pointer to it? Or some other way to ensure it updates live), and the other being the list of currently running calculations.
Somehow I need to get the CalcCalculation to be running in its own thread to process the calculation, and be able to handle any pause events. So I need some way to transmit the info of the Pause button being pressed in the UI to the CalcProcessor object, and then into the correct CalcCalculation.
Edit in response to David Hope:
Thanks for your reply.
Yes, there are n calculations but this could change at any time due to being able to add more calculations to process on the UI.
They do not need to share data in any way. There will be a setting in the application to specify how many should run concurrently (i.e. 10 at any given time, the first 10 in the queue for example, and when one finishes the next calculation in the queue will start processing).
The calculation will involve taking data from some data source - it could be a database or a file, and analysing it and performing some calculations on that data. When I say the calculation needs to be paused, I don't mean pausing the thread... I just mean (for example, as I haven't written this part of the application yet) if it is reading row by row from a database and doing some live calculations pausing at the completion of processing the current row... and continuing on when the pause button is unclicked on the UI - which could be done with something as primitive as a while(notPaused) loop providing I can get the Pause information from the UI into the thread.
There are several questions here:
How to synchronize the UI and the model?
I think you got this one backwards. Your model shouldn't have a “pointer” to the queue you're showing in the UI. Instead, the queue should be in your model and you should use databinding together with INotifyPropertyChanged and ObservableCollection to show the queue on the UI. (At least that's how it's done in WPF.)
This way, you can manipulate your queue directly from your model, and it will automatically show on the UI.
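For example, a bare-bones sketch (class and property names are mine, not from your code):

// Sketch - needs System.Collections.ObjectModel and System.ComponentModel.
public class CalcQueueViewModel
{
    private readonly ObservableCollection<CalcItemViewModel> _queue =
        new ObservableCollection<CalcItemViewModel>();

    // The ListBox binds to this; adds/removes/reorders show up automatically.
    public ObservableCollection<CalcItemViewModel> Queue { get { return _queue; } }
}

public class CalcItemViewModel : INotifyPropertyChanged
{
    private string _status;

    public string Name { get; set; }

    public string Status
    {
        get { return _status; }
        set { _status = value; OnPropertyChanged("Status"); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}

In XAML you would bind the ListBox's ItemsSource to Queue and modify the collection on the UI thread (or via the Dispatcher), since WPF collection change notifications have to be raised there.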
How to start and monitor calculations?
I think Tasks are ideal for this. You can start a Task using Task.Factory.StartNew(). Since it seems your Tasks will take long to execute, you might consider using TaskCreationOptions.LongRunning. You can also use the Task to find out when the calculation is complete (or whether it failed with an exception).
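Roughly like this, assuming the calculation method RunCalculation and an item such as the CalcItemViewModel sketched above:

// Sketch - RunCalculation(item) is a placeholder for the actual work.
Task task = Task.Factory.StartNew(
    () => RunCalculation(item),                  // the long-running calculation
    CancellationToken.None,
    TaskCreationOptions.LongRunning,             // hint: give it a dedicated thread
    TaskScheduler.Default);

// Observe completion or failure back on the UI thread.
task.ContinueWith(
    t => item.Status = t.IsFaulted
        ? "Failed: " + t.Exception.InnerException.Message
        : "Completed",
    TaskScheduler.FromCurrentSynchronizationContext());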
How to pause running calculations?
You can use ManualResetEventSlim for that. Normally it would be set, but if you wanted to pause a running Task, you would Reset() it. The calculation will need to periodically call Wait() on that event. It's not possible to reasonably pause a running thread without cooperation from the calculation on that thread.
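A rough sketch of that cooperative pause (the row-reading loop is a placeholder):

// Sketch - one event per calculation; Set = running, Reset = paused.
private readonly ManualResetEventSlim _pauseGate = new ManualResetEventSlim(true);

public void Pause()  { _pauseGate.Reset(); }    // wired to the UI's Pause button
public void Resume() { _pauseGate.Set(); }

private void RunCalculation()
{
    foreach (var row in ReadRows())             // hypothetical data source
    {
        _pauseGate.Wait();                      // blocks here while paused
        ProcessRow(row);                        // hypothetical per-row work
    }
}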
If you were using C# 5.0, a better approach would be to use something like PauseToken.
In Framework 4.5, the answer here is the Async API, which removes the need to manage threads. For details, look at the async/await keywords.
From a broader perspective, a "CalcProcessor" class is a good idea, but I think the Task object will suffice to replace your "CalcCalculation" class. The Processor can simply have an Enumerable of Tasks. The Processor can expose methods for managing the queue, if needed, as well as returning information about its status. When your application finally reaches a state where it must have the results, you can use Task.WaitAll to block the CalcProcessor's thread until all of the tasks complete.
Without more information about the actual goal here, it's hard to give better advice.
You can use the Observer pattern to display results on the UI and to send order changes back to the Processor. The State and Command patterns will help you start, pause and cancel the calculations. These patterns offer good answers to your questions from a design standpoint. Concurrency is still a problem; they don't solve multi-threading issues by themselves, but they open an easier road to managing threads.
I suggest that you haven't broken the problem down far enough, which is the reason you are frustrated.
You need to start small and build up from there. You mention, but don't define, your actual requirements; they seem to be...
Need to be able to run N calculations
Some need to be run concurrently (does this imply that they share data, if so how are you going to share the data)
Must be able to pause the calculation (don't use Thread.Suspend, as it potentially leaves a thread in an unstable state, particularly bad if you are sharing data), so you will need to build pause points into each calculation. Also need to consider how you are going to communicate the pause/unpause to the calculation
As far as methods, there are several to consider...
Threads are an obvious choice, but require careful tending to (starting, pausing, stopping, etc...)
You could also use BackgroundWorker or possibly Parallel.ForEach
BackgroundWorker contains the framework for cancelling the worker and providing progress (which can be useful).
My recommendation to start would be to go with BackgroundWorker, potentially subclassing it to add the Pause/Resume functionality you need. Determine how you are going to manage data sharing (at least use lock to protect against simultaneous access).
You may find BackgroundWorker too restrictive and need to go with Threads, but I'm usually able to avoid it.
If you post more clear requirements, or samples of what you've tried and didn't work, I'll be happy to help more.
For the queue you can use a heap data structure (priority queue). This will help prioritize your tasks. You should also use the thread pool to run the calculations efficiently, and try to split your tasks into small parts.
I have a question around threading and background workers that I hope you can help with.
I plan on making an FTP application to upload a file to 50 servers. Rather than the user having to wait for each upload to finish before the next one starts, I was looking at threading/background workers. Once an upload finishes I want to report the status of the upload ("completed"/"failed") back to the UI. From my understanding, I will need to use background workers for this so I know when the task has completed. I know that with threading I can use a producer/consumer queue or a semaphore to run a given number of threads at once, but I am not quite sure how I can achieve this with background workers.
So my question is: what would be a sensible number of background workers to run at once for the uploads, and what would be the best way to queue the rest?
There is no limit on the size of the upload file so this could be quite small or up to a few MB.
Thanks in advance.
Edit - I tested out one BackgroundWorker per server, all running simultaneously. The results were faster than with just a single BackgroundWorker, but I can't say I was fully comfortable running 50-plus background workers at once, and since the server count may increase in the future, I decided to stick with just the one, which seems to be fast enough. I may look at increasing the worker count to 2 or 3 in the future, but currently 1 seems to be adequate. Thanks for everyone's help.
Thanks
I'd go in a completely different direction with it, tbh. Your app should take the file and store it once, responding to the client that it's got it. The file should then be propagated to the other servers. You can do this many ways, but if you want it controlled by the same application (i.e. not done using a windows service or the like) then a good way would be to use a message queue (either MSMQ or one of the OS ones).
This is much easier than using a semaphore or producer-consumer queue.
Put all your tasks in a queue (doesn't need to be a thread-safe queue, it will only be used from the UI thread).
Loop from 1 to N, taking out a task and starting a BackgroundWorker. (Be sure to handle the empty queue, when there were less than N tasks to begin with). In the RunWorkerCompleted event, update your UI, dequeue another task, and start another BackgroundWorker.
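A sketch of that scheme with N = 3, where UploadJob, UploadFile and ReportStatus are placeholders:

// Sketch - the queue and the workers are only ever touched from the UI thread,
// apart from the DoWork bodies.
private readonly Queue<UploadJob> _jobs = new Queue<UploadJob>();
private const int MaxConcurrent = 3;

private void StartUploads()
{
    for (int i = 0; i < MaxConcurrent && _jobs.Count > 0; i++)
        StartNextWorker();
}

private void StartNextWorker()
{
    var job = _jobs.Dequeue();
    var worker = new BackgroundWorker();
    worker.DoWork += (s, e) => UploadFile(job);              // runs on a pool thread
    worker.RunWorkerCompleted += (s, e) =>
    {
        // Back on the UI thread: report status, then keep the pipeline full.
        ReportStatus(job, e.Error == null ? "Completed" : "Failed: " + e.Error.Message);
        if (_jobs.Count > 0)
            StartNextWorker();
    };
    worker.RunWorkerAsync();
}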
The bottleneck here is going to be your network bandwidth. If your local upstream connection is so fast that you can saturate the incoming connections on two or more remote hosts, then you'll benefit from running multiple uploads in parallel. If not, then it makes very little difference to the total upload time, since it'll be dictated by (file size * number of uploads) / (local bandwidth). In other words - if you do 20 uploads one at a time, it'll take an hour; if you do 20 uploads in parallel, it'll still take an hour. The advantage of the first approach is that if you lose connectivity you'll only need to resume/restart a single upload - whichever one was in progress when the connection was lost.
I'd therefore use a single background thread to sequentially upload the file to each server in turn. If you're using the .NET BackgroundWorker to do this, you can get it to ReportProgress at the end of each file (and you know in advance how many files are to be uploaded so you can calculate progress as a percentage), and attach some custom state to the progress update to inform the user whether the last upload succeeded or not.
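For illustration; the server list, UploadTo and the UI controls are placeholders, and WorkerReportsProgress has to be set to true on the worker:

// Sketch - one worker uploads to each server in turn and reports after every file.
private void uploadWorker_DoWork(object sender, DoWorkEventArgs e)
{
    var worker = (BackgroundWorker)sender;
    string[] servers = (string[])e.Argument;              // passed to RunWorkerAsync

    for (int i = 0; i < servers.Length; i++)
    {
        bool ok;
        try { UploadTo(servers[i]); ok = true; }           // hypothetical FTP upload
        catch (Exception) { ok = false; }

        int percent = (i + 1) * 100 / servers.Length;
        worker.ReportProgress(percent, servers[i] + (ok ? ": completed" : ": failed"));
    }
}

private void uploadWorker_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    // Runs on the UI thread.
    progressBar1.Value = e.ProgressPercentage;
    statusListBox.Items.Add((string)e.UserState);
}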
The only way to know for sure is to test and measure, but it can be different from machine to machine, mostly depending on uplink speed.
Starting 50 BackgroundWorkers at the same time is a bit on the high end, but it is not an incredibly large number. A simple approach would be to start all 50 at the same time and measure memory consumption and upload speed.
If each FTP server is much faster than the client's uplink, the most efficient approach would be to upload to just one (or possibly two) at a time.