Bluetooth in C#: which stack, which SDK?

We've got an application which needs to use Bluetooth for the following:
Receive files from Bluetooth devices (up to 2 devices at the same time)
Display all Bluetooth devices in range
Send files to Bluetooth devices
Scan for Bluetooth devices and transfer files at the same time
We're running on Windows XP.
I've done some looking around and there seem to be three main stacks:
BlueSoleil
The SDK section of the BlueSoleil website seems to say that only one connection is supported, which is obviously no good.
Windows
Seems to support only one Bluetooth dongle, which probably means we can't meet all our requirements.
Widcomm
Expensive and potentially overkill? More complex API? Thoughts?
In terms of an SDK for C#, I was looking at Franson BlueTools; has anyone used this API?
Thanks

Firstly the disclaimer, I'm the maintainer of the 32feet.NET library. :-)
I've just checked, and on XP with the Microsoft stack (using one dongle) I can concurrently receive two OBEX PUTs while also discovering devices. That's using 32feet.NET's ObexListener class and the BluetoothClient.DiscoverDevices method; to send OBEX PUTs one can use its ObexWebRequest class. To accept multiple parallel connections with ObexListener I simply had multiple threads calling its GetContext() method.
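For illustration, that setup in skeleton form, with the 32feet.NET member names as I recall them (check the library docs before relying on this sketch):

    using System;
    using System.IO;
    using System.Threading;
    using InTheHand.Net;         // ObexListener, ObexListenerContext
    using InTheHand.Net.Sockets; // BluetoothClient, BluetoothDeviceInfo

    class ConcurrentObexDemo
    {
        static readonly ObexListener listener =
            new ObexListener(ObexTransport.Bluetooth);

        static void Main()
        {
            listener.Start();

            // Two threads each block in GetContext(), so two incoming
            // OBEX PUTs can be serviced at the same time.
            for (int i = 0; i < 2; i++)
                new Thread(ReceiveLoop) { IsBackground = true }.Start();

            // Device discovery runs concurrently on the main thread.
            BluetoothClient client = new BluetoothClient();
            foreach (BluetoothDeviceInfo device in client.DiscoverDevices())
                Console.WriteLine(device.DeviceName);

            Console.ReadLine();
        }

        static void ReceiveLoop()
        {
            while (true)
            {
                ObexListenerContext context = listener.GetContext(); // blocks
                context.Request.WriteFile(Path.GetRandomFileName()); // save the received object
            }
        }
    }

Sending is ObexWebRequest, e.g. new ObexWebRequest(new Uri("obex://00:11:22:33:44:55/photo.jpg")), then write the file to GetRequestStream() and call GetResponse().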
So that's maybe simpler than we thought...
I've also tested it with Andy Hume's OBEX Server using his Brecham.Obex library, and the concurrent receive works fine there too. It's available from http://32feet.net/files/folders/objectexchange/entry6511.aspx.
On our Widcomm support: hopefully it doesn't seem too "incomplete" on the client side... Inquiry (device discovery) and connections all work. The server side still needs a little work, however, and there are some things the Widcomm API simply doesn't support, e.g. programmatic authentication handling.
What was the issue with the samples? Compile-time or run-time? On MSFT stack or Widcomm? Follow-up at http://32feet.net/forums/37.aspx if you prefer.

Time to explain exactly what we ended up doing...
Why 2 dongles?
If a dongle is scanning, transfer rates slow down massively.
A dongle can only support 7 concurrent connections, and if you are scanning this drops to 6. If you want to send, receive and scan all at the same time, everything slows down badly, and you are very limited in channels.
So the idea is to have one dongle continuously scanning (so devices appear as quickly as possible) and the other dongle reserved for transfers; since it isn't scanning, transfers are nice and quick.
Library we used
After much testing and thought, we ended up opting for the Wireless Communication Library from BTFramework.
It supports the Widcomm, Windows, BlueSoleil and Toshiba stacks, covers all the server-side functionality we need, and is a well-supported commercial product which has worked for us without error.
Which stack?
Well, this is a complex one. NONE of the stacks support 2 dongles at the same time. So the only option is to run one dongle on one stack and the other dongle on another. This is where the WCL library comes in handy!
Microsoft - If an error occurs during a scan, it's common for the whole stack to crash out. This is not ideal! You have to close and restart the radio device, which takes time and is fault-prone. But... the Microsoft stack does handle file transfers very nicely.
Widcomm - The Widcomm stack isn't great for file transfers. There are pesky little apps which install with Widcomm and keep trying to take control away from your app. You can kill bttray.exe, which helps, but you still get some strange behaviour from the stack during transfers. I'm sure this could be resolved, but since the Microsoft stack is poor for scans, it makes sense to use Widcomm for the scanning.
So... we've got one dongle set to Widcomm, scanning over and over, and one dongle on the Microsoft stack handling only file transfers (in and out).
Getting 2 dongles to work
We went for 2 identical dongles: we can order them in bulk and stock a single part, reducing confusion. Each device shipped just needs 2 Bluetooth dongles; simple.
The only problem is that these are Widcomm dongles, and we need one dongle on the Windows stack. Windows doesn't recognise them as Microsoft-stack dongles, so it won't register them for the Windows stack. However, there is a hack you can make to the bt.inf file to make Windows recognise the dongle. Then you switch the drivers for one of the dongles over to the Windows drivers and you're all done.
Summary
So... we've got one dongle scanning all the time and one handling transfers, each on a separate stack, and it all works nicely. This is the only way I have found to get 2 dongles working smoothly on Windows. If you've got a better suggestion, please post it!

Try this: 32feet.NET. Starting from version 2.4, it supports the Widcomm stack in addition to the Microsoft Windows stack.
BTW: why do you need to work with two dongles at the same time? Usually a single dongle can handle up to 7 devices connected simultaneously.


Most efficient way to poll multiple devices in C#

I've read a lot about this topic, but still am not sure what to do.
First, the situation: I have software written in C# using .NET 4.5 that polls up to 64 devices on a CAN network, which I communicate with via USB using a third-party API from the device manufacturer. The purpose is to provide the user with real-time updates of temperature, pressure, and other values like that from some sensors.
Currently I create a System.Threading.Thread for every device which runs a while loop that queries the device for the relevant info, saves updates to SQL Server via Entity Framework, then sleeps for 1.25 seconds.
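Roughly, each per-device thread looks like this (DeviceApi, ReadingsContext and the other names here are placeholder stand-ins for the vendor API and my Entity Framework types, not the real code):

    using System;
    using System.Threading;

    class DevicePoller
    {
        volatile bool running = true;

        public void StartPollingThread(Device device)
        {
            Thread t = new Thread(() =>
            {
                while (running)
                {
                    // Query the device over the vendor's USB/CAN API.
                    Reading reading = DeviceApi.Query(device.Address);

                    // Save the update to SQL Server via Entity Framework.
                    using (ReadingsContext db = new ReadingsContext())
                    {
                        db.Readings.Add(reading);
                        db.SaveChanges();
                    }

                    Thread.Sleep(TimeSpan.FromSeconds(1.25));
                }
            });
            t.IsBackground = true;
            t.Start();
        }
    }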
This runs OK on smaller systems with ~20 or fewer devices, but on a large install with 50+ devices it runs very slowly. I think my problem is the overhead of creating so many threads. And it doesn't help that I'm stuck with a crappy Atom processor, although at least this one is quad-core, unlike the previous system I used, which was dual-core.
So, I've been trying to make the process more efficient. Everything I read seems to point to Task.Run() being the more effective way of doing something like this, but this software could potentially be running for weeks or months at a time, which I THINK means I would need to run it with TaskCreationOptions.LongRunning. I've read conflicting things on this, so I'm not sure. But if that is the case, then my understanding is that the TPL will just start up a new dedicated thread anyway, so it seems like that would still have the overhead I'm trying to avoid.
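For what it's worth, the Task-based version I have in mind would look something like the sketch below: one async loop per device, where await Task.Delay returns the thread to the pool between polls instead of a dedicated thread sleeping (again, DeviceApi and friends are placeholders):

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class AsyncPoller
    {
        // 64 of these loops can share a handful of thread-pool threads,
        // because no thread is blocked during the delay.
        public async Task PollLoopAsync(Device device, CancellationToken ct)
        {
            while (!ct.IsCancellationRequested)
            {
                Reading reading = DeviceApi.Query(device.Address); // vendor call
                await SaveReadingAsync(reading);                   // EF save, ideally batched
                await Task.Delay(TimeSpan.FromSeconds(1.25), ct);
            }
        }

        Task SaveReadingAsync(Reading reading)
        {
            // Placeholder: in reality, save via Entity Framework, ideally
            // batching writes rather than opening one context per poll.
            return Task.FromResult(0);
        }
    }

But I don't know whether that's actually the right direction.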
So, as you can see, I'm pretty lost on this topic. I don't know if I should just give Task.Run() a try, and see what happens, or if there's a whole different way I should do this.
Any help would be immensely appreciated.
Thank you.

Very Slow PLC Programming and Fault Finding

I am working in a full production environment that has a range of PLCs around our production mill. Each of these PLCs talks back over a 'Data Highway Plus' network to a special PC on our LAN called the MicroLinks PC, which runs the Rockwell RSLinx Classic OPC server software.
Recently I put together a piece of .NET software in C#, using the OPC .NET API, to read from the Rockwell OPC server on the MicroLinks PC and sync data back into our MySQL database on our Windows R2 server PC.
Ever since turning on the .NET software, the engineers on site have experienced a massive slowdown when developing new PLC scripts and fault finding.
Some of the reports are even as bad as 10-second lags.
Consequently, we have had to turn off the .NET software that syncs the data, to allow the engineers to do their work swiftly without issues.
So I am looking for some advice on what I should look at, and any resources to read for this type of problem. PLCs and networks are way out of my depth; I am just the .NET programmer.
Here is the structure of our network: (network diagram not reproduced here)
I'm not sure which type of Rockwell PLCs you are using. I'm most familiar with the ControlLogix platform, so I'll talk about that.
The Ethernet card in a ControlLogix PLC connects at 100 Mb/s, but the card can't actually handle 100 Mb/s continuously. A 1756-ENBT card can handle about 5000 packets per second, the EN2T roughly double that. There are formulas in the Rockwell docs that explain how to calculate packets per second, but another option when you have a running system is to connect the 'Logix5000 Task Monitor' that comes with RSLogix and check the CPU usage of the Ethernet card; I think Rockwell recommends you keep it under 60%. If you are requesting too many packets, this CPU won't keep up.
The PLC itself can also starve communications. ControlLogix has a "System Overhead Time Slice" setting, which is the percentage of time the PLC spends servicing communication tasks as opposed to running its own logic. Increasing this percentage can improve comms a bit.
It sounds like your program is putting a large burden on the PLC. Does it get better if you slow down your app so that it isn't pulling as much data as quickly?
One easy way to reduce the number of packets required to retrieve a block of data, without slowing down the update rate, is to put it all in one array. RSLinx will then be able to optimize the request instead of pulling individual tags.
I have had plenty of trouble using Rockwell RSLinx on my local PC when trying to find the IP address of a PLC plugged directly into my Ethernet port. Using the "Autobrowse" option, it completely locks up my PC while scanning the ports and IP addresses for targets.
It might just be poorly optimized Rockwell software causing issues. You also may be exchanging a whole lot of data and your server PC is struggling to keep up.
I would contact Rockwell/Allen Bradley support for help with this. They will probably want some cash to help you.
You're almost definitely over-polling the PLC. Try polling less and less frequently until you find a rate that doesn't slow down the network. For example, if you're requesting data every 100 ms now, change that to once per second, then once per minute, then once every 15 minutes. At each step, check the comms speed for the programming terminals.

Is a Windows service the "correct" choice for interacting with hardware on an embedded system that must run Windows?

I am working on an embedded system that involves collecting data from multiple camera modules over USB. The plan was originally to use a small Linux system, but the Linux drivers for the camera don't support using any of its features (hardware triggering, shooting raw, certain pixel formats, etc.). There is a nice C# SDK provided by the manufacturer, and everything just works on Windows. We are now investigating using a small Windows system like the new Intel Compute Stick or a Liva.
I want to write software to collect the data from the cameras as they are hardware-triggered by another part of the system, and write the data to a removable disk. It should be remotely controllable via TCP/IP (hard-wired). This sounds like something that would fit within the purview of a Windows service. Would this be a good way to go?
I'm mostly concerned about running into security/permissions issues. I've been reading things that indicate that services are contained within "non-interactive" window stations, and I'm not sure what that means in terms of being able to access devices etc. The machine running all this is going to be completely headless, so it just has to work all the time. I'm continuing to do my own research into the right thing to do here, but if somebody with relevant experience could give me a suggestion "yes" or "no" along with a good reason why, that would help me out greatly.
A few things to consider:
are there any limitations or required privileges for accessing the hardware/drivers, and which Windows identity would give your service that (elevated) access?
is any interaction with a user interface required?
To me, without additional details, it looks like a Windows service is going to serve the purpose if no user interface is required.
I would also recommend using Topshelf for developing Windows services in .NET, as it simplifies and abstracts away the surrounding complexity and lets you focus on what your application needs to achieve instead.
[Note: I'm not affiliated with Topshelf or its developers]
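As a rough illustration, a minimal Topshelf host looks something like this (CameraCaptureService and the service name are invented for the example):

    using Topshelf;

    // Placeholder for the real worker: opens the cameras, starts the TCP
    // control listener, and writes captured data to the removable disk.
    public class CameraCaptureService
    {
        public void Start() { /* begin capturing and listening */ }
        public void Stop()  { /* release cameras, flush buffers */ }
    }

    class Program
    {
        static void Main()
        {
            HostFactory.Run(host =>
            {
                host.Service<CameraCaptureService>(s =>
                {
                    s.ConstructUsing(name => new CameraCaptureService());
                    s.WhenStarted(svc => svc.Start());
                    s.WhenStopped(svc => svc.Stop());
                });
                host.RunAsLocalSystem();              // or an identity with device access
                host.SetServiceName("CameraCapture"); // invented name
                host.SetDescription("Collects camera data; remote control over TCP/IP");
            });
        }
    }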
I used to work at a company that did real-time vision systems using FireWire cameras on Windows. The software just ran as an application, which ends up being simpler to deal with and debug. Most of it was done in C++. However, if you don't have hard real-time requirements (that software needed to do things within 50 ms), then C# should be fine.
You could run it as a service, but there is no particular need to.

Does HttpListener work well on Mono?

I'm looking to write a small web service to run on a small Linux box. I prefer to code in C#, so I'm looking to use Mono.
I don't want the overhead of running a full web server or Mono's version of ASP.NET. I'm thinking of having a single process with a thread dealing with each client connection, and shared memory between threads instead of a database.
I've read a little on Microsoft's version of HttpListener and how it works with the Http.sys driver. Alas, Mono's documentation on this class is just the automated class interface with no discussion of how it works under the hood. (Linux doesn't have Http.sys, so I imagine it's implemented substantially differently.)
Could anyone point me towards some resources discussing this module please?
Many thanks, Bill, billpg.com
(A little background to my question for the interested.)
Some time ago, I asked this question, interested in keeping a long conversation open with lots of back-and-forth. I had settled on designing my own ad-hoc protocol, but people I spoke to really wanted a REST interface, even at the cost of the "Okay, send your command now" signal.
So, I wondered about running ASP.NET on a Linux/Mono server, but stumbled upon HttpListener. This seemed ideal, as each "conversation" could run in a separate thread. The thread that calls HttpListener in a loop can work out which thread each incoming connection is for and pass the reference to that thread.
The alternative, for an ASP.NET driven service, would be to have the ASPX code pick up the state from a database and write back the new state when it finishes. Yes, it would work, but that's a lot of overhead.
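(For the curious, the dispatch loop I have in mind is something like this; the conversation lookup via a query-string key, and the Conversation class, are ad-hoc inventions just to illustrate the shape of it.)

    using System.Collections.Generic;
    using System.Net;

    class Dispatcher
    {
        readonly Dictionary<string, Conversation> conversations =
            new Dictionary<string, Conversation>();

        // One accept loop; each request is handed to the long-lived
        // conversation it belongs to.
        public void Run(HttpListener listener)
        {
            while (listener.IsListening)
            {
                HttpListenerContext ctx = listener.GetContext(); // blocks
                string id = ctx.Request.QueryString["conversation"];
                Conversation conv;
                if (id != null && conversations.TryGetValue(id, out conv))
                    conv.Deliver(ctx);    // wake that conversation's thread
                else
                    ctx.Response.Close(); // unknown conversation; drop it
            }
        }
    }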
Greetings,
The HttpListener class in Mono works without much of a problem. I think the most significant difference between its usage in an MS environment and a Linux environment is that port 80 cannot be bound without root/su/sudo privileges. Other ports do not have this restriction. For instance, if you specify the prefix http://localhost:1234/ the HttpListener works as expected. However, if you add the prefix http://localhost/, which you would expect to listen on port 80, it fails silently. If you explicitly attempt to bind to port 80 (http://localhost:80/), an exception is thrown. If you invoke your application as a superuser or root, you can explicitly bind to port 80.
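A minimal listener that demonstrates this (the port number is arbitrary):

    using System;
    using System.Net;
    using System.Text;

    class ListenerDemo
    {
        static void Main()
        {
            HttpListener listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:1234/"); // unprivileged port: fine as a normal user
            // listener.Prefixes.Add("http://localhost:80/"); // needs root on Linux/Mono
            listener.Start();

            HttpListenerContext context = listener.GetContext(); // blocks for one request
            byte[] body = Encoding.UTF8.GetBytes("hello from Mono");
            context.Response.ContentLength64 = body.Length;
            context.Response.OutputStream.Write(body, 0, body.Length);
            context.Response.Close();
            listener.Stop();
        }
    }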
I have not yet explored the rest of the HttpListener members in enough detail to make any useful comments about how well it operates in a Linux environment. However, if there is interest, I will continue to post my observations.
chickenSandwich
I am not sure why you want to look so deep under the hood. Even on the Microsoft side, the documents about Http.sys may not provide you with really valuable information if you are using the .NET Framework.
To know whether something works well enough on Mono, you should download its VMware or VPC image and test your applications on it.
http://www.go-mono.com/mono-downloads/download.html
Though Mono is much more mature than it was a few years ago, we cannot say it has been tested by as many real-world applications as Microsoft .NET. So please test your applications and submit any issues you find to the Mono team.
Based on my experience, minor issues are fixed within a few days, while major issues take longer. But with the Mono source code available, you can fix things on your own or find good workarounds most of the time.

Running graphics display on multiple systems, keeping synched

I have a series of systems on a LAN running a synchronized display routine. For example, think of a chorus line. The program they run is fixed. I have each "client" download the entire routine, and then contact the central "server" at fixed points in the routine for synchronization. The routine itself is mundane, with perhaps 20 possible instructions.
Each client runs the same routine, but they can be doing completely different things at any one time. One part of the chorus line can be kicking left, another part kicking right, but all in time with each other. Clients can join and drop out at any time, but they're all assigned a part. If no-one is there to run the part, it just doesn't get run.
This is all coded in C# .NET.
The client display is a Windows Forms application. The server accepts TCP connections and then services them in round-robin fashion, keeping a master clock of what's going on. The clients send a signal that says "I've reached sync-point 32" (or 19, or 5, or whatever), wait for the server to acknowledge, and then move on. Or the server can say "No, you need to start at sync-point 15".
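(Roughly, the client side of that handshake looks like the sketch below; the port, the line-based message format and the RestartAt() helper are made up for illustration, not the actual protocol.)

    using System.IO;
    using System.Net.Sockets;

    class SyncClient
    {
        public void ReportSyncPoint(int point)
        {
            TcpClient client = new TcpClient("server", 9000);
            StreamWriter writer = new StreamWriter(client.GetStream()) { AutoFlush = true };
            StreamReader reader = new StreamReader(client.GetStream());

            writer.WriteLine("SYNC " + point);  // "I've reached sync-point 32"
            string reply = reader.ReadLine();   // "ACK" -> carry on
            if (reply != null && reply.StartsWith("GOTO "))
                RestartAt(int.Parse(reply.Substring(5))); // "start at sync-point 15"

            client.Close();
        }

        void RestartAt(int point) { /* jump the routine to this sync-point */ }
    }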
This all works great. There is a minor bit of delay between the first and last clients to hit a sync-point, but it's hardly noticeable. It ran for months.
Then the Specification changed.
Now the clients need to respond to near real-time instructions from the server -- it's no longer a pre-set dance program. The server is going to be sending instructions out and the dance program is made up on the fly. I get the fun job of re-designing the protocol, the servicing loops, and the programming instructions.
My toolkit includes anything in a standard .Net 3.5 toolbox. Installing new software is a pain in the arse, since so many systems (clients) can be involved.
I'm looking for suggestions on keeping the clients synced (some sort of latching system? UDP? Broadcast?), distribution of the "dance program", anything that might make this easier than a traditional Client/Server TCP arrangement.
Keep in mind that there are time/speed limitations as well. I could put the dance program in a network database, but I'd have to shove instructions in fairly quickly, and there would be a lot of readers using a rather thick protocol (DBI, SqlClient, etc.) to get a small bit of text. That seems overly complex. And I still need something to keep them all displaying in sync.
Suggestions? Opinions? Wild-ass speculation? Code examples?
PS: Answers may not get marked as "correct" (since this isn't a "correct" answer), but +1 votes for good suggestions for sure.
I did something similar (quite a while back) with synchronizing a bank of 4 displays, each run by a single system, receiving messages from a central server.
The architecture we finally settled on, after a fair amount of testing, involved having one "master" machine. In your case, this would mean having one of your 20 clients act as the master and connect to the server via TCP.
The server would then send the entire series of commands through to that one machine.
That machine then used UDP to broadcast real-time instructions to each of the other machines (the 19 other clients on its LAN) to keep their displays up to date. We used UDP for a couple of reasons: there was lower overhead involved, which helped keep total resource usage down, and since you're updating in real time, if one or two "frames" were out of sync it was never noticeable, at least not noticeable enough for our purposes (a human sitting and interacting with the system).
The key to making this work smoothly, though, is an intelligent communication scheme between the main server and the "master" machine - you want to keep the bandwidth as low as possible. In a case like yours, I'd probably come up with a single binary blob holding the current instruction set for the 20 machines in its smallest form (maybe 20 bytes, or 40 bytes if you need them, etc.). The "master" machine would then worry about translating this out to the other 19 machines and itself.
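(A bare-bones version of that UDP fan-out might look like this; the port number and the one-byte-per-client blob layout are invented for illustration.)

    using System.Net;
    using System.Net.Sockets;

    class UdpFanOut
    {
        const int Port = 9050; // invented port

        // Master: broadcast one small datagram holding every client's command.
        public static void Broadcast(byte[] instructionBlob)
        {
            using (UdpClient sender = new UdpClient())
            {
                sender.EnableBroadcast = true;
                IPEndPoint everyone = new IPEndPoint(IPAddress.Broadcast, Port);
                sender.Send(instructionBlob, instructionBlob.Length, everyone);
            }
        }

        // Client: block for the next datagram and pick out this machine's byte.
        public static byte ReceiveMyCommand(UdpClient receiver, int myIndex)
        {
            IPEndPoint from = new IPEndPoint(IPAddress.Any, 0);
            byte[] blob = receiver.Receive(ref from);
            return blob[myIndex]; // e.g. one command byte per client
        }
    }

On the client you'd create the receiver once with new UdpClient(Port) and call ReceiveMyCommand in a loop.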
There are some nice things about this: the server has a much easier time transmitting to one machine in the cluster instead of every machine in the cluster. This let us, for example, have one single centralized server "drive" multiple clusters efficiently, without ridiculous hardware requirements anywhere. It also keeps the client code very, very simple: it just has to listen for a UDP datagram and do whatever it says. In your case, it sounds like it would carry one of 20 commands, so the client becomes very, very simple.
The "master" machine is the trickiest part. In our implementation, we actually ran the same client code on it as on the other 19 (as a separate process), plus one "translation" process that took the blob, broke it into 20 pieces, and transmitted them. It was fairly simple to write, and it worked very well.
