Experiences programming with the .NET Micro Framework - C#

A company I consult for is looking, at my urging, to switch to devices powered by the .NET Micro Framework, so that we can bring devices to market faster. The idea, in theory at least, is that coding in C# rather than C or assembly will be much faster and less bug-prone. Like I said, this is all theory, as I've never programmed an embedded device.
My questions are as follows:
Is the .NET Micro Framework up to the task?
What are some of the things the .NET Micro Framework cannot do?
What are some of the gotchas?
Is there a viable 3rd party marketplace for plugin devices? I didn't see a whole lot on Microsoft's site.
Can someone point to a commercial device that has been developed with the .NET Micro Framework?
Thanks.

Without knowing your application and the current capabilities of the embedded device, it is hard for me to give a definitive opinion on whether .NET MF is up to the task. If the embedded device is a low-power 8-bit CPU with 2K of RAM and 32K of ROM, then .NET MF would not be suitable for that design.
In a large number of cases the move to .NET MF would involve hardware changes to a chipset favoured by many vendors, typically one targeting ARM7 or ARM9 cores. The main reason for this is to leverage the work already done in porting the HAL and cross-compiling the PAL and TinyCLR to native code for the processor in question. Then, if your application fits the .NET MF model, you only need to develop managed code.
A comparison of development boards might help you to select a platform for a new design. The advantage of the GHI products is that you can purchase the bare chipsets with the firmware that they have developed to integrate with your hardware design.
Answer to Question 1: Is the .NET Micro Framework up to the task?
Sorry, I cannot answer this about your application without more information.
Answer to Question 2: What are some of the things the .NET Micro Framework cannot do?
The Micro Framework is not real-time, unlike many of the competing products. The scheduler is fairly simple and not optimised for systems that require deterministic timing.
The TinyCLR interprets the IL from the next waiting "thread" for a 20 ms time slice. Threads can yield their allotted time slice by calling Thread.Sleep(0). Only between thread time slices will the runtime interpreter check flags from the hardware and dispatch events to managed code or wake up threads that are blocked waiting for hardware. As far as I understand, there is no way for a thread to be unblocked from a native-code interrupt service routine (ISR), nor for a higher-priority thread to pre-emptively interrupt a lower-priority one.
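To make that concrete, here is a minimal sketch of the cooperative style this scheduling encourages (PollingWorker and DoUnitOfWork are illustrative names, not part of the framework): keep each unit of work short and yield explicitly so the interpreter gets a chance to service other threads and pending hardware events.

    using System.Threading;

    public class PollingWorker
    {
        private bool _running = true;

        public void Run()
        {
            while (_running)
            {
                DoUnitOfWork();   // keep each chunk of work short
                Thread.Sleep(0);  // give up the rest of the 20 ms slice so other
                                  // threads run and hardware events are dispatched
            }
        }

        public void Stop()
        {
            _running = false;
        }

        private void DoUnitOfWork()
        {
            // application-specific work goes here
        }
    }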
Answer to Question 3: What are some of the gotchas?
Everything seems to be working, you've understood how the runtime interpreter loop works (the scheduling of threads and reacting to hardware events), and then you forget about GARBAGE COLLECTION!!
It is best to minimise the amount of memory thrashing (review carefully each time you new an object). Instead of repeatedly creating and destroying commonly used objects and leaving them to the GC, consider holding them in a pool and recycling them when they are needed again.
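For illustration, a minimal pool sketch along those lines (Packet and PacketPool are made-up names; the non-generic ArrayList is used because the Micro Framework's class-library subset is limited):

    using System.Collections;

    public class Packet
    {
        public byte[] Data = new byte[64];
    }

    public class PacketPool
    {
        private readonly ArrayList _free = new ArrayList();

        public PacketPool(int size)
        {
            // pre-allocate up front so steady-state operation allocates nothing
            for (int i = 0; i < size; i++)
            {
                _free.Add(new Packet());
            }
        }

        public Packet Take()
        {
            lock (_free)
            {
                if (_free.Count == 0)
                {
                    return new Packet();   // pool exhausted; fall back to allocating
                }
                Packet p = (Packet)_free[_free.Count - 1];
                _free.RemoveAt(_free.Count - 1);
                return p;
            }
        }

        public void Return(Packet p)
        {
            lock (_free)
            {
                _free.Add(p);
            }
        }
    }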
Answer to Question 4: Is there a viable 3rd party marketplace for plugin devices?
The third-party involvement is mainly in the development boards and reference designs on the hardware side of things. From a software point of view, this code-share link might be of interest. As a side note, don't forget that most of the VS2008 development tools also work with .NET MF (e.g. ReSharper and VisualSVN).
Sorry, I don't have an answer to question 5 as I don't follow this type of thing. The landing page for .NET MF on Microsoft's site does seem to have some images of commercial devices, but I've never followed the links.

The .NET Micro Framework is very simple to work with compared to many of the other embedded platforms I have used. But it currently has several drawbacks, such as the lack of real-time support. Also, some of the SDK kits have problems due to hardware contention when all the add-on devices share the same buses. If you need tons of devices hanging off your controller, I would look at the Windows CE platform instead. The current selection of hardware for the Micro Framework is just very limited.
It's a great platform for small projects, but when you try to get into near-real-time requirements you might start to run into bumps.
Like so much else in this industry, it depends. But given that you can get a development kit for under $100, it might be worth checking into.
I used the Tahoe-II from DeviceSolutions.Net with the .NET Micro Framework 2.0/3.0 and C#. Threading was very simple, but the framework is currently very limited. I had to create my own HTTP parser and build crude RESTful web services. There is a Device Web Service model, but I wanted pure HTTP. I also had to create my own SNTP and SMTP protocol layers. A new version (4.0) should be released shortly and may fill in some of these shortfalls.
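For context, the hand-rolled HTTP handling ends up being roughly of this shape (an illustrative sketch rather than the actual code; it assumes the board's network stack is already configured, and error handling and threading are omitted):

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    public class TinyHttpServer
    {
        public static void Serve()
        {
            Socket listener = new Socket(AddressFamily.InterNetwork,
                                         SocketType.Stream, ProtocolType.Tcp);
            listener.Bind(new IPEndPoint(IPAddress.Any, 80));
            listener.Listen(1);

            while (true)
            {
                Socket client = listener.Accept();

                byte[] buffer = new byte[512];
                int read = client.Receive(buffer);
                byte[] data = new byte[read];
                Array.Copy(buffer, data, read);
                string request = new string(Encoding.UTF8.GetChars(data));

                // crude parse: the request line looks like "GET /sensor/1 HTTP/1.1"
                string path = request.Split(' ')[1];

                string body = "You asked for " + path;
                string response = "HTTP/1.1 200 OK\r\nContent-Length: " +
                                  body.Length.ToString() + "\r\n\r\n" + body;
                client.Send(Encoding.UTF8.GetBytes(response));
                client.Close();
            }
        }
    }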


How do I best determine system requirements for a new app? [duplicate]

Possible Duplicate:
How to specify the hardware your software needs?
How do you determine the system requirements of a user's PC in order for them to install and run your software?
I am aware of the obvious, such as Windows, .NET Framework [version number]. But how do you come up with the correct RAM, Processor and all of that?
Is this just something that you observe while you're debugging your app? Do you just check Resource Monitor and watch how much disk activity your app generates, or how much memory it is taking up?
Are there any tools you would recommend to help determine system requirements for my applications?
I've searched for this but I have not been able to find much information.
More importantly, what about the Windows Experience Index? I've seen a few boxed apps in the shops say you need a Windows Experience Index of N, but are there tools that determine what index is required for my app to run?
Until you start doing stress testing and load testing, using or carefully simulating production volumes and diversity of data, you do not really have a high quality build ready for mass deployment.
And when you do, experience (measurements and, if necessary, projection) from this testing will give you RAM, CPU and similar requirements for your customers.
Sure, Resource Monitor is a good way to see how much CPU and RAM the app consumes. But it all depends on the app you're making, and as the developer you know approximately how much power is needed under the hood.
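If you want numbers rather than eyeballing Resource Monitor, you can also sample your own process programmatically while driving it through its heaviest scenarios; a rough sketch (the figures are only as meaningful as the workload you exercise):

    using System;
    using System.Diagnostics;

    class MemorySampler
    {
        static void Main()
        {
            Process p = Process.GetCurrentProcess();

            // ... drive the application through its heaviest scenarios here ...

            p.Refresh();   // re-read the process counters after the workload
            Console.WriteLine("Working set:      {0:N0} bytes", p.WorkingSet64);
            Console.WriteLine("Peak working set: {0:N0} bytes", p.PeakWorkingSet64);
            Console.WriteLine("Total CPU time:   {0}", p.TotalProcessorTime);
        }
    }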
If you're just developing standard WinForms / VCL apps that use standard native controls, you really shouldn't worry too much - 256 MB RAM and a 1 GHz processor should be enough, this is usually what I tend to put on my sysreq page.
For heavy 3D games you should probably start looking more into it, how you do that I can't tell you.
If you REALLY want exact hertz and bytes, you could use a VM and alter the specs and see how your app behaves.

Any Good Patterns For Distributed Parallelism?

I've got a for loop I want to parallelize with something like PLINQ's Parallel.ForEach().
The key here is that the C++ library I'm calling to do the computation is decidedly not thread-safe; therefore, any plan to parallelize this needs to spread the work across multiple processes.
I was thinking about using WCF to create a "distributor" process to which the "client" and multiple "calculators" could connect to add and remove items from a queue; each "calculator" would then send its results directly back to the client, which could update the GUI as it receives them. This architecture would allow me to bring as many "calculators" online as I have processors and, as I see it, even bring them up across multiple computers, creating a potential farm of processing power that all the clients could share.
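For illustration, the distributor's WCF contract might look something like this (all the type and member names here are hypothetical placeholders, not an existing framework):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class WorkItem
    {
        [DataMember] public int Id;
        [DataMember] public double[] Inputs;
    }

    [DataContract]
    public class WorkResult
    {
        [DataMember] public int Id;
        [DataMember] public double Output;
    }

    [ServiceContract]
    public interface IDistributor
    {
        [OperationContract]
        void Enqueue(WorkItem item);          // the client adds work

        [OperationContract]
        WorkItem TakeNext();                  // each calculator process polls for work

        [OperationContract]
        void ReportResult(WorkResult result); // or the calculator sends results straight
                                              // to the client over a second endpoint
    }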
I'm just wondering if anyone has had any experience doing this, and whether there are existing application blocks or frameworks that I can use to build it. PLINQ does this within a process; is there a DPLINQ (distributed) or something like it?
Also, if that doesn't exist, does anybody want to give an opinion on my proposed architecture? Any obvious pitfalls? Does anyone think it will work?!
Sounds like you could be looking for Dryad. It's a Microsoft research project right now, but they do have an "academic release" available. My understanding is that they are also in the process of better productizing it (probably some kind of integration with Azure) for RTM sometime near the end of 2011. Mary Jo Foley covers more about this here.
A long-time standard for controlling/dispatching distributed work is MPI. I've only ever used it from C++, but implementations exist for many languages. A quick Google search suggests that MPI.Net could be a good implementation for .NET.

Using .NET/Mono on Linux to serve a high volume web service, a good idea?

We have a web service that handles a fairly high volume of traffic and helps you figure out who your preferred contacts are based on the e-mails you receive.
This service was initially implemented in C#/.NET in order to leverage some code we already have running on Windows hosts. The service does not use ASP.NET; it's a simple C# service built on the basic HttpListener from .NET.
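For reference, a stripped-down sketch of that kind of HttpListener-based service (the prefix and response here are placeholders, not the actual code):

    using System;
    using System.Net;
    using System.Text;

    class ContactService
    {
        static void Main()
        {
            HttpListener listener = new HttpListener();
            listener.Prefixes.Add("http://*:8080/");   // placeholder prefix
            listener.Start();

            while (true)
            {
                // blocks until a request arrives; real code would dispatch to worker threads
                HttpListenerContext ctx = listener.GetContext();

                byte[] body = Encoding.UTF8.GetBytes("{\"contacts\":[]}");
                ctx.Response.ContentType = "application/json";
                ctx.Response.ContentLength64 = body.Length;
                ctx.Response.OutputStream.Write(body, 0, body.Length);
                ctx.Response.OutputStream.Close();
            }
        }
    }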
The service is performing OK, but once in a while Mono will block completely and stop responding to any requests. Performance is OK but not great, and it seems we spend a huge amount of time working out the differences between the Mono CLR and the Windows CLR. I must admit I come from a Java background, and the server-side ecosystem seems far larger on the Java side than the Mono ecosystem on Linux.
So for now, I am looking for examples and personal experiences of using Mono on Linux to serve a high-traffic web service.
I don't know if it will help solve your problems, but you could try running your web service on Mono 2.8, which comes with a new garbage collector.
For high-volume performance tuning, you often need to consider the following pieces together:
operating system settings (TCP/IP specific)
web server level settings
framework level settings (Mono specific)
As #yojimbo87 described, using the latest Mono build can help boost performance at the framework level. But you also need to learn about the OS and the web server to see if there are other tuning approaches.

Remotely manage .net processes

I am going to be deploying a solution which includes a number of small, long-running processes that will live on a number of boxes. I wanted to develop a central dashboard for managing these processes and was looking for a good way to do so. I would want to get some counters from the processes and monitor things like memory usage and uptime, as well as remotely restart them. In Java I would use JMX, and I was wondering if there is a similar technology in the .NET space. So far I have come across:
NetMX
WMI
WMI looks to be focused more towards unmanaged code. NetMX seems ideal but is not heavily used. Does anybody have experience doing something similar that they could share? Are there any other technologies I should consider?
I had never heard of NetMX; the 0.7 version number might have something to do with that. WMI is quite adroit at monitoring .NET apps as well as unmanaged apps. Fire up Perfmon.exe to see the .NET performance counters at work. They are queryable with WMI; experiment with WMI Code Creator.
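As a starting point, the same CLR counters that Perfmon shows can be read from code; a small sketch (the instance name "MyWorker" is a placeholder for whichever process you are monitoring):

    using System;
    using System.Diagnostics;

    class CounterProbe
    {
        static void Main()
        {
            // standard .NET CLR counter category/names, as listed in Perfmon
            PerformanceCounter heapBytes = new PerformanceCounter(
                ".NET CLR Memory", "# Bytes in all Heaps", "MyWorker");
            PerformanceCounter gen0 = new PerformanceCounter(
                ".NET CLR Memory", "# Gen 0 Collections", "MyWorker");

            Console.WriteLine("Managed heap bytes: {0}", heapBytes.NextValue());
            Console.WriteLine("Gen 0 collections:  {0}", gen0.NextValue());
        }
    }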

Which platform should I use : native C++ or C#? [closed]

I want to develop a Windows application. If I use native C++ and MFC for the user interface, the application will be very fast and tiny, but using MFC is very complicated. If I use C#, the application will be slower than the native code and it requires the .NET Framework to run, but developing the GUI is very easy using WinForms. Which one do you prefer?
"fast" and "slow" are subjective, especially with today's PC's. I'm not saying deliberately make the thing slow, but there isn't nearly as much overhead in writing a managed application as you might think. The JIT etc work very well to make the code execute very fast. And you can also NGEN for extra start-up speed if you really need.
Actually, if you have time to learn it, you might want to consider WPF rather than WinForms - it's a different skill set, but it allows you to make very good use of graphics hardware etc.
Also, the .NET Framework comes with new OS installs and is still very common on those that pre-date it. So for me it would be a fairly clear choice to develop with C#/.NET. The time to develop a robust and fully tested C++ app (with no leaks, etc.) is, for me at least, much greater than for the same app in C#.
Be careful not to optimize too early; you mention the speed of the code, but for most Windows user-interface operations the difference is not noticeable, as the main bottlenecks of drawing and disk access are no different for either approach.
My recommendation is that you use C# and WPF or WinForms for your user interface. If you encounter slowdowns, use a profiler to determine where, and then consider replacing some business logic with native code, but only if there is a benefit.
There are a great many possible Windows applications, and each has its own requirements.
If your application needs to be fast (and what I work on does), then native C++ is a good way to go.
If your application needs to be small (perhaps for really fast transmission over slow lines), then use whatever gets it small.
If your application is likely to be downloaded a lot, you probably want to be leery of later versions of .NET, which your users might not have yet.
If, like most, it will be fast and small enough anyway on the systems it's likely to be used on, use what will allow you to develop fastest and best.
In almost all cases, the right thing to optimize is developer effort. Use whatever gets a high-quality job done fastest and best.
First... (though I'm a die-hard C++ coder) I have to admit C# is in most cases perfectly fine where speed and size are concerned. In some cases the application is even smaller, because the interpreted part is already on the target system. (Don't spam me on that one; an app that relies on a DLL is smaller than an app with everything built in, and Windows just happens to ship with the "DLL" already there.)
As to coding, I honestly don't think there is a significant difference. I don't spend a lot of my time typing code; most of it goes into thinking out a problem, and the typing part is quite small. Saving a few lines here and there... blah, it's not an argument for me. If it were, I'd be working in APL. Learning the STL, MFC and what have you is likely just as intensive as learning the C# libraries. In the end they're all much the same.
C# does have one thing going for it: a market. It's the latest "hot" skill, so there's a market for it and jobs are easy to find. Now keep in mind Java was the "hot" skill a few years back, and now every Tom, Dick and Harry has it on their resume, which makes it harder to niche yourself.
OK, all that said... I LOVE C++. There's nothing like getting dirty when I really need to. When the MFC libs don't do the job, I take a look at what they're sitting on, and so on and so on. It's a perennial language and I believe it's still at, or near, the top of the most-used languages in the world. Yay, C++!
Note also that most Windows computers already have .NET installed on them, so that really shouldn't be a concern.
Also, aside from the .NET installation, .NET applications tend to be quite small.
And for most applications with a UI, the speed of the user is really the limiting factor.
C# applications are slower to start than MFC applications, but you might not notice a speed difference between the two once the application is loaded.
Having no information on the application you plan to develop, I vote for WPF.
In my opinion, the requirements should help you decide the platform. What is more important: having an application that is easily maintainable, or one that must be extremely fast and small?
A large class of applications nowadays can be written using .NET and managed code, and this is generally beneficial to development in the long term. In my experience, .NET applications are usually fast enough for most use cases, and they are simpler to create.
Native C++ still has its uses, but being "faster and smaller" is not much of a justification when "fast enough and small enough" will do.
The speed argument between native and managed code is largely a non-issue at this point. Each release of the .NET Framework makes performance improvements over the previous ones and application performance is always a very high priority for the .NET development teams.
Starting with Windows Vista and Windows Server 2008, the .NET Framework is installed as part of the operating system. It is also part of Windows Update so almost any Windows XP system will also have it installed. If the requirement that the framework be installed on the target machine is really that much of a problem there are also compilers that will essentially embed the required runtime portions into your application to generate a single exe, but they are expensive (and in my opinion, not really worth the cost).
MFC isn't hard to learn; actually, it is very easy - almost on a par with C#.
The choice of a language or tool should be dictated by the functional and performance requirements of your project and by your expertise. If performance is a real consideration for you and you have done some analysis that favours C++ over C#, then you already have your decision. Note, though, that an MFC-based application is not terribly efficient either. On the other hand, the overheads in .NET applications are overstated.
Performance is really a function of how well you write your code and what scalability requirements exist. If you only have to work with one client and a maximum of 1K database records, then we should not be talking about performance at all.
If ease of development and maintainability is more important, certainly C# would be the choice.
So I am not sure this is a question that can be answered as choice A or B with the data you have provided. You need to do the analysis of functional and non-functional requirements and decide.
Also if I use C# then the application will be slower than the native code and it requires the .NET Framework to run
An MFC app requires the MFC DLLs to run (and probably the VC runtime as well!), so they might need to be installed; or, if they are statically linked, they add to the size of the exe.
.NET is easier to work with. Unless you'll lose users by using it or will have trouble with code migration, you should probably use .NET. It is highly unlikely that speed will be an issue for this. Size probably doesn't matter that much, either.
Which technology are you more familiar with?
The information you gave does not include anything that would help decide. Yes, MFC apps tend to be smaller (if you include the runtime size, which isn't a suitable measure in the long run), more responsive and more costly to develop. So what?
