I am going to be deploying a solution that includes a number of small, long-running processes living on a number of boxes. I want to develop a central dashboard for managing these processes and am looking for a good way to do so. I want to collect counters from the processes and monitor things like memory usage and uptime, as well as remotely restart them. In Java I would use JMX, and I was wondering whether there is a similar technology in the .NET space. So far I have come across
NetMX
WMI
WMI really seems focused more towards unmanaged code. NetMX seems ideal but is not heavily used. Does anybody have experience doing something similar they could share? Are there any other technologies I should consider?
Never heard of NetMX; its being at version 0.7 might have something to do with that. WMI is quite adroit at monitoring .NET apps as well as unmanaged apps. Fire up Perfmon.exe to see the .NET performance counters at work. They are queryable with WMI; experiment with the WMI Code Creator tool.
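For basic counters like the ones mentioned in the question (memory usage, uptime), the System.Diagnostics.Process class may already be enough before reaching for WMI. A minimal sketch; it inspects the current process, and attaching to a named (or remote) process by name is mentioned in comments as an assumption:

```csharp
using System;
using System.Diagnostics;

class ProcessMonitor
{
    static void Main()
    {
        // Inspect the current process. For a named process (optionally on
        // another machine) you would use Process.GetProcessesByName(...)
        // instead; the overloads taking a machine name need remoting rights.
        using (Process p = Process.GetCurrentProcess())
        {
            TimeSpan uptime = DateTime.Now - p.StartTime;
            Console.WriteLine("Working set:    {0} bytes", p.WorkingSet64);
            Console.WriteLine("Total CPU time: {0}", p.TotalProcessorTime);
            Console.WriteLine("Uptime:         {0:F0} seconds", uptime.TotalSeconds);
        }
    }
}
```

For remote restart you would still need something on top (a small WCF/remoting service on each box, or WMI's Win32_Process class), since Process alone only kills and starts local processes.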
Is there a way for me to determine the amount of memory and processor power needed for my application? I recently had a very unpleasant experience when one of my applications kept freezing the computers on which it was running. This is obviously related to a lack of hardware power, because it works perfectly on the stronger computers that I used for testing purposes. So my question is: is there a way to calculate the amount of hardware power needed to run the application smoothly?
Almost all of my applications are done in C#, so I would need a method that can work with that type of application.
Thanks
This is obviously related to the lack of hardware power
This entirely depends on what your application is doing. If you are solving problems in a "not so time efficient way", then you can optimize the code.
I would suggest that you analyze your code with a profiler.
This will tell you:
Which parts of your code take up the most RAM/CPU
How much RAM in total your application needed at its peak
Information about CPU consumption
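Short of a full profiler, a quick probe around a suspect section can already tell you something. A rough sketch (the 10 MB allocation is a stand-in for the code under suspicion):

```csharp
using System;
using System.Diagnostics;

class Probe
{
    static void Main()
    {
        // Force a full collection first so the baseline is stable.
        long before = GC.GetTotalMemory(forceFullCollection: true);
        var sw = Stopwatch.StartNew();

        // Stand-in for the suspect code section.
        byte[] buffer = new byte[10 * 1024 * 1024];

        sw.Stop();
        long after = GC.GetTotalMemory(forceFullCollection: false);
        Console.WriteLine("Elapsed: {0} ms, managed heap grew by {1} bytes",
                          sw.ElapsedMilliseconds, after - before);
        GC.KeepAlive(buffer); // keep the allocation alive until measured
    }
}
```

This only sees the managed heap; native allocations and CPU spread across threads are exactly what a real profiler adds on top.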
This is obviously related to the lack of hardware power, because it works perfectly on the stronger computers that I used for testing purposes,
Whoever set up testing should be fired.
You have to have a set of test computers similar to the ones the application will actually run on. That was accepted practice 20 years ago; modern times seem not to care about it.
Seriously, you NEED a test set that is representative of your lowest accepted hardware level.
Otherwise, no, sorry, there is no magic button. Profilers do not necessarily help (under a debugger or profiler the application may use more memory). Try a profiler. Optimize the code. But in the end, you need a decent testbed.
I'd argue that this should be checked during software installation. Later, if the user was prompted to upgrade his/her hardware and dismissed the warning, it is no longer your concern.
If you're using Windows Installer (MSI), you can play with a custom action and use the System.Management classes to detect whatever you want.
I need to monitor handle usage on a Windows CE box.
Essentially I want to be able to see handle usage over time to tell if my applications / services are leaking handles (which I believe they are).
Any example code would be great.
While it is not exactly what you are looking for, I can recommend a little tool we use called CodeSnitch (http://www.entrek.com/codesnitch.html). It instruments your code and keeps track of the allocation and de-allocation of resources, including handles. I have used it to clean up several of our applications with great success. You can download a two-week trial version to try it out.
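CodeSnitch instruments native code; from managed code you can at least sample handle counts over time and look for a monotonic climb. Here is a sketch of the trend check. On desktop Windows the samples would come from Process.HandleCount (or the GetProcessHandleCount API); Windows CE exposes this differently, so treat the sampling side as an assumption and the heuristic as deliberately crude:

```csharp
using System;
using System.Collections.Generic;

class HandleWatch
{
    // Crude leak heuristic: flag when the count rises across every sample.
    public static bool LooksLeaky(IList<int> samples)
    {
        for (int i = 1; i < samples.Count; i++)
            if (samples[i] <= samples[i - 1])
                return false;
        return samples.Count >= 2;
    }

    static void Main()
    {
        // In a real monitor these would be periodic readings, e.g.:
        //   samples.Add(Process.GetCurrentProcess().HandleCount);
        var leaking = new List<int> { 100, 112, 130, 155, 190 };
        var steady  = new List<int> { 100, 104, 101, 103, 100 };
        Console.WriteLine(LooksLeaky(leaking)); // True
        Console.WriteLine(LooksLeaky(steady));  // False
    }
}
```

A real check would sample over a longer window and tolerate small dips, but even this separates a steady-state process from one that never gives a handle back.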
I've got a for loop I want to parallelize with something like the TPL's Parallel.ForEach().
The key here is that the C++ library I'm calling to do the computation is decidedly not thread-safe; therefore, any plan to parallelize this needs to do so across multiple processes.
I was thinking about using WCF to create a "distributor" process to which the "client" and multiple "calculators" could connect to add and remove items to/from a queue; each "calculator" would then send its results directly back to the client, which could update the GUI as it receives them. This architecture would let me bring as many "calculators" online as I have processors, and as I see it, even bring them up across multiple computers, creating a potential farm of processing power that all the clients could share.
I'm just wondering whether anyone has experience doing this and whether there are existing application blocks or frameworks I could use to build it. PLINQ does this within a process; is there a DPLINQ (distributed) or something like it?
Also, if that doesn't exist, does anybody want to give an opinion on my proposed architecture? Any obvious pitfalls? Does anyone think it will work?
Sounds like you could be looking for Dryad. It's a Microsoft research project right now, but they do have an "academic release" available. My understanding is that they are also in the process of better productizing it (probably some kind of integration with Azure) for RTM sometime near the end of 2011. Mary Jo Foley covers more about this here.
A long-time standard for controlling/dispatching distributed work is MPI. I've only ever used it from C++, but implementations exist for many languages. A quick Google search suggests that MPI.NET could be a good implementation for .NET!
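As a small-scale, single-machine sketch of the proposed architecture: the same executable can act as both distributor and worker. The parent spawns worker processes (one per core), feeds them items over stdin, and reads results back over stdout, so the non-thread-safe library is isolated per process. Squaring stands in for the C++ call, and Environment.ProcessPath assumes .NET 6 or later; a WCF or MPI.NET version would replace the pipes with its own transport:

```csharp
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;

class Farm
{
    // Worker mode: one integer per input line, result per output line.
    // This is where the non-thread-safe native call would live.
    static void RunWorker()
    {
        string line;
        while ((line = Console.ReadLine()) != null)
            Console.WriteLine(long.Parse(line) * long.Parse(line));
    }

    static void Main(string[] args)
    {
        if (args.Length > 0 && args[0] == "--worker") { RunWorker(); return; }

        var work = new BlockingCollection<int>();
        foreach (var i in Enumerable.Range(1, 20)) work.Add(i);
        work.CompleteAdding();

        var results = new ConcurrentBag<long>();
        int workers = Math.Min(Environment.ProcessorCount, 4);

        // One OS process per worker; each task owns exactly one worker.
        var tasks = Enumerable.Range(0, workers).Select(_ => Task.Run(() =>
        {
            var psi = new ProcessStartInfo(Environment.ProcessPath, "--worker")
            {
                RedirectStandardInput = true,
                RedirectStandardOutput = true,
                UseShellExecute = false,
            };
            using var p = Process.Start(psi);
            p.StandardInput.AutoFlush = true; // avoid the classic pipe deadlock
            foreach (var item in work.GetConsumingEnumerable())
            {
                p.StandardInput.WriteLine(item);
                results.Add(long.Parse(p.StandardOutput.ReadLine()));
            }
            p.StandardInput.Close();
            p.WaitForExit();
        })).ToArray();
        Task.WaitAll(tasks);

        Console.WriteLine(results.Sum()); // 2870 = sum of squares 1..20
    }
}
```

The ping-pong protocol here is deliberately naive (one item in flight per worker); a real distributor would batch items and handle worker crashes, which is exactly the plumbing a framework like MPI.NET or a WCF queue service takes off your hands.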
A company I consult for is looking, at my urging, to switch to devices powered by the .NET Micro Framework, so that we can bring devices to market faster. The idea, in theory at least, is that coding in C# rather than C or assembly will be much faster and less bug-prone. Like I said, this is all theory, as I've never programmed an embedded device.
My questions are as follows:
Is the .NET Micro Framework up to the task?
What are some of the things the .NET Micro Framework cannot do?
What are some of the gotchas?
Is there a viable 3rd party marketplace for plugin devices? I didn't see a whole lot on Microsoft's site.
Can someone point to a commercial device that has been developed with the .NET Micro Framework?
Thanks.
Without knowing your application and the current capability of the embedded device, it is hard to give a definitive opinion on whether the .NET MF is up to the task. If the embedded device is a low-power 8-bit CPU with 2K of RAM and 32K of ROM, then the .NET MF would not be suitable for that design.
In a large number of cases, the move to .NET MF would involve hardware changes to a chipset favoured by many vendors, typically targeting ARM7 or ARM9 cores. The main reason for this is to leverage the work already done in porting the HAL and cross-compiling the PAL and TinyCLR to native code for the processor in question. Then, if your application fits the .NET MF model, you only need to develop managed code.
A comparison of development boards might help you to select a platform for a new design. The advantage of the GHI products is that you can purchase the bare chipsets with the firmware that they have developed to integrate with your hardware design.
Answer to Question 1: Is the .NET Micro Framework up to the task?
Sorry, I cannot answer this about your application without more information.
Answer to Question 2: What are some of the things the .NET Micro Framework cannot do?
The Micro Framework is not real-time, unlike many of the competing products. The scheduler is fairly simple and not optimised for systems that require deterministic timing.
The TinyCLR interprets the IL from the next waiting "thread" for a 20 ms time slice. Threads can yield their allotted time slice by calling Thread.Sleep(0). ONLY between thread time slices will the runtime interpreter check flags from the hardware and dispatch events to managed code or wake up threads that are blocked waiting on hardware. As far as I understand, there is no way for a thread to be unblocked from a native-code interrupt service routine (ISR), or for a higher-priority thread to pre-emptively interrupt a lower-priority one.
Answer to Question 3: What are some of the gotchas?
Everything seems to be working, you've understood how the runtime interpreter loop works (the scheduling of threads and the reaction to hardware events), and then you forget about GARBAGE COLLECTION!!
Best to minimise memory thrashing (review carefully each time you new an object). Instead of creating and destroying commonly used objects, consider holding a pool of objects that would usually be GC'd and recycling them when needed again.
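The pooling idea above can be sketched in a few lines. The Packet class is a made-up example of a "commonly used object"; the pool is kept non-generic on purpose, since early .NET Micro Framework versions do not support generics:

```csharp
using System;
using System.Collections;

// Hypothetical frequently-allocated object.
class Packet { public int Value; }

// Instead of new-ing a Packet each time and letting the GC reclaim it,
// rent from and return to a pool so the same instances are recycled.
class PacketPool
{
    private readonly Stack _free = new Stack();

    public Packet Rent()
    {
        return _free.Count > 0 ? (Packet)_free.Pop() : new Packet();
    }

    public void Return(Packet p)
    {
        p.Value = 0;   // reset state before recycling
        _free.Push(p);
    }
}

class Demo
{
    static void Main()
    {
        var pool = new PacketPool();
        Packet a = pool.Rent();
        pool.Return(a);
        Packet b = pool.Rent();
        Console.WriteLine(ReferenceEquals(a, b)); // True: instance was recycled
    }
}
```

After warm-up the pool allocates nothing in steady state, which is exactly what keeps the GC quiet on a small device.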
Answer to Question 4: Is there a viable 3rd party marketplace for plugin devices?
The third-party involvement is mainly in development boards and reference designs on the hardware side of things. From a software point of view, this code-share link might be of interest. As a side issue, don't forget that most of the VS2008 development tools also work with .NET MF (e.g. ReSharper and VisualSVN).
Sorry, I don't have an answer to question 5, as I don't follow this type of thing. The landing page for .NET MF on Microsoft's site does seem to have some images of commercial devices, but I've never followed the links.
The .NET Micro Framework is very simple to work with compared to many of the other embedded platforms I have worked with. But it currently has several drawbacks, such as the lack of real-time support. Also, some of the SDK kits have problems due to hardware contention from all the add-on devices using the same buses. If you need tons of devices hanging off your controller, I would look at the Windows CE platform instead. The current selection of hardware for the Micro Framework is just very limited.
It's a great platform, and for small projects it would serve you well. But when you try to get into near-real-time requirements, you might start to run into bumps.
Like so much else in this industry, it depends. But given that you can get a development kit for under $100, it might be worth checking into.
I used the Tahoe-II from DeviceSolutions.Net with .NET Micro Framework 2.0/3.0 and C#. Threading was very simple, but the framework is currently very limited. I had to create my own HTTP parser and build crude RESTful web services. There is a Device Web Service model, but I wanted pure HTTP. I also had to create my own SNTP and SMTP protocol layers. A new version (4.0) should be released shortly, and it may fill in some of these shortfalls.
I know how antivirus software detects viruses. I read a few articles:
How do antivirus programs detect viruses?
http://www.antivirusworld.com/articles/antivirus.php
http://www.agusblog.com/wordpress/what-is-a-virus-signature-are-they-still-used-3.htm
http://hooked-on-mnemonics.blogspot.com/2011/01/intro-to-creating-anti-virus-signatures.html
During a one-month vacation, I want to learn to code a simple virus-detection program:
So, there are 2-3 ways (from the above articles):
Virus Dictionary : Searching for virus signatures
Detecting malicious behavior
I want to take the 2nd approach. I want to start off with simple things.
As a side note, recently I encountered a software named "ThreatFire" for this purpose. It does a pretty good job.
The first thing I don't understand is how such a program can intervene in the execution of another program and prompt the user about its actions. Isn't that some kind of violation?
How does it scan the memory of other programs? A program is confined to its own virtual address space, right?
Is C#/.NET suitable for doing this kind of thing?
Please post your ideas on how to go about it, and mention some simple things that I could do.
This happens because the software in question likely has a special driver installed to give it low-level kernel access, which lets it intercept and deny various potentially malicious behavior.
By having the rights that many drivers do, it gains the ability to scan another process's memory space.
No. C# needs a good chunk of the operating system already loaded. Drivers need to load first.
Learn about driver and kernel-level programming... I've not done so myself, so I can't be of more help here.
I think system calls are the way to go, and a lot more doable than actually trying to scan multiple processes' memory spaces. While I'm not a low-level Windows guy, it seems this can be accomplished using Windows API hooks: tie-ins to the low-level API that can modify the system-wide response to a system call. These hooks can be installed as something like a kernel module, and can intercept and potentially modify system calls. I found an article on CodeProject that offers more information.
In a machine learning course I took, a group decided to try something similar to what you're describing for a semester project. They used a list of recent system calls made by a program to determine whether or not the executing program was malicious, and the results were promising (think 95% recognition on new samples). In their project, they trained using SVMs on windowed call lists, and used that to determine a good window size. After that, you can collect system call lists from different malicious programs, and either train on the entire list, or find what you consider "malicious activity" and flag it. The cool thing about this approach (aside from the fact that it's based on ML) is that the window size is small, and that many trained eager classifiers (SVM, neural nets) execute quickly.
Anyway, it seems like it could be done without the ML if it's not your style. Let me know if you'd like more info about the group- I might be able to dig it up. Good luck!
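The windowing step the group used is simple to sketch. Here the call names and window size are made up for illustration; a real trace would come from an API-hook log, and the resulting windows are the feature shape you would hand to a classifier or match against known-bad patterns:

```csharp
using System;
using System.Collections.Generic;

class CallWindows
{
    // Slide a fixed-size window over a system-call trace.
    public static List<string[]> Windows(string[] trace, int size)
    {
        var result = new List<string[]>();
        for (int i = 0; i + size <= trace.Length; i++)
        {
            var w = new string[size];
            Array.Copy(trace, i, w, 0, size);
            result.Add(w);
        }
        return result;
    }

    static void Main()
    {
        // Hypothetical trace of five calls, windowed in threes.
        string[] trace = { "NtOpenFile", "NtReadFile", "NtClose",
                           "NtOpenProcess", "NtWriteVirtualMemory" };
        var windows = Windows(trace, 3);
        Console.WriteLine(windows.Count);                // 3 windows from 5 calls
        Console.WriteLine(string.Join(",", windows[0])); // NtOpenFile,NtReadFile,NtClose
    }
}
```

Picking the window size is the tuning knob the group trained for: too small and a malicious sequence spans several windows, too large and the classifier sees mostly benign filler.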
Windows provides APIs to do that (generally they involve running at least some of your code in the kernel). If you have sufficient privileges, you can also inject a .dll into another process. See http://en.wikipedia.org/wiki/DLL_injection.
When you have the powers described above, you can do that. You are either in kernel space and have access to everything, or inside the target process.
At least for the low-level in-kernel stuff you'd need something more low-level than C#, like C or C++. I'm not sure, but you might be able to do some of the remaining pieces in a C# app.
The DLL injection sounds like the simplest starting point. You're still in user space, and don't have to learn how to live in the kernel world (it's completely different world, really).
Some loose ideas on topic in general:
you can interpose system calls issued by the traced process. It is generally assumed that a process cannot do anything "dangerous" without issuing a system call.
you can intercept its network traffic and see where it connects, what it sends and receives, which files it touches, and which system calls fail
you can scan its memory and simulate its execution in a sandbox (really hard)
with the system call interposition, you can simulate some responses to the system calls, but really just sandbox the process
you can scan the process memory and extract some general characteristics from it (connects to the network, modifies registry, hooks into Windows, enumerates processes, and so on) and see if it looks malicious
just put the entire thing in a sandbox and see what happens (a nice sandbox has been made for Google Chrome, and it's open source!)