Let's say, for example, you have 5 different companies, all on the same Windows-based platform, and each has written its own web services. What technique using C# and .NET 3.5 would you recommend for monitoring all of their different web services?
My intention is to build an application that provides visual feedback on service statuses to site administrators, and of course e-mail/SMS alerts as needed. Is there a best practice or methodology to follow, in your opinion?
In addition, are there any Windows-based tools available for this that I'm unaware of? Preferably open-source?
Edit: Think of the end result: an application that just shows a red or green light next to the services running across the different companies.
Company 1
> Web Service 1 - Green
> Web Service 2 - Green
Company 2
> Web Service 1 - Red
> Web Service 2 - Green
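At its simplest, such a dashboard could sweep a list of endpoints with an HTTP GET and map success to Green and failure to Red. A minimal sketch of that check (the service names and URLs are placeholders; HttpWebRequest is used because it exists in .NET 3.5):

```csharp
using System;
using System.Collections.Generic;
using System.Net;

class ServiceStatusChecker
{
    // Placeholder endpoints - replace with the real service URLs per company.
    static readonly Dictionary<string, string> Services = new Dictionary<string, string>
    {
        { "Company 1 / Web Service 1", "http://company1.example.com/service1.asmx" },
        { "Company 1 / Web Service 2", "http://company1.example.com/service2.asmx" },
        { "Company 2 / Web Service 1", "http://company2.example.com/service1.asmx" }
    };

    static void Main()
    {
        foreach (KeyValuePair<string, string> svc in Services)
        {
            Console.WriteLine("{0} - {1}", svc.Key, IsUp(svc.Value) ? "Green" : "Red");
        }
    }

    static bool IsUp(string url)
    {
        try
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            request.Timeout = 5000; // fail fast so one dead service doesn't stall the sweep
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                return response.StatusCode == HttpStatusCode.OK;
            }
        }
        catch (WebException)
        {
            return false; // timeout, DNS failure, or non-2xx status
        }
    }
}
```

This only proves the service is reachable; the e-mail/SMS alerting and per-company grouping would sit on top of a loop like this.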
You should try PolyMon, a .NET-based, open-source monitoring tool on CodePlex:
http://polymon.codeplex.com/
For our case at least, it hit the sweet spot between functionality and a lean, easy setup.
You can choose from some out-of-the-box tasks like Ping or URL monitoring, but you can also easily implement your own more complex tasks. Works quite well for us.
The tool itself is not distributed, but you can easily set up two instances of the service (e.g. on servers in different locations) to monitor the same services, or use one instance to monitor the other.
We experienced just one issue, which was very annoying and a little freaky: a server running both PolyMon and the SQL Server instance used by PolyMon repeatedly crashed on reboot (an endless reboot loop), apparently due to some kind of race condition. I therefore strongly recommend hosting the PolyMon service and the SQL Server service on different (virtual) machines, or setting the PolyMon service's start-up type to "Manual" instead of "Automatic" and starting PolyMon by hand after everything else has booted.
Big Brother System and Network Monitor will probably do most of what you want. It is extensible, and plug-ins can be written in any language. They have a free edition of their monitoring software:
http://www.bb4.org/home1.html
You should also use a local monitor on each server being monitored, because it is difficult to diagnose problems remotely. This blog post has a good discussion of the problem and details of a local-monitor design pattern:
http://sleeksoft.co.uk/public/techblog/articles/20041218_1.html
The most commonly used open-source monitoring tool is Nagios. It has built-in support for many different services, and you can always write a script or app to test any service not already supported (see the sketch after the links below).
Nagios Home Page
Windows Monitoring Service
Instruction for setting up Windows service
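For reference, a custom Nagios check is just an executable that prints one status line and exits with a conventional code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN). A minimal sketch of such a check in C# (the URL argument handling is illustrative):

```csharp
using System;
using System.Net;

// Minimal Nagios-style plugin: exit code 0 = OK, 2 = CRITICAL, 3 = UNKNOWN.
class CheckWebService
{
    static int Main(string[] args)
    {
        if (args.Length != 1)
        {
            Console.WriteLine("UNKNOWN - usage: check_webservice <url>");
            return 3;
        }
        try
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(args[0]);
            request.Timeout = 10000;
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("OK - {0} returned {1}", args[0], (int)response.StatusCode);
                return 0;
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine("CRITICAL - {0}: {1}", args[0], ex.Message);
            return 2;
        }
    }
}
```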
If they write ASPX web services, all the monitoring best practices described for ASP.NET applications are applicable, such as the MSDN article ASP.NET Health Monitoring Overview and ASP.NET Health Monitoring.
Also see the SO question Tools and methods for live-monitoring ASP.NET web applications?
Standard tools will allow you to ping the IP or probe the HTTP port - these are cheap and simple ways of verifying that the web service is minimally available. To validate that the services are also fully functional, you will need to do a bit more: your monitoring package will need valid credentials to log in to the various web services, not to mention specific proxies and business logic to execute against each one.
Have you looked at MOM (especially MOM management packs)?
There are tools like IPSentry with which you can make HTTP requests and check the returned data.
I know this is a pretty old thread, but I thought I would add Wolfpack to the list for anyone still looking for a "distributed system monitor" - Wolfpack was designed (by me) as exactly that. It can run multiple "agents" that collect data about the servers they monitor and report it to a central server instance.
It has a rich set of monitoring and storage plugins, and you can easily roll your own for custom checks (there are numerous supporting NuGet packages available - just search for Wolfpack on nuget.org). There is an active contrib project too, and it's really easy to install via chocolatey.org (cinst wolfpack)!
It is open source and completely free! I'm also rewriting major parts of it at the moment for the next v3.0 release, which will support SignalR alert notifications, PowerShell, and a full web API/interface (via ServiceStack).
Suppose I have an API endpoint such as the Facebook Graph API, and I design an application running on my PC that periodically connects to the API and retrieves my posts, comments, etc. On each Timer_Tick, the program reconnects to the API, fetches the top 10 data items, and persists them to a database.
Now suppose this application was built by a third party, and I just downloaded it from the internet as a binary, not open source.
How can I know if the application is leaking my Facebook data to a third party without my knowledge?
Is there a mechanism to monitor for such leaks, from a programmatic perspective?
This is a matter of security, and to be sure you would have to think through every vulnerability here. You can try to make sure data cannot be revealed through the known vulnerabilities, but you can never be sure about unknown ones.
If this is a matter of trust and you are dealing with sensitive data, I strongly recommend avoiding 3rd-party tools unless they are provided or certified by the API provider. Here are some techniques which will help you understand what is going on behind the scenes, but they definitely won't guarantee safety:
1. First of all, make sure the application really is binary code (I know you mentioned it is a binary); some executable files are just scripts or semi-scripts that merely look like binaries. For instance, if the executable was written in C#, Python, or Java, there are tools that will help you decompile the application and find out what's going on inside. This can of course be considerably tough if, for example, the code is obfuscated or complex OO models are involved.
2. Use a network monitoring tool like Wireshark (or any other) to capture all HTTP/HTTPS traffic while the 3rd-party application is in use. Because the API is just HTTP requests that applications use to exchange data, these tools let you monitor what's going on on your computer. Normally this application should only connect to the Facebook servers and the URLs needed to use the web API; if any request is sent to, or a response received from, a server other than Facebook, there is a chance of a data leak. If those requests are not encrypted by SSL/TLS, you will be able to see the data being exchanged. If they are encrypted with SSL/TLS, there are man-in-the-middle tools that let you inspect the traffic; but if the payload is encrypted at the application layer, you won't be able to see what is being transmitted, which itself is suspicious and suggests an even higher chance of a leak. Don't forget that this monitoring must cover the entire usage cycle of the application.
Also, using the OS firewall to limit the application so it can talk only to the server on which you are calling the API will be a step toward decreasing the chance of a data leak.
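As a concrete illustration of the firewall approach, Windows Firewall can be scripted with netsh from an elevated prompt. This sketch sets the outbound default to block and then whitelists the application for a specific address range; the program path and IP range are placeholders, and note that the first command blocks outbound traffic for every other program until further allow rules are added:

```
rem Assumes an elevated prompt; the path and IP range below are placeholders.
rem 1. Block all outbound traffic by default.
netsh advfirewall set allprofiles firewallpolicy blockinbound,blockoutbound

rem 2. Allow the suspect application to reach only the API provider's servers.
netsh advfirewall firewall add rule name="FB client - API only" dir=out action=allow ^
    program="C:\Apps\FbClient.exe" remoteip=157.240.0.0/16 enable=yes
```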
In short, I am looking for the best method to provide a REST or SOAP API server in a .NET Framework application (e.g. Windows Forms) - without admin rights in some cases.
What is currently the best way of providing a web-based REST or SOAP API in a possibly portable C# application?
Basically I need something that supports the basic HTTP standards out of the box (e.g. Expect: 100-continue and others) and at the same time is able to instantiate the classes of my C# program directly (for performance and ease-of-use reasons).
The Microsoft way is to either use IIS (possibly with ASP.NET) or go for HttpListener. IIS can never be run in a portable way and requires a lot of installation and system-administration work. HttpListener, on the other hand, is not even close to being a web server; I would need to implement all the standard web-server behavior on my own.
I have been looking around this topic for years now; one example is this [old] question: Alternative to HttpListener?
Unfortunately this one links to a discontinued project.
Any ideas?
[EDIT] The question targets not only C# but also .NET Framework 2-4.5. The result should be usable in e.g. Windows Forms, Windows Service, and command-line applications.
Currently I am using a skeleton web server based on HttpListener, so I need to implement all the parsing of requests, formatting of answers, and reactions to special HTTP commands on my own (which seems to be a never-ending task): https://www.codeproject.com/Articles/17071/Sample-HTTP-Server-Skeleton-in-C
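For reference, the skeleton approach boils down to something like the following minimal sketch (error handling and request parsing omitted). It also shows where the admin-rights problem comes from: listening on an arbitrary prefix requires a URL ACL reservation.

```csharp
using System;
using System.Net;
using System.Text;

class MiniServer
{
    static void Main()
    {
        // Non-admin users must first be granted the prefix via
        // "netsh http add urlacl" - which is exactly the pain point.
        HttpListener listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/api/");
        listener.Start();
        Console.WriteLine("Listening on http://localhost:8080/api/ ...");

        while (true)
        {
            HttpListenerContext context = listener.GetContext(); // blocks
            byte[] body = Encoding.UTF8.GetBytes("{\"status\":\"ok\"}");
            context.Response.ContentType = "application/json";
            context.Response.ContentLength64 = body.Length;
            context.Response.OutputStream.Write(body, 0, body.Length);
            context.Response.Close();
            // Everything else - routing, verbs, content negotiation,
            // SOAP envelopes - is left to you: the "never-ending task".
        }
    }
}
```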
You could try the Griffin web server. I've used it to embed a web server into applications to host a simple web interface, serve files, and provide a REST API for my application.
The biggest advantage for me versus the embedio project (which is excellent) is that it doesn't require admin privileges to run. It looks like there is no SOAP integration out of the box, though.
You should be able to do what you want using .NET Core. You can fairly easily build a self-hosted API using it that's independent of IIS. Tutorials should be easy to find, and here is a Microsoft example.
As ilikesleeping suggested, you could use .NET Core, but there are complications in making it work as a service.
I suggest using the Microsoft OWIN framework. It's a really simple and straightforward way of building RESTful applications. It works fine as a console app or a service, and of course in console mode you can display a Form should you wish to.
Here are some links to get kickstarted:
https://learn.microsoft.com/en-us/aspnet/web-api/overview/hosting-aspnet-web-api/use-owin-to-self-host-web-api
https://learn.microsoft.com/en-us/aspnet/aspnet/overview/owin-and-katana/getting-started-with-owin-and-katana
https://blog.decayingcode.com/post/Creating-a-Self-Hosted-OWIN-Application/
https://weblogs.asp.net/fredriknormen/creating-a-simple-rest-like-service-with-owin-open-web-server-interface
EDIT:
...and here's a topic on how to have a middleware that hosts a SOAP endpoint over OWIN: Any way to get OWIN to host a SOAP service?
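For completeness, self-hosting Web API over OWIN takes only a few lines once the Microsoft.AspNet.WebApi.OwinSelfHost NuGet package is installed. A minimal sketch (the port and controller are illustrative):

```csharp
using System;
using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

public class Startup
{
    // Configures Web API routing for the OWIN pipeline.
    public void Configuration(IAppBuilder app)
    {
        HttpConfiguration config = new HttpConfiguration();
        config.Routes.MapHttpRoute(
            "DefaultApi", "api/{controller}/{id}",
            new { id = RouteParameter.Optional });
        app.UseWebApi(config);
    }
}

// Example controller: GET http://localhost:9000/api/status
public class StatusController : ApiController
{
    public IHttpActionResult Get()
    {
        return Ok(new { status = "running" });
    }
}

class Program
{
    static void Main()
    {
        // Works in a console app, a Windows service, or behind a WinForms UI.
        using (WebApp.Start<Startup>("http://localhost:9000/"))
        {
            Console.WriteLine("Listening on http://localhost:9000/");
            Console.ReadLine();
        }
    }
}
```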
I am the author of this question. I just wanted to make obvious for future readers what I learned here:
The most interesting thing about this question is that it is a "shopping" question: the accepted answer cannot be based on facts, only on subjective preference. Most of the suggested methods fit the described use case.
This is the reason why some users did not want to write an answer and put their suggestions in comments instead. Strange, but this is how SO works: we prefer scientifically correct answers here!
By the way, this was my first "bounty" question. I have been an active SO user for about 3 weeks (and a passive one for years, like most people).
I have to write an API-client system that connects to multiple API servers, does a job, and disconnects. It does two simple things, but needs to do them at scale (i.e. aiming for 200-500 million outbound API client calls per day):
(1) A simple client connects to an API server (HTTP/REST), sends a query, receives a (text-based) response, saves the response for later, and moves on to the next server/query.
Once responses start coming in, a separate process will:
(2) parse the text in the responses and add it to a large file/queue for reporting.
I currently have a test system in C#: 20 console applications running on one machine, with 20 threaded clients in each console application carrying out the work. I need to be able to scale this up on demand. What is the best approach? I am sure a solid pattern exists for this simple problem.
My thoughts so far are:
-> design a management system that, depending on the volume of API servers to be queried in a given hour, orchestrates the provisioning of virtual machines (not trying to reinvent the wheel - I will hook into any existing framework like Chef/Puppet if suitable)
-> have a central system for collecting data from the API clients (perhaps a Node instance passing the data off to RabbitMQ for later pickup/processing)
-> have a separate management system that orchestrates the text parsing of data received from the API clients
-> as the project is network-latency bound, I believe the development language is not really relevant as long as it has good network support
My main questions, then, are:
(1) What would be the most appropriate language/framework to implement this in, to enable a lean, cost-effective system? I.e., there is no point in spinning up multiple Windows VMs if they have a bigger footprint/overhead/cost than doing the same thing on Linux (in which case I could use the Mono framework - getting the benefit of the C# my team knows with the lower cost of Linux VMs).
(2) Is my thinking correct that I have to spin up multiple VMs to do this (albeit small VMs, each running X client applications)?
(3) Another approach I thought of is to write the clients in JavaScript, the reasoning being that the bottleneck for the API client is the network and API-server response time, not the client side, so it might be well suited to async work. In that case I could have one Node server running 100x more API clients than I could ever fit into even a bunch of micro Windows VMs.
(4) Finally, am I reinventing the wheel? Is there anything on Amazon or Azure already that I can plug into that would provide a ready-made framework for what I need?
All comments, suggestions, and guidance are most welcome.
Many thanks.
I am not a specialist in what Amazon provides. Here is what you can use on Azure depending on your needs:
Worker role - this is pretty much a scalable virtual machine. You can scale out or autoscale by condition.
AppFabric and microservices - for more complex deployments and more granular development infrastructures.
Azure Functions - an interesting scalable and cost effective processing option. Check it out.
In terms of choosing the language, I would use Node.js if your application is not too complex and is not going to become so in the near future. C# is better for more solid systems with complex architecture. Both platforms are supported on Azure.
"Have a central system for collection of data from the api-clients (perhaps a node instance passing the data off to RabbitMQ for later pickup/processing)"
If you need really big throughput, RabbitMQ may not be enough. On Azure you can use Event Hubs. More info here.
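On the client side of that language choice, it's worth noting that C# can match Node's async model: a single process using async/await can keep hundreds of requests in flight, without the 20-console-apps-of-20-threads setup. A rough sketch (requires .NET 4.5+; the endpoint list is illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class ApiSweep
{
    // One shared HttpClient: sockets are reused across all concurrent calls.
    static readonly HttpClient Client = new HttpClient();

    static void Main()
    {
        RunAsync().GetAwaiter().GetResult();
    }

    static async Task RunAsync()
    {
        // Illustrative endpoints; in practice the list comes from the scheduler.
        IEnumerable<string> urls = Enumerable.Range(1, 500)
            .Select(i => string.Format("http://api{0}.example.com/query", i));

        // Hundreds of requests in flight from one process, no thread per call.
        string[] responses = await Task.WhenAll(urls.Select(QueryAsync));
        Console.WriteLine("Received {0} responses", responses.Length);
        // Hand the responses off to the queue (RabbitMQ/Event Hubs) for parsing.
    }

    static async Task<string> QueryAsync(string url)
    {
        try
        {
            return await Client.GetStringAsync(url); // no thread blocked while waiting
        }
        catch (HttpRequestException ex)
        {
            return "ERROR " + url + ": " + ex.Message;
        }
    }
}
```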
"Finally, am I reinventing the wheel?" Its a good question - you might be. From your description, you have a lot of proprietary management of servers going on - and a lot of VMs. Depending on your workload, you may not need need manage any traditional VMs at all. Avoid that if you can to keep things lean. There are some great technologies that make server management (patching, security, server administration, etc) a thing of the past for many work loads: event-driven computing frameworks such as AWS Lambda.
Consider a server-less implementation using the API gateway pattern, and microservice architecure pattern, using the following AWS services:
AWS Lambda is a compute service where you can upload your code to AWS Lambda and the service can run the code on your behalf using AWS infrastructure. After you upload your code and create what we call a Lambda function, AWS Lambda takes care of provisioning and managing the servers that you use to run the code. Very light weight. The first 1 million requests per month are free
"Amazon API Gateway is a fully managed service that makes it easy for developers to publish, maintain, monitor, and secure APIs at any scale." $3.50 per million calls. Scaling, security and management all built in. Lambda supports the specification of HTTP endpoints via the API Gateway to trigger Lambda functions.
"AWS Lambda provides an easy way to build back ends without managing servers. API Gateway and Lambda together can be powerful to create and deploy serverless Web applications. In this walkthrough, you learn how to create Lambda functions and build an API Gateway API to enable a Web client to call the Lambda functions synchronously."
You can also integrate Data Pipeline for data transformation and Simple Queue Service for queuing/messaging, if your workloads need them.
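To give a flavor of how small the Lambda unit of deployment is, here is a hedged sketch of a C# Lambda handler performing one outbound API call. The names are placeholders, and the serializer attribute follows AWS's C# Lambda tooling, so treat the details as illustrative:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Amazon.Lambda.Core;

// Tells Lambda how to (de)serialize the handler's input and output.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]

namespace ApiSweeper
{
    public class Function
    {
        private static readonly HttpClient Client = new HttpClient();

        // Invoked via API Gateway or on a schedule; AWS provisions and scales
        // the underlying servers, so there are no VMs to manage.
        public async Task<string> FunctionHandler(string apiUrl, ILambdaContext context)
        {
            context.Logger.LogLine("Querying " + apiUrl);
            string response = await Client.GetStringAsync(apiUrl);
            // In the real pipeline, push the response to the queue here.
            return response;
        }
    }
}
```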
If you're doing anything stateful and at scale, Service Fabric might be a better choice than Azure Functions/Lambda or worker roles.
https://azure.microsoft.com/en-us/services/service-fabric/
I'm trying to bootstrap a micro-ISV on my nights and weekends. I have an application at a very early stage of development. It is written in C# and consists mainly of a collection of classes representing the problem domain. At this point there's no UI or data persistence. (I haven't even settled on the .NET platform; it's early enough that I could change to Java or native executables.)
My goal for this application is that it will be a hybrid single-user / occasionally-connected multiuser application. The single-user part will use an embedded database for local storage. This is a development model I'm familiar with.
The multiuser part is where I have no prior experience. I know each user will need two things:
IP based communication to a remote server on the public internet
User authentication and remote data storage
I have an idea of what services I want this server to provide (information lookup and user-to-user transactions), but beyond that I'm out of my element. The server will need to be hosted by a third party, since I don't have the resources to run my own. Keeping in mind that I will be the sole developer on this project for the foreseeable future:
Which technologies would be the simplest way to implement the two things mentioned above? Is direct access to the datastore/database acceptable, or is it better to isolate it? Should I implement a web service? If so, SOAP or REST?
What other things do I need to consider when moving to a multiuser application?
I know security is a greater concern in a multiuser application, especially when you're dealing with any kind of banking information (which I will be). Performance can be an issue when dealing with a remote connection and large numbers of users. Is there anything else I'm overlooking?
Regarding the move to a multiuser application: centralising your data is the first step, of course, and the simplest way to achieve that is often to use a cloud-based database such as Amazon SimpleDB or MS Azure. You typically get an access key and a long 'secret' for authentication.
If your data isn't highly relational, you might want to consider Amazon SimpleDB. There are SDKs for most languages, which allow simple code to store/retrieve data in your SimpleDB database using the key and secret, anywhere in the world. You pay for the service based on your data storage and volume of traffic, so it has a very low barrier to entry, especially during development. It will also scale from a tiny home application up to something the size of amazon.com.
If you do choose to implement your own database server, you should remember two key things:
Ensure no session state exists; i.e. the client makes a call to your web service, some action occurs, and the server forgets about that client (apart from any changed data in the database, of course). Similarly, the client should not hold any data locally that could change as a result of another user's interaction. Cache locally only data you know won't change (or that you don't care about if it does).
For a web service, each call will typically be handled on its own thread, and so you need to ensure that access to the database from multiple threads is safe. If you use the standard .NET or Java ways of talking to a SQL database, this should be handled for you. However, if you implement your own data storage, it would be something you'd need to worry about.
Regarding the question of REST/SOAP etc., a key consideration should be what kinds of platforms/devices you want to use to connect to the database server. For example if you were implementing your server in .NET you might consider WCF for implementing your web services. However that might introduce difficulties if you later want to use non-.NET clients. SOAP is a mature technology for web services, but quite onerous to implement, and libraries to wrap up the handling of SOAP calls may not necessarily be available for a given client platform. REST is simple to implement (trivially easy if you use ASP.NET MVC on your server), accessible by any client that can handle HTTP POST/GET without the need for libraries, and easy to test, so REST would be my technology of choice.
If you are sticking with .NET (my personal preference), I would expose data-access calls via WCF. WCF configuration is really flexible and pretty easy to pick up, and you'll want to hide your DB behind a service layer.
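To illustrate, a WCF service layer starts with a contract and a host. A minimal self-hosted sketch (the contract, implementation, and address are illustrative; in practice the binding usually lives in app.config):

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface ILookupService
{
    [OperationContract]
    string GetQuote(string symbol);
}

public class LookupService : ILookupService
{
    public string GetQuote(string symbol)
    {
        // A real implementation would hit the database behind this service layer.
        return symbol + ":42.00";
    }
}

class Host
{
    static void Main()
    {
        // BasicHttpBinding exposes a SOAP endpoint reachable from most platforms;
        // the binding can be swapped in config without touching the service code.
        using (ServiceHost host = new ServiceHost(
            typeof(LookupService), new Uri("http://localhost:8000/lookup")))
        {
            host.AddServiceEndpoint(typeof(ILookupService), new BasicHttpBinding(), "");
            host.Open();
            Console.WriteLine("Service running; press Enter to stop.");
            Console.ReadLine();
        }
    }
}
```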
1. Direct access to the DB is the simplest, and the worst. Just think about how you'd authenticate the DB access... I would instead write a remotable API with serializable parameters and worry later about which way to connect (web services, IIOP, whatever) - the communication details are all wrapped and hidden anyway.
2. None.
I want to create a small help-desk ticket control system at work that would allow users to enter a help-request ticket. These tickets would then be assigned to a technician to work on, and the technician would mark the ticket "FINISHED" after the job is done. The requesting user could then confirm and "CLOSE" the ticket, so that a help-desk supervisor can track response times and other stats based on the ticket details. Nothing too complicated, using .NET and SQL Server.
I am not sure whether I should develop this as a web application or a Windows application. The application would be used on the plant floor, so it would have to be easily available on the LAN. We currently host a list of Windows applications via Citrix, so deployment would not really be an issue here. I don't really have experience creating Windows apps from scratch (though I've modified quite a few), but it feels like a web application would not look as "solid".
What advice can readers provide to guide me in deciding the better architecture for this purpose?
EDIT
Thank you all for your thoughts! Given that this is a very simple application, I could go either way. I decided to go with a Web application, as our local Citrix setup still has some quirks that need to be fixed.
If you develop a web app, you can put it on your local intranet and your users can use either the browser within Citrix or the browser on their terminal.
However, if you've got the infrastructure in place, then perhaps a Windows application would be easier to develop and deploy. The limitation of a Windows application is that if you were to move away from the Citrix environment, or wanted to use the system beyond the plant floor, it would be harder to deploy and maintain your installations.
You can use web deployment with Windows applications, which is quite nice as the application updates itself whenever you publish a new version. However, it is a bit of a faff for the users, and you have no guarantee that a user will allow the update to occur. So if you had a critical update, users could, in effect, choose to ignore it.
That's where the web application gets its bonus points. One installation and one point of access. If you update it, then all users are instantly on the latest version.
Personally, I'd go with the web application for future-proofing and ease of access. It's slightly more work than a Windows application, but the payoff usually exceeds the extra time the web application requires.
Before writing this system, I would highly recommend searching www.codeplex.com to make sure that adapting an existing project isn't a better choice. You may find something already written that meets your needs while still letting you dig around, learn, and be ready to modify it when users want some new feature that isn't already present. (I believe all projects grow if the users believe in the developer.)
If you are going to write your own and can do it in the time you have, I would highly recommend going with MVC if web-based, or WPF (using MVVM) if you want a desktop client. There is a definite learning curve to either MVC or WPF with MVVM, but I believe the payoff will come. I have found changes much easier when there is a clear line between business logic and visual behavior.
Personally, in this situation I would go for a Windows application, as it doesn't sound as though you have any compelling reason to invoke the complexity of the web (perhaps it's just me who thinks web => additional complexity). I'm sure you could create a neat little Windows app in half the time it would take to create a clunky web version of the same thing!
As a sidenote:
I really like the way Eclipse Mylyn integrates with XML-RPC. Check this architecture out for inspiration:
http://www.eclipse.org/mylyn/
If you went for a similar strategy, you might start off with a simple front end (maybe C# with a native GUI) and augment it with a web-based integration with your intranet at a later point - whichever is fastest for you to do.
In essence, a 3-tier approach where you have:
The database.
The application layer, which implements an XML communication protocol (XML-RPC is quite simple).
A front end where information fields and workflow steps are 'introspected' rather than hard-coded in the client.
Just a thought; hope it helps.
Write a WinForms app and distribute it over ClickOnce. It's the best way to go, IMO.
Don't rush this decision. In the end, the web-vs-Windows question is about user accessibility. Much of the processing logic for your business need is independent of the interface. Spend your up-front time building the right data model and identifying the necessary processing/services. A well-designed DB and service layer will work with both web and Windows apps, and will give you the best flexibility as your "product" inevitably grows. You may very well want a web interface for managers needing reporting functionality, and a WinForms application if your users need more advanced processing abilities. That is when your initial design work will pay off.