How to make website status available to all platforms [closed] - c#

Recently, a project came my way with the following requirements:
1. Build a C# console app that continuously checks website availability.
2. Save website status somewhere so that different platforms can access the status.
The console app is complete, but I'm wrestling with where to save the status. I'm thinking a SQL record.
How would you handle where you save the status so that it's extensible, flexible, and available to any number of frameworks or platforms?
UPDATE: Looks like I'll go with DB storage behind a RESTful service. I'll also save the status to an XML file as a fallback in case the service is down.
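For reference, a minimal sketch of the kind of availability loop requirement 1 describes, using HttpClient (the URL and poll interval are illustrative assumptions):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class AvailabilityChecker
    {
        static void Main() => CheckLoop().GetAwaiter().GetResult();

        static async Task CheckLoop()
        {
            var client = new HttpClient { Timeout = TimeSpan.FromSeconds(10) };
            while (true)
            {
                bool isUp;
                try
                {
                    // Any 2xx status counts as "up"; adjust to taste.
                    var response = await client.GetAsync("https://example.com/");
                    isUp = response.IsSuccessStatusCode;
                }
                catch (HttpRequestException) { isUp = false; }  // DNS failure, connection refused, etc.
                catch (TaskCanceledException) { isUp = false; } // timeout
                Console.WriteLine($"{DateTime.UtcNow:o} example.com up={isUp}");
                await Task.Delay(TimeSpan.FromSeconds(30)); // poll interval
            }
        }
    }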

The availability of the websites could be POSTed to a second web service, which would return a JSON/XML result on the availability of said website(s). This pretty much means any platform/language capable of making a web-service call can check the availability of the website(s).
Admittedly, this does give a single point of failure (the status web service), but inevitably you'll end up with that kind of thing anyway unless you want to start having fail-over web services, etc.
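A minimal sketch of the reporting side of that idea, assuming a hypothetical /api/status endpoint that accepts a JSON body:

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    static class StatusReporter
    {
        static readonly HttpClient Client = new HttpClient();

        // POSTs one availability sample to the (hypothetical) status service.
        public static async Task ReportAsync(string site, bool isUp)
        {
            var json = "{\"site\":\"" + site + "\",\"isUp\":" + (isUp ? "true" : "false") +
                       ",\"checkedAtUtc\":\"" + DateTime.UtcNow.ToString("o") + "\"}";
            var content = new StringContent(json, Encoding.UTF8, "application/json");
            var response = await Client.PostAsync("https://status.example.com/api/status", content);
            response.EnsureSuccessStatusCode();
        }
    }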

You could save it as XML, which is platform independent, and then share it by publishing it on a web server. It seems ironic to share website availability on another website, but just like websites, other types of servers/services can have downtime too.
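A sketch of that XML file, using the built-in XmlSerializer (the SiteStatus shape and file path are assumptions):

    using System;
    using System.IO;
    using System.Xml.Serialization;

    public class SiteStatus
    {
        public string Site { get; set; }
        public bool IsUp { get; set; }
        public DateTime CheckedAtUtc { get; set; }
    }

    static class StatusFile
    {
        // Writes the latest status to an XML file that any platform can parse.
        public static void Save(SiteStatus status, string path = "status.xml")
        {
            var serializer = new XmlSerializer(typeof(SiteStatus));
            using (var writer = new StreamWriter(path))
            {
                serializer.Serialize(writer, status);
            }
        }
    }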

You could create a web service; you will probably need to open fewer unusual firewall ports to reach an HTTP server than to reach a SQL Server database. You can also extend that service layer to add business rules more easily than you could at the database level.

I think a web service is the best option. Just expose a RESTful API that returns a simple JSON response with the server status. Fast and cheap on resources.
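A minimal sketch of such an endpoint with ASP.NET Web API (the controller shape and hard-coded payload are illustrative; a real version would read from storage):

    using System;
    using System.Web.Http;

    public class StatusController : ApiController
    {
        // GET api/status — returns the latest recorded status as JSON.
        [HttpGet]
        public IHttpActionResult Get()
        {
            return Ok(new
            {
                site = "example.com",
                isUp = true,
                checkedAtUtc = DateTime.UtcNow
            });
        }
    }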

Don't re-invent the wheel. Sign up for Pingdom, Montastic, AlertBot, or one of the plethora of other pre-existing services that will do this for you.
But, if you really must, a database table would be fine.
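If you go the table route, a sketch of the insert with plain ADO.NET (the table and column names are assumptions):

    using System;
    using System.Data.SqlClient;

    static class StatusDb
    {
        // Assumes a table like:
        //   CREATE TABLE SiteStatus (Site nvarchar(200), IsUp bit, CheckedAtUtc datetime2)
        public static void Insert(string connectionString, string site, bool isUp)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(
                "INSERT INTO SiteStatus (Site, IsUp, CheckedAtUtc) VALUES (@site, @isUp, @at)", conn))
            {
                cmd.Parameters.AddWithValue("@site", site);
                cmd.Parameters.AddWithValue("@isUp", isUp);
                cmd.Parameters.AddWithValue("@at", DateTime.UtcNow);
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }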

Related

Prevent API client from leaking sensitive data [closed]

Suppose I have an API endpoint such as the Facebook Graph API, and I design an application running on my PC that periodically connects to the API and retrieves my posts, comments, etc. On each Timer_Tick, the program reconnects to the API, pulls the top 10 data items, and persists them to a database.
Now, suppose that this application was built by a third party and I just downloaded it from the internet as a binary, not open source.
How can I know whether the application is leaking my Facebook data to a third party without my knowledge?
Is there a mechanism to monitor for such leaking? (From a programmatic perspective.)
This is a matter of security. To be really sure, you would have to think about every vulnerability here and confirm there is no way to reveal data through the known ones, and you still cannot be sure about unknown ones.
If this is a matter of trust and you are dealing with sensitive data, I strongly recommend you avoid third-party tools unless they are provided or certified by the API provider. Here are some techniques which will help you understand what is going on in the background, but they definitely will not guarantee safety:
1- First of all, make sure the application is really binary code (I know you mentioned it as a binary); some executable files are just scripts or semi-scripts that look like binaries. For instance, if the executable was written in C#, Python, or Java, there are tools out there that will help you decompile the application and find out what's going on inside. This approach can of course be considerably tough if, for example, the code is obfuscated or complex OO models are involved.
2- Use network monitoring tools like Wireshark (or any other capture tool) to record all HTTP/HTTPS traffic while using the third-party application. Because the API is just HTTP requests that applications use to exchange data, you can use these tools to monitor what's going on on your computer. Normally this application should connect only to the Facebook servers and the URLs needed to use the web API; if any request is sent to or received from a server other than Facebook, there is a chance of a data leak. If those requests are not encrypted with SSL/TLS, you will be able to see the data being exchanged; if they are encrypted with SSL/TLS, there are tools that set up a man-in-the-middle to inspect the traffic. If the data is additionally encrypted at the application layer, you won't be able to see what is being transmitted at all, which itself raises suspicion and an even higher chance of a data leak. Don't forget that this monitoring must cover the entire usage cycle of the application.
Also, limiting the application with the OS firewall so it can talk only to the server whose API you are calling will be a step forward in decreasing the chance of a data leak.

Architect an API layer [closed]

Broad, general question:
We have an app that is quickly approaching legacy status. It has three clients: Windows app, browser (web site) and mobile. The data access layer is convoluted at best and has grown organically over the years.
We want to re-architect the whole thing. The Windows app will go away ('cause who does that any more?). We will only have the browser-based website and the mobile app, both of which will consume the (as of today, non-existent) API layer. This API layer will also be partially exposed to third parties should they wish to do their own integrations.
Here's my question:
What does this API layer look like? And by that, I mean... let's start in the Visual Studio solution. Is the API a separate website? So on IIS, will there be two sites... one for the public-facing website and another for the API layer?
Or should the API be a .dll within the main website, with the endpoint URLs part of the single website in IIS?
Eventually, we will want to update and publish one without impacting the other. I'm just unsure, on a very high level, how to structure the entire thing.
(If it matters: each client has their own install, either locally on their network or cloud-hosted. All DBs are single-tenant.)
I would think one single solution with multiple projects.
If you host the API as a separate site, you can easily add something like Swagger, via Swashbuckle, to your API:
https://github.com/domaindrivendev/Swashbuckle
This will make documentation easy.
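A sketch of the usual Swashbuckle setup for ASP.NET Web API, per the project README (the API title is an assumption):

    using System.Web.Http;
    using Swashbuckle.Application;

    public static class SwaggerConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Serves the swagger JSON plus the interactive swagger-ui pages.
            config.EnableSwagger(c => c.SingleApiVersion("v1", "My Company API"))
                  .EnableSwaggerUi();
        }
    }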
From here you would want to put your business logic (the things that do specific things) in a third project.
Then you have two options: the web page can consume your own API, or it can reference the business-logic project directly.
An API on a different site offers some additional benefits if it is public-facing:
Separation of domain
Load balancing and added protection
Resource limiting and throttling without site impact
These kinds of projects are a lot of fun, so consider your options and what will fit best.
I hope this helps!
My preferred way of doing this is with a separate API project. You publish the API project to one URL and the website to another. This lets you develop both applications with no interference.
That said, I normally put the logic of the API in a service layer (SOA architecture). My API project just passes the input to the service layer and responds with the service's response. This way you can split the API between public and private and still keep all the logic in one place.
Usually I create an API wrapper as a separate project to handle all API calls, so other devs can use the wrapper (just to make talking to the API easier for my fellow devs).
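A minimal sketch of such a wrapper, assuming a hypothetical GET api/orders/{id} endpoint:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    // A thin client other devs can reference instead of hand-rolling HTTP calls.
    public class MyApiClient
    {
        private readonly HttpClient _http;

        public MyApiClient(string baseUrl)
        {
            _http = new HttpClient { BaseAddress = new Uri(baseUrl) };
        }

        // Wraps GET api/orders/{id}; returns the raw JSON for brevity.
        public async Task<string> GetOrderJsonAsync(int id)
        {
            var response = await _http.GetAsync("api/orders/" + id);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }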

Are inter-service WCF REST calls the best way to communicate between application components? [closed]

As part of a decoupling process to facilitate horizontal application scaling, we are slowly forking the stateless parts of our application out into separate services, which are either served on the same AWS IIS instance or spun off onto a new one if they need to be.
It is becoming apparent that some of the services that make up the application are ideal candidates for decoupling. However, I am unsure whether WCF REST calls are the best way for them to communicate with each other.
What is the best solution for inter-component communication in a hosted application via WCF? Currently there are no framework restrictions on the services, so anything .NET is fine.
REST is only a good fit if you're calling your services from a client you have less control over and need to provide a standard interface to, or if you're calling over the web where the HTTP protocol must be used.
If you have a system where you control the client and it can use any mechanism to call the service, then use a more efficient communication channel. With WCF you can use the net.tcp binding, which will be much faster (or WWS, which is faster still), or go the whole hog and use something like Protocol Buffers, Thrift, or another RPC framework to get the most performance.
I would also run these as dedicated services, outside of IIS, so they are less dependent on the whole web infrastructure. You can then deploy them on any box and harden them without the "monoculture" risk, or rewrite them for efficiency in C++.
If you're in a full .NET environment, use WCF without REST, so that clients can easily create local class references and communicate over the web-service layer. Ideally, expose (and consume) the services over the net.tcp binding if you really want an optimal solution.
Leave the REST(ful) endpoints exposed through WCF for web-client consumption, where XML and/or JSON is involved.
The biggest benefit of WCF is that you can expose multiple endpoints for the same contracts (interfaces), so your .NET middle layer can be exposed via one endpoint while other clients are served via web endpoints (such as webHttpBinding) as needed.
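A sketch of one service exposing both a net.tcp endpoint and a RESTful webHttpBinding endpoint on the same contract (the service, ports, and paths are illustrative):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;
    using System.ServiceModel.Web;

    [ServiceContract]
    public interface IStatusService
    {
        [OperationContract]
        [WebGet(UriTemplate = "status", ResponseFormat = WebMessageFormat.Json)]
        string GetStatus();
    }

    public class StatusService : IStatusService
    {
        public string GetStatus() { return "OK"; }
    }

    class Program
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(StatusService));

            // Fast binary endpoint for .NET-to-.NET calls.
            host.AddServiceEndpoint(typeof(IStatusService),
                new NetTcpBinding(), "net.tcp://localhost:8523/status");

            // RESTful JSON endpoint for web clients, on the same contract.
            var webEndpoint = host.AddServiceEndpoint(typeof(IStatusService),
                new WebHttpBinding(), "http://localhost:8080/status");
            webEndpoint.Behaviors.Add(new WebHttpBehavior());

            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }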

Most efficient interface for enduser: Winforms/WCF versus Web Browser access [closed]

What I need to do is meet my competition and 'update' my WinForms application so that it is fully accessible via an online service.
I'm not sure whether to recode my WinForms app so that it uses WCF/OData to access the database, or whether the whole app will need rewriting as a WebForms app with the database moved to the hosted website. The latter option will likely be the more difficult of the two given my coding experience. At present, the database resides on the end-user's PC and the WinForms app provides 100% of the end-user's access. There is also a web interface that gives the end-user's clients access to reservations, their own user details, etc. The web interface is self-hosted on the end-user's PC.
With regard to serving up the data to the end user, will there be an appreciable time difference between my proposed WinForms app retrieving and consuming the data from a remotely hosted database VERSUS a fully hosted WebForms/database app? Can this difference be quantified before I take the plunge?
Web versus Desktop is a huge topic.
My two cents:
Web:
1 - Pros:
Accessible from all kinds of devices (PC, Mac, smartphone, tablet).
No installation required.
Only server deployment/updates required.
2 - Cons:
Stateless (this means no client-side cache, for example).
Less control over the client computer's features (file system and the like).
Browser hell (UI looks/behaves differently in every browser).
Harder to code due to crappy JavaScript everywhere.
Web-based vulnerabilities (such as XSS and the like).
.NET Windows Desktop:
1 - Pros:
Stateful (you can cache lots of data on the client).
More control over the client computer's features.
Looks the same on every machine.
Easier to code due to the stateful nature and no JavaScript (hooray).
2 - Cons:
Only works on Windows. No smartphone, no Mac, no tablet.
Client installation required (which might include a .NET Framework installation).
Server + client updates required (easier with ClickOnce).
That said, WinForms is a really old technology that no one cares about any more and that is no longer evolving. Web applications can be made to look and feel beautiful with some CSS; WinForms looks dated no matter how hard you try to improve it.
If you go the Windows desktop route, you'd do better to upgrade your application to WPF.

Moving from single user desktop applications to multiuser development [closed]

I'm trying to bootstrap a micro-ISV on my nights and weekends. I have an application at a very early stage of development. It is written in C# and consists mainly of a collection of classes representing the problem domain. At this point there's no UI or data persistence. (I haven't even settled on the .NET platform; it's early enough that I could change to Java or native executables.)
My goal for this application is that it will be a hybrid single user/ occasionally connected multiuser application. The single user part will use an embedded database for local storage. This is a development model I'm familiar with.
The multiuser part is where I have no prior experience. I know each user will need two things:
IP based communication to a remote server on the public internet
User authentication and remote data storage
I have an idea of what services I want this server to provide (information lookup and user-to-user transactions), but beyond that I'm out of my element. The server will need to be hosted by a third party, since I don't have the resources to run my own server. Keeping in mind that I will be the sole developer on this project for the foreseeable future:
Which technologies would be the simplest way to implement the two things mentioned above? Direct access to the datastore/database, or is it better to isolate it? Should I implement a web service? If so, SOAP or REST?
What other things do I need to consider when moving to a multiuser application?
I know security is a greater concern in a multiuser application, especially when you're dealing with any kind of banking information (which I will be). Performance can be an issue when dealing with a remote connection and large numbers of users. Anything else I'm overlooking?
Regarding moving to a multiuser application, centralising your data is the first step of course, and the simplest way to achieve it is often to use a cloud-based database, such as Amazon SimpleDB or MS Azure. You typically get an access key and a long 'secret' for authentication.
If your data isn't highly relational, you might want to consider Amazon SimpleDB. There are SDKs for most languages, which allow simple code to store/retrieve data in your SimpleDB database using a key and secret, anywhere in the world. You pay for the service based on your data storage and volume of traffic, so it has a very low barrier of entry, especially during development. It will also scale from a tiny home application up to something of the size of amazon.com.
If you do choose to implement your own database server, you should remember two key things:
Ensure no session state exists, i.e. the client makes a call to your web service, some action occurs, and the server forgets about that client (apart from any changed data in the database of course). Similarly the client should not be holding any data locally that could change as a result of interaction from another user. Cache locally only data you know won't change (or that you don't care if it changes).
For a web service, each call will typically be handled on its own thread, so you need to ensure that access to the database from multiple threads is safe. If you use the standard .NET or Java ways of talking to a SQL database, this is handled for you; however, if you implement your own data storage, it is something you'd need to worry about.
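A sketch of the connection-per-call pattern that keeps this safe in .NET, leaning on the built-in connection pooling (the table and query are assumptions):

    using System.Data.SqlClient;

    static class LookupService
    {
        // Each web-service call opens its own connection; pooling makes this
        // cheap, and no connection object is ever shared between threads.
        public static int CountUsers(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Users", conn))
            {
                conn.Open();
                return (int)cmd.ExecuteScalar();
            }
        }
    }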
Regarding the question of REST/SOAP etc., a key consideration should be what kinds of platforms/devices you want to use to connect to the database server. For example if you were implementing your server in .NET you might consider WCF for implementing your web services. However that might introduce difficulties if you later want to use non-.NET clients. SOAP is a mature technology for web services, but quite onerous to implement, and libraries to wrap up the handling of SOAP calls may not necessarily be available for a given client platform. REST is simple to implement (trivially easy if you use ASP.NET MVC on your server), accessible by any client that can handle HTTP POST/GET without the need for libraries, and easy to test, so REST would be my technology of choice.
If you are sticking with .NET (my personal preference), I would expose the data-access calls via WCF. WCF configuration is really flexible and pretty easy to pick up, and you'll want to hide your DB behind a service layer.
1. Direct access to the DB is the simplest, and the worst. Just think about how you'd authenticate the DB access... I would just write a remote-able API with serializable parameters and worry about which transport to connect later (web services, IIOP, whatever); the communication details are all wrapped and hidden anyway.
2. None.
