Broad, general question:
We have an app that is quickly approaching legacy status. It has three clients: Windows app, browser (web site) and mobile. The data access layer is convoluted at best and has grown organically over the years.
We want to re-architect the whole thing. The Windows app will go away ('cause who does that any more?). We will only have the browser-based website and the mobile app, both of which will consume the (as of today, non-existent) API layer. This API layer will also be partially exposed to third parties should they wish to do their own integrations.
Here's my question:
What does this API layer look like? And by that, I mean... let's start in the Visual Studio solution. Is the API a separate website? So on IIS, will there be two sites... one for the public facing website and another for the API layer?
Or should the API be a .dll within the main website, with the endpoint URLs being part of the single website in IIS?
Eventually, we will want to update and publish one w/o impacting the other. I'm just unsure, on a very high level, how to structure the entire thing.
(If it matters: Each client has their own install, either locally on their network, or cloud hosted. All db's are single tenant.)
I would think one single solution with multiple projects.
If you put the API in a separate site, you can easily add something like Swagger via Swashbuckle to your API:
https://github.com/domaindrivendev/Swashbuckle
This will make documentation easy.
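With classic ASP.NET Web API, enabling Swashbuckle is only a couple of lines in the SwaggerConfig.cs file the NuGet package adds to App_Start (the title string here is just a placeholder):

```csharp
// App_Start/SwaggerConfig.cs (simplified from the file the Swashbuckle package generates)
using System.Web.Http;
using Swashbuckle.Application;

public static class SwaggerConfig
{
    public static void Register()
    {
        GlobalConfiguration.Configuration
            .EnableSwagger(c => c.SingleApiVersion("v1", "My Company API")) // placeholder title
            .EnableSwaggerUi(); // serves interactive docs under /swagger
    }
}
```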
From here you would want to put your business logic (the domain rules and operations specific to your application) in a third project.
From here you have two options. The webpage can consume your own API, or you can reference your business logic project.
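As a rough sketch of that three-project layout, assuming classic ASP.NET Web API and MVC (all type and project names here are illustrative): the business logic lives in a class library, the API project is a thin HTTP layer over it, and the website can reference the library directly.

```csharp
// Project 1: business logic class library
public class Order
{
    public int Id { get; set; }
}

public interface IOrderService
{
    Order GetOrder(int id);
}

// Project 2: Web API site, a thin HTTP layer over the logic
public class OrdersController : ApiController
{
    private readonly IOrderService _orders;
    public OrdersController(IOrderService orders) { _orders = orders; }

    public IHttpActionResult Get(int id)
    {
        return Ok(_orders.GetOrder(id));
    }
}

// Project 3: the website, here taking option two and
// referencing the business logic project directly
public class OrderPageController : Controller
{
    private readonly IOrderService _orders;
    public OrderPageController(IOrderService orders) { _orders = orders; }

    public ActionResult Details(int id)
    {
        return View(_orders.GetOrder(id));
    }
}
```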
API on a different site offers some additional benefit if it is public facing:
Separation of domain
Load balancing and added protection
Resource limiting and throttling without site impact
These kinds of projects are a lot of fun, so consider your options and what will fit best.
I hope this helps!
My preferred way of doing this is with a separate API project. You publish the API project to one URL and the website to another; this lets you develop both applications without interference.
That said, I normally put the logic of the API in a service layer (SOA architecture). My API project just passes the input through to the service layer and responds with the service's response. This way you can separate the API between public and private and still keep all the logic in one place.
Usually I create an API wrapper as a separate project to handle all API calls, so other devs can use the wrapper (just to make talking to the API easier for my fellow devs).
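A minimal sketch of such a wrapper, assuming Json.NET and an invented products endpoint:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

// Hypothetical DTO, shared with the API project
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The wrapper hides the HTTP details from fellow devs
public class ProductApiClient
{
    private readonly HttpClient _http;

    public ProductApiClient(string baseUrl)
    {
        _http = new HttpClient { BaseAddress = new Uri(baseUrl) };
    }

    public async Task<Product> GetProductAsync(int id)
    {
        var response = await _http.GetAsync("api/products/" + id);
        response.EnsureSuccessStatusCode();
        var json = await response.Content.ReadAsStringAsync();
        return JsonConvert.DeserializeObject<Product>(json);
    }
}
```

Callers then write `await client.GetProductAsync(42)` and never touch HttpClient themselves.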
We have a web application in ASP.NET MVC using Razor. Now, along with the web app, we need an Android app, so whatever operations are done in the MVC controllers need to move into Web API controllers. Is there any way to convert an MVC controller to a Web API controller? And is it a good approach to call the Web API from an MVC controller?
This is a very common scenario these days.
To answer your question: no, you cannot simply convert an MVC controller to Web API. For one, an API is stateless, so you need to take that into consideration.
My suggestion is to create a separate WebApi2 project and create the controllers you need there. There may not even be a 1 to 1 correlation to your MVC controllers.
Think of this web api project as your data layer, in a way. It will simply provide the data you need, maybe create some new things and that's it. If you need to save / load data from a database then that's where you do it so both the UI and mobile app use the same data store basically.
Start small: create one controller first with one method in it, then have your MVC app call it and use the data. When you deploy somewhere you will deploy two things:
The UI app
The WebApi project
This means you will need to keep the URL of the WebApi project somewhere so your UI knows about it.
Once you achieve this separation move to your mobile app and call the same WebApi method you've just implemented for the UI project. This will be your Proof of Concept basically.
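That UI-to-API call might look like this, with the API's base URL kept in web.config so the two deployments stay independent (the key name and route are illustrative):

```csharp
using System;
using System.Configuration;
using System.Net.Http;
using System.Threading.Tasks;

public class CustomerGateway
{
    public async Task<string> GetCustomerJsonAsync(int id)
    {
        // In web.config:
        // <appSettings><add key="WebApiBaseUrl" value="https://api.example.com/" /></appSettings>
        var baseUrl = ConfigurationManager.AppSettings["WebApiBaseUrl"];

        using (var client = new HttpClient { BaseAddress = new Uri(baseUrl) })
        {
            var response = await client.GetAsync("api/customers/" + id);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }
    }
}
```

Because only the appSettings value changes between environments, you can move the Web API project to a new server without recompiling the UI.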
An API comes with its own set of rules and challenges, for example:
Which methodology are you going to use? REST or not?
How are you going to secure it?
I suggest looking into OAuth2 with JWT for security and if you are interested I can provide some links.
Here is the blog of Taiseer Joudeh, who does a lot of work on OAuth2; you'll find loads of articles on the subject there: http://bitoftech.net/taiseer-joudeh-blog/
Here is an article I wrote on OAuth2 and JWT which will take you through a lot of different things :
https://eidand.com/2015/03/28/authorization-system-with-owin-web-api-json-web-tokens/
I always see a controller as a hatch. My controllers never have any business logic; any logic goes into separate libraries, which can easily back an API.
Be sure to make use of the await keyword for async methods when calling APIs.
So just moving your controller logic to an API should work, as long as you are not using many members of the base class your web controller inherits from.
As part of a decoupling process to facilitate horizontal application scaling, we are slowly forking out the stateless parts of our application into separate services, which are either served on the same AWS IIS instance or spun off onto a new one if they need to be.
It is becoming apparent that some services that comprise the web service are ideal candidates for decoupling. However, I am unsure as to whether WCF REST calls are the best way to communicate to each other.
What is the best solution to provide inter-component communication in a hosted application via WCF? Currently there are no framework restrictions for the services, so anything .net is fine.
REST is only good if you're calling your services from a client that you have less control over and need to provide a standard interface to, or if you're calling over the web where HTTP must be used.
If you have a system where your client is controlled by you and can use any mechanism to call the service, then use a communication system that is more efficient. If you use WCF you can use the net.tcp binding, which will be much faster (or WWS, which is faster still), or go the whole hog and use something like Protocol Buffers, Thrift, or another RPC framework to get the most performance.
I would also run these as dedicated services, outside of IIS, so they are less dependent on the whole web infrastructure. You can then deploy them on any box and harden them without the "monoculture" risk, or rewrite them for efficiency in C++.
If you're in a full .NET environment, use WCF without REST, so that the clients can easily create local class references and easily be able to communicate over the web service layer. Ideally, expose (and consume) the services as net.tcp binding if you really want an optimal solution.
Leave the REST(ful) endpoints exposed through WCF for web client consumption, where XML and/or JSON is involved.
The biggest benefit in WCF is you can expose multiple endpoints on the same contracts (interfaces), so your .NET middle layer can be exposed via one endpoint, while other services can be exposed via web endpoints (such as webHttpBinding) from the clients that need it.
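For instance, a self-hosted WCF service can expose the same contract over net.tcp for the .NET middle layer and over webHttpBinding for web clients (service name, operation, and addresses here are made up):

```csharp
using System.ServiceModel;
using System.ServiceModel.Description;
using System.ServiceModel.Web;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    [WebGet(UriTemplate = "orders/{id}", ResponseFormat = WebMessageFormat.Json)]
    string GetOrder(string id);
}

public static class HostSetup
{
    public static ServiceHost Start()
    {
        var host = new ServiceHost(typeof(OrderService));

        // Fast binary endpoint for .NET clients
        host.AddServiceEndpoint(typeof(IOrderService), new NetTcpBinding(),
            "net.tcp://localhost:8081/orders");

        // REST/JSON endpoint for web clients, on the same contract
        var webEndpoint = host.AddServiceEndpoint(typeof(IOrderService), new WebHttpBinding(),
            "http://localhost:8082/orders");
        webEndpoint.Behaviors.Add(new WebHttpBehavior());

        host.Open();
        return host;
    }
}
```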
Recently, a project came my way with requirements to ...
1. Build a C# console app that continuously checks website availability.
2. Save website status somewhere so that different platforms can access the status.
The console app is completed but I'm wrestling with where I should save the status. I'm thinking a SQL record.
How would you handle where you save the status so that it's extensible, flexible and available for x number of frameworks or platforms?
UPDATE: Looks like I'll go with DB storage fronted by a RESTful service. I'll also save the status to an XML file as a fallback in case the service is down.
The availability of the web-sites could be POSTed to a second web service which returned a JSON/Xml result on the availability of said website(s). This pretty much means any platform/language that is capable of making a web-service call can check the availability of the web site(s).
Admittedly, this does give a single point of failure (the status web service), but inevitably you'll end up with that kind of thing anyway unless you want to start having fail-over web services, etc.
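The monitor's report to that status service could be as simple as a JSON POST (the URL and field names here are invented):

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public static class StatusReporter
{
    public static async Task ReportAsync(string site, bool isUp)
    {
        // Anonymous object serialized to JSON, e.g. {"Site":"...","Up":true,...}
        var payload = JsonConvert.SerializeObject(new
        {
            Site = site,
            Up = isUp,
            CheckedAtUtc = DateTime.UtcNow
        });

        using (var client = new HttpClient())
        {
            var content = new StringContent(payload, Encoding.UTF8, "application/json");
            var response = await client.PostAsync("https://status.example.com/api/status", content);
            response.EnsureSuccessStatusCode();
        }
    }
}
```

Any platform that can issue an HTTP GET against the same service can then read the status back.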
You could save it as XML, which is platform independent. Then, to share it, you could use a web server and publish it there. It seems ironic to share website availability on another website, but just like websites, other types of servers/services can have downtime too.
You could create a web service; you will probably need to open fewer unusual firewall ports to connect to an HTTP server than to connect to a SQL Server database. You can also extend that service layer to add business rules more easily than at the database level.
I guess a web service is the best option. Just expose a RESTful API that returns a simple JSON response with the server status. Fast and cheap on resources.
Don't re-invent the wheel. Sign up for Pingdom, Montastic, AlertBot, or one of the plethora of other pre-existing services that will do this for you.
But, if you really must, a database table would be fine.
I have both a design question along with an overall question of the way WebAPI works.
We have an internal website that houses many applications (a separate MVC Area for each application). We have sectioned out the DAL logic into libraries that are fronted by web service calls. The models/repositories call out to CRUD things in several databases (some internal, some third party). So it looks like this:
UI -> Model -> Repository -> WebServices -> DB.
This was originally done because we needed to access multiple data access points and funnel them back to the internal website for various applications, and it seemed like a good way to abstract out all the logic so that the web application only focuses on the view end. This pattern has proven good for separation of concerns, but now we are looking into making this available to more than just .NET applications/clients, and that points me toward Web API.
Here are my questions:
My main question is: knowing that the web services are all done in WCF (contract based), how hard would it be to convert this to Web API, keeping in mind that we want the Web API service on a separate server from the UI?
Is there any way to set up Web API to have contracts and still use the HTTP verbs?
If I am remotely accessing the Web API service via an MVC application on another server in another solution, is there any way to still get the strongly typed objects that you get when you consume a WCF contract?
What are people's thoughts on this design pattern?
I'm trying to bootstrap a micro ISV on my nights and weekends. I have an application at a very early stage of development. It is written in C# and consists mainly of a collection of classes representing the problem domain. At this point there's no UI or data persistence. (I haven't even settled on the .NET platform. It's early enough that I could change to Java or native executables.)
My goal for this application is that it will be a hybrid single user/ occasionally connected multiuser application. The single user part will use an embedded database for local storage. This is a development model I'm familiar with.
The multiuser part is where I have no prior experience. I know each user will need two things:
IP based communication to a remote server on the public internet
User authentication and remote data storage
I have an idea of what services I want this server to provide (information lookup and user to user transactions) but beyond that I'm out of my element. The server will need to be hosted by a third party since I don't have resources to run my own server. Keeping in mind that I will be the sole developer for this project for the foreseeable future:
Which technologies would be the simplest way to implement the two things mentioned above? Direct access to the datastore/database or is it better to isolate it? Should I implement a webservice? If so, SOAP or REST?
What other things do I need to consider when moving to a multiuser application?
I know security is a greater concern in a multiuser application, especially when you're dealing with any kind of banking information (which I will be). Performance can be an issue when dealing with a remote connection and large numbers of users. Anything else I'm overlooking?
Regarding moving to a multiuser application, centralising your data is the first step of course, and the simplest way to achieve it is often to use a cloud-based database, such as Amazon SimpleDB or MS Azure. You typically get an access key and a long 'secret' for authentication.
If your data isn't highly relational, you might want to consider Amazon SimpleDB. There are SDKs for most languages, which allow simple code to store/retrieve data in your SimpleDB database using a key and secret, anywhere in the world. You pay for the service based on your data storage and volume of traffic, so it has a very low barrier of entry, especially during development. It will also scale from a tiny home application up to something of the size of amazon.com.
If you do choose to implement your own database server, you should remember two key things:
Ensure no session state exists, i.e. the client makes a call to your web service, some action occurs, and the server forgets about that client (apart from any changed data in the database of course). Similarly the client should not be holding any data locally that could change as a result of interaction from another user. Cache locally only data you know won't change (or that you don't care if it changes).
For a web service, each call will typically be handled on its own thread, and so you need to ensure that access to the database from multiple threads is safe. If you use the standard .NET or Java ways of talking to a SQL database, this should be handled for you. However, if you implement your own data storage, it would be something you'd need to worry about.
Regarding the question of REST/SOAP etc., a key consideration should be what kinds of platforms/devices you want to use to connect to the database server. For example if you were implementing your server in .NET you might consider WCF for implementing your web services. However that might introduce difficulties if you later want to use non-.NET clients. SOAP is a mature technology for web services, but quite onerous to implement, and libraries to wrap up the handling of SOAP calls may not necessarily be available for a given client platform. REST is simple to implement (trivially easy if you use ASP.NET MVC on your server), accessible by any client that can handle HTTP POST/GET without the need for libraries, and easy to test, so REST would be my technology of choice.
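To illustrate how little ASP.NET MVC needs for a REST-style endpoint (the controller and data here are placeholders):

```csharp
using System.Web.Mvc;

public class AccountsController : Controller
{
    // GET /accounts/balance/42 via the default MVC route;
    // any client that can issue an HTTP GET can consume this
    public ActionResult Balance(int id)
    {
        var result = new { AccountId = id, Balance = 125.50m };
        return Json(result, JsonRequestBehavior.AllowGet);
    }
}
```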
If you are sticking with .net (my personal preference), I would expose data access calls via WCF. WCF configuration is really flexible and pretty easy to pick up and you'll want to hide your DB behind a service layer.
1. Direct access to the DB is the simplest, and the worst. Just think about how you'd authenticate the DB access... I would just write a remotable API with serializable parameters and worry about which transport to connect later (web services, IIOP, whatever); the communication details are all wrapped and hidden anyway.
2. None.