Background (you might want to skip this bit, it's here just in case you want context)
I saw from questions like this one, ServiceStack CRUD Service routing Documentation, that the documentation has an unusual way of explaining something that takes what I'm used to (WebAPI and controller-based routing) to a message-oriented routing mechanism, requiring us to define a request, a response, and a service class with methods in order to handle each and every request.
I'm in the process of converting an existing code base from WebAPI + OData based services to ServiceStack, to determine the differences and modelling changes that the two designs require.
The problem domain
There are many requests I currently make that don't really require any parameters (simple GET requests), and yet I'm forced to create, instantiate, and then pass a DTO to a service method in this situation, because without such a DTO I can't define the route.
Why? This is confusing!
What is the relationship between the DTO, the methods in a service, and the routing/handling of a request? I currently have lots of "service" classes in my existing stack that are basically ...
public class FooService : CRUDService<Foo> { /* specifics for Foos */ }

public abstract class CRUDService<T> : ICRUDService<T>
{
    public T GetById(object id) { ... }
    public IEnumerable<T> GetAll() { ... }
    public T Add(T newT) { ... }
    public T Update(T newVersion) { ... }
    public bool Delete(object id) { ... }
}
... how do I get from that, for 100 or so services, to a functional ServiceStack implementation? At the moment my understanding is that I can't pass scalar values to any of these methods; I must always pass a DTO, the request DTO will define the route it handles, and I must have a different request and response DTO for every possible operation my API can perform.
This leaves me thinking I should resort to T4 templating to generate the various DTOs, saving me from hand-cranking hundreds of basically empty DTOs for now.
My question(s)
It boils down to: how do I convert my codebase?
That said, the "sub parts" of this question are really sub-questions like:
What's best practice here?
Am I missing something, or is there a lot of work ahead of me basically building empty boilerplate DTOs?
How does ServiceStack wire all this stuff up?
I was told that it's "better than the black box of OData / EF", but at face value this appears to hide a ton of implementation details. Unless I'm just confused by something in the design ethos.
Each Service in ServiceStack requires a concrete Request DTO which is used to define your Services contract.
As ServiceStack is a message-based services framework, the typed Request DTO is fundamental to how ServiceStack works: it "captures the Request" that Services are invoked with, and it is also passed down through all of ServiceStack's filters.
The Request DTO is also all that's needed to be able to invoke a Service from any client, including MQ clients.
And by following the Physical Project Structure that ServiceStack project templates are configured with, where all DTOs are kept in a dependency/impl-free ServiceModel project, ServiceStack's .NET generic Service Clients can reference the DTOs directly to enable an end-to-end typed API without code-gen, e.g.:
var response = client.Get(new MyRequest { ... });
The Request DTO, being a "message", uses a POCO to define its contract, which is better suited for versioning as it can be extended without breaking existing classes. And since the Request DTO is the definition and entry point for your Service, it's also what most of ServiceStack's other features are built around; needless to say, it's important.
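To make the relationship between Request DTO, route, and Service concrete, here is a minimal sketch (the Foo types and route are illustrative, not from the question's codebase):

```csharp
// Request DTO: defines the route and the Service contract
[Route("/foos/{Id}", "GET")]
public class GetFoo : IReturn<GetFooResponse>
{
    public int Id { get; set; }
}

public class GetFooResponse
{
    public Foo Result { get; set; }
}

// The Service method is selected by the Request DTO type
public class FooService : Service
{
    public object Get(GetFoo request)
    {
        return new GetFooResponse { Result = Db.SingleById<Foo>(request.Id) };
    }
}
```

The same GetFoo class is all a typed client needs to call the Service, e.g. client.Get(new GetFoo { Id = 1 }).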
Sharp Script for code-generation
If you have so many CRUD Services that you wish to auto-generate DTOs for them, I'd recommend taking a look at #Script, a dynamic .NET scripting language that defaults to a template language mode using familiar JS syntax for expressions and the ideal handlebars syntax for blocks.
The stand-alone #Script support includes Live Preview support, whose instant feedback makes it highly productive, and it also includes built-in support for querying databases.
Although OrmLite is a code-first ORM, it does include T4 support for initially generating data models.
AutoCRUD Preview
Since you're looking to generate a number of CRUD Services, you may want to check out the preview release of AutoCRUD that's now available on MyGet.
It's conceptually the same as AutoQuery, where you just need to define the Request DTOs for your DB table APIs and AutoQuery automatically provides the implementation of the Service.
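As a sketch of what that looks like (assuming a Foo table and route, both illustrative), an AutoQuery Service is just the Request DTO definition; no Service class is needed:

```csharp
// AutoQuery provides the implementation from this definition alone
[Route("/foos")]
public class QueryFoos : QueryDb<Foo> { }
```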
Related
I am working on my first gRPC service. I have everything working. As in, I can call my service from the client and get a response. My question is can my gRPC service use models from another library?
I have several projects in my solution.
gRPC Server
gRPC Client
Common DTO Library
And a Few more
When I define my proto file is it possible to use the classes from the Common DTO Library?
my.proto
syntax = "proto3";
option csharp_namespace = "myNameSpace";
package myPackageName;
// The service definition.
service MyService {
rpc MyMethodName (DtoFromAnotherLibrary) returns (byte[]);
}
Thank you,
Travis
That's not possible, because the .proto file does not know about your C# projects.
You may consider using code-first gRPC though, where you write C# code from which your proto contract is then generated.
As @Ray stated, you cannot use your model objects through the gRPC interface, and he provided a link to the code-first method.
I tend to think of my proto definitions as my external interface and update them with care to ensure backwards compatibility as the interface ages. Because of that, I will code up model objects separately from the gRPC definitions and write extension methods (ToProto for the model, ToModel for the gRPC message) to convert back and forth between the two types. It may seem like duplicated effort, but having the flexibility to add things to my model objects, like property change notifications or other convenience methods/properties, without affecting the external interface is a plus to me. I spend a lot of time working on the front end, so it's analogous to the model/view-model relationship.
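A minimal sketch of that extension-method approach, assuming a hypothetical Person model class and a generated PersonMessage gRPC type (both names are illustrative):

```csharp
public static class PersonMappings
{
    // Model -> generated gRPC message
    public static PersonMessage ToProto(this Person model) => new PersonMessage
    {
        FirstName = model.FirstName,
        LastName = model.LastName
    };

    // Generated gRPC message -> model
    public static Person ToModel(this PersonMessage proto) => new Person
    {
        FirstName = proto.FirstName,
        LastName = proto.LastName
    };
}
```

The service implementation then calls request.ToModel() on the way in and result.ToProto() on the way out, keeping the gRPC types confined to the service boundary.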
I'm a C# coder with a (Windows) sysadmin background. I've been looking at the various service frameworks in order to create a unified REST API for various infrastructure components (Windows management, hardware management, etc.). I've settled on using ServiceStack as my framework for this, but have a question on how to manage my DTOs. Most of the time my source data is from non-database objects, which include:
Other web services (usually SOAP based). I usually bring these in via "Add Web Reference" (most, but not all, are asmx).
.NET Objects (usually WMI/WinRM/PowerShell [System.Management], or Active Directory [System.DirectoryServices])...
In some unfortunate cases, raw text output I get as a result of invoking a command (via ssh or cmd).
In all of these cases, I will have to call some sort of Save() method to update properties. In addition, there might be some non-CRUD methods I would like to expose to the REST service. Usually I don't need everything from the source data (for example, in the case of web service data, I'm only interested in boxing up certain properties and methods of a particular proxy class). My understanding is that my DTOs should be clean and not have any dependencies. Since I don't believe I have an ORM I can use, what design pattern should I use to map my data to a DTO?
Apologies if I'm misusing any terminology here...
With a variety of backend services and data sources, I think it would be hard to use anything highly structured like a framework to map your data to DTOs. I would keep it simple:
Keep your DTO classes separate from any of your backend classes. Generally resist the temptation to try to reuse code, use inheritance, etc., in your DTOs (though sometimes I find it useful to declare interfaces for the DTOs to implement). This will keep the interface of your ServiceStack service clean and independent of backend details.
There are some extension methods available in ServiceStack to easily map properties between two classes: TranslateTo, PopulateWith, PopulateWithNonDefaultValues, etc. The link above mentions these. The trick is that while your DTO classes should not be subclasses of, or directly reusing your backend classes, you will find it convenient to have the property names match up if you want to use these mapping methods.
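For example, a sketch of that mapping (the exact extension method names vary by ServiceStack version, e.g. TranslateTo in older releases and ConvertTo in newer ones; the backend and DTO types here are illustrative):

```csharp
// Backend class (e.g. hydrated from WMI or a SOAP proxy)
public class ServerInfo
{
    public string HostName { get; set; }
    public int CpuCount { get; set; }
}

// DTO with deliberately matching property names
public class ServerInfoDto
{
    public string HostName { get; set; }
    public int CpuCount { get; set; }
}

// The mapping relies on the property names lining up
var dto = backendServerInfo.ConvertTo<ServerInfoDto>();
// or: var dto = new ServerInfoDto().PopulateWith(backendServerInfo);
```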
Keep your ServiceStack service classes simple; their primary responsibility should be translating between DTO classes and lower level model classes, and making one or two method calls on business logic classes to do the actual work.
It sounds like it would be useful for the highest level of your business layer--the classes that your ServiceStack services interact with--to present a clean interface that abstracts away the details about the source and format of a given type of data. So you may want three layers of model classes. From top to bottom: DTOs, business layer POCO classes, and framework-specific classes for specific backend services like web reference generated code or whatever.
I think that's about all there is to it.
I recommend that you define DTOs that meet the requirements of your API, and then have a 'business logic' layer that mediates between the actual objects and your DTOs.
Your ServiceStack services will have a dependency on both the DTO definitions and the business logic layer, and the business logic layer will have a dependency on the DTO definitions and the real-world object definitions. In effect, your REST services and DTOs will act as a facade over the real-world APIs.
I'm relatively new to the world of Microsoft WCF. I have a few questions regarding the best design pattern/method to use to implement one or more services that will address my needs.
I have an existing DataLayer which I would like to push into 1 or more WCF services. The backend database is ORACLE (and I have an entire data access layer which communicates with the correct version of ODAC).
When I look at my existing data layer, I (more or less) have support for multiple data objects (classes).
UserInfo
UserActivityHistoryAudit
Evaluations
EvaluationWorkFlowAndReview
EvaluationReports
I have several questions involving the best way to implement this in WCF.
Is it best to implement this as one service or several services (one which coincides with each data class/functionality)?
Ultimately, I would like to share the underlying Data Access layer which communicates with the ORACLE ODAC library. Is it best to embed this in a shared library, assembly?
If I go with multiple services, is it cleaner to hang them all off of the same endpoint?
What is the best strategy to use when designing this?
Thanks,
JohnB
The best way is to use one service that is a WCF Data Service (OData).
Here is a sample you can download:
http://code.msdn.microsoft.com/WCF-Data-Service-OData-ebb4214a
Often times, your business layer will also be implemented on the server. In such an event, you will simply wrap your business layer. If you do not have a business layer on the server side, model your service based on the same concept. You are exposing a set of functionality to target a specific consumer (or set of consumers). You will generally have one service to present to each consumer (or set of consumers). With that being said, you do not want one large monolithic service just to cover all of your potential needs. Break it down into logical areas.
Most of the time, wrapping a single data layer object is too small to wrap by itself. The exception is if you are simply servicing the data generically to everyone (very common with REST and ODATA services).
===========================
Model your services based off of consumption needs. One service per consumer set.
If you will be sharing your data layer across multiple business layers in different binaries, the data layer should exist in its own stand alone library and shared.
The endpoint layouts for your services are not generally important as long as you are consistent. At the end of the day, your consumers will simply copy/paste the endpoint that you provide.
Have you considered using Factory and Repository Pattern? Something like this.
public interface IEmployee
{
    // define your model here (properties, for example)
    string FirstName { get; set; }
    string LastName { get; set; }
}

public interface IEmployeeBizFactory
{
    IEmployee CreateEmployee();
}

public class CustomEmployee : IEmployee
{
    // Implementation here
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class CustomEmployeeBizFactory : IEmployeeBizFactory
{
    public IEmployee CreateEmployee()
    {
        return new CustomEmployee();
    }
}
Consider Data Contracts for each of your data objects
Using Data Contracts
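For example, a minimal Data Contract for one of the listed data objects (the member fields are illustrative):

```csharp
[DataContract]
public class UserInfo
{
    [DataMember]
    public string UserName { get; set; }

    [DataMember]
    public DateTime LastLogin { get; set; }
}
```

Only members marked [DataMember] are serialized across the service boundary, which lets you keep internal properties off the wire.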
I have a WCF service, which exposes many methods.
My application consumes this service, and the ServiceContract includes OperationContract definitions for only some of the methods.
To cut to the question, consider following code example:
[ServiceContract]
public interface IServer
{
    [OperationContract]
    void BasicOperation();
}

[ServiceContract]
public interface IExtendedServer : IServer
{
    [OperationContract]
    void ExtendedOperation();
}
I would like to make contracts so that application has extension capability. In other words, I'd like to be able to use IServer contract everywhere, but to allow plugin-like architecture to extend basic contract interface, so that plugin itself can call ExtendedOperation() operation contract.
So, how do I structure my code, or, what changes do I have to make, in order to be able to do something like following? (channel is of type IServer)
((IExtendedServer)channel).ExtendedOperation()
When I attempt to do this, I get error
System.InvalidCastException : Unable to cast transparent proxy to type 'Contract.IExtendedServer'.
I hope I wasn't confusing...
Services in a SOA world need to have a well-defined and pretty static interface. SOAP services require a representation in a WSDL (and included or separate XSD = XML schema for the data involved).
I don't see how you can create something like a plug-in system in a service world. Plugins work great on a local app - load your resources, language extensions, graphics filters - whatever strikes your fancy. But in a SOA world, this "agility" is the exact contrary of what you're trying to do - create and offer well-defined, fully specified services to be used.
The only option I could see is using a REST-based approach, since there you don't really have many of those limitations. Normally I say this lack of a formal service description is one of the major drawbacks and weak points of REST, but since with REST the operations are really just defined by the URLs used, this might be a plus in your case.
So I would say: if you really want flexibility in services, you need to check out REST based services. SOAP doesn't fit that bill. Go to the WCF REST Developer Center on MSDN for a vast array of information and resources on how to use REST in and with WCF.
I'm not sure what you're trying to accomplish here. You're dealing with services, which have endpoints exposing specific contracts (i.e. interfaces). You're not dealing with objects and casting and the like; it won't work and isn't the right approach anyway.
The way I see it, what you have is indeed just that: A service that exposes one endpoint with a set of common operations, and potentially X number of additional endpoints with other contracts with extension operations. You could still have a single service class on the service side, but as far as the client goes, they are simply different endpoint/services.
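A sketch of that from the client side (binding and address are illustrative): rather than casting an IServer proxy, open a channel against the extended contract's own endpoint:

```csharp
// Instead of casting the IServer channel, create a channel
// for the extended contract's endpoint directly
var factory = new ChannelFactory<IExtendedServer>(
    new BasicHttpBinding(),
    new EndpointAddress("http://localhost:8000/MyService/extended"));

IExtendedServer channel = factory.CreateChannel();
channel.ExtendedOperation();  // extended operation
channel.BasicOperation();     // inherited base operation still available
```

On the service side, the single service class would expose one endpoint for IServer and another for IExtendedServer.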
We have an existing repository which is based on EF4 / POCO and is working well. We want to add a service layer using WCF Data Services and looking for some best practice advice.
So far we have developed a class which has an IQueryable property, whose getter triggers the repository's 'get all users' method. The problems so far have been two-fold:
1) It required us to decorate the ID field of the POCO object to tell the data service which field was the ID. This now means that our POCO object is not 'pure'.
2) It cannot figure out the relationships between the objects (which is obvious, I guess).
I've now stopped this approach and I'm thinking that maybe we should expose the ObjectContext from the repository and use more of the 'automatic' functionality of EF.
Has anybody got any advice or examples of using the repository pattern with WCF Data Services?
I guess it's a matter of being pragmatic. Does decorating the POCO break anything else? If not, perhaps it's the best way to do it.
WCF Data Services and OData are pretty new; I've also been looking for guidance and it seems a bit thin.
Can you expand a bit more on what you want to expose, and who'll be using it?
The issues I've seen so far in our project:

Having a MyRepository : ObjectContext and a MyDataService : DataService splits logic, so we've created helpers. I suppose we could have inherited Repository though (literally just thought of that as I typed this!).

Query and Change Interceptors are your friends, but should delegate to helpers (or a base class) to ensure DRY. I.e., if your repository already had GetAllUsers, and does logic that myservice.svc/Users doesn't handle, you may need to implement a query interceptor to do the filtering; again, DRY means a helper (or base method) that both the repository and the interceptor can use.

ASP.NET compatibility allows you to tap in nicely to authentication/authorisation; in a query interceptor, it's a nice way to ensure you see only the things you're allowed to see.
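A sketch of such a query interceptor (the Users entity set and the shared UserFilters helper are illustrative):

```csharp
// In the DataService class: filter Users the same way the
// repository's GetAllUsers does, by delegating to a shared helper
[QueryInterceptor("Users")]
public Expression<Func<User, bool>> OnQueryUsers()
{
    var currentUser = HttpContext.Current.User.Identity.Name;
    return u => UserFilters.IsVisibleTo(u, currentUser); // shared with the repository
}
```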
A couple of traps....
If it's Flash / Flex based, you will probably have issues with Flash / Flex not being able to use HTTP PUT/MERGE or DELETE. You get around this by using x-httpmethod-override.

If it's javascript / jquery, make sure you turn on JSON.
Overall, I really like it: a super-fast way to expose an API, and provided you don't have heavy business logic, it works well.