I am working on my first gRPC service. I have everything working, in the sense that I can call my service from the client and get a response. My question is: can my gRPC service use models from another library?
I have several projects in my solution.
gRPC Server
gRPC Client
Common DTO Library
And a few more
When I define my proto file, is it possible to use the classes from the Common DTO Library?
my.proto
syntax = "proto3";
option csharp_namespace = "myNameSpace";
package myPackageName;

// The service definition.
service MyService {
  rpc MyMethodName (DtoFromAnotherLibrary) returns (byte[]);
}
Thank you,
Travis
That's not possible because Proto does not know about your C# projects.
You may consider using code-first gRPC, though, where you write C# code from which the proto contract is then generated.
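For example, with a code-first library such as protobuf-net.Grpc, the contract is plain C# and the proto is derived from attributes; a minimal sketch, where MyReply, Payload and SomeValue are invented names:

using System.Runtime.Serialization;
using System.ServiceModel;
using System.Threading.Tasks;

[DataContract]
public class DtoFromAnotherLibrary
{
    [DataMember(Order = 1)]
    public string SomeValue { get; set; }
}

[DataContract]
public class MyReply
{
    [DataMember(Order = 1)]
    public byte[] Payload { get; set; }
}

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    Task<MyReply> MyMethodName(DtoFromAnotherLibrary request);
}

Because the contract is ordinary C#, the DTO can live in any referenced assembly, such as your Common DTO Library.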
As @Ray stated, you cannot use your model objects through the gRPC interface, and he provided a link to the code-first method.
I tend to think of my proto definitions as my external interface and update them with care to ensure backwards compatibility as the interface ages. Because of that, I code up model objects separate from the gRPC definitions and write extension methods (ToProto for the model, ToModel for the gRPC message) to convert between the two types, as sketched below. It may seem like duplicated effort, but having the flexibility to add things to my model objects, like property change notifications or other convenience methods/properties, without affecting the external interface is a plus to me. I spend a lot of time working on the front end, so it's analogous to the model/view-model relationship.
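A rough sketch of that pattern; OrderProto stands in for a generated message type and OrderModel for the hand-written model, both stubbed here so the sketch stands alone:

// In a real project, OrderProto comes from the proto codegen and OrderModel is hand-written.
public class OrderProto { public int Id { get; set; } public string Description { get; set; } }
public class OrderModel { public int Id { get; set; } public string Description { get; set; } }

public static class OrderMappingExtensions
{
    // Hand-written model -> generated gRPC message.
    public static OrderProto ToProto(this OrderModel model) => new OrderProto
    {
        Id = model.Id,
        Description = model.Description,
    };

    // Generated gRPC message -> hand-written model.
    public static OrderModel ToModel(this OrderProto proto) => new OrderModel
    {
        Id = proto.Id,
        Description = proto.Description,
    };
}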
With gRPC on .NET Core I can define the interface of my service using proto files.
I need to expose my service as REST too, and would like to define the service once, using proto, instead of manually creating it all again.
Is it possible to create REST endpoints (controllers and request/response classes) from the proto files when using gRPC on .NET Core?
[Update]
For the REST endpoints I want to use JSON; I just want to create the controller and request/response classes from the proto files.
For the request/response classes it might be possible to reuse the classes generated for the gRPC client, but it would be great if I could create a REST controller from the proto file, too.
Short answer: yes
Longer answer: yes, but you'd have to write some custom code to read the proto files and then create the controllers in memory at startup. There's not much you can do out of the box.
A quick search on Google offers more insight
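For the request/response part mentioned in the question, one low-effort sketch is a hand-written controller that reuses the proto-generated message types and client rather than generating the controller itself. MyService, MyRequest, MyReply and MyMethodAsync are placeholder names, the generated client is assumed to be registered with DI (e.g. via AddGrpcClient), and JSON serialization of generated message types has its own caveats:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/my-service")]
public class MyServiceController : ControllerBase
{
    private readonly MyService.MyServiceClient _client; // generated by Grpc.Tools

    public MyServiceController(MyService.MyServiceClient client) => _client = client;

    // Exposes the unary gRPC call as a JSON POST endpoint.
    [HttpPost("my-method")]
    public async Task<MyReply> MyMethod([FromBody] MyRequest request)
        => await _client.MyMethodAsync(request);
}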
My idea consists of two main elements:
Take C# DTOs (data-transfer objects) and convert them into TypeScript interfaces to ensure client-side models are in sync with the server side.
Take ASP.NET Core controller endpoints and convert them to TypeScript classes that use an HTTP service or similar. Again, to ensure client-side requests are in sync with the server.
And whenever a change has been made to a controller or DTO, the generated TypeScript items should refresh to stay in sync while developing.
I have done some research and found the following Stack Overflow threads and other sources:
DTO to TypeScript generator, which suggests using the TypeLite library. It seems great, but according to the documentation it requires either a [TsClass] attribute or a reference to the class on startup. Since the project structure I'm using is set up so that all DTOs are located in a *.Dtos namespace, I'm kinda missing a TypeScript.Definitions().ForNameSpace() (a rough sketch of what I mean follows this list). Also, this only solves the first idea/problem.
Swashbuckle.AspNetCore would allow me to generate Swagger documentation from both the controllers and DTOs, and the task would then be to somehow interpret the Swagger documentation and create TypeScript classes and interfaces from it. The con is that, as far as I can tell, this requires me to start up the server, which I would like to avoid if possible, since it would make it hard to update on file change.
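For the first point, the kind of namespace-driven generation I'm missing could look roughly like this. This is my own sketch, not TypeLite's API; nested DTOs, collections and nullables are left out:

using System;
using System.Linq;
using System.Reflection;
using System.Text;

public static class DtoToTypeScript
{
    // Emits a TypeScript interface for every public class in a *.Dtos namespace.
    public static string Generate(Assembly assembly)
    {
        var sb = new StringBuilder();
        var dtos = assembly.GetTypes()
            .Where(t => t.IsClass && (t.Namespace?.EndsWith(".Dtos") ?? false));

        foreach (Type dto in dtos)
        {
            sb.AppendLine($"export interface {dto.Name} {{");
            foreach (PropertyInfo p in dto.GetProperties())
                sb.AppendLine($"    {Camel(p.Name)}: {TsType(p.PropertyType)};");
            sb.AppendLine("}");
        }
        return sb.ToString();
    }

    static string Camel(string name) => char.ToLowerInvariant(name[0]) + name.Substring(1);

    static string TsType(Type t)
    {
        if (t == typeof(string)) return "string";
        if (t == typeof(bool)) return "boolean";
        if (t.IsPrimitive || t == typeof(decimal)) return "number";
        return "any"; // nested DTOs, collections, etc. are out of scope here
    }
}

// Usage: File.WriteAllText("dtos.d.ts", DtoToTypeScript.Generate(typeof(SomeDto).Assembly));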
FYI, this is a new project I'm about to start, so there's no legacy code to update. Also, all of the ASP.NET Core endpoints will return IActionResult to enable returning Ok(), BadRequest() and so on. Getting the return model therefore seems hard to me, since there's no easy way to discover which DTO an action produces, if any.
So, I have thought of the following solutions that solve both problems:
Create a separate package/application that uses the Swashbuckle library and generates the models and controllers without starting up the whole server.
Create annotations on every endpoint, something along the lines of [Produces(SomeDto)], after which I would create a small console application that uses reflection to gather the information and generate TypeScript from it. This would of course require developers to keep that information in sync, so in my mind there's somewhat duplicated information.
But neither of these solutions would auto-update on C# source file save.
Looking forward to any discussions/suggestions.
Considering your first point, I made a C# DTO to TypeScript interface generator that uses MSBuild tasks, so it's completely independent of your workflow. It also works straight from the source, which makes it a bit less stable, but you don't have to make any template files.
Find it by searching for MTT on NuGet.
In case you are still looking... I think Typewriter (http://frhagn.github.io/Typewriter/) is your solution. You can write templates specifying what to transform and how.
It doesn't meet all of my needs only because I need a tool that dynamically generates a complicated folder structure, but that's on their v2 roadmap.
Besides that, it does a lot of heavy lifting and is pretty easy to configure.
Following the instructions for the Reflection Provider (http://msdn.microsoft.com/en-us/library/dd728281.aspx), everything works well until I move the Order and Item classes to a class library and reference that library from the web project containing the SVC file.
If I keep the POCO classes in the WCF project, all goes well.
If I move the POCO classes out of the WCF project into a separate assembly, I get a 500 with no explanation.
I want to be able to keep my POCO classes in a separate project and expose them with an OData endpoint. What am I doing wrong?
--UPDATE--
The scenario described above is meant to illustrate a problem I have found using the WCF OData Reflection Provider. It is not my real problem, but is easier to explain for illustrative purposes.
Try upgrading to the latest version of WCF Data Services (currently 5.3), if you aren't already on it. I reproduced your issue using the version of WCF Data Services that ships with .Net 4.5, but once I upgraded the references in both assemblies to the latest release of Microsoft.Data.Services using NuGet, the problem went away.
If you're already using the most up-to-date version of WCF Data Services, make sure that both assemblies are referencing the exact same version of WCF Data Services.
If neither of these fix your problem, add the following attribute to your DataService class to get a more detailed error message and stack trace:
[System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)]
public class YourService : DataService<...>
And then please update your question with the results of that (if the solution doesn't immediately jump out from the stack trace).
(disclaimer: I usually don't like answers of the kind that don't help you with your problem but rather explain why your problem isn't the correct problem, but I think it's justified in this case :))
If you think about it, you don't really want to do that:
The Order and Item classes aren't really POCOs at all; they're not 'plain' C# objects. They have data attributes on them, which makes them data transfer objects (DTOs).
They belong to the interface between your service and its clients.
The domain entities (or POCOs) Item and Order will, most likely, be a bit more complex, and contain other things besides data, such as operations and business logic.
I believe the correct way to go is to have a rich domain model, in which Order and Item contain a full set of attributes and operations, and on top of that, a DTO layer, which contains only those attributes that your service client needs.
Sending your POCOs over the wire has been termed 'the stripper pattern', and I believe it's best avoided.
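A tiny illustration of that layering, with made-up members:

using System;
using System.Collections.Generic;

// Domain entity: data plus behaviour; never serialized directly.
public class Order
{
    private readonly List<string> _lines = new List<string>();
    public IReadOnlyCollection<string> Lines => _lines;

    public void AddLine(string sku)
    {
        // Business rules live here, next to the data they guard.
        if (string.IsNullOrEmpty(sku)) throw new ArgumentException("sku is required");
        _lines.Add(sku);
    }
}

// DTO: only the attributes the service client needs, nothing else.
public class OrderDto
{
    public int Id { get; set; }
    public List<string> Lines { get; set; }
}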
I'm working on a fairly straightforward multi-tier application (WPF, WCF, EF 4, and SQL). As far as architecture is concerned, we were planning to include a single "Common" project which will include both entities and service contracts.
Are there any advantages/disadvantages to having entities and service contracts in separate assemblies? Or is it usually good to keep them together?
I'm interested in hearing the opinion of others.
Thanks!
Having contracts in a separate assembly gives you the ability to swap in different implementations from a different assembly: you hand the contracts assembly to a developer, he implements it and gives you a DLL that you can drop into the project folder and wire up with an IoC framework like StructureMap, without rebuilding. A sketch of that wiring follows below.
Having the contracts in the same assembly that contains the entities ties the contracts to the implementations...
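A minimal sketch of that wiring with StructureMap; the interface and class names are invented:

using StructureMap;

// Lives in the Contracts assembly.
public interface IPricingService
{
    decimal GetPrice(string sku);
}

// Delivered later as a separate DLL that implements the contract.
public class DefaultPricingService : IPricingService
{
    public decimal GetPrice(string sku) => 0m; // stub implementation
}

// Composition root: binds contract to implementation without rebuilding callers.
public static class Bootstrapper
{
    public static IContainer Build() =>
        new Container(cfg => cfg.For<IPricingService>().Use<DefaultPricingService>());
}

// Usage: var pricing = Bootstrapper.Build().GetInstance<IPricingService>();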
If you are using a RESTful architecture with other .NET platform consumers - it's helpful to have the Service Contracts in a separate assembly (Shared) so that you can easily share your operation and data contracts with RESTful consumers without exposing any unnecessary data access components to your clients.
I would recommend that you keep the data access and service contracts isolated for this reason.
That is exactly how I structured the design for an e-commerce n-tier app I designed.
There are two common libraries: one for DTOs and another for interfaces.
Then the client and server included those libraries, and the service proxies were generated using the common types.
The main advantage here is ease of compilation: you don't have to recreate the proxies when you change the interface; the client and server are updated automatically.
I also had a utilities app that contained all the helper type stuff I needed.
EDIT: Sorry, just re-read your question. In my case, I had multiple interface libraries: one for the workflow library (with composed interfaces), and another for services (the things being composed into workflow operations).
So in my case it made sense to keep them separate.
If you only have one set of interfaces, and those interfaces all make use of your DTOs, there is no reason to separate them into two libraries; one would be sufficient. Consider, though, whether you may need to share your DTOs between more interface libraries in the future; in that case, rather keep the DTOs separate from the interfaces from the start.
I have a webservice with a function that returns a type (foo). If I consume this webservice in .NET through the 2.0 generated proxies, it creates a class called foo in the generated proxy. If I have the DLL that contains that class (foo) that is the DLL being used by the webservice, is there any way to have it use that class instead of creating a custom proxy class? I'm looking for something similar to what remoting does... but not remoting.
I've seen 3 ways of doing this:
1) Let Visual Studio generate the proxy and then change the classes in the proxy to the full class names from the DLL, by hand. Works, but you would have to do this again every time you update your proxy. Plus it's really dirty, isn't it?
2) Use a generic class/method that creates deep copies of your proxy objects into the "real" objects via reflection (a sketch follows this list). Works, but of course with a little performance trade-off.
3) Use WCF, where you can reference the DLL with the data contracts (your data classes) and use them instead of creating any proxy by code generation.
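A minimal sketch of option 2, assuming same-named public properties on both types; it copies shallowly, so nested objects and collections would need recursive handling:

using System.Reflection;

public static class ProxyMapper
{
    // Copies same-named, writable public properties from a generated
    // proxy object onto a new instance of the "real" type.
    public static TTarget Map<TTarget>(object proxy) where TTarget : new()
    {
        var target = new TTarget();
        foreach (PropertyInfo source in proxy.GetType().GetProperties())
        {
            PropertyInfo dest = typeof(TTarget).GetProperty(source.Name);
            if (dest != null && dest.CanWrite
                && dest.PropertyType.IsAssignableFrom(source.PropertyType))
            {
                dest.SetValue(target, source.GetValue(proxy, null), null);
            }
        }
        return target;
    }
}

// Usage: Foo realFoo = ProxyMapper.Map<Foo>(generatedProxyFoo);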
I think the key issue here is in generating the proxies. I've generally used two different approaches to web services:
1) Traditional services, where you expose methods and a client generates the proxy in Visual Studio to consume the methods.
2) Request/Response services, where the exposed "service" is more of a pass-through and the "actions" being performed are encapsulated in the objects being sent to and received from the service. These actions would be in that shared library that both the server and the client have.
In the former I often run into this same problem, and I don't really think there's a solution, at least not one that Visual Studio is going to like at all. You could perhaps manually modify the generated proxies to use the other classes, but then you'll have to repeat that step any time you re-generate. Alternatively, you can generate outside of Visual Studio with something like CodeSmith (the older version is free, but depends on .NET 1.1), which will require some work to create a template for the proxies, and you'll have to step outside the IDE to re-generate any time you need to update them.
I can recommend a good tool for the latter, however, and that would be the Agatha project. It takes the approach of separating the "service" from the "actions" that are being performed, and makes the approach of the shared library very easy. Such a re-architecture may very well be out of the question for the project you're working on depending on your schedule, but it's definitely something to explore for future projects.
You could write your own proxy class, or you could implement a constructor on your Foo class that takes an instance of the generated Foo class and copies the data over as appropriate, as sketched below.
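A minimal sketch of the constructor approach; GeneratedProxy.Foo stands in for the proxy type Visual Studio generated, and Id/Name are invented properties:

// The Foo class in your own DLL.
public class Foo
{
    public int Id { get; set; }
    public string Name { get; set; }

    public Foo() { }

    // Builds the library's Foo from the web service proxy's Foo.
    public Foo(GeneratedProxy.Foo proxyFoo)
    {
        Id = proxyFoo.Id;
        Name = proxyFoo.Name;
    }
}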