I want to build a web app with JavaScript on the front end and C# on the back end, and I want to determine the value of GraphQL.
For my C# back end I use a GraphQL implementation named GraphQL for .NET.
For my front end, I would like to use Relay, as it plays well with ReactJS.
Now for my back end, I implemented a sample schema like in one of the examples, which looks like this:
public class StarWarsSchema : Schema
{
    public StarWarsSchema()
    {
        Query = new StarWarsQuery();
    }
}
In my front end, I now need to tell Relay about this schema somehow. At least, this is what I understood when walking through the tutorials, because for some reason the GraphQL queries need to be transpiled.
This is an example of how I would like to load all Droids:
class Content extends React.Component<ContentProps, { }> {
    ...
}

export default Relay.createContainer(Content, {
    fragments: {
        // a fragment may only contain fields; Relay builds the
        // surrounding query itself
        viewer: () => Relay.QL`
            fragment on User {
                droids {
                    id
                    name
                }
            }
        `,
    },
});
In one of the examples for Relay, I have seen that the babel-relay-plugin is used for this conversion. It takes a schema file (JSON). The Relay Getting Started Guide shows how to create such a schema with graphql-js and graphql-relay-js.
Now my questions:
Do I really need to create schemas on both the front end and the back end?
What is the point of teaching Relay my schema, when the back end already uses the schema to return well-formed data?
What is the benefit of using Relay at all in this scenario? What would I lose if I just accessed the back end via a regular REST endpoint with a GraphQL query as a parameter?
You need the schema only on the back end, and you can download schema.json from your back end using the example in the bottom part of this document: https://facebook.github.io/relay/docs/guides-babel-plugin.html
Relay needs the schema to correctly construct queries and to understand the types of the returned data.
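Since your back end is C#, the download step can be a small utility instead of the Node script from the linked guide. This is a minimal sketch only, assuming your GraphQL for .NET endpoint is exposed at http://localhost:5000/graphql and that you have saved the standard introspection query text (it ships with graphql-js under graphql/utilities) to a local file; both are assumptions to adjust for your setup:
using System.IO;
using System.Net;
using Newtonsoft.Json;

class SchemaDownloader
{
    static void Main()
    {
        // assumption: the introspection query text was saved locally first
        string query = File.ReadAllText("introspectionQuery.graphql");
        string payload = JsonConvert.SerializeObject(new { query });

        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "application/json";
            // POST the introspection query and save the response for
            // babel-relay-plugin to consume at build time
            string result = client.UploadString("http://localhost:5000/graphql", payload);
            File.WriteAllText("schema.json", result);
        }
    }
}
Point babel-relay-plugin at the resulting schema.json and it can validate and transpile your Relay.QL templates at build time.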
Relay wraps your React components and fetches/provides all the data needed for rendering. It has many features out of the box, like data caching and query consolidation, so with Relay you don't need to fetch anything or worry about how data gets to your components; you just write queries and components.
I built a dynamic Survey-type application where users can set up their own template of questions/attributes that need to be captured. A template can have multiple sections, where each section contains multiple questions/attributes.
We write out the data as JSON according to the template, to be consumed or sent via webhooks to external systems. Rather than developing a layer for each external integration that deserializes our system's object and transforms it for the external system's consumption, I was thinking of writing a dynamic mapper that uses a configuration schema (maybe JSON) to map a source to a destination. That would be great because, as soon as either the source or the destination changes, the configuration can simply be adjusted accordingly, without having to change code and republish.
I'm using this system as an example, but I can see multiple applications for different use cases, the main one being a source API that needs to integrate with multiple external APIs.
Source JSON example:
{
    "Section1": {
        "S1Attribute1": "S1Attribute1Answer",
        "S1Attribute2": "S1Attribute2Answer",
        "S1Attribute3": "S1Attribute3Answer"
    },
    "Section2": {
        "S2Attribute1": "S2Attribute1Answer",
        "S2Attribute2": "S2Attribute2Answer"
    }
}
Destination 1 Example:
{
    "SectionsFlattened": {
        "S1Attribute1": "S1Attribute1Answer",
        "S1Attribute2": "S1Attribute2Answer",
        "S1Attribute3": "S1Attribute3Answer",
        "S2Attribute1": "S2Attribute1Answer",
        "S2Attribute2": "S2Attribute2Answer"
    }
}
Destination 2 Example:
{
    "NewSection1Name": {
        "NewS1Attribute1Name": "S1Attribute1Answer",
        "NewS1Attribute2Name": "S1Attribute2Answer",
        "NewS1Attribute3Name": "S1Attribute3Answer"
    },
    "NewSection2Name": {
        "NewS2Attribute1Name": "S2Attribute1Answer",
        "NewS2Attribute2Name": "S2Attribute2Answer"
    }
}
These are pretty simple examples of just moving properties around, but the possibilities for how the data might need to be transformed are endless.
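To make the idea concrete, this is roughly the kind of config-driven mapper I have in mind. It is only a sketch, assuming Newtonsoft.Json; the class name and the dotted-path config format are hypothetical:
using System.Collections.Generic;
using Newtonsoft.Json.Linq;

public static class JsonMapper
{
    // mappings: destination path -> source path, e.g.
    // { "SectionsFlattened.S1Attribute1", "Section1.S1Attribute1" }
    public static JObject Map(JObject source, IDictionary<string, string> mappings)
    {
        var destination = new JObject();
        foreach (var mapping in mappings)
        {
            // SelectToken resolves a dotted path like "Section1.S1Attribute1"
            JToken value = source.SelectToken(mapping.Value);
            if (value != null)
            {
                SetByPath(destination, mapping.Key, value.DeepClone());
            }
        }
        return destination;
    }

    // Creates intermediate objects as needed and assigns the value at
    // the last path segment.
    private static void SetByPath(JObject target, string path, JToken value)
    {
        string[] parts = path.Split('.');
        JObject current = target;
        for (int i = 0; i < parts.Length - 1; i++)
        {
            var child = current[parts[i]] as JObject;
            if (child == null)
            {
                child = new JObject();
                current[parts[i]] = child;
            }
            current = child;
        }
        current[parts[parts.Length - 1]] = value;
    }
}
With this shape, Destination 1 is just a config whose keys all start with "SectionsFlattened." (for example "SectionsFlattened.S1Attribute1" mapped from "Section1.S1Attribute1"), and Destination 2 is a config with the renamed section/attribute paths; neither requires a code change.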
It feels like a problem that somebody would have investigated or solved already, but in all my research I can't find anything concrete, or I'm struggling to find a good starting point; maybe I'm approaching the problem incorrectly. Any suggestions/guidance/articles/libraries would be appreciated.
I'm at a loss with the KTA SDK. My intention is to pass a scanned document in PDF format, along with a few headers, to KTA's job queue. As I'm still going through the documentation, my best guess right now is to use the Document class as a DTO and then call a method that takes that Document as a parameter:
...
[HttpPost]
public HttpResponseMessage Upload()
{
    var httpRequest = System.Web.HttpContext.Current.Request;
    var DocType = httpRequest.Headers["X-DocType"];
    var Pages = httpRequest.Headers["X-DocPages"];
    var Title = httpRequest.Headers["X-DocTitle"];

    Agility.Sdk.Model.Capture.Document doc = new Agility.Sdk.Model.Capture.Document();
    // doc.DocumentType = DocType; // Type DocumentTypeSummary
    doc.NumberOfPages = Convert.ToInt32(Pages);
    doc.FileName = Title;
    ...
I'm just wondering if I'm on the right track in doing this.
My other question is where we can store data from a custom header. In this example, I need to store two custom headers, Comments and AccountNumber.
Lastly, what service needs to be called to send this document to the KTA job queue? Would CaptureDocumentService be the right one?
I'd greatly appreciate any help on this.
Start with the details laid out in the Sample App example. It shows what to add to your app.config, but what it doesn't call out explicitly enough is that you should change the SdkServicesLocation value for your environment. You then simply call the functions on the services in the TotalAgility.Sdk namespaces, and they handle the web-service calls for you.
The CaptureDocumentService might be part of what you need, and there is a set of samples dedicated to the functions on that service. It refers to the Sample Processes folder, which by default is here:
C:\Program Files\Kofax\TotalAgility\Sample Processes\Capture SDK Sample Package
However, what you will definitely need are the functions on the JobService. There are different functions with different options, but CreateJobWithDocuments is probably the one to start with. You can see that it creates the document(s) and the job together in one step.
There is similarity with the parameters on CaptureDocumentService.CreateDocument3, so you might cross-reference that to best understand the parameters. The difference is that CreateDocument3 just creates a document in the abstract; you want to actually use the document as input to create a job, so use the combined function.
Finally, to pass your fields in, you will set RuntimeField objects on the RuntimeDocument objects going into your CreateJobWithDocuments call, as in the sketch below.
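A rough sketch of that shape, continuing inside the Upload() action from the question. JobService, CreateJobWithDocuments, RuntimeDocument and RuntimeField are the names used in the SDK samples, but the exact overloads and property names vary by KTA version, so treat everything below as an assumption to verify against the Capture SDK Sample Package:
// rough sketch only; verify names and overloads against the SDK samples
var fields = new List<RuntimeField>
{
    // header names follow the question's X-DocType pattern and are assumptions
    new RuntimeField { Name = "Comments",      Value = httpRequest.Headers["X-Comments"] },
    new RuntimeField { Name = "AccountNumber", Value = httpRequest.Headers["X-AccountNumber"] }
};

var document = new RuntimeDocument
{
    FilePath = pathToUploadedPdf,   // hypothetical: wherever you staged the PDF
    RuntimeFields = fields
};

var jobService = new TotalAgility.Sdk.JobService();
// sessionId and processIdentity come from your KTA environment; check the
// JobService samples for the exact CreateJobWithDocuments overload
jobService.CreateJobWithDocuments(sessionId, processIdentity, new[] { document });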
I've been trying to figure this out for about a week now. It's time to ask S.O.
I have 4 overall goals here:
1. The controller code needs to use ViewModel request inputs for validation reasons. (Controller Snippet)
2. Client code for my API should use a nice model syntax. (Client Code Snippet)
3. For the Swagger UI page, I would like the "Try me" interface to be usable: either a bunch of text boxes, or a text area for a JSON blob to serialize and send over.
4. The request should be a GET request.
Client Code Snippet:
var response = client.GetUserProductHistory(new Models.UserProductHistoryRequest()
{
    Locale = "en-US",
    UserPuid = "FooBar"
});
Controller Snippet:
[HttpGet]
[HasPermission(Permissions.CanViewUserProductHistory)]
public JsonPayload<UserProductHistoryResponse> GetUserProductHistory([FromUri] UserProductHistoryRequest model)
{
    JsonPayload<UserProductHistoryResponse> output = new JsonPayload<UserProductHistoryResponse>();
    return output;
}
I have tried using [FromBody]. It looks great, but I get an error that says 'GET requests do not support FromBody'.
I tried using [FromUri], but then the generated client gives me about 15 method parameters per call.
I tried using [FromUri] together with operation filters, so that the parameters would be condensed into $ref parameters (complex objects as defined by the spec). This actually worked decently for both the client generation and the server side. The problem is that the Swagger UI looks really lame: a single text box that you can't actually use very well. If I can figure out how to get the Swagger UI to render the [FromUri] request more like the [FromBody] UI, I will be in good shape. Any ideas or pre-existing content that would point me in the right direction?
Swagger is not the limitation here; REST itself is. By definition, web servers should ignore the incoming request body on HTTP GET requests, and ASP.NET enforces this convention, which is why it doesn't allow you to use [FromBody] on a GET method.
When designing a REST API, the better practice is to use a POST method for an actual search. This allows you to use [FromBody], and as a bonus, Swagger will behave the way you want it to. See here for a supporting opinion: https://stackoverflow.com/a/18933902/66101
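As a minimal sketch of that POST-based search, reusing the types from the question (the action name here is only illustrative):
[HttpPost]
[HasPermission(Permissions.CanViewUserProductHistory)]
public JsonPayload<UserProductHistoryResponse> SearchUserProductHistory([FromBody] UserProductHistoryRequest model)
{
    var output = new JsonPayload<UserProductHistoryResponse>();
    // ...populate output from model.Locale / model.UserPuid...
    return output;
}
With [FromBody], the generated client keeps the single request-object parameter, and the Swagger UI shows one editable JSON text area for the model.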
My data is being loaded correctly on my page. However, I have two textboxes and one submit button, based on which I want to filter the records from the server.
Note: I am not using the filter functionality that is available with jqGrid by default.
I am a little confused about how to achieve this. Is there any built-in capability of jqGrid for it? The way I currently handle it is to handle the click event in my JavaScript and supply post data to the action method:
$('#submit').click(function () {
    $("#customers").jqGrid('setGridParam', {
        postData: {
            ContactName: $('#contactName').val(),
            CompanyName: $('#companyName').val()
        }
    });
    $("#customers").trigger("reloadGrid");
});
This post data is then captured in the action method, and it works fine. Is there a better way of doing this, or am I on the right track? Sometimes I feel I write less code on the server and have become more of a client-side programmer since I started using ASP.NET MVC 3.0 ;)
You don't have to use setGridParam to change postData, as you can declare a function for each value:
$("#customers").jqGrid({
    url: ...,
    datatype: ...,
    mtype: "POST",
    postData: {
        // jqGrid invokes these functions on every request, so the
        // current textbox values are always sent
        ContactName: function () { return $("#contactName").val(); },
        CompanyName: function () { return $("#companyName").val(); }
    }
});
so your submit handler only needs to trigger reloadGrid:
$('#submit').click(function () {
    $("#customers").trigger("reloadGrid");
});
If you want to reduce the amount of code, you'd be better off creating a simple API in JS for selecting entities like Customer, Person, Contact, etc. I currently have an app whose forms consist of dozens of such entities, so I had to create a JS API for selection (it also gives a universal look and feel). From the client side, the caller specifies the name of the list to get, while the set of possible names is defined in the server's configuration file, which also defines the query to the ORM and how to display the fields (I am using an expression language to map from entity fields to strings).
I want to make a Configuration Data Manager. This would allow multiple services to store and access configuration data that is common to all of them.
For the purposes of the Manager, I've decided to create a configuration class, basically what every configuration data entry would look like:
name, type, and value.
In the object, these would all be strings that describe the configuration data object itself. Once the Manager has read this data from its database as strings, it would put it into this configuration object.
Then I want it to send the object through WCF to its destination. BUT I don't want to send a serialized version of the configuration object; rather, a serialized version of the object described by the configuration object.
The reasons I'd like to do this are:
The Data Manager does not need to know anything about the configuration data.
I can add configuration objects easily without changing the service. Of course, I should be able to do all of the CRUD operations, not just read.
Summary:
Input: strings for name, type, and value
Output: serialized output of the object described, where the object itself is "type name = value" (see the sketch below)
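To make that concrete, this is a rough sketch of what I mean; all names are made up:
using System;

// the descriptor is three strings, but what gets serialized and sent
// over WCF is the materialized value, not the descriptor itself
public class ConfigEntry
{
    public string Name;       // e.g. "Timeout"
    public string TypeName;   // e.g. "System.Int32"
    public string Value;      // e.g. "30"

    // Turn the string descriptor into an instance of the described type.
    public object Materialize()
    {
        Type type = Type.GetType(TypeName, throwOnError: true);
        return Convert.ChangeType(Value, type);
    }
}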
Questions:
Is this a good method for storing and accessing the data?
Can I serialize in this manner, and if so, how?
What would the function prototype of a getConfigurationData method look like?
I have decided to go in a different direction, thanks for the help.
Is this a good method for storing and accessing the data?
That is difficult to answer; the best I can give you is both a "yes" and a "no". Yes, it's not a bad idea to isolate the serialization/rehydration of this data... and no, I don't really care much for the way you describe doing it. I'm not sure I would want it stored as text unless I plan on editing it by hand, and if I'm editing it by hand, I'm not sure I'd want it in a database. It could be done; I'm just not sure you're really on the right track yet.
Can I serialize in this manner, and if so, how?
Don't build your own, never that. Use a well-known format that already exists. Either XML or JSON will serve if the data must be hand-editable, or there are several binary formats (BSON, protocol buffers) if you do not need to edit it by hand.
What would the function prototype of a getConfigurationData method look like?
I would first break down the 'general', aka common, configuration into a separate call from the service-specific configuration. This enables getConfigurationData to simply return a rich type for the common information. Then either add an extra parameter and property for the service-specific data, or add another method. As an example:
[DataContract]
public class ConfigurationInfo
{
    [DataMember]
    public string Foo;
    ...
    // This string is a json/xml blob specific to the 'svcType' parameter
    [DataMember]
    public string ServiceConfig;
}

[ServiceContract]
public interface IServiceHost
{
    [OperationContract]
    ConfigurationInfo GetConfigurationData(string svcType);
}
Obviously this places a little burden on the caller to parse ServiceConfig; however, your server can treat it as an opaque string value. Its only job is to associate it with the appropriate svcType and store/fetch the correct value.
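On the caller's side, consuming that opaque blob might look like the following sketch. The PaymentServiceSettings type and the svcType value are hypothetical, and DataContractJsonSerializer is just one serializer choice that matches the WCF style already in play:
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;
using System.Text;

// hypothetical service-specific settings type, owned by the caller
// rather than by the configuration service
[DataContract]
public class PaymentServiceSettings
{
    [DataMember]
    public int RetryCount;

    [DataMember]
    public string Endpoint;
}

public static class ConfigClient
{
    public static PaymentServiceSettings Load(IServiceHost host)
    {
        // the server hands back the opaque blob for this svcType...
        ConfigurationInfo info = host.GetConfigurationData("PaymentService");

        // ...and the caller, which knows the real shape, deserializes it
        var serializer = new DataContractJsonSerializer(typeof(PaymentServiceSettings));
        using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(info.ServiceConfig)))
        {
            return (PaymentServiceSettings)serializer.Deserialize(stream);
        }
    }
}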