I'm planning to build another layer between the application layer and the database layer to reduce database access.
There are 200 application servers and a single giant database server.
I don't want all 200 servers querying the database server directly, so I'm planning to build an intermediate layer that caches the data, like a cache farm. Servers in this layer will periodically query the database and cache the results, and clients will query these WCF servers instead.
I'm not talking about a distributed cache, which I already have.
I'm not familiar with WCF; would it be a good option to implement for this purpose?
Would you recommend REST, or a classic web service?
WCF is the new standard for web (and other) services on the Microsoft stack, and it supports building both SOAP-based and REST-based services.
It's also well suited for handling both internal services (company LAN/intranet, using fast and efficient TCP/IP communication) and outward-facing services. It interfaces with Windows Azure and the cloud, if you need to support that. It interoperates with any SOAP or REST client, it's highly configurable, highly extensible, all around useful, and it offers a unified programming model. It can interface with message queues, if you need that - all with the same programming experience.
Based on WCF, you can easily define your database models and expose them as REST-based OData feeds - you could be putting your database out on the web in minutes (if you're adventurous and wish to do so... but it's at least possible!).
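Just to sketch what that could look like with WCF Data Services (the entity and service names here are made up, and in a real project this class would sit behind a .svc file):

    using System.Linq;
    using System.Data.Services;
    using System.Data.Services.Common;

    // Hypothetical entity and context, exposed read-only as an OData feed.
    [DataServiceKey("Id")]
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class Catalog
    {
        public IQueryable<Product> Products
        {
            get { return new[] { new Product { Id = 1, Name = "Sample" } }.AsQueryable(); }
        }
    }

    public class CatalogDataService : DataService<Catalog>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            // Expose every entity set read-only; tighten this before going public.
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }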
So: YES! WCF is definitely the way to go!
As for resources: there's the MSDN WCF Developer Center, which has everything from beginner tutorials to articles and sample code.
Also, check out the screencast library on MSDN for really useful 10-15 minute chunks of information on just about any WCF topic you might be interested in.
Standard SOAP web services are as easy as falling down when using WCF and you control both the server and client.
All you need to do on the server side is define your operations contracts and data contracts, and the clients will be able to build proxy classes for accessing your web services automatically.
There are some things you need to learn when defining your operation and data contracts, but once that's done, a client can very easily inspect the service at design time, access the generated WSDL, and automatically generate a proxy class for accessing your new operations and their data contracts.
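For illustration, a minimal contract pair might look like this (the names are hypothetical; the attributes are the important part):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [ServiceContract]
    public interface IProductCatalog
    {
        [OperationContract]
        Product GetProduct(int id);
    }

    [DataContract]
    public class Product
    {
        [DataMember]
        public int Id { get; set; }

        [DataMember]
        public string Name { get; set; }
    }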
I would very rarely use REST as the primary interaction mechanism between application servers and database servers. If you control both ends of the interaction, they live in the same data center, and they can be updated in sync, then the extra work required to create a RESTful system would likely be wasted.
Personally, I would be more tempted to look at a messaging-style system, something like NServiceBus.
I would like to find a solution to create a pub/sub medium for two microservices to talk to each other.
I am aware that I can use third-party tools, e.g. Redis or RabbitMQ, as described in
Implementing event-based communication between microservices (integration events)
The challenge is that the client won't allow any third-party tools to be installed, for security reasons.
The message queue server built into Windows (MSMQ) can't be used either.
I can only use applications that already exist on the server.
Therefore I am asking whether there is any way I can create one simple app using a Windows service.
It is a one-to-many relationship: I have one service that deals with the data, and whenever there is an update it publishes to the services that are subscribed to it.
It seems my problem could be similar to
.NET Scalable Pub/Sub service implementation
WCF Pub/Sub with subscriber caching (the link in the WCF pub/sub answer is dead)
but I don't see any concrete solutions there.
I was thinking of using the query notifications that MSSQL offers as a last alternative, but it seems like that could become a bottleneck as the application scales up.
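For reference, the query-notification approach I'm considering would look roughly like this (the connection string and query are placeholders; note the query restrictions such as an explicit column list and two-part table names):

    using System.Data;
    using System.Data.SqlClient;

    public static class ChangeListener
    {
        public static void Listen(string connectionString)
        {
            SqlDependency.Start(connectionString);

            var connection = new SqlConnection(connectionString);
            var command = new SqlCommand("SELECT Id, Payload FROM dbo.Updates", connection);

            var dependency = new SqlDependency(command);
            dependency.OnChange += (sender, e) =>
            {
                // Fires once per registration: push to subscribers, then re-register.
            };

            connection.Open();
            command.ExecuteReader(CommandBehavior.CloseConnection);
        }
    }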
The internet is flooded with articles that rely on third-party tools.
Thanks
Check out the Rebus library, which lets you send and receive messages over different transports with just a few lines of configuration (so in the future you can change the transport without much effort).
You could use the SQL Server transport, or try to develop your own transport method.
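A rough sketch of a subscriber using the SQL Server transport; the exact configuration calls differ between Rebus versions and packages, so treat this as an outline rather than a working setup:

    using System;
    using System.Threading.Tasks;
    using Rebus.Activation;
    using Rebus.Config;

    // Hypothetical event type published whenever the data service has an update.
    public class DataUpdated { }

    public static class SubscriberBootstrap
    {
        public static async Task RunAsync(string connectionString)
        {
            var activator = new BuiltinHandlerActivator();

            // React to published updates.
            activator.Handle<DataUpdated>(evt =>
            {
                Console.WriteLine("Data changed - refresh local state here.");
                return Task.CompletedTask;
            });

            var bus = Configure.With(activator)
                .Transport(t => t.UseSqlServer(connectionString, "subscriber-queue"))
                .Start();

            await bus.Subscribe<DataUpdated>();

            // The publishing side uses the same configuration and calls:
            // await bus.Publish(new DataUpdated());
        }
    }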
We currently have a solution, using web services (implemented via WCF), where the client software periodically calls a service to retrieve a list of new items waiting for it, then for each item calls a separate service to do the actual download. This is the typical message-polling scenario; it is relatively simple to implement and trouble free. It does not give a near-real-time messaging solution, however. For real-time messaging you'd want more of a push-notification architecture.
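Roughly, the client side looks like this today (the proxy and operation names below are illustrative, not our real contract):

    using System;
    using System.Threading;

    public interface IMessageService
    {
        string[] GetPendingItems(string clientId);
        byte[] DownloadItem(string itemId);
    }

    public class PollingClient
    {
        public void Run(IMessageService service, string clientId, TimeSpan pollInterval)
        {
            while (true)
            {
                // Ask for the list of waiting items, then download each one.
                foreach (var itemId in service.GetPendingItems(clientId))
                {
                    byte[] payload = service.DownloadItem(itemId);
                    // process the payload...
                }
                Thread.Sleep(pollInterval);
            }
        }
    }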
Because of security concerns we do not want clients to expose interfaces that our system can call, because then each client would need to secure that exposed interface, among other concerns.
I'd like to explore the option of the client still initiating the connection to the server as we do today, but instead of polling, we maintain the connection, and once the client has established it, the server is able to push messages to the client.
Our client base may be implementing their solutions on a variety of platforms, from Linux to Windows, .NET to PHP, etc. Additionally, client vendors have varying degrees of technical capability, so the complexity of the solution is a factor here. Can SOAP or REST services be used in this type of architecture? Are there other technologies I should be looking at? The server-side part of the solution would need to be .NET.
I need multiple clients that talk to a WCF service. The WCF service also must be able to connect to any one of the clients also.
So - it sounds like the server and the clients each need to have both a WCF server and a WCF client built in.
Is this correct or is there some way to do this?
I was looking at NetPeerTcpBinding, but that is obsolete. To be fair I'm not sure if that is a valid solution either.
Background:
I plan to have a Windows service installed on hundreds of machines in our network with a WCF service and a WCF client built in.
I will have one Windows service installed on a server with a WCF service and a client built in.
I will have a Windows Forms application
I will have a database
The clients on the network will connect to the service running on the server in order to insert some information on the database.
The user will use the Windows Forms application to connect to the Windows service on the server and this Windows service will connect to the relevant client on the factory floor (to allow remote browsing of files and folders).
Hence I believe the machines on the factory floor and the server both require a WCF client and service built in.
The reason people are recommending wsDualHttpBinding is that it is a secure and interoperable binding designed for use with duplex service contracts, which allow both services and clients to send and receive messages.
The type of communication mentioned, 'duplex', has several variations; half and full duplex are the simplest.
Half Duplex: Works like a walkie-talkie, one person may speak at any given time.
Full Duplex: Like a phone, any person may speak at any given time.
Each introduces its own benefits and problems, and each provides ways to build this communication more effectively based upon your needs.
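As a sketch, a duplex contract pair looks something like this (the names are made up; the CallbackContract attribute is what lets the service push messages back to the client over the same wsDualHttpBinding or netTcpBinding channel):

    using System.ServiceModel;

    [ServiceContract(CallbackContract = typeof(IMachineCallback))]
    public interface IMachineService
    {
        [OperationContract]
        void Register(string machineId);
    }

    public interface IMachineCallback
    {
        // One-way so the service doesn't block while pushing to a client.
        [OperationContract(IsOneWay = true)]
        void BrowseFolder(string path);
    }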
I'm slightly confused, but I'll attempt to clarify.
There are an assortment of approaches that could work here, but a Windows Communication Foundation (WCF) service requires the following:
Address
Binding
Contract
Those are essentially the "ABCs" of WCF. Putting them together gives a picture like this:
As you can see, the service will contain:
Host
Service
Client
The host houses the service, which the client consumes so that the service methods perform a desired task. An example representation:
As you can see, Client-1 goes through the Internet (HTTP, HTTPS, etc.) and then hits the Host, which has the service perform those tasks.
Client-n, on the other hand, consumes the service locally, so it talks over TCP, for example.
The easiest way to remember it: one service can be consumed by however many clients need its methods to perform a task. You can create very complex models using a service-oriented architecture (SOA).
All WCF is, is a means to connect your application to a host or centralized location you may not have access to.
As you can see in the image above, the client communicates through a service to the host, which performs a series of tasks. WCF can talk over an array of protocols. Hopefully this provides a better understanding of how WCF is structured.
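To make the ABCs concrete, here is a minimal self-hosting sketch (the contract, addresses and bindings are illustrative) that gives one service an HTTP endpoint for remote clients and a TCP endpoint for local ones:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IMathService
    {
        [OperationContract]
        double Add(double x, double y);
    }

    public class MathService : IMathService
    {
        public double Add(double x, double y) { return x + y; }
    }

    public class Program
    {
        public static void Main()
        {
            var host = new ServiceHost(typeof(MathService));

            // Address + Binding + Contract = an endpoint.
            host.AddServiceEndpoint(typeof(IMathService),
                new BasicHttpBinding(),          // internet-facing clients (Client-1)
                "http://localhost:8080/math");
            host.AddServiceEndpoint(typeof(IMathService),
                new NetTcpBinding(),             // local/intranet clients (Client-n)
                "net.tcp://localhost:8081/math");

            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }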
There are a lot of tutorials and even posts to get you started, and some excellent books such as "WCF Step by Step".
Essentially you're looking for an asynchronous full-duplex connection, or a synchronous full-duplex service. As mentioned above, your task is in essence the point of a service.
The question: How does this work best?
It will boil down to your design. There are limitations and structures that you will need to adhere to in order to truly optimize it for your goal.
Such obstacles may be:
Server Load
Communication Path
Security
Multiple Clients Altering UI / Same Data
Etc.
The list goes on and on. I'd really recommend looking up tutorials or a few books on WCF. Here are a few:
WCF Step by Step
WCF Multi-Tier Development
WCF Service Development
They will help you work with the service structure to adhere to your desired goal.
Remember the "ABCs" for the most success with WCF.
Use wsDualHttpBinding if you want your service to communicate with your clients.
Read WS Dual HTTP.
You might want to try creating a WCF service using netTcpBinding. It will work for your requirements. You can use the article How to: Use netTcpBinding with Windows Authentication and Transport Security in WCF (Calling from Windows Forms) as a starting point.
Also, there are many examples included within the WCF Samples package which you can use.
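As a rough sketch (not the article's code, and with placeholder service and contract names), the netTcpBinding setup with Windows authentication and transport security looks something like this:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IMachineService
    {
        [OperationContract]
        string[] ListFiles(string path);
    }

    public class MachineService : IMachineService
    {
        public string[] ListFiles(string path) { return new string[0]; }
    }

    public class Program
    {
        public static void Main()
        {
            // Transport security with Windows credentials over TCP.
            var binding = new NetTcpBinding();
            binding.Security.Mode = SecurityMode.Transport;
            binding.Security.Transport.ClientCredentialType = TcpClientCredentialType.Windows;

            var host = new ServiceHost(typeof(MachineService));
            host.AddServiceEndpoint(typeof(IMachineService), binding,
                "net.tcp://localhost:8523/MachineService");

            host.Open();
            Console.WriteLine("Listening. Press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }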
I'm at the investigative stage for Workflow Service and WPF.
Having a State Machine WF Service hosted in IIS and one or more WPF clients talking to the WF Service sounds like a reasonable choice so far.
However, despite days of reading and research, it isn't clear to me what the best strategy would be for tracking the transitions between states from the WPF app.
There are numerous samples of tracking participants, but most of them are based on a single-process scenario.
So I am thinking of a structure as below.
A server-side WCF operation that any client calls to register its client-side endpoint
A custom tracking participant that goes through all registered client-side endpoints and sends a TrackingRecord from its Track() method.
The advantage of this approach is that it allows real-time updates of the states without extra layers like ETW. Another advantage is that it decouples the logic (or model layer) from the presentation layer.
Can anyone share the opinion over the above structure?
I would also welcome any suggestions for achieving the goal.
[EDIT]
To make my idea above more detailed and clear, the steps below describe a typical usage.
1) (WPF client) Contains and opens a WCF endpoint for receiving TrackingRecords.
2) (WF Service) Exposes a WCF operation (or a simple WF instance with a Receive activity) that registers the client-side address in an internal store.
3) (WF Service) A custom tracking participant is created and added that sends TrackingRecords to the registered clients' endpoints.
4) (Client) Connects to the above service, hands over the client-side endpoint mentioned in step 1, and consequently receives TrackingRecords.
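Step 3 might look roughly like this; the callback contract and the client registry are placeholders, and the WF-specific part is deriving from TrackingParticipant and overriding Track():

    using System;
    using System.Activities.Tracking;
    using System.Activities.Statements.Tracking;
    using System.Collections.Generic;

    // Hypothetical callback contract the WPF clients expose (step 1).
    public interface INotificationCallback
    {
        void OnStateChanged(string stateName);
    }

    public class ClientNotifyingTrackingParticipant : TrackingParticipant
    {
        private readonly IEnumerable<INotificationCallback> registeredClients;

        public ClientNotifyingTrackingParticipant(IEnumerable<INotificationCallback> clients)
        {
            registeredClients = clients;
        }

        protected override void Track(TrackingRecord record, TimeSpan timeout)
        {
            // Only forward state machine transitions, for example.
            var stateRecord = record as StateMachineStateRecord;
            if (stateRecord == null)
                return;

            foreach (var client in registeredClients)
            {
                client.OnStateChanged(stateRecord.StateName);
            }
        }
    }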
[EDIT 2]
To put my goal in simple terms, I'd like to know
1) the most efficient way of tracking the state machine's state in a WF Service (IIS) + WPF (or any other type of client app) through a TrackingParticipant, and
2) whether my suggestion can be improved.
Meanwhile, I have implemented this and it works well so far. I also added the MVVM Light framework's messaging feature on the client side so that it propagates the received message to the models easily.
You might take a look at SignalR instead of trying to force WCF to become a pub/sub platform, which is not its strength. There is an example on my blog of the visual tracking sample, with the tracking participant split out from the tracking application so it's not all in one process. That post also links to two other blogs where similar things were done, all using a messaging architecture more suitable to events like this.
http://panmanphil.wordpress.com/2012/11/05/slides-and-sample-from-the-chippewa-valley-code-camp/
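As a very rough sketch of the SignalR route (assuming the Microsoft.AspNet.SignalR package; the hub and method names are made up), the tracking participant just broadcasts to whoever is connected:

    using Microsoft.AspNet.SignalR;

    // Clients subscribe simply by connecting to this hub (mapped at startup
    // with the usual SignalR OWIN/host configuration, not shown here).
    public class TrackingHub : Hub
    {
    }

    public static class TrackingBroadcaster
    {
        // Called from the tracking participant when a state change is observed.
        public static void StateChanged(string instanceId, string stateName)
        {
            var context = GlobalHost.ConnectionManager.GetHubContext<TrackingHub>();
            context.Clients.All.stateChanged(instanceId, stateName);
        }
    }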
There is an existing mechanism that wraps a lot of the functionality you are suggesting (if I understand your needs correctly). If you need a WCF service to communicate in a bidirectional way (i.e. push data to connected clients), I would suggest leveraging the PollingDuplex binding.
I have used PollingDuplex in the past with various Silverlight clients to exchange data, and I have read articles like this one describing the steps to produce the same behavior in the WPF space.
This approach automates much of the endpoint registration and tracking logic that you are apparently planning to do manually.
I hope this helps.
I want to design a new distributed application, but I have a few queries that I need some genius advice on, hopefully from you people:
Scenario
I currently support a legacy application that is starting to fall between the cracks.
It is a distributed client-server app implemented using .NET Remoting. I can't explain exactly what it does, because I'm not allowed to... but let's just say that it does LOTS of MATHS. I want to redesign and rewrite the application using WCF.
Pre-requisites
The server side of the implementation will be hosted in a Windows Service.
The client side will be a Windows Forms application.
The server side will perform lots of memory-intensive processing.
The server will spit this data out to multiple thin clients (20-ish).
The majority of the time the server will be passing data to the clients, but occasionally the clients will be persisting data back to the server.
The speed at which the data is transmitted is highly important; however, I'm well aware that WCF can handle fast distribution of data.
Encryption/Security is not that important as the app will run on a highly protected local network.
Queries
Given the information above:
1) What sort of design pattern am I best going with? Bearing in mind I want the server to continually PUSH newly calculated information to the clients immediately, as opposed to the current implementation, which involves the clients pulling from the server continuously.
2)What type of WCF binding should I use to ensure maximum speed of data transfer? (as close to real-time as possible is what I'm after)
3)Should I use a class library to share the common objects between the client and the server applications?
4)What is the best way in which to databind my objects on the client side in order to see live updates continually as data changes?
If I've forgotten anything then feel free to point this out
Help greatly appreciated.
1) What sort of design pattern am I best going with?
Based on your comments, you're wanting to transform the current polling mechanism to an event-based mechanism. That is, instead of the client constantly checking the server for results, have the server notify the client when a new calculation result is available.
I would recommend using Juval Lowy's Publish-Subscribe Framework for this.
This framework is described in detail in this MSDN article. And you can download the framework's source code for free at Lowy's website, IDesign.net.
Basically, the server logic that performs the calculations inside the Windows service is the Publishing Client in the graphic, and the various WinForm applications are the Subscribing Clients. The Pub/Sub Service lives in your Windows service. It manages the list of subscribing clients and provides a single endpoint for your server to publish calculation results to. In this way, your server performs a calculation and publishes the result once to the Pub/Sub Service endpoint. The Pub/Sub Service is then responsible for publishing the result to the subscribed clients.
2) What type of WCF binding should I use to ensure maximum speed of data transfer?
If all of your WCF communication were on a single machine, you'd want to use the NetNamedPipeBinding. However, since you will be distributed, you want to use the NetTcpBinding.
For WCF binding decisions, I have found this chart useful.
3) Should I use a class library to share the common objects between the client and the server applications?
Since you are in control of both the client and server side, I would highly recommend sharing a class library instead of using Visual Studio's "Add Service Reference" feature. For a detailed discussion of this, refer to this SO question-and-answer.
4) What is the best way in which to databind my objects on the client side in order to see live updates continually as data changes?
I suspect this will depend on what controls you use to display the data. One approach that immediately comes to mind would be to have your client fill an in-memory table or binding list as each calculation result is received. That could then be bound to a ListBox control, for example, that shows the results in calculation order.
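As a sketch (the type names are made up), you could bind a BindingList once and marshal each pushed result onto the UI thread:

    using System.ComponentModel;
    using System.Windows.Forms;

    public class CalculationResult
    {
        public string Summary { get; set; }
    }

    public class ResultsView
    {
        private readonly BindingList<CalculationResult> results = new BindingList<CalculationResult>();

        // Bind once; the ListBox refreshes as items are added.
        public void Bind(ListBox listBox)
        {
            listBox.DataSource = results;
            listBox.DisplayMember = "Summary";
        }

        // Called from the WCF callback thread when the server pushes a result.
        public void OnResultReceived(Control uiControl, CalculationResult result)
        {
            uiControl.BeginInvoke((MethodInvoker)(() => results.Add(result)));
        }
    }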
This looks to me like you need to implement the Observer pattern, but distributed, whereby new calculations are produced by the service and WCF just happens to be the mechanism by which you push the notification back to the client.
Generally speaking, you have your business logic housed in a Windows service, where a type acts as the Subject (Observable). You could publish an endpoint for clients to register for notifications. This would be a WCF service with potentially two operations:
RegisterClient(...)
UnregisterClient(...)
When a client is registered with the service, it can receive updates. Broadly speaking, when the service has finished calculating a result, it iterates through all registered clients and initiates a push, the push being a communication through an endpoint on the client.
A client endpoint might typically be
Notify(Result...);
And your server simply calls that when it has new data...
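Purely as a sketch (the names are illustrative), the registration and push could look like this in WCF:

    using System.Collections.Generic;
    using System.ServiceModel;

    [ServiceContract(CallbackContract = typeof(IResultCallback))]
    public interface ICalculationService
    {
        [OperationContract]
        void RegisterClient();

        [OperationContract]
        void UnregisterClient();
    }

    public interface IResultCallback
    {
        [OperationContract(IsOneWay = true)]
        void Notify(double result);
    }

    [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
    public class CalculationService : ICalculationService
    {
        private readonly List<IResultCallback> clients = new List<IResultCallback>();
        private readonly object sync = new object();

        public void RegisterClient()
        {
            lock (sync) { clients.Add(OperationContext.Current.GetCallbackChannel<IResultCallback>()); }
        }

        public void UnregisterClient()
        {
            lock (sync) { clients.Remove(OperationContext.Current.GetCallbackChannel<IResultCallback>()); }
        }

        // Called by the calculation engine whenever it has a new result.
        public void Publish(double result)
        {
            lock (sync)
            {
                foreach (var client in clients)
                {
                    client.Notify(result);   // the push to each registered client
                }
            }
        }
    }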
Typically you'd use TCP to maximise throughput.
This is by no means exactly what you should do, but perhaps it's a direction to start in?