WCF communication with several clients without IIS - C#

We're working on peer-to-peer communication software that would allow a number of grocery stores to sync their inventory with what we call "headquarters".
To do this, we're thinking WCF + WPF, with no IIS and no web services. My experience with WCF is basically zero, so my question is whether a TCP-based solution using WCF would work. The data being transferred is quite limited, about 2 MB for a compressed plain-text file (so we're sending binary data!), and this is done only once per day, so bandwidth/load shouldn't be an issue here.
The idea at this point is to have a WCF "server" running at HQ. Stores make themselves known to that server and then send files back and forth (similar to a chat application).
What I'm not sure of: does every store need to have a WCF "server" (or endpoint)? How would the server (= HQ) send a file to one of the clients (= stores)? Every store can send a file to any other store and to HQ, and every store can also "request" a file from any other store or from HQ.
Two limitations: none of the machines involved can run Windows Server for budget reasons, and, as stated before, IIS is a no-go.

If you are only sending files back and forth, I might question whether WCF even makes sense. Have you considered just using a file-transfer protocol, like SCP or SFTP?
Every machine will have to accept connections and have a file-drop location set up, and then your application will have to monitor that location for new files. I love WCF in general, but a file-transfer protocol has a leg up if that is all you want to do.
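If you do go the file-drop route, monitoring the drop location from .NET is simple enough. Here is a minimal sketch using FileSystemWatcher; the folder path and file filter are placeholders for this example:

using System;
using System.IO;

class DropFolderMonitor
{
    static void Main()
    {
        // Hypothetical drop location; point this at wherever your
        // SFTP/SCP server writes incoming files.
        var watcher = new FileSystemWatcher(@"C:\InventoryDrop")
        {
            Filter = "*.zip", // the compressed inventory files
            NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite
        };

        watcher.Created += (sender, e) =>
        {
            // The file may still be locked mid-transfer; a real service
            // would retry until it can open the file exclusively.
            Console.WriteLine("New file received: " + e.FullPath);
        };

        watcher.EnableRaisingEvents = true;
        Console.WriteLine("Watching for incoming files; press Enter to stop.");
        Console.ReadLine();
    }
}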

If you direct all of your traffic via the server, then there's no reason why you couldn't achieve this with WCF. The server would host the WCF services (self-hosted in a Windows service, given that IIS is ruled out), with the stores having a client able to upload and request files. With this method, stores would not be able to transfer files directly to each other; they would have to do it via the main server, which would suit your needs given the budget constraints.
If all transfers are made once per day, the requests for files would be made with each client requesting the files it requires, followed by each client uploading any files that are required by the server or any other client. The final step would be the server distributing the required files to each client. Obviously this is a simplified view; the actual process may require some more thought.

You don't need to host WCF in IIS, but is there any particular reason you don't want to do that?
You can host WCF in a ServiceHost, but then you need to build, maintain and deploy a lot of server/service features that IIS provides for free, such as application process recycling, activation-based hosting, etc.
In any case, it almost sounds like you need peer-to-peer networking. You can do that with WCF using the NetPeerTcpBinding.
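To make the self-hosting option concrete, here is a rough sketch of a ServiceHost running at HQ over net.tcp; the contract, address, and quota values are assumptions for illustration:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IInventorySync
{
    // Hypothetical operation: a store pushes its compressed inventory file.
    [OperationContract]
    void UploadInventory(string storeId, byte[] compressedData);
}

public class InventorySyncService : IInventorySync
{
    public void UploadInventory(string storeId, byte[] compressedData)
    {
        Console.WriteLine("Received " + compressedData.Length + " bytes from store " + storeId);
    }
}

class Program
{
    static void Main()
    {
        // Self-hosted TCP endpoint; no IIS involved. The quotas are raised
        // because the 64 KB defaults would reject the ~2 MB files.
        var binding = new NetTcpBinding { MaxReceivedMessageSize = 4 * 1024 * 1024 };
        binding.ReaderQuotas.MaxArrayLength = 4 * 1024 * 1024;

        using (var host = new ServiceHost(typeof(InventorySyncService)))
        {
            host.AddServiceEndpoint(typeof(IInventorySync), binding,
                "net.tcp://localhost:9000/InventorySync");
            host.Open();
            Console.WriteLine("HQ service running; press Enter to stop.");
            Console.ReadLine();
        }
    }
}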

If you have an opportunity to redesign your application, I suggest you do. You can throw strings around in WCF, but if you create a data contract you can keep all your communication strongly typed (see the sketch below).
If you have access to Windows Server 2008, then the new IIS (via the Windows Process Activation Service) can host your WCF service even if it isn't using HTTP. Otherwise you just need to write an application that opens a ServiceHost, which you would usually wrap in a Windows service. But as @Mark Seemann pointed out, you get lots of freebies by running your service in IIS.
I don't have any experience with the NetPeerTcpBinding, but I can tell you that NetTcpBinding is nice and fast, plus it comes with all sorts of goodies like encryption and authentication if you want them.
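To illustrate the data-contract suggestion, something like this keeps the daily payload strongly typed instead of throwing strings around (the type and its members are invented for the example):

using System;
using System.Runtime.Serialization;

// Hypothetical contract for the daily sync payload.
[DataContract]
public class InventorySnapshot
{
    [DataMember]
    public string StoreId { get; set; }

    [DataMember]
    public DateTime TakenAtUtc { get; set; }

    // The compressed inventory data (~2 MB in the scenario above).
    [DataMember]
    public byte[] CompressedData { get; set; }
}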

Related

Simplest architecture for service to service data exchange

I have a public server with web services (.NET) that collect data and uploaded files from different mobile apps, and I need to synchronise it with an internal intranet server.
The intranet server is deeply protected by firewalls and organisation policies.
I think this is a pretty common scenario where messages and brokers could be used, something like RabbitMQ or NServiceBus, but I'm not an expert on it.
As the data is only to be sent from the external server to the intranet one, in a unidirectional and asynchronous way, I was thinking not to add another layer of indirection to the architecture and to just use the already exposed web services for server-to-server communication.
The approach would be like this (a rough sketch follows the steps):
- An intranet Windows service would regularly poll the external web service at scheduled intervals to learn whether there is new data to get (maybe from a certain point in time)
- The web service would respond with the list of new data and files
- The Windows service would then iterate with calls to fetch all the data to be inserted into the intranet and download the uploaded files
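A sketch of that polling loop, with made-up endpoint URLs and a placeholder for response parsing:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class IntranetPoller
{
    // Hypothetical base address of the external web service.
    static readonly HttpClient Http =
        new HttpClient { BaseAddress = new Uri("https://external.example.com/") };

    static async Task PollOnceAsync(DateTime lastSyncUtc)
    {
        // 1. Ask the external service what is new since the last sync.
        string listJson = await Http.GetStringAsync(
            "api/changes?since=" + Uri.EscapeDataString(lastSyncUtc.ToString("O")));

        // 2. Iterate and pull each new item into the intranet.
        foreach (string id in ParseIds(listJson))
        {
            byte[] payload = await Http.GetByteArrayAsync("api/items/" + id);
            // ...insert the data / save the file on the intranet side...
        }

        // 3. Persist the new lastSyncUtc so the next poll continues from here.
    }

    // Placeholder: parse the service's response into item ids.
    static string[] ParseIds(string json) => Array.Empty<string>();
}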
What are the risks of this approach? Would it be better for the external web service to respond with just a link to a single large zipped file containing all the data and files?
Should I use something like RabbitMQ even for such a simple scenario?
If you are literally dealing with files, you might want to think about something even simpler. FTP (more specifically, SFTP) might fit your needs better and be FAR simpler to implement.

WCF architecture help needed

We are planning on implementing our new software application as shown below.
Does this architecture look fit for purpose?
Items to note:
- There are many PCs.
- Each PC has a WCF client, as it needs to upload data to the database periodically.
- Each PC also has a WCF server, because the end user on the terminal server needs to be able to interrogate the PC for information.
- The terminal server is the GUI for users, so they can remotely connect to a specific PC and interrogate it for information.
- We are using basicHttpBinding below.
What else have we considered?
- We have tried WCF NetPeerTcpBinding (i.e. P2P), but it does not support request-reply operations.
- We have tried WCF duplex, but with the requirements listed above we would end up with a client and a server at both ends anyway.
Well, I apologize, but I basically disagree with your architecture.
WCF is not designed or suited for anything other than request-response communication.
Its full-duplex ability will not enable your server side to initiate communication with a specific client unless that client has already opened a connection to the server (see the sketch below).
That means that in order to achieve persistent, full-duplex communication with all your clients, every client must maintain an open connection to the server.
Having a dual client and server per PC in order to achieve full duplex is a step forward, as it solves the issue of keeping a connection open per client. However, it has downsides in terms of security, since each PC is now open to receiving multiple connection requests, and deadly reentrancy issues can occur if you're not careful. So basically you would be trading 'ports' for architectural maintainability and fitness to your solution.
So if you are targeting a deployment of around 200-300 PCs, your architecture will hold, but if you are targeting a larger deployment of thousands of PCs, it will not.
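For reference, the duplex arrangement described above looks roughly like this in WCF; the server can only call back over a channel the client opened first (the contract names are illustrative):

using System.ServiceModel;

// Callback contract the server uses to reach a PC.
public interface IPcCallback
{
    [OperationContract(IsOneWay = true)]
    void RequestInformation(string query);
}

[ServiceContract(CallbackContract = typeof(IPcCallback))]
public interface IPcRegistration
{
    // A PC calls this to register; WCF captures its callback channel.
    [OperationContract]
    void Register(string pcId);
}

public class PcRegistrationService : IPcRegistration
{
    public void Register(string pcId)
    {
        // The server can push to this PC only while the underlying
        // connection the client opened remains alive.
        var callback = OperationContext.Current.GetCallbackChannel<IPcCallback>();
        callback.RequestInformation("status?");
    }
}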

Concurrency management in WCF

I have implemented a fairly simple WCF service which handles file transfers from my clients to the server. The problem is that when a client sends a file request, all of the bandwidth is allocated to that single client, and others have to wait until the requested file transfer is completed.
So my question is: how do I make the service more efficient and let the users share the bandwidth?
[ServiceBehavior(IncludeExceptionDetailInFaults = true,
                 InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
I set the InstanceContextMode attribute to PerCall, but that didn't do the trick.
UPDATE: this project is similar to mine:
http://www.codeproject.com/Articles/33825/WCF-TCP-based-File-Server
WCF does not have proper load balancing; you will have to develop it yourself.
If you are transferring files (let's assume a download), you should send packets of data rather than the complete file at once. When doing this, add delays/sleeps to the process to limit the number of bytes the server sends in each time window, which will make room for other requests.
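Something along these lines; the chunk size and sleep interval are arbitrary and would need tuning:

using System.IO;
using System.Threading;

public static class ThrottledTransfer
{
    // Sketch of a server-side loop that streams a file in chunks and
    // sleeps between writes so one client cannot monopolise the link.
    public static void SendFileInChunks(Stream source, Stream destination)
    {
        var buffer = new byte[64 * 1024]; // 64 KB chunks (arbitrary)
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            destination.Write(buffer, 0, read);
            Thread.Sleep(50); // crude throttle: roughly 1.25 MB/s per client
        }
    }
}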
It's questionable whether it's desirable to serve up files through a WCF endpoint; the reasons against doing this are pretty much exactly the problems you have been having. It works for a few clients at a time, but scaling out requires hosting new instances of the service behind a load balancer.
It would be worth considering hosting your files with some kind of storage service and having your WCF service simply return a link or handle to the file, so the file can be retrieved out of band. Microsoft created Azure Blob Storage for this exact purpose.
I appreciate this does not address your original question, and I understand the scope of your requirement may not accommodate a large reworking.
Another option is to use a chunking channel if you are transferring large files. Examples: MSDN, CodePlex.
That said, I agree with @hugh's position.

Server Push vs Client Pull for Agent-Server Topology

I need to create a system comprising two components:
- A single server that processes and stores data. It also periodically sends updates out to the agents.
- Multiple agents that are installed at remote endpoints. These collect data in (often, but not always) long-running operations, and this data needs to get to the server.
I'm using C# .NET, and ideally I want to use a standards-compliant communications method (i.e. one that could theoretically work with Java too, as we may well also use Java agents in the future). Are there any alternatives to web services? What are my options?
The way I see it, I have 3 options using web services, and have made the following observations:
Client pull
- No open port required at the agent, as it acts like a client
- Would need to poll the server for updates
Server push
- Open port required at the agent, as it acts like a server
- Server must poll agents for results
Hybrid
- Open port required at the agent, as it acts like both a client and a server
- No polling; the server pushes out updates when required, and the client sends results when they are available
The 'hybrid' (where agents are both client and server) seems the obvious choice, but this application will typically be installed in enterprise and government environments, and I'm concerned they may have an issue with opening a port at the agent. Am I dwelling too much on this?
Are there any other pros and cons I've missed?
Our friends at http://www.infrastructures.org swear by pull-based mechanisms: http://www.infrastructures.org/papers/bootstrap/bootstrap.html
A major reason why they prefer client-pull over server-push is that clients may be down, and clients must (in general) apply all the operations pushed by servers. If this criterion isn't important in your case, perhaps their conclusion won't be your conclusion, but I do think it is worth reading the "Push vs Pull" section of their paper to determine for yourself.
I would say that in this day and age you can seriously consider only pull technologies. The problem with push is that clients are often hidden behind Network Address Translation (NAT) devices like wireless routers, broadband modems, or company firewalls, and they are, more often than not, unreachable from the server.
Making outbound connections ('phone home'), especially on well-known ports like HTTP/HTTPS, can basically be assumed to be possible even on the most constricted networks.
If you use some kind of messaging server (JMS for Java; I'm not sure of the equivalent for C#), then your messaging server is the only server that needs to open a port, and you can have two-way communication from your agent to the messaging server and from the server to the messaging server. This would allow you to accomplish the hybrid model without needing to open a port on the agent machine.
IMHO, your best option is the pull option; it can satisfy your main system requirements as follows:
The first part (data needs to get to the server) can obviously be done by invoking a web method that sends that data as a parameter.
The second part (the server periodically sends out updates to the agents) can still be done through regular client pulls: some sort of web-service method that asks for the updates since the client's last pull (using a timestamp of some kind to pick up the updates it missed; see the sketch below).
The hybrid method seems a bit weird to me, given that I think of an agent as a part of the system that may well go offline quite often. What would the server do if a push fails? That's usually a tough question/decision, especially if you're not sure whether this is an intended disconnect or a system/network failure, etc.
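For what it's worth, a sketch of what that pull-style contract could look like; the names and the Update shape are invented for illustration:

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class Update
{
    [DataMember] public DateTime IssuedUtc { get; set; }
    [DataMember] public byte[] Payload { get; set; }
}

// Illustrative pull-style contract: agents push results up and poll for
// anything they missed since their last pull.
[ServiceContract]
public interface IAgentService
{
    [OperationContract]
    void SubmitResults(string agentId, byte[] data);

    [OperationContract]
    Update[] GetUpdatesSince(string agentId, DateTime lastPullUtc);
}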

Available options for hosting FTP server in .NET application

I need to implement an FTP service inside my .NET application (running as a Windows service) and have not had much luck finding good/current source code or vendors. Ideally it needs to be able to respond to the basic FTP protocol and accept the data from an upload via a stream, enabling me to process the data as it is being received (think on-the-fly hashing).
I need to be able to integrate it into my service because it will stack on top of our current code base with an existing custom TCP/IP communication protocol. I don't want to write (and then spend time debugging and performance testing) my own protocol, or implementation.
I have already found plenty of FTP client implementations; I just need an acceptable server solution.
There is an article about rolling your own FTP server in C# here. It's a bit old, but it might be complete enough for your requirements.
If you can get away from the need to process inbound data on the fly, I'd suggest just using an off-the-shelf FTP server (maybe even IIS) and processing the received files from a folder. Your service could easily monitor this folder for new files. The other benefit of this is that files could be received even while your service is not running or is restarting, and testing would be easier, as you could drop your own files into the monitored folder.
Good luck!
Hope you find RemObjects' free Internet Pack nice to use:
http://www.remobjects.com/ip.aspx
After installation you will see the samples.
