From my client/server I receive serialized data. Once the data is deserialized, it goes into a command handler, where receivedData.Action is the ClientMessage:
Command._handlers[receivedData.Action].Handle(receivedData.Profile);
The command handler will work out the client message and return the response that should be given to the client.
I have an enum for the client messages as follow:
public enum ClientMessage
{
INIT = 1,
NEW_PROFILE,
UPDATE_PROFILE_EMAIL,
UPDATE_PROFILE_PASSWORD,
UPDATE_PROFILE_PHONE,
UPDATE_PROFILE_DATE,
UPDATE_PROFILE_SECRET_ANSWER,
UPDATE_PROFILE_POSTAL_CODE,
UPDATE_SUCCESS,
PING,
PONG,
QUIT
}
What I am having difficulty with is how to define all the actions. For example:
Should I have a separate enum for what the client sends and another for what the server should reply with?
Or should I have a single enum with all messages and use it for both directions?
Or how else should I go about defining the messages and handling them?
This is what my server/client currently does just to give you a better view:
Server starts
Client connects
Client sends auth to server
Server verifies the client and sends a connection-approved message
Client then starts sending and updating profiles on the server
This is roughly an example only.
IPacketHandler
public interface IPacketHandler
{
MyCommunicationData Handle(ProfileData profile);
}
Command
public class Command
{
public static Dictionary<ClientMessage, IPacketHandler> _handlers = new Dictionary<ClientMessage, IPacketHandler>()
{
{ClientMessage.INIT, new Init()},
{ClientMessage.NEW_PROFILE, new NewProfile()},
{ClientMessage.UPDATE_PROFILE_EMAIL, new UpdateEmail()},
{ClientMessage.UPDATE_PROFILE_PASSWORD, new UpdatePassword()},
{ClientMessage.UPDATE_PROFILE_PHONE, new UpdatePhone()},
{ClientMessage.UPDATE_PROFILE_DATE, new UpdateDate()},
{ClientMessage.UPDATE_PROFILE_SECRET_ANSWER, new UpdateSecretAnswer()},
{ClientMessage.UPDATE_PROFILE_POSTAL_CODE, new UpdatePostalCode()},
{ClientMessage.UPDATE_SUCCESS, new Success()},
{ClientMessage.PING, new Ping()},
{ClientMessage.PONG, new Pong()},
{ClientMessage.QUIT, new Quit()},
};
}
Example of the INIT
public class Init : IPacketHandler
{
public MyCommunicationData Handle(ProfileData profile)
{
// Some verification to auth the client here
// bla bla
// return response
return new MyCommunicationData() { Action = ClientMessage.CONNECTED };
}
}
PS: If my title is off and you have a better suggestion, let me know or go ahead and update it; I was not sure how to describe this in English.
If your question is about how to design the classes and interactions, as I understood it, then I would (and it's totally dependent on the specifics of your application) separate this big enum into smaller ones that are more descriptive of what they do and of your intentions, for example ProfileAction, ActionResult, PingStatus, etc. Then, when you use these enums, you get compile-time checks that you're doing it correctly; otherwise, what you're doing is almost like just passing strings around.
It also has to do with sticking to Single Responsibility principle in OO design: an object should have single responsibility. Your enum as it stands now has more than one responsibility.
With issues like these, I find it helpful to look at what the .NET Framework does: for example, look at the Ping class and how it uses the IPStatus enumeration, among others.
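For illustration, a split along those lines (the enum names follow the suggestion above; the members are only a sketch):

// Hypothetical split of the single ClientMessage enum; members are illustrative only.
public enum ProfileAction
{
    Create,
    UpdateEmail,
    UpdatePassword,
    UpdatePhone,
    UpdateDate,
    UpdateSecretAnswer,
    UpdatePostalCode
}

public enum ActionResult
{
    Connected,
    UpdateSuccess,
    Failed
}

public enum PingStatus
{
    Ping,
    Pong
}

A handler that updates a profile can then accept a ProfileAction and return an ActionResult, so the compiler stops you from, say, replying to a PING with UPDATE_PROFILE_EMAIL.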
Not sure I'd use an enum at all. They are great inside a piece of code; exposed as a communicated value, they are considerably less great.
For me, I'd have a different class per message, not one message class with a god property.
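A rough sketch of that idea, reusing MyCommunicationData from the question (all other names here are illustrative):

// Each message is its own class; the payload lives on the message instead of a shared Profile parameter.
public abstract class Message { }

public sealed class InitMessage : Message
{
    public string AuthToken { get; set; }
}

public sealed class UpdateEmailMessage : Message
{
    public int ProfileId { get; set; }
    public string NewEmail { get; set; }
}

// Handlers are then selected by message type rather than by an enum value.
public interface IPacketHandler<TMessage> where TMessage : Message
{
    MyCommunicationData Handle(TMessage message);
}

public sealed class UpdateEmailHandler : IPacketHandler<UpdateEmailMessage>
{
    public MyCommunicationData Handle(UpdateEmailMessage message)
    {
        // update the e-mail for message.ProfileId here
        return new MyCommunicationData();
    }
}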
I have a class that receives messages from a queue; once I get a message, I need to upload it to the cloud and then send it to another service. Three different jobs have to be done in a single class. What I'm doing is:
private async Task ProcessMessageAsync(ProcessMessageEventArgs args)
{
try
{
//i get the message first
var incomingMessage = JsonConvert.DeserializeObject<RequestRefundModel>(args.Message.Body.ToString());
//need to upload it
var sendtoBlobResult = await uploadCsv.ExecuteUseCaseAsync(incomingMessage).ConfigureAwait(false);
//prepare to send it to another service
SendFileToAggregatorModel sendToAggregator = new();
sendToAggregator.Metadata = new ResponseCsvRefundModel() { Transactions = incomingMessage.FileBody};
sendToAggregator.TransactionType = incomingMessage.TransactionType;
sendToAggregator.URL = sendtoBlobResult.URL;
await sendFile.ExecuteUseCaseAsync(sendToAggregator);
}
catch (Exception ex)
{
////
}
}
Am I breaking the single responsibility principle? If so, could you clarify what I'm missing and how to fix it?
Your code reveals a process that consists of three individual steps. You have already created separate instances to handle these steps (e.g. uploadCsv and sendFile). What you may be missing is a fourth class to describe the process itself. So you could create a new class called RequestRefundProcessor, RequestRefundOrchestrator or RequestRefundFlow whose sole responsibility is to describe the individual steps required to request a refund. I am purposely using language like may and could, because it is up to you to decide whether this makes sense or not. Creating a new class just for 8 lines of code may be overkill. Then again, if there are other classes that can reuse this code, it makes sense to do it. In my opinion, I would move the code to its own class, because it helps to extract the actual business process from the technical message-processing framework you are using.
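For illustration, a minimal sketch of such a class, assuming hypothetical interfaces (IUploadCsvUseCase, ISendFileUseCase) behind the uploadCsv and sendFile instances from the question:

using System.Threading.Tasks;

// Sole responsibility: describe the steps of the refund request process.
public class RequestRefundFlow
{
    private readonly IUploadCsvUseCase uploadCsv;
    private readonly ISendFileUseCase sendFile;

    public RequestRefundFlow(IUploadCsvUseCase uploadCsv, ISendFileUseCase sendFile)
    {
        this.uploadCsv = uploadCsv;
        this.sendFile = sendFile;
    }

    public async Task RunAsync(RequestRefundModel incomingMessage)
    {
        // Step 1: upload the CSV to blob storage.
        var sendToBlobResult = await uploadCsv.ExecuteUseCaseAsync(incomingMessage).ConfigureAwait(false);

        // Step 2: forward the file to the aggregator.
        var sendToAggregator = new SendFileToAggregatorModel
        {
            Metadata = new ResponseCsvRefundModel { Transactions = incomingMessage.FileBody },
            TransactionType = incomingMessage.TransactionType,
            URL = sendToBlobResult.URL
        };
        await sendFile.ExecuteUseCaseAsync(sendToAggregator).ConfigureAwait(false);
    }
}

ProcessMessageAsync would then only deserialize the message, call RunAsync, and handle exceptions.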
In the end, the single responsibility principle is an architectural guideline, so the answer to your question will always be: it depends™
I just started to learn C# for a school project but I'm stuck on something.
I have a solution with 2 projects (and each project has a class), something like this:
Solution:
Server (project) (...) MyServerClass.cs, Program.cs
App (project) (...) MyAppClass.cs, Program.cs
In my "MyServerClass.cs", I have this:
class MyServerClass
{
...
public void SomeMethod()
{
Process.Start("App.exe", "MyAppClass");
}
}
How can I properly send, for example, an IP address and port? Would something like this work?
class MyServerClass
{
....
public void SomeMethod()
{
string ip = "127.0.0.1";
int port = 8888;
Process.Start("App.exe", "MyAppClass " + ip + " " + port);
}
}
Then in my "MyAppClass.cs", how can I receive that IP address and port?
EDIT:
The objective of this work is to practice processes/threads/sockets. The idea is to have a server that receives emails and filters them, to know whether they're spam or not. We have to have 4 or 5 filters. The idea was to have them as separate projects (e.g. Filter1.exe, Filter2.exe, ...), but I was trying to have only one project (e.g. Filters.exe) with the filters as classes (Filter1.cs, Filter2.cs, ...), and then create a new process for each different filter.
I guess I'll stick to a project for each filter!
Thanks!
There are a number of ways to achieve this, each with their own pros/cons.
Some possible solutions:
Pass the values in on the command line. Pros: Easy. Cons: Can only be passed in once on launch. Unidirectional (child process can't send info back). Doesn't scale well for complex structured data. (A minimal sketch of this option follows after this list.)
Create a webservice (either in the server or client). Connect to it and either pull/push the appropriate settings. Pros: Flexible, ongoing, potentially bi-directional with some form of polling and works if client/server are on different hosts. Cons: A little bit more complex, requires one app to be able to locate the web address of the other which is trivial locally and more involved over a network.
Use shared memory via a memory mapped file. This approach allows multiple processes to access the same chunk of memory. One process can write the required data and the others can read it. Pros: Efficient, bi-directional, can be disk-backed to persist state through restarts. Cons: Requires pointers and an understanding of how they work. Requires a little more manipulation of data to perform a read/write.
There are dozens more ways. Without knowing your situation in detail, it's hard to recommend one over another.
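For the command-line option above, a minimal sketch (assuming App.exe is a console project with a standard Main(string[] args)):

// Server side: start the child process and pass the values as arguments.
string ip = "127.0.0.1";
int port = 8888;
Process.Start("App.exe", ip + " " + port);

// App.exe side: read them back in Main.
static void Main(string[] args)
{
    // With Main(string[] args), args[0] is the first real argument
    // (unlike Environment.GetCommandLineArgs(), where index 0 is the exe path).
    string ip = args[0];
    int port = int.Parse(args[1]);
    Console.WriteLine("Received {0}:{1}", ip, port);
}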
Edit Re: Updated requirements
Ok, command line is definitely a good choice here. A quick detour into some architecture...
There's no reason you can't do this with a single project.
First up, use an interface to make sure all your filters are interchangeable. Something like this...
public interface IFilter {
FilterResult Filter(string email);
void SetConfig(string config);
}
SetConfig() is optional but potentially useful to reconfigure a filter without a recompile.
You also need to decide what your IFilter's FilterResult is going to be. Is it a pass/fail? Or a score? Maybe some flags and other metrics.
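For example, FilterResult could be as simple as this (purely a sketch; pick whatever metrics you actually need):

public class FilterResult
{
    public bool IsSpam { get; set; }       // pass/fail verdict
    public double Score { get; set; }      // e.g. 0.0 (clean) to 1.0 (definitely spam)
    public string FilterName { get; set; } // which filter produced the result
}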
If you wanted to do multiple projects, you'd put that interface in a "shared" or "common" project on its own and reference it from every other project. This also makes it easy for third parties to develop a filter.
Anyway, next up. Let's look at how the filter is hosted. You want something that's going to listen on the network, but that's not the responsibility of the filter itself, so we need a network client. What you use here is up to you; WCF in one flavour or another seems to be a prime candidate. Your network client class should take in its constructor an endpoint to listen on and an instance of the filter...
public class NetworkClient {
private string endpoint;
private IFilter filter;
public NetworkClient(string Endpoint, IFilter Filter) {
this.filter = Filter;
this.endpoint = Endpoint;
this.Setup();
}
void Setup() {
// Set up your network client to listen on endpoint.
// When it receives a message, pass it to filter.Filter(msg);
}
}
Finally, we need an application to host everything. It's up to you whether you go for a console app or winforms/wpf. Depends if you want the process to have a GUI. If it's running as a service, the UI won't be visible on a user desktop anyway.
So, we'll have a process that takes the endpoint for the NetworkClient to listen on, a class name for the filter to use, and (optionally) a configuration string to be passed in to the filter before first use.
So, in your app's Main(), do something like this...
static void Main() {
try {
const string usage = "Usage: Filter.exe Endpoint FilterType [Config]";
var args = Environment.GetCommandLineArgs();
Type filterType;
IFilter filter;
string endpoint;
string config = null;
NetworkClient networkClient;
switch (args.Length) {
// Environment.GetCommandLineArgs() includes the executable path as args[0],
// so a length of 1 means no arguments were supplied.
case 1:
throw new InvalidOperationException(String.Format("{0}. An endpoint and filter type are required", usage));
case 2:
throw new InvalidOperationException(String.Format("{0}. A filter type is required", usage));
case 3:
// We've been given an endpoint and type
break;
case 4:
// We've been given an endpoint, type and config.
config = args[3];
break;
default:
throw new InvalidOperationException(String.Format("{0}. Max three parameters supported. If your config contains spaces, ensure you are quoting/escaping as required.", usage));
}
endpoint = args[1];
filterType = Type.GetType(args[2]); //Look at the overloads here to control where you're searching
// Now actually create an instance of the filter
filter = (IFilter)Activator.CreateInstance(filterType);
if (config != null) {
// If required, set config
filter.SetConfig(config);
}
// Make a new NetworkClient and tell it where to listen and what to host.
networkClient = new NetworkClient(endpoint, filter);
// In a console, loop here until shutdown is requested, however you've implemented that.
// In winforms, the main UI loop will keep you alive.
} catch (Exception e) {
Console.WriteLine(e.ToString()); // Or display a dialog
}
}
You should then be able to invoke your process like this...
Filter.exe "127.0.0.1:8000" MyNamespace.MyFilterClass
or
Filter.exe "127.0.0.1:8000" MyNamespace.MyFilterClass "dictionary=en-gb;cutoff=0.5"
Of course, you can use a helper class to convert the config string into something your filter can use (like a dictionary).
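A rough sketch of such a helper, assuming the "key=value;key=value" format used in the example above:

using System;
using System.Collections.Generic;

public static class FilterConfig
{
    // Parses "dictionary=en-gb;cutoff=0.5" into a dictionary of settings.
    public static Dictionary<string, string> Parse(string config)
    {
        var settings = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
        foreach (var pair in config.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries))
        {
            var parts = pair.Split(new[] { '=' }, 2);
            settings[parts[0].Trim()] = parts.Length > 1 ? parts[1].Trim() : "";
        }
        return settings;
    }
}

A filter's SetConfig could then do var settings = FilterConfig.Parse(config); and read settings["dictionary"].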
When the network client gets a FilterResult back from the filter, it can pass the data back to the server / act accordingly.
I'd also suggest a little reading on Dependency Injection / Inversion of control and Unity. It makes a pluggable architecture much, much simpler. Instead of instantiating everything manually and tracking concrete instances, you can just do something like...
container.Resolve<IFilter>(filterType);
And the container will make sure that you get the appropriate instance for your thread/context.
Hope that helps
I am developing server-side logic to process requests and respond with data to a front-end server as well as to direct connections from a mobile app.
I have implemented a SessionContext class that basically ensures there is a matching session record in the DB for every service that is called (with exceptions for forgot-password cases, etc.).
I am now trying to implement event logging. I want to have common logic so I can log all requests, exceptions, data, etc.
I have come up with this code, but somehow I don't feel good about it: too much code for each service method. Are there any clever tricks that I might implement to make it shorter and easier to read/code?
The current implementation uses an EventLogic class to log events to an event table. At some point some events might be related to a session, so I am passing eventLog as a parameter to SessionContext (to create a link between the event and the session). SessionContext saves entity data on successful dispose... I have a gut feeling that something is wrong with my design.
public Session CreateUser(string email, string password, System.Net.IPAddress ipAddress)
{
using (var eventLog = new EventLogic())
{
try
{
eventLog.LogCreateUser(email, password, ipAddress);
using (var context = SessionContext.CreateUser(eventLog, email, password, ipAddress))
{
return new Session()
{
Id = context.Session.UId,
HasExpired = context.Session.IsClosed,
IsEmailVerified = context.Session.User.IsEmailVerified,
TimeCreated = context.Session.TimeCreated,
PublicUserId = CryptoHelper.GuidFromId(context.Session, context.Session.UserId, CryptoHelper.TargetTypeEnum.PublicUser),
ServerTime = context.Time
};
}
}
catch (Exception e)
{
eventLog.Exception(e);
throw; // re-throw after logging so the error isn't swallowed (and so all code paths return a value)
}
}
}
You should consider using an existing logging framework for this, for example Serilog, NLog or log4net (the .NET counterparts of Java's SLF4J + Logback).
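As a sketch of what that can look like with Serilog (assuming the Serilog and Serilog.Sinks.Console packages; NLog and log4net are similar in spirit):

using Serilog;

// One-time setup at application start.
Log.Logger = new LoggerConfiguration()
    .WriteTo.Console()
    .CreateLogger();

// In the service method, instead of a hand-written EventLogic call per method
// (note: avoid logging the password itself):
Log.Information("CreateUser requested for {Email} from {IpAddress}", email, ipAddress);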
If your classes follow the SRP, you should not have more than one call like LogCreateUser in your entire application, and that means there is no need to extract the logging logic into a separate class.
Our application calls external services like
//in client factory
FooServiceClient client = new FooServiceClient(binding, endpointAddress);
//in application code
client.BarMethod(); //or other methods
Is it possible to track all of these calls (e.g. by events or something like that) so that the application can collect statistics like the number of calls, response times, etc.? Note that my application itself needs to access the values, not only write them to a log file.
What I can think of is to create a subclass of the Visual Studio-generated FooServiceClient and then add code like this
override void BarMethod()
{
RaiseStart("BarMethod");
base.BarMethod();
RaiseEnd("BarMethod);
}
and the RaiseStart and RaiseEnd method will raise events that will be listened by my code.
But this seems tedious (because there are a lot of methods to override), there is a lot of repeated code, my code needs to change every time the service contract changes, etc. Is there a simpler way to achieve this, for example by using reflection to create the subclass or by tapping into a built-in mechanism in WCF, if any?
The first thing I would look at is whether the counters available in your server's Performance Monitor can provide you with the kind of feedback you need. There are built-in counters for a variety of metrics for ServiceModel endpoints, operations and services. Here is some more info: http://msdn.microsoft.com/en-us/library/ms735098.aspx
You could try building an implementation of IClientMessageInspector, which has methods that are called before the request is sent and after the reply is received. You can inspect the message, make logs, etc. in these methods.
You provide an implementation of IEndpointBehavior which applies your message inspector, and then add the endpoint behavior to your proxy client instance.
client.Endpoint.Behaviors.Add(new MyEndpointBehavior())
Check out the docs for message inspectors and endpoint behaviors; there are many different ways of applying them (attributes, code, endpoint XML config). I can't remember off the top of my head which apply to which, as there are also IServiceBehavior and IContractBehavior. I do know for sure that endpoint behaviors can be added to the client proxy's behavior collection, though.
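A minimal sketch of that approach (class names are illustrative; the operation can be identified from request.Headers.Action if you need it):

using System;
using System.Diagnostics;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

// Times every outgoing call.
public class CallTimingInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // Whatever is returned here is handed to AfterReceiveReply as correlationState.
        return Stopwatch.StartNew();
    }

    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
        var watch = (Stopwatch)correlationState;
        watch.Stop();
        // Raise an event / update your statistics here instead of writing to the console.
        Console.WriteLine("Call completed in {0} ms", watch.ElapsedMilliseconds);
    }
}

// Endpoint behavior that attaches the inspector to the client runtime.
public class CallTimingBehavior : IEndpointBehavior
{
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        clientRuntime.MessageInspectors.Add(new CallTimingInspector());
    }

    // Not needed for a purely client-side inspector.
    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { }
    public void Validate(ServiceEndpoint endpoint) { }
}

// Usage, as described above:
// client.Endpoint.Behaviors.Add(new CallTimingBehavior());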
I found a simple way to do it by using a dynamic proxy, for example Castle's DynamicProxy.
Firstly, use a factory method to generate your client object
IFoo GetClient()
{
FooClient client = new FooClient(); //or new FooClient(binding, endpointAddress); if you want
ProxyGenerator pg = new ProxyGenerator();
return pg.CreateInterfaceProxyWithTarget<IFoo>(client, new WcfCallInterceptor());
}
And define the interceptor
internal class WcfCallInterceptor : IInterceptor
{
public void Intercept(IInvocation invocation)
{
try
{
RaiseStart(invocation.Method.Name);
invocation.Proceed();
}
finally
{
RaiseEnd(invocation.Method.Name);
}
}
//you can define your implementation for RaiseStart and RaiseEnd
}
I can also change the Intercept method as I wish; for example, I can add a catch block to call a different handler in case the method throws an exception, etc.
I'm writing a Windows service application, which will be accessed through .NET Remoting.
The problem is I can't figure out how to access service objects from the remotable class.
For example, I've a handler class:
class Service_console_handler
{
public int something_more = 20;
//some code...
TcpChannel serverChannel = new TcpChannel(9090);
ChannelServices.RegisterChannel(serverChannel);
RemotingConfiguration.RegisterWellKnownServiceType(
typeof(RemoteObject), "RemoteObject.rem",
WellKnownObjectMode.Singleton);
//from here on RemoteObject is accessible by clients.
//some more code doing something and preparing the data...
}
And I've a remotable class:
public class RemoteObject : MarshalByRefObject
{
private int something = 10;
public int Get_something()
{
return something;
}
}
Clients can access data in RemoteObject with no problem. But how can I access the Service_console_handler object (i.e. to retrieve useful info from something_more)?
Sorry for dumb questions and thanks in advance.
What you want is to somehow access the instance of ServiceConsoleHandler via a RemoteObject instance, which is visible to the client.
For this you need to consider two things: (1) Get control over the object construction of the RemoteObject instance and make it accessible and (2) modify ServiceConsoleHandler so it can be accessed remotely.
(1)
How would you construct a RemoteObject instance in ServiceConsoleHandler, if you don’t need to consider remoting?
I guess you would do something like this:
class ServiceConsoleHandler
{
…
RemoteObject remoteObject = new RemoteObject();
// now assume that you also already have
// modified the RemoteObject class so it can hold
// a reference to your server:
remoteObject.Server = this;
…
}
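For completeness, the modified RemoteObject could look like this (a sketch; the Server property is the addition described above):

public class RemoteObject : MarshalByRefObject
{
    private int something = 10;

    // Holds the reference back to the hosting server object.
    public ServiceConsoleHandler Server { get; set; }

    public int Get_something()
    {
        return something;
    }
}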
It would be nice if you could make this object accessible to the client. You can do this by using RemotingServices.Marshal instead of RemotingConfiguration.RegisterWellKnownServiceType:
class ServiceConsoleHandler
{
…
TcpServerChannel channel = new TcpServerChannel(9090);
ChannelServices.RegisterChannel(channel, true);
RemoteObject remoteObject = new RemoteObject();
remoteObject.Server = this;
RemotingServices.Marshal(remoteObject, "RemoteObject.rem");
…
}
(2)
If you executed the code right now and accessed remoteObject.Server in the client code, you would get a remoting exception, because the class ServiceConsoleHandler cannot be accessed remotely. Therefore you need to add the [Serializable] attribute:
[Serializable]
class ServiceConsoleHandler
{ … }
Reason: types which should be accessed remotely need to be marshaled to a special transferable representation. This way they can be squeezed through the TCP port and transferred via the TCP protocol. Basic data types can be marshaled by the framework, so you don't need to think about them. For custom types you will need to state how to do this. One way is by subclassing MarshalByRefObject; that's exactly what you have already done with RemoteObject. Another way is to mark your custom classes as [Serializable], as shown above.
That’s it. Now you should be able to access the server’s field in the client code. Note that you don’t need your existing code for object activation:
TcpClientChannel channel = new TcpClientChannel();
ChannelServices.RegisterChannel(channel, true);
RemoteObject remoteObject = (RemoteObject)Activator.GetObject(
typeof(RemoteObject), "tcp://localhost:9090/RemoteObject.rem");
Console.WriteLine(remoteObject.Server.SomethingMore);
For me, .NET Remoting is full of funny surprises and sleepless nights. To counter this, make yourself familiar with the remoting concepts (which are, from my point of view, poorly documented). Dig into the serialization concepts (MarshalByRefObject vs. [Serializable]). If you want to make production code out of it, think about a good way to handle remoting exceptions. Also consider multithreading: there could be more than one client using this remote object at once.
Have fun!
Thank you! I very much appreciate the thoroughness and clarity of your answer.
The most bizarre thing is that I didn't even know that you can publish an object instance. The dozen or so simple tutorials I studied proposed RemotingConfiguration.RegisterWellKnownServiceType as the only way to do remoting. Stupid me.
Now remoting looks much more useful to me. I just wrote a quick test application and it worked. Thanks again.