TypedActor vs ReceiveActor? - C#

Most documentation refers to using ReceiveActor and then methods such as Receive(). However, some documentation refers to inheriting from TypedActor and then using interfaces such as IHandle<MyMessageType>.
Is it safe [as in, best practice / not deprecated] to use TypedActor + interfaces, or should I only be using ReceiveActor? (The official documentation seems unclear on the subject.)

TypedActor will be marked as obsolete in version 1.3, and will be removed in version 1.5 (relevant pull request and issue).
There was discussion a few years ago (as tomliversidge notes in their answer) about renaming it, but that plan didn't go ahead.

There is some discussion here with regard to this. It seems TypedActor is also being made obsolete in the Java/Scala Akka world, so I'd stick to ReceiveActor unless TypedActor gives you something ReceiveActor doesn't.
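For comparison, the two styles look roughly like this in Akka.NET (a sketch; GreeterActor is a made-up example, and the TypedActor form is the API slated for removal):

using System;
using Akka.Actor;

// ReceiveActor style: handlers are registered in the constructor.
public class GreeterActor : ReceiveActor
{
    public GreeterActor()
    {
        Receive<string>(name => Console.WriteLine("Hello, " + name));
    }
}

// TypedActor style: one IHandle<T> implementation per message type.
public class TypedGreeterActor : TypedActor, IHandle<string>
{
    public void Handle(string name)
    {
        Console.WriteLine("Hello, " + name);
    }
}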


GCM or CCM implementation in C#

Can anyone point me to a live implementation of CBC-MAC Mode (CCM) or Galois/Counter Mode (GCM) in C#?
It seems that Microsoft has not created any implementation similar to AesCryptoServiceProvider. Am I right?
I did go through https://blogs.msdn.microsoft.com/shawnfa/2009/03/17/authenticated-symmetric-encryption-in-net/, but it's more of a "would look something like" than an actual implementation.
Any help will be highly appreciated.
Thanks in advance.
By the way, while there wasn't an official MS implementation at the time of this question, there is one now:
AES-GCM: AesGcm Class (System.Security.Cryptography)
AES-CCM: AesCcm Class (System.Security.Cryptography)
Both are supported from:
.NET 5.0
.NET Core 3.0
.NET Standard 2.1
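A minimal sketch of the AesGcm API (key and nonce handling are simplified for the example; AesGcmDemo is a made-up name):

using System;
using System.Security.Cryptography;
using System.Text;

class AesGcmDemo
{
    static void Main()
    {
        // 256-bit key; in real code, load this from a key store.
        byte[] key = new byte[32];
        RandomNumberGenerator.Fill(key);

        // GCM nonces are 12 bytes; never reuse one under the same key.
        byte[] nonce = new byte[AesGcm.NonceByteSizes.MaxSize];
        RandomNumberGenerator.Fill(nonce);

        byte[] plaintext = Encoding.UTF8.GetBytes("hello");
        byte[] ciphertext = new byte[plaintext.Length];
        byte[] tag = new byte[AesGcm.TagByteSizes.MaxSize]; // 16-byte auth tag

        using (var gcm = new AesGcm(key))
        {
            gcm.Encrypt(nonce, plaintext, ciphertext, tag);

            byte[] decrypted = new byte[ciphertext.Length];
            gcm.Decrypt(nonce, ciphertext, tag, decrypted); // throws if data was tampered with
            Console.WriteLine(Encoding.UTF8.GetString(decrypted)); // "hello"
        }
    }
}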

Best Practices on using C# Intellisense Comments

We have a Visual Studio 2010 solution that contains several C# projects in accordance with Jeffrey Palermo's Onion Architecture pattern (http://jeffreypalermo.com/blog/the-onion-architecture-part-1/). We want to add the Visual Studio Intellisense Comments using the triple slashes, but we want to see if anyone knows of best practices on how far to take this. Do we start all the way down in the Model in the Core project, and work up through Infrastructure and into the DataAccess Services and Repositories, and into the User Interface? Or is it better to use these comments in a more limited fashion, and if so, what are the important objects to apply the Intellisense Comments to?
Add them to any methods exposed in public APIs, that way you can give the caller all the information they need when working with a foreign interface. For example, which exceptions the method may throw and other remarks.
It's still beneficial to add these kinds of comments to private methods, I do it anyway to be consistent. It also helps if you plan on generating documentation from the comments.
While, technically, there is such a thing as too much documentation, 99.99999% of the time this exception doesn't apply.
Document everything as much as you can. Formal, informal, stream of thought... every scrap of comments will help some poor soul who inherits your code or has to interface with it.
(It's like the old rule "The error may be in the Compiler and not your code. Compilers have errors too. This is not one of those times.")
Do we start all the way down in the Model in the Core project, and work up through Infrastructure and into the DataAccess Services and Repositories, and into the User Interface? Yes.
Or is it better to use these comments in a more limited fashion, and if so what are the important objects to apply the Intellisense Comments to? If you want to. Apply them to any function you write, not just what VS autogenerates.
I've seen limited "intellisense" comments, but extensive in-code comments that follow. So long as the "content" is there, life will be good. I generally include a brief blurb about each function in the intellisense comments, but put the majority of "here's why I did this" in the function and in dead-tree documents.
I agree with fletcher. Start with public facing classes and methods and then work your way down into private code. If you were starting from scratch I would highly recommend adding the XML comments to all code for your own convenience, but in this case starting with public methods and then updating other classes whenever you go in to update them is a good solution.
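For illustration, a typical public-API doc comment looks like this (Customer, GetCustomer, and repository are made-up names):

/// <summary>
/// Retrieves the customer with the given identifier.
/// </summary>
/// <param name="id">The unique customer identifier.</param>
/// <returns>The matching <see cref="Customer"/>, or null if none exists.</returns>
/// <exception cref="System.ArgumentException">
/// Thrown when <paramref name="id"/> is null or empty.
/// </exception>
public Customer GetCustomer(string id)
{
    if (string.IsNullOrEmpty(id))
        throw new System.ArgumentException("id must be non-empty", "id");
    return repository.Find(id); // repository is an illustrative placeholder
}

Intellisense then surfaces the summary, parameter, and exception information at every call site, which is exactly the payoff for documenting the public surface first.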

Enhanced DynamicQuery?

I've recently started using the DynamicQuery API, and it quickly became apparent that it has numerous limitations. I've found at least one improvement online: support for enum arguments, but it's pretty clear that this API is not actively maintained (if at all).
In case I'm wrong and there is somebody maintaining an improved version - please post a link!
Alternatively, a separate, active project with similar goals would also be of interest.
(Clarification: I'm looking to parse strings at runtime.)
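(For context, DynamicQuery parses ordinary strings into LINQ expressions at runtime, roughly like this; Product and the sample data are made up:)

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Dynamic; // ships with the DynamicQuery sample

class Product
{
    public string Name { get; set; }
    public int CategoryId { get; set; }
    public decimal UnitPrice { get; set; }
}

class Demo
{
    static void Main()
    {
        var products = new List<Product>
        {
            new Product { Name = "Tea",    CategoryId = 1, UnitPrice = 2.50m },
            new Product { Name = "Coffee", CategoryId = 2, UnitPrice = 4.00m },
        }.AsQueryable();

        // The filter and ordering are plain strings, parsed at runtime.
        var expensive = products
            .Where("CategoryId == @0 && UnitPrice > @1", 2, 3.0m)
            .OrderBy("UnitPrice descending");

        foreach (Product p in expensive)
            Console.WriteLine(p.Name); // "Coffee"
    }
}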
In the end we just implemented some of the features we missed by editing the source code. Added support for passing in a static class as an "external" (DynamicQuery's terminology), support for calling methods on this static class, and type inference if any such methods are generic.
I suspect there isn't much demand for this, so I didn't bother making it available anywhere. Let me know if you think otherwise.
Edit: due to a request, DynamicQuery Enhanced is now available on BitBucket. Expect to be underwhelmed; take a look at this Info and this list of tweaks.
I've seen PredicateBuilder mentioned before (here on Stack Overflow) as an alternative. I've not used it myself, but it might be useful to you.
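For reference, PredicateBuilder (part of LINQKit) composes predicates at runtime rather than parsing strings; a sketch with made-up data:

using System;
using System.Collections.Generic;
using System.Linq;
using LinqKit; // NuGet package that ships PredicateBuilder

class Product
{
    public string Name { get; set; }
}

class Demo
{
    static void Main()
    {
        var products = new List<Product>
        {
            new Product { Name = "red widget" },
            new Product { Name = "blue gadget" },
        }.AsQueryable();

        // Build an OR chain of conditions at runtime.
        var predicate = PredicateBuilder.False<Product>();
        foreach (string keyword in new[] { "red", "green" })
        {
            string temp = keyword; // avoid capturing the loop variable
            predicate = predicate.Or(p => p.Name.Contains(temp));
        }

        foreach (var p in products.AsExpandable().Where(predicate))
            Console.WriteLine(p.Name); // "red widget"
    }
}

Note the difference in approach: PredicateBuilder builds expression trees from lambdas written at compile time, so it won't help if the conditions themselves arrive as strings.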

Lex/Yacc for C#?

Actually, maybe not full-blown Lex/Yacc. I'm implementing a command-interpreter front-end to administer a webapp. I'm looking for something that'll take a grammar definition and turn it into a parser that directly invokes methods on my object. Similar to how ASP.NET MVC can figure out which controller method to invoke, and how to pony up the arguments.
So, if the user types "create foo" at my command-prompt, it should transparently call a method:
private void Create(string id) { /* ... */ }
Oh, and if it could generate help text from (e.g.) attributes on those controller methods, that'd be awesome, too.
I've done a couple of small projects with GPLEX/GPPG, which are pretty straightforward reimplementations of LEX/YACC in C#. I've not used any of the other tools above, so I can't really compare them, but these worked fine.
GPPG can be found here and GPLEX here.
That being said, I agree that a full LEX/YACC solution is probably overkill for your problem. I would suggest generating a set of bindings using IronPython: it interfaces easily with .NET code, non-programmers seem to find the basic syntax fairly usable, and it gives you a lot of flexibility/power if you choose to use it.
I'm not sure Lex/Yacc will be of any help. You'll just need a basic tokenizer and an interpreter, which are faster to write by hand. If you still want to go the parsing route, see Irony.
As a sidenote: have you considered PowerShell and its commandlets?
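To illustrate the write-it-by-hand suggestion, here is a minimal sketch of a reflection-based dispatcher for commands like "create foo" (AdminCommands, HelpAttribute, and Dispatch are made-up names, and real code would convert non-string arguments):

using System;
using System.Linq;
using System.Reflection;

// A hypothetical attribute to carry the help text the question asks for.
[AttributeUsage(AttributeTargets.Method)]
class HelpAttribute : Attribute
{
    public string Text { get; private set; }
    public HelpAttribute(string text) { Text = text; }
}

class AdminCommands
{
    [Help("create <id>: creates a new foo")]
    public void Create(string id) { Console.WriteLine("created " + id); }
}

class Interpreter
{
    // Splits "create foo" into a verb and arguments, finds a public method
    // matching by name and arity, and invokes it via reflection.
    public static void Dispatch(object target, string line)
    {
        string[] parts = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        if (parts.Length == 0) return;

        MethodInfo method = target.GetType()
            .GetMethods(BindingFlags.Public | BindingFlags.Instance)
            .FirstOrDefault(m =>
                m.Name.Equals(parts[0], StringComparison.OrdinalIgnoreCase) &&
                m.GetParameters().Length == parts.Length - 1);

        if (method == null)
        {
            Console.WriteLine("unknown command: " + parts[0]);
            return;
        }

        method.Invoke(target, parts.Skip(1).Cast<object>().ToArray());
    }
}

class Program
{
    static void Main()
    {
        Interpreter.Dispatch(new AdminCommands(), "create foo"); // prints "created foo"
    }
}

Help text generation would then walk the same methods and read HelpAttribute off each one.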
Also look at Antlr, which has C# support.
Still early CTP so can't be used in production apps but you may be interested in Oslo/MGrammar:
http://msdn.microsoft.com/en-us/oslo/
Jison is getting a lot of traction recently. It is a Bison port to JavaScript. Because of its extremely simple nature, I've ported the Jison parsing/lexing template to PHP, and now to C#. It is still very new, but if you get a chance, take a look at it here: https://github.com/robertleeplummerjr/jison/tree/master/ports/csharp/Jison
If you don't fear alpha software and want an alternative to Lex/Yacc for creating your own languages, you might look into Oslo. I would recommend sitting through the recordings of sessions TL27 and TL31 from last year's PDC. TL31 directly addresses the creation of Domain Specific Languages using Oslo.
Coco/R is a compiler generator with a .NET implementation. You could try that out, but I'm not sure if getting such a library to work would be faster than writing your own tokenizer.
http://www.ssw.uni-linz.ac.at/Research/Projects/Coco/
I would suggest csflex, a C# port of flex, the most famous Unix scanner generator.
I believe that lex/yacc are in one of the SDKs already (i.e. RTM). Either Windows or .NET Framework SDK.
Gardens Point Parser Generator (here) provides Yacc/Bison functionality for C#. It can be downloaded here, and a useful example using GPPG is provided here.
As Anton said, PowerShell is probably the way to go. If you do want a lex/ yacc implementation then Malcolm Crowe has a good set.
Edit: Direct Link to the Compiler Tools
Just for the record, an implementation of a lexer and LALR parser in C#, for C#:
http://code.google.com/p/naive-language-tools/
It should be similar in use to Lex/Yacc, but note that those tools (NLT) are not generators, so don't expect comparable speed.

Organizing using directives [duplicate]

I've been using ReSharper for the past months and, advertising aside, I can't see myself coding without it. Since I love living on the bleeding "What the hell just went wrong" edge, I decided to try my luck w/ the latest ReSharper 4.5 nightly builds. It's all nice.
However, I've noticed that the using directives grouping format has changed, and I wanted to know which is closer to the general standards:
[OLD]
#region Using directives
using System.X;
using System.Y;
using System.Z;
using System.A;
#endregion

namespace X { ... }

[NEW]
namespace X {
    #region Using directives
    using System.X;
    using System.Y;
    using System.Z;
    using System.A;
    #endregion

    ...
}
Other than lazy loading references, does it serve any special purpose? (I've been reading Scott Hanselman's take on this at http://www.hanselman.com/blog/BackToBasicsDoNamespaceUsingDirectivesAffectAssemblyLoading.aspx)
Thanks!
As Scott proceeds to discover in his post, there is no runtime difference between these two cases. Therefore, it does not serve the purpose of lazy loading references.
If you read the comments in Scott's blog all the way to the end, you will also see that the developer who passed this rumor to Scott (Mike Brown) says that he had only heard of this and not tested it himself.
That said, where you put the using directives can make a compile-time difference: you'll get a compiler error if you declare an alias inside a namespace and that namespace also defines a type with the same name. But that's no runtime difference, of course.
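The alias collision looks something like this (Example is a made-up namespace name):

namespace Example
{
    // error CS0576: Namespace 'Example' contains a definition
    // conflicting with alias 'Timer'
    using Timer = System.Timers.Timer;

    class Timer { }
}

Move the alias outside the namespace and the file compiles; unqualified references to Timer inside the namespace then resolve to Example.Timer.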
Finally, I believe the MS coding guidelines say to do it as ReSharper 4.5 does. But it's silly to blindly follow this rule "because MS says so", since:
It has been shown to offer no runtime benefit.
Your team's (or your own) usual coding style may very well be different.
Well, the usual is subjective. :) But for me, the usual is the "old" way.
Oh, my bad. I didn't see that question when I searched for it. I know it's silly to do it 'cause MS says so, but in general, what's the "usual" approach to this?
I'm known to use ReSharper's code cleanup a lot, so I'm just wondering, to be honest.
