using (DiscordWebhookClient client = new DiscordWebhookClient(WEBHOOK_URL))
{
ulong z = 42342340290226;
client.ModifyMessageAsync(z);//Not sure how I would edit this message. The documentation is confusing.
}
I'm not sure how to use this ModifyMessageAsync function. Also, do I need to call it from an async function? I'm currently calling it without any await, and I'm not sure if that's OK. Sending a message works, but I can't figure out how to make this second function work.
Like others have said, you should await the async call. This ensures that the action is executed in a predictable fashion: the intent is to execute it right now and to wait for the result of the action.
That being said, the discord documentation describes this method as follows:
public Task ModifyMessageAsync(ulong messageId, Action<WebhookMessageProperties> func, RequestOptions options = null)
The second parameter is a delegate based on WebhookMessageProperties. It can easily be defined with a lambda, like so:
x => { }
Bear in mind that the x is arbitrary; you can choose whatever name you like, even whole words. I just kept it short for the example.
Between the braces, you can access any of the properties of the WebhookMessageProperties class by using x.SomeProperty, where SomeProperty has to be a known property of that class.
One of those known properties, for example, is:
string Content { get; set; }
So here is how you can use a lambda to change the Content property:
using (DiscordWebhookClient client = new DiscordWebhookClient(WEBHOOK_URL))
{
ulong z = 42342340290226;
await client.ModifyMessageAsync(z, x =>
{
x.Content = "This is the updated message content";
});
}
If you want to update multiple properties at the same time, you can just add another line inside the lambda.
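As a minimal sketch (this assumes the class also exposes an Embeds property, as current Discord.Net versions do; check the docs for your version for the exact property names):
using (DiscordWebhookClient client = new DiscordWebhookClient(WEBHOOK_URL))
{
    ulong z = 42342340290226;
    await client.ModifyMessageAsync(z, x =>
    {
        // Each assignment is just another statement inside the lambda body.
        x.Content = "This is the updated message content";
        x.Embeds = new[] { new EmbedBuilder().WithTitle("Updated embed").Build() };
    });
}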
This is my code...
namespace MyNamespace
{
public class Ping2
{
private readonly ILogger<Ping2> _logger;
public Ping2(ILogger<Ping2> log)
{
_logger = log;
}
[FunctionName("Ping2")]
public async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "v1/Ping")] HttpRequest req)
{
_logger.LogInformation("Processing: Ping");
if (req.Query["agendaId"] == Microsoft.Extensions.Primitives.StringValues.Empty)
{
return new BadRequestResult();
}
Guid agendaId = Guid.Parse(req.Query["agendaId"]);
string dataTag = "ZZZZ";
string hubName = "myHub";
EventHubProducerClient producerClient;
producerClient = new EventHubProducerClient(connString, hubName);
using EventDataBatch eventBatch = await producerClient.CreateBatchAsync();
MensagemPing data = new MensagemPing() {
ID = Guid.NewGuid(),
agendaId = agendaId,
dataTag = dataTag,
timestamp = DateTime.Now
};
string jsonString = JsonSerializer.Serialize(data);
eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes(jsonString)));
await producerClient.SendAsync(eventBatch);
return new OkResult();
}
}
}
I can't find much documentation about the best way to implement this. To start with, I will never have a need to send messages in a batch. I found many examples of how to send a single message to the hub, but most of them are deprecated or use a classic ASP.NET application.
Also, this endpoint takes about one and a half seconds to execute locally, especially because this snippet takes more than 1 second:
EventHubProducerClient producerClient;
producerClient = new EventHubProducerClient(connString, hubName);
using EventDataBatch eventBatch = await producerClient.CreateBatchAsync();
MensagemPing data = new MensagemPing() {
ID = Guid.NewGuid(),
agendaId = agendaId,
dataTag = dataTag,
timestamp = DateTime.Now
};
string jsonString = JsonSerializer.Serialize(data);
eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes(jsonString)));
I am pretty convinced that this is not the best way to use Event Hubs. Maybe someone can give me a good example, or some good hints about how to improve it and make it faster?
In this scenario, since you're using Azure Functions, you may want to consider using the Event Hubs output binding rather than using the client library directly. The benefit is that it manages the Event Hubs clients for you, including lifetimes and sending patterns.
If you decide to continue to use the client library directly, there are a few suggestions that I'd make:
The producer is intended to be long-lived; I'd strongly suggest creating it as a singleton, either by registering it via DI using the Microsoft.Extensions.Azure library (preferable) or by creating it as a static member of the class.
If you choose to continue creating the producer in the body of the Function, you'll need to close or dispose it. Otherwise, the connection that it owns will be left open until it idles out after ~20 minutes. This will eventually cause socket exhaustion on the host unless your traffic is really, really low.
Creating an explicit batch for a single event doesn't offer much benefit. I'd suggest using the SendAsync overload that accepts an enumerable instead. For example:
await producer.SendAsync(new[] { eventData }, cancellationToken);
Not impactful, but if you'd like to simplify a bit, there's a constructor overload for EventData that accepts a string; no need to manually perform the encoding:
var eventData = new EventData(jsonString);
If you're dealing with a higher call volume, it may be helpful to consider using the EventHubBufferedProducerClient, as it manages the building of batches and sends implicitly to try and maximize throughput. That said, using it in a Functions context is awkward and requires more manual configuration in the Function startup in order to manage its lifetime properly and ensure events are flushed at cleanup. It's probably not worth it unless you're seeing a bottleneck on the single event sends, but there's an end-to-end sample that illustrates using it in an ASP.NET host, which is highly similar to Functions.
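To make the first suggestions concrete, here's a minimal sketch of the static-singleton approach combined with the enumerable-based send. The app setting name is an assumption, the types come from the question, and the usings are the same as in the original function plus Azure.Messaging.EventHubs.Producer:
public class Ping2
{
    // Created once and reused across invocations; the producer is safe for
    // concurrent use and is intended to be long-lived.
    private static readonly EventHubProducerClient _producer =
        new EventHubProducerClient(
            Environment.GetEnvironmentVariable("EventHubsConnection"), // assumed app setting name
            "myHub");

    [FunctionName("Ping2")]
    public async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "v1/Ping")] HttpRequest req)
    {
        MensagemPing data = new MensagemPing
        {
            ID = Guid.NewGuid(),
            agendaId = Guid.Parse(req.Query["agendaId"]),
            dataTag = "ZZZZ",
            timestamp = DateTime.Now
        };
        string jsonString = JsonSerializer.Serialize(data);

        // No explicit batch is needed for a single event.
        await _producer.SendAsync(new[] { new EventData(jsonString) });
        return new OkResult();
    }
}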
I have a controller which returns a large JSON object. If this object does not exist, it will generate it and return it afterwards. The generation takes about 5 seconds, and if the client sends the request multiple times, the object gets generated with x times the children. So my question is: is there a way to block the second request until the first one has finished, independent of who sent the request?
Normally I would do this with a singleton, but because I am using scoped services, a singleton does not work here.
Warning: this is very opinionated and maybe not suitable for Stack Overflow, but here it is anyway.
Although I'll provide no full solution... when things take a while to generate, you don't usually spend that time directly in controller code, but do something like "start a background task to generate the result, and provide a task id which can be queried in a different call".
So, my preferred course of action for this would be having two different controller actions:
Generate, which creates the background job, assigns it some id, and returns the id
GetResult, to which you pass the task id, and which returns either different error codes for "job id doesn't exist" and "job id isn't finished", or a 200 with the result.
This way, your clients will need to call both; however, in Generate you can check whether the job is already being created and return the existing job id.
This of course moves the need to "retry and check" to your client: in exchange, you don't leave the connection to the server open during those 5 seconds (which could potentially be multiplied by a number of clients), and you return fast.
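A rough sketch of how those two actions could look (all names, the in-memory job store _jobs, and the GenerateLargeObject call are hypothetical):
[HttpPost("generate")]
public IActionResult Generate()
{
    // Reuse the running job if one exists, otherwise start a new background task.
    Guid jobId = _jobs.GetOrStartJob(() => GenerateLargeObject());
    return Accepted(new { jobId });
}

[HttpGet("result/{jobId}")]
public IActionResult GetResult(Guid jobId)
{
    if (!_jobs.TryGet(jobId, out var job))
        return NotFound();                                // unknown job id
    if (!job.IsCompleted)
        return StatusCode(StatusCodes.Status202Accepted); // still being generated
    return Ok(job.Result);
}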
Otherwise, if you don't care about having your clients wait for a response during those 5 seconds, you could do a simple:
if(resultDoesntExist) {
resultDoesntExist = false; // You can use locks for the boolean setters or Interlocked instead of just setting a member
resultIsBeingGenerated = true;
generateResult(); // <-- this is what takes 5 seconds
resultIsBeingGenerated = false;
}
while(resultIsBeingGenerated) { await Task.Delay(10); } // <-- other clients will wait here
var result = getResult(); // <-- this should be fast once the result is already created
return result;
note: those booleans and the actual loop could be in the controller, or in the service, or wherever you see fit; just be wary of making them thread-safe in whatever way you find appropriate.
So you basically make the other clients wait until the first one generates the result, with "almost" no CPU load on the server... however, with a connection open and a thread from the thread pool in use, so I just DO NOT recommend this :-)
PS: @Leaky's solution above is also good, but it also shifts the responsibility to retry onto the client, and if you are going to do that, I'd probably go directly with a "background job id" instead of having the first call (the one that generates the result) take 5 seconds. IMO, if it can be avoided, no API action should ever take 5 seconds to return :-)
Do you have an example for Interlocked.CompareExchange?
Sure. I'm definitely not the most knowledgeable person when it comes to multi-threading stuff, but this is quite simple (as you might know, Interlocked has no support for bool, so it's customary to represent it with an integral type):
public class QueryStatus
{
private static int _flag;
// Returns false if the query has already started.
public bool TrySetStarted()
=> Interlocked.CompareExchange(ref _flag, 1, 0) == 0;
public void SetFinished()
=> Interlocked.Exchange(ref _flag, 0);
}
I think it's safest if you use it like this, with a 'Try' method that tries to set the value and tells you whether it was already set, in an atomic way.
Besides simply adding this (I mean just the field and the methods) to your existing component, you can also use it as a separate component, injected from the IoC container as scoped, or even injected as a singleton, and then you don't have to use a static field.
Storing state like this should be good for as long as the application is running, but if the hosted application is recycled due to inactivity, it's obviously lost. Though, that won't happen while a request is still processing, and definitely won't happen in 5 seconds.
(And if you wanted to synchronize between app service instances, you could 'quickly' save a flag to the database, in a transaction with proper isolation level set. Or use e.g. Azure Redis Cache.)
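For example, registering it as a singleton is a one-liner in ConfigureServices (a sketch, assuming the standard ASP.NET Core DI container), after which the query service simply takes QueryStatus as a constructor parameter:
// Startup.ConfigureServices
services.AddSingleton<QueryStatus>();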
Example solution
As Kit noted, rightly so, I didn't provide a full solution above.
So, a crude implementation could go like this:
public class SomeQueryService : ISomeQueryService
{
private static int _hasStartedFlag;
private static bool TrySetStarted()
=> Interlocked.CompareExchange(ref _hasStartedFlag, 1, 0) == 0;
private static void SetFinished()
=> Interlocked.Exchange(ref _hasStartedFlag, 0);
public async Task<(bool couldExecute, object result)> TryExecute()
{
if (!TrySetStarted())
return (couldExecute: false, result: null);
object result = null;
try
{
// Safely execute the long query here and assign its outcome to result.
}
finally
{
SetFinished(); // reset the flag even if the query throws
}
return (couldExecute: true, result: result);
}
}
// In the controller, obviously
[HttpGet()]
public async Task<IActionResult> DoLongQuery([FromServices] ISomeQueryService someQueryService)
{
var (couldExecute, result) = await someQueryService.TryExecute();
if (!couldExecute)
{
return new ObjectResult(new ProblemDetails
{
Status = StatusCodes.Status503ServiceUnavailable,
Title = "Another request has already started. Try again later.",
Type = "https://tools.ietf.org/html/rfc7231#section-6.6.4"
})
{ StatusCode = StatusCodes.Status503ServiceUnavailable };
}
return Ok(result);
}
Of course, you may want to extract the 'blocking' logic from the controller action into somewhere else, for example an action filter. In that case the flag should also go into a separate component that could be shared between the query service and the filter.
General use action filter
I felt bad about my inelegant solution above, and I realized that this problem can be generalized into what is basically a concurrent-connection limiter for an endpoint.
I wrote this small action filter, which can be applied to any endpoint (or to multiple endpoints) and accepts the number of allowed connections:
[AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
public class ConcurrencyLimiterAttribute : ActionFilterAttribute
{
private readonly int _allowedConnections;
private static readonly ConcurrentDictionary<string, int> _connections = new ConcurrentDictionary<string, int>();
public ConcurrencyLimiterAttribute(int allowedConnections = 1)
=> _allowedConnections = allowedConnections;
public override async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
{
var key = context.HttpContext.Request.Path;
if (_connections.AddOrUpdate(key, 1, (k, v) => ++v) > _allowedConnections)
{
Close(withError: true);
return;
}
try
{
await next();
}
finally
{
Close();
}
void Close(bool withError = false)
{
if (withError)
{
context.Result = new ObjectResult(new ProblemDetails
{
Status = StatusCodes.Status503ServiceUnavailable,
Title = $"Maximum {_allowedConnections} simultaneous connections are allowed. Try again later.",
Type = "https://tools.ietf.org/html/rfc7231#section-6.6.4"
})
{ StatusCode = StatusCodes.Status503ServiceUnavailable };
}
_connections.AddOrUpdate(key, 0, (k, v) => --v);
}
}
}
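Applying it is then just a matter of decorating the action; a sketch (the query service's Execute method is hypothetical, since the filter now does the blocking):
[HttpGet]
[ConcurrencyLimiter(allowedConnections: 1)]
public async Task<IActionResult> DoLongQuery([FromServices] ISomeQueryService someQueryService)
{
    // The filter rejects concurrent callers with a 503 before this body runs.
    var result = await someQueryService.Execute();
    return Ok(result);
}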
I want to make use of SaveChangesAsync in a synchronous function. The situation I want to use this in is, for example:
public string getName(int id)
{
var db = new dbContext();
string name = db.names.Find(id);
db.log.Add(new Log("accessed name"));
db.SaveChangesAsync();
return name;
}
So basically, I don't care when the log is actually saved to the database; I just don't want it to slow down my getName function. I want getName to return, and then the log can be saved to the database any time after (or during) that.
How would I go about achieving this? Nothing depends on the time the log is submitted, so it can take 2 minutes for all I care.
I have come up with another solution:
private async void UpdateLastComms(string _id)
{
int id = Int32.Parse(_id);
using (var db = new dbContext())
{
db.Devices.Where(x => x.UserId == id).FirstOrDefault().LastComms = DateTime.Now;
await db.SaveChangesAsync();
}
}
and I can then call this function like so: UpdateLastComms("5");
How does this compare to the first approach, and will it execute as I expect?
The problem with "fire and forget" methods like this is error handling. If there is an error saving the log to the database, is that something you want to know about?
If you want to silently ignore errors, then you can just ignore the returned task, as in your first example. Your second example uses async void, which is dangerous: if there is a database write error with your second example, the default behavior is to crash the application.
If you want to handle errors by taking some action, then put a try/catch around the body of the method in your second example.
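A minimal sketch of that last option, keeping your second example but observing the failure explicitly (the logging call is just a placeholder):
private async void UpdateLastComms(string _id)
{
    try
    {
        int id = Int32.Parse(_id);
        using (var db = new dbContext())
        {
            db.Devices.Where(x => x.UserId == id).FirstOrDefault().LastComms = DateTime.Now;
            await db.SaveChangesAsync();
        }
    }
    catch (Exception ex)
    {
        // With async void, this is the only place the exception can be observed
        // without crashing the process; handle or log it here.
        Console.Error.WriteLine(ex);
    }
}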
Long story short
Say I have the following code:
// a class like this
class FirstObject {
public Object OneProperty {
get;
set;
}
// (other properties)
public Object OneMethod() {
// logic
}
}
// and another class with properties and methods names
// which are similar or exact the same if needed
class SecondObject {
public Object OneProperty {
get;
set;
}
// (other properties)
public Object OneMethod(String canHaveParameters) {
// logic
}
}
// the consuming code would be something like this
public static void Main(string[] args) {
FirstObject myObject=new FirstObject();
// Use its properties and methods
Console.WriteLine("FirstObject.OneProperty value: "+myObject.OneProperty);
Console.WriteLine("FirstObject.OneMethod returned value: "+myObject.OneMethod());
// Now, for some reason, continue to use the
// same object but with another type
// -----> CHANGE FirstObject to SecondObject HERE <-----
// Continue to use properties and methods but
// this time calls were being made to SecondObject properties and Methods
Console.WriteLine("SecondObject.OneProperty value: "+myObject.OneProperty);
Console.WriteLine("SecondObject.OneMethod returned value: "+myObject.OneMethod(oneParameter));
}
Is it possible to change FirstObject's type to SecondObject and continue to use its properties and methods?
I've total control over FirstObject, but SecondObject is sealed and totally out of my scope!
Can I achieve this through reflection? How? What do you think of the work it might take to do it? Obviously both classes can be a LOT more complex than the example above.
Both classes can be generic, like FirstObject<T> and SecondObject<T>, which intimidates me when it comes to using reflection for such a task!
Problem in reality
I've tried to state my problem in a simpler way for the sake of simplicity and to try to extract some knowledge to solve it but, looking at the answers, it seems obvious to me that, to help me, you need to understand my real problem, because changing the object type is only the tip of the iceberg.
I'm developing a Workflow Definition API. The main objective is to have an API that is reusable on top of any engine I might want to use (CLR through WF4, NetBPM, etc.).
By now I'm writing the middle layer to translate that API to WF4 to run workflows through the CLR.
What I've already accomplished
The API concept, at this stage, is somewhat similar to WF4, with ActivityStates that have In/Out Arguments and Data (Variables) flowing through the ActivityStates via their arguments.
Very simplified API in pseudo-code:
class Argument {
object Value;
}
class Data {
String Name;
Type ValueType;
object Value;
}
class ActivityState {
String DescriptiveName;
}
class MyIf: ActivityState {
InArgument Condition;
ActivityState Then;
ActivityState Else;
}
class MySequence: ActivityState {
Collection<Data> Data;
Collection<ActivityState> Activities;
}
My initial approach to translating this to WF4 was to run through the ActivityStates graph and do a somewhat direct assignment of properties, using reflection where needed.
Again simplified pseudo-code, something like:
new Activities.If() {
DisplayName=myIf.DescriptiveName,
Condition=TranslateArgumentTo_WF4_Argument(myIf.Condition),
Then=TranslateActivityStateTo_WF4_Activity(myIf.Then),
Else=TranslateActivityStateTo_WF4_Activity(myIf.Else)
}
new Activities.Sequence() {
DisplayName=mySequence.DescriptiveName,
Variables=TranslateDataTo_WF4_Variables(mySequence.Variables),
Activities=TranslateActivitiesStatesTo_WF4_Activities(mySequence.Activities)
}
At the end of the translation I would have an executable System.Activities.Activity object. I've already accomplished this easily.
The big issue
A big issue with this approach appeared when I began the Data-object-to-System.Activities.Variable translation. The problem is that WF4 separates the workflow execution from its context. Because of that, both Arguments and Variables are LocationReferences that must be accessed through the var.Get(context) function so the engine knows where they are at runtime.
Something like this is easily accomplished using WF4:
Variable<string> var1=new Variable<string>("varname1", "string value");
Variable<int> var2=new Variable<int>("varname2", 123);
return new Sequence {
Name="Sequence Activity",
Variables=new Collection<Variable> { var1, var2 },
Activities=new Collection<Activity>(){
new Write() {
Name="WriteActivity1",
Text=new InArgument<string>(
context =>
String.Format("String value: {0}", var1.Get(context)))
},
new Write() {
//Name = "WriteActivity2",
Text=new InArgument<string>(
context =>
String.Format("Int value: {0}", var2.Get(context)))
}
}
};
but if I want to represent the same workflow through my API:
Data<string> var1=new Data<string>("varname1", "string value");
Data<int> var2=new Data<int>("varname2", 123);
return new Sequence() {
DescriptiveName="Sequence Activity",
Data=new Collection<Data> { var1, var2 },
Activities=new Collection<ActivityState>(){
new Write() {
DescriptiveName="WriteActivity1",
Text="String value: "+var1 // <-- BIG PROBLEM !!
},
new Write() {
DescriptiveName="WriteActivity2",
Text="Int value: "+Convert.ToInt32(var2) // ANOTHER BIG PROBLEM !!
}
}
};
I end up with a BIG PROBLEM when using Data objects as Variables. I really don't know how to allow a developer using my API to use Data objects wherever they want (just like in WF4) and later translate that Data to System.Activities.Variable.
Solutions come to mind
If you now understand my problem: FirstObject and SecondObject are Data and System.Activities.Variable, respectively. Like I said, translating Data to Variable is just the tip of the iceberg, because I might use Data.Get() in my code and I don't know how to translate it to Variable.Get(context) while doing the translation.
Solutions that I've tried or thought of:
Solution 1
Instead of a direct translation of properties, I would develop NativeActivities for each flow-control activity (If, Sequence, Switch, ...) and make use of the CacheMetadata() function to specify Arguments and Variables. The problem remains, because they are both still accessed through var.Get(context).
Solution 2
Give my Data class its own Get() function. It would be only an abstract method, without any logic inside, that would somehow translate to the Get() function of System.Activities.Variable. Is this even possible in C#? I guess not! Another problem is that Variable.Get() takes a parameter.
Solution 3
The worst solution that I thought of was CIL manipulation: try to replace the code where Data/Argument is used with Variable/Argument code. This smells like a nightmare to me. I know next to nothing about System.Reflection.Emit, and even if I learn it, my guess is that it would take ages... and it might not even be possible.
Sorry if I ended up introducing a bigger problem, but I'm really stuck here and desperately need a tip/path to go on.
This is called "duck typing" (if it looks like a duck and quacks like a duck you can call methods on it as though it really were a duck). Declare myObject as dynamic instead of as a specific type and you should then be good to go.
EDIT: to be clear, this requires .NET 4.0
dynamic myObject = new FirstObject();
// do stuff
myObject = new SecondObject();
// do stuff again
Reflection isn't necessarily the right tool for this. If SecondObject is out of your control, your best option is likely to just make an extension method that instantiates a new copy of it and copies across the data, property by property.
You could use reflection for the copying process, and work that way, but that is really a separate issue.
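A rough sketch of that idea, copying matching readable/writable properties by name via reflection (this assumes SecondObject has a public parameterless constructor and that the shared properties have compatible types):
using System.Linq;
using System.Reflection;

public static class FirstObjectExtensions
{
    public static SecondObject ToSecondObject(this FirstObject source)
    {
        var target = new SecondObject();
        var targetProps = typeof(SecondObject)
            .GetProperties(BindingFlags.Public | BindingFlags.Instance)
            .Where(p => p.CanWrite)
            .ToDictionary(p => p.Name);

        foreach (var sourceProp in typeof(FirstObject).GetProperties(BindingFlags.Public | BindingFlags.Instance))
        {
            // Copy only properties that exist on both types with an assignable type.
            if (sourceProp.CanRead
                && targetProps.TryGetValue(sourceProp.Name, out var targetProp)
                && targetProp.PropertyType.IsAssignableFrom(sourceProp.PropertyType))
            {
                targetProp.SetValue(target, sourceProp.GetValue(source));
            }
        }

        return target;
    }
}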
I think I understand the concept of a delegate in C# as a pointer to a method, but I can't find any good examples of where it would be a good idea to use them. What are some examples that are either significantly more elegant/better with delegates or can't be solved using other methods?
The .NET 1.0 delegates:
this.myButton.Click += new EventHandler(this.MyMethod);
The .NET 2.0 delegates:
this.myOtherButton.Click += delegate {
var res = PerformSomeAction();
if(res > 5)
PerformSomeOtherAction();
};
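Since C# 3, the same handler is usually written with a lambda; an equivalent sketch:
this.myOtherButton.Click += (sender, e) =>
{
    var res = PerformSomeAction();
    if (res > 5)
        PerformSomeOtherAction();
};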
They seem pretty useful. How about:
new Thread(new ThreadStart(delegate {
// do some worker-thread processing
})).Start();
What exactly do you mean by delegates? Here are two ways in which they can be used:
void Foo(Func<int, string> f) {
//do stuff
string s = f(42);
// do more stuff
}
and
void Bar() {
Func<int, string> f = delegate(int i) { return i.ToString(); };
//do stuff
string s = f(42);
// do more stuff
}
The point of the second one is that you can declare new functions on the fly, as delegates. This can largely be replaced by lambda expressions, and is useful any time you have a small piece of logic you want to 1) pass to another function, or 2) just execute repeatedly. LINQ is a good example. Most LINQ operators take a delegate (often written as a lambda expression) as an argument, specifying the behavior. For example, if you have a List<int> l, then l.Select(x => x.ToString()) will call ToString() on every element in the list. And the lambda expression I wrote is implemented as a delegate.
The first case shows how Select might be implemented. You take a delegate as your argument, and then you call it when needed. This allows the caller to customize the behavior of the function. Taking Select() as an example again, the function itself guarantees that the delegate you pass to it will be called on every element in the list, and the output of each will be returned. What that delegate actually does is up to you. That makes it an amazingly flexible and general function.
Of course, they're also used for subscribing to events. In a nutshell, delegates allow you to reference functions, using them as argument in function calls, assigning them to variables and whatever else you like to do.
I primarily use them for easy async programming. Kicking off a method using a delegate's Begin... method is really easy if you want to fire and forget.
A delegate can also be used like an interface when interfaces are not available, e.g. when calling methods on COM classes, external .NET classes, etc.
Events are the most obvious example. Compare how the observer pattern is implemented in Java (interfaces) and C# (delegates).
Also, a whole lot of the new C# 3 features (for example lambda expressions) are based on delegates and simplify their usage even further.
For example, in multithreaded apps: if you want several threads to use some control, you should use delegates. Sorry, the code is in Visual Basic.
First you declare a delegate
Private Delegate Sub ButtonInvoke(ByVal enabled As Boolean)
Write a function to enable/disable button from several threads
Private Sub enable_button(ByVal enabled As Boolean)
If Me.ButtonConnect.InvokeRequired Then
Dim del As New ButtonInvoke(AddressOf enable_button)
Me.ButtonConnect.Invoke(del, New Object() {enabled})
Else
ButtonConnect.Enabled = enabled
End If
End Sub
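For reference, the same pattern in C#, using the built-in Action<bool> delegate instead of declaring a custom one (a sketch against the same hypothetical ButtonConnect control):
private void EnableButton(bool enabled)
{
    if (this.ButtonConnect.InvokeRequired)
    {
        // Marshal the call back onto the UI thread that owns the control.
        this.ButtonConnect.Invoke(new Action<bool>(EnableButton), enabled);
    }
    else
    {
        this.ButtonConnect.Enabled = enabled;
    }
}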
I use them all the time with LINQ, especially with lambda expressions, to provide a function that evaluates a condition or returns a selection. I also use them to provide a function that compares two items for sorting. The latter is important for generic collections, where the default sorting may or may not be appropriate.
var query = collection.Where( c => c.Kind == ChosenKind )
.Select( c => new { Name = c.Name, Value = c.Value } )
.OrderBy( a => a.Name );
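The two-item comparison mentioned above is its own delegate type, Comparison<T>, which List<T>.Sort accepts directly; a small sketch (the Item type is hypothetical):
var items = new List<Item>();
// The lambda is a Comparison<Item>: it takes two items and returns their relative order.
items.Sort((a, b) => a.Name.CompareTo(b.Name));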
One of the benefits of delegates is in asynchronous execution.
When you call a method asynchronously, you do not know when it will finish executing, so you need to pass that method a delegate pointing to another method, which will be called when the first method has completed execution. In the second method you can write some code that informs you that execution has completed.
Technically, a delegate is a reference type used to encapsulate a method with a specific signature and return type.
Some other comments touched on the async world... but I'll comment anyway, since my favorite 'flavor' of doing this hasn't been mentioned:
ThreadPool.QueueUserWorkItem(delegate
{
// This code will run on its own thread!
});
Also, a huge reason for delegates is "callbacks". Let's say I implement a bit of functionality (asynchronously), and you want me to call some method (let's say AlertWhenDone) when I finish... you could pass a delegate to my method as follows:
TimmysSpecialClass.DoSomethingCool(this.AlertWhenDone);
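On the receiving side, DoSomethingCool just needs a parameter of a matching delegate type. A sketch (the class, the Action parameter, and the thread-pool plumbing are illustrative, not a real API):
public static class TimmysSpecialClass
{
    public static void DoSomethingCool(Action alertWhenDone)
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            // ... do the long-running work ...

            // Invoke the callback the caller handed us once the work is done.
            alertWhenDone();
        });
    }
}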
Outside of their role in events, which you're probably familiar with if you've used WinForms or ASP.NET, delegates are useful for making classes more flexible (e.g. the way they're used in LINQ).
Flexibility for "finding" things is pretty common. You have a collection of things, and you want to provide a way to find them. Rather than guessing every way that someone might want to find things, you can allow the caller to provide the algorithm, so that they can search your collection however they see fit.
Here's a trivial code sample:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace Delegates
{
class Program
{
static void Main(string[] args)
{
Collection coll = new Collection(5);
coll[0] = "This";
coll[1] = "is";
coll[2] = "a";
coll[3] = "test";
var result = coll.Find(x => x == "is");
Console.WriteLine(result);
result = coll.Find(x => x.StartsWith("te"));
Console.WriteLine(result);
}
}
public class Collection
{
string[] _Items;
public delegate bool FindDelegate(string FindParam);
public Collection(int Size)
{
_Items = new string[Size];
}
public string this[int i]
{
get { return _Items[i]; }
set { _Items[i] = value; }
}
public string Find(FindDelegate findDelegate)
{
foreach (string s in _Items)
{
if (findDelegate(s))
return s;
}
return null;
}
}
}
Output
is
test
There isn't really anything delegates will solve that can't be solved with other methods, but they provide a more elegant solution.
With delegates, any function can be used as long as it has the required parameters.
The alternative is often to use a kind of custom-built event system in the program, creating extra work and more areas for bugs to creep in.
Is there an advantage to using a delegate when dealing with external calls to a database?
For example, can code A:
static void Main(string[] args) {
DatabaseCode("test");
}
public static void DatabaseCode(string arg) {
.... code here ...
}
be improved in code B:
static void Main(string[] args) {
DatabaseCodeDelegate slave = DatabaseCode;
slave("test");
}
public static void DatabaseCode(string arg) {
.... code here ...
}
public delegate void DatabaseCodeDelegate(string arg);
It seems that this is subjective, but is it an area where there are strong conflicting viewpoints?