Custom Workflow Step - Retrieve Field From Entity - c#

I have a workflow activity which I use to send out e-mails to suppliers on a regular basis, scheduling follow-up phone calls etc. if there is no reply. However, my problem is that I somehow need to retrieve the e-mail address for each supplier before sending the message.
How can I go about retrieving the value in a specific field from a specific entity in CRM 2011?
Here is the pseudo-code:
For each supplier entity in the system {
    if (sendSalesRequest) {
        Send Initial E-mail
        - wait 21 days
        if (noReply) {
            Send Follow-up e-mail
            - wait 7 days
            if (noFollowupReply) {
                schedule phone call activity every day until reply
            }
        }
    }
}
However, I need a way of retrieving the e-mail address from the supplier entity.
I'm not looking for a worked solution (though I wouldn't turn it down!), just guidelines on how to go about this task as I'm brand new to CRM development.
Thanks,
Tysin

If you're new to CRM 2011, I would suggest reading the SDK. In particular you need to look at custom workflow activity development. The SDK has a number of samples which should help you along the way.
I would suggest having:
A master workflow, that has a single step to call your custom workflow activity which processes the suppliers. You start this workflow whenever you want to process all the suppliers.
An on-demand workflow that sends the initial email
An on-demand workflow that sends the follow up email
An on-demand workflow that schedules the phone call
The reason I suggest this is that it keeps the creation of data in workflows, where it is easier to manage and handle. E.g. you could change the email message without recompiling your custom workflow activity.
So the first step in your master custom workflow activity is to query CRM; I would suggest a QueryExpression to start with. That way you can get the data to base your logic on.
Then you need some code to start a workflow against a record, to send the emails and phone calls. There is an example for this here.
Then your code would look a little like this:
// Query CRM for the supplier records and the columns the logic needs
QueryExpression query = new QueryExpression("supplier");
query.ColumnSet = new ColumnSet("sendsalesrequest", "noreply", "nofollowupreply");
EntityCollection entities = service.RetrieveMultiple(query);

entities.Entities.ToList().ForEach(entity =>
{
    // A two-option (boolean) field comes back as a bool, not the string "Yes"
    if (entity.GetAttributeValue<bool>("sendsalesrequest"))
    {
        StartWorkflow(entity.Id, "Send Initial E Mail Workflow Name");
    }
    //etc
});
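The StartWorkflow call above is the piece the linked example covers; it is not an SDK method. A minimal sketch of one way to write it, assuming service is the IOrganizationService available to your activity, the workflow name is unique, and using the SDK's ExecuteWorkflowRequest:
private void StartWorkflow(Guid recordId, string workflowName)
{
    // Look up the workflow definition by name (in practice also filter on type/state)
    QueryExpression workflowQuery = new QueryExpression("workflow");
    workflowQuery.ColumnSet = new ColumnSet("workflowid");
    workflowQuery.Criteria.AddCondition("name", ConditionOperator.Equal, workflowName);
    Entity workflow = service.RetrieveMultiple(workflowQuery).Entities[0];

    // Ask CRM to run that workflow against the record
    ExecuteWorkflowRequest request = new ExecuteWorkflowRequest
    {
        WorkflowId = workflow.Id,
        EntityId = recordId
    };
    service.Execute(request);
}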
This is quite high level but should hopefully get you going in the right direction.

Related

Write an independent job which has no impact on the current system in a .NET Core API

I'm working on a CRM application and my client wants to download some information from the last 6 months. At the moment I have created an API endpoint which returns a FileContentResult object, and that will open a new tab in the browser and automatically download an Excel file.
But this process is time-consuming (since it involves over 500K records) and users don't wait on the same page until the process is done. So, once a user changes between pages I get issues, and sometimes the application returns a timeout error since the API response is slow.
Now, I'm planning to enhance that same function/API endpoint by introducing a silent job. Which means once the user clicks the "Download" button, the process will start and it will send a message stating "Your download process has been started. You will receive an email with the report within the next 15 minutes". In this way, users don't have to wait and can do something else in the system.
Currently, I'm using an async task and awaiting until the job is done.
public async Task<FileContentResult> ExportData()
{
//...
//... process data and create excel file
//...
//...
return new FileContentResult(*some byte array*, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet")
{
FileDownloadName = $"Data.xlsx"
};
}
and I'm calling this method by
await exportService.ExportData();
My concern is what are the things I should change here in order to avoid any impact on other processes and run this as a background job. Once I get the result, I will send an email with an attachment.
Please help me with your valuable ideas. Thanks in advance
what are the things I should change here in order to avoid any impact on other processes and run this as a background job. Once I get the result, I will send an email with an attachment.
You should use a basic distributed architecture, as I describe on my blog. Specifically:
Instead of creating the report in your ASP.NET app, your ASP.NET app should just create a message indicating that the report should be created, and place that message into a durable queue.
Have a separate, independent process read the messages from that queue, generate the report, and send the email.
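A minimal sketch of that split, assuming Azure Storage Queues as the durable queue (any broker would do); the GenerateExcelAsync and SendEmailWithAttachmentAsync helpers are placeholders for your existing export and mail code:
using System;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Microsoft.AspNetCore.Mvc;

// API side: enqueue a request instead of building the report in the request pipeline
[ApiController]
[Route("api/[controller]")]
public class ExportController : ControllerBase
{
    private readonly QueueClient _queue =
        new QueueClient("<storage-connection-string>", "report-requests");

    [HttpPost]
    public async Task<IActionResult> ExportData()
    {
        // Only enough information for the worker to build the report later
        await _queue.SendMessageAsync(JsonSerializer.Serialize(new { UserEmail = "user@example.com", Months = 6 }));
        return Accepted(new { message = "Your download process has been started. You will receive an email with the report." });
    }
}

// Separate worker process: read messages, build the report, email it, delete the message
public class ReportWorker
{
    public async Task RunAsync(QueueClient queue, CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            foreach (var msg in (await queue.ReceiveMessagesAsync(maxMessages: 5, cancellationToken: token)).Value)
            {
                byte[] excel = await GenerateExcelAsync(msg.MessageText);
                await SendEmailWithAttachmentAsync(msg.MessageText, excel);
                await queue.DeleteMessageAsync(msg.MessageId, msg.PopReceipt, token);
            }
            await Task.Delay(TimeSpan.FromSeconds(5), token);
        }
    }

    private Task<byte[]> GenerateExcelAsync(string requestJson) =>
        Task.FromResult(Array.Empty<byte>());   // placeholder: your existing export logic

    private Task SendEmailWithAttachmentAsync(string requestJson, byte[] excel) =>
        Task.CompletedTask;                     // placeholder: send the email with the attachment
}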

Teams UpdateActivity events difference when you test in newly created teams

We have a Teams bot that posts messages in MS Teams. The first activity of a new conversation is always an adaptive card and once in a while, we update that with a new card. This worked OK until I made a new Team with this bot.
The update we are attempting with UpdateActivityAsync returns NotFound.
After some troubleshooting, I noticed the following:
1. The new team has a different name: 19:...#thread.tacv2 as opposed to 19:...#thread.skype.
2. When I use an older team, it works as expected.
3. When I update the activity with text only (so no adaptive card as attachment), it always updates as expected.
4. After an update with text, we are able to update with an adaptive card ONCE. After one update with an adaptive card, any subsequent updates with adaptive cards return NotFound.
So, as a workaround, I now first update with text and immediately after that send the update with the card. Which is a bad UI thing (flickering) but it works for now.
We use the old bot framework version 3, which I know is not maintained anymore, but as far as I can find, it should still work (no plans to discontinue operation). Also given the above points (specifically point 4) I would expect it uses the same calls under the hood.
So, this works for older teams, but not for a team with #thread.tacv2
await connector.Conversations.UpdateActivityAsync(
teamsConversationId,
activityId,
(Activity)messageWithCard);
And for teams with #thread.tacv2 we now have to use this
var messageWithText = Activity.CreateMessageActivity();
messageWithText.ChannelId = teamsConversationId;
messageWithText.Id = activityId;
messageWithText.Type = ActivityTypes.Message;
messageWithText.Text = "Updated";
await connector.Conversations.UpdateActivityAsync(
teamsConversationId,
activityId,
(Activity)messageWithText);
await connector.Conversations.UpdateActivityAsync(
teamsConversationId,
activityId,
(Activity)messageWithCard);
The exception does not provide too many details:
Operation returned an invalid status code 'NotFound'
Conversation not found.
Does anyone know how to avoid this change between teams and allow updates of activity with cards?
Also (and this is much less important, but I think it's useful to add) I noticed that sometimes (I've seen it twice now) Teams seems unable to render the adaptive card and displays URIObject XML instead, containing error: cards.unsupported. However, if I exit the client and restart it, it renders fine... I have never seen this so far in the old channels.
Teams client version 1.3.00.362 (64-bit) (no dev mode).
Normal Azure tenant (no preview/trial)
EDIT 11/05/2020 It seems that this also happens on teams with the 'old' name (#thread.skype). So the '#thread.tacv2' seems unrelated.
We weren't able to find logs at the exact timestamps that you provided, but did find logs for the conversation ids on those dates and see 404s with the same minute and seconds in UTC. We assume the time stamps that were provided are represented in a different timezone.
From the logs we are seeing the following pattern:
Bot sends PUT activity with card - 404 returned
Bot sends PUT activity with text - 200 returned
Bot sends PUT activity with card - 200 returned
This looks like the same pattern that you shared in your original post.
There is a scenario that's causing 404s to be returned on PUTs whenever the bot tries to update an existing card message with the exact same card after new messages have been sent to a reply chain.
These are the repro steps:
Bot sends card to reply chain (can be root message or reply message)
Any user sends a message to the chain
Bot attempts to update message with the exact same card
Is it possible that your bot is encountering this? Is there a way to check whether the card your bot is sending in the first PUT request is the same card that is already in the original message?
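Just as an illustration of that check (not from the thread): cache the JSON of the last card sent per activity and compare before issuing the PUT. CachedCardPayloads and UpdateCardIfChangedAsync are hypothetical names; the types are from the v3 Microsoft.Bot.Connector SDK plus Newtonsoft.Json.
// Hypothetical cache of the last card JSON sent per activity id
private static readonly Dictionary<string, string> CachedCardPayloads = new Dictionary<string, string>();

private static async Task UpdateCardIfChangedAsync(
    ConnectorClient connector, string teamsConversationId, string activityId, IMessageActivity messageWithCard)
{
    // Serialize the card body so an identical card can be detected before the PUT
    string outgoingJson = JsonConvert.SerializeObject(messageWithCard.Attachments[0].Content);

    if (CachedCardPayloads.TryGetValue(activityId, out var lastJson) && lastJson == outgoingJson)
    {
        // Same card as the one already in the message: skip the update (or change the card
        // slightly, e.g. a hidden timestamp) instead of repeating a byte-for-byte PUT.
        return;
    }

    await connector.Conversations.UpdateActivityAsync(teamsConversationId, activityId, (Activity)messageWithCard);
    CachedCardPayloads[activityId] = outgoingJson;
}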

Orleans. Akka.net. Problem with understanding the actor model

If you don't know C# but you are familiar with the actor model please read my problem below as it is more about architecture and data management.
I'm a very junior C# developer trying to understand what the actor model is. I mostly get it, but there is one point that I cannot figure out.
Before I describe the problem, let me describe the context in order to provide you with a better understanding.
As a test example I want to build an app for an imaginary bank. I'm going to implement this application using both Akka.NET and Orleans, for learning purposes and to be able to compare them.
Use cases:
As a user I want to be able to create a new account;
As a user I want to be able to login to the app using my account unique number;
As a user I want to be able to deposit money to my account;
As a user I want to be able to select another user and transfer a specified sum of money to their account;
As a user I want to be able to withdraw a sum of money from my account.
So, there are the following entities:
User;
Account.
There is an identifying one-to-one relationship between the user and their account.
I'm going to use an ORM to store this data in my database. Obviously the models look something like this:
public class User
{
    public Guid Id { get; set; }
    public string FullName { get; set; }
    ....
}
public class Account
{
    public Guid Id { get; set; }
    public string UniqueNumber { get; set; }
    public decimal Balance { get; set; } // decimal rather than string, since this holds money
    ...
}
And I also want to have two actors/grains:
AccountActor;
TransactionService;
Their interfaces:
//Or IAccountGrain (for Orleans, grain methods must return Task/Task<T>)
public interface IAccountActor
{
    Task Deposit(Money amount);
    Task Withdraw(Money amount);
}
//Or ITransactionGrain
public interface ITransactionActor
{
    Task Transfer(IAccountActor from, IAccountActor to, Money amount);
}
The thing that I don't understand is how to treat the data in the relational database.
Let's imagine the following scenario:
50 users are online and vigorously making requests via the client app to the REST API of the app.
They are withdrawing, depositing and transferring money almost without any pauses.
The question is:
Should I create one actor per user account? I'm pretty sure that I need to, because how else can I implement thousands of transactions between different accounts?
How can I associate the user account with the AccountActor? Is it correct if I load the data from the database using a repository before the actor's activation/start and set up the state?
And the main problem:
How to save state back to the database?
Let's imagine an account A that has $1000.
About 100 transactions initiated by users involve this account.
Account A changes its state from message to message.
What is the best approach to save these changes to the database? I read that if I call the database directly from the actor, I will lose all the benefits because of blocking operations.
Should I create one more actor that processes messages from other actors and writes changes to the database using repositories?
I mean, I could send messages about account changes from the AccountActor to this new actor, where I would call the appropriate repository. But isn't that a bottleneck? Let's imagine 1000 users online and about 100,000 transactions between accounts. Then the actor responsible for saving account changes to the database could have too many messages to process.
Sorry for the long text. I tried to find examples of applications that use Orleans or Akka.NET, but I haven't found anything that uses a database.
Thank you for your attention.
There are a couple of ideas you're missing here, but let's take the questions in order.
Should I create one actor per user account? I'm pretty sure that I need to, because how else can I implement thousands of transactions between different accounts?
I assume the alternative you’re considering is multiple actors per user account, and this would be wrong. There can be only one actor per user account or else you will run into the problem you describe that simultaneous requests can withdraw the same money twice.
How can I associate the user account with the AccountActor?
You are missing a UserActor which owns the AccountActors. An account cannot exist without an owner; otherwise we don't know who owns the money in the account. In the real world, one typically does not send money to a random account. They want to send it to a person, and they use the sender's User account to do so.
Is it correct if I load the data from the database using a repository before the actor's activation/start and set up the state?
Yes, in fact that is mandatory. Without the state in the actor, the actor is not good for much.
What is the best approach to save these changes to the database? I read that if I call the database directly from the actor, I will lose all the benefits because of blocking operations. Should I create one more actor that processes messages from other actors and writes changes to the database using repositories?
You’re on the right track but not quite there yet. Saves of actor state are done with asynchronous methods to write to the DB. With an async method, the main thread does not block waiting for the DB write to happen, thus the processing thread can continue with its business.
When only one actor is involved in an action, it can save its own state via the async method. In banking transfers there are always 2 accounts involved, and the writes to both must succeed or fail together, never one succeeding and the other failing. Therefore, the TransactionActor will open a DB transaction and tell each AccountActor to save its state using that DB transaction. If either save fails, it aborts the transaction and both fail. Note that this method is a private async method on the TransactionActor, so you get the benefits of parallel processing.
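A rough sketch of that private save routine, not from the answer: it assumes Microsoft.Data.SqlClient plus a hypothetical IAccountRepository whose SaveAsync accepts the open connection and transaction, and it simplifies by having the TransactionActor write both account snapshots itself rather than routing through the AccountActors.
// Both writes commit together or roll back together
private async Task SaveTransferAsync(Account from, Account to, IAccountRepository repo)
{
    using (var connection = new SqlConnection(_connectionString)) // _connectionString: assumed config
    {
        await connection.OpenAsync();
        using (var dbTransaction = connection.BeginTransaction())
        {
            try
            {
                // Both account snapshots are written inside the same DB transaction
                await repo.SaveAsync(from, connection, dbTransaction);
                await repo.SaveAsync(to, connection, dbTransaction);
                dbTransaction.Commit();
            }
            catch
            {
                dbTransaction.Rollback(); // either both succeed or both fail
                throw;
            }
        }
    }
}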
BTW, you can't find any examples of writing to the DB in Orleans because that is all handled for you by the framework. The save methods are automatically async and they interact with the DB. All you do in Orleans is reference the Actor and the state is automatically pulled from the DB for you.
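For illustration only, here is roughly what an Orleans grain with framework-managed persistence looks like (the AccountState class and the "accountStore" provider name are assumptions, Orleans 3.x style):
// Grain state that Orleans loads before activation and persists on WriteStateAsync
public class AccountState
{
    public decimal Balance { get; set; }
}

public interface IAccountGrain : IGrainWithGuidKey
{
    Task Deposit(decimal amount);
    Task Withdraw(decimal amount);
}

public class AccountGrain : Grain, IAccountGrain
{
    private readonly IPersistentState<AccountState> _account;

    public AccountGrain(
        [PersistentState("account", "accountStore")] IPersistentState<AccountState> account)
    {
        _account = account; // state is already loaded from the configured store here
    }

    public async Task Deposit(decimal amount)
    {
        _account.State.Balance += amount;
        await _account.WriteStateAsync(); // async save handled by the storage provider
    }

    public async Task Withdraw(decimal amount)
    {
        if (_account.State.Balance < amount)
            throw new InvalidOperationException("Insufficient funds.");

        _account.State.Balance -= amount;
        await _account.WriteStateAsync();
    }
}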

Get notified when a cell value is changed in SQL Server database [duplicate]

This question already has answers here:
How to monitor SQL Server table changes by using c#?
(11 answers)
Closed 6 years ago.
I want to get notified when a certain change occurs in a database table. Consider the case: I want to perform a certain action when a column in a row changes its value to 5. How can I achieve this? I am using C# and Entity Framework to access the database.
For this you have to make a scheduled job which will continuously (say, at an interval of 5 minutes) poll the database and notify you, like Facebook's notification bar.
Alternatively, you can write a trigger on that table which inserts into or updates a notification table, and from there you will get notified.
The short answer is that you should probably try and manage this outside of SQL Server. I have to assume that you have some application logic executing outside of SQL Server that is the source of the update. Ideally your notification logic should be placed in your application tier before or after the database is updated.
Should you not be able to achieve this, three other options I can offer are:
Polling: You build a service that reads the value from SQL Server in a loop. The loop should read the value periodically and perform the notification (a minimal sketch follows after this list). From a best-practices standpoint polling is typically contraindicated because it adds persistent load to the database, yet it's surprisingly common in the field.
MSMQ: You update the value via a stored procedure, and use this article to send a message to MSMQ when the value is 5. You will need to write a service to consume the MSMQ message and process the notification. You may use a WCF service with MSMQ transport to make this easy.
Email: You send an email using sp_send_dbmail in the update stored procedure, and build the necessary notification consumer(s). Note that this method will likely also involve polling if you consume the email programmatically. You can avoid that by using IMAP IDLE to process the email notifications; try MailKit.
Reporting Services also apparently offers notifications, but I am not familiar with them.
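A minimal polling sketch under the question's Entity Framework setup (FooEntities, Customers and the StatusValue column are assumptions carried over from the snippet below):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Polls every 5 minutes and fires a notification the first time the watched column becomes 5
public class ValuePollingService
{
    public async Task RunAsync(CancellationToken token)
    {
        var alreadyNotified = new HashSet<int>();

        while (!token.IsCancellationRequested)
        {
            using (var context = new FooEntities())
            {
                var hits = context.Customers
                    .Where(c => c.StatusValue == 5)
                    .Select(c => c.CustomerID)
                    .ToList();

                foreach (var id in hits)
                {
                    if (alreadyNotified.Add(id))
                        Notify(id); // your notification logic (email, message bus, UI, ...)
                }
            }

            await Task.Delay(TimeSpan.FromMinutes(5), token);
        }
    }

    private void Notify(int customerId)
    {
        // send the notification here
    }
}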
using (var context = new FooEntities())
{
    try
    {
        // Note: == for comparison, not = (assignment)
        var customer = context.Customers.First(i => i.CustomerID == 23);
        customer.Name = "Bar";
        context.SaveChanges();
        // Write your notification code here
    }
    catch (Exception ex)
    {
        // Write notification along with the error you want to display.
    }
}
Search on Google; there are many different ways of displaying a notification.

Azure Application Insights custom response metric

I need some help to find a good pattern for a custom application insights metric.
Environment
I have a custom Windows Service running on multiple Azure VMs.
I can successfull add Events to my Monitoring instance on Azure.
Goal
I want to create a custom metric that allows me to monitor whether my Windows services are running and responding, per instance. It would be perfect if it acted like the response timeout in the website metric.
Each service instance has a custom machine-related identifier, like:
TelemetryClient telemetry = new TelemetryClient();
telemetry.Context.Device.Id = FingerPrint.Instance;
Now I want to create an alert if one of my service instances (Context.Device.Id) is not running or responding.
Question
How to achieve this?
Is it even possible or useful to monitor multiple instances of one service type inside one Application Insights resource? Or must I open a single Application Insights resource per instance?
Can anybody help me?
Response to Paul's answer
Track Metric Use TrackMetric to send metrics that are not attached to particular events. For example, you could monitor a queue length at regular intervals.
If I do so, what happens if my server restarts (an update or something) and my service doesn't start up? The service then doesn't send a TrackMetric to Application Insights and no alert is raised, because the value never drops below 1, but the service is still not running.
Regards Steffen
I found a good working solution, with only a few simple steps:
1) Implement an HttpListener instance on a service port (for example 8181) with a simple text response "200: OK" (a minimal sketch is below)
2) Add a matching endpoint to the Azure VM instance
3) Create a default web test on "myVM.cloudapp.net:8181" that checks the response text
Works great so far and matches all my needs! :)
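A minimal sketch of step 1, assuming the service is allowed to reserve the HTTP prefix; the port and response text are just the values from the steps above:
// Tiny health endpoint inside the Windows service: answers every request with "200: OK"
using System;
using System.Net;
using System.Text;
using System.Threading.Tasks;

public class HealthEndpoint
{
    private readonly HttpListener _listener = new HttpListener();

    public HealthEndpoint(int port = 8181)
    {
        _listener.Prefixes.Add($"http://+:{port}/");
    }

    public async Task RunAsync()
    {
        _listener.Start();
        while (_listener.IsListening)
        {
            HttpListenerContext context = await _listener.GetContextAsync();
            byte[] body = Encoding.UTF8.GetBytes("200: OK");
            context.Response.StatusCode = 200;
            context.Response.ContentLength64 = body.Length;
            await context.Response.OutputStream.WriteAsync(body, 0, body.Length);
            context.Response.Close();
        }
    }
}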
Per the documentation on Azure portal:
https://azure.microsoft.com/en-us/documentation/articles/app-insights-api-custom-events-metrics/#track-metric
Track Metric
Use TrackMetric to send metrics that are not attached to particular events. For example, you could monitor a queue length at regular intervals.
Metrics are displayed as statistical charts in metric explorer, but unlike events, you can't search for individual occurrences in diagnostic search.
Metric values should be >= 0 to be correctly displayed.
The C# code looks like this:
private void Run() {
    var appInsights = new TelemetryClient();
    while (true) {
        Thread.Sleep(60000);
        appInsights.TrackMetric("Queue", queue.Length);
    }
}
I don't think there is currently a good way to accomplish this. What you're actually looking for is a way to detect a "stale heartbeat." For example, if your service was sending up an event "Service Health is okay", you'd want an alert that you haven't received one of those events in a certain amount of time. There aren't any date/time conditional operators in AI's alert system.
Microsoft might explain that this scenario is not intended to be satisfied by AI, as this is more of a "health checking" system's responsibility, like SCOM or Operational Insights or something else entirely.
I agree this is something that needs a solution, and using AI for it would be wonderful (I've already attempted to accomplish the same thing with no luck); I just think "they" will say it's not a scenario in the realm of responsibility for AI.
