Facebook Messenger's APIs support a message type called the "Button Template". This card type allows sending a message with text followed by a list of buttons with different actions, as in the image below (from Facebook's documentation):
This template is similar to the Generic template, which is what the Bot Framework's Hero cards produce, but it is distinct in that it does not require a "Header" on the card, whereas the Generic template does.
I'm trying to figure out how to render this template using the Bot Framework. We've gotten it to work in Facebook Messenger by populating MessageActivity.ChannelData with a custom model we created based on the JSON in Facebook's documentation, but this causes the message to fail to appear in the Bot Framework Emulator. Is there any way to render this template using official Bot Framework methods/classes?
Short answer: No. The Emulator is not designed to test channel-specific functionality in this way.
You are on the right track using channelData to store your custom message definition. But, as you have already discovered, since you want to test a Facebook-specific feature, the only way is to test it on Facebook Messenger directly.
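For reference, a minimal C# sketch of that approach in a Bot Framework SDK v4 bot might look like the following, with the channelData shaped per Facebook's Send API documentation. The text, URL, button titles, and payload are placeholders, and turnContext/cancellationToken are assumed to be in scope inside the bot's turn handler.

```csharp
using Microsoft.Bot.Builder;
using Newtonsoft.Json.Linq;

// Facebook Button Template, shaped per Facebook's Send API docs, attached as channel data.
var facebookButtonTemplate = JObject.FromObject(new
{
    attachment = new
    {
        type = "template",
        payload = new
        {
            template_type = "button",
            text = "What do you want to do next?",   // placeholder text
            buttons = new object[]
            {
                new { type = "web_url", url = "https://example.com", title = "Visit Website" },
                new { type = "postback", title = "Start Chatting", payload = "START_CHATTING" }
            }
        }
    }
});

var reply = MessageFactory.Text("What do you want to do next?"); // fallback for other channels
if (turnContext.Activity.ChannelId == "facebook")
{
    reply.ChannelData = facebookButtonTemplate;
}
await turnContext.SendActivityAsync(reply, cancellationToken);
```

Keeping the plain-text fallback means the same reply still renders something in the Emulator and on other channels, even though the Button Template itself only appears in Messenger.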
Related
I have created a chatbot with Bot Framework Composer. Because the Composer has some limitations around manipulating data gathered from Adaptive Cards, I would like to create one dialog in the C# Bot Framework as a waterfall dialog. Is there a way to add a waterfall dialog to the Composer project with Visual Studio and then somehow invoke it? I have found a blog post about doing it the opposite way (link), but although I have searched through the internet I cannot find any help. Has anyone done this before? Is it possible?
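For context, this is the kind of plain SDK waterfall dialog the question refers to, sketched as a hypothetical two-step ComponentDialog (prompt for a name, then confirm). Whether and how such a dialog can be registered and invoked from a Composer project is the open part of the question.

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Dialogs;

// A plain Bot Framework SDK waterfall dialog: prompt for a name, then confirm it.
public class CollectNameDialog : ComponentDialog
{
    public CollectNameDialog() : base(nameof(CollectNameDialog))
    {
        AddDialog(new TextPrompt(nameof(TextPrompt)));
        AddDialog(new WaterfallDialog(nameof(WaterfallDialog), new WaterfallStep[]
        {
            AskNameStepAsync,
            ConfirmStepAsync,
        }));
        InitialDialogId = nameof(WaterfallDialog);
    }

    private static async Task<DialogTurnResult> AskNameStepAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        return await stepContext.PromptAsync(nameof(TextPrompt),
            new PromptOptions { Prompt = MessageFactory.Text("What is your name?") },
            cancellationToken);
    }

    private static async Task<DialogTurnResult> ConfirmStepAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        var name = (string)stepContext.Result;
        await stepContext.Context.SendActivityAsync(MessageFactory.Text($"Thanks, {name}."), cancellationToken);
        return await stepContext.EndDialogAsync(name, cancellationToken);
    }
}
```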
I'm looking for a way to start a Teams call from within my own custom application.
Our application has a whole bunch of phone numbers for Customers/Suppliers/... and I want to give my users the option to initiate a call just by clicking a button in the application.
Does anybody have a good sample on how to do this?
I know that in the past, this was fairly easy to do with Skype and Lync. They just had an SDK you could call from your own application.
But when I try to look for the same thing in Teams I always end up in documentation around bots. And it's a bit confusing if you're new to that part :-)
So the main question is: how can I start a phone call with Teams from my own code? Phone calls to an actual phone number, not a Teams account.
There's no such API available to create a call. However, you can use a deep link to make a call. Follow this doc to understand deep linking to an audio or audio-video call.
To make an audio call: https://teams.microsoft.com/l/call/0/0?users=<user1>,<user2>
To make a video call: https://teams.microsoft.com/l/call/0/0?users=<user1>,<user2>&withVideo=true
To make a call to a combination of VoIP and PSTN users: https://teams.microsoft.com/l/call/0/0?users=<user1>,4:<phonenumber>
The user ID field supports the Azure AD UserPrincipalName, typically an email address, or, in the case of a PSTN call, a PSTN MRI of the form 4:<phonenumber>.
The deep link will not start the call directly; instead, Teams shows a confirmation pop-up before placing the call.
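If the goal is to trigger that deep link from your own code, a minimal C# sketch might look like the following; the email address and phone number are placeholders, and the OS hands the URL to the Teams client (or to the browser, which then offers to open Teams).

```csharp
using System.Diagnostics;

// Build a Teams "call" deep link and let the OS open it with the default handler.
// The user and the 4:-prefixed PSTN number below are placeholders.
var deepLink = "https://teams.microsoft.com/l/call/0/0?users=user1@contoso.com,4:+15551234567";

Process.Start(new ProcessStartInfo
{
    FileName = deepLink,
    UseShellExecute = true   // required on .NET Core / .NET 5+ to open a URL via the shell
});
```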
I am working on a bot with Bot Framework 4.0, and at the moment I'm implementing text-based Adaptive Cards. The bot can translate any text the user sends using the Azure Translation API.
Link to the Translation API: https://azure.microsoft.com/en-us/services/cognitive-services/translator/
My question is whether it is possible to translate outgoing Adaptive Cards via the Translation API and, if not, whether there is another possible solution to this problem.
It is possible, though not automatic. Translating cards is tricky because you can't just translate every string in the card. You need to know which strings to translate and which ones not to translate. I've created a library that does this, and you can read about it here.
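Not the linked library, but a minimal sketch of the idea, assuming the card is available as JSON and that only the text of TextBlock elements should be translated (ids, action data, and other machine-readable strings are left alone). The Translator key and region are placeholders for your own Azure Translator resource.

```csharp
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

public static class CardTranslator
{
    private static readonly HttpClient Http = new HttpClient();

    // Translate only the "text" values of TextBlock elements in an Adaptive Card.
    public static async Task<JObject> TranslateCardAsync(JObject card, string toLanguage, string key, string region)
    {
        var textBlocks = card.Descendants().OfType<JObject>()
            .Where(o => (string)o["type"] == "TextBlock" && o["text"] != null)
            .ToList();

        foreach (var textBlock in textBlocks)
        {
            textBlock["text"] = await TranslateAsync((string)textBlock["text"], toLanguage, key, region);
        }
        return card;
    }

    private static async Task<string> TranslateAsync(string text, string to, string key, string region)
    {
        var request = new HttpRequestMessage(HttpMethod.Post,
            $"https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&to={to}");
        request.Headers.Add("Ocp-Apim-Subscription-Key", key);
        request.Headers.Add("Ocp-Apim-Subscription-Region", region);
        request.Content = new StringContent(
            JsonConvert.SerializeObject(new[] { new { Text = text } }),
            Encoding.UTF8, "application/json");

        var response = await Http.SendAsync(request);
        response.EnsureSuccessStatusCode();
        var body = JArray.Parse(await response.Content.ReadAsStringAsync());
        return (string)body[0]["translations"][0]["text"];
    }
}
```

In a real card you would also have to decide what to do about button titles, Input placeholders, and fact labels, which is exactly why a purpose-built library helps.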
I am very new to MvvmCross and Xamarin Android development. My question: are there any examples of using MvxObservableCollection (I read somewhere that it can help update a list with each API call)? I have been searching for a few hours now, but I can't seem to find what I am looking for.
As I mentioned, in my sample application, when the user clicks a button an API request is sent and the information comes up in a second view model, but there is a list that is constantly changing with each API call. So is there any other way, or are there any MvxObservableCollection examples out there? Thanks!
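For what it's worth, a minimal sketch of the usual pattern looks like this, assuming MvvmCross 6+; IItemService and Item are hypothetical stand-ins for your own API client and model.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using MvvmCross.Commands;
using MvvmCross.ViewModels;

public class Item
{
    public string Name { get; set; }
}

// Hypothetical API client abstraction.
public interface IItemService
{
    Task<List<Item>> GetItemsAsync();
}

public class ItemsViewModel : MvxViewModel
{
    private readonly IItemService _itemService;

    public MvxObservableCollection<Item> Items { get; } = new MvxObservableCollection<Item>();

    public IMvxAsyncCommand RefreshCommand { get; }

    public ItemsViewModel(IItemService itemService)
    {
        _itemService = itemService;
        RefreshCommand = new MvxAsyncCommand(RefreshAsync);
    }

    private async Task RefreshAsync()
    {
        var latest = await _itemService.GetItemsAsync();   // hypothetical API call

        // ReplaceWith swaps the contents and raises a single collection-changed
        // notification, so the bound list refreshes in place.
        Items.ReplaceWith(latest);
    }
}
```

Bind the list control's ItemsSource to Items and the button to RefreshCommand; because the collection instance never changes and is only mutated, the bound list refreshes on every API call.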
I am developing a WPF app and I want to add speech-to-text functionality using the Nuance NaturallySpeaking SDK. Briefly, my requirements are:
1) The user speaks text, and it should be typed on the screen into a RichTextBox control.
2) The user speaks a command, and the program should execute the command logic. For example, I have a list whose items are apple, mango, and grape, and I want to select an item from the list with the command 'list item mango'.
My questions are:
a) Do I have to use the Dragon SDK Client (DSC) Edition to fulfill this requirement, or is there another SDK or API from Nuance for it?
b) Can anyone suggest a basic tutorial to get started, such as how to set up the SDK? A sample WPF application would be great. I have no clue where to start or how to implement this.
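I can't speak to the Nuance SDK specifics, but purely to illustrate the dictation-plus-command split you describe, here is a small sketch using the built-in System.Speech engine (not Dragon) in a WPF window; the command phrases and the control names ItemsListBox and DictationRichTextBox are placeholders.

```csharp
using System.Speech.Recognition;
using System.Windows;

public partial class MainWindow : Window
{
    private readonly SpeechRecognitionEngine _recognizer = new SpeechRecognitionEngine();

    public MainWindow()
    {
        InitializeComponent();

        // Free dictation for ordinary text.
        _recognizer.LoadGrammar(new DictationGrammar { Name = "dictation" });

        // A small fixed command grammar, e.g. "list item mango".
        var commands = new Choices("list item apple", "list item mango", "list item grape");
        _recognizer.LoadGrammar(new Grammar(new GrammarBuilder(commands)) { Name = "commands" });

        _recognizer.SpeechRecognized += OnSpeechRecognized;
        _recognizer.SetInputToDefaultAudioDevice();
        _recognizer.RecognizeAsync(RecognizeMode.Multiple);
    }

    private void OnSpeechRecognized(object sender, SpeechRecognizedEventArgs e)
    {
        Dispatcher.Invoke(() =>
        {
            if (e.Result.Grammar.Name == "commands")
            {
                // Command logic: select the spoken item in the list (placeholder control name).
                var item = e.Result.Text.Replace("list item ", "");
                ItemsListBox.SelectedItem = item;
            }
            else
            {
                // Dictation: append recognized text to the RichTextBox (placeholder control name).
                DictationRichTextBox.AppendText(e.Result.Text + " ");
            }
        });
    }
}
```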