How to use a custom image upload handler in TinyMCE Blazor?

I'm trying to use a custom image handler for TinyMCE Blazor Component within a razor page, without success. The reason I need to use a custom upload handler rather than just allowing TinyMCE to post the request is that I need to add a JWT to the request for authentication.
TinyMCE configuration is done via a Dictionary<string, object>:
@code {
    private Dictionary<string, object> editorConf = new Dictionary<string, object>
    {
        {"plugins", "autolink media link image emoticons table paste"},
        {"toolbar", "undo redo | styles | bold italic underline | table | link image paste "},
        {"paste_data_images", "true"},
        {"width", "100%"},
        {"automatic_uploads", true},
        {"images_upload_url", "/UploadImage/"} // works fine if no JWT required
    };
    // other code
}
I cannot write the handler as a C# method because I do not know the parameter types; the only examples I've found are written in PHP (which I am not familiar with) and JavaScript, so the parameters are not typed.
I have tried an approach similar to the one suggested here https://github.com/tinymce/tinymce-blazor/issues/19, creating a JS script that invokes a C# method, which would then add the JWT and do the required work before returning the file path of the image.
export function upload_handler(blobInfo, success, failure, progress) {
    DotNet.invokeMethodAsync('MyApp', 'UploadHandler', 'this is a test!')
        .then((data) => {
            success(data);
        });
}
private static IJSObjectReference? js_imagesupload;

private Dictionary<string, object> editorConf = new Dictionary<string, object>
{
    {"plugins", "autolink media link image emoticons table paste"},
    {"toolbar", "undo redo | styles | bold italic underline | table | link image paste "},
    {"paste_data_images", "true"},
    {"width", "100%"},
    {"automatic_uploads", true},
    {"images_upload_handler", (async () => await js_imagesupload.InvokeVoidAsync("upload_handler", null))}
};
protected override async Task OnAfterRenderAsync(bool firstRender)
{
    if (firstRender)
    {
        js_imagesupload = await JS.InvokeAsync<IJSObjectReference>("import", "./scripts/imagesupload.js"); // js script
    }
}

[JSInvokable]
public static Task<string> UploadHandler(string value)
{
    // add JWT to request and do image upload work here
}
The problem with this is I get an error complaining about JSON serialization.
crit: Microsoft.AspNetCore.Components.WebAssembly.Rendering.WebAssemblyRenderer[100]
      Unhandled exception rendering component: Serialization and deserialization of 'System.Func`1[[System.Threading.Tasks.Task, System.Private.CoreLib...
I can see why this happens: of course TinyMCE cannot serialize the lambda expression. I'd really appreciate it if someone knows a way around this issue. I'm kind of new to Blazor, so it's entirely possible I'm missing something simple! Many thanks.

It turns out that the answer is that you can store the config in a JavaScript file, which in turn allows you to set images_upload_handler to a JavaScript function. Thanks to the TinyMCE contributor jscasca! https://github.com/tinymce/tinymce-blazor/issues/60
So I created a file called tinyMceConf.js and stored this in wwwroot/js/
The file contains the config (tinyMceConf) and also the upload handler which calls a C# method.
SetDotNetHelper is called from the razor component so that the JS has a reference to the .NET instance; see this helpful article for more information: Calling .NET Instance Methods in ASP.NET Core Blazor Directly from JavaScript. I had to do this because I needed my method to be an instance method rather than static.
tinyMceConf.js
tinyMceConf = {
    height: 400,
    toolbar: 'undo redo | styles | bold italic underline | table | link image paste | emoticons',
    plugins: 'autolink media link image emoticons table paste',
    paste_data_images: true,
    automatic_uploads: true,
    images_upload_handler: js_upload_handler
}

function SetDotNetHelper(dotNetHelper) {
    window.dotNetHelper = dotNetHelper;
}

function js_upload_handler(blobInfo, success, failure, progress) {
    console.log(blobInfo.filename());
    window.dotNetHelper.invokeMethodAsync('UploadHandler', blobInfo.base64(), blobInfo.filename())
        .then((data) => {
            success(data);
        });
}
Then I added this line to index.html (within the head)
<script src="/js/tinyMceConf.js" type="text/javascript"></script>
Then within my razor component I set JsConfSrc to tinyMceConf (in tinyMceConf.js).
<TinyMCE.Blazor.Editor Id="uuid"
                       Inline=false
                       CloudChannel="5"
                       Disable=false
                       ClassName="tinymce-wrapper"
                       JsConfSrc="tinyMceConf"
                       Field="() => loadedTicket.TechNotes"
                       @bind-Value="loadedTicket.TechNotes" />
Within my razor component I have a method using the JSInvokable attribute that allows my js to invoke it.
[JSInvokable]
public async Task<string> UploadHandler(string base64string, string filename)
{
    // Within my method I use the base 64 string and filename passed from js
    // to do what I need and return the url of the uploaded image
}
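For completeness, the razor component also has to hand its instance to SetDotNetHelper so that window.dotNetHelper is populated. A minimal sketch of that wiring (JS here is the injected IJSRuntime, and the component type name MyComponent is just a placeholder for yours):

private DotNetObjectReference<MyComponent>? dotNetHelper; // dispose this when the component is disposed

protected override async Task OnAfterRenderAsync(bool firstRender)
{
    if (firstRender)
    {
        // Give JS the instance reference so js_upload_handler can invoke
        // the instance method UploadHandler above.
        dotNetHelper = DotNetObjectReference.Create(this);
        await JS.InvokeVoidAsync("SetDotNetHelper", dotNetHelper);
    }
}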
That's it!

Related

Blazor - Drag and Drop uploads file

I'm facing an issue with uploading a file via the drag-and-drop API.
Here is the Blazor component:
<InputTextArea @ondrop="HandleDrop"
               @ondragenter="HandleDragEnter"
               @ondragleave="HandleDragLeave" />
@code {
    private async Task HandleDrop(DragEventArgs args)
    {
        var files = args.DataTransfer.Files;
        // Do something to upload the file and get the content
    }
}
I want to upload the file and display it in the textarea. Since .NET 6, DragEventArgs lists all files (if any) associated with the drag-and-drop event.
Natively there seems to be no way to get the content of those files.
Therefore I tried to achieve this with JavaScript interop:
private async Task HandleDrop(DragEventArgs args)
{
    var content = await jsRuntime.InvokeAsync<string>("getContentFromFile", args);
}
With the following JavaScript function (which is referenced in the _Layout.cshtml):
async function getContentFromFile(args) {
    // Use some API here? File Upload API? Didn't work as args is only a data object and not the "real" DragEventArgs from JavaScript
    // I also tried FileReader
    const fileName = args.Files[0]; // Let's assume we always have one file
    let content = await new Promise((resolve) => {
        let fileReader = new FileReader();
        fileReader.onload = (e) => resolve(fileReader.result);
        fileReader.readAsText(fileName);
    });
    console.log(content);
    return content;
}
This approach led to other issues and exceptions, like the FileReader throwing:
parameter is not of type 'Blob'
Is this approach possible at all with the current version of Blazor (.NET 6)? Any tips are welcome.
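As an aside, if raw DragEventArgs isn't a hard requirement, Blazor's built-in InputFile component (Microsoft.AspNetCore.Components.Forms) exposes file contents directly in C#, and browsers natively treat a file dropped onto a file input as a selection, so styling the InputFile over the drop zone can avoid JS interop entirely. A minimal sketch, with HandleFileSelected and the size limit as placeholders:

<InputFile OnChange="HandleFileSelected" />
<textarea readonly>@content</textarea>

@code {
    private string? content;

    private async Task HandleFileSelected(InputFileChangeEventArgs e)
    {
        // OpenReadStream enforces a size cap; raise maxAllowedSize as needed.
        using var stream = e.File.OpenReadStream(maxAllowedSize: 10 * 1024 * 1024);
        using var reader = new StreamReader(stream);
        content = await reader.ReadToEndAsync();
    }
}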

Scraping html list data from a dynamic server

Hello guys!
Sorry for the dumb question, this is my last resort. I swear I tried countless other Stack Overflow questions, different frameworks, etc., but those didn't seem to help.
I have the following problem:
A website displays a list of data (there is a TON of div, li, span etc. tags in front of it; it's a big HTML document).
I'm writing a tool that fetches data from a specific list inside a ton of other div tags, downloads it and outputs an Excel file.
The website I'm trying to access is dynamic. So you open the website, it loads a little bit, and then the list appears (probably some JS and stuff).
When I try to download the website via a WebRequest in C#, the HTML I get is almost empty, with a ton of white space, lots of non-HTML stuff, and some garbage data as well.
Now: I'm pretty used to C#, HtmlAgilityPack, and countless other libraries, but not so much to web-related stuff. I tried CefSharp, Chromium etc., but unfortunately couldn't get them to work properly.
I want to have HTML in my program to work with that looks exactly like the HTML you see when you open the dev console in Chrome when visiting the website mentioned above.
The HTML parser works flawlessly there.
This is how I imagine the code could look, simplified.
Extreme C# pseudocode:
WebBrowserEngine web = new WebBrowserEngine();
web.LoadURLuntilFinished(url); // with all the JS executed and stuff
String html = web.getHTML();
web.close();
My goal would be that the string html in the pseudocode looks exactly like the one in the Chrome dev tab.
Maybe there is a solution posted somewhere else, but I swear I couldn't find it; I've been looking for days.
Any help is greatly appreciated.
@SpencerBench is spot on in saying:
It could be that the page is using some combination of scroll state, element visibility, or element positions to trigger content loading. If that's the case, then you'll need to figure out what it is and trigger it programmatically.
To answer the question for your specific use case, we need to understand the behaviour of the page you want to scrape data from, or as I asked in the comments, how do you know the page is "finished"?
However, it's possible to give a fairly generic answer to the question which should act as a starting point for you.
This answer uses Selenium, a package which is commonly used for automating testing of web UIs, but as they say on their home page, that's not the only thing it can be used for.
Primarily it is for automating web applications for testing purposes, but is certainly not limited to just that. Boring web-based administration tasks can (and should) also be automated as well.
The web site I'm scraping
So first we need a web site. I've created one using ASP.NET Core MVC with .NET Core 3.1, although the web site's technology stack isn't important; it's the behaviour of the page you want to scrape that matters. This site has 2 pages, unimaginatively called Page1 and Page2.
Page controllers
There's nothing special in these controllers:
namespace StackOverflow68925623Website.Controllers
{
    using Microsoft.AspNetCore.Mvc;

    public class Page1Controller : Controller
    {
        public IActionResult Index()
        {
            return View("Page1");
        }
    }
}

namespace StackOverflow68925623Website.Controllers
{
    using Microsoft.AspNetCore.Mvc;

    public class Page2Controller : Controller
    {
        public IActionResult Index()
        {
            return View("Page2");
        }
    }
}
API controller
There's also an API controller (i.e. it returns data rather than a view) which the views can call asynchronously to get some data to display. This one just creates an array of the requested number of random strings.
namespace StackOverflow68925623Website.Controllers
{
    using Microsoft.AspNetCore.Mvc;
    using System;
    using System.Collections.Generic;
    using System.Text;

    [Route("api/[controller]")]
    [ApiController]
    public class DataController : ControllerBase
    {
        [HttpGet("Create")]
        public IActionResult Create(int numberOfElements)
        {
            var response = new List<string>();
            for (var i = 0; i < numberOfElements; i++)
            {
                response.Add(RandomString(10));
            }

            return Ok(response);
        }

        private string RandomString(int length)
        {
            var sb = new StringBuilder();
            var random = new Random();
            for (var i = 0; i < length; i++)
            {
                var characterCode = random.Next(65, 91); // A-Z (Next's upper bound is exclusive)
                sb.Append((char)characterCode);
            }

            return sb.ToString();
        }
    }
}
Views
Page1's view looks like this:
@{
    ViewData["Title"] = "Page 1";
}

<div class="text-center">
    <div id="list" />
    <script src="~/lib/jquery/dist/jquery.min.js"></script>
    <script>
        var apiUrl = 'https://localhost:44394/api/Data/Create';
        $(document).ready(function () {
            $('#list').append('<li id="loading">Loading...</li>');
            $.ajax({
                url: apiUrl + '?numberOfElements=20000',
                datatype: 'json',
                success: function (data) {
                    $('#loading').remove();
                    var insert = ''
                    for (var item of data) {
                        insert += '<li>' + item + '</li>';
                    }
                    insert = '<ul id="results">' + insert + '</ul>';
                    $('#list').html(insert);
                },
                error: function (xht, status) {
                    alert('Error: ' + status);
                }
            });
        });
    </script>
</div>
So when the page first loads, it contains just an empty div called list. However, the page load triggers the function passed to jQuery's $(document).ready, which makes an asynchronous call to the API controller, requesting an array of 20,000 elements. While the call is in progress, "Loading..." is displayed on the screen, and when the call returns, it is replaced by an unordered list containing the received data. This is written in a way intended to be friendly to developers of automated UI tests, or of screen scrapers, because we can tell whether all the data has loaded by testing whether or not the page contains an element with the ID results.
Page2's view looks like this:
@{
    ViewData["Title"] = "Page 2";
}

<div class="text-center">
    <div id="list">
        <ul id="results" />
    </div>
    <script src="~/lib/jquery/dist/jquery.min.js"></script>
    <script>
        var apiUrl = 'https://localhost:44394/api/Data/Create';
        var requestCount = 0;
        var maxRequests = 20;

        $(document).ready(function () {
            getData();
        });

        function getDataIfAtBottomOfPage() {
            console.log("scroll - " + requestCount + " requests");
            if (requestCount < maxRequests) {
                console.log("scrollTop " + document.documentElement.scrollTop + " scrollHeight " + document.documentElement.scrollHeight);
                if (document.documentElement.scrollTop > (document.documentElement.scrollHeight - window.innerHeight - 100)) {
                    getData();
                }
            }
        }

        function getData() {
            window.onscroll = undefined;
            requestCount++;
            $('#results').append('<li id="loading">Loading...</li>');
            $.ajax({
                url: apiUrl + '?numberOfElements=50',
                datatype: 'json',
                success: function (data) {
                    var insert = ''
                    for (var item of data) {
                        insert += '<li>' + item + '</li>';
                    }
                    $('#loading').remove();
                    $('#results').append(insert);
                    if (requestCount < maxRequests) {
                        window.setTimeout(function () { window.onscroll = getDataIfAtBottomOfPage }, 1000);
                    } else {
                        $('#results').append('<li>That\'s all folks');
                    }
                },
                error: function (xht, status) {
                    alert('Error: ' + status);
                }
            });
        }
    </script>
</div>
This gives a nicer user experience because it requests data from the API controller in multiple smaller chunks, so the first chunk of data appears fairly quickly, and once the user has scrolled down to somewhere near the bottom of the page, the next chunk of data is requested, until 20 chunks have been requested and displayed, at which point the text "That's all folks" is added to the end of the unordered list. However this is more difficult to interact with programmatically because you need to scroll the page down to make the new data appear.
(Yes, this implementation is a bit buggy - if the user gets to the bottom of the page too quickly then requesting the next chunk of data doesn't happen until they scroll up a bit. But the question isn't about how to implement this behaviour in a web page, but about how to scrape the displayed data, so please forgive my bugs.)
The scraper
I've implemented the scraper as a xUnit unit test project, just because I'm not doing anything with the data I've scraped from the web site other than Asserting that it is of the correct length, and therefore proving that I haven't prematurely assumed that the web page I'm scraping from is "finished". You can put most of this code (other than the Asserts) into any type of project.
Having created your scraper project, you need to add the Selenium.WebDriver and Selenium.WebDriver.ChromeDriver nuget packages.
Page Object Model
I'm using the Page Object Model pattern to provide a layer of abstraction between functional interaction with the page and the implementation detail of how to code that interaction. Each of the pages in the web site has a corresponding page model class for interacting with that page.
First, a base class with some code which is common to more than one page model class.
namespace StackOverflow68925623Scraper
{
    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Support.UI;

    public class PageModel
    {
        protected PageModel(IWebDriver driver)
        {
            this.Driver = driver;
        }

        protected IWebDriver Driver { get; }

        public void ScrollToTop()
        {
            var js = (IJavaScriptExecutor)this.Driver;
            js.ExecuteScript("window.scrollTo(0, 0)");
        }

        public void ScrollToBottom()
        {
            var js = (IJavaScriptExecutor)this.Driver;
            js.ExecuteScript("window.scrollTo(0, document.body.scrollHeight)");
        }

        protected IWebElement GetById(string id)
        {
            try
            {
                return this.Driver.FindElement(By.Id(id));
            }
            catch (NoSuchElementException)
            {
                return null;
            }
        }

        protected IWebElement AwaitGetById(string id)
        {
            var wait = new WebDriverWait(Driver, TimeSpan.FromSeconds(10));
            return wait.Until(e => e.FindElement(By.Id(id)));
        }
    }
}
This base class gives us 4 convenience methods:
Scroll to the top of the page
Scroll to the bottom of the page
Get the element with the supplied ID, or return null if it doesn't exist
Get the element with the supplied ID, or wait for up to 10 seconds for it to appear if it doesn't exist yet
And each page in the web site has its own model class, derived from that base class.
namespace StackOverflow68925623Scraper
{
    using OpenQA.Selenium;

    public class Page1Model : PageModel
    {
        public Page1Model(IWebDriver driver) : base(driver)
        {
        }

        public IWebElement AwaitResults => this.AwaitGetById("results");

        public void Navigate()
        {
            this.Driver.Navigate().GoToUrl("https://localhost:44394/Page1");
        }
    }
}

namespace StackOverflow68925623Scraper
{
    using OpenQA.Selenium;

    public class Page2Model : PageModel
    {
        public Page2Model(IWebDriver driver) : base(driver)
        {
        }

        public IWebElement Results => this.GetById("results");

        public void Navigate()
        {
            this.Driver.Navigate().GoToUrl("https://localhost:44394/Page2");
        }
    }
}
And the Scraper class:
namespace StackOverflow68925623Scraper
{
    using OpenQA.Selenium.Chrome;
    using System;
    using System.Threading;
    using Xunit;

    public class Scraper
    {
        [Fact]
        public void TestPage1()
        {
            // Arrange
            var driver = new ChromeDriver();
            var page = new Page1Model(driver);
            page.Navigate();

            try
            {
                // Act
                var actualResults = page.AwaitResults.Text.Split(Environment.NewLine);

                // Assert
                Assert.Equal(20000, actualResults.Length);
            }
            finally
            {
                // Ensure the browser window closes even if things go pear-shaped
                driver.Quit();
            }
        }

        [Fact]
        public void TestPage2()
        {
            // Arrange
            var driver = new ChromeDriver();
            var page = new Page2Model(driver);
            page.Navigate();

            try
            {
                // Act
                while (!page.Results.Text.Contains("That's all folks"))
                {
                    Thread.Sleep(1000);
                    page.ScrollToBottom();
                    page.ScrollToTop();
                }

                var actualResults = page.Results.Text.Split(Environment.NewLine);

                // Assert - we expect 1001 because of the extra "that's all folks"
                Assert.Equal(1001, actualResults.Length);
            }
            finally
            {
                // Ensure the browser window closes even if things go pear-shaped
                driver.Quit();
            }
        }
    }
}
So, what's happening here?
// Arrange
var driver = new ChromeDriver();
var page = new Page1Model(driver);
page.Navigate();
ChromeDriver is in the Selenium.WebDriver.ChromeDriver package and implements the IWebDriver interface from the Selenium.WebDriver package with the code to interact with the Chrome browser. Other packages are available containing implementations for all popular browsers. Instantiating the driver object opens a browser window, and calling its Navigate method directs the browser to the page we want to test/scrape.
// Act
var actualResults = page.AwaitResults.Text.Split(Environment.NewLine);
Because on Page1, the results element doesn't exist until all the data has been displayed, and no user interaction is required in order for it to be displayed, we use the page model's AwaitResults property to just wait for that element to appear and return it once it has appeared.
AwaitResults returns an IWebElement instance representing the element, which in turn has various methods and properties we can use to interact with the element. In this case we use its Text property, which returns the element's contents as a string, without any markup. Because the data is displayed as an unordered list, each element in the list is delimited by a line break, so we can use String's Split method to convert it to a string array.
Page2 needs a different approach - we can't use the presence of the results element to determine whether the data has all been displayed, because that element is on the page right from the start, instead we need to check for the string "That's all folks" which is written right at the end of the last chunk of data. Also the data isn't loaded all in one go, and we need to keep scrolling down in order to trigger the loading of the next chunk of data.
// Act
while (!page.Results.Text.Contains("That's all folks"))
{
    Thread.Sleep(1000);
    page.ScrollToBottom();
    page.ScrollToTop();
}
var actualResults = page.Results.Text.Split(Environment.NewLine);
Because of the bug in the UI that I mentioned earlier, if we get to the bottom of the page too quickly, the fetch of the next chunk of data isn't triggered, and attempting to scroll down when already at the bottom of the page doesn't raise another scroll event. That's why I'm scrolling to the bottom of the page and then back to the top - that way I can guarantee that a scroll event is raised. You never know, the web site you're trying to scrape data from may itself be buggy.
Once the "That's all folks" text has appeared, we can go ahead and get the results element's Text property and convert it to a string array as before.
// Assert - we expect 1001 because of the extra "that's all folks"
Assert.Equal(1001, actualResults.Length);
This is the bit that won't be in your code. Because I'm scraping a web site which is under my control, I know exactly how much data it should be displaying so I can check that I've got all the data, and therefore that my scraping code is working correctly.
Further reading
Absolute beginner's introduction to Selenium: https://www.guru99.com/selenium-csharp-tutorial.html
(A curiosity in that article is the way that it starts by creating a console application project and later changes its output type to class library and manually adds the unit test packages, when the project could have been created using one of Visual Studio's unit test project templates. It gets to the right place in the end, albeit via a rather odd route.)
Selenium documentation: https://www.selenium.dev/documentation/
Happy scraping!
If you need to fully execute the web page, then a complete browser like CefSharp is your only option.
It could be that the page is using some combination of scroll state, element visibility, or element positions to trigger content loading. If that's the case, then you'll need to figure out what it is and trigger it programmatically. I know that CefSharp can simulate user actions like clicking, scrolling, etc.

Not able to access public property from ts file in Html file

I need to get some data from a service and display it in HTML. I put the API call in the service, and I get the data in the ts file (checked in the console), but when I try to get the same data into the HTML, it shows a null reference error. I couldn't figure out what I missed.
export class SsoComponent implements OnInit {
    public samlResponseData: SamlResponse;

    constructor(
        private ssoService: SsoService,
        private store: Store<AppState>) { }

    ngOnInit() {
        this.verifySessionExpiration();
    }

    public verifySessionExpiration() {
        this.store.pipe(select(getAuthData))
            .subscribe(authData => {
                if (authData) {
                    this.ssoService.fetchSamlResponse()
                        .subscribe(samlResponse => {
                            console.log(samlResponse);
                            this.samlResponseData = samlResponse;
                        });
                } else {
                    this.ssoService.goLogin();
                }
            });
    }
}
I am seeing the correct response in console. This is my code in HTML.
{{samlResponseData.ResponseData}}
I am getting a console error saying, "Unable to set property 'ResponseData' of undefined or null reference"
I have a model SamlResponse with a string property that I want to show in the HTML.
Please help.
The logic you have in verifySessionExpiration() is asynchronous. You are getting that error because your template is trying to access samlResponseData before it has a value.
One way to fix it would be to only render the data when you know the value isn't null or undefined.
<ng-container *ngIf="samlResponseData">
    {{samlResponseData.ResponseData}}
</ng-container>
Another option would be to initialize samlResponseData to an empty object (with strict type checking you may need a cast to satisfy the compiler):
samlResponseData = {} as SamlResponse;

WPF Awesomium - custom ResourceInterceptor wont allow local files to be loaded from disk

I have added my own custom ResourceInterceptor to Awesomium's WebCore. I use it to load a local HTML file, manipulate it and then return it.
However, this HTML file references other files on disk, and I always get an error, something like:
Not allowed to load local resource: file:///C:/awesomium project/bin/Debug/Resources/Html/css/myCss.css
This happens whether I return a file or a byte[] as my ResourceResponse. I have the WebPreferences set up as follows
return new WebPreferences()
{
    UniversalAccessFromFileURL = true,
    FileAccessFromFileURL = true,
    SmoothScrolling = true,
};
I had this working using Awesomium's custom data sources, but I need to provide our own prefix, i.e. I can't use asset://. I'm not sure why this isn't working from the IResourceInterceptor. Any ideas?
Here is my implementation of the resource interceptor:
public bool OnFilterNavigation(NavigationRequest request)
{
    return false; // not handled
}

public ResourceResponse OnRequest(ResourceRequest request)
{
    var path = request.Url.ToString();
    ResourceResponse response = null;
    if (path.StartsWith(@"myCustomPrefix://"))
    {
        response = _firstHandler.HandleRequest(path.Replace(@"myCustomPrefix://", string.Empty));
    }

    return response;
}
Edit: _firstHandler is a chain-of-responsibility pattern; I could potentially do many things with the request, so I have a handler for each one. I would like to stress that the handlers work: if I set them to create a file on disk, they create that file, and if I load the HTML file from disk directly in the Awesomium browser it is loaded correctly. It is only when I return it as a ResourceResponse(filepath) or ResourceResponse(byte[], intPtr, mimetype) that it says it can't load the other files the HTML references locally.
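One direction worth sketching, purely as an untested suggestion: if the manipulated HTML references its assets with relative paths, they should resolve under myCustomPrefix:// rather than file:///, and the interceptor can then serve those requests itself. Here diskRoot is a placeholder for wherever the assets live:

public ResourceResponse OnRequest(ResourceRequest request)
{
    var url = request.Url.ToString();
    if (!url.StartsWith(@"myCustomPrefix://"))
        return null; // not handled; let Awesomium deal with it

    // Map the virtual path to a real file on disk so css/js/images load
    // through the same prefix and the renderer never sees file:/// URLs.
    var relativePath = url.Substring(@"myCustomPrefix://".Length).Replace('/', '\\');
    var diskPath = System.IO.Path.Combine(diskRoot, relativePath);
    return System.IO.File.Exists(diskPath) ? new ResourceResponse(diskPath) : null;
}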

Xilium.CefGlue how to execute JavaScript with return Value?

I'm quite new to programming with multi-threading, and I could not understand from the Xilium example how I could execute JavaScript and get the return value.
I have tested:
browser.GetMainFrame().ExecuteJavaScript("SetContent('my Text.')", null, 0);
The JavaScript is executed, but this function doesn't allow me to get the return value.
I need to execute the following function to get all the text the user has written in the box:
browser.GetMainFrame().ExecuteJavaScript("getContent('')", null, 0);
The function TryEval should do this:
browser.GetMainFrame().V8Context.TryEval("GetDirtyFlag", out returninformation, out exx);
But this function can't be called from the browser; I think it must be called from the renderer? How can I do so?
I couldn't understand the explanations about CefRenderProcessHandler and OnProcessMessageReceived. How do I register a scriptable object and set my JavaScript & parameters?
Thanks for any suggestions on how I could solve this!
I have been struggling with this as well. I do not think there is a way to do this synchronously...or easily :)
Perhaps what can be done is this:
From the browser, do sendProcessMessage with all the JS information to the renderer process. You can pass all kinds of parameters to this call in a structured way, so encapsulating the JS method name and params in order should not be difficult to do.
In the renderer process (the RenderProcessHandler's OnProcessMessageReceived method), do TryEval on the V8Context, get the return value via the out parameters, and sendProcessMessage back to the browser process with the JS return value. (Note that this supports ordinary return semantics from your JS method.) You get the browser instance reference in OnProcessMessageReceived, so it is as easy as this (mixed pseudo code):
browser.GetMainFrame().CefV8Context.tryEval(js-code, out retValue, out exception);
process retValue;
browser.sendProcessMessage(...);
The browser will get a callback in the WebClient in OnProcessMessageReceived.
There is nothing special here in terms of setting up the JS. I have, for example, a loaded HTML page with a JS function in it; it takes a param as input and returns a string. In the js-code parameter to TryEval I simply provide this value:
"myJSFunctionName('here I am - input param')"
It is slightly convoluted but seems like a neat, workable approach - better than doing ExecuteJavaScript and posting results via XHR on a custom handler, in my view.
I tried this and it does work quite well indeed... and is not bad, as it is all non-blocking. The wiring in the browser process needs to be done to process the response properly.
This can be extended and built into a set of classes to abstract this out for all kinds of calls.
Take a look at the Xilium demo app. Most of the necessary wiring is already there for onProcessMessage - do a global search. Look for:
DemoRendererProcessHandler.cs - renderer side; this is where you will invoke TryEval
DemoApp.cs - browser side; look for sendProcessMessage - this will initiate your JS invocation process
WebClient.cs - browser side; here you receive messages from the renderer with the return value from your JS
Cheers.
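To make that concrete, here is a rough C# sketch of both sides. The CefGlue signatures vary a little between versions (newer builds, for example, add scriptUrl/startLine parameters to TryEval), and the message names "evalJs"/"evalJsResult" are arbitrary, so treat this as a starting point rather than drop-in code:

// Browser process: ask the renderer to run some JS.
var msg = CefProcessMessage.Create("evalJs");
msg.Arguments.SetString(0, "myJSFunctionName('here I am - input param')");
browser.SendProcessMessage(CefProcessId.Renderer, msg);

// Renderer process: evaluate the JS and post the result back.
internal sealed class MyRenderProcessHandler : CefRenderProcessHandler
{
    protected override bool OnProcessMessageReceived(
        CefBrowser browser, CefProcessId sourceProcess, CefProcessMessage message)
    {
        if (message.Name != "evalJs")
            return false;

        CefV8Value retValue;
        CefV8Exception exception;
        if (browser.GetMainFrame().V8Context.TryEval(
                message.Arguments.GetString(0), out retValue, out exception))
        {
            var reply = CefProcessMessage.Create("evalJsResult");
            reply.Arguments.SetString(0, retValue.GetStringValue());
            // Lands in the WebClient's OnProcessMessageReceived in the browser process.
            browser.SendProcessMessage(CefProcessId.Browser, reply);
        }

        return true;
    }
}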
I resolved this problem by returning the result value from my JavaScript function back to Xilium host application via an ajax call to a custom scheme handler. According to Xilium's author fddima it is the easiest way to do IPC.
You can find an example of how to implement a scheme handler in the Xilium's demo app.
Check out this post: https://groups.google.com/forum/#!topic/cefglue/CziVAo8Ojg4
using System;
using System.Windows.Forms;
using Xilium.CefGlue;
using Xilium.CefGlue.WindowsForms;

namespace CefGlue3
{
    public partial class Form1 : Form
    {
        private CefWebBrowser browser;

        public Form1()
        {
            InitializeCef();
            InitializeComponent();
        }

        private static void InitializeCef()
        {
            CefRuntime.Load();
            CefMainArgs cefArgs = new CefMainArgs(new[] { "--force-renderer-accessibility" });
            CefApplication cefApp = new CefApplication();
            CefRuntime.ExecuteProcess(cefArgs, cefApp);

            CefSettings cefSettings = new CefSettings
            {
                SingleProcess = false,
                MultiThreadedMessageLoop = true,
                LogSeverity = CefLogSeverity.ErrorReport,
                LogFile = "CefGlue.log",
            };
            CefRuntime.Initialize(cefArgs, cefSettings, cefApp);
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            browser = new CefWebBrowser
            {
                Visible = true,
                //StartUrl = "http://www.google.com",
                Dock = DockStyle.Fill,
                Parent = this
            };
            Controls.Add(browser);
            browser.BrowserCreated += BrowserOnBrowserCreated;
        }

        private void BrowserOnBrowserCreated(object sender, EventArgs eventArgs)
        {
            browser.Browser.GetMainFrame().LoadUrl("http://www.google.com");
        }
    }
}
using Xilium.CefGlue;

namespace CefGlue3
{
    internal sealed class CefApplication : CefApp
    {
        protected override CefRenderProcessHandler GetRenderProcessHandler()
        {
            return new RenderProcessHandler();
        }
    }

    internal sealed class RenderProcessHandler : CefRenderProcessHandler
    {
        protected override void OnWebKitInitialized()
        {
            CefRuntime.RegisterExtension("testExtension", "var test;if (!test)test = {};(function() {test.myval = 'My Value!';})();", null);
            base.OnWebKitInitialized();
        }
    }
}
