When I call NotesDXLExporterClass.Export on a NotesDocumentClass object that has a very large attachment, I get a System.Runtime.InteropServices.COMException: 'Out of memory' exception.
I was hoping to resolve this by setting NotesDXLExporter.OmitRichtextAttachments to true, but it looks like this property is not available through COM (1).
What are my options here to get around this issue?
(1) Differences between accessing Domino Objects through either LotusScript or COM
Note 4: NotesXMLProcessor is not implemented in COM. NotesDXLExporter and NotesDXLImporter implement ExitOnFirstFatalError, Log, and LogComment, rather than inheriting them.
Edit:
When I open C:\Program Files (x86)\IBM\Lotus\Notes\domobj.tlb in Oleview.exe and look at the NotesDXLExporterClass interface I only see the following:
[
  uuid(29131437-2EED-1069-BF5D-00DD011186B7)
]
dispinterface NOTESDXLEXPORTER {
    properties:
        [id(0x00000bf6)] VARIANT FORCENOTEFORMAT;
        [id(0x00000bfa)] VARIANT OUTPUTDOCTYPE;
        [id(0x00000bfb)] BSTR DOCTYPESYSTEM;
        [id(0x00000f1e), readonly] BSTR LOG;
        [id(0x00000f1f)] BSTR LOGCOMMENT;
        [id(0x00000f20)] VARIANT EXITONFIRSTFATALERROR;
    methods:
        [id(0x00000f28)] void SETINPUT(VARIANT INPUT);
        [id(0x00000f29)] void SETOUTPUT(VARIANT OUTPUT);
        [id(0x00000f2a)] void PROCESS();
};
The document to which you've linked (and also my local Notes Help) doesn't say that NotesDXLExporter.OmitRichtextAttachments isn't available in COM. Did you try using that property and get an error?
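One quick way to check is a late-bound call, which goes through IDispatch rather than the type library. A rough sketch follows; whether the dispatch implementation accepts the property name is exactly what you'd be testing, so treat the assignment as the experiment, not as a known-good call:

dynamic session = Activator.CreateInstance(Type.GetTypeFromProgID("Lotus.NotesSession"));
session.Initialize("password");                  // your Notes ID password
dynamic exporter = session.CreateDXLExporter();

try
{
    // Property name as documented for LotusScript.
    exporter.OmitRichtextAttachments = true;
    Console.WriteLine("Property accepted via IDispatch");
}
catch (Exception ex)
{
    Console.WriteLine("Not available: " + ex.Message);
}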
If NotesDXLExporter.OmitRichtextAttachments isn't available, are you able to develop an agent in the relevant Domino database (or in another database created for this purpose) that acts as a go-between?
I'm thinking an agent could take the note id of the target document via NotesAgent.Run, and export that document's DXL to a field (which might have to be rich text if the DXL is more than 32kB) in another temporary document. Your code should call that agent via COM, get the resulting temporary document, read the DXL from its field, then delete the temporary document.
This seems overly complex, but it's the only solution that occurs to me.
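A rough sketch of what the COM side of that could look like; the agent name, database path, field name, and the way the temporary document's note id is handed back are all assumptions, since they depend on how you write the agent:

string noteId = "...";                                // note id of the document to export

dynamic session = Activator.CreateInstance(Type.GetTypeFromProgID("Lotus.NotesSession"));
session.Initialize("password");                       // your Notes ID password

dynamic db = session.GetDatabase("Server/Org", "app.nsf", false);
dynamic agent = db.GetAgent("ExportDocAsDXL");        // hypothetical go-between agent
agent.Run(noteId);                                    // agent exports the DXL into a temp document

// However the agent reports the temporary document back to you, look it up,
// read the DXL from its field, then delete it.
// (If the field has to be rich text because of the 32kB limit, reading it
// will need more work than a plain GetItemValue.)
string tempNoteId = "...";
dynamic tempDoc = db.GetDocumentByID(tempNoteId);
string dxl = (string)tempDoc.GetItemValue("DXLOutput")[0];   // hypothetical field name
tempDoc.Remove(true);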
Try running your code as a LotusScript agent.
If it fails, the problem is inherent in the classes, not in the COM implementation, and there's not going to be much you can do other than trying a more up-to-date version of Notes/Domino.
If it works, a potential workaround would be to have your COM code invoke a LotusScript agent on the server to do this part of the work.
Related
We have a running Redoc server, which includes a bunch of yaml files with API specifications. However, a couple of necessary yaml files are not on the local machine (let's call it RedocServer).
Those remote files are accessible through an aspnet-webapi service (WebApiServer).
So, let's say, to get one of those files, we use a reference in the index.yaml file:
paths:
  /api/1:
    $ref: "https://some-address/ApiDoc.yaml"
If ApiDoc.yaml itself has no references, there is no problem for WebApiServer to simply return a string using a method like this:
[HttpGet]
public string GetApiDoc()
{
    var directoryPath = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
    var filePath = Path.Combine(directoryPath, "ApiDoc.yaml");
    return File.ReadAllText(filePath);
}
However, in our case, that ApiDoc.yaml has some huge, nested references to other files inside it. Something like this, where the referenced objects themselves contain further references:
post:
  tags:
    - Test
  summary: Test
  operationId: Test
  consumes:
    - application/json
  produces:
    - application/json
  requestBody:
    content:
      application/json:
        schema:
          $ref: "../ApiDoc2.yaml#/components/schemas/ApiRequest"
  responses:
    200:
      description: OK
      content:
        application/json:
          schema:
            $ref: "../ApiDoc3.yaml#/components/schemas/ApiResponse"
If WebApiServer returns a string like that, RedocServer will probably just try to resolve these references against files on RedocServer. But we obviously want to make sure that the references are resolved on the WebApiServer side.
So, the question is: how do we properly return that ApiDoc.yaml without breaking any references?
We cannot resolve the references manually because the objects are huge and deeply nested. OpenApi.net, which we tried to use, still isn't able to resolve remote references automatically, and also doesn't seem to be able to work with files that lack the "info" and "openapi: 3.0.0" parts.
As it turns out, Redoc automatically resolves remote references, rewriting the local paths in the refs against the remote URL.
So, simply put: you can just return a string or file like that, and everything should work fine.
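If you want to be explicit about the response, here is a minimal sketch of the same action returning the raw YAML with a content type. This assumes ASP.NET Core; the "text/yaml" content type and the controller shape are assumptions, not taken from the original code:

using System.Reflection;
using Microsoft.AspNetCore.Mvc;

[HttpGet]
public IActionResult GetApiDoc()
{
    var directoryPath = System.IO.Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
    var filePath = System.IO.Path.Combine(directoryPath, "ApiDoc.yaml");

    // Return the raw YAML unchanged; Redoc resolves the relative $refs
    // against the URL it fetched this document from, so nothing breaks.
    return Content(System.IO.File.ReadAllText(filePath), "text/yaml");
}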
I am aware of the option to call RFC functions with NCo 3.0, but is it possible to call transactions/programs directly with the SAP Connector? (For example, using the fields defined in SAP as parameters and filling them, or using a variation, something like that?)
This answer provides a workaround that I am aware of, and sure, I could call a VBScript from my C# code, but that is not what I want to do.
I also checked all 64 questions tagged with sap-connector, but none of them gives a direct answer as to whether it is possible or not.
Also, the SAP documentation I got from the SAP Marketplace doesn't mention transactions/programs at all. Does this mean it is not intended/possible?
If so, why is it possible to do it with macros/pre-recorded VBScripts but not with the .NET Connector? Or am I just doing something wrong?
When I try to call a program/transaction with the standard code:
SAPHandle.ECCDestinationConfig cfg = new SAPHandle.ECCDestinationConfig();
RfcDestinationManager.RegisterDestinationConfiguration(cfg);
RfcDestination dest = RfcDestinationManager.GetDestination("QP2");
dest.Ping(); //works fine -> Connection is OK
RfcRepository repo = dest.Repository;
IRfcFunction zzmkalzzm23fnc = repo.CreateFunction("ZMZKALZZM23");
it gives me the following (expected) error:
metadata for function ZMZKALZZM23 not available: FU_NOT_FOUND:
function module ZMZKALZZM23 is not available
CreateFunction, as the name already suggests, creates a proxy to call a remote-enabled function module in the SAP system. You can't call a transaction or program this way. I am not aware of any way to call a report with SAP .Net Connector. The solution you linked uses SAP Gui, which provides the SAP system with a UI to display graphical elements. AFAIK, SAP NCo doesn't provide such an interface and you can't call reports from NCo.
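For contrast, here is a minimal sketch of what NCo is designed for: calling a remote-enabled function module. BAPI_COMPANYCODE_GETLIST is just an arbitrary standard BAPI used as an example here; substitute an RFC-enabled module of your own:

// Assumes the destination "QP2" from your code is already registered.
RfcDestination dest = RfcDestinationManager.GetDestination("QP2");
RfcRepository repo = dest.Repository;

IRfcFunction fn = repo.CreateFunction("BAPI_COMPANYCODE_GETLIST");
fn.Invoke(dest);

// Read the result table returned by the function module.
IRfcTable companies = fn.GetTable("COMPANYCODE_LIST");
foreach (IRfcStructure row in companies)
{
    Console.WriteLine(row.GetString("COMP_CODE") + " " + row.GetString("COMP_NAME"));
}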
However, there are products that allow you to execute transactions and catch their output. We are using the product Theobald Xtract to extract SAP ERP data for BI purposes, but they also have a more generic .Net library (Theobald ERPConnect) available that may be able to provide this functionality. It won't be as simple as calling a function and extracting the strongly typed data, but with some filtering you should be able to get the output you need. Those products are not cheap, but they do provide a nice set of functionality you otherwise would have to reinvent yourself.
Some example code showing how you could call the transaction you ended up calling through VBScript.
From the Theobald ERPConnect Knowledgebase:
private void button1_Click(object sender, System.EventArgs e)
{
    // transaction1 is presumably an ERPConnect.Utils.Transaction component
    // (e.g. placed on the form via the designer).

    // Reset the batch steps
    transaction1.BatchSteps.Clear();

    // fill new steps
    transaction1.ExecutionMode = ERPConnect.Utils.TransactionDialogMode.ShowOnlyErrors;
    transaction1.TCode = "MMBE";
    transaction1.AddStepSetNewDynpro("RMMMBEST", "1000");
    transaction1.AddStepSetOKCode("ONLI");
    transaction1.AddStepSetCursor("MS_WERKS-LOW");
    transaction1.AddStepSetField("MS_MATNR-LOW", textBox1.Text);
    transaction1.AddStepSetField("MS_WERKS-LOW", textBox2.Text);

    // connect to SAP (declare the connection before using it)
    R3Connection r3Connection1 = new R3Connection("SAPServer", 00, "SAPUser", "Password", "EN", "800");
    r3Connection1.UseGui = true;
    r3Connection1.Open(false);

    // Run
    transaction1.Execute();
}
I am not a C# guy. Needless to say, I don't have experience on this topic.
I bought some software and installed it on my computer. Now I would like to use some of its functions in my own software (which I plan to write in C#). So I messaged the person who sold me the software, and he sent me a two-page PDF file explaining what to do.
It states:
This software features a COM interface.
It goes on to say that its API contains a function "stackAPI",
and that the parameters used are apiname (type string) and apipass (type string).
The return value is of type long: 0 for success and 1 for error.
That's all it states. I tried searching Google, but it could not help me at all. So how do I start?
When I write the following code in C#, it gives me an error:
string[] apiname;
string[] apipass;
stackAPI(apiname, apipass);
I know if I were using a DLL I would import it as
[DllImport("example.dll")]
But no DLL is provided.
Do I need to add the path to the folder where the software is installed in order to call the API?
To get started, add a reference to example.dll to your project and import its namespace:
using example;
Then in your main class:
example.CustomService api = new example.CustomService();
var response = api.Dostuff();
Console.WriteLine(response);
In case anyone wants to know how I did it: after a week of searching I was able to find the solution today. I wanted to call the function stackAPI(apiname, apipass).
stackAPI(apiname, apipass) is a member of the COM class registered under the ProgID stack.callapi.
So I wrote:
dynamic lifestohack = Activator.CreateInstance(Type.GetTypeFromProgID("stack.callapi"));
Then you can just call the function like this:
int rv;
string apiname = "...";   // the API name from the vendor's documentation
string apipass = "...";   // the API password
rv = lifestohack.stackAPI(apiname, apipass);
Hope it may help someone.
My question is related to the C# implementation of Google Protocol Buffers (protobuf-csharp-port, by Jon Skeet, great job!).
I am experiencing trouble with extensions. Let's say:
I wrote "transport_file.proto" with a "transport" message and some code to deal with it ("code_old"),
and I wrote an extension of the transport message in the "Mytransport.proto" file, and new code to read it ("code_new").
I'm trying to read a new message (from MyTransport.proto) with code_old, expecting it to ignore the extension, but I get an exception in the Merge method of TextFormat: "transport" has no field named "whatever_new_field"
Transport.Builder myAppConfigB = new Transport.Builder();
System.IO.StreamReader fich = System.IO.File.OpenText("protocolBus.App.cfg");
TextFormat.Merge(fich.ReadToEnd(),myAppConfigB);
fich.Close();
The new extended file looks like:
...
Transport
{
    TransportName: "K6Server_0"
    DllImport: "protocolBus.Transports.CentralServer"
    TransportClass: "K6Server"
    K6ServerParams
    {
        K6Server { host: "85.51.11.23" port: 40069 }
        Service: "TZinTalk"
        ...
    }
}
...
while the old one, not extended:
...
Transport
{
    TransportName: "K6Server_0"
    DllImport: "Default"
    TransportClass: "Multicast"
}
...
The whole idea is to use the text-based protocol buffer as a config file in which I write some params; based on one of those I load an assembly, which then reads the whole message with the new extension (the params to initialize the object).
Any idea? (it is a desperate question :D )
I'm using MSVC# 2008 Express edition, protobuf-csharp-port version 0.9.1 (someday I'll upgrade everything).
THANKS in advance.
I'm working on a non-centralized publish-subscribe messaging framework (for any message written in a proto file I auto-create a Publisher and a Subscriber class) with different transports. By default I use multicast, but broadcast and a "UDP star" are also included. I use the extension mechanism to let people add new transports with their own config params, which should be read by my main code_old (just to load the assembly) and then read again (fully) by the new transport (.dll).
Curious? The previous, almost functional, version is at http://protocolbus.casessite.org
Update 1
Extended types in text format are enclosed in brackets (good to know, I was not aware of it :D ) so I should have written:
[K6ServerParams]
{
    K6Server { host: "85.51.11.23" port: 40069 }
    Service: "TZinTalk"
    ...
}
Protocol buffers are designed to be backwards and forwards compatible when using their binary format, but certainly the current code doesn't expect to parse the text format with unknown fields. It could potentially be changed to do that, but I'd want to check with the Java code to try to retain parity with that.
Is there any reason you're not using the binary representation to start with? That's the normal intended usage, and the one where the vast majority of the work has gone in. (Having said which, it all seems a bit of a blur after this long away from the code...)
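To illustrate the binary route, here is a minimal sketch using the generated classes from protobuf-csharp-port (the file name is just an example); in the binary format, fields that code_old doesn't know about are carried along as unknown fields instead of causing a parse failure:

// In code_new: write the config in binary form.
using (var output = System.IO.File.Create("protocolBus.App.cfg.bin"))
{
    myAppConfig.WriteTo(output);   // myAppConfig is a built Transport message
}

// In code_old: read it back; unknown extension fields are preserved, not rejected.
Transport transport;
using (var input = System.IO.File.OpenRead("protocolBus.App.cfg.bin"))
{
    transport = Transport.ParseFrom(input);
}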
Mode: http://www.emacswiki.org/emacs/CSharpMode
log:
Loading /.emacs.d/contrib/dev/csharp-mode.el
Done loading /.emacs.d/contrib/dev/csharp-mode.el
File mode specification error: (void-function make-local-hook)
Loading vc-git...done
When done with a buffer, type C-x #
(No files need saving)
File mode specification error: (void-function make-local-hook)
When done with a buffer, type C-x #
Making completion list... [2 times]
goto-history-element: End of history; no default available [3 times]
or: Symbol's function definition is void: make-local-hook
mouse-minibuffer-check: Minibuffer window is not active
(No files need saving)
When done with a buffer, type C-x #
(No files need saving)
File mode specification error: (void-function make-local-hook)
When done with a buffer, type C-x #
Making completion list... [2 times]
or: Symbol's function definition is void: make-local-hook
Why is that, and how can I fix it?
make-local-hook has been obsolete for years, and was removed entirely in Emacs 24.
You should try to locate an updated version of the library. According to the Wiki page you linked to, the latest version is here:
http://code.google.com/p/csharpmode/
Failing that, there's a pretty good chance that the code only includes those function calls to retain backwards-compatibility with Emacs 20, and that provided there is an appropriate call to add-hook present, all you would need to do is delete all instances of (make-local-hook HOOK) from the code.
Here are the relevant bits of its old docstring:
(make-local-hook HOOK)
This function is obsolete since 21.1;
not necessary any more.
Make the hook HOOK local to the current buffer.
The return value is HOOK.
You never need to call this function now that `add-hook' does it for you
if its LOCAL argument is non-nil.
See also C-h f add-hook RET