CobaltCore assembly - C#

I am trying to implement a custom WOPI host in C# that can handle the Cobalt protocol using the CobaltCore assembly, but I haven't found any documentation for CobaltCore.dll. The Object Browser is only a little helpful.
If someone has run into a similar issue, please share some details. How should I use Cobalt to decipher the messages?

For a Word editing implementation, go here:
Can I just use Office Web Apps Server
// FSSHTTPB payload, decoded from its Base64-encoded form
byte[] test1 = System.Convert.FromBase64String("DAALAJzPKfM5lAabBgIAAO4CAABaBBYADW1zd29yZAd3YWN6AggA1RyhD3cBFgIGAAMFABoEIAAL3Do4buY4RJXm4575cgEiigICAAALAawCAFUDAQ==");
// create an atom object from the fsshttp input
AtomFromByteArray atomRequest = new AtomFromByteArray(test1);
RequestBatch requestBatch = new RequestBatch();
requestBatch.DeserializeInputFromProtocol(atomRequest);
// now you can inspect requestBatch to view the decoded objects
Edit:
Here is a sample implementation using CobaltCore; it is pretty much a combination of my answers about WOPI/FSSHTTP on this site, collected in one project.
https://github.com/thebitllc/WopiBasicEditor

I am also implementing the Cobalt approach to edits and, like Julia, it stops at a "can't edit" screen, even after the locking-store callbacks, including co-authoring, are in place.
What I have found, however, is that the OWA logging system reveals considerable detail about what the OWA server is attempting to do:
C:\ProgramData\Microsoft\OfficeWebApps\Data\Logs\ULS
I can see from these logs that it complains about a missing access token; by appending
&access_token=1&access_token_ttl=0
to the end of the WOPI URL, this error goes away.
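For illustration only, this is roughly what appending those parameters to the OWA action URL might look like on the host side; the host names, paths, and file name here are placeholders, not taken from the original post.
// Hypothetical example: append the dummy token parameters when building the iframe/action URL.
string wopiSrc = Uri.EscapeDataString("http://wopihost/wopi/files/document.docx");
string editUrl = "http://owaserver/we/wordeditorframe.aspx"
                 + "?WOPISrc=" + wopiSrc
                 + "&access_token=1&access_token_ttl=0";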
I also tested many of the CheckFileInfo fields and was able to see how the OWA server caches information. If we keep changing cfi.Version, for example
FileInfo info = new FileInfo("C:\\WOPI OWA WORD EDITOR\\OWA_Source_Documents\\" + fi.Name);
cfi.Version = info.LastWriteTimeUtc.ToString("s");
we get a fresh cached item each time we change the file's contents via normal Word.
These fields also affect the View mode for Word, and I suspect they will lock us out of Word edit mode, but since I don't have that working yet I can't tell.
cfi.SupportsCoauth = true;   // all three (3) needed to see the edit in browser menu in view mode
cfi.SupportsCobalt = true;   // all three (3) needed to see the edit in browser menu in view mode
cfi.SupportsFolders = true;  // all three (3) needed to see the edit in browser menu in view mode
cfi.SupportsLocks = true;
cfi.SupportsScenarioLinks = false;
cfi.SupportsSecureStore = true;
cfi.SupportsUpdate = true;
This one locks out the Word edit function, and unless you update the version of the file it will stay locked even if you change it back to false:
cfi.WebEditingDisabled = false;
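Putting those observations together, a sketch of what re-enabling editing might look like (based purely on the behavior described above, not verified code):
// Re-enable web editing and bump the version so OWA refreshes its cached CheckFileInfo entry.
FileInfo info = new FileInfo("C:\\WOPI OWA WORD EDITOR\\OWA_Source_Documents\\" + fi.Name);
cfi.WebEditingDisabled = false;
cfi.Version = info.LastWriteTimeUtc.ToString("s");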
Roger Hogg

Thanks to thebitllc for the correct approach to getting the file back:
using (var _FileStream = new System.IO.FileStream("C:\\WOPI OWA WORD EDITOR\\OWA_Updated_Documents\\output.docx", System.IO.FileMode.Create, System.IO.FileAccess.Write))
{
    // get the full file content back from the Cobalt endpoint and write it to disk
    GenericFdaStream myCobaltStream = new GenericFda(cobaltFile.CobaltEndpoint, null).GetContentStream();
    myCobaltStream.CopyTo(_FileStream);
}


Data import via Management API successful, but data for custom dimensions does not show

I am trying to import data for custom dimension in Google Analytics through the .NET client library. In Google Analytics, when I view the uploads for a data set from Admin > Data Import > Manage Uploads, it says my uploads are successful, but the data for the custom dimension doesn't seem to show up in my report. Right now, I am just using my custom dimension to set the category for an article.
Here is how I am uploading through the .NET client library:
string accountId = "***";
string webPropertyId = "***";
string customDataSourceId = "***";
string contentType = "application/octet-stream";

IUploadProgress progress;
using (var dataStream = CreateArticleCsvStream(articles))
{
    // write a copy of the CSV to disk so the generated content can be inspected
    var fs = File.Create("test.csv");
    dataStream.CopyTo(fs);
    fs.Close();

    // note: depending on how CreateArticleCsvStream builds the stream, it may need to be
    // rewound here (dataStream.Position = 0) so the upload does not start at the end of the data
    progress = service.Management.Uploads.UploadData(accountId, webPropertyId, customDataSourceId, dataStream, contentType).Upload();
}

if (progress.Status == UploadStatus.Failed)
{
    throw progress.Exception;
}
Here is the output for test.csv
ga:pagePath,ga:dimension1
/path/to/page/,"MyCategory"
When I download the file from the data set, I get the same file as test.csv; it just has a random filename assigned to it.
I found this other question similar to mine, but there was no solution posted. Any help would be appreciated.
I have also waited over 24 hours, but still nothing.
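(The CreateArticleCsvStream helper is not shown in the question. For context, a minimal sketch of what such a helper might look like, assuming the usual System.IO and System.Collections.Generic usings; the Article type and its PagePath/Category properties are illustrative, not from the original post.)
// Hypothetical helper: builds the ga:pagePath,ga:dimension1 CSV in memory.
private static Stream CreateArticleCsvStream(IEnumerable<Article> articles)
{
    var stream = new MemoryStream();
    var writer = new StreamWriter(stream);

    writer.WriteLine("ga:pagePath,ga:dimension1");
    foreach (var article in articles)
    {
        // the pagePath key must match what Google Analytics actually stored for the hit
        writer.WriteLine("{0},\"{1}\"", article.PagePath, article.Category);
    }

    writer.Flush();
    stream.Position = 0; // rewind so the caller reads from the beginning
    return stream;
}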
It took a few days of trial and error but I finally found the solution.
First thing to check is that your Website's URL is correct under Admin > View Settings. We had ours set up like my.domain.com/path/to/site when it should have just been my.domain.com. (We are using SharePoint, which is why path/to/site was appended to the site URL)
Second thing to check is that your key/pagePath entries are all correct. In our case, we had an extra forward slash at the end of the URL. For some reason, Google Analytics displays the trailing forward slash in reports, but does not actually store it for the pagePath.
Another error may be capitalization. It seems like GA applies filters after the data has been processed. If you add the lowercase/uppercase filter, notice that it only affects how the URLs display in your reports. Behind the scenes, it seems that GA still stores the URL with whatever capitalization the hit initially came in with. For example if the URL on your site is my.domain.com/path/to/PAGE.aspx and you apply the lowercase filter, the pagePath will display in your reports as /path/to/page.aspx. But, if you use the lowercase value in your csv import, the data will not join. You must use the pagePath that appears on your site (/path/to/PAGE.aspx in this case).
It would be nice if Google gave some log files when it tries to process and join the uploaded data with the existing data, rather than just saying the upload was successful even though the processing/joining stage may fail.

What could cause an XML file to be filled with null characters?

This is a tricky question. I suspect it will require some advanced knowledge of file systems to answer.
I have a WPF application, "App1," targeting .NET framework 4.0. It has a Settings.settings file that generates a standard App1.exe.config file where default settings are stored. When the user modifies settings, the modifications go in AppData\Roaming\MyCompany\App1\X.X.0.0\user.config. This is all standard .NET behavior. However, on occasion, we've discovered that the user.config file on a customer's machine isn't what it's supposed to be, which causes the application to crash.
The problem looks like this: user.config is about the size it should be if it were filled with XML, but instead of XML it's just a bunch of NUL characters. It's character 0 repeated over and over again. We have no information about what had occurred leading up to this file modification.
We can fix that problem on a customer's device if we just delete user.config because the Common Language Runtime will just generate a new one. They'll lose the changes they've made to the settings, but the changes can be made again.
However, I've encountered this problem in another WPF application, "App2," with another XML file, info.xml. This time it's different because the file is generated by my own code rather than by the CLR. The common themes are that both are C# WPF applications, both are XML files, and in both cases we are completely unable to reproduce the problem in our testing. Could this have something to do with the way C# applications interact with XML files or files in general?
Not only can we not reproduce the problem in our current applications, but I can't even reproduce the problem by writing custom code that generates errors on purpose. I can't find a single XML serialization error or file access error that results in a file that's filled with nulls. So what could be going on?
App1 accesses user.config by calling Upgrade() and Save() and by getting and setting the properties. For example:
if (Settings.Default.UpgradeRequired)
{
    Settings.Default.Upgrade();
    Settings.Default.UpgradeRequired = false;
    Settings.Default.Save();
}
App2 accesses info.xml by serializing and deserializing the XML:
public Info Deserialize(string xmlFile)
{
    if (File.Exists(xmlFile) == false)
    {
        return null;
    }

    XmlSerializer xmlReadSerializer = new XmlSerializer(typeof(Info));
    Info overview = null;

    using (StreamReader file = new StreamReader(xmlFile))
    {
        overview = (Info)xmlReadSerializer.Deserialize(file);
        file.Close();
    }

    return overview;
}

public void Serialize(Info infoObject, string fileName)
{
    XmlSerializer writer = new XmlSerializer(typeof(Info));

    using (StreamWriter fileWrite = new StreamWriter(fileName))
    {
        writer.Serialize(fileWrite, infoObject);
        fileWrite.Close();
    }
}
We've encountered the problem on both Windows 7 and Windows 10. When researching the problem, I came across this post where the same XML problem was encountered in Windows 8.1: Saved files sometime only contains NUL-characters
Is there something I could change in my code to prevent this, or is the problem too deep within the behavior of .NET?
It seems to me that there are three possibilities:
1) The CLR is writing null characters to the XML files.
2) The file's memory address pointer gets switched to another location without moving the file contents.
3) The file system attempts to move the file to another memory address and the file contents get moved, but the pointer doesn't get updated.
I feel like 2 and 3 are more likely than 1. This is why I said it may require advanced knowledge of file systems.
I would greatly appreciate any information that might help me reproduce, fix, or work around the problem. Thank you!
It's well known that this can happen if there is power loss. It occurs when a cached write that extends a file (new or existing) is followed shortly afterwards by power loss. In this scenario the file has 3 expected possible states when the machine comes back up:
1) The file doesn't exist at all or has its original length, as if the write never happened.
2) The file has the expected length as if the write happened, but the data is zeros.
3) The file has the expected length and the correct data that was written.
State 2 is what you are describing. It occurs because when you do the cached write, NTFS initially just extends the file size accordingly but leaves VDL (valid data length) untouched. Data beyond VDL always reads back as zeros. The data you were intending to write is sitting in memory in the file cache. It will eventually get written to disk, usually within a few seconds, and following that VDL will get advanced on disk to reflect the data written. If power loss occurs before the data is written or before VDL gets increased, you will end up in state 2.
This is fairly easy to repro, for example by copying a file (the copy engine uses cached writes), and then immediately pulling the power plug on your computer.
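This is not part of the answer above, but one common way to narrow that window in application code is to write to a temporary file, flush it through to the physical disk, and then swap it into place. A minimal sketch, assuming .NET 4.0+ and NTFS; the method name and the .tmp/.bak naming are illustrative:
using System.IO;

public static class SafeFile
{
    // Write the contents to a temp file, force it to disk, then atomically replace the target.
    public static void WriteAllTextSafely(string path, string contents)
    {
        string tempPath = path + ".tmp";

        using (var stream = new FileStream(tempPath, FileMode.Create, FileAccess.Write))
        using (var writer = new StreamWriter(stream))
        {
            writer.Write(contents);
            writer.Flush();
            stream.Flush(true); // flush through the OS cache to the disk
        }

        if (File.Exists(path))
        {
            // File.Replace is atomic on NTFS; the previous version is kept as a .bak file
            File.Replace(tempPath, path, path + ".bak");
        }
        else
        {
            File.Move(tempPath, path);
        }
    }
}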
I had a similar problem and was able to trace it to a corrupted HDD.
Description of my setup (all related information):
Disks attached to the mainboard (SATA):
SSD (system),
3 * HDD.
One of the HDDs had bad blocks and there were even problems reading the disk structure (directories and file listings).
Operating system: Windows 7 x64
File system (on all disks): NTFS
When the system tried to read from or write to the corrupted disk (because of a user request, an automatic scan, or any other reason) and the attempt failed, all write operations to the other disks went wrong. Files created on the system disk (mostly configuration files written by other applications) appeared valid when their content was checked directly, probably because they were still cached in RAM.
Unfortunately, after a restart, all the files written after the failed read/write access to the corrupted drive had the correct size, but their content was all zero bytes (exactly like in your case).
Try to rule out hardware-related problems. You can copy the file (after a change) to a different machine (upload it to the web or FTP), or try saving specific known content to a fixed file. If the copy on the other machine is correct, or if the fixed-content file ends up 'empty', the cause is probably on the local machine. Try changing hardware components or reinstalling the system.
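A sketch of that "save specific content to a fixed file" check; the path and marker text are illustrative, not from the original answer:
// After every important save, also write a small file with known content...
const string checkPath = @"C:\Temp\write-check.txt";
const string expected = "WRITE-CHECK-OK";
File.WriteAllText(checkPath, expected);

// ...then verify it later (for example at the next application start).
if (File.Exists(checkPath) && File.ReadAllText(checkPath) != expected)
{
    // The known content did not survive, so suspect the disk or controller
    // rather than the application code that produced the XML.
    Console.WriteLine("Write check failed - possible disk problem");
}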
There is no documented reason for this behavior; it happens to users, but nobody can tell the origin of these odd conditions.
It might be a CLR problem, although that is very unlikely: the CLR doesn't just write null characters, and an XML document cannot contain null characters unless xsi:nil is defined for the nodes.
Anyway, the only documented way to fix this is to delete the corrupted file using code like this:
try
{
    ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.PerUserRoamingAndLocal);
}
catch (ConfigurationErrorsException ex)
{
    string filename = ex.Filename;
    _logger.Error(ex, "Cannot open config file");

    if (File.Exists(filename) == true)
    {
        _logger.Error("Config file {0} content:\n{1}", filename, File.ReadAllText(filename));
        File.Delete(filename);
        _logger.Error("Config file deleted");
        Properties.Settings.Default.Upgrade();
        // Properties.Settings.Default.Reload();
        // you could optionally restart the app instead
    }
    else
    {
        _logger.Error("Config file {0} does not exist", filename);
    }
}
It will restore user.config via Properties.Settings.Default.Upgrade(), this time without null values.
I ran into a similar issue, but on a server: the server restarted while a program was writing to a file, which left the file full of null characters and made it unusable to the program writing to and reading from it. The file's content was nothing but NUL characters, the server logs showed a restart, and the corrupted file's last-modified time matched the time of the restart.
I have the same problem: there is an extra NUL character at the end of the serialized XML file.
I am using XmlWriter like this:
using (var stringWriter = new Utf8StringWriter())
{
    using (var xmlWriter = XmlWriter.Create(stringWriter, new XmlWriterSettings { Indent = true, IndentChars = "\t", NewLineChars = "\r\n", NewLineHandling = NewLineHandling.Replace }))
    {
        xmlSerializer.Serialize(xmlWriter, data, nameSpaces);
        xml = stringWriter.ToString();

        var xmlDocument = new XmlDocument();
        xmlDocument.LoadXml(xml);
        if (removeEmptyNodes)
        {
            RemoveEmptyNodes(xmlDocument);
        }
        xml = xmlDocument.InnerXml;
    }
}

Determine properties such as if PDF is Simplex or Duplex in iTextSharp

I am using iTextSharp for reading and managing PDF documents; things such as stamping overlays for the background, logos, and backers. The PDFs are statement files, so I cannot give an example. I am wondering how to view the settings of the PDF to see if the file is Simplex or Duplex, and that sort of information. Any help or suggestions would be appreciated. At the moment I test for certain criteria on the second page, which is a poor way to do this. Thanks in advance, and happy coding!
The duplex mode is stored in the document's /ViewerPreferences dictionary under the /Duplex key. It supports three values, /DuplexFlipLongEdge, /DuplexFlipShortEdge, and /Simplex. You can use the code below to inspect this:
//Assume false by default since this was introduced in PDF 1.7
Boolean isDuplex = false;

//Bind a reader to our file
using (var r = new PdfReader(testFile)) {
    //Get the view preferences
    var prefs = r.Catalog.GetAsDict(PdfName.VIEWERPREFERENCES);

    //Make sure we found something
    if (prefs != null) {
        //Get the duplex key
        var duplex = prefs.Get(PdfName.DUPLEX);

        //Make sure we got something and it is one of the duplex modes
        isDuplex = (duplex != null && (duplex.Equals(PdfName.DUPLEXFLIPLONGEDGE) || duplex.Equals(PdfName.DUPLEXFLIPSHORTEDGE)));
    }
}
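For completeness, the same /ViewerPreferences entry can also be written rather than read. A sketch, assuming iTextSharp 5.x and that producing a stamped copy of the file is acceptable; the file names are placeholders:
//Sketch only: set the duplex viewer preference on a copy of the document
var reader = new PdfReader(inputFile);
using (var output = new FileStream(outputFile, FileMode.Create, FileAccess.Write)) {
    var stamper = new PdfStamper(reader, output);
    stamper.AddViewerPreference(PdfName.DUPLEX, PdfName.DUPLEXFLIPLONGEDGE);
    stamper.Close();
}
reader.Close();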
I know it's two years later, but I just spent hours searching, found this... and eventually found a workaround.
I created a button that runs the script below, which pops up the print dialog with duplex pre-selected if it is available. Note that selecting another printer erases this pre-selection; also, change "Long" to "Short" if you flip the other way.
var pp = this.getPrintParams();
pp.DuplexType = pp.constants.duplexTypes.DuplexFlipLongEdge;
this.print(pp);

DotNetNuke 7.1 HTML Module converting data:image into URI?

I am unable to use the drag-and-drop functionality within DotNetNuke version 7.1.
The drag-and-drop functionality of the Telerik RadEditor takes the browser's Base64 input and encases it in an img tag where the source is the data. E.g., src="data:image/jpeg;base64,[base64data]".
When using drag/drop to a RadEditor within the HTML Module and then saving the HTML content, that src definition is changed to a URI request by prepending the relative path for the DNN portal. E.g., src="/mysite/portals/0/data:image/jpeg;base64,[base64data]".
This converts what started out as a perfectly valid embedded image tag into a request and thereby causes the browser to request this "image" from the server. The server then returns a 414 error (URI too long).
Example without prepended relative path: http://jsfiddle.net/GGGH/27Tbb/2/
<img src="data:image/jpeg;base64,[stuff]>
Example with prepended relative path (won't display): http://jsfiddle.net/GGGH/NL85G/2/
<img src="mysite/portals/0/data:image/jpeg;base64,[stuff]>
Is there some configuration that I've missed? Prepending relative paths is OK for src="/somephysicalpath" but not for src="data:image...".
I ended up solving the problem prior to posting the question but wanted to add this knowledge to SO in case someone else encountered the same problem (has no one noticed this yet?). Also, perhaps, DNN or the community can improve upon my solution and that fix can make it into a new DNN build.
I've looked at the source code for RadEditor, RadEditorProvider, and finally the HTML module itself. It seems the problem is in EditHtml.ascx.cs: the FormatContent() method calls HtmlTextController's ManageRelativePaths() method, which runs for all "src" (and "background") tokens in the HTML content string. It post-processes the HTML string that comes out of the RadEditor to add in that relative path, which is not appropriate when the content contains an embedded Base64 image that was dragged into the editor.
In order to fix this, and still allow the standard functionality originally intended by the manufacturer, ManageRelativePaths (in the DotNetNuke.Modules.Html project) needs to be modified to make an exception when the URI begins with a "data:image" string. Line 488 (as of version 7.1.0) is an appropriate place. I added the following code, incrementing P as appropriate and positioned after the URI length was determined; I'm sure there's a better way, but this works fine:
// line 483, HtmlTextController.cs, DNN code included for positioning
while (P != -1)
{
    sbBuff.Append(strHTML.Substring(S, P - S + tLen));

    // added code
    bool skipThisToken = false;
    if (strHTML.Substring(P + tLen, 10) == "data:image") // check for base64 image
        skipThisToken = true;
    // end added code - back to standard DNN

    //keep characters left of URL
    S = P + tLen;
    //save startpos of URL
    R = strHTML.IndexOf("\"", S);
    //end of URL
    if (R >= 0)
    {
        strURL = strHTML.Substring(S, R - S).ToLower();
    }
    else
    {
        strURL = strHTML.Substring(S).ToLower();
    }

    // added code to continue the while loop after the integers were updated
    if (skipThisToken)
    {
        P = strHTML.IndexOf(strToken + "=\"", S + strURL.Length + 2, StringComparison.InvariantCultureIgnoreCase);
        continue;
    }
    // end added code -- the method continues from here (not reproduced)
This is probably not the best solution, as it's searching for a hard-coded value. Better would be functionality that allows developers to add tags later. (But, then again, EditHtml.ascx.cs and HtmlTextController both hard-code the two tags that they intend to post-process.)
So, after making this small change, recompiling the DotNetNuke.Modules.Html.dll and deploying, drag-and-drop should be functional. Obviously this increases the complexity of an upgrade -- it would be better if this were fixed by DNN themselves. I verified that as of v7.2.2 this issue still exists.
UPDATE: Fixed in DNN Community Version 7.4.0
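For older builds that still need the manual patch, the hard-coded "data:image" comparison criticized above could be generalized into a small list of URI schemes that should never be rewritten. A rough sketch, not taken from the original patch or the 7.4.0 fix, and assuming a using System.Linq directive:
// Hypothetical generalization of the added check: skip any URL whose scheme
// should never be turned into a portal-relative path.
string[] skipPrefixes = { "data:", "mailto:", "javascript:" };

// inside the while loop, replacing the hard-coded "data:image" comparison:
string urlStart = strHTML.Substring(P + tLen);
bool skipThisToken = skipPrefixes.Any(
    prefix => urlStart.StartsWith(prefix, StringComparison.OrdinalIgnoreCase));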

IE9 randomly popping Windows Security Dialog when I send down a binary blob as a .xlsx

On our website we have multiple reports that can be downloaded as an Excel spreadsheet. We accomplish this by reading a blank template file from the hard drive, copying it into a MemoryStream, and pushing the data into the template with DocumentFormat.OpenXml.Spreadsheet; then we pass the MemoryStream to a function that sets the headers and copies the stream into the Response.
It works great in FF & Chrome, but IE9 (and 8, so my QA tells me) randomly pops a Windows Security login dialog asking you to log into the remote server. I can either cancel the dialog or hit OK (the credentials seem to be ignored) and get the Excel file as expected. I cannot get the login dialog to appear while Charles Proxy is capturing the requests, only after I disable it again, so I cannot see whether there's any difference in the traffic between my dev machine and the server. It also doesn't happen when running debug from my localhost, just from the dev/test server.
Any help would be useful; the code in question follows. It is called from a server-side function in the code-behind, hence RespondAsExcel clears the response and writes the .xlsx instead.
using (MemoryStream excelStream = new MemoryStream())
{
    using (FileStream template = new FileStream(Server.MapPath(@"Reports\AlignedTemplateRII.xlsx"), FileMode.Open, FileAccess.Read))
    {
        Master.CopyStream(template, excelStream);
    }

    //Logic here to push data into the Memory stream using DocumentFormat.OpenXml.Spreadsheet;
    Master.RespondAsExcel(excelStream, pgmName);
}

public void RespondAsExcel(MemoryStream excelStream, string fileName)
{
    var contenttype = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
    Response.Clear();
    Response.ContentType = contenttype;

    fileName = Utils.ReplaceWhiteSpaceWithUnderScores(fileName);
    Response.AddHeader("content-disposition", "inline;filename=" + fileName);
    Response.Cache.SetCacheability(HttpCacheability.NoCache);

    Response.BinaryWrite(excelStream.ToArray());
    //If that doesn't work, can try this way:
    //excelStream.WriteTo(Response.OutputStream);

    Response.End();
}

public void CopyStream(Stream source, Stream destination)
{
    byte[] buffer = new byte[32768];
    int bytesRead;
    do
    {
        bytesRead = source.Read(buffer, 0, buffer.Length);
        destination.Write(buffer, 0, bytesRead);
    } while (bytesRead != 0);
}
A couple of ideas come to mind regarding that "extra authentication dialog" that can always be dismissed... I won't promise this is your issue, but it sure smells like a first cousin of it.
Office 2007 and later documents open HTTP-based repositories with the WebClient libraries, which do not honor any of IE's security zone filters when requests are made. If the file is requested by IE, and host URL contains dots (implying a FQDN), even if the site is anonymously authenticated (requiring no credentials), you'll get the "credential" dialog that can be cancelled or simply clicked three times and discarded. I was dealing with this problem just yesterday, and as best I can tell, there's no workaround if the file is delivered with IE. There's some quirk about how IE delivers the file that makes Office apps believe it has to authenticate the request before opening it, even though the file has already been delivered to the client!
The dialog issue may be overcome if the document is delivered from a host server in the same domain as the requesting server, e.g. from some-server.a.domain.com to my-machine.a.domain.com.
The second idea is something strictly born of my own experience: the Open XML vendor content types sometimes introduce their own set of oddness in document-streaming situations. We've just used a content type of application/vnd.ms-excel and, while it seems it should map to the same applications, the problems don't seem to be as prevalent.
Perhaps that can give you some thoughts on going forward. Ultimately, right now, I don't think there's an ideal solution for the situation you're encountering. We're in the same boat, and had to tell our in-house clients that get the dialog to just hit "Cancel," and they get the document they want.
In your RespondAsExcel() method, change your content-disposition response header from inline to attachment. This will force the browser to open the file as read-only. See KB899927.
Response.AddHeader("content-disposition", "attachment;filename=" + fileName);
I had something similar with VBScript when using Response.ContentType = "application/vnd.ms-excel". I simply added the following code and the Windows Security popup window no longer appeared:
Response.AddHeader "content-disposition","attachment; filename=your_file_name_here.xls"
