Gleamtech DocumentUltimate cache file path - C#

I am using Gleamtech's DocumentUltimate document viewer in C# and want to get the name of the cache folder that is created when a document is loaded.
Example : ~ App_Data\DocumentCache\~krcth
var cache = DocumentUltimateWebConfiguration.Current.GetCache();
string folder = cache.LocationId;
But it returns the same value every time, and there is no folder with that name. Is there any way I can get the name of the newly created cache folder?
The whole code is below. I am trying to delete a user's cache folders when their count exceeds 2, because multiple caches are slowing down the response.
DataTable docache = new DataTable();
if (Session["docache"] == null)
{
    docache.Columns.Add("Path");
    docache.Columns.Add("Time");
    docache.Columns.Add("Number");
}
else
{
    docache = (DataTable)Session["docache"];
}

var cache = DocumentUltimateWebConfiguration.Current.GetCache();
string path = cache.LocationId;

int cachecount = Convert.ToInt32(hfcache.Value.ToString());
cachecount++;

docache.Rows.Add(path, DateTime.Now.ToString("ddMMyyyyhhmmss"), cachecount);
hfcache.Value = Convert.ToString(cachecount);

if (docache.Rows.Count > 2)
{
    string deletecachepath = Server.MapPath("~/App_Data/DocumentCache/" + docache.Rows[0]["Path"].ToString());
    DeleteDirectory(deletecachepath);
    docache.Rows[0].Delete();
}

Session["docache"] = docache;
//End
Does anyone have any idea about this?
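In the meantime, a fallback I'm considering is a minimal sketch using plain System.IO rather than the DocumentUltimate API, under the assumption that every cache entry is a subfolder created directly under App_Data\DocumentCache (like the ~krcth folder above) and that this method lives in the same Web Forms code-behind as the snippet above:
using System;
using System.IO;
using System.Linq;

// Fallback sketch (assumption: each cache entry is a subfolder directly under
// App_Data\DocumentCache). Keeps the two most recently written subfolders and
// deletes the rest.
void TrimDocumentCache()
{
    string cacheRoot = Server.MapPath("~/App_Data/DocumentCache");
    if (!Directory.Exists(cacheRoot))
        return;

    var staleFolders = new DirectoryInfo(cacheRoot)
        .GetDirectories()
        .OrderByDescending(d => d.LastWriteTimeUtc)
        .Skip(2);                        // keep the two newest caches

    foreach (var folder in staleFolders)
    {
        try { folder.Delete(true); }     // recursive delete
        catch (IOException) { /* folder may still be in use by the viewer */ }
    }
}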

Related

Need a Way to Search through Event Logs by RecordID

I am trying to search through a folder containing event logs; eventpath holds the path of the specific event log I want to access. I want to use a specified RecordID to find its correlated FormatDescription and display it in a MessageBox. I want to be able to use eventpath to access each event log, since I am using 6 separate .evtx files and need to use this method on all of them.
I found this solution, but I get an error when I try to query. I've tried to find a fix, but it seems as if it's just not going to work for what I need. I commented in the code where exactly the error occurs.
This is the exception: System.Diagnostics.Eventing.Reader.EventLogException: The specified path is invalid.
I can't find a fix for this code, but if anyone knows a fix or another way to approach searching through Event Logs by RecordID and giving the corresponding FormatDescription, it would be greatly appreciated.
I am using C# in Windows Presentation Foundation.
public void getDesc(string recordid)
{
    string eventpath = getEventPath();
    //takes off the .evtx of the path
    string result = eventpath.Substring(0, eventpath.Length - 5);
    //result is going to be similar to this:
    //C:\Users\MyName\AppData\Local\Temp\randomTempDirectory\additional_files\DiagnosticInfo\WindowsEventLogs\Application
    string sQuery = "*[System/EventRecordID=" + recordid + "]";
    var elQuery = new EventLogQuery(result, PathType.LogName, sQuery);
    //this is where it errors out
    //error: Specified Channel Path is invalid
    using (var elReader = new System.Diagnostics.Eventing.Reader.EventLogReader(elQuery))
    {
        List<EventRecord> eventList = new List<EventRecord>();
        EventRecord eventInstance = elReader.ReadEvent();
        try
        {
            while ((eventInstance = elReader.ReadEvent()) != null)
            {
                //Access event properties here:
                string formatDescription = eventInstance.FormatDescription();
                MessageBox.Show(formatDescription);
            }
        }
        finally
        {
            if (eventInstance != null)
                eventInstance.Dispose();
        }
    }
}
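One possible approach, sketched here rather than tested against these particular .evtx files: EventLogQuery also accepts PathType.FilePath, in which case you pass the full path to the .evtx file (extension included) instead of stripping it off and treating it as a channel name. getEventPath() below is the poster's existing helper; everything else is standard System.Diagnostics.Eventing.Reader usage.
using System;
using System.Diagnostics.Eventing.Reader;
using System.Windows;

// Sketch: query an .evtx file directly by RecordID using PathType.FilePath.
public void GetDescriptionFromFile(string recordId)
{
    string eventPath = getEventPath();   // full path, including the .evtx extension
    string query = "*[System/EventRecordID=" + recordId + "]";

    var elQuery = new EventLogQuery(eventPath, PathType.FilePath, query);
    using (var elReader = new EventLogReader(elQuery))
    {
        for (EventRecord ev = elReader.ReadEvent(); ev != null; ev = elReader.ReadEvent())
        {
            using (ev)
            {
                // FormatDescription() can return null if the publisher metadata
                // is not available on this machine.
                MessageBox.Show(ev.FormatDescription() ?? "(no description available)");
            }
        }
    }
}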

C# MongoDB: Chunk file does not upload following a prior deletion

I'm having a bit of a problem with an update feature designed to compare two images and, if they differ, delete the existing image and data from Mongo and replace them with a new copy. The problem is that each component works individually. The loading feature will successfully upload an image and BSON document. The delete method will successfully (seemingly) remove them: the document, the fs.files entry, and the fs.chunks entry.
However, when the entry is deleted and the new image is then uploaded, only the fs.files entry and the BsonDocument are pushed to the server. The actual image is left off.
I'm running MongoDB 3.2.6 for Windows.
The replace block, followed by the upload block:
if (newMD5.Equals(oldMD5) == false)
{
    Debug.WriteLine("Updating image " + fileWithExt);
    BsonValue targetId = docCollection.FindOne(Query.EQ("id", fileNoExt))["_id"];
    deleteImageEntry(Query.EQ("_id", new ObjectId(targetId.ToString())));
    //continues to upload replacement
}
else
{
    continue;
}
}

//create new entry
uploadInfo = mongoFileSystem.Upload(memStream, fileFs);
BsonDocument entry = new BsonDocument();
entry.Add("fileId", uploadInfo.Id);
entry.Add("id", fileNoExt);
entry.Add("filename", fileFs);
entry.Add("user", "");
//appends to image collection
var newItemInfo = docCollection.Save(entry);
And the delete method:
public static bool deleteImageEntry(IMongoQuery query)
{
    MongoInterface mongo = new MongoInterface();
    try
    {
        var docCollection = mongo.Database.GetCollection("employees");
        var imageCollection = mongo.Database.GetCollection<EmployeeImage>("employees");
        var toDelete = docCollection.FindOne(query);
        BsonValue fileId = toDelete.GetValue("fileId");
        mongo.Gridfs.DeleteById(fileId);
        WriteConcernResult wresult = imageCollection.Remove(query);
    }
    catch (Exception e)
    {
        Debug.WriteLine("Image could not be deleted \n\r" + e.Message);
        return false;
    }
    return true;
}
Sorry the code is messy; I've been doing a lot of guerrilla testing to try to find a reason for this. Similar code has worked in other parts of the program.
Well, this is embarrassing. After a few more hours of debugging, it turned out that the MD5.ComputeHash() call was leaving the memStream position at the end of the stream, so there was no data left to upload. Setting memStream.Position = 0; before the upload solved the problem.
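For anyone hitting the same thing, here is a minimal sketch of the fix using plain .NET (the helper name is made up for illustration): compute the hash first, then rewind the stream before handing it to GridFS.
using System;
using System.IO;
using System.Security.Cryptography;

// Sketch: MD5.ComputeHash reads the stream to its end, so the position must be
// reset to 0 before the stream is passed to the GridFS Upload() call.
static string ComputeMd5AndRewind(MemoryStream memStream)
{
    using (var md5 = MD5.Create())
    {
        byte[] hash = md5.ComputeHash(memStream); // leaves Position at the end of the stream
        memStream.Position = 0;                   // rewind so the next Upload() actually sees data
        return BitConverter.ToString(hash).Replace("-", string.Empty).ToLowerInvariant();
    }
}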

How can I insert an existing Google Worksheet into a Google Spreadsheet?

I'm writing a C# app with some Google Spreadsheets integration. I'm in a situation where I have some data in a worksheet that needs to be moved into a different spreadsheet. This worksheet contains a huge amount of data, so I want to avoid iterating through its contents.
The API guide gives an example of how to create a new worksheet within a spreadsheet. I modified it to add an existing worksheet to the spreadsheet:
using System;
using Google.GData.Client;
using Google.GData.Spreadsheets;

namespace MySpreadsheetIntegration
{
    class Program
    {
        static void Main(string[] args)
        {
            SpreadsheetsService service = new SpreadsheetsService("MySpreadsheetIntegration-v1");
            SpreadsheetEntry destinationSpreadsheet = fetchGoogleSpreadSheetEntry(service, "some_title");
            SpreadsheetEntry originSpreadsheet = fetchGoogleSpreadSheetEntry(service, "some_other_title");

            // Create a local representation of the new worksheet.
            WorksheetEntry originWorksheet = fetchGoogleWorksheet(originSpreadsheet, "some_worksheet_title");

            // Send the local representation of the worksheet to the API for
            // creation. The URL to use here is the worksheet feed URL of our
            // spreadsheet.
            WorksheetFeed wsFeed = destinationSpreadsheet.Worksheets;
            service.Insert(wsFeed, originWorksheet);
        }
    }
}
For clarity, the above code attempts to take the "some_worksheet_title" worksheet in the "some_other_title" spreadsheet, and put it into the "some_title" spreadsheet. Below are the functions referenced in the above code.
public static WorksheetEntry fetchGoogleWorksheet(SpreadsheetEntry spreadsheet, string worksheet_title)
{
    WorksheetFeed wsFeed = spreadsheet.Worksheets;
    WorksheetEntry worksheet = null;
    foreach (WorksheetEntry entry in wsFeed.Entries)
    {
        worksheet = entry;
        if (entry.Title.Text == worksheet_title)
        {
            Console.WriteLine(DateTime.Now.ToString("HH:mm") + ": Worksheet found on Google Drive.");
            break;
        }
    }
    if (worksheet.Title.Text != worksheet_title)
    {
        return null;
    }
    return worksheet;
}

public static SpreadsheetEntry fetchGoogleSpreadSheetEntry(SpreadsheetsService service, string spreadsheet_title)
{
    Console.WriteLine(DateTime.Now.ToString("HH:mm") + ": Looking for spreadsheet on Google Drive.");
    SpreadsheetQuery query = new SpreadsheetQuery();
    SpreadsheetFeed feed;
    feed = service.Query(query);
    SpreadsheetEntry spreadsheet = null;
    // Iterate through all of the spreadsheets returned
    foreach (SpreadsheetEntry entry in feed.Entries)
    {
        spreadsheet = entry;
        if (entry.Title.Text == spreadsheet_title)
        {
            Console.WriteLine(DateTime.Now.ToString("HH:mm") + ": Spreadsheet found on Google Drive.");
            Console.WriteLine(DateTime.Now.ToString("HH:mm") + ": Looking for worksheet in spreadsheet.");
            break;
        }
    }
    if (spreadsheet.Title.Text != spreadsheet_title)
    {
        return null;
    }
    return spreadsheet;
}
I expected to be able to fetch the worksheet I want to add and just add it to the destination spreadsheet. It does not work: the above code creates a (correctly titled) worksheet in the destination spreadsheet, but does not transfer any of the worksheet's content.
Is there any way to have it transfer the content correctly?
After trying a few different ways of doing this, the most reliable way turned out to be Google Apps Script. In general terms, my solution involves a Google Apps script that is called by my C# application via the Execution API. Below are some code samples demonstrating how all of this works together.
So here's the Google Apps script that moves content from one worksheet to another:
function copyWorksheet(destinationSpreadsheetId, destinationWorksheetTitle, originSpreadsheetId, originWorksheetTitle) {
  // Spreadsheet where new data will go:
  var dss = SpreadsheetApp.openById(destinationSpreadsheetId);
  // Spreadsheet where new data is coming from:
  var oss = SpreadsheetApp.openById(originSpreadsheetId);
  // Worksheet containing new data:
  var dataOriginWorksheet = oss.getSheetByName(originWorksheetTitle);
  // Worksheet whose data will be 'overwritten':
  var expiredWorksheet = dss.getSheetByName(destinationWorksheetTitle);

  // If a spreadsheet only has one worksheet, deleting that worksheet causes an error.
  // Thus we need to know whether the expired worksheet is the only worksheet in its parent spreadsheet.
  var expiredWorksheetIsAlone = dss.getNumSheets() == 1 && expiredWorksheet != null;

  // Delete the expired worksheet if there are other worksheets:
  if (expiredWorksheet != null && !expiredWorksheetIsAlone)
    dss.deleteSheet(expiredWorksheet);

  // Otherwise, rename it to something guaranteed not to clash with the new sheet's title:
  if (expiredWorksheetIsAlone)
    expiredWorksheet.setName(dataOriginWorksheet.getName() + destinationWorksheetTitle);

  // Copy the new data into its rightful place, and give it its rightful name.
  dataOriginWorksheet.copyTo(dss).setName(destinationWorksheetTitle);

  // Since there are now definitely 2 worksheets, it's safe to delete the expired one.
  if (expiredWorksheetIsAlone)
    dss.deleteSheet(expiredWorksheet);

  // Make sure our changes are applied ASAP:
  SpreadsheetApp.flush();

  return "finished";
}
This is a severely stripped-down version of the code I ended up using, which is why there are two spreadsheet ID fields. This means it does not matter whether or not the two worksheets are in the same spreadsheet.
The C# part of the solution looks like this:
// We need these for the method below
using Google.Apis.Script.v1;
using Google.Apis.Script.v1.Data;
...
public static bool copyWorksheet(ScriptService scriptService, string destinationSpreadsheetId, string destinationWorksheetTitle, string originSpreadsheetId, string originWorksheetTitle)
{
    // You can get the script ID by going to the script in the
    // Google Apps Script editor > Publish > Deploy as API executable... > API ID
    string scriptId = "your-apps-script-id";

    ExecutionRequest request = new ExecutionRequest();
    request.Function = "copyWorksheet";
    IList<object> parameters = new List<object>();
    parameters.Add(destinationSpreadsheetId);
    parameters.Add(destinationWorksheetTitle);
    parameters.Add(originSpreadsheetId);
    parameters.Add(originWorksheetTitle);
    request.Parameters = parameters;

    ScriptsResource.RunRequest runReq = scriptService.Scripts.Run(request, scriptId);
    try
    {
        Operation op = runReq.Execute();
        if (op.Error != null)
        {
            Console.WriteLine(DateTime.Now.ToString("HH:mm:ss") + " The Apps script encountered an error");
            // The API executed, but the script returned an error.
            IDictionary<string, object> error = op.Error.Details[0];
            Console.WriteLine("Script error message: {0}", error["errorMessage"]);
            if (error.ContainsKey("scriptStackTraceElements"))
            {
                // There may not be a stacktrace if the script didn't
                // start executing.
                Console.WriteLine("Script error stacktrace:");
                Newtonsoft.Json.Linq.JArray st = (Newtonsoft.Json.Linq.JArray)error["scriptStackTraceElements"];
                foreach (var trace in st)
                {
                    Console.WriteLine(
                        "\t{0}: {1}",
                        trace["function"],
                        trace["lineNumber"]);
                }
            }
        }
        else
        {
            // The result provided by the API needs to be cast into the correct
            // type, based upon what the Apps Script function returns. Here the
            // script simply returns the string "finished", so success is just
            // reported as a bool.
            return true;
        }
    }
    catch (Google.GoogleApiException e)
    {
        // The API encountered a problem before the script started executing.
        Console.WriteLine(DateTime.Now.ToString("HH:mm:ss") + " Could not call Apps Script: " + e.Message);
    }
    return false;
}
...
The above two pieces of code, when used together, solved the problem perfectly. Execution time does not differ greatly between different volumes of data, and there has been no data corruption during transfers.
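For reference, this is roughly how the ScriptService passed into copyWorksheet can be constructed. It is a sketch based on the standard Google .NET client OAuth flow; the client_secret.json path, scope, and application name are placeholders and may differ in your project.
using System.IO;
using System.Threading;
using Google.Apis.Auth.OAuth2;
using Google.Apis.Script.v1;
using Google.Apis.Services;

// Sketch: build a ScriptService using the usual installed-application OAuth flow.
static ScriptService CreateScriptService()
{
    UserCredential credential;
    using (var stream = new FileStream("client_secret.json", FileMode.Open, FileAccess.Read))
    {
        credential = GoogleWebAuthorizationBroker.AuthorizeAsync(
            GoogleClientSecrets.Load(stream).Secrets,
            new[] { "https://www.googleapis.com/auth/spreadsheets" },
            "user",
            CancellationToken.None).Result;
    }

    return new ScriptService(new BaseClientService.Initializer
    {
        HttpClientInitializer = credential,
        ApplicationName = "MySpreadsheetIntegration"
    });
}

// Usage (IDs and titles are placeholders):
// copyWorksheet(CreateScriptService(), destinationId, "Sheet1", originId, "Sheet1");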

How can I get the LastRunTime for a report using the Business Objects Web Services SDK?

I'm using the Business Objects Web Services SDK to access our Business Objects data. I've successfully got a list of reports and, from that, found the LastSuccessfulInstance of a report that has previously been run. However, I can't seem to get the LastRunTime to be populated. When I do a query with no attributes specified it comes back as not set, and I get the same result when I ask for that attribute specifically. I've looked at both the report itself and the instance, and neither has this information. Does anyone know where I can get it from?
Here's my code (hacked from one of SAP's demos):
var sessConnUrl = serviceUrl + "/session";
var boConnection = new BusinessObjects.DSWS.Connection(sessConnUrl);
var boSession = new Session(boConnection);

// Set up the Enterprise credentials used to log in to the Enterprise system
var boEnterpriseCredential = new EnterpriseCredential
{
    Domain = cmsname,
    Login = username,
    Password = password,
    AuthType = authType
};

// Log in to the Enterprise system and retrieve the SessionInfo
boSession.Login(boEnterpriseCredential);

/************************** DISPLAY INBOX OBJECTS *************************/
// Retrieve the BIPlatform service so it can be used to add the USER
var biPlatformUrl = boSession.GetAssociatedServicesURL("BIPlatform");
var boBiPlatform = BIPlatform.GetInstance(boSession, biPlatformUrl[0]);

// Specify the query used to retrieve the inbox objects
// NOTE: Adding a "/" at the end of the query indicates that we want to
// retrieve all the objects located directly under the inbox.
// Without the "/" path operator, the inbox itself would be returned.
const string query = "path://InfoObjects/Root Folder/Reports/";

// Execute the query and retrieve the report objects
var boResponseHolder = boBiPlatform.Get(query, null);
var boInfoObjects = boResponseHolder.InfoObjects.InfoObject;

// If the response contains a list of objects, loop through and display them
if (boInfoObjects != null)
{
    // Go through and display the list of documents
    foreach (var boInfoObject in boInfoObjects)
    {
        var report = boInfoObject as Webi;
        if (report == null)
            continue;

        if (!string.IsNullOrEmpty(report.LastSuccessfulInstanceCUID))
        {
            var instanceQuery = "cuid://<" + report.LastSuccessfulInstanceCUID + ">";
            var instanceResponseHolder = boBiPlatform.Get(instanceQuery, null);
            var instance = instanceResponseHolder.InfoObjects.InfoObject[0];
        }
    }
}
Both report.LastRunTimeSpecified and instance.LastRunTimeSpecified are false, and both LastRunTime values are 01/01/0001, but I can see a last run time in the Web Intelligence UI.
With a little help from Ted Ueda at SAP support, I figured it out. Not all of the properties are populated by default; you need to append #* to the query string to get everything, i.e. change the line:
const string query = "path://InfoObjects/Root Folder/Reports/";
to:
const string query = "path://InfoObjects/Root Folder/Reports/#*";
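To make the effect concrete, here is a small sketch reusing the objects and names from the code above, and assuming the #* suffix populates the scheduling attributes on the returned objects, of reading the value once it comes back set:
// Sketch: with "#*" appended, the attributes should come back populated,
// so LastRunTime can be read straight off the returned objects.
const string query = "path://InfoObjects/Root Folder/Reports/#*";
var boResponseHolder = boBiPlatform.Get(query, null);

foreach (var boInfoObject in boResponseHolder.InfoObjects.InfoObject)
{
    var report = boInfoObject as Webi;
    if (report == null)
        continue;

    if (report.LastRunTimeSpecified)
        Console.WriteLine("Last run: " + report.LastRunTime);
}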

Open Lotus Notes database by Replica ID in C#

I created a program a while ago using C# that does some automation for a completely different program, but found that I need to access data from a Lotus Notes database. The only problem is that I can only figure out how to open the database by the server's name (using session.GetDatabase()); I can't figure out how to open it by replica ID. Does anyone know how I would go about that? (I don't want my program going down every time the server changes.)
public static string[] GetLotusNotesHelpTickets()
{
    NotesSession session = new NotesSession();
    session.Initialize(Password);

    // 85256B45:000EE057 = NTNOTES1A Server Replica ID
    NotesDatabase database = session.GetDatabase("NTNOTES1A", "is/gs/gshd.nsf", false);

    string SearchFormula = string.Concat("Form = \"Call Ticket\""
        , " & GroupAssignedTo = \"Business Systems\""
        , " & CallStatus = \"Open\"");

    NotesDocumentCollection collection = database.Search(SearchFormula, null, 0);
    NotesDocument document = collection.GetFirstDocument();

    string[] ticketList = new string[collection.Count];
    for (int i = 0; i < collection.Count; ++i)
    {
        ticketList[i] = ((object[])(document.GetItemValue("TicketNumber")))[0].ToString();
        document = collection.GetNextDocument(document);
    }

    document = null;
    collection = null;
    database = null;
    session = null;
    return ticketList;
}
This code is working fine, but if the server ever changes from NTNOTES1A, nothing will work anymore.
You'll need to use the notesDbDirectory.OpenDatabaseByReplicaID(rid$) method. To get the NotesDbDirectory, you can use the GetDbDirectory method of the session:
Set notesDbDirectory = notesSession.GetDbDirectory( serverName$ )
So you can use the code below to get a database by replica ID.
public static string[] GetLotusNotesHelpTickets()
{
    NotesSession session = new NotesSession();
    session.Initialize(Password);

    NotesDbDirectory notesDbDirectory = session.GetDbDirectory("NTNOTES1A");

    // 85256B45:000EE057 = NTNOTES1A Server Replica ID
    NotesDatabase database = notesDbDirectory.OpenDatabaseByReplicaID("85256B45:000EE057");

    string SearchFormula = string.Concat("Form = \"Call Ticket\""
        , " & GroupAssignedTo = \"Business Systems\""
        , " & CallStatus = \"Open\"");

    NotesDocumentCollection collection = database.Search(SearchFormula, null, 0);
    NotesDocument document = collection.GetFirstDocument();

    string[] ticketList = new string[collection.Count];
    for (int i = 0; i < collection.Count; ++i)
    {
        ticketList[i] = ((object[])(document.GetItemValue("TicketNumber")))[0].ToString();
        document = collection.GetNextDocument(document);
    }

    document = null;
    collection = null;
    database = null;
    session = null;
    return ticketList;
}
Unfortunately, this only solves half of your problem. I know you'd rather just tell Notes to fetch the database with a particular replica ID from the server closest to the client, just like the Notes client does when you click on a doclink or bookmark. However, there appears to be no way to do that using the Notes APIs.
My suggestion is to loop through a hard-coded list of potential servers by name and check whether the database is found (the OpenDatabaseByReplicaID method returns ERR_SYS_FILE_NOT_FOUND (error 0FA3) if the database is not found); a sketch of that fallback appears below. If that's not a good option, perhaps you can expose the server name in an admin menu of your app so it can be changed easily if the server name changes at some point.
set database = new NotesDatabase("")
call database.OpenByReplicaID("repid")
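Expanding on the fallback loop suggested above, here is a rough sketch; the candidate server names are hypothetical, and error handling is reduced to trying the next server.
// Sketch: try each candidate server in turn and open the database by replica ID.
public static NotesDatabase OpenByReplicaIdWithFallback(NotesSession session, string replicaId)
{
    // Hypothetical list of servers that might hold a replica of the database.
    string[] candidateServers = { "NTNOTES1A", "NTNOTES1B" };

    foreach (string server in candidateServers)
    {
        try
        {
            NotesDbDirectory dbDirectory = session.GetDbDirectory(server);
            NotesDatabase database = dbDirectory.OpenDatabaseByReplicaID(replicaId);
            if (database != null && database.IsOpen)
                return database;
        }
        catch (Exception)
        {
            // ERR_SYS_FILE_NOT_FOUND (0FA3) or the server is unreachable; try the next one.
        }
    }
    return null;
}

// Usage: NotesDatabase database = OpenByReplicaIdWithFallback(session, "85256B45:000EE057");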
