I'm trying to delete a single row with the Google Sheets API in C#, and I can't get it to work. Fetching and updating rows work as intended.
Here's my code (inspired by C# Google Sheets API - Delete Row and the Java documentation):
var request = new Request
{
    DeleteDimension = new DeleteDimensionRequest
    {
        Range = new DimensionRange
        {
            SheetId = 0,
            Dimension = "ROWS",
            StartIndex = 21,
            EndIndex = 21
        }
    }
};
var deleteRequest = new BatchUpdateSpreadsheetRequest {Requests = new List<Request> {request}};
// First way: create a batch update request from scratch then execute it
var responseFirstWay = new SpreadsheetsResource.BatchUpdateRequest(MY_SHEETS_SERVICE, deleteRequest, MY_SPREADSHEET_ID).Execute();
// Second way: create a batch update request from the existing SheetsService then execute it
var responseSecondWay = MY_SHEETS_SERVICE.Spreadsheets.BatchUpdate(deleteRequest, MY_SPREADSHEET_ID).Execute();
I've tried various indexes, but it doesn't seem to change anything (in my example above I've used 21, with all rows up to 30 filled with data). I find it a bit weird to have a SheetId of zero, but that's the gid parameter I see when visiting Google Sheets.
No matter how I create my BatchUpdateRequest, the response is empty after I execute it.
According to the documentation for DimensionRange, the indexes are half-open: the start index is inclusive and the end index is exclusive. With StartIndex equal to EndIndex the range is empty, so your request deletes nothing.
Change your EndIndex to 22 to delete a single row. Also note that the indexes are zero-based, so StartIndex = 21, EndIndex = 22 removes the row labelled 22 in the sheet; to remove row 21, use StartIndex = 20 and EndIndex = 21.
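As a sketch, the corrected request could look like this (assuming you want to delete the row shown as row 21 in the UI; sheet ID 0 and the service/spreadsheet identifiers are taken from the question):
var deleteRowRequest = new Request
{
    DeleteDimension = new DeleteDimensionRequest
    {
        Range = new DimensionRange
        {
            SheetId = 0,        // the gid of the target sheet
            Dimension = "ROWS",
            StartIndex = 20,    // inclusive, zero-based: UI row 21
            EndIndex = 21       // exclusive
        }
    }
};
var body = new BatchUpdateSpreadsheetRequest { Requests = new List<Request> { deleteRowRequest } };
var response = MY_SHEETS_SERVICE.Spreadsheets.BatchUpdate(body, MY_SPREADSHEET_ID).Execute();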
How can I implement pagination with DynamoDB in C#, the same way I would in SQL/MySQL?
I have checked the documentation for ScanRequest and QueryRequest, and both work fine on their own.
But I need pagination like in SQL/MySQL: on the initial call I need the total item count and the records for that page, and I should be able to jump to any other page.
Can anyone suggest a good solution or an alternative approach?
Thank you in advance.
// Scan the table in pages of 5 items, using LastEvaluatedKey as the cursor for the next page.
Dictionary<string, AttributeValue> lastKeyEvaluated = null;
do
{
    var request = new ScanRequest
    {
        TableName = "Test",
        Limit = 5,
        ExclusiveStartKey = lastKeyEvaluated
    };
    var response = await Client.ScanAsync(request);
    lastKeyEvaluated = response.LastEvaluatedKey;
}
while (lastKeyEvaluated.Count != 0);
Dictionary<string, Condition> conditions = new Dictionary<string, Condition>();
// PK attribute should equal "Company#04"
Condition pkCondition = new Condition();
pkCondition.ComparisonOperator = ComparisonOperator.EQ;
pkCondition.AttributeValueList.Add(new AttributeValue { S = "Company#04" });
conditions["PK"] = pkCondition;
var request = new QueryRequest();
request.Limit = 2;
request.ExclusiveStartKey = null;
request.TableName = "Test";
request.KeyConditions = conditions;
var result = await Client.QueryAsync(request);
DynamoDB supports an entirely different style of pagination. The concept of page numbers, offsets, etc. is a paradigm from the SQL database world that has no parallel in DynamoDB. Pagination is supported, just not the way many expect.
You may want to change how your client application paginates to accommodate this. If you have a small amount of data to paginate (e.g. under 1 MB), you might pass it all to the client and let the client implement the pagination (e.g. page 4 of 15).
If you have too much data to paginate client-side, consider updating your client application to work the way DynamoDB paginates (i.e. cursor-style, using LastEvaluatedKey and ExclusiveStartKey).
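As a rough illustration of that cursor style in C# (a sketch only; the GetPageAsync helper, the table name, and the page size are assumptions, not part of the question):
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;

public static class DynamoPager
{
    // Returns one page of items plus the cursor (LastEvaluatedKey) needed for the next page.
    // Pass null as the cursor to get the first page; a null NextCursor means there are no more pages.
    public static async Task<(List<Dictionary<string, AttributeValue>> Items,
                              Dictionary<string, AttributeValue> NextCursor)> GetPageAsync(
        IAmazonDynamoDB client,
        Dictionary<string, AttributeValue> cursor,
        int pageSize = 5)
    {
        var request = new ScanRequest
        {
            TableName = "Test",
            Limit = pageSize,
            ExclusiveStartKey = cursor
        };

        var response = await client.ScanAsync(request);

        var nextCursor = (response.LastEvaluatedKey != null && response.LastEvaluatedKey.Count > 0)
            ? response.LastEvaluatedKey
            : null;

        return (response.Items, nextCursor);
    }
}
The trade-off is that the client keeps an opaque cursor per page instead of a page number, which is why jumping straight to an arbitrary page is not natively possible.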
I need to filter defects in a certain range of IDs from HP ALM using its OTA API. This needs to be done without fetching all the defects from ALM and filtering them in code, as that would significantly increase the run time, which is not desirable.
For example, I can filter a single defect as follows:
TDAPIOLELib.BugFactory OBugFactory = alm_core.tDConnection.BugFactory as TDAPIOLELib.BugFactory;
TDAPIOLELib.TDFilter OTDFilter = OBugFactory.Filter as TDAPIOLELib.TDFilter;
TDAPIOLELib.List OBugList;
// Gets only the bug with ID 3
OTDFilter["BG_BUG_ID"] = 3;
OBugList = OBugFactory.NewList(OTDFilter.Text);
Is there a way to get the bug list for an ID range, say between 1 and 100? Something like this:
// Gets all the bugs between 1-100
OTDFilter["BG_BUG_ID_MIN"] = 1;
OTDFilter["BG_BUG_ID_MAX"] = 100;
OBugList = OBugFactory.NewList(OTDFilter.Text);
The complete solution to filter all the defects between 1-100 is as follows:
TDAPIOLELib.BugFactory OBugFactory = alm_core.tDConnection.BugFactory as TDAPIOLELib.BugFactory;
TDAPIOLELib.TDFilter OTDFilter = OBugFactory.Filter as TDAPIOLELib.TDFilter;
TDAPIOLELib.List OBugList;
List<DefectOutputModel> AllBugList = new List<DefectOutputModel>();
OTDFilter.Text = @"[Filter]{
TableName: BUG,
ColumnName: BG_BUG_ID,
LogicalFilter: "">= 1 And <= 100"",
VisualFilter: "">= 1 And <= 100"",
SortOrder: 1,
SortDirection: 0,
NO_CASE:
}";
OBugList = OBugFactory.NewList(OTDFilter.Text);
The query for OTDFilter.Text was obtained by first filtering the defects by ID in HP ALM webapp and then copying the filter query text and pasting it here.
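For completeness, populating AllBugList from the returned list might look roughly like this (Bug.ID and Bug.Summary are standard OTA Bug properties; the DefectOutputModel property names are hypothetical):
foreach (TDAPIOLELib.Bug bug in OBugList)
{
    AllBugList.Add(new DefectOutputModel
    {
        Id = bug.ID,          // hypothetical property on DefectOutputModel
        Summary = bug.Summary // hypothetical property on DefectOutputModel
    });
}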
I'm able to split an Excel file in my function locally, but after publishing, the Azure Function throws a timeout exception. What should I do? How can Azure Durable Functions help here?
This is how I'm doing it:
bookOriginal.LoadFromStream(BlobService.GetFileFromBlob(filename));
log.LogInformation("File read from Azure Blob");
Worksheet sheet = bookOriginal.Worksheets[0];
var totalRow = sheet.Rows.Count();
int splitRows = 7000;
int count = totalRow / splitRows;
for (int i = 1; i <= count; i++)
{
    CellRange range1;
    Workbook newBook1 = new Workbook();
    newBook1.CreateEmptySheets(1);
    Worksheet newSheet1 = newBook1.Worksheets[0];
    Model localModel = new Model();
    if (i == 1)
    {
        range1 = sheet.Range[2, 1, splitRows, sheet.LastColumn];
    }
    else
    {
        range1 = sheet.Range[(splitRows * (i - 1)) + 1, 1, splitRows * i, sheet.LastColumn];
    }
    newSheet1.Copy(range1, newSheet1.Range[1, 1]);
    //bookOriginal.SaveToFile("Research and Development.xlsx", ExcelVersion.Version2007);
    localModel.workbookObject = newBook1;
    model.Add(localModel);
}
Console.WriteLine("Ran Completely");
Yes, Durable Functions can surely help you!
You can take a look at this link: https://learn.microsoft.com/it-it/azure/azure-functions/durable/durable-functions-overview?tabs=csharp
The first and the second pattern could help you. The project structure can be:
- A blob-triggered function that downloads the source Excel file and converts it into an object you can pass as input when invoking the orchestrator.
- The orchestrator function deserializes the input object and groups the rows as you did in your code.
- Inside a foreach statement you can use the current group of rows as the parameter to invoke an activity. You can choose whether the activities run in sequence (pattern 1, awaiting each activity) or in parallel (pattern 2, using Task.WhenAll); a sketch of the parallel variant follows below.
- The activity function converts the row group into an Excel file and, using a blob attribute as output, uploads it to storage.
WARNING: the Durable Functions documentation says: "Return values are serialized to JSON and persisted to the orchestration history table in Azure Table storage."
So the input model must be JSON-serializable.
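A rough sketch of the fan-out variant (pattern 2), assuming an in-process Durable Functions 2.x project; the function names, the RowChunk model, and the SaveChunk activity are illustrative, not part of the original code:
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

// Plain data holder so the orchestrator input stays JSON-serializable
// (no Workbook objects cross the orchestrator boundary).
public class RowChunk
{
    public int ChunkIndex { get; set; }
    public List<List<string>> Rows { get; set; }
}

public static class SplitFunctions
{
    [FunctionName("SplitOrchestrator")]
    public static async Task RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Input prepared by the blob-triggered starter function.
        var chunks = context.GetInput<List<RowChunk>>();

        // Fan out: one activity call per row group, all running in parallel.
        var tasks = chunks
            .Select(chunk => context.CallActivityAsync("SaveChunk", chunk))
            .ToList();

        // Fan in: wait for every chunk to be written before completing.
        await Task.WhenAll(tasks);
    }

    [FunctionName("SaveChunk")]
    public static void SaveChunk([ActivityTrigger] RowChunk chunk)
    {
        // Build a small workbook from chunk.Rows here and upload it to blob storage,
        // e.g. via an output blob binding or the storage SDK.
    }
}
Each activity stays well under the normal function timeout because it only handles one 7000-row slice, while the orchestrator itself never blocks on the long-running work.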
I'm trying to design a table that has 3 additional tables in the last cell.
I've managed to get the first nested table into row 4, but my second nested table is going into cell (1,1) of the first table.
var wordApplication = new Word.Application();
wordApplication.Visible = true;
var wordDocument = wordApplication.Documents.Add();
var docRange = wordDocument.Range();
docRange.Tables.Add(docRange, 4, 1);
var mainTable = wordDocument.Tables[1];
mainTable.set_Style("Table Grid");
mainTable.Borders.Enable = 0;
mainTable.PreferredWidthType = Word.WdPreferredWidthType.wdPreferredWidthPercent;
mainTable.PreferredWidth = 100;
docRange.Collapse(Word.WdCollapseDirection.wdCollapseStart);
var phoneRange = mainTable.Cell(4, 1).Range;
phoneRange.Collapse(Word.WdCollapseDirection.wdCollapseStart);
phoneRange.Tables.Add(phoneRange, 3, 2);
var phoneTable = mainTable.Cell(4, 1).Tables[1];
phoneTable.set_Style("Table Grid");
phoneTable.Borders.Enable = 0;
phoneTable.AutoFitBehavior(Word.WdAutoFitBehavior.wdAutoFitContent);
phoneTable.Rows.RelativeHorizontalPosition = Word.WdRelativeHorizontalPosition.wdRelativeHorizontalPositionMargin;
phoneRange.Collapse(Word.WdCollapseDirection.wdCollapseEnd);
I've tried collapsing the range, adding in a paragraph then collapsing the range again. No luck. I found this post and many similar ones, but I must be missing something.
Thanks for your time.
It usually helps in situations like these to add a line to your code, phoneRange.Select();, and have code execution end there. Take a look at where the Range actually is. Then you can test using the keyboard where the Range needs to be in order to insert the next table successfully.
Since you say phoneRange selects outside the third row, rather than working with phoneRange, try setting a new Range object to phoneTable.Range and then collapsing it to its end point.
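A minimal sketch of that suggestion, reusing the variables from the code above (the new names are illustrative):
// Start from the nested phone table's own range instead of phoneRange.
var afterPhoneRange = phoneTable.Range;

// Collapse to the end point so the insertion happens after the phone table,
// still inside cell (4, 1) of the main table.
afterPhoneRange.Collapse(Word.WdCollapseDirection.wdCollapseEnd);

// Add the next nested table at that position.
var secondNestedTable = afterPhoneRange.Tables.Add(afterPhoneRange, 3, 2);
secondNestedTable.set_Style("Table Grid");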
Hi
If only insert operations occur on a Lucene index (no deletes/updates), is it true that the docID does not change? And is that reliable?
If it is true, I want to use it to load the FieldCache incrementally and lower the overhead of loading all documents. What is the best solution for that?
I'm not quite sure what you're planning to do with the field cache, but my understanding of document ids is that they can change during an insert, depending on pending deletes, merge policies, etc.
That is, a document ID should not be used past a commit boundary on a reopened index reader.
Hope this helps,
The document id is static within a segment. IndexReader.Open (usually) opens a DirectoryReader, which combines several SegmentReaders. You'll need to pass the "bottom" reader to the FieldCache for the population to work correctly.
Here's an example from FieldCache with frequently updating index which ensures that only the newly read segment is read by the FieldCache, instead of the topmost reader (which will be considered changed at every commit).
var directory = FSDirectory.Open(new DirectoryInfo("index"));
var reader = IndexReader.Open(directory, readOnly: true);
var documentId = 1337;
// Grab all subreaders.
var subReaders = new List<IndexReader>();
ReaderUtil.GatherSubReaders(subReaders, reader);
// Loop through all subreaders. While subReaderId is higher than the
// maximum document id in the subreader, go to next.
var subReaderId = documentId;
var subReader = subReaders.First(sub =>
{
    if (sub.MaxDoc() < subReaderId)
    {
        subReaderId -= sub.MaxDoc();
        return false;
    }
    return true;
});
var values = FieldCache_Fields.DEFAULT.GetInts(subReader, "newsdate");
var value = values[subReaderId];