I am trying to adjust the range for my antennas using the following code:
Antennas.Config config = new Antennas.Config();
config.TransmitPowerIndex = (ushort)myreader.TransmitPowerIndex;
config.TransmitFrequencyIndex = (ushort)myreader.TransmitFrequencyIndex;
config.ReceiveSensitivityIndex = (ushort)myreader.ReceiveSensitivityIndex;
myreader.ReaderAPI.Config.Antennas.SetConfig(config);
The problem is, it won't let me set TransmitPowerIndex or ReceiveSensitivityIndex to anything other than 0; the exception I get is "config value out of range".
If I run the antennas at default settings (without using the code above), they run at full power.
If I use the following settings:
Antennas.Config config = new Antennas.Config();
config.TransmitPowerIndex = 10;
config.TransmitFrequencyIndex = 1;
config.ReceiveSensitivityIndex = 0;
myreader.ReaderAPI.Config.Antennas.SetConfig(config);
The antennas run at significantly lower power, but this is too low for my setting. If I want to change the power index to 20, for example, nothing changes. If I change TransmitFrequencyIndex or ReceiveSensitivityIndex to anything other than the values above, I get the "config value out of range" error.
How can I adjust the range of my antennas on a linear basis, based on some values? Unfortunately, the EMDK help files have no concrete information on that...
A little late, but better than never.
Motorola uses a table to assign the power of the antennas. This table is located in this property:
Symbol.RFID3.Reader MyReader;
int[] table = MyReader.ReaderCapabilities.TransmitPowerLevelValues;
Now, table is an array of int; its values are the power levels available for the antenna.
For example, let's assume table contains 50 items.
If you want the antenna at 50%:
config.TransmitPowerIndex = 24;
If you want the antenna at ~75%:
config.TransmitPowerIndex = 36;
If you want the antenna at 100%:
config.TransmitPowerIndex = 49;
Remember, you have to assign the INDEX of the value you want.
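For example, here is a small sketch of picking the index for a desired fraction of full power (illustrative only; MyReader and the property come from the snippet above, the rest is untested):
int[] table = MyReader.ReaderCapabilities.TransmitPowerLevelValues;
double desired = 0.5; // e.g. 50% of maximum power
ushort index = (ushort)Math.Round(desired * (table.Length - 1));
config.TransmitPowerIndex = index; // assign the INDEX, then call SetConfig as in the question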
Hope this helps somebody.
Regards.
Antenna read range is set by config.TransmitPowerIndex.
For maximum read range:
config.TransmitPowerIndex = 160;
For minimum read range:
config.TransmitPowerIndex = 0;
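Note that the valid index range depends on the reader; 160 is presumably just the last index of that particular reader's power table. A safer sketch is to look the top index up at runtime (same property as in the earlier answer):
int maxIndex = MyReader.ReaderCapabilities.TransmitPowerLevelValues.Length - 1;
config.TransmitPowerIndex = (ushort)maxIndex; // maximum read range on any reader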
I have a fairly complicated (to me) algorithm that I'm trying to write. The idea is to determine which elements in an array are the first ones to sum up to a value that falls within a range.
For example:
I have an array [1, 15, 25, 22, 25] that is in a prioritized order.
I want to find the first set of values with the most elements that sum within a minimum and maximum range, not necessarily the set that gets me closest to my max.
So, if the min is 1 and max is 25, I would select [0(1), 1(15)] even though the third element [2(25)] is closer to my max of 25 because those come first.
If the min is 25 and max is 40, I would select [0(1), 1(15), 3(22)], skipping the third element since that would breach the max.
If the min is 50 and max is 50, I would select [2(25), 4(25)] since those are the only two that can meet the min and max requirements.
Are there any common CS algorithms that match this pattern?
This is a dynamic programming problem.
You want to build up a data structure organized as follows:
by the number of leading array elements available:
    by achievable sum:
        (number of elements in the sum, position of the last element used)
When it finds a sum in the desired range, you just read back through the structure to get the answer.
Here is pseudocode for that. I used slightly Pythonish syntax and JSON to represent the data structure. Your code will be longer:
Initialize lookup to [{0: (0, null)}]
for i in 1..(length of array):
    # Build up our dynamic programming data structure
    Add empty mapping {} to end of lookup
    best_sum = null
    best_elements = null
    for prev_sum, (prev_elements, prev_position) in lookup[i-1]:
        # Try not using this element
        if prev_sum not in lookup[i] or lookup[i][prev_sum][0] < prev_elements:
            lookup[i][prev_sum] = (prev_elements, prev_position)
        # Try using this element
        next_sum = prev_sum + array[i-1]
        next_elements = prev_elements + 1
        next_position = i-1
        if next_sum not in lookup[i] or lookup[i][next_sum][0] < next_elements:
            lookup[i][next_sum] = (next_elements, next_position)
        if next_sum in desired range:
            if best_elements is null or best_elements < next_elements:
                best_elements = next_elements
                best_sum = next_sum
    if best_elements is not null:
        # Read out the answer!
        answer = []
        remaining = best_sum
        j = lookup[i][best_sum][1]      # position of the last element used
        while j is not null:
            answer.append(array[j])
            remaining = remaining - array[j]
            j = lookup[j][remaining][1] # last element of the sum that remains
        return reversed(answer)
return null                             # no subset sums into the range
This will return the desired values rather than the indexes. To switch, just change what goes into answer (append j instead of array[j]).
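For what it's worth, here is a rough C# translation of the pseudocode above (an untested sketch; the method name, the tuple layout, and the use of -1 in place of null are my own choices):
using System;
using System.Collections.Generic;

static class RangeSum
{
    // Returns the values of the first, largest set whose sum falls within [min, max],
    // or null if no subset of the (positive) values reaches the range.
    public static List<int> FirstSubsetInRange(int[] array, int min, int max)
    {
        // lookup[i] maps a sum achievable with the first i elements
        // to (number of elements in that sum, index of the last element used; -1 means "none").
        var lookup = new List<Dictionary<int, (int Count, int Last)>>
        {
            new Dictionary<int, (int Count, int Last)> { [0] = (0, -1) }
        };

        for (int i = 1; i <= array.Length; i++)
        {
            var current = new Dictionary<int, (int Count, int Last)>();
            lookup.Add(current);
            int bestSum = -1, bestCount = -1;

            foreach (var entry in lookup[i - 1])
            {
                int prevSum = entry.Key;
                var (prevCount, prevLast) = entry.Value;

                // Try not using element i-1: carry the sum forward unchanged.
                if (!current.TryGetValue(prevSum, out var kept) || kept.Count < prevCount)
                    current[prevSum] = (prevCount, prevLast);

                // Try using element i-1.
                int nextSum = prevSum + array[i - 1];
                int nextCount = prevCount + 1;
                if (!current.TryGetValue(nextSum, out var existing) || existing.Count < nextCount)
                    current[nextSum] = (nextCount, i - 1);

                if (nextSum >= min && nextSum <= max && nextCount > bestCount)
                {
                    bestCount = nextCount;
                    bestSum = nextSum;
                }
            }

            if (bestCount >= 0)
            {
                // Read the answer back through the table.
                var answer = new List<int>();
                int remaining = bestSum;
                int pos = current[bestSum].Last;
                while (pos >= 0)
                {
                    answer.Add(array[pos]);
                    remaining -= array[pos];
                    pos = lookup[pos][remaining].Last;
                }
                answer.Reverse();
                return answer;
            }
        }
        return null;
    }
}
Note that, like the pseudocode, this stops at the first prefix of the array for which some sum in the range is reachable and, among the sums reachable at that point, takes the one using the most elements.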
I tried the trial version of GemBox.Spreadsheet.
When I get Cells[,].Value in a for() or foreach() loop, it is slow.
So I thought I would call Calculate() first and then get Cell[].Value, but that takes just as much time.
It seems to re-calculate every time I get Cell[].Value.
workSheet.Calculate(); // <- after this, values are calculated, am I right?
for (int i = 0; i < workSheet.GetUsedCellRange(true).LastRowIndex + 1; ++i)
{
    // ~~~~ inner for loop iterating j over the columns ~~~~
    var value = workSheet.Cells[i, j].Value; // <- re-calculates the value(?)
}
So here is the question:
Can I get calculated values? Or do you know of a pre-calculate function, or a way to get more speed?
Unfortunately, I'm not sure exactly what you're asking; could you please try reformulating your question a bit so that it's easier to understand?
Nevertheless, here is some information which I hope you'll find useful.
To iterate through all cells, you should use one of the following:
1.
foreach (ExcelRow row in workSheet.Rows)
{
    foreach (ExcelCell cell in row.AllocatedCells)
    {
        var value = cell.Value;
        // ...
    }
}
2.
for (CellRangeEnumerator enumerator = workSheet.Cells.GetReadEnumerator(); enumerator.MoveNext(); )
{
    ExcelCell cell = enumerator.Current;
    var value = cell.Value;
    // ...
}
3.
for (int r = 0, rCount = workSheet.Rows.Count; r < rCount; ++r)
{
    for (int c = 0, cCount = workSheet.CalculateMaxUsedColumns(); c < cCount; ++c)
    {
        var value = workSheet.Cells[r, c].Value;
        // ...
    }
}
I believe all of them will have pretty much the same performance.
However, depending on the spreadsheet's content, this last one could end up a bit slower, because it does not iterate exclusively through allocated cells.
So for instance, let's say you have a spreadsheet with 2 rows. The first row is empty (it has no data) and the second row has 3 cells. If you use approach 1. or 2., you will iterate only through those 3 cells in the second row; but if you use approach 3., you will iterate through 3 cells in the first row (which previously were not allocated and now are, because we accessed them) and then through the 3 cells in the second row.
Now regarding the calculation, note that when you save the file with some Excel application, it will save the last calculated formula values in it. In this case you don't have to call the Calculate method because you already have the required values in the cells.
You should call the Calculate method when you need to update (re-calculate) the formulas in your spreadsheet, for instance after you have added or modified some cell values.
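For example, a minimal sketch of that usage (the file name and the commented license call are placeholders, not taken from your code):
// SpreadsheetInfo.SetLicense("..."); // your license or trial key goes here
ExcelFile workBook = ExcelFile.Load("Book1.xlsx"); // placeholder file name
ExcelWorksheet workSheet = workBook.Worksheets[0];
workSheet.Calculate(); // evaluate the formulas once, e.g. after modifying cells
foreach (ExcelRow row in workSheet.Rows)
{
    foreach (ExcelCell cell in row.AllocatedCells)
    {
        var value = cell.Value; // reads the stored (already calculated) value
    }
}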
Last, regarding your questions again (they are hard to understand, but nevertheless):
Can I get calculated values?
Yes, that line of code var value = workSheet.Cells[i,j].Value; should give you the calculated value because you used the Calculate method before it. However, if you have formulas that are currently not supported by GemBox.Spreadsheet's calculation engine then it will not be able to calculate the value. You can find a list of currently supported Excel formula functions here.
Or do you know of a pre-calculate function, or a way to get more speed?
I don't know what a "pre-calculate function" would be, and for speed please refer to the first part of this answer.
I want the Excel spreadsheet cells I populate with C# to expand or contract so that all their content displays without manually adjusting the width of the cells - displaying at "just enough" width to display the data - no more, no less.
I tried this:
_xlSheet = (MSExcel.Excel.Worksheet)_xlSheets.Item[1];
_xlSheet.Columns.AutoFit();
_xlSheet.Rows.AutoFit();
...but it does nothing in my current project (it works fine in a small POC sandbox app that contains no ranges). Speaking of ranges, the reason this doesn't work might have something to do with my having created cell ranges like so:
var rowRngMemberName = _xlSheet.Range[_xlSheet.Cells[1, 1], _xlSheet.Cells[1, 6]];
rowRngMemberName.Merge(Type.Missing);
rowRngMemberName.Font.Bold = true;
rowRngMemberName.Font.Italic = true;
rowRngMemberName.Font.Size = 20;
rowRngMemberName.Value2 = shortName;
...and then adding "normal"/generic single-cell values after that.
In other words, I have values that span multiple columns - several rows of that. Then below that, I revert to "one cell, one value" mode.
Is this the problem?
If so, how can I resolve it?
Is it possible to have independent sections of a spreadsheet whose formatting (autofitting) isn't affected by other parts of the sheet?
UPDATE
As for getting multiple rows to accommodate a value, I'm using this code:
private void AddDescription(String desc)
{
    int curDescriptionBottomRow = curDescriptionTopRow + 3;
    var range = _xlSheet.Range[_xlSheet.Cells[curDescriptionTopRow, 1], _xlSheet.Cells[curDescriptionBottomRow, 1]];
    range.Merge();
    range.Font.Bold = true;
    range.VerticalAlignment = XlVAlign.xlVAlignCenter;
    range.Value2 = desc;
}
...and here's what it accomplishes:
AutoFit is what is needed, after all, but the key is to call it at the right time - after all other manipulation has been done. Otherwise, subsequent manipulation can lose the autofittedness.
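In other words, the ordering that worked (a sketch reusing the snippets above):
// 1. Merge ranges, set fonts, and write all values first...
rowRngMemberName.Value2 = shortName;
AddDescription(desc);
// 2. ...then, only as the very last step, autofit:
_xlSheet.Columns.AutoFit();
_xlSheet.Rows.AutoFit();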
If I get what you are asking correctly, you are looking to wrap text... at least that's the official term for it...
xlWorkSheet.Range["A4:A4"].Cells.WrapText = true;
Here is the documentation: https://msdn.microsoft.com/en-us/library/office/ff821514.aspx
I'm using Emgu CV to use OpenCV's machine learning algorithms. I can successfully train an RTree (training reports success), but when I try to predict it always gives me -1. Then I tried to get the variable importance matrix and the tree count: the matrix comes back null (I specified the params to build it) and the tree count comes back 0.
Does anyone have any thoughts on what I'm doing wrong? PS: if I use a decision tree I can get predictions.
I have 6 variables and about 11000 samples.
Below are the parameters I use:
MCvRTParams param = new MCvRTParams();
param.maxDepth = 8;// max depth
param.minSampleCount = 10;// min sample count
param.regressionAccuracy = 0;// regression accuracy: N/A here
param.useSurrogates = true; //compute surrogate split, no missing data
param.maxCategories = 15;// max number of categories (use sub-optimal algorithm for larger numbers)
param.cvFolds = 10;
//param.use1seRule = true;
param.truncatePrunedTree = true;
//param.priors = priorsHandle.AddrOfPinnedObject(); // the array of priors
Thanks
Try setting regressionAccuracy to a non-zero number. regressionAccuracy stops splitting nodes if the accuracy within a node is better than regressionAccuracy; if you set it to zero it will stop immediately at the root node.
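For example (the exact value here is just an illustration to experiment with, not a recommendation):
param.regressionAccuracy = 0.01f; // stop splitting a node once its accuracy is within 0.01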
Or is there any better-suited 3rd-party control for this purpose?
I know that DevExpress XtraGrid supports, in theory, Int32.MaxValue rows or columns in the grid. In this case, the limit is the system memory not the grid.
But do you really need to display so much data?
Short answer: Don't do it!
Long answer: Change the FillWeight to 10 or less (the default is 100). The limit you are reaching is because the total FillWeight across all columns cannot exceed 64K, and at the default of 100 per column that cap is hit quickly (who knows why that is a limit).
Use a virtual list (loads only the rows that are visible). I'm not sure that WinForms ListView has a virtual mode but the WPF one does.
So create a WPF user control and set it up for VirtualMode = True and host that user control on your WinForms client with an ElementHost container.
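Roughly something like this, from memory (an untested sketch; the collection name is a placeholder, and the WPF ListView virtualizes through its VirtualizingStackPanel rather than a VirtualMode flag):
// Requires references to WindowsFormsIntegration, PresentationCore and PresentationFramework.
var wpfList = new System.Windows.Controls.ListView();
System.Windows.Controls.VirtualizingStackPanel.SetIsVirtualizing(wpfList, true); // keep UI virtualization on
wpfList.ItemsSource = myRows; // myRows = your large data collection (placeholder)
var host = new System.Windows.Forms.Integration.ElementHost
{
    Dock = System.Windows.Forms.DockStyle.Fill,
    Child = wpfList
};
this.Controls.Add(host); // "this" being your WinForms form or user control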
Sorry I can't be more specific, I don't have the code to hand.
Ryan
You're missing that the FillWeight variable takes a floating-point number, not an integer, so 0.5f or 0.01f would do (the latter would allow up to 6553500 columns in theory). Unfortunately, creation is very slow (at least for me), increasingly so past around 1000 columns; 10,000 columns take about 20 seconds. Perhaps the VirtualMode others have suggested is worth a shot.
For what it's worth, here is the code I use to create an x by y table of empty cells. Perhaps someone can optimize the speed further:
private void createDGVcells(DataGridView dgv, int columns, int rows) {
    // Optimization:
    dgv.ColumnHeadersHeightSizeMode = DataGridViewColumnHeadersHeightSizeMode.DisableResizing; // Massive speed up if high column count
    dgv.ScrollBars = ScrollBars.None; // Apx. 75% speedup for high row count
    dgv.AllowUserToAddRows = false; // Further 50% apx. speedup when creating rows
    dgv.ReadOnly = true; // Small apx. 50ms latency speedup?
    // First clear any existing cells, should they exist:
    if (dgv.DataSource != null) dgv.DataSource = null;
    else {
        dgv.Rows.Clear();
        dgv.Columns.Clear();
    }
    // Create the first row (the columns):
    DataGridViewColumn[] dgvc = new DataGridViewColumn[columns];
    for (int i = 0; i < dgvc.Length; ++i) {
        DataGridViewColumn dg = new DataGridViewTextBoxColumn();
        dg.FillWeight = 0.1f; // Allows up to 655350 columns in theory
        dgvc[i] = dg;
    }
    dgv.Columns.AddRange(dgvc);
    // Add all the rows (very quick)
    for (int j = 0; j < rows - 1; j++) dgv.Rows.Add();
    // Optional to turn these back on
    dgv.ReadOnly = false;
    dgv.AllowUserToAddRows = true;
    dgv.ScrollBars = ScrollBars.Both;
    dgv.ColumnHeadersHeightSizeMode = DataGridViewColumnHeadersHeightSizeMode.EnableResizing;
}
Xceed's DataGrid for WPF can do this easily and uses both column and UI virtualization. Check out their live demo; you can populate it with as many columns and rows as needed to test perf. http://xceed.com/Grid_WPF_Demo.html