I'm reading the book:
Introduction to Neural Networks for C# Second Edition by Jeff Heaton
In particular the chapter about Hopfield networks. He explains how to calculate the contribution matrix given a boolean array as a pattern.
For example, given the pattern 0101, the corresponding contribution matrix (of weights) is:
0 -1 1 -1
-1 0 -1 1
1 -1 0 -1
-1 1 -1 0
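(Not from the book: one way to reproduce this matrix is to map the pattern to bipolar form, 0 to -1 and 1 to +1, take the outer product with itself, and zero the diagonal. A minimal Python sketch; the name `contribution_matrix` is mine, not the book's.)

```python
def contribution_matrix(pattern):
    """Weight contribution of one pattern: bipolar outer product,
    with the diagonal (self-connections) forced to zero."""
    bipolar = [2 * b - 1 for b in pattern]  # 0 -> -1, 1 -> +1
    n = len(pattern)
    return [[0 if i == j else bipolar[i] * bipolar[j] for j in range(n)]
            for i in range(n)]

for row in contribution_matrix([0, 1, 0, 1]):
    print(row)  # reproduces the matrix above, row by row
```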
The process of recognizing the pattern follows this rule:
We must now compare those weights with the input pattern of 0101. We
will sum only the weights corresponding to the positions that contain
a 1 in the input pattern. The results of the activation of each neuron
are shown below.
N1 = -1 + -1 = -2
N2 = 0 + 1 = 1
N3 = -1 + -1 = -2
N4 = 1 + 0 = 1
These values are meaningless without an activation function. The
activation function used for a Hopfield network fires any neuron whose
sum is greater than zero, so the following neurons will fire.
N1 activation result is -2; will not fire (0)
N2 activation result is 1; will fire (1)
N3 activation result is -2; will not fire (0)
N4 activation result is 1; will fire (1)
As you can see, we assign a binary value of 1 to all neurons that
fired, and a binary value of 0 to all neurons that did not fire. The
final binary output from the Hopfield network will be 0101. This is
the same as the input pattern.
He also says:
If we also want to recognize 1001, then we would calculate both
contribution matrices and add the results to create the connection
weight matrix.
So I calculated the second contribution matrix:
0 -1 -1 1
-1 0 1 -1
-1 1 0 -1
1 -1 -1 0
And added the two matrices:
0 -2 0 0
-2 0 0 0
0 0 0 -2
0 0 -2 0
And obviously (following the rule above) this last matrix cannot recognize either of the previous patterns. How is that possible? Where is the error?
EDIT: (Added the example provided by the author)
Considering the example provided, the two patterns are:
1100 -> [1 1 -1 -1]
0 1 -1 -1
1 0 -1 -1
-1 -1 0 1
-1 -1 1 0
1000 -> [1 -1 -1 -1]
0 -1 -1 -1
-1 0 1 1
-1 1 0 1
-1 1 1 0
Addition:
0 0 -2 -2
0 0 0 0
-2 0 0 2
-2 0 2 0
Multiply by [1 1 -1 -1]:
4 0 -4 -4
Multiply by [1 -1 -1 -1]:
4 0 -4 -4
In both cases the recognized pattern is 1000 (1100 is never recovered). So something is not working here.
This source doesn't look very good. For example, it uses the term "inverse" instead of "transpose", and the algorithm for recalling patterns is described incorrectly. Fortunately, if you look at their implementation, it seems to work fine (although it is also low-quality code).
The difference is that when you present a vector to recall a pattern for, you should also convert it to bipolar form (0 becomes -1, 1 becomes +1) and then calculate the dot product with each column of the weight matrix. So in your example, when you present the vector 1001 you calculate:
|0 -2 0 0|
[1 -1 -1 1] * |-2 0 0 0| = [2 -2 -2 2]
|0 0 0 -2|
|0 0 -2 0|
After applying the threshold function it yields the correct result: 1001. For the second vector, 0101:
|0 -2 0 0|
[-1 1 -1 1] * |-2 0 0 0| = [-2 2 -2 2]
|0 0 0 -2|
|0 0 -2 0|
which also gives correct result: 0101.
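A small Python sketch of this corrected recall rule (bipolar probe, dot product with each column of the weight matrix, then threshold); the helper name `recall` is mine:

```python
def recall(weights, pattern):
    """Bipolarize the probe, dot it with each column of the weight
    matrix, and fire (output 1) wherever the sum is greater than zero."""
    bipolar = [2 * b - 1 for b in pattern]  # 0 -> -1, 1 -> +1
    n = len(pattern)
    sums = [sum(bipolar[i] * weights[i][j] for i in range(n)) for j in range(n)]
    return [1 if s > 0 else 0 for s in sums]

# combined weight matrix for the patterns 0101 and 1001
W = [[ 0, -2,  0,  0],
     [-2,  0,  0,  0],
     [ 0,  0,  0, -2],
     [ 0,  0, -2,  0]]
print(recall(W, [1, 0, 0, 1]))  # [1, 0, 0, 1]
print(recall(W, [0, 1, 0, 1]))  # [0, 1, 0, 1]
```

Both stored patterns are fixed points of the network, as the answer shows by hand.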
EDIT:
In your second example you seem to have reached the limit of the Hopfield network's capabilities. First of all, the two patterns you present differ in only one bit, which makes them hard to tell apart. This, combined with the fact that a Hopfield network's capacity is said to be around 0.138 * n (link), n being the number of neurons, seems to explain the problem.
Other sources, like this one (Chapter 6), give a theoretical bound of n / (2 log n) for "almost all" patterns to be retrieved without errors. In that link you can also find other learning rules, and there is another simple example here.
Related
When I do the following bitwise AND operations with numbers in C#, I get the following results:
-3 & 3 = 1
-1 & 1 = 1
0 & 0 = 0
but when I do 8 & -8 I get 8.
Can someone please explain how we get 8 as the result?
If you look at the numbers in hexadecimal format, it will help you understand how the calculation is performed.
Assuming you store the numbers as integers:
3 = 0x00000003
8 = 0x00000008
-3 = 0xFFFFFFFD
-8 = 0xFFFFFFF8
Then, zooming in on the lowest nibble (4 bits), consider the following:
For 3 & -3
0 0 1 1
& 1 1 0 1
-------
0 0 0 1 = 1
For 8 & -8
1 0 0 0
& 1 0 0 0
-------
1 0 0 0 = 8
To see what is going on, you could use the following:
static void Main(string[] args)
{
    show(8);
    show(-8);
}

static void show(int i)
{
    Console.WriteLine($"{i,3} = 0b{Convert.ToString(i, 2).PadLeft(32, '0')}");
}
Output:
8 = 0b00000000000000000000000000001000
-8 = 0b11111111111111111111111111111000
I'm searching for an algorithm that finds the closest point in a 2D array/matrix as fast as possible.
Example Matrix:
1 2 3 4 5 6 7 8 9 10 11
________________________
1: 0 0 0 0 0 0 0 1 0 0 0
2: 0 0 0 0 0 0 0 0 0 0 1
3: 1 0 0 0 0 0 0 0 0 0 0
4: 0 0 0 0 0 X 0 0 0 0 0
5: 0 0 0 0 0 0 0 0 0 0 0
6: 0 0 0 1 0 0 0 0 0 0 0
7: 0 0 0 0 0 0 0 0 0 0 0
X = my point. 1 = a point I'm searching for.
In this example the "1" in row 6 is the one I'm searching for.
Right now I'm scanning the entire array and calculating, for each "1", which one is closest to X, but it seems slow to me.
I thought about a search algorithm that starts on the fields next to X, takes circular steps outward, and stops when a "1" is found.
Does someone have any idea how to realize this?
Edit: The distance to a point is defined by sqrt((x1-x2)^2 + (y1-y2)^2). So the distance to the "1" in row 6 is 2.83.
Edit: To be concrete: this will be used to do a pixel search in an image. The start point is in the middle, and I'm searching for the closest "correct" pixel.
thanks!
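(One way the expanding-ring idea above could be sketched, assuming the grid is a list of 0/1 rows. A subtlety: after the first hit at Chebyshev radius r, rings up to ceil(r * sqrt(2)) still have to be checked, because a cell there can be closer in Euclidean distance.)

```python
import math

def closest_one(grid, x0, y0):
    """Expanding-ring search: examine cells ring by ring (by Chebyshev
    distance) around (x0, y0) and return the closest 1-cell and its
    Euclidean distance. For brevity this re-scans the full square per
    ring; walking only the ring's border would be faster."""
    rows, cols = len(grid), len(grid[0])
    best, best_d = None, float("inf")
    limit = max(rows, cols)  # rings beyond this cannot beat the best hit
    for r in range(max(rows, cols) + 1):
        if r > limit:
            break
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                if max(abs(dx), abs(dy)) != r:
                    continue  # not on this ring's border
                x, y = x0 + dx, y0 + dy
                if 0 <= x < rows and 0 <= y < cols and grid[x][y] == 1:
                    d = math.hypot(dx, dy)
                    if d < best_d:
                        best, best_d = (x, y), d
                        limit = math.ceil(r * math.sqrt(2))
    return best, best_d

# the example matrix, with X at row 4 / column 6 (0-based: (3, 5))
grid = [[0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
        [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]
print(closest_one(grid, 3, 5))  # ((5, 3), 2.828...)
```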
I know that my question may sound a little strange, but I really need some help. I have an algorithm that calculates the number of true pixels in a binary image. Here is an example of how it works:
Binary Image :
0 0 1 0 0 1
1 1 1 1 0 1
1 1 1 1 1 1
1 1 0 0 0 1
0 1 0 0 0 1
Here is the result:
18 15 11 8 6 5
16 13 9 7 5 4
11 9 6 5 4 3
5 4 2 2 2 2
2 2 1 1 1 1
And this is how it works:
Result (i,j) = result (i+1, j) + result (i, j+1) - result(i + 1, j + 1) + Image(i,j)
Here is an example for the value 18:
18 = 16 + 15 - 13 + 0
My question:
What is the name of this algorithm? I need to find more information about it.
Thank you for your help.
This is called an integral image, or summed-area table. It is used to speed up box filtering, among other things. It is a 2D generalization of a prefix sum.
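As a sketch, the recurrence from the question (a summed-area table built from the bottom-right corner, so each cell counts the 1s in the rectangle from that cell to the bottom-right) could be implemented like this:

```python
def suffix_sum_table(img):
    """result[i][j] = result[i+1][j] + result[i][j+1]
                      - result[i+1][j+1] + img[i][j]
    i.e. the number of 1s from (i, j) to the bottom-right corner;
    out-of-range neighbors count as zero."""
    h, w = len(img), len(img[0])
    res = [[0] * w for _ in range(h)]
    for i in range(h - 1, -1, -1):
        for j in range(w - 1, -1, -1):
            below = res[i + 1][j] if i + 1 < h else 0
            right = res[i][j + 1] if j + 1 < w else 0
            diag = res[i + 1][j + 1] if i + 1 < h and j + 1 < w else 0
            res[i][j] = below + right - diag + img[i][j]
    return res

img = [[0, 0, 1, 0, 0, 1],
       [1, 1, 1, 1, 0, 1],
       [1, 1, 1, 1, 1, 1],
       [1, 1, 0, 0, 0, 1],
       [0, 1, 0, 0, 0, 1]]
print(suffix_sum_table(img)[0][0])  # 18, as in the question's example
```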
One of the basics of Computer Science is knowing how numbers are represented in 2’s complement. Imagine that you write down all numbers between A and B inclusive in 2’s complement representation using 32 bits. How many 1’s will you write down in all ?
Input:
The first line contains the number of test cases T (<=1000). Each of the next T lines contains two integers A and B.
Output:
Output T lines, one corresponding to each test case.
Constraints:
-2^31 <= A <= B <= 2^31 - 1
Sample Input:
-2 0
-3 4
-1 4
Sample Output:
63
99
37
Explanation:
For the first case, -2 contains 31 1’s followed by a 0, -1 contains 32 1’s and 0 contains 0 1’s. Thus the total is 63.
For the second case, the answer is 31 + 31 + 32 + 0 + 1 + 1 + 2 + 1 = 99
// note: this brute force is far too slow for ranges spanning the 32-bit space
for (int i = 1; i <= line[0]; i++)  // was i < line[0], which skips the last test case
{
    long long numOf1s = 0;
    for (long long j = line[i].A; j <= line[i].B; j++)  // long long: an int j++ overflows when B is INT_MAX
    {
        // casting to unsigned exposes the 32-bit two's complement pattern of j
        for (unsigned int n = (unsigned int)j; n > 0; n >>= 1)
            numOf1s += n & 1;
    }
    printf("%lld\n", numOf1s);
}
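The brute force above mirrors this Python sketch, which reproduces the three sample answers by masking each number to its 32-bit two's complement pattern (still far too slow for the full constraints, where a closed-form count of set bits below a bound would be needed; the helper name is mine):

```python
def ones_in_range(a, b):
    """Total number of 1 bits written when listing every integer in
    a..b inclusive in 32-bit two's complement representation."""
    # x & 0xFFFFFFFF maps a negative x to its 32-bit two's complement value
    return sum(bin(x & 0xFFFFFFFF).count("1") for x in range(a, b + 1))

print(ones_in_range(-2, 0))  # 63
print(ones_in_range(-3, 4))  # 99
print(ones_in_range(-1, 4))  # 37
```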
I've created a few table types in my database to be used as stored procedure parameters. These correspond to real database tables, so if they're out of sync there'll be a problem. I'd like to add a unit test that looks at the two and fails if they are different, but I'm not sure where to start.
I don't know if there's a recommended way to do this - I was going to try to somehow pull out the column information, loop through it and fail the test if they're different, but it seems a bit fiddly.
Is there a better way?
For SQL Server 2008, take a look at the sys.tables, sys.table_types and sys.columns system tables.
In one of my databases I have a table type called candidateRoutes and a physical (real) table called RouteArea
The following two queries:
select sys.columns.*
from sys.table_types
join sys.columns on sys.columns.object_id = sys.table_types.type_table_object_id
where sys.table_types.name = 'candidateRoutes'

select sys.columns.*
from sys.tables
join sys.columns on sys.columns.object_id = sys.tables.object_id
where sys.tables.name = 'RouteArea'
return:
object_id name column_id system_type_id user_type_id max_length precision scale collation_name is_nullable is_ansi_padded is_rowguidcol is_identity is_computed is_filestream is_replicated is_non_sql_subscribed is_merge_published is_dts_replicated is_xml_document xml_collection_id default_object_id rule_object_id is_sparse is_column_set
215671816 RouteId 1 56 56 4 10 0 NULL 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
215671816 Area 2 240 130 -1 0 0 NULL 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
and
object_id name column_id system_type_id user_type_id max_length precision scale collation_name is_nullable is_ansi_padded is_rowguidcol is_identity is_computed is_filestream is_replicated is_non_sql_subscribed is_merge_published is_dts_replicated is_xml_document xml_collection_id default_object_id rule_object_id is_sparse is_column_set
1675153013 RouteId 1 127 127 8 19 0 NULL 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1675153013 ValidFrom 2 61 61 8 23 3 NULL 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1675153013 ValidTo 3 61 61 8 23 3 NULL 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1675153013 Line 4 240 130 -1 0 0 NULL 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1675153013 Area 5 240 130 -1 0 0 NULL 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
so you could perhaps do something like this:
with TableType as (
    select name, user_type_id, max_length, precision
    from sys.columns
    where object_id = (select type_table_object_id from sys.table_types where name = 'candidateRoutes')
),
PhysicalTable as (
    select name, user_type_id, max_length, precision
    from sys.columns
    where object_id = (select object_id from sys.tables where name = 'RouteArea')
)
select *
from TableType
full join PhysicalTable on TableType.name = PhysicalTable.name
where TableType.name is null
   or PhysicalTable.name is null
   or TableType.user_type_id <> PhysicalTable.user_type_id
   or TableType.max_length <> PhysicalTable.max_length
   or TableType.precision <> PhysicalTable.precision
but including scale, collation_name, is_nullable, etc., to find all columns that do not match. In my case, I get:
name user_type_id max_length precision name user_type_id max_length precision
RouteId 56 4 10 RouteId 127 8 19
NULL NULL NULL NULL ValidFrom 61 8 23
NULL NULL NULL NULL ValidTo 61 8 23
NULL NULL NULL NULL Line 130 -1 0
If no rows are returned, the type and the table are the same.
Like you said, with C# you would have to dump the data from both tables into separate DataSets, then loop through and compare. That would be a resource hog, though, and will most likely yield very undesirable performance if you have thousands of records.
Do you have to do it in C#? Why don't you do the comparison in SQL and return a bool of the results (true, if everything is the same, false if there was a difference)?
But if you must do it in .NET, have you looked into F#? I've been doing a bit of reading, and it looks like F# might be a performance improvement over C# for this kind of data analysis.
Here's an article that may help you with F# and SQL.
http://tomasp.net/blog/dynamic-sql.aspx
Or, you can look into LINQ (sorry, I'm inexperienced with it), it may be the answer to what you're looking for.
http://www.linqpad.net/WhyLINQBeatsSQL.aspx