Build #127 was successful. Changes by Marcus Lewis.

Build result summary


Duration: 3 minutes
Agent: bamboo-agent-0 (34)
Revisions: 43e3ef94b54f16c3aa77b55bd2e5d9822720cf3d, a4b88d88551fbeb592edbedfa2737145de57d3a3
Successful since: #121

Code commits

Author        Commit                                    Message
Marcus Lewis  a4b88d88551fbeb592edbedfa2737145de57d3a3  Merge pull request #816 from mrcslws/multi-column-location-inference
                                                        New experiment: Multi-column location inference
Marcus Lewis  7b6f729b987df1f677d06bef483a92c8ca5cd51f  Denote the hovered sensor with color, not size
Marcus Lewis  c8e7cd42750ddcb1923583950b8c29aa74264cb1  Bugfix: draw the firing fields in the right place
Marcus Lewis  941e4e3dd3b3593c99b3b68a31991f364f2d682f  New experiment: Multi-column location inference
This uses a new algorithm for calculating the allocentric location:
cortical columns recall all of the allocentric locations where they've
ever sensed an input, and then the cortical columns vote on the body's
allocentric location. The body's allocentric location is calculated
from the sensor's allocentric location and the sensor's egocentric
location. The location is represented by an array of "location
modules" that are inspired by grid cell modules.
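The voting step described above can be sketched in miniature. This is a hypothetical illustration, not the repository's actual code: each column recalls every allocentric location where it has sensed its current feature, converts those to candidate body locations by subtracting its sensor's egocentric offset, and the columns then vote by intersecting their candidate sets. All names (`candidate_body_locations`, `vote`) and the 2D-coordinate representation are assumptions for this sketch; the real algorithm uses grid-cell-like location modules rather than explicit coordinates.

```python
def candidate_body_locations(recalled_sensor_locations, egocentric_offset):
    """Body location = sensor's allocentric location - sensor's egocentric offset."""
    dx, dy = egocentric_offset
    return {(x - dx, y - dy) for (x, y) in recalled_sensor_locations}

def vote(columns):
    """Intersect the candidate body locations across columns.

    `columns` is a list of (recalled_sensor_locations, egocentric_offset)
    pairs, one per cortical column.
    """
    candidates = [candidate_body_locations(locs, offset)
                  for locs, offset in columns]
    result = candidates[0]
    for c in candidates[1:]:
        result &= c
    return result

# One "touch" with two columns: each column's feature is ambiguous on its
# own (two recalled locations each), but the intersection of their
# candidate body locations is unique.
columns = [
    ({(2, 3), (5, 1)}, (1, 0)),  # column A: feature previously sensed at two places
    ({(4, 4), (2, 4)}, (1, 1)),  # column B
]
print(vote(columns))  # → {(1, 3)}
```

This is what lets the inference succeed without movement: a single simultaneous sensation from multiple columns can already narrow the candidates to one location.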

This is similar in principle to my previous algorithm, which infers the
allocentric location by performing path integration on unions of
locations. The difference is that this new solution doesn't require
movement. It still benefits from movement, but with multiple cortical
columns it can also infer locations from a single "touch".

(I'll probably do more writing on this topic.)