CN110019582A - Cognitive map construction method based on space and Motion-Joint coding - Google Patents

Cognitive map construction method based on space and Motion-Joint coding

Info

Publication number
CN110019582A
CN110019582A (application CN201710748763.8A)
Authority
CN
China
Prior art keywords
cell
space
speed
coding
indicate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710748763.8A
Other languages
Chinese (zh)
Other versions
CN110019582B (en)
Inventor
斯白露 (Si Bailu)
曾太平 (Zeng Taiping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Institute of Automation of CAS
Original Assignee
Shenyang Institute of Automation of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Institute of Automation of CAS filed Critical Shenyang Institute of Automation of CAS
Priority to CN201710748763.8A priority Critical patent/CN110019582B/en
Publication of CN110019582A publication Critical patent/CN110019582A/en
Application granted granted Critical
Publication of CN110019582B publication Critical patent/CN110019582B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29Geographical information databases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Remote Sensing (AREA)
  • Feedback Control In General (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a cognitive map construction method based on the joint coding of space and motion, and belongs to the technical field of robot navigation. Inspired by the entorhinal cortex-hippocampus circuit, the internal positioning system of the mammalian brain, the method uses the spatial navigation coding properties of place cells, head direction cells, grid cells and speed cells to form a continuous attractor neural network model, the spatial memory network, which simultaneously encodes the animal's spatial and motion features and performs path integration over angular velocity and linear velocity. In this network, the asymmetric connection weights between neurons generate intrinsic activity peaks that can move spontaneously; once there is a velocity input, the attractor network can stably integrate the linear and angular velocity, forming a stable encoding of the robot's head direction and position. Input from a single camera forms local view cells, which can correct the accumulated error of path integration. Using a single camera, the present invention can robustly construct a coherent and consistent semi-metric topological map.

Description

Cognitive map construction method based on space and Motion-Joint coding
Technical field
The invention belongs to the technical field of robot navigation, and in particular relates to a map construction method based on the joint coding of space and motion in mammalian spatial cognition, using a single camera as the information input.
Background technique
Integrating robots into our daily lives poses a major challenge: how to endow a machine with the spatial cognitive ability to explore its surroundings as animals do. Over the past two decades, many researchers have devoted great effort to giving robots the ability to navigate autonomously, that is, to localize, build maps and navigate simultaneously without human intervention. This has formed an important field of robotics research, Simultaneous Localization and Mapping (SLAM). Existing methods fall broadly into two classes. The classical methods, referred to as probabilistic SLAM, typically use the extended Kalman filter, the particle filter and similar techniques; various filtering algorithms are used to estimate the robot's pose. These methods, however, are limited by their need for expensive sensors and large amounts of computing resources, and by the assumption of a static or structured environment. Although the probabilistic approach provides an elegant mathematical treatment, its lack of robustness severely limits its application in real physical environments.
The other class of methods draws inspiration from neuroscience and aims to build neural network models that mimic the spatial navigation mechanisms of mammals. With the successive discoveries of place cells, head direction cells, grid cells and speed cells, a large number of neural network models have been proposed to explain the navigation mechanisms inside the brain. They fall broadly into three classes: oscillatory interference models, continuous attractor network models and self-organizing models. Oscillatory interference models are prone to drift error and lack robustness. Continuous attractor network models can still achieve stable path integration even when the connection weights are perturbed. Although many computational models have been proposed to explain mammalian navigation mechanisms, few robot navigation systems actually use these spatial navigation mechanisms. The RatSLAM model sacrifices biological fidelity: its pose cells encode different locations by moving an energy peak directly, rather than through the dynamical properties of the network. In the work of Jauffret, the firing model of grid cells is realized by modular mapping, but the grid cells can neither help the mobile robot localize nor support construction of a map of the environment. In 2009, Yuan et al. used the purely positional grid cell model proposed by Yoram Burak and Ila R. Fiete, combined with a place cell network, to construct a cognitive map within the range of an office environment. Their model does not account for the joint coding of velocity with position and head direction, and was tested only at small scale, so its robustness over large ranges remains to be verified.
In short, mobile robots are still far from the practical stage and cannot yet be integrated into people's lives. As experimental neuroscience continues to make discoveries, neural network models of mammalian spatial cognition will be further refined. Further verifying the brain's spatial navigation ability will help to build autonomous mobile robots with human-like homing abilities.
Summary of the invention
The object of the present invention is to develop a cognitive map construction system for mobile robots, based on the local view cells of the mammalian visual cortex, the head direction-velocity joint coding cells of the entorhinal cortex and the grid-velocity joint coding cells of the deep entorhinal cortex, using continuous attractor networks to encode the robot's rotation, translation, head direction and position. With a cheap single camera as the information input, the system can construct city maps of large-scale outdoor environments over routes of up to 66 km; the system block diagram is shown in Fig. 1.
The technical solution adopted by the present invention to solve the technical problem is a cognitive map construction method based on the joint coding of space and motion, comprising the following steps:
The vision measurement unit receives camera images, obtains the angular velocity and linear velocity of the robot from changes in the visual scene, and feeds them to the spatial memory network and the map construction node;
Local view cells extract local view templates from the camera images to encode different scenes; when the current scene matches a previously seen scene, the robot is considered to have returned to a previously visited position, the corresponding local view cell is activated, and it in turn activates the corresponding grid cells and head direction cells in the spatial memory network;
The spatial memory network receives the input of the local view cells and the vision measurement unit, performs path integration and visual calibration, and sends instructions to the map construction node;
The map construction node reads the spatial code of the environment from the spatial memory network and forms the environment map according to the angular velocity and linear velocity.
If the current scene differs from all previous scenes, a new local view cell is created.
The path integration and visual calibration comprise the following steps:
Head direction-velocity joint coding attractor network model: the jointly coding head direction cell network forms a ring attractor in neural space, presenting a single activity peak in the network; the peak encodes the real heading angle of the robot in the physical environment. The modulated velocity input activates part of the head direction neurons, and the movement of the activity peak is proportional to the angular velocity of the robot in the physical environment;
Grid-velocity joint coding attractor network model: owing to the periodic boundary conditions, the jointly coding grid cells form a torus-shaped attractor network in neural space; the phase of the grid pattern encodes the robot's position in physical space, and the grid cells are activated by the head direction cells and speed cells located in the entorhinal cortex; the movement of the grid pattern is proportional to the robot's velocity in the physical environment. Together, the head direction cells and grid cells form, in neural space, a joint encoding of the robot's head direction and position in the physical environment.
The head direction-velocity joint coding attractor network model is:
where I_v denotes the modulated velocity input, I_view denotes the calibration input from the local view cells, and m denotes the firing rate of the cell;
f(x) = x for x > 0;
f(x) = 0 otherwise;
The weight from the presynaptic neuron (θ′, v′) to the postsynaptic neuron (θ, v) is obtained by the following formula:
J(θ, v | θ′, v′) = J0 + J1 cos(θ − θ′ − v′) cos(λ(v − v′))
where J0 < 0 denotes uniform inhibition and J1 > 0 the weight strength; θ and v denote, respectively, the direction coding and velocity coding of the postsynaptic neuron; θ′ and v′ denote, respectively, the direction coding and velocity coding of the presynaptic neuron; λ controls the size of the velocity peak; L_r denotes the extreme value of the coded velocity.
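As a concrete illustration, the weight kernel above can be evaluated numerically. The sketch below (parameter values J0, J1, λ are illustrative assumptions, not taken from the patent) builds J(θ, v | θ′, v′) = J0 + J1 cos(θ − θ′ − v′) cos(λ(v − v′)) and checks that a presynaptic neuron excites most strongly the postsynaptic neurons whose preferred direction is shifted by the presynaptic velocity preference v′; this asymmetry is what lets the activity peak travel around the ring.

```python
import numpy as np

def weight(theta, v, theta_p, v_p, J0=-0.5, J1=1.0, lam=1.0):
    """Connection weight from presynaptic neuron (theta_p, v_p)
    to postsynaptic neuron (theta, v); J0 < 0 is uniform inhibition,
    J1 > 0 scales the direction-shifted excitation."""
    return J0 + J1 * np.cos(theta - theta_p - v_p) * np.cos(lam * (v - v_p))

# Discretise the head-direction ring.
thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)

theta_p, v_p = 1.0, 0.4          # presynaptic preferred direction and velocity
w = weight(thetas, v_p, theta_p, v_p)

# The outgoing weights peak at theta = theta_p + v_p: the kernel is
# shifted by the presynaptic velocity preference, so a positive
# velocity pushes the activity bump around the ring.
peak = thetas[np.argmax(w)]
print(round(peak, 3))            # close to theta_p + v_p = 1.4
```

With v_p = 0, the kernel is symmetric and the bump stays in place; any nonzero velocity preference tilts it, which is the mechanism behind the moving activity peak the text describes.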
The modulated velocity input is obtained as follows:
the angular velocity input V is mapped to an ideal activity peak position u(V) in neural space,
u(V) = arctan(τV)
where τ denotes the time constant of the head direction-velocity joint coding attractor network model;
after modulation by a Gaussian function, the ideal activity peak position is fed into the head direction-velocity joint coding attractor network model, realized by the following formula:
where I_r denotes the input amplitude of the angular velocity, ε denotes the strength of the velocity modulation, and σ_r denotes its sharpness;
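The Gaussian modulation formula itself is not reproduced in the text above (it was a drawing in the original), so the sketch below assumes one common form, I_v(v) = I_r (1 + ε exp(−(v − u(V))² / (2σ_r²))), purely for illustration of how u(V) = arctan(τV) picks out the neurons on the velocity axis:

```python
import numpy as np

def velocity_input(V, vs, tau=1.0, I_r=1.0, eps=2.0, sigma_r=0.3):
    """Map an angular velocity V to the ideal activity-peak position
    u(V) = arctan(tau * V) on the velocity axis, then inject a
    Gaussian-modulated current centred there (assumed functional form)."""
    u = np.arctan(tau * V)
    return I_r * (1.0 + eps * np.exp(-(vs - u) ** 2 / (2.0 * sigma_r ** 2)))

# Velocity axis of the joint (theta, v) network, v in [-L_r, L_r].
L_r = np.pi / 2
vs = np.linspace(-L_r, L_r, 101)

I = velocity_input(0.8, vs)
# The injected current peaks at the neuron whose preferred velocity is
# closest to u(V) = arctan(0.8), about 0.675, on the positive side.
print(round(vs[np.argmax(I)], 3))
```

The arctan saturates large velocities smoothly, so the peak position always stays inside the finite velocity axis of the network.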
The head direction is obtained as follows:
the head direction is estimated by a Fourier transform:
ψ = ∠(∫∫ m(θ, v) exp(iθ) dθ dv)
where ∠(Z) denotes the angle of the complex number Z;
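The estimate ψ = ∠(∫∫ m(θ, v) exp(iθ) dθ dv) is a population-vector decode: each neuron contributes a unit vector at its preferred direction, weighted by its firing rate, and the angle of the vector sum is the decoded heading. A minimal sketch, with a synthetic activity bump standing in for the network state:

```python
import numpy as np

def decode_heading(m, thetas):
    """Population-vector decode: angle of the complex sum
    of m(theta) * exp(i * theta) over the ring."""
    z = np.sum(m * np.exp(1j * thetas))
    return np.angle(z) % (2.0 * np.pi)

thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)

# Synthetic activity bump centred on a true heading of 2.5 rad
# (a von Mises profile stands in for the attractor's activity peak).
true_heading = 2.5
m = np.exp(np.cos(thetas - true_heading) / 0.2)

print(round(decode_heading(m, thetas), 3))   # 2.5
```

Because the decode uses the full population rather than the single most active neuron, it is robust to noise and returns headings between the discrete preferred directions.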
The angular velocity is obtained as follows:
the angular velocity is estimated from the phase of the Fourier transform:
and then mapped into the actual physical space by the following formula:
The grid-velocity joint coding attractor network model is as follows:
where I_v denotes the modulated velocity input, I_view denotes the calibration current input from the local view cells, m denotes the firing rate of the cell, and L_t denotes the extreme value of the velocity;
The weight from the presynaptic neuron (θ′, v′) to the postsynaptic neuron (θ, v) is obtained by the following formula:
where J0 < 0 denotes uniform inhibition and J_k > 0 the weight strength; k denotes the number of activity peaks; θ encodes the two positional dimensions of the environment and v the two velocity dimensions; θ′ and v′ denote, respectively, the position coding and velocity coding of the presynaptic neuron; λ controls the size of the velocity peak.
The modulated velocity input is obtained as follows:
in neural space, the velocities along the X and Y axes form a vector, which is mapped onto the desired velocity axis positions by the following formula:
where S denotes the velocity conversion coefficient between physical space and neural coding space, k denotes the number of activity peaks, and τ denotes the time constant of the grid-velocity joint coding attractor network model;
then the Gaussian-modulated velocity is fed into the grid-velocity joint coding attractor network model by the following formula:
where I_t denotes the input amplitude of the linear velocity, ε denotes the strength of the velocity modulation, and σ_r denotes its sharpness.
The linear velocities along the two velocity dimensions are obtained as follows:
first, the positions of the activity peaks on the two velocity axes are estimated by the following formula:
where ∠(Z) denotes the angle of the complex number Z, and φ_j (j = x, y) denotes the activity peak position on the x or y axis of the velocity dimension;
then, the phases of the activity peaks are mapped from neural space into physical space by the following formula:
where V_j (j = x, y) denotes the velocity of the robot in the physical environment, S denotes the velocity conversion coefficient between physical space and neural coding space, k denotes the number of activity peaks, and τ denotes the time constant of the grid-velocity joint coding attractor network model.
The position of the robot in physical space is obtained from the grid-velocity joint coding attractor network model by the following steps:
8.1) the phase of the grid cells: the grid pattern is projected onto three axes at equal angular intervals; the projection vectors are as follows:
then the phase on each projection axis is computed by the following formula:
where l1 = 1, l2 = l3 = sin α, α = arctan(2), and j denotes the index, here 1, 2, 3;
8.2) the grid pattern phase is then computed by the following formula:
8.3) finally, the position of the robot in physical space is estimated:
where the coefficient denotes the scale ratio between the robot's physical space and neural space.
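The projection formulas themselves were drawings in the original and are not reproduced above. The core idea, however, is that a periodic grid phase fixes position only up to an integer number of grid periods, and the ambiguity is resolved by tracking the phase continuously over time. This can be illustrated in one dimension (an illustration of the principle, not the patent's three-axis projection scheme; the grid period is a made-up value):

```python
import numpy as np

period = 0.5          # grid spacing in metres (illustrative)

# Simulated trajectory and the wrapped grid phase it would produce.
x_true = np.linspace(0.0, 3.0, 300)              # robot moves 3 m
phase = (2.0 * np.pi * x_true / period) % (2.0 * np.pi)

# Each phase sample only fixes position modulo one grid period;
# unwrapping by continuity recovers the full trajectory.
x_est = np.unwrap(phase) * period / (2.0 * np.pi)

print(round(float(np.max(np.abs(x_est - x_true))), 6))   # 0.0
```

Unwrapping succeeds as long as the robot moves less than half a grid period between samples; in the full 2-D method, the three projection axes play the analogous role of turning the toroidal grid phase into a planar position estimate.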
Activating the corresponding local view cell, and the corresponding grid cells and head direction cells in the spatial memory network, comprises:
1) the local view cell calibrates the head direction-velocity joint coding attractor network model by means of Gaussian modulation, realized by the following formula:
where I_d denotes the amplitude of the injected energy, ψ denotes the phase associated with the local view cell, and σ_d denotes the sharpness of the Gaussian modulation;
2) the local view cell calibrates the grid-velocity joint coding attractor network model, generating the firing pattern of the grid cells with an oscillatory interference model, by the following formula:
where I_p denotes the amplitude of the injected energy, C denotes a constant, and the associated phase is that of the local view cell; k denotes the number of velocity coding peaks in neural space; j denotes the index, here 1, 2, 3; l1 = 1, l2 = l3 = sin α, α = arctan(2).
The map construction node reading the spatial code of the environment from the spatial memory network and forming the environment map according to velocity comprises the following steps:
The map construction node uses the angular velocity, the linear velocity, and the position and head direction encoded by the spatial memory network to create topological map nodes and the connections between them, and then optimizes the map by standard graph optimization.
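Concretely, the correction step after a loop closure can be sketched as distributing the accumulated odometry drift back along the chain of topological nodes. This is a simplified stand-in for the standard graph optimization the text refers to, with made-up poses and drift:

```python
import numpy as np

def close_loop(poses):
    """Given a chain of 2-D node positions whose last node should
    coincide with the first (a detected loop closure), distribute the
    accumulated drift linearly along the chain."""
    poses = np.asarray(poses, dtype=float)
    drift = poses[-1] - poses[0]
    n = len(poses) - 1
    weights = np.linspace(0.0, 1.0, n + 1)[:, None]   # 0 at start, 1 at end
    return poses - weights * drift

# A square trajectory whose path integration drifted: the robot is back
# at the start, but odometry says it ended 0.39 m away from it.
chain = [(0, 0), (1, 0), (1, 1), (0, 1), (0.3, 0.25)]
corrected = close_loop(chain)
print(corrected[-1])    # back at the origin: [0. 0.]
```

A full graph optimizer minimizes the error over all odometry and loop-closure edges jointly; the linear redistribution above is the one-loop special case and conveys why loop closures straighten out the whole chain rather than just the last node.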
The present invention has the following beneficial effects and advantages:
1. The present invention can integrate motion information and perceptual information to achieve a stable encoding of the spatial environment. The neural network architecture adopted is consistent with the synaptic connectivity of the mammalian entorhinal cortex; the model of the deep entorhinal cortex has the ability of joint coding with velocity, similar to single-neuron results obtained in neurobiology, and thus has high biological fidelity.
2. Experimental results show that this cognitive map construction system can use image information collected by a cheap single camera to successfully construct a coherent and consistent topological map of a large-scale suburban area (shown in Fig. 4A), as shown in Fig. 4B.
Brief description of the drawings
Fig. 1 is the block diagram of the cognitive map construction system of the invention;
Fig. 2 is the schematic diagram of the phase estimation of the grid pattern of the invention;
Fig. 3A shows the neural activity of the head direction-velocity cells;
Fig. 3B shows the neural activity of the grid-velocity cells;
Fig. 3C shows the visual information;
Fig. 3D shows the topological map;
Fig. 4A shows the map of the test environment;
Fig. 4B shows the map produced by the cognitive map construction system.
Specific embodiments
The present invention will be further described in detail below with reference to the embodiments.
As shown in Fig. 1, the present invention discloses a cognitive map construction method based on the joint coding of space and motion, belonging to the technical field of robot navigation. Inspired by the entorhinal cortex-hippocampus circuit, the internal positioning system of the mammalian brain, the method uses the spatial navigation coding properties of place cells, head direction cells, grid cells and speed cells to form a continuous attractor neural network model, the spatial memory network, which can simultaneously encode the animal's spatial and motion features and perform path integration over angular velocity and linear velocity. In this network, the asymmetric connection weights between neurons generate intrinsic activity peaks that can move spontaneously; once there is a velocity input, the attractor network can stably integrate the linear and angular velocity, forming a stable encoding of the robot's head direction and position. Input from a single camera forms local view cells, which can correct the accumulated error of path integration. The proposed cognitive map construction method can robustly construct a coherent and consistent semi-metric topological map using a single camera.
The present invention consists of five nodes. The vision measurement unit receives sensor data and measures the angular velocity and linear velocity from changes in the visual scene; the local view cells judge whether a previously visited scene has been entered, and hence whether visual calibration should be performed; the spatial memory network, comprising the head direction-velocity joint coding attractor network model and the grid-velocity joint coding attractor network model, receives the input of the local view cells and the vision measurement, performs path integration and visual calibration, and sends instructions to the map construction node, which builds the topological map;
Head direction-velocity joint coding attractor network model: the jointly coding head direction cell network forms a ring attractor in neural space, presenting a single activity peak in the network. The activity peak encodes the real heading angle of the robot in the physical environment; the modulated velocity input activates a small fraction of the head direction neurons, and the movement of the activity peak is proportional to the robot's angular velocity in the physical environment. The neural activity of the neurons in the head direction-velocity coding attractor network model is shown in Fig. 3A;
Grid-velocity joint coding attractor network model: owing to the periodic boundary conditions (0 to 2π), the jointly coding grid cell model forms a torus-shaped attractor network in neural space. The phase of the grid pattern encodes the robot's position in physical space, and the grid cells are activated by the head direction cells and speed cells located in the entorhinal cortex. The movement of the grid pattern is proportional to the robot's velocity in the physical environment. Together, the head direction cells and grid cells form, in neural space, a joint encoding of the robot's head direction and position in the physical environment. The neural activity of the neurons in the grid-velocity coding attractor network model is shown in Fig. 3B;
Vision measurement: the robot estimates the linear velocity and angular velocity by matching two consecutive frames captured by the camera. The estimated angular velocity forms the head direction information through the head direction-velocity joint coding attractor network model; the estimated linear velocity, combined with the head direction information, completes path integration through the grid-velocity joint coding attractor network model;
Local view cells: local view templates are extracted from the camera images to encode different scenes. If a scene is sufficiently different from all previously observed scenes, a new local view cell is created; if it is sufficiently similar to a previous local view template, the robot is considered to have returned to a previously visited position, the corresponding local view cell is activated, and its input is fed into the spatial memory network. Fig. 3C shows the input visual scene (top), the local view templates (middle) and the current local view template (bottom);
Map construction: the spatial code of the environment is read from the spatial memory network to form the environment map. The movement velocity of the robot in the topological map is measured by the vision measurement; when the robot returns to a previously visited position, a loop closure is formed, and the map is further corrected by optimization. The topological map during map construction is shown in Fig. 3D.
The weight from the presynaptic neuron (θ′, v′) to the postsynaptic neuron (θ, v) in the head direction-velocity joint coding attractor network model is obtained by the following formula:
J(θ, v | θ′, v′) = J0 + J1 cos(θ − θ′ − v′) cos(λ(v − v′))
where J0 < 0 denotes uniform inhibition and J1 > 0 the weight strength.
In the attractor network dynamics of the head direction-velocity joint coding model, the firing rate of neuron (θ, v) is obtained by the following differential equation:
where I_v denotes the modulated velocity input and I_view denotes the calibration current input from the local view cells,
f(x) = x for x > 0; f(x) = 0 otherwise.
Its abbreviated form is as follows:
Angular velocity modulation input:
the external angular velocity input V is mapped to an ideal activity peak position u(V) in neural space,
u(V) = arctan(τV)
where τ denotes the time constant of the attractor network dynamics.
After modulation by a Gaussian function, the ideal activity peak position is fed into the head direction-velocity joint coding model, realized by the following formula:
where I_r denotes the input amplitude of the rotation speed, ε denotes the strength of the velocity modulation, and σ_r denotes its sharpness.
Head direction estimation:
the head direction is estimated by a Fourier transform:
ψ = ∠(∫∫ m(θ, v) exp(iθ) dθ dv)
where ∠(Z) denotes the angle of the complex number Z.
Angular velocity estimation:
the angular velocity is estimated from the phase of the Fourier transform:
and then mapped into the actual physical space by the following formula:
The weight from the presynaptic neuron to the postsynaptic neuron in the grid-velocity joint coding attractor network model is obtained by the following formula:
where θ encodes the two positional dimensions of the environment and v the two velocity dimensions. When k = 2, two activity peaks appear in each of the two positional dimensions, while only one activity peak appears in each of the two velocity dimensions; k denotes the number of activity peaks in the positional dimensions. θ_j and θ_j′ denote, respectively, the postsynaptic and presynaptic neurons in the positional dimensions; v_j and v_j′ denote, respectively, the postsynaptic and presynaptic neurons in the velocity dimensions.
The attractor network dynamics of the grid-velocity joint coding model are as follows:
where the abbreviated forms are
and
Linear velocity modulation input formula:
in neural space, the velocity vector is mapped onto the desired velocity axis positions by the following formula:
where k denotes the number of velocity coding peaks in neural space.
Then, the Gaussian-modulated velocity is fed into the grid-velocity model by the following formula:
Linear velocity estimation in the two dimensions:
first, the positions of the activity peaks on the two velocity axes are estimated by the following formula:
then, the phases of the activity peaks are mapped from neural space into physical space by the following formula:
Phase estimation of the grid cells:
the grid pattern is projected onto three axes at equal angular intervals; the projection vectors are as follows:
then the phase on each projection axis is computed by the following formula:
where l1 = 1, l2 = l3 = sin α, α = arctan(2).
Then the grid pattern phase can be computed through the mapping relationship shown in Fig. 2, by the following formula:
Finally, the position in physical space is estimated from the mapping relationship between neural space and physical space:
where the coefficient denotes the scale ratio between the robot's physical space and neural space.
The position and head direction information stored by the local view cells is fed back, through modulation, into the head direction-velocity joint coding model and the grid-velocity joint coding model. The local view cell calibrates the head direction-velocity joint coding model by means of Gaussian modulation, realized by the following formula:
where I_d denotes the amplitude of the injected energy, ψ denotes the phase of the local view cell, and σ_d denotes the sharpness of the Gaussian modulation.
The local view cell calibrates the grid-velocity joint coding model, generating the firing pattern of the grid cells with an oscillatory interference model, by the following formula:
where I_p denotes the amplitude of the injected energy, C denotes a constant, and the associated phase is that of the local view cell.
The cognitive map construction method consists of five nodes. The vision measurement unit receives sensor data and measures the angular velocity and linear velocity from changes in the visual scene; the local view cells judge whether a previously visited scene has been entered, and hence whether visual calibration should be performed; the spatial memory network, comprising the head direction-velocity joint coding attractor network model and the grid-velocity joint coding attractor network model, receives the input of the local view cells and the vision measurement, performs path integration and visual calibration, and sends instructions to the map construction node, which builds the topological map.
Step 1: data is collected from a network camera, the image data read in is analysed, and information such as image size and frame rate is obtained. The images are collected by a Robot Operating System (ROS) node and published at a rate of one frame per 100 ms in the format of sensor_msgs/Image.msg messages. Fig. 3A to Fig. 3D are operation screenshots of the cognitive map construction system: Fig. 3A shows the neural activity of the head direction-velocity cells; Fig. 3B shows the neural activity of the grid-velocity cells; Fig. 3C shows the input visual scene (top), the local view templates (middle) and the current local view template (bottom); Fig. 3D shows the topological map.
Step 2: the vision measurement node receives the image messages and obtains the velocity information by comparing consecutive image matrices. The relative offset between two consecutive images is computed by comparing the differences in absolute mean intensity of their scanline intensity profiles, and is converted into the true rotational and translational speed in physical space through velocity gain constants. The result is then published in the format of the ROS standard message geometry_msgs/Twist.msg.
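The scanline-profile comparison above can be sketched as follows: find the horizontal pixel shift that minimizes the mean absolute difference between the two profiles over their overlap, then scale it by a gain constant to obtain rotation speed. This is a minimal illustration with synthetic data, not the patent's exact implementation:

```python
import numpy as np

def profile_shift(prev, curr, max_shift=20):
    """Estimate the horizontal shift (in pixels) between two scanline
    intensity profiles by minimising the mean absolute difference
    over the overlapping region."""
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = prev[s:], curr[:len(curr) - s]
        else:
            a, b = prev[:len(prev) + s], curr[-s:]
        err = np.mean(np.abs(a - b))
        if err < best_err:
            best_err, best_shift = err, s
    return best_shift

# Synthetic scanline profile and a copy shifted by 7 pixels,
# as if the camera panned between two consecutive frames.
rng = np.random.default_rng(0)
prev = rng.uniform(0.0, 255.0, 200)
curr = np.roll(prev, -7)

shift = profile_shift(prev, curr)
print(shift)   # 7
# angular velocity = shift * gain / frame interval (gain: rad per pixel)
```

The translational speed is obtained analogously from the residual intensity change at the best shift, again scaled by an empirically chosen gain constant.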
Step 3: at the same time, the transmitted image information is also received by the local view cell node. If the observed current scene is sufficiently similar to a previously observed scene template, the associated local view cell is activated. If the current scene matches none of the previous scene templates, a new visual template is created and associated with a new local view cell. The id of the current local view cell is then published. Fig. 3C shows the local view templates (middle) and the current local view template (bottom).
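The template bank of step 3 can be sketched as a list of stored scene profiles with a similarity threshold: an incoming scene either activates the best-matching cell or spawns a new one. The threshold and the mean-absolute-difference similarity measure are illustrative assumptions:

```python
import numpy as np

class LocalViewCells:
    """Minimal local-view-cell bank: each cell stores one scene
    template (a 1-D intensity profile); an incoming scene either
    activates the best-matching cell or creates a new one.
    The threshold value is an illustrative assumption."""

    def __init__(self, threshold=10.0):
        self.templates = []
        self.threshold = threshold

    def observe(self, profile):
        profile = np.asarray(profile, dtype=float)
        for cell_id, tmpl in enumerate(self.templates):
            if np.mean(np.abs(tmpl - profile)) < self.threshold:
                return cell_id          # familiar scene: activate this cell
        self.templates.append(profile)  # novel scene: create a new cell
        return len(self.templates) - 1

rng = np.random.default_rng(1)
scene_a = rng.uniform(0.0, 255.0, 64)
scene_b = rng.uniform(0.0, 255.0, 64)

cells = LocalViewCells()
print(cells.observe(scene_a))        # 0  (new cell)
print(cells.observe(scene_b))        # 1  (new cell)
print(cells.observe(scene_a + 1.0))  # 0  (recognised despite small change)
```

Re-activating an existing cell is what signals a revisited place and triggers the calibration of the attractor networks described in step 4.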
Step 4: the spatial memory network node receives the angular velocity and linear velocity as ROS (Robot Operating System) geometry_msgs/Twist.msg messages, together with the id of the partial view cell activated by the current ROS message. Within the spatial memory network node, the messages from the vision measurement node and the partial view cell node are handled by two separate interrupt routines. The angular velocity is received by the head direction-velocity joint coding attractor network model, which generates the robot's head direction response; the linear velocity, together with the head direction, is received by the grid-velocity joint coding attractor network model, which generates the position response of the grid pattern. When no partial view cell id is received, the current position is estimated from the phase of the grid cells and the head direction from the phase of the head direction cells; this position and head direction information is associated with the currently active partial view cell, and a message for establishing a new topological node and edge of the experience map is sent to the map construction node. When a partial view cell id is received, the position and head direction previously associated with that id are retrieved, and the firing patterns of the grid cells and of the head direction cells are corrected according to the partial view cell, thereby correcting the encoded direction and position. While the firing patterns of the head direction cells and grid cells are being corrected, the current position and head direction are estimated; if they are sufficiently similar to the information of some experience map node, a message is sent to the map construction node to create a topological map node and an edge connecting it to that existing node. The neural activity of the neurons in the grid-velocity joint coding attractor network model is shown in Fig. 3, in particular Fig. 3B.
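A minimal, ROS-free sketch of the two message handlers of step 4; plain path integration stands in for the attractor-network dynamics, and the command names are hypothetical:

```python
import math

class SpatialMemoryNode:
    """Sketch of step 4's message handling: one callback integrates the
    odometry (standing in for the attractor-network dynamics), the other
    handles partial view cell ids.  The node/edge commands are illustrative."""

    def __init__(self):
        self.x = self.y = self.theta = 0.0   # estimated pose (phase analogue)
        self.known_views = {}                # partial view cell id -> stored pose
        self.commands = []                   # messages for the map construction node

    def on_twist(self, v, w, dt):
        """Odometry callback: integrate linear velocity v and angular velocity w."""
        self.theta = (self.theta + w * dt) % (2 * math.pi)
        self.x += v * dt * math.cos(self.theta)
        self.y += v * dt * math.sin(self.theta)

    def on_view_cell(self, cell_id):
        """View cell callback: associate a new view with the current pose, or
        snap the pose back to the view's stored pose (visual calibration)."""
        if cell_id not in self.known_views:
            self.known_views[cell_id] = (self.x, self.y, self.theta)
            self.commands.append(("create_node", cell_id))
        else:
            self.x, self.y, self.theta = self.known_views[cell_id]
            self.commands.append(("create_edge", cell_id))
```

Revisiting a known view cell both corrects the accumulated pose estimate and triggers a loop-closure edge, mirroring the correction described above.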
Step 5: besides receiving from the spatial memory network node the messages on how to create the nodes and edges of the topological map, the map construction node also receives the velocity information published by the vision measurement node. The received velocities are mainly used to accumulate the relative distance travelled and the relative angle turned since the last command to modify the topological map was received. When a create-node command is received, a new node is created relative to the previous topological map node according to the accumulated distance and angle, together with an edge connecting the current topological map node with the previous one. When a create-edge command is received, the current node is connected to another node according to the currently accumulated distance and angle; at the same time, because of the path integration error accumulated by the vision-based velocity estimation, the current map is further optimized by a standard graph optimization algorithm. Fig. 4A-4B illustrate the map construction: the test environment shown in Fig. 4A is the St. Lucia suburb of Brisbane, Australia, measuring 3 km x 1.6 km with a total route length of 66 km, which yields the cognitive topological map shown in Fig. 4B.
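The accumulation and node/edge creation of step 5 can be sketched as follows (layout only; the graph optimization step is omitted here):

```python
import math

class ExperienceMap:
    """Sketch of step 5: accumulate travelled distance and turned angle
    between commands, then lay out topological nodes and edges from the
    accumulated values.  Bookkeeping is simplified for illustration."""

    def __init__(self):
        self.nodes = [(0.0, 0.0)]      # (x, y) of each topological node
        self.edges = []                # (from_id, to_id)
        self.current = 0
        self.dist = 0.0                # distance accumulated since last command
        self.heading = 0.0             # heading accumulated from the turn rate

    def on_velocity(self, v, w, dt):
        """Accumulate relative distance and angle from the vision odometry."""
        self.heading += w * dt
        self.dist += v * dt

    def create_node(self):
        """Place a new node relative to the previous one and link them."""
        x0, y0 = self.nodes[self.current]
        new = (x0 + self.dist * math.cos(self.heading),
               y0 + self.dist * math.sin(self.heading))
        self.nodes.append(new)
        self.edges.append((self.current, len(self.nodes) - 1))
        self.current = len(self.nodes) - 1
        self.dist = 0.0

    def create_edge(self, node_id):
        """Loop closure: connect the current node to a previously created one."""
        self.edges.append((self.current, node_id))
        self.current = node_id
        self.dist = 0.0
```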
Step 6: the extracted visual information, the firing rates of the head direction-velocity joint coding cells and the grid-velocity joint coding cells, and the topological map are visualized; the display of this information is implemented in Python and C++.
Step 7: data storage and post-processing. The required information is recorded into a rosbag file using the ROS bag writing functions, and is further analyzed, processed and displayed with MATLAB scripts.
The values of the parameters used in the model of this embodiment are given in Table 1:
Table 1

Claims (10)

1. A cognitive map construction method based on space and motion joint coding, characterized by comprising the following steps:
a vision measurement unit receives camera images, obtains the angular velocity and linear velocity of the robot from the changes of the visual scene, and inputs them to the spatial memory network and the map construction node;
partial view cells extract partial view templates from the camera images to encode different scenes; when the current scene is identical to a previous scene, the robot is considered to have returned to a previously visited position, the corresponding partial view cell is activated, and the corresponding grid cells and head direction cells in the spatial memory network are activated;
the spatial memory network receives the inputs of the partial view cells and the vision measurement unit, performs path integration and visual calibration, and sends instructions to the map construction node;
the map construction node reads the spatial coding of the environment from the spatial memory network and forms a map of the environment according to the angular velocity and linear velocity.
2. The cognitive map construction method based on space and motion joint coding according to claim 1, characterized in that: if the current scene differs from all previous scenes, the partial view cells establish a new partial view cell.
3. The cognitive map construction method based on space and motion joint coding according to claim 1, characterized in that the path integration and visual calibration comprise the following steps:
the head direction-velocity joint coding attractor network model: the jointly coding head direction cell network forms a ring attractor in neural space, presenting a single activity bump in the network; the real head direction angle of the robot in the physical environment activates some of the head direction cells through the modulated velocity input, and the movement of the activity bump is proportional to the angular velocity of the robot's movement in the physical environment;
the grid-velocity joint coding attractor network model: owing to the periodic boundary conditions, the jointly coding grid cells form a torus-shaped attractor network in neural space; the phase of the grid pattern encodes the position of the robot in physical space, and the grid cells are activated by the head direction cells and the speed cells located in the entorhinal cortex; the movement of the grid pattern is proportional to the speed of the robot in the physical environment; together, in neural space, the head direction cells and grid cells form a joint encoding of the robot's head direction and position in the physical environment.
4. The cognitive map construction method based on space and motion joint coding according to claim 1, characterized in that the head direction-velocity joint coding attractor network model is:
wherein Iv denotes the modulated velocity input, and Iview denotes the calibration input of the partial view cells; m denotes the firing rate of the cells;
f(x) = x, when x > 0
f(x) = 0, otherwise;
the weight from a presynaptic cell (θ′, v′) to a postsynaptic cell (θ, v) is obtained by the following formula:
J(θ, v | θ′, v′) = J0 + J1 cos(θ − θ′ − v′) cos(λ(v − v′))
wherein J0 < 0 denotes uniform inhibition and J1 > 0 denotes the weight strength; θ and v denote the preferred direction and the preferred velocity of the postsynaptic cell, and θ′ and v′ those of the presynaptic cell; λ adjusts the size of the velocity bump; Lr denotes the extreme value of the coded velocity;
the modulated velocity input is obtained by the following steps:
the turning rate input V is mapped to the ideal activity bump position u(V) in neural space:
u(V) = arctan(τV)
wherein τ denotes the time constant of the head direction-velocity joint coding attractor network model;
the ideal activity bump position is modulated by a Gaussian function and then input into the head direction-velocity joint coding attractor network model, implemented by the following formula:
wherein Ir denotes the input amplitude of the angular velocity, ε denotes the strength of the velocity tuning, and σr denotes the sharpness of the velocity tuning;
the head direction is obtained by the following steps:
the head direction is estimated by a Fourier transform:
ψ = ∠(∫∫ m(θ, v) exp(iθ) dθ dv)
wherein ∠(Z) denotes the angle of the complex number Z;
the angular velocity is obtained by the following steps:
the angular velocity is estimated from the phase of the Fourier transform:
it is then mapped into the actual physical space by the following formula:
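The read-out formulas of claim 4 can be sketched discretely; the rate array and the grid of preferred directions are illustrative discretizations of the integrals:

```python
import cmath
import math

def decode_head_direction(m, thetas):
    """Discrete version of psi = angle(integral of m(theta, v) * exp(i*theta)):
    sum over the velocity dimension, then take the angle of the complex
    population vector.  m[i][j] is the rate of the cell with preferred
    direction thetas[i] and the j-th preferred velocity."""
    z = 0j
    for theta, row in zip(thetas, m):
        z += sum(row) * cmath.exp(1j * theta)   # integrate over v, weight by exp(i*theta)
    return cmath.phase(z) % (2 * math.pi)

def decode_angular_velocity(u, tau):
    """Invert the bump-position mapping u(V) = arctan(tau * V) to map the
    estimated velocity-axis phase u back to a physical angular velocity."""
    return math.tan(u) / tau
```

A single activity bump centred on a preferred direction is decoded back to that direction; the inverse arctan mapping recovers the turning rate from the velocity-axis bump position.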
5. The cognitive map construction method based on space and motion joint coding according to claim 1, characterized in that the grid-velocity joint coding attractor network model is as follows:
Iv denotes the modulated velocity input, Iview denotes the calibration current input of the partial view cells; m denotes the firing rate of the cells; Lt denotes the extreme value of the velocity;
the weight from a presynaptic cell to a postsynaptic cell is obtained by the following formula:
wherein J0 < 0 denotes uniform inhibition and Jk > 0 denotes the weight strength; k denotes the number of activity bumps; one pair of variables encodes the two position dimensions of the environment, and the other pair encodes the two velocity dimensions; the primed variables denote the position-coding and velocity-coding presynaptic cells respectively; λ adjusts the size of the velocity bump.
6. The cognitive map construction method based on space and motion joint coding according to claim 5, characterized in that the modulated velocity input is obtained by the following steps:
in neural space, the velocities along the X axis and Y axis form a vector, which is then mapped onto the ideal velocity axis positions by the following formula:
wherein S denotes the velocity conversion coefficient between physical space and the neural coding space, k denotes the number of activity bumps, and τ denotes the time constant of the grid-velocity joint coding attractor network model;
then, the Gaussian-modulated velocity is input into the grid-velocity joint coding attractor network model by the following formula:
It denotes the input amplitude of the velocity, ε denotes the strength of the velocity tuning, and σr denotes the sharpness of the velocity tuning.
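A hedged sketch of claim 6's modulated velocity input; the arctan mapping is borrowed from claim 4's u(V) = arctan(τV) (the patent's exact formula for this claim is lost as an image), and amp and sigma stand in for It and σr:

```python
import math

def modulated_speed_input(positions, v, tau, amp, sigma):
    """Sketch of the modulated velocity input: the physical speed v is mapped
    to an ideal bump position u = arctan(tau * v) on the velocity axis, and
    each cell at axis position p receives a Gaussian input centred there.
    The arctan form is an assumption borrowed from the head-direction model."""
    u = math.atan(tau * v)
    return [amp * math.exp(-((p - u) ** 2) / (2 * sigma ** 2)) for p in positions]
```

Cells whose preferred velocity-axis position matches the mapped speed receive the strongest drive, which is what moves the grid pattern at a speed proportional to the robot's.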
7. The cognitive map construction method based on space and motion joint coding according to claim 5, characterized in that the linear velocities along the two velocity dimensions are obtained by the following steps:
first, the positions of the activity bumps on the two velocity axes are estimated by the following formula:
wherein ∠(Z) denotes the angle of the complex number Z; φj, with j = x, y, denotes the position of the activity bump in the velocity dimension of the x or y axis;
then, the phases of the activity bumps are mapped from neural space into physical space by the following formula:
Vj denotes the velocity of the robot in the physical environment, with j = x, y; S denotes the velocity conversion coefficient between physical space and the neural coding space; k denotes the number of activity bumps; τ denotes the time constant of the grid-velocity joint coding attractor network model.
8. The cognitive map construction method based on space and motion joint coding according to claim 5, characterized in that the position of the robot in physical space is obtained from the grid-velocity joint coding attractor network model by the following steps:
8.1) the phase of the grid cells:
the grid pattern is projected onto three axis vectors separated by equal angles, the vectors being as follows:
then the phases on the projection axes are obtained by the following formula:
wherein l1 = 1, l2 = l3 = sin α, α = arctan(2); j denotes the index, here 1, 2, 3;
8.2) the phase of the grid pattern is then obtained by the following formula:
8.3) finally, the position of the robot in physical space is estimated:
wherein the coefficient in the formula is the scale ratio between the robot's physical space and the neural space.
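A per-axis sketch of claim 8's phase read-out; the full two-dimensional recovery over the three projection axes (l1 = 1, l2 = l3 = sin α) is omitted, and the scale factor is illustrative:

```python
import cmath
import math

def grid_axis_phase(rates):
    """Phase of the periodic grid activity along one neural axis: the angle
    of the first spatial Fourier component of the rates (cf. step 8.1/8.2)."""
    n = len(rates)
    z = sum(r * cmath.exp(2j * math.pi * i / n) for i, r in enumerate(rates))
    return cmath.phase(z) % (2 * math.pi)

def phase_to_position(phase, scale):
    """Map a grid phase to a physical coordinate (cf. step 8.3); the result
    is only determined up to whole grid periods, so scale is illustrative."""
    return scale * phase
```

Because the grid pattern is periodic, each phase fixes the position only modulo the grid spacing; in the method above the ambiguity is resolved by path integration and the partial view cell calibration.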
9. The cognitive map construction method based on space and motion joint coding according to claim 1, characterized in that activating the corresponding partial view cell, and activating the corresponding grid cells and head direction cells in the spatial memory network, comprises:
1) the partial view cell calibrates the head direction-velocity joint coding attractor network model, implemented by Gaussian tuning with the following formula:
wherein Id denotes the amplitude of the injected input, ψ denotes the phase associated with the partial view cell, and σd denotes the sharpness of the Gaussian tuning;
2) the partial view cell calibrates the grid-velocity joint coding attractor network model; the firing pattern of the grid cells is generated using an oscillatory interference model with the following formula:
wherein Ip denotes the amplitude of the injected input, C denotes a constant, and the associated phase of the partial view cell is as defined above; k denotes the number of velocity-coding bumps in neural space; j denotes the index, here 1, 2, 3; l1 = 1, l2 = l3 = sin α, α = arctan(2).
10. The cognitive map construction method based on space and motion joint coding according to claim 1, characterized in that the map construction node reading the spatial coding of the environment from the spatial memory network and forming a map of the environment according to the velocities comprises the following steps:
the map construction node uses the angular velocity, the linear velocity, and the position and head direction encoded by the spatial memory network to establish topological map nodes and their connections to other nodes, and then obtains the map by a standard graph optimization method.
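The graph optimization of claim 10 can be illustrated with a toy relaxation; this is a stand-in for a standard pose-graph optimizer, with orientations and information matrices omitted:

```python
def relax(nodes, edges, iters=200, lr=0.1):
    """Toy stand-in for the standard graph optimization of claim 10: node
    positions are iteratively pulled toward satisfying the relative
    displacement stored on each edge, with the first node anchored."""
    xs = [list(p) for p in nodes]
    for _ in range(iters):
        for (i, j, dx, dy) in edges:            # node j should sit at node i + (dx, dy)
            ex = xs[j][0] - xs[i][0] - dx
            ey = xs[j][1] - xs[i][1] - dy
            for k, e in ((0, ex), (1, ey)):
                xs[i][k] += lr * e              # split the error between both endpoints
                xs[j][k] -= lr * e
        xs[0] = list(nodes[0])                  # anchor the first node
    return xs
```

With a consistent loop closure, the relaxation pulls a distorted initial layout back to the configuration implied by the edge constraints; inconsistent closures (odometry drift) have their error spread over the loop, as in the optimization step of step 5.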
CN201710748763.8A 2017-08-28 2017-08-28 Cognitive map construction method based on space and motion joint coding Active CN110019582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710748763.8A CN110019582B (en) 2017-08-28 2017-08-28 Cognitive map construction method based on space and motion joint coding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710748763.8A CN110019582B (en) 2017-08-28 2017-08-28 Cognitive map construction method based on space and motion joint coding

Publications (2)

Publication Number Publication Date
CN110019582A true CN110019582A (en) 2019-07-16
CN110019582B CN110019582B (en) 2023-07-14

Family

ID=67186149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710748763.8A Active CN110019582B (en) 2017-08-28 2017-08-28 Cognitive map construction method based on space and motion joint coding

Country Status (1)

Country Link
CN (1) CN110019582B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112527929A (en) * 2020-10-20 2021-03-19 深圳市银星智能科技股份有限公司 Grid map coding method and device and electronic equipment
CN112906884A (en) * 2021-02-05 2021-06-04 鹏城实验室 Brain-like prediction tracking method based on pulse continuous attractor network
CN113009917A (en) * 2021-03-08 2021-06-22 安徽工程大学 Mobile robot map construction method based on closed loop detection and correction, storage medium and equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060129506A1 (en) * 2004-07-15 2006-06-15 Neurosciences Research Foundation, Inc. Mobile brain-based device having a simulated nervous system based on the hippocampus
CN103699125A (en) * 2013-12-09 2014-04-02 北京工业大学 Robot simulated navigation method based on rat brain-hippocampal navigation
US20160375592A1 (en) * 2015-06-24 2016-12-29 Brain Corporation Apparatus and methods for safe navigation of robotic devices
CN106814737A (en) * 2017-01-20 2017-06-09 安徽工程大学 A kind of SLAM methods based on rodent models and RTAB Map closed loop detection algorithms


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DAVID BALL et al.: "OpenRatSLAM: an open source brain-based SLAM system", Autonomous Robots, vol. 34, pages 149 *
于乃功 et al.: "A cognitive map construction method for bionic robots based on the hippocampal cognitive mechanism", Acta Automatica Sinica, no. 01 *
张潇 et al.: "An improved RatSLAM bionic navigation algorithm", Navigation and Control, no. 05 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112527929A (en) * 2020-10-20 2021-03-19 深圳市银星智能科技股份有限公司 Grid map coding method and device and electronic equipment
CN112527929B (en) * 2020-10-20 2023-12-08 深圳银星智能集团股份有限公司 Grid map coding method and device and electronic equipment
CN112906884A (en) * 2021-02-05 2021-06-04 鹏城实验室 Brain-like prediction tracking method based on pulse continuous attractor network
CN113009917A (en) * 2021-03-08 2021-06-22 安徽工程大学 Mobile robot map construction method based on closed loop detection and correction, storage medium and equipment
CN113009917B (en) * 2021-03-08 2022-02-15 安徽工程大学 Mobile robot map construction method based on closed loop detection and correction, storage medium and equipment

Also Published As

Publication number Publication date
CN110019582B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
Ahn Formation control
CN109579843B (en) Multi-robot cooperative positioning and fusion image building method under air-ground multi-view angles
Zeng et al. Lanercnn: Distributed representations for graph-centric motion forecasting
CN110019582A (en) Cognitive map construction method based on space and Motion-Joint coding
Liu et al. Fusion of magnetic and visual sensors for indoor localization: Infrastructure-free and more effective
CN105263113B (en) A kind of WiFi location fingerprints map constructing method and its system based on crowdsourcing
CN106949896A (en) A kind of situation awareness map structuring and air navigation aid based on mouse cerebral hippocampal
CN103699125B (en) A kind of robot simulation air navigation aid based on the navigation of mouse cerebral hippocampal
CN107363813A (en) A kind of desktop industrial robot teaching system and method based on wearable device
Kim et al. Cooperative search of multiple unknown transient radio sources using multiple paired mobile robots
CN110261823A (en) Visible light indoor communications localization method and system based on single led lamp
CN102375416B (en) Human type robot kicking action information processing method based on rapid search tree
Chakravarty et al. GEN-SLAM: Generative modeling for monocular simultaneous localization and mapping
CN108582073A (en) A kind of quick barrier-avoiding method of mechanical arm based on improved random road sign Map Method
CN109240279A (en) A kind of robot navigation method of view-based access control model perception and spatial cognition neuromechanism
Zeng et al. Cognitive mapping based on conjunctive representations of space and movement
Soria et al. Bluetooth network for micro-uavs for communication network and embedded range only localization
Mañas-Álvarez et al. Robotic park: Multi-agent platform for teaching control and robotics
Béjar et al. A practical approach for outdoors distributed target localization in wireless sensor networks
Barolli Complex, Intelligent and Software Intensive Systems: Proceedings of the 16th International Conference on Complex, Intelligent and Software Intensive Systems (CISIS-2022)
CN113469030B (en) Personnel positioning method and system based on artificial intelligence and body shadow evaluation
CN110471030A (en) Based on the radio frequency tomography passive type Position-Solving method for improving conjugate gradient
Lin et al. Deep heading estimation for pedestrian dead reckoning
Wang et al. Leto: crowdsourced radio map construction with learned topology and a few landmarks
Pessin et al. Evaluating the impact of the number of access points in mobile robots localization using artificial neural networks

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant