CN107194332A - A mental rotation mechanism implementation model based on spatio-temporal translation and rotation - Google Patents

A mental rotation mechanism implementation model based on spatio-temporal translation and rotation

Info

Publication number
CN107194332A
CN107194332A
Authority
CN
China
Prior art keywords
rotation
point cloud
cortex
mental
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710322416.9A
Other languages
Chinese (zh)
Inventor
李军
许阳
高杨建
沈广田
陈剑斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN201710322416.9A priority Critical patent/CN107194332A/en
Publication of CN107194332A publication Critical patent/CN107194332A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour

Abstract

The invention discloses a mental rotation mechanism implementation model based on spatio-temporal translation and rotation, and explores its potential application in mobile robot navigation. The invention quantitatively represents the main brain regions involved in primate mental rotation: the superior parietal cortex (SPC), the prefrontal cortex (PFC) and the premotor cortex (PMC). The SPC module performs map construction and image representation. The PFC module performs feature and pose matching: the image features produced by the superior parietal cortex are compared against a preset threshold to decide whether the current and target objects are the same object in the same pose. The PMC module simulates rotating the current object: by comparing the point-cloud feature maps of the current and target objects it computes a rotation matrix, simulates the rotation, and iterates until the difference between the objects is minimal, yielding a sequence of time-dependent four-dimensional rotation operators that are used to guide the mobile robot's navigation task. The model overcomes the poor learning ability, strong condition dependence and high training cost of conventional navigation algorithms.

Description

A mental rotation mechanism implementation model based on spatio-temporal translation and rotation
Technical field
The present invention relates to the modelling of the mental rotation mechanism in brain cognition, and in particular to the implementation of a model based on spatio-temporal translation and rotation and its possible application in mobile robot navigation.
Background technology
Mental rotation is an important spatial cognitive ability of primates. It allows us to simulate, in the brain, the poses an object would take while rotating, in order to solve a particular task, such as judging whether two objects have the same orientation. In the 1970s, the famous three-dimensional mental rotation experiments of Shepard et al. showed that the reaction time needed to judge whether two objects match is linearly and positively correlated with the angular difference between them. After this experiment, new methods were applied to study the biological neural basis of mental rotation, including magnetic resonance imaging, positron emission tomography, and transcranial magnetic stimulation (TMS). These methods revealed that during mental rotation three brain regions are continuously activated: the superior parietal cortex (SPC), the prefrontal cortex (PFC) and the premotor cortex (PMC). The functions of the superior parietal cortex include spatial configuration and specific image-transformation computation; the functions of the prefrontal cortex include motion control and imitation; the functions of the premotor cortex include motion planning, execution and motion simulation. These findings have also promoted the development of neurological frameworks of mental rotation, which generally divide mental rotation into four cognitive processes: 1) creating a mental image of an object from each viewpoint; 2) mentally rotating the current object, with reference to other images, until a comparison can be made; 3) comparing the two objects; 4) reporting the result of the comparison.
Mental rotation proceeds by applying multiple spatial transformations to a target scene or object over continuous time, and needs only limited prior knowledge or action guidance to complete a particular task. Because mental rotation is a motion-prediction method based on simulated rotation, its engineering implementation is complex: it involves scene perception and memory, key-target extraction, target feature extraction, and the whole sequence of feature transformation and comparison. Consequently, there is at present no sufficiently complete computational model of mental rotation that can be applied to practical machine vision.
Mobile robot navigation algorithms are currently fairly mature, and vision-based navigation is the most important research direction. However, traditional navigation algorithms still have drawbacks that are hard to overcome, such as weak adaptability and poor learning ability. In particular environments, for example where accidents are frequent, actions are dangerous, or the surroundings are unfamiliar, autonomous control based on cognitive ability is very important. Notably, primates other than humans, as well as human infants, make little use of maps or real-time mapping; how, then, do they perform navigation tasks? Research shows that mental rotation plays an important role in navigation: a primate can compare the current scene with the target scene through mental rotation, imagine whether the scene visible after rotating the current viewpoint by some angle would match the target scene, refine its internal map accordingly, and use this to decide the next movement. Combining the mental rotation mechanism with mobile robot navigation tasks is therefore promising.
The content of the invention
In view of this, to solve the above problems, the present invention designs a practical mental rotation mechanism model based on spatio-temporal translation and rotation, simplifying the complex mental rotation process into an easily implemented computational model in four dimensions: three spatial dimensions plus time. It further explores the model's possible application to mobile robot navigation tasks, with the aim of reducing the training cost of robot navigation algorithms and improving their transferability and adaptability.
The purpose of the present invention is to propose a practical mental rotation mechanism model based on spatio-temporal translation and rotation.
To achieve the above purpose, the invention is realised through the following technical solution:
The mental rotation mechanism model provided by the present invention comprises the following steps:
S1: Abstractly represent the spatial-configuration function of the superior parietal cortex. The inputs are the RGB picture and 3D point cloud of the current object; the outputs are 3 colour feature maps, 8 shape feature maps, 4 orientation feature maps and 1 point-cloud keypoint feature map.
S2: Abstractly represent the feature- and pose-matching function of the prefrontal cortex. The input is the 16 feature maps obtained in S1; the output is the matching result of the features and pose.
S3: Abstractly represent the simulated-rotation function of the premotor cortex. The inputs are the matching result of S2 and the point-cloud keypoint feature map output in S1; the output is a sequence of time-dependent four-dimensional rotation operators.
S4: The mobile robot plans its next movement according to the rotation operators obtained in S3.
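The four-step loop above can be sketched in code. The following is an illustrative Python sketch, not the patent's implementation: the three cortex modules are reduced to stubs (the function names, the placeholder feature maps and the identity rotation are all assumptions), and only the S1-S4 closed-loop control flow is shown.

```python
import numpy as np

def spc_extract_features(rgb, cloud):
    """SPC stub (S1): would return 16 feature maps plus a keypoint cloud."""
    return {"maps": [rgb.mean(axis=-1)] * 16, "keypoints": cloud}

def pfc_match(current, target, threshold=0.1):
    """PFC stub (S2): fused feature distance compared against a preset threshold."""
    d = np.mean([np.linalg.norm(c - t)
                 for c, t in zip(current["maps"], target["maps"])])
    return d < threshold, d

def pmc_rotate(current_kp, target_kp):
    """PMC stub (S3): would return a time-indexed rotation operator."""
    return np.eye(3)  # identity placeholder

def navigate(rgb, cloud, target, max_steps=10):
    for t in range(max_steps):            # each iteration = one mental rotation
        feats = spc_extract_features(rgb, cloud)
        matched, diff = pfc_match(feats, target)
        if matched:
            return t                      # pose matches target: navigation done
        R = pmc_rotate(feats["keypoints"], target["keypoints"])
        cloud = cloud @ R.T               # S4: simulate / execute the rotation
    return max_steps
```

With identical current and target inputs the loop terminates immediately, which mirrors the patent's stopping condition: matching success ends both the mental rotation and the navigation operation.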
Further, the main function of the superior parietal cortex module in step S1 is spatial configuration. Because a full scene is too complex, we extract and study certain key objects in the target scene; the rationale is that when humans memorise a map they typically selectively remember significant landmarks or buildings visible from the current viewpoint. The superior parietal cortex quantifies the colour, shape, orientation and keypoint features of an object. Its inputs are the RGB picture and 3D point cloud of the current object: the RGB picture is used to extract the first three features, and the 3D point cloud is used to extract the keypoint features. Colour is represented with a colour double-opponency algorithm; shape is represented by Sobel edge extraction on an 8-scale Gaussian pyramid; orientation is described by convolution of a cosine grating with a two-dimensional Gaussian; keypoints are extracted from the 3D point cloud with the SIFT operator to reduce the data volume. The superior parietal cortex module outputs 3 colour feature maps, 8 shape feature maps, 4 orientation feature maps and 1 keypoint 3D point cloud.
Further, the main function of the prefrontal cortex module in step S2 is the comparison and matching of features and pose. Its inputs are the 16 current-object feature maps and 16 target-object feature maps obtained from the superior parietal cortex module. Euclidean distance is used to describe the differences of the colour, shape, orientation and keypoint point-cloud feature maps; these four classes of feature maps are then linearly fused into a single value, which is compared with a preset threshold to judge whether the current object and the target object are the same object in the same pose. If they are, the mental rotation process is complete and the mobile robot navigation operation is also complete; if not, the current object's 3D point cloud is fed back to the premotor cortex for the next round of mental rotation planning.
Further, the main job of the premotor cortex module in step S3 is to simulate rotating the current object; its inputs are the 3D point-cloud keypoint maps of the current and target objects. The implementation borrows the spatial description and transformation methods used for robot manipulators. First a three-dimensional coordinate system is built with the centre of the target object's point cloud as origin; the current object is then projected to an arbitrary position in this space, and a translation operator moves the centre of the current object to the coordinate origin. From the mean angular difference of the keypoints of the two point clouds, a four-dimensional rotation operator tied to the current state is built, comprising the x, y, z and t (time) components. The rotation operator is used to simulate rotating the current object, and the keypoints are matched again; if a large difference remains, a new x, y, z rotation operator for time t+1 is computed from the simulated post-rotation state. As in a Markov chain, the state at time t+1 depends only on the state at time t. This repeats until the matching difference falls within a set range, and the module outputs the resulting sequence of four-dimensional rotation operators.
Further, in step S4 the mobile robot uses the rotation operators output by S3 to plan its next trajectory so that the camera turns through the corresponding angle. The Kinect camera then captures the scene after the movement; the object in the current scene most similar to the target object's shape template is matched and segmented, and its RGB image and 3D point cloud are fed back to the above superior parietal cortex module for the next round of mental rotation. The mobile robot continually adjusts its position until the scene at its viewpoint matches the target scene as closely as possible.
The present invention abstracts the primate mental rotation process into a quantitatively representable spatial rotation process, establishes a complete implementation framework of mental rotation, and explores the possibility of applying it to mobile robot navigation tasks. The significance of the invention lies in building mathematical models of the complex brain-region functions involved in mental rotation and combining them into a closed-loop algorithmic workflow. Incorporating this cognitive algorithm into mobile robot navigation is also significant: it can overcome the troublesome training and poor learning ability of conventional navigation algorithms. The algorithm inspired by mental rotation fuses episodic memory with motion prediction, so the scene that will be seen after a movement can be imagined before moving, avoiding the cost that traditional algorithms pay in repeated trial-and-error learning. Furthermore, the navigation algorithm inspired by mental rotation has stronger environment-transfer ability and can adapt to complex and changing scenes.
Other advantages, objects and features of the present invention will be set forth to some extent in the following description and, to some extent, will become apparent to those skilled in the art upon studying what follows, or may be learned from practice of the invention. The objects and other advantages of the invention may be realised and attained by the structure particularly pointed out in the following specification, claims and accompanying drawings.
Brief description of the drawings
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings, in which:
Fig. 1 is the complete brain zone function flow chart of primate Mental rotation;
Fig. 2 is Mental rotation functional block diagram proposed by the present invention.
Embodiment
The method of the present invention is described in further detail below with reference to the accompanying drawings. It should be understood that the preferred embodiments are only illustrative of the invention and are not intended to limit its scope of protection.
Fig. 1 is the complete brain-region function flow chart of primate mental rotation; Fig. 2 is the functional block diagram of the mental rotation model proposed by the present invention. As shown in the figures, the mental rotation mechanism model based on spatio-temporal translation and rotation provided by the present invention includes the following steps:
S1: Abstractly represent the spatial-configuration function of the superior parietal cortex. The inputs are the RGB picture and 3D point cloud of the current object; the outputs are 3 colour feature maps, 8 shape feature maps, 4 orientation feature maps and 1 point-cloud keypoint feature map.
S2: Abstractly represent the feature- and pose-matching function of the prefrontal cortex. The input is the 16 feature maps obtained in S1; the output is the matching result of the features and pose.
S3: Abstractly represent the simulated-rotation function of the premotor cortex. The inputs are the matching result of S2 and the point-cloud keypoint feature map output in S1; the output is a sequence of time-dependent four-dimensional rotation operators.
S4: The mobile robot plans its next movement according to the rotation operators obtained in S3.
As a further improvement of the above embodiment, the main function of the superior parietal cortex module in step S1 is spatial configuration. Because a full scene is too complex, we extract and study certain key objects in the target scene; the rationale is that when humans memorise a map they typically selectively remember significant landmarks or buildings visible from the current viewpoint. The superior parietal cortex quantifies the colour, shape, orientation and keypoint features of an object. Its inputs are the RGB picture and 3D point cloud of the current object: the RGB picture is used to extract the first three features, and the 3D point cloud is used to extract the keypoint features. Colour is represented with a colour double-opponency algorithm, using the red-green, yellow-blue and black-white colour pairs; shape is represented by Sobel edge extraction on an 8-scale Gaussian pyramid; orientation is described by convolution of a cosine grating with a two-dimensional Gaussian, taken in four directions; keypoints are extracted from the 3D point cloud with the SIFT operator to reduce the data volume. The superior parietal cortex module outputs 3 colour feature maps, 8 shape feature maps, 4 orientation feature maps and 1 keypoint 3D point cloud.
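The colour double-opponency step named above (red-green, yellow-blue, black-white pairs) can be sketched as follows. The channel formulas follow common opponent-colour definitions from the saliency literature; the patent does not give exact equations, so treat them as an assumption.

```python
import numpy as np

def color_opponent_maps(rgb):
    """rgb: H x W x 3 float array in [0, 1] -> three H x W opponent maps."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = (r + g) / 2.0                 # a simple "yellow" channel
    rg = r - g                        # red-green opponency
    by = b - y                        # blue-yellow opponency
    bw = (r + g + b) / 3.0            # black-white (intensity)
    return rg, by, bw

img = np.zeros((2, 2, 3))
img[..., 0] = 1.0                     # a pure-red patch
rg, by, bw = color_opponent_maps(img)
print(rg[0, 0], by[0, 0])             # red patch: strong +RG, negative BY
```

The three returned arrays correspond to the 3 colour feature maps output by the SPC module; the Sobel pyramid and orientation maps would be computed analogously from the same RGB input.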
As a further improvement of the above embodiment, the main function of the prefrontal cortex module in step S2 is the comparison and matching of features and pose. Its inputs are the 16 current-object feature maps and 16 target-object feature maps obtained from the superior parietal cortex module. Euclidean distance is used to describe the differences of the colour, shape, orientation and keypoint point-cloud feature maps; these four classes of feature maps are then linearly fused into a single value, which is compared with a preset threshold to judge whether the current object and the target object are the same object in the same pose. If they are, the mental rotation process is complete and the mobile robot navigation operation is also complete; if not, the current object's 3D point cloud is fed back to the premotor cortex for the next round of mental rotation planning.
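The comparison step above can be sketched as a weighted fusion of per-class Euclidean distances followed by a threshold test. The weights and the threshold value are illustrative assumptions; the patent only states that the four classes are linearly fused and compared with a preset threshold.

```python
import numpy as np

def fused_difference(cur_maps, tgt_maps, weights):
    """cur_maps/tgt_maps: dict class -> list of 2-D feature maps; weights: dict class -> float."""
    total = 0.0
    for cls, w in weights.items():
        # mean Euclidean distance between corresponding maps of this class
        d = np.mean([np.linalg.norm(c - t)
                     for c, t in zip(cur_maps[cls], tgt_maps[cls])])
        total += w * d                # linear fusion into one scalar
    return total

def is_same_pose(cur_maps, tgt_maps, weights, threshold=0.5):
    return fused_difference(cur_maps, tgt_maps, weights) < threshold

weights = {"color": 0.25, "shape": 0.25, "orientation": 0.25, "keypoints": 0.25}
a = {k: [np.zeros((4, 4))] for k in weights}
b = {k: [np.ones((4, 4)) * 0.01] for k in weights}
print(is_same_pose(a, b, weights))    # tiny difference -> True
```

A match ends the loop; a mismatch would hand the current point cloud back to the PMC module, as the paragraph above describes.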
As a further improvement of the above embodiment, the main job of the premotor cortex module in step S3 is to simulate rotating the current object; its inputs are the 3D point-cloud keypoint maps of the current and target objects. The implementation borrows the spatial description and transformation methods used for robot manipulators. First a three-dimensional coordinate system is built with the centre of the target object's point cloud as origin; the current object is then projected to an arbitrary position in this space, and a translation operator moves the centre of the current object to the coordinate origin. Because the SIFT algorithm is scale-invariant, it can overcome the differences in apparent object size caused by different scene depths. From the mean angular difference of the keypoints of the two point clouds, a four-dimensional rotation operator tied to the current state is built, comprising the x, y, z and t (time) components. The rotation operator is used to simulate rotating the current object, and the keypoints are matched again; if a large difference remains, a new x, y, z rotation operator for time t+1 is computed from the simulated post-rotation state. As in a Markov chain, the state at time t+1 depends only on the state at time t. This repeats until the matching difference falls within a set range, and the module outputs the resulting sequence of four-dimensional rotation operators.
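The simulated-rotation loop above can be sketched as follows. For clarity the rotation is restricted to the z-axis and angle wrap-around is ignored, whereas the patent's operator covers x, y, z and t; treat the axis restriction and the tolerance value as simplifying assumptions.

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def mental_rotation(cur, tgt, tol=1e-3, max_steps=50):
    cur = cur - cur.mean(axis=0)          # translation operator: centre at origin
    tgt = tgt - tgt.mean(axis=0)
    operators = []                        # the time-indexed operator sequence
    for t in range(max_steps):
        # mean angular difference of corresponding keypoints in the x-y plane
        dtheta = np.mean(np.arctan2(tgt[:, 1], tgt[:, 0])
                         - np.arctan2(cur[:, 1], cur[:, 0]))
        R = rot_z(dtheta)
        cur = cur @ R.T                   # simulate the rotation
        operators.append((R, t))          # state t+1 depends only on state t
        if np.linalg.norm(cur - tgt) < tol:
            break                         # matching difference within range
    return operators, cur

pts = np.array([[1.0, 0, 0], [0, 1.0, 0], [-1.0, 0, 0]])
ops, aligned = mental_rotation(pts, pts @ rot_z(0.7).T)
print(len(ops))                           # converges in one simulated rotation
```

On this toy cloud a single operator aligns current and target exactly; on real SIFT keypoints with noise and imperfect correspondences, several Markov-style iterations would be needed, producing the operator sequence the patent describes.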
As a further improvement of the above embodiment, in step S4 the mobile robot uses the rotation operators output by S3 to plan its next trajectory so that the camera turns through the corresponding angle. The Kinect camera then captures the scene after the movement; the object in the current scene most similar to the target object's shape template is matched and segmented, and its RGB image and 3D point cloud are fed back to the above superior parietal cortex module for the next round of mental rotation. The mobile robot continually adjusts its position until the scene at its viewpoint matches the target scene as closely as possible.
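Turning a rotation operator into a camera motion command can be sketched as extracting the yaw (rotation about the vertical axis) and clamping it to a safe per-step turn. The clamp limit is an assumption; Kinect capture and template matching are outside the scope of this sketch.

```python
import numpy as np

def yaw_from_rotation(R):
    """Extract the z-axis rotation angle (radians) from a 3x3 rotation matrix."""
    return np.arctan2(R[1, 0], R[0, 0])

def plan_turn(R, max_turn=np.radians(30)):
    """Clamp the commanded turn so the camera rotates by a bounded angle per step."""
    yaw = yaw_from_rotation(R)
    return float(np.clip(yaw, -max_turn, max_turn))

theta = 0.4
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
print(round(plan_turn(R), 3))             # -> 0.4
```

Each clamped turn moves the robot one step along the planned trajectory; the newly captured scene is then fed back into the SPC module, closing the loop described above.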
After multiple mental rotations, the mobile robot can plan a reasonable path to the target scene. The whole process closely resembles the navigation strategy humans adopt in unfamiliar environments: finding landmarks along the way, repeatedly recalling the current viewpoint, and using rotation matching to determine the position to reach at the next moment, rather than aimless trial and error. The present invention is therefore instructive for the cognitive navigation of mobile robots.
The above are only preferred embodiments of the present invention and are not intended to limit it. It is evident that those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the invention and their technical equivalents, the invention is intended to include them as well.

Claims (5)

1. A mental rotation mechanism implementation model based on spatio-temporal translation and rotation, characterised by comprising the following steps:
S1: abstractly representing the spatial-configuration function of the superior parietal cortex: the inputs are the RGB picture and 3D point cloud of the current object; the outputs are 3 colour feature maps, 8 shape feature maps, 4 orientation feature maps and 1 point-cloud keypoint feature map;
S2: abstractly representing the feature- and pose-matching function of the prefrontal cortex: the input is the 16 feature maps obtained in S1; the output is the matching result of the features and pose;
S3: abstractly representing the simulated-rotation function of the premotor cortex: the inputs are the matching result of S2 and the point-cloud keypoint feature map output in S1; the output is a sequence of time-dependent four-dimensional rotation operators;
S4: the mobile robot plans its next movement according to the rotation operators obtained in S3.
2. The mental rotation mechanism implementation model based on spatio-temporal translation and rotation according to claim 1, characterised in that: the main function of the superior parietal cortex module in step S1 is spatial configuration, and certain key objects extracted from the target scene are studied. The superior parietal cortex quantifies the colour, shape, orientation and keypoint features of the object; the inputs are the RGB picture and 3D point cloud of the current object, the RGB picture being used to extract the first three features and the 3D point cloud to extract the keypoint features. Colour is represented with a colour double-opponency algorithm; shape is represented by Sobel edge extraction on an 8-scale Gaussian pyramid; orientation is described by convolution of a cosine grating with a two-dimensional Gaussian; keypoints are extracted from the 3D point cloud with the SIFT operator to reduce the data volume. The superior parietal cortex module outputs 3 colour feature maps, 8 shape feature maps, 4 orientation feature maps and 1 keypoint 3D point cloud.
3. The mental rotation mechanism implementation model based on spatio-temporal translation and rotation according to claim 1, characterised in that: the main function of the prefrontal cortex module in step S2 is the comparison and matching of features and pose; its inputs are the 16 current-object feature maps and 16 target-object feature maps obtained from the superior parietal cortex module. Euclidean distance is used to describe the differences of the colour, shape, orientation and keypoint point-cloud feature maps; the four classes of feature maps are then linearly fused into a single value, which is compared with a preset threshold to judge whether the current object and the target object are the same object in the same pose. If they are, the mental rotation process is complete and the mobile robot navigation operation is also complete; if not, the current object's 3D point cloud is fed back to the premotor cortex for the next round of mental rotation planning.
4. The mental rotation mechanism implementation model based on spatio-temporal translation and rotation according to claim 1, characterised in that: the main job of the premotor cortex module in step S3 is to simulate rotating the current object; the inputs are the 3D point-cloud keypoint maps of the current and target objects. Borrowing the spatial description and transformation methods used for robot manipulators, a three-dimensional coordinate system is first built with the centre of the target object's point cloud as origin; the current object is then projected to an arbitrary position in this space, and a translation operator moves the centre of the current object to the coordinate origin. From the mean angular difference of the keypoints of the two point clouds, a four-dimensional rotation operator tied to the current state is built, comprising the x, y, z and t (time) components. The rotation operator is used to simulate rotating the current object and the keypoints are matched again; if a large difference remains, a new x, y, z rotation operator for time t+1 is computed from the simulated post-rotation state. As in a Markov chain, the state at time t+1 depends only on the state at time t. This repeats until the matching difference falls within a set range, and a sequence of four-dimensional rotation operators is output.
5. The mental rotation mechanism implementation model based on spatio-temporal translation and rotation according to claim 1, characterised in that: in step S4 the mobile robot uses the rotation operators output by S3 to plan its next trajectory so that the camera turns through the corresponding angle. The Kinect camera then captures the scene after the movement; the object in the current scene most similar to the target object's shape template is matched and segmented, and its RGB image and 3D point cloud are fed back to the above superior parietal cortex module for the next round of mental rotation. The mobile robot continually adjusts its position until the scene at its viewpoint matches the target scene as closely as possible.
CN201710322416.9A 2017-05-09 2017-05-09 A mental rotation mechanism implementation model based on spatio-temporal translation and rotation Pending CN107194332A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710322416.9A CN107194332A (en) 2017-05-09 2017-05-09 A mental rotation mechanism implementation model based on spatio-temporal translation and rotation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710322416.9A CN107194332A (en) 2017-05-09 2017-05-09 A mental rotation mechanism implementation model based on spatio-temporal translation and rotation

Publications (1)

Publication Number Publication Date
CN107194332A true CN107194332A (en) 2017-09-22

Family

ID=59873668

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710322416.9A Pending A mental rotation mechanism implementation model based on spatio-temporal translation and rotation

Country Status (1)

Country Link
CN (1) CN107194332A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101443817A (en) * 2006-03-22 2009-05-27 皮尔茨公司 Method and device for determining correspondence, preferably for the three-dimensional reconstruction of a scene
US8437535B2 (en) * 2006-09-19 2013-05-07 Roboticvisiontech Llc System and method of determining object pose
CN103278170A (en) * 2013-05-16 2013-09-04 东南大学 Mobile robot cascading map building method based on remarkable scenic spot detection
CN104457566A (en) * 2014-11-10 2015-03-25 西北工业大学 Spatial positioning method not needing teaching robot system
CN104484522A (en) * 2014-12-11 2015-04-01 西南科技大学 Method for building robot simulation drilling system based on reality scene
CN105843223A (en) * 2016-03-23 2016-08-10 东南大学 Mobile robot three-dimensional mapping and obstacle avoidance method based on space bag of words model
CN106406338A (en) * 2016-04-14 2017-02-15 中山大学 Omnidirectional mobile robot autonomous navigation apparatus and method based on laser range finder


Similar Documents

Publication Publication Date Title
Soo Kim et al. Interpretable 3d human action analysis with temporal convolutional networks
CN108830150B (en) One kind being based on 3 D human body Attitude estimation method and device
Fang et al. An automatic road sign recognition system based on a computational model of human recognition processing
CN104063702B (en) Three-dimensional gait recognition based on shielding recovery and partial similarity matching
CN107423730A (en) A kind of body gait behavior active detecting identifying system and method folded based on semanteme
Weber et al. Robot docking with neural vision and reinforcement
CN108389226A (en) A kind of unsupervised depth prediction approach based on convolutional neural networks and binocular parallax
CN107179683A (en) A kind of interaction intelligent robot motion detection and control method based on neutral net
CN107146237A (en) A kind of method for tracking target learnt based on presence with estimating
JP2019028876A (en) Device and method of generating teacher data for machine learning
CN109241830A (en) It listens to the teacher method for detecting abnormality in the classroom for generating confrontation network based on illumination
CN110111289A (en) A kind of image processing method and device
CN107621880A (en) A kind of robot wheel chair interaction control method based on improvement head orientation estimation method
White et al. Vision processing for assistive vision: A deep reinforcement learning approach
CN106446833B (en) A kind of bionical visible sensation method of multichannel for complex scene image recognition
CN106846462A (en) Insect identifying device and method based on three-dimensional simulation
CN116182875A (en) Temporary road path planning method and system based on graphic neural network
Abid et al. Dynamic hand gesture recognition for human-robot and inter-robot communication
CN107194332A (en) A kind of Mental rotation mechanism implementation model for being translated and being rotated based on space-time
Sabbaghi et al. Learning of gestures by imitation using a monocular vision system on a humanoid robot
Jenkins et al. Interactive human pose and action recognition using dynamical motion primitives
Thomsen et al. Predicting and steering performance in architectural materials
Yang et al. Footballer action tracking and intervention using deep learning algorithm
Carbone et al. Simulation of near infrared sensor in unity for plant-weed segmentation classification
CN115661367A (en) Dynamic hybrid deformation modeling method and system based on photo collection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170922
