CN110774283A - Robot walking control system and method based on computer vision

Robot walking control system and method based on computer vision

Info

Publication number
CN110774283A
CN110774283A
Authority
CN
China
Prior art keywords
robot
module
color
value
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911033909.6A
Other languages
Chinese (zh)
Inventor
吴春富
李国栋
王小龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Longyan University
Original Assignee
Longyan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Longyan University filed Critical Longyan University
Priority to CN201911033909.6A priority Critical patent/CN110774283A/en
Publication of CN110774283A publication Critical patent/CN110774283A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a robot walking control system and method based on computer vision. A camera scans the environment space where the robot is located through the environment space scanning module to obtain environment-space image data; a robot control command is entered on an input keyboard through the command input module; an image processing program processes the target object in the image; an identification program identifies the image feature information; a distance measuring sensor measures the distance to obstacles; and a navigation program navigates the robot's walking route according to the obstacle identification and ranging data. The invention completes a series of tasks such as visual processing, spatial representation, self-positioning, and map updating, and realizes robot navigation with high biomimicry and strong autonomy in unknown environments.

Description

Robot walking control system and method based on computer vision
Technical Field
The invention belongs to the technical field of robot walking control, and particularly relates to a robot walking control system and method based on computer vision.
Background
A Robot is a machine that performs work automatically. It can accept human commands, run programs programmed in advance, and act according to principles formulated with artificial intelligence technology. Its task is to assist or replace human work, for example in production, construction, or dangerous jobs. The arm of a robot body generally adopts a spatial open-chain linkage mechanism, in which each kinematic pair (a revolute pair or a prismatic pair) is usually called a joint, and the number of joints usually gives the robot's degrees of freedom. Robot actuators may be classified into rectangular-coordinate, cylindrical-coordinate, polar-coordinate, joint-coordinate, and other types according to the joint configuration and motion coordinates. For anthropomorphic reasons, the relevant parts of the robot body are often called the base, waist, arm, wrist, hand (gripper or end effector), and walking part (for a mobile robot), respectively. However, when an existing robot walks, the contour of a target object in the acquired environment image cannot be accurately determined; meanwhile, accurate walking navigation cannot be provided.
In summary, the problems of the prior art are as follows: when the existing robot walks, the contour of a target object in an acquired environment image cannot be accurately determined; meanwhile, accurate walking navigation cannot be provided.
Disclosure of Invention
The invention aims to provide a robot walking control system and method based on computer vision.
The technical scheme adopted by the invention is as follows:
a robot walking control method based on computer vision includes the following steps:
step one, scanning the environment space where the robot is located with a camera through the environment space scanning module to obtain environment-space image data; inputting a robot control command with an input keyboard through the command input module;
step two, processing the image target object by the central control module through the image processing module using an image processing program: acquiring a color reference value of the target object in a target image frame, where the color reference value is the color value of the color that occurs most frequently within a preset area of the target object; acquiring a first area formed by the pixel points of the target image frame whose difference from the color reference value is less than or equal to a preset value; and taking the edge of the first area as the contour of the target object;
step three, identifying the image characteristic information by using an identification program through an obstacle identification module;
preprocessing the collected obstacle data, namely classifying the obstacle data according to obstacle image information, sequencing all position data of the same obstacle image information according to time stamps, and finally forming an original track sequence set of the robot;
processing the preprocessed track sequence set, namely finding track sequence sets which do not meet the obstacle avoidance tolerance of the robot according to the obstacle avoidance requirement of the robot, and then sequencing the sets according to frequency to obtain a safe distributable track data set;
analyzing the availability of the track data set after obstacle avoidance processing, and counting the data utility of the track data set;
step four, measuring the obstacle distance with a distance measuring sensor through the distance measuring module;
step five, navigating the walking route of the robot through the navigation module using a navigation program according to the obstacle identification and ranging data; performing layer-by-layer abstract representation of the visual image with the neural network VGG-16 to form M visual nodes representing the position and direction-angle information of the robot, where the response value of the j-th visual node is f_j;
establishing an annular spatial cortex and uniformly distributing N position nodes on it, where position node i and position node k mutually inhibit each other's firing through the feedback connection w_ik^a, from which the feedback connection w_ik^a is obtained; visual node j transmits information to position node i through the competitive connection v_ij, yielding the value m_i that this pathway contributes to the position-node response; calculating the response values of all position nodes on the annular spatial cortex from the feedback connections w_ik^a and the values m_i; the position nodes on the annular spatial cortex form a cognitive map while simultaneously realizing robot localization; and constructing a topological map from the differences of the estimated positions to realize robot navigation;
and step six, displaying the environment-space image, the control command, and the ranging data information on a display through the display module.
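To make the six-step flow concrete, the following minimal Python sketch strings the steps together as one control cycle; every module object, method name, and signature here is illustrative, not taken from the patent.

```python
# Illustrative sketch of the six-step control cycle described above.
# All module objects and method names are hypothetical stand-ins.
def control_cycle(scanner, keyboard, image_proc, recognizer, rangefinder,
                  navigator, display):
    frame = scanner.scan()                        # step one: environment-space image
    command = keyboard.read_command()             # step one: operator control command
    contour = image_proc.extract_contour(frame)   # step two: target contour
    obstacles = recognizer.identify(frame)        # step three: obstacle features
    distances = rangefinder.measure(obstacles)    # step four: obstacle distances
    route = navigator.plan(obstacles, distances)  # step five: walking-route navigation
    display.show(frame, command, distances)       # step six: image, command, ranging data
    return route
```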
Further, in the second step, acquiring the color reference value of the target object in the target image frame includes:
selecting a second area where the target object is located from the target image frame;
acquiring a plurality of pixel points in the preset area with the central point of the second area as an origin;
searching a color value corresponding to the color with the most occurrence times from the plurality of pixel points;
taking the color value corresponding to the color with the largest occurrence number as the color reference value;
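As a rough illustration of the four sub-steps above and the first-area test from step two, the sketch below computes the mode color of a square window around the center of the second area and then thresholds the frame against it; the window half-width, the tolerance, and the use of NumPy are assumptions of this sketch, not requirements of the patent.

```python
import numpy as np

def color_reference(frame: np.ndarray, center: tuple, half: int = 10) -> np.ndarray:
    """Return the most frequent color in a square window around `center`.

    `frame` is an H x W x 3 uint8 image; the window is the 'preset area'
    taken around the central point of the second area."""
    cy, cx = center
    patch = frame[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
    pixels = patch.reshape(-1, 3)
    # Count occurrences of each distinct color and keep the most frequent one.
    colors, counts = np.unique(pixels, axis=0, return_counts=True)
    return colors[np.argmax(counts)]

def first_region(frame: np.ndarray, ref: np.ndarray, tol: int = 20) -> np.ndarray:
    """Boolean mask of pixels whose difference from the color reference value
    is at most `tol` (the 'preset value'); its edge gives the contour."""
    diff = np.abs(frame.astype(int) - ref.astype(int)).sum(axis=2)
    return diff <= tol
```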
in step three, the obstacle identification module further performs the following (see the sketch after this list):
collecting and preprocessing the obstacle data, finally forming a number of original track sequence sets of the robot;
performing anonymization processing on the original track sequence set, which includes: finding the problem projection set VP that does not meet the obstacle avoidance tolerance of the robot from the original track sequence set; sorting all tracks in the problem projection set VP in descending order of their frequency of occurrence in the original track sequence set, and storing the result in the set FVP;
searching the first |PS| track projection records with the highest occurrence frequency in the set FVP and processing them, where the processing includes track suppression, until one of two stopping conditions is satisfied (both conditions appear only as formula images in the source and are not reproduced here), ending the anonymization processing;
and issuing the processed track sequence set.
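A compact sketch of this anonymization pass is given below, assuming each track projection is hashable; the obstacle-avoidance tolerance test is abstracted behind a caller-supplied predicate because the patent's two terminating inequalities survive only as formula images.

```python
from collections import Counter

def anonymize(tracks, violates_tolerance, ps_size):
    """Sketch of the track-suppression anonymization described above.

    tracks             : list of track projections (e.g. tuples of waypoints),
                         already classified per obstacle and sorted by timestamp.
    violates_tolerance : predicate standing in for the obstacle-avoidance
                         tolerance test, which the patent gives only as images.
    ps_size            : |PS|, the number of most frequent records to process.
    """
    vp = [t for t in set(tracks) if violates_tolerance(t)]  # problem projection set VP
    freq = Counter(tracks)
    fvp = sorted(vp, key=lambda t: freq[t], reverse=True)   # descending frequency -> FVP
    suppressed = set(fvp[:ps_size])                         # suppress the top |PS| records
    return [t for t in tracks if t not in suppressed]       # publishable track set
```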
Further, selecting a second region in the target image frame where the target object is located includes:
acquiring a third area of the target object in a target coordinate system;
mapping the coordinates corresponding to the third area to the target image frame to obtain target coordinates;
and taking the area in the target image frame corresponding to the target coordinate as the second area.
Further, the acquiring a third area of the target object in the target coordinate system includes:
acquiring a characteristic color value of the target object;
taking the difference between the color value of the target image frame and the characteristic color value as a first matrix;
calculating the third region according to the first matrix.
Further, said calculating the third region from the first matrix comprises:
determining a first coordinate point according to color values of elements of the first matrix, wherein the color value of the first coordinate point is the largest of the elements of the first matrix;
constructing a rectangular frame centered on the first coordinate point, wherein a preset numerical relationship exists between the average color value of the elements on the edge of the rectangular frame and the average color value of the elements inside the rectangular frame;
and when the target object is in the rectangular frame, taking the area where the rectangular frame is located as the third area.
Further, each element of the first matrix has three dimensions of red, green and blue, and determining a first coordinate point from a color value of an element of the first matrix comprises:
adding color values of three dimensions of red, green and blue of each element in the first matrix to obtain a second matrix, wherein each element of the second matrix has one dimension;
averaging the color values of a preset number of elements around each element in the second matrix, and taking the average value as the color value of the element in the second matrix;
assigning the color values of the elements of the color values in the second matrix within a first preset range to be zero to obtain a third matrix;
and taking the coordinate of the element with the maximum color value in the third matrix as the first coordinate point.
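The matrix pipeline above (difference matrix, channel sum, neighbourhood averaging, range zeroing, arg-max) can be sketched as follows; the 5x5 smoothing window and the [low, high] preset range are illustrative values not taken from the patent.

```python
import numpy as np

def first_coordinate_point(frame: np.ndarray, feature_color: np.ndarray,
                           k: int = 5, low: int = 0, high: int = 30) -> tuple:
    """Locate the first coordinate point as described above.

    The window size `k` and the 'first preset range' [low, high] are
    assumptions of this sketch."""
    # First matrix: difference between the frame and the characteristic color.
    m1 = np.abs(frame.astype(int) - feature_color.astype(int))
    # Second matrix: sum the red, green and blue dimensions into one.
    m2 = m1.sum(axis=2).astype(float)
    # Average each element with its k x k neighbourhood (simple box filter).
    pad = k // 2
    padded = np.pad(m2, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    m2 = windows.mean(axis=(2, 3))
    # Third matrix: zero out values falling within the first preset range.
    m3 = np.where((m2 >= low) & (m2 <= high), 0.0, m2)
    # The first coordinate point is the element with the maximum value.
    return np.unravel_index(np.argmax(m3), m3.shape)
```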
Further, the feedback connection w_ik^a is obtained by a formula that appears only as an image in the source and is not reproduced here, wherein a is the speed of the robot; J0 and J1 are weight modulation parameters; σ is a spatial-range modulation parameter; the positions of position node i and position node k on the annular spatial cortex are p_i and p_k, respectively; and t is the time.
Further, the value m_i contributed to the position-node response is obtained by a formula that appears only as an image in the source and is not reproduced here.
The short-term activity memory of position node i, denoted m̂_i, is used to enhance learning of the postsynaptic neuron response m_i; this learning rule also appears only as a formula image, wherein η is the learning rate.
The short-term activity memory of the postsynaptic neuron, m̂_i, is likewise given only as a formula image, wherein ε represents the degree to which short-term memory modulates the postsynaptic neuronal response; the neuron response is the value contributed to the position-node response.
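Because the attractor formulas exist only as images in the source, the sketch below substitutes a standard continuous-attractor form consistent with the named quantities (speed a, weight parameters J0 and J1, range parameter σ, positions p_i and p_k, time t). It should be read as a plausible reconstruction under those assumptions, not the patent's exact dynamics.

```python
import numpy as np

N = 100                                              # position nodes on the annular cortex
p = np.linspace(0.0, 2 * np.pi, N, endpoint=False)   # node positions p_i on the ring

def feedback_weights(a: float, t: float, J0: float = -0.1,
                     J1: float = 1.0, sigma: float = 0.5) -> np.ndarray:
    """Assumed form w_ik^a = J0 + J1 * exp(-d^2 / (2 sigma^2)), where d is the
    ring distance between p_i and p_k shifted by the travelled offset a*t."""
    d = p[:, None] - p[None, :] - a * t
    d = np.angle(np.exp(1j * d))                     # wrap distances to (-pi, pi]
    return J0 + J1 * np.exp(-d ** 2 / (2 * sigma ** 2))

def node_responses(w: np.ndarray, m: np.ndarray, steps: int = 50) -> np.ndarray:
    """Relax the position-node responses under feedback w and visual drive m_i."""
    r = np.zeros(len(m))
    for _ in range(steps):
        r = np.maximum(0.0, w @ r / len(r) + m)      # rectified recurrent update
        r /= max(r.max(), 1e-9)                      # keep activity bounded
    return r
```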
Another object of the present invention is to provide a computer vision-based robot walking control system for implementing the computer vision-based robot walking control method, the computer vision-based robot walking control system including:
the environment space scanning module is connected with the central control module and used for scanning the environment space where the robot is located through the camera to obtain environment space image data;
the command input module is connected with the central control module and used for inputting a robot control command through an input keyboard;
the central control module is connected with each of the other modules and used for controlling each module to work normally through a single chip microcomputer;
the image processing module is connected with the central control module and is used for processing the image target object through an image processing program;
the obstacle identification module is connected with the central control module and used for identifying the image characteristic information through an identification program;
the distance measurement module is connected with the central control module and used for measuring the obstacle distance through the distance measurement sensor;
the navigation module is connected with the central control module and used for navigating the walking route of the robot according to the obstacle identification distance measurement data through a navigation program;
and the display module is connected with the central control module and used for displaying the environment space image, the control instruction and the distance measurement data information through the display.
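Structurally, this is a hub-and-spoke design: each peripheral module registers with the central control module, which then sequences their work. A minimal sketch of that wiring follows; the class and method names are hypothetical. The design choice keeps the modules decoupled, so the single chip microcomputer only needs to drive one hub interface.

```python
# Hub-and-spoke wiring sketch: the seven peripheral modules all connect to
# the central control module. Names are illustrative, not from the patent.
class CentralControlModule:
    def __init__(self):
        self.modules = {}

    def connect(self, name, module):
        """Register a peripheral module with the hub."""
        self.modules[name] = module

    def broadcast(self, method, *args):
        """Invoke `method` on every connected module that implements it."""
        return {name: getattr(m, method)(*args)
                for name, m in self.modules.items() if hasattr(m, method)}

hub = CentralControlModule()
for name in ("environment_scanning", "command_input", "image_processing",
             "obstacle_identification", "ranging", "navigation", "display"):
    hub.connect(name, object())  # placeholder objects stand in for real drivers
```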
According to the technical scheme, the image processing module takes the color value of the most frequently occurring color as the color reference value, treats the pixel points in the target image frame whose color values equal or approach the color reference value as pixel points of the target object, obtains the region (the first area) occupied by the set of these pixel points, processes the first area, and takes its edge as the contour of the target object, achieving the technical effect of accurately determining the contour of the target object in the collected environment image. Meanwhile, according to a neural computing mechanism of environment perception and spatial memory, the navigation module completes a series of tasks such as visual processing, spatial representation, self-positioning, and map updating, realizing robot navigation with high biomimicry and strong autonomy in unknown environments. Motion errors are corrected directly from information sources such as texture and color contained in the visual stimulus, realizing high-precision robot navigation with multi-information fusion.
The image characteristic information is identified by the obstacle identification module using an identification program. The collected obstacle data are preprocessed: the obstacle data are classified according to the obstacle image information, all position data belonging to the same obstacle image information are sorted by timestamp, and the original track sequence set of the robot is finally formed. The preprocessed track sequence set is then processed: the track sequences that do not meet the obstacle avoidance tolerance of the robot are found according to the obstacle avoidance requirement, and the sets are sorted by frequency to obtain a safe, distributable track data set. The availability of the track data set after obstacle avoidance processing is analyzed and its data utility is counted, so that the robot can effectively avoid obstacles.
Drawings
The invention is described in further detail below with reference to the accompanying drawings and the detailed description.
fig. 1 is a flowchart of a robot walking control method based on computer vision according to an embodiment of the present invention.
Fig. 2 is a block diagram of a robot walking control system based on computer vision according to an embodiment of the present invention.
In the figure: 1. an environmental space scanning module; 2. an instruction input module; 3. a central control module; 4. an image processing module; 5. an obstacle identification module; 6. a distance measurement module; 7. a navigation module; 8. and a display module.
Detailed Description
In order to further understand the contents, features and effects of the present invention, the following embodiments are illustrated and described in detail with reference to the accompanying drawings.
The structure of the present invention will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, the robot walking control method based on computer vision provided by the invention comprises the following steps:
s101, scanning an environment space where the robot is located by using a camera through an environment space scanning module to obtain environment space image data; inputting a robot control command by using an input keyboard through a command input module;
s102, the central control module processes the image target object through the image processing module by using an image processing program;
s103, identifying the image characteristic information by using an identification program through an obstacle identification module;
s104, measuring the obstacle distance by using a distance measuring sensor through a distance measuring module;
s105, navigating the walking route of the robot by using a navigation module according to the obstacle identification distance measurement data by using a navigation program;
and S106, displaying the environment space image, the control command and the ranging data information by using the display through the display module.
In step S105, the navigation module navigates the walking route of the robot using a navigation program according to the obstacle identification and ranging data: performing layer-by-layer abstract representation of the visual image with the neural network VGG-16 to form M visual nodes representing the position and direction-angle information of the robot, where the response value of the j-th visual node is f_j; establishing an annular spatial cortex and uniformly distributing N position nodes on it, where position node i and position node k mutually inhibit each other's firing through the feedback connection w_ik^a, from which the feedback connection w_ik^a is obtained; visual node j transmits information to position node i through the competitive connection v_ij, yielding the value m_i that this pathway contributes to the position-node response; calculating the response values of all position nodes on the annular spatial cortex from the feedback connections w_ik^a and the values m_i; the position nodes on the annular spatial cortex form a cognitive map while simultaneously realizing robot localization; and constructing a topological map from the differences of the estimated positions to realize robot navigation.
In the process of identifying the image characteristic information, the obstacle identification module further performs the following: collecting and preprocessing the obstacle data, finally forming a number of original track sequence sets of the robot; performing anonymization processing on the original track sequence set, which includes: finding the problem projection set VP that does not meet the obstacle avoidance tolerance of the robot from the original track sequence set; sorting all tracks in the problem projection set VP in descending order of their frequency of occurrence in the original track sequence set, and storing the result in the set FVP; searching the first |PS| track projection records with the highest occurrence frequency in the set FVP and processing them, where the processing includes track suppression, until one of two stopping conditions is satisfied (both conditions appear only as formula images in the source), ending the anonymization processing; and issuing the processed track sequence set.
As shown in fig. 2, a robot walking control system based on computer vision provided by an embodiment of the present invention includes: the system comprises an environmental space scanning module 1, an instruction input module 2, a central control module 3, an image processing module 4, an obstacle identification module 5, a distance measurement module 6, a navigation module 7 and a display module 8.
The environment space scanning module 1 is connected with the central control module 3 and is used for scanning the environment space where the robot is located through the camera to obtain environment space image data;
the command input module 2 is connected with the central control module 3 and is used for inputting a robot control command through an input keyboard;
the central control module 3 is connected with each of the other modules and is used for controlling each module to work normally through the single chip microcomputer;
the image processing module 4 is connected with the central control module 3 and is used for processing the image target object through an image processing program;
the obstacle identification module 5 is connected with the central control module 3 and used for identifying the image characteristic information through an identification program;
the distance measuring module 6 is connected with the central control module 3 and used for measuring the obstacle distance through a distance measuring sensor;
the navigation module 7 is connected with the central control module 3 and used for navigating the walking route of the robot according to the obstacle identification distance measurement data through a navigation program;
and the display module 8 is connected with the central control module 3 and is used for displaying the environment space image, the control instruction and the distance measurement data information through a display.
The image processing module 4 provided by the invention has the following processing method:
(1) acquiring a color reference value of a target object in a target image frame, wherein the color reference value is the color value of the color that occurs most frequently within a preset area of the target object;
(2) acquiring a first area formed by the pixel points of the target image frame whose difference from the color reference value is less than or equal to a preset value;
(3) and taking the edge of the first area as the outline of the target object.
The invention provides a method for acquiring a color reference value of a target object in a target image frame, which comprises the following steps:
selecting a second area where the target object is located from the target image frame;
acquiring a plurality of pixel points in the preset area with the central point of the second area as an origin;
searching a color value corresponding to the color with the most occurrence times from the plurality of pixel points;
and taking the color value corresponding to the color with the largest occurrence number as the color reference value.
The selecting the second area where the target object is located in the target image frame provided by the invention comprises the following steps:
acquiring a third area of the target object in a target coordinate system;
mapping the coordinates corresponding to the third area to the target image frame to obtain target coordinates;
and taking the area in the target image frame corresponding to the target coordinate as the second area.
The invention provides a method for acquiring a third area of the target object in the target coordinate system, which comprises the following steps:
acquiring a characteristic color value of the target object;
taking the difference between the color value of the target image frame and the characteristic color value as a first matrix;
calculating the third region according to the first matrix.
The invention provides that calculating the third region from the first matrix comprises:
determining a first coordinate point according to color values of elements of the first matrix, wherein the color value of the first coordinate point is the largest of the elements of the first matrix;
constructing a rectangular frame centered on the first coordinate point, wherein a preset numerical relationship exists between the average color value of the elements on the edge of the rectangular frame and the average color value of the elements inside the rectangular frame;
and when the target object is in the rectangular frame, taking the area where the rectangular frame is located as the third area.
The invention provides a first matrix, each element of which has three dimensions of red, green and blue, and determining a first coordinate point according to a color value of an element of the first matrix comprises:
adding color values of three dimensions of red, green and blue of each element in the first matrix to obtain a second matrix, wherein each element of the second matrix has one dimension;
averaging the color values of a preset number of elements around each element in the second matrix, and taking the average value as the color value of the element in the second matrix;
assigning the color values of the elements of the color values in the second matrix within a first preset range to be zero to obtain a third matrix;
and taking the coordinate of the element with the maximum color value in the third matrix as the first coordinate point.
The navigation module 7 provided by the invention has the following navigation method:
1) performing layer-by-layer abstract representation of the visual image with the neural network VGG-16 to form M visual nodes representing the position and direction-angle information of the robot, where the response value of the j-th visual node is f_j;
2) establishing an annular spatial cortex and uniformly distributing N position nodes on it, where position node i and position node k mutually inhibit each other's firing through the feedback connection w_ik^a, from which the feedback connection w_ik^a is obtained;
3) visual node j transmits information to position node i through the competitive connection v_ij, yielding the value m_i that this pathway contributes to the position-node response;
4) calculating the response values of all position nodes on the annular spatial cortex from the feedback connections w_ik^a and the values m_i;
5) the position nodes on the annular spatial cortex form a cognitive map while simultaneously realizing robot localization;
6) constructing a topological map from the differences of the estimated positions to realize robot navigation.
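Step 1) can be prototyped with a pretrained VGG-16 from torchvision (assuming a recent torchvision release); reading the pooled convolutional activations out as the M visual-node responses f_j is our assumption, since the patent does not say which layer feeds the visual nodes.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Hedged sketch of step 1): abstract the camera frame layer by layer with
# VGG-16 and read out M visual-node responses f_j. Using the pooled feature
# map as the node vector is an assumption; the patent does not specify it.
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def visual_nodes(frame):
    """frame: H x W x 3 uint8 array -> 1-D tensor of visual-node responses f_j."""
    x = preprocess(frame).unsqueeze(0)
    with torch.no_grad():
        f = vgg.features(x)   # convolutional abstraction layers
        f = vgg.avgpool(f)    # 512 x 7 x 7 pooled feature map
    return f.flatten()        # M = 25088 visual-node responses
```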
The feedback connection w_ik^a provided by the invention is obtained by a formula that appears only as an image in the source and is not reproduced here, wherein a is the speed of the robot; J0 and J1 are weight modulation parameters; σ is a spatial-range modulation parameter; the positions of position node i and position node k on the annular spatial cortex are p_i and p_k, respectively; and t is the time.
The value m_i contributed to the position-node response provided by the invention is obtained by a formula that appears only as an image in the source and is not reproduced here.
The short-term activity memory of position node i, denoted m̂_i, is used to enhance learning of the postsynaptic neuron response m_i; this learning rule also appears only as a formula image, wherein η is the learning rate.
The short-term activity memory of the postsynaptic neuron, m̂_i, is likewise given only as a formula image, wherein ε represents the degree to which short-term memory modulates the postsynaptic neuronal response; the neuron response is the value contributed to the position-node response.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications, equivalent changes and modifications made to the above embodiment according to the technical spirit of the present invention are within the scope of the technical solution of the present invention.

Claims (10)

1. A robot walking control method based on computer vision, characterized in that it comprises the following steps:
step one, scanning the environment space where the robot is located with a camera through the environment space scanning module to obtain environment-space image data; inputting a robot control command with an input keyboard through the command input module;
step two, processing the image target object by the central control module through the image processing module using an image processing program: acquiring a color reference value of the target object in a target image frame, where the color reference value is the color value of the color that occurs most frequently within a preset area of the target object; acquiring a first area formed by the pixel points of the target image frame whose difference from the color reference value is less than or equal to a preset value; and taking the edge of the first area as the contour of the target object;
step three, identifying the image characteristic information by using an identification program through an obstacle identification module;
preprocessing the collected obstacle data, namely classifying the obstacle data according to obstacle image information, sequencing all position data of the same obstacle image information according to time stamps, and finally forming an original track sequence set of the robot;
processing the preprocessed track sequence set, namely finding out a track sequence set which does not meet the obstacle avoidance tolerance of the robot according to the obstacle avoidance requirement of the robot, and then sequencing the sets according to frequency to obtain a safe distributable track data set;
analyzing the availability of the track data set after obstacle avoidance processing, and counting the data utility of the track data set;
step four, measuring the obstacle distance with a distance measuring sensor through the distance measuring module;
step five, navigating the walking route of the robot through the navigation module using a navigation program according to the obstacle identification and ranging data; performing layer-by-layer abstract representation of the visual image with the neural network VGG-16 to form M visual nodes representing the position and direction-angle information of the robot, where the response value of the j-th visual node is f_j;
establishing an annular spatial cortex and uniformly distributing N position nodes on it, where position node i and position node k mutually inhibit each other's firing through the feedback connection w_ik^a, from which the feedback connection w_ik^a is obtained; visual node j transmits information to position node i through the competitive connection v_ij, yielding the value m_i that this pathway contributes to the position-node response; calculating the response values of all position nodes on the annular spatial cortex from the feedback connections w_ik^a and the values m_i; the position nodes on the annular spatial cortex form a cognitive map while simultaneously realizing robot localization;
constructing a topological map according to the difference value of the estimated positions to realize the navigation of the robot;
and step six, displaying the environment-space image, the control command, and the ranging data information on a display through the display module.
2. The robot walking control method based on computer vision according to claim 1, characterized in that: the second step of obtaining the color reference value of the target object in the target image frame comprises the following steps:
s2.1, selecting a second area where the target object is located from the target image frame;
s2.2, acquiring a plurality of pixel points in the preset area with the central point of the second area as an origin;
s2.3, searching a color value corresponding to the color with the largest occurrence frequency from the plurality of pixel points;
s2.4, taking the color value corresponding to the color with the largest occurrence frequency as the color reference value;
in the step three, the obstacle identification module further comprises:
s3.1, collecting and preprocessing obstacle data, and finally forming an original track sequence set of a plurality of robots;
s3.2, carrying out anonymization processing on the original track sequence set, wherein the anonymization processing comprises the following steps: finding a problematic projection set VP which does not meet the obstacle avoidance tolerance of the robot from the original track sequence set; sorting all the tracks in the problem projection set VP in a descending order according to the frequency of the tracks appearing in the original track sequence set, and storing the result in a set FVP;
s3.3, searching the front | PS | track projection records with the highest occurrence frequency in the set FVP, and processing the track projection records, wherein the processing comprises track suppression processing till the track suppression processing is finished
Figure FDA0002250905800000021
Or Ending the exception handling;
and S3.4, issuing the processed track sequence set.
3. The robot walking control method based on computer vision according to claim 2, characterized in that: selecting a second region where the target object is located in the target image frame specifically includes: acquiring a third area of the target object in a target coordinate system; mapping the coordinates corresponding to the third area to the target image frame to obtain target coordinates; and taking the area in the target image frame corresponding to the target coordinate as the second area.
4. The robot walking control method based on computer vision according to claim 3, characterized in that: the acquiring a third area of the target object in the target coordinate system specifically includes: acquiring a characteristic color value of the target object; taking the difference between the color value of the target image frame and the characteristic color value as a first matrix; calculating the third region according to the first matrix.
5. The robot walking control method based on computer vision according to claim 4, characterized in that: the specific method for calculating the third area according to the first matrix is as follows: determining a first coordinate point according to the color values of the elements of the first matrix, wherein the color value of the first coordinate point is the largest among the elements of the first matrix; constructing a rectangular frame centered on the first coordinate point, wherein a preset numerical relationship exists between the average color value of the elements on the edge of the rectangular frame and the average color value of the elements inside the rectangular frame; and when the target object is in the rectangular frame, taking the area where the rectangular frame is located as the third area.
6. The robot walking control method based on computer vision according to claim 5, characterized in that: each element of the first matrix has three dimensions of red, green and blue, and the specific method for determining the first coordinate point according to the color value of the element of the first matrix is as follows:
adding color values of three dimensions of red, green and blue of each element in the first matrix to obtain a second matrix, wherein each element of the second matrix has one dimension;
averaging the color values of a preset number of elements around each element in the second matrix, and taking the average value as the color value of the element in the second matrix;
assigning the color values of the elements of the color values in the second matrix within a first preset range to be zero to obtain a third matrix;
and taking the coordinate of the element with the maximum color value in the third matrix as the first coordinate point.
7. The robot walking control method based on computer vision according to claim 1, characterized in that: the feedback connection w_ik^a is obtained by a formula that appears only as an image in the source and is not reproduced here, wherein a is the speed of the robot; J0 and J1 are weight modulation parameters; σ is a spatial-range modulation parameter; the positions of position node i and position node k on the annular spatial cortex are p_i and p_k, respectively; and t is the time.
8. The robot walking control method based on computer vision according to claim 7, characterized in that: the value m_i contributed to the position-node response is obtained by a formula that appears only as an image in the source and is not reproduced here. The short-term activity memory of position node i, denoted m̂_i, is used to enhance learning of the postsynaptic neuron response m_i; this learning rule also appears only as a formula image, wherein η is the learning rate. The short-term activity memory of the postsynaptic neuron, m̂_i, is likewise given only as a formula image, wherein ε represents the degree to which short-term memory modulates the postsynaptic neuronal response; the neuron response is the value contributed to the position-node response.
9. A robot walking control system based on computer vision for implementing the robot walking control method based on computer vision according to any one of claims 1-8, characterized in that the system comprises:
the environment space scanning module is connected with the central control module and used for scanning the environment space where the robot is located through the camera to obtain environment space image data;
the command input module is connected with the central control module and used for inputting a robot control command through an input keyboard;
the central control module is connected with each of the other modules and used for controlling each module to work normally through the single chip microcomputer;
the image processing module is connected with the central control module and is used for processing the image target object through an image processing program;
the obstacle identification module is connected with the central control module and used for identifying the image characteristic information through an identification program;
the distance measurement module is connected with the central control module and used for measuring the obstacle distance through the distance measurement sensor;
the navigation module is connected with the central control module and used for navigating the walking route of the robot according to the obstacle identification distance measurement data through a navigation program;
and the display module is connected with the central control module and used for displaying the environment space image, the control instruction and the distance measurement data information through the display.
10. A computer-vision-based robot implementing the computer-vision-based robot walking control method of claim 1.
CN201911033909.6A 2019-10-29 2019-10-29 Robot walking control system and method based on computer vision Pending CN110774283A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911033909.6A CN110774283A (en) 2019-10-29 2019-10-29 Robot walking control system and method based on computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911033909.6A CN110774283A (en) 2019-10-29 2019-10-29 Robot walking control system and method based on computer vision

Publications (1)

Publication Number Publication Date
CN110774283A true CN110774283A (en) 2020-02-11

Family

ID=69387300

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911033909.6A Pending CN110774283A (en) 2019-10-29 2019-10-29 Robot walking control system and method based on computer vision

Country Status (1)

Country Link
CN (1) CN110774283A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022223023A1 (en) * 2021-04-22 2022-10-27 苏州宝时得电动工具有限公司 Self-moving device, moving trajectory adjusting method, and computer-readable storage medium
WO2023184223A1 (en) * 2022-03-30 2023-10-05 中国电子科技集团公司信息科学研究院 Robot autonomous positioning method based on brain-inspired space coding mechanism and apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103646249A (en) * 2013-12-12 2014-03-19 江苏大学 Greenhouse intelligent mobile robot vision navigation path identification method
WO2015024407A1 * 2013-08-19 2015-02-26 国家电网公司 Binocular-vision-based navigation system and method for a power robot
CN108090924A (en) * 2016-11-07 2018-05-29 深圳光启合众科技有限公司 Image processing method and device, robot
CN109240279A (en) * 2017-07-10 2019-01-18 中国科学院沈阳自动化研究所 A kind of robot navigation method of view-based access control model perception and spatial cognition neuromechanism
CN109257108A (en) * 2018-11-13 2019-01-22 广东水利电力职业技术学院(广东省水利电力技工学校) A kind of multiplicate controlling quantum communications protocol implementing method and system
CN109360044A (en) * 2018-09-20 2019-02-19 浙江医药高等专科学校 A kind of cross-border e-commerce sale management system and method


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200211

RJ01 Rejection of invention patent application after publication