CN113910225A - Robot control system and method based on visual boundary detection - Google Patents

Info

Publication number
CN113910225A
CN113910225A
Authority
CN
China
Prior art keywords
boundary
robot
working area
information
working
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111176145.3A
Other languages
Chinese (zh)
Inventor
张伟
吴一飞
鲍鑫亮
陈越凡
申中一
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bongos Robotics Shanghai Co ltd
Original Assignee
Bongos Robotics Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bongos Robotics Shanghai Co ltd filed Critical Bongos Robotics Shanghai Co ltd
Priority to CN202111176145.3A priority Critical patent/CN113910225A/en
Priority to PCT/CN2022/070672 priority patent/WO2023056720A1/en
Publication of CN113910225A publication Critical patent/CN113910225A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/161Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a robot control system and method based on visual boundary detection. The scheme comprises an image acquisition module, a visual boundary detection module and a movement control module. The image acquisition module acquires information on the environment surrounding the robot; the visual boundary detection module detects the working boundary of the working area, generates machine-identifiable boundary information of the working area from the natural boundary, and sends the boundary information to the movement control module; and the movement control module controls the robot's advance according to the received boundary information, adjusting the robot's heading angle and travel distance according to the calculation result. The scheme integrates well with visual boundary detection: for an outdoor working area without any preset physical boundary, it can accurately identify the corresponding working boundary and accurately adjust the subsequent route once the equipment is detected to have reached the boundary of the working area.

Description

Robot control system and method based on visual boundary detection
Technical Field
The invention relates to the technical field of robot control, in particular to a control technology based on visual boundary detection.
Background
With the progress of science and technology, more and more autonomous mobile robots appear in people's daily life. These robots rely on their own control systems to complete assigned tasks within a fixed area, without manual operation or intervention. Existing mainstream control techniques include collision control based on a collision sensor.
For indoor mobile robots, collision-based control is widely used because natural physical barriers such as walls are present: when the robot collides with an obstacle, the collision sensor sends a signal to adjust the robot's advancing direction. For outdoor mobile robots, however, there is no physical obstacle comparable to an indoor wall, so a collision sensor is of no use when the robot reaches the boundary of the work area.
For the characteristics of the outdoor environment, visual boundary detection schemes have been designed to perceive the surrounding environment and detect whether the robot has reached the boundary of the working area. However, once a boundary detection signal is triggered, if the robot continues to move forward, it may cross the boundary, which can lead to serious accidents.
However, existing control schemes cannot be well integrated with visual boundary detection, so a robot based on visual boundary detection cannot accurately adjust its subsequent route after detecting the boundary of the working area.
Thus, there is a need in the art for a robot control scheme that can be based on visual boundary detection.
Disclosure of Invention
In view of the problems of existing movement control schemes for outdoor automatic mobile equipment that detects area boundaries visually, the invention aims to provide a robot control system based on visual boundary detection and a corresponding control method, so that a robot based on visual boundary detection can accurately adjust its subsequent route after detecting the boundary of the working area.
In order to achieve the above object, the present invention provides a robot control system based on visual boundary detection, comprising: an image acquisition module, a visual boundary identification module and a movement control module;
the image acquisition module acquires image information of the surrounding environment of the robot in real time;
the visual boundary identification module identifies a working boundary of a working area according to the image information acquired by the image acquisition module;
the mobile control module calculates and judges whether the mower reaches the boundary of the working area according to the working boundary information of the working area identified by the visual boundary identification module, and if the mower reaches the boundary of the working area, the working state of the robot is adjusted to prevent the robot from crossing the boundary of the working area; and if the boundary of the working area is not reached, controlling the robot to keep the current working state.
Further, the visual boundary identification module identifies a working area boundary based on a deep neural network and an image processing mode.
Further, the vision boundary identification module segments the acquired robot surrounding environment image based on a deep neural network to obtain a corresponding neural network segmentation map, and forms a corresponding workable area and a non-workable area in the neural network segmentation map, wherein a boundary between the workable area and the non-workable area forms a working area boundary.
Further, the boundary information of the working area includes contour information of the boundary of the working area.
Further, the mobile control module acquires the boundary contour information of the working area identified by the visual boundary identification module and a plurality of contour point information corresponding to the contour of the robot, calculates the relative relationship between the contour points and the boundary contour of the working area, judges whether the robot moves to the boundary position of the working area according to the calculation result, and controls the robot to adjust the moving direction according to the working mode and the boundary information of the working area when the robot moves to reach the boundary of the working area.
In order to achieve the above object, the present invention provides a robot control method based on visual boundary detection, comprising:
acquiring image information of the surrounding environment of the equipment in real time;
identifying a natural working boundary of the working area according to the acquired image information, and generating identifiable working area boundary information from the natural working boundary;
calculating and judging whether the robot (e.g., a mower) has reached the boundary of the working area based on the generated boundary information; if it has, adjusting the working state of the robot to prevent it from crossing the boundary; if it has not, controlling the robot to keep its current working state.
Further, the method generates identifiable work area boundary information based on a deep neural network and an image processing algorithm.
Further, the method includes the steps that the acquired robot surrounding environment image is segmented on the basis of the deep neural network to obtain a corresponding neural network segmentation graph, a corresponding workable area and a corresponding unworkable area are formed in the neural network segmentation graph, and a boundary between the workable area and the unworkable area forms a working area boundary.
Further, the boundary information of the working area includes contour information of the boundary of the working area.
Further, the method acquires the identified contour information of the boundary of the working area and a plurality of contour point information corresponding to the contour of the robot in real time, calculates the relative relationship between the contour points and the contour of the boundary of the working area, judges whether the robot advances to the boundary position of the working area according to the calculation result, and controls the robot to adjust the advancing direction according to the working mode and the boundary information of the working area when the robot advances to reach the boundary of the working area.
The scheme provided by the invention can be well fused with a visual boundary detection scheme, can accurately identify the working boundary corresponding to the outdoor working area aiming at the outdoor working area without any preset physical boundary, and accurately adjusts the subsequent advancing route after detecting that the equipment reaches the boundary of the working area.
The scheme provided by the invention can accurately identify the relative position between the equipment and the boundary of the working area, can accurately control the advancing route of the equipment when the equipment reaches the boundary of the working area, avoids the equipment from crossing the boundary of the working area, and improves the reliability and safety of automatic operation of the equipment.
Drawings
The invention is further described below in conjunction with the appended drawings and the detailed description.
FIG. 1 is a schematic diagram of a robot control system based on visual boundary detection according to an embodiment of the present invention;
FIG. 2 is an exemplary diagram of a working path of a robot on a lawn without any boundary markers preset in an embodiment of the present invention;
FIG. 3 is an exemplary diagram of a visually perceived picture obtained at a point B of a boundary in an example of the present invention;
FIG. 4 is a graph illustrating an exemplary effect of a segmentation map of a neural network formed for the visual perception picture of FIG. 3 in an example of the present invention;
FIG. 5 is an exemplary diagram of a visual perception picture obtained when a turn is made in an embodiment of the present invention;
FIG. 6 is a graph of an exemplary effect of a segmentation map of a neural network formed for the visual perception picture shown in FIG. 5 in an example of the present invention;
FIG. 7 is an exemplary view of a visually perceived picture obtained in mode 1 according to an embodiment of the present invention;
FIG. 8 is a graph of an exemplary effect of a segmentation map of a neural network formed for the visual perception picture of FIG. 7 in an example of the present invention;
FIG. 9 is a diagram of exemplary effects of device out-of-range in an example of the invention;
FIG. 10 is a diagram illustrating exemplary effects of non-equipment out of bounds in an example of the invention;
FIG. 11 is a diagram illustrating exemplary effects of partial out-of-range equipment in an example of the present invention.
Detailed Description
In order to make the technical means, the creation characteristics, the achievement purposes and the effects of the invention easy to understand, the invention is further explained below by combining the specific drawings.
An outdoor working area has no definite boundary-limiting body such as an indoor wall, which poses a problem for outdoor automatic operation equipment. To confine such equipment within an outdoor working area, the conventional method is to preset a corresponding physical boundary along the edge of the area and to match it with the equipment's sensing technology, thereby limiting the equipment to the preset working area. By way of example, the physical boundary here is a pre-buried boundary copper wire or the like.
The scheme abandons the control scheme of the outdoor automatic operation equipment requiring the corresponding physical boundary to be preset at the boundary of the outdoor working area, provides the control scheme of the outdoor automatic operation equipment based on visual boundary detection, can realize the accurate identification of the working boundary corresponding to the outdoor working area in the outdoor working area under the condition of not presetting any physical boundary, and accurately adjusts the subsequent advancing route after detecting that the equipment reaches the boundary of the working area, thereby avoiding the equipment from crossing the boundary of the outdoor working area and ensuring the reliability and the safety of the work of the outdoor automatic operation equipment.
The control scheme of the outdoor automatic operation equipment based on the visual boundary detection is characterized in that the image information of the surrounding environment of the equipment (namely the outdoor automatic operation equipment) is acquired in real time;
identifying a natural working boundary of the working area according to the acquired image information, and generating identifiable working area boundary information from the natural working boundary;
judging whether the robot (e.g., a mower) has reached the boundary based on the generated working-area boundary information; if it has, adjusting the working state of the robot to prevent it from crossing the boundary; if it has not, controlling the robot to keep its current working state and continue to move forward.
The working state here includes, but is not limited to, the forward direction, forward speed, etc. of the robot.
According to the scheme, when the image information of the surrounding environment of the equipment is obtained, the boundary of the working area is identified based on the deep neural network and the image processing algorithm, so that corresponding boundary information of the working area is formed.
According to the scheme, when identifiable working area boundary information is generated, the acquired equipment surrounding environment image is segmented based on the deep neural network to obtain a corresponding neural network segmentation map.
And forming corresponding workable regions and unworkable regions in the neural network segmentation graph, wherein the boundary between the workable regions and the unworkable regions is the boundary of the workable regions.
By way of example, the workable area and the unworkable area may be represented in two different colors, and the boundary of the two colors is the boundary of the natural working area.
Fig. 4 is a diagram of a neural network segmentation obtained by performing neural network segmentation on the environment image around the device shown in fig. 3, in which a blue area indicates a workable area, a gray area indicates an unworkable area, and a boundary between the two color areas is a boundary of a natural working area.
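To make the boundary-extraction step concrete, the following minimal sketch (not part of the patent; every name and value is illustrative) reduces a two-class segmentation mask to a per-column boundary position:

```python
import numpy as np

def boundary_rows(mask: np.ndarray) -> np.ndarray:
    """For each column of a binary segmentation mask (1 = workable,
    0 = unworkable), return the row index of the first workable pixel
    scanning from the top, or -1 if the column has none."""
    has_work = mask.any(axis=0)
    first = mask.argmax(axis=0)  # argmax finds the first 1 per column
    return np.where(has_work, first, -1)

# A toy 4x4 mask: the lower-left region is workable.
mask = np.array([[0, 0, 0, 0],
                 [0, 0, 0, 0],
                 [1, 1, 0, 0],
                 [1, 1, 1, 0]])
rows = boundary_rows(mask)  # -> [2, 2, 3, -1]
```

The row indices trace the natural working-area boundary in image coordinates, which later steps can compare against the device position.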
On the basis, the boundary information of the working area is further calculated, wherein the boundary information of the working area comprises boundary outline information and distance information between the boundary and the equipment.
Here, it should be noted that the boundary information of the working area may be calculated to obtain other information as needed.
According to the scheme, when the advancing route is accurately adjusted, the result of visual identification and detection of the boundary of the working area is obtained in real time, whether the equipment advances to the boundary position of the working area is calculated and judged based on the obtained result, and the working state of the equipment is adjusted in time when the equipment advances to the boundary position of the working area, so that the equipment is prevented from crossing the boundary of the working area.
Further, the scheme specifically acquires boundary information through a visual identification detection result of the boundary of the working area, wherein the boundary information is mainly contour information of the boundary of the working area; and calculating the distance of the equipment relative to the boundary of the working area through the parameters of the image acquisition device on the equipment based on the contour information of the boundary of the working area.
On the basis, whether the equipment reaches the boundary can be judged in real time by combining the acquired contour information of the working area and the distance information of the equipment relative to the boundary of the working area, and the adjustment is carried out according to the acquired contour information of the boundary of the working area after the boundary is reached.
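The patent does not give the distance formula, but under a simple pinhole-camera assumption (forward-looking camera at a known height over flat ground, focal length in pixels) the distance to a boundary point could be estimated as follows; all parameters here are hypothetical:

```python
def ground_distance(v_px: float, focal_px: float, cam_height_m: float) -> float:
    """Distance along the ground to a point imaged v_px pixels below the
    principal point, for a forward-looking camera cam_height_m above a
    flat ground plane (simple pinhole model; parameters illustrative)."""
    if v_px <= 0:
        raise ValueError("point must lie below the horizon")
    return focal_px * cam_height_m / v_px

# e.g. focal length 800 px, camera 0.3 m high, boundary imaged 60 px
# below the principal point -> 4.0 m away
d = ground_distance(60, 800, 0.3)
```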
Aiming at the mode of determining the boundary of the equipment relative to the working area, the scheme also provides a mode of judging whether the equipment reaches the boundary of the working area directly based on the contour information of the boundary of the working area.
In this way, for example, boundary information is obtained through the visual identification and detection result of the working area boundary, the boundary information specifically being the contour information of the working area boundary. Meanwhile, when visual boundary identification is performed on the image of the equipment's surroundings, corresponding contour points are selected in the generated neural network segmentation map and their position information is determined. The position information of the contour points is then combined with the determined boundary contour information to calculate whether the contour point positions meet a corresponding threshold, thereby judging whether the equipment has reached the working area boundary. After the boundary is reached, the route is adjusted according to the acquired boundary contour information.
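A contour-point threshold test of the kind described might be sketched as follows (the margin value and all names are assumptions, not from the patent):

```python
def reached_boundary(contour_rows, device_row, margin_px=5):
    """Device contour points sit at image row device_row (near the frame
    bottom); contour_rows gives the workable-boundary row in the same
    columns (-1 = no workable pixel).  If any boundary row comes within
    margin_px of the device row, the device is treated as having reached
    the working-area boundary.  Names and margin are illustrative."""
    return any(r >= 0 and device_row - r <= margin_px for r in contour_rows)

# Boundary at rows 100 and 150, device front edge at row 152:
# the second column is within the 5-pixel margin.
at_boundary = reached_boundary([100, 150, -1], device_row=152)  # -> True
```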
Aiming at the mode of determining the boundary of the equipment relative to the working area, the scheme also provides a mode of judging whether the equipment reaches the boundary of the working area directly based on the area of the working area.
For example, in this manner, boundary information is obtained through the visual identification and detection result of the working area boundary, the boundary information specifically being the contour information of the working area boundary. The area of the working area is calculated from this contour information in the neural network segmentation map that was determined during visual boundary identification of the surrounding-environment image. Whether the determined working area satisfies the corresponding threshold is then calculated, thereby judging whether the equipment has reached the working area boundary. After the boundary is reached, the route is adjusted according to the acquired boundary contour information.
For the calculation of the working area in the neural network segmentation map, this embodiment preferentially determines the area by counting the number of pixels belonging to the workable region in the map, which ensures both the accuracy of the area calculation and the calculation speed, and prevents the processor's computing power from being excessively consumed.
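The pixel-counting area check might look like this minimal sketch (the threshold fraction is an illustrative assumption):

```python
import numpy as np

def workable_area_ok(mask: np.ndarray, min_fraction: float = 0.2) -> bool:
    """Count workable pixels (mask == 1) and compare the count against a
    threshold expressed as a fraction of the whole frame; the default
    0.2 is illustrative, not from the patent."""
    return int(mask.sum()) >= min_fraction * mask.size

ok = workable_area_ok(np.ones((4, 4), dtype=int))  # -> True
```

A simple sum over a binary mask is a single vectorized pass, which matches the patent's stated goal of keeping the area computation cheap.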
Furthermore, when the working state of the equipment is adjusted, a control instruction is formed by fusing the working mode set by the equipment and the result of visual recognition detection so as to control the equipment to adjust the moving state and continue to move forward in the working area.
The detection result of the visual identification of the boundary of the working area is boundary contour information determined based on a neural network segmentation map and state information of the equipment relative to the boundary contour.
The state information of the device relative to the boundary contour may be distance information of the device relative to the boundary contour, area information of a working area between the device and the boundary contour, and the like.
For example, according to the scheme, based on the processed work area neural network segmentation map and the work area boundary information, the work area boundary visual identification detection result is calculated, and corresponding boundary contour information and corresponding work area information are obtained.
In the scheme, in order to ensure the adjustment precision when the working state is adjusted, the direction information of the working area relative to the current working state of the equipment is calculated and determined preferentially on the basis of the obtained boundary contour information and the working area information; on the basis, the determined direction information of the working area is fused with a preset working mode of the equipment to form an equipment moving state adjusting instruction, so that the equipment moving state is accurately adjusted.
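One hedged way to fuse the direction information with a preset working mode into a movement-adjustment instruction is to compare the remaining workable area on each side of the frame; the "zigzag" mode name and the decision rule itself are illustrative assumptions, not the patent's method:

```python
import numpy as np

def turn_command(mask: np.ndarray, mode: str = "zigzag") -> str:
    """Decide which way to turn at the boundary by comparing the amount
    of workable area in the left and right halves of the segmentation
    mask; in the hypothetical 'zigzag' mode the robot turns toward the
    side with more remaining workable area."""
    mid = mask.shape[1] // 2
    left, right = mask[:, :mid].sum(), mask[:, mid:].sum()
    if mode == "zigzag":
        return "turn_left" if left > right else "turn_right"
    return "reverse"

mask = np.zeros((4, 4), dtype=int)
mask[:, :2] = 1                 # workable area lies to the left
cmd = turn_command(mask)        # -> "turn_left"
```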
When applied specifically, the above control scheme for outdoor automatic operation equipment based on visual boundary detection can be implemented as a corresponding software program, presented as the equipment's control system; it runs on the outdoor automatic operation equipment and confines the equipment to its working area based on visual boundary detection.
Referring to fig. 1, a system diagram of the control system of the outdoor automatic operation equipment based on the visual boundary detection according to the present embodiment is shown.
The control system 100 for outdoor automatic operation equipment based on visual boundary detection mainly comprises an image acquisition module 110, a visual boundary recognition module 120 and a movement control module 130, which cooperate to confine the outdoor automatic operation equipment within its outdoor working area.
For convenience of description, the outdoor automatic operation equipment will be hereinafter referred to simply as equipment.
The image acquisition module 110 in the system acquires the image information of the surrounding environment of the outdoor automatic operation equipment in real time.
The visual boundary identification module 120 in the system performs data interaction with the image acquisition module 110, and can identify the working boundary of the working area according to the image information acquired by the image acquisition module.
Specifically, the visual boundary identification module 120 may identify the working area boundary based on the deep neural network and the image processing algorithm with respect to the image information of the surrounding environment of the device acquired by the image acquisition module 110 to form corresponding working area boundary information.
When the visual boundary identifying module 120 identifies, detects and generates identifiable work area boundary information, the acquired device surrounding environment image is segmented specifically based on the deep neural network to obtain a corresponding work area neural network segmentation map, and a corresponding workable area and a non-workable area are formed in the work area neural network segmentation map, where a boundary between the workable area and the non-workable area is a work area boundary.
By way of example, the workable area and the unworkable area may be represented in two different colors, and the boundary of the two colors is the boundary of the natural working area.
Fig. 4 is a diagram of a neural network segmentation obtained by performing neural network segmentation on the environment image around the device shown in fig. 3, in which a blue area indicates a workable area, a gray area indicates an unworkable area, and a boundary between the two color areas is a boundary of a natural working area.
On the basis, the boundary information of the working area is further calculated, wherein the boundary information of the working area comprises boundary outline information and distance information between the boundary and the equipment.
The movement control module 130 in the present system is in data communication with the visual boundary identification module 120 and with the drive control components of the device.
The mobile control module 130 can control the equipment to move forward according to the working boundary information of the working area identified by the visual boundary identification module 120, and adjust the working state of the equipment according to the calculation result.
The working states here include, but are not limited to: the direction of travel, angle, distance moved, etc. of the device.
Specifically, the mobile control module 130 obtains the result of the visual recognition detection of the visual boundary recognition module 120 in real time, calculates and determines whether the device has advanced to the boundary position of the work area based on the obtained result, and adjusts the working state of the device in time when the device has advanced to the boundary position of the work area, so as to prevent the device from crossing the boundary of the work area.
As for the manner and process of the mobile control module 130 specifically determining whether the device reaches the boundary of the working area, as mentioned above, the detailed description is omitted here.
The mobile control module 130 further forms a control command based on the result of the visual recognition detection performed by the visual boundary recognition module 120 and the working mode set by the device, so as to control the device to adjust the mobile state and move forward in the working area.
As for the manner and process of the mobile control module 130 for specifically controlling the device to adjust the mobile state when the device reaches the boundary of the working area, as mentioned above, details are not described here.
The resulting control system 100 can operate corresponding outdoor operation equipment (such as a robot) directly in an outdoor working area without any artificial boundary calibration or markers. It automatically detects the working boundary of the area, generates machine-recognizable boundary information from the natural boundary, performs precise behavior control upon receiving that boundary information, and accurately adjusts the subsequent route, ensuring that the robot always works within its working area and preventing the risks caused by boundary crossing.
For example, when the outdoor automatic operation equipment control system 100 runs on a corresponding outdoor operation robot, the range of the travelable region in the real-time acquired image is calculated from the size parameters of the equipment itself (such as its width) and the focal length parameter of the mounted image acquisition module; the pixels within that range are counted to determine the area of the travelable region, and the pixel count is compared against a condition. If the condition is met, the boundary has not been reached and the equipment can continue to advance; otherwise, the equipment is judged to have reached the boundary.
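The width-and-focal-length condition described above might be sketched as follows (pinhole projection, principal point at the image centre, and the 0.95 threshold are all assumptions):

```python
import numpy as np

def can_continue(mask, width_m, focal_px, cam_height_m, lookahead_m=1.0):
    """Check whether the corridor the device is about to traverse is
    workable: project the device width and a lookahead distance into
    pixel units (pinhole model) and require that nearly all pixels in
    that region are workable.  All parameters are illustrative."""
    h, w = mask.shape
    # pixel width of the device at lookahead_m in front of the camera
    width_px = int(focal_px * width_m / lookahead_m)
    # image row of the ground point lookahead_m ahead
    v_px = int(focal_px * cam_height_m / lookahead_m)
    c0 = max(0, w // 2 - width_px // 2)
    c1 = min(w, w // 2 + width_px // 2)
    r0 = min(h - 1, h // 2 + v_px)  # assume principal point at image centre
    corridor = mask[r0:, c0:c1]
    return corridor.size > 0 and corridor.mean() > 0.95

# Fully workable frame: the device may continue to advance.
ok = can_continue(np.ones((100, 100), dtype=int),
                  width_m=0.5, focal_px=100, cam_height_m=0.3)
```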
Continuing with the above example, the process of the present outdoor automatic working equipment control system 100 controlling the outdoor working robot to automatically work in the work area is further exemplified below.
The control system acquires the surrounding environment image information of the equipment through an image acquisition module mounted on the equipment, wherein the environment image is preferably an environment image in the advancing direction of the equipment (such as fig. 9a, fig. 10a and fig. 11 a).
The visual boundary identifying module 120 in the system segments the acquired device surrounding environment image based on the deep neural network to obtain a corresponding neural network segmentation map (as shown in fig. 9b, fig. 10b, and fig. 11 b).
And forming corresponding workable regions and unworkable regions in the neural network segmentation graph, wherein the boundary between the workable regions and the unworkable regions is the boundary of the workable regions. The workable area and the unworkable area are represented by two different colors, and the boundary of the two colors is the boundary of the natural working area.
The focal length parameters of the image acquisition module mounted on the equipment are then used to calculate the coordinates of each pixel of the neural network segmentation map in the visual imaging coordinate system.
On this basis, the distance from each pixel of the neural network segmentation map to the equipment can be calculated, which naturally includes the distance from the boundary line in the map to the equipment.
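The patent does not give the projection formula; under a standard flat-ground pinhole-camera model, a per-pixel ground distance can be sketched as below (all parameter names are assumptions: camera mounting height, downward tilt, vertical focal length in pixels, and principal-point row):

```python
import math

def pixel_ground_distance(v: float, cy: float, fy: float,
                          cam_height: float, tilt_rad: float) -> float:
    """Estimate the ground distance (metres) from the equipment to the
    point imaged at vertical pixel coordinate v, assuming a flat ground
    plane, a camera mounted cam_height metres up and tilted down by
    tilt_rad, with vertical focal length fy (pixels) and principal-point
    row cy.  A standard flat-ground pinhole sketch, not the patent's
    exact formula."""
    # Angle of the viewing ray below the horizontal for this pixel row.
    ray_angle = math.atan2(v - cy, fy) + tilt_rad
    if ray_angle <= 0:
        return math.inf  # a ray at or above the horizon never meets the ground
    return cam_height / math.tan(ray_angle)
```

A pixel imaged at the principal point of a camera mounted 0.5 m high and tilted down 45 degrees lies 0.5 m ahead; rows at or above the horizon map to infinity.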
After the neural network segmentation-map processing of the surrounding environment image is completed (that is, after visual identification and detection), whether the equipment has reached the working-area boundary is judged from the obtained result.
For example, it may first be determined whether the obtained neural network segmentation map contains any workable area at all; if it contains none, the head of the device has already crossed the boundary.
Referring to fig. 9, the neural network segmentation map (fig. 9b) corresponding to the surrounding environment image shown in fig. 9a contains no workable-area pixels (shown in blue), which indicates that the head of the device has entirely crossed the boundary; at this point the drive control unit in the device must adjust the moving state of the device according to the set operation mode.
When the obtained neural network segmentation map does contain a workable area, it is further judged whether the equipment has reached the boundary and needs to adjust its moving state.
First, the contour points corresponding to the device position, such as points A and B in fig. 10, are located in the neural network segmentation map according to the size parameters of the device itself.
By way of example, points A and B are determined from the width parameter of the device: the line connecting A and B corresponds to the width of the front end of the device, so determining the exact positions of A and B in the neural network segmentation map reveals the position of the device relative to the boundary.
Through the processing of the neural network segmentation map, the coordinates of contour points A and B in the map can be determined, and the state of the device relative to the boundary can then be judged from those coordinates.
(1) If the comparison shows that both contour point A and contour point B lie inside the workable area of the neural network segmentation map, the device has not crossed the boundary. In this case, the distance from the device to the working-area boundary line, or the area of the workable region in the segmentation map, can be further calculated to judge whether the device has reached the boundary. If the boundary has not been reached and the workable area ahead leaves enough room to move, the device is controlled to keep its current moving state and continue moving; if the boundary has been reached and the workable area ahead does not leave enough room, the device is controlled to adjust its current moving state before continuing.
Referring to fig. 10, in the neural network segmentation map (fig. 10b) corresponding to the surrounding image shown in fig. 10a, contour points A and B in the map are determined to correspond to the current position of the device.
Based on the aforementioned processing of the neural network segmentation map, the coordinates of contour points A and B in the map can be determined. Meanwhile, in the neural network segmentation map shown in fig. 10b, the coordinates of point C and point D, the leftmost and rightmost points of the workable area in the map, are acquired.
Whether the equipment is out of bounds is determined by comparing the coordinates of points C and D with those of contour points A and B.
Taking fig. 10 as an example, the leftmost coordinate of the workable area in the segmentation map (the coordinate of point C) and the rightmost coordinate (the coordinate of point D) are obtained. Comparing the coordinate of point C with that of point A shows that C lies to the left of A, so a workable area exists to the left of the device and the device has not crossed the boundary on its left side. Likewise, comparing the coordinate of point D with that of point B shows that D lies to the right of B, so lawn also exists to the right of the device and the device has not crossed the boundary on its right side.
On this basis, it can be further judged whether the front of the equipment has reached the working-area boundary and/or whether enough room remains to move, so as to decide whether the moving state needs to be adjusted.
(2) If the comparison shows that only one of contour points A and B lies inside the workable area of the neural network segmentation map, part of the device has already crossed the boundary, and the current moving state must be adjusted before continuing.
Referring to fig. 11, in the neural network segmentation map (fig. 11b) corresponding to the surrounding image shown in fig. 11a, contour points A and B in the map are determined to correspond to the current position of the device.
Based on the aforementioned processing of the neural network segmentation map, the coordinates of contour points A and B in the map can be determined. Meanwhile, in the neural network segmentation map shown in fig. 11b, the coordinates of point C and point D, the leftmost and rightmost points of the workable area in the map, are acquired.
Whether the equipment is out of bounds is determined by comparing the coordinates of points C and D with those of contour points A and B.
Taking fig. 11 as an example, the leftmost coordinate of the workable area in the segmentation map (the coordinate of point C) and the rightmost coordinate (the coordinate of point D) are obtained. Comparing the coordinate of point C with that of point A shows that C lies to the right of A, so no workable area exists to the left of the device and the device has crossed the boundary on its left side. Meanwhile, comparing the coordinate of point D with that of point B shows that D lies to the right of B, so lawn still exists to the right of the device and the right side has not crossed the boundary.
Since the left side of the device is judged to be out of bounds while a workable area remains on its right side and the right side has not crossed the boundary, the device is controlled to adjust its moving state and move towards the area on its right.
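The comparisons in cases (1) and (2) reduce to comparing column coordinates; a minimal sketch (the function name is hypothetical, with column indices increasing to the right as in figs. 10 and 11):

```python
def out_of_bounds_sides(ax: int, bx: int, cx: int, dx: int) -> tuple[bool, bool]:
    """Compare the device contour columns A (left edge) and B (right edge)
    with the leftmost (C) and rightmost (D) workable-area columns in the
    segmentation map, as in figs. 10 and 11.  Returns (left_out, right_out):
    the left side is out of bounds when C lies to the right of A, and the
    right side when D lies to the left of B."""
    left_out = cx > ax    # no workable area remains left of the device edge
    right_out = dx < bx   # no workable area remains right of the device edge
    return left_out, right_out
```

With C left of A and D right of B (the fig. 10 case) neither side is out of bounds; with C right of A (the fig. 11 case) the left side is out of bounds and the device should move towards its right.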
Therefore, in practical application, this control system for outdoor automatic working equipment can fuse the size parameters of the equipment itself with the focal length parameters of the mounted image acquisition module to achieve accurate automatic operation control of the equipment.
To make the purpose, technical solution and advantages of the present invention clearer, the solution provided by the invention is demonstrated below, taking a lawn robot working on a specific outdoor lawn as an example.
In this embodiment, the outdoor automatic working equipment control system based on visual boundary detection runs inside the lawn robot, forming a corresponding visual boundary sensor in the robot.
The example targets a specific outdoor lawn with no artificial boundary calibration or setup of any kind.
The lawn robot 200 so configured works on this specific outdoor lawn; at start-up, it may be located at any position in the map. As shown in fig. 2, the initial position is A, a random position in the map, and the robot advances in a random direction, assumed for illustration to be the direction from point A towards point B in the figure.
While the lawn robot advances towards point B on the boundary of the working area, the visual boundary sensor in the lawn robot 200 analyzes the visual perception picture it obtains, as shown in fig. 3.
Based on the visual boundary detection, a neural network segmentation map as shown in fig. 4 is formed, from which working-area information, non-working-area information, boundary contour information and the like are obtained, so that the identified natural boundary is converted into machine-recognizable working-area boundary information, as shown in fig. 4.
At this time, the visual boundary sensor in the lawn robot 200 judges from the processed working-area boundary information whether the robot has reached the boundary. As shown in fig. 4, the lawn pixels within the travelable region of the neural network segmentation map are statistically analyzed; the control system determines from the statistics that the robot has not yet reached the working boundary, and continues to control the lawn robot to work in its current driving state.
When the lawn robot reaches the working-area boundary at point B, the visual boundary sensor in the lawn robot 200 analyzes the visual perception picture it obtains, as shown in fig. 5.
Based on the visual boundary detection, a neural network segmentation map as shown in fig. 6 is formed, from which working-area information, non-working-area information, boundary contour information and the like are obtained, so that the identified natural boundary is converted into machine-recognizable working-area boundary information, as shown in fig. 6.
In this state, the visual boundary sensor in the lawn robot 200 judges from the processed working-area boundary information whether the robot has reached the boundary. For example, based on the neural network segmentation map shown in fig. 6, the lawn pixels within the travelable region are statistically analyzed, and the statistics show that they no longer satisfy the condition: the left side of the body of the lawn robot 200 would cross the boundary if it kept advancing. The visual boundary sensor therefore sends a control adjustment signal to the drive control system of the lawn robot 200, which begins to adjust the advancing direction of the robot, turning if necessary. Meanwhile, the counted distribution of lawn pixels in the travelable region shows that a workable area exists to the right of the lawn robot 200 and that its lawn pixels satisfy the condition, so the turning direction is determined from the boundary information to be clockwise.
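The turn-direction decision from the distribution of lawn pixels can be sketched as follows (an assumed left-half/right-half heuristic, not the patent's exact statistic; the names and threshold are hypothetical):

```python
import numpy as np

def turn_direction(seg_mask: np.ndarray, min_pixels: int) -> str:
    """Pick a turn direction from the distribution of workable (lawn)
    pixels in the segmentation map: turn towards the image half with
    more workable area, mirroring the clockwise-turn decision in the
    text.  min_pixels is an assumed sufficiency threshold."""
    h, w = seg_mask.shape
    left = int(np.count_nonzero(seg_mask[:, : w // 2]))
    right = int(np.count_nonzero(seg_mask[:, w // 2 :]))
    if max(left, right) < min_pixels:
        return "reverse"          # no adequate workable area on either side
    return "clockwise" if right >= left else "counterclockwise"
```

For the fig. 6 situation, with sufficient lawn pixels concentrated on the right half of the map, the sketch yields a clockwise turn.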
Furthermore, while turning, the lawn robot 200 acquires the corresponding visual perception picture of its surroundings in real time, as shown in fig. 7.
During this process, a neural network segmentation map as shown in fig. 8 is formed synchronously from the obtained visual perception picture based on visual boundary detection, yielding working-area information, non-working-area information, boundary contour information and the like, so that the identified natural boundary is converted into machine-recognizable working-area boundary information, as shown in fig. 8.
In this process, the visual boundary sensor in the lawn robot 200 also judges synchronously from the processed working-area boundary information whether the robot has reached the boundary. As shown in fig. 8, the lawn pixels within the travelable region are statistically analyzed, and the statistics show that the travelable region ahead of the lawn robot 200 in its current state satisfies the condition, so the robot can pass; a signal is then sent to the drive control system in the lawn robot 200, which controls the robot to stop turning and start advancing.
By way of example, the control system can precisely adjust the subsequent advancing path of the lawn robot through the following modes.
Mode 1: the control system is in border mode.
In this mode, after receiving the detection signal from the visual boundary sensor, the control system controls the lawn robot to turn and then advance along the boundary line.
For example, at boundary point B, the visual boundary analysis yields a counterclockwise turn. The visual boundary sensor also analyzes the turning angle: once the robot's heading is parallel to the boundary line after the turn, the robot continues straight ahead along the direction of boundary line BC. While advancing along the boundary line, the surrounding picture is perceived visually in real time, as shown in fig. 7, and the resulting boundary information is analyzed in real time, as shown in fig. 8. The control system keeps the lawn robot at a fixed distance from the boundary according to the analyzed boundary information.
Mode 2: the control system is in an automatic turning mode.
In this mode, the control system of the lawn robot starts a turn after receiving the boundary signal detected by the visual boundary sensor, with the turning angle analyzed by the sensor so that the robot first guarantees it will not cross the boundary after turning; the robot is then controlled to continue turning by a random angle, after which the control system drives it straight ahead, away from the boundary.
Mode 3: the control system is in a preset distance mode.
In this mode, after receiving the detection signal from the visual boundary sensor, the control system of the lawn robot first controls the robot to start turning, with the angle and direction determined from the boundary image; for example, at boundary point B the visual boundary analysis yields a counterclockwise turn. The visual boundary sensor also analyzes the turning angle: once the robot's heading is parallel to the boundary line after the turn, the robot is controlled to continue advancing along the boundary line. The control system presets a distance to travel along the boundary and monitors the distance actually travelled in real time; when the preset value is reached, it controls the robot to turn as directed by the visual analysis sensor, and after a turn through a random angle the robot drives straight ahead, away from the boundary.
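The three modes can be summarized as a small dispatcher (a hypothetical sketch: the Mode enum and the robot interface with turn_until_parallel(), turn_random(), follow_boundary() and drive_straight() are assumptions for illustration, not part of the patent):

```python
from enum import Enum, auto

class Mode(Enum):
    BORDER = auto()          # mode 1: follow the boundary line
    AUTO_TURN = auto()       # mode 2: safe turn plus a random extra angle
    PRESET_DISTANCE = auto() # mode 3: follow the boundary a fixed distance

def on_boundary_signal(mode: Mode, robot) -> None:
    """Hypothetical dispatcher showing how the three working modes in the
    text react to a boundary signal from the visual boundary sensor."""
    if mode is Mode.BORDER:
        robot.turn_until_parallel()   # heading parallel to the boundary
        robot.follow_boundary()       # keep a fixed lateral distance
    elif mode is Mode.AUTO_TURN:
        robot.turn_until_parallel()   # first guarantee no boundary crossing
        robot.turn_random()           # then an extra random angle
        robot.drive_straight()        # move away from the boundary
    elif mode is Mode.PRESET_DISTANCE:
        robot.turn_until_parallel()
        robot.follow_boundary(distance_limit=robot.preset_distance)
        robot.turn_random()
        robot.drive_straight()
```

The dispatcher only sequences the behaviors; in a real system each call would run under the real-time visual boundary analysis described above.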
The above embodiment shows that the solution provided by the invention can accurately identify the relative position between the equipment and the working-area boundary and accurately control the advancing route of the equipment when it reaches that boundary, preventing the equipment from crossing the working-area boundary and improving the reliability and safety of its automatic operation.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which are presented in the specification merely to illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (10)

1. A robot control system based on visual boundary detection, characterized by comprising: an image acquisition module, a visual boundary identification module and a movement control module;
the image acquisition module acquires image information of the surrounding environment of the robot in real time;
the visual boundary identification module identifies the working boundary of the working area according to the image information acquired by the image acquisition module to form corresponding working area boundary information;
the mobile control module calculates and judges whether the robot reaches the boundary of the working area according to the boundary information of the working area identified by the visual boundary identification module, and if the robot reaches the boundary of the working area, the working state of the robot is adjusted to prevent the robot from crossing the boundary of the working area; and if the boundary of the working area is not reached, controlling the robot to keep the current working state.
2. The vision boundary detection-based robot control system of claim 1, wherein the vision boundary identification module identifies a work area boundary based on a deep neural network and an image processing manner.
3. The robot control system based on visual boundary detection according to claim 2, wherein the visual boundary identification module segments the acquired robot surrounding environment image based on a deep neural network to obtain a corresponding neural network segmentation map, and forms a corresponding workable area and a non-workable area in the neural network segmentation map, and the boundary between the workable area and the non-workable area forms a working area boundary.
4. The vision boundary detection-based robot control system of claim 1, wherein the work area boundary information includes work area boundary contour information.
5. The robot control system based on visual boundary detection according to claim 4, wherein the movement control module obtains the contour information of the boundary of the working area recognized by the visual boundary recognition module and a plurality of contour point information corresponding to the contour of the robot, calculates a relative relationship between the plurality of contour points and the contour of the boundary of the working area based on the contour point information, judges whether the robot has advanced to the boundary of the working area according to the calculation result, and controls the robot to adjust the advancing direction according to the working mode and the boundary information of the working area when the robot has advanced to the boundary of the working area.
6. A robot control method based on visual boundary detection, characterized by comprising the following steps:
acquiring image information of the surrounding environment of the equipment in real time;
identifying a natural working boundary of the working area according to the acquired image information, and generating identifiable working area boundary information from the natural working boundary;
calculating and judging whether the robot reaches the boundary of the working area or not based on the generated boundary information of the working area, and if so, adjusting the working state of the robot to prevent the robot from crossing the boundary of the working area; and if the boundary of the working area is not reached, controlling the robot to keep the current working state.
7. The visual boundary detection-based robot control method of claim 6, wherein the method generates identifiable work area boundary information based on a deep neural network and an image processing algorithm.
8. The robot control method based on visual boundary detection according to claim 7, wherein the method segments the acquired robot surrounding image based on a deep neural network to obtain a corresponding neural network segmentation map, and forms a corresponding workable region and a non-workable region in the neural network segmentation map, and a boundary between the workable region and the non-workable region forms a working region boundary.
9. The method of claim 6, wherein the work area boundary information includes work area boundary contour information.
10. The method of claim 9, wherein the method comprises obtaining the boundary contour information of the identified working area and contour point information corresponding to the contour of the robot in real time, calculating a relative relationship between the contour points and the boundary contour of the working area, determining whether the robot has moved to the boundary position of the working area according to the calculation result, and controlling the robot to adjust the moving direction according to the working mode and the boundary information of the working area when the robot has moved to the boundary position of the working area.
CN202111176145.3A 2021-10-09 2021-10-09 Robot control system and method based on visual boundary detection Pending CN113910225A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111176145.3A CN113910225A (en) 2021-10-09 2021-10-09 Robot control system and method based on visual boundary detection
PCT/CN2022/070672 WO2023056720A1 (en) 2021-10-09 2022-01-07 Visual boundary detection-based robot control system and method


Publications (1)

Publication Number Publication Date
CN113910225A true CN113910225A (en) 2022-01-11

Family

ID=79238732


Country Status (2)

Country Link
CN (1) CN113910225A (en)
WO (1) WO2023056720A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116228849B (en) * 2023-05-08 2023-07-25 深圳市思傲拓科技有限公司 Navigation mapping method for constructing machine external image
CN117115407B (en) * 2023-10-18 2024-02-20 深圳市普渡科技有限公司 Slope detection method, device, computer equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104699101A (en) * 2015-01-30 2015-06-10 深圳拓邦股份有限公司 Robot mowing system capable of customizing mowing zone and control method thereof
US9196053B1 (en) * 2007-10-04 2015-11-24 Hrl Laboratories, Llc Motion-seeded object based attention for dynamic visual imagery
CN105612909A (en) * 2016-02-23 2016-06-01 广东顺德中山大学卡内基梅隆大学国际联合研究院 Vision and multisensory fusion based intelligent mowing robot control system
CN109685849A (en) * 2018-12-27 2019-04-26 南京苏美达智能技术有限公司 A kind of the out-of-bounds determination method and system of mobile robot
CN109859158A (en) * 2018-11-27 2019-06-07 邦鼓思电子科技(上海)有限公司 A kind of detection system, method and the machinery equipment on the working region boundary of view-based access control model
CN109949198A (en) * 2019-02-22 2019-06-28 中国农业机械化科学研究院 A kind of wheatland boundary detecting apparatus and detection method
CN111126251A (en) * 2019-12-20 2020-05-08 深圳市商汤科技有限公司 Image processing method, device, equipment and storage medium
CN111872935A (en) * 2020-06-21 2020-11-03 珠海市一微半导体有限公司 Robot control system and control method thereof
CN213424010U (en) * 2020-12-16 2021-06-11 四川省机械研究设计院(集团)有限公司 Mowing range recognition device of mowing robot

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2944773B2 (en) * 1991-03-20 1999-09-06 ヤンマー農機株式会社 Image processing method for automatic traveling work machine
CN109063575B (en) * 2018-07-05 2022-12-23 中国计量大学 Intelligent mower autonomous and orderly mowing method based on monocular vision
US11657072B2 (en) * 2019-05-16 2023-05-23 Here Global B.V. Automatic feature extraction from imagery
CN110326423A (en) * 2019-08-08 2019-10-15 浙江亚特电器有限公司 The grass trimmer and its rotating direction control method and device of a kind of view-based access control model
CN111813126A (en) * 2020-07-27 2020-10-23 河北工业大学 Intelligent obstacle avoidance control system and method based on neural network


Also Published As

Publication number Publication date
WO2023056720A1 (en) 2023-04-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination