CN115373407A - Method and device for robot to automatically avoid safety warning line - Google Patents
- Publication number
- CN115373407A CN115373407A CN202211315014.3A CN202211315014A CN115373407A CN 115373407 A CN115373407 A CN 115373407A CN 202211315014 A CN202211315014 A CN 202211315014A CN 115373407 A CN115373407 A CN 115373407A
- Authority
- CN
- China
- Prior art keywords
- robot
- warning line
- determining
- safety warning
- image acquisition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/0214 — Control of position or course in two dimensions specially adapted to land vehicles, with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
- G05D1/0246 — Control of position or course in two dimensions specially adapted to land vehicles, using optical position detecting means, namely a video camera in combination with image processing means
Abstract
The disclosure relates to the technical field of robots, and provides a method and a device for a robot to automatically avoid a safety warning line. The method comprises the following steps: acquiring a target image of the environment around the robot through an image acquisition device arranged on the robot; extracting environmental information of the surrounding environment from the target image by using an image processing model; determining a first position of the safety warning line by using a scene segmentation algorithm based on the environmental information; marking a second position of the safety warning line on a built-in map of the robot based on the first position; calculating the shortest distance between the robot and the safety warning line according to the second position; and planning a path for the robot according to the shortest distance, so as to control the robot to avoid the safety warning line in time. These technical means solve the potential safety hazards of prior-art methods for robots to automatically avoid forbidden areas.
Description
Technical Field
The disclosure relates to the technical field of robots, in particular to a method and a device for a robot to automatically avoid a safety warning line.
Background
While executing tasks, a robot inevitably encounters construction sections, man-made barriers, road damage, danger zones, sudden faults and other areas through which it cannot pass, i.e. forbidden areas. At present, robots usually avoid forbidden areas based on SLAM (Simultaneous Localization and Mapping); once positioning fails or drifts, the robot may enter a forbidden area, endangering surrounding people, facilities and the robot itself.
In the course of implementing the disclosed concept, the inventors found at least the following technical problem in the related art: existing methods for a robot to automatically avoid forbidden areas carry potential safety hazards.
Disclosure of Invention
In view of this, the embodiments of the present disclosure provide a method and an apparatus for a robot to automatically avoid a safety warning line, an electronic device, and a computer-readable storage medium, so as to solve the potential safety hazards of prior-art methods for a robot to automatically avoid a forbidden area.
In a first aspect of the embodiments of the present disclosure, a method for a robot to automatically avoid a safety warning line is provided, including: acquiring a target image of the environment around the robot through an image acquisition device arranged on the robot; extracting environmental information of the surrounding environment from the target image by using an image processing model; determining a first position of the safety warning line by using a scene segmentation algorithm based on the environmental information; marking a second position of the safety warning line on a built-in map of the robot based on the first position; calculating the shortest distance between the robot and the safety warning line according to the second position; and planning a path for the robot according to the shortest distance, so as to control the robot to avoid the safety warning line in time.
In a second aspect of the embodiments of the present disclosure, there is provided a device for a robot to automatically avoid a safety warning line, including: an acquisition module configured to acquire a target image of the environment around the robot through an image acquisition device provided on the robot; an extraction module configured to extract environmental information of the surrounding environment from the target image using an image processing model; a determination module configured to determine a first position of the safety warning line using a scene segmentation algorithm based on the environmental information; a marking module configured to mark a second position of the safety warning line on a built-in map of the robot based on the first position; a calculation module configured to calculate the shortest distance between the robot and the safety warning line according to the second position; and a control module configured to plan a path for the robot according to the shortest distance, so as to control the robot to avoid the safety warning line in time.
In a third aspect of the embodiments of the present disclosure, an electronic device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the above method when executing the computer program.
In a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, which stores a computer program, which when executed by a processor, implements the steps of the above-mentioned method.
Compared with the prior art, the embodiments of the present disclosure have the following beneficial effects: a target image of the environment around the robot is acquired through an image acquisition device arranged on the robot; environmental information of the surrounding environment is extracted from the target image using an image processing model; a first position of the safety warning line is determined using a scene segmentation algorithm based on the environmental information; a second position of the safety warning line is marked on a built-in map of the robot based on the first position; the shortest distance between the robot and the safety warning line is calculated according to the second position; and a path is planned for the robot according to the shortest distance, so as to control the robot to avoid the safety warning line in time. These technical means solve the potential safety hazards of prior-art methods for robots to automatically avoid forbidden areas, and thus provide greater safety guarantees for the robot to automatically avoid forbidden areas.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below cover only some embodiments of the present disclosure; those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an application scenario of an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a method for a robot to automatically avoid a safety warning line according to an embodiment of the present disclosure;
fig. 3 is a schematic view of a robot automatically avoiding a safety warning line according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a device for a robot to automatically avoid a safety warning line according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
A method and an apparatus for automatically avoiding a security fence by a robot according to an embodiment of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 1 is a scene schematic diagram of an application scenario of an embodiment of the present disclosure. The application scenario may include terminal devices 101, 102, and 103, server 104, and network 105.
The terminal devices 101, 102, and 103 may be hardware or software. When they are hardware, they may be various electronic devices having a display screen and supporting communication with the server 104, including but not limited to smartphones, robots, laptop computers, desktop computers, and the like (e.g., 102 may be a robot); when they are software, they may be installed in the electronic devices above, implemented either as multiple pieces of software or software modules or as a single one, which is not limited by the embodiments of the present disclosure. Various applications may be installed on the terminal devices 101, 102, and 103, such as data processing applications, instant messaging tools, social platform software, search applications, and shopping applications.
The server 104 may be a server providing various services, for example, a backend server receiving a request sent by a terminal device establishing a communication connection with the server, and the backend server may receive and analyze the request sent by the terminal device and generate a processing result. The server 104 may be a server, may also be a server cluster composed of a plurality of servers, or may also be a cloud computing service center, which is not limited in this disclosure.
The server 104 may be hardware or software. When the server 104 is hardware, it may be various electronic devices that provide various services to the terminal devices 101, 102, and 103. When the server 104 is software, it may be multiple software or software modules that provide various services for the terminal devices 101, 102, and 103, or may be a single software or software module that provides various services for the terminal devices 101, 102, and 103, which is not limited by the embodiment of the present disclosure.
The network 105 may be a wired network connected by coaxial cable, twisted pair or optical fiber, or a wireless network that interconnects communication devices without wiring, for example Bluetooth, Near Field Communication (NFC), infrared, and the like, which is not limited in the embodiments of the present disclosure.
The target user can establish a communication connection with the server 104 via the network 105 through the terminal devices 101, 102, and 103 to receive or transmit information or the like. It should be noted that the specific types, numbers and combinations of the terminal devices 101, 102 and 103, the server 104 and the network 105 may be adjusted according to the actual requirements of the application scenario, and the embodiment of the present disclosure does not limit this.
Fig. 2 is a schematic flowchart of a method for a robot to automatically avoid a safety warning line according to an embodiment of the present disclosure. The method of fig. 2 may be performed by the terminal device or the server of fig. 1. As shown in fig. 2, the method for the robot to automatically avoid the safety warning line includes:
s201, acquiring a target image about the surrounding environment of the robot through image acquisition equipment arranged on the robot;
s202, extracting environment information of the environment around the robot from the target image by using an image processing model;
s203, determining a first position of a safety warning line by using a scene segmentation algorithm based on the environmental information;
s204, marking a second position of a safety warning line on a built-in map of the robot based on the first position;
s205, calculating the shortest distance between the robot and the safety warning line according to the second position;
and S206, planning a path for the robot according to the shortest distance so as to control the robot to avoid a safety warning line in time.
The image acquisition device may be a low-cost device such as a common color camera; the target image is a shot of the environment around the robot; the image processing model is a trained neural network model that has learned and stored the correspondence between images and environmental information; the safety warning line is a warning line set around an area through which the robot cannot pass.
According to the technical scheme provided by the embodiments of the present disclosure, a target image of the environment around the robot is acquired through an image acquisition device arranged on the robot; environmental information of the surrounding environment is extracted from the target image using an image processing model; a first position of the safety warning line is determined using a scene segmentation algorithm based on the environmental information; a second position of the safety warning line is marked on a built-in map of the robot based on the first position; the shortest distance between the robot and the safety warning line is calculated according to the second position; and a path is planned for the robot according to the shortest distance, so as to control the robot to avoid the safety warning line in time. These technical means solve the potential safety hazards of prior-art methods for robots to automatically avoid forbidden areas, and thus provide greater safety guarantees for the robot to automatically avoid forbidden areas.
Compared with the prior art, the robot avoids forbidden areas with higher accuracy, so the embodiments of the present disclosure provide greater safety guarantees for the robot to automatically avoid forbidden areas.
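For orientation, the six claimed steps S201 to S206 can be sketched as a single control-loop function. Every object and method name below is a hypothetical placeholder standing in for the modules the embodiments describe (camera, model, planner, ...), not an API defined by the disclosure:

```python
def avoid_warning_line(camera, model, segmenter, mapper, planner, robot):
    """Illustrative skeleton of the claimed pipeline; all collaborators
    are assumed placeholder objects supplied by the caller."""
    image = camera.capture()                       # S201: acquire target image
    env = model.extract(image)                     # S202: environmental information
    first_pos = segmenter.locate_line(env)         # S203: first position of warning line
    second_pos = mapper.mark(first_pos)            # S204: mark second position on built-in map
    dist = robot.shortest_distance_to(second_pos)  # S205: shortest distance to warning line
    return planner.plan(robot.pose, dist)          # S206: plan avoidance path
```

In a real system each placeholder would wrap the corresponding component (image acquisition device, image processing model, scene segmentation model, map, and path planner).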
In step S203, determining the first position of the safety warning line by using a scene segmentation algorithm based on the environmental information includes: the scene segmentation algorithm is realized through a scene segmentation model, which comprises multiple convolution layers, pooling layers and deconvolution layers; the environmental information is input into the convolution and pooling layers, which output a semantic feature map with a plurality of positions, each position corresponding to the information of one pixel in the environmental information; the semantic feature map is input into the deconvolution layers, which output a category confidence for each position in the semantic feature map; and the first position is determined according to these category confidences.
The target image has a large number of pixels, the environmental information carries information for each of them, and the semantic feature map has a corresponding position for each pixel. The category confidence of each position indicates the probability of that position's category; determining the category of each position in the semantic feature map thus determines the category of each pixel in the environmental information, and hence of each pixel in the target image, from which the safety warning line is finally determined. The safety warning line is a line segment, and the first position is the position of that line segment.
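As an illustrative sketch (not the claimed model itself), the final step — turning per-position category confidences into a line position — can be written as follows; the class indices, the least-squares line fit, and the function names are all assumptions made for illustration:

```python
import numpy as np

# Hypothetical class indices for the segmentation output; the patent
# does not fix a label set, so these are illustrative assumptions.
CLASS_GROUND, CLASS_WARNING_LINE, CLASS_FORBIDDEN = 0, 1, 2

def warning_line_pixels(confidences: np.ndarray) -> np.ndarray:
    """Given per-position category confidences of shape (H, W, C),
    assign each position its most likely category and return the
    (row, col) coordinates labelled as warning line."""
    labels = confidences.argmax(axis=-1)  # (H, W) category map
    return np.argwhere(labels == CLASS_WARNING_LINE)

def fit_warning_line(points: np.ndarray) -> tuple[float, float]:
    """Fit a straight line col = m * row + b through the warning-line
    pixels by least squares; the fitted segment plays the role of the
    first position in image coordinates."""
    m, b = np.polyfit(points[:, 0].astype(float),
                      points[:, 1].astype(float), 1)
    return float(m), float(b)
```

A line fit is only one way to summarize the labelled pixels; the disclosure merely requires that the first position be recoverable from the per-position categories.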
In step S203, determining the first position of the safety warning line by using a scene segmentation algorithm based on the environmental information includes: determining the area through which the robot cannot pass by using the scene segmentation algorithm based on the environmental information; and determining the first position of the safety warning line based on the environmental information and that area.
The process of determining the impassable area with the scene segmentation algorithm is similar to that of determining the safety warning line. Based on the environmental information and the impassable area, the line segment around that area, i.e. the safety warning line, can be determined.
In step S204, marking the second position of the safety warning line on the built-in map of the robot based on the first position includes: acquiring the internal reference (intrinsic) matrix of the image acquisition device; determining, according to the internal reference matrix, the plane equation of the ground on which the robot stands in the coordinate system of the image acquisition device, the target image containing that ground; and determining the second position of the safety warning line in the coordinate system of the image acquisition device based on the ground plane equation and the first position, and marking the second position on the built-in map.
The internal reference matrix holds the intrinsic parameters of the image acquisition device; from it, the plane equation of the ground in the coordinate system of the image acquisition device can be expressed.
Determining the second position of the safety warning line in the coordinate system of the image acquisition device based on the ground plane equation and the first position includes: calculating the direction vector of the safety warning line based on the first position and the internal reference matrix; and determining the second position of the safety warning line in the coordinate system of the image acquisition device based on the ground plane equation and the direction vector.
Calculating the direction vector of the safety warning line based on the first position and the internal reference matrix can be understood as expressing, via the internal reference matrix, the direction vector of the safety warning line in the coordinate system of the image acquisition device. The intersection line generated by the ground plane equation and the direction vector is the safety warning line in the coordinate system of the image acquisition device.
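A minimal geometric sketch of this back-projection, assuming a pinhole camera model: two image points on the warning line are cast through the intrinsic matrix onto the ground plane, and their difference gives the line's direction vector in camera coordinates. The plane parameters and all function names are illustrative assumptions, not the patent's notation:

```python
import numpy as np

def pixel_ray(K: np.ndarray, u: float, v: float) -> np.ndarray:
    """Back-project pixel (u, v) to a viewing-ray direction in the
    camera coordinate system using the intrinsic matrix K."""
    return np.linalg.inv(K) @ np.array([u, v, 1.0])

def intersect_ground(ray: np.ndarray, n: np.ndarray, d: float) -> np.ndarray:
    """Intersect the camera-frame ray t * ray with the ground plane
    n . X = d and return the 3D intersection point."""
    t = d / float(n @ ray)
    return t * ray

def warning_line_in_camera(K, n, d, p0, p1):
    """Project two image points of the warning line onto the ground
    plane; return a point on the 3D line and its direction vector."""
    P0 = intersect_ground(pixel_ray(K, *p0), n, d)
    P1 = intersect_ground(pixel_ray(K, *p1), n, d)
    return P0, P1 - P0
```

For example, with focal length 500 px, principal point (320, 240), and a camera 1 m above the ground with its y-axis pointing down (n = (0, 1, 0), d = 1), both projected points land exactly on the ground plane.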
In step S205, calculating the shortest distance between the robot and the safety warning line according to the second position includes: determining the line equation of the second position in the coordinate system of the image acquisition device; and calculating the shortest distance by using the point-to-line distance formula from that line equation and the position of the robot in the same coordinate system.
The position of the image acquisition device in its own coordinate system is normally the origin; since the device is mounted on the robot, the positions of the device and the robot can be regarded as the same.
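The point-to-line distance computation can be sketched as below; treating the robot as the camera-frame origin is the simplification stated above, and the function name is an illustrative assumption:

```python
import numpy as np

def point_to_line_distance(point, line_point, line_dir) -> float:
    """Shortest distance from a point (here the robot, taken as the
    camera-frame origin) to the 3D line through line_point with
    direction line_dir: |(point - line_point) x line_dir| / |line_dir|."""
    diff = np.asarray(point, float) - np.asarray(line_point, float)
    v = np.asarray(line_dir, float)
    return float(np.linalg.norm(np.cross(diff, v)) / np.linalg.norm(v))
```

For instance, a warning line passing through (0, 1, 2) and running along the x-axis lies at distance sqrt(5) from the origin.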
In step S206, planning a path for the robot according to the shortest distance so as to control the robot to avoid the safety warning line in time includes: updating the first position of the safety warning line in real time; and determining whether the path planned for the robot needs to be updated by checking the correspondence between the first position and the second position marked on the built-in map.
If the first position of the safety warning line does not change, the correspondence between the first and second positions does not change; otherwise the correspondence changes, meaning the safety warning line has moved and the path planned for the robot must be updated.
Optionally, planning a path for the robot according to the shortest distance so as to control the robot to avoid the safety warning line in time includes: updating the second position of the safety warning line in real time; and updating the path planned for the robot whenever the second position changes, i.e. whenever an update of the second position occurs.
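This change check can be sketched as a small replanning guard; the tolerance value and names are assumptions for illustration:

```python
import numpy as np

def needs_replan(prev_pos, new_pos, tol: float = 0.05) -> bool:
    """Return True when the marked warning-line position has moved by
    more than tol (in map units) since the last check, signalling that
    the planned path should be recomputed."""
    shift = np.asarray(new_pos, float) - np.asarray(prev_pos, float)
    return float(np.linalg.norm(shift)) > tol
```

In the real-time loop this guard would be evaluated each time a new second position is marked on the built-in map, triggering the path planner only when the line actually moves.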
Fig. 3 is a schematic view of a scenario in which a robot automatically avoids a safety warning line according to an embodiment of the present disclosure. As shown in fig. 3, the robot at point A moves towards the forbidden zone, around which a warning line is set; on approaching the warning line the robot turns right so as not to touch it, and then proceeds to point B.
It should be noted that warning lines can be distinguished by color, including but not limited to a single color or two alternating colors, such as yellow-black, red-white and green-white. The target image captures the color or texture of the warning line, and the environmental information extracted from it by the image processing model therefore also carries this color or texture information. Based on that information, the first position of the safety warning line can be determined with the scene segmentation algorithm.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 4 is a schematic diagram of an apparatus for a robot to automatically avoid a safety warning line according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus includes:
an acquisition module 401 configured to acquire a target image of the environment around the robot through an image acquisition device provided on the robot;
an extraction module 402 configured to extract environmental information of the surrounding environment from the target image using an image processing model;
a determining module 403 configured to determine a first position of the safety warning line using a scene segmentation algorithm based on the environmental information;
a marking module 404 configured to mark a second position of the safety warning line on the robot's built-in map based on the first position;
a calculation module 405 configured to calculate the shortest distance between the robot and the safety warning line according to the second position;
and a control module 406 configured to plan a path for the robot according to the shortest distance, so as to control the robot to avoid the safety warning line in time.
The image acquisition device may be a low-cost device such as a common color camera; the target image is a shot of the environment around the robot; the image processing model is a trained neural network model that has learned and stored the correspondence between images and environmental information; the safety warning line is a warning line set around an area through which the robot cannot pass.
According to the technical scheme provided by the embodiments of the present disclosure, a target image of the environment around the robot is acquired through an image acquisition device arranged on the robot; environmental information of the surrounding environment is extracted from the target image using an image processing model; a first position of the safety warning line is determined using a scene segmentation algorithm based on the environmental information; a second position of the safety warning line is marked on a built-in map of the robot based on the first position; the shortest distance between the robot and the safety warning line is calculated according to the second position; and a path is planned for the robot according to the shortest distance, so as to control the robot to avoid the safety warning line in time. These technical means solve the potential safety hazards of prior-art methods for robots to automatically avoid forbidden areas, and thus provide greater safety guarantees for the robot to automatically avoid forbidden areas.
Compared with the prior art, the robot avoids forbidden areas with higher accuracy, so the embodiments of the present disclosure provide greater safety guarantees for the robot to automatically avoid forbidden areas.
Optionally, the determining module 403 is further configured to implement the scene segmentation algorithm through a scene segmentation model comprising multiple convolution layers, pooling layers and deconvolution layers; input the environmental information into the convolution and pooling layers and output a semantic feature map, in which each position corresponds to the information of one pixel in the environmental information; input the semantic feature map into the deconvolution layers and output a category confidence for each position in the semantic feature map; and determine the first position according to these category confidences.
The target image has a large number of pixels, the environmental information carries information for each of them, and the semantic feature map has a corresponding position for each pixel. The category confidence of each position indicates the probability of that position's category; determining the category of each position thus determines the category of each pixel in the environmental information, and hence of each pixel in the target image, from which the safety warning line is finally determined. The safety warning line is a line segment, and the first position is the position of that line segment.
Optionally, the determining module 403 is further configured to determine, based on the environmental information, the area through which the robot cannot pass by using the scene segmentation algorithm; and determine the first position of the safety warning line based on the environmental information and that area.
The process of determining the impassable area with the scene segmentation algorithm is similar to that of determining the safety warning line. Based on the environmental information and the impassable area, the line segment around that area, i.e. the safety warning line, can be determined.
Optionally, the marking module 404 is further configured to obtain the internal reference matrix of the image acquisition device; to determine, according to the internal reference matrix, the plane equation of the ground on which the robot is located in the coordinate system of the image acquisition device, wherein the target image contains that ground; and to determine, based on the plane equation of the ground and the first position, the second position of the safety warning line in the coordinate system of the image acquisition device and mark it on the built-in map.
The internal reference matrix (the camera intrinsic matrix) represents the internal parameters of the image acquisition device; according to it, the plane equation of the ground on which the robot is located can be expressed in the coordinate system of the image acquisition device.
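One way the ground plane can be written in the camera coordinate system is sketched below; the mounting height and downward pitch are illustrative assumptions — the disclosure only states that the plane equation follows from the internal reference matrix.

```python
import numpy as np

# Illustrative mounting values: height and downward pitch are assumptions.
h = 0.5                    # camera height above the floor, in metres
theta = np.deg2rad(20.0)   # downward pitch of the camera

# Camera coordinate system: X right, Y down, Z forward. The floor normal
# (world "up") expressed in this frame gives the plane n . X = d that all
# ground points satisfy; d = -h because the floor lies h metres below.
n = np.array([0.0, -np.cos(theta), -np.sin(theta)])
d = -h

# Sanity check: the point h / sin(theta) metres along the optical axis is
# where the axis of a camera pitched down by theta meets the floor.
p = np.array([0.0, 0.0, h / np.sin(theta)])
print(abs(float(n @ p) - d) < 1e-9)  # True
```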
Optionally, the marking module 404 is further configured to calculate a direction vector of the safety warning line based on the first position and the internal reference matrix; and to determine the second position of the safety warning line in the coordinate system of the image acquisition device based on the plane equation of the ground and the direction vector.
Calculating the direction vector of the safety warning line based on the first position and the internal reference matrix can be understood as expressing, via the internal reference matrix, the direction vector of the safety warning line in the coordinate system of the image acquisition device. The intersection of the ground plane with that direction vector yields the safety warning line in the coordinate system of the image acquisition device.
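The direction vector and the plane intersection can be sketched as a ray-plane intersection: the inverse intrinsic matrix turns a pixel into a viewing-ray direction, and scaling that ray onto the ground plane gives a 3-D point of the warning line. The intrinsics and plane numbers below are assumptions for illustration.

```python
import numpy as np

# Assumed intrinsic matrix and ground plane (camera 0.5 m high, pitched
# down 20 degrees); none of these numbers come from this disclosure.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
theta = np.deg2rad(20.0)
n = np.array([0.0, -np.cos(theta), -np.sin(theta)])  # floor normal, camera frame
d = -0.5                                             # floor plane: n . X = d

def pixel_to_ground(u, v):
    """Back-project pixel (u, v): K^-1 [u, v, 1] is the direction vector of
    the viewing ray, and intersecting that ray with the ground plane gives
    the warning line's second position in the camera coordinate system."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    t = d / float(n @ ray)   # scale at which n . (t * ray) = d
    return t * ray

p = pixel_to_ground(320.0, 240.0)  # principal point: straight down the axis
print(np.round(p, 3))
```

Applying `pixel_to_ground` to the endpoints of the line segment found at the first position would yield the second position as two 3-D points on the floor.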
Optionally, the calculation module 405 is further configured to determine the line equation of the second position in the coordinate system of the image acquisition device; and to calculate the shortest distance using the point-line distance formula, according to the line equation and the position of the robot in the coordinate system of the image acquisition device.
The position of the image acquisition device in its own coordinate system is generally the origin, and since the image acquisition device is mounted on the robot, the positions of the image acquisition device and the robot can be regarded as the same.
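With the robot at the origin of the camera coordinate system, the shortest distance reduces to the point-line distance formula; the specific line endpoints below are illustrative.

```python
import numpy as np

def shortest_distance(robot, p1, p2):
    """Point-line distance: perpendicular distance from the robot to the
    infinite line through p1 and p2 (the marked safety warning line)."""
    direction = p2 - p1
    return float(np.linalg.norm(np.cross(direction, robot - p1))
                 / np.linalg.norm(direction))

# Robot at the camera origin; an illustrative line one metre ahead,
# parallel to the X axis on the ground plane.
robot = np.zeros(3)
p1 = np.array([-1.0, 0.0, 1.0])
p2 = np.array([ 2.0, 0.0, 1.0])
print(shortest_distance(robot, p1, p2))  # 1.0
```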
Optionally, the control module 406 is further configured to update the first position of the safety warning line in real time; and to judge whether the path planned for the robot needs to be updated by comparing the correspondence between the first position and the second position of the safety warning line marked on the built-in map of the robot.
If the first position of the safety warning line has not changed, the correspondence between the first position and the second position is unchanged; otherwise, the correspondence has changed, which means the safety warning line has moved, and the path planned for the robot needs to be updated.
Optionally, planning a path for the robot according to the shortest distance so as to control the robot to avoid the safety warning line in time includes: updating the second position of the safety warning line in real time; and, when the second position changes, updating the path planned for the robot.
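The replanning trigger can be sketched as a simple change test on the updated position; the distance tolerance is an assumed parameter not given in this disclosure.

```python
import numpy as np

def needs_replan(prev_pos, new_pos, tol=0.05):
    """Replan when the marked warning-line position has moved by more than
    `tol` metres -- a stand-in for 'the correspondence has changed'.
    The tolerance is an assumed parameter."""
    return bool(np.linalg.norm(new_pos - prev_pos) > tol)

unchanged = needs_replan(np.array([0.0, 1.0]), np.array([0.0, 1.0]))
moved = needs_replan(np.array([0.0, 1.0]), np.array([0.3, 1.0]))
print(unchanged, moved)  # False True
```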
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure.
Fig. 5 is a schematic diagram of an electronic device 5 provided by an embodiment of the present disclosure. As shown in Fig. 5, the electronic device 5 of this embodiment includes: a processor 501, a memory 502, and a computer program 503 stored in the memory 502 and executable on the processor 501. When the processor 501 executes the computer program 503, the steps in the method embodiments described above are implemented. Alternatively, when the processor 501 executes the computer program 503, the functions of the respective modules/units in the apparatus embodiments described above are implemented.
Illustratively, the computer program 503 may be partitioned into one or more modules/units, which are stored in the memory 502 and executed by the processor 501 to carry out the present disclosure. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program 503 in the electronic device 5.
The electronic device 5 may be a desktop computer, a notebook computer, a handheld computer, a cloud server, or another electronic device. The electronic device 5 may include, but is not limited to, the processor 501 and the memory 502. Those skilled in the art will appreciate that Fig. 5 is merely an example of the electronic device 5 and does not constitute a limitation of it; the device may include more or fewer components than shown, some components may be combined, or different components may be used; for example, the electronic device may also include input/output devices, network access devices, buses, and the like.
The processor 501 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 502 may be an internal storage unit of the electronic device 5, for example, a hard disk or memory of the electronic device 5. The memory 502 may also be an external storage device of the electronic device 5, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card provided on the electronic device 5. Further, the memory 502 may include both an internal storage unit and an external storage device of the electronic device 5. The memory 502 is used to store the computer program and other programs and data required by the electronic device, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. For example, the apparatus/electronic device embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and other divisions are possible in an actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or in another form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, the present disclosure may implement all or part of the processes in the methods of the above embodiments by means of a computer program instructing related hardware; the computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the steps of the above method embodiments. The computer program may comprise computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer-readable media may not include electrical carrier signals or telecommunications signals.
The above examples are only intended to illustrate the technical solutions of the present disclosure, not to limit them; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present disclosure, and are intended to be included within the scope of the present disclosure.
Claims (10)
1. A method for automatically avoiding a safety warning line by a robot is characterized by comprising the following steps:
acquiring a target image about the surrounding environment of the robot through an image acquisition device arranged on the robot;
extracting environmental information of the robot surrounding environment from the target image by using an image processing model;
determining a first position of a safety warning line by using a scene segmentation algorithm based on the environment information;
marking a second position of the security fence on a built-in map of the robot based on the first position;
calculating the shortest distance between the robot and the safety warning line according to the second position;
and planning a path for the robot according to the shortest distance so as to control the robot to avoid the safety warning line in time.
2. The method of claim 1, wherein determining the first position of the safety warning line using a scene segmentation algorithm based on the environmental information comprises:
the scene segmentation algorithm is realized through a scene segmentation model, and the scene segmentation model comprises a plurality of convolution layers, a pooling layer and a deconvolution layer;
inputting the environment information into a plurality of layers of the convolution layer and the pooling layer, and outputting a semantic feature map, wherein the semantic feature map has a plurality of positions, and each position corresponds to the information of one pixel point in the environment information;
inputting the semantic feature map into a plurality of layers of the deconvolution layers, and outputting a category confidence of each position in the semantic feature map;
and determining the first position according to the category confidence of each position in the semantic feature map.
3. The method according to claim 1, wherein determining a first position of a safety warning line using a scene segmentation algorithm based on the environmental information comprises:
determining an area where the robot cannot pass by using the scene segmentation algorithm based on the environment information;
and determining a first position of the safety warning line based on the environment information and the area where the robot cannot pass.
4. The method of claim 1, wherein said marking a second position of the safety warning line on a built-in map of the robot based on the first position comprises:
acquiring an internal reference matrix of the image acquisition device;
determining a plane equation of the ground where the robot is located in a coordinate system of the image acquisition device according to the internal reference matrix, wherein the target image contains the ground where the robot is located;
determining a second position of the safety warning line in a coordinate system of the image acquisition device based on the plane equation of the ground and the first position, and marking the second position on the built-in map.
5. The method of claim 4, wherein determining a second position of the safety warning line in a coordinate system of the image acquisition device based on the plane equation of the ground and the first position comprises:
calculating a direction vector of the safety warning line based on the first position and the internal reference matrix;
determining a second position of the safety warning line in a coordinate system of the image acquisition device based on the plane equation of the ground and the direction vector.
6. The method according to claim 1, wherein said calculating a shortest distance between the robot and the safety warning line according to the second position comprises:
determining a line equation of the second position in a coordinate system of the image acquisition device;
and calculating the shortest distance by using a point-line distance formula according to the line equation and the position of the robot in the coordinate system of the image acquisition device.
7. The method according to claim 1, wherein said planning a path for the robot according to the shortest distance to control the robot to avoid the safety warning line in time comprises:
updating the first position of the safety warning line in real time;
and judging whether the path planned for the robot needs to be updated or not by comparing the corresponding relation between the first position and a second position marked with the safety warning line on a built-in map of the robot.
8. A device for automatically avoiding a safety warning line by a robot is characterized by comprising:
an acquisition module configured to acquire a target image about an environment around a robot through an image acquisition device provided on the robot;
an extraction module configured to extract environmental information of an environment around the robot from the target image using an image processing model;
a determination module configured to determine a first position of a safety warning line using a scene segmentation algorithm based on the environmental information;
a marking module configured to mark a second position of the safety warning line on a built-in map of the robot based on the first position;
a calculation module configured to calculate a shortest distance between the robot and the safety warning line according to the second position;
and the control module is configured to plan a path for the robot according to the shortest distance so as to control the robot to avoid the safety warning line in time.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211315014.3A CN115373407A (en) | 2022-10-26 | 2022-10-26 | Method and device for robot to automatically avoid safety warning line |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115373407A (en) | 2022-11-22 |
Family
ID=84073790
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211315014.3A Pending CN115373407A (en) | 2022-10-26 | 2022-10-26 | Method and device for robot to automatically avoid safety warning line |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115373407A (en) |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008198124A (en) * | 2007-02-15 | 2008-08-28 | Matsushita Electric Works Ltd | Alarm sensor |
CN103076802A (en) * | 2012-10-09 | 2013-05-01 | 江苏大学 | Method and system for establishing and identifying robot virtual boundary |
CN109829926A (en) * | 2019-01-30 | 2019-05-31 | 杭州鸿泉物联网技术股份有限公司 | Road scene semantic segmentation method and device |
CN109919135A (en) * | 2019-03-27 | 2019-06-21 | 华瑞新智科技(北京)有限公司 | Behavioral value method, apparatus based on deep learning |
CN109949313A (en) * | 2019-05-17 | 2019-06-28 | 中科院—南京宽带无线移动通信研发中心 | A kind of real-time semantic segmentation method of image |
CN110084173A (en) * | 2019-04-23 | 2019-08-02 | 精伦电子股份有限公司 | Number of people detection method and device |
CN110390254A (en) * | 2019-05-24 | 2019-10-29 | 平安科技(深圳)有限公司 | Character analysis method, apparatus, computer equipment and storage medium based on face |
CN110647816A (en) * | 2019-08-26 | 2020-01-03 | 合肥工业大学 | Target detection method for real-time monitoring of goods shelf medicines |
CN111045017A (en) * | 2019-12-20 | 2020-04-21 | 成都理工大学 | Method for constructing transformer substation map of inspection robot by fusing laser and vision |
CN111096138A (en) * | 2019-12-30 | 2020-05-05 | 中电海康集团有限公司 | UWB-based mowing robot working boundary establishing and identifying system and method |
CN111582060A (en) * | 2020-04-20 | 2020-08-25 | 浙江大华技术股份有限公司 | Automatic line drawing perimeter alarm method, computer equipment and storage device |
CN112711263A (en) * | 2021-01-19 | 2021-04-27 | 未来机器人(深圳)有限公司 | Storage automatic guided vehicle obstacle avoidance method and device, computer equipment and storage medium |
CN113034490A (en) * | 2021-04-16 | 2021-06-25 | 北京石油化工学院 | Method for monitoring stacking safety distance of chemical storehouse |
CN113657331A (en) * | 2021-08-23 | 2021-11-16 | 深圳科卫机器人科技有限公司 | Warning line infrared induction identification method and device, computer equipment and storage medium |
CN114067219A (en) * | 2021-11-11 | 2022-02-18 | 华东师范大学 | Farmland crop identification method based on semantic segmentation and superpixel segmentation fusion |
CN114090905A (en) * | 2021-09-30 | 2022-02-25 | 深圳中智永浩机器人有限公司 | Alert line position identification method and device, computer equipment and storage medium |
CN115131821A (en) * | 2022-06-29 | 2022-09-30 | 大连理工大学 | Improved YOLOv5+ Deepsort-based campus personnel crossing warning line detection method |
CN115223039A (en) * | 2022-05-13 | 2022-10-21 | 南通河海大学海洋与近海工程研究院 | Robot semi-autonomous control method and system for complex environment |
Non-Patent Citations (1)
Title |
---|
ZHANG Rui et al.: "Survey of Scene Segmentation Algorithms Based on Deep Learning", Journal of Computer Research and Development (《计算机研究与发展》) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110020620B (en) | Face recognition method, device and equipment under large posture | |
CN111079619B (en) | Method and apparatus for detecting target object in image | |
CN110443222A (en) | Method and apparatus for training face's critical point detection model | |
CN108174152A (en) | A kind of target monitoring method and target monitor system | |
KR20210040296A (en) | Image tagging method, electronic device, apparatus, storage medium, and program | |
CN108876857B (en) | Method, system, device and storage medium for positioning unmanned vehicle | |
US11842446B2 (en) | VR scene and interaction method thereof, and terminal device | |
CN114373047A (en) | Method, device and storage medium for monitoring physical world based on digital twin | |
CN115719436A (en) | Model training method, target detection method, device, equipment and storage medium | |
CN110866475A (en) | Hand-off steering wheel and image segmentation model training method, device, terminal and medium | |
CN114363161B (en) | Abnormal equipment positioning method, device, equipment and medium | |
CN108036774B (en) | Surveying and mapping method, system and terminal equipment | |
EP3165979B1 (en) | Providing mounting information | |
CN110633843A (en) | Park inspection method, device, equipment and storage medium | |
CN110673607B (en) | Feature point extraction method and device under dynamic scene and terminal equipment | |
CN108255932B (en) | Roaming browsing method and system of digital factory based on three-dimensional digital platform | |
CN108564076B (en) | Visual control system in electric power wiring in intelligent building | |
CN115373407A (en) | Method and device for robot to automatically avoid safety warning line | |
CN110414458B (en) | Positioning method and device based on matching of plane label and template | |
CN109493423B (en) | Method and device for calculating midpoint positions of two points on surface of three-dimensional earth model | |
CN115410173A (en) | Multi-mode fused high-precision map element identification method, device, equipment and medium | |
CN111932611B (en) | Object position acquisition method and device | |
CN109246606B (en) | Expansion method and device of robot positioning network, terminal equipment and storage medium | |
CN113436332A (en) | Digital display method and device for fire-fighting plan, server and readable storage medium | |
CN111854751A (en) | Navigation target position determining method and device, readable storage medium and robot |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20221122 |