CN110619298A - Mobile robot, specific object detection method and device thereof and electronic equipment - Google Patents

Mobile robot, specific object detection method and device thereof and electronic equipment

Info

Publication number
CN110619298A
CN110619298A
Authority
CN
China
Prior art keywords
target object
information
target
type
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910866408.XA
Other languages
Chinese (zh)
Inventor
梅健
刘洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ruyu Intelligent Technology (suzhou) Co Ltd
Original Assignee
Ruyu Intelligent Technology (suzhou) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ruyu Intelligent Technology (suzhou) Co Ltd
Priority to CN201910866408.XA
Publication of CN110619298A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a mobile robot, a method and an apparatus for detecting a specific object thereof, and an electronic device, wherein the method comprises the following steps: acquiring reflectivity information of a target object; determining the type of the target object according to the reflectivity information of the target object; and identifying whether the target object is a specific object according to the type of the target object. The invention can eliminate or reduce the influence of imaging capability on detection accuracy. In further alternative schemes, the basis for judgment is made more diverse and richer, so that detection accuracy can be guaranteed.

Description

Mobile robot, specific object detection method and device thereof and electronic equipment
Technical Field
The present invention relates to mobile robots, and particularly to a mobile robot, a method and an apparatus for detecting a specific object of the mobile robot, and an electronic device.
Background
A mobile robot is understood to be any robot that can move automatically over a surface; such robots are applied in industrial, domestic, and other fields and include, for example, cleaning robots and transport robots. A cleaning robot may be a sweeping robot, a mopping robot, or the like.
A mobile robot such as a cleaning robot needs to detect trash in front of or around it so that it can be cleaned in a targeted manner; for example, once trash is detected, it can be collected and removed. However, whether an object is a specific object is usually determined from the shape of the object in a captured image, which is limited by the imaging capability of the mobile robot; if the imaging capability is poor, the recognition accuracy may also be poor.
Disclosure of Invention
The invention provides a mobile robot, a method, an apparatus, and an electronic device for detecting a specific object thereof, which aim to solve the problem that recognition accuracy may be poor when the imaging capability of the mobile robot is limited.
According to a first aspect of the present invention, there is provided a specific object detection processing method of a mobile robot, including:
acquiring reflectivity information of a target object;
determining the type of the target object according to the reflectivity information of the target object;
and identifying whether the target object is a specific object or not according to the type of the target object.
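The three claimed steps can be sketched as a simple pipeline. This is an illustrative sketch only: the function names, the threshold rule, and the type labels are assumptions for demonstration, not the patent's actual implementation.

```python
def acquire_reflectivity(sensor_reading):
    """Step 1: obtain reflectivity information of the target object.
    Here approximated as the received-to-emitted light energy ratio
    (a placeholder rule, not the patent's formula)."""
    return sensor_reading["received_energy"] / sensor_reading["emitted_energy"]

def determine_type(reflectivity):
    """Step 2: map reflectivity information to an object type.
    Placeholder interval rule; real ranges would come from calibration."""
    if reflectivity > 0.7:
        return "metal"
    if reflectivity > 0.3:
        return "paper"
    return "liquid"

def is_specific_object(object_type, specific_types):
    """Step 3: identify whether the typed object is a specific object."""
    return object_type in specific_types

reading = {"received_energy": 0.8, "emitted_energy": 1.0}
obj_type = determine_type(acquire_reflectivity(reading))
print(obj_type, is_specific_object(obj_type, {"liquid", "metal"}))
```

In a real robot, `specific_types` would be the configured set of processing, harmful, and avoidance objects described in the options below.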
Optionally, determining the type of the target object according to the reflectivity information of the target object includes:
determining first probability data according to the reflectivity information of the target object, wherein the first probability data is used for representing the possibility that the target object belongs to each type;
determining a type of the target object based on the first probability data.
Optionally, determining the type of the target object based on the first probability data includes: determining the type of the target object based on the first probability data and second probability data, the second probability data being determined from pixel information of the target object and also being used for representing the possibility that the target object belongs to each type, the pixel information being the information represented by the pixel portion of the target object in a target image acquired by an image acquisition component of the mobile robot.
Optionally, determining the type of the target object according to the reflectivity information of the target object includes:
and determining one or more object types corresponding to the reflectivity information of the target object according to the target interval range in which the reflectivity information falls and the correspondence between different interval ranges and different object types, wherein the type of the target object can be determined according to the determined one or more object types.
Optionally, after determining one or more object types corresponding to the reflectivity information of the target object, the method further includes:
if the reflectivity information of the target object corresponds to a plurality of object types, determining the type of the target object in the plurality of object types according to the pixel information of the target object; the pixel information is information characterized by a pixel portion of the target object in a target image acquired by an image acquisition component of the mobile robot;
and if the reflectivity information of the target object corresponds to one object type, determining that the object type corresponding to the reflectivity information of the target object is the type of the target object.
Optionally, determining the type of the target object according to the reflectivity information of the target object includes:
determining the type of the target object according to the reflectivity information of the target object together with at least one of the pixel information of the target object and the depth of field information of the target object; the pixel information is the information represented by the pixel portion of the target object in a target image captured by an image acquisition component of the mobile robot.
Optionally, the depth of field information is detected by a detection component of the mobile robot; the detection component comprises a detection light source and a receiver;
the detection light source is used for emitting light pulses to a range of target visual angles containing the target object; the receiver is used for receiving the return light corresponding to the light pulse; the depth information is determined from the time at which the light pulse is emitted and the time at which the corresponding return light is received.
Optionally, the reflectivity information of the target object is determined according to a lighting condition when the lighting source of the mobile robot illuminates the target object, and a light collecting condition when the image collecting component of the mobile robot collects the target image of the target object.
Optionally, the reflectivity information θ of the target object is determined by calculation according to the following formula:

P_pixel = P_TX · θ · μ · φ / (d² · dⁱ), that is, θ = P_pixel · d² · dⁱ / (P_TX · μ · φ)

wherein:
P_pixel is the energy information of the light collected by a pixel unit of the image acquisition component when the target image is captured;
P_TX is the energy information of the light emitted by the illumination light source;
d is the distance between the target object and the image acquisition component;
d² characterizes the luminous attenuation of the illumination light source over distance;
dⁱ is the attenuation of the light reflected by the target object, where i is determined by an object surface reflection model;
μ covers other attenuation during light transmission;
φ is a normalized coefficient covering the lens aperture and transmittance and other optical-path or circuit attenuation.
Optionally, the specific object includes at least one of:
a processing object on which the mobile robot normally works;
a harmful object detrimental to the normal operation of the mobile robot;
an avoidance object pre-designated to be avoided by the mobile robot.
Optionally, after identifying whether the target object is a specific object according to the type of the target object, the method further includes:
and executing corresponding processing according to the identified specific object.
Optionally, if the specific object includes a garbage object to be processed by the mobile robot, then:
according to the identified specific object, executing corresponding processing, including:
if the target object is identified as the garbage object, controlling a cleaning assembly to clean the garbage object; and/or:
if the target object is not identified as the garbage object, controlling the mobile robot to travel around the target object.
Optionally, if the specific object includes a harmful object and/or an avoidance object, then:
executing corresponding processing according to the identified specific object includes:
if the target object is identified as the harmful object or the avoidance object, controlling the mobile robot to travel around the target object.
Optionally, if the specific object includes a harmful object and/or an avoidance object and also includes a garbage object, then:
executing corresponding processing according to the identified specific object includes:
if the target object is identified as the garbage object, controlling a cleaning assembly to clean the garbage object;
if the target object is identified as the harmful object or the avoidance object, controlling the mobile robot to travel around the target object;
and if the target object is identified as none of the garbage object, the harmful object, and the avoidance object, controlling the mobile robot to travel around the target object.
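The dispatch logic in the options above can be sketched as follows. The string labels ("garbage", "harmful", "avoidance", "clean", "bypass") and the policy of bypassing unrecognized objects are illustrative assumptions, not the patent's prescribed behavior.

```python
def handle_target(identified):
    """Dispatch an action from the recognition result.

    Returns "clean" to drive the cleaning assembly, or "bypass" to make
    the robot travel around the target object. Bypassing objects that
    are not recognized is one conservative policy, used here as an
    assumption for the sketch.
    """
    if identified == "garbage":
        return "clean"   # control the cleaning assembly
    if identified in ("harmful", "avoidance"):
        return "bypass"  # travel around the target object
    return "bypass"      # conservative default for unrecognized objects
```

A sweeping robot would then map "clean" to driving its cleaning assembly and "bypass" to re-planning its path around the object.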
According to a second aspect of the present invention, there is provided a specific object detection apparatus of a mobile robot, comprising:
the acquisition module is used for acquiring the reflectivity information of the target object;
the type determining module is used for determining the type of the target object according to the reflectivity information of the target object;
and the identification module is used for identifying whether the target object is a specific object or not according to the type of the target object.
According to a third aspect of the present invention, there is provided a mobile robot comprising a processor, a memory and an image acquisition component for acquiring the target image;
the memory is used for storing codes and related data;
the processor is configured to execute the code in the memory to implement the method according to the first aspect and its alternatives.
According to a fourth aspect of the present invention, there is provided an electronic device comprising a processor and a memory, the memory for storing code and associated data;
the processor is configured to execute the code in the memory to implement the method according to the first aspect and its alternatives.
According to a fifth aspect of the present invention, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the method of the first aspect and its alternatives.
In the mobile robot and the specific object detection method, apparatus, and electronic device thereof provided by the invention, when a specific object is to be identified, the type of the target object can first be judged according to the reflectivity information, so that whether the target object is the specific object can then be identified according to that type.
Furthermore, alternative schemes of the invention can also use the acquired image in the identification, so that the basis for judgment includes both the image and the reflectivity information; some schemes can additionally use the depth of field information. The basis for judgment thereby becomes more diverse and richer, and detection accuracy can be ensured.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a first schematic flow chart illustrating a specific object detection method of a mobile robot according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of step S12 in FIG. 1;
FIG. 3 is another schematic flow chart of step S12 in FIG. 1;
FIG. 4 is a schematic view of another flowchart of step S12 in FIG. 1;
FIG. 5 is a second flowchart illustrating a specific object detection method of the mobile robot according to an embodiment of the present invention;
FIG. 6 is a schematic flow chart of step S14 in FIG. 5;
FIG. 7 is another schematic flow chart of step S14 in FIG. 5;
FIG. 8 is a schematic view of another flowchart of step S14 in FIG. 5;
FIG. 9 is a first block diagram illustrating the program modules of the specific object detection apparatus of the mobile robot according to an embodiment of the present invention;
FIG. 10 is a block diagram of a second exemplary embodiment of a specific object detection apparatus of a mobile robot in accordance with the present invention;
FIG. 11 is a first schematic diagram of the configuration of a mobile robot in accordance with an embodiment of the present invention;
FIG. 12 is a second schematic diagram of the construction of a mobile robot in accordance with an embodiment of the present invention;
fig. 13 is a schematic structural diagram of an electronic device in an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 1 is a first flowchart illustrating a specific object detection method of a mobile robot according to an embodiment of the present invention.
Referring to fig. 1, a specific object detection method of a mobile robot includes:
s11: reflectivity information of the target object is acquired.
The target object may be any object whose reflectivity information can be acquired; for example, if the mobile robot detects the reflectivity information of an object in front of it, that object is the target object.
The reflectivity information may be any information that can characterize the reflectivity of at least a portion of the surface of the target object. This may be the reflectivity data itself or other data associated therewith.
In one embodiment, if the mobile robot is equipped with a detection component comprising a detection light source and a receiver, the reflectivity information may be determined from the intensity of the light pulse emitted by the detection light source and the intensity of the return light received by the receiver.
In another embodiment, the reflectivity information of the target object is determined according to the light emitting condition of the illumination light source of the mobile robot and the light collecting condition when the image collecting component of the mobile robot collects the target image.
The mobile robot may be configured with the illumination source, the image capturing component and the detecting component referred to above, and further, the illumination source, the image capturing component and the detecting component may face in the same direction, for example, may face forward.
In a specific implementation, the reflectivity information θ can be determined by the following formula:

P_pixel = P_TX · θ · μ · φ / (d² · dⁱ)

and further:

θ = P_pixel · d² · dⁱ / (P_TX · μ · φ)

wherein:
P_pixel is the energy information of the light collected by a pixel unit of the image acquisition component when the target image is captured; a pixel unit may be, for example, a light-sensitive part of the image acquisition component;
P_TX is the energy information of the light emitted by the illumination light source;
d is the distance between the target object and the image acquisition component; the depth of field information can be used directly, or d can be calculated from the depth of field information;
d² characterizes the luminous attenuation of the illumination light source over distance;
dⁱ is the attenuation of the light reflected by the target object, where i is determined by an object surface reflection model; for example, in diffuse reflection i is 2, while in specular reflection i is related to the reflection angle;
μ covers other attenuation during light transmission, including but not limited to lens transmittance, filter attenuation, air attenuation, and the like;
φ is a normalized coefficient covering the lens aperture and transmittance and other optical-path or circuit attenuation.
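Under the assumption that the attenuation terms defined above combine multiplicatively, i.e. P_pixel = P_TX · θ · μ · φ / (d² · dⁱ), the reflectivity can be recovered by inverting that relation. The function below is a sketch of that rearrangement under this assumption, not the patent's actual computation; the defaults i = 2 (diffuse reflection) and μ = φ = 1 are illustrative.

```python
def estimate_reflectivity(p_pixel, p_tx, d, i=2, mu=1.0, phi=1.0):
    """Invert the assumed link budget
        P_pixel = P_TX * theta * mu * phi / (d**2 * d**i)
    to recover the reflectivity theta.

    i=2 matches the diffuse-reflection case mentioned in the text;
    mu and phi default to 1.0 (no extra attenuation), an assumption.
    """
    return p_pixel * (d ** 2) * (d ** i) / (p_tx * mu * phi)

# At unit distance the distance attenuation cancels, so theta reduces
# to the received-to-emitted energy ratio.
print(estimate_reflectivity(0.5, 1.0, 1.0))
```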
After step S11, it may include:
s12: and determining the type of the target object according to the reflectivity information of the target object.
Objects of different types generally differ in surface reflectivity, so reflectivity information is suitable for judging the type of an object; any judgment made on this basis does not depart from the description of the present embodiment.
Meanwhile, in this embodiment, when determining the type of the target object, the pixel information may also be taken into account, or both the pixel information and the depth of field information. Accordingly, step S12 may specifically include:
and determining the type of the target object according to at least one of the pixel information of the target object and the depth information of the target object and the reflectivity information of the target object.
The pixel information may be understood as information represented by a pixel portion of the target object in the target image, and in a specific implementation process, the pixel information of the target object may include at least one of size information, object material information, surface feature information, position information, and shape information of the target object in the target image.
The target image is acquired by an image acquisition part of the mobile robot, which can be understood as an image including pixels of the target object. The target image may be specifically acquired by the image acquisition component when the illumination light source of the mobile robot irradiates the target object.
The depth of field information is detected by a detection component of the mobile robot.
In one embodiment, the detection component may include a detection light source and a receiver; the detection light source is used for emitting light pulses to a range of target visual angles containing the target object; the receiver is used for receiving the return light corresponding to the light pulse; the depth information is determined from the time at which the light pulse is emitted and the time at which the corresponding return light is received.
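The time-of-flight relation described here follows directly from the speed of light: the pulse travels out to the object and back, so the depth is half the round-trip distance. A minimal sketch:

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0  # speed of light in vacuum, m/s

def depth_from_tof(t_emit_s, t_return_s):
    """Depth of field from time of flight: the light pulse covers the
    emitter-to-object distance twice, so halve the round-trip length."""
    return SPEED_OF_LIGHT_M_S * (t_return_s - t_emit_s) / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m of depth.
print(round(depth_from_tof(0.0, 10e-9), 3))
```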
Fig. 2 is a schematic flow chart of step S12 in fig. 1.
In one embodiment, referring to fig. 2, step S12 may include:
s121: determining first probability data according to the reflectivity information of the target object;
s122: determining a type of the target object based on the first probability data.
The first probability data may be understood as being used to characterize the likelihood that the target object belongs to each type.
In one embodiment, the first probability data may be calculated using a predetermined data model. The model may be generated from defined rules, which may in turn be determined from the reflective properties of object surfaces with respect to light; the model can also be obtained by training on the reflectivity information of different objects under different environments.
In another embodiment, the first probability data may also be determined based on, for example, testing and statistics of the reflectivity information of various objects under various environments. For example: if most of the reflectivity measurements of a certain first-class object fall within a specific interval while only a minority of the measurements of a second-class object fall within that interval, then a target object whose reflectivity information falls within that interval is more likely to be a first-class object than a second-class object. With quantitative statistical processing, the target object can be classified more accurately based on the reflectivity information. The specific implementation can be determined through basic experimental principles and a limited number of experiments.
It can be seen that any manner of determining the corresponding probability based on the reflectivity information does not depart from the description of the above embodiments.
Meanwhile, the first probability data may also be determined by comprehensively considering the reflectivity information of the background area, for example, the first probability data may be calculated according to the difference between the reflectivity information of the target object and the reflectivity information of the background area.
In one implementation, in step S122, the type with the highest first probability data may be determined as the type of the target object.
Fig. 3 is another schematic flow chart of step S12 in fig. 1.
In another embodiment, the type of the target object may be further determined in step S122 by combining the pixel information.
Further, before step S122, the method may further include:
s123: and determining second probability data according to the pixel information of the target object.
The second probability data may also be understood as characterizing the likelihood that the target object belongs to each type. It differs from the first probability data in that the two are obtained from information of different dimensions (i.e., pixel information and reflectivity information, respectively).
In one embodiment, the second probability data may be calculated using a predetermined data model. The model may be generated from defined rules, which may be determined from the different features that different objects present in images; the model can also be obtained by training on different pixel information under different environments. In another implementation, the second probability data may also be determined based on, for example, statistics of the features of various objects in images. The specific implementation can be determined through basic experimental principles and a limited number of experiments.
Meanwhile, any existing or improved means in the art for determining an object type from an image may be applied to the above embodiment, as long as it yields the probability data.
Correspondingly, in step S122, the method may specifically include:
s1221: determining a type of the target object based on the first probability data and the second probability data.
In a specific implementation, comprehensive probability data may be calculated from the first probability data and the second probability data: for example, different weights may be assigned to the first and second probability data, and their weighted sum taken as the comprehensive probability data; the type with the highest comprehensive probability is then determined as the type of the target object.
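The weighted combination just described can be sketched as follows; the example weights, probabilities, and type labels are illustrative assumptions rather than values from the patent.

```python
def fuse_probabilities(first, second, w1=0.5, w2=0.5):
    """Combine reflectivity-based (first) and pixel-based (second)
    per-type probabilities with configurable weights, then return the
    type with the highest comprehensive probability."""
    types = first.keys() & second.keys()
    combined = {t: w1 * first[t] + w2 * second[t] for t in types}
    return max(combined, key=combined.get)

first = {"liquid": 0.7, "wire": 0.2, "paper": 0.1}   # from reflectivity
second = {"liquid": 0.4, "wire": 0.5, "paper": 0.1}  # from pixel info
print(fuse_probabilities(first, second, w1=0.6, w2=0.4))
```

Tuning w1 and w2 lets the robot lean on whichever cue (reflectivity or imaging) is more reliable for its sensors.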
Further, in determining the first probability data or the second probability data, the determination may also be made in conjunction with depth of field information. For example: the actual size information of the target object may be determined based on at least one of the size information and the shape information in the pixel information and the depth information, and the second probability data may be determined based on the actual size information of the corresponding shape.
Fig. 4 is a schematic view of another flowchart of step S12 in fig. 1.
In another embodiment, referring to fig. 4, step S12 may include:
s124: and determining one or more object types corresponding to the reflectivity information of the target object according to the target interval range in which the reflectivity information of the target object is positioned and the corresponding relation between different interval ranges and different object types.
The corresponding relationship can be described as a first corresponding relationship, and the type of the target object can be determined according to the determined one or more object types.
After step S124, the method may further include:
s125: whether the reflectivity information of the target object corresponds to a plurality of object types.
If the determination result in step S125 is yes, step S126 may be implemented: determining the type of the target object in the plurality of object types according to the pixel information of the target object.
If the determination result in step S125 is no, step S127 may be implemented: and determining the object type corresponding to the reflectivity information of the target object as the type of the target object.
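Steps S124 through S127 can be sketched as an interval lookup followed by a pixel-information fallback. The interval boundaries and type labels below are hypothetical placeholders, not calibrated values from the patent.

```python
# Hypothetical first correspondence: interval range -> candidate types.
INTERVALS = [
    ((0.00, 0.15), {"liquid", "glass"}),
    ((0.15, 0.60), {"paper", "fabric"}),
    ((0.60, 1.00), {"metal"}),
]

def candidate_types(reflectivity):
    """S124: all object types whose interval contains the reflectivity."""
    for (lo, hi), types in INTERVALS:
        if lo <= reflectivity < hi:
            return types
    return set()

def resolve_type(reflectivity, pixel_hint=None):
    """S125-S127: one candidate -> use it directly; several candidates
    -> fall back to a type suggested by the pixel information."""
    cands = candidate_types(reflectivity)
    if len(cands) == 1:
        return next(iter(cands))
    return pixel_hint if pixel_hint in cands else None

print(resolve_type(0.8))            # unambiguous interval
print(resolve_type(0.1, "liquid"))  # disambiguated by pixel info
```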
Meanwhile, the type of the target object can be further determined by combining the depth information. For example: the real size information of the target object can be determined according to at least one of the size information and the shape information in the pixel information and the depth information, and then the corresponding type is determined to be the type of the target object according to the real size information of the corresponding shape.
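One common way to estimate real size from pixel extent and depth of field is a pinhole camera model (similar triangles). That projection model is an assumption of this sketch; the patent does not specify how the real size is computed.

```python
def real_size_m(extent_px, depth_m, focal_length_px):
    """Estimate an object's real-world extent from its pixel extent and
    depth, via similar triangles: real_size / depth = pixels / focal."""
    return extent_px * depth_m / focal_length_px

# A 100-pixel-wide object seen at 2 m with a 500-pixel focal length
# is about 0.4 m wide under this model.
print(real_size_m(100, 2.0, 500))
```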
In addition, this embodiment does not exclude the use of a trained or configured rule model, which may determine the type of the target object from inputs of the reflectivity information, the pixel information, and the depth of field information.
For the reflectivity information, in one application-scenario example, the target object may be determined to be a liquid object or a non-liquid object according to the reflectivity information; meanwhile, as mentioned above, to judge accurately whether it is a liquid object, the judgment may further combine pixel information such as the object's size information and shape information.
Regarding the pixel information, in one application-scenario example, the target object may be determined to be a wire-type object according to the size information and/or the shape information; meanwhile, to judge accurately whether it is a wire-type object, the judgment may further combine the object's surface feature information, depth of field information, reflectivity information, and the like. For example, the actual size information of the target object may be determined from the depth of field information together with at least one of the size information and the shape information in the pixel information, and the type of the target object may then be determined from the actual size information.
In other examples, the type of the target object may be, for example, a metal type, a nonmetal type, a paper type, a non-paper type, and the like, and may be further, for example, a milk type, a water type, and the like, a specific metal, a paper of a specific material, and the like, and more specific types.
After step S12, the method may further include:
s13: and identifying whether the target object is a specific object or not according to the type of the target object.
The specific object may be any object that can be distinguished from other objects.
In one embodiment, the specific object may include at least one of:
a processing object on which the mobile robot normally works; the processing object may be, for example, garbage;
a harmful object detrimental to the normal operation of the mobile robot;
an avoidance object pre-designated to be avoided by the mobile robot.
The object to be processed may be further understood as an object suitable for being processed by the mobile robot, and further, for the sweeping robot, the object may be, for example, a garbage object, which may be understood as an object that does not cause damage to the traveling and normal operation of the sweeping robot after the sweeping robot collects and cleans the garbage object. For a mopping robot, the treatment object may also be a beverage, milk, etc. on the ground. It can be seen that the processing objects of the mobile robot may be different according to their roles.
A harmful object can be understood as any object that adversely affects the normal operation of the mobile robot or the effect of that operation; correspondingly, the processing object described above can be further characterized as an object that has no such adverse effect.
The harmful objects so defined may differ according to the functions of the mobile robot. In one example, a harmful object may be a cable-like object, which can entangle the wheels at the bottom of the mobile robot and prevent it from running smoothly. For a mobile robot such as a sweeping robot, a harmful object may also be a liquid such as milk or a beverage: the cleaning components cannot clean it effectively, it may leave the floor dirty and disordered, and liquid infiltration may damage the components.
The avoidance object can be understood as any object set in advance, manually or automatically, to be avoided. For example, the user may specify in the program controlling the mobile robot (for example, an interactive interface of an APP) what the object is, its type, or some characteristic of it.
The recognition result may include, for example: identifying that the target object in the target image is a garbage object, or identifying that there is no garbage object in the target image; it may also include identifying that the target object in the target image is a harmful object, or identifying that there is no harmful object in the target image; it may likewise include identifying that the target object in the target image is an avoidance object, or identifying that there is no avoidance object in the target image.
In addition, if the mobile robot is a cleaning robot (e.g., a sweeping robot or a mopping robot), the garbage object may be any object to be cleaned: a solid object that can be swept into the collection chamber, or a liquid object that can be cleaned by means such as wiping. For example, a sweeping robot may treat a liquid as a garbage object to be cleaned, or may instead treat the liquid as a harmful object rather than as garbage; a mopping robot may treat the liquid as a garbage object to be cleaned, and may likewise treat a solid as a garbage object to be cleaned.
In one example, a mobile robot serving as a sweeping robot may take garbage objects as its processing objects and take a predefined subset of objects as harmful objects and/or avoidance objects.
In one embodiment, after step S13, the method may further include the following steps.
Fig. 5 is a second flowchart illustrating the specific object detection method of the mobile robot according to an embodiment of the present invention; fig. 6 is a schematic flowchart of step S14 in fig. 5; fig. 7 is another schematic flowchart of step S14 in fig. 5; and fig. 8 is yet another schematic flowchart of step S14 in fig. 5.
Referring to fig. 5, after step S13, the method may further include:
S14: executing corresponding processing according to the identified specific object.
In one embodiment, step S14 may include:
S141: determining whether the target object is identified as the garbage object.
If the determination result in step S141 is yes, step S142 may be implemented: and controlling a cleaning assembly to clean the garbage.
If the determination result in the step S141 is no, then:
in the embodiment shown in fig. 6, step S143: controlling the mobile robot to travel around the target object;
in the embodiment shown in fig. 8, step S144 may be implemented: determining whether the target object is identified as the harmful object or the avoidance object.
Referring to fig. 7 and 8, if the determination result of step S144 is yes, step S143 may be implemented.
In the embodiment shown in fig. 8, if the determination result of step S144 is no, step S142 may be performed.
As can be seen, the above embodiment can implement the following process:
if the target object is identified as the garbage object, controlling a cleaning assembly to clean the garbage object;
if the target object is identified as the harmful object or the avoidance object, controlling the mobile robot to travel around the target object;
and if the target object is identified as neither the garbage object nor the harmful object nor the avoidance object, controlling a cleaning assembly to clean the target object.
This does not exclude an embodiment in which steps S144 and S143 are performed without first determining whether the target object is a garbage object.
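The branch logic of steps S141 to S144, following the flow of fig. 8, can be sketched as follows (the function and flag names are illustrative, not from the disclosure):

```python
def decide_action(is_garbage, is_harmful, is_avoidance):
    """Decision flow of steps S141-S144: clean recognized garbage (S142),
    detour around harmful/avoidance objects (S143), and still clean a target
    that is recognized as none of the specific objects."""
    if is_garbage:
        return "clean"    # S142: control the cleaning assembly
    if is_harmful or is_avoidance:
        return "detour"   # S143: travel around the target object
    return "clean"        # neither garbage nor harmful/avoidance: clean anyway
```

The final branch is what distinguishes this flow from garbage-only recognition: an unrecognized object is still cleaned rather than skipped.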
It can be seen that, when the only specific object is the garbage object, the related art can clean only what it recognizes as garbage: in a scene where recognition is imperfect, unrecognized garbage is never cleaned. In the embodiment above, where the specific objects include harmful objects and avoidance objects as well as garbage objects, a target object can still be cleaned as long as it is not identified as a harmful object or an avoidance object. This avoids the situation in which subsequent processing is limited by the garbage recognition capability, and thus achieves accurate, full-coverage garbage cleaning.
Further, since the related art recognizes only garbage, a non-garbage object misrecognized as garbage would be cleaned by mistake, with adverse consequences. In the embodiment above, because harmful objects and avoidance objects are recognized in addition to garbage, the robot can still choose to detour even when an object is also recognized as garbage, which effectively improves safety.
Therefore, compared with the related art, the specific objects recognized by this embodiment are more diverse, which avoids the limitation of recognizing only a single kind of specific object and allows the scheme to account for more diverse effects.
In a specific implementation, the detour route for bypassing a harmful object or an avoidance object may differ from that for a common obstacle. Meanwhile, when a harmful object or an avoidance object is identified, information describing it may be fed back so that the corresponding area is marked as containing a harmful or avoidance object, which facilitates subsequent path planning; corresponding alarm information may also be sent out.
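One way the feedback just described could be recorded for later path planning is a simple grid-based hazard map. The cell size, names, and API below are illustrative assumptions:

```python
hazard_map = {}  # (cell_x, cell_y) -> kind of hazard/avoidance object found there

def report_hazard(x_m, y_m, kind, cell_size=0.25):
    """Mark the map cell containing position (x_m, y_m) as holding a harmful
    or avoidance object, so subsequent path planning can route around it."""
    cell = (int(x_m // cell_size), int(y_m // cell_size))
    hazard_map[cell] = kind
    return cell

def is_cell_blocked(x_m, y_m, cell_size=0.25):
    """True if the cell containing (x_m, y_m) was previously reported."""
    return (int(x_m // cell_size), int(y_m // cell_size)) in hazard_map
```

A planner could then treat blocked cells like obstacles, while an alarm subsystem reads `hazard_map` to report what was found and where.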
In summary, when identifying a specific object, the specific object detection method of the mobile robot according to this embodiment can determine the type of the target object based on the reflectivity information, and then identify whether the target object is a specific object according to that type.
Furthermore, alternatives of this embodiment can also incorporate the acquired image into the recognition, so that the judgment is based on both the image and the reflectivity information; some schemes can further incorporate the depth-of-field information. The basis for judgment is thus more varied and richer, which helps ensure detection accuracy: whatever the imaging capability, this scheme can effectively improve detection accuracy compared with schemes that rely on the image alone, because its basis is more diverse.
FIG. 9 is a first block diagram illustrating the program modules of the specific object detection apparatus of the mobile robot according to an embodiment of the present invention; fig. 10 is a second schematic diagram of the program modules of the specific object detecting apparatus of the mobile robot according to an embodiment of the present invention.
Referring to fig. 9 and 10, the specific object detecting apparatus 200 of the mobile robot includes:
an obtaining module 201, configured to obtain reflectivity information of a target object;
a type determining module 202, configured to determine a type of the target object according to the reflectivity information of the target object;
the identifying module 203 is configured to identify whether the target object is a specific object according to the type of the target object.
Optionally, the type determining module 202 is specifically configured to:
determining first probability data according to the reflectivity information of the target object, wherein the first probability data is used for representing the possibility that the target object belongs to each type;
determining a type of the target object based on the first probability data.
Optionally, the type determining module 202 is specifically configured to:
determining a type of the target object based on the first probability data and second probability data, the second probability data being determined based on pixel information of the target object and also being used for representing the possibility that the target object belongs to each type, the pixel information of the target object being information represented by a pixel portion of the target object in a target image acquired by an image acquisition section of the movable robot.
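The disclosure does not fix how the first and second probability data are combined; one plausible sketch, assuming an elementwise product fusion followed by renormalization (this fusion rule is an assumption), is:

```python
def fuse_probabilities(first, second):
    """Combine per-type likelihoods from reflectivity (first probability data)
    and from pixel information (second probability data). The product rule
    here is an assumption; the disclosure leaves the fusion method open."""
    raw = {t: p * second.get(t, 0.0) for t, p in first.items()}
    total = sum(raw.values())
    return {t: v / total for t, v in raw.items()} if total else raw

def pick_type(probs):
    """Choose the type with the highest fused likelihood."""
    return max(probs, key=probs.get)
```

Any monotone fusion (weighted sum, log-linear pooling) would fit the same module interface.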
Optionally, the type determining module 202 is specifically configured to:
and determining one or more object types corresponding to the reflectivity information of the target object according to the target interval range in which the reflectivity information of the target object is positioned and the corresponding relation between different interval ranges and different object types, wherein the type of the target object can be determined according to the determined one or more object types.
Optionally, the type determining module 202 is specifically configured to:
if the reflectivity information of the target object corresponds to a plurality of object types, determining the type of the target object in the plurality of object types according to the pixel information of the target object; the pixel information is information characterized by a pixel portion of the target object in a target image acquired by an image acquisition component of the mobile robot;
and if the reflectivity information of the target object corresponds to one object type, determining that the object type corresponding to the reflectivity information of the target object is the type of the target object.
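The interval-range lookup just described could be sketched as follows; the ranges and type names are illustrative placeholders, since real values would come from sensor calibration:

```python
# Illustrative correspondence between reflectivity interval ranges and object types.
REFLECTIVITY_RANGES = [
    ((0.00, 0.15), ["rubber", "cable-sheath"]),
    ((0.15, 0.45), ["paper", "fabric"]),
    ((0.45, 1.00), ["metal"]),
]

def candidate_types(reflectivity):
    """Return the object type(s) whose interval range contains the value.
    A single candidate decides the type directly; several candidates would
    be disambiguated with the pixel information, as described above."""
    for (lo, hi), types in REFLECTIVITY_RANGES:
        if lo <= reflectivity < hi:
            return types
    return []
```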
Optionally, the type determining module 202 is specifically configured to:
determining the type of the target object according to at least one of the pixel information of the target object and the depth information of the target object and the reflectivity information of the target object; the pixel information is information characterized by a pixel portion of the target object in a target image captured by an image capturing section of the movable robot.
Optionally, the depth of field information is detected by a detection component of the mobile robot; the detection component comprises a detection light source and a receiver;
the detection light source is used for emitting light pulses to a range of target visual angles containing the target object; the receiver is used for receiving the return light corresponding to the light pulse; the depth information is determined from the time at which the light pulse is emitted and the time at which the corresponding return light is received.
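The depth computation implied above is the standard time-of-flight relation, halving the round-trip time of the pulse:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def depth_from_round_trip(t_emit_s, t_return_s):
    """Depth-of-field distance from the time the light pulse is emitted and
    the time the corresponding return light is received: the pulse covers
    the target distance twice, out and back."""
    return SPEED_OF_LIGHT * (t_return_s - t_emit_s) / 2.0
```

A 20-nanosecond round trip, for instance, corresponds to a target roughly 3 m away.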
Optionally, the reflectivity information of the target object is determined according to a lighting condition when the lighting source of the mobile robot illuminates the target object, and a light collecting condition when the image collecting component of the mobile robot collects the target image of the target object.
Optionally, the reflectivity information of the target object is determined by calculating according to the following formula:
wherein:
P_pixel is the intensity information of the light collected by the pixel unit of the image acquisition component when the target image is acquired;
P_TX is the optical power of the light emitted by the illumination source;
d is the distance information of the target object from the illumination light source;
d^2 is the luminous decay efficiency of the illumination light source;
d_i is the light attenuation coefficient of the light reflected by the target object;
μ is the other attenuation in the light transmission.
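The formula itself is not reproduced in this text, only its variables. Assuming the usual radiometric model, in which the collected pixel power scales with the emitted power, the object's reflection coefficient, the other attenuation μ, and an inverse-square decay over the distance d, a relative reflectivity could be inverted as follows. This inversion is an assumption, not the disclosure's exact formula:

```python
def relative_reflectivity(p_pixel, p_tx, d_m, mu=1.0):
    """ASSUMED model: p_pixel ~ p_tx * reflectivity * mu / d_m**2, so the
    reflectivity estimate is the inversion below. The disclosure's exact
    formula is not reproduced in the text."""
    return p_pixel * d_m ** 2 / (p_tx * mu)
```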
Optionally, the specific object includes at least one of:
a processing object on which the mobile robot normally works;
a harmful object detrimental to the normal operation of the mobile robot;
and an avoidance object pre-designated to be avoided by the mobile robot.
Optionally, the apparatus further includes:
the executing module 204 is configured to execute corresponding processing according to the identified specific object.
Optionally, if the specific object includes a garbage object to be processed by the mobile robot, then:
the execution module 204 is specifically configured to:
if the target object is identified as the garbage object, controlling a cleaning assembly to clean the garbage object, and/or:
and if the target object is not identified as the garbage object, controlling the mobile robot to travel around the target object.
Optionally, if the specific object includes a harmful object and/or an avoidance object, then:
the execution module 204 is specifically configured to:
and if the target object is identified as the harmful object or the avoidance object, controlling the mobile robot to travel around the target object.
Optionally, if the specific object includes a harmful object and/or an avoidance object, and also includes a garbage object, then:
the execution module 204 is specifically configured to:
if the target object is identified as the garbage object, controlling a cleaning assembly to clean the garbage object;
if the target object is identified as the harmful object or the avoidance object, controlling the mobile robot to travel around the target object;
and if the target object is identified as neither the garbage object nor the harmful object nor the avoidance object, controlling a cleaning assembly to clean the target object.
In summary, when identifying a specific object, the specific object detection device of the mobile robot according to this embodiment can determine the type of the target object based on the reflectivity information, and then identify whether the target object is a specific object according to that type.
Furthermore, alternatives of this embodiment can also incorporate the acquired image into the recognition, so that the judgment is based on both the image and the reflectivity information; some schemes can further incorporate the depth-of-field information. The basis for judgment is thus more varied and richer, which helps ensure detection accuracy: whatever the imaging capability, this scheme can effectively improve detection accuracy compared with schemes that rely on the image alone, because its basis is more diverse.
FIG. 11 is a first schematic diagram of the configuration of a mobile robot in accordance with an embodiment of the present invention; fig. 12 is a second schematic structural diagram of a mobile robot according to an embodiment of the present invention.
Referring to fig. 11, the mobile robot 300 includes a detection component 303, an illumination light source 304, an image capturing component 305, a processor 301 and a memory 302, wherein the detection component 303, the illumination light source 304 and the image capturing component 305 are all directly or indirectly connected to the processor 301.
The memory 302 is used for storing codes and related data.
The processor 301 is configured to execute the code in the memory to implement the method according to the above alternative.
Referring to fig. 12, the detecting component 303 may include the detecting light source 3031 and the receiver 3032 mentioned above.
The detection light source 3031 supports detecting the depth-of-field information of an object in front at a certain frame rate. In a specific implementation, the requirements on timing precision and uniformity of the light source are relatively high, so a VCSEL or a high-quality LED may be selected. In a further alternative, to achieve the required measurement accuracy, a high-speed pulse driving mode is mainly adopted, for example with light-emission rise and fall edges of tens of nanoseconds.
The power of the illumination source 304 may be less than 1 watt; the field angle depends on the object to be identified: the vertical viewing angle may range from 50 to 60 degrees, for example 55 degrees, and the horizontal viewing angle may range from 70 to 75 degrees, for example 72 degrees.
The illumination source 304 need not meet a high electrical-response requirement; it may emit light continuously, comparable to a cell-phone flash, and the emission time may be, for example, 3 milliseconds.
Fig. 13 is a schematic structural diagram of an electronic device in an embodiment of the invention.
Referring to fig. 13, an electronic device 40 is provided, including:
a processor 41; and the number of the first and second groups,
a memory 42 for storing executable instructions of the processor;
wherein the processor 41 is configured to perform the above-mentioned method via execution of the executable instructions.
The processor 41 is capable of communicating with the memory 42 via the bus 43.
The present embodiments also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-mentioned method.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (18)

1. A specific object detection processing method of a mobile robot is characterized by comprising the following steps:
acquiring reflectivity information of a target object;
determining the type of the target object according to the reflectivity information of the target object;
and identifying whether the target object is a specific object or not according to the type of the target object.
2. The method of claim 1, wherein determining the type of the target object based on the reflectivity information of the target object comprises:
determining first probability data according to the reflectivity information of the target object, wherein the first probability data is used for representing the possibility that the target object belongs to each type;
determining a type of the target object based on the first probability data.
3. The method of claim 2, wherein determining the type of the target object from the first probability data comprises:
determining a type of the target object based on the first probability data and second probability data, the second probability data being determined based on pixel information of the target object and also being used for representing the possibility that the target object belongs to each type, the pixel information of the target object being information represented by a pixel portion of the target object in a target image acquired by an image acquisition section of the movable robot.
4. The method of claim 1, wherein determining the type of the target object based on the reflectivity information of the target object comprises:
and determining one or more object types corresponding to the reflectivity information of the target object according to the target interval range in which the reflectivity information of the target object is positioned and the corresponding relation between different interval ranges and different object types, wherein the type of the target object can be determined according to the determined one or more object types.
5. The method of claim 4, wherein after determining one or more object types corresponding to the reflectivity information of the target object, further comprising:
if the reflectivity information of the target object corresponds to a plurality of object types, determining the type of the target object in the plurality of object types according to the pixel information of the target object; the pixel information is information characterized by a pixel portion of the target object in a target image acquired by an image acquisition component of the mobile robot;
and if the reflectivity information of the target object corresponds to one object type, determining that the object type corresponding to the reflectivity information of the target object is the type of the target object.
6. The method of claim 1, wherein determining the type of the target object based on the reflectivity information of the target object comprises:
determining the type of the target object according to at least one of the pixel information of the target object and the depth information of the target object and the reflectivity information of the target object; the pixel information is information characterized by a pixel portion of the target object in a target image captured by an image capturing section of the movable robot.
7. The method of claim 6, wherein the depth of field information is detected by a detection component of the mobile robot; the detection component comprises a detection light source and a receiver;
the detection light source is used for emitting light pulses to a range of target visual angles containing the target object; the receiver is used for receiving the return light corresponding to the light pulse; the depth information is determined from the time at which the light pulse is emitted and the time at which the corresponding return light is received.
8. The method according to any one of claims 1 to 7, wherein the information on the reflectivity of the target object is determined based on a lighting situation when the illumination light source of the mobile robot illuminates the target object and a light collection situation when the image collection part of the mobile robot collects a target image of the target object.
9. The method of claim 8, wherein the reflectivity information of the target object is determined by calculating the following equation:
wherein:
P_pixel is the energy information of the light collected by the pixel unit of the image acquisition component when the target image is acquired;
P_TX is the energy information of the light emitted by the illumination source;
d is the distance between the target object and the image acquisition component;
d_i is the light attenuation coefficient of the light reflected by the target object, determined by an object surface reflection model;
μ is the other attenuation during light transmission;
and φ is a coefficient normalizing the lens aperture and transmittance and other optical-path or circuit attenuation coefficients.
10. The method according to any one of claims 1 to 7, wherein the specific object comprises at least one of:
a processing object on which the mobile robot normally works;
a harmful object detrimental to the normal operation of the mobile robot;
and an avoidance object pre-designated to be avoided by the mobile robot.
11. The method according to any one of claims 1 to 7, wherein after identifying whether the target object is a specific object according to the type of the target object, further comprising:
and executing corresponding processing according to the identified specific object.
12. The method according to claim 11, wherein if the specific object includes a garbage object to be processed by the mobile robot, then:
according to the identified specific object, executing corresponding processing, including:
if the target object is identified as the garbage object, controlling a cleaning assembly to clean the garbage object, and/or:
and if the target object is not identified as the garbage object, controlling the mobile robot to travel around the target object.
13. The method of claim 11, wherein if the specific object comprises a harmful object and/or an avoidance object, then:
according to the identified specific object, executing corresponding processing, including:
and if the target object is identified as the harmful object or the avoidance object, controlling the mobile robot to travel around the target object.
14. The method of claim 11, wherein if the specific object comprises a harmful object and/or an avoidance object and also comprises a garbage object, then:
according to the identified specific object, executing corresponding processing, including:
if the target object is identified as the garbage object, controlling a cleaning assembly to clean the garbage object;
if the target object is identified as the harmful object or the avoidance object, controlling the mobile robot to travel around the target object;
and if the target object is identified as neither the garbage object nor the harmful object nor the avoidance object, controlling a cleaning assembly to clean the target object.
15. A specific object detection device for a mobile robot, comprising:
the acquisition module is used for acquiring the reflectivity information of the target object;
the type determining module is used for determining the type of the target object according to the reflectivity information of the target object;
and the identification module is used for identifying whether the target object is a specific object or not according to the type of the target object.
16. A mobile robot comprising a processor, a memory and an image acquisition component for acquiring an image of the target;
the memory is used for storing codes and related data;
the processor configured to execute the code in the memory to implement the method of any one of claims 1 to 14.
17. An electronic device, comprising a processor and a memory,
the memory is used for storing codes and related data;
the processor configured to execute the code in the memory to implement the method of any one of claims 1 to 14.
18. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 14.
CN201910866408.XA 2019-09-12 2019-09-12 Mobile robot, specific object detection method and device thereof and electronic equipment Pending CN110619298A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910866408.XA CN110619298A (en) 2019-09-12 2019-09-12 Mobile robot, specific object detection method and device thereof and electronic equipment


Publications (1)

Publication Number Publication Date
CN110619298A true CN110619298A (en) 2019-12-27

Family

ID=68923295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910866408.XA Pending CN110619298A (en) 2019-09-12 2019-09-12 Mobile robot, specific object detection method and device thereof and electronic equipment

Country Status (1)

Country Link
CN (1) CN110619298A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106114500A (en) * 2016-06-22 2016-11-16 京东方科技集团股份有限公司 Vehicle travel control method and controlling device for vehicle running
CN107024821A (en) * 2017-05-18 2017-08-08 深圳市沃特沃德股份有限公司 The method and lighting device of a kind of lighting device capturing information
CN107272708A (en) * 2017-08-03 2017-10-20 佛山市盈智轩科技有限公司 Home-use floor cleaning system and floor cleaning method
CN109120801A (en) * 2018-10-30 2019-01-01 Oppo(重庆)智能科技有限公司 A kind of method, device and mobile terminal of dangerous goods detection
CN109213137A (en) * 2017-07-05 2019-01-15 广东宝乐机器人股份有限公司 sweeping robot, sweeping robot system and its working method
CN110103223A (en) * 2019-05-27 2019-08-09 西安交通大学 A kind of identification of view-based access control model follows barrier-avoiding method and robot automatically
CN110147706A (en) * 2018-10-24 2019-08-20 腾讯科技(深圳)有限公司 The recognition methods of barrier and device, storage medium, electronic device

Similar Documents

Publication Publication Date Title
CN110558902B (en) Mobile robot, specific object detection method and device thereof and electronic equipment
CN108780050B (en) Method and device for detecting lens, electronic equipment and computer readable storage medium
CN104729426B (en) Angle steel automatic on-line detecting system and method based on machine vision
US11297768B2 (en) Vision based stalk sensors and associated systems and methods
CN109871765B (en) Image-based non-standard article stacking detection method and system and electronic equipment
BE1025335B1 (en) Sensor to control an automatic door
EP2945094A1 (en) Scanner automatic dirty/clean window detection
WO2014018427A2 (en) Kernel counter
CN113570582B (en) Camera cover plate cleanliness detection method and detection device
BR102020016762A2 (en) GRAIN LOSS MONITORING SYSTEM FOR A HARVESTING MACHINE CONFIGURED TO PROCESS AGRICULTURAL MATERIAL.
CN110471086A (en) A kind of radar survey barrier system and method
BE1025329A9 (en) Human body recognition method and human body recognition sensor
CN112241015B (en) Method for removing dragging point by single-point laser radar
CN110686600A (en) Measuring method and system based on flight time measurement
CN110619298A (en) Mobile robot, specific object detection method and device thereof and electronic equipment
CN112461829B (en) Optical flow sensing module, self-moving robot and material detection method
JP6382899B2 (en) Object detection method
CN115631143A (en) Laser point cloud data detection method and device, readable storage medium and terminal
JP2000348268A (en) Human body detecting device
KR20240023447A (en) Monitoring the scan volume of a 3d scanner
CN110430361B (en) Window cleaning method and device
US20170011510A1 (en) Image apparatus, image processing method, and computer readable, non-transitory medium
CN112927278A (en) Control method, control device, robot and computer-readable storage medium
CN113221910A (en) Structured light image processing method, obstacle detection method, module and equipment
US20230267439A1 (en) Barcode Reader with Off-Platter Detection Assembly

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191227