CN116700236A - Map generation method for self-mobile device, self-mobile device and storage medium - Google Patents


Info

Publication number
CN116700236A
Authority
CN
China
Prior art keywords
area
obstacle
map
self
obstacle information
Prior art date
Legal status
Pending
Application number
CN202210190305.8A
Other languages
Chinese (zh)
Inventor
罗绍涵
孙佳佳
王元超
Current Assignee
Dreame Innovation Technology Suzhou Co Ltd
Original Assignee
Dreame Innovation Technology Suzhou Co Ltd
Priority date
Filing date
Publication date
Application filed by Dreame Innovation Technology Suzhou Co Ltd filed Critical Dreame Innovation Technology Suzhou Co Ltd
Priority to CN202210190305.8A priority Critical patent/CN116700236A/en
Priority to PCT/CN2023/075812 priority patent/WO2023160428A1/en
Publication of CN116700236A publication Critical patent/CN116700236A/en
Pending legal-status Critical Current


Classifications

    • A HUMAN NECESSITIES
    • A47 FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47L DOMESTIC WASHING OR CLEANING; SUCTION CLEANERS IN GENERAL
    • A47L11/00 Machines for cleaning floors, carpets, furniture, walls, or wall coverings
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application belongs to the technical field of automatic control, and particularly relates to a map generation method for a self-mobile device, a self-mobile device and a storage medium. The method comprises the following steps: constructing an area map of a target area while the self-mobile device moves within the target area, wherein the area map comprises a non-passable area where an obstacle in the target area is located, and the non-passable area is marked using a preset representation mode; acquiring obstacle information obtained during the movement; and correcting the area structure of the non-passable area and/or the preset representation mode based on the obstacle information to obtain a corrected area map. This solves the problem that the area map does not match the actual environment of the target area well: because the obstacle information is not used when the area map is first constructed, correcting the area map with the obstacle information collected in the target area improves the degree to which the area map matches the actual environment of the target area.

Description

Map generation method for self-mobile device, self-mobile device and storage medium
Technical Field
The application belongs to the technical field of automatic control, and particularly relates to a map generation method for a self-mobile device, a self-mobile device and a storage medium.
Background
A self-mobile device is a device that can move automatically within a target area without being driven manually. To avoid colliding with obstacles during operation, the self-mobile device acquires an area map of the target area before operating, where the area map includes the areas that the self-mobile device cannot pass through. In this way, the self-mobile device can avoid the non-passable areas while moving according to the area map.
In a typical method for generating an area map of a target area, the self-mobile device moves within the target area and gradually builds the area map during the movement; if a collision occurs during the movement, a non-passable area is marked on the constructed area map based on the collision position; after the target area has been traversed, a complete area map of the target area is obtained.
However, such an area map only shows whether each area is passable, so the resulting area map does not match the actual environment of the target area well.
Disclosure of Invention
The application provides a map generation method for a self-mobile device, a self-mobile device and a storage medium, which can solve the problem that a conventional area map only shows whether each area is passable and therefore does not match the actual environment of the target area well. The application provides the following technical solutions:
In a first aspect, there is provided a map generation method of a self-mobile device, the method comprising:
constructing an area map of a target area while the self-mobile device moves within the target area, wherein the area map comprises a non-passable area where an obstacle in the target area is located, and the non-passable area is marked using a preset representation mode;
acquiring obstacle information of the obstacle obtained in the moving process;
and correcting the area structure of the non-passable area and/or the preset representation mode based on the obstacle information to obtain a corrected area map.
Optionally, correcting the preset representation mode based on the obstacle information to obtain a corrected area map, including:
determining an expected representation mode corresponding to the obstacle information;
and identifying the non-passable area at the corresponding position of the obstacle by using the expected representation mode, and obtaining the corrected area map.
Optionally, the obstacle information includes an obstacle type; accordingly, the desired representation is used to indicate the obstacle type;
and/or,
the obstacle information includes an obstacle color; accordingly, the desired representation includes representation in the obstacle color.
Optionally, correcting the area structure of the non-passable area based on the obstacle information includes:
determining whether the non-passable area is a false area based on the obstacle information;
and in the case that the non-passable area is not a false area, completing the area map to obtain the corrected area map.
Optionally, the self-mobile device is provided with a laser sensor and a collision sensor; accordingly, the obstacle information includes a first obstacle detection result determined based on a first sensing result acquired by the laser sensor, and a second obstacle detection result determined based on a second sensing result acquired by the collision sensor;
the determining whether the non-passable area is a false area based on the obstacle information includes:
in a case where the first obstacle detection result indicates that no obstacle exists and the second obstacle detection result indicates that an obstacle exists, it is determined that the non-passable area is not the false area.
Optionally, the completing the area map includes:
and completing the area map by using the first sensing result.
Optionally, the determining whether the non-passable area is a false area based on the obstacle information includes:
inputting the obstacle information into a pre-trained neural network model to obtain an obstacle recognition result; and the obstacle recognition result is used for indicating whether the area where the obstacle is located is the false area or not.
Optionally, after the area structure of the non-passable area and/or the preset representation mode is corrected based on the obstacle information, the method further includes:
performing straightening processing on the area edges in the corrected area map.
Optionally, the performing straightening processing on the area edges in the corrected area map includes:
determining a corner position within the target area based on the obstacle information;
and straightening the area edges based on the corner position.
In a second aspect, there is provided a self-mobile device comprising a processor and a memory; the memory stores a program that is loaded and executed by the processor to implement the map generation method of the self-mobile device provided in the first aspect.
In a third aspect, there is provided a computer-readable storage medium having stored therein a program for implementing the map generation method of the self-mobile device provided in the first aspect when executed by a processor.
The beneficial effects of the application at least include: an area map of the target area is constructed while the self-mobile device moves within the target area, the area map comprises a non-passable area where an obstacle in the target area is located, and the non-passable area is marked using a preset representation mode; obstacle information obtained during the movement is acquired; the area structure of the non-passable area and/or the preset representation mode is corrected based on the obstacle information to obtain a corrected area map; this solves the problem that the area map does not match the actual environment of the target area well; because the obstacle information is not used when the area map is first constructed, correcting the area map with the obstacle information collected in the target area improves the degree to which the area map matches the actual environment of the target area.
In addition, the non-passable area corresponding to the obstacle is marked by using the expected representation method corresponding to the obstacle information, so that different representation modes corresponding to different obstacles in the corrected area map can be different, different obstacles can be more intuitively distinguished, the corrected area map can more intuitively reflect the actual environment of the target area, and the matching degree of the area map and the actual environment of the target area can be improved.
In addition, because the obstacle information comprises the types of the obstacles, the corresponding representation modes of the different types of the obstacles in the corrected area map are different, the different types of the obstacles can be more intuitively distinguished, and the problem that the different types of the obstacles cannot be distinguished through the area map when all the obstacles are marked by using the preset representation mode can be avoided.
In addition, because the obstacle information comprises the obstacle colors, the corresponding representation modes of the obstacles with different colors in the corrected regional map are different, the obstacles with different colors can be more intuitively distinguished, and the problem that the obstacles with different colors cannot be distinguished through the regional map when all the obstacles are marked by using the preset representation modes can be avoided, so that the matching degree of the regional map and the actual environment of the target region can be improved.
In addition, in the case that it is determined based on the obstacle information that the non-passable area is not a false area, the area map is completed to obtain the corrected area map, so an area that actually exists in the target area is not wrongly marked as non-passable, and the degree to which the area map matches the actual environment of the target area can be improved.
In addition, since the obstacle information includes the first obstacle detection result determined based on the first sensing result acquired by the laser sensor and the second obstacle detection result determined based on the second sensing result acquired by the collision sensor, it can be accurately determined whether the non-passable area is a false area based on the obstacle information, thereby determining whether to correct the area map, and thus, the degree of matching of the area map with the actual environment of the target area can be improved.
In addition, in the case that the non-passable area is not a false area, the area map is completed based on the first sensing result; because the laser sensing signal emitted by the laser sensor can pass through objects with high light transmittance, the environmental information of the area can be determined based on the first sensing result and used to complete the area map, so the degree to which the area map matches the actual environment of the target area can be improved.
In addition, because whether the area where the obstacle is located is a false area is determined based on the obstacle recognition result output by the pre-trained neural network model, and this determines whether the area map is corrected, the degree to which the area map matches the actual environment of the target area can be improved.
In addition, the area edges in the corrected area map are straightened, which removes errors introduced while constructing and correcting the area map, so the degree to which the area map matches the actual environment of the target area can be improved.
In addition, because the wall position can be determined based on the corner position, the area edges can be straightened accordingly; this improves the accuracy of the straightening processing and thus the degree to which the area map matches the actual environment of the target area.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present application, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a self-mobile device according to one embodiment of the present application;
FIG. 2 is a schematic diagram of a method for map generation from a mobile device according to one embodiment of the present application;
FIG. 3 is a block diagram of a map generation apparatus of a self-mobile device provided by an embodiment of the present application;
FIG. 4 is a block diagram of an electronic device provided by one embodiment of the present application.
Detailed Description
The following describes the embodiments of the present application clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the application are shown. The application is described in detail hereinafter with reference to the drawings in conjunction with the embodiments. It should be noted that, as long as there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other.
It should be noted that the terms "first", "second" and the like in the description and claims of the present application and in the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
In this application, unless otherwise indicated, orientation terms such as "upper, lower, top, bottom" are generally used with respect to the orientation shown in the drawings or with respect to the component itself in the vertical or gravitational direction; likewise, for ease of understanding and description, "inner and outer" refer to inner and outer relative to the profile of each component itself. The above orientation terms are not intended to limit the present application.
First, several terms related to the embodiments of the present application will be described.
Laser sensor: a sensor comprising a laser generating component and a light sensing component. In use, the laser sensor emits laser light towards a target object, the laser light is diffusely reflected by the target object, the reflected light is imaged on the light sensing component, and the position and shape of the target object can be measured from the change of the imaged position and shape.
Collision sensor: a sensor that generates a sensing signal upon contact with another object, indicating that the device has collided with that object. The collision sensor is typically mounted on the surface of the self-mobile device.
Fig. 1 is a schematic structural diagram of a self-mobile device according to an embodiment of the present application. Self-mobile devices include, but are not limited to, devices with an automatic movement function such as sweeping robots, floor scrubbers and sweeping-and-mopping machines; this embodiment does not limit the type of the self-mobile device. As can be seen from Fig. 1, the self-mobile device at least includes a housing 110, a moving mechanism 120 and a controller 130.
The housing 110 is the outer shell of the self-mobile device. The shape of the housing 110 may be a regular geometric body, such as a circle or a square, or may be configured into other shapes according to the actual application scenario; this embodiment does not limit the shape of the housing 110.
The housing 110 mainly provides protection and support. The housing 110 may be integrally formed or may be a detachable structure; this embodiment does not limit the implementation of the housing 110.
The structure of the housing 110 is substantially flat, such as a disc shape.
The moving mechanism 120 is located at the bottom of the housing 110 and is used for driving the self-moving device to move. The moving mechanism 120 may be wheeled or crawler-type, and the implementation of the moving mechanism 120 is not limited in this embodiment.
The moving mechanism 120 is connected to the controller 130, so as to drive the self-moving device to move under the control of the controller 130.
The controller 130 may be a micro control unit installed inside the self-mobile device, or any other component having a control function; this embodiment does not limit the type of the controller.
In this embodiment, the controller 130 is configured to: construct an area map of the target area while the self-mobile device moves within the target area, wherein the area map comprises a non-passable area where an obstacle in the target area is located, and the non-passable area is marked using a preset representation mode; acquire obstacle information of the obstacle obtained during the movement; and correct the area structure of the non-passable area and/or the preset representation mode based on the obstacle information to obtain a corrected area map.
Optionally, for collecting obstacle information, an environmental sensor 140 is provided on the self-moving device. The environment sensor 140 is used to collect obstacle information of obstacles in the target area during movement of the self-moving device in the target area.
The environmental sensor 140 may be a visual sensor, a laser sensor, or a collision sensor, where visual sensors include, but are not limited to: charge-coupled device (CCD) sensors, complementary metal-oxide-semiconductor (CMOS) sensors, and the like; this embodiment does not limit the type of the environmental sensor 140.
In one example, the environmental sensor 140 includes a visual sensor. Accordingly, the obstacle information includes an obstacle color.
Alternatively, the environmental sensors 140 may be one or at least two, and the number of the environmental sensors 140 is not limited in this embodiment.
Optionally, the environmental sensor 140 may be located at the top and/or side of the housing 110. The collection range of the environmental sensor 140 includes, but is not limited to: in front of the self-mobile device in its travel direction, and/or to the side of the self-mobile device relative to its travel direction. This embodiment does not limit the specific mounting location and collection range of the environmental sensor 140.
In actual implementation, the self-mobile device may also include other components, such as: battery packs, edge brushes, etc., the present embodiment does not list the components that the self-mobile device includes one by one.
In this embodiment, an area map of the target area is constructed while the self-mobile device moves within the target area, the area map comprises a non-passable area where an obstacle in the target area is located, and the non-passable area is marked using a preset representation mode; obstacle information obtained during the movement is acquired; and the area structure of the non-passable area and/or the preset representation mode is corrected based on the obstacle information to obtain a corrected area map. This solves the problem that the area map does not match the actual environment of the target area well; because the obstacle information is not used when the area map is first constructed, correcting the area map with the obstacle information collected in the target area improves the degree to which the area map matches the actual environment of the target area.
The map generation method of the self-mobile device provided by the application is described in detail below.
The map generation method of the self-mobile device is shown in Fig. 2. This embodiment is described by taking the method as being used in the self-mobile device shown in Fig. 1 as an example. In other embodiments, the method may also be performed by another device communicatively coupled to the self-mobile device, such as a mobile phone, a computer or a tablet computer that remotely controls the cleaning device; this embodiment does not limit the implementation of such other devices or the execution subject of each embodiment. The map generation method at least comprises the following steps:
In step 201, an area map of the target area is constructed during the movement of the self-mobile device within the target area.
The area map comprises a non-passable area where an obstacle in the target area is located, and the non-passable area is marked by using a preset representation mode.
Optionally, the target area may be a house, an office, or a factory building; this embodiment does not limit the type of the target area. This embodiment is described taking a house as the target area as an example.
Optionally, the obstacle may be a wall in the target area, or may be furniture in the target area, such as tables, tea tables, wardrobes, beds, and the like; this embodiment does not limit the type of obstacle.
The preset representation mode refers to the mode used to mark a non-passable area in a conventional area map construction process. Optionally, the preset representation mode may be text, for example marking the non-passable area with the words "non-passable area"; or it may be a number, such as marking the non-passable area with the number "1"; or it may be a fixed color, such as marking the non-passable area in gray. This embodiment does not limit the type of the preset representation mode.
Optionally, the manner of constructing the area map of the target area includes, but is not limited to, one of the following:
in the first way, an area map of the target area is constructed based on the movement trajectory of the self-mobile device moving within the target area. In this case, constructing the area map of the target area includes: determining a passable area in the target area based on the movement trajectory of the self-mobile device moving within the target area; and determining a non-passable area in the target area based on the boundary information of the target area and the passable area, so as to construct the area map of the target area.
In the second way, an area map of the target area is constructed based on the positions where collisions occur while the self-mobile device moves within the target area. In this case, constructing the area map of the target area includes: acquiring the positions where the self-mobile device collides while moving within the target area; and determining the collision positions as obstacle positions, so as to construct the area map of the target area.
Alternatively, the collision of the self-mobile device may be determined based on the sensing information of the collision sensor mounted on the side of the housing of the self-mobile device, or may be determined based on the sensing information of the acceleration sensor mounted on the self-mobile device, which is not limited in the collision detection manner of the self-mobile device in this embodiment.
In one example, if the self-mobile device detects that its acceleration in a non-travel direction is greater than a preset acceleration threshold, it is determined that the self-mobile device has collided.
In actual implementation, the self-mobile device may also construct the area map of the target area in other manners, and the present embodiment does not limit the manner in which the self-mobile device constructs the area map of the target area.
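For illustration only (this sketch is not part of the patent text), the following Python snippet shows one way the two construction modes above could be combined into a simple occupancy-grid area map, with traversed cells marked passable and collision positions marked non-passable; the cell codes, grid layout and acceleration threshold are assumptions.

```python
import numpy as np

FREE, UNKNOWN, BLOCKED = 0, -1, 1            # hypothetical cell codes; BLOCKED plays the role of the preset representation
ACCEL_THRESHOLD = 2.0                        # assumed collision threshold in m/s^2, not a value from the patent

def collided(accel_non_travel: float) -> bool:
    """Infer a collision from the acceleration measured in a non-travel direction."""
    return abs(accel_non_travel) > ACCEL_THRESHOLD

def build_area_map(grid_shape, trajectory_cells, collision_cells):
    """Mark traversed cells as passable and collision positions as non-passable."""
    area_map = np.full(grid_shape, UNKNOWN, dtype=int)
    for r, c in trajectory_cells:            # first way: the movement trajectory gives the passable area
        area_map[r, c] = FREE
    for r, c in collision_cells:             # second way: collision positions give obstacle positions
        area_map[r, c] = BLOCKED
    return area_map

# Example: a 5x5 target area, an L-shaped trajectory, and one collision detected at (2, 3).
print(collided(3.1))                                                        # True
print(build_area_map((5, 5), [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)], [(2, 3)]))
```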
Step 202, obtaining obstacle information of an obstacle obtained in the moving process.
Optionally, the obstacle information includes, but is not limited to, the following: the type of obstacle, the color of the obstacle, and/or the position of the edge of the obstacle, the present embodiment does not limit the type of obstacle information.
In one example, the types of obstructions are classified as furniture and other obstructions, wherein furniture is in turn specifically classified as a table, chair, cabinet, etc.
In another example, the types of the obstacle are classified into movable obstacle and immovable obstacle, the immovable obstacle including a wall, a wardrobe, and the like; the movable barrier includes a table, a chair, or the like.
In actual implementation, the obstacle may be classified according to other classification methods, and the classification method of the obstacle is not limited in this embodiment.
Alternatively, the obstacle information may be generated by the self-mobile device according to the environmental information collected by the environmental sensor, or may be sent by other devices to the self-mobile device, which is not limited by the manner in which the obstacle information is obtained in this embodiment.
In one example, the obstacle information is generated by the self-mobile device according to the environmental information collected by the environmental sensor. In this case, acquiring the obstacle information of the obstacle obtained during the movement includes at least one of the following cases:
in the first case, the obstacle information includes an obstacle type, and at this time, the environment sensor includes a vision sensor or a laser sensor, and accordingly, the environment information is image information of an area where the obstacle is located. At this time, the obstacle information of the obstacle obtained during the movement is acquired, including: the type of the obstacle is determined based on the image information of the area where the obstacle is located.
Optionally, determining the type of the obstacle based on the image information of the area where the obstacle is located includes: and inputting the image of the area where the obstacle is located into a pre-trained obstacle recognition model to obtain the type of the obstacle.
The obstacle recognition model is obtained by training the neural network by using first training data, and each group of first training data comprises a first sample image and obstacle type label data in the first sample image.
For example, the training process of the obstacle recognition model includes: creating an initial network model; inputting the sample images and the obstacle type label data of the sample images into the initial network model to obtain a model result; and iteratively updating the parameters of the initial network model based on the model result and the corresponding obstacle type label data, obtaining the obstacle recognition model when the number of iterations reaches a preset number or the updated model converges.
the initial network model may be a BP neural network (Back Propagation Neural Network), an ART neural network (Adaptive Resonance Theory), or a radial basis function (Radial Basis Function, RBF) neural network, which is not limited in this embodiment.
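As a hedged illustration of the training procedure described above (not the patent's own implementation), the sketch below trains a small back-propagation MLP classifier on hypothetical first training data; the scikit-learn model choice, the 32x32 image size and the label set are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier   # a back-propagation MLP, in the spirit of the BP network mentioned above

# Hypothetical first training data: flattened 32x32 grayscale sample images with obstacle-type labels.
rng = np.random.default_rng(0)
sample_images = rng.random((200, 32 * 32))          # stand-in for the first sample images
obstacle_labels = rng.integers(0, 3, size=200)      # 0 = table, 1 = chair, 2 = cabinet (assumed label set)

# "Create an initial network model" and iteratively update it until it converges or a preset
# maximum number of iterations is reached.
obstacle_recognition_model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
obstacle_recognition_model.fit(sample_images, obstacle_labels)

# Inference: input the image of the area where the obstacle is located, obtain the obstacle type.
new_image = rng.random((1, 32 * 32))
print("predicted obstacle type:", obstacle_recognition_model.predict(new_image)[0])
```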
In the second case, the obstacle information includes an obstacle color, and at this time, the environment sensor includes a visual sensor, and accordingly, the environment information is image information of an area where the obstacle is located. At this time, the obstacle information of the obstacle obtained during the movement is acquired, including: the color of the obstacle is determined based on the image information of the area where the obstacle is located.
The image information collected by the vision sensor is color image information, namely information consisting of data of three channels of red, green and blue of the same row and column.
In a third case, the obstacle information includes an obstacle edge position, and at this time, the environmental sensor includes a laser sensor or a vision sensor, and accordingly, the environmental information is a laser sensing signal of an area where the obstacle is located. At this time, the obstacle information of the obstacle obtained during the movement is acquired, including: and determining the edge position of the obstacle based on the laser sensing signal of the area where the obstacle is located.
The laser sensing signals are used for indicating the height change condition of the area where the obstacle is located, namely, the laser sensing signals corresponding to the positions of different heights are different.
Optionally, determining the obstacle edge position based on the laser sensing signal of the area where the obstacle is located includes: determining the position of the height change in the area where the obstacle is located based on the laser sensing signal; the position where the height of the obstacle changes in the area is determined as the edge position of the obstacle.
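A minimal sketch of this third case, assuming the laser sensing signal has already been converted into a height profile; the height-jump threshold and data layout are illustrative assumptions, not values from the patent.

```python
import numpy as np

def obstacle_edge_positions(heights, positions, height_jump=0.03):
    """Return the positions where the sensed height changes abruptly.

    heights: a height profile derived from the laser sensing signal (metres).
    positions: the grid position sampled at each reading.
    height_jump: assumed minimum height change that counts as an edge.
    """
    heights = np.asarray(heights, dtype=float)
    edges = []
    for i in range(1, len(heights)):
        if abs(heights[i] - heights[i - 1]) > height_jump:
            edges.append(positions[i])               # the position where the height changes
    return edges

# Example: a flat floor with a table leg between samples 3 and 5.
profile = [0.0, 0.0, 0.0, 0.45, 0.45, 0.0, 0.0]
cells = [(0, i) for i in range(len(profile))]
print(obstacle_edge_positions(profile, cells))       # -> [(0, 3), (0, 5)]
```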
In another example, the obstacle information is sent to the self-mobile device by another device. In this case, acquiring the obstacle information of the obstacle obtained during the movement includes: the self-mobile device sends its current position information to the other device during the movement, so that the other device can determine the obstacle information of the sub-area where the current position is located based on the current position information and send the obstacle information to the self-mobile device; and the self-mobile device receives the obstacle information sent by the other device.
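The exchange just described could look like the following sketch, in which a hypothetical companion device keeps obstacle information per sub-area and answers position queries; the sub-area size and data layout are assumptions.

```python
# A hypothetical companion device keeps obstacle information per sub-area; names and layout are assumptions.
SUB_AREA_OBSTACLES = {
    (0, 0): {"type": "wall", "color": (200, 200, 200)},
    (0, 1): {"type": "table", "color": (120, 80, 40)},
}

def sub_area_of(position, sub_area_size=2.0):
    """Map a continuous (x, y) position to the sub-area it falls in."""
    return (int(position[0] // sub_area_size), int(position[1] // sub_area_size))

def obstacle_info_for(position):
    """What the other device would send back for the self-mobile device's current position."""
    return SUB_AREA_OBSTACLES.get(sub_area_of(position))

print(obstacle_info_for((1.2, 3.5)))    # -> obstacle info of sub-area (0, 1)
```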
And 203, correcting the regional structure and/or the preset representation mode of the non-passable region based on the obstacle information to obtain a corrected regional map.
Optionally, correcting the preset representation mode based on the obstacle information to obtain a corrected area map, including: determining an expected representation mode corresponding to the obstacle information; and identifying the non-passable area of the corresponding position of the obstacle by using the expected representation mode, and obtaining the corrected area map.
Optionally, the expected representation corresponding to the different types of obstacle information is different, and thus determining the expected representation corresponding to the obstacle information includes at least one of:
In the first case, the obstacle information includes an obstacle type. Accordingly, the expected representation mode is used to indicate the obstacle type. In this case, determining the expected representation mode corresponding to the obstacle information includes: determining the type data corresponding to the obstacle type as the expected representation mode.
Alternatively, the type data may be text, such as: the obstacle type name may alternatively be digital, such as: the number corresponding to the type of the obstacle, or may be a color, for example: the color corresponding to the type of the obstacle is not limited to the implementation of the type data in this embodiment.
In the second case, the obstacle information includes an obstacle color. Accordingly, the desired representation includes representation in an obstacle color. At this time, determining the desired representation corresponding to the obstacle information includes: the obstacle color is determined as the desired representation.
For example, if the obstacle is red, then red is determined as the expected representation mode.
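For both cases above, a minimal sketch of mapping obstacle information to an expected representation mode might look as follows; the label letters and colour encoding are assumptions rather than the patent's notation.

```python
# Hypothetical mapping from obstacle information to the expected representation mode used to mark
# the non-passable area on the corrected map; the label letters and colour handling are assumptions.
TYPE_LABELS = {"table": "T", "chair": "C", "wall": "W"}

def expected_representation(obstacle_type=None, obstacle_color=None):
    """First case: type data (here a letter) indicating the obstacle type.
    Second case: the obstacle colour itself, e.g. an RGB triple."""
    representation = {}
    if obstacle_type is not None:
        representation["label"] = TYPE_LABELS.get(obstacle_type, "?")
    if obstacle_color is not None:
        representation["color"] = obstacle_color
    return representation

print(expected_representation(obstacle_type="table", obstacle_color=(255, 0, 0)))
# -> {'label': 'T', 'color': (255, 0, 0)}
```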
Optionally, correcting the area structure of the non-passable area based on the obstacle information includes: determining whether the non-passable area is a false area based on the obstacle information; and in the case that the non-passable area is not a false area, completing the area map to obtain the corrected area map.
A false area refers to a false area produced by identifying a virtual image formed on a high-reflectivity object. A high-reflectivity object is an object whose light reflectivity is greater than a preset reflectivity threshold, through which most light cannot pass. High-reflectivity objects include, but are not limited to, mirrors, electroplated articles, and the like. Since a false area does not really exist, the self-mobile device cannot enter it, and therefore the self-mobile device marks the false area as a non-passable area while building the map.
Optionally, the reflectivity threshold is greater than or equal to a light reflectivity corresponding to a minimum amount of light signal required to identify a false area from the mobile device; the amount of optical signal reflected by the object is positively correlated with the optical reflectivity of the object.
In the conventional area map construction process, since the self-mobile device cannot pass through a high light transmittance object, the self-mobile device determines the area where the high light transmittance object is located as a non-passable area. However, the high light transmittance object may lie between two areas, one of which is the area where the self-mobile device is located; in this case, if the other area that the high light transmittance object connects to is determined as a non-passable area, the area map will not conform to the actual environment of the target area.
Wherein, the object with high light transmittance, also called transparent object, means: an object having a light transmittance greater than a preset light transmittance threshold, through which a majority of light may pass. High light transmittance objects include, but are not limited to: organic glass, transparent glass reinforced plastic, etc.
Optionally, the light transmittance threshold is greater than or equal to a light transmittance corresponding to a minimum amount of light signal required to identify the object from the mobile device; the amount of light signal transmitted through the object is positively correlated with the light transmittance of the object.
Based on the above technical problem, in this embodiment, the self-mobile device needs to determine whether the non-passable area is a false area.
Optionally, the means for determining whether the non-passable area is a false area based on the obstacle information includes, but is not limited to, at least one of the following:
in a first way, a laser sensor and a collision sensor are installed on the self-mobile device; accordingly, the obstacle information includes a first obstacle detection result determined based on a first sensing result acquired by the laser sensor and a second obstacle detection result determined based on a second sensing result acquired by the collision sensor; at this time, determining whether the non-passable area is a false area based on the obstacle information includes: in a case where the first obstacle detection result indicates that no obstacle exists and the second obstacle detection result indicates that an obstacle exists, it is determined that the non-passable area is not a false area.
Since the sensing signal of the laser sensor can pass through the high-transmittance object, the high-transmittance object cannot be recognized based on the first obstacle recognition result, but since the high-transmittance object is actually present, the high-transmittance object can be recognized based on the second obstacle recognition result, and the area where the high-transmittance object is located is determined as the non-passable area. Thus, it may be determined whether the non-passable area is a false area by combining the first obstacle recognition result and the second obstacle recognition result.
Since the first sensing result can reflect the environmental information within the area when the non-passable area is not a false area, completing the area map includes: completing the area map by using the first sensing result.
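A sketch of this first way, under the assumption that the detection results have already been reduced to booleans and the map is a simple grid; the stated rule (laser sees nothing, collision sensor hits something) is from the text above, while the remaining branches and the grid layout are illustrative assumptions.

```python
def is_false_area(laser_detects_obstacle: bool, collision_detects_obstacle: bool) -> bool:
    """The stated rule: laser sees nothing but the bumper hits something -> a real (e.g. glass)
    boundary, so the blocked area is not a false area. Other combinations are treated as a
    false area here purely for illustration."""
    if not laser_detects_obstacle and collision_detects_obstacle:
        return False
    return True

def complete_area_map(area_map, laser_scan_cells, laser_detects_obstacle, collision_detects_obstacle):
    """If the non-passable area is not a false area, fill in the cells observed through the
    transparent boundary using the first (laser) sensing result."""
    if not is_false_area(laser_detects_obstacle, collision_detects_obstacle):
        for (r, c), occupied in laser_scan_cells:
            area_map[r][c] = 1 if occupied else 0
    return area_map

grid = [[-1] * 4 for _ in range(4)]
scan = [((0, 2), False), ((0, 3), False), ((1, 3), True)]    # cells seen through the glass
print(complete_area_map(grid, scan, laser_detects_obstacle=False, collision_detects_obstacle=True))
```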
In a second manner, determining whether the non-passable area is a false area based on the obstacle information includes: inputting the obstacle information into a pre-trained neural network model to obtain an obstacle recognition result; the obstacle recognition result is used for indicating whether the area where the obstacle is located is a false area or not.
The neural network model is obtained by training the neural network through second training data, and each group of second training data comprises a second sample image and obstacle type label data in the second sample image.
The training method of the neural network model is the same as the training method of the obstacle recognition model, and the embodiment will not be described here again.
Alternatively, the obstacle information may be an obstacle image, where the obstacle information may be collected by a vision sensor, or may also be outline information of the obstacle, where the obstacle information may be collected by a laser sensor, and the type of the obstacle information and the manner of collecting the obstacle information are not limited in this embodiment.
Optionally, the obstacle recognition result is an obstacle type, and the obstacle type includes: high light transmittance obstacles and other obstacles, wherein the other obstacles refer to obstacles other than the high light transmittance obstacle. In the case where the obstacle type indicates that the obstacle is a high light transmittance obstacle, determining that the non-passable area is not a false area; in the event that the obstacle type indicates that the obstacle is another obstacle, it is determined that the non-passable area is a false area.
Since the obstacle information can reflect the environmental information within the area when the non-passable area is not a false area, completing the area map includes: completing the area map by using the obstacle information.
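A short sketch of this second way; the class identifiers and the stand-in model are assumptions, and in practice the pre-trained neural network model described above would be used in place of the dummy.

```python
HIGH_TRANSMITTANCE, OTHER = 0, 1      # assumed class identifiers for the obstacle recognition result

class DummyModel:
    """Stand-in for the pre-trained neural network model; always predicts 'high transmittance'."""
    def predict(self, batch):
        return [HIGH_TRANSMITTANCE for _ in batch]

def non_passable_area_is_false(model, obstacle_features) -> bool:
    obstacle_type = model.predict([obstacle_features])[0]
    # A high-transmittance obstacle means the area behind it is real, so it is not a false area.
    return obstacle_type != HIGH_TRANSMITTANCE

print(non_passable_area_is_false(DummyModel(), [0.2, 0.8, 0.1]))   # -> False, so the map is completed
```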
Optionally, after the area structure of the non-passable area and/or the preset representation mode is corrected based on the obstacle information to obtain the corrected area map, the method further includes: performing straightening processing on the area edges in the corrected area map.
Here, straightening processing means straightening the curved edges of an area.
A corner refers to the intersection angle between adjacent walls. Because it is difficult for the self-mobile device to reach the corner positions of the target area while moving through it, the area edge information at corner positions in the corrected area map may be inaccurate, causing the area edges near corners to appear curved. Therefore, performing straightening processing on the area edges in the corrected area map includes: determining a corner position within the target area based on the obstacle information; and straightening the area edges based on the corner position.
Optionally, determining the corner position in the target area based on the obstacle information includes, but is not limited to, at least one of the following:
first case: the obstacle information includes an obstacle edge position. At this time, determining the corner position in the target area based on the obstacle information includes: determining obstacle edge profile information based on the obstacle edge location; the corner position in the target area is determined based on the obstacle edge profile information.
Optionally, determining the obstacle edge profile information based on the obstacle edge position includes: and connecting adjacent barrier edge positions to obtain barrier profile information.
Optionally, determining the corner position in the target area based on the obstacle edge profile information includes: determining a position, indicated by the obstacle edge profile information, whose corner angle is larger than a preset angle threshold as a corner position.
The preset angle threshold is stored in the self-mobile device in advance.
In one example, the preset angle threshold is 80 degrees.
Optionally, determining the corner angle in the obstacle edge profile includes: determining a candidate corner position in the obstacle edge profile; taking the candidate position as a starting point, intercepting edge contour line segments of a preset length in different directions; calculating the included angles between the different edge contour line segments; and determining the maximum included angle as the corner angle of that position.
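The corner-angle computation just described can be sketched as follows; the segment length, the angle convention (the angle between the two intercepted directions, so a straight wall gives roughly 180 degrees) and the contour format are assumptions rather than the patent's exact definitions.

```python
import numpy as np

def corner_angle(contour, index, segment_len=5):
    """Angle in degrees between the two contour directions meeting at contour[index].

    contour: ordered (x, y) edge positions; segment_len is the assumed 'preset length'
    (in samples) of the line segments intercepted on either side of the candidate position.
    """
    p = np.asarray(contour[index], dtype=float)
    before = np.asarray(contour[max(0, index - segment_len)], dtype=float)
    after = np.asarray(contour[min(len(contour) - 1, index + segment_len)], dtype=float)
    v1, v2 = before - p, after - p
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Example: an L-shaped wall contour; the bend at index 5 gives about 90 degrees,
# while a point on the straight part gives about 180 degrees.
contour = [(x, 0) for x in range(6)] + [(5, y) for y in range(1, 6)]
print(round(corner_angle(contour, 5, segment_len=2), 1))   # 90.0 at the bend
print(round(corner_angle(contour, 2, segment_len=2), 1))   # 180.0 on the straight wall
```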
Second case: the obstacle information includes an obstacle image. In this case, determining the corner position in the target area based on the obstacle information includes: determining the corner position in the target area based on the obstacle image.
Optionally, determining the corner position in the target area based on the obstacle image includes: inputting the obstacle image into a pre-trained corner recognition model to obtain a corner recognition result; the corner recognition result is used for indicating whether a corner exists in the obstacle image or not and the position of the corner.
The corner recognition model is obtained by training the neural network by using third training data, and each group of third training data comprises a third sample image and corner position label data in the third sample image.
The training process of the corner recognition model is the same as that of the aforementioned obstacle recognition model, and this embodiment is not repeated here.
Optionally, the method of straightening the area edges based on the corner position includes, but is not limited to, at least one of the following:
In the first way, the area edges in the target area are segmented based on the corner positions, and each segment of the area edge is straightened separately.
Optionally, straightening each segment of the area edge includes: performing straight-line fitting on each segment of the area edge separately.
Alternatively, the method of straight line fitting may be a least square method, or may be a gradient descent method, and the method of straight line fitting is not limited in this embodiment.
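A sketch of straightening one edge segment by straight-line fitting; here a total least-squares fit via SVD is used as one concrete stand-in for the least-squares or gradient-descent fitting mentioned above, and the sample edge points are invented for the example.

```python
import numpy as np

def straighten_edge_segment(points):
    """Fit a straight line to one edge segment and snap the points onto it.

    Uses a total least-squares fit (principal direction via SVD) as one concrete
    choice for the straight-line fitting step.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]                                 # direction of the best-fit line
    t = (pts - centroid) @ direction                  # position of each point along that line
    return centroid + np.outer(t, direction)          # straightened edge points

# Example: a slightly wavy wall edge becomes a straight segment.
noisy_edge = [(0, 0.0), (1, 0.1), (2, -0.05), (3, 0.08), (4, -0.02)]
print(np.round(straighten_edge_segment(noisy_edge), 2))
```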
In the second way, a wall position is determined based on the corner position, and the area edge corresponding to the wall position is straightened.
Optionally, straightening the area edge corresponding to the wall position includes: performing straight-line fitting on the edge line segments of the area corresponding to the wall position.
In the third way, the area edge at the corner position is straightened.
Optionally, straightening the area edge at the corner position includes: taking the corner position as a starting point, intercepting area edge line segments of preset lengths in different directions; and performing straight-line fitting on the intercepted area edge line segments.
In order to eliminate the influence of noise data and abrupt-change data in the area edges on the straightening processing, performing straightening processing on the area edges in the corrected area map includes: filtering the area edges in the corrected area map to remove abrupt positions from the area edge positions; and straightening the filtered area edge positions.
Optionally, filtering the area edges in the corrected area map includes: removing, from the area edge positions, any position whose difference from its adjacent position is larger than a preset difference value.
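A minimal sketch of this filtering step, assuming the area edge is an ordered list of positions; the preset difference value and the Manhattan-distance measure are assumptions.

```python
def filter_edge_positions(edge_points, max_jump=2.0):
    """Drop edge positions that jump too far from their predecessor before straightening.

    edge_points: ordered (x, y) area-edge positions; max_jump is an assumed
    'preset difference value' between adjacent positions.
    """
    if not edge_points:
        return []
    kept = [edge_points[0]]
    for x, y in edge_points[1:]:
        px, py = kept[-1]
        if abs(x - px) + abs(y - py) <= max_jump:     # Manhattan distance as the difference measure (assumption)
            kept.append((x, y))
    return kept

# Example: the outlier (10, 10) produced by a mapping error is filtered out.
print(filter_edge_positions([(0, 0), (1, 0), (10, 10), (2, 0), (3, 1)]))
# -> [(0, 0), (1, 0), (2, 0), (3, 1)]
```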
In other embodiments, the self-mobile device may also straighten the area edges in the corrected area map according to feature information in the corrected area map, for example: identifying the wall positions in the corrected area map, and straightening the area edges based on the wall positions. This embodiment does not limit the manner in which the area edges in the corrected area map are straightened.
In summary, in the map generation method of the self-mobile device provided by this embodiment, an area map of the target area is constructed while the self-mobile device moves within the target area, the area map comprises a non-passable area where an obstacle in the target area is located, and the non-passable area is marked using a preset representation mode; obstacle information obtained during the movement is acquired; the area structure of the non-passable area and/or the preset representation mode is corrected based on the obstacle information to obtain a corrected area map; this solves the problem that the area map does not match the actual environment of the target area well; because the obstacle information is not used when the area map is first constructed, correcting the area map with the obstacle information collected in the target area improves the degree to which the area map matches the actual environment of the target area.
In addition, the non-passable area corresponding to the obstacle is marked by using the expected representation method corresponding to the obstacle information, so that different representation modes corresponding to different obstacles in the corrected area map can be different, different obstacles can be more intuitively distinguished, the corrected area map can more intuitively reflect the actual environment of the target area, and the matching degree of the area map and the actual environment of the target area can be improved.
In addition, because the obstacle information comprises the types of the obstacles, the corresponding representation modes of the different types of the obstacles in the corrected area map are different, the different types of the obstacles can be more intuitively distinguished, and the problem that the different types of the obstacles cannot be distinguished through the area map when all the obstacles are marked by using the preset representation mode can be avoided.
In addition, because the obstacle information comprises the obstacle colors, the corresponding representation modes of the obstacles with different colors in the corrected regional map are different, the obstacles with different colors can be more intuitively distinguished, and the problem that the obstacles with different colors cannot be distinguished through the regional map when all the obstacles are marked by using the preset representation modes can be avoided, so that the matching degree of the regional map and the actual environment of the target region can be improved.
In addition, in the case that it is determined based on the obstacle information that the non-passable area is not a false area, the area map is completed to obtain the corrected area map, so an area that actually exists in the target area is not wrongly marked as non-passable, and the degree to which the area map matches the actual environment of the target area can be improved.
In addition, since the obstacle information includes the first obstacle detection result determined based on the first sensing result acquired by the laser sensor and the second obstacle detection result determined based on the second sensing result acquired by the collision sensor, it can be accurately determined whether the non-passable area is a false area based on the obstacle information, thereby determining whether to correct the area map, and thus, the degree of matching of the area map with the actual environment of the target area can be improved.
In addition, in the case that the non-passable area is not a false area, the area map is completed based on the first sensing result; because the laser sensing signal emitted by the laser sensor can pass through objects with high light transmittance, the environmental information of the area can be determined based on the first sensing result and used to complete the area map, so the degree to which the area map matches the actual environment of the target area can be improved.
In addition, because whether the area where the obstacle is located is a false area is determined based on the obstacle recognition result output by the pre-trained neural network model, and this determines whether the area map is corrected, the degree to which the area map matches the actual environment of the target area can be improved.
In addition, the area edges in the corrected area map are straightened, which removes errors introduced while constructing and correcting the area map, so the degree to which the area map matches the actual environment of the target area can be improved.
In addition, because the wall position can be determined based on the corner position, the area edges can be straightened accordingly; this improves the accuracy of the straightening processing and thus the degree to which the area map matches the actual environment of the target area.
This embodiment provides a map generation apparatus of a self-mobile device, as shown in Fig. 3. The apparatus is applied to the controller of the self-mobile device shown in Fig. 1 and includes at least the following modules: a map construction module 310, an information acquisition module 320 and a map correction module 330.
The map construction module 310 is configured to construct an area map of the target area while the self-mobile device moves within the target area, where the area map includes a non-passable area where an obstacle in the target area is located, and the non-passable area is marked using a preset representation mode;
An information acquisition module 320, configured to acquire obstacle information of an obstacle obtained during movement;
the map correction module 330 is configured to correct the area structure and/or the preset representation of the non-passable area based on the obstacle information, and obtain a corrected area map. For relevant details reference is made to the above-described method and apparatus embodiments.
It should be noted that: in the map generation apparatus for a self-mobile device provided in the foregoing embodiment, only the division of the foregoing functional modules is used as an example for illustration, and in practical application, the foregoing functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the map generation apparatus for a self-mobile device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the map generating apparatus of the self-mobile device and the map generating method embodiment of the self-mobile device provided in the foregoing embodiments belong to the same concept, and detailed implementation processes of the map generating apparatus and the map generating method embodiment of the self-mobile device are detailed in the method embodiment, and are not described herein again.
The present embodiment provides an electronic device, as shown in fig. 4. The electronic device may be the self-mobile device of fig. 1. The electronic device comprises at least a processor 401 and a memory 402.
Processor 401 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 401 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array) and PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor; the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 401 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed on a display screen. In some embodiments, the processor 401 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 402 is used to store at least one instruction for execution by processor 401 to implement the method of map generation from a mobile device provided by an embodiment of the method in the present application.
In some embodiments, the electronic device may further optionally include: a peripheral interface and at least one peripheral. The processor 401, memory 402, and peripheral interfaces may be connected by buses or signal lines. The individual peripheral devices may be connected to the peripheral device interface via buses, signal lines or circuit boards. Illustratively, peripheral devices include, but are not limited to: radio frequency circuitry, touch display screens, audio circuitry, and power supplies, among others.
Of course, the electronic device may also include fewer or more components, as the present embodiment is not limited in this regard.
Optionally, the present application further provides a self-mobile device, the self-mobile device including a processor and a memory; the memory stores a program that is loaded and executed by the processor to implement the map generation method of the self-mobile device of the above method embodiment.
Optionally, the present application further provides a computer readable storage medium, in which a program is stored, and the program is loaded and executed by a processor to implement the map generating method of the self-mobile device according to the above method embodiment.
The technical features of the above embodiments may be combined in any manner. For brevity, not all possible combinations of these technical features are described; however, any combination of these technical features that involves no contradiction should be considered to fall within the scope of this description.
The above embodiments represent only several implementations of the present application and are described in relative detail, but they are not to be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the application, and all of these fall within the protection scope of the application. Accordingly, the protection scope of the present application shall be determined by the appended claims.

Claims (11)

1. A map generation method for a self-mobile device, the method comprising:
constructing an area map of a target area in the process that the self-mobile device moves in the target area, wherein the area map comprises a non-passable area where an obstacle in the target area is located, and the non-passable area is marked by using a preset representation mode;
acquiring obstacle information of the obstacle obtained in the moving process;
and correcting the area structure of the non-passable area and/or the preset representation mode based on the obstacle information to obtain a corrected area map.
2. The method of claim 1, wherein the correcting the preset representation mode based on the obstacle information to obtain a corrected area map comprises:
determining an expected representation mode corresponding to the obstacle information;
and identifying the non-passable area at the corresponding position of the obstacle by using the expected representation mode, and obtaining the corrected area map.
3. The method of claim 2, wherein:
the obstacle information includes an obstacle type; accordingly, the desired representation is used to indicate the obstacle type;
and/or,
the obstacle information includes an obstacle color; accordingly, the desired representation includes representation in the obstacle color.
4. The method of claim 1, wherein the correcting the area structure of the non-passable area based on the obstacle information comprises:
determining whether the non-passable area is a false area based on the obstacle information;
and under the condition that the non-passable area is not a false area, complementing the area map to obtain the corrected area map.
5. The method of claim 4, wherein the self-moving device has a laser sensor and a collision sensor mounted thereon; accordingly, the obstacle information includes a first obstacle detection result determined based on a first sensing result acquired by the laser sensor, and a second obstacle detection result determined based on a second sensing result acquired by the collision sensor;
the determining whether the non-passable area is a false area based on the obstacle information comprises:
in a case where the first obstacle detection result indicates that no obstacle exists and the second obstacle detection result indicates that an obstacle exists, determining that the non-passable area is not the false area.
6. The method of claim 5, wherein the complementing the area map comprises:
and complementing the regional map by using the first sensing result.
7. The method of claim 4, wherein the determining whether the non-passable area is a false area based on the obstacle information comprises:
inputting the obstacle information into a pre-trained neural network model to obtain an obstacle recognition result; and the obstacle recognition result is used for indicating whether the area where the obstacle is located is the false area or not.
8. The method according to claim 1, wherein, after the area structure of the non-passable area and/or the preset representation mode is corrected based on the obstacle information to obtain the corrected area map, the method further comprises:
performing straightening processing on a region edge in the corrected area map.
9. The method of claim 8, wherein the performing straightening processing on the region edge in the corrected area map comprises:
determining a corner position within the target area based on the obstacle information;
and straightening the region edge based on the corner position.
10. A self-mobile device, characterized in that the self-mobile device comprises a processor and a memory; the memory stores a program that is loaded and executed by the processor to implement the map generation method for a self-mobile device as claimed in any one of claims 1 to 9.
11. A computer-readable storage medium, characterized in that the storage medium has stored therein a program which, when executed by a processor, is adapted to carry out the map generation method of a self-mobile device as claimed in any one of claims 1 to 9.
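Purely by way of illustration of the dual-sensor check recited in claims 4 to 6, the Python sketch below treats a non-passable area as confirmed (not a false area) only in the case those claims actually specify, namely when the laser sensor reports no obstacle while the collision sensor reports one; the remaining combinations are left undecided, since the claims do not fix them (they could, for example, be resolved by the neural-network recognition of claim 7). Every identifier is hypothetical and the data layout is an assumption of this sketch.

# Illustrative sketch of the false-area check of claims 4-6; all names are
# hypothetical and the data layout is an assumption.
from typing import Optional

def is_false_area(laser_detects_obstacle: bool,
                  collision_detects_obstacle: bool) -> Optional[bool]:
    # Claim 5: laser sensor (first result) sees nothing but the collision
    # sensor (second result) reports an obstacle -> the area is NOT false.
    if not laser_detects_obstacle and collision_detects_obstacle:
        return False
    # Other combinations are not fixed by claim 5 and are left undecided here.
    return None

def complete_area_map(area_map: dict, first_sensing_result: dict) -> dict:
    # Claim 6: when the area is confirmed not to be false, the area map is
    # complemented using the laser sensor's (first) sensing result.
    area_map.setdefault("non_passable", {}).update(first_sensing_result)
    return area_map

A caller would evaluate is_false_area for each non-passable region and, on a False result, pass the laser points of that region to complete_area_map; that calling convention is an assumption of this sketch, not something the claims prescribe.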
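Claims 8 and 9 recite straightening a region edge of the corrected map based on corner positions derived from the obstacle information. One possible reading, sketched below with entirely hypothetical names, projects each edge point onto the straight segment joining two corner positions; this particular geometry is an assumption, not something disclosed in the application.

# Illustrative edge-straightening sketch for claims 8-9; the projection-based
# approach and all identifiers are assumptions.

def straighten_edge(edge_points, corner_a, corner_b):
    """Project each edge point onto the segment corner_a -> corner_b."""
    ax, ay = corner_a
    bx, by = corner_b
    dx, dy = bx - ax, by - ay
    length_sq = dx * dx + dy * dy or 1  # guard against coincident corners
    straightened = []
    for px, py in edge_points:
        t = ((px - ax) * dx + (py - ay) * dy) / length_sq
        t = max(0.0, min(1.0, t))       # clamp to the segment
        straightened.append((ax + t * dx, ay + t * dy))
    return straightened

# Example: a slightly jagged edge between corners (0, 0) and (4, 0)
print(straighten_edge([(1, 0.3), (2, -0.2), (3, 0.1)], (0, 0), (4, 0)))
# -> [(1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]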
CN202210190305.8A 2022-02-28 2022-02-28 Map generation method for self-mobile device, self-mobile device and storage medium Pending CN116700236A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210190305.8A CN116700236A (en) 2022-02-28 2022-02-28 Map generation method for self-mobile device, self-mobile device and storage medium
PCT/CN2023/075812 WO2023160428A1 (en) 2022-02-28 2023-02-14 Map generation method for self-moving device, self-moving device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210190305.8A CN116700236A (en) 2022-02-28 2022-02-28 Map generation method for self-mobile device, self-mobile device and storage medium

Publications (1)

Publication Number Publication Date
CN116700236A (en) 2023-09-05

Family

ID=87764698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210190305.8A Pending CN116700236A (en) 2022-02-28 2022-02-28 Map generation method for self-mobile device, self-mobile device and storage medium

Country Status (2)

Country Link
CN (1) CN116700236A (en)
WO (1) WO2023160428A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112034830A (en) * 2019-06-03 2020-12-04 江苏美的清洁电器股份有限公司 Map information processing method and device and mobile device
KR102054301B1 (en) * 2019-08-09 2020-01-22 엘지전자 주식회사 Method of drawing map applied by object feature and robot implementing thereof
KR20220012001A (en) * 2020-07-22 2022-02-03 엘지전자 주식회사 Robot Cleaner and Controlling method thereof
CN113109821A (en) * 2021-04-28 2021-07-13 武汉理工大学 Mapping method, device and system based on ultrasonic radar and laser radar
CN113907663B (en) * 2021-09-22 2023-06-23 追觅创新科技(苏州)有限公司 Obstacle map construction method, cleaning robot, and storage medium
CN113848943B (en) * 2021-10-18 2023-08-08 追觅创新科技(苏州)有限公司 Grid map correction method and device, storage medium and electronic device

Also Published As

Publication number Publication date
WO2023160428A9 (en) 2023-09-28
WO2023160428A1 (en) 2023-08-31

Similar Documents

Publication Publication Date Title
US20180353042A1 (en) Cleaning robot and controlling method thereof
US10660496B2 (en) Cleaning robot and method of controlling the cleaning robot
KR102235270B1 (en) Moving Robot and controlling method
US20130107010A1 (en) Surface segmentation from rgb and depth images
WO2020248458A1 (en) Information processing method and apparatus, and storage medium
CN109381122A (en) The method for running the cleaning equipment advanced automatically
CN112347876B (en) Obstacle recognition method based on TOF camera and cleaning robot
CN112714684A (en) Cleaning robot and method for performing task thereof
CN111104933A (en) Map processing method, mobile robot, and computer-readable storage medium
US10871781B2 (en) Method for drawing map having feature of object applied thereto and robot implementing the same
CN110794831A (en) Method for controlling robot to work and robot
CN113001544A (en) Robot control method and device and robot
US20220334587A1 (en) Method for processing map of closed space, apparatus, and mobile device
CN114365974B (en) Indoor cleaning and partitioning method and device and floor sweeping robot
CN110315538B (en) Method and device for displaying barrier on electronic map and robot
CN113848944A (en) Map construction method and device, robot and storage medium
KR20230134109A (en) Cleaning robot and Method of performing task thereof
CN116700236A (en) Map generation method for self-mobile device, self-mobile device and storage medium
CN111830966A (en) Corner recognition and cleaning method, device and storage medium
WO2023045749A1 (en) Charging device, self-moving device, charging method and system, and storage medium
CN116258831A (en) Learning-based systems and methods for estimating semantic graphs from 2D LiDAR scans
CN114489058A (en) Sweeping robot, path planning method and device thereof and storage medium
CN116087986A (en) Self-mobile device, obstacle detection method for self-mobile device, and storage medium
CN113516715A (en) Target area inputting method and device, storage medium, chip and robot
RU2658092C2 (en) Method and navigation system of the mobile object using three-dimensional sensors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination