CN114295138B - Robot, map extension method, apparatus, and readable storage medium - Google Patents

Robot, map extension method, apparatus, and readable storage medium

Info

Publication number: CN114295138B
Application number: CN202111679399.7A
Authority: CN (China)
Other versions: CN114295138A (Chinese)
Inventors: 何科君, 陈美文, 武金龙, 刘运航, 周阳
Current Assignee: Shenzhen Pudu Technology Co Ltd
Legal status: Active (granted)
Abstract

A map extension method, comprising: acquiring multi-frame image data acquired by an image acquisition device for an identification code and multi-frame sensing data acquired by a sensing device when the robot moves; acquiring a marking map of the identification code; obtaining a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map; based on multi-frame sensing data, a plurality of pose nodes and a plurality of local maps, obtaining a first relative pose relation of every two adjacent pose nodes and a second relative pose relation of each pose node and the plurality of local maps; correlating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observation values of the pose nodes and the identification codes; optimizing a plurality of pose nodes and a plurality of local maps by taking the first relative pose relation, the second relative pose relation and the observed value as constraints; and constructing a global map by using the optimized pose nodes, the local maps and the multi-frame sensing data.

Description

Robot, map extension method, apparatus, and readable storage medium
Technical Field
The present invention relates to the field of robots, and in particular, to a robot, a map extension method, a map extension apparatus, and a readable storage medium.
Background
At present, indoor delivery robots generally adopt either laser positioning or visual marking positioning, and different robots may employ different positioning means. If robots using visual marking positioning are already deployed on a site and a new robot using laser positioning is to be added, the coordinate systems of the maps used by the robots must be consistent to ensure that robots of both positioning modes can operate normally without colliding with each other. If a completely new fused map were built instead, all service points that have already been set, such as dining tables, kitchen pick-up points and charging points, would need to be reset, which increases deployment time.
Disclosure of Invention
The application provides a robot, a map extension method, a map extension device and a readable storage medium, which can keep the map coordinate systems of laser-positioned robots and visually marking-positioned robots consistent while improving map construction efficiency.
In one aspect, the present application provides a robot comprising:
a memory, a processor, a sensing device, and an image acquisition device;
The memory stores executable program code;
the processor coupled to the memory is configured to invoke the executable program code stored in the memory to perform the steps of:
acquiring multi-frame image data acquired by an image acquisition device for an identification code and multi-frame sensing data acquired by a sensing device when the robot moves;
acquiring a marking map of the identification code;
obtaining a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map;
based on the multi-frame sensing data, the plurality of pose nodes and the plurality of local maps, obtaining a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the plurality of local maps;
correlating the multi-frame image data with corresponding pose nodes to obtain a plurality of observation values of the pose nodes and the identification codes;
optimizing the plurality of pose nodes and the plurality of local maps by taking the first relative pose relationship, the second relative pose relationship and the observed value as constraints;
And constructing a global map by using the optimized pose nodes, the local maps and the multi-frame sensing data.
In another aspect, the present application provides a map construction apparatus applied to a robot including a sensing device and an image acquisition device, the apparatus including:
the first acquisition module is used for acquiring multi-frame sensing data acquired by the sensing device when the robot moves;
the second acquisition module is used for acquiring multi-frame image data acquired by the image acquisition device on the identification code;
the third acquisition module is used for acquiring the marking map of the identification code;
the fourth acquisition module is used for obtaining a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map;
a fifth obtaining module, configured to obtain a first relative pose relationship of each two adjacent pose nodes and a second relative pose relationship of each pose node and the plurality of local maps based on the multi-frame sensing data, the plurality of pose nodes, and the plurality of local maps;
the association module is used for associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observation values of the pose nodes and the identification codes;
The optimization module is used for optimizing the plurality of pose nodes and the plurality of local maps by taking the first relative pose relation, the second relative pose relation and the observed value as constraints;
and the map construction module is used for constructing a positioning map by using the optimized pose nodes, the local maps and the multi-frame sensing data.
In a third aspect, the present application provides a map extension method, the method including:
acquiring multi-frame image data of an identification code acquired by an image acquisition device and multi-frame sensing data acquired by a sensing device when the robot moves;
acquiring a marking map of the identification code;
obtaining a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map;
based on the multi-frame sensing data, the plurality of pose nodes and the plurality of local maps, obtaining a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the plurality of local maps;
correlating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observation values of the pose nodes and the identification codes;
Optimizing the plurality of pose nodes and the plurality of local maps by taking the first relative pose relationship, the second relative pose relationship and the observed value as constraints;
and constructing a global map by using the optimized pose nodes, the local maps and the multi-frame sensing data.
In a fourth aspect, the present application provides a readable storage medium having stored thereon a computer program which, when executed by a processor, implements the map extension method according to the third aspect.
According to the technical scheme provided by the application, the laser-visual marking fused map is extended directly from the prior visual marking map and adopts the coordinate system of the prior visual marking map, so the positions of existing service points remain unchanged and only a small number of new service points need to be added, which saves deployment time and improves map construction efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a robot provided in an embodiment of the present application;
fig. 2 is a flowchart of a map extension method provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of a map building device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an apparatus according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
In this specification, adjectives such as first and second may be used solely to distinguish one element or action from another element or action without necessarily requiring or implying any actual such relationship or order. Where the environment permits, reference to an element or component or step (etc.) should not be construed as limited to only one of the element, component, or step, but may be one or more of the element, component, or step, etc.
In the present specification, for convenience of description, the dimensions of the various parts shown in the drawings are not drawn in actual scale.
Referring to fig. 1, a schematic structural diagram of a robot according to an embodiment of the present application is provided. For convenience of explanation, only portions relevant to the embodiments of the present application are shown. The robot may include:
the processor 20 is an operation and control core of the robot, and is a final execution unit for information processing and program running. Memory 10, such as a hard drive memory, a non-volatile memory (e.g., flash memory or other electronically programmable limited delete memory used to form a solid state drive, etc.), a volatile memory (e.g., static or dynamic random access memory, etc.), and the like, embodiments of the present application are not limited.
The memory 10 has stored therein executable program code; the processor 20 coupled to the memory 10 invokes said executable program code stored in the memory 10 to perform the map extension method as follows:
acquiring multi-frame image data acquired by an image acquisition device for an identification code and multi-frame sensing data acquired by a sensing device when the robot moves; acquiring a marking map of the identification code; obtaining a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map; based on multi-frame sensing data, a plurality of pose nodes and a plurality of local maps, obtaining a first relative pose relation of every two adjacent pose nodes and a second relative pose relation of each pose node and the plurality of local maps; correlating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observation values of the pose nodes and the identification codes; optimizing a plurality of pose nodes and a plurality of local maps by taking the first relative pose relation, the second relative pose relation and the observed value as constraints; and constructing a global map by using the optimized pose nodes, the local maps and the multi-frame sensing data.
Referring to fig. 2, a map extension method provided in the embodiment of the present application mainly includes steps S201 to S207, which are described as follows:
step S201: and acquiring multi-frame image data acquired by the identification code through the image acquisition device and multi-frame sensing data acquired by the sensing device when the robot moves.
In this embodiment of the present application, the acquiring of the multi-frame image data acquired by the image acquisition device for the identification code and the multi-frame sensing data acquired by the sensing device when the robot moves along the mapping path, where the mapping path is a path of the robot when the robot moves for mapping purposes. In some embodiments the sensing device comprises at least a two-dimensional lidar, in other embodiments a wheel odometer or the like. In this embodiment, the sensing device includes a two-dimensional laser radar and a wheel type odometer, and therefore, the multi-frame sensing data collected by the sensing device includes laser point cloud data and wheel type odometer data, that is, information such as position and attitude.
In the present embodiment, the identification code may be an identification affixed to an environment, such as a wall, ceiling, or the like. The image acquisition devices carried by the robot can be monocular, binocular or three-eye cameras, and multi-frame image data can be acquired through shooting the identification codes by the monocular, binocular or three-eye cameras and other image acquisition devices.
Step S202: and acquiring a marking map of the identification code.
In the embodiment of the present application, the marking map is a visual marking map. The marking map is a prior map, that is, known information. In some embodiments, when a robot using laser positioning is newly added to a site on which robots positioned by visual markings are already deployed, the marking map is the positioning map of those deployed robots; in other embodiments, when the same robot can use both visual marking and laser positioning, the marking map may also be a previously established visual marking map. The marking map records the ID and the coordinate information ^W T_mi of each identification code, where the coordinate system of the marking map is defined herein as the world coordinate system, mi denotes the i-th identification code, and W denotes the world coordinate system.
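By way of illustration only (this sketch is not part of the patent text), such a marking map can be modelled as a lookup table from identification-code ID to a world-frame pose ^W T_mi; all names and values below are assumptions.

```python
# Hypothetical model of the marking map: each identification code ID maps to its
# pose ^W T_mi in the world coordinate system W. For a planar robot an SE(2)
# pose is stored as an (x, y, theta) tuple (metres, metres, radians).
marking_map = {
    17: (2.50, 0.80, 1.571),  # code 17 affixed to a wall
    23: (5.10, 3.20, 0.000),  # code 23 on the ceiling
}

def lookup_marker(marker_id):
    """Return ^W T_mi for a decoded identification code, or None if unknown."""
    return marking_map.get(marker_id)
```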
Step S203: and obtaining a plurality of pose nodes and a plurality of local maps of the robot through the multi-frame image data, the multi-frame sensing data and the marking map.
In the embodiment of the application, as described above, the sensing device carried by the robot includes a wheel odometer and a two-dimensional laser radar; from the multi-frame sensing data acquired by these two sensors, a plurality of relatively accurate pose nodes of the robot can be obtained. It should be noted that the plurality of pose nodes of the robot include history pose nodes and a latest pose node. Because a pose node includes a time variable, that is, pose nodes are taken at different times, the pose node obtained at the current time is the latest (or current) pose node, and pose nodes obtained before the current time are history pose nodes. In the present embodiment, the local map is a laser grid map. Furthermore, "local" is relative to the global map, meaning that the laser grid map is limited to a certain range. As an embodiment of the present application, obtaining the plurality of pose nodes and the plurality of local maps of the robot from the multi-frame image data, the multi-frame sensing data and the marking map may be achieved through the following steps S2031 to S2034:
Step S2031: an initial pose node and an initial local map are created based on the image data and the marking map. Step S2031 specifically includes the following steps:
Step S20311: obtaining the ID of the identification code and the pose ^r T_mi of the robot relative to the identification code from the image data.
Step S20312: acquiring the coordinate information ^W T_mi of the identification code according to the ID of the identification code and the marking map.
Step S20313: calculating the robot pose ^W T_ro from the pose ^r T_mi of the robot relative to the identification code and the coordinate information ^W T_mi of the identification code. Here, ^W T_ro = ^W T_mi * ^r T_mi, where i refers to the same identification code throughout.
Step S20314: creating a pose node and a local map based on the robot pose, and inserting the corresponding sensing data into the local map.
The initial pose node is obtained from the robot pose ^W T_ro as (t_0, x_o, y_o, θ_o), where x_o and y_o are the x and y values of ^W T_ro and θ_o is its orientation angle; accordingly, the coordinates of the local map are (x_o, y_o, 0).
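A minimal sketch of steps S20313 and S20314, assuming planar SE(2) poses stored as (x, y, theta) and applying the composition formula exactly as stated above; the helper names are hypothetical.

```python
import math

def se2_compose(a, b):
    """SE(2) composition a ⊕ b: rotate b's translation into a's frame, add the angles."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

# Step S20313 as stated: ^W T_ro = ^W T_mi * ^r T_mi, with ^W T_mi read from the
# marking map and ^r T_mi decoded from an image of the identification code.
def initial_robot_pose(w_T_mi, r_T_mi):
    return se2_compose(w_T_mi, r_T_mi)

# Step S20314: the initial pose node is (t_0, x_o, y_o, theta_o), and the
# initial local map is anchored at (x_o, y_o, 0).
def initial_node_and_local_map(t0, w_T_ro):
    x_o, y_o, theta_o = w_T_ro
    return (t0, x_o, y_o, theta_o), (x_o, y_o, 0.0)
```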
Step S2032: a new node is created based on the odometry data and/or the laser point cloud data.
Generally, creating more nodes improves the accuracy of the subsequently optimized graph. However, creating more nodes costs computation or sacrifices real-time performance, so a selection must be made when creating new nodes. Specifically, creating a new node based on the odometer data and/or the laser point cloud data may be: if the odometer data and/or the laser point cloud data indicate that the translation distance of the robot compared with the previous pose node is greater than or equal to a first translation threshold and/or the rotation distance of the robot is greater than or equal to a first rotation threshold, a new node is created.
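An illustrative version of this gating test; the threshold values are assumptions chosen for the sketch, not figures from the patent.

```python
# Node-creation test of step S2032: only spawn a new pose node when the robot
# has moved or turned enough since the last node. Thresholds are illustrative.
FIRST_TRANSLATION_THRESHOLD = 0.20  # metres
FIRST_ROTATION_THRESHOLD = 0.35     # radians

def should_create_node(translation_since_last_node, rotation_since_last_node):
    return (translation_since_last_node >= FIRST_TRANSLATION_THRESHOLD
            or rotation_since_last_node >= FIRST_ROTATION_THRESHOLD)
```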
Step S2033: acquiring a pose predicted value of the new node based on the odometer data corresponding to the new node and the last pose node, and correcting the pose predicted value by using the point cloud data corresponding to the new node and the local map to obtain the latest pose node.
In the embodiment of the present application, the last pose node is the pose node obtained according to steps S2031 to S2033 in the previous cycle.
Because the pose predicted value of the new node is obtained from the odometer data corresponding to the new node and the last pose node, it carries a certain error or uncertainty; the pose predicted value therefore needs to be corrected by using the laser point cloud data corresponding to the new node and the local map to obtain the latest pose node.
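A sketch of this predict-then-correct step, reusing the se2_compose helper from the earlier sketch. The scan matcher is deliberately abstract: any scan-to-map matcher could stand in for the hypothetical match_scan_to_map callable.

```python
# Step S2033 sketch: dead-reckon a prediction from the last node and the
# odometry increment, then refine it against the local map.
def create_latest_node(last_node_pose, odom_delta, scan, local_map, match_scan_to_map):
    predicted = se2_compose(last_node_pose, odom_delta)  # pose predicted value
    # Scan matching corrects the odometric drift; the matcher returns the pose
    # that best aligns `scan` with `local_map`, seeded with the prediction.
    corrected = match_scan_to_map(scan, local_map, initial_guess=predicted)
    return corrected  # this becomes the latest pose node's pose value
```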
Step S2034: a new local map is created based on the latest pose node and the last local map.
Specifically, creating a new local map based on the latest pose node and the last local map may be: if the translation distance between the latest pose node and the last local map is greater than or equal to a second translation threshold, a new local map is created, and the pose value of the latest pose node is used as the pose value of the new local map. That is, the pose value of the first pose node in a local map serves as the pose value of that local map. For the initial local probability grid map, the first pose node is the initial pose node, so the pose value of that local map is the same as the pose value of the initial pose node.
After step S2034, execution returns to the step of creating a new node and/or a new local map based on the odometer data and/or the laser point cloud data, so as to obtain a plurality of pose nodes and a plurality of local maps; that is, steps S2032 to S2034 are performed repeatedly (see the sketch below).
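A compact sketch of the submap-creation test inside the S2032-S2034 loop, again with an assumed threshold value.

```python
import math

# Step S2034 sketch: start a new local map once the latest node has drifted far
# enough from the current local map's anchor pose. Threshold is illustrative.
SECOND_TRANSLATION_THRESHOLD = 3.0  # metres

def should_create_local_map(latest_node_pose, local_map_pose):
    dx = latest_node_pose[0] - local_map_pose[0]
    dy = latest_node_pose[1] - local_map_pose[1]
    return math.hypot(dx, dy) >= SECOND_TRANSLATION_THRESHOLD
```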
Step S204: based on multi-frame sensing data, a plurality of pose nodes and a plurality of local maps, a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the plurality of local maps are obtained.
It should be noted that the second relative pose relationship in step S204 includes a map relative relationship and a closed-loop constraint relationship. Specifically, step S204 may be implemented through steps S2041 to S2043, described as follows:
step S2041: a first relative pose relationship of every two adjacent pose nodes is obtained based on the plurality of pose nodes.
Specifically, for every two adjacent pose nodes among the plurality of pose nodes, the first relative pose relationship of the pair is calculated; a plurality of first relative pose relationships may thus be obtained. Optionally, the first relative pose relationship of two adjacent pose nodes specifically refers to the relative relationship between the pose values of the two adjacent pose nodes.
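As an illustration, the relative pose of two adjacent nodes can be computed as the SE(2) transform between their pose values, reusing se2_compose from the earlier sketch:

```python
import math

def se2_inverse(a):
    """Inverse of a planar pose (x, y, theta)."""
    ax, ay, at = a
    c, s = math.cos(at), math.sin(at)
    return (-c * ax - s * ay, s * ax - c * ay, -at)

# Step S2041 sketch: the first relative pose relationship between adjacent pose
# nodes i and j is the transform carrying node i's pose to node j's pose.
def relative_pose(pose_i, pose_j):
    return se2_compose(se2_inverse(pose_i), pose_j)
```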
Step S2042: and acquiring a map relative relation based on each pose node and the corresponding local map.
Specifically, the laser point cloud data associated with the latest pose node of the robot is matched against the corresponding local map. If the laser point cloud data associated with the latest pose node is successfully matched with the corresponding local map, the relative pose relationship R_ij between the latest pose node and the successfully matched position in the corresponding local map is acquired as the map relative relationship of that pose node and the corresponding local map. In the process of continuously creating poses, each latest pose node is matched with the local map in which it lies (the corresponding local map), so the map relative relationship between each pose node and its local map is obtained, and a plurality of map relative relationships can be obtained. Similarly, the map relative relationship of each pose node and the local map is also a relative relationship between pose values.
Step S2043: through loop detection, a closed-loop constraint relationship C_ij is obtained based on the laser point cloud data corresponding to the latest pose node and other local maps, where the other local maps are local maps that do not correspond to the latest pose node. In the continuous pose creation process, closed-loop detection against the other local maps is carried out for each latest pose node to determine whether a closed-loop constraint relationship exists. If so, the closed-loop constraint relationship is acquired; if not, it is considered that no closed-loop constraint relationship exists.
Loop detection, also referred to as closed-loop detection, is known in the art and is not described here. Once a closed loop is detected, the closed-loop constraint relationship between the laser point cloud data corresponding to the latest pose node and the other local map can be obtained, namely the edge between the two nodes with a loop relationship in the graph optimization.
Step S205: and correlating the multi-frame image data with the corresponding pose nodes to obtain an observation value.
In the embodiment of the application, the identification code may be an identification mark affixed in the environment. By acquiring the identification codes in the environment, the image acquisition device carried by the robot can determine their poses, which constitute the robot's observation values of the identification codes at its pose nodes. Further, the multi-frame image data may be associated with the pose nodes of the robot. As for the specific association method, a time-based association may be used, in which the robot's observations of the same identification code in the environment are attached to different pose nodes according to acquisition time; alternatively, an ID-based association may be used, in which observations are grouped by the ID of the identification code they contain.
Optionally, multiple identification codes may be observed at the same pose node, and likewise the same identification code may be observed from multiple pose nodes; therefore, a plurality of observation values of the pose nodes and the identification codes can be obtained by associating the multi-frame image data acquired by the image acquisition device with the corresponding pose nodes.
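A sketch of a time-based association, under the assumption that each image frame carries a timestamp and is attached to the pose node nearest in time; the record layout is hypothetical.

```python
# Step S205 sketch: attach each decoded marker observation to the pose node
# closest in time. One node may observe several codes; one code may be
# observed from several nodes.
def associate_observations(image_frames, pose_nodes):
    """image_frames: list of (timestamp, marker_id, r_T_mi) records.
    pose_nodes: list of (t, x, y, theta) tuples ordered by time."""
    observations = []  # (node_index, marker_id, r_T_mi) triples
    for stamp, marker_id, r_T_mi in image_frames:
        node_index = min(range(len(pose_nodes)),
                         key=lambda k: abs(pose_nodes[k][0] - stamp))
        observations.append((node_index, marker_id, r_T_mi))
    return observations
```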
Step S206: and optimizing the plurality of pose nodes and the plurality of local maps by taking the first relative pose relation, the second relative pose relation and the observed value as constraints.
Specifically, the implementation of step S206 may be: taking the first relative pose relationships, the second relative pose relationships and the observation values as constraint conditions, and taking the plurality of pose nodes and the plurality of local maps as optimization targets, a nonlinear least squares problem is constructed; the nonlinear least squares problem is then solved with a graph optimization solving algorithm, and its optimal solution is taken as the optimized plurality of pose nodes and plurality of local maps. Here, the graph optimization solving algorithm may be the Levenberg-Marquardt algorithm, the Gauss-Newton method, or the like, which is not limited in this application. In an alternative embodiment, several or all of the first relative pose relationships, several or all of the second relative pose relationships, and several or all of the observation values are taken as constraints, and the optimization calculation is performed over them uniformly, so as to obtain the optimized plurality of pose nodes and plurality of local probability grid maps.
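A deliberately reduced sketch of such a nonlinear least-squares problem: it keeps only the first relative pose relationships as constraints plus a gauge prior on the first node, and lets scipy's Levenberg-Marquardt solver stand in for a dedicated pose-graph solver (Ceres, g2o, GTSAM). It reuses relative_pose from the earlier sketch; in the patented method the second relative pose relationships and the marker observations would enter as additional residual terms.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(flat_poses, odometry_constraints, anchor_pose):
    poses = flat_poses.reshape(-1, 3)  # one (x, y, theta) row per pose node
    errs = list(np.subtract(poses[0], anchor_pose))  # gauge prior fixing node 0
    for i, j, measured in odometry_constraints:      # first relative pose relations
        predicted = relative_pose(tuple(poses[i]), tuple(poses[j]))
        # NOTE: a real implementation must wrap the angle residual to (-pi, pi].
        errs.extend(np.subtract(predicted, measured))
    return np.asarray(errs)

def optimize_pose_nodes(initial_poses, odometry_constraints):
    x0 = np.asarray(initial_poses, dtype=float).ravel()
    result = least_squares(residuals, x0, method="lm",
                           args=(odometry_constraints, initial_poses[0]))
    return result.x.reshape(-1, 3)
```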
Step S207: and constructing a positioning map by using the optimized pose nodes, the local maps and the multi-frame sensing data.
Specifically, the implementation of step S207 may be: building the global map from the optimized plurality of pose nodes, the associated multi-frame sensing data, and the plurality of local maps. When the map construction work is finished, the output grid map has the same coordinate system as the marking map.
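For illustration, once the submap poses are optimized the global grid can be assembled by projecting every local map into the world frame; the dict-based grid model below is an assumption, reusing se2_compose from the earlier sketch.

```python
# Step S207 sketch: render each optimized local map into one global occupancy
# grid that inherits the marking map's world coordinate system.
def build_global_map(optimized_local_maps, resolution=0.05):
    """optimized_local_maps: iterable of (submap_pose, cells) pairs, where cells
    maps (x, y) cell centres in the submap frame to occupancy probabilities."""
    global_grid = {}
    for submap_pose, cells in optimized_local_maps:
        for (cx, cy), occupancy in cells.items():
            wx, wy, _ = se2_compose(submap_pose, (cx, cy, 0.0))
            key = (round(wx / resolution), round(wy / resolution))
            # Keep the strongest occupancy evidence where submaps overlap.
            global_grid[key] = max(occupancy, global_grid.get(key, 0.0))
    return global_grid
```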
In the embodiment of the application, the existing marking map is used to initialize the position of the laser map, providing an initial laser pose with a consistent coordinate system; the marker poses are used only as optimization constraints, not as optimization targets, which ensures that the marker poses in the global map are the same as those in the prior marking map. According to the technical scheme, the laser-visual marking fused map is extended directly from the prior marking map and adopts the coordinate system of the prior visual marking map, so the positions of existing service points remain unchanged, only a small number of new service points need to be added, deployment time is saved, and map construction efficiency is improved.
Referring to fig. 3, a map building apparatus provided in the embodiment of the present application may be a central processing unit of a robot or a functional module thereof, where the apparatus may include a first obtaining module 301, a second obtaining module 302, a third obtaining module 303, a fourth obtaining module 304, a fifth obtaining module 305, an associating module 306, an optimizing module 307, and a map building module 308, which are described in detail below:
A first acquiring module 301, configured to acquire multi-frame sensing data acquired by the sensing device when the robot moves;
a second acquiring module 302, configured to acquire multi-frame image data acquired by the image acquisition device on the identification code;
a third obtaining module 303, configured to obtain the marking map of the identification code;
a fourth obtaining module 304, configured to obtain a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data, and the marking map;
a fifth obtaining module 305, configured to obtain, based on the multi-frame sensing data, the plurality of pose nodes, and the plurality of local maps, a first relative pose relationship of each two adjacent pose nodes and a second relative pose relationship of each pose node and the plurality of local maps;
the association module 306 is configured to associate the multi-frame image data with a corresponding pose node to obtain multiple observations of the pose node and the identification code;
an optimization module 307, configured to optimize the plurality of pose nodes and the plurality of local maps with the first relative pose relationship, the second relative pose relationship, and the observed value as constraints;
the map construction module 308 is configured to construct a positioning map using the optimized plurality of pose nodes, the plurality of local maps, and the multi-frame sensing data.
In one embodiment of the present application, the fourth obtaining module 304 illustrated in fig. 3 may include an initializing unit, a first creating unit, a predicted value obtaining unit, a modifying unit, and a second creating unit, where:
the initialization unit is used for creating an initial pose node and an initial local map based on the multi-frame image data and the marked map;
a first creation unit for creating a new node based on the odometer data and/or the laser point cloud data;
a predicted value obtaining unit for obtaining a predicted value of the pose of the new node based on the odometer data corresponding to the new node and the previous pose node;
the correcting unit is used for correcting the pose predicted value by using the point cloud data corresponding to the new node and the local map to obtain the latest pose node;
and the second creating unit is used for creating a new local map based on the latest pose node and the last local map.
Optionally, in an embodiment of the present application, the initializing unit includes: the system comprises an identification code analysis unit, a coordinate information acquisition unit, a robot pose calculation unit, an initial node creation unit and an initial local map creation unit, wherein:
The identification code analysis unit is used for acquiring the ID of the identification code and the pose of the robot relative to the identification code based on multi-frame image data;
a coordinate information acquisition unit for acquiring coordinate information of the identification code based on the ID of the identification code in combination with the marking map;
a robot pose calculating unit for calculating the robot pose based on the pose of the robot relative to the identification code and the coordinate information of the identification code;
an initial node creation unit for creating a pose node based on the pose of the robot;
an initial local map creation unit for creating a local map based on the robot pose.
Alternatively, in another embodiment of the present application, the first creating unit may include a new node creating unit, and the second creating unit may include a new map creating unit, wherein:
the new node creation unit is used for creating a new node if the odometer data and/or the laser point cloud data indicate that the translation distance of the robot compared with the previous pose node is greater than or equal to a first translation threshold value and/or the rotation distance of the robot is greater than or equal to a first rotation threshold value;
and the new map creation unit is used for creating a new local map if the translation distance between the latest pose node and the last local map is greater than or equal to a second translation threshold value, and taking the latest pose node as the pose of the local map.
Optionally, in another embodiment of the present application, the fifth obtaining module 305 illustrated in fig. 3 may include a first relative pose relationship obtaining unit, a map relative relationship obtaining unit, and a closed-loop constraint relationship obtaining unit, where:
a first relative pose relationship obtaining unit for obtaining a first relative pose relationship of every two adjacent pose nodes based on a plurality of pose nodes;
the map relative relation acquisition unit is used for acquiring a map relative relation based on each pose node and the corresponding local map;
the closed-loop constraint relation acquisition unit is used for acquiring a closed-loop constraint relation based on laser point cloud data corresponding to the latest pose node and other local maps through loop detection, wherein the other local maps are local maps not corresponding to the latest pose node.
Alternatively, in another embodiment of the present application, the optimization module 307 illustrated in fig. 3 may include a problem building unit and a problem solving unit, where:
the problem construction unit is used for constructing a nonlinear least square problem by taking the first relative pose relation, the second relative pose relation and the observed value as constraint conditions and taking a plurality of pose nodes and a plurality of local maps as optimization targets;
And the problem solving unit is used for solving the nonlinear least square problem by using a graph optimization solving algorithm, and taking the optimal solution of the nonlinear least square problem as a plurality of optimized pose nodes and a plurality of local maps.
Optionally, in another embodiment of the present application, the map construction module 308 illustrated in fig. 3 may include a first map generating unit, where:
the first map generating unit is used for building a global map according to the optimized plurality of pose nodes, the associated multi-frame sensing data, and the plurality of local maps.
As can be seen from the apparatus illustrated in fig. 3, in the embodiment of the present application, an existing marking map is used to initialize the position of the laser map and provide an initial laser pose with a consistent coordinate system; the marker poses are used only as optimization constraints, not as optimization targets, which ensures that the marker poses in the global map are the same as those in the prior marking map. According to the technical scheme, the laser-visual marking fused map is extended directly from the prior marking map and adopts the coordinate system of the prior visual marking map, so the positions of existing service points remain unchanged, only a small number of new service points need to be added, deployment time is saved, and map construction efficiency is improved.
Fig. 4 is a schematic structural diagram of an apparatus according to an embodiment of the present application. As shown in fig. 4, the apparatus 4 of this embodiment may be a robot or a module therein, mainly comprising: a processor 40, a memory 41 and a computer program 42 stored in the memory 41 and executable on the processor 40, for example a program of a map extension method. The steps in the map extension method embodiment described above, such as steps S201 to S207 shown in fig. 2, are implemented when the processor 40 executes the computer program 42. Alternatively, the processor 40 may implement the functions of the modules/units in the above-described apparatus embodiments when executing the computer program 42, for example, the functions of the first acquisition module 301, the second acquisition module 302, the third acquisition module 303, the fourth acquisition module 304, the fifth acquisition module 305, the association module 306, the optimization module 307, and the map construction module 308 shown in fig. 3.
Illustratively, the computer program 42 of the map extension method mainly includes: acquiring multi-frame image data of an identification code acquired by an image acquisition device and multi-frame sensing data acquired by a sensing device when the robot moves; acquiring a marking map of the identification code; obtaining a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map; based on the multi-frame sensing data, the plurality of pose nodes and the plurality of local maps, obtaining a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the plurality of local maps; correlating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observation values of the pose nodes and the identification codes; optimizing the plurality of pose nodes and the plurality of local maps by taking the first relative pose relationship, the second relative pose relationship and the observed value as constraints; and constructing a global map by using the optimized pose nodes, the local maps and the multi-frame sensing data. The computer program 42 may be divided into one or more modules/units, which are stored in the memory 41 and executed by the processor 40 to complete the present application. One or more of the modules/units may be a series of computer program instruction segments capable of performing a specific function, which instruction segments are used to describe the execution of the computer program 42 in the device 4. For example, the computer program 42 may be divided into the functions of the first acquisition module 301, the second acquisition module 302, the third acquisition module 303, the fourth acquisition module 304, the fifth acquisition module 305, the association module 306, the optimization module 307, and the map construction module 308 (modules in the virtual device), each of which has the following specific functions: a first acquiring module 301, configured to acquire multi-frame sensing data acquired by the sensing device when the robot moves; a second acquiring module 302, configured to acquire multi-frame image data acquired by the image acquisition device on the identification code; a third obtaining module 303, configured to obtain the marking map of the identification code; a fourth obtaining module 304, configured to obtain a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data, and the marking map; a fifth obtaining module 305, configured to obtain, based on the multi-frame sensing data, the plurality of pose nodes, and the plurality of local maps, a first relative pose relationship of each two adjacent pose nodes and a second relative pose relationship of each pose node and the plurality of local maps; the association module 306, configured to associate the multi-frame image data with the corresponding pose nodes to obtain a plurality of observation values of the pose nodes and the identification codes; an optimization module 307, configured to optimize the plurality of pose nodes and the plurality of local maps with the first relative pose relationship, the second relative pose relationship, and the observed value as constraints; the map construction module 308, configured to construct a positioning map using the optimized plurality of pose nodes, the plurality of local maps, and the multi-frame sensing data.
Device 4 may include, but is not limited to, a processor 40, a memory 41. It will be appreciated by those skilled in the art that fig. 4 is merely an example of device 4 and is not intended to limit device 4, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., a computing device may also include an input-output device, a network access device, a bus, etc.
The processor 40 may be a central processing unit (Central Processing Unit, CPU), other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 41 may be an internal storage unit of the device 4, such as a hard disk or a memory of the device 4. The memory 41 may also be an external storage device of the device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the device 4. Further, the memory 41 may also include both an internal storage unit of the device 4 and an external storage device. The memory 41 is used to store computer programs and other programs and data required by the device. The memory 41 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that the above-described functional units and modules are merely illustrated for convenience and brevity of description, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above device may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts that are not described or illustrated in detail in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/device and method may be implemented in other manners. For example, the apparatus/device embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a non-transitory computer-readable storage medium. Based on such understanding, all or part of the procedures in the methods of the foregoing embodiments may be implemented by a computer program instructing related hardware. The computer program of the map extension method may be stored in a computer-readable storage medium, and when executed by a processor the computer program may implement the steps of each of the method embodiments described above, namely: acquiring multi-frame image data of an identification code acquired by an image acquisition device and multi-frame sensing data acquired by a sensing device when the robot moves; acquiring a marking map of the identification code; obtaining a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map; based on the multi-frame sensing data, the plurality of pose nodes and the plurality of local maps, obtaining a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the plurality of local maps; correlating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observation values of the pose nodes and the identification codes; optimizing the plurality of pose nodes and the plurality of local maps by taking the first relative pose relationships, the second relative pose relationships and the observation values as constraints; and constructing a global map by using the optimized pose nodes, the local maps and the multi-frame sensing data. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The non-transitory computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the non-transitory computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the non-transitory computer-readable medium does not include electrical carrier signals and telecommunications signals.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features thereof may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and are intended to be included within the scope of the present application.
The foregoing detailed description of the embodiments has been presented for purposes of illustration and description, and it should be understood that the foregoing is by way of example only, and is not intended to limit the scope of the invention.

Claims (10)

1. A robot, the robot comprising:
a memory, a processor, a sensing device, and an image acquisition device;
the memory stores executable program code;
the processor coupled to the memory is configured to invoke the executable program code stored in the memory to perform the steps of:
acquiring multi-frame image data acquired by the image acquisition device for the identification code and multi-frame sensing data acquired by the sensing device when the robot moves;
acquiring a marking map of the identification code;
obtaining a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map;
based on the multi-frame sensing data, the plurality of pose nodes and the plurality of local maps, obtaining a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the plurality of local maps;
correlating the multi-frame image data with corresponding pose nodes to obtain a plurality of observation values of the pose nodes and the identification codes;
optimizing the plurality of pose nodes and the plurality of local maps by taking the first relative pose relationship, the second relative pose relationship and the observed value as constraints;
Constructing a global map by using the optimized pose nodes, the local maps and the multi-frame sensing data;
the sensing device comprises a wheel type odometer and a two-dimensional laser radar, and the multi-frame sensing data comprises odometer data and laser point cloud data;
the obtaining the plurality of pose nodes and the plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data and the marking map comprises:
creating an initial pose node and an initial local map based on the image data and the marking map;
creating a new node based on the odometry data and/or the laser point cloud data;
acquiring a pose predicted value of the new node based on the odometer data corresponding to the new node and the last pose node, and correcting the pose predicted value by utilizing the corresponding point cloud data and the local map of the new node to obtain the latest pose node;
creating a new local map based on the latest pose node and the last local map;
and returning to the step of creating a new node and/or a new local map based on the odometry data and/or the laser point cloud data to obtain a plurality of pose nodes and a plurality of local maps.
2. The robot according to claim 1, wherein creating a new node based on the odometer data and/or the laser point cloud data specifically comprises:
if the odometer data and/or the laser point cloud data indicate that the translation distance of the robot compared with the previous pose node is greater than or equal to a first translation threshold value and/or the rotation distance of the robot is greater than or equal to a first rotation threshold value, a new node is created.
3. The robot of claim 1, wherein said creating an initial pose node and an initial local map based on said image data and said marking map comprises:
acquiring an ID of the identification code and a pose of the robot relative to the identification code based on the image data;
acquiring coordinate information of the identification code based on the ID of the identification code and the marking map;
calculating the pose of the robot based on the pose of the robot relative to the identification code and the coordinate information of the identification code;
and creating a pose node with an initial pose and an initial local map based on the pose of the robot.
4. The robot of claim 1, wherein the step of creating a new node based on odometry data and/or laser point cloud data comprises:
If the odometer data and/or the laser point cloud data indicate that the translation distance of the robot compared with the previous pose node is greater than or equal to a first translation threshold value and/or the rotation distance of the robot is greater than or equal to a first rotation threshold value, creating a new node;
the processor invokes the executable program code stored in the memory, and the step of creating a new local map based on the latest pose node and the last local map in the map extension method includes:
and if the translation distance between the latest pose node and the last local map is greater than or equal to a second translation threshold value, creating a new local map, and taking the latest pose node as the pose of the local map.
5. The robot of claim 1, wherein the second relative pose relationship comprises a map relative relationship and a closed-loop constraint relationship;
the obtaining, based on the multi-frame sensing data, the plurality of pose nodes, and the plurality of local maps, a first relative pose relationship for each two adjacent pose nodes and a second relative pose relationship for each pose node and the plurality of local maps includes:
Acquiring a first relative pose relationship of every two adjacent pose nodes based on the plurality of pose nodes;
acquiring the map relative relation based on each pose node and a corresponding local map;
and acquiring the closed-loop constraint relation based on laser point cloud data corresponding to the latest pose node and other local maps through loop detection, wherein the other local maps are local maps not corresponding to the latest pose node.
6. The robot of claim 1, wherein optimizing the plurality of pose nodes and the plurality of local maps with the first relative pose relationship, the second relative pose relationship, and the observation as constraints comprises:
constructing a nonlinear least square problem by taking the first relative pose relationship, the second relative pose relationship and the observed value as constraint conditions and taking the pose nodes and the local maps as optimization targets;
and solving the nonlinear least square problem by a graph optimization solving algorithm, and taking the optimal solution of the nonlinear least square problem as a plurality of optimized pose nodes and a plurality of local maps.
7. The robot of claim 1, wherein the constructing a positioning map using the optimized plurality of pose nodes, the plurality of local maps, and the multi-frame sensing data comprises:
and establishing a global map according to the optimized multiple pose nodes, the associated multi-frame sensing data and the multiple local maps.
8. A map construction apparatus applied to a robot comprising a sensing device and an image acquisition device, the apparatus comprising:
a first acquisition module, configured to acquire multi-frame sensing data acquired by the sensing device while the robot moves;
a second acquisition module, configured to acquire multi-frame image data of the identification code acquired by the image acquisition device;
a third acquisition module, configured to acquire the marking map of the identification code;
a fourth acquisition module, configured to obtain a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data, and the marking map;
a fifth acquisition module, configured to obtain a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the plurality of local maps, based on the multi-frame sensing data, the plurality of pose nodes, and the plurality of local maps;
an association module, configured to associate the multi-frame image data with the corresponding pose nodes to obtain a plurality of observation values between the pose nodes and the identification code;
an optimization module, configured to optimize the plurality of pose nodes and the plurality of local maps with the first relative pose relationship, the second relative pose relationship, and the observation values as constraints;
and a map construction module, configured to construct a positioning map using the optimized plurality of pose nodes, the plurality of local maps, and the multi-frame sensing data;
wherein the sensing device comprises a wheel odometer and a two-dimensional lidar, and the multi-frame sensing data comprise odometry data and laser point cloud data;
and wherein the fourth acquisition module comprises:
an initialization unit, configured to create an initial pose node and an initial local map based on the multi-frame image data and the marking map;
a first creation unit, configured to create a new node based on the odometry data and/or the laser point cloud data;
a prediction unit, configured to obtain a predicted pose value of the new node based on the odometry data corresponding to the new node and the previous pose node;
a correction unit, configured to correct the predicted pose value using the point cloud data corresponding to the new node and the local map, to obtain the latest pose node;
and a second creation unit, configured to create a new local map based on the latest pose node and the last local map, and to trigger the first creation unit.
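The prediction and correction units of the fourth acquisition module amount to a predict-then-scan-match step. Below is a deliberately small sketch: odometry composition for the predicted pose value, and a brute-force search over a pose window for the correction. The window size, step lengths, and nearest-point cost are toy stand-ins for a real correlative scan matcher or ICP, and every name is an assumption.

```python
import numpy as np
from itertools import product

def predict_pose(prev_pose, odom_delta):
    """Compose the previous pose node with a robot-frame odometry
    increment (dx, dy, dtheta) to get the predicted pose value."""
    x, y, th = prev_pose
    dx, dy, dth = odom_delta
    return np.array([x + dx * np.cos(th) - dy * np.sin(th),
                     y + dx * np.sin(th) + dy * np.cos(th),
                     th + dth])

def correct_pose(pred, scan, occupied, step=0.05, half_window=2):
    """Search a small pose window around the prediction and keep the
    candidate whose transformed scan lies closest to the local map's
    occupied points (scan: (N, 2), occupied: (M, 2) world points)."""
    best, best_cost = pred, np.inf
    for ix, iy, ith in product(range(-half_window, half_window + 1),
                               repeat=3):
        cand = pred + np.array([ix * step, iy * step, ith * 0.01])
        c, s = np.cos(cand[2]), np.sin(cand[2])
        pts = scan @ np.array([[c, s], [-s, c]]) + cand[:2]
        # Mean distance from each scan point to its nearest map point.
        cost = np.linalg.norm(pts[:, None] - occupied[None],
                              axis=2).min(axis=1).mean()
        if cost < best_cost:
            best, best_cost = cand, cost
    return best
```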
9. A map extension method, the method comprising:
acquiring multi-frame image data of an identification code acquired by an image acquisition device and multi-frame sensing data acquired by a sensing device while the robot moves;
acquiring a marking map of the identification code;
obtaining a plurality of pose nodes and a plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data, and the marking map;
obtaining, based on the multi-frame sensing data, the plurality of pose nodes, and the plurality of local maps, a first relative pose relationship of every two adjacent pose nodes and a second relative pose relationship of each pose node and the plurality of local maps;
associating the multi-frame image data with the corresponding pose nodes to obtain a plurality of observation values between the pose nodes and the identification code;
optimizing the plurality of pose nodes and the plurality of local maps with the first relative pose relationship, the second relative pose relationship, and the observation values as constraints;
and constructing a global map using the optimized plurality of pose nodes, the plurality of local maps, and the multi-frame sensing data;
wherein the sensing device comprises a wheel odometer and a two-dimensional lidar, and the multi-frame sensing data comprise odometry data and laser point cloud data;
and wherein obtaining the plurality of pose nodes and the plurality of local maps of the robot based on the multi-frame image data, the multi-frame sensing data, and the marking map comprises:
creating an initial pose node and an initial local map based on the multi-frame image data and the marking map;
creating a new node based on the odometry data and/or the laser point cloud data;
obtaining a predicted pose value of the new node based on the odometry data corresponding to the new node and the previous pose node, and correcting the predicted pose value using the point cloud data corresponding to the new node and the local map, to obtain the latest pose node;
creating a new local map based on the latest pose node and the last local map;
and returning to the step of creating a new node based on the odometry data and/or the laser point cloud data, so as to obtain the plurality of pose nodes and the plurality of local maps.
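In the optimization of claim 9, each observation value couples a pose node to the identification code's pose in the marking map. A minimal residual sketch follows, under the assumption (made here for illustration only) that an observation is the code's relative pose in the robot frame; such residuals would enter the nonlinear least-squares problem of claim 6 alongside the first and second relative pose relationships.

```python
import numpy as np

def compose(a, b):
    """SE(2) composition a (+) b for poses (x, y, theta)."""
    c, s = np.cos(a[2]), np.sin(a[2])
    return np.array([a[0] + c * b[0] - s * b[1],
                     a[1] + s * b[0] + c * b[1],
                     a[2] + b[2]])

def observation_residual(node_pose, observation, marker_in_map):
    """Error between the code pose predicted from the node
    (node (+) observation) and the code's pose in the marking map."""
    err = compose(node_pose, observation) - marker_in_map
    err[2] = np.arctan2(np.sin(err[2]), np.cos(err[2]))  # wrap angle
    return err

# Example: a node at (1, 0, 0) sees the code 0.5 m ahead, while the
# marking map places the code at (1.6, 0, 0) -> residual (-0.1, 0, 0).
r = observation_residual(np.array([1.0, 0.0, 0.0]),
                         np.array([0.5, 0.0, 0.0]),
                         np.array([1.6, 0.0, 0.0]))
```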
10. A readable storage medium having stored thereon a computer program which, when executed by a processor, implements the map extension method of claim 9.
CN202111679399.7A 2021-12-31 2021-12-31 Robot, map extension method, apparatus, and readable storage medium Active CN114295138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111679399.7A CN114295138B (en) 2021-12-31 2021-12-31 Robot, map extension method, apparatus, and readable storage medium

Publications (2)

Publication Number Publication Date
CN114295138A CN114295138A (en) 2022-04-08
CN114295138B (en) 2024-01-12

Family ID: 80975910

Country Status (1)

CN: CN114295138B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107607107A (en) * 2017-09-14 2018-01-19 斯坦德机器人(深圳)有限公司 A kind of Slam method and apparatus based on prior information
WO2019169540A1 (en) * 2018-03-06 2019-09-12 斯坦德机器人(深圳)有限公司 Method for tightly-coupling visual slam, terminal and computer readable storage medium
CN110462683A (en) * 2018-03-06 2019-11-15 斯坦德机器人(深圳)有限公司 Method, terminal and the computer readable storage medium of close coupling vision SLAM
CN112183171A (en) * 2019-07-05 2021-01-05 杭州海康机器人技术有限公司 Method and device for establishing beacon map based on visual beacon
CN113819914A (en) * 2020-06-19 2021-12-21 北京图森未来科技有限公司 Map construction method and device
CN112051596A (en) * 2020-07-29 2020-12-08 武汉威图传视科技有限公司 Indoor positioning method and device based on node coding

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Unified Multi-Modal Landmark Tracking for Tightly Coupled Lidar-Visual-Inertial Odometry; David Wisth et al.; IEEE Robotics and Automation Letters; Vol. 6, No. 2; pp. 1004-1011 *
Research on Indoor Positioning Algorithms with Multi-Modal Fusion; Liu Yunhang; China Master's Theses Full-text Database, Information Science and Technology (No. 5); p. I138-1249 *

Similar Documents

Publication Publication Date Title
US11422261B2 (en) Robot relocalization method and apparatus and robot using the same
CN109345596B (en) Multi-sensor calibration method, device, computer equipment, medium and vehicle
CN109509210B (en) Obstacle tracking method and device
CN110873883B (en) Positioning method, medium, terminal and device integrating laser radar and IMU
CN110470333B (en) Calibration method and device of sensor parameters, storage medium and electronic device
CN111735439B (en) Map construction method, map construction device and computer-readable storage medium
CN115631305A (en) Driving method of skeleton model of virtual character, plug-in and terminal equipment
CN109425348B (en) Method and device for simultaneously positioning and establishing image
CN112183171B (en) Method and device for building beacon map based on visual beacon
CN110887493B (en) Track calculation method, medium, terminal and device based on local map matching
CN113048980B (en) Pose optimization method and device, electronic equipment and storage medium
CN111862215B (en) Computer equipment positioning method and device, computer equipment and storage medium
CN111142514B (en) Robot and obstacle avoidance method and device thereof
CN114926549B (en) Three-dimensional point cloud processing method, device, equipment and storage medium
CN115862821B (en) Digital twinning-based intelligent operating room construction method and related device
CN112861595A (en) Method and device for identifying data points and computer-readable storage medium
KR20210133583A (en) Apparatus for building map using gps information and lidar signal and controlling method thereof
CN111368860B (en) Repositioning method and terminal equipment
CN113932790A (en) Map updating method, device, system, electronic equipment and storage medium
CN114295138B (en) Robot, map extension method, apparatus, and readable storage medium
CN113375657A (en) Electronic map updating method and device and electronic equipment
CN111426316A (en) Robot positioning method and device, robot and readable storage medium
CN113808196A (en) Plane fusion positioning method and device, electronic equipment and storage medium
CN115540867A (en) Robot, map construction method, map construction device and readable storage medium
MOLDER et al. Navigation algorithms with LIDAR for mobile robots

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant