CN113112478A - Pose recognition method and terminal equipment - Google Patents

Pose recognition method and terminal equipment

Info

Publication number: CN113112478A
Application number: CN202110404780.6A
Authority: CN (China)
Prior art keywords: pose, point cloud, data, terminal device, laser
Other languages: Chinese (zh)
Other versions: CN113112478B
Inventors: ***, 毕艳飞, 柴黎林, 李贝
Applicant/assignee: Shenzhen Ubtech Technology Co., Ltd.
Legal status: Granted; Active
Events: application filed by Shenzhen Ubtech Technology Co., Ltd.; priority to CN202110404780.6A; publication of CN113112478A; application granted; publication of CN113112478B

Classifications

    • G06T7/0002 - Image analysis; inspection of images, e.g. flaw detection
    • G06F16/29 - Information retrieval; geographical information databases
    • G06T7/75 - Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T2207/10028 - Image acquisition modality: range image; depth image; 3D point clouds
    • G06T2207/30241 - Subject of image: trajectory


Abstract

The invention relates to the technical field of device control and provides a pose recognition method and a terminal device. The pose recognition method comprises: acquiring pose data to be recognized of the terminal device, the pose data comprising position data and attitude data; acquiring a regional point cloud map centered on the position data; acquiring, through a built-in laser radar, laser point cloud data corresponding to the actual pose of the terminal device; determining a confidence score for the pose data according to the laser point cloud data and the regional point cloud map; and generating a pose recognition result based on the confidence score of the pose data. The invention makes it possible to determine the reliability of the currently recognized pose data and greatly improves the robustness of the terminal device in scenes where the environment changes easily.

Description

Pose recognition method and terminal equipment
Technical Field
The invention belongs to the technical field of device control, and in particular relates to a pose recognition method and a terminal device.
Background
With the continued development of intelligence and automation, intelligent robots are being applied in ever more fields, such as household cleaning, automatic delivery, and guidance, greatly improving the convenience and intelligence of users' daily lives. During the operation of an intelligent robot, one of the key factors determining the accuracy of its navigation and map construction is how accurately the robot can recognize its own pose.
In existing pose recognition technology, pose data are generally acquired through sensors, and the pose of the intelligent robot is recognized from the relative positions of key markers in the scene. However, in indoor scenes such as homes, the positions of the key markers change easily, or new obstacles appear along the original travel path, which interferes with the pose recognition of the intelligent robot and reduces both the reliability of pose recognition and the robustness in scenes where the environment changes easily.
Disclosure of Invention
In view of this, embodiments of the present invention provide a pose recognition method and a terminal device to solve the problems of existing pose recognition technology: low pose recognition reliability and poor robustness in scenes where the environment changes easily.
A first aspect of the embodiments of the present invention provides a pose recognition method, applied to a terminal device, comprising:
acquiring pose data to be recognized of the terminal device, the pose data comprising position data and attitude data;
acquiring a regional point cloud map centered on the position data;
acquiring, through a built-in laser radar, laser point cloud data corresponding to the actual pose of the terminal device;
determining a confidence score for the pose data according to the laser point cloud data and the regional point cloud map; and
generating a pose recognition result based on the confidence score of the pose data.
A second aspect of the embodiments of the present invention provides a pose recognition apparatus, comprising:
a pose data acquisition unit, configured to acquire pose data to be recognized of the terminal device, the pose data comprising position data and attitude data;
a regional point cloud map acquisition unit, configured to acquire a regional point cloud map centered on the position data;
a laser point cloud data acquisition unit, configured to acquire, through a built-in laser radar, laser point cloud data corresponding to the actual pose of the terminal device;
a confidence score determination unit, configured to determine a confidence score for the pose data according to the laser point cloud data and the regional point cloud map; and
a pose recognition result generation unit, configured to generate a pose recognition result based on the confidence score of the pose data.
A third aspect of the embodiments of the present invention provides a terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the first aspect when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the first aspect.
The pose recognition method and terminal device provided by the embodiments of the present invention have the following beneficial effects:
When pose recognition is required, the pose data to be recognized of the terminal device are acquired, and the regional point cloud map corresponding to the position data in the pose data is determined. Meanwhile, when acquiring the pose data, the terminal device can also acquire the laser point cloud data corresponding to its current pose through a built-in laser radar. By comparing the regional point cloud map with the laser point cloud data, a confidence score for the current pose data can be determined, and a corresponding pose recognition result is obtained from that score, which establishes the reliability of the currently recognized pose data. When the reliability is high, a normal response operation can be executed based on the recognized pose data, for example controlling the terminal device to travel along a preset trajectory; when the reliability is low, an abnormal response operation can be executed, for example re-recognizing the pose data of the terminal device or updating the map. This greatly improves the robustness of the terminal device in scenes where the environment changes easily.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flowchart of an implementation of a pose recognition method according to the first embodiment of the present invention;
fig. 2 is a schematic diagram of laser point cloud data and a regional point cloud map according to an embodiment of the invention;
fig. 3 is a flowchart of a specific implementation of step S104 of a pose recognition method according to the second embodiment of the present invention;
fig. 4 is a flowchart of a specific implementation of step S1041 of a pose recognition method according to the third embodiment of the present invention;
fig. 5 is a flowchart of a specific implementation of steps S101 and S105 of a pose recognition method according to the fourth embodiment of the present invention;
fig. 6 is a flowchart of a specific implementation of step S102 of a pose recognition method according to the fifth embodiment of the present invention;
fig. 7 is a block diagram of a pose recognition apparatus according to an embodiment of the present invention;
fig. 8 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are merely illustrative of the invention and are not intended to limit it.
When pose recognition is required, the pose data to be recognized of the terminal device are acquired, and the regional point cloud map corresponding to the position data in the pose data is determined. Meanwhile, when acquiring the pose data, the terminal device can also acquire the laser point cloud data corresponding to its current pose through a built-in laser radar. By comparing the regional point cloud map with the laser point cloud data, a confidence score for the current pose data can be determined, and a corresponding pose recognition result is obtained from that score, which establishes the reliability of the currently recognized pose data. When the reliability is high, a normal response operation can be executed based on the recognized pose data, for example controlling the terminal device to travel along a preset trajectory; when the reliability is low, an abnormal response operation can be executed, for example re-recognizing the pose data of the terminal device or updating the map. This solves the problems of existing pose recognition technology: low pose recognition reliability and poor robustness in scenes where the environment changes easily.
In the embodiments of the present invention, the process is executed by a terminal device. The terminal device includes, but is not limited to, mobile devices such as intelligent robots, intelligent model cars, and unmanned aerial vehicles. In a possible implementation, the process may instead be executed by another electronic device that establishes a communication connection with the terminal device; in that case, the terminal device sends its pose data to the electronic device so that the electronic device outputs a pose recognition result for the terminal device. The electronic device may be any device capable of executing the pose recognition task, such as a computer, a smartphone, or a tablet computer. It may send the recognized pose recognition result to the terminal device so that the terminal device executes a corresponding response operation based on it, or it may itself determine the response operation from the pose recognition result and send the corresponding control instruction to the terminal device to make it execute the corresponding action. In the following embodiments, the process is described with the terminal device as the executing body.
Fig. 1 shows a flowchart of an implementation of the pose recognition method according to the first embodiment of the present invention, detailed as follows:
In S101, pose data to be recognized of the terminal device are acquired; the pose data include position data and attitude data.
In this embodiment, the terminal device determines the pose data for its current position through a built-in data acquisition module. These pose data are in a to-be-recognized state: because the data acquisition module may deviate, the pose data may be invalid or abnormally recognized, so their reliability must be assessed, i.e. the pose recognition result corresponding to the pose data is output through S102 to S105. For example, the data acquisition module may include a positioning module and an attitude module. The positioning module may be a WIFI communication module, a Bluetooth Low Energy module, or the like, which determines the position data of the terminal device by scanning for the wireless signals of the corresponding wireless devices and measuring their signal strengths. The attitude module may include one or more sensors and determine the attitude data of the terminal device from the values they feed back. If the terminal device is an intelligent robot with multiple movable joints, i.e. the robot has multiple degrees of freedom when changing its posture, each movable joint may be fitted with a corresponding sensor, and the terminal device can determine its current attitude data from the sensing values fed back by the sensor of each movable joint.
In a possible implementation, the position data represent the absolute position of the terminal device, for example the longitude and latitude of its location. Optionally, the position data may instead represent a relative position: for example, if the scene where the terminal device is located contains several calibration objects, the collected position data may be the distance values between the terminal device and each calibration object; in this case, there should be three or more calibration objects.
In one possible implementation, the attitude data include, but are not limited to, attitude angles, orientation information for each direction, and other information that can represent the attitude of the terminal device. Of course, if the terminal device includes multiple movable joints, each with several degrees of freedom, the angle of each movable joint in each degree of freedom can be determined first, and the attitude data of the terminal device then derived from the angles of all the movable joints in each of their degrees of freedom.
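For illustration only, the pose data described above might be organized as in the following sketch; the structure and its field names are assumptions made for readability, not a layout prescribed by the patent:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PoseData:
    """Pose data to be recognized (hypothetical layout; all field names are assumptions)."""
    # Position data: absolute coordinates (e.g. longitude/latitude) or
    # relative distances to at least three calibration objects.
    position: List[float]
    # Attitude data: e.g. attitude angles such as roll, pitch, and yaw.
    attitude: List[float]
    # Optional per-joint angles for robots with multiple movable joints:
    # one inner list per joint, one entry per degree of freedom.
    joint_angles: List[List[float]] = field(default_factory=list)
```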
In a possible implementation, the terminal device is configured with trigger conditions for pose recognition; when it detects that any preconfigured trigger condition is currently satisfied, it performs operations S101 to S105 to determine its pose. For example, the trigger condition may be a time trigger: the terminal device is configured with multiple trigger moments for pose recognition, and the time intervals between trigger moments may be equal (i.e. a preset trigger period) or unequal. The interval between trigger moments can be chosen according to the density of obstacles in the scene. If the obstacle density is high, i.e. the current environment is complex, the pose of the terminal device must be confirmed frequently to achieve accurate control, so the corresponding interval is short; conversely, if the obstacle density is low, i.e. the current environment is simple, the pose need not be confirmed frequently, and the interval can be long.
In a possible implementation, the trigger condition for pose recognition may instead be an event trigger. For example, if the terminal device collides with an obstacle while traveling based on a preset map, the collision indicates that its pose is abnormal and may have caused it to deviate from the preset trajectory; it can then be determined that the trigger condition for pose recognition is satisfied, and the pose of the terminal device is determined.
In S102, a regional point cloud map centered on the position data is acquired.
In this embodiment, after determining the current pose data, the terminal device needs to establish their reliability, i.e. to compute the corresponding confidence score. To do so, the terminal device extracts the position data from the pose data and acquires a regional point cloud map centered on that position. Note that the regional point cloud map is stored in advance, whether in the local memory of the terminal device, on a cloud server, or in an external memory.
In a possible implementation, the terminal device associates each regional point cloud map with a corresponding preset position. After obtaining the current position data, the terminal device performs a lookup based on those data, selects from all stored preset positions the one that matches the position data, and uses the regional point cloud map of the matched preset position as the regional point cloud map corresponding to the position data, as sketched below.
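A minimal sketch of this lookup, assuming the preset positions are stored as 2D coordinates and that "matching" means nearest within a tolerance radius (both assumptions the patent leaves open):

```python
import math
from typing import Dict, Optional, Tuple

Point = Tuple[float, float]

def find_region_map(position: Point,
                    region_maps: Dict[Point, object],
                    tolerance: float = 0.5) -> Optional[object]:
    """Return the regional point cloud map whose preset position best matches `position`."""
    best_key, best_dist = None, float("inf")
    for preset in region_maps:
        d = math.dist(position, preset)  # Euclidean distance to the stored preset position
        if d < best_dist:
            best_key, best_dist = preset, d
    # Accept the nearest preset position only if it lies within the tolerance radius.
    if best_key is None or best_dist > tolerance:
        return None
    return region_maps[best_key]
```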
In a possible implementation, the terminal device stores in advance a global point cloud map corresponding to the current scene; in this case, the terminal device can crop a regional point cloud map centered on the position data out of the global point cloud map. Similarly, the global point cloud map may be stored in the local memory of the terminal device or on a cloud server.
In a possible implementation, the regional point cloud map is built by the terminal device itself during operation. For example, the first time the terminal device operates in the current scene, it can collect laser data along its path through the built-in laser radar, the path being determined by a preset exploration route; the terminal device obtains the laser data for each heading along the route and, from all the collected laser data, generates a global point cloud map of the current scene, which contains the regional point cloud map. Optionally, the terminal device can also update the regional point cloud map while traveling: if the laser data acquired during some traversal match the laser data previously acquired at the corresponding position poorly, the regional point cloud map can be updated from the current laser data.
In a possible implementation, the regional point cloud map is built by a device other than the terminal device, in the same way as described above, which is not repeated here. In this case, the other device may send the constructed regional point cloud map to the terminal device of this embodiment; alternatively, the other device may upload the constructed map to a third-party device, such as a cloud server or the management device of the current scene, from which the terminal device downloads the regional point cloud map corresponding to the position data.
In S103, laser point cloud data corresponding to the actual pose of the terminal device are acquired through the built-in laser radar.
In this embodiment, while acquiring the pose data, the terminal device can also acquire, through the built-in laser radar, the laser point cloud data corresponding to its current pose. The laser point cloud data represent the positions of the scene objects in the current scene relative to the current pose of the terminal device. Optionally, the laser point cloud data may consist of distance values between the terminal device and its surroundings along each spatial dimension.
In this embodiment, the laser point cloud data collected by the laser radar are based on the actual pose of the terminal device; that is, they reflect the actual pose and are therefore data of high reliability.
In this embodiment, the laser radar built into the terminal device has a given acquisition angle and acquisition depth and thus a given effective range; it obtains the position of each point within this effective range relative to the terminal device, from which the laser point cloud data are constructed.
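As a concrete illustration of how such laser point cloud data can be formed, the sketch below converts a planar lidar scan (one range reading per beam angle, an assumed sensor model) into 2D points in the sensor frame, keeping only readings inside the effective range:

```python
import math
from typing import List, Tuple

def scan_to_points(ranges: List[float],
                   angle_min: float,
                   angle_increment: float,
                   range_max: float) -> List[Tuple[float, float]]:
    """Convert per-beam range readings into (x, y) points in the sensor frame."""
    points = []
    for i, r in enumerate(ranges):
        if 0.0 < r <= range_max:  # discard readings outside the lidar's effective range
            theta = angle_min + i * angle_increment
            points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```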
In S104, a confidence score corresponding to the pose data is determined according to the laser point cloud data and the regional point cloud map.
In this embodiment, having acquired the regional point cloud map determined from the pose data to be recognized and the laser point cloud data corresponding to the actual pose, the terminal device matches the two sets of point cloud data and determines a confidence score for the pose data from their degree of matching. The regional point cloud map is determined from the pose data to be recognized, while the laser point cloud data are collected by the laser radar at the actual pose. A higher matching degree between the two therefore means the pose data to be recognized are closer to the actual pose of the terminal device and, correspondingly, a higher confidence score; conversely, a lower matching degree between the regional point cloud map and the laser point cloud data means a larger difference between the pose data to be recognized and the actual pose of the terminal device and, correspondingly, a lower confidence score.
In a possible implementation, the terminal device is configured with a preset confidence conversion function, into which it feeds the laser point cloud data and the regional point cloud map to compute the confidence score corresponding to the pose data. Optionally, the confidence conversion function consists of the following modules: a point cloud normalization module, a point cloud matching module, and a confidence score calculation module. The point cloud normalization module standardizes the laser point cloud data and the regional point cloud map, for example converting all their parameter values to a common scale; it may also apply a matrix transformation to the laser point cloud data and/or the regional point cloud map to bring them to a common orientation before the subsequent matching computation. The point cloud matching module receives the normalized laser point cloud data and regional point cloud map output by the normalization module and computes the distance between them, determined from the distance values between corresponding points in the two; this distance may be the mean of those distance values or a matrix formed from them. The confidence score calculation module then computes the confidence score from the distance passed to it.
In a possible implementation, the input to the confidence score calculation module is the distance matrix formed from the distance values between the laser point cloud data and the corresponding points in the regional point cloud map. In this case, a corresponding distance threshold can be set: corresponding points in the distance matrix whose distance value exceeds the threshold are identified as abnormal points, and the confidence score is computed from the number of abnormal points or from the percentage of abnormal points in the total number of points, as sketched below.
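A minimal sketch of this outlier-counting variant, assuming the per-point correspondences and their distance values have already been computed; the threshold value and the linear score formula are assumptions:

```python
from typing import List

def confidence_from_outliers(distances: List[float],
                             dist_threshold: float = 0.2) -> float:
    """Confidence in [0, 1]: fraction of corresponding point pairs that are not abnormal."""
    if not distances:
        return 0.0
    abnormal = sum(1 for d in distances if d > dist_threshold)  # points over the threshold
    return 1.0 - abnormal / len(distances)
```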
In a possible implementation, the regional point cloud map may be the point cloud data covering a full circle (i.e. 360°) centered on the position data, while the laser point cloud data may be the point cloud data within a preset viewing angle at the current attitude. As an example, fig. 2 shows a schematic diagram of a regional point cloud map and laser point cloud data provided by an embodiment of the present application. Referring to fig. 2, the regional point cloud map is region 1 and the laser point cloud data are region 2; the regional point cloud map may thus cover a larger area than the laser point cloud data.
In S105, a pose recognition result is generated based on the confidence score of the pose data.
In this embodiment, after computing the confidence score, the terminal device generates the pose recognition result corresponding to the pose data to be recognized. The pose recognition result includes, but is not limited to: pose recognition normal, pose deviation, pose recognition abnormal, and so on. Outputting this result both accomplishes the pose recognition of the terminal device and attaches a reliability assessment to it, so that the terminal device can perform the corresponding operation based on the result.
In a possible implementation, if the pose recognition result is that pose recognition is normal, the pose data to be recognized match the actual pose of the terminal device, and the corresponding operation can be performed based on those pose data. For example, if the terminal device is traveling along a preset trajectory, it can be concluded that the device has not deviated from the trajectory, and it continues to operate under the original travel policy.
In a possible implementation, if the pose recognition result is a pose deviation, there is some deviation between the pose data to be recognized and the actual pose of the terminal device, but the difference is not too large, and the pose of the terminal device can be adjusted.
In a possible implementation, if the pose recognition result is that pose recognition is abnormal, the deviation between the pose data to be recognized and the actual pose of the terminal device is large, and the actual pose is not the expected pose; in this case, the terminal device can be controlled to move to a preset pose.
Note that the specific response operation the terminal device performs based on the pose recognition result may depend on the type of task it is currently executing, which is not enumerated here.
As can be seen from the above, when pose recognition is required, the pose recognition method provided by the embodiment of the present invention acquires the pose data to be recognized of the terminal device and determines the regional point cloud map corresponding to the position data in the pose data. Meanwhile, when acquiring the pose data, the terminal device acquires the laser point cloud data corresponding to its current pose through the built-in laser radar. By comparing the regional point cloud map with the laser point cloud data, a confidence score for the current pose data is determined, and a corresponding pose recognition result is obtained from that score, establishing the reliability of the currently recognized pose data. When the reliability is high, a normal response operation can be executed based on the recognized pose data, for example controlling the terminal device to travel along a preset trajectory; when the reliability is low, an abnormal response operation can be executed, for example re-recognizing the pose data of the terminal device or updating the map, which greatly improves the robustness of the terminal device in scenes where the environment changes easily.
Fig. 3 shows a flowchart of a specific implementation of step S104 of the pose recognition method according to the second embodiment of the present invention. Referring to fig. 3, relative to the embodiment shown in fig. 1, step S104 of the pose recognition method provided by this embodiment includes S1041 to S1042, detailed as follows:
Further, the determining a confidence score for the pose data according to the laser point cloud data and the regional point cloud map includes:
In S1041, the laser point cloud data and the regional point cloud map are fed into a preset point cloud matching algorithm, and the matching degree between the laser point cloud data and the regional point cloud map is computed.
In this embodiment, the terminal device is configured with a point cloud matching algorithm for computing the matching degree between any two sets of point cloud data. A higher matching degree between two point clouds means the scenes from which they were captured are more similar; for the same scene, the two point clouds can then be regarded as captured from the same pose. Conversely, a lower matching degree means lower scene similarity; for the same scene, it means the two point clouds were captured from different poses. Therefore, to judge whether the pose data to be recognized match the actual pose of the terminal device, the terminal device feeds the laser point cloud data and the regional point cloud map into the matching algorithm and computes the matching degree between them.
Optionally, the point cloud matching algorithm may be the iterative closest point (ICP) algorithm or the normal distributions transform (NDT) algorithm. The terminal device can evaluate the accuracy of both algorithms in the current scene and select the more accurate one as the point cloud matching algorithm.
In S1042, the matching degree is fed into a preset evaluation function to obtain the confidence score corresponding to the pose data; the evaluation function is:
[The evaluation function f[score(p)] = bel(x) appears only as an image in the original document; its closed form is not reproduced here.]
where score(p) is the matching degree, f[score(p)] is the evaluation function, and bel(x) is the confidence score.
In this embodiment, after computing the matching degree, the terminal device feeds it into the preset evaluation function, which converts it into the corresponding confidence score. In general, the higher the matching degree, the closer the recognized pose data are to the actual pose of the terminal device, and the higher the confidence score. Note that the evaluation function is not linear, i.e. there is a nonlinear relationship between matching degree and confidence. Because the environment of the terminal device may change (for example, in a home environment, furniture is easily moved, added, or removed), the laser point cloud data acquired at the current pose may deviate substantially from the preset regional point cloud map in some sub-regions, lowering the overall matching value even when the pose data are consistent with the actual pose. The evaluation function therefore maps the interval of higher matching degrees to nearly identical confidence scores, increasing the fault tolerance in scenes with large environmental changes; in the interval of low matching degrees, it amplifies the differences, i.e. the corresponding confidence scores differ greatly. In this way, fault tolerance in changeable scenes is guaranteed while the accuracy of pose anomaly recognition is improved.
For example, if the computed matching degree is 60%, i.e. 0.6, the evaluation function may still convert it into a confidence score of 100%.
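Since the formula itself is shown only as an image in the source, the sketch below is merely one plausible shape consistent with the description (flat near high matching degrees, steep near low ones); the logistic form and its constants are assumptions:

```python
import math

def evaluation_function(score_p: float,
                        midpoint: float = 0.4,
                        steepness: float = 20.0) -> float:
    """Map a matching degree score(p) in [0, 1] to a confidence score bel(x) in [0, 1].

    A logistic curve saturates for high matching degrees, tolerating partial map
    mismatch in changed environments, and falls off sharply for low ones.
    """
    return 1.0 / (1.0 + math.exp(-steepness * (score_p - midpoint)))

# With these assumed constants, a matching degree of 0.6 already maps to a
# confidence of about 0.98, echoing the 60% -> 100% example in the text.
```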
In this embodiment of the application, the matching degree between the laser point cloud data and the regional point cloud map is computed and converted into the corresponding confidence score. The difference between the pose data to be recognized and the actual position can thus be determined from the matching degree between the two point clouds, and the confidence score of the pose data derived from that difference, improving the accuracy of the confidence score.
Fig. 4 shows a flowchart of a specific implementation of step S1041 of the pose recognition method according to the third embodiment of the present invention. Referring to fig. 4, relative to the embodiment shown in fig. 3, step S1041 of the pose recognition method provided by this embodiment includes S401 to S404, detailed as follows:
Further, the feeding the laser point cloud data and the regional point cloud map into a preset point cloud matching algorithm and computing the matching degree between them includes:
in S401, second feature points corresponding to any N first feature points in the laser point cloud data are searched in the area point cloud map, and a point cloud conversion matrix is generated based on the N first feature points and the N second feature points; and N is a positive integer greater than or equal to 3.
In this embodiment, before the terminal device needs to calculate the matching degree between the laser point cloud data and the area point cloud map, the laser point cloud data may be converted, and since there may be a certain deviation between the posture of the laser point cloud data during acquisition and the corresponding posture of the area point cloud map during acquisition, in order to improve the accuracy of recognition, an error caused by a difference in the acquisition postures may be eliminated through the point cloud conversion matrix.
In a possible implementation manner, since the area point cloud map may be point cloud data within a range of a next week based on the position data, and the laser point cloud data may be point cloud data within a preset visual angle, that is, the range of the area point cloud map may be greater than the range of the laser point cloud data, a point cloud conversion matrix for the laser point cloud data needs to be determined.
In this embodiment, when the terminal device needs to determine the point cloud conversion matrix, at least three feature points need to be acquired from two point cloud data, and the conversion matrix in the three-dimensional space dimension can be determined according to the at least three feature points. Based on the point cloud conversion matrix, the terminal device can determine at least 3 first feature points from the laser point cloud data, respectively determine second feature points corresponding to the first feature points in the regional point cloud map, and generate the point cloud conversion matrix based on the determined first feature points and the corresponding position coordinates of the second feature points.
In one possible implementation manner, a marker is configured in the current scene, and a point of the marker in the point cloud data is used as the feature point. Based on this, the terminal device may determine a point corresponding to the marker in the area point cloud map as the first feature point described above, and determine a point corresponding to the marker in the laser point cloud data as the second feature point described above.
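A minimal sketch of estimating such a rigid transformation from N >= 3 point correspondences, using the SVD-based (Kabsch) method; this is one standard choice, as the patent does not prescribe a specific algorithm:

```python
import numpy as np

def estimate_rigid_transform(first_pts: np.ndarray,
                             second_pts: np.ndarray) -> np.ndarray:
    """Return a 4x4 homogeneous matrix T such that second_pts ~= R @ first_pts + t.

    first_pts, second_pts: (N, 3) arrays of corresponding feature points, N >= 3.
    """
    c1, c2 = first_pts.mean(axis=0), second_pts.mean(axis=0)  # centroids
    H = (first_pts - c1).T @ (second_pts - c2)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c2 - R @ c1
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T
```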
In S402, a laser transformation matrix corresponding to the laser point cloud data is generated based on the point cloud transformation matrix.
In this embodiment, after obtaining the point cloud transformation matrix, the terminal device transforms the laser point cloud data with it to generate the laser transformation matrix. Once the transformation is complete, the terminal device can quickly determine, for each point in the laser point cloud data, the corresponding point in the regional point cloud map.
In S403, the deviation distance between each first feature point in the laser transformation matrix and its corresponding second feature point in the regional point cloud map is computed.
In this embodiment, the terminal device knows the position of each first feature point in the laser point cloud data, so after the matrix transformation it can also locate each first feature point in the laser transformation matrix, and the correspondence between first and second feature points remains unchanged. The smaller the deviation distance between a first feature point and its corresponding second feature point, the more similar the two point clouds; conversely, the larger the deviation distance, the larger the difference between the two point clouds. The matching degree between the two point clouds can therefore be computed from the deviation distances between the feature points.
In a possible implementation, the terminal device may further extend the correspondence between the first and second feature points to a correspondence between every point in the laser transformation matrix and its counterpart in the regional point cloud map, compute the deviation distance for each pair of corresponding points, and determine the matching degree between the laser point cloud data and the regional point cloud map from all these deviation distances.
In S404, the matching degree is obtained from all the deviation distances.
In this embodiment, the terminal device feeds the deviation distances between the corresponding feature points into a preset matching degree calculation function to compute the matching degree. The larger the deviation distances, the lower the matching degree; conversely, the smaller the deviation distances, the higher the matching degree.
In one possible implementation, the terminal device computes the mean of the deviation distances and determines the matching degree as the reciprocal of that mean.
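A minimal sketch of S403 and S404 under the reciprocal-of-the-mean reading above; the small epsilon guard against division by zero is an added assumption:

```python
import numpy as np

def matching_degree(transformed_first: np.ndarray,
                    second: np.ndarray,
                    eps: float = 1e-6) -> float:
    """Matching degree from corresponding feature points.

    transformed_first: (N, 3) first feature points after applying the laser transformation.
    second:            (N, 3) corresponding second feature points in the regional map.
    """
    deviations = np.linalg.norm(transformed_first - second, axis=1)  # per-pair deviation distance
    return 1.0 / (deviations.mean() + eps)  # larger deviations -> lower matching degree
```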
In this embodiment of the application, first and second feature points that correspond across the two point clouds are identified, a point cloud transformation matrix is generated from these feature points, and the laser point cloud data are transformed to obtain the laser transformation matrix. The deviation distances between corresponding feature points can then be computed, and the matching degree between the two point clouds derived from those deviations, improving the accuracy of the matching degree.
Fig. 5 shows a flowchart of a specific implementation of steps S101 and S105 of the pose recognition method according to the fourth embodiment of the present invention. Referring to fig. 5, relative to the embodiments shown in figs. 1 to 4, in the pose recognition method provided by this embodiment, S101 includes S1011 and, correspondingly, S105 includes S1051 to S1054, detailed as follows:
Further, the acquiring pose data to be recognized of the terminal device includes:
In S1011, during the movement of the terminal device, the pose data are collected at a preset collection period.
In this embodiment, while the terminal device is moving, its pose needs to be recognized in real time, and its direction of movement, speed of movement, and so on adjusted based on that pose; the terminal device can therefore be configured with a collection period and collect its pose data periodically.
Correspondingly, the generating a pose recognition result based on the confidence score of the pose data comprises:
in S1051, if the pose identification result corresponding to any one of the acquisition periods is a pose deviation, the pose of the terminal device is adjusted or the pose data of the acquisition period is updated based on the laser point cloud data and the area point cloud map.
In this embodiment, if the terminal device detects that the pose identification result corresponding to any acquisition cycle is a pose deviation, that is, it indicates that the difference between the pose data to be identified and the actual pose of the terminal device is small, the terminal device may adjust the actual pose of the terminal device according to the difference between the laser point cloud data and the area point cloud map, in this case, there are two adjustment modes for the terminal device, one of which is to adjust the actual pose of the terminal device so that the actual pose is consistent with the pose data to be identified; and the other mode is to update the pose data so that the updated pose data is consistent with the actual pose.
Therefore, for the first manner, the terminal device determines an adjustment angle and an adjustment distance, and adjusts the current posture of the terminal device based on the adjustment angle and the adjustment distance, so as to adjust the actual posture of the terminal device to be consistent with the posture data to be recognized;
with regard to the second mode described above, the terminal device may determine a center position corresponding to the laser point cloud data from the global point cloud map, and update the pose data based on the determined center position.
In the embodiment of the application, the terminal equipment can acquire the pose data periodically in a preset acquisition period in the moving process so as to identify the pose of the terminal equipment periodically, and when the pose deviates, the pose or the pose data of the terminal equipment can be adjusted in real time so as to improve the accuracy of the moving process.
Optionally, the generating a pose recognition result based on the confidence score of the pose data may further include:
in S1052, if the pose recognition result corresponding to M consecutive acquisition cycles is a pose recognition abnormality, a position missing instruction is generated; and M is greater than or equal to a preset abnormal response threshold value.
In this embodiment, the terminal device may be configured with a corresponding abnormal response threshold, and if the pose recognition results corresponding to a plurality of consecutive acquisition cycles are pose recognition abnormalities, it indicates that the pose data to be recognized by the terminal device for a plurality of consecutive times has a large deviation from the actual pose of the terminal device, that is, the terminal device does not move based on the preset movement trajectory.
In S1053, in response to the position loss instruction, controlling the terminal device to move to a preset reset position.
In this embodiment, the terminal device may be configured with at least one reset position in the current scene, the reset position may be configured with a guidance signal, and the terminal device may move to the positioning point based on the guidance signal, so as to be able to retrieve the position of the terminal device again. Above-mentioned guide signal can be wireless WIFI signal, also can be bluetooth signal etc. and terminal equipment can move to foretell reset position according to having strong relevant characteristic between signal strength and the position.
In the embodiment of the application, when the situation that the pose data of the terminal equipment is continuously abnormal for pose recognition is detected, the terminal equipment is recognized to be in a position loss state and is moved to the reset position, the abnormal situation of position loss can be automatically repaired, and the robustness of the abnormal situation is improved.
Optionally, the generating a pose recognition result based on the confidence score of the pose data may further include:
In S1054, if the pose recognition results corresponding to Q consecutive collection periods are pose recognition abnormalities, the regional point cloud map is updated based on the laser point cloud data collected in those collection periods; Q is less than the preset abnormal response threshold.
In this embodiment, if the terminal device detects that the pose recognition results for Q consecutive sets of pose data are pose recognition abnormalities, but the number Q of abnormal recognitions does not reach the abnormal response threshold, then after the Q collection periods the terminal device can once again recognize its pose accurately. That is, the terminal device has not lost its current position, and the abnormality was likely caused by a large environmental change, such as furniture being moved, added, or removed. In this case, the terminal device can update the corresponding regional point cloud map based on the periodically collected laser point cloud data, so that the regional point cloud map matches the scene after the obstacles changed.
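A minimal sketch of the dispatch over consecutive abnormal results in S1052 to S1054; the threshold value, the counter reset rule, and both handler functions are assumptions filling in details the text leaves open:

```python
ABNORMAL_RESPONSE_THRESHOLD = 5  # assumed value; the patent leaves the threshold unspecified

consecutive_abnormal = 0

def move_to_reset_position() -> None:
    """Hypothetical hook for S1053: navigate to a preset reset position."""
    print("position lost: moving to reset position")

def update_regional_map_from_recent_scans() -> None:
    """Hypothetical hook for S1054: refresh the regional map from recent laser scans."""
    print("environment changed: updating regional point cloud map")

def on_recognition_result(result: str) -> None:
    """Dispatch the response for one collection period's pose recognition result."""
    global consecutive_abnormal
    if result == "abnormal":
        consecutive_abnormal += 1
        if consecutive_abnormal >= ABNORMAL_RESPONSE_THRESHOLD:
            move_to_reset_position()  # M >= threshold: position considered lost (S1052/S1053)
            consecutive_abnormal = 0
    else:
        if 0 < consecutive_abnormal < ABNORMAL_RESPONSE_THRESHOLD:
            update_regional_map_from_recent_scans()  # Q < threshold: map likely outdated (S1054)
        consecutive_abnormal = 0
```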
In this embodiment of the application, when the number of abnormal pose recognition results is small, the regional point cloud map can be updated based on the laser point cloud data, achieving real-time updating of the regional point cloud map and improving the adaptability of the terminal device to a changeable environment.
Fig. 6 shows a flowchart of a specific implementation of step S102 of the pose recognition method according to the fifth embodiment of the present invention. Referring to fig. 6, relative to the embodiments shown in figs. 1 to 4, in the pose recognition method provided by this embodiment, S102 includes S1021 to S1023, detailed as follows:
Further, the attitude data include the attitude angle of the terminal device, and the acquiring a regional point cloud map centered on the position data includes:
In S1021, the center point corresponding to the position data is determined from a preset global point cloud map.
In this embodiment, the terminal device stores a global point cloud map in advance, in which each point may be associated with a preset position. The terminal device compares the position data recognized this time with the preset position associated with each point in the global point cloud map and takes the point whose preset position matches the position data as the center point.
In S1022, an effective recognition region is determined with the center point as the center of a circle, the attitude angle as the initial angle, and a preset angular resolution as the radius.
In this embodiment, the terminal device takes the center point as the center of a circle, the attitude angle recognized in the attitude data as the initial angle, and the preset angular resolution as the effective recognition radius; the circular region so defined around the center point is taken as the effective recognition region.
In S1023, the point cloud map corresponding to the effective recognition region is cropped from the global point cloud map as the regional point cloud map.
In this embodiment, the terminal device extracts from the global point cloud map the point cloud data corresponding to the effective recognition region and uses the extracted point cloud data as the regional point cloud map.
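A minimal sketch of S1021 to S1023 for a 2D point cloud stored as an array; treating the "radius" as a plain distance and performing the crop independently of the initial angle are simplifying assumptions:

```python
import numpy as np

def crop_regional_map(global_map: np.ndarray,
                      center: np.ndarray,
                      radius: float) -> np.ndarray:
    """Return the points of `global_map` inside the circular effective recognition region.

    global_map: (N, 2) array of map points; center: (2,) center point; radius: region radius.
    """
    dists = np.linalg.norm(global_map - center, axis=1)  # distance of each map point to the center
    return global_map[dists <= radius]
```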
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
Fig. 7 is a block diagram of a pose recognition apparatus according to an embodiment of the present invention; the apparatus includes units for executing the steps of the embodiment corresponding to fig. 1. For details, refer to fig. 1 and the related description of that embodiment. For convenience of explanation, only the parts relevant to this embodiment are shown.
Referring to fig. 7, the pose recognition apparatus includes:
a pose data acquisition unit 71, configured to acquire pose data to be recognized of the terminal device, the pose data comprising position data and attitude data;
a regional point cloud map acquisition unit 72, configured to acquire a regional point cloud map centered on the position data;
a laser point cloud data acquisition unit 73, configured to acquire, through a built-in laser radar, laser point cloud data corresponding to the actual pose of the terminal device;
a confidence score determination unit 74, configured to determine a confidence score for the pose data according to the laser point cloud data and the regional point cloud map;
a pose recognition result generation unit 75, configured to generate a pose recognition result based on the confidence score of the pose data.
Optionally, the confidence score determination unit 74 includes:
a matching degree calculation unit, configured to feed the laser point cloud data and the regional point cloud map into a preset point cloud matching algorithm and compute the matching degree between the laser point cloud data and the regional point cloud map;
an evaluation function importing unit, configured to feed the matching degree into a preset evaluation function to obtain the confidence score corresponding to the pose data; the evaluation function is:
[The evaluation function appears only as an image in the original document; see the note at S1042.]
where score(p) is the matching degree, f[score(p)] is the evaluation function, and bel(x) is the confidence score.
Optionally, the matching degree calculation unit includes:
a feature point determination unit, configured to search the regional point cloud map for second feature points corresponding to any N first feature points in the laser point cloud data, and to generate a point cloud transformation matrix from the N first feature points and the N second feature points, N being a positive integer greater than or equal to 3;
a laser transformation matrix generation unit, configured to generate a laser transformation matrix corresponding to the laser point cloud data based on the point cloud transformation matrix;
a deviation distance calculation unit, configured to compute the deviation distance between each first feature point in the laser transformation matrix and its corresponding second feature point in the regional point cloud map;
a deviation distance conversion unit, configured to obtain the matching degree from all the deviation distances.
Optionally, the pose data acquisition unit is specifically configured to collect the pose data at a preset collection period while the terminal device is moving;
Correspondingly, the pose recognition result generation unit 75 includes:
a pose deviation response unit, configured to, if the pose recognition result corresponding to any collection period is a pose deviation, adjust the pose of the terminal device or update the pose data of that collection period based on the laser point cloud data and the regional point cloud map.
Optionally, the pose recognition result generation unit 75 further includes:
a position-loss instruction response unit, configured to generate a position-loss instruction if the pose recognition results corresponding to M consecutive collection periods are pose recognition abnormalities, M being greater than or equal to a preset abnormal response threshold;
a position-loss repair unit, configured to respond to the position-loss instruction by controlling the terminal device to move to a preset reset position.
Optionally, the pose recognition result generation unit 75 further includes:
a map update unit, configured to update the regional point cloud map based on the laser point cloud data collected in the relevant collection periods if the pose recognition results corresponding to Q consecutive collection periods are pose recognition abnormalities, Q being less than the preset abnormal response threshold.
Optionally, the attitude data comprises an attitude angle of the terminal device; the area point cloud map acquisition unit 72 includes:
a central point determining unit, configured to determine a central point corresponding to the position data from a preset global point cloud map;
an effective identification area determining unit, configured to determine an effective identification area by taking the central point as the circle center, the attitude angle as the initial angle, and a preset angular resolution as the radius;
an area point cloud map generating unit, configured to intercept the point cloud map corresponding to the effective identification area from the global point cloud map as the area point cloud map.
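A sketch of one way the three sub-units above could cut an effective identification area out of the global point cloud map: the position data gives the circle center, the attitude angle gives the initial heading of a sector, and a radius bounds the cut. Reading the "angular resolution as a radius" wording as a plain cut-out radius, and the sector crop itself, are interpretive assumptions.

    import numpy as np

    def extract_area_map(global_map: np.ndarray, position: np.ndarray,
                         attitude_angle: float, radius: float,
                         fov: float = 2.0 * np.pi) -> np.ndarray:
        """global_map: (N, 3) points; position: (x, y[, z]) center point;
        attitude_angle: heading in radians used as the initial angle.
        Keeps points within `radius` of the center and, if fov < 2*pi,
        within the sector centered on the attitude angle."""
        rel = global_map[:, :2] - position[:2]
        dist = np.linalg.norm(rel, axis=1)
        bearing = np.arctan2(rel[:, 1], rel[:, 0])
        dyaw = (bearing - attitude_angle + np.pi) % (2.0 * np.pi) - np.pi
        keep = (dist <= radius) & (np.abs(dyaw) <= fov / 2.0)
        return global_map[keep]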
Therefore, when pose recognition is needed, the pose recognition device provided by the embodiment of the present invention can likewise acquire the pose data to be recognized of the terminal device and then determine the area point cloud map corresponding to the position data in that pose data. Meanwhile, when the terminal device acquires the pose data, it can collect laser point cloud data corresponding to the current pose through a built-in laser radar. By comparing the area point cloud map with the laser point cloud data, a confidence score corresponding to the current pose data is determined, and a pose recognition result is obtained from that score, indicating how reliable the currently recognized pose data is. When the reliability is high, a normal response operation can be executed based on the recognized pose data, for example controlling the terminal device to run on a preset track; when the reliability is low, an abnormal response operation can be executed, for example re-recognizing the pose data of the terminal device or updating the map. This greatly improves the robustness of the terminal device in scenes where the environment changes easily.
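Chaining the sketches above into one recognition cycle: get_pose() and get_laser_scan() are assumed device interfaces, and find_correspondences() is an assumed stand-in for the feature-matching step (searching second feature points for the first feature points); none of these names or thresholds come from the patent.

    import numpy as np

    def find_correspondences(scan: np.ndarray, area_map: np.ndarray, n: int = 32):
        """Assumed feature-matching step: pair each of n sampled scan points
        with its nearest neighbour in the area map (brute force)."""
        pts = scan[np.linspace(0, len(scan) - 1, n).astype(int)]
        d2 = ((pts[:, None, :] - area_map[None, :, :]) ** 2).sum(-1)
        return pts, area_map[d2.argmin(axis=1)]

    def recognize_pose_once(device, global_map: np.ndarray, state: dict) -> str:
        """One acquisition cycle through units 71-75, composed from the
        illustrative sketches above; thresholds are placeholders."""
        position, attitude_angle = device.get_pose()                  # unit 71
        area_map = extract_area_map(global_map, position,
                                    attitude_angle, radius=10.0)      # unit 72
        scan = device.get_laser_scan()                                # unit 73
        first_pts, second_pts = find_correspondences(scan, area_map)
        bel_x = evaluate_confidence(matching_degree(first_pts, second_pts))  # unit 74
        if bel_x > 0.8:                                               # unit 75
            result = "ok"
        elif bel_x > 0.4:
            result = "pose_deviation"
        else:
            result = "recognition_abnormality"
        return handle_recognition_result(result, state)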
Fig. 8 is a schematic diagram of a terminal device according to another embodiment of the present invention. As shown in fig. 8, the terminal device 8 of this embodiment includes: a processor 80, a memory 81, and a computer program 82, such as a pose recognition program, stored in the memory 81 and executable on the processor 80. The processor 80, when executing the computer program 82, implements the steps in the above pose recognition method embodiments, for example S101 to S105 shown in fig. 1. Alternatively, the processor 80, when executing the computer program 82, implements the functions of the units in the device embodiments described above, such as the functions of the units 71 to 75 shown in fig. 7.
Illustratively, the computer program 82 may be divided into one or more units, which are stored in the memory 81 and executed by the processor 80 to implement the present invention. The one or more units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments describe the execution of the computer program 82 in the terminal device 8. For example, the computer program 82 may be divided into a pose data acquisition unit, an area point cloud map acquisition unit, a laser point cloud data acquisition unit, a confidence score determination unit, and a pose recognition result generation unit, whose functions are as described above.
The terminal device may include, but is not limited to, the processor 80 and the memory 81. Those skilled in the art will appreciate that fig. 8 is merely an example of the terminal device 8 and does not constitute a limitation on it; the terminal device 8 may include more or fewer components than shown, combine some components, or use different components. For example, the terminal device may also include input-output devices, network access devices, buses, and the like.
The processor 80 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 81 may be an internal storage unit of the terminal device 8, such as a hard disk or an internal memory of the terminal device 8. The memory 81 may also be an external storage device of the terminal device 8, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the terminal device 8. Further, the memory 81 may include both an internal storage unit and an external storage device of the terminal device 8. The memory 81 is used for storing the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
The above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced; such modifications and substitutions do not depart in substance from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A pose recognition method, applied to a terminal device, characterized by comprising the following steps:
acquiring pose data to be identified of the terminal device, the pose data comprising position data and attitude data;
acquiring a regional point cloud map with the position data as a center;
acquiring, through a built-in laser radar, laser point cloud data corresponding to the actual pose of the terminal device;
determining a confidence score corresponding to each piece of pose data according to the laser point cloud data and the regional point cloud map;
generating a pose recognition result based on the confidence score of the pose data.
2. The recognition method according to claim 1, wherein the determining a confidence score corresponding to each piece of pose data according to the laser point cloud data and the regional point cloud map comprises:
importing the laser point cloud data and the regional point cloud map into a preset point cloud matching degree algorithm, and calculating the matching degree between the laser point cloud data and the regional point cloud map;
importing the matching degree into a preset evaluation function to obtain the confidence score corresponding to the pose data; the evaluation function is specifically:
Bel(x) = f[score(p)]  (the concrete formula is given only as an image in the original document)
where score(p) is the matching degree, f[score(p)] is the evaluation function, and Bel(x) is the confidence score.
3. The recognition method according to claim 2, wherein the importing the laser point cloud data and the regional point cloud map into a preset point cloud matching degree algorithm, and calculating the matching degree between the laser point cloud data and the regional point cloud map comprises:
searching, in the regional point cloud map, for second characteristic points corresponding to any N first characteristic points in the laser point cloud data, and generating a point cloud conversion matrix based on the N first characteristic points and the N second characteristic points, wherein N is a positive integer greater than or equal to 3;
generating a laser conversion matrix corresponding to the laser point cloud data based on the point cloud conversion matrix;
calculating a deviation distance between the first characteristic point in the laser conversion matrix and a second characteristic point corresponding to the first characteristic point in the regional point cloud map;
and obtaining the matching degree based on all the deviation distances.
4. The recognition method according to any one of claims 1 to 3, wherein the acquiring the pose data to be identified of the terminal device comprises:
acquiring the pose data at a preset acquisition cycle while the terminal device is moving;
correspondingly, the generating a pose recognition result based on the confidence score of the pose data comprises:
if the pose recognition result corresponding to any acquisition cycle is pose deviation, adjusting the pose of the terminal device or updating the pose data of that acquisition cycle based on the laser point cloud data and the regional point cloud map.
5. The recognition method of claim 4, wherein generating a pose recognition result based on the confidence score of the pose data further comprises:
if the pose recognition results corresponding to M consecutive acquisition cycles are all pose recognition abnormalities, generating a position loss instruction, wherein M is greater than or equal to a preset abnormal response threshold; and
in response to the position loss instruction, controlling the terminal device to move to a preset reset position.
6. The recognition method of claim 4, wherein generating a pose recognition result based on the confidence score of the pose data further comprises:
if the pose recognition results corresponding to Q consecutive acquisition cycles are all pose recognition abnormalities, updating the regional point cloud map based on the laser point cloud data acquired in those acquisition cycles, wherein Q is less than a preset abnormal response threshold.
7. The recognition method according to any one of claims 1 to 3, wherein the attitude data includes an attitude angle of the terminal device; the acquiring of the regional point cloud map with the position data as the center comprises the following steps:
determining a central point corresponding to the position data from a preset global point cloud map;
determining an effective identification area by taking the central point as the circle center, the attitude angle as the initial angle, and a preset angular resolution as the radius;
and intercepting the point cloud map corresponding to the effective identification area from the global point cloud map as the area point cloud map.
8. An apparatus for identifying a pose, comprising:
the pose data acquisition unit is used for acquiring pose data to be identified of the terminal equipment; the pose data comprises position data and attitude data;
the regional point cloud map acquisition unit is used for acquiring a regional point cloud map with the position data as the center;
the laser point cloud data acquisition unit is used for acquiring laser point cloud data corresponding to the actual pose of the terminal equipment through a built-in laser radar;
the confidence score determining unit is used for determining a confidence score corresponding to each pose data according to the laser point cloud data and the regional point cloud map;
a pose recognition result generation unit configured to generate a pose recognition result based on the confidence score of the pose data.
9. A terminal device, characterized in that the terminal device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
CN202110404780.6A 2021-04-15 2021-04-15 Pose recognition method and terminal equipment Active CN113112478B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110404780.6A CN113112478B (en) 2021-04-15 2021-04-15 Pose recognition method and terminal equipment

Publications (2)

Publication Number Publication Date
CN113112478A true CN113112478A (en) 2021-07-13
CN113112478B CN113112478B (en) 2023-12-15

Family

ID=76717131

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110404780.6A Active CN113112478B (en) 2021-04-15 2021-04-15 Pose recognition method and terminal equipment

Country Status (1)

Country Link
CN (1) CN113112478B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108732584A (en) * 2017-04-17 2018-11-02 百度在线网络技术(北京)有限公司 Method and apparatus for updating map
US20190323843A1 (en) * 2018-07-04 2019-10-24 Baidu Online Network Technology (Beijing) Co., Ltd. Method for generating a high precision map, apparatus and storage medium
CN111735439A (en) * 2019-03-22 2020-10-02 北京京东尚科信息技术有限公司 Map construction method, map construction device and computer-readable storage medium
CN110561423A (en) * 2019-08-16 2019-12-13 深圳优地科技有限公司 pose transformation method, robot and storage medium
CN111076733A (en) * 2019-12-10 2020-04-28 亿嘉和科技股份有限公司 Robot indoor map building method and system based on vision and laser slam
CN112414403A (en) * 2021-01-25 2021-02-26 湖南北斗微芯数据科技有限公司 Robot positioning and attitude determining method, equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116664684A (en) * 2022-12-13 2023-08-29 荣耀终端有限公司 Positioning method, electronic device and computer readable storage medium
CN116664684B (en) * 2022-12-13 2024-04-05 荣耀终端有限公司 Positioning method, electronic device and computer readable storage medium

Also Published As

Publication number Publication date
CN113112478B (en) 2023-12-15

Similar Documents

Publication Publication Date Title
KR101948728B1 (en) Method and system for collecting data
CN110936383B (en) Obstacle avoiding method, medium, terminal and device for robot
CN109974727B (en) Robot charging method and device and robot
CN110579738B (en) Moving target direction angle obtaining method and terminal equipment
CN113074727A (en) Indoor positioning navigation device and method based on Bluetooth and SLAM
CN113777600B (en) Multi-millimeter wave radar co-location tracking method
KR101341204B1 (en) Device and method for estimating location of mobile robot using raiser scanner and structure
US10902610B2 (en) Moving object controller, landmark, and moving object control method
CN112729301A (en) Indoor positioning method based on multi-source data fusion
CN113985465A (en) Sensor fusion positioning method and system, readable storage medium and computer equipment
CN113112478B (en) Pose recognition method and terminal equipment
CN109737563B (en) Control method and device based on induction array, storage medium and computer equipment
JP2023503750A (en) ROBOT POSITIONING METHOD AND DEVICE, DEVICE, STORAGE MEDIUM
CN112689234B (en) Indoor vehicle positioning method, device, computer equipment and storage medium
CN115962787B (en) Map updating and automatic driving control method, device, medium and vehicle
CN109769206B (en) Indoor positioning fusion method and device, storage medium and terminal equipment
CN109782616B (en) Control method and device based on induction array, storage medium and computer equipment
CN109489658B (en) Moving target positioning method and device and terminal equipment
CN111630346B (en) Improved positioning of mobile devices based on images and radio words
JP5953393B2 (en) Robot system and map updating method
CN113741447B (en) Robot charging pile alignment method and device, terminal equipment and storage medium
KR102481615B1 (en) Method and system for collecting data
CN111372051B (en) Multi-camera linkage blind area detection method and device and electronic equipment
KR102252823B1 (en) Apparatus and method for tracking targets and releasing warheads
KR20200043329A (en) Method and system for collecting data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant