CN111199198A - Image target positioning method, image target positioning device and mobile robot - Google Patents

Image target positioning method, image target positioning device and mobile robot

Info

Publication number
CN111199198A
CN111199198A (application number CN201911376182.1A)
Authority
CN
China
Prior art keywords
image
target
color
limb
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911376182.1A
Other languages
Chinese (zh)
Other versions
CN111199198B (en)
Inventor
罗志平
程骏
李清凤
庞建新
熊友军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Youbixuan Intelligent Robot Co ltd
Original Assignee
Ubtech Robotics Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ubtech Robotics Corp filed Critical Ubtech Robotics Corp
Priority to CN201911376182.1A priority Critical patent/CN111199198B/en
Publication of CN111199198A publication Critical patent/CN111199198A/en
Application granted granted Critical
Publication of CN111199198B publication Critical patent/CN111199198B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2135Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The application is applicable to the technical field of image processing, and provides an image target positioning method, an image target positioning device, a mobile robot and a computer readable storage medium, wherein the method comprises the following steps: acquiring an image of a collected person and a shooting distance of the image, wherein the shooting distance is the distance between the collected person and the mobile robot when the mobile robot shoots the image; determining a target distance interval to which the shooting distance belongs in more than two preset distance intervals; and positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval. By the method, the time and labor cost for collecting the data set can be saved.

Description

Image target positioning method, image target positioning device and mobile robot
Technical Field
The present application relates to image processing technologies, and in particular, to an image target positioning method, an image target positioning apparatus, a mobile robot, and a computer-readable storage medium.
Background
Gesture recognition plays an increasingly important role in the field of robotics. By recognizing gestures, a robot can execute the corresponding instructions, realizing contactless interaction. Gesture recognition on a robot is mainly realized based on deep learning, and a large data set is therefore needed to train the deep learning model when developing a gesture recognition robot.
However, existing data set acquisition methods cannot meet the requirement that a mobile robot acquire images of the collected people at different distances in real time while moving and automatically label the hand regions in the images; such methods are time-consuming and labor-intensive.
Disclosure of Invention
In view of the above, the present application provides an image target positioning method, an image target positioning apparatus, a mobile robot and a computer-readable storage medium, which can save the time and labor cost for data set acquisition.
In a first aspect, the present application provides an image target positioning method applied to a mobile robot, including:
acquiring an image of a collected person and a shooting distance of the image, wherein the shooting distance is a distance between the collected person and the mobile robot when the mobile robot shoots the image;
determining a target distance interval to which the shooting distance belongs in more than two preset distance intervals;
and positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval.
In a second aspect, the present application provides an image target positioning apparatus, comprising:
an image acquisition unit configured to acquire an image of a person to be captured and a shooting distance of the image, where the shooting distance is a distance between the person to be captured and the mobile robot when the mobile robot shoots the image;
an interval determination unit configured to determine a target distance interval to which the shooting distance belongs, among two or more preset distance intervals;
and a limb part positioning unit for positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval.
In a third aspect, the present application provides a mobile robot, comprising a camera, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method provided in the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the method as provided in the first aspect.
In a fifth aspect, the present application provides a computer program product, which, when run on a mobile robot, causes the mobile robot to perform the method provided by the first aspect described above.
As can be seen from the above, in the present application, an image of a captured person and a shooting distance of the image are first obtained, where the shooting distance is a distance between the captured person and the mobile robot when the mobile robot shoots the image; then determining a target distance interval to which the shooting distance belongs in more than two preset distance intervals; and finally, positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval. According to the scheme, the mobile robot can acquire the images of the collected people at different distances in real time in the moving process, and the limb parts in the images are automatically marked, so that the time and labor cost for collecting the data set are greatly saved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a schematic flowchart of an image target positioning method according to an embodiment of the present application;
FIG. 2 is an exemplary diagram of a third positioning method provided in an embodiment of the present application;
FIG. 3 is a schematic structural diagram of an image target locating device according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a mobile robot provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Fig. 1 shows a flowchart of an image target positioning method provided in an embodiment of the present application, where the image target positioning method is applied to a mobile robot, and is detailed as follows:
step 101, acquiring an image of a person to be acquired and a shooting distance of the image;
in an embodiment of the present invention, the shooting distance is a distance between the captured person and the mobile robot when the mobile robot shoots the image. The mobile robot is provided with a depth camera, and in the moving process of the mobile robot, the collected people are shot in real time through the depth camera, so that the color image, the depth image and the infrared image of the collected people at different shooting distances can be obtained. In a certain shooting distance range, the depth camera can also track the human skeleton of the collected person to obtain a plurality of joint points of the collected person, such as head joint points, hand joint points, wrist joint points and the like. For example, the depth camera may be a Kinect v2, the Kinect v2 may acquire a depth map with 512x424 resolution, an infrared map with 512x424 resolution and a color map with 1920x 1080 resolution, and the Kinect v2 may automatically align pixels of the depth map, the infrared map and the color map. The depth camera may be PMD CARMERA, SoftKinect, and association Phab, which are not limited herein. When the mobile robot shoots each frame of image of the collected person through the depth camera, the mobile robot measures shooting distance through the distance sensor.
Step 102, determining a target distance interval to which the shooting distance belongs in more than two preset distance intervals;
in the embodiment of the present application, at least two distance intervals are pre-divided according to the performance of the depth camera. The fact that the depth camera is Kinect v2 is used for explaining, when the shooting distance is in the range of 0.8-4.5 m, the Kinect v2 can accurately track the joint point of the collected person; when the shooting distance of the Kinect v2 is below 0.5 m, the joint points of the collected person cannot be tracked, and the shot depth map and the shot infrared map are not accurate enough; when the shooting distance is 0.5-0.8 m or more than 4.5 m, the Kinect v2 can not track the joint point of the collected person, but the shot depth map and infrared map are accurate. For the three situations, four distance intervals are preset: less than 0.5 m, 0.5-0.8 m, 0.8-4.5 m and more than 4.5 m. And comparing the shooting distance with each distance section, and determining the distance section to which the shooting distance belongs as a target distance section.
And 103, positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval.
In the embodiment of the present application, because the quality of the images of the collected person differs for the same depth camera at different shooting distances, different limb part positioning methods need to be adopted to position the limb part (such as a hand) of the collected person in images of different quality. Therefore, a limb part positioning method is set for each distance interval. Continuing the Kinect v2 example of step 102: the limb part positioning method corresponding to the 0.5-0.8 m interval and the above-4.5 m interval is a first positioning method; the limb part positioning method corresponding to the 0.8-4.5 m interval is a second positioning method; and the limb part positioning method corresponding to the below-0.5 m interval is a third positioning method. Positioning the limb part of the collected person in the image by the limb part positioning method corresponding to the target distance interval means determining the pixel coordinates of the limb part in the image. In addition, the mobile robot may capture an image that does not contain the collected person during the mobile capturing process, in which case limb part positioning will fail; when positioning fails, the image may be discarded. As a possible implementation, after the pixel coordinates of the limb part in the image are obtained, the minimum bounding rectangle of the limb part may be drawn in the image, and the coordinates of the upper left corner of the minimum bounding rectangle and its side length may be saved.
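The annotation step described above could, under the assumption that a square box with a single side length is stored, look roughly like the following sketch; the function name and the square-box convention are illustrative assumptions.

```python
import numpy as np

def save_limb_annotation(limb_pixel_coords):
    """Given an (N, 2) array of (x, y) pixel coordinates of the limb part,
    return the top-left corner and side length of a square bounding box."""
    xs, ys = limb_pixel_coords[:, 0], limb_pixel_coords[:, 1]
    x_min, y_min = xs.min(), ys.min()
    # Use the larger extent so the box is square (an assumed convention,
    # since the text stores only a single side length).
    side = max(xs.max() - x_min, ys.max() - y_min)
    return (int(x_min), int(y_min)), int(side)
```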
Optionally, the limb part is any one of a hand, a shoulder, an arm, a leg or a foot.
Specifically, the limb portion may be any one of a hand, a shoulder, an arm, a leg, or a foot of the body of the subject.
Optionally, a method for positioning a limb portion corresponding to the target distance interval is a first positioning method, and the method for positioning the limb portion of the acquired person in the image in step 103 includes the following steps:
a1, obtaining the center point of the point cloud of the first target and the point cloud of the second target according to the color image and the depth image;
a2, constructing a covariance matrix according to the central point and the point cloud of the second target;
a3, performing principal component analysis on the covariance matrix to obtain at least one principal component vector;
a4, determining the largest principal component vector as the largest principal component vector in the at least one principal component vector;
a5, under the instruction of the maximum principal component vector, obtaining the point cloud of any limb in the limb parts of the collected person;
and A6, obtaining the pixel coordinates of the limb in the color map according to the collected point cloud of the arbitrary limb, and completing the positioning of the limb.
The second target is divided into at least two parts by the first target, the color of the second target is skin color, the color of the first target is a first color which is different from the colors of other areas and is not black, and the other areas are areas except the first target in the color map. The image comprises a color image and a depth image, and pixels of the color image and the depth image are aligned.
Specifically, the second target includes a limb portion of the collected person, and the first target divides the second target into two or more parts: the limb portion and the portions other than the limb portion. For example, the second target is the skin-color portion of the collected person in the color map, and the first target is a wristband worn on the wrist of the collected person. Because the pixels of the color map and the depth map are aligned, the center point of the point cloud of the first target and the point cloud of the second target can be obtained from the color map and the depth map. The center point of the point cloud of the first target and the points of the point cloud of the second target are three-dimensional coordinates, so a covariance matrix can be constructed from the center point and the points of the second target's point cloud located within a certain range around the center point. By performing principal component analysis on the covariance matrix, the covariance matrix is decomposed into a plurality of principal component vectors, and the largest of these principal component vectors is taken as the maximum principal component vector. Under the indication of the maximum principal component vector, the point cloud of any limb (such as a hand) among the limb parts of the collected person can be obtained. According to preset camera parameters (such as the focal length of the depth camera), the point cloud of that limb can be converted into pixel coordinates and depth values of the limb in the depth map, and because the pixels of the depth map and the color map are aligned, the pixel coordinates of the limb in the color map are equal to its pixel coordinates in the depth map.
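A minimal numerical sketch of steps A2 to A4 is given below, assuming the point clouds are plain NumPy arrays of 3-D points; the neighbourhood radius and all names are illustrative assumptions.

```python
import numpy as np

def largest_principal_vector(center_point, second_target_points, radius=0.3):
    """Build a covariance matrix from the second-target points near the
    wristband centre and return the principal component vector with the
    largest eigenvalue (steps A2-A4)."""
    # Keep only points within an assumed radius (metres) of the centre point.
    diffs = second_target_points - center_point
    nearby = second_target_points[np.linalg.norm(diffs, axis=1) < radius]
    # Covariance of the nearby points relative to the wristband centre.
    centered = nearby - center_point
    cov = centered.T @ centered / len(centered)
    # Principal component analysis via eigen-decomposition of the covariance.
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    return eigenvectors[:, np.argmax(eigenvalues)]  # maximum principal vector
```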
Optionally, the step a1 specifically includes:
b1, obtaining the pixel coordinate of the first target in the color picture through color detection;
b2, obtaining the depth value of the first target from the depth map according to the pixel coordinate of the first target;
b3, obtaining the pixel coordinate of the second target in the color picture through a skin color model;
b4, obtaining the depth value of the second target from the depth map according to the pixel coordinate of the second target;
b5, obtaining the center point of the point cloud of the first target according to the pixel coordinate of the first target and the depth value of the first target;
and B6, obtaining the point cloud of the second target according to the pixel coordinate of the second target and the depth value of the second target.
Specifically, since the first color is different from the colors of the other regions, a pixel having the first color in the color map is extracted by color detection (for example, OpenCV-based color detection), and a pixel coordinate of the first object in the color map is obtained. Since the pixels of the color map and the depth map are aligned, the pixel coordinate of the first target in the depth map is equal to the pixel coordinate of the first target in the color map. The depth value of the first target can be obtained from the depth map according to the pixel coordinates of the first target in the depth map. Based on the pixel coordinates of the first target in the depth map and the depth value of the first target, a center point of the point cloud of the first target can be calculated.
Similarly, the pixel coordinates of the second object in the color map can be obtained by detecting the pixels with skin color in the color map through the skin color model. The pixel coordinate of the second target in the depth map is equal to the pixel coordinate of the second target in the color map. The depth value of the second target can be obtained from the depth map according to the pixel coordinates of the second target in the depth map. Based on the pixel coordinates of the second target in the depth map and the depth value of the second target, a point cloud of the second target can be calculated.
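The conversion from pixel coordinates and depth values to 3-D points implied in steps B5 and B6 follows the pinhole camera model; a sketch under the assumption of known camera intrinsics (fx, fy, cx, cy) is shown below, with all names chosen for illustration.

```python
import numpy as np

def pixels_to_point_cloud(pixel_coords, depth_values, fx, fy, cx, cy):
    """Back-project (u, v) pixel coordinates and depth values (in metres)
    into 3-D camera-frame points using the pinhole camera model."""
    u = pixel_coords[:, 0].astype(np.float64)
    v = pixel_coords[:, 1].astype(np.float64)
    z = depth_values.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # (N, 3) point cloud
```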
Optionally, the step B3 specifically includes:
c1, converting the color image into HSV color space image and YCbCr color space image;
c2, segmenting a second target from the HSV color space image through a skin color model to obtain a first skin color image;
c3, segmenting a second target from the YCbCr color space image through a skin color model to obtain a second skin color image;
c4, carrying out logical OR operation of pixel values on the first skin color image and the second skin color image pixel by pixel to obtain a final skin color image;
and C5, obtaining the pixel coordinate of the second target in the color map according to the final skin color map.
Specifically, the color map is converted from an original color space (such as a BGR color space) to an HSV color space to obtain the HSV color space image. And simultaneously, converting the color image from the original color space to an YCbCr color space to obtain the YCbCr color space image. And extracting pixels with colors being skin colors from the HSV color space image through a skin color model so as to realize the segmentation of the second target and obtain the first skin color image. Similarly, a pixel with a color of skin color is extracted from the YCbCr color space image through a skin color model, so as to segment the second target and obtain the second skin color map. And performing logical OR operation on pixel values of the pixels in the first skin color map and the pixels in the second skin color map pixel by pixel to obtain the final skin color map. For example, if the pixel value of a certain pixel in the first skin color map is 1 and the pixel value of a pixel in the second skin color map corresponding to the certain pixel is 0, the logical or operation results in a value of 1. And the pixel coordinate of the second target in the final skin color image is the pixel coordinate of the second target in the color image.
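A rough OpenCV sketch of steps C1 to C4 might look as follows; the HSV and YCbCr threshold ranges are commonly used illustrative values and are not specified by the present application.

```python
import cv2

def skin_mask(color_bgr):
    """Segment skin-coloured pixels in HSV and YCbCr and OR the two masks."""
    hsv = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb)  # Y, Cr, Cb order
    # Illustrative skin-colour ranges; the application gives no numbers.
    mask_hsv = cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))
    mask_ycrcb = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))
    # Pixel-by-pixel logical OR of the two skin maps (step C4).
    final_mask = cv2.bitwise_or(mask_hsv, mask_ycrcb)
    return final_mask  # non-zero pixels give the second target's coordinates
```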
Optionally, the step a5 specifically includes:
and searching the point cloud of any limb in the limb parts of the collected person from the point cloud of the second target along the direction of the maximum principal component vector by taking the central point as a starting point.
Specifically, the point cloud separated by the point cloud of the first target in the point cloud of the second target is searched along the direction of the maximum principal component vector with the central point of the point cloud of the first target as a starting point, and the point cloud of any limb in the limb parts of the collected person can be obtained.
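One possible reading of this search is to project the second target's points onto the maximum principal component vector and keep those lying beyond the wristband center, as sketched below; the sign convention for the direction is an assumption.

```python
import numpy as np

def limb_points_along_direction(center_point, second_target_points, principal_vec):
    """Keep second-target points whose projection onto the maximum principal
    component vector lies on the far side of the wristband centre."""
    direction = principal_vec / np.linalg.norm(principal_vec)
    projections = (second_target_points - center_point) @ direction
    return second_target_points[projections > 0]  # assumed hand-side points
```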
Optionally, a method for positioning a limb portion corresponding to the target distance interval is a second positioning method, and the method for positioning the limb portion of the acquired person in the image in step 103 includes the following steps:
d1, obtaining the point cloud of the collected person based on the depth map;
d2, acquiring limb joint points and wrist joint points in the point cloud of the collected person;
d3, determining a three-dimensional space range based on preset conditions according to the limb joint points and the wrist joint points;
d4, determining the point cloud in the three-dimensional space range in the point cloud of the person to be collected as the point cloud of the limb part of the person to be collected;
d5, obtaining the pixel coordinates of the limb part in the color map according to the point cloud of the limb part, and completing the positioning of the limb part.
Specifically, the limb part is a hand, the image comprises a color map and a depth map, and the pixels of the color map and the depth map are aligned. Because the depth camera has a human skeleton tracking function, the point cloud of the collected person can be calculated based on the pixel coordinates and depth values in the depth map. The hand joint point and the wrist joint point detected by the depth camera are then acquired. A three-dimensional spatial range is determined, based on preset conditions, according to the hand joint point and the wrist joint point, the preset conditions being determined from prior knowledge of the human body structure. The points of the collected person's point cloud that lie within this three-dimensional spatial range are determined to be the point cloud of the limb part (hand) of the collected person. The point cloud of the hand is then converted into pixel coordinates and depth values of the hand in the depth map according to the preset camera parameters of the depth camera. Because the color map and the depth map are pixel-aligned, the pixel coordinates of the hand in the color map are the same as its pixel coordinates in the depth map.
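A sketch of steps D3 and D4 is given below under the assumption that the preset condition is a fixed-size axis-aligned box centered between the wrist and hand joints; the box size and all names are placeholders, since the application only states that the condition comes from prior knowledge of the human body structure.

```python
import numpy as np

def hand_point_cloud(person_points, hand_joint, wrist_joint, half_extent=0.15):
    """Select the points of the person's point cloud that fall inside an
    axis-aligned box centred between the wrist and hand joints (steps D3-D4)."""
    center = (hand_joint + wrist_joint) / 2.0        # assumed box centre
    lower, upper = center - half_extent, center + half_extent
    inside = np.all((person_points >= lower) & (person_points <= upper), axis=1)
    return person_points[inside]
```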
Optionally, if the method for positioning the limb part corresponding to the target distance interval is a third positioning method, the method for positioning the limb part of the person to be captured in the image in the step 103 includes the following steps:
e1, carrying out binarization on the depth map to obtain a depth binary map;
e2, carrying out binarization on the infrared image to obtain an infrared binary image;
e3, carrying out logical AND operation on the depth binary image and the infrared binary image to obtain a contour map of the limb part of the collected person;
e4, obtaining the pixel coordinates of the limb part in the color map according to the outline map, and completing the positioning of the limb part.
Specifically, the image includes a color image, a depth image, and an infrared image, and pixels of the color image, the depth image, and the infrared image are aligned. As shown in fig. 2, the depth map is binarized to obtain a depth binary map, and the infrared map is binarized to obtain an infrared binary map. And performing logical AND operation on the depth binary image and the infrared binary image to obtain a contour map of the limb part of the collected person. The pixel coordinates of the limb part in the outline map are the pixel coordinates of the limb part in the color map.
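An OpenCV sketch of steps E1 to E3 might look as follows, assuming the depth map is in millimetres; the depth and infrared cut-off values are placeholders chosen for illustration.

```python
import cv2
import numpy as np

def close_range_limb_contour(depth_map, ir_map, max_depth_mm=500, min_ir=200):
    """Binarise the depth and infrared maps and AND them to isolate the
    close-range limb silhouette (steps E1-E3)."""
    # Depth binary map: valid pixels closer than the cut-off become 255.
    depth_bin = np.where((depth_map > 0) & (depth_map < max_depth_mm),
                         255, 0).astype(np.uint8)
    # Infrared binary map: bright (close, reflective) pixels become 255.
    ir_bin = np.where(ir_map > min_ir, 255, 0).astype(np.uint8)
    # Logical AND of the two binary maps gives the limb contour map.
    silhouette = cv2.bitwise_and(depth_bin, ir_bin)
    # Non-zero pixels of the silhouette are the limb's pixel coordinates,
    # which equal its coordinates in the aligned colour map (step E4).
    ys, xs = np.nonzero(silhouette)
    return silhouette, np.stack([xs, ys], axis=1)
```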
As can be seen from the above, in the present application, an image of a captured person and a shooting distance of the image are first obtained, where the shooting distance is a distance between the captured person and the mobile robot when the mobile robot shoots the image; then determining a target distance interval to which the shooting distance belongs in more than two preset distance intervals; and finally, positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval. According to the scheme, the mobile robot can acquire the images of the collected people at different distances in real time in the moving process, and the limb parts in the images are automatically marked, so that the time and labor cost for collecting the data set are greatly saved.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 3 is a schematic structural diagram of an image target locating device provided in an embodiment of the present application, where the image target locating device is applicable to a mobile robot, and for convenience of description, only the parts related to the embodiment of the present application are shown.
The image object locating device 300 includes:
an image acquisition unit 301 configured to acquire an image of a person to be captured and a shooting distance of the image, where the shooting distance is a distance between the person to be captured and the mobile robot when the mobile robot shoots the image;
an interval determination unit 302 configured to determine a target distance interval to which the shooting distance belongs, among two or more preset distance intervals;
a limb part positioning unit 303, configured to position the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval.
Optionally, the image includes a color image and a depth image, pixels of the color image and the depth image are aligned, and the limb part positioning method corresponding to the target distance interval is a first positioning method, and then the limb part positioning unit 303 further includes:
a target point cloud obtaining subunit, configured to obtain a center point of a point cloud of a first target and a point cloud of a second target according to the color map and the depth map, where the second target is divided into at least two parts by the first target, a color of the second target is a skin color, the first target is a first color, the first color is different from colors of other areas, and the other areas are areas in the color map other than the first target;
a matrix construction subunit, configured to construct a covariance matrix according to the center point and the point cloud of the second target;
a principal component analysis subunit, configured to perform principal component analysis on the covariance matrix to obtain at least one principal component vector;
a maximum principal component determining subunit, configured to determine the largest principal component vector among the at least one principal component vector as the maximum principal component vector;
a limb point cloud obtaining subunit, configured to obtain, under the indication of the maximum principal component vector, a point cloud of any limb in the limb parts of the collected person;
and the first coordinate positioning subunit is used for obtaining the pixel coordinates of the limb in the color map according to the collected point cloud of any limb so as to complete the positioning of the limb.
Optionally, the target point cloud obtaining subunit further includes:
the color detection subunit is used for obtaining the pixel coordinates of the first target in the color image through color detection;
a first depth obtaining subunit, configured to obtain a depth value of the first target from the depth map according to the pixel coordinate of the first target;
the skin color detection subunit is used for obtaining the pixel coordinate of the second target in the color picture through a skin color model;
a second depth obtaining subunit, configured to obtain a depth value of the second target from the depth map according to the pixel coordinate of the second target;
a central point obtaining subunit, configured to obtain a central point of the point cloud of the first target according to the pixel coordinate of the first target and the depth value of the first target;
and the second target point cloud subunit is used for obtaining the point cloud of the second target according to the pixel coordinate of the second target and the depth value of the second target.
Optionally, the skin color detection subunit further includes:
a color space conversion subunit, configured to convert the color image into an HSV color space image and an YCbCr color space image, respectively;
the HSV segmentation subunit is used for segmenting a second target from the HSV color space image through a skin color model to obtain a first skin color image;
the YCbCr segmentation subunit is used for segmenting a second target from the YCbCr color space image through a skin color model to obtain a second skin color image;
a logic or subunit, configured to perform a pixel-by-pixel logical or operation on the first skin color map and the second skin color map to obtain a final skin color map;
and the final skin color subunit is used for obtaining the pixel coordinate of the second target in the color image according to the final skin color image.
Optionally, the limb point cloud obtaining subunit further includes:
and the point cloud searching subunit is used for searching the point cloud of any limb in the limb parts of the acquired person from the point cloud of the second target along the direction of the maximum principal component vector by taking the central point as a starting point.
Optionally, the image includes a color image and a depth image, pixels of the color image and the depth image are aligned, the limb part is a hand, and a limb part positioning method corresponding to the target distance interval is a second positioning method, and the limb part positioning unit 303 further includes:
the acquired person point cloud obtaining subunit is used for obtaining the point cloud of the acquired person based on the depth map;
the joint acquisition subunit is used for acquiring limb joint points and wrist joint points in the point cloud of the acquired person;
the three-dimensional space determining subunit is used for determining a three-dimensional space range based on preset conditions according to the limb joint point and the wrist joint point;
a limb part point cloud determining subunit, configured to determine a point cloud within the three-dimensional space range from the point clouds of the acquired person as a point cloud of a limb part of the acquired person;
and the second coordinate positioning subunit is used for obtaining the pixel coordinates of the limb part in the color image according to the point cloud of the limb part to complete the positioning of the limb part.
Optionally, the image includes a color image, a depth image, and an infrared image, pixels of the color image, the depth image, and the infrared image are aligned, and if the limb part positioning method corresponding to the target distance interval is a third positioning method, the limb part positioning unit 303 further includes:
a depth binarization subunit, configured to binarize the depth map to obtain a depth binary map;
the infrared binarization subunit is used for binarizing the infrared image to obtain an infrared binarization image;
the logic and subunit is used for carrying out logic and operation on the depth binary image and the infrared binary image to obtain a contour map of the limb part of the collected person;
and the third coordinate positioning subunit is used for obtaining the pixel coordinates of the limb part in the color image according to the outline image so as to complete the positioning of the limb part.
As can be seen from the above, in the present application, an image of a captured person and a shooting distance of the image are first obtained, where the shooting distance is a distance between the captured person and the mobile robot when the mobile robot shoots the image; then determining a target distance interval to which the shooting distance belongs in more than two preset distance intervals; and finally, positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval. According to the scheme, the mobile robot can acquire the images of the collected people at different distances in real time in the moving process, and the limb parts in the images are automatically marked, so that the time and labor cost for collecting the data set are greatly saved.
Fig. 4 is a schematic structural diagram of a mobile robot according to an embodiment of the present application. As shown in fig. 4, the mobile robot 4 of this embodiment includes: at least one processor 40 (only one is shown in fig. 4), a memory 41, a computer program 42 stored in the memory 41 and executable on the at least one processor 40, and a camera 43, wherein the processor 40 implements the following steps when executing the computer program 42:
acquiring an image of a collected person and a shooting distance of the image, wherein the shooting distance is a distance between the collected person and the mobile robot when the mobile robot shoots the image;
determining a target distance interval to which the shooting distance belongs in more than two preset distance intervals;
and positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval.
In a second possible implementation mode provided based on the first possible implementation mode, the image includes a color map and a depth map, pixels of the color map and the depth map are aligned, a limb part positioning method corresponding to the target distance interval is a first positioning method, and the method for positioning the limb part of the captured person in the image includes the following steps:
obtaining a center point of a point cloud of a first target and a point cloud of a second target according to the color map and the depth map, wherein the second target is divided into at least two parts by the first target, the color of the second target is skin color, the first target is a first color, the first color is different from the colors of other areas, and the other areas are areas except the first target in the color map;
constructing a covariance matrix according to the center point and the point cloud of the second target;
performing principal component analysis on the covariance matrix to obtain at least one principal component vector;
determining the largest principal component vector among the at least one principal component vector as a maximum principal component vector;
under the indication of the maximum principal component vector, obtaining point cloud of any limb in the limb parts of the collected person;
and obtaining the pixel coordinates of the limbs in the color map according to the collected point cloud of any limb, and completing the positioning of the limbs.
In a third possible embodiment based on the second possible embodiment, the obtaining a center point of a point cloud of a first target and a point cloud of a second target from the color map and the depth map includes:
obtaining the pixel coordinate of a first target in the color image through color detection;
acquiring a depth value of the first target from the depth map according to the pixel coordinate of the first target;
obtaining the pixel coordinate of a second target in the color image through a skin color model;
acquiring a depth value of the second target from the depth map according to the pixel coordinate of the second target;
obtaining a center point of the point cloud of the first target according to the pixel coordinate of the first target and the depth value of the first target;
and obtaining the point cloud of the second target according to the pixel coordinate of the second target and the depth value of the second target.
In a fourth possible implementation manner provided on the basis of the third possible implementation manner, the obtaining of the pixel coordinates of the second object in the color map through the skin color model includes:
respectively converting the color images into HSV color space images and YCbCr color space images;
a second target is segmented from the HSV color space image through a skin color model to obtain a first skin color image;
a second target is segmented from the YCbCr color space image through a skin color model to obtain a second skin color image;
carrying out logical OR operation of pixel values on the first skin color image and the second skin color image pixel by pixel to obtain a final skin color image;
and obtaining the pixel coordinate of the second target in the color image according to the final skin color image.
In a fifth possible embodiment based on the second possible embodiment, the obtaining a point cloud of any one of the limb parts of the person under acquisition under the instruction of the maximum principal component vector includes:
and searching the point cloud of any limb in the limb parts of the collected person from the point cloud of the second target along the direction of the maximum principal component vector by taking the central point as a starting point.
In a sixth possible embodiment based on the first possible embodiment, the image includes a color map and a depth map, pixels of the color map and the depth map are aligned, the limb part is a hand, a limb part positioning method corresponding to the target distance zone is a second positioning method, and the method of positioning the limb part of the person to be captured in the image includes the steps of:
obtaining the point cloud of the collected person based on the depth map;
acquiring limb joint points and wrist joint points in the point cloud of the collected person;
determining a three-dimensional space range based on preset conditions according to the limb joint points and the wrist joint points;
determining the point cloud in the three-dimensional space range in the point cloud of the collected person as the point cloud of the limb part of the collected person;
and obtaining the pixel coordinates of the limb part in the color map according to the point cloud of the limb part to complete the positioning of the limb part.
In a seventh possible embodiment based on the first possible embodiment, the image includes a color map, a depth map, and an infrared map, pixels of the color map, the depth map, and the infrared map are aligned, and if a limb part positioning method corresponding to the target distance interval is a third positioning method, the method of positioning the limb part of the person to be captured in the image includes the following steps:
carrying out binarization on the depth map to obtain a depth binary map;
carrying out binarization on the infrared image to obtain an infrared binary image;
carrying out logic AND operation on the depth binary image and the infrared binary image to obtain a contour map of the limb part of the collected person;
and obtaining the pixel coordinates of the limb part in the color image according to the outline image to complete the positioning of the limb part.
Those skilled in the art will appreciate that fig. 4 is merely an example of the mobile robot 4, and does not constitute a limitation of the mobile robot 4, and may include more or less components than those shown, or combine some of the components, or different components, such as input and output devices, network access devices, etc.
The processor 40 may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 41 may be an internal storage unit of the mobile robot 4 in some embodiments, such as a hard disk or a memory of the mobile robot 4. In other embodiments, the memory 41 may be an external storage device of the mobile robot 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the mobile robot 4. Further, the memory 41 may include both an internal storage unit and an external storage device of the mobile robot 4. The memory 41 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, other programs, and the like, such as program codes of the computer programs. The above-mentioned memory 41 may also be used to temporarily store data that has been output or is to be output.
As can be seen from the above, in the present application, an image of a captured person and a shooting distance of the image are first obtained, where the shooting distance is a distance between the captured person and the mobile robot when the mobile robot shoots the image; then determining a target distance interval to which the shooting distance belongs in more than two preset distance intervals; and finally, positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval. According to the scheme, the mobile robot can acquire the images of the collected people at different distances in real time in the moving process, and the limb parts in the images are automatically marked, so that the time and labor cost for collecting the data set are greatly saved.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above method embodiments.
The embodiment of the present application provides a computer program product, which when running on a mobile robot, enables the mobile robot to implement the steps in the above method embodiments when executed.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the mobile robot, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash disk, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, computer-readable media may not be electrical carrier signals or telecommunications signals in accordance with legislation and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the above modules or units is only one logical function division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (11)

1. An image target positioning method is applied to a mobile robot, and comprises the following steps:
acquiring an image of a collected person and a shooting distance of the image, wherein the shooting distance is the distance between the collected person and the mobile robot when the mobile robot shoots the image;
determining a target distance interval to which the shooting distance belongs in more than two preset distance intervals;
and positioning the limb part of the acquired person in the image by a limb part positioning method corresponding to the target distance interval.
2. The image target positioning method according to claim 1, wherein the image includes a color image and a depth image, pixels of the color image and the depth image are aligned, the limb positioning method corresponding to the target distance interval is a first positioning method, and the method for positioning the limb of the person to be captured in the image includes the following steps:
obtaining a center point of a point cloud of a first target and a point cloud of a second target according to the color map and the depth map, wherein the second target is divided into at least two parts by the first target, the color of the second target is skin color, the first target is first color, the first color is different from the colors of other areas, and the other areas are areas except the first target in the color map;
constructing a covariance matrix according to the point cloud of the central point and the second target;
performing principal component analysis on the covariance matrix to obtain at least one principal component vector;
determining the largest principal component vector among the at least one principal component vector as a maximum principal component vector;
under the indication of the maximum principal component vector, obtaining point clouds of any limb in the limb parts of the collected person;
and obtaining the pixel coordinates of the limb in the color map according to the collected point cloud of any limb, and completing the positioning of the limb.
3. The image target locating method of claim 2, wherein obtaining the center point of the point cloud of the first target and the point cloud of the second target according to the color map and the depth map comprises:
obtaining the pixel coordinate of a first target in the color image through color detection;
acquiring a depth value of the first target from the depth map according to the pixel coordinate of the first target;
obtaining the pixel coordinate of a second target in the color image through a skin color model;
acquiring a depth value of the second target from the depth map according to the pixel coordinate of the second target;
obtaining a center point of the point cloud of the first target according to the pixel coordinate of the first target and the depth value of the first target;
and obtaining the point cloud of the second target according to the pixel coordinate of the second target and the depth value of the second target.
4. The image target positioning method according to claim 3, wherein obtaining the pixel coordinates of the second target in the color map through the skin color model comprises:
converting the color map into an HSV color space image and a YCbCr color space image respectively;
segmenting the second target from the HSV color space image through the skin color model to obtain a first skin color image;
segmenting the second target from the YCbCr color space image through the skin color model to obtain a second skin color image;
performing a pixel-by-pixel logical OR operation on the pixel values of the first skin color image and the second skin color image to obtain a final skin color image;
and obtaining the pixel coordinates of the second target in the color map according to the final skin color image.
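A minimal OpenCV sketch of the dual-color-space skin segmentation in claim 4. The threshold ranges below are commonly used illustrative values, not the skin color model actually claimed, and OpenCV labels the second space YCrCb rather than YCbCr.

import cv2

def final_skin_color_image(color_bgr):
    """Segment skin-colour pixels in HSV and in YCrCb (OpenCV's ordering of
    YCbCr), then OR the two masks pixel by pixel."""
    hsv = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb)
    first_skin = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))       # illustrative HSV range
    second_skin = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))  # illustrative YCrCb range
    return cv2.bitwise_or(first_skin, second_skin)                   # pixel-by-pixel logical OR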
5. The image target positioning method according to claim 2, wherein obtaining the point cloud of any limb among the limb parts of the collected person as indicated by the maximum principal component vector comprises:
searching for the point cloud of the limb in the point cloud of the second target along the direction of the maximum principal component vector, taking the center point as the starting point.
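One plausible reading of the search step in claim 5, assuming the limb is taken to be the second-target points whose offsets from the center point lie along the maximum principal component vector; the cosine threshold is a hypothetical tuning parameter, not a value from the patent.

import numpy as np

def search_limb_point_cloud(center_point, max_pc_vector, second_target_points,
                            cos_threshold=0.9):
    """Keep the second-target points whose offset from the center point is
    roughly aligned with the maximum principal component vector."""
    offsets = second_target_points - center_point
    norms = np.linalg.norm(offsets, axis=1) + 1e-9         # avoid division by zero
    cosines = offsets @ max_pc_vector / norms               # alignment with the search direction
    return second_target_points[cosines > cos_threshold]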
6. The image target positioning method according to claim 1, wherein the image comprises a color map and a depth map whose pixels are aligned, the limb part is a hand, the limb part positioning method corresponding to the target distance interval is a second positioning method, and positioning the limb part of the collected person in the image comprises the following steps:
obtaining a point cloud of the collected person based on the depth map;
acquiring limb joint points and wrist joint points in the point cloud of the collected person;
determining a three-dimensional space range based on a preset condition according to the limb joint points and the wrist joint points;
determining the points that fall within the three-dimensional space range in the point cloud of the collected person as the point cloud of the limb part of the collected person;
and obtaining the pixel coordinates of the limb part in the color map according to the point cloud of the limb part to complete the positioning of the limb part.
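The claim does not define the preset condition; as a hedged sketch only, the three-dimensional range might be a cylinder-like region extending from the wrist joint away from a limb joint (for example the elbow). The extent and radius values below are hypothetical.

import numpy as np

def hand_point_cloud(limb_joint_point, wrist_joint_point, person_cloud,
                     extent=0.25, radius=0.12):
    """Keep person-cloud points inside a cylinder-like range that extends from
    the wrist joint away from the limb joint; extent and radius (in meters,
    assumed) stand in for the claim's unspecified preset condition."""
    axis = wrist_joint_point - limb_joint_point
    axis = axis / (np.linalg.norm(axis) + 1e-9)
    relative = person_cloud - wrist_joint_point
    along = relative @ axis                                 # signed distance past the wrist
    lateral = np.linalg.norm(relative - np.outer(along, axis), axis=1)
    keep = (along > -0.02) & (along < extent) & (lateral < radius)
    return person_cloud[keep]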
7. The image target positioning method according to claim 1, wherein the image comprises a color map, a depth map and an infrared map whose pixels are aligned, the limb part positioning method corresponding to the target distance interval is a third positioning method, and positioning the limb part of the collected person in the image comprises the following steps:
binarizing the depth map to obtain a depth binary map;
binarizing the infrared map to obtain an infrared binary map;
performing a logical AND operation on the depth binary map and the infrared binary map to obtain a contour map of the limb part of the collected person;
and obtaining the pixel coordinates of the limb part in the color map according to the contour map to complete the positioning of the limb part.
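A minimal sketch of the binarization and logical AND in claim 7, assuming single-channel depth and infrared arrays; the depth band and infrared threshold are illustrative assumptions, not values from the patent.

import numpy as np

def limb_contour_mask(depth_map, infrared_map, depth_band=(200, 600), ir_threshold=120):
    """Binarize the depth map (keep a near-distance band) and the infrared map
    (keep strong, close reflections), then AND the two binary maps."""
    depth_binary = (depth_map > depth_band[0]) & (depth_map < depth_band[1])
    infrared_binary = infrared_map > ir_threshold
    return depth_binary & infrared_binary

Because the maps are pixel-aligned, the outer boundary of the resulting mask can then be traced (for example with cv2.findContours) and read directly as pixel coordinates in the color map.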
8. The image target positioning method according to any one of claims 1 to 7, wherein the limb part is any one of a hand, a shoulder, an arm, a leg or a foot.
9. An image target positioning device, applied to a mobile robot, comprising:
the image acquisition unit is used for acquiring an image of a collected person and a shooting distance of the image, wherein the shooting distance is the distance between the collected person and the mobile robot when the mobile robot shoots the image;
the interval determining unit is used for determining a target distance interval to which the shooting distance belongs in more than two preset distance intervals;
and the limb part positioning unit is used for positioning the limb part of the collected person in the image by a limb part positioning method corresponding to the target distance interval.
10. A mobile robot comprising a camera, a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1 to 8 when executing the computer program.
11. A computer-readable storage medium in which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 8.
CN201911376182.1A 2019-12-27 2019-12-27 Image target positioning method, image target positioning device and mobile robot Active CN111199198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911376182.1A CN111199198B (en) 2019-12-27 2019-12-27 Image target positioning method, image target positioning device and mobile robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911376182.1A CN111199198B (en) 2019-12-27 2019-12-27 Image target positioning method, image target positioning device and mobile robot

Publications (2)

Publication Number Publication Date
CN111199198A true CN111199198A (en) 2020-05-26
CN111199198B CN111199198B (en) 2023-08-04

Family

ID=70744391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911376182.1A Active CN111199198B (en) 2019-12-27 2019-12-27 Image target positioning method, image target positioning device and mobile robot

Country Status (1)

Country Link
CN (1) CN111199198B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447466A (en) * 2015-12-01 2016-03-30 深圳市图灵机器人有限公司 Kinect sensor based identity comprehensive identification method
WO2018120038A1 (en) * 2016-12-30 2018-07-05 深圳前海达闼云端智能科技有限公司 Method and device for target detection
CN108063859A (en) * 2017-10-30 2018-05-22 努比亚技术有限公司 A kind of automatic camera control method, terminal and computer storage media
CN109961406A (en) * 2017-12-25 2019-07-02 深圳市优必选科技有限公司 A kind of method, apparatus and terminal device of image procossing
WO2019198446A1 (en) * 2018-04-10 2019-10-17 株式会社ニコン Detection device, detection method, information processing device, and information processing program
CN110324521A (en) * 2018-04-28 2019-10-11 Oppo广东移动通信有限公司 Control method, apparatus, electronic equipment and the storage medium of camera
CN109544606A (en) * 2018-11-02 2019-03-29 山东大学 Fast automatic method for registering and system based on multiple Kinect
CN110427917A (en) * 2019-08-14 2019-11-08 北京百度网讯科技有限公司 Method and apparatus for detecting key point

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
黄朝美 (Huang Chaomei); 杨马英 (Yang Maying): "Target recognition and localization for mobile robots based on information fusion" *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112085771A (en) * 2020-08-06 2020-12-15 深圳市优必选科技股份有限公司 Image registration method and device, terminal equipment and computer readable storage medium
CN112085771B (en) * 2020-08-06 2023-12-05 深圳市优必选科技股份有限公司 Image registration method, device, terminal equipment and computer readable storage medium
CN113763333A (en) * 2021-08-18 2021-12-07 安徽帝晶光电科技有限公司 Sub-pixel positioning method, positioning system and storage medium
CN113763333B (en) * 2021-08-18 2024-02-13 安徽帝晶光电科技有限公司 Sub-pixel positioning method, positioning system and storage medium
CN114155557A (en) * 2021-12-07 2022-03-08 美的集团(上海)有限公司 Positioning method, positioning device, robot and computer-readable storage medium

Also Published As

Publication number Publication date
CN111199198B (en) 2023-08-04

Similar Documents

Publication Publication Date Title
CN110221690B (en) Gesture interaction method and device based on AR scene, storage medium and communication terminal
CN113536864B (en) Gesture recognition method and device, computer readable storage medium and terminal equipment
CN109145742B (en) Pedestrian identification method and system
JP5715833B2 (en) Posture state estimation apparatus and posture state estimation method
EP3499414B1 (en) Lightweight 3d vision camera with intelligent segmentation engine for machine vision and auto identification
CN112528831B (en) Multi-target attitude estimation method, multi-target attitude estimation device and terminal equipment
CN111460967B (en) Illegal building identification method, device, equipment and storage medium
US20170011523A1 (en) Image processing apparatus, image processing method, and storage medium
CN111199198B (en) Image target positioning method, image target positioning device and mobile robot
CN106648078B (en) Multi-mode interaction method and system applied to intelligent robot
CN113111844B (en) Operation posture evaluation method and device, local terminal and readable storage medium
CN113160276B (en) Target tracking method, target tracking device and computer readable storage medium
CN111191582A (en) Three-dimensional target detection method, detection device, terminal device and computer-readable storage medium
KR20210157194A (en) Crop growth measurement device using image processing and method thereof
CN112861776A (en) Human body posture analysis method and system based on dense key points
CN111199169A (en) Image processing method and device
CN114119695A (en) Image annotation method and device and electronic equipment
CN109993715A (en) A kind of robot vision image preprocessing system and image processing method
CN111680670B (en) Cross-mode human head detection method and device
CN112686122A (en) Human body and shadow detection method, device, electronic device and storage medium
CN113191189A (en) Face living body detection method, terminal device and computer readable storage medium
CN112418089A (en) Gesture recognition method and device and terminal
WO2022205841A1 (en) Robot navigation method and apparatus, and terminal device and computer-readable storage medium
CN114494355A (en) Trajectory analysis method and device based on artificial intelligence, terminal equipment and medium
CN112232272B (en) Pedestrian recognition method by fusing laser and visual image sensor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231211

Address after: Room 601, 6th Floor, Building 13, No. 3 Jinghai Fifth Road, Beijing Economic and Technological Development Zone (Tongzhou), Tongzhou District, Beijing, 100176

Patentee after: Beijing Youbixuan Intelligent Robot Co.,Ltd.

Address before: 518000 16th and 22nd Floors, C1 Building, Nanshan Zhiyuan, 1001 Xueyuan Avenue, Nanshan District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen Youbixuan Technology Co.,Ltd.