CN109669533B - Motion capture method, device and system based on vision and inertia - Google Patents


Info

Publication number
CN109669533B
CN109669533B (application number CN201811303014.5A)
Authority
CN
China
Prior art keywords
data
motion
captured object
posture
image data
Prior art date
Legal status
Active
Application number
CN201811303014.5A
Other languages
Chinese (zh)
Other versions
CN109669533A (en)
Inventor
闫东坤 (Yan Dongkun)
Current Assignee
Beijing Yingdi Mande Technology Co ltd
Original Assignee
Beijing Yingdi Mande Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Yingdi Mande Technology Co ltd
Priority to CN201811303014.5A
Publication of CN109669533A
Application granted
Publication of CN109669533B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • Psychiatry (AREA)
  • Health & Medical Sciences (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention provide a motion capture method, device and system based on vision and inertia. The motion capture method comprises: acquiring image data of a captured object, first motion data of a vision sensor module obtained through the vision sensor module, and second motion data of the captured object obtained through an inertial measurement module; obtaining first pose data of the captured object according to the image data and the first motion data; obtaining second pose data of the captured object according to the second motion data; and fusing the first pose data and the second pose data to obtain the motion information of the captured object. Because the method, device and system combine vision-based motion capture with inertia-based motion capture and fuse the pose data obtained from each, the precision of motion capture is improved.

Description

Motion capture method, device and system based on vision and inertia
Technical Field
The invention relates to the technical field of motion capture, in particular to a motion capture method, device and system based on vision and inertia.
Background
With the maturation of Micro-Electro-Mechanical Systems (MEMS) technology in recent years, inertial motion capture has developed rapidly. In this approach, an Inertial Measurement Unit (IMU) is attached to the object to be measured and moves with it; the IMU measures the angular velocity and acceleration of the object, and these measurements are then processed to obtain position and attitude information. However, this approach has low measurement accuracy.
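To make the drift mechanism concrete, the following minimal Python sketch (an illustration, not part of the patent) integrates IMU angular velocity and specific force into a pose. Any sensor bias or noise is integrated along with the signal, so the position error grows without bound, which is the low-accuracy problem this invention targets:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def dead_reckon(gyro, accel, dt):
    """Naive inertial dead reckoning: integrate angular rate for attitude and
    double-integrate gravity-compensated specific force for position."""
    g = np.array([0.0, 0.0, -9.81])   # gravity in the world frame
    q = R.identity()                  # body-to-world attitude
    v = np.zeros(3)                   # world-frame velocity
    p = np.zeros(3)                   # world-frame position
    for w, f in zip(gyro, accel):     # per-sample body rates and specific force
        q = q * R.from_rotvec(np.asarray(w) * dt)  # attitude update
        a = q.apply(f) + g            # world-frame acceleration
        v = v + a * dt                # first integration: velocity
        p = p + v * dt                # second integration: position (drifts)
    return q, p
```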
Disclosure of Invention
In view of this, embodiments of the present invention provide a motion capture method, device and system based on vision and inertia, so as to solve the problem of low measurement accuracy in existing motion capture methods.
In order to achieve the above objective, the invention adopts the following technical solutions:
according to a first aspect, embodiments of the present invention provide a method of vision and inertia based motion capture, the method comprising: acquiring image data of a captured object, first motion data of a visual sensor module obtained through the visual sensor module and second motion data of the captured object obtained through an inertial measurement module; obtaining first pose data of the captured object according to the image data and the first motion data; obtaining second position and posture data of the captured object according to the second motion data; and fusing the first position and posture data and the second position and posture data to obtain the motion information of the captured object.
With reference to the first aspect, in a first implementation manner of the first aspect, the vision sensor module includes a camera for acquiring the image data; and deriving first pose data of the captured object from the image data and the first motion data comprises: obtaining third pose data of the camera in a world coordinate system according to the image data and the first motion data; obtaining fourth pose data of the captured object in the camera coordinate system according to the image data; and obtaining the first pose data of the captured object in the world coordinate system according to the third pose data and the fourth pose data.
With reference to the first implementation manner of the first aspect, in a second implementation manner of the first aspect, the third pose data of the camera in the world coordinate system is obtained through a simultaneous localization and mapping (SLAM) algorithm.
With reference to the first implementation manner of the first aspect, in a third implementation manner of the first aspect, fourth pose data of the captured object in the camera coordinate system is obtained through a deep learning algorithm.
With reference to the first aspect or any one of the implementation manners of the first aspect, in a fourth implementation manner of the first aspect, fusing the first pose data and the second pose data to obtain the motion information of the captured object includes: fusing the first pose data and the second pose data to obtain pose information of the captured object; and performing dynamics calculation according to the pose information to obtain the motion information of the captured object.
With reference to the fourth implementation manner of the first aspect, in a fifth implementation manner of the first aspect, the first pose data and the second pose data are fused by a complementary filtering algorithm, a least squares algorithm, or a Kalman filtering algorithm.
According to a second aspect, embodiments of the present invention provide a vision and inertia based motion capture device comprising: a data acquisition module for acquiring image data of a captured object, first motion data of a vision sensor module obtained through the vision sensor module, and second motion data of the captured object obtained through an inertial measurement module; a first pose determination module for obtaining first pose data of the captured object according to the image data and the first motion data; a second pose determination module for obtaining second pose data of the captured object according to the second motion data; and a motion information determination module for fusing the first pose data and the second pose data to obtain the motion information of the captured object.
According to a third aspect, the present invention provides a computer-readable storage medium storing computer instructions for causing a computer to perform the motion capture method according to the first aspect of the present invention or any one of the first to fifth embodiments of the first aspect.
According to a fourth aspect, embodiments of the present invention provide a vision and inertia based motion capture device comprising: a memory and a processor, the memory and the processor being communicatively connected to each other, the memory having stored therein computer instructions, and the processor executing the computer instructions to perform the motion capture method according to the first aspect of the present invention or any one of the first to fifth embodiments of the first aspect.
According to a fifth aspect, embodiments of the present invention provide a vision and inertia based motion capture system comprising: a vision sensor module, an inertial measurement module and a host computer, the vision sensor module including a camera and an inertial measurement unit. The camera acquires image data of a captured object and sends the image data to the host computer; the inertial measurement unit acquires first motion data of the vision sensor module and sends the first motion data to the host computer; the inertial measurement module acquires second motion data of the captured object and sends the second motion data to the host computer; and the host computer receives the image data, the first motion data and the second motion data, obtains first pose data of the captured object according to the image data and the first motion data, obtains second pose data of the captured object according to the second motion data, and fuses the first pose data and the second pose data to obtain the motion information of the captured object.
Compared with the prior art, the technical scheme of the invention at least has the following advantages:
the embodiments of the invention provide a motion capture method, device and system based on vision and inertia, wherein the motion capture method obtains first pose data of a captured object according to image data of the captured object and first motion data of a vision sensor module, obtains second pose data of the captured object according to second motion data of the captured object, and fuses the first pose data and the second pose data to finally obtain the motion information of the captured object. Because vision-based motion capture and inertia-based motion capture are combined and the pose data obtained from each are fused, the precision of motion capture is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flow chart of one specific example of a method of vision and inertia based motion capture in an embodiment of the present invention;
FIG. 2 is a flow chart of another specific example of a method of vision and inertia based motion capture in an embodiment of the present invention;
FIG. 3 is a functional block diagram of one particular example of a vision and inertia based motion capture device in accordance with embodiments of the present invention;
FIG. 4 is a schematic diagram of one specific example of a vision and inertia based motion capture system in accordance with embodiments of the present invention;
FIG. 5 is a schematic diagram of one specific example of a vision and inertia based motion capture device in an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
An embodiment of the present invention provides a motion capture method based on vision and inertia, as shown in fig. 1, the motion capture method includes:
step S1: acquiring image data of a captured object, first motion data of a vision sensor module obtained through the vision sensor module and second motion data of the captured object obtained through an inertia measurement module; the first motion data is the motion angular velocity and the acceleration of the vision sensor module obtained through the vision sensor module, and the second motion data is the motion angular velocity and the acceleration of the captured object obtained through the inertia measurement module.
Step S2: obtaining first pose data of the captured object according to the image data and the first motion data.
Step S3: second pose data of the captured object is derived from the second motion data.
Step S4: fusing the first pose data and the second pose data to obtain the motion information of the captured object.
Through the above steps S1 to S4, the embodiment of the present invention provides a method for motion capture based on vision and inertia, in which first pose data of a captured object is obtained from image data of the captured object and first motion data of a vision sensor module, second pose data of the captured object is obtained from second motion data of the captured object, and the first pose data and the second pose data are fused to finally obtain motion information of the captured object.
In a preferred embodiment, in step S1, the vision sensor module includes a camera, which may be an OV580 binocular camera; images of the captured object are continuously captured by the camera to acquire the image data. The vision sensor module further includes an inertial measurement unit, which may be a motion sensor, through which the angular velocity and acceleration of the vision sensor module, i.e., the first motion data, are obtained. The inertial measurement module may also be a motion sensor, through which the angular velocity and acceleration of the captured object, i.e., the second motion data, are obtained.
As shown in fig. 2, the step S2 of obtaining the first pose data of the captured object according to the image data and the first motion data specifically includes:
step S21: obtaining third posture data of the camera in a world coordinate system according to the image data and the first motion data; preferably, the third pose data of the camera in the world coordinate system is obtained through a simultaneous localization and mapping (SLAM) algorithm.
Step S22: obtaining fourth pose data of the captured object in the camera coordinate system according to the image data. Specifically, distortion correction is performed on the image data to obtain an undistorted image, and deep learning based on TensorFlow is applied to the undistorted image to obtain semantic information about the captured object, from which the fourth pose data of the captured object in the camera coordinate system is derived. It should be noted that any existing image distortion correction method may be used; the present invention is not limited in this respect.
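As one common (but not mandated) way to perform the distortion correction, OpenCV's undistort can be applied with pre-calibrated intrinsics; the matrix and coefficient values below are placeholders, not parameters from the patent:

```python
import cv2
import numpy as np

# Illustrative intrinsics and distortion coefficients; in practice these come
# from a one-off calibration of the camera (e.g., a checkerboard calibration).
K = np.array([[458.0,   0.0, 320.0],
              [  0.0, 458.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.28, 0.07, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

img = cv2.imread("frame.png")              # one frame of the image data (placeholder path)
undistorted = cv2.undistort(img, K, dist)  # undistorted image for the detector
```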
Step S23: obtaining the first pose data of the captured object in the world coordinate system according to the third pose data and the fourth pose data. Specifically, the pose of the captured object in the camera coordinate system is combined with the pose of the camera in the world coordinate system, so as to derive the pose of the captured object in the world coordinate system.
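If the two poses are written as 4x4 rigid transforms, step S23 reduces to a single matrix composition; a minimal sketch (the notation is ours, not the patent's):

```python
import numpy as np

def to_homogeneous(Rm, t):
    """Pack a 3x3 rotation Rm and translation t into a 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3] = Rm
    T[:3, 3] = t
    return T

def object_pose_in_world(T_world_cam, T_cam_obj):
    """First pose data: chain the camera-in-world transform (third pose data)
    with the object-in-camera transform (fourth pose data)."""
    return T_world_cam @ T_cam_obj

# Example: identity camera pose; object one meter in front of the camera.
T_wc = to_homogeneous(np.eye(3), np.zeros(3))
T_co = to_homogeneous(np.eye(3), np.array([0.0, 0.0, 1.0]))
T_wo = object_pose_in_world(T_wc, T_co)  # object pose in the world frame
```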
In a preferred embodiment, in step S3, Kalman filtering may be performed on the angular velocity data and the acceleration data of the captured object, and the position and attitude of the captured object, i.e., the second pose data, are then obtained through pose solution. It should be noted that any pose solution method in the prior art may be used; the present invention is not limited in this respect.
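Since the patent does not fix a particular pose-solution method, the sketch below substitutes a simple complementary-filter attitude step (gyro integration corrected by the gravity direction seen by the accelerometer) as a stand-in illustration, not the patent's exact procedure:

```python
import numpy as np

def attitude_step(roll, pitch, gyro, accel, dt, alpha=0.98):
    """One update of a complementary attitude solution: predict roll and pitch
    by integrating body rates, then correct with the accelerometer's gravity
    direction. alpha trades gyro drift against accelerometer noise."""
    roll_g = roll + gyro[0] * dt                     # gyro prediction
    pitch_g = pitch + gyro[1] * dt
    roll_a = np.arctan2(accel[1], accel[2])          # gravity-based observation
    pitch_a = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))
    return (alpha * roll_g + (1.0 - alpha) * roll_a,    # high-pass the gyro,
            alpha * pitch_g + (1.0 - alpha) * pitch_a)  # low-pass the accel
```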
In a preferred embodiment, as shown in fig. 2, the step S4 of fusing the first pose data and the second pose data to obtain the motion information of the captured object includes:
step S41: fusing the first position and posture data and the second position and posture data to obtain position and posture information of the captured object; preferably, the first position and attitude data and the second position and attitude data are fused through a complementary filtering algorithm, a least square algorithm or a kalman filtering algorithm, so as to obtain the position and attitude information of the captured object with high precision and high stability.
Step S42: performing dynamics calculation according to the pose information to obtain the motion information of the captured object. Specifically, forward dynamics is applied to the fused pose information to obtain the end positions of the skeleton tree of the captured object, and inverse dynamics is then performed from those end positions to obtain a continuous and stable motion-capture skeleton structure, thereby obtaining the motion information of the captured object.
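The tree traversal underlying this step can be illustrated with a short sketch: walking the skeleton from the root and chaining each joint's local transform yields the world transforms whose leaf entries are the end positions mentioned above (the tree layout and inputs here are assumptions for illustration):

```python
import numpy as np

def forward_kinematics(parents, local_transforms):
    """Chain each joint's local 4x4 transform onto its parent's world transform,
    walking the skeleton tree root-to-leaves. Joints are assumed to be listed in
    topological order (each parent before its children); parents[root] == -1.
    The entries for leaf joints are the end positions of the skeleton tree."""
    world = [np.eye(4)] * len(parents)
    for i, p in enumerate(parents):
        world[i] = local_transforms[i] if p < 0 else world[p] @ local_transforms[i]
    return world
```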
An embodiment of the present invention further provides a motion capture device based on vision and inertia. As shown in fig. 3, the motion capture device includes: a data acquisition module 1, configured to acquire image data of a captured object, first motion data of a vision sensor module obtained through the vision sensor module, and second motion data of the captured object obtained through an inertial measurement module (for details, see the description of step S1 in the foregoing method embodiment); a first pose determination module 2, configured to obtain first pose data of the captured object according to the image data and the first motion data (see step S2); a second pose determination module 3, configured to obtain second pose data of the captured object according to the second motion data (see step S3); and a motion information determination module 4, configured to fuse the first pose data and the second pose data to obtain the motion information of the captured object (see step S4).
Through the data acquisition module 1, the first pose determination module 2, the second pose determination module 3 and the motion information determination module 4, the embodiment of the invention provides a motion capture device based on vision and inertia, which obtains first pose data of a captured object according to image data of the captured object and first motion data of a vision sensor module, obtains second pose data of the captured object according to second motion data of the captured object, and fuses the first pose data and the second pose data to finally obtain the motion information of the captured object.
An embodiment of the present invention further provides a motion capture system based on vision and inertia. As shown in fig. 4, the motion capture system includes: a vision sensor module 5, an inertial measurement module 6 and a host computer 7, where the vision sensor module 5 includes a camera 51 and an inertial measurement unit 52. The camera 51 acquires image data of a captured object and sends the image data to the host computer 7; the inertial measurement unit 52 acquires first motion data of the vision sensor module and sends the first motion data to the host computer 7; the inertial measurement module 6 acquires second motion data of the captured object and sends the second motion data to the host computer 7. The host computer 7 receives the image data, the first motion data and the second motion data, obtains first pose data of the captured object according to the image data and the first motion data, obtains second pose data of the captured object according to the second motion data, and fuses the first pose data and the second pose data to obtain the motion information of the captured object.
Through the vision sensor module 5, the inertial measurement module 6 and the host computer 7, the motion capture system according to the embodiment of the present invention obtains first pose data of the captured object according to image data of the captured object and first motion data of the vision sensor module, obtains second pose data of the captured object according to second motion data of the captured object, and fuses the first pose data and the second pose data to finally obtain the motion information of the captured object.
In an optional embodiment of the present invention, the camera 51 may be an OV580 binocular camera, which may shoot statically or shoot while moving in combination with the first motion data of the vision sensor module; images of the captured object are continuously captured by the camera to obtain the image data. The inertial measurement unit 52 may be an MPU6500, whose three-axis gyroscope and three-axis accelerometer provide the angular velocity data and acceleration data of the vision sensor module, i.e., the first motion data. The inertial measurement module 6 may likewise be an MPU6500 bound to the body of the object to be captured and rigidly fixed with flexible straps; specifically, a plurality of MPU6500 units are bound to the left and right arms, left and right legs, head, back and other parts of the object to be captured, and the second motion data of the object to be captured is obtained from the angular velocity and acceleration data measured by the three-axis gyroscope and three-axis accelerometer in each MPU6500.
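For illustration, raw MPU-series samples are signed 16-bit counts that must be scaled to physical units before use; the sketch below assumes the common plus-or-minus 2 g and plus-or-minus 250 deg/s full-scale settings, which the patent does not specify:

```python
import numpy as np

# Assumed full-scale settings (not stated in the patent): +/-2 g and
# +/-250 deg/s give 16384 LSB/g and 131 LSB per deg/s on MPU-series IMUs.
ACCEL_LSB_PER_G = 16384.0
GYRO_LSB_PER_DPS = 131.0
G0 = 9.80665  # standard gravity, m/s^2

def convert_raw(raw_accel, raw_gyro):
    """Convert signed 16-bit IMU counts to m/s^2 and rad/s."""
    accel = np.asarray(raw_accel, dtype=float) / ACCEL_LSB_PER_G * G0
    gyro = np.deg2rad(np.asarray(raw_gyro, dtype=float) / GYRO_LSB_PER_DPS)
    return accel, gyro
```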
The details of the above-mentioned vision and inertia based motion capture system can be understood by referring to the corresponding related descriptions and effects in the embodiment shown in fig. 1 and fig. 2, and will not be described herein again.
Embodiments of the present invention also provide a motion capture device based on vision and inertia. As shown in fig. 5, the device may include a processor 81 and a memory 82, which may be connected by a bus or by other means; connection by a bus is taken as an example in fig. 5.
Processor 81 may be a Central Processing Unit (CPU). The processor 81 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory 82, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as the program instructions/modules corresponding to the vision and inertia based motion capture method in the embodiments of the present invention (e.g., the data acquisition module 1, the first pose determination module 2, the second pose determination module 3, and the motion information determination module 4 shown in fig. 3). The processor 81 executes the non-transitory software programs, instructions and modules stored in the memory 82, thereby performing various functional applications and data processing, i.e., implementing the vision and inertia based motion capture method in the above method embodiments.
The memory 82 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor 81, and the like. Further, the memory 82 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 82 may optionally include memory located remotely from the processor 81, which may be connected to the processor 81 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 82 and, when executed by the processor 81, perform a vision and inertia based motion capture method as in the embodiments of fig. 1 and 2.
The details of the above-described vision and inertia based motion capture device may be understood with reference to the corresponding related descriptions and effects in the embodiments shown in fig. 1 and 2, and will not be described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disc, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory, a hard disk drive (HDD) or a solid-state drive (SSD); the storage medium may also comprise a combination of the above kinds of memory.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (8)

1. A method of vision and inertia based motion capture, comprising:
acquiring image data of a captured object, first motion data of a visual sensor module obtained through the visual sensor module and second motion data of the captured object obtained through an inertial measurement module;
obtaining first pose data of the captured object according to the image data and the first motion data;
obtaining second pose data of the captured object according to the second motion data;
fusing the first pose data and the second pose data to obtain motion information of the captured object, wherein the motion information is determined according to a skeleton structure of the captured object, and the skeleton structure is obtained by performing dynamics calculation according to pose information determined from the first pose data and the second pose data;
the vision sensor module includes a camera for acquiring the image data;
deriving first pose data of the captured object from the image data and the first motion data comprises:
obtaining third pose data of the camera in a world coordinate system according to the image data and the first motion data;
obtaining fourth pose data of the captured object in the camera coordinate system according to the image data;
and obtaining the first pose data of the captured object in the world coordinate system according to the third pose data and the fourth pose data.
2. The motion capture method of claim 1, wherein the third pose data of the camera in the world coordinate system is obtained by a simultaneous localization and mapping algorithm.
3. The motion capture method of claim 1, wherein the fourth pose data of the captured object in the camera coordinate system is obtained by a deep learning algorithm.
4. The motion capture method of claim 1, wherein the first pose data and the second pose data are fused by a complementary filtering algorithm, a least squares algorithm, or a Kalman filtering algorithm.
5. A vision and inertia based motion capture device, comprising:
the data acquisition module is used for acquiring image data of a captured object, first motion data of the vision sensor module obtained through the vision sensor module and second motion data of the captured object obtained through the inertia measurement module;
a first pose determination module for obtaining first pose data of the captured object according to the image data and the first motion data;
a second pose determination module for obtaining second pose data of the captured object according to the second motion data;
a motion information determination module for fusing the first pose data and the second pose data to obtain motion information of the captured object, wherein the motion information is determined according to a skeleton structure of the captured object, and the skeleton structure is obtained by performing dynamics calculation according to pose information determined from the first pose data and the second pose data;
the vision sensor module includes a camera for acquiring the image data;
the first pose determination module is specifically configured to obtain third pose data of the camera in a world coordinate system according to the image data and the first motion data, obtain fourth pose data of the captured object in the camera coordinate system according to the image data, and obtain the first pose data of the captured object in the world coordinate system according to the third pose data and the fourth pose data.
6. A computer-readable storage medium storing computer instructions for causing a computer to perform the motion capture method of any one of claims 1-4.
7. A vision and inertia based motion capture device, comprising: a memory and a processor, the memory and the processor being communicatively coupled to each other, the memory having stored therein computer instructions, the processor performing the motion capture method of any of claims 1-4 by executing the computer instructions.
8. A vision and inertia based motion capture system, comprising: a vision sensor module, an inertial measurement module and a host computer, the vision sensor module including a camera and an inertial measurement unit;
the camera acquires image data of a captured object and sends the image data to the host computer;
the inertial measurement unit acquires first motion data of the vision sensor module and sends the first motion data to the host computer;
the inertial measurement module acquires second motion data of the captured object and sends the second motion data to the host computer;
the host computer receives the image data, the first motion data and the second motion data, obtains first pose data of the captured object according to the image data and the first motion data, obtains second pose data of the captured object according to the second motion data, and fuses the first pose data and the second pose data to obtain motion information of the captured object, wherein the motion information is determined according to a skeleton structure of the captured object, and the skeleton structure is obtained by performing dynamics calculation according to pose information determined from the first pose data and the second pose data;
the host computer obtains third pose data of the camera in a world coordinate system according to the image data and the first motion data, obtains fourth pose data of the captured object in the camera coordinate system according to the image data, and obtains the first pose data of the captured object in the world coordinate system according to the third pose data and the fourth pose data.
CN201811303014.5A 2018-11-02 2018-11-02 Motion capture method, device and system based on vision and inertia Active CN109669533B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811303014.5A 2018-11-02 2018-11-02 Motion capture method, device and system based on vision and inertia

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811303014.5A 2018-11-02 2018-11-02 Motion capture method, device and system based on vision and inertia

Publications (2)

Publication Number Publication Date
CN109669533A CN109669533A (en) 2019-04-23
CN109669533B (en) 2022-02-11

Family

ID=66142534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811303014.5A Motion capture method, device and system based on vision and inertia 2018-11-02 2018-11-02 (Active)

Country Status (1)

Country Link
CN (1) CN109669533B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112815923B (en) * 2019-11-15 2022-12-30 华为技术有限公司 Visual positioning method and device
CN111207741B (en) * 2020-01-16 2022-01-07 西安因诺航空科技有限公司 Unmanned aerial vehicle navigation positioning method based on indoor vision vicon system
CN111382701B (en) * 2020-03-09 2023-09-22 抖音视界有限公司 Motion capture method, motion capture device, electronic equipment and computer readable storage medium
CN112097768B (en) * 2020-11-17 2021-03-02 深圳市优必选科技股份有限公司 Robot posture determining method and device, robot and storage medium
CN112729294B (en) * 2021-04-02 2021-06-25 北京科技大学 Pose estimation method and system suitable for vision and inertia fusion of robot
CN113889223A (en) * 2021-10-25 2022-01-04 合肥工业大学 Gesture recognition rehabilitation system based on computer vision

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279186A (en) * 2013-05-07 2013-09-04 兰州交通大学 Multiple-target motion capturing system integrating optical localization and inertia sensing
CN104658012A (en) * 2015-03-05 2015-05-27 第二炮兵工程设计研究院 Motion capture method based on inertia and optical measurement fusion
CN104834917A (en) * 2015-05-20 2015-08-12 北京诺亦腾科技有限公司 Mixed motion capturing system and mixed motion capturing method
CN106256394A (en) * 2016-07-14 2016-12-28 广东技术师范学院 The training devices of mixing motion capture and system
CN206990800U (en) * 2017-07-24 2018-02-09 宗晖(上海)机器人有限公司 A kind of alignment system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160073953A1 (en) * 2014-09-11 2016-03-17 Board Of Trustees Of The University Of Alabama Food intake monitor


Also Published As

Publication number Publication date
CN109669533A (en) 2019-04-23

Similar Documents

Publication Publication Date Title
CN109669533B (en) Motion capture method, device and system based on vision and inertia
CN107990899B (en) Positioning method and system based on SLAM
CN109887032B (en) Monocular vision SLAM-based vehicle positioning method and system
US10989540B2 (en) Binocular vision localization method, device and system
CN107888828B (en) Space positioning method and device, electronic device, and storage medium
WO2019157925A1 (en) Visual-inertial odometry implementation method and system
JP5946924B2 (en) Scene structure-based self-pose estimation
CN105593877B (en) Object tracking is carried out based on the environmental map data dynamically built
CN106687063B (en) Tracking system and tracking method using the same
CN107845114B (en) Map construction method and device and electronic equipment
CN108389264B (en) Coordinate system determination method and device, storage medium and electronic equipment
WO2018120350A1 (en) Method and device for positioning unmanned aerial vehicle
CN107748569B (en) Motion control method and device for unmanned aerial vehicle and unmanned aerial vehicle system
JP6702543B2 (en) Information processing apparatus, method and program
CN111750853A (en) Map establishing method, device and storage medium
WO2020133172A1 (en) Image processing method, apparatus, and computer readable storage medium
Menozzi et al. Development of vision-aided navigation for a wearable outdoor augmented reality system
CN109767470B (en) Tracking system initialization method and terminal equipment
CN106023192A (en) Time reference real-time calibration method and system for image collection platform
CN111815781A (en) Augmented reality data presentation method, apparatus, device and computer storage medium
CN111862150A (en) Image tracking method and device, AR device and computer device
CN103900473A (en) Intelligent mobile device six-degree-of-freedom fused pose estimation method based on camera and gravity inductor
CN110825079A (en) Map construction method and device
CN109040525A (en) Image processing method, device, computer-readable medium and electronic equipment
CN114638897A (en) Multi-camera system initialization method, system and device based on non-overlapping views

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant