CN111930240A - Motion video acquisition method and device based on AR interaction, electronic equipment and medium - Google Patents

Motion video acquisition method and device based on AR interaction, electronic equipment and medium

Info

Publication number
CN111930240A
Authority
CN
China
Prior art keywords
motion
augmented reality
user
world
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010981359.7A
Other languages
Chinese (zh)
Other versions
CN111930240B (en)
Inventor
刘宏扬 (Liu Hongyang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An International Smart City Technology Co Ltd
Original Assignee
Ping An International Smart City Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An International Smart City Technology Co Ltd filed Critical Ping An International Smart City Technology Co Ltd
Priority to CN202010981359.7A priority Critical patent/CN111930240B/en
Publication of CN111930240A publication Critical patent/CN111930240A/en
Application granted granted Critical
Publication of CN111930240B publication Critical patent/CN111930240B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of augmented reality and provides a motion video acquisition method, apparatus, electronic device, and medium based on AR interaction. The method comprises the following steps: entering a world tracking mode in an augmented reality application and starting a 3D scanning frame to detect a horizontal plane in the augmented reality world; when a confirmation instruction to move on the horizontal plane is received, creating a motion area in the 3D coordinate system of the augmented reality world according to the position coordinates of the horizontal plane, and loading a preset 3D motion posture model in the motion area; when a confirmation instruction for the motion area is received, starting a human body tracking mode to track the user in the motion area in real time; and collecting the augmented reality motion video of the user in the motion area. The method can be applied to intelligent education, effectively enhances the interaction between people and virtual objects, collects augmented reality motion video rapidly, and improves the user experience. Furthermore, the invention relates to the technical field of blockchains, the motion video being stored in a blockchain.

Description

Motion video acquisition method and device based on AR interaction, electronic equipment and medium
Technical Field
The invention relates to the technical field of augmented reality, in particular to a motion video acquisition method and device based on AR interaction, electronic equipment and a medium.
Background
Augmented Reality (AR) is a technology that seamlessly fuses virtual information with the real world. It makes broad use of technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing, and applies computer-generated virtual information such as text, images, three-dimensional models, music, and video to the real world after simulation, so that the two kinds of information complement each other and the real world is thereby augmented.
AR technology is increasingly applied to sports-oriented AR products, but these products suffer from the following drawbacks. They lack guidance: without a guidance design, a novice user using an AR product for the first time cannot use the AR functions properly. Their functionality is limited: AR technology is used simply to play model animations in the real world. Apart from tap interactions between the user and the AR model, existing AR products offer almost no other interaction, so motion video in the AR world cannot be captured quickly.
Disclosure of Invention
In view of the above, there is a need to provide a motion video capturing method, device, electronic device and medium based on AR interaction, which can effectively enhance the interaction between people and virtual objects, rapidly capture an augmented reality motion video, and improve the user experience.
The invention provides a motion video acquisition method based on AR interaction, which is applied to electronic equipment and comprises the following steps:
entering a world tracking mode in an augmented reality application and starting a 3D scanning frame to detect a horizontal plane in the augmented reality world;
in response to a received confirmation instruction of moving on the horizontal plane, creating a movement area in a 3D coordinate system of the augmented reality world according to the position coordinates of the horizontal plane, and loading a preset 3D movement posture model in the movement area;
responding to the received confirmation instruction of the motion area, starting a human body tracking mode to track the user in the motion area in real time;
and acquiring the augmented reality motion video of the user in the motion area.
According to an alternative embodiment of the present invention, the starting of the 3D scanning frame to detect the horizontal plane in the augmented reality world comprises:
starting a camera of the electronic device and acquiring the running state of the camera;
controlling the 3D scanning frame to undergo a matrix transformation according to the running state, so that the transformed 3D scanning frame is attached to the plane, aimed at by the camera, on which the motion is to take place;
controlling the camera to emit a ray toward the center of the augmented reality world, and, in response to the ray intersecting any feature in the augmented reality world, acquiring the anchor point of that feature;
judging whether the anchor point of the feature is a plane anchor point;
determining that a horizontal plane in the augmented reality world is detected when the anchor point of the feature is determined to be a plane anchor point.
According to an alternative embodiment of the present invention, the starting of the human body tracking mode to track the user in the motion area in real time includes:
acquiring an anchor point set scanned by the camera in real time, and judging whether an anchor point of a human body type exists in the anchor point set;
when it is determined that the anchor point of the human body type exists in the anchor point set, acquiring a human body 3D coordinate in the anchor point of the human body type;
acquiring the coordinate of a central point of the motion region in the augmented reality world and the radius of the motion region;
calculating the distance between the human body 3D coordinates and the center point coordinates, and judging whether the distance is greater than the radius;
determining that the user is within the motion region when it is determined that the distance is less than or equal to the radius;
determining that the user is not within the motion region when it is determined that the distance is greater than the radius.
According to an optional embodiment of the present invention, the acquiring the human body 3D coordinates in the anchor point of the human body type comprises:
and acquiring a 3D coordinate corresponding to the target node from the anchor point of the human body type as a human body 3D coordinate.
According to an alternative embodiment of the invention, the method further comprises:
displaying the motion region in the augmented reality world in a preset first display mode when the user is determined to be within the motion region;
when the user is determined not to be within the motion region, displaying the motion region in the augmented reality world in a preset second display mode.
According to an optional embodiment of the present invention, the acquiring the augmented reality motion video of the user in the motion area comprises:
removing the preset 3D motion posture model in response to a touch instruction received on a preset shooting control;
and acquiring the augmented reality motion video of the user in the motion area through the camera after a preset time period.
According to an alternative embodiment of the invention, the entering into the world tracking mode in the augmented reality application comprises:
responding to a received augmented reality starting instruction, starting the augmented reality application and displaying an augmented reality motion interface;
transitioning from the augmented reality motion interface to the world tracking mode.
The invention provides a motion video acquisition device based on AR interaction, which runs in an electronic device and comprises:
the application starting module is used for entering a world tracking mode in the augmented reality application;
the plane detection module is used for starting the 3D scanning frame to detect the horizontal plane in the augmented reality world;
the area creating module is used for creating a motion area in the 3D coordinate system of the augmented reality world according to the position coordinates of the horizontal plane in response to the received confirmation instruction of the motion on the horizontal plane, and loading a preset 3D motion posture model in the motion area;
the user tracking module is used for responding to the received confirmation instruction of the motion area, and starting a human body tracking mode to track the user in the motion area in real time;
and the video acquisition module is used for acquiring the augmented reality motion video of the user in the motion area.
A third aspect of the present invention provides an electronic device comprising a processor for implementing the AR interaction based motion video capture method when executing a computer program stored in a memory.
A fourth aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the AR interaction-based moving video capturing method.
In summary, according to the motion video acquisition method, apparatus, electronic device, and medium based on AR interaction described in the present invention, after the world tracking mode is entered in the augmented reality application, a 3D scanning frame is started to detect a horizontal plane in the augmented reality world. The user triggers a confirmation instruction to move on the horizontal plane, a motion area is created in the 3D coordinate system of the augmented reality world according to the position coordinates of the horizontal plane, and a preset 3D motion posture model is loaded in the motion area to prompt the user to align with it. The user then triggers a confirmation instruction for the motion area, a human body tracking mode is started to track the user in the motion area in real time, and finally the augmented reality motion video of the user in the motion area is collected. The 3D scanning frame and the 3D motion posture model attract the user's attention and keep the user from getting stuck searching for information or for the model. In the world tracking mode, the anchor points of horizontal-plane features are scanned efficiently in real time, so that a 3D motion area and a 3D motion posture model can be created accurately for the user in the augmented reality world. In the human body tracking mode, human body feature anchor points are scanned efficiently in real time, the user's body coordinates in the augmented reality world are calculated accurately, and whether the user has strayed from the motion area is detected so that a prompt can be output.
Drawings
Fig. 1 is a flowchart of a motion video capture method based on AR interaction according to an embodiment of the present invention.
Fig. 2 is a structural diagram of a motion video capture device based on AR interaction according to a second embodiment of the present invention.
Fig. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a detailed description of the present invention will be given below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments of the present invention and features of the embodiments may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
The motion video acquisition method based on AR interaction is applied to an electronic device, and correspondingly, the motion video acquisition apparatus based on AR interaction is installed in the electronic device. The method can be applied to intelligent education, in particular intelligent motion teaching, and can promote the construction of smart cities.
Fig. 1 is a flowchart of a motion video capture method based on AR interaction according to an embodiment of the present invention.
The motion video acquisition method based on AR interaction specifically comprises the following steps. Depending on requirements, the order of the steps in the flowchart may be changed, and some steps may be omitted.
S11, enter the world tracking mode in the augmented reality application and start the 3D scanning frame to detect a horizontal plane in the augmented reality world.
An Augmented Reality (AR) application is installed on the electronic device in advance. When a user wants to use the AR application to capture a segment of augmented reality motion video, the user can trigger a start instruction for the AR application by clicking or touching its virtual icon, and the electronic device starts the AR application upon receiving this instruction. After starting the AR application, the electronic device displays an AR motion interface on its display screen.
In an alternative embodiment, the electronic device displays a plurality of mode virtual icons in the augmented reality motion interface, such as a world tracking mode virtual icon, a body tracking mode virtual icon, a gesture tracking mode virtual icon. The user can enter the world tracking mode by clicking or touching the world tracking mode virtual icon, enter the human tracking mode by clicking or touching the human tracking mode virtual icon, and enter the gesture tracking mode by clicking or touching the gesture tracking mode virtual icon.
In an alternative embodiment, the entering into the world tracking mode in the augmented reality application comprises:
responding to a received augmented reality starting instruction, starting the augmented reality application and displaying an augmented reality motion interface;
transitioning from the augmented reality motion interface to the world tracking mode.
In this optional embodiment, the electronic device automatically jumps to the world tracking mode after the augmented reality motion interface has been displayed for a preset time period, which saves the user a click or touch operation and improves the user experience.
In an alternative embodiment, the starting of the 3D scanning frame to detect a horizontal plane in the augmented reality world comprises:
starting a camera of the electronic device and acquiring the running state of the camera;
controlling the 3D scanning frame to undergo a matrix transformation according to the running state, so that the transformed 3D scanning frame is attached to the plane, aimed at by the camera, on which the motion is to take place;
controlling the camera to emit a ray toward the center of the augmented reality world, and, in response to the ray intersecting any feature in the augmented reality world, acquiring the anchor point of that feature;
judging whether the anchor point of the feature is a plane anchor point;
determining that a horizontal plane in the augmented reality world is detected when the anchor point of the feature is determined to be a plane anchor point.
In this optional embodiment, after the AR application enters the world tracking mode, a 3D scanning frame is displayed at the center of the augmented reality motion interface, and prompt information is output at the same time to prompt the user to start the camera of the electronic device and aim it at the plane where the motion is to take place. The prompt information can be output in one or more of the following ways: outputting a voice prompt; outputting a text prompt above the 3D scanning frame.
The user can hold the electronic device and pan it up, down, left, and right, which makes it easier to identify more features in the augmented reality world. The running state of the camera is this up-down and left-right movement: the 3D scanning frame moves along with the camera, always staying at the center of the display screen and attached to the horizontal plane of the real-world object at which the camera is aimed. When the type of the detected anchor point is a plane anchor point, a horizontal plane has been detected, and the user may choose to create a motion area on this plane.
In the augmented reality world, the types of anchor points include plane anchor points, point anchor points, and line anchor points. If the anchor point type is a plane anchor point, a horizontal plane has been found; if the anchor point type is a point or a line, no horizontal plane has been found.
S12, responding to the received confirmation instruction of the motion on the horizontal plane, creating a motion area in the 3D coordinate system of the augmented reality world according to the position coordinates of the horizontal plane, and loading a preset 3D motion posture model in the motion area.
After detecting the horizontal plane in the augmented reality world, the electronic device displays a confirmation virtual key, and a rotating 3D arrow may further be displayed directly above the 3D scanning frame to prompt the user to select this area as the motion area. When the user clicks or touches the confirmation virtual key, the electronic device receives the user's confirmation instruction to move on the horizontal plane and dynamically creates a motion area centered on the 3D scanning frame.
In an alternative embodiment, the electronic device hides the 3D scanning frame and the rotating 3D arrow while dynamically creating the motion area centered on the 3D scanning frame.
The preset 3D motion posture model is an animation modeled by artists with art tools such as 3ds Max and assembled with the Unity development tool, in which a virtual character performs the corresponding action.
After the preset 3D motion posture model is loaded, an interface prompt is output: please keep your pose consistent with the figure. A re-select-plane button may also be displayed below the prompt so that the user can re-select the motion region.
S13, in response to the received confirmation instruction for the motion area, start the human body tracking mode to track the user in the motion area in real time.
After the motion area is displayed on the display screen of the electronic device, a confirmation virtual key is displayed to prompt the user to confirm whether the motion area is acceptable. After the user clicks or touches the confirmation virtual key, the electronic device receives the user's confirmation instruction for the motion area and automatically switches to the human body tracking mode.
In an optional embodiment, the starting of the human body tracking mode to track the user in the motion area in real time includes:
acquiring an anchor point set scanned by the camera in real time, and judging whether an anchor point of a human body type exists in the anchor point set;
when it is determined that the anchor point of the human body type exists in the anchor point set, acquiring a human body 3D coordinate in the anchor point of the human body type;
acquiring the coordinate of a central point of the motion region in the augmented reality world and the radius of the motion region;
calculating the distance between the human body 3D coordinates and the center point coordinates, and judging whether the distance is greater than the radius;
determining that the user is within the motion region when it is determined that the distance is less than or equal to the radius;
determining that the user is not within the motion region when it is determined that the distance is greater than the radius.
In this alternative embodiment, the camera scans features in the real world and returns a set of anchor points for the features. The anchor point set may include human body type anchor points or ordinary anchor points.
When an anchor point of the human body type is detected in the anchor point set, a human body has been detected; when no anchor point of the human body type is detected in the anchor point set, no human body has been detected.
From the user's human body 3D coordinates in the real world and the center point coordinates and radius of the motion region created in the augmented reality world, it is then determined whether the user has entered the motion region.
In an optional embodiment, the acquiring the human body 3D coordinates in the anchor point of the human body type includes:
and acquiring a 3D coordinate corresponding to the target node from the anchor point of the human body type as a human body 3D coordinate.
In this alternative embodiment, the anchor points of the human body type include 3D coordinates of nodes such as head, arm, abdomen, leg, and foot.
An abdomen node is defined in the electronic device as a target node in advance, and 3D coordinates of the abdomen node are extracted from an anchor point of a human body type as human body coordinates.
Because the target node corresponds to the center of the user's body, its 3D coordinates accurately locate the user in the real world, which subsequently makes it possible to judge accurately and quickly, from that position, whether the user is within the motion region.
In an optional embodiment, the method further comprises:
displaying the motion region in the augmented reality world in a preset first display mode when the user is determined to be within the motion region;
when the user is determined not to be within the motion region, displaying the motion region in the augmented reality world in a preset second display mode.
In this optional embodiment, the preset first display mode may be a preset first display color, for example, blue; or a preset first display shape, for example, an oval shape. The preset second display mode may be a preset second display color, for example, red; or a preset second display shape, for example, a random shape.
When the user is detected to enter the motion area, the motion area is switched from the red circular plane to the blue circular plane, and when the user is detected to leave the motion area, the motion area is switched from the blue circular plane to the red circular plane.
Displaying the motion area in different display modes to indicate whether the user is inside it serves to prompt the user to stay within the motion area.
S14, collect the augmented reality motion video of the user in the motion area.
When the user is within the motion area in the augmented reality world, a motion video of the user's movement is collected. When the user is not within the motion area in the augmented reality world, the motion video of the user's movement is not collected, and a prompt is output indicating that the user has stepped out of the motion area.
In an optional embodiment, the acquiring the augmented reality motion video of the user in the motion area comprises:
removing the preset 3D motion posture model in response to a touch instruction received on a preset shooting control;
and acquiring the augmented reality motion video of the user in the motion area through the camera after a preset time period.
In this optional embodiment, after the human body tracking mode is entered, a shooting control (a shooting virtual key) is displayed on the display screen of the electronic device. When the electronic device detects that the user clicks or touches the shooting control, it determines that the user's touch instruction has been received on the shooting control.
The preset 3D motion posture model is removed so that the motion video of the user in the real world can be conveniently collected through the camera.
After the preset 3D motion posture model is removed and the preset time period has elapsed, the camera tracks and captures the user in real time; the YUV data collected by the camera is obtained, encoded, and stored, and this constitutes the user's motion video.
The preset time period may be, for example, 3 seconds. Starting collection only after the preset time period avoids capturing video of the user's environment before the motion begins, which facilitates subsequent quick editing of the motion video and, because the environment video is not collected, saves storage space on the electronic device.
In the method provided by this embodiment, after the world tracking mode is entered in the augmented reality application, a 3D scanning frame is started to detect a horizontal plane in the augmented reality world. The user triggers a confirmation instruction to move on the horizontal plane, a motion area is created in the 3D coordinate system of the augmented reality world according to the position coordinates of the horizontal plane, and a preset 3D motion posture model is loaded in the motion area to prompt the user to align with it. The user then triggers a confirmation instruction for the motion area, a human body tracking mode is started to track the user in the motion area in real time, and finally the augmented reality motion video of the user in the motion area is collected. The 3D scanning frame and the 3D motion posture model attract the user's attention and keep the user from getting stuck searching for information or for the model. In the world tracking mode, the anchor points of horizontal-plane features are scanned efficiently in real time, so that a 3D motion area and a 3D motion posture model can be created accurately for the user in the augmented reality world. In the human body tracking mode, human body feature anchor points are scanned efficiently in real time, the user's body coordinates in the augmented reality world are calculated accurately, and whether the user has strayed from the motion area is detected so that a prompt can be output.
It is emphasized that, in order to further ensure the privacy and security of the motion video, the motion video may be stored in a node of a blockchain.
Fig. 2 is a structural diagram of a motion video capture device based on AR interaction according to a second embodiment of the present invention.
In some embodiments, the AR interaction-based motion video capture device 20 may include a plurality of functional modules composed of computer program segments. The computer programs of the various program segments in the AR interaction-based motion video capture device 20 may be stored in a memory of the electronic device and executed by at least one processor to perform the functions of AR interaction-based motion video capture (described in detail with reference to fig. 1).
In this embodiment, the AR interaction-based motion video capture device 20 may be divided into a plurality of functional modules according to the functions it performs. The functional modules may include: an application launching module 201, a plane detection module 202, a region creation module 203, a user tracking module 204, a region display module 205, and a video capture module 206. A module referred to herein is a series of computer program segments that can be executed by at least one processor to perform a fixed function and that are stored in the memory. The functions of the modules will be described in detail in the following embodiments.
The application starting module 201 is configured to enter a world tracking mode in an augmented reality application.
An Augmented Reality (AR) application is installed on the electronic device in advance. When a user wants to use the AR application to capture a segment of augmented reality motion video, the user can trigger a start instruction for the AR application by clicking or touching its virtual icon, and the electronic device starts the AR application upon receiving this instruction. After starting the AR application, the electronic device displays an AR motion interface on its display screen.
In an alternative embodiment, the electronic device displays a plurality of mode virtual icons in the augmented reality motion interface, such as a world tracking mode virtual icon, a body tracking mode virtual icon, a gesture tracking mode virtual icon. The user can enter the world tracking mode by clicking or touching the world tracking mode virtual icon, enter the human tracking mode by clicking or touching the human tracking mode virtual icon, and enter the gesture tracking mode by clicking or touching the gesture tracking mode virtual icon.
In an alternative embodiment, the application launching module 201 entering the world tracking mode in the augmented reality application includes:
responding to a received augmented reality starting instruction, starting the augmented reality application and displaying an augmented reality motion interface;
transitioning from the augmented reality motion interface to the world tracking mode.
In this optional embodiment, the electronic device automatically jumps to the world tracking mode after the augmented reality motion interface has been displayed for a preset time period, which saves the user a click or touch operation and improves the user experience.
The plane detection module 202 is configured to start the 3D scanning frame to detect a horizontal plane in the augmented reality world.
In an alternative embodiment, the plane detection module 202 starting the 3D scanning frame to detect the horizontal plane in the augmented reality world comprises:
starting a camera of the electronic device and acquiring the running state of the camera;
controlling the 3D scanning frame to undergo a matrix transformation according to the running state, so that the transformed 3D scanning frame is attached to the plane, aimed at by the camera, on which the motion is to take place;
controlling the camera to emit a ray toward the center of the augmented reality world, and, in response to the ray intersecting any feature in the augmented reality world, acquiring the anchor point of that feature;
judging whether the anchor point of the feature is a plane anchor point;
determining that a horizontal plane in the augmented reality world is detected when the anchor point of the feature is determined to be a plane anchor point.
In this optional embodiment, after the AR application enters the world tracking mode, a 3D scanning frame is displayed at the center of the augmented reality motion interface, and prompt information is output at the same time to prompt the user to start the camera of the electronic device and aim it at the plane where the motion is to take place. The prompt information can be output in one or more of the following ways: outputting a voice prompt; outputting a text prompt above the 3D scanning frame.
The user can hold the electronic device and pan it up, down, left, and right, which makes it easier to identify more features in the augmented reality world. The running state of the camera is this up-down and left-right movement: the 3D scanning frame moves along with the camera, always staying at the center of the display screen and attached to the horizontal plane of the real-world object at which the camera is aimed. When the type of the detected anchor point is a plane anchor point, a horizontal plane has been detected, and the user may choose to create a motion area on this plane.
In the augmented reality world, the types of anchor points include plane anchor points, point anchor points, and line anchor points. If the anchor point type is a plane anchor point, a horizontal plane has been found; if the anchor point type is a point or a line, no horizontal plane has been found.
The region creating module 203 is configured to create a motion region in the 3D coordinate system of the augmented reality world according to the position coordinates of the horizontal plane in response to the received confirmation instruction of the motion on the horizontal plane, and load a preset 3D motion posture model in the motion region.
After detecting the horizontal plane in the augmented reality world, the electronic device displays a confirmation virtual key, and a rotating 3D arrow may further be displayed directly above the 3D scanning frame to prompt the user to select this area as the motion area. When the user clicks or touches the confirmation virtual key, the electronic device receives the user's confirmation instruction to move on the horizontal plane and dynamically creates a motion area centered on the 3D scanning frame.
In an alternative embodiment, the electronic device hides the 3D scanning frame and the rotating 3D arrow while dynamically creating the motion area centered on the 3D scanning frame.
The preset 3D motion posture model is an animation modeled by artists with art tools such as 3ds Max and assembled with the Unity development tool, in which a virtual character performs the corresponding action.
After the preset 3D motion posture model is loaded, an interface prompt is output: please keep your pose consistent with the figure. A re-select-plane button may also be displayed below the prompt so that the user can re-select the motion region.
The user tracking module 204 is configured to start a human body tracking mode to track the user in the motion area in real time in response to the received confirmation instruction for the motion area.
After the motion area is displayed on the display screen of the electronic device, a confirmation virtual key is displayed to prompt the user to confirm whether the motion area is acceptable. After the user clicks or touches the confirmation virtual key, the electronic device receives the user's confirmation instruction for the motion area and automatically switches to the human body tracking mode.
In an alternative embodiment, the user tracking module 204 enabling the human body tracking mode to track the user in the motion area in real time includes:
acquiring an anchor point set scanned by the camera in real time, and judging whether an anchor point of a human body type exists in the anchor point set;
when it is determined that the anchor point of the human body type exists in the anchor point set, acquiring a human body 3D coordinate in the anchor point of the human body type;
acquiring the coordinate of a central point of the motion region in the augmented reality world and the radius of the motion region;
calculating the distance between the human body 3D coordinates and the center point coordinates, and judging whether the distance is greater than the radius;
determining that the user is within the motion region when it is determined that the distance is less than or equal to the radius;
determining that the user is not within the motion region when it is determined that the distance is greater than the radius.
In this alternative embodiment, the camera scans features in the real world and returns a set of anchor points for the features. The anchor point set may include human body type anchor points or ordinary anchor points.
When an anchor point of the human body type is detected in the anchor point set, a human body has been detected; when no anchor point of the human body type is detected in the anchor point set, no human body has been detected.
From the user's human body 3D coordinates in the real world and the center point coordinates and radius of the motion region created in the augmented reality world, it is then determined whether the user has entered the motion region.
In an optional embodiment, the acquiring the human body 3D coordinates in the anchor point of the human body type includes:
and acquiring a 3D coordinate corresponding to the target node from the anchor point of the human body type as a human body 3D coordinate.
In this alternative embodiment, the anchor points of the human body type include 3D coordinates of nodes such as head, arm, abdomen, leg, and foot.
An abdomen node is defined in the electronic device as a target node in advance, and 3D coordinates of the abdomen node are extracted from an anchor point of a human body type as human body coordinates.
Because the target node corresponds to the center of the user's body, its 3D coordinates accurately locate the user in the real world, which subsequently makes it possible to judge accurately and quickly, from that position, whether the user is within the motion region.
The region display module 205 is configured to display the motion region in the augmented reality world in a preset first display mode when it is determined that the user is within the motion region.
The region display module 205 is further configured to display the motion region in the augmented reality world in a preset second display mode when it is determined that the user is not in the motion region.
In this optional embodiment, the preset first display mode may be a preset first display color, for example, blue; or a preset first display shape, for example, an oval shape. The preset second display mode may be a preset second display color, for example, red; or a preset second display shape, for example, a random shape.
When the user is detected to enter the motion area, the motion area is switched from the red circular plane to the blue circular plane, and when the user is detected to leave the motion area, the motion area is switched from the blue circular plane to the red circular plane.
Displaying the motion area in different display modes to indicate whether the user is inside it serves to prompt the user to stay within the motion area.
The video capture module 206 is configured to capture an augmented reality motion video of the user in the motion area.
When the user is within the motion area in the augmented reality world, a motion video of the user's movement is collected. When the user is not within the motion area in the augmented reality world, the motion video of the user's movement is not collected, and a prompt is output indicating that the user has stepped out of the motion area.
In an optional embodiment, the video capture module 206 capturing the augmented reality motion video of the user in the motion area includes:
removing the preset 3D motion posture model in response to a touch instruction received on a preset shooting control;
and acquiring the augmented reality motion video of the user in the motion area through the camera after a preset time period.
In this optional embodiment, after the human body tracking mode is entered, a shooting control (a shooting virtual key) is displayed on the display screen of the electronic device. When the electronic device detects that the user clicks or touches the shooting control, it determines that the user's touch instruction has been received on the shooting control.
The preset 3D motion posture model is removed so that the motion video of the user in the real world can be conveniently collected through the camera.
After the preset 3D motion posture model is removed and the preset time period has elapsed, the camera tracks and captures the user in real time; the YUV data collected by the camera is obtained, encoded, and stored, and this constitutes the user's motion video.
The preset time period may be, for example, 3 seconds. Starting collection only after the preset time period avoids capturing video of the user's environment before the motion begins, which facilitates subsequent quick editing of the motion video and, because the environment video is not collected, saves storage space on the electronic device.
In the device provided by this embodiment, after the world tracking mode is entered in the augmented reality application, a 3D scanning frame is started to detect a horizontal plane in the augmented reality world. The user triggers a confirmation instruction to move on the horizontal plane, a motion area is created in the 3D coordinate system of the augmented reality world according to the position coordinates of the horizontal plane, and a preset 3D motion posture model is loaded in the motion area to prompt the user to align with it. The user then triggers a confirmation instruction for the motion area, a human body tracking mode is started to track the user in the motion area in real time, and finally the augmented reality motion video of the user in the motion area is collected. The 3D scanning frame and the 3D motion posture model attract the user's attention and keep the user from getting stuck searching for information or for the model. In the world tracking mode, the anchor points of horizontal-plane features are scanned efficiently in real time, so that a 3D motion area and a 3D motion posture model can be created accurately for the user in the augmented reality world. In the human body tracking mode, human body feature anchor points are scanned efficiently in real time, the user's body coordinates in the augmented reality world are calculated accurately, and whether the user has strayed from the motion area is detected so that a prompt can be output.
It is emphasized that, in order to further ensure the privacy and security of the motion video, the motion video may be stored in a node of a blockchain.
Fig. 3 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention. In the preferred embodiment of the present invention, the electronic device 3 comprises a memory 31, at least one processor 32, at least one communication bus 33 and a transceiver 34.
It will be appreciated by those skilled in the art that the structure of the electronic device shown in fig. 3 does not limit the embodiments of the present invention; it may be a bus-type or a star-type configuration, and the electronic device 3 may include more or fewer hardware or software components than shown, or a different arrangement of components.
In some embodiments, the electronic device 3 is an electronic device capable of automatically performing numerical calculation and/or information processing according to instructions set or stored in advance, and the hardware thereof includes but is not limited to a microprocessor, an application specific integrated circuit, a programmable gate array, a digital processor, an embedded device, and the like. The electronic device 3 may also include a client device, which includes, but is not limited to, any electronic product that can interact with a client through a keyboard, a mouse, a remote controller, a touch pad, or a voice control device, for example, a personal computer, a tablet computer, a smart phone, a digital camera, and the like.
It should be noted that the electronic device 3 is only an example, and other existing or future electronic products, such as those that can be adapted to the present invention, should also be included in the scope of the present invention, and are included herein by reference.
In some embodiments, the memory 31 stores a computer program which, when executed by the at least one processor 32, implements all or part of the steps of the AR interaction-based motion video capture method as described. The memory 31 comprises a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically-Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium capable of carrying or storing data.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
A blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a series of data blocks linked by cryptographic methods, each data block containing information on a batch of network transactions, used to verify the validity (anti-counterfeiting) of the information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
In some embodiments, the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements all or part of the steps of the AR interaction-based moving video capturing method; or implement all or part of the functions in the AR interaction-based motion video capture device.
In some embodiments, the at least one processor 32 is a Control Unit (Control Unit) of the electronic device 3, connects various components of the electronic device 3 by various interfaces and lines, and executes various functions and processes data of the electronic device 3 by running or executing programs or modules stored in the memory 31 and calling data stored in the memory 31. For example, the at least one processor 32, when executing the computer program stored in the memory, implements all or part of the steps of the AR interaction-based motion video capture method according to the embodiment of the present invention; or realize all or part of the functions of the motion video acquisition device based on AR interaction. The at least one processor 32 may be composed of an integrated circuit, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips.
In some embodiments, the at least one communication bus 33 is arranged to enable connection communication between the memory 31 and the at least one processor 32 or the like.
Although not shown, the electronic device 3 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 32 through a power management device, so as to implement functions of managing charging, discharging, and power consumption through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 3 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
The integrated unit implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to enable an electronic device (which may be a personal computer, an electronic device, or a network device) or a processor (processor) to execute parts of the methods according to the embodiments of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from its spirit or essential attributes. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description; all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, the word "comprising" does not exclude other elements, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope.

Claims (10)

1. An AR interaction-based motion video acquisition method, applied to an electronic device, characterized in that the method comprises:
entering a world tracking mode in an augmented reality application and starting a 3D scanning frame to detect a horizontal plane in the augmented reality world;
in response to receiving a confirmation instruction to exercise on the horizontal plane, creating a motion area in the 3D coordinate system of the augmented reality world according to the position coordinates of the horizontal plane, and loading a preset 3D motion posture model in the motion area;
in response to receiving a confirmation instruction for the motion area, starting a human body tracking mode to track the user in the motion area in real time;
and acquiring an augmented reality motion video of the user in the motion area.
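[Editorial illustration, not part of the claims: a minimal sketch of the two mode switches recited in claim 1, assuming Apple's ARKit as the underlying AR framework. The MotionCaptureController class and its method names are hypothetical; ARWorldTrackingConfiguration, ARBodyTrackingConfiguration, and ARSCNView are existing ARKit/SceneKit types.]

```swift
import ARKit

final class MotionCaptureController: NSObject, ARSessionDelegate {
    let arView = ARSCNView()

    // Enter the world tracking mode and start detecting horizontal planes
    // (the role played by the 3D scanning frame in claim 1).
    func enterWorldTrackingMode() {
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal]
        arView.session.delegate = self
        arView.session.run(config)
    }

    // After the motion area is confirmed, switch the same session to a
    // human body tracking mode, where the device supports it.
    func enterBodyTrackingMode() {
        guard ARBodyTrackingConfiguration.isSupported else { return }
        arView.session.run(ARBodyTrackingConfiguration())
    }
}
```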
2. The AR interaction-based motion video acquisition method of claim 1, wherein the starting of the 3D scanning frame to detect a horizontal plane in the augmented reality world comprises:
starting a camera of the electronic device and acquiring a running state of the camera;
controlling the 3D scanning frame to undergo a matrix transformation according to the running state, so that the transformed 3D scanning frame fits the to-be-exercised plane at which the camera is aimed;
controlling the camera to cast a ray toward the center of the augmented reality world, and acquiring the anchor point of any feature in the augmented reality world in response to the ray intersecting that feature;
judging whether the anchor point of the feature is a plane anchor point;
and determining that a horizontal plane in the augmented reality world is detected when the anchor point of the feature is determined to be a plane anchor point.
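[Editorial illustration: the ray-and-anchor test of claim 2 maps closely onto ARKit's raycasting API. A sketch under that assumption; `arView` is the ARSCNView from the previous sketch.]

```swift
import ARKit

func detectHorizontalPlane(in arView: ARSCNView) -> Bool {
    // Cast a ray from the center of the view toward the AR world.
    let center = CGPoint(x: arView.bounds.midX, y: arView.bounds.midY)
    guard let query = arView.raycastQuery(from: center,
                                          allowing: .existingPlaneGeometry,
                                          alignment: .horizontal),
          let hit = arView.session.raycast(query).first,
          let anchor = hit.anchor else {
        return false
    }
    // Judge whether the anchor behind the hit feature is a plane anchor.
    return anchor is ARPlaneAnchor
}
```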
3. The AR interaction-based motion video acquisition method of claim 2, wherein the starting of the human body tracking mode to track the user in the motion area in real time comprises:
acquiring an anchor point set scanned by the camera in real time, and judging whether an anchor point of a human body type exists in the anchor point set;
when it is determined that an anchor point of the human body type exists in the anchor point set, acquiring a human body 3D coordinate from the anchor point of the human body type;
acquiring the coordinates of the center point of the motion area in the augmented reality world and the radius of the motion area;
calculating the distance between the human body 3D coordinate and the center point coordinates, and judging whether the distance is less than or equal to the radius;
determining that the user is within the motion area when the distance is determined to be less than or equal to the radius;
and determining that the user is not within the motion area when the distance is determined to be greater than the radius.
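[Editorial illustration: the containment test of claim 3 reduces to comparing a point-to-center distance against the area's radius. A sketch under that reading, with illustrative names:]

```swift
import simd

// Returns true when the tracked body lies inside the circular motion area.
func isUserInsideMotionArea(bodyPosition: SIMD3<Float>,
                            center: SIMD3<Float>,
                            radius: Float) -> Bool {
    // Claim 3: inside when distance <= radius. In practice the vertical
    // axis is often dropped, since the body coordinate sits at hip height
    // while the area's center lies on the detected floor plane.
    return simd_distance(bodyPosition, center) <= radius
}
```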
4. The AR interaction-based motion video acquisition method of claim 3, wherein the acquiring of the human body 3D coordinate from the anchor point of the human body type comprises:
acquiring the 3D coordinate corresponding to a target node from the anchor point of the human body type as the human body 3D coordinate.
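[Editorial illustration: claim 4's "target node" maps naturally onto a skeleton joint. A sketch assuming ARKit's ARBodyAnchor, taking the root (hip) joint as the target node and converting its model-space transform into a world-space coordinate:]

```swift
import ARKit

func humanBodyCoordinate(from bodyAnchor: ARBodyAnchor) -> SIMD3<Float>? {
    // Model-space transform of the root joint, if the skeleton tracked it.
    guard let rootTransform =
        bodyAnchor.skeleton.modelTransform(for: .root) else { return nil }
    // Compose with the anchor's transform to get world coordinates;
    // the translation sits in the last column of the 4x4 matrix.
    let t = (bodyAnchor.transform * rootTransform).columns.3
    return SIMD3<Float>(t.x, t.y, t.z)
}
```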
5. The AR interaction-based motion video acquisition method of claim 3, characterized in that the method further comprises:
displaying the motion area in the augmented reality world in a preset first display mode when the user is determined to be within the motion area;
and displaying the motion area in the augmented reality world in a preset second display mode when the user is determined not to be within the motion area.
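[Editorial illustration: one plausible reading of the two display modes in claim 5 is a simple recoloring of the rendered motion area. A sketch assuming the area is drawn as a SceneKit node; the concrete colors are purely illustrative.]

```swift
import SceneKit
import UIKit

func updateMotionAreaAppearance(_ areaNode: SCNNode, userInside: Bool) {
    // First display mode (user inside) vs. second display mode (outside).
    areaNode.geometry?.firstMaterial?.diffuse.contents =
        userInside ? UIColor.systemGreen : UIColor.systemRed
}
```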
6. The AR interaction-based motion video acquisition method of claim 2, wherein the acquiring of the augmented reality motion video of the user in the motion area comprises:
removing the preset 3D motion posture model in response to a touch instruction received on a preset shooting control;
and acquiring, through the camera and after a preset time period, the augmented reality motion video of the user in the motion area.
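[Editorial illustration: claim 6 combines removing the posture model, waiting a preset period, then capturing. A sketch assuming ReplayKit for the recording step; the removeMotionPoseModel closure and the 3-second delay are stand-ins for the claim's preset control and time period.]

```swift
import Foundation
import ReplayKit

func startDelayedCapture(removeMotionPoseModel: @escaping () -> Void) {
    // Triggered by a touch on the shooting control: first remove the
    // preset 3D motion posture model from the scene.
    removeMotionPoseModel()
    // After the preset time period, start capturing the AR motion video.
    DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
        RPScreenRecorder.shared().startRecording { error in
            if let error = error {
                print("Recording failed: \(error.localizedDescription)")
            }
        }
    }
}
```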
7. The AR interaction-based motion video acquisition method of any one of claims 1 to 6, wherein the entering of the world tracking mode in the augmented reality application comprises:
in response to receiving an augmented reality start instruction, starting the augmented reality application and displaying an augmented reality motion interface;
and transitioning from the augmented reality motion interface to the world tracking mode.
8. An AR interaction-based motion video acquisition device, characterized in that the device comprises:
an application starting module, configured to enter a world tracking mode in an augmented reality application;
a plane detection module, configured to start a 3D scanning frame to detect a horizontal plane in the augmented reality world;
an area creating module, configured to, in response to receiving a confirmation instruction to exercise on the horizontal plane, create a motion area in the 3D coordinate system of the augmented reality world according to the position coordinates of the horizontal plane, and load a preset 3D motion posture model in the motion area;
a user tracking module, configured to, in response to receiving a confirmation instruction for the motion area, start a human body tracking mode to track the user in the motion area in real time;
and a video acquisition module, configured to acquire an augmented reality motion video of the user in the motion area.
9. An electronic device, characterized in that the electronic device comprises a processor, the processor being configured to implement the AR interaction-based motion video acquisition method of any one of claims 1 to 7 when executing a computer program stored in a memory.
10. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the AR interaction-based motion video acquisition method of any one of claims 1 to 7.
CN202010981359.7A 2020-09-17 2020-09-17 Motion video acquisition method and device based on AR interaction, electronic equipment and medium Active CN111930240B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010981359.7A CN111930240B (en) 2020-09-17 2020-09-17 Motion video acquisition method and device based on AR interaction, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN111930240A (en) 2020-11-13
CN111930240B (en) 2021-02-09

Family

ID=73334578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010981359.7A Active CN111930240B (en) 2020-09-17 2020-09-17 Motion video acquisition method and device based on AR interaction, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111930240B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112699331A (en) * 2020-12-31 2021-04-23 深圳市慧鲤科技有限公司 Message information display method and device, electronic equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106483814A (en) * 2016-12-26 2017-03-08 岭南师范学院 A kind of 3D holographic projection system based on augmented reality and its using method
CN109358748A (en) * 2018-09-30 2019-02-19 深圳仓谷创新软件有限公司 A kind of device and method interacted with hand with mobile phone A R dummy object
CN109448050A (en) * 2018-11-21 2019-03-08 深圳市创梦天地科技有限公司 A kind of method for determining position and terminal of target point
CN110660130A (en) * 2019-09-23 2020-01-07 重庆邮电大学 Medical image-oriented mobile augmented reality system construction method
CN110739040A (en) * 2019-08-29 2020-01-31 北京邮电大学 rehabilitation evaluation and training system for upper and lower limbs
CN110928417A (en) * 2019-12-11 2020-03-27 漳州北极光数字科技有限公司 Plane recognition mode augmented reality multi-person sharing interaction method
CN111401190A (en) * 2020-03-10 2020-07-10 上海眼控科技股份有限公司 Vehicle detection method, device, computer equipment and storage medium
CN111462339A (en) * 2020-03-30 2020-07-28 网易(杭州)网络有限公司 Display method and device in augmented reality, medium and electronic equipment
CN111539484A (en) * 2020-04-29 2020-08-14 北京市商汤科技开发有限公司 Method and device for training neural network
US20200265649A1 (en) * 2019-02-20 2020-08-20 Samsung Electronics Co., Ltd. Apparatus and method for displaying contents on an augmented reality device
CN111611928A (en) * 2020-05-22 2020-09-01 杭州智珺智能科技有限公司 Height and body size measuring method based on monocular vision and key point identification

Also Published As

Publication number Publication date
CN111930240B (en) 2021-02-09

Similar Documents

Publication Publication Date Title
CN108492363B (en) Augmented reality-based combination method and device, storage medium and electronic equipment
CN107358401B (en) Intelligent management system/method for building project, readable storage medium and terminal
CN112950773A (en) Data processing method and device based on building information model and processing server
JP2019075124A (en) Method and system for providing camera effect
CN103270537A (en) Image processing apparatus, image processing method, and program
JP2004246578A (en) Interface method and device using self-image display, and program
CN111643899A (en) Virtual article display method and device, electronic equipment and storage medium
CN107817750A (en) A kind of building monitoring system based on BIM
EP3836126A1 (en) Information processing device, mediation device, simulate system, and information processing method
CN111770317A (en) Video monitoring method, device, equipment and medium for intelligent community
CN111930240B (en) Motion video acquisition method and device based on AR interaction, electronic equipment and medium
CN103502910B (en) Method for operating laser diode
CN107562356B (en) Fingerprint identification positioning method and device, storage medium and electronic equipment
CN113469872B (en) Region display method, device, equipment and storage medium
CN115147948B (en) Electronic patrol method, device and equipment
CN114510142B (en) Gesture recognition method based on two-dimensional image, gesture recognition system based on two-dimensional image and electronic equipment
CN111798573B (en) Electronic fence boundary position determination method and device and VR equipment
CN111580659B (en) File processing method and device and electronic equipment
CN112287708A (en) Near Field Communication (NFC) analog card switching method, device and equipment
Hansen et al. Augmented Reality for Infrastructure Information: Challenges with information flow and interactions in outdoor environments especially on construction sites
CN115798054B (en) Gesture recognition method based on AR/MR technology and electronic equipment
WO2019229788A1 (en) Computer system, construction progress display method, and program
CN110852770A (en) Data processing method and device, computing equipment and display equipment
CN109840457B (en) Augmented reality registration method and augmented reality registration device
CN117275039A (en) Skeletal gesture recognition method and device based on virtual-real interaction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant