CN111640206A - Dynamic control method and device - Google Patents

Dynamic control method and device

Info

Publication number
CN111640206A
Authority
CN
China
Prior art keywords
limb
target
dimensional model
user
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010514225.4A
Other languages
Chinese (zh)
Inventor
王子彬
孙红亮
揭志伟
李炳泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202010514225.4A
Publication of CN111640206A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Psychiatry (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • Architecture (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The method acquires a user image of a target user in real time, extracts limb pose information from the user image, determines the limb swinging direction and the limb swinging amplitude of the target user according to the limb pose information, and then adjusts the display angle of a displayed three-dimensional model according to the determined limb swinging direction and limb swinging amplitude and the three-dimensional model information of a target historical relic, so that the user can conveniently and quickly control the display angle of the target historical relic through the limb swinging direction and the limb swinging amplitude.

Description

Dynamic control method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a dynamic control method and apparatus.
Background
In exhibition halls, it is sometimes inconvenient to display the physical historical relic itself, or the relic can only be observed through a window or glass case once it is on display, which limits the level of detail that can be observed.
With the rapid development of AR technology and three-dimensional modeling technology, a three-dimensional model of a historical relic can be built to replace the physical relic, and the virtual relic can be displayed through a display device.
Disclosure of Invention
The embodiment of the disclosure at least provides a dynamic control method and a dynamic control device, which are used for conveniently and quickly controlling the display angle of a three-dimensional model of historical relics.
In a first aspect, an embodiment of the present disclosure provides a dynamic control method, including:
acquiring a user image of a target user entering a preset detection area in real time; an AR display screen is deployed in the exhibition hall at a set distance from the preset detection area, and a three-dimensional model of a target historical relic is displayed on the AR display screen;
extracting limb pose information from the user image;
determining the limb swinging direction and the limb swinging amplitude of the target user according to the continuously acquired limb pose information in the plurality of user images;
and adjusting the display angle of the displayed three-dimensional model of the target historical relic according to the limb swinging direction, the limb swinging amplitude and the stored three-dimensional model information of the target historical relic.
With the dynamic control method, a user image of the target user can be acquired in real time, limb pose information can be extracted from the user image, the limb swinging direction and the limb swinging amplitude of the target user can be determined according to the limb pose information, and the display angle of the displayed three-dimensional model can be adjusted according to the determined limb swinging direction and limb swinging amplitude and the three-dimensional model information of the target historical relic, so that the user can conveniently and quickly control the display angle of the target historical relic through the limb swinging direction and swinging amplitude.
In an optional embodiment, extracting the limb pose information from the user image includes:
after any one user image is obtained, detecting face information of the user image;
and after determining, according to the face information, that the angle of the face facing the display screen meets a preset condition, extracting the limb pose information of the target limb from the user image.
In an optional embodiment, the limb pose information is extracted from the user image according to the following steps:
detecting key points of target limbs of the user image, and detecting position information of each key point of the target limbs in the user image;
determining the limb pose information according to the position information of each key point of the target limb in the user image;
the determining the limb swinging direction and the limb swinging amplitude of the target user according to the continuously acquired limb pose information in the plurality of user images comprises the following steps:
and determining the limb swinging direction and the limb swinging amplitude of the target limb according to the continuously acquired position information of each key point of the target limb in the plurality of user images.
In an alternative embodiment, the three-dimensional model information of the target historical relic is constructed according to the following steps:
acquiring picture materials of the target historical relic, and performing three-dimensional reconstruction based on the acquired picture materials to obtain an initial three-dimensional model;
and adjusting the initial three-dimensional model to obtain updated three-dimensional model information of the target historical relic.
In an optional implementation manner, adjusting a display angle of the displayed three-dimensional model of the target historical relic according to the limb swinging direction, the limb swinging amplitude and the stored three-dimensional model information of the target historical relic comprises:
determining, according to the limb swinging direction and the limb swinging amplitude, rotation angle information in different three-dimensional coordinate directions corresponding to the three-dimensional model;
and adjusting the display angle of the displayed three-dimensional model of the target historical relic according to the rotation angle information.
In an optional implementation manner, after adjusting the display angle of the displayed three-dimensional model of the target historical relic, the method further includes:
and displaying the cultural relic detail content introduction corresponding to the adjusted display angle.
In an alternative embodiment, the target limb comprises a hand of the target user.
In a second aspect, an embodiment of the present disclosure further provides a dynamic control apparatus, including:
the acquisition module is used for acquiring a user image of a target user entering a preset detection area in real time; an AR display screen is deployed in the exhibition hall at a set distance from the preset detection area, and a three-dimensional model of a target historical relic is displayed on the AR display screen;
the extraction module is used for extracting limb pose information from the user image;
the determining module is used for determining the limb swinging direction and the limb swinging amplitude of the target user according to the continuously acquired limb pose information in the plurality of user images;
and the adjusting module is used for adjusting the display angle of the displayed three-dimensional model of the target historical relic according to the limb swinging direction, the limb swinging amplitude and the stored three-dimensional model information of the target historical relic.
In an optional implementation manner, the extraction module is specifically configured to:
after any one user image is obtained, detecting face information of the user image;
and after determining, according to the face information, that the angle of the face facing the display screen meets a preset condition, extracting the limb pose information of the target limb from the user image.
In an optional implementation manner, when extracting the limb pose information from the user image, the extraction module is specifically configured to:
detecting key points of target limbs of the user image, and detecting position information of each key point of the target limbs in the user image;
determining the limb pose information according to the position information of each key point of the target limb in the user image;
the determining module is specifically configured to:
and determining the limb swinging direction and the limb swinging amplitude of the target limb according to the continuously acquired position information of each key point of the target limb in the plurality of user images.
In an optional embodiment, the apparatus further comprises a construction module configured to:
acquiring picture materials of the target historical relic, and performing three-dimensional reconstruction based on the acquired picture materials to obtain an initial three-dimensional model;
and adjusting the initial three-dimensional model to obtain updated three-dimensional model information of the target historical relic.
In an optional implementation manner, the adjusting module is specifically configured to:
determining, according to the limb swinging direction and the limb swinging amplitude, rotation angle information in different three-dimensional coordinate directions corresponding to the three-dimensional model;
and adjusting the display angle of the displayed three-dimensional model of the target historical relic according to the rotation angle information.
In an optional embodiment, the adjusting module is further configured to:
and displaying the cultural relic detail content introduction corresponding to the adjusted display angle.
In an alternative embodiment, the target limb comprises a hand of the target user.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including: a processor and a memory coupled to each other, the memory storing machine-readable instructions executable by the processor; when the computer device runs, the machine-readable instructions are executed by the processor to implement the dynamic control method in the first aspect or in any one of the possible implementations of the first aspect.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program performs the steps of the first aspect or of any one of the possible implementations of the first aspect.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive additional related drawings from them without inventive effort.
FIG. 1 is a flow chart illustrating a dynamic control method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating a user image in a dynamic control method provided by an embodiment of the disclosure;
FIG. 3 is a schematic diagram illustrating a three-dimensional model before adjusting a display angle in a dynamic control method provided by an embodiment of the disclosure;
FIG. 4 is a schematic diagram illustrating a three-dimensional model after adjusting a display angle in a dynamic control method provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a dynamic control apparatus provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of another dynamic control apparatus provided by an embodiment of the present disclosure;
fig. 7 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
Research shows that, in scenarios such as exhibition halls, a three-dimensional model of a historical relic can be displayed on an AR display screen by using AR and three-dimensional modeling technologies, and after the three-dimensional model is displayed, a user can control its display direction through a control device such as a mouse, a keyboard, or a touch panel. However, in such scenarios it is often inconvenient for users to carry a control device at all times, and it is also difficult for the exhibition hall to provide a control device for every user, so users cannot control the display direction anytime and anywhere.
Based on this research, the present disclosure provides a dynamic control method that acquires a user image of a target user in real time, determines the limb swinging direction and the limb swinging amplitude of the target user from the extracted limb pose information, and adjusts the display angle of the displayed three-dimensional model according to the determined limb swinging direction and limb swinging amplitude and the three-dimensional model information of the target historical relic. In this way the user can conveniently and quickly control the display angle of the target historical relic through the limb swinging direction and amplitude; the control mode is flexible and convenient, and the dependence on dedicated control equipment is reduced.
The drawbacks described above were identified by the inventor through practice and careful study; therefore, the discovery of these problems and the solutions proposed below should both be regarded as the inventor's contribution to the present disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
To facilitate understanding of the present embodiment, first, a detailed description is given to a dynamic control method disclosed in an embodiment of the present disclosure, where an execution subject of the dynamic control method provided in the embodiment of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: a terminal device or a server or other processing device, which may be provided with or connected to a display screen and a camera device. In some possible implementations, the dynamic control method may be implemented by a processor calling computer readable instructions stored in a memory.
The following describes a dynamic control method provided by the embodiments of the present disclosure by taking an execution subject as a server.
Referring to fig. 1, a flowchart of a dynamic control method provided in the embodiment of the present disclosure is shown, where the method includes steps S101 to S104, where:
s101: and acquiring a user image of a target user entering a preset detection area in real time.
An AR display screen is deployed in the exhibition hall at a set distance from the preset detection area, and a three-dimensional model of a target historical relic is displayed on the AR display screen.
The preset detection area may be set directly in front of the AR display screen; when the user is located in the preset detection area, the user can observe the three-dimensional model of the relic displayed on the AR display screen. To enable the user to observe the displayed three-dimensional model clearly and completely, the set distance may be determined according to the size of the AR display screen.
In this step, a user image of the target user may be acquired by the image pickup device disposed on the AR display screen side. When it is detected that the target user has entered the preset detection area, the server acquires the user image of the target user in real time through the camera equipment; the acquired user images are consecutive frames recording the target user while located in the preset detection area.
The user image may be a depth image, in which case the image pickup apparatus may be a depth camera. A depth image, also known as a range image, is an image whose pixel values are the distances (depths) from the image capture device to points in the scene, and it directly reflects the geometry of the visible surfaces of the scene. A depth image can be converted into point cloud data through coordinate conversion, and sufficiently regular and complete point cloud data can in turn be converted back into depth image data. In the image frames provided by the depth data stream, each pixel represents the distance (in millimeters) from the camera plane to the closest object at that particular (x, y) coordinate in the depth sensor's field of view.
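As an illustrative, hedged sketch of the coordinate conversion mentioned above (not part of the original disclosure), a depth image can be back-projected into a point cloud with the standard pinhole camera model; the intrinsic parameters fx, fy, cx, cy are assumed to come from the depth camera's calibration.

```python
import numpy as np

def depth_to_point_cloud(depth_mm: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Convert a depth image (millimetres) into an N x 3 point cloud in metres.

    Minimal sketch using the pinhole model; fx, fy, cx, cy are assumed to be
    the depth camera's intrinsic parameters.
    """
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel grid coordinates
    z = depth_mm.astype(np.float64) / 1000.0         # depth in metres
    x = (u - cx) * z / fx                            # back-projected X
    y = (v - cy) * z / fy                            # back-projected Y
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop pixels with no depth reading
```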
S102: and extracting limb pose information from the user image.
In the step, each limb of the human body can be identified from the user image by using a human body identification technology, and the limb pose information of the target user is determined according to the position information of each limb in the user image.
For example, when the user image is a depth image, the human body may be segmented from the background in each frame to obtain multi-frame human-body silhouette images of the target user; a pre-trained human body recognition model is then used to recognize a plurality of human body key points in the silhouette images, and the pose information of the target limb is determined from the coordinates of these key points.
The limb pose information may include position information and inclination angle information of the target limb.
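As a minimal sketch of how position and inclination-angle information of this kind might be derived, assume a keypoint detector has already returned 2D image coordinates for the elbow and wrist; the keypoint names and the choice of the elbow-wrist segment are illustrative assumptions, not part of the disclosure.

```python
import math
from typing import Dict, Tuple

def limb_pose_from_keypoints(kps: Dict[str, Tuple[float, float]]) -> dict:
    """Compute position and inclination angle of the forearm/hand segment.

    `kps` maps keypoint names to (x, y) pixel coordinates; the keypoint
    names used here are assumptions for illustration only.
    """
    ex, ey = kps["right_elbow"]
    wx, wy = kps["right_wrist"]
    # Position of the target limb: midpoint of the elbow-wrist segment.
    position = ((ex + wx) / 2.0, (ey + wy) / 2.0)
    # Inclination relative to the vertical image axis, in degrees;
    # 0 means the forearm points straight up, positive means tilted to the right.
    angle = math.degrees(math.atan2(wx - ex, ey - wy))
    return {"position": position, "inclination_deg": angle}
```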
S103: and determining the limb swinging direction and the limb swinging amplitude of the target user according to the continuously acquired limb pose information in the plurality of user images.
In this step, the limb swinging direction and amplitude of the target user between two frames can be determined according to the limb pose information corresponding to a previous frame and the following frame, so that the limb swinging direction and the limb swinging amplitude of the target user across the consecutive frames of user images can be determined.
S104: and adjusting the display angle of the displayed three-dimensional model of the target historical relic according to the limb swinging direction, the limb swinging amplitude and the stored three-dimensional model information of the target historical relic.
In this step, the rotation direction of the three-dimensional model of the target historical relic can be determined according to the limb swing direction and the three-dimensional model information, the rotation degree of the three-dimensional model can be determined according to the limb swing amplitude and the three-dimensional model information, and the display angle of the three-dimensional model of the target historical relic can be adjusted according to the determined rotation direction and rotation degree.
The three-dimensional model information of the target historical relic can comprise coordinates of the three-dimensional model surface of the target historical relic in a three-dimensional coordinate system, rotation axis information of the three-dimensional model, the current display angle of the three-dimensional model and the like.
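For illustration only, the stored three-dimensional model information described above could be organised as a simple record such as the following; the field names are assumptions rather than the patent's data format.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class RelicModelInfo:
    """Assumed container for the stored 3D model information of a relic."""
    surface_vertices: np.ndarray                  # (N, 3) coordinates of the model surface
    rotation_axes: np.ndarray = field(
        default_factory=lambda: np.eye(3))        # rows: x, y, z rotation axes
    display_angles_deg: np.ndarray = field(
        default_factory=lambda: np.zeros(3))      # current display angle about each axis
```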
According to the dynamic control method provided by the embodiment of the disclosure, a user image of the target user can be acquired in real time, limb pose information can be extracted from the user image, the limb swinging direction and the limb swinging amplitude of the target user can be determined according to the limb pose information, and the display angle of the displayed three-dimensional model can be adjusted according to the determined limb swinging direction and limb swinging amplitude and the three-dimensional model information of the target historical relic, so that the user can conveniently and quickly control the display angle of the target historical relic through the limb swinging direction and the limb swinging amplitude.
In an optional embodiment, extracting the limb pose information from the user image includes:
after any one user image is obtained, detecting face information of the user image;
and after determining that the angle of the face facing the display screen meets the preset condition according to the face information, extracting the body pose information of the target body from the user image.
In this step, after the user image is obtained, the face information of the target user is recognized using a pre-trained human body recognition model, the angle of the face toward the AR display screen is determined according to the face information, and when that angle meets the preset condition, the limb pose information of the target limb is extracted from the user image.
The preset condition may be that the included angle between the direction the face is oriented toward the display screen and the perpendicular from the user to the display screen is smaller than or equal to a preset threshold. The preset condition is used to judge whether the user intends to adjust the display angle of the three-dimensional model of the target historical relic. When the angle of the face facing the display screen is smaller than or equal to the preset threshold, it can be judged that the user has such an intention.
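A hedged sketch of this check, assuming the face information includes a yaw angle (in degrees) between the facing direction and the perpendicular from the user to the display screen; the threshold value is illustrative, not taken from the disclosure.

```python
def intends_to_control(face_yaw_deg: float, threshold_deg: float = 30.0) -> bool:
    """Return True if the face is turned toward the screen closely enough.

    `face_yaw_deg` is assumed to be the angle between the facing direction and
    the perpendicular to the display screen; 30 degrees is an assumed threshold.
    """
    return abs(face_yaw_deg) <= threshold_deg
```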
In an optional embodiment, the limb pose information is extracted from the user image according to the following steps:
detecting key points of target limbs of the user image, and detecting position information of each key point of the target limbs in the user image;
determining the limb pose information according to the position information of each key point of the target limb in the user image;
the determining the limb swinging direction and the limb swinging amplitude of the target user according to the continuously acquired limb pose information in the plurality of user images comprises the following steps:
and determining the limb swinging direction and the limb swinging amplitude of the target limb according to the continuously acquired position information of each key point of the target limb in the plurality of user images.
In this step, the server may first identify the key points of the target limb using the key point identification model, and after identifying the key points of the target limb, determine the position information of each key point of the target limb in the user image. After the position information of each key point of the target limb is obtained, the pose information of the target limb can be determined according to the position information of the key points.
After the pose information of the target limb in the plurality of continuously acquired user images is determined, the limb swinging direction and the limb swinging amplitude of the target limb between every two consecutive user images can be determined according to the corresponding pose information, and the overall limb swinging direction and amplitude of the target limb can then be determined according to the pose information corresponding to all of the user images.
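A minimal sketch of this step, assuming each frame yields the wrist keypoint position of the target limb; the keypoint choice, the left/right reduction of the direction, and the pixel-based amplitude are illustrative assumptions.

```python
from typing import List, Tuple

def swing_from_keypoints(wrist_track: List[Tuple[float, float]]) -> Tuple[str, float]:
    """Estimate swing direction and amplitude from consecutive wrist positions.

    `wrist_track` lists the wrist keypoint's (x, y) pixel position in each
    consecutively acquired user image. Direction is reduced to left/right for
    simplicity; amplitude is the accumulated horizontal displacement in pixels.
    """
    if len(wrist_track) < 2:
        return "none", 0.0
    dx_total = 0.0
    for (x0, _), (x1, _) in zip(wrist_track, wrist_track[1:]):
        dx_total += x1 - x0                      # per-frame horizontal displacement
    direction = "right" if dx_total > 0 else "left"
    return direction, abs(dx_total)
```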
In an alternative embodiment, the three-dimensional model information of the target historical relic is constructed according to the following steps:
acquiring picture materials of the target historical relic, and performing three-dimensional reconstruction based on the acquired picture materials to obtain an initial three-dimensional model;
and adjusting the initial three-dimensional model to obtain updated three-dimensional model information of the target historical relic.
In this step, picture materials of the target historical relic may first be obtained; the picture materials may include texture information, spatial structure information, color information and the like of the target historical relic. After the picture materials are obtained, three-dimensional reconstruction can be performed using them to obtain a rough initial three-dimensional model. The initial three-dimensional model can then be adjusted, with details of the model added, modified or deleted, to obtain the updated three-dimensional model information.
In an optional implementation manner, adjusting a display angle of the displayed three-dimensional model of the target historical relic according to the limb swinging direction, the limb swinging amplitude and the stored three-dimensional model information of the target historical relic comprises:
determining, according to the limb swinging direction and the limb swinging amplitude, rotation angle information in different three-dimensional coordinate directions corresponding to the three-dimensional model;
and adjusting the display angle of the displayed three-dimensional model of the target historical relic according to the rotation angle information.
In this step, the rotation direction of the three-dimensional model in the three-dimensional coordinate system can be determined according to the limb swing direction, and then the rotation degree of the three-dimensional model is determined according to the mapping relationship between the limb swing amplitude and the rotation degree, so as to obtain the rotation angle information of the three-dimensional model in different three-dimensional coordinate directions. After the rotation angle information is determined, the display angle of the three-dimensional model of the target historical relic can be adjusted according to the rotation direction and the rotation degree indicated by the rotation angle information.
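The mapping relationship described above might look like the following hedged sketch: a horizontal swing is mapped to rotation about the model's vertical (y) axis and a vertical swing to rotation about the horizontal (x) axis, with a linear gain converting swing amplitude (pixels) into degrees. The gain value, the axis assignment, and the use of a container like the RelicModelInfo sketch above are assumptions for illustration, not the patent's method.

```python
import numpy as np

DEGREES_PER_PIXEL = 0.2   # assumed linear gain between swing amplitude and rotation

def rotation_from_swing(direction: str, amplitude_px: float) -> np.ndarray:
    """Map a limb swing to rotation angle information (degrees about x, y, z)."""
    degrees = amplitude_px * DEGREES_PER_PIXEL
    if direction == "right":
        return np.array([0.0, +degrees, 0.0])    # spin the model to the right
    if direction == "left":
        return np.array([0.0, -degrees, 0.0])
    if direction == "up":
        return np.array([-degrees, 0.0, 0.0])    # tilt the model backwards
    if direction == "down":
        return np.array([+degrees, 0.0, 0.0])
    return np.zeros(3)

def adjust_display_angle(model_info, rotation_deg: np.ndarray) -> None:
    """Accumulate the rotation into the stored display angle (assumed attribute)."""
    model_info.display_angles_deg = (model_info.display_angles_deg + rotation_deg) % 360.0
```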
In an optional implementation manner, after adjusting the display angle of the displayed three-dimensional model of the target historical relic, the method further includes:
and displaying the cultural relic detail content introduction corresponding to the adjusted display angle.
In this step, after the display angle is adjusted, the introduction of the details of the cultural relic corresponding to the part of the target historical cultural relic displayed on the screen may be displayed on the display screen.
The cultural relic detail content introduction may include an introduction to the details of the part of the target historical relic shown in the picture.
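One way such a lookup could be implemented is sketched below; the angle ranges and the detail texts are purely illustrative assumptions.

```python
# Detail content keyed by the yaw range (degrees) of the current display angle.
DETAIL_BY_YAW_RANGE = [
    ((0.0, 90.0),    "Front view: inscription and glaze details."),
    ((90.0, 180.0),  "Right side: handle and repair traces."),
    ((180.0, 270.0), "Back view: kiln marks."),
    ((270.0, 360.0), "Left side: decorative pattern."),
]

def detail_for_angle(yaw_deg: float) -> str:
    """Return the relic detail introduction matching the displayed yaw angle."""
    yaw = yaw_deg % 360.0
    for (lo, hi), text in DETAIL_BY_YAW_RANGE:
        if lo <= yaw < hi:
            return text
    return ""
```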
In an alternative embodiment, the target limb comprises a hand of the target user.
Referring to fig. 2, fig. 3 and fig. 4, fig. 2 is a schematic diagram of a user image in a dynamic control method provided by an embodiment of the present disclosure; fig. 3 is a schematic diagram of a three-dimensional model before adjusting a display angle in a dynamic control method provided by an embodiment of the disclosure; fig. 4 is a schematic diagram of a three-dimensional model after adjusting a display angle in a dynamic control method provided by an embodiment of the present disclosure.
Fig. 2 includes fig. 2a and 2b, and fig. 2a and 2b show two consecutive user images. In fig. 2a and 2b, the target user faces the display screen, the target limb is a hand of the target user, the hand of the target user includes an arm and a palm, and in fig. 2a, the target user lifts the hand, and the hand is in a vertical state; in fig. 2b, the target user swings the hand to the right, with the hand tilted.
As shown in fig. 3, before the display angle is adjusted, the target historical relic is displayed facing the target user directly. As shown in fig. 4, after the angle is adjusted according to the user images in fig. 2, the target historical relic rotates toward the right, following the swing of the target limb, and its display angle is adjusted accordingly.
It will be understood by those skilled in the art that, in the methods of the present disclosure, the order in which the steps are written does not imply a strict order of execution and does not limit the implementation; the specific order of execution of the steps should be determined by their function and possible inherent logic.
Based on the same inventive concept, a dynamic control device corresponding to the dynamic control method is also provided in the embodiments of the present disclosure, and since the principle of solving the problem of the device in the embodiments of the present disclosure is similar to the dynamic control method described above in the embodiments of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 5, a schematic diagram of a dynamic control apparatus provided in an embodiment of the present disclosure is shown, where the apparatus includes: an acquisition module 510, an extraction module 520, a determination module 530, and an adjustment module 540, wherein:
an acquisition module 510, configured to acquire, in real time, a user image of a target user entering a preset detection area; an AR display screen is deployed in the exhibition hall at a set distance from the preset detection area, and a three-dimensional model of a target historical relic is displayed on the AR display screen;
an extracting module 520, configured to extract limb pose information from the user image;
a determining module 530, configured to determine, according to continuously obtained limb pose information in multiple user images, a limb swinging direction and a limb swinging amplitude of the target user;
and the adjusting module 540 is configured to adjust a display angle of the displayed three-dimensional model of the target historical relic according to the limb swinging direction, the limb swinging amplitude, and the stored three-dimensional model information of the target historical relic.
According to the embodiment of the disclosure, a user image of the target user can be acquired in real time, limb pose information can be extracted from the user image, the limb swinging direction and the limb swinging amplitude of the target user can be determined according to the limb pose information, and the display angle of the displayed three-dimensional model can be adjusted according to the determined limb swinging direction and limb swinging amplitude and the three-dimensional model information of the target historical relic, so that the user can conveniently and quickly control the display angle of the target historical relic through the limb swinging direction and the limb swinging amplitude.
Referring to fig. 6, a schematic diagram of another dynamic control apparatus provided in an embodiment of the present disclosure is shown, where the apparatus includes: an acquisition module 610, an extraction module 620, a determination module 630, an adjustment module 640, and a construction module 650; the construction module 650 is specifically configured to:
acquiring picture materials of the target historical relic, and performing three-dimensional reconstruction based on the acquired picture materials to obtain an initial three-dimensional model;
and adjusting the initial three-dimensional model to obtain updated three-dimensional model information of the target historical relic.
In an optional implementation manner, the extracting module 620 is specifically configured to:
after any one user image is obtained, detecting face information of the user image;
and after determining, according to the face information, that the angle of the face facing the display screen meets a preset condition, extracting the limb pose information of the target limb from the user image.
In an optional implementation, when extracting the limb pose information from the user image, the extracting module 620 is specifically configured to:
detecting key points of target limbs of the user image, and detecting position information of each key point of the target limbs in the user image;
determining the limb pose information according to the position information of each key point of the target limb in the user image;
the determining module 630 is specifically configured to:
and determining the limb swinging direction and the limb swinging amplitude of the target limb according to the continuously acquired position information of each key point of the target limb in the plurality of user images.
In an optional implementation manner, the adjusting module 640 is specifically configured to:
determining, according to the limb swinging direction and the limb swinging amplitude, rotation angle information in different three-dimensional coordinate directions corresponding to the three-dimensional model;
and adjusting the display angle of the displayed three-dimensional model of the target historical relic according to the rotation angle information.
In an optional embodiment, the adjusting module 640 is further configured to:
and displaying the cultural relic detail content introduction corresponding to the adjusted display angle.
In an alternative embodiment, the target limb comprises a hand of the target user.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
An embodiment of the present disclosure further provides a computer device 10, as shown in fig. 7, which is a schematic structural diagram of the computer device 10 provided in the embodiment of the present disclosure, including:
a processor 11 and a memory 12; the memory 12 stores machine-readable instructions executable by the processor 11, and when the computer device runs, the machine-readable instructions are executed by the processor 11 to perform the following steps:
acquiring a user image of a target user entering a preset detection area in real time; an AR display screen is deployed in the exhibition hall at a set distance from the preset detection area, and a three-dimensional model of a target historical relic is displayed on the AR display screen;
extracting limb pose information from the user image;
determining the limb swinging direction and the limb swinging amplitude of the target user according to the continuously acquired limb pose information in the plurality of user images;
and adjusting the display angle of the displayed three-dimensional model of the target historical relic according to the limb swinging direction, the limb swinging amplitude and the stored three-dimensional model information of the target historical relic.
For the specific execution process of the instruction, reference may be made to the steps of the dynamic control method described in the embodiments of the present disclosure, and details are not described here.
The embodiments of the present disclosure also provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the dynamic control method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the dynamic control method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing a program code, where instructions included in the program code may be used to execute steps of the dynamic control method described in the above method embodiments, which may be referred to specifically for the above method embodiments, and are not described herein again.
The embodiments of the present disclosure also provide a computer program which, when executed by a processor, implements any one of the methods of the foregoing embodiments. The computer program product may be embodied in hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a software development kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can still modify the technical solutions described in the foregoing embodiments, easily conceive of changes to them, or make equivalent substitutions for some of their technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present disclosure and shall all be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A dynamic control method, comprising:
acquiring a user image of a target user entering a preset detection area in real time; an AR display screen is deployed in the exhibition hall at a set distance from the preset detection area, and a three-dimensional model of a target historical relic is displayed on the AR display screen;
extracting limb pose information from the user image;
determining the limb swinging direction and the limb swinging amplitude of the target user according to the continuously acquired limb pose information in the plurality of user images;
and adjusting the display angle of the displayed three-dimensional model of the target historical relic according to the limb swinging direction, the limb swinging amplitude and the stored three-dimensional model information of the target historical relic.
2. The method of claim 1, wherein extracting limb pose information from the user image comprises:
after any one user image is obtained, detecting face information of the user image;
and after determining, according to the face information, that the angle of the face facing the display screen meets a preset condition, extracting the limb pose information of the target limb from the user image.
3. The method according to claim 1 or 2, characterized in that limb pose information is extracted from the user image according to the following steps:
detecting key points of target limbs of the user image, and detecting position information of each key point of the target limbs in the user image;
determining the limb pose information according to the position information of each key point of the target limb in the user image;
the determining the limb swinging direction and the limb swinging amplitude of the target user according to the continuously acquired limb pose information in the plurality of user images comprises the following steps:
and determining the limb swinging direction and the limb swinging amplitude of the target limb according to the continuously acquired position information of each key point of the target limb in the plurality of user images.
4. The method according to any one of claims 1 to 3, wherein the three-dimensional model information of the target historical relic is constructed according to the following steps:
acquiring picture materials of the target historical relic, and performing three-dimensional reconstruction based on the acquired picture materials to obtain an initial three-dimensional model;
and adjusting the initial three-dimensional model to obtain updated three-dimensional model information of the target historical relic.
5. The method according to any one of claims 1 to 4, wherein the adjusting of the display angle of the displayed three-dimensional model of the target historical relic according to the limb swinging direction and the limb swinging amplitude and the stored three-dimensional model information of the target historical relic comprises:
determining, according to the limb swinging direction and the limb swinging amplitude, rotation angle information in different three-dimensional coordinate directions corresponding to the three-dimensional model;
and adjusting the display angle of the displayed three-dimensional model of the target historical relic according to the rotation angle information.
6. The method according to any one of claims 1 to 5, wherein after adjusting the display angle of the displayed three-dimensional model of the target historical relic, the method further comprises:
and displaying the cultural relic detail content introduction corresponding to the adjusted display angle.
7. The method of any of claims 1-6, wherein the target limb comprises a hand of the target user.
8. A dynamic control apparatus, comprising:
the acquisition module is used for acquiring a user image of a target user entering a preset detection area in real time; an AR display screen is deployed in the exhibition hall at a set distance from the preset detection area, and a three-dimensional model of a target historical relic is displayed on the AR display screen;
the extraction module is used for extracting limb pose information from the user image;
the determining module is used for determining the limb swinging direction and the limb swinging amplitude of the target user according to the continuously acquired limb pose information in the plurality of user images;
and the adjusting module is used for adjusting the display angle of the displayed three-dimensional model of the target historical relic according to the limb swinging direction, the limb swinging amplitude and the stored three-dimensional model information of the target historical relic.
9. An electronic device, comprising: a processor and a memory storing machine-readable instructions executable by the processor; the processor is configured to execute the machine-readable instructions stored in the memory, and when the machine-readable instructions are executed, the processor performs the steps of the dynamic control method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon a computer program, which, when executed by an electronic device, executes the steps of the dynamic control method according to any one of claims 1 to 7.
CN202010514225.4A 2020-06-08 2020-06-08 Dynamic control method and device Pending CN111640206A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010514225.4A CN111640206A (en) 2020-06-08 2020-06-08 Dynamic control method and device

Publications (1)

Publication Number Publication Date
CN111640206A true CN111640206A (en) 2020-09-08

Family

ID=72331823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010514225.4A Pending CN111640206A (en) 2020-06-08 2020-06-08 Dynamic control method and device

Country Status (1)

Country Link
CN (1) CN111640206A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799263A (en) * 2012-06-19 2012-11-28 深圳大学 Posture recognition method and posture recognition control system
CN103440677A (en) * 2013-07-30 2013-12-11 四川大学 Multi-view free stereoscopic interactive system based on Kinect somatosensory device
CN105243375A (en) * 2015-11-02 2016-01-13 北京科技大学 Motion characteristics extraction method and device
CN107766773A (en) * 2016-08-17 2018-03-06 宁波原子智能技术有限公司 Various dimensions control method and control device based on gesture
CN106502401A (en) * 2016-10-31 2017-03-15 宇龙计算机通信科技(深圳)有限公司 A kind of display control method and device
CN106774938A (en) * 2017-01-16 2017-05-31 广州弥德科技有限公司 Man-machine interaction integrating device based on somatosensory device
CN106886284A (en) * 2017-01-20 2017-06-23 西安电子科技大学 A kind of Cultural relics in museum interactive system based on Kinect
CN107329671A (en) * 2017-07-05 2017-11-07 北京京东尚科信息技术有限公司 Model display methods and device
CN108596784A (en) * 2018-04-04 2018-09-28 内蒙古工业大学 A kind of intelligent grid comprehensive display system
CN108594999A (en) * 2018-04-20 2018-09-28 北京京东金融科技控股有限公司 Control method and device for panoramic picture display systems
CN110947181A (en) * 2018-09-26 2020-04-03 Oppo广东移动通信有限公司 Game picture display method, game picture display device, storage medium and electronic equipment
CN109344796A (en) * 2018-10-22 2019-02-15 Oppo广东移动通信有限公司 Information processing method and device, electronic equipment, computer readable storage medium
CN109670420A (en) * 2018-12-04 2019-04-23 上海商汤智能科技有限公司 Store the control method and device, storage terminal, electronic equipment, medium of terminal

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023103999A1 (en) * 2021-12-10 2023-06-15 北京字跳网络技术有限公司 3d target point rendering method and apparatus, and device and storage medium
CN117312477A (en) * 2023-11-28 2023-12-29 北京三月雨文化传播有限责任公司 AR technology-based indoor intelligent exhibition positioning method, device, equipment and medium
CN117312477B (en) * 2023-11-28 2024-02-20 北京三月雨文化传播有限责任公司 AR technology-based indoor intelligent exhibition positioning method, device, equipment and medium

Similar Documents

Publication Publication Date Title
JP6560480B2 (en) Image processing system, image processing method, and program
JP6423435B2 (en) Method and apparatus for representing a physical scene
CN106663334B (en) Method executed by computing device, mobile communication device and storage medium
JP5453246B2 (en) Camera-based user input for compact devices
EP2908239A1 (en) Image processing device, image processing method, and computer program product
EP3120108A1 (en) Information processing apparatus, information processing method, and program
JP6352208B2 (en) 3D model processing apparatus and camera calibration system
JP2017191576A (en) Information processor, control method information processor and program
NZ525717A (en) A method of tracking an object of interest using multiple cameras
JP6054831B2 (en) Image processing apparatus, image processing method, and image processing program
JP2014235634A (en) Finger operation detection device, finger operation detection method, finger operation detection program, and virtual object processing system
CN111679742A (en) Interaction control method and device based on AR, electronic equipment and storage medium
CN111638797A (en) Display control method and device
CN111640206A (en) Dynamic control method and device
JP7162079B2 (en) A recording medium for recording a method, system and computer program for remotely controlling a display device via head gestures
CN112313605A (en) Object placement and manipulation in augmented reality environments
KR20190048506A (en) Method and apparatus for providing virtual room
CN115439607A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN111640203A (en) Image processing method and device
CN114202640A (en) Data acquisition method and device, computer equipment and storage medium
CN112905014A (en) Interaction method and device in AR scene, electronic equipment and storage medium
CN111625100A (en) Method and device for presenting picture content, computer equipment and storage medium
CN111651052A (en) Virtual sand table display method and device, electronic equipment and storage medium
CN111638798A (en) AR group photo method, AR group photo device, computer equipment and storage medium
US20200211275A1 (en) Information processing device, information processing method, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination