WO2021109376A1 - Method and device for realizing a split-mirror effect, and related products - Google Patents

Method and device for realizing a split-mirror effect, and related products (一种分镜效果的实现方法、装置及相关产品)

Info

Publication number
WO2021109376A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
dimensional virtual
real
model
virtual
Application number
PCT/CN2020/082545
Other languages
English (en)
French (fr)
Chinese (zh)
Inventor
刘文韬
郑佳宇
黄展鹏
李佳桦
Original Assignee
深圳市商汤科技有限公司 (Shenzhen SenseTime Technology Co., Ltd.)
Application filed by 深圳市商汤科技有限公司 (Shenzhen SenseTime Technology Co., Ltd.)
Priority to KR1020227018465A (published as KR20220093342A)
Priority to JP2022528715A (published as JP7457806B2)
Publication of WO2021109376A1


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/005 - General purpose rendering architectures
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G06T13/205 - 3D [Three Dimensional] animation driven by audio data
    • G06T13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/003 - Navigation within 3D models or images
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features

Definitions

  • This application relates to the field of virtual technology, and in particular to a method, a device, and related products for realizing a split-mirror (storyboard) effect.
  • Virtual characters on a network are generally generated using motion capture technology: real-person images obtained by image recognition are analyzed, and the actions and expressions of the real person are transferred to the virtual character, so that the virtual character reproduces the movements and expressions of the real person.
  • To this end, the embodiments of the present application disclose a method, a device, and related products for realizing the split-mirror effect.
  • In a first aspect, an embodiment of the present application provides a method for realizing the split-mirror effect, including: obtaining a three-dimensional virtual model; and rendering the three-dimensional virtual model from at least two different lens angles to obtain the virtual images respectively corresponding to the at least two different lens angles.
  • The above method obtains a three-dimensional virtual model and renders it from at least two different lens angles to obtain the corresponding virtual images, so that the user can see the virtual images under different lens angles, providing a rich visual experience.
  • In a possible implementation, the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model. Before the three-dimensional virtual model is obtained, the above method further includes: obtaining a real image, where the real image includes a real-person image; performing feature extraction on the real-person image to obtain feature information, where the feature information includes action information of the real person; and generating the three-dimensional virtual model according to the feature information, so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real person.
  • In this way, the generated three-dimensional virtual character model reproduces the facial expressions and body movements of the real person, and an audience watching the virtual images corresponding to the three-dimensional virtual model can learn those expressions and movements, so that the audience can interact more flexibly with the live anchor.
  • In a possible implementation, acquiring the real image includes: acquiring a video stream and obtaining at least two frames of real images according to at least two frames of images in the video stream; correspondingly, performing feature extraction on the real-person images to obtain feature information includes: performing feature extraction on each frame of real-person image separately to obtain the corresponding feature information.
  • In this way, the three-dimensional virtual model changes in real time according to the multiple frames of real images collected, so that the user can see the dynamic change process of the three-dimensional virtual model under different lens angles, as sketched below.
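  • A minimal Python sketch of this per-frame pipeline, assuming OpenCV is used to read the video stream; extract_features, build_model, and render are hypothetical placeholders for the feature-extraction, model-generation, and rendering steps described in this application, not a prescribed implementation:

```python
import cv2  # OpenCV, used here only to read frames from the video stream


def process_stream(video_path, extract_features, build_model, render):
    """For each frame of the video stream: extract the real person's features,
    regenerate the 3D virtual model, and render it, so that the model changes
    in real time with the collected real images."""
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, real_image = capture.read()
        if not ok:
            break  # end of the video stream
        features = extract_features(real_image)  # per-frame feature information
        model = build_model(features)            # 3D virtual model M_i
        render(model)                            # render under the current lens angle
    capture.release()
```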
  • In a possible implementation, the real image further includes a real scene image, and the three-dimensional virtual model also includes a three-dimensional virtual scene model; before the three-dimensional virtual model is obtained, the above method further includes: constructing the three-dimensional virtual scene model based on the real scene image.
  • In this way, the method can use real scene images to construct the three-dimensional virtual scene in the three-dimensional virtual model, which offers more choice than selecting from a fixed set of preset three-dimensional virtual scenes alone.
  • In a possible implementation, acquiring at least two different lens angles includes: obtaining at least two different lens angles according to at least two frames of real images.
  • Since each frame of real image corresponds to one lens angle, multiple frames of real images correspond to multiple lens angles; therefore, at least two different lens angles can be obtained from at least two frames of real images and used to render the three-dimensional virtual model, providing users with a rich visual experience.
  • In a possible implementation, acquiring at least two different lens angles includes: obtaining at least two different lens angles according to the action information corresponding to the at least two frames of real images.
  • Determining the lens angle based on the action information of the real person in the real image makes it possible to magnify the action of the corresponding three-dimensional virtual character model in the image, so that the user can learn the real person's action by watching the virtual image, improving interactivity and fun.
  • In a possible implementation, acquiring at least two different lens angles includes: acquiring background music; determining a time set corresponding to the background music, where the time set includes at least two time periods; and acquiring the lens angle corresponding to each time period in the time set.
  • In a possible implementation, the at least two different lens angles include a first lens angle and a second lens angle, and rendering the three-dimensional virtual model from the at least two different lens angles to obtain the corresponding virtual images includes: rendering the three-dimensional virtual model from the first lens angle to obtain a first virtual image; rendering the three-dimensional virtual model from the second lens angle to obtain a second virtual image; and displaying an image sequence formed from the first virtual image and the second virtual image.
  • Rendering the three-dimensional virtual model from the first lens angle and the second lens angle respectively allows the user to view the three-dimensional virtual model under both lens angles, thereby providing users with a rich visual experience.
  • In a possible implementation, rendering the three-dimensional virtual model from the second lens angle to obtain the second virtual image includes: translating or rotating the three-dimensional virtual model under the first lens angle to obtain the three-dimensional virtual model under the second lens angle; and acquiring the second virtual image corresponding to the three-dimensional virtual model under the second lens angle, that is, the second virtual image presents the three-dimensional virtual model under the second lens angle.
  • In a possible implementation, displaying the image sequence formed from the first virtual image and the second virtual image includes: inserting a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
  • Inserting the a frames of virtual images between the first virtual image and the second virtual image lets the viewer see the entire change process from the first virtual image to the second virtual image, rather than only two discrete images (the first virtual image and the second virtual image), so that the audience can adapt to the visual difference between the first virtual image and the second virtual image.
  • In a possible implementation, the method further includes: performing beat detection on the background music to obtain a beat set of the background music, where the beat set includes multiple beats and each beat corresponds to a stage special effect; and adding the target stage special effects corresponding to the beat set to the three-dimensional virtual model.
  • In this way, stage special effects are added to the virtual scene where the virtual character model is located, presenting different stage effects to the audience and enhancing the audience's viewing experience.
  • An embodiment of the present application also provides a device for realizing the split-mirror effect, including: an acquiring unit configured to acquire a three-dimensional virtual model; and a split-mirror unit configured to render the three-dimensional virtual model from at least two different lens angles to obtain the virtual images respectively corresponding to the at least two different lens angles.
  • In a possible implementation, the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model, and the device further includes a feature extraction unit and a three-dimensional virtual model generation unit. The acquiring unit is further configured to acquire a real image before acquiring the three-dimensional virtual model, where the real image includes a real-person image; the feature extraction unit is configured to perform feature extraction on the real-person image to obtain feature information, where the feature information includes action information of the real person; and the three-dimensional virtual model generation unit is configured to generate the three-dimensional virtual model according to the feature information, so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real person.
  • In a possible implementation, the acquiring unit is configured to acquire a video stream and obtain at least two frames of real images according to at least two frames of images in the video stream; the feature extraction unit is configured to perform feature extraction on each frame of real-person image separately to obtain the corresponding feature information.
  • In a possible implementation, the real image further includes a real scene image, and the three-dimensional virtual model also includes a three-dimensional virtual scene model; the device further includes a three-dimensional virtual scene construction unit configured to construct the three-dimensional virtual scene model according to the real scene image before the acquiring unit acquires the three-dimensional virtual model.
  • In a possible implementation, the device further includes a lens angle acquisition unit configured to obtain at least two different lens angles according to at least two frames of real images.
  • In a possible implementation, the lens angle acquisition unit is configured to obtain at least two different lens angles according to the action information corresponding to the at least two frames of real images.
  • In a possible implementation, the lens angle acquisition unit is configured to acquire background music; determine a time set corresponding to the background music, where the time set includes at least two time periods; and acquire the lens angle corresponding to each time period in the time set.
  • In a possible implementation, the at least two different lens angles include a first lens angle and a second lens angle, and the split-mirror unit is configured to: render the three-dimensional virtual model from the first lens angle to obtain the first virtual image; render the three-dimensional virtual model from the second lens angle to obtain the second virtual image; and display the image sequence formed from the first virtual image and the second virtual image.
  • In a possible implementation, the split-mirror unit is configured to translate or rotate the three-dimensional virtual model under the first lens angle to obtain the three-dimensional virtual model under the second lens angle, and to acquire the second virtual image corresponding to the three-dimensional virtual model under the second lens angle.
  • In a possible implementation, the split-mirror unit is configured to insert a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
  • In a possible implementation, the device further includes a beat detection unit and a stage special effect generation unit; the beat detection unit is configured to perform beat detection on the background music to obtain a beat set of the background music, where the beat set includes multiple beats and each beat corresponds to a stage special effect; the stage special effect generation unit is configured to add the target stage special effect corresponding to the beat set to the three-dimensional virtual model.
  • An embodiment of the present application provides an electronic device, including a processor, a communication interface, and a memory; the memory is used to store instructions, the processor is used to execute the instructions, and the communication interface is used to communicate with other devices under the control of the processor, wherein the processor executes the instructions to enable the electronic device to implement any one of the methods in the first aspect described above.
  • An embodiment of the present application provides a computer-readable storage medium that stores a computer program, where the computer program is executed by hardware to implement any one of the methods in the first aspect.
  • The embodiments of the present application further provide a computer program product; when the computer program product is read and executed by a computer, any one of the methods in the first aspect is performed.
  • Fig. 1 is a schematic diagram of a specific application scenario provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a possible three-dimensional virtual model provided by an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a method for realizing a split-mirror effect provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an interpolation curve provided by an embodiment of the present application.
  • FIG. 5 is a schematic flowchart of a specific embodiment provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a splitting rule provided by an embodiment of the present application.
  • FIG. 7A is an effect diagram of a possible virtual image provided by an embodiment of the present application.
  • FIG. 7B is an effect diagram of a possible virtual image provided by an embodiment of the present application.
  • FIG. 7C is an effect diagram of a possible virtual image provided by an embodiment of the present application.
  • FIG. 7D is an effect diagram of a possible virtual image provided by an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of a device for implementing a split-mirror effect provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • The method, device, and related products for realizing the split-mirror effect provided by the embodiments of the present application can be applied in many fields such as social interaction, entertainment, and education; for example, they can be used for virtual live broadcast, for social interaction in virtual communities, to hold virtual concerts, or in classroom teaching, and so on.
  • The following takes virtual live broadcast as an example to describe a specific application scenario of the embodiments of the present application in detail.
  • Virtual live broadcast uses virtual characters instead of live anchors to conduct live broadcasts on a live broadcast platform. Because virtual characters have rich expressive power and fit the communication environment of social networks well, the virtual live broadcast industry is developing rapidly.
  • In virtual live broadcast, computer technologies such as facial expression capture, motion capture, and sound processing are usually used to apply the facial expressions and actions of the live anchor to the virtual character model, so as to realize interaction between the audience and the virtual anchor on video websites or social networking websites.
  • FIG. 1 is a schematic diagram of a specific application scenario provided by an embodiment of the present application.
  • In Figure 1, the server 120 transmits the generated virtual images to the user terminals 130 through the network, so that different viewers can watch the entire live broadcast through their corresponding user terminals 130.
  • However, the posture of the generated virtual anchor is related to the relative position between the camera device 110 and the live anchor. That is to say, the audience can only see the virtual character from a specific angle of view, and this specific angle depends on the relative position between the camera device 110 and the live anchor, so the presented live broadcast effect is unsatisfactory.
  • There are also problems such as stiff movements of the virtual anchor, unsmooth shot switching, or monotonous and boring shots, which cause visual fatigue and prevent the audience from having an immersive experience.
  • In the education field, teachers teach students through online video, but this teaching method is usually boring: the teacher in the video cannot learn the students' state in real time, and students can only see the teacher or the teaching handouts from a single perspective, which easily causes fatigue.
  • As a result, the effect of video teaching is greatly reduced.
  • In the entertainment field, a singer can hold a virtual concert in a recording studio to simulate the scene of a real concert; to achieve a real-concert effect, it is usually necessary to set up multiple cameras to shoot the singer, so this kind of virtual concert is complicated to operate and costly.
  • Moreover, although shooting with multiple cameras yields pictures under multiple lenses, it may introduce unsmooth lens switching, leaving users unable to adapt to the visual difference caused by switching between different lenses.
  • To solve the above problems, an embodiment of the present application provides a method for realizing the split-mirror effect.
  • The method generates a three-dimensional virtual model based on the collected real images, obtains multiple different lens angles according to the background music or the actions of the real person, and then renders the three-dimensional virtual model from the multiple different lens angles to obtain the virtual images corresponding to them, thereby simulating multiple virtual cameras shooting the three-dimensional virtual model in the virtual scene and improving the viewer's experience.
  • The method also analyzes the beats of the background music and adds corresponding stage special effects to the three-dimensional virtual model according to the beat information, presenting different stage effects to the audience and further enhancing the audience's viewing experience.
  • In the embodiments of the present application, the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene. Figure 2 shows a schematic diagram of a possible three-dimensional virtual model, in which the hands of the three-dimensional virtual character model are raised to the chest.
  • The upper left corner of Figure 2 also shows the real image collected by the split-mirror effect realization device, in which the real person is likewise raising both hands to the chest; that is, the three-dimensional virtual character model is consistent with the action of the real person. It can be understood that Figure 2 is only an example.
  • The real image collected by the split-mirror effect realization device can be a three-dimensional image or a two-dimensional image, and the number of persons in the real image can be one or more.
  • The action of the real person can be raising both hands to the chest, raising the left foot, or other actions.
  • Likewise, the number of three-dimensional virtual character models in the three-dimensional virtual model generated from the real-person image can be one or more, and the action of the three-dimensional virtual character model can be raising both hands to the chest, raising the left foot, or other actions; this is not specifically limited here.
  • In the embodiment of the present application, the split-mirror effect realization device shoots a real person to obtain multiple frames of real images I1, I2, ..., In, and performs feature extraction on the real images I1, I2, ..., In in chronological order to obtain the corresponding three-dimensional virtual models M1, M2, ..., Mn, where n is a positive integer. The real images I1, I2, ..., In correspond one to one with the three-dimensional virtual models M1, M2, ..., Mn; that is, each frame of real image is used to generate one three-dimensional virtual model.
  • Specifically, a three-dimensional virtual model can be obtained as follows:
  • Step 1: The split-mirror effect realization device obtains the real image Ii, where the real image Ii includes a real-person image, and i is a positive integer with 1 ≤ i ≤ n.
  • Step 2: The split-mirror effect realization device performs feature extraction on the real-person image in the real image Ii to obtain feature information, where the feature information includes action information of the real person.
  • In a possible implementation, obtaining the real image includes: obtaining a video stream and obtaining at least two frames of real images according to at least two frames of images in the video stream; correspondingly, performing feature extraction on the real-person image to obtain feature information includes: performing feature extraction on each frame of real-person image separately to obtain the corresponding feature information.
  • The feature information is used to control the posture of the three-dimensional virtual character model.
  • The action information in the feature information includes facial expression features and body action features. Facial expression features describe the emotional state of the person, such as happiness, sadness, surprise, fear, anger, or disgust; body action features describe the movement state of the real person, for example, raising the left hand, raising the right foot, or jumping.
  • The feature information can also include person information, where the person information includes multiple human-body key points of the real person and their corresponding position information; the human-body key points include facial key points and human-skeleton key points, and the position information includes the position coordinates of the human-body key points of the real person.
  • The split-mirror effect realization device extracts the real-person image from the real image Ii by performing image segmentation on the real image Ii, and then performs key point detection on the extracted real-person image to obtain the aforementioned human-body key points and their position information, where the human-body key points include facial key points and human-skeleton key points. For example, the human-body key points may be located in the head area, neck area, shoulder area, spine area, waist area, and so on.
  • Specifically, the split-mirror effect realization device inputs the real image Ii into a neural network for feature extraction, and the human-body key point information is extracted after computation by multiple convolutional layers.
  • The neural network is obtained through extensive training. It can be a Convolutional Neural Network (CNN), a Back Propagation Neural Network (BPNN), a Generative Adversarial Network (GAN), a Recurrent Neural Network (RNN), or the like, which is not specifically limited here.
  • For example, the split-mirror effect realization device can use a CNN to extract facial key points to obtain facial expression features, and can use a BPNN to extract human-skeleton key points to obtain skeleton features and body action features; this is not specifically limited here.
  • The above examples of the feature information used to drive the three-dimensional virtual character model are only illustrative; other feature information may also be included in practical applications, which is not specifically limited here.
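  • As a rough illustration only, the feature-extraction step might look as follows, where pose_network stands in for any trained key point detector (CNN, BPNN, etc.) and the key point naming scheme is an assumption:

```python
def extract_feature_info(person_image, pose_network):
    """Run a trained key point detector on the segmented real-person image and
    split the result into facial key points and skeleton key points, forming
    the feature information described above."""
    # Hypothetical detector output: {"face_left_eye": (x, y), "left_wrist": (x, y), ...}
    keypoints = pose_network(person_image)
    return {
        "face": {name: pos for name, pos in keypoints.items() if name.startswith("face_")},
        "skeleton": {name: pos for name, pos in keypoints.items() if not name.startswith("face_")},
    }
```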
  • Step 3: The split-mirror effect realization device generates the three-dimensional virtual character model in the three-dimensional virtual model Mi according to the feature information, so that the three-dimensional virtual character model in Mi corresponds to the action information of the real person in the real image Ii.
  • Specifically, the split-mirror effect realization device establishes a mapping relationship between the human-body key points of the real person and the human-body key points of the virtual character model through the above feature information, and then controls the expression and posture of the virtual character model according to the mapping relationship, so that the facial expressions and body movements of the virtual character model are consistent with those of the real person.
  • Optionally, the split-mirror effect realization device numbers the human-body key points of the real person to obtain label information for the key points, where the key points correspond one to one with the labels, and the same label information is used to mark the corresponding key points of the virtual character model. For example, if the label of the real person's left wrist is No. 1, the label of the three-dimensional virtual character model's left wrist is also No. 1; if the label of the real person's left arm is No. 2, the label of the three-dimensional virtual character model's left arm is also No. 2.
  • The key point labels of the real person are matched with the key point labels of the three-dimensional virtual character model, and the position information of the real person's key points is mapped onto the corresponding key points of the model, so that the three-dimensional virtual character model reproduces the facial expressions and body movements of the real person, as sketched below.
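  • In the sketch below, the label numbering and the data structures are illustrative assumptions, not part of the claimed method:

```python
def retarget(model_keypoints, real_keypoints):
    """Copy the position of each labeled key point of the real person onto the
    model key point carrying the same label, so that the virtual character
    model reproduces the real person's pose."""
    for label, position in real_keypoints.items():
        if label in model_keypoints:
            model_keypoints[label] = position
    return model_keypoints


# Example: label No. 1 is the left wrist, label No. 2 is the left arm.
real_keypoints = {1: (0.42, 0.55), 2: (0.40, 0.62)}
model_keypoints = {1: (0.0, 0.0), 2: (0.0, 0.0)}
model_keypoints = retarget(model_keypoints, real_keypoints)
```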
  • In some embodiments, the real image Ii also includes a real scene image, and the three-dimensional virtual model Mi also includes a three-dimensional virtual scene model; in this case, the above method for generating the three-dimensional virtual model Mi based on the real image Ii further includes: constructing the three-dimensional virtual scene model in Mi according to the real scene image in Ii.
  • Optionally, the split-mirror effect realization device first performs image segmentation on the real image Ii to obtain the real scene image in Ii; then extracts scene features from the real scene image, for example, the position, shape, and size features of objects in the real scene; and constructs the three-dimensional virtual scene model in Mi according to these scene features, so that the three-dimensional virtual scene model in Mi closely restores the real scene image in Ii.
  • The above only illustrates the process of generating the three-dimensional virtual model Mi from the real image Ii; the generation processes of the three-dimensional virtual models M1, M2, ..., Mi-1, Mi+1, ..., Mn are similar and are not described further here.
  • It should be noted that the three-dimensional virtual scene model in the three-dimensional virtual model can be constructed based on the real scene image in the real image, or can be a user-defined three-dimensional virtual scene model; likewise, the facial features of the three-dimensional virtual character model can be derived from the facial features of the real-person image in the real image, or can be user-defined, which is not specifically limited here.
  • FIG. 3 is a schematic flowchart of a method for realizing a split-mirror effect provided by an embodiment of the present application.
  • The method for realizing the split-mirror effect of this embodiment includes, but is not limited to, the following steps:
  • S101: The split-mirror effect realization device obtains a three-dimensional virtual model.
  • The three-dimensional virtual model is used to simulate the real person and the real scene. In the embodiments of the present application, the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model, and the three-dimensional virtual model is generated based on a real image.
  • The three-dimensional virtual character model is generated based on the real-person image included in the real image; it is used to simulate the real person in the real image, and its actions correspond to the actions of the real person.
  • The three-dimensional virtual scene model may be constructed based on the real scene image included in the real image, or may be a preset three-dimensional virtual scene model; when it is constructed from the real scene image, it can be used to simulate the real scene in the real image.
  • S102: The split-mirror effect realization device obtains at least two different lens angles.
  • The lens angle is used to indicate the position of the camera relative to the object it is shooting. For example, when the camera shoots directly above an object, it obtains a top view of the object; if the corresponding lens angle is denoted V, the image captured by the camera shows the object under lens angle V, that is, the top view of the object. A minimal sketch of one way to represent such a lens angle follows.
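  • One simple way to represent a lens angle is as a virtual camera pose; the following NumPy sketch builds a look-at view matrix under an assumed right-handed convention (an illustration, not a convention mandated by this application):

```python
import numpy as np


def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a view matrix for a virtual camera placed at `eye` and aimed at
    `target`; rendering the 3D virtual model with this matrix yields the
    virtual image under the corresponding lens angle."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)        # forward direction
    s = np.cross(f, up)
    s /= np.linalg.norm(s)        # right direction
    u = np.cross(s, f)            # corrected up direction
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view


# A top view: camera directly above the object, looking straight down
# (a different `up` vector is needed because the camera looks along -y).
top_view = look_at(eye=(0.0, 5.0, 0.0), target=(0.0, 0.0, 0.0), up=(0.0, 0.0, -1.0))
```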
  • In a possible implementation, obtaining at least two different lens angles includes: obtaining at least two different lens angles according to at least two frames of real images.
  • The real images can be taken by real cameras, and a real camera can occupy multiple positions relative to the real person; real images taken by multiple real cameras at different positions therefore show the real person under multiple different lens angles.
  • In a possible implementation, obtaining at least two different lens angles includes: obtaining at least two different lens angles according to the action information corresponding to the at least two frames of real images.
  • The action information includes the body movements and facial expressions of the real person in the real images. There are many kinds of body movements, for example, one or more of raising the right hand, raising the left foot, jumping, and so on; there are likewise many kinds of facial expressions, for example, one or more of smiling, crying, anger, and so on. The examples of body movements and facial expressions in this embodiment are not limited to the above description.
  • One action or a combination of multiple actions corresponds to one lens angle. For example, one action may correspond to lens angle V1; similarly, another action may correspond to lens angle V1 or lens angle V2, and yet another may correspond to lens angle V1, lens angle V2, or lens angle V3, and so on.
  • In a possible implementation, obtaining at least two different lens angles includes: obtaining background music; determining a time set corresponding to the background music, where the time set includes at least two time periods; and obtaining the lens angle corresponding to each time period in the time set.
  • Specifically, the real image may be one or more frames in a video stream, and the video stream includes image information and background music information, where one frame of image corresponds to one frame of music.
  • The background music information includes the background music and a corresponding time set, where the time set includes at least two time periods and each time period corresponds to a lens angle, as in the sketch below.
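  • The time set might be represented as follows; the names and units are assumptions, and the patent does not prescribe a data structure:

```python
# Each time period of the background music maps to a lens angle.
time_set = [
    ((0.0, 60.0), "V1"),     # seconds 0-60   -> lens angle V1
    ((60.0, 120.0), "V2"),   # seconds 60-120 -> lens angle V2
]


def angle_for_time(t, time_set):
    """Return the lens angle whose time period contains the moment t."""
    for (start, end), angle in time_set:
        if start <= t < end:
            return angle
    return None  # t falls outside every time period
```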
  • S103: The split-mirror effect realization device renders the three-dimensional virtual model from the at least two different lens angles to obtain the virtual images respectively corresponding to the at least two different lens angles.
  • In a possible implementation, the aforementioned at least two different lens angles include a first lens angle and a second lens angle, and rendering the three-dimensional virtual model from the at least two different lens angles to obtain the corresponding virtual images includes: S1031, rendering the three-dimensional virtual model from the first lens angle to obtain the first virtual image; S1032, rendering the three-dimensional virtual model from the second lens angle to obtain the second virtual image.
  • In a possible implementation, rendering the three-dimensional virtual model from the second lens angle to obtain the second virtual image includes: translating or rotating the three-dimensional virtual model under the first lens angle to obtain the three-dimensional virtual model under the second lens angle, and acquiring the second virtual image corresponding to the three-dimensional virtual model under the second lens angle.
  • It should be noted that the first lens angle can be obtained based on the real image, based on the action information corresponding to the real image, or based on the time set corresponding to the background music; the same applies to the second lens angle, which is not specifically limited in the embodiments of the present application.
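  • As a sketch of the translate-or-rotate step above, the model under the second lens angle can be obtained by applying a rigid transform to the model under the first lens angle, here a rotation about the vertical axis (an assumed axis; engines often equivalently rotate the camera instead):

```python
import numpy as np


def rotate_model_y(vertices, degrees):
    """Rotate the model's vertices (an (N, 3) array) about the vertical y-axis,
    yielding the 3D virtual model under a new lens angle."""
    theta = np.radians(degrees)
    rotation = np.array([
        [np.cos(theta),  0.0, np.sin(theta)],
        [0.0,            1.0, 0.0],
        [-np.sin(theta), 0.0, np.cos(theta)],
    ])
    return vertices @ rotation.T
```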
  • In a possible implementation, displaying the image sequence formed from the first virtual image and the second virtual image includes: inserting a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
  • Specifically, a frames of virtual images P1, P2, ..., Pa are inserted between the first virtual image and the second virtual image. The time points at which the virtual images P1, P2, ..., Pa are inserted are b1, b2, ..., ba, and on the interpolation curve the slope at the time points b1, b2, ..., ba first decreases monotonically and then increases monotonically, where a is a positive integer.
  • FIG. 4 shows a schematic diagram of an interpolation curve.
  • As shown in Figure 4, the split-mirror effect realization device obtains the first virtual image at minute 1 and the second virtual image at minute 2, where the first virtual image presents the front view of the three-dimensional virtual model and the second virtual image presents the left view of the three-dimensional virtual model.
  • The split-mirror effect realization device inserts multiple time points between minute 1 and minute 2 and inserts one virtual image at each time point: for example, virtual image P1 is inserted at minute 1.4, virtual image P2 at minute 1.65, virtual image P3 at minute 1.8, and virtual image P4 at minute 1.85, where P1 presents the effect of rotating the three-dimensional virtual model partway to the left, P2 presents the effect of rotating it to the left by 50 degrees, and P3 and P4 both present the effect of rotating it to the left by 90 degrees.
  • This allows the audience to see the entire process of the three-dimensional virtual model gradually changing from the front view to the left view, instead of only two discrete images (the front view and the left view of the three-dimensional virtual model), so that the audience can adapt to the visual difference when switching from the first virtual image to the second virtual image.
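  • A minimal sketch of such an interpolation curve; the cubic below is just one curve whose slope first decreases monotonically and then increases, matching the property described above (the patent does not fix a specific formula):

```python
import numpy as np


def ease(t):
    """Easing curve on [0, 1] with ease(0)=0 and ease(1)=1 whose slope
    12*(t-0.5)**2 first decreases monotonically, then increases."""
    return 4.0 * (t - 0.5) ** 3 + 0.5


def inserted_rotation_angles(total_degrees=90.0, a=4):
    """Rotation angle for each of the a inserted virtual images P_1..P_a when
    turning the model from the front view to the left view."""
    interior = np.linspace(0.0, 1.0, a + 2)[1:-1]  # interior time points b_1..b_a
    return [total_degrees * ease(t) for t in interior]
```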
  • The use of the stage special effects mentioned in the embodiments of this application to render the three-dimensional virtual model and present different stage effects to the audience is described in detail below, which specifically includes the following steps:
  • Step 1: The split-mirror effect realization device performs beat detection on the background music to obtain a beat set of the background music, where the beat set includes multiple beats and each beat corresponds to a stage special effect.
  • For example, the split-mirror effect realization device can use shaders and particle effects to render the three-dimensional virtual model: shaders can be used to realize the rotating-spotlight effect at the back of the virtual stage and the sound-wave effect of the virtual stage itself, while particle effects are used to add visual effects such as sparks, falling leaves, and meteors to the three-dimensional virtual model.
  • Step 2: The split-mirror effect realization device adds the target stage special effects corresponding to the beat set to the three-dimensional virtual model.
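  • A minimal sketch of this two-step flow, assuming librosa for beat tracking; the EFFECTS list and the round-robin assignment of effects to beats are illustrative assumptions, not part of the claimed method:

```python
import librosa

# Hypothetical pool of stage special effects (shader- or particle-based).
EFFECTS = ["spotlight", "sound_wave", "sparks", "falling_leaves", "meteor"]


def beat_set_with_effects(music_path):
    """Step 1: detect the beats of the background music; Step 2 then pairs
    each beat with a target stage special effect to add to the 3D model."""
    y, sr = librosa.load(music_path)
    _tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)
    return [(t, EFFECTS[i % len(EFFECTS)]) for i, t in enumerate(beat_times)]
```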
  • In summary, the above method generates a three-dimensional virtual model based on the collected real images and switches the lens angle according to the collected real images, the background music, and the actions of the real person, thereby simulating the effect of multiple virtual cameras shooting the three-dimensional virtual model in the virtual scene and improving the viewer's experience.
  • The method also analyzes the beats of the background music and adds corresponding stage special effects to the virtual images according to the beat information, presenting different stage effects to the audience and further enhancing the audience's viewing experience.
  • FIG. 5 shows a schematic flowchart of a specific embodiment.
  • S201: The split-mirror effect realization device obtains a real image and background music, and obtains a first lens angle according to the real image. When the background music plays, the real person acts along with the background music, and a real camera shoots the real person to obtain the real image.
  • S202: The split-mirror effect realization device generates a three-dimensional virtual model according to the real image, where the three-dimensional virtual model is obtained at a first moment.
  • S203: The split-mirror effect realization device performs beat detection on the background music to obtain the beat set of the background music, and adds the target stage special effect corresponding to the beat set to the three-dimensional virtual model.
  • S204: The split-mirror effect realization device renders the three-dimensional virtual model from the first lens angle to obtain a first virtual image corresponding to the first lens angle.
  • S205: The split-mirror effect realization device determines the time set corresponding to the background music, where the time set includes multiple time periods and each of the multiple time periods corresponds to a lens angle.
  • S206: The split-mirror effect realization device judges whether the action information database contains the action information; if the action information database does not contain the action information, S207-S209 are executed, and if it does, S210-S212 are executed.
  • Here, the action information is the action information of the real person in the real image, and the action information database includes multiple pieces of action information, each of which corresponds to a lens angle.
  • S207: The split-mirror effect realization device determines, according to the time set, the second lens angle corresponding to the time period containing the first moment.
  • S208: The split-mirror effect realization device renders the three-dimensional virtual model from the second lens angle to obtain a second virtual image corresponding to the second lens angle.
  • S209: The split-mirror effect realization device displays an image sequence formed from the first virtual image and the second virtual image.
  • S210: The split-mirror effect realization device determines, according to the action information, the third lens angle corresponding to the action information.
  • S211: The split-mirror effect realization device renders the three-dimensional virtual model from the third lens angle to obtain a third virtual image corresponding to the third lens angle.
  • S212: The split-mirror effect realization device displays an image sequence formed from the first virtual image and the third virtual image. The angle-selection logic of S206-S212 is sketched below.
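  • In the sketch below, action_db and angle_for_time (the time-set lookup sketched earlier) are illustrative assumptions:

```python
def choose_next_angle(action, moment, action_db, time_set):
    """If the detected action is in the action information database, use the
    action's lens angle (S210-S212); otherwise fall back to the lens angle of
    the time period containing the current moment (S207-S209)."""
    if action in action_db:
        return action_db[action]             # third lens angle, from the action
    return angle_for_time(moment, time_set)  # second lens angle, from the time set
```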
  • Further, an embodiment of the present application provides a schematic diagram of a split-mirror rule as shown in FIG. 6; performing split-mirror processing and stage special-effect processing on the virtual images according to the rule shown in FIG. 6 yields the effect diagrams of the four virtual images shown in FIGS. 7A-7D.
  • In the 1st minute, the split-mirror effect realization device shoots the real person under lens angle V1 to obtain the real image I1 (as shown in the upper left corner of Fig. 7A), and then obtains the three-dimensional virtual model M1 according to the real image I1.
  • The split-mirror effect realization device performs beat detection on the background music to determine that the beat corresponding to the 1st minute is B1, obtains the stage special effect W1 for the 1st minute according to beat B1, and adds the stage special effect W1 to the three-dimensional virtual model M1. The device determines, according to the preset lens script, that the lens angle corresponding to the 1st minute (referred to as the time lens angle) is V1. The device also detects that the action of the real person in the 1st minute is raising both hands to the chest, and this action is not in the action information database, that is, there is no lens angle corresponding to the action (referred to as the action lens angle); the split-mirror effect realization device therefore displays the virtual image shown in FIG. 7A, which has the same lens angle as the real image I1.
  • In the 2nd minute, the split-mirror effect realization device shoots the real person under lens angle V1 to obtain the real image I2 (as shown in the upper left corner of Fig. 7B), and then obtains the three-dimensional virtual model M2 according to the real image I2.
  • The split-mirror effect realization device performs beat detection on the background music to determine that the beat corresponding to the 2nd minute is B2, obtains the stage special effect W2 for the 2nd minute according to beat B2, and adds the stage special effect W2 to the three-dimensional virtual model M2. The device determines, according to the preset lens script, that the lens angle corresponding to the 2nd minute (the time lens angle) is V2. The device also detects that the action of the real person in the 2nd minute is raising both hands, and this action is not in the action information database, that is, there is no corresponding action lens angle; the split-mirror effect realization device therefore rotates the three-dimensional virtual model M2 to the upper left to obtain the virtual image corresponding to lens angle V2. It can be seen that, with the stage special effect W2 added to the three-dimensional virtual model M2, the virtual image shown in FIG. 7B presents a stage effect different from that of FIG. 7A.
  • In the 3rd minute, the split-mirror effect realization device shoots the real person under lens angle V1 to obtain the real image I3 (as shown in the upper left corner of Fig. 7C), and then obtains the three-dimensional virtual model M3 according to the real image I3.
  • The split-mirror effect realization device performs beat detection on the background music to determine that the beat corresponding to the 3rd minute is B3, obtains the stage special effect W3 for the 3rd minute according to beat B3, and adds the stage special effect W3 to the three-dimensional virtual model M3. The device determines, according to the preset lens script, that the lens angle corresponding to the 3rd minute (the time lens angle) is V2. The device also detects that the action of the real person in the 3rd minute is raising the left foot, and the lens angle corresponding to this action (the action lens angle) is V3; the split-mirror effect realization device therefore rotates the three-dimensional virtual model M3 to the left to obtain the virtual image corresponding to lens angle V3.
  • In the 4th minute, the split-mirror effect realization device shoots the real person under lens angle V1 to obtain the real image I4 (as shown in the upper left corner of Fig. 7D), and then obtains the three-dimensional virtual model M4 according to the real image I4.
  • The split-mirror effect realization device performs beat detection on the background music to determine that the beat corresponding to the 4th minute is B4, obtains the stage special effect W4 for the 4th minute according to beat B4, and adds the stage special effect W4 to the three-dimensional virtual model M4. The device determines, according to the preset lens script, that the lens angle corresponding to the 4th minute (the time lens angle) is V4. The device also detects that the action of the real person in the 4th minute is standing, and the lens angle corresponding to this action (the action lens angle) is also V4; the split-mirror effect realization device therefore rotates the three-dimensional virtual model M4 to the right to obtain the virtual image corresponding to lens angle V4. With the stage special effect W4 added to the three-dimensional virtual model M4, the virtual image shown in FIG. 7D and the virtual image shown in FIG. 7C present different stage effects.
  • The split-mirror effect realization device provided in the embodiments of the present application may be a software device or a hardware device.
  • When the split-mirror effect realization device is a software device, it can be deployed separately on a computing device in a cloud environment, or deployed separately on a terminal device.
  • When the split-mirror effect realization device is a hardware device, its internal unit modules can also be divided in multiple ways; each module can be a software module, a hardware module, or partly a software module and partly a hardware module, which is not limited in this application.
  • FIG. 8 shows an exemplary division. As shown in FIG. 8, a device 800 for realizing the split-mirror effect provided by an embodiment of the present application includes: an acquiring unit 810 configured to acquire a three-dimensional virtual model; and a split-mirror unit 820 configured to render the three-dimensional virtual model from at least two different lens angles to obtain virtual images respectively corresponding to the at least two different lens angles.
  • In some embodiments, the three-dimensional virtual model includes a three-dimensional virtual character model in a three-dimensional virtual scene model, and the above device further includes a feature extraction unit 830 and a three-dimensional virtual model generation unit 840.
  • The acquiring unit 810 is further configured to acquire a real image before acquiring the three-dimensional virtual model, where the real image includes a real-person image.
  • The feature extraction unit 830 is configured to perform feature extraction on the real-person image to obtain feature information, where the feature information includes action information of the real person.
  • The three-dimensional virtual model generation unit 840 is configured to generate the three-dimensional virtual model according to the feature information, so that the action information of the three-dimensional virtual character model in the three-dimensional virtual model corresponds to the action information of the real person.
  • In some embodiments, the acquiring unit 810 is configured to acquire a video stream and obtain at least two frames of real images according to at least two frames of images in the video stream; the feature extraction unit 830 is configured to perform feature extraction on each frame of real-person image separately to obtain the corresponding feature information.
  • In some embodiments, the real image further includes a real scene image, and the three-dimensional virtual model further includes a three-dimensional virtual scene model; the above device further includes a three-dimensional virtual scene construction unit 850 configured to construct the three-dimensional virtual scene model according to the real scene image before the acquiring unit acquires the three-dimensional virtual model.
  • In some embodiments, the above device further includes a lens angle acquisition unit 860 configured to obtain at least two different lens angles.
  • In some embodiments, the lens angle acquisition unit 860 is configured to obtain at least two different lens angles according to at least two frames of real images.
  • In some embodiments, the lens angle acquisition unit 860 is configured to obtain at least two different lens angles according to the action information corresponding to the at least two frames of real images.
  • In some embodiments, the lens angle acquisition unit 860 is configured to acquire background music; determine the time set corresponding to the background music, where the time set includes at least two time periods; and obtain the lens angle corresponding to each time period in the time set.
  • In some embodiments, the at least two different lens angles include a first lens angle and a second lens angle, and the split-mirror unit 820 is configured to: render the three-dimensional virtual model from the first lens angle to obtain the first virtual image; render the three-dimensional virtual model from the second lens angle to obtain the second virtual image; and display the image sequence formed from the first virtual image and the second virtual image.
  • In some embodiments, the split-mirror unit 820 is configured to translate or rotate the three-dimensional virtual model under the first lens angle to obtain the three-dimensional virtual model under the second lens angle, and to acquire the second virtual image corresponding to the three-dimensional virtual model under the second lens angle.
  • In some embodiments, the split-mirror unit 820 is configured to insert a frames of virtual images between the first virtual image and the second virtual image, so that the first virtual image switches smoothly to the second virtual image, where a is a positive integer.
  • In some embodiments, the above device further includes a beat detection unit 870 and a stage special effect generation unit 880. The beat detection unit 870 is configured to perform beat detection on the background music to obtain a beat set of the background music, where the beat set includes multiple beats and each beat corresponds to a stage special effect; the stage special effect generation unit 880 is configured to add the target stage special effect corresponding to the beat set to the three-dimensional virtual model.
  • the above-mentioned split-mirror effect realization device generates a three-dimensional virtual model according to the collected real image, and obtains multiple lens perspectives according to the collected real image, background music, and the actions of real characters, and uses multiple lens perspectives to perform the three-dimensional virtual model.
  • the corresponding lens angle of view is switched to simulate the effect of multiple virtual cameras shooting the 3D virtual model in the virtual scene, so that the user can see the 3D virtual model under different lens angles, which improves the viewer’s viewing experience .
• the device also analyzes the beats of the background music and, according to the beat information, adds the corresponding stage special effects to the three-dimensional virtual model, presenting different stage effects to the audience and further enhancing the audience's live viewing experience.
  • an embodiment of the present application provides a schematic structural diagram of an electronic device 900, and the foregoing device for implementing the split-mirror effect is applied to the electronic device 900.
• the electronic device 900 includes a processor 910, a communication interface 920, and a memory 930, where the processor 910, the communication interface 920, and the memory 930 may be coupled through a bus 940.
• the processor 910 may be a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device (PLD), a transistor logic device, a hardware component, or any combination thereof.
  • the processor 910 may implement or execute various exemplary methods described in conjunction with the disclosure of the present application. Specifically, the processor 910 reads the program code stored in the memory 930, and cooperates with the communication interface 920 to execute part or all of the steps of the method executed by the device for implementing the split-mirror effect in the foregoing embodiment of the present application.
  • the communication interface 920 can be a wired interface or a wireless interface for communicating with other modules or devices.
• the wired interface may be an Ethernet interface, a Controller Area Network (CAN) interface, a Local Interconnect Network (LIN) interface, or a FlexRay interface.
• the wireless interface may be a cellular network interface or a wireless local area network interface.
  • the aforementioned communication interface 920 may be connected to an input/output device 950, and the input/output device 950 may include other terminal devices such as a mouse, a keyboard, and a microphone.
• the memory 930 may include a volatile memory, such as a random access memory (RAM); the memory 930 may also include a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 930 may also include a combination of the foregoing types of memory.
  • the memory 930 may store program codes and program data.
• the program code consists of the code of some or all of the units in the above-mentioned split-mirror effect realization device 800, for example, the code of the acquisition unit 810, the code of the splitting unit 820, the code of the feature extraction unit 830, and the code of the three-dimensional virtual model generation unit 840.
  • the program data is data generated during the operation of the split-mirror effect realization device 800, such as real image data, three-dimensional virtual model data, lens angle data, background music data, virtual image data, and so on.
• the bus 940 may be a Controller Area Network (CAN) bus or another internal bus that interconnects the various systems or devices.
• the bus 940 may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
  • the electronic device 900 may include more or fewer components than those shown in FIG. 9, or may have different component configurations.
  • the embodiment of the present application also provides a computer-readable storage medium.
• the above-mentioned computer-readable storage medium stores a computer program, and the computer program is executed by hardware (such as a processor) to implement some or all of the steps of the method for realizing the split-mirror effect in the embodiments of the present application.
  • the embodiment of the present application also provides a computer program product.
• when the computer program product runs on the above-mentioned device for realizing the split-mirror effect or on the electronic device, it executes some or all of the steps of the above-mentioned method for realizing the split-mirror effect.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
• the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
• the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, an SSD).
  • the disclosed device may also be implemented in other ways.
  • the device embodiments described above are only illustrative.
• the division of the units is only a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
• the mutual coupling, direct coupling, or communication connection displayed or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
• the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place, or they may be distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions in the embodiments of the present application.
  • the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the integrated unit may be implemented in the form of hardware or software functional unit.
• if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
• the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium.
• the software product includes a number of instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application.
• the aforementioned storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)
PCT/CN2020/082545 2019-12-03 2020-03-31 一种分镜效果的实现方法、装置及相关产品 WO2021109376A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020227018465A KR20220093342A (ko) 2019-12-03 2020-03-31 분할 미러 효과의 구현 방법, 장치 및 관련 제품
JP2022528715A JP7457806B2 (ja) 2019-12-03 2020-03-31 レンズ分割の実現方法、装置および関連製品

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911225211.4 2019-12-03
CN201911225211.4A CN111080759B (zh) 2019-12-03 2019-12-03 一种分镜效果的实现方法、装置及相关产品

Publications (1)

Publication Number Publication Date
WO2021109376A1 true WO2021109376A1 (zh) 2021-06-10

Family

ID=70312713

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/082545 WO2021109376A1 (zh) 2019-12-03 2020-03-31 一种分镜效果的实现方法、装置及相关产品

Country Status (5)

Country Link
JP (1) JP7457806B2 (ja)
KR (1) KR20220093342A (ja)
CN (1) CN111080759B (ja)
TW (1) TWI752502B (ja)
WO (1) WO2021109376A1 (ja)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113630646A (zh) * 2021-07-29 2021-11-09 北京沃东天骏信息技术有限公司 Data processing method and apparatus, device, and storage medium
CN114900743A (zh) * 2022-04-28 2022-08-12 中德(珠海)人工智能研究院有限公司 Scene rendering transition method and system based on video push streaming
CN115883814A (zh) * 2023-02-23 2023-03-31 阿里巴巴(中国)有限公司 Method, apparatus, and device for playing a real-time video stream

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI762375B (zh) * 2021-07-09 2022-04-21 國立臺灣大學 Semantic segmentation error detection system
CN114157879A (zh) * 2021-11-25 2022-03-08 广州林电智能科技有限公司 Full-scene virtual live-streaming processing device
CN114630173A (zh) * 2022-03-03 2022-06-14 北京字跳网络技术有限公司 Virtual object driving method and apparatus, electronic device, and readable storage medium
CN114745598B (zh) * 2022-04-12 2024-03-19 北京字跳网络技术有限公司 Video data display method and apparatus, electronic device, and storage medium
CN117014651A (zh) * 2022-04-29 2023-11-07 北京字跳网络技术有限公司 Video generation method and apparatus
CN115442542B (zh) * 2022-11-09 2023-04-07 北京天图万境科技有限公司 Split-mirror method and apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106157359A (zh) * 2015-04-23 2016-11-23 中国科学院宁波材料技术与工程研究所 Design method for a virtual scene experience system
CN106295955A (zh) * 2016-07-27 2017-01-04 邓耀华 Augmented-reality-based customer-to-factory shoe customization system and implementation method
US10068376B2 (en) * 2016-01-11 2018-09-04 Microsoft Technology Licensing, Llc Updating mixed reality thumbnails
CN108604121A (zh) * 2016-05-10 2018-09-28 谷歌有限责任公司 Two-handed object manipulation in virtual reality
CN108830894A (zh) * 2018-06-19 2018-11-16 亮风台(上海)信息科技有限公司 Augmented-reality-based remote guidance method, apparatus, terminal, and storage medium

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201333882A (zh) 2012-02-14 2013-08-16 Univ Nat Taiwan Augmented reality device and method thereof
US20150049078A1 (en) * 2013-08-15 2015-02-19 Mep Tech, Inc. Multiple perspective interactive image projection
CN106385576B (zh) * 2016-09-07 2017-12-08 深圳超多维科技有限公司 Stereoscopic virtual reality live-streaming method, apparatus, and electronic device
CN107103645B (zh) * 2017-04-27 2018-07-20 腾讯科技(深圳)有限公司 Virtual reality media file generation method and apparatus
CN107194979A (zh) 2017-05-11 2017-09-22 上海微漫网络科技有限公司 Scene synthesis method and system for a virtual character
US10278001B2 (en) * 2017-05-12 2019-04-30 Microsoft Technology Licensing, Llc Multiple listener cloud render with enhanced instant replay
JP6469279B1 (ja) * 2018-04-12 2019-02-13 株式会社バーチャルキャスト Content distribution server, content distribution system, content distribution method, and program
CN108538095A (zh) * 2018-04-25 2018-09-14 惠州卫生职业技术学院 Medical teaching system and method based on virtual reality technology
JP6595043B1 (ja) * 2018-05-29 2019-10-23 株式会社コロプラ Game program, method, and information processing apparatus
CN108961376A (zh) * 2018-06-21 2018-12-07 珠海金山网络游戏科技有限公司 Method and system for rendering a three-dimensional scene in real time in virtual idol live streaming
CN108833740B (zh) * 2018-06-21 2021-03-30 珠海金山网络游戏科技有限公司 Real-time teleprompter method and apparatus based on three-dimensional animation live streaming
CN108877838B (zh) * 2018-07-17 2021-04-02 黑盒子科技(北京)有限公司 Music special-effect matching method and apparatus
JP6538942B1 (ja) * 2018-07-26 2019-07-03 株式会社Cygames Information processing program, server, information processing system, and information processing apparatus
CN110139115B (zh) * 2019-04-30 2020-06-09 广州虎牙信息科技有限公司 Keypoint-based avatar pose control method, apparatus, and electronic device
CN110335334A (zh) * 2019-07-04 2019-10-15 北京字节跳动网络技术有限公司 Virtual avatar driven display method and apparatus, electronic device, and storage medium
CN110427110B (zh) * 2019-08-01 2023-04-18 广州方硅信息技术有限公司 Live-streaming method and apparatus, and live-streaming server


Also Published As

Publication number Publication date
TW202123178A (zh) 2021-06-16
CN111080759B (zh) 2022-12-27
KR20220093342A (ko) 2022-07-05
JP7457806B2 (ja) 2024-03-28
JP2023501832A (ja) 2023-01-19
CN111080759A (zh) 2020-04-28
TWI752502B (zh) 2022-01-11

Similar Documents

Publication Publication Date Title
WO2021109376A1 (zh) 一种分镜效果的实现方法、装置及相关产品
CN111970535B (zh) Virtual live-streaming method, apparatus, system, and storage medium
KR102503413B1 (ko) Animation interaction method, apparatus, device, and storage medium
WO2022001593A1 (zh) Video generation method and apparatus, storage medium, and computer device
CN113240782B (zh) Streaming media generation method and apparatus based on a virtual character
US9654734B1 (en) Virtual conference room
CN110968736B (zh) Video generation method and apparatus, electronic device, and storage medium
US20160110922A1 (en) Method and system for enhancing communication by using augmented reality
KR102491140B1 (ko) Virtual avatar generation method and apparatus
JP6683864B1 (ja) Content control system, content control method, and content control program
CN109035415B (zh) Virtual model processing method, apparatus, device, and computer-readable storage medium
CN113840049A (zh) Image processing method, video-stream scene switching method, apparatus, device, and medium
US10955911B2 (en) Gazed virtual object identification module, a system for implementing gaze translucency, and a related method
CN114363689B (zh) Live-streaming control method and apparatus, storage medium, and electronic device
KR102200239B1 (ko) Real-time CG video broadcast service system
JP2001051579A (ja) Video display method, video display apparatus, and recording medium storing a video display program
JP2021009351A (ja) Content control system, content control method, and content control program
JP2021006886A (ja) Content control system, content control method, and content control program
WO2023029289A1 (zh) Model evaluation method and apparatus, storage medium, and electronic device
US20240048780A1 (en) Live broadcast method, device, storage medium, electronic equipment and product
WO2022160867A1 (zh) Remote reproduction method, system, apparatus, device, medium, and program product
Arita et al. Non-verbal human communication using avatars in a virtual space
CN114550293A (zh) Motion correction method and apparatus, storage medium, and electronic device
Lung Returning back the initiative of spatial relations between inner and outer space from images
Naugle et al. Reflections on Heidegger: Performing Translations in Active Space Environments

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20897576

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022528715

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 20227018465

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.10.2022)

122 Ep: pct application non-entry in european phase

Ref document number: 20897576

Country of ref document: EP

Kind code of ref document: A1