WO2022252823A1 - Live video generation method and apparatus - Google Patents

Live video generation method and apparatus

Info

Publication number
WO2022252823A1
Authority
WO
WIPO (PCT)
Prior art keywords
forearm
virtual model
multiple parts
real time
around
Prior art date
Application number
PCT/CN2022/086052
Other languages
English (en)
French (fr)
Inventor
南天骄
程晗
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司 filed Critical 北京字跳网络技术有限公司
Publication of WO2022252823A1 publication Critical patent/WO2022252823A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Definitions

  • the present application relates to the technical field of video processing, for example, to a method and device for generating live video.
  • virtual live broadcast can synchronize the movements of a physical object to a virtual model in real time through 3D modeling and real-time motion capture, so that the anchor-side user can broadcast live through the image of the virtual model, which greatly improves the flexibility of live broadcasting.
  • in the related art, the motion of the physical object is synchronized to the virtual model by using a simple retargeting technique to migrate the motion of the physical object onto the virtual model.
  • however, this synchronization method can only make the overall posture of the virtual model follow the physical object; it cannot synchronize the details of the motion, so the movements of the virtual model in the live video look unnatural.
  • the present application provides a method and device for generating a live video, which can avoid unnatural movements of a virtual model in a live video generated by a virtual live broadcast method in the related art.
  • the embodiments of the present application provide a method for generating live video, including:
  • acquiring first feature data of the entity object in real time, wherein the entity object includes: a torso, an upper arm connected to the torso through a shoulder joint, and a forearm connected to the upper arm through an elbow joint; the first feature data is used to characterize rotation angles of multiple parts of the forearm around an axis direction, and the rotation angles of the multiple parts of the forearm around the axis direction are positively correlated with the distances from the multiple parts of the forearm to the elbow joint of the entity object;
  • controlling, based on the first feature data acquired in real time, rotation angles of multiple parts of a forearm skin of a virtual model around the axis direction, wherein the virtual model includes: a torso, an upper arm connected to the torso through a shoulder joint, and a forearm connected to the upper arm through an elbow joint; the forearm skin covers the outer surface of the forearm of the virtual model, and the rotation angles of the multiple parts of the forearm skin around the axis direction are positively correlated with the distances from the multiple parts of the forearm skin to the elbow joint of the virtual model;
  • generating image frames of the live video according to the virtual model and the rotation angles of the multiple parts of the forearm skin of the virtual model around the axis direction.
  • the embodiment of the present application provides a live video generation device, including:
  • the acquisition unit is configured to acquire the first characteristic data of the entity object in real time;
  • wherein the entity object includes: a torso, an upper arm connected to the torso through a shoulder joint, and a forearm connected to the upper arm through an elbow joint; the first feature data is used to characterize rotation angles of multiple parts of the forearm around an axis direction, and the rotation angles of the multiple parts of the forearm around the axis direction are positively correlated with the distances from the multiple parts of the forearm to the elbow joint of the entity object;
  • the processing unit is configured to control, based on the first feature data acquired in real time, rotation angles of multiple parts of a forearm skin of a virtual model around the axis direction, wherein the virtual model includes: a torso, an upper arm connected to the torso through a shoulder joint, and a forearm connected to the upper arm through an elbow joint; the forearm skin covers the outer surface of the forearm of the virtual model, and the rotation angles of the multiple parts of the forearm skin around the axis direction are positively correlated with the distances from the multiple parts of the forearm skin to the elbow joint of the virtual model;
  • the generation unit is configured to generate image frames of the live video according to the virtual model and the rotation angles of the multiple parts of the forearm skin of the virtual model around the axis direction.
  • an embodiment of the present application provides an electronic device, including: a memory and a processor, wherein the memory is configured to store a computer program, and the processor is configured to, when invoking the computer program, execute the live video generation method described in the implementation manner of the first aspect.
  • the embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the live video generation method described in the implementation manner of the first aspect is implemented.
  • an embodiment of the present application provides a computer program product, which, when running on a computer, enables the computer to implement the live video generation method described in the implementation manner of the first aspect.
  • FIG. 1 is a schematic diagram of an application scenario of a live video generation method provided in an embodiment of the present application;
  • FIG. 2 is the first flowchart of the steps of the live video generation method provided in an embodiment of the present application;
  • FIG. 3 is a schematic diagram of the forearm structure of the virtual model provided in an embodiment of the present application;
  • FIG. 4 is the second flowchart of the steps of the live video generation method provided in an embodiment of the present application;
  • FIG. 5 is the first structural schematic diagram of a live video generation device provided in an embodiment of the present application;
  • FIG. 6 is the second structural schematic diagram of the live video generation device provided in an embodiment of the present application;
  • FIG. 7 is a schematic diagram of a hardware structure of an electronic device provided in an embodiment of the present application.
  • the terms "first" and "second" in the specification and claims of the present application are used to distinguish objects, rather than to describe a specific order of the objects. For example, the first correspondence and the second correspondence are used to distinguish different correspondences, not to describe a specific order of the correspondences.
  • words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design scheme described as "exemplary" or "for example" in the embodiments of the present application shall not be interpreted as being more preferred or more advantageous than other embodiments or design schemes. Rather, the use of words such as "exemplary" or "for example" is intended to present related concepts in a concrete manner.
  • the meaning of "plurality” refers to two or more.
  • the scene architecture to which the live video generation method provided in the embodiment of the present application is applied includes: a tracking device 11, a depth image acquisition device 12, an audio recording device 13, tracking markers 14 set on a physical object, a live server 15, and a terminal device 16.
  • the tracking device 11 , the image acquisition device 12 , the audio recording device 13 and the terminal device 16 are all connected to the live server 15 .
  • the tracking device 11 is configured to acquire the location information of the tracking marker 14 set on the physical object in real time, and send the location information of the tracking marker 14 to the live server 15 .
  • the depth image acquisition device 12 is configured to acquire the depth image of the face of the entity object in real time, and send the depth image of the face of the entity object to the live server 15 .
  • the audio recording device 13 is configured to record the ambient sound of the space where the entity object is located, and send the recorded ambient sound to the live server 15 .
  • the live server 15 is configured to generate a live video according to the position information of the tracking markers 14 sent by the tracking device 11, the facial depth image of the entity object sent by the depth image acquisition device 12, and the audio data sent by the audio recording device 13, and to send the generated live video to the terminal device 16.
  • the terminal device 16 is set to play the live video sent by the live server 15 on the live broadcast interface.
  • the terminal device 16 may be a mobile phone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a smart watch, a smart bracelet, or another type of terminal device.
  • the live server 15 may be any form of server. The embodiment of the present application does not limit the types of the tracking device 11, the depth image acquisition device 12, and the audio recording device 13, as long as the corresponding functions can be realized.
  • the tracking device 11, the depth image acquisition device 12, and the audio recording device 13 are shown as examples of mutually independent devices, but the embodiments of the present application are not limited thereto.
  • two or all of the tracking device 11, the depth image acquisition device 12, and the audio recording device 13 may also be integrated into the same physical device.
  • the depth image acquisition device 12 and the audio recording device 13 can also be integrated into the same depth camera.
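Purely as an illustration of the kind of per-frame data bundle the live server 15 might assemble from the tracking device 11, the depth image acquisition device 12 and the audio recording device 13, the following Python sketch defines a minimal container; the field names and types are assumptions made here for clarity and are not specified by this application.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

import numpy as np


@dataclass
class FrameInput:
    """Per-frame data the live server receives from the capture devices (illustrative only)."""
    timestamp_ms: int
    # Position of each tracking marker on the physical object, keyed by marker id,
    # as (x, y, z) coordinates reported by the tracking device.
    marker_positions: Dict[str, Tuple[float, float, float]]
    # Depth image of the physical object's face from the depth image acquisition device.
    face_depth_image: np.ndarray
    # One frame's worth of ambient audio samples from the audio recording device.
    audio_samples: np.ndarray


def is_complete(frame: FrameInput) -> bool:
    """A frame can be rendered once all three capture streams have contributed data."""
    return bool(frame.marker_positions) and frame.face_depth_image.size > 0 and frame.audio_samples.size > 0
```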
  • the embodiment of the present application provides a live video generation method, as shown in FIG. 2 , the live video generation method includes the following steps S201 to S203:
  • the entity object includes: a torso, an upper arm connected to the torso through a shoulder joint, and a forearm connected to the upper arm through an elbow joint.
  • the first feature data is used to characterize the rotation angles of multiple parts of the forearm around the axis.
  • the rotation angles of the multiple parts of the forearm around the axis direction are positively related to the distances from the multiple parts of the forearm to the elbow joint of the entity object.
  • the entity object in the embodiment of the present application may be a human, a primate, a robot, etc., which is not limited in the embodiment of the present application.
  • in the embodiments of the present application, "the rotation angles of multiple parts of the forearm around the axis direction are positively correlated with the distances from the multiple parts of the forearm to the elbow joint of the entity object" means: for any two parts of the forearm, if the second part is closer to the elbow joint than the first part, the rotation angle of the second part around the axis direction is less than or equal to the rotation angle of the first part around the axis direction; and if the second part is farther from the elbow joint than the first part, the rotation angle of the second part around the axis direction is greater than or equal to the rotation angle of the first part around the axis direction.
  • the axis direction refers to the extension direction of the forearm starting from the elbow joint.
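A minimal numerical sketch of the positive correlation just described, under the simplifying assumption (not required by this application) that the twist grows linearly from zero at the elbow joint to the full measured twist at the far end of the forearm:

```python
def twist_angle_at(distance_to_elbow: float, forearm_length: float, wrist_twist_deg: float) -> float:
    """Twist (rotation about the forearm axis direction) at a point on the forearm.

    Assumes a linear distribution: zero twist at the elbow joint and the full
    measured wrist twist at the far end, so the angle never decreases as the
    distance from the elbow grows (the positive correlation described above).
    """
    t = max(0.0, min(1.0, distance_to_elbow / forearm_length))
    return t * wrist_twist_deg


# Parts closer to the elbow rotate no more than parts farther from it.
angles = [twist_angle_at(d, forearm_length=0.26, wrist_twist_deg=80.0) for d in (0.0, 0.1, 0.2, 0.26)]
assert all(a <= b for a, b in zip(angles, angles[1:]))
```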
  • the virtual model includes: a torso, an upper arm connected to the torso through a shoulder joint, and a forearm connected to the upper arm through an elbow joint; the forearm skin covers the outer surface of the forearm of the virtual model, and the rotation angles of the multiple parts of the forearm skin around the axis direction are positively correlated with the distances from the multiple parts of the forearm skin to the elbow joint of the virtual model.
  • the first feature data can be migrated onto the forearm skeleton of the virtual model by means of retargeting, and the forearm skeleton of the virtual model then drives the forearm skin of the virtual model to rotate, so that the rotation angles of the multiple parts of the forearm skin of the virtual model around the axis direction are controlled based on the first feature data acquired in real time.
  • referring to FIG. 3, the forearm of the virtual model includes: a main bone 31, multiple additional bones 32 socketed on the main bone 31, and the forearm skin 33 covering the multiple additional bones 32.
  • step S202 (controlling the rotation angles of multiple parts of the forearm skin of the virtual model around the axis direction based on the first characteristic data acquired in real time) includes:
  • controlling, based on the first feature data acquired in real time, at least one additional bone 32 among the multiple additional bones 32 to rotate around the main bone 31, so as to drive the forearm skin 33 to rotate around the axis direction, thereby controlling the rotation angle of the forearm skin around the axis direction.
  • the rotation angle of the multiple additional bones around the main bone is positively correlated with the distance from the multiple additional bones to the elbow joint of the virtual model.
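As a rough sketch of how the measured twist could be spread over the additional bones 32 so that the forearm skin 33 rotates more the farther it is from the elbow joint, one might distribute the angle in proportion to each additional bone's distance to the elbow; the linear weighting below is an assumption for illustration only, not a scheme prescribed by this application.

```python
from typing import List


def distribute_twist_to_additional_bones(
    wrist_twist_deg: float,
    bone_distances_to_elbow: List[float],
    forearm_length: float,
) -> List[float]:
    """Roll angle (about the main bone's axis) for each additional bone.

    Each additional bone is socketed on the main forearm bone; giving bones
    farther from the elbow a larger share of the measured twist keeps the
    skin's rotation positively correlated with its distance to the elbow joint.
    """
    return [wrist_twist_deg * min(1.0, d / forearm_length) for d in bone_distances_to_elbow]


# Example: four additional bones spread along a 26 cm forearm.
rolls = distribute_twist_to_additional_bones(90.0, [0.05, 0.12, 0.19, 0.26], 0.26)
print(rolls)  # [17.3, 41.5, 65.8, 90.0] degrees (approx.), increasing toward the wrist
```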
  • the live video generation method provided in the embodiment of the present application first acquires the first feature data used to characterize the rotation angles of multiple parts of the forearm of the physical object around the axis direction, then controls, based on the first feature data acquired in real time, the rotation angles of multiple parts of the forearm skin of the virtual model around the axis direction, and finally generates the image frames of the live video according to the virtual model and the rotation angles of the multiple parts of the forearm skin of the virtual model around the axis direction.
  • since the rotation angles of the multiple parts of the forearm of the physical object acquired in the embodiment of the present application are positively correlated with the distances from those parts to the elbow joint of the physical object, controlling the rotation angles of the multiple parts of the forearm skin of the virtual model based on the first feature data also makes those rotation angles positively correlated with the distances from the multiple parts of the forearm skin to the elbow joint of the virtual model; therefore, the embodiment of the present application can synchronize the movement of the forearm of the physical object to the virtual model in finer detail, thereby making the movements of the virtual model in the live video more natural.
  • in the case where the forearm of the physical object is occluded, the image frame of the live video can be generated only according to the virtual model, without controlling the rotation angle of the forearm skin of the virtual model.
  • the embodiment of the present application provides another live video generation method, as shown in FIG. 4, the live video generation method includes the following steps S401 to S405:
  • the entity object includes: a torso, an upper arm connected to the torso through a shoulder joint, and a forearm connected to the upper arm through an elbow joint; the first feature data is used to characterize the rotation angles of multiple parts of the forearm around the axis direction, and the rotation angles of the multiple parts of the forearm around the axis direction are positively correlated with the distances from the multiple parts of the forearm to the elbow joint of the entity object.
  • the second feature data is used to characterize the facial expression of the entity object.
  • the realization of the real-time acquisition of the first characteristic data of the entity object in the above step S401 may include the following steps 1 and 2:
  • Step 1: obtain, in real time, the position information of multiple tracking markers set on the forearm of the entity object.
  • Step 2: acquire the first characteristic data of the physical object in real time according to the position information of the multiple tracking markers.
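One hypothetical way steps 1 and 2 could be realized from the reported marker positions is sketched below; the marker layout (a reference marker near the elbow whose own twist is taken as zero, plus a marker near the wrist) is invented here for illustration and is not prescribed by this application.

```python
import numpy as np


def signed_twist_deg(elbow, wrist, surface_marker, reference_marker):
    """Rotation of `surface_marker` about the elbow->wrist axis direction, measured
    against the direction given by `reference_marker`. All inputs are 3D points."""
    axis = wrist - elbow
    axis = axis / np.linalg.norm(axis)

    def radial(p):
        # Component of (p - elbow) perpendicular to the forearm axis.
        v = p - elbow
        return v - np.dot(v, axis) * axis

    a, b = radial(reference_marker), radial(surface_marker)
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    return np.degrees(np.arctan2(np.dot(np.cross(a, b), axis), np.dot(a, b)))


elbow = np.array([0.0, 0.0, 0.0])
wrist = np.array([0.26, 0.0, 0.0])
ref = np.array([0.02, 0.03, 0.0])         # near the elbow, defines zero twist
near_wrist = np.array([0.24, 0.0, 0.03])  # a marker close to the wrist
print(signed_twist_deg(elbow, wrist, near_wrist, ref))  # ~90 degrees in this toy layout
```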
  • the realization of the real-time acquisition of the second characteristic data of the entity object in the above step S401 may include the following steps a and b:
  • Step a: acquire images of the physical object through an image acquisition device, so as to obtain a facial image of the physical object in real time.
  • Step b: extract the second characteristic data of the physical object based on the facial image of the physical object acquired in real time.
  • the virtual model includes: a torso, an upper arm connected to the torso through a shoulder joint, and a forearm connected to the upper arm through an elbow joint; the forearm skin covers the outer surface of the forearm of the virtual model, and the rotation angles of the multiple parts of the forearm skin around the axis direction are positively correlated with the distances from the multiple parts of the forearm skin to the elbow joint of the virtual model.
  • an implementation of the above step S403 (controlling the facial expression of the virtual model based on the second feature data acquired in real time) includes the following steps I and II:
  • Step I: input the second feature data into the expression algorithm model, and obtain the expression driving parameters output by the expression algorithm model.
  • the expression algorithm model is a model obtained by training a preset algorithm model based on sample data, and the sample data includes sample expression data of the entity object and expression driving parameters corresponding to the sample expression data.
  • the preset algorithm model may be a machine learning algorithm model such as a deep learning neural network model or a convolutional neural network model; the embodiment of the present application does not limit the specific type of the preset algorithm model.
  • Step II: drive the face of the virtual model based on the expression driving parameters to generate the facial expression of the virtual model.
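The application leaves the concrete expression algorithm model open (deep learning neural networks and convolutional networks are mentioned only as examples). Purely as a placeholder showing the train-then-drive flow of Step I and Step II, the sketch below uses a linear least-squares mapping and treats the driving parameters as additive offsets on a face state; a real system would substitute one of the learned models named above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample data: expression feature vectors of the physical object and the expression
# driving parameters (e.g. blendshape weights) paired with them; sizes are arbitrary.
sample_expressions = rng.normal(size=(500, 64))
sample_driving_params = rng.normal(size=(500, 32))

# "Training": fit a least-squares mapping from expression features to driving parameters.
weights, *_ = np.linalg.lstsq(sample_expressions, sample_driving_params, rcond=None)


def expression_model(second_feature_data: np.ndarray) -> np.ndarray:
    """Step I: input the second feature data, output expression driving parameters."""
    return second_feature_data @ weights


def drive_face(virtual_model_face: np.ndarray, driving_params: np.ndarray) -> np.ndarray:
    """Step II: drive the virtual model's face with the driving parameters
    (treated here simply as additive offsets on the face state, for illustration)."""
    return virtual_model_face + driving_params


live_features = rng.normal(size=(64,))
face_state = drive_face(np.zeros(32), expression_model(live_features))
```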
  • the camera orientation is used to represent the positional relationship between the virtual camera corresponding to the image frame and the virtual model.
  • it should be noted that an image frame may correspond to one camera orientation or to multiple camera orientations; when an image frame corresponds to multiple camera orientations, the image frame may be stitched together from the images corresponding to those camera orientations.
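As a toy illustration of stitching a frame from the images rendered for several camera orientations, side-by-side concatenation can stand in for whatever compositing layout is actually used; the view sizes below are arbitrary assumptions.

```python
import numpy as np


def stitch_frame(views: list) -> np.ndarray:
    """Stitch the images rendered for several camera orientations into one image frame.

    A trivial side-by-side concatenation stands in for the real layout; all views
    are assumed to share the same height.
    """
    return np.concatenate(views, axis=1)


close_up = np.zeros((720, 640, 3), dtype=np.uint8)
panorama = np.zeros((720, 1280, 3), dtype=np.uint8)
frame = stitch_frame([close_up, panorama])  # 720 x 1920 composite image frame
```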
  • one implementation of the above step S404 (determining the camera orientation corresponding to the image frame) is: in response to a user operation on a camera control, determining the camera orientation corresponding to the image frame, wherein the camera control is configured to select a camera orientation.
  • the users in the above embodiments may be users on the host side, or users on the audience side.
  • the user on the host side and/or the user on the audience side can select the camera orientation corresponding to the image frame to be generated by operating the camera control.
  • the user can select the camera orientation corresponding to each image frame of the live video, so the user can control the live video to be presented as a video shot by a camera such as an orbital camera, a hand-held camera, or an aerial camera according to requirements, thereby enriching user choices.
  • another implementation of the above step S404 (determining the camera orientation corresponding to the image frame) is: determining the camera orientation corresponding to the image frame based on the first feature data and preset rules; that is, the camera orientation corresponding to each image frame of the live video may also be adjusted automatically according to the first feature data and the preset rules.
  • the preset rule may be: when the rotation angle of a specified part of the forearm of the physical object around the axis direction is a preset angle, the camera orientation corresponding to the image frame is determined as the camera orientation corresponding to that preset angle.
  • for example, the camera orientation corresponding to the image frame to be generated can be switched to a facial close-up shot according to the first feature data, or switched to a panoramic shot according to the first feature data.
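A hedged sketch of such a preset rule table is given below; the part names, angles and tolerances are made up for illustration, and the mapping from angles to camera orientations would in practice be whatever the broadcast configures.

```python
def camera_orientation_for_frame(first_feature_data: dict, default: str = "panorama") -> str:
    """Pick the camera orientation for the next image frame from preset rules.

    Illustrative rule set: when the twist of a designated forearm part reaches a
    preset angle, cut to the camera orientation bound to that angle; otherwise
    keep the default orientation.
    """
    rules = [
        # (part, preset angle in degrees, tolerance, camera orientation to switch to)
        ("near_wrist", 90.0, 10.0, "face_close_up"),
        ("near_wrist", 0.0, 10.0, "panorama"),
    ]
    for part, preset_angle, tolerance, orientation in rules:
        if abs(first_feature_data.get(part, 0.0) - preset_angle) <= tolerance:
            return orientation
    return default


print(camera_orientation_for_frame({"near_wrist": 87.0}))  # -> "face_close_up"
```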
  • it should be noted that the live video generation method in the above embodiments is performed in real time; therefore, each time the first characteristic data, the second characteristic data and the camera orientation corresponding to an image frame are acquired, that image frame is generated immediately according to them, and while it is being generated, the first characteristic data, the second characteristic data and the camera orientation corresponding to the next image frame are acquired.
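The per-frame timing described above can be pictured as the loop below; the capture and rendering interfaces are dummies standing in for the devices and the renderer, so this is only a sketch of the control flow, not an implementation of this application.

```python
class DummyCapture:
    """Stands in for the tracking device, depth camera and camera-orientation source."""

    def __init__(self, n_frames: int):
        self.n_frames = n_frames
        self.i = 0

    def next_inputs(self):
        if self.i >= self.n_frames:
            return None
        self.i += 1
        first = {"near_wrist": 3.0 * self.i}    # forearm twist angles (degrees)
        second = {"mouth_open": 0.1 * self.i}   # facial expression features
        orientation = "panorama" if self.i % 2 else "face_close_up"
        return first, second, orientation


def generate_frame(first, second, orientation):
    """Generate one image frame from the inputs captured for it (placeholder renderer)."""
    return {"twists": first, "expression": second, "camera": orientation}


capture = DummyCapture(n_frames=3)
while (inputs := capture.next_inputs()) is not None:
    # Each frame is produced as soon as its inputs are available, while the devices
    # are already capturing the inputs for the following frame.
    frame = generate_frame(*inputs)
    print(frame["camera"])
```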
  • the live video generation method provided in the embodiment of the present application first acquires the first feature data used to characterize the rotation angles of multiple parts of the forearm of the physical object around the axis direction, then controls, based on the first feature data acquired in real time, the rotation angles of multiple parts of the forearm skin of the virtual model around the axis direction, and finally generates the image frames of the live video according to the virtual model and the rotation angles of the multiple parts of the forearm skin of the virtual model around the axis direction.
  • since the rotation angles of the multiple parts of the forearm of the physical object acquired in the embodiment of the present application are positively correlated with the distances from those parts to the elbow joint of the physical object, controlling the rotation angles of the multiple parts of the forearm skin of the virtual model based on the first feature data also makes those rotation angles positively correlated with the distances from the multiple parts of the forearm skin to the elbow joint of the virtual model; therefore, the embodiment of the present application can synchronize the movement of the forearm of the physical object to the virtual model in finer detail, thereby making the movements of the virtual model in the live video more natural.
  • the live video generation method provided in the embodiment of the present application further includes:
  • recording the ambient sound of the space where the entity object is located to acquire environmental audio data;
  • generating the audio data of the live video according to the environmental audio data.
  • generating the audio data of the live video according to the environmental audio data may be: using the environmental audio data as the audio data of the live video.
  • generating the audio data of the live video according to the environmental audio data may be: generating the audio data of the live video by fusing the environmental audio data and preset audio data. For example, if the environmental audio data acquired by recording the environmental sound of the space where the physical object is located is a cappella audio, then the acquired environmental audio data and accompaniment music may be fused to generate the audio data of the live video.
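A small sketch of fusing the recorded ambient audio (e.g. an a cappella vocal) with preset accompaniment into the live video's audio track; the gains and the clipping step are arbitrary choices made for illustration.

```python
import numpy as np


def mix_live_audio(ambient: np.ndarray, accompaniment: np.ndarray,
                   ambient_gain: float = 1.0, accompaniment_gain: float = 0.6) -> np.ndarray:
    """Fuse recorded ambient audio with preset accompaniment into one audio track."""
    n = min(len(ambient), len(accompaniment))
    mixed = ambient_gain * ambient[:n] + accompaniment_gain * accompaniment[:n]
    return np.clip(mixed, -1.0, 1.0)  # keep samples in the normalized [-1, 1] range


sr = 48_000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
vocal = 0.4 * np.sin(2 * np.pi * 440.0 * t)    # stand-in for the recorded vocal
backing = 0.4 * np.sin(2 * np.pi * 220.0 * t)  # stand-in for the preset accompaniment
track = mix_live_audio(vocal, backing)
```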
  • the embodiment of the present application also provides a live video generation device.
  • the embodiment of the live video generation device corresponds to the aforementioned method embodiment.
  • for ease of reading, this device embodiment does not repeat the details of the foregoing method embodiments one by one, but it should be clear that the live video generation device in this embodiment can correspondingly implement all the content of the foregoing method embodiments.
  • FIG. 5 is a schematic structural diagram of a live video generation device provided in an embodiment of the present application. As shown in FIG. 5 , the live video generation device 500 provided in this embodiment includes:
  • the acquisition unit 51 is configured to acquire the first characteristic data of the entity object in real time;
  • wherein the entity object includes: a torso, an upper arm connected to the torso through a shoulder joint, and a forearm connected to the upper arm through an elbow joint; the first feature data is used to characterize the rotation angles of multiple parts of the forearm around the axis direction, and the rotation angles of the multiple parts of the forearm around the axis direction are positively correlated with the distances from the multiple parts of the forearm to the elbow joint of the entity object;
  • the processing unit 52 is configured to control, based on the first characteristic data acquired in real time, the rotation angles of multiple parts of the forearm skin of the virtual model around the axis direction, wherein the virtual model includes: a torso, an upper arm connected to the torso through a shoulder joint, and a forearm connected to the upper arm through an elbow joint; the forearm skin covers the outer surface of the forearm of the virtual model, and the rotation angles of the multiple parts of the forearm skin around the axis direction are positively correlated with the distances from the multiple parts of the forearm skin to the elbow joint of the virtual model;
  • the generating unit 53 is configured to generate the image frames of the live video according to the virtual model and the rotation angles of the multiple parts of the forearm skin of the virtual model around the axis direction.
  • the forearm of the virtual model includes: a main bone, multiple additional bones socketed on the main bone, and the forearm skin covering the multiple additional bones;
  • the processing unit 52 is configured to control, based on the first feature data acquired in real time, the rotation angle of at least one of the multiple additional bones around the main bone, so as to control the rotation angle of the forearm skin around the axis direction.
  • the live video generation device 500 further includes:
  • the determining unit 54 is configured to determine the camera orientation corresponding to the image frame, where the camera orientation is used to represent the positional relationship between the virtual camera corresponding to the image frame and the virtual model;
  • the generating unit 53 is configured to generate the image frame according to the camera orientation corresponding to the image frame and the virtual model.
  • the determining unit 54 is configured to determine the camera orientation corresponding to the image frame in response to a user operation on a camera control;
  • wherein the camera control is configured to select a camera orientation;
  • alternatively, the determining unit 54 is configured to determine the camera orientation corresponding to the image frame based on the first feature data and preset rules.
  • the acquiring unit 51 is further configured to acquire second feature data of the entity object in real time; the second feature data is used to characterize the facial expression of the entity object;
  • the processing unit 52 is further configured to control the facial expression of the virtual model based on the second feature data acquired in real time.
  • the processing unit 52 is configured to input the second feature data into the expression algorithm model, and obtain the expression driving parameters output by the expression algorithm model; drive the face of the virtual model based on the expression driving parameters to generate the facial expression of the virtual model;
  • wherein the expression algorithm model is a model obtained by training a preset algorithm model based on sample data, and the sample data includes sample expression data of the entity object and expression driving parameters corresponding to the sample expression data.
  • the acquiring unit 51 is configured to acquire, in real time, the position information of multiple tracking markers set on the forearm of the entity object, and to acquire the first feature data of the entity object in real time according to the position information of the multiple tracking markers.
  • the acquiring unit 51 is configured to acquire images of the entity object through an image acquisition device so as to obtain the facial image of the entity object in real time, and to extract the second feature data of the entity object based on the facial image of the entity object acquired in real time.
  • the acquisition unit 51 is further configured to record the environmental sound of the space where the entity object is located, and acquire environmental audio data;
  • the generating unit 53 is further configured to generate the audio data of the live video according to the environmental audio data.
  • the device for generating live video provided in this embodiment can execute the method for generating live video provided in the foregoing method embodiment, and its implementation principle and technical effect are similar, and will not be repeated here.
  • FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • as shown in FIG. 7, the electronic device provided by this embodiment includes: a memory 71 and a processor 72, where the memory 71 is configured to store a computer program, and the processor 72 is configured to, when invoking the computer program, cause the computing device to execute the live video generation method provided by the above method embodiments.
  • the memory 71 can be configured to store software programs as well as various kinds of data.
  • the memory 71 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook).
  • the memory 71 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the processor 72 is the control center of the electronic device; it uses various interfaces and lines to connect the parts of the entire electronic device, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 71 and calling the data stored in the memory 71, so as to monitor the electronic device as a whole.
  • Processor 72 may include one or more processing units.
  • the electronic device provided in the embodiment of the present application may further include components such as a radio frequency unit, a network module, an audio output unit, a receiving unit, a sensor, a display unit, a user receiving unit, an interface unit, and a power supply.
  • the structure of the electronic device described above does not constitute a limitation on the electronic device, and the electronic device may include more or less components, or combine some components, or arrange different components.
  • electronic devices include but are not limited to mobile phones, tablet computers, notebook computers, palmtop computers, vehicle-mounted terminals, wearable devices, and pedometers.
  • the radio frequency unit can be configured to send and receive information or to receive and send signals during a call, for example, after receiving downlink data from the base station, process it with the processor 72; in addition, send uplink data to the base station.
  • a radio frequency unit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.
  • the radio frequency unit can also communicate with the network and other devices through the wireless communication system.
  • Electronic devices provide users with wireless broadband Internet access through network modules, such as helping users send and receive emails, browse web pages, and access streaming media.
  • the audio output unit may convert audio data received by the radio frequency unit or the network module or stored in the memory 71 into an audio signal and output as sound. Also, the audio output unit may also provide audio output related to a specific function performed by the electronic device (eg, call signal reception sound, message reception sound, etc.).
  • the audio output unit includes a speaker, a buzzer, and a receiver.
  • the receiving unit is configured to receive audio or video signals.
  • the receiving unit may include a graphics processor (Graphics Processing Unit, GPU) and a microphone, and the graphics processor processes image data of still pictures or videos obtained by an image capture device (such as a camera) in video capture mode or image capture mode.
  • the processed image frames can be displayed on a display unit.
  • the image frames processed by the graphics processor may be stored in a memory (or other storage medium) or sent via a radio frequency unit or a network module.
  • Microphones can receive sound and can process such sound into audio data.
  • the processed audio data can be converted into a format that can be sent to the mobile communication base station via the radio frequency unit for output in the case of a telephone call mode.
  • the electronic device also includes at least one sensor, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor includes an ambient light sensor and a proximity sensor, wherein the ambient light sensor can adjust the brightness of the display panel according to the brightness of the ambient light, and the proximity sensor can turn off the display panel and/or the backlight when the electronic device moves to the ear.
  • as one type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in each direction (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be configured to recognize the posture of the electronic device (such as switching between horizontal and vertical screens, related games, and magnetometer posture calibration) and vibration-recognition-related functions (such as a pedometer and tapping); the sensors may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be repeated here.
  • the display unit is configured to display information input by the user or information provided to the user.
  • the display unit may include a display panel, and the display panel may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD) or an organic light-emitting diode (Organic Light-Emitting Diode, OLED).
  • the user receiving unit can be configured to receive input numbers or character information, and generate key signal input related to user settings and function control of the electronic device.
  • the user receiving unit includes a touch panel and other input devices.
  • a touch panel also called a touch screen, can collect user's touch operations on or near it (such as the user's operation on or near the touch panel by using any suitable object or accessory such as a finger or a stylus).
  • the touch panel can include two parts: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch orientation and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends the coordinates to the processor 72, and receives and executes the commands sent by the processor 72.
  • various types of touch panels such as resistive, capacitive, infrared, and surface acoustic wave, can be used to realize the touch panel.
  • the user receiving unit may also include other input devices.
  • other input devices may include, but are not limited to, physical keyboards, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, and joysticks, which will not be repeated here.
  • the touch panel may cover the display panel; after the touch panel detects a touch operation on or near it, the operation is sent to the processor 72 to determine the type of the touch event, and the processor 72 then provides a corresponding visual output on the display panel according to the type of the touch event.
  • in general, the touch panel and the display panel are implemented as two independent components to realize the input and output functions of the electronic device, but in some embodiments, the touch panel and the display panel may be integrated to realize the input and output functions of the electronic device, which is not limited here.
  • the interface unit is an interface for connecting an external device to the electronic equipment.
  • an external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, audio input/output (I/O) ports, video I/O ports, headphone ports, and more.
  • the interface unit may be used to receive input (e.g., data information, power, etc.) from an external device and to transmit the received input to one or more elements in the electronic device, or may be used to transmit data between the electronic device and the external device.
  • the electronic device may also include a power supply (such as a battery) that supplies power to multiple components.
  • the power supply may be logically connected to the processor 72 through a power management system, so that functions such as charging, discharging, and power consumption management may be implemented through the power management system.
  • An embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a computing device, the computing device is caused to implement the live video generation method provided by the above method embodiments.
  • the embodiment of the present application also provides a computer program product, which, when running on a computer, enables the computer to implement the method for generating live video provided in the method embodiment above.
  • the embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein.
  • the computer readable storage medium may be a non-transitory computer readable storage medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable storage media.
  • the storage medium may store information by any method or technology, and the information may be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic tape cartridges, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • computer readable media does not include transitory computer readable media, such as modulated data signals and carrier waves.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Studio Circuits (AREA)

Abstract

An embodiment of the present application provides a live video generation method and apparatus. The method includes: acquiring first feature data of an entity object in real time, where the entity object includes a torso, an upper arm, an elbow joint and a forearm, the first feature data is used to characterize rotation angles of multiple parts of the forearm around an axis direction, and the rotation angles of the multiple parts of the forearm around the axis direction are positively correlated with the distances from the multiple parts of the forearm to the elbow joint of the entity object; controlling, based on the first feature data acquired in real time, rotation angles of multiple parts of a forearm skin of a virtual model around the axis direction, where the rotation angles of the multiple parts of the forearm skin around the axis direction are positively correlated with the distances from the multiple parts of the forearm skin to the elbow joint of the virtual model; and generating image frames of a live video according to the virtual model and the rotation angles of the multiple parts of the forearm skin of the virtual model around the axis direction.

Description

直播视频生成方法及装置
本申请要求在2021年5月31日提交中国专利局、申请号为202110598989.0的中国专利申请的优先权,该申请的全部内容通过引用结合在本申请中。
技术领域
本申请涉及视频处理技术领域,例如涉及一种直播视频生成方法及装置。
背景技术
近年来,随着流媒体技术的进步以及网络带宽的飞速增长,直播业务日趋火热,直播平台也已成为时下最为热门的娱乐媒体之一。
随着直播形式的丰富,相关技术中提出了虚拟直播的直播方式。虚拟直播可以通过3D建模和实时动作捕捉将实体对象的动作实时同步到虚拟模型上,从而使主播侧用户可以通过虚拟模型的形象进行直播,极大提升了直播的灵活性。相关技术中将实体对象的动作同步到虚拟模型上的方式为:使用简单重定向技术将实体对象的动作迁移到虚拟模型上。然而,这种动作同步方式仅能够使虚拟模型在整体姿势与实体对象同步,无法对动作细节也进行同步,从而导致直播视频中虚拟模型的动作很不自然。
发明内容
本申请提供了一种直播视频生成方法及装置,可以避免相关技术中虚拟直播方式生成的直播视频中虚拟模型的动作不自然的情况。
第一方面,本申请的实施例提供一种直播视频生成方法,包括:
实时获取实体对象的第一特征数据;所述实体对象包括:躯干、通过肩关节与所述躯干连接的大臂以及通过肘关节与所述大臂连接的小臂,所述第一特征数据用于表征所述小臂的多个部位围绕轴线方向的转动角度,所述小臂的多个部位围绕轴线方向的转动角度与所述小臂的多个部位到所述实体对象的肘关节的距离正相关;
基于实时获取的所述第一特征数据控制虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度,所述虚拟模型包括:躯干、通过肩关节与所述躯干连接的大臂以及通过肘关节与所述大臂连接的小臂,所述小臂蒙皮覆盖于所述虚拟模型的小臂的外表面,所述小臂蒙皮的多个部位围绕轴线方向的转动角度与所述小臂蒙皮的多个部位到所述虚拟模型的肘关节的距离正相关;
根据所述虚拟模型以及所述虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度进行直播视频的图像帧的生成。
第二方面,本申请实施例提供一种直播视频生成装置,包括:
获取单元,设置为实时获取实体对象的第一特征数据;所述实体对象包括:躯干、通过肩关节与所述躯干连接的大臂以及通过肘关节与所述大臂连接的小臂,所述第一特征数据用于表征所述小臂的多个部位围绕轴线方向的转动角度,所述小臂的多个部位围绕轴线方向的转动角度与所述小臂的多个部位到所述实体对象的肘关节的距离正相关;
处理单元,设置为基于实时获取的所述第一特征数据控制虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度,所述虚拟模型包括:躯干、通过肩关节与所述躯干连接的大臂以及通过肘关节与所述大臂连接的小臂,所述小臂蒙皮覆盖于所述虚拟模型的小臂的外表面,所述小臂蒙皮的多个部位围绕轴线方向的转动角度与所述小臂蒙皮的多个部位到所述虚拟模型的肘关节的距离正相关;
生成单元,设置为根据所述虚拟模型以及所述虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度进行直播视频的图像帧的生成。
第三方面,本申请实施例提供一种电子设备,包括:存储器和处理器,存储器设置为存储计算机程序;处理器设置为在调用计算机程序时执行第一方面的实施方式所述的直播视频生成方法。
第四方面,本申请实施例提供一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现第一方面的实施方式所述的直播视频生成方法。
第五方面,本申请实施例提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机实现第一方面的实施方式所述的直播视频生成方法。
附图说明
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并与说明书一起用于解释本申请的原理。
图1为本申请实施例提供的直播视频生成方法的应用场景示意图;
图2为本申请实施例提供的直播视频生成方法的步骤流程图之一;
图3为本申请实施例提供的虚拟模型的小臂结构示意图;
图4为本申请实施例提供的直播视频生成方法的步骤流程图之二;
图5为本申请实施例提供的直播视频生成装置的结构示意图之一;
图6为本申请实施例提供的直播视频生成装置的结构示意图之二;
图7为本申请实施例提供的电子设备的硬件结构示意图。
具体实施方式
下面将对本申请的方案进行进一步描述。需要说明的是,在不冲突的情况下,本申请的实施例及实施例中的特征可以相互组合。
在下面的描述中阐述了很多具体细节以便于充分理解本申请,但本申请还可以采用其他不同于在此描述的方式来实施;显然,说明书中的实施例只是本申请的一部分实施例,而不是全部的实施例。
本申请的说明书和权利要求书中的术语“第一”和“第二”等是用于区别同步的对象,而不是用于描述对象的特定顺序。例如,第一对应关系和第二对应关系是用于区别不同的对应关系,而不是用于描述对应关系的特定顺序。
在本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。此外,在本申请实施例的描述中,除非另有说明,“多个”的含义是指两个 或两个以上。
本文中术语“和/或”,用于描述关联对象的关联关系,具体表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。
以下首先对本申请实施例提供的直播视频生成方法所应用的场景架构进行说明。示例性的,参照图1所示,本申请实施例提供的直播视频生成方法所应用的场景架构包括:追踪设备11、深度图像采集设备12、音频录制设备13、设置于实体对象上的跟踪标记14、直播服务器15以及终端设备16。追踪设备11、图像采集设备12、音频录制设备13以及终端设备16均与直播服务器15互联。追踪设备11设置为实时获取设置于实体对象上的跟踪标记14的位置信息,并将跟踪标记14的位置信息发送至直播服务器15。深度图像采集设备12设置为实时获取实体对象的面部的深度图像,并将实体对象的面部的深度图像发送至直播服务器15。音频录制设备13设置为录制实体对象所处空间的环境音,并将录制的环境音发送至直播服务器15。直播服务器15设置为根据追踪设备11发送的跟踪标记14的位置信息、深度图像采集设备12发送的实体对象的面部深度图像以及音频录制设备13发送的音频数据生成直播视频,并将生成的直播视频发送至终端设备16。终端设备16设置为在直播界面播放直播服务器15发送的直播视频。
其中,终端设备16可以为手机、平板电脑、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)、智能手表、智能手环等终端设备,或者该终端设备还可以为其他类型的终端设备。直播服务器15可以为任意形式的服务器,本申请实施例对追踪设备11、深度图像采集设备12以及音频录制设备13的类型均不作限定,以能够实现对应功能为准。
需要说明的是,图1中以追踪设备11、深度图像采集设备12、音频录制设备13为相互独立的设备为例示出,但本申请实施例并不限定于此,在上述实施例的基础上,本申请实施例中的追踪设备11、深度图像采集设备12、音频录制设备13中的两个或全部还可以集成与同一实体设备之中。例如:深度图像采集设备12和音频录制设备13还可以集成于同一深度摄像机中。
在一种实现方式中,本申请实施例提供了一种直播视频生成方法,参照图2所示,该直播视频生成方法包括如下步骤S201至S203:
S201、实时获取实体对象的第一特征数据。
其中,所述实体对象包括:躯干、通过肩关节与所述躯干连接的大臂以及通过肘关节与所述大臂连接的小臂。所述第一特征数据用于表征所述小臂的多个部位围绕轴线方向的转动角度。所述小臂的多个部位围绕轴线方向的转动角度与所述小臂的多个部位到所述实体对象的肘关节的距离正相关。
示例性的,本申请实施例中的实体对象可以为人、灵长类动物、机器人等,本申请实施例对此不做限定。
本申请实施中的所述小臂的多个部位围绕轴线方向的转动角度与所述小臂的多个部位到所述实体对象的肘关节的距离正相关是指:对于小臂上的任意两个部位,若相比于小臂上的第一部位,小臂上的第二部位距离肘关节的距离越较近,则第二部位围绕轴线方向的转动角度小于或等于第一部位围绕轴线方向的转动角度,而若相比于小臂上的第一部位,小臂上的第二部位距离肘关节的距离越较远,则第二部位围绕轴线方向的转动角度大于或等于第一部 位围绕轴线方向的转动角度。
示例性的,轴线方向指的是以肘关节为起点沿小臂的延伸方向。
S202、基于实时获取的所述第一特征数据控制虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度。
其中,所述虚拟模型包括:躯干、通过肩关节与所述躯干连接的大臂以及通过肘关节与所述大臂连接的小臂,所述小臂蒙皮覆盖于所述虚拟模型的小臂的外表面,所述小臂蒙皮的多个部位围绕轴线方向的转动角度与所述小臂蒙皮的多个部位到所述虚拟模型的肘关节的距离正相关。
示例性的,可以将第一特征数据通过重定向的方式迁移到虚拟模型的小臂骨骼上,并通过虚拟模型的小臂骨骼驱动虚拟模型的小臂蒙皮转动,从而实现基于实时获取的所述第一特征数据控制虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度。
作为本申请实施例一种示例的实施方式,参照图3所示,所述虚拟模型的小臂包括:主骨骼31、套接于所述主骨骼31上的多段附加骨骼32以及所述覆盖于所述多段附加骨骼32上的所述小臂蒙皮33。
上述步骤S202(基于实时获取的所述第一特征数据控制虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度)包括:
基于实时获取的所述第一特征数据控制所述多段附加骨骼32中的至少一段附加骨骼32围绕所述主骨骼31转动,以驱动所述小臂蒙皮33围绕轴线方向转动,实现对小臂蒙皮围绕轴线方向的转动角度的控制。
其中,所述多段附加骨骼围绕所述主骨骼的转动角度与多段附加骨骼到所述虚拟模型的肘关节的距离正相关。
S203、根据所述虚拟模型以及所述虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度进行直播视频的图像帧的生成。
本申请实施例提供的直播视频生成方法首先获取用于表征实体对象的小臂的多个部位围绕轴线方向的转动角度的第一特征数据,然后基于实时获取的所述第一特征数据控制虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度,最后根据所述虚拟模型以及所述虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度进行直播视频的图像帧的生成。由于本申请实施例中获取的实体对象的小臂的多个部位围绕轴线方向的转动角度与所述小臂的多个部位到所述实体对象的肘关节的距离正相关,在基于第一特征数据控制虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度时,也可以使虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度与所述小臂蒙皮的多个部位到所述虚拟模型的肘关节的距离正相关,因此本申请实施例可以将实体对象的小臂的动作更加细致的同步到虚拟模型上,进而使直播视频中虚拟模型的动作更加自然。
在实体对象的小臂被遮挡的情况下,可以仅根据虚拟模型生成直播视频的图像帧,而不必对虚拟模型小臂蒙皮的转动角度进行控制。
作为对上述实施例的扩展和细化,本申请实施例提供了另一种直播视频生成方法,参照图4所示,该直播视频生成方法包括如下步骤S401至S405:
S401、实时获取实体对象的第一特征数据和第二特征数据。
其中,所述实体对象包括:躯干、通过肩关节与所述躯干连接的大臂以及通过肘关节与 所述大臂连接的小臂,所述第一特征数据用于表征所述小臂的多个部位围绕轴线方向的转动角度,所述小臂的多个部位围绕轴线方向的转动角度与所述小臂的多个部位到所述实体对象的肘关节的距离正相关。所述第二特征数据用于表征所述实体对象的面部表情。
例如,上述步骤S401中实时获取实体对象的第一特征数据的实现方式可以包括如下步骤1和步骤2:
步骤1、实时获取设置于所述实体对象的小臂上的多个追踪标记的位置信息。
步骤2、根据所述多个追踪标记的位置信息实时获取所述实体对象的第一特征数据。
例如,上述步骤S401中实时获取实体对象的第二特征数据的实现方式可以包括如下步骤a和步骤b:
步骤a、通过图像采集设备对所述实体对象进行图像采集,实时获取所述实体对象的面部图像。
步骤b、基于实时获取的所述实体对象的面部图像,提取所述实体对象的第二特征数据。
S402、基于实时获取的所述第一特征数据控制虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度。
其中,所述虚拟模型包括:躯干、通过肩关节与所述躯干连接的大臂以及通过肘关节与所述大臂连接的小臂,所述小臂蒙皮覆盖于所述虚拟模型的小臂的外表面,所述小臂蒙皮的多个部位围绕轴线方向的转动角度与所述小臂蒙皮的多个部位到所述虚拟模型的肘关节的距离正相关。
S403、基于实时获取的所述第二特征数据控制所述虚拟模型的面部表情。
作为本申请实施例一种示例的实施方式,上述步骤S403(基于实时获取的所述第二特征数据控制所述虚拟模型的面部表情)的一种实现方式包括如下步骤Ⅰ和步骤Ⅱ:
步骤Ⅰ、将所述第二特征数据输入表情算法模型,并获取所述表情算法模型输出的表情驱动参数。
其中,所述表情算法模型为基于样本数据对预设算法模型进行训练获取的模型,所述样本数据包括所述实体对象的样本表情数据和所述样本表情数据对应的表情驱动参数。
示例性的,预设算法模型可以为深度学习神经网络模型、卷积神经网络模型等机器学习算法模型,本申请实施例对预设算法模型的具体类型不做限定。
步骤Ⅱ、基于所述表情驱动参数对所述虚拟模型的面部进行驱动,生成所述虚拟模型的面部表情。
S404、确定图像帧对应的镜头方位。
其中,所述镜头方位用于表征用于所述图像帧对应的虚拟镜头与所述虚拟模型的位置关系。
需要说明的是:图像帧对应的镜头方位可以为一个,也可以为多个。当图像帧对应的镜头方位为多个时,图像帧可以由该多个镜头方位对应的图像拼接而成。
作为本申请实施例一种示例的实施例方式,上述步骤S404(确定图像帧对应的镜头方位)一种实现方式为:
响应于用户对所述镜头控件的操作,确定所述图像帧对应的镜头方位,其中,所述镜头控件设置为进行镜头方位的选择。
例如,上述实施例中的用户可以为主播侧用户,也可以为观众侧用户。主播侧用户和/或 观众侧用户可以通过操作镜头控件选择的待生成图像帧对应的镜头方位。
上述实施例中用户可以对直播视频的每个图像帧对应的镜头方位进行选择,因此用户可以根据需求控制直播视频呈现为轨道镜头、手持镜头、航拍镜头等镜头所拍摄视频,进而丰富用户选择,提升用户体验。
作为本申请实施例一种示例的实施例方式,上述步骤S404(确定图像帧对应的镜头方位)另一种实现方式为:
基于所述第一特征数据和预设规则,确定所述图像帧对应的镜头方位。
即,也可以根据第一特征数据和预设规则自动调整直播视频的每个图像帧对应的镜头方位。
示例性的,所述预设规则可以为:当实体对象的小臂的指定部位围绕轴线方向的转动角度为预设角度时,将确定图像帧对应的镜头方位确定为预设角度对应的镜头方位。例如:示例性的,可以根据第一特征数据将待生成图像帧对应的镜头方位切换为面部特写镜头,也可以根据第一特征数据将待生成图像帧对应的镜头方位切换全景镜头。
S405、根据所述图像帧对应的镜头方位和所述虚拟模型,进行所述图像帧的生成。
需要说明的是,上述实施例中的直播视频生成方法是实时进行的。因此,每获取一个图像帧对应的第一特征数据、第二特征数据以及镜头方位后,立即根据该图像帧对应的第一特征数据、第二特征数据以及镜头方位生成该图像帧,并在生成该图像帧同时获取下一个图像帧对应的第一特征数据、第二特征数据以及镜头方位。
本申请实施例提供的直播视频生成方法首先获取用于表征实体对象的小臂的多个部位围绕轴线方向的转动角度的第一特征数据,然后基于实时获取的所述第一特征数据控制虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度,最后根据所述虚拟模型以及所述虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度进行直播视频的图像帧的生成。由于本申请实施例中获取的实体对象的小臂的多个部位围绕轴线方向的转动角度与所述小臂的多个部位到所述实体对象的肘关节的距离正相关,在基于第一特征数据控制虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度时,也可以使虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度与所述小臂蒙皮的多个部位到所述虚拟模型的肘关节的距离正相关,因此本申请实施例可以将实体对象的小臂的动作更加细致的同步到虚拟模型上,进而使直播视频中虚拟模型的动作更加自然。
作为本申请实施例一种示例的实施方式,在上述任一实施例的基础上,本申请实施例提供的直播视频生成方法还包括:
对所述实体对象所处空间的环境音进行录制,获取环境音频数据;
根据所述环境音频数据生成所述直播视频的音频数据。
例如,根据所述环境音频数据生成所述直播视频的音频数据可以为:将所述环境音频数据作为所述直播视频的音频数据。
例如,根据所述环境音频数据生成所述直播视频的音频数据可以为:融合所述环境音频数据和预设音频数据生成所述直播视频的音频数据。例如:对所述实体对象所处空间的环境音进行录制获取的环境音频数据为清唱音频,则可以融合获取的环境音频数据和伴奏音乐生成所述直播视频的音频数据。
基于同一申请构思,作为对上述方法的实现,本申请实施例还提供了一种直播视频生成 装置,该直播视频生成装置实施例与前述方法实施例对应,为便于阅读,本装置实施例不再对前述方法实施例中的细节内容进行逐一赘述,但应当明确,本实施例中的直播视频生成装置能够对应实现前述方法实施例中的全部内容。
图5为本申请实施例提供的直播视频生成装置的结构示意图,如图5所示,本实施例提供的直播视频生成装置500包括:
获取单元51,设置为实时获取实体对象的第一特征数据;所述实体对象包括:躯干、通过肩关节与所述躯干连接的大臂以及通过肘关节与所述大臂连接的小臂,所述第一特征数据用于表征所述小臂的多个部位围绕轴线方向的转动角度,所述小臂的多个部位围绕轴线方向的转动角度与所述小臂的多个部位到所述实体对象的肘关节的距离正相关;
处理单元52,设置为基于实时获取的所述第一特征数据控制虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度,所述虚拟模型包括:躯干、通过肩关节与所述躯干连接的大臂以及通过肘关节与所述大臂连接的小臂,所述小臂蒙皮覆盖于所述虚拟模型的小臂的外表面,所述小臂蒙皮的多个部位围绕轴线方向的转动角度与所述小臂蒙皮的多个部位到所述虚拟模型的肘关节的距离正相关;
生成单元53,设置为根据所述虚拟模型以及所述虚拟模型的小臂蒙皮的多个部位围绕轴线方向的转动角度进行直播视频的图像帧的生成。
作为本申请实施例一种示例的实施方式,所述虚拟模型的小臂包括:主骨骼、套接于所述主骨骼上的多段附加骨骼以及覆盖于所述多段附加骨骼上的所述小臂蒙皮;
所述处理单元52,设置为基于实时获取的所述第一特征数据控制所述多段附加骨骼中的至少一段附加骨骼围绕所述主骨骼的转动角度,以控制所述小臂蒙皮围绕轴线方向的转动角度。
作为本申请实施例一种示例的实施方式,参照图6所示,所述直播视频生成装置500还包括:
确定单元54,设置为确定所述图像帧对应的镜头方位,所述镜头方位用于表征用于所述图像帧对应的虚拟镜头与所述虚拟模型的位置关系;
所述生成单元53,设置为根据所述图像帧对应的镜头方位和所述虚拟模型,进行所述图像帧的生成。
作为本申请实施例一种示例的实施方式,所述确定单元54,设置为响应于用户对所述镜头控件的操作,确定所述图像帧对应的镜头方位;
其中,所述镜头控件设置为进行镜头方位的选择;
作为本申请实施例一种示例的实施方式,所述确定单元54,设置为基于所述第一特征数据和预设规则,确定所述图像帧对应的镜头方位。
作为本申请实施例一种示例的实施方式,
所述获取单元51,还设置为实时获取所述实体对象的第二特征数据;所述第二特征数据用于表征所述实体对象的面部表情;
所述处理单元52,还设置为基于实时获取的所述第二特征数据控制所述虚拟模型的面部表情。
作为本申请实施例一种示例的实施方式,
所述处理单元52,设置为将所述第二特征数据输入表情算法模型,并获取所述表情算法 模型输出的表情驱动参数;基于所述表情驱动参数对所述虚拟模型的面部进行驱动,生成所述虚拟模型的面部表情;
其中,所述表情算法模型为基于样本数据对预设算法模型进行训练获取的模型,所述样本数据包括所述实体对象的样本表情数据和所述样本表情数据对应的表情驱动参数。
作为本申请实施例一种示例的实施方式,所述获取单元51,设置为实时获取设置于所述实体对象的小臂上的多个追踪标记的位置信息;根据所述多个追踪标记的位置信息实时获取所述实体对象的第一特征数据。
作为本申请实施例一种示例的实施方式,所述获取单元51,设置为通过图像采集设备对所述实体对象进行图像采集,实时获取所述实体对象的面部图像;基于实时获取的所述实体对象的面部图像,提取所述实体对象的第二特征数据。
作为本申请实施例一种示例的实施方式,所述获取单元51,还设置为对所述实体对象所处空间的环境音进行录制,获取环境音频数据;
所述生成单元53,还设置为根据所述环境音频数据生成所述直播视频的音频数据。
本实施例提供的直播视频生成装置可以执行上述方法实施例提供的直播视频生成方法,其实现原理与技术效果类似,此处不再赘述。
基于同一申请构思,本申请实施例还提供了一种电子设备。图7为本申请实施例提供的电子设备的结构示意图,如图7所示,本实施例提供的电子设备包括:存储器71和处理器72,所述存储器71设置为存储计算机程序;所述处理器72设置为在调用计算机程序时,使得所述计算设备执行上述方法实施例提供的直播视频生成方法。
例如,存储器71可设置为存储软件程序以及多种数据。存储器71可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作***、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器71可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
处理器72是电子设备的控制中心,利用多种接口和线路连接整个电子设备的多个部分,通过运行或执行存储在存储器71中的软件程序和/或模块,以及调用存储在存储器71中的数据,执行电子设备的多种功能和处理数据,从而对电子设备进行整体监控。处理器72可包括一个或多个处理单元。
此外,应当理解的是,本申请实施例提供的电子设备还可以包括:射频单元、网络模块、音频输出单元、接收单元、传感器、显示单元、用户接收单元、接口单元、以及电源等部件。本领域技术人员可以理解,上述描述出的电子设备的结构并不构成对电子设备的限定,电子设备可以包括更多或更少的部件,或者组合某些部件,或者不同的部件布置。在本申请实施例中,电子设备包括但不限于手机、平板电脑、笔记本电脑、掌上电脑、车载终端、可穿戴设备、以及计步器等。
其中,射频单元可设置为收发信息或通话过程中,信号的接收和发送,例如,将来自基站的下行数据接收后,给处理器72处理;另外,将上行的数据发送给基站。通常,射频单元包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器、双工器等。此外,射频单元还可以通过无线通信***与网络和其他设备通信。
电子设备通过网络模块为用户提供了无线的宽带互联网访问,如帮助用户收发电子邮件、 浏览网页和访问流式媒体等。
音频输出单元可以将射频单元或网络模块接收的或者在存储器71中存储的音频数据转换成音频信号并且输出为声音。而且,音频输出单元还可以提供与电子设备执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出单元包括扬声器、蜂鸣器以及受话器等。
接收单元设置为接收音频或视频信号。接收单元可以包括图形处理器(Graphics Processing Unit,GPU)和麦克风,图形处理器对在视频捕获模式或图像捕获模式中由图像捕获装置(如摄像头)获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元上。经图形处理器处理后的图像帧可以存储在存储器(或其它存储介质)中或者经由射频单元或网络模块进行发送。麦克风可以接收声音,并且能够将这样的声音处理为音频数据。处理后的音频数据可以在电话通话模式的情况下转换为可经由射频单元发送到移动通信基站的格式输出。
电子设备还包括至少一种传感器,比如光传感器、运动传感器以及其他传感器。例如,光传感器包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板的亮度,接近传感器可在电子设备移动到耳边时,关闭显示面板和/或背光。作为运动传感器的一种,加速计传感器可检测每个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可设置为识别电子设备姿态(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;传感器还可以包括指纹传感器、压力传感器、虹膜传感器、分子传感器、陀螺仪、气压计、湿度计、温度计、红外线传感器等,在此不再赘述。
显示单元设置为显示由用户输入的信息或提供给用户的信息。显示单元可包括显示面板,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板。
用户接收单元可设置为接收输入的数字或字符信息,以及产生与电子设备的用户设置以及功能控制有关的键信号输入。例如,用户接收单元包括触控面板以及其他输入设备。触控面板,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触控面板上或在触控面板附近的操作)。触控面板可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器72,接收处理器72发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板。除了触控面板,用户接收单元还可以包括其他输入设备。例如,其他输入设备可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆,在此不再赘述。
例如,触控面板可覆盖在显示面板上,当触控面板检测到在其上或附近的触摸操作后,传送给处理器72以确定触摸事件的类型,随后处理器72根据触摸事件的类型在显示面板上提供相应的视觉输出。一般情况下,触控面板与显示面板是作为两个独立的部件来实现电子设备的输入和输出功能,但是在某些实施例中,可以将触控面板与显示面板集成而实现电子设备的输入和输出功能,具体此处不做限定。
接口单元为外部装置与电子设备连接的接口。例如,外部装置可以包括有线或无线头戴 式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。接口单元可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到电子设备中的一个或多个元件或者可以用于在电子设备和外部装置之间传输数据。
电子设备还可以包括给多个部件供电的电源(比如电池),例如,电源可以通过电源管理***与处理器72逻辑相连,从而通过电源管理***实现管理充电、放电、以及功耗管理等功能。
本申请实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,当所述计算机程序被计算设备执行时,使得所述计算设备实现上述方法实施例提供的直播视频生成方法。
本申请实施例还提供一种一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机实现上述方法实施例提供的直播视频生成方法。
本领域技术人员应明白,本申请的实施例可提供为方法、***、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质上实施的计算机程序产品的形式。计算机可读存储介质可以为非暂态计算机可读存储介质。
计算机可读介质包括永久性和非永久性、可移动和非可移动存储介质。存储介质可以由任何方法或技术来实现信息存储,信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。根据本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitorymedia),如调制的数据信号和载波。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。

Claims (14)

  1. A live video generation method, comprising:
    acquiring first feature data of an entity object in real time, wherein the entity object comprises: a torso, an upper arm connected to the torso through a shoulder joint, and a forearm connected to the upper arm through an elbow joint; the first feature data is used to characterize rotation angles of multiple parts of the forearm around an axis direction, and the rotation angles of the multiple parts of the forearm around the axis direction are positively correlated with distances from the multiple parts of the forearm to the elbow joint of the entity object;
    controlling, based on the first feature data acquired in real time, rotation angles of multiple parts of a forearm skin of a virtual model around the axis direction, wherein the virtual model comprises: a torso, an upper arm connected to the torso through a shoulder joint, and a forearm connected to the upper arm through an elbow joint; the forearm skin covers an outer surface of the forearm of the virtual model, and the rotation angles of the multiple parts of the forearm skin around the axis direction are positively correlated with distances from the multiple parts of the forearm skin to the elbow joint of the virtual model;
    generating image frames of a live video according to the virtual model and the rotation angles of the multiple parts of the forearm skin of the virtual model around the axis direction.
  2. The method according to claim 1, wherein the forearm of the virtual model comprises: a main bone, multiple additional bones socketed on the main bone, and the forearm skin covering the multiple additional bones;
    the controlling, based on the first feature data acquired in real time, rotation angles of multiple parts of the forearm skin of the virtual model around the axis direction comprises:
    controlling, based on the first feature data acquired in real time, a rotation angle of at least one of the multiple additional bones around the main bone, so as to control the rotation angle of the forearm skin around the axis direction.
  3. The method according to claim 1, wherein the generating image frames of the live video according to the virtual model and the rotation angles of the multiple parts of the forearm skin of the virtual model around the axis direction comprises:
    determining a camera orientation corresponding to the image frame, wherein the camera orientation is used to represent a positional relationship between a virtual camera corresponding to the image frame and the virtual model;
    generating the image frame according to the camera orientation corresponding to the image frame and the virtual model.
  4. The method according to claim 3, wherein the determining the camera orientation corresponding to the image frame comprises:
    determining, in response to a user operation on a camera control, the camera orientation corresponding to the image frame, wherein the camera control is configured to select a camera orientation.
  5. The method according to claim 3, wherein the determining the camera orientation corresponding to the image frame comprises:
    determining the camera orientation corresponding to the image frame based on the first feature data and preset rules.
  6. The method according to claim 1, further comprising:
    acquiring second feature data of the entity object in real time, wherein the second feature data is used to characterize a facial expression of the entity object;
    controlling a facial expression of the virtual model based on the second feature data acquired in real time.
  7. The method according to claim 6, wherein the controlling the facial expression of the virtual model based on the second feature data acquired in real time comprises:
    inputting the second feature data into an expression algorithm model, and acquiring expression driving parameters output by the expression algorithm model, wherein the expression algorithm model is a model obtained by training a preset algorithm model based on sample data, and the sample data comprises sample expression data of the entity object and expression driving parameters corresponding to the sample expression data;
    driving a face of the virtual model based on the expression driving parameters to generate the facial expression of the virtual model.
  8. The method according to claim 1, wherein the acquiring first feature data of the entity object in real time comprises:
    acquiring, in real time, position information of multiple tracking markers set on the forearm of the entity object;
    acquiring the first feature data of the entity object in real time according to the position information of the multiple tracking markers.
  9. The method according to claim 6, wherein the acquiring second feature data of the entity object in real time comprises:
    acquiring images of the entity object through an image acquisition device, so as to obtain a facial image of the entity object in real time;
    extracting the second feature data of the entity object based on the facial image of the entity object acquired in real time.
  10. The method according to any one of claims 1-9, further comprising:
    recording ambient sound of a space where the entity object is located to acquire environmental audio data;
    generating audio data of the live video according to the environmental audio data.
  11. A live video generation apparatus, comprising:
    an acquisition unit, configured to acquire first feature data of an entity object in real time, wherein the entity object comprises: a torso, an upper arm connected to the torso through a shoulder joint, and a forearm connected to the upper arm through an elbow joint; the first feature data is used to characterize rotation angles of multiple parts of the forearm around an axis direction, and the rotation angles of the multiple parts of the forearm around the axis direction are positively correlated with distances from the multiple parts of the forearm to the elbow joint of the entity object;
    a processing unit, configured to control, based on the first feature data acquired in real time, rotation angles of multiple parts of a forearm skin of a virtual model around the axis direction, wherein the virtual model comprises: a torso, an upper arm connected to the torso through a shoulder joint, and a forearm connected to the upper arm through an elbow joint; the forearm skin covers an outer surface of the forearm of the virtual model, and the rotation angles of the multiple parts of the forearm skin around the axis direction are positively correlated with distances from the multiple parts of the forearm skin to the elbow joint of the virtual model;
    a generation unit, configured to generate image frames of a live video according to the virtual model and the rotation angles of the multiple parts of the forearm skin of the virtual model around the axis direction.
  12. An electronic device, comprising: a memory and a processor, wherein the memory is configured to store a computer program, and the processor is configured to, when invoking the computer program, cause the electronic device to implement the live video generation method according to any one of claims 1-10.
  13. A computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by a computing device, the computing device is caused to implement the live video generation method according to any one of claims 1-10.
  14. A computer program product which, when running on a computer, causes the computer to implement the live video generation method according to any one of claims 1-10.
PCT/CN2022/086052 2021-05-31 2022-04-11 直播视频生成方法及装置 WO2022252823A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110598989.0 2021-05-31
CN202110598989.0A CN113365085B (zh) 2021-05-31 2021-05-31 一种直播视频生成方法及装置

Publications (1)

Publication Number Publication Date
WO2022252823A1 true WO2022252823A1 (zh) 2022-12-08

Family

ID=77528427

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/086052 WO2022252823A1 (zh) 2021-05-31 2022-04-11 直播视频生成方法及装置

Country Status (2)

Country Link
CN (1) CN113365085B (zh)
WO (1) WO2022252823A1 (zh)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113365085B (zh) * 2021-05-31 2022-08-16 北京字跳网络技术有限公司 一种直播视频生成方法及装置
CN113852838B (zh) * 2021-09-24 2023-08-11 北京字跳网络技术有限公司 视频数据生成方法、装置、电子设备及可读存储介质


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070172797A1 (en) * 2006-01-12 2007-07-26 Kabushiki Kaisha Toyota Chuo Kenkyusho Method of constructing computer-based musculoskeletal model by redefining directions of pivot axes of joints in the same model
CN109453509A (zh) * 2018-11-07 2019-03-12 龚映清 一种基于肌电和运动捕捉的虚拟上肢控制***及其方法
US20210042981A1 (en) * 2019-08-09 2021-02-11 Disney Enterprises, Inc. Systems and methods for partitioning an animatable model
CN110557625A (zh) * 2019-09-17 2019-12-10 北京达佳互联信息技术有限公司 虚拟形象直播方法、终端、计算机设备及存储介质
US20210150806A1 (en) * 2019-11-15 2021-05-20 Ariel Al Ltd 3d body model generation
CN111161335A (zh) * 2019-12-30 2020-05-15 深圳Tcl数字技术有限公司 虚拟形象的映射方法、映射装置及计算机可读存储介质
CN113365085A (zh) * 2021-05-31 2021-09-07 北京字跳网络技术有限公司 一种直播视频生成方法及装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861500A (zh) * 2022-12-09 2023-03-28 上海哔哩哔哩科技有限公司 2d模型碰撞体生成方法及装置
CN115861500B (zh) * 2022-12-09 2023-08-18 上海哔哩哔哩科技有限公司 2d模型碰撞体生成方法及装置

Also Published As

Publication number Publication date
CN113365085A (zh) 2021-09-07
CN113365085B (zh) 2022-08-16

Similar Documents

Publication Publication Date Title
CN110544280B (zh) Ar***及方法
WO2021078116A1 (zh) 视频处理方法及电子设备
WO2022252823A1 (zh) 直播视频生成方法及装置
CN110495819B (zh) 机器人的控制方法、机器人、终端、服务器及控制***
CN109461117B (zh) 一种图像处理方法及移动终端
CN111641794B (zh) 声音信号采集方法和电子设备
CN108989672B (zh) 一种拍摄方法及移动终端
WO2021012900A1 (zh) 控制振动的方法、装置、移动终端及计算机可读存储介质
CN107248137B (zh) 一种实现图像处理的方法及移动终端
CN110970003A (zh) 屏幕亮度调整方法、装置、电子设备及存储介质
CN110213485B (zh) 一种图像处理方法及终端
CN110602389B (zh) 一种显示方法及电子设备
CN111031253B (zh) 一种拍摄方法及电子设备
WO2021043121A1 (zh) 一种图像换脸的方法、装置、***、设备和存储介质
CN109618218B (zh) 一种视频处理方法及移动终端
CN108881721B (zh) 一种显示方法及终端
JP2023518548A (ja) 検出結果出力方法、電子機器及び媒体
CN114332423A (zh) 虚拟现实手柄追踪方法、终端及计算可读存储介质
WO2020238454A1 (zh) 拍摄方法及终端
CN111091519B (zh) 一种图像处理方法及装置
CN109618055B (zh) 一种位置共享方法及移动终端
CN108156386B (zh) 一种全景拍照方法及移动终端
WO2020238913A1 (zh) 视频录制方法及终端
CN111982293B (zh) 体温测量方法、装置、电子设备及存储介质
WO2021104254A1 (zh) 信息处理方法及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22814870

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22814870

Country of ref document: EP

Kind code of ref document: A1