CN110794962A - Information fusion method, device, terminal and storage medium - Google Patents


Info

Publication number
CN110794962A
Authority
CN
China
Prior art keywords
information
target virtual
target
coordinate system
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910995446.5A
Other languages
Chinese (zh)
Inventor
陈怡�
倪光耀
吕绍辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201910995446.5A
Publication of CN110794962A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0486 Drag-and-drop
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts


Abstract

The embodiment of the disclosure provides an information fusion method, an information fusion device, a terminal and a storage medium, wherein the method comprises the following steps: acquiring target virtual information and real-time pose information of a terminal; determining target pose information of the target virtual information according to the real-time pose information and a preset pose corresponding relation between the terminal and the target virtual information; and when a fusion instruction is received, fusing the target virtual information into the input scene video according to the target pose information.

Description

Information fusion method, device, terminal and storage medium
Technical Field
The embodiment of the disclosure relates to the technical field of electronic communication, and in particular, to an information fusion method, an information fusion device, a terminal and a storage medium.
Background
With the rapid development of communication technology and the popularization of terminals, various application programs running on terminals such as mobile phones and tablet computers keep emerging to help people conveniently and quickly acquire information related to daily life, study and work, and the functions provided by terminals are increasingly rich.
At present, a terminal may apply virtual information such as 3D text to the displayed real world based on Augmented Reality (AR) technology to produce a strong stereoscopic effect. In actual use, however, to add virtual information such as 3D text to a scene video displayed by the terminal, a user has to perform operations such as dragging the information along a provided three-dimensional coordinate axis in order to place it at the required position for information fusion, which is inefficient.
Disclosure of Invention
The embodiments of the disclosure provide an information fusion method, an information fusion device, a terminal and a storage medium, which improve information fusion efficiency.
The technical scheme of the embodiment of the disclosure is realized as follows:
in a first aspect, an embodiment of the present disclosure provides an information fusion method, which is applied to a terminal, and the method includes:
acquiring target virtual information and real-time pose information of the terminal;
determining target pose information of the target virtual information according to the real-time pose information and a preset pose corresponding relation between the terminal and the target virtual information;
and when a fusion instruction is received, fusing the target virtual information into an input scene video according to the target pose information.
In the above scheme, after the target virtual information is fused into the input scene video according to the target pose information, the method further includes:
acquiring a following object from the scene video;
acquiring a position relation between the following object and the target virtual information;
and when the position of the following object changes in the three-dimensional space scene, adjusting the position of the target virtual information according to the position relation.
In the above solution, the acquiring a following object from the scene video includes:
carrying out object identification on the scene video by using a preset identification algorithm to obtain the following object;
or receiving an object indication instruction, and determining the following object from the scene video according to the object indication instruction.
In the above scheme, before the obtaining of the target virtual information and the real-time pose information of the terminal, the method further includes:
receiving a selected instruction;
determining a target effect from a preset information effect library according to the selected instruction;
receiving input information;
and processing the input information according to the target effect to generate the target virtual information.
In the foregoing solution, after the processing the input information according to the target effect and generating the target virtual information, the method further includes:
and when an editing instruction is received, editing the target virtual information according to the editing instruction.
In the above scheme, the fusing the target virtual information into the input scene video according to the target pose information includes:
acquiring a mapping relation between a first coordinate system and a second coordinate system; the first coordinate system is a coordinate system in which the real-time pose information is located, and the second coordinate system is a coordinate system in which each point in the scene video is located;
mapping the target pose information from the first coordinate system to the second coordinate system by using the mapping relation to obtain mapping pose information;
and adding the target virtual information into the scene video according to the mapping pose information.
In the above scheme, the target virtual information is three-dimensional character information.
In a second aspect, an embodiment of the present disclosure provides an information fusion apparatus, which is applied to a terminal, and the apparatus includes:
the acquisition unit is used for acquiring target virtual information and real-time pose information of the terminal;
the determining unit is used for determining target pose information of the target virtual information according to the real-time pose information and a preset pose corresponding relation between the terminal and the target virtual information;
and the fusion unit is used for fusing the target virtual information into the input scene video according to the target pose information when a fusion instruction is received.
In the above apparatus, the apparatus further comprises an adjustment unit,
the adjusting unit is used for acquiring a following object from the scene video; acquiring a position relation between the following object and the target virtual information; and when the position of the following object in the scene video changes, adjusting the position of the target virtual information according to the position relation.
In the above apparatus, the adjusting unit is specifically configured to perform object identification on the scene video by using a preset identification algorithm to obtain the following object; or receiving an object indication instruction, and determining the following object from the scene video according to the object indication instruction.
In the above apparatus, the apparatus further comprises a generation unit;
the generating unit is used for receiving a selected instruction; determining a target effect from a preset information effect library according to the selected instruction; receiving input information; and processing the input information according to the target effect to generate the target virtual information.
In the above apparatus, the generating unit is further configured to, when an editing instruction is received, edit the target virtual information according to the editing instruction.
In the above apparatus, the fusion unit is specifically configured to obtain a mapping relationship between a first coordinate system and a second coordinate system; the first coordinate system is a coordinate system in which the real-time pose information is located, and the second coordinate system is a coordinate system in which each point in the scene video is located; mapping the target pose information from the first coordinate system to the second coordinate system by using the mapping relation to obtain mapping pose information; and adding the target virtual information into the scene video according to the mapping pose information.
In the above apparatus, the target virtual information is three-dimensional character information.
In a third aspect, an embodiment of the present disclosure provides a terminal, including:
a memory for storing executable instructions;
and the processor is used for realizing the information fusion method when the executable instruction is executed.
In a fourth aspect, an embodiment of the present disclosure provides a storage medium storing executable instructions which, when executed by a processor, implement the information fusion method described above.
The embodiment of the disclosure has the following beneficial effects:
by associating the virtual information with the pose of the terminal, once the terminal is controlled into a suitable pose, the pose of the virtual information that meets the requirement can be determined from the pose of the terminal for information fusion, thereby improving information fusion efficiency.
Drawings
Fig. 1 is a schematic structural diagram of a terminal 100 implementing an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of an information fusion device 200 for implementing an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart diagram of an alternative method for implementing information fusion in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of an exemplary target effect selection implementing an embodiment of the present disclosure;
fig. 5 is an alternative flow chart diagram of an information fusion method implementing the embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should also be noted that the modifiers "a", "an" and "the" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art should understand them as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Referring now to fig. 1, a block diagram of a terminal 100 suitable for use in implementing embodiments of the present disclosure is shown. The terminals include mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, Personal Digital Assistants (PDAs), tablet computers (PADs), Portable Multimedia Players (PMPs), car terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital Televisions (TVs), desktop computers, and the like. The terminal shown in fig. 1 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 1, the terminal 100 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 110, which may perform various appropriate actions and processes according to a program stored in a Read-Only Memory (ROM) 120 or a program loaded from a storage device 180 into a Random Access Memory (RAM) 130. In the RAM 130, various programs and data necessary for the operation of the terminal 100 are also stored. The processing device 110, the ROM 120, and the RAM 130 are connected to each other through a bus 140. An Input/Output (I/O) interface 150 is also connected to bus 140.
Generally, the following devices may be connected to the I/O interface 150: input devices 160 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 170 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; a storage device 180 including, for example, a magnetic tape, a hard disk, or the like; and a communication device 190. The communication means 190 may allow the terminal 100 to perform wireless or wired communication with other devices to exchange data. While fig. 1 illustrates a terminal 100 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, the processes described by the provided flowcharts may be implemented as computer software programs according to embodiments of the present disclosure. For example, the disclosed embodiments include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network through the communication device 190, or installed from the storage device 180, or installed from the ROM 120. The computer program, when executed by the processing device 110, performs the functions in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an erasable programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable compact disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the disclosed embodiments, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the disclosed embodiments, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including over electrical wiring, fiber optics, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the terminal; or may exist separately and not be assembled into the terminal.
The computer readable medium carries one or more programs, and when the one or more programs are executed by the terminal, the terminal is enabled to execute the information fusion method provided by the embodiment of the disclosure.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of Network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams provided by the embodiments of the present disclosure illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The functions described in the embodiments of the present disclosure may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field-Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of embodiments of the present disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The following describes the units of an information fusion apparatus provided by an embodiment of the present disclosure. It can be understood that the units or modules in the apparatus can be implemented in the terminal shown in fig. 1 by means of software (for example, a computer program such as the computer software program described above) or by means of the hardware logic components described above (for example, an FPGA, ASIC, ASSP, SOC or CPLD).
Referring to fig. 2, fig. 2 is a schematic structural diagram of an information fusion apparatus 200 implementing an embodiment of the present disclosure, showing the following modules:
an obtaining unit 210, configured to obtain target virtual information and real-time pose information of the terminal;
a determining unit 220, configured to determine target pose information of the target virtual information according to the real-time pose information and a preset pose corresponding relationship between the terminal and the target virtual information;
and a fusion unit 230, configured to fuse the target virtual information into an input scene video according to the target pose information when a fusion instruction is received.
Optionally, the apparatus further comprises an adjusting unit 240;
the adjusting unit 240 is configured to obtain a following object from the scene video; acquiring a position relation between the following object and the target virtual information; and when the position of the following object in the scene video changes, adjusting the position of the target virtual information according to the position relation.
Optionally, the adjusting unit 240 is specifically configured to perform object identification on the scene video by using a preset identification algorithm to obtain the following object; or receiving an object indication instruction, and determining the following object from the scene video according to the object indication instruction.
Optionally, the apparatus further comprises a generating unit 250;
the generating unit 250 is configured to receive a selected instruction; determining a target effect from a preset information effect library according to the selected instruction; receiving input information; and processing the input information according to the target effect to generate the target virtual information.
Optionally, the generating unit 250 is further configured to, when an editing instruction is received, edit the target virtual information according to the editing instruction.
Optionally, the target virtual information is three-dimensional character information.
It should be noted that the above-mentioned classification of units does not constitute a limitation of the terminal itself, for example, some units may be split into two or more sub-units, or some units may be combined into a new unit.
It should also be noted that the names of the above units do not in some cases form a limitation on the units themselves, and for example, the above acquisition unit 210 may also be described as a unit for "acquiring target virtual information and real-time pose information of the terminal".
Likewise, units and/or modules that are not described in detail for the terminal do not indicate that the corresponding units and/or modules are absent, and all operations performed by the terminal can be implemented by the corresponding units and/or modules in the terminal.
With continuing reference to fig. 3, fig. 3 is an optional flowchart of an information fusion method according to an embodiment of the present disclosure. For example, when the information fusion apparatus loads a program from the Read-Only Memory (ROM) 120 or from the storage device 180 into the Random Access Memory (RAM) 130 and executes the program, the information fusion method shown in fig. 3 can be implemented. The steps shown in fig. 3 are described below:
s301, acquiring target virtual information and real-time pose information of the terminal.
In an embodiment of the present disclosure, the terminal includes an information fusion device, through which the information fusion method is executed. The information fusion device can acquire the target virtual information and the real-time pose information of the terminal.
It should be noted that, in the embodiment of the present disclosure, the target virtual information is virtual information that needs to be fused with the input real scene video. In the case of fusing target virtual information into an input scene video, a better visual effect may be provided to a user. The target virtual information may be generated in advance by the information fusion apparatus based on the received user interaction operation. The specific target virtual information may be virtual information such as 3D text, and the embodiment of the disclosure is not limited.
Specifically, in the embodiment of the present disclosure, before acquiring the target virtual information, the information fusion apparatus may: receive a selection instruction; determine a target effect from a preset information effect library according to the selection instruction; receive input information; and process the input information according to the target effect to generate the target virtual information.
It can be understood that, in the embodiment of the present disclosure, a preset information effect library is stored in the information fusion apparatus and includes a plurality of preset information effects. After the user sends a selection instruction to the information fusion apparatus through a specific key or touch operation, the information fusion apparatus can determine the effect indicated by the selection instruction in the preset information effect library as the target effect. When the user inputs the information to be added through a specific key or touch operation, the information fusion apparatus can process the input information according to the target effect to obtain the target virtual information, i.e. the input information rendered with the target effect.
Fig. 4 is a schematic diagram of an exemplary target effect selection implementing an embodiment of the present disclosure. As shown in fig. 4, the preset information effect library includes 6 effects; the user can select effect 6 as the target effect through a touch operation, and accordingly the information fusion apparatus processes the input information according to effect 6 so that the input information takes on that effect.
It should be noted that, in the embodiment of the present disclosure, a specific target effect is determined by a user according to an actual requirement, and the embodiment of the present disclosure is not limited.
It should be noted that, in the embodiment of the present disclosure, after the information fusion apparatus processes the input information according to the target effect to generate the target virtual information, the method may further include the following step: when an editing instruction is received, editing the target virtual information according to the editing instruction.
For example, in the embodiment of the present disclosure, the target virtual information is 3D text, and the information fusion apparatus receives an editing instruction that instructs the font of the 3D text to be changed to a target type; the information fusion apparatus then edits the font of the 3D text to the target type.
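As an illustration of how a terminal might generate and edit the target virtual information described above, the following minimal Python sketch selects an effect from a preset effect library, applies it to input text, and handles a font-editing instruction. The Effect and TargetVirtualInfo classes, the effect identifiers and the extrusion_depth field are hypothetical names introduced purely for this example; the embodiment does not prescribe any concrete data structure.

    from dataclasses import dataclass, replace

    @dataclass
    class Effect:
        effect_id: int
        font: str
        extrusion_depth: float  # hypothetical parameter used to extrude 2D text into 3D text

    @dataclass
    class TargetVirtualInfo:
        text: str
        effect: Effect

    # Stand-in for the preset information effect library (e.g. the 6 effects shown in Fig. 4).
    EFFECT_LIBRARY = {i: Effect(effect_id=i, font="default", extrusion_depth=0.01 * i)
                      for i in range(1, 7)}

    def generate_target_virtual_info(selected_effect_id: int, input_text: str) -> TargetVirtualInfo:
        """Determine the target effect indicated by the selection instruction and apply it to the input."""
        target_effect = replace(EFFECT_LIBRARY[selected_effect_id])  # copy so editing does not alter the library
        return TargetVirtualInfo(text=input_text, effect=target_effect)

    def edit_target_virtual_info(info: TargetVirtualInfo, new_font: str) -> None:
        """Handle an editing instruction, e.g. change the font of the 3D text to a target type."""
        info.effect.font = new_font

    info = generate_target_virtual_info(selected_effect_id=6, input_text="Hello")
    edit_target_virtual_info(info, new_font="bold")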
It should be noted that, in the embodiment of the present disclosure, the information fusion device may acquire real-time pose information of the terminal, that is, a spatial position and a spatial posture where the terminal is located in a three-dimensional space in real time. The real-time spatial position of the terminal in the three-dimensional space may be represented by a coordinate of the terminal in a world coordinate system, the real-time spatial posture of the terminal in the three-dimensional space may be represented by a deflection angle of the terminal in the world coordinate system, and the embodiment of the present disclosure is not limited.
It should be noted that, in the embodiment of the present disclosure, the information fusion device may acquire the real-time pose information of the terminal through a specific positioning device, or may acquire information related to the pose of the terminal and calculate the real-time pose information from it; the specific manner of acquisition is not limited in the embodiments of the present disclosure.
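As a rough sketch, the real-time pose information can be modeled as a spatial position plus a deflection angle in the world coordinate system. The PoseProvider class below is a hypothetical stand-in for whichever positioning device or sensor-based calculation the terminal actually uses; it is not an interface defined by the embodiment.

    from dataclasses import dataclass

    @dataclass
    class TerminalPose:
        x: float      # real-time coordinate on the x-axis of the world coordinate system
        y: float      # real-time coordinate on the y-axis
        z: float      # real-time coordinate on the z-axis
        alpha: float  # real-time deflection angle of the terminal, in radians

    class PoseProvider:
        """Hypothetical wrapper over a positioning device or a sensor-based pose calculation."""

        def read(self) -> TerminalPose:
            # A real terminal would query gyroscope/accelerometer data or an AR tracking
            # framework here; a fixed pose stands in for that output in this sketch.
            return TerminalPose(x=0.0, y=0.0, z=0.0, alpha=0.0)

    real_time_pose = PoseProvider().read()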
S302, determining target pose information of the target virtual information according to the real-time pose information and the preset pose corresponding relation between the terminal and the target virtual information.
In the embodiment of the disclosure, after the information fusion device acquires the target virtual information, further, the target pose information of the target virtual information may be determined according to the real-time pose information and the preset pose corresponding relationship between the terminal and the target virtual information.
It should be noted that, in the embodiment of the present disclosure, the preset pose corresponding relationship between the terminal and the target virtual information is stored in the information fusion device. The preset pose corresponding relationship is a corresponding fixed pose relationship between the pose of the terminal and the pose of the target virtual information, and the specific preset pose relationship is not limited in the embodiment of the disclosure.
For example, in the embodiment of the present disclosure, the preset pose corresponding relationship between the terminal and the target virtual information may be that the terminal and the target virtual information have the same x-axis position, y-axis position and deflection angle in three-dimensional space, while their positions on the z-axis differ by d1.
It can be understood that, in the embodiment of the present disclosure, the real-time pose information of the terminal and the target pose information of the target virtual information always satisfy the preset pose corresponding relationship, and therefore, the information fusion apparatus can determine the target pose information of the target virtual information by using the preset pose corresponding relationship according to the real-time pose information of the terminal.
Illustratively, in the embodiment of the present disclosure, the preset pose corresponding relationship between the terminal and the target virtual information is that the x-axis position, the y-axis position and the deflection angle of the terminal and the target virtual information in three-dimensional space are the same, while their positions on the z-axis differ by d1. The real-time pose information of the terminal is (a, b, c, α), where a, b and c respectively correspond to the real-time coordinates of the terminal on the x-axis, y-axis and z-axis of the three-dimensional space, and α is the real-time deflection angle of the terminal; the target pose information of the target virtual information can then be determined as, for example, (a, b, c + d1, α).
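A minimal sketch of step S302 under this example correspondence is given below: the x-axis position, y-axis position and deflection angle are shared, and the z-axis positions differ by d1. The 4-tuple layout (a, b, c, α) follows the example in the text, while the concrete value of d1 and the sign of the offset are assumptions chosen only for illustration.

    from typing import Tuple

    TerminalPose4 = Tuple[float, float, float, float]  # (a, b, c, alpha)

    D1 = 0.5  # assumed z-axis offset between the terminal and the target virtual information

    def determine_target_pose(real_time_pose: TerminalPose4, d1: float = D1) -> TerminalPose4:
        """Apply the preset pose corresponding relationship to the terminal's real-time pose."""
        a, b, c, alpha = real_time_pose
        return (a, b, c + d1, alpha)

    # Moving or rotating the terminal immediately yields a new target pose for the virtual information.
    print(determine_target_pose((0.1, 0.2, 0.0, 0.3)))  # -> (0.1, 0.2, 0.5, 0.3)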
Compared with the related art, since the terminal and the target virtual information have the preset pose corresponding relationship, when actually adding the target virtual information, the user can control the target virtual information into a desired pose simply by adjusting the pose of the terminal, without performing additional interactive operations, so that the efficiency of adding information is improved.
And S303, when a fusion instruction is received, fusing the target virtual information into the input scene video according to the target pose information.
In the embodiment of the disclosure, after the information fusion device determines the target pose information of the target virtual information according to the real-time pose information and the preset pose corresponding relation of the terminal, when a fusion instruction is received, the target virtual information is fused into the input scene video according to the target pose information.
Specifically, in the embodiment of the present disclosure, the fusing, by the information fusion apparatus, the target virtual information into the input scene video according to the target pose information includes: acquiring a mapping relation between a first coordinate system and a second coordinate system; the first coordinate system is a coordinate system in which the real-time pose information is located, and the second coordinate system is a coordinate system in which each point in the scene video is located; mapping the target pose information from the first coordinate system to the second coordinate system by using the mapping relation to obtain mapping pose information; and adding the target virtual information into the scene video according to the mapping pose information.
It should be noted that, in the embodiment of the present disclosure, the target pose information acquired by the information fusion apparatus may be coordinates and a deflection angle of the terminal in a world coordinate system, and the first coordinate system is actually the world coordinate system. The scene video is actually displayed on the display interface of the terminal, each point in the scene video may correspond to a coordinate in the screen coordinate system, and the second coordinate system is actually the screen coordinate system.
It should be noted that, in the disclosed embodiment, the information fusion apparatus may obtain a mapping relationship between the first coordinate system and the second coordinate system, so as to implement mapping of the target pose information in the first coordinate system to the second coordinate system. The mapping relationship between the first coordinate system and the second coordinate system may be determined by a specific projection calculation, and the embodiment of the disclosure is not limited.
It can be understood that, in the embodiment of the present disclosure, since the coordinate system where each point in the scene video is located is the second coordinate system, after obtaining the mapping pose information corresponding to the target pose information in the second coordinate system, the information fusion apparatus may add the target virtual information to the scene video according to the position and the posture represented by the mapping pose information.
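The mapping from the first (world) coordinate system to the second (screen) coordinate system can be illustrated with a simple pinhole projection, as in the sketch below. The view matrix and the intrinsic parameters fx, fy, cx and cy are placeholders assumed for this example; the embodiment only requires that some mapping relation between the two coordinate systems is available, however it is obtained.

    import numpy as np

    def world_to_screen(point_world: np.ndarray, view: np.ndarray,
                        fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
        """Map a 3D point from the world coordinate system to 2D screen coordinates."""
        p = view @ np.append(point_world, 1.0)   # world coordinates -> camera coordinates
        u = fx * p[0] / p[2] + cx                # perspective division plus camera intrinsics
        v = fy * p[1] / p[2] + cy
        return np.array([u, v])

    # Example: identity view matrix, a point half a metre in front of the camera.
    screen_xy = world_to_screen(np.array([0.0, 0.0, 0.5]), np.eye(4),
                                fx=800.0, fy=800.0, cx=360.0, cy=640.0)
    print(screen_xy)  # 2D position at which the target virtual information is drawn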
It can be understood that, in the embodiment of the present disclosure, a user who knows the preset pose corresponding relationship between the terminal and the target virtual information can control the terminal into a specific pose in three-dimensional space so that the target virtual information takes on the desired pose; the information fusion apparatus can then fuse the target virtual information into the input scene video according to the target pose information.
It should be noted that, in the embodiment of the present disclosure, the input scene video may be a video of a current real scene presented on the display interface, or may be a scene video shot in advance, and the embodiment of the present disclosure is not limited.
It should be noted that, in the embodiment of the present disclosure, after the information fusion apparatus fuses the target virtual information into the input scene video, the target virtual information may keep the spatial position and posture represented by the target pose information unchanged in the scene video, or the apparatus may acquire a following object from the scene video for the target virtual information to follow.
With continued reference to fig. 5, fig. 5 is an alternative flow chart diagram of an information fusion method implementing the embodiments of the present disclosure. As shown in fig. 5, after the information fusion apparatus fuses the target virtual information into the input scene video according to the target pose information, the method further includes the following steps:
s501, acquiring a following object from the scene video.
In the embodiment of the disclosure, the information fusion device may acquire a following object to be followed by the target virtual information from the scene video.
Specifically, in the embodiment of the present disclosure, the information fusion apparatus acquires the following object from the scene video by: carrying out object identification on the scene video by using a preset identification algorithm to obtain the following object; or receiving an object indication instruction and determining the following object from the scene video according to the object indication instruction. The specific following object is not limited in the embodiments of the present disclosure.
For example, in the embodiment of the present disclosure, the preset recognition algorithm is a face recognition algorithm, and the information fusion apparatus may use the face recognition algorithm, so as to recognize a face in the scene video and determine the face as a following object.
For example, in the embodiment of the present disclosure, the information fusion apparatus receives an object indication instruction sent by a user and determines the person indicated by the object indication instruction in the scene video as the following object.
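Both ways of obtaining the following object can be sketched as follows. The face-detection branch uses OpenCV's Haar cascade merely as one possible "preset recognition algorithm"; the frame argument and the indication dictionary are illustrative assumptions rather than interfaces defined by the embodiment.

    import cv2

    def follow_object_by_recognition(frame):
        """Run a preset recognition algorithm (here: face detection) on a scene-video frame."""
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return faces[0] if len(faces) else None  # (x, y, w, h) of the first detected face

    def follow_object_by_indication(indication: dict):
        """Use the object explicitly indicated by the user, e.g. a tapped bounding box."""
        return indication["bounding_box"]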
And S502, acquiring the position relation between the following object and the target virtual information.
In the embodiment of the present disclosure, after acquiring the following object, the information fusion apparatus may further acquire a positional relationship between the following object and the target virtual information.
It should be noted that, in the embodiment of the present disclosure, a certain positional relationship exists in the scene video between the following object and the target virtual information. The information fusion apparatus can directly acquire the spatial position information of the following object, acquire the spatial position information of the target virtual information from the target pose information, and then determine the corresponding positional relationship from the two. The specific positional relationship between the following object and the target virtual information is not limited in the embodiments of the present disclosure.
S503, when the position of the following object changes in the scene, the position of the target virtual information is adjusted according to the position relation.
In an embodiment of the present disclosure, after obtaining the positional relationship between the following object and the target virtual information, the information fusion apparatus adjusts the position of the target virtual information according to the positional relationship when the position of the following object changes in the scene video.
It should be noted that, in the embodiment of the present disclosure, in the three-dimensional space scene, the position of the following object may change at any time, and at this time, along with the change of the position of the following object, the information fusion apparatus will also adjust the position of the target virtual information so that the position of the target virtual information and the position of the following object always satisfy the previously acquired positional relationship.
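A minimal sketch of steps S501 to S503 after fusion is given below: the positional relation is captured as an offset between the following object and the target virtual information, and that offset is re-applied whenever the object's position changes. Representing positions as three-component vectors is an assumption made for illustration.

    from typing import Tuple

    Vec3 = Tuple[float, float, float]

    def capture_position_relation(object_pos: Vec3, virtual_pos: Vec3) -> Vec3:
        """Positional relation = offset from the following object to the target virtual information."""
        return (virtual_pos[0] - object_pos[0],
                virtual_pos[1] - object_pos[1],
                virtual_pos[2] - object_pos[2])

    def adjust_virtual_position(new_object_pos: Vec3, relation: Vec3) -> Vec3:
        """Keep the previously captured relation satisfied after the following object moves."""
        return (new_object_pos[0] + relation[0],
                new_object_pos[1] + relation[1],
                new_object_pos[2] + relation[2])

    # Usage: the followed face moves and the 3D text keeps its offset relative to it.
    relation = capture_position_relation(object_pos=(0.0, 0.0, 1.0), virtual_pos=(0.5, 0.25, 1.0))
    print(adjust_virtual_position(new_object_pos=(0.25, 0.0, 1.5), relation=relation))  # -> (0.75, 0.25, 1.5)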
The embodiment of the disclosure provides an information fusion method applied to a terminal, the method including: acquiring target virtual information and real-time pose information of the terminal; determining target pose information of the target virtual information according to the real-time pose information and a preset pose corresponding relation between the terminal and the target virtual information; and when a fusion instruction is received, fusing the target virtual information into the input scene video according to the target pose information. According to the technical scheme provided by the embodiment of the disclosure, the virtual information is associated with the pose of the terminal, so that once the terminal is controlled into a suitable pose, the pose of the virtual information that meets the requirement can be determined from the pose of the terminal for information fusion, thereby improving information fusion efficiency.
According to one or more embodiments of the present disclosure, there is provided an information fusion method applied to a terminal, the method including:
acquiring target virtual information and real-time pose information of the terminal;
determining target pose information of the target virtual information according to the real-time pose information and a preset pose corresponding relation between the terminal and the target virtual information;
and when a fusion instruction is received, fusing the target virtual information into an input scene video according to the target pose information.
In the above scheme, after the target virtual information is fused into the input scene video according to the target pose information, the method further includes:
acquiring a following object from the scene video;
acquiring a position relation between the following object and the target virtual information;
and when the position of the following object in the scene video changes, adjusting the position of the target virtual information according to the position relation.
In the above solution, the acquiring a following object from the scene video includes:
carrying out object identification on the scene video by using a preset identification algorithm to obtain the following object;
or receiving an object indication instruction, and determining the following object from the scene video according to the object indication instruction.
In the above scheme, before the obtaining of the target virtual information and the real-time pose information of the terminal, the method further includes:
receiving a selected instruction;
determining a target effect from a preset information effect library according to the selected instruction;
receiving input information;
and processing the input information according to the target effect to generate the target virtual information.
In the foregoing solution, after the processing the input information according to the target effect and generating the target virtual information, the method further includes:
and when an editing instruction is received, editing the target virtual information according to the editing instruction.
In the above scheme, the fusing the target virtual information into the input scene video according to the target pose information includes:
acquiring a mapping relation between a first coordinate system and a second coordinate system; the first coordinate system is a coordinate system in which the real-time pose information is located, and the second coordinate system is a coordinate system in which each point in the scene video is located;
mapping the target pose information from the first coordinate system to the second coordinate system by using the mapping relation to obtain mapping pose information;
and adding the target virtual information into the scene video according to the mapping pose information.
In the above scheme, the target virtual information is three-dimensional character information.
According to one or more embodiments of the present disclosure, there is provided an information fusion apparatus applied to a terminal, the apparatus including:
the acquisition unit is used for acquiring target virtual information and real-time pose information of the terminal;
the determining unit is used for determining target pose information of the target virtual information according to the real-time pose information and a preset pose corresponding relation between the terminal and the target virtual information;
and the fusion unit is used for fusing the target virtual information into the input scene video according to the target pose information when a fusion instruction is received.
In the above apparatus, the apparatus further comprises an adjustment unit,
the adjusting unit is used for acquiring a following object from the scene video; acquiring a position relation between the following object and the target virtual information; and when the position of the following object in the scene video changes, adjusting the position of the target virtual information according to the position relation.
In the above apparatus, the adjusting unit is specifically configured to perform object identification on the scene video by using a preset identification algorithm to obtain the following object; or receiving an object indication instruction, and determining the following object from the scene video according to the object indication instruction.
In the above apparatus, the apparatus further comprises a generation unit;
the generating unit is used for receiving a selected instruction; determining a target effect from a preset information effect library according to the selected instruction; receiving input information; and processing the input information according to the target effect to generate the target virtual information.
In the above apparatus, the generating unit is further configured to, when an editing instruction is received, edit the target virtual information according to the editing instruction.
In the above apparatus, the fusion unit is specifically configured to obtain a mapping relationship between a first coordinate system and a second coordinate system; the first coordinate system is a coordinate system in which the real-time pose information is located, and the second coordinate system is a coordinate system in which each point in the scene video is located; mapping the target pose information from the first coordinate system to the second coordinate system by using the mapping relation to obtain mapping pose information; and adding the target virtual information into the scene video according to the mapping pose information.
In the above apparatus, the target virtual information is three-dimensional character information.
According to one or more embodiments of the present disclosure, there is provided a terminal including:
a memory for storing executable instructions;
and the processor is used for realizing the information fusion method when the executable instruction is executed.
According to one or more embodiments of the present disclosure, a storage medium is provided, which stores executable instructions, when executed by a processor, for implementing the above information fusion method.
The above description is only an example of the present disclosure and an illustration of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features with similar functions disclosed in this disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (16)

1. An information fusion method is applied to a terminal, and is characterized in that the method comprises the following steps:
acquiring target virtual information and real-time pose information of the terminal;
determining target pose information of the target virtual information according to the real-time pose information and a preset pose corresponding relation between the terminal and the target virtual information;
and when a fusion instruction is received, fusing the target virtual information into an input scene video according to the target pose information.
2. The method of claim 1, wherein after fusing the target virtual information into the input scene video according to the target pose information, the method further comprises:
acquiring a following object from the scene video;
acquiring a position relation between the following object and the target virtual information;
and when the position of the following object in the scene video changes, adjusting the position of the target virtual information according to the position relation.
3. The method of claim 2, wherein the acquiring a following object from the scene video comprises:
carrying out object identification on the scene video by using a preset identification algorithm to obtain the following object;
or receiving an object indication instruction, and determining the following object from the scene video according to the object indication instruction.
4. The method according to claim 1, wherein before the obtaining of the target virtual information and the real-time pose information of the terminal, the method further comprises:
receiving a selected instruction;
determining a target effect from a preset information effect library according to the selected instruction;
receiving input information;
and processing the input information according to the target effect to generate the target virtual information.
5. The method of claim 4, wherein after processing the input information according to the target effect to generate the target virtual information, the method further comprises:
and when an editing instruction is received, editing the target virtual information according to the editing instruction.
6. The method of claim 1, wherein the fusing the target virtual information into the input scene video according to the target pose information comprises:
acquiring a mapping relation between a first coordinate system and a second coordinate system; the first coordinate system is a coordinate system in which the real-time pose information is located, and the second coordinate system is a coordinate system in which each point in the scene video is located;
mapping the target pose information from the first coordinate system to the second coordinate system by using the mapping relation to obtain mapping pose information;
and adding the target virtual information into the scene video according to the mapping pose information.
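The mapping step of claim 6 amounts to a change of coordinates; the sketch below assumes the mapping relation is a single 4×4 homogeneous transform from the first coordinate system (where the real-time pose information lives) to the second coordinate system (the scene video), which is one possibility rather than the only one.

```python
import numpy as np

def map_target_pose(target_pose_first: np.ndarray,
                    first_to_second: np.ndarray) -> np.ndarray:
    """Map the target pose from the first coordinate system into the second one."""
    return first_to_second @ target_pose_first

# Example: with an identity mapping relation, the mapped pose equals the target pose.
if __name__ == "__main__":
    pose = np.eye(4)
    pose[:3, 3] = [0.0, 0.0, -1.0]            # virtual information 1 m in front of the camera
    mapped = map_target_pose(pose, np.eye(4))
    print(mapped[:3, 3])                       # -> [ 0.  0. -1.]
```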
7. The method of claim 1, wherein the target virtual information is three-dimensional text information.
8. An information fusion apparatus applied to a terminal, the apparatus comprising:
the acquisition unit is used for acquiring target virtual information and real-time pose information of the terminal;
the determining unit is used for determining target pose information of the target virtual information according to the real-time pose information and a preset pose corresponding relation between the terminal and the target virtual information;
and the fusion unit is used for fusing the target virtual information into the input scene video according to the target pose information when a fusion instruction is received.
9. The apparatus of claim 8, characterized in that the apparatus further comprises an adjusting unit,
the adjusting unit is used for acquiring a following object from the scene video; acquiring a position relation between the following object and the target virtual information; and when the position of the following object in the scene video changes, adjusting the position of the target virtual information according to the position relation.
10. The apparatus of claim 9,
the adjusting unit is specifically configured to perform object identification on the scene video by using a preset identification algorithm to obtain the following object; or receiving an object indication instruction, and determining the following object from the scene video according to the object indication instruction.
11. The apparatus of claim 8, further comprising a generating unit;
the generating unit is used for receiving a selection instruction; determining a target effect from a preset information effect library according to the selection instruction; receiving input information; and processing the input information according to the target effect to generate the target virtual information.
12. The apparatus of claim 11,
the generating unit is further configured to edit the target virtual information according to an editing instruction when the editing instruction is received.
13. The apparatus of claim 8,
the fusion unit is specifically used for acquiring a mapping relation between a first coordinate system and a second coordinate system; the first coordinate system is a coordinate system in which the real-time pose information is located, and the second coordinate system is a coordinate system in which each point in the scene video is located; mapping the target pose information from the first coordinate system to the second coordinate system by using the mapping relation to obtain mapping pose information; and adding the target virtual information into the scene video according to the mapping pose information.
14. The apparatus of claim 8, wherein the target virtual information is three-dimensional text information.
15. A terminal, comprising:
a memory for storing executable instructions;
a processor, configured to execute the executable instructions to implement the information fusion method according to any one of claims 1 to 7.
16. A storage medium storing executable instructions which, when executed by a processor, implement the information fusion method according to any one of claims 1 to 7.
CN201910995446.5A 2019-10-18 2019-10-18 Information fusion method, device, terminal and storage medium Pending CN110794962A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910995446.5A CN110794962A (en) 2019-10-18 2019-10-18 Information fusion method, device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN110794962A true CN110794962A (en) 2020-02-14

Family

ID=69439381

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910995446.5A Pending CN110794962A (en) 2019-10-18 2019-10-18 Information fusion method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110794962A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102696057A (en) * 2010-03-25 2012-09-26 比兹摩德莱恩有限公司 Augmented reality systems
CN108255304A (en) * 2018-01-26 2018-07-06 腾讯科技(深圳)有限公司 Video data handling procedure, device and storage medium based on augmented reality
CN108711188A (en) * 2018-02-24 2018-10-26 石化盈科信息技术有限责任公司 A kind of factory's real time data methods of exhibiting and system based on AR
CN108520552A (en) * 2018-03-26 2018-09-11 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108744507A (en) * 2018-05-18 2018-11-06 腾讯科技(深圳)有限公司 Virtual objects whereabouts control method, device, electronic device and storage medium
CN108771866A (en) * 2018-05-29 2018-11-09 网易(杭州)网络有限公司 Virtual object control method in virtual reality and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Shanli et al.: "Introduction to Virtual Reality (《虚拟现实概论》)", Beijing Institute of Technology Press, pages 86-87 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111651051A (en) * 2020-06-10 2020-09-11 浙江商汤科技开发有限公司 Virtual sand table display method and device
CN111651051B (en) * 2020-06-10 2023-08-22 浙江商汤科技开发有限公司 Virtual sand table display method and device
CN111652987A (en) * 2020-06-12 2020-09-11 浙江商汤科技开发有限公司 Method and device for generating AR group photo image
CN111652987B (en) * 2020-06-12 2023-11-07 浙江商汤科技开发有限公司 AR group photo image generation method and device
CN114332416A (en) * 2021-11-30 2022-04-12 北京百度网讯科技有限公司 Image processing method, device, equipment and storage medium
CN114332416B (en) * 2021-11-30 2022-11-29 北京百度网讯科技有限公司 Image processing method, device, equipment and storage medium
CN114253421A (en) * 2021-12-16 2022-03-29 北京有竹居网络技术有限公司 Control method, device, terminal and storage medium of virtual model
WO2023169199A1 (en) * 2022-03-07 2023-09-14 北京字跳网络技术有限公司 Method and apparatus for controlling virtual object, and computer device and storage medium

Similar Documents

Publication Publication Date Title
CN110794962A (en) Information fusion method, device, terminal and storage medium
CN111243049B (en) Face image processing method and device, readable medium and electronic equipment
CN113377366B (en) Control editing method, device, equipment, readable storage medium and product
CN114564106B (en) Method and device for determining interaction indication line, electronic equipment and storage medium
CN110969159B (en) Image recognition method and device and electronic equipment
CN111818265B (en) Interaction method and device based on augmented reality model, electronic equipment and medium
CN111833459B (en) Image processing method and device, electronic equipment and storage medium
CN109816791B (en) Method and apparatus for generating information
CN110619615A (en) Method and apparatus for processing image
CN116527993A (en) Video processing method, apparatus, electronic device, storage medium and program product
CN114723600A (en) Method, device, equipment, storage medium and program product for generating cosmetic special effect
CN111354070B (en) Stereoscopic graph generation method and device, electronic equipment and storage medium
CN113873156A (en) Image processing method and device and electronic equipment
CN109522459B (en) Method and device for specifying task contact person, computer equipment and storage medium
CN113703704A (en) Interface display method, head-mounted display device and computer readable medium
CN109600558B (en) Method and apparatus for generating information
CN110941389A (en) Method and device for triggering AR information points by focus
CN110620916A (en) Method and apparatus for processing image
CN112822418B (en) Video processing method and device, storage medium and electronic equipment
CN112837424B (en) Image processing method, apparatus, device and computer readable storage medium
CN110136181B (en) Method and apparatus for generating information
CN112395826B (en) Text special effect processing method and device
US20230409121A1 (en) Display control method, apparatus, electronic device, medium, and program product
CN117906634A (en) Equipment detection method, device, equipment and medium
CN115375801A (en) Image processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination