CN107358659B - Multi-picture fusion display method based on 3D technology and storage device - Google Patents


Info

Publication number
CN107358659B
CN107358659B (granted publication of application CN201710599954.2A)
Authority
CN
China
Prior art keywords: display, different, scene, data sources, displayed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710599954.2A
Other languages
Chinese (zh)
Other versions
CN107358659A (en)
Inventor
冯皓
林鎏娟
Current Assignee (the listed assignees may be inaccurate)
Fujian Star Net Communication Co Ltd
Original Assignee
Fujian Star Net Communication Co Ltd
Priority date (the listed date is an assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Fujian Star Net Communication Co Ltd
Priority to CN201710599954.2A
Publication of CN107358659A
Application granted
Publication of CN107358659B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1423 Digital output to display device; Cooperation and interconnection of the display device with other functional units, controlling a plurality of local displays, e.g. CRT and flat panel display

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the field of multimedia technology and discloses a multi-picture fusion display method based on 3D technology, together with a storage device. The method comprises the following steps: presetting a 3D display scene, wherein the 3D display scene comprises two or more display windows; acquiring two or more data sources to be displayed from an application program of the same terminal, rendering the different data sources to different virtual screens, and displaying the rendered virtual-screen results in different display windows of the 3D display scene; or displaying the different data sources in different display windows of the 3D display scene through different coordinate systems. By configuring different 3D display scenes, each display window can achieve its own display effect, so that the display mode and effect of a data source are no longer uniform. The data sources to be displayed are fused into the 3D scene, making their presentation more lifelike and immersive, more convenient to browse, and more diverse and vivid.

Description

Multi-picture fusion display method based on 3D technology and storage device
Technical Field
The invention relates to the field of multimedia technology, and in particular to a multi-picture fusion display method and a storage device based on 3D technology.
Background
With the development of screen technology, users expect ever better presentation of on-screen content. In the multimedia field, information is mainly presented through display devices such as monitors and projectors. To present multiple pieces of information at once, a display screen is typically divided into several sub-windows, each showing a different piece of information. Although the prior art can display multiple pieces of information simultaneously, the display mode and effect of each piece are very uniform, so users easily suffer visual fatigue and find browsing inconvenient.
For example, in video playing, the most common prior-art approach is to divide the display screen directly into several rectangular sub-windows and play one video in each. Richer and more vivid display effects cannot be achieved this way, and the user experience is poor.
Disclosure of Invention
Therefore, a multi-picture fusion display method based on 3D technology is needed to solve the prior-art problems that, when multiple pieces of information are displayed, the display mode and effect of each piece are uniform and browsing is inconvenient.
In order to achieve the above object, the inventor provides a multi-picture fusion display method based on 3D technology, and the specific technical solution is as follows:
A multi-picture fusion display method based on 3D technology comprises the following steps: presetting a 3D display scene, wherein the 3D display scene comprises two or more display windows, and the display windows occupy spatial positions in the 3D display scene that are different or not completely the same; acquiring two or more data sources to be displayed from an application program of the same terminal; rendering the different data sources to different virtual screens and displaying the rendered virtual-screen results in different display windows of the 3D display scene; or displaying the different data sources in different display windows of the 3D display scene through different coordinate systems.
Further, the data source comprises pictures and/or sounds output by the application program in real time.
Further, the two or more data sources to be displayed are provided by the same application program or provided by different application programs of the same terminal respectively.
Further, when the two or more data sources to be displayed are provided by different application programs, the step of "rendering the different data sources to different virtual screens and displaying the rendered virtual-screen results in different display windows of the 3D display scene" comprises: presetting two or more spaces in the video memory, where different spaces correspond to different data sources and virtual screens; rendering the different data sources to their corresponding virtual screens and storing each virtual screen's display result in its corresponding space of the video memory; and copying the rendered virtual-screen display results from the different spaces of the video memory and pasting them into different display windows of the 3D display scene.
Further, when the two or more data sources to be displayed are provided by the same application program, the step of "displaying different data sources in different display windows of the 3D display scene through different coordinate systems" comprises: presetting two or more coordinate systems in the 3D display scene, where different coordinate systems correspond to different data sources and display windows; and displaying the different data sources in different display windows of the 3D display scene through the different coordinate systems.
Further, the method also comprises the steps of receiving screen recording data sent by other terminals through an image transmission interface or a network, decoding the screen recording data, and displaying pictures and/or sounds obtained through decoding in a display window of the 3D display scene.
In order to achieve the above object, the inventor further provides a storage device, which adopts the following specific technical scheme:
A storage device stores a set of instructions for performing: presetting a 3D display scene, wherein the 3D display scene comprises two or more display windows, and the display windows occupy spatial positions in the 3D display scene that are different or not completely the same; acquiring two or more data sources to be displayed from an application program of the same terminal, rendering the different data sources to different virtual screens, and displaying the rendered virtual-screen results in different display windows of the 3D display scene; or displaying the different data sources in different display windows of the 3D display scene through different coordinate systems.
Further, the data source comprises pictures and/or sounds output by the application program in real time.
Further, the two or more data sources to be displayed are provided by the same application program or provided by different application programs of the same terminal respectively.
Further, when the two or more data sources to be displayed are provided by different application programs, the step of "rendering the different data sources to different virtual screens and displaying the rendered virtual-screen results in different display windows of the 3D display scene" comprises: presetting two or more spaces in the video memory, where different spaces correspond to different data sources and virtual screens; rendering the different data sources to their corresponding virtual screens and storing each virtual screen's display result in its corresponding space of the video memory; and copying the rendered virtual-screen display results from the different spaces of the video memory and pasting them into different display windows of the 3D display scene.
Further, when the two or more data sources to be displayed are provided by the same application program, the step of "displaying different data sources in different display windows of the 3D display scene through different coordinate systems" comprises: presetting two or more coordinate systems in the 3D display scene, where different coordinate systems correspond to different data sources and display windows; and displaying the different data sources in different display windows of the 3D display scene through the different coordinate systems.
Further, the set of instructions is further for performing: receiving screen recording data sent by other terminals through an image transmission interface or a network, decoding the screen recording data, and displaying pictures and/or sounds obtained by decoding in a display window of a 3D display scene.
The invention has the following beneficial effects. A 3D display scene containing two or more display windows is preset; two or more data sources to be displayed are acquired; the different data sources are rendered to different virtual screens and the rendered virtual-screen results are displayed in different display windows of the 3D display scene, or the different data sources are displayed in different display windows through different coordinate systems. By configuring different 3D display scenes, each display window can achieve its own personalized display effect, so the display mode and effect of a data source are no longer uniform. Instead, the data sources to be displayed are fused into the 3D scene, making their presentation more lifelike and immersive, more convenient to browse, and more diverse. Moreover, a data source is first rendered into its own coordinate system and then displayed in its display window, and rendering into different coordinate systems further ensures realistic display of the data sources.
Drawings
Fig. 1 is a flowchart of a multi-picture fusion display method based on 3D technology according to an embodiment;
fig. 2 is a schematic diagram of a 3D display scene in the form of a book according to an embodiment;
fig. 3 is a schematic diagram of a 3D display scene containing two televisions according to an embodiment;
fig. 4 is a schematic diagram of a 3D display scene in the form of a KTV room according to an embodiment;
fig. 4a is a schematic diagram of the display effect of a 3D display scene in the form of a stage according to an embodiment;
fig. 4b is a schematic diagram of a 3D display scene in the form of a stage according to an embodiment;
FIG. 5 is a block diagram of a storage device according to an embodiment.
Description of reference numerals:
500: storage device.
Detailed Description
To explain in detail the technical content, structural features, objects and effects of the technical solutions, a detailed description is given below with reference to the accompanying drawings and embodiments.
Referring to fig. 1, this embodiment provides a multi-picture fusion display method based on 3D technology that can display two or more pictures simultaneously in a 3D scene (or 3D space). The method can be applied to any device that supports displaying data sources in multiple windows on one screen, such as an ordinary PC, a mobile terminal, a wearable device, or a vehicle-mounted intelligent terminal with this capability.
The specific technical solution of the multi-picture fusion display method based on 3D technology is as follows:
step S101: the method includes the steps of presetting a 3D display scene, wherein the 3D display scene comprises more than two display windows, each display window is located at different or incompletely same spatial positions in the 3D display scene, namely, the display windows can be completely separated in the 3D scene or can be partially overlapped. Such as: A3D display scene is preset as KTV, the virtual KTV comprises a song requesting screen and a plurality of televisions, the song requesting screen is a display window, and one television is a display window.
Referring to fig. 2, the spatial positions of display windows in the 3D display scene may be not completely the same, like the front and back of a page in an open book, with partial overlap; similarly, the spatial positions may be completely different, like the left and right pages of an open book, with no overlap. Indeed, as shown in fig. 2, a 3D display scene of a stereoscopic book can be built directly, with each page corresponding to one display window; the user then views the contents of the display windows as if reading a book, which makes the presentation vivid and rich.
Referring to fig. 3, here the 3D display scene contains two televisions. Display windows may take different shapes, such as squares or hearts, and each can be adjusted individually, making the display more vivid. After the 3D display scene is preset, step S102 is executed: acquire two or more data sources to be displayed from an application program of the same terminal. This may proceed as follows. A data source comprises pictures and/or sounds output by an application program in real time; a picture may be static or dynamic, i.e. a playable sequence of consecutive frames. The two or more data sources to be displayed may be provided by the same application program or by different application programs of the same terminal. These two modes are described in steps S103 and S104 below.
After the data sources to be displayed are acquired, step S103 is executed: render the different data sources to different virtual screens and display the rendered virtual-screen results in different display windows of the 3D display scene; or display the different data sources in different display windows of the 3D display scene through different coordinate systems. This may proceed as follows.
If the two or more data sources to be displayed are provided by the same application program, the following approach can be used: preset two or more coordinate systems in the 3D display scene, with different coordinate systems corresponding to different data sources and display windows, and display the different data sources in their corresponding display windows through the different coordinate systems. A coordinate system here means the positions of a set of 3D objects together with a rendering reference. In this embodiment the coordinate system is preferably an observation (camera) coordinate system, whose origin lies at the camera. Each camera corresponds to one observation coordinate system: the camera sits at the origin, the X-axis points right, the Z-axis points forward (into the screen, along the camera's view), and the Y-axis points up (relative to the camera itself, not the world). For example, different data sources such as sound and pictures may be provided by the same browser; each data source is rendered into its corresponding coordinate system, i.e. placed in the corresponding video memory, which can display it directly in the corresponding display window, each display window having a different coordinate system. The two television sets shown in fig. 3 are two display windows; each has an independent 3D scene, and each 3D scene has an independent camera (one camera corresponds to one 3D coordinate system). Each camera independently renders the 3D objects of its own coordinate system and presents the final result on a designated device (a display or a virtual screen). Displaying different data sources in different display windows through different coordinate systems is what updates each window's picture.
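The per-window camera idea above can be illustrated with a tiny view-space transform. This is a deliberately simplified sketch (cameras aligned with the world axes, translation only, left-handed convention with Z into the screen); the function and variable names are assumptions for illustration.

```python
# Each display window owns its own viewing coordinate system: origin at
# the camera, X right, Y up, Z forward into the screen. A world-space
# point is brought into each window's view space independently.

def to_view(point, camera_pos):
    """Translate a world-space point into a camera's viewing coordinates
    (camera axes assumed aligned with the world axes for simplicity)."""
    px, py, pz = point
    cx, cy, cz = camera_pos
    return (px - cx, py - cy, pz - cz)

# Two windows, two independent cameras: the same world point lands at
# different view-space positions, so each window can render it its own way.
p = (5.0, 2.0, 10.0)
cam_a = (0.0, 0.0, 0.0)
cam_b = (4.0, 2.0, 0.0)
view_a = to_view(p, cam_a)   # (5.0, 2.0, 10.0)
view_b = to_view(p, cam_b)   # (1.0, 0.0, 10.0)
```

A full implementation would use a rotation as well (a view matrix per camera), but the point stands: because each window's coordinate system is independent, the same content can appear differently in each window.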
In the prior art, although data of the same application program can be displayed in different windows, those windows share one coordinate system, so the display of the data cannot be adjusted per window and is very uniform. In this embodiment, each display window has its own coordinate system, and each coordinate system is an independent module, so the content of a display area can conveniently be switched to the result of a different coordinate system; for example, a display area can be treated as a brand-new screen, multiple coordinate systems can be applied independently, and the program architecture stays flexible. Because each display area's content is updated and rendered in real time by its own independent coordinate system, display areas can interact with each other without affecting one another's display. Data from the same application program can thus be displayed in different windows in a personalized way by adjusting the coordinate systems, 3D animation becomes possible, the data sources are displayed more vividly, and the displayed picture is greatly enriched.
If the two or more data sources to be displayed are provided by different application programs of the same terminal, the following approach can be used: preset two or more spaces in the video memory (the spaces are defined separately in the system so that they can be used by different application programs), with different spaces corresponding to different data sources and virtual screens; render the different data sources to their corresponding virtual screens and store each virtual screen's display result in its corresponding space of the video memory; then copy the rendered virtual-screen display results from the different spaces of the video memory and paste them into different display windows of the 3D display scene, thereby updating the window pictures in the 3D scene. A virtual screen corresponds to a space in the video memory; a data source on a virtual screen cannot be shown directly in a display window of the 3D display scene and must be copied and pasted into it. This differs from the same-application case, where different data sources can be displayed directly in different display windows through different coordinate systems.
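The copy-and-paste flow above can be sketched as a toy compositor: each application renders into its own virtual screen (a reserved "space", modelled here as a list of rows), and the results are blitted into windows of the scene's framebuffer. All names and the buffer representation are illustrative assumptions, not the patent's implementation.

```python
def make_buffer(w, h, fill="."):
    """A framebuffer stand-in: h rows of w cells."""
    return [[fill] * w for _ in range(h)]

def render_app(app_id, w, h):
    """Stand-in for an application drawing into its virtual screen."""
    return [[app_id] * w for _ in range(h)]

def blit(dst, src, dst_x, dst_y):
    """Copy-paste a virtual-screen result into a window of the scene."""
    for row, line in enumerate(src):
        dst[dst_y + row][dst_x:dst_x + len(line)] = line

framebuffer = make_buffer(12, 4)
spaces = {                       # one video-memory space per data source
    "B": render_app("B", 4, 2),  # e.g. browser output
    "V": render_app("V", 4, 2),  # e.g. video-player output
}
blit(framebuffer, spaces["B"], 0, 0)   # window 1 of the 3D scene
blit(framebuffer, spaces["V"], 6, 2)   # window 2 of the 3D scene
```

The key property mirrored here is isolation: each application only ever writes to its own space, and only the compositor touches the shared framebuffer.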
For example, the different data sources may be provided by the browser, music software and video software of the same mobile terminal respectively; each is rendered to its corresponding virtual screen, the virtual screens' display results are stored in their corresponding spaces of the video memory, and the rendered results are then copied from those spaces and pasted into different display windows of the 3D display scene. In this way one screen serves many purposes: different applications can be displayed on the same screen, effectively extending it, e.g. for browsing web pages, listening to songs, watching videos and playing games. This not only produces a richer display effect but also saves display cost. Moreover, since the applications can be supplied entirely by third parties, this implementation yields an open platform into which various third-party applications are easily integrated. 3D animation can also be realized in each display window, enriching the displayed picture.
One way to render different data sources to their corresponding virtual screens is to render the different data sources to different textures. Rendering to textures further ensures realistic display of the data sources. Texture is described as follows:
In computer graphics, texture covers both the usual sense of an object's surface texture, i.e. a surface with uneven grooves, and color patterns (more generally called motifs) on an otherwise smooth surface. For a pattern, a colored design is drawn onto the object's surface, which remains smooth. For grooves, a color pattern is likewise drawn, but it must also give a visual sense of unevenness. Real-life examples of texture include floors and walls. In graphics, textures mainly serve to enhance the realism of a scene: to draw a ground plane, the simplest method is to draw a rectangle; slightly more elaborately, a mesh of triangles can be combined; more elaborately still, a ground texture can be used for rendering. Texture mapping clearly enhances the realism of the ground. In common computer-graphics software such as DX (DirectX), texture mapping is essentially a simulation of real-life texture, and D3D (Direct3D) has a dedicated data structure for managing textures.
In a conventional rendering operation, the scene is presented directly to the back buffer (the area the Direct3D device actually draws into, whose contents are finally shown in the window): the back buffer is a piece of memory, the scene is loaded into video memory through draw calls, and the Present function then sends it to the display. In this embodiment, rendering targets a texture rather than the back buffer, in two steps: a. create a texture and obtain its surface; b. render the different data sources into different textures. Rendering to textures also enables special effects such as the common environment map, e.g. a smooth sphere that reflects its surroundings; that reflection is the environment map.
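The two render-to-texture steps above can be sketched with plain Python objects. This is only a conceptual simulation; the class and method names echo the Direct3D idiom (texture, surface, render target) but are not the Direct3D API.

```python
class Texture:
    """Toy texture: a grid of RGB pixels with an accessible surface."""
    def __init__(self, w, h):
        self.w, self.h = w, h
        self.surface = [[(0, 0, 0)] * w for _ in range(h)]

    def get_surface(self):
        """Step a: obtain the texture's surface (the render target)."""
        return self.surface

def render_scene_to(surface, color):
    """Stand-in for draw calls that fill the render target instead of
    the back buffer."""
    for row in surface:
        for x in range(len(row)):
            row[x] = color

# a. create a texture and obtain its surface
tex = Texture(2, 2)
# b. render the data source into the texture, not the back buffer
render_scene_to(tex.get_surface(), (255, 0, 0))
```

Once filled, such a texture can be mapped onto any object in the 3D scene, which is exactly what makes effects like environment mapping possible.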
In other embodiments, the method further comprises receiving screen-recording data sent by other terminals through an image-transmission interface or a network, decoding the data, and displaying the decoded pictures and/or sounds in a display window of the 3D display scene. This may proceed as follows: each terminal processes and encodes its own screen-recording data and sends it, through an image-transmission interface or a network, to one designated terminal; that terminal receives and decodes the data and displays the decoded pictures and/or sounds in a display window of its 3D display scene. For example, suppose a 3D display scene is preset on host A, and screen-recording data from host B is processed, encoded and sent to host A for display in that scene. Two cases arise:
1. Host B outputs its screen-recording data directly to host A through the image-transmission interface; host A decodes the data and displays the decoded pictures and/or sounds in a display window of the 3D display scene;
2. Host B encodes its screen-recording data in real time and sends it to host A over the network; host A then decodes the data and displays the decoded pictures and/or sounds in a display window of the 3D display scene.
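The encode-send-decode round trip of case 2 can be sketched with the standard-library zlib module standing in for a real video codec, and a byte string standing in for the network; both substitutions, and the function names, are illustrative assumptions.

```python
import zlib

def encode_frame(frame: bytes) -> bytes:
    """Host B side: compress one raw screen-recording frame."""
    return zlib.compress(frame)

def decode_frame(payload: bytes) -> bytes:
    """Host A side: recover the frame for display in a scene window."""
    return zlib.decompress(payload)

frame = b"\x10\x20\x30" * 640 * 480   # one raw frame's worth of bytes
payload = encode_frame(frame)         # encoded on host B, sent over the net
shown = decode_frame(payload)         # decoded on host A for display
```

A real pipeline would use a lossy video codec and a streaming transport, but the structure is the same: the sender encodes continuously, and the receiver decodes each payload before handing the frame to its display window.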
Screen-recording data from several applications of one terminal can thus be transmitted to another terminal and displayed on its screen simultaneously, which eases the integration of third-party applications; the screen is effectively extended, several apps can be shown at once, the display effect is enriched, and display cost is saved. Different users can also project the screen recordings of their own terminals onto the same screen, enabling same-screen interaction between users, such as competitive or cooperative games. 3D animation can be realized in each display window as well, enriching the displayed picture.
In this embodiment, the screen-recording data of several hosts can be transmitted to host A's 3D display scene and displayed. This extends the screen so that different users can project the screen recordings of their respective terminals onto the same display, makes simultaneous display of several apps possible, and enables same-screen interaction between users, such as competitive or cooperative games. Apps from different terminals are displayed, and a large screen can show more apps than any single terminal's performance would allow, enriching the display effect and saving display cost. 3D animation can be realized in each display window, enriching the displayed picture. Any one or more of the above cases may be used at the same time.
In this way, information from multiple input sources can be displayed on the same screen, and because the input sources take the form of multiple application programs or multiple hosts, they can be produced by different developers, which facilitates resource integration.
Referring to fig. 4, the following can finally be achieved: in a virtual KTV scene, a user selects songs on the song-request screen and sings while watching virtual television A; virtual television B runs a game that the user plays through the host's interaction means; and through virtual television C the user can watch a movie played by a movie program on another handset, and so on. The data sources in the display windows are displayed independently without affecting one another, and each window can present its data source in a personalized way within the 3D scene, so the display is more vivid and stereoscopic, the content is greatly enriched, and viewing becomes more interesting. As shown in figs. 4a and 4b, with a 3D stage scene as background, three display windows are arranged on the stage: one shows a football match, one a song video, and one the song-request screen. Each window presents its own data in a personalized way, and the overall effect is three-dimensional, vivid and engaging.
By presetting a 3D display scene containing two or more display windows, acquiring two or more data sources to be displayed, rendering the different data sources to different virtual screens and then displaying the rendered virtual-screen results in different display windows of the 3D display scene, or displaying the different data sources in different display windows through different coordinate systems, different personalized display effects can be achieved for each display window simply by configuring different 3D display scenes. The display mode and effect of a data source are then no longer uniform: the data sources to be displayed are fused into the 3D scene, making their presentation more lifelike and immersive, more convenient to browse, and more diverse. Moreover, a data source is first rendered into its own coordinate system and then displayed in its display window, and rendering into different coordinate systems further ensures realistic display of the data sources.
In this embodiment, before the different data sources are rendered to different virtual screens or displayed in different display windows through different coordinate systems, the size and format of each data source are adapted so that it can ultimately fit its display window. In other embodiments, the different data sources may first be rendered to different virtual screens and the sizes of the rendered virtual-screen results adapted afterwards, so that those results fit their display windows.
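The size-adaptation step above amounts to scaling a source uniformly so it fits its display window while keeping its aspect ratio (a letterbox fit). A minimal sketch, with an illustrative function name:

```python
def fit_to_window(src_w, src_h, win_w, win_h):
    """Return (scaled_w, scaled_h): the source scaled uniformly to the
    largest size that fits inside the window, aspect ratio preserved."""
    scale = min(win_w / src_w, win_h / src_h)
    return round(src_w * scale), round(src_h * scale)

# A 1920x1080 source shown in a 640x480 window keeps its 16:9 shape:
size = fit_to_window(1920, 1080, 640, 480)   # (640, 360)
```

The same function covers both orderings described in the text: it can be applied to the raw data source before rendering, or to the rendered virtual-screen result before it is pasted into the window.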
Referring to fig. 5, the specific technical solution of a storage device 500 according to this embodiment is as follows:
A storage device 500 stores a set of instructions for performing: presetting a 3D display scene, wherein the 3D display scene comprises two or more display windows, and the display windows occupy spatial positions in the 3D display scene that are different or not completely the same; acquiring two or more data sources to be displayed from an application program of the same terminal, rendering the different data sources to different virtual screens, and displaying the rendered virtual-screen results in different display windows of the 3D display scene; or displaying the different data sources in different display windows of the 3D display scene through different coordinate systems.
The data source comprises pictures and/or sounds output by the application program in real time.
The more than two data sources to be displayed are provided by the same application program or different application programs of the same terminal respectively.
The more than two data sources to be displayed are provided by different application programs, and the step of respectively rendering the different data sources into different virtual screens and respectively displaying the virtual screen display results obtained after rendering in different display windows of the 3D display scene comprises the following steps: more than two spaces are preset in the video memory, and different spaces correspond to different data sources and virtual screens; rendering different data sources to corresponding virtual screens, and respectively storing display results of the virtual screens in corresponding spaces of the video memory; and copying the rendered virtual screen display result from different spaces of the video memory respectively, and pasting the virtual screen display result to different display windows in the 3D display scene.
The more than two data sources to be displayed are provided by the same application program, and the step of displaying different data sources in different display windows of the 3D display scene through different coordinate systems comprises the following steps: more than two coordinate systems are preset in a 3D display scene, and different coordinate systems correspond to different data sources and display windows; and rendering different data sources to corresponding coordinate systems, and displaying the virtual screen display result obtained after rendering on a corresponding display window in the 3D display scene.
The set of instructions is further for performing: receiving screen recording data sent by other terminals through an image transmission interface or a network, decoding the screen recording data, and displaying pictures and/or sounds obtained by decoding in a display window of a 3D display scene.
The preset 3D display scene may be as follows: the 3D display scene includes two or more display windows, and each display window occupies a spatial position in the scene that is different from, or not completely identical to, the others; that is, the display windows may be completely separated in the 3D scene or may partially overlap. For example, a 3D display scene is preset as a KTV room: the virtual KTV contains a song-requesting screen and several televisions, where the song-requesting screen is one display window and each television is another. Referring to fig. 2, the spatial positions of the display windows may be not completely identical, like the front and back of a single page of an open book, with partial overlap; likewise, the positions may be completely different, like the left and right pages of an open book, with no overlap. Naturally, as shown in fig. 2, a 3D scene of a stereoscopic book can be built directly, with each page corresponding to one display window, so that the user browses the content of the different windows as if reading a book, which makes the presentation vivid and rich. Referring to fig. 3, the 3D display scene contains two televisions; the display windows may take different shapes, such as square or heart-shaped, and each window can be adjusted individually, making the display more lifelike.
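The "completely separated or partially overlapping" placement above can be modeled by giving each window a position and size in the scene and testing for overlap. A minimal sketch, where the class, field names and sample windows are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Window3D:
    """A display window placed in the 3D scene (names are illustrative)."""
    name: str
    x: float
    y: float
    z: float   # position of the window centre in the scene
    w: float
    h: float   # window size in scene units

def overlap_xy(a, b):
    """True if two windows overlap when projected on the XY plane."""
    return (abs(a.x - b.x) < (a.w + b.w) / 2 and
            abs(a.y - b.y) < (a.h + b.h) / 2)

song_screen = Window3D("song screen", 0, 0, 0, 4, 3)
tv_a = Window3D("tv A", 6, 0, 1, 4, 3)   # completely separated
tv_b = Window3D("tv B", 2, 1, 2, 4, 3)   # partially overlapping

print(overlap_xy(song_screen, tv_a))  # False
print(overlap_xy(song_screen, tv_b))  # True
```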
After the 3D display scene is preset, two or more data sources to be displayed are obtained. For example, a data source may include the pictures and/or sounds output by an application program in real time, and the two or more data sources to be displayed may be provided by the same application program or by different application programs of the same terminal.
After the data sources to be displayed are acquired, the different data sources are rendered to different virtual screens and the rendered virtual-screen results are displayed in different display windows of the 3D display scene; or the different data sources are displayed in different display windows of the 3D display scene through different coordinate systems. The following approaches may be used:
If the two or more data sources to be displayed are provided by the same application program, the following approach can be adopted: two or more coordinate systems are preset in the 3D display scene, different coordinate systems correspond to different data sources and display windows, and each data source is displayed in its corresponding display window through its own coordinate system. A coordinate system here defines the positions of a set of 3D objects and the rendering reference. In this embodiment, the coordinate system is preferably the viewing coordinate system, also called the camera coordinate system, whose origin lies at the camera. Each camera corresponds to one viewing coordinate system: the camera sits at the origin, the X-axis points right, the Z-axis points forward (into the screen, in the direction the camera faces), and the Y-axis points up (up relative to the camera itself, not the world). For example, different data sources, such as sound and picture, are provided by the same browser and rendered into their corresponding coordinate systems, i.e. placed into the corresponding video memory, from which they can be displayed directly in the corresponding display windows, each window having a different coordinate system. The two televisions shown in fig. 3 are two display windows, each with an independent 3D scene and an independent camera (one camera corresponds to one 3D coordinate system). Each camera independently renders the 3D objects of its own coordinate system and displays the final result on a designated device (display or virtual screen); in this way the different data sources are displayed in different display windows of the 3D display scene through different coordinate systems, and each window's picture is updated.
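The viewing coordinate system described above (camera at the origin, X right, Y up, Z forward into the scene) is what graphics pipelines build with a look-at view matrix. The patent gives no formula, so the construction below is an illustrative sketch under that axis convention:

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]
def normalize(v):
    n = math.sqrt(dot(v, v))
    return [c / n for c in v]

def look_at(eye, target, up):
    """World -> viewing coordinates: the camera ends up at the origin,
    X right, Y up, Z forward into the scene (as described above)."""
    f = normalize(sub(target, eye))   # +Z: where the camera looks
    r = normalize(cross(up, f))       # +X: camera right
    u = cross(f, r)                   # +Y: camera up
    # Rows are the camera axes; the last column moves the eye to the origin.
    return [[r[0], r[1], r[2], -dot(r, eye)],
            [u[0], u[1], u[2], -dot(u, eye)],
            [f[0], f[1], f[2], -dot(f, eye)],
            [0.0, 0.0, 0.0, 1.0]]

def apply(M, p):
    ph = [p[0], p[1], p[2], 1.0]
    return [sum(M[i][j] * ph[j] for j in range(4)) for i in range(3)]

# A camera 5 units behind the origin sees the origin 5 units ahead (+Z):
M = look_at(eye=[0, 0, -5], target=[0, 0, 0], up=[0, 1, 0])
print(apply(M, [0, 0, 0]))  # [0.0, 0.0, 5.0]
```

Because each display window has its own such matrix, moving one window's camera changes only that window's picture, which is what makes the per-window displays independent.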
In the prior art, although data of the same application program can be displayed in different windows, those windows share a single coordinate system, so the display of the data cannot be adjusted per window and is very monotonous. In this embodiment the display windows have different coordinate systems, each of which is an independent module, so the content of a display area can conveniently be switched to the rendering result of a different coordinate system; for example, a display area can be treated as a brand-new screen, multiple coordinate systems can be applied independently, and the program architecture stays flexible. Because the content of each display area is updated and rendered in real time by its own independent coordinate system, the display areas can interact with one another without affecting each other's display. Data from the same application program can thus be displayed in a personalized way in different windows by adjusting the coordinate systems, 3D animation can be realized, the data sources are displayed more vividly, and the display picture is greatly enriched.
If the two or more data sources to be displayed are provided by different application programs of the same terminal, the following approach can be adopted: two or more spaces are preset in the video memory (the spaces are defined separately in the system so that they can be used by different application programs), and different spaces correspond to different data sources and virtual screens. The different data sources are rendered to their corresponding virtual screens, and the display results of the virtual screens are stored in the corresponding spaces of the video memory; the rendered virtual-screen results are then copied from the different spaces of the video memory and pasted to the different display windows of the 3D display scene, thereby updating the window pictures in the 3D scene. A virtual screen corresponds to a space in the video memory; a data source on a virtual screen cannot be displayed directly in a display window of the 3D display scene but must be copied and pasted there. This differs from the same-application case, in which different data sources can be displayed directly in different display windows of the 3D display scene through different coordinate systems.
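The copy-and-paste path above can be sketched as a small dispatcher that keeps one buffer per application and copies it into the scene window on each update. All class and method names here are illustrative assumptions, not API from the patent:

```python
class VirtualScreen:
    """One per application: a reserved 'space' holding the rendered frame."""
    def __init__(self):
        self.buffer = None          # last frame rendered into this space

    def render(self, frame):
        self.buffer = frame         # the application renders into its space

class Scene3D:
    """Maps each data source to its own virtual screen and display window."""
    def __init__(self, window_names):
        self.screens = {name: VirtualScreen() for name in window_names}
        self.windows = {name: None for name in window_names}

    def update(self, source, frame):
        self.screens[source].render(frame)                  # render to virtual screen
        self.windows[source] = self.screens[source].buffer  # copy & paste to window

scene = Scene3D(["browser", "music", "video"])
scene.update("browser", "web page frame")
scene.update("music", "lyrics frame")
print(scene.windows["browser"])  # web page frame
print(scene.windows["video"])    # None (no frame rendered yet)
```

The point of the indirection is isolation: each application writes only into its own space, and the scene decides when and where the result appears.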
For example: different data sources are provided by the browser, music software and video software of the same mobile terminal; the different data sources are rendered to their corresponding virtual screens, and the display results of the virtual screens are stored in the corresponding spaces of the video memory; the rendered results are then copied from the different spaces of the video memory and pasted to the different display windows of the 3D display scene. In this way one screen serves multiple purposes, and different applications can be displayed on the same screen, effectively extending the screen for browsing web pages, listening to songs, watching videos, playing games and so on. This not only yields a richer display effect but also saves display cost. Moreover, since the different applications are supplied entirely by third parties, this implementation provides an open platform that makes it easy to integrate various third-party applications; 3D animation can also be realized in each display window, enriching the display picture. One way of rendering different data sources to corresponding virtual screens is to render the different data sources to different textures; rendering to textures further ensures the realism of the data source display.
In other embodiments, the method further includes receiving screen recording data sent by other terminals through an image transmission interface or a network, decoding the screen recording data, and displaying the decoded pictures and/or sounds in a display window of the 3D display scene. The following may be used: different terminals process and encode their own screen recording data and send it, through an image transmission interface or a network, to one of the terminals; that terminal receives and decodes the data and displays the decoded pictures and/or sounds in a display window of the 3D display scene. For example, a 3D display scene is preset on host A, and the screen recording data of host B is processed, encoded and sent to host A for display in its 3D display scene. There are two cases:
1. The screen recording data of host B is output directly to host A through an image transmission interface; host A decodes it and displays the decoded pictures and/or sounds in a display window of the 3D display scene.
2. Host B encodes its screen recording data in real time and sends it to host A over the network; host A decodes it and displays the decoded pictures and/or sounds in a display window of the 3D display scene.
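The network path in case 2 is an encode, transmit, decode loop. A minimal sketch that stands in for real video codecs with zlib compression purely for illustration (the class and method names are assumptions, not from the patent):

```python
import zlib

class HostB:
    """Sender: captures and encodes its own screen in real time."""
    def capture_frame(self):
        return b"screen-recording frame of host B"

    def encode(self, frame):
        return zlib.compress(frame)      # stand-in for a real video encoder

class HostA:
    """Receiver: decodes the frame and shows it in a 3D scene window."""
    def __init__(self):
        self.window_content = None

    def receive(self, packet):
        frame = zlib.decompress(packet)  # stand-in for a real video decoder
        self.window_content = frame      # paste into a 3D display window

b, a = HostB(), HostA()
a.receive(b.encode(b.capture_frame()))   # "send over the network"
print(a.window_content)  # b'screen-recording frame of host B'
```

With several senders, host A simply keeps one receiving window per remote terminal, which is how multiple users project onto the same screen.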
Screen recording data from multiple applications on other terminals can thus be transmitted to one terminal and displayed on its screen simultaneously, which facilitates the integration of third-party applications, extends the screen, makes it possible to display several apps at once, enriches the display effect and saves display cost. Different users can also project the screen recording data of their own terminals onto the same screen, enabling same-screen interaction between users, for example competitive or cooperative games. 3D animation can likewise be realized in each display window, enriching the display picture.
In this embodiment, the screen recording data of several hosts can be transmitted to the 3D display scene of host A and displayed there. This extends the screen so that different users can project the screen recording data of their respective terminals onto the same screen, lets several apps be displayed at once, and enables same-screen interaction between users, for example competitive or cooperative games. Apps from different terminals are thus displayed together, and a large screen can show more apps than any single terminal's performance would allow, enriching the display effect and saving display cost. 3D animation can be realized in each display window, enriching the display picture. Any one or more of the above cases may be used at the same time.
In this way, information from multiple input sources can be displayed on the same screen; and because the input sources are opened to different developers in a multi-application or multi-host form, resource integration is facilitated.
Referring to fig. 4, the final effect can be as follows: in the virtual KTV scene, a user selects songs on the song-requesting screen and sings while watching virtual television A; virtual television B runs a game that the user plays through the host's interactive means; and virtual television C plays movies viewed through movie programs on other handsets, and so on. The data sources in the display windows do not affect one another, and the 3D scene makes their display more vivid and stereoscopic. As shown in fig. 4a and 4b, with the 3D scene of a stage as background, three display windows are arranged on the stage: one shows a football match, one a song video and one the song-requesting screen. Each window presents its own data in a personalized way, and the overall effect is three-dimensional, vivid and engaging.
In summary, the method presets a 3D display scene containing two or more display windows, obtains two or more data sources to be displayed, and either renders the different data sources to different virtual screens and shows the rendered virtual-screen results in different display windows of the 3D display scene, or displays the different data sources in different display windows through different coordinate systems. By configuring different 3D display scenes, each display window can have its own personalized display effect, so the presentation of the data sources is no longer uniform: the data sources to be displayed are fused into the 3D scene, making the display more vivid and immersive, easier to browse and more varied. Moreover, because each data source is first rendered in its own coordinate system and only then shown in its display window, rendering into separate coordinate systems further ensures the realism of the display.
In this embodiment, before the different data sources are rendered to different virtual screens or displayed in different display windows of the 3D display scene through different coordinate systems, the size and format of each data source are adapted so that it can ultimately be displayed correctly in its display window. In other embodiments, the data sources may first be rendered to different virtual screens, after which the sizes of the rendered virtual-screen results are adapted so that they fit the display windows.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by "comprises a ..." does not exclude the presence of additional identical elements in the process, method, article or terminal that comprises it. Further, herein, "greater than", "less than", "more than" and the like are understood to exclude the stated number, while "above", "below", "within" and the like are understood to include it.
As will be appreciated by one skilled in the art, the above-described embodiments may be provided as a method, apparatus, or computer program product. These embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. All or part of the steps in the methods according to the embodiments may be implemented by a program instructing associated hardware, where the program may be stored in a storage medium readable by a computer device and used to execute all or part of the steps in the methods according to the embodiments. The computer devices, including but not limited to: personal computers, servers, general-purpose computers, special-purpose computers, network devices, embedded devices, programmable devices, intelligent mobile terminals, intelligent home devices, wearable intelligent devices, vehicle-mounted intelligent devices, and the like; the storage medium includes but is not limited to: RAM, ROM, magnetic disk, magnetic tape, optical disk, flash memory, U disk, removable hard disk, memory card, memory stick, network server storage, network cloud storage, etc.
The various embodiments described above are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a computer apparatus to produce a machine, such that the instructions, which execute via the processor of the computer apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer apparatus to cause a series of operational steps to be performed on the computer apparatus to produce a computer implemented process such that the instructions which execute on the computer apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the embodiments have been described, those skilled in the art can, once the basic inventive concept is known, make other variations and modifications to them. The above embodiments are therefore only examples and do not limit the scope of the present invention: all equivalent structures or equivalent processes made using the contents of this specification and drawings, whether applied directly or indirectly in this or other related technical fields, fall within the scope of the present invention.

Claims (10)

1. A multi-picture fusion display method based on 3D technology is characterized by comprising the following steps:
presetting a 3D display scene, wherein the 3D display scene comprises more than two display windows, each display window is positioned at different or incompletely same spatial positions in the 3D display scene, and the coordinate systems of the display windows are different;
acquiring more than two data sources to be displayed from an application program of the same terminal;
respectively rendering different data sources into different virtual screens by rendering to different textures, wherein the virtual screen corresponds to a space in a video memory, and the data of the virtual screen is not directly displayed on a display window of a 3D display scene;
respectively displaying the rendered virtual screen display results in different display windows of the 3D display scene, specifically copying the rendered virtual screen display results and pasting the virtual screen display results to different display windows of the 3D display scene, so that the data source to be displayed is fused into the 3D scene, wherein the data source is rendered to different coordinate systems and then displayed in different display windows.
2. The multi-screen fusion display method according to claim 1, wherein the data source comprises pictures and/or sounds output by an application program in real time.
3. The multi-screen fusion display method according to claim 1, wherein the two or more data sources to be displayed are provided by the same application program or provided by different application programs of the same terminal respectively.
4. The multi-screen fusion display method according to claim 3, wherein the two or more data sources to be displayed are provided by different application programs, and the step of rendering the different data sources to different virtual screens and displaying the virtual screen display results obtained after rendering in different display windows of the 3D display scene comprises the following steps:
more than two spaces are preset in the video memory, and different spaces correspond to different data sources and virtual screens;
rendering different data sources to corresponding virtual screens, and respectively storing display results of the virtual screens in corresponding spaces of the video memory;
and copying the rendered virtual screen display result from different spaces of the video memory respectively, and pasting the virtual screen display result to different display windows in the 3D display scene.
5. The method according to claim 1, further comprising receiving screen recording data sent by other terminals through an image transmission interface or a network, decoding the screen recording data, and displaying the decoded pictures and/or sounds in a display window of a 3D display scene.
6. A storage device having a set of instructions stored therein, the set of instructions being operable to perform:
presetting a 3D display scene, wherein the 3D display scene comprises more than two display windows, each display window is positioned at different or incompletely same spatial positions in the 3D display scene, and the coordinate systems of the display windows are different;
the method comprises the steps that more than two data sources to be displayed are obtained from an application program of the same terminal, different data sources are respectively rendered into different virtual screens through rendering to different textures, the virtual screens correspond to a space in a video memory, and data of the virtual screens are not directly displayed on a display window of a 3D display scene;
respectively displaying the rendered virtual screen display results in different display windows of the 3D display scene, specifically copying the rendered virtual screen display results and pasting the virtual screen display results to different display windows of the 3D display scene, so that the data source to be displayed is fused into the 3D scene, wherein the data source is rendered to different coordinate systems and then displayed in different display windows.
7. The storage device of claim 6, wherein the data source comprises a picture and/or sound output by an application in real time.
8. The storage device according to claim 6, wherein the two or more data sources to be displayed are provided by the same application program or provided by different application programs of the same terminal respectively.
9. The storage device according to claim 8, wherein the two or more data sources to be displayed are provided by different application programs, and the "rendering different data sources into different virtual screens respectively, and displaying the virtual screen display results obtained after rendering in different display windows of the 3D display scene" includes the following steps:
more than two spaces are preset in the video memory, and different spaces correspond to different data sources and virtual screens;
rendering different data sources to corresponding virtual screens, and respectively storing display results of the virtual screens in corresponding spaces of the video memory;
and copying the rendered virtual screen display result from different spaces of the video memory respectively, and pasting the virtual screen display result to different display windows of the 3D display scene.
10. The memory device of claim 6, wherein the set of instructions are further configured to perform:
receiving screen recording data sent by other terminals through an image transmission interface or a network, decoding the screen recording data, and displaying pictures and/or sounds obtained by decoding in a display window of a 3D display scene.
CN201710599954.2A 2017-07-21 2017-07-21 Multi-picture fusion display method based on 3D technology and storage device Active CN107358659B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710599954.2A CN107358659B (en) 2017-07-21 2017-07-21 Multi-picture fusion display method based on 3D technology and storage device

Publications (2)

Publication Number Publication Date
CN107358659A CN107358659A (en) 2017-11-17
CN107358659B true CN107358659B (en) 2021-06-22

Family

ID=60284382

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710599954.2A Active CN107358659B (en) 2017-07-21 2017-07-21 Multi-picture fusion display method based on 3D technology and storage device

Country Status (1)

Country Link
CN (1) CN107358659B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108108140B (en) * 2018-01-09 2021-03-09 福建星网视易信息***有限公司 Multi-screen cooperative display method, storage device and equipment supporting 3D display
CN108228128B (en) * 2018-01-10 2021-04-23 广东辰宜信息科技有限公司 Interface data fusion multi-screen control method, electronic equipment, storage medium and system
CN109144451B (en) * 2018-08-31 2022-08-30 福建星网视易信息***有限公司 Multi-application collaborative display method and computer-readable storage medium
CN113051010B (en) * 2019-12-28 2023-04-28 Oppo(重庆)智能科技有限公司 Application picture adjustment method and related device in wearable equipment
CN112735393B (en) * 2020-12-29 2023-11-24 深港产学研基地(北京大学香港科技大学深圳研修院) Method, device and system for speech recognition of AR/MR equipment
CN113432614B (en) * 2021-08-26 2022-01-04 新石器慧通(北京)科技有限公司 Vehicle navigation method, device, electronic equipment and computer readable storage medium
CN114510206A (en) * 2022-01-12 2022-05-17 珠海格力电器股份有限公司 Multi-screen different display method, device, equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101281590A (en) * 2008-01-03 2008-10-08 青岛海信电器股份有限公司 Operating unit as well as video system containing the same
CN101662537A (en) * 2009-08-18 2010-03-03 深圳市融创天下科技发展有限公司 Switching method of multi-picture video
CN103077036A (en) * 2013-01-29 2013-05-01 北京小米科技有限责任公司 Method and device for processing interface
CN103399723A (en) * 2013-08-27 2013-11-20 王艳 Large-screen display control system and method
CN104540027A (en) * 2014-12-19 2015-04-22 北京正文科技有限公司 Multimedia display interaction control system under multi-screen environment
CN105681772A (en) * 2014-12-04 2016-06-15 佳能株式会社 Display control apparatus, control method thereof and computer program
CN105892643A (en) * 2015-12-31 2016-08-24 乐视致新电子科技(天津)有限公司 Multi-interface unified display system and method based on virtual reality
CN106060475A (en) * 2016-06-29 2016-10-26 北京利亚德视频技术有限公司 System and method for video pre-monitoring and control through VR method
CN106201396A (en) * 2016-06-29 2016-12-07 乐视控股(北京)有限公司 A kind of method for exhibiting data and device, virtual reality device and playing controller

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106412562B (en) * 2015-07-31 2019-10-25 深圳超多维科技有限公司 The method and its system of stereo content are shown in three-dimensional scenic


Also Published As

Publication number Publication date
CN107358659A (en) 2017-11-17

Similar Documents

Publication Publication Date Title
CN107358659B (en) Multi-picture fusion display method based on 3D technology and storage device
US20170206708A1 (en) Generating a virtual reality environment for displaying content
CN105447898A (en) Method and device for displaying 2D application interface in virtual real device
JP4481166B2 (en) Method and system enabling real-time mixing of composite and video images by a user
CN107197341B (en) Dazzle screen display method and device based on GPU and storage equipment
WO2018000629A1 (en) Brightness adjustment method and apparatus
US9761056B1 (en) Transitioning from a virtual reality application to an application install
CN107995482B (en) Video file processing method and device
CN105138216A (en) Method and apparatus for displaying audience interaction information on virtual seats
WO2022247204A1 (en) Game display control method, non-volatile storage medium and electronic device
JP2019527899A (en) System and method for generating a 3D interactive environment using virtual depth
Song et al. On a non-web-based multimodal interactive documentary production
CN112153472A (en) Method and device for generating special picture effect, storage medium and electronic equipment
US11095956B2 (en) Method and system for delivering an interactive video
CN103325135B (en) Resource display method, device and terminal
CN113516761A (en) Optical illusion type naked eye 3D content manufacturing method and device
CN105187887A (en) Method and device for displaying lottery animation
KR20210056414A (en) System for controlling audio-enabled connected devices in mixed reality environments
JP2021508133A (en) Mapping pseudo-hologram providing device and method using individual video signal output
Gobira et al. Expansion of uses and applications of virtual reality
Wang Research on the Visual Language of VR Animation in the Multi-Screen Interactive Era
CN117596377B (en) Picture push method, device, electronic equipment, storage medium and program product
CN113676753B (en) Method and device for displaying video in VR scene, electronic equipment and storage medium
WO2023169089A1 (en) Video playing method and apparatus, electronic device, medium, and program product
CN103336678B (en) A kind of resource exhibition method, device and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant