WO2023151524A1 - Image display method, apparatus, electronic device and storage medium (图像显示方法、装置、电子设备及存储介质) - Google Patents

Image display method, apparatus, electronic device and storage medium

Info

Publication number
WO2023151524A1
Authority
WO
WIPO (PCT)
Prior art keywords
increment
current
target
movement
reference point
Application number
PCT/CN2023/074499
Other languages
English (en)
French (fr)
Inventor
管峥皓
Original Assignee
北京字跳网络技术有限公司
Application filed by 北京字跳网络技术有限公司
Publication of WO2023151524A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence

Definitions

  • Embodiments of the present disclosure relate to the technical field of motion simulation, for example, to an image display method, device, electronic device, and storage medium.
  • the present disclosure provides an image display method, apparatus, electronic device, and storage medium, so as to realize motion simulation of objects in the picture when the video picture changes, reduce the sense of separation between objects in the picture and the real world, and enhance the user experience.
  • an embodiment of the present disclosure provides an image display method, including:
  • determining a current movement increment of a target reference point relative to a current video frame to be processed, wherein the current movement increment is determined based on position information of the target reference point in a previous video frame to be processed;
  • determining a current cumulative increment of the target reference point according to the current movement increment and a cumulative increment to be called, wherein the cumulative increment to be called is determined based on historical movement increments of historical video frames to be processed and corresponding historical target movement amounts;
  • determining a target movement amount of the target reference point according to the current cumulative increment and a preset displacement function; and
  • rendering a target object to a target display position in the current video frame to be processed based on the target movement amount.
  • an embodiment of the present disclosure further provides an image display device, including:
  • a current movement increment determination module, configured to determine a current movement increment of a target reference point relative to a current video frame to be processed, wherein the current movement increment is determined based on position information of the target reference point in a previous video frame to be processed;
  • a current cumulative increment determination module, configured to determine a current cumulative increment of the target reference point according to the current movement increment and a cumulative increment to be called, wherein the cumulative increment to be called is determined based on historical movement increments of historical video frames to be processed and corresponding historical target movement amounts;
  • a target movement amount determination module, configured to determine a target movement amount of the target reference point according to the current cumulative increment and a preset displacement function; and
  • a rendering module, configured to render a target object to a target display position in the current video frame to be processed based on the target movement amount.
  • an embodiment of the present disclosure further provides an electronic device, including:
  • one or more processors;
  • a storage means configured to store one or more programs;
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the image display method according to any one of the embodiments of the present disclosure.
  • the embodiments of the present disclosure further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the image display method described in any one of the embodiments of the present disclosure.
  • FIG. 1 is a schematic flowchart of an image display method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic structural diagram of an image display device provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the term “comprise” and its variations are open-ended, ie “including but not limited to”.
  • the term “based on” is “based at least in part on”.
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
  • the application scenarios of the embodiments of the present disclosure may be described as examples.
  • some special effects (for example, a 3D object model in a floating state) can be added to the captured video. When the user moves the terminal device so that the camera angle of view changes, the captured video picture also changes. At this time, if the object in the video picture remains still, the user will feel that the object is separated from the real world.
  • in the process of the video picture changing, the application can simulate the motion of the object in the picture, so that the object presents a more realistic dynamic effect and the quality of the video picture is improved.
  • FIG. 1 is a schematic flow chart of an image display method provided by an embodiment of the present disclosure.
  • the embodiment of the present disclosure is applicable to performing motion simulation on objects in the display interface while the terminal device is being moved, so that the content of the display interface matches the actual scene. The method can be executed by an image display device, which can be implemented in the form of software and/or hardware, for example by an electronic device, and the electronic device can be a mobile terminal, a PC, or a server.
  • the method includes:
  • the device for executing the image display method provided by the embodiments of the present disclosure can be integrated into application software supporting special effect video processing functions, and the software can be installed in electronic equipment, for example, the electronic equipment can be a mobile terminal or a PC terminal, etc. .
  • the application software may be a type of software for image/video processing; the specific application software is not enumerated here, as long as image/video processing can be realized. It can also be a specially developed application program that implements adding and displaying special effects, or it can be integrated in a corresponding page, and the user can process the special effect video through the page integrated in the PC terminal.
  • the technical solution of this embodiment can be executed while the user is shooting a video. That is to say, when the user is shooting a video or making a video call with other users, the functions provided by the application can be used to add augmented reality (AR) special effects to the captured picture during shooting, and when the user moves the camera device, the special effects in the picture can simulate the effect of inertial motion based on the solution of this embodiment.
  • when the user touches the pre-developed special effect adding control to make the application add a target special effect (for example, a 3D object in a floating state) to the video picture, the application will associate the added target special effect with a pre-created mount point.
  • the target special effect can be any floating model, for example a pre-created 2D or 3D model. This model can be used as the target object in the display interface.
  • a traction effect can also be simulated for the object to reflect the effect of the user adjusting the target effect.
  • the target object can be a heart-shaped balloon with certain material properties pre-set.
  • when the balloon is displayed on the display interface, it is also connected to a cartoon arm through a connection line, thereby creating a traction effect.
  • the mount point of the special effect is the target reference point.
  • the target reference point is pre-built based on the virtual rendering camera. It can be understood that the target reference point is pre-created at a certain position relative to the rendering camera and serves as a "child object" of the rendering camera, so that the positional relationship between the target reference point and the rendering camera remains relatively static at all times.
  • when the terminal device moves, the position of the virtual rendering camera in the three-dimensional space coordinate system changes, and accordingly the coordinates of the target reference point in the three-dimensional space coordinate system change as well.
  • the three-dimensional coordinates of the target reference point may vary in each video frame. Taking two adjacent video frames in the video as an example: if the video frame at the current moment is used as the video frame to be processed, the distance between the position of the target reference point in this frame and its position in the previous frame is the current movement increment of the target reference point. It can be understood that the current movement increment is a three-dimensional vector, and it is determined based on the position information of the target reference point in the previous video frame to be processed.
  • when the target reference point associated with the 3D balloon in the above example moves from point A to point B in three-dimensional space, the vector (Δx, Δy, Δz) calculated from the three-dimensional coordinates of the two points is the current movement increment of the target reference point; it represents at least the distance and direction of the movement from point A to point B.
  • the relative position between the target reference point and the rendering camera remains unchanged at all times, so when the coordinates of the rendering camera in the three-dimensional space have been determined, according to the relative position matrix, the target reference point in the three-dimensional space can be calculated coordinates within.
  • the coordinates of the rendering camera in the three-dimensional space may be represented by camera position information, and the process of determining the camera position information will be described below.
  • the basic principle of the Simultaneous Localization and Mapping (SLAM) algorithm is to use probabilistic and statistical methods to achieve positioning and to reduce positioning errors through multi-feature matching.
  • the application can determine the coordinates of the rendering camera in the three-dimensional space coordinate system based on the SLAM algorithm.
  • the current position information of the target reference point, that is, its coordinate value in the three-dimensional space coordinate system in the video frame to be processed at the current moment, can be calculated according to those coordinates and the relative position matrix. The difference between the current coordinates of the target reference point and the corresponding coordinate value in the previous video frame to be processed can then be calculated, and this value is used as the current movement increment of the target reference point. It can be understood that the coordinate value corresponding to the target reference point in the previous video frame to be processed is its historical position information.
  • when the terminal device moves, the camera position information of the rendering camera, that is, its coordinate values in the three-dimensional space coordinate system, also changes. The application can determine the camera position information of the rendering camera at that moment based on the SLAM algorithm, and then calculate, from the determined coordinate values and the relative position matrix, the coordinates (x2, y2, z2) of the target reference point in the three-dimensional space coordinate system after the movement.
  • since the camera position information of the rendering camera and the current position information of the target reference point are both coordinate values in the three-dimensional space coordinate system, in the actual application process the movement sub-increments along the coordinate axes can be determined based on the current space coordinates corresponding to the current position information and the historical space coordinates corresponding to the historical position information; the movement sub-increments are then used as the current movement increment.
  • for example, the current space coordinates of the target reference point associated with the 3D balloon are (x2, y2, z2), and in the adjacent video frame to be processed before the current moment the historical space coordinates of the target reference point are (x1, y1, z1). Based on this, the variation of the coordinate value of the target reference point along the three coordinate axes of the space coordinate system can be determined. These three coordinate value changes are the movement sub-increments of the target reference point along the coordinate axes, and the movement sub-increments can be used as the current movement increment of the target reference point in the three-dimensional space coordinate system. Of course, the vector sum of the three movement sub-increments may also be used as the current movement increment, which will not be described again in this embodiment of the present disclosure.
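The per-axis computation described above can be sketched in Python. The function name is hypothetical, and the relative position matrix is simplified here to a fixed translation offset; the disclosure itself works with a full relative position matrix and SLAM-derived camera coordinates:

```python
def current_movement_increment(camera_pos, relative_offset, prev_ref_pos):
    """Sketch: derive the reference point's current movement increment.

    camera_pos      -- rendering-camera coordinates from the SLAM step
    relative_offset -- fixed offset of the reference point from the camera
                       (simplified stand-in for the relative position matrix)
    prev_ref_pos    -- reference-point coordinates in the previous frame
    """
    # The reference point is a "child object" of the camera, so its position
    # follows the camera rigidly.
    current = tuple(c + o for c, o in zip(camera_pos, relative_offset))
    # Per-axis movement sub-increments relative to the previous frame.
    increment = tuple(c - p for c, p in zip(current, prev_ref_pos))
    return current, increment

pos, inc = current_movement_increment(
    camera_pos=(1.0, 2.0, 0.5),
    relative_offset=(0.0, 0.0, 1.0),   # reference point 1 unit in front of camera
    prev_ref_pos=(0.8, 2.0, 1.5),
)
# inc holds the sub-increments (Δx, Δy, Δz), here approximately (0.2, 0.0, 0.0)
```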
  • for the video frame to be processed at the current moment, there may be multiple video frames to be processed before it, and the positions of the target reference point in those frames also differ. Therefore, in this embodiment, after the current movement increment of the target reference point relative to the current video frame to be processed is determined, in order to make the target reference point accurately simulate the effect of inertial motion, the cumulative increment to be called from the multiple video frames to be processed before the current moment also needs to be considered. The sum of the current movement increment of the target reference point and the cumulative increment to be called is then calculated, and the result is used as the current cumulative increment of the target reference point.
  • the cumulative increment to be called is determined based on the historical movement increment of the historical video frames to be processed and the corresponding historical target movement amount.
  • the historical video frames to be processed can be multiple video frames before the current moment. It can be understood that, during the movement of the target reference point in three-dimensional space, each time a video frame to be processed is displayed on the display interface, the cumulative increment to be called is updated.
  • for example, during the movement of the terminal device, the camera device collects three video frames N1, N2 and N3 to be processed. If the video frame to be processed at the current moment is video frame N3, the two video frames N1 and N2 before this moment are historical video frames to be processed. The movement increments of the target reference point corresponding to video frames N1, N2 and N3 are (0, 0, 0), (Δx1, Δy1, Δz1) and (Δx2, Δy2, Δz2), respectively. Correspondingly, the movement amount of the target reference point in the display interface corresponding to video frame N1 is (0, 0, 0), and the movement amount in the display interface corresponding to video frame N2 is (a1, b1, c1). On this basis, when the picture of video frame N2 is displayed on the display interface, the cumulative increment to be called is (Δx1-a1, Δy1-b1, Δz1-c1). The current cumulative increment is the sum of the cumulative increment to be called and the current movement increment corresponding to the video frame to be processed at the current moment (i.e., video frame N3), namely (Δx1-a1+Δx2, Δy1-b1+Δy2, Δz1-c1+Δz2).
  • during the movement of the terminal device, the cumulative increment to be called needs to be continuously updated. Therefore, an increment cache pool for storing the cumulative increment to be called can be pre-built in the storage space of the device. Since the cumulative increment to be called includes cumulative sub-increments to be called along multiple coordinate axes, the cumulative increment to be called determined each time can be decoupled and stored based on the different coordinate axes.
  • when the current cumulative increment needs to be determined, the cumulative increment to be called can be directly retrieved from the increment cache pool; based on the cumulative sub-increments to be called and the corresponding movement sub-increments, the cumulative movement sub-increments of the target reference point along the multiple coordinate axes are determined, and the multiple cumulative movement sub-increments are used as the current cumulative increment.
  • continuing the above example, when the picture of video frame N2 is displayed on the display interface, the corresponding cumulative increment to be called is (Δx1-a1, Δy1-b1, Δz1-c1). Based on the x-axis, y-axis and z-axis of the three-dimensional space coordinate system, the cumulative increment to be called can be decoupled to obtain the cumulative sub-increment Δx1-a1 to be called in the x-axis direction, the cumulative sub-increment Δy1-b1 to be called in the y-axis direction, and the cumulative sub-increment Δz1-c1 to be called in the z-axis direction, and these three cumulative sub-increments to be called are stored in the increment cache pool. When the current movement increment (Δx2, Δy2, Δz2) of the target reference point in the video frame to be processed at the current moment is determined, its coordinate values in the x-axis, y-axis and z-axis directions are added to the corresponding three cumulative sub-increments to be called in the increment cache pool, giving the three cumulative movement sub-increments Δx1-a1+Δx2, Δy1-b1+Δy2 and Δz1-c1+Δz2. It can be understood that these three sub-increments can be used as the current cumulative increment of the target reference point at the current moment.
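The bookkeeping in this example can be sketched as two small helpers. The function names are hypothetical; the disclosure describes the increment cache pool abstractly, decoupled per coordinate axis:

```python
def current_cumulative(cache_pool, current_increment):
    # current cumulative increment = cached to-be-called sub-increments
    # plus the current movement increment, axis by axis
    return tuple(c + m for c, m in zip(cache_pool, current_increment))

def update_cache_pool(cumulative, target_movement):
    # after rendering, the portion of the cumulative increment not consumed
    # by the applied target movement is stored back for the next frame
    return tuple(c - t for c, t in zip(cumulative, target_movement))

# Example: after frame N2 the pool holds (Δx1-a1, Δy1-b1, Δz1-c1);
# with illustrative numbers (0.2, 0.1, 0.0) and a frame-N3 increment
# (0.1, 0.1, 0.1), the current cumulative increment is (0.3, 0.2, 0.1).
pool = (0.2, 0.1, 0.0)
cumulative = current_cumulative(pool, (0.1, 0.1, 0.1))
next_pool = update_cache_pool(cumulative, (0.25, 0.2, 0.1))
```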
  • the target movement amount reflects the direction and distance that the target reference point needs to move in the display interface corresponding to the application.
  • the preset displacement function is determined based on a traction coefficient, a preset drag coefficient, and a speed coefficient for pulling the target object. The traction coefficient is a parameter determined so that the object appears to be pulled by the traction special effect during motion, the preset drag coefficient is a parameter determined so that the object shows the effect of wind resistance during motion, and the speed coefficient is a parameter determined so that the object moves with an initial speed during motion.
  • the coordinate value can be input as x into the above-mentioned preset displacement function, so as to obtain the target movement amount of the reference point in the display interface.
  • the three coefficients k1, k2 and k3 can be adjusted based on experience, or set in advance: a mapping table representing the correspondence between different objects and their three coefficients can be established beforehand, and when the target movement amount of an object needs to be determined, the three coefficients can be obtained by looking up the table. This will not be described in detail here in the embodiment of the present disclosure.
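The publication states only that the preset displacement function combines the traction coefficient k1, the preset drag coefficient k2, and the speed coefficient k3; it does not disclose the closed form. The damped linear response below is therefore purely an illustrative assumption, chosen so that drag reduces and traction/speed scale the output:

```python
def preset_displacement(x, k1=0.8, k2=0.3, k3=1.0):
    """Illustrative stand-in for the preset displacement function.

    x  -- cumulative movement sub-increment along one coordinate axis
    k1 -- traction coefficient (assumed), k2 -- drag coefficient (assumed),
    k3 -- speed coefficient (assumed). The real function's form is not
    given in the disclosure.
    """
    return k3 * k1 * x / (1.0 + k2)
```

With these example coefficients the object moves less than the raw increment, which is the lagging, "pulled on a rope against air resistance" feel the coefficients are meant to produce.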
  • in this embodiment, a threshold is set for the current cumulative increment; for example, 0.1 is used as the preset movement threshold for the cumulative movement sub-increments. Based on the preset displacement function, the cumulative movement sub-increments greater than the preset movement threshold are processed to obtain a first to-be-moved amount; the cumulative movement sub-increments not greater than the preset movement threshold yield a second to-be-moved amount; based on the first to-be-moved amount and the second to-be-moved amount, the target movement amount is determined.
  • the first to-be-moved amount and the second to-be-moved amount refer to the theoretical movement amounts of the target reference point in the display interface determined by the application, not the target movement amount that determines how the application finally renders the target object on the display interface. It can be understood that, when the cumulative movement sub-increments are compared against the preset movement threshold, different comparison results lead to different determined target movement amounts.
  • for example, when the cumulative movement sub-increment of the target reference point in the x-axis direction is 0.05 and the preset movement threshold is 0.1: since the cumulative movement sub-increment is smaller than the preset movement threshold, although a corresponding target movement amount could be obtained by inputting the cumulative movement sub-increment into the preset displacement function, it is erased because the movement amount is too small, so as to avoid unrealistic inertial motion of the target object associated with the target reference point. That is, the finally determined target movement amount is 0, and the target object remains relatively still in the display interface.
  • when the cumulative movement sub-increment of the target reference point in the x-axis direction is 0.2 and the preset movement threshold is 0.1, since the cumulative movement sub-increment is greater than the preset movement threshold, the target object associated with the target reference point needs to simulate inertial motion in the display interface; that is, the cumulative movement sub-increment 0.2 is input into the preset displacement function to obtain the corresponding target movement amount.
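The thresholding step above can be sketched as follows. The function name is hypothetical, and any displacement function can be passed in; sub-increments at or below the threshold are erased to 0, matching the 0.05 and 0.2 examples:

```python
MOVE_THRESHOLD = 0.1  # example value from the description

def target_movement(cumulative_sub_increments, displace):
    """Per-axis target movement amounts.

    Sub-increments not greater than the threshold are erased (movement 0);
    the rest are passed through the preset displacement function `displace`.
    """
    return tuple(displace(v) if abs(v) > MOVE_THRESHOLD else 0.0
                 for v in cumulative_sub_increments)

# With an illustrative displacement function that halves the increment,
# 0.05 is erased while 0.2 and -0.3 are processed.
result = target_movement((0.05, 0.2, -0.3), lambda v: 0.5 * v)
```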
  • the target display position of the target reference point in the current video frame to be processed can be determined based on the target movement amount, so as to render the target object to the target display position.
  • in this embodiment, the target movement amount can be sent to the vertex shader used for rendering the image, so as to determine the final display position of the object associated with the target reference point in the display interface, that is, the target display position. The target display position can be expressed as two-dimensional coordinates.
  • the vertex shader is used for image rendering; it is a programmable stage that replaces the fixed rendering pipeline and is mainly responsible for geometric operations on the vertices of the model.
  • when the application issues a rendering instruction, the vertex shader is activated, and when the vertex shader runs on the GPU, the corresponding image can be rendered on the display interface; this will not be repeated here.
  • the primary task of the vertex shader is to convert the target movement amount of the target reference point from the 3D model space to the homogeneous clipping space, that is, convert the 3D coordinates into 2D coordinates corresponding to the display interface.
  • in this embodiment, each face of the target object in the display interface is composed of two triangles. After the vertex shader performs the coordinate conversion, the vertex coordinates of the two triangles on a given face of the target object can be determined. These coordinates are obtained by performing perspective division (homogeneous division) on the basis of the homogeneous clipping space, and are used at least to reflect the display position, in the display interface, of the target object associated with the target reference point. That is to say, according to the two-dimensional coordinates determined by the vertex shader, the process of the target object undergoing inertial motion as the shooting angle changes can be rendered on the display interface.
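The vertex-shader step described above, transforming a vertex from model space into homogeneous clip space and then performing perspective (homogeneous) division to reach 2D display coordinates, can be illustrated on the CPU side. Real vertex shaders are written in a shading language such as GLSL; this Python sketch with a hypothetical function name only demonstrates the arithmetic:

```python
def clip_to_screen(point3d, mvp):
    """Transform a 3D vertex by a row-major 4x4 MVP matrix, then perform
    perspective division to get 2D display coordinates."""
    x, y, z = point3d
    homogeneous = (x, y, z, 1.0)                     # append w = 1
    clip = [sum(mvp[r][c] * v for c, v in enumerate(homogeneous))
            for r in range(4)]                       # clip-space position
    w = clip[3]
    ndc = [clip[i] / w for i in range(3)]            # homogeneous division
    return ndc[0], ndc[1]                            # 2D display coordinates

# Minimal perspective-like matrix that copies z into w, so division by
# depth happens; the vertex (2, 4, 2) lands at (1, 2) on screen.
mvp = [[1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 1, 0]]
screen_xy = clip_to_screen((2.0, 4.0, 2.0), mvp)
```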
  • the above solution of this embodiment illustrates the process of determining and rendering the target object in the video frame to be processed at the current moment. During actual video shooting, the camera device may continue to move, and the target object needs to produce inertial motion in subsequent video frames to be processed as well; therefore, the rendering of the target object in subsequent frames can be performed according to the embodiments of the present disclosure. Based on the target movement amount, the cumulative increment to be called is updated and stored in the increment cache pool, so as to determine the target movement amount of the next video frame to be processed.
  • the solution of this embodiment can be executed on a mobile terminal installed with the relevant video processing application; the relevant data can also be uploaded to the cloud through the application and processed by a cloud server, which, after processing, sends the result to the corresponding terminal so that the inertial motion picture of the target object is rendered on its display interface.
  • taking the balloon as an example of the target special effect mounted on the target reference point: suppose the position of the target reference point in frame N is point A and its position in frame N+1 is point B, so the balloon special effect moves from point A to point B. Because the movement of the balloon is slow, the calculated display position of the balloon special effect in frame N+1 is a point C between A and B; that is, the actual moving distance of the balloon special effect is less than the expected moving distance. In reality, the movement of any object has inertia, so the calculated balloon special effect may also move past point B to a point D, where the distance between A and D is greater than the distance between A and B. Due to the effect of the traction rope, the balloon special effect can then return from point D to point B in subsequent frames.
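The lag-then-overshoot-then-return behavior of the balloon can be reproduced with a simple damped-spring follower. This is an illustrative physical model with assumed coefficients, not the disclosure's preset displacement function:

```python
def follow_with_traction(target, pos, vel, k_traction=0.2, k_drag=0.1, dt=1.0):
    """One simulation step: the effect position `pos` is pulled toward the
    reference point `target` like a balloon on a rope, with air drag."""
    accel = k_traction * (target - pos) - k_drag * vel  # spring pull + drag
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel

# Reference point jumps from A = 0 to B = 1; the balloon lags behind (point C),
# overshoots past B (point D), then is pulled back to B by the traction force.
pos, vel = 0.0, 0.0
trajectory = []
for _ in range(200):
    pos, vel = follow_with_traction(1.0, pos, vel)
    trajectory.append(pos)
```

The overshoot appears because the follower carries velocity (inertia) when it reaches the target; the drag term makes the oscillation die out so the balloon settles at B.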
  • in the technical solution of the embodiments of the present disclosure, the current movement increment of the target reference point relative to the current video frame to be processed is first determined, that is, the movement increment of the target reference point in the three-dimensional space coordinate system; then, according to the current movement increment and the cumulative increment to be called, the current cumulative increment of the target reference point is determined; next, according to the current cumulative increment and the preset displacement function, the target movement amount of the target reference point is determined; and finally, based on the target movement amount, the target object is rendered to the target display position in the current video frame to be processed. In this way, when the video picture changes, motion simulation of the objects in the picture is realized, which reduces the sense of separation between the objects in the picture and the real world, improves the texture of the picture, and enhances the user experience.
  • FIG. 2 is a schematic structural diagram of an image display device provided by an embodiment of the present disclosure. As shown in FIG. 2, the device includes: a current movement increment determination module 210, a current cumulative increment determination module 220, a target movement amount determination module 230 and a rendering module 240.
  • the current movement increment determination module 210 is configured to determine the current movement increment of the target reference point relative to the current video frame to be processed, wherein the current movement increment is determined based on the position information of the target reference point in the previous video frame to be processed.
  • the current cumulative increment determination module 220 is configured to determine the current cumulative increment of the target reference point according to the current movement increment and the cumulative increment to be called, wherein the cumulative increment to be called is determined based on the historical movement increments of the historical video frames to be processed and the corresponding historical target movement amounts.
  • the target movement amount determination module 230 is configured to determine the target movement amount of the target reference point according to the current cumulative increment and a preset displacement function.
  • the rendering module 240 is configured to render the target object to the target display position in the current video frame to be processed based on the target movement amount.
  • the image display device further includes a cumulative incremental update module to be called.
  • the cumulative increment update module to be called is configured to update the cumulative increment to be called based on the target movement amount, and store the cumulative increment to be called in the increment cache pool, so as to determine the target movement amount of the next video frame to be processed.
  • the image display device further includes a relative position matrix determination module.
  • the relative position matrix determination module is configured to determine the relative position matrix of the target reference point and the rendering camera, so as to determine the current movement increment of the target reference point relative to the current video frame to be processed according to the relative position matrix; wherein, The above target reference point is used to mount target effects.
  • the current movement increment determination module 210 includes a camera position information determination unit, a current position information determination unit, and a current movement increment determination unit.
  • the camera position information determination unit is configured to determine the camera position information of the rendering camera based on the simultaneous positioning and map construction algorithm.
  • the current position information determining unit is configured to determine the current position information of the target reference point according to the relative position matrix and the camera position information.
  • the current movement increment determining unit is configured to determine the current movement increment according to the current position information and historical position information corresponding to the previous video frame to be processed.
  • the current movement increment determination unit is further configured to determine a movement sub-increment in each coordinate axis direction according to the current spatial coordinates corresponding to the current position information and the historical spatial coordinates corresponding to the historical position information, and to use the movement sub-increments as the current movement increment.
  • the current cumulative increment determination module 220 includes a to-be-called cumulative increment retrieval unit and a current cumulative increment determination unit.
  • the to-be-called cumulative increment retrieval unit is configured to retrieve the cumulative increment to be called from the increment buffer pool, where the cumulative increment to be called includes cumulative sub-increments to be called in the coordinate axis directions, and to determine, based on the cumulative sub-increments to be called and the corresponding movement sub-increments, the cumulative movement sub-increments of the target reference point in the coordinate axis directions.
  • the current cumulative increment determination unit is configured to use the cumulative movement sub-increments as the current cumulative increment.
  • the target movement amount determining module 230 includes a to-be-moved amount determination unit and a target movement amount determination unit.
  • the to-be-moved amount determination unit is configured to process, based on the preset displacement function, the cumulative movement sub-increments greater than a preset movement amount threshold to obtain a first to-be-moved amount, and to use the cumulative sub-increments not greater than the preset movement amount threshold as a second to-be-moved amount.
  • the target movement amount determining unit is configured to determine the target movement amount based on the first to-be-moved amount and the second to-be-moved amount.
  • the preset displacement function is determined based on a traction coefficient for pulling the target object, a preset wind resistance coefficient, and a speed coefficient.
  • the rendering module 240 includes a target display position determining unit and a rendering unit.
  • the target display position determination unit is configured to determine the target display position of the target reference point in the current video frame to be processed based on the target movement amount.
  • a rendering unit configured to render the target object to the target display position.
  • the technical solution provided by this embodiment first determines the current movement increment of the target reference point relative to the current video frame to be processed, that is, the movement increment of the target reference point in the three-dimensional spatial coordinate system; it then determines the target movement amount of the target reference point according to the current movement increment and the cumulative increment to be called, for example, according to the current cumulative increment and a preset displacement function; and, based on the target movement amount, it renders the target object to the target display position in the current video frame to be processed. In this way, when the video picture changes, the motion of objects in the picture is simulated, which reduces the sense of separation between the objects in the picture and the real world, improves the quality of the picture, and also enhances the user experience.
  • the image display device provided by the embodiments of the present disclosure can execute the image display method provided by any embodiment of the present disclosure, and has corresponding functional modules and beneficial effects for executing the method.
  • It is worth noting that the multiple units and modules included in the above apparatus are divided only according to functional logic, but the division is not limited thereto, as long as the corresponding functions can be implemented; in addition, the specific names of the multiple functional units are only for ease of distinguishing them from one another, and are not intended to limit the protection scope of the embodiments of the present disclosure.
  • FIG. 3 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (such as car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 3 is only an example, and should not limit the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 300 may include a processing device (such as a central processing unit or a graphics processor) 301, which may execute various appropriate actions and processes according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage device 306 into a random access memory (RAM) 303.
  • the RAM 303 also stores various programs and data necessary for the operation of the electronic device 300.
  • the processing device 301, ROM 302, and RAM 303 are connected to each other through a bus 304.
  • An input/output (I/O) interface 305 is also connected to the bus 304.
  • Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 307 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 308 including, for example, a magnetic tape and a hard disk; and a communication device 309.
  • the communication means 309 may allow the electronic device 300 to perform wireless or wired communication with other devices to exchange data. While FIG. 3 shows electronic device 300 having various means, it should be understood that implementing or having all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via communication means 309, or from storage means 306, or from ROM 302.
  • When the computer program is executed by the processing device 301, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
  • the electronic device provided by the embodiments of the present disclosure belongs to the same inventive concept as the image display method provided by the above embodiments; technical details not described in detail in this embodiment can be found in the above embodiments, and this embodiment has the same beneficial effects as the above embodiments.
  • An embodiment of the present disclosure provides a computer storage medium, on which a computer program is stored, and when the program is executed by a processor, the image display method provided in the foregoing embodiments is implemented.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
  • a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • the client and the server can communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (for example, a communication network).
  • Examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device is caused to:
  • determine a current movement increment of a target reference point relative to a current video frame to be processed, where the current movement increment is determined based on position information of the target reference point in a previous video frame to be processed;
  • determine a current cumulative increment of the target reference point according to the current movement increment and a cumulative increment to be called, where the cumulative increment to be called is determined based on historical movement increments of historical video frames to be processed and corresponding historical target movement amounts;
  • determine a target movement amount of the target reference point according to the current cumulative increment and a preset displacement function;
  • render the target object to a target display position in the current video frame to be processed based on the target movement amount.
  • Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • In cases involving a remote computer, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. Wherein, the name of the unit does not constitute a limitation of the unit itself under certain circumstances, for example, the first obtaining unit may also be described as "a unit for obtaining at least two Internet Protocol addresses".
  • The functions described herein above may be performed, at least in part, by one or more hardware logic components, for example, and without limitation, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • machine-readable storage media would include one or more wire-based electrical connections, portable computer discs, hard drives, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), optical fiber, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • Example 1 provides an image display method, the method including:
  • determining a current movement increment of a target reference point relative to a current video frame to be processed, where the current movement increment is determined based on position information of the target reference point in a previous video frame to be processed;
  • determining a current cumulative increment of the target reference point according to the current movement increment and a cumulative increment to be called, where the cumulative increment to be called is determined based on historical movement increments of historical video frames to be processed and corresponding historical target movement amounts;
  • determining a target movement amount of the target reference point according to the current cumulative increment and a preset displacement function; and
  • rendering a target object to a target display position in the current video frame to be processed based on the target movement amount.
  • Example 2 provides an image display method, and the method further includes:
  • the cumulative increment to be called is updated based on the target movement amount, and the cumulative increment to be called is stored in an increment buffer pool, so as to determine a target movement amount of a next video frame to be processed.
  • Example 3 provides an image display method. Before the determination of the current movement increment of the target reference point relative to the current video frame to be processed, the method further includes:
  • determining a relative position matrix of the target reference point and a rendering camera, so as to determine, according to the relative position matrix, the current movement increment of the target reference point relative to the current video frame to be processed; where the target reference point is used for mounting a target special effect.
  • Example 4 provides an image display method, wherein determining the current movement increment of the target reference point relative to the current video frame to be processed includes:
  • determining camera position information of the rendering camera based on a simultaneous localization and mapping algorithm;
  • determining current position information of the target reference point according to the relative position matrix and the camera position information; and
  • determining the current movement increment according to the current position information and historical position information corresponding to a previous video frame to be processed.
  • Example 5 provides an image display method, wherein determining the current movement increment according to the current position information and the historical position information corresponding to the previous video frame to be processed includes:
  • determining a movement sub-increment in each coordinate axis direction according to the current spatial coordinates corresponding to the current position information and the historical spatial coordinates corresponding to the historical position information; and
  • using the movement sub-increments as the current movement increment.
  • Example 6 provides an image display method, wherein determining the current cumulative increment of the target reference point according to the current movement increment and the cumulative increment to be called includes:
  • retrieving the cumulative increment to be called from the increment buffer pool, where the cumulative increment to be called includes cumulative sub-increments to be called in the coordinate axis directions;
  • determining, based on the cumulative sub-increments to be called and the corresponding movement sub-increments, the cumulative movement sub-increments of the target reference point in the coordinate axis directions; and
  • using the cumulative movement sub-increments as the current cumulative increment.
  • Example 7 provides an image display method, wherein determining the target movement amount of the target reference point according to the current cumulative increment and the preset displacement function includes:
  • processing, based on the preset displacement function, the cumulative movement sub-increments greater than a preset movement amount threshold to obtain a first to-be-moved amount;
  • using the cumulative sub-increments not greater than the preset movement amount threshold as a second to-be-moved amount; and
  • determining the target movement amount based on the first to-be-moved amount and the second to-be-moved amount.
  • Example 8 provides an image display method, wherein the preset displacement function is determined based on a traction coefficient for pulling the target object, a preset wind resistance coefficient, and a speed coefficient.
  • Example 9 provides an image display method, wherein rendering the target object to the target display position in the current video frame to be processed based on the target movement amount includes:
  • determining, based on the target movement amount, the target display position of the target reference point in the current video frame to be processed; and
  • rendering the target object to the target display position.
  • Example 10 provides an image display device, which includes:
  • a current movement increment determination module, configured to determine a current movement increment of a target reference point relative to a current video frame to be processed, where the current movement increment is determined based on position information of the target reference point in a previous video frame to be processed;
  • a current cumulative increment determination module, configured to determine a current cumulative increment of the target reference point according to the current movement increment and a cumulative increment to be called, where the cumulative increment to be called is determined based on historical movement increments of historical video frames to be processed and corresponding historical target movement amounts;
  • a target movement amount determination module, configured to determine a target movement amount of the target reference point according to the current cumulative increment and a preset displacement function; and
  • the rendering module is configured to render the target object to the target display position in the current video frame to be processed based on the target movement amount.


Abstract

Embodiments of the present disclosure provide an image display method and apparatus, an electronic device, and a storage medium. The method includes: determining a current movement increment of a target reference point relative to a current video frame to be processed; determining a current cumulative increment of the target reference point according to the current movement increment and a cumulative increment to be called, where the cumulative increment to be called is determined based on historical movement increments of historical video frames to be processed and corresponding historical target movement amounts; determining a target movement amount of the target reference point according to the current cumulative increment and a preset displacement function; and rendering a target object to a target display position in the current video frame to be processed based on the target movement amount.

Description

Image display method and apparatus, electronic device, and storage medium
This application claims priority to Chinese Patent Application No. 202210130360.8, filed with the China National Intellectual Property Administration on February 11, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the technical field of motion simulation, and relate, for example, to an image display method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of computer technology, applications provide users with an increasingly rich set of dynamic special effects, and the generated videos have become increasingly engaging; for example, while shooting a video, a user can add special effects to it according to personal preference.
However, in the solutions provided by the related art, when the user moves the terminal device, the picture in the video changes, and the elements in the display interface cannot respond to this change. That is to say, while the terminal device is moving, the objects in the display interface may not match what would be presented in the actual scene; as a result, the picture presented in the display interface has a poor sense of realism, which affects the user experience.
Summary
The present disclosure provides an image display method and apparatus, an electronic device, and a storage medium, so as to simulate the motion of objects in the picture when the video picture changes, reduce the sense of separation between the objects in the picture and the real world, and enhance the user experience.
In a first aspect, an embodiment of the present disclosure provides an image display method, including:
determining a current movement increment of a target reference point relative to a current video frame to be processed, where the current movement increment is determined based on position information of the target reference point in a previous video frame to be processed;
determining a current cumulative increment of the target reference point according to the current movement increment and a cumulative increment to be called, where the cumulative increment to be called is determined based on historical movement increments of historical video frames to be processed and corresponding historical target movement amounts;
determining a target movement amount of the target reference point according to the current cumulative increment and a preset displacement function; and
rendering a target object to a target display position in the current video frame to be processed based on the target movement amount.
In a second aspect, an embodiment of the present disclosure further provides an image display apparatus, including:
a current movement increment determination module, configured to determine a current movement increment of a target reference point relative to a current video frame to be processed, where the current movement increment is determined based on position information of the target reference point in a previous video frame to be processed;
a current cumulative increment determination module, configured to determine a current cumulative increment of the target reference point according to the current movement increment and a cumulative increment to be called, where the cumulative increment to be called is determined based on historical movement increments of historical video frames to be processed and corresponding historical target movement amounts;
a target movement amount determination module, configured to determine a target movement amount of the target reference point according to the current cumulative increment and a preset displacement function; and
a rendering module, configured to render a target object to a target display position in the current video frame to be processed based on the target movement amount.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including:
one or more processors; and
a storage apparatus configured to store one or more programs,
where, when executed by the one or more processors, the one or more programs cause the one or more processors to implement the image display method according to any embodiment of the present disclosure.
In a fourth aspect, an embodiment of the present disclosure further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the image display method according to any embodiment of the present disclosure.
Brief Description of the Drawings
Throughout the drawings, identical or similar reference numerals denote identical or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
FIG. 1 is a schematic flowchart of an image display method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of an image display apparatus provided by an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
It should be understood that the steps described in the method embodiments of the present disclosure may be performed in a different order and/or in parallel. Furthermore, the method embodiments may include additional steps and/or omit some of the illustrated steps. The scope of the present disclosure is not limited in this respect.
As used herein, the term "include" and its variants are open-ended, that is, "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Definitions of other terms are given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not intended to limit the order of, or interdependence between, the functions performed by these apparatuses, modules, or units. It should also be noted that the modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of the messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Before introducing the technical solution, an application scenario of the embodiments of the present disclosure may first be described by way of example. Exemplarily, when a user shoots a video through application software, or conducts a video call with another user, some special effects (for example, a 3D object model in a floating state) may be added to the captured video. When the user moves the terminal device so that the camera's viewing angle changes, the captured video picture also changes; at this time, if the object in the video picture remains stationary, the user will feel that the object is disconnected from the real world. According to the technical solution of this embodiment, while the video picture is changing, the application can simulate the motion of the objects in the picture, so that the objects present a more realistic dynamic effect and the quality of the video picture is improved.
FIG. 1 is a schematic flowchart of an image display method provided by an embodiment of the present disclosure. The embodiment of the present disclosure is applicable to the case where, while the terminal device is being moved, motion simulation is performed on objects in the display interface so that the content of the display interface matches the actual scene. The method may be executed by an image display apparatus, which may be implemented in the form of software and/or hardware, for example, by an electronic device, which may be a mobile terminal, a PC, a server, or the like.
As shown in FIG. 1, the method includes the following steps.
S110. Determine a current movement increment of a target reference point relative to a current video frame to be processed.
The apparatus for executing the image display method provided by the embodiments of the present disclosure may be integrated into application software that supports a special-effect video processing function, and the software may be installed in an electronic device; for example, the electronic device may be a mobile terminal, a PC, or the like. The application software may be a category of software for image/video processing; the specific application software will not be enumerated here, as long as it can implement image/video processing. It may also be a specially developed application that implements adding and displaying special effects, or it may be integrated in a corresponding page, so that a user can process special-effect videos through a page integrated on a PC.
It should be noted that the technical solution of this embodiment may be executed while the user is shooting a video. That is to say, while the user is shooting a video or conducting a video call with another user, an augmented reality (AR) special effect may be added to the captured picture through a function provided by the application; when the user moves the camera device, the special effect in the picture can simulate the effect of inertial motion based on the solution of this embodiment.
In this embodiment, when the user touches a pre-developed special-effect adding control so that the application adds a target special effect (for example, a 3D object in a floating state) to the video picture, the application associates the added target special effect with a pre-created mount point. The target special effect may be any floatable model, for example, a pre-created 2D or 3D model, and this model may serve as the target object in the display interface. When the target object is displayed in a floating state in the display interface, a traction special effect may also be simulated for the object, so as to reflect the effect of the user adjusting the target special effect. For example, the target object may be a heart-shaped balloon with preset material attributes, and, when displayed in the display interface, the balloon is also associated with a cartoon arm through a connecting line, thereby constructing a traction special effect.
It can be understood that the mount point of the special effect is the target reference point. The target reference point is pre-constructed based on a virtual rendering camera. This can be understood as follows: the target reference point is pre-created at a certain position corresponding to the rendering camera and serves as a "child object" of the rendering camera, while the positional relationship between the target reference point and the rendering camera remains relatively static at all times. On this basis, when the user moves the terminal device, the position of the virtual rendering camera in the three-dimensional spatial coordinate system also changes; correspondingly, in order to keep the relative positional relationship between the target reference point and the rendering camera unchanged, the coordinates of the target reference point in the three-dimensional spatial coordinate system also change.
Exemplarily, when the user moves the terminal device and the captured video picture changes, not only do the coordinates of the virtual rendering camera in the three-dimensional space change, but the coordinates of the target reference point change as well; on this basis, it can be understood that the floating 3D balloon associated with the target reference point in the video picture moves along with the target reference point.
It should be noted that, when the user moves the terminal device and the video picture changes, the three-dimensional coordinates of the target reference point may differ in each video frame. Taking two adjacent video frames in a video as an example: if the video frame at the current moment is taken as the video frame to be processed, the distance between the position of the target reference point in this frame's picture and its position in the previous frame's picture is the current movement increment of the target reference point. It can be understood that the current movement increment is a three-dimensional vector and is determined based on the position information of the target reference point in the previous video frame to be processed. For example, in the pictures displayed by two adjacent video frames, the target reference point associated with the 3D balloon in the above example moves from point A to point B in the three-dimensional space; the vector (Δx, Δy, Δz) calculated based on the three-dimensional coordinates of the two points is the current movement increment of the target reference point, and it can at least characterize the distance and direction of the movement from A to B.
In this embodiment, in order to obtain the current movement increment of the target reference point accurately, the relative position matrix of the target reference point and the rendering camera first needs to be determined, so that the current movement increment of the target reference point relative to the current video frame to be processed is determined according to the relative position matrix.
This can be understood as follows: the relative position between the target reference point and the rendering camera remains unchanged at all times; therefore, once the coordinates of the rendering camera in the three-dimensional space have been determined, the coordinates of the target reference point in the three-dimensional space can be calculated according to the relative position matrix. The coordinates of the rendering camera in the three-dimensional space may be represented by camera position information; the process of determining the camera position information is described below.
For example, the camera position information of the rendering camera is determined based on a simultaneous localization and mapping algorithm; the current position information of the target reference point is determined according to the relative position matrix and the camera position information; and the current movement increment is determined according to the current position information and the historical position information corresponding to the previous video frame to be processed.
In the field of AR technology, the basic principle of the simultaneous localization and mapping (SLAM) algorithm is to use probabilistic and statistical methods, through multi-feature matching, to achieve localization and reduce localization errors; it mainly includes feature extraction, data association, state estimation, state update, feature update, and other parts. It can therefore be understood that a map with a more realistic visual effect can be constructed based on the SLAM algorithm, so that the overlay effect of virtual objects can be rendered for the current viewing angle, making the virtual objects in the picture more realistic and less incongruous. In this embodiment specifically, regardless of the shooting angle of the camera device, the application can determine the coordinates of the rendering camera in the three-dimensional spatial coordinate system based on the SLAM algorithm.
For example, after the camera position information of the rendering camera is determined, the current position information of the target reference point can be calculated according to these coordinates and the relative position matrix; that is to say, the coordinate values of the target reference point in the three-dimensional spatial coordinate system in the video frame to be processed at the current moment are calculated.
In this embodiment, after the current position information of the target reference point is determined, the difference between the current coordinates of the reference point and the corresponding coordinate values in the previous video frame to be processed can be calculated, and this value is taken as the current movement increment of the target reference point. It can be understood that the coordinate values of the target reference point in the previous video frame to be processed are its historical position information.
Continuing with the above example, when the user moves the terminal device and the video picture changes, the camera position information of the rendering camera, that is, its coordinate values in the three-dimensional spatial coordinate system, also changes. For example, when the video frame to be processed at the current moment is presented in the video picture, the application can determine, based on the SLAM algorithm, the camera position information of the rendering camera at this moment, that is, the coordinate values of the rendering camera in the three-dimensional spatial coordinate system, and calculate, according to the determined coordinate values and the relative position matrix, the coordinates (x2, y2, z2) of the target reference point in the three-dimensional spatial coordinate system after the movement. Then, the coordinate values (x1, y1, z1) corresponding to the target reference point in the previous video frame to be processed are obtained, and taking the difference between the two coordinate values yields the current movement increment (Δx, Δy, Δz) of the target reference point relative to the current video frame to be processed during the process in which the user changes the video shooting angle.
In this embodiment, since the camera position information of the rendering camera and the current position information of the target reference point are both coordinate values in the three-dimensional spatial coordinate system, in practical application, a movement sub-increment in each coordinate axis direction may be determined according to the current spatial coordinates corresponding to the current position information and the historical spatial coordinates corresponding to the historical position information, and the movement sub-increments are taken as the current movement increment.
Exemplarily, in the video frame to be processed at the current moment, the current spatial coordinates of the target reference point associated with the 3D balloon are (x2, y2, z2), and in the adjacent video frame to be processed before the current moment, the historical spatial coordinates of the target reference point are (x1, y1, z1). On this basis, the changes of the coordinate values of the target reference point along the three coordinate axes of the spatial coordinate system can be determined; for example, the change of the target reference point on the x-axis is Δx = x2 - x1, the change on the y-axis is Δy = y2 - y1, and the change on the z-axis is Δz = z2 - z1. It can be understood that these three coordinate value changes are the movement sub-increments of the target reference point in the directions of the coordinate axes, and each movement sub-increment can serve as a component of the current movement increment of the target reference point in the three-dimensional spatial coordinate system; of course, the vector determined based on the above three movement sub-increments may also serve as the current movement increment, which will not be elaborated here.
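The per-axis computation described above can be sketched as follows. This is a minimal illustration only: the function names are invented for the example, and the relative position matrix is simplified to a fixed offset from the rendering camera, which is an assumption rather than a detail from the disclosure.

```python
# Sketch of S110: per-axis movement sub-increments of the target reference
# point between two frames, with the relative position matrix simplified to
# a fixed offset from the rendering camera (assumption for illustration).

def reference_point_position(camera_position, relative_offset):
    # The reference point is a "child object" of the rendering camera,
    # so it keeps a constant offset from the camera position.
    return tuple(c + o for c, o in zip(camera_position, relative_offset))

def movement_sub_increments(current_position, historical_position):
    # Per-axis deltas (Δx, Δy, Δz) between the current and previous frame.
    return tuple(cur - hist for cur, hist in zip(current_position, historical_position))

# Example: the camera moves; the reference point keeps its offset (0, 1, 0).
prev = reference_point_position((0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
curr = reference_point_position((0.25, 0.0, -0.125), (0.0, 1.0, 0.0))
delta = movement_sub_increments(curr, prev)  # (0.25, 0.0, -0.125)
```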
S120. Determine the current cumulative increment of the target reference point according to the current movement increment and the cumulative increment to be called.
For the video frame to be processed at the current moment, there may be multiple video frames to be processed before it. It can be understood that, when the user moves the terminal device and the video picture changes, the position of the target reference point also differs in different video frames to be processed. Therefore, in this embodiment, after the current movement increment of the target reference point relative to the current video frame to be processed is determined, in order for the target reference point to simulate the effect of inertial motion accurately, the cumulative increment to be called, accumulated over the video frames to be processed before the current moment, also needs to be considered; the sum of the current movement increment of the target reference point and the cumulative increment to be called is then calculated, and the result is taken as the current cumulative increment of the target reference point.
The cumulative increment to be called is determined based on the historical movement increments of historical video frames to be processed and the corresponding historical target movement amounts. In the process in which the user moves the terminal device and the video picture changes, the historical video frames to be processed may be multiple video frames before the current moment. This can be understood as follows: while the target reference point is moving in the three-dimensional space, the cumulative increment to be called is updated each time the picture of a video frame to be processed is displayed in the display interface.
Exemplarily, while the shooting angle is changing, the camera device captures three video frames to be processed, N1, N2, and N3. The video frame to be processed at the current moment is frame N3, and the two frames before it, N1 and N2, are historical video frames to be processed. The movement increments of the target reference point corresponding to frames N1, N2, and N3 are (0, 0, 0), (Δx1, Δy1, Δz1), and (Δx2, Δy2, Δz2), respectively; correspondingly, the movement amount of the target reference point in the display interface corresponding to frame N1 is (0, 0, 0), and in the display interface corresponding to frame N2 it is (a1, b1, c1). On this basis, when the picture of frame N2 is displayed in the display interface, the cumulative increment to be called corresponding to the target reference point moving to its position in the picture of frame N2 can be determined as (Δx1-a1, Δy1-b1, Δz1-c1). The current cumulative increment is then the sum of the cumulative increment to be called and the current movement increment corresponding to the video frame to be processed at the current moment (that is, frame N3), namely (Δx1-a1+Δx2, Δy1-b1+Δy2, Δz1-c1+Δz2).
In practical application, since the target reference point moves many times, the cumulative increment to be called needs to be updated continuously. Therefore, an increment buffer pool for storing the cumulative increment to be called may be pre-constructed in the storage space of the device. Meanwhile, since the cumulative increment to be called includes cumulative sub-increments to be called in multiple coordinate axis directions, each determined cumulative increment to be called may also be decoupled by coordinate axis before being stored. For example, after the current movement increment of the target reference point is determined, the cumulative increment to be called can be retrieved directly from the increment buffer pool; based on the cumulative sub-increments to be called and the corresponding movement sub-increments, the cumulative movement sub-increments of the target reference point in the multiple coordinate axis directions are determined; and the multiple cumulative movement sub-increments are taken as the current cumulative increment.
Continuing with the above example, after the increment buffer pool is pre-constructed and the cumulative increment to be called corresponding to the target reference point moving to its position in the second video frame to be processed is determined to be (Δx1-a1, Δy1-b1, Δz1-c1), the cumulative increment to be called can be decoupled on the basis of the x-axis, y-axis, and z-axis of the three-dimensional spatial coordinate system, yielding the cumulative sub-increment to be called Δx1-a1 in the x-axis direction, Δy1-b1 in the y-axis direction, and Δz1-c1 in the z-axis direction; these three cumulative sub-increments to be called are then stored in the increment buffer pool. After the current movement increment (Δx2, Δy2, Δz2) of the target reference point in the video frame to be processed at the current moment is determined, the coordinate values in the x-axis, y-axis, and z-axis directions can be added to the corresponding three cumulative sub-increments to be called in the increment buffer pool, yielding the three cumulative movement sub-increments Δx1-a1+Δx2, Δy1-b1+Δy2, and Δz1-c1+Δz2. It can be understood that these three sub-increments can serve as the current cumulative increment of the target reference point at the current moment.
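The increment buffer pool logic above can be sketched as follows. This is a minimal illustration under the assumption that the pool simply keeps one pending sub-increment per coordinate axis; the class and method names are invented for the example.

```python
# Sketch of the per-axis increment buffer pool: the stored to-be-called
# cumulative sub-increments are added to the new movement sub-increments to
# obtain the current cumulative increment; after rendering, the part the
# object has not yet moved is stored back for the next frame.

class IncrementBufferPool:
    def __init__(self):
        # One to-be-called cumulative sub-increment per coordinate axis.
        self.pending = [0.0, 0.0, 0.0]

    def current_cumulative(self, movement_increment):
        # Current cumulative increment for this frame, per axis.
        return [p + m for p, m in zip(self.pending, movement_increment)]

    def update(self, cumulative, target_movement):
        # Leftover increment carried over to the next frame to be processed.
        self.pending = [c - t for c, t in zip(cumulative, target_movement)]

pool = IncrementBufferPool()
cum = pool.current_cumulative([0.25, 0.0, 0.0])   # [0.25, 0.0, 0.0]
pool.update(cum, [0.125, 0.0, 0.0])               # pending -> [0.125, 0.0, 0.0]
```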
S130. Determine the target movement amount of the target reference point according to the current cumulative increment and the preset displacement function.
In this embodiment, after the current cumulative increment of the target reference point at the current moment is determined, it can be input into the preset displacement function already integrated into the application, so as to calculate the target movement amount of the target reference point. The target movement amount reflects the direction and distance by which the target reference point needs to move in the display interface corresponding to the application.
For example, the preset displacement function is determined based on a traction coefficient for pulling the target object, a preset wind resistance coefficient, and a speed coefficient. The traction coefficient is a parameter determined so that the object appears to be pulled by the traction special effect during its motion; the preset wind resistance coefficient is a parameter determined so that the object appears to be subject to wind resistance during its motion; and the speed coefficient is a parameter determined so that the object appears to move based on an initial speed. In practical application, the preset displacement function may be y = (math.abs(x)/x)*(k1+k3*x*x)+k2*x, where y is the finally obtained target movement amount of the target reference point in the display interface, k1 is the traction coefficient, k2 is the preset wind resistance coefficient, k3 is the speed coefficient, and x is the current cumulative increment of the target reference point. For example, after the current cumulative increment of the target reference point associated with the 3D balloon is determined to be (Δx1-a1+Δx2, Δy1-b1+Δy2, Δz1-c1+Δz2), these coordinate values can be input as x into the above preset displacement function, thereby obtaining the target movement amount of the reference point in the display interface.
Those skilled in the art should understand that, in practical application, the three coefficients k1, k2, and k3 may be set based on experience, or a mapping table characterizing the correspondence between different objects and the corresponding three coefficients may be established in advance; on this basis, when the target movement amount of a certain object needs to be determined, the above three coefficients can be determined by looking up the table, which will not be elaborated here.
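The preset displacement function quoted above can be written out directly. The sketch below applies it to one axis at a time; the coefficient values are illustrative only, and the guard at x == 0 is an added assumption, since abs(x)/x is undefined at zero.

```python
import math

# y = (abs(x)/x) * (k1 + k3*x*x) + k2*x, applied to one axis at a time.
def displacement(x, k1, k2, k3):
    # k1: traction coefficient, k2: wind resistance coefficient,
    # k3: speed coefficient (names follow the description above).
    if x == 0:
        return 0.0  # no accumulated increment, no movement (added guard)
    sign = math.copysign(1.0, x)  # abs(x)/x
    return sign * (k1 + k3 * x * x) + k2 * x

# Illustrative coefficients, not values from the disclosure:
y = displacement(0.5, k1=0.01, k2=0.2, k3=0.04)  # ≈ 0.12
```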
It should be noted that, in the process of determining the target movement amount of the target reference point, in order to prevent a slight change of the shooting angle from causing the object in the display interface to produce unrealistic inertial motion, a threshold may also be preset for the current cumulative increment; for example, 0.1 may be used as the movement amount threshold for the cumulative movement sub-increments. For example, the cumulative movement sub-increments greater than the preset movement amount threshold are processed based on the preset displacement function to obtain a first to-be-moved amount; the cumulative sub-increments not greater than the preset movement amount threshold are taken as a second to-be-moved amount; and the target movement amount is determined based on the first to-be-moved amount and the second to-be-moved amount.
The first to-be-moved amount/second to-be-moved amount refers to the theoretical movement amount in the display interface determined by the application for the target reference point, rather than the target movement amount that finally decides how the application renders the target object on the display interface. This can be understood as follows: the preset movement amount threshold and the cumulative movement sub-increment are compared, and when the comparison results differ, the value of the determined target movement amount also differs.
Exemplarily, when the cumulative movement sub-increment of the target reference point in the x-axis direction is 0.05 and the preset movement amount threshold is 0.1, since the cumulative movement sub-increment is smaller than the preset movement amount threshold, although inputting this cumulative movement sub-increment into the preset displacement function would yield a corresponding target movement amount, that target movement amount is too small and needs to be discarded, so as to prevent the target object associated with the target reference point from producing unrealistic inertial motion; that is, the finally determined target movement amount is 0, and the target object remains relatively stationary in the display interface. Correspondingly, when the cumulative movement sub-increment of the target reference point in the x-axis direction is 0.2 and the preset movement amount threshold is 0.1, since the cumulative movement sub-increment is greater than the preset movement amount threshold, the target object associated with the target reference point needs to simulate inertial motion in the display interface; that is, the cumulative movement sub-increment 0.2 is input into the preset displacement function to obtain the corresponding target movement amount.
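The thresholding described above can be sketched as follows; this is a minimal illustration in which sub-increments at or below the threshold are simply zeroed out, matching the "target movement amount is 0" case in the example (the function names are assumptions).

```python
# Sketch of the movement amount threshold: only cumulative movement
# sub-increments above the threshold are fed to the displacement function;
# the rest are discarded so a slight camera jitter causes no motion.

def gate_and_displace(cumulative_sub_increments, displace, threshold=0.1):
    out = []
    for x in cumulative_sub_increments:
        if abs(x) > threshold:
            out.append(displace(x))  # first to-be-moved amount
        else:
            out.append(0.0)          # below threshold: motion suppressed
    return out

# Identity displacement function, just to show the gating behaviour.
moves = gate_and_displace([0.05, 0.2, -0.3], lambda x: x)  # [0.0, 0.2, -0.3]
```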
S140. Render the target object to the target display position in the current video frame to be processed based on the target movement amount.
In this embodiment, after the target movement amount of the target reference point is determined, the target display position of the target reference point in the current video frame to be processed can be determined based on the target movement amount, so that the target object is rendered to the target display position.
For example, after the target movement amount is determined, it can be delivered to the vertex shader used for rendering images, so as to determine the position at which the object associated with the target reference point finally needs to be displayed in the display interface, that is, the target display position. The target display position may be two-dimensional coordinates. The vertex shader is a programmable program that is used for image rendering, replaces the fixed rendering pipeline, and is mainly responsible for the geometric operations on the vertices of a model. Those skilled in the art should understand that only one vertex shader can be active at a time; when the vertex shader runs on the GPU, the corresponding image can be rendered in the display interface, which will not be elaborated here.
In this embodiment, the primary task of the vertex shader is to transform the target movement amount of the target reference point from the three-dimensional model space into the homogeneous clip space, that is, to convert the three-dimensional coordinates into two-dimensional coordinates corresponding to the display interface. This can be understood as follows: a target object in the display interface is composed of two faces; after the target movement amount is processed by the vertex shader, the vertex coordinates of the two triangles of a certain face of the target object can be determined. These coordinates are also obtained by performing perspective division (homogeneous division) on the basis of the homogeneous clip space, and are at least used to reflect the display position, in the display interface, of the target object associated with the target reference point. That is to say, according to the two-dimensional coordinates determined by the vertex shader, the process in which the target object undergoes inertial motion as the shooting angle changes can be rendered and displayed in the display interface.
It should be noted that the above solution of this embodiment describes the process of rendering the target object in the video frame to be processed at the current moment. However, after the current moment, the camera device may continue to move; correspondingly, the target object needs to undergo inertial motion in the pictures of subsequent video frames to be processed. Therefore, the rendering of the target object in subsequent frames can likewise be performed according to the embodiments of the present disclosure.
For example, after the target movement amount of the target reference point in the video frame to be processed at the current moment is determined, in order to ensure that the application can accurately render the inertial motion of the target object in the display interface after that moment, the cumulative increment to be called also needs to be updated based on the target movement amount, and the cumulative increment to be called is stored in the increment buffer pool, so as to determine the target movement amount of the next video frame to be processed.
Continuing with the above example, after the target movement amount of the target reference point in the video frame to be processed at the current moment is determined to be (a2, b2, c2), the cumulative increment to be called at the current moment, (Δx1-a1, Δy1-b1, Δz1-c1), also needs to be updated based on this parameter; that is, the current movement increment of the target reference point in the video frame to be processed at the current moment is added to the cumulative increment to be called and the target movement amount is subtracted from it, and the calculated (Δx1-a1+Δx2-a2, Δy1-b1+Δy2-b2, Δz1-c1+Δz2-c2) is taken as the updated cumulative increment to be called. After it is decoupled according to the three coordinate axes of the three-dimensional spatial coordinate system, the three resulting values are stored in the increment buffer pool. When the target movement amount of the target reference point in the next video frame to be processed needs to be determined, the cumulative increment to be called can be retrieved from the increment buffer pool according to the solution of this embodiment, so as to calculate the new target movement amount (a3, b3, c3) of the target reference point.
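Tying the steps of S110-S140 together, one frame of the update loop can be sketched as follows. The names are invented for the example, and the damped displacement function is illustrative only, not the function from the disclosure.

```python
# One frame of the update loop: accumulate pending increments, apply the
# (thresholded) displacement function, then store the leftover increment
# back for the next frame to be processed.

def frame_step(pending, movement_increment, displace, threshold=0.1):
    cumulative = [p + m for p, m in zip(pending, movement_increment)]
    target = [displace(c) if abs(c) > threshold else 0.0 for c in cumulative]
    new_pending = [c - t for c, t in zip(cumulative, target)]  # updated pool
    return target, new_pending

# Illustrative displacement: the object covers half of the accumulated
# distance each frame, so the leftover decays over subsequent frames.
pending = [0.0, 0.0, 0.0]
target, pending = frame_step(pending, [0.5, 0.0, 0.0], lambda c: 0.5 * c)
# target == [0.25, 0.0, 0.0], pending == [0.25, 0.0, 0.0]
```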
It should be noted that the solution of this embodiment may be executed on a mobile terminal on which a related video processing application is installed; the relevant data may also be uploaded to the cloud through the application and processed by a cloud server. After the server finishes processing, the processing result can be delivered to the corresponding terminal, so that the inertial motion of the target object is rendered in its display interface.
Exemplarily, taking the case where the target special effect mounted on the target reference point is a balloon: suppose the position of the target reference point in frame N is A and its position in frame N+1 is B, that is, the balloon special effect is expected to move from point A to point B. In practical application, however, the balloon moves slowly, and the calculated display position of the balloon special effect in frame N+1 is a point C between A and B; that is, the actual movement distance of the balloon special effect is smaller than the expected movement distance |AB|. In practice, the motion of any object has inertia: if the mobile terminal to which all video frames after frame N+2 belong does not move, the calculated balloon special effect will move to a point D, where the distance between A and D is greater than the distance between A and B; owing to the effect of the traction rope, the balloon special effect can be returned from point D to point B in subsequent frames.
In the technical solution of the embodiments of the present disclosure, the current movement increment of the target reference point relative to the current video frame to be processed is first determined, that is, the movement increment of the target reference point in the three-dimensional spatial coordinate system is determined; the target movement amount of the target reference point is then determined according to the current movement increment and the cumulative increment to be called, for example, according to the current cumulative increment and the preset displacement function; and, based on the target movement amount, the target object is rendered to the target display position in the current video frame to be processed. Thus, when the video picture changes, the motion of objects in the picture is simulated, which reduces the sense of separation between the objects in the picture and the real world, improves the quality of the picture, and also enhances the user experience.
图2为本公开实施例所提供的一种图像显示装置结构示意图,如图2所示,所述装置包 括:当前移动增量确定模块210、当前累计增量确定模块220、目标移动量确定模块230以及渲染模块240。
当前移动增量确定模块210,设置为确定目标参考点相对于当前待处理视频帧的当前移动增量,其中,所述当前移动增量是基于前一待处理视频帧目标参考点的位置信息确定的。
当前累计增量确定模块220,设置为根据所述当前移动增量以及待调用累计增量,确定所述目标参考点的当前累计增量;其中,所述待调用累计增量是基于历史待处理视频帧的历史移动增量和相应的历史目标移动量确定的。
目标移动量确定模块230,设置为根据所述当前累计增量和预设位移函数,确定所述目标参考点的目标移动量。
渲染模块240,设置为基于所述目标移动量,将目标物体渲染至所述当前待处理视频帧中的目标显示位置。
在上述技术方案的基础上,图像显示装置还包括待调用累计增量更新模块。
待调用累计增量更新模块,设置为基于所述目标移动量更新所述待调用累计增量,并将所述待调用累计增量存储至增量缓存池中,以确定下一待处理视频帧的目标移动量。
在上述技术方案的基础上,图像显示装置还包括相对位置矩阵确定模块。
相对位置矩阵确定模块,设置为确定目标参考点和渲染相机的相对位置矩阵,以根据所述相对位置矩阵,确定所述目标参考点相对于当前待处理视频帧的当前移动增量;其中,所述目标参考点用于挂载目标特效。
在上述技术方案的基础上,当前移动增量确定模块210包括相机位置信息确定单元、当前位置信息确定单元以及当前移动增量确定单元。
相机位置信息确定单元,设置为基于同时定位与地图构建算法确定渲染相机的相机位置信息。
当前位置信息确定单元,设置为根据所述相对位置矩阵和相机位置信息,确定所述目标参考点的当前位置信息。
当前移动增量确定单元,设置为根据所述当前位置信息和前一待处理视频帧所对应的历史位置信息,确定所述当前移动增量。
例如,当前移动增量确定单元,还设置为根据与所述当前位置信息所对应的当前空间坐标,以及与所述历史位置信息所对应的历史空间坐标,确定在坐标轴方向上的移动子增量;将移动子增量,作为所述当前移动增量。
在上述技术方案的基础上,当前累计增量确定模块220包括待调用累计增量调取单元以及当前累计增量确定单元。
待调用累计增量调取单元,设置为从所述增量缓存池中调取所述待调用累计增量;其中,所述待调动累计增量中包括坐标轴方向上的待调用累计子增量;基于所述待调用累计子增量和相应的移动子增量,确定所述目标参考点的在坐标轴方向上的累计移动子增量。
当前累计增量确定单元,设置为将累计移动子增量,作为所述当前累计增量。
在上述技术方案的基础上,目标移动量确定模块230包括待移动量确定单元以及目标移动量确定单元。
待移动量确定单元,设置为基于预设位移函数对大于预设移动量阈值的累计移动子增量进行处理,得到第一待移动量;以及,将未大于预设移动量阈值的累计子增量作为第二待移动量。
目标移动量确定单元,设置为基于第一待移动量和第二待移动量,确定所述目标移动量。
在上述技术方案的基础上,所述预设位移函数是基于牵引所述目标物体的牵引系数、预设风阻系数以及速度系数确定的。
在上述技术方案的基础上,渲染模块240包括目标显示位置确定单元以及渲染单元。
目标显示位置确定单元,设置为基于所述目标移动量,确定所述目标参考点在当前待处理视频帧中的目标显示位置。
渲染单元,设置为将所述目标物体渲染至所述目标显示位置。
本实施例所提供的技术方案,先确定目标参考点相对于当前待处理视频帧的当前移动增量,即确定出目标参考点在三维空间坐标系内的移动增量,再根据当前移动增量以及待调用累计增量,确定目标参考点的目标移动量,例如,根据当前累计增量和预设位移函数,确定目标参考点的目标移动量,基于目标移动量,将目标物体渲染至当前待处理视频帧的目标显示位置,从而视频的画面发生变化时,实现了对画面内物体的运动模拟,减少了画面内物体与现实世界之间的割裂感,提高了画面的质感,同时也增强了用户的使用体验。
本公开实施例所提供的图像显示装置可执行本公开任意实施例所提供的图像显示方法,具备执行方法相应的功能模块和有益效果。
值得注意的是,上述装置所包括的多个单元和模块只是按照功能逻辑进行划分的,但并不局限于上述的划分,只要能够实现相应的功能即可;另外,多个功能单元的具体名称也只是为了便于相互区分,并不用于限制本公开实施例的保护范围。
图3为本公开实施例所提供的一种电子设备的结构示意图。下面参考图3,其示出了适于用来实现本公开实施例的电子设备(例如图3中的终端设备或服务器)300的结构示意图。本公开实施例中的终端设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、车载终端(例如车载导航终端)等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。图3示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图3所示,电子设备300可以包括处理装置(例如中央处理器、图案处理器等)301,其可以根据存储在只读存储器(ROM)302中的程序或者从存储装置306加载到随机访问存储器(RAM)303中的程序而执行多种适当的动作和处理。在RAM 303中,还存储有电子设备300操作所需的多种程序和数据。处理装置301、ROM 302以及RAM 303通过总线304彼此相连。编辑/输出(I/O)接口305也连接至总线304。
通常,以下装置可以连接至I/O接口305:包括例如触摸屏、触摸板、键盘、鼠标、摄像 头、麦克风、加速度计、陀螺仪等的编辑装置306;包括例如液晶显示器(LCD)、扬声器、振动器等的输出装置307;包括例如磁带、硬盘等的存储装置308;以及通信装置309。通信装置309可以允许电子设备300与其他设备进行无线或有线通信以交换数据。虽然图3示出了具有多种装置的电子设备300,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置309从网络上被下载和安装,或者从存储装置306被安装,或者从ROM 302被安装。在该计算机程序被处理装置301执行时,执行本公开实施例的方法中限定的上述功能。
本公开实施方式中的多个装置之间所交互的消息或者信息的名称仅用于说明性的目的,而并不是用于对这些消息或信息的范围进行限制。
本公开实施例提供的电子设备与上述实施例提供的图像显示方法属于同一发明构思,未在本实施例中详尽描述的技术细节可参见上述实施例,并且本实施例与上述实施例具有相同的有益效果。
本公开实施例提供了一种计算机存储介质,其上存储有计算机程序,该程序被处理器执行时实现上述实施例所提供的图像显示方法。
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The computer-readable medium described above may be included in the electronic device described above, or may exist separately without being assembled into the electronic device.
The computer-readable medium described above carries one or more programs which, when executed by the electronic device, cause the electronic device to:
determine a current movement increment of a target reference point relative to a current video frame to be processed, wherein the current movement increment is determined based on position information of the target reference point in a previous video frame to be processed;
determine a current cumulative increment of the target reference point according to the current movement increment and a cumulative increment to be invoked, wherein the cumulative increment to be invoked is determined based on historical movement increments of historical video frames to be processed and corresponding historical target movement amounts;
determine a target movement amount of the target reference point according to the current cumulative increment and a preset displacement function; and
render a target object to a target display position in the current video frame to be processed based on the target movement amount.
Computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages or a combination thereof, the programming languages including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not in some cases constitute a limitation on the unit itself; for example, the first acquisition unit may also be described as "a unit for acquiring at least two Internet Protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, [Example One] provides an image display method, the method comprising:
determining a current movement increment of a target reference point relative to a current video frame to be processed, wherein the current movement increment is determined based on position information of the target reference point in a previous video frame to be processed;
determining a current cumulative increment of the target reference point according to the current movement increment and a cumulative increment to be invoked, wherein the cumulative increment to be invoked is determined based on historical movement increments of historical video frames to be processed and corresponding historical target movement amounts;
determining a target movement amount of the target reference point according to the current cumulative increment and a preset displacement function; and
rendering a target object to a target display position in the current video frame to be processed based on the target movement amount.
According to one or more embodiments of the present disclosure, [Example Two] provides an image display method, the method further comprising:
updating the cumulative increment to be invoked based on the target movement amount, and storing the cumulative increment to be invoked in an increment cache pool, so as to determine a target movement amount of a next video frame to be processed.
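The cache-pool update in Example Two can be sketched as follows. The specific carry-over rule (retaining the portion of the cumulative increment not yet realized as movement) and the dictionary keyed by a reference-point identifier are illustrative assumptions; the patent only states that the cumulative increment to be invoked is updated from the target movement amount and stored.

```python
# Hypothetical increment cache pool, keyed by an assumed reference-point id.
increment_cache_pool = {}

def update_cache(ref_id, cumulative, target_movement):
    """Carry the unrealized part of the cumulative increment to the next frame.

    The subtraction rule below is an assumption for illustration.
    """
    residual = tuple(c - t for c, t in zip(cumulative, target_movement))
    increment_cache_pool[ref_id] = residual
    return residual
```

For example, with a cumulative increment of `(2.0, 0.0, 0.0)` and a target movement of `(1.5, 0.0, 0.0)`, the cached residual for the next frame is `(0.5, 0.0, 0.0)`.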
According to one or more embodiments of the present disclosure, [Example Three] provides an image display method; before the determining the current movement increment of the target reference point relative to the current video frame to be processed, the method further comprises:
determining a relative position matrix of the target reference point and a rendering camera, so as to determine, according to the relative position matrix, the current movement increment of the target reference point relative to the current video frame to be processed;
wherein the target reference point is used for mounting a target special effect.
According to one or more embodiments of the present disclosure, [Example Four] provides an image display method, wherein determining the current movement increment of the target reference point relative to the current video frame to be processed comprises:
determining camera position information of the rendering camera based on a simultaneous localization and mapping algorithm;
determining current position information of the target reference point according to the relative position matrix and the camera position information; and
determining the current movement increment according to the current position information and historical position information corresponding to the previous video frame to be processed.
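The steps of Example Four can be sketched as below, assuming the relative position matrix is a 4x4 homogeneous transform from the reference point's local frame into camera space, and the SLAM-derived camera pose is a 4x4 camera-to-world transform. Both conventions, and all names, are assumptions for illustration; the patent only names the inputs.

```python
# Illustrative sketch of Example Four. Matrix conventions are assumptions.

def mat_vec4(m, v):
    """Multiply a 4x4 matrix (list of rows) by a homogeneous 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def ref_point_position(camera_pose, relative_matrix):
    """World position of the reference point (origin of its local frame)."""
    local = [0.0, 0.0, 0.0, 1.0]
    world = mat_vec4(camera_pose, mat_vec4(relative_matrix, local))
    return world[:3]

def movement_increment(current_pos, previous_pos):
    """Current movement increment from the previous frame's position."""
    return [c - p for c, p in zip(current_pos, previous_pos)]
```

With an identity camera pose and a relative matrix translating by (1, 2, 3), the reference point lands at `[1.0, 2.0, 3.0]`, and differencing against the previous frame's position yields the per-axis increment.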
According to one or more embodiments of the present disclosure, [Example Five] provides an image display method, wherein determining the current movement increment according to the current position information and the historical position information corresponding to the previous video frame to be processed comprises:
determining movement sub-increments in coordinate-axis directions according to current spatial coordinates corresponding to the current position information and historical spatial coordinates corresponding to the historical position information; and
taking the movement sub-increments as the current movement increment.
According to one or more embodiments of the present disclosure, [Example Six] provides an image display method, wherein determining the current cumulative increment of the target reference point according to the current movement increment and the cumulative increment to be invoked comprises:
retrieving the cumulative increment to be invoked from the increment cache pool, wherein the cumulative increment to be invoked includes cumulative sub-increments to be invoked in coordinate-axis directions; determining cumulative movement sub-increments of the target reference point in the coordinate-axis directions based on the cumulative sub-increments to be invoked and the corresponding movement sub-increments; and
taking the cumulative movement sub-increments as the current cumulative increment.
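The per-axis accumulation in Example Six can be sketched as a simple element-wise sum, with each coordinate axis keeping its own cumulative sub-increment. The axis-keyed dictionaries are an illustrative assumption about the data layout:

```python
# Per-axis accumulation from Example Six (axis names are illustrative).

def accumulate(cached_sub, movement_sub):
    """Add each axis's movement sub-increment to its cached sub-increment.

    cached_sub, movement_sub: dicts mapping axis name to a float;
    a missing axis is treated as zero.
    """
    return {axis: cached_sub.get(axis, 0.0) + movement_sub.get(axis, 0.0)
            for axis in set(cached_sub) | set(movement_sub)}
```

For example, `accumulate({"x": 1.0, "y": 0.0}, {"x": 0.5, "z": 2.0})` gives `{"x": 1.5, "y": 0.0, "z": 2.0}`.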
According to one or more embodiments of the present disclosure, [Example Seven] provides an image display method, wherein determining the target movement amount of the target reference point according to the current cumulative increment and the preset displacement function comprises:
processing, based on the preset displacement function, cumulative movement sub-increments greater than a preset movement amount threshold to obtain a first amount to be moved; and
taking cumulative sub-increments not greater than the preset movement amount threshold as a second amount to be moved; and
determining the target movement amount based on the first amount to be moved and the second amount to be moved.
According to one or more embodiments of the present disclosure, [Example Eight] provides an image display method, wherein the preset displacement function is determined based on a traction coefficient for pulling the target object, a preset wind resistance coefficient, and a velocity coefficient.
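Examples Seven and Eight together describe a thresholded displacement: sub-increments above a preset movement amount threshold are damped by the displacement function, while smaller ones pass through unchanged. The patent does not give the functional form combining the traction, wind resistance, and velocity coefficients, so the exponential damping below is purely an illustrative assumption, as are the default coefficient values:

```python
import math

# Assumed form of the preset displacement function; the actual combination
# of the traction, wind-resistance, and velocity coefficients is not
# specified in the patent.
def preset_displacement(x, traction=0.8, wind_resistance=0.1, velocity=1.0):
    return traction * x * math.exp(-wind_resistance * velocity * abs(x))

def target_sub_movement(c, threshold=0.5):
    """Example Seven: damp sub-increments above the threshold (first amount
    to be moved); pass smaller ones through unchanged (second amount)."""
    if abs(c) > threshold:
        return preset_displacement(c)
    return c
```

A sub-increment of 0.3 is returned unchanged, while a sub-increment of 2.0 is damped to a smaller movement, which keeps large frame-to-frame jumps from translating directly into large on-screen displacements.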
According to one or more embodiments of the present disclosure, [Example Nine] provides an image display method, wherein rendering the target object to the target display position in the current video frame to be processed based on the target movement amount comprises:
determining, based on the target movement amount, the target display position of the target reference point in the current video frame to be processed; and
rendering the target object to the target display position.
According to one or more embodiments of the present disclosure, [Example Ten] provides an image display apparatus, the apparatus comprising:
a current movement increment determination module, configured to determine a current movement increment of a target reference point relative to a current video frame to be processed, wherein the current movement increment is determined based on position information of the target reference point in a previous video frame to be processed;
a current cumulative increment determination module, configured to determine a current cumulative increment of the target reference point according to the current movement increment and a cumulative increment to be invoked, wherein the cumulative increment to be invoked is determined based on historical movement increments of historical video frames to be processed and corresponding historical target movement amounts;
a target movement amount determination module, configured to determine a target movement amount of the target reference point according to the current cumulative increment and a preset displacement function; and
a rendering module, configured to render a target object to a target display position in the current video frame to be processed based on the target movement amount.
In addition, although various operations are depicted in a particular order, this should not be understood as requiring that the operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.

Claims (12)

  1. An image display method, comprising:
    determining a current movement increment of a target reference point relative to a current video frame to be processed, wherein the current movement increment is determined based on position information of the target reference point in a previous video frame to be processed;
    determining a current cumulative increment of the target reference point according to the current movement increment and a cumulative increment to be invoked, wherein the cumulative increment to be invoked is determined based on historical movement increments of historical video frames to be processed and corresponding historical target movement amounts;
    determining a target movement amount of the target reference point according to the current cumulative increment and a preset displacement function; and
    rendering a target object to a target display position in the current video frame to be processed based on the target movement amount.
  2. The method according to claim 1, further comprising:
    updating the cumulative increment to be invoked based on the target movement amount, and storing the cumulative increment to be invoked in an increment cache pool, so as to determine a target movement amount of a next video frame to be processed.
  3. The method according to claim 1, before the determining the current movement increment of the target reference point relative to the current video frame to be processed, further comprising:
    determining a relative position matrix of the target reference point and a rendering camera, so as to determine, according to the relative position matrix, the current movement increment of the target reference point relative to the current video frame to be processed;
    wherein the target reference point is used for mounting a target special effect.
  4. The method according to claim 3, wherein the determining the current movement increment of the target reference point relative to the current video frame to be processed comprises:
    determining camera position information of the rendering camera based on a simultaneous localization and mapping algorithm;
    determining current position information of the target reference point according to the relative position matrix and the camera position information; and
    determining the current movement increment according to the current position information and historical position information corresponding to the previous video frame to be processed.
  5. The method according to claim 4, wherein the determining the current movement increment according to the current position information and the historical position information corresponding to the previous video frame to be processed comprises:
    determining movement sub-increments in coordinate-axis directions according to current spatial coordinates corresponding to the current position information and historical spatial coordinates corresponding to the historical position information; and
    taking the movement sub-increments as the current movement increment.
  6. The method according to claim 2, wherein the determining the current cumulative increment of the target reference point according to the current movement increment and the cumulative increment to be invoked comprises:
    retrieving the cumulative increment to be invoked from the increment cache pool, wherein the cumulative increment to be invoked includes cumulative sub-increments to be invoked in coordinate-axis directions;
    determining cumulative movement sub-increments of the target reference point in the coordinate-axis directions based on the cumulative sub-increments to be invoked and corresponding movement sub-increments; and
    taking the cumulative movement sub-increments as the current cumulative increment.
  7. The method according to claim 1, wherein the determining the target movement amount of the target reference point according to the current cumulative increment and the preset displacement function comprises:
    processing, based on the preset displacement function, cumulative movement sub-increments greater than a preset movement amount threshold to obtain a first amount to be moved;
    taking cumulative sub-increments not greater than the preset movement amount threshold as a second amount to be moved; and
    determining the target movement amount based on the first amount to be moved and the second amount to be moved.
  8. The method according to claim 7, wherein the preset displacement function is determined based on a traction coefficient for pulling the target object, a preset wind resistance coefficient, and a velocity coefficient.
  9. The method according to claim 1, wherein the rendering the target object to the target display position in the current video frame to be processed based on the target movement amount comprises:
    determining, based on the target movement amount, the target display position of the target reference point in the current video frame to be processed; and
    rendering the target object to the target display position.
  10. An image display apparatus, comprising:
    a current movement increment determination module, configured to determine a current movement increment of a target reference point relative to a current video frame to be processed, wherein the current movement increment is determined based on position information of the target reference point in a previous video frame to be processed;
    a current cumulative increment determination module, configured to determine a current cumulative increment of the target reference point according to the current movement increment and a cumulative increment to be invoked, wherein the cumulative increment to be invoked is determined based on historical movement increments of historical video frames to be processed and corresponding historical target movement amounts;
    a target movement amount determination module, configured to determine a target movement amount of the target reference point according to the current cumulative increment and a preset displacement function; and
    a rendering module, configured to render a target object to a target display position in the current video frame to be processed based on the target movement amount.
  11. An electronic device, comprising:
    one or more processors; and
    a storage apparatus configured to store one or more programs,
    wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image display method according to any one of claims 1-9.
  12. A storage medium containing computer-executable instructions, the computer-executable instructions, when executed by a computer processor, being used to execute the image display method according to any one of claims 1-9.
PCT/CN2023/074499 2022-02-11 2023-02-06 Image display method and apparatus, electronic device, and storage medium WO2023151524A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210130360.8 2022-02-11
CN202210130360.8A CN114494328B (zh) 2022-02-11 2022-02-11 Image display method and apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2023151524A1 true WO2023151524A1 (zh) 2023-08-17

Family

ID=81480325

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/074499 WO2023151524A1 (zh) 2022-02-11 2023-02-06 Image display method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN114494328B (zh)
WO (1) WO2023151524A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494328B (zh) * 2022-02-11 2024-01-30 Beijing Zitiao Network Technology Co., Ltd. Image display method and apparatus, electronic device, and storage medium
CN115131471A (zh) * 2022-08-05 2022-09-30 Beijing Zitiao Network Technology Co., Ltd. Image-based animation generation method and apparatus, device, and storage medium
CN116301472A (zh) * 2022-09-07 2023-06-23 Beijing Lingxi Weiguang Technology Co., Ltd. Augmented reality picture processing method and apparatus, device, and readable medium
CN116704075A (zh) * 2022-10-14 2023-09-05 Honor Device Co., Ltd. Image processing method, device, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10297026B1 (en) * 2016-11-29 2019-05-21 Amazon Technologies, Inc. Tracking of a dynamic container in a video
CN111464749A (zh) * 2020-05-07 2020-07-28 Guangzhou Kugou Computer Technology Co., Ltd. Method, apparatus, device, and storage medium for image synthesis
CN113487474A (zh) * 2021-07-02 2021-10-08 Hangzhou Xiaoying Innovation Technology Co., Ltd. Content-related GPU real-time particle special effect method
CN114494328A (zh) * 2022-02-11 2022-05-13 Beijing Zitiao Network Technology Co., Ltd. Image display method and apparatus, electronic device, and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9936128B2 (en) * 2015-05-20 2018-04-03 Google Llc Automatic detection of panoramic gestures
CN110515452B (zh) * 2018-05-22 2022-02-22 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and apparatus, storage medium, and computer device
CN109448117A (zh) * 2018-11-13 2019-03-08 Beijing Kuangshi Technology Co., Ltd. Image rendering method and apparatus, and electronic device
CN112351222B (zh) * 2019-08-09 2022-05-24 Beijing ByteDance Network Technology Co., Ltd. Image special effect processing method and apparatus, electronic device, and computer-readable storage medium
CN113810587B (zh) * 2020-05-29 2023-04-18 Huawei Technologies Co., Ltd. Image processing method and apparatus
CN112132940A (zh) * 2020-09-16 2020-12-25 Beijing SenseTime Technology Development Co., Ltd. Display method and apparatus, display device, and storage medium
CN113691733B (zh) * 2021-09-18 2023-04-18 Beijing Baidu Netcom Science and Technology Co., Ltd. Video jitter detection method and apparatus, electronic device, and storage medium
CN113989173A (zh) * 2021-10-25 2022-01-28 Beijing ByteDance Network Technology Co., Ltd. Video fusion method and apparatus, electronic device, and storage medium


Also Published As

Publication number Publication date
CN114494328A (zh) 2022-05-13
CN114494328B (zh) 2024-01-30

Similar Documents

Publication Publication Date Title
WO2023151524A1 (zh) Image display method and apparatus, electronic device, and storage medium
US11288857B2 (en) Neural rerendering from 3D models
JP2022528432A (ja) Hybrid rendering
JP2024505995A (ja) Special effect display method, apparatus, device, and medium
JP2003533815A (ja) Browser system and method of using the same
KR102590102B1 (ko) Augmented reality-based display method, device, and storage medium
WO2023179346A1 (zh) Special effect image processing method and apparatus, electronic device, and storage medium
WO2022088928A1 (zh) Elastic object rendering method and apparatus, device, and storage medium
CN112053449A (zh) Augmented reality-based display method, device, and storage medium
WO2023151525A1 (zh) Method and apparatus for generating special effect video, electronic device, and storage medium
CN112672185A (zh) Augmented reality-based display method and apparatus, device, and storage medium
JP2023538257A (ja) Image processing method, apparatus, electronic device, and storage medium
WO2023221409A1 (zh) Subtitle rendering method, apparatus, device, and medium for virtual reality space
WO2023202358A1 (zh) Motion control method and device for virtual object
WO2023121569A2 (zh) Particle special effect rendering method, apparatus, device, and storage medium
CN110930492B (zh) Model rendering method and apparatus, computer-readable medium, and electronic device
CN111652675A (zh) Display method and apparatus, and electronic device
WO2022012349A1 (zh) Animation processing method and apparatus, electronic device, and storage medium
WO2023246302A1 (zh) Subtitle display method, apparatus, device, and medium
CN114067030A (zh) Dynamic fluid effect processing method and apparatus, electronic device, and readable medium
WO2023207354A1 (zh) Special effect video determination method and apparatus, electronic device, and storage medium
WO2023174087A1 (zh) Special effect video generation method and apparatus, device, and storage medium
WO2023025085A1 (zh) Video processing method, apparatus, device, medium, and program product
CN114116081B (zh) Interactive dynamic fluid effect processing method and apparatus, and electronic device
CN114049403A (zh) Multi-angle three-dimensional face reconstruction method and apparatus, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23752304

Country of ref document: EP

Kind code of ref document: A1