CN110769129B - Image processing method, image processing device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number: CN110769129B
Application number: CN201910974918.9A
Authority: CN (China)
Prior art keywords: display area, display, matrix, parameter, acquiring
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN110769129A (en)
Inventors: 吕绍辉, 倪光耀
Current assignee: Beijing ByteDance Network Technology Co Ltd (the listed assignees may be inaccurate)
Original assignee: Beijing ByteDance Network Technology Co Ltd
Legal events: application filed by Beijing ByteDance Network Technology Co Ltd; priority to CN201910974918.9A; publication of CN110769129A; application granted; publication of CN110769129B; anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/14: Picture signal circuitry for video frequency region
    • H04N 5/144: Movement detection
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/81: Monomedia components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The image processing method comprises: acquiring a video image from a video source in response to receiving a first trigger signal; acquiring a first display area, wherein the first display area is used for displaying a plurality of video frames of the video image; acquiring a motion trail position parameter of the first display area; acquiring a motion trail time parameter of the first display area, wherein the motion trail time parameter is the time length from the moment the first display area is displayed on a screen to the current time; calculating the display position of the first display area according to the motion trail position parameter and the motion trail time parameter; and rendering the video image in the first display area according to the display position of the first display area. This method solves the prior-art technical problem that video effects cannot be generated simply and quickly.

Description

Image processing method, image processing device, electronic equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of computer networks and the popularization of smartphones, ordinary users are no longer satisfied with expressing their emotions through static pictures and plain text alone. Video, which presents richer and more varied content and forms and delivers an intuitive visual experience, is deeply loved by users and has become increasingly popular, and it is now a growing trend for ordinary users to create original videos. On the other hand, plain self-shot videos tend to feel flat and dull, while the video special effects used in film and television works are ever richer, making their forms of expression more diverse; such effects are an important support and guarantee of those works' success.
However, existing video special effects are generally produced by first recording a video and then adding effects in post-production; the resulting effect is fixed and simply plays to its end according to preset timing logic. Moreover, the threshold of post-production is high, so an ordinary user can neither generate an effect quickly nor produce a complex one. How to generate video effects simply and rapidly has therefore become an urgent technical problem.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including:
in response to receiving a first trigger signal, acquiring a video image from a video source, wherein the video image comprises a plurality of video frames;
acquiring a first display area, wherein the first display area is used for displaying a plurality of video frames of the video image;
acquiring motion trail position parameters of the first display area, wherein the motion trail position parameters comprise an initial position, a control position and a terminal position;
acquiring a motion trail time parameter of the first display area, wherein the motion trail time parameter is the time length from the time when the first display area is displayed on a screen to the current time;
calculating the display position of the first display area according to the motion trail position parameter and the motion trail time parameter;
rendering the video image in the first display area according to the display position of the first display area.
In a second aspect, an embodiment of the present disclosure provides an image processing apparatus, including:
a video image acquisition module, configured to acquire a video image from a video source in response to receiving a first trigger signal, wherein the video image comprises a plurality of video frames;
a first display area acquisition module, configured to acquire a first display area, where the first display area is used to display the plurality of video frames of the video image;
the position parameter acquisition module is used for acquiring motion trail position parameters of the first display area, wherein the motion trail position parameters comprise an initial position, a control position and a terminal position;
a time parameter obtaining module, configured to obtain a motion trajectory time parameter of the first display area, where the motion trajectory time parameter is a time length from a time when the first display area is displayed on a screen to a current time;
the display position calculation module is used for calculating the display position of the first display area according to the motion track position parameter and the motion track time parameter;
and the image rendering module is used for rendering the video image in the first display area according to the display position of the first display area.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image processing method of any of the preceding first aspects.
In a fourth aspect, the present disclosure provides a non-transitory computer-readable storage medium, which stores computer instructions for causing a computer to execute the image processing method according to any one of the foregoing first aspects.
The present disclosure provides an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The image processing method comprises: acquiring a video image from a video source in response to receiving a first trigger signal; acquiring a first display area, wherein the first display area is used for displaying a plurality of video frames of the video image; acquiring a motion trail position parameter of the first display area; acquiring a motion trail time parameter of the first display area, wherein the motion trail time parameter is the time length from the moment the first display area is displayed on a screen to the current time; calculating the display position of the first display area according to the motion trail position parameter and the motion trail time parameter; and rendering the video image in the first display area according to the display position of the first display area. This method solves the prior-art technical problem that video effects cannot be generated simply and quickly.
The foregoing is a summary of the present disclosure. To promote a clear understanding of its technical means, it should be noted that the present disclosure may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a flowchart of an embodiment of an image processing method provided by the present disclosure;
FIG. 2 is a flow diagram of a further example of an embodiment of an image processing method provided by the present disclosure;
FIG. 3 is a flow diagram of a further example of an embodiment of an image processing method provided by the present disclosure;
fig. 4 is a schematic structural diagram of an embodiment of an image processing apparatus provided in an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand them as "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
Fig. 1 is a flowchart of an embodiment of an image processing method provided in an embodiment of the present disclosure. The image processing method provided in this embodiment may be executed by an image processing apparatus, which may be implemented as software or as a combination of software and hardware, and which may be integrated in a device of an image processing system, such as an image processing server or an image processing terminal device. As shown in fig. 1, the method comprises the following steps:
step S101, responding to the receiving of a first trigger signal, and acquiring a video image from a video source;
optionally, the first trigger signal is a trigger signal received through a human-computer interaction interface of a terminal device. The terminal device may be an intelligent terminal device such as a smartphone or a tablet computer, and the first trigger signal is received through an input-capable component of the terminal device, such as a touch screen, a keyboard, or a camera. Typically, the first trigger signal is generated by the user tapping a specific area of the touch screen; for example, if the user taps a start button for an image effect on the touch screen, the first trigger signal is generated and the terminal device receives it through the touch screen.
Alternatively, the first trigger signal is triggered by an event, such as a timer timeout. Typically, a 5-second timer may be set, and when the timer expires, the first trigger signal is triggered. It is understood that the first trigger signal may be triggered by various events and is not limited to the touch-screen tap and timer-timeout events above, which are not described again here.
Optionally, in this step, the video source is an image sensor, and the video image is a video image received from the image sensor. The image sensor refers to various devices capable of acquiring images, and typical image sensors are video cameras, cameras and the like. In this embodiment, the image sensor may be a camera on the terminal device, such as a front-facing or rear-facing camera on a smart phone, and an image acquired by the camera may be directly displayed on a display screen of the smart phone.
Optionally, in this step, the video source is a memory, the video image is a video image file received from the memory, and the video image file is a video recorded in advance and stored in the memory. Optionally, the memory is a local memory, a network memory, or a removable memory, and the video image is retrieved from its address in the memory.
The video image comprises a plurality of video frames which, played at a certain speed, form the video image. Typically 30 video frames are played per second, so during playback the current video frame is replaced roughly every 33 milliseconds; playing the video frames in time order forms a dynamic video image.
Step S102, acquiring a first display area;
wherein the first display area is for displaying a plurality of video frames of the video image.
Optionally, the first display area is an area having a shape and a frame. Typically, the first display area may be in the shape of a rectangular strip of film, with the middle area of the film strip used for displaying the video frames of the video image. In this step, acquiring the first display area includes acquiring an initial position parameter of the first display area, where the initial position parameter is used to determine the range of the first display area, and that range is used to display the video frames of the video image. Typical shapes are a rectangle, a circle, and so on. Taking a rectangular film strip as an example, the shape attribute parameters of the first display area are the coordinates of the four corner points of the rectangle, and the range of the first display area can be determined from those four corner coordinates. The video frames of the video image are all displayed in the first display area; that is, the video image is not displayed directly on the whole screen but within the first display area on the screen, and the initial position parameter of the first display area represents its position on the screen.
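As an illustration of how corner coordinates determine the range of a rectangular display area, the following sketch (the function name and example values are hypothetical, not taken from the patent) derives the on-screen range from the four corner points:

```python
# Hypothetical sketch: a rectangular first display area whose range is
# determined by the coordinates of its four corner points.

def display_area_bounds(corners):
    """Given the four corner points (x, y) of a rectangular display area,
    return its range as (min_x, min_y, max_x, max_y)."""
    xs = [p[0] for p in corners]
    ys = [p[1] for p in corners]
    return (min(xs), min(ys), max(xs), max(ys))

# A 300x400 rectangle whose top-left corner sits at (100, 200) on screen.
corners = [(100, 200), (400, 200), (400, 600), (100, 600)]
print(display_area_bounds(corners))  # (100, 200, 400, 600)
```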
Step S103, acquiring a motion track position parameter of the first display area;
optionally, the motion trajectory position parameters include an initial position, a control position, and an end position. In this alternative embodiment, the initial, control, and end positions are the absolute positions of the motion trajectory, not its position on the screen. Typically, the initial position POS0 is (0, 0), the control position POS1 is (X1, Y1), and the end position POS2 is (X2, Y2). The initial position is the starting point of the motion trajectory, the control position is a control point that shapes the motion trajectory, and the end position is the end point of the motion trajectory; the motion trajectory of the first display area can be calculated from these three position parameters. Optionally, the three motion trajectory position parameters are preset.
It can be understood that the motion trajectory position parameters may include more or fewer parameters: if the motion trajectory is a straight line, only an initial position and an end position are needed, while a more complex nonlinear trajectory requires more control points, which is not described again here.
Step S104, acquiring a motion trail time parameter of the first display area;
wherein the motion trail time parameter is a time length from a time when the first display area is displayed on the screen to a current time.
After the initial position parameter of the first display area is acquired in step S102, the first display area is displayed on the screen; the time length from the moment it is first displayed to the current time is the motion trajectory time parameter.
Optionally, the motion trajectory time parameter may be calculated by the first trigger signal, that is, a time length from when the first trigger signal is received to a current time is the motion trajectory time parameter.
It can be understood that the motion trajectory time parameter may be obtained in any manner to achieve different effects, for example, to achieve the effect of delaying movement, a delay value may be subtracted from the obtained time length to obtain an actual motion trajectory time parameter, and so on, which is not described herein again.
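The delayed-movement variant mentioned above can be sketched as follows (a minimal illustration; the function name and the clamping to zero are assumptions, added so the area stays put until the delay elapses):

```python
def trajectory_time(now, shown_at, delay=0.0):
    """Motion trajectory time parameter: the elapsed time since the first
    display area appeared on screen, optionally minus a delay value to
    produce a delayed-movement effect. Clamped at 0.0 (an assumption) so
    the area does not move before the delay has elapsed."""
    return max(0.0, (now - shown_at) - delay)

print(trajectory_time(5.0, 2.0))             # 3.0: three seconds on screen
print(trajectory_time(5.0, 2.0, delay=1.0))  # 2.0: motion lags by one second
```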
Step S105, calculating the display position of the first display area according to the motion trail position parameter and the motion trail time parameter;
optionally, after the motion trajectory position parameter is acquired in step S103 and the motion trajectory time parameter is acquired in step S104, the display position of the first display area at the current time point may be calculated according to a motion trajectory formula. The motion trajectory satisfies:
POS=f(p,t)
that is, the display position of the first display area is a function of the motion trajectory position parameter p and the motion trajectory time parameter t.
Optionally, the display position of the first display area is calculated by the following motion trajectory equation:
POS = POS2 * t^2 + 2 * POS1 * (1-t) * t + POS0 * (1-t)^2 (Formula 1)
where POS0 is the initial position of the motion trajectory, POS1 is the intermediate control point of the motion trajectory, POS2 is the end position of the motion trajectory, and POS is the position on the motion trajectory at the current time t. The time t is normalized: the time from the initial position to the end position of the motion trajectory is normalized to 1, so the current time t actually represents the proportion of the elapsed time to the overall required time.
Optionally, POS is directly the display position of the first display area, where POS0 gives the initial position coordinates of each pixel point in the first display area, POS2 gives the end position coordinates of each pixel point in the first display area, and POS1 gives the coordinates of the intermediate control point for each pixel point in the first display area. POS0, POS1, and POS2 can be represented as matrices, so the position of each pixel point in the first display area at the current time point can be obtained directly. In this alternative embodiment, the actual display position of the first display area is POS3 = POS.
Optionally, POS is a relative display position of the first display area, where POS0 is the point (0, 0), POS1 and POS2 are preset, and the POS calculated by Formula 1 is the offset between the position at the current time point and POS0. In this way the exact position of each pixel point in the first display area need not be known: the actual position of the first display area on the screen is obtained simply by adding the offset to the initial on-screen position of the first display area. In this alternative embodiment, the actual display position of the first display area is POS3 = POS4 + POS, where POS4 is the initial position of the first display area on the screen.
As will be appreciated, the coordinates of POS0, POS1, POS2, etc. above are merely examples and are not to be construed as limiting the disclosure; POS0, POS1, POS2, POS3, and POS4 may also be matrices of coordinates, where each point in the matrix represents the coordinate of a pixel point in the first display area.
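Formula 1 above is the standard quadratic Bezier interpolation between POS0 and POS2 shaped by the control point POS1. A minimal sketch of evaluating it per coordinate (function name hypothetical):

```python
def bezier_position(pos0, pos1, pos2, t):
    """Quadratic Bezier (Formula 1):
    POS = POS0*(1-t)^2 + 2*POS1*(1-t)*t + POS2*t^2, with t normalized to [0, 1]."""
    x = pos0[0] * (1 - t) ** 2 + 2 * pos1[0] * (1 - t) * t + pos2[0] * t ** 2
    y = pos0[1] * (1 - t) ** 2 + 2 * pos1[1] * (1 - t) * t + pos2[1] * t ** 2
    return (x, y)

# At t = 0 the curve sits at the initial position, at t = 1 at the end position,
# and in between it is pulled toward the control point.
print(bezier_position((0, 0), (50, 100), (100, 0), 0.5))  # (50.0, 50.0)
```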
Optionally, before the calculating the display position of the first display area according to the motion trajectory position parameter and the motion trajectory time parameter, the method further includes:
step S201, obtaining a deflection angle of the first display area;
step S202, acquiring a deflection matrix according to the deflection angle.
In this embodiment, in addition to moving along the motion trajectory, the first display area itself is deflected. Optionally, the deflection is related to the motion trajectory time parameter, and obtaining the deflection angle of the first display area includes: calculating the deflection angle of the first display area at the current time according to the motion trajectory time parameter. Optionally, this includes calculating the deflection angle at the current time according to the following formula:
Angle = Angle_l - M * (1-t) (Formula 2)
where Angle is the current deflection angle, Angle_l is the deflection angle of the first display area at the end of the motion, Angle and Angle_l are both normalized angle values, 0 < M < 1, and t is the motion trajectory time parameter. Typically, when Angle_l = 0.45 and M = 0.6:
Angle = 0.45 - 0.6 * (1-t)
From Formula 2 above, the deflection angle of the first display area corresponding to each time point can be calculated. From the deflection angle, the deflection matrix can then be calculated; for a deflection angle Angle this is the standard 2D rotation matrix (reconstructed here, as the source renders it only as an image):

rotation = | cos(Angle)  -sin(Angle) |
           | sin(Angle)   cos(Angle) |
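A sketch combining Formula 2 with the deflection matrix. Treating the normalized angle as a fraction of a full turn is an assumption, since the patent does not state its normalization convention; the function names are also hypothetical:

```python
import math

def deflection_angle(angle_end, m, t):
    """Formula 2: Angle = Angle_l - M * (1 - t), with normalized time t."""
    return angle_end - m * (1 - t)

def deflection_matrix(angle_norm):
    """2x2 rotation matrix for a normalized angle.
    Assumption: a normalized angle of 1.0 corresponds to a full turn (2*pi)."""
    a = angle_norm * 2 * math.pi
    return [[math.cos(a), -math.sin(a)],
            [math.sin(a),  math.cos(a)]]

# With the typical values Angle_l = 0.45 and M = 0.6 from the text:
print(deflection_angle(0.45, 0.6, 1.0))  # 0.45 at the end of the motion
```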
Then, in this embodiment, calculating the display position of the first display area according to the motion trajectory position parameter and the motion trajectory time parameter includes:
calculating the display position of the first display area according to the motion trajectory position parameter, the motion trajectory time parameter, and the deflection matrix.
That is, POS3 = POS * rotation, or POS3 = POS4 * rotation + POS.
Optionally, the image processing is based on OpenGL, whose normalized coordinate space has an aspect ratio of 1:1, while the aspect ratio of the first display area is not necessarily 1:1. It is therefore necessary to normalize the first display area, deflect it, and then scale it back to compensate to the original aspect ratio of the first display area. Optionally, before calculating the display position of the first display area according to the motion trajectory position parameter, the motion trajectory time parameter, and the deflection matrix, the method further includes:
step S301, acquiring the width-height ratio of the first display area;
step S302, a proportion matrix and a compensation matrix of the first display area are obtained according to the width-to-height ratio.
Wherein the scale matrix is used to normalize the first display area, and the compensation matrix is used to scale the first display area after normalization back to the aspect ratio.
Specifically, the method comprises the following steps:
acquiring the width-to-height ratio W to H of the first display area;
obtaining a proportion matrix and a compensation matrix of the first display area according to the width-to-height ratio. The source renders both matrices only as images; assuming the y coordinate is the one rescaled, diagonal matrices of the following form match the description, with the proportion matrix:

ratio = | 1    0  |
        | 0   W/H |

and the compensation matrix:

ratio_inv = | 1    0  |
            | 0   H/W |
In this optional embodiment, calculating the display position of the first display area according to the motion trajectory position parameter, the motion trajectory time parameter, and the deflection matrix includes:
and calculating the display position of the first display area according to the motion track position parameter, the motion track time parameter, the deflection matrix, the proportion matrix and the compensation matrix.
Specifically, POS3 = POS * ratio * rotation * ratio_inv, or POS3 = POS4 * ratio * rotation * ratio_inv + POS.
Optionally, before calculating the display position of the first display area according to the motion trajectory position parameter, the motion trajectory time parameter, the deflection matrix, the proportion matrix, and the compensation matrix, the method further includes: acquiring a display scaling factor of the first display area, where the display scaling factor is used to determine the display size of the first display area. The scaling factor may also be obtained according to the motion trajectory time parameter; specifically, the scaling factor corresponds to, or is in a functional relationship with, the motion trajectory time parameter, so that the scaling factor at the current time can be obtained directly or calculated and then used to determine the size of the first display area at the current time. In this embodiment, calculating the display position of the first display area according to the motion trajectory position parameter, the motion trajectory time parameter, the deflection matrix, the proportion matrix, and the compensation matrix then includes: calculating the display position of the first display area according to the motion trajectory position parameter, the motion trajectory time parameter, the deflection matrix, the proportion matrix, the compensation matrix, and the display scaling factor. Specifically, with a scaling factor scale (any positive number): POS3 = POS * ratio * rotation * ratio_inv * scale, or POS3 = POS4 * ratio * rotation * ratio_inv * scale + POS.
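The chain POS3 = POS4 * ratio * rotation * ratio_inv * scale + POS can be sketched per point as below. The diagonal forms of ratio and ratio_inv, the use of radians for the angle, and all function names are assumptions, not taken from the patent:

```python
import math

def mat_vec(m, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

def display_position(pos4, pos, angle, w_over_h, scale=1.0):
    """POS3 = POS4 * ratio * rotation * ratio_inv * scale + POS for one point:
    normalize the area to 1:1, deflect it, compensate back to the original
    width-to-height ratio, resize by the display scaling factor, then add
    the trajectory offset POS from Formula 1. Angle is in radians here."""
    ratio = [[1.0, 0.0], [0.0, w_over_h]]            # assumed diagonal form
    ratio_inv = [[1.0, 0.0], [0.0, 1.0 / w_over_h]]  # its inverse
    rotation = [[math.cos(angle), -math.sin(angle)],
                [math.sin(angle),  math.cos(angle)]]
    p = mat_vec(ratio, pos4)
    p = mat_vec(rotation, p)
    p = mat_vec(ratio_inv, p)
    return (p[0] * scale + pos[0], p[1] * scale + pos[1])

# With no deflection the chain reduces to POS3 = POS4 * scale + POS.
print(display_position((3.0, 4.0), (1.0, 1.0), 0.0, 2.0))  # (4.0, 5.0)
```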
With the above-described embodiments, the position of each pixel in the first display area at each time point over a period of time can be calculated, including both its movement and its rotation.
Step S106, rendering the video image in the first display area according to the display position of the first display area.
In this step, the video image acquired from the video source is rendered at the display position of the first display area. Specifically, the video image is transformed so that its size and position exactly match those of the first display area: since the original size of the video image is the same as the original size of the first display area, applying the same processing used on the first display area to the video image renders it into the first display area.
This embodiment achieves the effect of the video image moving along with the first display area. It is understood that what is shown in the above embodiments may be only one segment of the whole motion; that is, the first display area may move through multiple segments, whose motion trajectories may be the same or different and whose deflection angles may be the same or different. In that case, a global motion trajectory time parameter is set to record the time of the whole motion, while the motion trajectory time parameter above serves as a local time parameter recording the time within a segment. The duration of each segment may differ, but after normalization the time parameter of each segment is processed in the same way. For example, if the whole motion lasts 12 seconds and is divided into 4 segments of 1-3 seconds, 3-5 seconds, 5-8 seconds, and 8-12 seconds, each segment's time is normalized and a different trajectory formula may be used per segment; only the global time needs to be tracked, and when it reaches 3, 5, or 8 seconds the trajectory formula is switched, which is not described again here. It can also be understood that the image processing method may include multiple display areas, such as a first, second, third, and fourth display area, each with its own initial motion trajectory position parameter and each able to display a video image acquired from the video source; alternatively, the first display area may include multiple sub display areas, the range of each sub display area being determined by its own initial motion trajectory position parameter, with each sub display area able to display the video image.
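The global-to-local timing described above can be sketched as follows (helper name hypothetical). It maps a global time to the index of the current motion segment and that segment's normalized time t, using the 1-3 s, 3-5 s, 5-8 s, and 8-12 s split from the example:

```python
def segment_time(global_t, boundaries):
    """Map a global time to (segment_index, local normalized t) for
    consecutive motion segments, e.g. boundaries [1, 3, 5, 8, 12] for
    segments covering 1-3 s, 3-5 s, 5-8 s, and 8-12 s."""
    for i in range(len(boundaries) - 1):
        start, end = boundaries[i], boundaries[i + 1]
        last = i == len(boundaries) - 2
        if start <= global_t < end or (last and global_t == end):
            return i, (global_t - start) / (end - start)
    raise ValueError("global time outside all segments")

print(segment_time(4.0, [1, 3, 5, 8, 12]))  # (1, 0.5): halfway through 3-5 s
```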
Optionally, to achieve richer effects, after the step S106, the method further includes:
step S107, in response to receiving a second trigger signal, fixedly rendering a video frame of the video image currently played to the first display area.
In this optional embodiment, a frame of the video image is captured, and the captured frame is fixedly rendered in the first display area. Specifically, the second trigger signal may be set to fire when the time reaches a predetermined threshold or when a predetermined action is detected in the video image. Typically, for example, a video frame of the video image is captured when the global time reaches 3 seconds and is rendered in the first display area; from then on, the live video image is no longer displayed in the first display area, and the captured frame is displayed persistently instead, achieving the effect of taking a photograph. Optionally, when the first display area includes a plurality of sub display areas, or when the method includes a plurality of display areas such as a first display area, a second display area, a third display area and a fourth display area, different video frames may be rendered in the respective display areas according to trigger signals to achieve the effect of a film strip or a photo album, and the specific display areas may be rendered as a film strip or an album to achieve a more realistic effect, which is not described herein again.
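The "freeze on second trigger" behavior can be sketched as a small state holder. The class and method names are hypothetical illustrations, not names used by the disclosure.

```python
class FrameFreezer:
    """Once the second trigger signal is received, keep returning the frame
    that was current at that moment instead of the live video frame."""

    def __init__(self):
        self.frozen = None  # the captured frame, None until the trigger fires

    def frame_for_display(self, current_frame, second_trigger):
        # Capture exactly once, at the first trigger.
        if self.frozen is None and second_trigger:
            self.frozen = current_frame
        # After capture, the frozen frame is displayed persistently.
        return self.frozen if self.frozen is not None else current_frame
```

With several display areas, one such holder per area yields the album effect described above.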
The present disclosure discloses an image processing method, an image processing apparatus, an electronic device and a computer-readable storage medium. The image processing method comprises the following steps: acquiring a video image from a video source in response to receiving a first trigger signal; acquiring a first display area, wherein the first display area is used for displaying a plurality of video frames of the video image; acquiring a motion trajectory position parameter of the first display area; acquiring a motion trajectory time parameter of the first display area, wherein the motion trajectory time parameter is the time length from the time when the first display area is displayed on a screen to the current time; calculating the display position of the first display area according to the motion trajectory position parameter and the motion trajectory time parameter; and rendering the video image in the first display area according to the display position of the first display area. By this method, the technical problem in the prior art that video effects cannot be generated simply and quickly is solved.
Although the steps in the above method embodiments are described in the above sequence, it should be clear to those skilled in the art that the steps in the embodiments of the present disclosure are not necessarily performed in that sequence, and may also be performed in other sequences, such as in reverse, in parallel or interleaved. Moreover, on the basis of the above steps, those skilled in the art may also add other steps; these obvious modifications or equivalent substitutions should also fall within the protection scope of the present disclosure, and are not described herein again.
Fig. 4 is a schematic structural diagram of an embodiment of an image processing apparatus according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus 400 includes: a video image acquisition module 401, a first display area acquisition module 402, a position parameter acquisition module 403, a time parameter acquisition module 404, a display position calculation module 405, and an image rendering module 406. Wherein:
a video image obtaining module 401, configured to obtain a video image from a video source in response to receiving a first trigger signal, where the video image includes a plurality of video frames;
a first display area obtaining module 402, configured to obtain a first display area, where the first display area is used to display a video frame of the video image;
a position parameter obtaining module 403, configured to obtain a motion trajectory position parameter of the first display area, where the motion trajectory position parameter includes an initial position, a control position, and an end position;
a time parameter obtaining module 404, configured to obtain a motion trajectory time parameter of the first display area, where the motion trajectory time parameter is a time length from a time when the first display area is displayed on a screen to a current time;
a display position calculation module 405, configured to calculate a display position of the first display area according to the motion trajectory position parameter and the motion trajectory time parameter;
an image rendering module 406, configured to render the video image in the first display area according to the display position of the first display area.
Further, the image processing apparatus 400 further includes:
the deflection angle acquisition module is used for acquiring the deflection angle of the first display area;
and the deflection matrix acquisition module is used for acquiring a deflection matrix according to the deflection angle.
Further, the deflection angle obtaining module further includes:
and the deflection angle calculation module is used for calculating the deflection angle of the first display area at the current time according to the motion track time parameter.
Further, the display position calculation module 405 is further configured to:
and calculating the display position of the first display area according to the motion trajectory position parameter, the motion trajectory time parameter and the deflection matrix.
Further, the image processing apparatus 400 further includes:
the width-to-height ratio acquisition module is used for acquiring the width-to-height ratio of the first display area;
a proportion compensation matrix obtaining module, configured to obtain a proportion matrix and a compensation matrix of the first display area according to the aspect ratio, where the proportion matrix is used to normalize the first display area, and the compensation matrix is used to scale the normalized first display area back to the aspect ratio.
Further, the display position calculation module 405 is further configured to:
and calculating the display position of the first display area according to the motion trajectory position parameter, the motion trajectory time parameter, the deflection matrix, the proportion matrix and the compensation matrix.
Further, the image processing apparatus 400 further includes:
and the display scaling factor acquisition module is used for acquiring the display scaling factor of the first display area, wherein the display scaling factor is used for determining the display size of the first display area.
Further, the display position calculation module 405 is further configured to:
and calculating the display position of the first display area according to the motion trajectory position parameter, the motion trajectory time parameter, the deflection matrix, the proportion matrix, the compensation matrix and the display scaling coefficient.
Further, the first display area obtaining module 402 further includes:
an initial position parameter obtaining module, configured to obtain an initial motion trajectory position parameter of a first display area, where the initial motion trajectory position parameter determines a range of the first display area, and a video frame of the video image is displayed in the range of the first display area.
Further, the image processing apparatus 400 further includes:
and the fixed rendering module is used for responding to the reception of a second trigger signal and fixedly rendering the video frame of the video image played currently to the first display area.
The apparatus shown in fig. 4 can perform the methods of the embodiments shown in fig. 1 to fig. 3. For the parts of this embodiment that are not described in detail, as well as the implementation process and technical effect of the technical solution, reference may be made to the related descriptions of the embodiments shown in fig. 1 to fig. 3, which are not repeated herein.
Referring now to FIG. 5, a block diagram of an electronic device 500 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage means 506 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 506 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 506, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the client and the server may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: in response to receiving a first trigger signal, acquiring a video image from a video source, wherein the video image comprises a plurality of video frames; acquiring a first display area, wherein the first display area is used for displaying a plurality of video frames of the video image; acquiring motion trail position parameters of the first display area, wherein the motion trail position parameters comprise an initial position, a control position and a terminal position; acquiring a motion trail time parameter of the first display area, wherein the motion trail time parameter is the time length from the time when the first display area is displayed on a screen to the current time; calculating the display position of the first display area according to the motion trail position parameter and the motion trail time parameter; rendering the video image in the first display area according to the display position of the first display area.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of an element does not in some cases constitute a limitation on the element itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an image processing method including:
in response to receiving a first trigger signal, acquiring a video image from a video source, wherein the video image comprises a plurality of video frames;
acquiring a first display area, wherein the first display area is used for displaying a plurality of video frames of the video image;
acquiring motion trail position parameters of the first display area, wherein the motion trail position parameters comprise an initial position, a control position and a terminal position;
acquiring a motion trail time parameter of the first display area, wherein the motion trail time parameter is the time length from the time when the first display area is displayed on a screen to the current time;
calculating the display position of the first display area according to the motion trail position parameter and the motion trail time parameter;
rendering the video image in the first display area according to the display position of the first display area.
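Since the motion trajectory position parameter comprises an initial position, a control position and an end position, one natural reading is a quadratic Bezier curve evaluated at the normalized motion trajectory time parameter. The following sketch illustrates that reading; it is an assumption for illustration (the disclosure does not fix the trajectory formula), and the function name is hypothetical.

```python
def bezier_point(p0, p1, p2, t):
    """Quadratic Bezier B(t) = (1-t)^2 * p0 + 2t(1-t) * p1 + t^2 * p2.

    p0: initial position, p1: control position, p2: end position (each an
    (x, y) pair); t: normalized motion trajectory time parameter in [0, 1].
    Returns the display position of the first display area at time t.
    """
    u = 1.0 - t
    return (u * u * p0[0] + 2.0 * u * t * p1[0] + t * t * p2[0],
            u * u * p0[1] + 2.0 * u * t * p1[1] + t * t * p2[1])
```

At t = 0 the curve starts at the initial position and at t = 1 it reaches the end position, with the control position bending the path in between.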
Further, before the calculating the display position of the first display area according to the motion trail position parameter and the motion trail time parameter, the method further includes:
acquiring a deflection angle of the first display area;
and acquiring a deflection matrix according to the deflection angle.
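A deflection matrix obtained from a deflection angle is, in a common implementation, a plane rotation matrix. The sketch below shows that assumed form (function names are hypothetical, and the counter-clockwise sign convention is an assumption).

```python
import math

def deflection_matrix(angle_rad):
    """2x2 rotation matrix for a counter-clockwise deflection angle in radians."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [[c, -s],
            [s,  c]]

def apply_matrix(m, p):
    """Apply a 2x2 matrix to a point (x, y)."""
    return (m[0][0] * p[0] + m[0][1] * p[1],
            m[1][0] * p[0] + m[1][1] * p[1])
```

Applying this matrix to the corner offsets of the first display area deflects the area by the given angle about its own origin.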
Further, the obtaining the deflection angle of the first display area includes:
and calculating the deflection angle of the first display area at the current time according to the motion trail time parameter.
Further, the calculating the display position of the first display area according to the motion trail position parameter and the motion trail time parameter includes:
and calculating the display position of the first display area according to the motion trail position parameter, the motion trail time parameter and the deflection matrix.
Further, before the calculating the display position of the first display area according to the motion trajectory position parameter, the motion trajectory time parameter, and the deflection matrix, the method further includes:
acquiring the width-height ratio of the first display area;
and acquiring a proportion matrix and a compensation matrix of the first display area according to the aspect ratio, wherein the proportion matrix is used for normalizing the first display area, and the compensation matrix is used for scaling the normalized first display area back to the aspect ratio.
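One way to realize the proportion (normalization) matrix and the compensation matrix described above is sketched below. Which axis carries the aspect ratio is a convention; here the x axis is assumed, and the function name is hypothetical.

```python
def proportion_and_compensation(aspect_ratio):
    """aspect_ratio = width / height of the first display area.

    The proportion matrix removes the aspect ratio, normalizing the area to a
    square so that the deflection (rotation) does not shear it; the
    compensation matrix then scales the rotated result back to the original
    width-to-height ratio.
    """
    proportion = [[1.0 / aspect_ratio, 0.0],
                  [0.0,                1.0]]
    compensation = [[aspect_ratio, 0.0],
                    [0.0,          1.0]]
    return proportion, compensation
```

By construction the two matrices are inverses of each other, so without a rotation in between they cancel out exactly.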
Further, the calculating the display position of the first display area according to the motion trajectory position parameter, the motion trajectory time parameter and the deflection matrix includes:
and calculating the display position of the first display area according to the motion trajectory position parameter, the motion trajectory time parameter, the deflection matrix, the proportion matrix and the compensation matrix.
Further, before the calculating the display position of the first display area according to the motion trajectory position parameter, the motion trajectory time parameter, the deflection matrix, the proportion matrix, and the compensation matrix, the method further includes:
and acquiring a display scaling factor of the first display area, wherein the display scaling factor is used for determining the display size of the first display area.
Further, the calculating the display position of the first display area according to the motion trajectory position parameter, the motion trajectory time parameter, the deflection matrix, the proportion matrix, and the compensation matrix includes:
and calculating the display position of the first display area according to the motion trajectory position parameter, the motion trajectory time parameter, the deflection matrix, the proportion matrix, the compensation matrix and the display scaling coefficient.
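A plausible composition of all the quantities named above — the trajectory position at the current time, the deflection matrix, the proportion matrix, the compensation matrix and the display scaling coefficient — is sketched below. The order of composition and the axis conventions are assumptions for illustration, not the patented formula, and the function name is hypothetical.

```python
import math

def display_corners(corners, traj_pos, angle, aspect_ratio, k):
    """Transform the corner offsets of the first display area to screen space.

    corners: (x, y) offsets of the area's corners about its own origin
    traj_pos: display position on the motion trajectory at the current time
    angle: deflection angle in radians; aspect_ratio: width / height
    k: display scaling coefficient determining the display size
    """
    c, s = math.cos(angle), math.sin(angle)
    out = []
    for (x, y) in corners:
        x1, y1 = x / aspect_ratio, y              # proportion matrix: normalize
        x2, y2 = c * x1 - s * y1, s * x1 + c * y1  # deflection matrix: rotate
        x3, y3 = x2 * aspect_ratio, y2            # compensation matrix: restore
        # scale by the display coefficient and translate to the trajectory position
        out.append((traj_pos[0] + k * x3, traj_pos[1] + k * y3))
    return out
```

With a zero deflection angle the proportion and compensation matrices cancel, leaving only the scaling and the translation along the trajectory.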
Further, the acquiring a first display area, where the first display area is used for displaying a video frame of the video image, includes:
acquiring initial position parameters of a first display area, wherein the initial position parameters of the first display area determine the range of the first display area, and the range of the first display area is used for displaying video frames of the video images.
Further, the method further comprises:
in response to receiving a second trigger signal, fixedly rendering a currently played video frame of the video image in the first display area.
According to one or more embodiments of the present disclosure, there is provided an image processing apparatus including:
the video image acquisition module is used for responding to the receiving of a first trigger signal and acquiring a video image from a video source, wherein the video image comprises a plurality of video frames;
a first display area acquisition module, configured to acquire a first display area, where the first display area is used to display a video frame of the video image;
the position parameter acquisition module is used for acquiring motion trail position parameters of the first display area, wherein the motion trail position parameters comprise an initial position, a control position and a terminal position;
a time parameter obtaining module, configured to obtain a motion trajectory time parameter of the first display area, where the motion trajectory time parameter is a time length from a time when the first display area is displayed on a screen to a current time;
the display position calculation module is used for calculating the display position of the first display area according to the motion track position parameter and the motion track time parameter;
and the image rendering module is used for rendering the video image in the first display area according to the display position of the first display area.
Further, the image processing apparatus further includes:
the deflection angle acquisition module is used for acquiring the deflection angle of the first display area;
and the deflection matrix acquisition module is used for acquiring a deflection matrix according to the deflection angle.
Further, the deflection angle obtaining module further includes:
and the deflection angle calculation module is used for calculating the deflection angle of the first display area at the current time according to the motion track time parameter.
Further, the display position calculation module is further configured to:
and calculating the display position of the first display area according to the motion trajectory position parameter, the motion trajectory time parameter and the deflection matrix.
Further, the image processing apparatus further includes:
the width-to-height ratio acquisition module is used for acquiring the width-to-height ratio of the first display area;
a proportion compensation matrix obtaining module, configured to obtain a proportion matrix and a compensation matrix of the first display area according to the aspect ratio, where the proportion matrix is used to normalize the first display area, and the compensation matrix is used to scale the normalized first display area back to the aspect ratio.
Further, the display position calculation module is further configured to:
and calculating the display position of the first display area according to the motion track position parameter, the motion track time parameter, the deflection matrix, the proportion matrix and the compensation matrix.
Further, the image processing apparatus further includes:
and the display scaling factor acquisition module is used for acquiring the display scaling factor of the first display area, wherein the display scaling factor is used for determining the display size of the first display area.
Further, the display position calculation module is further configured to:
and calculating the display position of the first display area according to the motion track position parameter, the motion track time parameter, the deflection matrix, the proportion matrix, the compensation matrix and the display scaling coefficient.
Further, the first display area obtaining module further includes:
an initial position parameter obtaining module, configured to obtain an initial motion trajectory position parameter of a first display area, where the initial motion trajectory position parameter determines a range of the first display area, and a video frame of the video image is displayed in the range of the first display area.
Further, the image processing apparatus further includes:
and the fixed rendering module is used for responding to the reception of a second trigger signal and fixedly rendering the video frame of the video image played currently to the first display area.
According to one or more embodiments of the present disclosure, there is provided an electronic device, including: at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the image processing methods described above.
According to one or more embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium characterized by storing computer instructions for causing a computer to execute any of the aforementioned image processing methods.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art should appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present disclosure.

Claims (11)

1. An image processing method comprising:
in response to receiving a first trigger signal, acquiring a video image from a video source, wherein the video image comprises a plurality of video frames;
acquiring a first display area, wherein the first display area is used for displaying a plurality of video frames of the video image;
acquiring motion trail position parameters of the first display area, wherein the motion trail position parameters comprise an initial position, a control position and a terminal position;
acquiring a motion trail time parameter of the first display area, wherein the motion trail time parameter is the time length from the time when the first display area is displayed on a screen to the current time;
acquiring a deflection angle of the first display area;
acquiring a deflection matrix according to the deflection angle;
calculating the display position of the first display area according to the motion track position parameter, the motion track time parameter and the deflection matrix;
rendering the video image in the first display area according to the display position of the first display area.
2. The image processing method of claim 1, wherein said obtaining a deflection angle of the first display region comprises:
and calculating the deflection angle of the first display area at the current time according to the motion trail time parameter.
3. The image processing method according to claim 2, wherein before said calculating the display position of the first display region from the motion trajectory position parameter, the motion trajectory time parameter, and the deflection matrix, the method further comprises:
acquiring the width-height ratio of the first display area;
and acquiring a proportion matrix and a compensation matrix of the first display area according to the aspect ratio, wherein the proportion matrix is used for normalizing the first display area, and the compensation matrix is used for scaling the normalized first display area back to the aspect ratio.
4. The image processing method of claim 3, wherein said calculating the display position of said first display region from said motion trajectory position parameter, said motion trajectory time parameter, and said deflection matrix comprises:
and calculating the display position of the first display area according to the motion track position parameter, the motion track time parameter, the deflection matrix, the proportion matrix and the compensation matrix.
5. The image processing method of claim 4, wherein prior to said calculating the display position of the first display region from the motion trajectory position parameter, the motion trajectory time parameter, the deflection matrix, the scale matrix, and the compensation matrix, the method further comprises:
and acquiring a display scaling factor of the first display area, wherein the display scaling factor is used for determining the display size of the first display area.
6. The image processing method of claim 5, wherein said calculating the display position of the first display region from the motion trajectory position parameter, the motion trajectory time parameter, the deflection matrix, the scale matrix, and the compensation matrix comprises:
and calculating the display position of the first display area according to the motion track position parameter, the motion track time parameter, the deflection matrix, the proportion matrix, the compensation matrix and the display scaling coefficient.
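Claim 6 folds all of the above into one display-position computation. Since the position parameters are an initial, control, and terminal position, a natural reading is a quadratic Bezier curve evaluated at the normalized time parameter, with the matrices and scaling coefficient applied to the area's extents. The composition order below is one plausible arrangement, not the patent's specification; all names are illustrative:

```python
import math

def bezier(p0, p1, p2, t):
    # Quadratic Bezier over the motion-trajectory position parameters:
    # initial position p0, control position p1, terminal position p2.
    u = 1.0 - t
    return (u * u * p0[0] + 2 * u * t * p1[0] + t * t * p2[0],
            u * u * p0[1] + 2 * u * t * p1[1] + t * t * p2[1])

def display_position(p0, p1, p2, elapsed_s, duration_s,
                     angle_deg, aspect, display_scale):
    t = min(elapsed_s / duration_s, 1.0)
    cx, cy = bezier(p0, p1, p2, t)          # area centre at the current time
    a = math.radians(angle_deg)
    # Transform a corner offset: normalize by the aspect ratio (scale
    # matrix), rotate by the deflection matrix, scale back by the
    # compensation matrix, then apply the display scaling coefficient.
    # This order of operations is an assumption.
    ox, oy = 0.5 * aspect, 0.5              # half-extents of the area
    nx, ny = ox / aspect, oy                # scale matrix (normalize)
    rx = nx * math.cos(a) - ny * math.sin(a)
    ry = nx * math.sin(a) + ny * math.cos(a)  # deflection matrix
    fx = rx * aspect * display_scale        # compensation + display scale
    fy = ry * display_scale
    return (cx, cy), (cx + fx, cy + fy)     # centre and one transformed corner
```

With the angle at zero and the display scale at one, the returned corner sits exactly half a width and half a height from the centre, confirming that the scale and compensation matrices cancel when no rotation is applied.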
7. The image processing method of claim 1, wherein said acquiring a first display area for displaying a video frame of the video image comprises:
acquiring an initial position parameter of the first display area, wherein the initial position parameter determines the range of the first display area, and the range of the first display area is used for displaying video frames of the video image.
8. The image processing method of claim 1, wherein the method further comprises:
in response to receiving a second trigger signal, fixing the currently played video frame of the video image and rendering it in the first display area.
9. An image processing apparatus, comprising:
a video image acquisition module, configured to acquire a video image from a video source in response to receiving a first trigger signal, wherein the video image comprises a plurality of video frames;
a first display area acquisition module, configured to acquire a first display area, wherein the first display area is used for displaying a video frame of the video image;
a position parameter acquisition module, configured to acquire a motion trajectory position parameter of the first display area, wherein the motion trajectory position parameter comprises an initial position, a control position, and a terminal position;
a time parameter acquisition module, configured to acquire a motion trajectory time parameter of the first display area, wherein the motion trajectory time parameter is the length of time from the moment the first display area is displayed on the screen to the current time;
a deflection angle acquisition module, configured to acquire a deflection angle of the first display area;
a deflection matrix acquisition module, configured to acquire a deflection matrix according to the deflection angle;
a display position calculation module, configured to calculate a display position of the first display area according to the motion trajectory position parameter, the motion trajectory time parameter, and the deflection matrix; and
an image rendering module, configured to render the video image in the first display area according to the display position of the first display area.
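The module structure of claim 9 can be mirrored as a small class, with an injected clock standing in for the on-screen display time so the time parameter is testable; the class, method, and variable names are illustrative, and the freeze behavior sketches claim 8's second trigger:

```python
class ImageProcessingDevice:
    """Illustrative skeleton of claim 9's modules; not taken from the patent."""

    def __init__(self, clock):
        self.clock = clock          # injected time source, for determinism

    def on_first_trigger(self, frames):
        # video image acquisition module: grab the frames from the source
        # and record when the first display area appears on screen.
        self.frames = list(frames)
        self.shown_at = self.clock()

    def time_parameter(self):
        # time parameter acquisition module: elapsed display time,
        # i.e. the motion-trajectory time parameter.
        return self.clock() - self.shown_at

    def on_second_trigger(self, frame_index):
        # claim 8: fix the currently played frame in the first display area.
        self.frozen_frame = self.frames[frame_index]
        return self.frozen_frame
```

In a real implementation the clock would be the system monotonic clock and the frames would come from a decoder; the injection here only keeps the sketch self-contained.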
10. An electronic device, comprising:
a memory for storing computer readable instructions; and
a processor for executing the computer readable instructions, wherein the processor, when executing the instructions, implements the image processing method according to any one of claims 1-8.
11. A non-transitory computer-readable storage medium storing computer-readable instructions that, when executed by a computer, cause the computer to perform the image processing method of any one of claims 1-8.
CN201910974918.9A 2019-10-14 2019-10-14 Image processing method, image processing device, electronic equipment and computer readable storage medium Active CN110769129B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910974918.9A CN110769129B (en) 2019-10-14 2019-10-14 Image processing method, image processing device, electronic equipment and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN110769129A CN110769129A (en) 2020-02-07
CN110769129B true CN110769129B (en) 2022-04-05

Family

ID=69332025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910974918.9A Active CN110769129B (en) 2019-10-14 2019-10-14 Image processing method, image processing device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110769129B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112565626B (en) * 2020-10-24 2023-03-10 北京博睿维讯科技有限公司 Video image special effect processing method, device and system and computer readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010278997A (en) * 2009-06-01 2010-12-09 Sharp Corp Image processing device, image processing method, and program
CN106548504A (en) * 2015-09-23 2017-03-29 腾讯科技(深圳)有限公司 Web animation generation method and device
CN107509113A (en) * 2017-08-01 2017-12-22 广州酷狗计算机科技有限公司 Virtual objects display methods, terminal and computer-readable recording medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8866971B2 (en) * 2007-12-17 2014-10-21 Ati Technologies Ulc Method, apparatus and machine-readable medium for apportioning video processing between a video source device and a video sink device



Similar Documents

Publication Publication Date Title
CN111899192B (en) Interaction method, interaction device, electronic equipment and computer-readable storage medium
CN112261459A (en) Video processing method and device, electronic equipment and storage medium
CN110519645B (en) Video content playing method and device, electronic equipment and computer readable medium
CN112004032B (en) Video processing method, terminal device and storage medium
CN112051961A (en) Virtual interaction method and device, electronic equipment and computer readable storage medium
CN112887631B (en) Method and device for displaying object in video, electronic equipment and computer-readable storage medium
CN112261428A (en) Picture display method and device, electronic equipment and computer readable medium
US20220159197A1 (en) Image special effect processing method and apparatus, and electronic device and computer readable storage medium
CN110796664A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN114385297A (en) Page display method and device, electronic equipment, storage medium and program product
CN111970571A (en) Video production method, device, equipment and storage medium
CN114390308A (en) Interface display method, device, equipment, medium and product in live broadcast process
CN113806306A (en) Media file processing method, device, equipment, readable storage medium and product
CN114900625A (en) Subtitle rendering method, device, equipment and medium for virtual reality space
CN110769129B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN113766293B (en) Information display method, device, terminal and storage medium
CN115515014B (en) Media content sharing method and device, electronic equipment and storage medium
CN115904591B (en) Media content display method and device, electronic equipment and storage medium
CN111833459A (en) Image processing method and device, electronic equipment and storage medium
US11810336B2 (en) Object display method and apparatus, electronic device, and computer readable storage medium
WO2022022363A1 (en) Video generation and playing method and apparatus, and electronic device and storage medium
CN115082368A (en) Image processing method, device, equipment and storage medium
CN114419298A (en) Virtual object generation method, device, equipment and storage medium
CN112651909B (en) Image synthesis method, device, electronic equipment and computer readable storage medium
US20220392129A1 (en) Sticker processing method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant