CN112488977B - Image processing method and device, electronic equipment and storage medium - Google Patents

Publication number
CN112488977B
CN112488977B (granted from application CN202011535228.2A)
Authority
CN
China
Prior art keywords
target
image
vertex
vector
point
Prior art date
Legal status
Active
Application number
CN202011535228.2A
Other languages
Chinese (zh)
Other versions
CN112488977A (en)
Inventor
常志伟
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202011535228.2A
Publication of CN112488977A
Application granted
Publication of CN112488977B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/223: Analysis of motion using block-matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to an image processing method and device, an electronic device, and a storage medium, and relates to the field of computers. The embodiments of the disclosure at least solve the problem in the related art that the display form of an image is too simplistic and direct. The method comprises the following steps: acquiring anchor point positions of a plurality of anchor points in an image to be processed and at least one motion vector; converting the acquired anchor point positions into anchor vectors, and merging the anchor vectors with the at least one motion vector to generate a vector array; dividing the image to be processed into a plurality of triangles according to the vector array and a triangle subdivision algorithm to obtain a vertex index array; respectively determining target positions and target colors of the vertices of the plurality of triangles according to the vector array, the vertex index array, and a duration coefficient; rendering the image to be processed according to the determined target positions and target colors to obtain a plurality of target images; and displaying the plurality of target images in display order within the display period.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computers, and in particular, to an image processing method, an image processing device, an electronic device, and a storage medium.
Background
With the continuous development of image processing technology, an account can capture an image with a terminal and view it. In the related art, in order to view an image dynamically, the terminal records a video of a preset duration while capturing a still image, and combines the still image with that video to generate a dynamic image. When the account triggers the terminal to display the dynamic image, the terminal plays the preset-duration video and thereby presents the image to the account dynamically.
However, this method is limited: to display an image dynamically, the terminal must record a video of a preset duration in advance and generate a dynamic image from it. The method cannot be applied to a still image that was generated in advance or that the terminal received from another device; for such images the terminal can only display the still image itself, and the display form is too simplistic and direct.
Disclosure of Invention
The disclosure provides an image processing method, an image processing device, an electronic device and a storage medium, so as to at least solve the problem that an image display form is too simple and direct in the related art. The technical scheme of the present disclosure is as follows:
According to a first aspect of an embodiment of the present disclosure, there is provided an image processing method including: acquiring anchor point positions of a plurality of anchor points in an image to be processed and at least one motion vector, the motion vector representing the start point and end point of an adjustable point in the image to be processed; converting the acquired anchor point positions into anchor vectors, and merging the anchor vectors with the at least one motion vector to generate a vector array, where the start point and end point of each anchor vector are the anchor point corresponding to that vector; dividing the image to be processed into a plurality of triangles according to the vector array and a triangle subdivision algorithm to obtain a vertex index array, the vertex index array containing the positions in the vector array corresponding to the vertices of the plurality of triangles; respectively determining target positions and target colors of the vertices of the plurality of triangles according to the vector array, the vertex index array, and a duration coefficient, the duration coefficient representing the display order of the images within the display period; rendering the image to be processed according to the determined target positions and target colors to obtain a plurality of target images; and displaying the plurality of target images in display order within the display period.
Optionally, the "merging multiple anchor vectors and at least one motion vector to generate a vector array" includes: acquiring coordinate values in a starting point coordinate and coordinate values in an end point coordinate of each vector to be processed; the vector to be processed is any one vector of a plurality of anchor point vectors and at least one motion vector; combining the acquired coordinate values to generate a vector array; the continuous preset number of coordinate values in the vector array corresponds to a vector to be processed.
Optionally, the image processing method further includes: determining the starting display time of the target image, wherein the starting display time of the target image comprises the time required for starting to display the target image in a display period; and determining the ratio of the display starting time length to the display period of the target image as the time length coefficient of the target image.
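The relation above between the start-display time, the display period, and the duration coefficient can be sketched as follows. This is a minimal illustration in Python, not from the patent; the function name and the evenly spaced frame times are assumptions.

```python
def duration_coefficient(start_display_time, display_period):
    """Duration coefficient of a target image: the ratio of the time at
    which the image starts to be displayed to the whole display period."""
    if display_period <= 0:
        raise ValueError("display period must be positive")
    return start_display_time / display_period

# Ten target images spread evenly over a 1-second display period get
# coefficients 0.0, 0.1, ..., 0.9, which encode their display order.
coeffs = [duration_coefficient(i * 0.1, 1.0) for i in range(10)]
```

Because the coefficient grows monotonically with the start-display time, sorting target images by their coefficient reproduces the display order.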
Optionally, the "determining the target positions of the vertices of the triangles according to the vector array, the vertex index array, and the time length coefficients" includes: determining the starting point coordinates and the end point coordinates of the target vertexes from the vector array according to the vertex index array; the target vertex is any vertex in a plurality of triangles; and determining the target position of the target vertex in the target image according to the time length coefficient of the target image, the starting point coordinate and the end point coordinate of the target vertex.
Optionally, the above "target position of the target vertex in the target image" satisfies the following formula one:
(xm, ym) = (xa × (1 - v) + xb × v, ya × (1 - v) + yb × v)    (formula one)
Wherein xm is the abscissa of the target vertex in the target image, ym is the ordinate of the target vertex in the target image, xa and ya are the abscissa and ordinate of the start point coordinate of the target vertex, xb and yb are the abscissa and ordinate of the end point coordinate of the target vertex, and v is the duration coefficient of the target image.
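Formula one is a per-coordinate linear interpolation, and can be sketched in Python as follows (an illustrative sketch; the names are not from the patent):

```python
def target_position(start, end, v):
    """Formula one: linearly interpolate a vertex from its start point
    (xa, ya) to its end point (xb, yb) by the duration coefficient v."""
    xa, ya = start
    xb, yb = end
    return (xa * (1 - v) + xb * v, ya * (1 - v) + yb * v)

# A motion-vector vertex is halfway along its path at v = 0.5:
mid = target_position((0.0, 0.0), (10.0, 20.0), 0.5)
# An anchor vertex (start point == end point) never moves:
fixed = target_position((3.0, 4.0), (3.0, 4.0), 0.7)
```

Note how the degenerate anchor vectors fall out of the same formula: when start and end coincide, the interpolation returns the anchor position for every v, which is what keeps anchors fixed across all target images.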
Optionally, the "determining the target colors of the vertices of the triangles according to the vector array, the vertex index array, and the time length coefficients" includes: determining the starting point coordinates and the end point coordinates of the target vertexes from the vector array according to the vertex index array; the target vertex is any vertex in a plurality of triangles; acquiring a starting point color value of a target vertex from an image to be processed according to the starting point coordinates of the target vertex, and acquiring an end point color value of the target vertex from the image to be processed according to the end point coordinates of the target vertex; and determining the target color value of the target vertex in the target image according to the time length coefficient of the target image, the starting point color value and the end point color value of the target vertex.
Optionally, the "target color value of the target vertex in the target image" satisfies the following formula two:
xn = xp × (1 - v) + xq × v    (formula two)
Wherein xn is the target color value of the target vertex in the target image, xp is the start point color value of the target vertex, xq is the end point color value of the target vertex, and v is the duration coefficient of the target image.
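Formula two applies the same blend to color. A hedged sketch applying it per RGB channel follows; the patent states the formula for a single color value, so the per-channel handling is an assumption.

```python
def target_color(start_value, end_value, v):
    """Formula two: blend a start color value toward an end color value
    by the duration coefficient v."""
    return start_value * (1 - v) + end_value * v

def target_rgb(start_rgb, end_rgb, v):
    """Apply formula two independently to each channel of an RGB triple."""
    return tuple(target_color(p, q, v) for p, q in zip(start_rgb, end_rgb))

# Halfway through the display period, black has blended to mid-grey:
grey = target_rgb((0, 0, 0), (255, 255, 255), 0.5)
```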
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus including an acquisition unit, a generation unit, a determination unit, a processing unit, and a display unit; the acquisition unit is used for acquiring anchor point positions of a plurality of anchor points in the image to be processed and at least one motion vector; the motion vector is used for representing the starting point and the ending point of the adjustable point in the image to be processed; the generating unit is used for converting the anchor points acquired by the acquiring unit into anchor point vectors, and combining the anchor point vectors and at least one motion vector to generate a vector array; the starting point and the end point of the anchor point vector are anchor points corresponding to the anchor point vector; the acquisition unit is also used for dividing the image to be processed into a plurality of triangles according to the vector array and the triangle subdivision algorithm so as to acquire the vertex index array; the vertex index array comprises positions of the vertexes of a plurality of triangles corresponding to each other in the vector array; the determining unit is used for respectively determining target positions of vertexes of the triangles and target colors of the vertexes of the triangles according to the vector array, the vertex index array and the time length coefficient; the time length coefficient is used for representing the display sequence of the images in the display period; the processing unit is used for rendering the image to be processed according to the target position and the target color determined by the determining unit to obtain a plurality of target images; and a display unit for displaying the plurality of target images obtained by the processing unit in display order in a display period.
Optionally, the generating unit is specifically configured to: acquiring coordinate values in a starting point coordinate and coordinate values in an end point coordinate of each vector to be processed; the vector to be processed is any one vector of a plurality of anchor point vectors and at least one motion vector; combining the acquired coordinate values to generate a vector array; the continuous preset number of coordinate values in the vector array corresponds to a vector to be processed.
Optionally, the determining unit is specifically configured to: determining the starting display time of the target image, wherein the starting display time of the target image comprises the time required for starting to display the target image in a display period; and determining the ratio of the display starting time length to the display period of the target image as the time length coefficient of the target image.
Optionally, the determining unit is specifically configured to: determining the starting point coordinates and the end point coordinates of the target vertexes from the vector array according to the vertex index array; the target vertex is any vertex in a plurality of triangles; and determining the target position of the target vertex in the target image according to the time length coefficient of the target image, the starting point coordinate and the end point coordinate of the target vertex.
Optionally, the target position of the target vertex in the target image satisfies the following formula one:
(xm, ym) = (xa × (1 - v) + xb × v, ya × (1 - v) + yb × v)    (formula one)
Wherein xm is the abscissa of the target vertex in the target image, ym is the ordinate of the target vertex in the target image, xa and ya are the abscissa and ordinate of the start point coordinate of the target vertex, xb and yb are the abscissa and ordinate of the end point coordinate of the target vertex, and v is the duration coefficient of the target image.
Optionally, the determining unit is specifically configured to: determining the starting point coordinates and the end point coordinates of the target vertexes from the vector array according to the vertex index array; the target vertex is any vertex in a plurality of triangles; acquiring a starting point color value of a target vertex from an image to be processed according to the starting point coordinates of the target vertex, and acquiring an end point color value of the target vertex from the image to be processed according to the end point coordinates of the target vertex; and determining the target color value of the target vertex in the target image according to the time length coefficient of the target image, the starting point color value and the end point color value of the target vertex.
Optionally, the target color value of the target vertex in the target image satisfies the following formula two:
xn = xp × (1 - v) + xq × v    (formula two)
Wherein xn is the target color value of the target vertex in the target image, xp is the start point color value of the target vertex, xq is the end point color value of the target vertex, and v is the duration coefficient of the target image.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising: a processor, a memory for storing instructions executable by the processor; wherein the processor is configured to execute instructions to implement the image processing method as provided in the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium comprising instructions which, when executed by a processor, cause the processor to perform the image processing method as provided in the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising instructions which, when executed by a processor, implement the image processing method as provided in the first aspect.
The technical scheme provided by the disclosure brings at least the following beneficial effects: after the vector array is determined and the vertex index array is obtained, the target positions and target colors, in each target image, of the vertices of the triangles in the image to be processed can be determined according to the vector array, the vertex index array, and the duration coefficient; the image to be processed can then be rendered according to the determined target positions and colors to obtain and display the plurality of target images. Because the plurality of target images are displayed in display order within the display period, the account is given the effect of a dynamically displayed image, making the display of a static image more vivid.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is a schematic diagram of an image processing system according to an exemplary embodiment;
FIG. 2 is one of the flow diagrams of an image processing method shown in accordance with an exemplary embodiment;
FIG. 3 is a schematic diagram of an anchor point in an image to be processed, according to an exemplary embodiment;
FIG. 4 is a second flow chart of an image processing method according to an exemplary embodiment;
FIG. 5 is a third flow chart of an image processing method according to an exemplary embodiment;
FIG. 6 is a fourth flow chart of an image processing method according to an exemplary embodiment;
fig. 7 is a schematic structural view of an image processing apparatus according to an exemplary embodiment;
fig. 8 is a schematic diagram of an electronic device according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
In addition, in the description of the embodiments of the present disclosure, "/" means or, unless otherwise indicated, for example, a/B may mean a or B. "and/or" herein is merely an association relationship describing an association object, and means that three relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist together, and B exists alone. In addition, in the description of the embodiments of the present disclosure, "a plurality" means two or more than two.
The image processing method provided by the embodiment of the disclosure can be applied to an image processing system. Fig. 1 shows a schematic configuration of the image processing system. As shown in fig. 1, the image processing system 10 is for processing a still image and displaying the processed image, and the image processing system 10 includes an image processing apparatus 11 and an electronic device 12. The image processing apparatus 11 is connected to an electronic device 12. The image processing apparatus 11 and the electronic device 12 may be connected by a wired manner or may be connected by a wireless manner, which is not limited in the embodiment of the present disclosure.
The image processing apparatus 11 may be configured to perform data interaction with the electronic device 12, for example, acquire an image to be processed from the electronic device 12, and send the processed multiple target images to the electronic device 12.
The image processing device 11 may be further configured to process the acquired image to be processed, for example, determining the anchor point positions in the image to be processed and combining them to generate a vector array.
The electronic device 12 may be configured to perform data interaction with the image processing apparatus 11, for example, receive a plurality of target images transmitted by the image processing apparatus 11, and display the plurality of target images in a preset display order for a preset duration.
The electronic device 12 may also be used to receive a triggering operation of an account on an image to be processed.
Alternatively, the electronic device may be a physical machine, for example: a desktop computer (desktop), a mobile phone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA); the electronic device may also be a server, or a server group formed by a plurality of servers.
Alternatively, the image processing apparatus may implement the functions to be implemented by the image processing apparatus by a Virtual Machine (VM) deployed on a physical machine.
The image processing apparatus 11 and the electronic device 12 may be independent devices or may be integrated into the same device, which is not particularly limited in this disclosure.
When the image processing apparatus 11 and the electronic device 12 are integrated in the same device, the communication between them is communication between internal modules of that device. In this case, the communication flow between them is the same as when the image processing apparatus 11 and the electronic device 12 are independent of each other.
In the following embodiments provided in the present disclosure, the present disclosure is described taking an example in which the image processing apparatus 11 and the electronic device 12 are provided independently of each other.
An image processing method provided by an embodiment of the present disclosure is described below with reference to fig. 1.
As shown in fig. 2, the image processing method provided by the embodiment of the present disclosure includes the following S201 to S207.
S201, the image processing device acquires anchor point positions of a plurality of anchor points in the image to be processed and at least one motion vector.
The plurality of anchor points are points in the image to be processed whose positions and colors do not need to be adjusted. The motion vectors represent the start point and end point of adjustable points in the image to be processed, and each of the at least one motion vector includes a start point coordinate and an end point coordinate. An adjustable point is a point in the image to be processed whose position and color need to be adjusted.
It should be noted that, the image to be processed in the embodiment of the disclosure is a still image, which may be generated by shooting by the electronic device, or may be sent by the electronic device receiving other devices. The anchor point position, the start point coordinate and the end point coordinate related in the embodiment of the present disclosure may be coordinates used for characterizing a position in an image to be processed. The coordinates referred to in the embodiments of the present disclosure are two-dimensional coordinates, including an abscissa and an ordinate of one point.
It can be understood that the plurality of anchor points may specifically be anchor points on the outline of the fixed object in the image to be processed, or may be vertices of the image to be processed.
Illustratively, in an image to be processed containing a portrait, the anchor points may be points on the outline of the portrait, and the start points of the motion vectors may be points on dynamic objects such as the person's hair or the background.
As another example, FIG. 3 shows a schematic diagram of a plurality of anchor points and at least one motion vector in an image to be processed. As shown in FIG. 3, the image to be processed includes anchor point 1 at (x1, y1), start point 2 of a motion vector at (x2, y2), and end point 3 of that motion vector at (x3, y3). FIG. 3 marks only anchor point 1 and the start point 2 and end point 3 of one motion vector by way of example; in practice, the image to be processed contains more anchor points (the white dots in FIG. 3) and more motion-vector start points (the black dots in FIG. 3).
The following illustrates two implementations of the image processing apparatus to obtain anchor point positions of multiple anchor points in the embodiments of the present disclosure.
As a possible implementation manner, the image processing device acquires an image to be processed from the electronic device, extracts each anchor point in the image to be processed by using a preset machine learning algorithm, and further determines the anchor point position of each anchor point.
Specifically, the image processing apparatus may analyze the image to be processed by using a machine learning algorithm, extract anchor points in the image to be processed, and further determine anchor point positions of each anchor point.
The implementation manner of processing the image by using the machine learning algorithm in this step may refer to the description in the prior art, and will not be described herein.
As another possible implementation manner, the image processing apparatus acquires an image to be processed from the electronic device, determines each anchor point in the image to be processed in response to a triggering operation of the anchor points in the image to be processed by the account, and further determines an anchor point position of each anchor point.
Specifically, the image processing apparatus 11 may determine the type of the triggering operation according to parameters such as the time duration, the intensity of the triggering operation of the account, whether there is a sliding motion, and the like.
The operation types of the triggering operation comprise a first operation type, wherein the first operation type is used for selecting an anchor point by an account.
For example, the first operation type may include a click operation.
It should be noted that, the triggering operation of the account to the image to be processed may be performed by touching the display screen, or may be performed by external devices such as a mouse, a keyboard, and a stylus, which is not limited in the embodiment of the present disclosure.
The following illustrates various implementations of the image processing apparatus to obtain at least one motion vector in embodiments of the present disclosure.
As a possible implementation manner, the image processing apparatus obtains an image to be processed from the electronic device, extracts a start point and an end point of each adjustable point in the image to be processed by using a preset machine learning algorithm, and further determines a movement vector of the adjustable point according to the extracted start point and end point.
The implementation manner of processing the image by using the machine learning algorithm in this step may refer to the description in the prior art, and will not be described herein.
As a second possible implementation manner, the image processing apparatus acquires an image to be processed from the electronic device, determines a start point and an end point of at least one adjustable point in the image to be processed in response to a triggering operation of an anchor point in the image to be processed by the account, and further determines a motion vector according to the start point and the end point of the adjustable point.
The operation type of the triggering operation further comprises a second operation type, wherein the second operation type is used for selecting a starting point and an ending point of the adjustable point in the image to be processed by the account.
For example, the second operation type may include a sliding operation.
It should be noted that, the triggering operation of the account to the image to be processed may be performed by touching the display screen, or may be performed by external devices such as a mouse, a keyboard, and a stylus, which is not limited in the embodiment of the present disclosure.
As a third possible implementation manner, the image processing apparatus may further obtain, from the electronic device, at least one movement vector input by the account in the electronic device.
S202, the image processing device converts the obtained anchor points into anchor point vectors, and combines the anchor point vectors and at least one motion vector to generate a vector array.
The anchor point vector comprises a starting point and an ending point. The starting point and the end point of the anchor point vector are anchor points corresponding to the anchor point vector. The starting point coordinates and the ending point coordinates in the anchor point vectors are anchor point positions of the anchor points, and the vector array comprises the anchor point positions in a plurality of anchor point vectors and the starting point coordinates and the ending point coordinates in at least one moving vector.
It will be appreciated that in the disclosed embodiments the anchor vector and the motion vector share the same data format: each includes a start position and an end position.
As a possible implementation manner, the image processing device determines an anchor point position of the anchor point, determines the anchor point position of the anchor point as a start point coordinate and an end point coordinate of the anchor point vector, and further determines the anchor point vector according to the determined start point coordinate and the determined end point coordinate.
Illustratively, where the anchor point position of an anchor point is (x1, y1), the start point coordinate of its corresponding anchor vector is (x1, y1) and the end point coordinate is also (x1, y1); the anchor vector is therefore [(x1, y1), (x1, y1)].
It will be appreciated that, with this step, the start point coordinates and the end point coordinates of any one anchor vector are the same.
Further, the image processing device combines the coordinate values in the plurality of anchor point vectors with the coordinate values of the start point coordinates and the end point coordinates in the at least one motion vector to obtain the vector array.
It should be noted that, in the embodiments of the present disclosure, the merging order of the plurality of anchor point vectors and the at least one motion vector within the vector array is not limited.
For example, in connection with FIG. 3, the vector array may be [ x1, y1, x1, y1, x2, y2, x3, y3, x4, y4, x5, y5, … … ].
Wherein x1 is the abscissa of the anchor point position of the anchor point 1, y1 is the ordinate of the anchor point 1, x2 is the abscissa of the start point 2 of a motion vector, y2 is the ordinate of the start point 2 of the motion vector, x3 is the abscissa of the end point 3 of the motion vector, y3 is the ordinate of the end point 3 of the motion vector, x4 is the abscissa of the start point 4 of another motion vector, y4 is the ordinate of the start point 4 of the motion vector, x5 is the abscissa of the end point 5 of the motion vector, and y5 is the ordinate of the end point 5 of the motion vector.
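The merging described above can be sketched in a few lines of Python (an illustrative aid, not part of the patent; the function name and data layout are assumptions based on the example array):

```python
def build_vector_array(anchor_positions, motion_vectors):
    """Flatten anchor point vectors and motion vectors into one array.

    anchor_positions: list of (x, y) anchor point positions; each anchor
    becomes an anchor point vector whose start and end points coincide.
    motion_vectors: list of ((x_start, y_start), (x_end, y_end)) pairs.
    """
    array = []
    for x, y in anchor_positions:
        # Anchor point vector: start point and end point are both the anchor.
        array.extend([x, y, x, y])
    for (xs, ys), (xe, ye) in motion_vectors:
        array.extend([xs, ys, xe, ye])
    return array
```

For instance, `build_vector_array([(1, 1)], [((2, 2), (3, 3))])` returns `[1, 1, 1, 1, 2, 2, 3, 3]`, matching the layout of the array shown above: four values per vector, with an anchor point vector repeating its coordinates.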
For a specific implementation of this step, reference may be made to the following description of the present disclosure, which is not repeated here.
S203, the image processing device 11 divides the image to be processed into a plurality of triangles according to the vector array and the triangle subdivision algorithm so as to obtain the vertex index array.
The vertex index array comprises the positions, in the vector array, of the vertex coordinates of the vertices of the plurality of triangles. The vertex coordinates of the vertices in each triangle are the anchor point position or the start point coordinates of the adjustable point position.
As a possible implementation manner, the image processing apparatus 11 substitutes the data in the vector array into a triangle subdivision algorithm, divides the image to be processed into a plurality of triangles by using the triangle subdivision algorithm, the anchor points in the image to be processed, and the start points in the motion vectors, and further determines the corresponding position of the vertex coordinates of each triangle vertex in the vector array.
For the vector array given in the above example, a triangle whose vertex coordinates are (x1, y1), (x2, y2) and (x4, y4) has vertex positions 0, 1, 4, 5, 8 and 9 in the vector array, and the vertex index array output by the triangle subdivision algorithm is [0, 1, 4, 5, 8, 9, ……].
It can be understood that the vertices of the triangles obtained by the triangle subdivision algorithm satisfy the empty circumcircle criterion, so that the image to be processed can be divided uniformly and reasonably, and the subsequent display of the plurality of target images has a certain continuity.
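To make the index mapping concrete, here is a minimal Python sketch (illustrative only; the actual triangulation would come from a Delaunay-style subdivision algorithm, which is not reimplemented here) that finds, for each triangle vertex, the positions of its coordinates in the flat vector array:

```python
def vertex_index_array(triangles, vector_array):
    """Find, for each triangle vertex, the positions of its start point
    coordinates in the flat vector array.

    triangles: list of triangles, each a tuple of three (x, y) vertex
    coordinates (anchor positions or motion-vector start points).
    vector_array: flat array in which every vector occupies four slots.
    """
    indices = []
    for triangle in triangles:
        for x, y in triangle:
            # Each vector to be processed starts at a slot index divisible
            # by 4; its start point occupies slots k and k + 1.
            for k in range(0, len(vector_array), 4):
                if vector_array[k] == x and vector_array[k + 1] == y:
                    indices.extend([k, k + 1])
                    break
    return indices
```

Run against the example above, a triangle with vertices (x1, y1), (x2, y2), (x4, y4) yields exactly [0, 1, 4, 5, 8, 9].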
S204, the image processing device determines a time length coefficient.
The duration coefficient is used to represent the display order of an image within the display period; when a plurality of target images are included in the display period, the duration coefficient of each target image represents that image's display order within the period.
It will be appreciated that the duration coefficient reflects the position and color change of the adjustable point position in the target image. One target image corresponds to one duration coefficient.
The display period may be preset in the image processing apparatus 11 by an operator, and may be understood as a display period of a plurality of subsequent target images. The display order may be preset in the image processing apparatus 11 by an operation and maintenance person. The duration coefficient is a coefficient greater than or equal to 0 and less than or equal to 1. The closer the duration coefficient is to 1, the later the target image corresponding to the duration coefficient is displayed in the display period, and the farther the distance of the adjustable point position in the target image is moved, the larger the color difference is. Conversely, the closer the duration coefficient is to 0, the earlier the target image corresponding to the duration coefficient is displayed in the display period, and the closer the distance of the adjustable point position in the target image is moved, the smaller the color difference is.
In another case, the time length coefficient may also be preset in the image processing apparatus by the operation and maintenance personnel in advance.
S205, the image processing device respectively determines target positions of vertexes of the triangles and target colors of the vertexes of the triangles according to the vector array, the vertex index array and the time length coefficient.
Wherein the target color comprises color values of vertices of triangles in the target image.
As one possible implementation manner, the image processing device inputs the image to be processed, the vector array, the vertex index array and the duration coefficient of the target image into the open graphic library to obtain the target position of the vertex of the triangle output by the open graphic library in the target image and the target color of the vertex of the triangle.
It should be noted that the open graphics library may draw the data in the vertex index array by using the glDrawElements function, obtain the vector array by using the glBufferSubData function, obtain the duration coefficient by using the sendUniformf function, and determine the position information of the triangle vertices in the target image by using the glVertexAttribPointer function.
For the specific implementation of the open graphics library in this step and in the following embodiments of the present disclosure, reference may be made to the prior art, and details are not repeated here. The specific implementation of this step may also refer to the following description of the embodiments of the present disclosure, which is likewise not repeated here.
S206, the image processing device renders the image to be processed according to the determined target position and the target color to obtain a plurality of target images.
The target images correspond to the images to be processed and are displayed in the display period according to the display sequence.
As a possible implementation manner, for any one of the plurality of target images, the image processing apparatus 11 may use an open graphics library, and render the image to be processed according to the target positions and the target colors of the vertices of the plurality of triangles in the target image determined by the open graphics library, so as to output the target image.
It will be appreciated that the image processing apparatus may acquire a plurality of target images within a display period using the above-described image processing method.
S207, the image processing device displays a plurality of target images in a display sequence in a display period. As one possible implementation, the image processing apparatus transmits a plurality of target images to the electronic device.
Correspondingly, after receiving the plurality of target images, the electronic device displays the plurality of target images in a display period according to a preset display sequence.
The image processing apparatus may further transmit a preset display period and a display order of each target image to the electronic device when transmitting the plurality of target images to the electronic device. The preset display period and the display sequence of each target image can also be preset in the electronic equipment by operation and maintenance personnel.
The technical scheme provided by the disclosure at least brings the following beneficial effects: after the vector array is determined and the vertex index array is obtained, the target positions and target colors of the triangle vertices of the image to be processed in each target image can be determined according to the vector array, the vertex index array and the duration coefficients, and the image to be processed can then be rendered according to the determined target positions and target colors to obtain and display a plurality of target images. Because the plurality of target images are displayed in the display order within the display period, the account is given the effect of a dynamically displayed image, making the display of a static image more vivid.
In one design, in order to generate the vector array, as shown in fig. 4, S202 provided in the embodiment of the disclosure may specifically include S2021-S2022 described below.
S2021, the image processing apparatus acquires the coordinate values in the start point coordinates and the coordinate values in the end point coordinates of each vector to be processed.
The vector to be processed is any one vector of a plurality of anchor point vectors and at least one motion vector.
For example, in the case where one anchor point vector is [(x1, y1), (x1, y1)], the image processing apparatus acquires the start point coordinates (x1, y1), i.e. the coordinate values x1 and y1, and the end point coordinates (x1, y1), i.e. the coordinate values x1 and y1. In the case where one motion vector is [(x2, y2), (x3, y3)], the image processing apparatus acquires the start point coordinates (x2, y2), i.e. x2 and y2, and the end point coordinates (x3, y3), i.e. x3 and y3.
S2022, the image processing device combines the acquired coordinate values to generate a vector array.
The continuous preset number of coordinate values in the vector array corresponds to a vector to be processed.
It should be noted that, in the vector array, starting from the first value, four consecutive values represent one vector to be processed.
Illustratively, taking the vector array [x1, y1, x1, y1, x2, y2, x3, y3, x4, y4, x5, y5, ……] shown above as an example, x1, y1, x1, y1 represent the anchor point vector of anchor point 1 in fig. 3, and x2, y2, x3, y3 and x4, y4, x5, y5 each represent a motion vector.
In the merging process, the image processing device may merge the vectors to be processed according to a preset ordering rule, or may merge the vectors to be processed according to a random ordering rule.
The technical scheme provided by the disclosure at least brings the following beneficial effects: the coordinate values of all the acquired vectors to be processed can be combined to acquire a vector array which can meet the requirements of an open graphic library.
In one design, in order to determine the duration coefficient, as shown in fig. 5, S204 provided in the embodiment of the disclosure may specifically include S2041-S2042 described below.
S2041, the image processing apparatus determines the start display duration of the target image.
The start display duration of the target image is the length of time into the display period at which display of the target image begins.
As a possible implementation manner, the image processing device obtains the duration of the display period and the number of target images within the display period, and then determines the time interval between two adjacent target images as the ratio of the duration of the display period to the number of target images.
The duration of the display period and the number of target images within the display period may be set in advance in the image processing apparatus by the operation and maintenance personnel.
Further, the image processing device sorts the plurality of target images, obtains the display order of each target image, and determines the product of the time interval and the display order of each target image as the start display duration of that target image.
S2042, the image processing device determines the ratio of the display starting time length to the display period of the target image as the time length coefficient of the target image.
As one possible implementation manner, the image processing apparatus calculates a ratio of a start display duration of the target image to a duration of the display period after determining the start display duration of the target image, and uses the calculation result as a duration coefficient of the target image.
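Steps S2041-S2042 amount to evenly spacing the target images across the display period; a minimal Python sketch (the function name is an assumption, not from the patent):

```python
def duration_coefficients(period_duration, image_count):
    """Evenly space image_count target images over the display period and
    return each image's duration coefficient in [0, 1)."""
    # Time interval between two adjacent target images (S2041).
    interval = period_duration / image_count
    coefficients = []
    for display_order in range(image_count):
        start_display = interval * display_order           # start display duration
        coefficients.append(start_display / period_duration)  # S2042 ratio
    return coefficients
```

For example, `duration_coefficients(2.0, 4)` gives `[0.0, 0.25, 0.5, 0.75]`: the first image is shown at the start of the period, the last three quarters of the way through.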
The technical scheme provided by the disclosure at least brings the following beneficial effects: the start display duration can be determined from the time interval between target images, and the duration coefficient of each target image can then be determined from its start display duration, so that the adjustable point position changes in position and color across successive target images, presenting the account with the effect of a dynamic image.
In one design, in order to determine the target positions of the vertices of the triangles in the target image, as shown in fig. 6, S205 provided in the embodiment of the disclosure specifically includes S2051-S2052.
S2051, the image processing device determines the starting point coordinates and the end point coordinates of the target vertex from the vector array according to the vertex index array.
Wherein the target vertex is any one vertex of the plurality of triangles. The start point coordinates of the target vertex are the start point coordinates of the vector to be processed corresponding to the target vertex, and the end point coordinates of the target vertex are the end point coordinates of that vector to be processed.
As a possible implementation manner, the image processing apparatus determines, from the vector array, a start point coordinate and an end point coordinate of the target vertex in the vector to be processed according to the value of the target vertex in the vertex index array.
S2052, the image processing device determines the target position of the target vertex in the target image according to the time length coefficient of the target image, the starting point coordinate and the end point coordinate of the target vertex.
As a possible implementation manner, for a target vertex in a target image, the image processing device inputs a time length coefficient of the target image, a coordinate value of a start point coordinate and a coordinate value of an end point coordinate of the target vertex in the image to be processed into a preset formula to determine a coordinate value of a target position of the target vertex in the target image.
In one case, the target position of the target vertex in the target image satisfies the following equation one:
(xm, ym) = ([xa × (1 - v) + xb × v], [ya × (1 - v) + yb × v])    Equation 1
Wherein xm is the abscissa of the target vertex in the target image, ym is the ordinate of the target vertex in the target image, xa is the abscissa of the start point coordinates of the target vertex, v is the duration coefficient of the target image, xb is the abscissa of the end point coordinates of the target vertex, ya is the ordinate of the start point coordinates of the target vertex, and yb is the ordinate of the end point coordinates of the target vertex.
It should be noted that, the first formula may be used as an execution function for determining the position of the target vertex in the target image in the open graphic library.
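Equation 1 is an ordinary linear interpolation between the start point and end point coordinates, weighted by the duration coefficient v; a minimal Python sketch (illustrative, not the patent's implementation):

```python
def target_position(start_point, end_point, v):
    """Equation 1: interpolate a vertex between its start point and end
    point coordinates using the duration coefficient v (0 = start, 1 = end)."""
    xa, ya = start_point
    xb, yb = end_point
    xm = xa * (1 - v) + xb * v
    ym = ya * (1 - v) + yb * v
    return (xm, ym)
```

With v = 0 the vertex stays at its start point; with v = 1 it reaches the motion vector's end point, matching the behavior of the duration coefficient described in S204. An anchor vertex, whose start and end points coincide, stays fixed for every v.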
The technical scheme provided by the disclosure at least brings the following beneficial effects: the method for realizing the target position of the target vertex in the target image can provide a data base for the subsequent rendering process so as to ensure the accuracy of dynamic display of the image to be processed.
In one design, in order to determine the target color of the vertices of the triangles in the target image, as shown in fig. 6, S205 provided by the embodiment of the disclosure specifically further includes S2053-S2054 described below after S2051.
S2053, the image processing device acquires the starting point color value of the target vertex from the image to be processed according to the starting point coordinate of the target vertex, and acquires the ending point color value of the target vertex from the image to be processed according to the ending point coordinate of the target vertex.
The start point color value of the target vertex is the color value, in the image to be processed, at the start point of the vector to be processed corresponding to the target vertex; the end point color value of the target vertex is the color value, in the image to be processed, at the end point of that vector.
As a possible implementation manner, the image processing apparatus extracts a start point color value of the target vertex from the image to be processed according to the start point of the target vertex, and the image processing apparatus extracts an end point color value of the target vertex from the image to be processed according to the end point of the target vertex.
It should be noted that the color values referred to in the embodiments of the present disclosure may be specifically one color value or a combination of multiple color values in the RGB color mode.
The implementation manner of extracting the color value from the image to be processed in this step may refer to the prior art, and will not be described herein.
S2054, the image processing device determines a target color value of the target vertex in the target image according to the time length coefficient of the target image, the starting point color value and the end point color value of the target vertex.
As a possible implementation manner, the image processing device inputs the duration coefficient of the target image, the starting point color value and the end point color value of the target vertex into a preset formula to obtain the color value of the target color of the target vertex in the target image.
It should be noted that, the color value of the target color of the target vertex in the target image includes one color value or a combination of multiple color values in the RGB color mode.
In one case, the target color value of the target vertex in the target image provided by the embodiment of the disclosure satisfies the following formula two:
xn = xp × (1 - v) + xq × v    Formula 2
Wherein xn is the color value of the target vertex in the target image, xp is the start point color value of the target vertex, v is the duration coefficient of the target image, and xq is the end point color value of the target vertex.
In one case, the above formula two may also be used as an execution function for determining the target color value of the target vertex in the target image in the open graphic library.
It can be understood that, through the above formula two, the open graphic library can fuse the color of the starting position and the color of the ending position of the target vertex to obtain the target color of the target vertex in the target image, and draw the target image according to the color of the target vertex.
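Formula 2 is the same linear interpolation applied to color; applied per channel, it blends the start and end RGB values (an illustrative Python sketch; the per-channel application follows the remark above that a color value may be a combination of channel values):

```python
def target_color(start_rgb, end_rgb, v):
    """Formula 2 per channel: xn = xp * (1 - v) + xq * v, blending the
    start point color toward the end point color as v goes from 0 to 1."""
    return tuple(xp * (1 - v) + xq * v for xp, xq in zip(start_rgb, end_rgb))
```

For example, `target_color((255, 0, 0), (0, 0, 255), 0.5)` returns `(127.5, 0.0, 127.5)`, the midpoint blend of red and blue.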
The technical scheme provided by the disclosure at least brings the following beneficial effects: the method for realizing the target color of the target vertex in the target image can provide a data base for the subsequent rendering process so as to ensure the accuracy of dynamic display of the image to be processed.
In addition, the present disclosure also provides an image processing apparatus, and referring to fig. 7, the image processing apparatus 11 includes an acquisition unit 111, a generation unit 112, a determination unit 113, a processing unit 114, and a display unit 115.
The obtaining unit 111 is configured to obtain anchor positions of a plurality of anchors in the image to be processed, and at least one motion vector. The motion vector is used to characterize the start and end points of the adjustable point locations in the image to be processed. For example, in connection with fig. 2, the acquisition unit 111 may be used to perform S201.
The generating unit 112 is configured to convert the plurality of anchor positions acquired by the acquiring unit 111 into a plurality of anchor vectors, and combine the plurality of anchor vectors and at least one motion vector to generate a vector array. The starting point and the end point of the anchor point vector are anchor points corresponding to the anchor point vector. For example, in connection with fig. 2, the generating unit 112 may be used to perform S202.
The obtaining unit 111 is further configured to divide the image to be processed into a plurality of triangles according to the vector array and the triangle splitting algorithm, so as to obtain the vertex index array. The vertex index array comprises positions of the vertexes of the triangles corresponding to each other in the vector array. For example, in connection with fig. 2, the acquisition unit 111 may be used to perform S203.
A determining unit 113, configured to determine target positions of vertices of the triangles and target colors of vertices of the triangles respectively according to the vector array, the vertex index array, and the time length coefficients. The duration factor is used to characterize the display order of the images within the display period. For example, in connection with fig. 2, the determination unit 113 may be used to perform S205.
And a processing unit 114, configured to render the image to be processed according to the target position and the target color determined by the determining unit 113, so as to obtain a plurality of target images. For example, in connection with fig. 2, the processing unit 114 may be configured to perform S206.
A display unit 115 for displaying the plurality of target images obtained by the processing unit 114 in the display order in the display period. For example, in connection with fig. 2, the display unit 115 may be used to perform S207.
Optionally, as shown in fig. 7, the generating unit 112 provided in the embodiment of the present disclosure is specifically configured to:
And acquiring coordinate values in the starting point coordinates and coordinate values in the end point coordinates of each vector to be processed. The vector to be processed is any one vector of a plurality of anchor vectors and at least one motion vector. For example, in connection with fig. 4, the generation unit 112 may be used to perform S2021.
And merging the acquired coordinate values to generate a vector array. The continuous preset number of coordinate values in the vector array corresponds to a vector to be processed. For example, in connection with fig. 4, the generation unit 112 may be used to perform S2022.
Optionally, as shown in fig. 7, the determining unit 113 provided in the embodiment of the present disclosure is specifically configured to:
and determining the starting display time of the target image, wherein the starting display time of the target image comprises the time required for starting to display the target image in the display period. For example, in connection with fig. 5, the determination unit 113 may be used to perform S2041.
And determining the ratio of the display starting time length to the display period of the target image as the time length coefficient of the target image. For example, in connection with fig. 5, the determination unit 113 may be used to perform S2042.
Optionally, as shown in fig. 7, the determining unit 113 provided in the embodiment of the present disclosure is specifically configured to:
and determining the starting point coordinates and the ending point coordinates of the target vertexes from the vector array according to the vertex index array. The target vertex is any one of a plurality of triangles. For example, in connection with fig. 6, the determination unit 113 may be used to perform S2051.
And determining the target position of the target vertex in the target image according to the time length coefficient of the target image, the starting point coordinate and the end point coordinate of the target vertex. For example, in connection with fig. 6, the determination unit 113 may be used to perform S2052.
Optionally, as shown in fig. 7, the target position of the target vertex in the target image provided in the embodiment of the present disclosure satisfies the following formula one:
(xm, ym) = ([xa × (1 - v) + xb × v], [ya × (1 - v) + yb × v])    Formula 1
Wherein xm is the abscissa of the target vertex in the target image, ym is the ordinate of the target vertex in the target image, xa is the abscissa of the start point coordinates of the target vertex, v is the duration coefficient of the target image, xb is the abscissa of the end point coordinates of the target vertex, ya is the ordinate of the start point coordinates of the target vertex, and yb is the ordinate of the end point coordinates of the target vertex.
Optionally, as shown in fig. 7, the determining unit 113 provided in the embodiment of the present disclosure is specifically configured to:
and determining the starting point coordinates and the ending point coordinates of the target vertexes from the vector array according to the vertex index array. The target vertex is any one of a plurality of triangles. For example, in connection with fig. 6, the determination unit 113 may be used to perform S2051.
And acquiring the start point color value of the target vertex from the image to be processed according to the start point coordinates of the target vertex, and acquiring the end point color value of the target vertex from the image to be processed according to the end point coordinates of the target vertex. For example, in connection with fig. 6, the determination unit 113 may be used to perform S2053.
And determining the target color value of the target vertex in the target image according to the duration coefficient of the target image and the start point color value and end point color value of the target vertex. For example, in connection with fig. 6, the determination unit 113 may be used to perform S2054.
Optionally, as shown in fig. 7, the target color value of the target vertex in the target image provided in the embodiment of the present disclosure satisfies the following formula two:
xn = xp × (1 - v) + xq × v    Formula 2
Wherein xn is the target color value of the target vertex in the target image, xp is the start point color value of the target vertex, v is the duration coefficient of the target image, and xq is the end point color value of the target vertex.
The specific manner in which the individual modules or units perform the operations in the apparatus of the above embodiments has been described in detail in relation to the embodiments of the method, and will not be described in detail here.
Fig. 8 is a schematic structural diagram of an electronic device provided by the present disclosure, where the electronic device is configured to perform an image processing method provided by an embodiment of the present disclosure. As shown in fig. 8, the electronic device 30 may include at least one processor 301 and a memory 303 for storing processor-executable instructions. Wherein the processor 301 is configured to execute instructions in the memory 303 to implement the image processing method in the above-described embodiment.
In addition, the electronic device 30 may also include a communication bus 302 and at least one communication interface 304.
The processor 301 may be a central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs in accordance with aspects of the present disclosure.
Communication bus 302 may include a path to transfer information between the above components.
The communication interface 304 uses any transceiver-like device for communicating with other devices or communication networks, such as an Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
The memory 303 may be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may be standalone and connected to the processor by a bus, or may be integrated with the processor.
The memory 303 is used for storing instructions for executing the disclosed aspects, and is controlled by the processor 301 for execution. The processor 301 is configured to execute instructions stored in the memory 303 to perform the functions of the methods of the present disclosure.
As an example, in connection with fig. 8, the functions implemented by the acquisition unit 111, the generation unit 112, the determination unit 113, the processing unit 114, and the display unit 115 in the image processing apparatus are the same as those of the processor 301 in fig. 8.
In a particular implementation, as one embodiment, processor 301 may include one or more CPUs, such as CPU0 and CPU1 of FIG. 8.
In a particular implementation, as one embodiment, electronic device 30 may include multiple processors, such as processor 301 and processor 307 in FIG. 8. Each of these processors may be a single-core (single-CPU) processor or may be a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In a particular implementation, as one embodiment, the electronic device 30 may also include an output device 305 and an input device 306. The output device 305 communicates with the processor 301 and may display information in a variety of ways. For example, the output device 305 may be a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, or a projector. The input device 306 communicates with the processor 301 and may accept input from an account in a variety of ways. For example, the input device 306 may be a mouse, a keyboard, a touch screen device, or a sensing device.
Those skilled in the art will appreciate that the structure shown in fig. 8 is not limiting of the electronic device 30 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In addition, the present disclosure also provides a computer-readable storage medium including instructions that, when executed by a processor, cause the processor to perform the image processing method as provided by the above embodiments.
In addition, the present disclosure also provides a computer program product comprising instructions which, when executed by a processor, implement the image processing method as provided in the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (15)

1. An image processing method, comprising:
Acquiring anchor point positions of a plurality of anchor points in an image to be processed and at least one motion vector; the motion vector is used for representing a starting point and an ending point of an adjustable point in the image to be processed;
Converting the obtained anchor positions into anchor vectors, and combining the anchor vectors and the at least one motion vector to generate a vector array; the starting point and the end point of the anchor point vector are anchor points corresponding to the anchor point vector;
Dividing the image to be processed into a plurality of triangles according to the vector array and a triangle subdivision algorithm to obtain a vertex index array; the vertices of each triangle are anchor points or starting points of adjustable points, and the vertex index array comprises the positions, in the vector array, corresponding to the vertices of the plurality of triangles;
Respectively determining target positions of vertexes of the triangles in the target image and target colors of vertexes of the triangles in the target image according to the vector array, the vertex index array and the time length coefficient of the target image; the time length coefficient is the ratio of the starting display time length of the target image to the display period, and the starting display time length of the target image is the time length of starting to display the target image in the display period;
rendering the image to be processed according to the determined target position and the target color to obtain a plurality of target images;
And displaying the plurality of target images in the display period according to a preset display sequence.
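The per-frame interpolation described in claim 1 can be illustrated with a minimal Python sketch. This is not code from the patent: the function names are illustrative, and the triangle subdivision and rendering steps are elided. It shows only the duration coefficient (ratio of a frame's starting display time to the display period) and the linear blend applied to each vertex.

```python
# Illustrative sketch of the frame-interpolation step of claim 1.
# (Names are hypothetical; triangulation and rendering are omitted.)

def duration_coefficient(start_display_ms, display_period_ms):
    # Time-length coefficient v: ratio of the frame's starting display
    # time within the display period to the whole display period.
    return start_display_ms / display_period_ms

def interpolate_vertex(start, end, v):
    # Linear blend between a vertex's start point and end point; the same
    # blend is used for positions (formula one) and colors (formula two).
    return tuple(a * (1 - v) + b * v for a, b in zip(start, end))

# Example: a vertex moving from (0, 0) to (100, 50), sampled halfway
# through the display period (v = 0.5).
v = duration_coefficient(500, 1000)
print(interpolate_vertex((0, 0), (100, 50), v))  # (50.0, 25.0)
```

Each frame in the display period gets its own v, so displaying the frames in order animates every adjustable point from its start point to its end point while the anchor points (whose start and end coincide) stay fixed.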
2. The image processing method of claim 1, wherein the merging the plurality of anchor vectors and the at least one motion vector to generate a vector array comprises:
acquiring coordinate values in a starting point coordinate and coordinate values in an end point coordinate of each vector to be processed; the vector to be processed is any one vector of the anchor point vectors and the at least one motion vector;
Combining the acquired coordinate values to generate the vector array; the continuous preset number of coordinate values in the vector array corresponds to one vector to be processed.
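The merge step of claim 2 amounts to flattening each vector's start and end coordinates into one array. A small Python sketch follows; the "continuous preset number" is assumed here to be 4 (x and y for the start point, then x and y for the end point), which is an illustrative choice, not stated in the claim.

```python
# Sketch of claim 2's merge step: flatten every vector into the array.
# Assumption: the preset number of consecutive values per vector is 4.

def build_vector_array(vectors):
    # vectors: list of ((x_start, y_start), (x_end, y_end)) pairs
    array = []
    for (xs, ys), (xe, ye) in vectors:
        array.extend([xs, ys, xe, ye])  # 4 consecutive values per vector
    return array

anchors = [((0, 0), (0, 0))]       # anchor vector: start equals end
motions = [((10, 10), (30, 20))]   # motion vector of an adjustable point
print(build_vector_array(anchors + motions))  # [0, 0, 0, 0, 10, 10, 30, 20]
```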
3. The image processing method according to claim 1 or 2, wherein determining the target positions of the vertices of the plurality of triangles in the target image from the vector array, the vertex index array, and the time length coefficients of the target image comprises:
Determining the starting point coordinates and the end point coordinates of the target vertexes from the vector array according to the vertex index array; the target vertex is any vertex in the triangles;
And determining the target position of the target vertex in the target image according to the time length coefficient of the target image, the starting point coordinate and the end point coordinate of the target vertex.
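The lookup in claim 3 — recovering a vertex's start and end coordinates from the flat vector array via the vertex index array — can be sketched as follows. The 4-values-per-vector layout is an assumption carried over from the illustrative flattening above, not something the claim fixes.

```python
# Sketch of claim 3's lookup: a vertex index selects one vector's slot
# in the flat vector array (assumed layout: 4 values per vector).

def vertex_endpoints(vector_array, vertex_index):
    base = vertex_index * 4
    start = (vector_array[base], vector_array[base + 1])
    end = (vector_array[base + 2], vector_array[base + 3])
    return start, end

va = [0, 0, 0, 0, 10, 10, 30, 20]
print(vertex_endpoints(va, 1))  # ((10, 10), (30, 20))
```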
4. The image processing method according to claim 3, wherein a target position of the target vertex in the target image satisfies the following formula one:
(x_m, y_m) = ([x_a × (1-v) + x_b × v], [y_a × (1-v) + y_b × v])   Formula one
Wherein x_m is the abscissa of the target vertex in the target image, y_m is the ordinate of the target vertex in the target image, x_a is the abscissa of the start point coordinate of the target vertex, v is the duration coefficient of the target image, x_b is the abscissa of the end point coordinate of the target vertex, y_a is the ordinate of the start point coordinate of the target vertex, and y_b is the ordinate of the end point coordinate of the target vertex.
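Formula one is a plain per-coordinate linear interpolation; a direct Python transcription (names match the formula's symbols, nothing else is assumed):

```python
# Formula one: target position of a vertex is a linear blend of its
# start coordinates (xa, ya) and end coordinates (xb, yb), weighted by
# the duration coefficient v.

def target_position(xa, ya, xb, yb, v):
    xm = xa * (1 - v) + xb * v
    ym = ya * (1 - v) + yb * v
    return xm, ym

print(target_position(0.0, 0.0, 1.0, 1.0, 0.25))  # (0.25, 0.25)
```

At v = 0 the vertex sits at its start point, at v = 1 at its end point, so successive frames trace a straight-line path between the two.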
5. The image processing method according to claim 1 or 2, wherein determining a target color of vertices of the plurality of triangles in the target image from the vector array, the vertex index array, and a time length coefficient of the target image comprises:
Determining the starting point coordinates and the end point coordinates of the target vertexes from the vector array according to the vertex index array; the target vertex is any vertex in the triangles;
acquiring a starting point color value of the target vertex from the image to be processed according to the starting point coordinate of the target vertex, and acquiring an end point color value of the target vertex from the image to be processed according to the end point coordinate of the target vertex;
and determining a target color value of the target vertex in the target image according to the time length coefficient of the target image, the starting point color value and the end point color value of the target vertex.
6. The image processing method according to claim 5, wherein a target color value of the target vertex in the target image satisfies the following formula two:
x_n = [x_p × (1-v) + x_q × v]   Formula two
Wherein x_n is the target color value of the target vertex in the target image, x_p is the starting point color value of the target vertex, v is the time length coefficient of the target image, and x_q is the end point color value of the target vertex.
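Formula two applies the same blend to a color value; a direct transcription, with an illustrative 8-bit grayscale example (the 0–255 range is an assumption for the example only):

```python
# Formula two: target color value as a linear blend of the start color
# xp and the end color xq, weighted by the duration coefficient v.

def target_color(xp, xq, v):
    return xp * (1 - v) + xq * v

# Example: a vertex fading from black (0) to white (255) at v = 0.2.
print(target_color(0, 255, 0.2))  # 51.0
```

For multi-channel colors the formula would be applied per channel, just as formula one is applied per coordinate.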
7. An image processing apparatus, characterized by comprising an acquisition unit, a generation unit, a determination unit, a processing unit, and a display unit;
The acquisition unit is used for acquiring anchor point positions of a plurality of anchor points in the image to be processed and at least one motion vector; the motion vector is used for representing a starting point and an ending point of an adjustable point in the image to be processed;
The generating unit is configured to convert the multiple anchor positions acquired by the acquiring unit into multiple anchor vectors, and combine the multiple anchor vectors and the at least one motion vector to generate a vector array; the starting point and the end point of the anchor point vector are anchor points corresponding to the anchor point vector;
The acquisition unit is further used for dividing the image to be processed into a plurality of triangles according to the vector array and a triangle subdivision algorithm so as to acquire a vertex index array; the vertices of each triangle are anchor points or starting points of adjustable points, and the vertex index array comprises the positions, in the vector array, corresponding to the vertices of the plurality of triangles;
The determining unit is used for respectively determining target positions of vertexes of the triangles in the target image and target colors of the vertexes of the triangles in the target image according to the vector array, the vertex index array and the time length coefficient of the target image; the time length coefficient is the ratio of the starting display time length of the target image to the display period, and the starting display time length of the target image is the time length of starting to display the target image in the display period;
The processing unit is used for rendering the image to be processed according to the target position and the target color determined by the determining unit to obtain a plurality of target images;
The display unit is used for displaying the plurality of target images obtained by the processing unit according to a preset display sequence in the display period.
8. The image processing device according to claim 7, wherein the generating unit is specifically configured to:
acquiring coordinate values in a starting point coordinate and coordinate values in an end point coordinate of each vector to be processed; the vector to be processed is any one vector of the anchor point vectors and the at least one motion vector;
Combining the acquired coordinate values to generate the vector array; the continuous preset number of coordinate values in the vector array corresponds to one vector to be processed.
9. The image processing device according to claim 7 or 8, wherein the determining unit is specifically configured to:
Determining the starting point coordinates and the end point coordinates of the target vertexes from the vector array according to the vertex index array; the target vertex is any vertex in the triangles;
And determining the target position of the target vertex in the target image according to the time length coefficient of the target image, the starting point coordinate and the end point coordinate of the target vertex.
10. The image processing apparatus according to claim 9, wherein a target position of the target vertex in the target image satisfies the following formula one:
(x_m, y_m) = ([x_a × (1-v) + x_b × v], [y_a × (1-v) + y_b × v])   Formula one
Wherein x_m is the abscissa of the target vertex in the target image, y_m is the ordinate of the target vertex in the target image, x_a is the abscissa of the start point coordinate of the target vertex, v is the duration coefficient of the target image, x_b is the abscissa of the end point coordinate of the target vertex, y_a is the ordinate of the start point coordinate of the target vertex, and y_b is the ordinate of the end point coordinate of the target vertex.
11. The image processing device according to claim 7 or 8, wherein the determining unit is specifically configured to:
Determining the starting point coordinates and the end point coordinates of the target vertexes from the vector array according to the vertex index array; the target vertex is any vertex in the triangles;
acquiring a starting point color value of the target vertex from the image to be processed according to the starting point coordinate of the target vertex, and acquiring an end point color value of the target vertex from the image to be processed according to the end point coordinate of the target vertex;
and determining a target color value of the target vertex in the target image according to the time length coefficient of the target image, the starting point color value and the end point color value of the target vertex.
12. The image processing apparatus according to claim 11, wherein a target color value of the target vertex in the target image satisfies the following formula two:
x_n = [x_p × (1-v) + x_q × v]   Formula two
Wherein x_n is the target color value of the target vertex in the target image, x_p is the starting point color value of the target vertex, v is the time length coefficient of the target image, and x_q is the end point color value of the target vertex.
13. An electronic device, comprising: a processor, a memory for storing instructions executable by the processor; wherein the processor is configured to execute instructions to implement the image processing method of any of claims 1-6.
14. A computer readable storage medium comprising instructions which, when executed by a processor, cause the processor to perform the image processing method of any of claims 1-6.
15. A computer program product comprising instructions which, when executed by a processor, implement the image processing method of any of claims 1-6.
CN202011535228.2A 2020-12-22 2020-12-22 Image processing method and device, electronic equipment and storage medium Active CN112488977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011535228.2A CN112488977B (en) 2020-12-22 2020-12-22 Image processing method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112488977A CN112488977A (en) 2021-03-12
CN112488977B true CN112488977B (en) 2024-06-11

Family

ID=74914331


Country Status (1)

Country Link
CN (1) CN112488977B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10261110A (en) * 1997-03-21 1998-09-29 Mitsubishi Electric Corp Dynamic image generation device
JP2001283244A (en) * 2000-03-29 2001-10-12 Konami Co Ltd Three-dimensional image compositing device, its method, information storage medium, program distributing device and its method
JP2004078430A (en) * 2002-08-13 2004-03-11 Monolith Co Ltd Image generation method and device
JP2014048941A (en) * 2012-08-31 2014-03-17 Axell Corp Image display processing method and image display processing device
CN110555812A (en) * 2019-07-24 2019-12-10 广州视源电子科技股份有限公司 image adjusting method and device and computer equipment
CN110942500A (en) * 2019-11-29 2020-03-31 广州久邦世纪科技有限公司 Method and device for converting static graph into dynamic graph
CN111340918A (en) * 2020-03-06 2020-06-26 北京奇艺世纪科技有限公司 Dynamic graph generation method and device, electronic equipment and computer readable storage medium
CN111696185A (en) * 2019-03-12 2020-09-22 北京奇虎科技有限公司 Method and device for generating dynamic expression image sequence by using static face image




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant