CN113179376A - Video comparison method, device and equipment based on three-dimensional animation and storage medium - Google Patents

Video comparison method, device and equipment based on three-dimensional animation and storage medium

Info

Publication number
CN113179376A
Authority
CN
China
Prior art keywords
picture
contour
pictures
video
histogram
Prior art date
Legal status
Pending
Application number
CN202110476140.6A
Other languages
Chinese (zh)
Inventor
刘珂
张义
张娜
王艳
Current Assignee
Shandong Digihuman Technology Co ltd
Original Assignee
Shandong Digihuman Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shandong Digihuman Technology Co ltd filed Critical Shandong Digihuman Technology Co ltd
Priority to CN202110476140.6A priority Critical patent/CN113179376A/en
Publication of CN113179376A publication Critical patent/CN113179376A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G06T13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/816 Monomedia components thereof involving special video data, e.g. 3D video

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a video comparison method, device, equipment and storage medium based on three-dimensional animation, wherein the method comprises the following steps: reading a video of a real person's movement, and dividing the video into a plurality of pictures according to the frame rate; adding a fixed virtual camera to the model in the Unity engine, and extracting the picture of the virtual camera in real time while the model moves; performing contour extraction on the divided pictures to obtain a first contour picture, and at the same time performing contour extraction on the pictures extracted in real time to obtain a second contour picture; and matching the first contour picture with the second contour picture, finding the two contour pictures with a high matching degree, and synchronously displaying the divided picture and the real-time extracted picture corresponding to those two contour pictures. In this way, the model animation can be synchronized in real time with the actions of the video character, so that the model animation can be viewed in comparison with the real-person video.

Description

Video comparison method, device and equipment based on three-dimensional animation and storage medium
Technical Field
The invention relates to the field of video comparison, in particular to a video comparison method, a video comparison device, video comparison equipment and a storage medium based on three-dimensional animation.
Background
Model animation is generated by changing, over a period of time, the position of each model in space and its rotation angle. The actions of the real person in a video are performed in a real environment; because the model animation is virtual and differs from the real person's actions, the video of the real person's actions needs to serve as an auxiliary reference so that viewers can understand the animation better.
At present, model animation is played by controlling its time axis through an interface provided by the engine. However, since the time axis is fixed, the model animation cannot jump to arbitrary playback points. In addition, the moments at which the real-person video completes each part of an action do not match the moments at which the model completes the corresponding part, so the two pictures diverge during real-time comparison, which easily leads to misunderstanding.
Therefore, how to synchronize the model animation with the real-person video is a technical problem that those skilled in the art urgently need to solve.
Disclosure of Invention
In view of the above, the present invention provides a method, an apparatus, a device and a storage medium for video comparison based on three-dimensional animation, which can achieve real-time synchronization between model animation and video character motion. The specific scheme is as follows:
A video comparison method based on three-dimensional animation comprises the following steps:
reading a video of real person movement, and dividing the video into a plurality of pictures according to a frame rate;
adding a fixed virtual camera to the model in the Unity engine, and extracting the picture of the virtual camera in real time during the model's motion;
carrying out contour extraction processing on the divided pictures to obtain a first contour picture, and simultaneously carrying out contour extraction processing on the pictures extracted in real time to obtain a second contour picture;
and matching the first contour picture with the second contour picture, finding out two contour pictures with high matching degree, and synchronously displaying the divided pictures corresponding to the two contour pictures with high matching degree and the pictures extracted in real time.
Preferably, in the video comparison method based on three-dimensional animation provided in the embodiment of the present invention, the contour extraction processing is performed on the divided picture to obtain a first contour picture, which specifically includes:
extracting the figures in the divided pictures to obtain figure pictures;
carrying out gray level processing on the figure picture to obtain a first gray level picture;
determining a first outer boundary and a first hole boundary through the surrounding relation of the first gray picture boundary;
and obtaining a first outline picture according to the first outer boundary, the first hole boundary and the hierarchical relationship between the first outer boundary and the first hole boundary.
Preferably, in the video comparison method based on three-dimensional animation provided in the embodiment of the present invention, the performing contour extraction processing on the picture extracted in real time to obtain a second contour picture specifically includes:
carrying out gray processing on the real-time extracted picture to obtain a second gray picture;
determining a second outer boundary and a second hole boundary through the surrounding relation of the second gray picture boundary;
and obtaining a second outline picture according to the second outer boundary, the second hole boundary and the hierarchical relationship between the second outer boundary and the second hole boundary.
Preferably, in the above video comparison method based on three-dimensional animation provided in the embodiment of the present invention, before matching the first contour picture with the second contour picture, the method further includes:
and carrying out gray level processing on the first contour picture and the second contour picture.
Preferably, in the video comparison method based on three-dimensional animation provided in the embodiment of the present invention, the matching of the first outline picture and the second outline picture is performed to find out two outline pictures with high matching degree, which specifically includes:
processing the first contour picture into a first histogram, and simultaneously processing the second contour picture into a second histogram;
performing normalization processing on the first histogram and the second histogram;
and comparing the first histogram and the second histogram after normalization processing to find out two contour pictures with high matching degree.
Preferably, in the video comparison method based on three-dimensional animation provided in the embodiment of the present invention, the comparing the first histogram and the second histogram after normalization to find out two contour pictures with high matching degree includes:
calculating the distance between the first histogram and the second histogram after normalization processing;
obtaining a comparison value according to the calculated distance;
and selecting two histograms which are closest to 0 with the comparison numerical value, and judging that the matching degree of the two contour pictures corresponding to the two histograms is the highest.
Preferably, in the above video comparison method based on three-dimensional animation provided by the embodiment of the present invention, the angle of the virtual camera is kept consistent with the angle at which the character in the video was shot.
The embodiment of the invention also provides a video comparison device based on three-dimensional animation, which comprises:
the picture segmentation module is used for reading a video of real person movement and segmenting the video into a plurality of pictures according to a frame rate;
the picture extraction module is used for adding a fixed virtual camera to the model in the Unity engine and extracting the picture of the virtual camera in real time during the model's motion;
the contour extraction module is used for carrying out contour extraction processing on the divided pictures to obtain a first contour picture and simultaneously carrying out contour extraction processing on the pictures extracted in real time to obtain a second contour picture;
and the picture matching module is used for matching the first contour picture with the second contour picture, finding out two contour pictures with high matching degree, and synchronously displaying the divided pictures corresponding to the two contour pictures with high matching degree and the pictures extracted in real time.
The embodiment of the invention also provides video comparison equipment based on three-dimensional animation, which comprises a processor and a memory, wherein the processor implements the above video comparison method based on three-dimensional animation provided by the embodiment of the invention when executing the computer program stored in the memory.
The embodiment of the present invention further provides a computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, implements the above-mentioned video comparison method based on three-dimensional animation according to the embodiment of the present invention.
According to the technical scheme, the video comparison method based on three-dimensional animation provided by the invention comprises the following steps: reading a video of the real person's movement, and dividing the video into a plurality of pictures according to the frame rate; adding a fixed virtual camera to the model in the Unity engine, and extracting the picture of the virtual camera in real time while the model moves; carrying out contour extraction processing on the divided pictures to obtain a first contour picture, and simultaneously carrying out contour extraction processing on the pictures extracted in real time to obtain a second contour picture; and matching the first contour picture with the second contour picture, finding out the two contour pictures with a high matching degree, and synchronously displaying the divided picture and the real-time extracted picture corresponding to those two contour pictures.
According to the method, firstly, the outline picture extracted from the picture of the virtual camera is matched with the outline picture extracted from the picture segmented by the video in the real-time motion process of the model, two outline pictures with high matching degree are found after matching is finished, and finally the picture of the virtual camera corresponding to the two outline pictures with high matching degree and the picture segmented by the video are synchronously displayed, so that the real-time synchronization of the motion of the model animation and the motion of the video character can be realized, and the model animation can be compared with the video of a real person for viewing. In addition, the invention also provides a corresponding device, equipment and a computer readable storage medium aiming at the video comparison method based on the three-dimensional animation, so that the method has higher practicability, and the device, the equipment and the computer readable storage medium have corresponding advantages.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the related art, the drawings used in the description of the embodiments or the related art will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1 is a flow chart of a video comparison method based on three-dimensional animation according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a real-time monitoring model moving picture according to an embodiment of the present invention;
FIG. 3 is a diagram of a picture extracted from a virtual camera according to an embodiment of the present invention;
FIG. 4 is another view of a virtual camera according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a first grayscale picture and a first outline picture according to an embodiment of the invention;
FIG. 6 is a schematic diagram of a second gray scale picture and a second profile picture corresponding to FIG. 3;
FIG. 7 is a schematic diagram of a second gray scale picture and a second profile picture corresponding to FIG. 4;
FIG. 8 is a first histogram obtained from the corresponding processing of FIG. 5;
FIG. 9 is a second histogram obtained from the corresponding processing of FIG. 6;
FIG. 10 is a second histogram resulting from the corresponding processing of FIG. 7;
fig. 11 is a schematic structural diagram of a video comparison apparatus based on three-dimensional animation according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a video comparison method based on three-dimensional animation, which comprises the following steps as shown in FIG. 1:
s101, reading a video of real person movement, and dividing the video into a plurality of pictures according to a frame rate;
Specifically, a video of the real person's action is shot in advance with a camera; this camera captures a real object in the real environment. A video processing tool then reads the video and divides it into individual pictures according to the video's frame rate. The divided pictures can be sorted in ascending order starting from the first frame of the video (i.e., chronological order), with each picture named after its frame number.
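By way of illustration only (the patent does not name any specific video processing tool), a minimal Python sketch of this splitting step using OpenCV might look as follows; the video file name and the output directory are hypothetical:

import os
import cv2

cap = cv2.VideoCapture("real_person_action.mp4")   # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS)                     # frame rate of the source video
os.makedirs("frames", exist_ok=True)

frame_index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Name each divided picture by its frame number so chronological order is kept.
    cv2.imwrite(os.path.join("frames", f"{frame_index}.png"), frame)
    frame_index += 1

cap.release()
print(f"split {frame_index} pictures at {fps:.2f} fps")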
S102, adding a fixed virtual camera to the model in the Unity engine, and extracting the picture of the virtual camera in real time during the motion of the model;
It should be noted that a fixed camera is added to the moving model in the Unity engine; this camera is virtual and is used exclusively for rendering the virtual model. It is added by placing the virtual camera at a specified position through the engine's own functionality. Preferably, as shown in FIG. 2, after the angle of the virtual camera is made consistent with the angle at which the video character was shot in step S101, the picture of the fixed virtual camera is extracted in real time while the model moves, so that the model can be monitored in real time. FIG. 3 and FIG. 4 show different views extracted from the virtual camera.
S103, carrying out contour extraction processing on the divided pictures to obtain a first contour picture, and simultaneously carrying out contour extraction processing on the pictures extracted in real time to obtain a second contour picture;
and S104, matching the first contour picture with the second contour picture, finding out two contour pictures with high matching degree, and synchronously displaying the divided pictures corresponding to the two contour pictures with high matching degree and the pictures extracted in real time.
In the video comparison method based on the three-dimensional animation provided by the embodiment of the invention, firstly, the outline picture extracted from the picture of the virtual camera is matched with the outline picture extracted from the picture divided from the video in the real-time movement process of the model, then two outline pictures with high matching degree are found out, and finally the picture of the virtual camera corresponding to the two outline pictures with high matching degree and the picture divided from the video are synchronously displayed, so that the real-time synchronization of the motion of the model animation and the motion of a video character can be realized, and the model animation can be compared with the video of a real person for viewing.
Further, in a specific implementation, in the video comparison method based on three-dimensional animation provided in the embodiment of the present invention, the step S103 of performing contour extraction processing on the divided picture to obtain a first contour picture may specifically include: firstly, extracting the figure in the divided picture to obtain a figure picture; then carrying out gray level processing on the figure picture to obtain a first gray level picture; then determining a first outer boundary and a first hole boundary through the surrounding relation of the boundaries of the first gray picture; and finally obtaining a first outline picture according to the first outer boundary, the first hole boundary and the hierarchical relationship between them. Taking FIG. 5 as an example, the left side shows the first gray-scale picture obtained by gray-scale processing, and the right side shows the first outline picture corresponding to it.
Similarly, in a specific implementation, in the video comparison method based on the three-dimensional animation provided in the embodiment of the present invention, the step S103 performs contour extraction processing on the picture extracted in real time to obtain a second contour picture, which specifically includes: firstly, carrying out gray processing on a picture extracted in real time to obtain a second gray picture; then determining a second outer boundary and a second hole boundary according to the surrounding relation of the second gray picture boundary; and finally, obtaining a second outline picture according to the second outer boundary, the second hole boundary and the hierarchical relationship between the second outer boundary and the second hole boundary. Taking fig. 6 as an example, the left side shows a second gray scale picture obtained by performing gray scale processing on the picture in fig. 3, and the right side shows a second outline picture corresponding to the second gray scale picture. Taking fig. 7 as an example, the left side shows a second gray scale picture obtained by performing gray scale processing on the picture in fig. 4, and the right side shows a second outline picture corresponding to the second gray scale picture.
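The following is a hedged Python/OpenCV sketch of the contour extraction described in the two preceding paragraphs (the patent does not prescribe a library, and the step of extracting the figure from the video frames is omitted for brevity); the file names are hypothetical, and the outer boundaries, hole boundaries and their hierarchy map onto cv2.findContours with the RETR_CCOMP retrieval mode:

import cv2
import numpy as np

def extract_contour_picture(img_path):
    """Grayscale the picture, binarize it, then recover the outer boundaries,
    the hole boundaries and their hierarchy and draw them as a contour picture."""
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)            # grayscale processing
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # RETR_CCOMP organizes contours into two levels: outer boundaries and the
    # hole boundaries enclosed by them, matching the description above.
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
    canvas = np.zeros_like(gray)
    cv2.drawContours(canvas, contours, -1, 255, 1)          # render the contour picture
    return canvas

first_contour = extract_contour_picture("frames/120.png")    # a divided picture (hypothetical path)
second_contour = extract_contour_picture("virtual_cam.png")  # a real-time virtual camera picture (hypothetical path)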
It can be understood that the boundaries and the regions of the original image (i.e., the divided picture and the real-time extracted picture) have a one-to-one correspondence: an outer boundary corresponds to a connected region with pixel value 1, and a hole boundary corresponds to a region with pixel value 0, so the invention can recover the outline of the original image from its boundaries. In particular, this can be understood with the idea of encoding: different boundaries are assigned different integer values, so that the type of each boundary and the hierarchical relationship can be determined. The input binary image contains only 0s and 1s, with pixel values denoted f(i, j). During each raster scan line, the scan is interrupted in two cases:
In the first case, f(i, j-1) = 0 and f(i, j) = 1: f(i, j) is the starting point of an outer boundary.
In the second case, f(i, j) >= 1 and f(i, j+1) = 0: f(i, j) is the starting point of a hole boundary.
Then, starting from the starting point, the pixels on the boundary are marked. Each newly discovered boundary is assigned a unique identifier, called NBD. Initially NBD = 1, and it is incremented by 1 each time a new boundary is found. During this process, when a pixel with f(p, q) = 1 and f(p, q+1) = 0 is encountered, f(p, q) is set to -NBD, marking the termination point of the boundary on the right.
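The border-labelling scheme described here corresponds to the Suzuki-Abe border-following algorithm, which is the algorithm OpenCV's cv2.findContours is documented to use; continuing the previous sketch (and assuming binary is the binarized picture from it), the returned hierarchy can be read to tell outer boundaries from hole boundaries:

contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
if hierarchy is not None:
    # With RETR_CCOMP, hierarchy[0][i] = [next, previous, first_child, parent]:
    # a contour with no parent is an outer boundary; a contour with a parent is
    # a hole boundary lying inside that parent contour.
    for i, (_, _, _, parent) in enumerate(hierarchy[0]):
        kind = "outer boundary" if parent == -1 else f"hole boundary inside contour {parent}"
        print(f"contour {i}: {kind}")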
In a specific implementation, in the above video comparison method based on three-dimensional animation provided in the embodiment of the present invention, before performing step S104 to match the first contour picture with the second contour picture, the method may further include: carrying out gray level processing on the first outline picture and the second outline picture. Because the gray processing algorithm is independent of the contour extraction algorithm, performing gray processing again avoids interference from other factors after contour extraction.
In a specific implementation, in the above video comparison method based on three-dimensional animation provided in the embodiment of the present invention, step S104 of matching the first contour picture with the second contour picture and finding two contour pictures with a high matching degree may specifically include: processing the first contour picture into a first histogram H1 while processing the second contour picture into a second histogram H2; carrying out normalization processing on the first histogram H1 and the second histogram H2; and comparing the normalized first histogram H1 with the normalized second histogram H2 to find the two contour pictures with a high matching degree. FIG. 8 shows the first histogram H1 obtained from the corresponding processing of FIG. 5; FIG. 9 shows the second histogram H2 obtained from the corresponding processing of FIG. 6; FIG. 10 shows the second histogram H2 obtained from the corresponding processing of FIG. 7.
It should be noted that a histogram is a graph of the relationship between gray levels and the probability of their occurrence in an image. It is a statistical representation reflecting the statistical probability (count) with which the different gray levels occur: its abscissa is the gray level, and its ordinate is the number of occurrences (probability).
The discrete function of the histogram is h(rk) = nk, where rk is the gray value of the k-th level and nk is the number of pixels in the image with gray level rk. The normalized histogram is p(rk) = nk / MN, where k = 0, 1, 2, ..., L-1 (the range of gray levels is [0, L-1]) and MN denotes the total number of pixels.
The two input contour images are processed to obtain histograms H1 and H2, which are then normalized so that they lie in the same scale space and the proportions of all gray levels sum to 1; this avoids errors caused by different image sizes.
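As one possible realization (an assumption, not the patent's wording), the histogram and its normalization can be computed with OpenCV so that the bin values sum to 1, reusing the contour pictures from the earlier sketch:

import cv2

def normalized_histogram(contour_pic):
    """256-bin gray-level histogram, L1-normalized so that p(rk) = nk / MN
    and the bins sum to 1, which removes the effect of differing image sizes."""
    hist = cv2.calcHist([contour_pic], [0], None, [256], [0, 256])
    return cv2.normalize(hist, hist, alpha=1.0, beta=0.0, norm_type=cv2.NORM_L1)

h1 = normalized_histogram(first_contour)    # first histogram H1
h2 = normalized_histogram(second_contour)   # second histogram H2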
Further, in the implementation, comparing the normalized first histogram H1 with the normalized second histogram H2 in the above steps to find the two contour pictures with the highest matching degree may specifically include: first, calculating the distance between the normalized first histogram H1 and the normalized second histogram H2; then, obtaining a comparison value from the calculated distance; and finally, selecting the two histograms H1 and H2 whose comparison value is closest to 0 and judging that the two contour pictures corresponding to these two histograms have the highest matching degree.
A comparison value in the range 0 to 1 is obtained from the histogram values (e.g., the distance between the first histogram H1 and the second histogram H2). The closer the comparison value is to 0, the higher the matching degree of the two pictures and the more similar they are; conversely, the further it is from 0, the lower the matching degree and the similarity.
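The patent does not state which distance function is used; one choice with exactly this 0-to-1, closer-to-0-is-better behaviour is the Bhattacharyya distance available in OpenCV, sketched here under that assumption:

# Bhattacharyya distance between the normalized histograms: 0 means identical
# distributions, values approaching 1 mean very dissimilar pictures.
comparison_value = cv2.compareHist(h1, h2, cv2.HISTCMP_BHATTACHARYYA)
print(f"comparison value: {comparison_value:.6f}")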
It is calculated that the comparison value obtained by matching the second contour picture of FIG. 6 with the first contour picture of FIG. 5 is 0.018559341148851, while the comparison value obtained by matching the second contour picture of FIG. 7 with the first contour picture of FIG. 5 is 0.137399333456828; therefore FIG. 6 matches FIG. 5 most closely.
In this embodiment, the pictures extracted from the virtual camera in real time are compared with the pictures divided from the video one by one; after the comparison is completed, the divided picture closest to the picture extracted from the virtual camera in real time is obtained, and finally that picture and the picture extracted from the virtual camera in real time can be displayed synchronously.
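Putting the pieces together, a hypothetical matching loop over the divided pictures (reusing the helper functions sketched above) returns the frame to display next to the real-time virtual-camera picture; in practice the frame histograms would be precomputed once rather than recalculated on every comparison:

import glob

def best_matching_frame(virtual_cam_path, frame_dir="frames"):
    """Compare the real-time virtual-camera picture against every divided video
    picture and return the path of the one with the smallest comparison value,
    i.e. the picture to display synchronously."""
    target = normalized_histogram(extract_contour_picture(virtual_cam_path))
    best_path, best_score = None, float("inf")
    for path in sorted(glob.glob(f"{frame_dir}/*.png")):
        score = cv2.compareHist(normalized_histogram(extract_contour_picture(path)),
                                target, cv2.HISTCMP_BHATTACHARYYA)
        if score < best_score:
            best_path, best_score = path, score
    return best_path

# e.g. show best_matching_frame("virtual_cam.png") side by side with the live model view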
In practical application, the video comparison method based on three-dimensional animation provided by the embodiment of the invention can also be applied to image recognition fields, such as license plate recognition.
Based on the same inventive concept, the embodiment of the invention also provides a video comparison device based on three-dimensional animation. Since the principle by which the device solves the problem is similar to that of the above video comparison method based on three-dimensional animation, the implementation of the device can refer to the implementation of the method, and repeated details are not described again.
In specific implementation, the video comparison apparatus based on three-dimensional animation provided in the embodiment of the present invention, as shown in fig. 11, specifically includes:
the picture segmentation module 11 is used for reading a video of real person movement and segmenting the video into a plurality of pictures according to a frame rate;
the picture extraction module 12 is used for adding a fixed virtual camera to the model in the Unity engine and extracting the picture of the virtual camera in real time during the model's motion;
the contour extraction module 13 is configured to perform contour extraction processing on the divided pictures to obtain a first contour picture, and perform contour extraction processing on the pictures extracted in real time to obtain a second contour picture;
and the picture matching module 14 is configured to match the first contour picture with the second contour picture, find out two contour pictures with high matching degrees, and synchronously display the divided pictures corresponding to the two contour pictures with high matching degrees and the pictures extracted in real time.
In the video comparison device based on the three-dimensional animation provided by the embodiment of the invention, the real-time synchronization of the model animation and the action of the video character can be realized through the interaction of the four modules, so that the model animation can be compared and viewed with the video of a real person.
For more specific working processes of the modules, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
Correspondingly, the embodiment of the invention also discloses video comparison equipment based on three-dimensional animation, which comprises a processor and a memory; the processor implements the video comparison method based on three-dimensional animation disclosed in the foregoing embodiments when executing the computer program stored in the memory.
For more specific processes of the above method, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
Further, the present invention also discloses a computer-readable storage medium for storing a computer program; the computer program, when executed by the processor, implements the video comparison method based on three-dimensional animation disclosed above.
For more specific processes of the above method, reference may be made to corresponding contents disclosed in the foregoing embodiments, and details are not repeated here.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device, the equipment and the storage medium disclosed by the embodiment correspond to the method disclosed by the embodiment, so that the description is relatively simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The embodiment of the invention provides a video comparison method based on three-dimensional animation, which comprises the following steps: reading a video of the real person's movement, and dividing the video into a plurality of pictures according to the frame rate; adding a fixed virtual camera to the model in the Unity engine, and extracting the picture of the virtual camera in real time while the model moves; carrying out contour extraction processing on the divided pictures to obtain a first contour picture, and simultaneously carrying out contour extraction processing on the pictures extracted in real time to obtain a second contour picture; and matching the first contour picture with the second contour picture, finding out the two contour pictures with a high matching degree, and synchronously displaying the divided picture and the real-time extracted picture corresponding to those two contour pictures. In this way, while the model moves in real time, the contour pictures extracted from the virtual camera pictures are matched against the contour pictures extracted from the pictures divided from the video; after matching, the two contour pictures with the highest matching degree are found, and the virtual camera picture and the divided video picture corresponding to them are displayed synchronously, so that the model animation can be synchronized in real time with the actions of the video character and viewed in comparison with the real-person video. In addition, the invention also provides a corresponding device, equipment and computer-readable storage medium for the video comparison method based on three-dimensional animation, which further improves the practicability of the method, and the device, the equipment and the computer-readable storage medium have corresponding advantages.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The video comparison method, device, equipment and storage medium based on three-dimensional animation provided by the invention are described in detail, a specific example is applied in the text to explain the principle and the implementation mode of the invention, and the description of the above embodiment is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A video comparison method based on three-dimensional animation is characterized by comprising the following steps:
reading a video of real person movement, and dividing the video into a plurality of pictures according to a frame rate;
adding a fixed virtual camera to the model in the Unity engine, and extracting the picture of the virtual camera in real time in the model motion process;
carrying out contour extraction processing on the divided pictures to obtain a first contour picture, and simultaneously carrying out contour extraction processing on the pictures extracted in real time to obtain a second contour picture;
and matching the first contour picture with the second contour picture, finding out two contour pictures with high matching degree, and synchronously displaying the divided pictures corresponding to the two contour pictures with high matching degree and the pictures extracted in real time.
2. The video comparison method based on three-dimensional animation according to claim 1, wherein the contour extraction processing is performed on the divided picture to obtain a first contour picture, and specifically comprises:
extracting the figures in the divided pictures to obtain figure pictures;
carrying out gray level processing on the figure picture to obtain a first gray level picture;
determining a first outer boundary and a first hole boundary through the surrounding relation of the first gray picture boundary;
and obtaining a first outline picture according to the first outer boundary, the first hole boundary and the hierarchical relationship between the first outer boundary and the first hole boundary.
3. The video comparison method based on three-dimensional animation according to claim 2, wherein the step of performing contour extraction processing on the real-time extracted picture to obtain a second contour picture specifically comprises:
carrying out gray processing on the real-time extracted picture to obtain a second gray picture;
determining a second outer boundary and a second hole boundary through the surrounding relation of the second gray picture boundary;
and obtaining a second outline picture according to the second outer boundary, the second hole boundary and the hierarchical relationship between the second outer boundary and the second hole boundary.
4. The video comparison method based on three-dimensional animation according to claim 3, further comprising, before matching the first contour picture with the second contour picture:
and carrying out gray level processing on the first contour picture and the second contour picture.
5. The video comparison method based on three-dimensional animation according to claim 4, wherein the matching of the first contour picture and the second contour picture is performed to find out two contour pictures with high matching degree, and specifically comprises:
processing the first contour picture into a first histogram, and simultaneously processing the second contour picture into a second histogram;
performing normalization processing on the first histogram and the second histogram;
and comparing the first histogram and the second histogram after normalization processing to find out two contour pictures with high matching degree.
6. The video comparison method based on three-dimensional animation according to claim 5, wherein the first histogram and the second histogram after normalization processing are compared to find out two contour pictures with high matching degree, and specifically comprises:
calculating the distance between the first histogram and the second histogram after normalization processing;
obtaining a comparison value according to the calculated distance;
and selecting two histograms which are closest to 0 with the comparison numerical value, and judging that the matching degree of the two contour pictures corresponding to the two histograms is the highest.
7. The video comparison method based on three-dimensional animation according to claim 6, wherein the angle of the virtual camera is kept consistent with the angle at which the character in the video was shot.
8. A video comparison device based on three-dimensional animation is characterized by comprising:
the picture segmentation module is used for reading a video of real person movement and segmenting the video into a plurality of pictures according to a frame rate;
the picture extraction module is used for adding a fixed virtual camera to the model in the Unity engine and extracting the picture of the virtual camera in real time in the model motion process;
the contour extraction module is used for carrying out contour extraction processing on the divided pictures to obtain a first contour picture and simultaneously carrying out contour extraction processing on the pictures extracted in real time to obtain a second contour picture;
and the picture matching module is used for matching the first contour picture with the second contour picture, finding out two contour pictures with high matching degree, and synchronously displaying the divided pictures corresponding to the two contour pictures with high matching degree and the pictures extracted in real time.
9. Video comparison equipment based on three-dimensional animation, comprising a processor and a memory, wherein the processor implements the video comparison method based on three-dimensional animation according to any one of claims 1 to 7 when executing a computer program stored in the memory.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the video comparison method based on three-dimensional animation according to any one of claims 1 to 7.
CN202110476140.6A 2021-04-29 2021-04-29 Video comparison method, device and equipment based on three-dimensional animation and storage medium Pending CN113179376A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110476140.6A CN113179376A (en) 2021-04-29 2021-04-29 Video comparison method, device and equipment based on three-dimensional animation and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110476140.6A CN113179376A (en) 2021-04-29 2021-04-29 Video comparison method, device and equipment based on three-dimensional animation and storage medium

Publications (1)

Publication Number Publication Date
CN113179376A true CN113179376A (en) 2021-07-27

Family

ID=76925421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110476140.6A Pending CN113179376A (en) 2021-04-29 2021-04-29 Video comparison method, device and equipment based on three-dimensional animation and storage medium

Country Status (1)

Country Link
CN (1) CN113179376A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090153569A1 (en) * 2007-12-17 2009-06-18 Electronics And Telecommunications Research Institute Method for tracking head motion for 3D facial model animation from video stream
US20110157178A1 (en) * 2009-12-28 2011-06-30 Cuneyt Oncel Tuzel Method and System for Determining Poses of Objects
CN106600638A (en) * 2016-11-09 2017-04-26 深圳奥比中光科技有限公司 Realization method of augmented reality
CN108053469A (en) * 2017-12-26 2018-05-18 清华大学 Complicated dynamic scene human body three-dimensional method for reconstructing and device under various visual angles camera
CN108629801A (en) * 2018-05-14 2018-10-09 华南理工大学 A kind of three-dimensional (3 D) manikin posture of video sequence and Shape Reconstruction method
CN111462337A (en) * 2020-03-27 2020-07-28 咪咕文化科技有限公司 Image processing method, device and computer readable storage medium

Similar Documents

Publication Publication Date Title
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN109583483B (en) Target detection method and system based on convolutional neural network
CN109859227B (en) Method and device for detecting flip image, computer equipment and storage medium
CN109934847B (en) Method and device for estimating posture of weak texture three-dimensional object
CN109919971B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN108961304B (en) Method for identifying moving foreground in video and method for determining target position in video
CN108961260B (en) Image binarization method and device and computer storage medium
CN112800850A (en) Video processing method and device, electronic equipment and storage medium
CN114494775A (en) Video segmentation method, device, equipment and storage medium
CN113313092B (en) Handwritten signature recognition method, and claims settlement automation processing method, device and equipment
US20120038785A1 (en) Method for producing high resolution image
CN117132503A (en) Method, system, equipment and storage medium for repairing local highlight region of image
CN117459661A (en) Video processing method, device, equipment and machine-readable storage medium
CN113179376A (en) Video comparison method, device and equipment based on three-dimensional animation and storage medium
CN112084855A (en) Outlier elimination method for video stream based on improved RANSAC method
CN116188826A (en) Template matching method and device under complex illumination condition
CN113284158B (en) Image edge extraction method and system based on structural constraint clustering
Viacheslav et al. Low-level features for inpainting quality assessment
CN110853087B (en) Parallax estimation method, device, storage medium and terminal
CN114387326A (en) Video generation method, device, equipment and storage medium
CN115249358A (en) Method and system for quantitatively detecting carbon particles in macrophages and computer equipment
CN110728699B (en) Track post-processing method based on characteristic distance
CN111767757B (en) Identity information determining method and device
CN111260623A (en) Picture evaluation method, device, equipment and storage medium
CN112672052A (en) Image data enhancement method and system, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20210727