CN112991498A - Lens animation rapid generation system and method - Google Patents

Lens animation rapid generation system and method

Info

Publication number
CN112991498A
Authority
CN
China
Prior art keywords
animation
unit
model
neural network
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911285786.5A
Other languages
Chinese (zh)
Other versions
CN112991498B (en)
Inventor
熊军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yibai Education Technology Co ltd
Original Assignee
Shanghai Yibai Education Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yibai Education Technology Co ltd
Priority to CN201911285786.5A
Publication of CN112991498A
Application granted
Publication of CN112991498B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/20 3D [Three Dimensional] animation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a system and a method for quickly generating a lens animation, relating to the technical field of animation. The system comprises a model generation unit, a track control unit, a rendering unit and an animation detection unit; the animation detection unit is in signal connection with the model generation unit, the track control unit and the rendering unit respectively; the model generation unit is used for generating an animation model according to a model generation method; the track control unit is used for controlling the trajectory of the brush according to the generated animation model, completing the model drawing and generating an intermediate lens; the rendering unit is used for rendering the intermediate lens to complete lens animation generation; the animation detection unit is a feedback neural network formed by a multilayer network and is used for detecting the animation model generated by the model generation unit, the model drawing completed by the track control unit and the lens animation finally generated by the rendering unit. The method has the advantages of high generation efficiency, learning ability and high accuracy.

Description

Lens animation rapid generation system and method
Technical Field
The invention relates to the technical field of computers, in particular to a system and a method for quickly generating a shot animation.
Background
Animation production is divided into two-dimensional animation, three-dimensional animation and stop-motion animation; two-dimensional and three-dimensional animation are the most widely used animation forms in the world today, and lens animation generation belongs to the category of three-dimensional animation production. When shooting in real life, the effect of a moving lens is usually achieved by mounting the camera on a moving object, thereby obtaining the lens picture. In the field of computer graphics, during animation production, animators program the motion trajectory of a virtual lens according to production requirements; the virtual lens then moves along the set trajectory, so that the pictures seen through it are presented to the user.
The existing lens animation generation methods have the drawback that whenever the target object changes or the motion attributes of the virtual lens change, animators need to repeat a large amount of programming to set the motion trajectory of the virtual lens; the workload is heavy, errors occur easily, and animation production efficiency is low.
Disclosure of Invention
In view of this, the present invention provides a system and a method for quickly generating a shot animation, which have the advantages of high generation efficiency, learning ability and high accuracy.
In order to achieve the purpose, the invention adopts the following technical scheme:
a shot animation fast generation system, the system comprising: the system comprises a model generation unit, a track control unit, a rendering unit and an animation detection unit; the animation detection unit is respectively in signal connection with the model generation unit, the track control unit and the rendering unit; the model generation unit is used for generating an animation model according to a model generation method; the track control unit is used for controlling the track of the painting brush according to the generated animation model, completing model painting and generating an intermediate lens; the rendering unit is used for rendering the intermediate lens to complete lens animation generation; the animation detection unit is a feedback neural network formed by a multilayer network, detects the animation model generated by the model generation unit, the model drawing finished by the track control unit and the lens animation finally generated by the rendering unit, carries out artificial evaluation according to a detection result, adjusts the operation of the model generation unit, the track control unit and the rendering unit according to an evaluation result, improves the operation efficiency of each unit and improves the operation effect of each unit.
Further, the animation detection unit comprises: a neural network unit for constructing and training a deep neural network; a supervised learning unit for constructing, on the basis of a convolution operation and a discrete coding algorithm, an unsupervised learning algorithm for the deep neural network aimed at the characteristics of the deep learning network and of the animation; a training set unit for constructing an animation training set; an animation extraction unit for extracting candidate animation regions in a natural image with a multi-scale sliding window algorithm and classifying the candidate animation regions with the trained deep neural network to obtain animation regions; and an aggregation unit for aggregating the related animation regions into an animation line and calibrating the animation line with a rectangular frame.
Further, the neural network unit constructs and trains the deep neural network by performing the following steps: constructing an objective function based on the convolution operation and the discrete coding algorithm, the optimization targets being the features, the dictionary and the network parameters; fixing the dictionary and obtaining the optimal features; fixing the optimal features and training the dictionary once using stochastic gradient descent; fixing the optimal features and training the network parameters several times using stochastic gradient descent until the training error is smaller than a preset value; recalculating the features using the latest network parameters; and repeating the above steps until the learning target is reached.
Further, the aggregation unit aggregates the related animation regions into an animation line by performing the following steps: executing an animation line aggregation algorithm based on region correlation over the obtained animation regions, where the correlation features and aggregation rules are as follows: the height ratio of two animation regions is between 0.5 and 2; the difference between the y coordinates of the center points of the circumscribed rectangles of the two animation regions is not more than 1/2 of the larger height of the two regions; the difference between the x coordinates of the center points of the circumscribed rectangles of the two animation regions is not more than 2 times the larger width of the two regions; and a single animation line contains at least three animation regions. The correlation features are: the heights of the two animation regions, the difference of the y coordinates of the center points of their circumscribed rectangles, and the difference of the x coordinates of the center points of their circumscribed rectangles.
Further, the animation training set contains 3500 common animations; 15 representative animations are used; the animation image types are white animation on a black background and black animation on a white background; and the size of each animation image is 32 x 32.
A shot animation rapid generation method comprises the following steps: the model generation unit generates an animation model according to a model generation method; the track control unit controls the trajectory of the brush according to the generated animation model, completes the model drawing and generates an intermediate lens; the rendering unit renders the intermediate lens to complete lens animation generation; and the animation detection unit, a feedback neural network formed by a multilayer network, detects the animation model generated by the model generation unit, the model drawing completed by the track control unit and the lens animation finally generated by the rendering unit, performs manual evaluation based on the detection results, and adjusts the operation of the model generation unit, the track control unit and the rendering unit according to the evaluation results, thereby improving the operation efficiency and effect of each unit.
Further, the animation detection unit comprises: a neural network unit for constructing and training a deep neural network; a supervised learning unit for constructing, on the basis of a convolution operation and a discrete coding algorithm, an unsupervised learning algorithm for the deep neural network aimed at the characteristics of the deep learning network and of the animation; a training set unit for constructing an animation training set; an animation extraction unit for extracting candidate animation regions in a natural image with a multi-scale sliding window algorithm and classifying the candidate animation regions with the trained deep neural network to obtain animation regions; and an aggregation unit for aggregating the related animation regions into animation lines and calibrating the animation lines with rectangular frames. The deep neural network is constructed and trained by performing the following steps: constructing an objective function based on the convolution operation and the discrete coding algorithm, the optimization targets being the features, the dictionary and the network parameters; fixing the dictionary and obtaining the optimal features; fixing the optimal features and training the dictionary once using stochastic gradient descent; fixing the optimal features and training the network parameters several times using stochastic gradient descent until the training error is smaller than a preset value; recalculating the features using the latest network parameters; and repeating the above steps until the learning target is reached.
Further, the parameters used by the animation extraction unit to extract candidate animation regions in a natural image with the multi-scale sliding window algorithm are as follows: the maximum scale is 1/4 of the image size, and the minimum scale is 20 pixels; sliding extraction is performed with an overlap factor of 0.5; and the extracted image blocks have a 1:1 aspect ratio and are uniformly scaled to 32 x 32.
Compared with the prior art, the invention has the following beneficial effects: the invention trains a deep convolutional neural network with an unsupervised learning method, then uses the network to classify all candidate animation regions in a natural image, and finally performs animation aggregation on the regions classified as animation, thereby detecting the animation regions in the image. The method exploits the very strong ability of deep learning networks to extract image features, and uses the strong training capability of unsupervised learning to train a deep convolutional neural network for animation characteristics, realizing the localization and segmentation of animation regions simply and effectively. Because the unsupervised learning method of the deep convolutional neural network is constructed specifically for animation characteristics, it is better targeted at animation detection and therefore offers higher autonomy and accuracy.
Drawings
The invention is described in further detail below with reference to the following figures and detailed description:
fig. 1 is a schematic system structure diagram of a rapid shot animation generation system according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart of a method for quickly generating a lens animation according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided for illustrative purposes, and other advantages and effects of the present invention will become apparent to those skilled in the art from the present disclosure.
Please refer to fig. 1. It should be understood that the structures, ratios, sizes and the like shown in the drawings of this specification are only used to illustrate the content disclosed in the specification so that it can be understood and read by those skilled in the art; they are not intended to limit the conditions under which the invention can be implemented and therefore have no essential technical significance. Any structural modification, change of ratio or adjustment of size that does not affect the efficacy or the achievable purpose of the invention shall still fall within the scope covered by the technical content disclosed by the invention. In addition, terms such as "upper", "lower", "left", "right", "middle" and "one" used in this specification are only for clarity of description and are not intended to limit the implementable scope of the invention; changes or adjustments of their relative relationships, without substantial changes to the technical content, shall also be regarded as within the implementable scope of the invention.
Example 1
A shot animation fast generation system, the system comprising: a model generation unit, a track control unit, a rendering unit and an animation detection unit; the animation detection unit is in signal connection with the model generation unit, the track control unit and the rendering unit respectively; the model generation unit is used for generating an animation model according to a model generation method; the track control unit is used for controlling the trajectory of the brush according to the generated animation model, completing the model drawing and generating an intermediate lens; the rendering unit is used for rendering the intermediate lens to complete lens animation generation; the animation detection unit is a feedback neural network formed by a multilayer network, detects the animation model generated by the model generation unit, the model drawing completed by the track control unit and the lens animation finally generated by the rendering unit, performs manual evaluation based on the detection results, and adjusts the operation of the model generation unit, the track control unit and the rendering unit according to the evaluation results, thereby improving the operation efficiency and effect of each unit. A minimal structural sketch of the four units is given below.
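For illustration only, the following Python sketch shows the four units and their signal connections; the class names, method names and placeholder return values are assumptions chosen for readability and are not the patent's actual implementation.

class ModelGenerationUnit:
    def generate_model(self):
        """Generate an animation model according to a model generation method."""
        return {"animation_model": "placeholder"}

class TrackControlUnit:
    def draw(self, animation_model):
        """Control the brush trajectory from the model, complete the drawing, produce an intermediate lens."""
        return {"intermediate_lens": animation_model}

class RenderingUnit:
    def render(self, intermediate_lens):
        """Render the intermediate lens to complete lens animation generation."""
        return {"lens_animation": intermediate_lens}

class AnimationDetectionUnit:
    """Stand-in for the multilayer feedback neural network in signal connection with the other three units."""
    def detect(self, animation_model, intermediate_lens, lens_animation):
        return {"detection_result": "ok"}  # basis for evaluation and adjustment of the three units

def generate_lens_animation():
    gen, track, render, detect = (ModelGenerationUnit(), TrackControlUnit(),
                                  RenderingUnit(), AnimationDetectionUnit())
    model = gen.generate_model()
    intermediate = track.draw(model)
    animation = render.render(intermediate)
    feedback = detect.detect(model, intermediate, animation)  # detection results feed back into the units
    return animation, feedback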
Example 2
On the basis of the above embodiment, the animation detection unit comprises: a neural network unit for constructing and training a deep neural network; a supervised learning unit for constructing, on the basis of a convolution operation and a discrete coding algorithm, an unsupervised learning algorithm for the deep neural network aimed at the characteristics of the deep learning network and of the animation; a training set unit for constructing an animation training set; an animation extraction unit for extracting candidate animation regions in a natural image with a multi-scale sliding window algorithm and classifying the candidate animation regions with the trained deep neural network to obtain animation regions; and an aggregation unit for aggregating the related animation regions into an animation line and calibrating the animation line with a rectangular frame.
Specifically, animation shot division refers to shot representation in animation; shot design completed on the basis of the script brings great convenience to producers during animation production. The moving pictures are expressed shot by shot according to the conception and design blueprint of the future film (including scene atmosphere, character performance, color, light and shadow, dialogue, sound effects and photographic treatment). With the rapid development of digital film production and projection technology, digital high-definition 2K and 4K films are becoming mainstream, and the growing number of IMAX projection halls in particular has raised the requirements on picture quality. However, because of the high technical requirements, ordinary film producers find such production difficult, film sources are in short supply, and most films shown in cinemas are monopolized by imports. A three-dimensional animated film in particular is rendered and output frame by frame: 24 frames are rendered per second, so a 90-minute film requires about 130,000 frames to be rendered and output. Research on three-dimensional animation post-rendering technology has mostly focused on the development of renderer software, where the United States and the United Kingdom hold a technical monopoly, while research and development of cluster rendering control and management software is still blank. Many well-known three-dimensional animation packages, such as 3DSMAX and MAYA under the Autodesk banner, provide only relatively simple network rendering control software and are difficult to control effectively during clustered post-rendering output. Imperfect editing of animation shots in cluster rendering has therefore become a technical bottleneck in production efficiency and quality.
Example 3
On the basis of the above embodiment, the neural network unit constructs and trains the deep neural network by performing the following steps: constructing an objective function based on the convolution operation and the discrete coding algorithm, the optimization targets being the features, the dictionary and the network parameters; fixing the dictionary and obtaining the optimal features; fixing the optimal features and training the dictionary once using stochastic gradient descent; fixing the optimal features and training the network parameters several times using stochastic gradient descent until the training error is smaller than a preset value; recalculating the features using the latest network parameters; and repeating the above steps until the learning target is reached. A sketch of this alternating scheme is shown below.
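The following minimal NumPy sketch alternates the steps above: fix one set of variables, update the others with gradient steps. The concrete objective (a squared reconstruction error between image patches, a dictionary D and an encoder W), the matrix sizes, learning rates and stopping thresholds are assumptions; only the order of the alternation follows the text.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 1024))        # 200 training patches of 32 x 32 = 1024 pixels (assumed)
D = rng.standard_normal((1024, 64)) * 0.1   # dictionary with 64 atoms (assumed size)
W = rng.standard_normal((64, 1024)) * 0.1   # network (encoder) parameters

def optimal_features(X, D, steps=50, lr=1e-3):
    """Step 1: fix the dictionary and obtain (approximately) optimal features by minimizing ||X - F D^T||^2."""
    F = np.zeros((len(X), D.shape[1]))
    for _ in range(steps):
        F -= lr * (-2.0 * (X - F @ D.T) @ D)
    return F

for outer in range(5):                       # repeat until the learning target is reached
    F = optimal_features(X, D)
    # Step 2: fix the optimal features and train the dictionary with a single gradient step
    D -= 1e-3 * (-2.0 * (X - F @ D.T).T @ F / len(X))
    # Step 3: fix the optimal features and train the network parameters until the error is below a preset value
    for _ in range(500):
        F_hat = np.maximum(X @ W.T, 0.0)     # simple ReLU encoder output
        if np.mean((F_hat - F) ** 2) < 1e-3:
            break
        W -= 1e-3 * (2.0 * ((F_hat - F) * (F_hat > 0)).T @ X / len(X))
    # Step 4: the next outer iteration recalculates the features with the latest network parameters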
Example 4
On the basis of the above embodiment, the aggregation unit aggregates the related animation regions into an animation line by performing the following steps: executing an animation line aggregation algorithm based on region correlation over the obtained animation regions, where the correlation features and aggregation rules are as follows: the height ratio of two animation regions is between 0.5 and 2; the difference between the y coordinates of the center points of the circumscribed rectangles of the two animation regions is not more than 1/2 of the larger height of the two regions; the difference between the x coordinates of the center points of the circumscribed rectangles of the two animation regions is not more than 2 times the larger width of the two regions; and a single animation line contains at least three animation regions. The correlation features are: the heights of the two animation regions, the difference of the y coordinates of the center points of their circumscribed rectangles, and the difference of the x coordinates of the center points of their circumscribed rectangles. A sketch of this correlation test is given below.
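The correlation test and grouping rules of this example can be sketched as follows; the Region representation (center coordinates and size of the circumscribed rectangle) and the greedy grouping strategy are assumptions, while the numeric thresholds are taken from the text.

from dataclasses import dataclass

@dataclass
class Region:
    cx: float  # center x of the circumscribed rectangle
    cy: float  # center y of the circumscribed rectangle
    w: float   # width of the circumscribed rectangle
    h: float   # height of the circumscribed rectangle

def related(a, b):
    """Correlation features: heights, center-y difference and center-x difference."""
    height_ratio_ok = 0.5 <= a.h / b.h <= 2.0          # height ratio between 0.5 and 2
    y_ok = abs(a.cy - b.cy) <= 0.5 * max(a.h, b.h)     # not more than 1/2 of the larger height
    x_ok = abs(a.cx - b.cx) <= 2.0 * max(a.w, b.w)     # not more than 2 times the larger width
    return height_ratio_ok and y_ok and x_ok

def aggregate(regions):
    """Greedily grow animation lines; keep only lines that contain at least three regions."""
    lines = []
    for region in regions:
        for line in lines:
            if any(related(region, member) for member in line):
                line.append(region)
                break
        else:
            lines.append([region])
    return [line for line in lines if len(line) >= 3]

def bounding_frame(line):
    """Calibrate an animation line with a rectangular frame (x_min, y_min, x_max, y_max)."""
    xs = [r.cx - r.w / 2 for r in line] + [r.cx + r.w / 2 for r in line]
    ys = [r.cy - r.h / 2 for r in line] + [r.cy + r.h / 2 for r in line]
    return min(xs), min(ys), max(xs), max(ys)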
In particular, computer-generated animations are typically created using geometric models representing objects (e.g., trees, rocks, clouds, etc.) and characters (e.g., animals, characters, etc.) in a virtual environment. The animator can manipulate the models to locate objects and characters in some or all of the frames in the animation in a desired manner. The positioned geometric model may then be combined with other animation data, such as texture, color, lighting, and others, during rendering to produce an image that may be used as an animation frame. As the rendered frames are viewed in rapid succession, they give the viewer the perception of animation.
Example 5
On the basis of the previous embodiment, the animation training set contains 3500 common animations; 15 representative animations are used; the animation image types are white animation on a black background and black animation on a white background; and the size of each animation image is 32 x 32, as illustrated by the sample-preparation sketch below.
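As an illustration only, one way of preparing samples to these parameters is sketched below; the use of the Pillow library and the grayscale inversion producing both the black-background and white-background variants are assumptions, not the patent's stated procedure.

from PIL import Image, ImageOps

def prepare_training_sample(path):
    """Load one animation image, scale it to 32 x 32 grayscale, and return both polarity variants."""
    img = Image.open(path).convert("L").resize((32, 32))  # 32 x 32 animation image
    return [img, ImageOps.invert(img)]                    # black-background white and white-background black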
Example 6
As shown in fig. 2, a method for quickly generating a shot animation performs the following steps: the model generation unit generates an animation model according to a model generation method; the track control unit controls the trajectory of the brush according to the generated animation model, completes the model drawing and generates an intermediate lens; the rendering unit renders the intermediate lens to complete lens animation generation; and the animation detection unit, a feedback neural network formed by a multilayer network, detects the animation model generated by the model generation unit, the model drawing completed by the track control unit and the lens animation finally generated by the rendering unit, performs manual evaluation based on the detection results, and adjusts the operation of the model generation unit, the track control unit and the rendering unit according to the evaluation results, thereby improving the operation efficiency and effect of each unit.
Example 7
On the basis of the above embodiment, the animation detection unit comprises: a neural network unit for constructing and training a deep neural network; a supervised learning unit for constructing, on the basis of a convolution operation and a discrete coding algorithm, an unsupervised learning algorithm for the deep neural network aimed at the characteristics of the deep learning network and of the animation; a training set unit for constructing an animation training set; an animation extraction unit for extracting candidate animation regions in a natural image with a multi-scale sliding window algorithm and classifying the candidate animation regions with the trained deep neural network to obtain animation regions; and an aggregation unit for aggregating the related animation regions into animation lines and calibrating the animation lines with rectangular frames. The deep neural network is constructed and trained by performing the following steps: constructing an objective function based on the convolution operation and the discrete coding algorithm, the optimization targets being the features, the dictionary and the network parameters; fixing the dictionary and obtaining the optimal features; fixing the optimal features and training the dictionary once using stochastic gradient descent; fixing the optimal features and training the network parameters several times using stochastic gradient descent until the training error is smaller than a preset value; recalculating the features using the latest network parameters; and repeating the above steps until the learning target is reached.
In particular, to edit a portion of a computer-generated animation, an animator may view a previously rendered animation and may modify a geometric model used to create the rendered version. The repositioned geometric model may then be combined with other animation data in another rendering process to produce an updated image that may be used as a frame of animation. This process may be repeated any number of times until the desired output is produced.
Example 8
On the basis of the above embodiment, the parameters used by the animation extraction unit to extract candidate animation regions in a natural image with the multi-scale sliding window algorithm are as follows: the maximum scale is 1/4 of the image size, and the minimum scale is 20 pixels; sliding extraction is performed with an overlap factor of 0.5; and the extracted image blocks have a 1:1 aspect ratio and are uniformly scaled to 32 x 32, as sketched below.
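A sketch of the multi-scale sliding-window extraction with these parameters follows; the halving schedule between the maximum and minimum scale and the nearest-neighbour resizing are assumptions, while the 1/4-image maximum scale, 20-pixel minimum scale, 0.5 overlap factor, 1:1 aspect ratio and 32 x 32 target size come from the text.

import numpy as np

def resize_to_32(block):
    """Uniformly scale a square block to 32 x 32 by nearest-neighbour index sampling."""
    idx = np.linspace(0, block.shape[0] - 1, 32).astype(int)
    return block[np.ix_(idx, idx)]

def candidate_windows(image, overlap=0.5, min_scale=20):
    """Yield ((x, y, scale), 32 x 32 block) for every square sliding window of the image."""
    h, w = image.shape[:2]
    max_scale = min(h, w) // 4                  # maximum scale: 1/4 of the image size
    scales, s = [], max_scale
    while s > min_scale:                        # assumed halving schedule down to the minimum scale
        scales.append(s)
        s //= 2
    scales.append(min_scale)                    # minimum scale: 20 pixels
    for scale in scales:
        step = max(1, int(scale * (1.0 - overlap)))      # overlap factor 0.5
        for y in range(0, h - scale + 1, step):
            for x in range(0, w - scale + 1, step):
                block = image[y:y + scale, x:x + scale]  # square block, 1:1 aspect ratio
                yield (x, y, scale), resize_to_32(block)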
It should be noted that, the system provided in the foregoing embodiment is only illustrated by dividing the functional modules, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the modules or steps in the embodiment of the present invention are further decomposed or combined, for example, the modules in the foregoing embodiment may be combined into one module, or may be further split into multiple sub-modules, so as to complete all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Those skilled in the art will appreciate that the various illustrative modules and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both, and that programs corresponding to the software modules and method steps may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present invention is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the invention, and the technical scheme after the changes or substitutions can fall into the protection scope of the invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Any person skilled in the art may modify or change the above-mentioned embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (8)

1. A shot animation rapid generation system, comprising: a model generation unit, a track control unit, a rendering unit and an animation detection unit; the animation detection unit is in signal connection with the model generation unit, the track control unit and the rendering unit respectively; the model generation unit is used for generating an animation model according to a model generation method; the track control unit is used for controlling the trajectory of the brush according to the generated animation model, completing the model drawing and generating an intermediate lens; the rendering unit is used for rendering the intermediate lens to complete lens animation generation; the animation detection unit is a feedback neural network formed by a multilayer network, detects the animation model generated by the model generation unit, the model drawing completed by the track control unit and the lens animation finally generated by the rendering unit, performs manual evaluation based on the detection results, and adjusts the operation of the model generation unit, the track control unit and the rendering unit according to the evaluation results, thereby improving the operation efficiency and effect of each unit.
2. The system of claim 1, wherein the animation detection unit comprises: a neural network unit for constructing and training a deep neural network; a supervised learning unit for constructing, on the basis of a convolution operation and a discrete coding algorithm, an unsupervised learning algorithm for the deep neural network aimed at the characteristics of the deep learning network and of the animation; a training set unit for constructing an animation training set; an animation extraction unit for extracting candidate animation regions in a natural image with a multi-scale sliding window algorithm and classifying the candidate animation regions with the trained deep neural network to obtain animation regions; and an aggregation unit for aggregating the related animation regions into an animation line and calibrating the animation line with a rectangular frame.
3. The system of claim 2, wherein the neural network unit constructs and trains the deep neural network by performing the following steps: constructing an objective function based on the convolution operation and the discrete coding algorithm, the optimization targets being the features, the dictionary and the network parameters; fixing the dictionary and obtaining the optimal features; fixing the optimal features and training the dictionary once using stochastic gradient descent; fixing the optimal features and training the network parameters several times using stochastic gradient descent until the training error is smaller than a preset value; recalculating the features using the latest network parameters; and repeating the above steps until the learning target is reached.
4. The system of claim 3, wherein the aggregation unit aggregates the related animation regions into an animation line by performing the following steps: executing an animation line aggregation algorithm based on region correlation over the obtained animation regions, where the correlation features and aggregation rules are as follows: the height ratio of two animation regions is between 0.5 and 2; the difference between the y coordinates of the center points of the circumscribed rectangles of the two animation regions is not more than 1/2 of the larger height of the two regions; the difference between the x coordinates of the center points of the circumscribed rectangles of the two animation regions is not more than 2 times the larger width of the two regions; and a single animation line contains at least three animation regions; the correlation features are: the heights of the two animation regions, the difference of the y coordinates of the center points of their circumscribed rectangles, and the difference of the x coordinates of the center points of their circumscribed rectangles.
5. The system of claim 4, wherein the animation training set contains 3500 common animations; 15 representative animations are used; the animation image types are white animation on a black background and black animation on a white background; and the size of each animation image is 32 x 32.
6. A shot animation fast generation method based on the system of any one of claims 1 to 5, characterized in that the method performs the following steps: the model generation unit generates an animation model according to a model generation method; the track control unit controls the trajectory of the brush according to the generated animation model, completes the model drawing and generates an intermediate lens; the rendering unit renders the intermediate lens to complete lens animation generation; and the animation detection unit, which is a feedback neural network formed by a multilayer network, detects the animation model generated by the model generation unit, the model drawing completed by the track control unit and the lens animation finally generated by the rendering unit, performs manual evaluation based on the detection results, and adjusts the operation of the model generation unit, the track control unit and the rendering unit according to the evaluation results, thereby improving the operation efficiency and effect of each unit.
7. The method of claim 6, wherein the animation detection unit comprises: a neural network unit for constructing and training a deep neural network; a supervised learning unit for constructing, on the basis of a convolution operation and a discrete coding algorithm, an unsupervised learning algorithm for the deep neural network aimed at the characteristics of the deep learning network and of the animation; a training set unit for constructing an animation training set; an animation extraction unit for extracting candidate animation regions in a natural image with a multi-scale sliding window algorithm and classifying the candidate animation regions with the trained deep neural network to obtain animation regions; and an aggregation unit for aggregating the related animation regions into animation lines and calibrating the animation lines with rectangular frames; the deep neural network is constructed and trained by performing the following steps: constructing an objective function based on the convolution operation and the discrete coding algorithm, the optimization targets being the features, the dictionary and the network parameters; fixing the dictionary and obtaining the optimal features; fixing the optimal features and training the dictionary once using stochastic gradient descent; fixing the optimal features and training the network parameters several times using stochastic gradient descent until the training error is smaller than a preset value; recalculating the features using the latest network parameters; and repeating the above steps until the learning target is reached.
8. The method of claim 7, wherein the parameters used by the animation extraction unit to extract candidate animation regions in a natural image with the multi-scale sliding window algorithm are as follows: the maximum scale is 1/4 of the image size, and the minimum scale is 20 pixels; sliding extraction is performed with an overlap factor of 0.5; and the extracted image blocks have a 1:1 aspect ratio and are uniformly scaled to 32 x 32.
CN201911285786.5A 2019-12-13 2019-12-13 System and method for rapidly generating lens animation Active CN112991498B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911285786.5A CN112991498B (en) 2019-12-13 2019-12-13 System and method for rapidly generating lens animation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911285786.5A CN112991498B (en) 2019-12-13 2019-12-13 System and method for rapidly generating lens animation

Publications (2)

Publication Number Publication Date
CN112991498A true CN112991498A (en) 2021-06-18
CN112991498B CN112991498B (en) 2023-05-23

Family

ID=76342379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911285786.5A Active CN112991498B (en) 2019-12-13 2019-12-13 System and method for rapidly generating lens animation

Country Status (1)

Country Link
CN (1) CN112991498B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117710534A (en) * 2024-02-04 2024-03-15 昆明理工大学 Animation collaborative making method based on improved teaching and learning optimization algorithm

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010087778A1 (en) * 2009-02-02 2010-08-05 Agency For Science, Technology And Research Method and system for rendering an entertainment animation
CN106485773A (en) * 2016-09-14 2017-03-08 厦门幻世网络科技有限公司 A kind of method and apparatus for generating animation data
CN106504190A (en) * 2016-12-29 2017-03-15 浙江工商大学 A kind of three-dimensional video-frequency generation method based on 3D convolutional neural networks
CN106971414A (en) * 2017-03-10 2017-07-21 江西省杜达菲科技有限责任公司 A kind of three-dimensional animation generation method based on deep-cycle neural network algorithm
CN110033505A (en) * 2019-04-16 2019-07-19 西安电子科技大学 A kind of human action capture based on deep learning and virtual animation producing method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010087778A1 (en) * 2009-02-02 2010-08-05 Agency For Science, Technology And Research Method and system for rendering an entertainment animation
CN106485773A (en) * 2016-09-14 2017-03-08 厦门幻世网络科技有限公司 A kind of method and apparatus for generating animation data
CN106504190A (en) * 2016-12-29 2017-03-15 浙江工商大学 A kind of three-dimensional video-frequency generation method based on 3D convolutional neural networks
CN106971414A (en) * 2017-03-10 2017-07-21 江西省杜达菲科技有限责任公司 A kind of three-dimensional animation generation method based on deep-cycle neural network algorithm
CN110033505A (en) * 2019-04-16 2019-07-19 西安电子科技大学 A kind of human action capture based on deep learning and virtual animation producing method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈向奎 (Chen Xiangkui) et al., "Image recognition and tracking technology based on BP neural network", 《舰船科学技术》 (Ship Science and Technology) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117710534A (en) * 2024-02-04 2024-03-15 昆明理工大学 Animation collaborative making method based on improved teaching and learning optimization algorithm
CN117710534B (en) * 2024-02-04 2024-04-23 昆明理工大学 Animation collaborative making method based on improved teaching and learning optimization algorithm

Also Published As

Publication number Publication date
CN112991498B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
US11425283B1 (en) Blending real and virtual focus in a virtual display environment
EP3533218B1 (en) Simulating depth of field
US11170523B2 (en) Analyzing screen coverage
US11328437B2 (en) Method for emulating defocus of sharp rendered images
CN112991498B (en) System and method for rapidly generating lens animation
WO2021242121A1 (en) Method for generating splines based on surface intersection constraints in a computer image generation system
US11803998B2 (en) Method for computation of local densities for virtual fibers
US11600041B2 (en) Computing illumination of an elongated shape having a noncircular cross section
WO2023021325A1 (en) Replacing moving objects with background information in a video scene
US11354878B2 (en) Method of computing simulated surfaces for animation generation and other purposes
US11430132B1 (en) Replacing moving objects with background information in a video scene
CN117082225B (en) Virtual delay video generation method, device, equipment and storage medium
US11593584B2 (en) Method for computation relating to clumps of virtual fibers
US20230260206A1 (en) Computing illumination of an elongated shape having a noncircular cross section
Zhdanov et al. Automatic building of annotated image datasets for training neural networks
EP4176415A1 (en) Method for computation of local densities for virtual fibers
Argudo Medrano et al. Tree variations
CN117241127A (en) Shooting scene evaluation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant