CN113516737A - Animation conversion method and device and intelligent equipment - Google Patents

Animation conversion method and device and intelligent equipment

Info

Publication number
CN113516737A
Authority
CN
China
Prior art keywords
animation
attribute
format
format file
file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010230854.4A
Other languages
Chinese (zh)
Inventor
路晓创
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Pinecone Electronic Co Ltd
Original Assignee
Beijing Xiaomi Pinecone Electronic Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Pinecone Electronic Co Ltd filed Critical Beijing Xiaomi Pinecone Electronic Co Ltd
Priority to CN202010230854.4A priority Critical patent/CN113516737A/en
Publication of CN113516737A publication Critical patent/CN113516737A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 - File systems; File servers
    • G06F 16/11 - File system administration, e.g. details of archiving or snapshots
    • G06F 16/116 - Details of conversion of file system types or formats

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to an animation conversion method, an animation conversion device and an intelligent device. The method is applied to a client and includes the following steps: acquiring a first format file of an animation; acquiring at least one attribute of the animation and the corresponding parameters according to the first format file of the animation; and converting the at least one attribute into a second format, and obtaining a second format file of the animation according to the parameters of the at least one attribute, wherein the second format is a format that the system of the client supports parsing. The conversion from the first format file to the second format file is completed by using the attributes of the animation as an intermediary, that is, the original format is converted into a target format that the system of the client can parse. For example, by using the conversion method provided by the present disclosure, an Android client can convert an SVG animation into a format it can parse itself, and the converted file serves as the basis for realizing the animation effect.

Description

Animation conversion method and device and intelligent equipment
Technical Field
The disclosure relates to the technical field of communication, in particular to an animation conversion method, an animation conversion device and intelligent equipment.
Background
Scalable Vector Graphics (SVG) is a vector graphics specification established by the World Wide Web Consortium (W3C). SVG covers both static graphics and dynamic graphics (animations). At present, an Android client can convert an SVG static graphic into a format that it can parse itself, so as to draw the static graphic, but it cannot convert an SVG animation into a format that it can parse itself, and therefore cannot realize the animation effect.
Disclosure of Invention
In order to overcome the problems in the related art, embodiments of the present disclosure provide an animation conversion method, an animation conversion device, and an intelligent device, so as to solve the defects in the related art.
According to a first aspect of the embodiments of the present disclosure, there is provided an animation conversion method, applied to a client, the method including:
acquiring a first format file of the animation;
acquiring at least one attribute and corresponding parameters of the animation according to the first format file of the animation;
and converting the at least one attribute into a second format, and obtaining a second format file of the animation according to the parameters of the at least one attribute, wherein the second format is a format that the system of the client supports parsing.
In one embodiment, the first format file of the animation includes a plurality of first tags and corresponding parameters;
the acquiring at least one attribute and corresponding parameter of the animation according to the first format file of the animation comprises the following steps:
acquiring one or more first tags corresponding to each attribute of the animation and the corresponding parameters, as the at least one attribute of the animation and the corresponding parameters.
In an embodiment, the obtaining one or more first tags and corresponding parameters corresponding to each attribute of the animation as at least one attribute and corresponding parameter of the animation includes:
sequentially acquiring a plurality of first tags and the corresponding parameters according to the format of the first format file;
and adding each first tag and its corresponding parameters to the corresponding attribute of the animation according to the attribute of the first tag.
In one embodiment, the second format file of the animation includes a plurality of second tags and corresponding parameters;
the converting the at least one attribute into a second format and obtaining a second format file of the animation according to the parameter of the at least one attribute comprises:
determining one or more second tags corresponding to each attribute of the animation according to the attribute of each second tag;
determining the parameters corresponding to the one or more corresponding second tags according to the parameters corresponding to each attribute of the animation;
and forming the second format file according to the format of the second format file, all the second tags and the corresponding parameters.
In one embodiment, the method further comprises:
and parsing the second format file of the animation to generate the animation effect.
In one embodiment, the attributes of the animation include at least one of:
a zoom attribute, a displacement attribute, a rotation attribute, a registration point adjustment attribute, a shape change attribute, a color change attribute, a path attribute, and a stroke attribute.
According to a second aspect of the embodiments of the present disclosure, there is provided an animation conversion apparatus, applied to a client, the apparatus including:
the acquisition module is used for acquiring a first format file of the animation;
the attribute module is used for acquiring at least one attribute and corresponding parameters of the animation according to the first format file of the animation;
and the determining module is used for converting the at least one attribute into a second format and obtaining a second format file of the animation according to the parameters of the at least one attribute, wherein the second format is a format that the system of the client supports parsing.
In one embodiment, the first format file of the animation includes a plurality of first tags and corresponding parameters;
the attribute module is specifically configured to:
acquire one or more first tags corresponding to each attribute of the animation and the corresponding parameters, as the at least one attribute of the animation and the corresponding parameters.
In one embodiment, the attribute module includes:
the acquiring unit is used for sequentially acquiring a plurality of first tags and the corresponding parameters according to the format of the first format file;
and the adding unit is used for adding each first tag and its corresponding parameters to the corresponding attribute of the animation according to the attribute of the first tag.
In one embodiment, the second format file of the animation includes a plurality of second tags and corresponding parameters;
the determining module comprises:
the tag unit is used for determining one or more second tags corresponding to each attribute of the animation according to the attribute of each second tag;
the parameter unit is used for determining the parameters corresponding to the one or more corresponding second tags according to the parameters corresponding to each attribute of the animation;
and the file unit is used for forming the second format file according to the format of the second format file, all the second tags and the corresponding parameters.
In one embodiment, the method further comprises:
and the parsing module is used for parsing the second format file of the animation so as to generate the animation effect.
In one embodiment, the attributes of the animation include at least one of:
a zoom attribute, a displacement attribute, a rotation attribute, a registration point adjustment attribute, a shape change attribute, a color change attribute, a path attribute, and a stroke attribute.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor, and a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a first format file of the animation;
acquiring at least one attribute and corresponding parameters of the animation according to the first format file of the animation;
and converting the at least one attribute into a second format, and obtaining a second format file of the animation according to the parameters of the at least one attribute, wherein the second format is a format that the system of the client supports parsing.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring a first format file of the animation;
acquiring at least one attribute and corresponding parameters of the animation according to the first format file of the animation;
and converting the at least one attribute into a second format, and obtaining a second format file of the animation according to the parameters of the at least one attribute, wherein the second format is a format that the system of the client supports parsing.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
according to the method and the device, at least one attribute and corresponding parameters of the animation can be acquired through the first format file of the animation, namely various attributes and parameters of each attribute contained in animation information of the animation are acquired, each attribute is converted into the second format, and the second format file of the animation is generated by combining the conversion result and the parameters of each attribute, namely the conversion from the first format file to the second format file is completed by taking various attributes of the animation as intermediaries, namely the conversion from the original format to the target format of the client side supported and analyzed by a system can be converted. For example, an android client can convert SVG animation into a format supporting parsing by itself by using the conversion method provided by the present disclosure, and the converted format is used as a basis for realizing animation effect.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart diagram illustrating an animation transformation method according to an exemplary embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a method of obtaining attributes and corresponding parameters of an animation according to an exemplary embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating a method of obtaining a second format file for animation according to an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of an animation conversion device according to an exemplary embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating the structure of a property module according to an exemplary embodiment of the present disclosure;
FIG. 6 is a block diagram illustrating a determination module in accordance with an exemplary embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an animation conversion device according to another exemplary embodiment of the present disclosure;
fig. 8 is a block diagram of a smart device shown in an exemplary embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
Scalable Vector Graphics (SVG) is a vector graphics specification established by the World Wide Web Consortium (W3C). SVG covers static graphics and dynamic graphics (i.e., animations). At present, an Android client can convert an SVG static graphic into a format it can parse itself; for example, the Vector Asset tool in Android Studio can convert an SVG static graphic into the Vector Drawable format, which the Android client can parse in order to draw the static graphic. However, an Android client currently cannot convert an SVG animation into a format it can parse itself (for example, the Animated Vector Drawable format) and so cannot realize the animation effect. As a result, an SVG animation can at present only be packaged into a special format supported by the Android client by relying on a third-party animation class library, and that class library has to be introduced into the system during development so that the animation can be parsed, which is costly and imposes a heavy runtime load. Common third-party animation class libraries include Lottie and SVGA.
Based on this, referring to fig. 1, the present disclosure provides an animation conversion method applied to a client, where the method includes the following three steps S101 to S103:
in this embodiment, the client may be an intelligent terminal, for example, a smart phone, a tablet computer, a PDA (Personal Digital Assistant), an e-book reader, a multimedia player, and the like.
In step S101, a first format file of an animation is acquired.
In this step, an original format file, that is, the format file of the animation to be converted, is obtained. The first format file is a source code file describing the animation, that is, text that describes the animation in a certain format, so that the format file is readable in the same way as an HTML web page. After an animation tool outputs the animation in the first format, the first format file can be opened with a word processing tool and the source code describing the animation can be seen; as long as the syntax of the first format is mastered, the content of the animation can be read from the source code.
In one example, the animation is an object moving from point A to point B; the source code describing this animation process is organized in the first format file according to the inherent rules of that format, and the animation process can be read from the source code of the first format file.
In one example, the first format is an SVG format, and the first format file of the animation is a source code file of the SVG animation, that is, a text file written in Extensible Markup Language (XML). For example, the source code file of an SVG animation is shown below.
<svg class="lds-message"width="80px"height="80px"xmlns="http://www.w3.org/2000/svg"viewBox="0 0 100 100"preserveAspectRatio="xMidYMid">
<g transform="translate(20 50)">
<circle cx="0"cy="0"r="7"fill="#e15b64"transform="scale(0.992750.99275)">
<animateTransform attributeName="transform"type="scale"begin="-0.375s"calcMode="spline"keySplines="0.3 0 0.7 1;0.3 0 0.7 1"values="0;1;0"keyTimes="0;0.5;1"dur="1s"repeatCount=“indefinite”></animateTransform>
</circle>
</g></svg>
Here, <g transform="translate(20 50)"> represents the position of the graphic; <circle cx="0" cy="0" r="7" fill="#e15b64" transform="scale(0.99275 0.99275)"> represents the graphic style information; and <animateTransform attributeName="transform" type="scale" begin="-0.375s" calcMode="spline" keySplines="0.3 0 0.7 1;0.3 0 0.7 1" values="0;1;0" keyTimes="0;0.5;1" dur="1s" repeatCount="indefinite"></animateTransform> represents the graphic animation information, namely the animation type, its timing, and the changes of the attribute values.
In step S102, at least one attribute and corresponding parameter of the animation are obtained according to the first format file of the animation.
In this step, the first format file is the source text describing the animation, and it describes each attribute contained in the animation; that is, when the syntax of the first format is used to read the animation content expressed by the first format file, one can read which attributes the animation contains and the parameters of each attribute. This step is therefore a process of reading the animation in the first format, and a more direct description of the animation, namely its attributes and parameters, is obtained through the reading.
In one example, for an animation of an object moving from point A to point B, it can be read from the source code file that the animation has attributes such as a movement start point, a movement end point, a movement duration, a movement speed and a movement path.
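As a concrete illustration only, and not part of the original disclosure, the "more direct description of the animation" mentioned above could be held in a couple of plain data classes; all names and fields below are hypothetical.
// Hypothetical intermediate representation for step S102: the attributes of the
// animation and the raw parameters collected for each attribute.
data class AnimationAttribute(
    val name: String,                   // e.g. "translate", "scale", "path"
    val parameters: Map<String, String> // raw parameter values taken from the first-format tags
)
data class ParsedAnimation(val attributes: List<AnimationAttribute>)
// The "object moves from point A to point B" example expressed in this representation.
val moveAToB = ParsedAnimation(
    attributes = listOf(
        AnimationAttribute("translate", mapOf("fromX" to "0", "fromY" to "0", "toX" to "100", "toY" to "40")),
        AnimationAttribute("duration", mapOf("value" to "1s")),
        AnimationAttribute("path", mapOf("d" to "M0,0 L100,40"))
    )
)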
In step S103, the at least one attribute is converted into a second format, and a second format file of the animation is obtained according to the parameter of the at least one attribute, where the second format file is a format that a system of the client supports parsing.
In this step, the second format is a format different from the format of the animation to be converted (i.e., the first format) and can be parsed by the system of the client; it is the destination format of the conversion process. Like the first format file, the second format file is a target code file describing the animation; it is text that describes the animation in another format and is likewise readable. Describing the animation in the second format means describing each attribute of the animation obtained in step S102 with the syntax of the second format. In a specific operation, each attribute can first be converted into the second format, and the parameters of each attribute are then assigned to the converted second format, so that a file in the second format is obtained.
In one example, for the animation in which an object moves from point A to point B, the attributes such as the movement start point, movement end point, movement duration, movement speed and movement path need to be converted into the second format; values are then assigned to the converted second format through the parameters of these attributes, and the result is edited using the syntax of the second format to obtain the second format file corresponding to the animation.
In one example, the second format is the Animated Vector Drawable format, and the second format file of the animation is Animated Vector Drawable code text, that is, a text file written in Extensible Markup Language (XML). For example, a target code file of the animation in the Animated Vector Drawable format is shown below.
(The target code file in the Animated Vector Drawable format is reproduced in the original publication as a series of images, Figures BDA0002429231470000081 to BDA0002429231470000151, and is not transcribed here.)
In the animation conversion method disclosed in this embodiment, the attributes of the animation are used as an intermediary to complete the conversion from the first format file to the second format file, that is, the conversion from the original format to a target format that the system of the client supports parsing. For example, by using the conversion method provided by the present disclosure, an Android client can convert an SVG animation into a format it can parse itself (such as the Animated Vector Drawable format), as a basis for realizing the animation effect.
In some embodiments of the present disclosure, the first format file of the animation includes a plurality of first tags and corresponding parameters. The present disclosure exemplarily shows a method for acquiring at least one attribute of the animation and the corresponding parameters according to the first format file of the animation, the aim of which is to acquire one or more first tags corresponding to each attribute of the animation and the corresponding parameters as the at least one attribute of the animation and the corresponding parameters; referring to fig. 2, the method includes steps S201 to S202.
In step S201, a plurality of first tags and corresponding parameters are sequentially obtained according to the format of the first format file.
The first format file describes the animation through first tags and parameters. The specific types of first tags are defined by the syntax of the first format, and each tag has a fixed purpose, that is, each first tag is used to describe one attribute of the animation. However, the first tags do not correspond to the attributes of the animation one to one: one first tag may fully describe one attribute of the animation, but one attribute of the animation may also require several first tags working together to be described completely. In other words, there is a mapping relationship between the first tags and the attributes of the animation, and this mapping contains both one-to-one and many-to-one relationships. The position of each first tag in the first format file, the connecting statements between different first tags, and the other statements in the first format file are all produced according to the syntax of the first format.
In this step, the format of the first format file and the syntax of the first format are used to enumerate the first tags.
In one example, the first format file is a source code file of an SVG animation, and the first tag is a tag in the source code file, such as width, height, etc. in the source code file of the SVG animation mentioned above.
In step S202, according to the attribute of each first tag, the first tag and its corresponding parameters are added to the corresponding attribute of the animation.
In this step, the attribute of the animation that each first tag describes (i.e., the attribute of the first tag) can be determined according to the format of the first format file and the syntax of the first format, and the tag is added to the corresponding attribute. After all the first tags have been assigned, the attribute types of the animation and the content of each attribute are obtained.
In one example, the attributes of the animation include at least one of: a zoom attribute, a displacement attribute, a rotation attribute, a registration point adjustment attribute, a shape change attribute, a color change attribute, a path attribute, and a stroke attribute. The zoom attribute may include a width scaling factor and a height scaling factor; the displacement attribute may include a displacement of the abscissa and a displacement of the ordinate; the rotation attribute may include a rotation type, such as rotation; the registration point adjustment attribute may include a registration point abscissa and a registration point ordinate; the shape change attribute may include path information; the color change attribute may include a fill color and the transparency of the fill color; the path attribute includes a path start and a path end; and the stroke attribute includes a stroke color, a stroke color transparency, and a stroke width.
Through steps S201 and S202, the extraction of the effective content of the first format file (i.e., the extraction of the first tags) and the determination of the attributes of the animation (i.e., the assignment of the first tags) are completed; the format characters of the first format file are effectively removed, the content of the first format file is used accurately and efficiently, and an accurate description of the intermediary in the animation conversion process, namely the attributes of the animation, is obtained quickly.
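The disclosure does not prescribe a particular parser for steps S201 and S202. As one possible sketch only, assuming an Android environment where the org.xmlpull.v1.XmlPullParser API is available, and reusing the hypothetical AnimationAttribute class from the earlier sketch, the tag extraction and attribute assignment could look roughly as follows; the bucketing rules in the when branch are illustrative assumptions, not the patent's exact mapping.
import org.xmlpull.v1.XmlPullParser
import org.xmlpull.v1.XmlPullParserFactory
import java.io.StringReader
// Sketch of steps S201/S202: walk the SVG (first format) text tag by tag and
// add each tag's parameters to the animation attribute it describes.
fun extractAttributes(svgText: String): List<AnimationAttribute> {
    val parser = XmlPullParserFactory.newInstance().newPullParser()
    parser.setInput(StringReader(svgText))
    val attributes = mutableListOf<AnimationAttribute>()
    var event = parser.eventType
    while (event != XmlPullParser.END_DOCUMENT) {
        if (event == XmlPullParser.START_TAG) {
            // S201: sequentially collect the tag's parameters as written in the file.
            val params = (0 until parser.attributeCount)
                .associate { parser.getAttributeName(it) to parser.getAttributeValue(it) }
            // S202: decide which animation attribute this first tag contributes to.
            when (parser.name) {
                "g", "circle" -> attributes += AnimationAttribute("style", params)
                "animateTransform" -> attributes += AnimationAttribute(params["type"] ?: "transform", params)
            }
        }
        event = parser.next()
    }
    return attributes
}
Applied to the SVG example above, such a routine would yield a scale attribute carrying the begin, dur, values, keyTimes and repeatCount parameters.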
In some embodiments of the present disclosure, the second format file of the animation includes a plurality of second tags and corresponding parameters; referring to fig. 3, a method for obtaining a second format file of an animation is exemplarily shown, which specifically includes steps S301 to S303.
In step S301, one or more second tags corresponding to each attribute of the animation are determined according to the attribute of each second tag.
The second format file describes the animation through second tags and parameters. The specific types of second tags are defined by the syntax of the second format, and each second tag has a fixed purpose, that is, each second tag is used to describe one attribute of the animation. However, the second tags do not correspond to the attributes of the animation one to one: one second tag may fully describe one attribute of the animation, but one attribute of the animation may also require several second tags working together to be described completely. In other words, there is a mapping relationship between the second tags and the attributes of the animation, and this mapping contains both one-to-one and many-to-one relationships. The position of each second tag in the second format file, the connecting statements between different second tags, and the other statements in the second format file are all produced according to the syntax of the second format.
In this step, each attribute of the animation is first enumerated; then, according to the mapping relationship between the second tags and the attributes of the animation, the second tags required to describe each attribute can be determined, and each attribute of the animation is converted into the one or more second tags that describe it.
In step S302, parameters corresponding to the corresponding one or more second tags are determined according to the parameters corresponding to each attribute of the animation.
In this step, when an attribute of the animation corresponds to one second tag, the parameters of the attribute need to be converted into the parameter format of that second tag, so that the parameters of the second tag are determined; when an attribute of the animation corresponds to a plurality of second tags, the parameters of the attribute need to be split into the parameters corresponding to the plurality of second tags, and the split parameters are converted into the parameter format of the respective second tags, so that the parameters of each second tag corresponding to the attribute are determined.
In step S303, a second format file is formed according to the format of the second format file, all the second tags and corresponding parameters.
In this step, the position of each second tag in the second format file, the connecting statements between different second tags, and the other statements in the second format file are determined according to the syntax of the second format, and a complete second format file is then generated.
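The exact tag mapping and output layout are not reproduced here (the target file appears only as figures in the original publication). Purely as an illustration in the spirit of the Animated Vector Drawable format, steps S301 to S303 could be sketched as a small emitter that turns a scale attribute from the earlier sketches into an objectAnimator-style second tag and assembles the second format file as text; the tag names, parameter conversions and surrounding XML are assumptions.
// Sketch of steps S301-S303: choose second tags for an attribute, convert its
// parameters, and wrap everything in the statements required by the second format.
fun emitSecondFormat(attributes: List<AnimationAttribute>): String {
    val animators = attributes
        .filter { it.name == "scale" }              // S301: second tags needed for this attribute
        .joinToString(separator = "\n") { attr ->
            // S302: convert the parameter format, e.g. "1s" -> 1000 ms.
            val durationMs = attr.parameters["dur"]?.removeSuffix("s")?.toFloatOrNull()
                ?.times(1000)?.toInt() ?: 1000
            """
            |    <objectAnimator
            |        android:propertyName="scaleX"
            |        android:valueFrom="0"
            |        android:valueTo="1"
            |        android:duration="$durationMs"
            |        android:repeatCount="infinite" />
            """.trimMargin()
        }
    // S303: place the second tags inside the surrounding statements of the second format.
    return """
        |<set xmlns:android="http://schemas.android.com/apk/res/android">
        |$animators
        |</set>
    """.trimMargin()
}
In a real project the emitted text would be written out as a resource file; the choice of scaleX, the value range, and the set root element are simplifications, not the patent's actual output.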
In one example, the second format file is the aforementioned target code file in the Animated Vector Drawable format. This format can be parsed directly by the Android client, so dependence on a third-party animation class library is avoided, no third-party animation class library needs to be introduced into the system, and cost and computational load are reduced.
In some embodiments of the present disclosure, since the system of the client supports parsing the second format file, the animation conversion method further includes: after the second format file is generated, parsing the second format file by the system of the client, and generating the animation effect after the parsing is completed.
In this step, the Android client supports parsing animations in the Animated Vector Drawable format, so after the SVG animation has been converted into the Animated Vector Drawable format, the Android client can parse the Animated Vector Drawable file to generate the animation effect. The animation conversion method of this embodiment therefore enables the Android client to use an SVG animation whose original format it does not support parsing, and to generate the animation effect.
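For concreteness, and as ordinary Android usage rather than anything mandated by the disclosure, once the converted second format file has been packaged as a drawable resource, the client could parse and play it with the AndroidX AnimatedVectorDrawableCompat API along the following lines; the resource name is hypothetical.
import android.content.Context
import android.widget.ImageView
import androidx.vectordrawable.graphics.drawable.AnimatedVectorDrawableCompat
// Sketch: let the client parse the converted Animated Vector Drawable resource
// and start it to generate the animation effect.
// R.drawable.converted_svg_animation is a hypothetical resource produced by the conversion.
fun playConvertedAnimation(context: Context, imageView: ImageView) {
    val avd = AnimatedVectorDrawableCompat.create(context, R.drawable.converted_svg_animation)
    imageView.setImageDrawable(avd)
    avd?.start()
}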
In a second aspect, referring to fig. 4, the present disclosure provides an animation conversion apparatus applied to a client, the apparatus including:
an obtaining module 401, configured to obtain a first format file of an animation;
an attribute module 402, configured to obtain at least one attribute and a corresponding parameter of the animation according to the first format file of the animation;
a determining module 403, configured to convert the at least one attribute into a second format, and obtain a second format file of the animation according to the parameters of the at least one attribute, where the second format is a format that the system of the client supports parsing.
In some embodiments of the present disclosure, the first format file of the animation includes a plurality of first tags and corresponding parameters;
the attribute module is specifically configured to:
acquire one or more first tags corresponding to each attribute of the animation and the corresponding parameters, as the at least one attribute of the animation and the corresponding parameters.
Referring to fig. 5, in some embodiments of the present disclosure, the attribute module includes:
an obtaining unit 501, configured to sequentially obtain a plurality of first tags and the corresponding parameters according to the format of the first format file;
an adding unit 502, configured to add each first tag and its corresponding parameters to the corresponding attribute of the animation according to the attribute of the first tag.
In some embodiments of the present disclosure, the second format file of the animation includes a plurality of second tags and corresponding parameters;
referring to fig. 6, the determining module includes:
a tag unit 601, configured to determine one or more second tags corresponding to each attribute of the animation according to the attribute of each second tag;
a parameter unit 602, configured to determine, according to the parameters corresponding to each attribute of the animation, the parameters corresponding to the one or more corresponding second tags;
a file unit 603, configured to form the second format file according to the format of the second format file, all the second tags, and the corresponding parameters.
Referring to fig. 7, in some embodiments of the present disclosure, in addition to the obtaining module 701, the property module 702, and the determining module 703, the animation conversion apparatus further includes:
and the parsing module 704 is configured to parse the second format file of the animation to generate the animation effect.
In some embodiments of the present disclosure, the attributes of the animation include at least one of:
a zoom attribute, a displacement attribute, a rotation attribute, a registration point adjustment attribute, a shape change attribute, a color change attribute, a path attribute, and a stroke attribute.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In a third aspect, reference is made to fig. 8, which schematically illustrates a block diagram of an electronic device. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 8, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing elements 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power component 806 provides power to the various components of device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800, the relative positioning of components, such as a display and keypad of the device 800, the sensor assembly 814 may also detect a change in the position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. The sensor assembly 814 may also include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The apparatus 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G or 5G or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components, for performing the animation conversion method described above.
In a fourth aspect, the present disclosure also provides, in an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 820 of the apparatus 800 to perform the animation conversion method described above. For example, the non-transitory computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. An animation conversion method is applied to a client, and comprises the following steps:
acquiring a first format file of the animation;
acquiring at least one attribute and corresponding parameters of the animation according to the first format file of the animation;
and converting the at least one attribute into a second format, and obtaining a second format file of the animation according to the parameters of the at least one attribute, wherein the second format is a format that the system of the client supports parsing.
2. The animation conversion method as claimed in claim 1, wherein the first format file of the animation includes a plurality of first tags and corresponding parameters;
the acquiring at least one attribute and corresponding parameter of the animation according to the first format file of the animation comprises the following steps:
acquiring one or more first tags corresponding to each attribute of the animation and the corresponding parameters, as the at least one attribute of the animation and the corresponding parameters.
3. The animation conversion method according to claim 2, wherein the obtaining one or more first tags and corresponding parameters corresponding to each attribute of the animation as at least one attribute and corresponding parameter of the animation comprises:
sequentially acquiring a plurality of first tags and the corresponding parameters according to the format of the first format file;
and adding each first tag and its corresponding parameters to the corresponding attribute of the animation according to the attribute of the first tag.
4. The animation conversion method as claimed in claim 1, wherein the second format file of the animation includes a plurality of second tags and corresponding parameters;
the converting the at least one attribute into a second format and obtaining a second format file of the animation according to the parameter of the at least one attribute comprises:
determining one or more second tags corresponding to each attribute of the animation according to the attribute of each second tag;
determining the parameters corresponding to the one or more corresponding second tags according to the parameters corresponding to each attribute of the animation;
and forming the second format file according to the format of the second format file, all the second tags and the corresponding parameters.
5. The animation conversion method according to claim 1, further comprising:
and parsing the second format file of the animation to generate the animation effect.
6. The animation conversion method according to claim 1, wherein the attribute of the animation includes at least one of:
a zoom attribute, a displacement attribute, a rotation attribute, a registration point adjustment attribute, a shape change attribute, a color change attribute, a path attribute, and a stroke attribute.
7. An animation conversion apparatus applied to a client, the apparatus comprising:
the acquisition module is used for acquiring a first format file of the animation;
the attribute module is used for acquiring at least one attribute and corresponding parameters of the animation according to the first format file of the animation;
and the determining module is used for converting the at least one attribute into a second format and obtaining a second format file of the animation according to the parameters of the at least one attribute, wherein the second format is a format that the system of the client supports parsing.
8. The animation conversion apparatus according to claim 7, wherein the first format file of the animation includes a plurality of first tags and corresponding parameters;
the attribute module is specifically configured to:
acquire one or more first tags corresponding to each attribute of the animation and the corresponding parameters, as the at least one attribute of the animation and the corresponding parameters.
9. The animation conversion apparatus according to claim 8, wherein the property module includes:
the acquiring unit is used for sequentially acquiring a plurality of first tags and the corresponding parameters according to the format of the first format file;
and the adding unit is used for adding each first tag and its corresponding parameters to the corresponding attribute of the animation according to the attribute of the first tag.
10. The animation conversion apparatus according to claim 7, wherein the second format file of the animation includes a plurality of second tags and corresponding parameters;
the determining module comprises:
the tag unit is used for determining one or more second tags corresponding to each attribute of the animation according to the attribute of each second tag;
the parameter unit is used for determining the parameters corresponding to the one or more corresponding second tags according to the parameters corresponding to each attribute of the animation;
and the file unit is used for forming the second format file according to the format of the second format file, all the second tags and the corresponding parameters.
11. The animation conversion device according to claim 7, further comprising:
and the parsing module is used for parsing the second format file of the animation so as to generate the animation effect.
12. The animation conversion apparatus according to claim 7, wherein the attribute of the animation includes at least one of:
a zoom attribute, a displacement attribute, a rotation attribute, a registration point adjustment attribute, a shape change attribute, a color change attribute, a path attribute, and a stroke attribute.
13. An electronic device, comprising:
a processor, and a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a first format file of the animation;
acquiring at least one attribute and corresponding parameters of the animation according to the first format file of the animation;
and converting the at least one attribute into a second format, and obtaining a second format file of the animation according to the parameters of the at least one attribute, wherein the second format is a format that the system of the client supports parsing.
14. A computer-readable storage medium, on which a computer program is stored, which program, when executed by a processor, carries out the steps of:
acquiring a first format file of the animation;
acquiring at least one attribute and corresponding parameters of the animation according to the first format file of the animation;
and converting the at least one attribute into a second format, and obtaining a second format file of the animation according to the parameters of the at least one attribute, wherein the second format is a format that the system of the client supports parsing.
CN202010230854.4A 2020-03-27 2020-03-27 Animation conversion method and device and intelligent equipment Pending CN113516737A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010230854.4A CN113516737A (en) 2020-03-27 2020-03-27 Animation conversion method and device and intelligent equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010230854.4A CN113516737A (en) 2020-03-27 2020-03-27 Animation conversion method and device and intelligent equipment

Publications (1)

Publication Number Publication Date
CN113516737A true CN113516737A (en) 2021-10-19

Family

ID=78060099

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010230854.4A Pending CN113516737A (en) 2020-03-27 2020-03-27 Animation conversion method and device and intelligent equipment

Country Status (1)

Country Link
CN (1) CN113516737A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102289834A (en) * 2011-08-30 2011-12-21 北京瑞信在线***技术有限公司 Micro-animation editer and edition method thereof
US20120249870A1 (en) * 2011-03-28 2012-10-04 Pieter Senster Cross-Compiling SWF to HTML Using An Intermediate Format
US20150049093A1 (en) * 2013-08-14 2015-02-19 Flipboard, Inc. Preloading Animation Files In A Memory Of A Client Device
CN104737121A (en) * 2012-09-04 2015-06-24 谷歌公司 In browser muxing and demuxing for video playback
CN105427353A (en) * 2015-11-12 2016-03-23 小米科技有限责任公司 Compression and drawing method and device of scalable vector graphic
CN105556569A (en) * 2013-06-03 2016-05-04 微软技术许可有限责任公司 Animation editing
KR20160096360A (en) * 2015-02-05 2016-08-16 주식회사 윌드림 Method for real-time generation and reproducing of EPUB animation using style sheet animation and real-time generation and reproducing of EPUB animation system using thereof
CN105956133A (en) * 2016-05-10 2016-09-21 腾讯科技(深圳)有限公司 Method and apparatus for displaying file in intelligent terminal
EP3168742A1 (en) * 2015-11-12 2017-05-17 Xiaomi Inc. Method and device for drawing gui
CN106815181A (en) * 2016-12-19 2017-06-09 广东小天才科技有限公司 Method and device for converting Indesign typesetted ind files into Office files
CN109389661A (en) * 2017-08-04 2019-02-26 阿里健康信息技术有限公司 A kind of animation file method for transformation and device
CN109636884A (en) * 2018-10-25 2019-04-16 阿里巴巴集团控股有限公司 Animation processing method, device and equipment

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120249870A1 (en) * 2011-03-28 2012-10-04 Pieter Senster Cross-Compiling SWF to HTML Using An Intermediate Format
CN102289834A (en) * 2011-08-30 2011-12-21 北京瑞信在线***技术有限公司 Micro-animation editer and edition method thereof
CN104737121A (en) * 2012-09-04 2015-06-24 谷歌公司 In browser muxing and demuxing for video playback
CN105556569A (en) * 2013-06-03 2016-05-04 微软技术许可有限责任公司 Animation editing
US20150049093A1 (en) * 2013-08-14 2015-02-19 Flipboard, Inc. Preloading Animation Files In A Memory Of A Client Device
KR20160096360A (en) * 2015-02-05 2016-08-16 주식회사 윌드림 Method for real-time generation and reproducing of EPUB animation using style sheet animation and real-time generation and reproducing of EPUB animation system using thereof
CN105427353A (en) * 2015-11-12 2016-03-23 小米科技有限责任公司 Compression and drawing method and device of scalable vector graphic
EP3168742A1 (en) * 2015-11-12 2017-05-17 Xiaomi Inc. Method and device for drawing gui
CN105956133A (en) * 2016-05-10 2016-09-21 腾讯科技(深圳)有限公司 Method and apparatus for displaying file in intelligent terminal
CN106815181A (en) * 2016-12-19 2017-06-09 广东小天才科技有限公司 Method and device for converting Indesign typesetted ind files into Office files
CN109389661A (en) * 2017-08-04 2019-02-26 阿里健康信息技术有限公司 A kind of animation file method for transformation and device
CN109636884A (en) * 2018-10-25 2019-04-16 阿里巴巴集团控股有限公司 Animation processing method, device and equipment

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
NEMO__: "Android SVG使用之AnimatedVectorDrawable" [Using AnimatedVectorDrawable for SVG on Android], pages 1-8, retrieved from the Internet: <URL: https://blog.csdn.net/Nemo__/article/details/71099839?spm=1001.2014.3001.5501> *
YIAN ZUO et al.: "Flash Animation Watermarking Algorithm Based on SWF Tag Attributes", Parallel and Distributed Computing, Applications and Technologies, 8 February 2019 (2019-02-08), pages 177-187 *
丁征凯 et al.: "利用VectorControl.Net组件进行DWG与SVG文件格式的转换研究" [Research on the Conversion between DWG and SVG File Formats Using the VectorControl.Net Component], Chinese Journal of Engineering Geophysics (《工程地球物理学报》), vol. 8, no. 1, pages 125-130 *
蒋连聘: "基于JavaEE的批量音视频转换平台研究" [Research on a Java EE-Based Batch Audio and Video Conversion Platform], China Master's Theses Full-text Database, Information Science and Technology (《中国优秀硕士学位论文全文数据库 信息科技辑》), no. 10, 15 October 2018 (2018-10-15), pages 138-124 *

Similar Documents

Publication Publication Date Title
US10108323B2 (en) Method and device for drawing a graphical user interface
CN106569800B (en) Front-end interface generation method and device
US10564945B2 (en) Method and device for supporting multi-framework syntax
CN109032606B (en) Native application compiling method and device and terminal
CN110874217B (en) Interface display method and device for quick application and storage medium
EP3242203A1 (en) Method for operating a display device and display device
CN109542417B (en) Method, device, terminal and storage medium for rendering webpage through DOM
US10909203B2 (en) Method and device for improving page display effect via execution, conversion and native layers
CN110928543A (en) Page processing method and device and storage medium
US20230004620A1 (en) Page display method
CN110704059A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112052059A (en) Popup display method and device, electronic equipment and storage medium
US20170308397A1 (en) Method and apparatus for managing task of instant messaging application
US20190012299A1 (en) Displaying page
CN107402756B (en) Method, device and terminal for drawing page
CN112269620A (en) Display method and device, electronic equipment and storage medium
CN112256445A (en) Data processing method, device and equipment based on application program and storage medium
CN107368562B (en) Page display method and device and terminal
CN113516737A (en) Animation conversion method and device and intelligent equipment
CN112182449A (en) Page loading method and device, electronic equipment and storage medium
CN113420531A (en) Code text conversion method and device and storage medium
CN112860625A (en) Data acquisition method, data storage method, device, equipment and storage medium
CN107423060B (en) Animation effect presenting method and device and terminal
CN113536180A (en) Item processing method, item processing device, electronic equipment, storage medium and program product
CN111131000A (en) Information transmission method, device, server and terminal

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination