WO2019206243A1 - A material display method, terminal, and computer storage medium

A material display method, terminal, and computer storage medium

Info

Publication number
WO2019206243A1
WO2019206243A1 · PCT/CN2019/084391 · CN2019084391W
Authority
WO
WIPO (PCT)
Prior art keywords
image data
instruction
determining
detecting
objects
Prior art date
Application number
PCT/CN2019/084391
Other languages
English (en)
French (fr)
Inventor
吴嘉旭
黄慧滢
Original Assignee
咪咕动漫有限公司
***通信集团有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 咪咕动漫有限公司 and ***通信集团有限公司
Publication of WO2019206243A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures

Definitions

  • the present application relates to information processing technologies, and in particular, to a material display method, a terminal, and a computer storage medium.
  • Existing short-video and photo-beautification tools display materials statically: materials are presented in a list, and a user who wants to try a material can only select it manually. A material display scheme that presents materials in rotation would make selection convenient for the user, enhance the user experience, and reduce interaction cost.
  • the embodiments of the present application provide a material display method, a terminal, and a computer storage medium.
  • the embodiment of the present application provides a material display method, and the method includes:
  • detecting a first instruction; determining at least part of a plurality of material objects based on the first instruction; sequentially adding material data corresponding to the at least part of the material objects to first image data according to a preset rule to generate second image data; and sequentially outputting the second image data including the material data on an output display interface.
  • In some embodiments, detecting the first instruction and determining at least part of the plurality of material objects based on the first instruction includes one of the following: when a first operation on a first specific function button is detected, determining that the first instruction is detected, and determining the plurality of material objects based on the first instruction; when a specific gesture operation is detected, determining that the first instruction is detected, and determining the plurality of material objects based on the first instruction; when a second operation on a tag identifier to which the plurality of material objects belong is detected, determining that the first instruction is detected, and determining the plurality of material objects based on the first instruction; when a third operation on a first material object among the plurality of material objects is detected, determining that the first instruction is detected, where the first material object is any one of the plurality of material objects, and determining, based on the first instruction, the first material object and the material objects sorted after the first material object; or when operations on two of the plurality of material objects are detected respectively, determining that the first instruction is detected.
  • In some embodiments, the method further includes: detecting a second instruction, and suspending or terminating output of the second image data including the material data according to the second instruction; where detecting the second instruction includes: determining that the second instruction is detected when a seventh operation on a second specific function button is detected; or determining that the second instruction is detected when a second specific gesture operation is detected; or determining that the second instruction is detected when an eighth operation on the tag identifier to which the plurality of material objects belong is detected.
  • In some embodiments, the method further includes: detecting a third instruction, and resuming output of the second image data including the material data according to the third instruction; or determining a fourth material object, the fourth material object being the same as or different from the material object corresponding to the second image data whose output was paused, detecting the third instruction, and, according to the third instruction, resuming output of second image data including the material data corresponding to the fourth material object, starting from the fourth material object.
  • In some embodiments, the method further includes: detecting a fourth instruction; when the fourth instruction is detected, determining the material object corresponding to the material data in the second image data being output; and adding that material object to a first set based on the fourth instruction. Detecting the fourth instruction includes: determining that the fourth instruction is detected when a ninth operation on a third specific function button is detected; or, when the image acquisition component is active and the first image data is continuously obtained through it, identifying whether the second image data includes a specific identifier, where the specific identifier includes at least one of a specific object, a specific hand shape, and a specific expression, and determining that the fourth instruction is detected when the second image data is recognized to include the specific identifier.
  • In some embodiments, the method further includes: displaying, for comparison, the material data corresponding to the material objects included in the first set; the comparison display manner includes: sequentially displaying the second image data including the material data; or displaying second image data including material data in at least two sub-interfaces into which the output display interface is divided.
  • In some embodiments, the method includes: identifying whether a target object in the second image data meets a preset condition; and, when the target object in the second image data is recognized as not satisfying the preset condition, pausing output of the second image data including the material data.
  • Identifying whether the target object in the second image data meets the preset condition includes: identifying whether the target object is included in the second image data, and determining that the preset condition is not satisfied when the target object is not included; or identifying whether a display parameter of the target object in the second image data reaches a preset threshold, and determining that the preset condition is not satisfied when the display parameter does not reach the preset threshold.
  • In some embodiments, the method further includes: resuming output of the second image data including the material data when the target object in the second image data is recognized as satisfying the preset condition.
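The pause condition above (target object absent, or a display parameter below a preset threshold) can be sketched as follows; the detector interface and the visible-ratio parameter are illustrative assumptions, not specified by the embodiment.

```python
def should_pause_output(frame, detect_target, min_visible_ratio=0.5):
    """Return True when the second image data's output should be paused.

    detect_target(frame) -> {'found': bool, 'visible_ratio': float}
    (hypothetical interface; 'visible_ratio' stands in for the patent's
    unspecified "display parameter").
    """
    result = detect_target(frame)
    if not result["found"]:
        return True  # target object absent: preset condition not met
    # display parameter below the preset threshold: also pause
    return result["visible_ratio"] < min_visible_ratio
```

When the condition is satisfied again (target re-enters the frame), the caller resumes output, matching the behavior described in the bullet above.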
  • the embodiment of the present application further provides a terminal, where the terminal includes: an image acquiring unit, a display unit, and a processing unit;
  • the image obtaining unit is configured to obtain first image data
  • the display unit is configured to output the first image data obtained by the image acquiring unit on an output display interface; the operation area deployed on the output display interface includes a plurality of material objects;
  • the processing unit is configured to detect the first instruction, determine at least part of the plurality of material objects based on the first instruction, and sequentially sequence the material data corresponding to the at least part of the material objects according to a preset rule Adding to the first image data to generate second image data; and sequentially outputting, by the display unit, second image data including material data on the output display interface.
  • the embodiment of the present application further provides a computer storage medium, where the computer instruction is stored, and when the instruction is executed by the processor, the steps of the material presentation method in the embodiment of the present application are implemented.
  • The embodiment of the present application further provides a terminal, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the program to implement the steps of the material display method in the embodiment of the present application.
  • The material display method, terminal, and computer storage medium provided by the embodiments of the present application include: obtaining first image data and outputting the first image data on an output display interface, where an operation area deployed on the output display interface includes a plurality of material objects; detecting a first instruction; determining at least part of the plurality of material objects based on the first instruction; sequentially adding the material data corresponding to the at least part of the material objects to the first image data according to a preset rule to generate second image data; and sequentially outputting the second image data including the material data on the output display interface.
  • This realizes a carousel of materials, in particular a carousel of image materials that interact with the user. It solves the problem that a user could previously experience only one material per interaction, with cumbersome operations, thereby greatly improving the user's operating experience and reducing interaction cost.
  • FIG. 1 is a schematic flow chart of a material display method according to Embodiment 1 of the present application.
  • FIG. 2 is a schematic diagram of an application scenario of a material display method according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a first application interaction of a material display method according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a second application interaction of a material display method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a third application interaction of a material display method according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a structure of a terminal according to an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of another component of a terminal according to an embodiment of the present application.
  • FIG. 8 is a schematic structural diagram of hardware components of a terminal according to an embodiment of the present application.
  • the embodiment of the present application provides a material display method.
  • FIG. 1 is a schematic flowchart of a material display method according to an embodiment of the present application; as shown in FIG. 1, the method includes:
  • Step 101 Obtain first image data, and output the first image data on an output display interface.
  • the operation area deployed on the output display interface includes a plurality of material objects.
  • Step 102: Detect a first instruction, determine at least part of the plurality of material objects based on the first instruction, sequentially add material data corresponding to the at least part of the material objects to the first image data according to a preset rule to generate second image data, and sequentially output the second image data including the material data on the output display interface.
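Steps 101 and 102 can be sketched as a simple carousel loop; the class and function names below are illustrative, not taken from the embodiment.

```python
from dataclasses import dataclass

@dataclass
class MaterialObject:
    name: str   # material identifier shown in the operation area
    data: str   # stands in for the real material data (e.g. an overlay)

def run_carousel(first_image, materials, apply_material, output):
    """For each determined material object, add its material data to the
    first image data (apply_material) and output the resulting second
    image data on the display interface (output)."""
    for m in materials:
        second_image = apply_material(first_image, m.data)
        output(second_image)
```

One second image is generated and output per determined material object, which is the "sequential" behavior step 102 describes.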
  • the material display method of the embodiment of the present application is applied to a terminal, and the terminal may be a mobile phone, a computer, a digital broadcast terminal, an information transceiver device, a game console, a tablet device, a personal digital assistant, or the like.
  • The terminal runs an application (such as a third-party application). After the application is activated, image data can be output through the application's output display interface. The output display interface has a toolbar; when the toolbar is activated, a plurality of material objects are displayed in the operation area of the output display interface, and a material object can be combined with the image data through an operation to generate new image data.
  • A material object is a material identifier displayed in the operation area, and each material object has corresponding material data. The material data corresponding to a material object is added to the image data to generate new image data.
  • FIGS. 2a to 2f are schematic diagrams of application scenarios of the material display method according to embodiments of the present application, showing material objects provided by different applications. As can be seen, a material object can be a material identifier corresponding to any material data that can be added to image data. On the one hand, material data may be data that adds display content to the image data, such as the material data corresponding to a cartoon-style photo frame; on the other hand, material data may be data that changes display parameters of the image data, such as changing its style, for example converting a color image to a black-and-white image.
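The two kinds of material data described above (content-adding and parameter-changing) can be sketched with an image modeled as a plain dictionary; all names are illustrative assumptions, not from the embodiment.

```python
def apply_material(image, material):
    """Combine material data with image data to form second image data.

    'material' is either ('overlay', content), which adds display content
    (e.g. a cartoon-style photo frame), or ('style', name), which changes
    display parameters (e.g. converting a color image to black and white).
    """
    kind, value = material
    second = dict(image)  # keep the first image data unchanged
    if kind == "overlay":
        second["overlays"] = image.get("overlays", []) + [value]
    elif kind == "style":
        second["style"] = value
    return second
```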
  • The first image data may be image data stored in the terminal; or image data obtained by the terminal interacting with a network device, such as a web album; or the terminal may activate an image acquisition component (such as a camera) and obtain image data in real time through the image acquisition component.
  • the image data may be a picture, a dynamic picture, or a video.
  • At least part of the plurality of material objects may be sequentially displayed in multiple manners.
  • In a first embodiment, detecting the first instruction and determining at least part of the plurality of material objects based on the first instruction includes: when a first operation on a first specific function button is detected, determining that the first instruction is detected, and determining the plurality of material objects based on the first instruction.
  • The first specific function button may be a virtual button deployed in the output display interface, or a physical button deployed on the terminal. When the first operation on the first specific function button is detected, the first instruction is determined to be detected, the plurality of material objects are determined according to the first instruction, the material data corresponding to the plurality of material objects are sequentially added to the first image data according to the preset rule to generate second image data, and the second image data including the material data is sequentially output on the output display interface.
  • In a second embodiment, detecting the first instruction includes: when a specific gesture operation is detected, determining that the first instruction is detected, and determining the plurality of material objects based on the first instruction.
  • The terminal has a touch detection component through which gesture operations can be detected; a gesture operation may be one that contacts the detection surface of the touch detection component, or a hovering gesture operation performed at a certain distance from that detection surface.
  • The terminal may pre-configure a specific gesture that matches the first instruction. When the specific gesture is detected, the first instruction is determined to be detected, the plurality of material objects are determined according to the first instruction, the material data corresponding to the plurality of material objects are sequentially added to the first image data according to the preset rule to generate second image data, and the second image data including the material data is sequentially output on the output display interface.
  • In a third embodiment, detecting the first instruction and determining at least part of the plurality of material objects based on the first instruction includes: when a second operation on the tag identifier to which the plurality of material objects belong is detected, determining that the first instruction is detected, and determining the plurality of material objects based on the first instruction.
  • the present embodiment is applicable to a scenario in which a material object belonging to the same category is provided with a corresponding tag identifier.
  • a plurality of material objects that are attached to the hair may be correspondingly set as a hair accessory class, and a plurality of material objects having a frame feature may be correspondingly set as a frame class and the like.
  • When the operation area displays material objects, the tag identifiers corresponding to groups of material objects may also be deployed; a user's operation on a tag identifier switches the display to the plurality of material objects belonging to that tag identifier.
  • The determined plurality of material objects may be all the material objects displayed in the operation area. In another embodiment, if material objects are loaded incrementally (that is, a category contains so many material objects that they cannot all be displayed in the operation area at once, and the next screen of material objects must be shown through a page-turning or pull-down operation), the determined plurality of material objects may be either all objects displayed in the current operation area or all material objects belonging to the same category.
  • In a fourth embodiment, detecting the first instruction and determining at least part of the plurality of material objects based on the first instruction includes: when a third operation on a first material object among the plurality of material objects is detected, determining that the first instruction is detected, where the first material object is any one of the plurality of material objects; and determining, based on the first instruction, the first material object and the material objects sorted after the first material object.
  • In a fifth embodiment, detecting the first instruction and determining at least part of the plurality of material objects based on the first instruction includes: when a fourth operation on a second material object and a fifth operation on a third material object among the plurality of material objects are respectively detected, determining that the first instruction is detected, where the second material object and the third material object are any two of the plurality of material objects; and determining, based on the first instruction, the material objects between the second material object and the third material object.
  • The fourth operation and the fifth operation may be input operations; in a scenario in which operations are detected through the touch detection component, the fourth operation and the fifth operation may be click operations.
  • As another implementation, detecting the first instruction and determining at least part of the plurality of material objects based on the first instruction may further include: when a sliding gesture operation is detected through the touch detection component, determining that the first instruction is detected; and determining the second material object and the third material object based on the starting position and the ending position of the sliding gesture operation, thereby determining the material objects between the second material object and the third material object.
  • The fourth embodiment described above detects a third operation that determines a starting position (the first material object) among the plurality of material objects, so the determined objects start with the first material object and include the material objects sorted after it.
  • The present embodiment detects a fourth operation that determines a starting position (the second material object) and a fifth operation that determines an ending position (the third material object), thereby determining all the material objects between the second material object and the third material object; the material data corresponding to the determined material objects are then sequentially added to the first image data according to a preset rule to generate second image data, and the second image data including the material data is sequentially output on the output display interface.
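The range determination from the sliding gesture's start and end positions can be sketched as follows, assuming the positions have already been mapped to indices in the material list (an illustrative assumption):

```python
def materials_in_range(materials, start_index, end_index):
    """Given the material objects hit by the start and end of a sliding
    gesture, return every material object between them (inclusive),
    regardless of the slide direction."""
    lo, hi = sorted((start_index, end_index))
    return materials[lo:hi + 1]
```

Sorting the two indices makes a right-to-left slide select the same range as a left-to-right one.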
  • In a sixth embodiment, detecting the first instruction and determining at least part of the plurality of material objects based on the first instruction includes: detecting the first instruction, and activating, according to the first instruction, a selection control for each of the plurality of material objects; and receiving a sixth operation on the selection controls of some of the plurality of material objects, and determining those material objects based on the sixth operation.
  • In this embodiment, the terminal may activate a selection control for each material object upon detecting the first instruction; the selection control supports selection or deselection through detected operations. For example, a selection box may be displayed over at least part of each material object. When an operation on a selection box is detected, the selection box indicates through a specific display manner (for example, displaying a specific mark in the box) that the corresponding material object has been selected; correspondingly, if an operation on a selected material object is detected again, the selection box cancels the specific display manner (for example, hides the mark) to indicate that the material object has been deselected.
  • In this way, for a plurality of material objects, the user can select some of them in a targeted manner, for example choosing only favorite material objects, without having to display the material data corresponding to all of the material objects in sequence.
  • In some embodiments, sequentially adding the material data corresponding to the at least part of the material objects to the first image data according to a preset rule to generate second image data includes: identifying a target object in the first image data; and sequentially adding the material data corresponding to the at least part of the material objects to the target object in the first image data to generate second image data.
  • In this embodiment, the terminal identifies the target object in the first image data through an artificial intelligence (AI) module; specifically, the AI module performs depth detection on the first image data to identify the target object in it. The target object may be a person, an animal, or the like; for example, a human face (or an animal face), or a human body (or the entire body of an animal).
  • The AI module may be located on the server side: when the terminal needs to recognize the first image data, it interacts with the server, and the server's AI module identifies the target object in the first image data. Alternatively, the terminal may obtain the AI module from the server, or pre-configure the AI module, so that it can identify the target object in the first image data through the AI module without networking.
  • The material data corresponding to the at least part of the material objects are sequentially added to the target object to generate second image data. It can be understood that the number of generated second image data equals the number of the at least part of the material objects.
  • In an example, the target object in the first image data is depth-detected by the AI module. For example, when the target object is a human face, after the face is recognized through the AI module's depth detection, the material data is matched to the face: cartoon-style rabbit ears, cat ears, and similar material data are attached above the head of the recognized face to generate second image data. This enhances the user's what-you-see-is-what-you-get perception and greatly improves the operating experience.
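The placement of head-worn material data over a detected face can be sketched with simple box geometry; the coordinate convention (top-left origin, y growing downward) and the centering rule are illustrative assumptions, not specified by the embodiment.

```python
def attach_to_head(face_box, sticker_size):
    """Place material data (e.g. cartoon rabbit ears) above a detected face.

    face_box = (x, y, w, h) as returned by a face detector; returns the
    top-left corner at which to draw the sticker, centered horizontally
    above the face.
    """
    x, y, w, h = face_box
    sw, sh = sticker_size
    sx = x + (w - sw) // 2   # center the sticker over the face
    sy = y - sh              # sit it directly on top of the head
    return sx, sy
```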
  • the material data corresponding to the material object is added to the first image data, and the existing image processing mode can be referred to, and details are not described herein again.
  • Sequentially outputting the second image data including the material data on the output display interface includes: sequentially outputting, on the output display interface, the second image data including the target object and the material data.
  • The output duration of each second image data satisfies a preset duration, for example 2 seconds. This allows the user to clearly see the effect of each material data during the sequential display.
  • In some embodiments, the method further includes: detecting a second instruction, and suspending or terminating output of the second image data including the material data according to the second instruction; where detecting the second instruction includes: determining that the second instruction is detected when a seventh operation on a second specific function button is detected; or determining that the second instruction is detected when a second specific gesture operation is detected; or determining that the second instruction is detected when an eighth operation on the tag identifier to which the plurality of material objects belong is detected.
  • the second specific function button may be the same as or different from the first specific function button.
  • In this embodiment, the sequential display of the second image data including the material data is triggered by the first operation on the first specific function button; during the display, a seventh operation on the second specific function button suspends or terminates output of the second image data. By pausing output, the user can examine the effect of the material data added in the second image data for as long as desired and later resume its output.
  • Terminating output of the second image data means that its output cannot be resumed.
  • The second specific gesture operation is the same as or different from the first specific gesture operation; for its description, refer to the related description of the first specific gesture operation. For the eighth operation, refer to the related description of the second operation; details are not repeated here.
  • In some embodiments, the method further includes: detecting a third instruction, and resuming output of the second image data including the material data according to the third instruction; or determining a fourth material object, the fourth material object being the same as or different from the material object corresponding to the second image data whose output was paused, detecting the third instruction, and, according to the third instruction, resuming output of second image data including the material data corresponding to the fourth material object, starting from the fourth material object.
  • In the former case, the third instruction is detected and output resumes from the paused second image data according to the third instruction. In the latter case, a fourth material object different from the currently paused material object is determined through an operation, the third instruction is then detected, and, according to the third instruction, output of the second image data corresponding to the fourth material object starts from that object.
  • Detecting the third instruction includes: determining that the third instruction is detected when an operation on a fourth specific function button is detected, where the fourth specific function button is the same as or different from the second specific function button; or determining that the third instruction is detected when a third specific gesture operation is detected, where the third specific gesture operation is the same as or different from the second specific gesture operation; or determining that the third instruction is detected when an operation on the tag identifier to which the plurality of material objects belong is detected.
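The pause/terminate/resume behavior of the second and third instructions can be sketched as a small state machine; this is an illustrative model, not the embodiment's implementation.

```python
class Carousel:
    """Minimal state machine: pause and terminate (second instruction),
    resume, optionally from a different material object (third instruction)."""

    def __init__(self, materials):
        self.materials = materials
        self.index = 0
        self.state = "playing"

    def pause(self):                 # second instruction: suspend output
        if self.state == "playing":
            self.state = "paused"

    def terminate(self):             # second instruction: terminate output
        self.state = "terminated"

    def resume(self, from_material=None):   # third instruction
        if self.state != "paused":
            return False             # terminated output cannot be restored
        if from_material is not None:       # resume from a fourth material object
            self.index = self.materials.index(from_material)
        self.state = "playing"
        return True
```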
  • FIG. 3 is a schematic diagram of a first application interaction of a material display method according to an embodiment of the present disclosure; as shown in FIG. 3, the method includes:
  • Step 201: The user clicks the toolbar button; specifically, the toolbar button displayed in the image display interface of the client is used to trigger display of the material data.
  • Step 202: The client requests the material data from the server.
  • This step may request the material data from the server when the client is used for the first time, or when the material data needs to be updated. It can be understood that the client needs to be connected to the server to obtain the material data only in these two cases; in other cases, the client can perform subsequent processing using material data that has already been obtained and stored.
  • Step 203: The client displays the material identifiers. Specifically, the client can display the material identifiers through the operation area of the output display interface; this operation area is an area that listens for operation events.
  • Step 204 The user double-clicks on a certain type of material label.
  • a large amount of material data can be classified in advance, and each category corresponds to a material label.
  • Step 205 The client detects the double-click event and generates a carousel and capability call instruction.
  • Step 206 The client invokes the AI service to perform depth detection on the target object in the image data.
  • Step 207: The material data is combined with the image data to generate a new image, which is displayed for a preset duration, for example, 2 seconds.
  • the method may further include: the user performs a pause operation; the client detects the pause event and pauses the carousel.
  • the method may further include: the user performs a termination operation; the client detects the termination event and terminates the carousel.
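Steps 201 to 207, together with the pause and termination operations, can be sketched as a small carousel object. This is a hedged sketch: the class and method names are assumptions, and `compose` stands in for combining a material with the current image data.

```python
# An illustrative sketch of the carousel flow in steps 201-207.
# Class, method, and parameter names are assumptions, not from the patent.
class MaterialCarousel:
    def __init__(self, materials, dwell_seconds=2.0):
        self.materials = list(materials)   # material data requested from the server
        self.dwell = dwell_seconds         # preset display duration, e.g. 2 seconds
        self.index = 0
        self.paused = False
        self.terminated = False

    def step(self, compose):
        """Combine the current material with the image data and advance;
        returns None while the carousel is paused or terminated."""
        if self.paused or self.terminated:
            return None
        frame = compose(self.materials[self.index])
        self.index = (self.index + 1) % len(self.materials)
        return frame

    def pause(self):      # pause event detected by the client
        self.paused = True

    def terminate(self):  # termination event detected by the client
        self.terminated = True
```

In a real client, `step` would be driven by a timer firing every `dwell` seconds, and `compose` would invoke the AI service to place the material on the detected target object.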
  • In this way, the carousel of materials is realized, in particular a carousel of image materials that interact with the user. This solves the problem that, in one interaction, the user can experience only one piece of material data and the operation is cumbersome, greatly improving the user's operating experience and reducing the user's interaction cost.
  • the method further includes: detecting a fourth instruction, determining the material object corresponding to the material data in the second image data being output when the fourth instruction is detected, and adding the material object to a first set based on the fourth instruction.
  • the detecting the fourth instruction includes: determining that the fourth instruction is detected when a ninth operation for a third specific function button is detected; or, when the image acquisition component is in an active state and the first image data is continuously obtained by the image acquisition component, identifying whether the second image data includes a specific identifier, the specific identifier including at least one of: a specific object, a specific hand shape, a specific expression; and determining that the fourth instruction is detected when it is recognized that the second image data includes the specific identifier.
  • the material objects corresponding to favored material data may be added to the first set through various implementation manners, and the first set may be regarded as a collection of materials that the user likes.
  • the material identifier can be added to the first set through the ninth operation on the deployed third specific function button.
  • during the display of the second image data, the user may operate the third specific function button so that the material identifier corresponding to the second image data is added to the first set.
  • when the image acquisition component is in an active state and the first image data is continuously obtained by the image acquisition component, whether the second image data includes a specific identifier may be recognized; the specific identifier includes at least one of the following: a specific object, a specific hand shape, a specific expression. Taking a "V-shaped hand shape" as an example: while the second image data is being output in sequence, if the user likes the material data in the current second image data, the user can raise a hand to show the V-shaped hand shape, so that the V-shaped hand shape appears in the image data collected by the image acquisition component in real time; when the terminal recognizes the V-shaped hand shape, the material object corresponding to the material data in the currently displayed second image data is added to the first set.
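The gesture-triggered collection step can be sketched as a single update function. The mark names (`"v_hand"` and the others) are illustrative assumptions standing in for whatever the recognition service reports.

```python
# A sketch of adding the currently displayed material to the "first set"
# when a specific identifier (e.g. a V-shaped hand shape) is recognized
# in the live frame. Mark names are illustrative assumptions.
def update_first_set(first_set, current_material_id, recognized_marks,
                     trigger_marks=frozenset({"v_hand", "specific_object",
                                              "specific_expression"})):
    """Add the material id to the set only if a trigger mark was recognized."""
    if trigger_marks.intersection(recognized_marks):
        first_set.add(current_material_id)
    return first_set
```

The client would call this once per recognition result while the carousel is running, so the set accumulates exactly the materials on screen at the moments a trigger was shown.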
  • the method further includes: performing comparison display of the material data corresponding to the material objects included in the first set; the comparison display manner includes: displaying, in sequence, second image data including the material data; or displaying second image data including the material data in at least two sub-interfaces into which the output display interface is divided.
  • the material data corresponding to the material objects in the first set may be displayed in the comparison display manners above, so that the user can compare the effects.
  • the display may be performed by outputting the second image data in sequence.
  • alternatively, the display may be performed separately in a split-screen manner; for example, the output display interface is divided into at least two sub-interfaces, and second image data including the material data is displayed through each sub-interface, so that the user can browse at least two pieces of second image data at one time for easy comparison.
  • FIG. 4 is a schematic diagram of a second application interaction of the material display method according to the embodiment of the present application; as shown in FIG. 4, the method includes:
  • Step 301 The user clicks the "like" button during the carousel.
  • Step 302 The client detects the "like” event and adds the current material identifier to the "like” queue. Among them, the client can also synchronize the material information in the "like" queue to the server.
  • Step 303: The user clicks the comparison operation for the "liked" material data.
  • Step 304 The client detects the comparison operation event and generates a capability call instruction.
  • Step 305 The client invokes an AI service for detecting the target object in the image data.
  • Step 306 The client combines the material data “liked” by the user with the image data to generate a new image and display it.
  • In this way, the material data can be compared and displayed over a small range during the carousel process, which greatly improves the user's operating experience.
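The split-screen variant of the comparison display amounts to dividing the output interface into equal sub-interfaces. The left-to-right layout below is an assumption; the description only requires at least two sub-interfaces.

```python
# A geometric sketch of the split-screen comparison display: the output
# display interface is divided into side-by-side sub-interfaces, one per
# "liked" material. The equal left-to-right layout is an assumption.
def split_interface(width, height, n):
    """Return n equal (x, y, w, h) sub-interfaces laid out left to right."""
    if n < 2:
        raise ValueError("comparison display needs at least two sub-interfaces")
    sub_w = width // n
    return [(i * sub_w, 0, sub_w, height) for i in range(n)]
```

Each rectangle would then be rendered with one piece of second image data from the "like" queue, letting the user see several material effects at once.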
  • when the image acquisition component is in an active state, the first image data is continuously obtained by the image acquisition component, and, in the process of sequentially outputting the second image data including the material data, the method further includes: identifying whether a target object in the second image data satisfies a preset condition; and, when it is recognized that the target object in the second image data does not satisfy the preset condition, pausing output of the second image data including the material data.
  • the method further includes: resuming output of the second image data including the material data when it is recognized that the target object in the second image data satisfies the preset condition.
  • the identifying whether the target object in the second image data satisfies the preset condition comprises: identifying whether the target object is included in the second image data, and determining that the target object in the second image data does not satisfy the preset condition when it is recognized that the target object is not included in the second image data; or identifying whether a display parameter of the target object in the second image data reaches a preset threshold, and determining that the target object in the second image data does not satisfy the preset condition when it is recognized that the display parameter does not reach the preset threshold.
  • the display parameters may include sharpness and/or resolution.
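The preset-condition check described above can be sketched as a single predicate over the recognition result. The threshold values, and the dictionary shape of `target`, are illustrative assumptions.

```python
# A sketch of the preset-condition check: the target object must be present
# and its display parameters (sharpness and/or resolution) must reach preset
# thresholds. Threshold values and the data shape are assumptions.
def target_satisfies_condition(target, min_sharpness=0.5, min_resolution=(64, 64)):
    if target is None:  # target object not recognized in the second image data
        return False
    if target.get("sharpness", 0.0) < min_sharpness:
        return False
    w, h = target.get("resolution", (0, 0))
    return w >= min_resolution[0] and h >= min_resolution[1]
```

A false result from this predicate is what triggers pausing the output of the second image data in the flow above.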
  • this embodiment is applicable to a scenario in which the image acquisition component is in an active state and the first image data is continuously obtained by the image acquisition component, for example, a scenario in which a user records a video through the front camera of the terminal. In such a scenario, the target object in the second image data may be the user's face. If the terminal is not held steady, the face in the second image data shakes sharply or becomes blurred, or the actual position of the terminal changes so drastically that the target object is no longer included in the second image data; in these cases the material data does not combine well with the target object in the second image data, which also indicates that the target object in the second image data does not satisfy the preset condition, and output of the second image data including the material data is then paused.
  • FIG. 5 is a schematic diagram of a third application interaction of the material display method according to the embodiment of the present application; as shown in FIG. 5, the method includes:
  • Step 401: The user moves the terminal away, or the terminal moves over too large a range due to a wrong operation by the user.
  • Step 402 The client determines, by using the call of the AI service, that the recognition of the target object in the image data is abnormal.
  • the abnormality includes the target object not being included in the image data, or, even if the target object is included, the target object shaking sharply or being blurred. In these cases, a recognition abnormality can be determined.
  • Step 403 The client pauses the carousel.
  • Step 404 The user restores the terminal location such that the user is included in the image data.
  • Step 405 The client determines that the target object in the image data is normal through the call of the AI service.
  • being normal means that the target object is included in the image data and that the display of the target object satisfies the preset condition, for example, the resolution of the target object reaches the preset threshold.
  • Step 406 The client resumes the carousel.
  • In this way, the carousel is paused when a recognition abnormality is detected. For example, the terminal held by the user is not held steady; if a recognition abnormality is detected while the material data corresponding to the fourth material identifier is being rotated to, the carousel is paused. After the user re-enters the camera's view, the carousel resumes from the fifth material identifier, thereby avoiding the situation in which the carousel jumps directly to other materials while the user has not seen, or has not fully seen, the effect of a material identifier.
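Steps 401 to 406 can be sketched as a small controller that pauses on a recognition abnormality and, on recovery, resumes from the next material identifier. Names and the resume policy's exact encoding are illustrative assumptions consistent with the description.

```python
# A sketch of the pause/resume behavior in steps 401-406: pause the carousel
# on a recognition abnormality, and resume from the next material identifier
# when recognition returns to normal. Names are illustrative assumptions.
class CarouselController:
    def __init__(self, material_ids):
        self.material_ids = list(material_ids)
        self.index = 0
        self.playing = True

    def on_recognition(self, target_normal):
        if not target_normal:
            self.playing = False                                  # step 403: pause
        elif not self.playing:
            self.index = (self.index + 1) % len(self.material_ids)  # skip to next material
            self.playing = True                                   # step 406: resume
        return self.playing

    def current(self):
        return self.material_ids[self.index]
```

Feeding this controller the per-frame result of the AI service's target detection reproduces the pause-on-abnormality, resume-from-next-material behavior described above.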
  • the operation may be an operation input through a mouse, a keyboard, or another input device, or a touch operation detected by a touch detection component.
  • the operation may be a click operation, a double-click operation, or a pressure operation that satisfies a certain pressure value.
  • FIG. 6 is a schematic structural diagram of a terminal according to an embodiment of the present disclosure; as shown in FIG. 6, the terminal includes: an image obtaining unit 51, a display unit 52, and a processing unit 53;
  • the image obtaining unit 51 is configured to obtain first image data
  • the display unit 52 is configured to output the first image data obtained by the image acquiring unit 51 on an output display interface; the operation area deployed on the output display interface includes a plurality of material objects;
  • the processing unit 53 is configured to detect a first instruction, determine at least some material objects of the plurality of material objects based on the first instruction, sequentially add the material data corresponding to the at least some material objects to the first image data according to a preset rule to generate second image data, and sequentially output, through the display unit 52, the second image data including the material data on the output display interface.
  • the terminal further includes a detecting unit 54 configured to: detect a first operation for a first specific function button; or detect a specific gesture operation; or detect a second operation for the tag identification to which the plurality of material objects belong; or detect a third operation for a first material object of the plurality of material objects, where the first material object is any one of the plurality of material objects; or detect a fourth operation for a second material object of the plurality of material objects and a fifth operation for a third material object, where the second material object and the third material object are respectively any two of the plurality of material objects;
  • the processing unit 53 is configured to: when the detecting unit 54 detects the first operation for the first specific function button, determine that the first instruction is detected, and determine the plurality of material objects based on the first instruction; or, when the detecting unit 54 detects the specific gesture operation, determine that the first instruction is detected, and determine the plurality of material objects based on the first instruction; or, when the detecting unit 54 detects the second operation for the tag identification to which the plurality of material objects belong, determine that the first instruction is detected, and determine the plurality of material objects based on the first instruction; or, when the detecting unit 54 detects the third operation for the first material object of the plurality of material objects, determine that the first instruction is detected, and determine, based on the first instruction, the first material object and the other material objects sorted after the first material object; or, when the detecting unit 54 detects the fourth operation for the second material object and the fifth operation for the third material object of the plurality of material objects, determine that the first instruction is detected, and determine, based on the first instruction, the material objects between the second material object and the third material object.
  • the processing unit 53 is configured to identify a target object in the first image data, and sequentially add the material data corresponding to the at least some material objects to the target object to generate second image data.
  • the processing unit 53 is configured such that, in the process of sequentially outputting, through the display unit 52, the second image data including the target object and the material data on the output display interface, the output duration of each piece of second image data meets a preset duration.
  • the processing unit 53 is further configured to: in the process of sequentially outputting the second image data including the material data, detect a second instruction, and pause or terminate output of the second image data including the material data according to the second instruction; and is further configured to detect a third instruction and resume output of the second image data including the material data according to the third instruction; or determine a fourth material object, which is the same as or different from the material object corresponding to the second image data whose output was paused, detect the third instruction, and, according to the third instruction, resume output of second image data including the material data corresponding to the fourth material object, starting from the fourth material object.
  • the terminal further includes a detecting unit 54 configured to: detect a seventh operation for the second specific function button; or detect a second specific gesture operation; or detect an eighth operation for the tag identification to which the plurality of material objects belong;
  • the processing unit 53 is configured to determine that the second instruction is detected when the detecting unit 54 detects the seventh operation for the second specific function button; or when the detecting unit 54 detects the second specific gesture operation, Determining that the second instruction is detected; or, when the detecting unit 54 detects the eighth operation for the tag identification to which the plurality of material objects belong, it is determined that the second instruction is detected.
  • the processing unit 53 is configured to detect a fourth instruction, determine the material object corresponding to the material data in the second image data being output when the fourth instruction is detected, and add the material object to the first set based on the fourth instruction.
  • the processing unit 53 is further configured to perform comparison display on the material data corresponding to the material object included in the first set by using the display unit 52; wherein the comparison display manner includes : displaying second image data including material data in sequence; or displaying second image data including material data in at least two sub-interfaces divided by the output display interface.
  • the terminal further includes a detecting unit 54 configured to detect a ninth operation for the third specific function button;
  • the processing unit 53 is configured to determine that the fourth instruction is detected when the detecting unit 54 detects the ninth operation for the third specific function button;
  • the image acquisition unit 51 is implemented by an image acquisition component; when the image acquisition component is in an active state and the first image data is continuously obtained by the image acquisition component, the processing unit 53 is configured to identify whether the second image data includes the specific identifier, the specific identifier including at least one of: a specific object, a specific hand shape, a specific expression; and to determine that the fourth instruction is detected when it is recognized that the second image data includes the specific identifier.
  • the first image data is continuously obtained by the image acquisition component when the image acquisition component is in an active state; the processing unit 53 is further configured to: in the process of sequentially outputting, through the display unit 52, the second image data including the material data, identify whether the target object in the second image data satisfies a preset condition, and pause output, through the display unit 52, of the second image data including the material data when the target object in the second image data does not satisfy the preset condition; and is further configured to resume output of the second image data including the material data when it is recognized that the target object in the second image data satisfies the preset condition.
  • the processing unit 53 is configured to: identify whether the target object is included in the second image data, and determine that the target object in the second image data does not satisfy the preset condition when it is recognized that the target object is not included in the second image data; or identify whether the display parameter of the target object in the second image data reaches a preset threshold, and determine that the target object in the second image data does not satisfy the preset condition when it is recognized that the display parameter does not reach the preset threshold.
  • the processing unit 53 and the detecting unit 54 in the terminal may, in practical applications, be implemented by a Central Processing Unit (CPU), a Digital Signal Processor (DSP), a Microcontroller Unit (MCU), or a Field-Programmable Gate Array (FPGA) in the terminal.
  • the display unit 52 in the terminal can be implemented by a display screen or a display in practical applications, where the display screen or display may have a touch detection component.
  • the image acquisition unit 51 in the terminal may be implemented by an image acquisition component (such as a camera) in an actual application, or may be implemented by a CPU, a DSP, an MCU, or an FPGA.
  • when the terminal provided by the foregoing embodiment performs the material display processing, the division into the foregoing program modules is only used as an example for illustration. In practical applications, the processing may be allocated to different program modules as needed; that is, the internal structure of the terminal may be divided into different program modules to complete all or part of the processing described above.
  • the terminal provided by the foregoing embodiment and the material display method embodiment belong to the same concept; for the specific implementation process, refer to the method embodiment, and details are not described herein again.
  • FIG. 8 is a schematic structural diagram of hardware components of a terminal according to an embodiment of the present application.
  • the terminal includes a memory 804, a processor 802, and a computer program stored in the memory 804 and executable on the processor 802.
  • the terminal may be a mobile phone, a computer, a digital broadcast terminal, an information transceiver device, a game console, a tablet device, a personal digital assistant, or the like.
  • the terminal may also include at least one of the following components: a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component. 816.
  • Processor 802 typically controls the overall operation of the terminal, such as operations associated with display, telephone calls, data communications, camera photography, and information recording.
  • Processor 802 can include one or more processors to execute a computer program to perform all or part of the steps of the foregoing method.
  • processor 802 can include one or more modules to facilitate interaction with other components.
  • processor 802 can include a multimedia module to facilitate interaction between processor 802 and multimedia component 808.
  • the memory 804 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof.
  • the non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferromagnetic Random Access Memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory can be a disk memory or a tape memory.
  • the volatile memory can be a Random Access Memory (RAM), which acts as an external cache. By way of illustrative rather than restrictive description, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDRSDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM).
  • the memory 804 is used to store various types of data to support the operation of the terminal. Examples of such data include: any computer program for operating on the terminal, such as an operating system and applications; contact data; phone book data; messages; and pictures.
  • the operating system includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks.
  • the application can include various applications, such as a Media Player, a Browser, etc., for implementing various application services.
  • a program implementing the method of the embodiment of the present application may be included in an application.
  • Power component 806 provides power to various components of the terminal.
  • Power component 806 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the terminal.
  • the multimedia component 808 includes a screen provided as an output interface between the terminal and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP, Touch Panel). If the screen includes a touch panel, the screen can be implemented by a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor not only senses the boundaries of the touch or slide operation, but also the duration and pressure associated with the touch or slide operation.
  • the multimedia component 808 can include a front camera and/or a rear camera. When the terminal is in an operation mode such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front or rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • Audio component 810 is used to output and/or input audio signals.
  • the audio component 810 includes a microphone (MIC) that is used to receive an external audio signal when the terminal is in an operational mode, such as a call mode, a recording mode, or a voice recognition mode.
  • the received audio signal may be further stored in memory 804 or transmitted via communication component 816.
  • the audio component 810 can also include a speaker for outputting an audio signal.
  • the I/O interface 812 provides an interface for information interaction between the processor and the peripheral interface module.
  • the peripheral interface module may be a keyboard, a mouse, a trackball, a click wheel, buttons, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessment of various aspects to the terminal.
  • the sensor component 814 can detect the open/closed state of the terminal and the relative positioning of components (for example, the display and keypad of the terminal); the sensor component 814 can also detect a change in position of the terminal or a component of the terminal, the presence or absence of user contact with the terminal, the orientation or acceleration/deceleration of the terminal, and temperature changes of the terminal.
  • Sensor assembly 814 can include a proximity sensor for detecting the presence of nearby objects without any physical contact.
  • Sensor assembly 814 may also include a light sensor, such as a CMOS (Complementary Metal-Oxide Semiconductor) image sensor or a Charge Coupled Device (CCD) image sensor for use in imaging applications.
  • the sensor assembly 814 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor, and the like.
  • Communication component 816 is used for wired or wireless communication between the terminal and other devices.
  • the terminal can access a wireless network based on a communication standard such as WiFi, 2G or 3G, or a combination thereof.
  • communication component 816 receives broadcast signals or broadcast associated information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a Near Field Communication (NFC) module to facilitate short range communication.
  • the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, or other technologies.
  • Processor 802 may be an integrated circuit chip with signal processing capabilities. In the implementation process, each step of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 802 or an instruction in a form of software.
  • the processor 802 described above can be a general purpose processor, a DSP, or other programmable logic device, discrete gate or transistor logic device, discrete hardware component, or the like.
  • the processor 802 can implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application.
  • a general purpose processor can be a microprocessor or any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and software modules in a decoding processor.
  • the software module can reside in a storage medium located in the memory 804; the processor 802 reads the information in the memory 804 and, in combination with its hardware, performs the steps of the foregoing method.
  • the terminal may be implemented by one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), FPGAs, general purpose processors, controllers, MCUs, microprocessors, or other electronic components, for performing the foregoing methods.
  • The embodiments of the present application also provide a computer storage medium, for example, a memory 804 including a computer program, which is executable by the processor 802 of the terminal to perform the steps described in the foregoing methods.
  • the computer storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, Flash Memory, magnetic surface memory, optical disc, or CD-ROM; or may be one of various devices including one or any combination of the above memories, such as a mobile phone, a computer, a tablet device, or a personal digital assistant.
  • The embodiments of the present application provide a computer storage medium on which computer instructions are stored; when the instructions are executed by a processor, the material display method described in the embodiments of the present application is implemented.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division; in actual implementation, there may be other division manners, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
  • the units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated into one unit; the above integration
  • the unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • the foregoing program may be stored in a computer readable storage medium, and the program is executed when executed.
  • the foregoing steps include the steps of the foregoing method embodiments; and the foregoing storage medium includes: a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disk, and the like, which can store program codes.
  • the above-described integrated unit of the present application may be stored in a computer readable storage medium if it is implemented in the form of a software function module and sold or used as a stand-alone product.
  • the technical solution of the embodiments of the present application may be embodied in the form of a software product in essence or in the form of a software product stored in a storage medium, including a plurality of instructions.
  • a computer device (which may be a personal computer, server, or network device, etc.) is caused to perform all or part of the methods described in various embodiments of the present application.
  • the foregoing storage medium includes various media that can store program codes, such as a mobile storage device, a ROM, a RAM, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application disclose a material display method, a terminal, and a computer storage medium. The method includes: obtaining first image data and outputting the first image data on an output display interface, where an operation area deployed on the output display interface includes multiple material objects; detecting a first instruction, determining at least some material objects among the multiple material objects based on the first instruction, sequentially adding the material data corresponding to the at least some material objects to the first image data according to a preset rule to generate second image data, and sequentially outputting the second image data including the material data on the output display interface.

Description

一种素材展示方法、终端和计算机存储介质
相关申请的交叉引用
本申请基于申请号为201810381655.6、申请日为2018年04月25日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此以引入方式并入本申请。
技术领域
本申请涉及信息处理技术,具体涉及一种素材展示方法、终端和计算机存储介质。
背景技术
现有的短视频或美图工具对于素材的展示方案均为静态展示,即素材通过列表的形式显示,用户想要体验哪个素材只能通过手动选定素材进行展示。如果有这样一种素材展示方案,能够呈现素材的轮播,便于用户选择,也可以提升用户体验,减少交互成本。
发明内容
为解决现有存在的技术问题,本申请实施例提供一种素材展示方法、终端和计算机存储介质。
为达到上述目的,本申请实施例的技术方案是这样实现的:
本申请实施例提供了一种素材展示方法,所述方法包括:
获得第一图像数据,在输出显示界面输出所述第一图像数据;部署于所述输出显示界面的操作区域包括多个素材对象;
检测到第一指令,基于所述第一指令确定所述多个素材对象中的至少部分素材对象,以及将所述至少部分素材对象对应的素材数据按照预设规则依次添加至所述第一图像数据中生成第二图像数据,在所述输出显示界面依次输出包括素材数据的第二图像数据。
在一可选实施例中,所述检测到第一指令,基于所述第一指令确定所述多个素材对象中的至少部分素材对象,包括:检测到针对第一特定功能按键的第一操作时,确定检测到第一指令,基于所述第一指令确定所述多个素材对象;或者,检测到特定手势操作时,确定检测到第一指令,基于所述第一指令确定所述多个素材对象;或者,检测到针对所述多个素材对象所属的标签标识的第二操作时,确定检测到第一指令,基于所述第一指令确定所述多个素材对象;或者,检测到针对所述多个素材对象中的第一素材对象的第三操作时,确定检测到第一指令;其中,所述第一素材对象为所述多个素材对象中的任一素材对象;基于所述第一指令确定所述第一素材对象以及排序在所述第一素材对象之后的其他素材对象;或者,分别检测到针对所述多个素材对象中的第二素材对象的第四操作和第三素材对象的第五操作时,确定检测到第一指令;其中,所述第二素材对象和所述第三素材对象分别为所述多个素材对象中的任两个素材对象;基于所述第一指 令确定所述第二素材对象和所述第三素材对象之间的素材对象;或者,检测到第一指令,基于所述第一指令激活所述多个素材对象中每个素材对象的选定位;接收到针对所述多个素材对象中的部分素材对象的选定位的第六操作,基于所述第六操作确定所述多个素材对象中的部分素材对象。
在一可选实施例中,在依次输出包括素材数据的第二图像数据过程中,所述方法还包括:检测到第二指令,根据所述第二指令暂停或终止输出包括素材数据的第二图像数据;其中,所述检测到第二指令,包括:检测到针对第二特定功能按键的第七操作时,确定检测到第二指令;或者,检测到第二特定手势操作时,确定检测到第二指令;或者,检测到针对所述多个素材对象所属的标签标识的第八操作时,确定检测到第二指令。
在一可选实施例中,所述根据所述第二指令暂停输出包括素材数据的第二图像数据后,所述方法还包括:检测到第三指令,根据所述第三指令恢复输出包括素材数据的第二图像数据;或者,确定第四素材对象;所述第四素材对象与暂停输出的第二图像数据对应的素材对象相同或不同;检测到第三指令,根据所述第三指令从所述第四素材对象开始恢复输出包括所述第四素材对象对应的素材数据的第二图像数据。
在一可选实施例中,在依次输出包括素材数据的第二图像数据过程中,所述方法还包括:检测到第四指令,确定检测到所述第四指令时输出的第二图像数据中的素材数据对应的素材对象;基于所述第四指令将所述素材对象添加至第一集合中;其中,所述检测到第四指令,包括:检测到针对第三特定功能按键的第九操作时,确定检测到第四指令;或者,在图像采集组件处于激活状态、且所述第一图像数据通过所述图像采集组件持续获得时,识别所述第二图像数据中是否包括特定标识;所述特定标识包括以下至少之一:特定物体、特定手形、特定表情;当识别出所述第二图像数据中包括特定标识时,确定检测到第四指令。
在一可选实施例中,所述方法还包括:对所述第一集合中包括的素材对象对应的素材数据进行比对显示;其中,比对显示方式包括:依次显示包括素材数据的第二图像数据;或者,在所述输出显示界面划分的至少两个子界面中分别显示包括素材数据的第二图像数据。
在一可选实施例中,在图像采集组件处于激活状态时,所述第一图像数据通过所述图像采集组件持续获得,在依次输出包括素材数据的第二图像数据的过程中,所述方法包括:识别所述第二图像数据中的目标对象是否满足预设条件;当识别出所述第二图像数据中的目标对象不满足预设条件时,暂停输出包括素材数据的第二图像数据;其中,所述识别所述第二图像数据中的目标对象是否满足预设条件,包括:识别所述第二图像数据中是否包括目标对象;当识别出所述第二图像数据中不包括目标对象时,确定所述第二图像数据中的目标对象不满足预设条件;或者,识别所述第二图像数据中的目标对象的显示参数是否达到预设阈值;当识别出所述第二图像数据中的目标对象的显示参数未达到预设阈值时,确定所述第二图像数据中的目标对象不满足预设条件。
在一可选实施例中,所述方法还包括:当识别出所述第二图像数据中的目标对象满足预设条件时,恢复输出包括素材数据的第二图像数据。
本申请实施例还提供了一种终端,所述终端包括:图像获取单元、显示单元和处理单元;其中,
所述图像获取单元,配置为获得第一图像数据;
所述显示单元,配置为在输出显示界面输出所述图像获取单元获得的所述第一图像数据;部署于所述输出显示界面的操作区域包括多个素材对象;
所述处理单元,配置为检测到第一指令,基于所述第一指令确定所述多个素材对象中的至少部分素材对象,以及将所述至少部分素材对象对应的素材数据按照预设规则依 次添加至所述第一图像数据中生成第二图像数据;通过所述显示单元在所述输出显示界面依次输出包括素材数据的第二图像数据。
本申请实施例还提供了一种计算机存储介质,其上存储有计算机指令,该指令被处理器执行时实现本申请实施例所述素材展示方法的步骤。
本申请实施例还提供了一种终端,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时实现本申请实施例所述素材展示方法的步骤。
本申请实施例提供的素材展示方法、终端和计算机存储介质,所述方法包括:获得第一图像数据,在输出显示界面输出所述第一图像数据;部署于所述输出显示界面的操作区域包括多个素材对象;检测到第一指令,基于所述第一指令确定所述多个素材对象中的至少部分素材对象,以及将所述至少部分素材对象对应的素材数据按照预设规则依次添加至所述第一图像数据中生成第二图像数据,在所述输出显示界面依次输出包括素材数据的第二图像数据。采用本申请实施例的技术方案,实现了素材的轮播,特别是与用户存在交互的图像素材的轮播,解决了用户一次交互仅能体验一个素材数据导致的用户体验不佳、操作繁琐的问题,大大提升了用户的操作体验,减少了用户的交互成本。
附图说明
图1为本申请实施例一的素材展示方法的流程示意图;
图2a至图2f分别为本申请实施例的素材展示方法的应用场景示意图;
图3为本申请实施例的素材展示方法的第一种应用交互示意图;
图4为本申请实施例的素材展示方法的第二种应用交互示意图;
图5为本申请实施例的素材展示方法的第三种应用交互示意图;
图6为本申请实施例的终端的一种组成结构示意图;
图7为本申请实施例的终端的另一种组成结构示意图;
图8为本申请实施例的终端的硬件组成结构示意图。
具体实施方式
下面结合附图及具体实施例对本申请作进一步详细的说明。
本申请实施例提供了一种素材展示方法。图1为本申请实施例的素材展示方法的流程示意图;如图1所示,所述方法包括:
步骤101:获得第一图像数据,在输出显示界面输出所述第一图像数据;部署于所述输出显示界面的操作区域包括多个素材对象。
步骤102:检测到第一指令,基于所述第一指令确定所述多个素材对象中的至少部分素材对象,以及将所述至少部分素材对象对应的素材数据按照预设规则依次添加至所述第一图像数据中生成第二图像数据,在所述输出显示界面依次输出包括素材数据的第二图像数据。
本申请实施例的素材展示方法应用于终端中,终端可以是移动电话、计算机、数字广播终端、信息收发设备、游戏控制台、平板设备、个人数字助理等。终端中具有应用程序(如第三方应用程序),该应用程序激活后,可通过该应用程序的输出显示界面输出图像数据;该应用程序的输出显示界面具有工具栏,激活该工具栏可在输出显示界面的操作区域显示多个素材对象,素材对象可通过操作与所述图像数据结合生成新的图像数据。
其中,素材对象为在操作区域中显示的素材标识,每个素材对象对应有一个素材数据。当用户操作任一素材对象时,将该素材对象对应的素材数据添加至图像数据中生成新的图像数据。
图2a至图2f分别为本申请实施例的素材展示方法的应用场景示意图;如图2a至图2f所示,分别为不同的应用程序提供的素材对象的应用示意图;可以看出,素材对象可以是能够添加在图像数据中的任何素材数据对应的素材标识;其中,素材数据可以是增加图像数据中的显示内容的数据,例如卡通风格的相框对应的素材数据等等;另一方面,素材数据还可以是改变图像数据中的显示参数的数据,例如改变图像数据的风格,比如由彩色图像转换为黑白图像等等。
本申请实施例中,第一图像数据可以是终端中存储的图像数据;也可以是终端与网络设备交互获得的图像数据,例如网络相册;还可以是终端激活图像采集组件(例如摄像头),通过图像采集组件实时获得的图像数据。其中,图像数据可以是图片、动态图片,也可以是视频等。
本申请实施例中,可通过多种方式触发多个素材对象中的至少部分素材对象依次显示。
本实施例步骤102中,作为第一种实施方式,所述检测到第一指令,基于所述第一指令确定所述多个素材对象中的至少部分素材对象,包括:检测到针对第一特定功能按键的第一操作时,确定检测到第一指令,基于所述第一指令确定所述多个素材对象。
具体的,在输出显示界面中部署有第一特定功能按键,所述第一特定功能按键可以是虚拟按键;或者,在终端部署有第一特定功能按键,该第一特定功能按键可以是物理按键。则检测到针对第一特定功能按键的第一操作时,确定检测到第一指令,根据第一指令确定所述多个素材对象,以及执行将所述多个素材对象分别对应的素材数据按照预设规则依次添加至所述第一图像数据中生成第二图像数据,在所述输出显示界面依次输出包括素材数据的第二图像数据。
作为第二种实施方式,所述检测到第一指令,基于所述第一指令确定所述多个素材对象中的至少部分素材对象,包括:检测到特定手势操作时,确定检测到第一指令,基于所述第一指令确定所述多个素材对象。
具体的,终端具有触控检测组件,通过触控检测组件可检测到手势操作;其中,该手势操作可以是与触控检测组件的检测面相接触的手势操作,也可以是与触控检测组件的检测面具有一定距离的悬空手势操作。实际应用中,终端可预先配置与第一指令相匹配的特定手势,检测到该特定手势时,确定检测到第一指令,根据第一指令确定所述多个素材对象,以及执行将所述多个素材对象分别对应的素材数据按照预设规则依次添加至所述第一图像数据中生成第二图像数据,在所述输出显示界面依次输出包括素材数据的第二图像数据。
作为第三种实施方式,所述检测到第一指令,基于所述第一指令确定所述多个素材对象中的至少部分素材对象,包括:检测到针对所述多个素材对象所属的标签标识的第二操作时,确定检测到第一指令,基于所述第一指令确定所述多个素材对象。
具体的,本实施方式适用于属于同一种类的素材对象设置有对应的标签标识的场景。例如,多个均在头发上贴合的素材对象可对应设置为发饰类,多个具有画框特征的素材对象可对应设置为画框类等等。实际应用中,在输出显示界面的操作区域部署多个素材对象时,可对应部署与多个素材对象对应的标签标识;通过用户针对不同的标签标识的操作,可切换至归属于该标签标识下的多个素材对象的显示。基于此,在检测到针对标签标识的第二操作时,确定检测到第一指令,基于所述第一指令确定归属于所述标签标识的多个素材对象,以及执行将所述多个素材对象分别对应的素材数据按照预设规 则依次添加至所述第一图像数据中生成第二图像数据,在所述输出显示界面依次输出包括素材数据的第二图像数据。
在上述三种实施方式中,确定的多个素材对象可以是操作区域中显示的所有素材对象;在另一种实施方式中,若素材对象通过施行增量加载,可以理解为,归属于同一类的素材对象的数量较多,操作区域中无法完全显示所有的素材对象,则需要通过“翻页”操作或者通过“下拉”操作显示“下一屏”的素材对象,在这种情况下,确定的多个素材对象可以是当前操作区域中显示的所有对象,也可以是归属于同一类的所有素材对象。
作为第四种实施方式,所述检测到第一指令,基于所述第一指令确定所述多个素材对象中的至少部分素材对象,包括:检测到针对所述多个素材对象中的第一素材对象的第三操作时,确定检测到第一指令;其中,所述第一素材对象为所述多个素材对象中的任一素材对象;基于所述第一指令确定所述第一素材对象以及排序在所述第一素材对象之后的其他素材对象。
作为第五种实施方式,所述检测到第一指令,基于所述第一指令确定所述多个素材对象中的至少部分素材对象,包括:分别检测到针对所述多个素材对象中的第二素材对象的第四操作和第三素材对象的第五操作时,确定检测到第一指令;其中,所述第二素材对象和所述第三素材对象分别为所述多个素材对象中的任两个素材对象;基于所述第一指令确定所述第二素材对象和所述第三素材对象之间的素材对象。
其中,所述第四操作和第五操作可以是输入操作,在通过触控检测组件检测操作的场景下,第四操作和第五操作可以是单击操作。
在通过触控检测组件检测操作的场景下,另一种实施方式中,所述检测到第一指令,基于所述第一指令确定所述多个素材对象中的至少部分素材对象,还可以包括:通过触控检测组件检测到滑动手势操作时,确定检测到第一指令;基于所述滑动手势操作的起始位置和终止位置确定第二素材对象和第三素材对象,确定所述第二素材对象和所述第三素材对象之间的素材对象。
上述第四种实施方式中,通过检测到多个素材对象中用于确定起始位置(即第一素材对象)的第三操作,从而确定以第一素材对象开始、以及排序在第一素材对象之后的其他素材对象。而上述第五种实施方式中,通过检测到多个素材对象中用于确定起始位置(即第二素材对象)的第四操作和终止位置(即第三素材对象)的第五操作,从而确定第二素材对象和第三素材对象之间的所有素材对象,以及执行将确定的素材对象分别对应的素材数据按照预设规则依次添加至所述第一图像数据中生成第二图像数据,在所述输出显示界面依次输出包括素材数据的第二图像数据。
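The fourth and fifth implementations above amount to selecting a suffix or an inclusive range of the ordered material objects. A minimal sketch (the list-index logic is an assumption for illustration; the patent does not prescribe a data structure):

```python
def materials_from(materials, first):
    """Fourth implementation: the operated material object plus all
    material objects ordered after it."""
    start = materials.index(first)
    return materials[start:]

def materials_between(materials, second, third):
    """Fifth implementation: all material objects between two operated
    material objects, inclusive, regardless of the order they were tapped."""
    i, j = sorted((materials.index(second), materials.index(third)))
    return materials[i:j + 1]
```

For example, with materials `["a", "b", "c", "d"]`, tapping `"b"` selects `["b", "c", "d"]`, and tapping `"c"` then `"a"` selects `["a", "b", "c"]`.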
作为第六种实施方式,所述检测到第一指令,基于所述第一指令确定所述多个素材对象中的至少部分素材对象,包括:检测到第一指令,基于所述第一指令激活所述多个素材对象中每个素材对象的选定位;接收到针对所述多个素材对象中的部分素材对象的选定位的第六操作,基于所述第六操作确定所述多个素材对象中的部分素材对象。
具体的,终端可通过检测到的第一指令激活每个素材对象的选定位;该选定位支持通过检测到的操作选定或取消选定。实际应用中,选定位被激活后,可在素材对象的至少部分区域显示选定框,若检测到操作,则选定框可通过一特定显示方式(例如选定框内显示一特定标识)表示对应的素材对象已被选定;相应的,若对于已被选定的素材对象再次检测到操作,则选定框通过取消该特定显示方式(例如选定框内取消显示该特定标识)表示对应的素材对象已被取消选定。通过上述方式,用户可针对多个素材对象有针对性的选定其中的部分素材对象,例如选定喜爱的素材对象,而无需对较多的素材对象对应的素材数据均依次显示。
本申请实施例步骤102中,所述将所述至少部分素材对象对应的素材数据按照预设规则依次添加至所述第一图像数据中生成第二图像数据,包括:识别所述第一图像数据中的目标对象;将所述至少部分素材对象对应的素材数据依次添加至所述第一图像数据中生成第二图像数据。
具体的,终端通过人工智能(AI,Artificial Intelligence)模块对第一图像数据中的目标对象进行识别,具体是通过AI模块对第一图像数据进行深度检测,识别出所述第一图像数据中的目标对象;其中,所述目标对象可以是人、动物等;其中,目标对象可以是人脸(或者动物脸),也可以是人的全身(或动物的全身)。其中,AI模块可位于服务器侧,则终端在对第一图像数据进行识别时,与服务器交互,通过服务器的AI模块对第一图像数据进行识别,识别出目标对象。另一种实施方式,终端可通过服务器获得AI模块,或者终端预先配置AI模块,则终端无需联网即可通过AI模块对第一图像数据中的目标对象进行识别。
本实施例中,将所述至少部分素材对象对应的素材数据依次添加至所述目标对象中生成第二图像数据。可以理解为,所述至少部分素材对象的数量有多少,则对应生成同等数量的第二图像数据。实际应用中,通过AI模块对第一图像数据中的目标对象进行深度检测;例如当目标对象为人脸时,通过AI模块的深度检测识别出人脸后,将素材数据与人脸进行贴合,例如将卡通类型的兔耳朵、猫耳朵等素材数据贴合至人脸的头部生成第二图像数据,加强了用户所见即所得的感知,大大提升了用户的操作体验。
这里,将素材对象对应的素材数据添加至第一图像数据中可参照现有的图像处理方式,这里不再赘述。
本实施例步骤102中,所述在所述输出显示界面依次输出包括素材数据的第二图像数据,包括:在所述输出显示界面依次输出包括所述目标对象和素材数据的第二图像数据过程中,每个第二图像数据的输出时长满足预设时长,该预设时长例如为2秒,这样,也便于用户在素材数据的依次显示过程中清楚的看到每个素材数据的效果。
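The timed sequential output described above can be sketched as a simple carousel loop. This is a hedged sketch: the 2-second dwell time comes from the example in the text, while `apply_material` is a hypothetical placeholder for the compositing step:

```python
import time

def apply_material(first_image, material):
    """Hypothetical compositing step: attach one material's data to the
    first image data to produce second image data."""
    return f"{first_image}+{material}"

def run_carousel(first_image, materials, dwell_seconds=2.0, display=print):
    """Output the second image data for each selected material object in
    turn, keeping each on screen for a preset duration (e.g. 2 seconds)."""
    shown = []
    for material in materials:
        second_image = apply_material(first_image, material)
        display(second_image)
        shown.append(second_image)
        time.sleep(dwell_seconds)  # preset output duration per second image
    return shown
```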
在本申请一可选实施例中,在依次输出包括素材数据的第二图像数据过程中,所述方法还包括:检测到第二指令,根据所述第二指令暂停或终止输出包括素材数据的第二图像数据;其中,所述检测到第二指令,包括:检测到针对第二特定功能按键的第七操作时,确定检测到第二指令;或者,检测到第二特定手势操作时,确定检测到第二指令;或者,检测到针对所述多个素材对象所属的标签标识的第八操作时,确定检测到第二指令。
具体的,所述第二特定功能按键与所述第一特定功能按键可相同也可不同。在第一特定功能按键与第二特定功能按键相同时,通过针对该第一特定功能按键的第一操作执行包括素材数据的第二图像数据的依次显示;在显示过程中,再次通过针对该第一特定功能按键的第七操作,暂停或终止输出第二图像数据;其中,用户可通过暂停输出第二图像数据的方式长时间的观看第二图像数据中添加的素材数据的效果,后续可恢复第二图像数据的输出。而终止输出第二图像数据则表明不可以恢复第二图像数据的输出。
第二特定手势操作与第一特定手势操作相同或不同。第二特定手势操作的相关描述可参照前述第一特定手势操作的相关描述,第八操作的相关描述可参照前述第二操作的相关描述,这里不再赘述。
In an optional embodiment of the present application, after the output of the second image data including material data is paused according to the second instruction, the method further includes: detecting a third instruction and, according to the third instruction, resuming output of the second image data including material data; or determining a fourth material object, which is the same as or different from the material object corresponding to the second image data whose output was paused, detecting a third instruction, and, according to the third instruction, resuming output starting from the fourth material object with the second image data including the material data corresponding to the fourth material object.
As one implementation, when output of the second image data has been paused and a third instruction is detected, output resumes from the paused second image data according to the third instruction. As another implementation, likewise with output of the second image data paused, a fourth material object distinct from the material object at which the carousel is currently paused is determined through an operation; when a third instruction is then detected, output starts from the fourth material object with the second image data corresponding to the fourth material object.
Here, detecting the third instruction includes: determining that the third instruction is detected when an operation on a fourth specific function key is detected, where the fourth specific function key is the same as or different from the second specific function key; or determining that the third instruction is detected when a third specific gesture operation is detected, where the third specific gesture operation is the same as or different from the second specific gesture operation; or determining that the third instruction is detected when an operation on the tag identifier to which the multiple material objects belong is detected.
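The pause/resume behaviour of this embodiment can be sketched as a small controller. This class design is an assumption for illustration; the patent specifies only the observable behaviour:

```python
class CarouselController:
    """Tracks the carousel position and supports pausing, then resuming
    either from the paused material object or from a chosen fourth
    material object."""

    def __init__(self, materials):
        self.materials = list(materials)
        self.index = 0
        self.paused = False

    def pause(self):
        # Second instruction: pause output of the second image data.
        self.paused = True

    def resume(self, fourth_material=None):
        # Third instruction: resume from the paused material object, or,
        # if a fourth material object was determined, start from it instead.
        if fourth_material is not None:
            self.index = self.materials.index(fourth_material)
        self.paused = False
        return self.materials[self.index]
```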
图3为本申请实施例的素材展示方法的第一种应用交互示意图 (Fig. 3 is a first application interaction diagram of the material display method of an embodiment of the present application); as shown in Fig. 3, the flow includes:
Step 201: The user taps the toolbar button, specifically the toolbar button displayed in the client's image display interface, which is used to display material data.
Step 202: The client requests material data from the server. This step may request material data from the server when the client is used for the first time, or when the material data needs to be updated. In other words, only under these two conditions does the client need to go online to obtain material data from the server; in all other cases, the client can perform subsequent processing with the material data it has already obtained and stored.
Step 203: The client displays the material identifiers. Specifically, the client may display the material identifiers in the operation area of the output display interface. The operation area is an area in which operation events can be listened for.
Step 204: The user double-taps a material tag of a certain category. In practice, a large amount of material data may be classified in advance, with each category corresponding to a material tag.
Step 205: The client listens for the double-tap event and generates a carousel and capability invocation instruction.
Step 206: The client invokes the AI service to perform depth detection on the target object in the image data.
Step 207: The material data is combined with the image data to generate a new image, which is displayed; the display duration of each image reaches a preset duration, for example 2 seconds.
In an embodiment, the flow may further include: the user performs a pause operation, the client listens for the pause event, and the carousel is paused.
In an embodiment, the flow may further include: the user performs a termination operation, the client listens for the termination event, and the carousel is terminated.
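The caching behaviour of step 202 — contacting the server only on first use or when an update is needed — might be sketched as follows; the function names are illustrative, not from the patent:

```python
def get_materials(cache, fetch_from_server, needs_update=False):
    """Return material data, contacting the server only on first use
    (empty cache) or when the stored materials need updating; otherwise
    reuse the copy already obtained and stored by the client."""
    if cache is None or needs_update:
        cache = fetch_from_server()
    return cache
```

Under this sketch, repeated calls with a warm cache never hit the server until `needs_update` is set.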
采用本申请实施例的技术方案,实现了素材的轮播,特别是与用户存在交互的图像素材的轮播,解决了用户一次交互仅能体验一个素材数据导致的用户体验不佳、操作繁琐的问题,大大提升了用户的操作体验,减少了用户的交互成本。
在本申请一可选实施例中,在依次输出包括素材数据的第二图像数据过程中,所述方法还包括:检测到第四指令,确定检测到所述第四指令时输出的第二图像数据中的素材数据对应的素材对象;基于所述第四指令将所述素材对象添加至第一集合中。
其中,所述检测到第四指令,包括:检测到针对第三特定功能按键的第九操作时,确定检测到第四指令;或者,在图像采集组件处于激活状态、且所述第一图像数据通过所述图像采集组件持续获得时,识别所述第二图像数据中是否包括特定标识;所述特定标识包括以下至少之一:特定物体、特定手形、特定表情;当识别出所述第二图像数据中包括特定标识时,确定检测到第四指令。
具体的,本实施例中,对于用户喜欢的素材数据,可通过多种实现方式将喜欢的素材数据对应的素材对象添加至第一集合中,所述第一集合可作为“用户喜欢的素材集合”。
作为第一种实施方式,可通过部署的第三特定功能按键的第九操作将素材标识添加 至第一集合中。在第二图像数据的依次输出过程中,若喜欢哪个第二图像数据中的素材数据,则在该第二图像数据的显示过程中,用户可通过操作第三特定功能按键,将该第二图像数据对应的素材标识添加至第一集合中。
作为另一种实施方式,在图像采集组件处于激活状态、且所述第一图像数据通过所述图像采集组件持续获得的场景下,则可通过识别第二图像数据中是否包括特定标识,特定标识包括以下至少之一:特定物体、特定手形、特定表情;以特定标识为“V字手形”为例,则在第二图像数据的依次输出过程中,若喜欢哪个第二图像数据中的素材数据,用户可通过抬手示出“V字手形”,在图像采集组件实时采集的图像数据中包括该“V字手形”,终端识别出该“V字手形”,则将当前显示的第二图像数据中的素材数据对应的素材对象添加至第一集合中。
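The marker-triggered "like" behaviour above can be sketched as follows. The marker names are placeholders; the patent only requires that a specific object, specific hand shape (such as the "V" sign), or specific expression be recognized in the second image data:

```python
def update_favorites(favorites, current_material, detected_markers,
                     triggers=frozenset({"v_sign", "specific_object",
                                         "specific_expression"})):
    """Treat recognition of any trigger marker in the currently displayed
    second image data as the fourth instruction, and add the corresponding
    material object to the first set (the 'liked' collection)."""
    if any(marker in triggers for marker in detected_markers):
        if current_material not in favorites:
            favorites.append(current_material)
    return favorites
```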
在本申请一可选实施例中,所述方法还包括:对所述第一集合中包括的素材对象对应的素材数据进行比对显示;其中,比对显示方式包括:依次显示包括素材数据的第二图像数据;或者,在所述输出显示界面划分的至少两个子界面中分别显示包括素材数据的第二图像数据。
具体的,对于第一集合中的素材对象,可通过比对显示的方式对第一集合中的素材对象对应的素材数据进行显示,以便用户进行效果比对。作为一种实施方式,对于第一集合中的素材对象,可采用依次输出显示的方式进行显示。作为另一种实施方式,可通过分屏的方式分别显示,例如将输出显示界面划分为至少两个子界面,通过每个子界面分别显示包括素材数据的第二图像数据,从而可以使用户一次性的浏览至少两个第二图像数据,便于进行效果比对。
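The split-screen comparison can be sketched by grouping the liked material objects into pages, one second image per sub-interface. The page size of two sub-interfaces matches the "at least two" in the text; the grouping logic itself is an assumption:

```python
def split_screen_pages(liked_materials, sub_interfaces=2):
    """Group liked material objects so that each page fills the output
    display interface's sub-interfaces, letting the user compare at least
    two second images at once."""
    return [liked_materials[i:i + sub_interfaces]
            for i in range(0, len(liked_materials), sub_interfaces)]
```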
图4为本申请实施例的素材展示方法的第二种应用交互示意图 (Fig. 4 is a second application interaction diagram of the material display method of an embodiment of the present application); as shown in Fig. 4, the flow includes:
Step 301: During the carousel, the user taps the "Like" button.
Step 302: The client listens for the "Like" event and adds the current material identifier to the "Like" queue. The client may also synchronize the material information in the "Like" queue to the server.
Step 303: The user taps to trigger a comparison operation on the "liked" material data.
Step 304: The client listens for the comparison operation event and generates a capability invocation instruction.
Step 305: The client invokes the AI service to detect the target object in the image data.
Step 306: The client combines the material data "liked" by the user with the image data to generate new images and displays them.
With the technical solution of this embodiment, the liked material data can be compared and displayed within a small set during the carousel, greatly improving the user's operating experience.
在本申请一可选实施例中,在图像采集组件处于激活状态时,所述第一图像数据通过所述图像采集组件持续获得,在依次输出包括素材数据的第二图像数据的过程中,所述方法包括:识别所述第二图像数据中的目标对象是否满足预设条件;当识别出所述第二图像数据中的目标对象不满足预设条件时,暂停输出包括素材数据的第二图像数据。所述方法还包括:当识别出所述第二图像数据中的目标对象满足预设条件时,恢复输出包括素材数据的第二图像数据。
其中,所述识别所述第二图像数据中的目标对象是否满足预设条件,包括:识别所述第二图像数据中是否包括目标对象;当识别出所述第二图像数据中不包括目标对象时,确定所述第二图像数据中的目标对象不满足预设条件;或者,识别所述第二图像数据中的目标对象的显示参数是否达到预设阈值;当识别出所述第二图像数据中的目标对象的显示参数未达到预设阈值时,确定所述第二图像数据中的目标对象不满足预设条件。其中,所述显示参数可包括清晰度和/或分辨率。
具体的,本实施方式适用于图像采集组件处于激活状态时,所述第一图像数据通过所述图像采集组件持续获得的场景,例如,用户通过终端的前置摄像头录制视频的场景, 这种场景下,第二图像数据中的目标对象可以是用户的脸部。若终端没有稳定导致第二图像数据中的脸部出现剧烈抖动或者模糊,或者终端的实际位置发生了剧烈变化导致第二图像数据中没有包括目标对象,则素材数据不能很好的与第二图像数据中的目标对象结合,也表明第二图像数据中的目标对象不满足预设条件,则暂停输出包括素材数据的第二图像数据。
图5为本申请实施例的素材展示方法的第三种应用交互示意图 (Fig. 5 is a third application interaction diagram of the material display method of an embodiment of the present application); as shown in Fig. 5, the flow includes:
Step 401: The user moves the terminal away, or the terminal's position shifts over too large a range due to the user's misoperation.
Step 402: Through invocation of the AI service, the client determines that recognition of the target object in the image data is abnormal. Such anomalies include the image data not containing the target object, or the target object being present but shaking violently or blurred; all of these cases can be determined as recognition anomalies.
Step 403: The client pauses the carousel.
Step 404: The user restores the terminal's position so that the image data again includes the user.
Step 405: Through invocation of the AI service, the client determines that recognition of the target object in the image data is normal; "normal" here means the image data includes the target object and the target object's display also satisfies the preset condition, for example its sharpness reaches the preset threshold.
Step 406: The client resumes the carousel.
With the technical solution of this embodiment, the carousel is paused when recognition is abnormal. For example, if the hand-held terminal is dropped and a recognition anomaly is detected while the carousel is at the material data corresponding to the fourth material identifier, the carousel is paused; after the user re-enters the frame, the carousel resumes from the fifth material identifier. This prevents the display from jumping to other materials before the user has seen a material identifier's effect clearly.
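The preset condition that gates pausing and resuming can be sketched as a single check. The sharpness score and its threshold are illustrative stand-ins for the patent's "display parameter" and "preset threshold":

```python
def target_meets_condition(frame_objects, sharpness, target="face",
                           threshold=0.5):
    """A target object fails the preset condition when it is absent from
    the second image data, or when its display parameter (here a sharpness
    score in [0, 1]) does not reach the preset threshold."""
    if target not in frame_objects:
        return False  # no target object recognized: pause the carousel
    return sharpness >= threshold
```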
在本申请各实施例中,操作可以是通过鼠标或键盘等输入设备的操作,也可以是通过触控检测组件检测到的触控操作。在操作为触控操作时,可以是单击操作、双击操作、满足一定压力值的压力操作。
本申请实施例还提供了一种终端。图6为本申请实施例的终端的一种组成结构示意图;如图6所示,所述终端包括:图像获取单元51、显示单元52和处理单元53;其中,
所述图像获取单元51,配置为获得第一图像数据;
所述显示单元52,配置为在输出显示界面输出所述图像获取单元51获得的所述第一图像数据;部署于所述输出显示界面的操作区域包括多个素材对象;
所述处理单元53,配置为检测到第一指令,基于所述第一指令确定所述多个素材对象中的至少部分素材对象,以及将所述至少部分素材对象对应的素材数据按照预设规则依次添加至所述第一图像数据中生成第二图像数据;通过所述显示单元52在所述输出显示界面依次输出包括素材数据的第二图像数据。
在一可选实施例中,如图7所示,所述终端还包括检测单元54,配置为检测到针对第一特定功能按键的第一操作;或者,检测到特定手势操作;或者,检测到针对所述多个素材对象所属的标签标识的第二操作;或者,检测到针对所述多个素材对象中的第一素材对象的第三操作;其中,所述第一素材对象为所述多个素材对象中的任一素材对象;或者,分别检测到针对所述多个素材对象中的第二素材对象的第四操作和第三素材对象的第五操作;其中,所述第二素材对象和所述第三素材对象分别为所述多个素材对象中的任两个素材对象;
所述处理单元53,配置为所述检测单元54检测到针对第一特定功能按键的第一操作时,确定检测到第一指令,基于所述第一指令确定所述多个素材对象;或者,所述检测单元54检测到特定手势操作时,确定检测到第一指令,基于所述第一指令确定所述多个素材对象;或者,所述检测单元54检测到针对所述多个素材对象所属的标签标识的第二操作时,确定检测到第一指令,基于所述第一指令确定所述多个素材对象;或者, 所述检测单元54检测到针对所述多个素材对象中的第一素材对象的第三操作时,确定检测到第一指令;基于所述第一指令确定所述第一素材对象以及排序在所述第一素材对象之后的其他素材对象;或者,所述检测单元54分别检测到针对所述多个素材对象中的第二素材对象的第四操作和第三素材对象的第五操作时,确定检测到第一指令;基于所述第一指令确定所述第二素材对象和所述第三素材对象之间的素材对象;或者,检测到第一指令,基于所述第一指令激活所述多个素材对象中每个素材对象的选定位;接收到针对所述多个素材对象中的部分素材对象的选定位的第六操作,基于所述第六操作确定所述多个素材对象中的部分素材对象。
在一可选实施例中,所述处理单元53,配置为识别所述第一图像数据中的目标对象;将所述至少部分素材对象对应的素材数据依次添加至所述目标对象中生成第二图像数据。
具体的,所述处理单元53,配置为通过所述显示单元52在所述输出显示界面依次输出包括所述目标对象和素材数据的第二图像数据过程中,每个第二图像数据的输出时长满足预设时长。
在一可选实施例中,所述处理单元53,还配置为在依次输出包括素材数据的第二图像数据过程中,检测到第二指令,根据所述第二指令暂停或终止输出包括素材数据的第二图像数据;还配置为检测到第三指令,根据所述第三指令恢复输出包括素材数据的第二图像数据;或者,确定第四素材对象;所述第四素材对象与暂停输出的第二图像数据对应的素材对象相同或不同;检测到第三指令,根据所述第三指令从所述第四素材对象开始恢复输出包括所述第四素材对象对应的素材数据的第二图像数据。
作为一种实施方式,如图7所示,所述终端还包括检测单元54,配置为检测到针对第二特定功能按键的第七操作;或者,检测到第二特定手势操作;或者,检测到针对所述多个素材对象所属的标签标识的第八操作;
所述处理单元53,配置为所述检测单元54检测到针对第二特定功能按键的第七操作时,确定检测到第二指令;或者,所述检测单元54检测到第二特定手势操作时,确定检测到第二指令;或者,所述检测单元54检测到针对所述多个素材对象所属的标签标识的第八操作时,确定检测到第二指令。
在一可选实施例中,所述处理单元53,配置为检测到第四指令,确定检测到所述第四指令时输出的第二图像数据中的素材数据对应的素材对象;基于所述第四指令将所述素材对象添加至第一集合中。
在一可选实施例中,所述处理单元53,还配置为通过所述显示单元52对所述第一集合中包括的素材对象对应的素材数据进行比对显示;其中,比对显示方式包括:依次显示包括素材数据的第二图像数据;或者,在所述输出显示界面划分的至少两个子界面中分别显示包括素材数据的第二图像数据。
作为一种实施方式,如图7所示,所述终端还包括检测单元54,配置为检测到针对第三特定功能按键的第九操作;
所述处理单元53,配置为所述检测单元54检测到针对第三特定功能按键的第九操作时,确定检测到第四指令;
或者,所述图像获取单元51通过图像采集组件实现,在图像采集组件处于激活状态、且所述第一图像数据通过所述图像采集组件持续获得时,所述处理单元53,配置为识别所述第二图像数据中是否包括特定标识;所述特定标识包括以下至少之一:特定物体、特定手形、特定表情;当识别出所述第二图像数据中包括特定标识时,确定检测到第四指令。
在一可选实施例中,在图像采集组件处于激活状态时,所述第一图像数据通过所述 图像采集组件持续获得;所述处理单元53,还配置为在通过所述显示单元52依次输出包括素材数据的第二图像数据的过程中,识别所述第二图像数据中的目标对象是否满足预设条件;当识别出所述第二图像数据中的目标对象不满足预设条件时,暂停通过所述显示单元52输出包括素材数据的第二图像数据;还配置为当识别出所述第二图像数据中的目标对象满足预设条件时,恢复输出包括素材数据的第二图像数据。
具体的,所述处理单元53,配置为识别所述第二图像数据中是否包括目标对象;当识别出所述第二图像数据中不包括目标对象时,确定所述第二图像数据中的目标对象不满足预设条件;或者,识别所述第二图像数据中的目标对象的显示参数是否达到预设阈值;当识别出所述第二图像数据中的目标对象的显示参数未达到预设阈值时,确定所述第二图像数据中的目标对象不满足预设条件。
本申请实施例中,所述终端中的处理单元53和检测单元54,在实际应用中均可由所述终端中的中央处理器(CPU,Central Processing Unit)、数字信号处理器(DSP,Digital Signal Processor)、微控制单元(MCU,Microcontroller Unit)或可编程门阵列(FPGA,Field-Programmable Gate Array)实现;所述终端中的显示单元52,在实际应用中可通过显示屏或显示器实现,其中,显示屏或显示器可具有触控检测组件;所述终端中的图像获取单元51,在实际应用中可通过图像采集组件(例如摄像头)实现,也可以通过CPU、DSP、MCU或FPGA实现。
需要说明的是:上述实施例提供的终端在进行素材展示处理时,仅以上述各程序模块的划分进行举例说明,实际应用中,可以根据需要而将上述处理分配由不同的程序模块完成,即将终端的内部结构划分成不同的程序模块,以完成以上描述的全部或者部分处理。另外,上述实施例提供的终端与素材展示方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
本申请实施例还提供了一种终端,图8为本申请实施例的终端的硬件组成结构示意图,如图8所示,终端包括存储器804、处理器802及存储在存储器804上并可在处理器802上运行的计算机程序,所述处理器802执行所述程序时实现本申请实施例所述的素材展示方法。
本申请实施例中,终端可以是移动电话、计算机、数字广播终端、信息收发设备、游戏控制台、平板设备、个人数字助理等。
可以理解,除存储器和处理器之外,终端还可以包括以下至少一个组件:电源组件806、多媒体组件808、音频组件810、输入/输出(I/O)接口812、传感器组件814、以及通信组件816。
处理器802通常控制终端的整体操作,诸如与显示、电话呼叫、数据通信、相机拍摄和信息记录等相关联的操作。处理器802可以包括一个或多个来执行计算机程序,以完成上述方法的全部或部分步骤。此外,处理器802可以包括一个或多个模块,便于与其他组件之间的交互。例如,处理器802可以包括多媒体模块,以方便处理器802与多媒体组件808之间的交互。
存储器804可以由任何类型的易失性或非易失性存储设备、或者它们的组合来实现。其中,非易失性存储器可以是只读存储器(ROM,Read Only Memory)、可编程只读存储器(PROM,Programmable Read-Only Memory)、可擦除可编程只读存储器(EPROM,Erasable Programmable Read-Only Memory)、电可擦除可编程只读存储器(EEPROM,Electrically Erasable Programmable Read-Only Memory)、磁性随机存取存储器(FRAM,Ferromagnetic Random Access Memory)、快闪存储器(Flash Memory)、磁表面存储器、光盘、或只读光盘(CD-ROM,Compact Disc Read-Only Memory);磁表面存储器可以是磁盘存储器或磁带存储器。易失性存储器可以是随机存取存储器(RAM,Random  Access Memory),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(SRAM,Static Random Access Memory)、同步静态随机存取存储器(SSRAM,Synchronous Static Random Access Memory)、动态随机存取存储器(DRAM,Dynamic Random Access Memory)、同步动态随机存取存储器(SDRAM,Synchronous Dynamic Random Access Memory)、双倍数据速率同步动态随机存取存储器(DDRSDRAM,Double Data Rate Synchronous Dynamic Random Access Memory)、增强型同步动态随机存取存储器(ESDRAM,Enhanced Synchronous Dynamic Random Access Memory)、同步连接动态随机存取存储器(SLDRAM,SyncLink Dynamic Random Access Memory)、直接内存总线随机存取存储器(DRRAM,Direct Rambus Random Access Memory)。本申请实施例描述的存储器804旨在包括但不限于这些和任意其它适合类型的存储器。
存储器804用于存储各种类型的数据以支持终端的操作。这些数据的示例包括:用于在终端上操作的任何计算机程序,如操作***和应用程序;联系人数据;电话簿数据;消息;图片;视频等。其中,操作***包含各种***程序,例如框架层、核心库层、驱动层等,用于实现各种基础业务以及处理基于硬件的任务。应用程序可以包含各种应用程序,例如媒体播放器(Media Player)、浏览器(Browser)等,用于实现各种应用业务。实现本申请实施例方法的程序可以包含在应用程序中。
电源组件806为终端的各种组件提供电力。电源组件806可以包括电源管理***,一个或多个电源,及其他与为终端生成、管理和分配电力相关联的组件。
多媒体组件808包括在终端与用户之间提供的一个作为输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(LCD,Liquid Crystal Display)和触控面板(TP,Touch Panel)。如果屏幕包括触控面板,屏幕可以由触摸屏来实现,以接收来自用户的输入信号。触控面板包括一个或多个触摸传感器,以感测触摸、滑动和触摸面板上的手势。触摸传感器不仅能感测触摸或滑动操作的边界,而且还检测与触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件808可以包括一个前置摄像头和/或后置摄像头。当终端处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头或后置摄像头可以是一个固定的光学透镜***、或具有焦距和光学变焦能力。
音频组件810用于输出和/或输入音频信号。例如,音频组件810包括一个麦克风(MIC,Microphone),当终端处于操作模式,如呼叫模式、记录模式或语音识别模式时,麦克风用于接收外部音频信号。所接收的音频信号可以被进一步存储在存储器804或经由通信组件816发送。在一些实施例中,音频组件810还可以包括一个扬声器,用于输出音频信号。
I/O接口812为处理器与***接口模块之间的信息交互提供接口,上述***接口模块可以是键盘、鼠标、轨迹球、点击轮、按键、按钮等。这些按钮可包括但不限于:主页按钮、音量按钮、启动按钮和锁定按钮。
传感器组件814包括一个或多个传感器,用于为终端提供各个方面的状态评估。例如,传感器组件814可以检测到终端所处的打开/关闭状态,组件的相对定位,例如所述组件为终端的显示器和小键盘;传感器组件814还可以检测终端或终端一个组件的位置改变,用户与终端接触的存在或不存在,终端的方位或加速/减速、以及终端的温度变化。传感器组件814可以包括接近传感器,用于在没有任何的物理接触时检测附近物体的存在。传感器组件814还可以包括光传感器,如金属氧化物半导体元件(CMOS,Complementary Metal-Oxide Semiconductor)图像传感器或电荷耦合元件(CCD,Charge Coupled Device)图像传感器,用于在成像应用中使用。在一些实施例中,该传感器组 件814还可以包括加速度传感器、陀螺仪传感器、磁传感器、压力传感器或温度传感器等。
通信组件816用于终端与其他设备之间有线或无线方式的通信。终端可以接入基于通信标准的无线网络,如WiFi、2G或3G、或它们的组合。在一个示例性实施例中,通信组件816经由广播信道接收来自外部广播管理***的广播信号或广播相关信息。在一个示例性实施例中,所述通信组件816还包括近场通信(NFC,Near Field Communication)模块,以促进短程通信。例如,在NFC模块可基于射频识别(RFID,Radio Frequency IDentification)技术、红外数据组织(IrDA,Infrared Data Association)技术、超宽带(UWB,Ultra WideBand)技术、蓝牙(BT,BlueTooth)技术或其他技术来实现。
上述本申请实施例揭示的方法可以应用于处理器802中,或者由处理器802实现。处理器802可能是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器802中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器802可以是通用处理器、DSP,或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。处理器802可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本申请实施例所公开的方法的步骤,可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于存储介质中,该存储介质位于存储器804,处理器802读取存储器804中的信息,结合其硬件完成前述方法的步骤。
在示例性实施例中,终端可以被一个或多个应用专用集成电路(ASIC,Application Specific Integrated Circuit)、DSP、可编程逻辑器件(PLD,Programmable Logic Device)、复杂可编程逻辑器件(CPLD,Complex Programmable Logic Device)、FPGA、通用处理器、控制器、MCU、微处理器(Microprocessor)、或其他电子元件实现,用于执行前述方法。
在示例性实施例中,本申请实施例还提供了一种计算机存储介质,例如包括计算机程序的存储器804,上述计算机程序可由终端的处理器802执行,以完成前述方法所述步骤。计算机存储介质可以是FRAM、ROM、PROM、EPROM、EEPROM、Flash Memory、磁表面存储器、光盘、或CD-ROM等存储器;也可以是包括上述存储器之一或任意组合的各种设备,如移动电话、计算机、平板设备、个人数字助理等。
本申请实施例提供的一种计算机存储介质,其上存储有计算机指令,该指令被处理器执行时实现本申请实施例所述的素材展示方法。
在本申请所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个单元或组件可以结合,或可以集成到另一个***,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或单元的间接耦合或通信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为单元显示的部件可以是、或也可以不是物理单元,即可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本申请各实施例中的各功能单元可以全部集成在一个处理单元中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程 序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本申请上述集成的单元如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机、服务器、或者网络设备等)执行本申请各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
本申请所提供的几个方法实施例中所揭露的方法,在不冲突的情况下可以任意组合,得到新的方法实施例。
本申请所提供的几个产品实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的产品实施例。
本申请所提供的几个方法或设备实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的方法实施例或设备实施例。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (19)

  1. 一种素材展示方法,所述方法包括:
    获得第一图像数据,在输出显示界面输出所述第一图像数据;部署于所述输出显示界面的操作区域包括多个素材对象;
    检测到第一指令,基于所述第一指令确定所述多个素材对象中的至少部分素材对象,以及将所述至少部分素材对象对应的素材数据按照预设规则依次添加至所述第一图像数据中生成第二图像数据,在所述输出显示界面依次输出包括素材数据的第二图像数据。
  2. 根据权利要求1所述的方法,其中,所述检测到第一指令,基于所述第一指令确定所述多个素材对象中的至少部分素材对象,包括:
    检测到针对第一特定功能按键的第一操作时,确定检测到第一指令,基于所述第一指令确定所述多个素材对象;或者,
    检测到特定手势操作时,确定检测到第一指令,基于所述第一指令确定所述多个素材对象;或者,
    检测到针对所述多个素材对象所属的标签标识的第二操作时,确定检测到第一指令,基于所述第一指令确定所述多个素材对象;或者,
    检测到针对所述多个素材对象中的第一素材对象的第三操作时,确定检测到第一指令;其中,所述第一素材对象为所述多个素材对象中的任一素材对象;基于所述第一指令确定所述第一素材对象以及排序在所述第一素材对象之后的其他素材对象;或者,
    分别检测到针对所述多个素材对象中的第二素材对象的第四操作和第三素材对象的第五操作时,确定检测到第一指令;其中,所述第二素材对象和所述第三素材对象分别为所述多个素材对象中的任两个素材对象;基于所述第一指令确定所述第二素材对象和所述第三素材对象之间的素材对象;或者,
    检测到第一指令,基于所述第一指令激活所述多个素材对象中每个素材对象的选定位;接收到针对所述多个素材对象中的部分素材对象的选定位的第六操作,基于所述第六操作确定所述多个素材对象中的部分素材对象。
  3. 根据权利要求1所述的方法,其中,在依次输出包括素材数据的第二图像数据过程中,所述方法还包括:
    检测到第二指令,根据所述第二指令暂停或终止输出包括素材数据的第二图像数据;
    其中,所述检测到第二指令,包括:
    检测到针对第二特定功能按键的第七操作时,确定检测到第二指令;或者,
    检测到第二特定手势操作时,确定检测到第二指令;或者,
    检测到针对所述多个素材对象所属的标签标识的第八操作时,确定检测到第二指令。
  4. 根据权利要求3所述的方法,其中,所述根据所述第二指令暂停输出包括素材数据的第二图像数据后,所述方法还包括:
    检测到第三指令,根据所述第三指令恢复输出包括素材数据的第二图像数据;或者,
    确定第四素材对象;所述第四素材对象与暂停输出的第二图像数据对应的素材对象相同或不同;检测到第三指令,根据所述第三指令从所述第四素材对象开始恢复输出包括所述第四素材对象对应的素材数据的第二图像数据。
  5. 根据权利要求1所述的方法,其中,在依次输出包括素材数据的第二图像数据 过程中,所述方法还包括:
    检测到第四指令,确定检测到所述第四指令时输出的第二图像数据中的素材数据对应的素材对象;
    基于所述第四指令将所述素材对象添加至第一集合中;
    其中,所述检测到第四指令,包括:
    检测到针对第三特定功能按键的第九操作时,确定检测到第四指令;或者,
    在图像采集组件处于激活状态、且所述第一图像数据通过所述图像采集组件持续获得时,识别所述第二图像数据中是否包括特定标识;所述特定标识包括以下至少之一:特定物体、特定手形、特定表情;
    当识别出所述第二图像数据中包括特定标识时,确定检测到第四指令。
  6. 根据权利要求5所述的方法,其中,所述方法还包括:
    对所述第一集合中包括的素材对象对应的素材数据进行比对显示;其中,比对显示方式包括:
    依次显示包括素材数据的第二图像数据;或者,
    在所述输出显示界面划分的至少两个子界面中分别显示包括素材数据的第二图像数据。
  7. 根据权利要求1所述的方法,其中,在图像采集组件处于激活状态时,所述第一图像数据通过所述图像采集组件持续获得,在依次输出包括素材数据的第二图像数据的过程中,所述方法包括:
    识别所述第二图像数据中的目标对象是否满足预设条件;
    当识别出所述第二图像数据中的目标对象不满足预设条件时,暂停输出包括素材数据的第二图像数据;
    其中,所述识别所述第二图像数据中的目标对象是否满足预设条件,包括:
    识别所述第二图像数据中是否包括目标对象;当识别出所述第二图像数据中不包括目标对象时,确定所述第二图像数据中的目标对象不满足预设条件;或者,
    识别所述第二图像数据中的目标对象的显示参数是否达到预设阈值;当识别出所述第二图像数据中的目标对象的显示参数未达到预设阈值时,确定所述第二图像数据中的目标对象不满足预设条件。
  8. 根据权利要求7所述的方法,其中,所述方法还包括:
    当识别出所述第二图像数据中的目标对象满足预设条件时,恢复输出包括素材数据的第二图像数据。
  9. 一种终端,所述终端包括:图像获取单元、显示单元和处理单元;其中,
    所述图像获取单元,配置为获得第一图像数据;
    所述显示单元,配置为在输出显示界面输出所述图像获取单元获得的所述第一图像数据;部署于所述输出显示界面的操作区域包括多个素材对象;
    所述处理单元,配置为检测到第一指令,基于所述第一指令确定所述多个素材对象中的至少部分素材对象,以及将所述至少部分素材对象对应的素材数据按照预设规则依次添加至所述第一图像数据中生成第二图像数据;通过所述显示单元在所述输出显示界面依次输出包括素材数据的第二图像数据。
  10. 根据权利要求9所述的终端,其中,所述终端还包括检测单元,配置为检测到针对第一特定功能按键的第一操作;或者,检测到特定手势操作;或者,检测到针对所述多个素材对象所属的标签标识的第二操作;或者,检测到针对所述多个素材对象中的第一素材对象的第三操作;其中,所述第一素材对象为所述多个素材对象中的任一素材对象;或者,分别检测到针对所述多个素材对象中的第二素材对象的第四操作和第三素 材对象的第五操作;其中,所述第二素材对象和所述第三素材对象分别为所述多个素材对象中的任两个素材对象;
    所述处理单元,配置为所述检测单元检测到针对第一特定功能按键的第一操作时,确定检测到第一指令,基于所述第一指令确定所述多个素材对象;或者,所述检测单元检测到特定手势操作时,确定检测到第一指令,基于所述第一指令确定所述多个素材对象;或者,所述检测单元检测到针对所述多个素材对象所属的标签标识的第二操作时,确定检测到第一指令,基于所述第一指令确定所述多个素材对象;或者,所述检测单元检测到针对所述多个素材对象中的第一素材对象的第三操作时,确定检测到第一指令;基于所述第一指令确定所述第一素材对象以及排序在所述第一素材对象之后的其他素材对象;或者,所述检测单元分别检测到针对所述多个素材对象中的第二素材对象的第四操作和第三素材对象的第五操作时,确定检测到第一指令;基于所述第一指令确定所述第二素材对象和所述第三素材对象之间的素材对象;或者,检测到第一指令,基于所述第一指令激活所述多个素材对象中每个素材对象的选定位;接收到针对所述多个素材对象中的部分素材对象的选定位的第六操作,基于所述第六操作确定所述多个素材对象中的部分素材对象。
  11. 根据权利要求9所述的终端,其中,所述处理单元,还配置为在依次输出包括素材数据的第二图像数据过程中,检测到第二指令,根据所述第二指令暂停或终止输出包括素材数据的第二图像数据;还配置为检测到第三指令,根据所述第三指令恢复输出包括素材数据的第二图像数据;或者,确定第四素材对象;所述第四素材对象与暂停输出的第二图像数据对应的素材对象相同或不同;检测到第三指令,根据所述第三指令从所述第四素材对象开始恢复输出包括所述第四素材对象对应的素材数据的第二图像数据。
  12. 根据权利要求11所述的终端,其中,所述终端还包括检测单元,配置为检测到针对第二特定功能按键的第七操作;或者,检测到第二特定手势操作;或者,检测到针对所述多个素材对象所属的标签标识的第八操作;
    所述处理单元,配置为所述检测单元检测到针对第二特定功能按键的第七操作时,确定检测到第二指令;或者,所述检测单元检测到第二特定手势操作时,确定检测到第二指令;或者,所述检测单元检测到针对所述多个素材对象所属的标签标识的第八操作时,确定检测到第二指令。
  13. 根据权利要求9所述的终端,其中,所述处理单元,配置为检测到第四指令,确定检测到所述第四指令时输出的第二图像数据中的素材数据对应的素材对象;基于所述第四指令将所述素材对象添加至第一集合中。
  14. 根据权利要求13所述的终端,其中,所述处理单元,还配置为通过所述显示单元对所述第一集合中包括的素材对象对应的素材数据进行比对显示;其中,比对显示方式包括:依次显示包括素材数据的第二图像数据;或者,在所述输出显示界面划分的至少两个子界面中分别显示包括素材数据的第二图像数据。
  15. 根据权利要求13所述的终端,其中,所述终端还包括检测单元,配置为检测到针对第三特定功能按键的第九操作;
    所述处理单元,配置为所述检测单元检测到针对第三特定功能按键的第九操作时,确定检测到第四指令;
    或者,所述图像获取单元通过图像采集组件实现,在图像采集组件处于激活状态、且所述第一图像数据通过所述图像采集组件持续获得时,所述处理单元,配置为识别所述第二图像数据中是否包括特定标识;所述特定标识包括以下至少之一:特定物体、特定手形、特定表情;当识别出所述第二图像数据中包括特定标识时,确定检测到第四指 令。
  16. 根据权利要求9所述的终端,其中,在图像采集组件处于激活状态时,所述第一图像数据通过所述图像采集组件持续获得;所述处理单元,还配置为在通过所述显示单元依次输出包括素材数据的第二图像数据的过程中,识别所述第二图像数据中的目标对象是否满足预设条件;当识别出所述第二图像数据中的目标对象不满足预设条件时,暂停通过所述显示单元输出包括素材数据的第二图像数据;还配置为当识别出所述第二图像数据中的目标对象满足预设条件时,恢复输出包括素材数据的第二图像数据。
  17. 根据权利要求16所述的终端,其中,所述处理单元,配置为识别所述第二图像数据中是否包括目标对象;当识别出所述第二图像数据中不包括目标对象时,确定所述第二图像数据中的目标对象不满足预设条件;或者,识别所述第二图像数据中的目标对象的显示参数是否达到预设阈值;当识别出所述第二图像数据中的目标对象的显示参数未达到预设阈值时,确定所述第二图像数据中的目标对象不满足预设条件。
  18. 一种计算机存储介质,其上存储有计算机指令,该指令被处理器执行时实现权利要求1至8任一项所述素材展示方法的步骤。
  19. 一种终端,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时实现权利要求1至8任一项所述素材展示方法的步骤。
PCT/CN2019/084391 2018-04-25 2019-04-25 一种素材展示方法、终端和计算机存储介质 WO2019206243A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810381655.6A CN108628976A (zh) 2018-04-25 2018-04-25 一种素材展示方法、终端和计算机存储介质
CN201810381655.6 2018-04-25

Publications (1)

Publication Number Publication Date
WO2019206243A1 true WO2019206243A1 (zh) 2019-10-31

Family

ID=63694499

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/084391 WO2019206243A1 (zh) 2018-04-25 2019-04-25 一种素材展示方法、终端和计算机存储介质

Country Status (2)

Country Link
CN (1) CN108628976A (zh)
WO (1) WO2019206243A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108628976A (zh) * 2018-04-25 2018-10-09 咪咕动漫有限公司 一种素材展示方法、终端和计算机存储介质
CN109688453A (zh) * 2018-12-05 2019-04-26 深圳市子瑜杰恩科技有限公司 短视频的道具排列显示方法及相关产品
CN109710136A (zh) * 2018-12-05 2019-05-03 深圳市子瑜杰恩科技有限公司 短视频的道具显示方法及相关产品
CN109729394A (zh) * 2018-12-05 2019-05-07 深圳市子瑜杰恩科技有限公司 短视频的道具排序方法及相关产品
CN109886396A (zh) * 2019-03-18 2019-06-14 国家电网有限公司 一种输电线路舞动在线预测***及方法
CN110298283B (zh) * 2019-06-21 2022-04-12 北京百度网讯科技有限公司 图像素材的匹配方法、装置、设备以及存储介质
CN111399731B (zh) * 2020-03-12 2022-02-25 深圳市腾讯计算机***有限公司 图片的操作意图处理方法、推荐方法、装置、电子设备及存储介质
CN111770288B (zh) * 2020-06-23 2022-12-09 Oppo广东移动通信有限公司 视频编辑方法、装置、终端及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9196067B1 (en) * 2013-03-05 2015-11-24 Amazon Technologies, Inc. Application specific tracking of projection surfaces
CN106951090A (zh) * 2017-03-29 2017-07-14 北京小米移动软件有限公司 图片处理方法及装置
CN107798714A (zh) * 2017-09-11 2018-03-13 深圳创维数字技术有限公司 一种图像数据显示方法和相关装置以及计算机存储介质
CN108628976A (zh) * 2018-04-25 2018-10-09 咪咕动漫有限公司 一种素材展示方法、终端和计算机存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130163867A1 (en) * 2011-10-17 2013-06-27 Military Wraps Research & Development, Inc. Systems, processes, and computer program products for creating geo-location-based visual designs and arrangements originating from geo-location-based imagery
CN104899825B (zh) * 2014-03-06 2019-07-05 腾讯科技(深圳)有限公司 一种对图片人物造型的方法和装置
CN104951236A (zh) * 2015-07-16 2015-09-30 努比亚技术有限公司 一种终端设备壁纸的配置方法及相应终端设备
CN206711151U (zh) * 2016-09-22 2017-12-05 京东方科技集团股份有限公司 一种虚拟试衣眼镜及虚拟试衣***
CN106651761B (zh) * 2016-12-27 2018-10-19 维沃移动通信有限公司 一种为图片添加滤镜的方法及移动终端
CN106780401B (zh) * 2017-01-12 2018-12-04 维沃移动通信有限公司 一种图片处理的方法及移动终端
CN107645605A (zh) * 2017-09-29 2018-01-30 北京金山安全软件有限公司 屏幕主题页面获取方法、装置及终端设备

Also Published As

Publication number Publication date
CN108628976A (zh) 2018-10-09

Similar Documents

Publication Publication Date Title
WO2019206243A1 (zh) 一种素材展示方法、终端和计算机存储介质
JP7142783B2 (ja) 音声制御方法及び電子装置
US10649648B2 (en) Method and apparatus for screen capture processing
WO2019137429A1 (zh) 图片处理方法及移动终端
WO2018058728A1 (zh) 一种内容分享的方法及装置
US11354029B2 (en) Content collection method, apparatus and storage medium
CN105845124B (zh) 音频处理方法及装置
WO2018000585A1 (zh) 界面主题的推荐方法、装置、终端及服务器
RU2667027C2 (ru) Способ и устройство категоризации видео
WO2017084183A1 (zh) 信息显示方法与装置
RU2608545C1 (ru) Способ и устройство для резервного копирования видео
RU2663709C2 (ru) Способ и устройство для обработки информации
WO2017080084A1 (zh) 字体添加方法及装置
US11996123B2 (en) Method for synthesizing videos and electronic device therefor
WO2017088247A1 (zh) 输入处理方法、装置及设备
WO2017020479A1 (zh) 界面显示方法及装置
WO2023000639A1 (zh) 视频生成方法、装置、电子设备、存储介质和程序
WO2018095252A1 (zh) 视频录制方法及装置
WO2022142871A1 (zh) 视频录制方法及装置
EP3796317A1 (en) Video processing method, video playing method, devices and storage medium
EP3260998A1 (en) Method and device for setting profile picture
RU2666626C1 (ru) Способ и устройство для управления состоянием воспроизведения
WO2022198934A1 (zh) 卡点视频的生成方法及装置
WO2022088823A1 (zh) 图像处理方法及装置
US20210165670A1 (en) Method, apparatus for adding shortcut plug-in, and intelligent device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19793075

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08/02/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19793075

Country of ref document: EP

Kind code of ref document: A1