CN107734142B - Photographing method, mobile terminal and server - Google Patents


Info

Publication number
CN107734142B
CN107734142B
Authority
CN
China
Prior art keywords
shot
picture
target
mobile terminal
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710844653.1A
Other languages
Chinese (zh)
Other versions
CN107734142A (en)
Inventor
彭俊华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201710844653.1A
Publication of CN107734142A
Application granted
Publication of CN107734142B

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M1/00 - Substation equipment, e.g. for use by subscribers
    • H04M1/72 - Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 - User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M1/00 - Substation equipment, e.g. for use by subscribers
    • H04M1/72 - Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 - User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • General Business, Economics & Management (AREA)
  • Business, Economics & Management (AREA)
  • Multimedia (AREA)
  • Studio Devices (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention discloses a photographing method, a mobile terminal, a server and a computer-readable storage medium. The method comprises the following steps: uploading a scene photo taken in the current scene to be shot to a server; receiving at least one classification label sent by the server according to the scene photo; determining one of the at least one classification label as a target classification label, and sending first feedback information indicating the determined target classification label to the server; receiving a reference shot picture corresponding to the target classification label, sent by the server according to the first feedback information, wherein the reference shot picture comprises at least one target object matching the current scene to be shot; and, if the mobile terminal is in a shooting preview state for the current scene to be shot, displaying the outline of the target object on a preview image of the scene. The invention ensures that the pictures taken by the user meet composition requirements, which improves shooting efficiency and, in turn, the user experience.

Description

Photographing method, mobile terminal and server
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a photographing method, a mobile terminal, and a server.
Background
With the popularization and rapid development of electronic products, such products offer ever more functions and stronger performance; in particular, photographing has become a common function of most electronic products. Currently, when a mobile terminal is in photographing mode, a preview image is generally displayed directly on its display interface, so that the user can conveniently take a photograph according to the preview image. In this mode, the shooting scene, shooting angle and so on can only be chosen according to the user's own perception, and the composition requirements of the photo cannot be taken into account. When the mobile terminal is in photographing mode, grid lines (a "#"-shaped guide) can be displayed over the preview image to assist the user with composition. However, this approach requires the user to know basic principles such as the rule of thirds and to have a certain level of photography skill; it is therefore not suitable for all users and does not benefit the user experience. In summary, the current photographing mode cannot meet the user's requirements for picture composition, so the quality of the photographed picture is poor and the user experience is reduced.
Disclosure of Invention
The invention provides a photographing method, a mobile terminal and a server, in order to solve the problem that the photographing mode in the prior art cannot meet the user's requirements for picture composition.
In a first aspect, an embodiment of the present invention provides a photographing method, which is applied to a mobile terminal, and the method includes:
uploading a scene photo shot under a scene to be shot to a server;
receiving at least one classification label sent by the server according to the scene photo;
determining one of the at least one classification label as a target classification label, and sending first feedback information of the determined target classification label to the server;
receiving a reference shot picture which is sent by the server according to the first feedback information and corresponds to the target classification label, wherein the reference shot picture comprises at least one simulation object matched with an object to be shot in the scene to be shot;
and if the mobile terminal is in the shooting preview state of the scene to be shot, displaying the outline of the simulation object on a preview image of the scene to be shot.
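The five steps of the first aspect can be sketched as a client-side flow. The following is a minimal illustration only, not the patent's implementation: the `FakeServer` class, its method names and the outline representation are all assumptions made for the sketch, since the patent does not specify a wire protocol or a data format.

```python
class FakeServer:
    """Stand-in for the server side; the real transport and data
    formats are not specified by the patent."""

    def classify(self, scene_photo):
        # Steps 11-12: the server returns classification labels
        # for the uploaded scene photo.
        return ["literary", "fresh", "sweet"]

    def send_feedback(self, target_label):
        # First feedback information: the user's chosen target label.
        self.chosen = target_label

    def reference_picture(self, target_label):
        # Step 14: a reference shot picture for the target label,
        # here reduced to a label plus an outline polygon.
        return {"label": target_label, "outline": [(0, 0), (4, 0), (4, 3)]}


def photograph_flow(server, scene_photo, choose_label, in_preview=True):
    """Drive steps 11-15 on the mobile-terminal side."""
    labels = server.classify(scene_photo)          # steps 11-12
    target = choose_label(labels)                  # step 13: user selection
    server.send_feedback(target)                   # first feedback information
    reference = server.reference_picture(target)   # step 14
    if in_preview:                                 # step 15: show the outline
        return reference["outline"]
    return None
```

For example, `photograph_flow(FakeServer(), photo, lambda ls: ls[0])` uploads the photo, picks the first label and returns the outline to draw over the preview.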
In a second aspect, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes:
the transmission module is used for uploading the scene pictures shot under the scene to be shot to the server;
the first receiving module is used for receiving at least one classification label sent by the server according to the scene photo;
the first sending module is used for determining one of the at least one classification label as a target classification label and sending first feedback information for determining the target classification label to the server;
a second receiving module, configured to receive a reference captured picture corresponding to the target classification tag and sent by the server according to the first feedback information, where the reference captured picture includes at least one simulated object matched with an object to be captured in the scene to be captured;
and the display module is used for displaying the outline of the simulation object on a preview image of the scene to be shot if the mobile terminal is in a shooting preview state of the scene to be shot.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the photographing method described above.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements the steps of the photographing method as described above.
In a fifth aspect, an embodiment of the present invention further provides a photographing method, which is applied to a server, and the method includes:
acquiring a scene photo uploaded by a mobile terminal;
identifying a shooting scene of the scene photo, determining at least one classification label corresponding to the shooting scene, and sending the classification label to the mobile terminal;
receiving first feedback information which is sent by the mobile terminal according to the classification label and determines a target classification label, wherein the target classification label is one of the at least one classification label;
and sending a reference shot picture corresponding to the target classification label to the mobile terminal according to the first feedback information, wherein the reference shot picture comprises at least one simulated object matched with an object to be shot in the shooting scene.
In a sixth aspect, an embodiment of the present invention further provides a server, where the server includes:
the acquisition module is used for acquiring a scene photo uploaded by the mobile terminal;
the second sending module is used for identifying the shooting scene of the scene photo, determining at least one classification label corresponding to the shooting scene and sending the classification label to the mobile terminal;
a third receiving module, configured to receive first feedback information, which is sent by the mobile terminal according to the classification tag and determines a target classification tag, where the target classification tag is one of the at least one classification tag;
and the third sending module is used for sending a reference shot picture corresponding to the target classification tag to the mobile terminal according to the first feedback information, wherein the reference shot picture comprises at least one simulated object matched with an object to be shot in the shooting scene.
In a seventh aspect, an embodiment of the present invention further provides a server, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, and when executed by the processor, the computer program implements the steps of the photographing method described above.
In an eighth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements the steps of the photographing method as described above.
In this way, in the embodiment of the present invention, the mobile terminal uploads the scene photo taken in the current scene to be shot to the server, determines the target classification tag from the at least one classification tag that the server returns for the scene photo, and, upon receiving the reference shot picture corresponding to the target classification tag, displays the outline of the simulated object from the reference shot picture on the preview image of the scene to be shot. This provides the user with a reference for composition, shooting angle, framing and the like. When taking a picture, the user can align the object to be shot with the outline of the simulated object displayed on the preview image, and can therefore frame the picture and choose a suitable shooting angle quickly. The shot picture is thus guaranteed to meet the composition requirement, which improves photographing efficiency and further improves the user experience.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a photographing method according to an embodiment of the invention;
Fig. 2 is a schematic diagram of a scene photo according to an embodiment of the invention;
Fig. 3 is a schematic diagram of a reference shot picture according to an embodiment of the invention;
Fig. 4 is a schematic diagram showing the outline of a target object according to an embodiment of the invention;
Fig. 5 shows a first block diagram of a mobile terminal according to an embodiment of the invention;
Fig. 6 shows a second block diagram of a mobile terminal according to an embodiment of the invention;
Fig. 7 shows a third block diagram of a mobile terminal according to an embodiment of the invention;
Fig. 8 shows a fourth block diagram of a mobile terminal according to an embodiment of the invention;
Fig. 9 shows a fifth block diagram of a mobile terminal according to an embodiment of the invention;
Fig. 10 is a second flowchart of a photographing method according to an embodiment of the invention;
Fig. 11 shows a first block diagram of a server according to an embodiment of the invention;
Fig. 12 shows a second block diagram of a server according to an embodiment of the invention;
Fig. 13 shows a third block diagram of a server according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a photographing method applied to a mobile terminal, where the method includes:
and 11, uploading the scene picture shot under the current scene to be shot to a server.
In this embodiment, when a photographing instruction is detected, the current scene to be shot is photographed and the resulting scene photo is uploaded to the server. Fig. 2 shows an example of a scene photo taken by the mobile terminal in step 11, where the shooting scene is "grassland".
Step 12: receive at least one classification label sent by the server according to the scene photo.
Specifically, when the mobile terminal uploads the shot scene photo to the server, the server identifies the target objects in the scene photo. For example, the server parses the scene photo shown in Fig. 2 and recognizes target objects named "tree", "cloud" and "grassland". From a preset mapping between target-object names and classification labels, the server then determines the classification labels corresponding to these names, for example "literary", "fresh", "sweet" and so on. It should be noted that each target-object name corresponds to at least one classification label.
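The name-to-label lookup described above can be modelled as a simple table. The mapping below is purely illustrative: the patent only names objects such as "tree", "cloud", "grassland" and "building" and a few label examples, so which labels attach to which object is an assumption of this sketch (label spellings follow one rendering of the translation).

```python
# Illustrative mapping from recognized target-object names to classification
# labels; every object name maps to at least one label, as the patent notes.
OBJECT_LABELS = {
    "tree": ["literary", "fresh"],
    "cloud": ["fresh", "sweet"],
    "grassland": ["literary", "fresh", "sweet"],
    "building": ["grand"],
}


def labels_for_scene(object_names):
    """Union of the labels of all recognized objects, deduplicated in
    first-seen order, as the server would send to the mobile terminal."""
    labels = []
    for name in object_names:
        for label in OBJECT_LABELS.get(name, []):
            if label not in labels:
                labels.append(label)
    return labels
```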
Step 13: determine one of the at least one classification label as a target classification label, and send first feedback information indicating the determined target classification label to the server.
Specifically, when at least one classification tag corresponding to the shooting scene is received from the server, the tags may be displayed on the corresponding scene photo, so that the user can select one of them with the scene photo as context.
Step 14: receive the reference shot picture corresponding to the target classification label, sent by the server according to the first feedback information.
The reference shot picture received by the mobile terminal from the server comprises at least one simulated object matched with an object to be shot in the scene to be shot. Specifically, the simulated objects include the target objects in the scene photo taken by the mobile terminal (such as "tree", "cloud" and "grassland" in Fig. 2), as well as a simulated person object that the server places at a composition position in the scene photo according to a predetermined composition manner.
In this embodiment, different classification tags correspond to different composition references, so the user of the mobile terminal can obtain different types of composition reference by selecting a classification tag. For example, if the target object is "grassland" and the selected classification label is "literary", the picture is composed according to the "literary" type: the simulated person object is set as a panoramic figure occupying a relatively small proportion of the photo. If the scene object is "building" and the selected classification label is "grand", the picture is composed according to the "grand" type: the simulated person object occupies a relatively larger proportion of the photo than in the "literary" type. It should be noted that composition manners other than those above may also be adopted; the invention is not limited in this respect.
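The label-dependent composition rule above can be expressed as presets keyed by classification label. The concrete ratio values below are invented for illustration; the patent only states that one label type (rendered as "literary"/"art") uses a small panoramic figure while another (rendered as "grand"/"ambitious") uses a proportionally larger one.

```python
# Hypothetical composition presets per classification label.
COMPOSITION_PRESETS = {
    "literary": {"figure_ratio": 0.10, "framing": "panorama"},
    "grand": {"figure_ratio": 0.35, "framing": "wide"},
}


def figure_ratio(label, default=0.20):
    """Fraction of the frame the simulated person object should occupy
    for a given label; falls back to a default for unknown labels."""
    return COMPOSITION_PRESETS.get(label, {}).get("figure_ratio", default)
```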
Referring to Fig. 3, an example is given of the reference shot picture pushed by the server and acquired by the mobile terminal in step 14. The reference shot picture includes a reference composition position for the simulated person image in the scene photo, providing the user with a reference for shooting angle, framing and the like, which makes it easier to take the picture. It should be noted that the server may push one reference shot picture or at least two. If the server pushes at least two reference shot pictures, the mobile terminal performs the following step 15 according to one of them.
Step 15: if the mobile terminal is in a shooting preview state for the scene to be shot, display the outline of the simulated object on a preview image of the scene to be shot.
It should be noted that if the server sends one reference shot picture, the simulated object is the one displayed in that picture; if the server sends at least two reference shot pictures, the simulated object is the one displayed in one of those pictures.
In one implementation, the mobile terminal extracts the outline of the simulated objects in the reference shot picture (including the target objects in the scene photo and the simulated person object) and displays the outline on the preview image of the current scene to be shot.
In another implementation, the server extracts the outline of the simulated object in the reference shot picture selected by the mobile terminal and sends the resulting outline information to the mobile terminal. The mobile terminal then displays the outline of the simulated object on the preview image of the scene to be shot according to the outline information sent by the server.
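The outline-extraction step, whether performed on the terminal or on the server, is not specified further; a production implementation would typically use an image-processing library (for example OpenCV's `findContours`). As a self-contained toy version, the outline of a binary mask can be taken to be the mask cells that touch the outside:

```python
def extract_outline(mask):
    """Toy outline extraction: given a simulated object's binary mask as a
    set of (x, y) cells, return the cells with at least one 4-neighbour
    outside the mask, i.e. the boundary to draw on the preview image."""
    outline = set()
    for (x, y) in mask:
        neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        if any(n not in mask for n in neighbours):
            outline.add((x, y))
    return outline
```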
In this embodiment, the outline of the simulated object in the reference shot picture is displayed on the preview image of the current scene to be shot to provide the user with a reference for shooting angle, framing and the like. Fig. 4 shows an example in which the outline of the simulated object is displayed on the preview image in step 15. When taking a picture, the user can match the objects in the shooting scene against the outline of the simulated object displayed on the preview image, frame the picture quickly and select a suitable shooting angle; the shot picture is thereby ensured to meet the composition requirement, and the user experience is improved.
In one implementation of step 14, there is one reference shot picture. In this case, step 15, displaying the outline of the simulated object on the preview image of the scene to be shot, specifically comprises:
detecting whether a confirmation instruction, input by the user to confirm the reference shot picture as the target shot picture, has been acquired; if so, extracting the outline of the simulated object in the target shot picture and displaying it on the preview image of the scene to be shot; if not, sending request information for acquiring a reference shot picture to the server.
Specifically, the mobile terminal receives one reference shot picture sent by the server for the user's composition reference. If the mobile terminal detects a confirmation instruction, input by the user, confirming the reference shot picture as the target shot picture, meaning the user accepts the composition reference of that picture, the mobile terminal directly extracts the outline of the simulated object in the reference shot picture and displays it on the preview image of the scene to be shot, where the outline of the simulated object comprises the outline of the simulated person object in the reference shot picture and the outline of at least one corresponding target object in the scene photo. Otherwise, the mobile terminal sends request information for acquiring a reference shot picture to the server, so that the server resends a reference shot picture according to the request information.
In another implementation of step 14, there are at least two reference shot pictures. In this case, step 15, displaying the outline of the simulated object on the preview image of the scene to be shot, specifically comprises:
when a selection instruction, input by the user to select one of the reference shot pictures as the target shot picture, is acquired, extracting the outline of the simulated object in the target shot picture and displaying it on the preview image of the scene to be shot.
Specifically, the mobile terminal receives at least two reference shot pictures pushed by the server for the user's composition reference. If the mobile terminal detects a selection instruction, input by the user, selecting one of the reference shot pictures as the target shot picture, meaning the user accepts the composition reference of that picture, the mobile terminal directly extracts the outline of the simulated object in the target shot picture and displays it on the preview image of the scene to be shot. Here the outline of the simulated object comprises the outline of the simulated person object in the reference shot picture and the outline of at least one corresponding target object in the scene photo.
In this embodiment, the mobile terminal extracts the outline of the simulated object in the target shot picture directly, so the corresponding outline information is obtained locally as soon as the reference shot picture sent by the server is received. This saves the time of an additional information exchange between the mobile terminal and the server and improves shooting efficiency.
As still another implementation, in the above step 15: displaying the outline of the simulation object on a preview image of a scene to be shot, specifically comprising:
sending second feedback information for determining the reference shot picture as a target shot picture to a server; acquiring contour information of a simulation object in the target shot picture, which is sent by the server according to the second feedback information; and displaying the outline of the simulation object on the preview image of the scene to be shot according to the outline information.
In this embodiment, the mobile terminal may receive one reference shot picture from the server; if it detects a confirmation instruction in which the user confirms that reference shot picture as the target shot picture, it sends second feedback information determining the target shot picture to the server. The mobile terminal may also receive at least two reference shot pictures from the server, so that the user can select a suitable one among them.
If the mobile terminal detects a selection instruction in which the user selects one of the reference shot pictures as the target shot picture, it likewise sends second feedback information determining the target shot picture to the server. Preferably, the mobile terminal receives a plurality of reference shot pictures from the server at once, which improves the efficiency of information interaction between the mobile terminal and the server and helps improve shooting efficiency.
In this embodiment, the mobile terminal obtains from the server the outline information of the simulated object in the target shot picture it has determined, and displays the outline on the preview image of the scene to be shot. This shortens the image processing performed on the mobile terminal, reduces the amount of computation, and thus improves the running speed of the mobile terminal.
Further, after step 15, displaying the outline of the simulated object on the preview image of the scene to be shot, the method further comprises:
detecting whether an object to be shot corresponding to the simulation object is included in a display area defined by the outline on the preview image; and if the object to be shot is included, prompting the user to execute a shooting action.
In this embodiment, when the object to be shot in the preview image that corresponds to the simulated object fills the display area defined by the simulated object's outline to a degree that meets a preset condition (for example, the filling degree is greater than a preset threshold), the user is prompted to perform the photographing action. It should be noted that the prompt may be given by displaying a prompt message on the current preview image, by a prompt tone, by vibrating the terminal, or in other manners besides these; the invention is not limited in this respect.
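The preset condition above (filling degree greater than a preset threshold) can be sketched with the outlined region and the detected object modelled as sets of pixel coordinates; the 0.8 default threshold is an assumption of this sketch, not a value from the patent.

```python
def fill_degree(outline_region, object_mask):
    """Fraction of the display area defined by the outline that is covered
    by the detected object to be shot (both given as sets of pixels)."""
    if not outline_region:
        return 0.0
    return len(outline_region & object_mask) / len(outline_region)


def should_prompt(outline_region, object_mask, threshold=0.8):
    """Prompt the user to take the shot once the object to be shot fills
    enough of the outlined region on the preview image."""
    return fill_degree(outline_region, object_mask) >= threshold
```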
According to the above scheme, the mobile terminal uploads the scene photo taken in the current scene to be shot to the server, determines the target classification label from the at least one classification label that the server returns for the scene photo, and, upon receiving the reference shot picture corresponding to the target classification label, displays the outline of the simulated object from the reference shot picture on the preview image of the scene to be shot, providing the user with a reference for composition, shooting angle, framing and the like. When taking a picture, the user can match the object to be shot against the outline of the simulated object displayed on the preview image, frame the picture quickly and select a suitable shooting angle, so that the shot picture meets the composition requirement, photographing efficiency is improved, and the user experience is further improved.
Referring to fig. 5 and fig. 6, an embodiment of the present invention further provides a mobile terminal 500, where the mobile terminal 500 includes:
and a transmission module 510, configured to upload a scene photo taken in a scene to be taken to a server.
A first receiving module 520, configured to receive at least one classification tag sent by the server according to the scene photo.
A first sending module 530, configured to determine one of the at least one classification tag as a target classification tag, and send first feedback information that determines the target classification tag to the server.
A second receiving module 540, configured to receive a reference captured picture corresponding to the target classification tag and sent by the server according to the first feedback information, where the reference captured picture includes at least one simulated object matched with an object to be captured in the scene to be captured.
A display module 550, configured to display the outline of the simulated object on a preview image of the scene to be shot if the mobile terminal is in a shooting preview state of the scene to be shot.
In one case, the second receiving module 540 receives one reference shot picture; the display module 550 then includes:
a detecting unit 551, configured to detect whether a confirmation instruction input by a user to confirm the reference captured picture as a target captured picture is acquired.
A first display unit 552, configured to, if the confirmation instruction is acquired, extract a contour of the simulated object in the target captured picture and display the contour on a preview image of the scene to be shot.
A first sending unit 553, configured to, if the confirmation instruction is not acquired, send request information for acquiring a reference captured picture to the server.
Wherein the number of reference shot pictures received by the second receiving module 540 is at least two; the display module 550 includes:
a second display unit 554, configured to, when a selection instruction input by a user to select one of the reference captured pictures as a target captured picture is acquired, extract a contour of a simulation object in the target captured picture, and display the contour on a preview image of the scene to be captured.
Wherein the display module 550 comprises:
a second sending unit 555, configured to send second feedback information that determines that the reference captured picture is a target captured picture to the server.
A first obtaining unit 556, configured to obtain the contour information of the simulation object in the target captured picture sent by the server according to the second feedback information.
A third display unit 557, configured to display, according to the contour information, a contour of the simulation object on a preview image of the scene to be photographed.
If one reference shot picture sent by the server is received, the target shot picture is the reference shot picture; and if at least two reference shot pictures sent by the server are received, the target shot picture is one of the reference shot pictures.
Wherein the mobile terminal 500 further comprises:
a detecting module 560, configured to detect whether an object to be photographed corresponding to the simulation object is included in a display area defined by the outline on the preview image.
And the prompting module 570 is configured to prompt the user to execute a photographing action if the object to be photographed is included.
The mobile terminal in this scheme uploads a scene photo taken in the current scene to be shot to the server, determines a target classification label from the at least one classification label that the server sends for the scene photo, and, upon receiving the reference shot picture corresponding to the target classification label from the server, displays the outline of the simulated object in the reference shot picture on a preview image of the scene to be shot, thereby providing the user with references for composition, shooting angle, framing, and the like. When photographing, the user can therefore align the object to be shot with the outline of the simulated object displayed on the preview image of the current scene to be shot, quickly frame the shot, and select a suitable shooting angle, which ensures that the resulting photo satisfies the intended composition, improves the photographing efficiency, and further improves the user experience.
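The interaction carried out by the modules above can be sketched as the following flow. This is a minimal illustration, not the patented implementation: the `server` object, its method names, and the selection callbacks are all assumptions introduced for this sketch.

```python
# Hypothetical sketch of the terminal-side flow implemented by the modules
# above. The server interface and callback names are assumptions for
# illustration only; the patent does not specify a wire protocol.

def choose_reference_picture(server, scene_photo, pick_tag, pick_picture):
    """Return the target shot picture after the tag-selection handshake."""
    tags = server.classify(scene_photo)               # at least one classification tag
    target_tag = pick_tag(tags)                       # user determines the target tag
    pictures = server.reference_pictures(target_tag)  # server answers first feedback
    if len(pictures) == 1:                            # a single picture is the target
        return pictures[0]
    return pick_picture(pictures)                     # user selects among several
```

With a single reference picture the terminal would additionally let the user confirm it or re-request another, as described for the detecting unit 551 and the first sending unit 553; that branch is omitted here for brevity.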
Fig. 7 is a block diagram of a mobile terminal according to another embodiment of the present invention, where the mobile terminal 700 includes a processor 701, a memory 702, and a computer program stored in the memory 702 and operable on the processor 701, and when executed by the processor 701, the computer program implements the following steps: uploading a scene photo shot under a scene to be shot to a server; receiving at least one classification label sent by the server according to the scene photo; determining one of the at least one classification label as a target classification label, and sending first feedback information of the determined target classification label to the server; receiving a reference shot picture which is sent by the server according to the first feedback information and corresponds to the target classification label, wherein the reference shot picture comprises at least one simulation object matched with an object to be shot in the scene to be shot; and if the mobile terminal is in the shooting preview state of the scene to be shot, displaying the outline of the simulation object on a preview image of the scene to be shot.
Optionally, in the step of receiving the reference shot picture corresponding to the target classification tag sent by the server according to the first feedback information, the number of reference shot pictures is one. The computer program, when executed by the processor 701, may further implement the steps of: detecting whether a confirmation instruction, input by a user, confirming the reference shot picture as a target shot picture is acquired; if the confirmation instruction is acquired, extracting the outline of the simulated object in the target shot picture and displaying the outline on a preview image of the scene to be shot; and if not, sending request information for acquiring a reference shot picture to the server.
In the step of receiving the reference shot pictures corresponding to the target classification label sent by the server according to the first feedback information, the number of reference shot pictures is at least two. The computer program, when executed by the processor 701, may further implement the steps of: when a selection instruction, input by a user, selecting one of the reference shot pictures as a target shot picture is acquired, extracting the outline of the simulated object in the target shot picture and displaying the outline on a preview image of the scene to be shot.
Optionally, the computer program may further implement the following steps when executed by the processor 701: sending second feedback information for determining the reference shot picture as a target shot picture to the server; acquiring contour information of a simulation object in the target shot picture, which is sent by the server according to the second feedback information; displaying the outline of the simulation object on a preview image of the scene to be shot according to the outline information; if one reference shot picture sent by the server is received, the target shot picture is the reference shot picture; and if at least two reference shot pictures sent by the server are received, the target shot picture is one of the reference shot pictures.
Optionally, the computer program may further implement the following steps when executed by the processor 701: detecting whether an object to be shot corresponding to the simulation object is included in a display area defined by the outline on the preview image; and if the object to be shot is included, prompting the user to execute a shooting action.
The software module may be located in a computer-readable storage medium well known in the art, such as RAM, flash memory, ROM, programmable ROM, electrically erasable programmable memory, or a register. The computer-readable storage medium is located in the memory 702; the processor 701 reads the information in the memory 702 and performs the steps of the above method in combination with its hardware. In particular, the computer-readable storage medium has stored thereon a computer program which, when executed by the processor 701, implements the steps of the embodiment of the photographing method described above.
Fig. 8 is a block diagram of a mobile terminal 800 according to another embodiment of the present invention, the mobile terminal shown in fig. 8 including: at least one processor 801, a memory 802, a photographing component 803, and a user interface 804. The various components in the mobile terminal 800 are coupled together by a bus system 805. It is understood that the bus system 805 is used to enable communications among the components connected. The bus system 805 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 805 in fig. 8.
The user interface 804 may include, among other things, a display or a pointing device (e.g., a touch-sensitive pad or touch screen, etc.).
It will be appreciated that the memory 802 in embodiments of the invention may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be Read-Only Memory (ROM), Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), or flash memory. The volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 802 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 802 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof: an operating system 8021 and application programs 8022.
The operating system 8021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application program 8022 includes various application programs, such as a Media Player (Media Player), a Browser (Browser), and the like, for implementing various application services. A program implementing a method according to an embodiment of the present invention may be included in application program 8022.
In embodiments of the present invention, the processor 801 performs the method by calling a program or instructions stored in the memory 802, specifically a program or instructions stored in the application program 8022. The processor 801 is configured to: upload a scene photo taken in a scene to be shot to a server; receive at least one classification label sent by the server according to the scene photo; determine one of the at least one classification label as a target classification label and send first feedback information of the determined target classification label to the server; receive a reference shot picture corresponding to the target classification label sent by the server according to the first feedback information, where the reference shot picture includes at least one simulated object matched with an object to be shot in the scene to be shot; and, if the mobile terminal is in a shooting preview state of the scene to be shot, display the outline of the simulated object on a preview image of the scene to be shot.
The methods disclosed in the embodiments of the present invention described above may be applied to, or implemented by, the processor 801. The processor 801 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 801. The processor 801 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or any conventional processor. The steps of the method disclosed in connection with the embodiments of the present invention may be directly embodied as being performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or a register. The storage medium is located in the memory 802; the processor 801 reads the information in the memory 802 and completes the steps of the method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
In the step of receiving the reference shot picture corresponding to the target classification label sent by the server according to the first feedback information, the number of reference shot pictures is one. The processor 801 is further configured to: detect whether a confirmation instruction, input by a user, confirming the reference shot picture as a target shot picture is acquired; if the confirmation instruction is acquired, extract the outline of the simulated object in the target shot picture and display the outline on a preview image of the scene to be shot; and if not, send request information for acquiring a reference shot picture to the server.
In the step of receiving the reference shot pictures corresponding to the target classification label sent by the server according to the first feedback information, the number of reference shot pictures is at least two. The processor 801 is further configured to: when a selection instruction, input by a user, selecting one of the reference shot pictures as a target shot picture is acquired, extract the outline of the simulated object in the target shot picture and display the outline on a preview image of the scene to be shot.
Wherein, the processor 801 is further configured to: sending second feedback information for determining the reference shot picture as a target shot picture to the server; acquiring contour information of a simulation object in the target shot picture, which is sent by the server according to the second feedback information; displaying the outline of the simulation object on a preview image of the scene to be shot according to the outline information; if one reference shot picture sent by the server is received, the target shot picture is the reference shot picture; and if at least two reference shot pictures sent by the server are received, the target shot picture is one of the reference shot pictures.
Wherein, the processor 801 is further configured to: detecting whether an object to be shot corresponding to the simulation object is included in a display area defined by the outline on the preview image; and if the object to be shot is included, prompting the user to execute a shooting action.
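The readiness check performed here can be sketched as a simple geometric containment test. This is a minimal sketch under stated assumptions: boxes are (left, top, right, bottom) tuples in preview-image pixels, and the object detection that produces `object_box` is outside the scope of the illustration.

```python
# Minimal sketch of the check described above: the terminal prompts the
# user only when the detected object's bounding box falls inside the
# display area outlined by the simulated object. Box format and the
# tolerance parameter are assumptions for illustration.

def box_inside(inner, outer, tolerance=0):
    """True if `inner` lies within `outer`, allowing a pixel tolerance."""
    il, it, ir, ib = inner
    ol, ot, orr, ob = outer
    return (il >= ol - tolerance and it >= ot - tolerance and
            ir <= orr + tolerance and ib <= ob + tolerance)

def should_prompt(object_box, contour_box):
    """Prompt the photographing action only when an object was detected
    and it sits inside the contour-defined display area."""
    return object_box is not None and box_inside(object_box, contour_box)
```

A real implementation might compare against the actual contour polygon rather than its bounding box; the containment idea is the same.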
According to the mobile terminal 800 of the embodiment of the invention, a scene photo taken in the current scene to be shot is uploaded to the server, a target classification label is determined from the at least one classification label that the server sends for the scene photo, and, upon receiving the reference shot picture corresponding to the target classification label from the server, the outline of the simulated object in the reference shot picture is displayed on a preview image of the scene to be shot, providing the user with references for composition, shooting angle, framing, and the like. When photographing, the user can therefore align the object to be shot with the outline of the simulated object displayed on the preview image, quickly frame the shot, and select a suitable shooting angle, which ensures that the resulting photo satisfies the intended composition, improves the photographing efficiency, and improves the user experience.
Fig. 9 is a schematic structural diagram of a mobile terminal according to another embodiment of the present invention. Specifically, the mobile terminal 900 in fig. 9 may be a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), or a vehicle-mounted computer.
The mobile terminal 900 in fig. 9 includes a power supply 910, a memory 920, an input unit 930, a display unit 940, a processor 950, a WiFi (Wireless Fidelity) module 960, an audio circuit 970, and an RF circuit 980.
The input unit 930 may be used, among other things, to receive numeric or character information input by a user and to generate signal inputs related to user settings and function control of the mobile terminal 900. Specifically, in the embodiment of the present invention, the input unit 930 may include a touch panel 931. The touch panel 931, also referred to as a touch screen, may collect a touch operation performed by the user on or near it (for example, an operation performed on the touch panel 931 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 931 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position of the user's touch, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends the coordinates to the processor 950, and can also receive and execute commands sent from the processor 950. The touch panel 931 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 931, the input unit 930 may also include other input devices 932, which may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick.
Among them, the display unit 940 may be used to display information input by the user or information provided to the user and various menu interfaces of the mobile terminal. The display unit 940 may include a display panel 941, and the display panel 941 may be optionally configured in the form of an LCD or an Organic Light-Emitting Diode (OLED).
It should be noted that the touch panel 931 may cover the display panel 941 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the touch operation is transmitted to the processor 950 to determine the type of the touch event, and the processor 950 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen comprises an application program interface display area and a common control display area. The arrangement of the two display areas is not limited, and may be any arrangement that distinguishes them, such as a vertical or left-right arrangement. The application interface display area may be used to display the interface of an application; each interface may contain at least one interface element, such as an icon and/or a widget desktop control of an application. The application interface display area may also be an empty interface containing no content. The common control display area is used for displaying frequently used controls, such as setting buttons, interface numbers, scroll bars, and phone book icons.
The processor 950 is a control center of the mobile terminal. It connects the various parts of the entire mobile phone by using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing software programs and/or modules stored in the first memory 921 and calling data stored in the second memory 922, thereby monitoring the mobile terminal as a whole. Optionally, the processor 950 may include one or more processing units.
In an embodiment of the present invention, processor 950 is configured to, by invoking software programs and/or modules stored in first memory 921 and/or data stored in second memory 922: uploading a scene photo shot under a scene to be shot to a server; receiving at least one classification label sent by the server according to the scene photo; determining one of the at least one classification label as a target classification label, and sending first feedback information of the determined target classification label to the server; receiving a reference shot picture which is sent by the server according to the first feedback information and corresponds to the target classification label, wherein the reference shot picture comprises at least one simulation object matched with an object to be shot in the scene to be shot; and if the mobile terminal is in the shooting preview state of the scene to be shot, displaying the outline of the simulation object on a preview image of the scene to be shot.
In the step of receiving the reference shot picture corresponding to the target classification label sent by the server according to the first feedback information, the number of reference shot pictures is one. The processor 950 is further configured to: detect whether a confirmation instruction, input by a user, confirming the reference shot picture as a target shot picture is acquired; if the confirmation instruction is acquired, extract the outline of the simulated object in the target shot picture and display the outline on a preview image of the scene to be shot; and if not, send request information for acquiring a reference shot picture to the server.
In the step of receiving the reference shot pictures corresponding to the target classification label sent by the server according to the first feedback information, the number of reference shot pictures is at least two. The processor 950 is further configured to: when a selection instruction, input by a user, selecting one of the reference shot pictures as a target shot picture is acquired, extract the outline of the simulated object in the target shot picture and display the outline on a preview image of the scene to be shot.
Wherein the processor 950 is further configured to: sending second feedback information for determining the reference shot picture as a target shot picture to the server; acquiring contour information of a simulation object in the target shot picture, which is sent by the server according to the second feedback information; displaying the outline of the simulation object on a preview image of the scene to be shot according to the outline information; if one reference shot picture sent by the server is received, the target shot picture is the reference shot picture; and if at least two reference shot pictures sent by the server are received, the target shot picture is one of the reference shot pictures.
Wherein the processor 950 is further configured to: detect whether an object to be shot corresponding to the simulated object is included in the display area defined by the outline on the preview image; and if the object to be shot is included, prompt the user to execute a photographing action.
According to the mobile terminal 900 of the embodiment of the invention, a scene photo taken in the current scene to be shot is uploaded to the server, a target classification label is determined from the at least one classification label that the server sends for the scene photo, and, upon receiving the reference shot picture sent by the server, the outline of the simulated object in the reference shot picture is displayed on a preview image of the scene to be shot, providing the user with references for composition, shooting angle, framing, and the like. When photographing, the user can therefore align the object to be shot with the outline of the simulated object displayed on the preview image, quickly frame the shot, and select a suitable shooting angle, which ensures that the resulting photo satisfies the intended composition, improves the photographing efficiency, and improves the user experience.
Referring to fig. 10, an embodiment of the present invention further provides a photographing method applied to a server, where the method includes:
step 101, obtaining a scene photo uploaded by a mobile terminal.
In this embodiment, the scene picture is shot by the mobile terminal in a scene to be shot.
Step 102, identifying a shooting scene of the scene photo, determining at least one classification label corresponding to the shooting scene, and sending the classification label to the mobile terminal.
In this embodiment, the server identifies the target objects in the scene photo. For example, the server parses the scene photo shown in fig. 2 and recognizes that the names of the target objects in the photo include "tree", "cloud", and "lawn". According to a preset mapping relationship between target object names and classification labels, the server determines the classification labels corresponding to the names "tree", "cloud", and "lawn", for example "literary", "fresh", and "sweet", and sends these classification labels to the mobile terminal so that the user of the mobile terminal can select one of them with reference to the corresponding scene photo. It should be noted that the name of one target object corresponds to at least one classification label.
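The preset mapping described above can be sketched as a dictionary lookup. The entries below are illustrative assumptions only; the source states merely that each target-object name corresponds to at least one classification label.

```python
# Hypothetical name-to-label mapping; the object names and labels here are
# assumed examples, not values specified by the source.
OBJECT_TAGS = {
    "tree": ["literary", "fresh"],
    "cloud": ["fresh", "sweet"],
    "lawn": ["literary", "sweet"],
}

def tags_for_scene(object_names):
    """Collect candidate classification labels for the recognized objects."""
    tags = []
    for name in object_names:
        for tag in OBJECT_TAGS.get(name, []):
            if tag not in tags:      # de-duplicate, keep first-seen order
                tags.append(tag)
    return tags
```

The server would send the resulting list to the mobile terminal for the user to pick a target classification label from.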
Step 103, receiving first feedback information which is sent by the mobile terminal according to the classification label and determines a target classification label, wherein the target classification label is one of the at least one classification label.
And 104, sending a reference shot picture corresponding to the target classification label to the mobile terminal according to the first feedback information, wherein the reference shot picture comprises at least one simulated object matched with the object to be shot in the shooting scene.
In this embodiment, the server determines, according to the target classification tag indicated by the first feedback information and a preset mapping relationship between classification tags and reference shot pictures, the reference shot picture corresponding to the target classification tag, and sends it to the mobile terminal. Specifically, the simulated objects include the target objects in the scene photo taken by the mobile terminal (such as "tree", "cloud", and "lawn" in fig. 2) and a simulated person object generated by the server at a composition position in the scene photo according to a predetermined composition manner. The server sends the reference shot picture to the mobile terminal to provide the user on the mobile terminal side with references for composition, shooting angle, framing, and the like, which facilitates photographing. It should be noted that the server may push one reference shot picture or at least two reference shot pictures to the mobile terminal.
In this embodiment, the server matches classification labels according to the target objects in the scene photo, where different classification labels correspond to different composition references, so that the user of the mobile terminal can choose among different types of composition references by selecting a classification label. For example, if the target object is a lawn and the corresponding classification label is "literary", the picture is composed according to the "literary" type: the simulated person object is set as a panoramic figure occupying a relatively small proportion of the photo. If the target object is a building and the corresponding classification label is "majestic", the picture is composed according to the "majestic" type: the proportion of the simulated person object in the photo is set relatively larger than for the "literary" type. It should be noted that composition manners other than those described above may also be adopted, and the present invention is not limited thereto.
As one implementation manner, in step 104, the number of reference shot pictures is one. In this case, after sending the reference shot picture corresponding to the target classification tag to the mobile terminal according to the first feedback information, the method further comprises:
receiving request information, sent by the mobile terminal, for acquiring a reference shot picture; and sending, according to the request information, another reference shot picture corresponding to the target classification label to the mobile terminal, where this reference shot picture is different from the one sent in the step of sending the reference shot picture corresponding to the target classification label to the mobile terminal according to the first feedback information.
Specifically, the server sends one reference shot picture to the mobile terminal for the user on the mobile terminal side to use as a composition reference. If the server then receives request information, sent by the mobile terminal, for acquiring a reference shot picture, it sends a different reference shot picture according to the request information.
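The re-request behaviour can be sketched as the server handing out the next not-yet-sent picture for the target label. The picture pool and the per-terminal state handling are assumptions for illustration:

```python
# Hypothetical sketch: on each request the server returns the next picture
# for the target label that it has not already sent to this terminal.

def next_reference(pictures, already_sent):
    """Return the first picture not yet sent, or None if the pool is exhausted."""
    for pic in pictures:
        if pic not in already_sent:
            already_sent.add(pic)
            return pic
    return None
```

This guarantees that the picture sent in response to the request information differs from the one previously sent according to the first feedback information, as the step above requires.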
As another implementation manner, in step 104, the number of reference shot pictures is at least two. In this case, after sending the reference shot pictures corresponding to the target classification tag to the mobile terminal according to the first feedback information, the method further comprises:
receiving second feedback information which is sent by the mobile terminal and used for determining a reference shot picture as a target shot picture; extracting the outline of the simulated object in the target shot picture according to the second feedback information to obtain the outline information of the simulated object in the target shot picture; and sending the contour information to the mobile terminal.
Specifically, if the server sends one reference shot picture to the mobile terminal, the target shot picture is that reference shot picture; if the server sends at least two reference shot pictures to the mobile terminal, the target shot picture is one of them. Preferably, the server sends a plurality of reference shot pictures to the mobile terminal, so that the user of the mobile terminal can directly select a suitable reference shot picture from them, which improves the efficiency of information interaction between the mobile terminal and the server and thus improves the photographing efficiency.
In this embodiment, when the target shot picture confirmed by the user of the mobile terminal is determined according to the received second feedback information, the contour of the simulated object in the target shot picture is extracted, and the resulting contour information is sent to the mobile terminal, so that the contour of the simulated object is displayed on a preview image of the scene to be shot while the mobile terminal is in the shooting preview state. When taking a picture, the user can therefore align the corresponding object to be shot in the scene with the displayed contour of the simulated object, which allows the user to frame the shot quickly and select a suitable shooting angle. This ensures that the shot picture meets the composition requirement, improves photographing efficiency, and improves the user experience.
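As an illustration of the contour-extraction step, the following toy routine marks the boundary pixels of a binary mask of the simulated object. This is a stand-in for the extraction the server performs; a production implementation would use an image-processing library (e.g. OpenCV's `findContours`), and the mask representation here is an assumption:

```python
def extract_outline(mask):
    """Given a binary mask (list of rows of 0/1) marking the simulated
    object's pixels, return the set of boundary coordinates: object pixels
    with at least one 4-neighbour that lies outside the object (or outside
    the image). The returned (row, col) pairs are the contour information
    that would be sent to the mobile terminal for overlay on the preview."""
    h, w = len(mask), len(mask[0])
    outline = set()
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue  # background pixel, never part of the contour
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                # neighbour outside the image or outside the object
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    outline.add((y, x))
                    break
    return outline
```

Interior pixels (all four neighbours inside the object) are excluded, so only the outline needs to be transmitted and drawn over the preview image.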
Referring to fig. 11 and 12, an embodiment of the present invention further provides a server 1100, where the server 1100 includes:
the obtaining module 1110 is configured to obtain a scene photo uploaded by the mobile terminal.
A second sending module 1120, configured to identify a shooting scene of the scene photo, determine at least one classification tag corresponding to the shooting scene, and send the classification tag to the mobile terminal.
A third receiving module 1130, configured to receive first feedback information, which is sent by the mobile terminal according to the classification tag and determines a target classification tag, where the target classification tag is one of the at least one classification tag.
A third sending module 1140, configured to send a reference shot picture corresponding to the target classification tag to the mobile terminal according to the first feedback information, where the reference shot picture includes at least one simulated object matched with an object to be shot in the shooting scene.
Wherein the third sending module 1140 sends one reference shot picture; the server 1100 further comprises:
A fourth receiving module 1150, configured to receive request information, sent by the mobile terminal, for acquiring a reference shot picture.
A fourth sending module 1160, configured to send a reference shot picture corresponding to the target classification tag to the mobile terminal according to the request information.
The reference shot picture sent by the fourth sending module is different from the reference shot picture sent by the third sending module.
Wherein the server 1100 further comprises:
A fifth receiving module 1170, configured to receive second feedback information, sent by the mobile terminal, that determines the reference shot picture as the target shot picture.
An extracting module 1180, configured to extract the contour of the simulated object in the target shot picture according to the second feedback information, to obtain contour information of the simulated object in the target shot picture.
A fifth sending module 1190, configured to send the contour information to the mobile terminal.
If one reference shot picture is sent to the mobile terminal by the third sending module 1140, the target shot picture is the reference shot picture; if at least two reference shot pictures are sent to the mobile terminal by the third sending module 1140, the target shot picture is one of the reference shot pictures.
In this scheme, the server receives the scene photo uploaded by the mobile terminal and sends at least one classification label corresponding to the scene photo back to the mobile terminal; it then sends the reference shot picture according to the target classification label determined by the mobile terminal. The contour of the simulated object in the determined target shot picture is extracted either directly by the mobile terminal or by the server, so that the contour of the simulated object is displayed on a preview image of the scene to be shot when the mobile terminal is in the shooting preview state. When taking a picture, the user of the mobile terminal can therefore align the corresponding object to be shot in the scene with the displayed contour of the simulated object, which allows the user to frame the shot quickly and select a suitable shooting angle. This ensures that the shot picture meets the composition requirement, improves photographing efficiency, and improves the user experience.
Fig. 13 is a block diagram of a server according to another embodiment of the present invention, where the server 1300 includes a processor 1301, a memory 1302, and a computer program stored in the memory 1302 and operable on the processor 1301, and when the computer program is executed by the processor 1301, the computer program implements the following steps: acquiring a scene photo uploaded by a mobile terminal; identifying a shooting scene of the scene photo, determining at least one classification label corresponding to the shooting scene, and sending the classification label to the mobile terminal; receiving first feedback information which is sent by the mobile terminal according to the classification label and determines a target classification label, wherein the target classification label is one of the at least one classification label; and sending a reference shot picture corresponding to the target classification label to the mobile terminal according to the first feedback information, wherein the reference shot picture comprises at least one simulated object matched with an object to be shot in the shooting scene.
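The step sequence executed by the processor can be sketched as a single handler. The three callables are hypothetical stand-ins for the scene classifier, the tag-selection round trip with the terminal, and the reference-picture lookup, none of which the embodiment specifies in detail:

```python
def handle_photo_session(photo, classify, pick_target_tag, reference_for):
    """Sketch of the server-side step sequence: identify the shooting scene
    of the uploaded photo to get classification labels, receive the target
    label chosen by the terminal (first feedback information), then return
    the reference shot picture(s) corresponding to that label."""
    tags = classify(photo)          # identify scene -> at least one label
    target = pick_target_tag(tags)  # first feedback from the terminal
    if target not in tags:
        raise ValueError("target label must be one of the offered labels")
    return reference_for(target)    # reference shot picture(s) for the label
```

A caller would wire `pick_target_tag` to the network round trip that sends the labels to the terminal and waits for the first feedback information.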
In the step of sending the reference shot picture corresponding to the target classification label to the mobile terminal according to the first feedback information, there is one reference shot picture. The computer program, when executed by the processor 1301, further implements the steps of: receiving request information, sent by the mobile terminal, for acquiring a reference shot picture; and sending a reference shot picture corresponding to the target classification label to the mobile terminal according to the request information, wherein this reference shot picture is different from the one previously sent according to the first feedback information.
Optionally, the computer program when executed by the processor 1301 further implements the steps of: receiving second feedback information which is sent by the mobile terminal and used for determining the reference shot picture as a target shot picture; extracting the outline of the simulated object in the target shot picture according to the second feedback information to obtain the outline information of the simulated object in the target shot picture; sending the contour information to the mobile terminal; if one reference shot picture is sent to the mobile terminal, the target shot picture is the reference shot picture; and if at least two reference shot pictures are sent to the mobile terminal, the target shot picture is one of the reference shot pictures.
The software module may be located in a computer readable storage medium known in the art, such as RAM, flash memory, ROM, programmable ROM, electrically erasable programmable memory, or a register. The computer readable storage medium is located in the memory 1302; the processor 1301 reads the information in the memory 1302 and, in combination with its hardware, completes the steps of the method. In particular, the computer readable storage medium stores a computer program which, when executed by the processor 1301, performs the steps of the above-described embodiments of the photographing method.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a U disk, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
While the preferred embodiments of the present invention have been described, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.

Claims (16)

1. A photographing method is applied to a mobile terminal, and is characterized by comprising the following steps:
uploading a scene photo shot under a scene to be shot to a server;
receiving at least one classification label sent by the server according to the scene photo;
determining one of the at least one classification label as a target classification label, and sending first feedback information of the determined target classification label to the server; the target classification label is determined by a user;
receiving a reference shot picture which is sent by the server according to the first feedback information and corresponds to the target classification label, wherein the reference shot picture comprises at least one simulation object matched with an object to be shot in the scene to be shot; the simulation objects comprise target objects in a scene photo shot by the mobile terminal and simulation character objects generated by the server according to a preset composition mode at composition positions of simulation character images in the scene photo;
and if the mobile terminal is in the shooting preview state of the scene to be shot, displaying the outline of the simulation object on a preview image of the scene to be shot.
2. The photographing method according to claim 1, wherein in the step of receiving the reference photographed picture corresponding to the target classification tag sent by the server according to the first feedback information, the reference photographed picture is one;
the step of displaying the outline of the simulated object on the preview image of the scene to be shot comprises the following steps:
detecting whether a confirmation instruction which is input by a user and confirms that the reference shot picture is taken as a target shot picture is acquired;
if the confirmation instruction is acquired, extracting the outline of the simulation object in the target shot picture, and displaying the outline on a preview image of the scene to be shot;
and if the confirmation instruction is not acquired, sending request information for acquiring a reference shot picture to the server.
3. The photographing method according to claim 1, wherein in the step of receiving the reference photographed pictures corresponding to the target classification tag sent by the server according to the first feedback information, the number of the reference photographed pictures is at least two;
the step of displaying the outline of the simulated object on the preview image of the scene to be shot comprises the following steps:
when a selection instruction which is input by a user and selects one of the reference shot pictures as a target shot picture is acquired, extracting the outline of a simulation object in the target shot picture, and displaying the outline on a preview image of the scene to be shot.
4. The photographing method according to claim 1, wherein the step of displaying the outline of the simulated object on the preview image of the scene to be photographed includes:
sending second feedback information for determining the reference shot picture as a target shot picture to the server;
acquiring contour information of a simulation object in the target shot picture, which is sent by the server according to the second feedback information;
displaying the outline of the simulation object on a preview image of the scene to be shot according to the outline information;
if one reference shot picture sent by the server is received, the target shot picture is the reference shot picture; and if at least two reference shot pictures sent by the server are received, the target shot picture is one of the reference shot pictures.
5. The photographing method according to claim 1, wherein after the step of displaying the outline of the simulation object on the preview image of the scene to be photographed, the method further comprises:
detecting whether an object to be shot corresponding to the simulation object is included in a display area defined by the outline on the preview image;
and if the object to be shot is included, prompting the user to execute a shooting action.
6. A photographing method is applied to a server and is characterized by comprising the following steps:
acquiring a scene photo uploaded by a mobile terminal;
identifying a shooting scene of the scene photo, determining at least one classification label corresponding to the shooting scene, and sending the classification label to the mobile terminal;
receiving first feedback information which is sent by the mobile terminal according to the classification label and determines a target classification label, wherein the target classification label is one of the at least one classification label; the target classification label is determined by a user;
sending a reference shot picture corresponding to the target classification label to the mobile terminal according to the first feedback information, wherein the reference shot picture comprises at least one simulated object matched with an object to be shot in the shooting scene; the simulation objects comprise target objects in a scene photo shot by the mobile terminal and simulation character objects generated by the server according to a preset composition mode at composition positions of simulation character images in the scene photo.
7. The photographing method according to claim 6, wherein in the step of sending the reference photographed picture corresponding to the target classification tag to the mobile terminal according to the first feedback information, there is one reference photographed picture;
after the step of sending the reference shot picture corresponding to the target classification tag to the mobile terminal according to the first feedback information, the method further includes:
receiving request information for acquiring a reference shot picture sent by the mobile terminal;
sending a reference shot picture corresponding to the target classification label to the mobile terminal according to the request information;
wherein the reference shot picture sent according to the request information is different from the reference shot picture sent in the step of sending the reference shot picture corresponding to the target classification label to the mobile terminal according to the first feedback information.
8. The photographing method according to claim 6, wherein after the step of sending the reference photographed picture corresponding to the target classification tag to the mobile terminal according to the first feedback information, the method further comprises:
receiving second feedback information which is sent by the mobile terminal and used for determining the reference shot picture as a target shot picture;
extracting the outline of the simulated object in the target shot picture according to the second feedback information to obtain the outline information of the simulated object in the target shot picture;
sending the contour information to the mobile terminal;
if one reference shot picture is sent to the mobile terminal, the target shot picture is the reference shot picture; and if at least two reference shot pictures are sent to the mobile terminal, the target shot picture is one of the reference shot pictures.
9. A mobile terminal, characterized in that the mobile terminal comprises:
the transmission module is used for uploading the scene pictures shot under the scene to be shot to the server;
the first receiving module is used for receiving at least one classification label sent by the server according to the scene photo;
the first sending module is used for determining one of the at least one classification label as a target classification label and sending first feedback information for determining the target classification label to the server; the target classification label is determined by a user;
a second receiving module, configured to receive a reference captured picture corresponding to the target classification tag and sent by the server according to the first feedback information, where the reference captured picture includes at least one simulated object matched with an object to be captured in the scene to be captured; the simulation objects comprise target objects in a scene photo shot by the mobile terminal and simulation character objects generated by the server according to a preset composition mode at composition positions of simulation character images in the scene photo;
and the display module is used for displaying the outline of the simulation object on a preview image of the scene to be shot if the mobile terminal is in a shooting preview state of the scene to be shot.
10. The mobile terminal according to claim 9, wherein the reference shot picture received by the second receiving module is one; the display module includes:
the detection unit is used for detecting whether a confirmation instruction which is input by a user and confirms the reference shot picture as a target shot picture is acquired;
the first display unit is used for, if the confirmation instruction is acquired, extracting the outline of the simulation object in the target shot picture and displaying the outline on the preview image of the scene to be shot;
and the first sending unit is used for, if the confirmation instruction is not acquired, sending request information for acquiring a reference shot picture to the server.
11. The mobile terminal according to claim 9, wherein the reference shot pictures received by the second receiving module are at least two; the display module includes:
and the second display unit is used for extracting the outline of a simulation object in the target shooting picture and displaying the outline on the preview image of the scene to be shot when a selection instruction which is input by a user and used for selecting one of the reference shooting pictures as the target shooting picture is acquired.
12. The mobile terminal of claim 9, wherein the display module comprises:
a second sending unit configured to send second feedback information that determines that the reference captured picture is a target captured picture to the server;
the first acquisition unit is used for acquiring the contour information of the simulation object in the target shot picture sent by the server according to the second feedback information;
the third display unit is used for displaying the outline of the simulation object on the preview image of the scene to be shot according to the outline information;
if one reference shot picture sent by the server is received, the target shot picture is the reference shot picture; and if at least two reference shot pictures sent by the server are received, the target shot picture is one of the reference shot pictures.
13. The mobile terminal of claim 9, wherein the mobile terminal further comprises:
the detection module is used for detecting whether an object to be shot corresponding to the simulation object is included in a display area defined by the outline on the preview image;
and the prompting module is used for prompting the user to execute the photographing action if the object to be photographed is included.
14. A server, characterized in that the server comprises:
the acquisition module is used for acquiring a scene photo uploaded by the mobile terminal;
the second sending module is used for identifying the shooting scene of the scene photo, determining at least one classification label corresponding to the shooting scene and sending the classification label to the mobile terminal;
a third receiving module, configured to receive first feedback information, which is sent by the mobile terminal according to the classification tag and determines a target classification tag, where the target classification tag is one of the at least one classification tag; the target classification label is determined by a user;
a third sending module, configured to send, to the mobile terminal, a reference captured picture corresponding to the target classification tag according to the first feedback information, where the reference captured picture includes at least one simulated object matched with an object to be captured in the captured scene; the simulation objects comprise target objects in a scene photo shot by the mobile terminal and simulation character objects generated by the server according to a preset composition mode at composition positions of simulation character images in the scene photo.
15. The server according to claim 14, wherein the reference shot picture sent by the third sending module is one; the server further comprises:
the fourth receiving module is used for receiving request information for acquiring reference shot pictures sent by the mobile terminal;
a fourth sending module, configured to send, according to the request information, a reference captured picture corresponding to the target classification tag to the mobile terminal;
the reference shot picture sent by the fourth sending module is different from the reference shot picture sent by the third sending module.
16. The server according to claim 14, further comprising:
the fifth receiving module is used for receiving second feedback information which is sent by the mobile terminal and used for determining the reference shot picture as a target shot picture;
the extraction module is used for extracting the outline of the simulated object in the target shot picture according to the second feedback information to obtain the outline information of the simulated object in the target shot picture;
a fifth sending module, configured to send the contour information to the mobile terminal;
if one reference shot picture is sent to the mobile terminal by the third sending module, the target shot picture is the reference shot picture; and if the number of the reference shot pictures sent to the mobile terminal by the third sending module is at least two, the target shot picture is one of the reference shot pictures.
CN201710844653.1A 2017-09-15 2017-09-15 Photographing method, mobile terminal and server Active CN107734142B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710844653.1A CN107734142B (en) 2017-09-15 2017-09-15 Photographing method, mobile terminal and server


Publications (2)

Publication Number Publication Date
CN107734142A CN107734142A (en) 2018-02-23
CN107734142B true CN107734142B (en) 2020-05-05

Family

ID=61207625

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710844653.1A Active CN107734142B (en) 2017-09-15 2017-09-15 Photographing method, mobile terminal and server

Country Status (1)

Country Link
CN (1) CN107734142B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875820A (en) * 2018-06-08 2018-11-23 Oppo广东移动通信有限公司 Information processing method and device, electronic equipment, computer readable storage medium
CN108898163B (en) * 2018-06-08 2022-05-13 Oppo广东移动通信有限公司 Information processing method and device, electronic equipment and computer readable storage medium
CN110766602A (en) * 2018-07-25 2020-02-07 中兴通讯股份有限公司 Photographing method and device for automatically matching props
CN109120851B (en) * 2018-09-21 2020-09-22 维沃移动通信有限公司 Image processing method and device and electronic equipment
CN114025097B (en) * 2020-03-09 2023-12-12 Oppo广东移动通信有限公司 Composition guidance method, device, electronic equipment and storage medium
US11297226B1 (en) * 2020-10-01 2022-04-05 Black Sesame Technologies Inc. Photo taking feedback system
CN114697539A (en) * 2020-12-31 2022-07-01 深圳市万普拉斯科技有限公司 Photographing recommendation method and device, electronic equipment and storage medium
CN112732961A (en) * 2021-01-05 2021-04-30 维沃移动通信有限公司 Image classification method and device
CN112887610A (en) * 2021-01-27 2021-06-01 维沃移动通信有限公司 Shooting method, shooting device, electronic equipment and storage medium
CN113411505B (en) * 2021-08-19 2021-11-09 深圳康易世佳科技有限公司 Photographing control method and device and storage medium
CN113840085A (en) * 2021-09-02 2021-12-24 北京城市网邻信息技术有限公司 Vehicle source information acquisition method and device, electronic equipment and readable medium
CN117750196B (en) * 2024-02-10 2024-05-28 苔花科迈(西安)信息技术有限公司 Data acquisition method and device of underground drilling site mobile camera device based on template

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103945129A (en) * 2014-04-30 2014-07-23 深圳市中兴移动通信有限公司 Photographing-preview picture composition instruction method and system based on mobile terminal
CN106101536A (en) * 2016-06-22 2016-11-09 维沃移动通信有限公司 A kind of photographic method and mobile terminal


Also Published As

Publication number Publication date
CN107734142A (en) 2018-02-23

Similar Documents

Publication Publication Date Title
CN107734142B (en) Photographing method, mobile terminal and server
CN107528938B (en) Video call method, terminal and computer readable storage medium
CN106060406B (en) Photographing method and mobile terminal
CN107197169B (en) high dynamic range image shooting method and mobile terminal
CN105827952B (en) A kind of photographic method and mobile terminal removing specified object
EP3661187A1 (en) Photography method and mobile terminal
KR102013331B1 (en) Terminal device and method for synthesizing a dual image in device having a dual camera
CN107509030B (en) focusing method and mobile terminal
CN107678644B (en) Image processing method and mobile terminal
CN106657793B (en) A kind of image processing method and mobile terminal
CN106056533B (en) A kind of method and terminal taken pictures
CN107172346B (en) Virtualization method and mobile terminal
CN107659722B (en) Image selection method and mobile terminal
CN112954210B (en) Photographing method and device, electronic equipment and medium
JP2017531330A (en) Picture processing method and apparatus
CN106060422B (en) A kind of image exposure method and mobile terminal
CN106648382B (en) A kind of picture browsing method and mobile terminal
CN106791437B (en) Panoramic image shooting method and mobile terminal
CN107172347B (en) Photographing method and terminal
CN107959789B (en) Image processing method and mobile terminal
CN111159449B (en) Image display method and electronic equipment
CN107748615B (en) Screen control method and device, storage medium and electronic equipment
CN107592458B (en) Shooting method and mobile terminal
CN107483821B (en) Image processing method and mobile terminal
EP3806443A1 (en) Tracking photographing method and apparatus, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant