CN109379531B - Shooting method and mobile terminal - Google Patents

Shooting method and mobile terminal

Info

Publication number
CN109379531B
Authority
CN
China
Prior art keywords
subject object
subject
target
objects
preview image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811152073.7A
Other languages
Chinese (zh)
Other versions
CN109379531A (en)
Inventor
秦帅 (Qin Shuai)
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201811152073.7A priority Critical patent/CN109379531B/en
Publication of CN109379531A publication Critical patent/CN109379531A/en
Application granted granted Critical
Publication of CN109379531B publication Critical patent/CN109379531B/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces
    • H04N23/80: Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)
  • Studio Devices (AREA)

Abstract

An embodiment of the present invention provides a shooting method comprising the following steps: displaying a first preview image and identifying a plurality of first subject objects in the first preview image; receiving at least one target subject object selected by a user; and, when a shooting instruction is received, reconstructing the regions corresponding to the first subject objects other than the target subject object and generating a shot image in which the subject object is the target subject object. The method can automatically filter unwanted background figures out of the shot image, so the user does not need to remove them with image-editing software, which simplifies the operation steps and improves the user experience.

Description

Shooting method and mobile terminal
Technical Field
The present invention relates to the field of mobile terminal technologies, and in particular, to a shooting method and a mobile terminal.
Background
With the proliferation of smart mobile terminals, photographing with a mobile terminal has become an indispensable part of people's daily lives.
When people take pictures, background figures such as passers-by inevitably appear in the shooting scene, which degrades the overall effect and experience of the shot. At present, such situations have to be corrected afterwards with image-editing software.
This way of filtering background figures out of a picture is therefore cumbersome to operate and degrades the user experience.
Disclosure of Invention
Embodiments of the present invention provide a shooting method and a mobile terminal, aiming to solve the prior-art problem that filtering background figures out of a shot picture requires complicated operations.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a shooting method, where the method includes: displaying a first preview image, identifying a plurality of first subject objects in the first preview image; receiving at least one target subject object selected by a user; when a shooting instruction is received, reconstructing regions corresponding to other first subject objects except the target subject object, and generating a shooting image, wherein the subject object in the shooting image is the target subject object.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes: the first identification module is used for displaying the first preview image and identifying a plurality of first body objects in the first preview image; the receiving module is used for receiving at least one target subject object selected by a user; and the image processing module is used for reconstructing areas corresponding to other first subject objects except the target subject object when receiving a shooting instruction, and generating a shot image, wherein the subject object in the shot image is the target subject object.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including a processor, a memory, and a computer program stored on the memory and operable on the processor, where the computer program, when executed by the processor, implements the steps of the shooting method.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the shooting method.
In the embodiment of the present invention, a first preview image is displayed and a plurality of first subject objects in the first preview image are identified; at least one target subject object selected by a user is received; and when a shooting instruction is received, the regions corresponding to the first subject objects other than the target subject object are reconstructed to generate a shot image in which the subject object is the target subject object. Unwanted background figures can thus be filtered out of the shot image automatically, without the user having to remove them with image-editing software, which simplifies the operation steps and improves the user experience.
Drawings
Fig. 1 is a flowchart illustrating steps of a photographing method according to a first embodiment of the present invention;
Fig. 2 is a flowchart illustrating steps of a photographing method according to a second embodiment of the present invention;
Fig. 3 is a block diagram of a mobile terminal according to a third embodiment of the present invention;
Fig. 4 is a block diagram of a mobile terminal according to a fourth embodiment of the present invention;
Fig. 5 is a block diagram of a mobile terminal according to a fifth embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are some, rather than all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort fall within the scope of the present invention.
Example one
Referring to fig. 1, a flowchart illustrating steps of a photographing method according to a first embodiment of the present invention is shown.
The shooting method provided by the embodiment of the invention comprises the following steps:
step 101: a first preview image is displayed, and a plurality of first subject objects in the first preview image are identified.
When a front camera or a rear camera of the mobile terminal is used for shooting, a first preview image is displayed in the shooting interface, and a plurality of first subject objects in the first preview image are recognized using image recognition, where the first subject objects are all of the person images in the first preview image.
Step 102: at least one target subject object selected by a user is received.
The user can select each first subject object in the shooting interface, and each selected first subject object is used as a target subject object.
Step 103: when a photographing instruction is received, regions corresponding to other first subject objects except the target subject object are reconstructed, and a photographed image is generated.
Wherein, the subject object in the shot image is the target subject object.
When a shooting instruction for the first preview image is received, the regions corresponding to the first subject objects other than the target subject objects are reconstructed, and a shot image is generated; the generated shot image contains only the target subject objects as its subjects.
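Steps 101 to 103 can be sketched in code as follows. This is a minimal illustration, not the patent's implementation: the detector is a hard-coded stand-in for the unspecified image-recognition step, the region reconstruction simply overwrites pixels with a background value, and all names and toy values are assumptions.

```python
from dataclasses import dataclass
from typing import List, Set

@dataclass(frozen=True)
class SubjectObject:
    """A detected person region in the preview image (hypothetical representation)."""
    object_id: int
    bbox: tuple  # (x, y, w, h) in pixels

def identify_subject_objects(preview) -> List[SubjectObject]:
    """Step 101 stand-in: a real implementation would run a person detector."""
    # Hard-coded detections, for illustration only.
    return [SubjectObject(0, (1, 1, 2, 2)), SubjectObject(1, (5, 1, 2, 2))]

def reconstruct_region(image, bbox, background_value=0):
    """Step 103 stand-in: overwrite a region with background content.
    A real implementation would reconstruct it from surrounding pixels."""
    x, y, w, h = bbox
    for row in range(y, y + h):
        for col in range(x, x + w):
            image[row][col] = background_value

def generate_shot_image(image, subjects: List[SubjectObject], target_ids: Set[int]):
    """Reconstruct every first subject object that is NOT a selected target."""
    for s in subjects:
        if s.object_id not in target_ids:
            reconstruct_region(image, s.bbox)
    return image

# Usage: an 8x8 toy "image" where 9 marks person pixels.
img = [[0] * 8 for _ in range(8)]
subjects = identify_subject_objects(img)
for s in subjects:
    x, y, w, h = s.bbox
    for r in range(y, y + h):
        for c in range(x, x + w):
            img[r][c] = 9
# Step 102: suppose the user selected subject 0 as the target.
shot = generate_shot_image(img, subjects, target_ids={0})
```

With subject 0 selected as the target, subject 1's region is wiped back to the background value, matching the claimed behavior that the shot image keeps only the target subject objects.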
In the embodiment of the present invention, a first preview image is displayed and a plurality of first subject objects in the first preview image are identified; at least one target subject object selected by a user is received; and when a shooting instruction is received, the regions corresponding to the first subject objects other than the target subject object are reconstructed to generate a shot image in which the subject object is the target subject object. Unwanted background figures can thus be filtered out of the shot image automatically, without the user having to remove them with image-editing software, which simplifies the operation steps and improves the user experience.
Example two
Referring to fig. 2, a flowchart illustrating steps of a photographing method according to a second embodiment of the present invention is shown.
The shooting method provided by the embodiment of the invention comprises the following steps:
step 201: a first preview image is displayed, and a plurality of first subject objects in the first preview image are identified.
When a front camera or a rear camera of the mobile terminal is used for image shooting, a first preview image is displayed in a shooting interface, and each first subject object in the first preview image is recognized according to an image recognition technology, wherein the first subject object is each person image in the first preview image.
Step 202: at least one target subject object selected by a user is received.
The user may select one or more of the first subject objects, and each selected first subject object is set as a target subject object, so that the remaining background figures can be filtered out later.
Step 203: when the first preview image changes to a second preview image, a plurality of second subject objects in the second preview image are identified.
When the mobile terminal moves, or when the first subject objects in the first preview image change (that is, other figures enter or leave the current preview picture), the first preview image becomes a second preview image, and the number of subject objects may increase or decrease accordingly. At this time, each second subject object in the second preview image is identified using image recognition, where the second subject objects are all of the person objects contained in the second preview image.
Step 204: determine the feature vector corresponding to each second subject object.
Each second subject object corresponds to a feature vector, and the feature vectors are used for matching the same individuals in different frames.
Step 205: calculate the Euclidean distance between each feature vector and the target feature vector corresponding to each target subject object.
The target feature vector of each target subject object selected by the user is determined, and the Euclidean distance between the feature vector corresponding to each second subject object and each target feature vector is calculated.
The Euclidean distance is the true distance between two points in an m-dimensional space, or equivalently the natural length of a vector (the distance from the point to the origin). In two and three dimensions, it is the actual distance between the two points.
Specifically, the Euclidean distance between the feature vector corresponding to each second subject object and the target feature vector corresponding to each target subject object is calculated by the following formula:

$$d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$$

where $\mathbf{x} = (x_1, \ldots, x_n)$ is the feature vector corresponding to a second subject object, and $\mathbf{y} = (y_1, \ldots, y_n)$ is the target feature vector corresponding to a target subject object.
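The distance computation translates directly into code; a minimal sketch, where the function name and toy feature vectors are illustrative assumptions:

```python
import math
from typing import Sequence

def euclidean_distance(x: Sequence[float], y: Sequence[float]) -> float:
    """d(x, y) = sqrt(sum_i (x_i - y_i)^2) between two feature vectors."""
    if len(x) != len(y):
        raise ValueError("feature vectors must have the same dimension")
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

# Distance between a second subject's feature vector and a target's (toy values).
d = euclidean_distance([1.0, 2.0, 2.0], [1.0, 0.0, 2.0])  # 2.0
```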
Step 206: for each second subject object, determine the minimum Euclidean distance among the Euclidean distances corresponding to that second subject object.
Step 207: the minimum euclidean distance is compared to a preset distance.
For each second subject object, the Euclidean distances to the target feature vectors are calculated in step 205, and the smallest of these distances is determined.
The minimum Euclidean distance is then compared with a preset distance: when it is smaller than the preset distance, the second subject object is one of the target subject objects; when it is greater than or equal to the preset distance, the second subject object matches none of the target subject objects, that is, it is a non-target subject object.
Step 208: and when the minimum Euclidean distance is smaller than the preset distance, determining that the second subject object is any one of the target subject objects.
When the minimum Euclidean distance is greater than or equal to the preset distance, it is determined that the second subject object does not belong to any of the target subject objects. When no second subject object belongs to any of the target subject objects, prompt information is output, prompting the user to reselect the target subject object, and the method returns to the step of receiving at least one target subject object selected by the user.
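Steps 206 to 208 can be sketched as follows; the target identifiers, feature vectors, and preset distance are illustrative assumptions, not values from the patent:

```python
import math
from typing import Dict, Optional, Sequence

def euclidean(x: Sequence[float], y: Sequence[float]) -> float:
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def match_to_target(second_feature: Sequence[float],
                    target_features: Dict[str, Sequence[float]],
                    preset_distance: float) -> Optional[str]:
    """Find the minimum Euclidean distance from a second subject object's
    feature vector to the target feature vectors; if it is below the preset
    distance, the second subject object is that target, otherwise it is a
    non-target subject object (None)."""
    best_id, best_d = None, float("inf")
    for target_id, tf in target_features.items():
        d = euclidean(second_feature, tf)
        if d < best_d:
            best_id, best_d = target_id, d
    return best_id if best_d < preset_distance else None

targets = {"target_a": [0.0, 0.0], "target_b": [10.0, 10.0]}
m1 = match_to_target([0.5, 0.5], targets, preset_distance=1.0)  # "target_a"
m2 = match_to_target([5.0, 5.0], targets, preset_distance=1.0)  # None
```

When every second subject object maps to None, the prompt to reselect targets would be triggered.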
Step 209: when a photographing instruction is received, the first subject objects other than the target subject objects in the first preview image are delineated.
Each delineated first subject object other than the target subject objects corresponds to a region.
When the first preview image has not changed, the first subject objects other than the target subject objects in the first preview image are delineated; when the first preview image has changed into the second preview image, the second subject objects other than the target subject objects in the second preview image are delineated.
Step 210: image reconstruction is performed on each delineated region based on the background of the first preview image to generate a shot image.
The first subject objects (or second subject objects) other than the target subject objects are filtered out, and the filtered regions are reconstructed from the background of the first (or second) preview image to generate the shot image.
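The patent does not prescribe a particular reconstruction algorithm. The sketch below makes the simplifying assumption that a delineated region can be filled from the background pixels immediately surrounding it; a real implementation would use proper image inpainting:

```python
def fill_region_from_border(image, bbox):
    """Replace the pixels inside bbox with the mean of the background pixels
    bordering the region (a crude stand-in for background reconstruction)."""
    x, y, w, h = bbox
    border = []
    for row in range(y - 1, y + h + 1):
        for col in range(x - 1, x + w + 1):
            inside = y <= row < y + h and x <= col < x + w
            in_bounds = 0 <= row < len(image) and 0 <= col < len(image[0])
            if in_bounds and not inside:
                border.append(image[row][col])
    fill = sum(border) / len(border)
    for row in range(y, y + h):
        for col in range(x, x + w):
            image[row][col] = fill
    return image

# Usage: a 6x6 background of 4.0 with a filtered-out 2x1 person region marked 99.0.
img = [[4.0] * 6 for _ in range(6)]
img[2][2] = img[2][3] = 99.0
fill_region_from_border(img, (2, 2, 2, 1))  # region is refilled from the background
```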
In the embodiment of the present invention, a first preview image is displayed and a plurality of first subject objects in the first preview image are identified; at least one target subject object selected by a user is received; and when a shooting instruction is received, the regions corresponding to the first subject objects other than the target subject object are reconstructed to generate a shot image in which the subject object is the target subject object. Unwanted background figures can thus be filtered out of the shot image automatically, without the user having to remove them with image-editing software, which simplifies the operation steps and improves the user experience. In addition, when the first preview image changes into a second preview image, the target subject objects selected by the user are locked via the Euclidean distance, so the user does not need to select the targets again; each target subject object is determined automatically, reducing the time consumed by the user.
EXAMPLE III
Referring to fig. 3, a block diagram of a mobile terminal according to a third embodiment of the present invention is shown.
The mobile terminal provided by the embodiment of the invention comprises: a first identifying module 301, configured to display a first preview image, and identify a plurality of first subject objects in the first preview image; a receiving module 302, configured to receive at least one target subject object selected by a user; the image processing module 303 is configured to, when a shooting instruction is received, reconstruct a region corresponding to a first subject object other than the target subject object, and generate a shot image, where a subject object in the shot image is the target subject object.
When a front camera or a rear camera of the mobile terminal is used for image shooting, a first preview image is displayed in a shooting interface, and a first recognition module recognizes a plurality of first subject objects in the first preview image, wherein the first subject objects are all person images in the first preview image. The receiving module receives the selection operation of the user on each first subject object in the shooting interface, and takes each selected first subject object as a target subject object. When a shooting instruction for the first preview image is received, the image processing module reconstructs regions corresponding to other first subject objects except the target subject object and generates a shot image.
In the embodiment of the present invention, a first preview image is displayed and a plurality of first subject objects in the first preview image are identified; at least one target subject object selected by a user is received; and when a shooting instruction is received, the regions corresponding to the first subject objects other than the target subject object are reconstructed to generate a shot image in which the subject object is the target subject object. Unwanted background figures can thus be filtered out of the shot image automatically, without the user having to remove them with image-editing software, which simplifies the operation steps and improves the user experience.
Example four
Referring to fig. 4, a block diagram of a mobile terminal according to a fourth embodiment of the present invention is shown.
The mobile terminal provided by the embodiment of the invention comprises: a first recognition module 401, configured to display a first preview image, and recognize a plurality of first subject objects in the first preview image; a receiving module 402, configured to receive at least one target subject object selected by a user; an image processing module 403, configured to, when receiving a shooting instruction, reconstruct a region corresponding to a first subject object other than the target subject object, and generate a shot image, where the subject object in the shot image is the target subject object.
Preferably, the mobile terminal further includes: a second identifying module 404, configured to identify a plurality of second subject objects in a second preview image when the first preview image changes into the second preview image after the receiving module 402 receives at least one target subject object selected by a user; a first determining module 405, configured to determine a feature vector corresponding to each second subject object; a calculating module 406, configured to calculate euclidean distances between each feature vector and a target feature vector corresponding to each target subject object; a second determining module 407, configured to determine, for each second subject object, whether the second subject object is any one of the target subject objects according to the euclidean distances corresponding to the second subject object.
Preferably, the second determining module 407 includes: a first determining sub-module 4071, configured to determine, for each second subject object, a minimum euclidean distance among the euclidean distances corresponding to the second subject object; a comparison sub-module 4072, configured to compare the minimum euclidean distance with a preset distance; a second determining sub-module 4073, configured to determine that the second subject object is any one of the target subject objects when the minimum euclidean distance is smaller than the preset distance.
Preferably, the mobile terminal further includes: a third determining module 408, configured to determine that the second subject object does not belong to any one of the target subject objects when the minimum euclidean distance is greater than or equal to a preset distance after the comparing sub-module 4072 compares the minimum euclidean distance with the preset distance.
Preferably, the mobile terminal further includes: an output module 409, configured to, after the second determining module 407 determines, for each second subject object, whether the second subject object is any one of the target subject objects according to the euclidean distance corresponding to the second subject object, output prompt information when none of the second subject objects belongs to any one of the target subject objects, where the prompt information is used to prompt a user to reselect a target subject object; a returning module 410 for returning to the step of executing the receiving the at least one target subject object selected by the user.
Preferably, the image processing module 403 includes: a delineating sub-module 4031, configured to, when a shooting instruction is received, delineate first subject objects other than the target subject object in the first preview image, where each of the delineated first subject objects other than the target subject object corresponds to an area; a generating sub-module 4032, configured to perform image reconstruction on each defined region based on the background of the first preview image, and generate the captured image.
The mobile terminal provided in the embodiment of the present invention can implement each process implemented by the mobile terminal in the method embodiments of fig. 1 to fig. 2, and is not described herein again to avoid repetition.
In the embodiment of the present invention, a first preview image is displayed and a plurality of first subject objects in the first preview image are identified; at least one target subject object selected by a user is received; and when a shooting instruction is received, the regions corresponding to the first subject objects other than the target subject object are reconstructed to generate a shot image in which the subject object is the target subject object. Unwanted background figures can thus be filtered out of the shot image automatically, without the user having to remove them with image-editing software, which simplifies the operation steps and improves the user experience. In addition, when the first preview image changes into a second preview image, the target subject objects selected by the user are locked via the Euclidean distance, so the user does not need to select the targets again; each target subject object is determined automatically, reducing the time consumed by the user.
EXAMPLE five
Referring to fig. 5, a hardware structure diagram of a mobile terminal for implementing various embodiments of the present invention is shown.
The mobile terminal 500 includes, but is not limited to: radio frequency unit 501, network module 502, audio output unit 503, input unit 504, sensor 505, display unit 506, user input unit 507, interface unit 508, memory 509, processor 510, and power supply 511. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 5 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
A processor 510 for displaying a first preview image, identifying a plurality of first subject objects in the first preview image; receiving at least one target subject object selected by a user; when a shooting instruction is received, reconstructing areas corresponding to other first subject objects except the target subject object, and generating a shooting image, wherein the subject object in the shooting image is the target subject object.
In the embodiment of the present invention, a first preview image is displayed and a plurality of first subject objects in the first preview image are identified; at least one target subject object selected by a user is received; and when a shooting instruction is received, the regions corresponding to the first subject objects other than the target subject object are reconstructed to generate a shot image in which the subject object is the target subject object. Unwanted background figures can thus be filtered out of the shot image automatically, without the user having to remove them with image-editing software, which simplifies the operation steps and improves the user experience.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 501 may be used for receiving and sending signals during a messaging or call process; specifically, it receives downlink data from a base station and forwards it to the processor 510 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 501 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 501 can also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides wireless broadband internet access to the user through the network module 502, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 503 may convert audio data received by the radio frequency unit 501 or the network module 502 or stored in the memory 509 into an audio signal and output as sound. Also, the audio output unit 503 may also provide audio output related to a specific function performed by the mobile terminal 500 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 503 includes a speaker, a buzzer, a receiver, and the like.
The input unit 504 is used to receive audio or video signals. The input unit 504 may include a Graphics Processing Unit (GPU) 5041 and a microphone 5042. The graphics processor 5041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 506, stored in the memory 509 (or another storage medium), or transmitted via the radio frequency unit 501 or the network module 502. The microphone 5042 can receive sound and process it into audio data. In the phone-call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 501 and output.
The mobile terminal 500 also includes at least one sensor 505, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 5061 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 5061 and/or a backlight when the mobile terminal 500 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 505 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 506 is used to display information input by the user or information provided to the user. The Display unit 506 may include a Display panel 5061, and the Display panel 5061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 507 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 507 includes a touch panel 5071 and other input devices 5072. Touch panel 5071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 5071 using a finger, stylus, or any suitable object or attachment). The touch panel 5071 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 510, and receives and executes commands sent by the processor 510. In addition, the touch panel 5071 may be implemented by various types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 5071, the user input unit 507 may include other input devices 5072. Specifically, the other input devices 5072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described herein.
Further, the touch panel 5071 may be overlaid on the display panel 5061, and when the touch panel 5071 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 510 to determine the type of the touch event, and then the processor 510 provides a corresponding visual output on the display panel 5061 according to the type of the touch event. Although in fig. 5, the touch panel 5071 and the display panel 5061 are two independent components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 5071 and the display panel 5061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 508 is an interface through which an external device is connected to the mobile terminal 500. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 508 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 500 or may be used to transmit data between the mobile terminal 500 and external devices.
The memory 509 may be used to store software programs as well as various data. The memory 509 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook), and the like. Further, the memory 509 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 510 is the control center of the mobile terminal. It connects the various parts of the entire mobile terminal using various interfaces and lines, and performs the various functions of the mobile terminal and processes data by running or executing the software programs and/or modules stored in the memory 509 and calling the data stored in the memory 509, thereby monitoring the mobile terminal as a whole. The processor 510 may include one or more processing units; preferably, the processor 510 may integrate an application processor, which mainly handles the operating system, user interface, application programs, and the like, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 510.
The mobile terminal 500 may further include a power supply 511 (e.g., a battery) for supplying power to the various components. Preferably, the power supply 511 may be logically connected to the processor 510 via a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
In addition, the mobile terminal 500 includes some functional modules that are not shown, and thus, are not described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 510, a memory 509, and a computer program stored in the memory 509 and executable on the processor 510. When executed by the processor 510, the computer program implements the processes of the above shooting method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements the processes of the above shooting method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to these embodiments, which are illustrative rather than restrictive, and it will be apparent to those skilled in the art that many modifications and variations can be made without departing from the spirit of the invention and the scope of the appended claims.

Claims (7)

1. A shooting method applied to a mobile terminal, characterized by comprising the following steps:
displaying a first preview image, identifying a plurality of first subject objects in the first preview image;
receiving at least one target subject object selected by a user;
when a shooting instruction is received, reconstructing the areas corresponding to the first subject objects other than the target subject object, and generating a shot image, wherein the subject object in the shot image is the target subject object;
wherein, after the step of receiving at least one target subject object selected by a user, the method further comprises:
identifying a plurality of second subject objects in a second preview image when the first preview image changes to the second preview image;
determining a feature vector corresponding to each second subject object;
respectively calculating Euclidean distances between each feature vector and a target feature vector corresponding to each target subject object;
for each second subject object, determining whether the second subject object is any one of the target subject objects according to the Euclidean distances corresponding to the second subject object;
after the step of determining, for each second subject object, whether the second subject object is any one of the target subject objects according to the Euclidean distances corresponding to the second subject object, the method further includes:
when none of the second subject objects belongs to any one of the target subject objects, outputting prompt information, wherein the prompt information is used for prompting the user to reselect a target subject object;
and returning to the step of receiving at least one target subject object selected by a user.
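The matching procedure of claims 1 to 3 (compute the Euclidean distance from each second subject object's feature vector to every target feature vector, take the minimum, and accept the match only if it is below the preset distance) can be sketched as follows. The 2-dimensional feature vectors, the `match_subject` helper, and the value of `PRESET_DISTANCE` are all illustrative assumptions; the patent does not fix the feature extractor, the vector dimension, or the threshold value.

```python
import numpy as np

# Illustrative threshold for the "preset distance" of claim 2; the patent
# does not specify a value, so this number is an assumption.
PRESET_DISTANCE = 0.8

def match_subject(second_feature, target_features, preset_distance=PRESET_DISTANCE):
    """Return the index of the matching target subject object, or None.

    Sketch of claims 1-3: compute the Euclidean distance from the second
    subject object's feature vector to each target feature vector, find the
    minimum, and accept the match only if it is below the preset distance.
    """
    dists = [np.linalg.norm(second_feature - t) for t in target_features]
    i = int(np.argmin(dists))          # index of the minimum Euclidean distance
    if dists[i] < preset_distance:
        return i                       # second subject object is target i
    return None                        # matches no target subject (claim 3)
```

For example, with targets at `[0, 0]` and `[1, 1]`, a second subject with feature `[0.1, 0]` would match target 0, while a feature far from both targets would return `None`, triggering the prompt to reselect a target subject object.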
2. The method of claim 1, wherein the step of determining, for each second subject object, whether the second subject object is any one of the target subject objects according to the Euclidean distances corresponding to the second subject object comprises:
for each second subject object, determining a minimum Euclidean distance in the Euclidean distances corresponding to the second subject object;
comparing the minimum Euclidean distance with a preset distance;
and when the minimum Euclidean distance is smaller than the preset distance, determining that the second subject object is any one of the target subject objects.
3. The method of claim 2, wherein after the step of comparing the minimum Euclidean distance with a preset distance, the method further comprises:
and when the minimum Euclidean distance is greater than or equal to the preset distance, determining that the second subject object does not belong to any one of the target subject objects.
4. The method according to claim 1, wherein the step of reconstructing the areas corresponding to the first subject objects other than the target subject object and generating a shot image in which the subject object is the target subject object comprises:
when a shooting instruction is received, delineating, in the first preview image, the first subject objects other than the target subject object, wherein each delineated first subject object other than the target subject object corresponds to an area;
and performing image reconstruction on each delineated area based on the background of the first preview image to generate the shot image.
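A minimal sketch of the reconstruction step in claim 4: the delineated areas of the non-target subjects are removed and filled from the background. Filling each area with the mean colour of the remaining background is a deliberately crude stand-in, since the patent leaves the actual reconstruction (e.g., inpainting) method open; the `reconstruct_regions` helper and its mask representation are assumptions for illustration.

```python
import numpy as np

def reconstruct_regions(image, masks):
    """Remove non-target subject objects and fill their areas from the background.

    `image` is an HxWx3 array; `masks` is a list of HxW boolean arrays, one per
    delineated non-target first subject object (claim 4). Each masked area is
    filled with the per-channel mean colour of the pixels outside every mask --
    a crude placeholder for a real background-based inpainting algorithm.
    """
    out = image.astype(np.float64).copy()
    removed = np.zeros(image.shape[:2], dtype=bool)
    for m in masks:
        removed |= m                       # union of all delineated areas
    background = out[~removed]             # pixels outside every area, shape (N, 3)
    fill = background.mean(axis=0)         # per-channel background mean colour
    out[removed] = fill                    # reconstruct the delineated areas
    return out.astype(image.dtype)
```

In practice the mean-colour fill would be replaced by a proper inpainting method (e.g., exemplar-based or diffusion-based) so that texture, not just colour, is propagated from the background into the reconstructed areas.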
5. A mobile terminal, characterized in that the mobile terminal comprises:
the first identification module is used for displaying a first preview image and identifying a plurality of first main body objects in the first preview image;
the receiving module is used for receiving at least one target subject object selected by a user;
the image processing module is used for reconstructing, when a shooting instruction is received, the areas corresponding to the first subject objects other than the target subject object, and generating a shot image, wherein the subject object in the shot image is the target subject object;
wherein the mobile terminal further comprises:
a second recognition module for recognizing a plurality of second subject objects in the second preview image when the first preview image is changed into the second preview image after the receiving module receives the at least one target subject object selected by the user;
the first determining module is used for determining a feature vector corresponding to each second subject object;
the calculation module is used for respectively calculating the Euclidean distance between each feature vector and the target feature vector corresponding to each target subject object;
the second determining module is used for determining, for each second subject object, whether the second subject object is any one of the target subject objects according to the Euclidean distances corresponding to the second subject object;
the output module is used for outputting prompt information when none of the second subject objects belongs to any one of the target subject objects after the second determining module makes the determination for each second subject object, wherein the prompt information is used for prompting the user to reselect a target subject object;
and the return module is used for returning to and executing the step of receiving at least one target subject object selected by the user.
6. A mobile terminal, characterized in that it comprises a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, carries out the steps of the photographing method according to any of claims 1 to 4.
7. A computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, realizes the steps of the photographing method according to any one of claims 1 to 4.
CN201811152073.7A 2018-09-29 2018-09-29 Shooting method and mobile terminal Active CN109379531B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811152073.7A CN109379531B (en) 2018-09-29 2018-09-29 Shooting method and mobile terminal

Publications (2)

Publication Number Publication Date
CN109379531A CN109379531A (en) 2019-02-22
CN109379531B true CN109379531B (en) 2021-07-16

Family

ID=65402587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811152073.7A Active CN109379531B (en) 2018-09-29 2018-09-29 Shooting method and mobile terminal

Country Status (1)

Country Link
CN (1) CN109379531B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022160331A (en) * 2021-04-06 2022-10-19 キヤノン株式会社 Image processing apparatus and control method for the same

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2058763A1 (en) * 2007-11-02 2009-05-13 Core Logic, Inc. Apparatus for digital image stabilization using object tracking and method thereof
CN107509031A (en) * 2017-08-31 2017-12-22 广东欧珀移动通信有限公司 Image processing method, device, mobile terminal and computer-readable recording medium
CN108540724A (en) * 2018-04-28 2018-09-14 维沃移动通信有限公司 A kind of image pickup method and mobile terminal

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622581B (en) * 2012-02-20 2013-09-25 华焦宝 Face detection method and face detection system
CN104735364A (en) * 2013-12-19 2015-06-24 中兴通讯股份有限公司 Photo shooting method and device
JP2016103684A (en) * 2014-11-27 2016-06-02 キヤノン株式会社 Imaging device
CN105847771A (en) * 2015-01-16 2016-08-10 联想(北京)有限公司 Image processing method and electronic device
CN106603903A (en) * 2015-10-15 2017-04-26 中兴通讯股份有限公司 Photo processing method and apparatus
CN105513104A (en) * 2015-12-01 2016-04-20 小米科技有限责任公司 Picture taking method, device and system
CN106358069A (en) * 2016-10-31 2017-01-25 维沃移动通信有限公司 Video data processing method and mobile terminal
CN106791393B (en) * 2016-12-20 2019-05-17 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
JP6604522B2 (en) * 2017-01-30 2019-11-13 京セラドキュメントソリューションズ株式会社 Image forming system, image forming apparatus, and guide program
JP2018137726A (en) * 2017-02-22 2018-08-30 キヤノン株式会社 Imaging apparatus, control method, and program
CN107343149B (en) * 2017-07-31 2019-08-20 维沃移动通信有限公司 A kind of photographic method and mobile terminal
CN107820013A (en) * 2017-11-24 2018-03-20 上海创功通讯技术有限公司 A kind of photographic method and terminal
CN108170732A (en) * 2017-12-14 2018-06-15 厦门市美亚柏科信息股份有限公司 Face picture search method and computer readable storage medium
CN108184051A (en) * 2017-12-22 2018-06-19 努比亚技术有限公司 A kind of main body image pickup method, equipment and computer readable storage medium
CN108509904A (en) * 2018-03-30 2018-09-07 百度在线网络技术(北京)有限公司 Method and apparatus for generating information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Segmentation and reconstruction of non-occluded double-overlapping apple targets by fusing K-means and Ncut algorithms; Wang Dandan et al.; Transactions of the Chinese Society of Agricultural Engineering; 2015-05-23; full text *

Similar Documents

Publication Publication Date Title
CN110740259B (en) Video processing method and electronic equipment
CN108989672B (en) Shooting method and mobile terminal
CN109461117B (en) Image processing method and mobile terminal
CN108038825B (en) Image processing method and mobile terminal
CN107730460B (en) Image processing method and mobile terminal
CN109005336B (en) Image shooting method and terminal equipment
CN108683850B (en) Shooting prompting method and mobile terminal
CN107749046B (en) Image processing method and mobile terminal
CN108881544B (en) Photographing method and mobile terminal
CN111401463B (en) Method for outputting detection result, electronic equipment and medium
CN111031234B (en) Image processing method and electronic equipment
CN109618218B (en) Video processing method and mobile terminal
CN109448069B (en) Template generation method and mobile terminal
CN108174110B (en) Photographing method and flexible screen terminal
CN110636225B (en) Photographing method and electronic equipment
CN109727212B (en) Image processing method and mobile terminal
CN108924413B (en) Shooting method and mobile terminal
CN109104573B (en) Method for determining focusing point and terminal equipment
CN109462727B (en) Filter adjusting method and mobile terminal
CN108259756B (en) Image shooting method and mobile terminal
CN110519443B (en) Screen lightening method and mobile terminal
CN107798662B (en) Image processing method and mobile terminal
CN109819331B (en) Video call method, device and mobile terminal
CN109379531B (en) Shooting method and mobile terminal
CN109634503B (en) Operation response method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant