CN106780394B - Image sharpening method and terminal - Google Patents


Info

Publication number
CN106780394B
CN106780394B (application CN201611248625.5A)
Authority
CN
China
Prior art keywords
image
sharpening
sharpened
area
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611248625.5A
Other languages
Chinese (zh)
Other versions
CN106780394A (en)
Inventor
姬向东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201611248625.5A priority Critical patent/CN106780394B/en
Publication of CN106780394A publication Critical patent/CN106780394A/en
Application granted granted Critical
Publication of CN106780394B publication Critical patent/CN106780394B/en
Current legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the invention discloses an image sharpening method comprising the following steps: identifying face information contained in a captured image using a face recognition method, and extracting biometric feature information from the face information; determining that the image corresponding to the biometric information in the captured image is a first sharpening region, and that the image outside it is a second sharpening region; receiving a first sharpening instruction for the first sharpening region and acquiring a first sharpened image; receiving a second sharpening instruction for the second sharpening region and acquiring a second sharpened image; and fusing the first and second sharpened images to obtain a fused sharpened image. An embodiment of the invention also discloses an image sharpening terminal. The invention solves the problem of over-sharpening when a portrait is sharpened and keeps the degree of portrait sharpening reasonable.

Description

Image sharpening method and terminal
Technical Field
The invention relates to the field of image processing, in particular to an image sharpening method and a terminal.
Background
Image sharpening compensates for image contours, enhancing the edges and grey-level transitions of an image so that it appears clearer. In portrait shooting, moderate image sharpening makes the portrait contour crisper, and sharpening the facial information of the portrait makes the face clearer and richer in detail. However, over-sharpening the face increases noise and degrades the imaging result. The reason is that sharpening increases image contrast in edge areas, but noise also has edges, so over-sharpening sharpens the edges of the noise and makes the image noise more obvious.
In the prior art, sharpening is generally realized by high-pass filtering in the spatial or frequency domain. After sharpening, the edges of the entire image become clearer; in particular, for an image with blurred boundaries and contours, both the boundaries and the details become clearer after processing. However, such sharpening acts on the entire image and does not distinguish between the different kinds of image information it contains, which can make local distortion or noise in the image more noticeable.
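The whole-image high-pass sharpening described above can be illustrated with a minimal unsharp-mask sketch (an assumed NumPy implementation using a box blur as the low-pass estimate; the helper names are illustrative, not part of the disclosure):

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Box blur with edge padding (k must be odd); used as the low-pass estimate."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img: np.ndarray, amount: float = 1.0) -> np.ndarray:
    """Classic high-pass sharpening: img + amount * (img - blur(img))."""
    high_pass = img.astype(np.float64) - box_blur(img)
    return np.clip(img + amount * high_pass, 0, 255)
```

Adding the high-pass residual back with a larger `amount` raises edge contrast everywhere, including around noise, which is exactly the uniform over-sharpening problem the disclosure targets.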
Disclosure of Invention
The main aim of the invention is to provide an image sharpening method and terminal that solve the problem of over-sharpening when an image is sharpened.
To achieve this aim, the technical solution of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image sharpening method, where the method includes:
identifying face information contained in a captured image using a face recognition method, and extracting biometric feature information from the face information;
determining that the image corresponding to the biometric information in the captured image is a first sharpening region, and that the image outside it is a second sharpening region;
receiving a first sharpening instruction for the first sharpening region, and acquiring a first sharpened image;
receiving a second sharpening instruction for the second sharpening region, and acquiring a second sharpened image;
and fusing the first sharpened image and the second sharpened image to obtain a fused sharpened image.
Optionally, identifying the face information contained in the captured image by a face recognition method and extracting the biometric information from the face information specifically includes:
when face information contained in the captured image is identified by the face recognition method, extracting the biometric feature information from the face information using a biometric recognition method.
Optionally, determining that the image corresponding to the biometric information in the captured image is a first sharpened region, and that the image outside it is a second sharpened region, specifically includes:
locking the image corresponding to the biometric information in the captured image according to that information, and determining it to be the first sharpened region;
and determining the image area outside the first sharpened region to be the second sharpened region by means of image segmentation.
Optionally, receiving a first sharpening instruction for the first sharpening region and acquiring a first sharpened image specifically includes:
receiving a first sharpening instruction for the first sharpening region, the first sharpening instruction indicating that the degree of sharpening of the first sharpened region is to be increased;
increasing the sharpening degree of the first sharpening region according to the first sharpening instruction, and acquiring a first sharpened image, i.e. an image in which the sharpening degree of the first sharpened region has been increased.
Optionally, receiving a second sharpening instruction for the second sharpening region and acquiring a second sharpened image specifically includes:
receiving a second sharpening instruction for the second sharpening region, the second sharpening instruction indicating that the degree of sharpening of the second sharpened region is to be reduced;
reducing the sharpening degree of the second sharpening region according to the second sharpening instruction, and acquiring a second sharpened image, i.e. an image in which the sharpening degree of the second sharpened region has been reduced.
In a second aspect, an embodiment of the present invention provides an image sharpening terminal, the terminal including an extraction module, a determination module, a receiving module, an acquisition module, and an image fusion module; wherein:
the extraction module is used for identifying the face information contained in the shot image according to a face identification method and extracting the biological feature information in the face information;
the determining module is used for determining that an image corresponding to the biological characteristic information in the shot image is a first sharpening region, and an image except the image corresponding to the biological characteristic information in the shot image is a second sharpening region;
the receiving module is used for receiving a first sharpening instruction aiming at the first sharpening area;
the acquisition module is used for acquiring a first sharpened image;
the receiving module is further configured to receive a second sharpening instruction for the second sharpening region;
the acquisition module is further used for acquiring a second sharpened image;
the image fusion module is used for fusing the first sharpened image and the second sharpened image;
the acquisition module is further used for acquiring the fused sharpened image.
Optionally, the extracting module is configured to extract, when it is identified that the captured image includes facial information according to a face recognition method, biometric information in the facial information by using a biometric recognition method.
Optionally, the determining module is configured to lock an image corresponding to the biometric information in the captured image according to the biometric information, and determine that the image corresponding to the biometric information in the captured image is a first sharpened area;
and determining an image area except the first sharpened area as a second sharpened area by image segmentation.
Optionally, the receiving module is configured to receive a first sharpening instruction for the first sharpening region; wherein the first sharpening instruction indicates to increase a degree of sharpening of the first sharpened region;
the acquisition module is used for increasing the sharpening degree of the first sharpening region according to the first sharpening instruction and acquiring a first sharpened image, the first sharpened image being an image in which the sharpening degree of the first sharpened area has been increased.
Optionally, the receiving module is configured to receive a second sharpening instruction for the second sharpening region; wherein the second sharpening instruction indicates to reduce a degree of sharpening of the second sharpened region;
the obtaining module is configured to reduce a sharpening degree of a second sharpened area according to the second sharpening instruction, and obtain a second sharpened image; and the second sharpened image is an image with the sharpening degree of the second sharpened area reduced.
According to the image sharpening method and terminal provided by the embodiments of the invention, the biometric information in the face information is identified; the image corresponding to the biometric information is assigned to a first sharpening region, and the image outside it to a second sharpening region; the sharpening degree of the first region is increased while that of the second region is reduced; and the two results are fused to obtain the final image. This solves the problem of over-sharpening during portrait sharpening and keeps the degree of portrait sharpening reasonable.
Drawings
Fig. 1 is a schematic diagram of a hardware structure of a mobile terminal according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an image sharpening method according to an embodiment of the present invention;
fig. 3 is an interface diagram of the partition sharpening shooting function according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating a method for determining partitions according to an embodiment of the present invention;
fig. 5 is a diagram of an interface for implementing a partition sharpening function according to an embodiment of the present invention;
fig. 6 is a flowchart illustrating a process of increasing a sharpening degree of a first sharpened area according to a first embodiment of the present invention;
fig. 7 is a flowchart illustrating a process of reducing the sharpening degree of a second sharpened area according to an embodiment of the invention;
fig. 8 is a block diagram of a structure of an image sharpening terminal according to a second embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
A mobile terminal implementing various embodiments of the present invention will now be described with reference to fig. 1. In the following description, suffixes such as "module", "component", or "unit" are used to denote elements only to facilitate the explanation of the present invention and have no specific meaning in themselves; thus "module" and "component" may be used interchangeably.
The mobile terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a Personal Digital Assistant (PDA), a tablet computer (PAD), a Portable Multimedia Player (PMP), a navigation device, etc., and a stationary terminal such as a digital TV, a desktop computer, etc. In the following, it is assumed that the terminal is a mobile terminal. However, it will be understood by those skilled in the art that the configuration according to the embodiment of the present invention can be applied to a fixed type terminal in addition to elements particularly used for moving purposes.
Fig. 1 is a schematic hardware configuration of a mobile terminal implementing various embodiments of the present invention.
The mobile terminal 100 may include a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, a controller 180, and the like. Fig. 1 illustrates a mobile terminal having various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.
The user input unit 130 may generate key input data according to a command input by a user to control various operations of the mobile terminal. The user input unit 130 allows a user to input various types of information, and may include a keyboard, dome sheet, touch pad (e.g., a touch-sensitive member that detects changes in resistance, pressure, capacitance, and the like due to being touched), scroll wheel, joystick, and the like. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen may be formed.
The sensing unit 140 detects a current state of the mobile terminal 100 (e.g., an open or closed state of the mobile terminal 100), a position of the mobile terminal 100, presence or absence of contact (i.e., touch input) by a user with the mobile terminal 100, an orientation of the mobile terminal 100, acceleration or deceleration movement and direction of the mobile terminal 100, and the like, and generates a command or signal for controlling an operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 may sense whether the slide-type phone is opened or closed. The sensing unit 140 may include a proximity sensor 141 as will be described below in connection with a touch screen.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 may serve as a path through which power is supplied from the cradle to the mobile terminal 100 or may serve as a path through which various command signals input from the cradle are transmitted to the mobile terminal. Various command signals or power input from the cradle may be used as signals for recognizing whether the mobile terminal is accurately mounted on the cradle. The output unit 150 is configured to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audio, and/or tactile manner. The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a User Interface (UI) or a Graphical User Interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or an image and related functions, and the like.
Meanwhile, when the display unit 151 and the touch pad are overlapped with each other in the form of a layer to form a touch screen, the display unit 151 may serve as an input device and an output device. The display unit 151 may include at least one of a Liquid Crystal Display (LCD), a thin film transistor LCD (TFT-LCD), an Organic Light Emitting Diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as transparent displays, and a typical transparent display may be, for example, a TOLED (transparent organic light emitting diode) display or the like. Depending on the particular desired implementation, the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect a touch input pressure as well as a touch input position and a touch input area.
The audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output as sound when the mobile terminal is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output module 152 may provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and the like.
The alarm unit 153 may provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alarm unit 153 may provide output in different ways to notify the occurrence of an event. For example, the alarm unit 153 may provide an output in the form of vibration, and when a call, a message, or some other Incoming Communication (Incoming Communication) is received, the alarm unit 153 may provide a tactile output (e.g., vibration) to inform the user thereof. By providing such a tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 may also provide an output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 may store software programs and the like for processing and controlling operations performed by the controller 180, or may temporarily store data (e.g., a phonebook, messages, still images, videos, and the like) that has been or will be output. Also, the memory 160 may store data regarding various ways of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. Also, the mobile terminal 100 may cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communications, video calls, and the like. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data, and the multimedia module 181 may be constructed within the controller 180 or may be constructed separately from the controller 180. The controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a processor, a controller, a microcontroller, a microprocessor, an electronic unit designed to perform the functions described herein, and in some cases, such embodiments may be implemented in the controller 180. For a software implementation, the implementation such as a process or a function may be implemented with a separate software module that allows performing at least one function or operation. The software codes may be implemented by software applications (or programs) written in any suitable programming language, which may be stored in the memory 160 and executed by the controller 180.
Up to this point, mobile terminals have been described in terms of their functionality. Hereinafter, for brevity, a slide-type mobile terminal will be described as an example among the various types of mobile terminals (folder-type, bar-type, swing-type, slide-type, and so on). The present invention can, however, be applied to any type of mobile terminal and is not limited to the slide type.
The mobile terminal 100 as shown in fig. 1 may be configured to operate with communication systems such as wired and wireless communication systems and satellite-based communication systems that transmit data via frames or packets.
Based on the hardware structure of the mobile terminal, the invention provides various embodiments of the method.
Example one
Referring to fig. 2, it illustrates an image sharpening method provided by an embodiment of the present invention, the method includes:
s201, identifying face information contained in a shot image according to a face identification method, and extracting biological feature information in the face information;
s202, determining that an image corresponding to the biological characteristic information in the shot image is a first sharpening region, and determining that an image except the image corresponding to the biological characteristic information in the shot image is a second sharpening region;
s203, receiving a first sharpening instruction aiming at the first sharpening area, and acquiring a first sharpened image;
s204, receiving a second sharpening instruction aiming at the second sharpening area, and acquiring a second sharpened image;
s205, fusing the first sharpened image and the second sharpened image to obtain a fused sharpened image.
In step S201, the face recognition method may be a face feature-based method, a face recognition method based on geometric features, or a face recognition method based on neural networks, and the present invention focuses on the realizability of the face recognition method rather than the specific method of face recognition.
Identifying the face information contained in the captured image through face recognition technology provides the face information of the image, within which more detailed biometric feature information is contained.
The biometric information refers to facial feature information, specifically features that a normal face necessarily contains, such as the eyes, nose, mouth, eyebrows, face shape, and hair.
For step S201, identifying face information included in the captured image according to a face recognition method, and extracting biometric information in the face information, specifically including:
when face information contained in the captured image is identified by the face recognition method, the biometric feature information in the face information is extracted using a biometric recognition method.
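As an illustrative stand-in for step S201 (the disclosure deliberately leaves the face recognition method open), the sketch below locates a face-like region by simple brightness thresholding. A real terminal would use a trained detector such as a Haar cascade or a CNN; the function name and the threshold are assumptions made here purely for illustration:

```python
import numpy as np

def detect_face_region(gray: np.ndarray, thresh: float = 128.0):
    """Toy stand-in for a real face detector: returns the bounding box
    (top, bottom, left, right) of pixels brighter than `thresh`, or None
    when nothing qualifies. A production system would run an actual
    face/biometric detector here."""
    ys, xs = np.nonzero(gray > thresh)
    if ys.size == 0:
        return None
    return int(ys.min()), int(ys.max()) + 1, int(xs.min()), int(xs.max()) + 1
```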
Referring to fig. 3, after the biometric information in the portrait has been extracted by the biometric identification method, a prompt box as shown in fig. 3 pops up on the terminal screen asking the user whether to turn on the partition sharpening shooting function. Once the function is enabled, the image is divided into a first and a second sharpening region, each of which is sharpened to a different degree, achieving the partition sharpening effect.
With respect to step S202, referring to fig. 4, determining that an image corresponding to the biometric information in the captured image is a first sharpened region, and determining that an image other than the image corresponding to the biometric information in the captured image is a second sharpened region, specifically includes steps S2021 and S2022:
s2021, locking an image corresponding to the biological feature information in the shot image according to the biological feature information, and determining that the image corresponding to the biological feature information in the shot image is a first sharpened area;
s2022, determining the image area except the first sharpened area as a second sharpened area by image segmentation.
As for step S2021, it is understood that after the biometric information in the face information is extracted, the image corresponding to the biometric information is locked, and the locked image region is defined as the first sharpened region.
As for step S2022, it is understood that the image is divided according to the first sharpened region of the image, and the portion other than the first sharpened region is set as the second sharpened region.
Further, referring to fig. 5, the portion marked by the dotted outline in fig. 5 is the first sharpened region, the region in which biometric information was recognized in the portrait: the face, eyes, nose, mouth, and hair. The area outside this region, namely the subject's hat and the background, is the second sharpened region. As can be seen from fig. 5, partition sharpening distinguishes the part of the portrait that carries biometric information from the part that does not, so the first and second regions can be sharpened separately, avoiding the over-sharpening or under-sharpening caused by sharpening the whole image uniformly.
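Steps S2021 and S2022 amount to splitting the image into two complementary regions. This can be sketched as a pair of boolean masks derived from a locked bounding box (illustrative only; the helper name `region_masks` and the box convention are assumptions, not the patent's segmentation method):

```python
import numpy as np

def region_masks(shape, box):
    """Split an image of `shape` (H, W) into the first sharpened region
    (inside `box` = (top, bottom, left, right)) and the second sharpened
    region (everything else), returned as complementary boolean masks."""
    first = np.zeros(shape, dtype=bool)
    t, b, l, r = box
    first[t:b, l:r] = True
    return first, ~first
```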
For step S203, referring to fig. 6, receiving a first sharpening instruction for the first sharpened region, and acquiring a first sharpened image, specifically including steps S2031 and S2032:
s2031, receiving a first sharpening instruction aiming at the first sharpening area; wherein the first sharpening instruction indicates to increase a degree of sharpening of the first sharpened region;
s2032, improving the sharpening degree of the first sharpening area according to the first sharpening instruction, and obtaining a first sharpened image; the first sharpened image is an image with the sharpening degree of the first sharpened area improved.
For step S2031: the image in the first sharpened area is the image corresponding to the biometric information, and the first sharpening instruction for that area increases its degree of sharpening. The purpose is to enhance the biometric information in the portrait and make the key facial features more prominent.
For step S2032: the sharpening degree of the first sharpened area is increased according to the first sharpening instruction, which yields the first sharpened image. As shown in fig. 5, the first sharpening region is the image region corresponding to the biometric information in the portrait; by appropriately increasing the sharpening degree of the face, eyes, nose, mouth, and hair in this region, the lines of these features become clearer and their outlines more distinct, bringing out the important features of the face.
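Increasing the sharpening degree only inside the first sharpened area might look like the following sketch: unsharp-style sharpening restricted by a mask (the function name, the `amount` parameter, and the 3x3 mean low-pass are assumptions, not the disclosed algorithm):

```python
import numpy as np

def sharpen_region(img, mask, amount):
    """Apply unsharp-style sharpening only where `mask` is True;
    a larger `amount` raises the sharpening degree of that region."""
    f = img.astype(np.float64)
    pad = np.pad(f, 1, mode="edge")
    # 3x3 local mean as the low-pass estimate
    blur = sum(pad[dy:dy + f.shape[0], dx:dx + f.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0
    sharpened = np.clip(f + amount * (f - blur), 0, 255)
    # pixels outside the mask are returned unchanged
    return np.where(mask, sharpened, f)
```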
For step S204, referring to fig. 7, a second sharpening instruction for the second sharpened region is received, and a second sharpened image is obtained, specifically including S2041 and S2042:
s2041, receiving a second sharpening instruction for the second sharpening area; wherein the second sharpening instruction indicates to reduce a degree of sharpening of the second sharpened region;
s2042, reducing the sharpening degree of a second sharpening area according to the second sharpening instruction, and acquiring a second sharpened image; and the second sharpened image is an image with the sharpening degree of the second sharpened area reduced.
In step S2041, the image corresponding to the second sharpened area is an image corresponding to an image area other than the biometric information, and the second sharpening instruction for the second sharpened area is to reduce the degree of sharpening of the second sharpened area, where the purpose of reducing the degree of sharpening of the second sharpened area is to make the image of the second sharpened area relatively smooth compared with the image of the first sharpened area, so that the portrait portion in the whole image is more prominent.
In step S2042, the degree of sharpening of the second sharpened area is reduced according to the second sharpening instruction, yielding the second sharpened image. As shown in fig. 5, the second sharpened region is everything outside the image region corresponding to the biometric information in the portrait, and reducing the degree of sharpening of this region, such as the background, makes the background portion of the image comparatively smooth.
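The "reduced degree of sharpening" of S2042 can be approximated by low-pass filtering the background region. The sketch below is an assumption for illustration (the embodiment fixes neither the filter nor the kernel size):

```python
import numpy as np

def reduce_sharpening(region, k=3):
    # Edge-replicated box blur: attenuates high frequencies so the
    # background reads as smoother than the sharpened portrait region.
    pad = k // 2
    p = np.pad(region.astype(np.float64), pad, mode="edge")
    h, w = region.shape
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)
```

A larger `k` smooths more aggressively; in this scheme the background's local contrast drops, which is what makes the portrait portion stand out after fusion.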
It should be noted that although the first sharpening instruction indicates an increase in the degree of sharpening, the increase applied to the image in the first sharpened region must stay within a reasonable range. Keeping the increase within the preset first threshold prevents over-sharpening of the first sharpened region, which would otherwise raise the image noise in that region or distort its image.
Similarly, the second sharpening instruction indicates a reduction in the degree of sharpening, and the reduction applied to the image in the second sharpened region must likewise stay within a reasonable range. Keeping the reduction within the preset second threshold prevents the second sharpened region from being smoothed excessively, which would otherwise lose detail information and blur the specific information of the image.
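The bounded adjustment described in the two notes above amounts to a clamp on the requested sharpening change. The threshold values below are illustrative assumptions; the embodiment only requires that the thresholds be preset.

```python
def bounded_sharpen_amount(requested, first_threshold=2.0, second_threshold=-0.8):
    """Clamp a requested sharpening change: positive values increase the
    degree of sharpening (first region), negative values reduce it
    (second region).  The clamp prevents over-sharpening above
    first_threshold and over-smoothing below second_threshold."""
    return max(second_threshold, min(first_threshold, requested))
```

Any user- or algorithm-supplied amount outside the window is pulled back to the nearest threshold, so neither region can be pushed past its preset limit.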
In step S205, the first sharpened image and the second sharpened image are fused, that is, the first sharpened image with the increased degree of sharpening is fused with the second sharpened image with the decreased degree of sharpening. Because sharpening is applied per region, the resulting image is more targeted than one obtained by raising or lowering the degree of sharpening of the whole image uniformly. In the fused image, the portrait is sharpened to a moderate degree, and the sharpening does not add noise to the portrait.
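The fusion of S205 can be realized as a mask-weighted blend, where the mask is 1 inside the first sharpened region and 0 elsewhere. This is a sketch under that assumption; a real mask derived from face segmentation might be feathered at the boundary rather than binary.

```python
import numpy as np

def fuse(first_sharpened, second_sharpened, mask):
    # mask == 1 selects pixels from the sharpened portrait image,
    # mask == 0 selects pixels from the smoothed background image.
    mask = mask.astype(np.float64)
    return mask * first_sharpened + (1.0 - mask) * second_sharpened
```

With a binary mask this reduces to a per-pixel selection; with a feathered mask the same formula blends the two images smoothly across the region boundary.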
According to the image sharpening method provided by the embodiment of the invention, the biometric information in the face information is identified, the image corresponding to the biometric information is assigned to the first sharpened region and the remaining image to the second sharpened region, the degree of sharpening is increased in the first region and reduced in the second, and the two results are fused into one image. This solves the problem of over-sharpening during portrait sharpening and keeps the degree of portrait sharpening reasonable.
Example two
Referring to fig. 8, which shows a block diagram of an image sharpening terminal 8 according to an embodiment of the present invention, the terminal 8 includes: an extraction module 801, a determination module 802, a receiving module 803, an acquisition module 804 and an image fusion module 805; wherein:
the extraction module 801 is configured to identify face information included in a captured image according to a face recognition method, and extract biometric information in the face information;
the determining module 802 is configured to determine that an image corresponding to the biometric information in the captured image is a first sharpened region, and an image other than the image corresponding to the biometric information in the captured image is a second sharpened region;
the receiving module 803 is configured to receive a first sharpening instruction for the first sharpening region;
the obtaining module 804 is configured to obtain a first sharpened image;
the receiving module 803 is further configured to receive a second sharpening instruction for the second sharpening region;
the obtaining module 804 is further configured to obtain a second sharpened image;
the image fusion module 805 is configured to fuse the first sharpened image and the second sharpened image;
the obtaining module 804 is further configured to obtain the fused sharpened image.
Further, the extracting module 801 is configured to extract biometric information in the face information by using a biometric identification method when the captured image is identified to contain the face information according to the face identification method.
Further, the determining module 802 is configured to lock an image corresponding to the biometric information in the captured image according to the biometric information, and determine that the image corresponding to the biometric information in the captured image is a first sharpened area;
and determining an image area except the first sharpened area as a second sharpened area by image segmentation.
Further, the receiving module 803 is configured to receive a first sharpening instruction for the first sharpening region; wherein the first sharpening instruction indicates to increase a degree of sharpening of the first sharpened region;
the obtaining module 804 is configured to increase a sharpening degree of the first sharpened area according to the first sharpening instruction, and obtain a first sharpened image; the first sharpened image is an image with the sharpening degree of the first sharpened area improved.
Further, the receiving module 803 is configured to receive a second sharpening instruction for the second sharpening region; wherein the second sharpening instruction indicates to reduce a degree of sharpening of the second sharpened region;
the obtaining module 804 is configured to reduce a sharpening degree of the second sharpened area according to the second sharpening instruction, and obtain a second sharpened image; and the second sharpened image is an image with the sharpening degree of the second sharpened area reduced.
Specifically, for this embodiment, the functions of the extraction module 801, the determination module 802, the receiving module 803, the acquisition module 804 and the image fusion module 805 may be implemented by the processor of the terminal 8 calling a program in a memory or pre-stored data. In practical applications, the processor may be at least one of an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a Central Processing Unit (CPU), a controller, a microcontroller, and a microprocessor. It will be appreciated that the electronic device used to implement the above processor functions may differ between systems, and embodiments of the invention are not particularly limited in this respect.
According to the image sharpening terminal provided by the embodiment of the invention, the biometric information in the face information is identified, the image corresponding to the biometric information is assigned to the first sharpened region and the remaining image to the second sharpened region, the degree of sharpening is increased in the first region and reduced in the second, and the two results are fused into one image. This solves the problem of over-sharpening during portrait sharpening and keeps the degree of portrait sharpening reasonable.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method described in the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (6)

1. A method of image sharpening, the method comprising:
identifying face information contained in a shot image according to a face identification method, and extracting biological feature information in the face information, wherein the biological feature information comprises face feature information;
when the image sharpening terminal is determined to open a partition sharpening shooting function, determining that an image corresponding to the facial feature information in the shot image is a first sharpening area, and determining that an image except the image corresponding to the facial feature information in the shot image is a second sharpening area;
receiving a first sharpening instruction for the first sharpening region, wherein the first sharpening instruction indicates a degree of sharpening to which the first sharpening region is to be increased within a preset first threshold; improving the sharpening degree of the first sharpening area according to the first sharpening instruction, and acquiring a first sharpened image;
receiving a second sharpening instruction for the second sharpening region, wherein the second sharpening instruction indicates that a degree of sharpening of the second sharpening region is reduced within a preset second threshold; reducing the sharpening degree of a second sharpening area according to the second sharpening instruction, and acquiring a second sharpened image;
and fusing the first sharpened image and the second sharpened image to obtain a fused sharpened image.
2. The method according to claim 1, wherein the identifying of the face information contained in the captured image according to a face recognition method and the extracting of the biometric information from the face information specifically comprise:
when the face information contained in the shot image is identified according to a face identification method, the face feature information in the face information is extracted by adopting a biological feature identification method.
3. The method according to claim 1, wherein the determining that the image corresponding to the facial feature information in the captured image is a first sharpened region and the image other than the image corresponding to the facial feature information in the captured image is a second sharpened region specifically comprises:
locking an image corresponding to the facial feature information in the shot image according to the facial feature information, and determining that the image corresponding to the facial feature information in the shot image is a first sharpened area;
and determining an image area outside the first sharpened area as a second sharpened area by utilizing image segmentation.
4. An image sharpening terminal, characterized in that the terminal comprises: an extraction module, a determination module, a receiving module, an acquisition module and an image fusion module; wherein:
the extraction module is used for identifying face information contained in a shot image according to a face identification method and extracting biological feature information in the face information, wherein the biological feature information refers to face feature information;
the determining module is configured to determine that an image corresponding to the facial feature information in the captured image is a first sharpened area and an image other than the image corresponding to the facial feature information in the captured image is a second sharpened area when it is determined that the image sharpening terminal opens the partitioned sharpening capturing function;
the receiving module is configured to receive a first sharpening instruction for the first sharpening region, where the first sharpening instruction indicates a degree of sharpening of the first sharpening region to be increased within a preset first threshold;
the acquisition module is used for improving the sharpening degree of the first sharpening area according to the first sharpening instruction and acquiring a first sharpened image;
the receiving module is further configured to receive a second sharpening instruction for the second sharpening region; wherein the second sharpening instruction indicates that the degree of sharpening of the second sharpened region is reduced within a preset second threshold;
the obtaining module is further configured to reduce a sharpening degree of a second sharpened area according to the second sharpening instruction, and obtain a second sharpened image;
the image fusion module is used for fusing the first sharpened image and the second sharpened image;
the acquisition module is further used for acquiring the fused sharpened image.
5. The terminal of claim 4,
the extraction module is used for extracting the facial feature information in the facial information by adopting a biological feature recognition method when the shot image is recognized to contain the facial information according to the face recognition method.
6. The terminal of claim 4,
the determining module is used for locking an image corresponding to the facial feature information in the shot image according to the biological feature information and determining that the image corresponding to the facial feature information in the shot image is a first sharpened area; and determining an image area except the first sharpened area as a second sharpened area by image segmentation.
CN201611248625.5A 2016-12-29 2016-12-29 Image sharpening method and terminal Active CN106780394B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611248625.5A CN106780394B (en) 2016-12-29 2016-12-29 Image sharpening method and terminal


Publications (2)

Publication Number Publication Date
CN106780394A CN106780394A (en) 2017-05-31
CN106780394B (en) 2020-12-08

Family

ID=58929321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611248625.5A Active CN106780394B (en) 2016-12-29 2016-12-29 Image sharpening method and terminal

Country Status (1)

Country Link
CN (1) CN106780394B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108053371B (en) * 2017-11-30 2022-04-19 努比亚技术有限公司 Image processing method, terminal and computer readable storage medium
CN109612114A (en) * 2018-12-04 2019-04-12 朱朝峰 Strange land equipment linkage system
CN109889537B (en) * 2019-03-20 2020-06-23 北方工业大学 Automatic network communication mechanism and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5835616A (en) * 1994-02-18 1998-11-10 University Of Central Florida Face detection using templates
EP1471462A1 (en) * 2003-04-21 2004-10-27 Hewlett-Packard Development Company, L.P. Method for processing a facial region of an image differently than the remaining portion of the image
CN1713209A (en) * 2004-06-24 2005-12-28 诺日士钢机株式会社 Photographic image processing method and equipment
US20070172140A1 (en) * 2003-03-19 2007-07-26 Nils Kokemohr Selective enhancement of digital images
CN105205779A (en) * 2015-09-15 2015-12-30 厦门美图之家科技有限公司 Eye image processing method and system based on image morphing and shooting terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006113658A (en) * 2004-10-12 2006-04-27 Canon Inc Image processing apparatus and method, and storage medium with program recorded thereon




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant