CN112822405A - Focusing method, device, equipment and storage medium - Google Patents

Focusing method, device, equipment and storage medium

Info

Publication number
CN112822405A
Authority
CN
China
Prior art keywords
image
faces
determining
area
evaluation parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110044534.4A
Other languages
Chinese (zh)
Inventor
常群 (Chang Qun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202110044534.4A
Publication of CN112822405A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure relates to a focusing method, apparatus, device and storage medium, the method comprising: acquiring a preview image acquired by an image acquisition device, wherein at least two human faces exist in the preview image; determining at least two image regions from the preview image based on the at least two faces; determining a focus area based on the at least two image areas; and controlling the image acquisition device to focus based on the focusing area. The method and the device can prevent the image acquisition device from automatically focusing directly on the largest face in the front row, improving the rationality of focusing and thus the quality of subsequently acquired images.

Description

Focusing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of terminal technologies, and in particular, to a focusing method, apparatus, device, and storage medium.
Background
With the development of mobile terminal technology, mobile terminals such as smart phones offer more and more functions, for example photographing or video recording with image acquisition devices such as cameras, to meet users' needs such as recording moments of daily life.
In the related art, a camera of a mobile terminal can perform automatic focusing during photographing; the adopted focusing modes include traditional focusing based on contrast and phase, focusing based on target detection, and the like. Focusing based on target detection mainly detects faces in the middle of the image, performs a contrast calculation over the face area, and then calculates the focusing point. However, when this focusing method is used to photograph multiple rows of people in a large venue such as a classroom or a conference room, the camera of the mobile terminal often focuses automatically on the largest face in the front row, so that the faces of people in the back rows are blurred and the quality of the subsequent photograph suffers.
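The contrast calculation mentioned above is typically a sharpness measure over the detected face area. Below is a minimal sketch, assuming OpenCV and the common variance-of-Laplacian measure; the patent does not prescribe a specific contrast metric, so the choice here is an assumption:

```python
import cv2

def face_region_contrast(image_bgr, box):
    """Variance-of-Laplacian sharpness of a face region box = (x, y, w, h)."""
    x, y, w, h = box
    gray = cv2.cvtColor(image_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    # A sharper (better focused) region yields a higher-variance Laplacian.
    return cv2.Laplacian(gray, cv2.CV_64F).var()
```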
Disclosure of Invention
To overcome the problems in the related art, embodiments of the present disclosure provide a focusing method, apparatus, device and storage medium to solve the drawbacks in the related art.
According to a first aspect of embodiments of the present disclosure, there is provided a focusing method, the method including:
acquiring a preview image acquired by an image acquisition device, wherein at least two human faces exist in the preview image;
determining at least two image regions from the preview image based on the at least two faces;
determining a focus area based on the at least two image areas;
and controlling the image acquisition device to focus based on the focusing area.
In an embodiment, the determining at least two image regions from the preview image based on the at least two faces comprises:
identifying the at least two faces from the preview image;
determining the size of each face of the at least two faces;
at least two image regions are determined from the preview image based on the size of the face.
In one embodiment, the determining at least two image regions from the preview image based on the size of the face comprises:
clustering the at least two faces based on the sizes of the faces to obtain at least two classifications;
and dividing the human faces belonging to the same classification into the same image area.
In one embodiment, the determining at least two image regions from the preview image based on the size of the face comprises:
matching the size of each face of the at least two faces against at least two preset image size ranges;
and dividing the faces matched to the same image size range into the same image area.
In an embodiment, the determining a focus area based on the at least two image areas comprises:
determining the number of human faces in each image area;
determining a region evaluation parameter of each image region based on the number of faces in each image region and the size of the faces;
determining a focus area from the at least two image areas based on the area evaluation parameter.
In an embodiment, the determining a region evaluation parameter for each of the image regions based on the number of faces in each of the image regions and the size of the faces includes:
determining a first evaluation parameter corresponding to the number of the human faces in each image area based on a preset first corresponding relation;
determining a second evaluation parameter corresponding to the size of the face in each image area based on a preset second corresponding relation;
determining a region evaluation parameter for each of the image regions based on a product of the first evaluation parameter and the second evaluation parameter;
the determining a focus area from the at least two image areas based on the area evaluation parameter comprises:
and determining the image area with the largest area evaluation parameter in the at least two image areas as a focusing area.
According to a second aspect of the embodiments of the present disclosure, there is provided a focusing apparatus, the apparatus including:
the preview image acquisition module is used for acquiring a preview image acquired by the image acquisition device, wherein at least two human faces exist in the preview image;
an image region dividing module for determining at least two image regions from the preview image based on the at least two faces;
a focusing area determination module for determining a focusing area based on the at least two image areas;
and the image device focusing module is used for controlling the image acquisition device to focus based on the focusing area.
In one embodiment, the image region dividing module includes:
a face recognition unit configured to recognize the at least two faces from the preview image;
a face size determination unit for determining the size of each of the at least two faces;
and the image area dividing unit is used for determining at least two image areas from the preview image based on the size of the human face.
In an embodiment, the image area dividing unit is further configured to:
clustering the at least two faces based on the sizes of the faces to obtain at least two classifications;
and dividing the human faces belonging to the same classification into the same image area.
In an embodiment, the image area dividing unit is further configured to:
matching the size of each face of the at least two faces against at least two preset image size ranges;
and dividing the faces matched to the same image size range into the same image area.
In one embodiment, the focusing area determination module includes:
a face number determination unit for determining the number of faces in each of the image regions;
an evaluation parameter determination unit configured to determine a region evaluation parameter for each of the image regions based on the number of faces in each of the image regions and the size of the faces;
a focusing area determination unit for determining a focusing area from the at least two image areas based on the area evaluation parameter.
In an embodiment, the evaluation parameter determination unit is further configured to:
determining a first evaluation parameter corresponding to the number of the human faces in each image area based on a preset first corresponding relation;
determining a second evaluation parameter corresponding to the size of the face in each image area based on a preset second corresponding relation;
determining a region evaluation parameter for each of the image regions based on a product of the first evaluation parameter and the second evaluation parameter;
the focusing area determining unit is further configured to determine an image area with the largest area evaluation parameter of the at least two image areas as a focusing area.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic apparatus, the apparatus comprising:
a processor and a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a preview image acquired by an image acquisition device, wherein at least two human faces exist in the preview image;
determining at least two image regions from the preview image based on the at least two faces;
determining a focus area based on the at least two image areas;
and controlling the image acquisition device to focus based on the focusing area.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements:
acquiring a preview image acquired by an image acquisition device, wherein at least two human faces exist in the preview image;
determining at least two image regions from the preview image based on the at least two faces;
determining a focus area based on the at least two image areas;
and controlling the image acquisition device to focus based on the focusing area.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
By acquiring a preview image captured by the image acquisition device in which at least two faces exist, determining at least two image regions from the preview image based on the at least two faces, determining a focusing region based on the at least two image regions, and then controlling the image acquisition device to focus based on the focusing region, the present disclosure divides the at least two faces in the preview image into at least two image regions and determines the focusing region from the divided regions. This prevents the image acquisition device from automatically focusing directly on the largest face in the front row, improves the rationality of focusing, and thereby improves the quality of subsequently acquired images.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a focusing method according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating how at least two image regions are determined from the preview image based on the at least two faces in accordance with an exemplary embodiment;
FIG. 3 is a flow diagram illustrating how at least two image regions are determined from the preview image based on the size of the human face in accordance with an exemplary embodiment;
FIG. 4 is a flow chart showing how a focus area is determined based on the at least two image areas according to an example embodiment;
FIG. 5A is a flow diagram illustrating how region evaluation parameters for each of the image regions are determined based on the number of faces in each of the image regions and the size of the faces in accordance with an exemplary embodiment;
FIG. 5B is a schematic diagram illustrating a first correspondence between a number of faces and a first evaluation parameter, according to an exemplary embodiment;
FIG. 5C is a diagram illustrating a second correspondence between a size of a face and a second evaluation parameter, according to an exemplary embodiment;
FIG. 6 is a block diagram illustrating a focusing apparatus according to an exemplary embodiment;
FIG. 7 is a block diagram illustrating a focusing apparatus according to yet another exemplary embodiment;
FIG. 8 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
FIG. 1 is a flow chart illustrating a focusing method according to an exemplary embodiment; the method of the embodiment can be applied to terminal equipment (such as a smart phone, a tablet computer, a notebook computer or wearable equipment) with an image acquisition device such as a camera.
As shown in FIG. 1, the method includes the following steps S11-S14:
in step S11, a preview image captured by the image capturing device is acquired.
The preview image may contain at least two faces, corresponding to at least two people in a multi-person venue.
In this embodiment, when a user takes a picture or records a video in a multi-person venue such as a classroom or a conference room, the image acquisition device of the terminal device may be turned on to acquire images, and the terminal device may then obtain a preview image acquired by the image acquisition device.
In an embodiment, the image acquisition device may include a camera or the like, and the preview image may be an image acquired by the image acquisition device for display before the user captures and saves a photograph, which is not limited in this embodiment.
In step S12, at least two image regions are determined from the preview image based on the at least two human faces.
In this embodiment, after the preview image including at least two faces acquired by the image acquisition device is acquired, the preview image may be divided into at least two image areas based on the at least two faces.
In one embodiment, the preview image may be divided into at least two image regions based on factors such as the size of at least two faces or the density of faces.
For example, fig. 2 is a flow chart illustrating how at least two image regions are determined from the preview image based on the at least two faces according to an exemplary embodiment. As shown in fig. 2, the step S12 may further include the following steps S121 to S123:
in step S121, the at least two human faces are recognized from the preview image.
In this embodiment, after the preview image acquired by the image acquisition device is acquired, the at least two faces may be identified from the preview image based on a preset face identification algorithm.
It should be noted that the face recognition algorithm may be selected from related technologies based on actual needs, which is not limited in this embodiment.
In step S122, the size of each of the at least two faces is determined.
In this embodiment, after at least two faces are recognized from the preview image, the size of each face can be determined.
In an embodiment, the size of a face may be measured in image pixels, or by other metrics in the related art, which is not limited in this embodiment.
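Steps S121 and S122 can be illustrated concretely. Here is a minimal sketch, assuming OpenCV's bundled Haar cascade as the face detector (the patent leaves the recognition algorithm open) and bounding-box width in pixels as the size metric:

```python
import cv2

def detect_face_sizes(preview_bgr):
    """Detect faces and return a list of ((x, y, w, h), size_px) tuples."""
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Size is taken as bounding-box width in pixels, one possible
    # pixel-based metric; height or area would serve equally well.
    return [(tuple(int(v) for v in box), int(box[2])) for box in boxes]
```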
In step S123, at least two image regions are determined from the preview image based on the size of the face.
In this embodiment, after determining the size of each of the at least two faces, at least two image regions may be determined from the preview image based on the size of the face.
In one embodiment, considering that the size of a face is related to the distance between the photographed person and the camera, the at least two faces may be divided into different image areas according to that distance; that is, faces of similar size may be divided into the same image area. For example, the at least two faces may be clustered based on face size to obtain at least two classifications, and the faces belonging to the same classification may then be divided into the same image region, thereby determining the at least two image regions.
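The patent does not fix a clustering algorithm for this size-based grouping. The sketch below uses a simple one-dimensional heuristic that starts a new classification whenever consecutive sorted sizes jump by more than a hypothetical relative threshold:

```python
def cluster_faces_by_size(faces, gap_ratio=0.25):
    """faces: list of (box, size_px). Groups faces whose sizes are close.

    A new classification starts whenever the next sorted size exceeds the
    previous one by more than gap_ratio (a hypothetical threshold).
    """
    ordered = sorted(faces, key=lambda f: f[1])
    regions, current = [], [ordered[0]]
    for face in ordered[1:]:
        if face[1] > current[-1][1] * (1 + gap_ratio):  # size jump: new class
            regions.append(current)
            current = [face]
        else:
            current.append(face)
    regions.append(current)
    return regions
```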
In another embodiment, as shown in fig. 3, the step S123 may further include the following steps S1231-S1232:
in step S1231, matching is performed based on the size of each of the at least two faces and the set at least two image size ranges.
In this embodiment, the image size ranges corresponding to the plurality of image areas may be set in advance. For example, a plurality of image areas are set: the image size range corresponding to the area 1 can be 100-150 pixels, the image size range corresponding to the area 2 can be 80-100 pixels, the image size range corresponding to the area 3 can be 50-80 pixels, and the image size range corresponding to the area 4 can be 10-50 pixels. On this basis, the size of each of the at least two faces can be matched with the image size range.
It should be noted that the division manner of the image size range is only used for exemplary illustration, and can be freely set based on the requirement in practical application.
In step S1232, the faces matching the same image size range are divided into the same image region.
In this embodiment, after the size of each of the at least two faces is matched against the at least two preset image size ranges, the faces matched to the same image size range can be divided into the same image area.
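A sketch of this range-matching variant, using the illustrative pixel ranges from the example above; the half-open boundaries are an assumption, since the description does not say how boundary values are assigned:

```python
# Illustrative pixel ranges from the description (area 1 holds the
# largest faces, area 4 the smallest); boundaries are half-open here.
SIZE_RANGES = {
    "area 1": (100, 150),
    "area 2": (80, 100),
    "area 3": (50, 80),
    "area 4": (10, 50),
}

def group_faces_by_range(faces, ranges=SIZE_RANGES):
    """faces: list of (box, size_px). Returns {region_name: [faces...]}."""
    regions = {}
    for box, size in faces:
        for name, (lo, hi) in ranges.items():
            if lo <= size < hi:
                regions.setdefault(name, []).append((box, size))
                break
    return regions
```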
In step S13, a focus area is determined based on the at least two image areas.
In this embodiment, after determining at least two image regions from the preview image based on the at least two faces, a focusing region may be determined based on the at least two image regions.
In an embodiment, a focusing area for focusing may be determined from the at least two image areas based on the number of faces and the size of the faces of each image area.
For example, fig. 4 is a flow chart illustrating how to determine a focus area based on the at least two image areas according to an exemplary embodiment. As shown in fig. 4, the step S13 may further include the following steps S131 to S133:
in step S131, the number of faces in each of the image regions is determined.
In this embodiment, after determining at least two image regions from the preview image based on the at least two faces, the number of faces in each image region may be determined.
For example, the faces contained in each image region may be identified, and the number of faces in each image region may be counted.
In step S132, a region evaluation parameter for each of the image regions is determined based on the number of faces in each of the image regions and the size of the faces.
In this embodiment, after the number of faces in each image region is determined, the region evaluation parameter of each image region may be determined based on the number of faces in each image region and the size of the faces.
For example, fig. 5A is a flow chart illustrating how a region-evaluation parameter for each of the image regions is determined based on the number of faces in each of the image regions and the size of the faces, according to an exemplary embodiment. As shown in fig. 5A, the step S132 may further include the following steps S1321 to S1323:
in step S1321, a first evaluation parameter corresponding to the number of faces in each image region is determined based on a preset first corresponding relationship.
In this embodiment, a first corresponding relationship between the number of faces and the first evaluation parameter may be pre-constructed, and then, after the number of faces in each image region is determined, the corresponding first evaluation parameter may be determined based on the first corresponding relationship.
In an embodiment, the first correspondence between the number of faces and the first evaluation parameter may follow the curve d(y) shown in fig. 5B, where y is the number of faces. As shown in fig. 5B, the larger the number of faces, the larger the corresponding first evaluation parameter. However, as the number of faces increases, the growth of the first evaluation parameter slows, so that the first evaluation parameter remains bounded within a certain range.
The relationship may be a positive correlation, that is, the more faces an image region contains, the larger the first evaluation parameter of that region.
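The exact shape of d(y) is not disclosed; the description only requires monotone growth that flattens out. A hedged sketch using the saturating form y/(y+k), where k is a hypothetical tuning constant:

```python
def first_evaluation(face_count, k=5.0):
    """Hypothetical d(y): grows with the face count y but saturates
    toward 1, keeping the parameter bounded as fig. 5B describes."""
    return face_count / (face_count + k)
```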
In step S1322, based on a preset second corresponding relationship, a second evaluation parameter corresponding to the size of the face in each image region is determined.
In this embodiment, a second correspondence between the size of the face and a second evaluation parameter may be pre-constructed, and then, after the size of the face in each image region is determined, the corresponding second evaluation parameter may be determined based on the second correspondence.
In an embodiment, the second corresponding relationship between the size of the face and the second evaluation parameter may be shown in a curve f (x) shown in fig. 5C, where x is the size of the face. As shown in fig. 5C, the second evaluation parameter for a face of medium size is larger, and the second evaluation parameter for a face of smaller or larger size is smaller.
It should be noted that the size of the face in each image region may be the sum of the sizes of the faces in the region, or may be the average image size of the faces in the region, which is not limited in this embodiment.
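Likewise, only the qualitative shape of f(x) is given: it peaks for medium-sized faces. A Gaussian bump is one hypothetical realization; the preferred size and spread below are illustrative constants, and the region's face size is taken as the average of its face sizes (the description allows either the sum or the average):

```python
import math

def second_evaluation(region_faces, preferred_px=80.0, spread_px=40.0):
    """Hypothetical f(x): peaks for medium-sized faces and falls off for
    small or large ones. x is the region's average face size here."""
    mean_size = sum(size for _, size in region_faces) / len(region_faces)
    return math.exp(-((mean_size - preferred_px) ** 2) / (2 * spread_px ** 2))
```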
In step S1323, a region evaluation parameter for each image region is determined based on a product of the first evaluation parameter and the second evaluation parameter.
In this embodiment, after determining a first evaluation parameter corresponding to the number of faces in each image region and a second evaluation parameter corresponding to the size of the face in each image region, a product of the first evaluation parameter and the second evaluation parameter may be calculated, and the product may be determined as the region evaluation parameter of each image region.
In step S133, a focus area is determined from the at least two image areas based on the area evaluation parameter.
In this embodiment, after determining the region evaluation parameter of each image region based on the number of faces in each image region and the size of the faces, the focusing region may be determined from the at least two image regions based on the region evaluation parameter.
In an embodiment, an image area of the at least two image areas in which the area evaluation parameter is the largest may be determined as the in-focus area.
For example, suppose 2 faces exist in area 1, 2 faces in area 2, 10 faces in area 3, and 5 faces in area 4, and the face sizes increase sequentially from area 1 to area 4; if the area evaluation parameter of area 3 is calculated to be the largest, area 3 is determined as the focusing area. In this embodiment, the area evaluation parameter of each image area thus weighs both the number of faces and the size of the faces in the area, which improves the rationality of the determined focusing area.
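Putting steps S131-S133 together: score each region by the product of the two parameters and pick the region with the largest score. The helpers below are the hypothetical sketches from earlier, not the patent's actual implementation:

```python
def select_focus_region(regions):
    """regions: {name: [(box, size_px), ...]}.
    Scores each region by d(y) * f(x) and returns the best region name."""
    def score(faces):
        return first_evaluation(len(faces)) * second_evaluation(faces)
    return max(regions, key=lambda name: score(regions[name]))

# Usage: regions = group_faces_by_range(detect_face_sizes(preview))
#        focus_on = select_focus_region(regions)
```

With the example counts above, the region holding many medium-sized faces tends to score highest, consistent with the patent's example in which area 3 wins.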
In step S14, the image capture device is controlled to focus based on the focus area.
In this embodiment, after the focusing area is determined based on the at least two image areas, the image capturing device may be controlled to focus based on the focusing area.
In an embodiment, after the focusing area for focusing is determined, a preset focusing manner may be adopted to focus the focusing area.
It should be noted that the preset focusing manner may be selected from the related art according to actual needs, for example a target-detection-based focusing manner or a contrast-based focusing manner, which is not limited in this embodiment.
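As one concrete instance of such a preset focusing manner, a contrast-based sweep can reuse the sharpness measure from the Background sketch: step the lens through its positions, score the focusing area at each, and settle where the score peaks. `set_focus_position` and `capture_frame` are hypothetical hardware callbacks, not an actual camera API:

```python
def contrast_autofocus(set_focus_position, capture_frame, focus_box,
                       positions=range(0, 1024, 32)):
    """Sweep lens positions and settle where the focus area is sharpest."""
    best_pos, best_score = None, float("-inf")
    for pos in positions:
        set_focus_position(pos)      # hypothetical lens-driver callback
        frame = capture_frame()      # hypothetical frame-grabber callback
        score = face_region_contrast(frame, focus_box)  # Background sketch
        if score > best_score:
            best_pos, best_score = pos, score
    set_focus_position(best_pos)
    return best_pos
```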
As can be seen from the above description, the method of this embodiment acquires a preview image captured by the image acquisition device in which at least two faces exist, determines at least two image regions from the preview image based on the at least two faces, determines a focusing region based on the at least two image regions, and then controls the image acquisition device to focus based on the focusing region. Because the at least two faces in the preview image are divided into at least two image regions and the focusing region is determined from the divided regions, the image acquisition device is prevented from automatically focusing on the largest face in the front row, the rationality of focusing is improved, and the quality of subsequently acquired images is improved.
FIG. 6 is a block diagram illustrating a focusing apparatus according to an exemplary embodiment; the device of the embodiment can be applied to terminal equipment (such as a smart phone, a tablet computer, a notebook computer or wearable equipment and the like) with an image acquisition device such as a camera. As shown in fig. 6, the apparatus includes: a preview image acquisition module 110, an image area division module 120, a focusing area determination module 130, and an image device focusing module 140, wherein:
a preview image obtaining module 110, configured to obtain a preview image collected by an image collecting device, where at least two faces exist in the preview image;
an image region dividing module 120 configured to determine at least two image regions from the preview image based on the at least two human faces;
a focusing area determination module 130 for determining a focusing area based on the at least two image areas;
an image device focusing module 140, configured to control the image capturing device to focus based on the focusing area.
As can be seen from the above description, the apparatus of this embodiment obtains a preview image acquired by the image acquisition device in which at least two faces exist, determines at least two image areas from the preview image based on the at least two faces, determines a focusing area based on the at least two image areas, and then controls the image acquisition device to focus based on the focusing area. This prevents the image acquisition device from automatically focusing directly on the largest face in the front row, improves the rationality of focusing, and thereby improves the quality of subsequently acquired images.
FIG. 7 is a block diagram illustrating a focusing apparatus according to yet another exemplary embodiment; the device of the embodiment can be applied to terminal equipment (such as a smart phone, a tablet computer, a notebook computer or wearable equipment and the like) with an image acquisition device such as a camera. Wherein: the preview image obtaining module 210, the image area dividing module 220, the focusing area determining module 230, and the image device focusing module 240 have the same functions as the preview image obtaining module 110, the image area dividing module 120, the focusing area determining module 130, and the image device focusing module 140 in the embodiment shown in fig. 6, and are not repeated herein.
As shown in fig. 7, the image area dividing module 220 may include:
a face recognition unit 221 configured to recognize the at least two faces from the preview image;
a face size determination unit 222, configured to determine a size of each face of the at least two faces;
an image area dividing unit 223 for determining at least two image areas from the preview image based on the size of the face.
In an embodiment, the image area dividing unit 223 may be further configured to:
clustering the at least two faces based on the sizes of the faces to obtain at least two classifications;
and dividing the human faces belonging to the same classification into the same image area.
In an embodiment, the image area dividing unit 223 may be further configured to:
matching the size of each face of the at least two faces against at least two preset image size ranges;
and dividing the faces matched to the same image size range into the same image area.
In an embodiment, the focusing area determining module 230 may include:
a face number determination unit 231 for determining the number of faces in each of the image regions;
an evaluation parameter determination unit 232 configured to determine an area evaluation parameter for each of the image areas based on the number of faces in each of the image areas and the size of the faces;
a focus area determination unit 233 for determining a focus area from the at least two image areas based on the area evaluation parameter.
In an embodiment, the evaluation parameter determining unit 232 may be further configured to:
determining a first evaluation parameter corresponding to the number of the human faces in each image area based on a preset first corresponding relation;
determining a second evaluation parameter corresponding to the size of the face in each image area based on a preset second corresponding relation;
determining a region evaluation parameter for each of the image regions based on a product of the first evaluation parameter and the second evaluation parameter;
on this basis, the focusing area determination unit 233 may be further configured to determine an image area with the largest area evaluation parameter of the at least two image areas as the focusing area.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 8 is a block diagram illustrating an electronic device in accordance with an example embodiment. For example, the apparatus 900 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like. In this embodiment, the electronic device may include an always-on image acquisition device for acquiring image information.
Referring to fig. 8, apparatus 900 may include one or more of the following components: processing component 902, memory 904, power component 906, multimedia component 908, audio component 910, input/output (I/O) interface 912, sensor component 914, and communication component 916.
The processing component 902 generally controls overall operation of the device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing element 902 may include one or more processors 920 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 902 can include one or more modules that facilitate interaction between processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operation at the device 900. Examples of such data include instructions for any application or method operating on device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 904 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power component 906 provides power to the various components of device 900. The power components 906 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 900.
The multimedia component 908 comprises a screen providing an output interface between the device 900 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 908 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 900 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 910 is configured to output and/or input audio signals. For example, audio component 910 includes a Microphone (MIC) configured to receive external audio signals when apparatus 900 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 904 or transmitted via the communication component 916. In some embodiments, audio component 910 also includes a speaker for outputting audio signals.
I/O interface 912 provides an interface between processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 914 includes one or more sensors for providing status assessment of various aspects of the apparatus 900. For example, sensor assembly 914 may detect an open/closed state of device 900, the relative positioning of components, such as a display and keypad of device 900, the change in position of device 900 or a component of device 900, the presence or absence of user contact with device 900, the orientation or acceleration/deceleration of device 900, and the change in temperature of device 900. The sensor assembly 914 may also include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 914 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 914 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate communications between the apparatus 900 and other devices in a wired or wireless manner. The apparatus 900 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, 4G or 5G or a combination thereof. In an exemplary embodiment, the communication component 916 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 916 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 904 comprising instructions, executable by the processor 920 of the apparatus 900 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. A focusing method, the method comprising:
acquiring a preview image acquired by an image acquisition device, wherein at least two human faces exist in the preview image;
determining at least two image regions from the preview image based on the at least two faces;
determining a focus area based on the at least two image areas;
and controlling the image acquisition device to focus based on the focusing area.
2. The method of claim 1, wherein determining at least two image regions from the preview image based on the at least two faces comprises:
identifying the at least two faces from the preview image;
determining the size of each face of the at least two faces;
at least two image regions are determined from the preview image based on the size of the face.
3. The method of claim 2, wherein determining at least two image regions from the preview image based on the size of the face comprises:
clustering the at least two faces based on the sizes of the faces to obtain at least two classifications;
and dividing the human faces belonging to the same classification into the same image area.
4. The method of claim 2, wherein determining at least two image regions from the preview image based on the size of the face comprises:
matching the size of each face of the at least two faces against at least two preset image size ranges;
and dividing the faces matched to the same image size range into the same image area.
5. The method of claim 2, wherein determining a focus area based on the at least two image areas comprises:
determining the number of human faces in each image area;
determining a region evaluation parameter of each image region based on the number of faces in each image region and the size of the faces;
determining a focus area from the at least two image areas based on the area evaluation parameter.
6. The method of claim 5, wherein determining the region evaluation parameter for each of the image regions based on the number of faces in each of the image regions and the size of the faces comprises:
determining a first evaluation parameter corresponding to the number of the human faces in each image area based on a preset first corresponding relation;
determining a second evaluation parameter corresponding to the size of the face in each image area based on a preset second corresponding relation;
determining a region evaluation parameter for each of the image regions based on a product of the first evaluation parameter and the second evaluation parameter;
the determining a focus area from the at least two image areas based on the area evaluation parameter comprises:
and determining the image area with the largest area evaluation parameter in the at least two image areas as a focusing area.
7. A focusing device, comprising:
the preview image acquisition module is used for acquiring a preview image acquired by the image acquisition device, wherein at least two human faces exist in the preview image;
an image region dividing module for determining at least two image regions from the preview image based on the at least two faces;
a focusing area determination module for determining a focusing area based on the at least two image areas;
and the image device focusing module is used for controlling the image acquisition device to focus based on the focusing area.
8. The apparatus of claim 7, wherein the image region dividing module comprises:
a face recognition unit configured to recognize the at least two faces from the preview image;
a face size determination unit for determining the size of each of the at least two faces;
and the image area dividing unit is used for determining at least two image areas from the preview image based on the size of the human face.
9. The apparatus of claim 8, wherein the image region dividing unit is further configured to:
clustering the at least two faces based on the sizes of the faces to obtain at least two classifications;
and dividing the human faces belonging to the same classification into the same image area.
10. The apparatus of claim 8, wherein the image region dividing unit is further configured to:
matching the size of each face of the at least two faces against at least two preset image size ranges;
and dividing the faces matched to the same image size range into the same image area.
11. The apparatus of claim 8, wherein the focusing area determining module comprises:
a face number determination unit for determining the number of faces in each of the image regions;
an evaluation parameter determination unit configured to determine a region evaluation parameter for each of the image regions based on the number of faces in each of the image regions and the size of the faces;
a focusing area determination unit for determining a focusing area from the at least two image areas based on the area evaluation parameter.
12. The apparatus of claim 11, wherein the evaluation parameter determination unit is further configured to:
determining a first evaluation parameter corresponding to the number of the human faces in each image area based on a preset first corresponding relation;
determining a second evaluation parameter corresponding to the size of the face in each image area based on a preset second corresponding relation;
determining a region evaluation parameter for each of the image regions based on a product of the first evaluation parameter and the second evaluation parameter;
the focusing area determining unit is further configured to determine an image area with the largest area evaluation parameter of the at least two image areas as a focusing area.
13. An electronic device, characterized in that the device comprises:
a processor and a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a preview image acquired by an image acquisition device, wherein at least two human faces exist in the preview image;
determining at least two image regions from the preview image based on the at least two faces;
determining a focus area based on the at least two image areas;
and controlling the image acquisition device to focus based on the focusing area.
14. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing:
acquiring a preview image acquired by an image acquisition device, wherein at least two human faces exist in the preview image;
determining at least two image regions from the preview image based on the at least two faces;
determining a focus area based on the at least two image areas;
and controlling the image acquisition device to focus based on the focusing area.
CN202110044534.4A (priority 2021-01-13, filed 2021-01-13) Focusing method, device, equipment and storage medium; published as CN112822405A; status Pending

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110044534.4A | 2021-01-13 | 2021-01-13 | Focusing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110044534.4A | 2021-01-13 | 2021-01-13 | Focusing method, device, equipment and storage medium

Publications (1)

Publication Number | Publication Date
CN112822405A | 2021-05-18

Family

ID=75869481

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110044534.4A (CN112822405A, Pending) | Focusing method, device, equipment and storage medium | 2021-01-13 | 2021-01-13

Country Status (1)

Country | Link
CN | CN112822405A


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20070110422A1 * | 2003-07-15 | 2007-05-17 | Yoshihisa Minato | Object determining device and imaging apparatus
US20090015681A1 * | 2007-07-12 | 2009-01-15 | Sony Ericsson Mobile Communications Ab | Multipoint autofocus for adjusting depth of field
CN101378458A * | 2007-08-30 | 2009-03-04 | Samsung Techwin Co., Ltd. | Digital photographing apparatus and method using face recognition function
CN102338972A * | 2010-07-21 | 2012-02-01 | Altek Corporation | Assistant focusing method using multiple face blocks

Similar Documents

Publication Publication Date Title
CN106331504B (en) Shooting method and device
CN107463052B (en) Shooting exposure method and device
CN108462833B (en) Photographing method, photographing device and computer-readable storage medium
CN110569822A (en) image processing method and device, electronic equipment and storage medium
CN107015648B (en) Picture processing method and device
EP3113071A1 (en) Method and device for acquiring iris image
WO2017140109A1 (en) Pressure detection method and apparatus
CN113364965A (en) Shooting method and device based on multiple cameras and electronic equipment
CN105959563B (en) Image storage method and image storage device
CN107122697B (en) Automatic photo obtaining method and device and electronic equipment
CN108154090B (en) Face recognition method and device
CN112004020B (en) Image processing method, image processing device, electronic equipment and storage medium
CN106469446B (en) Depth image segmentation method and segmentation device
CN110297929A (en) Image matching method, device, electronic equipment and storage medium
CN113315904B (en) Shooting method, shooting device and storage medium
CN111586296B (en) Image capturing method, image capturing apparatus, and storage medium
CN114979455A (en) Photographing method, photographing device and storage medium
CN112822405A (en) Focusing method, device, equipment and storage medium
CN114418865A (en) Image processing method, device, equipment and storage medium
CN107707819B (en) Image shooting method, device and storage medium
CN112883791A (en) Object recognition method, object recognition device, and storage medium
CN113079312A (en) Method and device for adjusting focal length of camera, electronic equipment and storage medium
CN111726531A (en) Image shooting method, processing method, device, electronic equipment and storage medium
CN114187874A (en) Brightness adjusting method and device and storage medium
CN106131403B (en) Touch focusing method and device

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20210518)