CN113170053A - Imaging method, imaging device, and storage medium - Google Patents

Imaging method, imaging device, and storage medium

Info

Publication number
CN113170053A
Authority
CN
China
Prior art keywords
focusing
matching
frame image
areas
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080006539.1A
Other languages
Chinese (zh)
Inventor
程正喜
封旭阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SZ DJI Technology Co Ltd
Original Assignee
SZ DJI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co Ltd filed Critical SZ DJI Technology Co Ltd
Publication of CN113170053A publication Critical patent/CN113170053A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/67 Focus control based on electronic image sensor signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Automatic Focus Adjustment (AREA)

Abstract

A photographing method, a photographing apparatus, and a storage medium, the method comprising: acquiring, in a current focusing mode, N focusing areas of N frame images in an image sequence (S101); determining whether to switch the current focusing mode according to focusing targets in the N focusing areas (S102); and photographing in the determined focusing mode (S103).

Description

Imaging method, imaging device, and storage medium
Technical Field
The present disclosure relates to the field of photography technologies, and in particular, to a photography method, a photography device, and a storage medium.
Background
Continuous autofocus (Auto Focus-Continuous, AFC) means that the photographing device performs the focusing operation continuously, regardless of whether the shutter is half-pressed. AFC can be broadly divided into two categories: (1) the continuous autofocus Auto mode (AFC-Auto), in which the user does not explicitly indicate a focusing area or target and only performs composition, while autofocus (AF) is controlled automatically by the system; and (2) the continuous autofocus Tracking mode (AFC-Tracking), which tracks focus continuously, i.e. the user explicitly specifies a focusing target and focus is kept on the user-selected target throughout shooting.
In the AFC-Auto mode, the focusing area may jump randomly from one target to another during focusing, so the picture easily appears to hunt for focus back and forth. The AFC-Tracking mode has the advantage over the AFC-Auto mode that a relatively stable focusing area can be tracked, giving the user a better focusing experience. However, existing photographing apparatuses cannot automatically switch from the AFC-Auto mode to the AFC-Tracking mode.
Disclosure of Invention
Based on this, the application provides an imaging method, an imaging device and a storage medium.
In a first aspect, the present application provides a shooting method, including:
acquiring N focusing areas of N frames of images in an image sequence in a current focusing mode;
determining whether to switch the current focusing mode according to focusing targets in the N focusing areas;
and shooting in the determined focusing mode.
In a second aspect, the present application provides a photographing apparatus, the apparatus comprising: a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the steps of:
acquiring N focusing areas of N frames of images in an image sequence in a current focusing mode;
determining whether to switch the current focusing mode according to focusing targets in the N focusing areas;
and shooting in the determined focusing mode.
In a third aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the photographing method as described above.
The embodiments of the present application provide a photographing method, a photographing apparatus, and a storage medium. N focusing areas of N frame images in an image sequence are acquired in a current focusing mode; whether to switch the current focusing mode is determined according to the focusing targets in the N focusing areas; and shooting is performed in the determined focusing mode. By acquiring the N focusing areas of the N frame images in the current focusing mode, the scene the user is currently shooting, and thus the focusing scene, can be identified stably and automatically from the focusing targets in the N focusing areas, and whether to switch the current focusing mode is determined according to the identification result.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present application; other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
FIG. 1 is a schematic flow chart diagram of an embodiment of a photographing method of the present application;
FIG. 2 is a diagram illustrating an embodiment of focus mode switching in the photographing method according to the present application;
FIG. 3 is a schematic flow chart diagram of another embodiment of the photographing method of the present application;
FIG. 4 is a schematic flow chart diagram of another embodiment of the photographing method of the present application;
FIG. 5 is a schematic diagram illustrating an embodiment of matching using center distance in the photographing method according to the present application;
fig. 6 is a schematic structural diagram of an embodiment of a camera according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Continuous autofocus AFC can be broadly classified into the following two categories: a continuous Auto-focus Auto mode AFC-Auto and a continuous Auto-focus Tracking mode AFC-Tracking. The AFC-Tracking mode has the advantage over the AFC-Auto mode that a relatively stable focusing area can be tracked, and a better focusing experience can be brought to a user. However, the photographing apparatus cannot automatically switch from the AFC-Auto mode to the AFC-Tracking mode.
In the photographing method of the present application, N focusing areas of N frame images in an image sequence are acquired; whether to switch the current focusing mode is determined according to the focusing targets in the N focusing areas; and shooting is performed in the determined focusing mode. By acquiring the N focusing areas of the N frame images in the current focusing mode, the scene the user is currently shooting, and thus the focusing scene, can be identified stably and automatically from the focusing targets in the N focusing areas, whether to switch the current focusing mode is determined according to the identification result, and shooting is performed in the determined focusing mode. For example, whether to automatically switch to the AFC-Tracking mode can be determined according to whether the focusing targets in the N focusing areas include the same target; the switch is recognized and performed automatically without any user operation, giving the user a better and smoother autofocus experience.
Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic flowchart of an embodiment of a shooting method of the present application, where the method includes:
step S101: in the current focusing mode, N focusing areas of N frames of images in the image sequence are obtained.
Step S102: and determining whether to switch the current focusing mode according to focusing targets in the N focusing areas.
Step S103: and shooting in the determined focusing mode.
Focusing is the process of adjusting the focusing mechanism of the photographing device to change the object distance and the image distance so that the subject (the photographed object or target) is imaged clearly. A typical photographing apparatus offers several focusing modes, including a manual focusing mode and an autofocus (Auto Focus, AF) mode. Different subjects and scenes call for different focusing modes, and selecting a suitable mode makes it easier to capture a good image. The focusing mode in the embodiments of the present application may be a manual focusing mode or an autofocus mode. For example, when the photographing device detects no manual focusing and the focusing target of the previous N frame images is the same target, it can automatically switch to the autofocus mode, and it switches back to the manual focusing mode as soon as manual focusing is detected. Since ordinary users mostly use autofocus in the majority of application scenarios, the focusing mode in the embodiments of the present application may be the more widely used autofocus mode.
Autofocus modes include, but are not limited to, a single autofocus (Auto Focus-Single, AFS) mode, a continuous autofocus mode, and the like. Single autofocus is the basic focusing mode: the focusing operation is performed only when the shutter is half-pressed, the basic steps being framing, composing, half-pressing the shutter, focusing, and shooting. Continuous autofocus (AFC) means that the photographing device performs the focusing operation continuously, regardless of whether the shutter is half-pressed. AFC can be divided into two general categories: (1) AFC-Auto, in which the user does not explicitly indicate a focusing area or target and only performs composition, with AF controlled automatically by the system; and (2) AFC-Tracking, which tracks focus continuously, i.e. the user explicitly specifies a focusing target and focus is kept on the user-selected target during shooting.
The N frame images in the image sequence can be used to identify the scene the user is currently shooting, and further the focusing scene, so that the user's current region or object of interest can be judged, providing a basis for subsequently switching the focusing mode.
The focusing area of a frame image may be the area focused in that frame image in the current focusing mode, or it may be an output matching area that matches that focusing area; the output matching area may partially or completely contain the focusing area. In this embodiment, matching between the output matching area and the focusing area may mean that the matching degree between the output matching area of the frame image and the focusing area of the frame image in the current focusing mode is greater than or equal to a preset matching degree, or that the overlap degree between the two is greater than or equal to a preset overlap degree, and so on.
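The "preset overlap degree" criterion above is commonly formalized as the intersection-over-union (IoU) of the two rectangles. The patent does not fix a specific metric, so the sketch below is an illustrative assumption, with areas given as (x, y, w, h) boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x, y, w, h)."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    # Width/height of the overlapping rectangle, clamped at zero.
    ix = max(0, min(ax1 + aw, bx1 + bw) - max(ax1, bx1))
    iy = max(0, min(ay1 + ah, by1 + bh) - max(ay1, by1))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def regions_match(focus_box, output_box, overlap_threshold=0.5):
    """An output area matches the focusing area when the overlap degree
    reaches the preset overlap degree (threshold is an assumption)."""
    return iou(focus_box, output_box) >= overlap_threshold
```

With a threshold of 0.5, an output area counts as the output matching area only when the shared region covers at least half of the two boxes' combined extent.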
N can be set according to user requirements; setting N larger means that a longer period of stable focusing is required before the decision of whether to switch the focusing mode is triggered. The N frame images may be N consecutive frames or N non-consecutive frames; it is generally more common to use N consecutive frames to identify the scene the user is currently shooting.
Each focusing operation is a process of bringing a target (the photographed object) into clear focus, and each frame image records the result of one focusing operation. According to the focusing targets in the N focusing areas, the scene the user is currently shooting, and thus the focusing scene, can be identified stably and automatically, and whether to switch the current focusing mode is determined according to the identification result.
For example, if the current focusing mode is single autofocus, N focusing areas of N consecutive frame images in the image sequence are acquired in the single autofocus mode; if the focusing targets in the N focusing areas are detected to be the same, the device can automatically switch to the continuous autofocus mode at that point.
In the method, N focusing areas of N frame images in an image sequence are acquired; whether to switch the current focusing mode is determined according to the focusing targets in the N focusing areas; and shooting is performed in the determined focusing mode. By acquiring the N focusing areas of the N frame images in the current focusing mode, the scene the user is currently shooting, and thus the focusing scene, can be identified stably and automatically from the focusing targets in the N focusing areas, whether to switch the current focusing mode is determined according to the identification result, and shooting is performed in the determined focusing mode. For example, whether to automatically switch to the AFC-Tracking mode can be determined according to whether the focusing targets in the N focusing areas include the same target; the switch is recognized and performed automatically without any user operation, giving the user a better and smoother autofocus experience.
In an embodiment, step S102, determining whether to switch the current focusing mode according to the focusing targets in the N focusing areas, may include: determining whether to switch the current focusing mode according to whether the focusing targets in the N focusing areas include the same target.
The embodiment of the present application is an optimization of the focusing system: it can automatically detect that the user has been focusing on the same object all along, without any user operation, and automatically switch the focusing mode according to the detection result.
In an embodiment, in step S102, determining whether to switch the current focusing mode according to whether the focusing targets in the N focusing areas include the same target may include: if the focusing targets in the N focusing areas include the same target and the current focusing mode is the continuous autofocus Auto mode AFC-Auto, controlling the photographing device to switch to the continuous autofocus Tracking mode AFC-Tracking.
During focusing, AFC-Auto may keep trying to focus back and forth; if the focusing target is unclear or the focusing system cannot lock onto a focus, the picture easily appears to hunt continuously. AFC-Tracking, by contrast, is based on stable tracking of a target and can keep focusing on the area or target designated by the user even while the target keeps moving. Its advantage over AFC-Auto is that it tracks a relatively stable focusing area, whereas AFC-Auto focuses on salient areas that can jump randomly from one object to another during focusing; AFC-Tracking can therefore give the user a better focusing experience than AFC-Auto.
Wherein, if the current focusing mode is the AFC-Tracking mode, the method may further include: if it is detected that the focusing target of the focusing area in the AFC-Tracking mode is no longer in the frame image, controlling the photographing device to switch to the AFC-Auto mode.
As shown in fig. 2, fig. 2 is a schematic diagram of an embodiment of switching the focusing mode in the photographing method of the present application. In line 1, Frame 0 to Frame N represent a run of N consecutive frame images in the image sequence; each of lines 2 to 4 represents one focusing situation over those N consecutive frames. ROI-1, ROI-2, and ROI-3 denote three different focusing areas containing three different focusing targets, and the AFC-Auto and AFC-Tracking labels on the two sides indicate the focusing mode before Frame 0 and after Frame N, respectively:
the focusing areas of Frame 0-Frame N in the 2 nd row are ROI-1, ROI-3, ROI-1, … and ROI-3 in sequence;
the focusing areas of Frame 0-Frame N in the 3 rd row are all ROI-2;
the focusing areas of Frame 0-Frame N in the 4 th row are ROI-2, ROI-1, … and ROI-1 in sequence;
in general, if the focusing areas of N consecutive frames are the same target, it can be considered that the user pays attention to the target at this time, and the switching of the focusing mode can be triggered at this time, and the AFC-Auto mode is switched to the AFC-Tracking mode, that is, the situation in line 3 in fig. 2; if the focused target in the focused region ROI-2 in the AFC-Tracking mode is lost and the focused region ROI-2 is lost (for example, the target is not in the shot picture due to occlusion, etc.), it can be considered that the AFC-Tracking mode can be automatically exited at this time. And judging whether the user continuously focuses on the same target or not by identifying the change condition of the focusing area in the continuous N frames of images, and further determining whether to switch the focusing mode or not.
In an embodiment, if the photographing device does not have target recognition and target tracking functions and cannot output an output area containing a target, step S101 may directly acquire the focusing areas, i.e., may include:
respectively acquiring the N focusing areas of the N consecutive frame images in the image sequence in the current focusing mode.
At this time, in step S102, determining whether the focusing targets in the N focusing areas include the same target includes sub-step S102A1, sub-step S102A2, and sub-step S102A3, as shown in FIG. 3.
Sub-step S102A1: in the current focusing mode, respectively extracting the feature points of the focusing area of each frame image in the image sequence.
Sub-step S102A2: matching the feature points of the N focusing areas of the N consecutive frame images.
Sub-step S102A3: if the feature points of the N focusing areas of the N consecutive frame images are successfully matched with one another, determining that the focusing targets in the N focusing areas include the same target.
Feature points may be points where the image gray value changes sharply, or points of large curvature on image edges (i.e., the intersection of two edges); they generally include color feature points and texture feature points. Feature points reflect the essential characteristics of an image and can identify the targets in it. Feature-point extraction methods generally include linear projection analysis and nonlinear feature extraction. Image matching can be completed by matching feature points; if the feature points of the N focusing areas all match one another successfully, it is determined that the focusing targets in the N focusing areas include the same target.
In this embodiment, in the current focusing mode, a focusing area of each frame image is obtained, feature points of the focusing area of each frame image are extracted, then, feature points of N focusing areas of consecutive N frame images are matched, and if the feature points of the N focusing areas are successfully matched with each other, it is determined that targets in the N focusing areas include the same target.
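The patent does not name a specific feature detector or matcher (in practice ORB or SIFT descriptors would be typical choices), so the following sketch models descriptors as small numeric vectors matched greedily by Euclidean distance; the distance and ratio thresholds are assumptions:

```python
def match_ratio(desc_a, desc_b, max_dist=2.0):
    """Fraction of descriptors in desc_a that find a neighbour in desc_b
    within max_dist (greedy one-to-one nearest-neighbour matching)."""
    if not desc_a:
        return 0.0
    unused = list(desc_b)
    matched = 0
    for d in desc_a:
        best_i, best = None, max_dist
        for i, e in enumerate(unused):
            dist = sum((x - y) ** 2 for x, y in zip(d, e)) ** 0.5
            if dist <= best:
                best_i, best = i, dist
        if best_i is not None:
            unused.pop(best_i)  # each descriptor may match only once
            matched += 1
    return matched / len(desc_a)

def same_target(region_descriptors, threshold=0.7):
    """True when every consecutive pair of the N focusing areas matches,
    mirroring sub-steps S102A2 and S102A3."""
    return all(match_ratio(a, b) >= threshold
               for a, b in zip(region_descriptors, region_descriptors[1:]))
```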
In another embodiment, if the photographing apparatus has target recognition and target tracking functions and can output an output area containing a target, step S101, acquiring N focusing areas of N frame images in an image sequence in the current focusing mode, may include sub-step S101A1 and sub-step S101A2.
Sub-step S101A1: in the current focusing mode, acquiring N output matching areas of N consecutive frame images in the image sequence, wherein the output matching area of each frame image matches the focusing area of the corresponding frame image in the current focusing mode.
Sub-step S101A2: taking the N output matching areas of the N consecutive frame images in the image sequence as the N focusing areas of the N frame images in the image sequence.
In sub-step S101A1, acquiring the N output matching areas of the N consecutive frame images in the image sequence in the current focusing mode may include sub-step S101A11 and sub-step S101A12, as shown in FIG. 4.
Sub-step S101A11: in the current focusing mode, acquiring a plurality of output areas of each frame image in the image sequence through a multi-target tracking algorithm.
Sub-step S101A12: matching the focusing area of each frame image with the plurality of output areas of the corresponding frame image to obtain the output matching area of each frame image that matches the focusing area of the corresponding frame image, thereby obtaining the N output matching areas of the N consecutive frame images in the image sequence.
Target tracking means selecting a target to follow in one frame of a given real-time image sequence and computing its size and position in subsequent frames; Multi-Object Tracking (MOT) refers to tracking multiple targets simultaneously. Multi-target tracking algorithms include, but are not limited to, Siamese-network-based tracking algorithms (e.g., the fully-convolutional Siamese network for object tracking) and the correlation-filter-based DSST algorithm (Accurate Scale Estimation for Robust Visual Tracking). It should be noted that many multi-target tracking algorithms exist in the prior art, and different ones can be selected according to the application platform and computing resources.
In this embodiment, a plurality of output areas of each frame image in the image sequence are obtained through a multi-target tracking algorithm. In general, each output area may contain one target, and among the targets of the plurality of output areas may be the focusing target of the focusing area. By matching the focusing area of each frame image with the plurality of output areas of the corresponding frame image (i.e., the same frame image), the output matching area of each frame image that matches its focusing area can be obtained, and thus the N output matching areas of the N consecutive frame images in the image sequence.
In many scenarios, the focusing mode can be switched automatically without the user switching it manually. A classic example is a long take, during which the user may pay attention to different targets in different time slices. Suppose the long take consists of time slices T1, T2, ..., TN: the target focused in T1 may be ROI-1; in T2, ROI-1 may be absent and the user attends only to ROI-2. Focus-mode switching can then follow this scene recognition: in T1 the mode focusing on ROI-1 is AFC-Tracking; when ROI-1 is lost, the device switches to AFC-Auto; on entering T2, the focusing target ROI-2 is automatically detected and the device switches back to AFC-Tracking to track and focus on ROI-2.
In sub-step S101A12, matching the focusing area of each frame image with the plurality of output areas of the corresponding frame image to obtain the output matching area of each frame image that matches the focusing area of the corresponding frame image may further include:
(A1) matching the focusing area of each frame image with the plurality of output areas of the corresponding frame image;
(A2) if the matching is successful, marking the corresponding target in the matched output matching area of each frame image to obtain a marked output matching area of each frame image that matches the focusing area of the corresponding frame image.
At this time, in step S102, determining whether the focusing targets in the N focusing areas include the same target may include: judging whether the targets marked in the N marked output matching areas of the N consecutive frame images in the image sequence are the same target; and if so, determining that the focusing targets in the N focusing areas include the same target.
When the matching is successful, the target in the successfully matched output matching area is marked; if the N successfully matched output matching areas of the N consecutive frame images are all marked with the same target, it is determined that the focusing targets in the N focusing areas include the same target.
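The marking logic can be sketched as follows, with tracker-assigned track ids standing in for the marks and the matching criterion left as a pluggable predicate (center distance, matching degree, or both); all names and the default criterion are illustrative assumptions:

```python
def center_distance_matcher(threshold):
    """Returns a predicate: boxes (x, y, w, h) match when their center
    distance is below threshold (one possible matching criterion)."""
    def match(box_a, box_b):
        ax, ay = box_a[0] + box_a[2] / 2.0, box_a[1] + box_a[3] / 2.0
        bx, by = box_b[0] + box_b[2] / 2.0, box_b[1] + box_b[3] / 2.0
        return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < threshold
    return match

def focus_target_ids(frames, matcher):
    """frames: per-frame (focus_box, [(track_id, output_box), ...]).
    Returns, for each frame, the track id of the first output area the
    matcher accepts as the output matching area, or None if none match."""
    ids = []
    for focus_box, outputs in frames:
        matched = [tid for tid, box in outputs if matcher(focus_box, box)]
        ids.append(matched[0] if matched else None)
    return ids

def same_focus_target(frames, matcher):
    """True when all N frames were marked with the same (non-None) target."""
    ids = focus_target_ids(frames, matcher)
    return ids[0] is not None and len(set(ids)) == 1
```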
In an embodiment, in A1, matching the focusing area of each frame image with the plurality of output areas of the corresponding frame image may include: determining a plurality of center distances between the center of the focusing area of each frame image and the centers of the plurality of output areas of the corresponding frame image. In this case, in A2, determining that the matching is successful may include: comparing the plurality of center distances with a distance threshold; if a center distance is smaller than the distance threshold, determining that the matching is successful, the output area corresponding to that center distance being the output matching area matched with the focusing area.
As shown in fig. 5, for each frame image the focusing area AA has a center O, and the output areas BB1, BB2, BB3, ..., BBn have centers O1, O2, O3, ..., On; the center distances between O and O1, O2, O3, ..., On can be computed simply and conveniently. A distance threshold can then be determined: when the center distance between the center of the focusing area of the frame image and the center of an output area of the corresponding frame image is smaller than the distance threshold, the focusing area can be considered to match that output area, and the output area whose center distance is below the threshold is the output matching area matched with the focusing area.
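A minimal sketch of the fig. 5 criterion follows; function names are illustrative, areas are (x, y, w, h) boxes, and the threshold value is an assumption:

```python
def center(box):
    """Center point of an axis-aligned box (x, y, w, h)."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def match_by_center_distance(focus_box, output_boxes, dist_threshold):
    """Return indices of the output areas whose center lies within
    dist_threshold of the focusing-area center (the fig. 5 criterion)."""
    ox, oy = center(focus_box)
    matches = []
    for i, box in enumerate(output_boxes):
        cx, cy = center(box)
        d = ((cx - ox) ** 2 + (cy - oy) ** 2) ** 0.5
        if d < dist_threshold:
            matches.append(i)
    return matches
```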
In another embodiment, in A1, matching the focusing area of each frame image with the plurality of output areas of the corresponding frame image may include: respectively extracting feature points of the focusing area of each frame image and of the plurality of output areas of the corresponding frame image; and matching the feature points of the focusing area with the feature points of each of the plurality of output areas to obtain a plurality of matching degrees. In this case, in A2, determining that the matching is successful may include: if any of the plurality of matching degrees is greater than or equal to a threshold matching degree, determining that the matching is successful, the output area corresponding to that matching degree being the output matching area matched with the focusing area.
In this embodiment, a threshold matching degree is predetermined, and the feature points of the focusing area of each frame image are matched with the feature points of the plurality of output areas of the corresponding frame image to obtain a plurality of matching degrees. If any matching degree is greater than or equal to the threshold matching degree, the matching is successful, and the corresponding output area is the output matching area matched with the focusing area.
In a further embodiment, in order to improve the matching accuracy, the center distance between the center of the focusing area and the center of the output area and the matching degree between the focusing area and the output area are combined together to judge the matching condition of the focusing area and the output area.
In A1, matching the focusing area of each frame image with the plurality of output areas of the corresponding frame image may include: determining a plurality of center distances between the center of the focusing area of each frame image and the centers of the plurality of output areas of the corresponding frame image, and comparing the center distances with a distance threshold; and extracting feature points of the focusing area of each frame image and of the plurality of output areas of the corresponding frame image, and matching the feature points of the focusing area with those of each output area to obtain a plurality of matching degrees. In this case, in A2, determining that the matching is successful may include: if a center distance is smaller than the distance threshold and the corresponding matching degree is greater than or equal to the threshold matching degree, determining that the matching is successful; the output area whose center distance is below the distance threshold and whose matching degree reaches the threshold matching degree is the output matching area matched with the focusing area.
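The combined criterion can be sketched as follows. The matching degree is modeled here as the fraction of the focusing area's feature descriptors also present in the output area, with descriptors reduced to hashable tokens; this modeling and both thresholds are illustrative assumptions:

```python
def combined_match(focus_box, focus_desc, outputs,
                   dist_threshold=50.0, degree_threshold=0.7):
    """An output area matches only if BOTH its center distance is below
    dist_threshold AND its matching degree reaches degree_threshold.
    outputs: list of (box, descriptors); boxes are (x, y, w, h)."""
    fx = focus_box[0] + focus_box[2] / 2.0
    fy = focus_box[1] + focus_box[3] / 2.0
    matched = []
    for i, (box, desc) in enumerate(outputs):
        cx = box[0] + box[2] / 2.0
        cy = box[1] + box[3] / 2.0
        dist = ((cx - fx) ** 2 + (cy - fy) ** 2) ** 0.5
        # Matching degree: shared descriptor tokens over focus descriptors.
        degree = (len(set(focus_desc) & set(desc)) / len(focus_desc)
                  if focus_desc else 0.0)
        if dist < dist_threshold and degree >= degree_threshold:
            matched.append(i)
    return matched
```

Requiring both conditions rejects a nearby output area containing a different target (low matching degree) as well as a similar-looking target elsewhere in the frame (large center distance).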
In an embodiment, if the matching is unsuccessful, it indicates that the current multi-target tracking algorithm cannot identify and track the focusing target in the focusing area. In this case, the multi-target tracking algorithm may learn to identify the focusing target in the focusing area, so that a corresponding output area can be output in subsequent frame images. The method may therefore further include: if the matching is unsuccessful, taking the focusing area of the frame image for which matching was unsuccessful as a new output area of the multi-target tracking algorithm.
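A minimal sketch of this fallback behaviour (the `MultiTargetTracker` class here is a hypothetical stand-in for the multi-target tracking algorithm, not the patent's tracker):

```python
class MultiTargetTracker:
    """Stand-in tracker that simply remembers the regions (boxes)
    it outputs for each frame."""
    def __init__(self):
        self.output_regions = []

    def add_region(self, box):
        # register a new target region for subsequent frames
        self.output_regions.append(box)

def update_tracker_on_miss(tracker, focus_box, matched_index):
    """If matching failed (matched_index is None), register the
    focusing area as a new output area so the tracker can learn the
    focusing target for subsequent frames. Returns True when a new
    target was added."""
    if matched_index is None:
        tracker.add_region(focus_box)
        return True
    return False
```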
In further application scenarios, the focusing target includes a specified target, and the method may further include: if the current focusing mode is the AFC-Tracking mode, assisting in tracking the specified target and focusing on the specified target through a specified-target detection algorithm.
Wherein the specified target comprises a human face.
For example, when a person is being photographed, the focusing area may always be the face area. If the mode automatically switches to the AFC-Tracking focusing mode, face detection can be performed and the current focusing area can be matched with the face detection result to judge whether the current focusing area is a face. If it is, a face detection algorithm can be introduced subsequently to assist in tracking the face and focusing on it.
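One plausible way to match the current focusing area against the face detection result is an intersection-over-union (IoU) test on bounding boxes (an assumption for illustration; the patent does not specify the matching metric):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def focus_is_face(focus_box, face_boxes, iou_thresh=0.5):
    """Judge whether the current focusing area is a face by matching
    it against the face-detection boxes of the frame."""
    return any(iou(focus_box, f) >= iou_thresh for f in face_boxes)
```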
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of an embodiment of the photographing apparatus of the present application. It should be noted that the photographing apparatus of this embodiment can execute the steps of the photographing method described above; for details, reference is made to the relevant description of the photographing method, which is not repeated here.
The photographing apparatus 100 includes: a memory 1 and a processor 2; the processor 2 and the memory 1 are connected by a bus.
The processor 2 may be a microcontroller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
The memory 1 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
The memory 1 is used for storing a computer program; the processor 2 is configured to execute the computer program and, when executing the computer program, implement the following steps:
acquiring N focusing areas of N frames of images in an image sequence in a current focusing mode; determining whether to switch the current focusing mode according to focusing targets in the N focusing areas; and shooting in the determined focusing mode.
Wherein the processor, when executing the computer program, implements the steps of: and determining whether to switch the current focusing mode according to whether the focusing targets in the N focusing areas comprise the same target.
Wherein the processor, when executing the computer program, implements the steps of: and if the focusing targets in the N focusing areas comprise the same target and the current focusing mode is an automatic mode AFC-Auto of continuous automatic focusing, controlling the camera device to switch to a Tracking mode AFC-Tracking of continuous automatic focusing.
Wherein the processor, when executing the computer program, implements the steps of: if the current focusing mode is the AFC-Tracking mode and it is detected that the focusing target in the focusing area in the AFC-Tracking mode is no longer in the frame image, controlling the camera device to switch to the AFC-Auto mode.
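The two switching rules restated above — AFC-Auto to AFC-Tracking when N consecutive focusing areas contain the same target, and AFC-Tracking back to AFC-Auto when the tracked target leaves the frame — can be sketched as a small decision function (illustrative only; the function name and flag arguments are assumptions, not the claimed implementation):

```python
AFC_AUTO, AFC_TRACKING = "AFC-Auto", "AFC-Tracking"

def next_focus_mode(current_mode, same_target_in_n_frames,
                    target_still_in_frame):
    """Decide whether to switch the focusing mode.

    * AFC-Auto -> AFC-Tracking when the focusing targets of the N
      focusing areas include the same target (a stable subject).
    * AFC-Tracking -> AFC-Auto when the tracked focusing target is
      no longer in the frame image.
    Otherwise the current mode is kept."""
    if current_mode == AFC_AUTO and same_target_in_n_frames:
        return AFC_TRACKING
    if current_mode == AFC_TRACKING and not target_still_in_frame:
        return AFC_AUTO
    return current_mode
```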
Wherein the processor, when executing the computer program, implements the steps of: and respectively acquiring N focusing areas of the continuous N frames of images in the image sequence in the current focusing mode.
Wherein the processor, when executing the computer program, implements the steps of: respectively extracting the characteristic points of the focusing area of each frame image in the image sequence in the current focusing mode; matching the feature points of the N focusing areas of the continuous N frames of images; and if the feature points of the N focusing areas of the continuous N frames of images are successfully matched with each other, determining that the focusing targets in the N focusing areas comprise the same target.
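A sketch of the same-target decision over N consecutive frames (assuming, for illustration, that "successfully matched with each other" means every pair of consecutive focusing areas matches; `matcher` is a hypothetical feature-matching function returning a degree in [0, 1]):

```python
def same_target_across_frames(frame_feats, matcher, thresh=0.6):
    """frame_feats: feature-point sets of the N focusing areas of N
    consecutive frames. The focusing targets are judged to include
    the same target only when every consecutive pair of focusing
    areas matches successfully."""
    return all(matcher(frame_feats[i], frame_feats[i + 1]) >= thresh
               for i in range(len(frame_feats) - 1))
```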
Wherein the processor, when executing the computer program, implements the steps of: under a current focusing mode, acquiring N output matching areas of continuous N frames of images in an image sequence, wherein the output matching area of each frame of image is matched with a focusing area of a corresponding frame of image under the current focusing mode; and taking N output matching areas of the continuous N frames of images in the image sequence as N focusing areas of the N frames of images in the image sequence.
Wherein the processor, when executing the computer program, implements the steps of: under the current focusing mode, acquiring a plurality of output areas of each frame image in the image sequence through a multi-target tracking algorithm; and matching the focusing area of each frame image with a plurality of output areas of the corresponding frame image to obtain an output matching area of each frame image matched with the focusing area of the corresponding frame image, and further obtaining N output matching areas of the continuous N frame images in the image sequence.
Wherein the processor, when executing the computer program, implements the steps of: matching the focusing area of each frame image with a plurality of output areas of the corresponding frame image; and if the matching is successful, marking the corresponding target in the output matching area in each frame image to obtain the marked output matching area of each frame image.
Wherein the processor, when executing the computer program, implements the steps of: judging whether the targets marked in the N marked output matching areas of the consecutive N frames of images in the image sequence are the same target; and if so, determining that the focusing targets in the N focusing areas comprise the same target.
Wherein the processor, when executing the computer program, implements the steps of: a plurality of center distances between the center of the in-focus region of each frame image and the centers of the plurality of output regions of the corresponding frame image, respectively, are determined.
Wherein the processor, when executing the computer program, implements the steps of: comparing the plurality of center distances with a distance threshold; and if a center distance smaller than the distance threshold exists among the plurality of center distances, determining that the matching is successful, wherein the output area corresponding to the center distance smaller than the distance threshold is the output matching area.
Wherein the processor, when executing the computer program, implements the steps of: respectively extracting characteristic points of a focusing area of each frame image and a plurality of output areas of the corresponding frame image; and respectively matching the characteristic points of the focusing area of each frame image with the characteristic points of a plurality of output areas of the corresponding frame image to obtain a plurality of matching degrees.
Wherein the processor, when executing the computer program, implements the steps of: if a matching degree greater than or equal to the threshold matching degree exists among the plurality of matching degrees, determining that the matching is successful, wherein the output area corresponding to the matching degree greater than or equal to the threshold matching degree is the output matching area.
Wherein the processor, when executing the computer program, implements the steps of: determining a plurality of center distances between the center of the focusing area of each frame image and the centers of the plurality of output areas of the corresponding frame image, respectively, and comparing the plurality of center distances with a distance threshold; extracting the feature points of the focusing area of each frame image and of the plurality of output areas of the corresponding frame image, and matching the feature points of the focusing area of each frame image with the feature points of the plurality of output areas of the corresponding frame image to obtain a plurality of matching degrees.
Wherein the processor, when executing the computer program, implements the steps of: if a center distance smaller than the distance threshold exists among the plurality of center distances and a matching degree greater than or equal to the threshold matching degree exists among the plurality of matching degrees, determining that the matching is successful, wherein the output area whose center distance is smaller than the distance threshold and whose matching degree is greater than or equal to the threshold matching degree is the output matching area.
Wherein the processor, when executing the computer program, implements the steps of: and if the matching is unsuccessful, taking the focusing area of the frame image with unsuccessful matching as a new output area of the multi-target tracking algorithm.
Wherein the focusing target comprises a specified target, and the processor, when executing the computer program, implements the steps of: if the current focusing mode is the AFC-Tracking mode, assisting in tracking the specified target and focusing on the specified target through a specified-target detection algorithm.
Wherein the specified target comprises a human face.
The present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the photographing method described in any one of the above embodiments. For a detailed description, reference is made to the relevant sections above, which are not repeated here.
The computer-readable storage medium may be an internal storage unit of the above-mentioned shooting device, such as a hard disk or a memory of the shooting device. The computer-readable storage medium may also be an external storage device of the shooting device, such as a plug-in hard disk, a smart memory card, a secure digital (SD) card, or a flash memory card.
It is to be understood that the terminology used in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
The above description is only for the specific embodiment of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present application, and these modifications or substitutions should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (39)

1. A photographing method, characterized in that the method comprises:
acquiring N focusing areas of N frames of images in an image sequence in a current focusing mode;
determining whether to switch the current focusing mode according to focusing targets in the N focusing areas;
and shooting in the determined focusing mode.
2. The method of claim 1, wherein the determining whether to switch the current focusing mode according to the focusing targets in the N focusing areas comprises:
and determining whether to switch the current focusing mode according to whether the focusing targets in the N focusing areas comprise the same target.
3. The method according to claim 2, wherein the determining whether to switch the current focusing mode according to whether the focusing targets in the N focusing areas include the same target comprises:
and if the focusing targets in the N focusing areas comprise the same target and the current focusing mode is an automatic mode AFC-Auto of continuous automatic focusing, controlling the camera device to switch to a Tracking mode AFC-Tracking of continuous automatic focusing.
4. The method of claim 3, further comprising:
and if the current focusing mode is the AFC-Tracking mode and it is detected that the focusing target in the focusing area in the AFC-Tracking mode is no longer in the frame image, controlling the camera device to switch to the AFC-Auto mode.
5. The method of claim 2, wherein acquiring N in-focus regions of N images in the image sequence in the current in-focus mode comprises:
under the current focusing mode, N focusing areas of continuous N frames of images in the image sequence are respectively obtained.
6. The method according to claim 5, wherein the determining whether the focusing targets in the N focusing areas include the same target comprises:
respectively extracting the characteristic points of the focusing area of each frame image in the image sequence in the current focusing mode;
matching the feature points of the N focusing areas of the continuous N frames of images;
and if the feature points of the N focusing areas of the continuous N frames of images are successfully matched with each other, determining that the focusing targets in the N focusing areas comprise the same target.
7. The method of claim 2, wherein acquiring N in-focus regions of N images in the image sequence in the current in-focus mode comprises:
under a current focusing mode, acquiring N output matching areas of continuous N frames of images in an image sequence, wherein the output matching area of each frame of image is matched with a focusing area of a corresponding frame of image under the current focusing mode;
and taking N output matching areas of the continuous N frames of images in the image sequence as N focusing areas of the N frames of images in the image sequence.
8. The method of claim 7, wherein obtaining N output matching regions for N consecutive images in the image sequence in the current focus mode comprises:
under the current focusing mode, acquiring a plurality of output areas of each frame image in the image sequence through a multi-target tracking algorithm;
and matching the focusing area of each frame image with a plurality of output areas of the corresponding frame image to obtain an output matching area of each frame image matched with the focusing area of the corresponding frame image, and further obtaining N output matching areas of the continuous N frame images in the image sequence.
9. The method of claim 8, wherein matching the focus area of each frame image with the plurality of output areas of the corresponding frame image to obtain an output matching area of each frame image matching the focus area of the corresponding frame image comprises:
matching the focusing area of each frame image with a plurality of output areas of the corresponding frame image;
and if the matching is successful, marking the corresponding target in the output matching area in each frame image to obtain the marked output matching area of each frame image.
10. The method according to claim 9, wherein the determining whether the focus targets in the N focus areas include the same target comprises:
judging whether the targets marked in the N marked output matching areas of the consecutive N frames of images in the image sequence are the same target;
and if so, determining that the focusing targets in the N focusing areas comprise the same target.
11. The method of claim 9, wherein matching the in-focus region of each frame image with the plurality of output regions of the corresponding frame image comprises:
a plurality of center distances between the center of the in-focus region of each frame image and the centers of the plurality of output regions of the corresponding frame image, respectively, are determined.
12. The method of claim 11, wherein the matching, if successful, comprises:
comparing the plurality of center distances with a distance threshold;
and if a center distance smaller than the distance threshold exists among the plurality of center distances, determining that the matching is successful, wherein the output area corresponding to the center distance smaller than the distance threshold is the output matching area.
13. The method of claim 9, wherein matching the in-focus region of each frame image with the plurality of output regions of the corresponding frame image comprises:
respectively extracting characteristic points of a focusing area of each frame image and a plurality of output areas of the corresponding frame image;
and respectively matching the characteristic points of the focusing area of each frame image with the characteristic points of a plurality of output areas of the corresponding frame image to obtain a plurality of matching degrees.
14. The method of claim 13, wherein the matching, if successful, comprises:
if a matching degree greater than or equal to the threshold matching degree exists among the plurality of matching degrees, determining that the matching is successful, wherein the output area corresponding to the matching degree greater than or equal to the threshold matching degree is the output matching area.
15. The method of claim 9, wherein matching the in-focus region of each frame image with the plurality of output regions of the corresponding frame image comprises:
determining a plurality of center distances between the center of the focusing area of each frame image and the centers of the plurality of output areas of the corresponding frame image, respectively, and comparing the plurality of center distances with a distance threshold;
respectively extracting the characteristic points of the focusing area of each frame image and the plurality of output areas of the corresponding frame image, and respectively matching the characteristic points of the focusing area of each frame image with the characteristic points of the plurality of output areas of the corresponding frame image to obtain a plurality of matching degrees.
16. The method of claim 15, wherein the matching, if successful, comprises:
and if a center distance smaller than the distance threshold exists among the plurality of center distances and a matching degree greater than or equal to the threshold matching degree exists among the plurality of matching degrees, determining that the matching is successful, wherein the output area whose center distance is smaller than the distance threshold and whose matching degree is greater than or equal to the threshold matching degree is the output matching area.
17. The method of claim 9, further comprising:
and if the matching is unsuccessful, taking the focusing area of the frame image with unsuccessful matching as a new output area of the multi-target tracking algorithm.
18. The method of claim 3, wherein the focus target comprises a designated target, the method further comprising:
if the current focusing mode is the AFC-Tracking mode, assisting in tracking the designated target and focusing on the designated target through a designated-target detection algorithm.
19. The method of claim 18, wherein the designated target comprises a human face.
20. A camera, characterized in that the camera comprises: a memory and a processor;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and, when executing the computer program, implement the steps of:
acquiring N focusing areas of N frames of images in an image sequence in a current focusing mode;
determining whether to switch the current focusing mode according to focusing targets in the N focusing areas;
and shooting in the determined focusing mode.
21. The camera of claim 20, wherein the processor, when executing the computer program, performs the steps of:
and determining whether to switch the current focusing mode according to whether the focusing targets in the N focusing areas comprise the same target.
22. The camera of claim 21, wherein the processor, when executing the computer program, performs the steps of:
and if the focusing targets in the N focusing areas comprise the same target and the current focusing mode is an automatic mode AFC-Auto of continuous automatic focusing, controlling the camera device to switch to a Tracking mode AFC-Tracking of continuous automatic focusing.
23. The camera of claim 22, wherein the processor, when executing the computer program, performs the steps of:
and if the current focusing mode is the AFC-Tracking mode and it is detected that the focusing target in the focusing area in the AFC-Tracking mode is no longer in the frame image, controlling the camera device to switch to the AFC-Auto mode.
24. The camera of claim 21, wherein the processor, when executing the computer program, performs the steps of:
and respectively acquiring N focusing areas of the continuous N frames of images in the image sequence in the current focusing mode.
25. The camera of claim 24, wherein the processor, when executing the computer program, performs the steps of:
respectively extracting the characteristic points of the focusing area of each frame image in the image sequence in the current focusing mode;
matching the feature points of the N focusing areas of the continuous N frames of images;
and if the feature points of the N focusing areas of the continuous N frames of images are successfully matched with each other, determining that the focusing targets in the N focusing areas comprise the same target.
26. The camera of claim 21, wherein the processor, when executing the computer program, performs the steps of:
under a current focusing mode, acquiring N output matching areas of continuous N frames of images in an image sequence, wherein the output matching area of each frame of image is matched with a focusing area of a corresponding frame of image under the current focusing mode;
and taking N output matching areas of the continuous N frames of images in the image sequence as N focusing areas of the N frames of images in the image sequence.
27. The camera of claim 26, wherein the processor, when executing the computer program, performs the steps of:
under the current focusing mode, acquiring a plurality of output areas of each frame image in the image sequence through a multi-target tracking algorithm;
and matching the focusing area of each frame image with a plurality of output areas of the corresponding frame image to obtain an output matching area of each frame image matched with the focusing area of the corresponding frame image, and further obtaining N output matching areas of the continuous N frame images in the image sequence.
28. The camera of claim 27, wherein the processor, when executing the computer program, performs the steps of:
matching the focusing area of each frame image with a plurality of output areas of the corresponding frame image;
and if the matching is successful, marking the corresponding target in the output matching area in each frame image to obtain the marked output matching area of each frame image.
29. The camera of claim 28, wherein the processor, when executing the computer program, performs the steps of:
judging whether the targets marked in the N marked output matching areas of the consecutive N frames of images in the image sequence are the same target;
and if so, determining that the focusing targets in the N focusing areas comprise the same target.
30. The camera of claim 28, wherein the processor, when executing the computer program, performs the steps of:
a plurality of center distances between the center of the in-focus region of each frame image and the centers of the plurality of output regions of the corresponding frame image, respectively, are determined.
31. The camera of claim 30, wherein the processor, when executing the computer program, performs the steps of:
comparing the plurality of center distances with a distance threshold;
and if a center distance smaller than the distance threshold exists among the plurality of center distances, determining that the matching is successful, wherein the output area corresponding to the center distance smaller than the distance threshold is the output matching area.
32. The camera of claim 28, wherein the processor, when executing the computer program, performs the steps of:
respectively extracting characteristic points of a focusing area of each frame image and a plurality of output areas of the corresponding frame image;
and respectively matching the characteristic points of the focusing area of each frame image with the characteristic points of a plurality of output areas of the corresponding frame image to obtain a plurality of matching degrees.
33. The camera of claim 32, wherein the processor, when executing the computer program, performs the steps of:
if a matching degree greater than or equal to the threshold matching degree exists among the plurality of matching degrees, determining that the matching is successful, wherein the output area corresponding to the matching degree greater than or equal to the threshold matching degree is the output matching area.
34. The camera of claim 28, wherein the processor, when executing the computer program, performs the steps of:
determining a plurality of center distances between the center of the focusing area of each frame image and the centers of the plurality of output areas of the corresponding frame image, respectively, and comparing the plurality of center distances with a distance threshold;
respectively extracting the characteristic points of the focusing area of each frame image and the plurality of output areas of the corresponding frame image, and respectively matching the characteristic points of the focusing area of each frame image with the characteristic points of the plurality of output areas of the corresponding frame image to obtain a plurality of matching degrees.
35. The camera of claim 34, wherein the processor, when executing the computer program, performs the steps of:
and if a center distance smaller than the distance threshold exists among the plurality of center distances and a matching degree greater than or equal to the threshold matching degree exists among the plurality of matching degrees, determining that the matching is successful, wherein the output area whose center distance is smaller than the distance threshold and whose matching degree is greater than or equal to the threshold matching degree is the output matching area.
36. The camera of claim 28, wherein the processor, when executing the computer program, performs the steps of:
and if the matching is unsuccessful, taking the focusing area of the frame image with unsuccessful matching as a new output area of the multi-target tracking algorithm.
37. The camera of claim 22, wherein the focus target comprises a designated target, and wherein the processor, when executing the computer program, performs the steps of:
if the current focusing mode is the AFC-Tracking mode, assisting in tracking the designated target and focusing on the designated target through a designated-target detection algorithm.
38. The camera of claim 37, wherein the designated object comprises a human face.
39. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the photographing method according to any one of claims 1 to 19.
CN202080006539.1A 2020-07-24 2020-07-24 Imaging method, imaging device, and storage medium Pending CN113170053A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/104596 WO2022016550A1 (en) 2020-07-24 2020-07-24 Photographing method, photographing apparatus and storage medium

Publications (1)

Publication Number Publication Date
CN113170053A true CN113170053A (en) 2021-07-23

Family

ID=76879306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080006539.1A Pending CN113170053A (en) 2020-07-24 2020-07-24 Imaging method, imaging device, and storage medium

Country Status (2)

Country Link
CN (1) CN113170053A (en)
WO (1) WO2022016550A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100086292A1 (en) * 2008-10-08 2010-04-08 Samsung Electro- Mechanics Co., Ltd. Device and method for automatically controlling continuous auto focus
US20110063494A1 (en) * 2009-09-16 2011-03-17 Altek Corporation Continuous focusing method for digital camera
CN108777767A (en) * 2018-08-22 2018-11-09 Oppo广东移动通信有限公司 Photographic method, device, terminal and computer readable storage medium
US20180367725A1 (en) * 2017-06-16 2018-12-20 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method of focusing, terminal, and computer-readable storage medium
CN110572573A (en) * 2019-09-17 2019-12-13 Oppo广东移动通信有限公司 Focusing method and device, electronic equipment and computer readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008187591A (en) * 2007-01-31 2008-08-14 Fujifilm Corp Imaging apparatus and imaging method
CN102096925A (en) * 2010-11-26 2011-06-15 中国科学院上海技术物理研究所 Real-time closed loop predictive tracking method of maneuvering target
CN107124546B (en) * 2012-05-18 2020-10-16 华为终端有限公司 Method for automatically switching terminal focusing modes and terminal
CN103905717B (en) * 2012-12-27 2018-07-06 联想(北京)有限公司 A kind of switching method, device and electronic equipment
CN104902182B (en) * 2015-05-28 2019-04-19 努比亚技术有限公司 A kind of method and apparatus for realizing continuous auto-focusing


Also Published As

Publication number Publication date
WO2022016550A1 (en) 2022-01-27

Similar Documents

Publication Publication Date Title
US9501834B2 (en) Image capture for later refocusing or focus-manipulation
CN100533248C (en) Imaging apparatus, imaging apparatus control method, and computer program
US7791668B2 (en) Digital camera
KR102032882B1 (en) Autofocus method, device and electronic apparatus
CN103261939B (en) Imaging device and main photography target recognition methods
JP4961965B2 (en) Subject tracking program, subject tracking device, and camera
CN108496350A (en) A kind of focusing process method and apparatus
KR20130139243A (en) Object detection and recognition under out of focus conditions
CN111263072A (en) Shooting control method and device and computer readable storage medium
US20110002680A1 (en) Method and apparatus for focusing an image of an imaging device
CN117041729A (en) Shooting method, shooting device and computer readable storage medium
JP2009141475A (en) Camera
WO2019084756A1 (en) Image processing method and device, and aerial vehicle
CN111212226A (en) Focusing shooting method and device
CN104935807B (en) Photographic device, image capture method and computer-readable recording medium
CN113170053A (en) Imaging method, imaging device, and storage medium
US11696025B2 (en) Image processing apparatus capable of classifying an image, image processing method, and storage medium
US11823428B2 (en) Image processing apparatus and control method therefor, image capturing apparatus, and storage medium
CN112995503B (en) Gesture control panoramic image acquisition method and device, electronic equipment and storage medium
JP4935380B2 (en) Image tracking device and imaging device
JP4810440B2 (en) IMAGING DEVICE, ITS CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM
EP3723358B1 (en) Image processing apparatus, control method thereof, and program
JP5375943B2 (en) Imaging apparatus and program thereof
JP7458723B2 (en) Image processing device, imaging device, control method, and program
US20230308756A1 (en) Control apparatus, image pickup apparatus, control method, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210723