CN110378934B - Subject detection method, apparatus, electronic device, and computer-readable storage medium - Google Patents

Subject detection method, apparatus, electronic device, and computer-readable storage medium

Info

Publication number
CN110378934B
Authority
CN
China
Prior art keywords
subject
target image
moving object
image
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910658738.XA
Other languages
Chinese (zh)
Other versions
CN110378934A (en)
Inventor
卓海杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910658738.XA
Publication of CN110378934A
Application granted
Publication of CN110378934B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a subject detection method, a subject detection apparatus, an electronic device, and a computer-readable storage medium. The method comprises the following steps: performing motion detection on a target image, and determining a moving object in the target image and a motion speed corresponding to the moving object; when the motion speed exceeds a speed threshold, performing subject detection on the target image to obtain a candidate subject contained in the target image; and determining a target subject of the target image according to the candidate subject and the moving object. Because subject detection is performed on the target image based on the motion speed of the moving object, and the target subject of the target image is then determined according to the detected candidate subject and the moving object, the accuracy of subject detection can be improved.

Description

Subject detection method, apparatus, electronic device, and computer-readable storage medium
Technical Field
The present application relates to the field of imaging technologies, and in particular, to a method and an apparatus for detecting a subject, an electronic device, and a computer-readable storage medium.
Background
With the development of imaging technology, subject detection technology is becoming more and more widely applied. Once the subject in an image is identified through subject detection, operations such as focusing, tracking, beautification, and local processing can be performed on the subject. Currently, subject detection is mainly implemented based on deep learning algorithms. However, conventional subject detection technology suffers from low accuracy.
Disclosure of Invention
The embodiment of the application provides a subject detection method and device, electronic equipment and a computer-readable storage medium, which can improve the accuracy of subject detection.
A subject detection method, comprising:
carrying out motion detection on a target image, and determining a moving object in the target image and a motion speed corresponding to the moving object;
when the motion speed exceeds a speed threshold, carrying out subject detection on the target image to obtain a candidate subject contained in the target image;
and determining a target subject of the target image according to the candidate subject and the moving object.
A subject detection apparatus, comprising:
the motion detection module is used for carrying out motion detection on a target image and determining a moving object in the target image and a motion speed corresponding to the moving object;
the main body detection module is used for carrying out main body detection on the target image when the motion speed exceeds a speed threshold value to obtain a candidate main body contained in the target image;
and the subject determining module is used for determining a target subject of the target image according to the candidate subject and the moving object.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of:
carrying out motion detection on a target image, and determining a moving object in the target image and a motion speed corresponding to the moving object;
when the motion speed exceeds a speed threshold, carrying out subject detection on the target image to obtain a candidate subject contained in the target image;
and determining a target subject of the target image according to the candidate subject and the moving object.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
carrying out motion detection on a target image, and determining a moving object in the target image and a motion speed corresponding to the moving object;
when the motion speed exceeds a speed threshold, carrying out subject detection on the target image to obtain a candidate subject contained in the target image;
and determining a target subject of the target image according to the candidate subject and the moving object.
According to the subject detection method, the subject detection apparatus, the electronic device and the computer-readable storage medium, the moving object in the target image and the motion speed corresponding to the moving object can be obtained, and when the motion speed exceeds the speed threshold, the candidate subject contained in the target image is detected, so that the target subject of the target image is determined according to the moving object and the candidate subject, and the accuracy of subject detection can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained from these drawings by those skilled in the art without creative effort.
FIG. 1 is a block diagram of an electronic device in one embodiment;
FIG. 2 is a flow diagram of a method for subject detection in one embodiment;
FIG. 3 is a flow diagram of a subject detection method provided in one embodiment;
FIG. 4 is a flow diagram of removing moving objects from a candidate body in one embodiment;
FIG. 5 is a flow diagram of motion detection for a target image in one embodiment;
FIG. 6 is a flow diagram of subject detection on a target image in one embodiment;
FIG. 7 is a diagram illustrating an image processing effect according to an embodiment;
FIG. 8 is a block diagram showing the structure of a subject detecting apparatus according to an embodiment;
FIG. 9 is a schematic diagram of an image processing circuit in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that the terms "first," "second," and the like, as used herein, may be used to describe various elements and parameters, but these elements and parameters are not limited by these terms. These terms are only used to distinguish one element from another, or one parameter from another. For example, a first segmentation map may be referred to as a second segmentation map, and similarly, a second segmentation map may be referred to as a first segmentation map, without departing from the scope of the present application. Both the first segmentation map and the second segmentation map are segmentation maps, but they are not the same segmentation map.
Fig. 1 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 1, the electronic device includes a processor and a memory connected by a system bus. The processor provides computing and control capabilities and supports the operation of the whole electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the subject detection method provided in the following embodiments. The internal memory provides a cached execution environment for the operating system and the computer program stored in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like. In some embodiments, the electronic device may also be a server. The server may be an independent server, or may be implemented by a server cluster composed of a plurality of servers.
FIG. 2 is a flow diagram of a method for subject detection in one embodiment. As shown in fig. 2, the subject detection method includes steps 202 to 206, in which:
step 202, performing motion detection on the target image, and determining a moving object in the target image and a motion speed corresponding to the moving object.
The target image may be an image acquired by the electronic device through a camera, an image stored locally on the electronic device, or an image downloaded from a network. The target image may also be a frame image in a video or an image sequence. Motion detection refers to an operation of recognizing a moving object contained in an image. A moving object refers to an object whose position changes during image acquisition. For example, common moving objects include pedestrians, animals, automobiles, and the like. The motion speed is used to indicate how fast the position of the moving object changes. In general, the motion speed can be calculated from the magnitude of the change in position of the moving object in the captured images.
The electronic device performs motion detection on the target image, and can determine the moving object in the target image and the motion speed corresponding to the moving object. Specifically, the electronic device may acquire at least two frames of images including the target image, and detect the acquired images by a background subtraction method, an inter-frame difference method, or an optical flow method such as the Lucas-Kanade method, so as to determine the moving object in the target image and the motion speed corresponding to the moving object. Optionally, the target image may include one or more moving objects, and each moving object has a corresponding motion speed.
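The following is a minimal sketch of such a motion detection step, assuming OpenCV and NumPy are available; dense Farneback optical flow is used here merely as a stand-in for the Lucas-Kanade method mentioned above, and the thresholds and helper names are illustrative assumptions rather than the patented implementation.

```python
import cv2
import numpy as np

def detect_moving_objects(prev_frame, target_frame):
    """Return moving regions of the target frame and their mean speeds (pixels/frame)."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(target_frame, cv2.COLOR_BGR2GRAY)

    # Per-pixel displacement (u, v) between the two frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed = np.linalg.norm(flow, axis=2)

    # Pixels with a non-negligible displacement are treated as motion pixels.
    motion_mask = (speed > 1.0).astype(np.uint8)

    # Each connected region of motion pixels is treated as one moving object.
    num_labels, labels = cv2.connectedComponents(motion_mask)
    moving_objects = []
    for label in range(1, num_labels):
        region = labels == label
        moving_objects.append({"mask": region,
                               "speed": float(speed[region].mean())})
    return moving_objects
```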
And 204, when the motion speed exceeds a speed threshold, performing subject detection on the target image to obtain a candidate subject contained in the target image.
The electronic equipment can perform subject detection on the target image when the motion speed of the moving object exceeds a speed threshold value to obtain a candidate subject contained in the target image. Wherein, the speed threshold value can be set according to the actual application requirement. Specifically, the speed threshold value can be determined by collecting a large number of images containing a moving object, analyzing the moving speed of the moving object in the images and the photographic subject of the images.
Specifically, when the target image includes a moving object, the subject detection may be performed on the target image when the moving speed of the moving object exceeds a speed threshold. When the target image contains a plurality of moving objects, optionally, the electronic device may obtain a maximum moving speed from a plurality of moving speeds respectively corresponding to the plurality of moving objects, and when the maximum moving speed exceeds a speed threshold, perform subject detection on the target image; the electronic equipment can also perform main body detection on the target image when the movement speed of the moving object with the largest area exceeds a speed threshold; the distance between the moving object and the center of the image may also be integrated, and when the moving speed of the moving object with the minimum distance from the center of the image and the maximum area exceeds a speed threshold, the subject detection and the like are performed on the target image, which is not limited herein.
The electronic device performs subject detection on the target image to obtain the candidate subjects contained in the target image. Specifically, the electronic device may perform subject detection on the target image through a deep-learning subject detection model: the electronic device inputs the target image into the subject detection model, and subject detection is performed on the target image through the subject detection model to obtain the candidate subjects included in the target image. A candidate subject is composed of the pixel points corresponding to that candidate subject in the image. Specifically, when the subject detection model outputs the region corresponding to a subject as a subject contour, the edge pixel points of the candidate subject are the edge pixel points of the contour of the candidate subject. The target image may contain one or more candidate subjects.
The subject detection model can be implemented by a deep learning algorithm such as a CNN (Convolutional Neural Network), a DNN (Deep Neural Network), or an RNN (Recurrent Neural Network). Optionally, in some embodiments, the electronic device may pre-store image feature information corresponding to a plurality of subjects, and match the image feature information of the target image with the pre-stored image feature information; a subject whose image feature information is successfully matched is a candidate subject included in the target image.
And step 206, determining a target subject of the target image according to the candidate subject and the moving object.
The target subject refers to a subject of the finally determined target image. The electronic device may perform optimization processing on the target image according to the determined target subject, for example, the electronic device may perform beauty processing and color enhancement processing on the target subject, may also perform blurring processing on the target image according to the target subject, and may also perform image acquisition operation after controlling the camera to focus on the target subject in a scene of real-time shooting.
The electronic device determines the target subject of the target image according to the candidate subject and the moving object. Specifically, the electronic device may determine the target subject of the target image by considering at least one of the areas of the candidate subject and the moving object, their distances from the center of the image, the motion speed of the moving object, the confidence of the candidate subject, and the like. The confidence of the candidate subject refers to the credibility that the candidate subject is indeed contained in the target image. Optionally, the electronic device may take a moving object whose motion speed does not exceed the speed threshold and a candidate subject whose area exceeds an area threshold as the target subject of the target image; it may also take a moving object whose motion speed does not exceed the speed threshold and which is simultaneously detected as a candidate subject as the target subject of the target image; it may also remove the moving objects from the candidate subjects and take the remaining candidate subjects as the target subject, or remove only the moving objects whose motion speeds exceed the speed threshold and take the remaining candidate subjects as the target subject, and the like, which is not limited herein.
In the embodiment provided by the application, the target image can be subjected to motion detection, the moving object in the target image and the motion speed corresponding to the moving object are determined, when the motion speed exceeds a speed threshold value, the target image is subjected to subject detection to obtain a candidate subject contained in the target image, and the target subject of the target image is determined according to the candidate subject and the moving object. The target subject of the target image can be determined according to the candidate subject and the moving object detected by the subject when the moving speed of the moving object included in the image exceeds the speed threshold, so that the accuracy of subject detection can be improved.
In one embodiment, the electronic device may preset target subject determination manners corresponding to different shooting modes, so as to obtain a corresponding target subject determination manner according to a shooting mode currently adopted by the camera, and determine a target subject of the target image according to the candidate subject and the moving object based on the target subject determination manner. For example, the target subject determination method corresponding to the slow motion shooting mode may be to use a moving object whose acquired moving speed does not exceed a speed threshold and which is detected as a candidate subject at the same time as the target subject of the target image; the target subject determination mode corresponding to the portrait shooting mode may be that a moving object is removed from the candidate subject, and the candidate subject after removal is taken as the target subject; the target subject determination manner corresponding to the skip shooting mode may be such that a moving object whose moving speed exceeds a speed threshold and which is detected as a candidate subject at the same time is a target subject of the target image, or the like. When the shooting mode is slow-motion shooting, if the target image obtained by motion detection contains the portrait A and the portrait B with the motion speed smaller than the speed threshold and the portrait C with the motion speed larger than the speed threshold, and the target image obtained by subject detection contains candidate subjects including the portrait A, the portrait C and the flower, the electronic device may use the portrait A as the target subject of the target image.
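A simple sketch of such mode-dependent selection is shown below; the mode names, dictionary structure, and helper fields are assumptions introduced only for illustration.

```python
def pick_target_subjects(mode, moving_objects, candidate_names, speed_threshold):
    """moving_objects: list of dicts with 'name' and 'speed'; candidate_names: detected subjects."""
    moving_names = {obj["name"] for obj in moving_objects}

    if mode == "slow_motion":
        # Slow-moving objects that were also detected as candidate subjects.
        return [obj["name"] for obj in moving_objects
                if obj["speed"] <= speed_threshold and obj["name"] in candidate_names]
    if mode == "portrait":
        # Remove all moving objects from the candidate subjects.
        return [name for name in candidate_names if name not in moving_names]
    if mode == "jump":
        # Fast-moving objects that were also detected as candidate subjects.
        return [obj["name"] for obj in moving_objects
                if obj["speed"] > speed_threshold and obj["name"] in candidate_names]
    return list(candidate_names)
```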
The corresponding target subject determination mode is obtained according to the shooting mode adopted by the camera, so that the target subject is determined based on the target subject determination mode, the moving object of the target image and the candidate subject, namely, different shooting modes are distinguished, different subject detection modes are adopted, and the accuracy of subject detection can be improved.
In one embodiment, the provided subject detection method further includes: and when the moving speed does not exceed the speed threshold value, taking the moving object as a target main body of the target image.
Alternatively, the speed threshold is a maximum moving speed at which a moving object can be regarded as a photographic subject. That is, when the moving speed of the moving object exceeds the speed threshold, it may be determined that the moving object is not the subject of the target image; and when the moving speed of the moving object does not exceed the speed threshold, the moving object can be determined to be the subject in the target image.
The electronic device can perform motion detection on the target image to obtain the moving object contained in the target image and the motion speed corresponding to the moving object, and when the motion speed of the moving object does not exceed the speed threshold, take the moving object as the target subject of the target image. Optionally, when the motion speeds of one or more moving objects contained in the target image do not exceed the speed threshold, the one or more moving objects are determined as the target subject of the target image. When the target image simultaneously contains moving objects whose motion speeds exceed the speed threshold and moving objects whose motion speeds do not, the electronic device can determine a target moving object to compare with the speed threshold by considering the distance between each moving object and the image center, the area of each moving object, and the like; when the motion speed of the target moving object does not exceed the speed threshold, the moving objects whose motion speeds do not exceed the speed threshold are determined as the target subject of the target image.
In general, when an image is captured, the motion speed of the photographic subject is usually small, while an object with a large motion speed is often part of the captured background; for example, when a portrait is captured, a fast-moving animal, a running vehicle, or the like may exist in the background. In the present application, motion detection is performed on the target image; when the motion speed of a moving object contained in the target image exceeds the speed threshold, the candidate subject contained in the target image and the moving object are combined to determine the target subject of the target image, and when the motion speed of the moving object does not exceed the speed threshold, the moving object is determined as the target subject of the target image. That is, the target subject contained in the target image can be detected based on the motion speed of the moving object, and the accuracy of subject detection can be improved.
In one embodiment, the provided subject detection method may further include: when the target image is determined not to contain the moving object, carrying out subject detection on the target image to obtain a candidate subject contained in the target image; and taking the candidate subject as a target subject of the target image.
There is a case where the target image does not contain a moving object. The electronic device can determine that the target image does not contain the moving object when the moving object contained in the target image and the moving speed corresponding to the moving object are not output when the target image is subjected to motion detection. Specifically, the electronic device may analyze a plurality of frames of images including the target image, and determine that the target image does not include the moving object when image information corresponding to each of the plurality of frames of images is the same or has a small difference.
When the target image does not contain the moving object, the electronic device may perform subject detection on the target image to obtain a candidate subject contained in the target image, and use the candidate subject as a target subject of the target image. The candidate subjects obtained by subject detection may be one or more. Optionally, the electronic device may use one or more detected candidate subjects as target subjects of the target image, or may select candidate subjects as target subjects from a plurality of candidate subjects. Specifically, the electronic device acquires categories corresponding to the plurality of candidate subjects, and takes the candidate subject corresponding to the category with the highest priority as the target subject based on the correspondence between the categories and the priorities. For example, the priority of the category may be that people, animals, plants decrease in sequence, and the like, and is not limited herein. Optionally, the electronic device may further combine one or more of a position of the candidate subject in the image, an area size of the candidate subject, a confidence of a category corresponding to the candidate subject, and the like to determine the target subject.
In one embodiment, the electronic device may perform subject detection on the target image using PoolNet, a pooling-based subject detection method, to obtain the candidate subjects included in the target image. The PoolNet subject detection model introduces a GGM (Global Guidance Module) and a FAM (Feature Aggregation Module), so that the details of the candidate subjects to be detected can be sharpened and the efficiency of subject detection can be improved.
Fig. 3 is a flow chart of a subject detection method provided in one embodiment. As shown in fig. 3, the flow of the subject detection method is as follows:
step 302, performing motion detection on the target image.
Step 304, judging whether the target image contains a moving object, if so, entering step 306, otherwise, entering step 316.
And step 306, acquiring a moving object contained in the target image and a moving speed corresponding to the moving object.
Step 308, determine whether the movement speed exceeds the speed threshold, if yes, go to step 310, otherwise go to step 314.
Step 310, performing subject detection on the target image to obtain a candidate subject contained in the target image.
In step 312, a target subject of the target image is determined according to the candidate subject and the moving object.
And step 314, taking the moving object as a target main body of the target image.
Step 316, performing subject detection on the target image to obtain a candidate subject contained in the target image.
Step 318, the candidate subject is taken as the target subject of the target image.
By performing motion detection on the target image and adopting different subject determination manners according to the motion detection result to determine the target subject of the target image, the subject detection result is more accurate and better meets image shooting requirements.
In one embodiment, in the provided subject detection method, determining the target subject of the target image according to the candidate subject and the moving object includes: removing the moving objects contained in the target image from the candidate subjects, and taking the remaining candidate subjects as the target subject.
Eliminating the moving object from the candidate subjects means removing from the candidate subjects any subject that is a moving object. The electronic device removes the moving objects included in the target image from the candidate subjects; specifically, the electronic device may remove the candidate subjects identified as moving objects from the candidate subjects, and take the remaining candidate subjects as the target subject. For example, when a portrait is shot, the electronic device may perform motion detection on the target image and find that the moving objects contained in the target image are portrait D, automobile E, and automobile F, while the candidate subjects obtained by subject detection include portrait D, portrait E, and automobile E. The electronic device can then remove portrait D and automobile E from the candidate subjects, and take the remaining candidate subject, namely portrait E, as the target subject of the target image.
When an image is captured, there are often background objects, such as pedestrians or running vehicles, that are also recognized as subjects when subject detection is performed on the image. In the present application, when the target image contains a moving object whose motion speed is greater than the speed threshold, subject detection is performed on the target image to obtain the candidate subjects contained in the target image, and the moving object is removed from the candidate subjects to obtain the target subject of the target image, so that the accuracy of subject detection is higher.
FIG. 4 is a flow diagram of removing moving objects from candidate bodies in one embodiment. As shown in fig. 4, in one embodiment, a process of removing a moving object included in a target image from candidate subjects and using the removed candidate subjects as target subjects in a subject detection method includes:
step 402, a first segmentation map corresponding to the candidate subject is obtained, and a second segmentation map corresponding to the moving object is obtained.
The first segmentation map is output when subject detection is performed on the target image. The first segmentation map identifies the location of the candidate subject in the target image. The first segmentation map may be a binarized segmentation map in which 0 and 1 are used to represent the candidate subject and the other regions of the target image, respectively. For example, in the first segmentation map, the pixel value of the pixel points corresponding to the candidate subject may be 1, and the pixel values of the other pixel points may be 0. Similarly, the second segmentation map is output when motion detection is performed on the target image. The second segmentation map identifies the position of the moving object in the target image and may also be a binarized segmentation map. It should be noted that, in this embodiment, the pixel points of the candidate subject in the first segmentation map and the pixel points of the moving object in the second segmentation map adopt the same binarization value, that is, both may be 1 or both may be 0. In other implementations, the pixel points of the candidate subject in the first segmentation map and the pixel points of the moving object in the second segmentation map may adopt different binarization values.
Step 404, generating an intermediate segmentation map based on the first segmentation map and the second segmentation map; the intermediate segmentation map includes an overlapping region of the first segmentation map and the second segmentation map.
The intermediate segmentation map includes the overlapping region of the first segmentation map and the second segmentation map. Specifically, in this embodiment, the pixel points of the candidate subject in the first segmentation map and the pixel points of the moving object in the second segmentation map adopt the same binarization value, so the electronic device can obtain the pixel points having the same position and the same pixel value in the first segmentation map and the second segmentation map, and form the intermediate segmentation map from these pixel points. Optionally, the electronic device takes the intersection of the first segmentation map and the second segmentation map to obtain the intermediate segmentation map. When M denotes the first segmentation map and N denotes the second segmentation map, the intermediate segmentation map I = M ∩ N.
Step 406, determining a target subject segmentation map according to the first segmentation map and the intermediate segmentation map.
The target subject segmentation map identifies the location of the target subject in the target image, and the electronic device may determine the target subject in the target image from the target subject segmentation map. The first segmentation map is the segmentation map corresponding to the candidate subject. The electronic device determines the target subject segmentation map according to the first segmentation map and the intermediate segmentation map; specifically, the electronic device subtracts the intermediate segmentation map from the first segmentation map, that is, removes the moving object from the candidate subject. Following the example above, where M denotes the first segmentation map, N denotes the second segmentation map, and the intermediate segmentation map I = M ∩ N, the target subject segmentation map J = M − (M ∩ N). Taking the case in which the pixel value of the pixel points corresponding to the candidate subject and the moving object is 1 in the first and second segmentation maps, the pixel value of the pixel points corresponding to the target subject in the resulting target subject segmentation map is 1, and the pixel values of the other pixel points are 0, so the electronic device can determine the target subject in the target image according to the target subject segmentation map.
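A minimal sketch of this segmentation-map arithmetic, assuming the first and second segmentation maps are binary NumPy arrays of the same shape in which 1 marks the candidate subject or the moving object:

```python
import numpy as np

def target_subject_segmentation(first_map, second_map):
    # Intermediate map I = M ∩ N: pixels that belong to both a candidate
    # subject and a moving object.
    intermediate = np.logical_and(first_map, second_map)
    # Target subject map J = M - (M ∩ N): candidate-subject pixels with the
    # moving-object pixels removed.
    return np.logical_and(first_map, np.logical_not(intermediate)).astype(np.uint8)
```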
By acquiring the first segmentation map corresponding to the candidate subject and the second segmentation map corresponding to the moving object, the segmentation map of the target subject is obtained based on the first segmentation map and the second segmentation map, and the accuracy of the target subject can be improved.
In one embodiment, in the provided subject detection method, the process of removing the moving objects contained in the target image from the candidate subjects and taking the remaining candidate subjects as the target subject includes: removing the moving objects contained in the target image from the candidate subjects; and taking the candidate subjects whose areas exceed an area threshold among the remaining candidate subjects as the target subject.
The area threshold may be obtained by performing statistical analysis on the area size corresponding to the subject in a large number of images, which is not limited herein. For example, the area threshold may be 30%, 40%, 50%, etc. of the image area. Alternatively, the area threshold may also be determined according to the number of target subjects required and the area of each candidate subject after being culled. For example, when the size of the area of the 4 candidate subjects after the culling is 300 × 300, 320 × 320, 400 × 400, 450 × 450, respectively, if the number of the required target subjects is 3, the area threshold may be 350 × 350, 360 × 360, or the like. Alternatively, the area of the candidate subject may also be expressed by a ratio of the area of the candidate subject to the area of the target image.
In an embodiment, the electronic device may also use, as the target subject, a candidate subject with the largest area among the removed candidate subjects.
By removing the moving objects contained in the target image from the candidate subjects and taking the candidate subjects whose areas exceed the area threshold among the remaining candidate subjects as the target subject, the candidate subjects are screened a second time and the larger candidate subjects are used as the target subject, so that the accuracy of the target subject can be further improved.
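The secondary screening by area might look like the following sketch, assuming each remaining candidate subject is given as a binary mask; the 30% ratio is only one of the example values quoted above.

```python
import numpy as np

def screen_candidates_by_area(candidate_masks, image_area, area_ratio_threshold=0.30):
    """Keep candidate subjects whose area exceeds the area threshold."""
    targets = []
    for mask in candidate_masks:
        if np.count_nonzero(mask) / float(image_area) > area_ratio_threshold:
            targets.append(mask)
    return targets
```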
FIG. 5 is a flow diagram of motion detection for a target image in one embodiment. As shown in fig. 5, the process of performing motion detection on a target image and determining a moving object in the target image and a motion speed corresponding to the moving object in the provided subject detection method includes:
step 502, an image sequence comprising a target image is acquired.
The target image may be any one frame image in the image sequence. Optionally, the target image may be located at a middle or rear position in the video or image sequence; that is, the moving object contained in the target image can be detected using the preceding frame images, or using both the preceding and following frame images. The image sequence can be a video, or an image sequence formed by multiple frames of preview images captured by a camera in real time. The electronic device may perform motion detection on the target image based on an image sequence containing the target image. Optionally, the exposure parameters corresponding to the multiple frames of images included in the image sequence are the same.
Step 504, the position of the pixel point in each frame of image included in the image sequence is analyzed.
In the two frames imaged before and after a moving object moves, the pixel values corresponding to the moving object should remain unchanged. Therefore, the moving object in the target image can be analyzed by detecting pixel points that have the same pixel value in the preceding and following frame images.
The electronic device analyzes the position of the pixel points in each frame of image contained in the sequence of images. Specifically, the electronic device may analyze the positions of the pixel points with the same pixel value in each frame of image.
Step 506, determining the pixel points with position changes as motion pixel points, and acquiring a motion object composed of the motion pixel points in the target image.
If the positions of pixel points having the same pixel value change across different images, these pixel points are considered to belong to a moving object, and the electronic device determines them as motion pixel points. The electronic device obtains the moving object composed of the motion pixel points in the target image. Specifically, the electronic device may obtain the connected regions formed by the motion pixel points, where each connected region corresponds to one moving object. Optionally, the electronic device may filter out connected regions in which the number of pixel points is less than a number threshold, and determine the remaining connected regions as the regions corresponding to moving objects.
In one embodiment, the position change amplitude of the pixel point can be determined according to the position of the pixel point in each frame of image; and when the position change amplitude exceeds a change threshold value, determining the pixel points as motion pixel points.
The position change amplitude refers to the amplitude of the change in position of a pixel point with the same pixel value across different images. The electronic device can determine the position change amplitude of a pixel point according to the positions of the pixel point in each frame of image. For example, if XY coordinate axes are established with the center of the image as the origin, and the positions of a pixel point in each frame of image are (2000, 3122), (2000, 3124), (1998, 3122), and (2001, 3126), then the position change amplitude of the pixel point in the X direction is 3 pixels, and the position change amplitude in the Y direction is 4 pixels.
The variation threshold may be set according to actual application requirements, and is not limited herein. Specifically, the change threshold may be determined by analyzing a position change amplitude of a pixel point having the same pixel value in a multi-frame image captured when the camera shakes or the subject shakes. For example, the variation threshold may be 4 pixels, 5 pixels, 6 pixels, 7 pixels, etc. in units of pixels, which is not limited herein. Alternatively, the electronic apparatus may set a variation threshold corresponding to the X direction and a variation threshold corresponding to the Y direction. The electronic equipment can obtain the position change amplitude corresponding to the pixel point, and determine the pixel point with the position change amplitude exceeding the change threshold as the moving pixel point, so that a moving object consisting of the moving pixel points in the target image is obtained.
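An illustrative sketch of this amplitude test, assuming each tracked pixel is described by its (x, y) positions across the frames; the 4-pixel thresholds are example values, not prescribed ones.

```python
def is_motion_pixel(positions, x_threshold=4, y_threshold=4):
    """positions: list of (x, y) coordinates of one pixel value across the frames."""
    xs = [x for x, _ in positions]
    ys = [y for _, y in positions]
    x_amplitude = max(xs) - min(xs)  # position change amplitude in the X direction
    y_amplitude = max(ys) - min(ys)  # position change amplitude in the Y direction
    return x_amplitude > x_threshold or y_amplitude > y_threshold
```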
In the image shooting process, the problem that positions of pixel points with the same pixel value in a plurality of shot frames of images are inconsistent due to camera shake or slight shake of a shooting main body is easy to occur. By acquiring the position change amplitude of the pixel points and determining the pixel points with the position change amplitude exceeding the change threshold as the moving pixel points, the moving object consisting of the moving pixel points in the target image is obtained, the problem of inconsistent positions of the pixel points in the image due to camera shake and shooting subject shake can be avoided, and the accuracy of motion detection can be improved.
And step 508, calculating the motion speed of the moving object according to the positions of the motion pixel points contained in the moving object in each frame of image.
Based on the principle that the pixel values corresponding to a moving object remain unchanged in the two frames imaged before and after the object moves, it can be determined that I1(x, y, t-1) = I2(x + u, y + v, t), where I1 and I2 denote the pixel values in image 1 and image 2 respectively, (u, v) is the displacement of the pixel point, and t denotes unit time. The electronic device may substitute the positions of the motion pixel points contained in the moving object in each frame of image into this formula to obtain the motion speed of each motion pixel point, and may then determine the motion speed of the moving object by calculating the average value, median, or mode of the motion speeds of these motion pixel points.
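A sketch of reducing per-pixel displacements to an object speed, assuming each motion pixel of the object carries a displacement (u, v) in pixels per unit time obtained from the relation above:

```python
import numpy as np

def moving_object_speed(displacements, reduce="mean"):
    """displacements: array of shape (N, 2) holding (u, v) for each motion pixel."""
    speeds = np.hypot(displacements[:, 0], displacements[:, 1])
    if reduce == "median":
        return float(np.median(speeds))
    return float(speeds.mean())
```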
The position of the pixel point in each frame of image contained in the image sequence is analyzed to obtain a moving object consisting of the moving pixel points with position changes, and the moving speed of the moving object is calculated according to the position of the moving pixel point contained in the moving object in each frame of image, so that the moving object contained in the target image and the moving speed corresponding to the moving object can be obtained.
In an embodiment, the electronic device may perform motion detection on the target image by using an inter-frame difference method. Specifically, the electronic device may subtract the pixel values of pixel points at corresponding positions in two adjacent frames of the image sequence to obtain a difference image; if the resulting pixel value is 0, the pixel point is determined to belong to a stationary object or the background, and if the pixel value is not 0, the pixel point is determined to correspond to a moving object. Performing motion detection on the target image by the inter-frame difference method requires only simple calculation and can improve the efficiency of motion detection.
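A minimal sketch of the inter-frame difference method, assuming two aligned frames and OpenCV; the small tolerance replacing the exact zero test is an assumption added to absorb sensor noise.

```python
import cv2
import numpy as np

def frame_difference_mask(frame_a, frame_b, tolerance=10):
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray_a, gray_b)
    # Pixels whose difference is (near) zero belong to the static background;
    # the remaining pixels are taken as moving-object pixels.
    return (diff > tolerance).astype(np.uint8)
```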
FIG. 6 is a flow diagram that illustrates subject detection for a target image, according to one embodiment. As shown in fig. 6, the process of performing subject detection on a target image to obtain candidate subjects included in the target image in the provided subject detection method includes:
step 602, generating a central weight map corresponding to the target image, wherein the weight value represented by the central weight map gradually decreases from the center to the edge.
The central weight map is a map used to record the weight value of each pixel point in the target image. The weight values recorded in the central weight map gradually decrease from the center toward the four edges, i.e., the weight is largest at the center and decreases toward the edges; in other words, the weight value represented by the central weight map decreases from the center pixel points of the target image to its edge pixel points.
The electronic device may generate a corresponding center weight map according to the size of the target image. The weight value represented by the central weight map gradually decreases from the center to the four sides. The central weight map may be generated using a gaussian function, or using a first order equation, or a second order equation. The gaussian function may be a two-dimensional gaussian function.
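A sketch of generating such a center weight map with a two-dimensional Gaussian function; the standard deviations are assumptions chosen only to make the weights decay smoothly toward the edges.

```python
import numpy as np

def center_weight_map(height, width, sigma_ratio=0.25):
    ys = np.arange(height) - (height - 1) / 2.0
    xs = np.arange(width) - (width - 1) / 2.0
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    sigma_y = max(height * sigma_ratio, 1.0)
    sigma_x = max(width * sigma_ratio, 1.0)
    # Weight is largest (1.0) at the image center and decays toward the edges.
    return np.exp(-((xx / sigma_x) ** 2 + (yy / sigma_y) ** 2) / 2.0)
```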
Step 604, inputting the target image and the central weight map into the subject detection model to obtain a subject region confidence map.
The subject detection model is obtained by training in advance on visible light images, depth images, center weight maps and the corresponding labeled subject mask maps of the same scenes. Specifically, the subject detection model is obtained by collecting a large amount of training data in advance and inputting the training data into a subject detection model containing initial network weights for training. Each set of training data comprises a visible light map, a center weight map and a labeled subject mask map corresponding to the same scene. The visible light map and the center weight map are used as the input of the subject detection model being trained, and the labeled subject mask map is used as the ground truth the trained subject detection model is expected to output. The subject mask map is an image filter template used for identifying the subject in an image; it can block out the other parts of the image and screen out the subject in the image. The subject detection model may be trained to recognize and detect various subjects, such as people, flowers, cats, dogs, backgrounds, etc.
Specifically, the electronic device may input the target image and the central weight map into the subject detection model and perform detection to obtain the subject region confidence map. The subject region confidence map records the probability that each pixel point belongs to each recognizable subject category; for example, the probability that a certain pixel point belongs to a person may be 0.8, to a flower 0.1, and to the background 0.1.
Step 606, candidate subjects in the target image are determined according to the subject region confidence map.
The subject may be any of various objects, such as a person, a flower, a cat, a dog, a cow, the sky, clouds, or a vehicle.
Specifically, the electronic device may select one or more subjects with confidence greater than a confidence threshold as candidate subjects according to the subject region confidence map. The confidence threshold may be set according to the actual application requirement, and is not limited herein.
In one embodiment, the electronic device may process the subject region confidence map to obtain a subject mask map, detect highlight regions in the target image, and determine the candidate subject with highlights eliminated in the target image according to the highlight regions in the target image and the subject mask map. Since some scattered points with low confidence exist in the subject region confidence map, the electronic device may filter the subject region confidence map to obtain the subject mask map. The filtering process may employ a configured confidence threshold to filter out the pixel points in the subject region confidence map whose confidence values are lower than the confidence threshold. The confidence threshold may be an adaptive confidence threshold, a fixed threshold, or a threshold configured per region. A highlight region refers to a region whose luminance value is greater than a luminance threshold. The electronic device may perform highlight detection on the target image, screen out the target pixel points whose brightness values are greater than the brightness threshold, and perform connected-domain processing on these target pixel points to obtain the highlight regions, and then perform difference calculation or logical AND calculation on the highlight regions in the target image and the subject mask map to obtain the candidate subject with highlights eliminated in the target image.
Optionally, the electronic device may further perform adaptive confidence threshold filtering on the subject region confidence map to obtain a binarized mask map, and perform morphological processing and guided filtering on the binarized mask map to obtain the subject mask map. Specifically, after filtering the subject region confidence map according to the adaptive confidence threshold, the electronic device represents the confidence values of the retained pixel points by 1 and the confidence values of the removed pixel points by 0 to obtain the binarized mask map. The morphological processing may include erosion and dilation: an erosion operation is first performed on the binarized mask map, followed by a dilation operation, to remove noise; guided filtering is then performed on the morphologically processed binarized mask map to realize an edge filtering operation and obtain a subject mask map with extracted edges. The morphological processing and guided filtering can ensure that the obtained subject mask map has few or no noise points and softer edges.
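The post-processing chain described above might be sketched as follows, assuming OpenCV is available; cv2.ximgproc.guidedFilter comes from the opencv-contrib package and is used here only as one possible guided-filter implementation.

```python
import cv2
import numpy as np

def refine_subject_mask(confidence_map, guide_image, confidence_threshold=0.5):
    # Confidence-threshold filtering -> binarized mask map (1 = retained pixel).
    binary = (confidence_map > confidence_threshold).astype(np.uint8)

    # Morphological processing: erode first, then dilate, to remove noise points.
    kernel = np.ones((5, 5), np.uint8)
    binary = cv2.dilate(cv2.erode(binary, kernel), kernel)

    # Guided filtering against the original image for an edge-preserving result.
    return cv2.ximgproc.guidedFilter(guide_image, binary.astype(np.float32),
                                     radius=8, eps=1e-3)
```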
In an embodiment, the electronic device may further obtain a plurality of subjects and corresponding categories included in the target image according to the subject region confidence map, and determine candidate subjects based on the priority level of the category corresponding to each subject and the size of the region corresponding to the subject. Optionally, the electronic device may preset priority levels corresponding to different categories. For example, the priority of the categories may be people, flowers, cats, dogs, cattle, cloudiness decreasing in order. The electronic device determines the candidate subjects based on the priority level of the category corresponding to each subject and the size of the region, and specifically, when a plurality of subjects belonging to the same category exist in the target image, the electronic device may determine a preset number of subjects having the largest regions as the candidate subjects according to the sizes of the regions corresponding to the plurality of subjects; when multiple subjects belonging to different categories exist in the target image, the electronic device may use the subject corresponding to the category with the highest priority as the candidate subject, and if multiple subjects with the highest priority exist in the target image, the candidate subject may be further determined according to the size of the region where the multiple subjects are located. Optionally, the electronic device may also determine candidate subjects in combination with the position of each subject in the image. For example, the electronic device may further preset priority levels of different categories, sizes of different regions, and score values of subjects at different positions in the image, so as to calculate a score value of each subject according to the priority level of the category, the size of the region, and the position in the image corresponding to each subject, and use a preset number of subjects with the highest score values as candidate subjects.
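A sketch of priority- and area-based candidate selection, assuming each detected subject is a dict with a category, an area, and optionally a position score; the priority table below is an example, not an exhaustive or prescribed ordering.

```python
CATEGORY_PRIORITY = {"person": 6, "flower": 5, "cat": 4, "dog": 3, "cattle": 2}

def choose_candidate_subjects(subjects, top_k=1):
    """subjects: list of dicts with 'category', 'area', and optional 'position_score'."""
    def score(subject):
        return (CATEGORY_PRIORITY.get(subject["category"], 0),
                subject["area"],
                subject.get("position_score", 0.0))
    return sorted(subjects, key=score, reverse=True)[:top_k]
```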
An object at the center of the image can be detected more easily through the center weight map, and the candidate subject in the target image can be identified more accurately by using the trained subject detection model obtained by training on the visible light maps, center weight maps, subject mask maps, and the like.
FIG. 7 is a diagram illustrating an image processing effect according to an embodiment. As shown in fig. 7, a butterfly exists in a target image 702, the target image 702 is input to a network 704 of a subject detection model to obtain a subject region confidence map 706, then the subject region confidence map 706 is filtered and binarized to obtain a binarized mask map 708, and then the binarized mask map 708 is subjected to morphological processing and guided filtering to realize edge enhancement to obtain a subject mask map 710. The subject mask map 710 identifies the location of the candidate subject, i.e., the butterfly, in the target image.
It should be understood that although the various steps in the flowcharts of fig. 2-6 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict restriction on the order in which these steps are performed, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-6 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and these sub-steps or stages are not necessarily performed sequentially, but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 8 is a block diagram showing the structure of a subject detection device according to an embodiment. As shown in fig. 8, in one embodiment, the subject detection apparatus includes:
the motion detection module 802 is configured to perform motion detection on the target image, and determine a moving object in the target image and a motion speed corresponding to the moving object.
A subject detection module 804, configured to perform subject detection on the target image when the motion speed exceeds the speed threshold, so as to obtain a candidate subject included in the target image.
A subject determining module 806, configured to determine a target subject of the target image according to the candidate subject and the moving object.
According to the embodiment provided by the application, motion detection can be performed on the target image to determine the moving object in the target image and the motion speed corresponding to the moving object; when the motion speed exceeds a speed threshold, subject detection is performed on the target image to obtain the candidate subject contained in the target image, and the target subject of the target image is determined according to the candidate subject and the moving object. The target subject of the target image can be determined according to the candidate subject obtained by subject detection and the moving object when the motion speed of the moving object included in the image exceeds the speed threshold, so that the accuracy of subject detection can be improved.
In one embodiment, the subject determination module 806 may also be configured to treat the moving object as a target subject of the target image when the speed of movement does not exceed the speed threshold.
In one embodiment, the subject detection module 804 may be further configured to perform subject detection on the target image to obtain a candidate subject included in the target image when it is determined that the target image does not include the moving object; the subject determination module 806 is configured to use the candidate subject as a target subject of the target image.
In one embodiment, the subject determination module 806 may be further configured to remove a moving object included in the target image from the candidate subject, and use the removed candidate subject as the target subject.
In one embodiment, the subject determination module 806 may also be used to obtain a first segmentation map corresponding to the candidate subject and a second segmentation map corresponding to the moving object; generate an intermediate segmentation map based on the first segmentation map and the second segmentation map, where the intermediate segmentation map includes the overlapping region of the first segmentation map and the second segmentation map; and determine a target subject segmentation map according to the first segmentation map and the intermediate segmentation map.
In one embodiment, the subject determination module 806 may also be configured to remove the moving object contained in the target image from the candidate subjects, and to take, as the target subject, any candidate subject whose area exceeds an area threshold after the removal.
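A minimal sketch of this area-threshold route, assuming each candidate subject is given as a boolean mask and using an illustrative AREA_THRESHOLD value that is not specified by the text:

    import numpy as np

    AREA_THRESHOLD = 1000  # assumed illustrative value, in pixels

    def filter_candidates_by_area(candidate_masks, moving_mask, area_threshold=AREA_THRESHOLD):
        # Remove the moving object from each candidate mask, then keep the
        # candidates whose remaining area still exceeds the area threshold.
        targets = []
        for mask in candidate_masks:
            remaining = np.logical_and(mask, np.logical_not(moving_mask))
            if remaining.sum() > area_threshold:
                targets.append(remaining)
        return targets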
In one embodiment, the motion detection module 802 may also be configured to acquire an image sequence containing the target image; analyze the positions of pixel points in each frame of image contained in the image sequence; determine pixel points whose positions change as moving pixel points, and acquire the moving object composed of the moving pixel points in the target image; and calculate the motion speed of the moving object according to the positions of the moving pixel points contained in the moving object in each frame of image.
In one embodiment, the motion detection module 802 may be further configured to determine the position variation amplitude of a pixel point according to the positions of the pixel point in each frame of image, and to determine the pixel point as a moving pixel point when the position variation amplitude exceeds a variation threshold, as shown in the sketch below.
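As an illustration of the motion-detection route described in the two embodiments above, the following Python/NumPy sketch uses frame differencing as a stand-in for the per-pixel position analysis; the CHANGE_THRESHOLD value, the grayscale-frame assumption, and the centroid-based speed estimate are all illustrative assumptions rather than details taken from this application:

    import numpy as np

    CHANGE_THRESHOLD = 25.0  # assumed intensity-change threshold

    def moving_object_and_speed(frames, fps=30.0):
        # frames: at least two grayscale frames of identical shape (assumed).
        frames = [np.asarray(f, dtype=np.float32) for f in frames]
        moving_mask = np.zeros(frames[0].shape, dtype=bool)
        centroids = []

        for prev, curr in zip(frames[:-1], frames[1:]):
            # Pixels whose value changes by more than the threshold between
            # consecutive frames are treated as moving pixel points.
            changed = np.abs(curr - prev) > CHANGE_THRESHOLD
            moving_mask |= changed
            ys, xs = np.nonzero(changed)
            if len(xs):
                centroids.append((xs.mean(), ys.mean()))

        if len(centroids) < 2 or not moving_mask.any():
            return None, 0.0

        # Speed: average displacement of the moving region's centroid per frame,
        # converted to pixels per second.
        steps = [np.hypot(x2 - x1, y2 - y1)
                 for (x1, y1), (x2, y2) in zip(centroids[:-1], centroids[1:])]
        return moving_mask, float(np.mean(steps)) * fps

In practice, the returned speed would then be compared with the speed threshold to decide whether subject detection needs to be run, as described earlier.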
In one embodiment, the subject detection module 804 may be further configured to generate a center weight map corresponding to the target image, wherein the center weight map represents weight values that gradually decrease from the center to the edge; inputting the target image and the central weight map into a main body detection model to obtain a main body region confidence map; and determining candidate subjects in the target image according to the subject region confidence map.
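As an illustration of the center weight map mentioned above, the following sketch builds a map whose weights fall off gradually from the image center toward the edges; the Gaussian fall-off and the sigma value are assumed choices, and the subject detection model that consumes the target image together with this map is not sketched here:

    import numpy as np

    def center_weight_map(height, width, sigma=0.25):
        # Weight values decrease gradually from the image center to the edges.
        ys, xs = np.mgrid[0:height, 0:width]
        cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
        dist = np.hypot((ys - cy) / height, (xs - cx) / width)
        weights = np.exp(-(dist ** 2) / (2 * sigma ** 2))
        return weights / weights.max()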
The division of the modules in the subject detection apparatus is only for illustration; in other embodiments, the subject detection apparatus may be divided into different modules as needed to complete all or part of the functions of the subject detection apparatus.
Each module in the subject detection apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may be run on an electronic device, and the program modules constituting it may be stored in the memory of the electronic device. When the computer program is executed by a processor, the steps of the method described in the embodiments of the present application are performed.
The embodiment of the application also provides an electronic device. The electronic device includes an image processing circuit, which may be implemented using hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 9 is a schematic diagram of the image processing circuit in one embodiment. As shown in fig. 9, for convenience of explanation, only the aspects of the image processing technique related to the embodiments of the present application are shown.
As shown in fig. 9, the image processing circuit includes an ISP processor 940 and control logic 950. Image data captured by the imaging device 910 is first processed by the ISP processor 940, which analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of the imaging device 910. The imaging device 910 may include a camera having one or more lenses 912 and an image sensor 914. The image sensor 914 may include a color filter array (e.g., a Bayer filter), acquire the light intensity and wavelength information captured by each imaging pixel, and provide a set of raw image data that can be processed by the ISP processor 940. The sensor 920 (e.g., a gyroscope) may provide parameters acquired for image processing (e.g., anti-shake parameters) to the ISP processor 940 based on the sensor 920 interface type. The sensor 920 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
In addition, the image sensor 914 may also send raw image data to the sensor 920; the sensor 920 may then provide the raw image data to the ISP processor 940 based on the sensor 920 interface type, or store the raw image data in the image memory 930.
The ISP processor 940 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 940 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit-depth precision.
The ISP processor 940 may also receive image data from the image memory 930. For example, the sensor 920 interface sends raw image data to the image memory 930, and the raw image data in the image memory 930 is then provided to the ISP processor 940 for processing. The image memory 930 may be part of a memory device, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 914 interface, from the sensor 920 interface, or from the image memory 930, the ISP processor 940 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 930 for additional processing before being displayed. The ISP processor 940 receives the processed data from the image memory 930 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The image data processed by the ISP processor 940 may be output to the display 970 for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the ISP processor 940 may also be sent to the image memory 930, and the display 970 may read image data from the image memory 930. In one embodiment, the image memory 930 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 940 may be transmitted to an encoder/decoder 960 for encoding/decoding the image data. The encoded image data may be saved and decompressed before being displayed on the display 970. The encoder/decoder 960 may be implemented by a CPU, a GPU, or a coprocessor.
The statistical data determined by the ISP processor 940 may be transmitted to the control logic 950 unit. For example, the statistical data may include image sensor 914 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 912 shading correction, and the like. The control logic 950 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of the imaging device 910 and control parameters of the ISP processor 940 based on the received statistical data. For example, the control parameters of imaging device 910 may include sensor 920 control parameters (e.g., gain, integration time for exposure control, anti-shake parameters, etc.), camera flash control parameters, lens 912 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 912 shading correction parameters.
In the embodiments provided herein, the imaging device 910 may be used to acquire a target image, and the image memory 930 may be used to store the target image acquired by the imaging device 910 and the image sequence containing the target image. The ISP processor 940 may perform motion detection on the target image, determine the moving object in the target image and the motion speed corresponding to the moving object, perform subject detection on the target image when the motion speed exceeds a speed threshold to obtain a candidate subject contained in the target image, and determine the target subject of the target image according to the candidate subject and the moving object. The control logic 950 can then perform processing such as focusing and beautification on the target subject. The electronic device can implement the subject detection method provided by the above embodiments through the image processing circuit, which is not described herein again.
The embodiments of the present application also provide a computer-readable storage medium, namely one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the steps of the subject detection method.
The embodiments also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the subject detection method.
Any reference to memory, storage, a database, or another medium used in the embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above-mentioned embodiments express only several implementations of the present application, and their description is specific and detailed, but should not therefore be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (11)

1. A subject detection method, comprising:
carrying out motion detection on a target image, and determining a moving object in the target image and a motion speed corresponding to the moving object; the target image is a frame image in a video or an image sequence;
when the motion speed exceeds a speed threshold value, generating a center weight map corresponding to the target image, wherein the weight value represented by the center weight map is gradually reduced from the center to the edge;
inputting the target image and the central weight map into a main body detection model to obtain a main body region confidence map;
determining candidate subjects in the target image according to the subject region confidence map;
and determining a target subject of the target image according to the candidate subject and the moving object.
2. The method of claim 1, further comprising:
and when the movement speed does not exceed the speed threshold value, taking the moving object as a target subject of the target image.
3. The method of claim 1, further comprising:
when the target image is determined not to contain a moving object, performing subject detection on the target image to obtain a candidate subject contained in the target image;
and taking the candidate subject as a target subject of the target image.
4. The method of claim 1, wherein determining a target subject of the target image from the candidate subject and the moving object comprises:
and removing moving objects contained in the target image from the candidate main body, and taking the removed candidate main body as the target main body.
5. The method according to claim 4, wherein the removing of the moving object included in the target image from the candidate subject, and taking the removed candidate subject as the target subject, comprises:
acquiring a first segmentation map corresponding to the candidate body and acquiring a second segmentation map corresponding to the moving object;
generating an intermediate segmentation map based on the first segmentation map and the second segmentation map; the intermediate segmentation map includes an overlapping region of the first segmentation map and the second segmentation map;
and determining a target body segmentation map according to the first segmentation map and the intermediate segmentation map.
6. The method according to claim 4, wherein the removing of the moving object included in the target image from the candidate subject, and taking the removed candidate subject as the target subject, comprises:
removing a moving object contained in the target image from the candidate body;
and taking the candidate subject with the area exceeding the area threshold value in each candidate subject after the elimination as the target subject.
7. The method of claim 1, wherein the performing motion detection on the target image and determining the moving object in the target image and the corresponding motion speed of the moving object comprises:
acquiring an image sequence containing the target image;
analyzing the position of a pixel point in each frame of image contained in the image sequence;
determining pixel points with position changes as moving pixel points, and acquiring a moving object consisting of the moving pixel points in the target image;
and calculating the motion speed of the moving object according to the position of the motion pixel points contained in the moving object in each frame of image.
8. The method according to claim 7, wherein the determining the pixel point with the position change as the motion pixel point comprises:
determining the position change amplitude of the pixel points according to the positions of the pixel points in each frame of image;
and when the position change amplitude exceeds a change threshold value, determining the pixel point as a motion pixel point.
9. A subject detection device, comprising:
the motion detection module is used for carrying out motion detection on a target image and determining a moving object in the target image and a motion speed corresponding to the moving object; the target image is a frame image in a video or an image sequence;
the main body detection module is used for generating a central weight map corresponding to the target image when the motion speed exceeds a speed threshold, wherein the weight value represented by the central weight map is gradually reduced from the center to the edge; inputting the target image and the central weight map into a main body detection model to obtain a main body region confidence map; determining candidate subjects in the target image according to the subject region confidence map;
and the subject determining module is used for determining a target subject of the target image according to the candidate subject and the moving object.
10. An electronic device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to perform the steps of the subject detection method of any of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN201910658738.XA 2019-07-22 2019-07-22 Subject detection method, apparatus, electronic device, and computer-readable storage medium Active CN110378934B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910658738.XA CN110378934B (en) 2019-07-22 2019-07-22 Subject detection method, apparatus, electronic device, and computer-readable storage medium

Publications (2)

Publication Number Publication Date
CN110378934A CN110378934A (en) 2019-10-25
CN110378934B true CN110378934B (en) 2021-09-07

Family

ID=68254559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910658738.XA Active CN110378934B (en) 2019-07-22 2019-07-22 Subject detection method, apparatus, electronic device, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN110378934B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110866486B (en) * 2019-11-12 2022-06-10 Oppo广东移动通信有限公司 Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN111557692B (en) * 2020-04-26 2022-11-22 深圳华声医疗技术股份有限公司 Automatic measurement method, ultrasonic measurement device and medium for target organ tissue
CN113766130B (en) * 2021-09-13 2023-07-28 维沃移动通信有限公司 Video shooting method, electronic equipment and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103561271A (en) * 2013-11-19 2014-02-05 福建师范大学 Video airspace tamper detection method for removing moving object shot by static camera lens
CN105827952A (en) * 2016-02-01 2016-08-03 维沃移动通信有限公司 Photographing method for removing specified object and mobile terminal
CN108347563A (en) * 2018-02-07 2018-07-31 广东欧珀移动通信有限公司 Method for processing video frequency and device, electronic equipment, computer readable storage medium
CN108712609A (en) * 2018-05-17 2018-10-26 Oppo广东移动通信有限公司 Focusing process method, apparatus, equipment and storage medium
CN109389135A (en) * 2017-08-03 2019-02-26 杭州海康威视数字技术股份有限公司 A kind of method for screening images and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6172934B2 (en) * 2012-12-27 2017-08-02 キヤノン株式会社 IMAGING DEVICE, ITS CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM
CN105594197A (en) * 2013-09-27 2016-05-18 富士胶片株式会社 Imaging device and imaging method
CN106997589A (en) * 2017-04-12 2017-08-01 上海联影医疗科技有限公司 image processing method, device and equipment
CN109167910A (en) * 2018-08-31 2019-01-08 努比亚技术有限公司 focusing method, mobile terminal and computer readable storage medium

Also Published As

Publication number Publication date
CN110378934A (en) 2019-10-25

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant