CN113079311A - Image acquisition method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN113079311A (application number CN202010009870.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- preset
- target
- acquiring
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/667—Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- User Interface Of Digital Computer (AREA)
- Studio Devices (AREA)
Abstract
The disclosure relates to an image acquisition method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring a preview image captured by the camera; detecting a target region in the preview image based on a preset composition model; and, in response to detecting a target region, acquiring an image containing the target region as a target image. In this embodiment, the target region of interest to the photographer is obtained without the photographer selecting it manually, which improves the photographing experience; moreover, the target image is acquired without the photographer operating any key, so the process is simple and fast, further improving the shooting experience.
Description
Technical Field
The present disclosure relates to the field of control technologies, and in particular, to an image acquisition method and apparatus, an electronic device, and a storage medium.
Background
At present, the user interface of a camera app on an electronic device typically imitates the UI style of a professional single-lens reflex camera, with many icons arranged in the interface, such as a flash key, a camera-switching key, a settings key, and a shooting-mode key, so that the user can directly select the needed key to perform the corresponding operation and thereby quickly take a photo or record a video.
However, as the number of icons in the UI grows, the interface becomes harder to design. Moreover, the intricate icon layout increases the user's shooting difficulty and learning cost, degrading the user experience.
Disclosure of Invention
The present disclosure provides an image acquisition method and apparatus, an electronic device, and a storage medium to address the deficiencies of the related art.
According to a first aspect of the embodiments of the present disclosure, there is provided an image acquisition method applied to an electronic device provided with a camera, the method including:
acquiring a preview image acquired by the camera;
detecting a target area in the preview image based on a preset composition model;
in response to detecting a target region, acquiring an image containing the target region as a target image.
Optionally, the composition model comprises at least one of: a composition model built from composition principles, and a composition model trained on preset composition samples.
Optionally, before detecting the target region in the preview image based on a preset composition model, the method includes:
acquiring voice data collected by an audio collection module in the electronic device;
and, in response to the keywords of the voice data containing a preset keyword instruction, detecting the target region in the preview image based on the preset composition model.
Optionally, before detecting the target region in the preview image based on a preset composition model, the method includes:
acquiring a preview image captured by an instruction camera in the electronic device;
acquiring the photographer's posture from the preview image captured by the instruction camera;
and, in response to the photographer's posture including a preset posture, detecting the target region in the preview image based on the preset composition model.
Optionally, before detecting the target region in the preview image based on a preset composition model, the method includes:
acquiring a preview video stream acquired by an instruction camera in the electronic equipment;
tracking the photographer's eyeball in the preview video stream based on a preset eyeball-tracking algorithm to obtain the photographer's eyeball posture;
and, in response to obtaining the eyeball posture, performing the detection of the target region in the preview image based on the preset composition model.
Optionally, before detecting the target region in the preview image based on a preset composition model, the method further includes:
acquiring spatial attitude data acquired by a spatial attitude sensor in the electronic equipment;
acquiring the attitude of the electronic device according to the spatial attitude data, the attitude including the electronic device being in a stationary state;
in response to the duration of the attitude exceeding a preset first duration threshold, performing the detection of the target region in the preview image based on the preset composition model; and recording a video in response to the duration of the attitude exceeding a preset second duration threshold.
Optionally, after recording the video and before storing the recorded video, the method further comprises:
acquiring a first video spanning from camera start-up to the start of video recording;
adding the first video to the recorded video.
Optionally, before acquiring the preview image captured by the camera, the method further includes:
in response to detecting a camera start operation, displaying a first preset interface for displaying the preview image, wherein no operation key is arranged in the first preset interface.
Optionally, after acquiring an image including the target region as a target image, the method further includes:
and, in response to detecting a control instruction representing stopping shooting, stopping the current shooting and closing the first preset interface.
Optionally, after acquiring an image including the target region as a target image, the method further includes:
in response to detecting a browsing instruction representing browsing of the target image, displaying the target image in a second preset interface;
in response to detecting a deletion instruction representing deletion of the target image, deleting the currently displayed target image; and, in response to detecting a switching instruction representing switching of the target image, displaying the next target image.
According to a second aspect of the embodiments of the present disclosure, there is provided an image acquisition apparatus applied to an electronic device provided with a camera, the apparatus including:
the preview image acquisition module is used for acquiring a preview image acquired by the camera;
the target region detection module is used for detecting a target region in the preview image based on a preset composition model;
and the target image acquisition module is used for responding to the detection of the target area and acquiring an image containing the target area as a target image.
Optionally, the composition model comprises at least one of: a composition model built from composition principles, and a composition model trained on preset composition samples.
Optionally, the apparatus comprises:
the voice data acquisition module is used for acquiring voice data collected by an audio collection module in the electronic device;
and the keyword triggering module is used for triggering the preview image acquisition module in response to the keywords of the voice data containing a preset keyword instruction.
Optionally, the apparatus comprises:
the image acquisition module is used for acquiring a preview image acquired by an instruction camera in the electronic equipment;
the posture acquisition module is used for acquiring the photographer's posture from the preview image captured by the instruction camera;
and the posture triggering module is used for triggering the preview image acquisition module in response to the photographer's posture including a preset posture.
Optionally, the apparatus comprises:
the video stream acquisition module is used for acquiring a preview video stream acquired by an instruction camera in the electronic equipment;
the eyeball posture acquisition module is used for tracking the photographer's eyeball in the preview video stream based on a preset eyeball-tracking algorithm to obtain the photographer's eyeball posture;
and the eyeball posture triggering module is used for triggering the preview image acquisition module in response to obtaining the eyeball posture.
Optionally, before detecting the target region in the preview image based on a preset composition model, the apparatus further includes:
the data acquisition module is used for acquiring spatial attitude data acquired by a spatial attitude sensor in the electronic equipment;
the attitude acquisition module is used for acquiring the attitude of the electronic device according to the spatial attitude data, the attitude including the electronic device being in a stationary state;
and the attitude triggering module is used for triggering the preview image acquisition module in response to the duration of the attitude exceeding a preset first duration threshold, and for recording a video in response to the duration of the attitude exceeding a preset second duration threshold.
Optionally, the apparatus further comprises:
the video acquisition module is used for acquiring a first video spanning from camera start-up to the start of video recording;
and the video adding module is used for adding the first video into the recorded video.
Optionally, the apparatus further comprises:
the interface display module is used for displaying, in response to detecting a camera start operation, a first preset interface for displaying the preview image, wherein no operation key is arranged in the first preset interface.
Optionally, the apparatus further comprises:
and the preset interface closing module is used for stopping the current shooting and closing the first preset interface in response to detecting a control instruction representing stopping shooting.
Optionally, the apparatus further comprises:
the target image display module is used for displaying the target image in a second preset interface in response to detecting a browsing instruction representing browsing of the target image;
the target image deletion module is used for deleting the currently displayed target image in response to detecting a deletion instruction representing deletion of the target image; and
the target image switching module is used for displaying the next target image in response to detecting a switching instruction representing switching of the target image.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to execute executable instructions in the memory to implement the steps of any of the methods described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a readable storage medium having stored thereon executable instructions that, when executed by a processor, implement the steps of any one of the methods described above.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
According to the above embodiment, a preview image captured by the camera is acquired; a target region in the preview image is then detected based on a preset composition model; and, in response to detecting the target region, an image containing the target region is acquired as the target image. The target region of interest to the photographer is thus obtained without the photographer selecting it manually, which improves the photographing experience; moreover, the target image is acquired without the photographer operating any key, so the process is simple and fast, further improving the shooting experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic view of a user interface in the related art.
FIG. 2 is a flow chart illustrating an image acquisition method according to an exemplary embodiment.
FIG. 3 is a schematic diagram illustrating a default interface in accordance with an exemplary embodiment.
FIG. 4 is a flow diagram illustrating control of acquisition of a target area based on voice data according to an example embodiment.
Fig. 5 is a flowchart illustrating control of an acquisition target area based on a preview image according to an exemplary embodiment.
FIG. 6 is a flowchart illustrating acquiring a target region based on eye tracking control according to an example embodiment.
FIG. 7 is a flow diagram illustrating control of acquisition of a target region based on spatial pose data, according to an exemplary embodiment.
FIG. 8 is a flowchart illustrating deletion of a target image according to an exemplary embodiment.
Figs. 9 to 16 are block diagrams illustrating an image acquisition apparatus according to an exemplary embodiment.
FIG. 17 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The exemplary embodiments described below do not represent all embodiments consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with certain aspects of the present disclosure as recited in the appended claims.
At present, the user interface of a camera app on an electronic device typically imitates the UI style of a professional single-lens reflex camera, with many icons arranged in the interface, such as a flash key, a camera-switching key, a settings key, and a shooting-mode key; the effect is as shown in fig. 1. The user can thus directly select the needed key to perform the corresponding operation and thereby quickly take a photo or record a video.
However, as the number of icons in the UI grows, the interface becomes harder to design. Moreover, the intricate icon layout increases the user's shooting difficulty and learning cost, degrading the user experience.
To solve the above technical problem, an embodiment of the present disclosure provides an image acquisition method that may be applied to an electronic device provided with a camera, such as a smartphone or a tablet computer. Fig. 2 is a flowchart illustrating an image acquisition method according to an exemplary embodiment. Referring to fig. 2, the image acquisition method includes steps 21 to 23:
in step 21, a preview image captured by the camera is acquired.
In this embodiment, the electronic device is provided with a camera, which may include one or more of a front camera, a rear camera, and a 3D camera (e.g., a time-of-flight (TOF) camera). According to the role it plays, a camera in this embodiment falls into one of two groups: (a) a camera that captures the preview image or shoots the image; and (b) a camera that captures a preview image or a preview video stream, where the preview image is used to obtain the photographer's posture and the preview video stream is used to obtain the photographer's eyeball posture. For convenience of explanation, a camera serving role (b) is referred to in the following embodiments as the instruction camera.
In this embodiment, taking the rear camera as an example, when the camera starts it may capture an image as a preview image and send the preview image to the display screen for presentation to the photographer; at the same time, the preview image may also be temporarily stored in memory.
In this embodiment, a processor in the electronic device may communicate with the camera to obtain the captured preview image. Alternatively, the processor may communicate with the memory and read preview images the memory has already stored.
In an embodiment, a preset interface, referred to as the first preset interface for distinction, may be configured in the electronic device, with no operation key arranged in it; the effect is as shown in fig. 3. The first preset interface can thus be used to display the preview image, and because no keys are set, the photographer does not need to touch the display screen, which avoids shaking the electronic device and preserves shooting quality.
In step 22, a target region in the preview image is detected based on a preset composition model.
In this embodiment, the processor may obtain a preset composition model, input the acquired preview image into the composition model, and have the composition model output the target region in the preview image. The target region refers to the region the photographer is interested in.
The composition model can be obtained in the following ways:
for example, a composition rule common in the photographic industry or a photographer' S tendency composition rule such as a trisection composition, a diagonal composition, an S-shaped composition, a golden section composition, a triangle composition, or the like may be acquired, and then a composition model may be generated based on the composition rule.
As another example, a large number of composition samples, i.e., images captured using different composition principles, may be obtained and used to train an initial composition model (such as a neural network); training ends once the model converges, yielding the trained composition model.
It can be understood that, if the composition samples are images provided by the photographer, or images the photographer selected from candidate samples, the trained composition model can, to some extent, determine from the preview image a target region the photographer is likely to pay attention to.
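As an illustration of the rule-based variant, the sketch below scores candidate regions by how close their centres lie to the rule-of-thirds "power points". The region format, the scoring heuristic, and all function names are assumptions for illustration, not the patent's actual model.

```python
# Hypothetical rule-of-thirds scorer: a region is preferred when its
# centre sits near one of the four third-line intersections of the frame.

def thirds_intersections(width, height):
    """The four rule-of-thirds power points of a width x height frame."""
    xs = (width / 3, 2 * width / 3)
    ys = (height / 3, 2 * height / 3)
    return [(x, y) for x in xs for y in ys]

def thirds_score(region, width, height):
    """Score in (0, 1]; 1.0 means the centre lies exactly on a power point."""
    x, y, w, h = region                      # region as (x, y, w, h)
    cx, cy = x + w / 2, y + h / 2
    d = min(((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
            for px, py in thirds_intersections(width, height))
    diag = (width ** 2 + height ** 2) ** 0.5
    return 1.0 - d / diag                    # normalise by frame diagonal

def detect_target_region(candidates, width, height):
    """Pick the candidate region that best matches the composition rule."""
    return max(candidates, key=lambda r: thirds_score(r, width, height))
```

A real system would obtain the candidate regions from an upstream detector (e.g., saliency or object detection) before ranking them.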
In this embodiment, the processor may detect the target region as soon as it acquires the preview image. In another embodiment, the processor may instead watch for a control instruction after acquiring the preview image and detect the target region only once a control instruction is detected, as in the following examples:
in one example, an audio capture module (e.g., a microphone) is disposed in the electronic device, and before use, a photographer adjusts a shooting mode in the electronic device to an audio control mode. Referring to fig. 4, in step 41, the processor may obtain voice data collected by an audio collection module in the electronic device. In step 42, the processor may obtain a keyword in the voice data, and in response to the keyword in the voice data containing a preset keyword instruction, execute step 22.
The processor may acquire the keywords in either of the following ways:
in one way, the processor may convert the speech data into a text sequence, and the conversion method may be referred to in the related art. And then, performing word segmentation on the acquired text sequence, stopping words and the like to obtain keywords contained in the text sequence.
In the second way, the processor may extract frequencies or phonemes from the voice data; when a frequency or phoneme corresponding to a preset keyword instruction is detected, the preset keyword instruction is deemed detected.
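The first keyword-acquisition way can be sketched as follows. The keyword list, the stop-word list, and the whitespace tokenisation are illustrative assumptions; in practice this logic would sit behind a speech-to-text engine.

```python
# Hypothetical keyword check for step 42: tokenise the transcribed text,
# drop stop words, and test for a preset keyword command.

PRESET_KEYWORDS = {"shoot", "capture", "cheese"}   # assumed command words
STOP_WORDS = {"the", "a", "an", "please", "now"}

def extract_keywords(text):
    """Split on whitespace, drop stop words, strip trailing punctuation."""
    tokens = text.lower().split()
    return [t.strip(".,!?") for t in tokens if t not in STOP_WORDS]

def contains_keyword_instruction(text):
    """True when any preset keyword command appears in the utterance."""
    return any(k in PRESET_KEYWORDS for k in extract_keywords(text))
```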
In another example, the electronic device can start an instruction camera during previewing. For example, when the photographer shoots with the rear camera of the electronic device, the front camera may serve as the instruction camera; when the photographer shoots with the front camera, the TOF camera or the rear camera may serve as the instruction camera; and when the photographer takes a selfie with the front camera, the front camera itself can serve as the instruction camera. Referring to fig. 5, in step 51, the processor may acquire a preview image captured by the instruction camera in the electronic device; the acquisition may involve the processor communicating directly with the instruction camera, or reading the preview image from a specified location, which is not limited here. In step 52, the processor may obtain the photographer's posture from the preview image captured by the instruction camera. In step 53, in response to the photographer's posture including a preset posture, step 22 is executed.
In yet another example, the electronic device can likewise start an instruction camera during previewing, chosen in the same way as in the previous example. Referring to fig. 6, in step 61, a preview video stream captured by the instruction camera in the electronic device is acquired. In step 62, the photographer's eyeball is tracked in the preview video stream based on a preset eyeball-tracking algorithm to obtain the photographer's eyeball posture. In step 63, in response to obtaining the eyeball posture, step 22 is executed.
It should be noted that, in this example, the preset eyeball-tracking algorithm performs three steps on each frame of the preview video stream: face detection, eye-ROI extraction, and eyeball-center localization; the eyeball posture can then be determined from the positions of the eyeball center across multiple frames. For face detection, eye-ROI extraction, and eyeball-center localization, reference may be made to the related art, which is not repeated here.
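A minimal sketch of turning per-frame eyeball-centre positions into an eyeball posture, assuming the upstream face detection, eye-ROI extraction, and centre localization have already run. The "fixation" gesture and its thresholds are invented for illustration; the patent does not define a specific gesture vocabulary.

```python
# Hypothetical posture classifier: report a "fixation" when the eyeball
# centre stays within a small radius for enough consecutive frames.

def eye_gesture(centres, radius=3.0, min_frames=5):
    """centres: list of (x, y) eyeball-centre positions, one per frame.
    Returns 'fixation' if the last min_frames centres cluster tightly,
    otherwise None."""
    if len(centres) < min_frames:
        return None
    recent = centres[-min_frames:]
    ax = sum(x for x, _ in recent) / min_frames   # cluster centroid
    ay = sum(y for _, y in recent) / min_frames
    tight = all(((x - ax) ** 2 + (y - ay) ** 2) ** 0.5 <= radius
                for x, y in recent)
    return "fixation" if tight else None
```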
In yet another example, a spatial attitude sensor, such as an acceleration sensor, a gravity sensor, or a gyroscope, may be arranged in the electronic device. Referring to fig. 7, in step 71, the processor may acquire spatial attitude data collected by the spatial attitude sensor in the electronic device. In step 72, the processor acquires the attitude of the electronic device according to the spatial attitude data, the attitude including the electronic device being in a stationary state. In step 73, in response to the duration of the attitude exceeding a preset first duration threshold (e.g., 200 ms, adjustable), step 22 is executed.
In this example, the processor may also record a video in response to the duration of the attitude exceeding a preset second duration threshold (e.g., 2 s, adjustable). It can be understood that, after the processor decides to record a video, it can also acquire a first video spanning the interval from camera start-up to the start of recording and add this first video to the recorded video, thereby keeping the recorded video complete and helping the photographer capture fleeting moments.
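Steps 71 to 73 and the video-recording branch can be sketched as below, together with the "first video" prepending step. The sample format, the jitter tolerance, and the frame representation are assumptions; the two thresholds mirror the example values (200 ms and 2 s) above.

```python
# Hedged sketch of the stillness-triggered capture logic. Input is a list
# of (timestamp_ms, accel_delta) samples from a spatial attitude sensor,
# where accel_delta is the change in acceleration magnitude between reads.

def stationary_action(samples, still_tol=0.05, photo_ms=200, video_ms=2000):
    """Return the action the stillness duration has earned, if any."""
    still_since = None
    action = None
    for t, delta in samples:
        if abs(delta) <= still_tol:
            if still_since is None:
                still_since = t                    # stillness starts here
            held = t - still_since
            if held >= video_ms:
                action = "record_video"            # second threshold
            elif held >= photo_ms and action is None:
                action = "detect_target_region"    # first threshold
        else:
            still_since = None                     # movement resets the timer
            action = None
    return action

def assemble_full_video(preroll_frames, recorded_frames):
    """Prepend the 'first video' (camera start -> recording start);
    frames here are stand-ins for real video data, muxing is out of scope."""
    return list(preroll_frames) + list(recorded_frames)
```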
In an embodiment, while detecting control instructions, the processor may also detect a control instruction representing that shooting should stop; in that case, the processor stops the current shooting and closes the first preset interface.
In step 23, in response to detecting the target region, an image containing the target region is acquired as a target image.
In this embodiment, the processor may acquire an image containing the target region as the target image after detecting the target region. For example, when there is a single target region, the processor may crop the image within the target region to a set size as the target image, or take the entire preview image as the target image. As another example, when there are multiple target regions, the processor crops the preview image according to the size of each target region, cropping one region at a time while using the original preview image as the reference template, so that the resulting target images do not interfere with one another; alternatively, the regions can be cropped simultaneously when the images of adjacent target regions do not intersect, likewise yielding multiple target images. A skilled person can choose a suitable scheme for acquiring the target images according to the specific scene, which is not limited here.
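The multi-region case can be sketched as follows, cropping each detected target region from the original preview image so the crops do not interfere. The row-major nested-list image representation is a stand-in for real pixel data.

```python
# Hypothetical cropping helpers for step 23: every region is cut from the
# unmodified preview (the "reference template"), never from a prior crop.

def crop_region(image, region):
    """image: list of rows; region: (x, y, w, h). Returns the sub-image."""
    x, y, w, h = region
    return [row[x:x + w] for row in image[y:y + h]]

def crop_target_images(preview, regions):
    """One independent crop per detected target region."""
    return [crop_region(preview, r) for r in regions]
```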
In this embodiment, considering that one shot may yield multiple target images, the photographer may select among them. The processor may detect a browse instruction, for example by detecting that the photographer has triggered the "album" key or the "browse" key in the camera app. Referring to fig. 8, in step 81, in response to detecting a browse instruction representing browsing of a target image, the processor may display the target image within a second preset interface. In step 82, the processor may delete the currently displayed target image in response to detecting a deletion instruction representing deletion of the target image. For example, a leftward swipe may represent that the photographer likes the currently displayed target image, which is then stored and the next target image displayed; a rightward swipe may represent that the photographer dislikes the currently displayed target image, in which case it is deleted and the next target image displayed.
It should be noted that the processor may, at a preset period, take target images the photographer has stored as new composition samples, or take a preset number of target images the photographer has selected as new composition samples. The processor can use the new composition samples to update the composition sample set and then retrain the composition model with the updated set, so that the target region output by the composition model matches the region the photographer cares about with ever higher precision, until the region the photographer's eyes reach coincides with the region the camera attends to, achieving an effect of "human-machine unity".
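The feedback loop described above might be sketched as a periodic sample-set update. The cap on the set size and the function name are assumptions, and the actual retraining procedure is out of scope here.

```python
# Illustrative sketch: kept target images become new composition samples,
# refreshing the sample set the composition model is retrained on.

def update_sample_set(sample_set, kept_target_images, max_samples=10000):
    """Append the photographer's kept images, keeping only the most recent
    max_samples entries so the set tracks current preferences."""
    sample_set = sample_set + list(kept_target_images)
    return sample_set[-max_samples:]
```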
Therefore, in the embodiment of the disclosure, the preview image collected by the camera is acquired; then, a target area in the preview image is detected based on a preset composition model; then, in response to detecting the target area, an image containing the target area is acquired as the target image. In this way, the target area the photographer attends to can be obtained without manual selection by the photographer, which can improve the photographing experience; moreover, the target image can be acquired without the photographer operating any keys, which is simple and fast and further improves the shooting experience.
On the basis of the image acquisition method, the embodiment of the present disclosure further provides an image acquisition apparatus, and fig. 9 is a block diagram illustrating an image acquisition apparatus according to an exemplary embodiment. Referring to fig. 9, an image capturing apparatus applied to an electronic device provided with a camera includes:
a preview image acquiring module 91, configured to acquire a preview image acquired by the camera;
a target region detecting module 92, configured to detect a target region in the preview image based on a preset composition model;
and a target image acquiring module 93, configured to, in response to detecting the target region, acquire an image including the target region as a target image.
Optionally, the composition model comprises at least one of: a composition model constructed based on a composition principle, and a composition model trained based on preset composition samples.
In one embodiment, referring to fig. 10, the apparatus comprises:
the voice data acquisition module 101 is used for acquiring voice data acquired by an audio acquisition module in the electronic equipment;
and the keyword triggering module 102 is configured to trigger the preview image obtaining module in response to the keywords of the voice data containing a preset keyword instruction.
In one embodiment, referring to fig. 11, the apparatus comprises:
the image acquisition module 111 is used for acquiring a preview image acquired by an instruction camera in the electronic equipment;
a gesture obtaining module 112, configured to obtain a photographer gesture in the preview image collected by the instruction camera;
and the gesture triggering module 113 is configured to trigger the preview image acquiring module in response to the photographer gesture including a preset gesture.
In one embodiment, referring to fig. 12, the apparatus comprises:
a video stream acquiring module 121, configured to acquire a preview video stream acquired by an instruction camera in the electronic device;
an eyeball posture acquisition module 122, configured to track an eyeball of a photographer in the preview video stream based on a preset eyeball tracking algorithm, and obtain an eyeball posture of the photographer;
and the gesture triggering module 123 is configured to trigger the preview image acquiring module in response to obtaining the eyeball posture.
In an embodiment, referring to fig. 13, the apparatus further comprises:
the data acquisition module 131 is configured to acquire spatial attitude data acquired by a spatial attitude sensor in the electronic device;
a pose acquisition module 132, configured to acquire the pose of the electronic device according to the spatial attitude data; the pose includes the electronic device being in a stationary state;
the gesture triggering module 133 is configured to trigger the preview image obtaining module in response to the duration of the pose exceeding a preset first duration threshold, and to record a video in response to the duration of the pose exceeding a preset second duration threshold.
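The two duration thresholds handled by module 133 can be sketched as follows (a Python sketch; the function name and the concrete threshold values are illustrative — the disclosure only requires a first and a second preset duration threshold, with the second larger than the first):

```python
def classify_hold(still_since, now, t_photo=1.0, t_video=3.0):
    """Decide the action from how long the device has been held still.

    still_since: timestamp (seconds) when the stationary pose was first
    observed.  t_photo / t_video: the first and second duration
    thresholds.  Returns "record_video" once the second threshold is
    exceeded, "detect_target" once the first is exceeded, else "wait".
    """
    held = now - still_since
    if held >= t_video:
        return "record_video"    # second duration threshold exceeded
    if held >= t_photo:
        return "detect_target"   # first duration threshold exceeded
    return "wait"
```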
In an embodiment, referring to fig. 14, the apparatus further comprises:
the video acquiring module 141 is configured to acquire a first video from the start of the camera to the start of recording the video;
a video adding module 142, configured to add the first video to the recorded video.
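The first-video mechanism of modules 141 and 142 amounts to buffering frames from camera start until recording starts, then prepending that buffer to the recorded video. A Python sketch (the class name, frame representation, and buffer length are illustrative assumptions):

```python
from collections import deque

class PreRollRecorder:
    """Sketch of acquiring the 'first video' (frames from camera start
    until recording starts) and adding it to the recorded video."""
    def __init__(self, max_preroll_frames=150):
        # Bounded buffer: only the most recent pre-recording frames kept.
        self.preroll = deque(maxlen=max_preroll_frames)
        self.recording = False
        self.recorded = []

    def on_frame(self, frame):
        if self.recording:
            self.recorded.append(frame)
        else:
            self.preroll.append(frame)  # first video, before recording

    def start_recording(self):
        self.recording = True

    def finish(self):
        # Add the first video in front of the recorded video.
        return list(self.preroll) + self.recorded
```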
In one embodiment, the apparatus further comprises:
the preset interface display module is configured to display, in response to detecting a camera start operation, a first preset interface for displaying the preview image; no operation key is arranged in the first preset interface.
In one embodiment, referring to fig. 15, the apparatus further comprises:
and the preset interface closing module 151 is configured to stop the shooting and close the first preset interface in response to detecting a control instruction indicating that the shooting is stopped.
In one embodiment, referring to fig. 16, the apparatus further comprises:
a target image display module 161, configured to, in response to detecting a browsing instruction that represents browsing the target image, display the target image in a second preset interface;
a target image deletion module 162 configured to delete a currently displayed target image in response to detecting a deletion instruction indicating to delete the target image; and
and a target image switching module 163, configured to display the next target image in response to detecting a switching instruction representing switching the target image.
It can be understood that the apparatus provided in the embodiment of the present disclosure corresponds to the content of the above method embodiments, and specific content may refer to the content of each method embodiment, which is not described herein again.
Therefore, in the embodiment of the disclosure, the preview image collected by the camera is acquired; then, a target area in the preview image is detected based on a preset composition model; then, in response to detecting the target area, an image containing the target area is acquired as the target image. In this way, the target area the photographer attends to can be obtained without manual selection by the photographer, which can improve the photographing experience; moreover, the target image can be acquired without the photographer operating any keys, which is simple and fast and further improves the shooting experience.
FIG. 17 is a block diagram illustrating an electronic device in accordance with an example embodiment. For example, the electronic device 1700 may be a smartphone, a computer, a digital broadcast terminal, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like, including the image acquisition apparatus described above.
Referring to fig. 17, electronic device 1700 may include one or more of the following components: processing component 1702, memory 1704, power component 1706, multimedia component 1708, audio component 1710, input/output (I/O) interface 1712, sensor component 1714, communication component 1716, and image capture component 1718.
The processing component 1702 generally provides for overall operation of the electronic device 1700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. Processing component 1702 may include one or more processors 1720 to execute instructions. Further, processing component 1702 may include one or more modules that facilitate interaction between processing component 1702 and other components.
The memory 1704 is configured to store various types of data to support operations at the electronic device 1700. Examples of such data include instructions for any application or method operating on the electronic device 1700, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1704 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 1706 provides power to the various components of the electronic device 1700. The power components 1706 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 1700.
The multimedia component 1708 includes a screen providing an output interface between the electronic device 1700 and a target object. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a target object. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The I/O interface 1712 provides an interface between the processing component 1702 and peripheral interface modules, such as a keyboard, click wheel, buttons, and the like.
The sensor assembly 1714 includes one or more sensors for providing various aspects of state assessment for the electronic device 1700. For example, sensor assembly 1714 may detect an open/closed state of electronic device 1700, the relative positioning of components, such as a display and keypad of electronic device 1700, a change in the position of electronic device 1700 or a component, the presence or absence of a target object in contact with electronic device 1700, orientation or acceleration/deceleration of electronic device 1700, and a change in the temperature of electronic device 1700.
The communication component 1716 is configured to facilitate communications between the electronic device 1700 and other devices in a wired or wireless manner. The electronic device 1700 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1716 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1716 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an example embodiment, the electronic device 1700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components.
In an exemplary embodiment, there is also provided a non-transitory readable storage medium including executable instructions, such as the memory 1704 including instructions executable by the processor 1720 of the electronic device 1700. The readable storage medium may be, for example, ROM, Random Access Memory (RAM), CD-ROM, magnetic tape, a floppy disk, an optical data storage device, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosed solution following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (22)
1. An image acquisition method is applied to an electronic device provided with a camera, and the method comprises the following steps:
acquiring a preview image acquired by the camera;
detecting a target area in the preview image based on a preset composition model;
in response to detecting a target region, acquiring an image containing the target region as a target image.
2. The image acquisition method according to claim 1, wherein the composition model includes at least one of: a composition model constructed based on a composition principle, and a composition model trained based on preset composition samples.
3. The image acquisition method according to claim 1, wherein before detecting the target region in the preview image based on a preset composition model, the method comprises:
acquiring voice data acquired by an audio acquisition module in the electronic equipment;
and in response to the keywords of the voice data containing a preset keyword instruction, performing the detecting of the target area in the preview image based on the preset composition model.
4. The image acquisition method according to claim 1, wherein before detecting the target region in the preview image based on a preset composition model, the method comprises:
acquiring a preview image acquired by an instruction camera in the electronic equipment;
acquiring a photographer posture in a preview image acquired by the instruction camera;
and in response to the photographer posture including a preset posture, performing the detecting of the target area in the preview image based on the preset composition model.
5. The image acquisition method according to claim 1, wherein before detecting the target region in the preview image based on a preset composition model, the method comprises:
acquiring a preview video stream acquired by an instruction camera in the electronic equipment;
tracking the eyeball of a photographer in the preview video stream based on a preset eyeball tracking algorithm to obtain the eyeball posture of the photographer;
and in response to obtaining the eyeball posture, performing the detecting of the target area in the preview image based on the preset composition model.
6. The image acquisition method according to claim 1, wherein before detecting the target region in the preview image based on a preset composition model, the method further comprises:
acquiring spatial attitude data acquired by a spatial attitude sensor in the electronic equipment;
acquiring the pose of the electronic device according to the spatial attitude data; the pose includes the electronic device being in a stationary state;
in response to the duration of the pose exceeding a preset first duration threshold, performing the detecting of the target area in the preview image based on the preset composition model; and in response to the duration of the pose exceeding a preset second duration threshold, recording a video.
7. The image acquisition method of claim 6, wherein after recording the video and before storing the recorded video, the method further comprises:
acquiring a first video from the start of a camera to the start of video recording;
adding the first video to the recorded video.
8. The image acquisition method according to claim 1, wherein before acquiring the preview image acquired by the camera, the method further comprises:
in response to detecting a camera start operation, displaying a first preset interface for displaying the preview image; wherein no operation key is arranged in the first preset interface.
9. The image acquisition method according to claim 8, wherein after acquiring an image containing the target region as a target image, the method further comprises:
and in response to detecting a control instruction representing stopping the shooting, stopping the current shooting and closing the first preset interface.
10. The image acquisition method according to claim 1, wherein after acquiring an image containing the target region as a target image, the method further comprises:
in response to detecting a browsing instruction representing browsing the target image, displaying the target image in a second preset interface;
deleting the currently displayed target image in response to detecting a deletion instruction representing deleting the target image; and in response to detecting a switching instruction representing switching the target image, displaying the next target image.
11. An image acquisition apparatus, applied to an electronic device provided with a camera, the apparatus comprising:
the preview image acquisition module is used for acquiring a preview image acquired by the camera;
a target region detecting module, configured to detect a target area in the preview image based on a preset composition model;
and the target image acquisition module is used for responding to the detection of the target area and acquiring an image containing the target area as a target image.
12. The image acquisition apparatus according to claim 11, wherein the composition model includes at least one of: a composition model constructed based on a composition principle, and a composition model trained based on preset composition samples.
13. The image capturing apparatus according to claim 11, characterized in that the apparatus comprises:
the voice data acquisition module is used for acquiring voice data acquired by an audio acquisition module in the electronic equipment;
and the keyword triggering module is configured to trigger the preview image acquisition module in response to the keywords of the voice data containing a preset keyword instruction.
14. The image capturing apparatus according to claim 11, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring a preview image acquired by an instruction camera in the electronic equipment;
the gesture acquisition module is used for acquiring the gesture of a photographer in the preview image acquired by the instruction camera;
and the gesture triggering module is configured to trigger the preview image acquisition module in response to the photographer gesture including a preset gesture.
15. The image capturing apparatus according to claim 11, characterized in that the apparatus comprises:
the video stream acquisition module is used for acquiring a preview video stream acquired by an instruction camera in the electronic equipment;
the eyeball posture acquisition module is used for tracking the eyeball of the photographer in the preview video stream based on a preset eyeball tracking algorithm to obtain the eyeball posture of the photographer;
and the gesture triggering module is configured to trigger the preview image acquisition module in response to obtaining the eyeball posture.
16. The apparatus according to claim 11, wherein the apparatus further comprises:
the data acquisition module is used for acquiring spatial attitude data acquired by a spatial attitude sensor in the electronic equipment;
the attitude acquisition module is configured to acquire the pose of the electronic device according to the spatial attitude data; the pose includes the electronic device being in a stationary state;
the gesture triggering module is configured to trigger the preview image acquisition module in response to the duration of the pose exceeding a preset first duration threshold, and to record a video in response to the duration of the pose exceeding a preset second duration threshold.
17. The image capturing device of claim 16, wherein the device further comprises:
the video acquisition module is used for acquiring a first video from the start of a camera to the start of recording the video;
and the video adding module is used for adding the first video into the recorded video.
18. The image capturing apparatus according to claim 11, characterized in that the apparatus further comprises:
a preset interface display module, configured to display, in response to detecting a camera start operation, a first preset interface for displaying the preview image; wherein no operation key is arranged in the first preset interface.
19. The image capturing device of claim 18, wherein the device further comprises:
and the preset interface closing module is used for responding to the detected control instruction for representing the stop of shooting, stopping the shooting and closing the first preset interface.
20. The image capturing apparatus according to claim 11, characterized in that the apparatus further comprises:
the target image display module is used for responding to the detected browsing instruction representing the browsing of the target image and displaying the target image in a second preset interface;
the target image deleting module is used for responding to the detected deleting instruction for representing the deleted target image and deleting the currently displayed target image; and
and the target image switching module is configured to display the next target image in response to detecting a switching instruction representing switching the target image.
21. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to execute executable instructions in the memory to implement the steps of the method of any of claims 1 to 10.
22. A readable storage medium having stored thereon executable instructions, wherein the executable instructions when executed by a processor implement the steps of the method of any one of claims 1 to 10.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010009870.0A CN113079311B (en) | 2020-01-06 | 2020-01-06 | Image acquisition method and device, electronic equipment and storage medium |
US16/901,731 US11715234B2 (en) | 2020-01-06 | 2020-06-15 | Image acquisition method, image acquisition device, and storage medium |
EP20181078.5A EP3846447A1 (en) | 2020-01-06 | 2020-06-19 | Image acquisition method, image acquisition device, electronic device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113079311A true CN113079311A (en) | 2021-07-06 |
CN113079311B CN113079311B (en) | 2023-06-27 |
Family
ID=71111330
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010009870.0A Active CN113079311B (en) | 2020-01-06 | 2020-01-06 | Image acquisition method and device, electronic equipment and storage medium |
Country Status (3)
Country | Link |
---|---|
US (1) | US11715234B2 (en) |
EP (1) | EP3846447A1 (en) |
CN (1) | CN113079311B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113485555A (en) * | 2021-07-14 | 2021-10-08 | 上海联影智能医疗科技有限公司 | Medical image reading method, electronic equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103442175A (en) * | 2013-09-02 | 2013-12-11 | 百度在线网络技术(北京)有限公司 | Photographing control method and device of mobile terminal and mobile terminal |
CN105872388A (en) * | 2016-06-06 | 2016-08-17 | 珠海市魅族科技有限公司 | Shooting method, shooting device and shooting terminal |
CN107257439A (en) * | 2017-07-26 | 2017-10-17 | 维沃移动通信有限公司 | A kind of image pickup method and mobile terminal |
US20190253615A1 (en) * | 2018-02-13 | 2019-08-15 | Olympus Corporation | Imaging device, information terminal, control method for imaging device, and control method for information terminal |
KR20190105533A (en) * | 2019-08-26 | 2019-09-17 | 엘지전자 주식회사 | Method for editing image based on artificial intelligence and artificial device |
CN110476405A (en) * | 2017-12-01 | 2019-11-19 | 三星电子株式会社 | For providing and shooting the method and system of related recommendation information |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6301440B1 (en) * | 2000-04-13 | 2001-10-09 | International Business Machines Corp. | System and method for automatically setting image acquisition controls |
US8948468B2 (en) * | 2003-06-26 | 2015-02-03 | Fotonation Limited | Modification of viewing parameters for digital images using face detection information |
US20050185052A1 (en) * | 2004-02-25 | 2005-08-25 | Raisinghani Vijay S. | Automatic collision triggered video system |
GB2448221B (en) | 2007-04-02 | 2012-02-01 | Samsung Electronics Co Ltd | Method and apparatus for providing composition information in digital image processing device |
JP5115139B2 (en) * | 2007-10-17 | 2013-01-09 | ソニー株式会社 | Composition determination apparatus, composition determination method, and program |
US8948574B2 (en) * | 2008-11-24 | 2015-02-03 | Mediatek Inc. | Multimedia recording apparatus and method |
KR101030652B1 (en) | 2008-12-16 | 2011-04-20 | 아이리텍 잉크 | An Acquisition System and Method of High Quality Eye Images for Iris Recognition |
EP2407943B1 (en) | 2010-07-16 | 2016-09-28 | Axis AB | Method for event initiated video capturing and a video camera for capture event initiated video |
WO2015103444A1 (en) * | 2013-12-31 | 2015-07-09 | Eyefluence, Inc. | Systems and methods for gaze-based media selection and editing |
US9794475B1 (en) * | 2014-01-29 | 2017-10-17 | Google Inc. | Augmented video capture |
US10521671B2 (en) * | 2014-02-28 | 2019-12-31 | Second Spectrum, Inc. | Methods and systems of spatiotemporal pattern recognition for video content development |
US9363426B2 (en) * | 2014-05-29 | 2016-06-07 | International Business Machines Corporation | Automatic camera selection based on device orientation |
WO2016061634A1 (en) * | 2014-10-24 | 2016-04-28 | Beezbutt Pty Limited | Camera application |
JP2018508135A (en) | 2014-12-30 | 2018-03-22 | モルフォトラスト・ユーエスエー・リミテッド ライアビリティ カンパニーMorphotrust Usa,Llc | Video trigger analysis |
WO2018140404A1 (en) * | 2017-01-24 | 2018-08-02 | Lonza Limited | Methods and systems for using a virtual or augmented reality display to perform industrial maintenance |
AU2018223225A1 (en) * | 2017-02-23 | 2019-10-17 | 5i Corporation Pty. Limited | Camera apparatus |
CN108307113B (en) | 2018-01-26 | 2020-10-09 | 北京图森智途科技有限公司 | Image acquisition method, image acquisition control method and related device |
CN108737715A (en) | 2018-03-21 | 2018-11-02 | 北京猎户星空科技有限公司 | A kind of photographic method and device |
CN109858539A (en) | 2019-01-24 | 2019-06-07 | 武汉精立电子技术有限公司 | A kind of ROI region extracting method based on deep learning image, semantic parted pattern |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113485555A (en) * | 2021-07-14 | 2021-10-08 | 上海联影智能医疗科技有限公司 | Medical image reading method, electronic equipment and storage medium |
CN113485555B (en) * | 2021-07-14 | 2024-04-26 | 上海联影智能医疗科技有限公司 | Medical image film reading method, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
US11715234B2 (en) | 2023-08-01 |
CN113079311B (en) | 2023-06-27 |
EP3846447A1 (en) | 2021-07-07 |
US20210209796A1 (en) | 2021-07-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||