CN113727013A - Method and apparatus for providing image capturing guide - Google Patents


Info

Publication number
CN113727013A
CN113727013A (application CN202110245479.5A)
Authority
CN
China
Prior art keywords
image
display
electronic device
processor
region
Prior art date
Legal status
Pending
Application number
CN202110245479.5A
Other languages
Chinese (zh)
Inventor
文灿奎
权范埈
张达峰
崔智焕
金哲洙
赵成大
Current Assignee
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to KR1020210034817A priority Critical patent/KR20210138483A/en
Priority to PCT/KR2021/004718 priority patent/WO2021230507A1/en
Publication of CN113727013A publication Critical patent/CN113727013A/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H04N 23/62 Control of parameters via user interfaces
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N 23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N 23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N 23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N 23/67 Focus control based on electronic image sensor signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

An electronic device according to an embodiment herein may include a camera, a distance sensor, a display, and a processor. The processor may acquire an image including a subject through the camera, acquire distance data on the distance between the subject and the electronic device through the distance sensor, determine the occurrence of defocus based on the distance data, sense a region of interest including at least a part of the subject within the image, crop a region including at least the region of interest within the image, and display the cropped region on the display in an enlarged manner together with a message indicating defocus.

Description

Method and apparatus for providing image capturing guide
Technical Field
The present disclosure relates to a technique of providing a user with an image capturing guide using a camera and a distance sensor provided in an electronic device.
Background
The minimum focal length refers to the minimum distance between the image sensor and the subject required for the subject to be in focus. When a subject is located within the minimum focal length of a photographing device such as a camera, a defocused image is captured due to an optical limitation.
Disclosure of Invention
Recently, as the size of the image sensor mounted in a mobile device has increased, the minimum focal length of the camera has also increased. As a result, the problem of capturing a defocused image of a subject located within the minimum focal length has become more frequent.
In addition, an electronic device according to the related art analyzes blur included in an image using only low-level features, such as frequency analysis of the image, without considering the inherent characteristics of the subject.
An electronic device according to an embodiment herein may include a camera, a distance sensor, a display, and at least one processor electrically connected to the camera, the distance sensor, and the display. The at least one processor may acquire an image including a subject through the camera, acquire distance data on the distance between the subject and the electronic device through the distance sensor, determine the occurrence of defocus based on the distance data, sense a region of interest including at least a part of the subject within the image, crop a region including at least the region of interest within the image, and display the cropped region on the display in an enlarged manner together with a message indicating defocus.
A method of operating an electronic device according to an embodiment herein may include: an act of acquiring, through a camera included in the electronic device, an image including a subject; an act of acquiring distance data on the distance between the subject and the electronic device through a distance sensor included in the electronic device; an act of determining the occurrence of defocus based on the distance data; an act of sensing a region of interest including at least a part of the subject within the image; an act of cropping a region including at least the region of interest within the image; and an act of displaying the cropped region on the display in an enlarged manner together with a message indicating defocus.
An electronic device according to an embodiment herein may include a camera, a distance sensor, a display, and at least one processor electrically connected to the camera, the distance sensor, and the display. The at least one processor may acquire an image including a subject through the camera, acquire distance data on the distance between the subject and the electronic device through the distance sensor, determine the occurrence of defocus based on the distance data, and display on the display a message guiding the user to keep the distance between the subject and the electronic device at or above the minimum focal length.
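For illustration only, the following Python sketch outlines one way the operation method summarized above might be orchestrated. Every function name and the 500 mm minimum focal length are assumptions introduced here, not part of the disclosure.

```python
MIN_FOCAL_DISTANCE_MM = 500  # assumed minimum focal length; not a value from the disclosure


def provide_capture_guide(capture_image, read_distance_mm, detect_roi, crop, enlarge, display):
    """One pass of the operation method. All six parameters are caller-supplied
    callables standing in for the camera, the distance sensor, the analysis steps,
    and the display; none of their names come from the patent."""
    image = capture_image()                     # acquire an image including the subject
    distance_mm = read_distance_mm()            # acquire distance data

    if distance_mm >= MIN_FOCAL_DISTANCE_MM:    # subject is far enough: no defocus expected
        display(image, message=None)
        return

    roi = detect_roi(image)                     # sense the region of interest
    if roi is None:
        display(image, message="Subject is too close; move the device farther away.")
        return

    region = crop(image, roi)                   # crop a region including at least the ROI
    display(enlarge(region),                    # display it enlarged, with a guide message
            message="Subject is too close; move the device farther away.")
```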
According to various embodiments disclosed herein, a user may recognize blur contained within an image as defocus-induced blur resulting from photographing a subject located within a minimum focal length. In addition, the electronic device according to the present disclosure can propose an optimal photographing condition to the user by analyzing the characteristics of the subject, and the user can change the photographing condition according to the condition proposed by the electronic device to acquire an image in which defocus does not occur.
In addition, the electronic device according to the present disclosure uses not only low-level features but also high-level features, such as object detection and texture check, when analyzing an image, and can therefore propose an optimal photographing condition based on the characteristic features of the subject.
Effects obtainable from the present disclosure are not limited to the above-mentioned effects, and other effects not mentioned herein will be clearly understood by those of ordinary skill in the art to which the present disclosure belongs from the following descriptions.
Drawings
Fig. 1 is a diagram illustrating an electronic device according to an embodiment.
Fig. 2 is a block diagram of an electronic device according to an embodiment.
Fig. 3 is a flowchart illustrating a method in which an electronic device crops and enlarges a region of interest and displays it on a display while providing an image capturing guide, according to an embodiment.
Fig. 4 is a flowchart illustrating a method of providing an image capturing guide upon receiving a user input for a first visual affordance, according to an embodiment.
FIG. 5 is a flowchart illustrating acts in receiving user input for a second visual affordance, according to an embodiment.
FIG. 6 is a flow diagram illustrating a method of analyzing an image and distance data according to an embodiment.
Fig. 7 is a diagram illustrating an example of providing an image capturing guide according to an embodiment.
Fig. 8 is a diagram illustrating an example of a region of interest cropped and enlarged within an image as a distance between a subject and an electronic device becomes longer, according to an embodiment.
Fig. 9 is an example illustrating a UI that guides a region of interest to be located at the center of an image according to an embodiment.
Fig. 10 is a flowchart illustrating a method of providing an image capturing guide by a message indicating defocus according to an embodiment.
Fig. 11 is a diagram illustrating an example of a message indicating defocus according to an embodiment.
Fig. 12 is a block diagram of an electronic device within a network environment, according to various embodiments.
Fig. 13 is a block diagram illustrating a camera module according to various embodiments.
Detailed Description
Fig. 1 is a diagram illustrating an electronic device according to an embodiment.
Referring to fig. 1, a display 110 may be disposed on the front of the electronic device 100 according to an embodiment. In one embodiment, the display 110 may occupy a substantial portion of the front of the electronic device 100. The display 110 and a bezel 120 area surrounding at least a portion of the edges of the display 110 may be disposed on the front of the electronic device 100. In the example of fig. 1, the display 110 may include a flat area 111 and a curved area 112 extending from the flat area 111 toward a side of the electronic device 100. The curved area 112 is shown on only one side (e.g., the left side) in fig. 1, but it is to be understood that a curved area may be similarly formed on the opposite side. In addition, the electronic device 100 illustrated in fig. 1 is an example, and various embodiments are possible. For example, the display 110 of the electronic device 100 may include only the flat area 111 without the curved area 112, or may include the curved area 112 at only one edge rather than at both sides. In addition, in one embodiment, the curved area may also extend toward the rear of the electronic device 100, so that the electronic device 100 may further include an additional flat area.
In an embodiment, a fingerprint sensor 141 for identifying a fingerprint of a user may be included in a first region 140 of the display 110. The fingerprint sensor 141 may be disposed in a layer beneath the display 110 so as not to be seen by the user, or so as to be difficult to see. In addition to the fingerprint sensor 141, a sensor for user/biometric authentication may be further disposed in a partial area of the display 110. In other embodiments, a sensor for user/biometric authentication may be disposed in one area of the bezel 120. For example, an IR (infrared) sensor for iris authentication may be exposed through one region of the display 110 or through one region of the bezel 120.
In an embodiment, the sensor 143 may be included in at least one region of the bezel 120 or in at least one region of the display 110 of the electronic device 100. The sensor 143 may be disposed adjacent to the camera module (e.g., the front camera 131, the rear camera 132) or may be formed as one module with the camera module.
In an embodiment, a front camera 131 may be arranged in front of the electronic device 100. In the embodiment of fig. 1, the front camera 131 is shown exposed through one area of the display 110, however in other embodiments, the front camera 131 may be exposed through the bezel 120.
In one embodiment, at least one of a sensor module, a camera module (e.g., the front camera 131, the rear camera 132), and a light-emitting element (e.g., an LED) may be disposed on the rear surface of the screen display area (e.g., the flat area 111, the curved area 112) of the display 110.
In an embodiment, a camera module may be disposed behind at least one of the front, side, and/or rear surfaces of the electronic device 100 so as to face the front, side, and/or rear. For example, the front camera 131 may not be visually exposed through the screen display area (e.g., the flat area 111, the curved area 112) and may be implemented as a hidden under-display camera (UDC). In an embodiment, the electronic device 100 may include one or more front cameras 131. For example, the electronic device 100 may include two front cameras, such as a first front camera and a second front camera. In an embodiment, the first front camera and the second front camera may be cameras of the same type having the same design specifications (e.g., pixels), or they may be implemented as cameras with different design specifications. The electronic device 100 may support dual-camera-related functions (e.g., 3D photographing, auto focus, etc.) through the two front cameras.
In an embodiment, a rear camera 132 may be disposed on the rear of the electronic device 100. The rear camera 132 may be exposed through the camera area 130 of the rear cover 160. In an embodiment, the electronic device 100 may include a plurality of rear cameras arranged in the camera area 130. For example, the electronic device 100 may include two or more rear cameras. For example, the electronic device 100 may include a first rear camera, a second rear camera, and a third rear camera. The first rear camera, the second rear camera, and the third rear camera may have design specifications different from one another. For example, the first rear camera, the second rear camera, and/or the third rear camera may differ from one another in field of view (FOV), pixel count, aperture, support for optical zoom/digital zoom, support for image stabilization techniques, and the kind and/or arrangement of the lens groups included in each camera. For example, the first rear camera may be a general camera, the second rear camera may be a camera for wide-angle shooting (e.g., a wide-angle camera), and the third rear camera may be a camera for telephoto shooting. In embodiments herein, the description of the function or characteristic of the front camera may be applied to the rear camera, and vice versa.
In an embodiment, a distance sensor 145 for sensing the distance between the subject and the electronic device 100, a sensor for object detection, and/or various hardware such as a flash for assisting photographing may be further disposed in the camera area 130.
In an embodiment, the distance sensor 145 may be disposed adjacent to the camera module (e.g., the front camera 131, the rear camera 132) or may be formed as one module with the camera module. For example, the distance sensor 145 may operate as at least part of an IR (infrared) camera (e.g., a TOF (time of flight) camera or a structured light camera) or as at least part of a sensor module. For example, the TOF camera may operate as at least part of a sensor module for sensing the distance to the subject.
In one embodiment, at least one physical key may be disposed on a side portion of the electronic device 100. For example, a first function key 151 for turning the display 110 on/off or turning the power of the electronic device 100 on/off may be disposed at the right edge with reference to the front of the electronic device 100. In one embodiment, a second function key 152 for controlling the volume of the electronic device 100 or the brightness of the screen, etc. may be disposed at the left edge with reference to the front of the electronic device 100. In addition, buttons or keys may be further disposed on the front or rear of the electronic device 100. For example, a physical button or a touch button to which a specific function is mapped may be disposed in a lower region of the front bezel 120.
The electronic device 100 illustrated in fig. 1 is an example and does not limit the form of the devices to which the technical ideas disclosed herein are applied. The technical ideas disclosed herein can be applied to a variety of user devices. For example, the technical ideas disclosed herein may also be applied to a foldable electronic device that employs a flexible display 110 and a hinge structure so as to be foldable in a lateral or vertical direction, or to a tablet computer or a notebook computer. For example, the illustrated electronic device 100 has a bar-type or plate-type appearance, but various embodiments herein are not limited thereto. For example, the illustrated electronic device may be part of a rollable electronic device. A rollable electronic device may be understood as an electronic device whose display 110 can be bent and deformed so that at least a portion of it is rolled up or housed inside the electronic device 100. A rollable electronic device can be used with an extended screen display area (e.g., the flat area 111 and the curved area 112) by unrolling the display 110 or exposing a wider area of the display 110 to the outside, according to the user's needs. Such a display 110 may also be referred to as a slide-out display or an extended display.
For convenience of description, various embodiments will be described below with reference to the electronic device 100 illustrated in fig. 1.
Fig. 2 is a block diagram of an electronic device according to an embodiment.
Referring to fig. 2, the electronic device 100 may include a camera 210, a distance sensor 145, a processor 220, and a display 110.
According to an embodiment, the camera 210 may include the rear camera 132 or the front camera 131 illustrated in fig. 1.
According to an embodiment, the camera 210 can focus on a subject located farther away than the minimum focal length, so the processor 220 may acquire an image in which the subject appears sharp when the subject is located farther away than the minimum focal length. The camera 210 cannot focus on a subject located closer than the minimum focal length, so defocus may occur. For example, when the minimum focal length of the camera 210 is 50 cm, the camera 210 can focus on a subject 70 cm away from the image sensor but cannot focus on a subject 30 cm away from the image sensor.
According to an embodiment, the distance sensor 145 may sense a subject in a non-contact manner, for example by using ultrasonic waves, and determine the distance to the subject. For example, the distance sensor 145 may measure the distance between the subject and the electronic device 100. The processor 220 may use the distance data acquired by the distance sensor 145 to determine whether the subject is located farther than or closer than the minimum focal length of the camera 210.
According to an embodiment, the processor 220 may be understood as including at least one processor. For example, the processor 220 may include at least one of an AP (application processor), an ISP (image signal processor), and a CP (communication processor). In this sense, the processor 220 may refer to one processor or to a plurality of processors.
In one embodiment, the processor 220 may operate/control various functions supported by the electronic device 100. For example, the processor 220 may run applications and control various hardware by executing code written in a programming language and stored in the memory. For example, the processor 220 may run an application stored in the memory that supports the photographing function. In addition, the processor 220 may set and support an appropriate photographing mode so that the camera 210 operates and performs the action intended by the user.
According to an embodiment, the processor 220 may analyze the image acquired by the camera 210 and the distance data acquired by the distance sensor 145. When analyzing the image, the processor 220 may use low-level features such as analysis of frequency components within the image. In addition, the processor 220 may analyze the image using high-level features such as object detection, texture check, and high-level feature extraction. Details are described later with reference to fig. 6.
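As a rough illustration of a low-level frequency feature of the kind mentioned above, the following sketch (assuming OpenCV and NumPy are available) measures how much spectral energy an image retains at high frequencies; a defocused frame keeps very little. The cutoff value is an arbitrary illustrative assumption.

```python
import cv2
import numpy as np


def high_frequency_ratio(image_bgr: np.ndarray, cutoff: float = 0.25) -> float:
    """Low-level feature: fraction of spectral energy above a normalized cutoff
    frequency. A low ratio hints at blur; the 0.25 cutoff is an assumption."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized radial frequency of every spectral bin (0 at the center, ~1 at the edge).
    radius = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())
```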
In an embodiment, the display 110 may display at least one of an execution screen of an application run by the processor 220, an image acquired by the camera 210, an image cropped and enlarged by the processor 220, a message guiding an action of the user, or a visual affordance. In addition, the processor 220 may display the image data acquired by the camera 210 on the display 110 in real time.
In one embodiment, the display 110 may be integrated with a touch pad. The display 110 may support a touch function, sense user input such as a touch with a finger, and transmit it to the processor 220. The display 110 may be connected with a Display Driver Integrated Circuit (DDIC) for driving the display 110, and the touch pad may be connected with a touch IC (integrated circuit) for sensing touch coordinates and processing touch-related algorithms. In one embodiment, the display driving circuit and the touch IC may be integrally formed; in other embodiments, they may be formed separately. The display driving circuit and/or the touch IC may be electrically connected with the processor 220.
Fig. 3 is a flowchart illustrating a method of the electronic device 100 cropping and enlarging a region of interest and displaying it on the display 110 while providing an image capturing guide according to an embodiment. The method illustrated in fig. 3 may be executed by the electronic device 100 or the processor 220 of the electronic device 100.
According to an embodiment, in act 310, the processor 220 may acquire an image including a subject via the camera 210 and distance data for a distance between the subject and the electronic device 100 via the distance sensor 145.
According to an embodiment, in act 320, processor 220 may determine that defocus is occurring based on the distance data.
According to an embodiment, defocus may occur when the distance between the subject and the electronic device 100 is within the minimum focal length. In this case, the processor 220 may analyze the distance data acquired through the distance sensor 145 to determine that defocus has occurred. In addition, the processor 220 may analyze the amount of blur contained within the image and use it as a supplementary cue when determining whether defocus occurs. The processor 220 may use the image analysis result to correct errors that may arise in the analysis of the distance data. Based on the distance data, the processor 220 may determine that the blur contained within the image is caused by defocus due to shooting at less than the minimum focal length.
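A minimal sketch of such a determination, assuming OpenCV is available: the distance reading is the primary cue, and a Laplacian-variance blur measure is the supplementary one. Both thresholds are illustrative assumptions, not values taken from the patent.

```python
import cv2
import numpy as np

MIN_FOCAL_DISTANCE_MM = 500.0      # assumed minimum focal length of the camera
BLUR_VARIANCE_THRESHOLD = 100.0    # assumed sharpness threshold for the supplementary check


def is_defocused(image_bgr: np.ndarray, distance_mm: float) -> bool:
    """Return True when defocus is assumed to have occurred.

    The distance data is the primary cue; the variance of the Laplacian serves
    only as a supplementary blur measure to correct possible errors in the
    distance reading, as the description above suggests."""
    within_min_focal = distance_mm < MIN_FOCAL_DISTANCE_MM
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    looks_blurry = cv2.Laplacian(gray, cv2.CV_64F).var() < BLUR_VARIANCE_THRESHOLD
    return within_min_focal and looks_blurry
```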
According to an embodiment, in act 330, the processor 220 may sense a region of interest within the image that includes at least a portion of the subject.
According to an embodiment, the processor 220 may sense the region of interest through object detection. The processor 220 may identify the region occupied by the subject within the entire image and designate it as the region of interest. The region of interest may include the entire subject or only a part of the subject. Details are described later with reference to fig. 6.
According to an embodiment, the processor 220 may display a line around the region of interest on the display 110 when the region of interest is sensed. For example, the processor 220 may display a box surrounding the region of interest on the display 110.
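The following sketch, assuming OpenCV 4.x, stands in for the detection and box drawing described above. The contour-based detector is only a simple placeholder for whatever object detector the device actually uses; the box color and thickness are likewise assumptions.

```python
import cv2
import numpy as np


def find_region_of_interest(image_bgr: np.ndarray):
    """Stand-in for the object detection step: use the bounding box of the
    largest edge contour as the region of interest (x, y, width, height)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))


def draw_roi_box(image_bgr: np.ndarray, roi) -> np.ndarray:
    """Draw a clearly visible box around the region of interest (cf. line 721 in fig. 7)."""
    x, y, w, h = roi
    out = image_bgr.copy()
    cv2.rectangle(out, (x, y), (x + w, y + h), color=(0, 255, 255), thickness=3)
    return out
```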
According to an embodiment, in act 340, processor 220 may crop an area within the image that includes at least the region of interest. The processor 220 may crop only the region of interest or crop a wider region including the region of interest.
According to an embodiment, in act 350, processor 220 may display the cropped area on display 110 in an enlarged scale along with the message indicating defocus.
According to an embodiment, the message may indicate that defocus has occurred because the subject is closer than the minimum focal length, or may instruct the user to move the electronic device 100 in a direction away from the subject. For example, the message may guide the user to move the electronic device 100 so that it is 30 cm or more away from the subject.
According to an embodiment, the processor 220 may notify the user that defocus has occurred in the region of interest by displaying the cropped region on the display 110 in an enlarged manner. In addition, the processor 220 may display a message indicating defocus on the display 110, thereby guiding the user to move the electronic device 100 away from the subject to remove the defocus. From the message indicating defocus displayed on the electronic device 100 and the out-of-focus image of the subject, the user can recognize that defocus has occurred and that the electronic device 100 needs to be moved away from the subject. When the user moves the electronic device 100 away from the subject and the distance between the subject and the electronic device 100 becomes the minimum focal length or more, the processor 220 may acquire a sharp image focused on the subject.
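One possible way to crop at least the region of interest and enlarge it to the display size (acts 340 and 350) is sketched below; the 20% margin and the use of OpenCV's linear interpolation are assumptions, not disclosed choices.

```python
import cv2
import numpy as np


def crop_and_enlarge(image_bgr: np.ndarray, roi, display_size, margin: float = 0.2) -> np.ndarray:
    """Crop a region that includes at least the ROI (with an assumed 20% margin on
    each side) and enlarge it to the display resolution given as (width, height)."""
    img_h, img_w = image_bgr.shape[:2]
    x, y, w, h = roi
    dx, dy = int(w * margin), int(h * margin)
    x0, y0 = max(0, x - dx), max(0, y - dy)
    x1, y1 = min(img_w, x + w + dx), min(img_h, y + h + dy)
    cropped = image_bgr[y0:y1, x0:x1]
    return cv2.resize(cropped, display_size, interpolation=cv2.INTER_LINEAR)
```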
According to an embodiment, the processor 220 may display the first visual affordance on the display 110 when the region of interest is sensed in act 330. When there is a user input for the first visual affordance, the processor 220 may crop a region within the image that includes at least the region of interest and display the cropped region on the display 110 in an enlarged manner together with the message indicating defocus. Details are described later with reference to fig. 4.
According to an embodiment, when the region of interest is sensed by the processor 220 in act 330, the region including at least the region of interest may be automatically cropped within the image, even without additional user input, and the cropped region may be displayed enlarged on the display 110 along with a message indicating defocus.
FIG. 4 is a flow diagram illustrating a method of providing an image capture guide upon receiving user input for a first visual affordance, according to an embodiment.
According to an embodiment, in act 410, the processor 220 may sense a region of interest within the image that includes at least a portion of the subject. Act 410 may correspond to act 330 of fig. 3.
According to an embodiment, in act 420, the processor 220 may display the first visual affordance on the display. The first visual affordance may be in the form of a soft button displayed on the display 110.
According to an embodiment, the electronic device 100 can guide the user to perform shooting at a magnification suitable for photographing the subject. For example, when defocus occurs, the processor 220 may display on the display a first visual affordance for adjusting the magnification of the camera lens to a particular magnification.
According to an embodiment, in act 430, the processor 220 may determine that a user input for the first visual affordance has been received. For example, the processor 220 may receive a touch input made by touching the touch screen with a finger or an operating tool, or a voice instruction.
According to an embodiment, in act 440, processor 220, upon receiving user input for the first visual affordance, may crop an area within the image that includes at least the region of interest. Act 440 may correspond to act 340 of fig. 3.
According to an embodiment, in act 450, processor 220 may display the cropped area on display 110 in an enlarged scale along with the message indicating defocus. Act 450 may correspond to act 350 of fig. 3.
According to an embodiment, by displaying the first visual affordance on the display 110, the processor 220 may display the cropped and enlarged image on the display 110 according to the user's selection. For example, the user may touch the first visual affordance to view the cropped and enlarged image displayed on the display 110 and thereby clearly recognize that defocus has occurred. As another example, the user may be provided with an uncropped and unenlarged image by not touching the first visual affordance.
FIG. 5 is a flowchart illustrating acts in receiving user input for a second visual affordance, according to an embodiment.
According to an embodiment, in act 510, the processor 220 may display the cropped region on the display 110 in an enlarged manner together with the message indicating defocus, while also displaying the second visual affordance on the display 110. The second visual affordance may be in the form of a soft button displayed on the display 110.
According to an embodiment, the processor 220 may enlarge and display the cropped region together with the message indicating defocus either upon receiving a user input for the first visual affordance displayed on the display 110 after the region of interest is sensed, or automatically as soon as the region of interest is sensed. At this time, the processor 220 may display the second visual affordance on the display 110 together with the message and the cropped and enlarged image.
According to an embodiment, in act 520, processor 220 may determine that user input for the second visual affordance is received. For example, the processor 220 may receive a touch input by touching the touch screen with a finger or an operating tool (e.g., an electronic pen).
According to an embodiment, in act 530, the processor 220 may display an uncropped or unenlarged image on the display 110. Upon receiving the user input for the second visual affordance, the processor 220 may stop the act of cropping a region that includes at least the region of interest and displaying the cropped region on the display 110 in an enlarged manner. The processor 220 may display the image acquired by the camera 210 on the display 110 without cropping or enlarging it.
According to an embodiment, upon receiving the user input for the second visual affordance, the processor 220 may stop displaying the message indicating defocus on the display 110. For example, when the electronic device 100 receives the user input for the second visual affordance while the subject is located farther than the minimum focal length, the processor 220 may not display the message indicating defocus on the display 110. According to other embodiments, the processor 220 may not stop displaying the message indicating defocus upon receiving the user input for the second visual affordance. For example, even if a user input for the second visual affordance is received, the processor 220 may continue to display the message indicating defocus on the display 110 as long as the distance between the subject and the electronic device 100 is within the minimum focal length and the defocused state is thus maintained.
FIG. 6 is a flow diagram illustrating a method of analyzing an image and distance data according to an embodiment.
According to an embodiment, in act 610, the processor 220 may acquire an image via the camera 210 and distance data for a distance between the subject and the electronic device 100 via the distance sensor 145. Act 610 may correspond to act 310 of fig. 3.
According to an embodiment, in act 620, the processor 220 may analyze the image acquired by the camera 210 to determine whether the blur contained within the image is motion blur caused by movement of the subject.
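One hedged way to approximate the motion-blur check of act 620 is to test whether the gradient energy of the image is strongly directional, as in the sketch below. This heuristic and its threshold are assumptions for illustration, not the method disclosed in the patent.

```python
import cv2
import numpy as np


def blur_looks_directional(image_bgr: np.ndarray, ratio_threshold: float = 2.0) -> bool:
    """Heuristic stand-in for the motion-blur check: motion blur smears edges along
    one direction, making the gradient energy strongly anisotropic, whereas defocus
    blur attenuates edges roughly equally in all directions."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    energy_x, energy_y = np.sum(gx ** 2), np.sum(gy ** 2)
    ratio = max(energy_x, energy_y) / (min(energy_x, energy_y) + 1e-9)
    return ratio > ratio_threshold  # True: blur has a dominant direction (motion blur)
```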
According to an embodiment, in act 630, when the processor 220 determines that the blur in the image is not motion blur, it may determine whether defocus has occurred based on the distance data and thus whether the blur in the image is defocus-induced blur. In an embodiment, the processor 220 may also analyze the amount of blur contained in the image and use the analysis result when determining whether defocus has occurred.
According to an embodiment, when the distance between the object and the electronic device 100 is within the minimum focal length, the processor 220 may determine that defocus occurs based on the distance data acquired by the distance sensor 145, and may determine that blur in the image is blur caused by defocus.
According to an embodiment, in act 640, when the processor 220 determines that the blur contained within the image is defocus-induced blur, a region of interest including at least a portion of the subject may be sensed within the image through object detection.
According to an embodiment, the object detection action of the processor 220 may include: an action of analyzing the image acquired by the camera 210 and recognizing that an object exists in a specific portion of the image (object recognition); an action of determining what the object is (object classification); and an action of finding the exact position of the object within the image (object localization).
According to an embodiment, in act 650, the processor 220 may perform a texture check and high-level feature extraction on the image.
According to an embodiment, the processor 220 may determine the texture of the subject through the texture check and thereby determine how sharply the subject needs to be captured. For example, when the subject includes text or a picture that should be captured clearly, the electronic device 100 may determine that the distance between the subject and the electronic device 100 needs to be adjusted if blur is present in the image of the subject. For example, when the subject is a wall or a textureless surface, the electronic device 100 may determine that the distance between the subject and the electronic device 100 does not need to be adjusted.
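A simple edge-density heuristic can play the role of the texture check described above: text and fine detail produce many edges, while a bare wall produces almost none. The sketch and its threshold are assumptions, not the disclosed method.

```python
import cv2
import numpy as np


def needs_sharp_capture(roi_bgr: np.ndarray, edge_density_threshold: float = 0.02) -> bool:
    """Rough texture check: a subject containing text or fine detail has a high edge
    density, so blur matters; a textureless wall does not. The threshold is an
    illustrative assumption, not a disclosed value."""
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    edge_density = np.count_nonzero(edges) / edges.size
    return edge_density > edge_density_threshold
```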
According to an embodiment, the processor 220 may identify the characteristics of the blur contained within the image through high-level feature extraction. For example, the processor 220 may determine whether the blur contained in the image was intended by the user, or may determine that the blur included in the image is defocus-induced blur.
According to an embodiment, acts 620 to 650 may correspond to acts 320 and 330 of fig. 3.
Fig. 7 is a diagram illustrating an example of providing an image capturing guide according to an embodiment.
Reference numerals 710 to 730 show examples of providing the image capturing guide illustrated in fig. 3 to 6 according to an embodiment.
According to an embodiment, reference numeral 710 shows a picture displayed on the display 110 when defocus occurs.
According to an embodiment, the processor 220 may analyze the image acquired by the camera 210 to determine whether the blur is motion blur caused by movement of the subject 711. As a result of analyzing the image, the processor 220 confirms that the blur around the subject has no directionality and therefore determines that the blur of the image is not motion blur. If the blur of the subject 711 and its surroundings had a certain directionality, the processor 220 would instead determine the blur of the image to be blur caused by movement of the subject 711.
According to an embodiment, when the processor 220 determines that the blur contained in the image is not the motion blur, it may determine whether the blur contained in the image is a defocus-induced blur based on the distance data. Since the distance between the object 711 and the electronic device 100 is within the minimum focal length, the processor 220 may determine that defocus occurs, and may determine that blur in the image is blur caused by defocus.
According to an embodiment, reference numeral 720 shows a screen displayed on the display 110 when a region of interest including at least a part of the subject 711 is sensed within an image. The region of interest may include a portion of the subject 711.
According to an embodiment, when the processor 220 determines that the blur included in the image is a defocus-induced blur, the region of interest may be sensed by object detection in the image. The processor 220 may sense the presence of the subject 711 in a specific portion within an image, determine that the subject 711 is a business card, and sense an area containing at least a part of the subject 711 as a region of interest.
According to an embodiment, the processor 220 may determine the texture of the subject 711 through the texture check and, because the subject 711 includes text, determine that it needs to be captured sharply. Through high-level feature extraction, the processor 220 may determine that the blur contained in the image is caused by defocus rather than intended by the user, and therefore that the distance between the subject 711 and the electronic device 100 needs to be adjusted.
According to an embodiment, when the processor 220 senses the region of interest, a line 721 surrounding the region of interest may be displayed on the display 110. The processor 220 may render the line 721 displayed on the display 110 in a color and thickness that are easily visible within the image.
According to an embodiment, the processor 220 may display the first visual affordance 725 on the display 110 upon sensing the region of interest. The first visual affordance 725 may include text suggesting that the region of interest be enlarged.
According to one embodiment, reference numeral 730 illustrates a screen displayed on the display 110 upon receiving user input to the first visual affordance 725.
According to an embodiment, upon receiving a user input for the first visual affordance 725, the processor 220 may crop the region of interest and display the cropped region on the display 110 in an enlarged manner. The processor 220 may also display a message 733 indicating defocus on the display 110. The message 733 may instruct the user to move the electronic device 100 in a direction away from the subject.
According to an embodiment, the processor 220 may automatically display a magnified image, such as reference numeral 730, after sensing the region of interest within the image without displaying the first visual affordance 725 on the display 110. At this time, the electronic device 100 may provide an image of the cropped and enlarged region of interest through the display 110 even without input from the user.
According to an embodiment, the processor 220 may display the second visual affordance 735 on the display 110 together with the enlarged image of the cropped region and the message 733 indicating defocus. Upon receiving a user input for the second visual affordance 735, the processor 220 may stop the act of cropping the region of interest and enlarging the cropped region. The processor 220 may then display an uncropped or unenlarged image on the display 110.
According to an embodiment, when a user input for the second visual affordance 735 is received after the electronic device 100 has been moved farther than the minimum focal length from the subject, the processor 220 may stop displaying the message 733 indicating defocus while displaying an uncropped or unenlarged image on the display 110. According to other embodiments, when the user input for the second visual affordance 735 is received while the subject is still within the minimum focal length and the defocused state is therefore maintained, the electronic device 100 may continue to display the message 733 indicating defocus on the display 110. Herein, a visual affordance may be replaced by, or referred to as, a visual object, a UI item, an icon, a menu, an indicator, or the like.
Fig. 8 is a diagram illustrating an example of a region of interest cropped and enlarged within an image as a distance between a subject and an electronic device becomes longer, according to an embodiment.
According to an embodiment, fig. 8 illustrates uncropped or unenlarged images 810, 820, 830 and cropped and enlarged images 815, 825, 835 of the region of interest as the electronic device 100 is moved away from the subject.
According to an embodiment, reference numeral 810 shows an image in which defocus occurs because the electronic device 100 is closer to the subject than the minimum focal length.
According to an embodiment, reference numeral 815 shows the result of the processor 220 cropping the region of interest 811 sensed within the image of reference numeral 810 and displaying the cropped region on the display 110 in an enlarged manner. Since the distance between the subject and the electronic device 100 is less than the minimum focal length, defocus occurs on the subject 812, so that strong blur appears within the image.
According to an embodiment, reference numeral 820 shows an image of a case where, although the electronic device 100 has been moved away from the subject, the distance between the subject and the electronic device 100 is still less than the minimum focal length.
According to an embodiment, reference numeral 825 shows the result of the processor 220 cropping the region of interest 821 sensed within the image of reference numeral 820 and displaying the cropped region on the display 110 in an enlarged manner. The distance between the subject and the electronic device 100 is greater than in the case of reference numeral 810, so less defocus occurs on the subject 822 than in the case of reference numeral 815 and only weak blur appears within the image.
According to an embodiment, reference numeral 830 shows an image of a case where the distance between the subject and the electronic apparatus 100 is the minimum focal length or more.
According to an embodiment, reference numeral 835 shows the result of the processor 220 cropping the region of interest 831 sensed within the image of reference numeral 830 and displaying the cropped region on the display 110 in an enlarged manner. Since the electronic device 100 is located farther from the subject than the minimum focal length, a sharp image focused on the subject 832 can be obtained.
According to an embodiment, the size of the subject 812, 822, 832 displayed on the display 110 may be kept constant while the user moves the electronic device 100 away from the subject. As the electronic device 100 moves farther from the subject, the regions of interest 811, 821, 831 in the images 810, 820, 830 acquired by the camera 210 become smaller; the electronic device 100 senses and crops the regions of interest 811, 821, 831 from the images 810, 820, 830 and displays the cropped regions on the display 110 in an enlarged manner. As a result, the size of the images 815, 825, 835 of the region of interest displayed on the display 110 may be kept constant.
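Why the displayed size can stay constant can be seen with a toy pinhole-camera calculation: the subject's width in the full image scales roughly as 1/distance, while the enlargement factor needed to fill the display scales as distance, so the product is unchanged. All numbers below are illustrative assumptions.

```python
FOCAL_LENGTH_PX = 2000     # assumed focal length expressed in pixels
SUBJECT_WIDTH_MM = 90      # assumed physical width of the subject (a business card)
DISPLAY_WIDTH_PX = 1080    # assumed display width

for distance_mm in (200, 350, 500):
    roi_width_px = FOCAL_LENGTH_PX * SUBJECT_WIDTH_MM / distance_mm
    zoom = DISPLAY_WIDTH_PX / roi_width_px
    print(f"{distance_mm} mm: ROI width {roi_width_px:.0f} px, "
          f"zoom x{zoom:.2f}, displayed width {roi_width_px * zoom:.0f} px")
```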
Fig. 9 is an example illustrating a UI that guides a region of interest to be located at the center of an image according to an embodiment.
According to an embodiment, the processor 220 may sense a region of interest including at least a portion of a subject within an image. In this case, the region of interest may not be located at the center of the image but at an edge. The processor 220 may display a UI (user interface) 910, such as an arrow, on the display 110. Through the UI 910 displayed on the display 110, the user may adjust the position of the electronic device 100 so that the region of interest is located at the center of the image.
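A small helper of the following kind could decide which way an arrow UI such as 910 should point, based on how far the region of interest sits from the image center. Its name, parameters, and 10% tolerance are assumptions for illustration only.

```python
def guide_arrow_hint(roi, image_size, tolerance: float = 0.1):
    """Report on which side of the frame the region of interest currently sits, so a
    UI such as the arrow 910 can be rendered accordingly. `roi` is (x, y, w, h) and
    `image_size` is (width, height)."""
    x, y, w, h = roi
    img_w, img_h = image_size
    dx = (x + w / 2) / img_w - 0.5   # > 0: ROI center lies right of the image center
    dy = (y + h / 2) / img_h - 0.5   # > 0: ROI center lies below the image center
    hints = []
    if abs(dx) > tolerance:
        hints.append("right" if dx > 0 else "left")
    if abs(dy) > tolerance:
        hints.append("bottom" if dy > 0 else "top")
    return hints or ["centered"]
```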
Fig. 10 is a flowchart illustrating a method of providing an image capturing guide by a message indicating defocus according to an embodiment. The method illustrated in fig. 10 may be executed by the electronic device 100 or the processor 220 of the electronic device 100.
According to an embodiment, in act 1010, the processor 220 may acquire an image via the camera 210 and distance data for a distance between the subject and the electronic device 100 via the distance sensor 145. Act 1010 may correspond to act 310 of fig. 3.
According to an embodiment, in act 1020, processor 220 may determine that defocus occurs based on the distance data.
According to an embodiment, defocus may occur when the distance between the subject and the electronic device 100 is within the minimum focal length. In this case, the processor 220 may analyze the distance data acquired through the distance sensor 145 to determine that defocus has occurred. In addition, the processor 220 may analyze the amount of blur contained within the image and use it as a supplementary cue when determining whether defocus occurs. The processor 220 may use the image analysis result to correct errors that may arise in the analysis of the distance data. Based on the distance data, the processor 220 may determine that the blur contained within the image is caused by defocus due to shooting at less than the minimum focal length.
According to an embodiment, in act 1030, the processor 220 may display on the display 110 a message guiding the user to keep the distance between the subject and the electronic device 100 at or above the minimum focal length. Through the message displayed on the display 110, the user can recognize that the defocus can be removed by moving the electronic device 100 away from the subject.
According to an embodiment, while displaying the message on the display 110, the processor 220 may additionally use at least one output device such as a speaker or a light-emitting device (e.g., an LED lamp) to notify the user that defocus has occurred. According to other embodiments, in order to inform the user that defocus has occurred, the processor 220 may use at least one output device such as a speaker or a light-emitting device instead of displaying the message on the display 110.
According to an embodiment, in act 1040, the processor 220 may determine that the distance between the subject and the electronic device 100 has become greater than or equal to the minimum focal length. Having recognized through the message displayed on the display 110 that defocus has occurred, the user may move the electronic device 100 away from the subject. The electronic device 100 can then analyze the distance data and determine that the distance between the subject and the electronic device 100 has become the minimum focal length or more and that defocus no longer occurs.
According to an embodiment, in act 1050, the processor 220 may remove the message from the display 110 in response to the distance between the subject and the electronic device 100 being the minimum focal length or more. When the user positions the electronic device 100 farther from the subject than the minimum focal length, defocus no longer occurs, so the processor 220 may not display the message on the display 110.
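The display-and-remove behavior of acts 1030 to 1050 can be sketched as a simple preview loop; the three callables, the frame rate, and the 500 mm threshold are assumptions standing in for the device's actual APIs.

```python
import time

MIN_FOCAL_DISTANCE_MM = 500  # assumed minimum focal length


def run_distance_guide(read_distance_mm, show_message, clear_message, frames: int = 300):
    """Show the guide message while the subject is within the minimum focal length and
    remove it once the distance becomes sufficient (acts 1030-1050). The three
    callables are caller-supplied stand-ins, not APIs from the disclosure."""
    for _ in range(frames):
        if read_distance_mm() < MIN_FOCAL_DISTANCE_MM:
            show_message("Move the device farther from the subject to focus.")
        else:
            clear_message()
        time.sleep(1 / 30)  # assumed 30 fps preview loop
```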
Fig. 11 is a diagram illustrating an example of a message indicating defocus according to an embodiment.
According to an embodiment, the processor 220 may determine that defocus has occurred based on the distance data acquired by the distance sensor 145. The processor 220 may display on the display 110 a message 1110 guiding the distance between the subject and the electronic device 100 to be the minimum focal length or more.
According to an embodiment, the processor 220 may display on the display 110 a message 1110 instructing the user to move the electronic device 100 away from the subject. According to other embodiments, the processor 220 may suggest, through the message, the exact distance by which the electronic device 100 needs to be moved away from the subject.
According to an embodiment, while displaying the message 1110 on the display 110, the processor 220 may additionally use at least one output device such as a speaker or a light-emitting device (e.g., an LED lamp) to notify the user that defocus has occurred.
Fig. 12 is a block diagram illustrating an electronic apparatus 1201 in a network environment 1200 in accordance with various embodiments. Referring to fig. 12, an electronic device 1201 in a network environment 1200 may communicate with an electronic device 1202 via a first network 1298 (e.g., a short-range wireless communication network) or with at least one of an electronic device 1204 or a server 1208 via a second network 1299 (e.g., a long-range wireless communication network). According to an embodiment, electronic device 1201 may communicate with electronic device 1204 via server 1208. According to an embodiment, electronic device 1201 may include processor 1220, memory 1230, input module 1250, sound output module 1255, display module 1260, audio module 1270, sensor module 1276, interface 1277, connection terminal 1278, haptic module 1279, camera module 1280, power management module 1288, battery 1289, communication module 1290, subscriber identification module (SIM) 1296, or antenna module 1297. In some embodiments, at least one of the above-described components (e.g., the connection terminal 1278) may be omitted from electronic device 1201, or one or more other components may be added to electronic device 1201. In some embodiments, some of the above components (e.g., the sensor module 1276, the camera module 1280, or the antenna module 1297) may be implemented as a single integrated component (e.g., the display module 1260).
The processor 1220 may run, for example, software (e.g., the program 1240) to control at least one other component (e.g., a hardware component or a software component) of the electronic device 1201 that is connected to the processor 1220 and may perform various data processing or calculations. According to one embodiment, as at least part of the data processing or computation, processor 1220 may store commands or data received from another component (e.g., sensor module 1276 or communication module 1290) in volatile memory 1232, process commands or data stored in volatile memory 1232, and store resulting data in non-volatile memory 1234. According to embodiments, the processor 1220 may include a main processor 1221 (e.g., a Central Processing Unit (CPU) or an Application Processor (AP)) or an auxiliary processor 1223 (e.g., a Graphics Processing Unit (GPU), a Neural Processing Unit (NPU), an Image Signal Processor (ISP), a sensor hub processor, or a Communication Processor (CP)) that is operatively independent of or in conjunction with the main processor 1221. For example, when the electronic device 1201 includes the main processor 1221 and the auxiliary processor 1223, the auxiliary processor 1223 may be adapted to consume less power than the main processor 1221, or be adapted to be dedicated to a particular function. The auxiliary processor 1223 may be implemented separately from the main processor 1221 or as part of the main processor 1221.
The auxiliary processor 1223 (rather than the main processor 1221) may control at least some of the functions or states related to at least one of the components of the electronic device 1201 (e.g., the display module 1260, the sensor module 1276, or the communication module 1290) while the main processor 1221 is in an inactive (e.g., sleep) state, or the auxiliary processor 1223 may control, together with the main processor 1221, at least some of the functions or states related to at least one of the components of the electronic device 1201 (e.g., the display module 1260, the sensor module 1276, or the communication module 1290) while the main processor 1221 is in an active state (e.g., running an application). According to an embodiment, the auxiliary processor 1223 (e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., a camera module 1280 or a communication module 1290) that is functionally related to the auxiliary processor 1223. According to an embodiment, the auxiliary processor 1223 (e.g., neural processing unit) may include hardware structures dedicated to artificial intelligence model processing. The artificial intelligence model can be generated by machine learning. Such learning may be performed, for example, by the electronic device 1201 in which the artificial intelligence model is executed, or via a separate server (e.g., server 1208). Learning algorithms may include, but are not limited to, supervised learning, unsupervised learning, semi-supervised learning, or reinforcement learning, for example. The artificial intelligence model can include a plurality of artificial neural network layers. The artificial neural network may be, but is not limited to, a Deep Neural Network (DNN), a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), a Restricted Boltzmann Machine (RBM), a Deep Belief Network (DBN), a Bidirectional Recurrent Deep Neural Network (BRDNN), or a deep Q network, or a combination of two or more thereof. Additionally or alternatively, the artificial intelligence model may include software structures in addition to hardware structures.
Memory 1230 may store various data used by at least one component of electronic device 1201, such as processor 1220 or sensor module 1276. The various data may include, for example, software (e.g., program 1240) and input data or output data for commands associated therewith. The memory 1230 can include volatile memory 1232 or nonvolatile memory 1234.
Program 1240 may be stored as software in memory 1230, and program 1240 may include, for example, an Operating System (OS)1242, middleware 1244, or applications 1246.
Input module 1250 may receive commands or data from outside of electronic device 1201 (e.g., a user) that are to be used by other components of electronic device 1201, such as processor 1220. Input module 1250 may include, for example, a microphone, a mouse, a keyboard, keys (e.g., buttons), or a digital pen (e.g., a stylus).
The sound output module 1255 may output a sound signal to the outside of the electronic apparatus 1201. The sound output module 1255 may include, for example, a speaker or a receiver. The speakers may be used for general purposes such as playing multimedia or playing a record. The receiver may be operable to receive an incoming call. Depending on the embodiment, the receiver may be implemented separate from the speaker, or as part of the speaker.
Display module 1260 may visually provide information to an exterior (e.g., user) of electronic device 1201. The display module 1260 may include, for example, a display, a holographic device, or a projector, and control circuitry for controlling a respective one of the display, holographic device, and projector. According to embodiments, the display module 1260 may include a touch sensor adapted to detect a touch or a pressure sensor adapted to measure the intensity of a force caused by a touch.
The audio module 1270 may convert sound into electrical signals and vice versa. According to an embodiment, the audio module 1270 may obtain sound via the input module 1250 or output sound via the sound output module 1255 or an earphone of an external electronic device (e.g., the electronic device 1202) directly (e.g., wired) connected or wirelessly connected with the electronic device 1201.
The sensor module 1276 may detect an operating state (e.g., power or temperature) of the electronic device 1201 or an environmental state (e.g., state of a user) outside the electronic device 1201, and then generate an electrical signal or data value corresponding to the detected state. According to an embodiment, sensor module 1276 may include, for example, a gesture sensor, a gyroscope sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an Infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor.
The interface 1277 may support one or more particular protocols that will be used to connect the electronic device 1201 with an external electronic device (e.g., electronic device 1202) either directly (e.g., wired) or wirelessly. According to an embodiment, interface 1277 may include, for example, a High Definition Multimedia Interface (HDMI), a Universal Serial Bus (USB) interface, a Secure Digital (SD) card interface, or an audio interface.
Connection end 1278 may include a connector via which electronic device 1201 may be physically connected with an external electronic device (e.g., electronic device 1202). According to an embodiment, connection end 1278 may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector).
The haptic module 1279 may convert the electrical signal into a mechanical stimulus (e.g., vibration or motion) or an electrical stimulus that may be recognized by the user via his sense of touch or movement. According to embodiments, the haptic module 1279 may include, for example, a motor, a piezoelectric element, or an electrical stimulator.
The camera module 1280 may capture still images or moving images. According to embodiments, the camera module 1280 may include one or more lenses, an image sensor, an image signal processor, or a flash.
Power management module 1288 may manage power to electronic device 1201. According to an embodiment, the power management module 1288 may be implemented as at least part of a Power Management Integrated Circuit (PMIC), for example.
Battery 1289 may provide power to at least one component of electronic device 1201. According to an embodiment, the battery 1289 may include, for example, a non-rechargeable primary cell, a rechargeable battery, or a fuel cell.
The communication module 1290 may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device 1201 and an external electronic device (e.g., the electronic device 1202, the electronic device 1204, or the server 1208), and performing communication via the established communication channel. The communication module 1290 may include one or more communication processors capable of operating independently of the processor 1220 (e.g., Application Processor (AP)) and supporting direct (e.g., wired) or wireless communication. According to an embodiment, communication module 1290 can include a wireless communication module 1292 (e.g., a cellular communication module, a short-range wireless communication module, or a Global Navigation Satellite System (GNSS) communication module) or a wired communication module 1294 (e.g., a Local Area Network (LAN) communication module or a Power Line Communication (PLC) module). A respective one of the communication modules may communicate with external electronic devices via a first network 1298 (e.g., a short-range communication network such as Bluetooth, wireless fidelity (Wi-Fi) direct, or infrared data association (IrDA)) or a second network 1299 (e.g., a long-range communication network such as a conventional cellular network, a 5G network, a next-generation communication network, the internet, or a computer network (e.g., a LAN or a Wide Area Network (WAN))). These various types of communication modules may be implemented as a single component (e.g., a single chip), or may be implemented as multiple components (e.g., multiple chips) that are separate from one another. The wireless communication module 1292 may identify and authenticate the electronic device 1201 in a communication network, such as the first network 1298 or the second network 1299, using subscriber information (e.g., an International Mobile Subscriber Identity (IMSI)) stored in the subscriber identity module 1296.
Wireless communication module 1292 may support a 5G network, after a 4G network, as well as next-generation communication technologies, such as New Radio (NR) access technology. The NR access technology may support enhanced mobile broadband (eMBB), massive machine type communication (mMTC), or ultra-reliable low latency communication (URLLC). Wireless communication module 1292 may support a high frequency band (e.g., the millimeter wave band) to achieve, for example, a high data transmission rate. Wireless communication module 1292 may support various techniques for securing performance over a high frequency band, such as, for example, beamforming, massive multiple-input multiple-output (massive MIMO), full-dimensional MIMO (FD-MIMO), array antennas, analog beamforming, or large-scale antennas. The wireless communication module 1292 may support various requirements specified in the electronic device 1201, an external electronic device (e.g., the electronic device 1204), or a network system (e.g., the second network 1299). According to an embodiment, wireless communication module 1292 may support a peak data rate for implementing eMBB (e.g., 20 Gbps or greater), loss coverage for implementing mMTC (e.g., 164 dB or less), or U-plane latency for implementing URLLC (e.g., 0.5 ms or less for each of Downlink (DL) and Uplink (UL), or a round trip of 1 ms or less).
Antenna module 1297 may transmit or receive a signal or power to or from the outside of the electronic device 1201 (e.g., an external electronic device). According to an embodiment, antenna module 1297 may include an antenna that includes a radiating element composed of a conductive material or a conductive pattern formed in or on a substrate, such as a Printed Circuit Board (PCB). According to an embodiment, antenna module 1297 may include multiple antennas (e.g., array antennas). In this case, at least one antenna suitable for a communication scheme used in a communication network, such as the first network 1298 or the second network 1299, may be selected from the plurality of antennas by, for example, the communication module 1290 (e.g., the wireless communication module 1292). Signals or power may then be transmitted or received between the communication module 1290 and the external electronic device via the selected at least one antenna. According to an embodiment, additional components other than the radiating element, such as a Radio Frequency Integrated Circuit (RFIC), may be additionally formed as part of the antenna module 1297.
Antenna module 1297 may form a millimeter-wave antenna module according to various embodiments. According to an embodiment, a millimeter wave antenna module may include a printed circuit board, a Radio Frequency Integrated Circuit (RFIC) disposed on or adjacent to a first surface (e.g., a bottom surface) of the printed circuit board and capable of supporting a specified high frequency band (e.g., a millimeter wave band), and a plurality of antennas (e.g., array antennas) disposed on or adjacent to a second surface (e.g., a top surface or a side surface) of the printed circuit board and capable of transmitting or receiving signals of the specified high frequency band.
At least some of the above components may be interconnected and communicate signals (e.g., commands or data) between them via an inter-peripheral communication scheme (e.g., bus, General Purpose Input Output (GPIO), Serial Peripheral Interface (SPI), or Mobile Industry Processor Interface (MIPI)).
According to an embodiment, commands or data may be transmitted or received between the electronic device 1201 and the external electronic device 1204 via the server 1208 connected to the second network 1299. Each of electronic device 1202 or electronic device 1204 may be the same type of device as electronic device 1201, or a different type of device from electronic device 1201. According to an embodiment, all or some of the operations to be performed at the electronic device 1201 may be performed at one or more of the external electronic device 1202, the external electronic device 1204, or the server 1208. For example, if the electronic device 1201 should automatically perform a function or service or should perform a function or service in response to a request from a user or another device, the electronic device 1201 may request the one or more external electronic devices to perform at least part of the function or service instead of or in addition to performing the function or service. The one or more external electronic devices that received the request may perform the requested at least part of the functions or services or perform another function or another service related to the request and transmit the result of the execution to the electronic device 1201. Electronic device 1201 may provide the result as at least a partial reply to the request with or without further processing of the result. To this end, for example, cloud computing technology, distributed computing technology, Mobile Edge Computing (MEC) technology, or client-server computing technology may be used. The electronic device 1201 may provide ultra low delay services using, for example, distributed computing or mobile edge computing. In another embodiment, the external electronic device 1204 may comprise an internet of things (IoT) device. Server 1208 may be an intelligent server using machine learning and/or neural networks. According to an embodiment, the external electronic device 1204 or the server 1208 may be included in the second network 1299. The electronic device 1201 may be applied to a smart service (e.g., smart home, smart city, smart car, or healthcare) based on a 5G communication technology or an IoT related technology.
The electronic device according to various embodiments may be one of various types of electronic devices. The electronic device may comprise, for example, a portable communication device (e.g., a smartphone), a computer device, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance. According to the embodiments of the present disclosure, the electronic devices are not limited to those described above.
It should be understood that the various embodiments of the present disclosure and the terms used therein are not intended to limit the technical features set forth herein to specific embodiments, but include various changes, equivalents, or alternatives to the respective embodiments. For the description of the figures, like reference numerals may be used to refer to like or related elements. It will be understood that a noun in the singular corresponding to a term may include one or more of the things, unless the relevant context clearly dictates otherwise. As used herein, each of the phrases such as "A or B," "at least one of A and B," "at least one of A or B," "A, B, or C," "at least one of A, B, and C," and "at least one of A, B, or C" may include any one of, or all possible combinations of, the items listed together in the respective one of the phrases. As used herein, terms such as "1st" and "2nd" or "first" and "second" may be used simply to distinguish one element from another element and not to limit the elements in other respects (e.g., importance or order). It will be understood that, if an element (e.g., a first element) is referred to, with or without the term "operatively" or "communicatively," as being "coupled with," "coupled to," "connected with," or "connected to" another element (e.g., a second element), the element may be connected to the other element directly (e.g., wiredly), wirelessly, or via a third element.
As used in connection with various embodiments of the present disclosure, the term "module" may include units implemented in hardware, software, or firmware, and may be used interchangeably with other terms (e.g., "logic," "logic block," "portion," or "circuitry"). A module may be a single integrated component adapted to perform one or more functions or a minimal unit or portion of the single integrated component. For example, according to an embodiment, the modules may be implemented in the form of Application Specific Integrated Circuits (ASICs).
The various embodiments set forth herein may be implemented as software (e.g., program 1240) comprising one or more instructions stored in a storage medium (e.g., internal memory 1236 or external memory 1238) that is readable by a machine (e.g., electronic device 1201). For example, a processor (e.g., processor 1220) of the machine (e.g., electronic device 1201) may invoke at least one of the one or more instructions stored in the storage medium and execute it, with or without using one or more other components, under control of the processor. This enables the machine to be operated to perform at least one function according to the invoked at least one instruction. The one or more instructions may include code generated by a compiler or code executable by an interpreter. The machine-readable storage medium may be provided in the form of a non-transitory storage medium. Here, the term "non-transitory" simply means that the storage medium is a tangible device and does not include a signal (e.g., an electromagnetic wave); the term does not distinguish between data being semi-permanently stored in the storage medium and data being temporarily stored in the storage medium.
According to embodiments, methods according to various embodiments of the present disclosure may be included and provided in a computer program product. The computer program product may be used as a product for conducting a transaction between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium, such as a compact disc read only memory (CD-ROM), or may be distributed (e.g., downloaded or uploaded) online via an application store (e.g., a Play store), or may be distributed (e.g., downloaded or uploaded) directly between two user devices (e.g., smartphones). At least part of the computer program product may be temporarily generated if it is published online, or at least part of the computer program product may be at least temporarily stored in a machine readable storage medium, such as a memory of a manufacturer's server, a server of an application store, or a forwarding server.
According to various embodiments, each of the above components (e.g., modules or programs) may comprise a single entity or multiple entities, and some of the multiple entities may be separately provided in different components. According to various embodiments, one or more of the above-described components may be omitted, or one or more other components may be added. Alternatively or additionally, multiple components (e.g., modules or programs) may be integrated into a single component. In such a case, according to various embodiments, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as the corresponding one of the plurality of components performed the one or more functions prior to integration. Operations performed by a module, program, or another component may be performed sequentially, in parallel, repeatedly, or in a heuristic manner, or one or more of the operations may be performed in a different order or omitted, or one or more other operations may be added, in accordance with various embodiments.
Fig. 13 is a block diagram 1300 illustrating a camera module 1280, according to various embodiments. Referring to fig. 13, the camera module 1280 may include a lens assembly 1310, a flash 1320, an image sensor 1330, an image stabilizer 1340, a memory 1350 (e.g., a buffer memory), or an image signal processor 1360. Lens assembly 1310 may collect light emitted or reflected from an object whose image is to be captured. Lens assembly 1310 may include one or more lenses. According to an embodiment, camera module 1280 may include a plurality of lens assemblies 1310. In this case, the camera module 1280 may form, for example, a dual camera, a 360 degree camera, or a spherical camera. Some of the plurality of lens assemblies 1310 may have the same lens properties (e.g., angle of view, focal length, auto-focus, f-number, or optical zoom), or at least one lens assembly may have one or more lens properties that are different from the lens properties of another lens assembly. Lens assembly 1310 may include, for example, a wide-angle lens or a telephoto lens.
The flash 1320 may emit light, wherein the emitted light is used to enhance the light reflected from the object. According to an embodiment, the flash 1320 may include one or more Light Emitting Diodes (LEDs) (e.g., Red Green Blue (RGB) LEDs, white LEDs, Infrared (IR) LEDs, or Ultraviolet (UV) LEDs) or xenon lamps. The image sensor 1330 may acquire an image corresponding to an object by converting light emitted or reflected from the object and transmitted through the lens assembly 1310 into an electrical signal. According to an embodiment, the image sensor 1330 may include one image sensor (e.g., an RGB sensor, a Black and White (BW) sensor, an IR sensor, or a UV sensor) selected from among a plurality of image sensors having different attributes, a plurality of image sensors having the same attribute, or a plurality of image sensors having different attributes. Each of the image sensors included in the image sensor 1330 may be implemented using, for example, a Charge Coupled Device (CCD) sensor or a Complementary Metal Oxide Semiconductor (CMOS) sensor.
The image stabilizer 1340 may move the image sensor 1330 or at least one lens included in the lens assembly 1310 in a particular direction, or control an operational property of the image sensor 1330 (e.g., adjust the readout timing), in response to movement of the camera module 1280 or the electronic device 1201 including the camera module 1280. This allows compensating for at least part of a negative effect (e.g., image blur) of the movement on an image being captured. According to an embodiment, the image stabilizer 1340 may sense such movement of the camera module 1280 or the electronic device 1201 using a gyro sensor (not shown) or an acceleration sensor (not shown) disposed inside or outside the camera module 1280. According to an embodiment, the image stabilizer 1340 may be implemented as, for example, an optical image stabilizer.
The memory 1350 may at least temporarily store at least a portion of an image acquired via the image sensor 1330 for a subsequent image processing task. For example, if image capturing is delayed due to shutter lag or multiple images are quickly captured, the acquired raw image (e.g., a Bayer-patterned image or a high-resolution image) may be stored in the memory 1350, and its corresponding copy image (e.g., a low-resolution image) may be previewed via the display module 1260. Then, if a specified condition is met (e.g., by a user input or a system command), at least a portion of the raw image stored in the memory 1350 may be retrieved and processed by, for example, the image signal processor 1360. According to an embodiment, the memory 1350 may be configured as at least a portion of the memory 1230, or the memory 1350 may be configured as a separate memory that operates independently of the memory 1230.
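As a non-limiting illustration of the buffering behavior described above, the following Python sketch keeps recent raw frames in a buffer while showing a downscaled copy as the preview; the buffer depth and the downscale factor are assumptions chosen only for illustration.

```python
# Illustrative raw-buffer / low-resolution-preview pattern; sizes are assumptions.
from collections import deque

import cv2
import numpy as np

raw_buffer: deque = deque(maxlen=8)   # recent high-resolution (raw) frames

def on_frame(raw_frame: np.ndarray) -> np.ndarray:
    raw_buffer.append(raw_frame)      # keep the raw image for later processing
    h, w = raw_frame.shape[:2]
    preview = cv2.resize(raw_frame, (w // 4, h // 4))  # low-resolution copy for preview
    return preview                    # shown via the display module
```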
The image signal processor 1360 may perform one or more image processes on an image acquired via the image sensor 1330 or an image stored in the memory 1350. The one or more image processes may include, for example, depth map generation, three-dimensional (3D) modeling, panorama generation, feature point extraction, image synthesis, or image compensation (e.g., noise reduction, resolution adjustment, brightness adjustment, blurring, sharpening, or softening). Additionally or alternatively, the image signal processor 1360 may perform control (e.g., exposure time control or readout timing control) on at least one of the components (e.g., the image sensor 1330) included in the camera module 1280. The image processed by the image signal processor 1360 may be stored back to the memory 1350 for further processing, or may be provided to an external component (e.g., the memory 1230, the display module 1260, the electronic device 1202, the electronic device 1204, or the server 1208) outside of the camera module 1280. According to an embodiment, the image signal processor 1360 may be configured as at least a portion of the processor 1220, or the image signal processor 1360 may be configured as a separate processor operating independently of the processor 1220. If the image signal processor 1360 is configured as a processor separate from the processor 1220, at least one image processed by the image signal processor 1360 may be displayed as it is by the processor 1220 via the display module 1260, or may be displayed after being further processed.
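Two of the image-compensation operations named above (noise reduction and sharpening) can be approximated with generic image-processing calls, as in the hedged sketch below; this is only an illustration and does not represent the behavior of any specific image signal processor, and the parameter values are assumptions.

```python
# Rough illustration of noise reduction followed by sharpening; parameters are assumptions.
import cv2
import numpy as np

def compensate(frame: np.ndarray) -> np.ndarray:
    # Non-local-means denoising, then a simple 3x3 sharpening kernel.
    denoised = cv2.fastNlMeansDenoisingColored(frame, None, 5, 5, 7, 21)
    sharpen_kernel = np.array([[0, -1, 0],
                               [-1, 5, -1],
                               [0, -1, 0]], dtype=np.float32)
    return cv2.filter2D(denoised, -1, sharpen_kernel)
```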
According to an embodiment, the electronic device 1201 may include multiple camera modules 1280 having different attributes or functions. In this case, at least one camera module 1280 of the plurality of camera modules 1280 may form, for example, a wide-angle camera, and at least another camera module 1280 of the plurality of camera modules 1280 may form a telephoto camera. Similarly, at least one camera module 1280 of the plurality of camera modules 1280 may form, for example, a front-facing camera, and at least another camera module 1280 of the plurality of camera modules 1280 may form a rear-facing camera.
An electronic device according to an embodiment herein may include a camera, a distance sensor, a display, and at least one processor electrically connected with the camera, the distance sensor, and the display. The at least one processor may acquire an image including a subject through the camera, acquire distance data on a distance between the subject and the electronic device through the distance sensor, determine the occurrence of defocus based on the distance data, sense a region of interest including at least a portion of the subject within the image, crop a region including at least the region of interest within the image, and display the cropped region on the display in an enlarged manner along with a message indicating defocus.
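As a non-limiting illustration of this flow, the Python sketch below infers defocus from a distance reading, crops the region of interest, and overlays a guide message on the enlarged crop. The helper detect_roi(), the constant MIN_FOCUS_MM, and the message text are hypothetical assumptions introduced only for illustration; they are not defined by this disclosure.

```python
# Illustrative sketch only; MIN_FOCUS_MM, detect_roi() and the message text
# are hypothetical stand-ins, not part of this disclosure.
from typing import Optional, Tuple

import cv2
import numpy as np

MIN_FOCUS_MM = 80  # assumed minimum focusable subject distance of the camera

def detect_roi(image: np.ndarray) -> Optional[Tuple[int, int, int, int]]:
    """Placeholder for an object detector; returns (x, y, w, h) or None."""
    raise NotImplementedError

def provide_capture_guide(image: np.ndarray, distance_mm: float) -> np.ndarray:
    # Defocus is assumed when the subject is closer than the minimum focus distance.
    if distance_mm >= MIN_FOCUS_MM:
        return image  # focus is achievable: show the preview unchanged

    roi = detect_roi(image)
    if roi is None:
        return image

    x, y, w, h = roi
    cropped = image[y:y + h, x:x + w]
    # Enlarge the cropped region of interest back to the preview size.
    enlarged = cv2.resize(cropped, (image.shape[1], image.shape[0]),
                          interpolation=cv2.INTER_LINEAR)
    # Overlay a message indicating defocus on the enlarged crop.
    cv2.putText(enlarged, "Subject too close - image may be out of focus",
                (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
    return enlarged
```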
The at least one processor may display a first visual affordance on the display in response to sensing the region of interest, and, when there is a user input to the first visual affordance, crop a region including at least the region of interest within the image and display the cropped region on the display in an enlarged manner along with a message indicating defocus.
In an electronic device according to an embodiment herein, the at least one processor may automatically crop a region including at least the region of interest within the image in response to sensing the region of interest, and display the cropped region on the display in an enlarged manner along with a message indicating defocus.
In an electronic device according to an embodiment herein, the at least one processor may display a second visual affordance on the display along with the message indicating defocus and the image in which the cropped region is enlarged, and, when there is a user input to the second visual affordance, display the image on the display without cropping or enlargement.
In an electronic device according to an embodiment herein, the at least one processor may display a second visual affordance on the display along with the message indicating defocus and the image in which the cropped region is enlarged, and, when there is a user input to the second visual affordance, display the image on the display without cropping or enlargement.
In an electronic device according to an embodiment herein, the at least one processor may sense the region of interest through object detection.
In an electronic device according to an embodiment herein, the at least one processor may analyze the texture of the subject through a texture check, and analyze characteristics of blur contained in the image through high-level feature extraction.
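The texture check and the high-level feature extraction are not limited here to a particular algorithm. As one hedged example, the variance of the Laplacian is a common heuristic for estimating how much texture a region contains, and therefore how blurred it appears; the threshold in the sketch below is an assumed, scene-dependent tuning value, not a value specified by this disclosure.

```python
# Hedged example of a texture/blur measure; the threshold is an assumption.
import cv2
import numpy as np

def blur_score(region: np.ndarray) -> float:
    """Variance of the Laplacian: lower values suggest less texture / stronger blur."""
    gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def looks_defocused(region: np.ndarray, threshold: float = 100.0) -> bool:
    # threshold is scene-dependent and chosen here for illustration only
    return blur_score(region) < threshold
```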
In an electronic device according to an embodiment herein, the at least one processor may display a line surrounding the region of interest on the display in response to sensing the region of interest.
In an electronic device according to an embodiment herein, the at least one processor may display, on the display, a UI capable of guiding the user so that the region of interest is located at the center of the image.
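An illustrative sketch of such centering guidance is shown below: the offset of the region of interest from the image center is translated into a textual hint. The ROI format (x, y, w, h), the 5% dead zone, and the hint wording are assumptions for illustration, not the UI defined by this disclosure.

```python
# Illustrative centering guidance; ROI format, dead zone and hint texts are assumed.
import numpy as np

def centering_hint(image: np.ndarray, roi: tuple) -> str:
    x, y, w, h = roi
    roi_cx, roi_cy = x + w / 2.0, y + h / 2.0
    img_cy, img_cx = image.shape[0] / 2.0, image.shape[1] / 2.0
    dx, dy = roi_cx - img_cx, roi_cy - img_cy
    hints = []
    if abs(dx) > 0.05 * image.shape[1]:
        hints.append("pan right" if dx > 0 else "pan left")  # move the subject toward center
    if abs(dy) > 0.05 * image.shape[0]:
        hints.append("tilt down" if dy > 0 else "tilt up")
    return ", ".join(hints) if hints else "subject centered"
```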
An image capturing guide providing method of an electronic device according to an embodiment herein may include: an act of acquiring, through a camera included in the electronic device, an image including a subject; an act of acquiring, through a distance sensor included in the electronic device, distance data on a distance between the subject and the electronic device; an act of determining the occurrence of defocus based on the distance data; an act of sensing a region of interest including at least a portion of the subject within the image; an act of cropping a region including at least the region of interest within the image; and an act of displaying the cropped region in an enlarged manner on a display included in the electronic device, along with a message indicating defocus.
An image capturing guide providing method of an electronic device according to an embodiment herein may include: an act of displaying a first visual affordance on the display in response to sensing the region of interest; and an act of, when there is a user input to the first visual affordance, cropping a region including at least the region of interest within the image and displaying the cropped region on the display in an enlarged manner along with a message indicating defocus.
An image capturing guide providing method of an electronic device according to an embodiment herein may include: an act of automatically cropping a region including at least the region of interest within the image in response to sensing the region of interest, and displaying the cropped region on the display in an enlarged manner along with a message indicating defocus.
An image capturing guide providing method of an electronic device according to an embodiment herein may include: an act of displaying a second visual affordance on the display along with the message indicating defocus and the image in which the cropped region is enlarged; and an act of displaying the image on the display without cropping or enlargement when there is a user input to the second visual affordance.
An image capturing guide providing method of an electronic device according to an embodiment herein may include: an act of displaying a second visual affordance on the display along with the message indicating defocus and the image in which the cropped region is enlarged; and an act of displaying the image on the display without cropping or enlargement when there is a user input to the second visual affordance.
In an image capturing guide providing method of an electronic device according to an embodiment herein, the act of sensing the region of interest may include an act of sensing the region of interest through object detection.
An electronic device according to an embodiment herein may include a camera, a distance sensor, a display, and at least one processor electrically connected with the camera, the distance sensor, and the display. The at least one processor may acquire an image including a subject through the camera, acquire distance data on a distance between the subject and the electronic device through the distance sensor, determine the occurrence of defocus based on the distance data, and display, on the display, a message guiding the distance between the subject and the electronic device to be a minimum focal length or more.
In an electronic device according to an embodiment herein, the at least one processor may remove the message from the display in response to the distance between the subject and the electronic device becoming the minimum focal length or more.
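A minimal sketch of this message life-cycle is shown below, assuming a per-frame distance reading in millimetres; MIN_FOCUS_MM and the message text are illustrative assumptions rather than values specified by this disclosure.

```python
# Minimal sketch of showing/removing the guide message; values are assumptions.
MIN_FOCUS_MM = 80  # assumed minimum focal (focusable) distance in millimetres

class GuideMessage:
    def __init__(self) -> None:
        self.visible = False

    def update(self, distance_mm: float) -> None:
        if distance_mm < MIN_FOCUS_MM:
            self.visible = True    # display e.g. "Move farther away from the subject"
        else:
            self.visible = False   # distance recovered: remove the message
```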
An electronic device according to an embodiment herein may further include at least one of a speaker and a light emitting device electrically connected to the at least one processor, and the at least one processor may cause at least one of the speaker and the light emitting device to provide an output together when the message is displayed on the display.
In an electronic device according to an embodiment herein, the at least one processor may sense a region of interest within the image through object detection.
In an electronic device according to an embodiment herein, the at least one processor may analyze the texture of the subject through a texture check, and analyze characteristics of blur contained in the image through high-level feature extraction.

Claims (20)

1. An electronic device, the electronic device comprising:
a camera;
a distance sensor;
a display; and
at least one processor in electrical connection with the camera, the distance sensor, and the display,
the at least one processor:
acquiring an image including a subject by the camera,
acquiring distance data of a distance between the subject and the electronic device by the distance sensor,
determining that defocus has occurred based on the distance data,
sensing a region of interest including at least a portion of the subject within the image,
cropping a region within the image that includes at least the region of interest,
displaying the cropped region on the display in an enlarged manner along with a message indicating defocus.
2. The electronic device of claim 1,
the at least one processor:
in response to sensing the region of interest, displaying a first visual affordance on the display,
cropping a region including at least the region of interest within the image when there is a user input to the first visual affordance, and displaying the cropped region in an enlarged manner on the display along with a message indicating defocus.
3. The electronic device of claim 1,
the at least one processor automatically crops a region including at least the region of interest within the image in response to sensing the region of interest, and displays the cropped region on the display in an enlarged manner along with a message indicating defocus.
4. The electronic device of claim 2,
the at least one processor:
displaying a second visual affordance on the display along with the message indicating defocus and the image in which the cropped region is enlarged,
displaying the image on the display without cropping or enlargement when there is a user input to the second visual affordance.
5. The electronic device of claim 3,
the at least one processor:
displaying a second visual affordance on the display along with the message indicating defocus and the image in which the cropped region is enlarged,
displaying the image on the display without cropping or enlargement when there is a user input to the second visual affordance.
6. The electronic device of claim 1,
the at least one processor senses the region of interest by object detection.
7. The electronic device of claim 6,
the at least one processor:
analyzing the texture of the subject through a texture check,
analyzing characteristics of blur contained in the image through high-level feature extraction.
8. The electronic device of claim 1,
the at least one processor displays a line surrounding the region of interest on the display in response to sensing the region of interest.
9. The electronic device of claim 1,
the at least one processor displays a user interface on the display that is capable of guiding a user to center the region of interest in the image.
10. A method of an electronic device for providing image capture guidance, the method comprising acts of:
acquiring an image including a subject by a camera included in the electronic device;
acquiring distance data of a distance between the subject and the electronic device by a distance sensor included in the electronic device;
determining that defocus occurs based on the distance data;
sensing a region of interest including at least a portion of the subject within the image;
cropping a region within the image that includes at least the region of interest;
displaying the cropped region in an enlarged manner on a display included in the electronic device along with a message indicating defocus.
11. The image capturing guide providing method of the electronic device according to claim 10, further comprising the acts of:
in response to sensing the region of interest, displaying a first visual affordance on the display; and
cropping a region including at least the region of interest within the image when there is a user input to the first visual affordance, and displaying the cropped region in an enlarged manner on the display along with a message indicating defocus.
12. The image capturing guide providing method of the electronic device according to claim 10, further comprising the act of: automatically cropping a region including at least the region of interest within the image in response to sensing the region of interest, and displaying the cropped region in an enlarged manner on the display along with a message indicating defocus.
13. The image capturing guide providing method of the electronic device according to claim 11, further comprising the acts of:
displaying a second visual affordance on the display along with the message indicating defocus and the image in which the cropped region is enlarged; and
displaying the image on the display without cropping or enlargement when there is a user input to the second visual affordance.
14. The image capturing guide providing method of the electronic device according to claim 12, further comprising the acts of:
an act of displaying a second visual affordance on the display along with the message indicating defocus and the image in which the cropped region is enlarged; and
an act of displaying the image on the display without cropping or enlargement when there is a user input to the second visual affordance.
15. The image capturing guide providing method of the electronic device according to claim 10,
the act of sensing the region of interest includes an act of sensing the region of interest through object detection.
16. An electronic device, the electronic device comprising:
a camera;
a distance sensor;
a display; and
at least one processor in electrical connection with the camera, the distance sensor, and the display,
the at least one processor:
acquiring an image including a subject by the camera,
acquiring distance data of a distance between the subject and the electronic device by the distance sensor,
determining that defocus occurs based on the distance data,
displaying, on the display, a message that guides a distance between the subject and the electronic device to be a minimum focal length or more.
17. The electronic device of claim 16,
the at least one processor removes the message from the display in response to the distance between the subject and the electronic device becoming the minimum focal length or more.
18. The electronic device of claim 16, further comprising at least one of a speaker and a light emitting device electrically connected to the at least one processor,
the at least one processor causes at least one of the speaker and the light emitting device to provide an output together when the message is displayed on the display.
19. The electronic device of claim 16,
the at least one processor senses a region of interest within the image through object detection.
20. The electronic device of claim 19,
the at least one processor:
analyzing the texture of the subject through a texture check,
analyzing characteristics of blur contained in the image through high-level feature extraction.
CN202110245479.5A 2020-05-12 2021-03-05 Method and apparatus for providing image capturing guide Pending CN113727013A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020210034817A KR20210138483A (en) 2020-05-12 2021-03-17 Method or electronic device providing image shooting guide
PCT/KR2021/004718 WO2021230507A1 (en) 2020-05-12 2021-04-14 Method and device for providing imaging guidance

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2020-0056762 2020-05-12
KR20200056762 2020-05-12

Publications (1)

Publication Number Publication Date
CN113727013A true CN113727013A (en) 2021-11-30

Family

ID=78672601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110245479.5A Pending CN113727013A (en) 2020-05-12 2021-03-05 Method and apparatus for providing image capturing guide

Country Status (2)

Country Link
KR (1) KR20210138483A (en)
CN (1) CN113727013A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20240073520A1 (en) * 2022-08-29 2024-02-29 Sony Interactive Entertainment Inc. Dual camera tracking system
WO2024106746A1 (en) * 2022-11-18 2024-05-23 삼성전자주식회사 Electronic device and method for increasing resolution of digital bokeh image

Also Published As

Publication number Publication date
KR20210138483A (en) 2021-11-19

Similar Documents

Publication Publication Date Title
US11048923B2 (en) Electronic device and gesture recognition method thereof
US11144197B2 (en) Electronic device performing function according to gesture input and operation method thereof
US20230164442A1 (en) Method for providing image and electronic device supporting same
US20220417446A1 (en) Electronic device for editing video using objects of interest and operating method thereof
US20230188845A1 (en) Electronic device and method for controlling preview image
US20230262318A1 (en) Method for taking photograph by using plurality of cameras, and device therefor
US11954833B2 (en) Electronic device for supporting machine learning-based image processing
CN113727013A (en) Method and apparatus for providing image capturing guide
US20240013405A1 (en) Object tracking method and electronic apparatus therefor
US20230388441A1 (en) Electronic device and method for capturing image by using angle of view of camera module
US11467673B2 (en) Method for controlling camera and electronic device therefor
US20230113499A1 (en) Electronic device including a plurality of cameras and operating method thereof
US20230141559A1 (en) Method for providing image and electronic device supporting the same
US20230074962A1 (en) Electronic device including a plurality of cameras and operating method thereof
US20240089609A1 (en) Image processing method and electronic device therefor
US20230319423A1 (en) Method for providing image and electronic device supporting the same
US20240184379A1 (en) Electronic apparatus for detecting motion gesture, and operation method therefor
US20240078685A1 (en) Method for generating file including image data and motion data, and electronic device therefor
US20230214632A1 (en) Method for processing image through neural network and electronic device thereof
US20240098347A1 (en) Electronic device comprising image sensor and dynamic vision sensor, and operating method therefor
US20230360245A1 (en) Measurement method using ar, and electronic device
US11838652B2 (en) Method for storing image and electronic device supporting the same
US20220358776A1 (en) Electronic device and operating method thereof
US20240007732A1 (en) Electronic device including plurality of cameras
EP4228246A1 (en) Electronic device capable of auto-focusing and method for operating same

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination