WO2019090734A1 - Shooting method and device, mobile terminal, and computer-readable storage medium - Google Patents

Shooting method and device, mobile terminal, and computer-readable storage medium

Info

Publication number
WO2019090734A1
WO2019090734A1 (application PCT/CN2017/110563; CN family CN2017110563W)
Authority
WO
WIPO (PCT)
Prior art keywords
screen
display
content
partial
framing
Prior art date
Application number
PCT/CN2017/110563
Other languages
English (en)
French (fr)
Inventor
李金鑫
付洋
Original Assignee
深圳市柔宇科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市柔宇科技有限公司 filed Critical 深圳市柔宇科技有限公司
Priority to CN201780095805.0A priority Critical patent/CN111201773A/zh
Priority to PCT/CN2017/110563 priority patent/WO2019090734A1/zh
Publication of WO2019090734A1 publication Critical patent/WO2019090734A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules

Definitions

  • the present application relates to the field of image processing technologies, and in particular, to a photographing method and apparatus, a mobile terminal, and a computer readable storage medium.
  • The present application provides a shooting method and apparatus, a mobile terminal, and a computer-readable storage medium that, during framing, can synchronously display in real time the overall preview of the framing picture and the details of the partial picture of interest to the photographer, helping the photographer make quick shooting decisions, improving shooting efficiency, and giving the photographer a better experience.
  • the present application provides a photographing method, which is applied to a mobile terminal, where the mobile terminal includes a viewfinder display screen.
  • The shooting method includes: displaying a framing picture on the viewfinder display screen of the mobile terminal in an initial state; acquiring the content of a first partial picture corresponding to a target area in the framing picture; enlarging the content of the first partial picture; and displaying the framing picture and the enlarged content of the first partial picture on the viewfinder display screen in real time.
  • Another aspect of the present application provides a photographing apparatus for use in a mobile terminal, the mobile terminal including a viewfinder display screen.
  • the photographing device includes:
  • a display module configured to display a framing picture on the viewfinder display screen of the mobile terminal in an initial state;
  • an acquiring module configured to acquire the content of a first partial picture corresponding to a target area in the framing picture;
  • a scaling module configured to enlarge the content of the first partial picture; and
  • the display module is further configured to display the framing picture and the enlarged content of the first partial picture on the viewfinder display screen in real time.
  • Still another aspect of the present application provides a mobile terminal comprising a processor, the processor being configured to perform the steps of the photographing method described in any of the embodiments when executing a computer program stored in a memory.
  • Yet another aspect of the present application provides a computer readable storage medium having stored thereon computer instructions that, when executed by a processor, implement the steps of the photographing method described in any of the above embodiments.
  • The photographing method and apparatus and the mobile terminal of the present application allow the photographer to select a partial picture of interest from the framing picture during framing, enlarge the content of the selected partial picture, and display the framing picture and the enlarged partial picture side by side in real time, so that the photographer can synchronously view the overall preview of the framing picture and the details of the partial picture of interest. This enables the photographer to make quick shooting decisions, improves shooting efficiency, and gives the photographer a better experience.
  • FIG. 1 is a flowchart of a photographing method according to a first embodiment of the present application.
  • FIG. 2 is a schematic diagram of a screen of a viewfinder display of the mobile terminal of the present application at a first time T1.
  • FIG. 3 is a schematic diagram of a screen after a selection operation is input in the viewfinder screen of FIG. 2.
  • FIG. 4 is a schematic diagram of the content of the first partial picture selected by the selection operation in FIG. 3.
  • FIG. 5 is a schematic diagram of displaying the framing picture and the enlarged content of the first partial picture on the viewfinder display screen of FIG. 3.
  • FIG. 6 is another schematic diagram of displaying the framing screen and the content of the enlarged first partial screen on the framing display screen of FIG. 3 .
  • FIG. 7 is a flowchart of a photographing method according to a second embodiment of the present application.
  • FIG. 8 is a schematic diagram of the screen of the framing display screen of FIG. 5 at the second time T2.
  • FIG. 9 is a schematic diagram showing a screen in which the range of the target area shown in FIG. 8 is enlarged.
  • FIG. 10 is a schematic diagram of a screen for changing the range of the target area shown in FIG. 5.
  • FIG. 11 is a schematic diagram of a screen display of the image view display of FIG. 8 after the photographing operation is completed.
  • FIG. 12 is a flowchart of a photographing method according to a third embodiment of the present application.
  • FIG. 13A is a schematic diagram of a screen of a viewfinder display of the mobile terminal of the present application at a first time T1.
  • FIG. 13B is a schematic diagram of the screen of the secondary display of the mobile terminal of the present application at the first time T1.
  • Fig. 14 is a view showing a screen after a selection operation is input in the view display screen of Fig. 13A or the sub-display screen of Fig. 13B.
  • Fig. 15 is a schematic diagram showing the contents of the first partial screen selected by the selection operation in Fig. 14.
  • FIG. 16A is a schematic diagram of displaying the framing picture and the enlarged content of the first partial picture on the viewfinder display screen of FIG. 13A.
  • Fig. 16B is a schematic diagram showing the content of the enlarged first partial screen on the sub-display screen of Fig. 13B.
  • FIG. 17 is a flowchart of a photographing method according to a fourth embodiment of the present application.
  • FIG. 18A is a schematic diagram of the screen of the framing display screen of FIG. 13A at the second time T2.
  • FIG. 18B is a schematic diagram of the screen of the secondary display screen of FIG. 13B at the second time T2.
  • FIG. 19A is a schematic diagram of a screen for displaying an image after the photographing operation is completed by the viewfinder screen of FIG. 13A.
  • FIG. 19B is a schematic diagram of a screen display of the sub-display screen of FIG. 13B after the photographing operation is completed.
  • FIG. 20 is a functional block diagram of a photographing apparatus according to an embodiment of the present application.
  • FIG. 21 is a schematic diagram of functional modules of a mobile terminal according to a first embodiment of the present application.
  • FIG. 22 is a schematic diagram of functional modules of a mobile terminal according to a second embodiment of the present application.
  • FIG. 1 is a flowchart of a photographing method according to a first embodiment of the present application, where the photographing method is applied to a mobile terminal.
  • the mobile terminal may be an electronic device having a photographing function, such as a camera, a smart phone, or a tablet computer.
  • the mobile terminal includes at least a camera and a finder display.
  • the camera is used to collect images
  • the viewfinder display is used to display a framing interface and the like.
  • The photographing method of the embodiment of the present application is not limited to the steps or the sequence in the flowchart shown in FIG. 1.
  • the steps in the illustrated flow diagrams can be added, removed, or changed in order, depending on the requirements.
  • the photographing method includes the following steps:
  • Step 101 Display a framing picture in a framing display screen of the mobile terminal in an initial state.
  • the scene content in the focus range of the camera of the mobile terminal is displayed in the viewfinder screen.
  • The framing picture can be displayed in real time on the viewfinder display screen. As shown in FIG. 2, at the first time T1 the viewfinder display screen displays a real-time picture captured by the camera within its focus range, for the photographer to view on the mobile terminal.
  • In this embodiment, the viewfinder display screen of the mobile terminal displays the framing picture in full screen in the initial state. It can be understood that, in other embodiments, the viewfinder display screen may instead display the framing picture at a certain display ratio (for example, 75%) rather than in full screen.
  • Step 102 When a selection operation is detected on the framing screen, determine a target area selected by the selection operation.
  • The selection operation may be a touch operation input by a touch object (such as a finger or a stylus) on the viewfinder display screen of the mobile terminal, or an operation input by a peripheral device (such as a mouse) on the viewfinder display screen.
  • In one embodiment, the selection operation is the input of an operating point in the framing interface, such as an operating point generated by tapping the framing interface or clicking it with a mouse.
  • determining the target area selected by the selection operation includes:
  • An area enclosed by a square centered on the operating point with the second preset value as its side length is taken as the target area, the four sides of the square being parallel to the edges of the viewfinder display screen; or
  • An area enclosed by a rectangle centered on the operating point with the third preset value as its length and the fourth preset value as its width is taken as the target area, the four sides of the rectangle being parallel to the edges of the viewfinder display screen.
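The operating-point variants above can be sketched in a few lines. This is a hypothetical illustration, not code from the patent: the function name, the `(left, top, right, bottom)` convention, and the clamping to the screen bounds are assumptions; the preset values map to `side` (square) or `length`/`width` (rectangle).

```python
def clamp(v, lo, hi):
    """Restrict v to the closed interval [lo, hi]."""
    return max(lo, min(v, hi))

def target_area_from_point(px, py, screen_w, screen_h,
                           side=None, length=None, width=None):
    """Derive a target area centered on the operating point (px, py).

    Pass side= for the square variant (second preset value), or
    length=/width= for the rectangle variant (third/fourth preset values).
    Returns (left, top, right, bottom), sides parallel to the screen edges.
    """
    if side is not None:          # square variant
        w, h = side, side
    else:                         # rectangle variant
        w, h = length, width
    # Keep the whole box inside the viewfinder display screen.
    left = clamp(px - w // 2, 0, screen_w - w)
    top = clamp(py - h // 2, 0, screen_h - h)
    return (left, top, left + w, top + h)
```

Clamping near the screen edge (here, a tap at (100, 100)) shifts the box inward rather than letting it overflow, which matches the requirement that the box stay within the framing picture.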
  • In another embodiment, the selection operation is a sliding trajectory input in the framing interface, such as a sliding touch on the framing interface, or a trajectory generated by clicking and dragging with a mouse.
  • determining the target area selected by the selecting operation includes:
  • An area enclosed by a square whose diagonal is the line connecting the start point and the end point of the sliding trajectory is taken as the target area, the four sides of the square being parallel to the edges of the viewfinder display screen; or
  • An area enclosed by a rectangle whose diagonal is the line connecting the start point and the end point of the sliding trajectory is taken as the target area, the four sides of the rectangle being parallel to the edges of the viewfinder display screen.
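The sliding-trajectory variants can likewise be sketched. The helper name is illustrative; for the axis-aligned square variant, one plausible reading (an assumption, since a square with arbitrary diagonal cannot generally keep its sides parallel to the screen edges) is to use the shorter of the two spans as the side length.

```python
def target_area_from_swipe(x0, y0, x1, y1, square=False):
    """Derive a target area whose diagonal is the swipe from (x0, y0) to (x1, y1).

    Returns (left, top, right, bottom) with sides parallel to the screen edges.
    With square=True, the shorter span is used for both sides (assumed policy).
    """
    left, right = sorted((x0, x1))
    top, bottom = sorted((y0, y1))
    if square:
        side = min(right - left, bottom - top)
        right, bottom = left + side, top + side
    return (left, top, right, bottom)
```

Sorting the coordinates makes the result independent of swipe direction, so dragging from bottom-right to top-left selects the same area as the reverse gesture.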
  • Step 103: As shown in FIG. 3, display an identification box K on the framing picture to identify the target area.
  • In one embodiment, the identification box K is presented as a dashed box.
  • Step 104: Acquire the content of the first partial picture corresponding to the target area in the framing picture.
  • Step 105: Enlarge the content of the first partial picture.
  • Step 106: As shown in FIG. 5, display the framing picture and the enlarged content of the first partial picture on the viewfinder display screen in real time.
  • the step of displaying the framing picture and the enlarged content of the first partial picture in real time on the framing display screen comprises:
  • the framing screen is displayed in real time on the first display sub-region R1, and the content of the enlarged first partial screen is displayed in real time on the second display sub-region R2.
  • In one embodiment, the first display sub-region R1 and the second display sub-region R2 are arranged one above the other in the display area of the viewfinder display screen. It can be understood that this split-screen display manner may correspond to the viewfinder display screen of the mobile terminal being held vertically.
  • In another embodiment, the first display sub-region R1 and the second display sub-region R2 are arranged side by side in the display area of the viewfinder display screen. It can be understood that this split-screen display manner may correspond to the viewfinder display screen of the mobile terminal being held horizontally.
  • the step of displaying the framing screen in the first display sub-region R1 in real time includes:
  • the framing picture is reduced according to the size of the first display sub-region R1;
  • the reduced view screen is displayed in real time on the first display sub-region R1.
  • the step of enlarging the content of the first partial screen includes:
  • the content of the first partial screen is enlarged according to the size of the second display sub-region R2.
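Both scaling steps above (reducing the framing picture to fit R1, enlarging the crop to fit R2) amount to fitting an image into a differently sized region. A minimal sketch of one plausible policy; the patent does not specify the exact fitting rule, so the aspect-preserving `fit_scale` helper and the example dimensions are assumptions:

```python
def fit_scale(src_w, src_h, dst_w, dst_h):
    """Largest uniform scale factor that fits src inside dst (aspect preserved)."""
    return min(dst_w / src_w, dst_h / src_h)

# Reducing a 1080x1920 framing picture into a 1080x960 sub-region R1:
reduce_factor = fit_scale(1080, 1920, 1080, 960)   # < 1, so the picture shrinks
# Enlarging a 200x200 first partial picture into a 960x960 sub-region R2:
enlarge_factor = fit_scale(200, 200, 960, 960)     # > 1, so the crop is magnified
```

A factor below 1 corresponds to the reduction in step R1, a factor above 1 to the enlargement in step R2; the same helper covers both directions.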
  • The photographing method of the present application allows the photographer to select a partial picture of interest from the framing picture during framing, enlarges the content of the selected partial picture, and displays the framing picture and the enlarged partial picture side by side in real time, so that the photographer can view the overall preview of the framing picture and the details of the partial picture of interest in real time. This enables the photographer to make quick shooting decisions, improves shooting efficiency, and gives the photographer a better experience.
  • During framing, the photographer can view the details of the partial picture of interest in real time through the enlarged first partial picture, so as to judge whether the posture, expression, and the like of the target object are in place. The photographer can then adjust shooting parameters (for example, the shooting angle) or the posture of the target object in real time, or direct the target object to adjust itself, until a satisfactory framing picture is obtained, and then perform the photographing operation.
  • FIG. 7 is a flowchart of a photographing method according to a second embodiment of the present application.
  • The main difference between the second embodiment and the first embodiment is that the second embodiment further includes steps of tracking the target object by using a target tracking technology and re-determining a new target area when a change in the content of the framing picture is detected.
  • Within the spirit and scope of the present application, the specific solutions applicable to the first embodiment may also be correspondingly applied to the second embodiment; for brevity, they are not repeated here.
  • the photographing method includes the following steps:
  • Step 201 Display a framing picture in a framing display screen of the mobile terminal in an initial state.
  • Step 202 When a selection operation is detected on the framing screen, determining a target area selected by the selection operation.
  • Step 203: As shown in FIG. 3, display an identification box K on the framing picture to identify the target area.
  • Step 204: Acquire the content of the first partial picture corresponding to the target area in the framing picture.
  • Step 205: Enlarge the content of the first partial picture.
  • Step 206: As shown in FIG. 5, divide the display area of the viewfinder display screen into a first display sub-region R1 and a second display sub-region R2 arranged side by side, display the framing picture in real time in the first display sub-region R1, and display the enlarged content of the first partial picture in real time in the second display sub-region R2.
  • Step 207 Extract features of the partial screen content corresponding to the target area, and determine the target object according to the extracted features.
  • Step 208 Track the target object by using a target tracking technology when detecting that the content of the view screen changes.
  • Step 209 Re-determine a new target area according to the current position of the target object in the framing picture.
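Step 209 can be sketched as re-centering the existing target area on the tracked object's current position while keeping the box size fixed. The helper name and the clamping policy are illustrative assumptions, not details from the patent; the tracker itself (step 208) is treated as a black box that reports the object's center.

```python
def retarget(box, obj_cx, obj_cy, screen_w, screen_h):
    """Re-derive the target area around the tracked object's center.

    box is (left, top, right, bottom); the box keeps its size, is re-centered
    on (obj_cx, obj_cy), and is clamped to stay inside the framing picture.
    """
    left, top, right, bottom = box
    w, h = right - left, bottom - top
    new_left = max(0, min(obj_cx - w // 2, screen_w - w))
    new_top = max(0, min(obj_cy - h // 2, screen_h - h))
    return (new_left, new_top, new_left + w, new_top + h)
```

When the object drifts toward a screen edge, the clamp keeps the full box inside the picture, so the enlarged second partial picture never samples outside the frame.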
  • In one embodiment, the photographing method further includes: displaying the identification box K on the framing picture to identify the new target area (as shown in FIG. 8).
  • Step 210 Acquire content of the second partial screen corresponding to the new target area.
  • Step 211 Enlarge the content of the second partial screen.
  • Step 212 Update the display screen of the second display sub-region R2 in real time according to the content of the enlarged second partial screen.
  • the step of enlarging the content of the second partial screen includes:
  • the content of the second partial screen is enlarged according to the size of the second display sub-region R2.
  • At the second time T2, the first display sub-region R1 displays a real-time picture captured by the camera within its focus range, and the second display sub-region R2 displays the enlarged content of the second partial picture.
  • the shooting method further includes:
  • In one embodiment, the identification box K shown in FIG. 8 can be enlarged according to an input adjustment operation to expand the range of the target area.
  • The identification box K shown in FIG. 8 can also be reduced according to an input adjustment operation to narrow the range of the target area. It should be noted that adjusting the size of the identification box K here means adjusting only the box itself; the picture content enclosed by the box is not scaled.
  • The identification box K may also be moved according to an input adjustment operation to reselect the partial area of interest. For example, as shown in FIG. 5, the object of interest in the target area is a woman; after the identification box K is moved to the position shown in FIG. 10, a man can be selected as the object of interest.
  • The position and size of the identification box K can also be adjusted together according to an input adjustment operation to reselect the partial area of interest and adjust its range.
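The adjustment operations above change only the identification box K, never the underlying picture content. A hypothetical sketch (function names and the `(left, top, right, bottom)` convention are assumptions; clamping to the screen bounds is omitted for brevity, though real code would need it):

```python
def resize_frame(frame, delta):
    """Grow (delta > 0) or shrink (delta < 0) the box symmetrically about its center.

    Only the box geometry changes; the picture content it encloses is untouched.
    """
    left, top, right, bottom = frame
    return (left - delta, top - delta, right + delta, bottom + delta)

def move_frame(frame, dx, dy):
    """Translate the box by (dx, dy) without changing its size."""
    left, top, right, bottom = frame
    return (left + dx, top + dy, right + dx, bottom + dy)
```

Combining the two (a move followed by a resize) corresponds to the last bullet above, where both the position and the range of the partial area are adjusted in one operation.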
  • In one embodiment, after the identification box K is adjusted, the content of a third partial picture corresponding to the adjusted target area is acquired, and the step of enlarging the content of the third partial picture includes:
  • the content of the third partial screen is enlarged according to the size of the second display sub-region R2.
  • Step 213: Perform a photographing task according to an input photographing operation, and generate a corresponding image.
  • Step 214: Display the image in full screen on the viewfinder display screen.
  • the shooting method further includes:
  • A new framing picture is then redisplayed on the viewfinder display screen of the mobile terminal to perform the next shooting task.
  • On the one hand, the photographing method of the present application allows the photographer to select a partial picture of interest from the framing picture during framing, enlarges its content, and displays the framing picture and the enlarged partial picture in real time, so that the photographer can synchronously view the overall preview of the framing picture and the details of the partial picture of interest; on the other hand, the target area can be adjusted automatically according to real-time changes in the content of the framing picture.
  • FIG. 12 is a flowchart of a shooting method according to a third embodiment of the present application, where the shooting method is applied to a mobile terminal.
  • The mobile terminal includes at least a camera, a viewfinder display screen, and a secondary display screen.
  • The camera is used to capture images, and the viewfinder display screen and the secondary display screen are used to display a framing interface and the like.
  • The viewfinder display screen is disposed on the front side of the mobile terminal, and the secondary display screen is disposed on the back side. It should be noted that the front side is the side facing the photographer when the mobile terminal is in use, and correspondingly the back side is the side facing away from the photographer.
  • the size of the finder display screen and the sub-display screen can be designed according to actual needs, and the size of the framing display screen and the sub-display screen is not limited by the present application.
  • The schematic diagrams of the present application, such as FIGS. 13A, 13B, 16A, 16B, 18A, 18B, 19A, and 19B mentioned below, merely illustrate the screens of the viewfinder display screen and the secondary display screen and do not limit their sizes.
  • The size of the viewfinder display screen may be greater than, equal to, or smaller than the size of the secondary display screen.
  • The third embodiment further includes the step of displaying the enlarged content of the partial picture on the secondary display screen. It should be noted that, within the spirit and scope of the present application, the specific solutions applicable to the first embodiment may be correspondingly applied to the third embodiment; for brevity, they are not repeated here.
  • the photographing method includes the following steps:
  • Step 301 Display a framing screen in the framing display screen and the sub-display screen of the mobile terminal in an initial state.
  • the live view screen captured by the camera within its focus range at the first time T1 is displayed in the viewfinder display for the photographer to view on the mobile terminal.
  • displayed in the sub-display screen is a real-time picture captured by the camera within its focus range at the first time T1 for the photographer to view on the mobile terminal.
  • the framing display screen and the sub-display screen of the mobile terminal display the framing screen in a full screen mode in an initial state.
  • It can be understood that the viewfinder display screen and/or the secondary display screen may instead display the framing picture at a certain display ratio (for example, 75%) rather than in full screen in the initial state.
  • The term "and/or" covers both the case where "and" applies and the case where "or" applies; for example, "A and/or B" includes the three parallel cases A, B, and A + B.
  • Step 302 When a selection operation is detected on the framing screen, determine a target area selected by the selection operation.
  • The selection operation may be a touch operation input by a touch object (such as a finger or a stylus) on the viewfinder display screen and/or the secondary display screen of the mobile terminal, or an operation input by a peripheral device (such as a mouse).
  • Step 303: As shown in FIG. 14, display an identification box K on the framing picture to identify the target area.
  • Step 304: As shown in FIG. 15, acquire the content of the first partial picture corresponding to the target area in the framing picture.
  • Step 305 Enlarge the content of the first partial picture.
  • Step 306: As shown in FIG. 16A, display the framing picture and the enlarged content of the first partial picture in real time on the viewfinder display screen, and, as shown in FIG. 16B, display the enlarged content of the first partial picture in real time on the secondary display screen.
  • the step of displaying the framing picture and the enlarged content of the first partial picture in real time on the framing display screen comprises:
  • the framing screen is displayed in real time on the first display sub-region R1, and the content of the enlarged first partial screen is displayed in real time on the second display sub-region R2.
  • the step of enlarging the content of the first partial screen includes:
  • The content of the first partial picture displayed in the second display sub-region R2 of the viewfinder display screen is enlarged according to the size of the second display sub-region R2, and the content of the first partial picture displayed on the secondary display screen is enlarged according to the size of the display area of the secondary display screen.
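In the dual-screen case, the same cropped partial picture is thus enlarged twice, independently: once for sub-region R2 of the viewfinder display screen and once for the secondary display screen, which may differ in size. A sketch under assumed example dimensions (the helper name and the aspect-preserving policy are illustrative, not specified by the patent):

```python
def fit_scale(src_w, src_h, dst_w, dst_h):
    """Largest uniform scale factor that fits src inside dst (aspect preserved)."""
    return min(dst_w / src_w, dst_h / src_h)

# One 240x240 crop, two independent enlargement factors:
crop_w, crop_h = 240, 240
r2_factor = fit_scale(crop_w, crop_h, 960, 960)          # for sub-region R2
secondary_factor = fit_scale(crop_w, crop_h, 720, 1280)  # for the secondary display
```

Because the two target regions differ, the factors generally differ as well; each screen receives its own enlargement of the same source crop.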
  • On the one hand, the photographing method of the present application allows the photographer to select a partial picture of interest from the framing picture displayed on the viewfinder display screen during framing, enlarges its content, and displays the framing picture and the enlarged partial picture in real time, so that the photographer can view the overall preview of the framing picture and the details of the partial picture of interest in real time, adjust the posture, expression, position, and the like of the object being photographed, and thus make quick shooting decisions, improving shooting efficiency and giving the photographer a better experience. On the other hand, on a mobile terminal with dual display screens, it helps the subject check whether his or her own posture, expression, and the like are in place and adjust in real time, so as to obtain a picture that satisfies the photographer and/or the subject.
  • FIG. 17 is a flowchart of a photographing method according to a fourth embodiment of the present application.
  • The fourth embodiment further includes steps of tracking the target object by using a target tracking technology when the content of the framing picture changes, and re-determining a new target area.
  • The specific solutions applicable to the third embodiment may be correspondingly applied to the fourth embodiment; for brevity, they are not repeated here.
  • the photographing method includes the following steps:
  • Step 401 Display a framing screen in the framing display screen and the sub-display screen of the mobile terminal in an initial state.
  • Step 402 When a selection operation is detected on the framing screen, determining a target area selected by the selection operation.
  • Step 403: As shown in FIG. 14, display an identification box K on the framing picture to identify the target area.
  • Step 404: As shown in FIG. 15, acquire the content of the first partial picture corresponding to the target area in the framing picture.
  • Step 405 Enlarge the content of the first partial picture.
  • Step 406: As shown in FIGS. 16A and 16B, divide the display area of the viewfinder display screen into the first display sub-region R1 and the second display sub-region R2 arranged side by side, display the framing picture in real time in the first display sub-region R1, and display the enlarged content of the first partial picture in real time in the second display sub-region R2 and on the secondary display screen.
  • the step of enlarging the content of the first partial screen includes:
  • The content of the first partial picture displayed in the second display sub-region R2 of the viewfinder display screen is enlarged according to the size of the second display sub-region R2, and the content of the first partial picture displayed on the secondary display screen is enlarged according to the size of the display area of the secondary display screen.
  • Step 407 Extract features of the partial screen content corresponding to the target area, and determine the target object according to the extracted features.
  • Step 408 Track the target object by using a target tracking technology when detecting that the content of the view screen changes.
  • Step 409 Re-determine a new target area according to the current position of the target object in the framing picture.
  • Step 410 Acquire content of the second partial screen corresponding to the new target area.
  • Step 411 enlarging the content of the second partial picture.
  • Step 412 Update the display screen of the second display sub-region R2 and the sub-display screen in real time according to the content of the enlarged second partial screen.
  • the step of enlarging the content of the second partial screen includes:
  • The content of the second partial picture displayed in the second display sub-region R2 of the viewfinder display screen is enlarged according to the size of the second display sub-region R2, and the content of the second partial picture displayed on the secondary display screen is enlarged according to the size of the display area of the secondary display screen.
  • At the second time T2, the first display sub-region R1 displays a real-time picture captured by the camera within its focus range, and the second display sub-region R2 displays the enlarged content of the second partial picture.
  • The enlarged content of the second partial picture is also displayed on the secondary display screen.
  • the shooting method further includes:
  • In one embodiment, the identification box K may be enlarged according to an input adjustment operation to expand the range of the target area.
  • The identification box K may be reduced according to an input adjustment operation to narrow the range of the target area.
  • Adjusting the size of the identification box K here means adjusting only the box itself; the picture content enclosed by the box is not scaled.
  • The identification box K can also be moved according to an input adjustment operation to reselect the partial area of interest.
  • The position and size of the identification box K may be adjusted together according to an input adjustment operation to reselect the partial area of interest and adjust its range.
  • In one embodiment, after the identification box K is adjusted, the content of a third partial picture corresponding to the adjusted target area is acquired, and the step of enlarging the content of the third partial picture includes:
  • The content of the third partial picture displayed in the second display sub-region R2 of the viewfinder display screen is enlarged according to the size of the second display sub-region R2, and the content of the third partial picture displayed on the secondary display screen is enlarged according to the size of the display area of the secondary display screen.
  • In step 413, a photographing task is performed according to the input photographing operation, and a corresponding image is generated.
  • In step 414, the image is displayed in full screen in the finder display screen and the secondary display screen.
  • the image may not be displayed in the secondary display.
  • the shooting method further includes:
  • a new framing screen is redisplayed in the framing display and the sub-display of the mobile terminal to perform the next shooting task.
  • On the one hand, the photographing method of the present application allows the photographer and/or the subject to select a partial picture of interest from the framing picture during the framing process, enlarges the content of the selected partial picture, and displays the framing picture and the enlarged partial picture side by side in real time, so that the photographer can synchronously view, in real time during the framing process, the overall preview effect of the framing picture and the detailed information of the partial picture of interest.
  • On the other hand, the method automatically adjusts the target area according to real-time changes in the content of the framing picture, so that the enlarged partial picture stays locked on the detailed information of the partial picture of interest to the photographer and/or the subject, allowing the posture, expression, position, and the like to be adjusted.
  • This helps the photographer quickly make a shooting decision to improve shooting efficiency, and on a mobile terminal with dual display screens it also helps the subject see whether his or her posture, expression, and the like are in place and adjust in real time, giving the photographer and/or the subject a better experience.
  • FIG. 20 is a schematic structural diagram of a photographing apparatus 10 according to an embodiment of the present application.
  • the photographing apparatus 10 is applied to a mobile terminal.
  • the mobile terminal may be an electronic device having a photographing function, such as a camera, a smart phone, or a tablet computer.
  • the mobile terminal includes at least a camera and a finder display.
  • the camera is used to collect images
  • the viewfinder display is used to display a framing interface and the like.
  • the photographing apparatus 10 may include one or more modules that are stored in a memory of the mobile terminal and configured to be executed by one or more processors (one processor in this embodiment) to complete the present application.
  • the photographing apparatus 10 may include a display module 111, a selection module 112, an acquisition module 113, a scaling module 114, and a split screen module 115.
  • the module referred to in the embodiments of the present application may be a program segment that performs a specific function and is better suited than a whole program to describe the execution process of software in the processor. It can be understood that, corresponding to each of the above photographing method embodiments, the photographing apparatus 10 may include some or all of the functional modules shown in FIG. 20; the functions of the respective modules are described in detail below.
  • the display module 111 is configured to display a framing screen in the finder display screen of the mobile terminal in an initial state.
  • the scene content in the focus range of the camera of the mobile terminal is displayed in the viewfinder screen.
  • the framing picture can be displayed in real time in the viewfinder display screen. As shown in FIG. 2, the viewfinder display screen displays, at the first time T1, a real-time picture captured by the camera within its focus range, for the photographer to view on the mobile terminal.
  • the viewfinder display screen of the mobile terminal displays the framing picture in a full-screen mode in the initial state. It can be understood that, in other embodiments, the viewfinder display screen of the mobile terminal may not display the framing picture in a full-screen mode in the initial state, and may, for example, display it at a certain display-screen ratio (for example, 75%).
  • the mobile terminal further includes a secondary display.
  • the viewfinder display is disposed on a front side of the mobile terminal, and the secondary display screen is disposed on a back side of the mobile terminal. It should be noted that the front side is the face facing the photographer when the mobile terminal is used, and correspondingly, the back side is the side facing away from the photographer when the mobile terminal is used.
  • the display module 111 is configured to respectively display a framing screen in the finder display screen and the secondary display screen of the mobile terminal in an initial state.
  • the live view screen captured by the camera within its focus range at the first time T1 is displayed in the viewfinder display for the photographer to view on the mobile terminal.
  • displayed in the sub-display screen is a real-time picture captured by the camera within its focus range at the first time T1 for the photographer to view on the mobile terminal.
  • the viewfinder display screen and the secondary display screen of the mobile terminal display the framing picture in a full-screen mode in the initial state. It can be understood that, in other embodiments, the viewfinder display screen and/or the secondary display screen may not display the framing picture in a full-screen mode in the initial state, and may, for example, display it at a certain display-screen ratio (for example, 75%).
  • the selecting module 112 is configured to determine a target area selected by the selecting operation when a selecting operation is detected on the framing screen.
  • the selection operation may be a touch operation input by a touch object (such as a finger or a stylus) on the viewfinder display screen of the mobile terminal, or an operation input by a peripheral device (such as a mouse) on the viewfinder display screen.
  • the selection operation may also be a touch operation input by a touch object (such as a finger or a stylus) on the secondary display screen of the mobile terminal, or an operation input by a peripheral device (such as a mouse) on the secondary display screen.
  • the selection operation is the input of an operation point in the framing interface, such as an operation point generated by tapping the framing interface or by clicking in the framing interface with a mouse.
  • the selecting module 112 is specifically configured to: when determining the target area selected by the selecting operation:
  • an area enclosed by a square whose side length is the second preset value is taken as the target area, the four sides of the square being respectively parallel to the edges of the finder display screen; or
  • an area enclosed by a rectangle whose length is the third preset value and whose width is the fourth preset value is taken as the target area, the four sides of the rectangle being respectively parallel to the edges of the finder display screen.
  • the selection operation is a sliding trajectory input in the framing interface, such as a trajectory generated by sliding a touch across the framing interface, or by pressing and dragging the mouse in the framing interface.
  • the selecting module 112 is specifically configured to: when determining the target area selected by the selecting operation:
  • an area enclosed by a square whose diagonal is the line connecting the start point and the end point of the sliding trajectory is taken as the target area, the four sides of the square being respectively parallel to the edges of the finder display screen; or
  • an area enclosed by a rectangle whose diagonal is the line connecting the start point and the end point of the sliding trajectory is taken as the target area, the four sides of the rectangle being respectively parallel to the edges of the finder display screen.
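The two kinds of selection operation reduce to simple geometry: a tap yields a preset-size square around the operation point, and a sliding trajectory yields the axis-aligned rectangle whose diagonal joins the start and end points. The sketch below assumes pixel coordinates with the origin at the top-left; centering the tap square on the operation point is an illustrative assumption, since the patent text only fixes the square's side length:

```python
def area_from_tap(point, side, screen):
    """Target area for a tap: a side x side square around the operation
    point, clamped so it stays within the finder display screen."""
    px, py = point
    sw, sh = screen
    half = side // 2
    x = max(0, min(px - half, sw - side))
    y = max(0, min(py - half, sh - side))
    return (x, y, side, side)

def area_from_slide(start, end):
    """Target area for a sliding trajectory: the axis-aligned rectangle
    whose diagonal is the line from the start point to the end point."""
    x0, y0 = start
    x1, y1 = end
    return (min(x0, x1), min(y0, y1), abs(x1 - x0), abs(y1 - y0))
```

Both helpers return (x, y, w, h) rectangles with sides parallel to the screen edges, matching the constraint stated for the square and rectangle variants above.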
  • the display module 111 is further configured to display an identification frame K (shown in FIG. 3 or 14) on the framing screen to identify the target area.
  • the identifier box K is presented as a dashed box.
  • the acquiring module 113 is configured to acquire content of the first partial screen corresponding to the target area in the framing picture (as shown in FIG. 4 or 15).
  • the scaling module 114 is configured to enlarge the content of the first partial screen.
  • the display module 111 is further configured to display, in real time on the finder display screen, the framing picture and the content of the enlarged first partial picture in a split screen (as shown in FIG. 5 or 16A).
  • the split screen module 115 is configured to divide the display area of the view display screen into the first display sub-area R1 and the second display sub-area R2 arranged side by side.
  • the display module 111 is specifically configured to display the framing picture in the first display sub-region R1 in real time, and to display the content of the enlarged first partial picture in the second display sub-region R2 in real time.
  • in one embodiment, the first display sub-region R1 and the second display sub-region R2 are arranged one above the other in the display area of the viewfinder display screen. It can be understood that such a split-screen display manner may correspond to the case in which the viewfinder display screen of the mobile terminal is held vertically (portrait).
  • in another embodiment, the first display sub-region R1 and the second display sub-region R2 are arranged side by side (left and right) in the display area of the viewfinder display screen. It can be understood that such a split-screen display manner may correspond to the case in which the viewfinder display screen of the mobile terminal is held horizontally (landscape).
  • the scaling module 114 is further configured to reduce the framing picture according to the size of the first display sub-region R1, and to enlarge the content of the first partial picture according to the size of the second display sub-region R2. The display module 111 is specifically configured to display the reduced framing picture in the first display sub-region R1 in real time.
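The split-screen arrangement can be sketched as dividing the display area into R1 and R2 and then scaling each picture to its sub-region. The half-and-half split and the orientation rule below are illustrative assumptions; the patent does not fix exact proportions:

```python
def split_display(screen_w, screen_h):
    """Divide the finder display into sub-regions R1 and R2, each given
    as (x, y, w, h): left/right halves on a landscape screen, top/bottom
    halves on a portrait screen."""
    if screen_w >= screen_h:  # landscape: split left/right
        half = screen_w // 2
        return (0, 0, half, screen_h), (half, 0, screen_w - half, screen_h)
    half = screen_h // 2      # portrait: split top/bottom
    return (0, 0, screen_w, half), (0, half, screen_w, screen_h - half)

# Example: a 1920x1080 (landscape) display splits into two 960x1080 halves.
r1, r2 = split_display(1920, 1080)
```

The framing picture is then reduced to fit R1 and the partial picture enlarged to fit R2, as the scaling module's behavior above describes.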
  • the display module 111 is further configured to display the framing picture and the content of the enlarged first partial picture in real time on the finder screen (as shown in FIG. 16A). And displaying the content of the enlarged first partial screen in real time on the secondary display screen (as shown in FIG. 16B).
  • the scaling module 114 is specifically configured to enlarge the content of the first partial picture according to the size of the second display sub-region R2, and to enlarge the content of the first partial picture according to the size of the display area of the secondary display screen.
  • the content of the first partial screen displayed in the second display sub-region R2 of the finder display screen is enlarged according to the size of the second display sub-region R2, and displayed in the sub-display screen.
  • the content of the first partial picture is enlarged according to the size of the display area of the sub display screen.
  • On the one hand, the photographing apparatus of the present application allows the photographer and/or the subject to select a partial picture of interest from the framing picture during the framing process, enlarges the content of the selected partial picture, and displays the framing picture and the enlarged partial picture side by side in real time, so that the photographer can synchronously view, in real time during the framing process, the overall preview effect of the framing picture and the detailed information of the partial picture of interest, and can thus quickly make a shooting decision.
  • the photographer can view the detailed information of the partial picture of interest in real time through the enlarged first partial picture, so as to determine whether the posture, expression, and the like of the target object of interest are in place; the photographer can then adjust the shooting parameters (for example, the shooting angle) or the posture of the target object in real time, or direct the target object to adjust itself, so as to obtain a framing picture satisfactory to the photographer before performing the photographing operation.
  • the photographing apparatus 10 further includes a feature analysis module 116, a detection module 117, and a tracking module 118, wherein the feature analysis module 116 is configured to extract features of the partial-picture content corresponding to the target area, and to determine the target object based on the extracted features.
  • the detecting module 117 is configured to detect whether the content of the framing picture changes.
  • the tracking module 118 is configured to track the target object by using a target tracking technology when detecting that the content of the viewfinder changes.
  • the selecting module 112 is further configured to re-determine a new target area according to the current position of the target object in the framing picture.
  • the display module 111 is further configured to display the identification box K on the framing screen to identify the new target area (as shown in FIG. 8).
  • the acquiring module 113 is further configured to acquire content of the second partial screen corresponding to the new target area.
  • the scaling module 114 is further configured to enlarge content of the second partial screen.
  • the display module 111 is further configured to update the display screen of the second display sub-region R2 in real time according to the content of the enlarged second partial screen.
  • the scaling module 114 is specifically configured to enlarge the content of the second partial picture according to the size of the second display sub-region R2.
  • the first display sub-region R1 displays a real-time picture captured by the camera within its focus range at the second time T2, and the second display sub-region R2 displays the content of the enlarged second partial picture.
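The retargeting step that follows tracking can be sketched as recentring the identification frame on the target object's current position while keeping the frame's size, then clamping it to the framing picture. How the new centre is obtained (the target tracking technology itself) is left abstract here; the function name and coordinate convention are illustrative assumptions:

```python
def retarget(prev_area, new_center, screen):
    """Re-determine the target area after the tracked target object
    moves: keep the identification frame's size, recentre it on the
    object's current position, and clamp it to the framing picture."""
    _, _, w, h = prev_area
    cx, cy = new_center
    sw, sh = screen
    x = max(0, min(cx - w // 2, sw - w))
    y = max(0, min(cy - h // 2, sh - h))
    return (x, y, w, h)
```

Cropping and enlarging the new area then yields the second partial picture whose content updates R2 (and, in the dual-screen embodiment, the secondary display) in real time.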
  • the display module 111 is further configured to update the display screen of the second display sub-region R2 and the sub-display screen in real time according to the content of the enlarged second partial screen.
  • the scaling module 114 is specifically configured to enlarge the content of the second partial picture according to the size of the second display sub-region R2, and to enlarge the content of the second partial picture according to the size of the display area of the secondary display screen.
  • the content of the second partial screen displayed in the second display sub-region R2 of the finder display screen is enlarged according to the size of the second display sub-region R2, and displayed in the secondary display screen.
  • the content of the second partial picture is enlarged according to the size of the display area of the secondary display screen.
  • as shown in FIG. 18A, the first display sub-region R1 displays a real-time picture captured by the camera within its focus range at the second time T2, and the second display sub-region R2 displays the content of the enlarged second partial picture.
  • as shown in FIG. 18B, the content of the enlarged second partial picture is displayed in the secondary display screen.
  • the photographing apparatus 10 further includes an adjustment module 119 configured to adjust an attribute of the identification frame K according to an input adjustment operation, so as to adjust the range of the target area; the attribute of the identification frame K includes its position, its size, or a combination of the two.
  • the acquiring module 113 is further configured to acquire content of the third partial screen corresponding to the adjusted target area.
  • the scaling module 114 is further configured to enlarge the content of the third partial screen.
  • the display module 111 is further configured to update the display screen of the second display sub-region R2 in real time according to the content of the enlarged third partial screen.
  • the adjustment module 119 can enlarge the identification frame K shown in FIG. 8 according to the input adjustment operation to expand the range of the target area. Similarly, the adjustment module 119 can also reduce the identification frame K shown in FIG. 8 according to the input adjustment operation to narrow the range of the target area. It should be noted that adjusting the size of the identification frame K as described herein only includes separately adjusting the size of the identification frame K, and does not include scaling processing on the screen content of the area included in the identification frame K.
  • the adjustment module 119 can also move the identification frame K according to the input adjustment operation, so as to re-select the local area of interest, for example, as shown in FIG. 5, the object of interest in the target area is a woman. After moving the identification frame K to the position shown in FIG. 10, a man can be selected as the object of interest.
  • the adjustment module 119 can also adjust the position and size of the identification frame K according to the input adjustment operation, so as to reselect the local area of interest and adjust the range of the local area.
  • the scaling module 114 is specifically configured to enlarge the content of the third partial screen according to the size of the second display sub-region R2.
  • the display module 111 is further configured to update the display screen of the second display sub-region R2 and the sub-display screen in real time according to the content of the enlarged third partial screen.
  • the scaling module 114 is specifically configured to enlarge the content of the third partial picture according to the size of the second display sub-region R2, and to enlarge the content of the third partial picture according to the size of the display area of the secondary display screen.
  • the content of the third partial screen displayed in the second display sub-region R2 of the finder display screen is enlarged according to the size of the second display sub-region R2, and displayed in the sub-display screen.
  • the content of the third partial picture is enlarged according to the size of the display area of the sub display screen.
  • the photographing apparatus 10 further includes a photographing module 120, and the photographing module 120 is configured to perform a photographing task according to the input photographing operation, and generate a corresponding image.
  • the display module 111 is further configured to display the image in full screen in the viewfinder display screen.
  • the display module 111 is further configured to display the image in full screen in the finder display screen and the secondary display screen.
  • the display module 111 is further configured to redisplay a new framing screen in the framing display screen and the secondary display screen of the mobile terminal when exiting the display mode of the image, so as to perform the next shooting task.
  • On the one hand, the photographing apparatus of the present application allows the photographer and/or the subject to select a partial picture of interest from the framing picture during the framing process, enlarges the content of the selected partial picture, and displays the framing picture and the enlarged partial picture side by side in real time, so that the photographer can synchronously view, in real time during the framing process, the overall preview effect of the framing picture and the detailed information of the partial picture of interest.
  • On the other hand, the apparatus automatically adjusts the target area according to real-time changes in the content of the framing picture, so that the enlarged partial picture stays locked on the detailed information of the partial picture of interest to the photographer and/or the subject, allowing the posture, expression, position, and the like to be adjusted.
  • This helps the photographer quickly make shooting decisions to improve shooting efficiency, and on a mobile terminal with dual display screens it also helps the subject see whether his or her posture, expression, and the like are in place and adjust in real time, giving the photographer and/or the subject a better experience.
  • the embodiments of the present application further provide a mobile terminal, including a memory, a processor, and a computer program stored on the memory and operable on the processor; when the processor executes the program, the steps of the photographing method described in the foregoing embodiments are implemented.
  • FIG. 21 is a schematic structural diagram of a mobile terminal 100 according to the first embodiment of the present application.
  • the mobile terminal 100 includes at least a processor 20, a memory 30, a computer program 40 (e.g., a photographing program) stored in the memory 30 and operable on the processor 20, a camera 52, and The finder screen 53 is displayed.
  • the mobile terminal 100 may be an electronic device having a photographing function, such as a camera, a smart phone, or a tablet computer. It can be understood by those skilled in the art that FIG. 21 is only an example of the mobile terminal 100 used to implement the photographing method of the present application and does not constitute a limitation on the mobile terminal 100, which may include more or fewer components than illustrated, a combination of certain components, or different components; for example, the mobile terminal 100 may also include input and output devices, network access devices, wireless transmission devices, and the like.
  • the camera 52 of the mobile terminal 100 can be aligned with the object to be photographed in the shooting scene, and the camera 52 can pick up the content of the shooting scene in real time.
  • the finder screen 53 is used to display the framing interface and the contents of the enlarged first, second, and third partial screens described above. It can be understood that the scene content within the focus range of the camera 52 of the mobile terminal 100 is displayed in the viewfinder screen. After the camera 52 of the mobile terminal 100 is activated, the framing screen can be displayed in the finder screen 53 in real time.
  • the mobile terminal 100' may further include a sub-display screen 54, which may also be used to display the framing interface and the above-mentioned enlarged first, second, and third portions. The content of the screen, etc.
  • the finder screen 53 is disposed on the front side of the mobile terminal 100, and the sub-display screen 54 is disposed on the back side of the mobile terminal 100.
  • the front side is the face facing the photographer when the mobile terminal 100 is used, and, correspondingly, the back side is the face facing away from the photographer when the mobile terminal 100 is used.
  • the sub-display screen 54 is disposed on the mobile terminal 100, and the framing interface and the content of the enlarged first, second, and third partial screens are displayed on the sub-display screen 54.
  • when the subject included in the partial picture of interest is a person and the distance between the subject and the camera 52 of the mobile terminal 100 is within a predetermined range (for example, 1 meter), the subject can check, through the picture displayed on the secondary display screen 54, whether his or her posture, expression, and the like are in place, and adjust in real time, so as to take a picture that satisfies the photographer and/or the subject.
  • the sizes of the finder display screen 53 and the secondary display screen 54 can be designed according to actual needs and are not limited by the present application. It should be noted that the schematic diagrams of the present application, such as FIGS. 13A, 13B, 16A, 16B, 18A, 18B, 19A, and 19B, are only used to schematically represent the pictures of the finder display screen 53 or the secondary display screen 54; the size of the finder display screen 53 may be greater than, equal to, or smaller than the size of the secondary display screen 54.
  • in some embodiments, the viewfinder display screen 53 and/or the secondary display screen 54 of the mobile terminal 100 are touch display screens, and the photographer/subject can directly perform a touch operation on the viewfinder display screen 53 and/or the secondary display screen 54 to select a partial picture of interest from the framing picture described above, or to adjust the position or size of the region of the partial picture of interest.
  • in other embodiments, the viewfinder display screen 53 and/or the secondary display screen 54 are non-touch display screens; in this case, the mobile terminal 100 further includes a peripheral device (such as a mouse), and the photographer/subject can input operations on the viewfinder display screen 53 and/or the secondary display screen 54 through the peripheral device to select a partial picture of interest from the framing picture described above, or to adjust the position or size of the region of the partial picture of interest.
  • when the processor 20 executes the computer program 40, the steps in the foregoing photographing method embodiments are implemented, such as steps 101-106 shown in FIG. 1, steps 201-214 shown in FIG. 7, steps 301-306 shown in FIG. 12, or steps 401-414 shown in FIG. 17.
  • when the processor 20 executes the computer program 40, the functions of the modules/units, for example, modules 111 to 120, in the embodiment of the photographing apparatus 10 described above are implemented.
  • the computer program 40 can be partitioned into one or more modules/units that are stored in the memory 30 and executed by the processor 20 to complete the present application.
  • the one or more modules/units may be a series of computer program 40 instruction segments capable of performing a particular function, the instruction segments being used to describe the execution of the computer program 40 in the mobile terminal 100.
  • for example, the computer program 40 can be divided into the display module 111, the selection module 112, the acquisition module 113, the scaling module 114, the split-screen module 115, the feature analysis module 116, the detection module 117, and the tracking module 118 in FIG. 20.
  • the processor 20 may be a central processing unit (CPU), or may be other general-purpose processors, a digital signal processor (DSP), an application specific integrated circuit (ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, etc.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like; the processor 20 is the control center of the mobile terminal 100 and connects the various parts of the entire photographing apparatus 10/mobile terminal 100 by using various interfaces and lines.
  • the memory 30 can be used to store the computer program 40 and/or the modules/units; the processor 20 implements the various functions of the photographing apparatus 10/mobile terminal 100 by running or executing the computer program 40 and/or the modules/units stored in the memory 30 and by invoking the data stored in the memory 30.
  • the memory 30 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (for example, a sound playing function, an image playing function, and the like), and so on; the storage data area may store data created according to the use of the mobile terminal 100 (for example, audio data, data set and acquired by applying the above-described photographing method, and the like).
  • the memory 30 may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a smart memory card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage device.
  • the present application also provides a computer readable storage medium having stored thereon a computer program that, when executed by a processor, implements the steps of the photographing method described in the above embodiments.
  • the integrated modules/units of the photographing apparatus 10/mobile terminal of the present application, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the present application implements all or part of the processes in the foregoing method embodiments, which may also be completed by a computer program instructing related hardware.
  • the computer program may be stored in a computer readable storage medium.
  • the steps of the various method embodiments described above may be implemented when the program is executed by the processor.
  • the computer program comprises computer program code, which may be in the form of source code, object code form, executable file or some intermediate form.
  • the computer readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer readable media do not include electrical carrier signals and telecommunications signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present application provides a photographing method and apparatus, a mobile terminal, and a computer readable storage medium. The method includes: displaying a framing picture in a finder display screen of the mobile terminal in an initial state; when a selection operation is detected on the framing picture, determining a target area selected by the selection operation; acquiring the content of a first partial picture corresponding to the target area in the framing picture; enlarging the content of the first partial picture; and displaying, in a split screen on the finder display screen in real time, the framing picture and the content of the enlarged first partial picture. The photographing method of the present application allows the photographer to select a partial picture of interest from the framing picture for enlargement during the framing process, and displays the framing picture and the enlarged partial picture side by side in real time, so that the photographer can synchronously view, in real time during the framing process, the overall preview effect of the framing picture and the detailed information of the partial picture of interest, quickly make a shooting decision, and have a better user experience.

Description

Photographing Method and Apparatus, Mobile Terminal, and Computer Readable Storage Medium — Technical Field
The present application relates to the field of image processing technologies, and in particular, to a photographing method and apparatus, a mobile terminal, and a computer readable storage medium.
Background
When taking a photo with a mobile terminal, it is difficult to clearly view a detail of interest in the framing picture displayed on the display screen. The photographer can only decide whether to shoot based on the overall framing picture, and cannot confirm whether a particular detail of interest is perfect.
Summary
In view of this, the present application provides a photographing method and apparatus, a mobile terminal, and a computer readable storage medium, which can display, synchronously and in real time during the framing process, the overall preview effect of the framing picture together with the detailed information of a partial picture of interest to the photographer, thereby helping the photographer quickly make a shooting decision to improve shooting efficiency and giving the photographer a better user experience.
本申请一方面提供一种拍摄方法,应用于移动终端中,所述移动终端包括取景显示屏。所述拍摄方法包括:
在初始状态下在所述移动终端的取景显示屏中显示取景画面;
当在所述取景画面上侦测到选择操作时,确定所述选择操作选定的目标区域;
获取所述取景画面中与所述目标区域对应的第一局部画面的内容;
放大所述第一局部画面的内容;
在所述取景显示屏上实时地分屏显示所述取景画面和放大后的所述第一局部画面的内容。
本申请另一方面提供一种拍摄装置,应用于移动终端中,所述移动终端包括取景显示屏。所述拍摄装置包括:
显示模块,用于在初始状态下在所述移动终端的取景显示屏中显示取景画面;
选定模块,用于当在所述取景画面上侦测到选择操作时,确定所述选择操作选定的目标区域;
获取模块,用于获取所述取景画面中与所述目标区域对应的第一局部画面的内容;
缩放模块,用于放大所述第一局部画面的内容;
所述显示模块还用于在所述取景显示屏上实时地分屏显示所述取景画面和放大后的所述第一局部画面的内容。
本申请再一方面提供一种移动终端,所述移动终端包括处理器,所述处理器用于执行存储器中的存储的计算机程序时实现上述任一实施例所述的拍摄方法的步骤。
本申请又一方面提供一种计算机可读存储介质,其上存储有计算机指令,所述计算机指令被处理器执行时实现上述任一实施例所述的拍摄方法的步骤。
本申请的拍摄方法及装置、移动终端允许拍摄者在取景过程中从取景画面中选择感兴趣的局部画面,并对选择的局部画面的内容进行放大,同时还实时并列地显示所述取景画面以及放大后的所述局部画面,使得拍摄者在取景过程中能够实时同步地查看取景画面的整体预览效果以及感兴趣的局部画面的细节信息,从而能够帮助拍摄者快速做出拍摄决策以提高拍摄效率,并使拍摄者具有较佳的使用体验。
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present application, and persons of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a photographing method according to a first embodiment of the present application.
FIG. 2 is a schematic view of the framing display screen of the mobile terminal of the present application at a first moment T1.
FIG. 3 is a schematic view of the screen of FIG. 2 after a selection operation is input on the framing display screen.
FIG. 4 is a schematic view of the content of the first partial picture selected by the selection operation in FIG. 3.
FIG. 5 is a schematic view of the framing display screen of FIG. 3 displaying, in split-screen mode, the framing picture and the enlarged content of the first partial picture.
FIG. 6 is another schematic view of the framing display screen of FIG. 3 displaying, in split-screen mode, the framing picture and the enlarged content of the first partial picture.
FIG. 7 is a flowchart of a photographing method according to a second embodiment of the present application.
FIG. 8 is a schematic view of the framing display screen of FIG. 5 at a second moment T2.
FIG. 9 is a schematic view of enlarging the range of the target region shown in FIG. 8.
FIG. 10 is a schematic view of changing the range of the target region shown in FIG. 5.
FIG. 11 is a schematic view of the framing display screen of FIG. 8 displaying an image after a photographing operation is completed.
FIG. 12 is a flowchart of a photographing method according to a third embodiment of the present application.
FIG. 13A is a schematic view of the framing display screen of the mobile terminal of the present application at a first moment T1.
FIG. 13B is a schematic view of the secondary display screen of the mobile terminal of the present application at the first moment T1.
FIG. 14 is a schematic view after a selection operation is input on the framing display screen of FIG. 13A or the secondary display screen of FIG. 13B.
FIG. 15 is a schematic view of the content of the first partial picture selected by the selection operation in FIG. 14.
FIG. 16A is a schematic view of the framing display screen of FIG. 13A displaying, in split-screen mode, the framing picture and the enlarged content of the first partial picture.
FIG. 16B is a schematic view of the secondary display screen of FIG. 13B displaying the enlarged content of the first partial picture.
FIG. 17 is a flowchart of a photographing method according to a fourth embodiment of the present application.
FIG. 18A is a schematic view of the framing display screen of FIG. 13A at a second moment T2.
FIG. 18B is a schematic view of the secondary display screen of FIG. 13B at the second moment T2.
FIG. 19A is a schematic view of the framing display screen of FIG. 13A displaying an image after a photographing operation is completed.
FIG. 19B is a schematic view of the secondary display screen of FIG. 13B displaying an image after a photographing operation is completed.
FIG. 20 is a functional module diagram of a photographing apparatus according to an embodiment of the present application.
FIG. 21 is a schematic diagram of functional modules of a mobile terminal according to a first embodiment of the present application.
FIG. 22 is a schematic diagram of functional modules of a mobile terminal according to a second embodiment of the present application.
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
请参阅图1,为本申请第一实施方式的拍摄方法的流程图,所述拍摄方法应用于移动终端中。所述移动终端可以是相机、智能手机、平板电脑等具有拍摄功能的电子设备。在本实施方式中,所述移动终端至少包括摄像头以及取景显示屏。其中,所述摄像头用于采集图像,所述取景显示屏用于显示取景界面等。
应说明的是,本申请实施方式的所述拍摄方法并不限于图1所示的流程图中的步骤及顺序。根据不同的需求,所示流程图中的步骤可以增加、移除、或者改变顺序。
如图1所示,所述拍摄方法包括如下步骤:
步骤101,在初始状态下在所述移动终端的取景显示屏中显示取景画面。
可以理解,所述取景画面中显示的是所述移动终端的摄像头的聚焦范围内的场景内容。当启动所述移动终端的摄像头之后,所述取景显示屏中即可实时地显示所述取景画面,如图2所示,所述取景显示屏中显示的是第一时刻T1时所述摄像头在其聚焦范围内捕获的实时画面,以供拍摄者在所述移动终端上查看。
在本实施方式中,所述移动终端的取景显示屏在初始状态下以全屏方式显示所述取景画面。可以理解,在其他实施方式中,所述移动终端的取景显示屏在初始状态下也可以不以全屏方式显示所述取景画面,例如以一定的显示屏比例(例如75%)来显示所述取景画面。
步骤102,当在所述取景画面上侦测到选择操作时,确定所述选择操作选定的目标区域。
可以理解,所述选择操作可以为触摸物体(例如手指、手写笔等)在所述移动终端的取景显示屏上输入的触摸操作,或者为外设(例如鼠标等)在所述取景显示屏上输入的操作。
在一种实施方式中,所述选择操作为在所述取景界面中输入一个操作点,例如点击触摸所述取景界面或者利用鼠标单击所述取景界面而产生的操作点。
在所述一种实施方式中,确定所述选择操作选定的目标区域包括:
将以所述操作点为中心,以第一预设值为半径的圆所包围的区域作为所述目标区域;或者
将以所述操作点为中心,以第二预设值为边长的正方形所包围的区域作为所述目标区域,其中,所述正方形的四条边分别与所述取景显示屏的各边缘平行;或者
将以所述操作点为中心,以第三预设值为长,以第四预设值为宽的长方形所包围的区域作为所述目标区域,其中,所述长方形的四条边分别与所述取景显示屏的各边缘平行。
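The three point-based constructions above can be sketched as follows. This is a minimal illustration only; the function name, the (left, top, right, bottom) tuple convention, and the clamping to the screen bounds are assumptions for illustration, not part of the claimed method:

```python
def region_from_point(x, y, screen_w, screen_h, half_w, half_h):
    """Axis-aligned target region (left, top, right, bottom) centered on
    the operation point (x, y), clamped to the framing display screen.

    - square of preset side s:        half_w = half_h = s / 2
    - rectangle of preset l x w:      half_w = l / 2, half_h = w / 2
    - circle of preset radius r:      bounded by half_w = half_h = r
    """
    left = max(0, x - half_w)
    top = max(0, y - half_h)
    right = min(screen_w, x + half_w)
    bottom = min(screen_h, y + half_h)
    return (left, top, right, bottom)
```

For example, a tap at (100, 100) with a preset half-side of 50 yields the region (50, 50, 150, 150); near a screen corner the region is simply clipped to the display.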
在另一实施方式中,所述选择操作为在所述取景界面中输入的滑动轨迹,例如滑动触摸所述取景界面,或者利用鼠标在所述取景界面中单击并滑动而产生的滑动轨迹。
在所述另一种实施方式中,确定所述选择操作选定的目标区域包括:
确定所述滑动轨迹的起始点和终点;
将以所述滑动轨迹的起始点和终点的连线为直径的圆所包围的区域作为所述目标区域;或者
将以所述滑动轨迹的起始点和终点的连线为对角线的正方形所包围的区域作为所述目标区域,其中,所述正方形的四条边分别与所述取景显示屏的各边缘平行;或者
将以所述滑动轨迹的起始点和终点的连线为对角线的长方形所包围的区域作为所述目标区域,其中,所述长方形的四条边分别与所述取景显示屏的各边缘平行。
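The swipe-based variants can be sketched the same way (names and clamping are again assumptions). The start and end points of the slide trajectory define the diagonal of a square or rectangle, or the diameter of a circle, and in all three cases the enclosing axis-aligned box is:

```python
def region_from_swipe(start, end, screen_w, screen_h):
    """Target region whose diagonal (or diameter) is the segment from the
    slide trajectory's start point to its end point, with the four sides
    parallel to the edges of the framing display screen."""
    (x0, y0), (x1, y1) = start, end
    left, right = sorted((x0, x1))
    top, bottom = sorted((y0, y1))
    return (max(0, left), max(0, top),
            min(screen_w, right), min(screen_h, bottom))
```

Sorting the coordinates makes the result independent of the swipe direction, so a drag from bottom-right to top-left selects the same region as the reverse drag.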
步骤103,如图3所示,在所述取景画面上显示标识框K以标识所述目标区域。
在本实施方式中,所述标识框K呈现为虚线框。
步骤104,如图4所示,获取所述取景画面中与所述目标区域对应的第一局部画面的内容。
步骤105,放大所述第一局部画面的内容。
步骤106,如图5所示,在所述取景显示屏上实时地分屏显示所述取景画面和放大后的所述第一局部画面的内容。
在本实施方式中,在所述取景显示屏上实时地分屏显示所述取景画面和放大后的所述第一局部画面的内容的步骤包括:
将所述取景显示屏的显示区域划分为并列排布的第一显示子区域R1和第二显示子区域R2;
将所述取景画面实时地显示于所述第一显示子区域R1,并将放大后的所述第一局部画面的内容实时地显示于所述第二显示子区域R2。
其中,在一种实施方式中,如图5所示,所述第一显示子区域R1和所述第二显示子区域R2纵向并列地排布于所述取景显示屏的显示区域中。可以理解,此种分屏显示方式可对应于所述移动终端的取景显示屏处于竖向摆放的情形。
可选地,在另一实施方式中,如图6所示,所述第一显示子区域R1和所述第二显示子区域R2横向并列地排布于所述取景显示屏的显示区域中。可以理解,此种分屏显示方式可对应于所述移动终端的取景显示屏处于横向摆放的情形。
在本实施方式中,将所述取景画面实时地显示于所述第一显示子区域R1的步骤包括:
按照所述第一显示子区域R1的尺寸缩小所述取景画面;
将缩小后的所述取景画面实时地显示于所述第一显示子区域R1。
在本实施方式中,放大所述第一局部画面的内容的步骤包括:
按照所述第二显示子区域R2的尺寸放大所述第一局部画面的内容。
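The two scaling steps above (shrinking the framing picture to the size of the first display sub-region R1, and enlarging the first partial picture to the size of the second display sub-region R2) both amount to fitting a source frame into a destination sub-region. A minimal sketch under assumed sizes; the aspect-preserving fit policy and all numbers are illustrative, not taken from the patent:

```python
def fit_scale(src_w, src_h, dst_w, dst_h):
    """Uniform scale factor that fits a source frame inside a display
    sub-region while preserving the source's aspect ratio."""
    return min(dst_w / src_w, dst_h / src_h)

# Vertically stacked split screen on an assumed 1080x1920 framing display:
# R1 (top half) shows the shrunken framing picture, R2 (bottom half)
# shows the selected partial picture enlarged to R2's size.
screen_w, screen_h = 1080, 1920
r1 = (screen_w, screen_h // 2)   # first display sub-region
r2 = (screen_w, screen_h // 2)   # second display sub-region

frame = (1080, 1920)             # full framing picture
partial = (200, 150)             # cropped target-region content

shrink = fit_scale(*frame, *r1)      # < 1: framing picture reduced for R1
enlarge = fit_scale(*partial, *r2)   # > 1: partial content enlarged for R2
```

With these assumed sizes the framing picture is halved for R1 while the 200x150 partial picture is magnified 5.4x for R2, which is exactly the side-by-side overview-plus-detail layout of FIG. 5.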
本申请的拍摄方法允许拍摄者在取景过程中从取景画面中选择感兴趣的局部画面,并对选择的局部画面的内容进行放大,同时还实时并列地显示所述取景画面以及放大后的所述局部画面,使得拍摄者在取景过程中能够实时同步地查看取景画面的整体预览效果以及感兴趣的局部画面的细节信息,从而能够帮助拍摄者快速做出拍摄决策以提高拍摄效率,并使拍摄者具有较佳的使用体验。
可以理解,拍摄者通过放大后的所述第一局部画面可以实时地查看感兴趣的局部画面的细节信息,以便判断关注的目标对象的姿态、表情等是否到位,从而能够实时地调整拍摄参数(例如拍摄角度等)或所述目标对象的姿态等,或者指挥所述目标对象进行自我调整,以获得令所述拍摄者满意的取景画面再执行拍照操作。
请参阅图7,为本申请第二实施方式的拍摄方法的流程图。所述的第二实施方式与第一实施方式的主要区别在于,第二实施方式中还包括在侦测到取景画面的内容发生变化时,利用目标跟踪技术跟踪目标对象,并重新确定新的目标区域等步骤。需要说明的是,在本申请的精神或基本特征的范围内,适用于第一实施方式中的各具体方案也可以相应的适用于第二实施方式中,为节省篇幅及避免重复起见,在此就不再赘述。
如图7所示,所述拍摄方法包括如下步骤:
步骤201,在初始状态下在所述移动终端的取景显示屏中显示取景画面。
步骤202,当在所述取景画面上侦测到选择操作时,确定所述选择操作选定的目标区域。
步骤203,如图3所示,在所述取景画面上显示标识框K以标识所述目标区域。
步骤204,如图4所示,获取所述取景画面中与所述目标区域对应的第一局部画面的内容。
步骤205,放大所述第一局部画面的内容。
步骤206,如图5所示,将所述取景显示屏的显示区域划分为并列排布的第一显示子区域R1和第二显示子区域R2,将所述取景画面实时地显示于所述第一显示子区域R1,并将放大后的所述第一局部画面的内容实时地显示于所述第二显示子区域R2。
步骤207,提取所述目标区域对应的局部画面内容的特征,并根据提取的所述特征确定目标对象。
步骤208,在侦测到所述取景画面的内容发生变化时,利用目标跟踪技术跟踪所述目标对象。
步骤209,根据所述目标对象在所述取景画面中的当前位置重新确定新的目标区域。
可以理解,所述拍摄方法还包括:在所述取景画面上显示所述标识框K以标识所述新的目标区域(如图8所示)。
步骤210,获取所述新的目标区域对应的第二局部画面的内容。
步骤211,放大所述第二局部画面的内容。
步骤212,根据放大后的所述第二局部画面的内容实时地更新所述第二显示子区域R2的显示画面。
在本实施方式中,放大所述第二局部画面的内容的步骤包括:
按照所述第二显示子区域R2的尺寸放大所述第二局部画面的内容。
如图8所示,所述第一显示子区域R1中显示的是第二时刻T2时所述摄像头在其聚焦范围内捕获的实时画面,所述第二显示子区域R2中显示的是放大后的所述第二局部画面的内容。
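Steps 207 through 212 (re-deriving the target region from the tracked object's current position while keeping the region size) can be sketched as a geometric update. The tracker itself, for example a correlation-filter tracker reporting the object's new center, is abstracted away here, and the function name and clamping are assumptions:

```python
def recenter_region(region, new_center, screen_w, screen_h):
    """Re-derive the target region after the tracked object moved:
    keep the region's width and height, move its center to the tracker's
    new estimate of the object position, and clamp it to the screen."""
    left, top, right, bottom = region
    w, h = right - left, bottom - top
    cx, cy = new_center
    left = min(max(0, cx - w // 2), screen_w - w)
    top = min(max(0, cy - h // 2), screen_h - h)
    return (left, top, left + w, top + h)
```

The second display sub-region R2 is then refreshed each frame from the crop defined by the re-centered region, so the enlarged detail view stays locked on the object as the framing picture changes.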
在本实施方式中,所述拍摄方法还包括:
根据输入的调整操作调整所述标识框K的属性,以调整所述目标区域的范围,其中,所述标识框的属性包括所述标识框的位置、大小、或者两者的组合;
获取调整后的目标区域对应的第三局部画面的内容;
放大所述第三局部画面的内容;
根据放大后的所述第三局部画面的内容实时地更新所述第二显示子区域R2的显示画面。
例如图9所示,可根据输入的调整操作拉大图8所示的标识框K,以扩大所述目标区域的范围。同理,也可根据输入的调整操作缩小图8所示标识框K,以缩小所述目标区域的范围。应说明的是,此处所述的调整所述标识框K的大小仅包括单独调整所述标识框K的大小,而不包括对所述标识框K所包含的区域的画面内容做缩放处理。
此外,还可根据输入的调整操作移动所述标识框K,以便重新选定感兴趣的局部区域,例如图5所示,所述目标区域中的关注对象为一女人,当将所述标识框K移动至图10所示的位置之后可将一男人选定为关注对象。
可以理解,还可根据输入的调整操作调整所述标识框K的位置和大小,以便重新选定感兴趣的局部区域以及调整所述局部区域的范围。
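The adjustment of the identification frame K described above (moving it, enlarging or shrinking it about its center, or both) can be sketched as a pure update of the frame's attributes. As the text stresses, only the frame changes here; the picture content inside it is not rescaled by this operation. Names and the tuple convention are assumptions:

```python
def adjust_frame(frame, dx=0, dy=0, scale=1.0):
    """Adjust the identification frame's attributes: translate it by
    (dx, dy) and/or grow or shrink it about its center by `scale`.
    Only the frame geometry changes; the image content it encloses is
    left untouched."""
    left, top, right, bottom = frame
    cx, cy = (left + right) / 2 + dx, (top + bottom) / 2 + dy
    half_w = (right - left) / 2 * scale
    half_h = (bottom - top) / 2 * scale
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```

A scale of 2.0 doubles the frame about its center (expanding the target region as in FIG. 9), while a pure (dx, dy) translation moves the frame onto a different object of interest (as in FIG. 10).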
在本实施方式中,放大所述第三局部画面的内容的步骤包括:
按照所述第二显示子区域R2的尺寸放大所述第三局部画面的内容。
步骤213,根据输入的拍摄操作执行拍照任务,并生成相应的图像。
步骤214,如图11所示,在所述取景显示屏中全屏显示所述图像。
可选地,在本实施方式中,所述拍摄方法还包括:
在退出所述图像的显示模式时,在所述移动终端的取景显示屏中重新显示新的取景画面,以便执行下一次的拍摄任务。
本申请的拍摄方法一方面允许拍摄者在取景过程中从取景画面中选择感兴趣的局部画面,并对选择的局部画面的内容进行放大,同时还实时并列地显示所述取景画面以及放大后的所述局部画面,使得拍摄者在取景过程中能够实时同步地查看取景画面的整体预览效果以及感兴趣的局部画面的细节信息;另一方面还可根据取景画面内容的实时变化来自动调整目标区域,以将放大显示的局部画面的内容锁定为拍摄者感兴趣的局部画面的细节信息,以便对被拍摄对象进行姿态、表情、位置等进行调整,从而能够帮助拍摄者快速做出拍摄决策以提高拍摄效率,并使拍摄者具有较佳的使用体验。
请参阅图12,为本申请第三实施方式的拍摄方法的流程图,所述拍摄方法应用于移动终端中。在第三实施方式中,所述移动终端至少包括摄像头、取景显示屏以及副显示屏。其中,所述摄像头用于采集图像,所述取景显示屏和副显示屏用于显示取景界面等。在第三实施方式中,所述取景显示屏设置于所述移动终端的正面,所述副显示屏设置于所述移动终端的背面。应说明的是,所述正面为所述移动终端使用时面向拍摄者的这一面,相对应的,所述背面为所述移动终端使用时背向拍摄者的这一面。可以理解,所述取景显示屏以及所述副显示屏的尺寸可根据实际需要进行设计,本申请不对所述取景显示屏以及所述副显示屏的尺寸进行限定。应说明的是,本申请的示意图,例如下文中提及的图13A、13B、16A、16B、18A、18B、19A、19B中仅用于对所述取景显示屏或所述副显示屏的画面进行示意性说明,不代表所述取景显示屏的尺寸大于、等于或小于所述副显示屏的尺寸。
所述的第三实施方式与第一实施方式的主要区别在于,第三实施方式中还包括在副显示屏中显示放大后的局部画面的内容等步骤。需要说明的是,在本申请的精神或基本特征的范围内,适用于第一实施方式中的各具体方案也可以相应的适用于第三实施方式中,为节省篇幅及避免重复起见,在此就不再赘述。
如图12所示,所述拍摄方法包括如下步骤:
步骤301,在初始状态下在所述移动终端的取景显示屏和副显示屏中分别显示取景画面。
如图13A所示,所述取景显示屏中显示的是第一时刻T1时所述摄像头在其聚焦范围内捕获的实时画面,以供拍摄者在所述移动终端上查看。如图13B所示,所述副显示屏中显示的是第一时刻T1时所述摄像头在其聚焦范围内捕获的实时画面,以供被拍摄者在所述移动终端上查看。
在本实施方式中,所述移动终端的取景显示屏和副显示屏在初始状态下均以全屏方式显示所述取景画面。可以理解,在其他实施方式中,所述取景显示屏和/或副显示屏在初始状态下也可以不以全屏方式显示所述取景画面,例如以一定的显示屏比例(例如75%)来显示所述取景画面。应说明的是,本申请所提及的“和/或”包括以“和”作为构成条件的情况,也包括以“或”作为构成条件的情况,举例来说,“A和/或B”包括A、B、A+B这三种并列的情况。
步骤302,当在所述取景画面上侦测到选择操作时,确定所述选择操作选定的目标区域。
可以理解,所述选择操作可以为触摸物体(例如手指、手写笔等)在所述移动终端的取景显示屏和/或副显示屏上输入的触摸操作,或者为外设(例如鼠标等)在所述取景显示屏和/或副显示屏上输入的操作。
步骤303,如图14所示,在所述取景画面上显示标识框K以标识所述目标区域。
步骤304,如图15所示,获取所述取景画面中与所述目标区域对应的第一局部画面的内容。
步骤305,放大所述第一局部画面的内容。
步骤306,如图16A所示,在所述取景显示屏上实时地分屏显示所述取景画面和放大后的所述第一局部画面的内容,以及,如图16B所示,在所述副显示屏上实时地显示放大后的所述第一局部画面的内容。
在本实施方式中,在所述取景显示屏上实时地分屏显示所述取景画面和放大后的所述第一局部画面的内容的步骤包括:
将所述取景显示屏的显示区域划分为并列排布的第一显示子区域R1和第二显示子区域R2;
将所述取景画面实时地显示于所述第一显示子区域R1,并将放大后的所述第一局部画面的内容实时地显示于所述第二显示子区域R2。
在本实施方式中,放大所述第一局部画面的内容的步骤包括:
按照所述第二显示子区域R2的尺寸放大所述第一局部画面的内容,以及,按照所述副显示屏的显示区域的尺寸放大所述第一局部画面的内容。
其中,在所述取景显示屏的第二显示子区域R2中显示的所述第一局部画面的内容是按照所述第二显示子区域R2的尺寸来放大的,在所述副显示屏中显示的所述第一局部画面的内容是按照所述副显示屏的显示区域的尺寸来放大的。
本申请的拍摄方法一方面允许拍摄者在取景过程中从取景显示屏显示的取景画面中选择感兴趣的局部画面,并对选择的局部画面的内容进行放大,同时还实时并列地显示所述取景画面以及放大后的所述局部画面,使得拍摄者在取景过程中能够实时同步地查看取景画面的整体预览效果以及感兴趣的局部画面的细节信息,以便对被拍摄对象进行姿态、表情、位置等的调整,从而能够帮助拍摄者快速做出拍摄决策以提高拍摄效率,并使拍摄者具有较佳的使用体验;另一方面还能够在具有双显示屏的移动终端上帮助被拍摄对象查看自己的姿态、表情等是否到位,并实时地进行调整,以便拍摄出令拍摄者/被拍摄对象满意的照片。
请参阅图17,为本申请第四实施方式的拍摄方法的流程图。所述的第四实施方式与第三实施方式的主要区别在于,第四实施方式中还包括在侦测到取景画面的内容发生变化时,利用目标跟踪技术跟踪目标对象,并重新确定新的目标区域等步骤。需要说明的是,在本申请的精神或基本特征的范围内,适用于第三实施方式中的各具体方案也可以相应的适用于第四实施方式中,为节省篇幅及避免重复起见,在此就不再赘述。
如图17所示,所述拍摄方法包括如下步骤:
步骤401,在初始状态下在所述移动终端的取景显示屏和副显示屏中分别显示取景画面。
步骤402,当在所述取景画面上侦测到选择操作时,确定所述选择操作选定的目标区域。
步骤403,如图14所示,在所述取景画面上显示标识框K以标识所述目标区域。
步骤404,如图15所示,获取所述取景画面中与所述目标区域对应的第一局部画面的内容。
步骤405,放大所述第一局部画面的内容。
步骤406,如图16A、16B所示,将所述取景显示屏的显示区域划分为并列排布的第一显示子区域R1和第二显示子区域R2,将所述取景画面实时地显示于所述第一显示子区域R1,并将放大后的所述第一局部画面的内容实时地显示于所述第二显示子区域R2和所述副显示屏。
在本实施方式中,放大所述第一局部画面的内容的步骤包括:
按照所述第二显示子区域R2的尺寸放大所述第一局部画面的内容,以及,按照所述副显示屏的显示区域的尺寸放大所述第一局部画面的内容。
其中,在所述取景显示屏的第二显示子区域R2中显示的所述第一局部画面的内容是按照所述第二显示子区域R2的尺寸来放大的,在所述副显示屏中显示的所述第一局部画面的内容是按照所述副显示屏的显示区域的尺寸来放大的。
步骤407,提取所述目标区域对应的局部画面内容的特征,并根据提取的所述特征确定目标对象。
步骤408,在侦测到所述取景画面的内容发生变化时,利用目标跟踪技术跟踪所述目标对象。
步骤409,根据所述目标对象在所述取景画面中的当前位置重新确定新的目标区域。
步骤410,获取所述新的目标区域对应的第二局部画面的内容。
步骤411,放大所述第二局部画面的内容。
步骤412,根据放大后的所述第二局部画面的内容实时地更新所述第二显示子区域R2和所述副显示屏的显示画面。
在本实施方式中,放大所述第二局部画面的内容的步骤包括:
按照所述第二显示子区域R2的尺寸放大所述第二局部画面的内容,以及,按照所述副显示屏的显示区域的尺寸放大所述第二局部画面的内容。
其中,在所述取景显示屏的第二显示子区域R2中显示的所述第二局部画面的内容是按照所述第二显示子区域R2的尺寸来放大的,在所述副显示屏中显示的所述第二局部画面的内容是按照所述副显示屏的显示区域的尺寸来放大的。
如图18A所示,所述第一显示子区域R1中显示的是第二时刻T2时所述摄像头在其聚焦范围内捕获的实时画面,所述第二显示子区域R2中显示的是放大后的所述第二局部画面的内容。如图18B所示,所述副显示屏中显示的是放大后的所述第二局部画面的内容。
在本实施方式中,所述拍摄方法还包括:
根据输入的调整操作调整所述标识框K的属性,以调整所述目标区域的范围,其中,所述标识框的属性包括所述标识框的位置、大小、或者两者的组合;
获取调整后的目标区域对应的第三局部画面的内容;
放大所述第三局部画面的内容;
根据放大后的所述第三局部画面的内容实时地更新所述第二显示子区域R2和所述副显示屏的显示画面。
例如,可根据输入的调整操作拉大所述标识框K,以扩大所述目标区域的范围。又如,可根据输入的调整操作缩小所述标识框K,以缩小所述目标区域的范围。应说明的是,此处所述的调整所述标识框K的大小仅包括单独调整所述标识框K的大小,而不包括对所述标识框K所包含的区域的画面内容做缩放处理。此外,还可根据输入的调整操作移动所述标识框K,以便重新选定感兴趣的局部区域。或者,还可根据输入的调整操作调整所述标识框K的位置和大小,以便重新选定感兴趣的局部区域以及调整所述局部区域的范围。
在本实施方式中,放大所述第三局部画面的内容的步骤包括:
按照所述第二显示子区域R2的尺寸放大所述第三局部画面的内容,以及,按照所述副显示屏的显示区域的尺寸放大所述第三局部画面的内容。
其中,在所述取景显示屏的第二显示子区域R2中显示的所述第三局部画面的内容是按照所述第二显示子区域R2的尺寸来放大的,在所述副显示屏中显示的所述第三局部画面的内容是按照所述副显示屏的显示区域的尺寸来放大的。
步骤413,根据输入的拍摄操作执行拍照任务,并生成相应的图像。
步骤414,如图19A、19B所示,在所述取景显示屏和所述副显示屏中全屏显示所述图像。
可选地,在其他实施方式中,所述副显示屏中也可以不显示所述图像。
可选地,在本实施方式中,所述拍摄方法还包括:
在退出所述图像的显示模式时,在所述移动终端的取景显示屏及副显示屏中重新显示新的取景画面,以便执行下一次的拍摄任务。
本申请的拍摄方法一方面允许拍摄者和/或被拍摄对象在取景过程中从取景画面中选择感兴趣的局部画面,并对选择的局部画面的内容进行放大,同时还实时并列地显示所述取景画面以及放大后的所述局部画面,使得拍摄者在取景过程中能够实时同步地查看取景画面的整体预览效果以及感兴趣的局部画面的细节信息;另一方面还可根据取景画面内容的实时变化来自动调整目标区域,以将放大显示的局部画面的内容锁定为拍摄者和/或被拍摄对象感兴趣的局部画面的细节信息,以便对被拍摄对象进行姿态、表情、位置等的调整,从而能够帮助拍摄者快速做出拍摄决策以提高拍摄效率,也能够在具有双显示屏的移动终端上帮助被拍摄对象查看自己的姿态、表情等是否到位,并实时地进行自我调整,使拍摄者和/或被拍摄对象均具有较佳的使用体验。
请参阅图20,为本申请一实施方式的拍摄装置10的结构示意图,所述拍摄装置10应用于移动终端中。所述移动终端可以是相机、智能手机、平板电脑等具有拍摄功能的电子设备。在本实施方式中,所述移动终端至少包括摄像头以及取景显示屏。其中,所述摄像头用于采集图像,所述取景显示屏用于显示取景界面等。
所述拍摄装置10可以包括一个或多个模块,所述一个或多个模块被储存在所述移动终端的存储器中并被配置成由一个或多个处理器(本实施方式为一个处理器)执行,以完成本申请。例如,参阅图20所示,所述拍摄装置10可以包括显示模块111、选定模块112、获取模块113、缩放模块114以及分屏模块115。本申请实施例所称的模块可以是完成一特定功能的程序段,比程序更适合于描述软件在处理器中的执行过程。可以理解的是,对应于上述拍摄方法中的各实施方式,所述拍摄装置10可以包括图20中所示的各功能模块中的一部分或全部,各模块的功能将在下面具体介绍。
在本实施方式中,所述显示模块111用于在初始状态下在所述移动终端的取景显示屏中显示取景画面。
可以理解,所述取景画面中显示的是所述移动终端的摄像头的聚焦范围内的场景内容。当启动所述移动终端的摄像头之后,所述取景显示屏中即可实时地显示所述取景画面,如图2所示,所述取景显示屏中显示的是第一时刻T1时所述摄像头在其聚焦范围内捕获的实时画面,以供拍摄者在所述移动终端上查看。
在本实施方式中,所述移动终端的取景显示屏在初始状态下以全屏方式显示所述取景画面。可以理解,在其他实施方式中,所述移动终端的取景显示屏在初始状态下也可以不以全屏方式显示所述取景画面,例如以一定的显示屏比例(例如75%)来显示所述取景画面。
在另一实施方式中,所述移动终端还包括副显示屏。其中,所述取景显示屏设置于所述移动终端的正面,所述副显示屏设置于所述移动终端的背面。应说明的是,所述正面为所述移动终端使用时面向拍摄者的这一面,相对应的,所述背面为所述移动终端使用时背向拍摄者的这一面。
在所述另一实施方式中,所述显示模块111用于在初始状态下在所述移动终端的取景显示屏和副显示屏中分别显示取景画面。
如图13A所示,所述取景显示屏中显示的是第一时刻T1时所述摄像头在其聚焦范围内捕获的实时画面,以供拍摄者在所述移动终端上查看。如图13B所示,所述副显示屏中显示的是第一时刻T1时所述摄像头在其聚焦范围内捕获的实时画面,以供被拍摄者在所述移动终端上查看。
在所述另一实施方式中,所述移动终端的取景显示屏和副显示屏在初始状态下均以全屏方式显示所述取景画面。可以理解,在其他实施方式中,所述取景显示屏和/或副显示屏在初始状态下也可以不以全屏方式显示所述取景画面,例如以一定的显示屏比例(例如75%)来显示所述取景画面。
所述选定模块112用于当在所述取景画面上侦测到选择操作时,确定所述选择操作选定的目标区域。
可以理解,所述选择操作可以为触摸物体(例如手指、手写笔等)在所述移动终端的取景显示屏上输入的触摸操作,或者为外设(例如鼠标等)在所述取景显示屏上输入的操作。
在其他实施方式中,所述选择操作还可以为触摸物体(例如手指、手写笔等)在所述移动终端的副显示屏上输入的触摸操作,或者为外设(例如鼠标等)在所述副显示屏上输入的操作。
在一种实施方式中,所述选择操作为在所述取景界面中输入一个操作点,例如点击触摸所述取景界面或者利用鼠标单击所述取景界面而产生的操作点。
在所述一种实施方式中,所述选定模块112在确定所述选择操作选定的目标区域时,具体用于:
将以所述操作点为中心,以第一预设值为半径的圆所包围的区域作为所述目标区域;或者
将以所述操作点为中心,以第二预设值为边长的正方形所包围的区域作为所述目标区域,其中,所述正方形的四条边分别与所述取景显示屏的各边缘平行;或者
将以所述操作点为中心,以第三预设值为长,以第四预设值为宽的长方形所包围的区域作为所述目标区域,其中,所述长方形的四条边分别与所述取景显示屏的各边缘平行。
在另一实施方式中,所述选择操作为在所述取景界面中输入的滑动轨迹,例如滑动触摸所述取景界面,或者利用鼠标在所述取景界面中单击并滑动而产生的滑动轨迹。
在所述另一种实施方式中,所述选定模块112在确定所述选择操作选定的目标区域时,具体用于:
确定所述滑动轨迹的起始点和终点;
将以所述滑动轨迹的起始点和终点的连线为直径的圆所包围的区域作为所述目标区域;或者
将以所述滑动轨迹的起始点和终点的连线为对角线的正方形所包围的区域作为所述目标区域,其中,所述正方形的四条边分别与所述取景显示屏的各边缘平行;或者
将以所述滑动轨迹的起始点和终点的连线为对角线的长方形所包围的区域作为所述目标区域,其中,所述长方形的四条边分别与所述取景显示屏的各边缘平行。
在本实施方式中,所述显示模块111还用于在所述取景画面上显示标识框K(如图3或14所示)以标识所述目标区域。在本实施方式中,所述标识框K呈现为虚线框。
所述获取模块113用于获取所述取景画面中与所述目标区域对应的第一局部画面的内容(如图4或15所示)。所述缩放模块114用于放大所述第一局部画面的内容。所述显示模块111还用于在所述取景显示屏上实时地分屏显示所述取景画面和放大后的所述第一局部画面的内容(如图5或16A所示)。
在本实施方式中,所述分屏模块115用于将所述取景显示屏的显示区域划分为并列排布的第一显示子区域R1和第二显示子区域R2。所述显示模块111具体用于将所述取景画面实时地显示于所述第一显示子区域R1,并将放大后的所述第一局部画面的内容实时地显示于所述第二显示子区域R2。
其中,在一种实施方式中,如图5所示,所述第一显示子区域R1和所述第二显示子区域R2纵向并列地排布于所述取景显示屏的显示区域中。可以理解,此种分屏显示方式可对应于所述移动终端的取景显示屏处于竖向摆放的情形。
可选地,在另一实施方式中,如图6所示,所述第一显示子区域R1和所述第二显示子区域R2横向并列地排布于所述取景显示屏的显示区域中。可以理解,此种分屏显示方式可对应于所述移动终端的取景显示屏处于横向摆放的情形。
在本实施方式中,所述缩放模块114还用于按照所述第一显示子区域R1的尺寸缩小所述取景画面,以及按照所述第二显示子区域R2的尺寸放大所述第一局部画面的内容。所述显示模块111具体用于将缩小后的所述取景画面实时地显示于所述第一显示子区域R1。
在另一实施方式中,所述显示模块111还用于在所述取景显示屏上实时地分屏显示所述取景画面和放大后的所述第一局部画面的内容(如图16A所示),以及,在所述副显示屏上实时地显示放大后的所述第一局部画面的内容(如图16B所示)。
相应地,在所述另一实施方式中,所述缩放模块114具体用于按照所述第二显示子区域R2的尺寸放大所述第一局部画面的内容,以及,按照所述副显示屏的显示区域的尺寸放大所述第一局部画面的内容。
其中,在所述取景显示屏的第二显示子区域R2中显示的所述第一局部画面的内容是按照所述第二显示子区域R2的尺寸来放大的,在所述副显示屏中显示的所述第一局部画面的内容是按照所述副显示屏的显示区域的尺寸来放大的。
本申请的拍摄装置一方面允许拍摄者和/或被拍摄对象在取景过程中从取景画面中选择感兴趣的局部画面,并对选择的局部画面的内容进行放大,同时还实时并列地显示所述取景画面以及放大后的所述局部画面,使得拍摄者在取景过程中能够实时同步地查看取景画面的整体预览效果以及感兴趣的局部画面的细节信息,从而能够帮助拍摄者快速做出拍摄决策以提高拍摄效率,并使拍摄者具有较佳的使用体验;另一方面还能够在具有双显示屏的移动终端上帮助被拍摄对象查看自己的姿态、表情等是否到位,并实时地进行调整,以便拍摄出令拍摄者/被拍摄对象满意的照片。
可以理解,拍摄者通过放大后的所述第一局部画面可以实时地查看感兴趣的局部画面的细节信息,以便判断关注的目标对象的姿态、表情等是否到位,从而能够实时地调整拍摄参数(例如拍摄角度等)或所述目标对象的姿态等,或者指挥所述目标对象进行自我调整,以获得令所述拍摄者满意的取景画面再执行拍照操作。
请再次参阅图20,在本实施方式中,所述拍摄装置10还包括特征分析模块116、侦测模块117以及追踪模块118,其中,所述特征分析模块116用于提取所述目标区域对应的局部画面内容的特征,并根据提取的所述特征确定目标对象。所述侦测模块117用于侦测所述取景画面的内容是否发生变化。所述追踪模块118用于在侦测到所述取景画面的内容发生变化时,利用目标跟踪技术跟踪所述目标对象。
在本实施方式中,所述选定模块112还用于根据所述目标对象在所述取景画面中的当前位置重新确定新的目标区域。
可以理解,所述显示模块111还用于在所述取景画面上显示所述标识框K以标识所述新的目标区域(如图8所示)。
所述获取模块113还用于获取所述新的目标区域对应的第二局部画面的内容。所述缩放模块114还用于放大所述第二局部画面的内容。所述显示模块111还用于根据放大后的所述第二局部画面的内容实时地更新所述第二显示子区域R2的显示画面。
在本实施方式中,所述缩放模块114具体用于按照所述第二显示子区域R2的尺寸放大所述第二局部画面的内容。
如图8所示,所述第一显示子区域R1中显示的是第二时刻T2时所述摄像头在其聚焦范围内捕获的实时画面,所述第二显示子区域R2中显示的是放大后的所述第二局部画面的内容。
在另一实施方式中,所述显示模块111还用于根据放大后的所述第二局部画面的内容实时地更新所述第二显示子区域R2和所述副显示屏的显示画面。
相应地,在所述另一实施方式中,所述缩放模块114具体用于按照所述第二显示子区域R2的尺寸放大所述第二局部画面的内容,以及,按照所述副显示屏的显示区域的尺寸放大所述第二局部画面的内容。
其中,在所述取景显示屏的第二显示子区域R2中显示的所述第二局部画面的内容是按照所述第二显示子区域R2的尺寸来放大的,在所述副显示屏中显示的所述第二局部画面的内容是按照所述副显示屏的显示区域的尺寸来放大的。
如图18A所示,所述第一显示子区域R1中显示的是第二时刻T2时所述摄像头在其聚焦范围内捕获的实时画面,所述第二显示子区域R2中显示的是放大后的所述第二局部画面的内容。如图18B所示,所述副显示屏中显示的是放大后的所述第二局部画面的内容。
请再次参阅图20,在本实施方式中,所述拍摄装置10还包括调整模块119,所述调整模块119用于根据输入的调整操作调整所述标识框K的属性,以调整所述目标区域的范围,其中,所述标识框的属性包括所述标识框的位置、大小、或者两者的组合。
在本实施方式中,所述获取模块113还用于获取调整后的目标区域对应的第三局部画面的内容。所述缩放模块114还用于放大所述第三局部画面的内容。所述显示模块111还用于根据放大后的所述第三局部画面的内容实时地更新所述第二显示子区域R2的显示画面。
例如图9所示,所述调整模块119可根据输入的调整操作拉大图8所示的标识框K,以扩大所述目标区域的范围。同理,所述调整模块119也可根据输入的调整操作缩小图8所示标识框K,以缩小所述目标区域的范围。应说明的是,此处所述的调整所述标识框K的大小仅包括单独调整所述标识框K的大小,而不包括对所述标识框K所包含的区域的画面内容做缩放处理。
此外,所述调整模块119还可根据输入的调整操作移动所述标识框K,以便重新选定感兴趣的局部区域,例如图5所示,所述目标区域中的关注对象为一女人,当将所述标识框K移动至图10所示的位置之后可将一男人选定为关注对象。
可以理解,所述调整模块119还可根据输入的调整操作调整所述标识框K的位置和大小,以便重新选定感兴趣的局部区域以及调整所述局部区域的范围。
在本实施方式中,所述缩放模块114具体用于按照所述第二显示子区域R2的尺寸放大所述第三局部画面的内容。
在另一实施方式中,所述显示模块111还用于根据放大后的所述第三局部画面的内容实时地更新所述第二显示子区域R2和所述副显示屏的显示画面。
相应地,在所述另一实施方式中,所述缩放模块114具体用于按照所述第二显示子区域R2的尺寸放大所述第三局部画面的内容,以及,按照所述副显示屏的显示区域的尺寸放大所述第三局部画面的内容。
其中,在所述取景显示屏的第二显示子区域R2中显示的所述第三局部画面的内容是按照所述第二显示子区域R2的尺寸来放大的,在所述副显示屏中显示的所述第三局部画面的内容是按照所述副显示屏的显示区域的尺寸来放大的。
请再次参阅图20,在本实施方式中,所述拍摄装置10还包括拍摄模块120,所述拍摄模块120用于根据输入的拍摄操作执行拍照任务,并生成相应的图像。所述显示模块111还用于在所述取景显示屏中全屏显示所述图像。
在另一实施方式中,所述显示模块111还用于在所述取景显示屏和所述副显示屏中全屏显示所述图像。
可选地,所述显示模块111还用于在退出所述图像的显示模式时,在所述移动终端的取景显示屏和副显示屏中重新显示新的取景画面,以便执行下一次的拍摄任务。
本申请的拍摄装置一方面允许拍摄者和/或被拍摄对象在取景过程中从取景画面中选择感兴趣的局部画面,并对选择的局部画面的内容进行放大,同时还实时并列地显示所述取景画面以及放大后的所述局部画面,使得拍摄者在取景过程中能够实时同步地查看取景画面的整体预览效果以及感兴趣的局部画面的细节信息;另一方面还可根据取景画面内容的实时变化来自动调整目标区域,以将放大显示的局部画面的内容锁定为拍摄者和/或被拍摄对象感兴趣的局部画面的细节信息,以便对被拍摄对象进行姿态、表情、位置等的调整,从而能够帮助拍摄者快速做出拍摄决策以提高拍摄效率,也能够在具有双显示屏的移动终端上帮助被拍摄对象查看自己的姿态、表情等是否到位,并实时地进行自我调整,使拍摄者和/或被拍摄对象均具有较佳的使用体验。
本申请实施例还提供一种移动终端,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时实现上述实施方式中所述的拍摄方法的步骤。
图21为本申请第一实施方式的移动终端100的结构示意图。如图21所示,所述移动终端100至少包括处理器20、存储器30、存储在所述存储器30中并可在所述处理器20上运行的计算机程序40(例如拍摄程序)、摄像头52以及取景显示屏53。
其中,所述移动终端100可以是相机、智能手机、平板电脑等具有拍摄功能的电子设备。本领域技术人员可以理解,所述示意图21仅仅是本申请用于实现拍摄方法的移动终端100的示例,并不构成对所述移动终端100的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如所述移动终端100还可以包括输入输出设备、网络接入设备、无线传输设备等。
可以理解,当启动所述移动终端100后,可将所述移动终端100的摄像头52对准拍摄场景中的待拍摄对象,所述摄像头52可实时拾取所述拍摄场景的内容。
在本实施方式中,所述取景显示屏53用于显示取景界面以及上述的放大后的第一、二、三局部画面的内容等。可以理解,所述取景画面中显示的是所述移动终端100的摄像头52的聚焦范围内的场景内容。当启动所述移动终端100的摄像头52之后,所述取景显示屏53中即可实时地显示所述取景画面。
在另一实施方式中,如图22所示,移动终端100’还可包括副显示屏54,所述副显示屏54也可用于显示取景界面以及上述的放大后的第一、二、三局部画面的内容等。
其中,所述取景显示屏53设置于所述移动终端100的正面,所述副显示屏54设置于所述移动终端100的背面。应说明的是,所述正面为所述移动终端100使用时面向拍摄者的这一面,相对应的,所述背面为所述移动终端100使用时背向拍摄者的这一面。可以理解,在所述移动终端100上设置副显示屏54,并在所述副显示屏54上显示取景界面以及上述的放大后的第一、二、三局部画面的内容等的目的在于,当感兴趣的局部画面中包含的被拍摄对象为人物,且所述被拍摄对象距离所述移动终端100的摄像头52的距离在预定范围内,例如1米时,被拍摄对象可通过所述副显示屏54显示的画面查看自己的姿态、表情等是否到位,并实时地进行调整,以便拍摄出令拍摄者/被拍摄对象满意的照片。
可以理解,所述取景显示屏53以及所述副显示屏54的尺寸可根据实际需要进行设计,本申请不对所述取景显示屏53以及所述副显示屏54的尺寸进行限定。应说明的是,本申请的示意图,例如图13A、13B、16A、16B、18A、18B、19A、19B中仅用于对所述取景显示屏53或所述副显示屏54的画面进行示意性说明,不代表所述取景显示屏53的尺寸大于、等于或小于所述副显示屏54的尺寸。
在一种实施方式中,所述移动终端100的取景显示屏53和/或副显示屏54为触控显示屏,拍摄者/被拍摄对象可直接在所述取景显示屏53和/或副显示屏54上进行触摸操作,以从上述的取景画面中选择感兴趣的局部画面,或调整感兴趣的局部画面的区域位置或大小。
在另一种实施方式中,所述取景显示屏53和/或副显示屏54为非触控显示屏,所述移动终端100还包括外设(例如鼠标等),拍摄者/被拍摄对象可通过所述外设在所述取景显示屏53和/或副显示屏54上输入的操作,以从上述的取景画面中选择感兴趣的局部画面,或调整感兴趣的局部画面的区域位置或大小。
所述处理器20执行所述计算机程序40时实现上述各个拍摄方法实施方式中的步骤,例如图1所示的步骤101~106,或者图7所示的步骤201~214,或者图12所示的步骤301~306,或者图17所示的步骤401~414。或者,所述处理器20执行所述计算机程序40时实现上述拍摄装置10实施方式中各模块/单元,例如模块111~120的功能。
示例性的,所述计算机程序40可以被分割成一个或多个模块/单元,所述一个或多个模块/单元被存储在所述存储器30中,并由所述处理器20执行,以完成本申请。所述一个或多个模块/单元可以是能够完成特定功能的一系列计算机程序40指令段,所述指令段用于描述所述计算机程序40在所述移动终端100中的执行过程。例如,所述计算机程序40可以被分割成图20中的显示模块111、选定模块112、获取模块113、缩放模块114、分屏模块115、特征分析模块116、侦测模块117、追踪模块118、调整模块119以及拍摄模块120,各模块111~120的具体功能请参见前面的具体介绍,为节省篇幅及避免重复起见,在此就不在赘述。
所称处理器20可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等,所述处理器20是所述移动终端100的控制中心,利用各种接口和线路连接整个拍摄装置10/移动终端100的各个部分。
所述存储器30可用于存储所述计算机程序40和/或模块/单元,所述处理器20通过运行或执行存储在所述存储器30内的计算机程序40和/或模块/单元,以及调用存储在存储器30内的数据,实现所述拍摄装置10/移动终端100的各种功能。所述存储器30可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作***、至少一个功能所需的应用程序(例如声音播放功能、图像播放功能等)等;存储数据区可存储根据移动终端100的使用所创建的数据(例如音频数据,应用上述拍摄方法而设置、获取的数据等)等。此外,存储器30可以包括高速随机存取存储器,还可以包括非易失性存储器,例如硬盘、内存、插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)、至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
本申请还提供一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现上述实施方式中所述的拍摄方法的步骤。
本申请的所述拍摄装置10/移动终端100/计算机装置集成的模块/单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实现上述实施例方法中的全部或部分流程,也可以通过计算机程序来指令相关的硬件来完成,所述的计算机程序可存储于一计算机可读存储介质中,该计算机程序在被处理器执行时,可实现上述各个方法实施例的步骤。其中,所述计算机程序包括计算机程序代码,所述计算机程序代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。所述计算机可读介质可以包括:能够携带所述计算机程序代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、电载波信号、电信信号以及软件分发介质等。需要说明的是,所述计算机可读介质包含的内容可以根据司法管辖区内立法和专利实践的要求进行适当的增减,例如在某些司法管辖区,根据立法和专利实践,计算机可读介质不包括电载波信号和电信信号。
以上实施方式仅用以说明本申请的技术方案而非限制,尽管参照以上较佳实施方式对本申请进行了详细说明,本领域的普通技术人员应当理解,可以对本申请的技术方案进行修改或等同替换都不应脱离本申请技术方案的精神和范围。

Claims (20)

  1. 一种拍摄方法,应用于移动终端中,所述移动终端包括取景显示屏,其特征在于,所述拍摄方法包括:
    在初始状态下在所述移动终端的取景显示屏中显示取景画面;
    当在所述取景画面上侦测到选择操作时,确定所述选择操作选定的目标区域;
    获取所述取景画面中与所述目标区域对应的第一局部画面的内容;
    放大所述第一局部画面的内容;
    在所述取景显示屏上实时地分屏显示所述取景画面和放大后的所述第一局部画面的内容。
  2. 如权利要求1所述的拍摄方法,其特征在于,在所述取景显示屏上实时地分屏显示所述取景画面和放大后的所述第一局部画面的内容的步骤包括:
    将所述取景显示屏的显示区域划分为并列排布的第一显示子区域和第二显示子区域;
    将所述取景画面实时地显示于所述第一显示子区域,并将放大后的所述第一局部画面的内容实时地显示于所述第二显示子区域。
  3. 如权利要求2所述的拍摄方法,其特征在于,所述第一显示子区域和所述第二显示子区域横向或者纵向并列地排布于所述取景显示屏的显示区域中。
  4. 如权利要求2所述的拍摄方法,其特征在于,将所述取景画面实时地显示于所述第一显示子区域的步骤包括:
    按照所述第一显示子区域的尺寸缩小所述取景画面;
    将缩小后的所述取景画面实时地显示于所述第一显示子区域。
  5. 如权利要求2所述的拍摄方法,其特征在于,放大所述第一局部画面的内容的步骤包括:
    按照所述第二显示子区域的尺寸放大所述第一局部画面的内容。
  6. 如权利要求2所述的拍摄方法,其特征在于,所述拍摄方法还包括:
    提取所述目标区域对应的局部画面内容的特征,并根据提取的所述特征确定目标对象;
    在侦测到所述取景画面的内容发生变化时,利用目标跟踪技术跟踪所述目标对象;
    根据所述目标对象在所述取景画面中的当前位置重新确定新的目标区域;
    获取所述新的目标区域对应的第二局部画面的内容;
    放大所述第二局部画面的内容;
    根据放大后的所述第二局部画面的内容实时地更新所述第二显示子区域的显示画面。
  7. 如权利要求6所述的拍摄方法,其特征在于,所述拍摄方法还包括:
    在所述取景画面上显示标识框以标识所述目标区域。
  8. 如权利要求7所述的拍摄方法,其特征在于,所述拍摄方法还包括:
    根据输入的调整操作调整所述标识框的属性,以调整所述目标区域的范围,其中,所述标识框的属性包括所述标识框的位置、大小或者两者的组合;
    获取调整后的目标区域对应的第三局部画面的内容;
    放大所述第三局部画面的内容;
    根据放大后的所述第三局部画面的内容实时地更新所述第二显示子区域的显示画面。
  9. 如权利要求6所述的拍摄方法,其特征在于,所述移动终端还包括副显示屏,所述拍摄方法还包括:
    在初始状态下在所述副显示屏中显示所述取景画面;以及
    在放大所述第一局部画面的内容之后,在所述副显示屏中实时地显示放大后的所述第一局部画面的内容。
  10. 如权利要求9所述的拍摄方法,其特征在于,在获取所述新的目标区域对应的第二局部画面的内容,并放大所述第二局部画面的内容之后,还包括:
    根据放大后的所述第二局部画面的内容实时地更新所述副显示屏的显示画面。
  11. 如权利要求8所述的拍摄方法,其特征在于,所述移动终端还包括副显示屏,所述副显示屏用于在初始状态下显示所述取景画面;
    在获取调整后的目标区域对应的第三局部画面的内容,并放大所述第三局部画面的内容之后,还包括:
    根据放大后的所述第三局部画面的内容实时地更新所述副显示屏的显示画面。
  12. 如权利要求1、2或6所述的拍摄方法,其特征在于,所述拍摄方法还包括:
    根据输入的拍摄操作执行拍照任务,并生成相应的图像;
    在所述取景显示屏中全屏显示所述图像。
  13. 一种拍摄装置,应用于移动终端中,所述移动终端包括取景显示屏,其特征在于,所述拍摄装置包括:
    显示模块,用于在初始状态下在所述移动终端的取景显示屏中显示取景画面;
    选定模块,用于当在所述取景画面上侦测到选择操作时,确定所述选择操作选定的目标区域;
    获取模块,用于获取所述取景画面中与所述目标区域对应的第一局部画面的内容;
    缩放模块,用于放大所述第一局部画面的内容;
    所述显示模块还用于在所述取景显示屏上实时地分屏显示所述取景画面和放大后的所述第一局部画面的内容。
  14. 如权利要求13所述的拍摄装置,其特征在于,所述拍摄装置还包括分屏模块,所述分屏模块用于将所述取景显示屏的显示区域划分为并列排布的第一显示子区域和第二显示子区域;
    所述显示模块具体用于将所述取景画面实时地显示于所述第一显示子区域,并将放大后的所述第一局部画面的内容实时地显示于所述第二显示子区域。
  15. 如权利要求14所述的拍摄装置,其特征在于,所述缩放模块还用于按照所述第一显示子区域的尺寸缩小所述取景画面,以及按照所述第二显示子区域的尺寸放大所述第一局部画面的内容;
    所述显示模块具体用于将缩小后的所述取景画面实时地显示于所述第一显示子区域。
  16. 如权利要求14所述的拍摄装置,其特征在于,所述拍摄装置还包括:
    特征分析模块,用于提取所述目标区域对应的局部画面内容的特征,并根据提取的所述特征确定目标对象;
    侦测模块,用于侦测所述取景画面的内容是否发生变化;
    追踪模块,用于在侦测到所述取景画面的内容发生变化时,利用目标跟踪技术跟踪所述目标对象;
    所述选定模块还用于根据所述目标对象在所述取景画面中的当前位置重新确定新的目标区域;
    所述获取模块还用于获取所述新的目标区域对应的第二局部画面的内容;
    所述缩放模块还用于放大所述第二局部画面的内容;
    所述显示模块还用于根据放大后的所述第二局部画面的内容实时地更新所述第二显示子区域的显示画面。
  17. 如权利要求16所述的拍摄装置,其特征在于,所述显示模块还用于在所述取景画面上显示标识框以标识所述目标区域;
    所述拍摄装置还包括调整模块,所述调整模块用于根据输入的调整操作调整所述标识框的属性,以调整所述目标区域的范围,其中,所述标识框的属性包括所述标识框的位置、大小或者两者的组合;
    所述获取模块还用于获取调整后的目标区域对应的第三局部画面的内容;
    所述缩放模块还用于放大所述第三局部画面的内容;
    所述显示模块还用于根据放大后的所述第三局部画面的内容实时地更新所述第二显示子区域的显示画面。
  18. 如权利要求17所述的拍摄装置,其特征在于,所述移动终端还包括副显示屏,所述显示模块还用于:
    在初始状态下在所述副显示屏中显示所述取景画面;
    在放大所述第一局部画面的内容之后,在所述副显示屏中实时地显示放大后的所述第一局部画面的内容;或
    在放大所述第二局部画面的内容之后,根据放大后的所述第二局部画面的内容实时地更新所述副显示屏的显示画面;或
    在放大所述第三局部画面的内容之后,根据放大后的所述第三局部画面的内容实时地更新所述副显示屏的显示画面。
  19. 一种移动终端,其特征在于,所述移动终端包括处理器,所述处理器用于执行存储器中的存储的计算机程序时实现如权利要求1-12中任意一项所述的拍摄方法的步骤。
  20. 一种计算机可读存储介质,其上存储有计算机指令,其特征在于,所述计算机指令被处理器执行时实现如权利要求1-12中任意一项所述的拍摄方法的步骤。
PCT/CN2017/110563 2017-11-10 2017-11-10 拍摄方法及装置、移动终端及计算机可读存储介质 WO2019090734A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780095805.0A CN111201773A (zh) 2017-11-10 2017-11-10 拍摄方法及装置、移动终端及计算机可读存储介质
PCT/CN2017/110563 WO2019090734A1 (zh) 2017-11-10 2017-11-10 拍摄方法及装置、移动终端及计算机可读存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/110563 WO2019090734A1 (zh) 2017-11-10 2017-11-10 拍摄方法及装置、移动终端及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2019090734A1 true WO2019090734A1 (zh) 2019-05-16

Family

ID=66437418

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/110563 WO2019090734A1 (zh) 2017-11-10 2017-11-10 拍摄方法及装置、移动终端及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN111201773A (zh)
WO (1) WO2019090734A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866578A (zh) * 2021-02-03 2021-05-28 四川新视创伟超高清科技有限公司 基于8k视频画面全局到局部的双向可视化及目标跟踪***及方法
CN113741022A (zh) * 2021-07-22 2021-12-03 武汉高德智感科技有限公司 画中画显示红外图像的方法、装置及显示设备
CN114265538A (zh) * 2021-12-21 2022-04-01 Oppo广东移动通信有限公司 拍照控制方法及装置、存储介质和电子设备

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114063947A (zh) * 2020-07-31 2022-02-18 华为技术有限公司 屏幕显示的方法、装置、电子设备、计算机存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102033712A (zh) * 2010-12-25 2011-04-27 鸿富锦精密工业(深圳)有限公司 具有分屏显示功能的电子阅读装置及其显示方法
CN102685318A (zh) * 2011-02-15 2012-09-19 Lg电子株式会社 发送和接收数据的方法、显示装置以及移动终端
CN104145241A (zh) * 2011-12-02 2014-11-12 Gt电信公司 触摸屏上的画面操作方法
US20150015741A1 (en) * 2013-07-12 2015-01-15 Samsung Electronics Co., Ltd. Electronic device and method for controlling image display
CN105373359A (zh) * 2014-08-14 2016-03-02 三星电子株式会社 显示设备及其控制方法
CN105491220A (zh) * 2014-10-01 2016-04-13 Lg电子株式会社 移动终端及其控制方法
CN105898134A (zh) * 2015-11-15 2016-08-24 乐视移动智能信息技术(北京)有限公司 图像获取方法及装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101656826B (zh) * 2008-08-21 2011-11-09 鸿富锦精密工业(深圳)有限公司 录影***及其录影方法
CN101605207A (zh) * 2009-04-15 2009-12-16 明基电通有限公司 一种数码相机操作方法以及使用此方法的数码相机
TWI475473B (zh) * 2012-02-17 2015-03-01 Mitac Int Corp 根據觸控手勢產生分割畫面之方法
CN105824492A (zh) * 2015-09-30 2016-08-03 维沃移动通信有限公司 一种显示控制方法及终端


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112866578A (zh) * 2021-02-03 2021-05-28 四川新视创伟超高清科技有限公司 基于8k视频画面全局到局部的双向可视化及目标跟踪***及方法
CN112866578B (zh) * 2021-02-03 2023-04-07 四川新视创伟超高清科技有限公司 基于8k视频画面全局到局部的双向可视化及目标跟踪***及方法
CN113741022A (zh) * 2021-07-22 2021-12-03 武汉高德智感科技有限公司 画中画显示红外图像的方法、装置及显示设备
CN114265538A (zh) * 2021-12-21 2022-04-01 Oppo广东移动通信有限公司 拍照控制方法及装置、存储介质和电子设备

Also Published As

Publication number Publication date
CN111201773A (zh) 2020-05-26

Similar Documents

Publication Publication Date Title
TWI677252B (zh) 車輛損害圖像獲取方法、裝置、伺服器和終端設備
EP3457683B1 (en) Dynamic generation of image of a scene based on removal of undesired object present in the scene
KR102480245B1 (ko) 패닝 샷들의 자동 생성
JP6316968B2 (ja) 対話型画像合成
US9013592B2 (en) Method, apparatus, and computer program product for presenting burst images
WO2019090734A1 (zh) 拍摄方法及装置、移动终端及计算机可读存储介质
TW201839666A (zh) 車輛定損影像獲取方法、裝置、伺服器和終端設備
CN112714255B (zh) 拍摄方法、装置、电子设备及可读存储介质
US10317777B2 (en) Automatic zooming method and apparatus
US10832460B2 (en) Method and apparatus for generating image by using multi-sticker
CN110321768A (zh) 用于生成头部相关传递函数滤波器的布置
US20130076941A1 (en) Systems And Methods For Editing Digital Photos Using Surrounding Context
CN112584043B (zh) 辅助对焦方法、装置、电子设备及存储介质
WO2016101524A1 (zh) 纠正被摄物体拍摄倾斜方法、装置及移动终端、存储介质
WO2018166069A1 (zh) 拍照预览方法、图形用户界面及终端
WO2022001648A1 (zh) 图像处理方法、装置、设备及介质
US11770603B2 (en) Image display method having visual effect of increasing size of target image, mobile terminal, and computer-readable storage medium
US9767587B2 (en) Image extracting apparatus, image extracting method and computer readable recording medium for recording program for extracting images based on reference image and time-related information
US10447935B2 (en) Image generating apparatus, image generating method and computer readable recording medium for recording program for generating new image from images related to reference image
JP2013195524A (ja) 画像表示装置
JP2018189536A (ja) 画像処理装置、実寸法表示方法、及び実寸法表示処理プログラム
WO2020107186A1 (en) Systems and methods for taking telephoto-like images
CN113873160B (zh) 图像处理方法、装置、电子设备和计算机存储介质
CN112804451B (zh) 利用多个摄像头进行拍照的方法和***以及移动装置
WO2019084780A1 (zh) 3d图像拍摄方法及装置、拍摄终端及计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17931724

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17931724

Country of ref document: EP

Kind code of ref document: A1