CN112437172A - Photographing method, terminal and computer readable storage medium - Google Patents

Photographing method, terminal and computer readable storage medium

Info

Publication number
CN112437172A
Authority
CN
China
Prior art keywords
camera
scene
range
target
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011187262.5A
Other languages
Chinese (zh)
Inventor
张沛昌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd
Priority to CN202011187262.5A
Publication of CN112437172A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/02 Constructional features of telephone sets
    • H04M1/0202 Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/026 Details of the structure or mounting of specific components
    • H04M1/0264 Details of the structure or mounting of specific components for a camera module assembly
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/64 Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/958 Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging
    • H04N23/959 Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the invention provides a photographing method, a terminal, and a computer-readable storage medium. The photographing method includes: identifying scene information in a first viewing scene of a first camera, and constructing a target viewing range in the first viewing scene according to a composition rule; and after the target viewing range is determined, displaying a guide identifier in a photographing preview interface of a second camera, guiding a second viewing scene of the second camera to overlap the target viewing range, and completing the photographing. By identifying the scene information of the first viewing scene of the first camera and determining a target viewing range in it according to the composition rule, the user can be guided, while shooting with the second camera, to bring the viewing range of the second camera into coincidence with the target viewing range via the photographing preview interface. This ensures that the photo taken by the user conforms to the composition rule; through the cooperation of multiple cameras, the user's photographing skill is greatly improved, and the user experience is thereby enhanced.

Description

Photographing method, terminal and computer readable storage medium
Technical Field
The present invention relates to the field of image processing of mobile terminals, and more particularly, to a photographing method, a terminal, and a computer-readable storage medium.
Background
In order to achieve a better photographing effect and cover more focal lengths, most existing smartphones are equipped with multiple cameras. Meanwhile, with the development of AI and the field of artificial intelligence, image detection and recognition technology has matured. It can detect the environment in which an image was taken, the objects contained in the image, their positions, and exactly what each object is: for example, whether the current picture shows a cloudy or sunny day, whether it was taken on a beach or a grassland, and whether the picture contains features such as people, trees, pets, and a person's face, torso, and limbs.
As the photographing functions of smartphones have steadily improved, more and more manufacturers have begun to promote photography as a key selling point, and more and more users are willing to use their smartphone for photography. However, after purchase, many users find that the pictures they take never match the manufacturer's advertised sample shots, creating a large gap between expectation and actual experience. Reducing such situations, so that even novice photographers can take well-composed pictures and thereby improving the user experience, is of great significance for today's multi-camera smartphones.
Disclosure of Invention
The technical problem to be solved by the present invention is that in existing smartphones, coordination among the cameras of a multi-camera system is insufficient: the user is given no guidance or assistance when taking a picture, so the user's photographing experience and skill cannot be effectively improved.
In order to solve the above technical problem, the present invention provides a photographing method, including:
identifying scene information in a first viewing scene of a first camera, and constructing a target viewing range in the first viewing scene according to a composition rule;
and after the target viewing range is determined, displaying a guide identifier in a photographing preview interface of the second camera, guiding a second viewing scene of the second camera to be overlapped with the target viewing range, and finishing photographing.
Optionally, the identifying scene information in the first view scene of the first camera includes:
object features in the first viewing scene are identified, as well as position information and depth information for each object.
Optionally, the composition rule includes: a subject composition method and/or a matching composition method;
the subject composition method determines a subject object and then, according to the recognition result of the subject object, places the subject object at a target position using a conventional composition rule to determine a target viewing range;
the matching composition method matches the recognition result of the first viewing scene against a photo database, finds the target photo with the most similar result, and determines a target viewing range according to the target photo.
Optionally, the conventional composition rule includes at least one of: rule-of-thirds composition, symmetrical composition, frame composition, leading-line composition, diagonal composition, and golden-triangle composition.
Optionally, when there are two or more target viewing ranges, the first viewing scene is displayed in a floating manner on the photographing preview interface of the second camera, and the user is prompted to select at least one target viewing range;
and guide identifiers are displayed in the photographing preview interface of the second camera in sequence according to the user's selection.
Optionally, the method further includes a self-selection mode; when the self-selection mode is enabled, the first viewing scene is displayed in a floating manner on the photographing preview interface of the second camera, composition is performed again according to the user's selection, and a target viewing range is determined.
Optionally, the guide identifier includes a guide arrow for indicating a moving direction of the second camera and a range indication box for indicating a limit of a target viewing range.
Optionally, when the viewing range of the first viewing scene is greater than the viewing range of the second viewing scene, the limit of the target viewing range is determined according to the viewing range of the second viewing scene, so that the limit of the target viewing range is the same as the viewing range of the second viewing scene.
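As an illustrative sketch only (the patent discloses no code; function and parameter names are hypothetical), the last optional step above, shrinking the target viewing range so its limits match the second camera's smaller field of view while keeping its center, could look like this:

```python
def clamp_target_to_second_fov(target_box, scene_w, scene_h, fov_w, fov_h):
    """Resize target_box = (left, top, right, bottom) so its size equals the
    second camera's field of view (fov_w x fov_h), keeping the same centre
    and staying inside the scene_w x scene_h first viewing scene."""
    l, t, r, b = target_box
    cx, cy = (l + r) / 2, (t + b) / 2
    # Centre the new box on the old one, then clamp it into the scene.
    left = min(max(cx - fov_w / 2, 0), scene_w - fov_w)
    top = min(max(cy - fov_h / 2, 0), scene_h - fov_h)
    return (left, top, left + fov_w, top + fov_h)
```

For example, a 2000x1500 target range centered at (1000, 750) inside a 4000x3000 scene, clamped to a 1000x750 field of view, yields (500, 375, 1500, 1125).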
Further, the invention also provides a terminal, which comprises a first camera, a second camera, a display screen, a processor, a memory and a communication bus;
the first camera is used for acquiring a first viewing scene;
the second camera is used for acquiring a second viewing scene;
the display screen is used for displaying a photographing preview interface and a guide identifier;
the communication bus is used for realizing the connection and communication among the first camera, the second camera, the display screen, the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of a photographing method according to any one of the above.
Further, the present invention also provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of a photographing method according to any one of the above.
Advantageous effects
The embodiment of the invention provides a photographing method, a terminal, and a computer-readable storage medium. The photographing method includes: identifying scene information in a first viewing scene of a first camera, and constructing a target viewing range in the first viewing scene according to a composition rule; and after the target viewing range is determined, displaying a guide identifier in a photographing preview interface of a second camera, guiding a second viewing scene of the second camera to overlap the target viewing range, and completing the photographing. By identifying the scene information of the first viewing scene of the first camera and determining a target viewing range in it according to the composition rule, the user can be guided, while shooting with the second camera, to bring the viewing range of the second camera into coincidence with the target viewing range via the photographing preview interface. This ensures that the photo taken by the user conforms to the composition rule; through the cooperation of multiple cameras, the user's photographing skill is greatly improved, and the user experience is thereby enhanced.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
fig. 1 is a schematic diagram of a hardware structure of an optional mobile terminal for implementing various embodiments of the present invention.
Fig. 2 is a basic flowchart of a photographing method according to a first embodiment of the present invention;
FIG. 3 is a diagram illustrating a first viewing scene captured by a first camera according to a first embodiment of the present invention;
fig. 4 is a schematic diagram of a second viewing scene acquired by a second camera and a corresponding photo preview interface in the first embodiment of the present invention;
FIG. 5 is a diagram illustrating a photo preview interface including a guide arrow and a range box according to a first embodiment of the present invention;
fig. 6 is a schematic structural diagram of a terminal according to a second embodiment of the present invention.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for facilitating the explanation of the present invention, and have no specific meaning in itself. Thus, "module", "component" or "unit" may be used mixedly.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and a fixed terminal such as a Digital TV, a desktop computer, and the like.
The following description will be given by way of example of a mobile terminal, and it will be understood by those skilled in the art that the construction according to the embodiment of the present invention can be applied to a fixed type terminal, in addition to elements particularly used for mobile purposes.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
the radio frequency unit 101 may be configured to receive and transmit signals during information transmission and reception or during a call, and specifically, receive downlink information of a base station and then process the downlink information to the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000(Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division duplex Long Term Evolution), and TDD-LTE (Time Division duplex Long Term Evolution).
WiFi belongs to short-distance wireless transmission technology, and the mobile terminal can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 102, and provides wireless broadband internet access for the user. Although fig. 1 shows the WiFi module 102, it is understood that it does not belong to the essential constitution of the mobile terminal, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capture device (e.g., a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) in a phone call mode, a recording mode, a voice recognition mode, or the like, and can process such sounds into audio data. In the case of a phone call mode, the processed audio (voice) data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited to these specific examples.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
Based on the hardware structure of the mobile terminal, the invention provides various embodiments of the method.
First embodiment
Fig. 2 is a basic flowchart of the photographing method provided in this embodiment, where the photographing method includes:
s301, scene information in a first framing scene of the first camera is identified.
The first viewing scene is the environmental image information collected when the first camera starts working. The first viewing scene may be displayed on the display for preview, or may be processed directly in the background for recognition. The recognition result includes, but is not limited to, the environment of the first viewing scene, the objects it contains, the positions of the objects, the positional relationships and depth information among the objects, the feature information of the objects, and so on. Referring to fig. 3, which is a schematic view of a first viewing scene, it can be recognized that the current viewing scene includes a distant mountain, the sun above the mountain top, and various trees and bushes in front of the mountain; it can also be recognized, from the position of the sun and the appearance of the clouds, that the current scene is a sunny day.
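Purely as an illustrative sketch (the patent specifies no data structures; all names below are hypothetical), the recognition result described above could be carried in a small container like this:

```python
from dataclasses import dataclass, field

@dataclass
class DetectedObject:
    label: str      # object type, e.g. "mountain", "tree", "person"
    box: tuple      # (left, top, right, bottom) position in scene pixels
    depth_m: float  # estimated distance from the camera, for depth info

@dataclass
class SceneInfo:
    environment: str                     # e.g. "sunny", "beach", "grassland"
    objects: list = field(default_factory=list)  # list of DetectedObject
```

A recognizer for the scene of fig. 3 might then return something like `SceneInfo("sunny", [DetectedObject("mountain", (0, 300, 3000, 1800), 500.0), ...])`, which downstream composition code can consume.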
S302, constructing a target view range in the first view scene according to composition rules.
The composition rule is a set of composition algorithms preset by the system. Through the composition rule, one or more preferable framing boxes can be constructed in the first viewing scene; the viewing range contained in such a framing box is the target viewing range, and a better picture can be obtained simply by photographing the scene within the framing box. Referring to fig. 3, the range of the first viewing scene in fig. 3 is 100, and the target viewing range within the framing box is 101. It should be noted that the framing box of the target viewing range 101 indicated in fig. 3 may or may not be drawn in actual applications; the system background may also determine the target viewing range 101 by recording coordinate information.
And S303, displaying a guide identifier in a photographing preview interface of the second camera.
After the target viewing range in the first viewing scene is determined, a guide identifier is displayed on the photographing preview interface of the second camera, and the user can move the second viewing scene of the second camera into the target viewing range of the first viewing scene by following the guide identifier, thereby completing the photographing. Referring to fig. 4, which is a schematic diagram of a photographing preview interface, the display terminal of the preview interface includes but is not limited to a smartphone. In fig. 4, the photographing preview interface of the second camera is 200; the content displayed by the photographing preview interface 200 is the second viewing scene acquired by the second camera; a guide identifier 201 is also displayed on the photographing preview interface 200, and the direction indicated by the guide identifier 201 is the moving direction of the second camera. It should be noted that the arrow-shaped guide identifier shown in fig. 4 is only one kind of guide identifier; the guide identifier can be adapted to the situation in actual applications.
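One plausible way to choose the guide arrow's direction, sketched here for illustration only (the patent does not specify the algorithm; names are hypothetical), is to compare the centers of the current second viewing scene and the target viewing range:

```python
def guide_direction(current_box, target_box, tol=10):
    """Return the arrow direction(s) to display on the preview interface,
    given two (left, top, right, bottom) boxes in a shared coordinate frame.
    tol is a dead zone, in pixels, below which no arrow is shown."""
    cl, ct, cr, cb = current_box
    tl, tt, tr, tb = target_box
    dx = (tl + tr) / 2 - (cl + cr) / 2  # horizontal offset to target centre
    dy = (tt + tb) / 2 - (ct + cb) / 2  # vertical offset to target centre
    arrows = []
    if dx > tol:
        arrows.append("right")
    elif dx < -tol:
        arrows.append("left")
    if dy > tol:
        arrows.append("down")
    elif dy < -tol:
        arrows.append("up")
    return arrows or ["hold"]
```

For example, with the target range lying entirely to the right of the current framing, the sketch yields `["right"]`; once the two boxes coincide it yields `["hold"]`, at which point the shutter reminder of step S304 could fire.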
S304, guiding a second view finding scene of the second camera to be overlapped with the target view finding range, and finishing photographing.
When the user moves the second camera according to the guide identifier, the second viewing scene of the second camera changes accordingly. When the second viewing scene moves into the target viewing range of the first viewing scene, the user can be reminded to take the picture, and the user completes the photographing according to the reminder.
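The "reminder" condition above, that the second viewing scene has reached the target viewing range, could be tested with a simple intersection-over-union threshold. This is a sketch under stated assumptions (the patent gives no concrete criterion; the 0.95 threshold and all names are illustrative):

```python
def overlap_ratio(a, b):
    """Intersection-over-union of two (left, top, right, bottom) boxes."""
    il, it = max(a[0], b[0]), max(a[1], b[1])
    ir, ib = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ir - il) * max(0, ib - it)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def ready_to_shoot(second_scene, target_range, threshold=0.95):
    """True when the second viewing scene essentially coincides with the
    target viewing range, i.e. the moment to remind the user to shoot."""
    return overlap_ratio(second_scene, target_range) >= threshold
```

Identical boxes give an overlap of 1.0; a half-overlapping pair gives about 0.33, well below the shooting threshold.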
In still other embodiments, the identifying scene information in the first framed scene of the first camera includes: object features in the first viewing scene are identified, as well as position information and depth information for each object.
Object characteristics typically include the type of object, such as trees, cats, dogs, and people; parts of objects such as the face, torso, and legs; the position angle of the object, such as the angle at which a person stands, the inclination angle of trees, etc., and the size of the object and the picture scale of the entire viewing scene. By identifying the characteristics, the position information and the depth information of the object, composition can be better performed, so that a composition picture is more reasonable and attractive.
In other embodiments, the composition rules include: a subject patterning method and/or a matching patterning method; the main body mapping method is that after a main body object is determined, the main body object is placed at a target position according to the recognition result of the main body object and a conventional mapping rule is adopted to determine a target mapping range; and the matching and mapping method is to match the recognition result of the first viewing scene with a photo database, match a target photo with the most similar result, and determine a target viewing range according to the target photo.
The subject composition method first determines the subject of the picture, and the viewing range of the whole picture is adjusted around that subject using a conventional composition rule. For example, when a person is recognized in the picture and only the person's upper body is in the frame, the person can be taken as the subject and, with the person's neck as the center, placed on the symmetry axis of the picture; the target viewing range is then determined by adjusting the viewing range of the picture so that the person is located at the corresponding position in the frame. The matching composition method identifies all the information in the picture and feeds the recognition result to a photo database, which stores a large number of excellent photos together with the content information of each photo. The target photo with the most similar result is matched from the photo database, and the target viewing range is determined according to the target photo. Composing with reference to an excellent photo ensures that the composition achieves a similar effect, and retouching suggestions can even be offered to the user based on the parameters of the excellent photo.
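The matching step could be sketched as a nearest-neighbour search over content labels. This is only one plausible reading (the patent does not define the similarity measure; Jaccard similarity and all names here are illustrative stand-ins):

```python
def match_reference_photo(scene_labels, photo_db):
    """Return the database photo whose content labels are most similar to
    the current scene's recognition result, using Jaccard similarity.
    photo_db is a list of dicts, each with a "labels" entry."""
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0
    # Pick the entry maximising label overlap with the recognised scene.
    return max(photo_db, key=lambda p: jaccard(scene_labels, p["labels"]))
```

Given a scene recognized as `["mountain", "sun", "tree"]` and a database containing a mountain photo and a cat photo, the sketch selects the mountain photo, whose framing would then serve as the composition reference.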
In other embodiments, the conventional composition rules include at least one of: rule-of-thirds composition, symmetrical composition, frame composition, leading-line composition, diagonal composition, and golden-triangle composition. These are the composition rules most commonly used in everyday photography. As photographic practice improves, the content covered by the conventional rules keeps growing, so the set listed above is subject to change.
In other embodiments, when there are two or more target viewing ranges, the first viewing scene is displayed as a floating overlay on the photographing preview interface of the second camera and the user is prompted to select at least one target viewing range; guide marks are then displayed in the photographing preview interface of the second camera in sequence according to the user's selection.
When composition is performed in the first viewing scene, several target viewing ranges satisfying the composition rules may sometimes be obtained. In that case the first viewing scene can be displayed as a floating overlay on the photographing preview interface, and the user can select the framing that matches their own idea, choosing either one photo or several. When the user selects several, the guide marks lead the user through them one at a time, either in the order of selection or by the shortest movement distance: after each shot, the guide mark for the next target viewing range is displayed, guiding the user to complete the photographs in sequence.
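The "shortest movement distance" ordering mentioned above can be sketched as a greedy nearest-next tour over the centers of the selected target ranges. This is only a heuristic illustration, not the patent's specified algorithm:

```python
import math


def guide_order(current_center, targets):
    """Order target-range centers greedily: always guide the user to the
    closest remaining target next, keeping camera movement short."""
    remaining = list(targets)
    order, pos = [], current_center
    while remaining:
        nxt = min(remaining, key=lambda t: math.dist(pos, t))
        order.append(nxt)
        remaining.remove(nxt)
        pos = nxt  # the camera now points at the target just shot
    return order
```

For example, starting from the current second-camera center, the guide mark for the nearest target range is shown first, then the next nearest from there, and so on.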
In other embodiments, the photographing method provided by the present invention further includes a self-selection mode. When the self-selection mode is started, the first viewing scene is displayed as a floating overlay on the photographing preview interface of the second camera, composition is performed again according to the user's selection, and a target viewing range is determined.
When the user starts the self-selection mode, the first viewing scene is displayed as a floating overlay on the photographing preview interface, and the user specifies the subject object, the photographing range, and so on according to their own needs. After the user's selection is obtained, composition is performed again according to that selection and the target viewing range is determined.
In other embodiments, the guide mark comprises a guide arrow indicating the moving direction of the second camera and a range indication box indicating the boundary of the target viewing range.
Referring to fig. 5, fig. 5 is a schematic diagram of a photographing preview interface containing a guide mark provided in this embodiment. In fig. 5 the guide mark includes a guide arrow 202 and a range indication box 203, where the range indication box 203 is only partially visible. The guide arrow 202 indicates the moving direction of the second camera and generally points toward the diagonal corner of the range indication box 203. The range indication box 203 indicates the boundary of the target viewing range; once the range indication box 203 has moved entirely into the photographing preview interface, the interface is adjusted according to the display range of the box so that the preview shows only the viewing range inside the range indication box 203.
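The guide arrow's direction can be derived from the geometry alone. A sketch, under the assumption (not stated in the patent) that the arrow simply points from the preview center toward the center of the possibly off-screen range indication box:

```python
def guide_arrow(preview_center, box_center):
    """Unit direction vector for the guide arrow: from the center of the
    photographing preview interface toward the center of the range box."""
    dx = box_center[0] - preview_center[0]
    dy = box_center[1] - preview_center[1]
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0  # avoid division by zero
    return (dx / norm, dy / norm)
```

The arrow would then be redrawn each frame as the user moves the camera, vanishing once the box is fully inside the preview.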
In some embodiments, when the viewing range of the first viewing scene is larger than that of the second viewing scene, the boundary of the target viewing range is determined according to the viewing range of the second viewing scene, so that the boundary of the target viewing range matches the viewing range of the second viewing scene.
When no constraint is applied, the boundary of the target viewing range determined by the composition rule may have any aspect ratio. In practice, however, a user often needs a photo of a specific ratio, so the boundary of the target viewing range must be constructed according to the viewing range of the second viewing scene. For example, if the user sets the viewing ratio of the second camera's second viewing scene to 1:1, the constructed target viewing range should likewise be 1:1.
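The 1:1 example can be sketched as an aspect-fitting step applied to the freely composed range. The helper below is hypothetical; centered shrinking is only one possible policy for honoring the second camera's ratio:

```python
def fit_aspect(crop, aspect_w, aspect_h):
    """Shrink a free-ratio target range (x, y, w, h) to the second camera's
    viewing ratio (e.g. 1:1), keeping the crop centered on the same point."""
    x, y, w, h = crop
    target = aspect_w / aspect_h
    if w / h > target:                 # too wide: narrow the width
        new_w, new_h = h * target, h
    else:                              # too tall: shorten the height
        new_w, new_h = w, w / target
    return (x + (w - new_w) / 2, y + (h - new_h) / 2, new_w, new_h)
```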
It should be noted that, in general, the shooting effect of the second camera is better than that of the first camera, or at least better over the target viewing range. For example, if the first camera is an ultra-wide-angle camera and the second camera is a wide-angle camera, then when the wide-angle camera shoots a partial area of the ultra-wide-angle camera's viewing scene, the image quality of the resulting picture is clearly better than that of the corresponding area in the picture shot by the ultra-wide-angle camera. Of course, in practice no such limitation need be assumed.
Advantageous effects of this embodiment
The embodiment of the invention provides a photographing method comprising: identifying scene information in a first viewing scene of a first camera and constructing a target viewing range in the first viewing scene according to a composition rule; and, after the target viewing range is determined, displaying a guide mark in a photographing preview interface of the second camera, guiding a second viewing scene of the second camera to overlap the target viewing range, and completing the photograph. By identifying the scene information of the first viewing scene of the first camera and determining the target viewing range in it according to the composition rule, the user can be guided in the photographing preview interface, when shooting with the second camera, to overlap the viewing range of the second camera with the target viewing range. This ensures that the photo the user takes conforms to the composition rule; through the cooperation of multiple cameras, the user's photographic skill is greatly enhanced and the user experience is improved.
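The "overlap" condition in the summary above needs a concrete test in any implementation. One common sketch, not specified by the patent, is intersection-over-union between the second viewing scene and the target range, triggering capture when the value approaches 1:

```python
def overlap_ratio(a, b):
    """Intersection-over-union of two view rectangles (x, y, w, h); a value
    near 1.0 means the second viewing scene has overlapped the target range."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))   # intersection width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))   # intersection height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0
```

A guidance loop could, for instance, hide the guide arrow and enable the shutter once `overlap_ratio(...) > 0.98` (threshold hypothetical).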
Second embodiment
The present embodiment further provides a terminal, as shown in fig. 6, the terminal includes: a first camera 51, a second camera 52, a display 53, a processor 54, a memory 55 and a communication bus 56;
the first camera 51 is used for acquiring a first viewing scene;
the second camera 52 is used for acquiring a second viewing scene;
the display screen 53 is used for displaying a photographing preview interface and a guide identifier;
the communication bus 56 is used for implementing connection and communication among the first camera 51, the second camera 52, the display 53, the processor 54 and the memory 55;
the processor 54 is configured to execute one or more programs stored in the memory 55 to implement the steps of a photographing method in an embodiment of the present invention.
Embodiments of the present invention also provide a computer-readable storage medium in which one or more programs are stored, the one or more programs being executable by one or more processors to perform the steps of a photographing method in the embodiments of the present invention.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and including instructions for enabling a terminal (such as a mobile phone, computer, server, air conditioner, or network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A photographing method, comprising:
identifying scene information in a first viewing scene of a first camera, and constructing a target viewing range in the first viewing scene according to a composition rule;
and after the target viewing range is determined, displaying a guide identifier in a photographing preview interface of the second camera, guiding a second viewing scene of the second camera to be overlapped with the target viewing range, and finishing photographing.
2. The photographing method of claim 1, wherein the identifying scene information in the first framed scene of the first camera comprises:
object features in the first viewing scene are identified, as well as position information and depth information for each object.
3. A photographing method according to claim 1, wherein the composition rule includes: a subject composition method and/or a matching composition method;
the subject composition method is that, after a subject object is determined, the subject object is placed at a target position according to the recognition result of the subject object and a conventional composition rule is adopted to determine the target viewing range;
and the matching composition method is to match the recognition result of the first viewing scene against a photo database, retrieve the target photo with the most similar content, and determine the target viewing range according to the target photo.
4. A photographing method according to claim 3, wherein the conventional composition rule comprises: at least one of rule-of-thirds composition, symmetrical composition, frame composition, leading-line composition, diagonal composition, and golden-triangle composition.
5. The photographing method according to claim 1, wherein when there are two or more target viewing ranges, the first viewing scene is displayed in a floating manner on the photographing preview interface of the second camera, and the user is prompted to select at least one target viewing range;
and displaying guide marks in the photographing preview interface of the second camera in sequence according to the selection result of the user.
6. The photographing method according to any one of claims 1 to 5, further comprising a self-selection mode, wherein when the self-selection mode is started, the first viewing scene is displayed on the photographing preview interface of the second camera in a floating manner, composition is performed again according to a selection result of a user, and a target viewing range is determined.
7. The photographing method according to claim 6, wherein the guide mark includes a guide arrow for indicating the moving direction of the second camera and a range indication box for indicating the boundary of the target viewing range.
8. The photographing method according to claim 7, wherein when the viewing range of the first viewing scene is larger than that of the second viewing scene, the boundary of the target viewing range is determined according to the viewing range of the second viewing scene, so that the boundary of the target viewing range matches the viewing range of the second viewing scene.
9. A terminal is characterized by comprising a first camera, a second camera, a display screen, a processor, a memory and a communication bus;
the first camera is used for acquiring a first viewing scene;
the second camera is used for acquiring a second viewing scene;
the display screen is used for displaying a photographing preview interface and a guide identifier;
the communication bus is used for realizing the connection and communication among the first camera, the second camera, the display screen, the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the photographing method according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs which are executable by one or more processors to implement the steps of the photographing method according to any one of claims 1 to 8.
CN202011187262.5A 2020-10-30 2020-10-30 Photographing method, terminal and computer readable storage medium Pending CN112437172A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011187262.5A CN112437172A (en) 2020-10-30 2020-10-30 Photographing method, terminal and computer readable storage medium


Publications (1)

Publication Number Publication Date
CN112437172A true CN112437172A (en) 2021-03-02

Family

ID=74694752

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011187262.5A Pending CN112437172A (en) 2020-10-30 2020-10-30 Photographing method, terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112437172A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107333068A (en) * 2017-08-24 2017-11-07 上海与德科技有限公司 A kind of photographic method, device, equipment and medium
CN107566529A (en) * 2017-10-18 2018-01-09 维沃移动通信有限公司 A kind of photographic method, mobile terminal and cloud server
US20190174056A1 (en) * 2017-12-01 2019-06-06 Samsung Electronics Co., Ltd. Method and system for providing recommendation information related to photography
CN110476405A (en) * 2017-12-01 2019-11-19 三星电子株式会社 For providing and shooting the method and system of related recommendation information
CN108513066A (en) * 2018-03-28 2018-09-07 努比亚技术有限公司 It takes pictures composition guidance method, mobile terminal and storage medium
CN110445978A (en) * 2019-06-24 2019-11-12 华为技术有限公司 A kind of image pickup method and equipment
CN111277760A (en) * 2020-02-28 2020-06-12 Oppo广东移动通信有限公司 Shooting composition method, terminal and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113038015A (en) * 2021-03-19 2021-06-25 城云科技(中国)有限公司 Secondary shooting method and system
CN115150543A (en) * 2021-03-31 2022-10-04 华为技术有限公司 Shooting method, shooting device, electronic equipment and readable storage medium
CN115150543B (en) * 2021-03-31 2024-04-16 华为技术有限公司 Shooting method, shooting device, electronic equipment and readable storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
