CN109479087B - Image processing method and device - Google Patents


Info

Publication number
CN109479087B
CN109479087B (application CN201780007318.4A)
Authority
CN
China
Prior art keywords
image
processed
determining
background
reference composition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201780007318.4A
Other languages
Chinese (zh)
Other versions
CN109479087A (en)
Inventor
郭佳春
董辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN109479087A
Application granted
Publication of CN109479087B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method, relating to the field of communications technologies, which can solve the problem that pictures shot by a terminal have poor visual composition. In the method, a target reference composition corresponding to an image to be processed is determined from the subject object and the background in that image; the image to be processed is then calibrated against the target reference composition according to the size of the subject object and its position in the image, so that a target image is obtained. The scheme provided by the application is suitable for image processing.

Description

Image processing method and device
The present application claims priority to a Chinese patent application entitled "An automatic composition method and apparatus based on image aesthetics", filed with the Chinese Patent Office on January 19, 2017 under application number 201710044420.3, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and an apparatus for image processing.
Background
When a photographer takes a picture with a terminal, the subject object and a reference object of the scene to be shot can typically be selected by touching the display screen. The terminal then determines the depth of field from the subject object and the reference object, and once focusing is finished the photographer can press the shutter key to obtain a picture in which the subject object is sharp and the reference object is blurred.
However, a sufficiently beautiful photograph cannot be obtained by relying only on the photographing function of the terminal, because the beauty of a photograph depends heavily on the skill of the photographer. If a picture is not well composed, its overall visual effect suffers. At present, ordinary photographers, unlike professionals, generally have little composition experience, so it is difficult for them to compose well, and pictures taken through the terminal therefore look poor.
Disclosure of Invention
The embodiments of the present application provide an image processing method and apparatus, which can solve the problem that pictures currently shot by a terminal have poor visual composition.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
in a first aspect, the present application provides an image processing method, comprising: a terminal determines a subject object and a background in an image to be processed; a target reference composition corresponding to the image to be processed is determined according to the subject object and the background; and the image to be processed is then calibrated against the target reference composition according to the size of the subject object and its position in the image, so as to obtain a target image. In this way, even if the photographer has no composition experience, the terminal can automatically calibrate the image to be processed against the target reference composition to obtain a target image conforming to it, and this calibration improves the aesthetic quality of the target image.
In one possible design, the method of determining the subject object and the background of the image to be processed may be implemented as:
dividing the image to be processed into a preset number of regions; detecting the color of each region; determining the number of regions corresponding to each color; determining the regions corresponding to the color covering the largest number of regions as the background; and then determining either the regions corresponding to the color covering the second-largest number of regions, or the regions of the image other than the background, as the subject object.
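As a rough sketch of this color-counting design (the grid size and the color quantisation step are illustrative assumptions, not values from the application):

```python
import numpy as np

def split_subject_background(image, grid=(8, 8), quant=32):
    """Divide an image into a grid of regions, count regions per
    (quantised) colour, and treat the most common colour's regions as
    the background; everything else is the subject object.

    `image` is an H x W x 3 uint8 array; returns two sets of (row, col)
    grid cells: (subject_cells, background_cells).
    """
    h, w, _ = image.shape
    rows, cols = grid
    rh, cw = h // rows, w // cols
    by_color = {}
    for r in range(rows):
        for c in range(cols):
            block = image[r * rh:(r + 1) * rh, c * cw:(c + 1) * cw]
            # Quantise the mean colour so near-identical shades count together.
            key = tuple((block.reshape(-1, 3).mean(axis=0) // quant).astype(int))
            by_color.setdefault(key, []).append((r, c))
    # Background = colour covering the most regions; the rest is the subject.
    bg_key = max(by_color, key=lambda k: len(by_color[k]))
    background = set(by_color[bg_key])
    all_cells = {(r, c) for r in range(rows) for c in range(cols)}
    return all_cells - background, background
```

On the grassland example discussed later in the description, the green cells would be returned as the background and the remaining cells as the subject object.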
In one possible design, the method of determining the subject object and the background in the image to be processed may be implemented as: the terminal detects straight lines in the image to be processed, divides the image into at least two regions along the detected lines, determines any one of the divided regions as the subject object, and determines the remaining regions as the background.
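One simple way to realise the line-detection variant, restricted here to horizontal boundaries such as a horizon (the intensity-jump criterion and threshold are assumptions for illustration; a real implementation might use a Hough transform instead):

```python
import numpy as np

def split_by_horizontal_lines(image, threshold=30):
    """Treat rows where the mean intensity jumps sharply as straight
    boundary lines, and return the (top, bottom) row ranges of the
    regions they delimit."""
    gray = image.mean(axis=2)          # crude luminance
    row_mean = gray.mean(axis=1)       # one value per image row
    jumps = np.abs(np.diff(row_mean))
    boundaries = [i + 1 for i in np.flatnonzero(jumps > threshold)]
    edges = [0] + boundaries + [image.shape[0]]
    return [(edges[i], edges[i + 1]) for i in range(len(edges) - 1)]
```

Each returned range can then be tried in turn as the subject object, with the remaining ranges as the background, as the design describes.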
In one possible design, determining the target reference composition corresponding to the image to be processed according to the subject object and the background may specifically be implemented as: matching the image to be processed against each pre-stored reference composition, and determining the target reference composition that matches it. Since various reference compositions are stored in the terminal in advance, the terminal can determine a matching target reference composition and then obtain, through the calibration operation, a target image conforming to that composition. This improves the visual quality of pictures shot by the terminal and makes its shooting and image processing functions more intelligent.
In one possible design, the target image is obtained by calibrating the image to be processed and the target reference composition according to the size of the subject object and the position of the subject object in the image to be processed, which may specifically be implemented as:
calibration parameters are determined according to the target reference composition, the calibration parameters including at least the standard proportion of the subject object within the whole picture and the position of the subject object within the whole picture; a calibration operation is then performed on the image to be processed according to these parameters, so as to obtain a target image conforming to them. A target image conforming to the calibration parameters conforms to the target reference composition, and an image conforming to the target reference composition has a stronger visual impact and is more attractive.
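A purely geometric sketch of such a calibration operation, assuming only the two stated parameters (subject proportion and subject position) and a crop-only calibration; the function name and the clamping strategy are illustrative, and a real implementation would also handle rotation:

```python
import math

def crop_to_params(frame_w, frame_h, subj_box, ratio, pos):
    """Return a crop rectangle (x, y, w, h) that makes the subject
    occupy `ratio` of the crop area and sit at the fractional
    position `pos` (e.g. (1/3, 1/3)) within the crop.
    `subj_box` is the subject bounding box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = subj_box
    subj_area = (x1 - x0) * (y1 - y0)
    aspect = frame_w / frame_h
    # Crop area chosen so that subject_area / crop_area == ratio,
    # keeping the frame's aspect ratio.
    crop_h = math.sqrt(subj_area / ratio / aspect)
    crop_w = aspect * crop_h
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    # Place the subject centre at the requested fraction, clamped to the frame.
    ox = min(max(cx - pos[0] * crop_w, 0), frame_w - crop_w)
    oy = min(max(cy - pos[1] * crop_h, 0), frame_h - crop_h)
    return ox, oy, crop_w, crop_h
```

For example, a 200 x 200 subject in a 1200 x 800 frame, with a target proportion of 1/9 and a target position at the left-upper thirds point, yields a crop in which the subject covers one ninth of the area and sits one third of the way across.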
In a second aspect, the present application provides an image processing apparatus, which may implement the functions performed by the terminal in the first aspect, where the functions may be implemented by hardware, or may be implemented by hardware executing corresponding software. The hardware or software comprises one or more modules corresponding to the functions.
In one possible design, the apparatus includes a processor and a communication interface, and the processor is configured to support the apparatus to perform the corresponding functions of the method. The communication interface is used to support communication between the apparatus and other network elements. The apparatus may also include a memory, coupled to the processor, that retains program instructions and data necessary for the apparatus.
In a third aspect, the present application provides a computer storage medium for storing computer software instructions for the above terminal, containing a program designed to execute the above aspects.
Compared with the prior art, in which a user's lack of composition experience leads to poorly composed pictures, the terminal can determine the target reference composition corresponding to the image to be processed from the subject object and the background in the image. The photographer does not need any composition experience: the terminal automatically calibrates the image to be processed against the target reference composition to obtain a target image conforming to it. This calibration improves the attractiveness of the target image, so pictures shot by the terminal look better.
Drawings
Fig. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of image processing provided by an embodiment of the present application;
fig. 3a is an exemplary schematic diagram of an image to be processed according to an embodiment of the present application;
FIG. 3b is an exemplary diagram of another image to be processed according to an embodiment of the present application;
FIG. 4a is an exemplary diagram of another image to be processed according to an embodiment of the present application;
FIG. 4b is an exemplary diagram of another image to be processed according to an embodiment of the present application;
FIG. 5 is an exemplary diagram of a method of image processing provided by an embodiment of the present application;
FIG. 6 is a flow chart of another method of image processing provided by an embodiment of the present application;
FIG. 7 is an exemplary diagram of a target image provided by an embodiment of the present application;
FIG. 8 is an exemplary diagram of another method of image processing provided by an embodiment of the present application;
fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
The system architecture and service scenarios described in this application are intended to illustrate the technical solution of this application more clearly and do not limit it. As those skilled in the art will appreciate, with the evolution of system architectures and the emergence of new service scenarios, the technical solution provided in this application remains applicable to similar technical problems.
It is noted that, in the present application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or description. Any embodiment or design described herein as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of these words is intended to present related concepts in a concrete fashion.
It should be noted that "of, corresponding to" and "corresponding" may be sometimes used in combination in the present application, and it should be noted that the intended meaning is consistent when the difference is not emphasized.
The technical solutions in the present application will be described in detail below with reference to the accompanying drawings in the present application.
The technical solution provided by the present application can be applied to the terminal 100 shown in fig. 1. A terminal, also called user equipment (UE), is a device that provides voice and/or data connectivity to a user, for example a handheld or vehicle-mounted device with wireless connectivity and image display and processing functions. Common terminals include a mobile phone, a camera, a tablet computer, a notebook computer, a palmtop computer, a mobile Internet device (MID), and wearable devices such as a smart watch, a smart bracelet, or a pedometer.
Taking the terminal 100 as a mobile phone as an example, the general hardware architecture of the phone is described below. As shown in fig. 1, the mobile phone may include: radio frequency (RF) circuitry 110, a memory 120, a communication interface 130, a display screen 140, sensors 150, audio circuitry 160, an I/O subsystem 170, a processor 180, and a camera 190. Those skilled in the art will appreciate that the configuration shown in fig. 1 is not limiting: the phone may include more or fewer components than shown, some components may be combined or split, or a different arrangement of components may be used. The display screen 140 belongs to the user interface (UI) and may include a display panel 141 and a touch panel 142. Although not shown, the mobile phone may further include a power supply, a Bluetooth module, and other functional modules or devices, which are not described here.
Further, processor 180 is coupled to RF circuitry 110, memory 120, audio circuitry 160, I/O subsystem 170, and camera 190, respectively. The I/O subsystem 170 is connected to the communication interface 130, the display screen 140, and the sensor 150, respectively.
The RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call; in particular, it receives downlink information from a base station and forwards it to the processor 180 for processing.
The memory 120 may be used to store software programs and modules. The processor 180 executes various functional applications and data processing of the cellular phone by executing software programs and modules stored in the memory 120.
The communication interface 130 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone.
The display screen 140 may be used to display information input by or provided to the user and the various menus of the phone, and may also accept user input. The display screen 140 may include a display panel 141 and a touch panel 142. The display panel 141 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The touch panel 142, also referred to as a touch screen or touch-sensitive screen, may collect contact or contactless operations on or near it (for example, operations performed by the user with a finger, a stylus, or any other suitable object or accessory, including body-sensing operations, and covering both single-point and multi-point control) and drive the corresponding connection device according to a preset program.
The sensor 150 may be a light sensor, a motion sensor, or other sensor.
Audio circuitry 160 may provide an audio interface between the user and the handset. The I/O subsystem 170 is used to control input and output peripherals, which may include other device input controllers, sensor controllers, and display controllers.
The processor 180 is the control center of the mobile phone: it connects the various parts of the entire phone using various interfaces and lines, and performs the phone's functions and processes its data by running or executing software programs and/or modules stored in the memory 120 and calling data stored in the memory 120, thereby monitoring the phone as a whole.
The camera 190 may also serve as an input device; specifically, it converts a captured analog video or image signal into a digital signal and stores it in the memory 120. The camera 190 may be a front camera, a rear camera, an internal camera, an external camera, and so on, which is not limited in the embodiments of the present application; the embodiments take dual cameras as an example.
Hereinafter, a method for processing an image according to an embodiment of the present application will be described in detail with reference to specific embodiments, where an execution subject of the method is a terminal, as shown in fig. 2, and the method includes:
201. Determine the subject object and the background in the image to be processed.
The image to be processed may be an image captured in real time by the terminal's dual cameras, or a static image.
In a possible implementation, the terminal may divide the image to be processed into a preset number of regions, detect the color of each region, determine the number of regions corresponding to each color, determine the regions corresponding to the color covering the largest number of regions as the background, and determine either the regions corresponding to the color covering the second-largest number of regions, or the regions of the image other than the background, as the subject object.
For example, as shown in fig. 3a, assume a person stands on a grassland. The terminal may detect that most of the area of the image to be processed is the green of the grass, determine the green area as the background, and determine the remaining area as the subject object. As shown in fig. 3b, a green detector may detect that the black area in the figure is green; since that area occupies a large proportion of the whole picture, it can be determined to be the background, and the part outside it is the subject object.
In another possible implementation, the terminal may determine the subject object and the background by detecting straight lines in the image to be processed. Specifically, the terminal divides the image into at least two regions along the detected lines, determines any one of the divided regions as the subject object, and determines the remaining regions as the background. It should be noted that, with this method, the terminal may carry out the subsequent steps taking each region in turn as the subject object, determining a target reference composition for each.
For example, as shown in fig. 4a, the image to be processed contains a beach, the sea, and the sky; the boundary between beach and sea is a straight line, as is the boundary between sea and sky. As shown in fig. 4b, these two lines divide the image into three regions. The terminal may then determine any one region as the subject and the other two as the background; alternatively, it may take region 1, region 2, and region 3 each in turn as the subject object and, by performing the subsequent steps, determine a target reference composition for each case.
The terminal may be configured with a plurality of composition detectors and may apply each of them to the image to be processed in turn, so as to determine the subject object and the background more accurately. For example, if the first composition detector finds that the background of the image is not a single color, or that the regions other than the background are not concentrated (that is, there may be several subject objects), the terminal determines that the image does not match the first detector. The second composition detector is then applied; if it is a line detector, it looks for straight lines in the image, and the subject object and the background can be determined from its result. If the line detector finds no lines, the remaining detectors are applied in turn.
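The detector cascade described above amounts to trying detectors in order until one recognises the scene. A minimal sketch with a hypothetical detector interface (each detector is assumed to return a (subject, background) pair, or None when the image does not match it):

```python
def pick_composition(image, detectors):
    """Run composition detectors in order until one recognises the
    scene; return its (subject, background) result, or None if no
    detector matches."""
    for detect in detectors:
        result = detect(image)
        if result is not None:
            return result
    return None
```

The colour-counting and line-detection methods from step 201 would each be wrapped as one detector in this list.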
In addition, the terminal can identify the objects contained in the image to be processed through deep learning, so as to determine the subject object and the background more intelligently.
It should be noted that, if the image to be processed is captured by the terminal's dual cameras in real time, then after determining the subject object the terminal also needs to determine depth-of-field data from the subject object and set the aperture value and focal length accordingly. For example, if the depth-of-field data is within 5 meters, the aperture value may be set to 2.8; if it is 5 meters or more, the aperture value may be set to 4.
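The aperture rule in this example can be captured as a one-line mapping; the 5-metre threshold and the f/2.8 and f/4 values come directly from the text above, while any finer-grained mapping would be an assumption:

```python
def aperture_for_depth(depth_m):
    """Map the estimated subject distance (metres) to an aperture
    value: f/2.8 within 5 m, f/4 at 5 m or beyond."""
    return 2.8 if depth_m < 5 else 4.0
```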
202. Determine the target reference composition corresponding to the image to be processed according to the subject object and the background in the image to be processed.
The terminal can match the features of the subject object and the background in the image to be processed against the pre-stored reference compositions, and then select the target reference composition from among them.
It should be noted that the embodiments of the present application do not limit the number of target reference compositions; when the image to be processed matches several pre-stored reference compositions, this step may determine several target reference compositions.
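Step 202 can be viewed as scoring the frame's subject/background features against every stored reference composition and keeping all matches, which also yields the multiple-target case noted above. A sketch with a caller-supplied similarity function (the function shape and the threshold are illustrative assumptions):

```python
def match_reference_compositions(features, references, score, min_score=0.7):
    """Return every pre-stored reference composition whose similarity
    to the frame's subject/background features reaches min_score.
    `score(features, ref)` is a caller-supplied similarity in [0, 1]."""
    return [ref for ref in references if score(features, ref) >= min_score]
```

Each returned composition would then go through the calibration of step 203, producing one candidate target image per match.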
203. Calibrate the image to be processed against the target reference composition according to the size of the subject object and its position in the image, so as to obtain the target image.
It can be understood that, after the image to be processed is calibrated against the target reference composition, a target image conforming to that composition is obtained.
It should be noted that, if a plurality of target reference compositions were determined in step 202, then in this step the image to be processed needs to be calibrated against each of them, so as to obtain a target image conforming to each.
After a plurality of target images are determined, the user can select one as the final target image, or the terminal can select one at random; if the randomly selected image does not meet the user's needs, the user can manually select another.
For example, as shown in fig. 5, suppose three target reference compositions are determined in step 202, and three target images (target image 1, target image 2, and target image 3) are determined from them. The terminal may receive a selection instruction input by the user and, according to it, take the selected image as the final target image; in fig. 5, if the user clicks target image 2, the terminal takes target image 2 as the final target image.
Compared with the prior art, in which pictures look poor because the user lacks composition experience, the image processing method provided by this embodiment lets the terminal determine the target reference composition corresponding to the image to be processed from the subject object and the background. The photographer does not need composition experience: the terminal automatically calibrates the image to be processed against the target reference composition to obtain a target image conforming to it, which improves the aesthetic quality of the target image and makes pictures shot by the terminal look better.
In a possible implementation provided by this embodiment, as shown in fig. 6, step 203 (calibrating the image to be processed against the target reference composition according to the size and position of the subject object, to obtain the target image) may specifically be implemented as:
2031. Determine the calibration parameters from the target reference composition.
The calibration parameters at least include the standard proportion of the subject object in the whole picture and the position of the subject object in the whole picture.
2032. Perform a calibration operation on the image to be processed according to the calibration parameters, so as to obtain a target image conforming to them.
The calibration operation on the image to be processed can be cropping, rotating and the like.
It can be understood that, after the picture to be processed is calibrated according to the calibration parameters, the resulting target image will conform to the target reference composition. As an example, take fig. 3a as the image to be processed and suppose the target reference composition determined from it is a rule-of-thirds composition: the frame is divided into three equal parts by two horizontal lines and into three equal parts by two vertical lines, similar to the Chinese character "well" (井), yielding four intersections, and the subject object is placed at one of them. The image shown in fig. 3a does not conform to the rule of thirds, so the terminal can compute calibration parameters and crop the image accordingly; as shown in fig. 7, cropping along the thick lines yields a target image conforming to the rule of thirds.
It should be noted that, if the image to be processed is a live image being captured by the camera, the terminal may display the rule-of-thirds grid on the shooting interface; as shown in fig. 8, the four intersections mark candidate positions for the subject object, and the displayed grid prompts the user to adjust the scene so that the actual subject coincides with one of the indicated positions, thereby shooting a target image conforming to the composition. If another target reference composition determined from fig. 3a is a center composition, the center composition may be offered as an option: if the user selects the rule of thirds, the thirds grid is displayed in the shooting interface, and if the user selects the center composition, that grid is displayed instead.
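The rule-of-thirds geometry used in this example is easy to compute: the four intersections of the grid, and the one nearest the current subject centre, which is where the crop or the on-screen guide should place the subject. A minimal sketch:

```python
def thirds_intersections(width, height):
    """Return the four rule-of-thirds intersection points for a frame."""
    xs = (width / 3, 2 * width / 3)
    ys = (height / 3, 2 * height / 3)
    return [(x, y) for x in xs for y in ys]

def nearest_third_point(width, height, subject_center):
    """Pick the thirds intersection closest to the current subject
    centre: the target position for the crop or shooting guide."""
    sx, sy = subject_center
    return min(thirds_intersections(width, height),
               key=lambda p: (p[0] - sx) ** 2 + (p[1] - sy) ** 2)
```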
In this embodiment, the terminal can process an image being shot in real time: it determines the target reference composition by detecting the image currently captured by the camera and displays that composition on the shooting interface, so that the user can adjust the framed scene according to the displayed composition, making the shot conform to the target reference composition and yielding a more attractive picture.
The above description mainly introduces the scheme provided by the embodiments of the present application from the perspective of the terminal. It is understood that the terminal includes hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as combinations of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments of the present application, the terminal may be divided into functional modules according to the above method examples; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in hardware or as a software functional module. It should be noted that the division into modules in the embodiments of the present application is schematic and merely a logical functional division; other divisions are possible in actual implementations.
When functional modules are divided according to function, an embodiment of the present application further provides an image processing apparatus, which may be implemented as the terminal in the above embodiments. Fig. 9 shows a possible structure of that terminal, which includes a determination module 901 and a calibration module 902.
Wherein, the determining module 901 is configured to support the terminal to execute step 201 and step 202 in fig. 2, and the calibrating module 902 is configured to support the terminal to execute step 203 in fig. 2 and step 2031 to step 2032 in fig. 6.
For all relevant details of the steps in the above method embodiment, reference may be made to the functional description of the corresponding functional module; details are not repeated here.
In the case of an integrated unit, it should be noted that the determining module 901 and the calibrating module 902 shown in fig. 9 may be integrated into the processor 180 shown in fig. 1, so that the processor 180 performs the specific functions of the determining module 901 and the calibrating module 902.
An embodiment of the present application further provides a computer storage medium for storing the computer software instructions used by the above terminal, including a program designed to perform the steps performed by the terminal in the above embodiments.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied in hardware, or in software instructions executed by a processor. The software instructions may consist of corresponding software modules, which may be stored in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a compact disc Read Only Memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in a core network interface device. Of course, the processor and the storage medium may also reside as discrete components in a core network interface device.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division into units is only one kind of logical function division, and other divisions are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network devices. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application may be implemented by software plus necessary general-purpose hardware, or certainly by hardware alone, although in many cases the former is the better implementation. Based on such an understanding, the technical solutions of the present application, essentially or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a floppy disk, a hard disk, or an optical disc of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods described in the embodiments of the present application.
The above description is only an embodiment of the present application, but the protection scope of the present application is not limited thereto; any change or substitution within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (12)

1. A method of image processing, comprising:
determining a subject object and a background in an image to be processed;
determining a target reference composition corresponding to the image to be processed according to the subject object and the background in the image to be processed; and
calibrating the image to be processed against the target reference composition according to the size of the subject object and the position of the subject object in the image to be processed, to obtain a target image;
wherein the calibrating the image to be processed against the target reference composition according to the size of the subject object and the position of the subject object in the image to be processed to obtain a target image comprises:
determining calibration parameters according to the target reference composition, wherein the calibration parameters comprise at least a standard proportion of the subject object in the whole picture and a position of the subject object in the whole picture; and
calibrating the image to be processed according to the calibration parameters to obtain a target image that conforms to the calibration parameters;
wherein the calibration operation performed on the image to be processed comprises cropping and rotation operations.
2. The method of image processing according to claim 1, wherein the determining the subject object and the background in the image to be processed comprises:
dividing the image to be processed into a preset number of regions;
detecting the color of each region, and determining, for each color, the number of regions of that color;
determining the regions corresponding to the color with the largest number of regions as the background; and
determining, as the subject object, the regions corresponding to the remaining colors, that is, the regions other than the background in the image to be processed.
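The colour-counting procedure of claim 2 can be sketched as follows, assuming a uniform grid for the preset number of regions and a coarse quantisation of each region's mean colour; both of these choices, and the function name, are illustrative assumptions rather than the patented method.

```python
import numpy as np

def segment_by_color(img, grid=8):
    """Split img (H, W, 3) into a grid x grid lattice of regions,
    quantize each region to its mean color, and label the regions
    sharing the most common quantized color as background (0);
    all other regions form the subject object (1)."""
    h, w, _ = img.shape
    counts = {}
    keys = np.empty((grid, grid), dtype=object)
    for i in range(grid):
        for j in range(grid):
            cell = img[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            # Coarse quantization: bucket the mean color into 32-wide bins
            key = tuple((cell.mean(axis=(0, 1)) // 32).astype(int))
            keys[i, j] = key
            counts[key] = counts.get(key, 0) + 1
    background = max(counts, key=counts.get)  # the color with the most regions
    labels = np.zeros((grid, grid), dtype=int)
    for i in range(grid):
        for j in range(grid):
            labels[i, j] = 0 if keys[i, j] == background else 1
    return labels
```

On a mostly blue test picture with a small red patch, the blue regions dominate the count and are labelled background, leaving only the patch's regions labelled as the subject object.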
3. The method of claim 1, wherein the determining the subject object and the background in the image to be processed comprises:
detecting a straight line in the image to be processed;
dividing the image to be processed into at least two regions along the detected straight line;
determining any one of the divided regions as the subject object of the image to be processed; and
determining the regions other than the subject object in the image to be processed as the background.
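A minimal stand-in for the line-based division of claim 3: instead of a full straight-line detector (e.g. a Hough transform), this sketch finds the strongest horizontal edge in a grayscale image and splits the picture there, taking the region on one side as the subject object and the remainder as the background. The restriction to horizontal lines is an assumption made for brevity.

```python
import numpy as np

def split_by_line(gray):
    """Locate the row with the largest total vertical gradient (the
    strongest horizontal line) in a grayscale image (H, W), and split
    the picture there.  Returns the split row and a mask in which the
    region below the line is labelled 1 (subject) and the rest 0."""
    diff = np.abs(np.diff(gray.astype(float), axis=0)).sum(axis=1)
    row = int(np.argmax(diff)) + 1       # first row below the detected edge
    mask = np.zeros_like(gray, dtype=int)
    mask[row:, :] = 1                    # one divided region becomes the subject
    return row, mask
```

On a synthetic image that is dark above row 25 and bright below it, the split lands exactly on that boundary.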
4. The method according to claim 2 or 3, wherein the determining a target reference composition corresponding to the image to be processed according to the subject object and the background in the image to be processed comprises:
matching the image to be processed against each pre-stored reference composition respectively, and determining the target reference composition that matches the image to be processed.
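The matching step of claim 4 could, for instance, score the image to be processed against each pre-stored reference composition and keep the best-scoring one. The reference format used here (a dict holding a standard area ratio and a centre position) and the distance-based score are assumptions for illustration only; the patent does not specify the matching criterion.

```python
def match_reference(subject_ratio, subject_pos, references):
    """Return the pre-stored reference composition that best matches an
    image whose subject object occupies `subject_ratio` of the frame
    with its centre at `subject_pos` (fractions of the frame).  The
    score is a simple weighted distance; lower is better."""
    def score(ref):
        dpos = ((subject_pos[0] - ref["pos"][0]) ** 2 +
                (subject_pos[1] - ref["pos"][1]) ** 2) ** 0.5
        return dpos + abs(subject_ratio - ref["ratio"])
    return min(references, key=score)

# Two illustrative pre-stored reference compositions
REFERENCES = [
    {"name": "rule_of_thirds", "ratio": 0.2, "pos": (1/3, 1/3)},
    {"name": "centered",       "ratio": 0.3, "pos": (0.5, 0.5)},
]
```

A near-centred subject covering roughly 28% of the frame would match the "centered" reference rather than the rule-of-thirds one.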
5. An apparatus for image processing, comprising:
a determining module, configured to determine a subject object and a background in an image to be processed, and to determine a target reference composition corresponding to the image to be processed according to the subject object and the background in the image to be processed; and
a calibrating module, configured to determine calibration parameters according to the target reference composition, wherein the calibration parameters comprise at least a standard proportion of the subject object in the whole picture and a position of the subject object in the whole picture, and to calibrate the image to be processed according to the calibration parameters to obtain a target image that conforms to the calibration parameters;
wherein the calibration operation performed on the image to be processed comprises cropping and rotation operations.
6. The apparatus according to claim 5, wherein
the determining module is configured to: divide the image to be processed into a preset number of regions; detect the color of each region, and determine, for each color, the number of regions of that color; determine the regions corresponding to the color with the largest number of regions as the background; and determine, as the subject object, the regions other than the background in the image to be processed.
7. The apparatus according to claim 5, wherein the determining module is configured to: detect a straight line in the image to be processed; divide the image to be processed into at least two regions along the detected straight line; determine any one of the divided regions as the subject object of the image to be processed; and determine the regions other than the subject object in the image to be processed as the background.
8. The apparatus for image processing according to claim 6 or 7, wherein
the determining module is configured to match the image to be processed against each pre-stored reference composition respectively, and to determine the target reference composition that matches the image to be processed.
9. An apparatus for image processing, comprising:
a memory for storing information including program instructions;
a processor, coupled to the memory and configured to control execution of the program instructions, and specifically configured to: determine a subject object and a background in an image to be processed; determine a target reference composition corresponding to the image to be processed according to the subject object and the background in the image to be processed; determine calibration parameters according to the target reference composition, wherein the calibration parameters comprise at least a standard proportion of the subject object in the whole picture and a position of the subject object in the whole picture; and calibrate the image to be processed according to the calibration parameters to obtain a target image that conforms to the calibration parameters;
wherein the calibration operation performed on the image to be processed comprises cropping and rotation operations.
10. The apparatus for image processing according to claim 9, wherein
the processor is configured to: divide the image to be processed into a preset number of regions; detect the color of each region, and determine, for each color, the number of regions of that color; determine the regions corresponding to the color with the largest number of regions as the background; and determine, as the subject object, the regions other than the background in the image to be processed.
11. The apparatus for image processing according to claim 9, wherein
the processor is configured to: detect a straight line in the image to be processed; divide the image to be processed into at least two regions along the detected straight line; determine any one of the divided regions as the subject object of the image to be processed; and determine the regions other than the subject object in the image to be processed as the background.
12. The apparatus for image processing according to claim 10 or 11, wherein
the processor is configured to match the image to be processed against each pre-stored reference composition respectively, and to determine the target reference composition that matches the image to be processed.
CN201780007318.4A 2017-01-19 2017-06-13 Image processing method and device Active CN109479087B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201710044420 2017-01-19
CN2017100444203 2017-01-19
PCT/CN2017/088085 WO2018133305A1 (en) 2017-01-19 2017-06-13 Method and device for image processing

Publications (2)

Publication Number Publication Date
CN109479087A CN109479087A (en) 2019-03-15
CN109479087B true CN109479087B (en) 2020-11-17

Family

ID=62907547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780007318.4A Active CN109479087B (en) 2017-01-19 2017-06-13 Image processing method and device

Country Status (2)

Country Link
CN (1) CN109479087B (en)
WO (1) WO2018133305A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111432122B (en) * 2020-03-30 2021-11-30 维沃移动通信有限公司 Image processing method and electronic equipment
CN112037160B (en) * 2020-08-31 2024-03-01 维沃移动通信有限公司 Image processing method, device and equipment
CN113206956B (en) * 2021-04-29 2023-04-07 维沃移动通信(杭州)有限公司 Image processing method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5873007A (en) * 1997-10-28 1999-02-16 Sony Corporation Picture composition guidance system
CN104243787A (en) * 2013-06-06 2014-12-24 华为技术有限公司 Photographing method and equipment, and photo management method
CN104917951A (en) * 2014-03-14 2015-09-16 宏碁股份有限公司 Camera device and auxiliary human image shooting method thereof
CN106131418A (en) * 2016-07-19 2016-11-16 腾讯科技(深圳)有限公司 A kind of composition control method, device and photographing device
CN106131411A (en) * 2016-07-14 2016-11-16 纳恩博(北京)科技有限公司 A kind of method and apparatus shooting image

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101000451A (en) * 2006-01-10 2007-07-18 英保达股份有限公司 Automatic viecofinding support device and method
JP5880263B2 (en) * 2012-05-02 2016-03-08 ソニー株式会社 Display control device, display control method, program, and recording medium


Also Published As

Publication number Publication date
CN109479087A (en) 2019-03-15
WO2018133305A1 (en) 2018-07-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210421

Address after: Unit 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong 518040

Patentee after: Honor Device Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee before: HUAWEI TECHNOLOGIES Co.,Ltd.