WO2018133305A1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
WO2018133305A1
WO2018133305A1 (PCT application PCT/CN2017/088085; Chinese application CN 2017088085 W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
processed
subject object
reference composition
target
Prior art date
Application number
PCT/CN2017/088085
Other languages
English (en)
French (fr)
Inventor
郭佳春
董辰
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to CN201780007318.4A (granted as CN109479087B)
Publication of WO2018133305A1

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 — Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N23/60 — Control of cameras or camera modules

Definitions

  • the present application relates to the field of communications technologies, and in particular, to a method and an apparatus for image processing.
  • Current terminals have dual cameras. When the photographer wants to take a picture, he or she can select the subject and a reference object of the photo on the touch display, and the terminal then determines the depth of field from the subject and the reference object. After focusing is complete, pressing the shutter produces a photo in which the subject is sharp and the reference object is blurred.
  • By blurring the background, the photo highlights its subject, giving it stronger visual impact and aesthetic expression.
  • The embodiments of the present application provide an image processing method and apparatus, which can address the problem that photos taken by a terminal have a poor overall look.
  • In a first aspect, the present application provides an image processing method, comprising: a terminal determining a subject object and a background in an image to be processed; determining, according to the subject object and the background, a target reference composition corresponding to the image; and calibrating the image to be processed against the target reference composition according to the size of the subject object and its position in the image, to obtain a target image.
  • Thus, even if the photographer has no composition experience, the terminal can automatically calibrate the image to be processed against the target reference composition to obtain a target image that conforms to it, and calibrating against the target reference composition improves the aesthetic quality of the target image.
  • In one possible design, determining the subject object and background of the image to be processed may be implemented as: dividing the image into a preset number of regions, detecting the colour of each region, counting the regions of each colour, determining the regions of the most frequent colour as the background, and determining either the regions of the second most frequent colour or the area other than the background as the subject object.
  • In another possible design, it may be implemented as: detecting straight lines in the image to be processed, dividing the image into at least two regions along the detected lines, determining any one of the divided regions as the subject object, and determining the area other than the subject object as the background.
  • In one possible design, determining the target reference composition corresponding to the image to be processed according to the subject object and the background may be implemented as: matching the image against each pre-stored reference composition and determining the target reference composition that matches it. Several reference compositions are pre-stored in the terminal.
  • By matching the image to be processed against each pre-stored reference composition, the terminal determines the matching target reference composition and then performs the calibration operation, obtaining a target image that conforms to it. This improves the look of photos taken by the terminal, as well as the intelligence of its shooting and image-processing functions.
  • In one possible design, calibrating the image to be processed against the target reference composition to obtain the target image may be implemented as follows:
  • determining calibration parameters according to the target reference composition, the parameters including at least the standard proportion of the whole picture that the subject object should occupy and the subject object's position in the picture, and then performing a calibration operation on the image according to these parameters to obtain a target image that satisfies them. Through the calibration operation the target image conforms to the target reference composition, and an image that conforms to a reference composition looks stronger and more attractive.
  • In a second aspect, the present application provides an image processing apparatus that can implement the functions performed by the terminal in the first aspect; the functions can be implemented by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • the apparatus includes a processor and a communication interface configured to support the apparatus to perform the corresponding functions of the above methods.
  • the communication interface is used to support communication between the device and other network elements.
  • The apparatus may also include a memory, coupled to the processor, that stores the program instructions and data necessary for the apparatus.
  • the present application provides a computer storage medium for storing computer software instructions for use in the above terminal, comprising a program designed to perform the above aspects.
  • Compared with the prior art, in which photos look poor because users lack composition experience, the terminal of the present application can determine the target reference composition corresponding to the image to be processed according to the subject object and the background in that image.
  • Without requiring the photographer to have composition experience, the terminal can automatically calibrate the image to be processed against the target reference composition to obtain a conforming target image, improving its aesthetic quality so that photos taken by the terminal look better.
  • FIG. 1 is a schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 2 is a flowchart of a method for image processing according to an embodiment of the present application
  • FIG. 3a is an exemplary schematic diagram of an image to be processed according to an embodiment of the present application.
  • FIG. 3b is an exemplary schematic diagram of another image to be processed according to an embodiment of the present application.
  • FIG. 4a is an exemplary schematic diagram of another image to be processed according to an embodiment of the present application.
  • FIG. 4b is an exemplary schematic diagram of another image to be processed according to an embodiment of the present application.
  • FIG. 5 is an exemplary schematic diagram of a method for image processing according to an embodiment of the present application.
  • FIG. 6 is a flowchart of another method for image processing according to an embodiment of the present application.
  • FIG. 7 is an exemplary schematic diagram of a target image provided by an embodiment of the present application.
  • FIG. 8 is an exemplary schematic diagram of another method for image processing according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of an apparatus for image processing according to an embodiment of the present application.
  • a terminal also known as a User Equipment (UE) is a device that provides voice and/or data connectivity to users, for example, a handheld device, an in-vehicle device, etc. having a wireless connection and image display and processing functions.
  • Common terminals include, for example, mobile phones, cameras, tablets, notebook computers, PDAs, mobile internet devices (MIDs), wearable devices such as smart watches, smart bracelets, pedometers, and the like.
  • the mobile phone may include: a radio frequency (RF) circuit 110 , a memory 120 , a communication interface 130 , a display screen 140 , a sensor 150 , an audio circuit 160 , an I/O subsystem 170 , a processor 180 , and Camera 190 and other components.
  • RF radio frequency
  • Those skilled in the art will understand that the structure shown in FIG. 1 does not limit the mobile phone, which may include more or fewer components than illustrated, combine some components, split some components, or arrange the components differently.
  • the display screen 140 belongs to a user interface (UI), and the display screen 140 can include a display panel 141 and a touch panel 142.
  • the handset can include more or fewer components than shown.
  • the mobile phone may also include functional modules or devices such as a power supply and a Bluetooth module, and details are not described herein.
  • the processor 180 is connected to the RF circuit 110, the memory 120, the audio circuit 160, the I/O subsystem 170, and the camera 190, respectively.
  • the I/O subsystem 170 is connected to the communication interface 130, the display screen 140, and the sensor 150, respectively.
  • The RF circuit 110 can be used to receive and send signals during information transfer or a call; in particular, it receives downlink information from the base station and passes it to the processor 180 for processing.
  • the memory 120 can be used to store software programs as well as modules.
  • the processor 180 executes various functional applications and data processing of the mobile phone by running software programs and modules stored in the memory 120.
  • Communication interface 130 can be used to receive input numeric or character information, as well as to generate key signal inputs related to user settings and function controls of the handset.
  • the display screen 140 can be used to display information input by the user or information provided to the user as well as various menus of the mobile phone, and can also accept user input.
  • the specific display screen 140 may include a display panel 141 and a touch panel 142.
  • the display panel 141 can be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like.
  • The touch panel 142, also referred to as a touch screen or touch-sensitive screen, can collect contact or contactless operations on or near it (for example, operations performed by the user with a finger, a stylus, or any other suitable object or accessory on or near the touch panel 142, which may also include somatosensory operations; the operations include single-point and multi-point control operations) and drives the corresponding connection device according to a preset program.
  • Sensor 150 can be a light sensor, a motion sensor, or other sensor.
  • the audio circuit 160 can provide an audio interface between the user and the handset.
  • the I/O subsystem 170 is used to control external devices for input and output, and the external devices may include other device input controllers, sensor controllers, and display controllers.
  • The processor 180 is the control centre of the handset 200: it connects the various parts of the handset using various interfaces and lines and, by running or executing software programs and/or modules stored in the memory 120 and invoking data stored in the memory 120, performs the handset's functions and processes data, thereby monitoring the handset as a whole.
  • the camera 190 can also be used as an input device, specifically for converting the collected analog video or image signal into a digital signal, and then storing it in the memory 120.
  • The camera 190 may include a front camera, a rear camera, a built-in camera, an external camera, and so on; the embodiments of the present application place no limitation on this, and a dual camera is used as the example in the description.
  • the image to be processed may be an image acquired by the dual camera of the terminal in real time, or may be a static image.
  • In one possible implementation, the terminal may divide the image to be processed into a preset number of regions, detect the colour of each region, and count the regions corresponding to each colour. The regions of the most frequent colour are determined as the background, and either the regions of the second most frequent colour or the area of the image other than the background is determined as the subject object.
  • For example, as shown in FIG. 3a, suppose the image to be processed shows a person standing on grassland. The terminal can detect that most regions of the image are the green of the grass, so the green area is determined as the background and the area other than the green area as the subject object. As shown in FIG. 3b, a green detector can detect that the black area in FIG. 3b is green; since that area occupies a large proportion of the whole picture, the black area is determined as the background and the remaining part as the subject object.
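The region-colour heuristic above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; the function name, the 8×8 grid, and the coarse 32-level colour quantization are all assumptions made for the example:

```python
import numpy as np

def split_background_subject(image, grid=8):
    """Divide an image into grid x grid regions, quantize each region's
    mean colour, and treat the most frequent colour as the background.
    Returns a boolean (grid x grid) mask that is True for background regions."""
    h, w, _ = image.shape
    rh, rw = h // grid, w // grid
    colours = {}
    for i in range(grid):
        for j in range(grid):
            region = image[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
            # Coarse quantization so similar shades count as one colour.
            key = tuple((region.mean(axis=(0, 1)) // 32).astype(int))
            colours.setdefault(key, []).append((i, j))
    # The colour covering the most regions is taken as the background.
    background_key = max(colours, key=lambda k: len(colours[k]))
    mask = np.zeros((grid, grid), dtype=bool)
    for (i, j) in colours[background_key]:
        mask[i, j] = True
    return mask
```

Running it on a synthetic image that is mostly green with one red block marks every region except the red one as background, mirroring the grassland example of FIG. 3a/3b.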
  • In another possible implementation, the terminal may determine the subject object and background by detecting straight lines in the image to be processed. Specifically, the terminal divides the image into at least two regions along the detected lines, determines any one of the divided regions as the subject object, and determines the area other than the subject object as the background. It should be noted that with this method the terminal may perform the subsequent steps once per region, taking each region in turn as the subject object and determining the corresponding target reference composition for each case.
  • For example, as shown in FIG. 4a, the image to be processed contains a beach, the sea, and the sky; the boundary between the beach and the sea is a straight line, and so is the boundary between the sea and the sky. As shown in FIG. 4b, these two lines divide the image into three regions. The terminal may determine any one region as the subject object, with the other two as the background; alternatively, it may take region 1, region 2, and region 3 each in turn as the subject object and determine, through the subsequent steps, the target reference composition for each choice.
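For horizontal boundaries such as the beach/sea/sky example, the straight-line split can be approximated by looking for rows where the mean colour jumps sharply. This is a simplified stand-in for a real line detector (such as a Hough transform); the function name and threshold are illustrative assumptions, not from the patent:

```python
import numpy as np

def split_by_horizontal_lines(image, threshold=50.0):
    """Find rows where the mean colour changes sharply (a stand-in for a
    straight-line detector) and return (start_row, end_row) ranges that
    split the image into candidate subject/background regions."""
    row_means = image.mean(axis=(1, 2))      # one mean intensity per row
    jumps = np.abs(np.diff(row_means))       # change between adjacent rows
    boundaries = [i + 1 for i in np.flatnonzero(jumps > threshold)]
    # Each span between consecutive boundaries is one candidate region.
    edges = [0] + boundaries + [image.shape[0]]
    return list(zip(edges[:-1], edges[1:]))
```

For a three-band image like FIG. 4a this returns three row ranges, each of which can be tried as the subject object while the others serve as background.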
  • Several composition detectors can be configured in the terminal, and the terminal can run each detector over the image to be processed to determine the subject object and background more accurately. For example, if the first composition detector finds that the background is not a single colour, or that the non-background area is not concentrated (that is, the image may contain several subject objects), the terminal determines that the image does not match the first detector.
  • It can then try the second composition detector. Suppose the second detector is a straight-line detector: if it detects lines in the image, its result is used to determine the subject object and background; if it still detects no line, further composition detectors can be tried in turn.
  • the terminal can also identify objects contained in the image to be processed through deep learning techniques, thereby more intelligently determining the subject object and the background.
  • It should be noted that if the image to be processed is captured in real time by the terminal's dual camera, then after determining the subject object the terminal also needs to determine depth-of-field data from it and set the aperture value and focal length accordingly. For example, if the depth is within 5 meters the aperture value can be set to 2.8; beyond 5 meters it can be set to 4.
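The depth-to-aperture rule in the example is a simple threshold. A minimal sketch follows; the function name is illustrative, and a real terminal would presumably use a finer mapping than the two values given in the text:

```python
def aperture_for_depth(depth_m):
    """Pick an aperture value from subject depth, following the example
    thresholds in the text: within 5 m -> f/2.8, beyond 5 m -> f/4."""
    return 2.8 if depth_m <= 5.0 else 4.0
```

A wider aperture (smaller f-number) for near subjects gives a shallower depth of field, which strengthens the background blur described earlier.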
  • Various reference compositions are pre-stored in the terminal. The terminal can match the features of the subject object and background in the image to be processed against the pre-stored reference compositions and select the target reference composition from them.
  • It should be noted that the embodiments of the present application do not limit the number of target reference compositions; when the image to be processed matches several pre-stored reference compositions, this step may determine multiple target reference compositions.
  • If step 202 determined several target reference compositions, then in this step the image to be processed needs to be calibrated against each of them in turn, yielding a target image conforming to each.
  • After several target images are determined, the user may select one as the final target image, or the terminal may pick one at random; if the randomly chosen image does not meet the user's needs, the user can manually select another.
  • For example, the terminal may receive a selection instruction from the user and take the selected target image as the final one according to that instruction; in FIG. 5, if the user taps target image 2, the terminal uses target image 2 as the final target image.
  • Compared with the prior art, in which photos look poor because users lack composition experience, with the image processing method of the present application the terminal can determine the target reference composition corresponding to the image to be processed from its subject object and background.
  • Without the photographer needing any composition experience, the terminal automatically calibrates the image against the target reference composition to obtain a conforming target image; calibrating against the target reference composition improves the aesthetic quality of the target image, so photos taken by the terminal look better.
  • In one possible implementation provided by the embodiments of the present application, as shown in FIG. 6, the foregoing step 203 of calibrating the image to be processed against the target reference composition according to the size of the subject object and its position in the image, to obtain the target image, may be implemented as follows:
  • The calibration parameters include at least the standard proportion of the whole picture that the subject object should occupy and the subject object's position in the picture.
  • The calibration operation on the image to be processed may be cropping, rotation, or the like.
  • As an example, take FIG. 3 as the image to be processed and suppose one target reference composition determined from it is the rule-of-thirds composition.
  • The rule of thirds divides the scene into three equal parts with two horizontal lines and into three equal parts with two vertical lines, like the character "井", producing four intersections; the subject object is then placed on one of those intersections.
  • The image to be processed in FIG. 3 does not conform to the rule of thirds, so the terminal can compute calibration parameters and crop the image according to them to obtain a conforming target image; as shown in FIG. 7, cropping along the thick lines in FIG. 7 yields a target image that conforms to the rule of thirds.
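The cropping step can be illustrated by computing where a crop window should sit so that the subject centre lands on a rule-of-thirds intersection. This is a sketch under assumptions: the function name and the nearest-intersection tie-breaking are not from the patent, which leaves the calibration-parameter computation unspecified:

```python
def thirds_crop(img_w, img_h, subj_x, subj_y, crop_w, crop_h):
    """Place a crop_w x crop_h window inside an img_w x img_h image so that
    the subject centre (subj_x, subj_y) lands on the nearest rule-of-thirds
    intersection of the crop, clamped to the image bounds.
    Returns (left, top) of the chosen crop window."""
    best = None
    for fx in (1 / 3, 2 / 3):
        for fy in (1 / 3, 2 / 3):
            left = subj_x - fx * crop_w
            top = subj_y - fy * crop_h
            # Clamp so the window stays inside the image.
            left = min(max(left, 0), img_w - crop_w)
            top = min(max(top, 0), img_h - crop_h)
            # Distance from the subject to this intersection after clamping.
            d = abs(subj_x - (left + fx * crop_w)) + abs(subj_y - (top + fy * crop_h))
            if best is None or d < best[0]:
                best = (d, int(round(left)), int(round(top)))
    return best[1], best[2]
```

For a 300×300 image with the subject centred at (100, 100) and a 150×150 crop, the window lands at (50, 50), putting the subject on the upper-left thirds intersection of the crop.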
  • It should be noted that if the image to be processed is a live image the camera is capturing, the terminal can display the rule-of-thirds grid on the shooting interface; as shown in FIG. 8, the four intersections in FIG. 8 mark candidate positions for the subject object.
  • Displaying the grid prompts the user to adjust the live image so that the actual subject object coincides with a position indicated in the grid, thereby capturing a target image that conforms to the composition.
  • Suppose another target reference composition determined from the image in FIG. 3 is the centred composition; it can then be offered as an alternative option for the target reference composition. If the user selects the rule of thirds, the terminal's shooting interface displays the thirds grid; if the user selects the centred composition, the interface displays the centred composition.
  • That is, the terminal can process the image captured in real time, determine the target reference composition by analysing what the camera currently sees, and display that composition on the shooting interface, so that the user can adjust the shot against the displayed reference composition until the captured image conforms to it, producing a more attractive photo.
  • the terminal includes hardware structures and/or software modules corresponding to each function.
  • the present application can be implemented in a combination of hardware or hardware and computer software in combination with the elements and algorithm steps of the various examples described in the embodiments disclosed herein. Whether a function is implemented in hardware or computer software to drive hardware depends on the specific application and design constraints of the solution. A person skilled in the art can use different methods for implementing the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present invention.
  • the embodiment of the present application may divide the function module into the terminal according to the foregoing method example.
  • each function module may be divided according to each function, or two or more functions may be integrated into one processing module.
  • The above integrated modules can be implemented in the form of hardware or in the form of software functional modules. It should be noted that the division of modules in the embodiments of the present application is schematic and merely a logical function division; other divisions are possible in actual implementation.
  • FIG. 9 shows a possible structural diagram of the terminal involved in the above embodiment.
  • the terminal includes a determining module 901 and a calibration module 902.
  • The determining module 901 is configured to support the terminal in performing step 201 and step 202 in FIG. 2, and the calibration module 902 is configured to support the terminal in performing step 203 in FIG. 2 and steps 2031 to 2032 in FIG. 6.
  • In a hardware implementation, the determining module 901 and the calibration module 902 shown in FIG. 9 can be integrated into the processor 180 shown in FIG. 1, so that the processor 180 performs their specific functions.
  • Embodiments of the present application also provide a computer storage medium for storing computer software instructions for use by the terminal, the device comprising a program designed to perform the steps performed by the terminal in the above embodiment.
  • The steps of a method or algorithm described in connection with the present disclosure may be implemented in hardware, or by a processor executing software instructions.
  • The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor to enable the processor to read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and the storage medium can be located in an ASIC. Additionally, the ASIC can be located in a core network interface device.
  • the processor and the storage medium may also exist as discrete components in the core network interface device.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • The division of units is merely a logical function division; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network devices. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each functional unit may exist independently, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

This application discloses an image processing method in the field of communications technologies that can solve the problem that photos taken by a terminal look poor. The application determines, from the subject object and the background in an image to be processed, a target reference composition corresponding to that image, and then calibrates the image against the target reference composition according to the size of the subject object and its position in the image, to obtain the target image. The solution provided by this application is suitable for use in image processing.

Description

Image processing method and apparatus
This application claims priority to Chinese Patent Application No. 201710044420.3, filed with the Chinese Patent Office on January 19, 2017 and entitled "Automatic composition method and device based on image aesthetics", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of communications technologies, and in particular to an image processing method and apparatus.
Background
As people demand ever better-looking photos, terminals with camera functions, such as mobile phones and cameras, offer more and more features. For example, current terminals have dual cameras: when the photographer wants to take a picture, he or she can select the subject and a reference object of the photo via the touch display, the terminal determines the depth of field from the subject and the reference object, and after focusing is complete, pressing the shutter produces a photo in which the subject is sharp and the reference object is blurred. Blurring the background highlights the subject and gives the photo stronger visual impact and aesthetic expression.
However, the terminal's camera function alone cannot produce sufficiently attractive photos, because a photo's attractiveness depends heavily on the photographer's skill. Without good composition the overall look of a photo suffers; ordinary photographers, unlike professionals, generally have little composition experience and find it hard to compose well, so photos taken with terminals often look poor.
Summary
The embodiments of this application provide an image processing method and apparatus that can solve the problem that photos taken by current terminals look poor.
To achieve the foregoing objective, the embodiments of this application adopt the following technical solutions:
In a first aspect, this application provides an image processing method, comprising: a terminal determining a subject object and a background in an image to be processed; determining, according to the subject object and the background, a target reference composition corresponding to the image; and then calibrating the image to be processed against the target reference composition according to the size of the subject object and its position in the image, to obtain a target image. Thus, even if the photographer has no composition experience, the terminal can automatically calibrate the image against the target reference composition to obtain a conforming target image, and calibrating against the target reference composition improves the aesthetic quality of the target image.
In one possible design, determining the subject object and background of the image to be processed may be implemented as:
dividing the image to be processed into a preset number of regions, detecting the colour of each region, counting the regions corresponding to each colour, determining the regions of the most frequent colour as the background, and determining either the regions of the second most frequent colour or the area of the image other than the background as the subject object.
In one possible design, determining the subject object and background may be implemented as: detecting straight lines in the image to be processed, dividing the image into at least two regions along the detected lines, determining any one of the divided regions as the subject object, and determining the area other than the subject object as the background.
In one possible design, determining the target reference composition corresponding to the image according to the subject object and the background may be implemented as: matching the image against each pre-stored reference composition and determining the target reference composition that matches it. Several reference compositions are pre-stored in the terminal; by matching the image to be processed against each of them, the terminal determines the matching target reference composition and then performs the calibration operation to obtain a target image that conforms to it, which improves the look of photos taken by the terminal as well as the intelligence of its shooting and image-processing functions.
In one possible design, calibrating the image to be processed against the target reference composition according to the size of the subject object and its position in the image, to obtain the target image, may be implemented as:
determining calibration parameters according to the target reference composition, the parameters including at least the standard proportion of the whole picture that the subject object should occupy and the subject object's position in the picture, and then performing a calibration operation on the image according to these parameters to obtain a target image that satisfies them. Through the calibration operation the target image conforms to the target reference composition, and an image that conforms to a reference composition looks stronger and more attractive.
In a second aspect, this application provides an image processing apparatus that can implement the functions performed by the terminal in the first aspect; the functions can be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the foregoing functions.
In one possible design, the apparatus includes a processor and a communication interface; the processor is configured to support the apparatus in performing the corresponding functions of the foregoing method, and the communication interface supports communication between the apparatus and other network elements. The apparatus may also include a memory, coupled to the processor, that stores the program instructions and data necessary for the apparatus.
In a third aspect, this application provides a computer storage medium for storing computer software instructions used by the foregoing terminal, comprising a program designed to perform the foregoing aspects.
Compared with the prior art, in which photos look poor because users lack composition experience, in this application the terminal can determine the target reference composition corresponding to the image to be processed from its subject object and background; without the photographer needing any composition experience, the terminal automatically calibrates the image against that composition to obtain a conforming target image, improving its aesthetic quality so that photos taken by the terminal look better.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a terminal according to an embodiment of this application;
FIG. 2 is a flowchart of an image processing method according to an embodiment of this application;
FIG. 3a is an exemplary schematic diagram of an image to be processed according to an embodiment of this application;
FIG. 3b is an exemplary schematic diagram of another image to be processed according to an embodiment of this application;
FIG. 4a is an exemplary schematic diagram of another image to be processed according to an embodiment of this application;
FIG. 4b is an exemplary schematic diagram of another image to be processed according to an embodiment of this application;
FIG. 5 is an exemplary schematic diagram of an image processing method according to an embodiment of this application;
FIG. 6 is a flowchart of another image processing method according to an embodiment of this application;
FIG. 7 is an exemplary schematic diagram of a target image according to an embodiment of this application;
FIG. 8 is an exemplary schematic diagram of another image processing method according to an embodiment of this application;
FIG. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of this application.
Detailed Description
The system architectures and service scenarios described in this application are intended to explain its technical solutions more clearly and do not limit them; a person of ordinary skill in the art will appreciate that, as system architectures evolve and new service scenarios emerge, the technical solutions provided in this application remain applicable to similar technical problems.
It should be noted that in this application words such as "exemplary" or "for example" indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" should not be construed as preferable to or more advantageous than other embodiments or designs; rather, such words present the related concepts in a concrete manner.
It should be noted that in this application the terms "of", "relevant", and "corresponding" are sometimes used interchangeably; when the distinction is not emphasized, they convey the same meaning.
The technical solutions in this application are described in detail below with reference to the accompanying drawings.
The technical solutions provided in this application can be applied to the terminal 100 shown in FIG. 1. A terminal, also known as user equipment (UE), is a device that provides voice and/or data connectivity to a user, for example a handheld or in-vehicle device with a wireless connection and image display and processing functions. Common terminals include mobile phones, cameras, tablet computers, notebook computers, palmtop computers, mobile internet devices (MIDs), and wearable devices such as smart watches, smart bands, and pedometers.
Taking a mobile phone as the terminal device 100, the general hardware architecture of the phone is described. As shown in FIG. 1, the phone may include components such as a radio frequency (RF) circuit 110, a memory 120, a communication interface 130, a display screen 140, a sensor 150, an audio circuit 160, an I/O subsystem 170, a processor 180, and a camera 190. A person skilled in the art will understand that the structure shown in FIG. 1 does not limit the phone, which may include more or fewer components than illustrated, combine some components, split some components, or arrange the components differently. A person skilled in the art will understand that the display screen 140 belongs to the user interface (UI) and may include a display panel 141 and a touch panel 142. The phone may include more or fewer components than shown; although not illustrated, it may also include functional modules or devices such as a power supply and a Bluetooth module, which are not described here.
Further, the processor 180 is connected to the RF circuit 110, the memory 120, the audio circuit 160, the I/O subsystem 170, and the camera 190. The I/O subsystem 170 is connected to the communication interface 130, the display screen 140, and the sensor 150.
The RF circuit 110 can be used to receive and send signals during information transfer or a call; in particular, it receives downlink information from the base station and passes it to the processor 180 for processing.
The memory 120 can be used to store software programs and modules. The processor 180 runs the software programs and modules stored in the memory 120 to perform the phone's functional applications and data processing.
The communication interface 130 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the phone.
The display screen 140 can be used to display information entered by the user or provided to the user as well as the phone's menus, and can also accept user input. Specifically, the display screen 140 may include a display panel 141 and a touch panel 142. The display panel 141 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. The touch panel 142, also called a touch screen or touch-sensitive screen, can collect contact or contactless operations on or near it (for example, operations performed by the user with a finger, a stylus, or any other suitable object or accessory on or near the touch panel 142, possibly including somatosensory operations; the operations include single-point and multi-point control operations) and drives the corresponding connection device according to a preset program.
The sensor 150 can be a light sensor, a motion sensor, or another sensor.
The audio circuit 160 can provide an audio interface between the user and the phone. The I/O subsystem 170 controls external input/output devices, which may include other device input controllers, sensor controllers, and a display controller.
The processor 180 is the control centre of the phone 200: it connects the parts of the phone using various interfaces and lines and, by running or executing software programs and/or modules stored in the memory 120 and invoking data stored in the memory 120, performs the phone's functions and processes data, thereby monitoring the phone as a whole.
The camera 190 can also serve as an input device; specifically, it converts collected analogue video or image signals into digital signals and stores them in the memory 120. The camera 190 may include a front camera, a rear camera, a built-in camera, an external camera, and so on; the embodiments of this application place no limitation on this, and a dual camera is used as the example in the description.
以下,将结合具体实施例详细阐述本申请实施例提供的一种图像处理的方法,该方法的执行主体为终端,如图2所示,该方法包括:
201、确定待处理图像中的主体物体和背景。
其中,待处理图像可以为终端的双摄像头实时采集的图像,也可以为静态图像。
在一种可能的实现方式中,终端可以将待处理图像划分为预设数量的区域,然后检测每个区域的颜色,确定出每种颜色对应的区域数量,将区域数量最多的颜色对应的区域确定为背景,将区域数量次多的颜色对应的区域确 定为主体物体,或者也可以将待处理图像中除背景之外的区域确定为主体物体。
例如,如图3a所示,假设待处理图像为草原上站着一个人,则终端可以检测出待处理图像中的大部分区域均为草原的绿色,进而即可确定待处理图像中绿色的区域为背景,除绿色区域之外的区域为主体物体,如图3b所示,终端通过绿色检测器可以检测出图3b示出的黑色区域均为绿色,且该区域的面积占了整个图片面积的很大比例,所以可以确定图3b中示出的黑色区域为背景,除黑色区域之外的部分为主体物体。
In another possible implementation, the terminal may determine the subject object and background of the image to be processed by detecting straight lines in the image. Specifically, the terminal may use the detected straight lines to divide the image into at least two regions, determine any one of the resulting regions as the subject object of the image to be processed, and determine the region of the image other than the subject object as the background. It should be noted that, when this way of determining the subject object and background is used, the terminal may perform the subsequent steps with each region in turn as the subject object, and determine a target reference composition for each such case.

For example, as shown in FIG. 4a, the image to be processed contains a beach, the sea, and the sky; the boundary between the beach and the sea is a straight line, as is the boundary between the sea and the sky. As shown in FIG. 4b, these two lines divide the image into three regions. The terminal may determine any one region as the subject object, with the other two regions as the background; alternatively, the terminal may take region 1, region 2, and region 3 in turn as the subject object and, by performing the subsequent steps, determine the target reference composition with region 1 as the subject object, the target reference composition with region 2 as the subject object, and the target reference composition with region 3 as the subject object.
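The line-based partitioning can be sketched for the horizon-style case of FIG. 4. This Python sketch is illustrative and makes simplifying assumptions not stated in the patent: it looks only for horizontal boundaries, found as rows where the average brightness jumps sharply, rather than running a general line detector.

```python
import numpy as np

def split_by_horizontal_lines(gray, threshold=30.0):
    """Find rows where the mean brightness jumps sharply (candidate
    horizon-style boundary lines) and return those row indices together
    with the row bands they split the image into."""
    row_means = gray.mean(axis=1)
    jumps = np.abs(np.diff(row_means))
    boundaries = [i + 1 for i in np.flatnonzero(jumps > threshold)]
    # regions are the row bands between consecutive boundaries
    edges = [0] + boundaries + [gray.shape[0]]
    regions = [(edges[k], edges[k + 1]) for k in range(len(edges) - 1)]
    return boundaries, regions
```

For a beach/sea/sky image this yields two boundaries and three bands; each band can then be tried in turn as the subject object, as described above.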
The terminal may be configured with multiple composition detectors and may use each detector in turn to examine the image to be processed, so as to determine the subject object and background more accurately. For example, if the first composition detector determines that the background of the image is not a single color, or that the non-background regions of the image are not concentrated (that is, the image may contain multiple subject objects), the terminal can determine that the image does not match the first composition detector. It can then continue with the second composition detector; supposing the second detector is a line detector and it detects straight lines in the image, the subject object and background can be determined from the line detector's results. If the line detector still detects no lines, other composition detectors can be tried in turn.

In addition, the terminal may also identify the objects contained in the image to be processed through deep learning techniques, so as to determine the subject object and background more intelligently.

It should be noted that, if the image to be processed is an image being captured in real time by the terminal's dual cameras, then after determining the subject object, the terminal also needs to determine depth-of-field data based on the subject object, and set the aperture value and focal length according to that data. For example, if the depth-of-field data is within 5 meters, the aperture value may be set to 2.8; if it is beyond 5 meters, the aperture value may be set to 4.
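The depth-to-aperture rule in the example above can be written as a small helper. The 5-meter threshold and the f/2.8 and f/4 values come directly from the example in the text; the two-step mapping itself is a simplification made for illustration.

```python
def aperture_for_depth(depth_m, near_limit_m=5.0, near_f=2.8, far_f=4.0):
    """Map subject depth to an f-number following the example rule:
    f/2.8 for subjects within 5 m, f/4 beyond it. Values illustrative."""
    return near_f if depth_m <= near_limit_m else far_f
```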
202. Determine, according to the subject object and the background in the image to be processed, a target reference composition corresponding to the image to be processed.

Various reference compositions are pre-stored in the terminal. The terminal may match the features of the subject object and the background of the image to be processed against the pre-stored reference compositions, and then select the target reference composition from among them.

It should be noted that the embodiments of this application do not limit the number of target reference compositions; when the image to be processed matches multiple pre-stored reference compositions, multiple target reference compositions may be determined in this step.
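One way to realize the matching in step 202 is sketched below. The patent does not fix a concrete representation for a reference composition; here it is assumed, purely for illustration, that each pre-stored composition records an ideal subject area ratio and an ideal normalized subject center, and that every composition within a tolerance counts as a match (so multiple matches are possible, as noted above).

```python
import math

def match_reference_compositions(subject_box, image_size, references, tol=0.25):
    """Score each pre-stored reference composition by how close the
    subject's area ratio and center are to the composition's ideal,
    and return the names of all compositions within tolerance."""
    x, y, w, h = subject_box
    iw, ih = image_size
    ratio = (w * h) / (iw * ih)                      # subject / whole picture
    cx, cy = (x + w / 2) / iw, (y + h / 2) / ih      # normalized center
    matches = []
    for ref in references:
        d = math.hypot(cx - ref["center"][0], cy - ref["center"][1])
        if abs(ratio - ref["ratio"]) <= tol and d <= tol:
            matches.append(ref["name"])
    return matches
```

A subject near the center of the frame can then match both a rule-of-thirds reference and a centered reference, which is the multiple-target case handled in steps 203 and onward.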
203. Calibrate the image to be processed against the target reference composition according to the size of the subject object and the position of the subject object in the image to be processed, to obtain a target image.

It can be understood that, after the image to be processed has been calibrated against the target reference composition, a target image conforming to the composition style of the target reference composition is obtained.

It should be noted that, if multiple target reference compositions were determined in step 202, the image to be processed needs to be calibrated against each of them separately in this step, yielding a target image conforming to each target reference composition.

After multiple target images have been determined, the user may select one of them as the final target image, or the terminal may select one at random; if the terminal's random choice does not meet the user's requirements, the user can manually select a different target image.

Illustratively, as shown in FIG. 5, suppose three target reference compositions are determined in step 202 and three target images are obtained from them: target image 1, target image 2, and target image 3. The terminal may receive a selection instruction input by the user and, according to the selection instruction, take the user-selected target image as the final target image. For example, in FIG. 5, if the user taps target image 2, the terminal takes target image 2 as the final target image.

Compared with the prior art, where photos turn out poorly composed because users lack composition experience, the image processing method provided by the embodiments of this application lets the terminal determine a target reference composition for the image to be processed based on its subject object and background, so the photographer needs no composition experience. The terminal automatically calibrates the image to be processed against the target reference composition to obtain a target image conforming to it; this calibration improves the aesthetics of the target image and gives the photos taken by the terminal better visual quality.
In one possible implementation provided by the embodiments of this application, as shown in FIG. 6, the above step 203 (calibrating the image to be processed against the target reference composition according to the size of the subject object and its position in the image, to obtain the target image) may be specifically implemented as:

2031. Determine calibration parameters according to the target reference composition.

The calibration parameters include at least the standard proportion of the whole picture that the subject object should occupy and the position of the subject object within the whole picture.

2032. Perform a calibration operation on the image to be processed according to the calibration parameters, to obtain a target image conforming to the calibration parameters.

The calibration operation on the image to be processed may be an operation such as cropping or rotation.

It can be understood that, after the image to be processed is calibrated according to the calibration parameters, the resulting target image will conform to the target reference composition. As an example, take the image in FIG. 3 as the image to be processed and suppose one target reference composition determined from it is the rule-of-thirds composition. The rule of thirds divides the scene vertically into three equal parts with two horizontal lines and horizontally into three equal parts with two vertical lines; this is equivalent to splitting the scene with two horizontal and two vertical lines, similar to the character "井", yielding four intersection points, and the subject object is placed on one of these intersections. The image shown in FIG. 3 does not conform to the rule of thirds, so the terminal can compute calibration parameters and crop the image accordingly to obtain a target image that does; as shown in FIG. 7, cropping the image along the thick lines in FIG. 7 yields a target image conforming to the rule of thirds.

It should be noted that, if the image to be processed is a live image being captured by the camera, the terminal may display the rule-of-thirds grid on the shooting interface. As shown in FIG. 8, the four intersection points in FIG. 8 mark positions for the subject object; displaying the rule-of-thirds grid prompts the user to adjust the image currently being captured so that the actual subject object coincides with the position indicated in the grid, thereby capturing a target image that conforms to the rule of thirds. Supposing another target reference composition determined from the image in FIG. 3 is a centered composition, the centered composition can be offered as a target-reference-composition option: if the user selects the rule of thirds, the shooting interface displays the rule-of-thirds grid; if the user selects the centered composition, the shooting interface displays the centered composition.
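The rule-of-thirds cropping of steps 2031 to 2032 can be sketched as follows. This Python sketch is illustrative and not from the patent: it assumes the calibration parameters reduce to a desired crop window size plus the requirement that the subject center land on a thirds intersection, and it simply clamps the window to the image bounds.

```python
def thirds_crop(image_size, subject_center, crop_size):
    """Compute a crop window of `crop_size` that places the subject
    center on the nearest rule-of-thirds intersection of the window.
    Returns (origin_x, origin_y, width, height)."""
    iw, ih = image_size
    sx, sy = subject_center
    cw, ch = crop_size
    best = None
    # try all four thirds intersections of the crop window
    for fx in (1 / 3, 2 / 3):
        for fy in (1 / 3, 2 / 3):
            # crop origin that puts the subject at this intersection,
            # clamped so the window stays inside the image
            ox = min(max(sx - fx * cw, 0), iw - cw)
            oy = min(max(sy - fy * ch, 0), ih - ch)
            cost = abs((sx - ox) - fx * cw) + abs((sy - oy) - fy * ch)
            if best is None or cost < best[0]:
                best = (cost, (int(ox), int(oy), cw, ch))
    return best[1]
```

A centered subject in a 300 by 300 image, for example, is moved onto the upper-left thirds intersection of a 150 by 150 crop window, which is the kind of crop shown by the thick lines of FIG. 7.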
For the embodiments of this application, the terminal can process an image to be processed that is captured in real time: it detects the image currently captured by the camera to determine the target reference composition, and displays that composition on the shooting interface. The user can then adjust the framing according to the target reference composition displayed on the shooting interface so that the captured picture conforms to it, and thereby take more aesthetically pleasing photos.

The above mainly describes the solutions provided by the embodiments of the present invention from the perspective of the terminal. It can be understood that the terminal includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art should readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, this application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.

The embodiments of this application may divide the terminal into functional modules according to the above method examples; for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of this application is illustrative and is merely a logical functional division; there may be other division methods in actual implementation.
In the case where each functional module is divided corresponding to each function, the embodiments of this application further provide an image processing apparatus, which may be implemented as the terminal in the above embodiments. As shown in FIG. 9, FIG. 9 is a schematic diagram of a possible structure of the terminal involved in the above embodiments. The terminal includes a determining module 901 and a calibration module 902.

The determining module 901 is configured to support the terminal in performing steps 201 and 202 in FIG. 2, and the calibration module 902 is configured to support the terminal in performing step 203 in FIG. 2 and steps 2031 to 2032 in FIG. 6.

All relevant content of the steps involved in the above method embodiments can be cited in the functional descriptions of the corresponding functional modules, and is not repeated here.

In the case of an integrated unit, it should be noted that the determining module 901 and the calibration module 902 shown in FIG. 9 may be integrated into the processor 180 shown in FIG. 1, so that the processor 180 performs the specific functions of the determining module 901 and the calibration module 902.

The embodiments of this application further provide a computer storage medium for storing computer software instructions used by the above terminal, which contains a program designed to execute the steps performed by the terminal in the above embodiments.
The steps of the methods or algorithms described in connection with the disclosure of this application may be implemented in hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium well known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be a component of the processor. The processor and the storage medium may reside in an ASIC. In addition, the ASIC may reside in a core network interface device. Of course, the processor and the storage medium may also exist as discrete components in a core network interface device.

In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical functional division, and there may be other division methods in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical or in other forms.

The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network devices. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist independently, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of hardware plus software functional units.

From the description of the above embodiments, those skilled in the art can clearly understand that this application can be implemented by means of software plus the necessary general-purpose hardware, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a readable storage medium, such as a computer floppy disk, hard disk, or optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments of this application.

The above are merely specific implementations of this application, but the protection scope of this application is not limited thereto; any variation or replacement within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (15)

  1. An image processing method, comprising:
    determining a subject object and a background in an image to be processed;
    determining, according to the subject object and the background in the image to be processed, a target reference composition corresponding to the image to be processed;
    calibrating the image to be processed against the target reference composition according to a size of the subject object and a position of the subject object in the image to be processed, to obtain a target image.
  2. The image processing method according to claim 1, wherein the determining a subject object and a background in an image to be processed comprises:
    dividing the image to be processed into a preset number of regions;
    detecting a color of each region and determining a number of regions corresponding to each color;
    determining regions corresponding to a color with a largest region count as the background;
    determining regions corresponding to a color with a second-largest region count, or a region of the image to be processed other than the background, as the subject object.
  3. The image processing method according to claim 1, wherein the determining a subject object and a background in an image to be processed comprises:
    detecting straight lines in the image to be processed;
    dividing the image to be processed into at least two regions by using the detected straight lines;
    determining any one of the divided regions as the subject object of the image to be processed;
    determining a region of the image to be processed other than the subject object as the background.
  4. The image processing method according to claim 2 or 3, wherein the determining, according to the subject object and the background in the image to be processed, a target reference composition corresponding to the image to be processed comprises:
    matching the image to be processed against each pre-stored reference composition, and determining a target reference composition that matches the image to be processed.
  5. The image processing method according to claim 4, wherein the calibrating the image to be processed against the target reference composition according to a size of the subject object and a position of the subject object in the image to be processed, to obtain a target image, comprises:
    determining calibration parameters according to the target reference composition, wherein the calibration parameters comprise at least a standard proportion of a whole picture that the subject object should occupy and a position of the subject object in the whole picture;
    performing a calibration operation on the image to be processed according to the calibration parameters, to obtain a target image conforming to the calibration parameters.
  6. An image processing apparatus, comprising:
    a determining module, configured to determine a subject object and a background in an image to be processed, and to determine, according to the subject object and the background in the image to be processed, a target reference composition corresponding to the image to be processed;
    a calibration module, configured to calibrate the image to be processed against the target reference composition according to a size of the subject object and a position of the subject object in the image to be processed, to obtain a target image.
  7. The image processing apparatus according to claim 6, wherein
    the determining module is further configured to: divide the image to be processed into a preset number of regions; detect a color of each region and determine a number of regions corresponding to each color; determine regions corresponding to a color with a largest region count as the background; and determine regions corresponding to a color with a second-largest region count, or a region of the image to be processed other than the background, as the subject object.
  8. The image processing apparatus according to claim 6, wherein the determining module is further configured to: detect straight lines in the image to be processed; divide the image to be processed into at least two regions by using the detected straight lines; determine any one of the divided regions as the subject object of the image to be processed; and determine a region of the image to be processed other than the subject object as the background.
  9. The image processing apparatus according to claim 7 or 8, wherein
    the determining module is further configured to match the image to be processed against each pre-stored reference composition and determine a target reference composition that matches the image to be processed.
  10. The image processing apparatus according to claim 9, wherein the calibration module is further configured to: determine calibration parameters according to the target reference composition, the calibration parameters comprising at least a standard proportion of a whole picture that the subject object should occupy and a position of the subject object in the whole picture; and perform a calibration operation on the image to be processed according to the calibration parameters, to obtain a target image conforming to the calibration parameters.
  11. An image processing apparatus, comprising:
    a memory, configured to store information comprising program instructions;
    a processor, coupled to the memory and configured to control execution of the program instructions, and specifically configured to: determine a subject object and a background in an image to be processed; determine, according to the subject object and the background in the image to be processed, a target reference composition corresponding to the image to be processed; and calibrate the image to be processed against the target reference composition according to a size of the subject object and a position of the subject object in the image to be processed, to obtain a target image.
  12. The image processing apparatus according to claim 11, wherein
    the processor is further configured to: divide the image to be processed into a preset number of regions; detect a color of each region and determine a number of regions corresponding to each color; determine regions corresponding to a color with a largest region count as the background; and determine regions corresponding to a color with a second-largest region count, or a region of the image to be processed other than the background, as the subject object.
  13. The image processing apparatus according to claim 11, wherein
    the processor is further configured to: detect straight lines in the image to be processed; divide the image to be processed into at least two regions by using the detected straight lines; determine any one of the divided regions as the subject object of the image to be processed; and determine a region of the image to be processed other than the subject object as the background.
  14. The image processing apparatus according to claim 12 or 13, wherein
    the processor is further configured to match the image to be processed against each pre-stored reference composition and determine a target reference composition that matches the image to be processed.
  15. The image processing apparatus according to claim 14, wherein
    the processor is further configured to: determine calibration parameters according to the target reference composition, the calibration parameters comprising at least a standard proportion of a whole picture that the subject object should occupy and a position of the subject object in the whole picture; and perform a calibration operation on the image to be processed according to the calibration parameters, to obtain a target image conforming to the calibration parameters.
PCT/CN2017/088085 2017-01-19 2017-06-13 Method and apparatus for image processing WO2018133305A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201780007318.4A CN109479087B (zh) 2017-01-19 2017-06-13 Method and apparatus for image processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710044420.3 2017-01-19
CN201710044420 2017-01-19

Publications (1)

Publication Number Publication Date
WO2018133305A1 (zh)

Family

ID=62907547

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/088085 WO2018133305A1 (zh) 2017-01-19 2017-06-13 Method and apparatus for image processing

Country Status (2)

Country Link
CN (1) CN109479087B (zh)
WO (1) WO2018133305A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111432122A (zh) * 2020-03-30 2020-07-17 Vivo Mobile Communication Co., Ltd. Image processing method and electronic device
CN112037160A (zh) * 2020-08-31 2020-12-04 Vivo Mobile Communication Co., Ltd. Image processing method, apparatus, and device
CN112037160B (zh) * 2020-08-31 2024-03-01 Vivo Mobile Communication Co., Ltd. Image processing method, apparatus, and device

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN113206956B (zh) * 2021-04-29 2023-04-07 Vivo Mobile Communication (Hangzhou) Co., Ltd. Image processing method, apparatus, device, and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
US5873007A * 1997-10-28 1999-02-16 Sony Corporation Picture composition guidance system
CN101000451A * 2006-01-10 2007-07-18 英保达股份有限公司 Automatic composition support apparatus and method
CN103384304A * 2012-05-02 2013-11-06 Sony Corporation Display control device, display control method, program, and recording medium
CN104243787A * 2013-06-06 2014-12-24 Huawei Technologies Co., Ltd. Photographing method, photo management method, and device
CN106131418A * 2016-07-19 2016-11-16 Tencent Technology (Shenzhen) Co., Ltd. Composition control method, apparatus, and photographing device
CN106131411A * 2016-07-14 2016-11-16 Ninebot (Beijing) Tech Co., Ltd. Method and apparatus for capturing images

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN104917951A * 2014-03-14 2015-09-16 Acer Inc. Camera device and portrait-shooting assistance method thereof


Also Published As

Publication number Publication date
CN109479087B (zh) 2020-11-17
CN109479087A (zh) 2019-03-15


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17892834; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17892834; Country of ref document: EP; Kind code of ref document: A1)