WO2019029573A1 - Image blurring method, computer-readable storage medium, and computer device - Google Patents


Info

Publication number
WO2019029573A1
WO2019029573A1 (application PCT/CN2018/099403)
Authority
WO
WIPO (PCT)
Prior art keywords
area
image
face
distance information
blurring
Prior art date
Application number
PCT/CN2018/099403
Other languages
English (en)
French (fr)
Inventor
丁佳铭
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Publication of WO2019029573A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T 5/77 — Image enhancement or restoration: retouching; inpainting; scratch removal
    • G06T 5/90 — Image enhancement or restoration: dynamic range modification of images or parts thereof
    • G06T 7/50 — Image analysis: depth or shape recovery
    • G06V 10/25 — Image preprocessing: determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 40/166 — Human faces: detection; localisation; normalisation using acquisition arrangements
    • G06T 2207/10004 — Image acquisition modality: still image; photographic image
    • G06T 2207/30196 — Subject of image: human being; person
    • G06T 2207/30201 — Subject of image: face

Definitions

  • the present application relates to the field of computer technology, and in particular, to an image blurring method, apparatus, computer readable storage medium, and computer device.
  • Photographic scenes are often complex and varied.
  • To make the photographed subject more prominent and give the image a sense of depth, the usual approach is to keep the subject sharp while blurring the area outside it.
  • Blurring softens the region outside the subject, making the subject stand out.
  • In the traditional method, the subject in the image is first identified, and the area outside the subject is then blurred to a fixed degree, so that the background and the subject are displayed differently.
  • Embodiments of the present application provide an image blurring method, a computer readable storage medium, and a computer device.
  • An image blurring method comprising:
  • One or more non-transitory computer readable storage media containing computer executable instructions that, when executed by one or more processors, cause the processor to perform the following operations:
  • a computer device comprising a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, cause the processor to perform the following operations:
  • the image blurring method, apparatus, computer readable storage medium and computer device provided by the embodiments of the present application first detect a face region in the image to be processed, obtain the blurring intensity of the background region according to the physical distance information of the face region, and then blur the background region according to that intensity.
  • the physical distance information reflects the distance between the face and the lens, so different distances yield different blurring intensities, which makes the blurring process more precise.
  • FIG. 1 is a schematic diagram showing the internal structure of an electronic device in an embodiment
  • FIG. 2 is a schematic diagram showing the internal structure of a server in an embodiment
  • FIG. 3 is a flow chart of an image blurring method in an embodiment
  • FIG. 5 is a schematic diagram of acquiring physical distance information in an embodiment
  • FIG. 6 is a schematic structural diagram of an image blurring device in an embodiment
  • FIG. 7 is a schematic structural diagram of an image blurring device in another embodiment
  • FIG. 8 is a schematic illustration of an image processing circuit in one embodiment.
  • a first client may be referred to as a second client, and similarly,
  • a second client may be referred to as a first client, without departing from the scope of the present application.
  • Both the first client and the second client are clients, but they are not the same client.
  • FIG. 1 is a schematic diagram showing the internal structure of an electronic device in an embodiment.
  • the electronic device includes a processor, a non-volatile storage medium, an internal memory, a network interface, a display screen, and an input device connected through a system bus.
  • the non-volatile storage medium of the electronic device stores an operating system and computer readable instructions.
  • the computer readable instructions are executed by a processor to implement an image blurring method.
  • the processor is used to provide computing and control capabilities to support the operation of the entire electronic device.
  • the internal memory in the electronic device provides an environment for the operation of computer readable instructions in a non-volatile storage medium.
  • the network interface is used for network communication with the server, such as sending an image blur request to the server, receiving the blurred image returned by the server, and the like.
  • the display screen of the electronic device may be a liquid crystal display or an electronic ink display screen
  • the input device may be a touch layer covering the display screen, a button, trackball or touchpad provided on the outer casing of the electronic device, or an external keyboard, trackpad or mouse.
  • the electronic device can be a cell phone, a tablet, a personal digital assistant, or a wearable device.
  • A person skilled in the art can understand that the structure shown in FIG. 1 is only a block diagram of the part of the structure related to the solution of the present application and does not constitute a limitation on the electronic device to which the solution is applied.
  • a specific electronic device may include more or fewer components than shown in the figures, combine some components, or have a different arrangement of components.
  • the server includes a processor, a non-volatile storage medium, an internal memory, and a network interface connected by a system bus.
  • the non-volatile storage medium of the server stores an operating system and computer readable instructions.
  • the computer readable instructions are executed by a processor to implement an image blurring method.
  • the server's processor is used to provide computing and control capabilities that support the operation of the entire server.
  • the network interface of the server is configured to communicate with an external terminal through a network connection, such as receiving an image blur request sent by the terminal, and returning the blurred image to the terminal.
  • the server can be implemented with a stand-alone server or a server cluster consisting of multiple servers.
  • FIG. 2 is only a block diagram of the part of the structure related to the solution of the present application and does not constitute a limitation on the server to which the solution is applied.
  • a specific server may include more or fewer components than shown in the figures, combine some components, or have a different arrangement of components.
  • FIG. 3 is a flow chart of an image blurring method in one embodiment. As shown in FIG. 3, the image blurring method includes operations 302 through 306, wherein:
  • Operation 302 obtaining an image to be processed.
  • the image to be processed refers to an image that needs to be blurred, and can be collected by the image capturing device.
  • the image acquisition device refers to a device for acquiring an image.
  • the image acquisition device may be a still camera, a camera on a mobile terminal, a video camera, or the like.
  • after receiving an image blurring instruction, the user terminal may perform the blurring process on the image to be processed directly, or may send an image blurring request to the server so that the blurring process is performed on the server.
  • the image blurring instruction may be input by the user, or may be automatically triggered by the user terminal.
  • the user inputs a photographing instruction through the user terminal, and after detecting the photographing instruction, the mobile terminal collects the image to be processed through the camera. Then, the image blurring instruction is automatically triggered, and the image to be processed is blurred.
  • the photographing instruction may be triggered by a physical button or a touch screen operation of the mobile terminal, or may be a voice instruction or the like.
  • Operation 304 detecting a face region in the image to be processed, and acquiring physical distance information corresponding to the face region.
  • the face area refers to the area where the face is located in the image to be processed
  • the physical distance information refers to a related parameter indicating the physical distance between the image collection device and the object corresponding to each pixel point in the image to be processed.
  • the physical distance information corresponding to the face region refers to a related parameter of the physical distance between the image capturing device and the face.
  • the feature points in the image to be processed may first be identified, then extracted and matched against a preset face model. If the extracted feature points match the preset face model, the area where those feature points are located is the face region.
  • the image to be processed is composed of a plurality of pixels, each pixel having corresponding physical distance information indicating the physical distance of the object represented by the pixel to the image capturing device.
  • a plurality of face regions may exist in the image to be processed. After the face regions in the image to be processed are detected, the area of each face region may be acquired, the physical distance information corresponding to the face region with the largest area is obtained, and the background blurring intensity is obtained according to that physical distance information.
  • the area of a face region refers to the size of the region, which may be represented by the number of pixels the face region contains, or by the ratio of the size of the area occupied by the face region to the size of the image to be processed.
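As a minimal sketch of the area comparison described above (the bounding-box representation and all names are assumptions for illustration, not from the patent), the face region with the largest pixel area can be selected as follows:

```python
def region_area(box):
    # Area of a face region given as (x, y, width, height), in pixels.
    _, _, w, h = box
    return w * h

def largest_face_region(boxes):
    # Pick the face region covering the most pixels.
    return max(boxes, key=region_area)

faces = [(10, 10, 40, 50), (100, 20, 80, 90), (200, 30, 20, 25)]
base = largest_face_region(faces)  # the 80x90 region
```

The same comparison could equally use the area-to-image-size ratio mentioned in the text; only the key function changes.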
  • the physical distance information within the effective distance range can be represented by an accurate numerical value
  • the physical distance information exceeding the effective distance range is represented by a fixed numerical value
  • the operation 304 may include: detecting a face region within a preset distance range in the image to be processed, and acquiring physical distance information corresponding to the face region.
  • the preset distance range may be, but is not limited to, the value range of valid physical distance information.
  • Operation 306: the background blurring intensity is obtained according to the physical distance information, and the background area in the image to be processed is blurred according to the background blurring intensity.
  • blurring processing blurs the image according to the blurring intensity; different blurring intensities produce different degrees of blur.
  • the background area may refer to an area other than the face area or the portrait area in the image to be processed.
  • the portrait area refers to the area where the entire portrait in the image to be processed is located.
  • the background blurring intensity refers to a parameter indicating the degree of blurring of the background area. The background blurring intensity is obtained according to the physical distance information of the face region, and the background region is then blurred according to that intensity, so the resulting blur changes with the actual physical distance from the face to the image capturing device. Generally, the larger the physical distance, the smaller the background blurring intensity and the lighter the blur applied to the background area; the smaller the physical distance, the greater the background blurring intensity and the heavier the blur.
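The text specifies only this inverse relationship, not a concrete mapping; the sketch below is one hedged possibility, with the constants and the function name chosen purely for illustration:

```python
def background_blur_strength(distance_mm, max_strength=25.0, scale=1000.0):
    # Larger face-to-lens distance -> smaller blur strength, and vice
    # versa, matching the inverse relationship described in the text.
    # max_strength and scale are illustrative constants, not from the
    # patent.
    return max_strength * scale / (scale + distance_mm)
```

Any monotonically decreasing mapping (including a lookup table, as the correspondence-relationship embodiment below suggests) would satisfy the described behavior.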
  • the image blurring method firstly detects a face region in the image to be processed, and obtains a blurring intensity of the background region according to the physical distance information of the face region, and then blurs the background region according to the blurring intensity.
  • the physical distance information reflects the distance between the face and the lens, so different distances yield different blurring intensities.
  • the degree of blurring changes with the physical distance information, so the blurring effect adapts to different shooting scenes and the blurring process is more precise.
  • the image blurring method includes operations 402 through 410, wherein:
  • Operation 402 obtaining an image to be processed.
  • the image to be processed may be acquired locally or from the server.
  • the user terminal may directly acquire the corresponding image to be processed according to the image storage address and the image identifier included in the image blurring instruction.
  • the image storage address can be local to the user terminal or it can be on the server.
  • the image to be processed may be blurred locally, or may be blurred on the server.
  • Operation 404 detecting a face area in the image to be processed, and acquiring physical distance information corresponding to each face area in the image to be processed.
  • a dual camera can be installed on the image acquisition device, and the physical distance information between the image acquisition device and the object is measured by the dual camera.
  • images of the object are captured by the first camera and the second camera respectively, and a first angle and a second angle are obtained from the images. The first angle is the angle between the line from the first camera to the object and the line from the first camera to the second camera; the second angle is the angle between the line from the second camera to the object and the line from the second camera to the first camera. The physical distance information between the image capturing device and the object is then obtained from the first angle, the second angle and the distance between the first camera and the second camera.
  • Figure 5 is a schematic diagram of obtaining physical distance information in one embodiment.
  • an image of the object 506 is captured by the first camera 502 and the second camera 504 respectively, and the first angle A1 and the second angle A2 can be acquired from the images. Then, according to the first angle A1, the second angle A2 and the distance T between the first camera 502 and the second camera 504, the physical distance D between the object 506 and the line from the first camera 502 to the second camera 504 can be obtained.
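The geometry above reduces to a single formula: with baseline T and angles A1, A2 measured between each camera's line of sight to the object and the baseline, the perpendicular distance is D = T·tan(A1)·tan(A2)/(tan(A1)+tan(A2)). A sketch (the function name is an assumption):

```python
import math

def object_distance(a1_deg, a2_deg, baseline):
    # Perpendicular distance D from the camera baseline to the object.
    # x1 = D / tan(A1) and x2 = D / tan(A2) are the horizontal offsets
    # of the object's foot point from each camera, and x1 + x2 = baseline.
    t1 = math.tan(math.radians(a1_deg))
    t2 = math.tan(math.radians(a2_deg))
    return baseline * t1 * t2 / (t1 + t2)

# Symmetric 45-degree case: the object sits 1 unit in front of the
# midpoint of a 2-unit baseline.
d = object_distance(45.0, 45.0, 2.0)  # 1.0
```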
  • the image to be processed may also include multiple face regions.
  • Each face region in the image to be processed is extracted, and physical distance information corresponding to the face region is obtained.
  • the depth map corresponding to the scene may be acquired at the same time.
  • the obtained depth map corresponds to the image pixel by pixel, and each value in the depth map represents the physical distance information of the corresponding pixel in the image. That is, the corresponding depth map can be acquired at the same time as the image to be processed, and after the face region is detected, the corresponding physical distance information can be looked up in the depth map according to the pixel coordinates of the face region.
  • the face region contains a plurality of pixels. After the physical distance information of each pixel in the face region is obtained, the values of all pixels in the face region may be averaged, or the value of a particular pixel may be taken to represent the face region. For example, the physical distance information of the central pixel of the face region may be used to represent the physical distance information of the whole face region.
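The two alternatives just described, averaging over the face region versus sampling its central pixel, can be sketched as follows (the depth-map layout, units, and names are assumptions for illustration):

```python
import numpy as np

def face_depth_mean(depth_map, box):
    # Average physical distance over all pixels of the face region;
    # box is (x, y, width, height) in image coordinates.
    x, y, w, h = box
    return float(depth_map[y:y + h, x:x + w].mean())

def face_depth_center(depth_map, box):
    # Alternative: distance of the face region's central pixel.
    x, y, w, h = box
    return float(depth_map[y + h // 2, x + w // 2])

depth = np.full((240, 320), 1500.0)   # background at 1.5 m (values in mm)
depth[60:140, 100:180] = 800.0        # face pixels at 0.8 m
face_box = (100, 60, 80, 80)
```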
  • Operation 406: the background blurring intensity is obtained according to the physical distance information, and the background area in the image to be processed is blurred according to the background blurring intensity.
  • the portrait is considered to be on the same vertical plane as the face, and the physical distance of the portrait to the image capture device is within the same range as the physical distance of the face to the image capture device. Therefore, after the physical distance information and the face area are acquired, the portrait area in the image to be processed can be acquired according to the physical distance corresponding to the face area, and then the background area can be determined in the image to be processed according to the portrait area.
  • the face area in the image to be processed is detected, and the range of the portrait distance is obtained according to the physical distance information corresponding to the face area, and the portrait area in the image to be processed can be obtained according to the range of the portrait distance, and then the background is obtained according to the portrait area. region.
  • the portrait distance range refers to the range of values of the physical distance information corresponding to the portrait area in the image to be processed. Since the physical distance from the image capturing device to the face and the physical distance to the rest of the body can be regarded as approximately equal, the physical distance information corresponding to the face region is acquired after the face region is detected, and the range of the physical distance information corresponding to the portrait area can then be determined from it.
  • the physical distance information in the range is considered to be the physical distance information corresponding to the portrait area, and the physical distance information outside the range is regarded as the physical distance information of the background area.
  • the method further includes: acquiring a range of the portrait distance according to the physical distance information corresponding to the face region, and acquiring the image region in the image to be processed according to the physical distance information in the range of the portrait distance; acquiring the color information of the image region, And obtaining a background area other than the portrait area in the image to be processed according to the color information.
  • the image area extracted according to the portrait distance range is the area of objects in the image to be processed that lie within the same physical distance range as the face. If there are other objects beside the person, the extracted image area may contain objects other than the portrait area. In that case, the portrait area can be further extracted according to the color information of the image area.
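Selecting the image area by the portrait distance range can be sketched like this; the tolerance value and all names are assumptions, since the text only says the range is derived from the face distance:

```python
import numpy as np

def portrait_distance_mask(depth_map, face_distance, tolerance=300.0):
    # True for pixels whose distance lies within the portrait distance
    # range around the face distance; everything else is background.
    # The +/- tolerance (here in mm) is an illustrative assumption.
    lo, hi = face_distance - tolerance, face_distance + tolerance
    return (depth_map >= lo) & (depth_map <= hi)

depth = np.full((4, 4), 2000.0)   # background at 2 m (values in mm)
depth[1:3, 1:3] = 900.0           # person at 0.9 m
mask = portrait_distance_mask(depth, 900.0)
```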
  • the color information refers to a related parameter used to represent the color of the image, and for example, the color information may include information such as hue, saturation, brightness, and the like of the color in the image.
  • the hue of a color refers to its angular measure, which ranges from 0° to 360° and is measured counterclockwise starting from red: red is 0°, green is 120°, and blue is 240°.
  • Saturation refers to how close the color is to a pure spectral color. Generally, the higher the saturation, the more vivid the color; the lower the saturation, the duller the color. Brightness indicates how light or dark the color is.
  • different objects present different color information in the image: for example, trees are green, the sky is blue, and the earth is yellow.
  • the portrait area and the background area outside it can therefore be extracted based on the color information in the image area.
  • the color component of the image region is acquired, and an area in the image region in which the color component is within the preset range is extracted as the portrait region.
  • the color component refers to an image component generated by converting an image to be processed into an image of a certain color dimension.
  • the color component may refer to an RGB color component, a CMY color component, an HSV color component, etc. of the image; it is understood that RGB, CMY and HSV color components can be converted into one another.
  • the HSV color component of the image region is acquired, and an area of the image region in which the HSV color component is within a preset range is extracted as the portrait region.
  • the HSV color component refers to the hue (H), saturation (S), and lightness (V) components of the image, respectively, and respectively sets a preset range for the three components, and the three components in the image region are The area within the preset range is extracted as a portrait area.
  • the HSV color component is used to obtain the portrait region: specifically, the HSV color component of the image region is acquired, and the area satisfying the condition "H value between 20 and 25, S value between 10 and 50, and V value between 50 and 85" is taken as the portrait area.
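Using the quoted ranges, the thresholding can be sketched as below; the component scales are assumptions, since the text does not state the units of H, S, and V:

```python
import numpy as np

def portrait_hsv_mask(h, s, v):
    # Keep pixels whose components fall inside the quoted ranges:
    # H in [20, 25], S in [10, 50], V in [50, 85].
    return ((h >= 20) & (h <= 25) &
            (s >= 10) & (s <= 50) &
            (v >= 50) & (v <= 85))

h = np.array([[22.0, 90.0]])
s = np.array([[30.0, 30.0]])
v = np.array([[60.0, 60.0]])
mask = portrait_hsv_mask(h, s, v)   # first pixel kept, second rejected
```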
  • the operation 406 may include: acquiring the area of each face region, obtaining the background blurring intensity according to the physical distance information and the region areas, and blurring the background region in the image to be processed according to the background blurring intensity.
  • each face region has corresponding physical distance information, and the background blur strength is obtained according to the acquired physical distance information.
  • the area of the area corresponding to each face area may be first acquired, and the background blur strength may be obtained according to the area area and the physical distance information.
  • the background blur strength is obtained according to the physical distance information corresponding to the face region having the largest or smallest region area. It is also possible to obtain the physical distance information corresponding to each face region, and obtain the background blur strength according to the average value of the physical distance information corresponding to each face region.
  • the physical distance information has a corresponding relationship with the background blur strength.
  • the background blur strength can be obtained according to the physical distance information and the corresponding relationship.
  • the background area is blurred according to the background blur strength.
  • Operation 408 Obtain a portrait blurring intensity corresponding to each face region in the image to be processed according to the physical distance information corresponding to each face region.
  • the portrait region corresponding to the face region may be further blurred. The portrait blurring intensity is obtained according to the physical distance information corresponding to the face region and represents the degree to which the portrait area is blurred.
  • Operation 410: the portrait area corresponding to the face region is blurred according to the portrait blurring intensity.
  • the area of each face region is obtained; the face region with the largest area is used as the base region, and the face regions other than the base region are used as face blur regions. The portrait blurring intensity corresponding to each face blur region is obtained according to the physical distance information of the base region and of that face blur region, and the portrait region corresponding to the face blur region is blurred according to this portrait blurring intensity.
  • the background blur strength is obtained according to the physical distance information corresponding to the basic region.
  • the face area is divided into the base area and the face blur area, and the base area and the face blur area are treated to different degrees.
  • the portrait area corresponding to the base area is not blurred, and the portrait area corresponding to the face blur area needs to be blurred.
  • the portrait blurring intensity corresponding to the face blur region is obtained.
  • the image to be processed includes three face regions A, B, and C, whose corresponding physical distance information is D_a, D_b and D_c respectively.
  • the area of the A area is the largest, and the A area is used as the basic area, and the B area and the C area are used as the blurred area of the face.
  • the physical distance information corresponding to the A area has a corresponding relationship with the background blur strength. After the physical distance information corresponding to the A area is acquired, the background blur strength can be obtained.
  • the background blurring intensity may indicate the intensity of the blurring process on the background region. Assuming the background blurring intensity is X, and the portrait blurring intensities of the portrait regions corresponding to the B region and the C region are X_b and X_c respectively, then X_b and X_c can be calculated by the following formula:
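The formula itself does not appear in this text. Purely as an illustrative assumption (not the patent's actual formula), one reading consistent with the surrounding description blurs a face more strongly the farther it sits behind the base face A, capped at the background intensity X:

```python
def portrait_blur_intensity(x_background, d_base, d_face):
    # Illustrative assumption only: faces at or in front of the base
    # face are kept sharp; farther faces are blurred more, but never
    # more strongly than the background itself.
    if d_face <= d_base:
        return 0.0
    return min(x_background, x_background * (d_face - d_base) / d_face)
```

Here X_b would be `portrait_blur_intensity(X, D_a, D_b)` and X_c would be `portrait_blur_intensity(X, D_a, D_c)` under this assumed form.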
  • the image blurring method firstly detects a face region in the image to be processed, and obtains a blurring intensity of the background region according to the physical distance information of the face region, and then blurs the background region according to the blurring intensity.
  • the physical distance information reflects the distance between the face and the lens, so different distances yield different blurring intensities.
  • the degree of blurring changes with the physical distance information, so the blurring effect adapts to different shooting scenes and the blurring process is more precise.
  • the face area is divided into the basic area and the face blurred area, and different face areas are treated differently, which further improves the accuracy of the blurring process.
  • FIG. 6 is a schematic structural diagram of an image blurring device in an embodiment.
  • the image blurring device 600 includes an image obtaining module 602, an information acquiring module 604, and a background blurring module 606, wherein:
  • the image obtaining module 602 is configured to acquire an image to be processed.
  • the information acquiring module 604 is configured to detect a face area in the image to be processed, and acquire physical distance information corresponding to the face area.
  • the background blurring module 606 is configured to obtain the background blurring intensity according to the physical distance information, and to blur the background region in the image to be processed according to the background blurring intensity.
  • the image blurring device first detects a face region in the image to be processed, and obtains a blurring intensity of the background region according to the physical distance information of the face region, and then blurs the background region according to the blurring intensity.
  • the physical distance information reflects the distance between the face and the lens, so different distances yield different blurring intensities.
  • the degree of blurring changes with the physical distance information, so the blurring effect adapts to different shooting scenes and the blurring process is more precise.
  • FIG. 7 is a schematic structural diagram of an image blurring device in another embodiment.
  • the image blurring device 700 includes an image obtaining module 702, an information acquiring module 704, a background blurring module 706, a region obtaining module 708, an intensity obtaining module 710, and a portrait blurring module 712, wherein:
  • the image obtaining module 702 is configured to acquire an image to be processed.
  • the information acquiring module 704 is configured to detect a face region in the image to be processed, and acquire physical distance information corresponding to each face region in the image to be processed.
  • the background blurring module 706 is configured to obtain the background blurring intensity according to the physical distance information, and to blur the background region in the image to be processed according to the background blurring intensity.
  • the region obtaining module 708 is configured to acquire the area of each face region, use the face region with the largest area as the base region, and use the face regions other than the base region as face blur regions.
  • the intensity obtaining module 710 is configured to obtain the portrait blurring intensity corresponding to each face blur region according to the physical distance information of the base region and the face blur region.
  • the portrait blurring module 712 is configured to perform a blurring process on the portrait area corresponding to the face blurring area according to the portrait blurring intensity.
  • the image blurring device first detects a face region in the image to be processed, and obtains a blurring intensity of the background region according to the physical distance information of the face region, and then blurs the background region according to the blurring intensity.
  • the physical distance information reflects the distance between the face and the lens, and different distances yield different blurring intensities.
  • the degree of blurring changes with the physical distance information, so that the effect of the blurring process can adapt to different shooting scenes and the blurring process is more precise.
  • the face regions are divided into the base region and the face blurring regions, and different face regions receive different blurring treatment, which further improves the accuracy of the blurring process.
  • the information acquiring module 704 is further configured to detect a face region in the image to be processed, and acquire physical distance information corresponding to the face region.
  • the background blurring module 706 is further configured to acquire the area corresponding to each face region, obtain the background blurring intensity according to the physical distance information and the area, and perform the blurring process on the background region in the image to be processed according to the background blurring intensity.
  • the intensity obtaining module 710 is further configured to acquire, according to the physical distance information corresponding to the respective face regions, a portrait blurring intensity corresponding to each face region in the image to be processed.
  • the portrait blur module 712 is configured to perform a blurring process on the portrait region corresponding to the face region according to the portrait blur intensity.
  • the division of the modules in the image blurring device is for illustration only. In other embodiments, the image blurring device may be divided into different modules as needed to complete all or part of the functions of the image blurring device.
  • the embodiment of the present application also provides a computer readable storage medium.
  • One or more non-transitory computer readable storage media containing computer executable instructions that, when executed by one or more processors, cause the processors to: acquire an image to be processed; detect a face region in the image to be processed and acquire physical distance information corresponding to the face region; and obtain a background blurring intensity according to the physical distance information and perform a blurring process on a background region in the image to be processed according to the background blurring intensity.
  • in one embodiment, the detecting the face region in the image to be processed and acquiring the physical distance information corresponding to the face region, as executed by the processor, includes: detecting face regions in the image to be processed, and acquiring the physical distance information corresponding to each face region in the image to be processed.
  • in other embodiments, the obtaining the background blurring intensity according to the physical distance information and performing the blurring process on the background region in the image to be processed according to the background blurring intensity, as executed by the processor, includes: acquiring an area corresponding to each face region, obtaining the background blurring intensity according to the physical distance information and the area, and performing the blurring process on the background region in the image to be processed according to the background blurring intensity.
  • in another embodiment, the method executed by the processor further includes: acquiring, according to the physical distance information corresponding to each face region, a portrait blurring intensity corresponding to each face region in the image to be processed; and performing a blurring process on the portrait region corresponding to the face region according to the portrait blurring intensity.
  • in one embodiment, the method executed by the processor further includes: acquiring the area corresponding to each face region, taking the face region with the largest area as the base region, and taking the face regions other than the base region as face blurring regions. In that case, the acquiring the portrait blurring intensity corresponding to each face region in the image to be processed includes: obtaining the portrait blurring intensity corresponding to the face blurring region according to the physical distance information corresponding to the base region and the face blurring region; and the performing the blurring process on the portrait region corresponding to the face region according to the portrait blurring intensity includes: performing the blurring process on the portrait region corresponding to the face blurring region according to the portrait blurring intensity.
  • the embodiment of the present application further provides a computer device.
  • the above computer device includes an image processing circuit, and the image processing circuit may be implemented by hardware and/or software components, and may include various processing units defining an ISP (Image Signal Processing) pipeline.
  • Figure 8 is a schematic illustration of an image processing circuit in one embodiment. As shown in FIG. 8, for convenience of explanation, only various aspects of the image processing technique related to the embodiment of the present application are shown.
  • the image processing circuit includes an ISP processor 840 and a control logic 850.
  • the image data captured by imaging device 810 is first processed by ISP processor 840, which analyzes the image data to capture image statistics that can be used to determine and/or control one or more control parameters of imaging device 810.
  • Imaging device 810 can include a camera having one or more lenses 812 and image sensors 814.
  • Image sensor 814 can include a color filter array (such as a Bayer filter); image sensor 814 can acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by ISP processor 840.
  • a sensor 820, such as a gyroscope, can provide acquired image processing parameters (such as anti-shake parameters) to the ISP processor 840 based on the sensor 820 interface type.
  • the sensor 820 interface may utilize a SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
  • image sensor 814 can also transmit raw image data to sensor 820; sensor 820 can provide the raw image data to ISP processor 840 for processing based on the sensor 820 interface type, or sensor 820 can store the raw image data in image memory 830.
  • the ISP processor 840 processes the raw image data pixel by pixel in a variety of formats.
  • each image pixel can have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 840 can perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations can be performed at the same or different bit depth precision.
  • ISP processor 840 can also receive pixel data from image memory 830.
  • the sensor 820 interface sends the raw image data to image memory 830, and the raw image data in image memory 830 is then provided to ISP processor 840 for processing.
  • Image memory 830 can be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and can include DMA (Direct Memory Access) features.
  • upon receiving raw image data from the image sensor 814 interface, the sensor 820 interface, or the image memory 830, the ISP processor 840 can perform one or more image processing operations, such as temporal filtering.
  • the image data processed by the ISP processor 840 can be sent to the image memory 830 for additional processing before being displayed.
  • the ISP processor 840 receives the processed data from the image memory 830 and performs image data processing in the original domain and in the RGB and YCbCr color spaces.
  • the processed image data can be output to display 880 for viewing by a user and/or further processed by a graphics engine or a GPU (Graphics Processing Unit). Additionally, the output of ISP processor 840 can also be sent to image memory 830, and display 880 can read image data from image memory 830.
  • image memory 830 can be configured to implement one or more frame buffers. Additionally, the output of ISP processor 840 can be sent to encoder/decoder 870 for encoding/decoding image data. The encoded image data can be saved and decompressed before being displayed on the display 880 device.
  • the image data processed by the ISP can be sent to the blurring module 860 to blur the image before it is displayed.
  • the blurring processing of the image data by the blurring module 860 may include acquiring the background blurring intensity according to the physical distance information, and performing blurring processing on the background region in the image data according to the background blurring intensity.
  • the image data after the blurring process can be sent to the encoder/decoder 870 to encode/decode the image data.
  • the encoded image data can be saved and decompressed before being displayed on the display 880 device. It can be understood that the image data processed by the blurring module 860 can be directly sent to the display 880 for display without passing through the encoder/decoder 870.
  • the image data processed by the ISP processor 840 may also be processed by the encoder/decoder 870 and then processed by the blurring module 860.
  • the blurring module 860 or the encoder/decoder 870 may be a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit) in the mobile terminal.
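The source describes blurring module 860 only at the level of "blur the background region according to the background blurring intensity". A minimal pure-Python sketch of such a pass, assuming a mean blur whose window radius is derived from the intensity (the function name and the intensity-to-radius mapping are illustrative assumptions, not from the patent):

```python
def blur_background(image, background_mask, intensity):
    """Mean-blur only the masked (background) pixels of a grayscale image.

    `image` is a 2D list of pixel values, `background_mask` a 2D list of
    booleans (True = background). `intensity` is mapped to the blur
    radius: a higher background blurring intensity gives a larger
    averaging window. The mapping is a hypothetical stand-in.
    """
    radius = max(1, int(intensity))          # assumed intensity -> radius mapping
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if not background_mask[y][x]:
                continue                     # keep subject pixels sharp
            vals = [image[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out
```

In a real pipeline this step would run on the GPU or ISP rather than in Python, but the structure — a per-pixel pass gated by a subject/background mask, with kernel size driven by the intensity — is the same.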
  • the statistics determined by the ISP processor 840 can be sent to the control logic 850 unit.
  • the statistics may include image sensor 814 statistics such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, lens 812 shading correction, and the like.
  • Control logic 850 can include a processor and/or a microcontroller that executes one or more routines (such as firmware), and the one or more routines can determine control parameters of imaging device 810 and control parameters of ISP processor 840 based on received statistical data. For example, the control parameters of imaging device 810 may include sensor 820 control parameters (e.g., gain, integration time for exposure control, anti-shake parameters), camera flash control parameters, lens 812 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters.
  • the ISP control parameters may include a gain level and color correction matrix for automatic white balance and color adjustment (eg, during RGB processing), and a lens 812 shading correction parameter.
  • the following are operations for implementing the image blurring method using the image processing technique of FIG. 8: acquiring an image to be processed; detecting a face region in the image to be processed and acquiring physical distance information corresponding to the face region; and obtaining a background blurring intensity according to the physical distance information and performing a blurring process on a background region in the image to be processed according to the background blurring intensity.
  • in one embodiment, the detecting the face region in the image to be processed and acquiring the physical distance information corresponding to the face region includes: detecting face regions in the image to be processed, and acquiring the physical distance information corresponding to each face region in the image to be processed.
  • in other embodiments, the obtaining the background blurring intensity according to the physical distance information and performing the blurring process on the background region in the image to be processed according to the background blurring intensity includes: acquiring an area corresponding to each face region, obtaining the background blurring intensity according to the physical distance information and the area, and performing the blurring process on the background region in the image to be processed according to the background blurring intensity.
  • in another embodiment, the method further includes: acquiring, according to the physical distance information corresponding to each face region, a portrait blurring intensity corresponding to each face region in the image to be processed; and performing a blurring process on the portrait region corresponding to the face region according to the portrait blurring intensity.
  • in one embodiment, the method further includes: acquiring the area corresponding to each face region, taking the face region with the largest area as the base region, and taking the face regions other than the base region as face blurring regions. In that case, the acquiring the portrait blurring intensity corresponding to each face region in the image to be processed includes: obtaining the portrait blurring intensity corresponding to the face blurring region according to the physical distance information corresponding to the base region and the face blurring region; and the performing the blurring process on the portrait region corresponding to the face region according to the portrait blurring intensity includes: performing the blurring process on the portrait region corresponding to the face blurring region according to the portrait blurring intensity.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or the like.


Abstract

An image blurring method includes: acquiring an image to be processed; detecting a face region in the image to be processed, and acquiring physical distance information corresponding to the face region; and obtaining a background blurring intensity according to the physical distance information, and performing a blurring process on a background region in the image to be processed according to the background blurring intensity.

Description

Image blurring method, computer readable storage medium, and computer device

Cross-reference to related applications

This application claims priority to the Chinese patent application filed with the Chinese Patent Office on August 9, 2017, with application number 201710676169.2 and entitled "Image blurring method, device, computer readable storage medium and computer device", the entire contents of which are incorporated herein by reference.

Technical field

The present application relates to the field of computer technology, and in particular to an image blurring method, device, computer readable storage medium, and computer device.

Background

Photography has become increasingly indispensable in daily life, especially with the development of smart terminals; once smart terminals implemented photographing functions, photography became even more widely used. At the same time, whether for personal life or commercial use, the demands on photo quality and user experience keep rising.

However, shooting scenes are often complex and variable. To make captured photos adapt to such scenes and highlight the subject so as to convey a sense of depth, a common approach is to keep the subject sharp and blur the regions outside the subject. Blurring means defocusing the regions other than the subject so that the subject stands out. A traditional blurring method first identifies the subject in the image and then directly applies a fixed degree of blurring to the regions outside the subject, so that the background and the subject are displayed distinctly.

Summary

Embodiments of the present application provide an image blurring method, a computer readable storage medium, and a computer device.

An image blurring method, the method including:

acquiring an image to be processed;

detecting a face region in the image to be processed, and acquiring physical distance information corresponding to the face region; and

obtaining a background blurring intensity according to the physical distance information, and performing a blurring process on a background region in the image to be processed according to the background blurring intensity.

One or more non-transitory computer readable storage media containing computer executable instructions that, when executed by one or more processors, cause the processors to perform the following operations:

acquiring an image to be processed;

detecting a face region in the image to be processed, and acquiring physical distance information corresponding to the face region; and

obtaining a background blurring intensity according to the physical distance information, and performing a blurring process on a background region in the image to be processed according to the background blurring intensity.

A computer device, including a memory and a processor, the memory storing computer readable instructions that, when executed by the processor, cause the processor to perform the following operations:

acquiring an image to be processed;

detecting a face region in the image to be processed, and acquiring physical distance information corresponding to the face region; and

obtaining a background blurring intensity according to the physical distance information, and performing a blurring process on a background region in the image to be processed according to the background blurring intensity.

With the image blurring method, device, computer readable storage medium, and computer device provided by the embodiments of the present application, a face region in the image to be processed is first detected, a blurring intensity for the background region is obtained according to the physical distance information of the face region, and the background region is then blurred according to that intensity. The physical distance information reflects the distance between the face and the lens; different distances yield different blurring intensities, making the blurring process more precise.
Brief description of the drawings

To describe the technical solutions in the embodiments of the present application or the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.

FIG. 1 is a schematic diagram of the internal structure of an electronic device in one embodiment;

FIG. 2 is a schematic diagram of the internal structure of a server in one embodiment;

FIG. 3 is a flowchart of an image blurring method in one embodiment;

FIG. 4 is a flowchart of an image blurring method in another embodiment;

FIG. 5 is a schematic diagram of the principle of acquiring physical distance information in one embodiment;

FIG. 6 is a schematic structural diagram of an image blurring device in one embodiment;

FIG. 7 is a schematic structural diagram of an image blurring device in another embodiment;

FIG. 8 is a schematic diagram of an image processing circuit in one embodiment.

Detailed description

To make the objectives, technical solutions, and advantages of the present application clearer, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present application and are not intended to limit it.

It can be understood that the terms "first", "second", and the like used in the present application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, without departing from the scope of the present application, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client. The first client and the second client are both clients, but they are not the same client.

FIG. 1 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in FIG. 1, the electronic device includes a processor, a non-volatile storage medium, an internal memory, a network interface, a display screen, and an input device connected through a system bus. The non-volatile storage medium of the electronic device stores an operating system and computer readable instructions. The computer readable instructions, when executed by the processor, implement an image blurring method. The processor is used to provide computing and control capabilities to support the operation of the entire electronic device. The internal memory of the electronic device provides an environment for running the computer readable instructions stored in the non-volatile storage medium. The network interface is used for network communication with a server, such as sending an image blurring request to the server and receiving the blurred image returned by the server. The display screen of the electronic device may be a liquid crystal display or an e-ink display, and the input device may be a touch layer covering the display screen, a button, trackball, or touchpad provided on the housing of the electronic device, or an external keyboard, touchpad, or mouse. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like. A person skilled in the art can understand that the structure shown in FIG. 1 is only a block diagram of part of the structure related to the solution of the present application and does not limit the electronic device to which the solution is applied; a specific electronic device may include more or fewer components than shown, combine certain components, or have a different component arrangement.

FIG. 2 is a schematic diagram of the internal structure of a server in one embodiment. As shown in FIG. 2, the server includes a processor, a non-volatile storage medium, an internal memory, and a network interface connected through a system bus. The non-volatile storage medium of the server stores an operating system and computer readable instructions. The computer readable instructions, when executed by the processor, implement an image blurring method. The processor of the server is used to provide computing and control capabilities to support the operation of the entire server. The network interface of the server is used to communicate with an external terminal through a network connection, for example, receiving an image blurring request sent by the terminal and returning the blurred image to the terminal. The server may be implemented as an independent server or a server cluster composed of multiple servers. A person skilled in the art can understand that the structure shown in FIG. 2 is only a block diagram of part of the structure related to the solution of the present application and does not limit the server to which the solution is applied; a specific server may include more or fewer components than shown, combine certain components, or have a different component arrangement.
FIG. 3 is a flowchart of an image blurring method in one embodiment. As shown in FIG. 3, the image blurring method includes operations 302 to 306:

Operation 302: acquire an image to be processed.

In the embodiments provided by the present application, the image to be processed refers to an image that needs to be blurred and can be captured by an image capture device. An image capture device is a device that captures images; for example, it may be a camera, a camera on a mobile terminal, a video camera, or the like. After receiving an image blurring instruction, the user terminal may blur the image to be processed directly on the user terminal, or may send an image blurring request to a server so that the image to be processed is blurred on the server.

It can be understood that the image blurring instruction may be input by the user or automatically triggered by the user terminal. For example, the user inputs a photographing instruction through the user terminal; after detecting the photographing instruction, the mobile terminal captures the image to be processed through the camera, then automatically triggers an image blurring instruction and blurs the image to be processed. The photographing instruction may be triggered by a physical button or a touch-screen operation of the mobile terminal, or may be a voice instruction or the like.

Operation 304: detect a face region in the image to be processed, and acquire physical distance information corresponding to the face region.

In one embodiment, the face region refers to the region where a face is located in the image to be processed, and the physical distance information refers to parameters representing the physical distance between the image capture device and the objects corresponding to the pixels in the image to be processed. The physical distance information corresponding to the face region refers to parameters representing the physical distance from the image capture device to the face.

Specifically, feature points in the image to be processed may first be identified, then extracted and matched against a preset face model; if the extracted feature points match the preset face model, the region where the extracted feature points are located is taken as the face region.

In the embodiments provided by the present application, the image to be processed is composed of a number of pixels, and each pixel has corresponding physical distance information, which represents the physical distance from the object represented by the pixel to the image capture device.

It can be understood that the image to be processed may contain multiple face regions. After the face regions in the image to be processed are detected, the area corresponding to each face region may be acquired, the physical distance information corresponding to the face region with the largest area may be acquired, and the background blurring intensity may be obtained according to the physical distance information corresponding to the face region with the largest area. The region area refers to the size of the area corresponding to the face region; it may be expressed by the number of pixels contained in the face region, or by the ratio of the size of the face region to the size of the image to be processed.

Generally, when the physical distance of an object is measured, there is an effective distance range: the physical distance information of objects within this effective range can be obtained accurately, while objects beyond it cannot be measured accurately. The effective distance range varies with hardware performance and can be adjusted through hardware. Therefore, physical distance information within the effective range may be represented by precise values, and physical distance information beyond the effective range may be represented by a fixed value.

That is, only face regions within the effective distance range may be detected, and face regions beyond the effective distance range may be skipped. Operation 304 may then include: detecting face regions in the image to be processed that are within a preset distance range, and acquiring the physical distance information corresponding to the face regions. The preset distance range may be, but is not limited to, the value range of valid physical distance information.

Operation 306: obtain a background blurring intensity according to the physical distance information, and perform a blurring process on a background region in the image to be processed according to the background blurring intensity.

In the embodiments provided by the present application, blurring refers to defocusing the image; the blurring is performed according to the blurring intensity, and different intensities correspond to different degrees of blurring. The background region may refer to regions of the image to be processed other than the face region or the portrait region, where the portrait region refers to the region where the entire human figure is located in the image to be processed.

The background blurring intensity is a parameter representing the degree to which the background region is blurred. The background blurring intensity is obtained according to the physical distance information of the face region, and the background region is blurred according to this intensity, so the blurring result changes with the actual physical distance from the face to the image capture device. Generally, the larger the physical distance information, the smaller the background blurring intensity and the lighter the blurring of the background region; the smaller the physical distance information, the larger the background blurring intensity and the heavier the blurring of the background region.

With the above image blurring method, a face region in the image to be processed is first detected, a blurring intensity for the background region is obtained according to the physical distance information of the face region, and the background region is then blurred according to that intensity. The physical distance information reflects the distance between the face and the lens; different distances yield different blurring intensities, and the degree of blurring changes with the physical distance information, so the blurring effect can adapt to different shooting scenes and the blurring is more precise.
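The mapping from physical distance to background blurring intensity is described only qualitatively above (larger distance, smaller intensity). A sketch of one plausible monotone mapping, where the clamp bounds and intensity limits are illustrative assumptions, not values from the source:

```python
def background_blur_intensity(distance_mm,
                              near_mm=300, far_mm=3000,
                              max_intensity=10.0, min_intensity=1.0):
    """Map face-to-lens distance to a background blurring intensity.

    Larger distance -> smaller intensity, matching the described
    behaviour. The effective range [near_mm, far_mm] and the intensity
    bounds are assumed, hypothetical values.
    """
    d = min(max(distance_mm, near_mm), far_mm)   # clamp to effective range
    t = (d - near_mm) / (far_mm - near_mm)       # 0 at near bound, 1 at far bound
    return max_intensity - t * (max_intensity - min_intensity)
```

Any monotonically decreasing function (or a lookup table representing the stored correspondence) would fit the description equally well; the linear form is only the simplest choice.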
FIG. 4 is a flowchart of an image blurring method in another embodiment. As shown in FIG. 4, the image blurring method includes operations 402 to 410:

Operation 402: acquire an image to be processed.

In the embodiments provided by the present application, the image to be processed may be acquired directly locally or from a server. Specifically, after receiving the image blurring instruction, the user terminal may acquire the corresponding image to be processed directly according to the image storage address and image identifier contained in the image blurring instruction. The image storage address may be local to the user terminal or on a server. After the image to be processed is acquired, it may be blurred locally or on the server.

Operation 404: detect face regions in the image to be processed, and acquire the physical distance information corresponding to each face region in the image to be processed.

In the embodiments provided by the present application, dual cameras may be installed on the image capture device, and the physical distance information from the image capture device to an object is measured through the dual cameras. Specifically, images of the object are captured by a first camera and a second camera respectively; a first angle and a second angle are obtained from these images, where the first angle is the angle between the line from the first camera to the object and the horizontal line from the first camera to the second camera, and the second angle is the angle between the line from the second camera to the object and the horizontal line from the second camera to the first camera; and the physical distance information from the image capture device to the object is obtained according to the first angle, the second angle, and the distance between the first camera and the second camera.

FIG. 5 is a schematic diagram of the principle of acquiring physical distance information in one embodiment. As shown in FIG. 5, images of an object 506 are captured by a first camera 502 and a second camera 504 respectively; the first angle A1 and the second angle A2 can be obtained from these images, and then, according to the first angle A1, the second angle A2, and the distance T between the first camera 502 and the second camera 504, the physical distance D between the object 506 and any point on the horizontal line from the first camera 502 to the second camera 504 can be obtained.
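The FIG. 5 geometry can be written in closed form: the object's perpendicular distance D to the camera baseline satisfies D/tan(A1) + D/tan(A2) = T, so D = T / (cot A1 + cot A2). A small sketch of this standard triangulation identity (the function name and the degree-based interface are assumptions for illustration):

```python
import math

def object_distance(angle_a1_deg, angle_a2_deg, baseline_t):
    """Distance D from the object to the camera baseline (as in FIG. 5).

    A1 and A2 are the angles, at the first and second camera, between
    the line of sight to the object and the baseline; baseline_t is the
    distance T between the two cameras.
    """
    a1 = math.radians(angle_a1_deg)
    a2 = math.radians(angle_a2_deg)
    # D * (cot A1 + cot A2) = T  =>  D = T / (cot A1 + cot A2)
    return baseline_t / (1 / math.tan(a1) + 1 / math.tan(a2))
```

For the symmetric case A1 = A2 = 45° with T = 2, the object sits at distance 1 from the baseline, which matches the right-isosceles geometry.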
It can be understood that a scene often contains multiple human figures, so the image to be processed may also contain multiple face regions. Each face region in the image to be processed is extracted, and the physical distance information corresponding to the face region is acquired. Generally, when an image of a scene is captured, a depth map corresponding to the scene may be acquired at the same time. The acquired depth map corresponds to the image one to one, and the values in the depth map represent the physical distance information of the corresponding pixels in the image. That is, the corresponding depth map may be acquired while acquiring the image to be processed; after the face regions in the image to be processed are detected, the corresponding physical distance information can be obtained from the depth map according to the pixel coordinates in the face regions.

In one embodiment, since each pixel has corresponding physical distance information and a face region contains multiple pixels, after the physical distance information corresponding to each pixel in the face region is acquired, the physical distance information corresponding to all pixels in the face region may be averaged, or the physical distance information corresponding to a particular pixel may be taken, to represent the physical distance information corresponding to the face region. For example, the physical distance information corresponding to the center pixel of the face region may be taken to represent the physical distance information corresponding to the face region.
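The two per-region options above (mean of all pixel distances, or a single representative pixel such as the center) can be sketched as follows; the (x, y, w, h) box representation of a detected face region is an assumption made for illustration:

```python
def face_region_distance(depth_map, face_box, use_center=False):
    """Represent a face region's distance using a per-pixel depth map.

    depth_map is a 2D list of per-pixel physical distances; face_box is
    (x, y, w, h) in pixel coordinates (a hypothetical representation of
    the detected region). Either the mean of all pixels in the box or
    the center pixel is used, as described in the text.
    """
    x, y, w, h = face_box
    if use_center:
        return depth_map[y + h // 2][x + w // 2]
    vals = [depth_map[j][i] for j in range(y, y + h) for i in range(x, x + w)]
    return sum(vals) / len(vals)
```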
Operation 406: obtain a background blurring intensity according to the physical distance information, and perform a blurring process on the background region in the image to be processed according to the background blurring intensity.

In one embodiment, the human figure and the face are considered to lie in the same vertical plane, so the physical distance from the figure to the image capture device is in the same range as the physical distance from the face to the image capture device. Therefore, after the physical distance information and the face region are acquired, the portrait region in the image to be processed can be obtained from the physical distance corresponding to the face region, and the background region can then be determined in the image to be processed from the portrait region.

Specifically, the face region in the image to be processed is detected, a portrait distance range is obtained according to the physical distance information corresponding to the face region, the portrait region in the image to be processed can be obtained according to the portrait distance range, and the background region is then obtained from the portrait region. The portrait distance range refers to the value range of the physical distance information corresponding to the portrait region in the image to be processed. Since the physical distance from the image capture device to the face and the physical distance to the human figure can be regarded as equal, after the face region is detected, the physical distance information corresponding to the face region is acquired, and the range of physical distance information corresponding to the portrait region can be determined from it; physical distance information within this range is considered to belong to the portrait region, and physical distance information outside this range is considered to belong to the background region.
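Splitting pixels into portrait and background by the portrait distance range can be sketched as below. The symmetric tolerance around the face distance is an illustrative assumption, since the text does not specify how the range is derived from the face distance:

```python
def split_portrait_background(depth_map, face_distance, tolerance=250):
    """Classify each pixel as portrait (True) or background (False).

    Pixels whose depth lies within `tolerance` of the face distance are
    treated as part of the portrait; the rest form the background
    region. The tolerance value and its symmetry are assumed here.
    """
    lo, hi = face_distance - tolerance, face_distance + tolerance
    return [[lo <= d <= hi for d in row] for row in depth_map]
```

The resulting boolean mask is exactly the kind of subject/background separation a later blur pass would consume.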
Further, before operation 406 the method may also include: obtaining a portrait distance range according to the physical distance information corresponding to the face region, and obtaining an image region in the image to be processed according to the physical distance information within the portrait distance range; and acquiring color information of the image region, and obtaining the background region other than the portrait region in the image to be processed according to the color information.

The image region extracted according to the portrait distance range is the region of the image to be processed where objects within the same physical distance range as the face are located. If other objects exist beside the person, the extracted image region may contain objects other than the portrait region. In this case, the portrait region can be further extracted according to the color information of the image region.

In the embodiments provided by the present application, color information refers to parameters used to represent the colors of an image; for example, it may include the hue, saturation, and value (brightness) of the colors in the image. The hue of a color is an angular measure with a value range of 0° to 360°, counted counterclockwise starting from red: red is 0°, green is 120°, and blue is 240°. Saturation refers to how close a color is to the pure spectral color; generally, the higher the saturation, the more vivid the color, and the lower the saturation, the duller the color. Value indicates the brightness of the color.

Different objects often have different color characteristics, i.e., the color information they present in the image also differs. For example, trees are green, the sky is blue, the ground is yellow, and so on. The portrait region and the background region outside it can be extracted according to the color information in the image region.

Specifically, the color components of the image region are acquired, and the part of the image region whose color components are within a preset range is extracted as the portrait region. A color component refers to an image component produced by converting the image to be processed into an image in a certain color dimension; for example, the color components may be the RGB, CMY, or HSV color components of the image. It can be understood that RGB, CMY, and HSV color components can be converted into one another.

In one embodiment, the HSV color components of the image region are acquired, and the part of the image region whose HSV color components are within preset ranges is extracted as the portrait region. The HSV color components refer to the hue (H), saturation (S), and value (V) components of the image; a preset range is set for each of these three components, and the part of the image region whose three components are all within the preset ranges is extracted as the portrait region.

For example, to obtain the portrait region through HSV color components, the HSV color components of the image region may be acquired, and the part of the image region that satisfies the condition "H between 20 and 25, S between 10 and 50, and V between 50 and 85" may be taken as the portrait region.
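The HSV thresholding example above translates directly to code. The default ranges below are the H 20–25, S 10–50, V 50–85 thresholds given in the text; the tuple-per-pixel image layout and the assumption that the component scales match those thresholds are illustrative:

```python
def portrait_mask_hsv(hsv_image,
                      h_range=(20, 25), s_range=(10, 50), v_range=(50, 85)):
    """Mark pixels whose HSV components all fall in the preset ranges.

    hsv_image is a 2D list of (H, S, V) tuples. The default ranges are
    the example thresholds from the text; the units of the components
    are assumed to match those thresholds.
    """
    def in_range(value, bounds):
        return bounds[0] <= value <= bounds[1]

    return [[in_range(h, h_range) and in_range(s, s_range) and in_range(v, v_range)
             for (h, s, v) in row]
            for row in hsv_image]
```

Note that libraries differ in HSV scaling (e.g., 8-bit OpenCV stores hue as 0–179), so in practice the thresholds must be rescaled to the representation actually used.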
In one embodiment, operation 406 may include: acquiring the area corresponding to each face region, obtaining the background blurring intensity according to the physical distance information and the area, and performing the blurring process on the background region in the image to be processed according to the background blurring intensity.

If multiple face regions are acquired, each face region has corresponding physical distance information, and the background blurring intensity is obtained according to the acquired physical distance information. Further, the area corresponding to each face region may first be acquired, and the background blurring intensity obtained according to the area and the physical distance information. For example, after multiple face regions are acquired, the background blurring intensity may be obtained according to the physical distance information corresponding to the face region with the largest or smallest area. Alternatively, the physical distance information corresponding to each face region may be acquired, and the background blurring intensity obtained according to the average of the physical distance information corresponding to the face regions.
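Choosing the base distance from the largest face region and then mapping it to a background blurring intensity can be sketched as follows (the dict-based face-region structure and the injected mapping function are illustrative assumptions):

```python
def background_intensity_from_faces(face_regions, intensity_for_distance):
    """Derive the background blurring intensity from the largest face.

    face_regions: list of dicts like {"area": pixel_count, "distance": mm}
    (a hypothetical structure for the detected regions);
    intensity_for_distance: any mapping from physical distance to
    background blurring intensity, e.g. a stored correspondence table.
    """
    base = max(face_regions, key=lambda f: f["area"])   # largest region area
    return intensity_for_distance(base["distance"])
```

The same skeleton covers the described variants: swap `max` for `min` to key on the smallest region, or average the distances before applying the mapping.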
In one embodiment, there is a correspondence between the physical distance information and the background blurring intensity; after the physical distance information is acquired, the background blurring intensity can be obtained according to the physical distance information and this correspondence, and the background region is then blurred according to the background blurring intensity.

Operation 408: acquire, according to the physical distance information corresponding to each face region, the portrait blurring intensity corresponding to each face region in the image to be processed.

In one embodiment, after multiple face regions are acquired, the portrait regions corresponding to the face regions may also be blurred; the portrait blurring intensity is obtained according to the physical distance information corresponding to the face region, and this intensity represents the degree to which the portrait region is blurred.

Operation 410: perform a blurring process on the portrait region corresponding to the face region according to the portrait blurring intensity.

Further, the area corresponding to each face region is acquired, the face region with the largest area is taken as the base region, and the face regions other than the base region are taken as face blurring regions; the portrait blurring intensity corresponding to a face blurring region is obtained according to the physical distance information corresponding to the base region and the face blurring region; and the portrait region corresponding to the face blurring region is blurred according to the portrait blurring intensity. Meanwhile, the background blurring intensity is obtained according to the physical distance information corresponding to the base region.

That is, the face regions are divided into a base region and face blurring regions according to their areas, and the base region and the face blurring regions are blurred to different degrees. For example, the portrait region corresponding to the base region is not blurred, while the portrait regions corresponding to the face blurring regions need to be blurred. Taking the physical distance information corresponding to the base face region as the basis, the portrait blurring intensity corresponding to each face blurring region is obtained.

For example, suppose the image to be processed contains three face regions A, B, and C with corresponding physical distance information D_a, D_b, and D_c, and region A has the largest area; region A is then taken as the base region, and regions B and C as face blurring regions. The physical distance information corresponding to region A has a correspondence with the background blurring intensity, so after the physical distance information corresponding to region A is acquired, the background blurring intensity can be obtained. This background blurring intensity represents the intensity of the blurring applied to the background region. Suppose the background blurring intensity is X, and the portrait blurring intensities of the portrait regions corresponding to regions B and C are X_b and X_c respectively; then X_b and X_c can be calculated by the following formula:
Figure PCTCN2018099403-appb-000001
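The formula itself appears only as an image in the original filing and is not reproduced in this text, so the exact relation between X, D_a, D_b, and D_c is unknown. Purely as a labeled assumption, one stand-in consistent with the surrounding description (the base region receives no portrait blurring, and a region's intensity grows with its distance offset from the base region while staying below the background intensity X) might look like:

```python
def portrait_blur_intensity(x_background, d_base, d_region):
    """ASSUMED stand-in for the elided formula, not the patent's formula.

    Scales the background intensity X by how far the region's distance
    is from the base region's distance; the base region itself (zero
    offset) gets intensity 0, and the result never exceeds X.
    """
    offset = abs(d_region - d_base)
    return x_background * offset / (offset + d_base)
```

Any function with these boundary properties would be an equally valid guess; the actual formula must be read from FIG. PCTCN2018099403-appb-000001 of the original application.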
With the above image blurring method, a face region in the image to be processed is first detected, a blurring intensity for the background region is obtained according to the physical distance information of the face region, and the background region is then blurred according to that intensity. The physical distance information reflects the distance between the face and the lens; different distances yield different blurring intensities, and the degree of blurring changes with the physical distance information, so the blurring effect can adapt to different shooting scenes and the blurring is more precise. In addition, the face regions are divided into a base region and face blurring regions, and different face regions receive different blurring treatment, which further improves the accuracy of the blurring process.
FIG. 6 is a schematic structural diagram of an image blurring device in one embodiment. The image blurring device 600 includes an image obtaining module 602, an information acquiring module 604, and a background blurring module 606, wherein:

the image obtaining module 602 is configured to acquire an image to be processed;

the information acquiring module 604 is configured to detect a face region in the image to be processed, and acquire physical distance information corresponding to the face region;

the background blurring module 606 is configured to obtain a background blurring intensity according to the physical distance information, and perform a blurring process on the background region in the image to be processed according to the background blurring intensity.

With the above image blurring device, a face region in the image to be processed is first detected, a blurring intensity for the background region is obtained according to the physical distance information of the face region, and the background region is then blurred according to that intensity. The physical distance information reflects the distance between the face and the lens; different distances yield different blurring intensities, and the degree of blurring changes with the physical distance information, so the blurring effect can adapt to different shooting scenes and the blurring is more precise.

FIG. 7 is a schematic structural diagram of an image blurring device in another embodiment. The image blurring device 700 includes an image obtaining module 702, an information acquiring module 704, a background blurring module 706, a region obtaining module 708, an intensity obtaining module 710, and a portrait blurring module 712, wherein:

the image obtaining module 702 is configured to acquire an image to be processed;

the information acquiring module 704 is configured to detect face regions in the image to be processed, and acquire the physical distance information corresponding to each face region in the image to be processed;

the background blurring module 706 is configured to obtain a background blurring intensity according to the physical distance information, and perform a blurring process on the background region in the image to be processed according to the background blurring intensity;

the region obtaining module 708 is configured to acquire the area corresponding to each face region, take the face region with the largest area as the base region, and take the face regions other than the base region as face blurring regions;

the intensity obtaining module 710 is configured to obtain the portrait blurring intensity corresponding to a face blurring region according to the physical distance information corresponding to the base region and the face blurring region;

the portrait blurring module 712 is configured to perform a blurring process on the portrait region corresponding to the face blurring region according to the portrait blurring intensity.

With the above image blurring device, a face region in the image to be processed is first detected, a blurring intensity for the background region is obtained according to the physical distance information of the face region, and the background region is then blurred according to that intensity. The physical distance information reflects the distance between the face and the lens; different distances yield different blurring intensities, and the degree of blurring changes with the physical distance information, so the blurring effect can adapt to different shooting scenes and the blurring is more precise. In addition, the face regions are divided into a base region and face blurring regions, and different face regions receive different blurring treatment, which further improves the accuracy of the blurring process.

In another embodiment, the information acquiring module 704 is further configured to detect the face region in the image to be processed, and acquire the physical distance information corresponding to the face region.

In the embodiments provided by the present application, the background blurring module 706 is further configured to acquire the area corresponding to each face region, obtain the background blurring intensity according to the physical distance information and the area, and perform the blurring process on the background region in the image to be processed according to the background blurring intensity.

In one embodiment, the intensity obtaining module 710 is further configured to acquire, according to the physical distance information corresponding to each face region, the portrait blurring intensity corresponding to each face region in the image to be processed.

In one of the embodiments, the portrait blurring module 712 is configured to perform a blurring process on the portrait region corresponding to the face region according to the portrait blurring intensity.

The division of the modules in the above image blurring device is for illustration only; in other embodiments, the image blurring device may be divided into different modules as needed to complete all or part of the functions of the above image blurring device.
Embodiments of the present application also provide a computer readable storage medium: one or more non-transitory computer readable storage media containing computer executable instructions that, when executed by one or more processors, cause the processors to perform the following operations:

acquiring an image to be processed;

detecting a face region in the image to be processed, and acquiring physical distance information corresponding to the face region; and

obtaining a background blurring intensity according to the physical distance information, and performing a blurring process on a background region in the image to be processed according to the background blurring intensity.

In one embodiment, the detecting the face region in the image to be processed and acquiring the physical distance information corresponding to the face region, as executed by the processor, includes:

detecting face regions in the image to be processed, and acquiring the physical distance information corresponding to each face region in the image to be processed.

In other embodiments provided by the present application, the obtaining the background blurring intensity according to the physical distance information and performing the blurring process on the background region in the image to be processed according to the background blurring intensity, as executed by the processor, includes:

acquiring the area corresponding to each face region, obtaining the background blurring intensity according to the physical distance information and the area, and performing the blurring process on the background region in the image to be processed according to the background blurring intensity.

In another embodiment, the method executed by the processor further includes:

acquiring, according to the physical distance information corresponding to each face region, a portrait blurring intensity corresponding to each face region in the image to be processed; and

performing a blurring process on the portrait region corresponding to the face region according to the portrait blurring intensity.

In one of the embodiments, the method executed by the processor further includes:

acquiring the area corresponding to each face region, taking the face region with the largest area as the base region, and taking the face regions other than the base region as face blurring regions;

the acquiring, according to the physical distance information corresponding to each face region, the portrait blurring intensity corresponding to each face region in the image to be processed includes:

obtaining the portrait blurring intensity corresponding to the face blurring region according to the physical distance information corresponding to the base region and the face blurring region;

the performing the blurring process on the portrait region corresponding to the face region according to the portrait blurring intensity includes:

performing the blurring process on the portrait region corresponding to the face blurring region according to the portrait blurring intensity.
Embodiments of the present application also provide a computer device. The above computer device includes an image processing circuit, which may be implemented by hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. FIG. 8 is a schematic diagram of an image processing circuit in one embodiment. As shown in FIG. 8, for convenience of explanation, only aspects of the image processing technique related to the embodiments of the present application are shown.

As shown in FIG. 8, the image processing circuit includes an ISP processor 840 and control logic 850. The image data captured by imaging device 810 is first processed by ISP processor 840, which analyzes the image data to capture image statistics that can be used to determine one or more control parameters of imaging device 810. Imaging device 810 may include a camera having one or more lenses 812 and an image sensor 814. Image sensor 814 may include a color filter array (such as a Bayer filter); image sensor 814 can acquire the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by ISP processor 840. A sensor 820 (such as a gyroscope) may provide acquired image processing parameters (such as anti-shake parameters) to ISP processor 840 based on the sensor 820 interface type. The sensor 820 interface may be a SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.

In addition, image sensor 814 may also send the raw image data to sensor 820; sensor 820 may provide the raw image data to ISP processor 840 for processing based on the sensor 820 interface type, or may store the raw image data in image memory 830.

ISP processor 840 processes the raw image data pixel by pixel in multiple formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 840 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed at the same or different bit depth precision.

ISP processor 840 may also receive pixel data from image memory 830. For example, the sensor 820 interface sends the raw image data to image memory 830, and the raw image data in image memory 830 is then provided to ISP processor 840 for processing. Image memory 830 may be part of a memory device, a storage device, or an independent dedicated memory within an electronic device, and may include DMA (Direct Memory Access) features.

Upon receiving raw image data from the image sensor 814 interface, the sensor 820 interface, or image memory 830, ISP processor 840 may perform one or more image processing operations, such as temporal filtering. The image data processed by ISP processor 840 may be sent to image memory 830 for additional processing before being displayed. ISP processor 840 receives the processed data from image memory 830 and performs image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to display 880 for viewing by the user and/or further processed by a graphics engine or a GPU (Graphics Processing Unit). In addition, the output of ISP processor 840 may also be sent to image memory 830, and display 880 may read the image data from image memory 830. In one embodiment, image memory 830 may be configured to implement one or more frame buffers. In addition, the output of ISP processor 840 may be sent to encoder/decoder 870 to encode/decode the image data. The encoded image data may be saved and decompressed before being displayed on the display 880 device.

The ISP-processed image data may be sent to blurring module 860 to blur the image before it is displayed. The blurring performed by blurring module 860 on the image data may include obtaining a background blurring intensity according to the physical distance information and blurring the background region in the image data according to the background blurring intensity. After blurring the image data, blurring module 860 may send the blurred image data to encoder/decoder 870 to encode/decode the image data. The encoded image data may be saved and decompressed before being displayed on the display 880 device. It can be understood that the image data processed by blurring module 860 may be sent directly to display 880 for display without passing through encoder/decoder 870. The image data processed by ISP processor 840 may also first be processed by encoder/decoder 870 and then processed by blurring module 860. Blurring module 860 or encoder/decoder 870 may be a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit) in a mobile terminal.

The statistical data determined by ISP processor 840 may be sent to the control logic 850 unit. For example, the statistical data may include image sensor 814 statistics such as auto exposure, auto white balance, auto focus, flicker detection, black level compensation, and lens 812 shading correction. Control logic 850 may include a processor and/or microcontroller executing one or more routines (such as firmware), and the one or more routines may determine the control parameters of imaging device 810 and the control parameters of ISP processor 840 according to the received statistical data. For example, the control parameters of imaging device 810 may include sensor 820 control parameters (e.g., gain, integration time for exposure control, anti-shake parameters), camera flash control parameters, lens 812 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), and lens 812 shading correction parameters.
The following are operations for implementing the image blurring method using the image processing technique of FIG. 8:

acquiring an image to be processed;

detecting a face region in the image to be processed, and acquiring physical distance information corresponding to the face region; and

obtaining a background blurring intensity according to the physical distance information, and performing a blurring process on a background region in the image to be processed according to the background blurring intensity.

In one embodiment, the detecting the face region in the image to be processed and acquiring the physical distance information corresponding to the face region includes:

detecting face regions in the image to be processed, and acquiring the physical distance information corresponding to each face region in the image to be processed.

In other embodiments provided by the present application, the obtaining the background blurring intensity according to the physical distance information and performing the blurring process on the background region in the image to be processed according to the background blurring intensity includes:

acquiring the area corresponding to each face region, obtaining the background blurring intensity according to the physical distance information and the area, and performing the blurring process on the background region in the image to be processed according to the background blurring intensity.

In another embodiment, the method further includes:

acquiring, according to the physical distance information corresponding to each face region, a portrait blurring intensity corresponding to each face region in the image to be processed; and

performing a blurring process on the portrait region corresponding to the face region according to the portrait blurring intensity.

In one of the embodiments, the method further includes:

acquiring the area corresponding to each face region, taking the face region with the largest area as the base region, and taking the face regions other than the base region as face blurring regions;

the acquiring, according to the physical distance information corresponding to each face region, the portrait blurring intensity corresponding to each face region in the image to be processed includes:

obtaining the portrait blurring intensity corresponding to the face blurring region according to the physical distance information corresponding to the base region and the face blurring region;

the performing the blurring process on the portrait region corresponding to the face region according to the portrait blurring intensity includes:

performing the blurring process on the portrait region corresponding to the face blurring region according to the portrait blurring intensity.

A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by a computer program instructing related hardware. The program may be stored in a non-volatile computer readable storage medium, and when executed, may include the processes of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or the like.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the present patent application. It should be noted that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent application shall be subject to the appended claims.

Claims (30)

  1. An image blurring method, the method comprising:
    acquiring an image to be processed;
    detecting a face region in the image to be processed, and acquiring physical distance information corresponding to the face region; and
    obtaining a background blurring intensity according to the physical distance information, and performing a blurring process on a background region in the image to be processed according to the background blurring intensity.
  2. The image blurring method according to claim 1, wherein the detecting the face region in the image to be processed and acquiring the physical distance information corresponding to the face region comprises:
    detecting face regions in the image to be processed, and acquiring the physical distance information corresponding to each face region in the image to be processed.
  3. The image blurring method according to claim 2, wherein the obtaining the background blurring intensity according to the physical distance information and performing the blurring process on the background region in the image to be processed according to the background blurring intensity comprises:
    acquiring an area corresponding to each face region, obtaining the background blurring intensity according to the physical distance information and the area, and performing the blurring process on the background region in the image to be processed according to the background blurring intensity.
  4. The image blurring method according to claim 3, wherein the acquiring the area corresponding to each face region and obtaining the background blurring intensity according to the physical distance information and the area comprises:
    acquiring the area corresponding to each face region, and acquiring the physical distance information corresponding to the face region with the largest area; and
    obtaining the background blurring intensity according to the physical distance information corresponding to the face region with the largest area.
  5. The image blurring method according to claim 2, wherein the method further comprises:
    acquiring, according to the physical distance information corresponding to each face region, a portrait blurring intensity corresponding to each face region in the image to be processed; and
    performing a blurring process on the portrait region corresponding to the face region according to the portrait blurring intensity.
  6. The image blurring method according to claim 5, wherein the method further comprises:
    acquiring the area corresponding to each face region, taking the face region with the largest area as a base region, and taking the face regions other than the base region as face blurring regions;
    the acquiring, according to the physical distance information corresponding to each face region, the portrait blurring intensity corresponding to each face region in the image to be processed comprises:
    obtaining the portrait blurring intensity corresponding to the face blurring region according to the physical distance information corresponding to the base region and the face blurring region;
    the performing the blurring process on the portrait region corresponding to the face region according to the portrait blurring intensity comprises:
    performing the blurring process on the portrait region corresponding to the face blurring region according to the portrait blurring intensity.
  7. The image blurring method according to claim 1, wherein the acquiring the physical distance information corresponding to the face region comprises:
    averaging the physical distance information corresponding to all pixels in the face region, and taking the average as the physical distance information corresponding to the face region; or
    taking the physical distance information corresponding to any one pixel as the physical distance information corresponding to the face region.
  8. The image blurring method according to claim 1, wherein the obtaining the background blurring intensity according to the physical distance information and performing the blurring process on the background region in the image to be processed according to the background blurring intensity comprises:
    acquiring the area corresponding to each face region, obtaining the background blurring intensity according to the physical distance information and the area, and performing the blurring process on the background region in the image to be processed according to the background blurring intensity.
  9. The image blurring method according to any one of claims 1 to 8, wherein the method further comprises:
    obtaining a portrait distance range according to the physical distance information corresponding to the face region; and
    obtaining a portrait region in the image to be processed according to the portrait distance range, and obtaining the background region according to the portrait region.
  10. The image blurring method according to claim 9, wherein the obtaining the portrait region in the image to be processed according to the portrait distance range and obtaining the background region according to the portrait region comprises:
    obtaining an image region in the image to be processed according to the physical distance information within the portrait distance range; and
    acquiring color information of the image region, and obtaining the background region other than the portrait region in the image to be processed according to the color information.
  11. One or more non-transitory computer readable storage media containing computer executable instructions that, when executed by one or more processors, cause the processors to perform the following operations:
    acquiring an image to be processed;
    detecting a face region in the image to be processed, and acquiring physical distance information corresponding to the face region; and
    obtaining a background blurring intensity according to the physical distance information, and performing a blurring process on a background region in the image to be processed according to the background blurring intensity.
  12. The computer readable storage medium according to claim 11, wherein when the computer readable instructions are executed by the processor to perform the detecting the face region in the image to be processed and acquiring the physical distance information corresponding to the face region, the processor further performs the following operation:
    detecting face regions in the image to be processed, and acquiring the physical distance information corresponding to each face region in the image to be processed.
  13. The computer readable storage medium according to claim 12, wherein when the computer readable instructions are executed by the processor to perform the obtaining the background blurring intensity according to the physical distance information and performing the blurring process on the background region in the image to be processed according to the background blurring intensity, the processor further performs the following operation:
    acquiring the area corresponding to each face region, obtaining the background blurring intensity according to the physical distance information and the area, and performing the blurring process on the background region in the image to be processed according to the background blurring intensity.
  14. The computer readable storage medium according to claim 13, wherein when the computer readable instructions are executed by the processor to perform the acquiring the area corresponding to each face region and obtaining the background blurring intensity according to the physical distance information and the area, the processor further performs the following operations:
    acquiring the area corresponding to each face region, and acquiring the physical distance information corresponding to the face region with the largest area; and
    obtaining the background blurring intensity according to the physical distance information corresponding to the face region with the largest area.
  15. The computer readable storage medium according to claim 12, wherein when the computer readable instructions are executed by the processor to perform the method, the processor further performs the following operations:
    acquiring, according to the physical distance information corresponding to each face region, a portrait blurring intensity corresponding to each face region in the image to be processed; and
    performing a blurring process on the portrait region corresponding to the face region according to the portrait blurring intensity.
  16. The computer readable storage medium according to claim 15, wherein when the computer readable instructions are executed by the processor to perform the method, the processor further performs the following operation:
    acquiring the area corresponding to each face region, taking the face region with the largest area as a base region, and taking the face regions other than the base region as face blurring regions;
    when the computer readable instructions are executed by the processor to perform the acquiring, according to the physical distance information corresponding to each face region, the portrait blurring intensity corresponding to each face region in the image to be processed, the processor further performs the following operation:
    obtaining the portrait blurring intensity corresponding to the face blurring region according to the physical distance information corresponding to the base region and the face blurring region;
    when the computer readable instructions are executed by the processor to perform the performing the blurring process on the portrait region corresponding to the face region according to the portrait blurring intensity, the processor further performs the following operation:
    performing the blurring process on the portrait region corresponding to the face blurring region according to the portrait blurring intensity.
  17. The computer readable storage medium according to claim 11, wherein when the computer readable instructions are executed by the processor to perform the acquiring the physical distance information corresponding to the face region, the processor further performs the following operation:
    averaging the physical distance information corresponding to all pixels in the face region, and taking the average as the physical distance information corresponding to the face region; or
    taking the physical distance information corresponding to any one pixel as the physical distance information corresponding to the face region.
  18. The computer readable storage medium according to claim 11, wherein when the computer readable instructions are executed by the processor to perform the obtaining the background blurring intensity according to the physical distance information and performing the blurring process on the background region in the image to be processed according to the background blurring intensity, the processor further performs the following operation:
    acquiring the area corresponding to each face region, obtaining the background blurring intensity according to the physical distance information and the area, and performing the blurring process on the background region in the image to be processed according to the background blurring intensity.
  19. The computer readable storage medium according to any one of claims 11 to 18, wherein when the computer readable instructions are executed by the processor to perform the method, the processor further performs the following operations:
    obtaining a portrait distance range according to the physical distance information corresponding to the face region; and
    obtaining a portrait region in the image to be processed according to the portrait distance range, and obtaining the background region according to the portrait region.
  20. The computer readable storage medium according to claim 19, wherein when the computer readable instructions are executed by the processor to perform the obtaining the portrait region in the image to be processed according to the portrait distance range and obtaining the background region according to the portrait region, the processor further performs the following operations:
    obtaining an image region in the image to be processed according to the physical distance information within the portrait distance range; and
    acquiring color information of the image region, and obtaining the background region other than the portrait region in the image to be processed according to the color information.
  21. A computer device, comprising a memory and a processor, the memory storing computer readable instructions that, when executed by the processor, cause the processor to perform the following operations:
    acquiring an image to be processed;
    detecting a face region in the image to be processed, and acquiring physical distance information corresponding to the face region; and
    obtaining a background blurring intensity according to the physical distance information, and performing a blurring process on a background region in the image to be processed according to the background blurring intensity.
  22. The computer device according to claim 21, wherein when the computer readable instructions are executed by the processor to perform the detecting the face region in the image to be processed and acquiring the physical distance information corresponding to the face region, the processor further performs the following operation:
    detecting face regions in the image to be processed, and acquiring the physical distance information corresponding to each face region in the image to be processed.
  23. The computer device according to claim 22, wherein when the computer readable instructions are executed by the processor to perform the obtaining the background blurring intensity according to the physical distance information and performing the blurring process on the background region in the image to be processed according to the background blurring intensity, the processor further performs the following operation:
    acquiring the area corresponding to each face region, obtaining the background blurring intensity according to the physical distance information and the area, and performing the blurring process on the background region in the image to be processed according to the background blurring intensity.
  24. The computer device according to claim 13, wherein when the computer readable instructions are executed by the processor to perform the acquiring the area corresponding to each face region and obtaining the background blurring intensity according to the physical distance information and the area, the processor further performs the following operations:
    acquiring the area corresponding to each face region, and acquiring the physical distance information corresponding to the face region with the largest area; and
    obtaining the background blurring intensity according to the physical distance information corresponding to the face region with the largest area.
  25. The computer device according to claim 22, wherein when the computer readable instructions are executed by the processor to perform the method, the processor further performs the following operations:
    acquiring, according to the physical distance information corresponding to each face region, a portrait blurring intensity corresponding to each face region in the image to be processed; and
    performing a blurring process on the portrait region corresponding to the face region according to the portrait blurring intensity.
  26. The computer device according to claim 25, wherein when the computer readable instructions are executed by the processor to perform the method, the processor further performs the following operation:
    acquiring the area corresponding to each face region, taking the face region with the largest area as a base region, and taking the face regions other than the base region as face blurring regions;
    when the computer readable instructions are executed by the processor to perform the acquiring, according to the physical distance information corresponding to each face region, the portrait blurring intensity corresponding to each face region in the image to be processed, the processor further performs the following operation:
    obtaining the portrait blurring intensity corresponding to the face blurring region according to the physical distance information corresponding to the base region and the face blurring region;
    when the computer readable instructions are executed by the processor to perform the performing the blurring process on the portrait region corresponding to the face region according to the portrait blurring intensity, the processor further performs the following operation:
    performing the blurring process on the portrait region corresponding to the face blurring region according to the portrait blurring intensity.
  27. The computer device according to claim 21, wherein when the computer readable instructions are executed by the processor to perform the acquiring the physical distance information corresponding to the face region, the processor further performs the following operation:
    averaging the physical distance information corresponding to all pixels in the face region, and taking the average as the physical distance information corresponding to the face region; or
    taking the physical distance information corresponding to any one pixel as the physical distance information corresponding to the face region.
  28. The computer device according to claim 21, wherein when the computer readable instructions are executed by the processor to perform the obtaining the background blurring intensity according to the physical distance information and performing the blurring process on the background region in the image to be processed according to the background blurring intensity, the processor further performs the following operation:
    acquiring the area corresponding to each face region, obtaining the background blurring intensity according to the physical distance information and the area, and performing the blurring process on the background region in the image to be processed according to the background blurring intensity.
  29. The computer device according to any one of claims 21 to 28, wherein when the computer readable instructions are executed by the processor to perform the method, the processor further performs the following operations:
    obtaining a portrait distance range according to the physical distance information corresponding to the face region; and
    obtaining a portrait region in the image to be processed according to the portrait distance range, and obtaining the background region according to the portrait region.
  30. The computer device according to claim 29, wherein when the computer readable instructions are executed by the processor to perform the obtaining the portrait region in the image to be processed according to the portrait distance range and obtaining the background region according to the portrait region, the processor further performs the following operations:
    obtaining an image region in the image to be processed according to the physical distance information within the portrait distance range; and
    acquiring color information of the image region, and obtaining the background region other than the portrait region in the image to be processed according to the color information.
PCT/CN2018/099403 2017-08-09 2018-08-08 Image blurring method, computer readable storage medium and computer device WO2019029573A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710676169.2A CN107704798B (zh) 2017-08-09 2017-08-09 图像虚化方法、装置、计算机可读存储介质和计算机设备
CN201710676169.2 2017-08-09

Publications (1)

Publication Number Publication Date
WO2019029573A1 true WO2019029573A1 (zh) 2019-02-14

Family

ID=61170965

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/099403 WO2019029573A1 (zh) 2017-08-09 2018-08-08 图像虚化方法、计算机可读存储介质和计算机设备

Country Status (2)

Country Link
CN (1) CN107704798B (zh)
WO (1) WO2019029573A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704798B (zh) * 2017-08-09 2020-06-12 Oppo广东移动通信有限公司 图像虚化方法、装置、计算机可读存储介质和计算机设备
CN110099251A (zh) * 2019-04-29 2019-08-06 努比亚技术有限公司 监控视频的处理方法、装置以及计算机可读存储介质
CN110971827B (zh) * 2019-12-09 2022-02-18 Oppo广东移动通信有限公司 人像模式拍摄方法、装置、终端设备和存储介质
CN112217992A (zh) * 2020-09-29 2021-01-12 Oppo(重庆)智能科技有限公司 图像虚化方法、图像虚化装置、移动终端及存储介质
CN113673474B (zh) * 2021-08-31 2024-01-12 Oppo广东移动通信有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN115883958A (zh) * 2022-11-22 2023-03-31 荣耀终端有限公司 一种人像拍摄方法
CN117714893A (zh) * 2023-05-17 2024-03-15 荣耀终端有限公司 一种图像虚化处理方法、装置、电子设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017017585A (ja) * 2015-07-02 2017-01-19 オリンパス株式会社 撮像装置、画像処理方法
CN106875348A (zh) * 2016-12-30 2017-06-20 成都西纬科技有限公司 一种重对焦图像处理方法
CN106952222A (zh) * 2017-03-17 2017-07-14 成都通甲优博科技有限责任公司 一种交互式图像虚化方法及装置
CN107704798A (zh) * 2017-08-09 2018-02-16 广东欧珀移动通信有限公司 图像虚化方法、装置、计算机可读存储介质和计算机设备

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4752941B2 (ja) * 2009-03-31 2011-08-17 カシオ計算機株式会社 画像合成装置及びプログラム
JP5760727B2 (ja) * 2011-06-14 2015-08-12 リコーイメージング株式会社 画像処理装置および画像処理方法
CN103945118B (zh) * 2014-03-14 2017-06-20 华为技术有限公司 图像虚化方法、装置及电子设备
CN103973977B (zh) * 2014-04-15 2018-04-27 联想(北京)有限公司 一种预览界面的虚化处理方法、装置及电子设备
CN104333700B (zh) * 2014-11-28 2017-02-22 广东欧珀移动通信有限公司 一种图像虚化方法和图像虚化装置
CN105389801B (zh) * 2015-10-20 2018-09-21 厦门美图之家科技有限公司 人物轮廓设置方法、人物图像虚化方法、***及拍摄终端
CN106331492B (zh) * 2016-08-29 2019-04-16 Oppo广东移动通信有限公司 一种图像处理方法及终端
CN106548185B (zh) * 2016-11-25 2019-05-24 三星电子(中国)研发中心 一种前景区域确定方法和装置

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017017585A (ja) * 2015-07-02 2017-01-19 オリンパス株式会社 撮像装置、画像処理方法
CN106875348A (zh) * 2016-12-30 2017-06-20 成都西纬科技有限公司 一种重对焦图像处理方法
CN106952222A (zh) * 2017-03-17 2017-07-14 成都通甲优博科技有限责任公司 一种交互式图像虚化方法及装置
CN107704798A (zh) * 2017-08-09 2018-02-16 广东欧珀移动通信有限公司 图像虚化方法、装置、计算机可读存储介质和计算机设备

Also Published As

Publication number Publication date
CN107704798B (zh) 2020-06-12
CN107704798A (zh) 2018-02-16

Similar Documents

Publication Publication Date Title
WO2019029573A1 (zh) 图像虚化方法、计算机可读存储介质和计算机设备
WO2019105154A1 (en) Image processing method, apparatus and device
WO2020038028A1 (zh) 夜景拍摄方法、装置、电子设备及存储介质
WO2020038074A1 (zh) 曝光控制方法、装置以及电子设备
KR102279436B1 (ko) 이미지 처리 방법, 장치 및 기기
WO2020034737A1 (zh) 成像控制方法、装置、电子设备以及计算机可读存储介质
CN107481186B (zh) 图像处理方法、装置、计算机可读存储介质和计算机设备
CN107509031B (zh) 图像处理方法、装置、移动终端及计算机可读存储介质
WO2020038087A1 (zh) 超级夜景模式下的拍摄控制方法、装置和电子设备
US10805508B2 (en) Image processing method, and device
CN109685853B (zh) 图像处理方法、装置、电子设备和计算机可读存储介质
CN107563979B (zh) 图像处理方法、装置、计算机可读存储介质和计算机设备
WO2019011154A1 (zh) 白平衡处理方法和装置
WO2019105260A1 (zh) 景深获取方法、装置及设备
US11233948B2 (en) Exposure control method and device, and electronic device
WO2019105254A1 (zh) 背景虚化处理方法、装置及设备
CN109559352B (zh) 摄像头标定方法、装置、电子设备和计算机可读存储介质
CN109068060B (zh) 图像处理方法和装置、终端设备、计算机可读存储介质
CN113313626A (zh) 图像处理方法、装置、电子设备及存储介质
CN107563329B (zh) 图像处理方法、装置、计算机可读存储介质和移动终端
CN107454335B (zh) 图像处理方法、装置、计算机可读存储介质和移动终端
WO2019019890A1 (zh) 图像处理方法、计算机设备和计算机可读存储介质
CN109584311B (zh) 摄像头标定方法、装置、电子设备和计算机可读存储介质
CN107464225B (zh) 图像处理方法、装置、计算机可读存储介质和移动终端
CN107295261B (zh) 图像去雾处理方法、装置、存储介质和移动终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18843872

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18843872

Country of ref document: EP

Kind code of ref document: A1