CN114565531A - Image restoration method, device, equipment and medium

Image restoration method, device, equipment and medium

Info

Publication number
CN114565531A
CN114565531A
Authority
CN
China
Prior art keywords
image
area
lens
lens area
environment
Prior art date
Legal status
Pending
Application number
CN202210188238.6A
Other languages
Chinese (zh)
Inventor
邵昌旭
许亮
李轲
Current Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN202210188238.6A
Publication of CN114565531A
Priority to PCT/CN2022/134873 (published as WO2023160075A1)
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 5/00: Image enhancement or restoration
            • G06T 5/77: Retouching; Inpainting; Scratch removal
          • G06T 7/00: Image analysis
            • G06T 7/10: Segmentation; Edge detection
              • G06T 7/12: Edge-based segmentation
              • G06T 7/13: Edge detection
          • G06T 2207/00: Indexing scheme for image analysis or image enhancement
            • G06T 2207/30: Subject of image; Context of image processing
              • G06T 2207/30196: Human being; Person
                • G06T 2207/30201: Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the disclosure provide an image restoration method, apparatus, device, and medium. The method comprises: acquiring a face image of a target object and an environment image including the environment around the target object, wherein the face image contains a lens area image of glasses worn by the target object; determining a light reflection area in the lens area image according to a matching result between the lens area image and the environment image; and repairing the lens area image according to the light reflection area to obtain a repaired target image. The method can repair the lens area image based on the reflection area and thereby achieve a better reflection removal effect.

Description

Image restoration method, device, equipment and medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image restoration method, apparatus, device, and medium.
Background
Many fields that depend on vision algorithms need to obtain information through face recognition or face detection and analysis. However, if a person wears glasses, facial information is likely to be occluded by reflections on the lenses, which reduces recognition accuracy and can even cause the algorithm to fail.
For example, in intelligent vehicle-cabin technology, which brings great convenience to transportation, face images captured in a vehicle can be analyzed to obtain the attributes and states of occupants such as the driver or passengers. For instance, whether the driver is fatigued can be identified by discriminating the opening and closing of the driver's eyes. When the driver wears glasses, a fatigue detection algorithm is likely to output an incorrect state detection result because of reflections on the glasses, and the resulting missed or false alarms about the driver's state increase driving risk and degrade the user experience.
Disclosure of Invention
In view of this, the disclosed embodiments provide at least one image restoration method, apparatus, device and medium.
Specifically, the embodiment of the present disclosure is implemented by the following technical solutions:
In a first aspect, an image inpainting method is provided, the method comprising:
acquiring a face image of a target object and an environment image including an environment around the target object, wherein the face image contains a lens area image of glasses worn by the target object;
determining a light reflection area in the lens area image according to the matching result of the lens area image and the environment image;
and repairing the lens area image according to the light reflection area to obtain a repaired target image.
In some optional embodiments, the determining a light reflection area in the lens area image according to the matching result of the lens area image and the environment image includes: matching the lens area image with the environment image, and determining a marked area in the environment image that matches the lens area image; extracting a feature contour of the marked area; and performing area segmentation on the lens area image by using the feature contour of the marked area to obtain the light reflection area.
In some optional embodiments, after the acquiring the face image of the target object, the method comprises: performing glasses recognition on the face image, and determining a lens area image of the glasses worn by the target object in the face image.
In some optional embodiments, the determining a light reflection area in the lens area image according to the matching result of the lens area image and the environment image includes: in response to determining that a reflection phenomenon exists in the lens area image, determining the reflection area in the lens area image according to the matching result of the lens area image and the environment image.
In some optional embodiments, the determining that a reflection phenomenon exists in the lens area image comprises: determining that a reflection phenomenon exists in the lens area image in response to there being an image area in which the lens area image and the environment image match successfully.
In some optional embodiments, the determining that a reflection phenomenon exists in the lens area image comprises: determining a first area of the lens area image in which the pixel brightness values reach a preset brightness threshold; and determining that a reflection phenomenon exists in the lens area image in response to the area proportion of the first area in the lens area image satisfying a preset area condition.
In some optional embodiments, the determining that a reflection phenomenon exists in the lens area image comprises: determining a first area of the lens area image in which the pixel brightness values reach a preset brightness threshold; and determining that a reflection phenomenon exists in the lens area image in response to an eye area in the lens area being occluded by the first area.
In some optional embodiments, the acquiring a face image of the target object and an environment image including an environment surrounding the target object includes: acquiring a face image of a target object in a vehicle, which is acquired by a first camera; and acquiring the environment image acquired by the second camera, wherein the environment image comprises an external environment image of the vehicle.
In some optional embodiments, after the repairing the lens area image according to the light reflection area to obtain a repaired target image, the method further includes: based on the target image, a state of the target object is identified.
In a second aspect, there is provided an image repair apparatus, the apparatus comprising:
an image acquisition module, configured to acquire a face image of a target object and an environment image including the environment around the target object, wherein the face image contains a lens area image of glasses worn by the target object;
the light reflection area determining module is used for determining a light reflection area in the lens area image according to the matching result of the lens area image and the environment image;
and the image processing module is used for repairing the lens area image according to the light reflection area to obtain a repaired target image.
In some optional embodiments, the light reflection region determining module is specifically configured to: matching the lens area image with the environment image, and determining a marking area matched with the lens area image in the environment image; extracting a characteristic outline of the marking region; and performing area segmentation on the lens area image by using the characteristic contour of the mark area to obtain the light reflecting area.
In some optional embodiments, the image acquisition module, after the acquiring the face image of the target object, is further configured to: and performing glasses recognition on the face image, and determining a lens area image in glasses worn by the target object in the face image.
In some optional embodiments, the light reflection region determining module is specifically configured to: and in response to the determination that the reflection phenomenon exists in the lens area image, determining a reflection area in the lens area image according to a matching result of the lens area image and the environment image.
In some optional embodiments, the light reflection region determining module, when configured to determine that a light reflection phenomenon exists in the lens region image, is specifically configured to: and determining that the reflection phenomenon exists in the lens area image in response to the lens area image and the environment image having an image area successfully matched.
In some optional embodiments, the light reflection region determining module, when configured to determine that a light reflection phenomenon exists in the lens region image, is specifically configured to: determining a first area of the lens area image where the pixel brightness value reaches a preset brightness threshold; and determining that a light reflection phenomenon exists in the lens area image in response to the fact that the area proportion of the first area in the lens area image reaches a preset area condition.
In some optional embodiments, the light reflection region determining module, when configured to determine that a light reflection phenomenon exists in the lens region image, is specifically configured to: determining a first area of the lens area image where the pixel brightness value reaches a preset brightness threshold; and determining that a light reflection phenomenon exists in the lens area image in response to an eye area in the lens area being occluded by the first area.
In some optional embodiments, the image acquisition module is specifically configured to: acquiring a face image of a target object in a vehicle, which is acquired by a first camera; and acquiring the environment image acquired by the second camera, wherein the environment image comprises an external environment image of the vehicle.
In some optional embodiments, the apparatus further comprises a state identification module, configured to, after the lens area image is repaired to obtain a repaired target image: based on the target image, a state of the target object is identified.
In a third aspect, an electronic device is provided, which includes a memory for storing computer instructions executable on a processor, and the processor is configured to implement the image inpainting method according to any embodiment of the present disclosure when executing the computer instructions.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the image inpainting method according to any one of the embodiments of the present disclosure.
According to the image restoration method provided by the embodiments of the present disclosure, the light reflection area on the glasses can be located accurately by matching the lens area image of the glasses worn by the target object against the environment image of the surrounding environment. The lens area image can then be repaired based on information such as the position and shape of the light reflection area, so that the reflection in the lens area image is weakened or eliminated more effectively, the occluded facial information of the target object is restored in the repaired target image, the influence of lens reflection on downstream algorithms is reduced, and the accuracy of those algorithms is improved.
Drawings
To describe the technical solutions in one or more embodiments of the present disclosure or in the related art more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some of the embodiments recorded in one or more embodiments of the present disclosure, and those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart illustrating a method of image inpainting in accordance with at least one embodiment of the present disclosure;
FIG. 2 is a flow chart illustrating another method of image inpainting, in accordance with at least one embodiment of the present disclosure;
FIG. 3 is a flow chart illustrating yet another method of image inpainting, in accordance with at least one embodiment of the present disclosure;
FIG. 4 is a block diagram of an image restoration device shown in at least one embodiment of the present disclosure;
fig. 5 is a block diagram of another image restoration device shown in at least one embodiment of the present disclosure;
fig. 6 is a schematic diagram illustrating a hardware structure of an electronic device according to at least one embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the specification, as detailed in the appended claims.
The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, the first information may also be referred to as second information and, similarly, the second information may be referred to as first information without departing from the scope of this specification. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
As shown in fig. 1, fig. 1 is a flowchart illustrating an image inpainting method according to at least one embodiment of the present disclosure, which may include the following steps:
in step 102, a face image of a target object and an environment image including an environment surrounding the target object are acquired.
The face image contains a lens area image of the glasses worn by the target object. In this embodiment, the target object is wearing glasses, and the lens area image is the image of the area where the lenses of the glasses are located.
In this step, environment images captured by at least one camera may be acquired; when there are multiple environment images, they may cover different directions of the environment around the target object.
The present embodiment does not limit the manner of acquiring the face image of the target object and the environment image of the environment around the target object. When the target object is in different application scenes, different acquisition modes can be adopted.
Several acquisition modes are exemplified as follows:
For example, when the target object takes a selfie with a mobile phone, the face image may be the image of the target object's face captured by the phone's front camera, and the environment image may be the image of the environment in front of the target object captured by the phone's rear camera. For another example, both the face image and the environment image may be obtained from a single image captured by one camera. For yet another example, when the target object is in a vehicle, the face image may be captured by an in-cabin camera, and the environment image outside the vehicle may be captured by a camera mounted outside the vehicle or facing outward.
It should be noted that the face image and the environment image may be acquired at the same time or at different times. When the target object is moving through the environment, a face image and an environment image acquired at the same or adjacent times may be used.
In step 104, a glistening area in the lens area image is determined according to the matching result of the lens area image and the environment image.
When the lenses of the glasses worn by the target object reflect light, an image of the scenery in the surrounding environment reflected by the lens appears in the lens area image; the area this reflected image occupies in the lens area image is the light reflection area.
In this step, the lens area image and the environment image may be matched to find areas of high similarity between the two images, and the reflection area in the lens area image is then determined from those areas.
For example, the lens area image is matched against the environment image, and if the similarity between some part of the lens area image and some area of the environment image satisfies a set condition, that part of the lens area image is directly determined to be the light reflection area, as in the sketch below.
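As a rough illustration of this direct matching (a sketch, not part of the original disclosure), normalized template matching in OpenCV can slide the lens crop over the environment image and report the best-scoring region; the 0.6 similarity threshold and the use of the whole lens crop as the template are assumptions:

    import cv2

    def find_reflection_by_matching(lens_img, env_img, sim_threshold=0.6):
        """Match the lens area image against the environment image.

        lens_img, env_img: grayscale uint8 arrays; lens_img must be
        smaller than env_img. sim_threshold is an assumed similarity
        condition, not a value from the disclosure.
        Returns the matched (marked) area in the environment image, or None.
        """
        # Normalized cross-correlation tolerates the global brightness
        # shift between the dim reflected scene and the real scene.
        scores = cv2.matchTemplate(env_img, lens_img, cv2.TM_CCOEFF_NORMED)
        _, best, _, best_loc = cv2.minMaxLoc(scores)
        if best < sim_threshold:
            return None  # no successfully matched area, no detectable glare
        x, y = best_loc
        h, w = lens_img.shape[:2]
        return (x, y, w, h)

In practice the threshold would be tuned to tolerate the shape and color distortions discussed next.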
When the lens reflects the surrounding scenery, factors such as the reflection angle, the curvature of the lens, and the material of the lens generally change the shape and color of the scene in the reflection area relative to the environment image. When matching the lens area image with the environment image to search for highly similar areas, a person skilled in the art may set the similarity condition with the influence of these factors in mind.
In one embodiment, this step may be: in response to determining that a reflection phenomenon exists in the lens area image, determining the reflection area in the lens area image according to the matching result of the lens area image and the environment image. That is, whether a reflection phenomenon exists in the lens area image may be determined first, and the reflection area is located only when a reflection is present, which reduces unnecessary consumption of computing resources. Several methods for determining whether a reflection phenomenon exists in the lens area image are given below, although this embodiment is not limited to these examples:
In one example, it may be determined that a reflection phenomenon exists in the lens area image in response to there being an image area in which the lens area image and the environment image match successfully. In this example, the lens area image and the environment image are again matched. If the similarity between some partial area (or the whole) of the lens area image and some partial area of the environment image satisfies the set condition, it is determined that a successfully matched image area exists and that a reflection phenomenon is present in the lens area image; the reflection area can then be determined from the matched image area. If the set condition is not satisfied, it is determined that no successfully matched image area exists and that no reflection phenomenon is present, so the subsequent repair processing can be skipped.
When the surrounding environment is bright, the parts of the lens where reflection occurs are likely to have higher brightness values than the parts where it does not. In another example, whether a reflection phenomenon exists in the lens area image may therefore be determined from the pixel brightness values in the lens area image.
For example, a first area in which the pixel brightness values reach a preset brightness threshold may be determined from the lens area image, and in response to the area proportion of the first area in the lens area image satisfying a preset area condition, it is determined that a reflection phenomenon exists in the lens area image. Pixels whose brightness values reach the preset threshold are combined into the first area; when the proportion of the first area in the lens area image satisfies the preset area condition, for example reaches 10%, a reflection phenomenon is considered to be present. When the proportion does not satisfy the condition, either no reflection has occurred on the lens, or the reflection area is small enough to be ignored. In other examples, the presence of a reflection may also be decided directly from the absolute size of the first area.
For another example, the first area in which the pixel brightness values reach the preset brightness threshold may be determined from the lens area image, and in response to the eye area in the lens area being occluded by the first area, it is determined that a reflection phenomenon exists. For downstream algorithms that focus on the eye area, a reflection on the lens that does not affect the image of the eyes behind the lens can be ignored, whereas a reflection that occludes the eye area cannot. Pixels whose brightness values reach the preset threshold are combined into the first area. When the first area occludes the eye area, for example when the eye contour is incomplete and the complete eyes of the target object cannot be detected outside the first area, a reflection phenomenon is considered to exist and to affect the eye area of the target object, so image restoration must continue. When the eye area is not occluded by the first area, it is determined that no reflection phenomenon exists, or that the reflection can be ignored, and no subsequent repair is performed. Both checks are sketched below.
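The two brightness-based checks can be sketched as follows; this is only one plausible reading of the embodiment, and the brightness threshold of 200, the 10% area condition, the eye bounding box input, and the 0.3 occlusion fraction are all assumed values:

    import numpy as np

    def detect_reflection(lens_gray, eye_box=None,
                          brightness_thresh=200, area_ratio_thresh=0.10):
        """Return True if a reflection phenomenon is considered present.

        lens_gray: grayscale lens area image (uint8 ndarray).
        eye_box: optional (x, y, w, h) eye area; when given, the
        occlusion variant is used instead of the area-ratio variant.
        """
        # The "first area": pixels whose brightness reaches the threshold.
        first_area = lens_gray >= brightness_thresh

        if eye_box is None:
            # Variant 1: area proportion of the first area in the lens image.
            return first_area.mean() >= area_ratio_thresh

        # Variant 2: does the first area occlude the eye area?
        x, y, w, h = eye_box
        eye_patch = first_area[y:y + h, x:x + w]
        # Treat the eye as occluded if a sizable fraction of it is
        # saturated (the 0.3 fraction is an assumed heuristic).
        return eye_patch.mean() >= 0.3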
In step 106, the lens area image is restored according to the light reflection area, and a restored target image is obtained.
In this step, image restoration techniques can be used to repair the reflection area of the lens area image, guided by the information the reflection area provides, so as to eliminate or weaken the influence of the reflection and obtain a reflection-free lens area image, i.e., the repaired target image.
In the reflection area, the image reflected by the lens may be superimposed and mixed with the image of the face area of the target object behind the lens, making the facial information of the target object difficult to recognize. When repairing the lens area image, on the one hand, the content of the reflection area can be estimated from information such as its structural shape and its edge colors, and the area can then be filled in. On the other hand, the image information of the environment area matched with the reflection area can be used as a reference to repair the reflection area and restore the image behind the lens.
This embodiment does not limit the specific repair algorithm, which may be an image quality enhancement algorithm, an image completion algorithm, a super-resolution technique, or the like. In one example, a neural network model for image restoration may be trained in advance; the reflection area, the lens area image, and the environment image are input to the model, which outputs the repaired target image. By learning the image information of the non-reflective area around the reflection area and by learning from batches of image samples, the network can predict the facial information of the target object in the lens area image.
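Since the disclosure leaves the repair algorithm open, one hedged classical stand-in for the repair step is OpenCV's fast-marching inpainting, which fills the segmented reflection mask from the surrounding non-reflective pixels:

    import cv2

    def repair_lens_image(lens_bgr, reflection_mask):
        """Repair the lens area image given a binary reflection mask.

        lens_bgr: color lens area image (uint8, BGR).
        reflection_mask: uint8 mask, 255 where the reflection was segmented.
        Telea inpainting is only a stand-in for the trained restoration
        network described above; the radius of 5 is an assumed setting.
        """
        return cv2.inpaint(lens_bgr, reflection_mask, 5, cv2.INPAINT_TELEA)

A learned model could additionally take the matched environment region as a reference input, as the embodiment suggests, which pure diffusion-based inpainting cannot exploit.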
According to the image restoration method provided by the embodiments of the present disclosure, the light reflection area on the glasses can be located accurately by matching the lens area image of the glasses worn by the target object against the environment image of the surrounding environment, so that the lens area image can be repaired based on information such as the position and shape of the light reflection area. The occluded facial information of the target object, particularly the eye area information, is restored in the repaired target image, the influence of lens reflection on downstream algorithms is reduced, and the accuracy of those algorithms is improved.
Fig. 2 shows another image restoration method provided by at least one embodiment of the present disclosure. It may include the following steps; steps identical to those in the flow of Fig. 1 are not described again in detail.
In step 202, a face image of a target object and an environment image including an environment surrounding the target object are acquired.
Wherein the facial image includes a lens area image of glasses worn by the target object. The image restoration method in this embodiment may be used as a preprocessing step of various image recognition algorithms, for example, may be applied to various vehicle cabin vision algorithms.
In step 204, the lens area image is matched with the environment image, and a mark area matched with the lens area image in the environment image is determined.
In this step, the lens area image and the environment image may be matched; if the similarity between some part of the lens area image and some area of the environment image satisfies a set condition, that area of the environment image is determined to be the marked area matching the lens area image. The marked area contains scenery in the known environment, such as a house or a tree.
In step 206, a feature profile of the marked region is extracted.
A feature contour is one or more sets of interconnected curves that outline the scenery in the marked area; these curves are composed of a series of edge points. For example, if the marked area contains a tree and a utility pole, the extracted feature contours include the contour of the tree and the contour of the utility pole.
This embodiment does not limit the specific manner of extracting the feature contour of the marked area; for example, image segmentation or edge detection may be used, as in the sketch below.
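One concrete edge-detection route (the thresholds are assumptions, not values from the disclosure) is Canny edge detection followed by contour tracing:

    import cv2

    def extract_feature_contours(marked_area_bgr, low=50, high=150):
        """Extract feature contours of the marked environment area.

        The Canny thresholds (50, 150) are illustrative assumptions.
        Returns a list of contours, each a series of edge points in
        the marked area's local coordinates.
        """
        gray = cv2.cvtColor(marked_area_bgr, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress noise first
        edges = cv2.Canny(gray, low, high)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return contours  # e.g. the tree contour and the pole contour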
In step 208, the lens area image is subjected to area segmentation by using the feature contour of the mark area, so as to obtain the light reflection area.
This embodiment does not limit the specific manner in which the region segmentation is performed.
For example, the extracted feature contour may be used as an area mask covering the area within the contour. In the example above, the combined contours of the tree and the utility pole may serve as one area mask, or as two area masks, i.e., one mask corresponding to the tree and one corresponding to the pole. According to the shape of the area covered by the area mask, an area of similar shape is fitted in the lens area image and segmented out as the light reflection area. When there are multiple area masks, multiple light reflection areas can be segmented.
For another example, the feature contour may be used as a detection target for target detection in the lens area image, yielding the area contour with the highest confidence, and the light reflection area is then segmented from the lens area image based on that contour. The mask-fitting variant is sketched below.
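A minimal sketch of the mask-fitting variant, under simplified alignment assumptions: rasterize the feature contour (as returned by the extraction sketch above) into an area mask, scale it onto the lens crop, and keep only the bright pixels it covers. Scaling the whole marked-area mask onto the lens crop and the brightness gate are illustrative choices, not the disclosure's exact procedure:

    import cv2
    import numpy as np

    def segment_reflection(lens_gray, contour, marked_size,
                           brightness_thresh=200):
        """Segment the light reflection area using one feature contour.

        contour: a contour in the marked area's local coordinates.
        marked_size: (height, width) of the marked environment area.
        """
        h, w = marked_size
        # Rasterize the contour into a filled area mask.
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.drawContours(mask, [contour], -1, 255, -1)
        # Scale the mask onto the lens image (crude alignment assumption).
        mask = cv2.resize(mask, (lens_gray.shape[1], lens_gray.shape[0]),
                          interpolation=cv2.INTER_NEAREST)
        # Keep only mask pixels that are also bright, since the reflection
        # appears as a brighter overlay on the lens.
        bright = (lens_gray >= brightness_thresh).astype(np.uint8) * 255
        return cv2.bitwise_and(mask, bright)  # binary reflection mask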
Compared with the approach of the previous embodiment, in which a part of the lens area image is directly determined to be the reflection area once its similarity to some area of the environment image satisfies a condition, direct matching is faster, but the accuracy of the resulting reflection area is lower than with the method of this embodiment. The reason is that the lens area image is a mixture of the scene reflected by the lens and the image of the face area of the target object behind the lens, so it is difficult to separate the true reflection area from the lens area image by matching alone.
In step 210, the lens area image is restored according to the light reflection area, so as to obtain a restored target image.
In the image restoration method provided by this embodiment, the lens area image of the glasses worn by the target object is matched against the environment image of the surrounding environment to find the marked area that matches the lens area image. Because the marked area is a clear region of the environment image, the feature contour extracted from it is sharper and more reliable and is closer to the shape of the actual scenery in the environment; since what the lens reflects is exactly that scenery, the reflection area segmented with the feature contour is closer to the area where the reflection actually lies. The lens area image can therefore be repaired based on information such as the position and shape of the reflection area: the more accurate the reflection area, the more reliable the information provided to the repair algorithm, and the better the repair and completion effect. The occluded facial information of the target object, particularly the eye area information, is restored in the repaired target image, the influence of lens reflection on downstream algorithms is reduced, and the accuracy of those algorithms is improved.
Fig. 3 shows an image restoration method provided by another embodiment of the present disclosure. It may be applied, for example, in the field of intelligent vehicle cabins, and may be executed by a DMS (Driver Monitoring System), an OMS (Occupant Monitoring System), an intelligent driving system, or a cloud service. It includes the following steps; steps identical to those in the flows of Figs. 1 and 2 are not described again in detail.
In step 302, a face image of a target object within a vehicle captured by a first camera is acquired.
For example, the first camera may face the interior of the vehicle; it captures the inside of the vehicle to obtain an in-cabin image, and image analysis of the in-cabin image yields the face image of the target object. The target object may be a driver, a passenger, or a safety officer in the vehicle.
In step 304, glasses recognition is performed on the facial image, and a lens area image in glasses worn by the target object in the facial image is determined.
For example, glasses detection may be performed on the face image first; if glasses are detected, glasses recognition is then performed on the face image, and the lens area image of the glasses worn by the target object is obtained by the recognition.
For another example, glasses recognition is performed directly on the face image to determine whether the target object wears glasses; when it is determined that the target object wears glasses, the lens area image of the glasses is obtained, as in the sketch below.
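As one plausible stand-in for this recognition step (the disclosure does not name a model), OpenCV ships a stock Haar cascade trained for eyes seen through eyeglasses, which can supply candidate lens-area crops:

    import cv2

    # Stock OpenCV cascade trained for eyes behind eyeglasses.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye_tree_eyeglasses.xml")

    def find_lens_areas(face_bgr):
        """Return candidate lens area boxes (x, y, w, h) from a face image."""
        gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
        boxes = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                         minNeighbors=5)
        return list(boxes)  # empty list: no glasses/lens areas detected

A production system would more likely use a trained glasses-segmentation network, but the cascade is enough to show the input/output contract of this step.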
In step 306, the environment image collected by the second camera is acquired, and the environment image includes an external environment image of the vehicle.
For example, the second camera may face the exterior of the vehicle and capture an image of the environment outside the vehicle as the environment image. The environment image may come from a single camera or from multiple cameras.
In one example, the method of this embodiment may be used to repair the driver's face image collected by a DMS. Considering that reflections on the driver's glasses are usually formed by scenery in front of or beside the vehicle, the second camera may be a forward-facing and/or side-facing camera of the vehicle, so that the captured environment image covers the external environment in front of and/or beside the vehicle; alternatively, a surround-view camera facing outward may be used, so that the environment image is an exterior panoramic image.
In step 308, the lens area image is matched with the environment image, and a mark area in the environment image matched with the lens area image is determined.
For example, if part of the lens area image matches a certain area of the vehicle-exterior portion of the environment image, that exterior area is marked as the marked area.
In step 310, the feature contour of the marked region is extracted to obtain a region mask.
The feature contour of the marked area is extracted and used as an area mask.
In step 312, the lens region image is subjected to region segmentation by using the region mask to obtain the light reflection region.
In step 314, the lens area image is restored according to the light reflection area, so as to obtain a restored target image.
In step 316, a state of the target object is identified based on the target image.
The state of the target object may characterize an emotional or physical state and may specifically include at least one of: a normal state, a fatigue state, and a distraction state.
For example, the target image, i.e., the repaired lens area image, may be input into a state recognition model, which may be a pre-trained neural network model capable of recognizing the state of the target object from eye closure, the distance between the eyelids, blink speed, gaze direction, saccadic movement, and the like.
For another example, the target image may be pasted back into the face image to obtain a repaired face image, which is then input into the state recognition model; the model can recognize the state of the target object by combining eye features with other facial features, such as a yawning mouth or changes in facial expression.
In some embodiments, the eye-related state of the target object may be recognized from the repaired target image. Specifically, eye features may be extracted from the target image to recognize the gaze direction or the open/closed state of the eyes; the length of time the gaze is held in one direction, or the duration of eye closure, may be detected from the video stream to determine whether the target object is distracted or fatigued, or to determine the target object's level of distraction or fatigue. One common eye feature for this is sketched below.
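One common concrete feature for the open/closed-eye and closure-duration checks, though not named in the disclosure, is the eye aspect ratio (EAR) over eye landmarks; the 0.2 closure threshold and the 48-frame window below are assumed values:

    import numpy as np

    def eye_aspect_ratio(eye_pts):
        """eye_pts: six (x, y) landmarks in the standard EAR ordering:
        outer corner, two upper-lid points, inner corner, two
        lower-lid points."""
        p = np.asarray(eye_pts, dtype=float)
        vert = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
        horiz = np.linalg.norm(p[0] - p[3])
        return vert / (2.0 * horiz)

    def is_fatigued(ear_history, thresh=0.2, consec_frames=48):
        """Fatigue heuristic over a video stream: the eyes have stayed
        closed (EAR below threshold) for N consecutive frames."""
        recent = ear_history[-consec_frames:]
        return (len(recent) == consec_frames
                and all(e < thresh for e in recent))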
The image restoration method of this embodiment can preprocess the image with the lens reflection removal technique before feeding it to the algorithm module, improving the accuracy and usability of the recognition algorithm. Taking a fatigue detection algorithm as an example: under lens reflection, detection accuracy drops noticeably because feature information is lost from the image, producing missed or false alarms. With the method of this embodiment, the lens reflection can be segmented and the image repaired according to the contours of the scenery outside the vehicle, restoring the key information required by the fatigue monitoring algorithm and improving the accuracy of in-cabin vision algorithms.
As shown in fig. 4, the figure is a block diagram of an image restoration apparatus according to at least one embodiment of the present disclosure, the apparatus including:
an image obtaining module 41, configured to obtain a face image of a target object and an environment image including an environment around the target object, where the face image includes a lens area image of glasses worn by the target object.
And a light reflection area determining module 42, configured to determine a light reflection area in the lens area image according to a matching result between the lens area image and the environment image.
And an image processing module 43, configured to repair the lens area image according to the light reflection area, so as to obtain a repaired target image.
In an example, the light reflection area determining module 42 is specifically configured to: matching the lens area image with the environment image, and determining a marking area matched with the lens area image in the environment image; extracting a characteristic outline of the marking region; and performing area segmentation on the lens area image by using the characteristic contour of the mark area to obtain the light reflecting area.
In an example, the image obtaining module 41, after obtaining the face image of the target object, is further configured to: and carrying out glasses recognition on the facial image, and determining a lens area image in glasses worn by the target object in the facial image.
In an example, the light reflection area determining module 42 is specifically configured to: and in response to the determination that the reflection phenomenon exists in the lens area image, determining a reflection area in the lens area image according to a matching result of the lens area image and the environment image.
In one example, the light reflection region determining module 42, when configured to determine that a light reflection phenomenon exists in the lens region image, is specifically configured to: and determining that the reflection phenomenon exists in the lens area image in response to the lens area image and the environment image having an image area successfully matched.
In one example, the light reflection region determining module 42, when configured to determine that a light reflection phenomenon exists in the lens region image, is specifically configured to: determining a first area of the lens area image where the pixel brightness value reaches a preset brightness threshold; and determining that a light reflection phenomenon exists in the lens area image in response to the fact that the area proportion of the first area in the lens area image reaches a preset area condition.
In one example, the light reflection region determining module 42, when configured to determine that a light reflection phenomenon exists in the lens region image, is specifically configured to: determining a first area of the lens area image where the pixel brightness value reaches a preset brightness threshold; and determining that a light reflection phenomenon exists in the lens area image in response to an eye area in the lens area being occluded by the first area.
In an example, the image obtaining module 41 is specifically configured to: acquiring a face image of a target object in a vehicle, which is acquired by a first camera; and acquiring the environment image acquired by the second camera, wherein the environment image comprises an external environment image of the vehicle.
In one example, as shown in fig. 5, the apparatus further includes a state recognition module 44, configured to, after the lens area image is restored to obtain a restored target image: based on the target image, a state of the target object is identified.
The implementation process of the functions and actions of each module in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
The embodiment of the present disclosure further provides an electronic device, as shown in fig. 6, where the electronic device includes a memory 61 and a processor 62, the memory 61 is used for storing computer instructions executable on the processor, and the processor 62 is used for implementing the image inpainting method according to any embodiment of the present disclosure when executing the computer instructions.
The embodiments of the present disclosure also provide a computer program product, which includes a computer program/instruction, when being executed by a processor, the computer program/instruction implements the image inpainting method according to any embodiment of the present disclosure.
The embodiment of the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, and the program, when executed by a processor, implements the image inpainting method according to any embodiment of the present disclosure.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, wherein the modules described as separate parts may or may not be physically separate, and the parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in the specification. One of ordinary skill in the art can understand and implement it without inventive effort.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It will be understood that the present description is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (12)

1. An image inpainting method, comprising:
acquiring a face image of a target object and an environment image including an environment around the target object, wherein the face image contains a lens area image of glasses worn by the target object;
determining a glistening area in the lens area image according to the matching result of the lens area image and the environment image;
and repairing the lens area image according to the light reflection area to obtain a repaired target image.
2. The method according to claim 1, wherein the determining the glistening area in the lens area image according to the matching result of the lens area image and the environment image comprises:
matching the lens area image with the environment image, and determining a marking area matched with the lens area image in the environment image;
extracting a characteristic outline of the marking region;
and performing area segmentation on the lens area image by using the characteristic contour of the mark area to obtain the light reflecting area.
3. The method of claim 1, wherein after the obtaining the facial image of the target object, the method comprises:
and carrying out glasses recognition on the facial image, and determining a lens area image in glasses worn by the target object in the facial image.
4. The method according to any one of claims 1-3, wherein the determining the glistening area in the lens area image from the matching of the lens area image and the environment image comprises:
and in response to the determination that the reflection phenomenon exists in the lens area image, determining a reflection area in the lens area image according to a matching result of the lens area image and the environment image.
5. The method of claim 4, wherein said determining that glistening is present in the lens area image comprises:
and determining that the reflection phenomenon exists in the lens area image in response to the lens area image and the environment image having an image area successfully matched.
6. The method of claim 4, wherein said determining that glistening is present in the lens area image comprises:
determining a first area of the lens area image where the pixel brightness value reaches a preset brightness threshold;
and determining that a light reflection phenomenon exists in the lens area image in response to the fact that the area proportion of the first area in the lens area image reaches a preset area condition.
7. The method of claim 4, wherein said determining that glistening is present in the lens area image comprises:
determining a first area of the lens area image where the pixel brightness value reaches a preset brightness threshold;
determining that a glistening phenomenon exists in the lens area image in response to an eye area in the lens area being occluded by the first area.
8. The method according to any one of claims 1-7, wherein the acquiring a face image of a target object and an environment image including an environment surrounding the target object comprises:
acquiring a face image of a target object in a vehicle, which is acquired by a first camera;
and acquiring the environment image acquired by the second camera, wherein the environment image comprises an external environment image of the vehicle.
9. The method according to any one of claims 1-8, wherein after the repairing the lens area image according to the light reflection area to obtain a repaired target image, the method further comprises:
based on the target image, a state of the target object is identified.
10. An image restoration apparatus, characterized in that the apparatus comprises:
an image acquisition module, configured to acquire a face image of a target object and an environment image including the environment around the target object, wherein the face image contains a lens area image of glasses worn by the target object;
the light reflection area determining module is used for determining a light reflection area in the lens area image according to the matching result of the lens area image and the environment image;
and the image processing module is used for repairing the lens area image according to the light reflection area to obtain a repaired target image.
11. An electronic device, comprising a memory for storing computer instructions executable on a processor, the processor being configured to implement the method of any one of claims 1 to 9 when executing the computer instructions.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 1 to 9.
CN202210188238.6A 2022-02-28 2022-02-28 Image restoration method, device, equipment and medium Pending CN114565531A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210188238.6A CN114565531A (en) 2022-02-28 2022-02-28 Image restoration method, device, equipment and medium
PCT/CN2022/134873 WO2023160075A1 (en) 2022-02-28 2022-11-29 Image inpainting method and apparatus, and device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210188238.6A CN114565531A (en) 2022-02-28 2022-02-28 Image restoration method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN114565531A true CN114565531A (en) 2022-05-31

Family

ID=81716112

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210188238.6A Pending CN114565531A (en) 2022-02-28 2022-02-28 Image restoration method, device, equipment and medium

Country Status (2)

Country Link
CN (1) CN114565531A (en)
WO (1) WO2023160075A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023160075A1 (en) * 2022-02-28 2023-08-31 上海商汤智能科技有限公司 Image inpainting method and apparatus, and device and medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018011681A (en) * 2016-07-20 2018-01-25 富士通株式会社 Visual line detection device, visual line detection program, and visual line detection method
CN108564540B (en) * 2018-03-05 2020-07-17 Oppo广东移动通信有限公司 Image processing method and device for removing lens reflection in image and terminal equipment
CN111582005B (en) * 2019-02-18 2023-08-15 Oppo广东移动通信有限公司 Image processing method, device, computer readable medium and electronic equipment
CN113055579B (en) * 2019-12-26 2022-02-01 深圳市万普拉斯科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN114565531A (en) * 2022-02-28 2022-05-31 上海商汤临港智能科技有限公司 Image restoration method, device, equipment and medium


Also Published As

Publication number Publication date
WO2023160075A1 (en) 2023-08-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40068044)