CN111311523B - Image processing method, device and system and electronic equipment - Google Patents


Info

Publication number
CN111311523B
CN111311523B (application CN202010226538.XA)
Authority
CN
China
Prior art keywords
image
display screen
pixel
target display
sub
Prior art date
Legal status
Active
Application number
CN202010226538.XA
Other languages
Chinese (zh)
Other versions
CN111311523A
Inventor
徐鲁辉
范浩强
李帅
Current Assignee
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN202010226538.XA priority Critical patent/CN111311523B/en
Publication of CN111311523A publication Critical patent/CN111311523A/en
Priority to PCT/CN2020/119611 priority patent/WO2021189807A1/en
Application granted granted Critical
Publication of CN111311523B publication Critical patent/CN111311523B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention provides an image processing method, an image processing device, an image processing system and electronic equipment. The method includes: acquiring an image to be processed, where the image to be processed is captured by a camera device through a target display screen; and removing image fogging from the image to be processed through an image processing model to obtain a defogged target image, where the image processing model is trained according to pixel arrangement information of the target display screen. Because the image processing model can remove the fogging introduced when the camera device captures images through the target display screen, the image quality is improved and the user experience is improved as well.

Description

Image processing method, device and system and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, apparatus, system, and electronic device.
Background
With the development of full-screen technology, full-screen terminal devices are becoming more and more popular. In the related art, a terminal device is provided with a front camera, and the portion of the display screen where the front camera is installed is usually provided with a groove or a hole so that the front camera can capture external images; however, the groove or hole formed in the display screen reduces the screen-to-body ratio. To increase the screen-to-body ratio and realize a truly full screen, the front camera can instead be hidden below the display screen, without perforating it, and capture external images through a light-transmitting area of the screen. Due to physical diffraction, however, images captured in this way have poor image quality, which degrades the user experience.
Disclosure of Invention
Accordingly, the present invention is directed to an image processing method, apparatus, system and electronic device, so as to improve the image quality and the user experience.
In order to achieve the above object, the technical solutions adopted by the embodiments of the present invention are as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, including: acquiring an image to be processed, where the image to be processed is captured by a camera device through a target display screen; and removing image fogging from the image to be processed through an image processing model to obtain a defogged target image, where the image processing model is trained according to pixel arrangement information of the target display screen.
In a preferred embodiment of the present invention, the image processing model is trained as follows: generating simulated fogging information according to the pixel arrangement information of the target display screen; fusing the simulated fogging information into a standard image to obtain a sample image, where the standard image is captured by an optical camera; and training an initial model according to the standard image and the sample image to obtain the image processing model.
In a preferred embodiment of the present invention, the pixel arrangement information of the target display screen includes arrangement information of each sub-pixel in the target display screen, and the step of generating the simulated fogging information according to the pixel arrangement information includes: generating a sub-pixel distribution map of the target display screen according to the arrangement information of each sub-pixel, where in the sub-pixel distribution map a first pixel value identifies the sub-pixels and a second pixel value identifies the areas of the target display screen other than the sub-pixels; and generating the simulated fogging information according to the sub-pixel distribution map.
In a preferred embodiment of the present invention, the arrangement information of each sub-pixel includes at least one of the position, shape, rotation angle, and size of the sub-pixel, and the step of identifying the sub-pixels with the first pixel value and the remaining areas with the second pixel value includes: for each sub-pixel, determining a first image area corresponding to the sub-pixel according to its arrangement information and marking the first image area with the first pixel value; and marking the areas outside the first image areas with the second pixel value.
In a preferred embodiment of the present invention, the step of generating the simulated fogging information according to the sub-pixel distribution map of the target display screen includes: performing a Fourier transform on the sub-pixel distribution map to obtain the simulated fogging information, where the simulated fogging information includes diffraction fringe information.
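As a minimal sketch of what this Fourier-transform step could look like, assuming a far-field (Fraunhofer) diffraction model in which the diffraction pattern is the squared magnitude of the 2-D FFT of the transmitting aperture; the function name and the light-through-the-gaps assumption are ours, not the patent's:

```python
import numpy as np

def simulate_fogging_psf(subpixel_map: np.ndarray) -> np.ndarray:
    """Approximate the diffraction point-spread function (PSF) implied by
    the display's sub-pixel pattern via a 2-D Fourier transform.

    Assumes light passes through the NON-sub-pixel regions, so the
    aperture is the inverse of the sub-pixel distribution map (value 255
    marks an opaque sub-pixel, value 0 a transmitting gap)."""
    aperture = (subpixel_map == 0).astype(np.float64)
    spectrum = np.fft.fftshift(np.fft.fft2(aperture))
    psf = np.abs(spectrum) ** 2          # Fraunhofer intensity pattern
    return psf / psf.sum()               # normalize to conserve energy
```

For a regular sub-pixel grid, the resulting PSF shows the star-like diffraction fringes visible around point light sources in fig. 1.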
In a preferred embodiment of the present invention, the step of fusing the simulated fogging information into the standard image to obtain the sample image includes: convolving the simulated fogging information with the standard image to obtain the sample image.
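The fusion step above could be sketched as an FFT-based circular convolution; the centered-PSF and same-shape assumptions are simplifications we introduce for illustration:

```python
import numpy as np

def fuse_fogging(standard_image: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """Blur a clean 'standard' image with a simulated fogging PSF via
    FFT-based circular convolution, producing a fogged training sample.

    Assumes the PSF is centered and has the same shape as the image."""
    img = standard_image.astype(np.float64)
    kernel = np.fft.ifftshift(psf)   # move the PSF center to index (0, 0)
    sample = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))
    return np.clip(sample, 0.0, 255.0)
```

Applied to many different standard images with the same PSF, this yields paired (sample, standard) training data without ever photographing through the screen.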
In a preferred embodiment of the present invention, the step of training the initial model according to the standard image and the sample image includes: inputting the sample image into the initial model to obtain an output result; determining a loss value according to the output result and the standard image; and training the initial model based on the loss value to obtain the image processing model.
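To make the loop concrete, here is a deliberately toy stand-in for the training step: instead of a deep network, the "model" is an affine map output = a*sample + b, fitted by gradient descent on the MSE loss against the standard image. The real image processing model would be a neural network, but the forward/loss/update cycle is the same; everything here is our illustration, not the patent's implementation, and images are assumed normalized to [0, 1]:

```python
import numpy as np

def train_toy_defogger(sample: np.ndarray, standard: np.ndarray,
                       lr: float = 0.1, steps: int = 500):
    """Fit output = a*sample + b to the standard image by gradient
    descent on the mean-squared-error loss. Stand-in for training a
    deep model: forward pass, loss vs. the standard image, update."""
    a, b = 1.0, 0.0
    n = sample.size
    for _ in range(steps):
        out = sample * a + b                    # forward pass
        err = out - standard
        grad_a = 2.0 * np.sum(err * sample) / n  # dLoss/da
        grad_b = 2.0 * np.sum(err) / n           # dLoss/db
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b
```

Training stops in practice when the model's outputs are close to (or identical with) the standard images, as described below for the full model.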
In a preferred embodiment of the present invention, after the step of obtaining the defogged target image, the method further includes: displaying the target image; or rendering the sub-pixels in the target display screen according to the target image.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including: an image acquisition module configured to acquire an image to be processed, where the image to be processed is captured by a camera device through a target display screen; and an image processing module configured to remove image fogging from the image to be processed through an image processing model to obtain a defogged target image, where the image processing model is trained according to pixel arrangement information of the target display screen.
In a third aspect, an embodiment of the present invention provides an image processing system, including: a processing device and a storage device; the storage means has stored thereon a computer program which, when run by a processing device, performs the above-described image processing method.
In a fourth aspect, an embodiment of the present invention provides an electronic device, where the electronic device includes a target display screen, an image capturing device disposed under the target display screen, and an image processing system according to the third aspect.
In a fifth aspect, embodiments of the present invention provide a computer readable storage medium having stored thereon a computer program which, when run by a processing device, performs the steps of the image processing method as described above.
The embodiment of the invention has the following beneficial effects:
the embodiments of the present invention provide an image processing method, apparatus, system, and electronic device. An image to be processed, captured by a camera device through a target display screen, is acquired; image fogging is removed from the image to be processed through an image processing model to obtain a defogged target image; the image processing model is trained according to pixel arrangement information of the target display screen. By removing, through the image processing model, the fogging introduced when the camera device captures images through the target display screen, the method improves image quality and thereby the user experience.
Additional features and advantages of the invention will be set forth in the description which follows, or in part will be obvious from the description, or may be learned by practice of the invention.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and a person skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of an image acquired by a camera through a target display screen according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 3 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a defogged image to be processed according to an embodiment of the present invention;
FIG. 5 is a flowchart of another image processing method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a sub-pixel distribution diagram according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of diffraction fringe information according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the related art, in order to allow the front camera of a terminal device to capture external images normally, a groove or a hole is generally formed in the portion of the display screen where the front camera is located, as in notch screens, waterdrop screens, punch-hole screens, blind-hole screens, and the like; however, the groove or hole reduces the screen-to-body ratio of the display screen. To increase the screen-to-body ratio and realize a truly full screen, the related art hides the front camera below the display screen without perforating it, so that in use the camera captures external images through the light-transmitting area of the screen. However, when an external image is captured by a camera below the display screen, physical diffraction, blurring, and the like occur, so that the captured image is fogged and its image quality is poor. For example, as shown in fig. 1, an image captured by a camera under a display screen may exhibit diffraction fringes in highlight regions (for example, the regions with white dots in fig. 1) and blurring in non-highlight regions (for example, the regions with the paper cups, apples, etc. in fig. 1), which degrades the user experience.
On this basis, embodiments of the present invention provide an image processing method, apparatus, and system and an electronic device. The technique can be applied to various products in which a camera is placed below a display screen, such as mobile phones, computers, cameras, and biomedical imaging devices. For ease of understanding, the embodiments of the present invention are described in detail below.
Embodiment one:
first, an example electronic device 100 for implementing the image processing method, apparatus, system, and electronic device of the embodiment of the present invention is described with reference to fig. 2.
As shown in fig. 2, an electronic device 100 includes one or more processing devices 102, one or more storage devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected by a bus system 112 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structures of the electronic device 100 shown in fig. 2 are exemplary only and not limiting, as the electronic device may have other components and structures as desired.
The processing device 102 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, may process data from other components in the electronic device 100, and may also control other components in the electronic device 100 to perform desired functions.
The storage 104 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random Access Memory (RAM) and/or cache memory (cache), and the like. The non-volatile memory may include, for example, read Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer readable storage medium that can be executed by the processing device 102 to implement client functionality and/or other desired functionality in embodiments of the present invention described below (implemented by the processing device). Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, mouse, microphone, touch screen, and the like.
The output device 108 may output various information (e.g., images, text, or sound) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may capture images (e.g., photographs, videos, etc.) desired by the user and store the captured images in the storage device 104 for use by other components.
Illustratively, the devices in the exemplary electronic apparatus for implementing the image processing method, apparatus, and system according to the embodiments of the present invention may be provided integrally or separately; for example, the processing device 102, the storage device 104, the input device 106, the output device 108, and the image capture device 110 may be provided integrally. When these devices are provided integrally, the electronic apparatus may be implemented as a smart terminal such as a smartphone, a tablet computer, or a computer.
Embodiment two:
the present embodiment provides an image processing method that can be executed by a processing device in the above-described electronic device; the processing device may be any device or chip having data processing capabilities. As shown in fig. 3, the method comprises the steps of:
step S302, obtaining an image to be processed; the image to be processed is acquired by the camera device through the target display screen.
The camera device may include a camera, for example, a front camera in a mobile phone or tablet computer, and may be a video camera or a still camera. The target display screen may be an OLED (Organic Light-Emitting Diode) display screen or another light-permeable display screen. An OLED display screen has a light-permeable portion and needs no backlight: it uses a very thin organic material coating on a glass substrate, and the organic material emits light when a current passes through it. OLED display screens are therefore lighter and thinner, have a larger viewing angle, and save electric energy remarkably.
In a specific implementation, the camera device is located below the target display screen (that is, on the backlight side of the target display screen), so that it can capture the image to be processed through the light-transmitting portion of the screen. The target display screen may be the whole screen of the intelligent terminal (for example, the whole screen of a mobile phone) or part of it, for example, the screen area corresponding to the position of the camera device (such as the area over a front camera of a mobile phone), or the top half of the whole screen in which the camera device is installed; this is not limited in the embodiments of the present invention.
The image fogging produced when the camera device captures the image to be processed through the target display screen includes diffraction fringes, blurring, and the like, and generally differs with the shooting scene. For example, the shooting scene may be any scene with light, such as one containing a point light source. The image shown in fig. 1 may be taken as an image to be processed: the white-dot area in fig. 1 is a point-light-source area (which may also be called a highlight area) with obvious diffraction fringes, while the non-point-light-source area (which may also be called a non-highlight area) is blurred because of physical diffraction. The shooting scene may also contain no point light source; in that case, the image to be processed captured through the target display screen is still blurred because of physical diffraction.
Step S304, removing image fogging from the image to be processed through an image processing model to obtain a defogged target image; the image processing model is trained according to pixel arrangement information of the target display screen.
The target display screen may include a plurality of light emitting units, each of which includes a plurality of light emitting pixels (corresponding to sub-pixels); each light emitting unit may include three sub-pixels of R (red), G (green), and B (blue); of course, the light emitting units may also be in other forms, such as four sub-pixels of R (red), G (green), B (blue), and W (white), and the number and types of sub-pixels included in each light emitting unit are not limited in the embodiment of the present invention.
The arrangement of the sub-pixels in at least two light-emitting units of the target display screen may be non-repetitive; that is, the arrangement of the sub-pixels in at least one light-emitting unit differs from that in the other light-emitting units, so that the overall arrangement is disordered and non-uniform. For example, the non-repetitiveness may come from different shape or size parameters of the sub-pixels, from different position parameters, or from different set attitudes. In some embodiments, the sub-pixels in the target display screen may instead be arranged repetitively, that is, the sub-pixels of the light-emitting units are arranged identically or according to a certain rule.
In some embodiments, the target display screen may further include light-transmitting portions disposed corresponding to the sub-pixels; for example, a light-transmitting portion is formed in a non-sub-pixel area of the target display screen, and at least two light-transmitting portions are arranged non-repetitively, in a manner similar to the non-repetitive arrangement of the sub-pixels, which is not described again here.
The image processing model may be a neural network model such as LeNet, R-CNN (Region-CNN), or ResNet, or another deep learning model. It can be trained according to the pixel arrangement information of the target display screen because the pixel arrangement structure of the target display screen is the root cause of the physical diffraction that fogs images captured through it. The fogging can therefore be simulated from the pixel arrangement information, and a large number of images as captured by the camera device through the target display screen can be synthesized from the simulated fogging and different standard images (that is, clear images). The image processing model is trained on these synthesized images and the standard images until its outputs are close to or identical with the standard images, yielding the trained image processing model. In a specific implementation, for different pixel arrangement information of the target display screen, the simulated fogging may be the same or different.
After the image processing model is obtained, when an image to be processed containing image fogging is received, the fogging is removed through the image processing model to obtain a defogged target image. Fig. 4 is a schematic diagram of the defogged target image obtained for the image to be processed shown in fig. 1; compared with fig. 1, the image in fig. 4 is obviously improved in definition and image quality, and may also be called a repaired image or the target image of the image shown in fig. 1.
The present invention provides an image processing method. First, a camera device in an electronic device captures an image to be processed through a target display screen; the image to be processed contains the image fogging produced when the camera device captures images through the target display screen. Then the fogging is removed through a pre-trained image processing model to obtain a defogged image; the image processing model is trained according to pixel arrangement information of the target display screen. By removing, through the image processing model, the fogging in the image captured by the camera device through the target display screen, the method improves image quality and thereby the user experience.
Embodiment III:
the embodiment of the present invention further provides another image processing method, implemented on the basis of the method described in the foregoing embodiment. This method focuses on the specific process of training the image processing model before the image to be processed is acquired (implemented by steps S502-S506 below); as shown in fig. 5, the method includes the following specific steps:
step S502, according to the pixel arrangement information of the target display screen, generating simulation atomization information.
Because the image fogging produced when the camera device captures an image through the target display screen is caused by the pixel arrangement structure of the target display screen (the pixel arrangement structure can be understood as the root cause of the physical diffraction produced during capture), simulated fogging information matching the fogging produced when the camera device captures images through the target display screen can be generated from the pixel arrangement information acquired in advance. The simulated fogging information includes the diffraction fringes and blur information caused by physical diffraction.
The target display screen may include a plurality of light-emitting units, each including a plurality of sub-pixels. In practical applications, the light-emitting units may be regularly arranged in a matrix, honeycomb, delta, or similar pattern. This arrangement is the same as in existing conventional display screens (that is, display screens designed without considering a camera placed below them), which facilitates production and manufacture of the display screen and avoids possible technical difficulties.
In a specific implementation, the pixel arrangement information of the target display screen includes arrangement information of each sub-pixel in the target display screen. In some embodiments, step S502 may be implemented through the following steps 10-11:
step 10, determining a sub-pixel distribution map of the target display screen according to the arrangement information of each sub-pixel; in the sub-pixel distribution map, a first pixel value identifies the sub-pixels and a second pixel value identifies the areas of the target display screen other than the sub-pixels.
In some embodiments, the sub-pixel distribution map of the target display screen may be determined based on the arrangement information of the sub-pixels in each light-emitting unit. For example, the arrangement information may include the shape, size, rotation angle, position, etc. of each sub-pixel, and/or of each light-transmitting portion of the target display screen. In some embodiments, the sub-pixel distribution map is obtained by converting the sub-pixel arrangement information into an image that encodes the arrangement of each sub-pixel in each light-emitting unit.
In another embodiment, the pixel arrangement information of the target display screen and the sub-pixel distribution map may be obtained from an actual structural image of the target display screen captured in advance. By way of example, the actual structural image may show the actual structure of the light emitting units, the light-transmitting portions, and other components in the back panel of the target display screen, captured by a camera with the aid of a microscope; from this image, the shape, size, rotation angle, and position of each sub-pixel in the light emitting units can be determined, and the sub-pixel distribution map derived accordingly. For example, the captured actual structural image may also be subjected to image processing to obtain the sub-pixel distribution map.
In particular implementations, when the sub-pixels in the target display screen are non-repeatedly arranged, the actual structural image is typically an image of the entire display screen captured by a camera with the aid of a microscope; when the sub-pixels are repeatedly arranged, an image of only a portion of the display screen suffices.
Specifically, in some embodiments, the sub-pixels and non-sub-pixel regions of the target display screen may be set to different values to obtain the sub-pixel distribution map; for example, sub-pixels are set to a first pixel value and non-sub-pixel regions to a second pixel value. The first and second pixel values are different and may be set according to user requirements. For example, the first pixel value may be set to 255, i.e., the regions corresponding to the sub-pixels are white, as indicated by the white dots in the sub-pixel distribution map shown in fig. 6; the second pixel value may be set to 0, i.e., the image regions outside the sub-pixels are black, as indicated by the black area in fig. 6.
In a specific implementation, the arrangement information of each sub-pixel in the target display screen may include at least one of the position, shape, rotation angle, and size of each sub-pixel. In step 10, the sub-pixels may be identified with the first pixel value, and the regions other than the sub-pixels with the second pixel value, as follows: for each sub-pixel, a first image area corresponding to the sub-pixel is determined according to the position, shape, rotation angle, and size of the sub-pixel, and the first image area is identified with the first pixel value; the regions other than the first image areas corresponding to the sub-pixels are identified with the second pixel value.
The first image area is the image area in the sub-pixel distribution map that matches the position, shape, rotation angle, and size of the sub-pixel; for example, the white dots in fig. 6 illustrate the case where the sub-pixels are circular.
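As an illustration of step 10, the binary sub-pixel distribution map can be rasterized directly from the arrangement information. The sketch below, in Python with NumPy, assumes circular sub-pixels laid out on a square grid; the function name and the pitch/radius layout are illustrative stand-ins for the real panel structure, not taken from this description.

```python
import numpy as np

def subpixel_map(height, width, pitch, radius, first_value=255, second_value=0):
    """Rasterize a binary sub-pixel distribution map: circular sub-pixels of
    `radius` pixels on a square grid with `pitch` spacing; sub-pixel regions
    take the first pixel value, everything else the second."""
    img = np.full((height, width), second_value, dtype=np.uint8)
    yy, xx = np.mgrid[0:height, 0:width]
    for cy in range(pitch // 2, height, pitch):
        for cx in range(pitch // 2, width, pitch):
            inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
            img[inside] = first_value
    return img

m = subpixel_map(64, 64, pitch=16, radius=3)
```

For other sub-pixel shapes or rotation angles, only the `inside` test changes; the identification with the first and second pixel values stays the same.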
Step 11, generating simulated atomization information according to the sub-pixel distribution map of the target display screen.
Based on the sub-pixel distribution map of the target display screen, simulated atomization information corresponding to the image atomization produced when the image capturing device captures images through the target display screen can be simulated; the simulated atomization information includes all the features of the image atomization. In a specific implementation, the above step 11 may be implemented as follows: performing a Fourier transform on the sub-pixel distribution map of the target display screen to obtain the simulated atomization information, where the simulated atomization information includes diffraction fringe information.
The simulated atomization information may include diffraction fringe information, which may be the result of Fourier transforming the sub-pixel distribution map, as shown in fig. 7, a schematic diagram of diffraction fringe information. The diffraction fringe information may attenuate evenly or unevenly from the center outward, corresponding to the evenly or unevenly distributed diffraction fringes and image blur that appear when the image capturing device captures images through the target display screen.
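The Fourier-transform step can be sketched as follows. Under the Fraunhofer approximation, the far-field diffraction intensity is proportional to the squared magnitude of the Fourier transform of the aperture, and the sub-pixel distribution map plays the role of the aperture here; this is a minimal illustration, with the function name and the toy aperture as assumptions for demonstration.

```python
import numpy as np

def diffraction_psf(subpixel_map):
    """Fraunhofer approximation: the PSF intensity is proportional to the
    squared magnitude of the Fourier transform of the aperture (here, the
    sub-pixel distribution map), normalized to unit energy."""
    spectrum = np.fft.fftshift(np.fft.fft2(subpixel_map.astype(np.float64)))
    psf = np.abs(spectrum) ** 2
    return psf / psf.sum()

# toy aperture with a few bright sub-pixel dots on a dark background
toy = np.zeros((64, 64))
toy[10, 20] = toy[30, 33] = toy[50, 41] = 255.0
psf = diffraction_psf(toy)
```

The bright central peak with fringes attenuating outward corresponds to the diffraction fringe information described above.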
Step S504, fusing the simulated atomization information into a standard image to obtain a sample image; the standard image is acquired by an optical camera.
The standard image may be an image captured by the optical camera without the target display screen in front of it (i.e., not through a screen), and is therefore a clear, high-definition image. The optical camera may be the same as or different from the image capturing device that collects the image to be processed, and may be, for example, a rear camera of a mobile phone or tablet, or the camera of a video camera or still camera. Since the standard image is captured directly by the optical camera, the target display screen has no adverse effect on it; the standard image is thus a high-quality image with good definition.
The sample image is obtained by fusing the simulated atomization information corresponding to the target display screen into the standard image; it can be understood as simulating, from the standard image and the simulated atomization information, the image that the optical camera would have captured through the target display screen (i.e., as an under-screen camera device). The sample image therefore contains the simulated atomization information.
The sample image and the standard image obtained in this way have a high degree of matching and high quality, which benefits model training. At the same time, this approach avoids the deviations in image content, shooting angle, and the like that would arise if the sample image were captured by an under-screen camera device and the standard image by an on-screen camera device; such deviations would otherwise degrade the training of the image processing model and the quality of its de-atomized output.
In a specific implementation, the above step S504 may be implemented as follows: performing convolution processing on the simulated atomization information and the standard image to obtain the sample image. The convolution result can be taken as a simulation of the image that the optical camera would acquire through the target display screen.
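The convolution fusion of step S504 can be sketched as a per-channel circular convolution implemented in the frequency domain. This is an illustrative simplification (a real pipeline might use padded linear convolution instead); the function name and the delta-PSF sanity check are assumptions.

```python
import numpy as np

def fuse(standard_image, psf):
    """Simulate an under-screen capture: circularly convolve each color
    channel of a clean image with the diffraction PSF via the FFT."""
    img = standard_image.astype(np.float64)
    # move the PSF center to the origin so the convolution does not shift the image
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        out[..., c] = np.real(np.fft.ifft2(np.fft.fft2(img[..., c]) * otf))
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

# sanity check: a delta PSF (all energy at the center) leaves the image unchanged
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
delta_psf = np.zeros((8, 8))
delta_psf[4, 4] = 1.0
sample = fuse(clean, delta_psf)
```

With a real diffraction PSF in place of the delta, `sample` would exhibit the simulated fringes and blur while `clean` remains the paired standard image.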
Step S506, training the initial model according to the standard image and the sample image to obtain an image processing model.
The initial model may be a neural network model such as LeNet, R-CNN, or ResNet, or another deep learning model. During training, the sample image serves as the input data of the initial model, and the standard image as the target data. The sample image is input into the initial model to obtain an output result; the output result is compared with the standard image, and when they are sufficiently close or consistent, the initial model at that point is determined to be the image processing model. If the output result still differs significantly from the standard image, standard images and corresponding sample images continue to be selected to train the initial model until the image processing model is obtained.
In specific implementation, the step S506 may be implemented by the following steps 20-22:
Step 20, inputting a sample image into the initial model to obtain an output result; the output result is the processed sample image output by the initial model.
In a specific implementation, the initial model may remove the simulated atomization information (including the diffraction fringes and image blur) from the sample image, obtaining an image from which the simulated atomization information has been removed; this image is the output result.
In some embodiments, the initial model may detect brightness values of pixels in the sample image; determining a spot area containing a point light source in the sample image based on the detected brightness value; and removing diffraction fringe information from the light spot area, and removing a blurring phenomenon from the non-light spot area to obtain a restored image corresponding to the sample image.
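A crude version of the brightness-based spot detection described above might look like the following; the luminance weights are the standard Rec. 601 values, and the threshold of 240 is an assumed cutoff, not specified here.

```python
import numpy as np

def spot_mask(image, threshold=240):
    """Flag likely point-light-source regions: pixels whose Rec. 601 luminance
    meets or exceeds `threshold` (an assumed cutoff) form the spot area."""
    luma = (image[..., 0] * 0.299 + image[..., 1] * 0.587
            + image[..., 2] * 0.114)
    return luma >= threshold

frame = np.full((4, 4, 3), 20, dtype=np.uint8)   # dim background
frame[1, 2] = 255                                # one saturated point light
mask = spot_mask(frame)
```

Diffraction-fringe removal would then be applied inside `mask`, and deblurring outside it, to produce the restored image.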
Step 21, determining a loss value according to the output result and the standard image. Specifically, the similarity between the output result and the standard image may be computed to obtain the loss value, for example, via a cosine similarity algorithm, a histogram comparison, or a structural similarity (SSIM) metric; alternatively, the difference between the output result and the standard image may be computed and used as the loss value. In practice, there are many methods for computing a loss value from the output result and the standard image, which are not enumerated here. Generally, the higher the similarity or the smaller the difference, the smaller the loss value; the lower the similarity or the larger the difference, the larger the loss value.
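Two of the loss choices mentioned above, a mean absolute difference and a cosine-similarity-based loss, can be sketched as follows; the function names are illustrative.

```python
import numpy as np

def l1_loss(output, target):
    """Mean absolute difference; 0 when the two images are identical."""
    return float(np.mean(np.abs(output.astype(np.float64)
                                - target.astype(np.float64))))

def cosine_loss(output, target):
    """1 minus the cosine similarity of the flattened images;
    approaches 0 as the images become proportional."""
    a = output.astype(np.float64).ravel()
    b = target.astype(np.float64).ravel()
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Both behave as described: identical output and standard images give a loss of 0, and the loss grows as the images diverge.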
And step 22, training an initial model based on the loss value to obtain an image processing model.
In a specific implementation, the parameters of the current initial model may be adjusted according to the loss value, and the next sample image is then input into the adjusted model; this continues until the loss value converges or the number of iterations reaches a preset value, at which point the resulting model serves as the image processing model. The preset value may be set according to user needs, for example, 200 iterations.
Training the image processing model requires a large number of high-quality, diverse standard images and sample images as training data. Therefore, in some embodiments, a large number of standard images may be collected in advance and convolved with the simulated atomization information to obtain a correspondingly large number of sample images; the sample images and standard images are stored in pairs in a preset training set, from which pairs are drawn when training the model.
Step S508, if an image to be processed, collected by the image capturing device through the target display screen, is acquired, the image to be processed is input into the image processing model to obtain the de-atomized target image.
In some embodiments, the image processing model is used to remove the image atomization; this can also be called image restoration. That is, the de-atomized target image output by the image processing model is a restored version of the collected image to be processed, and is a clear, high-quality image.
After the target image is obtained, the electronic device may display the target image, may perform rendering processing on the target image, and may perform editing processing on the target image.
The sample images and standard images obtained by the above image processing method are of high quality and diversity, which benefits the training of the image processing model. In addition, the image atomization produced when the image capturing device collects images through the target display screen can be simulated from the pixel arrangement information of the target display screen, and the image processing model learns this atomization continuously during training. As a result, the trained model removes image atomization from the image to be processed more effectively, improving the restoration and de-atomization performance of the image processing model in practical applications, and effectively improving the definition and picture quality of images collected through the target display screen.
Embodiment Four:
Corresponding to the above image processing method embodiments, an embodiment of the present invention provides an image processing apparatus, as shown in fig. 8, including:
an image acquisition module 80 for acquiring an image to be processed; the image to be processed is acquired by the camera device through the target display screen.
The image processing module 81 is configured to remove image fogging in an image to be processed through an image processing model, and obtain a target image from which the fogging is removed; the image processing model is obtained through training according to pixel arrangement information of the target display screen.
The image processing device first collects an image to be processed through the target display screen by means of the image capturing device in the electronic equipment; the image to be processed contains the image atomization produced when the image capturing device collects images through the target display screen. The device then removes the image atomization in the image to be processed through the image processing model to obtain the de-atomized target image, where the image processing model is trained according to the pixel arrangement information of the target display screen. By removing, through the image processing model, the image atomization in the image collected through the target display screen, the device improves the picture quality of the image and the user's experience.
Further, the device includes a model training module, which includes: a simulated atomization determining unit, configured to generate simulated atomization information according to the pixel arrangement information of the target display screen; a sample image determining unit, configured to fuse the simulated atomization information into a standard image to obtain a sample image, the standard image being acquired by an optical camera; and a training unit, configured to train an initial model according to the standard image and the sample image to obtain the image processing model.
Specifically, the apparatus further includes a pixel arrangement information acquisition module configured to: acquiring a design drawing of a target display screen, and determining pixel arrangement information of the target display screen according to the design drawing; or determining pixel arrangement information of the target display screen according to the actual structural image of the target display screen shot in advance.
Specifically, the pixel arrangement information of the target display screen includes: arrangement information of each sub-pixel in the target display screen; the simulated atomization determining unit is used for generating a sub-pixel distribution diagram of the target display screen according to arrangement information of each sub-pixel in the target display screen; in the sub-pixel distribution diagram, a first pixel value is adopted to identify sub-pixels, and a second pixel value is adopted to identify areas except the sub-pixels in the target display screen; and generating simulation atomization information according to the sub-pixel distribution diagram of the target display screen.
Further, the arrangement information of each sub-pixel in the target display screen includes at least one of a position, a shape, a rotation angle and a size of each sub-pixel; the above-mentioned analog atomization determining unit is used for: for each sub-pixel, determining a first image area corresponding to the sub-pixel according to the position, the shape, the rotation angle or the size of the sub-pixel (namely according to the arrangement information of the sub-pixel), and identifying the first image area by adopting a first pixel value; and marking the areas except the first image area corresponding to each sub-pixel by adopting the second pixel value.
In a specific implementation, the analog atomization determining unit is further configured to perform fourier transform on a sub-pixel distribution diagram of the target display screen to obtain analog atomization information; the simulated atomization information includes diffraction fringe information.
Further, the sample image determining unit is configured to perform convolution processing on the simulated fog information and the standard image to obtain a sample image.
Further, the training unit is configured to: inputting the sample image into the initial model to obtain an output result; determining a loss value according to the output result and the standard image; and training an initial model based on the loss value to obtain an image processing model.
In a specific implementation, the device further includes an image operation module, configured to: display the target image; or render the sub-pixels in the target display screen according to the target image.
The image processing apparatus provided in this embodiment has the same implementation principle and technical effects as the image processing method of the foregoing embodiments; for brevity, reference may be made to the corresponding content of those embodiments for matters not mentioned here.
Embodiment Five:
Based on the foregoing embodiments, this embodiment provides an image processing system including a processing device and a storage device; the storage device stores a computer program which, when run by the processing device, performs the image processing method described above.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing method embodiment for the specific working process of the above-described system, which is not described herein again.
Based on the foregoing embodiments, this embodiment provides another electronic device, which includes not only the components shown in fig. 2 but also a target display screen, an image capturing device disposed below the target display screen, and the above image processing system. The image capturing device in the electronic device collects an image to be processed through the target display screen, and the image processing system processes the image to be processed to obtain the de-atomized target image; for details of the implementation, reference may be made to the embodiments of the image processing method above, which are not repeated here.
Further, the present embodiment also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processing apparatus, performs the steps of the above-described image processing method.
An embodiment of the present invention further provides a computer program product for the image processing method, apparatus, system, and electronic device described above, including a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the method of the foregoing method embodiments, and specific implementation may refer to those embodiments and is not repeated here.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the above examples are only specific embodiments of the present invention, intended to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing examples, those skilled in the art should understand that anyone familiar with the art may still modify or readily conceive of changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions of some of their technical features, within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method, the method comprising:
acquiring an image to be processed; the image to be processed is acquired by the camera device through the target display screen;
removing image atomization in the image to be processed through an image processing model to obtain a target image from which the atomization is removed; the image processing model is obtained through training according to pixel arrangement information of the target display screen;
The image processing model is obtained through training in the following mode:
generating simulation atomization information according to the pixel arrangement information of the target display screen;
fusing the simulated atomization information into a standard image to obtain a sample image; the standard image is acquired by an optical camera;
training an initial model according to the standard image and the sample image to obtain the image processing model;
the step of training an initial model according to the standard image and the sample image to obtain the image processing model comprises the following steps:
inputting the sample image into the initial model to obtain an output result;
determining a loss value according to the output result and the standard image;
and training the initial model based on the loss value to obtain an image processing model.
2. The method of claim 1, wherein the pixel arrangement information of the target display screen comprises: arrangement information of each sub-pixel in the target display screen;
the step of generating simulated atomization information according to the pixel arrangement information of the target display screen comprises the following steps:
determining a sub-pixel distribution diagram of the target display screen according to arrangement information of each sub-pixel in the target display screen; the sub-pixel distribution diagram is characterized in that a first pixel value is adopted to identify the sub-pixel, and a second pixel value is adopted to identify the region except the sub-pixel in the target display screen;
And generating the simulated atomization information according to the sub-pixel distribution diagram of the target display screen.
3. The method of claim 2, wherein the arrangement information of each sub-pixel in the target display screen includes at least one of a position, a shape, a rotation angle, and a size of each sub-pixel;
the step of identifying the sub-pixel with a first pixel value and identifying the region of the target display screen other than the sub-pixel with a second pixel value includes:
for each sub-pixel, determining a first image area corresponding to the sub-pixel according to the arrangement information of the sub-pixel, and identifying the first image area by adopting a first pixel value;
and identifying the areas except the first image area corresponding to each sub-pixel by adopting the second pixel value.
4. The method of claim 2, wherein the step of generating the simulated fog information from the sub-pixel profile of the target display screen comprises:
performing Fourier transform on a sub-pixel distribution diagram of the target display screen to obtain the simulated atomization information; the simulated atomization information includes diffraction fringe information.
5. The method of claim 1, wherein the step of fusing the simulated fog information into a standard image to obtain a sample image comprises:
and carrying out convolution processing on the simulated atomization information and the standard image to obtain the sample image.
6. The method according to claim 1, wherein the method further comprises:
displaying the target image;
or rendering the sub-pixels in the target display screen according to the target image.
7. An image processing apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring an image to be processed; the image to be processed is acquired by the camera device through the target display screen;
the image processing module is used for removing image atomization in the image to be processed through the image processing model to obtain a target image from which the atomization is removed; the image processing model is obtained through training according to pixel arrangement information of the target display screen;
the image processing model is obtained through training in the following mode:
generating simulation atomization information according to the pixel arrangement information of the target display screen;
Fusing the simulated atomization information into a standard image to obtain a sample image; the standard image is acquired by an optical camera;
training an initial model according to the standard image and the sample image to obtain the image processing model;
training an initial model according to the standard image and the sample image to obtain the image processing model, wherein the training comprises the following steps:
inputting the sample image into the initial model to obtain an output result;
determining a loss value according to the output result and the standard image;
and training the initial model based on the loss value to obtain an image processing model.
8. An image processing system, the system comprising: a processing device and a storage device;
the storage means has stored thereon a computer program which, when run by the processing device, performs the image processing method of any of claims 1 to 6.
9. An electronic device comprising a target display screen and an imaging device disposed below the target display screen, further comprising the image processing system of claim 8.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when run by a processing device, performs the steps of the image processing method according to any one of claims 1 to 6.
CN202010226538.XA 2020-03-26 2020-03-26 Image processing method, device and system and electronic equipment Active CN111311523B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010226538.XA CN111311523B (en) 2020-03-26 2020-03-26 Image processing method, device and system and electronic equipment
PCT/CN2020/119611 WO2021189807A1 (en) 2020-03-26 2020-09-30 Image processing method, apparatus and system, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010226538.XA CN111311523B (en) 2020-03-26 2020-03-26 Image processing method, device and system and electronic equipment

Publications (2)

Publication Number Publication Date
CN111311523A CN111311523A (en) 2020-06-19
CN111311523B true CN111311523B (en) 2023-09-05

Family

ID=71160832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010226538.XA Active CN111311523B (en) 2020-03-26 2020-03-26 Image processing method, device and system and electronic equipment

Country Status (2)

Country Link
CN (1) CN111311523B (en)
WO (1) WO2021189807A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311523B (en) * 2020-03-26 2023-09-05 北京迈格威科技有限公司 Image processing method, device and system and electronic equipment
CN111862002A (en) * 2020-06-29 2020-10-30 维沃移动通信有限公司 Screen water mist removing method and device, electronic equipment and readable storage medium
CN111951192A (en) * 2020-08-18 2020-11-17 义乌清越光电科技有限公司 Shot image processing method and shooting equipment
CN111866402B (en) * 2020-09-07 2021-10-29 三一重工股份有限公司 Parameter adjusting method and device, electronic equipment and storage medium
CN112887598A (en) * 2021-01-25 2021-06-01 维沃移动通信有限公司 Image processing method and device, shooting support, electronic equipment and readable storage medium
CN115456459B (en) * 2022-09-30 2023-05-05 浙江中泽精密科技有限公司 Processing technology method and system for cover plate piece of new energy power battery
CN116200258B (en) * 2023-04-28 2023-07-07 中国医学科学院北京协和医院 Method, device and equipment for eliminating mist on inner wall of culture dish cover
CN117058038B (en) * 2023-08-28 2024-04-30 北京航空航天大学 Diffraction blurred image restoration method based on even convolution deep learning

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05135163A (en) * 1991-04-19 1993-06-01 Ezel Inc Image processing method
CN1217668A (en) * 1997-02-18 1999-05-26 世雅企业股份有限公司 Device and method for image processing
JP2000115511A (en) * 1998-09-30 2000-04-21 Minolta Co Ltd Picture processor
CN104702875A (en) * 2013-12-18 2015-06-10 杭州海康威视数字技术股份有限公司 Liquid crystal display equipment and video image displaying method
CN204889399U (en) * 2015-08-18 2015-12-23 蒋彬 Intelligence body -building mirror
CN107847214A (en) * 2015-08-04 2018-03-27 深圳迈瑞生物医疗电子股份有限公司 Three-D ultrasonic fluid imaging method and system
CN107948498A (en) * 2017-10-30 2018-04-20 维沃移动通信有限公司 One kind eliminates camera Morie fringe method and mobile terminal
CN108153502A (en) * 2017-12-22 2018-06-12 长江勘测规划设计研究有限责任公司 Hand-held augmented reality display methods and device based on transparent screen
CN109035421A (en) * 2018-08-29 2018-12-18 百度在线网络技术(北京)有限公司 Image processing method, device, equipment and storage medium
CN110599965A (en) * 2019-08-09 2019-12-20 深圳市美芒科技有限公司 Image display method, image display device, terminal device, and readable storage medium
CN110855887A (en) * 2019-11-18 2020-02-28 深圳传音控股股份有限公司 Mirror-based image processing method, terminal and computer-readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10410037B2 (en) * 2015-06-18 2019-09-10 Shenzhen GOODIX Technology Co., Ltd. Under-screen optical sensor module for on-screen fingerprint sensing implementing imaging lens, extra illumination or optical collimator array
CN109979382B (en) * 2019-04-23 2021-02-23 Tsinghua University Screen transmission spectrum-based color correction method and system for under-screen imaging system
CN110460825A (en) * 2019-08-02 2019-11-15 Wuhan China Star Optoelectronics Semiconductor Display Technology Co Ltd Imaging compensation device, imaging compensation method and application thereof
CN110730299B (en) * 2019-09-29 2021-08-17 Shenzhen Coolpad Technology Co Ltd Method and device for under-screen camera imaging, storage medium and electronic equipment
CN111311523B (en) * 2020-03-26 2023-09-05 Beijing Megvii Technology Co Ltd Image processing method, device and system and electronic equipment

Also Published As

Publication number Publication date
CN111311523A (en) 2020-06-19
WO2021189807A1 (en) 2021-09-30

Similar Documents

Publication Publication Date Title
CN111311523B (en) Image processing method, device and system and electronic equipment
US11055827B2 (en) Image processing apparatus and method
Abdelhamed et al. A high-quality denoising dataset for smartphone cameras
Ignatov et al. DSLR-quality photos on mobile devices with deep convolutional networks
US10708525B2 (en) Systems and methods for processing low light images
US20200051260A1 (en) Techniques for controlled generation of training data for machine learning enabled image enhancement
US11315274B2 (en) Depth determination for images captured with a moving camera and representing moving features
US20170256036A1 (en) Automatic microlens array artifact correction for light-field images
US10970821B2 (en) Image blurring methods and apparatuses, storage media, and electronic devices
KR102606208B1 (en) Learning-based lens flare removal
CN109803172B (en) Live video processing method and device and electronic equipment
US20230230204A1 (en) Image processing method and apparatus, and method and apparatus for training image processing model
Kwon et al. Controllable image restoration for under-display camera in smartphones
US20220375042A1 (en) Defocus Blur Removal and Depth Estimation Using Dual-Pixel Image Data
CN111951192A (en) Shot image processing method and shooting equipment
TWI672639B (en) Object recognition system and method using simulated object images
WO2023001110A1 (en) Neural network training method and apparatus, and electronic device
Toet et al. Efficient contrast enhancement through log-power histogram modification
CN113160082B (en) Vignetting correction method, system, device and medium based on reference image
WO2020224423A1 (en) Terminal device and zooming processing method and apparatus for image thereof
Tworski et al. DR2S: Deep regression with region selection for camera quality evaluation
Yang et al. An end‐to‐end perceptual enhancement method for UHD portrait images
CN116506732B (en) Image snapshot anti-shake method, device and system and computer equipment
Singh et al. A Comprehensive Study: Image Forensic Analysis Traditional to Cognitive Image Processing
US11995800B2 (en) Artificial intelligence techniques for image enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant