CN115334234B - Method and device for taking photo supplementary image information in dim light environment

Method and device for taking photo supplementary image information in dim light environment

Info

Publication number
CN115334234B
Authority
CN
China
Prior art keywords
image
scene
information
image information
action information
Prior art date
Legal status
Active
Application number
CN202210766024.2A
Other languages
Chinese (zh)
Other versions
CN115334234A (en)
Inventor
王晓雷
王晓博
Current Assignee
Beijing Ontim Technology Co Ltd
Original Assignee
Beijing Ontim Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Ontim Technology Co Ltd filed Critical Beijing Ontim Technology Co Ltd
Priority to CN202210766024.2A
Publication of CN115334234A
Application granted
Publication of CN115334234B


Landscapes

  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a method and a device for supplementing image information in photographs taken in a dim light environment. The method comprises the following steps: acquiring an original RAW image shot by a camera; preprocessing the RAW image and identifying useful live-action information in the RAW image; performing separation processing and enhancement processing on the useful live-action information; generating background image information according to the useful live-action information; and synthesizing the enhanced useful live-action information with the background image information to obtain a night-scene image. The method supplements the color and texture of night-scene images in real time, so that a user can capture at night an effect similar to a daytime highlight scene, which improves the user experience.

Description

Method and device for taking photo supplementary image information in dim light environment
Technical Field
The invention relates to the technical field of image processing, and in particular to a method and a device for supplementing image information in photographs taken in a dim light environment.
Background
Mobile terminals have entered the 5G era, and terminal innovations continue to be explored and iterated. As technology develops, mobile phones offer stronger processing performance and better photographing effects, yet photographing in dim light remains a problem that urgently needs to be solved. The processing performance of the ISP built into current mobile phones is limited and largely monopolized by the terminal platform; a differentiated photographing style has almost become a standard feature of mobile phones, while dim-light photographs on mobile platforms still suffer from high noise, poor definition, dark colors and similar problems, resulting in a poor user experience.
Existing dark-environment photographing methods controlled by the in-phone ISP optimize and compensate the effect based on the captured RAW image. When little light enters the lens in a dark environment, the RAW data output by the image sensor is mixed with noise; when a conventional ISP restores the image, this noise is forcibly removed through a specific filter, so part of the image information is lost and the photographing effect is degraded.
Disclosure of Invention
In view of the above, the invention aims to provide a method and a device for supplementing image information in photographs taken in a dim light environment, which can supplement the color and texture of night-scene images in real time, so that a user can capture at night an effect similar to a daytime highlight scene, thereby improving the user experience.
In a first aspect, the present invention provides a method for taking photo supplemental image information in a dim light environment, comprising the steps of:
acquiring an original RAW image shot by a camera;
preprocessing the RAW image, and identifying useful live-action information in the RAW image;
performing separation processing and enhancement processing on the useful live-action information;
generating background image information according to the useful live-action information;
and synthesizing the enhanced useful live-action information with the background image information to obtain a night-scene image.
Further, the separation processing of the useful live-action information comprises:
and performing edge extraction on the useful live-action information by using an edge detection algorithm to obtain separated useful live-action information.
Further, the enhancement processing is performed on the useful live-action information, including:
and converting the useful real-scene information into a daytime highlight scene style by using a style conversion network.
Further, generating background image information according to the useful live-action information, including:
obtaining the scene of the image according to the type of the useful live-action information;
and generating background image information corresponding to the scene according to the scene of the image.
Further, generating background image information corresponding to a scene of the image according to the scene, including:
and performing image information supplementation, including color supplementation and texture supplementation, by using a picture color and texture compensation network trained on the corresponding scene, so as to generate the background image information.
Further, generating background image information corresponding to a scene of the image according to the scene, including:
determining a filter corresponding to the background image according to the scene of the image;
and superposing the filter on the original image to generate background image information.
Further, generating background image information corresponding to a scene of the image according to the scene, including:
matching the mapping materials in the material library according to the scene of the image;
and superposing the mapping materials on the original image to generate background image information.
In a second aspect, the present invention also provides an apparatus for capturing supplemental image information of a photograph in a dim light environment, including:
the image acquisition module is used for acquiring an original RAW image shot by the camera;
the useful live-action information identification module is used for preprocessing the RAW image and identifying useful live-action information in the RAW image;
the useful live-action information enhancement module is used for carrying out separation processing and enhancement processing on the useful live-action information;
the background image information generating module is used for generating background image information according to the useful live-action information;
and the synthesis module is used for synthesizing the useful live-action information subjected to the enhancement processing and the background image information to obtain a night-scene image.
In a third aspect, the present invention also provides an electronic device, including:
at least one memory and at least one processor;
the memory is used for storing one or more programs;
the one or more programs, when executed by the at least one processor, cause the at least one processor to implement the steps of a method of taking photo supplemental image information in a dim light environment as described in any of the first aspects of the present invention.
In a fourth aspect, the present invention also provides a computer-readable storage medium, characterized in that:
the computer readable storage medium stores a computer program which, when executed by a processor, implements the steps of a method of taking photo supplemental image information in a dim light environment according to any of the first aspect of the invention.
The invention provides a method and a device for supplementing image information in photographs taken in a dim light environment. The basic principle is to use an AI neural network to detect, identify and classify the current dim-light scene, and then to use a second neural network for picture color compensation to supplement the colors of the picture according to the scene. Put simply, the method is executed in two steps: the scene is classified first, and a picture color and texture compensation network trained for each kind of scene then supplements the image information, including color compensation and texture compensation. The method and device can supplement the color and texture of night-scene images in real time, so that a user can capture at night an effect similar to a daytime highlight scene, improving the user experience. Based on a specific algorithm or an AI model, the night-scene photo is converted into a photo combining virtual and real content, and the algorithm realizes dim-light image information compensation and virtual-scene generation for the user in different dim-light scenes. This addresses the pain points that photos taken at night are dark, unclear and blurred, have high noise, and that regions not illuminated by any light appear as one dark patch. By combining virtual and real image content, the real-scene information the user needs is captured, while the current scene can also be decorated with virtual imagery.
For a better understanding and implementation, the present invention is described in detail below with reference to the drawings.
Drawings
FIG. 1 is a flow chart of a method for taking supplemental image information of a photograph in a dim light environment according to the present invention;
fig. 2 is a schematic structural diagram of a device for capturing supplemental image information of a photo in a dark environment according to the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the following detailed description of the embodiments of the present application will be given with reference to the accompanying drawings.
It should be understood that the described embodiments are merely some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of the embodiments of the present application.
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the embodiments of the application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims. In the description of this application, it should be understood that the terms "first," "second," "third," and the like are used merely to distinguish between similar objects and are not necessarily used to describe a particular order or sequence, nor should they be construed to indicate or imply relative importance. The specific meaning of the terms in this application will be understood by those of ordinary skill in the art as the case may be.
Furthermore, in the description of the present application, unless otherwise indicated, "a plurality" means two or more. The term "and/or" describes an association between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean that A exists alone, that A and B exist together, or that B exists alone. The character "/" generally indicates that the associated objects are in an "or" relationship.
In view of the problems in the background art, an embodiment of the present application provides a method for capturing supplemental image information of a photo in a dim light environment. As shown in fig. 1, the method includes the following steps:
s01: and acquiring an original RAW image shot by the camera.
S02: preprocessing the RAW image, and identifying useful live-action information in the RAW image.
Useful live-action information refers to image information in a night-scene photo that is unclear (for example, dark) but still has a visible outline. For instance, a tree in a photo taken at night may be reduced to a simple black silhouette rather than the clearly visible tree it would be in daytime; that outline information of the tree in the night-scene photo is what is referred to here as useful live-action information.
In a preferred embodiment, the useful live-action information can be identified using a U-Net neural network.
The structure of this network resembles the English letter U, hence the name U-Net. It builds on the FCN architecture: starting from the input on the left, a series of convolutional layers (typically five stages) extracts features from the picture, and a classical feature-extraction backbone such as VGG or ResNet can be used here. The right side is then built by upsampling the extracted features starting from the lowest layer: each upsampled feature map has the same shape as the feature map of the corresponding encoder layer, the two are concatenated, a convolutional layer reduces the number of channels, the reduced features are upsampled again, and the operation is repeated. Finally, once the upsampled feature map has the same shape as the original image, a convolutional layer is added to classify each pixel.
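To make the description above concrete, the following is a minimal U-Net sketch in PyTorch. It is an illustration only: the channel widths, the depth (three encoder stages rather than the five mentioned above) and the two-class per-pixel output are assumptions chosen for brevity, not values specified in the patent.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions, as in the classic U-Net encoder/decoder blocks.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, in_ch=3, num_classes=2):
        super().__init__()
        self.enc1 = double_conv(in_ch, 64)
        self.enc2 = double_conv(64, 128)
        self.enc3 = double_conv(128, 256)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = double_conv(256, 128)   # 128 skip channels + 128 upsampled channels
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = double_conv(128, 64)    # 64 skip channels + 64 upsampled channels
        self.head = nn.Conv2d(64, num_classes, 1)   # classifies each pixel

    def forward(self, x):
        e1 = self.enc1(x)                   # left side: feature extraction
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))   # upsample + skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                # per-pixel class logits at the input resolution

mask_logits = MiniUNet()(torch.randn(1, 3, 256, 256))   # shape: [1, 2, 256, 256]
```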
S03: and carrying out separation processing and enhancement processing on the useful live-action information.
Preferably, the method comprises the following substeps:
s031: and performing edge extraction on the useful live-action information by using an edge detection algorithm to obtain separated useful live-action information.
In other examples, the edge extraction may be performed by using a feature extraction neural network algorithm to separate images.
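As a hedged illustration of the edge-based separation in step S031, the sketch below uses OpenCV's Canny detector followed by simple morphology to turn the detected outlines into a mask of the useful live-action regions. The thresholds and kernel sizes are illustrative values, not parameters given in the patent.

```python
import cv2
import numpy as np

def separate_useful_regions(night_bgr):
    gray = cv2.cvtColor(night_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 30, 90)                    # outlines that survive the dark exposure
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.dilate(edges, kernel, iterations=2)     # thicken the outlines
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)      # close gaps into coherent regions
    useful = cv2.bitwise_and(night_bgr, night_bgr, mask=mask)   # keep only the useful live-action pixels
    return useful, mask

# usage: useful, mask = separate_useful_regions(cv2.imread("night.jpg"))
```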
S032: and converting the useful real-scene information into a daytime highlight scene style by using a style conversion network.
In a specific embodiment, the image is processed using a CycleGAN network. The two mirror image GAN networks of the cycleGAN network respectively learn to obtain two mappings, wherein the first mapping G (B) converts a real image realB with low illumination and low quality into a generated image FakeA with high normal illumination and high quality, the second mapping G (A) converts the real image realA with high normal illumination and high quality into the generated image FakeB with low illumination and low quality, and then the FakeB is used as an input of the network, and the mapping G (B) is used for converting the FakeB into an intermediate diagram with high normal illumination and high quality.
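A hedged sketch of applying the G(B) mapping at inference time is given below. It assumes the trained low-light-to-daylight generator has been exported as a TorchScript file; the file name "g_b_night2day.pt" and the packaging are hypothetical, since the patent does not specify the generator architecture or how it is stored.

```python
import torch
import torchvision.transforms.functional as TF
from PIL import Image

def stylize_to_daylight(foreground_path, generator_path="g_b_night2day.pt"):
    g_b = torch.jit.load(generator_path).eval()      # mapping G(B): realB -> FakeA
    img = Image.open(foreground_path).convert("RGB")
    x = TF.to_tensor(img).unsqueeze(0) * 2 - 1       # CycleGAN-style inputs in [-1, 1]
    with torch.no_grad():
        fake_a = g_b(x)                              # daylight-style version of the useful regions
    out = (fake_a.squeeze(0).clamp(-1, 1) + 1) / 2   # back to [0, 1]
    return TF.to_pil_image(out)
```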
S04: and generating background image information according to the useful live-action information.
When a part of areas have no useful image information in a dim light scene shot by a user, the part of areas need AI networks to freely generate image color and texture information with similar styles in the scene according to useful real scene information in the current scene, and virtual images generated by the method supplement the areas of useful image information in the noon of the picture, so that a part of virtual image generation effect is realized, and finally the problem of original image data loss during the dim light environment shooting is solved.
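As a hedged sketch of deciding which areas "have no useful image information", the code below combines a luminance threshold with the inverse of the separation mask from step S031. The threshold value and the morphological clean-up are assumptions for illustration; the patent does not define a concrete criterion.

```python
import cv2
import numpy as np

def regions_needing_generation(night_bgr, useful_mask, dark_thresh=25):
    gray = cv2.cvtColor(night_bgr, cv2.COLOR_BGR2GRAY)
    too_dark = (gray < dark_thresh).astype(np.uint8) * 255   # nearly black pixels
    no_detail = cv2.bitwise_not(useful_mask)                 # outside the useful live-action regions
    need_gen = cv2.bitwise_and(too_dark, no_detail)          # dark AND without a usable outline
    # Open the mask so generation is requested over coherent regions, not isolated speckles.
    need_gen = cv2.morphologyEx(need_gen, cv2.MORPH_OPEN, np.ones((7, 7), np.uint8))
    return need_gen
```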
Preferably, the method comprises the following substeps:
s041: and obtaining the scene of the image according to the type of the useful live-action information.
S042: and generating background image information corresponding to the scene according to the scene of the image.
In a specific embodiment, the image information may be supplemented, including color supplementation and texture supplementation, using a picture color and texture compensation network trained on the corresponding scene, to generate the background image information.
In other examples, the virtual image compensation may also be generated using conventional algorithms. For example, determining a filter corresponding to the style of the background image according to the scene of the image; and superposing the filter on the original image to generate background image information.
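As a hedged sketch of this conventional-filter alternative, the code below blends a per-scene tone and color adjustment over the original image. The scene-to-filter table and the gain/gamma values are invented for illustration only.

```python
import numpy as np

SCENE_FILTERS = {
    # scene name: (BGR channel gains, gamma) -- illustrative values, not from the patent
    "park":       ((0.90, 1.15, 0.95), 0.80),
    "street":     ((0.95, 1.00, 1.20), 0.85),
    "waterfront": ((1.20, 1.05, 0.90), 0.80),
}

def apply_scene_filter(night_bgr, scene, strength=0.6):
    gains, gamma = SCENE_FILTERS.get(scene, ((1.0, 1.0, 1.0), 0.9))
    img = night_bgr.astype(np.float32) / 255.0
    filtered = np.clip((img ** gamma) * np.array(gains), 0.0, 1.0)   # brighten and tint per scene
    out = (1 - strength) * img + strength * filtered                 # superpose the filter on the original
    return (out * 255).astype(np.uint8)
```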
In other examples, a simple mapping (sticker) method may be used: clear and attractive images such as lanterns, leaves, flowers, plants and trees are collected or drawn in advance and stored as small local virtual source images; after the user takes a photo, a simple scene-recognition algorithm finds a similar scene, and the matching small virtual source image is locally attached to the photo to form a picture combining virtual and real content.
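A hedged sketch of this mapping method is given below: a pre-stored RGBA asset matching the recognised scene is alpha-blended onto a chosen region of the photo. The asset path, placement and scale are illustrative assumptions (and the pasted region is assumed to lie inside the image).

```python
import cv2
import numpy as np

def paste_sticker(night_bgr, sticker_rgba_path, top_left=(40, 40), scale=0.5):
    sticker = cv2.imread(sticker_rgba_path, cv2.IMREAD_UNCHANGED)   # BGRA image with an alpha channel
    sticker = cv2.resize(sticker, None, fx=scale, fy=scale)
    h, w = sticker.shape[:2]
    y, x = top_left
    roi = night_bgr[y:y + h, x:x + w].astype(np.float32)
    alpha = sticker[:, :, 3:4].astype(np.float32) / 255.0           # 0 = transparent, 1 = opaque
    blended = alpha * sticker[:, :, :3].astype(np.float32) + (1 - alpha) * roi
    out = night_bgr.copy()
    out[y:y + h, x:x + w] = blended.astype(np.uint8)
    return out

# usage: result = paste_sticker(cv2.imread("night.jpg"), "assets/lantern.png")
```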
S05: and synthesizing the useful real scene information subjected to the enhancement processing and the background image information to obtain a night scene image.
And finally, forming a virtual-real combination image by combining the images generated in the two steps through image splicing, superposition and other synthetic algorithms, and converting the night scene into a virtual-real combination image with super visual experience.
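A hedged sketch of this synthesis step: the enhanced useful live-action regions are superposed on the generated background using the separation mask, with a feathered edge so the seams blend smoothly. The feathering radius is an illustrative choice, not a value from the patent.

```python
import cv2
import numpy as np

def synthesize_night_scene(enhanced_fg_bgr, background_bgr, useful_mask):
    # Feather the binary mask so foreground and background blend at the boundaries.
    soft = cv2.GaussianBlur(useful_mask.astype(np.float32) / 255.0, (21, 21), 0)
    soft = soft[:, :, None]                                 # HxWx1, broadcasts over the BGR channels
    fg = enhanced_fg_bgr.astype(np.float32)
    bg = background_bgr.astype(np.float32)
    composite = soft * fg + (1.0 - soft) * bg               # per-pixel superposition
    return np.clip(composite, 0, 255).astype(np.uint8)
```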
The method for supplementing image information in photographs taken in a dim light environment provided by the invention supplements the color and texture of night-scene images in real time, so that a user can capture at night an effect similar to a daytime highlight scene, improving the user experience.
The embodiment of the application further provides a device for capturing photo supplementary image information in a dim light environment. As shown in fig. 2, the device 400 for capturing photo supplementary image information in a dim light environment includes:
an image acquisition module 401, configured to acquire an original RAW image captured by a camera;
a useful live-action information identifying module 402, configured to preprocess the RAW image, and identify useful live-action information in the RAW image;
the useful live-action information enhancement module 403 is configured to perform separation processing and enhancement processing on the useful live-action information;
a background image information generating module 404, configured to generate background image information according to the useful real scene information;
and the synthesis module 405 is configured to synthesize the enhanced useful live-action information and the background image information to obtain a night-scene image.
Preferably, the useful live-action information enhancement module includes:
and the separation unit is used for carrying out edge extraction on the useful live-action information by using an edge detection algorithm to obtain separated useful live-action information.
Preferably, the useful live-action information enhancement module includes:
and the enhancement unit is used for converting the useful real scene information into a daytime highlight scene style by using a style conversion network.
Preferably, the background image information generating module includes:
the scene identification unit is used for obtaining the scene of the image according to the type of the useful live-action information;
and the background image information generating unit is used for generating background image information corresponding to the scene according to the scene of the image.
Preferably, the background image information generating unit includes:
and the image information supplementing element is used for supplementing the image information, including color supplementation and texture supplementation, by using a picture color and texture compensation network trained on the corresponding scene, to generate background image information.
Preferably, the background image information generating unit includes:
the filter style determining element is used for determining a filter corresponding to the style of the background image according to the scene of the image;
and the filter superposition element is used for superposing the filter on the original image to generate background image information.
Preferably, the background image information generating unit includes:
the mapping material determining element is used for matching mapping materials in a material library according to the scene of the image;
and the mapping material superposition element is used for superposing the mapping material on the original image to generate background image information.
Since the device embodiments essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant points. The apparatus embodiments described above are merely illustrative: components described as separate parts may or may not be physically separate, and components shown as units may or may not be physical units. Those skilled in the art will appreciate that, for convenience and brevity of description, the specific working processes of the systems, apparatuses and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
The embodiment of the application also provides electronic equipment, which comprises:
at least one memory and at least one processor;
the memory is used for storing one or more programs;
the one or more programs, when executed by the at least one processor, cause the at least one processor to implement the steps of a method of capturing photo supplemental image information in a dim light environment as described above.
For the device embodiments, reference is made to the description of the method embodiments for the relevant points, since they essentially correspond to the method embodiments. The above-described apparatus embodiments are merely illustrative, wherein the components illustrated as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the disclosed solution. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
Embodiments of the present application also provide a computer-readable storage medium,
the computer readable storage medium stores a computer program which, when executed by a processor, implements the steps of a method of taking photo supplemental image information in a dim light environment as described above.
Computer-usable storage media include both permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The above examples illustrate only a few embodiments of the invention, which are described in detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention.

Claims (7)

1. A method for taking a picture in a dim light environment to supplement image information, comprising the steps of:
acquiring an original RAW image shot by a camera;
preprocessing the RAW image, and identifying useful live-action information in the RAW image; the useful live-action information is image information that is unclear but has an image outline;
performing edge extraction on the useful live-action information by using an edge detection algorithm to obtain useful live-action information after separation processing; using a style conversion network to convert the useful live-action information after separation processing into a daytime highlight scene style, and obtaining useful live-action information after enhancement processing;
obtaining the scene of the image according to the type of the useful live-action information;
freely generating supplementary virtual image information corresponding to a scene of the image according to the scene;
and synthesizing the useful real scene information subjected to enhancement processing and the supplementary virtual image information to obtain a night scene image.
2. The method of taking supplemental image information of a photograph in a dim light environment according to claim 1, wherein the freely generating supplemental virtual image information corresponding to a scene of the image based on the scene comprises:
and performing image information supplementation, including color supplementation and texture supplementation, by using a picture color and texture compensation network trained under the corresponding scene, and generating the supplementary virtual image information.
3. The method of taking supplemental image information of a photograph in a dim light environment according to claim 1, wherein the freely generating supplemental virtual image information corresponding to a scene of the image based on the scene comprises:
determining a filter corresponding to the background image according to the scene of the image;
and superposing the filter on the original image to generate supplementary virtual image information.
4. The method of taking supplemental image information of a photograph in a dim light environment according to claim 1, wherein the freely generating supplemental virtual image information corresponding to a scene of the image based on the scene comprises:
matching the mapping materials in the material library according to the scene of the image;
and superposing the mapping material on the original image to generate supplementary virtual image information.
5. An apparatus for taking a photograph of supplemental image information in a dim light environment, comprising:
the image acquisition module is used for acquiring an original RAW image shot by the camera;
the useful live-action information identification module is used for preprocessing the RAW image and identifying useful live-action information in the RAW image; the useful live-action information is image information that is unclear but has an image outline;
the useful live-action information enhancement module is used for carrying out edge extraction on the useful live-action information by using an edge detection algorithm to obtain useful live-action information after separation processing; using a style conversion network to convert the useful live-action information after separation processing into a daytime highlight scene style, and obtaining useful live-action information after enhancement processing;
the background image information generation module is used for obtaining the scene of the image according to the type of the useful live-action information; freely generating supplementary virtual image information corresponding to a scene of the image according to the scene;
and the synthesis module is used for synthesizing the useful live-action information subjected to the enhancement processing and the supplementary virtual image information to obtain a night-scene image.
6. An electronic device, comprising:
at least one memory and at least one processor;
the memory is used for storing one or more programs;
the one or more programs, when executed by the at least one processor, cause the at least one processor to perform the steps of a method of taking photo supplemental image information in a dim light environment as claimed in any of claims 1-4.
7. A computer-readable storage medium, characterized by:
the computer readable storage medium stores a computer program which, when executed by a processor, implements the steps of a method of taking photo supplemental image information in a dim light environment as claimed in any of claims 1-4.
CN202210766024.2A 2022-07-01 2022-07-01 Method and device for taking photo supplementary image information in dim light environment Active CN115334234B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210766024.2A CN115334234B (en) 2022-07-01 2022-07-01 Method and device for taking photo supplementary image information in dim light environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210766024.2A CN115334234B (en) 2022-07-01 2022-07-01 Method and device for taking photo supplementary image information in dim light environment

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202410320303.5A Division CN118264919A (en) 2022-07-01 Method and device for taking photo supplementary image information in dim light environment

Publications (2)

Publication Number Publication Date
CN115334234A CN115334234A (en) 2022-11-11
CN115334234B (en) 2024-03-29

Family

ID=83917071

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210766024.2A Active CN115334234B (en) 2022-07-01 2022-07-01 Method and device for taking photo supplementary image information in dim light environment

Country Status (1)

Country Link
CN (1) CN115334234B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872526A (en) * 2010-06-01 2010-10-27 重庆市海普软件产业有限公司 Smoke and fire intelligent identification method based on programmable photographing technology
CN107241559A (en) * 2017-06-16 2017-10-10 广东欧珀移动通信有限公司 Portrait photographic method, device and picture pick-up device
CN108764370A (en) * 2018-06-08 2018-11-06 Oppo广东移动通信有限公司 Image processing method, device, computer readable storage medium and computer equipment
EP3435284A1 (en) * 2017-07-27 2019-01-30 Rockwell Collins, Inc. Neural network foreground separation for mixed reality
WO2020038087A1 (en) * 2018-08-22 2020-02-27 Oppo广东移动通信有限公司 Method and apparatus for photographic control in super night scene mode and electronic device
WO2020119082A1 (en) * 2018-12-10 2020-06-18 Zhejiang Dahua Technology Co., Ltd. Systems and methods for image acquisition
WO2021057277A1 (en) * 2019-09-23 2021-04-01 华为技术有限公司 Photographing method in dark light and electronic device
CN113449572A (en) * 2020-03-27 2021-09-28 西安欧思奇软件有限公司 Face unlocking method and system in dim light scene, storage medium and computer equipment thereof
WO2021204202A1 (en) * 2020-04-10 2021-10-14 华为技术有限公司 Image auto white balance method and apparatus
CN114422682A (en) * 2022-01-28 2022-04-29 安谋科技(中国)有限公司 Photographing method, electronic device, and readable storage medium


Also Published As

Publication number Publication date
CN115334234A (en) 2022-11-11


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant